	The previous chapter showed how to suppress an exponential cost for
	processing semantics that generate phrases in another language.
	We defined what OR-blocks should do, and what shared blocks should do,
	to suppress exponential blowup.

	  We now concentrate on the last language in a sequence, such as the
	  datatype language in our case.  We must decide what OR-blocks should
	  do in the semantics of the datatype grammar, as well as what to do
	  at shared blocks, again in order to supress exponential or even
	  infinite blowup.

	  Unlike the previous chapter, we place no restrictions on the grammar,
	  which may now have cyclic coercions like:

		    INT	->	  REAL
		    REAL	->	  INT

What OR-Blocks Should Do In The Semantics Of The ~Datatype ~Grammar

	  Ambiguities from the syntax grammar generate ambiguous phrases in the
	  datatype language, as we've just seen.  All ambiguities, both
	syntactical and those arising from the datatype grammar, are
	represented as ambiguous phrases in the datatype language.

	All the ambiguities that survive thru datatype processing are actual
	ambiguities.  They've survived both syntax and datatype processing.

	  We will ~disambiguate ~by ~common ~sense or ~resistance among those
	  surviving ambiguities.  That is, we will choose a meaning that
	  involves a minimal number of type conversions.

	  From Section 15.2, we learned that surviving ambiguities wind
	  up as OR-blocks in the final semantic structure.

	  We now consider what OR-blocks ~in ~the ~datatype ~language's
	~semantics should do.  The previous chapter showed what OR-blocks are
	supposed to do within the semantics of the syntax grammar.

Disambiguation By Common Sense, Or Resistance

	Of all possible meanings, common sense would choose the simplest
	alternative.  The simplest alternative minimizes the number of
	conversions from one type to another.

	For example, given the datatype rules:

		~INT	->	~REAL
		~REAL	->	~INT

	how would you expect the coercions to apply to turn an INT into a
	REAL?  You probably would pick one application of the first coercion.

	However, there are more possible interpretations.  For example, an INT
	could be turned into a REAL via ~two applications of the first rule
	and one application of the second rule, as in:

		~INT	->	~REAL	->	~INT	->	~REAL

	Clearly, the solution which involves three coercion applications is
	less desirable than the direct approach, which applies only one
	coercion.

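	Counting coercion applications amounts to a shortest-path question
	over the coercion graph.  The following is a minimal Python sketch,
	not the book's notation; the names (COERCIONS, min_coercions) are
	ours for illustration only:

```python
from collections import deque

# The cyclic coercions INT -> REAL and REAL -> INT, as a graph.
COERCIONS = {"INT": ["REAL"], "REAL": ["INT"]}

def min_coercions(src, dst):
    """Breadth-first search: the fewest coercion applications
    that turn a src datatype into a dst datatype."""
    seen = {src: 0}
    queue = deque([src])
    while queue:
        t = queue.popleft()
        if t == dst:
            return seen[t]
        for u in COERCIONS.get(t, []):
            if u not in seen:
                seen[u] = seen[t] + 1
                queue.append(u)
    return None
```

	The direct interpretation INT -> REAL costs one application, while
	the detour INT -> REAL -> INT -> REAL costs three, so common sense
	keeps the cost-one path.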
Another Example

	For another example, consider the datatype expression (which we show in
	the infix notation for convenience):

		~INT  +  ~INT

	Consider the datatype grammar (which we also show in the infix
	notation):

		~INT  +  ~INT		->	~INT
		~REAL +  ~REAL		->	~REAL

		~INT		->	~REAL

	There are two distinct ways by which the ~INT+INT can be interpreted
	as a REAL:

	   a)	The two INTs can be summed, using integer addition (the first
		rule), resulting in an INT.  That INT is then turned into a
		REAL (the third rule).

	   b)	Each of the two INTs can be turned into REALs (by two
		applications of the third rule).  Those REALs
		are then summed using floating-point addition, resulting in
		a REAL (the second rule).


	These are illustrated in figure 28.1(a and b).

	If we choose to minimize the total number of coercions employed, then
	interpretation (a), which uses only one coercion, will win over
	interpretation (b), which uses two coercions.
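
	The coercion counts for the two interpretations can be checked
	mechanically.  The following Python sketch (our own tree encoding,
	not the book's semantic blocks) scores an interpretation by the
	number of coercion nodes it contains:

```python
# An interpretation is a tree: (kind, children).  Its resistance is
# the number of "coerce" nodes it contains.
def resistance(node):
    kind, children = node
    total = sum(resistance(c) for c in children)
    return total + 1 if kind == "coerce" else total

INT = ("leaf", [])

# (a): integer addition first, then coerce the INT sum to REAL.
interp_a = ("coerce", [("add", [INT, INT])])

# (b): coerce each INT to REAL, then floating-point addition.
interp_b = ("add", [("coerce", [INT]), ("coerce", [INT])])
```

	Interpretation (a) scores 1 and interpretation (b) scores 2, so
	(a) wins.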

	Figure 28.1(c) shows the CHOICE_OF_PHRASES structure for our example
	~INT+INT.  There appear to be two full-spanning REALs, one for each
	of our two interpretations.  In fact, there is only one full-spanning
	REAL, but its semantics are ambiguous (see Section 15.2).


	Figure 28.2 shows the ambiguous semantics under the one full-spanning
	REAL.  The figure has two pointers labelled A and B, which show the
	two interpretations under the REAL.  They are combined via the
	OR-block.

Associating A Resistance With Each Block

	We introduce resistance into the semantic structure by associating
	a resistance with each block.  The resistance of a block is the ~sum
	of the resistances of the blocks that it references below.  Blocks at
	the bottom of the structure have resistance zero.

	So far, all blocks have resistance zero.  However, a coercion
	introduces a resistance of one.  That is, a semantic block that is
	the result of a coercion gets as its resistance ~one ~plus the
	resistance of the block to which it points below.

	Figure 28.2 includes with each block its resistance.  Interpretation A
	has a resistance of one whereas interpretation B has resistance two.

	To choose an interpretation that minimizes resistances, each OR-block,
	like the one in figure 28.2, chooses the interpretation below having
	minimum resistance.

	In our example, the OR-block chooses its left alternative (A) and
	ceases to reference the more resistive alternative (B).  The OR-block
	thus ceases to represent two choices.  It becomes an ~identity.  The
	invocation of the disambiguated OR-block merely invokes its one
	remaining interpretation, A.

	If an OR-block's two interpretations have identical resistances, the
	OR-block remains ambiguous, still holding onto both of its
	interpretations.

		    BOX:	What does it mean to disambiguate by common sense?
		    BOX:	What are the two interpretations for rendering a REAL
		    BOX:	from the expression "INT+INT"?
		    BOX:	Which interpretation will be chosen after we
		    BOX:	disambiguate?
		    BOX:	How many ways are there to render "-INT" as a REAL,
		    BOX:	assuming that "-" is defined for both INTs and REALs?

Implementation Of Resistance

	  How do we implement this disambiguation by resistance?  We need some
	  way to ask a semantic block what its resistance is.

	  We make each semantic block perform two tasks.  One is to deliver its
	  resistance and the other is to act out its original meaning that we
	  expect of the semantic block.

	Let's introduce a global variable that dictates to a semantic block
	which of the two tasks to perform:

		VAR  COMPUTE_RESISTANCE = BOOL;

	A TRUE value demands a resistance from each semantic block, whereas a
	FALSE demands the original semantic action.  When a resistance is
	demanded, we expect the semantic block to put the computed resistance
	into the following global variable:

		VAR  THE_RESISTANCE = REAL;

	For example, the rule we used to write as:

		<REAL:X>  <REAL:Y>  <PLUS>	->

			<REAL:   INSTRUCTION(270,X,Y);    >

	must now be implemented as:

		<REAL:X>  <REAL:Y>  <PLUS>	->

			<REAL:	BEGIN  VAR  RES = REAL;

				IF  COMPUTE_RESISTANCE  THEN

					<*X*>;	RES:= THE_RESISTANCE;
					"The resistance of X goes into RES..."

					<*Y*>;	"Add in Y's resistance..."

					THE_RESISTANCE::= + RES;

				ELSE	INSTRUCTION(270,X,Y);	FI

				END	>

	  When COMPUTE_RESISTANCE is TRUE, we invoke each of X and Y so as to
	  have each place its resistance into THE_RESISTANCE.	 We form their
	  sum and leave it in THE_RESISTANCE, as we are supposed to do.  Recall
	that the resistance of a block is the sum of the resistances of the
	blocks it references below.
	  In the other case, when COMPUTE_RESISTANCE is FALSE, we perform as we
	  did originally, calling INSTRUCTION.

	We will take the liberty of writing this all more simply as:

		    <REAL:X>  <REAL:Y>	<PLUS>	->

					  RESIST =>	 X+Y
					  USUAL  =>	 INSTRUCTION(270,X,Y)

	  This shows the essential information most clearly.	The resistance
	  is X+Y, and the USUAL action calls INSTRUCTION.
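
	The RESIST/USUAL split can be mimicked in Python with a mode flag
	and closures.  This is only an illustrative analogue, not the book's
	notation; the dictionary `state` stands in for the globals
	COMPUTE_RESISTANCE and THE_RESISTANCE, and `emitted` stands in for
	emitted machine instructions:

```python
# Each semantic block is a zero-argument function; a flag selects
# between delivering a resistance and performing the usual action.
state = {"compute_resistance": False, "the_resistance": 0}
emitted = []

def number_block(n):
    def block():             # RESIST => 0 ;  USUAL => load the number
        if state["compute_resistance"]:
            state["the_resistance"] = 0
        else:
            emitted.append(("LOAD", 1, n))
    return block

def coerce_block(x):
    def block():             # RESIST => X+1 ;  USUAL => convert
        if state["compute_resistance"]:
            x()
            state["the_resistance"] += 1
        else:
            x()
            emitted.append(("INSTRUCTION", 300))
    return block

def plus_block(x, y):
    def block():             # RESIST => X+Y ;  USUAL => add
        if state["compute_resistance"]:
            x()
            rx = state["the_resistance"]
            y()
            state["the_resistance"] += rx
        else:
            x()
            y()
            emitted.append(("INSTRUCTION", 270))
    return block
```

	With the flag on, invoking the top block leaves the total resistance
	in the THE_RESISTANCE stand-in; with it off, the same invocation
	emits the usual instructions.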

	  For another example, let's revisit how a NUMBER becomes an EXPR:

		    <NUMBER:N>	  ->

					  RESIST =>	 0
					  USUAL  =>	 LOAD( 1, ADDRESS_OF(N));

	  The INT's resistance is zero.  Its USUAL action is to load the number
	  into register 1.

	  Here is an example coercion, which introduces non-zero resistance:

		    <INT:X>		  ->	    <REAL:
							  RESIST =>	 X+1
							  USUAL  =>	 INSTRUCTION(300,X);	>

	  The resistance of the resulting REAL is one greater than that of the
	  given INT.

Resistance At The OR-Block

	  Let's modify the OR-block so as to be able to deliver a resistance
	  like other blocks do, and to disambiguate based on resistances.

	  Recall that ~within ~the ~function ~GIVE, we assigned into OR the OR-
	  block as follows:

		    OR:=  //[A;B;]  F(A,B);	\\ ;

	  (We used the names NEW and COPY_OLD in place of A and B).	 F was
	  left unspecified at the time.

	  In Section 25.5.4 we decided on what F should do for semantics which
	  generate phrases in another language (e.g., our syntax grammar),
	  resulting in this:

		    OR:= //[A;B;]
					  <*B*>;		\\ ;

	  This OR-block causes both alternatives A and B to generate their
	  phrases, resulting possibly in ambiguous phrases.

	  In contrast, for the semantics of the datatype grammar, where
	  resistances exist, we would like to assign into OR as follows:

	   OR:= //{A;B;}	BEGIN  VAR  RA,RB = REAL;

		IF  DEFINED(B)  "We haven't resolved this ambiguity yet"
		THEN

		   IF  COMPUTE_RESISTANCE  THEN

			<*A*>;	RA:= THE_RESISTANCE;	"Get A's resistance
							 into RA"
			<*B*>;	RB:= THE_RESISTANCE;	"Get B's resistance"

			THE_RESISTANCE:= RA MIN RB;	"Our resistance is the
							 minimum of the two"

			IF  CERTAIN  "RA and RB are reliable"  THEN

			   "Pick a winner between A and B, and leave the
			    winner in A... "

			   IF  RA < RB  "A wins"  THEN  B:= NIL;

			   EF  RB < RA  "B wins"  THEN  A:= B;  B:= NIL;  FI

			   "A has the winner"

			FI

		   ELSE  "Our non-resistance (usual) action... "

			<*A*>;	"Act as identity.  B is ignored.  We may
				 wish to report an ambiguity.  We are
				 making an arbitrary choice."
		   FI

		ELSE  "Unambiguous"

		      <*A*>;

		FI					END  \\ ;

	The outer IF-THEN-ELSE enters the big THEN clause only if B is still
	defined.  B is always defined initially.  However, an earlier
	invocation of this OR-block might have already caused disambiguation,
	by putting NIL into B.

	If B isn't defined then we have no ambiguity at all.	Here, we
	  invoke A in the outer ELSE-clause.  A is our official meaning when
	  B isn't defined.

	B can be set to NIL in the outer THEN-clause.  We set B to NIL as
	soon as we can disambiguate, i.e., choose one meaning over the other.
	Focus now on the big, outer THEN-clause.

	The outer THEN-clause responds to COMPUTE_RESISTANCE properly.  If
	TRUE, it acquires the resistances of A and B, assigning those
	resistances into RA and RB.  The minimum of those two is our
	resistance (as we will retain only the interpretation having least
	resistance).

	We then put into A either A or B, whichever has the least resistance.
	(This action is done only if CERTAIN is TRUE.  So far, it's always
	TRUE.  It may become FALSE only when we consider cycles (soon to
	come).

	  In the inner ELSE clause, when we are not computing resistances, we
	  invoke our favorite A, so that this whole OR-block comes to act exactly
	  like A alone does.

	  If A and B have identical resistances, then we make an arbitrary
	  choice by always choosing A.
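
	The OR-block's behavior can be sketched the same way in Python.
	Again, this is our own illustrative stand-in, with CERTAIN held
	TRUE, as it is so far:

```python
# On a resistance pass the OR-block reports the minimum of its two
# alternatives and, when certain, drops the more resistive one; once
# disambiguated (or on a usual pass) it acts as an identity on A.
state = {"compute_resistance": False, "the_resistance": 0, "certain": True}

def leaf(resist, tag, log):
    def block():
        if state["compute_resistance"]:
            state["the_resistance"] = resist
        else:
            log.append(tag)
    return block

def or_block(a, b):
    alts = {"a": a, "b": b}
    def block():
        if alts["b"] is None:                 # disambiguated: identity
            alts["a"]()
        elif state["compute_resistance"]:
            alts["a"]()
            ra = state["the_resistance"]
            alts["b"]()
            rb = state["the_resistance"]
            state["the_resistance"] = min(ra, rb)
            if state["certain"]:
                if ra < rb:
                    alts["b"] = None          # A wins
                elif rb < ra:
                    alts["a"], alts["b"] = alts["b"], None  # B wins
        else:
            alts["a"]()                       # still ambiguous: pick A
    return block
```

	After one resistance pass the OR-block has clipped its heavier
	alternative, so the subsequent usual pass invokes only the lighter
	one.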

Differing Meanings For OR-Blocks

	  So far, we want OR-blocks to behave one way during the semantics of
	  the syntax grammar, and another way during the semantics of the
	  datatype grammar.

	  OR-blocks are created inside of GIVE.  We will need
	  to introduce an IF-THEN-ELSE in GIVE so as to pick the different
	  desired OR-block behavior in each case.	 In fact, this is generalized
	  so that an OR-action can be dictated on a ~per ~part-of-speech basis.

		    BOX:	What roles do the variables COMPUTE_RESISTANCE and
		    BOX:	THE_RESISTANCE play?
		    BOX:	What is the resistance of an OR-block?
		    BOX:	What must be true for an OR-block to disambiguate?

What We Do At Shared Blocks ~In ~The ~Semantics ~Of ~The ~Datatype ~Grammar

	  Recall that a semantic structure can represent exponentially many
	  meanings in only polynomial memory.  The invocation of the semantic
	  structure will incur an exponential cost unless we suppress all but the
	  first invocation of any block.  This suppression achieves a polynomial
	  cost.

	  Shared blocks are the only blocks that could be invoked more than once,
	  since each has more than one pointer pointing to it.

	  For the computation of resistance, let's remember our computed
	resistance upon the first invocation.  For subsequent invocations, we
	will immediately deliver our previously computed resistance.

	We hold onto a variable called LAST_TIME, as we did in Section 27.2.2,
	to allow our shared blocks to be reset:

	   LAST_TIME:= 0;

	   @(S):= //{ MY_RESIST; LAST_TIME; ORIGINAL; }

		IF  COMPUTE_RESISTANCE  THEN

		   IF  LAST_TIME < PRESENT_TIME  THEN  "First encounter..."

			LAST_TIME:= PRESENT_TIME;	"We will be up-to-date"

			<*ORIGINAL*>;  "Continue downward thru the semantic
					structure, so as to compute our
					resistance..."

			MY_RESIST:= THE_RESISTANCE; "Remember newly computed
						     resistance.
						     Announce my resistance... "

		   ELSE  THE_RESISTANCE:= MY_RESIST;  FI

		ELSE  "The usual action.  (We act as the identity)..."

		      <*ORIGINAL*>;

		FI   \\ ;

	We first set ORIGINAL to be a copy of S, and LAST_TIME to be zero,
	as we had done before for shared blocks in the previous chapter.

	We then respond to COMPUTE_RESISTANCE by ultimately setting
	THE_RESISTANCE.  If this is a non-first encounter (LAST_TIME =
	PRESENT_TIME), we enter the inner ELSE-clause and set THE_RESISTANCE
	to our previously computed resistance (MY_RESIST).  In contrast, we
	enter the inner THEN-clause if this is our first encounter.  There, we
	invoke ORIGINAL so as to compute our resistance.  That result will
	reside in THE_RESISTANCE, and we remember that value in MY_RESIST.

	If this overall assignment, into "@(S)", is applied to all shared
	blocks in a semantic structure, we will have achieved the resistance
	computation in only polynomial time, even if there are exponentially
	many meanings represented.
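
	The shared block's memoization can be sketched in Python (our own
	stand-ins for the book's LAST_TIME, PRESENT_TIME, and MY_RESIST
	variables):

```python
# The expensive downward computation runs only on the first
# encounter per trip; later encounters reuse the remembered value.
state = {"present_time": 1, "the_resistance": 0}
calls = {"original": 0}

def shared(original):
    memo = {"last_time": 0, "my_resist": 0}
    def block():
        if memo["last_time"] != state["present_time"]:  # first encounter
            memo["last_time"] = state["present_time"]
            original()                                  # compute downward
            memo["my_resist"] = state["the_resistance"]
        else:                                           # revisit: reuse
            state["the_resistance"] = memo["my_resist"]
    return block

def costly_original():
    calls["original"] += 1
    state["the_resistance"] = 7
```

	Incrementing the PRESENT_TIME stand-in invalidates the memo, so the
	next trip recomputes from scratch.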

		BOX:	What does a shared block do if LAST_TIME is not equal
		BOX:	to PRESENT_TIME?
		BOX:	What does it do if those two variables are equal?
		BOX:	What value is in MY_RESIST initially?

	We've considered almost all kinds of semantic structures.  The one
	  last topic to consider is ~cycles in the semantic structure.

	  We now consider how to cope with cycles.  In summary, Section 28.4.5
	  shows the slight modification required for cycles at shared blocks.
	  A subsequent section shows what we need to do at the top of a
	  semantic structure to deal with cycles.

	  How can there be a cycle in a semantic structure?  Consider the
	  expression "5".  Suppose we have the two coercions between INT and
	  REAL:
		    ~INT	->	  ~REAL
		    ~REAL	->	  ~INT

	  There are many ways by which "5" can be turned into the REAL "5.0":

	     1)   INT "5"	  ->	 REAL "5.0"

	     2)   INT "5"	 ->  REAL "5.0"  ->  INT "5"	->  REAL "5.0"

	     3)   INT "5"	 ->  REAL  ->  INT  ->	REAL	->  INT  ->	 REAL "5.0"

	  In fact there are infinitely many ways to turn the INT into a REAL.


	  Even with the possibility of infinitely many meanings, the semantic
	  structure that contains all of them consumes only finite (polynomial)
	  memory.  How is that possible?  Figure 28.3(b) shows a finite
	  structure that represents infinitely many meanings.

	  The second to topmost block is the semantic block that represents the
	  REAL.  Below that block is an OR-block that offers two meanings for
	  the INT.	One of those INT meanings is directly the number 5.

	  The other choice refers back to the top of the figure.  If you follow
	  this option, you will have to go thru the top INT, which was derived from
	  the REAL just below it, which is itself derived from the OR-block at
	  the bottom, an INT.

	  You can now make a right turn at the OR-block and terminate the trip
	  at the "5" block, or you can make a left turn and travel once more
	  thru the top two blocks.

	  Whenever you make a left turn, you go around the cycle once again.
	  Your trip is finished as soon as you take a right turn.

	  In general, N left turns introduce N transitions from INT to REAL
	  and back to INT.

	  Oddly enough, the arrows in the diagram point in the ~opposite
	  direction from the arrows in our coercions.  The arrows in the diagram
	  show that a block ~depends ~on the blocks to which it points.  For
	  example, the coercion:

		    ~REAL	->	  ~INT

	  builds an INT block that points back to the REAL block.

How Cycles Are Formed

	  Figure 28.3 illustrates how a cycle can be created within a semantic
	  structure.  Part (a) shows the original INT (5) at the bottom, along
	  with the REAL derived from that INT (in the middle).  This REAL block
	  is generated by the application of the coercion:

		    ~INT	->	  ~REAL

	  With the creation of the REAL, the other coercion applies:

		    ~REAL	->	  ~INT

	  Consider what happens when this new INT is generated (GIVEn).

	  GIVE takes the new INT and is about to introduce it as a new PHRASE
	  block on the CHOICE_OF_PHRASES C (not shown in figure).
	  However, GIVE sees that an INT already exists (the original INT 5),
	  sharing the same span with the newly proposed INT.	GIVE therefore
	  does not introduce the second INT onto C.  Rather, it takes the two
	  INTs' meanings and ties them together with an OR-block (figure
	  28.3(b)).

	As shown in Section 15.2, GIVE objectively modifies the
	original INT's semantics.  It uses the "@" operator to place the
	OR-block directly over the original INT's semantics.  Figure 28.3(b)
	shows the result.  The OR-block now sits where the original INT's
	semantics used to be.
	  As discussed in Section 10.2.6, cycles can arise only from the use of
	  objective modification.  Our one use of objective modification can
	  indeed introduce a cycle, and here it does.

Any Cycle Contains An OR-Block

	  The cycle we've seen has an OR-block in it.  All semantic cycles do.
	Recall that a cycle comes to exist only upon an objective modification
	(Section 10.2.6).  All our objective modifications involve OR-blocks
	(inside GIVE).  Thus, the moment a cycle comes to exist, it has an
	OR-block within it.

	OR-blocks provide for the escape from cycles.  In our previous cycle
	example, a right turn at the OR-block led to the escape from the cycle.

		BOX:	What causes cycles in a semantic structure?
		BOX:	Can you think of one rule which by itself causes a
		BOX:	cycle?
		BOX:	How does a cycle represent infinitely many meanings?
		BOX:	Any cycle contains at least one of what kind of block?
All Cycles Are Built Only Out Of Coercions, Besides OR-Blocks

	A cycle can exist in a semantic structure when one or more rules can
	be applied infintely many times.  The only kind of rules that can
	apply infinitely many times are coercions.  All non-coercion rules in
	a context-free grammar map ~two ~or ~more items to one item, as in:

		~INT  +  ~INT		->	~INT

	Such a non-coercion rule performs a rewrite by erasing ~more than
	what it puts back into the string.  That is, each application of a non-
	coercion rule shortens the string.  Since the original string given to
	parse is finite, non-coercion rules can apply, and hence shorten the
	string, only a finite number of times.

	Therefore, cycles can contain no non-coercion rules.  A cycle can
	contain only coercions and OR-blocks.

We Never Want To Travel The Entirety Of A Cycle

	Because a cycle consists only of coercions and OR-blocks, consider the
	resistance encountered by travelling around the entire cycle once.
	Each coercion has a resistance of one, and so if there are N (coercion)
	blocks in the cycle, a complete trip around the cycle encounters a
	resistance of N.


	Any overall trip that goes around an entire cycle is more resistive
	than the same trip that avoids the round trip.  For example, in going
	from Los Angeles to New York, the shortest trip will not pass thru any
	given city more than once.  Figure 28.4(a) shows a cycle.  Part (b)
	shows a trip that travels thru the entire cycle.  Part (c) shows a
	less resistive trip.
Cycle Cutting Occurs At OR-Blocks

	A ~solution to a semantic structure is an association of resistances
	to blocks, and the clipping of all cycles.  A cycle is clipped by
	cutting off one of the two pointers emanating from an OR-block.


	Figure 28.5 shows a cycle with resistances shown.  There are two
	OR-blocks in the cycle.  The lower-right OR-block's two pointers point
	  to paths having resistances 22 and 40.	This OR-block thus chooses to
	  keep the 22 pointer, and cut the 40 pointer.

	  The leftmost OR-block sees its two choices having resistances 23 and
	  20.	 This OR-block chooses to keep the 20, and clip the 23.  It is
	  this clipping which cuts a cycle.

	  We prove that any cycle will be clipped, assuming of course that we
	  can solve for resistances correctly:

		    Pick any cycle.  Not all blocks in the cycle have the same
		    resistance.  A cycle contains at least one coercion, and that
		    coercion, like any coercion, has resistance one greater than
		    the block to which it points (e.g., the rightmost block in
		    figure 28.5).

		    Thus the coercion block and the block to which it points have
		    different resistances.

		    Pick a block in the cycle that has minimal resistance.	Go
		    forward in the cycle until the next block differs in
		    resistance.  That second block has resistance strictly greater
		    than the first block's minimal resistance, by assumption.

		In figure 28.5, the block with lowest resistance (20) is
		the OR-block on the left.  It points within the cycle to a
		block of greater resistance, 23.

		Our chosen block of least resistance must be an OR-block,
		because the only other kind of block, a coercion block, would
		itself point to a block of resistance even less than the
		minimum.  (That is impossible, by assumption).

		For this OR-block to have minimum resistance, the exit pointer
		(the pointer that leaves the cycle) must itself be of our
		minimal resistance.  This is so, because the other pointer
		(into the cycle) references a block having resistance strictly
		greater than our resistance.

		This inequality, between the exit pointer's minimum
		    resistance and the other pointer's strictly greater resistance,
		assures that the pointer into the cycle will be clipped, when
		the OR-block ultimately disambiguates (chooses one or the other

		We have shown that this cycle will be clipped ~at ~least at
		the chosen OR-block.

		BOX:	Why must a cycle consist only of OR-blocks and
		BOX:	coercions?
		BOX:	How do you reduce the resistance of a trip that goes
		BOX:	the entirety of a cycle?
		BOX:	Why, assuming that resistances are computed correctly,
		BOX:	will any cycle be clipped?
Cycles Are Entered At Shared Blocks

	Back before we considered cycles (Section 27.4), we introduced a step
	that was to be performed ~prior to invoking a semantic structure.
	That step discovered all shared blocks, and modified them objectively
	so that we could implant a desired behavior at all shared blocks.


	Shared blocks play an important role within cycles.  Any cycle that
	you can get to contains at least one shared block.  Figure 28.6(a)
	shows a cycle, and 28.6(b) shows a cycle we can get to.  The block at
	which we enter a cycle is a shared block.  It is pointed to from two
	places:  The block before it in the cycle points to it, and also
	our entry pointer points to it.

More Complex Cycle Examples


	Figure 28.7(a) shows the semantic structure that arises from the two
	pairs of coercions:

		A  ->  B
		B  ->  A

		A  ->  C
		C  ->  A

	There are two cycles here.  Part (b) shows a path entering thru the
	C block.  That path represents the coercion

		A  ->  C

	Figure 28.7(c) shows the same but with one more coercion:

		X  ->  B

	Figure 28.7(d) shows a path thru the two cycles that represents three
	coercion applications:

		X  ->  B  ->  A  ->  C

Cyclic Encounters


	Figure 28.8(a) shows in detail how we traverse a tree structure with
	a shared block.  For purposes of illustration, we will use ~three
	~states now to denote the status of each block:

		0  =>	~Never ~been ~here ~before
		1  =>	~Busy
		2  =>	~Done

	A block starts off in state 0, having not yet been seen.  Upon
	encountering a block, we set the block's state to 1.	State 1 means
	  ~busy.  We have ~begun processing at that block but we are as yet not
	  finished processing that block.

	  The first frame in figure 28.8(a) shows a semantic structure at whose
	  top we've just arrived.  That block enters state 1, and all the blocks
	below, which haven't been encountered yet, are still in state 0.
	  The second frame shows us encountering another block.  Its state is set
	  to 1.  The third frame shows us touching one of the bottom blocks in
	  the structure.

	  The fourth frame shows that we've finished processing that bottom
	block.  Its state winds up 2, since we've finished processing it.

	  So far, every block we've encountered is initially in state 0.  The
	ninth frame shows us for the first time encountering a block which is
	already in state 2.  This implies we've been there before.	We have
	  thus discovered a shared block.

	  This is all exactly the same as the way we detected shared blocks
	  before.  Our states correspond to a block's ~markedness as follows:

		0  =>	~Never ~been ~here ~before.  IT IS NOT MARKED
		1  =>	~Busy			     IT IS MARKED
		2  =>	~Done			     IT IS MARKED

	Part b of figure 28.8 shows what happens when we encounter a cycle.
	We begin travelling the cycle, putting encountered blocks into state
	1.  The fourth frame in figure 28.8(b) shows us encountering a block
	in state 1!  This is our first encounter with a block in state 1.
	We have detected a cycle.  We call this a ~cyclic ~encounter.  We're
	entering for a second time a block whose processing is as yet not
	finished.

		    BOX:	How do you detect a cyclic encounter?
		    BOX:	How do you detect a shared block?
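
	  The three-state traversal is an ordinary depth-first search with
	  colors.  A Python sketch follows (the graph encoding and the name
	  `classify` are ours):

```python
# States: 0 = never seen, 1 = busy, 2 = done.  Meeting a state-2
# block reveals a shared block; meeting a state-1 block is a cyclic
# encounter.
def classify(succ, root):
    state = {}
    shared, cyclic = set(), set()
    def visit(n):
        s = state.get(n, 0)
        if s == 1:
            cyclic.add(n)
        elif s == 2:
            shared.add(n)
        else:
            state[n] = 1                 # busy
            for m in succ.get(n, []):
                visit(m)
            state[n] = 2                 # done
    visit(root)
    return shared, cyclic
```

	  A diamond-shaped structure exposes a shared block; a loop exposes
	  a cyclic encounter.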
A Cyclic Encounter While Computing Resistance

	  Recall our process at shared blocks S.	That process appears
	  immediately following the "@(S):=" in:

		    LAST_TIME:= 0;

		    @(S):= //{ MY_RESIST; LAST_TIME; ORIGINAL; }

			IF  COMPUTE_RESISTANCE  THEN

			   IF  LAST_TIME < PRESENT_TIME  THEN
				LAST_TIME:= PRESENT_TIME;
				<*ORIGINAL*>;
				MY_RESIST:= THE_RESISTANCE;
			   ELSE  THE_RESISTANCE:= MY_RESIST;  FI

			ELSE	<*ORIGINAL*>;	FI   \\ ;

	  Let's focus on the action we take while COMPUTE_RESISTANCE is TRUE:

		IF  LAST_TIME < PRESENT_TIME  THEN

			LAST_TIME:= PRESENT_TIME;
			<*ORIGINAL*>;
			MY_RESIST:= THE_RESISTANCE;

		ELSE	THE_RESISTANCE:= MY_RESIST;	FI

	Figure 28.9(a) shows a cycle.  It is entered at its top block.  We
	enter the cycle and then travel around the cycle until our entry block
	is reached again.  This is a cyclic encounter because upon first entry
	into the cycle, we ~began to compute its resistance, but hadn't
	  finished it yet.

	  Let's reconsider this action in more detail.

	When we entered the top block, we began to compute its resistance,
	e.g., by invoking a block:

		<*ORIGINAL*>;

	It is this invocation that causes the second block (below and to the
	left of the top block) to compute its resistance.  Its resistance is
	computed by first obtaining the resistance of both of the blocks it
	points to.  That causes the third block in the cycle to be invoked, to
	find its resistance.  Eventually, the top block is encountered again.

	The computation of resistance occurs as we return from our travels:

		The top block delivers infinity (MY_RESIST) as its resistance
		upon this second encounter.  Figure 28.9(a) shows the
		resistance delivered by each block in the cycle.  The top
		block yields infinity, and so the upper-right block also
		delivers infinite resistance, since its resistance is based on
		the top block.

		The bottom-right block computes its resistance like any OR-
		block does.  It takes the ~minimum of the resistances delivered
		by each of the two blocks it points to.  The one pointer from
		that OR-block that leaves the cycle has some resistance, say
		40.  That OR-block therefore delivers 40 as its resistance, the
		minimum of infinity and 40.

		Similarly, the leftmost OR-block delivers as its resistance 20,
		which is the minimum of 41 and 20, the resistances of the two
		blocks to which the OR-block points.  We finally return to the
		top block, with a computed resistance of 21.

		The top block in the cycle finally has as its resistance 21.
		That is indeed the minimum resistance obtainable by starting
		at the top block, no matter how one leaves the cycle.
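
	This walk can be reproduced in miniature.  The sketch below is our
	own Python encoding of the cycle of figure 28.9(a); the node names
	(top, orL, c, orR, exit20, exit40) are invented.  A cyclic encounter
	delivers the remembered resistance, initially infinity, instead of
	recursing forever:

```python
import math

KIND = {"top": "coerce", "orL": "or", "c": "coerce", "orR": "or",
        "exit20": "leaf", "exit40": "leaf"}
SUCC = {"top": ["orL"], "orL": ["exit20", "c"],
        "c": ["orR"], "orR": ["exit40", "top"]}

def trip(memo, entry):
    """One trip from `entry`; `memo` plays the role of MY_RESIST."""
    busy = set()
    def visit(n):
        if n in busy:                          # cyclic encounter
            return memo.get(n, math.inf)
        if KIND[n] == "leaf":
            return memo[n]
        busy.add(n)
        vals = [visit(m) for m in SUCC[n]]
        r = 1 + vals[0] if KIND[n] == "coerce" else min(vals)
        busy.discard(n)
        memo[n] = r
        return r
    return visit(entry)
```

	On the first trip the right OR-block sees infinity from the cyclic
	encounter and remembers 40; a second trip then propagates the top
	block's remembered 21 into it, as described above.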


	Note the following in this figure and figure 28.10.  Each ~entry
	points to a ~shared ~block, which is not shown in the figure.  The
	entered coercion block actually stands for both the coercion block
	and our new, inserted shared block which references it.  The
	resistance stored in the shared block (MY_RESIST) is shown on the
	interior of the coercion block.
Another Look

	Let's assume, as is true initially, that LAST_TIME is less than
	PRESENT_TIME.

	  Upon first entry, the THEN-clause executes, setting LAST_TIME to
	  PRESENT_TIME.  What happens upon the second, cyclic, encounter with
	  the top block?	On the second encounter, we find that LAST_TIME equals
	  PRESENT_TIME, and so we enter the ELSE-clause.

		    It is fortunate that we ~don't enter the THEN-clause again.
		The THEN-clause would invoke ORIGINAL again, thus
		initiating another trip around the cycle.  Upon the third
		encounter, the THEN-clause would be taken again, leading to
		infinitely many trips around the cycle.

	Within the ELSE-clause, which is taken upon the second encounter, we
	deliver as our resistance the value in MY_RESIST.  Unfortunately, that
	value is not final.  We've begun to compute our resistance, but we
	  haven't completed it by the time of this second encounter.

		BOX:	How is it that a block might deliver an inaccurate
		BOX:	resistance?
		BOX:	What keeps us from going around the cycle more than
		BOX:	once?

Not All Blocks Have Accurate Resistances

	As figure 28.9(a) shows, not all blocks in the cycle have accurate
	resistances.  The top block may now have an accurate resistance, 21,
	but before, when it was a cyclic encounter, it delivered back its
	incompletely computed resistance, MY_RESIST, which was infinity then.

	From the point of view of a second entry at this cycle (see the
	figure), inaccurate resistances are found.  That second entry is a
	shared block, which remembers its resistance from before, a 40.  That
	is not the least resistance for that block (see part (b) of the
	figure).

	Figure 28.9(b) shows the accurate resistances, assuming that the
	exits' resistances are accurate (20 and 40).  This is achieved by
	  making the same trip a second time.  We enter again at the top block,
	  and go around the cycle, just like last time.	 We have a cyclic
	  encounter again, at the top block.  This time, however, it yields 21
	  as its resistance instead of last time's infinity.  Figure 28.9(b)
	shows the new resistances arrived at as a result of our new second
	trip.  Part (c) shows a path of least resistance that includes part
	of this cycle.

	It appears that a second trip thru the entire semantic structure may
	render all blocks with accurate resistances.  This time, cyclic
	encounters appear to deliver accurate resistances.  However, figure
	28.10 shows an example which is not solved completely by two trips
	alone.  Part (d) shows for the first time that the block with an
	asterisk achieves its accurate resistance of 25.

	A shared block's computed resistance may differ from the resistance
	  it delivers at cyclic encounters, MY_RESIST.	This change in a block's
	resistance is a sign that other blocks may not yet get accurate
	resistances.  Let's change our activity at shared blocks, by inserting
	  the italics in the following:

		LAST_TIME:= PRESENT_TIME;

		<*ORIGINAL*>;

		~IF  ~MY_RESIST  ~<>  ~THE_RESISTANCE  ~THEN

			~NEED_ANOTHER_TRIP:= ~TRUE;

					  ~"Those ~blocks ~that ~depend ~on ~me
					    ~may ~be ~incorrect."
		~FI

		MY_RESIST:= THE_RESISTANCE;

	  Now, we set NEED_ANOTHER_TRIP to TRUE if our resistance changes.
Obtaining Accurate Resistances Everywhere

	  We obtain accurate resistances by making trips repeatedly until no
	  shared block changes:

		NEED_ANOTHER_TRIP:= TRUE;

		WHILE  NEED_ANOTHER_TRIP;
		DO	NEED_ANOTHER_TRIP:= FALSE;

			PRESENT_TIME::= + 1;

			<*TOP*>;	END

	  We invoke TOP repeatedly until no block's resistance changes
	throughout the entire semantic structure.  TOP is the semantic
	structure in question.  TOP may be the semantic structure acquired from
	the final rule application that completes the (datatype) parsing.

	We increment PRESENT_TIME prior to each trip so that all shared blocks
	will recompute resistances.  This is necessary in order to
	accommodate changes in other shared blocks' resistances.  (We do indeed
	  still need our notion of PRESENT_TIME, because during any ~one trip,
	  we want to execute shared structures only ~once).

	  Because faulty resistances can exist on all but the last trip, we hold
	  CERTAIN to FALSE.  Recall that CERTAIN is read by OR-blocks.  OR-
	  blocks will disambiguate (make one choice over the other) only when
	  CERTAIN is TRUE.  When no more trips are needed, (i.e.,
	  NEED_ANOTHER_TRIP is FALSE), we make one extra trip, this time with
	  CERTAIN held at TRUE.	 Disambiguation thus happens only when all
	  resistances are reliable.

	  In summary, given a semantic structure TOP, we compute resistance
	  and disambiguate based on those resistances by:




					  PRESENT_TIME:= PRESENT_TIME + 1;



	  TOP is now ready to be used for the original semantics.
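The whole computation — repeated trips until no resistance changes, then one final CERTAIN trip — can be run in Python over a toy structure containing one cycle.  The block model and all names here are assumptions for illustration, not the book's code:

```python
INFINITY = float("inf")

class Block:
    def __init__(self, kind, succs=(), cost=0):
        self.kind = kind          # "OR" takes a MINimum; anything else adds costs
        self.succs = list(succs)  # blocks this block points to
        self.cost = cost          # a coercion's own resistance contribution
        self.my_resist = INFINITY
        self.last_visit = 0
        self.choice = None        # filled in by an OR-block once CERTAIN holds

present_time, need_another_trip, certain = 0, False, False

def resistance(block):
    global need_another_trip
    if block.last_visit == present_time:     # cyclic (or shared) encounter:
        return block.my_resist               # deliver MY_RESIST
    block.last_visit = present_time
    if block.kind == "OR":
        subs = [resistance(s) for s in block.succs]
        new = min(subs)
        if certain:                          # disambiguate: keep cheapest branch
            block.choice = block.succs[subs.index(new)]
    else:                                    # a coercion (or leaf) block
        new = block.cost + sum(resistance(s) for s in block.succs)
    if new != block.my_resist:
        need_another_trip = True             # dependents may be incorrect
        block.my_resist = new
    return new

def solve(top):
    """Take trips until no resistance changes, then one final CERTAIN trip."""
    global present_time, need_another_trip, certain
    certain, need_another_trip = False, True
    while need_another_trip:
        need_another_trip = False
        present_time += 1
        resistance(top)
    certain = True
    present_time += 1
    resistance(top)
    return top.my_resist

# A cycle: OR -> coercion -> OR, plus an acyclic branch of resistance 3.
leaf = Block("LEAF", cost=3)
top = Block("OR")
back = Block("COERCION", succs=[top], cost=1)
top.succs = [back, leaf]
```

Here solve(top) yields 3, with top.choice set to the acyclic leaf; the cyclic branch settles at resistance 4 and is rejected.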
Will Taking Extra Trips Ever Terminate?

	  We must ask:  Will NEED_ANOTHER_TRIP ever wind up FALSE?	Yes.

	  For the following argument, let's assume that we set MY_RESIST
	not to infinity, but to a finite number greater than the number of
	blocks in the entire semantic structure.

	Consider that MY_RESIST, in each shared block, can change only by
	~decreasing in value.  We start it at "infinity".  ~All of our
	resistance computations involve only additions and MINimums.  There are
	no minus signs.  Therefore, if an input to a resistance computation
	drops in value, the result of the computation can either stay the same
	or drop in value.  The result of a resistance computation cannot ever
	increase.

	We now know that values in MY_RESIST can only decrease over time.
	Since all resistances are integers, and all solved resistances are non-
	negative, there can be only finitely many decreases.  This assures
	that our WHILE-loop will terminate.
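The argument can be watched on a toy pair of cyclic resistance equations, built only from additions and MINimums.  This is illustrative Python, not the book's code:

```python
INF = float("inf")

# Cyclic resistance equations:
#   a = min(b + 1, 3)     (an OR-block: the cheaper of the cycle or a leaf)
#   b = a + 1             (a coercion pointing back into the cycle)
def trip(a, b):
    return min(b + 1, 3), a + 1

a, b = INF, INF           # each MY_RESIST starts at "infinity"
trips = 0
while trip(a, b) != (a, b):
    a, b = trip(a, b)
    trips += 1
# The values only ever decrease; since solved resistances are non-negative
# integers, only finitely many decreases are possible, so the loop ends.
```

Starting from infinity, the pair drops to (3, inf) after one trip and settles at (3, 4) after two.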

	We can show even more.  Each trip will solve at least part of one
	cycle.  We show this next.

		BOX:	What happens to cause NEED_ANOTHER_TRIP to become TRUE?
		BOX:	Why can't a computed resistance ever increase in value,
		    BOX:	assuming that its inputs don't increase in value?
		BOX:	Why will our resistance computation finally end, that
		BOX:	is, why will NEED_ANOTHER_TRIP stay FALSE at some time?
At Least One ~Cycle ~Fragment Is Solved Per Trip

	We would like to prove that each trip throughout the semantic
	structure solves at least one cycle.  We could then conclude that the
	process of solving resistances consumes at worst a cost of K-squared,
	where K is the size of the semantic structure.

	This conclusion would be possible because a semantic structure of size
	K (number of blocks) can contain fewer than K cycles.  Given that
	there are fewer than K cycles, and given that each trip solves at least
	one cycle, there can be at most K trips required.  Each of the K trips
	traverses the semantic structure of size K, giving rise to K^2
	total cost (total number of blocks encountered).

	We will prove however that each trip solves at least one ~cycle
	~fragment.  As we will see, each cycle fragment will contain at least
	one block.  From this, we can conclude again a K-squared worst-case
	cost, this time associating with each trip a newly solved cycle
	~fragment instead of a newly solved cycle.

	This analysis describes worst-case performance.  In practice, many cycles
	(cycle fragments) are solved per trip.  In a sense, all cycle fragments
	of lowest resistance, like leaves on a tree, are solved on the first
	trip.  Each subsequent trip solves ~remaining cycle fragments of
	lowest remaining resistance.  This begins to show that a typical cost
	may be only K*log(K) in practice.

	Parsing a string of length N may deliver a semantic structure as big
	as N^3 (number of blocks) (Section 14.4).  In the N^3 structure,
	exponentially many, or even infinitely many meanings may be
	represented.  The size of the semantic structure, K, is thus equal in
	the worst case to N^3.  The K*log(K)-to-K^2 range then translates to:

		N^3*log(N)  -to-  N^6

	in terms of the length N of the input string.
What Is A Cycle Fragment?


	A ~cycle ~fragment is part of a cycle.  Recall that cycles are formed
	from coercions and OR-blocks.  A cycle fragment is shown in figure
	28.11(a).  A cycle fragment starts off with coercions (on top) and
	ends up at OR-blocks (on the bottom).

	We allow more than one OR-block at the bottom.  For mathematical
	convenience, we collect up ~all accessible OR-blocks onto the bottom
	of each cycle fragment, so that the leaves of the OR-blocks all point
	to non-OR-blocks.  If one of those leaves points to another cycle
	fragment, it is pointing to a coercion block (not another OR-block).

	Again, a cycle fragment that points to another cycle fragment points
	to a coercion block, and never to an OR-block in that cycle fragment.
	Instead of allowing a cycle fragment to point off to an OR-block, make
	that OR-block part of this cycle fragment.

	(Any OR-block can belong to several cycle fragments).
A Cycle Is Formed Of Cycle Fragments

	Any cycle consists exclusively of cycle fragments:  Pick any coercion
	block in a cycle.  Travel backwards thru coercion blocks until an OR-
	block is encountered.  Go forward one block so as to be at the coercion
	block farthest back.  This is the top of our first cycle fragment.

	Travel forward from the top of the cycle fragment around the cycle,
	until encountering an OR-block.  Continue down that OR-block, possibly
	encountering other OR-blocks.  Continue thru the cycle until a non-
	OR-block, a coercion, is found.  Our first cycle fragment now ends
	at the last OR-block, just before the found coercion block.

	From there, the found coercion block becomes the top of the next cycle
	fragment.  This procedure winds up dividing a cycle into only cycle
	fragments.
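The walk just described can be sketched in Python over a toy cycle in which each block is reduced to its kind, "C" for a coercion block or "O" for an OR-block.  This helper is a hypothetical illustration, not from the book:

```python
def cycle_fragments(cycle):
    """Split a cycle, given as a circular list of 'C' (coercion) and 'O'
    (OR-block) kinds, into fragments: coercions on top, OR-blocks on the
    bottom.  Assumes at least one coercion and one OR-block."""
    n = len(cycle)
    # The coercion block "farthest back": a 'C' whose predecessor is an 'O'.
    start = next(i for i in range(n) if cycle[i] == "C" and cycle[i - 1] == "O")
    rotated = cycle[start:] + cycle[:start]
    fragments, current = [], []
    for block in rotated:
        if block == "C" and current and current[-1] == "O":
            fragments.append(current)   # a fragment ends at its last OR-block
            current = []
        current.append(block)
    fragments.append(current)
    return fragments
```

A cycle C,C,O,C,O,O,C (read circularly) splits into the fragments [C,O,O] and [C,C,C,O] — each a run of coercions ending in OR-blocks.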

		BOX:	In a cycle fragment, what will have the lowest
		BOX:	resistance, a coercion block or an OR-block?
		BOX:	Can a cycle fragment point off to an OR-block?
		BOX:	Why is a cycle decomposable into cycle fragments?
Cycle Fragments Of Minimal Theoretical Resistance Are Solved By The Second Trip

	We want to show that at least one cycle fragment will be solved per
	trip.

	Of all blocks in all cycles (cycle fragments), pick one whose
	theoretical resistance is the smallest.  That block belongs to (at
	least) one cycle fragment.  Consider any one of those cycle fragments.

	We will show that ~at ~least ~upon ~the ~next ~trip, all blocks in
	that cycle fragment will achieve their final values.  That is, at
	least this one cycle fragment will be solved by the next trip.

	Consider the chosen cycle fragment, which has a block of minimal
	theoretical resistance.  All cycle fragments to which our cycle
	fragment points cannot affect our fragment's resistance:

		    Some path thru the OR-tree at the bottom of our cycle
		    fragment points off to a structure, the top of which will
		    ultimately hold our minimal theoretical resistance (figure
		    28.11).

		    That structure can reference no cycle fragments.

		    If it did, it would be pointing to a coercion.  ~Just ~beyond
		    ~that ~coercion is a block whose resistance is ~one ~less than
		    the coercion's resistance.  The coercion references a block
		whose resistance is less than our minimal chosen resistance.
		This contradicts our assumption of having chosen a cycle
		fragment of minimal resistance.

	That minimal structure referenced by our minimal OR-blocks achieves
	its theoretical resistance immediately, since it depends on no cycle
	fragments.  (Only blocks in cycles can change their resistances over
	time, due to cyclic encounters).

	No matter what resistances are reported by other pointers emanating
	from our OR-tree, they will be ignored because they are all no less
	than our minimum resistance.  If they report inaccurate resistances,
	they will be only on the high side.  Those values will not affect our
	computed ~MINimum resistance.
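A one-look illustration of why high-side errors are harmless (toy numbers, chosen here for illustration):

```python
INF = float("inf")

# Accurate resistances of the branches below an OR-tree:
accurate = [7, 9, 12]
# On an early trip, branches other than the minimal one may still report
# inflated resistances (even "infinity") -- but never anything below the
# true minimum, which comes from a branch that depends on no cycles:
inflated = [7, INF, 30]

print(min(accurate), min(inflated))   # both MINimums are 7
```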

	Because our chosen fragment depends on no others, it depends on no
	cycles at all.  Since only blocks in cycles can change over time, our
	chosen fragment now has its final resistance in each block.

	This might lead us to believe that all cycle fragments of minimal
	resistance will be solved on the first trip.  This is not quite the
	case, as we see next.

		BOX:	Why can't a cycle fragment of minimum resistance
		    BOX:	point to other cycle fragments?
		    BOX:	Why therefore does some block in a cycle fragment
		    BOX:	of minimum resistance achieve its theoretical value
		    BOX:	immediately upon this trip?
Entry Blocks Into Cycles

	  Suppose the cycle shown in figure 28.11(d) is first entered at the top
	  block.  That entry, and the continued travel around the cycle arrive
	  at the first OR-block.  Its value is unchanging as shown before,
	  because the minimum path can reference no cycles.  As the invocations
	  return, that is, as we back out of the cycle, correct, unchanging
	  resistances are left behind.  This trip solves this cycle fragment.

	  However, a cycle might be entered for the first time halfway thru
	  the cycle fragment, as shown in figure 28.11(e).  The first trip is
	  shown in figure 28.11(f).  The blocks below the entry will be left with
	  correct, unchanging resistances as we back out from this trip.
	  However, the blocks above that entry block might have incorrect
	  resistances that can change, like infinity.

	  When only part of a cycle fragment is solved by the first trip, we must
	  rely on the ~next trip to solve the rest of the cycle fragment.	 The
	  next trip, which will follow the identical path (as all trips do),
	  will come around again, meeting the entry block.  This time, that entry
	  block delivers its correct resistance (left by the previous trip).

	  The second trip returns, backing up along the cycle, starting from the
	  entry block.  Figure 28.11(f) shows with thick blocks the returning
	  part of this trip.  It now leaves correct, final resistances in the
	  fragment's upper blocks (prior to the entry block).

	Thus, we are guaranteed that all cycle fragments of minimal theoretical
	resistance are solved by this trip or the very next trip, no matter
	what else happens on the next trip.

		BOX:	Why might it take more than one trip to solve even
		BOX:	a cycle fragment of minimal resistance?
		BOX:	Why are two trips sufficient to solve such cycle
		BOX:	fragments?
Each Non-First Trip Solves At Least One More Cycle Fragment

	Once the first two trips are complete, all cycle fragments of
	minimal resistance are solved.  Let's now consider all
	  other cycle fragments.

	  Once again, pick an ~unsolved cycle fragment whose resistance is
	  minimal ~among ~the ~unsolved cycle fragments.  (This new minimal
	  resistance is at least one greater than our previous minimum, which had
	  included all the now ~solved cycle fragments).

	  The same argument given earlier applies once again for cycle fragments
	  of the new minimal resistance.  They may depend on other cycle
	  fragments, but those other fragments are already solved.
	  Therefore, all cycle fragments of the new minimal resistance become
	  solved, during either the overall ~second or ~third trip.

	  Each trip therefore solves all cycle fragments of a new minimal
	  resistance.  That set includes at least one cycle fragment.

		    BOX:	Why do we solve at least one cycle fragment per non-
		    BOX:	first trip?
		    BOX:	Why in practice is the number of trips required
		    BOX:	much less than the number of cycle fragments?
		    BOX:	If an arbitrarily large semantic structure contains
		    BOX:	blocks having at most K different resistances, how
		    BOX:	many trips might be required, worst case?