Languages support rich and concise communication.  Brief communications
	  occur by assuming a ~dynamic ~set ~of ~conventions that is understood
	  by both parties.

	  Some of those conventions concern the actual words and punctuation
	  used, and possibly the order in which they appear.	Such surface
	  structure of language is called ~syntax.  Syntax is a powerful
	  mechanism for briefly encoding ideas.

	  We can think of syntax as a skeleton of the language.  That skeleton
	  is defined by what is called a ~grammar.

	  We will see in the following how brief grammars provide for the
	  definitions of expressive languages.  Beginning in Section 1.2 we will
	  go beyond just the skeleton or syntax of languages, and consider
	  semantics or meanings as well.  Part 7 looks much more deeply into
	  semantics.

	  We are interested in languages of all sorts, as we try to pierce the
	  very nature of language itself, including ultimately such nuances as
	  inflection and gesture.  While we're not quite ready yet to encompass
	all that, we strive to be flexible enough to ultimately handle such
	information.  Our techniques are intended immediately for processing
	all computer languages, including formalizations of natural languages.


	A language is often broken up into what are called its ~syntax and
	~semantics.  The dichotomy renders the language manageable.

	While some dichotomies may impose artificial boundaries, any
	deficits that might be imposed by this one will loosen up completely
	when we allow for ~ambiguity in the syntax, as we shall look into more
	specifically in Part 4.  Our model language ICL has an ambiguous
	syntax, for example, that takes the edge off of this dichotomy.

	~Syntax itself is often broken into two parts: its vocabulary and
	grammar.  The grammar introduces meaning above and beyond the meanings
	of the individual words themselves.

	Without a grammar we would be limited
	to only one-word sentences, a poor state of affairs indeed.  A grammar
	introduces the bulk of the richness and conciseness by introducing
	additional meaning based in part on word order.  It's the grammar that
	  distinguishes the following two sentences which use identical words:

		    1)	Jim saw Jill.
		    2)	Jill saw Jim.

	  Another example exposing the utility of grammars
	  is the way we write numbers.  With a vocabulary of only the "words" 0,
	  1, ..., 9, we can express numbers concisely.	The "word" order is
	  important, as for example the number 91 is different from the number
	  19.	 The conciseness comes from the implicit convention that the left
	  digit, and not the right digit, is multiplied by ten.

	  If there were no grammar, no implicit word order, we would have to
	  resort to a primitive notation for writing numbers known as tally,
	  where you express the number by writing that many ones.  For
	  example, the number 10 in tally is:

		    1111111111

	  The conciseness delivered by our numbers grammar allows us to express a
	  number N using only log-base-10(N) words (digits), instead of tally's
	N words.

	Put another way, as a function of the number of digits written, tally
	notation has an exponential cost whereas our positional notation has
	only a linear cost.
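The cost comparison can be made concrete with a small sketch (Python, our
	own illustration, not part of the text's notation):

```python
# Count the "words" needed to express a number N in each notation.
def positional_length(n):
    return len(str(n))      # roughly log-base-10(N) digits

def tally_length(n):
    return n                # one tally mark per unit counted

# The number 91 takes 2 digits but 91 tally marks.
print(positional_length(91), tally_length(91))   # -> 2 91
```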

	(We will consider grammars with freer word order in Section 3.5).

The "Numbers" Grammar

	We express a grammar for numbers in BNF (Backus-Naur Form) by writing
	the following ~rules of grammar:

		<DIGIT>	 ::=  0
		<DIGIT>  ::=  1
		<DIGIT>  ::=  2
		<DIGIT>  ::=  3
		<DIGIT>  ::=  4
		<DIGIT>  ::=  5
		<DIGIT>  ::=  6
		<DIGIT>  ::=  7
		<DIGIT>  ::=  8
		<DIGIT>  ::=  9

		<NUMBER> ::=  <DIGIT>

		<NUMBER> ::=  <NUMBER> <DIGIT>

	The first ten rules specify that a <DIGIT> is any one of the ten
	digits zero thru nine.  The last two rules say that a <NUMBER> can be
	a single <DIGIT>, or any <NUMBER>  followed by a <DIGIT>.  The last
	rule is the power house; it says that given any number, you can form
	a new number from it by appending a digit to its right.

	For example, the number 8 is a <NUMBER> because 8 is a <DIGIT> (via
	the ninth rule) and any <DIGIT> is also a <NUMBER> (via the second to
	last rule).  The notation 82 is a <NUMBER> because of the last rule:
	8 can be seen as a <NUMBER> and 2 as a <DIGIT>.  Similarly, since 82
	is a <NUMBER>, the last rule says that 824 is also a <NUMBER> (because
	the 824 is the number 82 followed by the digit 4).

	How many numbers are there, as defined by this grammar?  There are
	infinitely many, because given a "largest" number, we can make a
	larger number by appending a digit onto it, via the last rule.  The
	power of grammars is immense because a finite
	number of rules defines a language that has infinitely many expressions.

	It is the last rule that gives the grammar its power.  That
	rule is said to be ~recursive because <NUMBER> appears on both sides
	of the "::=".  <NUMBER> is defined in terms of itself.
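	The recursive rule reads directly as a recursive test.  The following
	sketch (Python, with names of our own choosing) decides whether a
	string is a <NUMBER> under this grammar:

```python
DIGITS = set("0123456789")

def is_number(s):
    # <NUMBER> ::= <DIGIT>
    if len(s) == 1:
        return s in DIGITS
    # <NUMBER> ::= <NUMBER> <DIGIT>   -- the recursive rule
    return len(s) > 1 and s[-1] in DIGITS and is_number(s[:-1])

print(is_number("9314"), is_number("93a4"))   # -> True False
```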

The Rewriting Process

	People figure things out with paper and pencil by ~rewriting.  For
	example, to compute the expression:

		1 + 2*3

	we first perform the multiplication between 2 and 3.  We ~erase the
	"2*3" and ~write in its place a 6, leaving:


	We then perform the sum, by ~erasing the "1+6" and ~writing a "7" in
	its place.

	Rewriting is the act of erasing some portion of a string and
	replacing it by something else.  We will return to consider "+" and
	"*" in a moment.  But for now, let's consider rewrites relative to the
	  numbers grammar.

	  Given a number, say, 9314, how do you prove to a friend that it is
	  in fact a <NUMBER> as defined by this grammar?  You write down the
	  characters as follows:

		    9		3	  1	    4

	  Then you "apply" any rules you want.  You apply a given rule by finding
	  the rule's righthand phrase somewhere in the string of characters.
	For example, you can find the second rule's righthand phrase, namely
	  "1", in the string of characters (the third character).  You "apply"
	  the rule by erasing the found occurrence of the rule's righthand phrase,
	the "1", and you write in its place the rule's lefthand phrase,
	  <DIGIT>, e.g.

		    9		3	  <DIGIT>	  4

	  The goal is to apply rules in such a way that the given string of
	  characters becomes one <NUMBER>.	For example, we can apply the <DIGIT>
	  rules three more times, to come up with the string:

		    <DIGIT>	<DIGIT>	   <DIGIT>    <DIGIT>

	  You can then apply the second to last rule upon any of the digits.
	  Let's apply it to the first digit, e.g.,

		    <NUMBER>	<DIGIT>	   <DIGIT>    <DIGIT>

	We can now apply the last rule to the first ~two elements in this
	latest string, and come up with:

		    <NUMBER>	<DIGIT>	   <DIGIT>

	We can apply this rule again, on the first two elements:

		    <NUMBER>	<DIGIT>

	One more application of the last rule reduces the original string down
	to a single token:

		    <NUMBER>

	You have proven that the string 9314 is in fact a <NUMBER>.
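	The proof just performed can be mimicked mechanically.  Here is a
	sketch (Python, token names ours) that reduces a digit string to a
	single <NUMBER> the same way:

```python
def reduce_to_number(chars):
    # Apply  <DIGIT> ::= 0 | 1 | ... | 9  to every character.
    if not chars or any(c not in "0123456789" for c in chars):
        raise ValueError("not a string of digits")
    tokens = ["DIGIT"] * len(chars)
    # Apply  <NUMBER> ::= <DIGIT>  to the first token.
    tokens[0] = "NUMBER"
    # Apply  <NUMBER> ::= <NUMBER> <DIGIT>  until one token remains.
    while len(tokens) > 1:
        tokens[0:2] = ["NUMBER"]
    return tokens

print(reduce_to_number("9314"))   # -> ['NUMBER']
```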

	It is possible to apply the rules of grammar in such a way so as to get
	stuck.  For example, we could apply one of the rules:

			<NUMBER>  ::=  <DIGIT>

	to the last <DIGIT> in the phrase

		    <DIGIT>	<DIGIT>	   <DIGIT>    <DIGIT>

	and arrive at the phrase

		    <DIGIT>	<DIGIT>	   <DIGIT>    <NUMBER>

	We are stuck at this point because there is no possible way to
	continue applying rules so as to arrive at the goal: a single <NUMBER>
	that spans the whole original string.  Even though we could reduce
	the first three <DIGIT>s to a single <NUMBER>, we would still be left
	with the string

		    <NUMBER>   <NUMBER>

	There is no rule whose righthand phrase can be seen in this latest
	string.  No rule can apply.

	In fact, we can tell that the string

		    <DIGIT>	<DIGIT>	   <DIGIT>    <NUMBER>

	can't be reduced to a single <NUMBER> just by looking at the
	  rightmost <NUMBER>.  No rule can combine that rightmost <NUMBER>
	  with anything to the left:	No rule's righthand phrase ends with
	<NUMBER>.  (All the rules' righthand phrases end with <DIGIT> or 0
	  thru 9, but not <NUMBER>).	No rule can apply which will
	  erase that rightmost <NUMBER>.  That rightmost <NUMBER> will remain
	  forever no matter what we do to the left (and we can't make the left
	  part consume it).

	Any rewrite which ultimately gets stuck is called a ~false ~rewrite.

Recording Of Rewrites - The Derivation

	We represent a sequence of rule applications, rewrites, as follows:

		(Original string)	9	3	1	4
				      <DIGIT> <DIGIT> <DIGIT> <DIGIT>
				      <NUMBER>
				      --- <NUMBER> --
				      ------- <NUMBER> ------
				      ----------- <NUMBER> ----------

	The first line shows four applications of the <DIGIT> rules.  The
	second line shows only one rule application:  The first <DIGIT>
	becomes a <NUMBER>.  The remaining three lines each show the
	application of the rule

		<NUMBER>  ::=  <NUMBER> <DIGIT>

	Every element in this kind of recording is "consumed" by an element
	appearing on a subsequent line.  For example, the fourth <DIGIT>
	on the first line is consumed by the one rule application appearing
	on the last line.  The third <DIGIT> is consumed in the second to last
	line.  The <NUMBER> appearing on the second to last line is consumed
	by the last line.

	This kind of recording, called a ~derivation, never contains mistakes
	(false rewrites).  For example, the following is not a derivation:

		(Original string)	9	3	1	4
				      <DIGIT> <DIGIT> <DIGIT> <DIGIT>
						      	      <NUMBER>

	As we discovered earlier, the <NUMBER> appearing over the last <DIGIT>
	can't ever be successfully rewritten to an overall <NUMBER>.  No
	  subsequent line could ever consume it.

	  The white space in these derivations is irrelevant.	 For example, the
	  first line could be broken into four separate lines (one for each rule
	  application).  You could insert blank lines.	The ordering of the
	  lines is constrained only by the requirement that each element on one
	  line must be consumed by something on a subsequent line.

Automatic Rewriting - Parsing

	  A computer program can tell whether or not a given string is legal
	  within the language defined by a given grammar.  The computer program
	  does what we did.  It applies rules by finding occurrences of rules'
	righthand phrases in the given string, erasing those occurrences and
	writing in their place the rules' lefthand phrases.  As we saw,
	  rewrites can be applied over and over again, even rewriting material
	  that was itself rewritten, (e.g., "9" was rewritten to <DIGIT> and
	  then <DIGIT> was rewritten to <NUMBER>).

	  Programs that tell whether or not a given string is legal in a given
	  grammar are called ~parsers.  Parsers in general are nontrivial
	  programs, because they must avoid getting stuck as we almost did.

	  There are many techniques for parsing.	Some techniques avoid getting
	  stuck by carefully modifying the grammar so that rules apply only
	  when certain conditions, determined by looking ahead a few characters,
	  are met.  These parsers never make a mistake, but unfortunately many
	  grammars can't be parsed by this kind of technique.

	Other methods accommodate "getting stuck" by undoing some rewrites,
	backing up during the parsing.  These backtracking methods however can
	be impractically slow, consuming perhaps 2-to-the-N'th time while
	  parsing a string of length N.  This is a dramatic expense because
	  parsing a string of 30 characters would require a billion operations,
	  and a string of length 60 would require 10^18 operations.	 At a
	  million operations per second, the 60 character string would consume
	  31000 years.

	  Another method, which we present in Chapter 12, accommodates mistakes
	  by ~not erasing phrases, but by proposing new phrases as ~alternatives.
	  When a rule's righthand phrase is about to be erased, we don't
	  erase it.	 We instead put the rule's lefthand phrase in as an
	alternative.  This way, we never have to back up because we never
	commit wholeheartedly to any rule's application.  We grow many separate
	  possibilities in parallel.

The "Formulas" Grammar

	  Let's illustrate further the role of grammar.  Here is a grammar that
	goes beyond numbers.  It admits expressions built out of numbers, +'s,
	  *'s, and parentheses.  This grammar forces multiplications (*) to
	occur before additions (+), unless a sum is enclosed in parentheses.
	Here are some sample formulas:

		1+2*3			(which is equal to 7 and not 9)
		2*3+4*5			(26)
		2*(3+4)*5		(70)
		1+2*(3+4*(5+6))		(95)

	We begin by writing a subset of the grammar, that part which admits
	numbers and *'s.	We use the part-of-speech <TERM> to denote such
	strings:

		    <TERM>	::=  <NUMBER>

		    <TERM>	::=  <TERM> * <NUMBER>

	  This says that a <TERM> is either a single <NUMBER>, ~or it is another
	  <TERM> multiplied by a <NUMBER>.	The second rule shows that a "*"
	  can appear in a <TERM>.  It also says that to the left of that "*",
	  any <TERM> can appear.  That <TERM> itself could contain more "*"s.
	  These rules give rise to all strings of the form:

		    number * number * number *  ...	 * number

	  For example, here is a derivation:

		    2	     *	      3	       *	4
	     <DIGIT>	    <DIGIT>	     <DIGIT>
	     <NUMBER>	    <NUMBER>	     <NUMBER>
	     <TERM>
	     ------- <TERM> ---------
	     ----------------- <TERM> ---------------

	  The second to last line shows the rewrite of the first <TERM>, the
	  first "*", and the second <NUMBER> into a larger <TERM>.

	  Because we will want to admit expressions like 2*(3+4), we require
	  that "*" combines not only <NUMBER>s but also expressions enclosed
	  in parentheses.	 Let's accommodate this more general notion by dealing
	not with <NUMBER>s, but instead with what we call <ATOM>s.  We replace
	the two <TERM> rules with

		<TERM>  ::=  <ATOM>

		<TERM>  ::=  <TERM> * <ATOM>

	A <TERM> is now a product of <ATOM>s.

	What is an <ATOM>?  One of the things it can be is shown in the next
	rule:

		<ATOM>  ::=  <NUMBER>

	This says that an <ATOM> can be a <NUMBER>.  We've achieved
	  what we had originally:  These three rules taken together say
	  that a product of numbers is still a <TERM>, as in the derivation

		    2	     *	      3	       *	4
	     <DIGIT>	    <DIGIT>	     <DIGIT>
	     <NUMBER>	    <NUMBER>	     <NUMBER>
	     <ATOM>	    <ATOM>	     <ATOM>
	     <TERM>
	     ------- <TERM> ---------
	     ----------------- <TERM> ---------------

	  (Each <NUMBER> is turned into an <ATOM>).

	  Let's bring in addition by writing:

		<FORM>  ::=  <TERM>

		<FORM>  ::=  <FORM> + <TERM>

	A <FORM> (short for "formula") is a sum over <TERM>s.

	We complete the grammar by adding one more rule:

		<ATOM>  ::=  ( <FORM> )

	This powerful rule says that any <FORM> enclosed in parentheses can be
	seen as just an <ATOM>.  <ATOM>s, as you recall, can be operated upon
	by "*".  This rule says that a sum can occur
	within a multiplication as long as that sum is enclosed in parentheses.

	Put all together, here are the formula rules:

		<ATOM>  ::=  <NUMBER>
		<ATOM>  ::=  ( <FORM> )

		<TERM>  ::=  <ATOM>
		<TERM>  ::=  <TERM> * <ATOM>

		<FORM>  ::=  <TERM>
		<FORM>  ::=  <FORM> + <TERM>

	Again we see that a grammar having a finite number of rules gives rise
	to infinitely many formulas.  These formulas can contain any number of
	+'s and *'s and parentheses-pairs.  Parentheses can be nested within
	parentheses arbitrarily many times.  You can tell that legal formulas
	always have as many open parentheses "(" as close parentheses ")"
	just by looking at the one rule that admits parentheses.

	This grammar not only has multiplications occurring before additions,
	but it also forces multiplications to occur in left-to-right order.
	You can deduce this left-to-right grouping by looking at the
	multiplication rule:

		<TERM>	::=  <TERM> * <ATOM>

	<TERM>s are admitted on the lefthand side of the "*", and so other
	multiplications can appear there.  However, to the
	right of an "*", only <ATOM>s are admitted.  <ATOM>s can't contain
	  multiplications, unless the multiplications are enclosed in parentheses
	  (via the parenthesis rule).

	  Here is a derivation:

		1    +    2    *    (    3    +    4    )
	   <DIGIT>	  <DIGIT>	  <DIGIT>    <DIGIT>
	   <NUMBER>	  <NUMBER>	  <NUMBER>   <NUMBER>
	   <ATOM>	  <ATOM>	  <ATOM>     <ATOM>
	   <TERM>	  <TERM>	  <TERM>     <TERM>
	   <FORM>			  <FORM>
					  -------- <FORM> --------
				     ------------ <ATOM> ------------
			 --------------------- <TERM> ---------------------
	   -------------------------------- <FORM> --------------------------

		    BOX:	We've introduced <ATOM>, <TERM>, and <FORM>.
		BOX:	Suppose you wanted to include other operators, beyond
		BOX:	"+" and "*".  Might you make up more ~parts-of-speech
		BOX:	like <ATOM>, <TERM>, and <FORM>?

		BOX:	Now that we can parse strings, how do we associate
		BOX:	meaning with them?


	1)	How many times does the rule

			<NUMBER>  ::=  <NUMBER> <DIGIT>

		apply in the text 123456789?

	2)	How many times does the rule  <NUMBER> ::= <DIGIT>  apply
		in the text 123456789?

	3)	Is "456" a number if it appears in 1234567, according to our
		numbers grammar?

	4)	Is -12345 a number according to our grammar?

	5)	Prove that additions group left-to-right, just like
		multiplications do, in the absence of parentheses.

	6)	Modify the rule

			<TERM>  ::=  <TERM> * <ATOM>

		so that multiplications will group right-to-left.

	7)	For each of the last four lines in the derivation just shown
		before the exercises, circle the item(s) that were consumed to
		make that line.  For example, the fourth to the last line, the
		<FORM>, consumes the <FORM> over the "3" and the <TERM> over
		the "4".

		Also, for each of the four last lines, tell which rule was
		applied to create that line.  For example, the fourth to last
		line is the result of the rule:

			<FORM>  ::=  <FORM> + <TERM>

	8)	Write down the derivation for the formula

			1 + 2 * 3 * ( 4 + 5 )

	9)	If necessary, add a rule to the formulas grammar so that unary
		minus "-" is allowed.  That is, we want to be able to write
		-1 or -12345 or even --12345.  These unary negations should
		occur before multiplications or additions.  That is,

			-1+2	should be 1 and not -3, although

			-(1+2)	should be -3.

		[Hint: Make the new rule deal only with <ATOM>s]

	10)	Introduce subtraction (-) and division (/) into the formulas
		grammar.  Do this by only adding new rules.  Make "-" act
		like "+", and "/" act like "*".

	11)	Add exponentiation (^) to the formulas grammar, so that
		exponentiations occur before multiplies.  You may need to
		introduce a new part-of-speech, along with the existing
		<ATOM>, <TERM>, and <FORM> parts-of-speech.  This will
		require you to modify some of the existing rules.  The pattern
		of rules will still look the same albeit longer.

	12)	Is the following part of some derivation for the last formula?
		(We use <D> for <DIGIT> and <N> for <NUMBER> etc.):

			1  +  2  *  3  *  (  4  +  5  )
		       <D>   <D>
		       <N>   <N>
		       <A>   <A>
		       <T>   <T>

		How about:

			1  +  2  *  3  *  (  4  +  5  )
		       <D>   <D>
		       <N>   <N>
		       <A>   <A>
		       <T>   <T>
		       -- <F> --

		or this:

			1  +  2  *  3  *  (  4  +  5  )

Semantics In Grammars

	The preceding description shows how a grammar can be used to specify
	the legal strings of a language, but it fails to mention how to
	associate a meaning with a given string in the language.  Grammars
	become truly useful only when meaning is involved.

	We incorporate meaning by associating a meaning with each element in a
	string.  That is, an element of a string will consist of not only a
	part-of-speech (e.g., <TERM>) but also a meaning.

	For example, the rule

		<DIGIT>	   ::=    0

	can include meaning transformations via the following concise notation:

		<DIGIT:0>   ::=    0

	In the "<...>", the ":" separates the part-of-speech on the left from
	the semantics on the right.

	The <NUMBER> grammar can be written with meaning transformations
	as follows:

		<DIGIT:0>   ::=  0
		<DIGIT:1>   ::=  1
		 . . .
		<DIGIT:9>   ::=  9

		<NUMBER:a>  ::=  <DIGIT:a>

		<NUMBER: 10*a+b >   ::=   <NUMBER:a>  <DIGIT:b>

	The meaning associated with a <DIGIT> or a <NUMBER> is an integer.
	Looking at the <DIGIT> rules, note that the digit appearing on the
	righthand side is a character whereas the meaning associated with the
	<DIGIT> is an integer.  For example, the rule

		<DIGIT:1>  ::=  1

	states that the character "1" has the integer 1 as its meaning when
	the character "1" is viewed as a <DIGIT>.  The first <NUMBER> rule
	states that when a <DIGIT> is viewed as a <NUMBER>, the meaning for
	the <NUMBER> is the same as the meaning associated with the <DIGIT>.
	The final rule states that when a <NUMBER> followed by a <DIGIT> is
	viewed as a <NUMBER>, the meaning for the resulting <NUMBER> is ten
	times the meaning of the given <NUMBER> plus the meaning of the
	given <DIGIT>.

	The parts-of-speech appearing in the righthand phrase include the
	specification of ~variables.  The part-of-speech appearing on the
	lefthand side includes the specification of a meaning which is
	a function of the variables named in the righthand phrase.

	One can see how a transformation is carried out.  When this rule is
	employed in a rewrite operation, the variables "a" and "b" are set to
	the meanings associated with the <NUMBER> and <DIGIT> that are being
	erased.  The value  "10*a+b"  is computed and associated with the new
	<NUMBER> replacing the erased elements.
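	Carried out left to right over a digit string, these transformations
	amount to the following sketch (Python, our own rendering):

```python
def number_meaning(s):
    meaning = None
    for c in s:
        d = "0123456789".index(c)        # <DIGIT:d>  ::=  d
        if meaning is None:
            meaning = d                  # <NUMBER:a>  ::=  <DIGIT:a>
        else:
            meaning = 10 * meaning + d   # <NUMBER: 10*a+b >  ::=  <NUMBER:a> <DIGIT:b>
    return meaning

print(number_meaning("9314"))   # -> 9314
```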

	Here is the formula grammar with semantics:

		<ATOM: a >	::=	<NUMBER:a>

		<TERM: a >		::=	<ATOM:a>
		<TERM: times(a,b) >	::=	<TERM:a>  *  <ATOM:b>

		<FORM: a >		::=	<TERM:a>
		<FORM: sum(a,b) >	::=	<FORM:a>  +  <TERM:b>

		<ATOM: a >	::=	( <FORM:a> )			.

	A <FORM> with semantics is derived from the string 1+2*3 as follows:

	   1	   +	   2	   *	   3
	 <DIGIT:1>	<DIGIT:2>	<DIGIT:3>
	 <NUMBER:1>	<NUMBER:2>	<NUMBER:3>
	 <ATOM:1>	<ATOM:2>	<ATOM:3>
	 <TERM:1>	<TERM:2>
	 <FORM:1>	<TERM:  times(2,3)  >
	    <FORM:  sum( 1 , times(2,3) )  >

	The final <FORM> has the accumulated meaning:

		sum( 1 , times(2,3) ).

	We have not specified what kind of data sum and times take in and
	produce.  We might assume that sum and times take in and produce
	numbers, i.e., the meaning for the final <FORM> could simply be 7.
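	Under that first assumption, a small parser-evaluator can compute the
	meaning directly.  The sketch below is our own illustration (not the
	book's parsing method); it handles the left-recursive <FORM> and
	<TERM> rules with loops, a standard reformulation, and assumes the
	input contains no blanks:

```python
def parse_form(s, i=0):
    # <FORM> ::= <TERM>  |  <FORM> + <TERM>
    val, i = parse_term(s, i)
    while i < len(s) and s[i] == "+":
        rhs, i = parse_term(s, i + 1)
        val = val + rhs                   # sum(a,b) on integers
    return val, i

def parse_term(s, i):
    # <TERM> ::= <ATOM>  |  <TERM> * <ATOM>
    val, i = parse_atom(s, i)
    while i < len(s) and s[i] == "*":
        rhs, i = parse_atom(s, i + 1)
        val = val * rhs                   # times(a,b) on integers
    return val, i

def parse_atom(s, i):
    # <ATOM> ::= <NUMBER>  |  ( <FORM> )
    if s[i] == "(":
        val, i = parse_form(s, i + 1)
        return val, i + 1                 # step past ")"
    j = i
    while j < len(s) and s[j].isdigit():
        j += 1
    return int(s[i:j]), j                 # the 10*a+b rule, done by int()

print(parse_form("1+2*3")[0])             # -> 7
```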

	On the other hand, we might assume that sum and times take in and
	produce programs whose executions yield numbers.  For example, a LISP
	program can be obtained if sum and times are defined as follows:

		times(a,b)  =   (LIST 'ITIMES a b)
		    sum(a,b)    =	  (LIST 'IPLUS a b)

	The string 1+2*3 would rewrite to a <FORM> whose meaning is

		(IPLUS 1 (ITIMES 2 3))	.

	Times and sum could be used to deal with any kind of data.  We will
	explore other possible semantic data types throughout the book.
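	A Python rendering of those two definitions, with Python lists standing
	in for LISP lists, shows the idea (the names are ours; "sum_" avoids
	shadowing Python's builtin):

```python
def times(a, b):
    return ["ITIMES", a, b]    # (LIST 'ITIMES a b)

def sum_(a, b):
    return ["IPLUS", a, b]     # (LIST 'IPLUS a b)

# The meaning accumulated for 1+2*3:
print(sum_(1, times(2, 3)))    # -> ['IPLUS', 1, ['ITIMES', 2, 3]]
```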

Parts-Of-Speech Are Datatypes

	A datatype in its most general form is a set of conventions by which
	a datum exists.  We say that a datum is an "instance" of a datatype
	precisely when the datum obeys the conventions of the datatype.

	Any datum has a type, because otherwise, we would be unable even to
	examine it.  Something must be known about any datum, so that the
	datum can be operated upon by a computer program.  For example, the
	computer program needs to know if the datum is a 32-bit entity.

	We can see how the parts-of-speech of a grammar serve as the datatypes
	over the space of meanings.

	In defining a grammar and the routines that implement meaning
	one must agree on how a meaning is represented.  For
	example, when we implement the routine sum for the rule

		<FORM: sum(a,b)>  ::=  <FORM:a> + <TERM:b>

	we must know three things:  What kind of object is ~a, what kind of
	object is ~b, and what kind of object must be associated with the
	resulting <FORM>?

	We can deduce that the type of data yielded by sum(a,b) must be the
	same as the type of data represented by ~a.  Both are meanings under
	the part-of-speech <FORM>.

	The rewriting process might employ the <FORM:sum(a,b)> generated by
	this rule as the <FORM:a> in another application of this rule, as in:

		c	+	d	+	b
		<FORM: sum(c,d) >
		<FORM:   sum( sum(c,d) , b )   >		.

	When ~a is set to the meaning of a <FORM>, we cannot tell which
	rule generated the <FORM>.  That is the nature of the rewriting
	process.

	It is therefore advantageous to establish conventions for meaning on
	a part-of-speech by part-of-speech basis.  That is, for a given part-
	of-speech, one should state exactly what can be expected of an
	associated meaning.  A part-of-speech serves as the name for the
	conventions obeyed by any meaning associated with that part-of-speech.

	In the example rule given above, we can assume that ~a follows the
	<FORM>-conventions and that ~b follows the <TERM>-conventions and
	finally that sum(a,b) had better follow the <FORM>-conventions.  From
	the point of view of sum's definition, these requirements appear as
	  datatype constraints:

		    sum( a:FORM  b:TERM ) = FORM:	...				.

	  Sum is a function which maps a FORM and a TERM to a FORM.	 Parts-of-
	  speech are the datatypes for the input and output parameters in any
	  function which implements a meaning.
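	  In a host language with datatypes, the constraint becomes a typed
	  signature.  A hypothetical Python sketch (the classes and names are
	  ours, not part of the text's notation):

```python
from dataclasses import dataclass

# Each part-of-speech names the datatype of its meanings.
@dataclass
class TERM:
    value: int

@dataclass
class FORM:
    value: int

# sum maps a FORM and a TERM to a FORM, as the datatype constraint demands.
def sum_meaning(a: FORM, b: TERM) -> FORM:
    return FORM(a.value + b.value)

print(sum_meaning(FORM(1), TERM(6)))   # -> FORM(value=7)
```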

The Declaration Of Parts-Of-Speech

	  So far, the meaning associated with all our parts-of-speech has been
	  of type integer.  We make this clear by declaring each part-of-
	  speech.  Here are the declarations we've assumed so far:

		    POS  DIGIT : INT ;
		    POS  NUMBER : INT ;
		    POS  FORM, TERM, ATOM : INT ;

	The "POS" stands for "Part-Of-Speech".  DIGIT, NUMBER, FORM, TERM, and
	ATOM are now officially declared.  Each has INT (INTeger) as its
	meaning.  The last POS line could be split into three independent
	lines, like we did earlier with the DIGIT/NUMBER pair.

	Each part-of-speech thus has a datatype for its meaning.

Definitions And A New Notation For Rules

    *	A "grammar" is specifically a set of three things:

	    1)	A finite set of parts-of-speech.

		    Parts-of-speech are divided into two sets:

			a)	"Terminal" parts-of-speech, and

			b)	"Nonterminal" parts-of-speech.

		    Terminal parts-of-speech are the characters on your
		    terminal.  Our grammars thus far have used the terminal
		    parts-of-speech (, ), *, +, and 0 thru 9.  Any actual
		    string in a language consists only of terminal parts-of-
		    speech.

		    Terminal parts-of-speech have no associated semantics.
		    For example, the rule

			<ATOM:x>   ::=   (  <FORM:x>  )

		    shows a semantic variable (x) associated only with the
		    nonterminal <FORM> and not with the terminals ( and ).
		    Another example is the rule

			<DIGIT: 1 >    ::=    1

		    The character 1 on the righthand side has no semantics,
		    but the nonterminal <DIGIT> does have associated semantics.

		    Nonterminals are all parts-of-speech other than terminal
		    parts-of-speech.  Our grammars
		    thus far have used the nonterminals <FORM>, <TERM>, <ATOM>,
		    <DIGIT>, and <NUMBER>.  Nonterminal parts-of-speech have
		    associated semantics.

		    Nonterminal parts-of-speech never appear in any actual
		    string of the language.  They appear only during the
		    rewriting process.

	    2)	A finite set of rules.

	    3)	A ~chosen part-of-speech from the grammar's set of parts-of-
		speech.

	  The set of rules defines the legal rewrites from one string to another.
	  Rewrites can occur anywhere in a given string, and can apply even to
	  earlier rewrites.

	  The ~chosen part-of-speech defines the "language defined by the
	  grammar".	 A string (made up of only terminals) is said to be a
	  member of the language defined by this grammar if and only if that
	  string can be rewritten into one overall occurrence of the chosen
	  part-of-speech.

	  For example, the <FORM> grammar has <FORM> as its chosen part-of-
	  speech.  The same set of rules, but with <TERM> as the chosen part-of-
	  speech, defines a different language.  While 2*3 is in that <TERM>
	  language, the string 1+2 is not.

    *	  Part-Of-Speech

	  A part-of-speech is a name for a set of notations.	The "chosen"
	  part-of-speech in a grammar names the set of all notations allowed in
	  the language.

	  A part-of-speech can be simply represented by a unique integer.

	  Each nonterminal part-of-speech has an associated semantic type.  The
	  meaning associated with any occurrence of that part-of-speech has
	  semantic data of that semantic type.

	  We will soon introduce the notion of an "array" of parts-of-speech,
	  (Chapter 3).  All parts-of-speech in an array have the same semantic
	  type.

	  A part-of-speech is declared by either:

		    POS  part_of_speech : type_of_semantics ;
		    POS  part_of_speech [ integer ]	 :  type_of_semantics ;

	  The latter form declares an array of parts-of-speech.  The declaration:

		    POS  EXPR[3] : INT ;

	  creates new parts-of-speech named <EXPR[0]>, <EXPR[1]>, <EXPR[2]>, and
	  <EXPR[3]>, whose semantics are INTegers.

    *	  Token

	  A token is an occurrence of a part of speech (e.g., within a phrase).
	  That occurrence may have associated semantic data, unless the part-of-
	  speech is a terminal part-of-speech.

	  An example token is:

		    <FORM: 100 >

    *	  Phrase or String

	  A phrase or string is a sequence of tokens.

    *	  Rule

	  A rule of grammar is a pair of phrases, denoted by either of the two
	  following notations:

		1)  BNF notation	 (Backus Normal Form, or Backus Naur Form):

				phrase  ::=	 phrase

		    This is the notation we've been using thus far.
		In this notation, the righthand phrase is the phrase to match
		and erase, while the lefthand phrase is the replacement phrase.
		That is, occurrences of the righthand phrase can be rewritten
		to occurrences of the lefthand phrase.

		All our rules so far always have lefthand phrases
		of length one.  The more general case, with phrases longer
		than one part-of-speech, is meaningful and is allowed.  In
		practice however, we rarely need this extra generality.
		(See "Context Free Grammar" below).

	    2)	The "->" notation:

		The notation

			phrase   ->   phrase

		is like the BNF notation except that the phrases are swapped.
		That is, this specifies that the lefthand phrase can be
		rewritten to the righthand phrase.  This order is perhaps more
		natural.

		We will use this notation primarily for several reasons:

		  a)  The specification of semantics is easier.

			It is clearer to read:

			<FORM:a> + <TERM:b>  ->  <FORM:  a long semantic
							 expression involving
							 a and b	      >

			than to read

			<FORM:  a long semantic
				expression involving
				a and b		   >   ::=  <FORM:a> + <TERM:b>

			The "->" notation shows the intended
			syntax most clearly.  It keeps all the parts-of-speech
			closer together, and so the rule looks almost as simple
			as the same rule devoid of semantic specification.

		  b)  The specification of operators, including their operand
		      datatypes, reads most naturally in this notation.

			For example, we express that "+" works both on integers
			and on reals, via:

				int  +  int	->	int

				real +  real	->	real

			This reads more like a chemistry or physics equation,
			where time proceeds left-to-right.

			We will ultimately use grammars like this one to
			maintain datatype consistency in programs.

		  c)  The righthand phrase can be replaced by a program.

			For example, consider the following rule:

				SHOW  <FORM:a>  .     ->     WRITE(a);

			This rule does not rewrite the "SHOW <FORM> ."
			to anything.  It actually does something.  Whenever
			"SHOW <FORM> ." is seen (as though it were about to be
			rewritten), this rule causes the <FORM>'s INT value to
				appear on the terminal immediately.

				This is how the syntax parsing phase ends, and the
				results of all that parsing begin to blossom.

	  Independent of the two notations "::=" and "->", we call one of the
	  rule's phrases the "match" or "want" phrase, and the other phrase is
	called the "resulting" or "give" phrase.  That is, we always see one
	or the other of:

		give-phrase	::=	want-phrase

	or

		want-phrase	->	give-phrase

	With the new "->" notation, our numbers grammar now appears as:

		0	->	<DIGIT:0>
		1	->	<DIGIT:1>
		2	->	<DIGIT:2>
		3	->	<DIGIT:3>
		4	->	<DIGIT:4>
		5	->	<DIGIT:5>
		6	->	<DIGIT:6>
		7	->	<DIGIT:7>
		8	->	<DIGIT:8>
		9	->	<DIGIT:9>

		<DIGIT:d>		->	<NUMBER:d>
		<NUMBER:n> <DIGIT:d>	->	<NUMBER: 10*n+d >

	Our FORM grammar appears as:

		<NUMBER:n>		->	<ATOM: n >
		(  <FORM:f>  )		->	<ATOM: f >

		<ATOM:a>		->	<TERM: a >
		<TERM:t>  *  <ATOM:a>	->	<TERM: times(t,a) >

		<TERM:t>		->	<FORM: t >
		<FORM:f>  +  <TERM:t>	->	<FORM: sum(f,t) >
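	Read this way, each rule is just a want-phrase, a resulting part-of-
	speech, and a meaning function, so the whole grammar can be written
	down as data.  A sketch (Python, representation ours; terminals carry
	no semantics, so each meaning function takes one argument per
	nonterminal in its want-phrase):

```python
# Each rule: (want-phrase, give part-of-speech, meaning function).
RULES = [
    (["NUMBER"],            "ATOM", lambda n: n),
    (["(", "FORM", ")"],    "ATOM", lambda f: f),
    (["ATOM"],              "TERM", lambda a: a),
    (["TERM", "*", "ATOM"], "TERM", lambda t, a: t * a),
    (["TERM"],              "FORM", lambda t: t),
    (["FORM", "+", "TERM"], "FORM", lambda f, t: f + t),
]

# Applying the multiplication rule's meaning transformation by hand:
want, give, meaning = RULES[3]
print(give, meaning(2, 3))   # -> TERM 6
```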

    *	"Context Free" Grammar

		Most grammars, and certainly the ones we've seen so far,
		    have give phrases only of length one.	 (The give phrase
		    appears on the lefthand side of "::=", or the righthand side
		    of "->").

		    If all rules in a grammar have give phrases of length one,
		    then the entire grammar is said to be context-free.

    *	  "General Rewrite" Grammar

		    General rewrite grammars have some rules whose give phrases
		    are of length greater than one.	 An example of a general
		    rewrite rule is:

				<A>  <B>	    ->	<C>  <D>

		    The give phrase (to the right of "->") is of length two.
		    Following our conventions, this rule says that any occurrence
		    of the phrase <A> <B> can be rewritten to the phrase <C> <D>.

		The meaning transformation associated with a general rewrite
		rule defines a separate meaning for each part-of-speech in the
		give phrase.  For example, the rule:

			<A:a>  <B:b>	  ->	  <C: f(a,b) >  <D: g(a,b) >

		specifies that the meaning under <C> is f(a,b) and that the
		meaning under <D> is g(a,b).  A meaning transformation
		associates a meaning with each part-of-speech in the give
		phrase, and never with the phrase as a whole.
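		A single general-rewrite step can be sketched as follows
		(Python, ours; a token is a (part-of-speech, meaning) pair,
		and the rule is the <A:a> <B:b> example above):

```python
def apply_general_rule(phrase, f, g):
    # Find an occurrence of <A> <B>, erase it, and write <C> <D> in its
    # place, giving each give-phrase part-of-speech its own meaning.
    for i in range(len(phrase) - 1):
        if phrase[i][0] == "A" and phrase[i + 1][0] == "B":
            a, b = phrase[i][1], phrase[i + 1][1]
            return (phrase[:i]
                    + [("C", f(a, b)), ("D", g(a, b))]
                    + phrase[i + 2:])
    return phrase   # no occurrence: the rule does not apply

result = apply_general_rule([("A", 1), ("B", 2)],
                            lambda a, b: a + b,   # f(a,b)
                            lambda a, b: a * b)   # g(a,b)
print(result)   # -> [('C', 3), ('D', 2)]
```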

		Theories of computation show that general rewrite grammars
		are powerful enough to do everything that general programs
		can do.  Context free grammars cannot do everything.  Keep in
		mind that these theories presume that grammars have no
		associated semantics.

		However, with the luxury of programmable semantics, we can
		have context free grammars that are as powerful as any
		computational model.  We are allowed to resort to programming
		within the semantic specifications.  Context free grammars
		with semantics provide the most convenient model for language
		processing.

An Analogy - Grammar Rules Act As "Basis" Set Over Language Space

	A finite set of rules gives rise to a language which may have
	infinitely many legal phrases.  This is analogous to ~linear ~spaces,
	where a finite number of "basis vectors" provide the framework
	from which to build the entire, infinite space.

	It is fortunate that a linear space can be characterized by a finite
	set of basis vectors.  Any one of an infinite number of members in
	the space can be represented with only a finite number of terms.
	This makes the entire linear space manageable.  Transformations
	can be defined in terms of only the basis vectors.  Such
	transformations take only finite specification, and yet operate on
	infinitely many possible inputs.

	Grammar rules are the basis elements over a language space.  Any
	one of the infinitely many legal phrases can be decomposed in terms
	of only a finite number of applications of a finite number of rules.
	That decomposition is called the derivation.  Even though we have
	specified meaning transformations only on a rule-by-rule basis, the
	composition of those transformations, as dictated by the derivation,
	gives a complete meaning for the given input string of characters.
	As with linear spaces, we have defined meaning transformations solely
	in terms of the basis elements, that is, in terms of the grammar rules.

		BOX:	How are linear spaces similar to language spaces?
		BOX:	What advantage does each offer?