You can find a repository of good solutions to the book exercises here.
Building Abstractions with Procedures
The Elements of Programming
A programming language is our mental framework for organising ideas about processes. It provides three mechanisms for combining simple ideas into more complex ones:
 primitive expressions, which represent the simplest entities the language is concerned with,
 means of combination, by which compound elements are built from simpler ones, and
 means of abstraction, by which compound elements can be named and manipulated as units.
Expressions
Expressions such as these, formed by delimiting a list of expressions within parentheses in order to denote procedure application, are called combinations. The leftmost element in the list is called the operator, and the other elements are called operands. The value of a combination is obtained by applying the procedure specified by the operator to the arguments that are the values of the operands.
Placing the operator to the left of the operands is called prefix notation.
Let’s take a look at the nesting of expressions:
If we align the operands vertically as above, we pretty-print our code.
Naming and the Environment
Every programming language uses names to identify a variable whose value is some object. In the Scheme dialect of Lisp we use define for this. In Lisp, every expression has a value.
Lisp programmers know the value of everything but the cost of nothing (Alan Perlis)
Here is an example of how to use define:
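The stripped code here is presumably along the lines of the book's own naming examples:

```scheme
(define size 2)        ; associate the name size with the value 2
(define pi 3.14159)
(define radius 10)

(* pi (* radius radius)) ; names can now be used in combinations
```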
In order to keep track of the name-object pairs, the interpreter maintains a memory called the (global) environment.
Evaluating Combinations
Let us consider the following recursive evaluation rule:
To evaluate a combination, do the following:
 Evaluate the subexpressions of the combination.
 Apply the procedure that is the value of the leftmost subexpression (the operator) to the arguments that are the values of the other subexpressions (the operands).
Hence, the following code
can be represented in the following tree structure:
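The combination discussed at this point in the book is presumably the nested example:

```scheme
(* (+ 2 (* 4 6))   ; inner combinations are evaluated first,
   (+ 3 5 7))      ; their values percolate up to the outer *
```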
This “percolating upwards” is called tree accumulation. This evaluation rule does not apply to so-called special forms, such as define, each of which has its own evaluation rule.
Compound Procedures
Any programming language must have the following elements:
 Numbers and arithmetic operations are primitive data and procedures.
 Nesting of combinations provides a means of combining operations.
 Definitions that associate names with values provide a limited means of abstraction.
Next, we need procedure definitions, which open up a whole new realm of possibilities.
Let’s define a compound procedure called square:
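This is the book's definition:

```scheme
(define (square x) (* x x)) ; "to square something, multiply it by itself"
```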
Now we can easily define another procedure that makes use of square:
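The book's procedure, with a sample call:

```scheme
(define (sum-of-squares x y)
  (+ (square x) (square y)))

(sum-of-squares 3 4)
```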
which evaluates to 25. We can take this even further:
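The book's next step uses sum-of-squares as a building block:

```scheme
(define (f a)
  (sum-of-squares (+ a 1) (* a 2)))

(f 5)
```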
which gives us 136.
The Substitution Model for Procedure Application
Let us consider the combination from above to illustrate the substitution model:
NB: This is not how the interpreter really works, as we’ll see later. The substitution model serves as an entry point for thinking about procedure application.
Applicative vs. Normal Order
The “first evaluate the arguments and then apply the procedure” way of doing things that we used above (applicative-order evaluation) is not the only way.
The other evaluation model is the “fully expand and then reduce” model, which is called normal-order evaluation and is illustrated below:
Conditional Expressions and Predicates
Often, we want to do different things depending on the result of a test (case analysis). In Lisp we use cond to do that. The first expression in each pair is called the predicate (either true or false) and the second one is the consequent expression (the value returned if the predicate is true).
Of course, we should also be able to construct compound predicates with logical composition operations (and, or, not), not purely numerical ones:
Exercise 1.1
Below is a sequence of expressions. What is the result printed by the interpreter in response to each expression? Assume that the sequence is to be evaluated in the order in which it is presented.
Exercise 1.2
Translate the following expression into prefix form:
\[\frac{5+4+\left(2-\left(3-\left(6+\frac{4}{5}\right)\right)\right)}{3(6-2)(2-7)}\]
Exercise 1.3
Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.
Exercise 1.4
Observe that our model of evaluation allows for combinations whose operators are compound expressions. Use this observation to describe the behavior of the following procedure:
Exercise 1.5
Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order evaluation or normal-order evaluation. He defines the following two procedures:
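These are the exercise's definitions, together with the test expression:

```scheme
(define (p) (p))            ; calling (p) never terminates

(define (test x y)
  (if (= x 0)
      0
      y))

(test 0 (p))                ; the expression Ben evaluates
```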
Using normal-order evaluation, the last expression evaluates to 0, as the infinite-loop-producing procedure p is never evaluated. This is not true for applicative-order evaluation, where the arguments are evaluated first; here, the process ends in an infinite loop.
Example: Square Roots by Newton’s Method
There is a difference between a mathematical function for a square root (which can be used to recognise a square root or derive some interesting insights about it) and a procedure to generate a square root.
For generating square roots, we can use Newton’s method of approximation:
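The book's square-root program, reproduced here for reference:

```scheme
(define (square x) (* x x))
(define (average x y) (/ (+ x y) 2))

(define (improve guess x)            ; average the guess with x/guess
  (average guess (/ x guess)))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (sqrt x)
  (sqrt-iter 1.0 x))
```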
The sqrt-iter procedure also underlines that iteration can be achieved using no special construct, only the ability to call a procedure.
Exercise 1.6
Alyssa P. Hacker doesn’t see why if needs to be provided as a special form. “Why can’t I just define it as an ordinary procedure in terms of cond?” she asks. Alyssa’s friend Eva Lu Ator claims this can indeed be done, and she defines a new version of if:
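Eva's definition from the exercise:

```scheme
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))
```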
Now, Alyssa wants to use new-if for the square-root program:
What happens when Alyssa attempts to use this to compute square roots? Explain.
The interpreter returns the following error message:
;Aborting!: maximum recursion depth exceeded
This is due to the fact that the new-if procedure does not share the property of the if special form of only evaluating the consequent when the predicate evaluates to #t. Hence, we get infinite recursion whenever we call new-if and one of its consequents is a recursive call.
Exercise 1.7
The good-enough? test used in computing square roots will not be very effective for finding the square roots of very small numbers. Also, in real computers, arithmetic operations are almost always performed with limited precision. This makes our test inadequate for very large numbers. Explain these statements, with examples showing how the test fails for small and large numbers. An alternative strategy for implementing good-enough? is to watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess. Design a square-root procedure that uses this kind of end test. Does this work better for small and large numbers?
Exercise 1.8
Newton’s method for cube roots is based on the fact that if \(y\) is an approximation to the cube root of \(x\), then a better approximation is given by the value
\[\frac{x/y^{2}+2y}{3}\]
Use this formula to implement a cube-root procedure analogous to the square-root procedure.
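A sketch of a solution, following the structure of the square-root program and using the relative-change end test from Exercise 1.7 (the names here are my own):

```scheme
(define (square x) (* x x))

(define (cube-root x)
  (define (improve guess)                    ; (x/y^2 + 2y) / 3
    (/ (+ (/ x (square guess)) (* 2 guess)) 3))
  (define (good-enough? guess)               ; stop when the change is tiny
    (< (abs (- (improve guess) guess))
       (* 0.001 (abs guess))))
  (define (iter guess)
    (if (good-enough? guess)
        guess
        (iter (improve guess))))
  (iter 1.0))
```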
Procedures as Black-Box Abstractions
A procedure definition binds its formal parameters, making them bound variables. Variables that are not bound are free. The set of expressions for which a binding defines a name is called the scope of that name.
Often it can be useful to “hide” or localise the subprocedures of a given procedure by utilising what is called a block structure. In the case of our sqrt function, we could write:
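This is the book's block-structured version (it assumes the square and average helpers from before):

```scheme
(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
```

Note that x no longer needs to be passed around: the internal procedures simply refer to it.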
As can be seen above, x is a free variable in the internal procedure definitions. This discipline is called lexical scoping, which the authors define as follows:
Lexical scoping dictates that free variables in a procedure are taken to refer to bindings made by enclosing procedure definitions; that is, they are looked up in the environment in which the procedure was defined.
Procedures and the Processes They Generate
Our situation is now analogous to someone who knows the rules of how pieces move in chess but knows nothing of openings, tactics or strategy. We don’t know any patterns yet.
A procedure is a pattern for the local evolution of a computational process. It specifies how each stage of the process is built upon the previous stage
Linear Recursion and Iteration
Consider the factorial function:
$n!=n⋅(n−1)⋅(n−2)⋯3⋅2⋅1$
Another way to write this is:
$n!=n⋅[(n−1)⋅(n−2)⋯3⋅2⋅1]=n⋅(n−1)!$
From the latter, we can define the following procedure to generate the factorial of \(n\):
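This is the book's linearly recursive definition:

```scheme
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
```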
The authors visualise the resulting recursion of \(6!\) as follows:
We can also iterate by defining a counter that increases by one each step and is multiplied with the product of the last iteration. So, \(n!\) is the value of the product when the counter exceeds \(n\).
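In the book's code, this iterative version reads:

```scheme
(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)  ; accumulate the running product
                 (+ counter 1)        ; advance the counter
                 max-count)))
```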
This can be visualised as follows:
Here some important distinctions are to be made. The authors clarify that a recursive process is different from a recursive procedure:
When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written.
Scheme is tail-recursive, i.e. it executes an iterative process in constant space, even if the process is described by a recursive procedure. This means that in Scheme we don’t need any special iteration constructs such as for, while, until etc.; they are only useful as syntactic sugar.
Exercise 1.9
Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its argument by 1.
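The two definitions from the exercise (note that they shadow the built-in +); the first generates a recursive process, the second an iterative one:

```scheme
(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))  ; deferred inc operations pile up

(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b)))) ; tail call, constant space
```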
Exercise 1.10
The following procedure computes a mathematical function called Ackermann’s
function. What are the values of the expression below the procedure definition.
Also, give concise mathematical deﬁnitions for the functions computed by the
procedures f
, g
, and h
for positive integer values of \(n\). For
example, (k n)
computes \(5n^2\).
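The exercise's definitions, for reference:

```scheme
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1) (A x (- y 1))))))

(define (f n) (A 0 n))   ; computes 2n
(define (g n) (A 1 n))   ; computes 2^n
(define (h n) (A 2 n))   ; computes 2^2^...^2 (n twos)
(define (k n) (* 5 n n))
```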
Tree Recursion
To understand tree recursion, consider the Fibonacci sequence:
$0,1,1,2,3,5,8,13,21,...$
In general, the Fibonacci numbers can be defined by the rule:
\[Fib(n)=\begin{cases}0 & \text{if } n=0\\1 & \text{if } n=1\\Fib(n-1)+Fib(n-2) & \text{otherwise}\end{cases}\]
Let’s translate that into Lisp:
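The book's tree-recursive translation:

```scheme
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2)))))) ; two recursive calls per node
```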
This is pretty bad, as the number of times the procedure computes (fib 1) or (fib 0) is precisely Fib\((n + 1)\), e.g. in the case above exactly eight times. Thus, the process uses a number of steps that grows exponentially with the input. The space, however, only grows linearly with the input, as we only need to keep track of the nodes above the current one at any point during the computation.
Let’s define an iterative procedure to do the same thing:
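The book's iterative version keeps two running state variables:

```scheme
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)  ; invariant: a = Fib(k+1), b = Fib(k)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
```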
The authors summarise:
The difference in number of steps required by the two methods — one linear in n, one growing as fast as Fib(n) itself — is enormous, even for small inputs.
However, tree-recursive processes aren’t useless. Often, they are easier to design and understand. Apparently, the Scheme interpreter itself evaluates expressions using a tree-recursive process.
Example: Counting Change
Exercise 1.11
A function \(f\) is defined by the rule that
\[f(n)=\begin{cases}n & \text{if } n<3\\f(n-1)+2f(n-2)+3f(n-3) & \text{if } n \geq 3\end{cases}\]
Write a procedure that computes \(f\) by means of a recursive process. Write a procedure that computes \(f\) by means of an iterative process.
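One common solution sketch (procedure names are my own); the recursive version mirrors the rule directly, the iterative one carries the last three values as state:

```scheme
(define (f-rec n)
  (if (< n 3)
      n
      (+ (f-rec (- n 1))
         (* 2 (f-rec (- n 2)))
         (* 3 (f-rec (- n 3))))))

(define (f-iter a b c count)   ; a = f(k), b = f(k-1), c = f(k-2)
  (if (= count 0)
      c
      (f-iter (+ a (* 2 b) (* 3 c)) a b (- count 1))))

(define (f n)
  (if (< n 3)
      n
      (f-iter 2 1 0 n)))
```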
Exercise 1.12
The following pattern of numbers is called Pascal’s triangle:
The numbers at the edge of the triangle are all 1, and each number inside the triangle is the sum of the two numbers above it. Write a procedure that computes elements of Pascal’s triangle by means of a recursive process.
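A common recursive solution sketch, indexing rows and positions from 0 (the naming is my own):

```scheme
(define (pascal row col)
  (if (or (= col 0) (= col row)) ; the edges are all 1
      1
      (+ (pascal (- row 1) (- col 1)) ; sum of the two numbers above
         (pascal (- row 1) col))))
```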
Exercise 1.13

Proposition
For all \(n \in \mathbb{N}\) let \(P(n)\) be the proposition:
\(Fib(n)=\frac{\varphi^{n}-\psi^{n}}{\sqrt{5}}\)

Basis for induction
\(P(0)\) is true, as this shows:
\(\frac{\varphi^{0}-\psi^{0}}{\sqrt 5}=\frac{1-1}{\sqrt 5}=0=Fib(0)\) 
Induction hypothesis
\(\forall 0 \le j \le k + 1: Fib(j) = \dfrac {\varphi^j - \psi^j} {\sqrt 5}\)
Thus, we need to show:
\(Fib(k + 2) = \dfrac {\varphi^{k + 2} - \psi^{k + 2} } {\sqrt 5}\)

Induction step
We have the following two identities (both \(\varphi\) and \(\psi\) are roots of \(x^{2}=x+1\)):
\[\varphi^{2}=\left(\frac{1+\sqrt 5}{2}\right)^{2}=\frac{1}{4}\left(6+2\sqrt 5\right)=\frac{3+\sqrt 5}{2}=1+\varphi \qquad\text{and likewise}\qquad \psi^{2}=1+\psi\]
Hence:
\[\varphi^{k+2}-\psi^{k+2}=(1+\varphi)\varphi^{k}-(1+\psi)\psi^{k}\]
\(= (\varphi^{k}-\psi^{k})+(\varphi^{k+1}-\psi^{k+1})\)
\[=\sqrt 5\,(Fib(k)+Fib(k+1))=\sqrt 5\,Fib(k+2)\]
The result follows by the principle of mathematical induction.
Therefore:
\(\forall n \in \mathbb{N}: Fib(n) = \frac {\varphi^n - \psi^n} {\sqrt 5}\)
Orders of Growth
Let \(R(n)\) be the amount of resources the process requires for a problem of size \(n\).
The authors make some further important definitions:
We say that \(R(n)\) has order of growth \(\theta(f(n))\), written \(R(n) = \theta(f(n))\) (pronounced “theta of \(f(n)\)”), if there are positive constants \(k_1\) and \(k_2\) independent of \(n\) such that \(k_1f(n) \leq R(n) \leq k_2 f(n)\) for any sufficiently large value of \(n\). (In other words, for large \(n\), the value \(R(n)\) is sandwiched between \(k_1 f (n)\) and \(k_2 f (n)\).)
Exercise 1.14
Draw the tree illustrating the process generated by the aforementioned count-change procedure in making change for 11 cents. What are the orders of growth of the space and number of steps used by this process as the amount to be changed increases?
The space requirement of cc is proportional to the maximum height of the recursion tree, because at any given point in the recursive process the interpreter must only keep track of the nodes that lead to the current root. Since the maximum height of the tree is dominated by the branch that contains the most successive calls, i.e. the leftmost one in the graph, it grows linearly with \(n\) (the amount). In other words, \(\theta(n)\).
The time requirement can be deduced as follows:
(cc amount 1) = \(\theta(n)\)
(cc amount 2) = (cc amount 1) + (cc (- amount 5) 2)
Here, we have \(\theta(n^{2})\) when kinds-of-coins is 2. Hence, we get \(\theta(n^{k})\) (\(k\) being kinds-of-coins) for (cc amount kinds-of-coins), since each further kind of coin calls the branch below it \(\theta(n)\) times.
Exercise 1.15
The sine of an angle (specified in radians) can be computed by making use of the approximation \(\sin x \approx x\) if \(x\) is sufficiently small, and the trigonometric identity
\(\sin x=3 \sin \dfrac{x}{3}-4 \sin ^{3} \dfrac{x}{3}\)
to reduce the size of the argument of \(\sin\). (For purposes of this exercise an angle is considered “sufficiently small” if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures:
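The exercise's procedures are:

```scheme
(define (cube x) (* x x x))

(define (p x)                 ; 3 sin(x/3) - 4 sin^3(x/3), with x = sin(angle/3)
  (- (* 3 x) (* 4 (cube x))))

(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle                   ; small-angle approximation
      (p (sine (/ angle 3.0)))))
```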

How many times is the procedure p applied when (sine 12.15) is evaluated? The procedure is applied five times.

What is the order of growth in space and number of steps (as a function of \(a\), or angle) used by the process generated by the sine procedure when (sine a) is evaluated? The basic intuition is that sine is applied as many times as angle can be divided by three until the absolute result is smaller than 0.1. To describe this mathematically, we need the notion of a ceiling (as we want to output an integer). So, we can write
\[\frac{12.15}{3^{n}}<0.1 \iff 12.15\times 3^{-n}<0.1\]
\[\log 12.15 - n\log 3<\log 0.1\]
Thus, we can write the number of required computations as
\(\Bigg\lceil\dfrac{\log\dfrac{12.15}{0.1}}{\log{3}}\Bigg\rceil = 5\)
or more generally
\(\Bigg\lceil\dfrac{\log\dfrac{a}{0.1}}{\log{3}}\Bigg\rceil\)
Hence, the order of growth in space (and in the number of steps) is \(\theta(\log a)\).
Exponentiation
This is a recursive definition of exponentiation \(b^{n}\) for a base \(b\) and non-negative integer exponent \(n\):
\[b^{n}=b \cdot b^{n-1}, \qquad b^{0}=1\]
In Scheme this linearly recursive process looks as such:
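The book's linearly recursive procedure:

```scheme
(define (expt b n)
  (if (= n 0)
      1
      (* b (expt b (- n 1)))))
```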
This requires \(\theta(n)\) steps and \(\theta(n)\) space. The corresponding iterative definition of the process would be:
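The book's iterative version accumulates the result in a product variable:

```scheme
(define (expt b n)
  (expt-iter b n 1))

(define (expt-iter b counter product)
  (if (= counter 0)
      product
      (expt-iter b
                 (- counter 1)
                 (* b product)))) ; tail call: constant space
```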
This requires \(\theta(n)\) steps and \(\theta(1)\) space. We can be faster, however, if we make use of the following:
\[b^{2}=b \cdot b,\qquad b^{4}=b^{2} \cdot b^{2},\qquad b^{8}=b^{4} \cdot b^{4}\]
We can thus amend our process such that it runs even faster:
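The book's successive-squaring version (square and even? as before):

```scheme
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2)))) ; halve the exponent
        (else (* b (fast-expt b (- n 1))))))

(define (even? n)
  (= (remainder n 2) 0))
```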
How fast exactly? Well, computing \(b^{2n}\) using fast-expt requires only one more multiplication than computing \(b^{n}\).
Exercise 1.16
Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that \((b^{n/2})^{2} = (b^{2})^{n/2}\), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product \(ab^n\) is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
Exercise 1.17
The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication procedure (in which it is assumed that our language can only add, not multiply) is analogous to the expt procedure:
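The exercise's repeated-addition procedure (which, like the book, shadows the built-in *):

```scheme
(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1))))) ; a*b = a + a*(b-1)
```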
This algorithm takes a number of steps that is linear in b. Now suppose we include, together with addition, operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.
Exercise 1.18
Using the result of the previous two exercises, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a logarithmic number of steps.
Exercise 1.19
There is a clever algorithm for computing the Fibonacci numbers in a logarithmic number of steps. Recall the transformation of the state variables \(a\) and \(b\) in the fib-iter process of earlier: \(a \rightarrow a + b\) and \(b \rightarrow a\). Call this transformation \(T\), and observe that applying \(T\) over and over again \(n\) times, starting with 1 and 0, produces the pair \(Fib(n + 1)\) and \(Fib(n)\). In other words, the Fibonacci numbers are produced by applying \(T^{n}\), the \(n^{th}\) power of the transformation \(T\), starting with the pair (1, 0). Now consider \(T\) to be the special case of \(p = 0\) and \(q = 1\) in a family of transformations \(T_{pq}\), where \(T_{pq}\) transforms the pair (a, b) according to \(a \rightarrow bq + aq + ap\) and \(b \rightarrow bp + aq\). Show that if we apply such a transformation \(T_{pq}\) twice, the effect is the same as using a single transformation \(T_{p'q'}\) of the same form, and compute \(p'\) and \(q'\) in terms of \(p\) and \(q\). This gives us an explicit way to square these transformations, and thus we can compute \(T^{n}\) using successive squaring, as in the fast-expt procedure. Put this all together to complete the following procedure, which runs in a logarithmic number of steps:
The intuition here is the following. Observe that we can write the linear transformation \(T_{pq}\) as a matrix:
\[\begin{pmatrix} p+q & q \\ q & p \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix}=\begin{pmatrix} bq+aq+ap \\ bp+aq \end{pmatrix}\]
Now, we are told, we can just apply the matrix on the left twice (square it) such that we get a single transformation \(T_{p'q'}\):
\[\begin{pmatrix} p+q & q \\ q & p \end{pmatrix}^{2}=\begin{pmatrix} p'+q' & q' \\ q' & p' \end{pmatrix} \quad\text{with}\quad q'=q^{2}+2pq,\qquad p'=q^{2}+p^{2}\]
Greatest Common Divisors
The greatest common divisor (GCD) of two integers \(a\) and \(b\) is deﬁned to be the largest integer that divides both \(a\) and \(b\) with no remainder. For example, the GCD of 16 and 28 is 4.
Euclid’s Algorithm is really smart. Let r be the remainder of the division of a by b. Then GCD(a, b) = GCD(b, r). In Scheme this looks as follows:
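The book's procedure:

```scheme
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b)))) ; GCD(a, b) = GCD(b, a mod b)
```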
This code represents an iterative process whose number of steps grows as the logarithm of the numbers involved.
Exercise 1.20
The process that a procedure generates is of course dependent on the rules used by the interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to interpret this procedure using normal-order evaluation, as discussed before. (The normal-order-evaluation rule for if is described in Exercise 1.5.) Using the substitution method (for normal order), illustrate the process generated in evaluating (gcd 206 40) and indicate the remainder operations that are actually performed. How many remainder operations are actually performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?
Testing for Primality
This first procedure leverages the fact that \(n\) is prime if and only if \(n\) is its smallest divisor:
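The book's procedure finds the smallest divisor by trial division up to \(\sqrt{n}\):

```scheme
(define (smallest-divisor n)
  (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n) ; no divisor below sqrt(n) => n is prime
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b)
  (= (remainder b a) 0))

(define (prime? n)
  (= n (smallest-divisor n)))
```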
The steps required by this procedure have order of growth \(\theta(\sqrt{n})\).
Another procedure leverages Fermat’s little theorem, which is worth stating:
If \(n\) is a prime number and \(a\) is any positive integer less than \(n\), then \(a\) raised to the \(n^{th}\) power is congruent to \(a\) modulo \(n\).
NB: Two numbers are congruent modulo \(n\) if they both have the same remainder when divided by \(n\). The remainder of a number \(a\) when divided by \(n\) is also referred to as the remainder of \(a\) modulo \(n\), or simply as \(a\) modulo \(n\).
This is the code in Lisp:
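The book's Fermat test (square as before; random is a primitive in MIT Scheme):

```scheme
(define (expmod base exp m)         ; base^exp mod m, by successive squaring
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))

(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))           ; does a^n mod n equal a?
  (try-it (+ 1 (random (- n 1)))))
```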
Exercise 1.21
Use the smallest-divisor procedure to find the smallest divisor of each of the following numbers: 199, 1999, 19999.
Exercise 1.22
Exercise 1.23
Skipped, as modern processors are too fast to yield meaningful data to be interpreted here.
Exercise 1.24
See above. Now, this also explains why I did not get meaningful data above, where I probably should have used the slower prime? from earlier in the section.
Exercise 1.25
until 1.30
Exercise 1.26
The problem with Louis Reasoner’s proposed change is that the explicit multiplication leads to the evaluation of two expmod calls when only one is really needed. Hence, the proposed change produces a \(\theta(n)\) process.
Exercise 1.27
skipped
Exercise 1.28
skipped. Maybe I’ll revisit this when I feel like prime numbers again.
Formulating Abstractions with Higher-Order Procedures
Assigning names to common patterns is very useful. We call procedures that manipulate procedures higher-order procedures. Those higher-order procedures “vastly increase the expressive power of our language”.
Procedures as Arguments
Consider the following three procedures:
and finally, a procedure that computes the sum of a sequence of terms in the series
\[\frac{1}{1\cdot 3}+\frac{1}{5\cdot 7}+\frac{1}{9\cdot 11}+\dots\]
which converges (very slowly) to \(\pi/8\):
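The three procedures from the book are:

```scheme
(define (cube x) (* x x x))

(define (sum-integers a b)
  (if (> a b)
      0
      (+ a (sum-integers (+ a 1) b))))

(define (sum-cubes a b)
  (if (> a b)
      0
      (+ (cube a) (sum-cubes (+ a 1) b))))

(define (pi-sum a b)
  (if (> a b)
      0
      (+ (/ 1.0 (* a (+ a 2)))  ; term: 1/(a*(a+2))
         (pi-sum (+ a 4) b)))) ; next: a+4
```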
Looking at these procedures, it becomes obvious that we can abstract a general sum procedure:
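The book's abstraction takes the term and the successor function as procedural arguments:

```scheme
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))
```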
which we can then use to redo the procedures from before:
and
Further, we can now use it freely as a building block to design more involved procedures such as one that numerically approximates an integral according to the formula
\[\int_{a}^{b} f=\left[f\left(a+\frac{dx}{2}\right)+f\left(a+dx+\frac{dx}{2}\right)+f\left(a+2dx+\frac{dx}{2}\right)+\dots\right]dx\]
for small values of \(dx\). The procedure would look as follows:
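The book's integral procedure, built on top of sum:

```scheme
(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b) ; sample f at midpoints of dx strips
     dx))
```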
Exercise 1.29
Exercise 1.30
Write an iterative sum procedure.
Exercise 1.31

Write an analogous procedure called product that returns the product of the values of a function at points over a given range. Show how to define factorial in terms of product. Also use product to compute approximations to \(\pi\) using the formula
\[\frac{\pi}{4}=\frac{2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdots}{3\cdot 3\cdot 5\cdot 5\cdot 7\cdot 7\cdots}\]

If your product procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.
Exercise 1.32
 Show that sum and product are both special cases of a still more general notion called accumulate that combines a collection of terms, using some general accumulation function: (accumulate combiner null-value term a next b). It takes as arguments the same term and range specifications as sum and product, together with a combiner procedure (of two arguments) that specifies how the current term is to be combined with the accumulation of the preceding terms and a null-value that specifies what base value to use when the terms run out. Write accumulate and show how sum and product can both be defined as simple calls to accumulate.
 If your accumulate procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.
Exercise 1.33
You can obtain an even more general version of accumulate by introducing the notion of a filter on the terms to be combined. That is, combine only those terms derived from values in the range that satisfy a specified condition. The resulting filtered-accumulate abstraction takes the same arguments as accumulate, together with an additional predicate of one argument that specifies the filter. Write filtered-accumulate as a procedure:
Show how to express the following using filtered-accumulate:
Constructing Procedures Using lambda
lambda basically allows the programmer to specify trivial procedures without naming them. Hence pi-sum could be rewritten as:
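The book's rewritten version passes anonymous term and next procedures to sum:

```scheme
(define (pi-sum a b)
  (sum (lambda (x) (/ 1.0 (* x (+ x 2))))
       a
       (lambda (x) (+ x 4))
       b))
```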
But we not only need nameless throwaway procedures but also (local) variables that behave differently from the ones introduced thus far. Say we wish to compute the following function \(f\):
\[\begin{aligned}a &= 1+xy\\ b &= 1-y\\ f(x,y) &= xa^{2}+yb+ab\end{aligned}\]
We can use lambda like so:
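The book's lambda version binds a and b by immediately applying an anonymous procedure (square as defined earlier):

```scheme
(define (f x y)
  ((lambda (a b)
     (+ (* x (square a))
        (* y b)
        (* a b)))
   (+ 1 (* x y))   ; a = 1 + xy
   (- 1 y)))       ; b = 1 - y
```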
This is so useful that there is a special form called let:
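With let, the same function reads:

```scheme
(define (f x y)
  (let ((a (+ 1 (* x y)))
        (b (- 1 y)))
    (+ (* x (square a))
       (* y b)
       (* a b))))
```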
This is how the authors describe its general form:
which can be thought of as:
A let expression is simply syntactic sugar for the underlying lambda application.
A useful example: let’s stipulate that x (outside of the let) is 5. Then, the following expression evaluates to 38.
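This is the book's example:

```scheme
(+ (let ((x 3))
     (+ x (* x 10)))  ; the inner x is 3, so this is 33
   x)                 ; the outer x is still 5: 33 + 5 = 38
```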
Similarly, if x in the next expression is given as 2, the following expression evaluates to 12.
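The book's second example, showing that the let bindings are computed in the enclosing scope:

```scheme
(let ((x 3)
      (y (+ x 2)))  ; y uses the outer x (= 2), so y = 4
  (* x y))          ; inner x = 3, hence 3 * 4 = 12
```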
Procedures as General Methods
To find the roots of a continuous function for which we know two values with opposite signs, we can utilise the bisection method (or half-interval method). It is implemented in the following procedure:
Exercise 1.35
Show that the golden ratio \(\phi\) (Section 1.2.2) is a fixed point of the transformation \(x \mapsto 1 + \frac{1}{x}\), and use this fact to compute \(\phi\) by means of the fixed-point procedure.
Exercise 1.36
Modify fixed-point so that it prints the sequence of approximations it generates, using the newline and display primitives shown in Exercise 1.22. Then find a solution to \(x^x = 1000\) by finding a fixed point of \(x \mapsto \log(1000)/\log(x)\). (Use Scheme’s primitive log procedure, which computes natural logarithms.) Compare the number of steps this takes with and without average damping. (Note that you cannot start fixed-point with a guess of 1, as this would cause division by \(\log(1) = 0\).)
Exercise 1.37
Consider the infinite continued fraction:
\[f=\cfrac{N_1}{D_1+\cfrac{N_2}{D_2+\cfrac{N_3}{D_3+\cdots}}}\]
and its k-term finite approximation:
\[\cfrac{N_1}{D_1+\cfrac{N_2}{\ddots+\cfrac{N_k}{D_k}}}\]
Define a procedure cont-frac such that evaluating (cont-frac n d k) computes the value of the k-term finite continued fraction. Check your procedure by approximating \(\frac{1}{\phi}\) for successive values of k. How large must you make k in order to get an approximation that is accurate to 4 decimal places?
Exercise 1.38
In 1737, the Swiss mathematician Leonhard Euler published a memoir De Fractionibus Continuis, which included a continued fraction expansion for \(e-2\), where \(e\) is the base of the natural logarithms. In this fraction, the \(N_{i}\) are all 1, and the \(D_{i}\) are successively 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, … Write a program that uses your cont-frac procedure from Exercise 1.37 to approximate \(e\), based on Euler’s expansion.
Exercise 1.39
Now, we use cont-frac to compute J.H. Lambert’s continued fraction representation of the tangent function:
Procedures as Returned Values
As a first example of a useful procedure returning another procedure, the authors mention average damping, a convergence acceleration technique:
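The book's definition returns a new procedure whose value at x is the average of x and f(x):

```scheme
(define (average x y) (/ (+ x y) 2))

(define (average-damp f)
  (lambda (x) (average x (f x))))
```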
Using this, you can reformulate the sqrt procedure from above in a very expressive form:
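In the book, assuming the fixed-point procedure from this section:

```scheme
(define (sqrt x)
  (fixed-point (average-damp (lambda (y) (/ x y))) ; damped y -> x/y
               1.0))
```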
The derivative of a function is defined as:
\[Dg(x)=\frac{g(x+dx)-g(x)}{dx}\]
In Scheme, the authors express this as follows:
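The book's deriv takes a procedure and returns its (approximate) derivative as a procedure:

```scheme
(define dx 0.00001)

(define (deriv g)
  (lambda (x) (/ (- (g (+ x dx)) (g x)) dx)))
```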
Now, you can express Newton’s method: a solution of \(g(x) = 0\) is a fixed point of the function \(x \mapsto f(x)\), where
\[f(x)=x-\frac{g(x)}{Dg(x)}\]
Again, in Scheme, you can express this like so:
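From the book, using deriv and fixed-point:

```scheme
(define (newton-transform g)
  (lambda (x) (- x (/ (g x) ((deriv g) x)))))

(define (newtons-method g guess)
  (fixed-point (newton-transform g) guess))
```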
This can be generalised even further by realising that both Newton’s method and the method using fixed-point were doing almost the same thing, i.e. both begin with a function and end with finding the fixed point of a transformation of that initial function. In Scheme:
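The book's general abstraction, with sqrt expressed both ways:

```scheme
(define (fixed-point-of-transform g transform guess)
  (fixed-point (transform g) guess))

(define (sqrt x)                           ; via average damping
  (fixed-point-of-transform
    (lambda (y) (/ x y)) average-damp 1.0))

(define (sqrt x)                           ; via Newton's method
  (fixed-point-of-transform
    (lambda (y) (- (square y) x)) newton-transform 1.0))
```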
Programming languages impose restrictions on the ways computational elements can be manipulated. Those elements to which the fewest restrictions are applied are called firstclass elements. They may:
 be named by variables
 be passed as arguments to procedures
 be returned as the results of the procedures
 be included in data structures
Lisp treats procedures as first-class elements, hence Lisp is a functional programming language. This, the authors claim, poses challenges for efficient implementation, but creates “enormous” (p. 103) gains in expressive power.
Exercise 1.40
Define a procedure cubic that can be used together with the newtons-method procedure in expressions of the form
(newtons-method (cubic a b c) 1)
to approximate zeros of the cubic \(x^{3}+ax^{2}+bx+c\).
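A sketch of a solution: cubic just returns the polynomial as a procedure of x.

```scheme
(define (cubic a b c)
  (lambda (x)
    (+ (* x x x)
       (* a x x)
       (* b x)
       c)))
```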
Exercise 1.41
Define a procedure double that takes a procedure of one argument as argument and returns a procedure that applies the original procedure twice. For example, if inc is a procedure that adds 1 to its argument, then (double inc) should be a procedure that adds 2. What value is returned by (((double (double double)) inc) 5)?
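A sketch of a solution; (double double) applies a procedure four times, and (double (double double)) sixteen times, so the expression returns 21:

```scheme
(define (double f)
  (lambda (x) (f (f x))))

(((double (double double)) inc) 5) ; inc applied 16 times: 5 + 16 = 21
```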
Exercise 1.42
Let \(f\) and \(g\) be two one-argument functions. The composition \(f\) after \(g\) is defined to be the function \(x \mapsto f(g(x))\). Define a procedure compose that implements composition. For example, if inc is a procedure that adds 1 to its argument, ((compose square inc) 6) evaluates to 49.
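A sketch of a solution:

```scheme
(define (compose f g)
  (lambda (x) (f (g x))))
```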
Exercise 1.43
If \(f\) is a numerical function and \(n\) is a positive integer, then we can form the \(n^{th}\) repeated application of \(f\), which is defined to be the function whose value at \(x\) is \(f(f(\dots(f(x))\dots))\). For example, if \(f\) is the function \(x \mapsto x + 1\), then the \(n^{th}\) repeated application of \(f\) is the function \(x \mapsto x + n\). If \(f\) is the operation of squaring a number, then the \(n^{th}\) repeated application of \(f\) is the function that raises its argument to the \(2^{n}\)th power. Write a procedure that takes as inputs a procedure that computes \(f\) and a positive integer \(n\) and returns the procedure that computes the \(n^{th}\) repeated application of \(f\). Your procedure should be able to be used as follows: ((repeated square 2) 5) evaluates to 625.
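A sketch of a solution, built on compose from Exercise 1.42:

```scheme
(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))
```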
Exercise 1.44
The idea of smoothing a function is an important concept in signal processing. If \(f\) is a function and \(dx\) is some small number, then the smoothed version of \(f\) is the function whose value at a point \(x\) is the average of \(f(x - dx)\), \(f(x)\), and \(f(x + dx)\). Write a procedure smooth that takes as input a procedure that computes \(f\) and returns a procedure that computes the smoothed \(f\). It is sometimes valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to obtain the n-fold smoothed function. Show how to generate the n-fold smoothed function of any given function using smooth and repeated from Exercise 1.43.
Exercise 1.45
Unfortunately, the average-damp
process does not work for fourth roots; a
single average damp is not enough to make a fixed-point search for \(y \mapsto
x/y^{3}\) converge. On the other hand, if we average damp twice (i.e., use the
average damp of the average damp of \(y \mapsto x/y^{3}\)) the fixed-point
search does converge. Do some experiments to determine how many average damps
are required to compute \(n^{th}\) roots as a fixed-point search based upon
repeated average damping of \(y \mapsto x/y^{n-1}\). Use this to implement
a simple procedure for computing \(n^{th}\) roots using fixed-point
,
average-damp
, and the repeated
procedure.
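A sketch of such a procedure. The claim that \(\lfloor \log_2 n \rfloor\) damps suffice is the usual outcome of the experiments the exercise asks for, stated here as an assumption:

```scheme
(define (compose f g) (lambda (x) (f (g x))))
(define (repeated f n)
  (if (= n 1) f (compose f (repeated f (- n 1)))))

(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? a b) (< (abs (- a b)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next) next (try next))))
  (try first-guess))

(define (average-damp f)
  (lambda (x) (/ (+ x (f x)) 2)))

;; Assumption from experiment: floor(log2 n) average damps are enough.
(define (nth-root x n)
  (fixed-point ((repeated average-damp (floor (/ (log n) (log 2))))
                (lambda (y) (/ x (expt y (- n 1)))))
               1.0))
```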
Exercise 1.46
Several of the numerical methods described in this chapter are instances of an
extremely general computational strategy known as iterative improvement.
Iterative improvement says that, to compute something, we start with an initial
guess for the answer, test if the guess is good enough, and otherwise improve
the guess and continue the process using the improved guess as the new guess.
Write a procedure iterative-improve
that takes two procedures as arguments: a
method for telling whether a guess is good enough and a method for improving a
guess. iterative-improve
should return as its value a procedure that takes a
guess as argument and keeps improving the guess until it is good enough. Rewrite
the sqrt
procedure and the fixed-point
procedure in terms of
iterative-improve
.
Building Abstractions with Data
The authors introduce some critical notions in the opening to chapter two:
 compound data is simply the result of combining data objects, e.g. combining a numerator and a denominator to represent a rational number
 closure is the notion that the “glue” used for combining data objects should allow for combining not only primitive data objects (such as integers) but compound data objects as well.
 compound data objects can serve as conventional interfaces for combining program modules
 symbolic expressions are data whose elementary parts can be any symbol rather than only numbers.
 data-directed programming is a technique that allows different data representations to be designed in isolation and then combined additively (i.e. without modification)
Introduction to Data Abstraction
The basic idea of data abstraction is to structure the programs that are to use compound data objects so that they operate on “abstract data.” That is, our programs should use data in such a way as to make no assumptions about the data that are not strictly necessary for performing the task at hand. (Abelson and Sussman 2002, 112)
Concrete data representations, on the other hand, are defined independently of the programs that use the data. The interface between abstract data and its concrete representations is a set of procedures, called selectors and constructors, that implement the abstract data in terms of its concrete representation.
In the case of rational numbers, a constructor (make-rat n d)
returns the
rational number whose numerator is the integer n
and whose denominator is the
integer d
. The selectors (numer x)
and (denom x)
return the numerator and
denominator respectively. We leave them undefined for now. If we had them,
however (wishful thinking), the following relations would allow us to do all
sorts of things with rational numbers:
As procedures, they look as follows:
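A sketch of the standard rational-number operations, written in terms of the (as yet undefined) make-rat, numer, and denom, following the book:

```scheme
;; Arithmetic on rationals, defined purely through the abstract
;; constructor and selectors.
(define (add-rat x y)
  (make-rat (+ (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))

(define (sub-rat x y)
  (make-rat (- (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))

(define (mul-rat x y)
  (make-rat (* (numer x) (numer y))
            (* (denom x) (denom y))))

(define (div-rat x y)
  (make-rat (* (numer x) (denom y))
            (* (denom x) (numer y))))

(define (equal-rat? x y)
  (= (* (numer x) (denom y))
     (* (numer y) (denom x))))
```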
Pairs
Pairs are the basic compound data structure provided by Lisp. They are constructed and selected from as follows:
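A minimal illustration: cons constructs a pair, while car and cdr select its two parts.

```scheme
(define x (cons 1 2))

(car x) ; 1
(cdr x) ; 2
```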
Now rational numbers can be easily represented:
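With pairs, the constructor and selectors become one-liners (this is the book's initial representation, before any sign or gcd normalization):

```scheme
;; Represent a rational number as a pair (numerator . denominator).
(define (make-rat n d) (cons n d))
(define (numer x) (car x))
(define (denom x) (cdr x))
```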
Exercise 2.1
Define a better version of make-rat
that handles both positive and negative
arguments. make-rat
should normalize the sign so that if the rational number
is positive, both the numerator and denominator are positive, and if the
rational number is negative, only the numerator is negative.
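One possible sketch: reduce by the gcd and attach the sign to the numerator.

```scheme
;; Normalize sign and reduce to lowest terms. The sign of the rational
;; is the sign of n*d; it is attached to the numerator only.
(define (make-rat n d)
  (let ((g (gcd (abs n) (abs d)))
        (sign (if (< (* n d) 0) - +)))
    (cons (sign (/ (abs n) g))
          (/ (abs d) g))))
```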
Abstraction Barriers
Exercise 2.2
Consider the problem of representing line segments in a plane. Each segment is
represented as a pair of points: a starting point and an ending point. Define a
constructor make-segment
and selectors start-segment
and end-segment
that
define the representation of segments in terms of points. Furthermore, a point
can be represented as a pair of numbers: the x coordinate and the y coordinate.
Accordingly, specify a constructor make-point
and selectors x-point
and
y-point
that define this representation. Finally, using your selectors and
constructors, define a procedure midpoint-segment that takes a line segment as
argument and returns its midpoint (the point whose coordinates are the average
of the coordinates of the endpoints). To try your procedures, you’ll need a way
to print points: