Exploring infinite sequences
I was recently watching John Hughes’s talk Why Functional Programming Matters. One thing that jumped out at me was his definition of the Newton-Raphson square root approximation algorithm using a lazy sequence of estimates. This is an example from his influential paper of the same name.
The iterative algorithm looks like this:
def sqrt(n)
  # terminate when two successive guesses differ by less than epsilon
  # Initial guess can be anything as long as abs(x - y) > epsilon
  epsilon = 0.0001
  x = 1.0
  y = x + 2.0 * epsilon
  while (x - y).abs > epsilon
    y = x
    x = (x + n/x)/2.0
  end
  x
end
Hughes describes this program as “indivisible”: the computation of the estimates is coupled with the iteration and with the selection of the final estimate.
What impressed me about his example of doing this lazily in Haskell was how simple it was:
sqrt n = limit eps (iterate next 1.0)
  where next x = (x + n/x) / 2
I wanted to learn more, so I found his paper, which has a more verbose version.
First, he defines a next function that produces the next estimate from the current one:
next n x = (x + n/x)/2
Then, he defines an infinite sequence of better and better estimates. If the first estimate is 1.0 and the estimation function is called f, the sequence is:
[1.0, f 1.0, f (f 1.0), f (f (f 1.0)), ...]
This sequence can be generated using the Haskell function iterate applied to the partially applied function next n:
iterate (next n) 1.0
This is fun to play with. Here are the first five approximations of the square root of 1000, starting with an estimate of 1.0:
take 5 $ iterate (next 1000) 1.0
[1.0,500.5,251.249000999001,127.61455816345908,67.72532736082603]
Next, Hughes defines a limit function (called within in the original paper) that compares the difference between the first two estimates in the sequence to the tolerance. If they are close enough, it returns the second estimate; otherwise, it calls itself recursively with the rest of the sequence (which, again, is infinite). As long as the estimates converge, this will eventually return a result.
-- (x:y:ys) is pattern matching (destructuring):
-- `x` is the first element, `y` is the second element, and `ys` is the rest
limit epsilon (x:y:ys) =
  if abs (x - y) <= epsilon then
    y
  else
    limit epsilon (y:ys)
Putting it together, square root can be defined as:
epsilon = 0.0001
sqrt n = limit epsilon (iterate (next n) 1.0)
The really cool thing about this is that it separates the generation of the sequence from the selection of the final value. The limit function can be reused for other approximation problems with the same shape, such as calculating the derivative:
deriv f x =
  limit epsilon (map slope (iterate (/2) 1.0))
  where slope h = (f (x+h) - f x) / h
-- (^2) is an operator section (a kind of partial application): https://wiki.haskell.org/Section_of_an_infix_operator
-- So this says "compute the derivative of x^2 at 5"
deriv (^2) 5
10.0009765625
-- f(x) = x^2
-- f'(x) = 2x
-- f'(5) => 10
-- It works!
I have found lazy evaluation confusing, but this is a powerful example of how it can simplify code. I wanted to see how it could be applied in a language with eager evaluation, so I translated the code to Ruby.
def iterate(a, &f)
  Enumerator.new do |yielder|
    loop do
      yielder << a
      a = f.call(a)
    end
  end.lazy
end
def limit(epsilon, seq)
  # NOTE: To maintain the same behavior as the Haskell code,
  # peek at the second estimate so it's not consumed.
  a = seq.next
  b = seq.peek
  if (a - b).abs <= epsilon
    b
  else
    limit(epsilon, seq)
  end
end
def sqrt(n)
  limit(0.0001, iterate(1.0) { |a| (a + n/a.to_f)/2.0 })
end
This is not as simple as the Haskell code!
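It should behave the same way, though. As a rough check (the values shown in the comments are approximate), something like this should reproduce the Haskell examples above:

# First five estimates for the square root of 1000,
# mirroring `take 5 $ iterate (next 1000) 1.0`
p iterate(1.0) { |a| (a + 1000/a)/2.0 }.first(5)
# => [1.0, 500.5, 251.249000999001, ...]

# The converged result; should be close to 31.6227766
p sqrt(1000)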
Most of the complexity is due to the difficulty of generating an infinite sequence. Ruby is eagerly evaluated, so the iterate method uses an Enumerator with a loop. And as seen in the limit method, consuming that Enumerator takes special care (next versus peek) to reproduce the behavior of the Haskell code. A method to split an infinite sequence into first and rest would make things a lot easier.
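For example, here's a rough sketch of what that could look like. The first_rest helper is hypothetical (Ruby doesn't provide it), and iterate is tweaked to use a local variable so the sequence can safely be re-enumerated from the start; the split is built on Enumerable#first and Enumerator::Lazy#drop:

# Sketch only: a re-enumerable iterate plus a hypothetical first/rest split.
# Using a local variable (`x`) means each enumeration starts from the original
# seed instead of wherever a previous enumeration left off.
def iterate(a, &f)
  Enumerator.new do |yielder|
    x = a
    loop do
      yielder << x
      x = f.call(x)
    end
  end.lazy
end

# Split a lazy sequence into its first element and the (still lazy) rest,
# mimicking Haskell's (x:ys) pattern.
def first_rest(seq)
  [seq.first, seq.drop(1)]
end

def limit(epsilon, seq)
  # Compare the first two estimates; return the second when they're within
  # epsilon, otherwise recurse on the rest of the (infinite) sequence.
  x, rest = first_rest(seq)
  y, _ = first_rest(rest)
  if (x - y).abs <= epsilon
    y
  else
    limit(epsilon, rest)
  end
end

This reads much closer to the Haskell destructuring, at the cost of re-running the generator from the seed on every recursive step.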
This was an enlightening exercise. I learned more about laziness, and about how hard it can be to shoehorn it into an eagerly evaluated language. Lazy sequences are possible in Ruby, but the language's support for programming with them is weak.
Further reading
I posted the source code from this blog post in case you'd like to play around with it.
I adapted the Haskell code for this post from John Hughes’s slides, original paper, and Heinrich Apfelmus’s post about lazy evaluation.
For the Ruby code, I learned a lot by reading Joël Quenneville's Functional Ciphers in Ruby and Ross Kaffenberger's Infinite Sequences in Ruby (he also has several more articles about Enumerator).
Thanks to Paul Cantrell and Charles Capps for reviewing drafts of this post. Any errors are my own.