[plt-scheme] why should black holes be exceptions, rather than divergence?
Hello,
I have been thinking about Scheme's treatment of "black holes", or
provable divergence, as errors. That is,
> (letrec ([x (delay (force x))]) (force x))
raises an error rather than diverging.
Can someone point me to, or give me, the historical background or
motivation for this particular design choice?
Haskell has recursive monadic bindings (a.k.a. value recursion?), which
implement black holes as divergence (to some extent). That is,
ghci> import Control.Monad.Fix
ghci> (mfix (\x -> do print "hi" >> return (x+1)))
prints "hi" once and diverges.
(See more examples at the bottom.)
Scala's lazy val has yet another behavior.
scala> lazy val x: Int = { println("hi"); x }
scala> x
keeps printing "hi" again and again until the stack overflows...
(For reference,
scala> lazy val x: Int = { println("hi"); 1 }
scala> x
prints "hi" just once.)
I can guess what implementation gives rise to each of the above
behaviors; I just have to tweak the value that sits in the promise's
slot while the initialization thunk runs. But I do not know what
considerations led each language to its particular choice.
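Concretely, my guess looks something like the toy sketch below. (This is
hypothetical code, not PLT's actual implementation; make-promise and
make-promise/rerun are names I made up.) A promise is a mutable cell
plus a thunk, and whatever a re-entrant force finds while the thunk is
still running decides the behavior:

;; Scheme-like: mark the promise as "running" before entering the thunk,
;; so a re-entrant force hits the mark and reports the black hole.
(define (make-promise thunk)
  (let ((cell #f)       ; (list v) once the value v has been computed
        (running #f))   ; #t while the thunk is being evaluated
    (lambda ()          ; forcing = calling the promise
      (cond (running (error "force: reentrant promise"))
            (cell (car cell))
            (else (set! running #t)
                  (let ((v (thunk)))
                    (set! cell (list v))
                    (set! running #f)
                    v))))))

;; Scala-like: no mark at all, so a re-entrant force just re-enters the
;; thunk and recurses until the stack overflows.
(define (make-promise/rerun thunk)
  (let ((cell #f))
    (lambda ()
      (if cell
          (car cell)
          (let ((v (thunk)))
            (set! cell (list v))
            v)))))

(define p (make-promise (lambda () (p))))
(p)  ; => error: force: reentrant promise

(define q (make-promise/rerun (lambda () (display "hi") (newline) (q))))
(q)  ; prints "hi" over and over until the stack overflows

The mfix behavior would correspond to handing the thunk a lazy reference
to the promise's own eventual result, so that the computation itself
finishes and only a later demand on the value diverges; I do not see how
to reproduce that directly in a strict Scheme.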
Any ideas?
Keiko
More examples:
> (mfix (\x -> do print "hi" >> return (x+1)))
prints "hi" once and diverges.
> (mfix (\x -> do print "hi" >> return x))
prints "hi" once and terminates.
> (mfix (\x -> do print "hi" >> x >> return x))
prints "hi" once and raises an exception.