Lambda-definability and parallel or
Parallel or
Every (imperative) programmer knows the usefulness of a short-circuiting or operation, that is, a binary operation that doesn't evaluate its second argument if the first one is True. To a functional programmer this is only natural: the function or is lazy in its second argument.
> or True _ = True
> or False x = x
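For instance, with the definition above (a hypothetical GHCi session of mine; note that this or shadows the Prelude's list-based or):

> -- ghci> or True undefined
> -- True
> -- ghci> or undefined True
> -- *** Exception: Prelude.undefined

The first equation matches without ever forcing the second argument, but pattern matching always forces the first one.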
Wouldn't it be nice to have a parallel or that evaluates to True if either of its arguments is True? That is, a function por satisfying
> por :: Bool -> Bool -> Bool
> por True _ === True
> por _ True === True
even if _ were undefined or a non-halting computation. As it turns out, a function like this cannot be implemented in lambda calculus. Let's prove this.
We will restrict ourselves to simply-typed lambda calculus with three constants, True, False :: Bool and if :: Bool -> a -> a -> a, obeying the following reduction rules:
> if True M N ==> M
> if False M N ==> N
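With these constants the ordinary, left-to-right or is definable (my own illustration, not part of the proof):

> or = \x -> \y -> if x True y

Indeed, or True N reduces to if True True N ==> True, and or False N reduces to if False True N ==> N, matching the lazy or above.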
Recall that the basic reduction rule of lambda calculus is beta-reduction, that is, function application.
We will additionally allow for fixpoints, as otherwise we will not be able to define most functions at all. (Strictly speaking, we need one fixpoint operator per type, but the type annotations will be omitted from now on for brevity.) The definition of fix (if you don't understand it, just take for granted that fix lets us do recursion, or see these posts):
> fix_a :: (a -> a) -> a
> fix_a ==> \f -> f (fix f)
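As a sanity check (my example, not from the posts linked above), fix really does give us recursion; in Haskell:

> -- A minimal sketch using GHC's fix from Data.Function,
> -- which satisfies fix f = f (fix f) just like our constant:
> import Data.Function (fix)
>
> factorial :: Integer -> Integer
> factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

Unfolding fix once hands the function a reference to itself, named rec here.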
Our evaluation order will be normal order, that is: we apply beta-reduction (function application) and our custom reduction rules at the leftmost possible part of the expression. (This reduction system is confluent, so normal forms are unique, and normal-order evaluation finds the normal form whenever one exists.)
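For example (my own illustration), in

> (\x -> True) ((\y -> y) False)

the leftmost reduction opportunity is the outer application, so the expression reduces to True in one step; the inner redex (\y -> y) False is discarded without ever being evaluated.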
Now, because of its type, por must have the form
> por = \x -> \y -> E
for some E. Let us assume that E is in normal form (if it is not, it can be reduced without changing the semantics of por). We will also assume that both x and y appear in E, as otherwise por would simply ignore one of its arguments.
Now then, buried in E there must be some reduction opportunities that can only be taken when something is substituted for x and y. Because x and y are variables, their occurrences are distinct parts of E.
That is, when we substitute something for x and y, these somethings will not overlap. By symmetry we can assume that x is left of y in E. Let E[A,B] denote E with A substituted for x and B for y. Now then, if A reduces to A' in one step, E[A,B] reduces to E[A',B] in one step, because A was our leftmost reduction opportunity.
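To make this concrete, suppose (hypothetically) that E = if x y False. Then

> E[A,B] = if A B False

and since the if rule cannot fire until its first argument is True or False, the leftmost reduction opportunity of E[A,B] lies inside A.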
Now we note that fix (\x -> x) reduces to itself in three steps:
> fix (\x -> x) ==> (\f -> f (fix f)) (\x -> x)
>               ==> (\x -> x) (fix (\x -> x))
>               ==> fix (\x -> x)
And thus
> por (fix (\x -> x)) B === E[fix (\x -> x),B]
can have no normal form regardless of B: it reduces back to itself every three steps. In particular por (fix (\x -> x)) True =/= True, so por cannot meet its specification.
A taste of domain theory
Scott models, also known as domains, are a way of obtaining denotational semantics for lambda calculi. A denotational semantics is a mapping from the structures of the language to mathematical objects (functions, sets) of some sort.
Long story short, we consider a more-defined-than ordering of values. For simple types such as Bool and Nat (the natural numbers), there are just two levels to this ordering: the completely undefined value _|_ (pronounced "bottom"), and the defined values (1, 2, True, ...). The intuitive reason for using these orderings is that a (computable) function must map a more-defined argument to a more-defined result. That is, a function must be monotonic with respect to the orderings of its argument and result types.
An example is in order. Consider a function f :: Nat -> Bool with
> f undefined === True
now clearly f 4 must be True as well, because f couldn't have inspected its argument. Similarly, if
> f 4 === undefined
then f undefined must be undefined also.
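Both situations are easy to exhibit in Haskell (my illustration; f1 and f2 are hypothetical names):

> -- Never inspects its argument, so f1 undefined === True === f1 4:
> f1 :: Integer -> Bool
> f1 _ = True
>
> -- Inspects its argument, with f2 4 === undefined,
> -- and hence f2 undefined === undefined as well:
> f2 :: Integer -> Bool
> f2 n = if even n then undefined else True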
To be exact, for simple types (here Bool and Nat) the ordering is defined by:

- _|_ <= x for all x
- distinct x and y are not comparable if neither is _|_
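As a sketch, this flat ordering can be modelled in Haskell by representing _|_ explicitly as Nothing (an encoding choice of mine, not something the semantics prescribes):

> -- lte a b decides a <= b in the flat ordering on Bool extended with _|_:
> lte :: Maybe Bool -> Maybe Bool -> Bool
> lte Nothing  _        = True      -- _|_ <= x for all x
> lte (Just a) (Just b) = a == b    -- defined values are below only themselves
> lte (Just _) Nothing  = False     -- a defined value is never below _|_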
The monotonicity requirement can now be formulated as: if a <= x then f a <= f x.
The cliffhanger
If we expand the definition of monotonicity to two arguments in the most natural way, that is
> a <= x && b <= y ==> f a b <= f x y
we find out that parallel or is monotonic: this can be easily ascertained either by checking all cases or by reasoning that making either argument to por less defined will never make the result more defined. So, por cannot be told apart from the definable functions in our semantic model.
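For the record, here is the case check in full, assuming additionally that por False False === False (the specification pins down only the True cases); first argument on the left, second on top:

> por   | _|_    False  True
> ------+--------------------
> _|_   | _|_    _|_    True
> False | _|_    False  True
> True  | True   True   True

Replacing _|_ in either argument by a defined value never makes the entry less defined, which is exactly monotonicity.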
In the next post we will investigate what happens when we introduce por as a constant into our lambda calculus. Be warned, it might take a while, as there's a lot to handle and I'm leaving for the Alps after the weekend.