Programming Languages
Tutorial 2
1 Fun with memory
Exercise 1
After the definitions,
val x = new();;
val y = x;;
it is claimed that x and y denote the same storage location, and after the definitions,
val x = new();;
val y = new();;
they denote different locations. Devise an experiment to demonstrate that this is the
case.
Answer:
>>> val x = new();;
--- x =
>>> val y = x;;
--- y =
>>> x := 5;;
--> 5
>>> !y;;
--> 5
>>> y := 7;;
--> 7
>>> !x;;
--> 7
vs.
>>> val x = new();;
--- x =
>>> val y = new();;
--- y =
>>> x := 5;;
--> 5
>>> !y;;
Error: uninitialized location 1
>>> y := 7;;
--> 7
>>> !x;;
--> 5
Exercise 2
In FunMem, is it possible for the value of a variable to be another variable?
Give an example, or explain why it can't happen. How can we construct a cons
cell that has a variable as its tail? Show how to use this idea to construct cyclic
lists in FunMem.
Answer:
Yes, it's possible if you're cunning. It might be possible if you're not cunning
as well, but I was feeling cunning when I was doing this question, so the answer
given is a cunning one! For example:
>>> val x = new();;
--- x =
>>> x := new();;
-->
>>> val y = !x;;
--- y =
>>> !x := 5;;
--> 5
>>> !y;;
--> 5
>>> !!x;;
--> 5
Constructing cyclic lists:
>>> val cell = new();;
--- cell =
>>> val lst = new();;
--- lst =
>>> lst := cons(23,cell);;
--> [23 . ]
>>> cell := lst;;
-->
>>> !!cell;;
--> [23 . ]
Exercise 3
Many Pascal-like languages have a repeat loop that we might give the
concrete syntax
repeat expr until expr
The expression repeat e1 until e2 is evaluated by repeatedly evaluating e1
(and discarding its value) followed by e2 (whose value must be a Boolean),
until the last evaluation of e2 yields true. Show how to add this construct
to FunMem.
Answer:
* Add new REPEAT and UNTIL tokens to the parser. (FunParser.hs)
* Add REPEAT -> "repeat" and UNTIL -> "until" to the instance of Show relating to Token. (FunParser.hs)
* Add ("repeat", REPEAT) and ("until", UNTIL) to the keyword lookup (kwlookup). (FunParser.hs)
* Extend the expression grammar in the parser so that repeat e1 until e2 parses to Repeat e1 e2. (FunParser.hs)
* Change the expression type:
data Expr =
...
| Repeat Expr Expr -- repeat e1 until e2
...
(FunSyntax.hs)
* Add a new case for Repeat expressions to eval. Note that the loop must
continue while the test yields false and stop as soon as it yields true:
eval (Repeat e1 e2) env mem = f mem
  where
    f mem =
      let (_, mem') = eval e1 env mem in
      let (b, mem'') = eval e2 env mem' in
      case b of
        BoolVal False -> f mem''    -- condition not yet true: go round again
        BoolVal True -> (b, mem'')  -- condition true: stop
        _ -> error "boolean expected after until"
(FunMem.hs)
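The loop shape of this case can be checked in isolation. Below is a sketch that threads an explicit state of type Int in place of the interpreter's Mem (FunMem's eval and Value types are not reproduced; repeatUntil and the sample body/test are invented for the demonstration): the body runs first, then the test, and the loop stops once the test yields true, matching the stated repeat/until semantics.

```haskell
-- A state-threading repeat/until loop: run the body, then the test,
-- and stop once the test yields True. The state type s stands in for
-- the interpreter's Mem.
repeatUntil :: (s -> ((), s)) -> (s -> (Bool, s)) -> s -> (Bool, s)
repeatUntil body test s0 = go s0
  where
    go s =
      let (_, s')  = body s
          (b, s'') = test s'
      in if b then (b, s'') else go s''

-- Count up from 0 until the counter reaches 5.
demo :: (Bool, Int)
demo = repeatUntil (\n -> ((), n + 1)) (\n -> (n >= 5, n)) 0
```

Running demo yields (True, 5): the body has executed five times and the final test succeeded.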
Exercise 4
In the language FunMem, we added assignment as a language construct,
extending the abstract syntax of Fun to do so. By defining a primitive
put such that put(x, y) is equivalent to x := y, show that we could in
fact have made assignment a primitive.
Answer:
prim "put" [x, y] mem =
  case x of
    Addr a -> let mem' = update mem a y in (y, mem')
    _ -> error "put expects an address as its first argument"
Exercise 5
In the basic language Fun, we cannot tell whether the actual parameters
of a function are evaluated from left to right or from right to left, nor whether
the function itself is evaluated (to obtain the function object that is applied
to the arguments) before or after the actual parameters. Why is this? Write a
program in Fun with memory that reveals these secrets about the interpreter.
Answer:
Why this is:
Since expressions in the basic Fun language don't have side effects, the result
doesn't depend on the order in which we evaluate the arguments, so we can't
write a program which will determine what the interpreter actually does about it.
(i)
>>> val f(x,y) = !x;;
--- f =
>>> val x = new();;
--- x =
>>> x := 1;;
--> 1
>>> f(x := !x+1; x, x := !x*!x; x);;
--> 4
This tells us that the parameters are evaluated from left to right, since we see
(1+1) * (1+1) = 4
and not
1*1 + 1 = 2
(ii)
>>> val f = new();;
--- f =
>>> f := (lambda(g,x) x);;
-->
>>> val g = (lambda(h,x) 5);;
--- g =
>>> (!f)(f := g; g, 7);;
--> 7
This tells us that the function itself is evaluated before the parameters, since
we see
(lambda(g,x) x)(_, 7) = 7
and not
(lambda(h,x) 5)(_, 7) = 5
This definitely distinguishes between the two cases, because I recompiled it with
the evaluation done the other way round and it produced 5.
2 Monads
Exercise 6
Show that the three monad laws are satisfied by the operations of the
monad of memory.
Answer:
For reference purposes, the three laws are:
(xm @> f) @> g = xm @> (\x -> f x @> g)
(result x) @> f = f x
xm @> result = xm
And the relevant bits of the monad of memory are:
type M a = Mem -> (a, Mem)
result :: a -> M a
result x mem = (x, mem)
(@>) :: M a -> (a -> M b) -> M b
:: (Mem -> (a, Mem)) -> (a -> (Mem -> (b, Mem))) -> Mem -> (b, Mem)
(xm @> f) mem = let (x, mem1) = xm mem in f x mem1
So, down to business:
Law 1 : (xm @> f) @> g = xm @> (\x -> f x @> g)
LHS mem
= ((xm @> f) @> g) mem
= let (x1, mem1) = (xm @> f) mem in g x1 mem1 [defn. of @>]
= let (x1, mem1) = (let (x2, mem2) = xm mem in f x2 mem2) in g x1 mem1 [defn. of @>]
RHS mem
= (xm @> (\x -> f x @> g)) mem
= let (x2, mem2) = xm mem in (\x -> f x @> g) x2 mem2 [defn. of @>]
= let (x2, mem2) = xm mem in (f x2 @> g) mem2 [getting rid of the lambda]
= let (x2, mem2) = xm mem in (let (x1, mem1) = f x2 mem2 in g x1 mem1) [defn. of @>]
But this is the same as LHS mem: the subset of the meta-language we are using
has no side effects, so nested lets can be flattened, which implies that:
let (x1, mem1) =
let (x2, mem2) = xm mem in f x2 mem2
in g x1 mem1
is the same as
let (x2, mem2) = xm mem in
let (x1, mem1) = f x2 mem2
in g x1 mem1
Hence LHS = RHS
Law 2 : (result x) @> f = f x
LHS mem
= ((result x) @> f) mem
= let (y, mem1) = (result x) mem in f y mem1 [defn. of @>]
= let (y, mem1) = (x, mem) in f y mem1 [defn. of result]
= f x mem [obvious]
= RHS mem
-> LHS = RHS
Law 3 : xm @> result = xm
LHS mem
= (xm @> result) mem
= let (x, mem1) = xm mem in result x mem1 [defn. of @>]
= let (x, mem1) = xm mem in (x, mem1) [defn. of result]
= xm mem [obvious]
= RHS mem
-> LHS = RHS
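Since the values of this monad are functions, the two sides of each law cannot be compared directly, but the laws can still be sanity-checked by applying both sides to a sample memory. A minimal sketch, with Mem specialised to [Int] purely for the test; xm0, f0, g0 and m0 are made-up sample computations, not part of the interpreter:

```haskell
-- The monad of memory, with Mem specialised to [Int] so results
-- can be compared with (==) after applying to a concrete memory.
type Mem = [Int]
type M a = Mem -> (a, Mem)

result :: a -> M a
result x mem = (x, mem)

(@>) :: M a -> (a -> M b) -> M b
(xm @> f) mem = let (x, mem1) = xm mem in f x mem1

-- arbitrary sample computations that both read and write the memory
xm0 :: M Int
xm0 mem = (length mem, 0 : mem)

f0, g0 :: Int -> M Int
f0 x mem = (x + 1, x : mem)
g0 x mem = (x * 2, mem ++ [x])

m0 :: Mem
m0 = [7]

law1, law2, law3 :: Bool
law1 = ((xm0 @> f0) @> g0) m0 == (xm0 @> (\x -> f0 x @> g0)) m0
law2 = (result 3 @> f0) m0 == f0 3 m0
law3 = (xm0 @> result) m0 == xm0 m0
```

All three booleans come out True; of course this only tests the laws at sample points, whereas the proof above establishes them in general.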
Exercise 7
Show that the three monad laws are satisfied by the operations of the
monad of output.
Answer:
For reference purposes, the relevant bits of the monad of output are:
type M a = (a, String)
(@>) :: M a -> (a -> M b) -> M b
:: (a, String) -> (a -> (b, String)) -> (b, String)
xm @> f = let (x, o1) = xm in (let (y, o2) = f x in (y, o1 ++ o2))
result :: a -> M a
:: a -> (a, String)
result x = (x, "")
Law 1 : (xm @> f) @> g = xm @> (\x -> f x @> g)
LHS
= (xm @> f) @> g
= let (x, o1) = (xm @> f) in (let (y, o2) = g x in (y, o1 ++ o2)) [defn. of @>]
= let (x, o1) = (let (z, o3) = xm in (let (w, o4) = f z in (w, o3 ++ o4))) in (let (y, o2) = g x in (y, o1 ++ o2)) [defn. of @>]
= let (z, o3) = xm in [simplification]
let (w, o4) = f z in
let (y, o2) = g w in
(y, o3 ++ o4 ++ o2)
RHS
= xm @> (\x -> f x @> g)
= let (x1, o1) = xm in (let (x2, o2) = (\x -> f x @> g) x1 in (x2, o1 ++ o2)) [defn. of @>]
= let (x1, o1) = xm in (let (x2, o2) = (f x1 @> g) in (x2, o1 ++ o2)) [getting rid of the lambda]
= let (x1, o1) = xm in (let (x2, o2) = (let (x3, o3) = f x1 in (let (x4, o4) = g x3 in (x4, o3 ++ o4))) in (x2, o1 ++ o2)) [defn. of @>]
= let (x1, o1) = xm in [simplification]
let (x3, o3) = f x1 in
let (x4, o4) = g x3 in
(x4, o1 ++ o3 ++ o4)
This is clearly equal to the left-hand side modulo the renaming:
z <-> x1
w <-> x3
y <-> x4
o3 <-> o1
o4 <-> o3
o2 <-> o4
QED
Law 2 : (result x) @> f = f x
LHS
= ((result x) @> f)
= ((x, "") @> f) [defn. of result]
= let (x1, o1) = (x, "") in (let (y, o2) = f x1 in (y, o1 ++ o2)) [defn. of @>]
= let (y, o2) = f x in (y, o2) [since x1 = x and o1 = ""]
= f x [obvious]
= RHS
Law 3 : xm @> result = xm
LHS
= xm @> result
= let (x, o1) = xm in (let (y, o2) = result x in (y, o1 ++ o2)) [defn. of @>]
= let (x, o1) = xm in (let (y, o2) = (x, "") in (y, o1 ++ o2)) [defn. of result]
= let (x, o1) = xm in (x, o1) [since y = x and o2 = ""]
= xm [obvious]
= RHS
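Here the monadic values are ordinary pairs, so the laws can also be checked directly on sample data. A small sketch; xm0, f0 and g0 are invented test values:

```haskell
-- The monad of output, with the value type specialised to Int so the
-- two sides of each law can be compared with (==).
type M a = (a, String)

result :: a -> M a
result x = (x, "")

(@>) :: M a -> (a -> M b) -> M b
xm @> f = let (x, o1) = xm in let (y, o2) = f x in (y, o1 ++ o2)

-- sample computations, each producing a value and some output
xm0 :: M Int
xm0 = (1, "start;")

f0, g0 :: Int -> M Int
f0 x = (x + 1, "f;")
g0 x = (x * 2, "g;")

law1, law2, law3 :: Bool
law1 = ((xm0 @> f0) @> g0) == (xm0 @> (\x -> f0 x @> g0))
law2 = (result 5 @> f0) == f0 5
law3 = (xm0 @> result) == xm0
```

Both sides of law 1 evaluate to (4, "start;f;g;"), illustrating concretely how the outputs are concatenated in the same order either way.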
Exercise 8
Define a monad M that supports both memory and output, together with
operations
get :: Location -> M Value
put :: Location -> Value -> M ()
new :: M Location
output :: String -> M ()
Answer:
type M a = Mem -> (a, Mem, String)
result :: a -> M a
result x mem = (x, mem, "")
(@>) :: M a -> (a -> M b) -> M b
(xm @> f) mem =
let (x, mem1, o1) = xm mem in
let (y, mem2, o2) = f x mem1 in
(y, mem2, o1 ++ o2)
get :: Location -> M Value
get a mem = (contents mem a, mem, "")
put :: Location -> Value -> M ()
put a v mem = ((), update mem a v, "")
new :: M Location
new mem = let (a, mem') = fresh mem in (a, mem', "")
obey :: Para -> GloState -> (String, GloState)
obey (Calculate exp) (env, mem) =
let (v, mem', o) = eval exp env mem in
(o ++ print_value v, (env, mem'))
obey (Define def) (env, mem) =
let x = def_lhs def in
let (env', mem', o) = elab def env mem in
(o ++ print_defn x (find env' x), (env', mem'))
prim "print" [v] = (\mem -> (v, mem, show v ++ "\n"))
I'm rather dubious about the print primitive, but I can't see another way
of doing it offhand. Any thoughts?
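To make the answer easy to experiment with, here is a self-contained toy version: Mem is modelled as an association list, and contents and update are minimal stand-ins for the interpreter's real helpers, while new simply picks one more than the largest allocated location. This is a sketch of the monad's shape, not the interpreter's actual machinery.

```haskell
-- A toy combined monad of memory and output. Location, Value and Mem
-- are illustrative stand-ins for the interpreter's types.
type Location = Int
type Value = Int
type Mem = [(Location, Value)]

type M a = Mem -> (a, Mem, String)

result :: a -> M a
result x mem = (x, mem, "")

(@>) :: M a -> (a -> M b) -> M b
(xm @> f) mem =
  let (x, mem1, o1) = xm mem in
  let (y, mem2, o2) = f x mem1 in
  (y, mem2, o1 ++ o2)

contents :: Mem -> Location -> Value
contents mem a = maybe (error "uninitialized location") id (lookup a mem)

update :: Mem -> Location -> Value -> Mem
update mem a v = (a, v) : filter ((/= a) . fst) mem

get :: Location -> M Value
get a mem = (contents mem a, mem, "")

put :: Location -> Value -> M ()
put a v mem = ((), update mem a v, "")

-- toy allocator: one more than the largest location in use
-- (a real fresh would record the allocation immediately)
new :: M Location
new mem = (1 + maximum (0 : map fst mem), mem, "")

output :: String -> M ()
output s mem = ((), mem, s)

-- allocate, store, read back and print
demo :: M Value
demo =
  new @> (\a ->
  put a 42 @> (\_ ->
  get a @> (\v ->
  output ("stored " ++ show v) @> (\_ ->
  result v))))
```

Running demo [] produces (42, [(1, 42)], "stored 42"): the memory effects thread left to right and the output accumulates alongside, exactly as the definition of @> dictates.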
Exercise 9
Using the monad laws, show that the programs e1; (e2; e3) and (e1; e2); e3
are equivalent, and that nil; e1 is equivalent to e1. Why is e1; nil not
equivalent to e1?
Answer:
Again, for reference purposes, the monad laws are:
(xm @> f) @> g = xm @> (\x -> f x @> g)
(result x) @> f = f x
xm @> result = xm
The relevant bit of the monad code is:
eval (Sequence e1 e2) env = eval e1 env @> (\v -> eval e2 env)
(i)
We are required to prove that:
eval (Sequence e1 (Sequence e2 e3)) env = eval (Sequence (Sequence e1 e2) e3) env
We proceed as follows:
LHS
= eval (Sequence e1 (Sequence e2 e3)) env
= eval e1 env @> (\v1 -> eval (Sequence e2 e3) env) [defn. of eval]
= eval e1 env @> (\v1 -> eval e2 env @> (\v2 -> eval e3 env)) [defn. of eval]
RHS
= eval (Sequence (Sequence e1 e2) e3) env
= eval (Sequence e1 e2) env @> (\v2 -> eval e3 env) [defn. of eval]
= (eval e1 env @> (\v1 -> eval e2 env)) @> (\v2 -> eval e3 env) [defn. of eval]
These are equal by a direct application of monad law 1. Taking
xm = eval e1 env, f = \v1 -> eval e2 env and g = \v2 -> eval e3 env,
the law gives
(xm @> f) @> g = xm @> (\v1 -> f v1 @> g)
= eval e1 env @> (\v1 -> eval e2 env @> (\v2 -> eval e3 env))
which is exactly the LHS.
QED
(ii)
We are required to prove that:
eval (Sequence Nil e1) env = eval e1 env
Well:
LHS
= eval (Sequence nilexp e1) env
= (result Nil) @> (\v -> eval e1 env) [defn. of eval]
= (\v -> eval e1 env) Nil [monad law 2]
= eval e1 env [simplification]
= RHS
(iii)
e1; nil isn't equivalent to e1 because its value is the value of nil, namely [],
rather than the value of e1 (although the two have the same side effects).
Exercise 10
In order to put the interpreter for Fun into monadic form, we replaced
the use of map for evaluating lists of arguments with a function evalargs,
defined as follows.
evalargs :: [Expr] -> Env -> M [Value]
evalargs [] env = result []
evalargs (e:es) env = eval e env @> (\v -> evalargs es env @> (\vs -> result(v:vs)))
Define a polymorphic function mapm with type,
mapm :: (a -> M b) -> [a] -> M [b],
such that evalargs aps env can be replaced by
mapm (\e -> eval e env) aps
Answer:
If aps = []:
evalargs [] env = result []
-> mapm (\e -> eval e env) [] = result []
If aps != []:
evalargs (e:es) env = eval e env @> (\v -> evalargs es env @> (\vs -> result(v:vs)))
-> mapm (\e -> eval e env) (e:es)
= eval e env @> (\v -> evalargs es env @> (\vs -> result(v:vs)))
= eval e env @> (\v -> mapm (\e -> eval e env) es @> (\vs -> result(v:vs)))
= (\e -> eval e env) e @> (\v -> mapm (\e -> eval e env) es @> (\vs -> result(v:vs)))
= f e @> (\v -> mapm f es @> (\vs -> result(v:vs)))
where f e = eval e env
So we write:
mapm :: (a -> M b) -> [a] -> M [b]
mapm _ [] = result []
mapm f (e:es) = f e @> (\v -> mapm f es @> (\vs -> result(v:vs)))
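mapm can be tried out on the monad of output from Exercise 7, where it collects the results while concatenating the output left to right; it is, in effect, the notes' analogue of the standard mapM. The logging function below is invented for the demonstration:

```haskell
-- mapm instantiated at the monad of output from Exercise 7.
type M a = (a, String)

result :: a -> M a
result x = (x, "")

(@>) :: M a -> (a -> M b) -> M b
xm @> f = let (x, o1) = xm in let (y, o2) = f x in (y, o1 ++ o2)

mapm :: (a -> M b) -> [a] -> M [b]
mapm _ [] = result []
mapm f (e:es) = f e @> (\v -> mapm f es @> (\vs -> result (v:vs)))

-- double each number, logging it as we go
demo :: M [Int]
demo = mapm (\n -> (2 * n, show n ++ ";")) [1, 2, 3]
```

demo evaluates to ([2,4,6], "1;2;3;"): the results come back in order, and the effects happen left to right, just as evalargs performs them.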
Exercise 11
Use the interpreter in monadic form to show that the expression e1; e2
can be regarded as syntactic sugar for let x = e1 in e2, where x is an identifier
that does not occur in the source program.
Answer:
We are required to prove that:
eval (Sequence e1 e2) env = eval (Let (Val x e1) e2) env
Well:
[
Monad Laws (for reference purposes)
(xm @> f) @> g = xm @> (\x -> f x @> g)
(result x) @> f = f x
xm @> result = xm
]
LHS
= eval (Sequence e1 e2)
= eval e1 env @> (\v -> eval e2 env) [defn. of eval]
= eval e1 env @> (\v -> eval e2 (define env x v)) [since x doesn't appear in e2]
= eval e1 env @> (\v -> (\env' -> eval e2 env') (define env x v)) [introducing a lambda]
= eval e1 env @> (\v -> result (define env x v) @> (\env' -> eval e2 env')) [monad law 2]
= (eval e1 env @> (\v -> result (define env x v))) @> (\env' -> eval e2 env') [associativity of @>]
= elab (Val x e1) env @> (\env' -> eval e2 env') [defn. of elab]
= eval (Let (Val x e1) e2) env [defn. of eval]
= RHS
Exercise 12
Your answer to Exercise 5 shows that in Fun with memory, we can detect
the evaluation order that is used for the function and its arguments in a
function call. Show how to modify the monadic interpreter so that actual
parameters are evaluated from right to left, and the function itself is
evaluated after the parameters.
Answer:
eval (Apply f es) env =
evalargs es env @> (\args ->
eval f env @> (\fv ->
apply fv args))
and
evalargs (e:es) env =
evalargs es env @> (\vs -> eval e env @> (\v -> result (v:vs)))
Note that re-running the tests devised in Exercise 5 now (as expected)
produces the opposite results.
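The reversal can be observed in miniature with the monad of output: the right-to-left evalargs performs its arguments' effects back to front while still returning the values in source order. evalargsRL and the logging pairs below are invented for the demonstration; the real evalargs works on Expr and Env rather than on monadic values directly:

```haskell
-- Right-to-left argument evaluation, modelled on the monad of output.
type M a = (a, String)

result :: a -> M a
result x = (x, "")

(@>) :: M a -> (a -> M b) -> M b
xm @> f = let (x, o1) = xm in let (y, o2) = f x in (y, o1 ++ o2)

-- evaluate the tail first, then the head, but cons the head's value
-- onto the front so the result list stays in source order
evalargsRL :: [M a] -> M [a]
evalargsRL [] = result []
evalargsRL (e:es) = evalargsRL es @> (\vs -> e @> (\v -> result (v:vs)))

demo :: M [Int]
demo = evalargsRL [(1, "a;"), (2, "b;"), (3, "c;")]
```

demo evaluates to ([1,2,3], "c;b;a;"): the values are in their original order, but the output shows the effects happened right to left.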
Exercise 13
Suppose that result :: a -> M a and (@>) :: M a -> (a -> M b) -> M b
satisfy the three monad laws. Show that it is possible to define polymorphic
functions mmap :: (a -> b) -> (M a -> M b) and join :: M (M a) -> M a in such
a way that xm @> f = join (mmap f xm). Show that mmap and join satisfy the
following laws:
mmap id = id
mmap (g . f) = mmap g . mmap f
mmap f . result = result . f
mmap f . join = join . mmap (mmap f)
Show in addition that the following laws hold:
(Aside: I'm convinced that this should be join . join = join . mmap join,
because map doesn't make sense here.)
join . join = join . map join
join . result = id
join . mmap result = id
Conversely, if result, mmap and join satisfy all seven of these laws and @>
is defined by the equation xm @> f = join (mmap f xm), then result and bind
together satisfy the three monad laws.
[In the language of Category Theory, the first four laws stated above express
the facts that mmap is (the arrow part of) a 'functor', and that result and
join are 'natural transformations'. This exercise shows that our definition
of the concept of a monad is equivalent to a formulation in terms of a functor
and two natural transformations that together satisfy the three laws shown.
Our formulation is more convenient for programming, and the other is more
convenient for point-free algebraic manipulations.]
Answer:
(Random aside: There seem to be lots of appropriate film quotes which could
accurately describe this question, such as "This is the big one, 007" (The
Living Daylights) or "We come to it at last, the great battle of our time"
(Return of the King). Either way, I can tell this is going to be much more
painful than the rest of the sheet. But I digress...)
[
Monad Laws (just wasting paper by now!)
(xm @> f) @> g = xm @> (\x -> f x @> g)
(result x) @> f = f x
xm @> result = xm
]
mmap :: (a -> b) -> (M a -> M b)
mmap f xm = xm @> (\x -> result (f x))
join :: M (M a) -> M a
join xmm = xmm @> id
(Note that this was mostly done by looking at the types, rather than by
plucking cunning definitions out of thin air, the latter not being an
especially easy method to employ.)
Specification: xm @> f = join (mmap f xm)
join (mmap f xm)
= join (xm @> (\x -> result (f x))) [defn. of mmap]
= (xm @> (\x -> result (f x))) @> id [defn. of join]
= xm @> (\x -> result (f x) @> id) [associativity of @>]
= xm @> (\x -> id (f x)) [monad law 2]
= xm @> (\x -> f x) [defn. of id]
= xm @> f [getting rid of the lambda]
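The equation can also be confirmed on a concrete monad. Below, the monad of output from Exercise 7 is used with these definitions of mmap and join; xm0 and f0 are made-up sample values:

```haskell
-- Checking xm @> f = join (mmap f xm) on the concrete monad of output.
type M a = (a, String)

result :: a -> M a
result x = (x, "")

(@>) :: M a -> (a -> M b) -> M b
xm @> f = let (x, o1) = xm in let (y, o2) = f x in (y, o1 ++ o2)

mmap :: (a -> b) -> M a -> M b
mmap f xm = xm @> (\x -> result (f x))

join :: M (M a) -> M a
join xmm = xmm @> id

-- both sides of the specification, at sample values
check :: Bool
check = (xm0 @> f0) == join (mmap f0 xm0)
  where
    xm0 :: M Int
    xm0 = (3, "x;")
    f0 :: Int -> M Int
    f0 n = (n + 1, "f;")
```

Both sides evaluate to (4, "x;f;"): mmap f0 wraps the inner computation (and its output "f;") inside the outer one, and join then splices the two output strings together in the right order.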
Laws
(Self-note: Some of these proof were constructed "from both ends",
i.e. starting with the LHS and RHS and working towards the middle.
If some of the steps seem hard to follow at a later date, try reading
them backwards and it might become more obvious!)
(i) mmap id = id
mmap id xm
= xm @> (\x -> result (id x)) [defn. of mmap]
= xm @> (\x -> result x) [defn. of id]
= xm @> result [getting rid of the lambda]
= xm [monad law 3]
= id xm [defn. of id]
QED
(ii) mmap (g . f) = mmap g . mmap f
mmap (g . f) xm
= xm @> (\x -> result ((g . f) x)) [defn. of mmap]
= xm @> (\x -> result (g (f x))) [defn. of (.)]
= xm @> (\x -> (\x -> result (g x)) (f x)) [introducing a lambda]
= xm @> (\x -> result (f x) @> (\x -> result (g x))) [monad law 2]
= (xm @> (\x -> result (f x))) @> (\x -> result (g x)) [associativity of @>]
= mmap f xm @> (\x -> result (g x)) [defn. of mmap]
= mmap g (mmap f xm) [defn. of mmap]
= (mmap g . mmap f) xm [defn. of (.)]
(iii) mmap f . result = result . f
(mmap f . result) y
= mmap f (result y) [defn. of (.)]
= result y @> (\x -> result (f x)) [defn. of mmap]
= (\x -> result (f x)) y [monad law 2]
= result (f y) [getting rid of the lambda]
= (result . f) y [defn. of (.)]
(iv) mmap f . join = join . mmap (mmap f)
(mmap f . join) xmm
= mmap f (join xmm) [defn. of (.)]
= mmap f (xmm @> id) [defn. of join]
= (xmm @> id) @> (\x -> result (f x)) [defn. of mmap]
= xmm @> (\xm -> id xm @> (\x -> result (f x))) [monad law 1]
= xmm @> (\xm -> xm @> (\x -> result (f x))) [defn. of id]
= xmm @> (\xm -> id (xm @> (\x -> result (f x)))) [defn. of id]
= xmm @> (\xm -> result (xm @> (\x -> result (f x))) @> id) [monad law 2]
= xmm @> (\xm -> result (mmap f xm) @> id) [defn. of mmap]
= (xmm @> (\xm -> result (mmap f xm))) @> id [associativity of @>]
= join (xmm @> (\xm -> result (mmap f xm))) [defn. of join]
= join (mmap (mmap f) xmm) [defn. of mmap]
= (join . mmap (mmap f)) xmm [defn. of (.)]
(v) join . join = join . mmap join
(join . join) xmmm
= join (join xmmm) [defn. of (.)]
= join (xmmm @> id) [defn. of join]
= (xmmm @> id) @> id [defn. of join]
= xmmm @> (\xmm -> id xmm @> id) [monad law 1]
= xmmm @> (\xmm -> xmm @> id) [defn. of id]
= xmmm @> join [defn. of join]
= xmmm @> (\xmm -> join xmm) [introducing a lambda]
= xmmm @> (\xmm -> id (join xmm)) [defn. of id]
= xmmm @> (\xmm -> result (join xmm) @> id) [monad law 2]
= (xmmm @> (\xmm -> result (join xmm))) @> id [associativity of @>]
= mmap join xmmm @> id [defn. of mmap]
= join (mmap join xmmm) [defn. of join]
= (join . mmap join) xmmm [defn. of (.)]
(vi) join . result = id
(join . result) y
= join (result y) [defn. of (.)]
= result y @> id [defn. of join]
= id y [monad law 2]
(vii) join . mmap result = id
(join . mmap result) xm
= join (mmap result xm) [defn. of (.)]
= join (xm @> (\x -> result (result x))) [defn. of mmap]
= (xm @> (\x -> result (result x))) @> id [defn. of join]
= xm @> (\x -> result (result x) @> id) [associativity of @>]
= xm @> (\x -> id (result x)) [monad law 2]
= xm @> (\x -> result x) [defn. of id]
= xm @> result [getting rid of the lambda]
= xm [monad law 3]
= id xm [defn. of id]
If result, mmap and join satisfy all seven of these laws and @> is defined
by the equation xm @> f = join (mmap f xm), then result and bind together
satisfy the three monad laws.
The three monad laws are:
(xm @> f) @> g = xm @> (\x -> f x @> g)
(result x) @> f = f x
xm @> result = xm
(This suggests that what was meant by "bind" in the above was "@>", because
bind isn't actually mentioned in the monad laws! Accordingly, I'm going to
assume that.)
We'll also restate the definitions of mmap and join for reference purposes:
mmap :: (a -> b) -> (M a -> M b)
mmap f xm = xm @> (\x -> result (f x))
join :: M (M a) -> M a
join xmm = xmm @> id
Well:
Law 1 : (xm @> f) @> g = xm @> (\x -> f x @> g)
(xm @> f) @> g
= join (mmap g (xm @> f)) [defn. of @>]
= join (mmap g (join (mmap f xm))) [defn. of @>]
= join ((mmap g . join) (mmap f xm)) [defn. of (.)]
= join ((join . mmap (mmap g)) (mmap f xm)) [mmap f . join = join . mmap (mmap f)]
= join (join (mmap (mmap g) (mmap f xm))) [defn. of (.)]
= (join . join) (mmap (mmap g) (mmap f xm)) [defn. of (.)]
= (join . mmap join) (mmap (mmap g) (mmap f xm)) [join . join = join . mmap join]
= join (mmap join (mmap (mmap g) (mmap f xm))) [defn. of (.)]
= join ((mmap join . mmap (mmap g)) (mmap f xm)) [defn. of (.)]
= join (mmap (join . mmap g) (mmap f xm)) [mmap (g . f) = mmap g . mmap f]
= join ((mmap (join . mmap g) . mmap f) xm) [defn. of (.)]
= join (mmap ((join . mmap g) . f) xm) [mmap (g . f) = mmap g . mmap f]
= xm @> ((join . mmap g) . f) [defn. of @>]
= xm @> (\x -> join (mmap g (f x))) [defn. of (.)]
= xm @> (\x -> f x @> g) [defn. of @>]
Law 2 : (result x) @> f = f x
(result x) @> f
= join (mmap f (result x)) [defn. of @>]
= join ((mmap f . result) x) [defn. of (.)]
= join ((result . f) x) [mmap f . result = result . f]
= join (result (f x)) [defn. of (.)]
= (join . result) (f x) [defn. of (.)]
= id (f x) [join . result = id]
= f x [defn. of id]
Law 3 : xm @> result = xm
xm @> result
= join (mmap result xm) [defn. of @>]
= (join . mmap result) xm [defn. of (.)]
= id xm [join . mmap result = id]
= xm [defn. of id]