TC (was: Narrowing vs. rewriting)
Thanks to Michael for his two examples. I'm convinced by his argument,
echoed by John, that constraints are a `killer app' for LP/LFP.
However, I have a quibble with his first example.
Beware TC! It's easy to give a Too Cute example: one that looks
marvellous but doesn't generalise to the Typical Case. Here are two
examples, one from Lazy FP and one from LP/LFP, both coincidentally
sharing the initials TC.
Lazy FP example: Transitive Closure. Represent a relation from A to A
by a function f :: A -> [A]. Then the transitive closure
(tc f) :: A -> [A] is computed by
tc f x = xs where xs = nub (x : concat (map f xs))
where nub :: [A] -> [A] removes duplicates. This is fine if (tc f x)
yields an infinite list, but fails in the Typical Case that it yields
a finite list, since the list terminates in bottom rather than nil.
Handling finite results properly requires an entirely different
structure, one that explicitly passes a list of visited nodes.
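For concreteness, here is a minimal sketch of such a version (my own
illustration, not part of the original message; the names tc' and go
are invented here). It threads an explicit list of visited nodes and a
worklist, so the recursion bottoms out when nothing new remains:

tc' :: Eq a => (a -> [a]) -> a -> [a]
tc' f x = go [] [x]
  where
    -- go visited worklist: 'visited' holds nodes already emitted,
    -- 'worklist' holds nodes still to be explored
    go visited []     = reverse visited
    go visited (y:ys)
      | y `elem` visited = go visited ys
      | otherwise        = go (y : visited) (f y ++ ys)

-- For a hypothetical relation step with step 1 = [2], step 2 = [1,3]
-- and step 3 = [], we get tc' step 1 == [1,2,3], properly nil-terminated.

A finite reachable set now yields a finite, nil-terminated list, but
the elegant circular one-liner is gone.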
LP/LFP example: Type Checking. Michael's example is fine if we
represent polymorphism by a meta-variable, but fails in the Typical
Case that we represent polymorphism explicitly. For example, say our
term and type languages are extended as follows:
data Expr = ... | Let String Expr Expr
data Texpr = ... | TVar String | TAll String Texpr
and we wish to apply Milner's algorithm W. The only way I know to
program this Typical Case is either to encode unification explicitly
or to use a non-declarative primitive (Gilles Kahn used a clever trick
of this sort in Mentor).
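To make `encode unification explicitly' concrete, here is a minimal
sketch of the kind of machinery it requires (again my own illustration,
not from the message or from Mentor; the constructors TCon and TArrow
and the names Subst, apply, compose, unify, bind and occurs are
assumptions filling in the elided parts of Texpr):

data Texpr = TCon String            -- assumed base types
           | TArrow Texpr Texpr     -- assumed function types
           | TVar String
           | TAll String Texpr

type Subst = [(String, Texpr)]      -- substitutions as association lists

apply :: Subst -> Texpr -> Texpr
apply s (TVar a)     = case lookup a s of
                         Just t  -> t
                         Nothing -> TVar a
apply s (TArrow t u) = TArrow (apply s t) (apply s u)
apply s (TAll a t)   = TAll a (apply [p | p@(b, _) <- s, b /= a] t)
                       -- naive about capture; W's fresh names avoid it
apply _ t            = t

-- compose s2 s1 behaves like applying s1 first, then s2
compose :: Subst -> Subst -> Subst
compose s2 s1 = [(a, apply s2 t) | (a, t) <- s1] ++ s2

-- unification of monotypes only; TAll never reaches unify in algorithm W
unify :: Texpr -> Texpr -> Maybe Subst
unify (TVar a) t                    = bind a t
unify t (TVar a)                    = bind a t
unify (TCon c) (TCon d) | c == d    = Just []
unify (TArrow t1 u1) (TArrow t2 u2) =
  do s1 <- unify t1 t2
     s2 <- unify (apply s1 u1) (apply s1 u2)
     Just (compose s2 s1)
unify _ _                           = Nothing

bind :: String -> Texpr -> Maybe Subst
bind a (TVar b) | a == b = Just []
bind a t
  | occurs a t           = Nothing   -- occurs check
  | otherwise            = Just [(a, t)]

occurs :: String -> Texpr -> Bool
occurs a (TVar b)     = a == b
occurs a (TArrow t u) = occurs a t || occurs a u
occurs a (TAll b t)   = a /= b && occurs a t
occurs _ _            = False

Algorithm W then threads such substitutions through the term,
generalising at Let to introduce TAll; the unification that a logic
engine provides for free through its meta-variables must here be
written out by hand.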
Cheers, -- P