> As you can imagine, I do not agree that the Escher model is simpler.
> At first sight, it seems to be simpler, since Escher performs
> only rewrite steps. However, Escher also deals with logical variables,
> and in order to understand the processing of these variables,
> one also has to consider all the rewrite rules in the module
> Booleans. Thus, the operational model of Escher can be characterized
> as "outermost reduction + all rewrite rules of Booleans".
> Since the rewrite rules in Booleans describe the treatment
> of logical variables, the consequences of applying the different
> rules to a concrete goal are not obvious, at least to me.
> For instance, the goal "exists [x] x == head []" will be reduced to
> "True", whereas the goals "exists [x] head [] == head []" and
> "x == head []" are not reducible. Moreover, parallel outermost
> or leftmost outermost reduction may also lead to unexpected results.
> For instance, consider the functions
>
> f [] X = []
> loop = loop
>
> and the expression "f (head []) loop". Since "loop" is the only
> outermost redex, parallel or leftmost outermost reductions run
> into infinite loops, whereas an appropriate rewrite strategy
> (like the strategy specified in Curry by definitional trees) immediately stops.
> Maybe these examples are artificial, but they show that the
> behavior of a pure rewrite model is not obvious.
There are several comments to make here. The first is that the rewrites in
Booleans are not complicated! Indeed I have worked very hard to reduce them
to the simplest possible set. Mostly they are rewrites just like programmers
write in their own programs and many are already in Haskell. A few have side
conditions which must be satisfied before they can be applied. I simply do not
accept that (ordinary) programmers will have a problem with these.
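To give a feel for this, here is a minimal sketch of the flavour of such
rules, written as ordinary Haskell definitions. (Illustrative only, not
the actual module: the real Booleans rules also cover the quantifiers and
the side-condition cases, which plain Haskell cannot express directly.)

    -- conjunction and disjunction as plain rewrite rules
    conj :: Bool -> Bool -> Bool
    conj True  x = x
    conj False _ = False

    disj :: Bool -> Bool -> Bool
    disj True  _ = True
    disj False x = x

    -- negation
    neg :: Bool -> Bool
    neg True  = False
    neg False = True

These are exactly the kind of equations programmers write anyway.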
Second, every computational model will have problems with infinite loops
and bizarre behaviour of one kind or another in extreme cases. So Escher
suffers from this as well. The more relevant question is how such anomalies
affect the writing of practical, large-scale programs. Basically, two
things (other than straight incorrectness) can go wrong in Escher
programs: you can flounder (i.e. not reduce enough) or you can loop.
Each case requires a fix of some kind, which the programmer must provide.
My own experience with Escher is that these fixes are easy to find (a small
illustration of the looping case appears below).
Of course, the only way to confirm or refute this is to try it for yourself!
(This will be possible reasonably soon, I hope.) However, I really think we
should concentrate on actual, sensible programming tasks rather than the
ones at the very edge. Hence my suggestion to Sergio. Let's discuss
some realistic programs.
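Here is the promised small illustration of the looping case, as a
stand-in in plain Haskell (not Escher, but the same shape of problem and
fix): searching an infinite structure for something that never occurs
diverges, and the fix the programmer supplies is local and simple.

    -- diverges: the key 42 never appears in the infinite table
    bad :: Maybe Int
    bad = lookup 42 (zip (repeat 1) [0 ..])

    -- a typical fix: bound the part of the structure that is searched
    good :: Maybe Int
    good = lookup 42 (take 1000 (zip (repeat 1) [0 ..]))   -- Nothing

Floundering has no direct Haskell analogue; there the fix is typically
to supply enough instantiation for reduction to proceed.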
BTW, there are some more technical points about your examples which will
be addressed in a separate message.
> Here I fully agree with Sergio. What is the advantage of adding
> sets to the kernel of Curry in comparison to a module implementing sets?
I think there is a big advantage. See my reply to Sergio (and see the set
examples in the Escher paper).
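To make the comparison concrete, the module-level alternative would
presumably look something like this sketch (names hypothetical), with
sets encoded as characteristic functions:

    type Set a = a -> Bool

    emptySet :: Set a
    emptySet = \_ -> False

    insertS :: Eq a => a -> Set a -> Set a
    insertS x s = \y -> y == x || s y

    memberS :: a -> Set a -> Bool
    memberS x s = s x

    unionS :: Set a -> Set a -> Set a
    unionS s t = \x -> s x || t x

An encoding like this can test membership, but a set is then opaque:
you cannot, for example, enumerate its elements or print it, which is
the kind of limitation kernel support avoids.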
> I am open to suggestions, but the current form of concurrency
> is identical to the Goffin approach, which is considered
> more high-level than Concurrent Haskell. Moreover, if we add
> a monadic form of concurrency, we have two forms of concurrency
> in the language (an explicit one via forkIO and an implicit one
> via the delay of insufficiently instantiated redexes).
Which Goffin paper is the best place to look? Two other points: I have already
built higher-level facilities on top of the Concurrent Haskell primitives.
These are usually what the programmer wants to use. The low-level stuff is
available if needed. Second, I don't regard delay due to insufficiently
instantiated redexes as concurrency at all. It just affects the choice of
redex. The Escher concurrency is big, chunky, coarse-grained, explicit
concurrency. (I'm trying to get something written on this, so a more
detailed debate can be had.)
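For a flavour of the layering I mean, here is a minimal sketch (the name
'future' is mine, for illustration; the primitives are the standard
Concurrent Haskell ones):

    import Control.Concurrent
    import Control.Concurrent.MVar
    import Control.Exception (evaluate)

    -- a coarse-grained, explicit future: fork the work, hand back an
    -- action that blocks until the result is available
    future :: IO a -> IO (IO a)
    future act = do
      box <- newEmptyMVar
      _   <- forkIO (act >>= putMVar box)
      return (readMVar box)

    main :: IO ()
    main = do
      wait <- future (evaluate (sum [1 .. 1000000 :: Int]))
      -- ... other work runs here, concurrently ...
      r <- wait
      print r

The programmer sees only the high-level operation; forkIO and MVar stay
underneath, available if needed.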
> Maybe the mentioning of the value "Valid" is a bad idea.
> Forget about this value, it is not accessible to the user
> and only used to specify the computation model.
I would be pleased to see the demise of Valid.
> The difference between constraints and Booleans is the
> operations which you can apply on them. Booleans are (like
> in Haskell) two values which can be arbitrarily used
> (whether you call them True/False or A/B is not important).
> Constraints contain more information since they have a
> distinguished interpretation and a constraint solver which
> tries to solve the constraint w.r.t. this interpretation.
> Currently, only strict equality is supported as a constraint.
> Operations applied to constraints must be supported by
> the constraint solver, thus it is not possible to allow
> arbitrary operations on constraints. For instance, the
> logical negation of strict equations is not supported
> (I have no idea how to do it). Nevertheless, you can support
> disequations as constraints, like in Paco's Toy system,
> but this is different from the logical negation of strict equality.
> This is the reason why I think that constraints must be
> distinguished from Booleans, as in the Goffin approach.
> Constraints are more related to functions: functions are
> intended to be applied in order to compute a value,
> and constraints are intended to be solved in order
> to compute a solution.
I'm not sure I yet understand all the subtleties of this issue. I just
have the gut feeling that mathematicians and logicians have been solving
constraint systems for a very long time with the conventional mathematical
apparatus and therefore there ought to be a way to introduce constraint
solving in a more seamless way without all the extra Curry stuff. For me,
adding constraints means smoothly extending the facilities of the computational
model. It certainly doesn't mean adding a separate constraint solver and a
lot of extra apparatus and syntax.
In fact, as constraint solving can be very nicely formulated as finding the
"normal form" of a (complicated) constraint system, Escher already does
simple constraint solving. To do something more meaningful requires
manipulating several redexes at the same time. This doesn't require any extra
syntax - just a clever operational semantics. It's largely transparent to the
programmer.
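As a toy illustration of solving-as-normalisation (all names
hypothetical; this is the shape of the idea, not Escher's actual
mechanism): represent a constraint system as a conjunction of equations
and rewrite until no rule applies.

    data Term = Var String | Lit Int | Add Term Term
                deriving (Eq, Show)

    type Eqn = (String, Term)   -- an equation  variable = term

    -- one rewrite pass: substitute solved variables, fold ground sums
    step :: [Eqn] -> [Eqn]
    step eqns = map (\(v, t) -> (v, simplify t)) eqns
      where
        env = [(v, n) | (v, Lit n) <- eqns]
        simplify (Var v)   = maybe (Var v) Lit (lookup v env)
        simplify (Add a b) = case (simplify a, simplify b) of
                               (Lit m, Lit n) -> Lit (m + n)
                               (a', b')       -> Add a' b'
        simplify t         = t

    -- the normal form of the system: iterate to a fixpoint
    normalise :: [Eqn] -> [Eqn]
    normalise eqns = let eqns' = step eqns
                     in if eqns' == eqns then eqns else normalise eqns'

For example, normalise [("x", Lit 3), ("y", Add (Var "x") (Lit 1))]
yields [("x", Lit 3), ("y", Lit 4)]: the solved form just drops out as
the normal form, with no extra syntax.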
John