On Tue, 06 Oct 2015, at 11:32, Wolfgang Lux <wolfgang.lux_at_gmail.com> wrote:
> > Am 01.10.2015 um 13:45 schrieb Michael Hanus <mh_at_informatik.uni-kiel.de>:
> >
> > [...]
> >
> > Hence, the concrete proposal is as follows:
> >
> > 1. Drop references to the type Success, use Bool instead.
>
> I'm a bit undecided on this one. Personally, I like the fact that functions with result type Success are the equivalent of predicates in Prolog in that they can either be satisfied or fail (whether finitely or by non-termination). Using Bool instead of Success could allow distinguishing finite failures from non-termination.
How does non-termination fail?
How are finite failures distinguished from non-termination?
Non-termination gives us no information: we just keep waiting forever,
not knowing whether some answer (failure or success, true or
false) will become available the next second.
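To make the two situations concrete, here is a small Curry sketch (the names are mine, purely illustrative):

```curry
-- finiteFail True fails finitely: no rule matches, and an
-- interactive system can report that the search space is exhausted.
finiteFail :: Bool -> Bool
finiteFail False = True

-- loop never terminates; from the outside this is indistinguishable
-- from a computation that will still deliver an answer later.
loop :: Bool
loop = loop
```

An interpreter can print "no value" for `finiteFail True` once the search space is exhausted, but for `loop` it can only keep running.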
> However, I'm also a bit wary of introducing a new issue with guards and if-then-else cascades where people could be confused about why the else case of a conditional or the remaining guards of a definition are not chosen. E.g., given a simple (directed) graph
> data Corner = E1 | E2 | E3
> edge E1 E2 = True
> edge E2 E3 = True
> the (naive) definition
> connected e1 e2
> | edge e1 e2 = True
> | otherwise = let e3 free in edge e1 e3 && connected e3 e2
> will not work as expected (no result rather than True for connected E1 E3) because the otherwise guard will never be evaluated: the first guard fails instead of reducing to False. To make it work you would have to change edge into a total function first. With the Success type this error could not happen, because you can have at most one guard of type Success and you can't use if-then-else either.
I agree. Recent work on default rules is a positive step in this
direction. We should discuss an explicit "ifexists ... then ..."
construct.
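To illustrate the totality point (my sketch, not from Wolfgang's mail): note that the naive fix of adding a catch-all rule `edge _ _ = False` would itself be wrong, since overlapping rules in Curry are applied non-deterministically, so `edge E1 E2` would yield both True and False. Until default rules are available, the missing cases have to be enumerated explicitly:

```curry
data Corner = E1 | E2 | E3

-- A total version of edge: every pair of corners is covered, and no
-- two rules overlap, so the function stays deterministic and can
-- still be used with narrowing on free arguments.
edge :: Corner -> Corner -> Bool
edge E1 E1 = False
edge E1 E2 = True
edge E1 E3 = False
edge E2 E1 = False
edge E2 E2 = False
edge E2 E3 = True
edge E3 _  = False

connected :: Corner -> Corner -> Bool
connected e1 e2
  | edge e1 e2 = True
  | otherwise  = let e3 free in edge e1 e3 && connected e3 e2
```

With this total edge, `connected E1 E3` falls through to the otherwise guard, narrowing e3 to E2 and yielding True as intended.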
> > 2. Replace occurrences of "=:=" by "==" and define "=="
> > as a flexible function based on narrowing.
>
> I like the idea of defining (==) as a flexible function. In particular, this feels much more natural in the context of an implementation with type classes, where (==) would become a user-defined function for most types anyway. It would require a lot of discipline from the user (or some compiler magic) to make all instances rigid instead. There is an issue with the equality on the built-in types Char, Int and Float. We could either keep equality rigid for these types, or solve equality by unification for them but lose completeness (unless the implementation supports disequality constraints).
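As a rough illustration of the trade-off (my example, the classic last function, not taken from the proposal): a flexible (==) lets a guard bind free variables by narrowing, but unlike (=:=) it also explores bindings that make the equation False.

```curry
-- Assuming a flexible (==), the guard narrows ys and x so that
-- ys ++ [x] equals the argument list:
last :: [a] -> a
last xs | ys ++ [x] == xs = x
  where ys, x free

-- With narrowing, the search also visits bindings of ys and x for
-- which the guard is False (e.g. lists of the wrong length); with
-- ys ++ [x] =:= xs, unification computes exactly the one solution.
```

This is precisely where the sizes of the search spaces diverge between the two readings of equality.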
>
> Anyway, whatever change is made here should be extended to the polymorphic ordering function compare :: a -> a -> Ordering as well.
>
> However, I'm afraid that replacing (=:=) by (==) everywhere and having the compiler automatically convert (==) into (=:=) with a transformation as outlined in the LOPSTR paper is a non-starter for me. I fully understand that having two different equality operators presents a problem for beginners (and perhaps also for more experienced programmers). But I don't think that hiding the difference between narrowing and unification behind a compiler transformation is a good idea. It may lower the hurdle at the beginning, but it introduces a whole raft of problems later. In particular, my problem is that the transformation outlined in the paper does not seem to be stable under simple program transformations. For instance, it looks like the equalities in the guard of
> f (x1,x2,x3) (y1,y2,y3) | x1==y1 && x2==y2 && x3==y3 = …
> would be evaluated by unification rather than narrowing. However, I'm not sure that the same would happen for
> f' (x1,x2,x3) (y1,y2,y3) | and (zipWith (==) [x1,x2,x3] [y1,y2,y3]) = …
> It's also not clear to me how the transformation works with separate compilation. Consider
> module M1 where { f x y | x == y = True }
> module M2 where { import M1; g x | f x y = y where y free }
> versus
> module M1' where { f x y = x == y }
> module M2' where { import M1'; g x | f x y = y where y free }
> In the first case it is clear that the guard of M1.f would be evaluated using unification, but in the second case this would not happen unless you have an optimizing compiler that inlines the definition of M1'.f in the guard of M2'.g. I'm fairly concerned that such subtle changes in the program's code can potentially have dramatic consequences on the performance of programs (by changing the size of the search space) and possibly even result in an error (due to a non-deterministic choice in an IO context when narrowing is used, while the program would remain deterministic if unification was used).
>
> So, while I understand the problem with two different equality operators in the beginning, I'd prefer to keep them, since I think it saves a lot of problems in the long run if the user remains in control of whether an equality is solved by unification or by narrowing and does not depend on somewhat brittle program transformations by the compiler.
>
> Wolfgang
>
>
> _______________________________________________
> curry mailing list
> curry_at_lists.RWTH-Aachen.DE
> http://MailMan.RWTH-Aachen.DE/mailman/listinfo/curry
Received on Thu Oct 15 2015 - 17:15:50 CEST