Re: New PAKCS release (Version 1.8.0)

From: Wolfgang Lux <wlux_at_uni-muenster.de>
Date: Thu, 05 Apr 2007 10:29:42 +0200

Bernd Brassel wrote:

[Note: Bernd's mail was -- probably inadvertently -- not sent to the
list. For reference, I have quoted it fully at the end.]

> Wolfgang Lux wrote:
> Of course using constraint solvers would be much better for this task.
> But until then, why is it so bad to have a basis in narrowing? If
> narrowing were so bad, we would not need to develop Curry any longer.

Come on, I never said that narrowing is bad. My point is only that
narrowing is not the best approach for some domains (in particular
the primitive types Int and Float because the search space is so
huge).

> [...]
> So why not have it like this from the start:
>
> x<'1 & delay (x<'y) & delay (y<'x+3) & x>'0 where x,y free
>
> Where delay "magically" makes sure that the other constraints are
> preferred. Maybe one has to write it like this
>
> solve [[x<'1,x>'0],[x<'y,y<'x+3]]
>
> to achieve this effect with less magic.
>
> But in any case _delaying_ a constraint should never mean "stop with a
> run-time error if only delayed constraints are left" but should begin
> to solve them in that case. This of course would require that free
> variables are always narrowable when produced. So is that still a case
> of losing precious infants?

So are you saying that for
   solve [[],[x<'y,y<'x+3]]
the system should try solving the constraints by instantiating x and y?
If so, in what order? x first or y first? Instantiate integers in the
sequence 0,1,-1,2,-2 or from minInt to maxInt (if the implementation
has a finite Int type only, of course)? I think the current approach
of suspending such computations is just an indication to the user that
the system needs more help in order to instantiate the variables
reasonably. However, I am the first to admit that the current
implementations are not very helpful in finding out the cause of such
errors. (Okay, PAKCS' :set +suspend may be a first step in that
direction; I will try adding something similar to MCC.)
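
(Just to make the design space concrete: such an enumerator is trivial
to write down, but any particular order is an arbitrary choice. A
sketch in Curry, where the name someInt is made up:

   -- enumerate the integers in the order 0,1,-1,2,-2,... using the
   -- non-deterministic choice operator (?)
   someInt :: Int
   someInt = enum 0
     where enum n = n ? enum (if n > 0 then -n else 1 - n)

A goal like x =:= someInt & x<'y would then instantiate x in exactly
this order.)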

However, I do not think that it would be more helpful if the system
naively started to instantiate free integer variables. For example,
your initial goal
   x<'1 & x<'y & y<'x+3 & x>'0 where x,y free
has no solution at all, since x>'0 and x<'1 together exclude every
integer x (I don't know if that was on purpose). Yet, a naively
narrowing implementation would not even report that. Assuming an
implementation is smart enough to recognize that it can use the
constraint x<'1 in order to instantiate x with decreasing integers
0,-1, etc., and that it should solve the constraint x>'0 next (instead
of trying to instantiate y in x<'y), the implementation would still
narrow to death (or rather to the end of the user's patience), because
the search space is so huge (in particular if the implementation uses
arbitrary-precision integers).

So, until Curry includes real constraint solving capabilities, I believe
that it is better to have the user provide a reasonable range in which
to narrow integer variables than to have the system guess one. Note that
it is in fact trivial to define a suitable generator in Curry, provided
that you can determine some bounds for the solution:
   x =:= choose [m..n]
where
   -- choose non-deterministically returns some element of its
   -- argument list (the overlapping rules introduce the choice)
   choose (x:_)  = x
   choose (_:xs) = choose xs
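
For instance, if one happens to know that all solutions of your
constraints lie in, say, [0..10] (the bounds and the name goal are
made up for illustration), the goal can then be solved without any
suspension:

   -- bind both variables with bounded generators; the rigid
   -- constraints wake up as soon as their arguments are ground
   goal | x =:= choose [0..10] & y =:= choose [0..10]
          & x <' y & y <' x+3
        = (x,y)
     where x,y free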

>
>> Probably not. However, I would question why unknown (or a free
>> variable declared with a let/where free declaration for that matter)
>> should have a Narrowable constraint right from the start. Also note
>> that even with a Narrowable constraint, it would be possible to apply
>> primitive functions to a free variable causing suspension of those
>> functions.
>
> You would need to narrow the variable before applying the primitive
> function.

Sorry, I somehow missed the point that even though the type system
does not prevent one from applying a primitive to an unbound variable
(assuming that free variables must be instances of a Narrowable class),
it does prevent one from creating that variable in the first place.
So, there is no way (but also no need) to narrow at all.
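
To spell this out (purely hypothetical, since current Curry has no
type classes, and all names below are made up):

   -- if creating a free variable required a Narrowable context ...
   class Narrowable a where
     narrow :: a                  -- non-deterministic generator

   unknown :: Narrowable a => a
   unknown = narrow

   -- ... then, without an instance Narrowable Int, a free variable
   -- of type Int could never be created in the first place, so a
   -- primitive like (+) would never encounter one.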

> [...]
> This is an interesting thought. What my statements above would indeed
> require is that there be search if you look for a solution where
> x===y is false. But why and when is this search unnecessary?

The search may be unnecessary because some other part of the computation
may be able to provide a value for one of the variables. Of course, this
would not be a problem if Curry's operational semantics required
stability, i.e., free variables are instantiated only when no other
deterministic computation can be run. But then, enabling stability by
default can lead to non-termination in a few cases where all solutions
could be computed without stability. (As an aside, this is the main
reason why MCC does not enable stability by default, even though the
implementation has supported stability for ages.) Therefore, I think
that one should not rely on the system getting search strategies right
without some help from the user.
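
(A contrived sketch of the kind of interaction I have in mind; the
names f and goal are made up:

   f True  = success
   f False = success

   goal | f x & x =:= True = x   where x free

Without stability, an implementation may evaluate f x first and
instantiate x non-deterministically to True or False, creating a
branch that immediately fails. With stability, the deterministic
conjunct x =:= True runs first and binds x, and f x then reduces
without any search.)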

Regards
Wolfgang


> From: Bernd Brassel <bbr_at_informatik.uni-kiel.de>
> Date: 4. April 2007 10:42:41 MESZ
> To: Wolfgang Lux <wlux_at_uni-muenster.de>
> Subject: Re: New PAKCS release (Version 1.8.0)
>
> Wolfgang Lux wrote:
>> I would (also?) like to see Curry implementations solve equations like
>>    x + 1 =:= 4
>> by binding x to 3. However, if they do not at present, this is not the
>> fault of (=:=) but of (+), which requires ground arguments. And in
>> fact, if Curry implementations will solve that equation in the future,
>> I hope that they will not do it by narrowing, but by using the
>> algebraic laws of integer arithmetic (i.e., using a constraint solver).
>
> Of course using constraint solvers would be much better for this task.
> But until then, why is it so bad to have a basis in narrowing? If
> narrowing were so bad, we would not need to develop Curry any longer.
>
>> IMHO, the presence of rigid functions is one of the advantages of
>> Curry as it gives the programmer better control over when search
>> should take place and when not. By a judicious use of rigid functions
>> one can improve performance (and in some cases even ensure
>> termination). On the other hand, the price to pay is that there are
>> cases where goals may flounder.
>
> The kind of control you mention could even be improved without rigid
> functions. For instance, in the constraints
>
>    x<'1 & x<'y & y<'x+3 & x>'0 where x,y free
>
> with
>
>    x <' y | x<y = success
>
> the constraints (x<'1) and (x>'0) prune the search space very well.
> Only (x<'y) and (y<'x+3) may behave badly. But as the suspension is
> defined on the level of <' (Int being the primitive type), you can't
> do anything about it.
> The situation would only be slightly better if <' somehow did narrow
> its arguments. You could then delay the computation of the bad
> constraints like this:
>
>    x<'2 & ensure x <' ensure y
>         & ensure y <' ensure x+3
>         & x>'0 where x,y free
>
> But now you will not get the desired results. There are two solutions,
>
>    [x/1,y/2] and [x/1,y/3],
>
> which could efficiently be computed by the "bad" constraints after the
> "good" ones have been solved; instead you get a suspension.
>
> So why not have it like this from the start:
>
> x<'1 & delay (x<'y) & delay (y<'x+3) & x>'0 where x,y free
>
> Where delay "magically" makes sure that the other constraints are
> preferred. Maybe one has to write it like this
>
> solve [[x<'1,x>'0],[x<'y,y<'x+3]]
>
> to achieve this effect with less magic.
>
> But in any case _delaying_ a constraint should never mean "stop with a
> run-time error if only delayed constraints are left" but should begin
> to solve them in that case. This of course would require that free
> variables are always narrowable when produced. So is that still a case
> of losing precious infants?
>
>> Probably not. However, I would question why unknown (or a free
>> variable declared with a let/where free declaration for that matter)
>> should have a Narrowable constraint right from the start. Also note
>> that even with a Narrowable constraint, it would be possible to apply
>> primitive functions to a free variable causing suspension of those
>> functions.
>
> You would need to narrow the variable before applying the primitive
> function.
>
>> Maybe we have lost context here. The above statements were coming
>> from a discussion on whether (===) should be defined in terms of
>> equality and disequality constraints, namely
>>    x === y | x=:=y = True
>>    x === y | x=/=y = False
>> I read your first answer as being opposed to that definition because
>> it requires disequality constraints. And my answer to this was that
>> you need disequality constraints anyway for a definition of (===)
>> unless you (arbitrarily!) restrict (===) to algebraic data types. I
>> do not see the relation of this to evaluation order.
>
> I would never arbitrarily restrict (===) to algebraic data types. Of
> course I would only do it deliberately.
> And thank you for helping me in this consideration which is really
> difficult and should be viewed from all angles.
>
>> Incidentally, I have another argument in favor of (=:=) and (=/=) as
>> primitives instead of (===). x=:=y can be solved even if both
>> arguments are unbound variables. For (===) you can do this only by
>> either introducing disequality constraints again or by instantiating
>> both arguments non-deterministically, leading to possibly excessive
>> and unnecessary search.
>
> This is an interesting thought. What my statements above would indeed
> require is that there be search if you look for a solution where
> x===y is false. But why and when is this search unnecessary?
> Indeed it is only necessary if you have unbound variables in the
> result of the main goal. If you want to have a solution, e.g., for
> (x,y,x===(y::Bool)) you would get
>
> (x,x,True)
> (False,True,False)
> (True,False,False)
>
> Indeed I would not mind getting four solutions:
>
> (True,True,True)
> (False,False,True)
> (False,True,False)
> (True,False,False)
>
> But if you have a goal whose evaluation really needs the values of
> the variables, the search space looks as in this example:
>
>    (x=:=y) &> (x && y)
>
>    =>
>          /   \
>       x=T     x=F
>        |       |
>       y=T     y=F
>        |       |
>      x&&y     x&&y
>
>    =>
>          /   \
>       x=T     x=F
>        |       |
>       y=T     y=F
>        |       |
>      True     False
>
>
> Whereas with the better =:= it looks like this:
>
>    (x=:=y) &> (x && y)
>
>    =>
>       x=y
>        |
>      x&&y
>
>    =>
>       x=y
>       / \
>    y=T   y=F
>     |     |
>     T     F
>
> So the search is neither that excessive nor that unnecessary.
> Unnecessary search arises only if you make variable bindings you do
> not need in the result. But I am not sure that this is the kind of
> program we need to speed up. It seems to me that it is just the same
> as evaluating terms by seq or ($!!) or whatever when you never need
> any of their (sub)results. Nobody would assume that this makes his
> programs faster.
>



_______________________________________________
curry mailing list
curry_at_lists.RWTH-Aachen.DE
http://MailMan.RWTH-Aachen.DE/mailman/listinfo/curry