Subject: Re: [boost] Yap's formal review is starting now!
From: Brook Milligan (brook_at_[hidden])
Date: 2018-02-10 16:05:06
> On Feb 8, 2018, at 2:15 PM, Zach Laine via Boost <boost_at_[hidden]> wrote:
> Ok, some things to note:
> evaluate() currently has two modes, speaking roughly. The first mode is to
> evaluate an expression by doing whatever the built-in operators and
> existing function calls in the expression would do. This can be extremely
> useful in many situations when you write code using Yap, especially
> transforms where you want to at least partially default-evaluate some
> subexpression. The second is to evaluate the expression using custom code
> that the user has specified using customization points; there are
> customization points for every overloadable operator, among others.
> This second mode is essentially a way of doing implicit transforms, and is
> really only there for Proto parity. I've never liked it, and am on the
> fence about cutting the customization points entirely. The implicit nature
> of the customization is at the heart of my problem with this feature. A
> good abstraction is used explicitly, but hides its implementation details.
> These customization points do the implementation hiding bit just fine, but
> you can't even tell you're using them when looking at a particular line of
> code -- does yap::evaluate(a + b) yield a sum, or launch a missile? Who's
> to say? I have to go code spelunking to find out. This is at odds with
> good code practice emphasizing local reasoning. If I had not wanted Proto
> feature parity, I *never* would have implemented a library like this.
Thank you, Zach. This is the first explanation that I have seen that clearly lays out your design philosophy, although even this is a bit implicit. To be more explicit, let me restate what I think you are saying.
You feel that expression templates should essentially provide lazy evaluation of expressions with the same semantics they would otherwise have; the semantics of evaluating an expression should not be changeable in the process of the evaluation. Changing semantics should only result from an explicit transformation of the expression into some new form corresponding to the new semantics. Coming from a Proto world, this is quite a different view that should be clarified in the documentation.
If this is the world view, then it seems that implicit transforms should be removed. Alternatively, they could be retained, but with _much_ clearer documentation stating that the use case exists for "Proto compatibility" and is not really "approved".
For my own ongoing use of Yap, your comments above have been really helpful in clarifying what you consider to be best practices. Keeping the idiom I described in mind (please correct me if I'm getting this wrong) is very helpful for rethinking my code base.
However, this points to a concern I have long had, which is that the documentation does not lay out philosophies, guidelines, best practices, etc. Even though I have worked with Yap for a year and with Proto before (maybe that poisoned me), I have apparently been thinking about this wrongly.
The examples are fine for what they do, but they are not sufficient to explain _why_ a certain solution is appropriate. Thus, I strongly urge you to revisit the documentation with an eye toward addressing this gap.
> This transforms all your terminals to new terminals, using your custom code
> that applies your terminal + context operation. You can then just eval()
> the transformed expression using the default evaluate(), transform() it
> again with a different transform, etc. I had to make up the constexpr
> function kind_for<>() that maps expression-kind tag types to yap::expr_kind
> values (which I'll probably add now that I see it is needed *TODO*), but
> the rest is just typical Yapery.
Thanks for this example. It clarifies a lot. And yes, please support this fully.
> Now, this has the downside that if you have a very large number of
> terminals, you may have some expensive copies going on, because you are
> copying the entire tree. This implies to me that the most important issue
> is whether the evaluate-as-you-transform-because-the-tree-is-huge use case
> is of primary or secondary concern. My expectation is that it is of
> secondary concern. To expect otherwise is probably to optimize prematurely
> -- even if this issue is important, how often do users see real perf impact?
I, too, was concerned about copying, but now that I better understand the "proper" use case, I'll offer this. At least in my experience, I cannot copy some of the terminals, but I can of course copy the values that the terminals would evaluate to. It seems that, in general, those values should always be cheap to copy; otherwise the whole idea of evaluating an expression tree is fraught.
Thus, I feel that using a transform that converts terminals into cheap-to-copy values still embedded within an equivalent expression tree would not incur a major cost.
If I am getting this right, then I agree that this issue is unlikely to be a problem and the following principles make perfect sense:
- the semantics of expression tree evaluation are the same as those of native evaluation
- expression trees are to encode lazy evaluation without the option of new (or implicit) semantics
- a potentially common use case for complex terminals is to transform them into appropriate values while copying the expression, followed by evaluation of the expression newly populated with values
If I am getting this right, then this type of information is what I feel is missing from the documentation. Including it would go a long way toward making use of Yap clearer.
> In most cases, users also don't care about every possible
> expression kind -- they are designing a mini-language that uses a subset.
Please do not impose this view on the design of the library. It is absolutely not the case for my use case, as I need to support virtually all the operators.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk