
Subject: Re: [boost] [Boost-announce] [Hana] Formal review for Hana next week (June 10th)
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-06-17 19:51:05


On 17 Jun 2015 at 9:36, Louis Dionne wrote:

> Actually, a lot of the names are very close. I just checked Eric's range
> library and there are a __lot__ of resemblances. I guess there was a common
> inspiration (Haskell?) or a lot of luck. I'll try to converge even more
> towards his naming.

Great minds think alike!

But no, more seriously: at a high level Hana could become "Ranges for
compile time" and fit hand in glove with Ranges for run time.

This is why I think David Sankel's preference for smaller, reusable,
single-purpose, lower-level solutions is misguided. I'd ordinarily
agree with that assessment of his, by the way, in 98% of cases, but not
in this one specific case. I think Hana could potentially be
standards material; indeed, I have thought this since you first argued
for it instead of an MPL11. If you can match Eric's algorithms as
closely as you can, and indeed if Eric can match your algorithms as
closely as he can, I think Hana could be in C++22.

> > I also think all the Concepts need to match naming with Eric's, and
> > eventually in the future that both libraries use identical Concept
> > implementations (which I assume would then be upgraded with Concepts
> > Lite on supporting compilers). I'd suggest therefore breaking the
> > Concepts stuff out into a modular part easily replaceable in the
> > future with a new library shared by Hana and Ranges.
>
> That's a noble goal, but it is completely impossible without some redesign
> of Range-v3's concepts. Range-v3 is based on runtime concepts like
> ForwardRange, which is itself based on ForwardIterator. There is just
> no way heterogeneous containers can be made to model these things, even
> if we were to have an iterator-based design like Fusion. The problem here
> lies in compile-time vs runtime. But I could be wrong, and maybe Eric can
> tell us more about that.

I can appreciate that Range's Concepts emulation right now may not be
able to fit. But Range's Concepts Lite surely must be able to fit by
definition.

That said, I've never used a Concept in my life, so I am probably not
understanding what you mean by runtime concepts. I know Ranges
extends Iterators, but unless I missed something I had thought that
Ranges only did that for backwards compatibility, and that Ranges
could be used purely functionally. It's those pure functional parts I
refer to: I am imagining a world where for compile-time functional
programming you reach for Hana, and for run-time functional
programming you reach for Ranges. Both are opposites, but
simultaneously the same thing. If that makes sense.

> I can see the charts on Google Chrome. Perhaps you reloaded the page
> a ton of times? If so, there's a limit on the number of queries one
> can do to fetch the data sets from GitHub. After something like 50
> reloads, you have to wait one hour. It shouldn't be a problem for
> most people, though.

Sigh. It's working today. Wasn't before.

> > BTW for my lightweight monad<T> I have some python which counts x64
> > ops generated by a code example and gets the CI commit to fail if
> > limits are exceeded, if you're interested for the benchmarking and/or
> > making hard promises about runtime costs.
>
> I might be interested; where's that code?

Have a look at
https://github.com/ned14/boost.spinlock/tree/master/test/constexprs.
The key files are:

* All the *.cpp files, each of which is a test case

* with_clang_gcc.sh and with_msvc.bat - These scripts compile every
*.cpp file into an object file, then disassemble it. If you add -g to
the compiler flags, you'll get interleaved source + assembler, which
is very handy for seeing which source is causing opcodes to appear.

* count_opcodes.py - This is the world's worst x64 opcode counter. I
*really* don't want to have to write a full assembler parser in
Python, so this nasty hack of a script tries to inline all function
calls in your chosen example function test1(). This step is necessary
because the compiler doesn't merely output the code you compile, but
also the headers you drag in, so you need some postprocessing to
extract just the parts you care about. By "world's worst" I mean it
gets confused very easily, and will exit fatally if it gets itself
into a loop. If you keep your test cases small, and always compile to
x64, not x86, it generally works well enough.

The with_clang_gcc and with_msvc scripts output two things:

1. A CSV history of every opcode count for all past builds. You can
feed that to Jenkins to plot as a graph, or just use it to debug when
you broke something. I've personally found the CSV history much more
useful than I originally expected.

2. A JUnit XML unit test results file with the pass/fail status for
each test, the opcode count, and just for fun a dump of the assembler
produced. This displays very nicely in Jenkins, and you can have
Jenkins email you when you broke something.

> > > * Tests
> >
> > Tests should be capable of using BOOST_CHECK and BOOST_REQUIRE as
> > well as static_assert. It should be switchable. You can then feed the
> > Boost.Test XML results into the regression test tooling.
>
> What about tests that fail/pass at compile-time? How does the MPL handle
> that? Also, passing a BOOST_CHECK assertion does not necessarily mean
> anything for Hana, since sometimes what we're checking is not only that
> something is true, but that something is true __at compile-time__. How
> is the MPL integrated with the regression test tooling? I think that is
> closer to what Hana might need?

Strictly speaking you should use compile-fail tests in Boost.Build,
i.e. a suite of test case programs where not failing to compile with
the right error is itself a test failure. Or the equivalent in CMake.
I assume that if you get into Boost, you'll need to convert to
Boost.Build anyway.

However, I suspect that a large chunk of your tests don't strictly
need to be compile-time failures. They could be switched to runtime
with a macro, and therefore output XML for the regression tester to
show.

> Thanks a lot for your review and comments during the past year; you've
> been providing me with invaluable feedback and I appreciate that.

Thank you Louis for taking the time and very substantial effort to
bring us Hana. I have spent the last three weeks or so template
metaprogramming for my lightweight monad<T>, and it has reminded me
how much I dislike template metaprogramming. I take my hat off to
you.

Niall

-- 
ned Productions Limited Consulting
http://www.nedproductions.biz/ 
http://ie.linkedin.com/in/nialldouglas/



Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk