From: Peter Dimov (pdimov_at_[hidden])
Date: 2007-06-04 14:51:23
Gennadiy Rozental wrote:
> "Peter Dimov" <pdimov_at_[hidden]> wrote in message
>> Gennadiy Rozental wrote:
>>> "Peter Dimov" <pdimov_at_[hidden]> wrote in message
>>>> My current development model is sync against CVS HEAD, do work,
>>>> commit, check test results, fix. My use model is sync against CVS
>>>> HEAD, compile project, yell at whoever introduced a regression in
>>>> the boost component I'm
>>>> using. This works well for me and I'd like to keep working in a
>>>> similar way.
>>> IMO this is not the desirable scheme in general. Actually this is
>>> exactly what we *should not* be doing IMO.
>> It works for me.
>> As a Boost user, I simply don't use Boost components whose HEAD
>> versions are broken.
> An "average" Boost user is primarily interested in the latest released version.
Almost; the user is typically interested in a version that works and
contains the libraries s/he needs. A suitable release may not exist (yet).
>> As a Boost developer, if a dependency takes too much time to
>> stabilize, I sever ties with it and reimplement the parts I need.
>> This is rare since I have low tolerance for dependencies anyway. :-)
> What if you depend on serialization, or a GUI lib, or an XML parser? It
> might not be possible to "reimplement" all your dependencies.
In this case I'll fix these libraries myself.
> And this is not a
> good practice in general IMO, since you are causing breakage to the
> "single definition rule" at the library level.
There is no observable downside to this conceptual breakage, whereas the
breakage resulting from a failed dependency is quite visible.
>> I understand that this mindset may be unusual. Still, I find the
>> idea that the trunk is assumed to be unstable a bit odd. The trunk
>> should be stable and everyone should work to keep it that way.
> If trunk is stable, how do I test my development?
You make an incremental change, test it locally, then commit to trunk?
I didn't mean "stable" as in "guaranteed to pass all tests", more like
"stable enough for practical use".
> If I am done with my
> development when can I put it into "stable" trunk? What if I break
> something? What if N libraries merged their changes the same time.
> How long will it take to sort it out?
Large commits are always a problem. My suggestion is that we should simply
avoid large commits. If the incremental steps towards the goal are visible
in the trunk, they can be tested and the problems can be fixed as they
appear, rather than as one large batch at the end of the day.
> That's a good goal. I support it. But this tree could exist as a
> reflection of the actual tree (using svn externals):
> foo/ -> foo/trunk/boost/foo
> foo.hpp -> foo/trunk/boost/foo.hpp
> bar/ -> bar/trunk/boost/bar
> bar.hpp -> bar/trunk/boost/bar.hpp
> Now run svn update in this directory and you pull all you need.
As I understand it there are technical problems with svn:externals that make
the above not work as well as it could. But it's possible.
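For concreteness, a hedged sketch of what such an svn:externals definition might look like (the URLs and directory names are illustrative, not the actual Boost repository layout). One known limitation is visible right away: externals definitions (before Subversion 1.6) can only pull in directories, not individual files, so the flat headers in Gennadiy's scheme cannot be mapped this way.

```
# Illustrative svn:externals property on a subset checkout directory;
# set with: svn propset svn:externals -F externals.txt .
boost/foo    http://svn.boost.org/svn/boost/trunk/boost/foo
boost/bar    http://svn.boost.org/svn/boost/trunk/boost/bar
# Single headers such as boost/foo.hpp cannot be pulled in --
# svn:externals handles directories only (pre-1.6).
```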
> How can I release and test my subset if I can't compile with the trunk
> version of a library I depend on? I don't really care about the latest
> changes; I would be happy to work with the last stable version (last
> Boost release).
You can compose a release by using a specific library version. It should be
possible to use the version from the last release as a starting point.
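Pinning each external to a fixed revision is one way to express "a specific library version" in this scheme; a sketch, with made-up revision numbers:

```
# Pegged externals compose a reproducible snapshot (revisions are
# illustrative, not real Boost revisions):
boost/foo -r 12345 http://svn.boost.org/svn/boost/trunk/boost/foo
boost/bar -r 12001 http://svn.boost.org/svn/boost/trunk/boost/bar
```

Updating a single line then advances one library at a time, which is exactly the "start from the last release and roll forward" workflow.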
> I still don't see the difference. What do you win by pulling part of
> the tree?
Test failures if a library #includes "boost/foo/something.hpp" without foo
being listed as a dependency. Without this check, dependencies tend to find
their way into your library while you're asleep. :-)
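The same check could be automated by a small script (hypothetical tooling, not an actual Boost utility) that scans source text for boost includes and flags any component missing from a declared dependency list:

```python
import re

# Matches #include <boost/X...> or #include "boost/X..." and captures the
# first path component after boost/ (a subdirectory, or a flat header name).
INCLUDE_RE = re.compile(r'#\s*include\s*[<"]boost/([^/">]+)')

def undeclared_deps(source, declared):
    """Return boost components #included by `source` but absent from the
    `declared` dependency set."""
    found = set()
    for match in INCLUDE_RE.finditer(source):
        component = match.group(1)
        if component.endswith('.hpp'):   # flat header, e.g. boost/bind.hpp
            component = component[:-len('.hpp')]
        found.add(component)
    return sorted(found - set(declared))

source = '#include <boost/bind.hpp>\n#include "boost/foo/something.hpp"\n'
print(undeclared_deps(source, {'bind'}))  # -> ['foo']
```

Testing against a tree that contains only the declared dependencies achieves the same effect at compile time: the stray #include simply fails to resolve.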
>> Right. The release process basically consists of integration testing.
> And this is what we should be avoiding. There should not be a testing
> stage during release.
> Let's look at this from a different perspective: what do you find
> incorrect in my proposal?
I see nothing incorrect per se in your proposal; it's quite good. But how do
we get there from here? How many extra tools do we need? Can we implement it
in small steps?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk