
Boost Users :

From: Ovanes Markarian (om_boost_at_[hidden])
Date: 2006-11-06 03:09:53


Thanks a lot! I will give it a try as you describe and post my experiences
here :)

With Kind Regards,
Ovanes

-----Original Message-----
From: Aleksey Gurtovoy [mailto:agurtovoy_at_[hidden]]
Sent: Monday, November 06, 2006 3:11 AM
To: boost-users_at_[hidden]
Cc: Ovanes Markarian; dave_at_[hidden]
Subject: Re: [mpl::map] generating map for 150 types

Ovanes Markarian writes:
> Many thanks for your detailed reply. I think the second scenario with
> pre-generated headers convinces me more, since our compilation times
> are not optimal at all :(.

I'd still suggest trying out the first version before engaging in this
particular optimization, to see whether there are other showstoppers to
what you are trying to do: the library has not been thoroughly tested with
sequence limits this high, and, depending on the particular compiler, you
might run into internal structure overflows, ICEs, and the like, which may
force you to adopt a different approach.

> I have one more question: I tried to use the preprocess_map.py script,
> but encountered some problems.

[...]

> How do I specify the max map size here? Or did you mean that I should
> write a similar script, which pre-generates the headers on my own?

Sorry for not being specific enough about this; the script generates the
numbered 'map' forms based entirely on the corresponding source files /
headers in the '$BOOST_ROOT/libs/mpl/preprocessed/map/' and
'$BOOST_ROOT/boost/mpl/map/' directories. To bump the current 50-element
limit up to, say, 150, you need to provide the corresponding
'mapN.cpp'/'mapN.hpp' files for N up to 150, using the existing ones as an
example.

In the simplest case, assuming that you don't want the same level of
header granularity the library adheres to and are okay with jumping from
50 to 150 in one step, '$BOOST_ROOT/boost/mpl/map/map150.hpp' would look
like this:

#ifndef BOOST_MPL_MAP_MAP150_HPP_INCLUDED
#define BOOST_MPL_MAP_MAP150_HPP_INCLUDED

#if !defined(BOOST_MPL_PREPROCESSING_MODE)
# include <boost/mpl/map/map50.hpp>
#endif

#include <boost/mpl/aux_/config/use_preprocessed.hpp>

#if !defined(BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS) \
    && !defined(BOOST_MPL_PREPROCESSING_MODE)

# define BOOST_MPL_PREPROCESSED_HEADER map150.hpp
# include <boost/mpl/map/aux_/include_preprocessed.hpp>

#else

# include <boost/preprocessor/iterate.hpp>

namespace boost { namespace mpl {
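// Re-include 'numbered.hpp' once for each N in [51, 150] via
// Boost.Preprocessor file iteration, generating the numbered forms
// map51 through map150 (map50 and below come from map50.hpp above).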
# define BOOST_PP_ITERATION_PARAMS_1 \
    (3,(51, 150, <boost/mpl/map/aux_/numbered.hpp>))
# include BOOST_PP_ITERATE()
}}

#endif // BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS

#endif // BOOST_MPL_MAP_MAP150_HPP_INCLUDED

... and the corresponding 'map150.cpp' like this:

#define BOOST_MPL_PREPROCESSING_MODE
#include <boost/config.hpp>
#include <boost/mpl/map/map150.hpp>
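
For what it's worth, here's a minimal client-side sketch of how the new
header would then get picked up, assuming the usual BOOST_MPL_LIMIT_MAP_SIZE
mechanism (the key and typedef names below are made up for illustration):

// If you haven't regenerated the preprocessed headers with the script,
// fall back to on-the-fly generation:
#define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
// Makes <boost/mpl/map.hpp> pull in map150.hpp:
#define BOOST_MPL_LIMIT_MAP_SIZE 150

#include <boost/mpl/map.hpp>
#include <boost/mpl/pair.hpp>
#include <boost/mpl/at.hpp>
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_same.hpp>

struct k1; struct k2; // illustrative key types

typedef boost::mpl::map<
      boost::mpl::pair<k1, int>
    , boost::mpl::pair<k2, long>
    // ... up to 150 pairs once map150.hpp is in place
> my_map;

BOOST_STATIC_ASSERT((boost::is_same<
    boost::mpl::at<my_map, k1>::type, int >::value));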

> If I look inside this script, there is one call to main which passes
> 3 parameters, but this probably does not work. If I pass the parameters
> as in this example, I get an error that gcc is not installed.

Oh, right, sorry I forgot to mention it.

> Ok, I will install it and put it into my PATH env var, but the
> question remains: how do I specify the max map size?

Please see the above; the '<mode>' and '<boost_root>' are the only arguments
you need to specify explicitly.
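
For reference, a typical invocation would look something like the
following (adjust if the usage message the script prints says otherwise):

    python preprocess_map.py <mode> <boost_root>

where '<boost_root>' is the root of your Boost tree and '<mode>' selects
which set of preprocessed headers to (re)generate; the 'gcc is not
installed' error you saw comes from the script invoking gcc to do the
actual preprocessing.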

> Is there any way to include pre-generated headers for 150 elements
> into the next boost release?

There is a cost to doing so
(http://article.gmane.org/gmane.comp.lib.boost.user/7057):

   Increasing the default limit to, let's say, 100 elements means plus
   ~120 KB to the size of the distribution, and I'm not sure the need
   is widespread enough to make everybody pay for it.

That was a while ago, but my concern hasn't gone away :).

> Probably the unnumbered mpl::map template could go up to 50 types to
> save compilation time, but if someone wishes to go above that, he/she
> could use numbered map templates like map150. The reason I am
> suggesting this is that if someone pre-generates the headers and later
> updates the Boost version, it is possible that the pre-generated
> headers will be removed, and the project maintainer (usually not the
> programmer who generated these headers) will suddenly get compiler
> errors and will have to repeat this procedure again.

IMHO this is a general issue: how do you manage third-party library
sources when you need to make local patches to them, and how do you keep
those patches from getting overwritten accidentally? (Unless you always
work with the HEAD / latest & greatest sources _and_ have a direct channel
to the library maintainer, local patches are inevitable.)

Our answer to this question here at work is:

1) Always maintain a patch directory alongside the root directory
   for the library sources, e.g.:

       boost_root/...
       boost_patches/...

2) Make the patch directory precede the original sources in the list
   of include paths. Do this and #1 at the very moment you import the
   library into your repository.

3) When a need for a patch occurs:

   a) Copy the affected files into the patch directory, mirroring their
      relative paths, and add any new files there as well, leaving the
      originals untouched; then patch the copies. If you need to delete
      a file, don't; depending on the use case, override it with an
      empty one, or with one containing an #error directive or a
      redirecting #include (see the sketch after this list).

   b) Rebuild the library if needed (making sure to enforce #2).

4) When upgrading to a new version of the library, diff the patches
   against the new originals and delete/keep/adjust them depending on
   the results.
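
As a concrete illustration of 3a, here's a sketch of an override stub in
the patch tree (the library, file, and path names are invented for the
example):

// boost_patches/boost/some_lib/retired_header.hpp
//
// Shadows boost_root/boost/some_lib/retired_header.hpp because the patch
// directory comes first on the include path (see #2), e.g.
//     g++ -I boost_patches -I boost_root ...
//
// Option 1: fail loudly if anything still includes this header:
#error "retired_header.hpp is patched out locally; use its replacement"
//
// Option 2 (instead of the #error): silently redirect to the replacement:
// #include <boost/some_lib/replacement_header.hpp>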

> And another question: Dave said that mpl::map had some bug fixes
> between the 1.33 release and the upcoming 1.34. Will these
> preprocessing scripts break those changes?

No; I just double-checked, and all the fixes are present in the original
headers used to generate the preprocessed versions.

HTH,

--
Aleksey Gurtovoy
MetaCommunications Engineering
