From: Andy Little (andy_at_[hidden])
Date: 2006-09-06 16:07:52


"David Abrahams" <dave_at_[hidden]> wrote in message
news:87ejup8muh.fsf_at_pereiro.peloton...
> "Andy Little" <andy_at_[hidden]> writes:
>
>> "David Abrahams" <dave_at_[hidden]> wrote in message
>> news:87irk2a0rd.fsf_at_pereiro.peloton...

>> Unfortunately I couldn't check out the relative compile-time performance, as
>> I ran out of elements in Boost.Tuple when I tried making a 4 x 4 matrix, else
>> I would have stuck with it for the moment. (I opted just to use one tuple.)
>
> Especially where Boost.Tuple (a cons-list implementation) is
> concerned, I don't see why one tuple would be better than 5.

If you're talking about compile time, I don't know. If you are asking why use a
single tuple, it just seems much more convenient (I haven't really thought that
hard about it, but it seems to be working OK so far). FWIW the matrix interface
currently looks like so:

    typedef quan::rc_matrix<
        2,2,
        boost::fusion::vector4<
            quan::current::mA, quan::current::mA,
            double, double
        >
    > current_matrix;

    current_matrix cmat(
        current_matrix::elements(
            quan::current::mA(30),quan::current::mA(5),
            -2,7
        )
    );

The rc_matrix is basically just a wrapper over a boost::fusion::vector, and I
haven't bothered to overload the matrix ctor for varying arguments, but have
rather left it to the fusion sequence to sort out.
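
In case it helps to see the shape of it, the wrapper amounts to roughly this (a
simplified sketch, not the actual quan code; the fusion header path shown is the
current convenience include rather than the review-version layout):

    #include <boost/fusion/include/vector.hpp>

    namespace quan{

        template <int Rows, int Cols, typename Seq>
        struct rc_matrix{
            // the wrapped fusion sequence holding the Rows * Cols elements
            typedef Seq elements;
            Seq seq;

            // construct straight from the fusion sequence; element-wise
            // construction is left to the sequence itself
            explicit rc_matrix(Seq const & in) : seq(in){}
        };

    } // namespace quan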

So far I have implemented addition of matrices, among other things (that is
simply a direct use of the fusion transform algorithm). I had some old code
which converts operator functions into function objects. Here, FWIW, is the
main part of the add algorithm, which is pretty simple once you get through the
return type deduction:

(The relevant code is in <quan-trunk/quan/matrix/> for anyone interested. The
headers using fusion are add_subtract.hpp and rc_matrix_def.hpp.)

    template <int R, int C, typename SeqL, typename SeqR>
    typename quan::meta::binary_operation<
        quan::rc_matrix<R,C,SeqL>,
        quan::meta::plus,
        quan::rc_matrix<R,C,SeqR>
    >::type
    operator + (
        quan::rc_matrix<R,C,SeqL> const & lhs,
        quan::rc_matrix<R,C,SeqR> const & rhs
    )
    {
        typedef typename quan::meta::binary_operation<
            quan::rc_matrix<R,C,SeqL>,
            quan::meta::plus,
            quan::rc_matrix<R,C,SeqR>
        >::type result_type;

        result_type result(
            boost::fusion::as_vector(
                boost::fusion::transform(lhs.seq, rhs.seq, quan::operator_plus())
            )
        );
        return result;
    }
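
The operator-to-function-object code I mentioned predates this, but the gist of
quan::operator_plus is something like the following sketch (simplified: the
per-element result type is deduced with decltype here rather than through
quan::meta::binary_operation, and the nested result<> template is the
boost::result_of protocol that fusion::transform relies on):

    #include <utility>
    #include <boost/fusion/include/vector.hpp>
    #include <boost/fusion/include/transform.hpp>
    #include <boost/fusion/include/as_vector.hpp>

    namespace quan{

        struct operator_plus{

            // result_of protocol: tells fusion::transform the result type
            // of adding an element of the left sequence to the matching
            // element of the right sequence
            template <typename Sig> struct result;

            template <typename F, typename L, typename R>
            struct result<F(L,R)>{
                typedef decltype(std::declval<L>() + std::declval<R>()) type;
            };

            template <typename L, typename R>
            typename result<operator_plus(L const&, R const&)>::type
            operator()(L const & l, R const & r) const
            {
                return l + r;
            }
        };

    } // namespace quan

    // element-wise use, independent of rc_matrix:
    int main()
    {
        boost::fusion::vector<int, double> a(1, 2.5), b(3, 4.5);
        // c holds (4, 7.0)
        auto c = boost::fusion::as_vector(
            boost::fusion::transform(a, b, quan::operator_plus()));
        (void)c;
    }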

and here is a *= impl:

    template <int Rows, int Cols, typename Seq>
    template <typename Numeric>
    inline
    typename boost::enable_if<
        quan::meta::is_numeric<Numeric>,
        rc_matrix<Rows,Cols,Seq>&
    >::type
    rc_matrix<Rows,Cols,Seq>::operator *=(Numeric const & in)
    {
        boost::fusion::for_each(
            this->seq,
            detail::assignment_functor<
                quan::operator_times_equals,
                Numeric
            >(in)
        );
        return *this;
    }
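
The assignment_functor there just holds onto the scalar and applies the compound
assignment to whatever element for_each hands it. Stripped of the generic
operator parameter, the idea is roughly this (assumed names, not the actual quan
code):

    #include <boost/fusion/include/for_each.hpp>

    namespace detail{

        template <typename Numeric>
        struct times_equals_functor{

            Numeric value;
            explicit times_equals_functor(Numeric const & v) : value(v){}

            // fusion::for_each hands each element over by reference,
            // so the *= mutates the matrix storage in place
            template <typename Elem>
            void operator()(Elem & elem) const
            {
                elem *= value;
            }
        };

    } // namespace detail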

So all in all I reckon Boost.Fusion is quite cool :-). Of course it is probably
not as good performance-wise, but I am not too concerned about performance, and
it is more interesting to do it this way.

>> However maybe it's a good move to try out Boost.Fusion. Although the docs
>> said the move from tuple was as easy as changing from get to at, I found
>> that there was a big change, because AFAICS Fusion uses references
>> everywhere,
>
> Surely not everywhere. fusion::tuple<int,long> contains an int and a
> long, not references to int and long.
>
>> and the compiler refused to assign anything, for reasons I am not
>> clear on. Anyway, after changing from result_of::at_c to
>> result_of::value_at_c things seemed to go more smoothly. (That is
>> using the Boost Review version of Fusion.) IOW I am successfully
>> fused! FWIW compiling a 2x2, 3x3, and 4x4 with some
>> quan::quantities in, a multiply of each by itself, and some output
>> takes about 22 seconds on my AMD Athlon 1.25GHz system. The
>> trickiest part is working out an algorithm to do cofactors of the
>> matrices (to get the inverse), but I may just hard-code them, unless
>> anyone has any suggestions ...?
>
> Yeah, use the fusion algorithms to express what you'd ordinarily do
> with looping if these were homogeneous vectors/matrices.

I guess I'll start with the easy stuff :-)
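
In case it helps anyone else who hits the same assignment errors: the difference
that bit me is that result_of::at_c reports the reference type that at_c actually
returns, while result_of::value_at_c reports the plain element type, which is
what you want when declaring storage or a return value to assign to. A rough
illustration (using the current convenience headers rather than the
review-version layout):

    #include <boost/fusion/include/vector.hpp>
    #include <boost/fusion/include/at.hpp>
    #include <boost/fusion/include/value_at.hpp>
    #include <boost/static_assert.hpp>
    #include <boost/type_traits/is_reference.hpp>
    #include <boost/type_traits/is_same.hpp>

    typedef boost::fusion::vector<int, double> vec;

    // at_c's result type is a reference into the sequence ...
    BOOST_STATIC_ASSERT((boost::is_reference<
        boost::fusion::result_of::at_c<vec, 0>::type>::value));

    // ... whereas value_at_c is the element type itself
    BOOST_STATIC_ASSERT((boost::is_same<
        boost::fusion::result_of::value_at_c<vec, 0>::type, int>::value));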

regards
Andy Little

