
From: David Abrahams (abrahams_at_[hidden])
Date: 2000-12-11 19:53:58


> If exporting & importing to/from_python function pointers does not
> reduce the compilation dependencies, breaking up a huge module into
> smaller components becomes more a question of dynamic linking versus
> static linking.

Yes, as I have been suggesting to you for quite some time now, I think that
may be all (or most) of what you need ;-)

> I believe the advantages of dynamic linking are clear.
> Imagine you had to statically link Python and all the third-party
> components you are using each time you want to add a little extension
> module! The system we are working on will rival Python in complexity,
> to say the least. Being able to organize this system in dynamically
> linked components (with Python as the core) is essential.

Okay.

> Actually, it was not so much the compilation independence that
> prompted
> my proposal. There is another very important motivation that I failed
> to mention until now.
>
> Assume there are several groups of classes. In C++, each group lives in
> a certain namespace. However, there are member functions of classes
> in a given namespace that have parameters or return values that are
> objects defined in another namespace. E.g.:
>
> namespace A {
>     class A1 {};
> }
> namespace B {
>     class B1 { public: A::A1 foo(); };
> }
>
> It seems most natural to expose these classes to python in a way that
> mirrors the layout in C++:
>
> import A
> obja = A.A1()
> import B
> objb = B.B1()
> res = objb.foo() # res is A.A1!
>
> Assume further that there are a few helper functions and data
> definitions (e.g. constants) in each namespace. It is more than
> just a convenience if these items are available as A.helper(),
> A.some_constant etc.

I'm not saying that you want one monolithic extension module, but if you had
one you could still do this with some very simple python modules:

    # A.py
    from my_extension_module import A1, helper, some_constant
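
Client code then looks exactly like your example (a sketch only;
my_extension_module stands in for whatever the monolithic module is called):

    import A
    obja = A.A1()          # same spelling as before
    res = A.helper()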

> It would be nice if this concept would also work for nested
> namespaces,
> but this is probably asking too much (and probably not all that
> important).

I suggest you investigate Python's package system (search for "__init__.py"
in the docs). You can simply place your extension modules at the right place
in the package path.
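
For example (just a sketch; the package and file names here are invented), a
layout along these lines gives nested module names that mirror nested C++
namespaces:

    mypkg/
        __init__.py        # may be empty
        A/
            __init__.py    # e.g. "from _A import *"
            _A.so          # extension module wrapping namespace A
        B/
            __init__.py    # e.g. "from _B import *"
            _B.so          # extension module wrapping namespace B

    # client code:
    import mypkg.A
    obja = mypkg.A.A1()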

> Finally, there is an important psychological component. If a group of
> classes is exposed in a huge module as a few of many, the authors will
> not be as happy as they would be if their group of classes is exposed
> by a module with a recognizable name. This is not just a question of
> strong egos, but also of getting credit and funding!

I agree.

> Ruling out the option of statically linking with the Python core, the
> Python "import <name>" statement requires a physical file <name>.*.
> Therefore it seems to me that the only way to expose wrapped C++
> classes as groups is to have a separate module for each group. Does
> anyone see an alternative?

See above.
I am sympathetic to the idea of avoiding statically linking everything
together, and am willing to provide some support for that, though I think
sometimes it will make sense to link components together. Any large C++
project requires significant thought about how to isolate components from each
other's implementation details and recompilation effects. I believe that
/your/ first job must be to do that for your project.

> If the symbols seen by the dynamic loader all live in the same name
> space (as is the default under Tru64), the problem of inter-module
> communication at the C++ level reduces to ensuring that
> 1. at compile time: the to/from_python functions are declared.
> 2. at run time: the necessary modules are imported before the
> corresponding to/from_python functions are used.

That sounds right.
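
Just to make sure we mean the same thing, here is a bare-bones sketch of those
two points as I read them (names like A1_to_python and initB are invented for
illustration, not the actual library interface, and it assumes the single flat
loader namespace you describe):

    // sketch.cpp -- illustration only: a real wrapper would extract the C++
    // object from 'self' rather than default-constructing it.
    #include <Python.h>

    namespace A { class A1 {}; }
    namespace B { class B1 { public: A::A1 foo() { return A::A1(); } }; }

    // 1. Compile time: only the declaration is needed in B's sources.
    //    The definition lives in A's shared library with external linkage,
    //    so the dynamic loader can resolve it across module boundaries.
    PyObject* A1_to_python(const A::A1&);

    static PyObject* B1_foo_wrapper(PyObject* self, PyObject* args)
    {
        B::B1 b;                        // placeholder for the real object
        return A1_to_python(b.foo());   // resolved by the loader at run time
    }

    extern "C" void initB()
    {
        // 2. Run time: importing A first ensures the converter symbol (and
        //    A's class object) is loaded before any B method that needs it.
        PyImport_ImportModule("A");
        // ...register B1_foo_wrapper and the rest of module B here...
    }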

> If the symbols seen by the dynamic loader do not all live in the same
> scope (most systems; under Tru64 use the -hidden linker option), the
> to/from_python functions can be implemented as wrappers that import
> PyCObjects

No, you don't have to use PyCObjects; I would advise against it.

> holding pointers to the actual converter functions. This
> is both highly portable and, due to the use of dynamic loading (as
> opposed to statically linking the individual shared libraries against
> each other), very friendly to the end user.

I think I can implement the design I described in a previous message using
your __converters__ idea. Does that fit the bill?

-Dave

