Subject: Re: [boost] CMake - one more time
From: Paul Fultz II (pfultz2_at_[hidden])
Date: 2016-04-23 13:19:08


> On Apr 23, 2016, at 9:30 AM, Raffi Enficiaud <raffi.enficiaud_at_[hidden]> wrote:
>
> On 23/04/16 at 06:34, Paul Fultz II wrote:
>>
>>> [snip]
>>>>
>>>> Then to set each one to the same name you can use the OUTPUT_NAME property:
>>>>
>>>> set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
>>>> set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
>>>
>>> Exactly, so you artificially make CMake think that 2 different targets should end up with the same name on the filesystem. It does not work on Windows, for instance, because the import .lib of the shared library gets overwritten by the static one. This is not exactly a solution, but rather a hack (or workaround).
>>
>> It's neither; it's an optimization, because otherwise the user would just build the library twice, once for shared and once for static. Of course, this type of optimization mainly affects system maintainers, so everyday users of cmake don't see this as a big problem.
>
> Yet, having the same output name in case you build twice leads to undefined behaviour (the .lib gets overwritten), and is not natively supported by CMake (using e.g. CMAKE_ARCHIVE_OUTPUT_DIRECTORY to make the distinction does not work alone).

Yes, that's just for Windows, which would need special treatment.
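For instance (a sketch, with placeholder target names), the special treatment could keep the same OUTPUT_NAME everywhere except on Windows, where the static archive either gets a different name or a different directory so the shared target's import .lib is not clobbered:

set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
if(WIN32)
  # MyLib.lib would be both the import library of MyLib_shared and the
  # static archive of MyLib_static, so give the static one its own name...
  set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib_static)
  # ...or keep the name and separate the output directories instead:
  # set_target_properties(MyLib_static PROPERTIES
  #   ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/static)
endif()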

>
>>> We can of course iterate further (set the output folder per type, etc).
>>>
>>>>> - having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
>>>>
>>>> How does this not target all platforms?
>>>
>>> Do I have a centralized (or virtualized, like inside vagga/docker or virtualenv) and official package manager on Win32 or OSX? I know tools exist (brew, chocolatey, etc). What about the other platforms (Android)? What about cross-compilation?
>>
>> There are bpm, cget, conan, and hunter, to name a few that are cross-platform and target all platforms.
>
> You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package managers, cget is not. Why should it be official and centralized?

Bpm wouldn’t be official and centralized?

> Because
> 1/ official, because there is usually only one, or at least all official ones can work together (new Ubuntu for instance)
> 2/ centralized, because if we end up having several package managers, then it is a mess (e.g. apt vs pip-installed packages) as those do not communicate with each other. Example: I have a pip python package compiled against openCV from the system, and then I update openCV on the system.

But that's the same problem with boost now. If a boost library depended on openCV and the system updated openCV, the user would have to rebuild boost. With some form of packaging system, however, only a small set of libraries needs to be rebuilt.

>
> Also I can definitely see a problem in supporting another tool. What would happen to boost if cget is "deprecated"?

Cget is open source. Also, it's fairly non-intrusive, so it can be easily replaced by another tool if necessary.

> Example: Fink/MacPort/HomeBrew.
>
>>>>> and just translates the same problem to another layer, in my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is implicit knowledge). What this process does is put X and Y at the same level of knowledge: a flattened set of packages. BJam already does the same, but at the compilation/build step, and without the burden of the extra management of packages (updating upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you think of something else.
>>>>
>>>> I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr
>>>>
>>>> All I have to do after cloning it is run `cget build`; it will then go and grab the dependencies, because they are listed in the requirements.txt file.
>>>
>>> Then I am dependent on another tool, cget, maintained by ... you :) Also from the previous thread, if my project does not have the "standard" cget layout, then cget will not work (yet?).
>>
>> There are no layout requirements. It just requires a CMakeLists.txt at the top level (which all cmake-supported libraries have), but the library can be organized however you like.
>
> The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which is in contradiction with the part "There is no layout requirements".
> Also the "(which all cmake-supported libraries have)" is not a requirement of CMake itself, it is just a "good practice".

It is a requirement of cmake. If I call `cmake some-dir` then a CMakeLists.txt needs to be in `some-dir`. So cget just clones the repository (or unpacks a tar file, or copies a directory on your computer) and calls cmake on that directory. There are no special layout requirements.
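To illustrate, the only thing cmake itself insists on is that top-level file; something as minimal as this (names and paths are hypothetical) is enough, and the rest of the layout is entirely up to the project:

cmake_minimum_required(VERSION 2.8)
project(MyLib)
# any directory structure works below this point
add_library(MyLib src/mylib.cpp)
install(TARGETS MyLib DESTINATION lib)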

>
>>
>>> I also need another file, "requirements" that I need to maintain externally to the build system.
>>
>> But building and installing the dependencies is external to the build system anyway. The requirements.txt just lets you automate this process.
>>
>>> I do that often for my python packages, and it is "easy" but sometimes difficult to stabilize, especially with a complex dependency graph (and there can be conflicting versions, etc). I can see good things in cget, I can also see weak points.
>>
>> Currently cget doesn’t handle versions. I plan to support channels in the future, which can support versions and resolve dependencies using a SAT solver (which pip does not do).
>
> SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes a (possibly empty) intersection of half-spaces.

A SAT solver is what most package managers use to resolve constraints (dpkg, for example). Version ranges alone stop being enough once each candidate version brings its own dependency constraints: picking a consistent set across the whole graph is then a boolean satisfiability problem, not just an intersection of intervals.

>
>>> What I am saying is that you delegate the complexity to another layer, call it cget or Catkin, or pip. And developers should also do the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi).
>>
>> The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
>
> Right now it scales pretty well with BJam.

The fact that I need to download all of boost to build and test hana using bjam suggests it doesn't scale at all.

>
>>> BTW, is cget able to work offline?
>>
>> Yes.
>
> Good :)
>
>>
>>>
>>>>>
>>>>> To me this is a highly non-trivial task to do with CMake, and ends up in half-baked solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and is just making things harder for everyone.
>>>>
>>>> Cmake already handles the packaging and finding dependencies; cget just provides the mechanism to retrieve the packages using the standard cmake process. This is why you can use it to install zlib or even blas, as it doesn't require an extra dependency-management system.
>>>
>>> Well, I really cannot tell for cget. CMake finds things that are installed in expected locations for instance, otherwise the FIND_PATHS should be indicated (and propagated to the dependency graph).
>>
>> It sets the CMAKE_PREFIX_PATH (and a few other variables), which cmake uses to find libraries.
>
> What if we need conflicting CMAKE_PREFIX_PATH? eg one for openCV and another one for Qt?

CMAKE_PREFIX_PATH is a list, so both prefixes can be given at once.
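For example (paths are hypothetical), prefixes for both can be passed in one invocation and each find_package() call searches all of them:

# invoked as: cmake -DCMAKE_PREFIX_PATH="/opt/opencv;/opt/qt5" some-dir
find_package(OpenCV REQUIRED)      # resolved under /opt/opencv
find_package(Qt5Widgets REQUIRED)  # resolved under /opt/qt5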

>
>>> What if, for instance, it needs an updated/downgraded version of the upstream? How does cget manage that?
>>
>> `cget -U` will replace the current version.
>
> Does that downgrade as well?

Yes, if you give it an older version it will replace the library with that version.

>
>
>>> Is there an equivalent to virtualenv? Right now for boost, I clone the superproject, and the artifacts and dependencies are confined within this clone (up to doxygen, docbook etc).
>>
>> By default it installs everything in the local directory `cget`, but this can be changed by using the `--prefix` flag or setting the `CGET_PREFIX` environment variable.
>>
>>>
>>>>
>>>>>
>>>>> - I can continue... such as target subset selection. It is doable with CMake with, "I think", some umbrella projects, but again this is hard to maintain and requires high-level orchestration. Only for the tests for instance: suppose I do not want to compile them in my first step, and then I change my mind and want to run a subset of them. What I also want is not to waste my time waiting for a billion files to compile; I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
>>>>
>>>> I usually add the tests using this (I believe Boost.Hana does the same):
>>>>
>>>> add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})
>>>>
>>>> function(add_test_executable TEST_NAME)
>>>>   add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
>>>>   if(WIN32)
>>>>     add_test(NAME ${TEST_NAME} WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH} COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
>>>>   else()
>>>>     add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
>>>>   endif()
>>>>   add_dependencies(check ${TEST_NAME})
>>>>   set_tests_properties(${TEST_NAME} PROPERTIES FAIL_REGULAR_EXPRESSION "FAILED")
>>>> endfunction(add_test_executable)
>>>>
>>>> Then when I want to build the library I just run `cmake --build .`, and when I want to run the tests, I run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easily. I have never had the need to run a subset of tests; this usually comes up when there are nested projects, but is easily avoided when the project is separated into separate components.
>>>
>>> You are strengthening my point: you write an umbrella target for your purpose. My example with the tests was a trap: if you run "cmake --build . --target check" you end up building "all" the tests. To have finer granularity, you would have to write "add_test_executable_PROJECTX" etc. BJam knows how to do that, also with e.g. a STATIC version of some upstream library, defined at the point it is consumed (and not at the point it is declared/defined), and built only if needed, without the need to do some mumbo/jumbo with object files.
>>
>> I don't see how that is something that cmake doesn't do either.
>
> Let me (try to) explain my point with an "analogy" with templates vs overloads:
>
> What cmake can do is:
>
> -------- declare possibly N x M combinations
> targetA(variant1, compilation_options1);
> ...
> targetA(variant1, compilation_optionsM);
> ...
> targetA(variantN, compilation_optionsM);
> --------
>
> and then consume a subset of the declared combinations:
>
> --------
> targetA(variantX, compilation_optionsY);
> --------
>
> with 1 <= X <= N, 1 <= Y <= M.
>
> What BJam can do is:
>
> --------
> template <class variants, class compilation_options>
> targetA(variants, compilation_options);
> --------
>
> and then consume any:
>
> --------
> targetA(variantX, compilation_optionsY);
> --------
>
> with the same flexibility as templates: the generation of a version of targetA happens at the point it is consumed.

I do not follow this analogy at all.

>
> If you do not see to what extent it is useful, please compare the overload vs the template approach in C++.

Cmake is a fairly dynamic language, so I don’t think it is as limited as you think.
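For instance, here is a rough sketch (all names hypothetical) of a function that instantiates a variant of targetA at the point it is consumed, much like the template side of your analogy:

# Create targetA_<variant> only when something actually consumes it.
function(use_targetA OUT_VAR VARIANT)
  set(name targetA_${VARIANT})
  if(NOT TARGET ${name})  # first consumption instantiates the target
    if(VARIANT STREQUAL "static")
      add_library(${name} STATIC EXCLUDE_FROM_ALL targetA.cpp)
    else()
      add_library(${name} SHARED EXCLUDE_FROM_ALL targetA.cpp)
    endif()
  endif()
  set(${OUT_VAR} ${name} PARENT_SCOPE)
endfunction()

add_executable(consumer main.cpp)
use_targetA(dep static)  # "instantiated" here, on demand
target_link_libraries(consumer ${dep})

Since the library is EXCLUDE_FROM_ALL, it is only built when a consumer actually links it, which is roughly the on-demand behaviour you describe.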

>
>
>>> What I am saying is that it is indeed possible, I also know solutions,
>>> but this is not native to cmake.
>>
>> Yes, it's possible, and a module would help make it possible in a simpler way, although I don't know how common it is to group tests. In general, I usually just focus on one test or all the tests.
>
> Tests were just an example, and sometimes we end up doing things that are not common. At least I know that CMake and BJam do not tell me what to do; they offer the tools/language, and it is up to me to implement it the way I need it.

Yes, and the nice thing about cmake is that it leads you to a simpler, more modular design to solve the problem, instead of trying to link in 20 different library targets that are variations of shared and static builds of the same library.

>
>
>>>>>> Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
>>>>>
>>>>> Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?
>>>>
>>>> But it's not entirely technology that is missing, it's the community that is missing, and I don't think a GSoC will help create a large community for boost build.
>>>
>>> That is true. I see it as a chicken-and-egg problem also, and we have to start somewhere.
>>>
>>> Where Bjam will always lose is the ability to generate IDE environments natively, and this is a major reason why cmake will have a more lively community. I believe that a BJam-to-cmake translation is possible, but even in that case, Bjam will live in the shadow of cmake.
>>
>> Yep, and instead of competing with cmake, boost could collaborate with cmake and would have a much larger impact.
>
> Maybe CMake people are interested, but I do not see to what extent. They are de facto limited by the capabilities of the IDEs.