
From: Hans Dembinski (hans.dembinski_at_[hidden])
Date: 2021-01-06 10:36:57


> On 6. Jan 2021, at 02:07, Sergei Marchenko <serge_v_m_at_[hidden]> wrote:
>
> Thanks Hans for your thoughts.
>
> > I only had a quick glance at your Github page. The code examples do not look bad and you put a lot of examples up-front, which is good. A red flag is the use of variable names which start with _. That is discouraged. Some (not all) names starting with _ are reserved for implementers of the C++ stdlib, but there is no use going into the details. Just don't use variables starting with _ to be on the safe side and to give a good example to other C++ programmers.
>
> I absolutely agree with you on the importance of the naming conventions, and if it ever comes to the point where the library is considered for integration into Boost, I fully expect that a lot of renames will be necessary to make it consistent with the other parts. I have not considered this code to be in a position where other C++ programmers would look at it as an example, so I just used the STL naming style as a reference when I was deciding on the names.

Adding to Alexander's comments, the matter is correctly explained in the second answer to this SO question (unfortunately not the accepted answer): https://stackoverflow.com/questions/3136594/naming-convention-underscore-in-c-and-c-sharp-variables. The advice to not use variables starting with _ is given in "C++ Coding Standards" from Herb Sutter and Andrei Alexandrescu, as mentioned in that answer.

If you have not already done so, please also check https://www.boost.org/development/requirements.html,
which also has some guidelines for naming, although not on this issue specifically.

> This is definitely a good suggestion. Another possibility that I thought about is the use of the library to extend an existing solution with a small/medium NN component in the situation where cross-process or cross-environment interop is not desirable. Or when a hardware configuration is not known upfront, or a solution is targeting a wide variety of the hardware. Or when data and model size are so small that GPU acceleration would not result in a significant overall improvement. These are the niches which a NN C++ library can fill.

I think Python also supports a wide variety of hardware. You are right, of course, that it would be rather awkward for an existing C++ application to call into Python to do its ML tasks; having a native C++ library to do the job is preferable.

I am not sure about your argument regarding small data and/or model sizes. I think in most cases you want to train neural nets with large amounts of data. Can you add generic GPU support with Boost.Compute?
https://www.boost.org/doc/libs/1_75_0/libs/compute/doc/html/index.html

> To be more specific, the example application that I have in the GitHub repo for the MNIST digits dataset produces a model which can be trained to a 95% success rate in about 10-15 minutes on a single CPU core. While the example is somewhat synthetic, it is still representative of a wide variety of scenarios where an input from a sensor or a small image can be inspected by a NN component. Another application (not shown on GitHub) was a tiny model to estimate the response time of a web service API, given a small set of parameters such as the user identity, API method, and payload size; it was re-trained on every start of the web service and used to predict the resource consumption of different callers for load balancing and throttling purposes.

Those are good niche applications, I think.

Some more questions:

Are you building the network at compile-time or run-time? It looks from your examples like it is compile-time. I think your library should offer both. Building the network at compile-time may give some speed benefits as it can gain from compiler optimisations, but it would require re-compilation to change the network itself. Building the network at run-time means you can change the network without re-compiling. This is useful, for example, when you want to read the network configuration (not only its weights) at run-time from a configuration file.

It is possible to offer both implementations under a unified interface, as I am doing in Boost.Histogram. Other examples of this pattern are std::span, which supports both fixed and dynamic extents, and the Eigen library, which supports both fixed-size and dynamically-sized matrices.

I would tentatively endorse this project, but it would be good to have a second opinion from senior Boost members.

Best regards,
Hans


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk