

From: Sergei Marchenko (serge_v_m_at_[hidden])
Date: 2021-01-06 01:07:38


Thanks, Hans, for your thoughts.

> I only had a quick glance at your Github page. The code examples do not look bad and you put a lot of examples up-front, which is good. A red flag is the use of variable names which start with _. That is discouraged. Some (not all) names starting with _ are reserved for implementers of the C++ stdlib, but there is no use going into the details. Just don't use variables starting with _ to be on the safe side and to give a good example to other C++ programmers.

I absolutely agree with you on the importance of naming conventions, and if it ever comes to the point where the library is considered for integration into Boost, I fully expect that a lot of renames will be necessary to make it consistent with the other parts. I had not considered this code to be in a position where other C++ programmers would look at it as an example, so I simply used the STL naming style as a reference when deciding on the names.
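
For what it is worth, here is a small sketch of the rule and of the alternatives I am considering; the class and member names below are made up purely for illustration:

    // Reserved by the C++ standard, regardless of library or project:
    //   - any identifier containing a double underscore:   my__weights
    //   - an underscore followed by an uppercase letter:    _Tensor
    //   - a leading underscore at global namespace scope:   _size
    //
    // Safe alternatives for private data members:
    #include <cstddef>

    class layer
    {
    public:
        explicit layer(std::size_t size) : size_(size) {}

    private:
        std::size_t size_;     // trailing underscore avoids all reserved patterns
        // std::size_t m_size; // or an "m_" prefix, used in some Boost libraries
    };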

> What would be the niche for this library? A NN C++ library would have to compete with the extensive amount of high-quality NN software that already exists in Python.

The niche for a NN C++ library is an excellent question. As you correctly point out, Python is the de facto standard toolset for prototyping and experimenting with new types of neural layers and network configurations. Offering support for hardware acceleration, for example via GPU or FPGA, immediately raises the question of which hardware to support and which low-level library to use to interact with it. At this point I am not certain what the answers should be, and I am hoping to get suggestions from the community.

> I think the niche could be embedded systems. For prototyping and training a NN, Python is certainly the better choice, but once you have the final network, you may want to put it on an embedded system to do its work there. An embedded system does not have a GPU, so not supporting GPU computations wouldn't be a disadvantage.

This is definitely a good suggestion. Another possibility I thought about is using the library to extend an existing solution with a small or medium NN component where cross-process or cross-environment interop is not desirable, where the hardware configuration is not known up front or the solution targets a wide variety of hardware, or where the data and model are so small that GPU acceleration would not yield a significant overall improvement. These are the niches that a NN C++ library can fill.

To be more specific, the example application in the GitHub repo for the MNIST digits dataset produces a model that can be trained to a 95% success rate in about 10-15 minutes on a single CPU core. While the example is somewhat synthetic, it is still representative of a wide variety of scenarios where input from a sensor or a small image can be inspected by a NN component. Another application (not shown on GitHub) was a tiny model to estimate the response time of a web service API call, given a small set of parameters such as the user identity, API method, and payload size; it was re-trained on every start of the web service and used to predict resource consumption by different callers for load-balancing and throttling purposes.
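
To give a sense of the scale, below is a minimal, self-contained sketch (plain C++ on the CPU, deliberately not using the library's API) of the kind of tiny single-hidden-layer regression model the second example describes; the data-generating formula, sizes, and hyper-parameters are made up for illustration:

    // Tiny 1-8-1 feed-forward regression network trained with SGD on
    // synthetic (payload size -> response time) data. CPU only.
    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main()
    {
        constexpr int hidden = 8;
        std::mt19937 rng(42);
        std::normal_distribution<double> init(0.0, 0.5);

        // Parameters: hidden weights/biases and output weights/bias.
        std::array<double, hidden> w1, b1, w2;
        double b2 = 0.0;
        for (int j = 0; j < hidden; ++j) { w1[j] = init(rng); b1[j] = 0.0; w2[j] = init(rng); }

        // Synthetic training set: x = normalized payload size,
        // y = "response time" with a mild nonlinearity plus noise.
        std::uniform_real_distribution<double> ux(0.0, 1.0);
        std::normal_distribution<double> noise(0.0, 0.02);
        constexpr int n = 256;
        std::array<double, n> xs, ys;
        for (int i = 0; i < n; ++i) {
            xs[i] = ux(rng);
            ys[i] = 0.1 + 0.5 * xs[i] + 0.3 * xs[i] * xs[i] + noise(rng);
        }

        const double lr = 0.05;
        for (int epoch = 0; epoch < 2000; ++epoch) {
            double loss = 0.0;
            for (int i = 0; i < n; ++i) {
                // Forward pass.
                std::array<double, hidden> h;
                double y_hat = b2;
                for (int j = 0; j < hidden; ++j) {
                    h[j] = std::tanh(w1[j] * xs[i] + b1[j]);
                    y_hat += w2[j] * h[j];
                }
                const double err = y_hat - ys[i];
                loss += err * err;

                // Backward pass + SGD update for squared-error loss.
                for (int j = 0; j < hidden; ++j) {
                    const double dh = err * w2[j] * (1.0 - h[j] * h[j]);
                    w2[j] -= lr * err * h[j];
                    w1[j] -= lr * dh * xs[i];
                    b1[j] -= lr * dh;
                }
                b2 -= lr * err;
            }
            if (epoch % 500 == 0)
                std::printf("epoch %4d  mse %.6f\n", epoch, loss / n);
        }
        return 0;
    }

A model at this scale trains in a fraction of a second on a single core, which is what makes the "re-train on every service start" approach practical.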

These are just two examples, and as I said in my original post, I do believe that there is a lot of power in the ideas behind NNs, and that there is a wide variety of possible applications.

Best regards,
Sergei Marchenko.

