Subject: [boost] Supporting DNNs with Tensors/Multidimensional Arrays
From: Cem Bassoy (cem.bassoy_at_[hidden])
Date: 2018-08-29 13:47:45
GSoC 2018 just ended one week ago, and we had many successfully completed student projects.
I was responsible for adding tensor support to Boost.uBLAS, primarily to
support multilinear algebra operations in the field of numerics. The
wiki description along with the implementation can be found here.
Similar to Boost.multi_array, the
runtime-reshapable tensor data structure is parametrized in terms of the number
of dimensions (rank/order), the dimension extents, the data type, the layout
(first- or last-order), and the storage type. The first two are runtime-variable.
I am also about to add subtensors (views/handles of a tensor) along with
multidimensional iterators for convenient algorithm implementation.
It is not yet as flexible as GSL's multi_span, as it does
not yet support static rank and dimensions. However, basic generic tensor
operations (contraction/transposition/reshaping/...), including a nice
syntax for Einstein's summation convention with placeholders, are provided
using C++17 features. The operations are evaluated using expression
templates (not smart yet).
Similar to the tensor
framework of Eigen, which is used by TensorFlow
<https://github.com/tensorflow/tensorflow>, the tensor data structure in
Boost.uBLAS could, I think, be used to implement deep neural networks or
higher-order statistics. I am not sure whether the C++ community would
appreciate it if Boost provided some form of basic operations for building *deep
neural networks* (DNNs). I would like to ask:
1. Does it make sense for Boost to support basic operations for DNNs?
2. What are the obligatory, necessary basic operations for creating DNNs?
3. Are there any additional data structure parameters that need to be
added to support DNNs (efficiently)?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk