Yes, try to fork from https://github.com/uBLAS/ublas.
We have our own repository for uBLAS development.


This one is for the students to push their code.
To be honest, after talking with Stefan, we are going to do the development of uBLAS directly from the boostorg repository instead of having multiple GitHub repositories. I think it will simplify the development.
 
Silly question: can we have, right from the beginning, a fixed-dimension tensor on top of the regular tensor?

Yes, we can. I would like to start with runtime-variable parameters, though. On top of that we might add a tensor template class with static dimensions for static memory allocation. There is one library, https://github.com/romeric/Fastor, that supports tensors with static rank/order (number of dimensions) and static dimensions, together with optimizations for them.
If I recall correctly, Eigen, Blitz and other libraries set the order/rank as a compile-time parameter.
However, some applications with graphical interfaces that create / invoke tensors at runtime might need the order to be a dynamic parameter.
Boost could be one of the few libraries to support this type of application.
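
For illustration, here is a minimal C++ sketch of the two flavours discussed above: a tensor whose order and extents are runtime values, and one whose extents are template parameters so the storage can be allocated statically. The class names dynamic_tensor and static_tensor are hypothetical and are not part of the existing uBLAS interface.

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical sketch only; not the uBLAS interface.
// A tensor whose order (rank) and extents are runtime parameters, so it can
// be created from values known only at runtime (e.g. chosen in a GUI).
template <class T>
class dynamic_tensor {
public:
    explicit dynamic_tensor(std::vector<std::size_t> extents)
        : extents_(std::move(extents)),
          data_(std::accumulate(extents_.begin(), extents_.end(),
                                std::size_t{1}, std::multiplies<>{}))
    {}
    std::size_t order() const { return extents_.size(); }

private:
    std::vector<std::size_t> extents_; // rank and extents are dynamic
    std::vector<T>           data_;    // heap-allocated storage
};

// Hypothetical sketch only.
// A tensor whose extents are template parameters, so rank and dimensions are
// known at compile time and the storage can be allocated statically.
template <class T, std::size_t... Extents>
class static_tensor {
public:
    static constexpr std::size_t order() { return sizeof...(Extents); }

private:
    std::array<T, (Extents * ...)> data_{}; // static memory allocation
};

int main() {
    dynamic_tensor<float> a({3, 4, 2});  // order chosen at runtime
    static_tensor<float, 3, 4, 2> b;     // order fixed at compile time
    return (a.order() == b.order()) ? 0 : 1;
}
```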

This sounds excellent to me.
Yes, for small tensor sizes there is a benefit: the compiler is able to optimize much better. However, for larger tensor sizes, designing tensor algorithms with high spatial and temporal data locality is the key design criterion, to my mind.
Supporting two or even three versions would be the best option. If we do not focus on small tensors, I would first start with the most flexible version. Once that is finished (GSoC), we could continue by turning runtime-variable parameters into static ones.
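
To make the small-size argument concrete, here is a hedged sketch (the function names dot_dynamic and dot_static are made up for this example): with a compile-time extent N the compiler can fully unroll and vectorize the loop, while with a runtime bound it has to emit a generic loop.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Runtime extent: the loop bound is only known at runtime, so the
// compiler has to emit a generic loop.
float dot_dynamic(const std::vector<float>& a, const std::vector<float>& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += a[i] * b[i];
    return sum;
}

// Static extent: N is a compile-time constant, so for small N the
// compiler can fully unroll and vectorize the loop.
template <std::size_t N>
float dot_static(const std::array<float, N>& a, const std::array<float, N>& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < N; ++i)
        sum += a[i] * b[i];
    return sum;
}

int main() {
    std::array<float, 3> a{1.0f, 2.0f, 3.0f}, b{4.0f, 5.0f, 6.0f};
    std::vector<float>   u(a.begin(), a.end()), v(b.begin(), b.end());
    // Both compute the same result; only the static version has
    // compile-time bounds the optimizer can exploit.
    return dot_static(a, b) == dot_dynamic(u, v) ? 0 : 1;
}
```

The same idea would carry over to the plan above: once the flexible, runtime-parameterized version is finished, turning a runtime extent into a template parameter is what lets the compiler apply these optimizations.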

Awesome!