libMentha is a continuation of the tiny-dnn machine learning library. It is essentially a cleaned-up version of the tiny-dnn source tree, with an upgraded xtensor for C++20 and C++23 support.

Planned improvements:
- An Eigen backend for faster matrix multiplication
- GPU support via Vulkan Compute, using Kompute
- More layers
- ONNX saving and loading
- An improved serialization system
Place `lib_mentha_include` in an accessible location on your computer and add it to your build system's include path. If you want serialization, do the same with cereal, following its installation instructions. If you do not want serialization, define `CNN_NO_SERIALIZATION`.
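For example, with CMake the setup above might look like the following sketch. The target name `my_app` and the paths are placeholders; substitute the locations where you actually placed the headers.

```cmake
# Header-only setup: point the compiler at the include directories.
target_include_directories(my_app PRIVATE /path/to/lib_mentha_include)

# If you want serialization, also add cereal's include directory:
target_include_directories(my_app PRIVATE /path/to/cereal/include)

# Otherwise, disable serialization instead:
# target_compile_definitions(my_app PRIVATE CNN_NO_SERIALIZATION)
```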
- Existing tiny-dnn examples should work after a find-and-replace of tiny-dnn with lib_mentha.
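As an illustration of that find-and-replace, here is a sketch of what a minimal tiny-dnn-style snippet might look like after migration. The exact header path and namespace spelling are assumptions derived from the renaming rule, not confirmed API; check the `lib_mentha_include` tree for the real header name.

```cpp
// Before: #include "tiny_dnn/tiny_dnn.h"
#include "lib_mentha/lib_mentha.h"  // assumed post-rename header path

int main() {
    // Before: tiny_dnn::network<tiny_dnn::sequential> net;
    lib_mentha::network<lib_mentha::sequential> net;

    // Layer aliases follow tiny-dnn's layers namespace (assumed unchanged).
    net << lib_mentha::layers::fc(2, 1);
    return 0;
}
```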