Abstract: With neural networks being used in a wide range of application fields, the need to execute them efficiently on high-performance hardware is one of the key problems for artificial intelligence (AI) framework providers. More and more specialized hardware types and corresponding libraries appear from various manufacturers. The biggest problem is that these libraries are usually supported by only a very limited set of AI frameworks, so interoperability can become an issue. In this extended abstract we present Sol, a transparent middleware for neural network acceleration. Sol comes with an optimizing compiler engine that makes it possible to use device-specific libraries and to implement custom optimizations that can be leveraged on all target devices. In contrast to other projects, Sol explicitly aims at optimizing both prediction and training of neural networks.
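The idea of a transparent optimizing middleware can be illustrated with a minimal sketch (all names below are hypothetical illustrations, not Sol's actual API): the model's operations are matched against a registry of device-specific kernels, and an optimized version is returned without the user changing their framework code.

```python
# Minimal sketch of a transparent optimizing middleware.
# Hypothetical names; NOT Sol's actual API. A "model" here is a
# list of (op name, callable) pairs; the middleware swaps generic
# ops for device-specific ones where a replacement is registered.

def generic_relu(x):
    # Portable, unoptimized reference implementation.
    return [max(0.0, v) for v in x]

def fast_relu(x):
    # Stand-in for a vendor-library kernel (e.g. cuDNN or oneDNN).
    return [v if v > 0.0 else 0.0 for v in x]

# Registry of device-specific replacements, keyed by op name.
DEVICE_KERNELS = {"relu": fast_relu}

def optimize(model):
    """Replace each op with a device-specific kernel if one exists."""
    return [(name, DEVICE_KERNELS.get(name, fn)) for name, fn in model]

def run(model, x):
    # Execute the ops in sequence on the input.
    for _, fn in model:
        x = fn(x)
    return x

model = [("relu", generic_relu)]
opt_model = optimize(model)          # transparent: same interface as before
print(run(opt_model, [-1.0, 2.0]))   # -> [0.0, 2.0]
```

Because `optimize` preserves the model's interface, the caller's code is unchanged; this mirrors the "transparent" aspect claimed for Sol, while the real system operates on framework-level computation graphs rather than toy op lists.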
Best Poster Finalist (BP): no