Cellulose is happy to launch TensorRT compatibility checks for tracked ONNX models. This feature is now available to all users on our Professional and Enterprise plans.
TensorRT is widely used for machine learning inference workloads today, so we've chosen it as the first runtime we support. We're constantly talking to our users and partners so we can better serve them, and that includes expanding runtime support, both in the number of runtimes we cover and in how comprehensively we cover each one.
Enabling these TensorRT checks in Cellulose is simple. Just navigate to a tracked ONNX model in the visualizer and select TensorRT and the desired version in the top right corner.
The model graph will then be automatically updated with convertible / non-convertible annotations:
We’ve marked these operators as supported / convertible to TensorRT 8.6.1.
We'll also display error messages when an operator isn't supported at the desired engine precision on a particular TensorRT runtime version.
We’ve also added more information on the full set of compatible engine precisions for each operator in the properties drawer. Just click on the operator to pull out the drawer and navigate to the Supported Runtimes tab.
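Conceptually, these annotations boil down to looking each operator up in a per-version support table and checking whether the requested engine precision is available for it. The sketch below illustrates that idea in plain Python; the support table and function names are hypothetical examples, not Cellulose's implementation or the actual TensorRT support matrix.

```python
# Illustrative sketch only: the support table below is a hypothetical
# example, not the real TensorRT 8.6.1 operator support matrix.
# Maps operator type -> set of engine precisions it can run at.
SUPPORTED_PRECISIONS = {
    "Conv": {"fp32", "fp16", "int8"},
    "Relu": {"fp32", "fp16", "int8"},
    "Softmax": {"fp32", "fp16"},
}

def annotate(op_types, precision):
    """Annotate each operator as convertible or not at the given precision."""
    annotations = {}
    for op in op_types:
        precisions = SUPPORTED_PRECISIONS.get(op)
        if precisions is None:
            annotations[op] = "not convertible: operator unsupported"
        elif precision not in precisions:
            annotations[op] = f"not convertible: no {precision} support"
        else:
            annotations[op] = "convertible"
    return annotations

# In this hypothetical table, Softmax has no int8 kernel, so an int8
# engine check flags it while Conv passes.
print(annotate(["Conv", "Softmax"], "int8"))
```

The same lookup, run once per precision, yields the full set of compatible engine precisions shown in the properties drawer.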
This is just the first of several big features we're rolling out over the next few weeks, including support for deep learning workflows such as post-training quantization and identifying layer fusions made by the underlying tools.