Write your own model handler for RunInference!

Jun-14 14:00-14:25 UTC
Room: Horizon

This talk will cover how to write a custom model handler for the RunInference transform in the Python SDK. Built-in handlers currently exist for scikit-learn, PyTorch, TensorFlow, ONNX, and XGBoost, but developers may need to write their own to support a different input type, a new framework, custom options, and so on. I'll walk through the pieces of writing a new model handler and explain the key components, using the TensorFlow model handler as an example.
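As a rough preview of what the talk covers, the sketch below shows the shape of the two core hooks a custom handler implements: `load_model` (called once per worker to set up the model) and `run_inference` (called per batch). In real code you would subclass `ModelHandler` from `apache_beam.ml.inference.base`; the `FakeModel` class and the standalone `MyModelHandler` here are hypothetical stand-ins so the example runs without Beam installed.

```python
from typing import Any, Iterable, Optional, Sequence


class FakeModel:
    """Hypothetical model: doubles each input value."""

    def predict(self, batch: Sequence[float]) -> list:
        return [x * 2 for x in batch]


# In a real pipeline this would be:
#   class MyModelHandler(ModelHandler[float, Any, FakeModel]): ...
class MyModelHandler:
    """Minimal custom handler exposing the two required hooks."""

    def load_model(self) -> FakeModel:
        # Called once per worker: load weights from disk or remote storage here.
        return FakeModel()

    def run_inference(
        self,
        batch: Sequence[float],
        model: FakeModel,
        inference_args: Optional[dict] = None,
    ) -> Iterable[Any]:
        # Called per batch: run the model and yield one result per example.
        # Beam's built-in handlers typically wrap each result in a
        # PredictionResult pairing the input example with its inference.
        predictions = model.predict(batch)
        return [(example, pred) for example, pred in zip(batch, predictions)]


handler = MyModelHandler()
model = handler.load_model()
results = list(handler.run_inference([1.0, 2.0, 3.0], model))
# results pairs each input with its prediction
```

In an actual pipeline, such a handler is passed to the transform as `RunInference(handler)`; batching, model sharing across threads, and metrics are then managed by the framework.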