Generating a ModelProto
This example demonstrates how to use onnxscript to define an ONNX model. onnxscript behaves like a compiler: it converts a script into an ONNX model.
First, we define the implementation of a square-loss function in onnxscript.
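The definition itself is not reproduced in this excerpt; a minimal sketch consistent with the model generated below (using the @script decorator and ONNX opset 15; the dimension name N and the variable name diff are taken from the printed model) might look like this:

from onnxscript import FLOAT, script
from onnxscript import opset15 as op

@script()
def square_loss(X: FLOAT["N", 1], Y: FLOAT["N", 1]) -> FLOAT[1, 1]:
    # Element-wise difference and square, then a sum that keeps a [1, 1] result.
    diff = X - Y
    return op.ReduceSum(diff * diff, keepdims=1)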
We can convert it to a model (an ONNX ModelProto) as follows:
model = square_loss.to_model_proto()
Let’s see what the generated model looks like.
print(onnx.printer.to_text(model))
<
ir_version: 8,
opset_import: ["" : 15]
>
square_loss (float[N,1] X, float[N,1] Y) => (float[1,1] return_val) {
diff = Sub (X, Y)
tmp = Mul (diff, diff)
return_val = ReduceSum <keepdims: int = 1> (tmp)
}
We can run shape inference and type-check the model using the standard ONNX API.
model = onnx.shape_inference.infer_shapes(model)
onnx.checker.check_model(model)
Finally, we can use the standard onnxruntime API to compute outputs from this model.
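A sketch of one way to do this with onnxruntime's InferenceSession (the input values X and Y below are illustrative assumptions, chosen so that the squared differences sum to 0.14 and thus match the output shown after the code):

import numpy as np
from onnxruntime import InferenceSession

# Build an inference session from the serialized ModelProto.
sess = InferenceSession(model.SerializeToString(), providers=["CPUExecutionProvider"])

# Illustrative float32 inputs of shape [N, 1] (assumed values).
X = np.array([[0.0], [1.0], [2.0]], dtype=np.float32)
Y = np.array([[0.1], [1.2], [2.3]], dtype=np.float32)

expected = ((X - Y) ** 2).sum()
got = sess.run(None, {"X": X, "Y": Y})
print(expected, got)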
0.13999999 [array([[0.13999999]], dtype=float32)]