
Add dependency-free ONNX export for DNNs #3145

Open
Compaile wants to merge 2 commits into davisking:master from Compaile:onnx_export

Add dependency-free ONNX export for DNNs#3145
Compaile wants to merge 2 commits into
davisking:masterfrom
Compaile:onnx_export

Conversation

Compaile commented May 7, 2026

This adds inference-only ONNX export for dlib DNNs without adding a protobuf or ONNX dependency. The ONNX wire format is written by hand to avoid pulling
libprotobuf into the build.

Adds dlib/dnn/onnx.h and dlib/dnn/onnx_abstract.h, wires the exporter through dlib/dnn.h, adds a small ImageNet export example, and adds dependency-free
unit tests in test/dnn.cpp.

The exporter targets ONNX opset 17, IR version 8.

The default export mode takes the preprocessed NCHW tensor accepted by net.forward(). There is also a dlib_input_layer mode for supported input layers, so
RGB image preprocessing can be included in the ONNX graph.
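For illustration, here is a minimal usage sketch. The function name `export_to_onnx` and the `dlib_input_layer` mode selector are placeholders standing in for whatever this PR's exporter actually exposes; only the net definition uses real dlib API.

```cpp
#include <dlib/dnn.h>
// #include <dlib/dnn/onnx.h>   // new header added by this PR

using namespace dlib;

// A tiny inference net, just for illustration.
using tiny_net = loss_multiclass_log<
    fc<10, relu<con<16, 5, 5, 1, 1, input<matrix<float>>>>>>;

int main()
{
    tiny_net net;

    // Default mode: the exported graph expects the same preprocessed NCHW
    // tensor that net.forward() would receive.
    // NOTE: export_to_onnx is a placeholder name, not necessarily the one in this PR.
    export_to_onnx(net, "tiny_net.onnx");

    // Hypothetical second mode from the PR description: bake RGB image
    // preprocessing into the graph for supported input layers.
    // export_to_onnx(net, "tiny_net_rgb.onnx", onnx_export_mode::dlib_input_layer);
}
```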

Validated locally against ONNX Runtime CPU and CUDA:

  • resnet34_1000_imagenet_classifier.dnn
  • dlib_face_recognition_resnet_model_v1.dat
  • semantic_segmentation_voc2012net_v2.dnn

The ResNet34 ImageNet path was also checked on a real dlib sample image, examples/mmod_cars_test_image.jpg, not only synthetic tensor input. Output
differences were within small floating point tolerances.
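A cross-check of that kind could look roughly like the following ONNX Runtime C++ sketch. It is not part of the PR; the model file name and the tensor names "input"/"output" are assumptions, and the dlib reference output is left as a placeholder.

```cpp
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    // Dummy NCHW input; in a real check this would be the preprocessed tensor
    // dlib feeds to net.forward() for the same image.
    const std::vector<int64_t> shape = {1, 3, 224, 224};
    std::vector<float> input(1 * 3 * 224 * 224, 0.5f);

    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "verify");
    Ort::SessionOptions opts;
    Ort::Session session(env, "resnet34.onnx", opts);   // assumed file name

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value in = Ort::Value::CreateTensor<float>(
        mem, input.data(), input.size(), shape.data(), shape.size());

    const char* in_names[]  = {"input"};    // assumed tensor names
    const char* out_names[] = {"output"};
    auto out = session.Run(Ort::RunOptions{nullptr}, in_names, &in, 1, out_names, 1);

    // Compare against the values dlib produced for the same input.
    const float* onnx_out = out[0].GetTensorMutableData<float>();
    std::size_t n = out[0].GetTensorTypeAndShapeInfo().GetElementCount();
    std::vector<float> dlib_out(n, 0.0f);   // filled from net(...) in a real check
    float max_abs = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        max_abs = std::max(max_abs, std::fabs(onnx_out[i] - dlib_out[i]));
    std::printf("max_abs = %g\n", max_abs);
}
```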

Supported layer coverage includes common inference layers such as convolutions, transposed convolutions, fully connected layers, affine/batchnorm
conversion, pooling, activations, softmax variants, residual skip layers, concat, reshape/flatten, resize/upsample, slice/extract, transpose, reorg,
normalization layers, embeddings, positional encodings, and fixed-shape tril masks.

Known unsupported cases:

  • dropout
  • image pyramid input layers
  • input_rgb_image_pair
  • adaptive computation time
  • custom user layers
  • detection/NMS postprocessing
  • training/loss graph export beyond inference pass-through

The included tests are dependency-free and inspect the generated ONNX structure directly, so the default dlib CI does not need ONNX Runtime or protobuf.
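One way such a dependency-free structural check can work is to scan the serialized bytes for expected operator names, since ONNX op_type fields are plain UTF-8 strings in the protobuf wire format. This is only a sketch of the idea; the actual tests in test/dnn.cpp may inspect the structure more thoroughly.

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Smoke test helper: does the exported .onnx file mention a given op, e.g. "Conv"?
bool onnx_file_mentions_op(const std::string& filename, const std::string& op)
{
    std::ifstream in(filename, std::ios::binary);
    std::string bytes((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());
    return bytes.find(op) != std::string::npos;
}
```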

P.S. Feel free to add yourself to the copyright line at the top, as is standard in all dlib files. But it felt wrong to add your name myself without asking first :)


Compaile commented May 7, 2026

While dlib is already fast, this gives us the possibility to use TensorRT as a backend (via ONNX Runtime) and a "free" FP16 mode, both of which would be harder to implement in dlib itself.
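Enabling that path on the consumer side is a few lines of ONNX Runtime setup. A minimal sketch (not from the PR; the model file name is an assumption):

```cpp
#include <onnxruntime_cxx_api.h>

// Run the exported model through ONNX Runtime's TensorRT execution provider
// with FP16 enabled, falling back to the CUDA EP for unsupported subgraphs.
int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "trt");
    Ort::SessionOptions opts;

    OrtTensorRTProviderOptions trt{};   // start from default/zeroed options
    trt.device_id = 0;
    trt.trt_fp16_enable = 1;            // the "free" FP16 mode mentioned above
    opts.AppendExecutionProvider_TensorRT(trt);

    OrtCUDAProviderOptions cuda{};
    opts.AppendExecutionProvider_CUDA(cuda);

    Ort::Session session(env, "resnet34.onnx", opts);   // assumed file name
    // ... create input tensors and call session.Run() as usual ...
}
```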

For the ResNet34 I built a small benchmarking script:

ONNX CUDA FP32:
verify max_abs=0.000128366 mean_abs=6.78675e-07 mismatches=0
500 calls: 671.751 ms total, 1.344 ms/call, 744.323 calls/s

ONNX TensorRT FP16:
verify max_abs=0.00305904 mean_abs=9.94151e-06 mismatches=0
500 calls: 282.627 ms total, 0.565 ms/call, 1769.117 calls/s

dlib CUDA:
500 calls: 1074.633 ms total, 2.149 ms/call, 465.275 calls/s
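For reference, numbers in that format come from a plain timing loop of roughly this shape (a sketch, not the actual script; run_once stands in for one forward pass):

```cpp
#include <chrono>
#include <cstdio>

// Time `calls` invocations of run_once() (e.g. session.Run or net.forward),
// excluding one warm-up call, and print total, per-call, and throughput figures.
template <typename F>
void benchmark(F run_once, int calls = 500)
{
    run_once();   // warm-up, excluded from timing
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < calls; ++i)
        run_once();
    auto t1 = std::chrono::steady_clock::now();
    double total_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("%d calls: %.3f ms total, %.3f ms/call, %.3f calls/s\n",
                calls, total_ms, total_ms / calls, calls * 1000.0 / total_ms);
}
```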

I love and prefer dlib for training, but having TensorRT FP16 as the runtime in production is also nice.

