Hybrid Bayesian convolutional neural network object detection architectures for tracking small markers in automotive crash test videos


  • Felix Neubürger University of Applied Sciences South Westphalia
  • Daniel Gierse University of Applied Sciences South Westphalia
  • Thomas Kopinski University of Applied Sciences South Westphalia




object detection, automobile crash tests, computer vision, Bayesian classification


Automotive crash tests are an important aspect of vehicle safety, and accurate measurements and evaluations play a crucial role in them. To automate this process, we implement a hybrid Bayesian computer vision model that detects two different kinds of target markers. These markers are commonly attached at different positions on the vehicle, e.g. by automotive manufacturers, to aid the localisation of the car's positional data. A tracking algorithm subsequently adds contextual time information to the detected marker objects. The extracted information can then be used in downstream tasks to compute important metrics for the crash test evaluation, e.g. the speed, momentum, acceleration, and trajectory of particular parts of the car during different stages of the crash. The model consists of a pre-trained Faster R-CNN for the region proposals, combined with a Bayesian convolutional neural network that estimates a statistical uncertainty on the model's classifications. This uncertainty estimate can serve as a safeguard in uncertain edge cases, e.g. in videos where lighting conditions and light reflections are not optimal. Our pipeline achieves an average recall and precision of 0.89 and 0.99, respectively, on test data. This outperforms the recall of state-of-the-art models such as the Faster R-CNN ResNet-152 by more than 28% while delivering slightly better precision, increasing robustness in most of the tested use cases.
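The paper itself does not include code; as a hedged illustration of the uncertainty-estimation idea described above, the following sketch shows how a weight-uncertain (Bayes-by-Backprop-style) classification head can produce a predictive distribution and an entropy-based uncertainty via Monte Carlo sampling. All names, shapes, and the Gaussian posterior approximation are illustrative assumptions, not the authors' actual implementation; in the real pipeline the `features` would come from Faster R-CNN region proposals.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_predict(features, w_mean, w_std, n_samples=100, rng=None):
    """Monte Carlo prediction with a weight-uncertain linear head (sketch).

    Each sample draws classifier weights from a Gaussian posterior
    approximation and runs a stochastic forward pass; averaging the
    per-sample class probabilities gives the predictive distribution,
    and its entropy serves as a per-detection uncertainty score.
    Hypothetical stand-in for the paper's Bayesian CNN head.
    """
    rng = np.random.default_rng(rng)
    probs = []
    for _ in range(n_samples):
        w = rng.normal(w_mean, w_std)        # sample weights from posterior
        probs.append(softmax(features @ w))  # stochastic forward pass
    probs = np.stack(probs)
    mean_p = probs.mean(axis=0)              # predictive class distribution
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    return mean_p, entropy
```

In a detection pipeline, proposals whose predictive entropy exceeds a chosen threshold (e.g. in frames with poor lighting or strong reflections) could be flagged for review instead of being accepted automatically.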

