Onnxruntime
Segmentation fault (core dumped)
➜ python3 test.py
Traceback (most recent call last):
  File "test.py", line 32, in <module>
    yolov5 = YOLOv5(model_path=model, drawing=True, save_image=None, show_kpts=False)
  File "~/onnxruntime/inference.py", line 89, in __init__
    self.ort_inference = ort.InferenceSession(model_path)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 310, in _create_inference_session
[1]    2428621 segmentation fault (core dumped)  python3 test.py
The crash turned out to be caused by docker-compose when a CPU core limit is set on the container.
It can be fixed by creating the inference session with the following lines.
Solution
import onnxruntime as ort

# Limit intra-op threads so the session stays within the container's CPU limit.
options = ort.SessionOptions()
options.intra_op_num_threads = 1
sess = ort.InferenceSession(model, sess_options=options)
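For context, a minimal sketch of how the fix would look inside the wrapper from the traceback above; the class and argument names are taken from the traceback, but the body is an assumption rather than the actual inference.py code.

import onnxruntime as ort

class YOLOv5:
    def __init__(self, model_path, drawing=True, save_image=None, show_kpts=False):
        # Create the session with a fixed thread count instead of the default,
        # which crashes when docker-compose restricts the available CPU cores.
        options = ort.SessionOptions()
        options.intra_op_num_threads = 1
        self.ort_inference = ort.InferenceSession(model_path, sess_options=options)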
Too many log messages at startup
Onnxruntime prints a lot of log messages as soon as it starts.
Solution
Add this line right after importing onnxruntime.
import onnxruntime as ort
ort.set_default_logger_severity(3)  # 3 = ERROR: hide verbose, info, and warning messages
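Severity 3 corresponds to ERROR, so only errors and fatal messages are printed. If only a single session needs to be quiet, the level can also be set per session through SessionOptions; a minimal sketch, assuming the same model variable as in the segmentation-fault example above:

import onnxruntime as ort

# Per-session alternative: only this session's verbose/info/warning messages are hidden.
options = ort.SessionOptions()
options.log_severity_level = 3  # 3 = ERROR
sess = ort.InferenceSession(model, sess_options=options)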