How to Train YOLO v5 Model to Detect Distracted Drivers

Source: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813266
Figure 1.0: Class distribution of various driving distractions
Figure 1.1: Roboflow screenshot showing the options for downloading the dataset in YOLO v5 PyTorch format to a local computer.
Figure 1.2: Directory structure of the labeled images
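
The directory image is not reproduced here, but a Roboflow YOLO v5 PyTorch export typically unzips into a layout like the following (folder names are illustrative):

```
distracted-drivers/
├── data.yaml        # dataset configuration read by YOLO v5
├── train/
│   ├── images/      # training images
│   └── labels/      # one .txt label file per image
├── valid/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/
```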
  • One row per object
  • Each row is in class x_center y_center width height format.
  • Box coordinates must be in normalized xywh format (from 0–1). If your boxes are in pixels, divide x_center and width by image width, and y_center and height by image height (a conversion sketch follows this list).
  • Class numbers are zero-indexed (start from 0).
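
If your boxes are in pixel coordinates, a small helper such as the hypothetical `to_yolo_box` below performs the normalization described above (corner-style pixel boxes are assumed as input):

```python
def to_yolo_box(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) to normalized
    YOLO (x_center, y_center, width, height), all in the range 0-1."""
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return x_center, y_center, width, height

# Example: a 200x300-pixel box with its top-left corner at (100, 50)
# in a 1280x720 image.
print(to_yolo_box(100, 50, 300, 350, 1280, 720))
```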
Figure 1.3: Sample label file containing two rows for two objects
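
The label file image is not reproduced here; a file of that shape contains one line per object, for example (class IDs and coordinates are purely illustrative):

```
0 0.481250 0.633333 0.237500 0.288889
1 0.762500 0.414815 0.118750 0.174074
```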
Figure 1.4: File upload screen. Notice the Data Management menu, directory name, and file upload area.
Figure 1.5: Directory structure showing the expanded form of the uploaded images and labels
Figure 1.6: YOLO v5 configuration screen
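
The configuration screen itself is not shown here; on the YOLO v5 side, the dataset part of this configuration is expressed as a small YAML file (data.yaml in a Roboflow export). The paths, class count, and class names below are placeholders to be replaced with the values from your own export:

```yaml
# data.yaml -- dataset configuration (illustrative values)
train: ../train/images     # folder of training images
val: ../valid/images       # folder of validation images

nc: 3                      # number of classes in your dataset
names: ['class_0', 'class_1', 'class_2']   # your class names, in label-ID order
```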
Figure 1.7: YOLO model training monitoring screen showing logs, losses, precision and recall curves.
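
If you train from the command line rather than a monitoring UI, a run that produces logs like these is typically launched with YOLO v5's train.py; the image size, batch size, epoch count, starting weights, and run name below are placeholder values:

```bash
python train.py --img 640 --batch 16 --epochs 100 \
    --data data.yaml --weights yolov5s.pt --name distracted_drivers
```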
Figure 1.8: Training losses, precision, recall, mAP@0.5 and mAP@0.5:0.95
Figure 1.9: Model evaluation results.
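
Evaluation numbers like these can be reproduced on the validation set with YOLO v5's evaluation script (val.py in recent releases, test.py in older ones); the weights path below assumes the default runs/train output location and the placeholder run name used above:

```bash
python val.py --weights runs/train/distracted_drivers/weights/best.pt \
    --data data.yaml --img 640
```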
Figure 1.10: Evaluation result example
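
To generate annotated predictions like this example on new images or video, YOLO v5's detect.py can be pointed at the trained weights; the source path and confidence threshold here are illustrative:

```bash
python detect.py --weights runs/train/distracted_drivers/weights/best.pt \
    --source path/to/test_images --conf-thres 0.4
```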
  1. Download the latest YOLO v5 source from GitHub using the command shown below:
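
Assuming git and Python are available, cloning the repository from reference [2] and installing its requirements looks like this:

```bash
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
```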
References
  1. Ansari, S. (2020). Building Computer Vision Applications Using Artificial Neural Networks. Apress. https://doi.org/10.1007/978-1-4842-5887-3_4, https://link.springer.com/book/10.1007/978-1-4842-5887-3
  2. Ultralytics YOLO v5 repository: https://github.com/ultralytics/yolov5

Sam Ansari

CEO, author, inventor and thought leader in computer vision, machine learning, and AI. 4 US Patents.