YOLOLab


Welcome to the YOLOLab repository, a dedicated platform for setting up and using YOLO for object detection tasks in educational settings.

This repository and its code are based on the Ultralytics YOLO project: Jocher, G., Chaurasia, A., & Qiu, J. (2023). Ultralytics YOLO (Version 8.0.0) [Computer software]. https://github.com/ultralytics/ultralytics

Table of Contents

  1. Installation
  2. Labeling
  3. Training
  4. Predictions
  5. Tracking

Installation

This project assumes that Anaconda and Spyder are installed on your Windows system. Follow the steps below to set up your environment.

Prerequisites

Directory structure

  1. Create the yolo project folder
C:/
├─ Users
│   ├─ UserName
│   │   ├─ yolo                  
│   │   │   ├─ labeling/        
│   │   │   ├─ train/
│   │   │   ├─ test/     
│   │   │   └─ valid/
│   │   └─ ...             
│   └─ ...
└──...
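The tree above can also be created in one go from Python. A small sketch using only the standard library, where `Path.home()` stands in for `C:/Users/UserName`:

```python
from pathlib import Path

# Project root: corresponds to C:/Users/UserName/yolo on Windows.
base = Path.home() / "yolo"

# Subfolders used throughout this guide.
for sub in ("labeling", "train", "test", "valid"):
    (base / sub).mkdir(parents=True, exist_ok=True)
```

`exist_ok=True` makes the script safe to re-run on an existing project folder.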

Environment Setup (via Anaconda Prompt)

  1. Create a new conda environment (in the Windows search bar, type "Anaconda Prompt" and open it):

    conda create --name yolo spyder=6
  2. Activate the environment:

    conda activate yolo
  3. Install Required Packages:

    pip install ultralytics
  4. Launch Spyder within the new environment:

    spyder
  5. … and check your installation by opening and running yolo_test.py in Spyder:

    Download yolo_test.py

  6. ... or by typing the following commands in your Command Line Interface (CLI):

    Predict on an image

    yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'

    Run a pretrained yolo object detector on your webcam

    yolo predict model=yolo11n.pt source=0 show=True

Labeling

Additional Tools

  1. Install labelImg for image annotation in the same activated yolo environment:
    pip install labelimg
  2. Start the labeling software:
    labelimg
  3. ... from the correct folder (labelImg [path to images] [path to predefined_classes.txt file]):
    labelimg C:\Users\UserName\yolo\labeling C:\Users\UserName\yolo\labeling\predefined_classes.txt
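In YOLO mode, labelImg writes one .txt file per image, with one annotation per line in the format `<class-id> <x_center> <y_center> <width> <height>`, all coordinates normalized to [0, 1] relative to the image size. A small parser sketch (the helper name is ours, not part of labelImg):

```python
def parse_yolo_label(line: str):
    """Parse one YOLO annotation line into (class_id, x, y, w, h)."""
    parts = line.split()
    return int(parts[0]), *(float(v) for v in parts[1:])

# Example: class 0 ("boar"), box centered mid-image, half the image wide and high.
cls, x, y, w, h = parse_yolo_label("0 0.5 0.5 0.5 0.5")
```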

Training

Start a Training on your local machine directly in the CLI ...

yolo task=detect mode=train model=yolo11n.pt imgsz=800 data=path/to/boars.yaml epochs=200 batch=8 project=/path/to/your/project/training_runs/ name=yolo11n_imgsz800 device="cpu"

... or using a Python environment

from ultralytics import YOLO
import os

# change the working directory
os.chdir("path/to/yolo/dataset/training_runs/")
# Load a pretrained model
model = YOLO("yolo11n.pt")

# train the model (transfer learning)
model.train(data="path/to/yolo/boars.yaml",
            epochs=20,
            imgsz=800,
            batch=8,
            project = "path/to/yolo/training_runs/",
            name="yolo11n_imgsz800",
            device="cpu")  # 0 for GPU (check pytorch installation hints) or "cpu"
  • model=yolo11n.pt: Indicates the model to be used. Here, the YOLO Nano model (yolo11n.pt) is used.
  • imgsz=800: Determines the size of the input images in pixels. In this case, the image size is 800x800 pixels (default: 640).
  • data=path/to/boars.yaml: Specifies the path to the data file that defines the training and validation data.
  • epochs=200: Sets the number of training epochs. Here, it is 200 epochs.
  • batch=8: Determines the batch size, i.e., the number of images processed simultaneously. Here, it is 8 (default 16).
  • project=/path/to/your/project/training_runs/: Indicates the project directory where the training runs will be saved.
  • name=yolo11n_imgsz800: Sets the name of the training run. This helps distinguish between different runs.
  • device="cpu": Runs training on the CPU. Alternatively, one or more GPUs can be used (e.g., device=0 or device=(0,1)).
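The batch setting determines how many optimizer steps one epoch takes. With illustrative numbers (a hypothetical 1000-image training set, not from this repository):

```python
import math

n_images = 1000   # hypothetical size of the training set
batch = 8         # as in the batch=8 setting above

# One epoch is one full pass over the data, so the number of
# optimizer steps per epoch is the number of batches.
steps_per_epoch = math.ceil(n_images / batch)

# Total steps over the whole run (epochs=200 as in the CLI call).
total_steps = steps_per_epoch * 200
```

Larger batches mean fewer steps per epoch but more memory per step; batch=8 is a conservative choice for CPU training.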

Check here for additional train settings and hyperparameters

Example boars.yaml file

# Contents inside the .yaml file

train: path\to\yolo\train
val:   path\to\yolo\valid
test:  path\to\yolo\test

# total number of classes
nc: 2
names: ['boar','human']
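If you prefer to generate this file from a script, plain string writing suffices; a sketch using only the standard library (the paths are placeholders to replace with your own):

```python
from pathlib import Path

# Placeholder paths: point these at your actual train/valid/test folders.
content = """\
train: path/to/yolo/train
val:   path/to/yolo/valid
test:  path/to/yolo/test

# total number of classes
nc: 2
names: ['boar','human']
"""

Path("boars.yaml").write_text(content)
```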

Predictions

CLI

yolo predict model=.\training_runs\yolo11n_imgsz800\weights\best.pt source=.\test project=predictions name=yolo11n_imgsz800 conf=0.2 imgsz=800

Python environment

from ultralytics import YOLO
import cv2
import matplotlib.pyplot as plt

# Load a trained model
model = YOLO("path/to/yolo/training_runs/yolo11n_imgsz800/weights/best.pt")

# Use the model (predict on an image)    
results = model("path/to/yolo/test",
                project = "path/to/yolo/predictions",
                name = "yolo11n_imgsz800",
                show=True, 
                show_labels=False, 
                conf=0.2,
                imgsz=800,
                classes=None,
                save=True
                ) 

#%%     
# Plot the results (within matplotlib)
res_plotted = results[0].plot() # results[0]: show the first image
res_plotted_rgb=cv2.cvtColor(res_plotted, cv2.COLOR_BGR2RGB) 
plt.imshow(res_plotted_rgb)
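The cv2.cvtColor call above reorders the color channels from OpenCV's BGR to matplotlib's RGB; with NumPy alone, the same reordering is a reversed slice on the channel axis:

```python
import numpy as np

# A tiny 1x2 "image" in BGR order: one blue pixel, one red pixel.
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# Reversing the last axis swaps B and R, leaving G in place --
# equivalent to cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).
rgb = bgr[..., ::-1]
```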

Predictions (quantitative evaluation on independent images)

CLI

The images must, of course, be labeled but should not have been used in training or validation.

yolo detect val model=path\to\weights\best.pt data=path\to\yourFile.yaml

Python environment

# -*- coding: utf-8 -*-
"""
Evaluate a trained YOLO model and generate a precision-recall curve.
"""

import os
from ultralytics import YOLO
import matplotlib.pyplot as plt

# Optional: Set working directory if needed
os.chdir("C:/Users/your_username/your_project/")  # Replace with your actual path or remove if unnecessary

# Load YOLO model
model = YOLO("dataset/training_runs/yolo11n_1st/weights/best.pt")

# Run evaluation on the test set
metrics = model.val(
    data="dataset/chicken.yaml",
    imgsz=800,
    conf=0.001,
    iou=0.5,
    split="test",
    save_json=False
)

# Print evaluation metrics
print("Evaluation results:")
print(f"Precision (mean):      {metrics.box.mp:.4f}")
print(f"Recall (mean):         {metrics.box.mr:.4f}")
print(f"mAP@0.5:               {metrics.box.map50:.4f}")
print(f"mAP@0.5:0.95:          {metrics.box.map:.4f}")

# Plot precision-recall curve
curve_data = metrics.curves_results[0]  # 'Precision-Recall(B)'
x = curve_data[0]
y = curve_data[1].squeeze()

plt.figure(figsize=(8, 6))
plt.plot(x, y, label='Precision-Recall')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('YOLO Precision-Recall Curve')
plt.grid(True)
plt.legend()
plt.tight_layout()
plt.savefig("pr_curve.png")
plt.show()
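The iou=0.5 setting above is the overlap threshold at which a prediction counts as a true positive (mAP@0.5:0.95 averages this over thresholds from 0.5 to 0.95). A minimal IoU computation for axis-aligned boxes, as a self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# Two equal boxes sharing half their area -> IoU = 1/3.
overlap = iou((0, 0, 2, 2), (1, 0, 3, 2))
```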

Tracking

yolo track model=path\to\weights\best.pt project=path\to\trackings\ name=yolo11n_800_botsort source="path\to\test" tracker=botsort.yaml

Python environment

from ultralytics import YOLO

# Load trained model
model = YOLO("path/to/weights/best.pt")

# Run tracking
results = model.track(
    source="path/to/test/",             # Video, folder, image, webcam
    project="path/to/trackings",        # Output folder
    name="yolo11n_800_botsort",         # Experiment name
    tracker="botsort.yaml",             # Tracker config
    conf=0.2,                           # Optional: Confidence threshold
    imgsz=800,                          # Optional: Image size
    save=True,                          # Save results
    show=True                           # Show results during processing
)

Here you can find the .yaml files for your tracker, as well as an additional script if you want to draw lines for your tracked paths.
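Drawing lines for tracked paths boils down to grouping box centroids by track ID across frames. A sketch with made-up detections (the `(track_id, x, y)` tuples are illustrative, not real tracker output):

```python
from collections import defaultdict

# Hypothetical per-frame tracker output: (track_id, x_center, y_center).
detections = [
    (1, 10.0, 10.0),
    (2, 50.0, 40.0),
    (1, 12.0, 11.0),
    (1, 14.0, 12.0),
    (2, 48.0, 42.0),
]

# Collect each object's centroids in frame order, giving one path
# per track ID -- these are the point lists you would draw as lines.
paths = defaultdict(list)
for track_id, x, y in detections:
    paths[track_id].append((x, y))
```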
