Refine benchmark script logic #14

Open

shyama7004 wants to merge 2 commits into develop

Conversation

@shyama7004 commented Jul 16, 2025

Refine benchmark script logic

  • Correct the get_max_error boolean logic and remove the nonexistent l3 branch
  • Remove the now-unused get_coord function
  • Default TransformObject.name to an empty string to avoid “none” history entries
  • Update the help text for --rel_center_y
  • Implement true Intersection-over-Union in get_norm using cv.intersectConvexConvex and cv.contourArea (see the sketch after this list)
  • Reshape and cast the projected corners in PerspectiveTransform to float32 before calling getPerspectiveTransform
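
The IoU and float32 changes are the least obvious from the list above, so here is a minimal, hypothetical sketch of both. Only cv.intersectConvexConvex, cv.contourArea, and cv.getPerspectiveTransform are the real OpenCV calls; the function name, variable names, and corner values are illustrative assumptions, not the benchmark's actual code, and the quads are assumed convex N×2 arrays.

import cv2 as cv
import numpy as np

def iou_norm(gold, detected):
    # Hypothetical sketch of the IoU path in get_norm: both quads are
    # assumed to be convex polygons given as N x 2 arrays.
    gold = np.asarray(gold, dtype=np.float32).reshape(-1, 2)
    detected = np.asarray(detected, dtype=np.float32).reshape(-1, 2)
    # intersectConvexConvex returns the intersection area and its vertices
    inter_area, _ = cv.intersectConvexConvex(gold, detected)
    union_area = cv.contourArea(gold) + cv.contourArea(detected) - inter_area
    return inter_area / union_area if union_area > 0 else 0.0

# unit square vs. the same square shifted right by 0.5 -> IoU = 1/3
gold = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], np.float32)
det = gold + np.array([0.5, 0], np.float32)
print(iou_norm(gold, det))

# getPerspectiveTransform expects exactly four float32 points per side,
# hence the reshape/cast of the projected corners (values here are made up).
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], np.float64)
dst = np.array([[3, 5], [97, 2], [99, 104], [1, 98]], np.float64)
matrix = cv.getPerspectiveTransform(src.reshape(4, 2).astype(np.float32),
                                    dst.reshape(4, 2).astype(np.float32))
print(matrix.shape)  # (3, 3)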

Tests

tests/test_utils.py
import numpy as np
import pytest

from objdetect_benchmark import get_norm, get_max_error, TypeNorm

def test_get_norm_l1():
    a = np.array([[0, 0], [1, 1]], float)
    b = np.array([[1, 1], [2, 2]], float)
    # |(1,1)-(0,0)|₁ + |(2,2)-(1,1)|₁ = (1+1) + (1+1) = 4
    assert get_norm(a, b, TypeNorm.l1) == pytest.approx(4)

def test_get_norm_l2():
    a = np.array([[0, 0], [0, 0]], float)
    b = np.array([[3, 4], [0, 0]], float)
    # flatten difference = [3,4,0,0], L₂ = sqrt(3²+4²) = 5
    assert get_norm(a, b, TypeNorm.l2) == pytest.approx(5)

def test_get_norm_linf():
    a = np.array([[0, 0], [1, 5]], float)
    b = np.array([[2, 1], [0, 0]], float)
    # per-point diffs: [(2,1),(1,5)] → flattened [2,1,1,5], max = 5
    assert get_norm(a, b, TypeNorm.l_inf) == pytest.approx(5)

def test_get_norm_iou_full_overlap():
    # identical quads → IoU = 1.0
    quad = np.array([[0,0], [1,0], [1,1], [0,1]], float)
    assert get_norm(quad, quad, TypeNorm.intersection_over_union) == pytest.approx(1.0)

def test_get_norm_iou_partial_overlap():
    # gold: unit square; detected: shifted right by 0.5
    gold = np.array([[0,0], [1,0], [1,1], [0,1]], float)
    det  = np.array([[0.5,0], [1.5,0], [1.5,1], [0.5,1]], float)
    # overlap area = 0.5×1 = 0.5; union = 1 + 1 - 0.5 = 1.5 → IoU = 1/3
    assert get_norm(gold, det, TypeNorm.intersection_over_union) == pytest.approx(1/3)

def test_get_max_error():
    assert get_max_error(5, TypeNorm.l2) == 5
    assert get_max_error(0.1, TypeNorm.intersection_over_union) == 1.0
tests/test_generate_run.py
import sys

from objdetect_benchmark import main

def test_gen_and_show(tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)

    monkeypatch.setattr(sys, "argv", [
        "objdetect_benchmark.py",
        "--configuration", "generate",
        "-p", str(tmp_path / "out"),
        "--synthetic_object", "chessboard",
        "--cell_img_size", "50",
        "--board_x", "3", "--board_y", "3",
    ])

    main()

    output_files = list((tmp_path / "out").rglob("*.json"))
    assert output_files, "expected at least one JSON result file"

To run locally:

cd /opencv_benchmarks/calibration_boards
source .venv/bin/activate
export PYTHONPATH=$PWD
pytest -q

generate_dataset(args, synthetic_object)
if configuration == "generate":
    return
return
Contributor

It changes logic. The original "generate" just saves the test images and exits; "generate_run" generates, saves, and runs all benchmark steps.
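
For context, a minimal sketch of the control flow the reviewer describes; run and run_benchmark are illustrative stand-ins, and the stub bodies are placeholders, not the benchmark's real functions.

def generate_dataset(args, synthetic_object):
    print("saving synthetic test images ...")  # stub for illustration

def run_benchmark(args):
    print("running detection and accuracy statistics ...")  # stub for illustration

def run(configuration, args=None, synthetic_object=None):
    # both modes generate and save the dataset first
    generate_dataset(args, synthetic_object)
    if configuration == "generate":
        return               # "generate": save the images and exit
    run_benchmark(args)      # "generate_run": continue with all benchmark steps

run("generate")       # saves images only
run("generate_run")   # saves images, then benchmarks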

Comment on lines 77 to 83
 def get_coord(num_rows, num_cols, start_x=0, start_y=0):
     i, j = np.ogrid[:num_rows, :num_cols]
     v = np.empty((num_rows, num_cols, 2), dtype=np.float32)
-    v[..., 0] = j + start_y
-    v[..., 1] = i + start_x
+    v[..., 0] = j + start_x
+    v[..., 1] = i + start_y
     v.shape = (1, -1, 2)
     return v
Contributor
The function is not used in the benchmark at all and may be removed.

Author

Ok, I will remove it.
