
Commit 4c166c6

Add first version of README

Author: Agustín Castro
1 parent 604c679 commit 4c166c6

File tree

2 files changed: +62, -1 lines


demos/multi_camera/README.md

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
# Multi-Camera Demo

In this example, we show how to associate trackers from different synchronized videos in Norfair.

Why would we want that?

- When a subject being tracked goes out of frame in one video, you might still be able to track it and recognize it as the same individual if it is still visible in another video.
- Map footage from one or many videos onto a common reference frame. For example, if you are watching a soccer match, you might want to combine the information from different cameras and show the positions of the players from a top-down view.

## Example 1: Associating different videos

This method will allow you to associate trackers from different footage of the same scene. You can use as many videos as you want.

```bash
python3 demo.py video1.mp4 video2.mp4 video3.mp4
```

A UI will appear to associate points in `video1.mp4` with points in the other videos, to set `video1.mp4` as a common frame of reference. You can save the transformation you have created in the UI by using the `--save-transformation` flag, and you can load it later with the `--load-transformation` flag.
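
For instance, a typical two-step workflow might look like the sketch below. This is only an illustration: whether `--save-transformation` and `--load-transformation` take an explicit file path is not spelled out here, so check `python3 demo.py --help` for the exact interface.

```bash
# Illustrative sketch only; the exact arguments of these flags may differ (see --help).

# First run: annotate matching points in the UI and save the resulting transformation.
python3 demo.py video1.mp4 video2.mp4 video3.mp4 --save-transformation

# Later runs: reuse the saved transformation and skip the annotation step.
python3 demo.py video1.mp4 video2.mp4 video3.mp4 --load-transformation
```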

If the videos move, you should also use the `--use-motion-estimator-footage` flag to account for camera movement.
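
For example, a run on footage from hand-held or panning cameras could look like this sketch (the file names are just placeholders):

```bash
# Sketch: associate trackers across footage whose cameras move (placeholder file names).
python3 demo.py moving_cam1.mp4 moving_cam2.mp4 --use-motion-estimator-footage
```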
## Example 2: Creating a new perspective

This method will allow you to associate trackers from different footage of the same scene, and create a new perspective of the scene that didn't exist in those videos. You can use as many videos as you want, and you also need to provide one reference (either an image or a video) corresponding to the new perspective. In the soccer example, the reference could be a top-down view of a soccer field.

```bash
python3 demo.py video1.mp4 video2.mp4 video3.mp4 --reference path_to_reference_file
```

As before, you will have to use the UI; or, if you have already done that and saved the transformation with the `--save-transformation` flag, you can load that same transformation with the `--load-transformation` flag.

If the videos you are tracking in have camera movement, you should also use the `--use-motion-estimator-footage` flag to account for it.

If you are using a video for the reference file, and the camera moves in the reference, then you should use the `--use-motion-estimator-reference` flag.
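
Putting the pieces together, a full invocation for this example with moving footage and a moving reference video might look like the following sketch (`reference_video.mp4` and the other file names are placeholders):

```bash
# Sketch: build a new perspective from two moving cameras, using a moving reference video.
# File names are placeholders; the flags are the ones described above.
python3 demo.py video1.mp4 video2.mp4 --reference reference_video.mp4 \
    --use-motion-estimator-footage --use-motion-estimator-reference
```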

For additional settings, you may display the instructions using `python demo.py --help`.

## UI usage

The purpose of the UI is to annotate pairs of matching points in the reference and the footage, in order to estimate a transformation.

To add a point, just click a pair of points (one from the footage window, and another from the reference window) and select `"Add"`.
To remove a point, just click the corresponding point at the bottom left corner and select `"Remove"`.
You can also ignore points by clicking them and selecting `"Ignore"`. The transformation will not use ignored points.
To 'unignore' points that have been previously ignored, just click them and select `"Unignore"`.

If either the footage or the reference is a video, you can jump to future frames to pick points that match.
For example, to jump 215 frames ahead in the footage, just type that number next to `'Frames to skip (footage)'` and select `"Skip frames"`.

You can go back to the first frame of the video (in either footage or reference) by selecting `"Reset video"`.

Once a transformation has been estimated (you will know because the `"Finished"` button turns green), you can test it:
To test your transformation, select the `"Test"` mode, pick a point in either the reference or the footage, and see the associated point in the other window.
You can go back to the `"Annotate"` mode and keep adding more associated points until you are satisfied with the estimated transformation.
Once you are happy with the transformation, just click on `"Finish"`.

norfair/common_reference_ui.py

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ def set_reference(
 Creates a UI to annotate points that match in reference and footage, and estimate the transformation.
 To add a point, just click a pair of points (one from the footage window, and another from the reference window) and select "Add".
 To remove a point, just select the corresponding point at the bottom left corner, and select "Remove".
-You can also ignore point, by clicking them and selecting "Ignore". The transformation will not used ingored points.
+You can also ignore points, by clicking them and selecting "Ignore". The transformation will not used ingored points.
 To 'uningnore' points that have been previously ignored, just click them and select "Unignore".

 If either footage or reference are videos, you can jump to future frames to pick points that match.
