Hardware Setup Guide
Structured light scanning might seem daunting at first, but worry not!
The cost of parts has dropped significantly in recent years, and they are generally straightforward to source.
- camera
- digital projector
- computer
- flatscreen display (optional)
- mounting hardware
- turntable (optional)
- flat board
- printer
- Raspberry Pi camera 8MP [adafruit link]
- Sony MP-CL1 laser projector [amazon link]
- Raspberry Pi 3b+ computer [adafruit link]
- some 1080p LCD TV (calibration step only)
- camera mount rail [amazon link]
- 2x pedco ball socket [amazon link]
- RAM-PRO 10-inch turntable lazy susan [amazon link]
- some small glass rectangle from home depot (~8"x10")
- some HP black/white laser printer
The specific parts listed above are not strictly necessary; the camera, projector, and computer all have wiggle room.
Camera input is handled by software called mjpeg-streamer. This software is compatible with the Raspberry Pi camera as well as other types of cameras, though during development it was only tested with Raspberry Pi cameras. More resolution = more output points.
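mjpeg-streamer serves frames over HTTP as a stream of concatenated JPEGs. As a minimal sketch, each frame can be recovered by scanning the raw byte stream for the JPEG start/end markers; the host, port, and `/?action=stream` path shown in the comment are assumptions based on the tool's common default configuration:

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def extract_jpeg_frames(buf: bytes):
    """Yield complete JPEG frames found in a raw MJPEG byte buffer."""
    pos = 0
    while True:
        start = buf.find(SOI, pos)
        if start == -1:
            return
        end = buf.find(EOI, start)
        if end == -1:
            return
        yield buf[start:end + 2]
        pos = end + 2

# Demo with a synthetic buffer standing in for bytes read from the
# streamer (e.g. http://raspberrypi:8080/?action=stream).
fake_stream = (b"--boundary\r\n" + SOI + b"frame-one" + EOI +
               b"\r\n--boundary\r\n" + SOI + b"frame-two" + EOI)
frames = list(extract_jpeg_frames(fake_stream))
print(len(frames))  # 2
```

In practice you would read chunks from the HTTP socket, append them to a buffer, and run the same marker scan on whatever has accumulated so far.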
The laser projector above is ideal for scanning because it has a very deep depth of field: there is no focus knob, since the laser image stays sharp over a large distance range. This property ensures the structured light patterns appear crisp on the object or environment. Any DLP/LCD projector should work just fine, but be warned of the challenges associated with limited depth of field. More resolution and more depth of field = more output points.
A Raspberry Pi 3B+ computer or equivalent model is ideal for scanning because it is very small, widely available, cheap, and has a very strong online support community for bumps in the road.
Priority number one is ensuring that your camera and projector are absolutely as rigid as possible with respect to each other.
Whatever hardware you use, if the camera or projector slip, the output results will suffer.
Place the camera and projector side by side so that they point at the same object; their fields of view should overlap significantly.
All practical lenses have some amount of distortion. To mitigate this source of error, it's necessary to take some pictures of a known object and determine lists of correspondences between reference object points (3D) and camera image points (2D). See slcrunch for generating correspondence data from image sets.
There are two options for calibration objects:
- printed flat chess pattern
- flatscreen display
OpenCV traditionally uses bog-standard chessboard patterns for characterizing lens distortion. This pattern is adopted because it provides a grid of sample points for comparing lens coordinates with real-world coordinates. With knowledge of the edge length scale of the printed reference object, it's easy to translate what is seen in the image into metric units. The width and height of the chess pattern should be different, so the pattern's orientation is unambiguous.
Each image will yield roughly 10-100 sample points (e.g. a board of 11x10 squares has 10x9 = 90 inner corners). It's strongly suggested to capture at least 5 images of the chess pattern with the camera, all from different angles. Covering the entire camera sensor with chess corners is important for many lenses, since distortion is greatest at the edges. Chessboard patterns cannot be recognized unless the entire pattern is visible, so panning and zooming out can help.
The other option for measuring correspondence data is pointing the camera at a flat LCD screen. Many available LCD panels have 1920x1080 resolution or better, which makes it possible to sample hundreds of thousands to millions of points per view angle. This method is strongly suggested because it collects data from every visible screen pixel; filling the camera's field of view with the display generally results in very good coverage.
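Dense per-pixel correspondences from a display are commonly obtained with binary Gray code patterns, where each screen column gets a unique code shown over successive frames; whether slcrunch uses exactly this scheme is not confirmed here, so treat this as an illustrative sketch:

```python
import numpy as np

def gray_code_patterns(width: int) -> np.ndarray:
    """Return a (n_bits, width) array of 0/1 column stripe patterns.

    Frame k displays bit k of each column's Gray code; a camera that
    records all frames can decode which screen column each pixel sees.
    """
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code
    return (gray[None, :] >> np.arange(n_bits)[:, None]) & 1

def decode_column(bits: np.ndarray) -> np.ndarray:
    """Invert gray_code_patterns: recover column indices from observed bits."""
    n_bits = bits.shape[0]
    gray = (bits * (1 << np.arange(n_bits))[:, None]).sum(axis=0)
    binary = gray.copy()
    shift = 1
    while shift < n_bits:  # Gray -> binary via cascaded right-shift XORs
        binary ^= binary >> shift
        shift <<= 1
    return binary

patterns = gray_code_patterns(1920)  # 11 frames cover 1920 columns
print(patterns.shape)  # (11, 1920)
```

Only ceil(log2(1920)) = 11 frames are needed per axis, which is why a display can yield far denser correspondence data per view than a printed chessboard. Gray codes are preferred over plain binary because adjacent columns differ in exactly one bit, so a decoding error at a stripe boundary is off by at most one column.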
extremely WIP
The OS is assumed to be Raspbian on a Raspberry Pi.
Need to install:
- OpenCV
- liblo
- SDL ...