One of the most common methods of acquiring 3D data is structured lighting, either one-dimensional illumination (using a laser line) or two-dimensional illumination in the form of a defined pattern (using a projector), to generate a height profile.
Moving the object under the laser line generates a series of height profiles that are combined to compose a three-dimensional image of the object. The other approach requires a stationary object onto which different patterns (e.g. Gray codes) are projected, allowing an unambiguous assignment of pixels to object points from which a three-dimensional point cloud is determined.
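The laser-line case can be illustrated with a small sketch. It assumes an idealised triangulation geometry in which a surface point raised by height h shifts the imaged laser line on the sensor by h · tan(angle); the function name, pixel size and angle are illustrative values, and a real system would use a full calibration rather than this simplified model.

```python
import numpy as np

def profile_heights(line_offsets_px, pixel_size_mm, triangulation_angle_deg):
    """Convert laser-line displacements (in pixels) into surface heights (in mm).

    Simplified model: a point raised by h shifts the imaged line by
    h * tan(triangulation_angle) on the sensor. Real systems replace
    this with a calibrated mapping.
    """
    shift_mm = np.asarray(line_offsets_px, dtype=float) * pixel_size_mm
    return shift_mm / np.tan(np.radians(triangulation_angle_deg))

# One height profile per scan position; stacking many such profiles
# while the object moves under the line yields the 3D image.
profile = profile_heights([0, 5, 10, 5, 0], pixel_size_mm=0.01,
                          triangulation_angle_deg=45.0)
```

At a 45° triangulation angle the pixel shift (converted to mm) equals the height directly, which is why angles in that region are a common design choice.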
A fundamentally different approach uses stereo vision to obtain depth information in a similar way to the human eye. Such a system consists of two cameras; the algorithm first correlates the two images, identifying the corresponding pixels in both camera images. The depth information is then extracted by triangulation from the known geometry of the camera pair.
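For a rectified stereo pair, the triangulation step reduces to the standard relation Z = f · B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity (the pixel offset between the corresponding points). A minimal sketch, with illustrative parameter values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth of a point from its disparity in a rectified stereo pair.

    Z = f * B / d: larger disparity means the point is closer.
    Assumes the images are already rectified and correspondences found.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_mm / disparity_px

# Example: 1000 px focal length, 100 mm baseline, 50 px disparity
z = depth_from_disparity(50, focal_px=1000.0, baseline_mm=100.0)  # 2000.0 mm
```

The hard part in practice is the correlation step that produces the disparity, not this formula; dense matching algorithms and their robustness to texture-poor surfaces dominate the quality of the result.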
Other methods are available, including time-of-flight, which is still limited to low-resolution sensors, and white light interferometry, which offers highly accurate but extremely slow acquisition.
In general, the result of all single-camera methods of 3D image acquisition is a 2.5D image that contains depth/height information per pixel. From this displacement information it is possible to compose or calculate a 3D model. Depending on the application requirements, metrically calibrated point clouds may be necessary for the subsequent inspection task. Depending on the acquisition method, e.g. linear scanning, angle-based scanning or multi-camera scan systems, calibration algorithms of varying complexity are required.
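The step from a 2.5D image to a point cloud can be sketched as a back-projection through a pinhole camera model. This is one common formulation, not the method of any particular product; the intrinsics fx, fy, cx, cy are assumed to come from a prior calibration.

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a 2.5D depth image into an (N, 3) point cloud.

    Pinhole model: a pixel (u, v) with depth z maps to
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat 2x2 depth map at z = 1 becomes four 3D points.
cloud = depth_map_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

For line-scan systems the same idea applies per profile, with the scan axis supplying one of the three coordinates.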
Once the data has been converted to a 3D model, a range of algorithms is available. Usually the first step is a 3D calibration, with the goal of eliminating perspective and lens distortions and deriving metric coordinates. Calibration targets with known dimensions are normally used; depending on the algorithm, this delivers a 3D accuracy in the subpixel range.
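A full 3D calibration models distortion as well as scale, but the core idea of using a target with known dimensions can be shown with the simplest possible case: a least-squares fit of a single scale factor from measured feature spacings. This toy example is purely illustrative and ignores distortion entirely.

```python
import numpy as np

def fit_scale(measured_px, known_mm):
    """Least-squares scale factor (mm per pixel) from a calibration target.

    measured_px: feature spacings measured in the image (pixels)
    known_mm:    the same spacings as manufactured on the target (mm)
    Minimises sum((s * measured - known)^2) over the scale s.
    """
    measured_px = np.asarray(measured_px, dtype=float)
    known_mm = np.asarray(known_mm, dtype=float)
    return float(known_mm @ measured_px / (measured_px @ measured_px))

# Target features 10 mm and 20 mm apart, imaged at 100 px and 200 px:
scale = fit_scale([100, 200], [10, 20])  # 0.1 mm per pixel
```

Real calibration routines estimate many more parameters (lens distortion coefficients, perspective, sensor pose) in the same spirit: minimise the reprojection error against the known target geometry.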
In a triangulation system based on a camera and a laser, shadows might occur; these can be suppressed by adding a second camera in the reverse orientation. Preprocessing is then necessary to consolidate the point clouds extracted by the second sensor before further processing. This can be achieved using so-called merge algorithms.
One of the tools to feature this is CVB Merge 3D.
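The principle behind such a merge step (this sketch is not the CVB implementation) is simple once both clouds are in a common coordinate frame: concatenate the points, then thin out duplicates where the two views overlap, for example by keeping one point per voxel.

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, voxel_mm=0.1):
    """Merge two registered point clouds (e.g. from opposing cameras).

    Both clouds must already be in the same coordinate frame.
    Duplicates in the overlap region are thinned by quantising points
    to a voxel grid and keeping one point per occupied voxel.
    """
    points = np.vstack([cloud_a, cloud_b])
    voxel_keys = np.round(points / voxel_mm).astype(np.int64)
    _, keep = np.unique(voxel_keys, axis=0, return_index=True)
    return points[np.sort(keep)]
```

Registering the two clouds into one frame in the first place is the harder problem; it is usually solved once, as part of the system calibration, so the merge at run time stays cheap.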
The calibrated and shadow-free 3D model or point cloud is then used in 3D applications for either measurement or matching against a perfect sample, often called a golden template. The result is a highly precise deviation point cloud between the test object and the reference model. Using 3D imaging enables the user to detect defective parts quickly at an early production stage and reject them before further work is carried out, maximising the cost saving.
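A deviation point cloud can be computed as the distance from each test point to its nearest neighbour in the template cloud. The brute-force version below is a sketch for small clouds; production systems would use a spatial index such as a KD-tree.

```python
import numpy as np

def deviation_from_template(test_points, template_points):
    """Per-point deviation of a test cloud from a golden template.

    Returns, for each test point, the Euclidean distance to its
    nearest neighbour in the template cloud. Brute force, O(N*M);
    use a KD-tree for large clouds.
    """
    diffs = test_points[:, None, :] - template_points[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

# Points with deviation above a tolerance mark a defective region:
# defective = deviation_from_template(test, template) > tolerance_mm
```

Thresholding the deviations against a tolerance then yields exactly the accept/reject decision described above.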
As 3D imaging is much more complex than standard 2D imaging, due to the increased effort in image acquisition and the more complex processing algorithms, precalibrated complete sensors are available (see the solutions section). If you are looking to build a 3D vision system from individual components, note that the performance of the system is influenced by a variety of factors such as the laser, camera, triangulation angle, required accuracy, material etc. We highly recommend you contact our specialists to help define the optimum system requirements. More information about 3D imaging can also be found in the camera section.