3D Image Sensor Using Smart Pixels

March 25, 2009
ifm efector inc., a leading manufacturer of sensors and controls, introduces a new 3D image sensor for object evaluation. The compact sensor uses time-of-flight distance measurement to identify the height of any object in the field of view. An integrated 64 x 48 pixel array, in which each pixel represents a time-of-flight measurement, defines the field of view for the sensor. This technology provides critical information in applications such as palletizing and de-palletizing, material handling, bulk level measurement and intelligent routing/sorting. With a list price of $1,450 (U.S.), ifm states that the 3D image sensor sets a new benchmark for price/performance in reliable 3D object detection.

3D Image Sensor Uses Photonic Mixing Device (PMD) Principle Based on Time of Flight Measurement

The Photonic Mixing Device (PMD) principle is based on time-of-flight (TOF) distance measurement. In its simplest version, a light pulse is emitted from the sensor. The pulse is reflected by an object and detected by the sensor's smart-pixel receiver element. A comparison between the optical and electrical reference signals yields an output signal that carries the desired distance information.

Time-of-flight distance measurement is not new to the market. The challenge is that traditional TOF components use a photodiode as the receiver element and require additional external electronics for signal acquisition and processing. These components detect only a single measurement point and are costly because of the fast response times that time-of-flight measurement demands. Using such components to achieve 3D imaging can be prohibitively expensive for industrial markets. In comparison, the receiver element of ifm's 3D image sensor is a system-on-chip design: the complete sensor element and its electronics are built on a 0.25 mm square silicon chip. This miniature chip enables 3D imaging using time of flight while reducing both the size and the cost of the sensor.
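The basic pulsed time-of-flight relationship described above can be sketched in a few lines of Python. This is an illustrative calculation only, not ifm's implementation; the function name and the example round-trip time are assumptions for the sketch:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """One-way distance to the object from a measured round-trip time.

    The light pulse travels to the object and back, so the distance
    is half the round-trip time multiplied by the speed of light.
    """
    return C * round_trip_seconds / 2.0

# A 20 ns round trip corresponds to roughly 3 m of distance.
print(tof_distance(20e-9))
```

The tight timescales are the point: resolving centimetres means resolving tens of picoseconds, which is why a photodiode plus external high-speed electronics is costly, and why integrating the timing on-chip matters.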
For image capture, the 3D sensor's 64 x 48 pixel array projects 3072 points of reference, capturing the entire image in three dimensions. Each pixel within the array has two gates controlled by an oscillator running at 20 MHz. Light is emitted from the sensor, reflected by the target and received back into the PMD chip, which is connected to the same oscillator. Incoming photons are converted into electrons and separated within the optically sensitive area of the semiconductor, the region of mobile charge carriers. This built-in functionality means the sensor can pre-process the signal, removing the need for expensive high-speed electronics. The phase shift of the received light is compared to the reference signal in each pixel and sent from the chip as the representative distance for that pixel; relating the phase difference measured by each pixel to the speed of light gives the distance travelled by the light falling onto the detector. Information from all the pixels is then combined to create a 3D image.

The sensor also includes patented Suppression of Background Illumination (SBI). This technology allows the sensor to eliminate the effects of external lighting such as sunlight and high-powered indoor lighting, so the sensor can be used in almost any indoor or outdoor application.

Multidimensional Measurement for Industrial Automation Applications

Palletizing and de-palletizing

The 3D image sensor solves a variety of application challenges for the material handling industry. The challenge in this process is detecting when a layer of product is complete or incomplete. If a stray box is left on the top layer of a pallet and goes undetected, there is a risk that the robotic arm will crush the product, or that a missing product will create an unbalanced load on top of the pallet. The 3D image sensor can be mounted above a palletizer, where its 64 x 48 pixel array projects 3072 data points of reference onto the pallet.
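The phase-shift measurement described above can also be sketched numerically. Under a 20 MHz modulation, the phase lag of the returned light maps to distance as d = c·φ / (4π·f), which also fixes the unambiguous range at c / (2f), about 7.5 m. This is a simplified illustration of the principle, not ifm firmware; names and the example phase are assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s
F_MOD = 20e6       # 20 MHz modulation frequency, as in the PMD pixel array

def phase_to_distance(phase_rad: float) -> float:
    """Convert a measured phase shift (radians) into a distance (metres).

    The reflected light lags the on-chip reference by phase = 2*pi*f*t,
    where t is the round-trip time, so d = c * phase / (4 * pi * f).
    """
    return C * phase_rad / (4.0 * math.pi * F_MOD)

# A full 2*pi phase wrap corresponds to the unambiguous range,
# c / (2 * F_MOD), which is about 7.5 m at 20 MHz.
print(phase_to_distance(2.0 * math.pi))
```

Each of the 3072 pixels performs this phase comparison in parallel, which is what lets a single shot produce a complete depth image.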
The 3D image sensor evaluates the entire layer of a pallet and reports the highest or lowest point, regardless of where it lies in the field of view, back to the main control.

Material handling in airport logistics

In airports across the country, airport logistics moves thousands of pieces of luggage every day. Determining the gross dimensions and volume of a piece of luggage before it reaches the inspection scanner is crucial to proper luggage flow: if a bag is too large, it can be removed before it reaches the scanner. To pre-determine baggage size, many airports must use multiple high-cost sensors to measure all sides of the luggage for height, width and depth, which adds cost and complexity. The ifm 3D image sensor uses a standard algorithm to determine the gross volume of a piece of luggage in its field of view. The PMD sensor's pixel array delivers 3072 data points to detect the luggage size and volume. The sensor sends a signal to the main control when a piece of luggage is too large to fit through the scanner, so it can be removed before it reaches the scanner, maintaining proper luggage flow.
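One simple way such a gross-volume check could work, sketched below, is to segment the pixels that sit above the conveyor plane and take their bounding box. The actual algorithm in the ifm sensor is not published, and the calibration constants, function names and limits here are all hypothetical:

```python
# Estimate an object's bounding-box dimensions from a downward-looking
# depth map. Background pixels read the conveyor distance; object pixels
# read a shorter distance. All constants below are assumed calibrations.

CONVEYOR_DIST = 2.0   # m, sensor to empty conveyor (assumed)
PIXEL_PITCH = 0.02    # m per pixel on the conveyor plane (assumed)
THRESHOLD = 0.05      # m, minimum height treated as part of the object

def gross_dimensions(depth_map):
    """Return (length, width, height) in metres of the bounding box.

    depth_map is a list of rows of distances in metres, e.g. the
    48 x 64 readings from the PMD pixel array.
    """
    hit_rows, hit_cols, max_height = [], [], 0.0
    for r, row in enumerate(depth_map):
        for c, dist in enumerate(row):
            height = CONVEYOR_DIST - dist
            if height > THRESHOLD:
                hit_rows.append(r)
                hit_cols.append(c)
                max_height = max(max_height, height)
    if not hit_rows:
        return (0.0, 0.0, 0.0)
    length = (max(hit_rows) - min(hit_rows) + 1) * PIXEL_PITCH
    width = (max(hit_cols) - min(hit_cols) + 1) * PIXEL_PITCH
    return (length, width, max_height)

def too_large(depth_map, limits=(0.9, 0.7, 0.6)):
    """True if any dimension exceeds the scanner opening (assumed limits)."""
    length, width, height = gross_dimensions(depth_map)
    return length > limits[0] or width > limits[1] or height > limits[2]
```

Multiplying the three returned dimensions gives the gross volume; the oversize signal sent to the main control reduces to a single boolean comparison like `too_large` above.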