Your smartphone can probably do an okay job of figuring out how far away something is when taking a photo, but the image itself is still just a flat plane of pixels. A team at Caltech has created a new imaging chip that could be incorporated into smartphones to capture a full 3D view of an object, which could then be used to generate a file suitable for 3D printing. It's called a nanophotonic coherent imager (NCI), and it's so small it could fit inside current smartphones without much of a problem.

3D imaging systems already exist, but they're usually bulky and expensive. Google's Project Tango developer devices can do a little of this, but they rely on multiple cameras and are built entirely around depth and position sensing. The NCI chip could be just another thing built into your phone. The key to making this work is that each pixel on the NCI is an independent interferometer: an optical instrument that uses the interference of light waves to judge distance. As a result, each NCI pixel records both intensity (like a regular camera pixel) and distance information.
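To make "intensity plus distance per pixel" concrete, here is a minimal sketch, not Caltech's actual processing pipeline, of how such a frame could be lifted into a 3D point cloud. The NumPy back-projection and the FOCAL_LENGTH_PX constant are illustrative assumptions.

```python
# Hedged sketch: if every pixel reports (intensity, distance), a frame can be
# turned into a 3D point cloud with a simple pinhole-style back-projection.
# The focal length below is made up for illustration, not an NCI parameter.
import numpy as np

FOCAL_LENGTH_PX = 500.0   # assumed focal length, in pixel units

def to_point_cloud(depth: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Return an (N, 4) array of x, y, z, intensity points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2) * depth / FOCAL_LENGTH_PX
    y = (v - h / 2) * depth / FOCAL_LENGTH_PX
    return np.stack([x, y, depth, intensity], axis=-1).reshape(-1, 4)

# A 4x4 toy frame, mirroring the prototype's pixel count.
depth = np.full((4, 4), 0.10)        # everything 10 cm away
intensity = np.random.rand(4, 4)
print(to_point_cloud(depth, intensity).shape)   # (16, 4)
```

A regular camera only gives you the intensity half of that pair; the depth channel is what turns a flat photo into printable geometry.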
The NCI makes use of LIDAR technology to scan its targets. LIDAR is already extensively used in range-finding applications like Google's self-driving cars. The way it's used in this system is a little different, though. The object being imaged is illuminated by a small array of LIDAR emitters. This array can sweep across an object to cover different parts of it without moving the NCI. As the lasers bounce off the object, the reflected light is picked up by the NCI chip, which analyzes the phase, frequency, and intensity of the returns to build a map of distance and size, which amounts to a 3D image.
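As a rough illustration of how coherent ranging can recover distance, the sketch below uses a frequency-swept (FMCW-style) scheme in which the delay of the reflected light appears as a beat frequency against the outgoing beam. The chirp bandwidth, sweep time, and the distance_from_beat helper are assumptions for illustration, not the NCI's actual parameters.

```python
# Minimal sketch of coherent ranging: the emitter sweeps its optical frequency
# linearly, and the round-trip delay of the reflected light shows up as a beat
# frequency when it is mixed with the outgoing beam on the chip.
# All numbers here are illustrative, not taken from the Caltech design.

C = 3.0e8                  # speed of light (m/s)
CHIRP_BANDWIDTH = 100e9    # frequency sweep width B (Hz), assumed
CHIRP_DURATION = 1e-3      # sweep time T (s), assumed

def distance_from_beat(beat_hz: float) -> float:
    """Round-trip delay tau = beat / (B / T); one-way distance = c * tau / 2."""
    tau = beat_hz / (CHIRP_BANDWIDTH / CHIRP_DURATION)
    return C * tau / 2.0

# Example: a 1 MHz beat tone corresponds to roughly 1.5 m under these settings.
print(f"{distance_from_beat(1e6):.3f} m")
```

The intensity of the return fills in the brightness of each pixel, so one measurement yields both halves of the image.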
The NCI's accuracy could eventually far surpass previous depth-measurement hardware based on silicon photonics, though it isn't there quite yet. The resolution of the test chip created by the Caltech researchers is just 16 coherent pixels in a 4×4 array. The team's demonstration images of a penny were created by moving the object around in four-pixel increments so the entire surface could be captured in sufficient detail.
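The sketch below illustrates that tile-scanning idea in hypothetical form: a 4×4 readout stepped across a larger surface in four-pixel increments and stitched back together. The capture_tile and scan helpers are made-up stand-ins, not the researchers' code.

```python
# Sketch of tile scanning: a 4x4 sensor covers a larger surface by stepping
# across it in four-pixel increments and stitching the tiles back together.
# capture_tile() is a hypothetical stand-in for reading the sensor.
import numpy as np

TILE = 4  # the prototype sensor is 4x4 coherent pixels

def capture_tile(scene: np.ndarray, row: int, col: int) -> np.ndarray:
    """Pretend sensor readout: grab one 4x4 patch of the scene's depth map."""
    return scene[row:row + TILE, col:col + TILE]

def scan(scene: np.ndarray) -> np.ndarray:
    """Step over the scene in TILE-sized increments and stitch the result."""
    h, w = scene.shape
    out = np.zeros_like(scene)
    for r in range(0, h, TILE):
        for c in range(0, w, TILE):
            out[r:r + TILE, c:c + TILE] = capture_tile(scene, r, c)
    return out

scene = np.random.rand(16, 16)          # e.g. the surface of a coin
assert np.array_equal(scan(scene), scene)
```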
This is only necessary for the proof of concept chip. It’s likely that further refinement of the design could vastly increase the effective resolution — the team thinks it could easily scale to hundreds of thousands of pixels. Tight integration in a mobile device with orientation sensors could allow the user to move an NCI-equipped phone around to scan all sides of an object and get a higher effective resolution as well. The sensor could also simply get larger. Right now the chip is just 300 microns square (0.3mm), which is tiny as far as imaging sensors go. Still, part of the appeal is that it’s so small and inexpensive to produce.
The team plans to continue developing this technology. So who knows? Maybe one day you’ll capture a 3D snapshot as easily as you capture 2D ones now.