MIT chip will boost smartphone photo quality
By Stewart Mitchell
Posted on 21 Feb 2013 at 12:43
A chip that could significantly improve the quality of images captured by smartphone cameras has been created by a team of researchers at MIT's Microsystems Technology Laboratories.
According to the researchers, the processor would take over the computational work currently run via software, allowing images to make better use of the lens and sensor capabilities while consuming less power than data-intensive software tools.
The processor would be able to handle High Dynamic Range (HDR) imaging, which attempts to make images better reflect the range of light seen by the human eye, especially at extremes of brightness that existing smartphone cameras cannot capture in a single shot.
"To do this, the chip’s processor automatically takes three separate 'low dynamic range' images with the camera: a normally exposed image, an overexposed image capturing details in the dark areas of the scene, and an underexposed image capturing details in the bright areas," said Rahul Rithe, a graduate student on the project.
"It then merges them to create one image capturing the entire range of brightness in the scene."
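The merging step Rithe describes can be illustrated with a simplified exposure-fusion sketch: each pixel of the output is a weighted average of the three frames, where well-exposed pixels (those closest to mid-grey) get the most weight. This is a common software approach to the same problem, not a description of the chip's actual circuitry, and the weighting scheme and parameters below are illustrative assumptions.

```python
import numpy as np

def fuse_exposures(images):
    """Merge differently exposed frames of the same scene by weighting
    each pixel by how well-exposed it is (closest to mid-grey)."""
    stack = np.stack([img.astype(np.float64) for img in images])
    # Well-exposedness weight: Gaussian centred on mid-grey (0.5 on a 0-1 scale)
    weights = np.exp(-((stack / 255.0 - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0)  # normalise weights per pixel
    return (weights * stack).sum(axis=0)

# Three synthetic "exposures" of a simple gradient scene
scene = np.linspace(0, 255, 100).reshape(10, 10)
under = np.clip(scene * 0.5, 0, 255)   # underexposed: keeps bright detail
normal = np.clip(scene, 0, 255)        # normal exposure
over = np.clip(scene * 2.0, 0, 255)    # overexposed: keeps dark detail
fused = fuse_exposures([under, normal, over])
```

The fused result stays within the valid brightness range while drawing each region from whichever frame exposed it best.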
According to MIT, similar techniques can also be applied on shots taken indoors, to take the harshness out of images that use a flash, by combining one image taken with the flash and one without.
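One simplified way to combine a flash and no-flash pair, in the spirit of the technique MIT describes, is to keep the soft overall lighting of the ambient shot and borrow only the fine detail (high frequencies) from the sharper flash shot. The decomposition below, using a crude box blur as the low-pass filter, is an illustrative sketch rather than the researchers' actual method.

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude low-pass filter: mean over a (2r+1)^2 neighbourhood,
    using edge replication at the borders."""
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def merge_flash_no_flash(flash, ambient):
    """Keep the soft lighting of the ambient (no-flash) shot but
    borrow fine high-frequency detail from the flash shot."""
    detail = flash - box_blur(flash)    # high frequencies of the flash image
    return box_blur(ambient) + detail   # ambient tones plus flash detail

# Synthetic pair: the flash shot is bright with fine striped texture,
# the ambient shot is dim and featureless
flash = np.full((6, 6), 180.0)
flash[::2] += 20.0  # striped detail visible only in the flash shot
ambient = np.full((6, 6), 80.0)
merged = merge_flash_no_flash(flash, ambient)
```

The merged image keeps the ambient shot's overall brightness while recovering the texture that only the flash revealed.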
Although the technique can be achieved with software, the researchers say the processing is too slow, especially when used on larger images and video.
"Software-based systems typically take several seconds to perform this operation, while the chip can do it in a few hundred milliseconds on a 10-megapixel image," the researchers said. "This means it is even fast enough to apply to video."
The processor also includes what the scientists called a bilateral filter, which helps to reduce noise in images by averaging each pixel's value with those of its neighbours.
"In conventional filtering, this means even those pixels at the edges of objects are also blurred, which results in a less detailed image," Rithe said. "Bilateral filters will only blur pixels with their neighbours if they have been assigned a similar brightness value; this prevents the system from blurring across any edges."
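The edge-preserving behaviour Rithe describes can be sketched directly: each pixel is replaced by a weighted average of its neighbours, where the weight falls off with both spatial distance and difference in brightness, so pixels across an edge contribute almost nothing. This is a textbook software bilateral filter for illustration, with assumed parameter values, not the chip's hardware implementation.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving smoothing: average each pixel with neighbours,
    weighted by spatial closeness AND similarity in brightness."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial weight: nearby pixels count more
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Range weight: similarly bright pixels count more
            range_w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A sharp step edge with added noise: filtering smooths the noise
# on each side but does not blur across the edge
step = np.zeros((8, 8))
step[:, 4:] = 200.0
noisy = step + np.random.RandomState(0).normal(0, 5, step.shape)
smoothed = bilateral_filter(noisy)
```

Because the range weight for a 200-level brightness difference is effectively zero, the two sides of the edge are denoised independently and the edge itself stays sharp.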
The MIT team will be presenting the chip to industry later this month, but there are no details on when it might appear in smartphones sold to the public.