MIT chip will boost smartphone photo quality
By Stewart Mitchell
Posted on 21 Feb 2013 at 12:43
A chip that could significantly increase the quality of images captured by smartphone cameras has been created by a team of researchers at MIT's Microsystems Technology Laboratories.
According to the researchers, the processor would take over the computational work currently run via software, allowing images to make better use of the lens and sensor capabilities while consuming less power than data-intensive software tools.
The processor would be able to handle High Dynamic Range (HDR) imaging, which attempts to make images better reflect the light seen by the human eye, especially at extremes of brightness beyond what existing smartphone cameras can handle in a single exposure.
"To do this, the chip’s processor automatically takes three separate 'low dynamic range' images with the camera: a normally exposed image, an overexposed image capturing details in the dark areas of the scene, and an underexposed image capturing details in the bright areas," said Rahul Rithe, a graduate student on the project.
"It then merges them to create one image capturing the entire range of brightness in the scene."
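The merging step described above can be sketched in a few lines of NumPy. This is a minimal illustration of exposure fusion, not MIT's actual algorithm: each pixel in the three exposures is weighted by how close it is to mid-grey (well exposed), and the weighted average becomes the output. The function name and the Gaussian weighting scheme are illustrative assumptions.

```python
import numpy as np

def merge_hdr(under, normal, over):
    """Fuse three exposures (arrays of values in [0, 1]) into one image.

    Pixels near mid-grey (0.5) are considered well exposed and get high
    weight; blown-out or crushed pixels get low weight.
    """
    stack = np.stack([under, normal, over]).astype(np.float64)
    # Gaussian weighting centred on mid-grey; sigma chosen arbitrarily
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0)  # normalise so weights sum to 1 per pixel
    return (weights * stack).sum(axis=0)
```

In a shadow region, the overexposed frame dominates the weighted sum; in a highlight, the underexposed frame does, which is how detail from both extremes ends up in one image.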
According to MIT, similar techniques can also be applied on shots taken indoors, to take the harshness out of images that use a flash, by combining one image taken with the flash and one without.
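One common way to combine a flash and a no-flash shot, sketched below under the assumption that MIT's chip does something broadly similar, is to keep the large-scale tones of the ambient (no-flash) image and transfer only the fine detail from the flash image. The helper names and the simple box blur are illustrative.

```python
import numpy as np

def box_blur(img, radius):
    """Crude low-pass filter: replace each pixel with its local mean."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def merge_flash_ambient(flash, ambient, radius=2, eps=1e-3):
    """Take overall tones from the ambient shot, fine detail from flash."""
    # Detail layer: how much each flash pixel deviates from its surroundings
    detail = (flash + eps) / (box_blur(flash, radius) + eps)
    return np.clip(box_blur(ambient, radius) * detail, 0.0, 1.0)
```

The result keeps the warm, natural lighting of the no-flash frame while borrowing the sharp, low-noise texture that the flash frame captures.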
Although the technique can be achieved with software, the researchers say the processing is too slow, especially when used on larger images and video.
"Software-based systems typically take several seconds to perform this operation, while the chip can do it in a few hundred milliseconds on a 10-megapixel image," the researchers said. "This means it is even fast enough to apply to video."
The processor also includes what the researchers call a bilateral filter, which helps to reduce noise in images by blending pixels containing unwanted features with their neighbours.
"In conventional filtering, this means even those pixels at the edges of objects are also blurred, which results in a less detailed image," Rithe said. "Bilateral filters will only blur pixels with their neighbours if they have been assigned a similar brightness value; this prevents the system from blurring across any edges."
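The edge-preserving behaviour Rithe describes can be shown in a short, unoptimised NumPy sketch (the hardware implementation will differ; parameter names here are illustrative). Each pixel's neighbours are weighted by two Gaussians: one on spatial distance and one on brightness difference, so pixels across a sharp edge get near-zero weight and are not blurred together.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a greyscale image (values in [0, 1]).

    sigma_s controls the spatial falloff; sigma_r controls how similar in
    brightness a neighbour must be to contribute.
    """
    h, w = img.shape
    img = img.astype(np.float64)
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial weight: nearby pixels count more
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Range weight: similar-brightness pixels count more
            rng = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out
```

Applied to a hard black-to-white edge, the range weight collapses to nearly zero across the boundary, so the edge survives the filtering while flat noisy regions are smoothed.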
The MIT team will be presenting the chip to industry later this month, but there are no details on when it might appear in smartphones sold to the public.