MIT chip will boost smartphone photo quality
By Stewart Mitchell
Posted on 21 Feb 2013 at 12:43
A chip that could significantly increase the quality of images captured by smartphone cameras has been created by a team of researchers at MIT's Microsystems Technology Laboratory.
According to the researchers, the processor would take over the computational work currently run via software, allowing images to make better use of the lens and sensor capabilities while consuming less power than data-intensive software tools.
The processor would be able to handle High Dynamic Range (HDR) imaging, which attempts to make images better reflect the light seen by the human eye, especially at extremes of brightness that cannot be handled by existing smartphone cameras.
"To do this, the chip’s processor automatically takes three separate 'low dynamic range' images with the camera: a normally exposed image, an overexposed image capturing details in the dark areas of the scene, and an underexposed image capturing details in the bright areas," said Rahul Rithe, a graduate student on the project.
"It then merges them to create one image capturing the entire range of brightness in the scene."
According to MIT, similar techniques can also be applied on shots taken indoors, to take the harshness out of images that use a flash, by combining one image taken with the flash and one without.
Although the technique can be achieved with software, the researchers say the processing is too slow, especially when used on larger images and video.
"Software-based systems typically take several seconds to perform this operation, while the chip can do it in a few hundred milliseconds on a 10-megapixel image," the researchers said. "This means it is even fast enough to apply to video."
The processor also includes what the researchers call a bilateral filter, which reduces noise in images by blending each pixel's value with those of its neighbours.
"In conventional filtering, this means even those pixels at the edges of objects are also blurred, which results in a less detailed image," Rithe said. "Bilateral filters will only blur pixels with their neighbours if they have been assigned a similar brightness value; this prevents the system from blurring across any edges."
The MIT team will be presenting the chip to industry later this month, but there are no details on when it might appear in smartphones sold to the public.