
PC Pro: Computing in the Real World



Posted on August 2nd, 2010 by David Bayon

Why bad 3D, not 3D glasses, is what gives you a headache

This is the second in a series of blogs based on a seminar by Buzz Hays, chief instructor for the Sony 3D Technology Center in Culver City, California.

Zalman 3D glasses

3D is an ever-evolving process, which is why the effect can be such a hit-and-miss affair. But those who insist 3D glasses give them headaches are a little wide of the mark, according to the man who trains the filmmaking pros.

“It’s not the technology’s fault, it’s really the content that can cause these problems,” explains Buzz Hays. “The more care taken when making the content, the better off everyone’s going to be. My mantra is that it’s easy to make 3D but it’s hard to make it good – and by ‘good’ I mean taking care to make sure that this isn’t going to cause eyestrain.”

There are several common mistakes that can cause discomfort, and easy ways to reduce it, yet they’re only now being learned and put into regular use.

Interaxial distance

The interaxial, or the distance between the two cameras, controls the overall depth of the 3D effect. Objects will appear closer or further away but they won’t change in size, so it’s important not to increase the interaxial distance too much. Filmmakers are gradually gaining experience with what types of scene work with different depths of 3D, and Buzz was keen to point out that framing a scene for 3D has similarities to composition for still photographers.

“When it comes to composing in 3D… by using the heads of the audience [in a U2 concert clip] or the ground plane, or some continuous sense of depth in the shot, it holds the shot together. One of the complaints people sometimes have about 3D is that it feels like a cardboard cutout: that there’s a cardboard cutout, then some space and then another cardboard cutout. By using a careful choice of interaxial spacing, and also by having something in the frame like the ground plane, or smoke or atmosphere or something, then you can start to hold the shot together.”
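The depth effect Buzz describes can be made concrete with the textbook parallel-camera approximation, in which on-sensor disparity is focal length × interaxial ÷ subject distance. This is a minimal illustrative sketch, not anything from the seminar; the function name and example numbers are my own:

```python
def disparity_mm(interaxial_mm, focal_mm, depth_mm):
    """On-sensor disparity for a parallel two-camera rig.

    A point at distance depth_mm projects to slightly different spots
    on the left and right sensors; the horizontal offset is
    focal * interaxial / depth (the standard pinhole approximation).
    """
    return focal_mm * interaxial_mm / depth_mm

# Doubling the interaxial doubles every disparity, deepening the whole
# scene, while each object's image size on screen is unchanged.
narrow = disparity_mm(interaxial_mm=65, focal_mm=35, depth_mm=2_000)
wide = disparity_mm(interaxial_mm=130, focal_mm=35, depth_mm=2_000)
print(narrow, wide)  # 1.1375 2.275
```

Note that depth only enters through the disparity, which is why cranking the interaxial changes how far apart things feel without changing how big they look.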


Convergence

Our eyes converge inward as we look at an object moving towards us. In 3D it’s essentially the same thing: we converge (or “toe-in”) the angle of the left and right cameras, and this alters the particular 3D plane to which our focus is drawn. Objects in front of the convergence point appear to be coming out at us, while objects behind do the opposite. Care needs to be taken, however, particularly when fast cutting is used.

Image courtesy of Panasonic

“There’s a situation where every time we cut to a new shot, the subject of interest is at a slightly different distance from us,” explained Buzz, demonstrating a rapidly cut clip of two people at different convergence points. “What’s happening is on every single cut, your eyes are making an adjustment to depth – you’re trying to find that object. It’s a very subtle distance, it’s not a great distance, but that’s what you’re feeling in your eye muscles as you’re trying to work to catch up with the shot. That’s called the vergence-accommodation conflict.”

“The way we make it much easier to look at is by using convergence in post-production. In that same sequence I adjust the convergence in post [production] to massage the depth. Now your eyes are making the adjustment once in the very first shot, and from that point on they don’t have to adjust again. It’s very subtle but if you don’t do it, it’s the difference between a comfortable experience and a splitting headache after 90 minutes.”

What filmmakers are now learning is that trying to control the convergence during filmmaking is, as Buzz bluntly puts it, “a waste of time”. As cuts are made and scenes are shifted around, it’s difficult to know exactly what shot will follow another, so trying to predict it all is futile.

“It’s far better to find the comfortable place to put the convergence level during shooting, then adjust it in post-production once the edit is finished – that ultimately makes the difference between good and bad 3D.”
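The post-production adjustment Buzz describes is typically a horizontal image translation: sliding one eye’s image sideways shifts every disparity by a constant, which moves the zero-parallax plane. A hypothetical sketch of the idea, with invented pixel values:

```python
def reconverge(disparities_px, subject_disparity_px):
    """Simulate a horizontal image translation: shifting one eye's view
    subtracts a constant from every disparity, so the chosen subject
    lands at zero parallax (on the screen plane)."""
    return [d - subject_disparity_px for d in disparities_px]

# Two consecutive shots whose subjects sit at different screen depths:
shot_a = [12, 4, -3]   # subject at +4 px (behind the screen)
shot_b = [20, 9, 1]    # subject at +9 px
# After regrading, both subjects sit at 0 px, so the viewer's eyes make
# no vergence jump on the cut.
print(reconverge(shot_a, 4))  # [8, 0, -7]
print(reconverge(shot_b, 9))  # [11, 0, -8]
```

The shift changes where the scene sits relative to the screen, not the spacing between objects within it, which is why it can be applied freely once the edit is locked.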


Divergence

The opposite of convergence is divergence, and just as our eyes can only converge to a certain point before we go cross-eyed, so they can only diverge to parallel. Overuse of divergence can cause big problems.


“Typically, when we look at an object in the world our eyes are either parallel if it’s at distance, or they’re converged inwards for objects that are closer,” continued Buzz. “There’s a condition that can be created unintentionally where your eyes are forced to rotate outward in order to fuse this image – which frankly only works if you’re a horse or a goldfish, and they don’t buy movie tickets.”

At this point Buzz put a scene on the screen in front of us and had us don our specs. A figure at the back of the image was simply impossible to bring into focus, and even trying was as uncomfortable as you’d expect. Removing the glasses showed why: the left and right views of the figure were several feet apart on the big screen.

“Divergence occurs based on the size of screen you’re using. You might make a neat adjustment [during filming] so it looks great on a monitor, but when you scale it up to 40ft it hurts like heck. Experienced stereographers will be able to avoid it, but some low-budget 3D films have been filled with divergence, as they’ve made the cardinal mistake of falling in love with the image on a video monitor when it was really intended for a cinema display. They’re dialling the depth to within an inch of its life and getting everything they wanted on the small monitor, so their camera settings are out of whack. It can’t be fixed in post – unless you just abandon [the image for] one eye and convert 3D from the other.”
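Buzz’s screen-size point is simple arithmetic: background parallax is a fixed fraction of image width, so enlarging the image enlarges the physical separation, and once that exceeds the viewer’s eye separation (roughly 65mm for adults) the eyes are forced past parallel. A rough sketch with assumed numbers:

```python
EYE_SEPARATION_MM = 65  # typical adult interocular distance (approx.)

def forces_divergence(parallax_fraction, screen_width_mm):
    """True if background parallax, expressed as a fraction of image
    width, exceeds the viewer's eye separation on this screen -- which
    would force the eyes to rotate outward past parallel."""
    return parallax_fraction * screen_width_mm > EYE_SEPARATION_MM

# 1% positive parallax is a harmless 5 mm on a 500 mm monitor...
print(forces_divergence(0.01, 500))     # False
# ...but the same grade on a 12 m cinema screen becomes 120 mm.
print(forces_divergence(0.01, 12_000))  # True
```

This is exactly the trap of grading depth on a small monitor: the check passes there and fails only when the image is scaled up for the cinema.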

These were just a few of the common faults covered in our brief time with Buzz, and it was clear from his honesty about current 3D’s shortcomings that there really isn’t a true 3D expert in existence. The people teaching it are still learning while they go, and doing their best to pass that knowledge on. The hope is that viewers will benefit from gradually better 3D – and, hopefully, fewer headaches.

Read more:
Why we can’t ditch 3D glasses just yet.
From the Pole to Pandora: the shaky progress of modern 3D.
Why 3D and modern filmmaking techniques don’t mix.
3D TV: in the home, on a budget and… on the news?

Goldfish image courtesy of bensonkua. Convergence diagram courtesy of Panasonic.


Posted in: Hardware, Random



16 Responses to “Why bad 3D, not 3D glasses, is what gives you a headache”

  1. Jeff Lewis Says:
    August 2nd, 2010 at 7:59 pm

    All of this is true – but there are a couple of other issues that aren’t easily fixed. One, the eye wants to change focus from near to far in order to change priority. We do this all the time in RL and we’ve collectively learned that a movie is ‘flat’ and so we don’t change focus. But a ‘3D’ movie is still flat – but *looks* 3D, so we reflexively try to refocus. That gives us eyestrain and headaches.

    Two, we rely on moving parallax to gauge where things are. When you move your head, things change relative position. 3D movies don’t do this – so it throws off our sense of where things are. So, you either keep perfectly still, or you can get a kind of motion sickness from watching a 3D movie.

    None of these can be fixed by creative movie editing. They’re inherent flaws in the current 3D technology.

  2. Lisa Park Says:
    August 2nd, 2010 at 9:10 pm

    Great info. VSP recently teamed w/ Bill Nye The Science Guy to debunk common myths about eyes… One of the episodes covers 3D TV and gaming and if (and how) it makes you dizzy. Check it out:

  3. nurbles Says:
    August 2nd, 2010 at 9:24 pm

    Jeff Lewis has good points, but I’d like to add that two eyes are not needed for depth perception. I am legally blind in one eye and, when I wear a patch over it, I can put a basketball in the hoop, hit a softball and do other depth-perception-related things much better than with both eyes open. According to things I’ve read, humans get more depth cues from focusing our eyes than from the often minuscule difference between the images from each. Some filmmakers know this and refuse to do 3D — but there’s so much money being thrown at it that they are caving in, even though they know it can never work with a flat projection. I hope more people join me in voting with their dollars and always choose the 2D option.

  4. rob Says:
    August 3rd, 2010 at 1:36 am

    Nurbles, I can’t agree with you on that, at least not for people with two good eyes. Often I’ll notice how bad my depth perception is with one eye closed, such as when I am lying in bed / on the couch with one eye in the pillow, and get totally confused about what I am looking at until I open the other eye. It all flattens out.

    Also, keep in mind that while the distance between your eyes is small (2.5 inches maybe), the distance from one side of your pupil to the other is much smaller, especially in bright light. That is what you have to rely on for determining depth by focus info. I seriously doubt that is what you are using for basketball….more likely you are using motion of your head, perspective, etc. to judge depth.

  5. zholy Says:
    August 3rd, 2010 at 4:37 am


  6. TV John Says:
    August 3rd, 2010 at 9:49 am

    Rob, you are absolutely right. I had a friend at school who was a good tennis player, and suddenly his game went to pot. It subsequently turned out he had a detached retina. In due course he learned to compensate and again became good at tennis – whether he was as good as before I don’t recall. I have no proof for this, but I strongly suspect that if somebody substituted a tennis ball of a slightly different size his game would have gone to pot again, as his brain was probably including the known size of the ball as part of its depth calculation. The human brain has the most amazing ability to work around defects.

  7. Steve Cassidy Says:
    August 3rd, 2010 at 12:23 pm

    I can’t do 3D at all. I knew for a long time that there were differences between individuals when it comes to flicker – as soon as people started to work on CRT monitors in offices with fluorescent tubes, it became plain that some could sit there all day, while others went bonkers with flicker-perception. Only after an unhappy visit to Moorfields Eye Hospital did I discover that I have a thickly populated optic nerve, which means that I am living hell for opticians (I can see Tim’s hand over the comment here already!) and I can watch no more than about 10 minutes of this “3D” nonsense. I doubt that 3D film-makers have any idea how the audience divides up between “3D neutral” and “3D vomitous” humans.

  8. Bystander Says:
    August 3rd, 2010 at 5:09 pm

    3D movies don’t give me headaches or discomfort, I just don’t like them. The filmmakers fill these movies with so many 3D clichés (how many projectiles can you watch flying out of the screen in one sitting?) that it reminds me of the old SCTV skits “Dr. Tongue’s 3D House of (blank)”. And the films that don’t get filled up with hackneyed tricks didn’t need the 3D effect in the first place.

    I, for one, don’t bother with the 3D in the theatres anymore, and I won’t be spending any money on hardware to experience it at home.

  9. RMH Says:
    August 4th, 2010 at 6:46 am

    nurbles is right. I am an optometrist, so I can help you with this. The human visual system only uses two-eyed depth perception inside 4-5 feet. Our brain is able to triangulate a distance estimate based on the disparity in images coming from the two eyes, which in normal humans are never more than about 74mm apart (the average is around 60mm). An example of two-eyed depth perception is threading a needle. Try it with one eye closed. It is much more difficult. Beyond about 5 feet, the brain is unable to use the disparity cues and relies on other cues for depth perception such as linear perspective, shadows and interposition. Others include parallax and the granularity of an image. The ability to accurately judge depth at distance is not a function of having two eyes. That’s one reason that it is legal to drive with one eye in all the States that I’m aware of.

  10. Ewen Flint Says:
    August 5th, 2010 at 8:46 am

    I’m a doctor. I support RMH; stereo vision is only useful for depth perception within about 2 metres, beyond that it is parallax. That is how a one-eyed person can drive, shoot hoops etc, but probably can’t thread a needle. 3D pictures, the ones where you converge your eyes and the dots suddenly turn into a 3D picture, require you to converge to one apparent depth but focus at another. That is why some people can’t do it. 3D movies have the same problem. The eyes are focussed on the screen (effectively at infinity) but seeing different images (effectively converged). Children who are born long-sighted need to focus their eyes even when looking at distant objects. This is called accommodation, and is linked with convergence. As a result, when they try to focus, the eyes converge, causing a squint.

  11. Steve Hart Says:
    August 5th, 2010 at 9:46 am

    This is a very interesting discussion. Buzz Hays appears to be saying that the problems are simply the way that the technology is being implemented and once film makers learn how to do it everything will be fine. Others seem to think that being able to ‘do 3D’ is an ability that some individuals just don’t have. I can easily read a 2D document at my desk and then immediately understand the 3D image outside my office window so surely if we understand the mechanics of that process we can ultimately produce a movie image equivalent to looking out of the window? Are we currently constrained by the technology or by our experience of using that technology?

  12. Paul C Says:
    August 6th, 2010 at 3:07 am

    Reading the comments above makes me wonder how much each of us has had to learn (from early childhood) how to interpret 2D images, which are flat and in near-focus but represent a world that has enormous spatial depth.

    (It might be difficult to find someone who has not been exposed to such images during childhood, to see if they have problems interpreting them as an adult.)

    Perhaps 3D is something our children will learn to cope with far better than us adults can, just like, for example, they have learned to use computers from an early age, and intuitively think the right way for doing so.

    When I got a digital hearing aid I found it introduced a time-delay due to the way the signals are processed. However my brain adjusted. I no longer notice the delay at all. And I adjust immediately between hearing-aid and no-hearing-aid modes, without being aware that I am doing so.

    Likewise with varifocals. Many people never adjust to them. I persisted and adapted. It becomes intuitive. It ceases to require conscious effort.

    So I am hopeful that our brains are capable of handling 3D, but I would think our children will adapt more readily to it.

  13. KB Says:
    August 16th, 2010 at 2:47 am

    Two other things to think about:

    The Aladdin effect – if you’re doing a shot that cuts a person off at the waist in 3D, the person will look like a genie at the bottom of the screen.

    Secondly, many Americans fail to think about subtitles, but they pose a huge problem when it comes to 3D movies. If they are put in “front” of the screen, they look like they are jumping out at you and hurt your eyes to read, while if you put them at the back of the screen, they would be covered up by other objects, or it would be impossible for our minds to register that the words behind an object are not blocked by it…

  14. Chris Reese Says:
    August 17th, 2010 at 2:01 pm

    These are all good comments. As with sound and color, directors and DPs will learn how to use 3D as an additional storytelling medium rather than a theme park gag. This was done very well in How to Train Your Dragon. Viewer proximity was used to create a nice sense of intimacy and at other times the opposite.

    One of the biggest challenges we battle in our conversion business is shot framing with frustum clipping or pan rates. There’s not much we can do to help that. The moral is that not all content is suitable for conversion to 3D.

  15. Chris Reese Says:
    August 28th, 2010 at 1:32 pm

    Oh, and I forgot to add that I am personally responsible for ruining 3D’s reputation because of my stupid conversion company.

  16. köpa dataspel Says:
    January 13th, 2012 at 1:42 am

    I recently obtained something similar but inexpensive

