


The Camera Versus the Human Eye

This article started after I followed an online debate about whether a 35mm or a 50mm lens on a full-frame camera gives the equivalent field of view to normal human vision. That particular discussion quickly delved into the optical physics of the eye as a camera and lens — an understandable comparison, since the eye consists of a front element (the cornea), an aperture ring (the iris and pupil), a lens, and a sensor (the retina).

Despite all the impressive mathematics thrown back and forth regarding the optical physics of the eyeball, the discussion didn't quite seem to make sense logically, so I did a lot of reading of my own on the topic.

There's no direct benefit from this article that will let you run out and take better photographs, but you might find it interesting. You may also find it incredibly boring, so I'll give you my conclusion first, in the form of two quotes from Garry Winogrand:

A photograph is the illusion of a literal description of how the camera 'saw' a piece of time and space.

Photography is not about the thing photographed. It is about how that thing looks photographed.

Basically, in doing all this research about how the human eye is like a camera, what I actually learned is how human vision is not like a photograph. In a way, it explained to me why I so often find a photograph much more beautiful and interesting than I found the actual scene itself.

The Eye as a Camera System

Superficially, it's pretty logical to compare the eye to a camera. We can measure the front-to-back length of the eye (about 25mm from the cornea to the retina) and the diameter of the pupil (2mm contracted, 7 to 8mm dilated), and calculate lens-like numbers from those measurements.

You'll find different numbers quoted for the focal length of the eye, though. Some are from physical measurements of the anatomical structures of the eye, others from optometric calculations; some take into account that the lens of the eye, and the size of the eye itself, change with the contractions of various muscles.

To summarize, though, one commonly quoted focal length of the eye is 17mm (calculated from the optometric diopter value). The more commonly accepted value, however, is 22mm to 24mm (calculated from physical refraction in the eye). In certain situations, the focal length may actually be longer.
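To see where the 17mm figure comes from: a lens's focal length in millimeters is simply 1000 divided by its optical power in diopters, and the relaxed eye is usually quoted at roughly 60 diopters. A quick sketch (the 60-diopter figure is the standard textbook approximation, not from this article):

```python
# Convert optical power in diopters to focal length in millimeters.
# A diopter is the reciprocal of the focal length in meters.
def diopters_to_focal_length_mm(power_diopters: float) -> float:
    return 1000.0 / power_diopters

# The relaxed human eye is commonly approximated as ~60 diopters.
print(round(diopters_to_focal_length_mm(60), 1))  # 16.7 (mm)
```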

Since we know the approximate focal length and the diameter of the pupil, it's relatively easy to calculate the aperture (f-stop) of the eye. Given a 17mm focal length and an 8mm pupil, the eyeball should function as an f/2.1 lens. If we use the 24mm focal length and an 8mm pupil, it works out to about f/3.0. A number of studies in astronomy have actually measured the f-stop of the human eye, and the measured number comes out to f/3.2 to f/3.5 (Middleton, 1958).
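The arithmetic here is just the definition of the f-number: focal length divided by the diameter of the entrance pupil. A minimal sketch:

```python
# f-number (f-stop) = focal length / aperture (pupil) diameter
def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    return focal_length_mm / pupil_diameter_mm

print(round(f_number(17, 8), 1))  # 2.1, the f/2.1 figure above
print(round(f_number(24, 8), 1))  # 3.0
```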

At this point, both of you who have read this far have probably wondered, "If the focal length of the eye is 17 or 24mm, why is everyone arguing about whether 35mm or 50mm lenses have the same field of view as the human eye?"

The reason is that the measured focal length of the eye isn't what determines the angle of view of human vision. I'll get into this in more detail below, but the main point is that only part of the retina processes the main image we see. (The area of main vision is called the cone of visual attention; the rest of what we see is "peripheral vision.")

Studies have measured the cone of visual attention and found it to be about 55 degrees wide. On a 35mm full-frame camera, a 43mm lens provides an angle of view of about 55 degrees, so that focal length provides very nearly the same angle of view that we humans have. Damn if that isn't halfway between 35mm and 50mm. So the original argument is settled: the actual 'normal' lens on a 35mm SLR is neither 35mm nor 50mm, it's halfway in between.
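The 43mm figure can be checked with the standard angle-of-view formula, AOV = 2·atan(d / 2f), where d is the sensor dimension (here the 43.3mm diagonal of a 36 × 24mm frame) and f is the focal length. A quick sketch:

```python
import math

# Angle of view in degrees: 2 * atan(sensor_dimension / (2 * focal_length))
def angle_of_view_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

full_frame_diagonal = math.hypot(36, 24)  # ~43.3mm
print(round(angle_of_view_deg(full_frame_diagonal, 43), 1))  # ~53.4 degrees
```

That 53-and-change degrees measured on the diagonal is as close to the 55-degree cone of visual attention as any common focal length gets.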

The Eye Is Not a Camera System

Having gotten the answer to the original discussion, I could have left things alone and walked away with yet another bit of fairly useless trivia filed away to amaze my online friends with. But NOOoooo. When I have a bunch of work that needs doing, I find I'll almost always choose to spend another couple of hours reading more articles about human vision.

You may have noticed that the above section left out some of the eye-to-camera analogies, because once you get past the simple measurements of aperture and lens, the rest of the comparisons don't fit so well.

Consider the eye's sensor, the retina. The retina is almost the same size (32mm in diameter) as the sensor on a full-frame camera (35mm in diameter). After that, though, almost everything is different.

The retina of a human eye

The first difference between the retina and your camera's sensor is rather obvious: the retina is curved along the back surface of the eyeball, not flat like the silicon sensor in the camera. The curvature has an obvious advantage: the edges of the retina are about the same distance from the lens as the center. On a flat sensor, the edges are farther from the lens than the middle. Advantage retina — it should have better 'corner sharpness'.
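To put a rough number on that, assume the image forms 24mm behind the lens and that a flat full-frame sensor's corner sits 21.6mm from its center (half of the 43.3mm diagonal). These are illustrative figures, not measurements from the article:

```python
import math

image_distance_mm = 24.0  # assumed lens-to-sensor distance at the center
half_diagonal_mm = 21.6   # half the diagonal of a 36 x 24mm sensor

# On a flat sensor, the corner distance is the hypotenuse of a right triangle;
# on a curved retina it would stay near image_distance_mm everywhere.
corner_distance = math.hypot(image_distance_mm, half_diagonal_mm)
print(round(corner_distance, 1))  # ~32.3mm, about 35% farther than the center
```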

The human eye also has a lot more pixels than your camera, about 130 million (you 24-megapixel camera owners feeling humble now?). However, only about 6 million of the eye's pixels are cones (which see color); the remaining 124 million see only black and white. But advantage retina again. Big time.

But if we look further, the differences become even more pronounced…

On a camera sensor, each pixel is laid out in a regular grid pattern. Every square millimeter of the sensor has exactly the same number and pattern of pixels. The retina has a small central area, about 6mm across (the macula), that contains the densest concentration of photoreceptors in the eye. The central portion of the macula (the fovea) is densely packed with only cone (color-sensing) cells. The rest of the macula around this central 'color only' area contains both rods and cones.

The macula contains about 150,000 'pixels' in each square millimeter (compare that to 24,000,000 pixels spread over a 36mm × 24mm sensor in a 5D Mark II or D3x) and provides our 'central vision' (the 55-degree cone of visual attention mentioned above). Anyway, the central part of our visual field has far more resolving power than even the best camera.
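The density gap is easy to verify: 24 million pixels over a 36 × 24mm sensor works out to under 28,000 per square millimeter, versus roughly 150,000 per square millimeter in the macula. A quick check:

```python
# Photosite density of a 24MP full-frame sensor vs. the macula's receptor density
sensor_pixels = 24_000_000
sensor_area_mm2 = 36 * 24        # 864 mm^2

camera_density = sensor_pixels / sensor_area_mm2
macula_density = 150_000         # receptors per mm^2, from the article

print(round(camera_density))     # ~27778 pixels per mm^2
print(round(macula_density / camera_density, 1))  # the macula is ~5.4x denser
```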

The rest of the retina has far fewer 'pixels', most of which sense only black and white. It provides what we usually consider 'peripheral vision', the things we see "in the corner of our eye". This part senses moving objects very well but doesn't provide enough resolution to read a book, for instance.

The total field of view (the area in which we can see movement) of the human eye is about 160 degrees, but outside the cone of visual attention we can't really recognize detail, only broad shapes and movement.

The advantages of the human eye over the camera get reduced a bit as we leave the retina and travel back toward the brain. The camera sends every pixel's data from the sensor to a computer chip for processing into an image. The eye has 130 million sensors in the retina, but the optic nerve that carries those sensors' signals to the brain has only about 1.2 million fibers, so less than 1% of the retina's data is passed on to the brain at any given instant. (This is partly because the chemical light sensors in the retina take a while to 'recharge' after being stimulated, and partly because the brain couldn't process that much information anyway.)
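The ratio is simple to confirm: 1.2 million optic-nerve fibers against 130 million retinal receptors:

```python
retina_receptors = 130_000_000
optic_nerve_fibers = 1_200_000

# Fraction of the retina's channels the optic nerve can carry at once
fraction = optic_nerve_fibers / retina_receptors
print(f"{fraction:.1%}")  # 0.9%, i.e. less than 1% at any given instant
```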

And of course the brain processes the signals very differently from a camera. Unlike the intermittent shutter clicks of a camera, the eye sends the brain a constant video feed that is processed into what we see. A subconscious part of the brain (the lateral geniculate nucleus, if you must know) compares the signals from both eyes, assembles the most important parts into 3-D images, and sends them on to the conscious part of the brain for image recognition and further processing.

The subconscious brain also sends signals to the eye, moving the eyeball slightly in a scanning pattern so that the sharp vision of the macula moves across an object of interest. Over a few split seconds the eye actually sends multiple images, and the brain processes them into a more complete and detailed picture.

The subconscious brain also rejects a lot of the incoming bandwidth, sending only a small fraction of its data on to the conscious brain. You can control this to some extent: for example, right now your conscious brain is telling the lateral geniculate nucleus, "Send me information from the central vision only; focus on those typed words in the center of the field of vision; move from left to right so I can read them." Stop reading for a second and, without moving your eyes, try to notice what's in your peripheral field of view. A second ago you didn't "see" that object to the right or left of the computer monitor, because the peripheral vision wasn't getting passed on to the conscious brain.

If you concentrate, even without moving your eyes, you can at least tell the object is there. If you want to see it clearly, though, you'll have to send another brain signal to the eye, shifting the cone of visual attention over to that object. Notice also that you can't both read the text and see the peripheral objects — the brain can't process that much data.

The brain isn't done when the image reaches the conscious part (called the visual cortex). This area connects strongly with the memory portions of the brain, allowing you to 'recognize' objects in the image. We've all experienced that moment when we see something but don't recognize what it is for a second or two. After we've recognized it, we wonder why in the world it wasn't obvious immediately. It's because it took the brain a split second to access the memory files for image recognition. (If you haven't experienced this yet, just wait a few years. You will.)

In reality (and this is very obvious), human vision is video, not photography. Even when staring at a photograph, the brain is taking multiple 'snapshots' as it moves the center of focus over the picture, stacking and assembling them into the final image we perceive. Look at a photograph for a few minutes and you'll realize that subconsciously your eye has drifted over the picture, getting an overview of the image, focusing in on details here and there and, after a few seconds, realizing some things about it that weren't obvious at first glance.

So What's the Point?

Well, I have some observations, although they're far removed from "which lens has the field of view most similar to human vision?" This information got me thinking about what makes me so fascinated by some photographs, and not so much by others. I don't know whether any of these observations are true, but they're interesting thoughts (to me, at least). All of them are based on one fact: when I really like a photograph, I spend a minute or two looking at it, letting my human vision scan it, grabbing the detail from it or perhaps wondering about the detail that's not visible.

Photographs taken at a 'normal' angle of view (35mm to 50mm) seem to retain their appeal whatever their size. Even web-sized images shot at this focal length keep the essence of the shot. The shot below (taken at 35mm) has a lot more detail when seen in a large image, but the essence is obvious even when small. Perhaps the brain's processing is more comfortable recognizing an image it sees at its normal field of view. Perhaps it's because we photographers tend to subconsciously emphasize composition and subjects in a 'normal' angle-of-view photograph.

The photograph above demonstrates something else I've always wondered about: does our fascination and love for black-and-white photography occur because it's one of the few ways the dense cone (color only) receptors in our macula are forced to send a grayscale image to our brain?

Perhaps our brain likes looking at just tone and texture, without color data clogging up that narrow bandwidth between eyeball and brain.

Like 'normal-angle' shots, telephoto and macro shots often look great in small prints or web-sized JPGs. I have an 8 × 10 of an elephant's eye and a similar-sized macro print of a spider on my office wall that even from across the room look great. (At least they look great to me, but you'll notice that they're hanging in my office. I've hung them in a couple of other places in the house and have been tactfully told that "they really don't go with the living room furniture", so maybe they don't look so great to everyone.)

There's no great composition or other factor to make those photos attractive to me, but I find them fascinating anyway. Perhaps it's because even at a small size, my human vision can see details in the photo that I never could see looking at an elephant or spider with the naked eye.

On the other hand, when I get a good wide-angle or scenic shot, I hardly even bother to post a web-sized graphic or make a small print (and I'm not going to start for this article). I want it printed BIG. I think perhaps that's so my human vision can scan through the image, picking out the little details that are completely lost when it's downsized. And every time I do make a large print, even of a scene I've been to a dozen times, I notice things in the photograph that I never saw when I was there in person.

Perhaps the 'video' my brain makes while scanning the print provides much more detail, and I find it more pleasing than the composition the photo would offer printed small (or that I saw when I was actually at the scene).

And perhaps the subconscious 'scanning' that my vision makes across a photograph accounts for why things like the rule of thirds and selective focus pull my eye to certain parts of the photograph. Maybe we photographers simply figured out how the brain processes images and took advantage of it through practical experience, without knowing all the science involved.

But I guess my only real conclusion is this: a photograph is NOT exactly what my eye and brain saw at the scene. When I get a good shot, it's something different, and something better, like what Winogrand said in the two quotes above, and in this quote, too:

You see something happening and you bang away at it. Either you get what you saw or you get something else — and whichever is better you print.


About the author: Roger Cicala is the founder of LensRentals. This article was originally published here.


Image credits: my eye up close by machinecodeblue, Nikh's eye through camera's eye from my optics for your eyes :-) by slalit, Schematic of the Human Eye by entirelysubjective, My left eye retina by Richard Masoner / Cyclelicious, Chromatic aberration (sort of) by moppet65535

Source: https://petapixel.com/2012/11/17/the-camera-versus-the-human-eye/
