
CAMERA VS THE HUMAN EYE

Why can't I just point my camera at what I see and record it exactly as it appears? It is a seemingly simple question. It is also one of the most complicated to answer, and it requires an in-depth look not only at how a camera records light, but also at how and why our eyes work the way they do. Addressing these questions can reveal surprising insights about our everyday perception of the world – as well as make you a better photographer.

INTRODUCTION
Our eyes can look around a scene and adjust dynamically to whatever interests us, whereas a camera captures a single fixed image. This ability accounts for many of our advantages over cameras. For example, our eyes can compensate as we focus on regions of a scene with different brightness, can look around to encompass a wider angle of view, and can alternately focus on objects at different distances.

However, the end result is akin to a video camera – not a still camera – that assembles a series of meaningful snapshots into a scene in our mind. A quick glance by our eyes might be a fairer comparison with a camera, but ultimately the uniqueness of our visual system is unavoidable because of the following:

What we see is our mind's reconstruction of objects based on the information provided by our eyes – not the actual light received by our eyes.

Wow! Let's unpack that: our brain is not simply wired to an image sensor (the retina) that reports directly what it records. Instead, what we perceive is an interpretation of separate regions that our mind deems significant, which it then assembles into a complete scene. It is a kind of ultra-high-speed HDR built from selected regions of a view.

Skeptical? Most people are – at least at first. The following examples show situations in which the mind can be tricked into seeing something different from what the eyes actually receive:

False Color: Fix your gaze on the central cross. A missing dot rotates around the circle of colored dots, and after a while the moving dot appears to be green – even though there is nothing green present in the image.

Gray Bands: Each band appears slightly darker or lighter near its upper and lower edges – even though each is a uniform shade of gray.

However, this should not discourage us from comparing eyes and cameras! Under many conditions a fair comparison is possible, but only if we take into account both what we are looking at and how our mind processes that information. The following sections try to distinguish between the two whenever possible.

SUMMARY OF THE DIFFERENCES
This tutorial groups the comparisons into the following visual categories:
1. Angle of View
2. Resolution and Detail
3. Sensitivity and Dynamic Range

These categories cover the areas where eyes and cameras differ most significantly, and hence also where there is the most disagreement. Other topics could include depth of field, stereo vision, white balance, and color gamut, but they are not the focus of this tutorial.


1. ANGLE OF VIEW
With cameras, the angle of view is determined by the focal length of the lens (along with the size of the camera's sensor). For example, a telephoto lens has a longer focal length than a standard portrait lens and therefore covers a narrower angle of view:
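To make this relationship concrete, a simple rectilinear-lens approximation gives the angle of view as 2·arctan(sensor width / (2 × focal length)). A minimal sketch in Python, with illustrative focal lengths and a 36 mm full-frame sensor width as assumptions:

```python
import math

def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view (degrees) for a simple rectilinear lens.

    Uses the approximation AOV = 2 * arctan(w / (2 * f)).
    Sensor width defaults to a 36 mm full-frame sensor (an assumption).
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (24, 50, 200):  # wide-angle, "normal", and telephoto examples
    print(f"{f:>3} mm lens -> {angle_of_view(f):.1f} degrees")
# 24 mm -> ~73.7 degrees, 50 mm -> ~39.6 degrees, 200 mm -> ~10.3 degrees
```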

Unfortunately, the situation with our eyes is not so simple. Although the human eye has a focal length of approximately 22 mm, this figure is misleading because (i) the back of the eye is curved, (ii) the periphery of our visual field contains progressively less detail than the center, and (iii) the scene we perceive is the combined result of both eyes.
Each eye individually has a 120-200° angle of view, depending on how rigorously objects are defined as being "seen." The region of overlap between the two eyes is about 130° – roughly as wide as a fisheye lens. However, for evolutionary reasons our extreme peripheral vision is useful only for detecting movement and large objects (such as a lion pouncing from the side). Furthermore, such a wide angle would look highly distorted and unnatural if captured by a camera.

Diagram: left-eye and right-eye fields of view, with their region of overlap in the center.
Our central angle of view – around 40-60° – is what matters most to our perception. Subjectively, this corresponds to the angle within which we can recall details or objects without moving our eyes. Incidentally, this is close to the angle of view of a "normal" 50 mm focal length on a full-frame camera (43 mm, to be exact), or a focal length of about 27 mm on a camera with a 1.6x crop factor (such as entry-level and semi-pro DSLR and mirrorless cameras). Although this does not reproduce the full angle over which we can see, it corresponds well with what we perceive as having the best trade-off between the different types of distortion:

Wide-angle lens (objects differ greatly in relative size)

Telephoto lens (objects appear similar in relative size)
With too wide an angle of view, the relative sizes of objects appear exaggeratedly different, while with too narrow an angle of view, objects appear nearly the same relative size and the sense of depth is lost. Extremely wide angles also tend to make objects near the edges of the frame appear stretched (distorted).
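The mechanism behind this is perspective: the apparent (angular) size of an object is roughly proportional to 1 over its distance, and lenses with different angles of view are typically used from different distances. A minimal sketch with made-up distances for illustration:

```python
def relative_apparent_size(near_m, far_m):
    """Ratio of apparent sizes of two identical objects at different distances.

    Apparent (angular) size is roughly proportional to 1 / distance.
    """
    return far_m / near_m

# Wide angle: shoot from 2 m, with a second identical object 4 m behind it:
print(relative_apparent_size(2, 6))    # -> 3.0x apparent size difference
# Telephoto: step back to 20 m so both objects nearly fill the frame:
print(relative_apparent_size(20, 24))  # -> 1.2x; depth feels compressed
```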

Depth distortion
By comparison, even though our eyes capture a distorted wide-angle image, our brain reconstructs it into an undistorted 3D mental image.
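As a side note on the focal-length equivalence mentioned above: to preserve the same angle of view on a smaller sensor, the focal length scales down by the crop factor. A minimal sketch (the 43 mm "normal" value comes from the text; the function name is illustrative):

```python
def equivalent_focal_length(full_frame_mm, crop_factor):
    """Focal length giving the same angle of view on a cropped sensor."""
    return full_frame_mm / crop_factor

# The full-frame "normal" focal length of ~43 mm maps to ~27 mm at 1.6x crop:
print(round(equivalent_focal_length(43, 1.6)))  # -> 27
```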

2. RESOLUTION AND DETAIL
Most current digital cameras have 5-20 megapixels, which is often cited as falling far short of our own visual system. This is based on the fact that, with 20/20 vision, the human eye can resolve detail equivalent to a 52-megapixel camera (assuming a 60° angle of view). However, these calculations are misleading. Only our central vision is 20/20, so we never actually see that much detail in a single glance; away from the central area our visual ability drops dramatically, so much so that just 20° off-center our eyes resolve only about one-tenth as much detail.
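The 52-megapixel figure can be reproduced with a quick back-of-the-envelope calculation, assuming (as is common) that 20/20 vision resolves detail of roughly one arcminute and that reproducing it digitally takes about two pixels per arcminute:

```python
# 20/20 vision resolves roughly 1 arcminute of detail; capturing that
# digitally needs about 2 pixels per arcminute (a sampling assumption).
angle_deg = 60                  # assumed angle of view from the text
arcmin = angle_deg * 60         # 3600 arcminutes across the field
pixels_per_side = arcmin * 2    # 7200 pixels across (and, squared, down)
megapixels = pixels_per_side ** 2 / 1e6
print(f"{megapixels:.0f} MP")   # -> 52 MP
```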
In peripheral vision we detect only large-scale contrast and a minimum of color:


Qualitative representation of visual detail at a single glance.
Taking the above into account, the detail we can perceive in a single glance is comparable to that of a 5-15 megapixel camera (depending on how good our eyesight is). However, our mind does not remember images pixel by pixel; instead it records memorable textures, color, and contrast on an image-by-image basis.
To assemble a detailed mental image, our eyes therefore fixate on several regions of interest in rapid succession. This effectively paints in our perception:

Original Scene

Areas of interest for a person
The end result is a mental image whose detail has effectively been prioritized based on interest. This has an important but often overlooked implication for photographers: even when a photograph approaches the technical limits of a camera's detail, that detail ultimately does not matter much if the image itself is not memorable.

Other important differences in how our eyes resolve detail include:

Asymmetry. Each eye can perceive more detail below our line of sight than above it, and peripheral vision is also much more sensitive in directions away from the nose than toward it. Cameras, by contrast, record almost perfectly symmetrical images.

Vision in Low Light. In very low light, such as moonlight or starlight, our eyes begin to see in monochrome. In such situations, our central vision also begins to resolve less detail than just off-center. Many astrophotographers are aware of this and use it to their advantage by looking just to the side of a dim star when they want to see it with the naked eye.

Subtle gradients. Too much attention is often given to the finest detectable detail, but subtle tonal gradients are also important – and they happen to be where our eyes and cameras differ most. With a camera, enlarged detail is always easier to resolve – yet, counterintuitively, enlarged detail can become less visible to our eyes. In the example below, both images contain texture with the same amount of contrast, but the texture is not visible in the image on the right because it has been enlarged.

Fine texture (barely visible)

16X enlarged
3. SENSITIVITY AND DYNAMIC RANGE
Dynamic range* is one area where the eye is often seen as having a significant advantage. If we consider situations in which our pupil opens and closes for differently lit regions of a scene, then yes, our eyes far exceed the capabilities of a single camera image (and can have a range exceeding 24 f-stops). However, in such situations our eye is adjusting dynamically, like a video camera, so arguably this is not a fair comparison.

The eye focuses on the foreground

Our mental image
On the other hand, if we consider the instantaneous dynamic range of our eye (with a fixed pupil opening), then cameras fare much better. This would be similar to looking at a single region within a scene and letting our eyes adjust without looking anywhere else. In that case, most estimates put our eyes at 10-14 f-stops of dynamic range, which certainly surpasses most compact cameras (5-7 stops) but is surprisingly similar to digital SLR cameras (8-11 stops). The dynamic range of our eye also depends on the brightness and contrast of the subject, so the above applies only to typical daylight conditions. When viewing stars, our eyes can approach an even higher instantaneous dynamic range.


* Quantifying dynamic range. The unit most often used to measure dynamic range in photography is the f-stop. It describes the ratio between the brightest and darkest measurable regions of a scene in powers of two. A scene with a dynamic range of 3 f-stops therefore has a white that is 8X as bright as its black (since 2 cubed = 2x2x2 = 8).
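In other words, the contrast ratio grows as 2 raised to the number of stops. A minimal sketch applying this to the figures quoted above:

```python
def contrast_ratio(f_stops):
    """Brightest-to-darkest ratio for a given dynamic range in f-stops."""
    return 2 ** f_stops

print(contrast_ratio(3))    # -> 8, the worked example from the footnote
print(contrast_ratio(11))   # -> 2048, upper end of the digital SLR range
print(contrast_ratio(14))   # -> 16384, upper estimate for the eye
```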

Freezing movement

Sensitivity in low light
Sensitivity
This is another critical visual characteristic, and it describes the ability to resolve very faint or fast-moving subjects. In bright light, modern cameras are better at resolving fast-moving subjects, as exemplified by stunning high-speed photography. This is often made possible by ISO speeds above 3200; the equivalent daylight ISO of the human eye is thought to be as low as 1. In low-light conditions, however, our eyes become much more sensitive (assuming we let them adjust for 30 minutes or more). Astrophotographers often estimate this as close to ISO 500-1000; still not as high as digital cameras, but close.
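Since ISO expresses sensitivity linearly, the exposure time required for an equally bright image scales inversely with it (at a fixed aperture). A minimal sketch using the rough estimates quoted above:

```python
def relative_exposure_time(iso_reference, iso_subject):
    """How much longer an exposure must be at a lower ISO, all else equal."""
    return iso_reference / iso_subject

# A camera at ISO 3200 vs. the eye's estimated daylight ISO of ~1:
print(relative_exposure_time(3200, 1))    # -> 3200.0x longer for the eye
# vs. a dark-adapted eye at ~ISO 800 (midpoint of the 500-1000 estimate):
print(relative_exposure_time(3200, 800))  # -> 4.0x
```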
On the other hand, cameras have the advantage of being able to take long exposures to bring out even fainter objects, whereas our eyes do not see additional detail after staring at something for more than about 10 to 15 seconds.

CONCLUSIONS
One could argue that whether a camera can beat the human eye is beside the point, because cameras are held to a different standard: they need to make realistic-looking prints. A printed photograph cannot know which regions the eye will concentrate on, so every part of a print must contain maximum detail in case that is where we look. This is especially true for large prints and prints viewed up close. However, it could also be argued that putting a camera's capabilities in context is still useful.

In general, most of the advantages of our visual system stem from the fact that our mind intelligently interprets the information from our eyes, whereas with a camera all we have is the raw image. Even so, current digital cameras perform remarkably well and exceed our eyes in several visual capabilities. The real winner is the photographer who can intelligently combine multiple camera images – thereby surpassing even our own mental image.

 
 