A brief formula for long exposures

No, this is not just a pretty picture. This is Alena, a participant in the experiment; to be precise, it was her dance that made the main contribution to it. But first things first.


Every photographer knows that the longer the shutter speed, the more often (all other parameters being equal) images of moving people come out blurred.

The question: how exactly does the probability of blur depend on the exposure?

There is no shortage of theoretical reasoning on this subject. But theory without experiment is like a toast without wine. So an experiment was conducted:

  1. A photo studio was set up.
  2. Music was turned on.
  3. Alena was invited...
  4. ...and she danced...
  5. ...while I shot from a tripod at random moments with different exposures, accumulating several hundred frames in total...
  6. ...and at the end, the share of sharp frames among them was counted (a code sketch of this tally follows the list).
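
For concreteness, here is a minimal sketch of that final tallying step, with hypothetical (exposure, sharpness) records standing in for the real frames from the shoot:

```python
# A minimal sketch of the tally; the frame records are hypothetical,
# not the experiment's actual data.
from collections import defaultdict

frames = [(1/30, True), (1/30, False), (1/8, False), (1/8, True)]  # hypothetical

counts = defaultdict(lambda: [0, 0])  # exposure -> [sharp, total]
for exposure, sharp in frames:
    counts[exposure][1] += 1
    counts[exposure][0] += sharp

for exposure, (sharp, total) in sorted(counts.items()):
    print(f"dt = {exposure:.4f} s: {sharp}/{total} sharp ({sharp/total:.0%})")
```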




[For rather accidental reasons, the terms “shutter speed” and “exposure” are used interchangeably in this article. In all such cases shutter speed is meant, which I hope is obvious from the context.]

Of course, frame sharpness is a tricky and largely subjective matter. For unambiguity, a photo was counted as sharp if the eyes came out well and clearly. This is a simplification, of course, but not a wild one: it is well known that even a heavily smeared frame with a well-rendered face is perceived tolerably, while the opposite is not.


Raw results


The graph below shows the experimentally measured probability of getting a technically sharp frame as a function of exposure:


The “whiskers” reflect the measurement uncertainty due to rounding errors (±0.5 frames) and the expected binomial noise of ±1σ.
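
For reference, the binomial part of those error bars can be computed like this (the counts are hypothetical, not the experiment's actual numbers):

```python
# ±1σ binomial uncertainty of a measured sharp fraction k/n.
import math

def binomial_sigma(k: int, n: int) -> float:
    """Standard deviation of the fraction k/n under binomial noise."""
    p = k / n
    return math.sqrt(p * (1 - p) / n)

k, n = 37, 50  # hypothetical: 37 sharp frames out of 50 at one exposure
print(f"p = {k/n:.2f} ± {binomial_sigma(k, n):.2f}")
```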

Qualitatively, everything is as expected: as the exposure grows, sharp frames are lost. But it would be nice not only to see this dependence, but to understand it and characterize it quantitatively.

To do this, instead of the absolute probability of success, let's look at the odds of getting a bad frame versus a good one:

Y = (number of blurred frames) / (number of sharp frames)

In these terms, the graph takes on a simple and elegant form:


The red line is a power law fitted to the experimental data. Its exponent (1.93) is very close to two, and I strongly suspect that “in reality” it is exactly two, up to measurement error. (A log-log fit along the lines of the sketch below recovers such an exponent.)
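
A sketch of such a fit, assuming hypothetical (dt, Y) pairs in place of the real measurements:

```python
# Log-log least-squares fit of Y ∝ dt^k; the data points are hypothetical
# illustration values, not the article's measurements.
import numpy as np

dt = np.array([1/60, 1/30, 1/15, 1/8, 1/4])   # exposures, seconds (hypothetical)
Y  = np.array([0.03, 0.11, 0.45, 1.7, 6.8])   # odds of a blurred frame (hypothetical)

k, log_const = np.polyfit(np.log(dt), np.log(Y), 1)
print(f"fitted exponent k = {k:.2f}")  # comes out close to 2
```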

Why?


Some theory


To understand this, consider the projection of the motion of the dancer's eyes onto the horizontal axis x of the camera sensor. Let it be described by a function x(t). A strict condition for image sharpness is that it must not stray beyond some limit r during the entire exposure duration dt:

|x(t + τ) − x(t)| < r   for all 0 ≤ τ ≤ dt

Since a “decent enough” frame does not require this condition to hold exactly (an error of up to tens of percent is acceptable), let's introduce a simplification: assume that over the exposure time dt the displacement x(t) is at least approximately linear, i.e.:

x(t + τ) ≈ x(t) + τ·v_x(t)

Then the sharpness condition can be rewritten as:

|v_x(t)| < r / dt

Here v_x(t) is the velocity of motion along the x axis at the moment t when the shutter opens.

Now, what do we know about the velocity of the eyes? That it is a complex tangle of superimposed movements of several joints at once: feet, knees, hips, torso, neck. So v_x(t) can be represented as a superposition of several simpler velocity components:

v_x(t) = v_1(t) + v_2(t) + … + v_n(t)

Each of these components at a given moment t can be treated as random [for the pedants: dance is a quasi-periodic motion; decompose it into a Fourier series and note that the phase of each component is, for the photographer, genuinely random]. At first glance this helps little: we know neither the properties nor even the type of the distributions of these random variables. A dead end, it would seem? But here the Central Limit Theorem comes to the rescue: when summing a large number of weakly dependent random variables of comparable magnitude, the result tends to a normal distribution, even if the input distributions are far from normal! In practice this often kicks in with as few as 3–4 summands, as the quick check below illustrates.
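
A quick numeric illustration of that last claim, using excess kurtosis (0 for a perfect Gaussian) as a crude normality gauge:

```python
# CLT with as few as four summands: excess kurtosis is -1.2 for a single
# uniform variable and 0 for a perfect Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x: np.ndarray) -> float:
    z = (x - x.mean()) / x.std()
    return float((z**4).mean() - 3.0)

one  = rng.uniform(-1, 1, 100_000)
four = rng.uniform(-1, 1, (100_000, 4)).sum(axis=1)
print(f"one component: {excess_kurtosis(one):+.2f}")   # ≈ -1.20
print(f"sum of four  : {excess_kurtosis(four):+.2f}")  # ≈ -0.30, much closer to 0
```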

This gives us grounds to believe that v_x(t) is normally distributed:

v_x(t) ~ N(μ, σ²)

Moreover, with mean μ = 0, no less. Why? Because the dancer's movement is confined to the stage, and therefore the total displacement over a long time (i.e. the average velocity) is zero. In practice, the photographer usually also “leads” the subject with the lens, which constrains the average motion even more strictly.

The rest is trivial. With what probability is |v_x(t)| < r/dt? The classic textbook answer:

p = ∫ from −r/dt to +r/dt of exp(−v² / (2σ²)) / (σ·√(2π)) dv

At long exposures the integral covers only a narrow central strip, inside which the integrand is approximately constant, and the answer reduces to:

p = 2r / (dt · σ · √(2π))

That is, the probability of accidentally catching a good shutter-opening moment decreases as 1/dt.
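
To see where this asymptote becomes accurate, here is a small comparison of the exact Gaussian integral against it; r and σ are arbitrary illustration values:

```python
# Exact P(|v| < r/dt) for a zero-mean normal v versus the long-exposure
# asymptote 2r/(dt·σ·√(2π)); r and sigma are arbitrary illustration values.
import math

r, sigma = 0.01, 1.0

for dt in (0.01, 0.05, 0.2, 1.0):
    a = r / dt
    exact = math.erf(a / (sigma * math.sqrt(2)))          # the Gaussian integral
    asymptote = 2 * a / (sigma * math.sqrt(2 * math.pi))  # narrow-strip limit
    print(f"dt = {dt:5.2f}: exact = {exact:.4f}, asymptote = {asymptote:.4f}")
```

At long exposures (small r/dt) the two agree; at short exposures the asymptote overshoots, as expected.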

Now recall that the camera sensor is two-dimensional: for the frame to be sharp, you must get lucky not only along the x axis but along y as well. (We keep simplifying; there is no need to compute √(x² + y²).) If these events are, as one might expect, statistically weakly dependent, then the probabilities multiply, and we get:

p = const / dt²

which asymptotically matches what was observed in the experiment.
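
A Monte Carlo sanity check of this two-axis argument, under the model's assumptions (independent zero-mean normal v_x and v_y, each confined to ±r/dt):

```python
# Check that p·dt² stays roughly constant under the model's assumptions:
# independent normal v_x, v_y, each required to stay within ±r/dt.
import numpy as np

rng = np.random.default_rng(1)
n, r, sigma = 2_000_000, 0.02, 1.0   # illustration values, not measured ones

vx = rng.normal(0, sigma, n)
vy = rng.normal(0, sigma, n)

for dt in (0.5, 1.0, 2.0):
    limit = r / dt
    p = np.mean((np.abs(vx) < limit) & (np.abs(vy) < limit))
    print(f"dt = {dt}: p = {p:.2e}, p·dt² = {p * dt**2:.2e}")  # last column ~constant
```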

Summing up


Given this result, I am prepared to write down the following formula for the probability of getting an unblurred frame of a dancer as a function of the exposure dt:

p = 1 / (1 + (dt / dt₀)²)

Here dt₀ is the exposure at which 50% of the frames are rejected as blurred.
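
In practice dt₀ can be estimated by fitting this formula to measured sharp fractions. A sketch of such a fit, with hypothetical data points in place of real measurements:

```python
# Fitting dt0 in p = 1/(1 + (dt/dt0)²) to measured sharp fractions.
# The data points here are hypothetical, not the experiment's.
import numpy as np
from scipy.optimize import curve_fit

def model(dt, dt0):
    return 1.0 / (1.0 + (dt / dt0) ** 2)

dt = np.array([1/125, 1/60, 1/30, 1/15, 1/8])  # exposures, s (hypothetical)
p  = np.array([0.97, 0.90, 0.70, 0.36, 0.14])  # sharp fractions (hypothetical)

(dt0,), _ = curve_fit(model, dt, p, p0=[1/30])
print(f"dt0 ≈ 1/{1/dt0:.0f} s")  # the exposure with a 50% keeper rate
```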

This conclusion generalizes easily beyond dance to many other complex quasi-periodic movements, provided there is a simple sharpness criterion for the frame and the motion itself can be treated as two-dimensional (that is, not requiring, say, radical refocusing at every moment). Whether you shoot a goalkeeper, a hockey player, a hummingbird at a flower, or a friendly drinking party at the table, the probability of success at the “long” end of the exposure range falls off as 1/dt².

Blur from hand shake, by the way, is most likely described by the same dependence, which may explain the stories of successful handheld shots at 1/5 of a second with a 35 mm lens.

How is this result useful?


First, it shows that shooting in bad light is not as terrible as it seems. Yes, as illumination drops, the probability of success decreases, but polynomially rather than exponentially, and a dependence like that is one you can fight against.

Second, it lets you estimate the required volume of shooting.

Example. Say you are photographing a party. Its dynamics are such that even at 1/30 of a second, half of the portraits come out blurred. The light is dim, and the camera, even at its limit ISO, won't let you go faster than 1/10 of a second. What are the chances of success? Let's calculate the expected share of sharp frames:

p = 1 / (1 + (30/10)²) = 1 / (1 + 9) = 1/10 = 10%

Not much, but not hopeless. If you fire off a couple of hundred shots, you will most likely be able to extract about 200 × 0.1 = 20 technically good images from them. If at least half of those are interesting in content, that is quite enough for a decent photo album.
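
The same estimate as a few lines of code, for anyone who wants to plug in their own numbers:

```python
# The party estimate above: dt0 = 1/30 s (50% keepers), forced down to dt = 1/10 s.
dt0, dt, shots = 1/30, 1/10, 200

p = 1 / (1 + (dt / dt0) ** 2)  # = 1/(1 + 3²) = 0.1
print(f"sharp fraction: {p:.0%}, expected keepers: {p * shots:.0f} of {shots}")
```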

Limits of applicability?


The logic of the derivation above stops working at exposures during which the subject manages to change speed and direction of movement several times; for dance, these are sub-second exposures. Does the conclusion hold beyond that limit? Some intuitive considerations, and the Bernstein–Kolmogorov inequality, seem to indicate that the power-law scaling O(1/dtᵏ) persists at longer exposures as well, but I do not undertake to prove this with full rigor.

Remark 1. Of course, the success of a photograph depends on a huge number of factors besides motion blur, and many of them, say, uneven room lighting or a shallow depth of field, are quite capable of “killing” both the shot (in the technical sense) and the dependence written above. In practice, however, it holds up quite well. I first estimated something like p(sharp) ≈ 1/dt² back in 2009. Since then, hundreds of thousands of frames have been shot, many at risky exposures, and the results have generally matched the expectations derived from this formula.

Remark 2. Of course, this result matters less today than it did 10–20 years ago, when a camera's maximum usable sensitivity might be only ISO 400–800 and you had to resort to wild tricks to catch a single decent shot in semi-darkness. Today (or in the near future) the problem may well be solved in software: shoot video at ISO 12800, detect the eyes, and pick from hundreds of frames the single best one. Many skills accumulated over decades of practice are being handed off to automation, and that is probably as it should be.

Thanks for reading, and good frames to you all!

Source: https://habr.com/ru/post/412197/

