Kijk op het fotografisch universum by Erwin Puts

analog-digital: what does it mean?

This pair of contrasting concepts is now the standard way to describe the difference between silver-halide processes and the computational processes inside modern solid-state cameras. The advantage of the pair is clarity: you know what you are talking about, the terms are easily used and nobody will be confused when you refer to one or the other. What is not clear is the exact content of the concepts, especially when they are applied to these alternative and complementary photographic processes.
For a long time I have wondered what an analog photographic process is. Descriptions and analyses of silver-halide processes since the early 1840s do not use the word 'analog'. The few recent books about silver-halide processing (Haist, Modern Photographic Processing, and Tani, Photographic Sensitivity) do not use the word 'analog' at all.
First of all: the word is traditionally spelled 'analogue'; American writers drop the silent -ue. 'Analog' is now routinely used in relation to electronics, and 'analogue' is used in the sense that something bears an analogy to something else. An analogy is a similarity between two things that are otherwise dissimilar. The word 'analogy' derives from a Greek word meaning 'proportionate', and that sense of proportion is the relevant one here.
There are two domains where the analog-digital pair is used: the computer industry and the signal-processing industry.
Analogue is related to analogy, and this concept is used in the construction of scientific theories. When a scientist has a number of experimental data and wants to explain what is happening, a model or analogy with familiar events or objects is often introduced. The main definition of 'analog' originated in the computer industry. An analog computer represents data in a way that reflects the properties of the process being modelled. Data and numbers may be represented by physical quantities such as electric voltage levels. Analogue computing is primarily a modelling technology. An analogue computer contained shafts, wheels, discs and gears to perform operations, and also used relays and vacuum tubes. All this gear was used to simulate and model some physical process in reality. Physical processes can be described mathematically with differential equations, and this type of equation can be simulated with analogue computers.
Calculations on analogue computers, which use proportions to simulate and compare processes, have a finite accuracy. The newer digital computers could increase that accuracy by using long strings of binary digits, and they were also more reliable. Originally there was no word for the type of computer used for simulation and equation solving; they were just 'computers'. When the digital computer was introduced, the new name 'analogue computer' was given to the modelling type, because analogy was the method used to model, calculate and simulate processes happening in the real physical world.
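To make the contrast concrete, here is a minimal sketch, my own illustration and not drawn from any actual analogue-computer design: where an analogue computer would model an equation such as dy/dt = -k·y with a physical quantity that decays in proportion, a digital computer steps through the same equation numerically, using discrete time steps and finite-precision binary numbers.

```python
import math

# Minimal sketch (illustration only): a digital computer does not build a physical
# analogue of dy/dt = -k*y; it approximates the continuous process with discrete
# steps and finite-precision numbers (Euler's method).

def euler_decay(y0: float, k: float, dt: float, steps: int) -> float:
    """Step the equation dy/dt = -k*y forward in time and return y at the end."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)   # each discrete step stands in for a stretch of continuous change
    return y

approx = euler_decay(y0=1.0, k=0.5, dt=0.1, steps=20)   # simulate up to t = 2.0
exact = math.exp(-0.5 * 2.0)                            # the continuous solution y(2.0)
print(f"digital approximation: {approx:.4f}, continuous solution: {exact:.4f}")
```

Both routes answer the same question: the analogue route by proportion in a physical model, the digital route by counting with binary digits.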
Analogue and digital in the computing world are not alternatives but complementary techniques.

These two parallel themes of calculation and modelling have to be kept apart when discussing the concepts of analogy, simulation and modelling.
The study of the process of silver-halide photography reveals that there is no computing involved and hardly any simulation or modelling. The characterization of the silver-halide process as analogue is false from whatever angle it is approached.
In the photographic world the interpretation of the two words (analog and digital) is derived from a definition used in electric signal processing. An analog signal is a continuous signal that represents physical measurements; it is typically pictured as a sine wave, and the values of the measurements are mathematically continuous. A digital signal is a discrete-time signal generated by digital modulation; it is pictured as a square wave, and the values of the measurements are mathematically discontinuous (or discrete). In order to feed these values into a computer, they are converted to digital strings of zeros and ones. The analog signal is concerned with small fluctuations that are meaningful, whereas the digital signal uses large electric charges or voltages.
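As a purely illustrative sketch (the sine wave, the sample rate and the 8-bit depth below are my own arbitrary choices, not values from any real camera or converter), the conversion of a continuous signal into strings of zeros and ones can look like this:

```python
import math

# Minimal sketch: sample a continuous signal at discrete instants and quantize each
# sample to one of 2**bits levels, then show the result as strings of zeros and ones.
# The signal, sample rate and bit depth are arbitrary assumptions for the illustration.

def sample_and_quantize(freq_hz: float, duration_s: float, sample_rate: int, bits: int) -> list[int]:
    levels = 2 ** bits                                       # number of discrete levels
    n_samples = int(duration_s * sample_rate)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate                                  # discrete sampling instant
        value = math.sin(2 * math.pi * freq_hz * t)          # continuous value in [-1, 1]
        codes.append(round((value + 1) / 2 * (levels - 1)))  # map to an integer code
    return codes

codes = sample_and_quantize(freq_hz=50.0, duration_s=0.02, sample_rate=1000, bits=8)
print([format(c, "08b") for c in codes[:5]])                 # first samples as binary strings
```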
That description of analog and digital signals is valid for signal processing and for the charge-coupled devices used for the capture, transmission and storage of electrical signals. A modern photographic camera, equipped with such devices, is in fact a powerful computer that processes data in binary format.
A series of MOS capacitors that transport and store electric voltages or currents is called a charge-coupled device and when the voltages or currents (the charge packets) are linked to photon counting or imaging, the device is called an imager. The best description of a ‘digital’ camera would be a camera equipped with an imager.
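A toy model may help picture this (my own illustration, not a real sensor design; the pixel values are invented): a row of MOS capacitors holds charge packets proportional to the photons counted per pixel, and the packets are shifted cell by cell toward a readout node, which is the transport-and-store job that makes the device an imager.

```python
# Toy model (illustration only, not a real CCD readout): charge packets proportional
# to counted photons sit in a row of MOS capacitors and are shifted one cell at a
# time toward a readout node, where each packet is measured in turn.

def read_out_row(charge_packets: list[int]) -> list[int]:
    """Shift every charge packet toward the readout node and measure it in turn."""
    cells = list(charge_packets)        # one charge packet per MOS capacitor
    measured = []
    while cells:
        measured.append(cells.pop(0))   # the packet at the readout node is measured
        # the remaining packets have each moved one capacitor closer to the readout
    return measured

row = [120, 87, 3, 255, 64]             # hypothetical photon counts for one pixel row
print(read_out_row(row))
```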
To describe a camera as digital because metal-oxide-semiconductor capacitors do the work inside it is an obvious, but narrow-minded, approach.

The word ‘digital’ has become magical. When the inner working of a device is based on digital technology, it is good, reliable and cheap. The original analog devices are forgotten. All analogue technology is based upon the idea that the magnitude of an electric current or voltage has to be analogous to something in the real world. The bad point is that every electric circuit has a certain randomness, a noise effect that mingles with the analog signal values to produce an uncertainty. Analog signals are small, noise-laden voltages. The new idea is to go from small voltages to large ones: a high voltage is interpreted as one, a low voltage as zero. This one-zero concept was already used to represent numbers in computers, and the concept was therefore called ‘digital’. Now the code patterns can describe everything and the strength of the signal is no longer important; the code pattern becomes a non-physical entity. We already had such a system in the past: the telegraph, where time plays an important part in the sequence of long and short intervals between the clicks of the telegraph mechanism.
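A small sketch can show why the move to large voltages matters (the logic levels, the threshold and the noise amplitude below are my own arbitrary assumptions, not values from any real circuit): noise that would visibly corrupt a small analog value cannot flip a bit carried by a large voltage swing.

```python
import random

# Minimal sketch: bits sent as large high/low voltages survive added noise, because the
# receiver only asks whether the voltage is above or below a threshold. The 5 V / 0 V
# levels, the 2.5 V threshold and the 1 V noise are arbitrary assumptions.

HIGH_V, LOW_V, THRESHOLD_V = 5.0, 0.0, 2.5

def transmit(bits: list[int], noise_v: float) -> list[int]:
    """Encode bits as voltages, add random noise, and threshold them back to bits."""
    received = []
    for bit in bits:
        voltage = HIGH_V if bit else LOW_V
        voltage += random.uniform(-noise_v, noise_v)          # noise mingles with the signal
        received.append(1 if voltage > THRESHOLD_V else 0)    # only the code pattern matters
    return received

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(transmit(message, noise_v=1.0) == message)              # True: 1 V of noise cannot flip the bits
```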
The technology that is used inside a digital camera is the solid-state imaging that is possible with these charge-coupled devices.
Looking at the most recent brochures for the Canon EOS-1V and the Nikon F4, there is no mention of analogue processes or of analog cameras. These models are simply called cameras, very advanced and full of electronics of course, but just cameras. There is some mention of opto-mechatronics technology, but that is all there is to say. It is analogous to the situation with the early computers: before the introduction of the digital computer, nobody thought of, or was interested in, describing such a computer as analogue.
The digital camera has the same functions, the same type of microcontrollers and printed circuits (Canon boasts that the EOS-1V has lots of 32-bit CPUs) and even the same functionality as these older cameras. The only difference is that the F4 and 1V have to be loaded with film, while the modern EOS-1Dx and Nikon D5 are equipped with solid-state imagers.
The most obvious way to distinguish the two types (film-using and imager-equipped) would be a description such as 'cameras using film' and 'cameras fitted with imagers'. If a shorter description is needed, 'film camera' is not a smart choice, because movie cameras are also described as film cameras or cinema cameras; this is confusing. 'Analogue' and 'digital' cameras is not a smart idea either, because of the connection with computation: inside a digital camera lots of computations take place, but there is no computation at all in silver-halide photography. If we could agree that silver-halide is the core technology of photography, we might say that we have two types of cameras: photographic cameras and computational cameras. Such a description comes closer to the core processes inside the camera. A camera is a technological and physical artefact, not a digital or analogue one.
Perhaps I am nitpicking too much in questioning a time-honored description, sanctioned by the industry and by all photographers in the world. Adoption and conciseness are one side of the medal; a clear description of the technology involved is the other. Given that 'analogue' is associated with the archaic, the dinosaur and the Luddite, and 'digital' with modernity, progress and the future, it makes sense to return to the roots, find the origin of the words and ask why they are used so wrongly in technological discussions about photography.