As we delve into the world of audio and video, the first thing that greets our eyes is a series of moving images. But how do these images transform from our real-life experiences into a string of data on our digital screens? Today, let's journey together into the enchanting kingdom of digital imagery.
Let's start with the definition of an image. It's like taking the first step into an unknown realm. What exactly is an image? It's both the world we see and the data in a computer. The process of this transformation is filled with mystery.
Next, we will explore the principles of image formation. It's like uncovering a magician's secret, understanding how ordinary light and shadow are captured and recreated.
Then, we dive into the mathematical description of images. Here, each image is no longer just colors and shapes, but intricate mathematical models waiting to be deciphered.
Following that, we discuss the digitization of images. This process is like a magician's transformation, converting the physical world into digital form.
Finally, we unveil the mystery of digital image data. In this digital age, images are more than just visual experiences; they carry information and knowledge, waiting for us to uncover.
1. What Is the Definition of an Image?
In our everyday life, 'seeing' is so natural and effortless, but have you ever pondered what exactly the 'image' we see is? At this moment, let us unveil the mystery of images.
First, let's try to define 'image'. An image is a material representation of our visual perception. It can be captured by optical devices such as cameras, mirrors, telescopes, and microscopes, or created by hand as a work of art. Images are not just visual records; they can be preserved on light-sensitive media such as photographic paper and film. And with the rapid development of digital technology, images are increasingly stored in digital form.
In this definition, 'visual perception' and 'material representation' emerge as the two key terms. Visual perception lies at the heart of how images are formed, while material representation is the essence of how image signals are processed. These two concepts are like two sides of a coin, together forming the complete picture of what an image is.
2. What Are the Principles of Image Formation?
2.1 How Do We See Images?
In this colorful world, the principle of image formation is like uncovering a layer of nature's mystery. First, let's contemplate a simple yet profound question: How do we see images?
The human eye, a miracle of nature, refracts light reflected from objects through its cornea and crystalline lens, forming an image on the retina. The retina converts this image into neural signals that travel along the optic nerve to the brain, and thus we 'see' the world. This is how objects form images in our eyes.
When we turn to digital images, we need devices similar to the human eye to achieve this 'visual perception.' In our daily lives, the most common example of such a device is the camera, whose imaging principle is quite similar to that of the human eye.
The camera, a product of modern technology, captures the light of a moment by exploiting the rectilinear propagation of light and the laws of refraction and reflection. This information, carried by photons, passes through the lens onto a photosensitive material, be it traditional film or a modern digital sensor, ultimately becoming an image we can see.
In understanding the principle of imaging, we often simplify the concept by treating the lens as an ideal pinhole, using the principle of pinhole imaging for explanation. This applies to both the human eye and the camera. However, when dealing with more complex issues such as focal length, exposure, blur, and aberrations, more complex models are needed. These fall under the study of optics, which we won't delve into further here.
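To make the pinhole model concrete, here is a minimal sketch in Python. The focal length and the sample 3D points are made up purely for illustration; the only real content is the pinhole relation itself, under which a point (X, Y, Z) in front of the camera projects to image coordinates (f·X/Z, f·Y/Z).

```python
# Minimal pinhole-camera sketch: project 3D points (given in camera
# coordinates) onto a 2D image plane. The focal length and sample
# points below are illustrative, not from any real camera.

def project_pinhole(X, Y, Z, f=0.035):
    """Project a 3D point (X, Y, Z) through an ideal pinhole.

    f is the focal length (here 35 mm, expressed in meters).
    Under the pinhole model the image coordinates scale as f/Z:
        x = f * X / Z,   y = f * Y / Z
    """
    if Z <= 0:
        raise ValueError("Point must be in front of the camera (Z > 0)")
    return f * X / Z, f * Y / Z

# Two points at the same (X, Y) but different depths:
near = project_pinhole(1.0, 0.5, 2.0)   # 2 m away
far  = project_pinhole(1.0, 0.5, 10.0)  # 10 m away
print(near, far)  # the farther point lands closer to the image center
```

Note how the same point, viewed from a greater depth, projects closer to the image center: this 1/Z scaling is the pinhole model's explanation of why distant objects look smaller.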
So, how are the colors in these images that we perceive formed?
2.2 How Do We Perceive Color?
How do we perceive the existence of 'colors'? The answer lies in the light that is emitted or reflected by objects. We now know that light is a form of electromagnetic wave. Within our visual system, we have three types of cone cells, each sensitive to different wavelengths of light. These cells combine their signals to present us with a world rich in color, revealing the mystery behind the trichromatic theory of vision.
Humans cannot see all electromagnetic waves. We perceive only a narrow band of wavelengths known as visible light, from roughly 380 to 780 nanometers. The light we see is usually a combination of different wavelengths, which allows us to perceive a myriad of colors. Sunlight, for instance, is a mix of all of them: Newton famously demonstrated with a prism that white light contains every wavelength of the visible spectrum.
In the human visual system, there are three types of cone cells, each most sensitive to a different region of the spectrum: roughly yellow-green, green, and blue-violet. The first type responds most strongly to long-wavelength light, peaking at about 560 nm, and is referred to as the L cones. The second type is most responsive to medium-wavelength light, peaking at about 530 nm, and is known as the M cones. The third type is most sensitive to short-wavelength light, peaking at about 420 nm, and is referred to as the S cones. The exact peak wavelengths vary from person to person, typically falling around 564–580 nm, 534–545 nm, and 420–440 nm respectively.
These three types do not correspond exactly to specific colors as we know them. Instead, the perception of color is a complex process that begins with the differentiated output of these cells in the retina and is completed in the visual cortex and other related areas of the brain. For example, although L cones are often referred to as red receptors, spectrophotometry shows their peak sensitivity is in the green-yellow region of the spectrum. Similarly, S and M cones do not directly correspond to blue and green, despite often being described as such (in many descriptions, the three types of human cone cells are sensitive to red light at 630 nm, green light at 530 nm, and blue light at 450 nm, hence the use of RGB as the three primary colors). In fact, the RGB color model is simply a convenient method for expressing color and is not directly based on the types of cone cells in the human eye.
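As a rough illustration of the trichromatic idea, the sketch below models each cone type's sensitivity as a Gaussian centered on the peak wavelengths quoted above and computes the L, M, S responses to a light by integrating its spectrum against each curve. Real cone sensitivities are empirically measured curves, not Gaussians, and the curve widths here are invented, so treat this only as a toy model of the principle.

```python
# Toy trichromatic model: cone response = integral over wavelength of
# (spectral power of the light) x (cone sensitivity). Real cone
# sensitivities are measured curves; the Gaussians below are only a
# stand-in centered on the peaks mentioned in the text.
import numpy as np

wavelengths = np.arange(380, 781)  # visible range, in nm

def gaussian(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# Approximate sensitivity curves for the three cone types.
sensitivity = {
    "L": gaussian(560),  # long-wavelength cones
    "M": gaussian(530),  # medium-wavelength cones
    "S": gaussian(420),  # short-wavelength cones
}

def cone_responses(spectrum):
    """Integrate a spectral power distribution against each cone curve."""
    return {name: np.trapz(spectrum * s, wavelengths)
            for name, s in sensitivity.items()}

# A flat ("white") spectrum excites all three cone types,
# while a narrow-band red light mainly excites the L cones.
white = np.ones_like(wavelengths, dtype=float)
red = gaussian(650, width_nm=10.0)
print(cone_responses(white))
print(cone_responses(red))
```

Running this shows the white spectrum producing comparable L, M, and S responses, while the narrow-band red light drives mostly the L cones: color perception arises from the *ratio* of the three responses, not from any single cell type.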
Let's delve deeper into how our eyes perceive color, focusing on several key aspects:
Hue: Sources such as sunlight or an incandescent bulb emit light across the full visible spectrum, which we perceive as white. When this white light strikes an object, some frequencies are reflected and others absorbed; the mix of reflected frequencies determines the color we perceive. For example, if lower frequencies (longer wavelengths) dominate the reflection, we see red.
Brightness: This relates to the amount of light energy; the more energy the light delivers to the eye, the brighter it appears.
Saturation: Refers to how close the light's color is to a pure spectral color, such as red. Paler or washed-out colors have lower saturation and sit closer to white.
Additionally, the term chromaticity is commonly used to describe hue and saturation together. The sketch below shows how hue, saturation, and brightness relate to an RGB color.
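As a small illustration of these three attributes, here is a sketch using Python's standard colorsys module to convert RGB colors into hue, saturation, and value (a simple stand-in for brightness). The sample colors are arbitrary, chosen only to show each attribute varying on its own.

```python
# Hue / saturation / value (brightness) from RGB, using the standard
# library. colorsys expects RGB components in the range [0, 1] and
# returns hue as a fraction of a full turn (0.0 = red).
import colorsys

samples = {
    "pure red": (1.0, 0.0, 0.0),  # fully saturated
    "pale red": (1.0, 0.6, 0.6),  # same hue, lower saturation
    "dark red": (0.4, 0.0, 0.0),  # same hue, lower brightness
    "white":    (1.0, 1.0, 1.0),  # zero saturation
}

for name, (r, g, b) in samples.items():
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name:8s} hue={h * 360:5.1f} deg  saturation={s:.2f}  value={v:.2f}")
```

All three reds share the same hue; only their saturation or value differs, while white has zero saturation, matching the descriptions above.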
This is how we, through our visual system, perceive images and their colors.