c) Using the 32-bit binary representation for floating-point numbers, represent the number 1011100110011 as a 32-bit floating-point number.
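One way to check an answer here (assuming the digits 1011100110011 are read as an unsigned binary integer, i.e. decimal 5939) is to let Python's struct module produce the IEEE 754 single-precision bit pattern:

import struct

# Read the given bit string as an unsigned binary integer (an assumption).
value = int("1011100110011", 2)   # = 5939

# Pack the value as an IEEE 754 single-precision (32-bit) float and pull
# the raw bits back out as a 32-character binary string.
bits = struct.unpack(">I", struct.pack(">f", float(value)))[0]
pattern = f"{bits:032b}"

# Split into sign (1 bit), exponent (8 bits) and mantissa (23 bits).
print(pattern[0], pattern[1:9], pattern[9:])
# 0 10001011 01110011001100000000000
# i.e. 1.011100110011 x 2^12, with a biased exponent of 12 + 127 = 139.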
i) A digital camera processes images of the real world and stores them in binary form. Using the principles of digital signal processing, practically explain how this phenomenon occurs.
A digital camera takes light and focuses it via the lens onto a sensor made out of silicon. It is made up of a grid of tiny photosites that are sensitive to light. Each photosite is usually called a pixel, a contraction of "picture element". There are millions of these individual pixels in the sensor of a DSLR camera.
Digital cameras sample light from our world, or outer space, spatially, tonally and in time. Spatial sampling means the angle of view that the camera sees is broken down into a rectangular grid of pixels. Tonal sampling means the continuously varying tones of brightness in nature are broken down into individual discrete steps of tone. If there are enough samples, both spatially and tonally, we perceive a faithful representation of the original scene. Time sampling means we make an exposure of a given duration.
It is the time sampling with long exposures that really makes the magic of digital astrophotography possible. A digital sensor's true power comes from its ability to integrate, or collect, photons over much longer time periods than the eye. This is why we can record details in long exposures that are invisible to the eye, even through a large telescope.
Each photosite on a CCD or CMOS chip contains a light-sensitive area of crystalline silicon in a photodiode, which absorbs photons and releases electrons through the photoelectric effect. The electrons are stored in a well as an electrical charge that accumulates over the length of the exposure. The charge that is generated is proportional to the number of photons that hit the sensor.
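As a rough sketch of that charge accumulation (the quantum efficiency and full-well capacity below are assumed, illustrative values, not figures from the text):

# Hypothetical model of one photosite collecting charge during an exposure.
QUANTUM_EFFICIENCY = 0.5      # fraction of photons that free an electron (assumed)
FULL_WELL_CAPACITY = 50_000   # electrons the well can hold before it saturates (assumed)

def accumulate_electrons(photons_per_second, exposure_seconds):
    """Electrons collected are proportional to the photons arriving during
    the exposure, up to the capacity of the well."""
    photons = photons_per_second * exposure_seconds
    electrons = QUANTUM_EFFICIENCY * photons
    return int(min(electrons, FULL_WELL_CAPACITY))

# A longer exposure integrates more photons, so a faint source still builds
# up a measurable charge.
print(accumulate_electrons(photons_per_second=200, exposure_seconds=120))   # 12000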
This electric charge is then transferred and converted to an analog voltage, which is amplified and sent to an Analog-to-Digital Converter, where it is digitized (turned into a number).
CCD and CMOS sensors perform similarly in absorbing photons, generating electrons and storing them, but differ in how the charge is transferred and where it is converted to a voltage. Both end up with a digital output.
The entire digital image file is then a collection of numbers that represent the location and brightness values for each square in the array. These numbers are stored in a file that our computers can work with.
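For instance, a toy 3x3 grayscale image is nothing more than a grid of brightness numbers, with each number's row and column giving the location of its square:

# A tiny 3x3 grayscale "image": position in the grid is the location of the
# square, and the stored number is its brightness (0 = black, 255 = white).
image = [
    [ 12,  40,  12],
    [ 40, 255,  40],
    [ 12,  40,  12],
]
print(image[1][1])   # brightness of the center pixel: 255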
The number of electrons that build up in a well is proportional to the number of photons that are detected. The charge in the well is then converted to a voltage. This signal is analog (continuously varying) and is usually very small, so it must be amplified before it can be digitized. The read-out amplifier performs this function, matching the output voltage range of the sensor to the input voltage range of the A-to-D converter. The A/D converter then converts this voltage into a binary number.
When the A/D converter digitizes the dynamic range, it breaks it into individual steps. The total number of steps is specified by the bit depth of the converter. Most DSLR cameras work with 12 bits (4096 steps) of tonal depth.
The sensor's output is technically called an analog-to-digital unit (ADU) or digital number (DN). The number of electrons per ADU is defined by the gain of the system. A gain of 4 means that the A/D converter digitized the signal so that each ADU corresponds to 4 electrons.
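A minimal sketch of this read-out chain, using the figures quoted above (a 12-bit converter with 4096 steps and a gain of 4 electrons per ADU) and folding the amplifier stage into the gain for simplicity:

BIT_DEPTH = 12                 # 12-bit converter -> 2**12 = 4096 steps
MAX_ADU = 2**BIT_DEPTH - 1     # largest digital number the converter can output: 4095
GAIN_E_PER_ADU = 4.0           # 4 electrons per ADU, as in the gain example above

def digitize(electrons):
    """Convert an accumulated charge (in electrons) to a digital number (ADU).
    The analog voltage and read-out amplifier are folded into the gain here,
    which is a simplification of the real electronics."""
    adu = electrons / GAIN_E_PER_ADU
    return int(min(round(adu), MAX_ADU))

print(digitize(12_000))   # 3000 ADU
print(digitize(20_000))   # 4095 ADU: the signal exceeds the converter's range and is clipped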
Computers and Numbers
Because computers are very good at manipulating numbers, we can perform different operations on these numbers quickly and easily.
For instance, contrast is defined as the difference in brightness between adjacent pixels. For there to be contrast, there must be a difference to start with, so one pixel will be lighter and one pixel will be darker. We can very easily increase the contrast by simply adding a number to the brightness value of the lighter pixel, and subtracting a number from the brightness value of the darker pixel.
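For example, a crude version of that operation on two neighboring brightness values (the starting values and the step of 20 are arbitrary, and the results are clamped to the usual 0-255 range):

lighter, darker = 140, 110          # brightness values of two adjacent pixels
step = 20                           # arbitrary amount of contrast to add

lighter = min(lighter + step, 255)  # push the lighter pixel up
darker = max(darker - step, 0)      # push the darker pixel down

print(lighter, darker)              # 160 90: the difference has grown from 30 to 70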
Color in an image is represented by the brightness value of a pixel in each of three color channels - red, green and blue - that together constitute the color information. We can just as easily change the color of a pixel, or a group of pixels, by simply changing those numbers.
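A pixel's color can be handled the same way: three numbers, one per channel, and changing the color is just changing one of them (the values here are made up):

# One pixel as (red, green, blue) brightness values, each in the range 0-255.
pixel = (180, 60, 60)               # a reddish pixel

# Boosting the blue channel shifts the pixel towards magenta; the "color
# change" is nothing more than a change to one of the numbers.
r, g, b = pixel
pixel = (r, g, min(b + 100, 255))
print(pixel)                        # (180, 60, 160)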
We can perform other tricks too, such as increasing the apparent sharpness of an image by increasing the contrast of the edge boundaries of objects in an image with a process called unsharp masking.
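A bare-bones, one-dimensional illustration of that idea (the 3-sample blur and the strength of 1.0 are arbitrary choices, not any particular camera's or editor's algorithm):

def unsharp_mask_row(row, strength=1.0):
    """Sharpen a row of brightness values by adding back the difference
    between each pixel and a blurred copy of itself (the 'unsharp mask')."""
    blurred = [
        (row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
        for i in range(len(row))
    ]
    return [
        int(max(0, min(255, row[i] + strength * (row[i] - blurred[i]))))
        for i in range(len(row))
    ]

# A soft edge between a dark and a light region gets steeper after sharpening.
print(unsharp_mask_row([50, 50, 100, 150, 150]))   # [50, 33, 100, 166, 150]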
Having the image represented by numbers allows us to have a lot of control over it. And, because the image is a set of numbers, it can be exactly duplicated any number of times without any loss of quality.
Digital image processing is the use of a digital computer to process digital images through an algorithm.[1][2] As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. The growth of digital image processing has been driven mainly by three factors: first, the development of computers; second, the development of mathematics (especially the creation and improvement of discrete mathematics theory); and third, the increasing demand for a wide range of applications in environment, agriculture, the military, industry and medical science.