This document discusses the key concepts of digital image processing and human visual perception. It covers the structure of the human eye, how images are formed in the eye, and how brightness is adapted to and discriminated. It also discusses light and the electromagnetic spectrum, noting that visible light is a small portion and describing the different wavelengths. Image sensing and acquisition are introduced along with sampling and quantization of digital images. Some basic relationships between pixels are also covered.
This document provides an overview of a digital image processing lecture given by Dr. Moe Moe Myint at Technological University in Kyaukse, Myanmar. It includes information about the instructor's contact information and office hours. The document then summarizes the contents of Chapter 2, which covers topics like visual perception, light and the electromagnetic spectrum, image sensing and acquisition, and basic relationships between pixels. Examples and diagrams are provided to illustrate concepts like the structure of the human eye, image formation, brightness adaptation, and the electromagnetic spectrum. Optical illusions are also discussed as examples of how visual perception does not always match physical light intensities.
The document discusses light and perception. It begins by introducing photometric stereo, which uses pixel brightness to understand shape. It then covers topics like what light is, how we measure and perceive it, how light propagates and interacts with matter. Key points covered include how the human visual system perceives color using rods and cones, properties of light like reflection, challenges of modeling image formation by tracking light rays, and assumptions needed for shape from shading from a single image.
This document discusses key concepts related to visual information and human vision. It covers the electromagnetic spectrum, properties of light, how the human eye perceives color and brightness, and color theory concepts like additive and subtractive color mixing. Standard color temperatures used in television and for white balancing cameras are also explained.
Digital images can be represented as multidimensional arrays of numbers or vectors. Each component in the image, called a pixel, associates with a pixel value such as intensity or color. To create a digital image, an analog image is sampled and quantized by converting the continuously sensed data into discrete numeric values. Sampling involves assigning numeric coordinates to pixels according to a grid, while quantization assigns numeric values to represent the brightness or color at each pixel location. The number of samples and quantization levels can impact the quality and file size of the digital image.
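The sampling-then-quantization pipeline described above can be sketched in a few lines. This is a minimal illustrative example, not code from any of the summarized documents: a sine profile stands in for one scan line of a continuous image, and the grid size and number of gray levels are arbitrary choices.

```python
import math

# A continuous "analog" intensity profile standing in for one scan line of an
# image; the sine function is an arbitrary illustrative choice.
def analog_intensity(x):
    return 0.5 + 0.5 * math.sin(2 * math.pi * x)

# Sampling: evaluate the signal at N discrete grid coordinates.
n_samples = 8
samples = [analog_intensity(i / n_samples) for i in range(n_samples)]

# Quantization: map each continuous sample to one of L discrete gray levels.
n_levels = 4
quantized = [round(s * (n_levels - 1)) for s in samples]

print([round(s, 3) for s in samples])
print(quantized)  # integers in the range [0, n_levels - 1]
```

Raising `n_samples` improves spatial resolution while raising `n_levels` improves intensity resolution; both increase file size, which is the trade-off the summary refers to.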
This presentation gathers the fundamentals of digital image processing in one place, presented so that even a layperson can follow them.
Lasers have various applications in ophthalmology, including both diagnostic and therapeutic uses. Some key points:
- Lasers work by stimulating emission of coherent light and can be focused precisely due to properties like collimation and monochromaticity. Common lasers used include Nd:YAG, excimer, and diode lasers.
- Diagnostically, lasers are used in scanning laser ophthalmoscopy, optical coherence tomography, and wavefront analysis. Therapeutically, they are used for refractive surgery, glaucoma treatment like laser iridotomy, and retinal procedures like photocoagulation.
- Specific procedures include PRK, LASIK and LASEK to correct refractive errors.
The document summarizes key optical principles related to the human visual system. It discusses:
1) The basics of light, photons, and units of measurement for light such as lumens.
2) How different wavelengths of light such as UV, visible light, and X-rays interact with human skin and tissues, including uses in phototherapy and risks of skin cancer.
3) Principles of reflection, refraction, lenses, and image formation and their relevance to the anatomy and functioning of the human eye.
4) Common visual impairments like myopia, hyperopia, and astigmatism as well as methods for testing visual acuity and visual fields.
The document discusses elements of visual perception including the structure and function of the human eye and visual system. It describes how (1) light is focused through the cornea and lens onto the retina, where rods and cones detect the image and transmit signals to the brain, (2) the fovea provides sharp central vision while peripheral vision is supported by rods, and (3) brightness adaptation allows the eye to perceive a wide range of intensities through changes in sensitivity. Phenomena like Mach bands and simultaneous contrast demonstrate that perceived brightness depends on context rather than absolute intensity.
Optical Phenomena related to Optometric Optics (Reflection, Refraction, Interference, Diffraction, Polarisation) and also their Optometric Uses or their uses in the Optometry Field
WEBINAR ON FUNDAMENTALS OF DIGITAL IMAGE PROCESSING DURING COVID LOCKDOWN by K. Vijay Anand, Associate Professor, Department of Electronics and Instrumentation Engineering, R.M.K. Engineering College, Tamil Nadu, India
Physiology of Vision in Ophthalmology in detail, by ManjunathN95
1. The document describes the physiology of vision and the rhodopsin cycle. Light is converted to electrical signals in the photoreceptors through phototransduction.
2. Phototransduction involves rhodopsin bleaching, in which light exposure converts the retinal component of rhodopsin from its 11-cis to its all-trans form. This triggers biochemical reactions generating a receptor potential.
3. The receptor potential is transmitted through the visual pathway to the visual cortex for perception through serial and parallel processing.
Lasers in ophthalmology - Dr. Parag Apte, by parag apte
A full one-hour presentation on all types of lasers in ophthalmology, for undergraduates and postgraduates. It covers laser photocoagulation, laser iridotomy, and laser capsulotomy in detail.
Introduction
The applications of microscopy in the forensic sciences are almost limitless. This is due in large measure to the ability of
microscopes to detect, resolve and image the smallest items of evidence, often without alteration or destruction. As a
result, microscopes have become nearly indispensable in all forensic disciplines involving the natural sciences. Thus, a
firearms examiner comparing a bullet, a trace evidence specialist identifying and comparing fibers, hairs, soils or dust, a
document examiner studying ink line crossings or paper fibers, and a serologist scrutinizing a bloodstain, all rely on
microscopes, in spite of the fact that each may use them in different ways and for different purposes.
The principal purpose of any microscope is to form an enlarged image of a small object. As the image is more greatly
magnified, the concern then becomes resolution: the ability to see increasingly fine details as the magnification is
increased. For most observers, the ability to see fine details of an item of evidence at a convenient magnification is
sufficient. For many items, such as ink lines, bloodstains or bullets, no treatment is required and the evidence may
typically be studied directly under the appropriate microscope without any form of sample preparation. For other types of
evidence, particularly traces of particulate matter, sample preparation before the microscopical examination begins is
often essential.
Types of Microscopes Used in the Forensic Sciences
A variety of microscopes are used in any modern forensic science laboratory. Most of these are light microscopes which
use photons to form images, but electron microscopes, particularly the scanning electron microscope (SEM), are finding
applications in larger, full service laboratories because of their wide range of magnification, high resolving power and
ability to perform elemental analyses when equipped with an energy or wavelength dispersive X-ray spectrometer.
This document discusses the physics of color and light measurement. It covers topics like:
1) Color is a sensory perception produced in the brain that requires a light source, object, and observer.
2) The wavelength of light determines its perceived color; visible light has wavelengths between about 380 and 760 nm.
3) Light intensity is measured in lumens and is affected by factors like distance from the light source due to the inverse square law.
4) Common light sources like incandescent, fluorescent, and blackbody radiation are described in terms of their spectral properties and color temperatures.
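The inverse square law mentioned in point 3 is easy to state concretely: illuminance from a point source falls off with the square of the distance. The sketch below is illustrative only, and the 100 cd source intensity is an arbitrary assumed value.

```python
# Inverse square law for a point source: illuminance E (in lux) at distance d
# (in metres) from a source of luminous intensity I (in candela) is E = I / d^2.

def illuminance(intensity_cd: float, distance_m: float) -> float:
    """Illuminance in lux at distance_m from a point source of intensity_cd."""
    return intensity_cd / distance_m ** 2

source = 100.0  # candela; arbitrary illustrative value
for d in (1.0, 2.0, 4.0):
    # doubling the distance quarters the illuminance
    print(d, illuminance(source, d))
```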
This document provides an overview of imaging physics and the types of sensors used to record images. It discusses three main types of sensors: point detectors with 2D scanning, linear sensors with 1D scanning, and area array sensors with no scanning required. Semiconductor detectors like photodiodes, phototransistors, and photogates are commonly used area array sensors. Photogates in particular work by using a depletion volume and electric field to separate electron-hole pairs created when photons are absorbed, generating a signal. The document also discusses color separation techniques and factors like photon energy needed to create electron-hole pairs that impact semiconductor detector sensitivity.
This document provides an overview of lighting considerations for network video systems. It discusses key lighting concepts like light, color, infrared light, brightness and glare. It also covers light sources and surfaces, beam patterns, and the inverse square law. The document provides guidance on using white light versus infrared light and safety considerations. It includes a chart showing illumination distances for different Axis illuminator products based on camera angle and distance from the illuminator.
When selecting a network camera for day or night surveillance, there are several elements impacting
image quality that are important to understand. This guide is intended to give a basic overview of those
elements, to give an understanding of how lighting affects the image, and of the factors that need to be
taken into consideration for creating favorable lighting in dark environments.
1. Light transmission through lenses is determined by calculating the percentage of light lost to reflection off the front and back surfaces and light absorbed by the lens material.
2. The human eye absorbs different wavelengths of light depending on the ocular tissue. The cornea absorbs UV light while the lens absorbs more UV as we age.
3. Sunglasses protect the eyes by reducing the amount of UV and infrared radiation that reaches the eyes, preventing damage from prolonged sun exposure like cataracts.
EM and Optics project 3 (1st) converted, by DurgeshJoshi6
This document is a lab report submitted by Ashok Kumar Sahoo for the course Electromagnetism & Optics at the Indian Institute of Technology Kharagpur. The report discusses experiments and measurements performed with optical fibers and optoelectronic devices. In the first part, experiments are described to analyze the working of single mode and multimode optical fibers by calculating properties like numerical aperture, bending loss, and splice loss. The second part analyzes the characteristics of various optoelectronic devices including solar cells, light dependent resistors, LEDs, phototransistors, photodiodes, and optocouplers. Basic theories of total internal reflection, optical fibers, and these components are also outlined.
Digital image processing involves manipulating digital images using a computer. It has two main applications: improving images for human interpretation and processing images for machine perception tasks. A digital image is composed of pixels arranged in a grid, each with an intensity value. Key steps in digital image processing include image acquisition through sensors, enhancement, restoration, compression and segmentation. The human visual system has adapted to a wide range of light intensities through mechanisms like brightness adaptation and color vision. Digital images are formed by sampling and quantizing a continuous image function.
OCT allows for high-resolution cross-sectional imaging of the retina. It provides micron-level resolution, enabling visualization of the retinal layers. OCT is a non-contact, non-invasive technique useful for qualitative and quantitative analysis of the retina and monitoring of morphological changes. It can detect and measure retinal thickness, volume, and parameters like RNFL thickness. While it provides advantages over other modalities, OCT also has limitations like difficulty imaging through opaque media. It operates using low-coherence interferometry and is useful for evaluating a variety of posterior segment diseases.
Troubleshooting, Designing, & Installing Digital & Analog Closed Circuit TV S... (Living Online)
The document discusses light and optics, comparing the human eye to a camera. It explains that both have lenses that focus light and sensors (retina for the eye, sensor for the camera) that capture images. However, the eye can automatically focus on objects at different distances, while cameras require manual focus adjustment. It also notes that the eye has a blind spot, but we see a continuous image because information from both eyes is combined in the brain.
Light is a form of electromagnetic radiation that interacts with the retina to produce the sensation of sight. It is the visible portion of the electromagnetic spectrum, ranging from 400-700 nm. Light travels as a transverse wave and exhibits properties of both waves and particles. The interaction of light with matter can be explained using wave optics concepts like interference and diffraction, or quantum optics concepts like absorption and scattering. Geometrical optics describes how lenses and mirrors form images through reflection and refraction according to Snell's law. Total internal reflection occurs when light passes from an optically dense to rare medium at an angle greater than the critical angle.
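The total internal reflection condition described above follows directly from Snell's law, n1 sin(t1) = n2 sin(t2): at the critical angle the refracted ray grazes the surface, so sin(theta_c) = n2 / n1. A small sketch with standard water and air indices:

```python
import math

# Critical angle for total internal reflection when light passes from an
# optically dense medium (n_dense) into a rarer one (n_rare):
# sin(theta_c) = n_rare / n_dense, from Snell's law.

def critical_angle_deg(n_dense: float, n_rare: float) -> float:
    if n_rare >= n_dense:
        raise ValueError("total internal reflection needs n_dense > n_rare")
    return math.degrees(math.asin(n_rare / n_dense))

print(round(critical_angle_deg(1.33, 1.00), 1))  # water-to-air boundary
```

Beyond this angle (about 48.8 degrees for water to air), no light is refracted and all of it is reflected back into the denser medium.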
The document discusses the electromagnetic spectrum, focusing on infrared and visible light. It provides descriptions of the electromagnetic spectrum and its components such as infrared, visible light, ultraviolet, X-rays and gamma rays. It discusses the uses of different parts of the spectrum including infrared light and visible light. Infrared light is used for wireless communication, night vision, medical therapy and more. Visible light enables vision, photography and optical fiber communication. Both parts of the spectrum have advantages like communication and disadvantages like potential harm from overexposure.
Optical coherence tomography (OCT) is a non-invasive imaging technique that uses light to obtain high-resolution cross-sectional images of the retina and anterior segment. OCT of the retina provides images similar to a vertical biopsy under a microscope, with micron-level resolution. Applications of OCT include ophthalmology, dermatology, cardiology, endoscopy, and guided surgery. OCT measures reflected light using interferometry, similar to ultrasound but using light instead of sound. It has much higher resolution than ultrasound. OCT is useful for detailed imaging of the retina and anterior segment, while ultrasound can image deeper structures due to its ability to penetrate tissue.
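The micron-level resolution claimed above comes from low-coherence interferometry: for a Gaussian source spectrum, the axial resolution is (2 ln 2 / pi) * lambda0^2 / delta-lambda. The 840 nm center wavelength and 50 nm bandwidth below are typical spectral-domain OCT figures assumed for illustration, not values from the document.

```python
import math

# Axial resolution of an OCT system from its light source, assuming a Gaussian
# spectrum: dz = (2 * ln 2 / pi) * lambda0^2 / dlambda.

def oct_axial_resolution_um(center_nm: float, bandwidth_nm: float) -> float:
    dz_nm = (2 * math.log(2) / math.pi) * center_nm ** 2 / bandwidth_nm
    return dz_nm / 1000.0  # convert nm to micrometres

print(round(oct_axial_resolution_um(840, 50), 1))  # micron-level, as noted above
```

A broader source bandwidth directly improves axial resolution, which is why OCT uses broadband (low-coherence) light rather than a narrow laser line.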
Digital image processing involves manipulating digital images using a computer. It has two main applications: improving images for human interpretation and processing images for machine perception tasks. A digital image is composed of pixels arranged in a grid, each with an intensity value. Key steps in digital image processing include image acquisition through sensors, enhancement, restoration, compression and segmentation. The human visual system has adapted to a wide range of light intensities through mechanisms like brightness adaptation and color vision. Digital images are formed by sampling and quantizing a continuous image function.
OCT allows for high-resolution cross-sectional imaging of the retina. It provides micron-level resolution, enabling visualization of the retinal layers. OCT is a non-contact, non-invasive technique useful for qualitative and quantitative analysis of the retina and monitoring of morphological changes. It can detect and measure retinal thickness, volume, and parameters like RNFL thickness. While it provides advantages over other modalities, OCT also has limitations like difficulty imaging through opaque media. It operates using low-coherence interferometry and is useful for evaluating a variety of posterior segment diseases.
Troubleshooting, Designing, & Installing Digital & Analog Closed Circuit TV S...Living Online
The document discusses light and optics, comparing the human eye to a camera. It explains that both have lenses that focus light and sensors (retina for the eye, sensor for the camera) that capture images. However, the eye can automatically focus on objects at different distances, while cameras require manual focus adjustment. It also notes that the eye has a blind spot, but we see a continuous image because information from both eyes is combined in the brain.
Light is a form of electromagnetic radiation that interacts with the retina to produce the sensation of sight. It is the visible portion of the electromagnetic spectrum, ranging from 400-700 nm. Light travels as a transverse wave and exhibits properties of both waves and particles. The interaction of light with matter can be explained using wave optics concepts like interference and diffraction, or quantum optics concepts like absorption and scattering. Geometrical optics describes how lenses and mirrors form images through reflection and refraction according to Snell's law. Total internal reflection occurs when light passes from an optically dense to rare medium at an angle greater than the critical angle.
The document discusses the electromagnetic spectrum, focusing on infrared and visible light. It provides descriptions of the electromagnetic spectrum and its components such as infrared, visible light, ultraviolet, X-rays and gamma rays. It discusses the uses of different parts of the spectrum including infrared light and visible light. Infrared light is used for wireless communication, night vision, medical therapy and more. Visible light enables vision, photography and optical fiber communication. Both parts of the spectrum have advantages like communication and disadvantages like potential harm from overexposure.
Optical coherence tomography (OCT) is a non-invasive imaging technique that uses light to obtain high-resolution cross-sectional images of the retina and anterior segment. OCT of the retina provides images similar to a vertical biopsy under a microscope, with micron-level resolution. Applications of OCT include ophthalmology, dermatology, cardiology, endoscopy, and guided surgery. OCT measures reflected light using interferometry, similar to ultrasound but using light instead of sound. It has much higher resolution than ultrasound. OCT is useful for detailed imaging of the retina and anterior segment, while ultrasound can image deeper structures due to its ability to penetrate tissue.
Semelhante a 03-Digital Image Fundamentals (5)as.pptx (20)
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
2. Book title: Digital Image Processing by Gonzalez
Digital Image Processing GMM Page 2
3. Contents
Digital Image Processing
2.1 Elements of Visual Perception
2.1.1 Structure of the Human Eye
2.1.2 Image Formation in the Eye
2.1.3 Brightness Adaptation and Discrimination
2.2 Light and the Electromagnetic Spectrum
2.3 Image Sensing and Acquisition
2.3.1 Image Acquisition Using a Single Sensor
2.3.2 Image Acquisition Using Sensor Strips
2.3.3 Image Acquisition Using Sensor Arrays
2.3.4 A Simple Image Formation Model
2.4 Image Sampling and Quantization
2.4.1 Basic Concepts in Sampling and Quantization
2.4.2 Representing Digital Images
2.4.3 Spatial and Intensity Resolution
2.4.4 Image Interpolation
2.5 Some Basic Relationships between Pixels
2.5.1 Neighbors of a Pixel
2.5.2 Adjacency, Connectivity, Regions, and Boundaries
2.5.3 Distance Measures
2.6 An Introduction to the Mathematical Tools Used in Digital Image Processing
2.6.1 Array versus Matrix Operations
2.6.2 Linear versus Nonlinear Operations
2.6.3 Arithmetic Operations
2.6.4 Set and Logical Operations
2.6.5 Spatial Operations
2.6.6 Vector and Matrix Operations
2.6.7 Image Transforms
2.6.8 Probabilistic Methods
4. Elements of Visual Perception
• Digital image processing (DIP) is built on a foundation of mathematical and probabilistic formulations,
• but human intuition and analysis play a central role in the choice of one technique over another.
• That choice is subjective and often made on the basis of visual judgment.
• A basic understanding of human visual perception is therefore needed for DIP.
• The interest is in the mechanics and parameters of how images are formed and perceived by humans.
• Here we learn the physical limitations of human vision in order to work with digital images.
5. Structure of the Human Eye
• The eye is nearly a sphere, with an average diameter of about 20 mm.
• The cornea and sclera form the outer layer that covers the choroid and the retina.
• The choroid contains blood vessels that supply nutrition to the eye.
• The choroid coat is heavily pigmented, which helps reduce the amount of extraneous light entering the eye.
• The choroid is divided into the ciliary body and the iris.
• The iris contracts and expands to control the amount of light entering the eye.
• The central opening of the iris (the pupil) varies in diameter from approximately 2 to 8 mm.
• The lens is suspended by fibers attached to the ciliary body.
• It contains 60 to 70% water, about 6% fat, and more protein than any other tissue in the eye.
6. Structure of the Human Eye
• The lens is colored by a slightly yellow pigmentation that increases with age.
• Excessive clouding of the lens, known as a cataract, leads to loss of clear vision.
• The lens absorbs approximately 8% of the visible light spectrum.
• Both infrared and ultraviolet light are absorbed appreciably by proteins within the lens structure and, in excessive amounts, can damage the eye.
7. Structure of the Human Eye
• The innermost membrane of the eye is the retina, which contains the light receptors.
• When the eye is properly focused, light from an object outside the eye is imaged on the retina via these light receptors.
• There are two classes of light receptors:
• Cones and
• Rods
8. Structure of the Human Eye
• The cones in each eye number between 6 and 7 million and are sensitive to color. Cone vision is called photopic or bright-light vision.
• Rods number between 75 and 150 million and are distributed over the retinal surface.
• Rods serve to give a general, overall picture of the field of view.
• Rods are sensitive to low levels of illumination; for example, objects appear colorless in moonlight compared with sunlight because only the rods are stimulated. This is called scotopic or dim-light vision.
9. Image Formation in the Eye
• In the human eye, the distance between the lens and the retina is fixed, but the shape of the lens is variable.
• In an ordinary photographic camera the converse is true: the focal length is fixed, and focusing is achieved by varying the distance between the lens and the imaging plane.
13. Light and the Electromagnetic Spectrum
• In 1666, Sir Isaac Newton discovered that
when a beam of sunlight passes through a
glass prism, the emerging beam of light is
not white but consists instead of a
continuous spectrum of colors ranging
from violet at one end to red at the other.
• As Fig. 2.10 shows, the range of colors
we perceive in visible light is a small
portion of the electromagnetic spectrum.
• On one end of the spectrum are radio
waves with wavelengths billions of times
longer than those of visible light.
• On the other end of the spectrum are
gamma rays with wavelengths millions of
times smaller than those of visible light.
14. Light and the Electromagnetic Spectrum
15. Light and the Electromagnetic Spectrum
• Electromagnetic waves can be
visualized as propagating
sinusoidal waves with wavelength
λ (Fig. 2.11), or they can be
thought of as a stream of massless
particles,
• Each traveling in a wavelike
pattern and moving at the speed of
light.
• Each massless particle carries a certain amount (or bundle) of energy, and each bundle of energy is called a photon.
16. Light and the Electromagnetic Spectrum
• We see from Eq. (2-2), E = hν (h is Planck's constant, ν the frequency), that energy is
proportional to frequency,
• so the higher-frequency (shorter
wavelength) electromagnetic phenomena
carry more energy per photon.
• Thus,
• radio waves have photons with low energies,
• microwaves have more energy than radio
waves,
• infrared still more, then visible, ultraviolet, X-
rays, and finally gamma rays, the most
energetic of all.
• High-energy electromagnetic radiation,
especially in the X-ray and gamma ray
bands, is particularly harmful to living
organisms.
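The energy ordering in the bullets above can be checked numerically with E = hc/λ. A minimal Python sketch; the band wavelengths are representative order-of-magnitude values chosen for illustration, not taken from the text:

```python
# Photon energy E = h*c / wavelength: shorter wavelength means more energy
# per photon.

PLANCK_H = 6.626e-34   # Planck's constant, J*s
SPEED_C = 2.998e8      # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength (in meters)."""
    return PLANCK_H * SPEED_C / wavelength_m

bands = {                 # representative wavelengths, in meters
    "radio":       1.0,
    "microwave":   1.0e-2,
    "infrared":    1.0e-5,
    "visible":     5.5e-7,
    "ultraviolet": 2.0e-7,
    "x-ray":       1.0e-9,
    "gamma":       1.0e-12,
}

energies = {name: photon_energy(lam) for name, lam in bands.items()}
```

Comparing the entries confirms the ordering in the slide: radio photons carry the least energy and gamma photons the most.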
17. Light and the Electromagnetic Spectrum
• Light is a type of electromagnetic radiation that can be sensed
by the eye.
• The visible (color) spectrum is shown expanded in Fig. 2.10
• The visible band of the electromagnetic spectrum spans the
range from approximately 0.43 µm (violet) to about 0.79 µm
(red).
• For convenience, the color spectrum is divided into six broad
regions:
• violet, blue, green, yellow, orange, and red.
• No color ends abruptly; rather, each range blends smoothly into
the next, as Fig. 2.10 shows.
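The six broad regions can be sketched as a simple lookup by wavelength. Only the overall visible range (about 430-790 nm) and the green band (500-570 nm) come from the text; the other boundaries below are common approximate values, and as the slide notes, real regions blend smoothly rather than cutting off sharply:

```python
# Rough lookup from wavelength (in nm) to the six broad color regions of
# the visible band. Boundaries other than the overall range and the green
# band are approximate assumptions.

REGIONS = [        # (upper bound in nm, region name), in increasing order
    (450, "violet"),
    (500, "blue"),
    (570, "green"),
    (590, "yellow"),
    (620, "orange"),
    (790, "red"),
]

def color_region(wavelength_nm):
    """Name the broad color region containing the given wavelength."""
    if not 430 <= wavelength_nm <= 790:
        return "outside visible band"
    for upper, name in REGIONS:
        if wavelength_nm <= upper:
            return name
```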
18. Light and the Electromagnetic Spectrum
•The colors perceived in an object are determined by the
nature of the light reflected by the object.
•A body that reflects light relatively balanced in all visible
wavelengths appears white to the observer.
•However, a body that favors reflectance in a limited range
of the visible spectrum exhibits some shades of color.
• For example, green objects reflect light with wavelengths primarily
in the 500 to 570 nm range, while absorbing most of the energy at
other wavelengths.
19. Light and the Electromagnetic Spectrum
•Light that is void of color is called monochromatic (or
achromatic) light.
•The only attribute of monochromatic light is its intensity.
•Because the intensity of monochromatic light is perceived
to vary from black through grays and finally to white,
the term gray level is commonly used to denote
monochromatic intensity.
•The range of values of monochromatic light from black to
white is usually called the gray scale, and monochromatic
images are frequently referred to as grayscale images.
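Since a grayscale image carries only intensity, a color pixel must be reduced to a single gray level before it can be treated as monochromatic. A minimal sketch; the weighted sum uses the ITU-R BT.601 luma weights, a standard convention assumed here rather than specified in this text:

```python
# Reducing a color pixel to one gray level with BT.601 luma weights.

def to_gray(r, g, b):
    """Map an (r, g, b) pixel with components in [0, 255] to one gray level."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Converting a tiny color image pixel by pixel yields a grayscale image.
color_image = [[(255, 0, 0), (0, 255, 0)],
               [(0, 0, 255), (255, 255, 255)]]
gray_image = [[to_gray(*px) for px in row] for row in color_image]
```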
20. Light and the Electromagnetic Spectrum
• Chromatic (color) light spans the electromagnetic energy spectrum
from approximately 0.43 to 0.79 µm.
• Three other quantities are used to describe a chromatic light source:
radiance, luminance, and brightness.
• Radiance is the total amount of energy that flows from the light
source, and it is usually measured in watts (W).
• Luminance, measured in lumens (lm), gives a measure of the amount
of energy an observer perceives from a light source.
• For example, light emitted from a source operating in the far infrared region of
the spectrum could have significant energy (radiance), but an observer would
hardly perceive it; its luminance would be almost zero.
• Brightness is a subjective descriptor of light perception that is
practically impossible to measure. It embodies the achromatic notion
of intensity and is one of the key factors in describing color sensation.
21. • In principle, if a sensor can be developed that is capable of detecting
energy radiated in a band of the electromagnetic spectrum, we can image
events of interest in that band.
• Note, however, that the wavelength of an electromagnetic wave required to
“see” an object must be of the same size as, or smaller than, the object.
• For example,
• a water molecule has a diameter on the order of 10⁻¹⁰ m.
• To study these molecules, we would need a source capable of emitting energy in the far (high-energy) ultraviolet or soft (low-energy) X-ray bands.
• Although imaging is based predominantly on energy from electromagnetic wave radiation, this is not the only method for generating images.
• For example, sound reflected from objects can be used to form ultrasonic images.
• Other sources of digital images are electron beams for electron microscopy, and
• software for generating synthetic images used in graphics and visualization.
Light and the Electromagnetic Spectrum
23. What is image processing?
• An image is a 2-D function f(x, y), where x and y are spatial (plane) coordinates, i.e., location.
• The amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
• When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital image.
• The field of digital image processing refers to processing digital images by means of a digital computer.
• Thus a digital image is composed of elements, each with a location and an intensity value.
• These elements are called picture elements, image elements, pels, or pixels.
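The idea that a digital image is a finite grid of pixels, each with a location and an intensity value, can be made concrete with a tiny example (the intensity values below are arbitrary 8-bit gray levels chosen for illustration):

```python
# A digital image as a finite grid of picture elements (pixels).

image = [
    [12,  50,  50],   # row x = 0
    [50, 200,  50],   # row x = 1
    [50,  50,  12],   # row x = 2
]

def intensity(img, x, y):
    """Gray level of the image at the point (x, y): row x, column y."""
    return img[x][y]

M = len(image)       # number of rows
N = len(image[0])    # number of columns: M*N pixels in total
```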
24. Image Acquisition Using a Single Sensor
Image Acquisition Using Sensor Strips
Image Acquisition Using Sensor Arrays
A Simple Image Formation Model
25. Image Sensing and Acquisition
• The illumination may originate from a source of electromagnetic energy,
• such as a radar, infrared, or X-ray system.
• OR
• it could originate from less traditional sources, such as ultrasound or even a computer-
generated illumination pattern.
• Similarly, the scene elements could be familiar objects, like molecules, buried rock
formations, or a human brain.
• Depending on the nature of the source, illumination energy is reflected from, or
transmitted through, objects.
• An example in the first category is light reflected from a planar surface.
• An example in the second category is when X-rays pass through a patient’s body for
the purpose of generating a diagnostic X-ray image.
• In some applications, the reflected or transmitted energy is focused onto a photo
converter (e.g., a phosphor screen) that converts the energy into visible light.
• Electron microscopy and some applications of gamma imaging use this approach.
26. Image Sensing and Acquisition: IMAGE ACQUISITION
USING A SINGLE SENSING ELEMENT
• Figure 2.12 shows the three
principal sensor arrangements used
to transform incident energy into
digital images.
• The incoming energy is transformed
into a voltage by the combination of
input electrical power and sensor
material that is responsive to the
particular type of energy being
detected.
• The output voltage waveform is the
response of the sensor(s), and a
digital quantity is obtained from
each sensor by digitizing its
response.
27. Image Sensing and Acquisition: IMAGE ACQUISITION
USING A SINGLE SENSING ELEMENT
• Figure 2.12(a) shows the components of
a single sensing element.
• A familiar sensor of this type is the
photodiode, which is constructed of
silicon materials and whose output is a
voltage proportional to light intensity.
• Using a filter in front of a sensor
improves its selectivity.
• For example, an optical green-transmission
filter favors light in the green band of the
color spectrum.
• As a consequence, the sensor output would
be stronger for green light than for other
visible light components.
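The filter's selectivity can be sketched as a weighted sum: the sensor's output is the incoming energy in each band times the filter's transmission there. The three-band spectrum and the transmission values below are invented for illustration; a real filter is a continuous function of wavelength:

```python
# Filter selectivity as a weighted sum of per-band spectral energy.

green_filter = {"blue": 0.05, "green": 0.90, "red": 0.05}   # fraction passed

def sensor_output(spectrum, transmission):
    """Voltage-like response: sum of per-band energy times transmission."""
    return sum(energy * transmission[band] for band, energy in spectrum.items())

green_light = {"blue": 0.0, "green": 1.0, "red": 0.0}
red_light   = {"blue": 0.0, "green": 0.0, "red": 1.0}
```

With the green-transmission filter in place, the same sensor responds far more strongly to green light than to red, which is the selectivity described above.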
28. Image Sensing and Acquisition
IMAGE ACQUISITION USING SENSOR STRIPS
• A geometry used more frequently than single sensors is an in-line sensor strip, as in Fig. 2.12(b).
• The strip provides imaging elements in one direction.
• Motion perpendicular to the strip provides imaging in the other direction, as shown in Fig. 2.14(a).
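Strip acquisition can be sketched as capturing one line per motion step: the strip supplies one image dimension and the motion supplies the other. The scene array below is arbitrary:

```python
# Sensor-strip acquisition sketched line by line.

scene = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]

def acquire_with_strip(scene):
    """Build a 2-D image one line at a time, as a moving strip would."""
    image = []
    for line in scene:            # one motion step per scene line
        image.append(list(line))  # the strip captures the whole line at once
    return image

digital_image = acquire_with_strip(scene)
```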
29. Image Acquisition Using Sensor Arrays
• Figure 2.12(c) shows individual sensing
elements arranged in the form of a 2-D
array.
• Electromagnetic and ultrasonic sensing
devices frequently are arranged in this
manner.
• This is also the predominant
arrangement found in digital cameras.
• A typical sensor for these cameras is a CCD (charge-coupled device) array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more.
30. A SIMPLE IMAGE FORMATION MODEL
• As before, we denote images by 2-D functions of the form f(x, y).
• The value of f at spatial coordinates (x, y) is a scalar quantity
• whose physical meaning is determined by the source of the image, and
• whose values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves).
• As a consequence, f(x, y) must be nonnegative and finite; that is,
0 ≤ f(x, y) < ∞
31. A SIMPLE IMAGE FORMATION MODEL
• Function f(x, y) is characterized by two components:
  • (1) the amount of source illumination incident on the scene being viewed, and
  • (2) the amount of illumination reflected by the objects in the scene.
• These are called the illumination and reflectance components, and are denoted by i(x, y) and r(x, y), respectively.
• The two functions combine as a product to form f(x, y):
  f(x, y) = i(x, y) r(x, y)
32. A SIMPLE IMAGE FORMATION MODEL
• Reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
• The nature of i(x, y) is determined by the illumination source, and
• r(x, y) is determined by the characteristics of the imaged objects.
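As a quick sketch of this product model (the illumination and reflectance arrays below are made-up values, not from the text):

```python
import numpy as np

# Hypothetical illumination field and reflectance map (same shape as the image).
i = np.array([[90.0, 100.0],
              [110.0, 120.0]])   # i(x, y): determined by the illumination source
r = np.array([[0.1, 0.5],
              [0.0, 1.0]])       # r(x, y): 0 = total absorption, 1 = total reflectance

f = i * r                        # f(x, y) = i(x, y) r(x, y)

# f must be nonnegative and finite.
assert np.all(f >= 0) and np.all(np.isfinite(f))
print(f)
```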
34. A SIMPLE IMAGE FORMATION MODEL
• The interval [L_min, L_max] is called the intensity (or gray) scale.
• Common practice is to shift this interval numerically to the interval [0, 1] or [0, C],
  • where l = 0 is considered black and
  • l = 1 (or C) is considered white on the scale.
• All intermediate values are shades of gray varying from black to white.
35. Contents
2.1 Elements of Visual Perception
2.1.1 Structure of the Human Eye
2.1.2 Image Formation in the Eye
2.1.3 Brightness Adaptation and Discrimination
2.2 Light and the Electromagnetic Spectrum
2.3 Image Sensing and Acquisition
2.3.1 Image Acquisition Using a Single Sensor
2.3.2 Image Acquisition Using Sensor Strips
2.3.3 Image Acquisition Using Sensor Arrays
2.3.4 A Simple Image Formation Model
2.4 Image Sampling and Quantization
2.4.1 Basic Concepts in Sampling and Quantization
2.4.2 Representing Digital Images
2.4.3 Spatial and Intensity Resolution
2.4.4 Image Interpolation
2.5 Some Basic Relationships between Pixels
2.5.1 Neighbors of a Pixel
2.5.2 Adjacency, Connectivity, Regions, and Boundaries
2.5.3 Distance Measures
2.6 An Introduction to the Mathematical Tools Used in Digital Image Processing
2.6.1 Array versus Matrix Operations
2.6.2 Linear versus Nonlinear Operations
2.6.3 Arithmetic Operations
2.6.4 Set and Logical Operations
2.6.5 Spatial Operations
2.6.6 Vector and Matrix Operations
2.6.7 Image Transforms
2.6.8 Probabilistic Methods
36. Image Sampling and Quantization
•An image may be continuous with respect to the x- and y-
coordinates, and also in amplitude.
•To convert it to digital form, we have to sample the function
in both coordinates and in amplitude.
•To create a digital image, we need to convert the continuous
sensed data into a digital format. This requires two
processes: sampling and quantization.
•Digitizing the coordinate values is called sampling.
•Digitizing the amplitude values is called quantization.
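The two steps can be sketched on a 1-D signal standing in for one scan line of a continuous image (the signal itself is hypothetical):

```python
import numpy as np

# Dense array standing in for a continuous intensity profile along a scan line.
t = np.linspace(0.0, 1.0, 1000)
f = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)        # intensities in [0, 1]

# Sampling: keep equally spaced values along the line (digitize the coordinates).
samples = f[::100]                               # 10 samples

# Quantization: snap each sample to one of L discrete gray levels (digitize the amplitude).
L = 8
quantized = np.round(samples * (L - 1)) / (L - 1)

print(samples.size, np.unique(quantized).size)
```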
37. Image Sampling and Quantization
• The 1-D function in Fig. (b) is a plot of amplitude (intensity level) values of the continuous image along the line segment AB in Fig. (a).
• To sample this function, we take equally spaced samples along line AB, as shown in Fig. (c).
• The samples are shown as small dark squares superimposed on the function.
• The discrete spatial locations of the samples are indicated by tick marks at the bottom of the figure.
• The sample values (vertical axis) still span a continuous range of intensities, so they must be quantized; in this example, eight discrete gray levels from black (0) to white (1) are used.
38. Image Sampling and Quantization
• Figure 2.17(a) shows a
continuous image projected
onto the plane of a 2-D sensor.
• Figure 2.17(b) shows the
image after sampling and
quantization.
• The quality of a digital image is determined largely by
  • the number of samples and
  • the number of discrete intensity levels used in sampling and quantization.
• However, image content also plays a role in the choice of these parameters.
39. Representing Digital Images
• A digital image is represented as a 2-D array containing M rows and N columns, where (x, y) are discrete coordinates.
• For notational clarity and convenience, we use integer values for these discrete coordinates:
  • x = 0, 1, 2, …, M − 1
  • y = 0, 1, 2, …, N − 1
• x and y are referred to as spatial
variables or spatial coordinates.
41. Representing Digital Images
• As Fig. 2.19 shows, we define the origin of an image at the top left corner.
• This convention is based on the fact that many image displays and sensors scan an image starting at its top left corner.
• Choosing the origin of f(x, y) at that point also makes sense mathematically, because digital images in reality are matrices.
• Sometimes we use x and y interchangeably in equations with the rows (r) and columns (c) of a matrix.
• For example, the center of an image of size 1023 × 1024 is at (511, 512).
• Some programming languages (e.g., MATLAB) start indexing at 1 instead of at 0. The center of an image in that case is found at (floor(M/2) + 1, floor(N/2) + 1).
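A minimal sketch of the two indexing conventions (the function names are mine, not standard APIs):

```python
def center_0_based(M, N):
    # Origin at the top left; x = 0..M-1 indexes rows, y = 0..N-1 indexes columns.
    return (M // 2, N // 2)

def center_1_based(M, N):
    # Languages that index from 1 (e.g., MATLAB) shift the center by one.
    return (M // 2 + 1, N // 2 + 1)

print(center_0_based(1023, 1024))   # (511, 512), as in the text
print(center_1_based(1023, 1024))   # (512, 513)
```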
42. Representing Digital Images
•An image exhibiting saturation
and noise.
•Saturation is the highest value
beyond which all intensity values
are clipped (note how the entire
saturated area has a high,
constant intensity level).
•Visible noise in this case appears
as a grainy texture pattern.
•The dark background is noisier,
but the noise is difficult to see.
43. Representing Digital Images
• This digitization process
requires that decisions be
made regarding the values
for M, N, and for the
number, L, of discrete
intensity levels.
• The number of intensity levels typically is an integer power of 2: L = 2^k, where k is an integer.
• We assume that the discrete
levels are equally spaced and
that they are integers in the
range [0,L-1].
44. Representing Digital Images
• Figure 2.21 shows the number of megabytes required to store square images for various values of N and k (as usual, one byte equals 8 bits and a megabyte equals 10^6 bytes).
• When an image can have 2^k possible intensity levels, it is common practice to refer to it as a "k-bit image" (e.g., a 256-level image is called an 8-bit image).
• Note that storage requirements for large 8-bit images (e.g., 10,000 × 10,000 pixels) are not insignificant.
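The storage figures in Fig. 2.21 follow from a one-line calculation; a sketch (the helper name is hypothetical):

```python
def storage_megabytes(N, k):
    """Storage for an N x N image with 2**k intensity levels (k bits per pixel)."""
    bits = N * N * k
    return bits / 8 / 1e6      # one byte = 8 bits, one megabyte = 10**6 bytes

print(storage_megabytes(10_000, 8))   # 100.0 MB for a 10,000 x 10,000 8-bit image
```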
45. Representing Digital Images
•Resolution:
• Image resolution describes the amount of detail an image holds.
• Higher resolution images are sharp/more detailed.
• In a lower resolution image, the fine differences in color disappear, edges become
blurred, etc.
• There are many kinds of resolution that can apply to film, television, etc., but the two
types we are concerned with here are
• screen resolution and
• print resolution.
• Screen resolution:
  • Measured in pixels per inch (PPI). A pixel is a tiny square of color.
  • A monitor uses tiny pixels to assemble text and images on screen.
  • The optimal resolution for images on screen is 72 PPI.
  • Increasing the PPI won't make the image look any better on screen; it'll just make the file larger, which will probably slow down the website when it loads or the file when it opens.
46. Representing Digital Images
•Print resolution:
• Measured in dots per inch (or “DPI”),
• DPI means the number of dots of ink per inch that a printer deposits on a piece of
paper.
• For 300 DPI, a printer will output 300 tiny dots of ink to fill every inch of the print.
• 300 DPI is the standard print resolution for high resolution output.
• This means that images should be printed at a minimum of 300 × 300 = 90,000 dots per square inch to produce a high resolution print.
• If the document will stay on the screen (like a website), you just need to worry about
screen resolution, so your images should be 72 PPI.
• An important note: Sometimes the terms DPI (print) and PPI (screen) are used
interchangeably. So, don’t be confused if someone refers to a 300 DPI image that is
on screen, because pixels per inch (PPI) translate equally to dots per inch (DPI).
• If you’re going to print the document, you need to make sure the images are 300
DPI at 100% of the final output size. This sounds more complicated than it really is.
• The bigger we try to print a 300 pixel × 300 pixel image, the more pixellated it becomes: the eye starts to see the individual pixels, and edges look jagged.
47. Representing Digital Images
•How can we figure out the DPI of an image?
• If you want to print an image that is 1024 × 768 (listed as Width=1024px, Height=768px on a PC), you need to divide each value by 300 to see how many inches you can print at 300 dpi.
•1024 ÷ 300 = 3.4133″ (width)
•768 ÷ 300 = 2.56″ (height)
•So, you could print this 1024px × 768px image at 300 DPI
at a size of 3.4133″ × 2.56″ –
• any bigger than this, and you risk the image becoming pixellated.
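The same division can be wrapped in a small helper (the name and default are assumptions for illustration):

```python
def max_print_size_inches(width_px, height_px, dpi=300):
    # Largest print at the given DPI before the image risks becoming pixellated.
    return width_px / dpi, height_px / dpi

w, h = max_print_size_inches(1024, 768)
print(round(w, 4), round(h, 2))   # 3.4133 2.56, matching the worked example
```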
48. Spatial and Intensity Resolution
• Spatial resolution is commonly stated in dots (pixels) per unit distance, such as
  • dots per inch (dpi) for printing or scanning, or
  • pixels per inch (ppi) for screens.
• To give you an idea of quality, in the US:
  • newspapers are printed with a resolution of 75 dpi,
  • magazines at 133 dpi,
  • glossy brochures at 175 dpi, and
  • the book page at which you are presently looking is printed at 2400 dpi.
49. Spatial and Intensity Resolution
• Image size depends on two factors:
  • the number of samples or spatial locations (number of pixels), and
  • the number of intensity levels used in quantization (intensity resolution).
• The number of intensity levels usually is an integer power of two, 2^k; commonly k = 8 or 16 bits.
• Thus the number of intensity levels will be 2^8 = 256 or 2^16 = 65,536.
53. Python with OpenCV
•OpenCV is a popular open source library for processing
images.
•It has API bindings in Python, C++, Java, and Matlab.
• It comes with thousands of functions and implements many advanced image processing algorithms.
•If you’re using Python, a common alternative to OpenCV is
PIL (Python Imaging Library, or its successor, Pillow).
•Compared to PIL, OpenCV has a richer set of features and
is often faster because it is implemented in C++.
55. Read an image
import cv2

img = cv2.imread('messi2018.png', cv2.IMREAD_COLOR)
cv2.imshow('Messi 2018', img)
R = img.shape[0]          # number of rows
C = img.shape[1]          # number of columns
channels = img.shape[2]   # number of color channels
print("Image rows:", R)
print("Image cols:", C)
print("Image channels:", channels)
cv2.waitKey(0)
cv2.destroyAllWindows()
57. Convert image 8 to 16 bit (MATLAB)
%% convert image 8 to 16 bit
pic1=imread('chestxray.jpg');
imwrite(pic1,'chestxray8.png','BitDepth', 8);
%figure,imshow(pic1);
%title('Chest xray unit 8');
%
imwrite(pic1,'chestxray16.png','BitDepth', 16);
img8 = imread('chestxray8.png');
img16 = imread('chestxray16.png');
%to show max pixel intensity
max(img8,[],'all')
max(img16,[],'all')
58. Image Interpolation
•Definition: The process of using known data to estimate
values at unknown locations.
•A basic tool used extensively in tasks such as
• zooming,
• shrinking,
• rotating, and
• geometric corrections.
•Here we apply it to image resizing (shrinking and
zooming),
•which are basically image resampling methods.
59. Image Interpolation
•Nearest neighbor (NN) interpolation
• The simplest approach to interpolation. This method simply finds the "nearest" neighboring pixel and assumes its intensity value, rather than calculating an average by some weighting criteria or generating an intermediate value based on complicated rules.
• It has the tendency to produce undesirable artifacts, such as severe distortion of straight
edges.
60. Image Interpolation
• Bilinear interpolation: in which we use the four nearest
neighbors to estimate the intensity at a given location.
• Let (x, y) denote the coordinates of the location to which we want to assign an intensity value (think of it as a point of the grid described previously), and
• let v(x, y) denote that intensity value. For bilinear interpolation, the assigned value is obtained using the equation
  v(x, y) = ax + by + cxy + d
• where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x, y).
• Bilinear interpolation gives much better results than nearest neighbor
interpolation, with a modest increase in computational burden.
• Detailed solved example:
• https://www.omnicalculator.com/math/bilinear-interpolation
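Rather than solving the four-equation system explicitly, bilinear interpolation is usually coded in its equivalent weighted form; a sketch under the text's row/column convention:

```python
import numpy as np

def bilinear(img, x, y):
    """Estimate the intensity at non-integer (x, y) from the four nearest neighbors.

    img is indexed [row, col], matching the text's (x, y) convention.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    a, b = x - x0, y - y0          # fractional offsets in [0, 1)
    return ((1 - a) * (1 - b) * img[x0, y0] + a * (1 - b) * img[x1, y0]
            + (1 - a) * b * img[x0, y1] + a * b * img[x1, y1])

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(bilinear(img, 0.5, 0.5))   # 25.0, the average of the four neighbors
```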
62. Image Interpolation
• Bicubic interpolation: involves the sixteen nearest neighbors of a point. The intensity value assigned to point (x, y) is obtained using the equation:
  v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} a_{ij} x^i y^j
• where the sixteen coefficients a_{ij} are determined from the sixteen equations in sixteen unknowns that can be written using the sixteen nearest neighbors of (x, y).
• Generally, bicubic interpolation does a better job of preserving
fine detail than its bilinear counterpart.
• Bicubic interpolation is the standard used in commercial image
editing programs,
• such as Adobe Photoshop and Corel Photo paint.
64. Image Interpolation
• (a) Image reduced to 72 dpi and zoomed back to its original size using nearest neighbor interpolation.
• (b) Image shrunk and zoomed
using bilinear interpolation.
• (c) Same as (b) but using
bicubic interpolation.
• (d)–(f) Same sequence, but
shrinking down to 150 dpi
instead of 72 dpi
65. Image Interpolation
•Compare Figs. 2.24(e) and
(f), especially the latter, with
the original image in Fig.
2.20(a).
70. Interpolation Example MATLAB
clear
img = imread('eye.jpg');
nn = imresize(img,0.3,'nearest'); % shrink to 30% of the original size
bl = imresize(img,0.3,'bilinear');
bc = imresize(img,0.3,'bicubic');
%subplot(2,2,1),
figure,imshow(img), title('Original Image');
%subplot(2,2,2),
figure,imshow(nn), title('Nearest neighbor interpolation');
%subplot(2,2,3),
figure, imshow(bl), title('bilinear interpolation');
%subplot(2,2,4),
figure,imshow(bc), title('bicubic interpolation');
72. Some Basic Relationships between Pixels
• A pixel p at coordinates (x,y) has two horizontal and two vertical neighbors with coordinates
• (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)
• This set of pixels, called the 4-neighbors of p, is denoted N4(p).
• The four diagonal neighbors of p have coordinates
  (x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)
  and are denoted ND(p).
• The diagonal neighbors, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
• Def: The set of image locations of the neighbors of a point p is called the neighborhood of
p.
• The neighborhood is said to be closed if it contains p. Otherwise, the neighborhood is said to be
open.
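The neighbor sets can be sketched directly from the coordinate definitions (boundary clipping is ignored here for simplicity):

```python
def n4(p):
    # Two horizontal and two vertical neighbors.
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    # Four diagonal neighbors.
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    # 4-neighbors plus diagonal neighbors.
    return n4(p) | nd(p)

print(sorted(n4((2, 2))))
print(len(n8((2, 2))))   # 8
```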
74. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
•Let V be the set of intensity values used to define adjacency.
•In a binary image,
• V = {1} if we are referring to adjacency of pixels with value 1.
•In a grayscale image, the idea is the same, but set V
typically contains more elements. For example,
• if we are dealing with the adjacency of pixels whose values are in the range
0 to 255, set V could be any subset of these 256 values.
•We consider three types of adjacency:
75. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• We consider three types of adjacency:
• 1. 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
• 2. 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• 3. m-adjacency (also called mixed adjacency). Two pixels p and q with values from V are m-
adjacent if
(a) q is in N4(p), or
(b) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Mixed adjacency is a modification of 8-adjacency, and is introduced to eliminate the ambiguities
that may result from using 8-adjacency.
• For example, consider the pixel arrangement in Fig. 2.28(a) and let V = {1}.
• The three pixels at the top of Fig. 2.28(b) show multiple (ambiguous) 8-adjacency, as indicated by the
dashed lines.
• This ambiguity is removed by using m-adjacency, as in Fig. 2.28(c).
• In other words, the center and upper-right diagonal pixels are not m-adjacent because they do not satisfy condition (b).
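A sketch of the m-adjacency test; the coordinates below are a hypothetical arrangement in the spirit of Fig. 2.28(a), not the book's exact figure:

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, V_pixels):
    """True if p and q are m-adjacent; V_pixels is the set of coordinates whose values are in V."""
    if q in n4(p):
        return True
    # A diagonal neighbor is m-adjacent only if p and q share no 4-neighbor in V.
    return q in nd(p) and not (n4(p) & n4(q) & V_pixels)

# Hypothetical 1-valued pixel coordinates (row, col), with V = {1}.
V_pixels = {(0, 1), (0, 2), (1, 1), (2, 2)}
print(m_adjacent((1, 1), (0, 2), V_pixels))   # False: ambiguous path through (0, 1)
print(m_adjacent((1, 1), (2, 2), V_pixels))   # True: no shared 4-neighbor in V
```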
76. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• Connectivity between pixels
• It is an important concept in digital image processing.
• It is used for establishing boundaries of objects and components of regions in an image.
• Two pixels are said to be connected:
  • if they are adjacent in some sense (neighboring pixels; 4/8/m-adjacency), and
  • if their gray levels satisfy a specified criterion of similarity (e.g., equal intensity levels).
• There are three types of connectivity on the basis of adjacency:
  • a) 4-connectivity: two or more pixels are said to be 4-connected if they are 4-adjacent to each other.
  • b) 8-connectivity: two or more pixels are said to be 8-connected if they are 8-adjacent to each other.
  • c) m-connectivity: two or more pixels are said to be m-connected if they are m-adjacent to each other.
77. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• Let R represent a subset of pixels in an image.
• We call R a region of the image if R is a connected set.
• Two regions, Ri and Rj are said to be adjacent if their union
forms a connected set.
• Regions that are not adjacent are said to be disjoint.
• We consider 4- and 8-adjacency when referring to regions.
• For our definition to make sense, the type of adjacency used
must be specified.
• For example, the two regions of 1’s in Fig. 2.28(d) are adjacent
only if 8-adjacency is used.
80. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• The boundary (also called the border or contour) of a region R is the
set of pixels in R that are adjacent to pixels in the complement of R.
• Also, the border of a region is the set of pixels in the region that have at
least one background neighbor.
• Here again, we must specify the connectivity being used to define
adjacency.
• For example, the point circled in Fig. 2.28(e) is not a member of the border of
the 1-valued region if 4-connectivity is used between the region and its
background, because the only possible connection between that point and the
background is diagonal.
• As a rule, adjacency between points in a region and its background is
defined using 8-connectivity to handle situations such as this.
86. Array versus Matrix Operations
• An array operation involving one or more images is carried out on a pixel-by-
pixel basis.
• Matrix operations, by contrast, follow the rules of matrix theory.
• The following example shows array (element-wise) multiplication of two images; note that this is not matrix multiplication.
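A minimal NumPy illustration of the distinction (arbitrary 2 × 2 arrays):

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

elementwise = a * b   # array (pixel-by-pixel) product, as used on images
matrix = a @ b        # matrix product from linear algebra

print(elementwise.tolist())   # [[5, 12], [21, 32]]
print(matrix.tolist())        # [[19, 22], [43, 50]]
```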
88. Linear versus Nonlinear Operations
• A general operator H is linear if H[a1 f1(x, y) + a2 f2(x, y)] = a1 H[f1(x, y)] + a2 H[f2(x, y)]. This equation indicates that:
• the output of a linear operation due to the sum of two inputs is the same as performing the
operation on the inputs individually and then summing the results. (called additivity
property)
• In addition, the output of a linear operation due to a constant times an input is the same as the output of the operation due to the original input multiplied by that constant (called the homogeneity property).
• Example:
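As a sketch of these two properties taken together, the sum of all pixels passes the linearity test while the max does not (the arrays are small made-up examples):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 4], [4, 7]])
a1, a2 = 1, -1

# Sum of all pixels: additivity and homogeneity both hold.
print(np.sum(a1 * f1 + a2 * f2) == a1 * np.sum(f1) + a2 * np.sum(f2))   # True

# Max of all pixels: the same test fails, so max is nonlinear.
print(np.max(a1 * f1 + a2 * f2) == a1 * np.max(f1) + a2 * np.max(f2))   # False
```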
89. Linear versus Nonlinear Operations
• Linear operations are
exceptionally important
because they are based on a
large body of theoretical and
practical results that are
applicable to image
processing.
• Nonlinear systems are not
nearly as well understood, so
their scope of application is
more limited.
90. Arithmetic Operations
• Arithmetic operations are carried out between corresponding pixel pairs.
• The four arithmetic operations are denoted as
  s(x, y) = f(x, y) + g(x, y)
  d(x, y) = f(x, y) − g(x, y)
  p(x, y) = f(x, y) × g(x, y)
  v(x, y) = f(x, y) ÷ g(x, y)
• The operations are performed between corresponding pixel pairs in f and g for
  x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1,
• where, as usual, M and N are the row and column sizes of the images.
• s, d, p, and v are images of size M × N also.
• Note that image arithmetic in the manner just defined involves images of the same size.
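A small NumPy sketch of pairwise image arithmetic; widening the type first avoids uint8 overflow, which is a common pitfall (the arrays are made up):

```python
import numpy as np

f = np.array([[250, 10]], dtype=np.uint8)
g = np.array([[10, 20]], dtype=np.uint8)

s = f.astype(np.int16) + g   # sum image s(x, y) = f(x, y) + g(x, y)
d = f.astype(np.int16) - g   # difference image d(x, y) = f(x, y) - g(x, y)

# Without the widening cast, uint8 arithmetic would wrap: 250 + 10 -> 4.
print(s.tolist(), d.tolist())   # [[260, 30]] [[240, -10]]
```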
103. Image Arithmetic (Example Code)
•https://subscription.packtpub.com/book/data/978180020177
4/2/ch02lvl1sec10/image-arithmetic
•We know that images are nothing but matrices. This raises
an important and interesting question. If we can carry out
arithmetic operations on matrices, can we carry them out on
images as well? The answer is a bit tricky. We can carry out
the following operations on images:
• Adding and subtracting two images
• Adding and subtracting a constant value to/from image
• Multiplying a constant value by an image
105. Set and Logical Operations
108. Homework
• Using two images, demonstrate the following set operations:
  • Union
  • Intersection
  • Complement
  • Difference
111. Set and Logical Operations (OpenCV example)
import cv2 as cv

# The boxes are black objects on a white background, so bitwise AND of the
# pixel values yields the union of the black regions, and bitwise OR yields
# their intersection.
img1 = cv.imread('Box_A.png', 0)
img2 = cv.imread('Box_B.png', 0)
img_bwa = cv.bitwise_and(img1, img2)   # union of the black regions (A union B)
img_bwo = cv.bitwise_or(img1, img2)    # intersection of the black regions
img_bwx = cv.bitwise_xor(img1, img2)   # symmetric difference
img_Comp = cv.bitwise_not(img_bwo)     # complement of the intersection
cv.imshow("A: Original Image A", img1)
cv.imshow("B: Original Image B", img2)
cv.imshow("Bitwise AND: Union of A and B", img_bwa)
cv.imshow("Bitwise OR: Intersection of A and B", img_bwo)
cv.imshow("Bitwise XOR: A XOR B", img_bwx)
cv.imshow("Complement of Image not(A or B)", img_Comp)
cv.waitKey(0)
cv.destroyAllWindows()
112. Set and Logical Operations (OpenCV example, continued)
import cv2 as cv

img1 = cv.imread('black_top_right_triangle.png', 0)
img2 = cv.imread('black_bottom_right_triangle.png', 0)
img_bwa = cv.bitwise_and(img1, img2)   # union of the black regions
img_bwo = cv.bitwise_or(img1, img2)    # intersection of the black regions
img_bwx = cv.bitwise_xor(img1, img2)   # symmetric difference
cv.imshow("Black_top_right_triangle", img1)
cv.imshow("Black_bottom_right_triangle", img2)
cv.imshow("Bitwise AND of Image 1 and 2", img_bwa)
cv.imshow("Bitwise OR of Image 1 and 2", img_bwo)
cv.imshow("Bitwise XOR of Image 1 and 2", img_bwx)
cv.waitKey(0)
cv.destroyAllWindows()
113. Convert to B/W, apply AND operator code
Digital Image Processing GMM Page 113
%%
clear all
clc
ImgOrg= imread('box.png');
ImgGray= rgb2gray(ImgOrg);
ImgBW= imbinarize(ImgGray); %convert to binary(BW) image
figure;
subplot(3,3,1),imshow(ImgOrg), title('Original Image')
subplot(3,3,2),imshow(ImgGray), title('Grayscale Image')
subplot(3,3,3),imshow(ImgBW), title('Black and White image')
%% NOT operation
ImgNotBW = not(ImgBW);
subplot(3,3,4),imshow(ImgBW), title('Black and White image')
subplot(3,3,5),imshow(ImgNotBW), title('NOT Image')
%% AND operation
ImgOrg2 = imread('box2.png');
ImgGray2= rgb2gray(ImgOrg2);
ImgBW2= imbinarize(ImgGray2);
ImgAND = and(ImgBW,ImgBW2);
subplot(3,3,7),imshow(ImgBW), title('Image 1')
subplot(3,3,8),imshow(ImgBW2), title('Image 2')
subplot(3,3,9),imshow(ImgAND), title('Image 1 AND Image 2')
114. Spatial Operations
•Spatial operations are performed directly on the pixels of a
given image. We classify spatial operations into three broad
categories:
•(1) single-pixel operations,
•(2) neighborhood operations, and
•(3) geometric spatial transformations.
117. Geometric spatial transformations and image registration
• We use geometric transformations to modify the spatial arrangement of pixels in an image.
•These transformations are called rubber-sheet
transformations because they may be viewed as analogous
to “printing” an image on a rubber sheet, then stretching or
shrinking the sheet according to a predefined set of rules.
•Geometric transformations of digital images consist of two
basic operations:
• 1. Spatial transformation of coordinates.
• 2. Intensity interpolation that assigns intensity values to the spatially
transformed pixels.
119. Geometric spatial transformations and image registration
•Our interest is in so-called affine transformations, which
include scaling, translation, rotation, and shearing.
•The key characteristic of an affine transformation in 2-D is
that it preserves points, straight lines, and planes.
• This transformation can scale, rotate, translate, or shear an image, depending on the values chosen for the elements of matrix A.
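A sketch of applying an affine matrix A to points in homogeneous coordinates (the helper name and the rotation example are mine, not from the text):

```python
import numpy as np

def affine_points(points, A):
    """Map (x, y) points with a 3x3 affine matrix A using homogeneous coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])   # append 1 to each point
    return (pts @ A.T)[:, :2]

theta = np.pi / 2   # 90-degree rotation as the example transformation
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

print(affine_points(np.array([[1.0, 0.0]]), A))   # rotates (1, 0) to approximately (0, 1)
```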
121. Geometric spatial transformations and image registration
•https://www.mathworks.com/help/images/ref/affine2d.html
•The preceding transformation moves the coordinates of
pixels in an image to new locations.
•To complete the process, we have to assign intensity
values to those locations.
•This task is accomplished using intensity interpolation.
122. IMAGE TRANSFORMS
• All the image processing approaches discussed thus far
operate directly on the pixels of an input image; that is, they
work directly in the spatial domain.
• In some cases, image processing tasks are best formulated
by transforming the input images, carrying the specified task
in a transform domain, and applying the inverse transform to
return to the spatial domain.
• You will encounter a number of different transforms as you proceed. A particularly important class of 2-D linear transforms, denoted T(u, v), can be expressed in the general form
  T(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) r(x, y, u, v)    (2-55)
123. IMAGE TRANSFORMS
• where f(x,y) is an input image, r(x,y,u,v) is called a forward
transformation kernel, and Eq. (2-55) is evaluated for u = 0,1,2 , …
M-1 and v = 0,1,2 , … N-1.
• As before, x and y are spatial variables, while M and N are the row
and column dimensions of f.
• Variables u and v are called the transform variables.
• T(u, v) is called the forward transform of f(x, y).
• Given T(u, v), we can recover f(x, y) using the inverse transform of T(u, v):
  f(x, y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} T(u, v) s(x, y, u, v)    (2-56)
• for x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1, where s(x, y, u, v) is called an inverse transformation kernel.
• Together, Eqs. (2-55) and (2-56) are called a transform pair.
124. IMAGE TRANSFORMS
• Figure 2.44 shows the basic steps for performing image
processing in the linear transform domain.
• First, the input image is transformed, the transform is then
modified by a predefined operation and, finally, the output
image is obtained by computing the inverse of the modified
transform.
• Thus, we see that the process goes from the spatial domain to
the transform domain, and then back to the spatial domain.
126. IMAGE TRANSFORMS
•The nature of a transform is determined by its kernel.
• A transform of particular importance in digital image processing is the Fourier transform, which has the following forward and inverse kernels, respectively:
  r(x, y, u, v) = e^{−j2π(ux/M + vy/N)}
  s(x, y, u, v) = (1/MN) e^{j2π(ux/M + vy/N)}
• where j = √−1, so these kernels are complex exponential functions.
127. IMAGE TRANSFORMS
• Substituting the preceding kernels into the general transform formulations in Eqs. (2-55) and (2-56) gives us the discrete Fourier transform pair:
  T(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)}
  f(x, y) = (1/MN) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} T(u, v) e^{j2π(ux/M + vy/N)}
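NumPy's fft2/ifft2 implement this same pair (forward sum without the 1/MN factor, inverse with it), so a round trip can be checked directly:

```python
import numpy as np

f = np.arange(16.0).reshape(4, 4)    # small test image f(x, y)

T = np.fft.fft2(f)                   # forward transform T(u, v)
f_back = np.fft.ifft2(T).real        # inverse transform recovers f(x, y)

print(np.allclose(f, f_back))        # True: the two equations form a transform pair
print(np.isclose(T[0, 0].real, f.sum()))   # True: the DC term is the sum of all pixels
```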
129. IMAGE TRANSFORMS
• It can be shown that the Fourier kernels are separable and
symmetric, and
• that separable and symmetric kernels allow 2-D transforms to
be computed using 1-D transforms.
Zero average value means that E[n(x, y)] = 0, i.e., the expected value of the noise is zero.
Uncorrelated means that E[n(x1, y1) n(x2, y2)] = 0 for x1 ≠ x2 and y1 ≠ y2.