This document discusses concepts related to digital image processing. It begins with an overview of the structure and function of the human eye, and how images are formed on the retina. It then describes light and the electromagnetic spectrum, focusing on the visible light spectrum that can be sensed by the eye. The document discusses different methods of image sensing and acquisition, including using single sensors, sensor strips, and sensor arrays. It also provides a simple model for image formation. The overall content covers fundamental concepts in visual perception, light, and image acquisition that provide context for digital image processing.
2. Book title: Digital Image Processing by Gonzalez
3. Contents
2.1 Elements of Visual Perception
2.1.1 Structure of the Human Eye
2.1.2 Image Formation in the Eye
2.1.3 Brightness Adaptation and Discrimination
2.2 Light and the Electromagnetic Spectrum
2.3 Image Sensing and Acquisition
2.3.1 Image Acquisition Using a Single Sensor
2.3.2 Image Acquisition Using Sensor Strips
2.3.3 Image Acquisition Using Sensor Arrays
2.3.4 A Simple Image Formation Model
2.4 Image Sampling and Quantization
2.4.1 Basic Concepts in Sampling and Quantization
2.4.2 Representing Digital Images
2.4.3 Spatial and Intensity Resolution
2.4.4 Image Interpolation
2.5 Some Basic Relationships between Pixels
2.5.1 Neighbors of a Pixel
2.5.2 Adjacency, Connectivity, Regions, and Boundaries
2.5.3 Distance Measures
2.6 An Introduction to the Mathematical Tools Used in Digital Image Processing
2.6.1 Array versus Matrix Operations
2.6.2 Linear versus Nonlinear Operations
2.6.3 Arithmetic Operations
2.6.4 Set and Logical Operations
2.6.5 Spatial Operations
2.6.6 Vector and Matrix Operations
2.6.7 Image Transforms
2.6.8 Probabilistic Methods
4. Elements of Visual Perception
• Digital image processing (DIP) is built on a foundation of mathematical and probabilistic formulations.
• However, human intuition and analysis play a central role in the choice of one technique over another, so the choice is often subjective and based on visual judgments.
• A basic understanding of human visual perception is therefore needed for DIP.
• The interest here is in the mechanics and parameters of how images are formed and perceived by humans.
• We study the physical limitations of human vision in order to work with digital images.
5. Structure of the Human Eye
• The eye is nearly a sphere, with an average diameter of approximately 20 mm.
• The cornea and sclera form the outer cover that encloses the choroid layer and the retina.
• The choroid contains blood vessels that supply nutrition to the eye.
• The choroid coat is heavily pigmented, which helps reduce the amount of extraneous light entering the eye.
• The choroid is divided into the ciliary body and the iris.
• The iris contracts and expands to control the amount of light entering the eye; its central opening, the pupil, varies in diameter from approximately 2 to 8 mm.
• The lens is suspended by fibers attached to the ciliary body.
• The lens contains 60 to 70% water, about 6% fat, and more protein than any other tissue in the eye.
6. Structure of the Human Eye
• The lens is colored by a slightly yellow pigmentation that increases with age.
• Excessive clouding of the lens, known as a cataract, leads to loss of clear vision.
• The lens absorbs approximately 8% of the visible light spectrum.
• Both infrared and ultraviolet light are absorbed appreciably by proteins within the lens structure and, in excessive amounts, can damage the eye.
7. Structure of the Human Eye
• The innermost membrane of the eye is the retina, which contains the light receptors.
• When the eye is properly focused, light from an object outside the eye is imaged on the retina, where it is sensed by the light receptors.
• There are two classes of light receptors: cones and rods.
8. Structure of the Human Eye
• The cones in each eye number between 6 and 7 million and are sensitive to color. Cone vision is called photopic, or bright-light, vision.
• The rods, numbering 75 to 150 million, are distributed over the retinal surface.
• Rods serve to give a general, overall picture of the field of view.
• Rods are sensitive to low levels of illumination; for example, objects that appear colored in sunlight appear colorless in moonlight because only the rods are stimulated. This is called scotopic, or dim-light, vision.
9. Image Formation in the Eye
• In the human eye, the distance between the lens and the retina is fixed, while the shape of the lens is variable.
• In an ordinary photographic camera the converse is true: the lens has a fixed focal length, and focusing is achieved by varying the distance between the lens and the imaging plane. A worked example of retinal image size follows.
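As a worked example (the standard one from the Gonzalez textbook; the 17 mm lens-to-retina distance and the 15 m tree viewed from 100 m are the book's illustrative numbers, not given on these slides), the height h of the retinal image follows from similar triangles:

\[
\frac{15}{100} = \frac{h}{17\,\text{mm}} \quad\Longrightarrow\quad h = \frac{15 \times 17}{100}\,\text{mm} \approx 2.55\,\text{mm}.
\]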
13. Light and the Electromagnetic Spectrum
• In 1666, Sir Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam of light is not white but consists instead of a continuous spectrum of colors ranging from violet at one end to red at the other.
• As Fig. 2.10 shows, the range of colors we perceive in visible light is a small portion of the electromagnetic spectrum.
• On one end of the spectrum are radio waves with wavelengths billions of times longer than those of visible light.
• On the other end are gamma rays with wavelengths millions of times smaller than those of visible light.
14. Light and the Electromagnetic Spectrum
15. Light and the Electromagnetic Spectrum
• Electromagnetic waves can be visualized as propagating sinusoidal waves with wavelength λ (Fig. 2.11), or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light.
• Each massless particle contains a certain amount (or bundle) of energy; each bundle of energy is called a photon.
16. Light and the Electromagnetic Spectrum
• We see from Eq. (2-2) that energy is proportional to frequency, so higher-frequency (shorter-wavelength) electromagnetic phenomena carry more energy per photon (the relations are given below).
• Thus, radio waves have photons with low energies; microwaves have more energy than radio waves; infrared still more; then visible, ultraviolet, X-rays, and finally gamma rays, the most energetic of all.
• High-energy electromagnetic radiation, especially in the X-ray and gamma-ray bands, is particularly harmful to living organisms.
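For reference, these are the relations the slide appeals to (standard physics; the equation numbers follow the textbook's convention, where λ is wavelength, ν is frequency, h ≈ 6.626 × 10⁻³⁴ J·s is Planck's constant, and c is the speed of light):

\[
\lambda = \frac{c}{\nu} \qquad (2\text{-}1)
\]
\[
E = h\nu = \frac{hc}{\lambda} \qquad (2\text{-}2)
\]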
17. Light and the Electromagnetic Spectrum
• Light is the type of electromagnetic radiation that can be sensed by the eye.
• The visible (color) spectrum is shown expanded in Fig. 2.10.
• The visible band of the electromagnetic spectrum spans the range from approximately 0.43 μm (violet) to about 0.79 μm (red).
• For convenience, the color spectrum is divided into six broad regions: violet, blue, green, yellow, orange, and red.
• No color ends abruptly; rather, each range blends smoothly into the next, as Fig. 2.10 shows.
18. Light and the Electromagnetic Spectrum
• The colors perceived in an object are determined by the nature of the light reflected by the object.
• A body that reflects light in a relatively balanced way across all visible wavelengths appears white to the observer.
• However, a body that favors reflectance in a limited range of the visible spectrum exhibits some shade of color.
• For example, green objects reflect light with wavelengths primarily in the 500 to 570 nm range, while absorbing most of the energy at other wavelengths. The sketch below illustrates this idea numerically.
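As a minimal numerical sketch of this idea (illustrative only: the box-shaped reflectance curve and the flat white illuminant are assumptions for the example, not data from the slides), the light reaching the observer is the illumination spectrum weighted by the object's reflectance:

```python
import numpy as np

# Wavelength axis over the visible band, in nanometres (~430-790 nm per the slides).
wavelengths = np.arange(430, 791)  # 1 nm steps

# Assumed flat (white) illuminant: equal energy at every visible wavelength.
illumination = np.ones_like(wavelengths, dtype=float)

# Assumed reflectance of a "green" object: high in the 500-570 nm band
# cited on the slide, low elsewhere (a crude box-like curve for illustration).
reflectance = np.where((wavelengths >= 500) & (wavelengths <= 570), 0.9, 0.05)

# Reflected spectrum = illumination weighted by reflectance.
reflected = illumination * reflectance

# The dominant reflected wavelengths fall in the green band.
print(f"peak reflected wavelength: {wavelengths[np.argmax(reflected)]} nm")
```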
19. Light and the Electromagnetic Spectrum
• Light that is void of color is called monochromatic (or achromatic) light.
• The only attribute of monochromatic light is its intensity.
• Because the intensity of monochromatic light is perceived to vary from black to grays and finally to white, the term gray level is commonly used to denote monochromatic intensity.
• The range of values of monochromatic light from black to white is usually called the gray scale, and monochromatic images are frequently referred to as grayscale images. A small sketch of this representation follows.
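As a small sketch of how this representation looks in practice (the 8-bit range 0-255 is the common convention, assumed here rather than stated on the slide), a grayscale image is simply a 2-D array of gray levels:

```python
import numpy as np

# A tiny 4x4 grayscale image: one 8-bit gray level per pixel,
# 0 = black, 255 = white, intermediate values = shades of gray.
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  0, 128, 128, 255],
], dtype=np.uint8)

print(image.shape)               # (4, 4): spatial dimensions
print(image.min(), image.max())  # gray levels span black to white
```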
20. Light and the Electromagnetic Spectrum
• Chromatic (color) light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.
• Three other quantities are used to describe a chromatic light source: radiance, luminance, and brightness.
• Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W).
• Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.
  • For example, light emitted from a source operating in the far-infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.
• Brightness is a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.
21. • In principle, if a sensor can be developed that is capable of detecting
energy radiated in a band of the electromagnetic spectrum, we can image
events of interest in that band.
• Note, however, that the wavelength of an electromagnetic wave required to
“see” an object must be of the same size as, or smaller than, the object.
• For example,
• a water molecule has a diameter on the order of 10-10 m.
• To study these molecules, we would need a source capable of emitting energy in the
far (high energy) ultraviolet band or soft (low-energy) X-ray bands.
• Although imaging is based predominantly on energy from electromagnetic
wave radiation, this is not the only method for generating images.
• But sound reflected from objects can be used to form ultrasonic images.
• Other sources of digital images are electron beams for electron microscopy, and
• software for generating synthetic images used in graphics and visualization.
22. Contents
2.1 Elements of Visual Perception
2.1.1 Structure of the Human Eye
2.1.2 Image Formation in the Eye
2.1.3 Brightness Adaptation and Discrimination
2.2 Light and the Electromagnetic Spectrum
2.3 Image Sensing and Acquisition
2.3.1 Image Acquisition Using a Single Sensor
2.3.2 Image Acquisition Using Sensor Strips
2.3.3 Image Acquisition Using Sensor Arrays
2.3.4 A Simple Image Formation Model
2.4 Image Sampling and Quantization
2.4.1 Basic Concepts in Sampling and Quantization
2.4.2 Representing Digital Images
2.4.3 Spatial and Intensity Resolution
2.4.4 Image Interpolation
2.5 Some Basic Relationships between Pixels
2.5.1 Neighbors of a Pixel
2.5.2 Adjacency, Connectivity, Regions, and Boundaries
2.5.3 Distance Measures
2.6 An Introduction to the Mathematical Tools Used in Digital Image Processing
2.6.1 Array versus Matrix Operations
2.6.2 Linear versus Nonlinear Operations
2.6.3 Arithmetic Operations
2.6.4 Set and Logical Operations
2.6.5 Spatial Operations
2.6.6 Vector and Matrix Operations
2.6.7 Image Transforms
2.6.8 Probabilistic Methods
23. Image Acquisition Using a Single Sensor
Image Acquisition Using Sensor Strips
Image Acquisition Using Sensor Arrays
A Simple Image Formation Model
24. Image Sensing and Acquisition
• The illumination may originate from a source of electromagnetic energy,
• such as a radar, infrared, or X-ray system.
• OR
• it could originate from less traditional sources, such as ultrasound or even a computer-
generated illumination pattern.
• Similarly, the scene elements could be familiar objects, but they can just as easily be molecules, buried rock formations, or a human brain.
• Depending on the nature of the source, illumination energy is reflected from, or
transmitted through, objects.
• An example in the first category is light reflected from a planar surface.
• An example in the second category is when X-rays pass through a patient’s body for
the purpose of generating a diagnostic X-ray image.
• In some applications, the reflected or transmitted energy is focused onto a photo
converter (e.g., a phosphor screen) that converts the energy into visible light.
• Electron microscopy and some applications of gamma imaging use this approach.
25. Image Sensing and Acquisition: IMAGE ACQUISITION
USING A SINGLE SENSING ELEMENT
• Figure 2.12 shows the three
principal sensor arrangements used
to transform incident energy into
digital images.
• The incoming energy is transformed
into a voltage by the combination of
input electrical power and sensor
material that is responsive to the
particular type of energy being
detected.
• The output voltage waveform is the
response of the sensor(s), and a
digital quantity is obtained from
each sensor by digitizing its
response.
26. Image Sensing and Acquisition: IMAGE ACQUISITION
USING A SINGLE SENSING ELEMENT
• Figure 2.12(a) shows the components of
a single sensing element.
• A familiar sensor of this type is the
photodiode, which is constructed of
silicon materials and whose output is a
voltage proportional to light intensity.
• Using a filter in front of a sensor
improves its selectivity.
• For example, an optical green-transmission
filter favors light in the green band of the
color spectrum.
• As a consequence, the sensor output would
be stronger for green light than for other
visible light components.
27. Image Sensing and Acquisition: IMAGE ACQUISITION USING SENSOR STRIPS
•A geometry used more frequently than single sensors is an in-line sensor strip, as in Fig. 2.12(b).
•The strip provides imaging elements in one direction.
•Motion perpendicular to the strip provides imaging in the other direction, as shown in Fig. 2.14(a).
28. Image Acquisition Using Sensor Arrays
• Figure 2.12(c) shows individual sensing
elements arranged in the form of a 2-D
array.
• Electromagnetic and ultrasonic sensing
devices frequently are arranged in this
manner.
• This is also the predominant
arrangement found in digital cameras.
• A typical sensor for these cameras is a
CCD (charge-coupled device) array,
which can be manufactured with a broad
range of sensing properties and can be
packaged in rugged arrays of
4000 × 4000 elements or more.
29. A SIMPLE IMAGE FORMATION MODEL
•As before, we denote images by two-dimensional functions of the form f(x,y).
•The value of f at spatial coordinates (x,y) is a scalar quantity
• whose physical meaning is determined by the source of the image,
and
• whose values are proportional to energy radiated by a physical
source (e.g., electromagnetic waves).
• As a consequence, f(x,y) must be nonnegative and finite; that is,
0 ≤ f(x, y) < ∞
30. A SIMPLE IMAGE FORMATION MODEL
• As a consequence, f(x,y) must be nonnegative and finite; that is, 0 ≤ f(x, y) < ∞.
Function f(x,y) is characterized by two components:
• (1) the amount of source illumination incident on the scene being viewed,
and
• (2) the amount of illumination reflected by the objects in the scene.
• These are called the illumination and reflectance components,
and are denoted by i(x, y) and r(x,y), respectively.
• The two functions combine as a product to form f(x,y):
f(x, y) = i(x, y) r(x, y)
31. A SIMPLE IMAGE FORMATION MODEL
• The two functions combine as a product to form f(x,y): f(x, y) = i(x, y) r(x, y)
• Reflectance is bounded by 0 (total absorption) and 1 (total
reflectance).
• The nature of i(x,y) is determined by the illumination source, and
• r(x,y) is determined by the characteristics of the imaged objects.
33. A SIMPLE IMAGE FORMATION MODEL
• The interval [Lmin, Lmax] is called the intensity (or gray) scale.
• Common practice is to shift this
interval numerically to the interval
[0,1] or [0,C]
• where l = 0 is considered black and
• l = 1 (or C) is considered white on
the scale.
• All intermediate values are
shades of gray varying from black
to white.
35. Image Sampling and Quantization
•An image may be continuous with respect to the x- and y-
coordinates, and also in amplitude.
•To convert it to digital form, we have to sample the function
in both coordinates and in amplitude.
•To create a digital image, we need to convert the continuous
sensed data into a digital format. This requires two
processes: sampling and quantization.
•Digitizing the coordinate values is called sampling.
•Digitizing the amplitude values is called quantization.
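• As an illustration, the Python/OpenCV sketch below requantizes an 8-bit grayscale image to a smaller number of intensity levels (a minimal sketch; the filename is a placeholder, and the number of levels is assumed to be a power of two dividing 256):

import cv2
import numpy as np

# Load an 8-bit grayscale image (placeholder filename).
img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)

def quantize(image, levels):
    # Map each 8-bit pixel to the base value of its quantization bin.
    step = 256 // levels
    return ((image // step) * step).astype(np.uint8)

img4 = quantize(img, 4)     # 4 gray levels: coarse, visible false contouring
img64 = quantize(img, 64)   # 64 gray levels: close to the original

cv2.imshow('4 levels', img4)
cv2.imshow('64 levels', img64)
cv2.waitKey(0)
cv2.destroyAllWindows()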
36. Image Sampling and Quantization
• The 1-D function in Fig. (b) is a
plot of amplitude (intensity level)
values of the continuous image
along the line segment AB in
Fig.(a).
• To sample this function, we take
equally spaced samples along
line AB, as shown in Fig. (c).
• The samples are shown as small dark squares superimposed on the function.
• The discrete spatial locations of the samples are indicated by tick marks at the bottom of the figure.
• The sample values themselves still span a continuous range of intensities; quantization maps them onto a discrete set of levels (here, eight gray levels ranging from black to white).
37. Image Sampling and Quantization
• Figure 2.17(a) shows a
continuous image projected
onto the plane of a 2-D sensor.
• Figure 2.17(b) shows the
image after sampling and
quantization.
• The quality of a digital image is determined, to a large degree, by the number of samples and the number of discrete intensity levels used in sampling and quantization.
• However, image content also
plays a role in the choice of
these parameters.
38. Representing Digital Images
•A digital image can be represented as a 2-D array containing M rows and N columns,
• where (x,y) are discrete coordinates.
• For notational clarity and
convenience, we use integer values
for these discrete coordinates:
• x = 0, 1, 2, ...., M - 1
• y = 0, 1, 2, ...., N – 1
• x and y are referred to as spatial
variables or spatial coordinates.
40. Representing Digital Images
• As Fig. 2.19 shows, we define the origin of an image at the top left corner.
• This is a convention based on the fact that many image displays sweep an image starting at the top left corner and moving to the right, one row at a time.
• Choosing the origin of f(x,y) at that point makes sense mathematically because digital images in reality are matrices.
• Sometimes we use x and y interchangeably in equations with the rows (r) and columns (c) of a matrix.
• For example, the center of an image of size 1023 × 1024 is at (511, 512).
• Some programming languages (e.g., MATLAB) start indexing at 1 instead of at 0.
• The center of an image in that case is found at (xc, yc) = (floor(M/2) + 1, floor(N/2) + 1).
41. Representing Digital Images
•An image exhibiting saturation
and noise.
•Saturation is the highest value
beyond which all intensity values
are clipped (note how the entire
saturated area has a high,
constant intensity level).
•Visible noise in this case appears
as a grainy texture pattern.
•The dark background is noisier,
but the noise is difficult to see.
42. Representing Digital Images
• This digitization process
requires that decisions be
made regarding the values
for M, N, and for the
number, L, of discrete
intensity levels.
• The number of intensity levels typically is an integer power of 2: L = 2^k, where k is an integer.
• We assume that the discrete levels are equally spaced and that they are integers in the range [0, L − 1].
43. Representing Digital Images
• Figure 2.21 shows the number of megabytes required to store square images for various values of N and k (as usual, one byte equals 8 bits and a megabyte equals 10^6 bytes).
• When an image can have 2^k possible intensity levels, it is common practice to refer to it as a "k-bit image" (e.g., a 256-level image is called an 8-bit image).
• Note that storage requirements for large 8-bit images (e.g., 10,000 × 10,000 pixels) are not insignificant.
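• The storage computation itself is simple arithmetic: an image needs b = M × N × k bits. A minimal Python sketch using the example figures above:

def image_size_bytes(M, N, k):
    # b = M*N*k bits; divide by 8 to get bytes.
    return M * N * k / 8

# The 10,000 x 10,000 8-bit image mentioned above:
print(image_size_bytes(10_000, 10_000, 8))   # 1e8 bytes = 100 MB (at 10**6 bytes/MB)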
44. Representing Digital Images
•Resolution:
• Image resolution describes the amount of detail an image holds.
• Higher-resolution images are sharper and more detailed.
• In a lower-resolution image, the fine differences in color disappear, edges become blurred, and so on.
• There are many kinds of resolution that can apply to film, television, etc., but the two
types we are concerned with here are
• screen resolution and
• print resolution.
•Screen resolution:
• Measured in pixels per inch (PPI). A pixel is a tiny square of color.
• A monitor uses tiny pixels to assemble text and images on screen.
• The optimal resolution for images on screen is 72 DPI.
• Increasing the DPI won’t make the image look any better, it’ll just make the file larger,
which will probably slow down the website when it loads or the file when it opens.
45. Representing Digital Images
•Print resolution:
• Measured in dots per inch (or “DPI”),
• DPI means the number of dots of ink per inch that a printer deposits on a piece of
paper.
• For 300 DPI, a printer will output 300 tiny dots of ink to fill every inch of the print.
• 300 DPI is the standard print resolution for high resolution output.
• This means that images should be a minimum of 300 dpi × 300 dpi, or 90,000 dots per square inch, to produce a high resolution print.
• If the document will stay on the screen (like a website), you just need to worry about
screen resolution, so your images should be 72 PPI.
• An important note: Sometimes the terms DPI (print) and PPI (screen) are used
interchangeably. So, don’t be confused if someone refers to a 300 DPI image that is
on screen, because pixels per inch (PPI) translate equally to dots per inch (DPI).
• If you’re going to print the document, you need to make sure the images are 300
DPI at 100% of the final output size. This sounds more complicated than it really is.
• The bigger we try to print the 300 pixel × 300 pixel image, the more pixellated it becomes: the eye can start to see the individual pixels, and edges become jagged.
46. Representing Digital Images
•How can we figure out the DPI of an image?
•If you want to print an image that is 1024 × 768 (listed as
Width=1024px, Height=768px on a PC), you need to divide
each value by 300 to see how many inches you can print at
300 dpi.
•1024 ÷ 300 = 3.4133″ (width)
•768 ÷ 300 = 2.56″ (height)
•So, you could print this 1024px × 768px image at 300 DPI
at a size of 3.4133″ × 2.56″ –
• any bigger than this, and you risk the image becoming pixellated.
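• The same arithmetic is easy to script; a minimal Python sketch:

def print_size_inches(width_px, height_px, dpi=300):
    # Largest print size (in inches) at the given resolution.
    return width_px / dpi, height_px / dpi

w, h = print_size_inches(1024, 768)
print(round(w, 4), round(h, 2))   # 3.4133 x 2.56 inches at 300 DPI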
47. Spatial and Intensity Resolution
•Spatial resolution is commonly stated in dots (pixels) per unit distance:
• dots per inch (dpi) for printing or scanning, and
• pixels per inch (ppi) for screens.
•In the U.S., to give you an idea of quality,
• newspapers are printed with a resolution of 75 dpi,
• magazines at 133 dpi,
• glossy brochures at 175 dpi, and
• the book page at which you are presently looking is printed at
2400 dpi.
48. Spatial and Intensity Resolution
• Image size depends on two factors:
• the number of samples, or spatial locations (number of pixels), and
• the number of intensity levels used in quantization (intensity resolution).
• The number of intensity levels usually is an integer power of two, 2^k, with k = 8 or 16 bits being common.
• Thus the intensity levels will be 2^8 = 256 or 2^16 = 65,536.
52. Python with OpenCV
•OpenCV is a popular open source library for processing
images.
•It has API bindings in Python, C++, Java, and MATLAB.
•It provides thousands of functions and implements many advanced image processing algorithms.
•If you’re using Python, a common alternative to OpenCV is
PIL (Python Imaging Library, or its successor, Pillow).
•Compared to PIL, OpenCV has a richer set of features and
is often faster because it is implemented in C++.
54. Read an image
import cv2

# Load a color image (OpenCV stores channels in BGR order).
img = cv2.imread('messi2018.png', cv2.IMREAD_COLOR)
cv2.imshow('Messi 2018', img)

# shape is (rows, columns, channels) for a color image.
R = img.shape[0]
C = img.shape[1]
channels = img.shape[2]
print("Image rows:", R)
print("Image cols:", C)
print("Image channels:", channels)

cv2.waitKey(0)            # wait for a key press
cv2.destroyAllWindows()
56. Convert image 8 to 16 bit (MATLAB)
%% Convert an 8-bit image to 16-bit
pic1 = imread('chestxray.jpg');
imwrite(pic1,'chestxray8.png','BitDepth', 8);
%figure, imshow(pic1);
%title('Chest xray uint8');
imwrite(pic1,'chestxray16.png','BitDepth', 16);
img8 = imread('chestxray8.png');
img16 = imread('chestxray16.png');
% Show the maximum pixel intensity of each version
max(img8,[],'all')
max(img16,[],'all')
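A Python/NumPy version of the same idea is sketched below (hedged: the filename is a placeholder, and multiplying by 257 is one common convention for mapping 0–255 onto the full 16-bit range, since 255 × 257 = 65,535):

import cv2
import numpy as np

img8 = cv2.imread('chestxray.jpg', cv2.IMREAD_GRAYSCALE)   # 8-bit image

# Scale 0..255 up to 0..65535 and store as 16-bit.
img16 = img8.astype(np.uint16) * 257

cv2.imwrite('chestxray16.png', img16)    # PNG supports 16-bit depth
print(img8.max(), img16.max())           # compare maximum pixel intensities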
57. Image Interpolation
•Definition: The process of using known data to estimate
values at unknown locations.
•A basic tool used extensively in tasks such as
• zooming,
• shrinking,
• rotating, and
• geometric corrections.
•Here we apply it to image resizing (shrinking and
zooming),
•which are basically image resampling methods.
58. Image Interpolation
•Nearest neighbor (NN) interpolation
• The simplest approach to interpolation. This method simply identifies the nearest neighboring pixel and assigns that pixel's intensity value, rather than calculating an average by some weighting criterion or generating an intermediate value based on more complicated rules. (A minimal code sketch is given below.)
• It has the tendency to produce undesirable artifacts, such as severe distortion of straight
edges.
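• A minimal NumPy sketch of nearest-neighbor resizing (in practice one would simply call a library routine such as cv2.resize with interpolation=cv2.INTER_NEAREST; the truncation-based index mapping below is one common convention):

import numpy as np

def resize_nearest(image, new_h, new_w):
    # For each output pixel, pick the source pixel whose index the
    # scaled coordinate truncates to -- no averaging, no weighting.
    h, w = image.shape[:2]
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return image[rows[:, None], cols]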
59. Image Interpolation
• Bilinear interpolation: in which we use the four nearest
neighbors to estimate the intensity at a given location.
• Let (x, y) denote the coordinates of the location to which we want to assign an intensity value (think of it as a point of the grid described previously), and
• let v(x, y) denote that intensity value. For bilinear interpolation, the assigned value is obtained using the equation
• v(x, y) = ax + by + cxy + d
• where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x, y) (see the code sketch at the end of this slide).
• Bilinear interpolation gives much better results than nearest neighbor
interpolation, with a modest increase in computational burden.
• Detailed solved example:
• https://www.omnicalculator.com/math/bilinear-interpolation
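• A hedged sketch of the equivalent weighted-average form of bilinear interpolation at a single fractional location (grayscale image; assumes (x, y) lies inside the image so all four neighbors exist):

import numpy as np

def bilinear(image, x, y):
    # Four nearest neighbors of the fractional point (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    # Weighted average; the weights play the role of the coefficients
    # a, b, c, d in v(x, y) = ax + by + cxy + d.
    return ((1 - dx) * (1 - dy) * image[x0, y0] +
            dx * (1 - dy) * image[x1, y0] +
            (1 - dx) * dy * image[x0, y1] +
            dx * dy * image[x1, y1])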
61. Image Interpolation
• Bicubic interpolation: involves the sixteen nearest neighbors of a point. The intensity value assigned to point (x, y) is obtained using the equation
v(x, y) = Σ_{i=0..3} Σ_{j=0..3} a_ij x^i y^j
• Generally, bicubic interpolation does a better job of preserving
fine detail than its bilinear counterpart.
• Bicubic interpolation is the standard used in commercial image
editing programs,
• such as Adobe Photoshop and Corel PHOTO-PAINT.
63. Image Interpolation
• (a) Image reduced to 72 dpi and zoomed back to its original size using nearest neighbor interpolation.
• (b) Image shrunk and zoomed
using bilinear interpolation.
• (c) Same as (b) but using
bicubic interpolation.
• (d)–(f) Same sequence, but
shrinking down to 150 dpi
instead of 72 dpi
64. Image Interpolation
•Compare Figs. 2.24(e) and
(f), especially the latter, with
the original image in Fig.
2.20(a).
69. Interpolation Example MATLAB
clear
img = imread('eye.jpg');
nn = imresize(img,0.3,'nearest'); % shrink to 30% of the original size
bl = imresize(img,0.3,'bilinear');
bc = imresize(img,0.3,'bicubic');
%subplot(2,2,1),
figure,imshow(img), title('Original Image');
%subplot(2,2,2),
figure,imshow(nn), title('Nearest neighbor interpolation');
%subplot(2,2,3),
figure, imshow(bl), title('bilinear interpolation');
%subplot(2,2,4),
figure,imshow(bc), title('bicubic interpolation');
71. Some Basic Relationships between Pixels
• A pixel p at coordinates (x,y) has two horizontal and two vertical neighbors with coordinates
• (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)
• This set of pixels, called the 4-neighbors of p, is denoted N4(p).
• The four diagonal neighbors of p have coordinates
• (x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)
• and are denoted ND(p).
• These diagonal neighbors, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
• Def: The set of image locations of the neighbors of a point p is called the neighborhood of
p.
• The neighborhood is said to be closed if it contains p. Otherwise, the neighborhood is said to be
open.
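• These definitions translate directly into code; a minimal Python sketch (using the (x, y) = (row, column) convention, with no bounds checking for border pixels):

def N4(x, y):
    # 4-neighbors: two horizontal and two vertical neighbors.
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(x, y):
    # The four diagonal neighbors.
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def N8(x, y):
    # 8-neighbors: the union of N4 and ND.
    return N4(x, y) | ND(x, y)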
73. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
•Let V be the set of intensity values used to define adjacency.
•In a binary image,
• V = {1} if we are referring to adjacency of pixels with value 1.
•In a grayscale image, the idea is the same, but set V
typically contains more elements. For example,
• if we are dealing with the adjacency of pixels whose values are in the range
0 to 255, set V could be any subset of these 256 values.
•We consider three types of adjacency:
74. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• We consider three types of adjacency:
• 1. 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
• 2. 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
• 3. m-adjacency (also called mixed adjacency). Two pixels p and q with values from V are m-
adjacent if
(a) q is in N4(p), or
(b) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Mixed adjacency is a modification of 8-adjacency, and is introduced to eliminate the ambiguities
that may result from using 8-adjacency.
• For example, consider the pixel arrangement in Fig. 2.28(a) and let V = {1}.
• The three pixels at the top of Fig. 2.28(b) show multiple (ambiguous) 8-adjacency, as indicated by the
dashed lines.
• This ambiguity is removed by using m-adjacency, as in Fig. 2.28(c).
• In other words, the center and upper-right diagonal pixels are not m-adjacent because they do not satisfy
condition(b).
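• A sketch of an m-adjacency test for a binary image, following conditions (a) and (b) above (a minimal illustration: img is a 2-D NumPy array, and V defaults to {1}):

import numpy as np

def m_adjacent(img, p, q, V=frozenset({1})):
    def n4(x, y):
        return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    def nd(x, y):
        return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}
    def in_v(x, y):
        h, w = img.shape
        return 0 <= x < h and 0 <= y < w and img[x, y] in V

    if not (in_v(*p) and in_v(*q)):
        return False
    if q in n4(*p):                      # condition (a): q is in N4(p)
        return True
    if q in nd(*p):                      # condition (b): q is in ND(p) and
        common = n4(*p) & n4(*q)         # N4(p) ∩ N4(q) has no pixels in V
        return not any(in_v(x, y) for (x, y) in common)
    return False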
75. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• Connectivity between pixels
• It is an important concept in digital image processing.
• It is used for establishing boundaries of objects and components of regions in an image.
• Two pixels are said to be connected:
• if they are adjacent in some sense (e.g., 4-, 8-, or m-adjacency), and
• if their gray levels satisfy a specified criterion of similarity (e.g., equal intensity levels).
• There are three types of connectivity on the basis of adjacency:
• a) 4-connectivity: Two or more pixels are said to be 4-connected if they are 4-adjacent with each other.
• b) 8-connectivity: Two or more pixels are said to be 8-connected if they are 8-adjacent with each other.
• c) m-connectivity: Two or more pixels are said to be m-connected if they are m-adjacent with each other.
76. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• Let R represent a subset of pixels in an image.
• We call R a region of the image if R is a connected set.
• Two regions, Ri and Rj, are said to be adjacent if their union forms a connected set.
• Regions that are not adjacent are said to be disjoint.
• We consider 4- and 8-adjacency when referring to regions.
• For our definition to make sense, the type of adjacency used
must be specified.
• For example, the two regions of 1’s in Fig. 2.28(d) are adjacent
only if 8-adjacency is used.
79. Some Basic Relationships between Pixels:
ADJACENCY, CONNECTIVITY, REGIONS, AND BOUNDARIES
• The boundary (also called the border or contour) of a region R is the
set of pixels in R that are adjacent to pixels in the complement of R.
• Also, the border of a region is the set of pixels in the region that have at
least one background neighbor.
• Here again, we must specify the connectivity being used to define
adjacency.
• For example, the point circled in Fig. 2.28(e) is not a member of the border of
the 1-valued region if 4-connectivity is used between the region and its
background, because the only possible connection between that point and the
background is diagonal.
• As a rule, adjacency between points in a region and its background is
defined using 8-connectivity to handle situations such as this.
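• This definition suggests a direct implementation; a sketch using the standard morphological identity that the inner border is the region minus its erosion (scipy.ndimage is assumed; the structuring element encodes the connectivity used between region and background):

import numpy as np
from scipy.ndimage import binary_erosion

def inner_boundary(region, use_8_connectivity=True):
    # region: 2-D boolean array. A region pixel survives erosion only if
    # all of its neighbors (per the structuring element) are also region
    # pixels, so the difference is exactly the set of border pixels.
    if use_8_connectivity:
        structure = np.ones((3, 3), dtype=bool)
    else:
        structure = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]], dtype=bool)
    return region & ~binary_erosion(region, structure=structure)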
85. Array versus Matrix Operations
• Array versus Matrix Operations:
• An array operation involving one or more images is carried out on a pixel-by-
pixel basis.
• However, some operations are carried out using matrix theory.
• The following example shows array (element-wise) multiplication of images, which is not the same as matrix multiplication; see the sketch below.
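• In NumPy the distinction is explicit: * is the array (element-wise) product and @ is the matrix product. A minimal sketch:

import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

print(a * b)   # array product (pixel-by-pixel):  [[ 5 12] [21 32]]
print(a @ b)   # matrix product (linear algebra): [[19 22] [43 50]]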
87. Linear versus Nonlinear Operations
• An operator H is linear if, for any two images f1 and f2 and any two scalars a and b,
H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]
• This equation indicates that:
• the output of a linear operation due to the sum of two inputs is the same as performing the
operation on the inputs individually and then summing the results. (called additivity
property)
• In addition, the output of a linear operation to a constant times an input is the same as the
output of the operation due to the original input multiplied by that constant. (called
homogeneity property)
• Example: the sum operator is linear, whereas the max operator is not, as the sketch below illustrates.
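• A minimal NumPy check of both properties on two hypothetical 2 × 2 "images" (values chosen only for illustration):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

# Sum operator: satisfies additivity and homogeneity -> linear.
print(np.sum(a * f1 + b * f2), a * np.sum(f1) + b * np.sum(f2))   # -15 -15

# Max operator: the two sides differ -> nonlinear.
print(np.max(a * f1 + b * f2), a * np.max(f1) + b * np.max(f2))   # -2 -4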
88. Linear versus Nonlinear Operations
• Linear operations are
exceptionally important
because they are based on a
large body of theoretical and
practical results that are
applicable to image
processing.
• Nonlinear systems are not
nearly as well understood,
so their scope of application
is more limited.
89. Arithmetic Operations
• Arithmetic operations are carried out between corresponding pixel pairs.
• The four arithmetic operations are denoted as
s(x, y) = f(x, y) + g(x, y)
d(x, y) = f(x, y) − g(x, y)
p(x, y) = f(x, y) × g(x, y)
v(x, y) = f(x, y) ÷ g(x, y)
• It is understood that the operations are performed between corresponding pixel pairs in f and g for
• x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1,
• where, as usual, M and N are the row and column sizes of the images.
• Clearly, s, d, p, and v are images of size M × N also.
• Note that image arithmetic in the manner just defined involves
images of the same size.
90. Add noise
clc
close all
% Read the test Image
img = imread('Flower.jpg');
mygrayimg = rgb2gray(img);
mygrayimg = imresize(mygrayimg,[256 256],'nearest');
subplot(2,3,1); imshow(mygrayimg);
title('Original Image');
% Add Salt and pepper noise with noise density 0.02
salt = imnoise(mygrayimg,'salt & pepper',0.02);
subplot(2,3,2); imshow(salt);
title('Salt & Pepper Image');
% Add Gaussian noise with mean 0 and variance 0.01
gau = imnoise(mygrayimg, 'gaussian', 0, 0.01);
subplot(2,3,3); imshow(gau);
title('Gaussian Image- mean 0 and variance 0.01');
% Generate Gaussian noise with mean 6 and variance 225
mynoise = 6 + sqrt(225) * randn(256,256);
subplot(2,3,4); imshow(mynoise,[]);
title('Generated gaussian noise');
% Original image plus the generated Gaussian noise
mynoiseimg = double(mygrayimg) + mynoise;
subplot(2,3,5); imshow(mynoiseimg,[]);
title('Gaussian image - mean 6 & Var 225');
% Original image plus sinusoidal noise
[x y] = meshgrid(1:256,1:256);
mysinusoidalnoise = 15 * sin(2*pi/14*x + 2*pi/14*y);
mynoiseimg1 = double(mygrayimg) + mysinusoidalnoise;
subplot(2,3,6); imshow(mynoiseimg1,[]);
title('Image plus sinusoidal noise');
91. Remove Noise
clc
close all
I = imread('eight.tif');
figure, imshow(I),title('original image')
J = imnoise(I,'salt & pepper',0.02);
figure, imshow(J),title('Noisy image')
Kmedian = medfilt2(J);
figure, imshow(Kmedian),
title('Noise removed image')
105. Homework
•Using two images, implement the following set operations:
• Union
• Intersection
• Complement
• Difference
108. Convert to B/W, apply AND operator code
%%
clear all
clc
ImgOrg= imread('box.png');
ImgGray= rgb2gray(ImgOrg);
ImgBW= imbinarize(ImgGray); %convert to binary(BW) image
figure;
subplot(3,3,1),imshow(ImgOrg), title('Original Image')
subplot(3,3,2),imshow(ImgGray), title('Grayscale Image')
subplot(3,3,3),imshow(ImgBW), title('Black and White image')
%% NOT operation
ImgNotBW = not(ImgBW);
subplot(3,3,4),imshow(ImgBW), title('Black and White image')
subplot(3,3,5),imshow(ImgNotBW), title('NOT Image')
%% AND operation
ImgOrg2 = imread('box2.png');
ImgGray2= rgb2gray(ImgOrg2);
ImgBW2= imbinarize(ImgGray2);
ImgAND = and(ImgBW,ImgBW2);
subplot(3,3,7),imshow(ImgBW), title('Image 1')
subplot(3,3,8),imshow(ImgBW2), title('Image 2')
subplot(3,3,9),imshow(ImgAND), title('Image 1 AND Image 2')
109. Spatial Operations
•Spatial operations are performed directly on the pixels of a
given image. We classify spatial operations into three broad
categories:
•(1) single-pixel operations,
•(2) neighborhood operations, and
•(3) geometric spatial transformations.
112. Geometric spatial transformations and image registration
• We use geometric transformations to modify the spatial arrangement of pixels in an image.
• These transformations are called rubber-sheet transformations
because they may be viewed as analogous to “printing” an
image on a rubber sheet, then stretching or shrinking the sheet
according to a predefined set of rules.
• Geometric transformations of digital images consist of two basic
operations:
• 1. Spatial transformation of coordinates.
• 2. Intensity interpolation that assigns intensity values to the
spatially transformed pixels.
114. Geometric spatial transformations and image registration
•Our interest is in so-called affine transformations, which
include scaling, translation, rotation, and shearing.
•The key characteristic of an affine transformation in 2-D is
that it preserves points, straight lines, and planes.
•This transformation can scale, rotate, translate, or shear an image, depending on the values chosen for the elements of matrix A.
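•A hedged OpenCV sketch of one such affine transformation (a rotation about the image center; the filename is a placeholder, and cv2.warpAffine also performs the intensity-interpolation step discussed next):

import cv2

img = cv2.imread('scene.png')            # placeholder filename
h, w = img.shape[:2]

# 2 x 3 affine matrix A: rotate 30 degrees about the center, scale 1.0.
A = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)

# Map pixel coordinates through A; intensities at the transformed
# locations are filled in by bilinear interpolation.
rotated = cv2.warpAffine(img, A, (w, h), flags=cv2.INTER_LINEAR)
cv2.imwrite('scene_rotated.png', rotated)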
116. Geometric spatial transformations and image registration
•https://www.mathworks.com/help/images/ref/affine2d.html
•The preceding transformation moves the coordinates of
pixels in an image to new locations.
•To complete the process, we have to assign intensity
values to those locations.
•This task is accomplished using intensity interpolation.
117. IMAGE TRANSFORMS
• All the image processing approaches discussed thus far
operate directly on the pixels of an input image; that is, they
work directly in the spatial domain.
• In some cases, image processing tasks are best formulated by transforming the input images, carrying out the specified task in a transform domain, and applying the inverse transform to return to the spatial domain.
• You will encounter a number of different transforms as you proceed. A particularly important class of 2-D linear transforms, denoted T(u,v), can be expressed in the general form
T(u, v) = Σ_{x=0..M−1} Σ_{y=0..N−1} f(x, y) r(x, y, u, v)      (2-55)
118. IMAGE TRANSFORMS
• where f(x,y) is an input image, r(x,y,u,v) is called a forward
transformation kernel, and Eq. (2-55) is evaluated for u = 0,1,2 , …
M-1 and v = 0,1,2 , … N-1.
• As before, x and y are spatial variables, while M and N are the row
and column dimensions of f.
• Variables u and v are called the transform variables.
• T(u,v) is called the forward transform of f (x,y).
• Given T(u,v), we can recover f(x,y) using the inverse transform of T(u,v):
f(x, y) = Σ_{u=0..M−1} Σ_{v=0..N−1} T(u, v) s(x, y, u, v)      (2-56)
• for x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1, where s(x,y,u,v) is called an inverse transformation kernel.
• Together, Eqs. (2-55) and (2-56) are called a transform pair.
119. IMAGE TRANSFORMS
• Figure 2.44 shows the basic steps for performing image
processing in the linear transform domain.
• First, the input image is transformed, the transform is then
modified by a predefined operation and, finally, the output
image is obtained by computing the inverse of the modified
transform.
• Thus, we see that the process goes from the spatial domain to
the transform domain, and then back to the spatial domain.
121. IMAGE TRANSFORMS
•The nature of a transform is determined by its kernel.
•A transform of particular importance in digital image processing is the Fourier transform, which has the following forward and inverse kernels, respectively:
r(x, y, u, v) = e^(−j2π(ux/M + vy/N))
s(x, y, u, v) = (1/MN) e^(j2π(ux/M + vy/N))
•where j = √−1, so these kernels are complex functions.
122. IMAGE TRANSFORMS
•Substituting the preceding kernels into the general transform formulations in Eqs. (2-55) and (2-56) gives us the discrete Fourier transform pair:
T(u, v) = Σ_{x=0..M−1} Σ_{y=0..N−1} f(x, y) e^(−j2π(ux/M + vy/N))
f(x, y) = (1/MN) Σ_{u=0..M−1} Σ_{v=0..N−1} T(u, v) e^(j2π(ux/M + vy/N))
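•In practice the pair is computed with a fast Fourier transform rather than by evaluating the double sums directly; a minimal NumPy round-trip sketch:

import numpy as np

f = np.random.rand(64, 64)            # a toy image f(x, y)

T = np.fft.fft2(f)                    # forward transform T(u, v), Eq. (2-55)
f_back = np.fft.ifft2(T)              # inverse transform, Eq. (2-56)

# The recovered image matches the original up to floating-point error.
print(np.allclose(f, f_back.real))    # True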
124. IMAGE TRANSFORMS
• It can be shown that the Fourier kernels are separable and
symmetric, and
• that separable and symmetric kernels allow 2-D transforms to
be computed using 1-D transforms.
Zero average value means that E[n(x, y)] = 0, i.e., the expected value of the noise is zero.
Uncorrelated means that E[n(x₁, y₁) n(x₂, y₂)] = 0, where x₁ ≠ x₂ and y₁ ≠ y₂.