GIS Unit 1
Remote sensing is the process of acquiring information about an object or area of the Earth's
surface without physically being there. It relies on detecting and interpreting electromagnetic
radiation that is reflected or emitted from the object or area of interest.
The remote sensing process involves the following components:
1. Energy Source: This is the initial source of energy that illuminates the target area. In
most cases, the energy source is the sun, but active systems such as radar supply their
own illumination.
2. Radiation and Atmosphere: The energy source interacts with the target area, and
some of the energy is reflected, absorbed, or transmitted. The atmosphere can affect
the radiation by scattering and absorbing it.
3. Sensor: A sensor mounted on a platform (aircraft, satellite, etc.) detects the
electromagnetic radiation reflected or emitted from the target area. Different sensors
are sensitive to different parts of the electromagnetic spectrum.
4. Data Recording and Transmission: The sensor collects and records the data, which
is then transmitted to a receiving station for processing.
5. Data Processing and Analysis: The raw data from the sensor is processed to remove
noise and errors, and then analyzed to extract meaningful information about the target
area. This may involve techniques like image classification, spectral analysis, and
change detection.
6. Information Extraction and Application: Finally, the extracted information is used
for various applications, such as land cover mapping, resource exploration,
environmental monitoring, and disaster management.
The Electromagnetic Spectrum:
The electromagnetic spectrum is the entire range of electromagnetic radiation, which is a
form of energy that travels through space in waves. It includes all types of electromagnetic
radiation, from the lowest frequencies (radio waves) to the highest frequencies (gamma rays).
Here are the different bands of the electromagnetic spectrum, listed from lowest frequency to
highest frequency:
Radio waves: Radio waves have the longest wavelengths and lowest frequencies of all
electromagnetic waves. They are used for a variety of purposes, including radio and
television broadcasting, cell phones, and Wi-Fi.
Microwaves: Microwaves have shorter wavelengths than radio waves. They are used
for radar, satellite communication, and microwave ovens.
Infrared radiation: Infrared radiation lies between microwaves and visible light. It is
emitted by warm objects and is widely used in thermal imaging and remote sensing.
Visible light: Visible light is the portion of the electromagnetic spectrum that we can
see with our eyes. It has a range of wavelengths that correspond to the different colors
we perceive, from red (longest wavelength) to violet (shortest wavelength).
Ultraviolet radiation: Ultraviolet radiation has shorter wavelengths and higher
frequencies than visible light. Most of it is absorbed by the ozone layer before
reaching Earth's surface.
X-rays: X-rays have even shorter wavelengths and higher frequencies than ultraviolet
radiation. They can pass through many materials, including human tissue. X-rays are
used for a variety of medical purposes, including imaging bones and teeth.
Gamma rays: Gamma rays have the shortest wavelengths and highest frequencies of
all electromagnetic waves. They are the most energetic form of electromagnetic
radiation and can be very dangerous. Gamma rays are produced by nuclear reactions
and can be used for a variety of medical purposes, including cancer treatment.
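The bands above are ordered from lowest to highest frequency; wavelength and frequency are inversely related through the speed of light (c = λf). A minimal Python sketch of this relationship, using illustrative round-number wavelengths:

```python
# Frequency from wavelength: f = c / wavelength.
C = 3.0e8  # speed of light in m/s

# Illustrative wavelengths for a few bands (order-of-magnitude values).
bands = {
    "radio (1 m)": 1.0,
    "visible red (700 nm)": 700e-9,
    "visible violet (400 nm)": 400e-9,
    "X-ray (1 nm)": 1e-9,
}

for name, wavelength_m in bands.items():
    frequency_hz = C / wavelength_m
    print(f"{name}: {frequency_hz:.2e} Hz")
```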
Common Remote Sensing Terms and Units:
Resolution: This refers to the level of detail captured in the data. There are four main
types of resolution:
o Spatial resolution: The size of an individual pixel in an image, often
measured in meters (m) or centimeters (cm).
o Spectral resolution: The number and width of wavelength bands the sensor
can detect. Measured in nanometers (nm).
o Radiometric resolution: The sensor's ability to distinguish between different
levels of energy within a band. Often expressed as bits (number of digital
levels; a short worked sketch follows this list of terms).
o Temporal resolution: How often an area is revisited by a sensor, typically
measured in days or weeks.
Band: A specific range of wavelengths within the electromagnetic spectrum that a
sensor can detect. Different bands provide information about different features on the
Earth's surface. (e.g., visible red band, near-infrared band)
Platform: The carrier that holds the sensor, such as a satellite, airplane, or drone.
Sensor: The instrument that detects and records the electromagnetic radiation
reflected or emitted from the target area. There are various sensor types, like optical
sensors for visible and infrared light and radar sensors for radio waves.
Radiance: The amount of energy emitted or reflected from a unit area of the target in
a specific wavelength. Measured in watts per square meter per steradian (W/m²·sr).
Reflectance: The ratio of electromagnetic radiation reflected by a surface to the
radiation that strikes it. Often expressed as a percentage (%); see the sketch after this
list of terms.
Emittance: The process by which an object emits electromagnetic radiation due to its
temperature.
Georeferencing: Assigning geographic coordinates (latitude and longitude) to a
remote sensing image so it can be accurately positioned on a map.
Classification: The process of categorizing pixels in an image based on their spectral
characteristics. This allows identification of features like land cover types (forest,
water, urban areas).
Change Detection: Analyzing differences between remote sensing images acquired
at different times to identify changes on the Earth's surface.
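To make two of these units concrete, here is a minimal Python sketch with illustrative numbers: radiometric resolution in bits gives 2^bits digital levels, and reflectance is the reflected-to-incident energy ratio expressed as a percentage.

```python
# Radiometric resolution: number of distinguishable digital levels.
for bits in (8, 11, 16):
    print(f"{bits}-bit sensor -> {2 ** bits} digital levels")

# Reflectance: ratio of reflected to incident radiation, as a percent.
incident = 100.0   # illustrative incident energy (arbitrary units)
reflected = 42.0   # illustrative reflected energy
reflectance_pct = 100.0 * reflected / incident
print(f"reflectance = {reflectance_pct:.1f} %")
```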
Remember, these are just some of the most common terms and units used in remote sensing.
As you delve deeper into the field, you'll encounter more specialized terminology specific to
your area of interest.
Energy Resources:
Solar Energy: The primary energy source for Earth. Sunlight, in the form of
electromagnetic radiation, reaches Earth's surface after interacting with the
atmosphere.
Geothermal Energy: Heat emanating from Earth's core, driven by residual heat from
the planet's formation and by radioactive decay.
Fossil Fuels: Coal, oil, and natural gas are remnants of ancient organic matter that
store energy from past sunlight captured by plants.
Wind Energy: Kinetic energy of moving air masses, ultimately driven by solar
energy heating different parts of the Earth's surface unevenly.
Hydropower: Potential energy of moving water, converted to kinetic and then
electrical energy through dams and turbines.
Biomass: Organic matter (plants, wood waste) used as a fuel source, capturing and
releasing solar energy stored through photosynthesis.
Energy Interactions with Earth's Surface:
Reflection: When sunlight hits the Earth's surface, some of it bounces back. Different
features reflect varying amounts and wavelengths of light depending on their
composition (e.g., water reflects more in the blue and green wavelengths). Remote
sensing relies on analyzing reflected radiation to identify features.
Absorption: Earth's surface features absorb some of the sun's energy, converting it to
heat. Darker surfaces tend to absorb more, while lighter surfaces reflect more. This
absorbed energy is eventually re-radiated as heat (infrared radiation) back towards the
atmosphere.
Transmission: Some radiation, particularly in specific wavelength ranges, can pass
through features. For example, near-infrared radiation can penetrate through
vegetation canopies to some degree.
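These three interactions partition the incident energy: at each wavelength, incident energy = reflected + absorbed + transmitted. A minimal sketch with illustrative fractions:

```python
# Energy balance at the surface: incident = reflected + absorbed + transmitted.
incident = 1.0                   # normalized incident energy
reflected, absorbed = 0.3, 0.6   # illustrative fractions for some surface
transmitted = incident - reflected - absorbed
print(f"transmitted fraction = {transmitted:.2f}")  # 0.10
```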
Atmospheric Effects:
The atmosphere plays a significant role in how energy interacts with Earth's surface:
Scattering: Air molecules and particles in the atmosphere scatter sunlight, causing
the sky to appear blue (shorter wavelengths are scattered more) and the sun to appear
redder at sunrise and sunset (longer wavelengths are scattered less); see the sketch
after this list.
Absorption: Certain atmospheric gases like water vapor and carbon dioxide absorb
specific wavelengths of radiation, impacting the amount and type of energy reaching
the surface (e.g., ozone absorbs most ultraviolet radiation). This absorption also
contributes to the greenhouse effect.
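Rayleigh scattering by air molecules is inversely proportional to the fourth power of wavelength, which is why short (blue) wavelengths scatter far more than long (red) ones. A minimal sketch of that ratio, using typical blue and red wavelengths:

```python
# Rayleigh scattering strength varies as 1 / wavelength**4.
blue, red = 450e-9, 700e-9  # typical blue and red wavelengths in meters

ratio = (red / blue) ** 4
print(f"blue light scatters ~{ratio:.1f}x more than red")  # ~5.9x
```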
Understanding these interactions lets remote sensing:
Map and monitor Earth's resources: Identify vegetation health, assess water
resources, and detect changes in land cover.
Study climate change: Monitor changes in atmospheric composition, track ice cover
variations, and assess the impact of climate change on ecosystems.
Predict weather patterns: Analyze atmospheric conditions and cloud formations to
improve weather forecasting.
These are just some of the ways energy, Earth's features, and the atmosphere interact. Remote
sensing provides a powerful tool to analyze these complex processes and gain valuable
insights into our planet's health and how it functions.
Satellite Orbits:
There are two main types of orbits used for Earth observation satellites:
Geostationary orbits: The satellite orbits at about 36,000 km above the equator,
matching Earth's rotation so that it remains fixed over the same point. This suits
continuous monitoring, such as weather observation.
Polar sun-synchronous orbits: The satellite passes close to the poles at a much
lower altitude, crossing the equator at the same local solar time on each pass. This
gives consistent illumination and near-global coverage, and is the orbit used by most
Earth-resources satellites, including the IRS series described below.
Remote Sensing Platforms and Sensors, IRS Satellites:
Satellites: Artificial satellites orbiting Earth are the most common platform for
remote sensing. They provide a global perspective and can revisit the same area at
regular intervals, enabling monitoring of changes over time.
The choice of sensor depends on the specific type of information being sought. Here are some
of the most common sensors used in remote sensing:
Optical Sensors: Detect sunlight reflected from the surface in the visible and
infrared wavelengths; the most widely used sensor type for land cover mapping.
Radar Sensors: Active microwave sensors that supply their own illumination,
allowing imaging through clouds and at night.
LiDAR Sensors: LiDAR (Light Detection and Ranging) sensors use lasers to measure
distances to objects on the Earth's surface. They can create highly accurate 3D models
of terrain and vegetation cover.
The Indian Remote Sensing (IRS) programme, operated by the Indian Space Research
Organisation (ISRO), is one of the largest civilian remote sensing constellations. Its key
features include:
Multiple satellites: The IRS program encompasses a series of satellites launched over
several decades, each with its own capabilities and areas of focus.
Polar Sun-synchronous Orbits: Most IRS satellites operate in polar sun-
synchronous orbits, revisiting the same area at regular intervals and providing near-
global coverage.
Varied Sensor payloads: IRS satellites carry a variety of sensors, including optical,
microwave, and radar sensors, catering to diverse applications.
Civilian Applications: The IRS program primarily focuses on providing remote
sensing data for civilian use in various sectors, including agriculture, forestry, water
resource management, disaster monitoring, and urban planning.
Resource Management: IRS data plays a crucial role in monitoring and managing
India's natural resources, ensuring their sustainable use.
Disaster Management: The timely availability of IRS data aids in disaster
preparedness, mitigation, and response efforts.
Infrastructure Development: Remote sensing data from IRS satellites supports
infrastructure development by providing information on land use, soil characteristics,
and potential construction sites.
Environmental Monitoring: IRS data is used to monitor environmental changes and
deforestation and to track pollution levels.
The IRS program is a testament to India's commitment to space technology and its
application for the benefit of the nation and the broader global community.
Visual interpretation relies on basic elements such as tone, shape, size, pattern, texture,
shadow, and association. Applying these elements to an image follows a general sequence of
steps:
1. Preparation:
o Gather background information about the study area, including existing maps,
field data, and knowledge of the region's typical features.
2. Image Examination:
o Systematically analyze the image based on the elements mentioned above.
3. Identification:
o Based on the analysis, assign identities to objects and features observed in the
image.
4. Evaluation:
o Assess the accuracy and reliability of the interpretations by comparing them
with collateral data.
Visual interpretation is often used in conjunction with digital image processing techniques for
a more comprehensive analysis of remote sensing data.
Here's how these concepts work in visual interpretation of remote sensing data:
Basic Elements:
These are the fundamental building blocks used to analyze features within an image. They act
like a code you can "crack" to understand what the image is showing you. A quick recap:
tone (brightness or color), shape, size, pattern, texture, shadow, and association (a feature's
relationship to its surroundings).
Converging Evidence:
This is where it gets like detective work! You use a combination of these basic elements to
build a strong case for what something is in the image. It's like gathering clues:
Imagine you see a dark, elongated shape with a shadow in the afternoon. (Shape +
Shadow)
Nearby, you see smaller, square shapes. (Pattern)
Based on this combined evidence (converging evidence), you might conclude it's a
parked car (with its shadow) near buildings.
The more elements you can consider together, the stronger your interpretation becomes. It's
like having multiple pieces of evidence pointing to the same conclusion.
By using both basic elements and converging evidence, you can effectively interpret what a
remote sensing image is telling you!
Terrain Evaluation:
Visual interpretation and spectral properties play a crucial role in evaluating terrain from
remote sensing data. Here's how they work together:
Spectral Properties:
Understanding how different features reflect light across the electromagnetic spectrum is key
to interpreting terrain:
Soil:
o Dry soil generally reflects more in the visible and near-infrared wavelengths.
o Soil moisture content can be estimated by analyzing its reflectance in the
shortwave infrared region.
o Mineral composition like iron oxides can also be identified through specific
spectral signatures.
Water:
o Deep, clear water absorbs most light, appearing dark in visible wavelengths.
o Shallow water or water with suspended sediments might have higher
reflectance in some visible wavelengths.
Vegetation:
o Healthy vegetation absorbs strongly in the visible blue and red regions for
photosynthesis, appearing green.
o It has high reflectance in the near-infrared region, which is used in
vegetation indices like NDVI (Normalized Difference Vegetation Index) to
assess plant health and density (a minimal computation is sketched after this
list).
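As a minimal sketch of the NDVI mentioned above, assuming the red and near-infrared bands are already loaded as NumPy arrays (the reflectance values here are illustrative):

```python
import numpy as np

# Illustrative reflectance values for the red and near-infrared bands.
red = np.array([[0.10, 0.30], [0.08, 0.25]])
nir = np.array([[0.60, 0.35], [0.55, 0.28]])

# NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense,
# healthy vegetation, values near 0 or below suggest bare soil or water.
ndvi = (nir - red) / (nir + red + 1e-10)  # small epsilon avoids division by zero
print(ndvi)
```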
Putting it Together:
By combining terrain analysis with spectral properties, we gain a richer understanding of the
landscape:
Dark, elongated features with low near-infrared reflectance could indicate a water
body.
Light-colored areas with low vegetation cover (visible through low NDVI) might
represent bare soil on a steep slope (identified through shadows).
Advanced Techniques:
Visual interpretation is a good starting point, but advanced techniques like digital image
processing and spectral unmixing allow for more detailed analysis.
By combining these tools, remote sensing helps us evaluate terrain, understand soil
properties, and monitor vegetation health over large areas.
Digital Image Processing (DIP):
Image Representation: Digital images are essentially grids of tiny squares called
pixels, each with a specific brightness or color value. DIP techniques operate on these
pixel values.
Image Enhancement: This involves improving the visual quality of an image for
better human interpretation or further processing. Common techniques include:
o Contrast Stretching: Making the differences between bright and dark areas
more pronounced (a minimal sketch follows this list).
o Noise Reduction: Removing unwanted speckles or grain from the image.
o Filtering: Applying mathematical operations to modify specific image
features like edges or textures.
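As a minimal sketch of contrast stretching with NumPy, using a simple linear min-max stretch (the input array is illustrative):

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linearly rescale pixel values to the full output range.
    Assumes the image is not constant (min != max)."""
    lo, hi = img.min(), img.max()
    scaled = (img.astype(float) - lo) / (hi - lo)  # normalize to 0..1
    return (scaled * (out_max - out_min) + out_min).astype(np.uint8)

# Illustrative low-contrast image: values squeezed into 100..140.
img = np.random.randint(100, 141, size=(4, 4))
print(contrast_stretch(img))
```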
DIP is a vast and ever-evolving field with a wide range of applications. Its power lies in its
ability to unlock the hidden information within digital images, providing valuable insights for
various scientific and technological advancements.
Classification Techniques:
Supervised Classification:
o Requires a training dataset where pixels are already labeled according to their
class (e.g., water, vegetation, urban area).
o Popular algorithms like Support Vector Machines (SVMs) or k-Nearest
Neighbors (kNN) learn from the training data to classify pixels in new images.
Unsupervised Classification:
o Doesn't require labeled training data.
o Algorithms like k-means clustering group pixels with similar characteristics
together, identifying natural groupings within the image data (a sketch of
both approaches follows this list).
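A minimal sketch of both approaches using scikit-learn (assumed installed; the two-band pixel data is synthetic and illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "pixels": rows are pixels, columns are two band values.
rng = np.random.default_rng(0)
water = rng.normal([0.05, 0.02], 0.01, size=(50, 2))  # dark in both bands
veg = rng.normal([0.08, 0.50], 0.02, size=(50, 2))    # bright in NIR
pixels = np.vstack([water, veg])

# Supervised: kNN learns from labeled training pixels.
labels = np.array([0] * 50 + [1] * 50)  # 0 = water, 1 = vegetation
knn = KNeighborsClassifier(n_neighbors=3).fit(pixels, labels)
print(knn.predict([[0.07, 0.45]]))  # -> [1] (vegetation)

# Unsupervised: k-means groups similar pixels without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
print(np.bincount(clusters))  # two natural groupings emerge
```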
Accuracy Estimation:
Evaluating how well a classification technique performs is vital. Here are some common
methods (a worked sketch follows this list):
Confusion Matrix:
o A table that compares the actual class labels of pixels with the labels predicted
by the classification algorithm.
o It reveals errors like pixels being classified as the wrong category
(commission errors) or actual classes being missed altogether (omission
errors).
Overall Accuracy:
o A simple metric calculated as the percentage of pixels correctly classified.
Precision and Recall:
o More detailed metrics that consider specific classes.
Precision: Focuses on how many of the pixels classified as a particular
class actually belong to that class (avoiding commission errors).
Recall: Measures how well the algorithm identifies all pixels belonging
to a particular class (avoiding omission errors).
F1 Score:
o A harmonic mean of precision and recall, providing a balanced view of
classification performance for a class.
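A minimal sketch of these metrics with scikit-learn (assumed installed; the label arrays are illustrative):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, f1_score)

# Illustrative ground-truth vs. predicted classes (0 = water, 1 = vegetation).
y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 0, 1]

print(confusion_matrix(y_true, y_pred))  # reveals commission/omission errors
print(accuracy_score(y_true, y_pred))    # overall accuracy
print(precision_score(y_true, y_pred))   # precision for class 1
print(recall_score(y_true, y_pred))      # recall for class 1
print(f1_score(y_true, y_pred))          # harmonic mean of precision and recall
```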
Advanced Techniques:
Deep Learning: Convolutional Neural Networks (CNNs) are powerful tools for
image classification, achieving high accuracy, especially when large training datasets
are available.
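As a minimal sketch of such a CNN for classifying small image patches, using the Keras API (TensorFlow assumed installed; the patch size and class count are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny CNN mapping 32x32 three-band patches to 4 land-cover classes.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),           # illustrative 3-band patches
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),     # 4 illustrative classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # training would follow with model.fit(...)
```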
The choice of classification technique depends on the specific problem and available data:
If labeled training data is readily available, supervised learning might be the best
option.
Unsupervised techniques are useful for exploring unlabeled data to discover inherent
groupings.
Conclusion:
Remote sensing links energy sources, the atmosphere, sensors, and platforms into a single
measurement chain, and digital image processing and classification turn those measurements
into usable information for resource management, environmental monitoring, and disaster
response.