Thursday, February 25, 2016

Advanced Remote Sensing: Lab 3 Radiometric and Atmospheric Correction

Goals and Background

This lab was designed to give us students experience correcting remotely sensed images, in this case satellite images, for atmospheric interference. There are two main objectives for this lab:
1) Develop our skills in performing absolute atmospheric correction on remotely sensed images with the use of multiple methods 
2) Conduct relative atmospheric correction on remotely sensed images

Methods

Part 1: Absolute atmospheric correction using empirical line calibration 

This first portion of the lab conducts atmospheric correction using the empirical line calibration (ELC) technique. The ELC method makes use of in situ data, a library of spectral values for many surfaces and materials on the earth's surface collected at the same time that the sensor captures the imagery. These reflectance values are matched to the reflectance values collected in the imagery through the following equation: CRk = DNk * Mk + Lk, where CRk is the corrected digital output pixel value for band k, DNk is the image band being corrected, Mk is a multiplicative term that affects the brightness values of the image, and Lk is an additive term. In this method the Mk value acts as the gain and the Lk value acts as the offset. The gain and offset are used to create regression equations relating the in situ data to the reflectance recorded by the sensor. There are three steps to completing this correction.
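To make the gain and offset idea concrete, here is a minimal Python sketch of how Mk and Lk could be fit for a single band from paired image and library samples. This is only an illustration; the lab itself does this inside ERDAS Imagine, and all sample values below are hypothetical.

```python
import numpy as np

# Hypothetical paired samples for one band: raw image values for the sampled
# targets and the matching in situ library reflectance values.
dn_samples = np.array([42.0, 87.0, 120.0, 63.0, 18.0])         # from the image
insitu_reflectance = np.array([0.09, 0.23, 0.33, 0.16, 0.03])  # from the library

# Least-squares fit of insitu = M * DN + L gives the gain (M) and offset (L).
gain, offset = np.polyfit(dn_samples, insitu_reflectance, 1)
print(f"gain (Mk) = {gain:.5f}, offset (Lk) = {offset:.5f}")

# Apply CRk = DNk * Mk + Lk to every pixel of the band.
band = np.random.randint(0, 256, size=(100, 100)).astype(float)  # stand-in band
corrected = band * gain + offset
```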

Section 1: Background and preparations for conducting Empirical Line Calibration 

The first step is to open the Spectral Analysis Work Station in ERDAS Imagine 2015. Once it is open, bring in the image you want to correct. In this lab we are working with an image of the Eau Claire area collected on August 3, 2011 by the Landsat 5 satellite. Once the image is loaded, make sure that the correct sensor is selected, in this case Landsat 5, then click on the Edit Atmospheric Adjustment tool and make sure that ELC is selected as the correction method. Figure 1 is what the window will look like when you have done this step.  
Figure 1 This is the Atmospheric Adjustment tool interface. 

Section 2: Collecting samples and identifying references to conduct ELC 

Once you have the window open, the next step is to collect spectral signatures of features throughout the image. Those signatures are then paired with in situ signatures from various spectral libraries. The first signature we collected from the Eau Claire image was a roadway. This is done by finding a prominent road in the image, like a highway, zooming in, and using the Create a Point tool to collect a signature from the middle of the road feature. It is important for this and all the signatures collected that we made sure we were only collecting the signature of one feature and that no vegetation or other features overlapped it. We want as pure and accurate signatures as we can get from the image. We changed the line color to grey and selected a road signature, which is then displayed in the sample chart. We then go into a spectral library and find the corresponding in situ signature, which in this case is called Asphaltic Concrete. The imagery sample is the top line and the library signature is the bottom line in the chart of Figure 2. We repeated this process to collect signatures of forested vegetation, an aluminum rooftop, agricultural land, and water. Figure 3 is what all of the sample signatures looked like compared to the in situ spectral library data.
Figure 2 The chart on the right-hand side is the spectral signature chart, where the image signature is compared to the library signature.
Figure 3 These are the signature comparison charts for each sample. Grey is concrete, green is forest, yellow is agriculture, cyan is rooftop and blue is water. 

Section 3: Executing atmospheric correction using Empirical Line Calibration 

Once we have those signatures collected, the next step is to run the ELC correction. In this part of the lab this is a fully automated process, but later in the lab we will manually create the regression models that underlie the ELC method. Figure 4, in the results portion of the lab, is the final corrected image using the ELC method. Once the new image was created we opened it and compared it to the original by collecting spectral signatures from the same objects in both images. Figure 5 shows the resulting spectral signature graphs comparing the original versus the corrected image signature values. We collected samples from healthy vegetation, roads, water bodies, and agricultural areas to compare the images.
Figure 5 The image on the left is the original image and the one on the right is the image corrected using the ELC method. The spectral signature boxes compare the same location on each image to view the effects of the ELC correction.

Part 2: Absolute atmospheric correction using enhanced image-based dark object subtraction

The second method we explored in this lab is correcting images based on dark object subtraction (DOS). This method makes use of many parameters to correct the image including sensor gain and offset, solar zenith angle, atmospheric scattering, solar irradiance, as well as absorption and path radiance. There are two steps to the correction in this method:
1) Convert the image collected by the satellite to an at-satellite spectral radiance image
2) Convert the at-satellite radiance image to true surface reflectance

Section 1: Conversion of image (DN) to at-satellite spectral radiance 

The first step is to use ERDAS Imagine 2015 to create six models, one for each band of the imagery (bands 1, 2, 3, 4, 5, and 7). All six of the models are created and run in the same model window and make use of the original uncorrected Eau Claire image, just as in Part 1. Once the image bands are loaded into the model, the next step is to fill in the function for each band. Figure 6 is the equation used in the function. The majority of the information needed for the equation is found in the metadata. Once the function equations are complete for each band, the output images need to be saved. Once output destinations are selected, the model (Figure 7) can be run.
Figure 6 This is the function equation for the model to correct each band.
Figure 7 This is the model for the conversion of the DN values to at-satellite spectral values.
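As an illustration of what the six band models compute, here is a hedged Python sketch of a standard DN-to-radiance rescaling for Landsat TM, looping over the six bands the way the model window holds six small models. The LMIN/LMAX numbers are placeholders standing in for the metadata values, and the exact equation used in the lab is the one shown in Figure 6.

```python
import numpy as np

# Placeholder (LMIN, LMAX) calibration constants per band; the real values
# come from the image metadata. The QCAL range is assumed to be 1-255.
band_constants = {1: (-1.52, 193.0), 2: (-2.84, 365.0), 3: (-1.17, 264.0),
                  4: (-1.51, 221.0), 5: (-0.37, 30.2), 7: (-0.15, 16.5)}
QCALMIN, QCALMAX = 1.0, 255.0

def dn_to_radiance(dn, lmin, lmax):
    """One band's function: gain * DN + offset, with the offset taken as LMIN."""
    gain = (lmax - lmin) / (QCALMAX - QCALMIN)
    return gain * dn + lmin

# Mirror of the six small models in one model window: loop over the bands.
dn_bands = {b: np.random.randint(1, 256, (100, 100)).astype(float)  # stand-ins
            for b in band_constants}
radiance_bands = {b: dn_to_radiance(dn_bands[b], *band_constants[b])
                  for b in band_constants}
```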

Section 2: Conversion of at-satellite radiance image to true surface reflectance

Step 2 follows a very similar procedure to step 1, but this time there is a new equation (Figure 8), and instead of the input bands coming from the original Eau Claire image they are the radiance image bands created in step 1. The new equation makes use of path radiance, which is collected by measuring the distance from the origin of the histogram to the actual beginning of the histogram for each band. The solar zenith angle is also used in the equation and is a constant value for all of the bands. The earth-sun distance, however, varies and needs to be looked up in a chart that lists distances for every day of the year. Once you have all the values to fill in the equation, it is time to create another model. Just like the first model there are six smaller models, one for each band; using the radiance bands as the input, the new equation for the functions, and a new output location, the model (Figure 9) can be run. The model also includes a layer stack tool, which takes the new output bands and stacks them together so that we can compare the new stacked image with the original image to see how well the correction worked. Figure 10, in the results section, is the newly corrected image using the DOS method. 
Figure 8 This is the new equation for the second part of the DOS correction method.
Figure 9 This is the final model including the layer stack using the DOS method.
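For reference, one common image-based DOS formulation of this second step is R = pi * d^2 * (L - Lp) / (ESUN * cos(theta_s)), where Lp is the path radiance, d is the earth-sun distance in astronomical units, ESUN is the band's solar exoatmospheric irradiance, and theta_s is the solar zenith angle. The exact equation used in the lab is the one in Figure 8; the sketch below uses that common variant with made-up parameter values.

```python
import numpy as np

# Placeholder parameters; in the lab these come from the metadata, the band
# histogram (path radiance), and an earth-sun distance table.
ESUN = 1957.0                      # solar irradiance for the band (made up here)
d = 1.0145                         # earth-sun distance (AU) for the image date
theta_s = np.radians(90.0 - 61.2)  # solar zenith = 90 deg - sun elevation
L_path = 2.3                       # path radiance read off the histogram origin

def radiance_to_reflectance(L):
    """Subtract path radiance, then normalize by irradiance and geometry."""
    return (np.pi * d ** 2 * (L - L_path)) / (ESUN * np.cos(theta_s))

radiance = np.full((100, 100), 45.0)  # stand-in radiance band from step 1
reflectance = radiance_to_reflectance(radiance)
```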

Part 3: Relative atmospheric correction using multidate image normalization

The final method we used in the lab to conduct atmospheric correction is called multidate image normalization. When in situ data is not available for images, which is often the case when working with historical imagery, this is frequently the method chosen to correct them. This method is based on having the same scene imaged at two different times. In our case we are using images of the Chicago area, one collected in 2000 and the other in 2009. 

Section 1: Collection of pseudo-invariant features from base image and subsequent image 

The first step is to open both images in ERDAS 2015 in two separate image viewers, then link and synchronize the two viewers so that they are set to the same extent and zoom and pan together. We zoomed in to O'Hare International Airport, where we used one of the rooftops for image comparison. To do this we open the spectral profile tool in the Multispectral drop-down, then unlink and unsynchronize the two viewers prior to collecting a profile point. We collect this point from the same location on each image, which displays as a spectral signature in each image's signature graph, as seen in Figure 11. We repeated this point collection procedure for a total of 15 points, each collected from the same location on both images. It is important to make sure the points are collected from the same location on each image, otherwise this method will not correct the images accurately. The points were distributed throughout the image: five in Lake Michigan, five from urban or built-up areas, and the remainder from inland lakes and rivers. Including the O'Hare point there are 15 signatures for each image. Figure 11 shows the spectral signature charts and the point locations for both images.
Figure 11 This is showing the 15 points from which spectral signatures were collected. The image on the left is from 2000 and the one on the right is the 2009 image that will be corrected.

After the signatures are collected, the next step is to conduct regression analysis. On the signature graph window we click the tabular data view. This gives us a chart of the pixel data for each of the points collected, for each of the six bands. We take the mean values from each band and enter them into an Excel spreadsheet (Figure 12). There are 15 rows, one for each location, and six columns, one for each band. We create two separate tables, one for each image. Once these tables are made, we take the band 1 columns from each table and create a scatter plot. We then add a trend line, from which we get the gain (the slope of the line) and the bias (the y-intercept). In the new charts (Figure 13), six in total, one for each pair of bands, we include the regression equation and R-squared value. These values are needed for the function, or equation, that we will use in the model for this correction method.
Figure 12 These are the two charts of mean pixel values for each sample location.
Figure 13 These are the 6 regression graphs from which we get the R-squared and regression values.
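The same regression could be scripted rather than done in Excel. Below is a small numpy sketch, with made-up numbers standing in for the tabular signature means, that fits the gain and bias for one band and applies them to normalize the 2009 band to the 2000 base image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-band mean values at the 15 pseudo-invariant points,
# standing in for the means read out of the tabular signature view.
means_2009 = rng.uniform(20, 180, size=15)                   # subject image
means_2000 = 0.85 * means_2009 + 6.0 + rng.normal(0, 2, 15)  # base image

# Regressing base-image means on subject-image means gives the gain (slope)
# and bias (intercept), exactly what the Excel trend line reports.
gain, bias = np.polyfit(means_2009, means_2000, 1)
r_squared = np.corrcoef(means_2009, means_2000)[0, 1] ** 2
print(f"gain = {gain:.3f}, bias = {bias:.3f}, R^2 = {r_squared:.3f}")

# Applying the regression to a full 2009 band normalizes it to the 2000 image.
band_2009 = rng.uniform(0, 255, size=(100, 100))
band_normalized = gain * band_2009 + bias
```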

Section 2: Development of atmospheric correction image normalization models

The last part of this method is creating another model, just as we did in Parts 1 and 2, with six smaller models, one for each band, inside one large model. The input bands in this model are the six bands from the 2009 Chicago image, as we are using the 2000 image as the base to correct the 2009 image. The equation in Figure 14 serves as the function equation for the model (Figure 15), and the new corrected output images are saved. Again, as in Part 2, these new output images are stacked to create the final corrected image. Figure 16, in the results section, is the final corrected image.
Figure 14 This is the equation for the function in the model when using the multidate image normalization method.
Figure 15 This is the final model for correcting the 2009 Chicago image via the multidate image normalization method.

Results

Part 1 

Figure 4 This is the atmospherically corrected image (right) compared to the original image (left). This correction was done using the ELC method.

Part 2

Figure 10 Coming Soon

Part 3


Figure 16 This is the Chicago 2009 corrected image (right) compared to the Chicago 2000 original image (left).


Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. Spectral signatures are from the respective spectral libraries consulted. 

Tuesday, February 16, 2016

Advanced Remote Sensing: Lab 2 Surface Temperature Extraction from Thermal Remote Sensing Data

Goals and Background

The main goal of this lab exercise is to equip us students with the skills of extracting land surface temperature information from the thermal bands of satellite images and to account for variations in land surface temperature over space. To accomplish this there are three main objectives for this lab:
1) Learn about spectral emittance collected by sensors and visually identify variations in relative land surface temperature
2) Build a model to quantitatively estimate land surface temperature from thermal bands 
3) Combine simple models to create larger, more complex models

Methods

We made heavy use of ERDAS Imagine 2015 in this lab to aid in creating and running models that extract and compensate for errors in the imagery caused by various forms of atmospheric interference. These models allow us to work with corrected images with the majority of atmospheric interference removed, increasing the accuracy and quality of the data.

Part 1: Visual identification of relative variations in land surface temperature 

The first portion of the lab deals with comparing low gain and high gain bands to look at slight variations in the radiometric qualities of the same study area. We compared spectral reflectance bands in the imagery to thermal bands. Spectral reflectance bands are just what they sound like: they record solar energy reflected off the surface and collected by the sensor. Thermal imagery works differently in that what the sensor is collecting is emittance. Objects absorb solar energy, or heat, during the day, and as the sun goes down and becomes less intense these objects emit that energy as they cool; this is what a thermal sensor collects. Reflected energy is much more intense and easier for the sensor to collect, and it can be collected almost any time there is enough light for the sensor to gather data. With thermal emittance there are times of the day that are much better for data collection, such as early evening or even nighttime. Thermal data collection does not require the sun to be out because it is collecting the energy given off by objects. 
We studied a thermal aerial image of the Eau Claire area to determine places of low, medium and high emittance. The rate of emittance is determined by an object's thermal inertia. Objects that heat up and cool down quickly have low thermal inertia, and the opposite is also true. Water bodies have a high thermal inertia because they take a long time to heat up but then store that heat for a long time. In our imagery, water bodies are therefore a cool feature giving off low amounts of heat energy. An example of medium emittance, or a moderately warm object, is vegetation. High emittance comes from concrete or asphalt, which heat up and cool down much more quickly than other surfaces because of their low thermal inertia.

Part 2: Calculation of land surface temperature from ETM+ image

Section 1: Conversion of Digital Numbers (DN) to at-satellite radiance

The first step to calculating land surface temperature in an ETM+ image is to convert the Digital Numbers to at-satellite radiance. To do this conversion a mathematical equation is used: L(lambda) = Grescale * DN + Brescale. Before you can fill out this equation you have to solve for Grescale, which has its own formula: Grescale = (LMAX - LMIN) / (QCALMAX - QCALMIN). Brescale is found by looking at the LMIN value. All of the values needed to fill in these equations are found in the metadata of the imagery. Open the metadata in WordPad or a similar program and it will look like Figure 1. Filling in these equations gives the user values which are then put into Model Maker in ERDAS Imagine 2015.  
Figure 1 This is the metadata table from which the values for calculating Grescale, Brescale and other equations come.
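As a sketch of what the model's function computes, the two equations above can be written out in a few lines of Python. The LMAX/LMIN values below are placeholders only; the real numbers come from the metadata file in Figure 1.

```python
import numpy as np

# Placeholder metadata values for one thermal band (see Figure 1 for the
# real ones). The QCAL range is assumed to be 1-255 for 8-bit data.
LMAX, LMIN = 17.04, 0.0
QCALMAX, QCALMIN = 255.0, 1.0

# Solve for Grescale first; Brescale is taken from the LMIN value.
Grescale = (LMAX - LMIN) / (QCALMAX - QCALMIN)
Brescale = LMIN

def dn_to_radiance(dn):
    """L(lambda) = Grescale * DN + Brescale, applied to every pixel."""
    return Grescale * dn + Brescale

dn_band = np.random.randint(1, 256, size=(100, 100)).astype(float)  # stand-in
radiance = dn_to_radiance(dn_band)
print(f"Grescale = {Grescale:.5f}, Brescale = {Brescale:.2f}")
```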
Open the model tool and insert the original input image, the function or equation that is going to create the new image, and then name the new image and select where it will be saved. Figure 2 is the model and function used to create a new image in this first step of land surface temperature calculation, which will show the at-satellite radiance, or emittance, instead of the DN values.
Figure 2 This is the model and function for converting the DN values to the at-satellite emittance values.

Section 2: Conversion of at-satellite radiance to blackbody surface temperature

Now that we have the radiance values for the imagery, the next step is to convert those values, through the use of another model, into blackbody temperature. This is done because the temperature values recorded at the satellite differ from the true/kinetic temperatures at the earth's surface. The equation used to convert radiance values to surface temperature is: Tb = K2 / ln((K1 / L) + 1). The K1 and K2 values are the thermal band calibration constants for the ETM+ and TM sensors. Once you have the values for the equation, a new model is created with the radiance image from step one as the input, the new function we just created, and an output image of the ground temperature in the imagery. Figure 3 is the model, and Figure 4 below in the results section is the resulting image. Figure 5, also found in results, is the same image brought into ArcMap, where you can use the select tool to find the temperature of different items in the image. For example, we found the Chippewa Valley Regional Airport and took the temperature of the concrete in the runway. Areas of red are higher temperature than areas in oranges and yellows. 
Figure 3 This is the model and function for converting the at-satellite emittance to the blackbody surface temperature. 
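The blackbody conversion is a one-line calculation once K1 and K2 are known. The sketch below uses the published constants for the ETM+ thermal band (K1 = 666.09, K2 = 1282.71); the TM values differ, so treat the numbers here as illustrative.

```python
import numpy as np

# Thermal calibration constants for Landsat 7 ETM+ band 6
# (K1 in W/(m^2 sr um), K2 in Kelvin); TM uses different values.
K1, K2 = 666.09, 1282.71

def radiance_to_blackbody_temp(L):
    """Tb = K2 / ln((K1 / L) + 1): at-satellite brightness temperature in Kelvin."""
    return K2 / np.log((K1 / L) + 1.0)

radiance = np.full((100, 100), 10.0)  # stand-in radiance band from Section 1
temp_kelvin = radiance_to_blackbody_temp(radiance)
print(temp_kelvin[0, 0])  # roughly 304 K for L = 10
```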

 Part 3: Calculation of land surface temperature from TM image

This third portion of the lab is basically a repeat of the first two steps, only instead of running the models separately we are going to combine them and run the process from start to finish in one larger model (Figure 6). The first input is the Landsat TM image. Next is the function to convert the DN values to at-satellite radiance, just like Figure 2. This creates an output, but instead of an actual separate output image we create it as a temporary image, which becomes the input for the next function. This function is the same as Figure 3, converting the radiance values to blackbody or surface temperature. The final output is the surface temperatures for the Landsat TM image (Figure 7, in results), which can be brought into ArcMap to look at surface temperature just as we did with Figure 5. 
Figure 6 This is the combined model making use of a temporary image to create the final surface temperature image.
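Chaining the two functions in code mirrors the combined model: the intermediate radiance array plays the role of the temporary image. A hedged sketch, using placeholder TM constants:

```python
import numpy as np

# Placeholder Landsat TM constants; the real values come from the metadata.
LMAX, LMIN, QCALMAX, QCALMIN = 15.303, 1.238, 255.0, 1.0  # thermal band
K1, K2 = 607.76, 1260.56                                  # TM thermal constants

def surface_temperature(dn):
    # Step 1: DN to at-satellite radiance (the "temporary image").
    radiance = (LMAX - LMIN) / (QCALMAX - QCALMIN) * dn + LMIN
    # Step 2: radiance to blackbody surface temperature in Kelvin.
    return K2 / np.log((K1 / radiance) + 1.0)

dn_band = np.random.randint(1, 256, size=(100, 100)).astype(float)  # stand-in
temperature = surface_temperature(dn_band)
```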

Part 4: Calculation of land surface temperature from Landsat 8 image

This portion of the lab makes use of the skills learned in the first three parts to create a surface temperature map of Eau Claire and Chippewa Counties. We used a Landsat 8 thermal IR image collected on May 23, 2014 at 9:48 am. Using the same procedure as Part 3 we calculated surface temperature. Before we ran the model, however, we modified the Landsat 8 image so that it focused specifically on Eau Claire and Chippewa counties rather than the entire scene. This is done using the subset tool, which allows you to bring in a shapefile of the counties and extract those areas out of the larger image, running the analysis on just that subset of the imagery. Once that was done we created our model. Figure 8 is the model we created, again making use of a temporary image just as we did in Part 3. Figure 9 is the final map created in ArcMap using the final surface temperature image created by the Figure 8 model.
Figure 8 This is the complete model for converting the Landsat 8 subset image to surface temperature values. Figure 9 is the final map created in ArcMap.  
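For readers without ERDAS, the subset step can be approximated with open-source tools. Here is a sketch using rasterio and fiona to clip a raster to county boundaries; the file names are hypothetical.

```python
import fiona
import rasterio
from rasterio.mask import mask

# Read the county polygons from a shapefile (hypothetical file name).
with fiona.open("eau_claire_chippewa_counties.shp") as shp:
    county_shapes = [feature["geometry"] for feature in shp]

# Clip the Landsat 8 thermal image to the county boundaries.
with rasterio.open("landsat8_thermal.tif") as src:
    subset, subset_transform = mask(src, county_shapes, crop=True)
    meta = src.meta.copy()

# Save the subset so the temperature model can run on just this area.
meta.update(height=subset.shape[1], width=subset.shape[2],
            transform=subset_transform)
with rasterio.open("landsat8_thermal_subset.tif", "w", **meta) as dst:
    dst.write(subset)
```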


Results

In this lab we learned the process of taking raw thermal imagery from multiple satellites and converting it to surface temperature values. These new images can be used to accurately examine the temperature of surface objects. The images below are the resulting images from the models run during the lab.

Part 2
Section 2

Figure 4 This is the converted blackbody surface temperature image displayed in ERDAS Imagine. 
Figure 5 This is the same image as Figure 4 but brought into ArcMap with a color ramp assigned to the thermal values in the image. The red areas are higher temperature and the oranges and yellows are cooler areas. Water bodies are easy to pick out as they have the lowest temperature and stand out as yellow in the imagery.

Part 3

Figure 7 This is the final surface temperature image from the Landsat TM image using the combined model.

Part 4

Figure 9 This is the final surface temperature map created from the Landsat 8 image from May 23, 2014. Blues are cooler areas and the yellows to reds are warmer. All temperatures are in Kelvin, so a simple conversion can be done to find Fahrenheit temperatures. 

Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. Area of interest (AOI) file is derived from ESRI counties vector features.



Thursday, February 4, 2016

Advanced Remote Sensing: Lab 1 Image Quality Assessment and Statistical Analysis

Goals and Background

This is the first of many blog posts that will be created for labs in Geography 438 Advanced Remote Sensing of the Environment. A basic introduction to software and review of topics covered in Remote Sensing were given before this lab.
The main goal of this lab is to equip us students with the skills of identifying and removing data redundancy in satellite images through the application of statistical analysis, an important component of image preprocessing. There are three specific objectives included in this lab: 
1) Learn how to extract basic statistical information from satellite images
2) Learn to develop models to calculate image correlation analysis 
3) Interpret the results of correlation analysis for image classification. 

Methods

The first portion of the lab dealt with the first goal of learning how to extract simple statistical information from satellite imagery. Two of the methods explored were Feature Space Plot and Correlation Analysis. ERDAS Imagine 2015 was used to conduct this analysis.

Feature Space Plot

Feature space plots are two-dimensional graphs that allow the user to compare bands of aerial images based on pixel brightness values. Two bands are compared at a time. For this lab we are looking at a satellite image of Eau Claire, WI taken in 2007 (Figure 1). This image has six bands (1, 2, 3, 4, 5, and 7), as seen in the metadata (Figure 2). The Feature Space Plot tool compares each band with every other band, so for this image there are 15 comparisons that ERDAS computes. To run this analysis in ERDAS you go to the toolbar at the top of the page and click Raster --> Supervised --> Feature Space Image. Then simply input the image you would like to analyze and select an output location for the comparisons. Figure 3 is the setup window for this tool. The results from these comparisons are seen below in Figures 6 and 7. 
Figure 1 Satellite imagery of Eau Claire WI and surrounding area collected in 2007.
Figure 2 This is the metadata file for the satellite imagery in Figure 1. It shows information about how many bands, what data type (8 or 16 bit), and other data info.
Figure 3 This is the tool setup box for the Feature Space Plots we created in the first portion of the lab.
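Conceptually, a feature space plot is just a two-dimensional histogram of the brightness values of two bands. The sketch below reproduces the idea with synthetic, correlated bands; it is an illustration, not the ERDAS output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two synthetic bands standing in for real imagery; band_b is built to be
# correlated with band_a so the plot comes out linear and compact.
rng = np.random.default_rng(1)
band_a = rng.normal(120, 30, size=500_000)
band_b = 0.9 * band_a + rng.normal(0, 10, size=500_000)

# The feature space plot: a 2-D histogram of the two bands' pixel values.
plt.hist2d(band_a, band_b, bins=256, cmap="inferno")
plt.xlabel("Band A brightness value")
plt.ylabel("Band B brightness value")
plt.title("Feature space plot (synthetic bands)")
plt.show()
```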

Correlation Analysis

Feature space plots give a preliminary comparison of the data in the imagery, telling the researcher whether or not correlation exists in the data they are examining. To see how much correlation exists, and to get a more in-depth look at the data, correlation analysis is done. Unlike feature space plots, which only allow you to compare two bands at a time, correlation analysis allows the user to compare every band and output the results in a matrix or table displaying the correlation values for each band comparison. This is done through the creation of a model. The model consists of an input image, a definition or desired calculation, and output values, which in this case are correlation values. These values range from -1 to 1. Values of .95 and above mean that there is very high correlation between the bands and most likely data redundancy. It is then up to the analyst to choose which band to exclude from further analysis, based on the type of research they are conducting. Correlation values of less than .95 generally mean that there is little data redundancy between the two bands and that both can be included in further analysis. Values close to 0 or negative tell the analyst that there is very little or no correlation between two bands, in which case both would be kept. For this lab this correlation analysis was conducted for three separate images. Figures 4 and 5 show the setup for that analysis. The correlation matrices are found below in the results section (Figures 9, 11 and 13).
Figure 4 This is the model created to conduct the correlation analysis. It contains an input image, the tool or definition, and then the final output correlation values. This model is set up for the Eau Claire imagery in Figure 1.

Figure 5 This is the box where the definition or analysis selection happens. You can see that a correlation definition has been selected with the Eau Claire imagery as the input image. To run a different image, simply switch out the Eau Claire image with the new image.
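The correlation matrix itself is straightforward to compute outside ERDAS: flatten each band to a row of pixel values and correlate every row with every other. A sketch with six synthetic bands:

```python
import numpy as np

# Six synthetic bands standing in for the imagery: the first two share most
# of their signal, so their correlation should come out high.
rng = np.random.default_rng(2)
base = rng.normal(100, 25, size=(512, 512))
bands = np.stack([base + rng.normal(0, s, size=base.shape)
                  for s in (5, 5, 20, 60, 60, 80)])

# Flatten each band to one row and correlate every band with every other.
matrix = np.corrcoef(bands.reshape(6, -1))
print(np.round(matrix, 2))  # values at or above .95 flag likely redundancy
```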


Results

The resulting plots from running the Feature Space Plot analysis are below in Figures 6 and 7. Each one of those windows is a comparison of two bands. The colorful plots show how high the correlation between any two bands is. A plot that is very linear and compact, such as the second from the right in Figure 6, shows two bands with high correlation, which means similar data was collected in those two bands and one of them should possibly be removed before further analysis; that decision is determined by the correlation matrix values. Again, these plots are a preliminary look at the data in terms of correlation. Looking at the second plot from the right in Figure 7, we see that the two bands being compared are not highly correlated because the plot is very spread out and scattered.
Figure 6 These are the first 7 plots created from the Eau Claire satellite imagery collected in 2007. 
Figure 7 These are the remaining 8 plots. 
For the correlation analysis results we created three tables for three different satellite images. The model is run and outputs a text file, which is then opened and copied into Excel for formatting. Figures 8-13 are the satellite images with their corresponding correlation matrices.

Figure 8 Satellite imagery of Eau Claire WI and surrounding areas.
Figure 9 This is the correlation matrix for the Eau Claire imagery. We see that there are no values higher than .95, which means that data redundancy is probably not occurring between any of the bands in this imagery. This means that all bands can be kept moving forward with image analysis. The row of red 1s is where each band is being compared to itself; anything compared to itself is one, so we are not interested in these values.
Figure 10 Satellite imagery of the Florida Keys.
Figure 11 This is the correlation matrix for the Florida Keys imagery. We see that the correlation values are fine except for the comparison of bands 1 and 2. The correlation value of .98 means that there is most likely data redundancy occurring between these two bands and one of them should be excluded, depending on the type of imagery analysis being conducted.

Figure 12 This is satellite imagery for the Bengal Province in Bangladesh. It was collected in 2004.
Figure 13 This is the correlation matrix for the Bangladesh imagery. Similar to the Florida Keys imagery, most of the values are below .95, meaning low to no correlation or data redundancy. Again, however, we see that the band 1 and 2 comparison is over .95, meaning one of the bands should be thrown out.

Sources

Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. Quickbird high resolution images are from the Global Land Cover Facility at www.landcover.org.