Thursday, March 31, 2016

Advanced Remote Sensing: Lab 7 Digital change detection

Goals and Background

The main goal of this lab is to develop skills and a better understanding of how to evaluate and measure changes in land use and land cover over time. To do this, digital change detection will be used, which is an important tool for monitoring environmental and socioeconomic phenomena in remotely sensed images. There are three objectives which fall under this digital change detection method. They are:
1) perform quick qualitative change detection through visual means
2) quantify post-classification change detection
3) develop a model that will map detailed from-to changes in land use/land cover over time

Methods

Part 1: Change detection using Write Function Memory Insertion 

The first portion of the lab makes use of Write Function Memory Insertion. This is a very simple yet effective method of visualizing changes in LULC over time. To do this, the near-infrared bands from two images of the same area at different times are loaded into the red, green and blue color guns. When this is done, the pixels that changed between the two dates appear as a bright color that stands out from the rest of the image, while areas that did not change remain neutral. These areas of change are then easy to see, giving a quick overview of the change that occurred between the two study dates. In Figure 1 below you can see the areas highlighted in red that stand out from the rest of the image. These are areas of change between 1991 and 2011.
Figure 1 Write Function Memory Insertion change image of Eau Claire County for 1991 to 2011.
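As a rough illustration of what Write Function Memory Insertion does with the color guns, here is a minimal numpy sketch. The arrays and their values are placeholders, not the actual lab data; in ERDAS this is done through the viewer rather than code.

```python
import numpy as np

# Placeholder NIR bands for the two dates, assumed co-registered and
# rescaled to 0-255 (in the lab these came from the 1991 and 2011
# Landsat images of Eau Claire County).
nir_1991 = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
nir_2011 = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

# Write Function Memory Insertion: one date's NIR band goes to the red
# gun, the other date's NIR band to the green and blue guns. Where the
# two dates agree the guns balance out to gray; where they differ the
# pixel takes on a strong color cast, highlighting change.
composite = np.dstack([nir_1991, nir_2011, nir_2011])
```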

Part 2: Post-classification comparison change detection

Section 1: Calculating quantitative changes in multidate classified images 

The next portion of the lab is about conducting change detection on two classified images of the Milwaukee Metropolitan Statistical Area (MSA). The two images being compared are from 2001 and 2011. The images were already classified and provided by Dr. Cyril Wilson. Figure 2 shows the two MSA images side by side in ERDAS Imagine.
Figure 2 These are the MSA classified images for 2001 on the right and 2011 on the left.
Once the two images were brought in so we could visually compare them, the next step was to quantify the change between the two time periods. This was done by obtaining the histogram values from the raster attribute table and then entering those values into an Excel spreadsheet by class. These values are then converted to square meters and from square meters to hectares. Once we had the hectare values for each of the classes for 2001 and 2011, the percent change was calculated. This is done by subtracting the 2001 value from the 2011 value, dividing by the 2001 value, and multiplying by 100. Figure 3 is the resulting table with the percent change values.
Figure 3 This table is showing the percent change for each LULC class from 2001 to 2011.
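A small worked example of the area conversion and the percent change formula, using made-up pixel counts rather than the lab's actual histogram values (the 900 square meter pixel area assumes 30 m Landsat-derived pixels):

```python
# Made-up pixel counts for one LULC class, read from the raster
# attribute table histograms (not the actual lab values).
pixels_2001 = 150_000
pixels_2011 = 165_000

PIXEL_AREA_M2 = 30 * 30     # assumes 30 m x 30 m Landsat-derived pixels
M2_PER_HECTARE = 10_000

ha_2001 = pixels_2001 * PIXEL_AREA_M2 / M2_PER_HECTARE   # 13,500 ha
ha_2011 = pixels_2011 * PIXEL_AREA_M2 / M2_PER_HECTARE   # 14,850 ha

# Percent change from 2001 to 2011.
pct_change = (ha_2011 - ha_2001) / ha_2001 * 100          # +10.0 %
```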

Section 2: Developing a From-to change map of multidate images 

The final portion of this lab was to create a from-to change map from the two MSA images. A model was created which detects the change between the two images, making use of the Wilson-Lula algorithm. Figure 4 is the model that was created. In this model I focused on changes between 5 pairs of classes. Those 5 pairs are as follows: 1. Agriculture to urban 2. Wetlands to urban 3. Forest to urban 4. Wetlands to agriculture 5. Agriculture to bare soil. The first part of the model takes the two MSA images and separates them into the individual classes through an either/or statement. Each class from the two years is then paired up based on the 5 pairs above. Once paired up, the Bitwise function is used on each pair to show the areas that changed from one LULC class to another over the time period. These 5 output rasters are then used to create a map of the changes that took place. Figure 5 is the final resulting from-to change map.
Figure 4 This is the from-to change model making use of the Wilson-Lula algorithm to calculate LULC change from one class to another.
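The core of each model branch can be expressed in a few lines of array logic. This numpy sketch is only an analogue of the ERDAS model (the class codes and rasters are hypothetical), but it shows the either/or isolation followed by the bitwise AND:

```python
import numpy as np

# Hypothetical class codes; the real codes come from the classified
# MSA rasters.
AGRICULTURE, URBAN, WETLANDS, FOREST, BARE_SOIL = 1, 2, 3, 4, 5

lulc_2001 = np.random.randint(1, 6, (400, 400))
lulc_2011 = np.random.randint(1, 6, (400, 400))

def from_to(old_img, new_img, old_class, new_class):
    """Binary raster of pixels that changed from old_class to new_class.

    Mirrors one branch of the model: an either/or test isolates each
    class, then a bitwise AND keeps only pixels that were old_class in
    the first date and new_class in the second.
    """
    return (old_img == old_class) & (new_img == new_class)

ag_to_urban  = from_to(lulc_2001, lulc_2011, AGRICULTURE, URBAN)
wet_to_urban = from_to(lulc_2001, lulc_2011, WETLANDS, URBAN)
```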

Results

Figure 5 This is the final from-to change map. Each area that is colored is showing a change in LULC class in the 5 pairings created earlier in the lab for use in the model in Figure 4.

Sources

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 

Homer, C., Dewitz, J., Fry, J., Coan, M., Hossain, N., Larson, C., Herold, N., McKerrow, A., VanDriel, J.N., and Wickham, J. 2007. Completion of the 2001 National Land Cover Database for the Conterminous United States. Photogrammetric Engineering and Remote Sensing, Vol. 73, No. 4, pp. 337-341.

Xian, G., Homer, C., Dewitz, J., Fry, J., Hossain, N., and Wickham, J. 2011. The change of impervious surface area between 2001 and 2006 in the conterminous United States. Photogrammetric Engineering and Remote Sensing, Vol. 77, No. 8, pp. 758-762.

The Milwaukee shapefile is from the ESRI U.S. geodatabase.


Tuesday, March 29, 2016

Advanced Remote Sensing: Lab 6 Classification Accuracy Assessment

Goals and Background

The main goal of this lab is to gain knowledge of evaluating the accuracy of classification results, as accuracy assessment is a mandatory exercise following image classification. It is a vital part of the post-processing stage of remotely sensed data. In order to learn the accuracy assessment process there are two main objectives for this lab:
1) collect ground reference testing samples for accuracy assessment
2) use ground reference testing samples to perform accuracy assessment

Methods

The accuracy assessment in this lab was done using ERDAS Imagine 2015. 

Part 1: Generating ground reference testing samples for accuracy assessment 

The first step in the process of accuracy assessment is to create ground reference testing samples. These ground samples can be collected in the field before classification, but if that is not an option they can also be created using a high resolution image, as we are doing in this lab.
The first part of this lab is about creating those ground sample points using high resolution aerial imagery of our study area. The image that was assessed for accuracy is the recoded unsupervised classification image created in Lab 4. First I opened this image in an ERDAS viewer and then brought a high resolution aerial image of the same area into another viewer. This image is from 2005 and will serve as the reference image in the accuracy assessment. It is also where the reference samples will be created. Once they are both open (Figure 1), the accuracy assessment dialogue is opened. Select the first viewer with the unsupervised classification image and click on the Raster tab > Supervised > Accuracy Assessment. This opens the accuracy assessment window (Figure 2), in which you open the classified image. Next, clicking anywhere in viewer two containing the 2005 imagery selects that image as the reference image for the assessment. Random points then need to be generated. This is done by going to Edit > Create/Add Random, which opens the add random points window. In this window some presets need to be changed. For this lab we entered 125 for the number of points, set the distribution parameter to stratified random, set the minimum number of points to 15, and selected the 4 classes from the unsupervised classification image (Figure 3). Click OK and 125 points appear on the reference image.
Figure 1 These are the two images used for the accuracy assessment: the unsupervised classification on the left and the 2005 reference image on the right.

Figure 2 This is the accuracy assessment dialogue where the classification and reference images are selected. 
Figure 3 This is the add random points dialogue where the 125 points are added to the 2005 reference image to conduct the assessment. 
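For intuition, here is a rough numpy analogue of stratified random point generation. It is not ERDAS's exact algorithm, and the raster, class codes, and allocation rule are placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder 4-class classification raster.
classified = rng.integers(1, 5, (600, 600))

N_POINTS, MIN_PER_CLASS = 125, 15

# Stratified random sampling: allocate points to classes roughly in
# proportion to their area, but never fewer than MIN_PER_CLASS each
# (the rounding means the total can drift slightly from N_POINTS).
classes, counts = np.unique(classified, return_counts=True)
alloc = np.maximum(
    np.round(N_POINTS * counts / counts.sum()).astype(int),
    MIN_PER_CLASS)

points = []
for cls, n in zip(classes, alloc):
    rows, cols = np.nonzero(classified == cls)
    pick = rng.choice(len(rows), size=n, replace=False)
    points.extend(zip(rows[pick], cols[pick]))
```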

Part 2: Performing accuracy assessment 

Section 1: Evaluation of reference points 

Now that the sample points are generated, the accuracy assessment can begin. In the accuracy assessment window the first 10 random points are selected. Click show current selection from the view menu and these points appear on the reference image in white. Using the same numbering scheme for the classes as in Lab 5, I went through and identified the LULC class for each of the 125 random points in the reference image. This is done by locating each point on the reference image, identifying the LULC class visible in the high resolution imagery, and recording that class number in the accuracy assessment table. After each sample point is classified it changes from white to yellow. This process can be seen in Figure 5. Figure 6 is the table with the reference points seen in Figure 5. This is where the classification number is entered.
Figure 5 This is what the reference image will look like after all of the random sample points have been classified. They will turn from white to yellow.
Figure 6 These are the randomly generated points in the table with the classification number in the left hand reference column.

Section 2: Generating accuracy assessment report 

Once each of the 125 points is classified in the accuracy assessment window, the next step is to generate the accuracy report. This is done by selecting accuracy report from the report drop-down. Figure 8 is what that report looks like. To make the report easier to understand I created an Excel table (Figure 9).
Figure 8 This is the raw accuracy assessment report created in ERDAS Imagine 2015. 
Figure 9 This is the cleaned-up, easier-to-understand accuracy report I created in Excel.
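The numbers in such a report come from an error (confusion) matrix. Here is a sketch of the standard calculations, using a made-up 4-class matrix rather than the lab's actual results:

```python
import numpy as np

# Made-up 4-class error matrix: rows = classified image,
# columns = reference data (not the lab's actual numbers).
m = np.array([[28,  2,  1,  0],
              [ 3, 30,  4,  1],
              [ 0,  3, 25,  2],
              [ 1,  0,  2, 23]], dtype=float)

n = m.sum()                              # 125 reference points
overall_accuracy = np.trace(m) / n       # correct / total

# Producer's accuracy (errors of omission) and user's accuracy
# (errors of commission) for each class.
producers = np.diag(m) / m.sum(axis=0)
users     = np.diag(m) / m.sum(axis=1)

# Kappa statistic: agreement beyond what chance alone would produce.
p_o = overall_accuracy
p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2
kappa = (p_o - p_e) / (1 - p_e)
```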

Part 3: Accuracy assessment of supervised classification 

The same process from parts 1 and 2 was repeated to conduct an accuracy assessment on the supervised classification image from Lab 5 (Figure 10). 125 points were created to do the assessment and run the accuracy report. There was however an error when the report was created. Labels were not accurate and the report did not produce any Kappa statistics (Figure 11). This malfunction may be due to an incompatibility between the algorithm used and the newest version of the ERDAS software. For this reason an accuracy assessment of the supervised image has not been completed and the accuracy of the supervised and unsupervised classification images cannot be compared.
Figure 10 The supervised classification image on the left and the reference image with the 125 random points on the right.
Figure 11 This is the accuracy report containing the error for the supervised image. 

Sources

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 
The high resolution image is from United States Department of Agriculture (USDA) National Agriculture Imagery Program. 




Thursday, March 10, 2016

Advanced Remote Sensing: Lab 5 Pixel-based Supervised Classification

Goals and Background

The main goal of this lab is to learn how to use pixel-based supervised classification methods to extract biophysical and sociocultural information from remotely sensed images. Just like last week's unsupervised classification, this is one of the most important remote sensing skills to acquire. The three smaller goals that this lab is split into are:
1) select training samples to train a supervised classifier
2) evaluate the quality of the training signatures collected
3) produce meaningful informational land use/land cover classes through supervised classification

Methods

Part 1: Collection of training samples for supervised classification

The first part of the lab is all about collecting training samples which will be used later for the supervised classification. These training samples are essentially spectral signatures of various types of surfaces and land cover. In the last lab we relied on spectral libraries to determine what the spectral signatures we collected came from; in this lab we are collecting spectral signatures of specific features from the different classes we are going to break the image into. By collecting these training samples we are telling the maximum likelihood classifier in ERDAS Imagine 2015 what range of spectral signature values to expect for each class. For our purposes in this lab we collected at least 50 training samples from the imagery, split among the different classes we want to divide the image into. Just like last week in Lab 4, our classes are water, forest, agriculture, urban/builtup, and bare soil. When collecting the samples we made sure to collect multiple samples for each class to capture all of the variations of the spectral signatures of each kind of feature. Capturing this variation in signatures helps the classification tool be more accurate and classify more areas of the image correctly. For this lab we collected 12 samples from water, 11 from forested areas, 9 from agricultural land, 11 from urban/builtup areas, and 7 from bare soil. That was the minimum required for the lab; I ended up collecting about 65 samples total to better capture the variation of spectral signatures within each class.

The first step in collecting training samples is to bring the image you want to classify into ERDAS Imagine 2015. We are again using the Eau Claire and Chippewa County imagery collected by Landsat 7 on June 9, 2009. Once the image is open we can start collecting the samples. First we zoomed in to a water feature on the map. A good starting point is Lake Wissota. Once zoomed in, we used the Polygon tool from the Draw tool menu to create a sample polygon in the lake. Once the polygon is drawn we open the Signature Editor under the Supervised Classification drop-down. Making sure that the polygon is still selected, we create a new signature from AOI in the Signature Editor tool. Once it is added we change the name of the signature so we can keep track of what it is, as we will be collecting at least 50 samples. This first sample is named Water 1. This same process is repeated 11 more times to collect the rest of the water samples, making sure we look at water bodies from all over the image, not just one water feature, to capture the spectral variation in water. Figure 1 is what this training sample process looks like in ERDAS Imagine 2015.
Figure 1 This is what the process of collecting training samples looks like in ERDAS.
This same process is used to collect the remaining training samples for the forested areas, agricultural land, urban areas, and bare soil. Water features are easy to distinguish from the other land cover features in the imagery, but distinguishing between the other surface types can be difficult. To help with this we linked and synced a Google Earth viewer window to the false color image. This allows the user to zoom in to an area of the false color image which they think is agriculture, for example, and look at the high resolution Google Earth imagery to double check. This is repeated for all the land cover features to make sure the training samples capture, and are labeled as, the right land cover type. Figure 2 is what the Signature Editor tool will look like once the samples are collected.
Figure 2 This is what the Signature Editor window will look like once all the samples are collected and labeled.

Part 2: Evaluating the quality of training samples

The next step in the lab is to check the quality of the training samples that were collected. This is a vital step before they are used in the supervised classification tool. What you are looking for when assessing sample quality is separability between the spectral signatures of the different classes. The more separability there is between classes, the better the samples capture distinct spectral ranges and the better the classification will work. In simple terms, the less overlap there is between the spectral signatures of different classes the better. We look at this separability using the Display Mean Plot Window button in the Signature Editor window. We highlight the samples we want to display in the editor window and then click the Mean Plot tool. Figure 3 is how this looks in ERDAS. When this window opens, sometimes the signatures are scrunched and you cannot see all 6 bands. We fix this by hitting the Scale Chart to Fit Current Signature button. By looking at the patterns of the signatures across the six bands we can make an early rough determination of how good the samples collected are. All of the signatures within a class should have the same overall pattern across the 6 bands, showing they are of the same type of feature, but they should not completely overlap, which shows the variation of signatures within that feature.
Figure 3 This is how to display the spectral signatures for each class to do a separability comparison.
During this process we are looking for signatures that have a drastically different pattern across the 6 bands from the other signatures of the same class. For example, if you have an agriculture signature that has a completely different pattern than the rest, that sample should be deleted and recollected to improve quality. Once we have done this we bring all of the signatures into the same window to view them as a group. They are color coded in the following way: Water is blue, Forest is green, Agriculture is pink, Urban is red, and Bare Soil is sienna. Figure 4 shows all of the training samples displayed in the same signature window.
Figure 4 These are all the signatures of my collected training samples displayed together. 
Once we have looked at the signatures, the final step in the quality check is to create a separability report. This is done by clicking Evaluate and then Separability in the Signature Editor window. Making sure that all the signatures are selected, open the Signature Separability tool. For this lab we chose 4 layers per combination and transformed divergence as the distance measurement. We then click OK to generate the report. Figure 5 is what that report looks like. This report is a numerical way of showing the separability between the collected training samples. The transformed divergence values in the charts range from 0 to 2000. We are looking for values between 1900 and 2000: 1900 and above is good separability, values approaching 2000 are excellent, and anything below 1700 is garbage and needs to be recollected. The report also tells us which bands have the most separation between them; for my report the best bands were 1, 3, 4 and 5, with a Best Average Separability value of 1987, which is good.
Figure 5 This is part of the separability report, showing the most separated bands and the Best Average Separability value.
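For reference, transformed divergence can be computed directly from each pair of class signatures (their mean vectors and covariance matrices). This is a minimal numpy sketch of the standard formula with the usual 0-2000 scaling, using made-up signature statistics:

```python
import numpy as np

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """Transformed divergence between two class signatures.

    Standard formulation scaled to a 0-2000 range, where values near
    2000 mean the classes are almost perfectly separable.
    """
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dm = (mean_i - mean_j).reshape(-1, 1)
    d = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
      + 0.5 * np.trace((ci_inv + cj_inv) @ dm @ dm.T)
    return 2000.0 * (1.0 - np.exp(-d / 8.0))

# Made-up 6-band signature statistics for two classes.
rng = np.random.default_rng(0)
mean_water  = rng.random(6) * 50
mean_forest = rng.random(6) * 50 + 60
cov = np.eye(6) * 25.0   # simplified diagonal covariance for the sketch
print(transformed_divergence(mean_water, cov, mean_forest, cov))
```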

Upon getting a good separability value, the final step before running the supervised classification is to combine the spectral signatures in each class into 1 signature representing the whole class (the 12 water signatures combine into 1). There will be 5 signatures total after this is complete, one for each class. To do this we highlight all the signatures for one class, say all the water signatures, and go to Edit > Merge in the Signature Editor window. Figure 6 shows the final 5 merged class signatures. We then plot these 5 in the Mean Signature Window; the result is Figure 7.
Figure 6 The merged class signatures. 
Figure 7 The merged signatures displayed in the Mean Plot window.
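Conceptually, merging pools each sample's statistics into one class signature. Here is a rough sketch of the mean part of that pooling, with made-up pixel counts and band means (ERDAS also pools the covariance matrices):

```python
import numpy as np

# Made-up statistics for one class (e.g. the 12 water samples):
# pixel count and a 6-band mean vector for each AOI polygon.
counts = np.array([120, 95, 140, 80, 60, 200, 150, 90, 110, 75, 130, 85])
means = np.random.default_rng(1).random((12, 6)) * 255

# The merged class mean is the pixel-count-weighted average of the
# individual sample means.
merged_mean = (counts[:, None] * means).sum(axis=0) / counts.sum()
```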

Part 3: Performing supervised classification

We are now ready to run the supervised classification, which is very simple once the prep work in Parts 1 and 2 is complete. We run the tool by clicking Supervised Classification from the Classification tab under the Raster menu. This opens the classification settings (Figure 8). The input image is the original Eau Claire 2009 image and the signature file is the one we created in Part 2 when we merged the signatures into 5 classes. The classified file is the output image, so we save it where we like and accept all other defaults. We run the tool and view the result (Figure 9) in the Results section below.
Figure 8  The supervised classification window. 
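Under the hood, the maximum likelihood classifier scores each pixel against a Gaussian model built from each merged signature and assigns the best-scoring class. A simplified sketch of that decision rule (equal class priors assumed; the arrays are placeholders, not the lab data):

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood -- the decision rule behind maximum likelihood
    classification (equal class priors assumed here).

    pixels: (n, bands) array; means: list of (bands,) mean vectors;
    covs: list of (bands, bands) covariance matrices, one per class.
    """
    scores = []
    for m, c in zip(means, covs):
        diff = pixels - m
        # Mahalanobis distance of every pixel to this class mean.
        mahal = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(c), diff)
        scores.append(-np.log(np.linalg.det(c)) - mahal)
    # Winner-take-all across the per-class scores.
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Tiny usage example with 2 made-up classes in 6 bands.
rng = np.random.default_rng(3)
pixels = rng.random((1000, 6)) * 255
means = [np.full(6, 60.0), np.full(6, 180.0)]
covs = [np.eye(6) * 100.0, np.eye(6) * 100.0]
labels = max_likelihood_classify(pixels, means, covs)
```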

Results

Figure 9 This is the final supervised classification map of Eau Claire and Chippewa counties.
Figure 10 This is a comparison of the new supervised classification image on the left with the unsupervised classification image from Lab 4 on the right. We see that there is quite a difference between the two images.

Sources

The Landsat satellite imagery is from Earth Resources Observation and Science Center, United States Geological Survey. 

Thursday, March 3, 2016

Advanced Remote Sensing: Lab 4 Unsupervised classification

Goals and Background

The main purpose and goal of this lab is to learn how to conduct unsupervised classification using a specialized algorithm. This classification is used to extract biophysical and sociocultural information from the imagery. This process is one of the most important in the field of remote sensing. The two specific goals of this lab are:
1) Gain an understanding of the input configuration requirements and execution of an unsupervised classifier
2) Develop the art of recoding multiple spectral clusters generated by an unsupervised classifier into useful thematic informational land use/land cover classes that meet a classification scheme.

Methods

This lab was broken up into two parts. The first part was conducting unsupervised classification with only 10 classes, which is pretty low. The second part of the lab follows the same classification method but increases the number of classes to 20 to improve the classification accuracy.

Part 1: Experimenting with unsupervised ISODATA classification algorithm

This first part of the lab is learning how to run the Iterative Self-Organizing Data Analysis Technique, or ISODATA, classification algorithm. This is used to analyze an image of Eau Claire and Chippewa Counties in Wisconsin collected by the Landsat 7 satellite on June 9, 2009.

Section 1: Setting up an unsupervised classification algorithm 

To set up the algorithm we brought the original image into ERDAS Imagine 2015. Next we select the unsupervised classification tool, which is under the raster toolbar. In the window for the tool we again bring in the original image as the input. The number of classes should be set from 10 to 10, which means that the algorithm in ERDAS will create exactly 10 classes based on the brightness values found throughout the image. The iterations should also be changed to 250. This value means that the algorithm will run up to 250 times to make sure that unlike features are not grouped together in the 10 classes it is creating. I say it will run up to 250 times because it may place everything in the correct classes before the 250th run through. Once these parameters are set, the model (Figure 1) is ready to run. Once it finishes, compare the input image to the output image (Figure 2). This sounds like it would take a while to run, but it was done processing in under 5 minutes, though this depends on the computer it is being run on.
Figure 1 This is the unsupervised classification tool window where the classification parameters are set.
Figure 2 The image on the left is the original 2009 image and the image on the right is the newly classified image.
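To give a feel for what the algorithm is iterating on, here is a heavily simplified sketch of the clustering at the heart of ISODATA. Real ISODATA also splits and merges clusters, and this is not ERDAS's implementation; the pixel array is a placeholder:

```python
import numpy as np

def isodata_like(pixels, n_classes=10, max_iter=250, threshold=0.95):
    """Simplified ISODATA-style clustering (essentially iterative
    k-means; real ISODATA also splits and merges clusters).

    Stops early once the fraction of pixels keeping their cluster
    between iterations reaches the convergence threshold, which is why
    the full 250 iterations rarely all run.
    """
    rng = np.random.default_rng(0)
    centers = pixels[rng.choice(len(pixels), n_classes, replace=False)]
    labels = np.full(len(pixels), -1)
    for _ in range(max_iter):
        # Assign each pixel to its nearest cluster mean.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        new_labels = dists.argmin(axis=1)
        if (new_labels == labels).mean() >= threshold:
            labels = new_labels
            break
        labels = new_labels
        # Recompute each cluster mean from its member pixels.
        for k in range(n_classes):
            if (labels == k).any():
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels

# Usage on made-up 6-band pixel vectors.
pixels = np.random.default_rng(4).random((5000, 6)) * 255
clusters = isodata_like(pixels)
```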

Section 2: Recoding of unsupervised clusters into meaningful land use/land cover classes

Once we have the new unsupervised classification image, the next step is to recode the clusters created into meaningful LULC classes. This is a pretty simple process, but the amount of time spent on it can increase or decrease the classification accuracy. To recode, we open the image attributes with the newly classified image open in ERDAS 2015. We go through each cluster one at a time and change its color to yellow so it stands out in the image. We then sync the image to Google Earth so that we can see the actual features and surfaces in the cluster areas we have highlighted. Based on what we see in Google Earth for each cluster, we assign a label and change the color scheme. The labels assigned to the clusters were Water, which is changed to Blue, Forest is Dark Green, Agriculture is Pink, Urban/Builtup is Red, and Bare Soil is Sienna. Figure 3 is the recoded unsupervised classification image with the new color assignments for each class.
Figure 3 This is the reclassed image making use of only 10 classes. The class labels and associated colors can be seen in the table.

Part 2: Improving the accuracy of unsupervised classification 

Section 1: Setting up and running an unsupervised classification algorithm 

The second portion of the lab was very similar to the first. Again we bring in the original imagery from 2009. This time, however, in the classification window we set the classes from 20 to 20. This increases the number of classes the algorithm splits the brightness values into, increasing accuracy. One other slight change is reducing the convergence threshold from .95 to .92. Once we have this new classified image with 20 classes instead of 10, we use the same procedure as in Part 1 Section 2 to assign the correct labels to each class as well as change the colors. Figure 4 is the newly reclassed image with the attribute table showing the labels and colors.
Figure 4 This is the newly reclassed image using 20 classes to increase the accuracy. 


Section 2: Recoding LULC classes to enhance map generation 

The final piece of the lab is to combine the classes so that the LULC classes are easier to understand and displayed more effectively when creating a map. This was done only on the image with 20 classes from Part 2. The 20 classes are recoded, or combined by kind, so there are only 5. In order to do this, the recode tool under the thematic tab is used. The class numbers were 1. Water 2. Forest 3. Agriculture 4. Urban/Builtup 5. Bare Soil. Figure 5 shows the 20 classes combined into 5 using the recode tool. These values can then be used to create a LULC map in ArcGIS or another GIS software.
Figure 5 These are the 5 classes created using the recode tool. Each of these is multiple classes combined by type to go from 20 to 5 classes.
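A recode is essentially a lookup table applied to every pixel. Here is a small numpy sketch of that idea; the cluster-to-class mapping below is invented for illustration, since the real grouping came from the Google Earth comparison:

```python
import numpy as np

# Invented mapping from the 20 spectral clusters to the 5 LULC classes
# (1 Water, 2 Forest, 3 Agriculture, 4 Urban/Builtup, 5 Bare Soil).
recode_table = {1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3,
                9: 4, 10: 4, 11: 4, 12: 5, 13: 5, 14: 1, 15: 2,
                16: 3, 17: 4, 18: 5, 19: 2, 20: 3}

# Placeholder 20-cluster raster.
clusters = np.random.default_rng(2).integers(1, 21, (300, 300))

# Build a lookup array so that cluster value v maps to its class,
# then index the raster with it; this is what a thematic recode does.
lut = np.zeros(21, dtype=np.uint8)
for old, new in recode_table.items():
    lut[old] = new
recoded = lut[clusters]
```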

Results

There is a noticeable difference between the 10 class and 20 class unsupervised classification images (Figure 6). The most noticeable difference is between the forest and agricultural areas. Many of these areas were overlapping in the 10 class image, so it was difficult to separate them into the correct class. Majority rules when choosing the classes, so if there are more trees in the clustered area then it would be labeled forest, and the same is true for all the classes. The 10 class image is much more generalized than the 20 class image, where the clusters have a clear majority and it isn't as hard to separate them into the correct classes. One of the biggest factors in the accuracy is how much time the user spends comparing the clusters to Google Earth or other high resolution imagery to accurately separate the classes. If this is done quickly the classification most likely will not be accurate. Figure 7 is the final map created in ArcGIS using the recoded 5 class image.
Figure 6 These are the two reclassed images for comparison. The image on the left is the image split into 10 classes and the image on the right has 20 classes.
Figure 7 This is the final map created in ArcGIS.

Sources

The Landsat satellite imagery is from Earth Resources Observation and Science Center, United States Geological Survey.