Goals and Background
The main goal of this lab is to learn how to use pixel-based supervised classification methods to extract biophysical and sociocultural information from remotely sensed images. Just like the unsupervised classification performed last week, image classification is one of the most important remote sensing skills to develop. The lab is split into three smaller goals:
1) selecting training samples to train a supervised classifier,
2) evaluating the quality of the training signatures collected, and
3) producing meaningful informational land use/land cover classes through supervised classification.
Methods
Part 1: Collection of training samples for supervised classification
The first part of the lab is all about collecting training samples, which will be used later for the supervised classification. These training samples are essentially spectral signatures of various surface and land cover types. In the last lab we relied on spectral libraries to determine what features our collected spectral signatures came from; in this lab we are picking the spectral signatures of specific features from the classes we are going to break the image into. By collecting these training samples we are telling the maximum likelihood classifier in ERDAS Imagine 2015 what range of spectral signature values to expect for each class. For this lab we collected at least 50 training samples from the imagery, split among the different classes we want to divide the image into. Just like last week in Lab 4, our classes are water, forest, agriculture, urban/built-up, and bare soil. When collecting the samples we made sure to collect multiple samples for each class to capture the full variation in the spectral signatures of each kind of feature. Capturing this variation helps the classification tool classify more areas of the image accurately. For this lab we collected 12 samples from water, 11 from forested areas, 9 from agricultural land, 11 from urban/built-up areas, and 7 from bare soil. That was the minimum required for the lab; I ended up collecting about 65 samples total to better capture the variation in spectral signatures within each class.
The first step in collecting training samples is to bring the image you want to classify into ERDAS Imagine 2015. We are again using the Eau Claire and Chippewa County imagery collected by Landsat 7 on June 9, 2000. Once the image is open we can start collecting the samples. First we zoomed in to a water feature on the map; a good starting point is Lake Wissota. Once zoomed in, we used the Polygon tool from the Draw tool menu to create a sample polygon in the lake. With the polygon drawn, we open the Signature Editor under the Supervised Classification drop-down. Making sure the polygon is still selected, we create a new signature from the AOI in the Signature Editor tool. Once it is added, we change the name of the signature so we can keep track of what it is, as we will be collecting at least 50 samples; this first sample is named Water 1. This same process is repeated 11 more times to collect the rest of the water samples, making sure we look at water bodies from all over the image, not just one feature, so we capture the spectral variation in water. Figure 1 is what this training sample process looks like in ERDAS Imagine 2015.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcKCP9LMXWGyeLRktkOo29hMgDRgM2twETPneKNkb8_JtAzO5RFpj0lS3A5PzqWo7Q5vueAL6GgTCSXslRAJaNUqx32KsaduYNIFlmJBpj7VJQVT52glNq9cAAFWVCvdkrTXOpdGU0KCU/s320/1.JPG)
Figure 1: What the process of collecting training samples looks like in ERDAS.
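Conceptually, creating a signature from an AOI just summarizes the pixel values the polygon covers, band by band. Here is a minimal sketch in Python with NumPy, assuming the image has been read into a `(bands, rows, cols)` array and the AOI polygon has been rasterized to a boolean mask (both of these inputs are assumptions for illustration, not ERDAS functionality):

```python
import numpy as np

def signature_from_aoi(image, mask):
    """Compute a spectral signature (per-band mean and std) from an AOI mask.

    image: array of shape (bands, rows, cols) holding pixel values.
    mask:  boolean array of shape (rows, cols), True inside the AOI polygon.
    """
    pixels = image[:, mask]  # shape (bands, n_pixels_in_polygon)
    return pixels.mean(axis=1), pixels.std(axis=1)

# Toy 6-band image with a 2x2 "lake" AOI in the top-left corner.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(6, 4, 4)).astype(float)
aoi = np.zeros((4, 4), dtype=bool)
aoi[:2, :2] = True

mean_sig, std_sig = signature_from_aoi(img, aoi)
print(mean_sig.shape)  # one mean value per band -> (6,)
```

Each stored signature is then just these per-band statistics under the name you gave it (e.g., Water 1).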
This same process is used to collect the remaining training samples for the forested areas, agricultural land, urban areas, and bare soil. Water features are easy to distinguish from the other land cover features in the imagery, but distinguishing between the other surface types can be difficult. To help with this we linked and synced a Google Earth viewer window to the false color image. This lets the user zoom in to an area of the false color image they think is agriculture and check it against the high-resolution Google Earth imagery. This is repeated for all the land cover features to make sure the training samples are assigned to the right land cover type. Figure 2 is what the Signature Editor tool will look like once the samples are collected.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4bdTw_EYdBfPx2Wt1Vbiym14La8l4e-xYAEU5DaWO0tG-vPuXu2oiYSRNuMeTiX0jWiVuBDifoAqanDDAuNwJQrnOnnLMYM-dS2ufyt6MIxiQebCmH9ZwA0Pni73NpWWF8IPmB-4yPr0/s320/6.PNG)
Figure 2: What the Signature Editor window will look like once all the samples are collected and classified.
Part 2: Evaluating the quality of training samples
The next step in the lab is to check the quality of the training samples that were collected. This is a vital step before they are used in the supervised classification tool. What you are looking for when assessing sample quality is separability between the spectral signatures of the different classes. The more separable the classes are, the better you have captured the full range of spectral values for each class and the better the classification will work. In simple terms, the less overlap there is between the signatures of different classes, the better. We look at this using the Display Mean Plot Window button in the Signature Editor window: we highlight the samples we want to display in the editor window and then click the Mean Plot tool. Figure 3 shows how this looks in ERDAS. When this window opens, the signatures are sometimes scrunched so that you cannot see all 6 bands; we fix this by hitting the Scale Chart to Fit Current Signatures button. By looking at the patterns of the signatures across the six bands we can make an early, rough judgment of how good the collected samples are. Within a class, all of the signatures should follow the same overall pattern across the 6 bands, showing they are of the same type of feature, but they should not lie exactly on top of one another, which shows the samples capture the variation within that feature.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheojhG0R2V9V87xXCyhHz-k0pEgjKL4YDkhQFFADJ4h415_lUiTec3NRAaiq2bDhmIPD_rwUfP7rNdgOr5vH_bQ0_0zmtvjnKBmNejD_lTe1ziZEhPyrNQ5kZHugJG9x9XwWsMMDh3ObA/s320/2.JPG)
Figure 3: How to display the spectral signatures under each class for a separability comparison.
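This "same overall pattern" check can also be made quantitative. As a rough sketch (not a tool ERDAS exposes), assuming each sample has been reduced to its per-band mean signature, correlating each sample against the class average flags any sample whose pattern disagrees with its class:

```python
import numpy as np

def pattern_correlations(signatures):
    """Correlate each sample's band-wise mean signature with the class average.

    signatures: array of shape (n_samples, n_bands). A sample whose
    correlation is much lower than the rest follows a different pattern
    and is a candidate for deletion and recollection.
    """
    class_mean = signatures.mean(axis=0)
    return np.array([np.corrcoef(s, class_mean)[0, 1] for s in signatures])

# Three hypothetical water samples that follow the same falling pattern
# across 6 bands, plus one sample with a very different pattern.
water = np.array([
    [60, 50, 40, 20, 10, 5],
    [65, 54, 43, 22, 12, 6],
    [58, 49, 39, 19, 9, 4],
    [20, 40, 80, 90, 70, 60],  # pattern unlike the others
], dtype=float)

r = pattern_correlations(water)
print(r.round(2))  # the last value stands out as much lower than the rest
```

The numbers here are invented for illustration; the point is that a mismatched pattern shows up as a clear outlier in the correlations.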
During this process we are looking for signatures whose pattern across the 6 bands is drastically different from the other signatures in the same class. For example, if you have an agriculture signature with a completely different pattern than the rest, that sample should be deleted and recollected to improve accuracy. Once we have done this we bring all of the signatures into the same window to view them as a group. They are color coded in the following way:
Water is blue,
Forest is green,
Agriculture is pink,
Urban is red, and
Bare Soil is sienna.

Figure 4 shows all of the training samples displayed in the same signature window.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijG71x90hhLKH7-UL9t_yVjJAaM1BaeG5IjmxDTbrBfWk-6ryMjrQez2aN1s1D7NKhswmSeq_77Ind4GkxtQUo0DgyF2zEIL3Zowh4aU8MqBRcnHHO9ToqTR6GPoP0l3bdW3ZJh6PBOkg/s320/5.PNG)
Figure 4: All the signatures of my collected training samples displayed together.
Once we have looked at the signatures, the final step in the quality check is to create a separability report. This is done by clicking Evaluate and then Separability in the Signature Editor window. Making sure that all the signatures are selected, open the Signature Separability tool. For this lab we chose 4 layers per combination and transformed divergence as the distance measurement. We then click OK to generate the report; Figure 5 is what that report looks like. This report is a numerical way of showing the separability between the collected training samples, with values ranging from 0 to 2000. We are looking for values between 1900 and 2000: 1900 and above is good separability, values near 2000 are excellent, and anything below 1700 is garbage and needs to be recollected. The report also tells us which band combination has the most separation. For my report the best bands were 1, 3, 4, and 5, with a Best Average Separability value of 1987, which is good.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjw-upvrlhQ1PKj2BqX4K32eXhLQPujx0k8eGEsLwlVASMI87S90uCYfuZlJ5KbgOfpVs2YFsxyf7V9TCUXCIHQHguBl8SWk-pO2bVCqEXgA4OP6RZ6srV4UR6jlqSLKybReWzjV4HGMAY/s320/2.PNG)
Figure 5: Part of the separability report, showing the most separated band combination and the Best Average Separability value.
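Transformed divergence is a standard separability statistic, which explains the 0-2000 scale in the report. Here is a minimal NumPy sketch of the usual formula (ERDAS's exact implementation may differ), treating each class as a normal distribution with a per-band mean vector and covariance matrix:

```python
import numpy as np

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """Transformed divergence between two classes modeled as normal
    distributions, scaled to the familiar 0-2000 range."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dm = (mean_i - mean_j).reshape(-1, 1)
    # Divergence: a covariance-difference term plus a mean-difference term.
    div = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
        + 0.5 * np.trace((ci_inv + cj_inv) @ (dm @ dm.T))
    # Saturating transform caps the value at 2000.
    return 2000.0 * (1.0 - np.exp(-div / 8.0))

# Two well-separated hypothetical 2-band classes (think water vs. bare soil).
m1, c1 = np.array([20.0, 10.0]), np.eye(2) * 4.0
m2, c2 = np.array([120.0, 140.0]), np.eye(2) * 9.0
td = transformed_divergence(m1, c1, m2, c2)
print(round(td, 1))  # essentially 2000: excellent separability
```

Identical classes give 0, and the exponential transform is why well-separated classes all pile up near the 2000 ceiling rather than growing without bound.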
Once we have a good separability value, the final step before running the supervised classification is to combine the spectral signatures in each class into one signature representing the whole class (the 12 water signatures merge into 1). There will be 5 signatures total after this is complete, one for each class. To do this we highlight all the signatures for one class, such as all the water signatures, and go to Edit > Merge in the Signature Editor window. Figure 6 shows the final 5 merged class signatures. We then plot these 5 in the Mean Plot window; the result is Figure 7.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSrze0yE16acoFULAPWCrO9QZg9N5XKbhCkBfZgQ2iCe4HeNj23xCxRogoodmgy9p4f_Q5vQzo0hSl5tko-1WMG0AUeewKF0Ex_C52dvs4vS8mj69YZSVSymz4NEw4FsooU9GMw9URH2o/s320/3.JPG)
Figure 6: The merged class signatures.

![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCpoYPkf1EcVjMSAgtNaB3dMqpHHVDfIK2TpaKQywea0JrMJWpAUbnT5PTlITbaSPqxouExSlUeKUFSK9VJF4pKuSGdXtlO2XvRCvmjCWc3O_SfJhrUZFbMo-mT545qH-HtG3ssj9vkS0/s320/4.PNG)
Figure 7: The merged signatures displayed in the Mean Plot window.
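The merge step can be thought of as pooling the per-sample statistics into one class mean and covariance. Below is a sketch of one reasonable way to do this, weighting samples by their pixel counts; how ERDAS's Edit > Merge actually weights the samples is an assumption here, not something confirmed by the lab:

```python
import numpy as np

def merge_signatures(means, covs, counts):
    """Pool several sample signatures into one class signature.

    means:  (n_samples, n_bands) per-sample mean vectors
    covs:   (n_samples, n_bands, n_bands) per-sample covariance matrices
    counts: (n_samples,) pixel count of each sample, used as weights
    """
    w = counts / counts.sum()
    merged_mean = (w[:, None] * means).sum(axis=0)
    # Pool covariances, adding the spread of the sample means themselves
    # so the merged class covers the variation between samples too.
    dev = means - merged_mean
    merged_cov = sum(
        w[k] * (covs[k] + np.outer(dev[k], dev[k])) for k in range(len(w))
    )
    return merged_mean, merged_cov

# Two hypothetical water samples of 100 and 300 pixels in a 2-band case.
means = np.array([[20.0, 10.0], [24.0, 14.0]])
covs = np.array([np.eye(2), np.eye(2)])
counts = np.array([100.0, 300.0])
m, c = merge_signatures(means, covs, counts)
print(m)  # weighted toward the larger sample: [23. 13.]
```

Either way, the result is one mean vector and covariance matrix per class, which is exactly what the maximum likelihood classifier needs.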
Part 3: Performing supervised classification
We are now ready to run the supervised classification, which is very simple once the prep work from Parts 1 and 2 is complete. We run the tool by clicking Supervised Classification from the Classification tab under the Raster menu. This opens the classification settings (Figure 8). The input image is the original Eau Claire 2000 image, and the signature file is the one we created in Part 2 when we merged the signatures into 5 classes. The classified file is the output image, so we save it wherever we like and accept all other defaults. We run the tool and view the result (Figure 9) in the Results section below.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEUaP_Q1pXhnBgJ5FrSTrbvOjs5so1iWo3x2q9VCELf4j-rgx44TU2SGg0DJqMvS8qwZnJod78syYLS-aNTTUyuwva9AVIPEa3rwZxFexEKsqheHrG19SK4qSVOPlF8fUxAx40T0sgpTk/s320/7.PNG)
Figure 8: The supervised classification window.
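Under the hood, the maximum likelihood classifier assigns each pixel to the class whose Gaussian model gives it the highest likelihood. Here is a minimal sketch of that decision rule, assuming equal class priors and none of ERDAS's thresholding options:

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels: (n_pixels, n_bands); means: (n_classes, n_bands);
    covs:   (n_classes, n_bands, n_bands) merged class covariances.
    """
    scores = []
    for m, c in zip(means, covs):
        inv = np.linalg.inv(c)
        d = pixels - m
        # Squared Mahalanobis distance of each pixel to the class mean.
        maha = np.einsum("ij,jk,ik->i", d, inv, d)
        # Log-likelihood up to a shared constant: -0.5*(ln|C| + Mahalanobis).
        scores.append(-0.5 * (np.log(np.linalg.det(c)) + maha))
    return np.argmax(np.array(scores), axis=0)

# Two hypothetical 2-band classes and three pixels near one or the other.
means = np.array([[20.0, 10.0], [120.0, 140.0]])
covs = np.array([np.eye(2) * 4.0, np.eye(2) * 9.0])
pixels = np.array([[22.0, 11.0], [118.0, 138.0], [19.0, 9.0]])
print(max_likelihood_classify(pixels, means, covs))  # [0 1 0]
```

This is why the training samples matter so much: the only things the classifier knows about each class are the mean and covariance we built from them.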
Results
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJjLrFuPvYlLFKLlUPBuvcPgWMSO6NW5Wul8rdj7Tk4BaziW3jaRxMkLRS3zjSGZ6YOfp4Ev7ARaX6Rn0TN3VtATaEIfnUxf5cNvrTy148_DC1Yq1gFAEnpyLRLJo2jdL7m_aAM3ps_a0/s320/LULC+map.png)
Figure 9: The final supervised classification map of Eau Claire and Chippewa counties.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjb2lOdogggSnUninhOil4MNqOsuAajJkMLVJZ_4rWwFDYd4rC7eGxhNIAIPAPKkyWMyDvDzUPxJow2m_k8bNHo_ztb2WD9Siqw5aeeiUJdmXllEs4l2cnhBIXmSLDY-WiBILlrYtaddSE/s320/9.PNG)
Figure 10: A comparison of the new supervised classification image (left) with the unsupervised classification image from Lab 4 (right). There is quite a difference between the two.
Sources
The Landsat satellite imagery is from the Earth Resources Observation and Science Center, United States Geological Survey.