Thursday, April 21, 2016

Advanced Remote Sensing Lab 10: Advanced Classifiers 3

Goals and Background

The main goal of this lab is to learn how to use two advanced classification algorithms. These advanced classifiers are very robust and can greatly increase the accuracy of the LULC classification. The main objectives for the lab are as follows:
1) Demonstrate how to perform an expert system/decision tree classification with the use of ancillary data
2) Demonstrate how to develop an artificial neural network to perform complex image classification

Methods

Part 1: Expert system classification 

 Section 1: Development of a knowledge base to improve an existing classified image

The first part of the lab works with a method called expert system classification. This is a very robust classification method that uses not only the remotely sensed imagery but also ancillary data to produce a more accurate final classification.
To begin, we were given a classified image of the Chippewa Valley (Figure 1). We examined the image and found a number of errors in the LULC classification. These included urban areas labeled as residential when they were clearly industrial, and many agricultural areas labeled as green vegetation and vice versa. These errors would be corrected by running the imagery through the expert system classification to improve the accuracy and produce a more realistic classification of the area.
Figure 1 This is a classified image of the Chippewa Valley with errors in the LULC classification.

The first step in running the expert system classification is to create hypotheses and rules for each of the classes we are interested in. To do so, we open the knowledge engineer window (Figure 2). This was repeated for each of the six classes: water, residential, forest, green vegetation, agriculture, and other urban.
Figure 2 This is the window used to set up the rules for each of the hypotheses or classes in the knowledge file.

Section 2: The use of ancillary data in developing a knowledge base 

Once those six hypotheses or classes are entered, the next step is to write arguments that make sure the other urban class does not get classified as residential urban. Two arguments are written: one under the residential hypothesis stating that residential cannot be classified as other urban, and one under the other urban hypothesis stating that other urban cannot be classified as residential. These arguments help the classifier better distinguish between the two classes. The same procedure was followed for green vegetation and agriculture: an argument was added stating that agriculture cannot be classified as green vegetation, and the opposite, that green vegetation cannot be agriculture, again to help the classifier separate those classes more accurately and correct the errors seen in the original classified image. Figure 3 is the final process tree for the expert system classifier. This knowledge file was saved for use in the next step of the lab.
Figure 3 This is the final process tree or knowledge file for the expert system classification.
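The knowledge engineer expresses these rules graphically, but conceptually each one is a conditional test against a raster layer. Below is a minimal Python sketch of what such mutually exclusive rules might look like; the class codes, ancillary layers, and thresholds are all hypothetical stand-ins, not the actual contents of the knowledge file:

```python
import numpy as np

# Hypothetical class codes mirroring the six hypotheses in the knowledge file.
WATER, RESIDENTIAL, FOREST, GREEN_VEG, AGRICULTURE, OTHER_URBAN = range(1, 7)

def apply_rules(classified, housing_density, crop_mask):
    """Reclassify pixels with expert-system-style mutually exclusive rules.

    classified      -- 2D array of initial class codes
    housing_density -- hypothetical ancillary raster (e.g. dwellings per cell)
    crop_mask       -- hypothetical boolean raster flagging cropland parcels
    """
    out = classified.copy()
    # Residential requires dense housing; otherwise it becomes other urban.
    out[(classified == RESIDENTIAL) & (housing_density < 10)] = OTHER_URBAN
    # Other urban with dense housing is reassigned to residential.
    out[(classified == OTHER_URBAN) & (housing_density >= 10)] = RESIDENTIAL
    # Green vegetation inside cropland parcels is agriculture, and vice versa.
    out[(classified == GREEN_VEG) & crop_mask] = AGRICULTURE
    out[(classified == AGRICULTURE) & ~crop_mask] = GREEN_VEG
    return out
```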

Section 3: Performing expert system classification 

To run the expert system, the knowledge classifier window is opened and the knowledge file (Figure 3) is brought in. This is where you can select which hypotheses or classes you want to include in the classification; in this case we included all of them (Figure 4). After the classes are selected, click OK and the next window opens (Figure 5). Here we set the cell size to 30 by 30 and select the location and name of the final output classified image. Hit OK and the classifier runs. Figure 11 in the results section is the final classified map.
Figure 4 We include all of the classes for this analysis. 
Figure 5 This is the dialog box used to pick the output location of the classified image, as well as to set some other parameters.

Part 2: Neural network classification 

Section 1: Performing neural network classification with a predefined training sample 

The other classification method we explored in this lab is neural network classification. This portion of the lab was run in ENVI 4.6.1, another remote sensing software package. The first step was to open an image file provided by Dr. Cyril Wilson. Once the image was open, the next step was to import an ROI file. These ROIs are training samples that Dr. Wilson collected from this imagery (Figure 6). Once the ROIs are open, neural network classification is chosen from the supervised classification drop-down menu in ENVI. Figure 7 is the parameter window for the classification, where the number of iterations, the training rate, and the output location for the classified image are entered. Once the parameters are entered, the classification is run; Figure 8 is the result.
Figure 6 This is the image given to us by Dr. Wilson; the ROIs are displayed on top of the image.
Figure 7 This is the neural network dialog box where the majority of the parameters are entered.
Figure 8 The original false infrared image is on the left and the classified image is on the right.
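ENVI's neural network classifier is its own implementation, but the general workflow it performs — train a multilayer network on ROI spectra, then classify every pixel — can be sketched with scikit-learn. Everything below (array shapes, hidden layer size, learning rate) is a hypothetical stand-in for the actual image and dialog parameters:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical inputs: image as (rows, cols, bands) plus ROI training pixels.
rng = np.random.default_rng(0)
image = rng.random((200, 200, 6))        # stand-in for the ENVI image
train_pixels = rng.random((300, 6))      # spectra sampled from the ROIs
train_labels = rng.integers(0, 5, 300)   # one class label per ROI pixel

# learning_rate_init and max_iter loosely mirror the "training rate" and
# iteration count entered in the ENVI parameter window.
net = MLPClassifier(hidden_layer_sizes=(16,), learning_rate_init=0.2,
                    max_iter=1000, random_state=0)
net.fit(train_pixels, train_labels)

# Classify every pixel by flattening to (n_pixels, bands), then reshape back.
flat = image.reshape(-1, image.shape[-1])
classified = net.predict(flat).reshape(image.shape[:2])
```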

Section 2: Creating training samples and performing NN classification (Optional challenge section) 

Additional practice and experimentation with the parameters for the neural network classification was done using an image of the University of Northern Iowa campus (Figure 9). I opened the image, and instead of having ROIs provided, I created them myself. I made three classes: grass, roofing, and concrete/asphalt. The same procedure for running the classification was followed as above. Figure 10 is the resulting classified image based on the three ROIs or classes I created.
Figure 9 This is the original false infrared image of the Northern Iowa campus.
Figure 10 This is the original image on the left and the classified image on the right based on the ROIs I collected.

Results

Figure 11 This is the final LULC map using the expert system classification method.

Sources

The Landsat satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.
The QuickBird high resolution image of a portion of the University of Northern Iowa campus is from the Department of Geography, University of Northern Iowa.


Thursday, April 14, 2016

Advanced Remote Sensing: Lab 9

Goals and Background

The main goal of this lab is to learn the skills involved in performing object-based classification in eCognition, a top-of-the-line image processing tool. The topics explored in this lab are fairly new to the remote sensing frontier, integrating both spectral and spatial information to extract land surface features from remotely sensed images, in this case satellite imagery. The main objectives in this lab are as follows:
1) Segment an image into homogeneous spatial and spectral clusters
2) Select appropriate sample objects to train a random forest classifier
3) Execute and refine object-based classification output from random forest classifier
4) Select appropriate sample objects to train a support vector machine classifier
5) Execute and refine object-based classification output from support vector machine classifier

Methods

Part 1: Create a new project

The first portion of the lab is all about importing imagery into eCognition and setting up a new project. I imported the image, set the resolution to 30 m/pixel, and made sure the geocoding box was selected. Next I changed the color scheme to false infrared by setting a 4,3,2 band combination in the image layer mixing window.
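Outside eCognition, the same 4,3,2 false infrared composite can be sketched in a few lines of Python; the band stack here is a random placeholder for the real image:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder band stack ordered band 1..6; a real stack would be read from file.
bands = np.random.rand(6, 400, 400)

# False infrared: NIR (band 4) -> red, red (band 3) -> green, green (band 2) -> blue.
composite = np.dstack([bands[3], bands[2], bands[1]])
plt.imshow(composite)
plt.title("4,3,2 false infrared composite")
plt.show()
```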

Part 2: Segmentation and sample collection 

Section 1: Create image objects 

The first part of the analysis is creating image objects. This is a grid placed over the image, made up of many polygons whose shapes are based on parameters set by the user. In order to create this grid, a process tree is created. A process tree is where all the commands or tools the user wants eCognition to run are placed. It is called a tree because it is modeled after a family tree, with parents and children making up the hierarchy of the operations. I created a new process and added a child to it. The child is labeled generate objects and is the tool that creates the image object grid. In the generate objects window, shape was set to 0.3, compactness was set to 0.5, and the scale parameter was set to 9, as seen below in Figure 1. The settings for shape and compactness are decided through trial and error; the goal is to find a combination where the image object polygons fit tightly around homogeneous objects and pixel values in the imagery. After these are set, I hit execute, and Figure 2 shows what eCognition creates over the imagery.
Figure 1 This is the window used to set up the generate objects process.
Figure 2 This is the image object grid that eCognition creates over the image.
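eCognition's multiresolution segmentation algorithm is proprietary, but the idea of growing compact, spectrally homogeneous objects can be illustrated with the SLIC superpixel algorithm from scikit-image. This is only an analogous sketch, with a random image standing in for the real scene:

```python
import numpy as np
from skimage.segmentation import slic, mark_boundaries

# Random 3-band composite scaled to [0, 1] as a stand-in for the real scene.
img = np.random.rand(300, 300, 3)

# compactness plays a role similar to eCognition's shape/compactness weights:
# higher values favor tight, regular objects; lower values favor spectral purity.
segments = slic(img, n_segments=800, compactness=10.0)

# Overlay the object boundaries on the image, like the grid in Figure 2.
boundary_view = mark_boundaries(img, segments)
```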

Section 2: Training Sample selection

The next step after the segmentation is to collect training samples. First I created LULC classes in the class hierarchy: forest as dark green, agriculture as pink, urban as red, water as blue, green vegetation as light green, and bare soil as yellow (Figure 3). Once the classes are created, training samples can be collected. This is done by selecting a class and then double-clicking on polygons in the image object grid that contain that LULC class (Figure 4). This is done for each class, and I collected the following number of samples for each class:
  • Forest: 10
  • Urban: 20
  • Water: 10
  • Green vegetation: 15
  • Bare soil: 15
Figure 3 These are the classes created in eCognition.
Figure 4 You can see the training samples collected for each class by color.

Part 3: Implement object classification 

Section 1: Insert and train Random Forest classifier based on sample objects 

The object-based classification process is fairly robust and the process tree is rather large, but most of the step-by-step process to complete the classification is explained here. The first step is to append a new process after the generate objects process and label it RF classification, for random forest classification. Add a child under this RF process and label it train RF classifier. Figure 5 is the window to set up the train RF classifier child. The training samples are brought in via the feature drop-down. Next I selected the features I wanted included from the select features window (Figure 6).
Figure 5 Window to set up the RF classifier trainer.
Figure 6 This is the window to select the features used in the classification.

Section 2: Perform Random Forest Classification 

Next, another child is added to the RF classification process, called apply RF classifier. Once all of the parameters are entered, the classification can be run by clicking execute on the apply RF classifier child. Figure 7 is the complete process tree for the RF classification.
Figure 7 This is the final process tree for the RF classifier.
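eCognition trains its random forest internally, but the train/apply split in the process tree maps directly onto scikit-learn's API. The sketch below assumes hypothetical per-object features (for example, mean band values per segment) in place of the features picked in Figure 6:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-object features (e.g. mean band values per segment) and the
# labels of the sample objects collected in Part 2, Section 2.
rng = np.random.default_rng(1)
object_features = rng.random((60, 6))    # 60 training objects, 6 features each
object_labels = rng.integers(0, 5, 60)   # class code per training object

# "train RF classifier" step.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(object_features, object_labels)

# "apply RF classifier" step: predict a class for every image object.
all_object_features = rng.random((5000, 6))
predicted = rf.predict(all_object_features)
```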

Section 3: Refine classification by editing signatures 

Once the classifier has been run, if there are errors the user can go in and manually change the class of an image object in the image. This is a simple process: select the class where the error occurred and the class you want it changed to, then click on the polygon and it will change.

Section 4: Export classified image to Erdas Imagine format 

The classified image created in eCognition (Figure 8) does not have the color scheme I want for comparing this classification method to the supervised and unsupervised methods done in labs 4 and 5. To fix this, the classified image is exported to ERDAS Imagine, where I reassigned the colors.
Figure 8 This is the RF classified image created by eCognition.

Part 4: Support Vector Machines

Section 1: Save project and modify Process Tree 

The final part of this lab involved running a support vector machine classification instead of the random forest method. To do this, the process tree from the RF classification is modified: the same steps are followed as for the RF classification, except when the classifier is being trained. Figure 10 is the window where this change occurs, and Figure 11 is the final process tree for the SVM classification. A comparison of the RF and SVM final classified images can be found below in the results section (Figure 12).
Figure 10 This is the window to set up the SVM classifier trainer.
Figure 11 This is the final process tree for the SVM classification method.
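In scikit-learn terms, the modification amounts to swapping the estimator while keeping the rest of the pipeline. A minimal sketch, reusing the same kind of hypothetical object features as in the RF example:

```python
import numpy as np
from sklearn.svm import SVC

# Same hypothetical per-object features and labels as in the RF sketch.
rng = np.random.default_rng(1)
object_features = rng.random((60, 6))
object_labels = rng.integers(0, 5, 60)

# Only the estimator in the "train" step changes; the rest of the pipeline
# stays the same, mirroring the modified process tree.
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(object_features, object_labels)
predicted = svm.predict(rng.random((5000, 6)))
```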

Results

Figure 12 The final RF classified image from ERDAS is on the left and the SVM classified image is on the right.

Sources

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Advanced Remote Sensing Lab 8: Advanced Classifiers 1

Goals and Background

The main goal of this lab is to gain knowledge and understanding of how to use two classification algorithms. These classifiers make use of extremely robust algorithms which have proven effective at increasing the classification accuracy of remotely sensed imagery, and they are much more effective and accurate than traditional unsupervised or supervised classification methods. The two main objectives for the lab were as follows:
1. Learn how to divide a mixed pixel into fractional parts to perform spectral linear unmixing
2. Demonstrate how to use a fuzzy classifier to help solve the mixed pixel problem

Methods

Part 1: Linear spectral unmixing

For the first portion of the lab, ENVI (Environment for Visualizing Images) software is used. The first step is to perform linear spectral unmixing on an ETM+ satellite image of Eau Claire and Chippewa Counties. After the image is opened in ENVI, the available band list window opens; in this case we have six bands, 1-5 and 7. Once this list is open, we select a band combination of 4, 3, 2 so that the image displays in false infrared. Once the load band button is clicked, the image opens in three separate viewers, each with a different zoom level, aiding in the analysis of the image. After the viewer is open with the three zoom levels, the analysis begins.

Section 2: Production of endmembers from the ETM+ image 

First the image had to be converted to principal components to reduce and remove noise from the original image. This removal of error helps to increase the accuracy of the image classification conducted later in the lab. To convert the image to principal components, click compute new statistics -> rotate from the transform drop-down menu. This converts the image to principal components, and when brought into the band viewer there will be an additional 6 principal component bands besides the original 6 bands (Figure 1).
Figure 1 In this band list you can see the additional 6 PC bands.
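The principal component rotation itself can be sketched with scikit-learn; the array below is a random placeholder for the 6-band ETM+ stack:

```python
import numpy as np
from sklearn.decomposition import PCA

# Random placeholder for the 6-band ETM+ stack, shaped (rows, cols, bands).
image = np.random.rand(400, 400, 6)
flat = image.reshape(-1, 6)

# Rotate the 6 correlated bands into 6 uncorrelated principal components;
# most scene variance lands in the first few PCs, leaving noise in the rest.
pca = PCA(n_components=6)
pc_bands = pca.fit_transform(flat).reshape(400, 400, 6)
```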
Once the principal component bands are created, the next step is to examine the scatter plots and find areas of agriculture, bare soil, and water in the image that correspond to the selected pixel values in the histograms. In order to do this, I opened the scatter plot for PC bands 1 and 2 by selecting 2D scatter plots -> scatter plot band choice window. PC band 1 is selected as the X value and PC band 2 as the Y value. Once the scatter plot is open, the next step is to collect end-member samples. End-members are collected by drawing a polygon or circle on the scatter plot over the pixel values. This is a bit of an experimental process, as you don't know which LULC classes will be contained in the pixel values you select in the scatter plot; this is why the map window is open, so you can compare the selected pixel values to LULC classes in the map. When selecting end-members you can change the color of your selection and create multiple selections in the same scatter plot. Each of these selections will highlight the corresponding areas of the map in that color. Figure 2 is the scatter plot showing three end-member selections: green corresponds to agricultural areas in the map, yellow corresponds to bare soil areas, and blue corresponds to water features.
Figure 2 This is the first set of end-member selections I conducted, using PC bands 1 and 2.
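The same selection idea can be sketched with matplotlib: plot two PC bands against each other, then test which pixels fall inside a drawn polygon. The PC values and polygon vertices here are arbitrary placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path

# Stand-in PC bands; the real values come from the principal component step.
pc_bands = np.random.randn(400, 400, 6)
pc1, pc2 = pc_bands[..., 0].ravel(), pc_bands[..., 1].ravel()

plt.scatter(pc1, pc2, s=1, color="gray")
plt.xlabel("PC band 1")
plt.ylabel("PC band 2")

# A hypothetical polygon drawn over one cluster of pixel values; the mask it
# yields is the end-member ROI that highlights the matching map pixels.
polygon = Path([(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)])
roi_mask = polygon.contains_points(np.column_stack([pc1, pc2]))
plt.show()
```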

After locating the agricultural, bare soil, and water LULC areas in the map, the next objective was to find the urban areas. Instead of using PC band 1 as the X and band 2 as the Y, I used PC band 3 as the X and band 4 as the Y. Figure 3 below is the resulting scatter plot of PC bands 3 and 4. Using the same process as before, I selected pixel values in the scatter plot, trying to highlight only the urban areas in the map. This proved more difficult than selecting the other LULC classes via the scatter plot.
Figure 3 On the right is the scatter plot for PC bands 3 and 4 with the end-member selection. On the left, the selected urban areas are highlighted in purple.
Once I finished selecting the end-members, the ROIs were saved to be used next when conducting the linear spectral unmixing process. Figure 4 shows the window used to save the ROIs.
Figure 4 This is the save ROI window. It tells you how many pixels are selected in each of the end-member selections made earlier in the scatter plots.

Section 3: Implementation of linear spectral unmixing 

The last step in the unmixing process is to run the linear spectral unmixing. This is done by going to spectral -> mapping methods -> linear spectral unmixing. Bring in the original satellite image and then load the ROIs saved in the previous step. ENVI takes these two inputs and creates 4 separate output images, each with a different LULC class highlighted. Figures 5-8 are the resulting images. The brighter an area is in an image, the more likely it belongs to that specific LULC class. For example, in the water image, water features show up as bright white while the other LULC classes are darker grey and black.
Figure 5 This is the bare soil fractional image.
Figure 6 This is the fractional image for water. As you can see, the water features were not picked out very well.
Figure 7 This is the forest fractional image.
Figure 8 This is the urban/built up image.
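Mathematically, linear spectral unmixing models each pixel's spectrum as a weighted sum of the end-member spectra and solves for the weights. A minimal unconstrained least-squares sketch, with random placeholders for the end-member spectra and the image:

```python
import numpy as np

# Hypothetical end-member spectra, one column per end-member (4 classes,
# 6 bands); in the lab these come from the saved scatter-plot ROIs.
E = np.random.rand(6, 4)

# Random placeholder image, flattened to one spectrum per pixel.
image = np.random.rand(400, 400, 6)
flat = image.reshape(-1, 6).T            # shape (bands, pixels)

# Unconstrained least squares: solve E @ fractions ~ spectrum for every pixel.
fractions, *_ = np.linalg.lstsq(E, flat, rcond=None)

# One fractional abundance image per end-member, as in Figures 5-8.
fraction_images = fractions.reshape(4, 400, 400)
```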

Part 2: Fuzzy Classification 

The second part of the lab was learning how to use a fuzzy classifier. The fuzzy classifier performs basically the same task as linear spectral unmixing: it is used to identify mixed pixel values when performing an accuracy assessment. It takes into consideration the fact that there are mixed pixels within the image and that it is nearly impossible to assign those to the correct LULC class perfectly. It uses membership grades, where a pixel's class is decided based on whether its values are closer to one LULC class than to the others. There are two main steps in this process.

Section 1: Collection of training signatures to perform fuzzy classification 

The first step is to collect training signatures to perform the fuzzy classification. Just as in lab 5, I collected training samples; however, this time the process was a bit different. Instead of collecting only homogeneous samples as in lab 5, we collected both homogeneous and mixed samples. Collecting both types of samples gives the program a better idea of how classes occur in the real world, which results in a more accurate classification overall. For this lab I collected 4 water, 4 forest, 6 agriculture, 6 urban, and 4 bare soil samples. After the samples are collected they are merged, just as in lab 5.

Section 2: Performing fuzzy classification

Step 2 is performing the fuzzy classification, which is a pretty straightforward process. Open the supervised classification window in ERDAS Imagine and pick fuzzy classification. Then I input the signature file that was created in the previous step. The parametric rule is set to maximum likelihood and the non-parametric rule is set to feature space. The best classes per pixel is set to 5, and then the fuzzy classification is run. Once this is done, the final step is to run fuzzy convolution, which takes the distance file into consideration and creates the final LULC classified image. Figure 9 is the final fuzzy classification image brought into ArcGIS and made into a map.
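ERDAS Imagine's fuzzy classifier has its own decision rules, but the core idea of membership grades can be sketched simply: grade each pixel by its spectral distance to every class signature instead of forcing a single hard label. The class means and pixel below are random placeholders:

```python
import numpy as np

# Hypothetical mean spectra from the merged signatures (5 classes, 6 bands).
class_means = np.random.rand(5, 6)
pixel = np.random.rand(6)

# Membership grades from inverse spectral distance: nearer classes receive
# higher grades, and the grades are normalized to sum to 1.
dist = np.linalg.norm(class_means - pixel, axis=1)
inv = 1.0 / dist
membership = inv / inv.sum()

# Ranked classes for this pixel, like setting "best classes per pixel" to 5.
best_classes = np.argsort(membership)[::-1][:5]
```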

Results

Figure 9 This is the final LULC map for the fuzzy classification method.

Sources

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.