Goals and Background
The main goal of this lab is to learn how to use two advanced classification algorithms. These classifiers are very robust and can greatly increase the accuracy of a LULC classification. The main objectives for the lab are as follows:
1) Demonstrate how to perform an expert system/decision tree classification with the use of ancillary data
2) Demonstrate how to develop an artificial neural network to perform complex image classification
Methods
Part 1: Expert system classification
Section 1: Development of a knowledge base to improve an existing classified image
The first part of the lab is working with a method called expert system classification. This is a very robust classification method that uses not only the remotely sensed imagery but also ancillary data to get a more accurate final classification.
To begin we were given a classified image of the Chippewa Valley (Figure 1). We examined the image and found a number of errors in the LULC classification. These included urban areas that were labeled as residential when they were clearly industrial, and many agricultural areas that were labeled as green vegetation and vice versa. These errors would be corrected by running the imagery through the expert system classification to improve the accuracy and produce a more realistic classification of the area.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7nVB0p0hAyjJveXtpvfaL2ldTZHppj2B0n0rkL4cSklnGXnHjxsN62xlpq5oFLgGN2SJgo4J80JHPAl_hKySpWmqmqMxklQB4wZnzz08tdRliLsj3W_3ClWtR-1COPmCL-ziLe__3UxY/s400/13.PNG)
Figure 1 This is a classified image of the Chippewa Valley with errors in the LULC classification.
The first step in running the expert system classification is to create hypotheses and rules for each of the classes we are interested in. To do so we open the Knowledge Engineer window; Figure 2 shows what this window looks like. This was repeated for each of the six classes, which are water, residential, forest, green vegetation, agriculture, and other urban.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYYO7WdfdhY1pdcQF6R9pdIQuMqTMwkbdkGTcKY2KgTGh2fA7cF9VDy9I21hWMvVsDWmxsT5Ey3I4S4aCCjSmdijBmLnREn3iR22-L9F9SXQJAMUtJRLeCXMOOGnj36cFvL1LHQjHmB1M/s400/14.PNG)
Figure 2 This is the window used to set up the rules for each of the hypotheses, or classes, in the knowledge file.
Section 2: The use of ancillary data in developing a knowledge base
Once those six hypotheses, or classes, are entered, the next step is to write arguments that make sure the other urban class does not get classified as residential urban. Two arguments are written: one under the residential hypothesis saying that residential cannot be classified as other urban, and one under the other urban hypothesis saying that other urban cannot be classified as residential. These arguments help the classifier better distinguish between the two classes. The same procedure was followed for green vegetation and agriculture: an argument was added to agriculture saying that agriculture cannot be classified as green vegetation, and the opposite as well, that green vegetation cannot be agriculture. Again, this helps the classifier separate those classes more accurately and corrects the errors seen in the original classified image we examined. Figure 3 is the final process tree for the expert system classifier. This knowledge file was saved for use in the next step of the lab.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUqCrF2F1Kd7ybZ961rNMdDONt-gUZkFpGPzM2ScszdK9y9ZazGuh_b2BH1Y_FacDbpqzZTtBqm9z1n7SPjk3Af2plXwet09wBqN6O3-HyjW3_PxywcWeO1l4lNAJ6GWoVWF9aZ3m_veQ/s400/2.PNG)
Figure 3 This is the final process tree or knowledge file for the expert system classification.
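The paired-argument logic in the knowledge file can be sketched in code. This is only an illustrative sketch, not the ERDAS Knowledge Engineer itself: the ancillary variables (a population density layer and a cropland mask) and their thresholds are hypothetical assumptions, since the lab does not specify which ancillary data drive each rule.

```python
# Minimal sketch of expert-system rule logic, assuming hypothetical
# ancillary inputs: population density and a cropland mask.

def apply_rules(initial_class, pop_density, in_crop_field):
    """Return a corrected class label for one pixel."""
    # Argument pair 1: separate residential from other urban.
    if initial_class == "residential" and pop_density < 100:
        return "other urban"        # too sparse to be residential
    if initial_class == "other urban" and pop_density >= 100:
        return "residential"
    # Argument pair 2: separate agriculture from green vegetation.
    if initial_class == "green vegetation" and in_crop_field:
        return "agriculture"        # ancillary mask says cropland
    if initial_class == "agriculture" and not in_crop_field:
        return "green vegetation"
    return initial_class            # all other classes pass through

# Example: a pixel mislabeled residential in a low-density industrial area
print(apply_rules("residential", pop_density=20, in_crop_field=False))
# prints: other urban
```

Each rule in a pair fires only when the initial label and the ancillary evidence disagree, which is how the mutual-exclusion arguments correct the residential/other urban and agriculture/green vegetation confusion.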
Section 3: Performing expert system classification
To run the expert system, the Knowledge Classifier window is opened once again and the knowledge file from Figure 3 is brought in. This is where you can select which hypotheses, or classes, you want to include in the classification; in this case we included all of them (Figure 4). After the classes to include are selected, click OK and the next window opens (Figure 5). Here we set the cell size to 30 by 30 and select the location and name of the final output classified image. Hit OK and the classifier runs. Figure 11 is the final classified map.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuMDF1iTfelcqP6PCnNrtlr5IB5QPxPnVs6hOKYigGmbE3bJp7Gw-FlhzZ0OGv1yO_vHYHAQcJg6xciUru1IVOLoQMbGkXvbtKDqiDNq01mBXouF7X_12SdFdkLcaHop3IZWwgdBtMz4k/s400/15.PNG)
Figure 4 We include all of the classes for this analysis.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlMfX1QQEs-acs1HW4eUGMMg5HhftsmaUXXkfQPFqhhf0OKoK_PyRcJlQUchhcbA-b8dpfrcfa9uhtZV6aCOy4ILFewQrxWJrIcD42euz6hxoHy8sNhDrK06l6h7riBPSeQHpMn-50/s400/16.PNG)
Figure 5 This is the dialogue box to pick the output location of the classified image as well as set some other parameters.
Part 2: Neural network classification
Section 1: Performing neural network classification with a predefined training sample
The other method of classification we explored in this lab is called neural network classification. This portion of the lab was run in ENVI 4.6.1, another remote sensing software package. The first step was to open an image file provided by Dr. Cyril Wilson. Once the image was open, the next step was to import an ROI file. These ROIs are training samples that Dr. Wilson collected from this imagery (Figure 6). Once the ROIs are open, neural network classification is chosen from the supervised classification drop-down in ENVI. Figure 7 is the parameter window for the classification, where the number of iterations, the training rate, and the output location for the classified image are entered. Once the parameters are set, the classification is run; Figure 8 is the result.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirl8nKEIAUule0gc6EzNagwNninD4AED4p4uYm7GXNhRndA5p9lznccNp1Cl6RqoGWKy1IeceuahETySG9w4GLd3SLHGLvgPfuPzmwUdULvMqxgXWDNum1zgvpCmoOCwI88bAmoqmgRpM/s400/17.png)
Figure 6 This is the image given to us by Dr. Wilson; the ROIs are displayed on top of the image.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgt6L4dTbrKlJ57LwU9oT9aVloAux-TYucs0AcVv305HWzvTMPCCsm4O3xg5Oq_XnqX8uc8uHcmfzkHGzJUO_8pi1u8-MC2Vkmvn0xM-bRwjZEyIWvr0E4iwutB2NF6UcCX0Te0Ie6PgWk/s400/18.PNG)
Figure 7 This is the neural network dialogue box where the majority of the parameters are input.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK_uNSxMq_O6IH9SIrKUDr2Kli0ztqOS9k2ZWxt0FMzL-yCiwUA9tqknlqF1MdZUT4zcJ-NkUPQ7p2O1GoIjV8h6b-335O0vRSv3YsSLL6IAEdVdYcqqkLCfBagwmeViRL9J1JqRb4xjI/s400/7.PNG)
Figure 8 The original false infrared image is on the left and the classified image is on the right.
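To show what the training rate and iteration count actually do, here is a stripped-down sketch of the idea behind a neural network classifier: a single sigmoid neuron trained by gradient descent. This is far simpler than ENVI's multilayer network, and the two-band reflectance samples below are made-up values, but the two parameters play the same role as in the ENVI dialogue box.

```python
import math

# Toy training samples: (band1, band2) reflectance -> class
# 1 = vegetation (high NIR), 0 = water (low NIR). Values are hypothetical.
samples = [((0.10, 0.60), 1), ((0.20, 0.70), 1), ((0.15, 0.65), 1),
           ((0.05, 0.10), 0), ((0.08, 0.05), 0), ((0.04, 0.08), 0)]

w = [0.0, 0.0]
b = 0.0
training_rate = 0.5   # step size, as in the ENVI parameter window
iterations = 1000     # number of training passes over the samples

def predict(x):
    """Feed-forward pass: weighted sum through a sigmoid activation."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(iterations):
    for x, y in samples:
        err = predict(x) - y              # gradient of log-loss w.r.t. z
        w[0] -= training_rate * err * x[0]
        w[1] -= training_rate * err * x[1]
        b -= training_rate * err

# After training, every toy training pixel is classified correctly.
correct = sum((predict(x) > 0.5) == (y == 1) for x, y in samples)
print(correct, "of", len(samples))        # prints: 6 of 6
```

A larger training rate takes bigger steps per update (faster but less stable), while more iterations give the weights more chances to converge; the same trade-off applies when setting these parameters in ENVI.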
Section 2: Creating training samples and performing NN classification (Optional challenge section)
Additional practice and experimentation with the parameters for the neural network classification was done using an image of the University of Northern Iowa campus (Figure 9). I opened the image and, instead of having ROIs provided, created them myself. I made three classes: grass, roofing, and concrete/asphalt. The same procedure for running the classification was followed as above. Figure 10 is the resulting classified image based on the three ROIs, or classes, I created.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrF2CAyloHy4wG1Zoash3Zl5IrYGk5EKuhRrr4LhRghRAB5ZAIEVXfQ-xsWweYMDZa1oRgHQYVp7ez2lWNTewrbRbot3SmERWBk5adh16_GpROZ_8kbuznumLcrJday8_hFBDwk2JZmCI/s400/19.png)
Figure 9 This is the original false infrared image of the Northern Iowa campus.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyk8J8M5uV0lsu876rtMdahp2aVSV55UneZ1-neddHYMsgF6oXJyTBBea0rb2c9i7hocxc7cpjFVjWXkHNyP56QzZFsh8ZlIyUzybEzQ9WDgsHkLhIhZ4cNGBlkHYda4RNBQ-1ckDSHyM/s640/9.PNG)
Figure 10 This is the original image on the left and the classified image on the right based on the ROIs I collected.
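The role of the hand-drawn ROIs can be sketched as follows. Note this sketch deliberately substitutes a simpler classifier, assignment to the nearest class-mean spectrum (a minimum-distance check), for the neural network, purely to show how ROI pixels become per-class training statistics; all pixel values below are hypothetical.

```python
import math

# Hypothetical (red, NIR) pixel values drawn from the three ROIs I created.
rois = {
    "grass":            [(0.10, 0.60), (0.12, 0.55), (0.09, 0.62)],
    "roofing":          [(0.30, 0.32), (0.28, 0.30), (0.33, 0.35)],
    "concrete/asphalt": [(0.45, 0.48), (0.50, 0.52), (0.47, 0.50)],
}

def mean(pixels):
    """Mean spectrum of a list of (red, NIR) pixels."""
    return tuple(sum(p[i] for p in pixels) / len(pixels) for i in range(2))

means = {cls: mean(pixels) for cls, pixels in rois.items()}

def classify(pixel):
    """Assign a pixel to the class with the nearest mean spectrum."""
    return min(means, key=lambda cls: math.dist(pixel, means[cls]))

print(classify((0.11, 0.58)))   # prints: grass
```

ENVI's neural network consumes the same ROI samples, but fits network weights to them instead of simply storing class means, which is what lets it learn the more complex class boundaries described above.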
Results
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjysT3JLpzlUXBhhXyoxlBYUxxbsNjHnb3VdIMu6wYLXnvBfWsXDG3CQgoHM3yU_D7SPn0_HuNf9rQ0Pj6g3PaFCM2SPvRCy91XaJ8_IN_MgFNgF2DODfkqzK_1khvV-ZBMzWSMZohaA0/s640/Map.png)
Figure 11 This is the final LULC map using the expert system classification method.
Sources
The Landsat satellite images are from the Earth Resources Observation and Science Center, United States Geological Survey.
The QuickBird high resolution image of a portion of the University of Northern Iowa campus is from the Department of Geography, University of Northern Iowa.