Thursday, May 5, 2016

Advanced Remote Sensing Lab 12: Radar Remote Sensing

Goals and Background

The main goal of this lab is to gain a basic understanding of how to perform preprocessing and processing of radar imagery. The main objectives are as follows:
1) Reducing noise using a speckle filter
2) Spectral and spatial enhancement
3) Multi-sensor fusion
4) Texture analysis
5) Polarimetric processing
6) Slant range to ground range conversion

Methods

Because of the large number of tools and techniques used in this lab, the results from each tool are included in the section discussing that tool rather than in a separate results section, as I have done for most of my other posts. This makes it easier to compare the imagery from before and after each tool or technique is applied.

Part 1: Speckle reduction and edge enhancement 

Section 1: Speckle filtering

In order to conduct speckle reduction we made use of the radar speckle suppression tool in ERDAS Imagine. We ran this tool three times, using the speckle-reduced image from each run as the input for the next. The parameters were changed slightly each time the tool was run, as listed below:

1st Speckle Analysis
  • Coef. of Var. Multiplier = 0.5
  • Output Option = Lee-Sigma
  • Coef. of Variation = 0.275
  • Window/filter size = 3 x 3
2nd Speckle Analysis
  • Coef. of Var. Multiplier = 1.0
  • Output Option = Lee-Sigma
  • Coef. of Variation = 0.195
  • Window/filter size = 5 x 5
3rd Speckle Analysis
  • Coef. of Var. Multiplier = 2.0
  • Output Option = Lee-Sigma
  • Coef. of Variation = 0.103
  • Window/filter size = 7 x 7

The speckle tool was run multiple times to decrease the speckle effect with each pass, in turn increasing the quality of the imagery and making it easier to interpret (Figures 1 and 2).
Figure 1 This is the original image with a high amount of speckle.
Figure 2 This is the despeckled image after running through the speckle processing three times.
The easiest way to see the effect the speckle filter is having is by looking at the histogram of the radar image. A histogram with a lot of speckle will have spikes all over and will most likely be close to Gaussian, with only one hump. The more times the speckle tool is run, the fewer spikes the histogram should have, and it becomes multi-modal (multiple bumps) as the filter better separates the pixel values in the imagery. This can be seen below in Figure 3.
Figure 3 The upper left histogram is the original histogram. Upper right is after one despeckle. Lower left is after 2 despeckles and lower right is after 3 despeckles. You can see the reduction of spikes and a more multi-modal shape.
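For anyone curious what a speckle filter actually does to the pixel values, below is a minimal Python sketch of a plain Lee filter, a simpler relative of the Lee-Sigma option used in ERDAS. The coefficient-of-variation values and window sizes mirror the parameters listed above; the function name and the three-pass call in the comment are only illustrative.

  import numpy as np
  from scipy.ndimage import uniform_filter

  def lee_filter(img, size=3, cv_noise=0.275):
      # Smooth toward the local mean where the local variance is low relative
      # to the expected speckle (noise) variance; keep detail where it is high.
      img = img.astype(float)
      local_mean = uniform_filter(img, size)
      local_var = uniform_filter(img ** 2, size) - local_mean ** 2
      noise_var = (cv_noise * local_mean) ** 2
      weight = local_var / np.maximum(local_var + noise_var, 1e-10)
      return local_mean + weight * (img - local_mean)

  # Three passes with growing windows, roughly like the lab workflow:
  # despeckled = lee_filter(lee_filter(lee_filter(radar, 3, 0.275), 5, 0.195), 7, 0.103)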

Section 2: Edge enhancement 

Next I performed edge enhancement. This is done in ERDAS by selecting raster-> spatial-> non-directional edge. Figure 4 is the resulting image.
Figure 4 This is the image after the edge enhancement has been run. 
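A non-directional edge operator is essentially a gradient-magnitude filter, so it responds to edges regardless of their orientation. A rough Python equivalent (not ERDAS's exact kernel) could look like this:

  import numpy as np
  from scipy.ndimage import sobel

  def nondirectional_edge(img):
      # Gradient magnitude from horizontal and vertical Sobel responses.
      img = img.astype(float)
      dx = sobel(img, axis=1)
      dy = sobel(img, axis=0)
      return np.hypot(dx, dy)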

Section 3: Image enhancement 

In this section of the lab we made use of several image enhancement techniques. The first of these is called the Wallis adaptive filter. This is done by again running the radar speckle suppression tool, but changing the filter to gamma-MAP. I then used the resulting image and went to raster-> spatial-> adaptive filter. In the parameter window, unsigned 8-bit is selected; make sure the window size is 3 by 3 with a multiplier of 3.0. Figure 5 is the original image next to the enhanced image.
Figure 5 The original image is on the left and the enhanced image using the adaptive filter is on the right.
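The Wallis filter is an adaptive contrast stretch: it boosts contrast where the local window is flat and leaves already-contrasty areas alone. Below is a heavily simplified sketch of that idea; the exact gain formula ERDAS uses is not documented here, so treating the 3.0 multiplier as a cap on the contrast boost is my assumption.

  import numpy as np
  from scipy.ndimage import uniform_filter

  def wallis_like_filter(img, size=3, gain=3.0):
      # Rescale local contrast toward the image-wide standard deviation,
      # capping the boost at the gain (multiplier) value. Simplified sketch.
      img = img.astype(float)
      mean = uniform_filter(img, size)
      var = uniform_filter(img ** 2, size) - mean ** 2
      std = np.sqrt(np.maximum(var, 1e-6))
      boost = np.minimum(gain, img.std() / std)
      return np.clip(mean + (img - mean) * boost, 0, 255)   # unsigned 8-bit range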

Part 2: Sensor Merge, texture analysis and brightness adjustment

Section 1: Apply Sensor Merge 

Another enhancement technique is sensor merge. This tool allows the user to take two images of the same area, such as a radar image and a multispectral image, and combine them into one image. It is used to help extract data from an area when the imagery from a single sensor is not good enough due to atmospheric interference and the like. In this lab we are combining a radar image and a Landsat TM image which has pretty dense cloud cover over portions of it. The steps to conduct this analysis are as follows: raster-> utilities-> sensor merge. Once here the parameter window will open, where the following parameters are set:
  • Method= IHS
  • Resampling technique= Nearest neighbor
  • IHS substitution= Intensity
  • R = 1, G = 2, B = 3
  • Unsigned 8-bit box is checked
Figure 6 shows the two original input images of the study area and Figure 7 is the result of running the sensor merge tool.
Figure 6 The image on the left is the radar and the image on the right is the Landsat TM image.
Figure 7 This is the resulting image from running the sensor merge tool.
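The idea behind the IHS method is to transform the TM bands into intensity-hue-saturation space, swap the intensity channel for the radar image, and transform back, so the result keeps the TM colors but picks up the radar detail. Below is a small sketch of that idea using scikit-image's HSV transform as a stand-in for a true IHS transform; the function and variable names are hypothetical.

  import numpy as np
  from skimage.color import rgb2hsv, hsv2rgb
  from skimage.transform import resize

  def ihs_merge(tm_rgb, radar):
      # tm_rgb: (rows, cols, 3) TM composite, radar: single-band radar image.
      radar = resize(radar, tm_rgb.shape[:2], order=0)   # nearest-neighbor resampling
      hsv = rgb2hsv(tm_rgb)
      hsv[..., 2] = radar / radar.max()                  # substitute intensity with the radar
      return (hsv2rgb(hsv) * 255).astype(np.uint8)       # unsigned 8-bit output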

Section 2: Apply Texture Analysis

Another image enhancement tool we explored was the texture analysis tool. This tool is run by going to raster-> utilities-> texture analysis. Once the parameter dialogue box is open, the operator should be set to skewness and the window or filter size should be 5 x 5. Figure 8 shows the original image we used as the input compared to the output image after the texture tool is run.
Figure 8 The original image is on the left and the texture analysis image is on the right.
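Texture operators like this one simply compute a statistic over a moving window and write it to the output pixel. A tiny (slow, but readable) Python version of a 5 x 5 skewness texture might look like the following sketch:

  import numpy as np
  from scipy.ndimage import generic_filter
  from scipy.stats import skew

  def skewness_texture(img, size=5):
      # Skewness of the pixel values inside each size x size window.
      return generic_filter(img.astype(float), lambda w: skew(w), size=size)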

Section 3: Brightness Adjustment 

The brightness adjustment tool is set up in a similar manner to the texture analysis. Using the same input image as the texture analysis, open raster-> utilities-> brightness adjustment and the parameter dialogue box will open. Once open, the data type is set to float single and the output options should be set to column. Figure 9 is a comparison of before and after the brightness adjustment tool is run.
Figure 9 The image on the left is the original image and the image on the right the new image after brightness adjustment is run.
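My understanding of the column output option is that it evens out brightness trends across the range (column) direction, such as antenna-pattern falloff. Assuming that is roughly what it does, a bare-bones version would be something like:

  import numpy as np

  def column_brightness_adjust(img):
      # Scale each column so its mean matches the overall image mean.
      img = img.astype(float)
      col_means = img.mean(axis=0)
      return img * (img.mean() / np.maximum(col_means, 1e-10))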

Part 3: Polarimetric SAR Processing and Analysis

Section 1. Synthesize Images 

The final portion of this lab was done in ENVI, another remote sensing software package. The imagery used in this portion of the lab is not raw radar data; it was preprocessed and made ready to use by our professor beforehand. The radar imagery covers portions of Death Valley.
The data we used were collected by the SIR-C radar system and were given to us in a compressed format. In order for us to be able to use the imagery, mathematical synthesis had to be conducted. This is done by going to radar-> polarimetric tools-> synthesize SIR-C data. The dialogue window will open; in this window the output data type is changed to byte and I hit OK. We also added 4 polarization combinations under the add combination button. The next step was to view the image and test how different histogram stretching methods affect the appearance of the image. This is done by going to enhance-> interactive stretch in the image viewer window. The 3 types of stretch explored were Gaussian, linear and square-root. Figures 10-13 are the images with each of these histogram stretches applied.
Figure 10 This is the Death Valley image with a Gaussian stretch applied to the histogram.
Figure 11 This is the image with a linear stretch applied.
Figure 12 This is the same image with a square-root stretch applied.
Figure 13 This is an image where the radar band combinations are brought into the RGB color gun. This colored imagery can be used to conduct further analysis on vegetation and other features in radar imagery.
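For reference, the linear and square-root stretches are simple enough to write out; the sketch below shows roughly what they do to the pixel values. The percentile cutoffs are my own choice rather than ENVI's defaults, and the Gaussian stretch is omitted.

  import numpy as np

  def linear_stretch(img, low=2, high=98):
      # Stretch the values between two percentiles to the full 0-255 range.
      lo, hi = np.percentile(img, (low, high))
      return np.clip((img - lo) / (hi - lo), 0, 1) * 255

  def sqrt_stretch(img):
      # Square root brightens dark pixels more than bright ones.
      scaled = (img - img.min()) / (img.max() - img.min())
      return np.sqrt(scaled) * 255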

Part 4. Slant-to-Ground Range Transformation

The final piece of this lab looks at how to transform slant range to ground range. This transformation is necessary because radar imagery is collected at an angle, not in a nadir position like most aerial imagery. Converting from slant range to ground range dramatically decreases the skew, stretching and other distortion in the imagery.
This tool is accessed by going to radar-> slant to ground range-> SIR-C. The input image is selected, which in this case is the same image of Death Valley used for the synthesis above. Once it is selected, the parameter dialogue box opens; the output pixel size should be set to 13.32 and the resampling method should be set to bilinear. Figure 15 is the slant to ground range image compared to the original.
Figure 15 The image on the left is the original image and the image on the right is the slant to ground range corrected image.
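The geometry behind the conversion is that a slant-range distance maps to a longer distance on the ground, roughly the slant range divided by the sine of the incidence angle. The sketch below resamples the range axis using a single average incidence angle, which is a simplification; the real SIR-C correction varies the angle across the swath.

  import numpy as np
  from scipy.ndimage import zoom

  def slant_to_ground(img, incidence_deg, slant_pixel, ground_pixel=13.32):
      # Stretch the range (column) axis from slant spacing to ground spacing
      # using bilinear resampling (order=1).
      ground_spacing = slant_pixel / np.sin(np.radians(incidence_deg))
      factor = ground_spacing / ground_pixel
      return zoom(img.astype(float), (1, factor), order=1)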

Sources 

All image processing was done using ERDAS Imagine 2015 and ENVI 2015.

Thursday, April 21, 2016

Advanced Remote Sensing Lab 10 : Advanced Classifiers 3

Goals and Background

The main goal of this lab is to learn how to use two advanced classification algorithms. These advanced classifiers are very robust and can greatly increase the accuracy of the LULC classification. The main objectives for the lab are as follows:
1)  Show us how to perform an expert system/decision tree classification with the use of ancillary data
2) Demonstrate how to develop an artificial neural network to perform complex image classification

Methods

Part 1: Expert system classification 

 Section 1: Development of a knowledge base to improve an existing classified image

The first part of the lab is working with a method called expert system classification. This is a very robust classification method that uses not only the remotely sensed imagery but also ancillary data to get a more accurate final classification.
To begin we were given a classified image of the Chippewa Valley (Figure 1). We examined the image and found that there were a number of errors in the LULC classification. These included urban areas that were labeled as residential when they were clearly industry, and many agricultural areas that were labeled as green vegetation and vice versa. These errors would be corrected by running this imagery through the expert system classification to improve the accuracy and get a more realistic classification of the area.
Figure 1 This is a classified image of the Chippewa Valley with errors in the LULC classification.

The first step in running the expert system classification is to create a hypothesis and rules for each of the classes we are interested in. To do so we open the knowledge engineer window; Figure 2 is what this window looks like. This was repeated for each of the six classes, which are water, residential, forest, green vegetation, agriculture, and other urban.
Figure 2 This is the window to set up the rules for each of the hypothesis or classes in the knowledge file.

Section 2: The use of ancillary data in developing a knowledge base 

Once those 6 hypotheses or classes are entered, the next step is to write arguments that make sure the other urban class does not get classified as residential urban. Two arguments are written: one under the residential hypothesis saying that residential will not be classified as other urban, and one under the other urban hypothesis saying that other urban will not be classified as residential. These arguments help the classifier better distinguish between the two classes. The same procedure was followed for green vegetation and agriculture. An argument was added to agriculture saying that agriculture cannot be classified as green vegetation, and the opposite as well, that green vegetation cannot be agriculture, again to help the classifier separate those classes more accurately and correct the errors seen in the original classified image we examined. Figure 3 is the final process tree for the expert system classifier. This knowledge file was saved for use in the next step of the lab.
Figure 3 This is the final process tree or knowledge file for the expert system classification.
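Under the hood, rules like these are just conditional tests on the original class plus whatever ancillary layer backs up each hypothesis. As a toy illustration only (the class codes, ancillary layers and thresholds below are made up, not the ones used in the lab), a couple of such rules might look like:

  import numpy as np

  WATER, RESIDENTIAL, FOREST, GREEN_VEG, AGRICULTURE, OTHER_URBAN = 1, 2, 3, 4, 5, 6

  def apply_rules(classified, pop_density, crop_mask):
      # Each rule tests the original class plus a hypothetical ancillary layer
      # and reassigns the pixel when the rule fires.
      out = classified.copy()
      out[(classified == RESIDENTIAL) & (pop_density < 50)] = OTHER_URBAN
      out[(classified == GREEN_VEG) & crop_mask] = AGRICULTURE
      out[(classified == AGRICULTURE) & ~crop_mask] = GREEN_VEG
      return out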

Section 3: Performing expert system classification 

To run the expert system, the knowledge classifier window is opened and the knowledge file from Figure 3 is brought in. This is where you can select which hypotheses or classes you want to include in the classification; in this case we included all of them (Figure 4). After the classes to include are selected, click OK and the next window opens (Figure 5). Here we set the cell size to 30 by 30 and select the location and name of the final output classified image. Hit OK and the classifier runs. Figure 11 is the final classified map.
Figure 4 We include all of the classes for this analysis. 
Figure 5 This is the dialogue box to pick the output location of the classified image as well as set some other parameters.

Part 2: Neural network classification 

Section 1: Performing neural network classification with a predefined training sample 

The other method of classification we explored in this lab is called neural network classification. This portion of the lab was run in ENVI 4.6.1, another remote sensing software package. The first step was to open an image file provided by Dr. Cyril Wilson. Once the image was open, the next step was to import an ROI file. These ROIs are training samples that Dr. Wilson collected in this imagery (Figure 6). Once the ROIs are open, neural network classification is chosen from the supervised classification drop-down in ENVI. Figure 7 is the parameter window for the classification, where the number of iterations, training rate, and output location for the classified image are input. Once the parameters are entered the classification is run, and Figure 8 is the result.
Figure 6 This is the image given to us by Dr. Wilson; you can see that I have also brought in the ROIs on top of the image.
Figure 7 This is the neural network dialogue box where the majority of the parameters are input.
Figure 8 The original false infrared image is on the left and the classified image is on the right.
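ENVI's neural net classifier is its own back-propagation implementation, but the workflow (train on the ROI pixels, then predict every pixel) is the same as any multilayer perceptron. Here is a rough stand-in using scikit-learn; the layer size and training rate are placeholders, not the values used in the lab.

  from sklearn.neural_network import MLPClassifier

  def mlp_classify(image, train_pixels, train_labels):
      # image: (rows, cols, bands); train_pixels: (n, bands) ROI pixel values;
      # train_labels: (n,) class labels for those pixels.
      nn = MLPClassifier(hidden_layer_sizes=(20,), learning_rate_init=0.1, max_iter=1000)
      nn.fit(train_pixels, train_labels)
      flat = image.reshape(-1, image.shape[2])
      return nn.predict(flat).reshape(image.shape[:2])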

Section 2: Creating training samples and performing NN classification (Optional challenge section) 

Additional practice and experimentation with the parameters for the neural network classification was done using an image of the University of Northern Iowa campus (Figure 9). I opened the image, and instead of having ROIs provided I created them myself. I made three classes: grass, roofing, and concrete/asphalt. The same procedure for running the classification was followed as above. Figure 10 is the resulting classified image based on the 3 ROIs or classes I created.
Figure 9 This is the original false infrared image of the Northern Iowa campus.
Figure 10 This is the original image on the left and the classified image on the right based on the ROIs I collected.

Results

Figure 11 This is the final LULC map using the expert system classification method.

Sources

The Landsat satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.
The Quickbird High resolution image of portion of University of Northern Iowa campus is from Department of Geography, University of Northern Iowa.


Thursday, April 14, 2016

Advanced Remote Sensing : Lab 9

Goals and Background

The main goal of this lab is to learn the skills involved in performing object-based classification in eCognition, which is a top-of-the-line image processing tool. The topics explored in this lab are fairly new to the remote sensing frontier, integrating both spectral and spatial information to extract land surface features from remotely sensed images, in this case satellite imagery. The main objectives of this lab are as follows:
1) Segment an image into homogeneous spatial and spectral clusters
2) Select appropriate sample objects to train a random forest classifier
3) Execute and refine object-based classification output from random forest classifier
4) Select appropriate sample objects to train a support vector machine classifier
5) Execute and refine object-based classification output from support vector machine classifier.

Methods

Part 1: Create a new project

The first portion of the lab is all about importing imagery into eCognition and setting up a new project. I imported the image, set the resolution to 30 m/pixel, and made sure the geocoding box was selected. Next I changed the color scheme to false infrared by setting a 4, 3, 2 band combination in the image layer mixing window.

Part 2: Segmentation and sample collection 

Section 1: Create image objects 

The first part of the analysis is creating image objects. This is a grid placed over the image made up of many polygons, whose shapes are based on parameters set by the user. In order to create this grid a process tree is created. A process tree is where all the commands or tools that the user wants eCognition to run are placed. It is called a tree because it is modeled after a family tree, with parents and children making up the hierarchy of the operations. I created a new process and added a child to it. The child is labeled generate objects and is the tool that creates the image object grid. In the generate objects window the shape was set to 0.3, the compactness was set to 0.5, and the scale parameter was set to 9, as seen below in Figure 1. The settings for shape and compactness are decided through trial and error; the goal is to find a combination where the image object polygons fit tightly to homogeneous objects and pixel values in the imagery. After these are set I hit execute, and Figure 2 is what eCognition creates over the imagery.
Figure 1 This is the window used to set up the generate objects process.
Figure 2 This is the image object grid that eCognition creates over the image.
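eCognition's multiresolution segmentation algorithm itself is proprietary, but the general idea of grouping pixels into spectrally and spatially homogeneous objects can be approximated with open-source tools. As a loose sketch only, here is Felzenszwalb's graph segmentation from scikit-image standing in for eCognition, with parameter values that are purely illustrative:

  from skimage import segmentation

  def segment_objects(rgb_image, scale=9):
      # Graph-based segmentation as a rough stand-in for eCognition's
      # multiresolution segmentation; "scale" loosely controls object size.
      return segmentation.felzenszwalb(rgb_image, scale=scale, sigma=0.5, min_size=20)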

Section 2: Training Sample selection

The next step after the segmentation is to collect training samples. First I created classes of LULC in the class hierarchy. The classes are forest as dark green, agriculture as pink, urban as red, water as blue, green vegetation as light green, and bare soil as yellow (Figure 3). Once the classes are created, training samples can be collected. This is done by selecting a class and then double-clicking, in the image object grid, on polygons that contain that LULC class (Figure 4). This is done for each class, and I collected the following number of samples for each class:
  • Forest 10
  • Urban 20
  • Water 10
  • Green vegetation 15
  • Bare soil 15
Figure 3 These are the LULC classes created in eCognition.
Figure 4 You can see the training samples collected for each class by color.

Part 3: Implement object classification 

Section 1: Insert and train Random Forest classifier based on sample objects 

The object-based classification process is fairly robust and the process tree is rather large, but most of the step-by-step process to complete the classification is explained here. The first step is to append a new process after the generate objects process and label it RF classification, for random forest classification. Add a child under this RF process and label it train RF classifier. Figure 5 is the window to set up the train RF classifier child. The training samples are brought in via the feature drop-down. Next I selected the features I wanted included from the select features window (Figure 6).
Figure 5 Window to set up the RF classifier trainer.
Figure 6 This is the window to select the features used in the classification.

Section 2: Perform Random Forest Classification 

Next another child is added to the RF classification process, called apply RF classifier. Once all of the parameters are entered, the classification can be run by clicking execute on the apply RF classifier child. Figure 7 is the complete process tree for the RF classification.
Figure 7 This is the final process tree for the RF classifier.
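Outside of eCognition, the same train-then-apply pattern can be reproduced with scikit-learn once each image object has been reduced to a feature vector (mean band values, shape measures, and so on). A minimal sketch, with hypothetical variable names:

  from sklearn.ensemble import RandomForestClassifier

  def rf_classify_objects(object_features, sample_idx, sample_labels):
      # object_features: one row of features per image object;
      # sample_idx / sample_labels: the training objects picked in eCognition.
      rf = RandomForestClassifier(n_estimators=100)
      rf.fit(object_features[sample_idx], sample_labels)
      return rf.predict(object_features)        # one predicted class per object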

Section 3: Refine classification by editing signatures 

Once the classifier has been run if there are errors the user can go in and manually change the class of an image object in the image. This is a simple process. Select the class that the error occurred in and then the class you want it to be changed to and click on the polygon and it will change.

Section 4: Export classified image to Erdas Imagine format 

The classified image created in eCognition (Figure 8) does not have the color scheme I want for comparing this classification method to the supervised and unsupervised methods done in labs 4 and 5. To fix this, the classified image was exported to ERDAS Imagine, where I reassigned the colors.
Figure 9 This is the RF classified image created by eCognition.

Part 4: Support Vector Machines

Section 1: Save project and modify Process Tree 

The final part of this lab involved running support vector machine classification instead of the random forest method. To do this the process tree from the RF classification is modified. The same steps are followed as for the RF classification except when the classifier is being trained. Figure 10 is the window where this change occurs. Figure 11 is the final process tree for the SVM classification. A comparison of the RF and SVM final classified images can be found below in the results section in Figure 12.
Figure 10 This is the window to set up the SVM classifier trainer.
Figure 11 This is the final process tree for the SVM classification method.
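Swapping the classifier is just as small a change outside eCognition: the only difference from the random forest sketch above is the estimator. The kernel and cost values below are placeholders, not the settings used in the lab.

  from sklearn.svm import SVC

  def svm_classify_objects(object_features, sample_idx, sample_labels):
      # Same object features and training samples as the random forest run.
      svm = SVC(kernel="rbf", C=2.0, gamma="scale")
      svm.fit(object_features[sample_idx], sample_labels)
      return svm.predict(object_features)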

Results

Figure 12 The final RF classified image from ERDAS is on the left and the SVM classified image is on the right.

Sources

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey

Advanced Remote Sensing Lab 8: Advanced Classifiers 1

Goals and Background

The main goal of this lab is to gain knowledge and understanding of how to use two classification algorithms. These classifiers make use of extremely robust algorithms which have proved effective at increasing the classification accuracy of remotely sensed imagery, and they are much more effective and accurate than traditional unsupervised or supervised classification methods. The two main objectives for the lab were as follows:
1. Learn how to divide a mixed pixel into fractional parts to perform spectral linear unmixing
2. Demonstrate how to use fuzzy classifier to help solve the mixed pixel problem

Methods

Part 1: Linear spectral unmixing

The first portion of the lab was done in ENVI, the Environment for Visualizing Images software. The first step is to perform linear spectral unmixing on an ETM+ satellite image of Eau Claire and Chippewa Counties. After the image is opened in ENVI, the available band list window opens; in this case we have 6 bands, bands 1-5 and band 7. Once this list is open we select a band combination of 4, 3, 2 so that the image displays in false infrared. Once the load band button is clicked, the image opens in 3 separate viewers, each with a different zoom level, aiding in the analysis of the image. After the viewer is open with the three zoom levels, the analysis is begun.

Section 2: Production of endmembers from the ETM+ image 

First the image had to be converted to principal components to reduce and remove noise from the original image. This removal of error helps to increase the accuracy when the image classification is conducted later in the lab. To convert the image to principal components, click compute new statistics-> rotate from the transform drop-down. This converts the image to principal components, and when brought into the band viewer there will be an additional 6 principal component bands besides the original 6 bands (Figure 1).
Figure 1 In this band list you can see the additional 6 PC bands.
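The principal component rotation is the same transform that scikit-learn exposes as PCA: the bands are decorrelated so that most of the signal ends up in the first few components and most of the noise in the last ones. A small sketch of the equivalent operation, assuming the bands are already stacked into a numpy array:

  from sklearn.decomposition import PCA

  def to_principal_components(bands):
      # bands: (rows, cols, 6) ETM+ array; returns 6 principal component bands.
      rows, cols, n = bands.shape
      flat = bands.reshape(-1, n).astype(float)
      pc = PCA(n_components=n).fit_transform(flat)
      return pc.reshape(rows, cols, n)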
Once the principal component bands are created, the next step is to examine the scatter plots and find areas of agriculture, bare soil, and water in the image that correspond to the selected pixel values in the histograms. In order to do this I opened the scatter plots for PC bands 1 and 2. This is done by selecting 2D scatter plots-> scatter plot band choice window. PC band 1 is selected as the X value and PC band 2 is the Y value. Once the scatter plot is open the next step is to collect end-member samples. End-members are collected by drawing a polygon or circle on the scatter plot over the pixel values. This is a bit of an experimental process, as you don't know which LULC classes will be contained in the pixel values you select in the scatter plot; however, this is why the map window is open, so that you can compare the selected pixel values to LULC classes in the map. When selecting end-members you can change the color of your selection and create multiple selections in the same scatter plot. Each of these selections will highlight the corresponding areas in the map in that color. Figure 2 is the scatter plot showing 3 end-member selections. Green in the scatter plot corresponds to agricultural areas in the map, yellow corresponds to bare soil areas, and blue corresponds to water features in the map.
Figure 2 This is the first set of end-member selections I conducted using PC bands 1 and 2.

After we had located the agricultural, bare soil and water LULC areas in the map the next objective was to find the urban areas. Instead of using PC band 1 as the X and 2 as Y I used PC band 3 as X and band 4 as the Y. Figure 3 below is the resulting scatter plot of PC bands 3 and 4. Using the same process as before I selected pixel values in the scatter plot trying to highlight only the urban areas in the map. This proved to be more difficult than selecting the other LULC classes via the scatter plot.
Figure 3 On the right is the scatter plot for PC bands 3 and 4 with the end-member selection. On the left you have the selected urban areas highlighted in purple.
Once I was finished selecting the end-members, the ROIs were saved to be used next when conducting the linear spectral unmixing process. Figure 4 is what the window looks like to save the ROIs.
Figure 4 This is the save ROI window. This window tells you how many pixels are selected in each of the end-member selections made earlier in the scatter plots.

Section 3: Implementation of Linear Spectral unmixing 

The last step in the unmixing process is to run the linear spectral unmixing. This is done by going to spectral-> mapping methods-> linear spectral unmixing. Bring in the original satellite image and then load the ROIs we just saved in the previous step. ENVI takes these two inputs and creates 4 separate output images, each with a different LULC class highlighted. Figures 5-8 are the resulting images. The brighter an area is in the image, the more likely it is that specific LULC class. For example, in the water image, water features will show up as bright white while the other LULC classes will be darker grey and black.
Figure 5 This is the bare soil fractional image.
Figure 6 This is the fractional image for water. As you can see the water features were not picked out extremely well.
Figure 7 This is the forest fractional image.
Figure 8 This is the urban/built up image.
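Mathematically, linear unmixing treats each pixel as a mixture of the end-member spectra and solves for the fraction of each end-member by least squares. A bare-bones, unconstrained version of that (without the sum-to-one constraint ENVI can apply) might look like:

  import numpy as np

  def unmix(pixels, endmembers):
      # pixels: (n_pixels, bands); endmembers: (n_classes, bands).
      # Solves pixel = endmembers.T @ fractions for each pixel by least squares.
      fractions, *_ = np.linalg.lstsq(endmembers.T.astype(float),
                                      pixels.T.astype(float), rcond=None)
      return fractions.T          # (n_pixels, n_classes) fractional abundances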

Part 2: Fuzzy Classification 

The second part of the lab was learning how to use a fuzzy classifier. The main point of the fuzzy classifier is to do basically the same task as linear spectral unmixing: it is used to handle mixed pixel values when performing a classification. It takes into consideration the fact that there are mixed pixels within the image and that it is nearly impossible to assign those to the correct LULC class perfectly. It uses membership grades, where a pixel's class is decided based on whether it is closer to one LULC class compared to the others. There are two main steps in this process.

Section 1: Collection of training signatures to perform fuzzy classification 

The first step is to collect training signatures to perform the fuzzy classification. Just as in Lab 5 I collected training samples; however, this time the process was a bit different. Instead of collecting only homogeneous samples as in lab 5, we collected both homogeneous and mixed samples. The collection of both types of samples gives the program a better idea of how things occur in real life, which results in a more accurate classification overall. For this lab I collected 4 water samples, 4 forest, 6 agriculture, 6 urban and 4 bare soil samples. After the samples are collected they are merged just as in lab 5.

Section 2: Performing fuzzy classification

Step 2 is performing the fuzzy classification, which is a pretty straightforward process. Open the supervised classification window in ERDAS Imagine and pick fuzzy classification. Then I input the signature file which was created in the previous step. The parametric rule is set to maximum likelihood and the non-parametric rule is set to feature space. The best classes per pixel is set to 5 and then the fuzzy classification is run. Once this is run, the final step is to run fuzzy convolution, which takes the distance file into consideration and creates the final LULC classified image. Figure 9 is the final fuzzy classification image brought into ArcGIS and made into a map.

Results

Figure 9 This is the final LULC map for fuzzy classification method.  

Source

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 

Thursday, March 31, 2016

Advanced Remote Sensing: Lab 7 Digital change detection

Goals and Background

The main goal of this lab is to develop skills and get a better understanding of how to evaluate and measure changes in land use and land cover over time. To do this, digital change detection will be used, which is an important tool for monitoring environmental and socioeconomic phenomena in remotely sensed images. There are three objectives which fall under this digital change detection method. They are:
1)  how to perform quick qualitative change detection through visual means
2) quantify post-classification change detection
3) develop a model that will map detail from-to changes in land use/land cover over time

Methods

Part 1: Change detection using Write Function Memory Insertion 

The first portion of the lab makes use of Write Function Memory Insertion. This is a very simple yet effective method of visualizing changes in LULC over time. In order to do this, the near-infrared bands from two images of the same area at different times are put into the red, green and blue color guns. When this is done, the pixels that changed between those two time periods will be illuminated, or be a bright color, compared to the rest of the image and the areas that did not change. These areas of change are then easy to see, giving a quick overview of the change that has occurred between the two study dates. In Figure 1 below you can see the areas highlighted in red that stand out from the rest of the image; these are areas of change between 1991 and 2011.
Figure 1 Write Function Memory Insertion change image of Eau Claire County for 1991 to 2011.
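Since this method is just a band-stacking trick, it is easy to reproduce outside ERDAS. A minimal sketch follows, assuming the two NIR bands are co-registered arrays already scaled to 0-255; which date goes in which color gun is a choice, and the one below makes increases in reflectance show up red.

  import numpy as np

  def write_function_memory(nir_new, nir_old):
      # Newer NIR in the red gun, older NIR in green and blue: pixels that
      # brightened between the dates show up red, unchanged areas stay grey.
      return np.dstack([nir_new, nir_old, nir_old]).astype(np.uint8)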

Part 2: Post-classification comparison change detection

Section 1: Calculating quantitative changes in multidate classified images 

The next portion of the lab is about conducting change detection on two classified images of the Milwaukee Metropolitan Statistical Area (MSA). The two images being compared are from 2001 and 2011. The images were already classified and provided by Dr. Cyril Wilson. Figure 2 shows the two MSA images side by side in ERDAS Imagine.
Figure 2 These are the MSA classified images for 2001 (right) and 2011 (left).
Once the two images were brought in so we could visually compare the two, the next step was to quantify the change between the two time periods. This was done by obtaining the histogram values from the raster attribute table and then inputting those values into an Excel spreadsheet by class. These values are then converted to square meters, and from square meters to hectares. Once we had the hectare values for each of the classes for 2001 and 2011, the percent change was calculated. This is done by subtracting the 2001 value from the 2011 value, dividing by the 2001 value, and multiplying by 100. Figure 3 is the resulting table with the percent change values.
Figure 3 This table is showing the percent change for each LULC class from 2001 to 2011.
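As a quick worked example of that arithmetic (the pixel counts below are invented, not the Milwaukee numbers):

  count_2001, count_2011 = 120_000, 135_000            # hypothetical pixel counts for one class
  hectares_2001 = count_2001 * 30 * 30 / 10_000        # 30 m Landsat pixels -> hectares
  hectares_2011 = count_2011 * 30 * 30 / 10_000
  percent_change = (hectares_2011 - hectares_2001) / hectares_2001 * 100
  print(round(percent_change, 1))                      # 12.5 percent increase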

Section 2: Developing a From-to change map of multidate images 

The final portion of this lab was to create a from-to change map from the two MSA images. A model was created which detects the change between the two images, making use of the Wilson-Lula algorithm. Figure 4 is the model that was created. In this model I focused on changes between 5 pairs of classes: 1. Agriculture to urban 2. Wetlands to urban 3. Forest to urban 4. Wetlands to agriculture 5. Agriculture to bare soil. The first part of the model takes the two MSA images and separates them into the individual classes through an either-or statement. Each class from the two years is then paired up based on the 5 pairs above. Once paired up, the Bitwise function is used on each pair to show the areas that have changed from one LULC class to another over the time period. These 5 output rasters are then used to create a map of the changes that took place. Figure 5 is the final resulting from-to change map.
Figure 4 This is the from-to-change model making use of the Wilson-Lula algorithm to calculate LULC change from one class to another. 
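The either-or and Bitwise steps boil down to boolean masks: a pixel belongs to a from-to pair when it was one class in 2001 and another class in 2011. A compact numpy version of that logic (with made-up class codes) would be:

  import numpy as np

  AG, URBAN, WETLAND, FOREST, BARE = 1, 2, 3, 4, 5   # hypothetical class codes

  def from_to_change(lulc_2001, lulc_2011):
      # One boolean mask per from-to pair, combined into a single coded raster.
      change = np.zeros(lulc_2001.shape, dtype=np.uint8)
      pairs = [(AG, URBAN), (WETLAND, URBAN), (FOREST, URBAN),
               (WETLAND, AG), (AG, BARE)]
      for code, (before, after) in enumerate(pairs, start=1):
          change[(lulc_2001 == before) & (lulc_2011 == after)] = code
      return change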

Results

Figure 5 This is the final from-to change map. Each area that is colored shows a change in LULC class in one of the 5 pairings created earlier in the lab for use in the model in Figure 4.

Sources

The Landsat satellite image is from Earth Resources Observation and Science Center, United States Geological Survey. 

Homer, C., Dewitz, J., Fry, J., Coan, M., Hossain, N., Larson, C., Herold, N., McKerrow, A., VanDriel, J.N., and Wickham, J. 2007. Completion of the 2001 National Land Cover Database for the Conterminous United States. Photogrammetric Engineering and Remote Sensing, Vol. 73, No. 4, pp. 337-341.

Xian, G., Homer, C., Dewitz, J., Fry, J., Hossain, N., and Wickham, J., 2011. The change of impervious surface area between 2001 and 2006 in the conterminous United States. Photogrammetric Engineering and Remote Sensing, Vol. 77(8): 758-762.

The Milwaukee shapefile is from ESRI U.S geodatabase.