Missing panel in LCM - Test and Selection of Site and Driver Variables

TerrSet 2020 does not include the "Test and Selection of Site and Driver Variables Panel" that was in previous versions of TerrSet and Idrisi.  TerrSet liberaGIS (due to be released on December 2nd) will include a newer version of this panel.  Below is the reason why it was removed and what we suggest doing instead to determine which variables to include in your model.
 
The developers of LCM found it problematic to rely on that panel, as it did not provide adequate information.  They felt it was better to run the model and interpret the output results, so they decided to remove it.  The LCM developers suggest instead running several iterations and variations of Run Sub-Model on the Run Transition Sub-Model panel, examining the statistics output to see which variables have the greatest impact, dropping variables that appear insignificant, and running the model again.
 
For example, if you choose MLP as your modeling technique, running Run Sub-Model produces an output describing the skill of the model as a whole and a breakdown by variable.  Below are examples of parts of that output.
 
3) Model Skill Breakdown by Transition & Persistence

Class                                                                Skill measure
Transition  : Amazonian Mature Forest to Anthropogenic Disturbance         0.4414
Transition  : Savanna to Anthropogenic Disturbance                         0.2269
Transition  : Deciduous Mature Forest to Anthropogenic Disturbance         0.2671
Persistence : Amazonian Mature Forest                                      0.6140
Persistence : Savanna                                                      0.4113
Persistence : Deciduous Mature Forest                                      0.3169

 

3. Sensitivity of Model to Forcing Independent Variables to be Constant

1) Forcing a Single Independent Variable to be Constant

Model                 Accuracy (%)   Skill measure   Influence order
With all variables    48.21          0.3785          N/A
Var. 1 constant       27.90          0.1348          1 (most influential)
Var. 2 constant       35.63          0.2276          3
Var. 3 constant       47.12          0.3654          4 (least influential)
Var. 4 constant       35.07          0.2208          2



 
2) Forcing All Independent Variables Except One to be Constant

Model                      Accuracy (%)   Skill measure
With all variables         48.21          0.3785
All constant but var. 1    35.42          0.2251
All constant but var. 2    26.33          0.1160
All constant but var. 3    16.84          0.0021
All constant but var. 4    17.11          0.0053



 
3) Backwards Stepwise Constant Forcing

Model                          Variables included   Accuracy (%)   Skill measure
With all variables             All variables        48.21          0.3785
Step 1: var.[3] constant       [1,2,4]              47.12          0.3654
Step 2: var.[3,2] constant     [1,4]                36.51          0.2381
Step 3: var.[3,2,4] constant   [1]                  35.42          0.2251


 
The tutorial Ex. 4-2 (especially sections M & N, p. 303-304) explains the process of examining the variables.
 
 
Here is what the help system says about that output:
 
-- After the training process has completed, MLP will output a document with various statistics about the learning process. This includes very important information about the explanatory power of the independent variables.
 
-- The General Model Information section lists the inputs and parameters of the model as well as the skill of the model based on an analysis of the validation data. At the end of part 2 in this first section, it lists both the accuracy and the skill with which the model was able to predict whether the validation pixels would change, and if so, to what class. The skill measure is simply the measured accuracy minus the accuracy expected by chance. In part 3 of the first section, it provides a breakdown of the skill according to the transitions being modeled and the persistences involved. Note that negative skills are possible but would not ordinarily be expected to be much different from 0. Note also that the distribution of skill will depend on the nature of the variables being used. Some variables may be more effective in establishing where change won't happen (persistence) than where it will (transition).
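 
As a rough illustration of the skill calculation described above, here is a minimal Python sketch (Python is not part of TerrSet) that simply subtracts a chance-expected accuracy from the measured accuracy.  How TerrSet derives the expected-by-chance baseline is not detailed in the output excerpted above, so the baseline used here is only a placeholder supplied by the caller.

    def skill_measure(measured_accuracy, expected_by_chance):
        """Skill = measured accuracy minus the accuracy expected by chance.

        Both arguments are proportions in [0, 1].  The expected-by-chance
        value is whatever baseline the model reports; its derivation is not
        shown in the MLP output excerpted above.
        """
        return measured_accuracy - expected_by_chance

    # Illustration with made-up numbers: a measured accuracy of 48.21%
    # against a hypothetical 10% chance baseline gives a skill of ~0.38.
    print(skill_measure(0.4821, 0.10))   # 0.3821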
 
-- The Weights Information of Neurons across Layers section is provided in the interest of fully describing the model developed. However, we do not recommend using this information to deduce the relative explanatory power of the independent variables. Because of the presence of the hidden layer, multiple connection weight structures can yield models with equivalent skill, and it is very difficult to deduce the role of variables in achieving the calculated skill. Instead, we recommend that you look at the information in the third section of the report.
 
-- The third section of the report is entitled Sensitivity of Model to Forcing Independent Variables to be Constant. After the system has trained on all of the explanatory variables, the system tests for the relative power of explanatory variables by selectively holding the inputs from selected variables constant (at their mean value). Holding the input values for a selected variable constant effectively removes the variability (i.e., information content) associated with that variable. Using the modified model, the MLP procedure repeats the skill test using the validation data (the training data that were set aside for validation). The difference in skill thus provides information on the power of that variable.
 
Three different sensitivity analyses are run. In the first section, a single variable is held constant. This is repeated for all variables. In the second section, all variables are held constant (at their mean values) except one. It would be expected that this would provide complementary information about each variable. For example, if removing a variable causes the skill to drop from 0.8 to 0.5, you might expect that testing that variable on its own would indicate a skill of 0.3. However, you will frequently find that this is not the case. This will relate to the presence of interaction effects as well as intercorrelations between input variables.
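 
Outside of TerrSet, the first two sensitivity tests can be approximated with any fitted classifier.  The Python sketch below is a generic illustration, not TerrSet's implementation: model is assumed to be any object with a scikit-learn-style predict method, X_val and y_val are assumed to be NumPy arrays of validation pixels (rows) by variables (columns) and their classes, and the 10% chance baseline is the same placeholder used earlier.

    def skill(model, X, y, chance=0.10):
        """Accuracy on the validation set minus an assumed chance baseline."""
        return (model.predict(X) == y).mean() - chance

    def hold_constant(X, cols):
        """Return a copy of X with the listed columns forced to their means."""
        Xc = X.copy()
        Xc[:, cols] = X[:, cols].mean(axis=0)
        return Xc

    def single_variable_forcing(model, X_val, y_val):
        """Skill after forcing each variable, one at a time, to its mean."""
        return {j: skill(model, hold_constant(X_val, [j]), y_val)
                for j in range(X_val.shape[1])}

    def all_but_one_forcing(model, X_val, y_val):
        """Skill after forcing every variable except one to its mean."""
        n = X_val.shape[1]
        return {j: skill(model, hold_constant(X_val, [k for k in range(n) if k != j]), y_val)
                for j in range(n)}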
 
-- The final test in section 3 is entitled Backwards Stepwise Constant Forcing. Starting with the model developed with all variables, it holds each variable constant in turn to determine which one has the least effect on model skill. Step 1 thus shows the skill after holding constant the variable with the smallest negative effect on the skill. If a variable is held constant and the skill doesn’t decrease much, then that variable has little value and can be removed. The procedure then tests every possible pair of variables that includes the one identified in step 1 to determine which pair, when held constant, has the least effect on the skill. It continues in this manner, progressively holding another variable constant, until only one variable is left.
 
The backward stepwise analysis is very useful for model development. If holding a variable constant doesn’t affect the skill much, then it can be removed from the analysis. Removing variables without power has the beneficial effect of reducing the likelihood of overfitting (lack of generality). Thus it can sometimes occur that the skill of the model slightly increases as some of the variables are removed.
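 
The backwards stepwise test can be sketched as a greedy loop over the same hypothetical helpers (skill and hold_constant from the previous sketch); again, this stands in for, rather than reproduces, TerrSet's own code.

    def backwards_stepwise_forcing(model, X_val, y_val):
        """Greedy sketch of backwards stepwise constant forcing.

        At each step, add to the constant set the variable whose forcing
        costs the least skill, until only one variable still varies.
        """
        n = X_val.shape[1]
        forced = []          # indices of variables already held constant
        steps = []
        while len(forced) < n - 1:
            remaining = [j for j in range(n) if j not in forced]
            # Skill when each remaining variable is forced on top of those already forced.
            trials = {j: skill(model, hold_constant(X_val, forced + [j]), y_val)
                      for j in remaining}
            best = max(trials, key=trials.get)   # variable whose loss hurts least
            forced.append(best)
            steps.append((list(forced), trials[best]))
        return steps   # one (forced variables, remaining skill) pair per step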
 
 
We apologize that the expected panel is not available.  The alternative method described above should achieve the desired outcome.
 

 
