1. Model Comparison
In the last section, you learned how to update a model using modification indices. In this section, you will learn two ways of comparing models to determine whether your additions improved the model fit.
2. Create Two Models
To compare models, we will save two separate model specifications, fit each one with the cfa() function, and save the results as two separate fitted-model objects.
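These steps can be sketched in R. The dataset and factor structure below are assumptions (lavaan's built-in HolzingerSwineford1939 data and an illustrative three-factor model), since the original specification is not shown in this excerpt:

```r
library(lavaan)

# Original model specification (illustrative three-factor structure)
model.original <- 'visual  =~ x1 + x2 + x3
                   textual =~ x4 + x5 + x6
                   speed   =~ x7 + x8 + x9'

# Updated model: identical, plus a residual correlation between x7 and x8
model.updated <- 'visual  =~ x1 + x2 + x3
                  textual =~ x4 + x5 + x6
                  speed   =~ x7 + x8 + x9
                  x7 ~~ x8'

# Fit each specification and save the output separately
fit.original <- cfa(model = model.original, data = HolzingerSwineford1939)
fit.updated  <- cfa(model = model.updated,  data = HolzingerSwineford1939)
```

Each added parameter uses one degree of freedom, so the updated model has one fewer degree of freedom than the original.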
3. Chi-Square Comparison
We will use the anova() function to directly compare the two similar models. The arguments for the anova() function are the fitted-model objects saved from the cfa() function. The output shows the degrees of freedom for each model, two information criteria (AIC and BIC), the chi-square for each model, and finally, a chi-square difference test. This test subtracts one model's chi-square from the other's and examines whether that difference is larger than expected given the difference in degrees of freedom. For example, we only added one parameter between these two models, so the chi-square difference must be at least 3.84 to be considered significant at p less than .05.
Chi-square difference tests apply only to nested models, which are models that use the same variables but differ by at least one estimated parameter. In this example, the residual correlation between x7 and x8 is the only difference between the models, and they use the same variables, so they are nested.
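A sketch of the comparison, again assuming the HolzingerSwineford1939 data and an illustrative three-factor model. The qchisq() call shows where the 3.84 critical value comes from:

```r
library(lavaan)

# Two nested models: same variables, differing only by x7 ~~ x8
model.original <- 'visual  =~ x1 + x2 + x3
                   textual =~ x4 + x5 + x6
                   speed   =~ x7 + x8 + x9'
model.updated  <- paste(model.original, 'x7 ~~ x8', sep = '\n')

fit.original <- cfa(model = model.original, data = HolzingerSwineford1939)
fit.updated  <- cfa(model = model.updated,  data = HolzingerSwineford1939)

# Chi-square difference test for the two nested models
anova(fit.original, fit.updated)

# Critical value for a one-parameter (one df) difference at p < .05
qchisq(p = 0.95, df = 1)  # approximately 3.84
```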
4. Fit Index Comparison
An additional way to compare models is to use fit indices, which is especially useful when models are not nested and include different variables. So far, we have only looked at the fit indices provided by the summary() function, as they are the most popular ones for assessing model fit. The fitmeasures() function will show you many more fit indices; its argument is the name of your saved fitted model.
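For example (the fitted-model name and model specification below are assumptions carried over from this lesson's setup):

```r
library(lavaan)

model.original <- 'visual  =~ x1 + x2 + x3
                   textual =~ x4 + x5 + x6
                   speed   =~ x7 + x8 + x9'
fit.original <- cfa(model = model.original, data = HolzingerSwineford1939)

# summary() reports only the handful of popular indices
summary(fit.original, fit.measures = TRUE)

# fitmeasures() returns the full set of named fit indices
fitmeasures(fit.original)
```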
For the purposes of model comparison, we are going to focus on the AIC, or Akaike information criterion, and the ECVI, or expected cross-validation index. The AIC provides a measure of model quality, wherein lower values are better; when AIC values are negative, the smaller, more negative value is still preferred. The ECVI estimates how well the model would replicate in another sample of the same size drawn from the same population. Again, lower values are better.
5. Fit Index Comparison
To get just these values, the second argument of fitmeasures() can be a character vector, built with c(), naming the fit indices you would like to view. Applying fitmeasures() to both of our models shows that the second model, with the correlation between x7 and x8, is better than the first model without that correlation, as both its AIC and ECVI are smaller.
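A sketch of this side-by-side check, under the same assumed data and model specification as before:

```r
library(lavaan)

model.original <- 'visual  =~ x1 + x2 + x3
                   textual =~ x4 + x5 + x6
                   speed   =~ x7 + x8 + x9'
model.updated  <- paste(model.original, 'x7 ~~ x8', sep = '\n')

fit.original <- cfa(model = model.original, data = HolzingerSwineford1939)
fit.updated  <- cfa(model = model.updated,  data = HolzingerSwineford1939)

# Request only AIC and ECVI by name; lower values indicate the better model
fitmeasures(fit.original, c("aic", "ecvi"))
fitmeasures(fit.updated,  c("aic", "ecvi"))
```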
6. Let's practice!
Now you can practice using the fitmeasures() function to determine whether your updated models are better than the original models.