
Adding loadings to improve fit

1. Adding loadings to improve model fit

Now that we've reviewed the differences between EFAs and CFAs, we'll look at how to improve model fit.

2. When to make adjustments

While EFAs estimate all variable/factor loadings, CFAs only estimate the loadings you specify. If you run your CFA and get disappointing fit statistics, adding additional loadings is one way to improve model fit.

3. Adding loadings to the syntax

One way to select loadings to add is to look at the results of an EFA with the number of factors dictated by your theory. I've done this for you and picked a couple of promising item/factor relationships to add: the fourth Neuroticism item could load on the Extraversion factor, and the third Extraversion item could load on the Neuroticism factor. Both of these proposed relationships hinge on a correlation between the Neuroticism and Extraversion factors. If we look at the summary stats from the original, theory-based CFA, we can see that these factors have a small positive correlation, so this isn't a totally implausible adjustment to the theory. Of course, in a real data application, you'll want to evaluate the theoretical implications of adding loadings carefully.
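As a rough sketch of that scouting step, you could run an EFA with the theory-dictated number of factors and scan the loading matrix for secondary loadings. This assumes the psych package and an EFA dataset named bfi_EFA (the names here are illustrative, not from the slides):

```r
# Scout for candidate cross-loadings with an EFA whose number of
# factors matches the theory (five, for the Big Five traits).
library(psych)

efa_model <- fa(bfi_EFA, nfactors = 5)

# Inspect the loading matrix; a modest secondary loading (e.g., N4 on
# the Extraversion factor) suggests a path you might add to the CFA.
efa_model$loadings
```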

4. Adding new loadings to the syntax

Here's a graphical representation of what adding loadings looks like. You're effectively saying that the Neuroticism factor can predict responses to item E3 and the Extraversion factor can predict responses to item N4. These new relationships are shown here by bold green arrows.

5. Adding new loadings to the syntax

The first step is to alter the syntax for the CFA by adding the new relationships to the equations used to create it. You can see that we've added item N4 to the Extraversion factor, abbreviated EXT in our syntax, and item E3 to the Neuroticism factor, abbreviated NEU. Next, you'll feed those equations into the cfa() function to create syntax compatible with the sem() function used to run the CFA. Once the syntax is set up, plug it into the sem() function to run the revised CFA. As before, remember to keep using the bfi_CFA dataset, since this is still a CFA.
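Those steps might look something like the following, assuming the sem package's cfa() and sem() functions and the usual bfi item names (E1-E5, N1-N5); bfi_CFA is the held-out CFA dataset from earlier in the course:

```r
# Sketch of the revised CFA specification (item names assumed).
library(sem)

# Add N4 to Extraversion (EXT) and E3 to Neuroticism (NEU).
revised_syntax <- cfa(text = "
EXT: E1, E2, E3, E4, E5, N4
NEU: N1, N2, N3, N4, N5, E3
")

# Fit the revised model on the CFA holdout data.
revised_cfa <- sem(revised_syntax, data = bfi_CFA)
```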

6. Comparing the original and revised models

Now, let's conduct a likelihood ratio test. As you may remember, this tests whether two models' fits differ statistically. When you are testing a model against the null model, you want the result to be non-significant; however, when you are comparing two specified models, a significant result indicates that one model fits significantly better and should be preferred. As the legend at the bottom shows, the stars after the p-value indicate a statistically significant difference, so this is a good result!
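In code, the comparison is a one-liner, assuming fitted objects named original_cfa and revised_cfa from sem():

```r
# Likelihood ratio test between the original and revised models.
# Significance stars in the output indicate the two fits differ;
# prefer the model with the better fit statistics.
anova(original_cfa, revised_cfa)
```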

7. Comparing the original and revised models

Another fit index to consider is the CFI; a higher value indicates better fit. You can see that the revised model's CFI is higher here.

8. Comparing the original and revised models

The final fit index we'll look at is the RMSEA. The first value in the output is the RMSEA itself; the next values are the bounds of its 90% confidence interval, which are NA here due to how the model is specified. The important thing is to identify the lower RMSEA, which belongs to the revised model. As you'll recall, you ideally want the RMSEA to be below 0.05. Neither of these models gets there, but the revised model is at least a bit closer! For more information about calculating and interpreting these and other fit indices, check out this website.
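To pull these indices yourself, you can request them from the model summary; this assumes a recent version of the sem package, whose summary() method accepts a fit.indices argument, and fits named original_cfa and revised_cfa:

```r
# Request CFI and RMSEA for each model and compare side by side:
# the higher CFI and lower RMSEA mark the better-fitting model.
summary(original_cfa, fit.indices = c("CFI", "RMSEA"))
summary(revised_cfa, fit.indices = c("CFI", "RMSEA"))
```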

9. Let's practice!

Now that you've seen a demonstration of adding loadings and comparing models, it's time to try it for yourself!
