What is the value of the F-statistic?

by Marvin Terry

What is the p-value associated with the F-statistic?

This p-value is the probability, if all the coefficients β1, β2, β3, β4 were truly zero, of observing an F-statistic at least as large as the one computed from the data. Returning to our example above, the p-value associated with the F-statistic is ≥ 0.05, so we fail to reject the null hypothesis: the data do not provide sufficient evidence that the model containing X1, X2, X3, X4 is more useful than a model containing only the intercept β0.

What is the F-statistic in statistics?

The F-statistic is the ratio of the mean squares treatment to the mean squares error: F = MST / MSE. The larger the F-statistic, the greater the variation between sample means relative to the variation within the samples. Thus, the larger the F-statistic, the stronger the evidence that there is a difference between the group means.
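To make the ratio concrete, here is a minimal Python sketch (the three groups contain made-up numbers purely for illustration) that computes MST, MSE, and the F-statistic by hand, then cross-checks the result against SciPy's built-in one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Three illustrative groups (made-up numbers, for demonstration only)
groups = [
    np.array([4.0, 5.0, 6.0, 5.5]),
    np.array([6.5, 7.0, 7.5, 6.0]),
    np.array([9.0, 8.5, 9.5, 8.0]),
]

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total number of observations
grand_mean = np.mean(np.concatenate(groups))

# Mean squares treatment: variation of the group means around the grand mean
ss_treatment = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_treatment = ss_treatment / (k - 1)

# Mean squares error: variation of the observations within their own group
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_error = ss_error / (n - k)

F = ms_treatment / ms_error
p_value = stats.f.sf(F, k - 1, n - k)   # right tail of the F distribution
print(F, p_value)

# Cross-check against SciPy's one-way ANOVA
print(stats.f_oneway(*groups))
```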

What is the F-statistic in multiple linear regression?

When running a multiple linear regression model: Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + … + ε, the F-statistic gives us a way to test globally whether ANY of the independent variables X1, X2, X3, X4, … is related to the outcome Y. The null hypothesis is that all the coefficients β1, β2, β3, β4, … are equal to zero. For a significance level of 0.05: if the p-value associated with the F-statistic is less than 0.05, we conclude that at least one of the independent variables is related to Y; if it is greater than or equal to 0.05, the data do not provide evidence that any of them is.
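As a quick illustration of how this global test is read in practice, here is a minimal sketch using statsmodels on simulated data (the data, coefficients, and seed are assumptions made purely for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: four predictors, only the first one truly related to Y
n = 200
X = rng.normal(size=(n, 4))
y = 2.0 + 1.5 * X[:, 0] + rng.normal(size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()

# Global F-test: null hypothesis is beta1 = beta2 = beta3 = beta4 = 0
print(model.fvalue)    # the F-statistic
print(model.f_pvalue)  # its p-value; < 0.05 suggests at least one predictor is related to Y
```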

What is the global significance of the p-values of the β coefficients?

The answer is that we cannot decide on the global significance of the linear regression model based on the p-values of the β coefficients. This is because each coefficient’s p-value comes from a separate statistical test, and each of these tests has a 5% chance of producing a false positive (assuming a significance level of 0.05). The more coefficients we test, the higher the chance that at least one of them looks significant by chance alone.
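As a rough illustration, treating the four coefficient tests as independent (a simplification), the chance that at least one of their p-values falls below 0.05 purely by chance is:

1 − (1 − 0.05)^4 ≈ 0.185

So there is roughly an 18.5% chance of seeing at least one “significant” coefficient even when none of the predictors is actually related to Y, which is why a single global F-test is used to answer the overall question.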

Why do we even need the F-test?

Why do we need a global test? Why not look at the p-values associated with each coefficient β1, β2, β3, β4, … to determine whether any of the predictors is related to Y?

What if the F-statistic has a statistically significant p-value but none of the coefficients does?

Here’s another example of a linear regression model in which none of the independent variables is statistically significant on its own, but the overall model is (i.e. at least one of the variables is related to the outcome Y) according to the p-value associated with the F-statistic.
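If you don’t have such a dataset at hand, here is a minimal simulation sketch of how this situation can arise. The two predictors are made almost perfectly collinear on purpose; the exact p-values depend on the random seed, but the overall F-test is typically significant while neither coefficient is on its own:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Two almost perfectly collinear predictors (illustrative assumption)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # x2 is nearly a copy of x1
y = 3.0 + 1.0 * x1 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()

# Overall model: the F-test p-value is typically tiny here
print(model.f_pvalue)
# Individual coefficients: their t-test p-values are often both large,
# because x1 and x2 "share" the same information about y
print(model.pvalues[1:])
```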

Reference

G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning: with Applications in R. New York: Springer, 2013.

Understanding the F-Statistic in ANOVA

The F-statistic is the ratio of the mean squares treatment to the mean squares error: F = MST / MSE, where MST measures the variation between the group means and MSE measures the variation within the groups.

Understanding the P-Value in ANOVA

To determine if the difference between group means is statistically significant, we can look at the p-value that corresponds to the F-statistic.

On Using Post-Hoc Tests with an ANOVA

If the p-value of an ANOVA is less than .05, then we reject the null hypothesis that all the group means are equal. However, this only tells us that at least one group mean differs; it does not tell us which groups differ from each other. To find out, we can follow up with a post-hoc test, as in the sketch below.
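For example, here is a minimal sketch of one common post-hoc test, Tukey’s HSD, applied to the same made-up three-group data used in the ANOVA example above (statsmodels’ pairwise_tukeyhsd is one way to run it):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Same illustrative three-group data as above (made-up numbers)
values = np.array([4.0, 5.0, 6.0, 5.5,
                   6.5, 7.0, 7.5, 6.0,
                   9.0, 8.5, 9.5, 8.0])
groups = np.repeat(["A", "B", "C"], 4)

# Tukey's HSD compares every pair of group means while controlling
# the familywise error rate at alpha
result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result.summary())
```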

Additional Resources

An Introduction to the One-Way ANOVA
An Introduction to the Two-Way ANOVA
The Complete Guide: How to Report ANOVA Results
ANOVA vs. Regression: What’s the Difference?
