Simple linear regression



Okun's law in macroeconomics is an example of simple linear regression. Here the dependent variable (GDP growth) is presumed to be in a linear relationship with the changes in the unemployment rate.


In statistics, simple linear regression (SLR) is a linear regression model with a single explanatory variable.[1][2][3][4][5] That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.

It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to the correlation between $x$ and $y$ corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass $(\bar{x}, \bar{y})$ of the data points.
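As a concrete illustration of this rule, the following Python sketch (made-up data, standard-library statistics module, Python 3.10+ for statistics.correlation) computes the slope as the correlation scaled by the ratio of standard deviations and the intercept that forces the line through the center of mass $(\bar{x}, \bar{y})$.

```python
# Minimal sketch of OLS simple linear regression; the data are illustrative only.
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

x_bar = statistics.mean(x)
y_bar = statistics.mean(y)

# Slope: correlation between x and y, scaled by the ratio of standard deviations.
r_xy = statistics.correlation(x, y)               # requires Python 3.10+
slope = r_xy * statistics.stdev(y) / statistics.stdev(x)

# Intercept: chosen so the fitted line passes through (x_bar, y_bar).
intercept = y_bar - slope * x_bar

print(slope, intercept)
# The center of mass lies exactly on the fitted line (up to rounding):
assert abs((intercept + slope * x_bar) - y_bar) < 1e-9
```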

Formulation and computation

Consider the model function

$$y = \alpha + \beta x,$$

which describes a line with slope $\beta$ and $y$-intercept $\alpha$. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe $n$ data pairs and call them $\{(x_i, y_i),\ i = 1, \ldots, n\}$. We can describe the underlying relationship between $y_i$ and $x_i$ involving the error term $\varepsilon_i$ by

$$y_i = \alpha + \beta x_i + \varepsilon_i.$$

This relationship between the true (but unobserved) underlying parameters $\alpha$ and $\beta$ and the data points is called a linear regression model.

The goal is to find estimated values $\hat\alpha$ and $\hat\beta$ for the parameters $\alpha$ and $\beta$ which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals (see also Errors and residuals) $\hat\varepsilon_i$ (differences between actual and predicted values of the dependent variable $y$), each of which is given by, for any candidate parameter values $\alpha$ and $\beta$,

$$\hat\varepsilon_i = y_i - \alpha - \beta x_i.$$

In other words, $\hat\alpha$ and $\hat\beta$ solve the following minimization problem:

$$(\hat\alpha,\ \hat\beta) = \operatorname{argmin}\left(Q(\alpha, \beta)\right),$$

where the objective function $Q$ is:

$$Q(\alpha, \beta) = \sum_{i=1}^n \hat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2\,.$$

By expanding to get a quadratic expression in $\alpha$ and $\beta$, we can derive minimizing values of the function arguments, denoted $\hat\alpha$ and $\hat\beta$:[6]

$$\hat\alpha = \bar{y} - \hat\beta\,\bar{x}, \qquad \hat\beta = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = \frac{\sum_{i=1}^n \Delta x_i \Delta y_i}{\sum_{i=1}^n \Delta x_i^{\,2}}$$

Here we have introduced $\bar{x}$ and $\bar{y}$ as the averages of the $x_i$ and $y_i$, respectively, and $\Delta x_i = x_i - \bar{x}$ and $\Delta y_i = y_i - \bar{y}$ as the deviations of each data point from those averages.
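A minimal computational sketch of these closed-form estimates, assuming the data arrive as two equal-length Python lists (the helper name fit_slr and the numbers are illustrative):

```python
# Closed-form least-squares estimates via deviations from the means.
def fit_slr(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Deviations from the means: Δx_i and Δy_i.
    dx = [xi - x_bar for xi in x]
    dy = [yi - y_bar for yi in y]
    # Slope: Σ Δx_i Δy_i / Σ Δx_i².
    beta_hat = sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)
    # Intercept: ȳ − β̂ x̄.
    alpha_hat = y_bar - beta_hat * x_bar
    return alpha_hat, beta_hat

print(fit_slr([1, 2, 3, 4], [1.9, 4.1, 6.0, 8.1]))
```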

Expanded formulas

The above equations are efficient to use if the means of the $x$ and $y$ variables ($\bar{x}$ and $\bar{y}$) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the $\hat\alpha$ and $\hat\beta$ equations. These expanded equations may be derived from the more general polynomial regression equations[7][8] by defining the regression polynomial to be of order 1, as follows.

$$\begin{bmatrix} n & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2 \end{bmatrix} \begin{bmatrix} \hat\alpha \\ \hat\beta \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^n y_i \\ \sum_{i=1}^n x_i y_i \end{bmatrix}$$

The above system of linear equations may be solved directly, or stand-alone equations for $\hat\alpha$ and $\hat\beta$ may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.[9][7]

$$\hat\alpha = \frac{\sum_{i=1}^n y_i \sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_i \sum_{i=1}^n x_i y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}, \qquad \hat\beta = \frac{n \sum_{i=1}^n x_i y_i - \sum_{i=1}^n x_i \sum_{i=1}^n y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}$$
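These expanded formulas lend themselves to a single pass over the data, accumulating running sums without precomputing the means. A sketch (the helper name and data are illustrative):

```python
# One-pass computation of the intercept and slope from running sums.
def fit_slr_single_pass(pairs):
    n = s_x = s_y = s_xx = s_xy = 0.0
    for xi, yi in pairs:
        n += 1
        s_x += xi
        s_y += yi
        s_xx += xi * xi
        s_xy += xi * yi
    denom = n * s_xx - s_x ** 2
    alpha_hat = (s_y * s_xx - s_x * s_xy) / denom
    beta_hat = (n * s_xy - s_x * s_y) / denom
    return alpha_hat, beta_hat

print(fit_slr_single_pass([(1, 1.9), (2, 4.1), (3, 6.0), (4, 8.1)]))
```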

Interpretation

Relationship with the sample covariance matrix

The solution can be reformulated using elements of the covariance matrix:

$$\hat\beta = \frac{s_{x,y}}{s_x^2} = r_{xy} \frac{s_y}{s_x}$$

where $s_{x,y}$ is the sample covariance of $x$ and $y$, $s_x^2$ and $s_y^2$ are the sample variances of $x$ and $y$ (with $s_x$ and $s_y$ the corresponding sample standard deviations), and $r_{xy}$ is the sample correlation coefficient between $x$ and $y$.

Substituting the above expressions for $\hat\alpha$ and $\hat\beta$ into the fitted equation $y = \hat\alpha + \hat\beta x$ yields

$$\frac{y - \bar{y}}{s_y} = r_{xy} \frac{x - \bar{x}}{s_x}.$$

This shows that $r_{xy}$ is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since $-1 \le r_{xy} \le 1$, we get that if $x$ is some measurement and $y$ is a followup measurement from the same item, then we expect that $y$ (on average) will be closer to the mean measurement than it was to the original value of $x$. This phenomenon is known as regression toward the mean.
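The following sketch (illustrative data; Python 3.10+ for statistics.correlation) standardizes both variables and checks that the slope of the regression on the standardized data is exactly $r_{xy}$:

```python
# The regression of standardized y on standardized x has slope r_xy and intercept 0.
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.8, 8.3, 9.6]

x_bar, y_bar = statistics.mean(x), statistics.mean(y)
s_x, s_y = statistics.stdev(x), statistics.stdev(y)
r_xy = statistics.correlation(x, y)

z_x = [(xi - x_bar) / s_x for xi in x]     # standardized x
z_y = [(yi - y_bar) / s_y for yi in y]     # standardized y
slope_z = sum(a * b for a, b in zip(z_x, z_y)) / sum(a * a for a in z_x)

print(slope_z, r_xy)   # the two values agree up to floating-point rounding
```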

Generalizing the $\bar{x}$ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:

$$\overline{xy} = \frac{1}{n} \sum_{i=1}^n x_i y_i.$$

This notation gives us a concise formula for $r_{xy}$:

$$r_{xy} = \frac{\overline{xy} - \bar{x}\bar{y}}{\sqrt{\left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)}}.$$

The coefficient of determination ("R squared") is equal to $r_{xy}^2$ when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
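A short sketch of this formula, together with a check that the coefficient of determination of the fitted line equals $r_{xy}^2$ (data are illustrative):

```python
# r_xy from the "bar" (average) notation, and a check that R² = r_xy² for SLR.
def mean(v):
    return sum(v) / len(v)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 2.1, 2.9, 4.2, 4.8]

xy_bar, x_bar, y_bar = mean([a * b for a, b in zip(x, y)]), mean(x), mean(y)
x2_bar, y2_bar = mean([a * a for a in x]), mean([b * b for b in y])

r_xy = (xy_bar - x_bar * y_bar) / ((x2_bar - x_bar ** 2) * (y2_bar - y_bar ** 2)) ** 0.5

# R² from the fitted least-squares line.
beta_hat = (xy_bar - x_bar * y_bar) / (x2_bar - x_bar ** 2)
alpha_hat = y_bar - beta_hat * x_bar
ss_res = sum((yi - alpha_hat - beta_hat * xi) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - y_bar) ** 2 for yi in y)
r_squared = 1.0 - ss_res / ss_tot

print(r_xy ** 2, r_squared)   # the two values agree
```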

Interpretation about the slope

By multiplying each term of the summation in the numerator by $\frac{x_i - \bar{x}}{x_i - \bar{x}} = 1$ (thereby not changing it):

$$\hat\beta = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = \frac{\sum_{i=1}^n (x_i - \bar{x})^2 \frac{y_i - \bar{y}}{x_i - \bar{x}}}{\sum_{i=1}^n (x_i - \bar{x})^2} = \sum_{i=1}^n \frac{(x_i - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2} \, \frac{y_i - \bar{y}}{x_i - \bar{x}}$$

We can see that the slope (tangent of angle) of the regression line is the weighted average of $\frac{y_i - \bar{y}}{x_i - \bar{x}}$, which is the slope (tangent of angle) of the line that connects the i-th point to the average of all points, weighted by $(x_i - \bar{x})^2$. The further a point is from the center, the more "important" it is, since small errors in its position will affect the slope of the line connecting it to the center point less.
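The following sketch verifies this identity numerically: the least-squares slope equals the weighted average of the point-to-centroid slopes, with weights $(x_i - \bar{x})^2$ (data are illustrative and chosen so that no $x_i$ equals $\bar{x}$):

```python
# β̂ as a weighted average of slopes from each point to the center of mass.
x = [1.0, 2.0, 4.0, 5.0]
y = [1.2, 1.9, 4.3, 4.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Ordinary least-squares slope.
beta_hat = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
            / sum((xi - x_bar) ** 2 for xi in x))

# Weighted average of point-to-centroid slopes, weighted by (x_i − x̄)².
weights = [(xi - x_bar) ** 2 for xi in x]
point_slopes = [(yi - y_bar) / (xi - x_bar) for xi, yi in zip(x, y)]
weighted_avg = sum(w * s for w, s in zip(weights, point_slopes)) / sum(weights)

print(beta_hat, weighted_avg)   # identical up to floating-point rounding
```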

Interpretation about the intercept

$$\hat\alpha = \bar{y} - \hat\beta\,\bar{x},$$

Given $\hat\beta = \tan(\theta) = \mathrm{d}y / \mathrm{d}x$, so that $\mathrm{d}y = \mathrm{d}x \times \hat\beta$, with $\theta$ the angle the line makes with the positive $x$ axis, we have $y_{\text{intersection}} = \bar{y} - \mathrm{d}x \times \hat\beta = \bar{y} - \mathrm{d}y$.

Interpretation about the correlation

In the above formulation, notice that each $x_i$ is a constant ("known upfront") value, while the $y_i$ are random variables that depend on the linear function of $x_i$ and the random term $\varepsilon_i$. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.

In this framing, when $x_i$ is not actually a random variable, what type of parameter does the empirical correlation $r_{xy}$ estimate? The issue is that for each value $i$ we have $E(x_i) = x_i$ and $\operatorname{Var}(x_i) = 0$. A possible interpretation of $r_{xy}$ is to imagine that $x_i$ defines a random variable drawn from the empirical distribution of the $x$ values in our sample. For example, if $x$ had 10 values from the natural numbers $[1, 2, 3, \ldots, 10]$, then we can imagine $x$ to be a discrete uniform distribution on these values. Under this interpretation all $x_i$ have the same expectation and some positive variance. With this interpretation we can think of $r_{xy}$ as the estimator of the Pearson correlation between the random variable $y$ and the random variable $x$ (as we just defined it).

Numerical properties

  1. The regression line goes through the center-of-mass point $(\bar{x}, \bar{y})$ if the model includes an intercept term (i.e., is not forced through the origin).
  2. The sum of the residuals is zero if the model includes an intercept term: $\sum_{i=1}^n \hat\varepsilon_i = 0$.
  3. The residuals and the $x$ values are uncorrelated (whether or not there is an intercept term in the model): $\sum_{i=1}^n x_i \hat\varepsilon_i = 0$.

Statistical properties

Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.

Unbiasedness

The estimators $\hat\alpha$ and $\hat\beta$ are unbiased.

To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals $\varepsilon_i$ as random variables drawn independently from some distribution with mean zero. In other words, for each value of $x$, the corresponding value of $y$ is generated as a mean response $\alpha + \beta x$ plus an additional random variable $\varepsilon$ called the error term, equal to zero on average. Under such interpretation, the least-squares estimators $\hat\alpha$ and $\hat\beta$ will themselves be random variables whose means will equal the "true values" $\alpha$ and $\beta$. This is the definition of an unbiased estimator.

Variance of the mean response

Since the data in this context are defined to be $(x, y)$ pairs for every observation, the mean response at a given value of $x$, say $x_d$, is an estimate of the mean of the $y$ values in the population at the $x$ value of $x_d$, that is $\hat{E}(y \mid x_d) \equiv \hat{y}_d$. The variance of the mean response is given by:[10]

$$\operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) = \operatorname{Var}(\hat\alpha) + \left(\operatorname{Var}\hat\beta\right) x_d^2 + 2 x_d \operatorname{Cov}\left(\hat\alpha, \hat\beta\right).$$

This expression can be simplified to

$$\operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) = \sigma^2 \left(\frac{1}{m} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right),$$

where m is the number of data points.

To demonstrate this simplification, one can make use of the identity

$$\sum (x_i - \bar{x})^2 = \sum x_i^2 - \frac{1}{m} \left(\sum x_i\right)^2.$$
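A sketch of the mean-response variance formula; the error variance $\sigma^2$ is treated as known here purely for illustration (in practice it would be estimated from the residuals):

```python
# Variance of the estimated mean response at a new point x_d.
x = [1.0, 2.0, 3.0, 4.0, 5.0]   # observed x values (illustrative)
sigma2 = 0.25                   # assumed error variance σ² (illustrative)
x_d = 2.5                       # point at which the mean response is estimated

m = len(x)
x_bar = sum(x) / m
s_xx = sum((xi - x_bar) ** 2 for xi in x)

var_mean_response = sigma2 * (1.0 / m + (x_d - x_bar) ** 2 / s_xx)
print(var_mean_response)
```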

Variance of the predicted response


The predicted response distribution is the predicted distribution of the residuals at the given point $x_d$. So the variance is given by

$$\operatorname{Var}\left(y_d - \left[\hat\alpha + \hat\beta x_d\right]\right) = \operatorname{Var}(y_d) + \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right) - 2 \operatorname{Cov}\left(y_d, \left[\hat\alpha + \hat\beta x_d\right]\right) = \operatorname{Var}(y_d) + \operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right).$$

The second equality follows from the fact that $\operatorname{Cov}\left(y_d, \left[\hat\alpha + \hat\beta x_d\right]\right)$ is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term $\operatorname{Var}\left(\hat\alpha + \hat\beta x_d\right)$ was calculated earlier for the mean response.

Since $\operatorname{Var}(y_d) = \sigma^2$ (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by

$$\operatorname{Var}\left(y_d - \left[\hat\alpha + \hat\beta x_d\right]\right) = \sigma^2 + \sigma^2 \left(\frac{1}{m} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right) = \sigma^2 \left(1 + \frac{1}{m} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right).$$
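A corresponding sketch for the predicted response, which adds one extra $\sigma^2$ for the noise of the new observation itself (again with $\sigma^2$ assumed known for illustration):

```python
# Variance of a new predicted response at x_d: mean-response variance plus σ².
def predicted_response_variance(x, x_d, sigma2):
    m = len(x)
    x_bar = sum(x) / m
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    return sigma2 * (1.0 + 1.0 / m + (x_d - x_bar) ** 2 / s_xx)

print(predicted_response_variance([1.0, 2.0, 3.0, 4.0, 5.0], x_d=2.5, sigma2=0.25))
```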


Confidence intervals

The formulas given in the previous section allow one to calculate the point estimates of $\alpha$ and $\beta$, that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators $\hat\alpha$ and $\hat\beta$ vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times.

The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:

  1. the errors in the regression are normally distributed (the so-called classic regression assumption), or
  2. the number of observations $n$ is sufficiently large, in which case the estimator is approximately normally distributed.

The latter case is justified by the central limit theorem.

Normality assumption

Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean $\beta$ and variance $\sigma^2 / \sum (x_i - \bar{x})^2$, where $\sigma^2$ is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals $Q$ is distributed proportionally to $\chi^2$ with $n - 2$ degrees of freedom, and independently from $\hat\beta$. This allows us to construct a $t$-value

$$t = \frac{\hat\beta - \beta}{s_{\hat\beta}} \ \sim\ t_{n-2},$$

where

$$s_{\hat\beta} = \sqrt{\frac{\frac{1}{n-2} \sum_{i=1}^n \hat\varepsilon_i^{\,2}}{\sum_{i=1}^n (x_i - \bar{x})^2}}$$

is the standard error of the estimator $\hat\beta$.

This $t$-value has a Student's $t$-distribution with $n - 2$ degrees of freedom. Using it we can construct a confidence interval for $\beta$:

$$\beta \in \left[\hat\beta - s_{\hat\beta} t^*_{n-2},\ \hat\beta + s_{\hat\beta} t^*_{n-2}\right],$$

at confidence level $(1 - \gamma)$, where $t^*_{n-2}$ is the $\left(1 - \tfrac{\gamma}{2}\right)$-th quantile of the $t_{n-2}$ distribution. For example, if $\gamma = 0.05$ then the confidence level is 95%.
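One possible sketch of this interval in Python, using scipy.stats.t.ppf for the Student-$t$ quantile (the data are illustrative):

```python
# 95% confidence interval for the slope of a simple linear regression.
from scipy.stats import t

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 2.3, 2.8, 4.2, 5.1, 5.8]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n
s_xx = sum((xi - x_bar) ** 2 for xi in x)
beta_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
alpha_hat = y_bar - beta_hat * x_bar

# Residuals and the standard error of the slope estimator.
residuals = [yi - alpha_hat - beta_hat * xi for xi, yi in zip(x, y)]
s_beta = (sum(e ** 2 for e in residuals) / ((n - 2) * s_xx)) ** 0.5

gamma = 0.05                                  # 1 − γ = 95% confidence level
t_star = t.ppf(1 - gamma / 2, df=n - 2)       # quantile t*_{n−2}
print(beta_hat - t_star * s_beta, beta_hat + t_star * s_beta)
```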

Similarly, the confidence interval for the intercept coefficient $\alpha$ is given by

$$\alpha \in \left[\hat\alpha - s_{\hat\alpha} t^*_{n-2},\ \hat\alpha + s_{\hat\alpha} t^*_{n-2}\right],$$

at confidence level (1 − γ), where

$$s_{\hat\alpha} = s_{\hat\beta} \sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n-2)} \left(\sum_{i=1}^n \hat\varepsilon_i^{\,2}\right) \frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n (x_i - \bar{x})^2}}$$
The US "changes in unemployment – GDP growth" regression with the 95% confidence bands.

The confidence intervals for $\alpha$ and $\beta$ give us the general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression shown here the point estimates are

$$\hat\alpha = 0.859, \qquad \hat\beta = -1.817.$$

The 95% confidence intervals for these estimates are

$$\alpha \in [0.76,\ 0.96], \qquad \beta \in [-2.06,\ -1.58].$$

In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown[11] that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation

$$(\alpha + \beta\xi) \in \left[\,\hat\alpha + \hat\beta\xi \pm t^*_{n-2} \sqrt{\left(\frac{1}{n-2} \sum \hat\varepsilon_i^{\,2}\right) \left(\frac{1}{n} + \frac{(\xi - \bar{x})^2}{\sum (x_i - \bar{x})^2}\right)}\,\right].$$
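A sketch evaluating this band on a few grid values of $\xi$ (illustrative data; $\sigma^2$ is estimated from the residuals exactly as in the formula):

```python
# Pointwise 95% confidence band around the fitted regression line.
from scipy.stats import t

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 2.3, 2.8, 4.2, 5.1, 5.8]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n
s_xx = sum((xi - x_bar) ** 2 for xi in x)
beta_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
alpha_hat = y_bar - beta_hat * x_bar
sse = sum((yi - alpha_hat - beta_hat * xi) ** 2 for xi, yi in zip(x, y))

t_star = t.ppf(0.975, df=n - 2)          # 95% band
for xi_grid in [1.0, 2.0, 3.5, 5.0, 6.0]:
    centre = alpha_hat + beta_hat * xi_grid
    half_width = t_star * ((sse / (n - 2)) * (1.0 / n + (xi_grid - x_bar) ** 2 / s_xx)) ** 0.5
    print(xi_grid, centre - half_width, centre + half_width)
```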

When the model assumes that the intercept is fixed and equal to 0 ($\alpha = 0$), the standard error of the slope becomes:

$$s_{\hat\beta} = \sqrt{\frac{\frac{1}{n-1} \sum_{i=1}^n \hat\varepsilon_i^{\,2}}{\sum_{i=1}^n x_i^2}}$$

with $\hat\varepsilon_i = y_i - \hat{y}_i$.

Asymptotic assumption

The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile $t^*_{n-2}$ of Student's $t$-distribution is replaced with the quantile $q^*$ of the standard normal distribution. Occasionally the fraction $\tfrac{1}{n-2}$ is replaced with $\tfrac{1}{n}$. When $n$ is large such a change does not alter the results appreciably.

Numerical example


This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.

Height (m), x_i: 1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
Mass (kg), y_i: 52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46
i   x_i   y_i   x_i²   x_i·y_i   y_i²
1 1.47 52.21 2.1609 76.7487 2725.8841
2 1.50 53.12 2.2500 79.6800 2821.7344
3 1.52 54.48 2.3104 82.8096 2968.0704
4 1.55 55.84 2.4025 86.5520 3118.1056
5 1.57 57.20 2.4649 89.8040 3271.8400
6 1.60 58.57 2.5600 93.7120 3430.4449
7 1.63 59.93 2.6569 97.6859 3591.6049
8 1.65 61.29 2.7225 101.1285 3756.4641
9 1.68 63.11 2.8224 106.0248 3982.8721
10 1.70 64.47 2.8900 109.5990 4156.3809
11 1.73 66.28 2.9929 114.6644 4393.0384
12 1.75 68.10 3.0625 119.1750 4637.6100
13 1.78 69.92 3.1684 124.4576 4888.8064
14 1.80 72.19 3.2400 129.9420 5211.3961
15 1.83 74.46 3.3489 136.2618 5544.2916
Σ 24.76 931.17 41.0532 1548.2453 58498.5439

There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums:

$$S_x = \sum x_i = 24.76, \quad S_y = \sum y_i = 931.17, \quad S_{xx} = \sum x_i^2 = 41.0532, \quad S_{yy} = \sum y_i^2 = 58498.5439, \quad S_{xy} = \sum x_i y_i = 1548.2453$$

These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.

$$\hat\beta = \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2} = 61.272$$

$$\hat\alpha = \frac{1}{n} S_y - \hat\beta \frac{1}{n} S_x = -39.062$$

$$s_\varepsilon^2 = \frac{1}{n(n-2)} \left[ n S_{yy} - S_y^2 - \hat\beta^2 \left(n S_{xx} - S_x^2\right) \right] = 0.5762$$

$$s_{\hat\beta}^2 = \frac{n\, s_\varepsilon^2}{n S_{xx} - S_x^2} = 3.1539$$

$$s_{\hat\alpha}^2 = s_{\hat\beta}^2\, \frac{1}{n} S_{xx} = 8.63185$$
Graph of points and linear least squares lines in the simple linear regression numerical example

The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is $t^*_{13} = 2.1604$, and thus the 95% confidence intervals for $\alpha$ and $\beta$ are

$$\alpha \in \left[\hat\alpha - t^*_{13} s_{\hat\alpha},\ \hat\alpha + t^*_{13} s_{\hat\alpha}\right] = [-45.4,\ -32.7], \qquad \beta \in \left[\hat\beta - t^*_{13} s_{\hat\beta},\ \hat\beta + t^*_{13} s_{\hat\beta}\right] = [57.4,\ 65.1]$$

The product-moment correlation coefficient might also be calculated:

$$\hat{r} = \frac{n S_{xy} - S_x S_y}{\sqrt{\left(n S_{xx} - S_x^2\right)\left(n S_{yy} - S_y^2\right)}} = 0.9946$$
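The hand calculation above can be checked with a short script; the following sketch recomputes the five sums and the resulting estimates from the same height/mass data:

```python
# Reproducing the numerical example: sums, slope, intercept and correlation.
heights = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
           1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
masses  = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
           63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
n = len(heights)

s_x  = sum(heights)
s_y  = sum(masses)
s_xx = sum(x * x for x in heights)
s_yy = sum(y * y for y in masses)
s_xy = sum(x * y for x, y in zip(heights, masses))

beta_hat  = (n * s_xy - s_x * s_y) / (n * s_xx - s_x ** 2)
alpha_hat = (s_y - beta_hat * s_x) / n
r_hat = (n * s_xy - s_x * s_y) / ((n * s_xx - s_x ** 2) * (n * s_yy - s_y ** 2)) ** 0.5

print(beta_hat, alpha_hat, r_hat)   # ≈ 61.27, −39.06, 0.9946
```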

Alternatives

Calculating the parameters of a linear model by minimizing the squared error.

In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due to regression dilution.

Other estimation methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points).

Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Minimizing squared error is also sensitive to outliers, as it can lead to a model that attempts to fit the outliers more than the data.

Line fitting

Line fitting is the process of constructing a straight line that has the best fit to a series of data points.

Simple linear regression without the intercept term (single regressor)

Sometimes it is appropriate to force the regression line to pass through the origin, because $x$ and $y$ are assumed to be proportional. For the model without the intercept term, $y = \beta x$, the OLS estimator for $\beta$ simplifies to

$$\hat\beta = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} = \frac{\overline{xy}}{\overline{x^2}}$$

Substituting $(x - h)$ and $(y - k)$ in place of $x$ and $y$ gives the regression through the point $(h, k)$:

$$\hat\beta = \frac{\sum_{i=1}^n (x_i - h)(y_i - k)}{\sum_{i=1}^n (x_i - h)^2} = \frac{\overline{(x - h)(y - k)}}{\overline{(x - h)^2}} = \frac{\overline{xy} - k\bar{x} - h\bar{y} + hk}{\overline{x^2} - 2h\bar{x} + h^2} = \frac{\overline{xy} - \bar{x}\bar{y} + (\bar{x} - h)(\bar{y} - k)}{\overline{x^2} - \bar{x}^2 + (\bar{x} - h)^2} = \frac{\operatorname{Cov}(x, y) + (\bar{x} - h)(\bar{y} - k)}{\operatorname{Var}(x) + (\bar{x} - h)^2},$$

where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
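A brief sketch of both the no-intercept estimator and the regression forced through an arbitrary point $(h, k)$ (the data and the chosen point are illustrative):

```python
# Regression through the origin and through a chosen point (h, k).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

# Through the origin: β̂ = Σ x_i y_i / Σ x_i².
beta_origin = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)

# Through (h, k): shift the data and reuse the same formula.
h, k = 1.0, 2.0
beta_through_hk = (sum((xi - h) * (yi - k) for xi, yi in zip(x, y))
                   / sum((xi - h) ** 2 for xi in x))

print(beta_origin, beta_through_hk)
```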

See also

References



  1. Template:Cite book
  2. Template:Cite web
  3. Template:Cite book
  4. Template:Cite journal
  5. Template:Cite journal
  6. Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285
  7. Template:Cite web
  8. Template:Cite web
  9. Template:Cite web
  10. Template:Cite book
  11. Casella, G. and Berger, R. L. (2002), Statistical Inference (2nd ed.), Cengage, pp. 558–559.