Linear estimators
Previously, when deriving the least squares estimator (LSE) in simple regression, we used the following pair of equations.
$$
\begin{eqnarray}
\left\{
\begin{array}{rcl}
\hat{\alpha}+\bar{x}\hat{\beta}-\bar{y}
&=&0\\
\displaystyle n\bar{x}\hat{\alpha}+\left(\sum_{i=1}^n x_i^2\right)\hat{\beta}-\left(\sum_{i=1}^n x_i y_i\right)
&=&0
\end{array}
\right.\;\cdots\; \href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\text{Regression line of simple linear regression (solving for }\hat{\alpha},\hat{\beta}\text{)}}
\end{eqnarray}
$$
The above can be rearranged into the following form, known as the normal equations.
$$
\begin{eqnarray}
\left\{
\begin{array}{rclcl}
\hat{\alpha}+\bar{x}\hat{\beta}
&=&\bar{y}&=&\displaystyle \sum_{i=1}^n \frac{1}{n} y_i\\
\displaystyle n\bar{x}\hat{\alpha}+\left(\sum_{i=1}^n x_i^2\right)\hat{\beta}
&=&\displaystyle \sum_{i=1}^n x_i y_i&&
\end{array}
\right.
\end{eqnarray}
$$
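As a numerical sanity check, the closed-form least squares estimates can be plugged back into both normal equations. A minimal sketch assuming NumPy; the dataset `x`, `y` is made up for illustration and is not from this article:

```python
import numpy as np

# Hypothetical sample data (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

# Closed-form least squares estimates: beta_hat = Sxy / Sxx, alpha_hat = ybar - beta_hat * xbar
Sxx = np.sum((x - x.mean()) ** 2)
Sxy = np.sum((x - x.mean()) * (y - y.mean()))
beta_hat = Sxy / Sxx
alpha_hat = y.mean() - beta_hat * x.mean()

# First normal equation:  alpha_hat + xbar * beta_hat - ybar = 0
print(alpha_hat + x.mean() * beta_hat - y.mean())  # ≈ 0 (up to float rounding)
# Second normal equation: n*xbar*alpha_hat + (Σ x_i²) beta_hat - Σ x_i y_i = 0
print(n * x.mean() * alpha_hat + np.sum(x ** 2) * beta_hat - np.sum(x * y))  # ≈ 0
```

Both residuals vanish (up to floating-point rounding), since the LSE is defined as the solution of exactly these equations.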
An estimator that can be written as a linear combination \(\sum_{i=1}^{n}c_iy_i\) of the observations \(y_i\), with constants \(c_i\), is called a linear estimator.
It follows that \(\hat{\alpha}\) and \(\hat{\beta}\) are linear estimators in the \(y_i\).
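The weights can be written out explicitly: one standard form is \(\hat{\beta}=\sum_i c_i y_i\) with \(c_i=(x_i-\bar{x})/S_{xx}\), and \(\hat{\alpha}=\sum_i d_i y_i\) with \(d_i=1/n-\bar{x}c_i\). A sketch assuming NumPy and the same kind of hypothetical data as above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

Sxx = np.sum((x - x.mean()) ** 2)
c = (x - x.mean()) / Sxx       # constant weights c_i for beta_hat
d = 1.0 / n - x.mean() * c     # constant weights d_i for alpha_hat

# Estimators expressed as linear combinations of the observations y_i
beta_hat_linear = np.sum(c * y)
alpha_hat_linear = np.sum(d * y)

# Compare with the direct least squares formulas
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
alpha_hat = y.mean() - beta_hat * x.mean()
print(np.isclose(beta_hat_linear, beta_hat),
      np.isclose(alpha_hat_linear, alpha_hat))  # True True
```

The weights depend only on the \(x_i\) (treated as constants), which is exactly what makes \(\hat{\alpha}\) and \(\hat{\beta}\) linear in the \(y_i\).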
Expected value of the observations \(y_i\)
$$
\begin{eqnarray}
y_i&=&\alpha+\beta x_i+\epsilon_i\;(i=1,\cdots,n)
\\\left\{\epsilon_i|i=1,\cdots,n\right\}&:&\epsilon_i \overset{iid}{\sim} N(0,\sigma^2)
\\&&\;\cdots\;\text{independent and identically distributed (IID, i.i.d., iid)}
\\&&\;\cdots\;\mathrm{E}\left[\epsilon_i\right]=0,\;\mathrm{V}\left[\epsilon_i\right]=\sigma^2,\;\text{mutually independent}
\\\mathrm{E}\left[y_i\right]
&=&\mathrm{E}\left[\alpha+\beta x_i+\epsilon_i\right]
\\&=&\mathrm{E}\left[\alpha\right]+\mathrm{E}\left[\beta x_i\right]+\mathrm{E}\left[\epsilon_i\right]
\\&=&\alpha+\beta x_i+0\;\cdots\;\mathrm{E}\left[C\right]=C\;(C:\text{constant with respect to the expectation}),\;\mathrm{E}\left[\epsilon_i\right]=0
\\&=&\alpha+\beta x_i
\end{eqnarray}
$$
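This expectation can also be illustrated by simulation: averaging many draws of \(y_i=\alpha+\beta x_i+\epsilon_i\) at a fixed design point recovers \(\alpha+\beta x_i\). A sketch assuming NumPy; the parameter values are arbitrary choices, not from this article:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma = 1.0, 2.0, 0.5  # assumed true parameters (arbitrary)
x_i = 3.0                           # a fixed design point

# Simulate y_i = alpha + beta*x_i + eps_i many times, eps_i ~ N(0, sigma^2)
eps = rng.normal(0.0, sigma, size=200_000)
y_i = alpha + beta * x_i + eps

print(y_i.mean())  # ≈ alpha + beta*x_i = 7.0
```

The sample mean converges to \(\alpha+\beta x_i\) because \(\mathrm{E}[\epsilon_i]=0\).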
Proof that the least squares estimator \(\hat{\beta}\) is unbiased
$$
\begin{eqnarray}
\mathrm{E}\left[\hat{\beta}\right]
&=&\mathrm{E}\left[\frac{S_{xy}}{S_{xx}}\right]
\;\cdots\;\hat{\beta}=\frac{S_{xy}}{S_{xx}}
\\&=&\mathrm{E}\left[\frac{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{S_{xx}}\right]
\;\cdots\;S_{xy}=\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)
\\&=&\frac{1}{S_{xx}}\mathrm{E}\left[\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)\right]
\;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[cX\right]=c\mathrm{E}\left[X\right]}
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\mathrm{E}\left[\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)\right]
\\&&\;\cdots\;\mathrm{E}\left[\sum_{i=1}^{n}A_i\right]=\mathrm{E}\left[A_1+\cdots+A_i+\cdots+A_n\right]=\mathrm{E}\left[A_1\right]+\cdots+\mathrm{E}\left[A_i\right]+\cdots+\mathrm{E}\left[A_n\right]=\sum_{i=1}^{n}\mathrm{E}\left[A_i\right]
\\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[X\pm Y\right]=\mathrm{E}\left[X\right]\pm\mathrm{E}\left[Y\right]}
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\mathrm{E}\left[y_i-\bar{y}\right]
\;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[cX\right]=c\mathrm{E}\left[X\right]}
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(\mathrm{E}\left[y_i\right]-\mathrm{E}\left[\bar{y}\right]\right)
\;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[X\pm Y\right]=\mathrm{E}\left[X\right]\pm\mathrm{E}\left[Y\right]}
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left\{\left(\alpha+\beta x_i\right)-\left(\alpha+\beta \bar{x}\right)\right\}
\;\cdots\;\mathrm{E}\left[y_i\right]=\alpha+\beta x_i,\;\mathrm{E}\left[\bar{y}\right]=\frac{1}{n}\sum_{i=1}^{n}\mathrm{E}\left[y_i\right]=\alpha+\beta \bar{x}
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(\alpha + \beta x_i -\alpha-\beta \bar{x}\right)
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\beta\left(x_i-\bar{x}\right)
\\&=&\frac{1}{S_{xx}}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2\beta
\\&=&\frac{1}{S_{xx}}\beta\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2
\\&&\;\cdots\;\sum_{i=1}^{n} cX_i=cX_1+\cdots+cX_i+\cdots+cX_n=c(X_1+\cdots+X_i+\cdots+X_n)=c\sum_{i=1}^{n} X_i
\\&=&\frac{1}{S_{xx}}\beta S_{xx}
\;\cdots\;S_{xx}=\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2
\\&=&\beta
\end{eqnarray}
$$
Proof that the least squares estimator \(\hat{\alpha}\) is unbiased
$$
\begin{eqnarray}
\mathrm{E}\left[\hat{\alpha}\right]
&=&\mathrm{E}\left[\bar{y}-\hat{\beta}\bar{x}\right]
\;\cdots\;\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}
\\&=&\mathrm{E}\left[\bar{y}\right]-\mathrm{E}\left[\hat{\beta}\bar{x}\right]
\;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[X\pm Y\right]=\mathrm{E}\left[X\right]\pm\mathrm{E}\left[Y\right]}
\\&=&\mathrm{E}\left[\bar{y}\right]-\bar{x}\mathrm{E}\left[\hat{\beta}\right]
\;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[cX\right]=c\mathrm{E}\left[X\right]}
\\&=&\alpha+\beta\bar{x}-\beta\bar{x}
\;\cdots\;\mathrm{E}\left[\bar{y}\right]=\alpha+\beta \bar{x},\;\mathrm{E}\left[\hat{\beta}\right]=\beta
\\&=&\alpha
\end{eqnarray}
$$
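Both unbiasedness results can be checked by Monte Carlo simulation: regenerating the errors many times and averaging the resulting estimates should recover \(\alpha\) and \(\beta\). A sketch assuming NumPy, with arbitrary true parameters and design points (not from this article):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma = 1.0, 2.0, 1.0  # assumed true parameters (arbitrary)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n, trials = len(x), 20_000

Sxx = np.sum((x - x.mean()) ** 2)
alpha_hats, beta_hats = [], []
for _ in range(trials):
    # y_i = alpha + beta*x_i + eps_i with eps_i ~ iid N(0, sigma^2)
    y = alpha + beta * x + rng.normal(0.0, sigma, size=n)
    b = np.sum((x - x.mean()) * (y - y.mean())) / Sxx  # beta_hat = Sxy/Sxx
    a = y.mean() - b * x.mean()                        # alpha_hat = ybar - beta_hat*xbar
    beta_hats.append(b)
    alpha_hats.append(a)

print(np.mean(beta_hats), np.mean(alpha_hats))  # ≈ beta, alpha
```

The averages of \(\hat{\beta}\) and \(\hat{\alpha}\) across trials approach \(\beta\) and \(\alpha\), consistent with \(\mathrm{E}[\hat{\beta}]=\beta\) and \(\mathrm{E}[\hat{\alpha}]=\alpha\).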