Mistakes are all but guaranteed; I would be grateful if you could point them out in the comments. (Please note that I frequently rewrite anything I notice, without any particular announcement.)

Distribution of the least-squares estimators \(\hat{\alpha},\hat{\beta}\) in the simple regression model

The simple regression model

$$ \begin{eqnarray} y_i&=&\alpha+\beta x_i +\epsilon_i\;(i=1,\cdots,n)\;\cdots\;\epsilon_i \overset{iid}{\sim} \mathrm{N}\left(0,\sigma^2\right) \\\mathrm{E}\left[y_i\right]&=&\mathrm{E}\left[\alpha+\beta x_i +\epsilon_i\right] \\&=&\alpha+\beta x_i +\mathrm{E}\left[\epsilon_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\mathrm{E}\left[X+t\right]=\mathrm{E}\left[X\right]+t} \\&=&\alpha+\beta x_i+0 \;\cdots\;\epsilon_i \overset{iid}{\sim} \mathrm{N}\left(0,\sigma^2\right) \\&=&\alpha+\beta x_i \\\mathrm{V}\left[y_i\right]&=&\mathrm{V}\left[\alpha+\beta x_i +\epsilon_i\right] \\&=&\mathrm{V}\left[\epsilon_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-variance.html}{\mathrm{V}\left[X+t\right]=\mathrm{V}\left[X\right]} \\&=&\sigma^2 \;\cdots\;\epsilon_i \overset{iid}{\sim} \mathrm{N}\left(0,\sigma^2\right) \\y_i&\sim&\mathrm{N}(\alpha+\beta x_i,\sigma^2) \end{eqnarray} $$ Thus \(y_i\) is a random variable following \(\mathrm{N}\left(\alpha+\beta x_i,\sigma^2\right)\).
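The claim \(y_i\sim\mathrm{N}(\alpha+\beta x_i,\sigma^2)\) can be spot-checked by simulation. The following is a minimal pure-Python sketch; the parameter values (`alpha`, `beta`, `sigma`, `x_i`) are hypothetical and chosen only for illustration:

```python
import random
import statistics

# For a fixed x_i, repeated draws of y_i = alpha + beta*x_i + eps_i,
# eps_i ~ N(0, sigma^2), should have mean near alpha + beta*x_i and
# variance near sigma^2.
random.seed(0)
alpha, beta, sigma = 1.0, 2.0, 0.5   # hypothetical parameters
x_i = 3.0                            # hypothetical fixed regressor value

draws = [alpha + beta * x_i + random.gauss(0.0, sigma) for _ in range(200_000)]

mean_y = statistics.fmean(draws)     # should be close to alpha + beta*x_i
var_y = statistics.pvariance(draws)  # should be close to sigma^2

assert abs(mean_y - (alpha + beta * x_i)) < 0.01
assert abs(var_y - sigma**2) < 0.01
```

The tolerances are loose Monte Carlo tolerances, not exact identities.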

Expressing \(\hat{\beta}\) in the form \(\sum_{i=1}^n c_iy_i\)

An estimator that can be written in the form \(\sum_{i=1}^n c_ix_i\;(x_i:\text{sample values},\;c_i:\text{constants})\) is called a linear estimator.
(A well-known example of a linear estimator is the sample mean \(\bar{x}\), which can be written as \(\bar{x}=\sum_{i=1}^n \frac{1}{n} x_i\).) $$ \begin{eqnarray} \hat{\beta}&=&\frac{S_{xy}}{S_{xx}} \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\hat{\beta}=\frac{S_{xy}}{S_{xx}}} ,\;S_{xx}=\sum_{i=1}^n\left(x_i-\bar{x}\right)^2,\;\bar{x}=\frac{1}{n}\sum_{i=1}^nx_i \\&=&\frac{1}{S_{xx}} \sum_{i=1}^n \left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right) \\&=&\frac{1}{S_{xx}} \sum_{i=1}^n \left(x_i-\bar{x}\right)y_i \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/10/sxy.html}{\sum_{i=1}^n\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)=S_{xy}= \sum_{i=1}^n \left(x_i-\bar{x}\right)y_i} \\&=& \sum_{i=1}^n \frac{x_i-\bar{x}}{S_{xx}}y_i \\&=& \sum_{i=1}^n c_iy_i \;\cdots\;c_i=\frac{x_i-\bar{x}}{S_{xx}} \end{eqnarray} $$
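The identity \(\hat{\beta}=S_{xy}/S_{xx}=\sum_i c_iy_i\) with \(c_i=(x_i-\bar{x})/S_{xx}\) can be verified exactly on toy data. A minimal sketch, assuming made-up values for \(\alpha,\beta,\sigma\) and \(x_i\):

```python
import random

# Check that beta_hat = S_xy / S_xx coincides with the linear-estimator
# form sum_i c_i * y_i, c_i = (x_i - x_bar) / S_xx, on hypothetical data.
random.seed(0)
alpha, beta, sigma = 1.0, 2.0, 0.5          # hypothetical parameters
x = [float(i) for i in range(1, 11)]
y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
S_xx = sum((xi - x_bar) ** 2 for xi in x)
S_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))

beta_hat = S_xy / S_xx                       # least-squares slope
c = [(xi - x_bar) / S_xx for xi in x]        # linear-estimator weights
beta_hat_linear = sum(ci * yi for ci, yi in zip(c, y))

assert abs(beta_hat - beta_hat_linear) < 1e-12
assert abs(sum(c)) < 1e-12                   # the weights sum to zero
```

The two computations agree to floating-point precision because they are algebraically identical; the weights summing to zero reflects \(\sum_i(x_i-\bar{x})=0\).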

Deriving the expectation of \(\hat{\beta}\) from \(\sum_{i=1}^n c_iy_i\)

$$ \begin{eqnarray} \mathrm{E}\left[\sum_{i=1}^n c_iy_i\right] &=&\mathrm{E}\left[\sum_{i=1}^n \frac{x_i-\bar{x}}{S_{xx}}y_i\right] \\&=&\mathrm{E}\left[\frac{S_{xy}}{S_{xx}}\right] \;\cdots\;\text{from the above},\;\frac{S_{xy}}{S_{xx}}=\sum_{i=1}^n \frac{x_i-\bar{x}}{S_{xx}}y_i \\&=&\mathrm{E}\left[\hat{\beta}\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\hat{\beta}=\frac{S_{xy}}{S_{xx}}} \\&=&\beta \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2.html}{\mathrm{E}\left[\hat{\beta}\right]=\beta} \end{eqnarray} $$

Deriving the variance of \(\hat{\beta}\) from \(\sum_{i=1}^n c_iy_i\)

$$ \begin{eqnarray} \mathrm{V}\left[\sum_{i=1}^n c_iy_i\right] &=&\mathrm{V}\left[\sum_{i=1}^n \frac{x_i-\bar{x}}{S_{xx}}y_i\right] \\&=&\sum_{i=1}^n \mathrm{V}\left[\frac{x_i-\bar{x}}{S_{xx}}y_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-variance.html}{\text{the }y_i\text{ are mutually independent, }\mathrm{Cov}\left[y_i, y_j\right]=0;\;\text{for independent variables }\mathrm{V}\left[X+Y\right]=\mathrm{V}\left[X\right]+\mathrm{V}\left[Y\right]} \\&=&\sum_{i=1}^n \left(\frac{x_i-\bar{x}}{S_{xx}}\right)^2\mathrm{V}\left[y_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-variance.html}{\mathrm{V}\left[cX\right]=c^2\mathrm{V}\left[X\right]} \\&=&\sum_{i=1}^n \left(\frac{x_i-\bar{x}}{S_{xx}}\right)^2\sigma^2 \\&=&\frac{\sigma^2}{S_{xx}^2}\sum_{i=1}^n \left(x_i-\bar{x}\right)^2 \\&=&\frac{\sigma^2}{S_{xx}^2}S_{xx} \;\cdots\;S_{xx}=\sum_{i=1}^n \left(x_i-\bar{x}\right)^2 \\&=&\frac{\sigma^2}{S_{xx}} \;\cdots\;\text{same result as }\href{https://shikitenkai.blogspot.com/2020/08/2variancecovariance.html}{\mathrm{V}\left[\frac{S_{xy}}{S_{xx}}\right]=\frac{\sigma^2}{S_{xx}}} \end{eqnarray} $$

Distribution of \(\hat{\beta}\)

As shown above, \(\hat{\beta}\) is a linear estimator: it can be written as a sum of constant multiples of the normally distributed \(y_i\). Hence \(\hat{\beta}\) itself follows a normal distribution, whose expectation and variance are the ones derived above \(\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/zc1xc2y-2.html}{Z=c_1X+c_2Y(X\sim\mathrm{N}(\mu_1,\sigma_1^2),Y\sim\mathrm{N}(\mu_2,\sigma_2^2),Z\sim\mathrm{N}(c_1\mu_1+c_2\mu_2,c_1^2\sigma_1^2+c_2^2\sigma_2^2))}\). $$ \begin{eqnarray} \hat{\beta}&\sim& \mathrm{N}\left(\beta,\frac{\sigma^2}{S_{xx}}\right) \end{eqnarray} $$
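The sampling distribution \(\hat{\beta}\sim\mathrm{N}(\beta,\sigma^2/S_{xx})\) can be checked by Monte Carlo: regenerate the \(y_i\) many times with the \(x_i\) fixed and look at the mean and variance of the resulting \(\hat{\beta}\) values. A sketch with hypothetical parameters:

```python
import random
import statistics

random.seed(1)
alpha, beta, sigma = 1.0, 2.0, 0.5          # hypothetical parameters
x = [float(i) for i in range(1, 11)]        # fixed design points
n = len(x)
x_bar = sum(x) / n
S_xx = sum((xi - x_bar) ** 2 for xi in x)   # = 82.5 for x = 1..10

def beta_hat_once():
    # One simulated data set and its least-squares slope.
    y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]
    y_bar = sum(y) / n
    S_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    return S_xy / S_xx

estimates = [beta_hat_once() for _ in range(50_000)]
mean_b = statistics.fmean(estimates)        # should be close to beta
var_b = statistics.pvariance(estimates)     # should be close to sigma^2 / S_xx

assert abs(mean_b - beta) < 0.005
assert abs(var_b - sigma**2 / S_xx) < 0.001
```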

Expressing \(\hat{\alpha}\) in the form \(\sum_{i=1}^n c_iy_i\)

$$ \begin{eqnarray} \hat{\alpha}&=&\bar{y}-\hat{\beta}\bar{x} \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}} \\&=&\sum_{i=1}^n\frac{1}{n}y_i-\frac{S_{xy}}{S_{xx}}\bar{x} \\&=&\sum_{i=1}^n\frac{1}{n}y_i-\left(\sum_{i=1}^n\frac{x_i-\bar{x}}{S_{xx}}y_i\right)\bar{x} \;\cdots\;\text{from the above},\;\frac{S_{xy}}{S_{xx}}=\sum_{i=1}^n \frac{x_i-\bar{x}}{S_{xx}}y_i \\&=&\sum_{i=1}^n\frac{1}{n}y_i-\bar{x}\sum_{i=1}^n\frac{x_i-\bar{x}}{S_{xx}}y_i \\&=&\sum_{i=1}^n\frac{1}{n}y_i-\sum_{i=1}^n\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}y_i \\&=&\sum_{i=1}^n\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)y_i \\&=& \sum_{i=1}^n c_iy_i \;\cdots\;c_i=\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}} \end{eqnarray} $$
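As with \(\hat{\beta}\), the weights \(c_i=\frac{1}{n}-\frac{\bar{x}(x_i-\bar{x})}{S_{xx}}\) can be checked to reproduce \(\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}\) exactly. A deterministic sketch on made-up data:

```python
# Hypothetical data, chosen only for illustration.
x = [1.0, 2.0, 4.0, 7.0, 11.0]
y = [2.1, 3.9, 8.2, 15.1, 22.8]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
S_xx = sum((xi - x_bar) ** 2 for xi in x)
S_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
beta_hat = S_xy / S_xx
alpha_hat = y_bar - beta_hat * x_bar        # usual intercept formula

# Linear-estimator form: alpha_hat = sum_i c_i * y_i.
c = [1.0 / n - x_bar * (xi - x_bar) / S_xx for xi in x]
alpha_hat_linear = sum(ci * yi for ci, yi in zip(c, y))

assert abs(alpha_hat - alpha_hat_linear) < 1e-12
assert abs(sum(c) - 1.0) < 1e-12            # these weights sum to 1
```

The weights sum to 1 because \(\sum_i \frac{1}{n}=1\) and \(\sum_i(x_i-\bar{x})=0\).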

Deriving the expectation of \(\hat{\alpha}\) from \(\sum_{i=1}^n c_iy_i\)

$$ \begin{eqnarray} \mathrm{E}\left[\sum_{i=1}^n c_iy_i\right] &=&\mathrm{E}\left[\sum_{i=1}^n\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)y_i\right] \\&=&\mathrm{E}\left[\sum_{i=1}^n\left(\frac{1}{n}y_i-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}y_i\right)\right] \\&=&\mathrm{E}\left[\sum_{i=1}^n\frac{1}{n}y_i-\sum_{i=1}^n\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}y_i\right] \\&=&\mathrm{E}\left[\sum_{i=1}^n\frac{1}{n}y_i\right]-\mathrm{E}\left[\sum_{i=1}^n\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}y_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\mathrm{E}\left[X+Y\right]=\mathrm{E}\left[X\right]+\mathrm{E}\left[Y\right]} \\&=&\mathrm{E}\left[\frac{1}{n}\sum_{i=1}^ny_i\right]-\mathrm{E}\left[\bar{x}\sum_{i=1}^n\frac{\left(x_i-\bar{x}\right)}{S_{xx}}y_i\right] \\&=&\frac{1}{n}\mathrm{E}\left[\sum_{i=1}^ny_i\right]-\bar{x}\mathrm{E}\left[\sum_{i=1}^n\frac{\left(x_i-\bar{x}\right)}{S_{xx}}y_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\mathrm{E}\left[cX\right]=c\mathrm{E}\left[X\right]} \\&=&\frac{1}{n}\sum_{i=1}^n\mathrm{E}\left[y_i\right]-\bar{x}\mathrm{E}\left[\frac{S_{xy}}{S_{xx}}\right] \;\cdots\;\text{from the above},\;\frac{S_{xy}}{S_{xx}}=\sum_{i=1}^n \frac{x_i-\bar{x}}{S_{xx}}y_i \\&=&\frac{1}{n}\sum_{i=1}^n\left(\alpha+\beta x_i\right)-\bar{x}\mathrm{E}\left[\frac{S_{xy}}{S_{xx}}\right] \;\cdots\;\mathrm{E}\left[y_i\right]=\alpha+\beta x_i \\&=&\frac{1}{n}\left(\alpha\sum_{i=1}^n1+\beta\sum_{i=1}^n x_i\right)-\bar{x}\mathrm{E}\left[\hat{\beta}\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\hat{\beta}=\frac{S_{xy}}{S_{xx}}} \\&=&\frac{1}{n}\left(n\alpha+\beta\;n\bar{x}\right)-\bar{x}\beta \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2.html}{\mathrm{E}\left[\hat{\beta}\right]=\beta} \\&=&\frac{1}{n}n\left(\alpha+\beta\bar{x}\right)-\bar{x}\beta \\&=&\left(\alpha+\beta\bar{x}\right)-\bar{x}\beta \\&=&\alpha \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2.html}{\mathrm{E}\left[\hat{\alpha}\right]=\alpha} \end{eqnarray} $$

Deriving the variance of \(\hat{\alpha}\) from \(\sum_{i=1}^n c_iy_i\)

$$ \begin{eqnarray} \mathrm{V}\left[\sum_{i=1}^n c_iy_i\right] &=&\mathrm{V}\left[\sum_{i=1}^n\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)y_i\right] \\&=&\sum_{i=1}^n\mathrm{V}\left[\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)y_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-variance.html}{\text{the }y_i\text{ are mutually independent, }\mathrm{Cov}\left[y_i, y_j\right]=0;\;\text{for independent variables }\mathrm{V}\left[X+Y\right]=\mathrm{V}\left[X\right]+\mathrm{V}\left[Y\right]} \\&=&\sum_{i=1}^n\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)^2\mathrm{V}\left[y_i\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-variance.html}{\mathrm{V}\left[cX\right]=c^2\mathrm{V}\left[X\right]} \\&=&\sum_{i=1}^n\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)^2\sigma^2 \\&=&\sigma^2\sum_{i=1}^n\left(\frac{1}{n}-\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)^2 \\&=&\sigma^2\sum_{i=1}^n\left( \frac{1}{n^2} -2\frac{1}{n}\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}} +\left(\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)^2 \right) \\&=&\sigma^2\left( \sum_{i=1}^n\frac{1}{n^2} -\sum_{i=1}^n2\frac{1}{n}\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}} +\sum_{i=1}^n\left(\frac{\bar{x}\left(x_i-\bar{x}\right)}{S_{xx}}\right)^2 \right) \\&=&\sigma^2\left( \frac{1}{n^2}\sum_{i=1}^n1 -\frac{2\bar{x}}{nS_{xx}}\sum_{i=1}^n\left(x_i-\bar{x}\right) +\frac{\bar{x}^2}{S_{xx}^2}\sum_{i=1}^n\left(x_i-\bar{x}\right)^2 \right) \\&=&\sigma^2\left( \frac{1}{n^2}\cdot n -\frac{2\bar{x}}{nS_{xx}}\cdot 0 +\frac{\bar{x}^2}{S_{xx}^2}\cdot S_{xx} \right) \;\cdots\;\sum_{i=1}^n\left(x_i-\bar{x}\right)=0,\;S_{xx}=\sum_{i=1}^n\left(x_i-\bar{x}\right)^2 \\&=&\sigma^2\left( \frac{1}{n} +\frac{\bar{x}^2}{S_{xx}} \right) \;\cdots\;\text{same result as }\href{https://shikitenkai.blogspot.com/2020/08/2variancecovariance.html}{\mathrm{V}\left[\bar{y}-\hat{\beta}\bar{x}\right]=\left(\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}}\right)\sigma^2} \end{eqnarray} $$
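The key algebraic step above, \(\sum_i c_i^2=\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}}\) (so that \(\mathrm{V}[\hat{\alpha}]=\sigma^2\sum_i c_i^2\)), can be confirmed numerically on arbitrary design points. A deterministic sketch with hypothetical \(x\) values:

```python
# Check sum_i c_i^2 = 1/n + x_bar^2 / S_xx for the alpha_hat weights
# c_i = 1/n - x_bar*(x_i - x_bar)/S_xx, on hypothetical x values.
x = [0.5, 1.0, 2.5, 3.0, 5.5, 8.0]
n = len(x)
x_bar = sum(x) / n
S_xx = sum((xi - x_bar) ** 2 for xi in x)

c = [1.0 / n - x_bar * (xi - x_bar) / S_xx for xi in x]
sum_c_sq = sum(ci * ci for ci in c)
closed_form = 1.0 / n + x_bar**2 / S_xx

assert abs(sum_c_sq - closed_form) < 1e-12
```

The cross term vanishes because \(\sum_i(x_i-\bar{x})=0\), exactly as in the derivation.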

Distribution of \(\hat{\alpha}\)

As shown above, \(\hat{\alpha}\) is a linear estimator: it can be written as a sum of constant multiples of the normally distributed \(y_i\). Hence \(\hat{\alpha}\) itself follows a normal distribution, whose expectation and variance are the ones derived above \(\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/zc1xc2y-2.html}{Z=c_1X+c_2Y(X\sim\mathrm{N}(\mu_1,\sigma_1^2),Y\sim\mathrm{N}(\mu_2,\sigma_2^2),Z\sim\mathrm{N}(c_1\mu_1+c_2\mu_2,c_1^2\sigma_1^2+c_2^2\sigma_2^2))}\). $$ \begin{eqnarray} \hat{\alpha}&\sim& \mathrm{N}\left(\alpha,\sigma^2\left( \frac{1}{n} +\frac{\bar{x}^2}{S_{xx}} \right)\right) \end{eqnarray} $$
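As for \(\hat{\beta}\), the result \(\hat{\alpha}\sim\mathrm{N}\!\left(\alpha,\sigma^2(\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}})\right)\) can be spot-checked by Monte Carlo with hypothetical parameters:

```python
import random
import statistics

random.seed(2)
alpha, beta, sigma = 1.0, 2.0, 0.5          # hypothetical parameters
x = [float(i) for i in range(1, 11)]        # fixed design points
n = len(x)
x_bar = sum(x) / n
S_xx = sum((xi - x_bar) ** 2 for xi in x)

def alpha_hat_once():
    # One simulated data set and its least-squares intercept.
    y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]
    y_bar = sum(y) / n
    S_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    b = S_xy / S_xx
    return y_bar - b * x_bar

estimates = [alpha_hat_once() for _ in range(50_000)]
mean_a = statistics.fmean(estimates)
var_a = statistics.pvariance(estimates)
target_var = sigma**2 * (1.0 / n + x_bar**2 / S_xx)

assert abs(mean_a - alpha) < 0.01
assert abs(var_a - target_var) < 0.005
```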

Change of variables for random variables: \(Z=c_1X+c_2Y\) / sum of constant multiples of normal distributions

Change of variables \(Z=c_1X+c_2Y\)

$$ \begin{eqnarray} p_{c_1X+c_2Y}(z) &=&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(z-(c_1x+c_2y))f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\int_{-\infty}^{\infty}\delta(x)f(x)\mathrm{d}x=f(0)} \\&&\;\cdots\;\text{of the joint density }f_X(x)g_Y(y)\text{ at }X=x,Y=y,\text{ only the contributions satisfying }z-\left(c_1x+c_2y\right)=0\text{ are summed.} \\&=&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{\left|c_1\right|}\delta\left(x-\frac{z-c_2y}{c_1}\right)f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\delta(u(x))=\sum_{\alpha\in u^{-1}(0)}\frac{1}{\left|u^{\prime}(\alpha)\right|}\delta\left(x-\alpha\right)} \\&&\;\cdots\;u(x)=z-(c_1x+c_2y) \\&&\;\cdots\;u(x=\alpha)=0,\;\alpha=\frac{z-c_2y}{c_1} \\&&\;\cdots\;u^\prime=\frac{\mathrm{d}u}{\mathrm{d}x}=\frac{\mathrm{d}}{\mathrm{d}x}\left(z-(c_1x+c_2y)\right)=-c_1 \\&&\;\cdots\;\delta\left(z-(c_1x+c_2y)\right) =\frac{1}{|u^\prime(\alpha)|}\delta\left(x-\alpha\right) =\frac{1}{\left|-c_1\right|}\delta\left(x-\frac{z-c_2y}{c_1}\right) =\frac{1}{\left|c_1\right|}\delta\left(x-\frac{z-c_2y}{c_1}\right) \\&=&\int_{-\infty}^{\infty}\frac{1}{\left|c_1\right|}f_X\left(\frac{z-c_2y}{c_1}\right)g_Y(y)\mathrm{d}y \;\cdots\;\text{integrating out }x\text{ with }\int_{-\infty}^{\infty}\delta\left(x-\alpha\right)f(x)\mathrm{d}x=f(\alpha) \\&=&\frac{1}{\left|c_1\right|}\int_{-\infty}^{\infty}f_X\left(\frac{z-c_2y}{c_1}\right)g_Y(y)\mathrm{d}y \;\cdots\;\int cf(x)\mathrm{d}x=c\int f(x)\mathrm{d}x \end{eqnarray} $$
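The final integral formula can be checked numerically against the closed-form normal density derived in the next subsection, by evaluating the \(y\)-integral with a simple midpoint rule. All parameter values below are hypothetical:

```python
import math

def normal_pdf(t, mu, var):
    # Density of N(mu, var).
    return math.exp(-(t - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Hypothetical constants and component distributions.
c1, c2 = 2.0, -0.5
mu1, mu2 = 1.0, 3.0
var1, var2 = 0.8, 1.5

def p_numeric(z, lo=-20.0, hi=20.0, steps=4000):
    # p(z) = (1/|c1|) * integral of f_X((z - c2*y)/c1) * g_Y(y) dy,
    # approximated by a midpoint Riemann sum over a wide interval.
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        y = lo + (k + 0.5) * h
        total += normal_pdf((z - c2 * y) / c1, mu1, var1) * normal_pdf(y, mu2, var2)
    return total * h / abs(c1)

# Closed form: Z ~ N(c1*mu1 + c2*mu2, c1^2*var1 + c2^2*var2).
mu_z = c1 * mu1 + c2 * mu2
var_z = c1**2 * var1 + c2**2 * var2
for z in (-1.0, 0.5, 2.0):
    assert abs(p_numeric(z) - normal_pdf(z, mu_z, var_z)) < 1e-6
```

The midpoint rule is extremely accurate here because the integrand is a smooth, rapidly decaying Gaussian product.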

Example: sum of constant multiples of normal distributions

$$ \begin{eqnarray} f_X(x)&=&\frac{1}{\sqrt{2\pi\sigma_1^2}}\mathrm{e}^{\frac{-(x-\mu_1)^2}{2\sigma_1^2}} \;\cdots\;\mathrm{N}(\mu_1,\sigma_1^2)\;(\href{https://shikitenkai.blogspot.com/2019/06/binomial-distributionnormal-distribution.html}{\text{normal distribution}}) \\g_Y(y)&=&\frac{1}{\sqrt{2\pi\sigma_2^2}}\mathrm{e}^{\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \;\cdots\;\mathrm{N}(\mu_2,\sigma_2^2)\;(\href{https://shikitenkai.blogspot.com/2019/06/binomial-distributionnormal-distribution.html}{\text{normal distribution}}) \\p_{c_1X+c_2Y}(z) &=&\frac{1}{\left|c_1\right|}\int_{-\infty}^{\infty}f_X\left(\frac{z-c_2y}{c_1}\right)g_Y(y)\mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma_1^2}}\mathrm{e}^{\frac{-\left\{\left(\frac{z-c_2y}{c_1}\right)-\mu_1\right\}^2}{2\sigma_1^2}}\frac{1}{\sqrt{2\pi\sigma_2^2}}\mathrm{e}^{\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\frac{1}{\sqrt{2\pi\sigma_1^2}}\frac{1}{\sqrt{2\pi\sigma_2^2}}\int_{-\infty}^{\infty}\mathrm{e}^{\frac{-\left\{\left(\frac{z-c_2y}{c_1}\right)-\mu_1\right\}^2}{2\sigma_1^2}+\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \mathrm{d}y \;\cdots\;\int cf(x)\mathrm{d}x=c\int f(x)\mathrm{d}x \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}}\int_{-\infty}^{\infty}\mathrm{e}^{\frac{-\left\{\left(\frac{z-c_2y}{c_1}\right)-\mu_1\right\}^2}{2\sigma_1^2}+\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty} \mathrm{e}^{f_1(c_1,c_2,y,z,\mu_1,\mu_2,\sigma_1,\sigma_2)} \mathrm{d}y \;\cdots\;f_1\text{ takes }y\text{ as an argument.} \end{eqnarray} $$
$$ \begin{eqnarray} f_1(c_1,c_2,y,z,\mu_1,\mu_2,\sigma_1,\sigma_2) &=&\frac{-\left\{\left(\frac{z-c_2y}{c_1}\right)-\mu_1\right\}^2}{2\sigma_1^2}+\frac{-(y-\mu_2)^2}{2\sigma_2^2} \\&=&-\frac{1}{2} \left\{ \frac{\left(\frac{1}{c_1}z-\frac{c_2}{c_1}y-\mu_1\right)^2}{\sigma_1^2} +\frac{(y-\mu_2)^2}{\sigma_2^2} \right\} \\&=&-\frac{1}{2} \left\{ \frac{\sigma_2^2\left(\frac{1}{c_1}z-\frac{c_2}{c_1}y-\mu_1\right)^2 +\sigma_1^2(y-\mu_2)^2}{\sigma_1^2\sigma_2^2} \right\} \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \sigma_2^2\left(\frac{1}{c_1}z-\frac{c_2}{c_1}y-\mu_1\right)^2 +\sigma_1^2(y-\mu_2)^2 \right\} \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \left[ \sigma_2^2\left\{\left(\frac{1}{c_1}z\right)^2-2\left(\frac{1}{c_1}z\right)\left(\frac{c_2}{c_1}y\right)-2\left(\frac{1}{c_1}z\right)\mu_1+\left(\frac{c_2}{c_1}y\right)^2+2\left(\frac{c_2}{c_1}y\right)\mu_1+\mu_1^2\right\} +\sigma_1^2(y^2-2y\mu_2+\mu_2^2) \right] \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \frac{1}{c_1^2}\sigma_2^2z^2 -2\frac{c_2}{c_1^2}\sigma_2^2zy -2\frac{1}{c_1}\sigma_2^2z\mu_1 +\frac{c_2^2}{c_1^2}\sigma_2^2y^2 +2\frac{c_2}{c_1}\sigma_2^2y\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2y^2 -2\sigma_1^2y\mu_2 +\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \frac{1}{c_1^2} \left\{ \sigma_2^2z^2 -2c_2\sigma_2^2zy -2c_1\sigma_2^2z\mu_1 +c_2^2\sigma_2^2y^2 +2c_1c_2\sigma_2^2y\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2y^2 -2c_1^2\sigma_1^2y\mu_2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right)y^2 +\left( -2c_2\sigma_2^2z +2c_1c_2\sigma_2^2\mu_1 -2c_1^2\sigma_1^2\mu_2 \right)y +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right)y^2 -2\left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)y +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 
+c_1^2\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right) \left( y^2 -2\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2}y \right) +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right) \left( y^2 -2\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2}y \right) +A-A +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&&\;\cdots\;\text{introduce a term }A\text{ to complete the square} \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right) \left( y^2 -2\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2}y +\frac{1}{c_1^2\sigma_1^2 +c_2^2\sigma_2^2}A \right) -A +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left[ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right) \left\{ y^2 -2\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2}y +\frac{1}{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \frac{ \left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)^2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \right\} -\frac{ \left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)^2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right] \\&&\;\cdots\;A=\frac{ \left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)^2 } { c_1^2\sigma_1^2 +c_2^2\sigma_2^2 } \\&=&-\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right) \left( y -\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2
}{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \right)^2 -\frac{ \left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)^2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=& -\frac{\left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right)}{2c_1^2\sigma_1^2\sigma_2^2} \left( y -\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \right)^2 -\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ -\frac{ \left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)^2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=& -f_2(c_1,c_2,\sigma_1^2,\sigma_2^2) \left\{ y-f_3(c_1,c_2,z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2) \right\}^2 -f_4(c_1,c_2,z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2) \;\cdots\;f_2,f_3,f_4\text{ do not take }y\text{ as an argument.} \\&=& -f_2\left(y-f_3\right)^2-f_4 \;\cdots\;y\text{ now appears only inside the completed square.} \\ \\f_2(c_1,c_2,\sigma_1^2,\sigma_2^2)&=& \frac{\left( c_1^2\sigma_1^2 +c_2^2\sigma_2^2 \right)}{2c_1^2\sigma_1^2\sigma_2^2} \\ \\f_3(c_1,c_2,z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2)&=&\frac{ c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \\ \\f_4(c_1,c_2,z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2) &=&\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ -\frac{ \left( c_2\sigma_2^2z -c_1c_2\sigma_2^2\mu_1 +c_1^2\sigma_1^2\mu_2 \right)^2 }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} +\sigma_2^2z^2 -2c_1\sigma_2^2z\mu_1 +c_1^2\sigma_2^2\mu_1^2 +c_1^2\sigma_1^2\mu_2^2 \right\} \\&=&\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \left\{ \frac{ -\left( c_2\sigma_2^2 \left(z -c_1\mu_1 \right) +c_1^2\sigma_1^2\mu_2 \right)^2 +\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right) \left(\sigma_2^2\left( z-c_1\mu_1 \right)^2 +c_1^2\sigma_1^2\mu_2^2\right) }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \right\} \\&=&\frac{1}{2c_1^2\sigma_1^2\sigma_2^2} \frac{ -\left( c_2\sigma_2^2 B +c_1^2\sigma_1^2\mu_2
\right)^2 +\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right) \left(\sigma_2^2B^2 +c_1^2\sigma_1^2\mu_2^2\right) }{c_1^2\sigma_1^2 +c_2^2\sigma_2^2} \;\cdots\;B=z-c_1\mu_1 \\&=&\frac{1}{2c_1^2\sigma_1^2\sigma_2^2(c_1^2\sigma_1^2 +c_2^2\sigma_2^2)} \left\{ -\left( c_2\sigma_2^2 B +c_1^2\sigma_1^2\mu_2 \right)^2 +\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right) \left(\sigma_2^2B^2 +c_1^2\sigma_1^2\mu_2^2\right) \right\} \\&=&\frac{1}{2c_1^2\sigma_1^2\sigma_2^2(c_1^2\sigma_1^2 +c_2^2\sigma_2^2)} \left( \color{red}{-c_2^2\sigma_2^4B^2} \color{black}{-2c_1^2c_2\sigma_1^2\sigma_2^2\mu_2B} \color{green}{-c_1^4\sigma_1^4\mu_2^2} \color{black}{+c_1^2\sigma_1^2\sigma_2^2B^2} \color{red}{+c_2^2\sigma_2^4B^2} \color{green}{+c_1^4\sigma_1^4\mu_2^2} \color{black}{+c_1^2c_2^2\sigma_1^2\sigma_2^2\mu_2^2} \right) \\&=&\frac{1}{2c_1^2\sigma_1^2\sigma_2^2(c_1^2\sigma_1^2 +c_2^2\sigma_2^2)} c_1^2\sigma_1^2 \sigma_2^2\left\{ -2 c_2 \mu_2 B + B^2 + c_2^2 \mu_2^2 \right\} \\&=&\frac{1}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)}\left(B-c_2\mu_2\right)^2 \\&=&\frac{\left(z-c_1\mu_1-c_2\mu_2\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)} \;\cdots\;B=z-c_1\mu_1 \\&=&\frac{\left(z-\left(c_1\mu_1+c_2\mu_2\right)\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)} \end{eqnarray} $$
$$ \begin{eqnarray} p_{c_1X+c_2Y}(z)&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty} \mathrm{e}^{f_1(c_1,c_2,y,z,\mu_1,\mu_2,\sigma_1,\sigma_2)} \mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty} \mathrm{e}^{-f_2\left(y-f_3\right)^2-f_4} \mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty} \mathrm{e}^{-f_2\left(y-f_3\right)^2} \mathrm{e}^{-f_4} \mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \int_{-\infty}^{\infty} \mathrm{e}^{-f_2\left(y-f_3\right)^2} \mathrm{d}y \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \int_{-\infty}^{\infty} \mathrm{e}^{-u^2} \frac{1}{\sqrt{f_2}}\mathrm{d}u \\&&\;\cdots\;u=\sqrt{f_2}\left(y-f_3\right),\;\frac{\mathrm{d}u}{\mathrm{d}y}=\sqrt{f_2},\;\mathrm{d}y=\frac{1}{\sqrt{f_2}}\mathrm{d}u \\&&\;\cdots\;y:-\infty \rightarrow \infty,\;u:-\infty \rightarrow \infty \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \frac{1}{\sqrt{f_2}} \int_{-\infty}^{\infty} \mathrm{e}^{-u^2} \mathrm{d}u \;\cdots\;\int cf(x)\mathrm{d}x=c\int f(x)\mathrm{d}x \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \frac{1}{\sqrt{f_2}} \sqrt{\pi} \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/gaussian-integral.html}{\int_{-\infty}^{\infty}\mathrm{e}^{-u^2}\mathrm{d}u=\sqrt{\pi}} \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \frac{\sqrt{\pi}}{\sqrt{f_2}} \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \sqrt{\frac{\pi}{f_2}} \mathrm{e}^{-f_4} \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \sqrt{\frac{\pi}{\frac{\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right)}{2c_1^2\sigma_1^2\sigma_2^2}}} 
\mathrm{e}^{-\frac{\left(z-\left(c_1\mu_1+c_2\mu_2\right)\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)}} \\&=&\frac{1}{\left|c_1\right|}\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \sqrt{\frac{2\pi c_1^2\sigma_1^2\sigma_2^2}{\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right)}} \mathrm{e}^{-\frac{\left(z-\left(c_1\mu_1+c_2\mu_2\right)\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)}} \\&=&\frac{1}{\left|c_1\right|}\frac{\sqrt{c_1^2}}{\sqrt{2\pi\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right)}} \mathrm{e}^{-\frac{\left(z-\left(c_1\mu_1+c_2\mu_2\right)\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)}} \\&=&\frac{1}{\left|c_1\right|}\frac{\left|c_1\right|}{\sqrt{2\pi\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right)}} \mathrm{e}^{-\frac{\left(z-\left(c_1\mu_1+c_2\mu_2\right)\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)}} \;\cdots\;\sqrt{A^2}= \left\{ \begin{array}{ll} A&(A\geq0) \\-A&(A\lt0) \end{array} \right. =\left|A\right| \\&=&\frac{1}{\sqrt{2\pi\left(c_1^2\sigma_1^2+c_2^2\sigma_2^2\right)}} \mathrm{e}^{-\frac{\left(z-\left(c_1\mu_1+c_2\mu_2\right)\right)^2}{2(c_1^2\sigma_1^2+c_2^2\sigma_2^2)}} \\&&\;\cdots\;Z=c_1X+c_2Y\sim\mathrm{N}\left(c_1\mu_1+c_2\mu_2,\;c_1^2\sigma_1^2+c_2^2\sigma_2^2\right) \end{eqnarray} $$
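The conclusion \(Z=c_1X+c_2Y\sim\mathrm{N}(c_1\mu_1+c_2\mu_2,\;c_1^2\sigma_1^2+c_2^2\sigma_2^2)\) can also be spot-checked by direct simulation; all numeric values below are hypothetical:

```python
import random
import statistics

random.seed(3)
c1, c2 = 1.5, -2.0          # hypothetical constants
mu1, mu2 = 0.5, 1.0         # hypothetical means
s1, s2 = 1.0, 0.7           # hypothetical standard deviations

# Draw Z = c1*X + c2*Y with X ~ N(mu1, s1^2), Y ~ N(mu2, s2^2) independent.
z = [c1 * random.gauss(mu1, s1) + c2 * random.gauss(mu2, s2)
     for _ in range(200_000)]
mean_z = statistics.fmean(z)
var_z = statistics.pvariance(z)

assert abs(mean_z - (c1 * mu1 + c2 * mu2)) < 0.03
assert abs(var_z - (c1**2 * s1**2 + c2**2 * s2**2)) < 0.1
```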

Change of variables for random variables: \(Z=X+Y\), part 2 / reproductive property of the normal distribution

\(Z=X+Y\): example with two normal distributions

$$ \begin{eqnarray} f_X(x)&=&\frac{1}{\sqrt{2\pi\sigma_1^2}}\mathrm{e}^{\frac{-(x-\mu_1)^2}{2\sigma_1^2}} \;\cdots\;\mathrm{N}(\mu_1,\sigma_1^2)\;(\href{https://shikitenkai.blogspot.com/2019/06/binomial-distributionnormal-distribution.html}{\text{normal distribution}}) \\g_Y(y)&=&\frac{1}{\sqrt{2\pi\sigma_2^2}}\mathrm{e}^{\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \;\cdots\;\mathrm{N}(\mu_2,\sigma_2^2)\;(\href{https://shikitenkai.blogspot.com/2019/06/binomial-distributionnormal-distribution.html}{\text{normal distribution}}) \\p_{X+Y}(z)&=&\href{https://shikitenkai.blogspot.com/2020/09/zxy.html}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(z-(x+y))f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y} \\&=&\int_{-\infty}^{\infty}f_X(z-y)g_Y(y)\mathrm{d}y \;\cdots\;\delta(z-(x+y))=\delta(x-(z-y)),\;\int_{-\infty}^{\infty}\delta(x-\alpha)f(x)\mathrm{d}x=f(\alpha) \\&=&\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma_1^2}}\mathrm{e}^{\frac{-(z-y-\mu_1)^2}{2\sigma_1^2}}\frac{1}{\sqrt{2\pi\sigma_2^2}}\mathrm{e}^{\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \mathrm{d}y \\&=&\frac{1}{\sqrt{2\pi\sigma_1^2}}\frac{1}{\sqrt{2\pi\sigma_2^2}}\int_{-\infty}^{\infty}\mathrm{e}^{\frac{-(z-y-\mu_1)^2}{2\sigma_1^2}+\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \mathrm{d}y \\&=&\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}}\int_{-\infty}^{\infty}\mathrm{e}^{\frac{-(z-y-\mu_1)^2}{2\sigma_1^2}+\frac{-(y-\mu_2)^2}{2\sigma_2^2}} \mathrm{d}y \\&=&\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty} \mathrm{e}^{f_1(y,z,\mu_1,\mu_2,\sigma_1,\sigma_2)} \mathrm{d}y \;\cdots\;f_1\text{ takes }y\text{ as an argument.} \end{eqnarray} $$
$$ \begin{eqnarray} f_1(y,z,\mu_1,\mu_2,\sigma_1,\sigma_2) &=&\frac{-(z-y-\mu_1)^2}{2\sigma_1^2}+\frac{-(y-\mu_2)^2}{2\sigma_2^2} \\&=&-\frac{1}{2} \left( \frac{(z-y-\mu_1)^2}{\sigma_1^2} +\frac{(y-\mu_2)^2}{\sigma_2^2} \right) \\&=&-\frac{1}{2} \left( \frac{\sigma_2^2(z-y-\mu_1)^2 +\sigma_1^2(y-\mu_2)^2}{\sigma_1^2\sigma_2^2} \right) \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \left( \sigma_2^2(z-y-\mu_1)^2 +\sigma_1^2(y-\mu_2)^2 \right) \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \sigma_2^2(z^2-2zy-2z\mu_1+y^2+2y\mu_1+\mu_1^2) +\sigma_1^2(y^2-2y\mu_2+\mu_2^2) \right\} \\&=&-\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \sigma_2^2z^2 -2\sigma_2^2zy -2\sigma_2^2z\mu_1 +\sigma_2^2y^2 +2\sigma_2^2y\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2y^2 -2\sigma_1^2y\mu_2 +\sigma_1^2\mu_2^2 \right\} \\&=& -\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \left( \sigma_1^2 +\sigma_2^2 \right)y^2 +\left( -2\sigma_2^2z +2\sigma_2^2\mu_1 -2\sigma_1^2\mu_2 \right)y +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&=& -\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \left( \sigma_1^2 +\sigma_2^2 \right)y^2 -2\left( \sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2 \right)y +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&=& -\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \left( \sigma_1^2 +\sigma_2^2 \right) \left(y^2 -2\frac{ \sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2 }{\sigma_1^2 +\sigma_2^2} y \right) +A -A +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&&\;\cdots\;\text{introduce a term }A\text{ to complete the square} \\&=& -\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \left( \sigma_1^2 +\sigma_2^2 \right) \left(y^2 -2\frac{ \sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2 }{\sigma_1^2 +\sigma_2^2} y +\frac{1}{\sigma_1^2+\sigma_2^2}A \right) -A +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&=& -\frac{1}{2\sigma_1^2\sigma_2^2} \left[ \left( \sigma_1^2 +\sigma_2^2 \right) \left\{y^2 -2\frac{ \sigma_2^2z -\sigma_2^2\mu_1
+\sigma_1^2\mu_2 }{\sigma_1^2 +\sigma_2^2} y +\frac{1}{\sigma_1^2+\sigma_2^2} \frac{ (\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 }{\sigma_1^2 +\sigma_2^2} \right\}-\frac{ (\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 }{\sigma_1^2 +\sigma_2^2} +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right] \\&&\;\cdots\;A=\frac{ (\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 }{\sigma_1^2 +\sigma_2^2} \\&=& -\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ \left( \sigma_1^2 +\sigma_2^2 \right) \left(y-\frac{ \sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2 }{\sigma_1^2 +\sigma_2^2} \right)^2 -\frac{ (\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 }{\sigma_1^2 +\sigma_2^2} +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&=&-\frac{\sigma_1^2+\sigma_2^2}{2\sigma_1^2\sigma_2^2} \left(y-\frac{ \sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2 }{\sigma_1^2 +\sigma_2^2} \right)^2 -\frac{1}{2\sigma_1^2\sigma_2^2} \left\{ -\frac{ (\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 }{\sigma_1^2+\sigma_2^2} +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&=& -f_2(\sigma_1^2,\sigma_2^2) \left\{ y-f_3(z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2) \right\}^2 -f_4(z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2) \;\cdots\;f_2,f_3,f_4\text{ do not take }y\text{ as an argument.} \\&=& -f_2\left(y-f_3\right)^2-f_4 \;\cdots\;y\text{ now appears only inside the completed square.}
\\ \\f_2(\sigma_1^2,\sigma_2^2)&=& \frac{\sigma_1^2+\sigma_2^2}{2\sigma_1^2\sigma_2^2} \\ \\f_3(z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2)&=&\frac{ \sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2 }{\sigma_1^2 +\sigma_2^2 } \\ \\f_4(z,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2)&=&\frac{1}{2\sigma_1^2\sigma_2^2}\left\{ -\frac{ (\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 }{\sigma_1^2 +\sigma_2^2} +\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2 \right\} \\&=&\frac{1}{2\sigma_1^2\sigma_2^2}\left\{ \frac{ -(\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 +(\sigma_1^2+\sigma_2^2)(\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2) }{\sigma_1^2+\sigma_2^2} \right\} \\&=&\frac{1}{2\sigma_1^2\sigma_2^2} \frac{1}{\sigma_1^2+\sigma_2^2} \left\{ -(\sigma_2^2z -\sigma_2^2\mu_1 +\sigma_1^2\mu_2)^2 +(\sigma_1^2+\sigma_2^2)(\sigma_2^2z^2 -2\sigma_2^2z\mu_1 +\sigma_2^2\mu_1^2 +\sigma_1^2\mu_2^2) \right\} \\&=&\frac{1}{2\sigma_1^2\sigma_2^2} \frac{1}{\sigma_1^2+\sigma_2^2} \left\{ -\sigma_2^4z^2 -\sigma_2^4\mu_1^2 -\sigma_1^4\mu_2^2 +2\sigma_2^4z\mu_1 -2\sigma_1^2\sigma_2^2z\mu_2 +2\sigma_1^2\sigma_2^2\mu_1\mu_2 +(\sigma_1^2+\sigma_2^2)\sigma_2^2z^2 -2(\sigma_1^2+\sigma_2^2)\sigma_2^2z\mu_1 +(\sigma_1^2+\sigma_2^2)\sigma_2^2\mu_1^2 +(\sigma_1^2+\sigma_2^2)\sigma_1^2\mu_2^2 \right\} \\&=&\frac{1}{2\sigma_1^2\sigma_2^2} \frac{1}{\sigma_1^2+\sigma_2^2} \left( \color{red }{-\sigma_2^4 z^2} \color{green}{-\sigma_2^4 \mu_1^2} \color{blue }{-\sigma_1^4 \mu_2^2} \color{cyan }{+2\sigma_2^4 z\mu_1} \color{black}{-2\sigma_1^2 \sigma_2^2z\mu_2} \color{black}{+2\sigma_1^2 \sigma_2^2\mu_1\mu_2} \color{black}{+ \sigma_1^2 \sigma_2^2 z^2} \color{red }{+ \sigma_2^4 z^2} \color{black}{-2\sigma_1^2 \sigma_2^2 z\mu_1} \color{cyan}{-2\sigma_2^4 z\mu_1} \color{black}{+ \sigma_1^2 \sigma_2^2 \mu_1^2} \color{green}{+ \sigma_2^4 \mu_1^2} \color{blue }{+ \sigma_1^4 \mu_2^2} \color{black}{+ \sigma_1^2 \sigma_2^2 \mu_2^2} \right) \\&=&\frac{1}{2\sigma_1^2\sigma_2^2}
\frac{1}{\sigma_1^2+\sigma_2^2} \left( \color{black}{-2\sigma_1^2 \sigma_2^2z\mu_2} \color{black}{+2\sigma_1^2 \sigma_2^2\mu_1\mu_2} \color{black}{+ \sigma_1^2 \sigma_2^2 z^2} \color{black}{-2\sigma_1^2 \sigma_2^2 z\mu_1} \color{black}{+ \sigma_1^2 \sigma_2^2 \mu_1^2} \color{black}{+ \sigma_1^2 \sigma_2^2 \mu_2^2} \right) \\&=&\frac{1}{2\sigma_1^2\sigma_2^2} \frac{1}{\sigma_1^2+\sigma_2^2} \left(\sigma_1^2 \sigma_2^2\right) \left( \color{black}{-2z\mu_2} \color{black}{+2\mu_1\mu_2} \color{black}{+ z^2} \color{black}{-2z\mu_1} \color{black}{+ \mu_1^2} \color{black}{+ \mu_2^2} \right) \\&=&\frac{1}{2\sigma_1^2\sigma_2^2} \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \left(z-\mu_1-\mu_2\right)^2 \\&=&\frac{\left(z-\mu_1-\mu_2\right)^2}{2\left(\sigma_1^2+\sigma_2^2\right)} \\&=&\frac{\left\{z-\left(\mu_1+\mu_2\right)\right\}^2}{2\left(\sigma_1^2+\sigma_2^2\right)} \end{eqnarray} $$
$$ \begin{eqnarray} p_{X+Y}(z)&=&\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty} \mathrm{e}^{f_1(y,z,\mu_1,\mu_2,\sigma_1,\sigma_2)} \mathrm{d}y \\&=&\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty}\mathrm{e}^{ -f_2 \left( y-f_3 \right)^2 -f_4} \mathrm{d}y \\&=&\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \int_{-\infty}^{\infty}\mathrm{e}^{ -f_2 \left( y-f_3 \right)^2 } \mathrm{e}^{-f_4} \mathrm{d}y \;\cdots\;A^{B+C}=A^BA^C \\&=&\frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \int_{-\infty}^{\infty}\mathrm{e}^{ -f_2 \left( y-f_3 \right)^2 } \mathrm{d}y \;\cdots\;\int cf(x)\mathrm{d}x=c\int f(x)\mathrm{d}x \\&=& \frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \int_{-\infty}^{\infty}\mathrm{e}^{-u^2} \left(\frac{1}{\sqrt{f_2}}\right)\mathrm{d}u \\&&\;\cdots\;u=\sqrt{f_2}\left(y-f_3\right),\;\frac{\mathrm{d}u}{\mathrm{d}y}=\sqrt{f_2},\;\mathrm{d}y=\frac{1}{\sqrt{f_2}}\mathrm{d}u, \\&&\;\cdots\;y:-\infty \rightarrow \infty,\;u:-\infty \rightarrow \infty \\&=& \frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \mathrm{e}^{-f_4} \frac{1}{\sqrt{f_2}} \int_{-\infty}^{\infty}\mathrm{e}^{-u^2}\mathrm{d}u \\&=& \frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \frac{1}{\sqrt{f_2}} \mathrm{e}^{-f_4} \sqrt{\pi} \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/gaussian-integral.html}{\int_{-\infty}^{\infty}\mathrm{e}^{-u^2}\mathrm{d}u=\sqrt{\pi}} \\&=& \frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \frac{1}{\sqrt{\frac{\sigma_1^2+\sigma_2^2}{2\sigma_1^2\sigma_2^2}}} \mathrm{e}^{-\frac{\left\{z-\left(\mu_1+\mu_2\right)\right\}^2}{2\left(\sigma_1^2+\sigma_2^2\right)}} \sqrt{\pi} \\&=& \frac{1}{2\pi\sqrt{\sigma_1^2\sigma_2^2}} \sqrt{\frac{2\pi\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}} \mathrm{e}^{-\frac{\left\{z-\left(\mu_1+\mu_2\right)\right\}^2}{2\left(\sigma_1^2+\sigma_2^2\right)}} \\&=&
\frac{1}{\sqrt{2\pi\left(\sigma_1^2+\sigma_2^2\right)}} \mathrm{e}^{-\frac{\left\{z-\left(\mu_1+\mu_2\right)\right\}^2}{2\left(\sigma_1^2+\sigma_2^2\right)}} \;\cdots\;\mathrm{N}(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2)\;\text{(reproductive property)} \end{eqnarray} $$ The reproductive property: for distributions \(F_1\), \(F_2\) belonging to the same family, if \(X_1\sim F_1\) and \(X_2\sim F_2\) are mutually independent random variables, then the distribution of \(X_1+X_2\) also belongs to that same family.

Expectation, variance, and distribution of the maximum likelihood estimators in the simple regression model

Expectation, variance, and distribution of the maximum likelihood estimators in the simple regression model

Simple regression model

$$ \begin{eqnarray} y_i&=&\alpha+\beta x_i+\epsilon_i \;(i=1,\cdots,n) \\&&\epsilon_i \overset{iid}{\sim} \mathrm{N}(0,\sigma^2)\;\cdots\;\text{independent and identically distributed (IID, i.i.d., iid)} \end{eqnarray} $$ Let \(\hat{\alpha},\hat{\beta},\hat{\sigma}^2\) denote estimators of \(\alpha,\beta,\sigma^2\), and let \(\hat{\alpha}_{ML},\hat{\beta}_{ML},\hat{\sigma}^2_{ML}\) denote their maximum likelihood estimators.
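Not part of the original post: a minimal numpy Monte Carlo sketch of the model above, checking that at a fixed design point \(x_i\) the response has mean \(\alpha+\beta x_i\) and variance \(\sigma^2\) (all numeric values below are arbitrary choices for illustration).

```python
import numpy as np

# Simulate y_i = alpha + beta*x_i + eps_i with eps_i ~ iid N(0, sigma^2)
# and check E[y_i] = alpha + beta*x_i and V[y_i] = sigma^2 empirically.
rng = np.random.default_rng(0)
alpha, beta, sigma = 2.0, 0.5, 1.0   # arbitrary illustrative parameters
x = 3.0                              # one fixed design point x_i
reps = 200_000
y = alpha + beta * x + rng.normal(0.0, sigma, size=reps)

mean_y = y.mean()   # should be close to alpha + beta*x = 3.5
var_y = y.var()     # should be close to sigma^2 = 1.0
```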

Expectation of \(\hat{\beta}_{ML}\)

$$ \begin{eqnarray} \mathrm{E}\left[\hat{\beta}_{ML}\right]&=&\mathrm{E}\left[\hat{\beta}\right] \;\cdots\;\hat{\beta}_{ML}=\hat{\beta} \\&=&\beta \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2.html}{\mathrm{E}\left[\hat{\beta}\right]=\beta} \\&&\;\cdots\;\text{therefore }\hat{\beta}_{ML}\;\textbf{is}\text{ an unbiased estimator of }\beta. \end{eqnarray} $$

Expectation of \(\hat{\alpha}_{ML}\)

$$ \begin{eqnarray} \mathrm{E}\left[\hat{\alpha}_{ML}\right]&=&\mathrm{E}\left[\hat{\alpha}\right] \;\cdots\;\hat{\alpha}_{ML}=\hat{\alpha} \\&=&\alpha \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2.html}{\mathrm{E}\left[\hat{\alpha}\right]=\alpha} \\&&\;\cdots\;\text{therefore }\hat{\alpha}_{ML}\;\textbf{is}\text{ an unbiased estimator of }\alpha. \end{eqnarray} $$

Variance of \(\hat{\beta}_{ML}\)

$$ \begin{eqnarray} \mathrm{V}\left[\hat{\beta}_{ML}\right]&=&\mathrm{V}\left[\hat{\beta}\right] \\&=&\frac{1}{S_{xx}}\sigma^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2variancecovariance.html}{\mathrm{V}\left[\hat{\beta}\right]=\frac{1}{S_{xx}}\sigma^2} \\&&\;\cdots\;\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\;S_{xx}=\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2 \end{eqnarray} $$

Variance of \(\hat{\alpha}_{ML}\)

$$ \begin{eqnarray} \mathrm{V}\left[\hat{\alpha}_{ML}\right]&=&\mathrm{V}\left[\hat{\alpha}\right] \\&=&\left(\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}}\right)\sigma^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2variancecovariance.html}{\mathrm{V}\left[\hat{\alpha}\right]=\left(\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}}\right)\sigma^2} \end{eqnarray} $$

Distribution of \(\hat{\alpha}_{ML},\hat{\beta}_{ML}\)

$$ \begin{eqnarray} \hat{\beta}_{ML}&=&\hat{\beta}&\sim&\mathrm{N}\left(\beta,\;\frac{1}{S_{xx}}\sigma^2\right) \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post_29.html}{\hat{\beta}\sim\mathrm{N}\left(\beta,\;\frac{1}{S_{xx}}\sigma^2\right)} \\\hat{\alpha}_{ML}&=&\hat{\alpha}&\sim&\mathrm{N}\left(\alpha,\;\left(\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}}\right)\sigma^2\right) \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post_29.html}{\hat{\alpha}\sim\mathrm{N}\left(\alpha,\;\left(\frac{1}{n}+\frac{\bar{x}^2}{S_{xx}}\right)\sigma^2\right)} \end{eqnarray} $$
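Not part of the original post: a numpy Monte Carlo sketch of these sampling distributions. Over repeated samples from a fixed design, the empirical variances of \(\hat{\beta}\) and \(\hat{\alpha}\) should match \(\sigma^2/S_{xx}\) and \((1/n+\bar{x}^2/S_{xx})\sigma^2\) (parameter values are arbitrary).

```python
import numpy as np

# Repeatedly simulate the regression model on a fixed design and compare
# the empirical variances of the estimators with the formulas above.
rng = np.random.default_rng(1)
alpha, beta, sigma = 1.0, 2.0, 1.0
x = np.linspace(0.0, 1.0, 21)          # fixed design points
n = x.size
xbar = x.mean()
Sxx = ((x - xbar) ** 2).sum()

reps = 100_000
y = alpha + beta * x + rng.normal(0.0, sigma, size=(reps, n))
ybar = y.mean(axis=1)
beta_hat = ((x - xbar) * (y - ybar[:, None])).sum(axis=1) / Sxx
alpha_hat = ybar - beta_hat * xbar

var_beta_theory = sigma**2 / Sxx
var_alpha_theory = (1.0 / n + xbar**2 / Sxx) * sigma**2
```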

Expectation of \(\hat{\sigma}^2_{ML}\)

$$ \begin{eqnarray} \mathrm{E}\left[\hat{\sigma}_{ML}^2\right]&=&\mathrm{E}\left[\frac{n-2}{n}s^2\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post_25.html}{\hat{\sigma}^2_{ML}=\frac{n-2}{n}s^2} ,\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post.html}{s^2=\frac{1}{\left(n-2\right)}\sum_{i=1}^{n} e_i^2} ,\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post.html}{\sum_{i=1}^{n}e_i^2=\sum_{i=1}^{n}\left(y_i-\hat{y_i}\right)^2} \\&=&\frac{n-2}{n}\mathrm{E}\left[s^2\right] \\&=&\frac{n-2}{n}\sigma^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post.html}{\mathrm{E}\left[s^2\right]=\sigma^2} \\&\lt&\sigma^2 \;\cdots\;\text{therefore }\hat{\sigma}_{ML}^2\text{ is }\textbf{not}\text{ an unbiased estimator of }\sigma^2. \end{eqnarray} $$
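Not part of the original post: a numpy Monte Carlo sketch of the downward bias, checking \(\mathrm{E}[\hat{\sigma}^2_{ML}]\approx\frac{n-2}{n}\sigma^2\) (parameter values arbitrary).

```python
import numpy as np

# sigma2_ml = (1/n) * sum of squared residuals; its average over many
# simulated data sets should be (n-2)/n * sigma^2, i.e. biased low.
rng = np.random.default_rng(2)
alpha, beta, sigma = 1.0, 2.0, 1.5
x = np.linspace(-1.0, 1.0, 10)
n = x.size
xbar = x.mean()
Sxx = ((x - xbar) ** 2).sum()

reps = 100_000
y = alpha + beta * x + rng.normal(0.0, sigma, size=(reps, n))
ybar = y.mean(axis=1)
beta_hat = ((x - xbar) * (y - ybar[:, None])).sum(axis=1) / Sxx
alpha_hat = ybar - beta_hat * xbar
resid = y - alpha_hat[:, None] - beta_hat[:, None] * x
sigma2_ml = (resid ** 2).mean(axis=1)

expected = (n - 2) / n * sigma**2    # = 0.8 * 2.25 = 1.8
```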

Standardizing a random variable

Standardizing a random variable

A random variable \(X\) with expectation (mean) \(\mu\) and variance \(\sigma^2\)

$$ \begin{eqnarray} \mathrm{E}\left[X\right]&=&\mu \\\mathrm{V}\left[X\right]&=&\sigma^2 \end{eqnarray} $$

Transformation of the random variable: \(Z=\frac{X-\mu}{\sigma}\)

$$ \begin{eqnarray} \\Z&=&\frac{X-\mu}{\sigma} \end{eqnarray} $$

Expectation (mean) and variance of the transformed random variable \(Z\)

$$ \begin{eqnarray} \\\mathrm{E}\left[Z\right]&=&\mathrm{E}\left[\frac{X-\mu}{\sigma}\right] \\&=&\frac{1}{\sigma}\mathrm{E}\left[X-\mu\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[cX\right]=c\mathrm{E}\left[X\right]} \\&=&\frac{1}{\sigma}\left(\mathrm{E}\left[X\right]-\mu\right) \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[X\pm t\right]=\mathrm{E}\left[X\right] \pm t} \\&=&\frac{1}{\sigma}\left(\mu-\mu\right) \;\cdots\;\mathrm{E}\left[X\right]=\mu \\&=&\frac{1}{\sigma}\left(0\right) \\&=&0 \\\mathrm{V}\left[Z\right]&=&\mathrm{V}\left[\frac{X-\mu}{\sigma}\right] \\&=&\frac{1}{\sigma^2}\mathrm{V}\left[X-\mu\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-variance.html}{\mathrm{V}\left[cX\right]=c^2\mathrm{V}\left[X\right]} \\&=&\frac{1}{\sigma^2}\mathrm{V}\left[X\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-variance.html}{\mathrm{V}\left[X\pm t\right]=\mathrm{V}\left[X\right]} \\&=&\frac{1}{\sigma^2}\sigma^2 \;\cdots\;\mathrm{V}\left[X\right]=\sigma^2 \\&=&1 \end{eqnarray} $$ Whatever the distribution of \(X\), because \(X\) has mean \(\mu\) and variance \(\sigma^2\), the transformed variable \(Z\) is standardized to mean \(0\) and variance \(1\).
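Not part of the original post: a short numpy sketch of this point, standardizing a deliberately non-normal variable (an exponential) and checking that \(Z\) still has mean \(0\) and variance \(1\).

```python
import numpy as np

# Standardization works for any distribution with finite mean/variance,
# not just the normal. Here X ~ Exp(lambda): mean 1/lambda, var 1/lambda^2.
rng = np.random.default_rng(3)
lam = 0.5
x = rng.exponential(1.0 / lam, size=500_000)
mu, sigma2 = 1.0 / lam, 1.0 / lam**2

z = (x - mu) / np.sqrt(sigma2)   # should have mean ~0 and variance ~1
```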

Maximum likelihood estimation of the simple regression model

Maximum likelihood estimation of the simple regression model

Simple regression model

$$ \begin{eqnarray} y_i&=&\alpha+\beta x_i+\epsilon_i \;(i=1,\cdots,n) \\&&\epsilon_i \overset{iid}{\sim} \mathrm{N}(0,\sigma^2)\;\cdots\;\text{independent and identically distributed (IID, i.i.d., iid)} \end{eqnarray} $$

Log-likelihood function

The likelihood and log-likelihood functions are as follows. $$ \begin{eqnarray} f(y_1,\cdots,y_n;\alpha,\beta,\sigma^2)&=&\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2\sigma^2}\left(y_i-\alpha-\beta x_i\right)^2} \\&=&\left(2\pi\right)^{-\frac{n}{2}} \left(\sigma^2\right)^{-\frac{n}{2}} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}} \\l(\alpha,\beta,\sigma^2;y_1,\cdots,y_n)&=&\log{\left\{ \left(2\pi\right)^{-\frac{n}{2}} \left(\sigma^2\right)^{-\frac{n}{2}} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}} \right\}} \\&=&\log{\left\{ \left(2\pi\right)^{-\frac{n}{2}} \right\}} +\log{\left\{ \left(\sigma^2\right)^{-\frac{n}{2}} \right\}} +\log{\left\{ e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}} \right\}} \\&=&-\frac{n}{2}\log{\left(2\pi\right)} -\frac{n}{2}\log{\left(\sigma^2\right)} -\frac{1}{2\sigma^2} \sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2} \end{eqnarray} $$
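Not part of the original post: a numpy sketch that evaluates this log-likelihood on simulated data and confirms it is larger at the least-squares solution than at nearby perturbed parameter values (data and perturbations are arbitrary illustrations).

```python
import numpy as np

# Log-likelihood l(alpha, beta, sigma^2) of the simple regression model.
def loglik(a, b, s2, x, y):
    n = x.size
    return (-n / 2 * np.log(2 * np.pi) - n / 2 * np.log(s2)
            - ((y - a - b * x) ** 2).sum() / (2 * s2))

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 30)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, size=x.size)

# Closed-form (least-squares / ML) estimates.
xbar, ybar = x.mean(), y.mean()
b_hat = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
a_hat = ybar - b_hat * xbar
s2_hat = ((y - a_hat - b_hat * x) ** 2).mean()

l_max = loglik(a_hat, b_hat, s2_hat, x, y)
```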

Score functions

The score functions are as follows. $$ \begin{eqnarray} \frac{\partial l}{\partial \alpha} &=&\frac{\partial}{\partial \alpha}\left\{-\frac{1}{2\sigma^2} \sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\} \\&=&-\frac{1}{2\sigma^2} \sum_{i=1}^{n}{\frac{\partial}{\partial \alpha}\left(y_i-\alpha-\beta x_i\right)^2} \\&=&-\frac{1}{2\sigma^2} \sum_{i=1}^{n}{2\left(y_i-\alpha-\beta x_i\right)(-1)} \\&=&-\frac{-2}{2\sigma^2} \sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)} \\&=&\frac{1}{\sigma^2} \left(\sum_{i=1}^{n}y_i-\alpha\sum_{i=1}^{n}1-\beta\sum_{i=1}^{n} x_i\right) \\&=&\frac{1}{\sigma^2} \left(n\bar{y}-n\alpha-n\beta\bar{x}\right) \;\cdots\;\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\;\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i \\&=&\frac{n}{\sigma^2} \left(\bar{y}-\alpha-\beta\bar{x}\right) \end{eqnarray} $$ $$ \begin{eqnarray} \frac{\partial l}{\partial \beta} &=&\frac{\partial}{\partial \beta}\left\{-\frac{1}{2\sigma^2} \sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\} \\&=&-\frac{1}{2\sigma^2} \sum_{i=1}^{n}{\frac{\partial}{\partial \beta}\left(y_i-\alpha-\beta x_i\right)^2} \\&=&-\frac{1}{2\sigma^2} \sum_{i=1}^{n}{2\left(y_i-\alpha-\beta x_i\right)(-x_i)} \\&=&-\frac{-2}{2\sigma^2} \sum_{i=1}^{n}{\left(x_iy_i-x_i\alpha-\beta x_i^2\right)} \\&=&\frac{1}{\sigma^2} \left(\sum_{i=1}^{n}x_iy_i-\sum_{i=1}^{n}x_i\alpha-\sum_{i=1}^{n}\beta x_i^2\right) \\&=&\frac{1}{\sigma^2} \left(\sum_{i=1}^{n}x_iy_i-n\bar{x}\alpha-\beta \sum_{i=1}^{n}x_i^2\right) \;\cdots\;\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i \end{eqnarray} $$ $$ \begin{eqnarray} \frac{\partial l}{\partial \sigma^2} &=&\frac{\partial}{\partial \sigma^2}\left\{ -\frac{n}{2}\log{\left(\sigma^2\right)} -\frac{1}{2\sigma^2} \sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2} \right\} \\&=& -\frac{n}{2}\frac{\partial}{\partial \sigma^2}\log{\left(\sigma^2\right)} -\frac{1}{2}\left\{\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\}\frac{\partial}{\partial \sigma^2}\frac{1}{\sigma^2} \\&=& -\frac{n}{2}\frac{\partial}{\partial \sigma^2}\log{\left(\sigma^2\right)} -\frac{1}{2}\left\{\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\}\frac{\partial}{\partial u}\frac{1}{u} \;\cdots\;u=\sigma^2 \\&=& -\frac{n}{2}\frac{\partial}{\partial \sigma^2}\log{\left(\sigma^2\right)} -\frac{1}{2}\left\{\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\}\left(\frac{-1}{u^2}\right) \\&=& -\frac{n}{2}\frac{1}{\sigma^2} -\frac{1}{2}\left\{\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\}\left(\frac{-1}{\sigma^4}\right) \;\cdots\;u=\sigma^2 \\&=& -\frac{1}{2\sigma^2}\left\{ n-\frac{1}{\sigma^2}\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2} \right\} \end{eqnarray} $$

Solving the score equations simultaneously

$$ \begin{eqnarray} \left\{\begin{array}{rcl} \;0&=&\frac{\partial l}{\partial \alpha} \\0&=&\frac{\partial l}{\partial \beta} \\0&=&\frac{\partial l}{\partial \sigma^2} \end{array}\right. \end{eqnarray} $$ $$ \begin{eqnarray} \left\{\begin{array}{rcl} \;0&=&\frac{n}{\sigma^2} \left(\bar{y}-\alpha-\beta\bar{x}\right) \\0&=&\frac{1}{\sigma^2} \left(\sum_{i=1}^{n}x_iy_i-n\bar{x}\alpha-\beta \sum_{i=1}^{n}x_i^2\right) \\0&=&-\frac{1}{2\sigma^2}\left\{n-\frac{1}{\sigma^2}\sum_{i=1}^{n}{\left(y_i-\alpha-\beta x_i\right)^2}\right\} \end{array}\right. \end{eqnarray} $$ Let \(\hat{\alpha},\hat{\beta},\hat{\sigma}^2\) denote estimators of \(\alpha,\beta,\sigma^2\), and let \(\hat{\alpha}_{ML},\hat{\beta}_{ML},\hat{\sigma}^2_{ML}\) denote their maximum likelihood estimators. $$ \begin{eqnarray} \left\{\begin{array}{rcl} \;0&=&\bar{y}-\hat{\alpha}_{ML}-\hat{\beta}_{ML}\bar{x} \\0&=&\sum_{i=1}^{n}x_iy_i-n\bar{x}\hat{\alpha}_{ML}-\hat{\beta}_{ML} \sum_{i=1}^{n}x_i^2 \\0&=&n-\frac{1}{\hat{\sigma}^2_{ML}}\sum_{i=1}^{n}{\left(y_i-\hat{\alpha}_{ML}-\hat{\beta}_{ML} x_i\right)^2} \end{array}\right. \end{eqnarray} $$

Solving for \(\hat{\alpha}_{ML}\)

$$ \begin{eqnarray} 0&=&\bar{y}-\hat{\alpha}_{ML}-\hat{\beta}_{ML}\bar{x} \\\hat{\alpha}_{ML}&=&\bar{y}-\hat{\beta}_{ML}\bar{x} \end{eqnarray} $$

Solving for \(\hat{\beta}_{ML}\)

$$ \begin{eqnarray} 0&=&\sum_{i=1}^{n}x_iy_i-n\bar{x}\hat{\alpha}_{ML}-\hat{\beta}_{ML} \sum_{i=1}^{n}x_i^2 \\&=&\sum_{i=1}^{n}x_iy_i-n\bar{x}\left(\bar{y}-\hat{\beta}_{ML}\bar{x}\right)-\hat{\beta}_{ML} \sum_{i=1}^{n}x_i^2 \;\cdots\;\hat{\alpha}_{ML}=\bar{y}-\hat{\beta}_{ML}\bar{x} \\&=&\sum_{i=1}^{n}x_iy_i-n\bar{x}\bar{y}+n\hat{\beta}_{ML}\bar{x}^2-\hat{\beta}_{ML} \sum_{i=1}^{n}x_i^2 \\&=&\sum_{i=1}^{n}x_iy_i-n\bar{x}\bar{y} -\hat{\beta}_{ML} \left\{ \left( \sum_{i=1}^{n}x_i^2 \right) - n\bar{x}^2\right\} \\&=&\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right) -\hat{\beta}_{ML} \sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2 \\&=&S_{xy} -\hat{\beta}_{ML}\;S_{xx} \;\cdots\;S_{xy}=\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y}),\;S_{xx}=\sum_{i=1}^{n}(x_i-\bar{x})^2 \\\hat{\beta}_{ML}&=&\frac{S_{xy}}{S_{xx}} \\&=&\hat{\beta}\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\text{same as the solution of the normal equations}} \\\hat{\alpha}_{ML}&=&\bar{y}-\hat{\beta}_{ML}\bar{x} \\&=&\bar{y}-\hat{\beta}\bar{x} \\&=&\hat{\alpha}\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\text{same as the solution of the normal equations}} \end{eqnarray} $$

Solving for \(\hat{\sigma}^2_{ML}\)

$$ \begin{eqnarray} 0&=&n-\frac{1}{\hat{\sigma}^2_{ML}}\sum_{i=1}^{n}{\left(y_i-\hat{\alpha}_{ML}-\hat{\beta}_{ML} x_i\right)^2} \\-n&=&-\frac{1}{\hat{\sigma}^2_{ML}}\sum_{i=1}^{n}{\left(y_i-\hat{\alpha}_{ML}-\hat{\beta}_{ML} x_i\right)^2} \\-n\hat{\sigma}^2_{ML}&=&-\sum_{i=1}^{n}{\left(y_i-\hat{\alpha}_{ML}-\hat{\beta}_{ML} x_i\right)^2} \\\hat{\sigma}^2_{ML} &=&\frac{1}{n}\sum_{i=1}^{n}{\left(y_i-\hat{\alpha}_{ML}-\hat{\beta}_{ML} x_i\right)^2} \\&=&\frac{1}{n}\sum_{i=1}^{n}{\left(y_i-\hat{\alpha}-\hat{\beta} x_i\right)^2} \\&=&\frac{1}{n}\sum_{i=1}^{n}{\left(y_i-\hat{y}_i\right)^2} \;\cdots\;\hat{y}_i=\hat{\alpha}+\hat{\beta} x_i \\&=&\frac{1}{n}\sum_{i=1}^{n}{e_i^2} \\&=&\frac{1}{n}(n-2)s^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/blog-post.html}{\sum_{i=1}^{n}{e_i^2}=(n-2)s^2,\;s^2=\frac{1}{n-2}\sum_{i=1}^{n}{e_i^2}} \\&=&\frac{n-2}{n}s^2 \end{eqnarray} $$

\(\hat{\alpha}_{ML},\hat{\beta}_{ML},\hat{\sigma}^2_{ML}\)

From the above, the maximum likelihood estimators \(\hat{\alpha}_{ML},\hat{\beta}_{ML},\hat{\sigma}^2_{ML}\) are as follows. $$ \begin{eqnarray} \hat{\beta}_{ML}&=&\frac{S_{xy}}{S_{xx}}=\hat{\beta} \\\hat{\alpha}_{ML}&=&\bar{y}-\hat{\beta}_{ML}\bar{x}=\bar{y}-\hat{\beta}\bar{x}=\hat{\alpha} \\\hat{\sigma}_{ML}^2&=&\frac{n-2}{n}s^2 \end{eqnarray} $$
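Not part of the original post: a numpy sketch confirming that the closed-form ML estimates above coincide with a degree-1 least-squares fit (`np.polyfit`), as the derivation says they must.

```python
import numpy as np

# Compute the closed-form ML/least-squares estimates and compare with
# np.polyfit (which returns [slope, intercept] for degree 1).
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 10.0, size=50)
y = 3.0 - 0.7 * x + rng.normal(0.0, 2.0, size=x.size)

xbar, ybar = x.mean(), y.mean()
Sxx = ((x - xbar) ** 2).sum()
Sxy = ((x - xbar) * (y - ybar)).sum()
beta_ml = Sxy / Sxx
alpha_ml = ybar - beta_ml * xbar
sigma2_ml = ((y - alpha_ml - beta_ml * x) ** 2).mean()  # = (n-2)/n * s^2

slope, intercept = np.polyfit(x, y, 1)
```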

Change of variables for random variables: Z=X/Y

Change of variables for random variables: Z=X/Y

Z=X/Y

$$ \begin{eqnarray} p_{X/Y}(z) &=&\int \int \delta\left(z-\frac{x}{y}\right)f(x)g(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\int_{-\infty}^{\infty}\delta(x)f(x)\mathrm{d}x=f(0)} \\&&\;\cdots\;\text{of the joint density }f(x)g(y)\text{ at }X=x,\;Y=y\text{, sum only the contributions satisfying }z-\frac{x}{y}=0. \\&=&\int \int \left|y\right|\delta\left(x-yz\right)\;f\left(x\right)g(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\delta(u(x))=\sum_{\alpha\in u^{-1}(0)}\frac{1}{\left|u^{\prime}(\alpha)\right|}\delta\left(x-\alpha\right)} \\&&\;\cdots\;u(x)=z-\frac{x}{y} \\&&\;\cdots\;u(x=\alpha)=0,\;\alpha=yz \\&&\;\cdots\;u^\prime=\frac{\mathrm{d}u}{\mathrm{d}x}=\frac{\mathrm{d}}{\mathrm{d}x}\left(z-\frac{x}{y}\right)=-\frac{1}{y} \\&&\;\cdots\;\delta\left(z-\frac{x}{y}\right) =\frac{1}{|u^\prime(\alpha)|}\delta\left(x-\alpha\right) =\frac{1}{\left|-\frac{1}{y}\right|}\delta\left(x-yz\right) =\frac{1}{\left|\frac{1}{y}\right|}\delta\left(x-yz\right) =\left|y\right|\delta\left(x-yz\right) \\&=&\int |y|\delta\left(yz-yz\right)f\left(yz\right)g(y)\mathrm{d}y \;\cdots\;z=\frac{x}{y},x=yz \\&=&\int |y|\delta\left(0\right)f\left(yz\right)g(y)\mathrm{d}y \\&=&\int |y|f(yz)g(y)\mathrm{d}y \;\cdots\;\text{the argument of the }\delta\text{ function is }0\text{ everywhere, so the integral sums }|y|f(yz)g(y)\text{ over the whole domain.} \end{eqnarray} $$

Example: two standard normal distributions

$$ \begin{eqnarray} f_X(x)&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} \;\cdots\;\mathrm{N}(0,1)\;\text{(standard normal distribution)} \\g_Y(y)&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \;\cdots\;\mathrm{N}(0,1)\;\text{(standard normal distribution)} \\p_{X/Y}(z)&=&\int \int \delta\left(z-\frac{x}{y}\right)f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&=&\int \int \delta\left(z-\frac{x}{y}\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}\mathrm{d}x\mathrm{d}y \\&=&\int_{-\infty}^{\infty} \left|y\right| \frac{1}{\sqrt{2\pi}}e^{-\frac{\left(yz\right)^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;z=\frac{x}{y},x=yz \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} \left|y\right| e^{-\frac{\left(yz\right)^2}{2}}e^{-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} \left|y\right| e^{-\frac{\left(yz\right)^2}{2}-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;A^BA^C=A^{B+C} \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} \left|y\right| e^{-\frac{y^2\left(z^2+1\right)}{2}} \mathrm{d}y \\&=&\frac{1}{2\pi}2\int_{0}^{\infty} y e^{-\frac{y^2\left(z^2+1\right)}{2}} \mathrm{d}y \\&&\;\cdots\;|y|\text{ and }e^{-\frac{y^2\left(z^2+1\right)}{2}}\text{ are both even functions of }y\text{, so the integrand is even.} \\&&\;\cdots\;\int_{-a}^a f_{\text{even}}(x) \mathrm{d}x=2\int_0^a f_{\text{even}}(x)\mathrm{d}x,\text{ and on }(0,\infty)\text{ we have }|y|=y. \\&=&\frac{1}{\pi}\int_{0}^{\infty} y e^{-\frac{y^2\left(z^2+1\right)}{2}} \mathrm{d}y \\&=&\frac{1}{\pi}\int_{0}^{\infty} \color{red}{y} e^{-t}\cdot\frac{1}{\left(z^2+1\right)\color{red}{y}}\mathrm{d}t \\&&\;\cdots\;t=\frac{y^2\left(z^2+1\right)}{2},\frac{\mathrm{d}t}{\mathrm{d}y}=\frac{\left(z^2+1\right)}{2}2y=\left(z^2+1\right)y,\mathrm{d}y=\frac{1}{\left(z^2+1\right)y}\mathrm{d}t \\&=&\frac{1}{\pi}\frac{1}{\left(z^2+1\right)}\int_{0}^{\infty} e^{-t} \mathrm{d}t \;\cdots\;\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x \\&=&\frac{1}{\pi}\frac{1}{\left(z^2+1\right)}\cdot1 \;\cdots\;\int_{0}^{\infty} e^{-t} \mathrm{d}t=\left[-e^{-t}\right]_{0}^{\infty}=\left(-e^{-\infty}\right)-\left(-e^0\right)=0+1=1 \\&=&\frac{1}{\pi}\frac{1}{\left(z^2+1\right)}\;\cdots\;\text{equal to the standard Cauchy distribution.} \\f(x;x_0, \gamma)&=&\frac{1}{\pi}\frac{\gamma}{(x-x_0)^2+\gamma^2}\;\cdots\;\text{pdf of the Cauchy distribution} \\f(x;0, 1)&=&\frac{1}{\pi}\frac{1}{(x-0)^2+1^2}\;\cdots\;\text{pdf of the standard Cauchy distribution} \\&=&\frac{1}{\pi}\frac{1}{x^2+1} \end{eqnarray} $$
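Not part of the original post: a numpy Monte Carlo sketch of this result, comparing the empirical CDF of \(Z=X/Y\) with the standard Cauchy CDF \(F(z)=\frac{1}{2}+\frac{\arctan z}{\pi}\).

```python
import numpy as np

# Z = X/Y for independent standard normals should be standard Cauchy.
rng = np.random.default_rng(6)
n = 1_000_000
z = rng.standard_normal(n) / rng.standard_normal(n)

def cauchy_cdf(t):
    return 0.5 + np.arctan(t) / np.pi

points = (-1.0, 0.0, 1.0)
emp = [(z <= t).mean() for t in points]       # empirical CDF
theo = [cauchy_cdf(t) for t in points]        # 0.25, 0.5, 0.75
max_err = max(abs(e - t) for e, t in zip(emp, theo))
```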

Example: two exponential distributions

$$ \begin{eqnarray} f_X(x)&=&\lambda e^{-\lambda x} \;(x \geq 0, \lambda \gt 0) \\g_Y(y)&=&\lambda e^{-\lambda y} \;(y \geq 0, \lambda \gt 0) \\p_{X/Y}(z)&=&\int \int \delta\left(z-\frac{x}{y}\right)f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&=&\int \int \delta\left(z-\frac{x}{y}\right)\lambda e^{-\lambda x} \lambda e^{-\lambda y} \mathrm{d}x\mathrm{d}y \\&=&\int_0^{\infty} \left|y\right|\lambda e^{-\lambda yz} \lambda e^{-\lambda y}\mathrm{d}y \;\cdots\;z=\frac{x}{y},x=yz \\&=&\lambda^2 \int_0^{\infty} \left|y\right|e^{-\lambda yz} e^{-\lambda y}\mathrm{d}y \;\cdots\;\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x \\&=&\lambda^2 \int_0^{\infty} \left|y\right|e^{-\lambda yz-\lambda y}\mathrm{d}y \;\cdots\;A^BA^C=A^{B+C} \\&=&\lambda^2 \int_0^{\infty} \left|y\right|e^{-\lambda y\left(z+1\right)}\mathrm{d}y \\&=&\lambda^2 \int_0^{\infty} ye^{-\lambda y\left(z+1\right)}\mathrm{d}y \;\cdots\;y \gt 0\;\text{(from the range of integration)} \\&=&\lambda^2 \int_0^{\infty} \frac{t}{\lambda\left(z+1\right)}e^{-t}\frac{1}{\lambda\left(z+1\right)}\mathrm{d}t \\&&\;\cdots\;t=\lambda\left(z+1\right)y, y=\frac{t}{\lambda\left(z+1\right)} \\&&\;\cdots\;\frac{\mathrm{d}t}{\mathrm{d}y}=\lambda\left(z+1\right),\mathrm{d}y=\frac{1}{\lambda\left(z+1\right)}\mathrm{d}t \\&=&\lambda^2 \left\{\frac{1}{\lambda\left(z+1\right)}\right\}^2\int_0^{\infty} te^{-t}\mathrm{d}t \;\cdots\;\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x \\&=&\frac{1}{\left(z+1\right)^2}\int_0^{\infty} te^{-t}\mathrm{d}t \\&=&\frac{1}{\left(z+1\right)^2}\int_0^{\infty} t\left(-e^{-t}\right)^\prime\mathrm{d}t \;\cdots\;\left(-e^{-t}\right)^\prime=e^{-t} \\&=&\frac{1}{\left(z+1\right)^2}\left\{\left[-te^{-t}\right]_0^{\infty}-\int_0^{\infty} -e^{-t}\mathrm{d}t\right\} \;\cdots\;\int_a^b f^\prime(x)g(x) \mathrm{d}x=\left[f(x)g(x)\right]_a^b -\int_a^b f(x)g^\prime(x)\mathrm{d}x \\&=&\frac{1}{\left(z+1\right)^2}\left\{\left[-(\infty)e^{-\infty}-\left(-0e^{-0}\right)\right]+\int_0^{\infty} e^{-t}\mathrm{d}t\right\} \\&=&\frac{1}{\left(z+1\right)^2}\left(0+1\right) 
\;\cdots\;\int_{0}^{\infty} e^{-t} \mathrm{d}t=\left[-e^{-t}\right]_{0}^{\infty}=\left[\left(-e^{-\infty}\right)-\left(-e^0\right)\right]=\left[0+1\right]=1 \\&=&\frac{1}{\left(z+1\right)^2} \end{eqnarray} $$
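Not part of the original post: a numpy Monte Carlo sketch of this density \(1/(1+z)^2\), checked through its CDF \(F(z)=z/(1+z)\) for \(z\geq 0\); note the result does not depend on \(\lambda\) (chosen arbitrarily here).

```python
import numpy as np

# Z = X/Y for independent Exp(lambda) variables has density 1/(1+z)^2,
# hence CDF F(z) = z/(1+z), for any lambda > 0.
rng = np.random.default_rng(7)
lam = 2.0
n = 1_000_000
z = rng.exponential(1.0 / lam, n) / rng.exponential(1.0 / lam, n)

points = (0.5, 1.0, 3.0)
emp = [(z <= t).mean() for t in points]
theo = [t / (1.0 + t) for t in points]    # 1/3, 1/2, 3/4
max_err = max(abs(e - t) for e, t in zip(emp, theo))
```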

Change of variables for random variables: Z=XY

Change of variables for random variables: Z=XY

Z=XY

$$ \begin{eqnarray} p_{XY}(z) &=&\int \int \delta(z-xy)f(x)g(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\int_{-\infty}^{\infty}\delta(x)f(x)\mathrm{d}x=f(0)} \\&&\;\cdots\;\text{of the joint density }f(x)g(y)\text{ at }X=x,\;Y=y\text{, sum only the contributions satisfying }z-xy=0. \\&=&\int \int \frac{1}{|y|}\delta\left(x-\frac{z}{y}\right)\;f\left(x\right)g(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\delta(u(x))=\sum_{\alpha\in u^{-1}(0)}\frac{1}{\left|u^{\prime}(\alpha)\right|}\delta\left(x-\alpha\right)} \\&&\;\cdots\;u(x)=z-xy \\&&\;\cdots\;u(x=\alpha)=0,\;\alpha=\frac{z}{y} \\&&\;\cdots\;u^\prime=\frac{\mathrm{d}u}{\mathrm{d}x}=\frac{\mathrm{d}}{\mathrm{d}x}(z-xy)=-y \\&&\;\cdots\;\delta\left(z-xy\right) =\frac{1}{|u^\prime(\alpha)|}\delta\left(x-\alpha\right) =\frac{1}{|-y|}\delta\left(x-\frac{z}{y}\right) =\frac{1}{|y|}\delta\left(x-\frac{z}{y}\right) \\&=&\int \frac{1}{|y|}\delta\left(\frac{z}{y}-\frac{z}{y}\right)\;f\left(\frac{z}{y}\right)g(y)\mathrm{d}y \;\cdots\;z=xy,x=\frac{z}{y} \\&=&\int \frac{1}{|y|}\delta\left(0\right)\;f\left(\frac{z}{y}\right)g(y)\mathrm{d}y \\&=&\int \frac{1}{|y|}f\left(\frac{z}{y}\right)g(y)\mathrm{d}y \;\cdots\;\text{the argument of the }\delta\text{ function is }0\text{ everywhere, so the integral sums }\frac{1}{|y|}f\left(\frac{z}{y}\right)g(y)\text{ over the whole domain.} \end{eqnarray} $$

Example: two standard normal distributions

$$ \begin{eqnarray} f_X(x)&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} \;\cdots\;\mathrm{N}(0,1)\;\text{(standard normal distribution)} \\g_Y(y)&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \;\cdots\;\mathrm{N}(0,1)\;\text{(standard normal distribution)} \\p_{XY}(z)&=&\int \int \delta(z-xy)f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&=&\int \int \delta(z-xy)\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}\mathrm{d}x\mathrm{d}y \\&=&\int_{-\infty}^{\infty} \frac{1}{|y|} \frac{1}{\sqrt{2\pi}}e^{-\frac{\left(\frac{z}{y}\right)^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;z=xy,x=\frac{z}{y} \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{1}{|y|} e^{-\frac{\left(\frac{z}{y}\right)^2}{2}}e^{-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{1}{|y|} e^{-\frac{1}{2}\left(\frac{z^2}{y^2}+y^2\right)} \mathrm{d}y \\&=&\frac{1}{2\pi}2\int_{0}^{\infty} \frac{1}{y} e^{-\frac{1}{2}\left(\frac{z^2}{y^2}+y^2\right)} \mathrm{d}y \\&&\;\cdots\;\frac{1}{|y|}\text{ and }e^{-\frac{1}{2}\left(\frac{z^2}{y^2}+y^2\right)}\text{ are both even functions of }y\text{, so the integrand is even.} \\&&\;\cdots\;\int_{-a}^a f_{\text{even}}(x) \mathrm{d}x=2\int_0^a f_{\text{even}}(x)\mathrm{d}x,\text{ and on }(0,\infty)\text{ we have }\frac{1}{|y|}=\frac{1}{y}. \\&=&\frac{1}{\pi} \int_{0}^{\infty} \color{red}{\frac{1}{y}} e^{-\frac{1}{2}\left(\frac{z^2}{y^2}+y^2\right)} \color{red}{\mathrm{d}y} \\&=&\frac{1}{\pi} \int_{0}^{\infty} e^{-\frac{1}{2}\left(\frac{z^2}{2t}+2t\right)} \frac{\mathrm{d}t}{2t} \\&&\;\cdots\;t=\frac{y^2}{2},y^2=2t,\;y:0\rightarrow\infty,t:0\rightarrow\infty \\&&\;\cdots\;\frac{\mathrm{d}t}{\mathrm{d}y}=y=\frac{y^2}{y}=\frac{2t}{y},\frac{\mathrm{d}y}{y}=\frac{\mathrm{d}t}{2t} \\&=&\frac{1}{\pi}\frac{1}{2}\int_{0}^{\infty} \frac{1}{t} e^{-\frac{1}{2}\left(\frac{z^2}{2t}+2t\right)} \mathrm{d}t \\&=&\frac{1}{2\pi}\int_{0}^{\infty} \frac{1}{t} e^{-t-\frac{z^2}{4t}} \mathrm{d}t \\&&\;\cdots\;K_n(z)=\frac{1}{2}\left(\frac{z}{2}\right)^n\int_0^{\infty} t^{-n-1} e^{-t-\frac{z^2}{4t}} \mathrm{d}t\;\text{(modified Bessel function of the second kind)} \\&&\;\cdots\;K_0(z)=\frac{1}{2}\int_0^{\infty} t^{-1} e^{-t-\frac{z^2}{4t}} \mathrm{d}t \\&=&\frac{1}{\pi}K_0(z) \end{eqnarray} $$
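Not part of the original post: a numpy-only Monte Carlo sketch of the density \(K_0(|z|)/\pi\). To avoid external dependencies, \(K_0\) is evaluated with the integral representation \(K_0(w)=\int_0^{\infty}e^{-w\cosh t}\,\mathrm{d}t\) and simple trapezoidal quadrature; the bin location and sample size are arbitrary.

```python
import numpy as np

def k0(w):
    # Numerical K_0(w) via K_0(w) = integral of exp(-w*cosh(t)), t in [0, inf);
    # the integrand is negligible beyond t = 20 for w >= 1.
    t = np.linspace(0.0, 20.0, 200_001)
    y = np.exp(-w * np.cosh(t))
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

# Monte Carlo density of Z = XY for independent standard normals,
# estimated on the bin [0.95, 1.05] and compared with K_0(1)/pi.
rng = np.random.default_rng(8)
n = 2_000_000
z = rng.standard_normal(n) * rng.standard_normal(n)

half = 0.05
emp_density = (np.abs(z - 1.0) <= half).mean() / (2 * half)
theo_density = k0(1.0) / np.pi
```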

Example: two exponential distributions

$$ \begin{eqnarray} f_X(x)&=&\lambda e^{-\lambda x} \;(x \geq 0, \lambda \gt 0) \\g_Y(y)&=&\lambda e^{-\lambda y} \;(y \geq 0, \lambda \gt 0) \\p_{XY}(z)&=&\int \int \delta(z-xy)f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&=&\int \int \delta(z-xy)\lambda e^{-\lambda x} \lambda e^{-\lambda y} \mathrm{d}x\mathrm{d}y \\&=&\int_0^{\infty} \frac{1}{|y|}\lambda e^{-\lambda \left(\frac{z}{y}\right)} \lambda e^{-\lambda y}\mathrm{d}y \;\cdots\;z=xy,x=\frac{z}{y} \\&=&\lambda^2\int_0^{\infty} \frac{1}{|y|} e^{-\lambda \left(\frac{z}{y}\right)} e^{-\lambda y} \mathrm{d}y \;\cdots\;\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x \\&=&\lambda^2\int_0^{\infty} \frac{1}{|y|} e^{-\lambda \left(\frac{z}{y}\right)-\lambda y} \mathrm{d}y \;\cdots\;A^BA^C=A^{B+C} \\&=&\lambda^2\int_0^{\infty} \frac{1}{|y|} e^{-\lambda \left(\frac{z}{y}+y\right)} \mathrm{d}y \\&=&\lambda^2\int_0^{\infty} \frac{1}{|\frac{t}{\lambda}|} e^{-\lambda \left(\frac{z}{\frac{t}{\lambda}}+\frac{t}{\lambda}\right)} \frac{1}{\lambda}\mathrm{d}t \\&&\;\cdots\;t=\lambda y, y=\frac{t}{\lambda}, \frac{\mathrm{d}y}{\mathrm{d}t}=\frac{1}{\lambda},\mathrm{d}y=\frac{1}{\lambda}\mathrm{d}t,\;y:0\rightarrow\infty, t:0\rightarrow\infty \\&=&\lambda^2\int_0^{\infty} \frac{|\lambda|}{|t|} e^{-\lambda^2\frac{z}{t}-t} \frac{1}{\lambda}\mathrm{d}t \\&=&\lambda^2\int_0^{\infty} \frac{|\lambda|}{\lambda}\frac{1}{|t|} e^{-\lambda^2\frac{z}{t}-t} \mathrm{d}t \\&=&\lambda^2\int_0^{\infty} \frac{1}{|t|} e^{-\lambda^2\frac{z}{t}-t} \mathrm{d}t \;\cdots\;\lambda \gt 0 \\&=&\lambda^2\int_0^{\infty} \frac{1}{t} e^{-\lambda^2\frac{z}{t}-t} \mathrm{d}t \;\cdots\;t \gt 0\;\text{(from the range of integration)} \\&=&2\lambda^2K_0(2\lambda\sqrt{z}) \\&&\;\cdots\;K_n(z)=\frac{1}{2}\left(\frac{z}{2}\right)^n\int_0^{\infty} t^{-n-1} e^{-t-\frac{z^2}{4t}} \mathrm{d}t\;\text{(modified Bessel function of the second kind)} \\&&\;\cdots\;K_0(z)=\frac{1}{2}\int_0^{\infty} t^{-1} e^{-t-\frac{z^2}{4t}} \mathrm{d}t \end{eqnarray} $$

Change of variables for random variables: Z=X+Y

Change of variables for random variables: Z=X+Y

Z=X+Y

$$ \begin{eqnarray} p_{X+Y}(z) &=&\int\int \delta(z-(x+y))f(x)g(y)\mathrm{d}x\mathrm{d}y \\&&\;\cdots\;\href{https://shikitenkai.blogspot.com/2020/09/delta-function.html}{\int_{-\infty}^{\infty}\delta(x)f(x)\mathrm{d}x=f(0)} \\&&\;\cdots\;\text{of the joint density }f(x)g(y)\text{ at }X=x,\;Y=y\text{, sum only the contributions satisfying }z-(x+y)=0. \\&=&\int \delta(z-((z-y)+y))f(z-y)g(y)\mathrm{d}y \;\cdots\;z=x+y,x=z-y \\&=&\int \delta(0)f(z-y)g(y)\mathrm{d}y \\&=&\int f(z-y)g(y)\mathrm{d}y \;\cdots\;\text{the argument of the }\delta\text{ function is }0\text{ everywhere, so the integral sums }f(z-y)g(y)\text{ over the whole domain.} \end{eqnarray} $$

Example: two standard normal distributions

$$ \begin{eqnarray} f_X(x)&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} \;\cdots\;\mathrm{N}(0,1)\;(\text{standard }\href{https://shikitenkai.blogspot.com/2019/06/binomial-distributionnormal-distribution.html}{\text{normal distribution}}) \\g_Y(y)&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \;\cdots\;\mathrm{N}(0,1)\;(\text{standard }\href{https://shikitenkai.blogspot.com/2019/06/binomial-distributionnormal-distribution.html}{\text{normal distribution}}) \\p_{X+Y}(z)&=&\int\int \delta(z-(x+y))f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&=&\int\int \delta(z-(x+y))\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}\mathrm{d}x\mathrm{d}y \\&=&\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{(z-y)^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;z=x+y,x=z-y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{(z-y)^2}{2}}e^{-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x} \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{(z-y)^2}{2}-\frac{y^2}{2}} \mathrm{d}y \;\cdots\;A^BA^C=A^{B+C} \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{(z-y)^2+y^2}{2}} \mathrm{d}y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{z^2-2yz+y^2+y^2}{2}}\mathrm{d}y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{z^2-2yz+2y^2}{2}}\mathrm{d}y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{z^2}{2}+yz-y^2}\mathrm{d}y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{z^2}{2}\color{red}{+\frac{z^2}{4}-\frac{z^2}{4}}\color{black}{+yz-y^2}}\mathrm{d}y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{z^2}{4}-\left(\frac{z^2}{4}-yz+y^2\right)}\mathrm{d}y \\&=&\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{z^2}{4}}e^{-\left(\frac{z^2}{4}-yz+y^2\right)}\mathrm{d}y \\&=&\frac{1}{2\pi}e^{-\frac{z^2}{4}}\int_{-\infty}^{\infty} e^{-\left(\frac{z}{2}-y\right)^2}\mathrm{d}y \\&=&\frac{1}{2\pi}e^{-\frac{z^2}{4}}\int_{\infty}^{-\infty} e^{-t^2}(-1)\mathrm{d}t \\&&\;\cdots\;t=\frac{z}{2}-y,\frac{\mathrm{d}t}{\mathrm{d}y}=-1,\mathrm{d}y=-\mathrm{d}t,\;y:-\infty\rightarrow\infty, t:\infty\rightarrow -\infty \\&=&\frac{1}{2\pi}e^{-\frac{z^2}{4}}(-1)\int_{\infty}^{-\infty} e^{-t^2}\mathrm{d}t \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x} \\&=&\frac{1}{2\pi}e^{-\frac{z^2}{4}}(-1)\left(-\int_{-\infty}^{\infty} e^{-t^2}\mathrm{d}t\right) \;\cdots\;\int_{A}^{B} f(x)\mathrm{d}x=-\int_{B}^{A} f(x)\mathrm{d}x \\&=&\frac{1}{2\pi}e^{-\frac{z^2}{4}}\int_{-\infty}^{\infty} e^{-t^2}\mathrm{d}t \\&=&\frac{1}{2\pi}e^{-\frac{z^2}{4}}\sqrt{\pi} \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/gaussian-integral.html}{\int_{-\infty}^{\infty} e^{-t^2}\mathrm{d}t=\sqrt{\pi}} \\&=&\frac{1}{2\sqrt{\pi}}e^{-\frac{z^2}{4}} \\&=&\frac{1}{\sqrt{2\pi\cdot2}}e^{-\frac{z^2}{2\cdot2}} \;\cdots\;\mathrm{N}(0,2) \end{eqnarray} $$
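Not part of the original post: a numpy Monte Carlo sketch of the \(\mathrm{N}(0,2)\) result, checking the first two moments and one CDF value of \(Z=X+Y\).

```python
import numpy as np
from math import erf, sqrt

# X + Y for independent N(0,1) variables should be N(0, 2)
# (reproductive property of the normal distribution).
rng = np.random.default_rng(9)
n = 1_000_000
z = rng.standard_normal(n) + rng.standard_normal(n)

mean_z, var_z = z.mean(), z.var()
# CDF of N(0, 2) at 1: Phi(1/sqrt(2)) = 0.5*(1 + erf(1/2))
theo_p = 0.5 * (1 + erf(1.0 / sqrt(2 * 2)))
emp_p = (z <= 1.0).mean()
```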

Example: two exponential distributions

$$ \begin{eqnarray} f_X(x)&=&\lambda e^{-\lambda x} \;(x \geq 0, \lambda \gt 0) \;\cdots\;Exp(\lambda)\;(\href{https://shikitenkai.blogspot.com/2020/05/blog-post_0.html}{\text{exponential distribution}}) \\g_Y(y)&=&\lambda e^{-\lambda y} \;(y \geq 0, \lambda \gt 0) \;\cdots\;Exp(\lambda)\;(\href{https://shikitenkai.blogspot.com/2020/05/blog-post_0.html}{\text{exponential distribution}}) \\p_{X+Y}(z)&=&\int \int \delta(z-(x+y))f_X(x)g_Y(y)\mathrm{d}x\mathrm{d}y \\&=&\int \int \delta(z-(x+y))\lambda e^{-\lambda x} \lambda e^{-\lambda y} \mathrm{d}x\mathrm{d}y \\&=&\int_0^z \lambda e^{-\lambda (z-y)} \lambda e^{-\lambda y}\mathrm{d}y \;\cdots\;z=x+y,x=z-y \\&=&\lambda^2\int_0^z e^{-\lambda (z-y)} e^{-\lambda y}\mathrm{d}y \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x} \\&=&\lambda^2\int_0^z e^{-\lambda (z-y)-\lambda y}\mathrm{d}y \;\cdots\;A^BA^C=A^{B+C} \\&=&\lambda^2\int_0^z e^{-\lambda (z-y+y)}\mathrm{d}y \\&=&\lambda^2\int_0^z e^{-\lambda z}\mathrm{d}y \\&=&\lambda^2 e^{-\lambda z}\int_0^z\mathrm{d}y \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/continuous-random-variable-expected.html}{\int c f(x) \mathrm{d}x = c \int f(x) \mathrm{d}x} \\&=&\lambda^2 e^{-\lambda z}\left[y\right]_0^z \\&=&\lambda^2 e^{-\lambda z}\left(z-0\right) \\&=&\lambda^2 z e^{-\lambda z} \;\cdots\;\text{the pdf of the gamma distribution }\Gamma(2, \lambda) \end{eqnarray} $$ $$ \begin{eqnarray} \Gamma(\alpha, \lambda)&:&\frac{1}{\Gamma(\alpha)}\lambda^{\alpha} x^{\alpha-1} e^{-\lambda x}\;(\text{pdf of the }\href{https://shikitenkai.blogspot.com/2020/05/n.html}{\text{gamma distribution}}) \\\Gamma(1, \lambda)&:&\frac{1}{\Gamma(1)}\lambda^1 x^{1-1} e^{-\lambda x}=\frac{1}{1}\lambda e^{-\lambda x}=\lambda e^{-\lambda x} \\&&\;\cdots\;\Gamma(1)=(1-1)!=0!=1 \\&&\;\cdots\;\text{the }\href{https://shikitenkai.blogspot.com/2020/05/blog-post_0.html}{\text{exponential distribution}} \\\Gamma(2, \lambda)&:&\frac{1}{\Gamma(2)}\lambda^2 x^{2-1} e^{-\lambda x}=\frac{1}{1}\lambda^2xe^{-\lambda x}=\lambda^2xe^{-\lambda x} \\&&\;\cdots\;\Gamma(2)=(2-1)!=1!=1 \\&&\;\cdots\;\text{the distribution of the sum of two exponential random variables: a gamma distribution with }\alpha=2 \end{eqnarray} $$
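Not part of the original post: a numpy Monte Carlo sketch of the gamma result above, checking that \(Z=X+Y\) has the \(\Gamma(2,\lambda)\) moments, mean \(2/\lambda\) and variance \(2/\lambda^2\) (the value of \(\lambda\) is arbitrary).

```python
import numpy as np

# X + Y for independent Exp(lambda) variables is Gamma(alpha=2, lambda):
# mean 2/lambda, variance 2/lambda^2.
rng = np.random.default_rng(10)
lam = 1.5
n = 1_000_000
z = rng.exponential(1.0 / lam, n) + rng.exponential(1.0 / lam, n)

mean_z, var_z = z.mean(), z.var()
mean_theory = 2.0 / lam          # 1.333...
var_theory = 2.0 / lam**2        # 0.888...
```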

Delta function

Delta function

A function \(\delta(x)\) satisfying the following is called a delta function. $$ \begin{eqnarray} \int_{-\infty}^{\infty}\delta(x)f(x)\mathrm{d}x&=&f(0) \end{eqnarray} $$ This picks out, from the integral of \(f(x)\), only the value of \(f(x)\) at \(x=0\), where the argument of \(\delta\) vanishes.

For the above to hold, the integral of \(\delta(x)\) by itself is taken to be 1. $$ \begin{eqnarray} \int_{-\infty}^{\infty}\delta(x)\mathrm{d}x&=&1\;\cdots\;\text{the same as the previous formula with }f(x)=1 \end{eqnarray} $$
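Not part of the original post: a numpy sketch approximating \(\delta(x)\) by a narrow normalized Gaussian, checking numerically that \(\int\delta_\epsilon(x)\,\mathrm{d}x\approx 1\) and \(\int\delta_\epsilon(x)f(x)\,\mathrm{d}x\approx f(0)\) (the width \(\epsilon\) and test function are arbitrary).

```python
import numpy as np

def integrate(y, x):
    # simple trapezoidal rule on a fixed grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Narrow normalized Gaussian as a delta-function approximation.
eps = 1e-3
x = np.linspace(-1.0, 1.0, 2_000_001)
delta_eps = np.exp(-x**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

f = np.cos(x)                          # any smooth test function, f(0) = 1
total = integrate(delta_eps, x)        # should be ~1
picked = integrate(delta_eps * f, x)   # should be ~f(0) = 1
```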

Properties of the delta function

The case \(\delta(x-\alpha)\)

Shifting the point that is picked out: this extracts from the integral of \(f(x)\) only \(f(\alpha)\), the value at \(x=\alpha\) (where \(\delta(\alpha-\alpha)=\delta(0)\)). $$ \begin{eqnarray} \\\int_{-\infty}^{\infty}\delta(x-\alpha)f(x)\mathrm{d}x &=&\int_{-\infty}^{\infty}\delta(t)f(t+\alpha)\mathrm{d}t \\&&\;\cdots\;t=x-\alpha,x=t+\alpha,\frac{\mathrm{d}x}{\mathrm{d}t}=1,\mathrm{d}x=\mathrm{d}t \\&&\;\cdots\;x:-\infty\rightarrow \infty, t:-\infty\rightarrow \infty \\&=&f(\alpha) \end{eqnarray} $$

The case \(\delta(\alpha x)\)

We derive the behavior when the argument of \(\delta\) is scaled by a factor \(\alpha\) relative to \(f(x)\). $$ \begin{eqnarray} \int_{-\infty}^{\infty}\color{red}{\delta(\alpha x)}\color{black}{f(x)\mathrm{d}x} &=&\int_{-\infty}^{\infty}\delta(t)f\left(\frac{t}{\alpha}\right)\frac{\mathrm{d}t}{\alpha} \\&&\;\cdots\;t=\alpha x,x=\frac{t}{\alpha},\;\frac{\mathrm{d}x}{\mathrm{d}t}=\frac{1}{\alpha},\mathrm{d}x=\frac{\mathrm{d}t}{\alpha} \\&&\;\cdots\;x:-\infty\rightarrow \infty;\;t:-\infty\rightarrow \infty\;(\alpha \gt 0),\;t:\infty\rightarrow -\infty\;(\alpha \lt 0) \\&=&\begin{cases} \int_{-\infty}^{\infty}\delta(t)f\left(\frac{t}{\alpha}\right)\frac{\mathrm{d}t}{|\alpha|}&(\alpha \gt 0)\;\cdots\;a=|a|\;(a\gt0) \\ \int_{\infty}^{-\infty}\delta(t)f\left(\frac{t}{\alpha}\right)\frac{\mathrm{d}t}{-|\alpha|}&(\alpha \lt 0)\;\cdots\;a=-|a|\;(a\lt0) \end{cases} \\&=&\int_{-\infty}^{\infty}\delta(t)f\left(\frac{t}{\alpha}\right)\frac{\mathrm{d}t}{|\alpha|} \\&&\;\cdots\;for\;\alpha\lt0,\;reversing\;the\;integration\;limits\;flips\;the\;sign\;and\;cancels\;the\;minus, \\&&\;\cdots\;so\;with\;|\alpha|\;the\;expression\;no\;longer\;depends\;on\;the\;sign\;of\;\alpha. \\&=&\frac{1}{|\alpha|}\int_{-\infty}^{\infty}\delta(t)f\left(\frac{t}{\alpha}\right)\mathrm{d}t \;\cdots\;\int cf(x)\mathrm{d}x=c\int f(x)\mathrm{d}x \\&=&\frac{1}{|\alpha|}f\left(0\right) \;\cdots\;f\left(\frac{0}{\alpha}\right)=f(0) \\&=&\int_{-\infty}^{\infty} \color{red}{\frac{1}{|\alpha|}\delta(x)}\color{black}{f\left(x\right)\mathrm{d}x} \;\cdots\;rearranged\;for\;comparison\;with\;the\;first\;expression \\\delta(\alpha x)&=&\frac{1}{|\alpha|}\delta(x) \end{eqnarray} $$

The case \(\delta(u(x))\)

Again the points that get picked out move: if the roots of \(u(x)=0\) are \(x=\alpha_1,\cdots,\alpha_k,\cdots,\alpha_n\), then from the integral of \(f(x)\) only the values \(f(\alpha_k)\) at \(x=\alpha_k\) (where \(\delta(\alpha_k-\alpha_k)=\delta(0)\)) survive, and the integral becomes their (weighted) sum.

Take \(\epsilon\) small enough that each interval contains exactly one root \(x=\alpha_k\) of \(u(x)=0\). $$ \begin{eqnarray} \int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\color{red}{\delta\left(u(x)\right)}\color{black}{f(x)\mathrm{d}x} &=&\int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\delta\left(u(\alpha_k)+u^{\prime}(\alpha_k)\left(x-\alpha_k\right)+\frac{1}{2!}u^{\prime\prime}(\alpha_k)\left(x-\alpha_k\right)^2+\cdots\right)f(x)\mathrm{d}x \\&&\;\cdots\;Taylor\;expansion\;of\;u(x)\;around\;x=\alpha_k. \\&&\;\cdots\;Taylor\;expansion\;of\;f(x)\;around\;a:\; f(x)=f(a)+f^{\prime}(a)\left(x-a\right)+\frac{1}{2}f^{\prime\prime}(a)\left(x-a\right)^2+\cdots \\&=&\int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\delta\left(0+u^{\prime}(\alpha_k)\left(x-\alpha_k\right)+0\right)f(x)\mathrm{d}x \\&&\;\cdots\;u(\alpha_k)=0\;by\;the\;definition\;of\;\alpha_k. \\&&\;\cdots\;\left|x-\alpha_k\right|\leq\epsilon,\;so\;the\;second-\;and\;higher-order\;terms\;carry\;factors\;of\;order\;\epsilon^2, \\&&\;\cdots\;which\;are\;negligibly\;small\;and\;treated\;as\;0. \\&=&\int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\delta\left(u^{\prime}(\alpha_k)\left(x-\alpha_k\right)\right)f(x)\mathrm{d}x \\&=&\int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\color{red}{\frac{1}{\left|u^{\prime}(\alpha_k)\right|}\delta\left(x-\alpha_k\right)}\color{black}{f(x)\mathrm{d}x} \;\cdots\;\delta(\alpha x)=\frac{1}{|\alpha|}\delta(x)\;(shown\;above) \end{eqnarray} $$ Hence, over the whole line: 
$$ \begin{eqnarray} \int_{-\infty}^{\infty}\color{red}{\delta\left(u(x)\right)}\color{black}{f(x)\mathrm{d}x} &=&\sum_{k=1}^{n} \int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\delta\left(u(x)\right)f(x)\mathrm{d}x \\&=&\sum_{k=1}^{n} \int_{\alpha_k-\epsilon}^{\alpha_k+\epsilon}\frac{1}{\left|u^{\prime}(\alpha_k)\right|}\delta\left(x-\alpha_k\right)f(x)\mathrm{d}x \\&=&\int_{-\infty}^{\infty}\left\{\color{red}{\sum_{k=1}^{n}\frac{1}{\left|u^{\prime}(\alpha_k)\right|}\delta\left(x-\alpha_k\right)}\right\}\color{black}{f(x)\mathrm{d}x} \end{eqnarray} $$ $$ \begin{eqnarray} \\\delta(u(x))&=&\sum_{k=1}^{n}\frac{1}{\left|u^{\prime}(\alpha_k)\right|}\delta\left(x-\alpha_k\right)\;(where\;\alpha_k\;are\;the\;roots\;of\;u(x)=0) \\&=&\sum_{\alpha\in u^{-1}(0)}\frac{1}{\left|u^{\prime}(\alpha)\right|}\delta\left(x-\alpha\right) \end{eqnarray} $$
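The composition rule can be sanity-checked numerically (a sketch I am adding, not part of the original derivation): approximate \(\delta\) by a narrow Gaussian ("nascent" delta) and test \(u(x)=x^2-1\), whose roots are \(\pm1\) with \(|u'(\pm1)|=2\); \(f(x)=x^2+2\) is an arbitrary test function.

```python
import numpy as np

# A sketch: approximate delta(t) by a narrow Gaussian and check
#   integral of delta(u(x)) f(x) dx  =  sum_k f(alpha_k) / |u'(alpha_k)|
# for u(x) = x^2 - 1 (roots +-1, |u'(+-1)| = 2) and f(x) = x^2 + 2.
def nascent_delta(t, eps=1e-3):
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-3.0, 3.0, 600_001)   # fine grid so the spikes are resolved
dx = x[1] - x[0]
f = x**2 + 2
u = x**2 - 1

lhs = np.sum(nascent_delta(u) * f) * dx        # rectangle-rule integral
rhs = (1**2 + 2) / 2 + ((-1)**2 + 2) / 2       # f(+-1) = 3, weights 1/|u'| = 1/2
print(lhs, rhs)
```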

Probability generating function

Probability generating function

For a discrete random variable \(X\) taking non-negative integer values, the probability generating function (not the moment generating function) is defined as follows. $$ \begin{eqnarray} G_X(t) &=&\mathrm{E}\left[t^X\right] \;\cdots\;the\;moment\;generating\;function\;is\;M_X(t)=\mathrm{E}\left[e^{tX}\right] \\&=&\sum_{k} t^k P(X=k) \\&&\;\cdots\;P(X=k):\;probability\;mass\;function,\;\sum_{k}:\;sum\;over\;all\;k\;in\;the\;support\;of\;X \end{eqnarray} $$

First derivative (first moment about the origin) = expected value

$$ \begin{eqnarray} \left. G_X^{(1)}(t) \right|_{t=1} &=&\left. \frac{\mathrm{d}}{\mathrm{d}t}G_X(t) \right|_{t=1} \\&=&\left. \sum_{k\geq1} kt^{k-1} P(X=k) \right|_{t=1} \\&=&\sum_{k\geq1} k1^{k-1} P(X=k) \\&=&\sum_{k\geq1} k P(X=k) \\&=&\mathrm{E}\left[X\right] \end{eqnarray} $$

Second derivative (second factorial moment \(\mathrm{E}\left[X(X-1)\right]\))

$$ \begin{eqnarray} \left. G_X^{(2)}(t) \right|_{t=1} &=&\left. \frac{\mathrm{d}^2}{\mathrm{d}t^2}G_X(t) \right|_{t=1} \\&=&\left. \frac{\mathrm{d}}{\mathrm{d}t} \sum_{k\geq1} kt^{k-1} P(X=k) \right|_{t=1} \\&=&\left. \sum_{k\geq2} k(k-1)t^{k-2} P(X=k) \right|_{t=1} \\&=&\sum_{k\geq2} k(k-1)1^{k-2} P(X=k) \\&=&\sum_{k\geq2} k(k-1) P(X=k) \\&=&\mathrm{E}\left[X(X-1)\right] \end{eqnarray} $$

Variance (second moment about the population mean)

$$ \begin{eqnarray} \mathrm{V}\left[X\right] &=&\mathrm{E}\left[\left(X-\mathrm{E}\left[X\right]\right)^2\right] \\&=&\mathrm{E}\left[X^2\right]-\mathrm{E}\left[X\right]^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-variance.html}{\mathrm{E}\left[\left(X-\mathrm{E}\left[X\right]\right)^2\right]=\mathrm{E}\left[X^2\right]-\mathrm{E}\left[X\right]^2} \\&=&\mathrm{E}\left[X^2\right]\color{red}{-\mathrm{E}\left[X\right]+\mathrm{E}\left[X\right]}\color{black}{-\mathrm{E}\left[X\right]^2} \\&=&\mathrm{E}\left[X^2-X\right]+\mathrm{E}\left[X\right]-\mathrm{E}\left[X\right]^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[X\pm Y\right]=\mathrm{E}\left[X\right]\pm\mathrm{E}\left[Y\right]} \\&=&\mathrm{E}\left[X(X-1)\right]+\mathrm{E}\left[X\right]-\mathrm{E}\left[X\right]^2 \\&=&\left( \left. G_X^{(2)}(t) \right|_{t=1} \right) + \left( \left. G_X^{(1)}(t) \right|_{t=1} \right) - \left( \left. G_X^{(1)}(t) \right|_{t=1} \right)^2 \end{eqnarray} $$
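The mean and variance formulas above can be checked numerically (a sketch I am adding; the Binomial(10, 0.3) test case is an arbitrary choice): build \(G_X(t)=\sum_k t^k P(X=k)\) from the pmf and take finite-difference derivatives at \(t=1\).

```python
import math

# A sketch: for X ~ Binomial(n, p), recover E[X] and V[X] from numerical
# derivatives of the pgf at t = 1, per the formulas above.
n, p = 10, 0.3
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def G(t):
    return sum(t**k * pk for k, pk in enumerate(pmf))

h = 1e-5
G1 = (G(1 + h) - G(1 - h)) / (2 * h)           # G'(1)  = E[X]
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2   # G''(1) = E[X(X-1)]
mean = G1
var = G2 + G1 - G1**2
print(mean, var)
```

For this case the exact values are \(np=3\) and \(np(1-p)=2.1\), which the finite differences reproduce to several digits.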

Z=X+Y

$$ \begin{eqnarray} G_Z(t)&=&\mathrm{E}\left[t^Z\right] \\&=&\mathrm{E}\left[t^{X+Y}\right] \\&=&\mathrm{E}\left[t^Xt^Y\right] \;\cdots\;A^{B+C}=A^BA^C \\&=&\mathrm{E}\left[t^X\right]\mathrm{E}\left[t^Y\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[AB\right]=\mathrm{E}\left[A\right]\mathrm{E}\left[B\right]\;when\;A\;and\;B\;are\;independent} \\&=&G_X(t)G_Y(t) \end{eqnarray} $$
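The product rule \(G_{X+Y}(t)=G_X(t)G_Y(t)\) can be illustrated with a small example (a sketch I am adding; two fair dice and the evaluation point \(t=0.7\) are arbitrary choices): build the pmf of \(X+Y\) by convolution and compare the two sides.

```python
# A sketch: for two independent fair dice X and Y, G_{X+Y}(t) = G_X(t) G_Y(t).
pmf = {k: 1 / 6 for k in range(1, 7)}  # pmf of one fair die

def pgf(p, t):
    return sum(t**k * pk for k, pk in p.items())

# pmf of the sum X + Y via convolution of the two pmfs
pmf_sum = {}
for i, pi in pmf.items():
    for j, pj in pmf.items():
        pmf_sum[i + j] = pmf_sum.get(i + j, 0.0) + pi * pj

t = 0.7
lhs = pgf(pmf_sum, t)
rhs = pgf(pmf, t) * pgf(pmf, t)
print(lhs, rhs)
```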

Moment-generating function, expected value, and variance of the binomial distribution

Moment-generating function, expected value, and variance of the binomial distribution

Binomial distribution

$$ \begin{eqnarray} X&\sim&B(n, p) \\f(X=x)&=& \begin{cases} _n\mathrm{C}_x\;p^x(1-p)^{n-x} & x \in \left\{0,1,2, \dotsc ,n\right\} \\0 & x \notin \left\{0,1,2, \dotsc ,n\right\} \end{cases}\;\cdots\;probability\;mass\;function \end{eqnarray} $$

Moment generating function

$$ \begin{eqnarray} M_X(t)&=&\mathrm{E}\left[e^{tX}\right] \\&=&\sum_{x=0}^n e^{tx}\;_n\mathrm{C}_x\;p^x\left(1-p\right)^{n-x} \\&=&\sum_{x=0}^n\;_n\mathrm{C}_x\;\left( e^{t} p \right)^x \left(1-p\right)^{n-x} \\&=&\;_n\mathrm{C}_0\;\left( e^{t} p \right)^0 \left(1-p\right)^{n-0} +\;_n\mathrm{C}_1\;\left( e^{t} p \right)^1 \left(1-p\right)^{n-1} +\;_n\mathrm{C}_2\;\left( e^{t} p \right)^2 \left(1-p\right)^{n-2} +\cdots +\;_n\mathrm{C}_n\;\left( e^{t} p \right)^n \left(1-p\right)^{n-n} \\&=&\left\{e^{t} p + \left(1-p\right) \right\}^n \\&&\;\cdots\;(A+B)^D=\;_D\mathrm{C}_0\;A^0B^{D-0}+\;_D\mathrm{C}_1\;A^1B^{D-1}+\cdots+\;_D\mathrm{C}_D\;A^DB^{D-D}=\sum_{k=0}^D \;_D\mathrm{C}_k\;A^kB^{D-k}\;(binomial\;theorem) \\&=&\left(e^{t}p - p + 1\right)^n \end{eqnarray} $$
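The closed form can be verified against the defining sum (a sketch I am adding; `n`, `p`, `t` are arbitrary test values):

```python
import math

# A sketch: check M_X(t) = (p e^t - p + 1)^n for X ~ Binomial(n, p)
# against the defining sum  sum_x e^{tx} C(n,x) p^x (1-p)^{n-x}.
n, p, t = 12, 0.4, 0.3
direct = sum(math.exp(t * x) * math.comb(n, x) * p**x * (1 - p)**(n - x)
             for x in range(n + 1))
closed = (p * math.exp(t) - p + 1)**n
print(direct, closed)
```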

First moment about the origin = expected value

$$ \begin{eqnarray} \mathrm{E}\left[X\right]&=&M^{(1)}_X(0) \\&=&\left.\frac{\mathrm{d} M_X(t)}{\mathrm{d}t}\right|_{t=0} \\&=&\left.\frac{\mathrm{d}}{\mathrm{d}t} \left(e^{t}p - p + 1\right)^n \right|_{t=0} \\&=&\left.\frac{\mathrm{d}}{\mathrm{d}u} u^n \frac{\mathrm{d}u}{\mathrm{d}t} \right|_{t=0} \;\cdots\;u=e^{t}p - p + 1,\frac{\mathrm{d}u}{\mathrm{d}t}=\frac{\mathrm{d}}{\mathrm{d}t}\left(e^{t}p - p + 1\right)=e^{t}p \\&=&\left.nu^{n-1} e^{t}p \right|_{t=0} \\&=&\left.n\left( e^{t}p - p + 1 \right)^{n-1} e^{t}p \right|_{t=0} \\&=&\left.npe^{t} \left( e^{t}p - p + 1 \right)^{n-1} \right|_{t=0} \\&=&npe^{0} \left( e^{0}p - p + 1 \right)^{n-1} \\&=&np\cdot1 \left( 1\cdot p - p + 1 \right)^{n-1} \\&=&np \left(1\right)^{n-1} \\&=&np\;\cdots\;matches\;\href{https://shikitenkai.blogspot.com/2019/06/blog-post_30.html}{the\;binomial\;mean\;obtained\;from\;the\;definition\;of\;\mathrm{E}\left[X\right]} \end{eqnarray} $$

Second moment about the origin

$$ \begin{eqnarray} \mathrm{E}\left[X^2\right]&=&M^{(2)}_X(0) \\&=&\left.\frac{\mathrm{d}^2 M_X(t)}{\mathrm{d}t^2}\right|_{t=0} \\&=&\left.\frac{\mathrm{d}^2}{\mathrm{d}t^2} \left(e^{t}p - p + 1\right)^n \right|_{t=0} \\&=&\left.\frac{\mathrm{d}}{\mathrm{d}t} npe^{t} \left(e^{t}p - p + 1\right)^{n-1} \right|_{t=0} \\&=&\left.np\frac{\mathrm{d}}{\mathrm{d}t} e^{t} \left(e^{t}p - p + 1\right)^{n-1} \right|_{t=0} \;\cdots\;\frac{\mathrm{d}}{\mathrm{d}x}cf(x)=c\frac{\mathrm{d}}{\mathrm{d}x}f(x)\;(c:\;constant) \\&=&\left.np\frac{\mathrm{d}}{\mathrm{d}t} uv \right|_{t=0} \;\cdots\;u=e^t,\;v= \left(e^{t}p - p + 1\right)^{n-1} \\&=&\left.np\left\{\left(\frac{\mathrm{d}}{\mathrm{d}t}u\right)v+u\left(\frac{\mathrm{d}}{\mathrm{d}t}v\right) \right\}\right|_{t=0} \\&=&\left.np\left[\left(e^{t}\right)v+u\left\{\left(n-1\right)\left(e^{t}p - p + 1\right)^{n-2}pe^t\right\} \right]\right|_{t=0} \;\cdots\;\frac{\mathrm{d}u}{\mathrm{d}t}=e^{t},\frac{\mathrm{d}v}{\mathrm{d}t}=\left(n-1\right)\left(e^{t}p - p + 1\right)^{n-2}pe^t \\&=&\left.np\left[\left(e^{t}\right)\left(e^{t}p - p + 1\right)^{n-1}+e^t\left\{\left(n-1\right)\left(e^{t}p - p + 1\right)^{n-2}pe^t\right\} \right]\right|_{t=0} \;\cdots\;u=e^t,\;v= \left(e^{t}p - p + 1\right)^{n-1} \\&=&\left.npe^{t}\left(e^{t}p - p + 1\right)^{n-1}+n\left(n-1\right)p^2e^{2t}\left(e^{t}p - p + 1\right)^{n-2} \right|_{t=0} \\&=&npe^{0}\left(e^{0}p - p + 1\right)^{n-1}+n\left(n-1\right)p^2e^{2\cdot0}\left(e^{0}p - p + 1\right)^{n-2} \\&=&np\cdot1\cdot\left(1\cdot p - p + 1\right)^{n-1}+n\left(n-1\right)p^2\cdot1\cdot\left(1\cdot p - p + 1\right)^{n-2} \\&=&np\left(1\right)^{n-1}+n\left(n-1\right)p^2\left(1\right)^{n-2} \\&=&np+n\left(n-1\right)p^2 \end{eqnarray} $$

Second central moment (second moment about the population mean) = variance

$$ \begin{eqnarray} \mathrm{V}\left[X\right]&=&\mathrm{E}\left[\left(X-\mathrm{E}\left[X\right]\right)^2\right] \\&=&\mathrm{E}\left[X^2\right]-\mathrm{E}\left[X\right]^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-variance.html}{\mathrm{E}\left[\left(X-\mathrm{E}\left[X\right]\right)^2\right]=\mathrm{E}\left[X^2\right]-\mathrm{E}\left[X\right]^2} \\&=&M^{(2)}_X(0) -\left(M^{(1)}_X(0)\right)^2 \\&=&np+n\left(n-1\right)p^2 - (np)^2 \\&=&np+n^2p^2-np^2 - n^2p^2 \\&=&np-np^2 \\&=&np(1-p)\;\cdots\;matches\;\href{https://shikitenkai.blogspot.com/2019/06/blog-post_75.html}{the\;binomial\;variance\;obtained\;via\;\mathrm{E}\left[X(X-1)\right]} \end{eqnarray} $$
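The derivative route above can also be checked numerically (a sketch I am adding; `n = 8`, `p = 0.25` are arbitrary): differentiate the closed-form mgf at \(t=0\) by finite differences and compare with \(np\) and \(np(1-p)\).

```python
import math

# A sketch: recover E[X] = np and V[X] = np(1-p) from numerical derivatives
# of the mgf M(t) = (p e^t - p + 1)^n at t = 0.
n, p = 8, 0.25

def M(t):
    return (p * math.exp(t) - p + 1)**n

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)            # M'(0)  = E[X]
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # M''(0) = E[X^2]
mean = m1
var = m2 - m1**2
print(mean, var)
```

Here the exact values are \(np=2\) and \(np(1-p)=1.5\).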

Maximum likelihood estimation of the variance for independent samples from a normal distribution

Maximum likelihood estimation of the variance for independent samples from a normal distribution

Consider maximum likelihood estimation of \(\sigma^2\) when \(x_1,\cdots,x_n\) are independent observations from \(N\left(\mu,\sigma^2\right)\).

Probability density function

$$ \begin{eqnarray} X_k&\sim&N\left(\mu,\sigma^2\right) \\f\left(x;\mu,\sigma^2\right)&=&\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} \;\cdots\;probability\;density\;function\;of\;the\;normal\;distribution \\f\left(x_1,\cdots,x_n;\mu,\sigma^2\right)&=&\left\{\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x_1-\mu)^2}{2\sigma^2}}\right\} \left\{\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x_2-\mu)^2}{2\sigma^2}}\right\} \cdots \left\{\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x_n-\mu)^2}{2\sigma^2}}\right\} \\&=&\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^ne^{-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2} \end{eqnarray} $$

Maximum likelihood estimation of \(\sigma^2\) when \(\mu\) is known

From the probability density, form the likelihood function (treating the random variables and the known parameters as constants, and the unknown parameter as the variable), and take its logarithm to obtain the log-likelihood function. $$ \begin{eqnarray} \\l\left(\sigma^2;x_1,\cdots,x_n,\mu\right) &=&\log{ \left\{ \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^ne^{-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2} \right\}} \\&=&\log{ \left\{ \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n\right\}}+\log{ \left\{ e^{-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2} \right\}} \;\cdots\;\log{\left(AB\right)}=\log{\left(A\right)}+\log{\left(B\right)} \\&=&\log{ \left\{ \left(2\pi\sigma^2\right)^{-\frac{n}{2}}\right\}}+\log{ \left\{ e^{-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2} \right\}} \;\cdots\;\frac{1}{A}=A^{-1} \\&=&-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2\log{ \left( e \right)} \;\cdots\;\log{\left(A^B\right)}=B\log{\left(A\right)} \\&=&-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2 \;\cdots\;\log{ \left( e \right)}=1 \end{eqnarray} $$ To look for an extremum, differentiate with respect to the variable \(\sigma^2\) (the score function). 
$$ \begin{eqnarray} \\\frac{\mathrm{d}}{\mathrm{d} \sigma^2}l\left(\sigma^2;x_1,\cdots,x_n,\mu\right) &=&\frac{\mathrm{d}}{\mathrm{d} \sigma^2}\left\{-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2\right\} \\&=&-\frac{n}{2}\frac{\mathrm{d}}{\mathrm{d} \sigma^2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}\frac{\mathrm{d}}{\mathrm{d} \sigma^2}\frac{1}{\sigma^2} \\&&\;\cdots\;\frac{\mathrm{d}}{\mathrm{d}x}\left\{f(x)+g(x)\right\}=\frac{\mathrm{d}}{\mathrm{d}x}f(x)+\frac{\mathrm{d}}{\mathrm{d}x}g(x) ,\;\frac{\mathrm{d}}{\mathrm{d}x}cf(x)=c\frac{\mathrm{d}}{\mathrm{d}x}f(x) \\&=&-\frac{n}{2}\frac{\mathrm{d}}{\mathrm{d} u}\log{ \left( u \right)}\frac{\mathrm{d}u}{\mathrm{d}\sigma^2}-\frac{1}{2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}\frac{\mathrm{d}}{\mathrm{d}v}v^{-1}\frac{\mathrm{d}v}{\mathrm{d}\sigma^2} \\&&\;\cdots\;u=2\pi\sigma^2,\frac{\mathrm{d}u}{\mathrm{d}\sigma^2}=2\pi, v=\sigma^2,\frac{\mathrm{d}v}{\mathrm{d}\sigma^2}=1 \\&=&-\frac{n}{2}\frac{1}{u}2\pi-\frac{1}{2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}\left(-v^{-2}\right)\cdot1 \\&=&-\frac{n}{2}\frac{1}{2\pi\sigma^2}2\pi-\frac{1}{2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}\left\{-\left(\sigma^2\right)^{-2}\right\} \;\cdots\;u=2\pi\sigma^2,\;v=\sigma^2 \\&=&-\frac{n}{2}\frac{1}{\sigma^2}-\frac{1}{2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}\left\{\frac{-1}{\left(\sigma^2\right)^2}\right\} \\&=&-\frac{n}{2\sigma^2}+\frac{1}{2\sigma^4}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\} \;\cdots\;\left(A^B\right)^C=A^{BC} \end{eqnarray} $$ Find the \(\sigma^2(=\hat{\sigma}^2)\) for which this expression is 0 (the extremum). 
$$ \begin{eqnarray} \frac{\mathrm{d}}{\mathrm{d} \sigma^2}l\left(\sigma^2;x_1,\cdots,x_n,\mu\right)=-\frac{n}{2\hat{\sigma}^2}+\frac{1}{2\hat{\sigma}^4}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}&=&0 \\\frac{1}{2\hat{\sigma}^2}\left[-n+\frac{1}{\hat{\sigma}^2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}\right]&=&0 \\-n+\frac{1}{\hat{\sigma}^2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}&=&0 \;\cdots\;\frac{1}{2\hat{\sigma}^2}\neq0\;for\;finite\;\hat{\sigma}^2,\;so\;the\;bracket\;must\;be\;0 \\\frac{1}{\hat{\sigma}^2}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\}&=&n \\\frac{1}{\hat{\sigma}^2}&=&\frac{n}{\sum_{k=1}^n(x_k-\mu)^2} \\\hat{\sigma}^2&=&\frac{1}{n}\sum_{k=1}^n(x_k-\mu)^2 \;\cdots\;taking\;reciprocals\;of\;both\;sides \end{eqnarray} $$
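The closed form can be checked against a brute-force maximization (a sketch I am adding; the simulated data, the grid, and all parameter values are arbitrary choices):

```python
import numpy as np

# A sketch: with mu known, scan the log-likelihood over a grid of sigma^2
# values and compare the maximizer with the closed form (1/n) sum (x_k - mu)^2.
rng = np.random.default_rng(1)
mu, sigma2 = 2.0, 4.0
x = rng.normal(mu, np.sqrt(sigma2), 500)
n = len(x)
S = np.sum((x - mu)**2)

grid = np.linspace(0.1, 20.0, 200_000)            # candidate sigma^2 values
loglik = -n / 2 * np.log(2 * np.pi * grid) - S / (2 * grid)
s2_grid = grid[np.argmax(loglik)]
s2_formula = S / n
print(s2_grid, s2_formula)
```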

Maximum likelihood estimation of \(\sigma^2\) when \(\mu\) is unknown

Prepare the log-likelihood function (same as above). $$ \begin{eqnarray} \\l\left(\mu,\sigma^2;x_1,\cdots,x_n\right) &=&-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2 \end{eqnarray} $$ To look for an extremum, take partial derivatives with respect to each of the variables \(\mu,\sigma^2\) (the score functions). $$ \begin{eqnarray} \\\frac{\partial}{\partial \mu}l\left(\mu, \sigma^2;x_1,\cdots,x_n\right) &=&\frac{\partial}{\partial \mu}\left\{-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2\right\} \\&=&\frac{\partial}{\partial \mu}\left\{-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}\right\}+\frac{\partial}{\partial \mu}\left\{-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2\right\} \\&&\;\cdots\;\frac{\partial}{\partial x}\left\{f(x,y)+g(x,y)\right\} =\frac{\partial}{\partial x}f(x,y) +\frac{\partial}{\partial x}g(x,y) \\&=&-\frac{1}{2\sigma^2}\frac{\mathrm{d}}{\mathrm{d} \mu}\sum_{k=1}^n(x_k-\mu)^2 \;\cdots\;\frac{\mathrm{d}}{\mathrm{d}x}c=0,\;\frac{\mathrm{d}}{\mathrm{d}x}cf(x)=c\frac{\mathrm{d}}{\mathrm{d}x}f(x) \\&=&-\frac{1}{2\sigma^2}\sum_{k=1}^n\frac{\mathrm{d}}{\mathrm{d} \mu}(x_k-\mu)^2 \\&&\;\cdots\;\frac{\mathrm{d}}{\mathrm{d}x}\sum_{k=1}^nf(x_k)=\frac{\mathrm{d}}{\mathrm{d}x}\left\{f(x_1)+\cdots+f(x_n)\right\}=\frac{\mathrm{d}}{\mathrm{d}x}f(x_1)+\cdots+\frac{\mathrm{d}}{\mathrm{d}x}f(x_n)=\sum_{k=1}^n\frac{\mathrm{d}}{\mathrm{d}x}f(x_k) \\&=&-\frac{1}{2\sigma^2}\sum_{k=1}^n\frac{\mathrm{d}}{\mathrm{d} u}u^2\frac{\mathrm{d}u}{\mathrm{d}\mu} \;\cdots\;u=x_k-\mu,\frac{\mathrm{d}u}{\mathrm{d}\mu}=-1 \\&=&-\frac{1}{2\sigma^2}\sum_{k=1}^n2u\cdot\left(-1\right) \\&=&-\frac{1}{2\sigma^2}\sum_{k=1}^n-2\left(x_k-\mu\right) \;\cdots\;u=x_k-\mu \\&=&-\frac{1}{2\sigma^2}(-2)\sum_{k=1}^n\left(x_k-\mu\right) \\&=&\frac{1}{\sigma^2}\sum_{k=1}^n\left(x_k-\mu\right) \\\frac{\partial}{\partial \sigma^2}l\left(\mu, \sigma^2;x_1,\cdots,x_n\right) &=&\frac{\partial}{\partial \sigma^2}\left\{-\frac{n}{2}\log{ \left( 2\pi\sigma^2 \right)}-\frac{1}{2\sigma^2}\sum_{k=1}^n(x_k-\mu)^2\right\} 
\\&=&-\frac{n}{2\sigma^2}+\frac{1}{2\sigma^4}\left\{\sum_{k=1}^n(x_k-\mu)^2\right\} \;\cdots\;as\;in\;the\;case\;of\;l(\sigma^2;x_1,\cdots,x_n,\mu)\;above \end{eqnarray} $$ Solve the simultaneous equations obtained by setting these to 0 for \(\mu(=\hat{\mu}),\sigma^2(=\tilde{\sigma}^2)\) (the extremum). $$ \begin{eqnarray} \left\{ \begin{array}{rcl} \frac{1}{\tilde{\sigma}^2} \sum_{k=1}^n\left(x_k-\hat{\mu}\right)&=&0 \\-\frac{n}{2\tilde{\sigma}^2}+\frac{1}{2\tilde{\sigma}^4}\left\{\sum_{k=1}^n(x_k-\hat{\mu})^2\right\}&=&0 \end{array} \right. \end{eqnarray} $$ From the first equation, $$ \begin{eqnarray} \frac{1}{\tilde{\sigma}^2} \sum_{k=1}^n\left(x_k-\hat{\mu}\right)&=&0 \\\sum_{k=1}^n\left(x_k-\hat{\mu}\right)&=&0 \;\cdots\;\frac{1}{\tilde{\sigma}^2}\neq0\;for\;finite\;\tilde{\sigma}^2,\;so\;the\;sum\;must\;be\;0 \\\sum_{k=1}^nx_k-\sum_{k=1}^n\hat{\mu}&=&0 \;\cdots\;\sum (A-B)=\sum A - \sum B \\\sum_{k=1}^nx_k-n\hat{\mu}&=&0 \;\cdots\;\sum_{k=1}^n c = nc\;(c:\;constant) \\-n\hat{\mu}&=&-\sum_{k=1}^nx_k \\\hat{\mu}&=&\frac{1}{n}\sum_{k=1}^nx_k \\&=&\bar{x} \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/07/specimen-random-variable.html}{sample\;mean\;\bar{x}=\frac{1}{n}\sum_{k=1}^nx_k} \end{eqnarray} $$ Substitute this for \(\hat{\mu}\) in the second equation to obtain the maximum likelihood estimator \(\tilde{\sigma}^2\). 
$$ \begin{eqnarray} -\frac{n}{2\tilde{\sigma}^2}+\frac{1}{2\tilde{\sigma}^4}\left\{\sum_{k=1}^n(x_k-\hat{\mu})^2\right\}&=&0 \\-\frac{n}{2\tilde{\sigma}^2}+\frac{1}{2\tilde{\sigma}^4}\left\{\sum_{k=1}^n(x_k-\bar{x})^2\right\}&=&0 \;\cdots\;\hat{\mu}=\bar{x} \\\frac{1}{2\tilde{\sigma}^2}\left[-n+\frac{1}{\tilde{\sigma}^2}\left\{\sum_{k=1}^n(x_k-\bar{x})^2\right\}\right]&=&0 \\-n+\frac{1}{\tilde{\sigma}^2}\left\{\sum_{k=1}^n(x_k-\bar{x})^2\right\}&=&0 \;\cdots\;\frac{1}{2\tilde{\sigma}^2}\neq0\;for\;finite\;\tilde{\sigma}^2,\;so\;the\;bracket\;must\;be\;0 \\\frac{1}{\tilde{\sigma}^2}\left\{\sum_{k=1}^n(x_k-\bar{x})^2\right\}&=&n \\\frac{1}{\tilde{\sigma}^2}&=&\frac{n}{\sum_{k=1}^n(x_k-\bar{x})^2} \\\tilde{\sigma}^2&=&\frac{1}{n}\sum_{k=1}^n(x_k-\bar{x})^2 \\&=&s^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/07/specimen-random-variable.html}{sample\;variance\;s^2=\frac{1}{n}\sum_{k=1}^{n} (X_k - \overline{X})^2} \\&=&\frac{n-1}{n}\hat{\sigma}^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/07/specimen-random-variable.html}{unbiased\;variance\;\hat{\sigma}^2=\frac{1}{n-1}\sum_{k=1}^{n} (X_k - \overline{X})^2} \\&&\;\cdots\;this\;also\;matches\;\href{https://shikitenkai.blogspot.com/2019/07/blog-post_68.html}{the\;expected\;value\;of\;the\;sample\;variance} \end{eqnarray} $$
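The solution can be checked by plugging it back into the score functions (a sketch I am adding; the simulated data are arbitrary): at \(\hat{\mu}=\bar{x}\) and \(\tilde{\sigma}^2=\frac{1}{n}\sum(x_k-\bar{x})^2\) both scores should vanish.

```python
import numpy as np

# A sketch: check that (mu_hat, s2_tilde) = (mean, (1/n) sum (x_k - mean)^2)
# zeroes both score functions derived above.
rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, 1_000)
n = len(x)
mu_hat = x.mean()
s2_tilde = np.mean((x - mu_hat)**2)

score_mu = np.sum(x - mu_hat) / s2_tilde
score_s2 = -n / (2 * s2_tilde) + np.sum((x - mu_hat)**2) / (2 * s2_tilde**2)
print(score_mu, score_s2)
```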

Intersection of two line segments in the XY plane

Intersection of line segments

The intersection \(G\) of segments \(\overline{AB}\) and \(\overline{CD}\) in the XY plane

$$ \begin{eqnarray} \overrightarrow{u}&=&\overrightarrow{AB}=(u_x,u_y)=(x_b-x_a, y_b-y_a)\;\cdots\;initial\;point\;A(x_a,y_a),\;terminal\;point\;B(x_b,y_b) \\\overrightarrow{v}&=&\overrightarrow{CD}=(v_x,v_y)=(x_d-x_c, y_d-y_c)\;\cdots\;initial\;point\;C(x_c,y_c),\;terminal\;point\;D(x_d,y_d) \\\overrightarrow{w}&=&\overrightarrow{AC}=(w_x,w_y)=(x_c-x_a, y_c-y_a)\;\cdots\;initial\;point\;A,\;terminal\;point\;C \\\overrightarrow{u}\times\overrightarrow{v}&=&u_xv_y-v_xu_y=(x_b-x_a)(y_d-y_c)-(x_d-x_c)(y_b-y_a) \\\sin{\left(\theta\right)}&=&\frac{\overrightarrow{u}\times\overrightarrow{v}}{\left|\overrightarrow{u}\right|\left|\overrightarrow{v}\right|} \\&&\;\cdots\;\overrightarrow{\alpha}\times\overrightarrow{\beta}=component\;perpendicular\;to\;the\;XY\;plane=\left|\overrightarrow{\alpha}\right|\left|\overrightarrow{\beta}\right|\sin{\left(\theta\right)} \\&&\;\cdots\;strictly,\;the\;cross\;product\;is\;a\;vector\;perpendicular\;to\;the\;two\;operand\;vectors, \\&&\;\cdots\;and\;\sin{(\theta)}\;would\;be\;computed\;from\;its\;length\;(absolute\;value); \\&&\;\cdots\;here\;the\;result\;has\;only\;the\;component\;perpendicular\;to\;the\;XY\;plane, \\&&\;\cdots\;so\;treating\;that\;component\;as\;a\;signed\;scalar\;(rather\;than\;taking\;its\;length) \\&&\;\cdots\;allows\;\sin{(\theta)}\lt0\;to\;be\;handled. \\\sin{\left(-\theta\right)}&=&\frac{\overrightarrow{v}\times\overrightarrow{u}}{\left|\overrightarrow{v}\right|\left|\overrightarrow{u}\right|} \end{eqnarray} $$

The ratio \(\frac{\overline{AG}}{\overline{AB}}\) of the length \(\overline{AG}\) from point \(A\) to the intersection \(G\), to the segment length \(\overline{AB}\)

$$ \begin{eqnarray} \overline{AE}&=&\left|\overrightarrow{w}\right|\sin{\left(\pi-\phi\right)} \\&=&\left|\overrightarrow{w}\right|\sin{\left(\phi\right)} \;\cdots\;\sin{\left(\pi-x\right)}=\sin{\left(x\right)} \\&=&\left|\overrightarrow{w}\right|\frac{\overrightarrow{w}\times\overrightarrow{v}}{\left|\overrightarrow{w}\right|\left|\overrightarrow{v}\right|} \\&&\;\cdots\;\overrightarrow{\alpha}\times\overrightarrow{\beta}=\left|\overrightarrow{\alpha}\right|\left|\overrightarrow{\beta}\right|\sin{\left(x\right)} ,\;\sin{\left(x\right)}=\frac{\overrightarrow{\alpha}\times\overrightarrow{\beta}}{\left|\overrightarrow{\alpha}\right|\left|\overrightarrow{\beta}\right|} \\&&\;\cdots\;as\;above,\;the\;cross\;product\;is\;treated\;as\;a\;signed\;scalar. \\&=&\frac{\overrightarrow{w}\times\overrightarrow{v}}{\left|\overrightarrow{v}\right|} \\\overline{AE}&=&\overline{AG}\sin{\left(\pi-\theta\right)} \\&=&\overline{AG}\sin{\left(\theta\right)} \;\cdots\;\sin{\left(\pi-x\right)}=\sin{\left(x\right)} \\\overline{AG}&=&\frac{\overline{AE}}{\sin{\left(\theta\right)}} \\&=&\frac{\overrightarrow{w}\times\overrightarrow{v}}{\left|\overrightarrow{v}\right|}\frac{1}{\sin{\left(\theta\right)}} \\&=&\frac{\overrightarrow{w}\times\overrightarrow{v}}{\left|\overrightarrow{v}\right|}\frac{\left|\overrightarrow{u}\right|\left|\overrightarrow{v}\right|}{\overrightarrow{u}\times\overrightarrow{v}} \\&=&\frac{\overrightarrow{w}\times\overrightarrow{v}}{\overrightarrow{u}\times\overrightarrow{v}}\left|\overrightarrow{u}\right| \\\frac{\overline{AG}}{\overline{AB}} &=&\frac{\overline{AG}}{\left|\overrightarrow{u}\right|} \\&=&\frac{\overrightarrow{w}\times\overrightarrow{v}}{\overrightarrow{u}\times\overrightarrow{v}}\left|\overrightarrow{u}\right|\frac{1}{\left|\overrightarrow{u}\right|} \\&=&\frac{\overrightarrow{w}\times\overrightarrow{v}}{\overrightarrow{u}\times\overrightarrow{v}} \\&=&\frac{w_xv_y-v_xw_y}{u_xv_y-v_xu_y} \\&=&\frac{(x_c-x_a)(y_d-y_c)-(x_d-x_c)(y_c-y_a)}{(x_b-x_a)(y_d-y_c)-(x_d-x_c)(y_b-y_a)} \end{eqnarray} $$

The ratio \(\frac{\overline{CG}}{\overline{CD}}\) of the length \(\overline{CG}\) from point \(C\) to the intersection \(G\), to the segment length \(\overline{CD}\)

$$ \begin{eqnarray} \overline{CF}&=&\left|-\overrightarrow{w}\right|\sin{\left(\pi-\psi\right)} \\&=&\left|-\overrightarrow{w}\right|\sin{\left(\psi\right)} \;\cdots\;\sin{\left(\pi-x\right)}=\sin{\left(x\right)} \\&=&\left|-\overrightarrow{w}\right|\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\left|-\overrightarrow{w}\right|\left|\overrightarrow{u}\right|} \\&&\;\cdots\;\overrightarrow{\alpha}\times\overrightarrow{\beta}=\left|\overrightarrow{\alpha}\right|\left|\overrightarrow{\beta}\right|\sin{\left(x\right)} ,\;\sin{\left(x\right)}=\frac{\overrightarrow{\alpha}\times\overrightarrow{\beta}}{\left|\overrightarrow{\alpha}\right|\left|\overrightarrow{\beta}\right|} \\&&\;\cdots\;as\;above,\;the\;cross\;product\;is\;treated\;as\;a\;signed\;scalar. \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\left|\overrightarrow{u}\right|} \\\overline{CF}&=&\overline{CG}\sin{\left(-\left(\pi-\theta\right)\right)} \\&=&\overline{CG}\sin{\left(-\theta\right)} \;\cdots\;\sin{\left(-\left(\pi-x\right)\right)}=-\sin{\left(\pi-x\right)}=-\sin{\left(x\right)}=\sin{\left(-x\right)} \\\overline{CG}&=&\frac{\overline{CF}}{\sin{\left(-\theta\right)}} \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\left|\overrightarrow{u}\right|}\frac{1}{\sin{\left(-\theta\right)}} \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\left|\overrightarrow{u}\right|}\frac{\left|\overrightarrow{v}\right|\left|\overrightarrow{u}\right|}{\overrightarrow{v}\times\overrightarrow{u}} \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\overrightarrow{v}\times\overrightarrow{u}}\left|\overrightarrow{v}\right| \\\frac{\overline{CG}}{\overline{CD}} &=&\frac{\overline{CG}}{\left|\overrightarrow{v}\right|} \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\overrightarrow{v}\times\overrightarrow{u}}\left|\overrightarrow{v}\right|\frac{1}{\left|\overrightarrow{v}\right|} \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{\overrightarrow{v}\times\overrightarrow{u}} \\&=&\frac{-\overrightarrow{w}\times\overrightarrow{u}}{-\left(\overrightarrow{u}\times\overrightarrow{v}\right)} 
\;\cdots\;A\times B=-\left(B\times A\right) \\&=&\frac{\overrightarrow{w}\times\overrightarrow{u}}{\overrightarrow{u}\times\overrightarrow{v}} \\&=&\frac{w_xu_y-u_xw_y}{u_xv_y-v_xu_y} \\&=&\frac{(x_c-x_a)(y_b-y_a)-(x_b-x_a)(y_c-y_a)}{(x_b-x_a)(y_d-y_c)-(x_d-x_c)(y_b-y_a)} \end{eqnarray} $$

Existence of the intersection

Both \(\frac{\overline{CG}}{\overline{CD}}\) and \(\frac{\overline{AG}}{\overline{AB}}\) have \(\overrightarrow{u}\times\overrightarrow{v}\) in the denominator. This being \(0\) means \(\sin{\left(\theta\right)}=0\), i.e. the angle \(\theta\) between the segments is \(0\) or \(\pi\): \(\overline{AB}\) and \(\overline{CD}\) are parallel and there is no intersection \(G\). To avoid division by \(0\), this case must be detected as an exception before computing \(\frac{\overline{CG}}{\overline{CD}}\) and \(\frac{\overline{AG}}{\overline{AB}}\).
After that, compute \(\frac{\overline{CG}}{\overline{CD}}\) and \(\frac{\overline{AG}}{\overline{AB}}\): when both are greater than \(0\) and at most \(1\), the intersection \(G\) lies on both segments; otherwise \(G\) falls outside a segment.
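The procedure above can be sketched in code (a sketch I am adding; all names are mine, and endpoint handling here is inclusive on both ends, a slight variant of the \((0,1]\) condition in the text):

```python
# A sketch of the method above: intersect segments AB and CD using the
# signed cross products, checking for parallel segments first.
def cross(ux, uy, vx, vy):
    return ux * vy - vx * uy

def segment_intersection(a, b, c, d):
    """Return the intersection point of segments AB and CD, or None."""
    ux, uy = b[0] - a[0], b[1] - a[1]       # u = AB
    vx, vy = d[0] - c[0], d[1] - c[1]       # v = CD
    wx, wy = c[0] - a[0], c[1] - a[1]       # w = AC
    denom = cross(ux, uy, vx, vy)           # u x v = |u||v| sin(theta)
    if denom == 0:                          # sin(theta) = 0: parallel, no G
        return None
    r = cross(wx, wy, vx, vy) / denom       # AG / AB
    s = cross(wx, wy, ux, uy) / denom       # CG / CD
    if 0 <= r <= 1 and 0 <= s <= 1:         # G lies on both segments
        return (a[0] + r * ux, a[1] + r * uy)
    return None                             # G falls outside a segment

g = segment_intersection((0.0, 0.0), (2.0, 2.0), (0.0, 2.0), (2.0, 0.0))
print(g)  # (1.0, 1.0)
```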

Expected value of the residual sum of squares in simple regression

Expected value of the residual sum of squares in simple regression

Sum of squared errors

$$ \begin{eqnarray} \sum_{i=1}^{n} \epsilon_i^2 &=&\sum_{i=1}^{n} \left(y_i-\alpha-\beta x_i\right)^2 \;\cdots\;y_i=\alpha+\beta x_i+\epsilon_i,\;the\;errors\;\epsilon_i\overset{iid}{\sim} N(0,\sigma^2)\;are\;assumed\;mutually\;independent \\&=&\sum_{i=1}^{n} \left( y_i -\alpha -\beta x_i \color{red}{ +\bar{y} -\hat{\alpha} -\hat{\beta}\bar{x} } \color{green}{ +\hat{\beta}x_i -\hat{\beta}x_i } \color{blue}{ +\beta\bar{x} -\beta\bar{x} } \right)^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\bar{y}-\hat{\alpha}-\hat{\beta}\bar{x}=0} \\&=&\sum_{i=1}^{n} \left( y_i-\hat{\alpha}-\hat{\beta}x_i +\bar{y}-\alpha-\beta\bar{x} +\hat{\beta}x_i-\hat{\beta}\bar{x}-\beta x_i+\beta\bar{x} \right)^2 \\&=&\sum_{i=1}^{n} \left[ \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right) +\left(\bar{y}-\alpha-\beta\bar{x}\right) +\left(\hat{\beta}x_i-\hat{\beta}\bar{x}-\beta x_i+\beta\bar{x}\right) \right]^2 \\&=&\sum_{i=1}^{n} \left[ \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right) +\left(\bar{y}-\alpha-\beta\bar{x}\right) +\left\{\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right)\right\} \right]^2 \\&=&\sum_{i=1}^{n} \left[ \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)^2 +\left(\bar{y}-\alpha-\beta\bar{x}\right)^2 +\left\{\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right)\right\}^2 +2\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\bar{y}-\alpha-\beta\bar{x}\right) +2\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right) +2\left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right) \right] \\&&\;\cdots\;(A+B+C)^2=A^2+B^2+C^2+2AB+2AC+2BC \\&=&\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)^2 +\sum_{i=1}^{n} \left(\bar{y}-\alpha-\beta\bar{x}\right)^2 +\sum_{i=1}^{n} \left(\hat{\beta}-\beta\right)^2\left(x_i-\bar{x}\right)^2 \\&&+2\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\bar{y}-\alpha-\beta\bar{x}\right) +2\sum_{i=1}^{n} 
\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right) +2\sum_{i=1}^{n} \left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right) \\&=&\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)^2 +\sum_{i=1}^{n} \left(\bar{y}-\alpha-\beta\bar{x}\right)^2 +\left(\hat{\beta}-\beta\right)^2 \sum_{i=1}^{n} \left(x_i-\bar{x}\right)^2 +2\cdot0+2\cdot0+2\cdot0 \\&&\;\cdots\;\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\bar{y}-\alpha-\beta\bar{x}\right)=0\;(shown\;below) \\&&\;\cdots\;\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right)=0\;(shown\;below) \\&&\;\cdots\;\sum_{i=1}^{n} \left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right)=0\;(shown\;below) \\&=&\sum_{i=1}^{n}\left(y_i-\hat{y_i}\right)^2 +n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2 +\left(\hat{\beta}-\beta\right)^2S_{xx} \;\cdots\;\hat{y_i}=\hat{\alpha}+\hat{\beta}x_i,\; S_{xx}=\sum_{i=1}^{n} \left(x_i-\bar{x}\right)^2 \\&=&\sum_{i=1}^{n}e_i^2 +n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2 +\left(\hat{\beta}-\beta\right)^2S_{xx} \;\cdots\;e_i=y_i-\hat{y_i} \end{eqnarray} $$

Residual sum of squares \(S_e\)

Swapping \(\sum_{i=1}^{n}\epsilon_i^2\) and \(\sum_{i=1}^{n}e_i^2\) in the preceding equation (moving the other terms across) gives the following. $$ \begin{eqnarray} \sum_{i=1}^{n}e_i^2 &=&\sum_{i=1}^{n}\left(y_i-\hat{y_i}\right)^2 \;\cdots\;residual\;sum\;of\;squares\;(S_e) \\&=&\sum_{i=1}^{n} \epsilon_i^2 -n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2 -\left(\hat{\beta}-\beta\right)^2S_{xx} \end{eqnarray} $$

Expected value of the residual sum of squares \(\mathrm{E}\left[S_e\right]\)

$$ \begin{eqnarray} \mathrm{E}\left[\sum_{i=1}^{n} e_i^2\right] &=&\mathrm{E}\left[\sum_{i=1}^{n} \epsilon_i^2 -n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2 -\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \\&=&\mathrm{E}\left[\sum_{i=1}^{n} \epsilon_i^2\right] -\mathrm{E}\left[n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[X+Y\right]=\mathrm{E}\left[X\right]+\mathrm{E}\left[Y\right]} \\&=&\sum_{i=1}^{n}\mathrm{E}\left[ \epsilon_i^2\right] -\mathrm{E}\left[n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \\&&\;\cdots\;\mathrm{E}\left[\sum_{i=1}^n X_i\right]=\mathrm{E}\left[X_1+\cdots+X_n\right]=\mathrm{E}\left[X_1\right]+\cdots+\mathrm{E}\left[X_n\right]=\sum_{i=1}^n\mathrm{E}\left[X_i\right] \\&=&\sum_{i=1}^{n}\left(\mathrm{V}\left[ \epsilon_i \right]+\mathrm{E}\left[ \epsilon_i \right]^2\right) -\mathrm{E}\left[n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \\&&\;\cdots\;\mathrm{V}\left[ X \right]=\mathrm{E}\left[ \left(X-\mathrm{E}\left[ X \right]\right)^2 \right]=\mathrm{E}\left[ X^2 \right]-\mathrm{E}\left[ X \right]^2,\;\mathrm{E}\left[ X^2 \right]=\mathrm{V}\left[ X \right]+\mathrm{E}\left[ X \right]^2 \\&=&\sum_{i=1}^{n}\left(\sigma^2+0^2\right) -\mathrm{E}\left[n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \;\cdots\;\epsilon_i\overset{iid}{\sim} N(0,\sigma^2),\;\mathrm{E}\left[ \epsilon_i \right]=0,\;\mathrm{V}\left[ \epsilon_i \right]=\sigma^2 \\&=&\sum_{i=1}^{n}\sigma^2 -\mathrm{E}\left[n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \\&=&\sigma^2\sum_{i=1}^{n}1 
-\mathrm{E}\left[n\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2S_{xx}\right] \\&=&n\sigma^2 -n\mathrm{E}\left[\left(\bar{y}-\alpha-\beta\bar{x}\right)^2\right] -S_{xx}\mathrm{E}\left[\left(\hat{\beta}-\beta\right)^2\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{\mathrm{E}\left[cX\right]=c\mathrm{E}\left[X\right]} \\&=&n\sigma^2 -n\mathrm{V}\left[\bar{y}\right] -S_{xx}\mathrm{V}\left[\hat{\beta}\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-variance.html}{\mathrm{V}\left[X+t\right]=\mathrm{V}\left[X\right]\;(t:\;constant)} \\&=&n\sigma^2 -n\frac{\sigma^2}{n} -S_{xx}\mathrm{V}\left[\frac{S_{xy}}{S_{xx}}\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/specimen-random-variable_3.html}{\mathrm{V}\left[\bar{y}\right]=\frac{\sigma^2}{n}} ,\;\href{https://shikitenkai.blogspot.com/2020/03/blog-post.html}{\hat{\beta}=\frac{S_{xy}}{S_{xx}}} \\&=&n\sigma^2 -n\frac{\sigma^2}{n} -S_{xx}\frac{1}{S_{xx}^2}\mathrm{V}\left[S_{xy}\right] \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-variance.html}{\mathrm{V}\left[cX\right]=c^2\mathrm{V}\left[X\right]\;(c:\;constant)} \\&=&n\sigma^2 -\sigma^2 -\frac{1}{S_{xx}}\sigma^2S_{xx} \;\cdots\;\href{https://shikitenkai.blogspot.com/2020/08/2variancecovariance.html}{\mathrm{V}\left[S_{xy}\right]=\sigma^2S_{xx}} \\&=&n\sigma^2 -\sigma^2 -\sigma^2 \\&=&\left(n-2\right)\sigma^2 \;\cdots\;expected\;value\;of\;the\;residual\;sum\;of\;squares\;(S_e) \end{eqnarray} $$ Hence, dividing the residual sum of squares by \((n-2)\) gives an unbiased estimator of \(\sigma^2\). 
$$ \begin{eqnarray} \mathrm{E}\left[\sum_{i=1}^{n} e_i^2\right]&=&\left(n-2\right)\sigma^2 \\\frac{1}{n-2}\mathrm{E}\left[\sum_{i=1}^{n} e_i^2\right]&=&\sigma^2 \\\mathrm{E}\left[\frac{1}{\left(n-2\right)}\sum_{i=1}^{n} e_i^2\right]&=&\sigma^2 \;\cdots\;\href{https://shikitenkai.blogspot.com/2019/06/discrete-random-variable-expected-value.html}{c\mathrm{E}\left[X\right]=\mathrm{E}\left[cX\right]} \\\mathrm{E}\left[s^2\right]&=&\sigma^2 \;\cdots\;s^2=\frac{1}{\left(n-2\right)}\sum_{i=1}^{n} e_i^2\text{ is an unbiased estimator of }\sigma^2 \end{eqnarray} $$
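The unbiasedness \(\mathrm{E}\left[s^2\right]=\sigma^2\) can be checked by simulation. The following is a minimal sketch (not part of the original derivation; the parameter values, sample size, and trial count below are arbitrary assumptions):

```python
import random

# Monte Carlo check: for y_i = alpha + beta*x_i + eps_i with
# eps_i ~ N(0, sigma^2), the average of s^2 = S_e / (n - 2) over many
# simulated samples should be close to sigma^2 (dividing by n instead
# would be biased downward).
random.seed(0)
alpha, beta, sigma = 2.0, 0.5, 3.0   # arbitrary illustrative values
n, trials = 10, 20000

x = [float(i) for i in range(n)]
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)

s2_values = []
for _ in range(trials):
    y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * yi for xi, yi in zip(x, y))
    b_hat = sxy / sxx                 # beta-hat = S_xy / S_xx
    a_hat = ybar - b_hat * xbar       # alpha-hat = ybar - beta-hat * xbar
    se = sum((yi - a_hat - b_hat * xi) ** 2 for xi, yi in zip(x, y))
    s2_values.append(se / (n - 2))    # s^2 = S_e / (n - 2)

mean_s2 = sum(s2_values) / trials
print(mean_s2)  # close to sigma^2 = 9
```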

The standardized residual \(e_{is}\) and its distribution

$$ \begin{eqnarray} \\e_{is}&\equiv&\frac{e_i}{s} \;\cdots\;\text{standardized residual},\;s\equiv\sqrt{\frac{1}{\left(n-2\right)}\sum_{i=1}^{n} e_i^2} \\e_i&=&y_i-\hat{y_i}=(\alpha+\beta x_i+\epsilon_i)-(\hat{\alpha}+\hat{\beta}x_i) \\&=&(\alpha-\hat{\alpha})+(\beta-\hat{\beta})x_i+\epsilon_i \end{eqnarray} $$ When \(n\) is sufficiently large, \(\hat{\alpha},\hat{\beta}\) converge in probability to \(\alpha,\beta\) (\(\hat{\alpha},\hat{\beta}\) are unbiased estimators whose expected values are \(\alpha,\beta\)). Hence $$ \begin{eqnarray} e_i&\sim&\epsilon_i \;\cdots\;(\alpha-\hat{\alpha})\text{ and }(\beta-\hat{\beta})\text{ become nearly }0\text{, so the first and second terms can be ignored.} \end{eqnarray} $$ Moreover, $$ \begin{eqnarray} e_{is}&=&\frac{e_i}{s} \end{eqnarray} $$ can be regarded as following the same distribution as $$ \begin{eqnarray} \epsilon_{is}&\equiv&\frac{\epsilon_i}{\sigma} \;\cdots\;\text{for sufficiently large }n,\;s\text{ converges in probability to }\sigma\;(s^2\text{ is an unbiased estimator whose expected value is }\sigma^2). \end{eqnarray} $$
Since \(\epsilon_{i}\sim N(0,\sigma^2)\), \(\epsilon_{is}\) follows \(N(0, 1^2)\), and so \(e_{is}\) can be regarded as approximately standard normal.
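This can be illustrated numerically. A minimal sketch under an arbitrary assumed setup (all parameter values below are made up): for a large simulated sample, roughly 95% of the standardized residuals should fall within \(\pm 1.96\), as expected for \(N(0,1)\) draws.

```python
import random

# Simulate one large sample from y_i = alpha + beta*x_i + eps_i, fit OLS,
# and standardize the residuals by s = sqrt(S_e / (n - 2)).
random.seed(1)
alpha, beta, sigma = 1.0, 2.0, 1.5   # arbitrary illustrative values
n = 5000

x = [random.uniform(0.0, 10.0) for _ in range(n)]
y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b_hat = sxy / sxx
a_hat = ybar - b_hat * xbar

e = [yi - a_hat - b_hat * xi for xi, yi in zip(x, y)]
s = (sum(ei ** 2 for ei in e) / (n - 2)) ** 0.5
e_std = [ei / s for ei in e]          # standardized residuals e_i / s

# Residuals sum to (numerically) zero by construction; about 95% of the
# standardized residuals should lie within +/- 1.96.
within = sum(1 for v in e_std if abs(v) <= 1.96) / n
print(within)
```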

Intermediate expressions in the calculation of the sum of squared errors

$$ \begin{eqnarray} \sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\bar{y}-\alpha-\beta\bar{x}\right) &=&\left(\bar{y}-\alpha-\beta\bar{x}\right)\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right) \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right) \left(\sum_{i=1}^{n}y_i-\hat{\alpha}\sum_{i=1}^{n}1-\hat{\beta}\sum_{i=1}^{n}x_i\right) \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right) \left(n\bar{y}-n\hat{\alpha}-n\hat{\beta}\bar{x}\right) \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right) n\left(\bar{y}-\hat{\alpha}-\hat{\beta}\bar{x}\right) \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right) n\cdot0 \;\cdots\;\bar{y}-\hat{\alpha}-\hat{\beta}\bar{x}=0 \\&=&0 \\ \sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right) &=&\left(\hat{\beta}-\beta\right)\sum_{i=1}^{n} \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\left(x_i-\bar{x}\right) \\&=&\left(\hat{\beta}-\beta\right)\sum_{i=1}^{n} \left\{ \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - \left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\bar{x} \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)\bar{x} \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - \bar{x}\sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right) \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - \bar{x}\left(\sum_{i=1}^{n}y_i-\sum_{i=1}^{n}\hat{\alpha}-\sum_{i=1}^{n}\hat{\beta}x_i\right) \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - \bar{x}\left(n\bar{y}-n\hat{\alpha}-\hat{\beta}n\bar{x}\right) \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - 
\bar{x}n\left(\bar{y}-\hat{\alpha}-\hat{\beta}\bar{x}\right) \right\} \\&=&\left(\hat{\beta}-\beta\right)\left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - \bar{x}n\cdot0 \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}\left(y_i-\hat{\alpha}-\hat{\beta}x_i\right)x_i - 0 \right\} \\&=&\left(\hat{\beta}-\beta\right)\sum_{i=1}^{n}\left(y_ix_i-\hat{\alpha}x_i-\hat{\beta}x_i^2\right) \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}y_ix_i-\sum_{i=1}^{n}\hat{\alpha}x_i-\sum_{i=1}^{n}\hat{\beta}x_i^2 \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}y_ix_i-\hat{\alpha}\sum_{i=1}^{n}x_i-\hat{\beta}\sum_{i=1}^{n}x_i^2 \right\} \\&=&\left(\hat{\beta}-\beta\right) \left\{ \sum_{i=1}^{n}y_ix_i-\hat{\alpha}n\bar{x}-\hat{\beta}\sum_{i=1}^{n}x_i^2 \right\} \;\cdots\;\sum_{i=1}^{n}y_ix_i-\hat{\alpha}n\bar{x}-\hat{\beta}\sum_{i=1}^{n}x_i^2=0\;(\text{one of the normal equations; }\hat{\alpha},\hat{\beta}\text{ are chosen so that this equation holds}) \\&=&\left(\hat{\beta}-\beta\right)\cdot0 \\&=&0 \\ \sum_{i=1}^{n} \left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right)\left(x_i-\bar{x}\right) &=&\left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right) \sum_{i=1}^{n} \left(x_i-\bar{x}\right) \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right) \left(\sum_{i=1}^{n} x_i-\bar{x}\sum_{i=1}^{n} 1\right) \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right) \left(n\bar{x}-n\bar{x}\right) \;\cdots\;\sum_{i=1}^{n} x_i=n\bar{x} \\&=&\left(\bar{y}-\alpha-\beta\bar{x}\right)\left(\hat{\beta}-\beta\right)\cdot0 \\&=&0 \end{eqnarray} $$
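Because these three cross terms vanish for any sample (they rely only on the identities \(\bar{y}-\hat{\alpha}-\hat{\beta}\bar{x}=0\) and the normal equations satisfied by \(\hat{\alpha},\hat{\beta}\)), they can be verified numerically. A minimal sketch (the model parameters below are arbitrary assumptions):

```python
import random

# Fit OLS on one simulated sample and evaluate the three cross terms;
# each should be zero up to floating-point rounding.
random.seed(2)
alpha, beta, sigma = 1.0, -0.7, 2.0  # arbitrary illustrative values
n = 30

x = [random.uniform(-5.0, 5.0) for _ in range(n)]
y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b_hat = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
a_hat = ybar - b_hat * xbar

c = ybar - alpha - beta * xbar        # (ybar - alpha - beta*xbar)
t1 = sum((yi - a_hat - b_hat * xi) * c for xi, yi in zip(x, y))
t2 = sum((yi - a_hat - b_hat * xi) * (b_hat - beta) * (xi - xbar)
         for xi, yi in zip(x, y))
t3 = sum(c * (b_hat - beta) * (xi - xbar) for xi in x)
print(t1, t2, t3)  # all (numerically) zero
```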