Proof: after a finite number of iterations, the algorithm produces a separating hyperplane (and the corresponding perceptron model) that classifies every point of a linearly separable training set correctly.

When the training set is not linearly separable, the algorithm does not converge.

1. Assumptions:

  1. The training data are linearly separable
  2. The initial values are $(w_0, b_0) = \vec{0}$

2. Definitions used in the proof

(1) Let
$$ \begin{aligned} \hat w &= (w^T, b)^T && \text{(vectors are taken to be column vectors by default)} \\ \hat x &= (x, 1) \end{aligned} $$
so that

$$ \hat w \cdot \hat x = w \cdot x + b $$
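
To see the augmented notation at work, here is a minimal numpy sketch; the concrete values of $w$, $b$ and $x$ are made up purely for illustration:

```python
import numpy as np

w = np.array([1.0, 1.0])      # original weight vector w
b = -3.0                      # original bias b
x = np.array([3.0, 3.0])      # an input point x

w_hat = np.append(w, b)       # augmented weights: ŵ = (wᵀ, b)ᵀ
x_hat = np.append(x, 1.0)     # augmented input:   x̂ = (x, 1)

# ŵ·x̂ equals w·x + b, so the bias is folded into the weight vector
assert np.isclose(w_hat @ x_hat, w @ x + b)
```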

(2) All training points are linearly separable
$\Rightarrow \exists$ a hyperplane that classifies every training point correctly.
Let this hyperplane be
$$ \begin{aligned} \hat w_{opt} \cdot \hat x = 0 \text{ with } ||\hat w_{opt}|| = 1 && (15) \end{aligned} $$
(3) Let
$$ \begin{aligned} \gamma = \min_i\{y_i(\hat w_{opt} \cdot \hat x_i)\} && (1) \end{aligned} $$
(4) Let $\hat w_k$ denote $\hat w$ after $k$ updates.
(5) Let
$$ \begin{aligned} R = \max_{1 \le i \le n}||\hat x_i|| && (10) \end{aligned} $$
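
Once the training data and $\hat w_{opt}$ are fixed, $\gamma$ and $R$ are just numbers. A minimal sketch of computing them, assuming a hypothetical three-point toy dataset and a hand-picked separating hyperplane (both are illustrative assumptions, not part of the proof):

```python
import numpy as np

# Hypothetical toy data: augmented points x̂_i = (x_i, 1) and labels y_i ∈ {+1, -1}
X_hat = np.array([[3.0, 3.0, 1.0],
                  [4.0, 3.0, 1.0],
                  [1.0, 1.0, 1.0]])
y = np.array([1, 1, -1])

# An assumed separating hyperplane ŵ_opt, normalized so that ||ŵ_opt|| = 1 (formula 15)
w_opt_hat = np.array([1.0, 1.0, -3.0])
w_opt_hat = w_opt_hat / np.linalg.norm(w_opt_hat)

gamma = np.min(y * (X_hat @ w_opt_hat))       # formula (1)
R = np.max(np.linalg.norm(X_hat, axis=1))     # formula (10)

assert gamma > 0                              # every point lies on its correct side, cf. formula (2)
print(gamma, R, (R / gamma) ** 2)             # (R/γ)² is the final bound on the number of updates k
```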

3. 证明过程

The hyperplane $\hat w_{opt} \cdot \hat x = 0$ classifies all of the data correctly
$\Rightarrow \forall (\hat x_i, y_i)$: $y_i$ and $\hat w_{opt} \cdot \hat x_i$ have the same sign, and neither is zero
$\Rightarrow \forall (\hat x_i, y_i)$: $y_i(\hat w_{opt} \cdot \hat x_i) > 0$
Combining this with formula (1):
$$ \begin{aligned} y_i(\hat w_{opt} \cdot \hat x_i) \ge \gamma > 0 && (2) \end{aligned} $$


Suppose the algorithm has performed $k$ updates. The $k$-th update was triggered by a sample point misclassified by the hyperplane $\hat w_{k-1} \cdot \hat x = 0$.
Let $(x_i, y_i)$ be this misclassified point. Then
$$ \begin{aligned} y_i(\hat w_{k-1} \cdot \hat x_i) \le 0 && (9) \end{aligned} $$
and the update rule is
$$ \begin{cases} w_k = w_{k-1} + \eta y_ix_i \\ b_k = b_{k-1} + \eta y_i \end{cases} \qquad (3) $$
Relating $\hat w_k$ to $\hat w_{k-1}$:
$$ \begin{aligned} \hat w_k = (w_k, b_k) &= (w_{k-1} + \eta y_ix_i,\; b_{k-1} + \eta y_i) \\ &= (w_{k-1}, b_{k-1}) + (\eta y_ix_i,\; \eta y_i) = \hat w_{k-1} + \eta y_i (x_i, 1) = \hat w_{k-1} + \eta y_i \hat x_i \end{aligned} $$
which gives:
$$ \begin{aligned} \hat w_k = \hat w_{k-1} + \eta y_i \hat x_i && (4) \end{aligned} $$
Note: deriving formula (4) from formula (3) is actually simple. I kept getting stuck because I treated $\eta y_i \hat x_i$ in formula (4) as a scalar and applied numpy's vector-plus-scalar broadcasting rule. In fact $\eta y_i \hat x_i$ is itself an $(n+1)$-dimensional vector, so formula (4) is ordinary vector addition.
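
To make the note concrete, here is a minimal numpy sketch of one update via formula (4); the misclassified point, its label and the learning rate are made-up example values:

```python
import numpy as np

eta = 1.0                                 # learning rate η
w_hat_prev = np.zeros(3)                  # ŵ_{k-1} = (w_{k-1}, b_{k-1}), here the zero initial value
x_hat_i = np.array([3.0, 3.0, 1.0])       # x̂_i = (x_i, 1), a hypothetical misclassified point
y_i = 1                                   # its label

# Formula (4): η·y_i·x̂_i is itself an (n+1)-dimensional vector,
# so this is ordinary element-wise vector addition, not scalar broadcasting.
w_hat_k = w_hat_prev + eta * y_i * x_hat_i

print(w_hat_k)                            # [3. 3. 1.] — the last component is the updated bias b_k
```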


Proof that $\hat w_k \cdot \hat w_{opt} \ge k \eta \gamma$:
$$ \begin{aligned} \hat w_k \cdot \hat w_{opt} &= (\hat w_{k-1} + \eta y_i \hat x_i) \cdot \hat w_{opt} && (5) \\ &= \hat w_{k-1} \cdot \hat w_{opt} + \eta y_i \hat x_i \cdot \hat w_{opt} \\ &\ge \hat w_{k-1} \cdot \hat w_{opt} + \eta \gamma && (6) \\ &\ge \hat w_{k-2} \cdot \hat w_{opt} + 2\eta \gamma \ge \cdots \\ &\ge \hat w_0 \cdot \hat w_{opt} + k\eta \gamma \ge k\eta \gamma && (7) \end{aligned} $$
Notes on the steps:

  1. Step (5): follows from formula (4)
  2. Step (6): follows from formula (2)
  3. Step (7): follows from the assumed initial value $(w_0, b_0) = \vec{0}$

Finally we obtain:
$$ \begin{aligned} \hat w_k \cdot \hat w_{opt} \ge k \eta \gamma && (8) \end{aligned} $$

Proof that $||\hat w_k||^2 \le k \eta^2R^2$:
$$ \begin{aligned} ||\hat w_k||^2 &= ||\hat w_{k-1}||^2 + 2\eta y_i\,\hat w_{k-1} \cdot \hat x_i + \eta^2 y_i^2 ||\hat x_i||^2 && (11) \\ &\le ||\hat w_{k-1}||^2 + 0 + \eta^2||\hat x_i||^2 && (12) \\ &\le ||\hat w_{k-1}||^2 + \eta^2R^2 && (13) \\ &\le ||\hat w_{k-2}||^2 + 2\eta^2R^2 \le \cdots \\ &\le ||\hat w_{0}||^2 + k\eta^2R^2 \le k\eta^2R^2 \end{aligned} $$
Notes on the steps:

  1. Step (11): expand $||\hat w_{k-1} + \eta y_i \hat x_i||^2$ using formula (4)
  2. Step (12): by formula (9) the second term is at most 0, and in the third term $y_i^2 = 1$
  3. Step (13): follows from formula (10)

Finally we obtain:
$$ \begin{aligned} ||\hat w_k||^2 \le k \eta^2R^2 && (14) \end{aligned} $$

$$ \begin{aligned} \text{Formula (8)} &\Rightarrow k\eta\gamma \le \hat w_k \cdot \hat w_{opt} \\ &\Rightarrow k^2\eta^2\gamma^2 \le ||\hat w_k||^2\,||\hat w_{opt}||^2 \\ &\Rightarrow k^2\eta^2\gamma^2 \le ||\hat w_k||^2 \le k\eta^2R^2 && (16) \\ &\Rightarrow k \le \left(\frac {R}{\gamma}\right)^2 \end{aligned} $$
Notes on the steps:

  1. Step (16): the second line uses the Cauchy–Schwarz inequality; then $||\hat w_{opt}|| = 1$ by formula (15), and $||\hat w_k||^2 \le k\eta^2R^2$ by formula (14)

Final conclusion:
$$ k \le \left(\frac {R}{\gamma}\right)^2 $$
which completes the proof.
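
As an illustration (not part of the proof), the sketch below runs the perceptron on the same hypothetical toy data used earlier and checks inequalities (8) and (14) after every update, as well as the final bound $k \le (R/\gamma)^2$; the dataset, $\hat w_{opt}$ and $\eta$ are all assumptions chosen for demonstration:

```python
import numpy as np

# Hypothetical toy data (augmented) and an assumed unit-norm separating hyperplane ŵ_opt
X_hat = np.array([[3.0, 3.0, 1.0],
                  [4.0, 3.0, 1.0],
                  [1.0, 1.0, 1.0]])
y = np.array([1, 1, -1])
w_opt_hat = np.array([1.0, 1.0, -3.0])
w_opt_hat = w_opt_hat / np.linalg.norm(w_opt_hat)   # ||ŵ_opt|| = 1, formula (15)

gamma = np.min(y * (X_hat @ w_opt_hat))             # formula (1)
R = np.max(np.linalg.norm(X_hat, axis=1))           # formula (10)

eta = 1.0
w_hat = np.zeros(3)                                 # ŵ_0 = 0
k = 0
updated = True
while updated:                                      # loop until a full pass makes no update
    updated = False
    for x_hat_i, y_i in zip(X_hat, y):
        if y_i * (w_hat @ x_hat_i) <= 0:            # misclassified point, formula (9)
            w_hat = w_hat + eta * y_i * x_hat_i     # update, formula (4)
            k += 1
            assert w_hat @ w_opt_hat >= k * eta * gamma - 1e-9     # formula (8)
            assert w_hat @ w_hat <= k * eta**2 * R**2 + 1e-9       # formula (14)
            updated = True

print(k, (R / gamma) ** 2)                          # number of updates k vs. the bound (R/γ)²
assert k <= (R / gamma) ** 2
```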
