• "date": "2020-04-29"
  • "title": "基于Adaline分类iris数据集"

Environment: PyCharm 2019.3 + Anaconda3 2020.02

一、Objectives

  1. To understand the implementation of Adaline;
  2. To train an Adaline model on the iris dataset with data standardization, and another model without data standardization, using features “petal length” and “petal width” and classes “Setosa” and “Versicolour”;
  3. To plot the data points and the decision hyperplane (not the decision region).

二、Procedure

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.colors import ListedColormap
from sklearn.preprocessing import StandardScaler

1. The AdalineGD class

class AdalineGD(object):
    """ADAptive LInear NEuron classifier trained with batch gradient descent."""
    def __init__(self, eta=0.01, n_iter=50):
        self.eta = eta          # learning rate
        self.n_iter = n_iter    # number of passes (epochs) over the training set

    def fit(self, X, y):
        # w_[0] is the bias unit; w_[1:] are the feature weights
        self.w_ = np.zeros(1 + X.shape[1])
        self.cost_ = []         # sum-of-squared-errors recorded per epoch
        for i in range(self.n_iter):
            output = self.net_input(X)
            errors = y - output
            # batch gradient-descent update computed over all samples at once
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            cost = (errors ** 2).sum() / 2.0
            self.cost_.append(cost)
        return self

    def net_input(self, X):
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def activation(self, X):
        # identity activation: Adaline learns on the continuous net input
        return self.net_input(X)

    def predict(self, X):
        # threshold the activation to obtain the class label
        return np.where(self.activation(X) >= 0.0, 1, -1)
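For reference, fit above implements batch gradient descent on the sum-of-squared-errors cost with an identity activation. In standard Adaline notation (w_0 is the bias, corresponding to w_[0] in the code):

$$
\phi(z) = z, \qquad z = \mathbf{w}^{T}\mathbf{x} + w_0
$$

$$
J(\mathbf{w}) = \frac{1}{2}\sum_{i}\bigl(y^{(i)} - \phi(z^{(i)})\bigr)^{2}, \qquad
\Delta\mathbf{w} = \eta \sum_{i}\bigl(y^{(i)} - \phi(z^{(i)})\bigr)\,\mathbf{x}^{(i)}, \qquad
\Delta w_0 = \eta \sum_{i}\bigl(y^{(i)} - \phi(z^{(i)})\bigr)
$$

These three expressions correspond line for line to cost, the update of w_[1:], and the update of w_[0] in fit.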

2. Decision-region visualization function

def plot_decision_regions(X, y, classifier, resolution=0.02):
    # marker and color setup for up to five classes
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])
    # build a grid that covers the feature space at the given resolution
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    # predict the class of every grid point, then reshape back to the grid
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())
    # overlay the training samples, one marker/color per class
    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8,
                    c=cmap(idx), marker=markers[idx], label=cl)
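A minimal shape check (toy values, not from the experiment) of the meshgrid / ravel / reshape pattern used above, to make the data flow explicit:

import numpy as np
xx1, xx2 = np.meshgrid(np.arange(0, 1, 0.5), np.arange(0, 1, 0.5))
# xx1 and xx2 are both 2x2; stacking their raveled copies gives one row per grid point
grid = np.array([xx1.ravel(), xx2.ravel()]).T
print(grid.shape)   # (4, 2) -> ready for classifier.predict, then reshaped back to (2, 2)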

3. Visualizing the iris data points

df = pd.read_csv('iris.data', header=None)
df.tail()
# the first 100 rows cover the two classes: rows 0-49 Iris-setosa, rows 50-99 Iris-versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
# columns 0 and 2 are sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
plt.scatter(X[:50, 0], X[:50, 1], color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1], color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.legend(loc='upper left')
plt.show()
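If iris.data is not available locally, the same file can be read straight from the UCI repository (assuming network access; the URL below is the commonly used UCI path and may change):

url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
df = pd.read_csv(url, header=None)   # same five columns: sepal length, sepal width, petal length, petal width, class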

4. Observing the two hyperparameters: learning rate and number of iterations

With a learning rate of 0.01, the cost curve does not converge. With a learning rate of 0.0001 and a larger number of iterations, the cost curve converges, with the inflection point appearing around epoch 70.

fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
# eta = 0.01: the cost grows, so it is plotted on a log scale
ada0 = AdalineGD(n_iter=50, eta=0.01).fit(X, y)
ax[0].plot(range(1, len(ada0.cost_) + 1), np.log10(ada0.cost_), marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error)')
ax[0].set_title('Adaline - Learning rate 0.01')

# eta = 0.0001, 50 epochs: the cost decreases but has not yet flattened
ada1 = AdalineGD(n_iter=50, eta=0.0001).fit(X, y)
ax[1].plot(range(1, len(ada1.cost_) + 1), ada1.cost_, marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Sum-squared-error')
ax[1].set_title('Adaline - Learning rate 0.0001')

# eta = 0.0001, 200 epochs: the cost curve converges
ada2 = AdalineGD(n_iter=200, eta=0.0001).fit(X, y)
ax[2].plot(range(1, len(ada2.cost_) + 1), ada2.cost_, marker='o')
ax[2].set_xlabel('Epochs')
ax[2].set_ylabel('Sum-squared-error')
ax[2].set_title('Adaline - Learning rate 0.0001')
plt.show()
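As a quick numeric check of the observation above (a sketch, assuming the ada0/ada1/ada2 objects just fitted; the printed values depend on the data and are not reproduced here), the first and last cost of each run show whether it diverged or converged:

for name, ada in [('eta=0.01', ada0), ('eta=0.0001, 50 epochs', ada1), ('eta=0.0001, 200 epochs', ada2)]:
    # a growing cost means the learning rate overshoots the minimum; a shrinking one means convergence
    print(name, 'first cost:', ada.cost_[0], 'last cost:', ada.cost_[-1])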

5. Classification result with learning rate 0.0001 and 200 iterations

plot_decision_regions(X, y, classifier=ada2)
plt.title('Adaline - Gradient Descent')
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.legend(loc='upper left')
plt.show()
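Objective 3 asks for the decision hyperplane itself rather than the filled regions. A minimal sketch that derives the boundary line from the learned weights of ada2 (the set of points where the net input is zero, i.e. w0 + w1*x1 + w2*x2 = 0):

w = ada2.w_
x1_vals = np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 100)
x2_vals = -(w[0] + w[1] * x1_vals) / w[2]   # solve w0 + w1*x1 + w2*x2 = 0 for x2
plt.scatter(X[:50, 0], X[:50, 1], color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1], color='blue', marker='x', label='versicolor')
plt.plot(x1_vals, x2_vals, 'k--', label='decision hyperplane')
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.legend(loc='upper left')
plt.show()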

6. After adding data standardization, training with a learning rate of 0.01 and 15 iterations also yields a converging cost curve.

# standardize both features to zero mean and unit variance
sc = StandardScaler().fit(X)
X_std = sc.transform(X)
# with standardized inputs, the much larger learning rate 0.01 converges within 15 epochs
ada = AdalineGD(n_iter=15, eta=0.01).fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.show()
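For reference, the transformation applied by StandardScaler here is equivalent to subtracting each feature's mean and dividing by its standard deviation; a minimal check (assuming X and sc from above):

X_std_manual = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(X_std_manual, sc.transform(X)))   # expected: True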

三、Conclusions

The smaller the learning rate and the larger the number of iterations, the easier it is to obtain a converging cost curve. Standardizing the data puts the features on a common scale, so a larger learning rate can be used and training time and resources are saved.