PyTorch v1.4 Neural Network for the Iris Dataset

PyTorch is a neural network library that can use either CPU or GPU processors. As I write this, the latest version of PyTorch is v1.4 which was released in January 2020. I figured I’d take v1.4 out for a test drive to see if my old v1.2 code still works. Result: yes, my old code still works with the 1.4 version of PyTorch.

I immediately ran into a minor problem when trying to install v1.4 of PyTorch. I use pip (rather than conda) as my Python package manager. I prefer to install my Python packages manually from their .whl files. The pytorch.org Web page used to give a link to individual .whl files but the latest Web page gives a pip install command instead.

After a bit of searching, I located the individual .whl files at https://download.pytorch.org/whl/torch_stable.html.

Most of my dev machines run Windows and have a CPU only (no GPU), or older GPUs that don't support the latest GPU builds of PyTorch. I am currently using Python version 3.6.5 (via Anaconda version 5.2.0). So I downloaded this file to my local machine: torch-1.4.0+cpu-cp36-cp36m-win_amd64.whl. As I write this, I'm reminded that versioning compatibility in the Python world is still a huge issue, even for experienced people, and especially for people new to Python and PyTorch.

I uninstalled PyTorch v1.2 using the shell command "pip uninstall torch". Then I installed v1.4 using the command "pip install (the-whl-file)". I got an error message of "distributed 1.21.8 requires msgpack, which is not installed", which I ignored. I assume this has something to do with Anaconda.
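A quick post-install sanity check (my own sketch, not part of the original install steps) confirms which build pip actually picked up:

```python
import torch as T

# the version string encodes the build variant
print(T.__version__)   # e.g. "1.4.0+cpu" for the CPU-only wheel

# for a CPU-only build this is None; a GPU build reports its CUDA version
print(T.version.cuda)
```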

In all my previous PyTorch program investigations, I simply ignored the “device” issue. My programs just magically worked. I decided I’d explicitly specify the device for each Tensor and Module object. This is a big topic but briefly, when you create a Tensor object, the fundamental data type of PyTorch, you can specify whether it should be processed by a CPU or a GPU. For example:

import torch as T
device = T.device("cpu")
. . .
X = T.Tensor(data_x[i].reshape((1,n))).to(device)
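The demo hard-codes "cpu", but a common pattern (an addition on my part, not in the original script) is to fall back gracefully when no GPU is present:

```python
import torch as T

# use a GPU if one is available, otherwise fall back to the CPU
device = T.device("cuda" if T.cuda.is_available() else "cpu")

# any tensor can then be moved to the selected device
x = T.tensor([[5.1, 3.5, 1.4, 0.2]], dtype=T.float32).to(device)
print(x.device)
```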

So I went through my old Iris example script and added explicit to(device) directives. Unfortunately, there were a lot of statements to modify, and even if I missed some, my script would still work on a CPU-only machine. The only way to know for sure would be to change the device to GPU and run the script on a machine with a GPU (which I don't have right now).
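One way to audit for missed to(device) calls on a CPU-only machine is to inspect the device attribute that every parameter and tensor carries. A minimal sketch, using a stand-in Linear layer rather than the full Iris network:

```python
import torch as T

device = T.device("cpu")
net = T.nn.Linear(4, 3).to(device)  # stand-in for the Iris Net

# every parameter records which device it lives on
for name, p in net.named_parameters():
    print(name, p.device)  # weight cpu, bias cpu
```

The same check works on activations: a tensor created on the wrong device shows up immediately here, whereas a missed .to(device) is otherwise silent on a CPU-only box.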

Anyway, the moral of the story is that working with PyTorch is very difficult. PyTorch knowledge isn’t something like knowledge of batch files where you can pick it up easily as needed. Working with PyTorch is essentially a full-time job.

Here's my (possibly buggy on a GPU) Iris program.

# iris_nn.py
# PyTorch 1.4.0 Anaconda3 5.2.0 (Python 3.6.5)
# CPU, Windows, no dropout

import numpy as np
import torch as T
device = T.device("cpu")  # apply to Tensor or Module

# -----------------------------------------------------------

class Batcher:
  def __init__(self, num_items, batch_size, seed=0):
    self.indices = np.arange(num_items)
    self.num_items = num_items
    self.batch_size = batch_size
    self.rnd = np.random.RandomState(seed)
    self.rnd.shuffle(self.indices)
    self.ptr = 0

  def __iter__(self):
    return self

  def __next__(self):
    if self.ptr + self.batch_size > self.num_items:
      self.rnd.shuffle(self.indices)
      self.ptr = 0
      raise StopIteration  # ugly
    else:
      result = self.indices[self.ptr:self.ptr+self.batch_size]
      self.ptr += self.batch_size
      return result

# -----------------------------------------------------------

class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(4, 7)  # 4-7-3
    self.oupt = T.nn.Linear(7, 3)
    T.nn.init.xavier_uniform_(self.hid1.weight)
    T.nn.init.zeros_(self.hid1.bias)
    T.nn.init.xavier_uniform_(self.oupt.weight)
    T.nn.init.zeros_(self.oupt.bias)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = self.oupt(z)  # no softmax. see CrossEntropyLoss()
    return z

# -----------------------------------------------------------

def accuracy(model, data_x, data_y):
  # data_x and data_y are numpy nd-arrays
  N = len(data_x)     # number of data items
  n = len(data_x[0])  # number of features
  n_correct = 0; n_wrong = 0
  for i in range(N):
    X = T.Tensor(data_x[i].reshape((1,n))).to(device)
    oupt = model(X)
    (big_val, big_idx) = T.max(oupt, dim=1)
    if big_idx.item() == data_y[i]:
      n_correct += 1
    else:
      n_wrong += 1
  return (n_correct * 100.0) / (n_correct + n_wrong)

def main():
  # 0. get started
  print("\nBegin Iris Dataset using PyTorch demo \n")
  T.manual_seed(1)
  np.random.seed(1)

  # 1. load data
  print("Loading Iris data into memory \n")
  train_file = ".\\Data\\iris_train.txt"
  test_file = ".\\Data\\iris_test.txt"
  # data looks like:
  # 5.1, 3.5, 1.4, 0.2, 0
  # 6.0, 3.0, 4.8, 1.8, 2
  train_x = np.loadtxt(train_file, usecols=range(0,4),
    delimiter=",", skiprows=0, dtype=np.float32)
  train_y = np.loadtxt(train_file, usecols=[4],
    delimiter=",", skiprows=0, dtype=np.float32)
  test_x = np.loadtxt(test_file, usecols=range(0,4),
    delimiter=",", skiprows=0, dtype=np.float32)
  test_y = np.loadtxt(test_file, usecols=[4],
    delimiter=",", skiprows=0, dtype=np.float32)

  # 2. create network
  net = Net().to(device)

  # 3. train model
  lrn_rate = 0.05
  loss_func = T.nn.CrossEntropyLoss()  # applies softmax()
  optimizer = T.optim.SGD(net.parameters(), lr=lrn_rate)
  max_epochs = 100
  N = len(train_x)
  bat_size = 16
  batcher = Batcher(N, bat_size)
  print("Starting training")
  for epoch in range(0, max_epochs):
    for curr_bat in batcher:
      X = T.Tensor(train_x[curr_bat]).to(device)
      Y = T.LongTensor(train_y[curr_bat]).to(device)
      optimizer.zero_grad()
      oupt = net(X)
      loss_obj = loss_func(oupt, Y)
      loss_obj.backward()
      optimizer.step()
    if epoch % (max_epochs/10) == 0:
      print("epoch = %6d" % epoch, end="")
      print("  prev batch loss = %7.4f" % loss_obj.item(), end="")
      acc = accuracy(net, train_x, train_y)
      print("  accuracy = %0.2f%%" % acc)
  print("Training complete \n")

  # 4. evaluate model
  # net = net.eval()
  acc = accuracy(net, test_x, test_y)
  print("Accuracy on test data = %0.2f%%" % acc)

  # 5. save model
  print("Saving trained model \n")
  path = ".\\Models\\iris_model.pth"
  T.save(net.state_dict(), path)

  # 6. make a prediction
  unk_np = np.array([[6.1, 3.1, 5.1, 1.1]], dtype=np.float32)
  unk_pt = T.tensor(unk_np, dtype=T.float32).to(device)
  logits = net(unk_pt)  # raw outputs do not sum to 1.0
  probs_pt = T.softmax(logits, dim=1)
  probs_np = probs_pt.detach().numpy()
  print("Predicting species for [6.1, 3.1, 5.1, 1.1]: ")
  np.set_printoptions(precision=4)
  print(probs_np)
  print("\n\nEnd Iris demo")

if __name__ == "__main__":
  main()

Left: A python pattern dress and shoes. Center: A man’s python jacket. Right: A brightly-colored python dress. I find these designs oddly attractive.
