阿掖山

Intellectual activity is a way of life https://mountaye.github.io/blog/

.py | What a PyTorch Machine Learning Project Looks Like

Notes on a tutorial from the official PyTorch site. The original first writes everything from first principles, in native Python as far as possible, then refactors it step by step into code close to what you would see in production. Here I reverse the order and give the refactored end result first. (Do the engineers at Matters really have no plans to implement code highlighting~)

Self-study, and really all learning and teaching, boils down to building a road between the knowledge you already have and the unknown knowledge you want. There are two ways to build it. One is the theoretical, or first-principles, route: start from self-evident axioms or from knowledge already mastered, and reach new knowledge step by step through logical deduction. The other is the practical, or engineer's, route: take a product that already works, carve it into subsystems, and watch how the output changes as you vary the input, until each subsystem is simple enough that it is no longer a black box, and through this understand what the whole system does.

But once the subject is complex enough, a self-learner can rarely tunnel through with only one of these methods. Or the roads the two methods build turn out not to be the same road. For machine learning, the theoretical route says: pass the input data through a function with an enormous number of parameters, and correct the parameters according to the difference between the function's return value and the expected output, until the function approximates the relationship between inputs and outputs. In practice, however, the code leans on many functions already wrapped up by library authors, and reading the source alone leaves you baffled.
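To make that one-sentence description of the theoretical route concrete, here is a toy sketch in plain Python (the data points and learning rate are invented for illustration): a single parameter w is corrected after every sample according to the difference between prediction and target, until y ≈ w·x fits the data.

# toy "first principles" learner: fit y = w * x by correcting w after each sample
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, output) pairs, made up for the demo
w = 0.0    # the lone parameter
lr = 0.01  # learning rate
for _ in range(1000):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of the squared error (pred - y)**2 w.r.t. w
        w -= lr * grad             # correct the parameter against the error
print(w)  # converges to roughly 2.0, the slope relating input to output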

So finding the tutorial WHAT IS TORCH.NN REALLY? on the official PyTorch site (https://pytorch.org/tutorials/beginner/nn_tutorial.html) was a delightful surprise: it gives the code written along both routes, and for a self-learner the two read against each other like a Rosetta stone. I have stripped out the CNN-related parts here; after all, CNNs are only a subset of deep learning, deep learning only a subset of machine learning, and they have little to do with the theme of this article.

The original first writes everything from first principles, in native Python as far as possible, then refactors it step by step into near-production code. Here I reverse the order and present the refactored end result first:

from pathlib import Path
import requests
import pickle
import gzip

import numpy as np
import torch
import torch.nn.functional as F
from torch import nn
from torch import optim
from torch.utils.data import TensorDataset, DataLoader

# Using GPU

print(torch.cuda.is_available())
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
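# (note: the model and every data batch must live on the same device,
#  hence model.to(dev) in the main section and the .to(dev) calls in preprocess)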

# Wrapping DataLoader
# https://pytorch.org/tutorials/beginner/basics/data_tutorial.html?highlight=dataloader
# https://pytorch.org/tutorials/beginner/data_loading_tutorial.html?highlight=dataloader

def preprocess(x, y):
    # MNIST inputs are already flat 784-vectors, which is what nn.Linear(784, 10) expects;
    # the original tutorial's CNN reshaped here with x.view(-1, 1, 28, 28) (CNN parts removed)
    return x.to(dev), y.to(dev)

def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs * 2),
    )

class WrappedDataLoader:
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func

    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        batches = iter(self.dl)
        for b in batches:
            yield self.func(*b)

# Define the neural network model to be trained

# # If the model is simple:
# model = nn.Sequential(nn.Linear(784, 10))

# generally the model is a class that inherits from nn.Module and implements forward()
class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        # self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
        # self.bias = nn.Parameter(torch.zeros(10))
        self.lin = nn.Linear(784, 10)

    def forward(self, xb):
        # return xb @ self.weights + self.bias
        return self.lin(xb)

# Define the training pipeline in fit()

def loss_batch(model, loss_func, xb, yb, opt=None):
    loss = loss_func(model(xb), yb)

    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()

    return loss.item(), len(xb)

def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            loss_batch(model, loss_func, xb, yb, opt)

        model.eval()
        with torch.no_grad():
            losses, nums = zip(
                *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
            )
        val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)

        print(epoch, val_loss)
    return None

# __main()__:

# data
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"

PATH.mkdir(parents=True, exist_ok=True)

URL = "https://github.com/pytorch/tutorials/raw/master/_static/"
FILENAME = "mnist.pkl.gz"

if not (PATH / FILENAME).exists():
    content = requests.get(URL + FILENAME).content
    (PATH / FILENAME).open("wb").write(content)

with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")

x_train, y_train, x_valid, y_valid = map(
    torch.tensor, (x_train, y_train, x_valid, y_valid)
)

bs = 64  # batch size (the value used in the original tutorial)

train_dataset = TensorDataset(x_train, y_train)
valid_dataset = TensorDataset(x_valid, y_valid)
train_dataloader, valid_dataloader = get_data(train_dataset, valid_dataset, bs)
train_dataloader = WrappedDataLoader(train_dataloader, preprocess)
valid_dataloader = WrappedDataLoader(valid_dataloader, preprocess)

# hyperparameters/model
learning_rate = 0.1
epochs = 2
loss_function = F.cross_entropy  # replaces the hand-written log_softmax + nll
model = Mnist_Logistic()  # the original tutorial trains a Mnist_CNN here; the CNN parts are omitted
model.to(dev)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

# training
fit(epochs, model, loss_function, optimizer, train_dataloader, valid_dataloader)

As you can see, the trunk of the project divides into four parts:

  1. Prepare the data
  2. Define the model
  3. Describe the procedure
  4. Actually run it

Below, the parts are pulled apart one by one, comparing the code written along the two routes.

1. Preparing the data

Before refactoring

DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"

PATH.mkdir(parents=True, exist_ok=True)

URL = "https://github.com/pytorch/tutorials/raw/master/_static/"
FILENAME = "mnist.pkl.gz"

if not (PATH / FILENAME).exists():
    content = requests.get(URL + FILENAME).content
    (PATH / FILENAME).open("wb").write(content)

with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")

x_train, y_train, x_valid, y_valid = map(
    torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape

After refactoring:

# Wrapping DataLoader
# https://pytorch.org/tutorials/beginner/basics/data_tutorial.html?highlight=dataloader
# https://pytorch.org/tutorials/beginner/data_loading_tutorial.html?highlight=dataloader

def preprocess(x, y):
    # MNIST inputs are already flat 784-vectors, which is what nn.Linear(784, 10) expects;
    # the original tutorial's CNN reshaped here with x.view(-1, 1, 28, 28) (CNN parts removed)
    return x.to(dev), y.to(dev)

def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs * 2),
    )

class WrappedDataLoader:
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func

    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        batches = iter(self.dl)
        for b in batches:
            yield self.func(*b)
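After refactoring, all the index arithmetic of minibatching lives inside DataLoader, and WrappedDataLoader simply applies preprocess to every batch it yields. A quick peek at one batch (a sketch, reusing train_dataset and bs = 64 from the main program above):

train_dl = WrappedDataLoader(DataLoader(train_dataset, batch_size=bs, shuffle=True), preprocess)
xb, yb = next(iter(train_dl))
print(xb.shape, yb.shape)  # torch.Size([64, 784]) torch.Size([64]), both already on dev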

2. Defining the model

Before refactoring

import math  # math.sqrt is used below; this import appears earlier in the original tutorial

weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)

def log_softmax(x):
    return x - x.exp().sum(-1).log().unsqueeze(-1)

def model(xb):
    return log_softmax(xb @ weights + bias)

def nll(input, target):
    return -input[range(target.shape[0]), target].mean()

loss_func = nll

def accuracy(out, yb):
    preds = torch.argmax(out, dim=1)
    return (preds == yb).float().mean()
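One reason the refactored version below needs neither log_softmax nor nll: PyTorch's F.cross_entropy combines the two in a single, numerically safer call. A quick check of the equivalence (a made-up batch of 5 predictions over 10 classes):

out = torch.randn(5, 10)            # fake model outputs
yb = torch.tensor([0, 3, 1, 9, 2])  # fake labels
print(F.cross_entropy(out, yb))     # same value as nll(log_softmax(out), yb)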

After refactoring

# If the model is simple:
model = nn.Sequential(nn.Linear(784, 10))

# generally the model is a class that inherits from nn.Module and implements forward()
class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        # self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
        # self.bias = nn.Parameter(torch.zeros(10))
        self.lin = nn.Linear(784, 10)

    def forward(self, xb):
        # return xb @ self.weights + self.bias
        return self.lin(xb)
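What moving into nn.Module buys: nn.Linear registers its weight and bias as Parameters, so model.parameters() can hand everything to an optimizer with no manual bookkeeping. A small check using only the class above:

model = Mnist_Logistic()
for name, p in model.named_parameters():
    print(name, tuple(p.shape))  # lin.weight (10, 784), lin.bias (10,)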

3. Describing the procedure

Before refactoring

lr = 0.5  # learning rate
epochs = 2  # how many epochs to train for
bs = 64  # batch size (defined alongside the data in the original tutorial)

for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        # set_trace()
        start_i = i * bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        with torch.no_grad():
            weights -= weights.grad * lr
            bias -= bias.grad * lr
            weights.grad.zero_()
            bias.grad.zero_()

After refactoring

def loss_batch(model, loss_func, xb, yb, opt=None):
    loss = loss_func(model(xb), yb)

    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()

    return loss.item(), len(xb)

def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            loss_batch(model, loss_func, xb, yb, opt)

        model.eval()
        with torch.no_grad():
            losses, nums = zip(
                *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
            )
        val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)

        print(epoch, val_loss)
    return None
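For reference, opt.step() and opt.zero_grad() stand in for the manual update in the loop further up. Written out by hand (an intermediate stage of the original tutorial; plain SGD, without the momentum term the main program adds), the same step is:

with torch.no_grad():
    for p in model.parameters():
        p -= p.grad * lr
    model.zero_grad()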

4. Actually running it

Before refactoring

# __main()__:
# xb, yb are the last minibatch left over from the training loop above
print(loss_func(model(xb), yb), accuracy(model(xb), yb))

After refactoring

# __main()__:

# data
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"

PATH.mkdir(parents=True, exist_ok=True)

URL = "https://github.com/pytorch/tutorials/raw/master/_static/"
FILENAME = "mnist.pkl.gz"

if not (PATH / FILENAME).exists():
    content = requests.get(URL + FILENAME).content
    (PATH / FILENAME).open("wb").write(content)

with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")

x_train, y_train, x_valid, y_valid = map(
    torch.tensor, (x_train, y_train, x_valid, y_valid)
)

bs = 64  # batch size (the value used in the original tutorial)

train_dataset = TensorDataset(x_train, y_train)
valid_dataset = TensorDataset(x_valid, y_valid)
train_dataloader, valid_dataloader = get_data(train_dataset, valid_dataset, bs)
train_dataloader = WrappedDataLoader(train_dataloader, preprocess)
valid_dataloader = WrappedDataLoader(valid_dataloader, preprocess)

# hyperparameters/model
learning_rate = 0.1
epochs = 2
loss_function = F.cross_entropy  # replaces the hand-written log_softmax + nll
model = Mnist_Logistic()  # the original tutorial trains a Mnist_CNN here; the CNN parts are omitted
model.to(dev)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)

# training
fit(epochs, model, loss_function, optimizer, train_dataloader, valid_dataloader)


CC BY-NC-ND 2.0 copyright notice
