
[Quietly Out-Grinding Your Friends: 20 Days of PyTorch] - [Day 2] - [An Image Data Modeling Workflow Example]

Date: 2023-05-30

A systematic tutorial: get through PyTorch in 20 days.
I recently started a small check-in challenge with Zhong-ge and Hui-ge: 20 days of PyTorch, and this is day 2. Likes, coins, and favorites are welcome.

Contents

1. Preparing the Data
2. Defining the Model
3. Training the Model
4. Evaluating the Model
5. Using the Model
6. Saving the Model
Summary

import os
import datetime

# Print a timestamp separator bar
def printbar():
    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print("\n" + "=========="*8 + "%s"%nowtime)

# On macOS, set this environment variable so pytorch and matplotlib can run together in jupyter
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

!pip install prettytable
!pip install torchkeras

1. Preparing the Data

The cifar2 dataset is a subset of cifar10 that keeps only the first two classes, airplane and automobile.

The training set contains 5000 airplane and 5000 automobile images; the test set contains 1000 of each.

The goal of the cifar2 task is to train a model that classifies images into the two classes airplane and automobile.

There are two common ways to build an image data pipeline in PyTorch.

The first is to read the images with datasets.ImageFolder from torchvision and then load them in parallel with DataLoader.

The second is to subclass torch.utils.data.Dataset to implement custom reading logic, and then load in parallel with DataLoader.

The second approach is the general-purpose way to read user-defined datasets; it works for image datasets as well as text datasets.

In this post we use the first approach; a rough sketch of the second is included right below for reference.
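Here is a minimal sketch of what the second approach could look like. This is my own illustration rather than part of the original tutorial; the CifarDataset class, its folder-scanning logic, and the label convention are assumptions chosen to match the cifar2 layout used below.

import os
import torch
from PIL import Image
from torch.utils.data import Dataset

class CifarDataset(Dataset):
    # Hypothetical custom Dataset: expects one sub-folder per class under root
    def __init__(self, root, transform=None):
        self.transform = transform
        self.samples = []
        for idx, cls in enumerate(sorted(os.listdir(root))):
            cls_dir = os.path.join(root, cls)
            for fname in sorted(os.listdir(cls_dir)):
                self.samples.append((os.path.join(cls_dir, fname), idx))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, label = self.samples[i]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        # Return the label as a float tensor of shape [1], matching the ImageFolder setup below
        return img, torch.tensor([label]).float()

A Dataset built this way can be handed to DataLoader exactly like the ImageFolder datasets created next.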

import torch
from torch import nn
from torch.utils.data import Dataset,DataLoader
from torchvision import transforms,datasets

transform_train = transforms.Compose([transforms.ToTensor()])
transform_valid = transforms.Compose([transforms.ToTensor()])

ds_train = datasets.ImageFolder("/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train",
            transform = transform_train,
            target_transform = lambda t: torch.tensor([t]).float())
ds_valid = datasets.ImageFolder("/home/mw/input/data6936/eat_pytorch_data/data/cifar2/test",
            transform = transform_valid,
            target_transform = lambda t: torch.tensor([t]).float())

print(ds_train.class_to_idx.values())
print(ds_train.classes)
print(ds_train.imgs)
'''
Output:
dict_values([0, 1])
['0_airplane', '1_automobile']
[('/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train/0_airplane/0.jpg', 0),
 ('/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train/0_airplane/1.jpg', 0),
 ('/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train/0_airplane/10.jpg', 0),
 ('/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train/0_airplane/100.jpg', 0),
 ('/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train/0_airplane/1000.jpg', 0),
 ('/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train/0_airplane/1001.jpg', 0)]
'''

Tips:
ImageFolder is a generic data loader. It requires the training, validation, or test images to be organized in the following folder layout:

root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png

dataset = torchvision.datasets.ImageFolder(root, transform=None, target_transform=None,
                                           loader=default_loader, is_valid_file=None)

Parameter details:

root: the root directory of the images, i.e. the parent directory of the per-class folders.
transform: a preprocessing operation (function) applied to each image; it takes the original image as input and returns the transformed image.
target_transform: a preprocessing operation applied to the image label; the input is the target and the output is its transformed value. If this argument is omitted, the target is not transformed and the sequential class indices 0, 1, 2, ... are returned.
loader: how the image files are loaded; the default loader is usually fine.
is_valid_file: a function that takes the path of an image file and checks whether it is a valid file (used to filter out corrupt files); a short sketch follows below.
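As an illustration of the is_valid_file parameter, here is a small sketch of filtering out corrupt images. This is my own addition, not from the original post, and the check_image helper is a hypothetical name:

from PIL import Image

def check_image(path):
    # Return True only if the file can be opened and verified as an image
    try:
        Image.open(path).verify()
        return True
    except Exception:
        return False

ds_checked = datasets.ImageFolder(
    "/home/mw/input/data6936/eat_pytorch_data/data/cifar2/train",
    transform = transform_train,
    is_valid_file = check_image)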

The returned dataset has the following three attributes:

self.classes: a list holding the class names
self.class_to_idx: a dict mapping each class name to its index, matching the targets returned when no target_transform is applied
self.imgs: a list of (image path, class index) tuples

print(ds_train[0][1])
'''
Output:
tensor([0.])
'''

dl_train = DataLoader(ds_train, batch_size = 50, shuffle = True, num_workers = 3)
dl_valid = DataLoader(ds_valid, batch_size = 50, shuffle = True, num_workers = 3)

%matplotlib inline
%config InlineBackend.figure_format = 'svg'

# Look at a few samples
from matplotlib import pyplot as plt

plt.figure(figsize=(8,8))
for i in range(9):
    img,label = ds_train[i]
    img = img.permute(1,2,0)
    ax = plt.subplot(3,3,i+1)
    ax.imshow(img.numpy())
    ax.set_title("label = %d"%label.item())
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()


Tips:

img = img.permute(1,2,0)    # reorder the dimensions
The original image tensor has shape 3x32x32 (C, H, W) and must be converted to 32x32x3 (H, W, C) for plotting.
ax = plt.subplot(3,3,i+1)   # select the (i+1)-th cell of the 3x3 subplot grid
ax.imshow(img.numpy())      # visualize the image
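A tiny check (my own addition) of what permute does to the tensor shape:

t = torch.randn(3, 32, 32)          # C, H, W as produced by transforms.ToTensor()
print(t.permute(1, 2, 0).shape)     # torch.Size([32, 32, 3]) -- H, W, C as expected by imshow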

# PyTorch's default image tensor layout is Batch, Channel, Height, Width
for x,y in dl_train:
    print(x.shape,y.shape)
    break
'''
Output:
torch.Size([50, 3, 32, 32]) torch.Size([50, 1])
'''

2. Defining the Model

There are usually three ways to build a model with PyTorch:

build the model layer by layer with nn.Sequential;
build a custom model by subclassing nn.Module;
build a model by subclassing nn.Module while using model containers (nn.Sequential, nn.ModuleList, nn.ModuleDict) to organize the layers.

Here we choose to build a custom model by subclassing nn.Module.
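For comparison, here is a rough sketch of how a similar network could be written in the first style with nn.Sequential. This block is my own illustration, not part of the original post; the layer choices simply mirror the Net class defined further below:

net_seq = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Dropout2d(p=0.1),
    nn.AdaptiveMaxPool2d((1,1)),
    nn.Flatten(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid()
)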

# Test what AdaptiveMaxPool2d does
pool = nn.AdaptiveMaxPool2d((1,1))
t = torch.randn(10,8,32,32)
pool(t).shape
'''
Output:
torch.Size([10, 8, 1, 1])
'''
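A quick follow-up check (my own addition): because the pooling is adaptive, the output spatial size is always (1,1) regardless of the input spatial size, which is what lets the Flatten + Linear(64, 32) head below work without hard-coding the feature-map size:

t2 = torch.randn(10, 8, 48, 48)
print(pool(t2).shape)   # torch.Size([10, 8, 1, 1]) even though the input is 48x48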

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5)
        self.dropout = nn.Dropout2d(p=0.1)
        self.adaptive_pool = nn.AdaptiveMaxPool2d((1,1))
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(64,32)
        self.relu = nn.ReLU()
        self.linear2 = nn.Linear(32,1)
        self.sigmoid = nn.Sigmoid()

    def forward(self,x):
        x = self.conv1(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.dropout(x)
        x = self.adaptive_pool(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.relu(x)
        x = self.linear2(x)
        y = self.sigmoid(x)
        return y

net = Net()
print(net)
'''
Output:
Net(
  (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (dropout): Dropout2d(p=0.1, inplace=False)
  (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=64, out_features=32, bias=True)
  (relu): ReLU()
  (linear2): Linear(in_features=32, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
'''

import torchkeras
torchkeras.summary(net, input_shape=(3,32,32))
'''
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 30, 30]             896
         MaxPool2d-2           [-1, 32, 15, 15]               0
            Conv2d-3           [-1, 64, 11, 11]          51,264
         MaxPool2d-4             [-1, 64, 5, 5]               0
         Dropout2d-5             [-1, 64, 5, 5]               0
 AdaptiveMaxPool2d-6             [-1, 64, 1, 1]               0
           Flatten-7                   [-1, 64]               0
            Linear-8                   [-1, 32]           2,080
              ReLU-9                   [-1, 32]               0
           Linear-10                    [-1, 1]              33
          Sigmoid-11                    [-1, 1]               0
================================================================
Total params: 54,273
Trainable params: 54,273
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.011719
Forward/backward pass size (MB): 0.359634
Params size (MB): 0.207035
Estimated Total Size (MB): 0.578388
----------------------------------------------------------------
'''

3. Training the Model

PyTorch generally requires the user to write a custom training loop, and the code style of training loops varies from person to person.

There are three typical styles of training-loop code: script style, function style, and class style.

Here we use a fairly general function-style training loop.

import pandas as pd
from sklearn.metrics import roc_auc_score

model = net
model.optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)
model.loss_func = torch.nn.BCELoss()
model.metric_func = lambda y_pred,y_true: roc_auc_score(y_true.data.numpy(),y_pred.data.numpy())
model.metric_name = "auc"

Tips:
from sklearn.metrics import roc_auc_score
roc_auc_score computes the Area Under the ROC Curve (AUC) from the true labels and the predicted scores, which is the metric used for this binary classification task.
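A quick illustration of the metric (my own addition, using the example values from the scikit-learn documentation): roc_auc_score takes the true 0/1 labels and the predicted scores and returns the area under the ROC curve:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])   # predicted probabilities of the positive class
print(roc_auc_score(y_true, y_score))       # 0.75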

def train_step(model,features,labels):
    # Training mode: dropout layers are active
    model.train()
    # Zero the gradients
    model.optimizer.zero_grad()
    # Forward pass to compute loss and metric
    predictions = model(features)
    loss = model.loss_func(predictions,labels)
    metric = model.metric_func(predictions,labels)
    # Backward pass to compute gradients and update parameters
    loss.backward()
    model.optimizer.step()
    return loss.item(), metric.item()

def valid_step(model,features,labels):
    # Evaluation mode: dropout layers are inactive
    model.eval()
    # Disable gradient computation
    with torch.no_grad():
        predictions = model(features)
        loss = model.loss_func(predictions,labels)
        metric = model.metric_func(predictions,labels)
    return loss.item(), metric.item()

# Test train_step on a single batch
features,labels = next(iter(dl_train))
train_step(model,features,labels)
'''
Output:
(0.6954520344734192, 0.500805152979066)
'''

def train_model(model,epochs,dl_train,dl_valid,log_step_freq):
    metric_name = model.metric_name
    dfhistory = pd.DataFrame(columns = ["epoch","loss",metric_name,"val_loss","val_"+metric_name])
    print("Start Training...")
    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print("=========="*8 + "%s"%nowtime)

    for epoch in range(1,epochs+1):

        # 1. Training loop -------------------------------------------------
        loss_sum = 0.0
        metric_sum = 0.0
        step = 1

        for step, (features,labels) in enumerate(dl_train, 1):
            loss,metric = train_step(model,features,labels)

            # Print batch-level logs
            loss_sum += loss
            metric_sum += metric
            if step%log_step_freq == 0:
                print(("[step = %d] loss: %.3f, "+metric_name+": %.3f") %
                      (step, loss_sum/step, metric_sum/step))

        # 2. Validation loop -----------------------------------------------
        val_loss_sum = 0.0
        val_metric_sum = 0.0
        val_step = 1

        for val_step, (features,labels) in enumerate(dl_valid, 1):
            val_loss,val_metric = valid_step(model,features,labels)
            val_loss_sum += val_loss
            val_metric_sum += val_metric

        # 3. Record logs ----------------------------------------------------
        info = (epoch, loss_sum/step, metric_sum/step,
                val_loss_sum/val_step, val_metric_sum/val_step)
        dfhistory.loc[epoch-1] = info

        # Print epoch-level logs
        print(("\nEPOCH = %d, loss = %.3f," + metric_name +
               " = %.3f, val_loss = %.3f, " + "val_" + metric_name + " = %.3f") % info)
        nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        print("\n" + "=========="*8 + "%s"%nowtime)

    print('Finished Training...')
    return dfhistory

epochs = 20
dfhistory = train_model(model,epochs,dl_train,dl_valid,log_step_freq = 50)

4. Evaluating the Model

dfhistory

%matplotlib inline
%config InlineBackend.figure_format = 'svg'

import matplotlib.pyplot as plt

def plot_metric(dfhistory, metric):
    train_metrics = dfhistory[metric]
    val_metrics = dfhistory['val_'+metric]
    epochs = range(1, len(train_metrics) + 1)
    plt.plot(epochs, train_metrics, 'bo--')
    plt.plot(epochs, val_metrics, 'ro-')
    plt.title('Training and validation '+ metric)
    plt.xlabel("Epochs")
    plt.ylabel(metric)
    plt.legend(["train_"+metric, 'val_'+metric])
    plt.show()

plot_metric(dfhistory,"loss")

plot_metric(dfhistory,"auc")

5. Using the Model

def predict(model,dl):
    model.eval()
    with torch.no_grad():
        result = torch.cat([model.forward(t[0]) for t in dl])
    return(result.data)

# Predicted probabilities
y_pred_probs = predict(model,dl_valid)
y_pred_probs
'''
tensor([[0.0342],
        [0.9139],
        [0.5341],
        ...,
        [0.7885],
        [0.9491],
        [0.5726]])
'''

# Predicted classes
y_pred = torch.where(y_pred_probs > 0.5,
                     torch.ones_like(y_pred_probs),
                     torch.zeros_like(y_pred_probs))
y_pred
'''
Output:
tensor([[0.],
        [1.],
        [0.],
        ...,
        [0.],
        [1.],
        [1.]])
'''
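Beyond eyeballing the raw predictions, a simple sanity check (my own sketch, not in the original post) is to compute the validation accuracy in a single pass over dl_valid, so predictions and labels stay aligned even though the loader shuffles:

correct, total = 0, 0
model.eval()
with torch.no_grad():
    for features, labels in dl_valid:
        preds = (model(features) > 0.5).float()   # threshold the sigmoid outputs at 0.5
        correct += (preds == labels).sum().item()
        total += labels.numel()
print("valid accuracy = %.3f" % (correct / total))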

6. Saving the Model

It is recommended to save a PyTorch model by saving its parameters (the state_dict).

print(model.state_dict().keys())
'''
Output:
odict_keys(['conv1.weight', 'conv1.bias', 'conv2.weight', 'conv2.bias', 'linear1.weight', 'linear1.bias', 'linear2.weight', 'linear2.bias'])
'''

# Save the model parameters
torch.save(model.state_dict(), "./data/model_parameter.pkl")

net_clone = Net()
net_clone.load_state_dict(torch.load("./data/model_parameter.pkl"))

predict(net_clone,dl_valid)
'''
Output:
tensor([[0.8983],
        [0.5431],
        [0.9716],
        ...,
        [0.0663],
        [0.1317],
        [0.4519]])
'''

Summary

Key APIs from today:
- datasets.ImageFolder
- from sklearn.metrics import roc_auc_score
- nn.AdaptiveMaxPool2d((1,1))
