Video link
Saving and loading: method 1

Save method 1
import torch
import torchvision

vgg16 = torchvision.models.vgg16(pretrained=False)

# Save method 1: saves both the model structure and its parameters.
# The .pth extension is the usual choice for the file name.
torch.save(vgg16, "vgg16_method1.pth")
Load method 1

import torch
import torchvision

# Load method 1 (matches save method 1): load the whole model, structure and parameters
model = torch.load("vgg16_method1.pth")
print(model)
The output is:
D:\Anaconda3\envs\pytorch\python.exe D:/研究生/代码尝试/model_load.py
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Process finished with exit code 0
Pitfall of method 1

When you define your own network and save it in one file but load it in another, torch.load raises an error because it cannot find the network's class; you have to make the class definition available in the loading file (for example, by copying the model class into it), as shown in the sketch below.
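A minimal sketch of this trap; the MyNet class and the file names are illustrative, not from the original code:

# model_save_demo.py -- hypothetical file that defines and saves a custom network
import torch
from torch import nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3)

    def forward(self, x):
        return self.conv1(x)

net = MyNet()
torch.save(net, "mynet_method1.pth")  # save method 1: pickle the whole module


# model_load_demo.py -- hypothetical second file that loads it
import torch

# Without the MyNet class definition in this file, the next line fails with
# an AttributeError saying it can't get attribute 'MyNet'.
# Fix: copy the MyNet class definition into this file (it does not need to be instantiated).
model = torch.load("mynet_method1.pth")
print(model)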
Saving and loading: method 2

Save method 2

import torch
import torchvision

vgg16 = torchvision.models.vgg16(pretrained=False)

# Save method 2: save only vgg16's state (parameters) as a dictionary; the structure is not saved.
# This is the officially recommended approach because the file takes less space.
torch.save(vgg16.state_dict(), "vgg16_method2.pth")
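The comment above says method 2 is recommended because the file is smaller. A quick way to check, assuming both vgg16_method1.pth and vgg16_method2.pth from the examples have already been written (this snippet is not from the original):

import os

# Method 1 pickles the whole nn.Module (structure + parameters),
# while method 2 stores only the parameter tensors, so its file is somewhat smaller.
for path in ("vgg16_method1.pth", "vgg16_method2.pth"):
    print(path, os.path.getsize(path) / 1024 / 1024, "MB")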
Load method 2

import torch
import torchvision

# Load method 2: this returns only the parameter dictionary, not a model
model = torch.load("vgg16_method2.pth")
print(model)
The output is in dictionary form (it is long, so only part of it is shown):
OrderedDict([('features.0.weight', tensor([[[[ 0.0203,  0.0550, -0.0628],
          [-0.0034,  0.0760,  0.0692],
          [-0.1093,  0.0214, -0.0057]],

         [[-0.0381, -0.1053, -0.0022],
          [-0.0064,  0.0445, -0.0055],
          [ 0.0248, -0.0268, -0.0438]],

         [[-0.0004,  0.0012,  0.0095],
          [-0.0862, -0.1330, -0.0214],
          [ 0.0617, -0.0075, -0.0484]]],
...
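Because the file holds only an OrderedDict mapping parameter names to tensors, it can be inspected like any dictionary; a small sketch (not from the original) that prints each parameter's name and shape:

import torch

state_dict = torch.load("vgg16_method2.pth")

# Keys follow the pattern "<submodule>.<index>.<parameter>", values are tensors.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
# e.g. features.0.weight (64, 3, 3, 3), features.0.bias (64,)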
What if we also want to restore the model structure?
import torch
import torchvision

# Load method 2: rebuild the structure first, then load the saved parameters into it
vgg16 = torchvision.models.vgg16(pretrained=False)
# Put the parameters and the structure back together
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))
# model = torch.load("vgg16_method2.pth")
print(vgg16)