A Practical Guide to PyTorch
 
Author: Chen Yun (陈云)

2020-2-18
 
Editor's recommendation:
This article uses concrete, hands-on examples to explain how to complete the classic Kaggle competition Dogs vs. Cats with PyTorch. We hope you find it helpful.
The article originally appeared on zhihu; edited and recommended by Luca of 火龙果软件.

When learning a deep learning framework, mastering its basics and its API is certainly important, but knowing how to organize code so that it is readable and extensible is just as essential. This article does not dig deeply into new concepts; instead it passes on some experience about making your programs more pythonic and more in line with PyTorch's design philosophy. Some of this may be controversial, since it is heavily shaped by my personal taste and coding style; treat it as a reference or a suggestion rather than a set of rules you must follow. Ultimately, the goal is simply to help you organize your programs in a more sensible way.

When running deep learning experiments or projects, getting the best model usually takes many rounds of trial and modification. In my experience, for most deep learning research a program needs to implement the following capabilities:

Model definition

Data processing and loading

Model training (train & validate)

Visualization of the training process

Testing (test/inference)

In addition, the program should satisfy the following requirements:

The model should be highly configurable, making it easy to change parameters or swap models for repeated experiments

The code should be well organized, so that its structure is clear at a glance

The code should be well documented, so that others can understand it

In this article I will apply these ideas and use a concrete example to show how to complete the classic Kaggle competition Dogs vs. Cats with PyTorch. All example code for this article is open-sourced on github at https://github.com/chenyuntc/pytorch-best-practice .

Table of Contents

1 Competition Overview

2 File Organization

3 About __init__.py

4 Data Loading

5 Model Definition

6 Utility Functions

7 Configuration File

8 main.py

8.1 Training

8.2 Validation

8.3 Testing

8.4 Help Function

9 Usage

10 Points of Contention

1 Competition Overview

Dogs vs. Cats is a classic binary classification problem. Its training set contains 25,000 images, all placed in a single folder and named <category>.<num>.jpg, e.g. cat.10000.jpg and dog.100.jpg. The test set contains 12,500 images named <num>.jpg, e.g. 1000.jpg. Participants train a model on the training set, predict on the test set, and output the probability that each image is a dog. The submitted csv file looks like the following: the first column is the image's <num>, and the second is the probability that the image is a dog.

id,label
10001,0.889
10002,0.01
...
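A submission file in that format can be produced with the standard csv module. A minimal sketch; `write_submission` and the probability values here are illustrative, not part of the competition kit:

```python
import csv
import io

def write_submission(results, fileobj):
    """Write (id, probability-of-dog) pairs as a Kaggle submission CSV."""
    writer = csv.writer(fileobj)
    writer.writerow(['id', 'label'])          # header row required by Kaggle
    for img_id, prob in results:
        writer.writerow([img_id, prob])

# Demonstrate on an in-memory buffer instead of a real file
buf = io.StringIO()
write_submission([(10001, 0.889), (10002, 0.01)], buf)
print(buf.getvalue())
```

In the real program the same function would be given an open file handle for `result.csv`.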

2 File Organization

First, let's look at how the program files are organized:

├── checkpoints/
├── data/
│   ├── __init__.py
│   ├── dataset.py
│   └── get_data.sh
├── models/
│   ├── __init__.py
│   ├── AlexNet.py
│   ├── BasicModule.py
│   └── ResNet34.py
├── utils/
│   ├── __init__.py
│   └── visualize.py
├── config.py
├── main.py
├── requirements.txt
└── README.md

Where:

checkpoints/: saves trained models, so that the program can reload a model and resume training after an abnormal exit

data/: everything data-related, including preprocessing and the dataset implementation

models/: model definitions; there can be several models, such as AlexNet and ResNet34 above, one model per file

utils/: utility functions that may be needed; in this experiment, mainly a wrapper around the visualization tool

config.py: the configuration file; every configurable variable lives here, with a default value

main.py: the main file and entry point for training and testing; different operations and parameters are specified via different commands

requirements.txt: third-party libraries the program depends on

README.md: the necessary documentation for the program

3 About __init__.py

As you can see, almost every folder contains an __init__.py. A directory becomes a package once it contains an __init__.py file. __init__.py can be empty, or it can define the package's attributes and methods, but it must exist for other programs to import modules or functions from that directory. For example, since data/ contains an __init__.py, in main.py we can write

from data.dataset import DogCat

Whereas if we put the following in data/__init__.py

from .dataset import DogCat

then in main.py we can simply write:

from data import DogCat

or

import data
dataset = data.DogCat

which is more convenient than from data.dataset import DogCat.
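The mechanism can be verified end to end with a throwaway package built at runtime. A sketch; the package name `pkg` and the temporary directory are illustrative:

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway package that mirrors the data/ layout described above:
# pkg/__init__.py re-exports DogCat from pkg/dataset.py.
root = Path(tempfile.mkdtemp())
pkg_dir = root / 'pkg'
pkg_dir.mkdir()
(pkg_dir / 'dataset.py').write_text('class DogCat:\n    pass\n')
(pkg_dir / '__init__.py').write_text('from .dataset import DogCat\n')

sys.path.insert(0, str(root))
import pkg  # succeeds only because __init__.py exists

print(pkg.DogCat.__name__)  # the re-export enables `from pkg import DogCat`
```

Deleting the __init__.py line that performs the re-export would make `from pkg import DogCat` fail while `from pkg.dataset import DogCat` still works.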

4 Data Loading

Data handling lives mainly in data/dataset.py. The basic principle of data loading is to wrap the dataset with Dataset and then use DataLoader for parallel loading.

The data provided by Kaggle consists of a training set and a test set, but in practice we also need to set aside part of the training set as a validation set. The three datasets need somewhat different handling, and writing three separate Dataset classes would be complex and redundant, so here we distinguish them with a few flags. For the training set we want data augmentation such as random cropping, random flipping, and added noise, while the validation and test sets do not need it. Here is the code of dataset.py:

import os

import numpy as np
from PIL import Image
from torch.utils import data
from torchvision import transforms as T


class DogCat(data.Dataset):

    def __init__(self, root, transforms=None, train=True, test=False):
        '''
        Collect all image paths and split the data into train/val/test.
        '''
        self.test = test
        imgs = [os.path.join(root, img) for img in os.listdir(root)]
        # Test and train images follow different naming schemes:
        # test1: data/test1/8973.jpg
        # train: data/train/cat.10004.jpg
        if self.test:
            imgs = sorted(imgs, key=lambda x: int(x.split('.')[-2].split('/')[-1]))
        else:
            imgs = sorted(imgs, key=lambda x: int(x.split('.')[-2]))

        imgs_num = len(imgs)

        # Shuffle with a fixed seed so the split is reproducible
        np.random.seed(100)
        imgs = np.random.permutation(imgs)

        # Split into train/val at a ratio of 7:3
        if self.test:
            self.imgs = imgs
        elif train:
            self.imgs = imgs[:int(0.7 * imgs_num)]
        else:
            self.imgs = imgs[int(0.7 * imgs_num):]

        if transforms is None:
            # Data transforms; test/val and train are handled differently
            normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])

            # No data augmentation for the test and validation sets
            if self.test or not train:
                self.transforms = T.Compose([
                    T.Resize(224),
                    T.CenterCrop(224),
                    T.ToTensor(),
                    normalize
                ])
            # The training set uses data augmentation
            else:
                self.transforms = T.Compose([
                    T.Resize(256),
                    T.RandomResizedCrop(224),
                    T.RandomHorizontalFlip(),
                    T.ToTensor(),
                    normalize
                ])

    def __getitem__(self, index):
        '''
        Return the data for one image.
        The test set has no labels, so return the image id instead,
        e.g. 1000.jpg returns 1000.
        '''
        img_path = self.imgs[index]
        if self.test:
            label = int(self.imgs[index].split('.')[-2].split('/')[-1])
        else:
            label = 1 if 'dog' in img_path.split('/')[-1] else 0
        data = Image.open(img_path)
        data = self.transforms(data)
        return data, label

    def __len__(self):
        '''
        Return the number of images in the dataset.
        '''
        return len(self.imgs)

The caveats about dataset usage were covered in the previous chapter: put time-consuming operations such as file reading into __getitem__ and exploit multi-process loading to speed things up. Reading every image into memory at once is not only slow and memory-hungry, it also makes data augmentation awkward. Also, here we use 30% of the training set as a validation set, which can be used to monitor training and guard against overfitting.
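The seeded-shuffle-then-slice split used in __init__ above can be isolated into a small helper. A sketch of the same idea; `split_train_val` is illustrative, not part of the repo:

```python
import numpy as np

def split_train_val(paths, train_ratio=0.7, seed=100):
    """Shuffle deterministically, then split into train/val lists (7:3 by default)."""
    rng = np.random.RandomState(seed)   # fixed seed => same split on every run
    shuffled = rng.permutation(paths)
    cut = int(train_ratio * len(shuffled))
    return list(shuffled[:cut]), list(shuffled[cut:])

train, val = split_train_val([f'cat.{i}.jpg' for i in range(10)])
print(len(train), len(val))  # 7 3
```

Because the seed is fixed, building the train split and the val split in two separate processes still yields disjoint, complementary sets, which is exactly what the `train=True` / `train=False` constructor calls rely on.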

In use, we load the data through a dataloader.

train_dataset = DogCat(opt.train_data_root, train=True)
trainloader = DataLoader(train_dataset,
                         batch_size=opt.batch_size,
                         shuffle=True,
                         num_workers=opt.num_workers)

for ii, (data, label) in enumerate(trainloader):
    train()

5 Model Definition

Model definitions live under the models/ directory. BasicModule is a thin wrapper around nn.Module that provides convenient interfaces for loading and saving models.

import time

import torch as t


class BasicModule(t.nn.Module):
    '''
    Wraps nn.Module, mainly providing the save and load methods.
    '''

    def __init__(self, opt=None):
        super(BasicModule, self).__init__()
        self.model_name = str(type(self))  # default model name

    def load(self, path):
        '''
        Load a model from the given path.
        '''
        self.load_state_dict(t.load(path))

    def save(self, name=None):
        '''
        Save the model, by default using "model name + time" as the
        file name, e.g. AlexNet_0710_23:57:29.pth
        '''
        if name is None:
            prefix = 'checkpoints/' + self.model_name + '_'
            name = time.strftime(prefix + '%m%d_%H:%M:%S.pth')
        t.save(self.state_dict(), name)
        return name

In practice, simply call model.save() and model.load(opt.load_path).

Other custom models generally inherit from BasicModule and then implement their own architecture: AlexNet.py implements AlexNet, and ResNet34.py implements ResNet34. In models/__init__.py, the code is as follows:

from .AlexNet import AlexNet
from .ResNet34 import ResNet34

This way, in the main function we can write:

from models import AlexNet

or

import models
model = models.AlexNet()

or

import models
model = getattr(models, 'AlexNet')()

The last form is the crucial one: it means we can select a model by a plain string, without any conditional statements and without touching the code each time a new model is added. After adding a new model, we only need to add one line to models/__init__.py:

from .new_module import NewModule

and nothing else needs to change.
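The string-to-class dispatch can be demonstrated without the real models package. A minimal sketch; the SimpleNamespace and `build_model` stand in for the `models` module and are illustrative:

```python
import types

# Two stub model classes standing in for the real AlexNet / ResNet34
class AlexNet:
    pass

class ResNet34:
    pass

# A stand-in module namespace; in the real project this is the `models` package
models = types.SimpleNamespace(AlexNet=AlexNet, ResNet34=ResNet34)

def build_model(name):
    """Instantiate a model class selected by its string name -- no if/elif chain."""
    return getattr(models, name)()

print(type(build_model('ResNet34')).__name__)  # ResNet34
```

Passing `opt.model` (a plain string from the config or command line) to such a lookup is exactly what `getattr(models, opt.model)()` does in main.py.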

Other caveats about model definition were covered in detail in the previous chapter and are not repeated here. In summary:

Prefer nn.Sequential (as in AlexNet)

Wrap frequently used structures into sub-Modules (such as the Inception block in GoogLeNet or the Residual Block in ResNet)

Generate repetitive, regular structures with a function (the many variants of VGG and ResNet all consist of repeated convolutional layers)

Interested readers can look at `models/resnet34.py` to see how ResNet34 is implemented in fewer than 80 lines of code (including blank lines and comments). Of course, these models are also available in torchvision, which even provides pretrained weights, so they are very easy to use:

import torchvision as tv
resnet34 = tv.models.resnet34(pretrained=True)

6 Utility Functions

In a project we may need some helper methods; these can be collected under utils/ and imported when needed. In this example we mainly wrap a few operations of the visualization tool visdom. The code is shown below; in this experiment only the plot method is used, to track the loss.

# coding:utf8
import time

import numpy as np
import visdom


class Visualizer(object):
    '''
    Wraps basic visdom operations; the native visdom interface is still
    available via `self.vis.function` or `self.function`, e.g.
    self.text('hello visdom')
    self.histogram(t.randn(1000))
    self.line(t.arange(0, 10), t.arange(1, 11))
    '''

    def __init__(self, env='default', **kwargs):
        self.vis = visdom.Visdom(env=env, **kwargs)

        # Index of the point being plotted, i.e. the x coordinate;
        # e.g. ('loss', 23) is the 23rd point of the loss curve
        self.index = {}
        self.log_text = ''

    def reinit(self, env='default', **kwargs):
        '''
        Reconfigure visdom.
        '''
        self.vis = visdom.Visdom(env=env, **kwargs)
        return self

    def plot_many(self, d):
        '''
        Plot several values at once.
        @params d: dict (name, value) i.e. ('loss', 0.11)
        '''
        for k, v in d.items():
            self.plot(k, v)

    def img_many(self, d):
        for k, v in d.items():
            self.img(k, v)

    def plot(self, name, y, **kwargs):
        '''
        self.plot('loss', 1.00)
        '''
        x = self.index.get(name, 0)
        self.vis.line(Y=np.array([y]), X=np.array([x]),
                      win=name,
                      opts=dict(title=name),
                      update=None if x == 0 else 'append',
                      **kwargs
                      )
        self.index[name] = x + 1

    def img(self, name, img_, **kwargs):
        '''
        self.img('input_img', t.Tensor(64, 64))
        self.img('input_imgs', t.Tensor(3, 64, 64))
        self.img('input_imgs', t.Tensor(100, 1, 64, 64))
        self.img('input_imgs', t.Tensor(100, 3, 64, 64), nrows=10)
        '''
        self.vis.images(img_.cpu().numpy(),
                        win=name,
                        opts=dict(title=name),
                        **kwargs
                        )

    def log(self, info, win='log_text'):
        '''
        self.log({'loss':1, 'lr':0.0001})
        '''
        self.log_text += ('[{time}] {info} <br>'.format(
            time=time.strftime('%m%d_%H%M%S'),
            info=info))
        self.vis.text(self.log_text, win)

    def __getattr__(self, name):
        '''
        self.function is equivalent to self.vis.function,
        except for the custom plot, img, log, plot_many, etc.
        '''
        return getattr(self.vis, name)

7 Configuration File

Model definition, data handling, training, and so on all involve many variables. These variables should have default values and be collected in a single configuration file, which makes later debugging, code modification, and porting much easier. Here we put all configurable items in config.py.

class DefaultConfig(object):
    env = 'default'  # visdom environment
    model = 'AlexNet'  # the model to use; must match a name in models/__init__.py

    train_data_root = './data/train/'  # path to the training set
    test_data_root = './data/test1'  # path to the test set
    load_model_path = 'checkpoints/model.pth'  # path of a pretrained model; None means do not load

    batch_size = 128  # batch size
    use_gpu = True  # use GPU or not
    num_workers = 4  # how many workers for loading data
    print_freq = 20  # print info every N batches

    debug_file = '/tmp/debug'  # if os.path.exists(debug_file): enter ipdb
    result_file = 'result.csv'

    max_epoch = 10
    lr = 0.1  # initial learning rate
    lr_decay = 0.95  # when val_loss increases, lr = lr*lr_decay
    weight_decay = 1e-4  # L2 regularization strength

¿ÉÅäÖõIJÎÊýÖ÷Òª°üÀ¨£º

Êý¾Ý¼¯²ÎÊý£¨Îļþ·¾¶¡¢batch_sizeµÈ£©

ѵÁ·²ÎÊý£¨Ñ§Ï°ÂÊ¡¢ÑµÁ·epochµÈ£©

Ä£ÐͲÎÊý

ÕâÑùÎÒÃÇÔÚ³ÌÐòÖоͿÉÒÔÕâÑùʹÓãº

import models
from config import DefaultConfig
from data import DogCat

opt = DefaultConfig()
lr = opt.lr
model = getattr(models, opt.model)()
dataset = DogCat(opt.train_data_root)

These are only default values; a parse method is also provided to update the configuration from a dict.

def parse(self, kwargs):
    '''
    Update config parameters from the dict kwargs.
    '''
    # update the configuration
    for k, v in kwargs.items():
        if not hasattr(self, k):
            # warn or raise -- a matter of personal preference
            warnings.warn("Warning: opt has no attribute %s" % k)
        setattr(self, k, v)

    # print the configuration
    print('user config:')
    for k, v in self.__class__.__dict__.items():
        if not k.startswith('__'):
            print(k, getattr(self, k))

This way, in actual use we do not need to modify config.py every time; we only need to pass the desired parameters on the command line to override the defaults.

For example:

opt = DefaultConfig()
new_config = {'lr':0.1,'use_gpu':False}
opt.parse(new_config)
assert opt.lr == 0.1
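Stripped of the printing, the override mechanism is just setattr driven by a dict. A self-contained sketch of the same mechanism, with only two sample attributes:

```python
import warnings

class DefaultConfig:
    lr = 0.1
    use_gpu = True

    def parse(self, kwargs):
        """Override defaults from a dict; warn on unknown keys instead of failing."""
        for k, v in kwargs.items():
            if not hasattr(self, k):
                warnings.warn("opt has no attribute %s" % k)
            setattr(self, k, v)

opt = DefaultConfig()
opt.parse({'lr': 0.005, 'use_gpu': False})
print(opt.lr, opt.use_gpu)  # 0.005 False
```

Note that setattr sets instance attributes, so the class-level defaults remain untouched and a fresh DefaultConfig() always starts clean.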

8 main.py

Before walking through the main program main.py, let us first look at fire, a command-line tool open-sourced by Google in March 2017; it can be installed with pip install fire. Here is its basic usage. Suppose the file example.py contains:

import fire

def add(x, y):
    return x + y

def mul(**kwargs):
    a = kwargs['a']
    b = kwargs['b']
    return a * b

if __name__ == '__main__':
    fire.Fire()

Then we can run:

python example.py add 1 2  # executes add(1, 2)
python example.py mul --a=1 --b=2  # executes mul(a=1, b=2), kwargs={'a': 1, 'b': 2}
python example.py add --x=1 --y=2  # executes add(x=1, y=2)

As you can see, once fire.Fire() runs in a program, its functions become available from the command line as python file <function> [args,] {--kwargs,}. fire supports many more advanced features; see the official guide for details.

The main program main.py contains four functions, three of which are meant to be invoked from the command line. Its code is organized as follows:

def train(**kwargs):
    '''
    train the model
    '''
    pass

def val(model, dataloader):
    '''
    compute accuracy and other metrics on the validation set,
    to guide training
    '''
    pass

def test(**kwargs):
    '''
    test (inference)
    '''
    pass

def help():
    '''
    print help information
    '''
    print('help')

if __name__ == '__main__':
    import fire
    fire.Fire()

Following fire's usage, training or testing can be launched with python main.py <function> --args=xx.

8.1 Training

The main steps of training are:

Define the network

Define the data

Define the loss function and optimizer

Compute key metrics

Start training:

    train the network

    visualize the metrics

    compute metrics on the validation set

The code of the training function is as follows:

def train(**kwargs):
    # update the configuration from command-line arguments
    opt.parse(kwargs)
    vis = Visualizer(opt.env)

    # step 1: model
    model = getattr(models, opt.model)()
    if opt.load_model_path:
        model.load(opt.load_model_path)
    if opt.use_gpu: model.cuda()

    # step 2: data
    train_data = DogCat(opt.train_data_root, train=True)
    val_data = DogCat(opt.train_data_root, train=False)
    train_dataloader = DataLoader(train_data, opt.batch_size,
                                  shuffle=True,
                                  num_workers=opt.num_workers)
    val_dataloader = DataLoader(val_data, opt.batch_size,
                                shuffle=False,
                                num_workers=opt.num_workers)

    # step 3: loss function and optimizer
    criterion = t.nn.CrossEntropyLoss()
    lr = opt.lr
    optimizer = t.optim.Adam(model.parameters(),
                             lr=lr,
                             weight_decay=opt.weight_decay)

    # step 4: metrics -- the smoothed loss and a confusion matrix
    loss_meter = meter.AverageValueMeter()
    confusion_matrix = meter.ConfusionMeter(2)
    previous_loss = 1e100

    # training
    for epoch in range(opt.max_epoch):

        loss_meter.reset()
        confusion_matrix.reset()

        for ii, (data, label) in enumerate(train_dataloader):

            # train the model on one batch
            input = Variable(data)
            target = Variable(label)
            if opt.use_gpu:
                input = input.cuda()
                target = target.cuda()
            optimizer.zero_grad()
            score = model(input)
            loss = criterion(score, target)
            loss.backward()
            optimizer.step()

            # update metrics and visualize
            loss_meter.add(loss.data[0])
            confusion_matrix.add(score.data, target.data)

            if ii % opt.print_freq == opt.print_freq - 1:
                vis.plot('loss', loss_meter.value()[0])

                # enter debug mode if requested
                if os.path.exists(opt.debug_file):
                    import ipdb
                    ipdb.set_trace()

        model.save()

        # compute and visualize metrics on the validation set
        val_cm, val_accuracy = val(model, val_dataloader)
        vis.plot('val_accuracy', val_accuracy)
        vis.log("epoch:{epoch},lr:{lr}, loss:{loss},train_cm:{train_cm},val_cm:{val_cm}"
                .format(
                    epoch=epoch,
                    loss=loss_meter.value()[0],
                    val_cm=str(val_cm.value()),
                    train_cm=str(confusion_matrix.value()),
                    lr=lr))

        # if the loss no longer decreases, lower the learning rate
        if loss_meter.value()[0] > previous_loss:
            lr = lr * opt.lr_decay
            for param_group in optimizer.param_groups:
                param_group['lr'] = lr

        previous_loss = loss_meter.value()[0]

Here we use meter, a tool from PyTorchNet. meter provides lightweight utilities for quickly tracking metrics during training. AverageValueMeter computes the mean and standard deviation of all values added to it, and is used here to track the average loss over an epoch. ConfusionMeter records per-class results for a classification problem and is a more detailed statistic than plain accuracy. For example, consider the confusion matrix in Table 6-1:

             predicted dog   predicted cat
actual dog        35              15
actual cat         9              91

There are 50 dog images, of which 35 are correctly classified as dogs and 15 are misclassified as cats; there are 100 cat images, of which 91 are correctly classified as cats and the remaining 9 are misclassified as dogs. Compared with summary statistics such as accuracy, the confusion matrix better reflects the classification results, especially when the classes are imbalanced.
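The worked example above corresponds to a 2x2 confusion matrix, and the accuracy formula used in the val() function of the next subsection can be checked against it directly. A standalone numpy sketch:

```python
import numpy as np

# Confusion matrix from the example: rows = actual class, cols = predicted class
#               pred dog  pred cat
# actual dog        35        15
# actual cat         9        91
cm = np.array([[35, 15],
               [ 9, 91]])

accuracy = 100. * np.trace(cm) / cm.sum()      # correct predictions over all samples
dog_recall = 100. * cm[0, 0] / cm[0].sum()     # fraction of dogs that were found
print(accuracy, dog_recall)  # 84.0 70.0
```

The 84% accuracy hides the fact that only 70% of dogs are recognized, which is exactly why a confusion matrix is more informative on imbalanced data.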

8.2 Validation

Validation is relatively simple, but note that the model must be put into evaluation mode (model.eval()) and switched back to training mode (model.train()) once validation finishes; these two calls affect the behavior of layers such as BatchNorm and Dropout. The code is as follows.

def val(model, dataloader):
    '''
    compute accuracy and other metrics on the validation set
    '''
    # put the model into evaluation mode
    model.eval()

    confusion_matrix = meter.ConfusionMeter(2)
    for ii, data in enumerate(dataloader):
        input, label = data
        val_input = Variable(input, volatile=True)
        val_label = Variable(label.long(), volatile=True)
        if opt.use_gpu:
            val_input = val_input.cuda()
            val_label = val_label.cuda()
        score = model(val_input)
        confusion_matrix.add(score.data.squeeze(), label.long())

    # switch the model back to training mode
    model.train()

    cm_value = confusion_matrix.value()
    accuracy = 100. * (cm_value[0][0] + cm_value[1][1]) / \
               (cm_value.sum())
    return confusion_matrix, accuracy

8.3 Testing

At test time, we need to compute the probability that each sample is a dog and save the results as a csv file. The test code is quite similar to validation, but it must load the model and the data itself.

def test(**kwargs):
    opt.parse(kwargs)

    # model
    model = getattr(models, opt.model)().eval()
    if opt.load_model_path:
        model.load(opt.load_model_path)
    if opt.use_gpu: model.cuda()

    # data
    test_data = DogCat(opt.test_data_root, test=True)
    test_dataloader = DataLoader(test_data,
                                 batch_size=opt.batch_size,
                                 shuffle=False,
                                 num_workers=opt.num_workers)

    results = []
    for ii, (data, path) in enumerate(test_dataloader):
        input = t.autograd.Variable(data, volatile=True)
        if opt.use_gpu: input = input.cuda()
        score = model(input)
        probability = t.nn.functional.softmax(score)[:, 1].data.tolist()
        batch_results = [(path_, probability_)
                         for path_, probability_ in zip(path, probability)]
        results += batch_results
    write_csv(results, opt.result_file)
    return results

8.4 Help Function

To make the program easy for others to use, it should also provide a help function describing how it is invoked. The command-line interface has many parameters, and documenting them in a hand-written string would be tedious and would also have to be updated every time the config file changes. Here we use the inspect module from the Python standard library, which can automatically retrieve the source code of the config class. The code of help is as follows:

def help():
    '''
    print help information: python file.py help
    '''

    print('''
    usage : python {0} <function> [--args=value,]
    <function> := train | test | help
    example:
            python {0} train --env='env0701' --lr=0.01
            python {0} test --dataset='path/to/dataset/root/'
            python {0} help
    available args:'''.format(__file__))

    from inspect import getsource
    source = (getsource(opt.__class__))
    print(source)

When the user runs python main.py help, the following help information is printed:

usage : python main.py <function> [--args=value,]
<function> := train | test | help
example:
        python main.py train --env='env0701' --lr=0.01
        python main.py test --dataset='path/to/dataset/'
        python main.py help
available args:
class DefaultConfig(object):
    env = 'default'  # visdom environment
    model = 'AlexNet'  # the model to use

    train_data_root = './data/train/'  # path to the training set
    test_data_root = './data/test1'  # path to the test set
    load_model_path = 'checkpoints/model.pth'  # pretrained model to load

    batch_size = 128  # batch size
    use_gpu = True  # use GPU or not
    num_workers = 4  # how many workers for loading data
    print_freq = 20  # print info every N batches

    debug_file = '/tmp/debug'
    result_file = 'result.csv'  # output file

    max_epoch = 10
    lr = 0.1  # initial learning rate
    lr_decay = 0.95  # when val_loss increases, lr = lr*lr_decay
    weight_decay = 1e-4  # L2 regularization strength

9 Usage

As the output of the help function shows, variables can be specified via command-line arguments. Below are three usage examples. fire automatically converts dashes (-) in command-line flags to underscores (_) and converts non-numeric values to strings, so --train-data-root=data/train and --train_data_root='data/train' are equivalent.

# train the model
python main.py train \
    --train-data-root=data/train/ \
    --load-model-path='checkpoints/resnet34_16:53:00.pth' \
    --lr=0.005 \
    --batch-size=32 \
    --model='ResNet34' \
    --max-epoch=20

# test the model
python main.py test \
    --test-data-root=data/test1 \
    --load-model-path='checkpoints/resnet34_00:23:05.pth' \
    --batch-size=128 \
    --model='ResNet34' \
    --num-workers=12

# print help information
python main.py help

10 Points of Contention

The design conventions above carry the author's strong personal preferences and are not meant as a standard, but as a suggestion and a reference. Many aspects of this design are debatable: for example, whether the training loop should be wrapped in a trainer object, or folded into a train method on BasicModule. The handling of command-line arguments is also worth discussing. So do not treat the views in this article as rules that must be followed, but as a reference.

The design in this chapter may spark some debate; the most debatable aspects are the following.

The first is the handling of command-line arguments. Most programs today use argparse from the Python standard library, and some use the more lightweight click. These give more complete command-line support, but in the author's experience this approach is less intuitive and requires comparatively more code. With argparse, for example, every new command-line argument requires code like this:

parser.add_argument('--save-interval', type=int,
                    default=500,
                    help='how many steps to wait before saving [default: 500]')

ÔÚÎÒÑÛÖУ¬ÕâÖÖʵÏÖ·½Ê½Ô¶²»ÈçÒ»¸öרÃŵÄconfig.pyÀ´µÄÖ±¹ÛºÍÒ×Óá£ÓÈÆäÊǶÔÓÚʹÓà Jupyter notebook»òIPythonµÈ½»»¥Ê½µ÷ÊÔµÄÓû§À´Ëµ£¬argparse½ÏÄÑʹÓá£

The second is how model training is organized. Quite a few people like to integrate the training procedure into the model definition itself, with a code structure like this:

class MyModel(nn.Module):
    def __init__(self, opt):
        super(MyModel, self).__init__()
        self.dataloader = Dataloader(opt)
        self.optimizer = optim.Adam(self.parameters(), lr=0.001)
        self.lr = opt.lr
        self.model = make_model()

    def forward(self, input):
        pass

    def train_(self):
        # train the model
        for epoch in range(opt.max_epoch):
            for ii, data in enumerate(self.dataloader):
                self.train_step(data)
            model.save()

    def train_step(self):
        pass

Alternatively, one can design a dedicated Trainer object, along the lines of:

from torch.autograd import Variable

class Trainer(object):

    def __init__(self, model=None, criterion=None, optimizer=None, dataset=None):
        self.model = model
        self.criterion = criterion
        self.optimizer = optimizer
        self.dataset = dataset
        self.iterations = 0

    def run(self, epochs=1):
        for i in range(1, epochs + 1):
            self.train()

    def train(self):
        for i, data in enumerate(self.dataset, self.iterations + 1):
            batch_input, batch_target = data
            # call_plugins dispatches to registered hooks (definition omitted here)
            self.call_plugins('batch', i, batch_input, batch_target)
            input_var = Variable(batch_input)
            target_var = Variable(batch_target)

            plugin_data = [None, None]

            def closure():
                batch_output = self.model(input_var)
                loss = self.criterion(batch_output, target_var)
                loss.backward()
                if plugin_data[0] is None:
                    plugin_data[0] = batch_output.data
                    plugin_data[1] = loss.data
                return loss

            self.optimizer.zero_grad()
            self.optimizer.step(closure)

        self.iterations += i

Still others like to imitate the design of keras and scikit-learn and provide a fit interface.

It is hard to say which of these approaches is better or worse; the best approach is the one that suits you.
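A fit-style interface of that kind can be sketched without committing to any framework. `Trainer`, `train_step`, and the toy "model" below are all illustrative, not from the repo:

```python
class Trainer:
    """A keras/sklearn-style fit interface; train_step is framework-agnostic here."""

    def __init__(self, model, train_step):
        self.model = model
        self.train_step = train_step
        self.history = []          # one entry per processed batch

    def fit(self, data, epochs=1):
        for _ in range(epochs):
            for batch in data:
                self.history.append(self.train_step(self.model, batch))
        return self

# Toy usage: the "model" is a dict and "training" just accumulates batch sums
model = {'w': 0}
def step(m, batch):
    m['w'] += sum(batch)
    return m['w']

Trainer(model, step).fit([[1, 2], [3]], epochs=2)
print(model['w'])  # 12
```

In a real PyTorch version, `train_step` would run the forward pass, compute the loss, and call backward/step, while `fit` stays unchanged; that separation is the appeal of this design.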

The companion code for this guide: chenyuntc/pytorch-best-practice

 
   