Implementing OCT Image Recognition with Transfer Learning
 
 2018-12-13 
 
Editor's note:

This article comes from segmentfault.com. It mainly covers machine learning programming in a distributed environment; when the input data is large, optimization should be considered along three stages: Extract, Transform, and Load.

Transfer Learning

Transfer learning means taking a model that someone else has already trained, such as the Inception Model or ResNet Model, and using it as a pre-trained model to extract features. A common approach is to remove the last layer of the pre-trained model, replace it according to our own needs, and then train on our own training set.

Because the pre-trained model has typically been trained for a long time on a large dataset, transfer learning lets us obtain fairly good results with a much smaller training set.
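A minimal sketch of this head-swap pattern (the ResNet50 backbone and the 10-class head here are illustrative choices; a full VGG16 example appears later in this article):

import tensorflow as tf

# Load a pre-trained backbone without its classifier, freeze its weights,
# and attach a new softmax layer for our own classes.
base = tf.keras.applications.ResNet50(input_shape=(224, 224, 3),
                                      include_top=False,  # drop the original classifier
                                      pooling='avg')      # emit a flat feature vector
base.trainable = False                                    # keep pre-trained weights fixed
outputs = tf.keras.layers.Dense(10, activation=tf.nn.softmax)(base.output)
new_model = tf.keras.Model(inputs=base.input, outputs=outputs)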

Since this project uses Estimators, we first give a brief introduction to them.

TF Estimator

The following introduction is quoted from the official Estimator documentation.

You can run Estimator-based models on a local host or in a distributed multi-server environment without changing the model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding the model.

Estimators simplify sharing implementations between model developers.

Äú¿ÉÒÔʹÓø߼¶Ö±¹Û´úÂ뿪·¢ÏȽøµÄÄ£ÐÍ¡£¼òÑÔÖ®£¬²ÉÓà Estimator ´´½¨Ä£ÐÍͨ³£±È²ÉÓÃµÍ½× TensorFlow API ¸ü¼òµ¥¡£

Estimators are themselves built on tf.layers, which simplifies customization.

Estimators build the graph for you.

Estimator Ìṩ°²È«µÄ·Ö²¼Ê½ÑµÁ·Ñ­»·£¬¿ÉÒÔ¿ØÖÆÈçºÎÒÔ¼°ºÎʱ£º¹¹½¨Í¼£¬³õʼ»¯±äÁ¿£¬¿ªÊ¼ÅŶӣ¬´¦ÀíÒì³££¬´´½¨¼ì²éµã²¢´Ó¹ÊÕÏÖлָ´£¬±£´æTensorBoardµÄÕªÒª¡£

When writing an application with Estimators, you must separate the data input pipeline from the model. This separation simplifies experimenting with different datasets.
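A minimal sketch of this separation, assuming TF 1.x and a canned DNNClassifier on toy data (the feature name 'x' and the sizes here are made up):

import numpy as np
import tensorflow as tf

# toy data standing in for a real dataset
features = np.random.rand(150, 4).astype(np.float32)
labels = np.random.randint(0, 3, size=150)

# the model: a canned DNNClassifier, defined with no knowledge of the input pipeline
feature_x = tf.feature_column.numeric_column('x', shape=[4])
clf = tf.estimator.DNNClassifier(feature_columns=[feature_x],
                                 hidden_units=[16, 16],
                                 n_classes=3)

# the input pipeline: any tf.data pipeline can be swapped in here
# without touching the model definition above
def train_input_fn():
    ds = tf.data.Dataset.from_tensor_slices(({'x': features}, labels))
    return ds.shuffle(150).repeat().batch(32)

clf.train(input_fn=train_input_fn, steps=100)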

Example

ÎÒÃÇ¿ÉÒÔʹÓá°tf.keras.estimator.model_to_estimator¡±½«kerasת»»Estimator¡£ÕâÀïʹÓõÄÊý¾Ý¼¯ÊÇFashion-MNIST¡£

The Fashion-MNIST labels (classes 0-9) are: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot.

Êý¾Ýµ¼È룺

import os
import time
import tensorflow as tf
import numpy as np
import tensorflow.contrib as tcon

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
TRAINING_SIZE = len(train_images)
TEST_SIZE = len(test_images)

# scale pixel values from [0, 255] to [0, 1]
train_images = np.asarray(train_images, dtype=np.float32) / 255
# reshape to a 4-D tensor [batch_size, height, width, channels]
train_images = train_images.reshape((TRAINING_SIZE, 28, 28, 1))
test_images = np.asarray(test_images, dtype=np.float32) / 255
test_images = test_images.reshape((TEST_SIZE, 28, 28, 1))

Use tf.keras.utils.to_categorical to convert the labels to a one-hot encoding:

# convert labels to a one-hot representation
# number of classes
LABEL_DIMENSIONS = 10
train_labels_onehot = tf.keras.utils.to_categorical(
    y=train_labels, num_classes=LABEL_DIMENSIONS)
test_labels_onehot = tf.keras.utils.to_categorical(
    y=test_labels, num_classes=LABEL_DIMENSIONS)
train_labels_onehot = train_labels_onehot.astype(np.float32)
test_labels_onehot = test_labels_onehot.astype(np.float32)

 

Create the Keras model:

"""
3 convolutional layers and 2 pooling layers; the output is flattened and
fed through fully connected layers with a softmax classifier.
"""
inputs = tf.keras.Input(shape=(28, 28, 1))
conv_1 = tf.keras.layers.Conv2D(
    filters=32,
    kernel_size=3,
    # ReLU outputs 0 for negative inputs; LeakyReLU can be used instead
    activation=tf.nn.relu
)(inputs)
pool_1 = tf.keras.layers.MaxPooling2D(
    pool_size=2,
    strides=2
)(conv_1)
conv_2 = tf.keras.layers.Conv2D(
    filters=64,
    kernel_size=3,
    activation=tf.nn.relu
)(pool_1)
pool_2 = tf.keras.layers.MaxPooling2D(
    pool_size=2,
    strides=2
)(conv_2)
conv_3 = tf.keras.layers.Conv2D(
    filters=64,
    kernel_size=3,
    activation=tf.nn.relu
)(pool_2)

conv_flat = tf.keras.layers.Flatten()(conv_3)
dense_64 = tf.keras.layers.Dense(
    units=64,
    activation=tf.nn.relu
)(conv_flat)

predictions = tf.keras.layers.Dense(
    units=LABEL_DIMENSIONS,
    activation=tf.nn.softmax
)(dense_64)

 

Ä£ÐÍÅäÖãº

model = tf.keras.Model(
    inputs=inputs,
    outputs=predictions
)
model.compile(
    loss='categorical_crossentropy',
    optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
    metrics=['accuracy']
)

Creating the Estimator

Specify the number of GPUs and then convert the Keras model into an Estimator:

NUM_GPUS = 2
strategy = tcon.distribute.MirroredStrategy(num_gpus=NUM_GPUS)
config = tf.estimator.RunConfig(train_distribute=strategy)

estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, config=config
)

As mentioned earlier, when writing an application with Estimators you must separate the data input pipeline from the model, so we first create an input function. Prefetching data into a buffer speeds up reading. Because the transfer learning below uses a fairly large dataset, it is worth first covering how to optimize the data input pipeline.

Optimizing the Data Input Pipeline

The TensorFlow data input pipeline consists of three stages:

Extract: read the data, e.g. from local disk or a remote server

Transform: process the data on the CPU, e.g. image flipping, cropping, shuffling

Load: hand the data to the GPU for computation

Data reading:

Typically, while the CPU is preparing data for a computation the GPU/TPU sits idle, and while the GPU/TPU is running the CPU sits idle, so neither device is used efficiently.

tf.data.Dataset.prefetch parallelizes the two: while the GPU/TPU executes training step N, the CPU prepares the data for step N+1, overlapping the work and exploiting the otherwise idle time.
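In code this overlap is a single call at the end of the pipeline, for example:

ds = tf.data.Dataset.from_tensor_slices(np.arange(10)).batch(2)
# while the accelerator consumes batch N, the CPU prepares batch N+1;
# a buffer_size of 1 (one batch ahead) is often enough
ds = ds.prefetch(buffer_size=1)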

tf.contrib.data.parallel_interleave reads data from multiple files in parallel; the number of files read concurrently is specified by cycle_length.
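For example, assuming the input is sharded into TFRecord files matching a hypothetical pattern 'train-*.tfrecord':

files = tf.data.Dataset.list_files('train-*.tfrecord')
dataset = files.apply(tf.contrib.data.parallel_interleave(
    tf.data.TFRecordDataset,   # open each matched file as its own dataset
    cycle_length=4,            # read 4 files concurrently
    sloppy=True))              # allow non-deterministic interleaving for speed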

Data transformation:

tf.data.Dataset.map applies a transformation to each element of the dataset; because the elements are independent, they can be processed in parallel. By default the elements are produced in a deterministic order; if ordering does not matter for training, determinism can be relaxed to speed things up.
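For example (the parse function is a hypothetical placeholder, and disabling determinism through tf.data.Options is only available in later TF 1.x releases, so treat that part as version-dependent):

def _parse(example):
    # hypothetical per-element transform (decode, augment, ...)
    return example

ds = tf.data.Dataset.from_tensor_slices(np.arange(1000))
ds = ds.map(_parse, num_parallel_calls=os.cpu_count())

# optionally trade element order for throughput (TF >= 1.13)
options = tf.data.Options()
options.experimental_deterministic = False
ds = ds.with_options(options)

Returning to the Fashion-MNIST example, the input function is: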

def input_fn(images, labels, epochs, batch_size):
    ds = tf.data.Dataset.from_tensor_slices((images, labels))
    # repeat(None) or repeat(-1) iterates indefinitely
    ds = ds.shuffle(500).repeat(epochs).batch(batch_size).prefetch(batch_size)
    return ds

Model Training

# hook to record the time of each training iteration
class TimeHistory(tf.train.SessionRunHook):
    def begin(self):
        self.times = []

    def before_run(self, run_context):
        self.iter_time_start = time.time()

    def after_run(self, run_context, run_values):
        self.times.append(time.time() - self.iter_time_start)

time_hist = TimeHistory()
BATCH_SIZE = 512
EPOCHS = 5
# wrap input_fn in a lambda so arguments can be passed
estimator.train(lambda: input_fn(train_images,
                                 train_labels_onehot,
                                 epochs=EPOCHS,
                                 batch_size=BATCH_SIZE),
                hooks=[time_hist])

# total training time
total_time = sum(time_hist.times)
print(f"total time with {NUM_GPUS} GPU(s): {total_time} seconds")

# training throughput
avg_time_per_batch = np.mean(time_hist.times)
print(f"{BATCH_SIZE*NUM_GPUS/avg_time_per_batch} images/second with {NUM_GPUS} GPU(s)")

 

The resulting timing output is shown in the original article as a screenshot (omitted here).

Because the Estimator separates the data input from the model, evaluation is simple:

estimator.evaluate(lambda: input_fn(test_images,
                                    test_labels_onehot,
                                    epochs=1,
                                    batch_size=BATCH_SIZE))

Training a New Model with Transfer Learning

We use the Retinal OCT images dataset for transfer learning. The labels are NORMAL, CNV, DME, and DRUSEN, and the dataset contains 84,495 images with a resolution of 512×296.

Read the data and set up the input_fn:

labels = ['CNV', 'DME', 'DRUSEN', 'NORMAL']
train_folder = os.path.join('OCT2017', 'train', '**', '*.jpeg')
test_folder = os.path.join('OCT2017', 'test', '**', '*.jpeg')

 

def input_fn(file_pattern, labels,
             image_size=(224, 224),
             shuffle=False,
             batch_size=64,
             num_epochs=None,
             buffer_size=4096,
             prefetch_buffer_size=None):
    # build a lookup table mapping string labels to int64 class IDs
    table = tcon.lookup.index_table_from_tensor(mapping=tf.constant(labels))
    num_classes = len(labels)

    def _map_func(filename):
        # the class name is the parent directory of the file, e.g. .../CNV/img.jpeg
        label = tf.string_split([filename], delimiter=os.sep).values[-2]
        image = tf.image.decode_jpeg(tf.read_file(filename), channels=3)
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
        image = tf.image.resize_images(image, size=image_size)
        # tf.one_hot returns a one-hot tensor of the given depth, e.g.
        # tf.one_hot([0, 1, 2], depth=3) ->
        # [[1., 0., 0.],
        #  [0., 1., 0.],
        #  [0., 0., 1.]]
        return (image, tf.one_hot(table.lookup(label), num_classes))

    dataset = tf.data.Dataset.list_files(file_pattern, shuffle=shuffle)

    if num_epochs is not None and shuffle:
        dataset = dataset.apply(
            tcon.data.shuffle_and_repeat(buffer_size, num_epochs))
    elif shuffle:
        dataset = dataset.shuffle(buffer_size)
    elif num_epochs is not None:
        dataset = dataset.repeat(num_epochs)

    dataset = dataset.apply(
        tcon.data.map_and_batch(map_func=_map_func,
                                batch_size=batch_size,
                                num_parallel_calls=os.cpu_count()))
    # note: with buffer_size=None, recent TF 1.x versions auto-tune the buffer
    dataset = dataset.prefetch(buffer_size=prefetch_buffer_size)

    return dataset

Using the VGG16 Network

Using Keras, we take a pre-trained VGG16 network and retrain the last five layers:

# include_top=False: do not include the final 3 fully connected layers
keras_vgg16 = tf.keras.applications.VGG16(input_shape=(224, 224, 3),
                                          include_top=False)
output = keras_vgg16.output
output = tf.keras.layers.Flatten()(output)
prediction = tf.keras.layers.Dense(len(labels),
                                   activation=tf.nn.softmax)(output)
model = tf.keras.Model(inputs=keras_vgg16.input,
                       outputs=prediction)
# freeze all layers except the last 4 of VGG16 (the new Dense head
# also stays trainable, giving 5 retrained layers in total)
for layer in keras_vgg16.layers[:-4]:
    layer.trainable = False
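A quick sanity check of the resulting split (layer names follow tf.keras's VGG16 naming):

# only the block-5 layers of VGG16 and the new Flatten/Dense head
# should report trainable=True after the loop above
for layer in model.layers:
    print(layer.name, layer.trainable)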

 

Retrain the model:

# compile the transfer-learning model
model.compile(loss='categorical_crossentropy',
              # use the default learning rate
              optimizer=tf.train.AdamOptimizer(),
              metrics=['accuracy'])
NUM_GPUS = 2
strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=NUM_GPUS)
config = tf.estimator.RunConfig(train_distribute=strategy)

# convert to an Estimator
estimator = tf.keras.estimator.model_to_estimator(model,
                                                  config=config)
BATCH_SIZE = 64
EPOCHS = 1
estimator.train(input_fn=lambda: input_fn(train_folder,
                                          labels,
                                          shuffle=True,
                                          batch_size=BATCH_SIZE,
                                          buffer_size=2048,
                                          num_epochs=EPOCHS,
                                          prefetch_buffer_size=4),
                hooks=[time_hist])

# model evaluation
estimator.evaluate(input_fn=lambda: input_fn(test_folder,
                                             labels,
                                             shuffle=False,
                                             batch_size=BATCH_SIZE,
                                             buffer_size=1024,
                                             num_epochs=1))
The VGG16 Network

VGG16 has 13 convolutional layers and 3 fully connected layers. Its input is [224, 224, 3]; all convolution kernels are 3×3, and pooling uses 2×2 windows with stride 2. Detailed per-layer parameters can be found in the VGG ILSVRC 16 layers configuration; the original article links a large screenshot, which is not reproduced here.
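In place of that screenshot, the same per-layer information can be printed directly from the loaded model:

# prints each layer's name, output shape, and parameter count
keras_vgg16.summary()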

VGG16Ä£Ðͽṹ¹æÕû£¬¼òµ¥£¬Í¨¹ý¼¸¸öС¾í»ýºË£¨3£¬3£©¾í»ý²ã×éºÏ±È´ó¾í»ýºËÈ磨7£¬7£©¸üºÃ£¬ÒòΪ¶à¸ö£¨3£¬3£©¾í»ý±ÈÒ»¸ö´óµÄ¾í»ýÓµÓиü¶àµÄ·ÇÏßÐÔ£¬¸üÉٵIJÎÊý¡£´ËÍ⣬ÑéÖ¤Á˲»¶Ï¼ÓÉîµÄÍøÂç½á¹¹¿ÉÒÔÌáÉýÐÔÄÜ£¨¾í»ý+¾í»ý+¾í»ý+³Ø»¯£¬´úÌæ¾í»ý+³Ø»¯£¬ÕâÑù¼õÉÙWµÄͬʱÓпÉÒÔÄâºÏ¸ü¸´ÔÓµÄÊý¾Ý£©£¬²»¹ýVGG16²ÎÊýÁ¿ºÜ¶à£¬Õ¼ÓÃÄÚ´æ½Ï´ó¡£

Summary

With transfer learning we can train a reasonably good model from relatively little data, and Estimators simplify machine learning programming, especially in distributed environments. When the input data is large, optimization should be considered along all three stages: Extract, Transform, and Load. And of course VGG16 is only one choice among many, e.g. the Inception Model or ResNet Model.

   