Editor's note:
This article introduces how to visualize a neural network with one of TensorFlow's visualization tools; we hope it helps your learning.
It comes from 莫烦PYTHON and was edited and recommended by Alice at 火龙果软件.
|
ѧ»áÓà Tensorflow ×Ô´øµÄ tensorboard È¥¿ÉÊÓ»¯ÎÒÃÇËù½¨Ôì³öÀ´µÄÉñ¾ÍøÂçÊÇÒ»¸öºÜºÃµÄѧϰÀí½â·½Ê½.
ÓÃ×îÖ±¹ÛµÄÁ÷³Ìͼ¸æËßÄãÄãµÄÉñ¾ÍøÂçÊdz¤ÔõÑù,ÓÐÖúÓÚÄã·¢ÏÖ±à³ÌÖмäµÄÎÊÌâºÍÒÉÎÊ.
The result
OK, let's get started.
This time we will look at how to visualize a neural network. Very often we finish building a network but have no picture of it to show anyone. This section introduces a TensorFlow visualization tool
— tensorboard :) With this tool we can see the structure and framework of the whole network at a glance. We take the code from the previous sections as the example: related code
Through this tensorflow tool we can roughly see that the network we will display today looks more or less like this:
(figure: the overall graph of the network in tensorboard)
We can also expand each layer to see some of its concrete internal structure:
(figure: a layer expanded to show its internal structure)
OK, from reading the code and the pictures above we roughly know that there is an input layer (inputs), a hidden layer (layer), and an output layer (output).
Now let's see how to do the visualization.
Building the graph
First, start from the Input:
# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])
We modify the input as follows. First, give xs the name x_in:
xs = tf.placeholder(tf.float32, [None, 1], name='x_in')
È»ºóÔٴζÔysÖ¸¶¨Ãû³Æy_in:
ys = tf.placeholder(tf.float32, [None, 1], name='y_in')
The names specified here will show up later inside the inputs group of the visualized graph.
Using with tf.name_scope('inputs') we can wrap xs and ys into one big group; the group's name is the argument passed to tf.name_scope().
with tf.name_scope('inputs'):
    # define placeholder for inputs to network
    xs = tf.placeholder(tf.float32, [None, 1], name='x_in')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_in')
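Before moving on, here is a rough mental model of what name scopes do. This is a plain-Python toy (the `name_scope` and `node` helpers below are made up for illustration and are not TensorFlow's actual implementation): each nested scope simply prefixes node names, which is exactly what produces grouped boxes such as inputs/x_in in the tensorboard graph.

```python
from contextlib import contextmanager

_scopes = []  # stack of currently open scope names

@contextmanager
def name_scope(name):
    # Toy stand-in for tf.name_scope: push a prefix on entry,
    # pop it again on exit.
    _scopes.append(name)
    try:
        yield
    finally:
        _scopes.pop()

def node(name):
    # Full hierarchical name of a node created inside the open scopes.
    return "/".join(_scopes + [name])

with name_scope('inputs'):
    print(node('x_in'))  # inputs/x_in
    print(node('y_in'))  # inputs/y_in
```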
Next, let's edit the layer. Here is the code before editing:
def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
The name here should be layer; below is the edited version:
def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    with tf.name_scope('layer'):
        Weights = tf.Variable(tf.random_normal([in_size, out_size]))
        # and so on...
After defining the big layer frame, we also need to define the small components inside that 'frame' (Weights, biases, and the activation function). Start with Weights: the method is the same as above, using tf.name_scope(), and we can also give the Weights variable itself the name W.
That is:
def add_layer(inputs, in_size, out_size, activation_function=None):
    # define layer name
    with tf.name_scope('layer'):
        # define weights name
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        # and so on...
½Ó׿ÌÐø¶¨Òåbiases £¬ ¶¨Ò巽ʽͬÉÏ¡£
def add_layer(inputs, in_size, out_size, activation_function=None):
    # define layer name
    with tf.name_scope('layer'):
        # define weights name
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        # define biases name
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
        # define Wx_plus_b name
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        # and so on...
The activation_function can be ignored for now: when you use one of tensorflow's built-in activation functions, tensorflow adds a name for it by default.
Finally, the layer looks like this:
def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = tf.Variable(
                tf.random_normal([in_size, out_size]),
                name='W')
        with tf.name_scope('biases'):
            biases = tf.Variable(
                tf.zeros([1, out_size]) + 0.1,
                name='b')
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(
                tf.matmul(inputs, Weights),
                biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs
Ч¹ûÈçÏ£º£¨ÓÐûÓп´¼û¸Õ²Å¶¨ÒålayerÀïÃæµÄ¡°ÄÚ²¿¹¹¼þ¡±ÄØ£¿£©
(figure: the internal components of the layer shown in tensorboard)
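Setting the naming aside, what add_layer computes is just inputs · Weights + biases, optionally passed through the activation. The helper below (`add_layer_math` is a made-up name; plain Python with toy numbers, no TensorFlow required) sketches that arithmetic:

```python
def add_layer_math(inputs, weights, biases, activation_function=None):
    # inputs: list of sample rows; weights: in_size x out_size matrix;
    # biases: one bias per output unit.
    outputs = []
    for row in inputs:
        # one output value per column of the weight matrix
        wx_plus_b = [
            sum(x * w for x, w in zip(row, col)) + b
            for col, b in zip(zip(*weights), biases)
        ]
        if activation_function is not None:
            wx_plus_b = [activation_function(v) for v in wx_plus_b]
        outputs.append(wx_plus_b)
    return outputs

relu = lambda v: max(0.0, v)
# one sample with one feature, a layer mapping 1 input to 2 units
print(add_layer_math([[2.0]], [[1.0, -1.0]], [0.1, 0.1], relu))  # [[2.1, 0.0]]
```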
Finally, edit the loss part: add with tf.name_scope() above loss, and name it loss.
# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.reduce_sum(
            tf.square(ys - prediction),
            reduction_indices=[1]
        ))
This statement is what 'draws' loss, as shown here:
(figure: the loss node shown in the tensorboard graph)
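As a sanity check on what this graph of ops computes: tf.square is elementwise, reduce_sum with reduction_indices=[1] sums across each sample's row, and reduce_mean averages over the batch. A plain-Python sketch with toy numbers (`mse_loss` is a made-up helper name):

```python
def mse_loss(ys, prediction):
    # squared error, summed within each sample (reduction over axis 1)
    per_sample = [
        sum((y - p) ** 2 for y, p in zip(y_row, p_row))
        for y_row, p_row in zip(ys, prediction)
    ]
    # then averaged over the batch (reduce_mean)
    return sum(per_sample) / len(per_sample)

print(mse_loss([[1.0], [2.0]], [[0.0], [4.0]]))  # (1 + 4) / 2 = 2.5
```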
Use with tf.name_scope() once more to edit the train_step part, like this:
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
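What GradientDescentOptimizer(0.1).minimize(loss) does at every training step is apply w ← w − 0.1 · ∂loss/∂w to each variable. A one-parameter plain-Python sketch of that update rule (the `gradient_descent` helper and the (w − 3)² toy loss are made up for illustration):

```python
def gradient_descent(grad, w, learning_rate=0.1, steps=100):
    # repeatedly apply the update w <- w - lr * grad(w),
    # as the optimizer does for each variable in the graph
    for _ in range(steps):
        w = w - learning_rate * grad(w)
    return w

# toy loss (w - 3)^2, whose gradient is 2 * (w - 3); minimum at w = 3
w = gradient_descent(lambda w: 2 * (w - 3.0), w=0.0)
print(round(w, 4))  # 3.0
```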
ÎÒÃÇÐèҪʹÓà tf.summary.FileWriter() (tf.train.SummaryWriter()
ÕâÖÖ·½Ê½ÒѾÔÚ tf >= 0.12 °æ±¾ÖÐÞðÆú) ½«ÉÏÃæ¡®»æ»¡¯³öµÄͼ±£´æµ½Ò»¸öĿ¼ÖУ¬ÒÔ·½±ãºóÆÚÔÚä¯ÀÀÆ÷ÖпÉÒÔä¯ÀÀ¡£
Õâ¸ö·½·¨Öеĵڶþ¸ö²ÎÊýÐèҪʹÓÃsess.graph £¬ Òò´ËÎÒÃÇÐèÒª°ÑÕâ¾ä»°·ÅÔÚ»ñÈ¡sessionµÄºóÃæ¡£
ÕâÀïµÄgraphÊǽ«Ç°Ã涨ÒåµÄ¿ò¼ÜÐÅÏ¢ÊÕ¼¯ÆðÀ´£¬È»ºó·ÅÔÚlogs/Ŀ¼ÏÂÃæ¡£
sess = tf.Session()  # get session
# tf.train.SummaryWriter soon be deprecated, use following
writer = tf.summary.FileWriter("logs/", sess.graph)
Finally, run the following command in your terminal:
tensorboard --logdir logs
Then copy the URL printed in the terminal into your browser, and you will see the graph framework we defined earlier.
tensorboard has many other parameters which are worth exploring; you can run tensorboard --help to see its detailed options. The full final code is here.
Problems you may run into
(1) The browser tensorboard is compatible with is "Google Chrome". With other browsers there is no guarantee that everything displays correctly.
(2) Also note: if http://0.0.0.0:6006 does not open for you, use http://localhost:6006 instead; this is the most common problem.
(3) Make sure you run the tensorboard command from the directory that contains your logs folder. If you run it somewhere else, such as Desktop, you may not see the graph. For example, with the layout below, you need to cd into project and run /project > tensorboard --logdir logs
- project
    - logs
    model.py
    env.py
(4) Some readers in the discussion area use the python3.5 virtual environment under anaconda. If you type the tensorboard command and get the error "tensorboard" is not recognized as an internal or external command..., the key to the fix is to activate TensorFlow first: open the Anaconda Prompt as administrator, type activate tensorflow, and then run the tensorboard command following the steps above.