Editor's note:
This article introduces the parts that make up a neuron called the "perceptron", shows how a perceptron implements logic operations and how it is trained, and will hopefully be of some help to readers.
This article comes from 博客园 (cnblogs) and was edited and recommended by Alice of 火龙果软件.
δÀ´½«ÊÇÈ˹¤ÖÇÄܺʹóÊý¾ÝµÄʱ´ú£¬ÊǸ÷Ðи÷ҵʹÓÃÈ˹¤ÖÇÄÜÔÚÔÆÉÏ´¦Àí´óÊý¾ÝµÄʱ´ú£¬Éî¶Èѧϰ½«ÊÇÐÂʱ´úµÄÒ»´óÀûÆ÷£¬ÔÚ´ËÎÒ½«´ÓÁ㿪ʼ¼Ç¼Éî¶ÈѧϰµÄѧϰÀú³Ì¡£
ÎÒÏ£ÍûÔÚѧϰ¹ý³ÌÖÐ×öµ½ÒÔϼ¸µã£º
Á˽â¸÷ÖÖÉñ¾ÍøÂçÉè¼ÆÔÀí¡£
ÕÆÎÕ¸÷ÖÖÉî¶ÈѧϰËã·¨µÄpython±à³ÌʵÏÖ¡£
ÔËÓÃÉî¶Èѧϰ½â¾öʵ¼ÊÎÊÌâ¡£
ÈÃÎÒÃÇ¿ªÊ¼Ì¤ÉÏÉî¶È¶ÈѧϰµÄÕ÷³Ì¡£
I. The Perceptron Prototype
ÏëÒªÁ˽⡰Éñ¾ÍøÂ硱£¬ÎÒÃÇÐèÒªÁ˽âÒ»ÖÖ½Ð×ö¡°¸ÐÖªÆ÷¡±µÄÉñ¾Ôª¡£¸ÐÖªÆ÷ÔÚ 20 ÊÀ¼ÍÎå¡¢ÁùÄê´úÓÉ¿ÆÑ§¼Ò
Frank Rosenblatt ·¢Ã÷,¸ö¸ÐÖªÆ÷½ÓÊܸöÊ䣬²¢²ú¸öÊä³ö¡£
The figure below shows a perceptron:

ÀýÖеĸÐÖªÆ÷ÓÐÈý¸öÊäx1¡¢x2¡¢x3£¨1*w0×÷ΪƫÖ㬺óÃæ»á½²µ½£©¡£Í¨³£¿ÉÒÔÓиü¶à»ò¸üÉÙÊä¡£
Rosenblatt proposed a simple rule for computing the output. He introduced weights w1, w2, w3, ..., real numbers expressing how important each input is to the output. The neuron's output, 0 or 1, is determined by whether the weighted sum Σj wj·xj is below or above some threshold. Like the weights, the threshold is a real number and is a parameter of the neuron. In more precise algebraic form:

output = 0 if Σj wj·xj ≤ threshold
output = 1 if Σj wj·xj > threshold

This is all that a perceptron does!
¶øÎÒÃǰÑãØÖµÒƶ¯µ½²»µÈʽ×ó±ß£¬²¢ÓøÐÖªÆ÷µÄÆ«ÖÃb=-threshold´úÌæ£¬ÓÃÆ«Ööø²»ÓÃãØÖµ¡£ÆäÖÐʵÏÖÆ«ÖõÄÒ»ÖÖ·½·¨¾ÍÊÇÈçǰͼËùʾÔÚÊäÈëÖÐÒýÈëÒ»¸öÆ«ÖÃÉñ¾Ôªx0=1£¬Ôòb=x0*w0,ÄÇô¸ÐÖªÆ÷µÄ¹æÔò¿ÉÒÔÖØÐ´.
´Ëʱ¾Í¿ÉÒÔʹÓý×Ô¾º¯ÊýÀ´×÷Ϊ¸ÐÖªÆ÷µÄ¼¤Àøº¯Êý¡£
At this point we can see that a perceptron is made up of the following parts:
Input weights: a perceptron can take multiple inputs (x1, x2, ..., xn | xi ∈ R); each input carries a weight wi ∈ R, and there is also a bias term b ∈ R, the w0 in the figure above.
Activation function: there are many possible choices; here we take the following step function as the activation function f:

f(z) = 1 if z > 0, and f(z) = 0 otherwise
Output: the perceptron's output is computed by the formula

y = f(w·x + b)    (formula 1)
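To make formula (1) concrete, here is a minimal sketch in Python (the names `f` and `perceptron_output` are my own, and the sample weights are only an illustration):

```python
def f(z):
    # step activation: 1 if z > 0, else 0
    return 1 if z > 0 else 0

def perceptron_output(x, w, b):
    # formula (1): y = f(w . x + b)
    return f(sum(xi * wi for xi, wi in zip(x, w)) + b)

# with weights [0.5, 0.5] and bias -0.8:
print(perceptron_output([1, 1], [0.5, 0.5], -0.8))  # 0.5 + 0.5 - 0.8 = 0.2 > 0, prints 1
print(perceptron_output([0, 1], [0.5, 0.5], -0.8))  # 0.5 - 0.8 < 0, prints 0
```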
Next we will use an example to understand the perceptron model. Building the model is the foundation of solving a problem with deep learning methods.
II. Applying the Perceptron
1. Implementing logic operations with a perceptron
We will design a perceptron that implements the and operation. Every programmer knows that and is a binary function (a function of two arguments); its truth table is below:

x1  x2  y
0   0   0
0   1   0
1   0   0
1   1   1
For convenience of computation, we use 0 for false and 1 for true.
We set w1 = 0.5, w2 = 0.5, b = -0.8, and take the activation function f to be the step function written above. The perceptron is then equivalent to the and function. Let us verify:
Feed in the first row of the truth table, i.e. x1 = 0 and x2 = 0. According to formula (1), the output is:

y = f(w·x + b) = f(0.5*0 + 0.5*0 - 0.8) = f(-0.8) = 0

That is, when x1 and x2 are both 0, y is 0, which is the first row of the truth table. The reader can verify the second, third, and fourth rows in the same way.
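The remaining rows can be checked mechanically; a short sketch, assuming the step activation f and the values w1 = w2 = 0.5, b = -0.8 from above:

```python
def f(z):
    # step activation
    return 1 if z > 0 else 0

w1, w2, b = 0.5, 0.5, -0.8

# check every row of the and truth table
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = f(w1 * x1 + w2 * x2 + b)
    print('%d and %d = %d' % (x1, x2, y))
# prints 0, 0, 0, 1 respectively: exactly the and truth table
```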
We can see that the perceptron itself is a linear classifier: it classifies its input by comparing the weighted sum of the inputs against the threshold.
So any linear classification or linear regression problem can be solved with a perceptron. The Boolean operations above can be viewed as binary classification problems: given an input, output 0 (the input belongs to class 0) or 1 (it belongs to class 1).
As shown below, the and operation is a linear classification problem: a single straight line can separate class 0 (false, drawn as red crosses) from class 1 (true, drawn as green dots).

However, the perceptron cannot implement the xor operation. As the figure below shows, xor is not linear: no single straight line can separate class 0 from class 1.
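This failure can be seen empirically. The sketch below (my own illustration, using the same training rule introduced in the next subsection) trains a perceptron on the xor truth table; because no straight line separates the two classes, at least one of the four points is always misclassified, no matter how long we train:

```python
def f(z):
    # step activation
    return 1 if z > 0 else 0

# xor truth table: the output is 1 exactly when the inputs differ
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

w, b, eta = [0.0, 0.0], 0.0, 0.1
for _ in range(100):  # far more epochs than the and example needs
    for x, t in samples:
        y = f(w[0] * x[0] + w[1] * x[1] + b)
        w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
        b += eta * (t - y)

errors = sum(f(w[0] * x[0] + w[1] * x[1] + b) != t for x, t in samples)
print(errors)  # always >= 1: xor is not linearly separable
```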

2. Training the perceptron
At this point you may be wondering how the weight and bias values above were obtained. This is the job of the perceptron training algorithm: initialize the weights and the bias to 0, then iteratively adjust wi and b by the perceptron rule below until training is complete.
wi ← wi + Δwi
b ← b + Δb
where:
Δwi = η(t − y)xi
Δb = η(t − y)
Here wi is the weight corresponding to input xi, and b is the bias term; in fact, b can be viewed as the weight w0 of an input whose value is always 1. t is the true value of the training sample, usually called the label, while y is the perceptron's output, computed by formula (1). η is a constant called the learning rate, which controls how much the weights are adjusted at each step.
Each step, we take one sample from the training data, feed its input vector x to the perceptron to compute the output y, and then adjust the weights by the rule above, once per sample. After many epochs (i.e. after the whole training set has been processed repeatedly), the perceptron's weights have been trained so that it realizes the target function.
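One update step of this rule can be worked through by hand. A minimal sketch, assuming zero initial weights, learning rate η = 0.1, and a single training sample x = [1, 1] with label t = 1:

```python
def f(z):
    # step activation
    return 1 if z > 0 else 0

w, b, eta = [0.0, 0.0], 0.0, 0.1   # initial weights, bias, learning rate
x, t = [1, 1], 1                   # one training sample and its label

y = f(w[0] * x[0] + w[1] * x[1] + b)                 # current output: f(0) = 0
delta = t - y                                        # t - y = 1
w = [wi + eta * delta * xi for wi, xi in zip(w, x)]  # delta_wi = eta*(t-y)*xi
b += eta * delta                                     # delta_b  = eta*(t-y)

print(w, b)  # [0.1, 0.1] 0.1
```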
III. Implementing a Perceptron in Python
from functools import reduce

class Perceptron(object):
    def __init__(self, input_num, activator):
        '''
        Initialize the perceptron: set the number of inputs and the
        activation function. The activation function's type is double -> double.
        '''
        self.activator = activator
        # initialize the weight vector to 0
        self.weights = [0.0 for _ in range(input_num)]
        # initialize the bias to 0
        self.bias = 0.0

    def __str__(self):
        '''
        Print the learned weights and bias.
        '''
        return 'weights\t:%s\nbias\t:%f\n' % (self.weights, self.bias)

    def predict(self, input_vec):
        '''
        Take an input vector and return the perceptron's output.
        '''
        # zip input_vec [x1,x2,x3,...] with weights [w1,w2,w3,...]
        # into [(x1,w1),(x2,w2),(x3,w3),...],
        # then use map to compute [x1*w1, x2*w2, x3*w3],
        # and finally use reduce to sum the products
        return self.activator(
            reduce(lambda a, b: a + b,
                   map(lambda xw: xw[0] * xw[1],
                       zip(input_vec, self.weights)),
                   0.0) + self.bias)

    def train(self, input_vecs, labels, iteration, rate):
        '''
        Train on a set of input vectors with the label for each vector,
        for a given number of iterations at a given learning rate.
        '''
        for i in range(iteration):
            self._one_iteration(input_vecs, labels, rate)

    def _one_iteration(self, input_vecs, labels, rate):
        '''
        One iteration: run through all the training data once.
        '''
        # pair inputs with labels into a list of samples
        # [(input_vec, label), ...]; each sample is (input_vec, label)
        samples = zip(input_vecs, labels)
        # for each sample, update the weights by the perceptron rule
        for (input_vec, label) in samples:
            # compute the output under the current weights
            output = self.predict(input_vec)
            # update the weights
            self._update_weights(input_vec, output, label, rate)

    def _update_weights(self, input_vec, output, label, rate):
        '''
        Update the weights by the perceptron rule.
        '''
        # zip input_vec [x1,x2,x3,...] with weights [w1,w2,w3,...]
        # into [(x1,w1),(x2,w2),(x3,w3),...],
        # then apply the perceptron rule to each pair
        delta = label - output
        self.weights = list(map(
            lambda xw: xw[1] + rate * delta * xw[0],
            zip(input_vec, self.weights)))
        # update the bias
        self.bias += rate * delta

def f(x):
    '''
    Define the activation function f.
    '''
    return 1 if x > 0 else 0

def get_training_dataset():
    '''
    Build the training data from the and truth table.
    '''
    # list of input vectors
    input_vecs = [[1,1], [0,0], [1,0], [0,1]]
    # expected outputs; note they correspond one-to-one with the inputs:
    # [1,1] -> 1, [0,0] -> 0, [1,0] -> 0, [0,1] -> 0
    labels = [1, 0, 0, 0]
    return input_vecs, labels

def train_and_perceptron():
    '''
    Train a perceptron using the and truth table.
    '''
    # create a perceptron with 2 inputs (and is a binary function), activation f
    p = Perceptron(2, f)
    # train for 10 iterations at learning rate 0.1
    input_vecs, labels = get_training_dataset()
    p.train(input_vecs, labels, 10, 0.1)
    # return the trained perceptron
    return p

if __name__ == '__main__':
    # train the and perceptron
    and_perception = train_and_perceptron()
    # print the learned weights
    print(and_perception)
    # test
    print('1 and 1 = %d' % and_perception.predict([1, 1]))
    print('0 and 0 = %d' % and_perception.predict([0, 0]))
    print('1 and 0 = %d' % and_perception.predict([1, 0]))
    print('0 and 1 = %d' % and_perception.predict([0, 1]))