Advanced Image Processing in Python: Hands-On with Image Transformation Algorithms
 
 2019-4-28
   
 
±à¼­ÍƼö:

±¾ÎÄÀ´×ÔÓÚcsdn£¬ÎÄÕ½éÉÜÁË»ù±¾Í¼Ïñ´¦Àí¼¼Êõ£¬×÷ÕßÖ÷ҪʹÓÃSciKit-Image - numpyÖ´Ðдó¶àÊý²Ù×÷¡£

ͼƬÁÁ¶Èת»»

Let's start by loading an image:

%matplotlib inline
import imageio
import matplotlib.pyplot as plt
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore", category=matplotlib.cbook.mplDeprecation)
pic = imageio.imread('img/parrot.jpg')
plt.figure(figsize = (6,6))
plt.imshow(pic);
plt.axis('off');

ͼƬ¸º±ä»»

An intensity transformation is defined mathematically as:

s = T(r)

where r is a pixel value of the input image and s is the corresponding pixel value of the output image. T is a transformation function that maps each value of r to a value of s.

The negative transformation inverts the image. In the negative transformation, each value of the input image is subtracted from L - 1 and mapped onto the output image.

In this case, the transformation is performed with the following formula:

s = (L - 1) - r

Since L = 256 for an 8-bit image, every pixel value is subtracted from 255. The net effect is that the originally bright parts of the image become dark and the dark parts become bright, which produces the negative.

negative = 255 - pic # neg = (L-1) - img
plt.figure(figsize = (6,6))
plt.imshow(negative);
plt.axis('off');

ÈÕ־ת»»

ÈÕ־ת»»¿ÉÒÔÓÉÒÔϹ«Ê½¶¨Ò壺

s=c*log(r+1)

ÆäÖУ¬sºÍrÊÇÊä³öºÍÊäÈëͼƬµÄÏñËØÖµ£¬cÊdz£Êý¡£Öµ1±»Ìí¼Óµ½ÊäÈëͼƬµÄÿ¸öÏñËØÖµ£¬Èç¹ûͼƬÖеÄÏñËØÇ¿¶ÈΪ0£¬Ôòlog£¨0£©µÈÓÚÎÞÇî´ó£¬Ìí¼Ó1µÄ×÷ÓÃÊÇʹ×îСֵÖÁÉÙΪ1¡£

ÔÚ¶ÔÊý±ä»»ÆÚ¼ä£¬Óë½Ï¸ßÏñËØÖµÏà±È£¬Í¼ÏñÖеݵÏñËØ±»À©Õ¹¡£½Ï¸ßµÄÏñËØÖµÔÚÈÕ־ת»»Öб»Ñ¹Ëõ£¬¶ÔÊý±ä»»ÖеÄcÖµ¿Éµ÷ÕûÔöÇ¿ÀàÐÍ¡£

%matplotlib inline
import imageio
import numpy as np
import matplotlib.pyplot as plt
pic = imageio.imread('img/parrot.jpg')
gray = lambda rgb : np.dot(rgb[... , :3] , [0.299 , 0.587, 0.114])
gray = gray(pic)
'''
log transform
-> s = c*log(1+r)
So, we calculate constant c to estimate s
-> c = (L-1)/log(1+|I_max|)
'''
max_ = np.max(gray)
def log_transform():
    return (255 / np.log(1 + max_)) * np.log(1 + gray)
plt.figure(figsize = (5,5))
plt.imshow(log_transform(), cmap = plt.get_cmap(name = 'gray'))
plt.axis('off');

Gamma Correction

Gamma correction is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still-image systems. Gamma correction is also known as the power-law transform. First, the pixel intensities of the image must be scaled from the range [0, 255] to [0, 1.0]. The gamma-corrected output image is then obtained by applying:

V_o = V_i ^ (1/G)

where V_i is the input image and G is the gamma value; the output image V_o is then scaled back to the range [0, 255].

A gamma value G < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; a gamma value < 1 shifts the image toward the darker end of the spectrum.

Conversely, a gamma value G > 1 is called a decoding gamma, and the process of applying the expansive power-law nonlinearity is called gamma expansion. A gamma value > 1 makes the image appear brighter, and a gamma value G = 1 has no effect on the input image:

import imageio
import matplotlib.pyplot as plt
# Gamma encoding
pic = imageio.imread('img/parrot.jpg')
gamma = 2.2 # Gamma < 1 ~ Dark ; Gamma > 1 ~ Bright
gamma_correction = ((pic/255) ** (1/gamma))
plt.figure(figsize = (5,5))
plt.imshow(gamma_correction)
plt.axis('off');

Why Gamma Correction?

Gamma correction is applied because the colors and brightness perceived by the human eye differ from what the sensor in a digital camera records. A digital camera responds linearly to luminance, while the human eye responds nonlinearly. Gamma correction accounts for this relationship.

There are several other intensity transformation functions as well, for example (a minimal contrast-stretching sketch follows the list):

Contrast Stretching

Intensity-Level Slicing

Bit-Plane Slicing
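To illustrate the first item, here is a minimal contrast-stretching sketch (my own example, not from the original article). It assumes the grayscale parrot image `gray` computed in the log-transformation section and uses skimage.exposure.rescale_intensity to stretch the intensities between the 2nd and 98th percentiles to the full output range.

import numpy as np
import matplotlib.pyplot as plt
from skimage import exposure

# Assumed: `gray` is the grayscale parrot image computed earlier.
p2, p98 = np.percentile(gray, (2, 98))
stretched = exposure.rescale_intensity(gray, in_range=(p2, p98), out_range=(0, 255))

plt.figure(figsize=(5, 5))
plt.imshow(stretched, cmap='gray')
plt.axis('off');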

Convolution

A computer ultimately represents an image as an array of pixel values. Depending on the resolution and size of the image, this might be, say, a 32 x 32 x 3 array, where the 3 stands for the RGB values or channels. Suppose we have a color image in PNG format with a size of 480 x 480; the representative array would be 480 x 480 x 3, and each of these numbers lies between 0 and 255, describing the pixel intensity at that point.

Suppose the input is a 32 x 32 x 3 array of pixel values. How should we understand convolution? You can picture it as a flashlight shining on the top-left corner of the image, illuminating a 3 x 3 region, and sliding over every region of the input image. In machine learning this flashlight is called a filter or kernel, and the region it illuminates is called the receptive field.

This filter is also an array, whose numbers are called weights or parameters. Note that the depth of the filter must match the depth of the input, so the filter's dimensions are 3 x 3 x 3.

ͼÏñÄں˻òÂ˾µÊÇÒ»¸öС¾ØÕó£¬ÓÃÓÚÓ¦ÓÿÉÄÜÔÚPhotoshop»òGimpÖÐÕÒµ½µÄһЩ±ä»»£¬ÀýÈçÄ£ºý¡¢Èñ»¯»ò¸¡µñµÈ£¬ËüÃÇÒ²ÓÃÓÚ»úÆ÷ѧϰµÄ¹¦ÄÜÌáÈ¡£¬ÕâÊÇÒ»ÖÖÓÃÓÚÈ·¶¨Í¼ÏñÖØÒª²¿·ÖµÄ¼¼Êõ¡£

ÏÖÔÚ£¬ÈÃÎÒÃǽ«¹ýÂËÆ÷·ÅÔÚ×óÉϽǡ£µ±Â˲¨Æ÷Î§ÈÆÊäÈëͼÏñ»¬¶¯»ò¾í»ýʱ£¬Ëü½«Â˲¨Æ÷ÖеÄÖµ³ËÒÔͼÏñµÄԭʼÏñËØÖµ£¨Ò²³ÆÎª¼ÆËãÔªËØ³Ë·¨£©¡£ÏÖÔÚ£¬ÎÒÃǶÔÊäÈë¾íÉϵÄÿ¸öλÖÃÖØ¸´´Ë¹ý³Ì¡£ÏÂÒ»²½Êǽ«¹ýÂËÆ÷ÏòÓÒÒÆ¶¯Ò»¸ö²½·ù£¬ÒÀ´ËÀàÍÆ¡£ÊäÈë¾íÉϵÄÿ¸öλÖö¼»áÉú³ÉÒ»¸öÊý×Ö¡£ÎÒÃÇÒ²¿ÉÒÔÑ¡Ôñ²½·ùΪ2ÉõÖÁ¸ü¶à£¬µ«ÎÒÃDZØÐë±£Ö¤¸ÃÊýÖµÊʺÏÊäÈëͼÏñ¡£

µ±Â˾µ»¬¹ýËùÓÐλÖúó£¬ÎÒÃǻᷢÏÖʣϵÄÊÇÒ»¸ö30x30x1µÄÊý×ÖÊý×飬ÎÒÃǽ«Æä³ÆÎªÒªËØÍ¼¡£ÎÒÃǵõ½30x30ÕóÁеÄÔ­ÒòÊÇÓÐ300¸ö²»Í¬µÄλÖã¬3x3Â˾µ¿ÉÒÔ·ÅÔÚ32x32ÊäÈëͼÏñÉÏ¡£Õâ900¸öÊý×ÖÓ³Éäµ½30x30ÕóÁС£ÎÒÃÇ¿ÉÒÔͨ¹ýÒÔÏ·½Ê½¼ÆËã¾í»ýͼÏñ£º

Convolved output size: (N - F)/S + 1

ÆäÖУ¬NºÍF·Ö±ð´ú±íÊäÈëͼÏñºÍÄں˴óС£¬S´ú±í²½·ù»ò²½³¤¡£

Suppose we have a 3 x 3 filter convolving over a 5 x 5 matrix; according to the formula, we should get a 3 x 3 matrix, technically called an activation map or feature map.
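As a quick sanity check of the formula (my own sketch, not part of the original article): for a 5 x 5 input, a 3 x 3 kernel, and stride 1 we get (5 - 3)/1 + 1 = 3, and scipy's 'valid' convolution indeed returns a 3 x 3 feature map.

import numpy as np
from scipy.signal import convolve2d

N, F, S = 5, 3, 1                     # input size, kernel size, stride
print((N - F) // S + 1)               # -> 3

image = np.random.rand(N, N)          # toy 5 x 5 input
kernel = np.ones((F, F)) / F**2       # toy 3 x 3 averaging kernel
print(convolve2d(image, kernel, mode='valid').shape)   # -> (3, 3)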

ʵ¼ÊÉÏ£¬ÎÒÃÇ¿ÉÒÔʹÓò»Ö¹Ò»¸ö¹ýÂËÆ÷£¬ÎÒÃǵÄÊä³öÁ¿½«ÊÇ28 * 28 * n£¨ÆäÖÐnÊǼ¤»îͼµÄÊýÁ¿£©¡£Í¨¹ýʹÓøü¶à¹ýÂËÆ÷£¬ÎÒÃÇÄܹ»¸üºÃ±£Áô¿Õ¼äά¶È¡£

¶ÔÓÚͼÏñ¾ØÕó±ß½çÉϵÄÏñËØ£¬Äں˵ÄÒ»Ð©ÔªËØ¿ÉÄÜÕ¾ÔÚͼÏñ¾ØÕóÖ®Í⣬Òò´Ë²»¾ßÓÐÀ´×ÔͼÏñ¾ØÕóµÄÈκζÔÓ¦ÔªËØ¡£ÔÚÕâÖÖÇé¿öÏ£¬ÎÒÃÇ¿ÉÒÔÏû³ýÕâЩλÖõľí»ýÔËË㣬×îÖÕ³öÏÖСÓÚÊäÈëµÄÊä³ö¾ØÕ󣬻òÕßÎÒÃÇ¿ÉÒÔ½«Ìî³äÓ¦ÓÃÓÚÊäÈë¾ØÕó¡£

We can apply a custom uniform (box) window to the image:

%%time
import numpy as np
import imageio
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
def Convolution(image, kernel):
    # convolve each channel separately, then stack the results back into an RGB image
    conv_bucket = []
    for d in range(image.ndim):   # image.ndim == 3 == number of RGB channels here
        conv_channel = convolve2d(image[:, :, d], kernel,
                                  mode="same", boundary="symm")
        conv_bucket.append(conv_channel)
    return np.stack(conv_bucket, axis=2).astype("uint8")

kernel_sizes = [9, 15, 30, 60]
fig, axs = plt.subplots(nrows=1, ncols=len(kernel_sizes), figsize=(15, 15))
pic = imageio.imread('img/parrot.jpg')
for k, ax in zip(kernel_sizes, axs):
    kernel = np.ones((k, k))
    kernel /= np.sum(kernel)
    ax.imshow(Convolution(pic, kernel))
    ax.set_title("Convolved By Kernel: {}".format(k))
    ax.set_axis_off()
Wall time: 43.5 s

ÂÖÀªÄںˣ¨ÓÖÃû¡°±ßÔµ¡±Äںˣ©ÓÃÓÚÍ»³öÏÔʾÏñËØÖµÖ®¼äµÄ²îÒ죬ÁÁ¶È½Ó½üµÄÏàÁÚÏñËØÅԱߵÄÏñËØÔÚÐÂͼÏñÖн«ÏÔʾΪºÚÉ«£¬¶ø²îÒìÐԽϴóµÄÏàÁÚÏñËØµÄÅÔ±ßÏñËØ½«ÏÔʾΪ°×É«¡£

%%time
from skimage import color
from skimage import exposure
import numpy as np
import imageio
import matplotlib.pyplot as plt
# import image
pic = imageio.imread('img/crazycat.jpeg')
plt.figure(figsize = (5,5))
plt.imshow(pic)
plt.axis('off');
Wall time: 34.9 ms

from scipy.signal import convolve2d   # needed for the convolution below
# Convert the image to grayscale
img = color.rgb2gray(pic)
# outline kernel - used for edge detection
kernel = np.array([[-1, -1, -1],
                   [-1, 8, -1],
                   [-1, -1, -1]])
# we use 'valid', which means we do not add zero padding to our image
edges = convolve2d(img, kernel, mode='valid')
# Adjust the contrast of the filtered image by applying histogram equalization
edges_equalized = exposure.equalize_adapthist(edges / np.max(np.abs(edges)),
                                              clip_limit=0.03)
# plot the equalized edges
plt.figure(figsize=(5, 5))
plt.imshow(edges_equalized, cmap='gray')
plt.axis('off');

ÈÃÎÒÃÇÓò»Í¬ÀàÐ͵ĹýÂËÆ÷ÊÔһϣ¬±ÈÈçSharpen Kernel¡£Èñ»¯ÄÚºËÇ¿µ÷ÏàÁÚÏñËØÖµµÄÖ®¼ä²îÒ죬Õâ»áÈÃͼÏñ¿´ÆðÀ´¸üÉú¶¯¡£ÈÃÎÒÃǽ«±ßÔµ¼ì²âÄÚºËÓ¦ÓÃÓÚÈñ»¯Äں˵ÄÊä³ö£¬²¢Ê¹Óà box blur filter½øÒ»²½±ê×¼»¯¡£

%%time
from skimage import color
from skimage import exposure
from scipy.signal import convolve2d
import numpy as np
import imageio
import matplotlib.pyplot as plt
# Convert the image to grayscale
img = color.rgb2gray(pic)
# apply sharpen filter to the original image
sharpen_kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]])
image_sharpen = convolve2d(img, sharpen_kernel, mode='valid')
# apply edge kernel to the output of the sharpen kernel
edge_kernel = np.array([[-1, -1, -1],
                        [-1, 8, -1],
                        [-1, -1, -1]])
edges = convolve2d(image_sharpen, edge_kernel, mode='valid')
# apply a normalized box blur filter to the edge-detected image
blur_kernel = np.array([[1, 1, 1],
                        [1, 1, 1],
                        [1, 1, 1]]) / 9.0
denoised = convolve2d(edges, blur_kernel, mode='valid')
# Adjust the contrast of the filtered image by applying histogram equalization
denoised_equalized = exposure.equalize_adapthist(denoised / np.max(np.abs(denoised)),
                                                 clip_limit=0.03)
plt.figure(figsize = (5,5))
plt.imshow(denoised_equalized, cmap='gray')
plt.axis('off')
plt.show()

ΪÁËÄ£ºýͼÏñ£¬¿ÉÒÔʹÓôóÁ¿²»Í¬µÄ´°¿ÚºÍ¹¦ÄÜ¡£ ÆäÖÐ×î³£¼ûµÄÊÇGaussian window¡£ÎªÁ˽âËü¶ÔͼÏñµÄ×÷Óã¬ÈÃÎÒÃǽ«´Ë¹ýÂËÆ÷Ó¦ÓÃÓÚͼÏñ¡£

%%time
from skimage import color
from skimage import exposure
from scipy.signal import convolve2d
import numpy as np
import imageio
import matplotlib.pyplot as plt
# import image
pic = imageio.imread('img/parrot.jpg')
# Convert the image to grayscale
img = color.rgb2gray(pic)
# gaussian kernel - used for blurring
kernel = np.array([[1,2,1],
[2,4,2],
[1,2,1]])
kernel = kernel / np.sum(kernel)
# we use 'valid', which means we do not add zero padding to our image
edges = convolve2d(img, kernel, mode='valid')
# Adjust the contrast of the filtered image by applying histogram equalization
edges_equalized = exposure.equalize_adapthist(edges / np.max(np.abs(edges)),
                                              clip_limit=0.03)
# plot the edges_clipped
plt.figure(figsize = (5,5))
plt.imshow(edges_equalized, cmap='gray')
plt.axis('off')
plt.show()

ͨ¹ýʹÓøü¶à´°»§£¬¿ÉÒÔÌáÈ¡²»Í¬ÖÖÀàµÄÐÅÏ¢¡£ Sobel kernels ½öÓÃÓÚÏÔÊ¾ÌØ¶¨·½ÏòÉÏÏàÁÚÏñËØÖµµÄ²îÒ죬³¢ÊÔʹÓÃÄں˺¯ÊýÑØÒ»¸ö·½Ïò½üËÆÍ¼ÏñµÄÌݶȡ£

ͨ¹ýÔÚXºÍY·½ÏòÒÆ¶¯£¬ÎÒÃÇ¿ÉÒԵõ½Í¼ÏñÖÐÿ¸öÑÕÉ«µÄÌݶÈͼ¡£

%%time
from skimage import color
from skimage import exposure
from scipy.signal import convolve2d
import numpy as np
import imageio
import matplotlib.pyplot as plt
# import image
pic = imageio.imread('img/parrot.jpg')
# right sobel
sobel_x = np.c_[
[-1,0,1],
[-2,0,2],
[-1,0,1]
]
# top sobel
sobel_y = np.c_[
[1,2,1],
[0,0,0],
[-1,-2,-1]
]
ims = []
for i in range(3):
    sx = convolve2d(pic[:, :, i], sobel_x, mode="same", boundary="symm")
    sy = convolve2d(pic[:, :, i], sobel_y, mode="same", boundary="symm")
    ims.append(np.sqrt(sx * sx + sy * sy))
img_conv = np.stack(ims, axis=2).astype("uint8")
plt.figure(figsize = (6,5))
plt.axis('off')
plt.imshow(img_conv);

ÖÁÓÚ½µÔ룬ÎÒÃÇͨ³£Ê¹ÓÃÀàËÆGaussian FilterµÄÂ˲¨Æ÷£¬ÕâÊÇÒ»ÖÖÊý×ÖÂ˲¨¼¼Êõ£¬Í¨³£ÓÃÓÚͼƬ½µÔ롣ͨ¹ý½«Gaussian FilterºÍgradient finding²Ù×÷½áºÏÔÚÒ»Æð£¬ÎÒÃÇ¿ÉÒÔÉú³ÉһЩÀàËÆÓÚԭʼͼÏñ²¢ÒÔÓÐȤ·½Ê½Å¤ÇúµÄÆæ¹Öͼ°¸¡£

%%time
from scipy.signal import convolve2d
from scipy.ndimage import (median_filter, gaussian_filter)
import numpy as np
import imageio
import matplotlib.pyplot as plt
def gaussian_filter_(img):
    """
    Applies a Gaussian filter to all channels
    """
    ims = []
    for d in range(3):
        img_conv_d = gaussian_filter(img[:, :, d], sigma=4)
        ims.append(img_conv_d)
    return np.stack(ims, axis=2).astype("uint8")

filtered_img = gaussian_filter_(pic)
# right sobel
sobel_x = np.c_[
[-1,0,1],
[-2,0,2],
[-1,0,1]
]
# top sobel
sobel_y = np.c_[
[1,2,1],
[0,0,0],
[-1,-2,-1]
]
ims = []
for d in range(3):
    sx = convolve2d(filtered_img[:, :, d], sobel_x, mode="same", boundary="symm")
    sy = convolve2d(filtered_img[:, :, d], sobel_y, mode="same", boundary="symm")
    ims.append(np.sqrt(sx * sx + sy * sy))
img_conv = np.stack(ims, axis=2).astype("uint8")
plt.figure(figsize=(7,7))
plt.axis('off')
plt.imshow(img_conv);

Now let's see what effect a median filter has on the image.

%%time
from scipy.signal import convolve2d
from scipy.ndimage import (median_filter, gaussian_filter)
import numpy as np
import imageio
import matplotlib.pyplot as plt
def median_filter_(img, mask):
    """
    Applies a median filter to all channels
    """
    ims = []
    for d in range(3):
        img_conv_d = median_filter(img[:, :, d], size=(mask, mask))
        ims.append(img_conv_d)
    return np.stack(ims, axis=2).astype("uint8")

filtered_img = median_filter_(pic, 80)
# right sobel
sobel_x = np.c_[
[-1,0,1],
[-2,0,2],
[-1,0,1]
]
# top sobel
sobel_y = np.c_[
[1,2,1],
[0,0,0],
[-1,-2,-1]
]
ims = []
for d in range(3):
    sx = convolve2d(filtered_img[:, :, d], sobel_x, mode="same", boundary="symm")
    sy = convolve2d(filtered_img[:, :, d], sobel_y, mode="same", boundary="symm")
    ims.append(np.sqrt(sx * sx + sy * sy))
img_conv = np.stack(ims, axis=2).astype("uint8")
plt.figure(figsize=(7,7))
plt.axis('off')
plt.imshow(img_conv);

Thresholding: Otsu's Method

Thresholding is a very basic operation in image processing, and converting a grayscale image into a monochrome one is a common image processing task. A good algorithm always starts from a good foundation!

Otsu thresholding is a simple and effective global automatic thresholding method for binarizing a grayscale image into, for example, foreground and background. In image processing, Otsu's method (1979) determines the binarization level automatically from the shape of the histogram; it is based entirely on computations performed on the image histogram.

The algorithm assumes that the image consists of two basic classes, foreground and background, and computes the optimal threshold that minimizes the weighted within-class variance of these two classes.

Algorithm

If we add a little math to this simple step-by-step procedure, the description becomes:

Compute the histogram and the probability of each intensity level.

Initialize the class weights w_i and class means μ_i.

Step through all thresholds t = 0 to t = L - 1:

update w_i and μ_i

compute the between-class variance σ_b²(t)

The desired threshold corresponds to the maximum of σ_b²(t).

import numpy as np
import imageio
import matplotlib.pyplot as plt
pic = imageio.imread('img/potato.jpeg')
plt.figure(figsize=(7,7))
plt.axis('off')
plt.imshow(pic);

def otsu_threshold(im):
    # Compute histogram and probabilities of each intensity level
    pixel_counts = [np.sum(im == i) for i in range(256)]
    # Initialization
    s_max = (0, 0)
    for threshold in range(256):
        # update class weights and means
        w_0 = sum(pixel_counts[:threshold])
        w_1 = sum(pixel_counts[threshold:])
        mu_0 = sum([i * pixel_counts[i] for i in range(0, threshold)]) / w_0 if w_0 > 0 else 0
        mu_1 = sum([i * pixel_counts[i] for i in range(threshold, 256)]) / w_1 if w_1 > 0 else 0
        # calculate the between-class variance
        s = w_0 * w_1 * (mu_0 - mu_1) ** 2
        if s > s_max[1]:
            s_max = (threshold, s)
    return s_max[0]
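The function above only returns the threshold; as a small usage sketch (my own example, assuming the grayscale conversion used elsewhere in the article), we can apply it to a grayscale version of the potato image and binarize it:

from skimage import color

gray = (color.rgb2gray(pic) * 255).astype(np.uint8)   # 0-255 grayscale, as otsu_threshold expects
threshold = otsu_threshold(gray)
binary = gray > threshold                              # foreground / background mask

plt.figure(figsize=(5, 5))
plt.imshow(binary, cmap='gray')
plt.axis('off');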

Otsu's method performs best when the histogram can be assumed to have a bimodal distribution with a deep, sharp valley between the two peaks. If the foreground differs only slightly from the background, the histogram will not be bimodal.

K-Means Clustering

K-means clustering is a vector quantization method that originally came from signal processing and is a common method of cluster analysis in data mining.

In Otsu thresholding we found a threshold that minimized the within-segment pixel variance. Instead of looking for a threshold in a grayscale image, we can look for clusters in color space; doing so leads us to K-means clustering.

from sklearn import cluster
import imageio
import matplotlib.pyplot as plt
# load image
pic = imageio.imread('img/purple.jpg')
plt.figure(figsize=(7,7))
plt.imshow(pic)
plt.axis('off');

ΪÁ˾ÛÀàͼÏñ£¬ÎÒÃÇÐèÒª½«Æäת»»Îª¶þάÊý×é¡£

Next, we create the clusters with scikit-learn, passing n_clusters=5 to form five clusters. The clusters show up in the rendered image, dividing it into five parts, each with a different color.

%%time
# fit on the image with cluster five
kmeans_cluster = cluster.KMeans(n_clusters=5)
kmeans_cluster.fit(pic_2d)
cluster_centers = kmeans_cluster.cluster_centers_
cluster_labels = kmeans_cluster.labels_

Ò»µ©ÐγÉÁË´Ø£¬ÎÒÃǾͿÉÒÔʹÓôØÖÐÐĺͱêÇ©ÖØÐ´´½¨Í¼Ïñ£¬ÒÔÏÔʾ¾ßÓзÖ×éģʽµÄͼÏñ¡£

Line Detection: The Hough Transform

Èç¹ûÎÒÃÇÄܹ»ÒÔÊýѧÐÎʽ±íʾ¸ÃÐÎ×´£¬Ôò»ô·ò±ä»»ÊÇÒ»ÖÖÓÃÓÚ¼ì²âÈκÎÐÎ×´µÄÁ÷Ðм¼Êõ¡£Ëü¿ÉÒÔ¼ì²âÐÎ×´£¬¼´Ê¹Ëü±»ÆÆ»µ»òŤÇúÒ»µãµã¡£ ÎÒÃDz»»á¹ýÓÚÉîÈë·ÖÎö»ô·ò±ä»»µÄ»úÖÆ¡£

Algorithm

Corner or edge detection

Creation of the ρ and θ ranges

ρ: -Dmax to Dmax

θ: -90° to 90°

Hough accumulator

A 2D array with a number of rows equal to the number of ρ values and a number of columns equal to the number of θ values.

Voting in the accumulator

For each edge point and each θ value, find the nearest ρ value and increment that index in the accumulator.

Local maxima in the accumulator indicate the parameters of the most prominent lines in the input image.

import numpy as np

def hough_line(img):
    # Rho and Theta ranges
    thetas = np.deg2rad(np.arange(-90.0, 90.0))
    width, height = img.shape
    diag_len = int(np.ceil(np.sqrt(width * width + height * height)))  # Dmax
    rhos = np.linspace(-diag_len, diag_len, diag_len * 2)
    # Cache some reusable values
    cos_t = np.cos(thetas)
    sin_t = np.sin(thetas)
    num_thetas = len(thetas)
    # Hough accumulator array of theta vs rho
    accumulator = np.zeros((2 * diag_len, num_thetas), dtype=np.uint64)
    y_idxs, x_idxs = np.nonzero(img)  # (row, col) indexes of the edge points
    # Vote in the hough accumulator
    for i in range(len(x_idxs)):
        x = x_idxs[i]
        y = y_idxs[i]
        for t_idx in range(num_thetas):
            # Calculate rho. diag_len is added for a positive index
            rho = int(round(x * cos_t[t_idx] + y * sin_t[t_idx])) + diag_len
            accumulator[rho, t_idx] += 1
    return accumulator, thetas, rhos
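A small usage sketch (my own example, not from the article): build a tiny binary image containing a diagonal line, run hough_line on it, and read off the approximate (ρ, θ) of the strongest peak in the accumulator.

# toy binary image containing one diagonal line
img = np.zeros((50, 50))
np.fill_diagonal(img, 1)

accumulator, thetas, rhos = hough_line(img)

# locate the strongest peak in the accumulator
rho_idx, theta_idx = np.unravel_index(np.argmax(accumulator), accumulator.shape)
print("rho ~ {:.1f}, theta ~ {:.1f} degrees".format(rhos[rho_idx], np.rad2deg(thetas[theta_idx])))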

Edge Detection

Edge detection is an image processing technique for finding the boundaries of objects within an image. It works by detecting discontinuities in brightness.

Common edge detection algorithms include:

Sobel

Canny

Prewitt

Roberts

fuzzy logic methods

Here we will cover one of the most popular methods, Canny edge detection.

Canny Edge Detection

Ò»ÖÖÄܹ»¼ì²âͼÏñÖÐ½Ï¿í·¶Î§±ßÔµµÄ¶à¼¶±ßÔµ¼ì²â²Ù×÷£¬Canny±ßÔµ¼ì²âËã·¨¿ÉÒÔ·Ö½âΪ5²½£º

Ó¦Óøß˹Â˲¨Æ÷

ÕÒ³öÇ¿¶ÈÌݶÈ

Ó¦Ó÷Ǽ«´óÖµÒÖÖÆ

Ó¦ÓÃË«ãÐÖµ

ͨ¹ýÖͺóÐÔÃÅÏÞ¸ú×Ù±ßÔµÏß

ͼƬʸÁ¿Ö®ÂÖÀª¸ú×Ù

ʹÓÃScikit-Image£¬ÎÒÃÇ¿ÉÒÔʹÓÃÂÖÀª¸ú×ÙËã·¨ÌáȡͼƬ±ßԵ·¾¶£¨¹´ÀÕͼƬÂÖÀª£©£¬Õâ¿ÉÒÔ¿ØÖÆ×îÖÕ·¾¶×ñѭԭʼλͼµÄÐÎ×´¡£

from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from skimage import measure
import numpy as np
import imageio
pic = imageio.imread('img/parrot.jpg')
h,w = pic.shape[:2]
im_small_long = pic.reshape((h * w, 3))
im_small_wide = im_small_long.reshape((h,w,3))
km = KMeans(n_clusters=2)
km.fit(im_small_long)
seg = np.asarray([(1 if i == 1 else 0) for i in km.labels_]).reshape((h, w))
contours = measure.find_contours(seg, 0.5, fully_connected="high")
simplified_contours = [measure.approximate_polygon(c, tolerance=5) for c in contours]
plt.figure(figsize=(5, 10))
for n, contour in enumerate(simplified_contours):
    plt.plot(contour[:, 1], contour[:, 0], linewidth=2)
plt.ylim(h, 0)
plt.gca().set_aspect('equal')

ͼÏñѹËõÖ®¶Ñµþ×Ô±àÂëÆ÷

An autoencoder is a data compression algorithm whose compression and decompression functions are:

Data-specific

Lossy

ÒÔÏÂΪ¾ßÌåʵÏÖ´úÂ룺

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data",one_hot=True)
# Parameter
num_inputs = 784 # 28*28
neurons_hid1 = 392
neurons_hid2 = 196
neurons_hid3 = neurons_hid1 # Decoder Begins
num_outputs = num_inputs
learning_rate = 0.01
# activation function
actf = tf.nn.relu
# place holder
X = tf.placeholder(tf.float32, shape=[None, num_inputs])
# Weights
initializer = tf.variance_scaling_initializer()
w1 = tf.Variable(initializer([num_inputs, neurons_hid1]), dtype=tf.float32)
w2 = tf.Variable(initializer([neurons_hid1, neurons_hid2]), dtype=tf.float32)
w3 = tf.Variable(initializer([neurons_hid2, neurons_hid3]), dtype=tf.float32)
w4 = tf.Variable(initializer([neurons_hid3, num_outputs]), dtype=tf.float32)
# Biases
b1 = tf.Variable(tf.zeros(neurons_hid1))
b2 = tf.Variable(tf.zeros(neurons_hid2))
b3 = tf.Variable(tf.zeros(neurons_hid3))
b4 = tf.Variable(tf.zeros(num_outputs))
# Activation Function and Layers
act_func = tf.nn.relu
hid_layer1 = act_func(tf.matmul(X, w1) + b1)
hid_layer2 = act_func(tf.matmul(hid_layer1, w2) + b2)
hid_layer3 = act_func(tf.matmul(hid_layer2, w3) + b3)
output_layer = tf.matmul(hid_layer3, w4) + b4
# Loss Function
loss = tf.reduce_mean(tf.square(output_layer - X))
# Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
# Intialize Variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()
num_epochs = 5
batch_size = 150
with tf.Session() as sess:
    sess.run(init)
    # Epoch == Entire Training Set
    for epoch in range(num_epochs):
        num_batches = mnist.train.num_examples // batch_size
        # 150 batch size
        for iteration in range(num_batches):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(train, feed_dict={X: X_batch})
        training_loss = loss.eval(feed_dict={X: X_batch})
        print("Epoch {} Complete. Training Loss: {}".format(epoch, training_loss))
    saver.save(sess, "./stacked_autoencoder.ckpt")

# Test Autoencoder output on Test Data
num_test_images = 10
with tf.Session() as sess:
    saver.restore(sess, "./stacked_autoencoder.ckpt")
    results = output_layer.eval(feed_dict={X: mnist.test.images[:num_test_images]})
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Epoch 0 Complete. Training Loss: 0.023349963128566742
Epoch 1 Complete. Training Loss: 0.022537199780344963
Epoch 2 Complete. Training Loss: 0.0200303066521883
Epoch 3 Complete. Training Loss: 0.021327141672372818
Epoch 4 Complete. Training Loss: 0.019387174397706985
INFO:tensorflow:Restoring parameters from ./stacked_autoencoder.ckpt

µÚÒ»ÐмÓÔØMNISTѵÁ·¼¯£¬µÚ¶þÐÐʹÓÃ×Ô±àÂëÆ÷½øÐбàÂëºÍ½âÂ룬֮ºóÖØ¹¹ÑµÁ·¼¯¡£µ«ÊÇ£¬Öؽ¨Í¼ÏñÖÐȱÉÙ´óÁ¿ÐÅÏ¢¡£Òò´Ë£¬×Ô±àÂëÆ÷²»ÈçÆäËûѹËõ¼¼ÊõºÃ£¬µ«×÷ΪһÃÅÕýÔÚ¿ìËÙÔö³¤Öеļ¼Êõ£¬ÆäδÀ´»¹»á³öÏÖºÜ¶à½ø²½¡£

(The full code can be downloaded from GitHub: https://github.com/iphton/Image-Processing-in-Python)

 
   