±à¼ÍƼö: |
±¾ÎÄÀ´×ÔÓÚ¸öÈ˲©¿Í£¬±¾ÎÄÖ÷ÒªÏêϸ½éÉÜÁË×÷ΪHadoopµÄºËÐļ¼ÊõÖ®Ò»HDFS£¨Hadoop
Distributed ÊÇ·Ö²¼Ê½¼ÆËãÖÐÊý¾Ý´æ´¢¹ÜÀíµÄ»ù´¡¡£ |
Ò».HDFS³öÏֵı³¾° Ëæ×ÅÉç»áµÄ½ø²½£¬ÐèÒª´¦ÀíÊý¾ÝÁ¿Ô½À´Ô½¶à£¬ÔÚÒ»¸ö²Ù×÷ϵͳ¹ÜϽµÄ·¶Î§´æ²»ÏÂÁË£¬ÄÇô¾Í·ÖÅäµ½¸ü¶àµÄ²Ù×÷ϵͳ¹ÜÀíµÄ´ÅÅÌÖУ¬µ«ÊÇÈ´²»·½±ã¹ÜÀíºÍά»¤¡ª>Òò´Ë£¬ÆÈÇÐÐèÒªÒ»ÖÖϵͳÀ´¹ÜÀí¶ą̀»úÆ÷ÉϵÄÎļþ£¬ÓÚÊǾͲúÉúÁË·Ö²¼Ê½Îļþ¹ÜÀíϵͳ£¬Ó¢ÎÄÃû³ÉΪDFS£¨Distributed
File System£©¡£
So what is a distributed file system? In short, it is a file system that allows files to be shared across multiple hosts over a network, letting many users on many machines share files and storage space. Its defining characteristic is transparency: although a DFS actually accesses files over the network, to users and programs it looks just like accessing a local disk (in other words, when you access data through a DFS, you cannot tell that the data lives on a different, remote machine).

Figure 1. A typical DFS example
2. A Closer Look at How HDFS Works

As one of Hadoop's core technologies, HDFS (Hadoop Distributed File System) is the foundation of data storage and management in distributed computing. Its high fault tolerance, high reliability, high scalability, and high throughput provide failure-resistant storage for massive amounts of data and bring great convenience to applications that process very large data sets.

Figure 2. The Hadoop HDFS logo
Any mention of HDFS calls for a mention of Google's GFS: it was Google's paper on GFS that led to HDFS, an open-source implementation of the GFS design.
2.1 Design Premises and Goals

(1) Hardware failure is the norm rather than the exception. (This is the most central design premise: HDFS is designed to run on large numbers of commodity machines, so hardware faults are entirely normal. Error detection and fast recovery are therefore HDFS's most central design goals.)
(2) Streaming data access. (HDFS favors high throughput of data access over low latency.)
(3) Large data sets. (Typical HDFS files are gigabytes or even terabytes in size.)
(4) A simple coherency model. (A write-once, read-many access pattern.)
(5) Moving computation is cheaper than moving data. (For large files, moving the computation to the data costs less than moving the data to the computation.)
2.2 The Architecture of HDFS

HDFS has a master/slave architecture, as shown in the figure below.

Figure 3. The basic architecture of HDFS
From the end user's point of view it behaves like a traditional file system: files can be created, read, updated, and deleted (CRUD) through directory paths. But because storage is distributed, HDFS consists of a single NameNode and a number of DataNodes. The NameNode manages the file system's metadata, while the DataNodes store the actual data. A client accesses the file system by interacting with both: it contacts the NameNode to obtain a file's metadata, while the actual I/O is performed directly against the DataNodes.
Let us now walk through the read and write paths in HDFS:
① The read operation

Figure 4. The HDFS read operation
To access a file, the client first obtains from the NameNode the list of block locations that make up the file, i.e., which DataNodes hold each block. It then reads the file data directly from those DataNodes. The NameNode takes no part in the actual data transfer.
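The read path above can be sketched with in-memory maps standing in for the NameNode's metadata and the DataNodes' storage. This is a toy model, not the HDFS client API; all file, block, and node names are made up for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReadFlowSketch {
    // "NameNode" metadata: file -> ordered block IDs, block ID -> replica holders.
    static Map<String, List<String>> fileToBlocks = new HashMap<>();
    static Map<String, List<String>> blockToNodes = new HashMap<>();
    // "DataNodes": each node stores block contents locally.
    static Map<String, Map<String, String>> nodeStorage = new HashMap<>();

    static {
        // A tiny "cluster": one file split into two blocks, three DataNodes.
        fileToBlocks.put("/di/test.log", List.of("blk_1", "blk_2"));
        blockToNodes.put("blk_1", List.of("dn1", "dn2"));
        blockToNodes.put("blk_2", List.of("dn2", "dn3"));
        nodeStorage.put("dn1", Map.of("blk_1", "hello "));
        nodeStorage.put("dn2", Map.of("blk_1", "hello ", "blk_2", "world"));
        nodeStorage.put("dn3", Map.of("blk_2", "world"));
    }

    static String read(String file) {
        StringBuilder data = new StringBuilder();
        // Step 1: ask the "NameNode" for the ordered block location list.
        for (String blockId : fileToBlocks.get(file)) {
            // Step 2: fetch each block directly from a DataNode holding it;
            // the NameNode never touches the data itself.
            String node = blockToNodes.get(blockId).get(0);
            data.append(nodeStorage.get(node).get(blockId));
        }
        return data.toString();
    }

    public static void main(String[] args) {
        System.out.println(read("/di/test.log")); // hello world
    }
}
```

Note how `read` only consults the metadata maps to locate blocks; the bytes themselves come straight from the chosen DataNode, mirroring the fact that the NameNode never sits on the data path.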
② The write operation

Figure 5. The HDFS write operation
The client first sends a write request to the NameNode. Based on the file size and the block configuration, the NameNode returns to the client information about the DataNodes it manages that should receive the data. Finally, the client (via the client library) splits the file into blocks and, using the DataNode address information, writes them in order to the DataNodes.
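A rough sketch of that flow, assuming a toy 4-byte block size and a simple round-robin block assignment (an assumption for illustration only; real HDFS chooses targets with its placement policy and pipelines replicas, it does not stripe round-robin):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WriteFlowSketch {
    static final int BLOCK_SIZE = 4; // tiny block size so the split is visible

    // Split `file` into BLOCK_SIZE-byte blocks and assign them, in order,
    // to DataNodes round-robin (a stand-in for the NameNode's target list).
    static Map<String, List<String>> write(String file, List<String> dataNodes) {
        Map<String, List<String>> nodeBlocks = new LinkedHashMap<>();
        for (String dn : dataNodes) nodeBlocks.put(dn, new ArrayList<>());
        int blockId = 0;
        for (int off = 0; off < file.length(); off += BLOCK_SIZE, blockId++) {
            // The last block may be shorter than BLOCK_SIZE.
            String block = file.substring(off, Math.min(off + BLOCK_SIZE, file.length()));
            String target = dataNodes.get(blockId % dataNodes.size());
            nodeBlocks.get(target).add(block);
        }
        return nodeBlocks;
    }

    public static void main(String[] args) {
        // A 10-byte "file" split into 4-byte blocks across three DataNodes.
        System.out.println(write("abcdefghij", List.of("dn1", "dn2", "dn3")));
        // {dn1=[abcd], dn2=[efgh], dn3=[ij]}
    }
}
```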
Let us now look at the roles the NameNode and the DataNode play and what each actually does:
(1) NameNode
The NameNode manages the file system's directory tree and the data nodes. It maintains two sets of mappings: one from file paths to data blocks, the other from data blocks to DataNodes. The former is static and persisted on disk, maintained through the fsimage and edits files; the latter is dynamic and is not persisted to disk, but rebuilt automatically every time the cluster starts up.
(2) DataNode
The DataNodes are, without question, where HDFS actually stores data. One concept worth mentioning here is the Block. Suppose a file is 100 GB in size: starting from byte offset 0, every 64 MB of bytes forms one block, and so on, yielding a large number of blocks. Each block is 64 MB (the block size can also be configured).
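The block arithmetic above can be checked with a few lines; the ceiling division accounts for a final partial block (the 64 MB default applies to the Hadoop 1.x era this article describes):

```java
public class BlockSplitSketch {
    // Number of blocks needed to store fileSize bytes in blockSize-byte blocks.
    static long blockCount(long fileSize, long blockSize) {
        // Ceiling division: the last block may be smaller than blockSize.
        return (fileSize + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        long mb = 1024L * 1024;
        // A 100 GB file with 64 MB blocks -> exactly 1600 blocks.
        System.out.println(blockCount(100 * gb, 64 * mb)); // 1600
        // A 100 MB file -> one 64 MB block plus one 36 MB block.
        System.out.println(blockCount(100 * mb, 64 * mb)); // 2
    }
}
```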
(3) Typical deployment
A typical HDFS deployment runs the NameNode on a dedicated machine, with every other machine in the cluster running one DataNode. (It is of course also possible to run a DataNode on the same machine as the NameNode, or several DataNodes on one machine.) Having a single NameNode per cluster greatly simplifies the system architecture, although a single NameNode is also a single point of failure; this problem was addressed in Hadoop 2.x.
2.3 Measures That Keep HDFS Reliable

HDFS has a fairly complete set of redundancy, backup, and failure-recovery mechanisms that allow massive numbers of files to be stored reliably across the cluster.
(1) Redundant replication: HDFS stores each file as a sequence of blocks, with a default block size of 64 MB (configurable). For fault tolerance, every block of a file can be replicated (3 replicas by default, also configurable). When a DataNode starts, it scans its local file system, builds a list mapping HDFS blocks to local files, and sends this report to the NameNode. This is the BlockReport, and it contains the list of all blocks on that DataNode.
(2) Replica placement: an HDFS cluster usually spans multiple racks, and machines on different racks communicate through switches. The replica placement policy is therefore critical: bandwidth between nodes within a rack is greater than bandwidth between nodes on different racks, and the policy affects both the reliability and the performance of HDFS. HDFS uses a rack-aware policy to improve data reliability, availability, and network bandwidth utilization. In most cases the replication factor is the default of 3, and the placement policy puts one replica on the local node, one replica on another node in the same rack, and the last replica on a node in a different rack. This policy reduces inter-rack data transfer and improves write efficiency. Since rack failures are far rarer than node failures, it does not compromise data reliability or availability.

Figure 6. The replica placement policy
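The placement rule just described (local node, then another node in the same rack, then a node in a different rack) can be sketched as follows. Node and rack names are invented, and real HDFS applies further constraints (node load, available space) that this sketch ignores:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RackAwareSketch {
    // Pick 3 replica targets for a block written from `writer`,
    // following the article's description of the default policy.
    static List<String> place(String writer, Map<String, String> nodeToRack) {
        List<String> replicas = new ArrayList<>();
        replicas.add(writer); // replica 1: the local node
        String localRack = nodeToRack.get(writer);
        for (Map.Entry<String, String> e : nodeToRack.entrySet()) {
            if (replicas.size() >= 2) break;
            if (!e.getKey().equals(writer) && e.getValue().equals(localRack)) {
                replicas.add(e.getKey()); // replica 2: same rack, different node
            }
        }
        for (Map.Entry<String, String> e : nodeToRack.entrySet()) {
            if (replicas.size() >= 3) break;
            if (!e.getValue().equals(localRack)) {
                replicas.add(e.getKey()); // replica 3: a different rack
            }
        }
        return replicas;
    }

    public static void main(String[] args) {
        Map<String, String> topology = new LinkedHashMap<>();
        topology.put("dn1", "rack1");
        topology.put("dn2", "rack1");
        topology.put("dn3", "rack2");
        topology.put("dn4", "rack2");
        System.out.println(place("dn1", topology)); // [dn1, dn2, dn3]
    }
}
```

Only one of the three transfers crosses a rack boundary, which is exactly why the policy saves inter-rack bandwidth on writes.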
(3) Heartbeat detection: the NameNode periodically receives heartbeats and block reports from every DataNode in the cluster and uses these reports to validate the block mapping and other file system metadata. A heartbeat means the DataNode is working normally. If a DataNode stops sending heartbeats, the NameNode marks it as dead and stops sending it any I/O requests.
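The staleness check behind heartbeat detection can be sketched like this; the 10-minute timeout is an assumption for illustration (the real value is derived from Hadoop's heartbeat configuration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HeartbeatSketch {
    static final long TIMEOUT_MS = 10 * 60 * 1000; // assumed 10-minute timeout

    // Return the nodes whose last heartbeat is older than the timeout;
    // these would be marked dead and excluded from further I/O.
    static List<String> findDeadNodes(Map<String, Long> lastHeartbeat, long now) {
        List<String> dead = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastHeartbeat.entrySet()) {
            if (now - e.getValue() > TIMEOUT_MS) {
                dead.add(e.getKey());
            }
        }
        return dead;
    }

    public static void main(String[] args) {
        long now = 1_000_000_000L;
        Map<String, Long> beats = new LinkedHashMap<>();
        beats.put("dn1", now - 3_000L);            // 3 seconds ago: alive
        beats.put("dn2", now - 11L * 60 * 1000);   // 11 minutes ago: dead
        System.out.println(findDeadNodes(beats, now)); // [dn2]
    }
}
```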
(4) Safe mode
(5) Data integrity checking
(6) Space reclamation
(7) Metadata disk failure
(8) Snapshots (not yet supported by HDFS at the time of writing)
3. Common HDFS Shell Operations

(1) List a directory: hadoop fs -ls <path>
List the directories under the HDFS root: hadoop fs -ls /

List the HDFS root recursively: hadoop fs -lsr /

(2) Create a folder in HDFS: hadoop fs -mkdir <folder>
Create a folder named di under the root directory:

(3) Upload a file to HDFS: hadoop fs -put <local source path> <target path>
Upload a log file from the local system into the di folder: hadoop fs -put test.log /di

*PS: files uploaded through the Hadoop shell are stored in Blocks (data blocks) on the DataNodes; from the Linux shell you cannot see the files themselves, only the blocks. HDFS can therefore be summed up in one sentence: it stores the client's large files in data blocks spread across many nodes.
(4) Download a file from HDFS: hadoop fs -get <HDFS file path> <local path>
Download the test.log we just uploaded into the local Desktop folder: hadoop fs -get /di/test.log /home/hadoop/Desktop

(5) View a file directly in HDFS: hadoop fs -text (or -cat) <file path>
View the test.log we just uploaded: hadoop fs -text /di/test.log

(6) Delete a file (or folder) in HDFS: hadoop fs -rm (or -rmr) <path>
Delete the test.log we just uploaded: hadoop fs -rm /di/test.log

Delete the di folder in HDFS: hadoop fs -rmr /di

(7) Make good use of the help command: hadoop fs -help <command>
View the help for the ls command: hadoop fs -help ls

4. Operating HDFS from Java

The code we write at work runs on servers, and code that operates HDFS is no exception. During development we use Eclipse on Windows as the development environment and access HDFS running in a virtual machine; that is, Java code in the local Eclipse accesses HDFS on a remote Linux system.
For local development and debugging, Java code on the host must be able to reach HDFS on the guest, which requires the following: the host and the virtual machine can reach each other over the network; the firewalls on both the host and the virtual machine are turned off; and the JDK versions on the host and the virtual machine match.
4.1 Preparation

(1) Import the dependency JARs, as shown below.

(2) Attach the Hadoop source project, as shown below.

4.2 A First Java HDFS Program

(1) Define HDFS_PATH: public static final String HDFS_PATH = "hdfs://hadoop-master:9000/testdir/testfile.log";
(2) Teach the URL class to recognize hdfs:// (by default it only recognizes schemes such as http://): URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
(3) The full code is as follows:
package hdfs;

import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class FirstApp {

    public static final String HDFS_PATH = "hdfs://hadoop-master:9000/testdir/testfile.log";

    public static void main(String[] args) throws Exception {
        // Teach the URL class to recognize the hdfs:// scheme.
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
        final URL url = new URL(HDFS_PATH);
        final InputStream in = url.openStream();
        // copyBytes(in, out, bufferSize, close):
        //   in         - the input stream
        //   out        - the output stream
        //   bufferSize - the buffer size in bytes
        //   close      - whether to close the streams automatically
        IOUtils.copyBytes(in, System.out, 1024, true);
    }
}
£¨4£©ÔËÐнá¹û£¨ºóÃæ²»ÔÙÌùÔËÐнá¹ûͼ£©£º

4.3 CRUD Programming Against HDFS

(1) Obtain the all-powerful FileSystem object: final FileSystem fileSystem = FileSystem.get(new URI(HDFS_PATH), new Configuration());
(2) Call the HDFS API to perform the CRUD operations; the full code is below.
package hdfs;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FileSystemApp {

    private static final String HDFS_PATH = "hdfs://hadoop-master:9000/testdir/";
    private static final String HDFS_DIR = "/testdir/dir1";

    public static void main(String[] args) throws Exception {
        FileSystem fs = getFileSystem();
        // 01. Create a directory; shell equivalent: hadoop fs -mkdir xxx
        createDirectory(fs);
        // 02. Delete a file or directory; shell equivalent: hadoop fs -rm(r) xxx
        deleteFile(fs);
        // 03. Upload a file; shell equivalent: hadoop fs -put xxx xxx
        uploadFile(fs);
        // 04. Download a file; shell equivalent: hadoop fs -get xxx xxx
        downloadFile(fs);
        // 05. List a directory recursively; shell equivalent: hadoop fs -lsr /
        listFiles(fs, "/");
    }

    private static void listFiles(FileSystem fs, String para) throws IOException {
        final FileStatus[] listStatus = fs.listStatus(new Path(para));
        for (FileStatus fileStatus : listStatus) {
            String isDir = fileStatus.isDir() ? "Directory" : "File";
            String permission = fileStatus.getPermission().toString();
            short replication = fileStatus.getReplication();
            long length = fileStatus.getLen();
            String path = fileStatus.getPath().toString();
            System.out.println(isDir + "\t" + permission + "\t" + replication
                    + "\t" + length + "\t" + path);
            if (isDir.equals("Directory")) {
                // Recurse into subdirectories.
                listFiles(fs, path);
            }
        }
    }

    private static void downloadFile(FileSystem fs) throws IOException {
        final FSDataInputStream in = fs.open(new Path(HDFS_PATH + "check.log"));
        final FileOutputStream out = new FileOutputStream("E:\\check.log");
        IOUtils.copyBytes(in, out, 1024, true);
        System.out.println("Download File Success!");
    }

    private static void uploadFile(FileSystem fs) throws IOException {
        final FSDataOutputStream out = fs.create(new Path(HDFS_PATH + "check.log"));
        final FileInputStream in = new FileInputStream("C:\\CheckMemory.log");
        IOUtils.copyBytes(in, out, 1024, true);
        System.out.println("Upload File Success!");
    }

    private static void deleteFile(FileSystem fs) throws IOException {
        fs.delete(new Path(HDFS_DIR), true);
        System.out.println("Delete File:" + HDFS_DIR + " Success!");
    }

    private static void createDirectory(FileSystem fs) throws IOException {
        fs.mkdirs(new Path(HDFS_DIR));
        System.out.println("Create Directory:" + HDFS_DIR + " Success!");
    }

    private static FileSystem getFileSystem() throws IOException, URISyntaxException {
        return FileSystem.get(new URI(HDFS_PATH), new Configuration());
    }
}