1. Overview
Versions from Hadoop 2.x on introduced a solution to the single-point-of-failure problem: HA (High Availability). This post describes how to set up highly available HDFS and YARN.
2. Setup
2.1 Create the hadoop user
useradd hadoop
passwd hadoop
Set the password when prompted. Next, I grant the hadoop user passwordless sudo; you can add other permissions as needed:
chmod +w /etc/sudoers
# add the following line to /etc/sudoers:
# hadoop ALL=(root) NOPASSWD:ALL
chmod -w /etc/sudoers
2.2 Install the JDK
Extract the downloaded package to /usr/java/jdk1.7, then set the environment variables by editing the profile (e.g. vi /etc/profile) and adding the following:
export JAVA_HOME=/usr/java/jdk1.7
export PATH=$PATH:$JAVA_HOME/bin
Then make the environment variables take effect immediately with . /etc/profile (or source /etc/profile), and verify the JDK configuration with java -version. If the expected version number is printed, the JDK is configured correctly; otherwise the configuration did not take effect.
2.3 Configure hosts
The hosts file should be identical on every machine in the cluster (recommended); using hostnames instead of raw IPs avoids unnecessary trouble and simplifies configuration. The entries are:
10.211.55.12 nna    # NameNode Active
10.211.55.13 nns    # NameNode Standby
10.211.55.14 dn1    # DataNode1
10.211.55.15 dn2    # DataNode2
10.211.55.16 dn3    # DataNode3
Then distribute the hosts file to each node with the scp command:
# taking the nns node as an example
scp /etc/hosts hadoop@nns:/etc/
2.4 Set up SSH
Run ssh-keygen -t rsa and press Enter through all the prompts. Finally, append id_rsa.pub to authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys |
Under the hadoop user, authorized_keys must be given mode 600, or passwordless login will not work. On each of the other nodes, simply run ssh-keygen -t rsa to generate that node's key pair, then append each node's id_rsa.pub to the authorized_keys on the nna node. Finally, distribute nna's authorized_keys file to the ~/.ssh/ directory of every node via scp. The command is:
# taking the nns node as an example
scp ~/.ssh/authorized_keys hadoop@nns:~/.ssh/
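Doing the steps above by hand for every node is tedious. Below is a sketch that only prints the per-node commands (using this cluster's node names) so you can review them before running anything; nothing is executed:

```shell
# Print (not run) the key-collection commands for each node.
# Review the output, then execute the commands on nna once they look right.
print_key_setup() {
  for node in nns dn1 dn2 dn3; do
    # each node generates its own key pair, then its public key is
    # appended to authorized_keys on nna
    echo "ssh $node \"ssh-keygen -t rsa\""
    echo "ssh $node \"cat ~/.ssh/id_rsa.pub\" >> ~/.ssh/authorized_keys"
  done
  for node in nns dn1 dn2 dn3; do
    # push the merged file back out and fix its permissions
    echo "scp ~/.ssh/authorized_keys hadoop@$node:~/.ssh/"
    echo "ssh $node \"chmod 600 ~/.ssh/authorized_keys\""
  done
}
print_key_setup
```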
Then log in between nodes with the ssh command to confirm that passwordless login works:
# taking the nns node as an example
ssh nns
If no password prompt appears during login, passwordless login is configured correctly.
2.5 Disable the firewall
Hadoop nodes need to communicate with each other (over RPC), which means listening on a number of ports, so here I simply turn the firewall off (on CentOS 6, for example, service iptables stop followed by chkconfig iptables off).
Note: in production, disabling the firewall outright is a security risk. Instead, add the ports Hadoop listens on to the firewall's accept rules. For the rule syntax, see a Linux firewall configuration guide, or ask your company's operations team to set it up.
SELinux also needs to be turned off: edit the /etc/selinux/config file and change SELINUX=enforcing to SELINUX=disabled.
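That one-line edit can be scripted with sed; a sketch, demonstrated against a scratch copy so you can verify the result before touching the real /etc/selinux/config (the change takes effect after a reboot):

```shell
# Demonstrated on a scratch copy; point cfg at /etc/selinux/config
# (as root) to apply it for real.
cfg=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# flip enforcing -> disabled in place
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
```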
2.6 Set the time zone
If the nodes' clocks are out of sync, you may see startup failures or other anomalies. Here I set every node to the Shanghai time zone. The commands are:
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cp: overwrite `/etc/localtime'? yes
# switch persistently to China's UTC+8 zone
# vi /etc/sysconfig/clock
ZONE="Asia/Shanghai"
UTC=false
ARC=false
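The interactive prompt above can be avoided by scripting the change; a sketch, demoed with scratch paths (drop the /tmp/tz-demo targets and write to /etc/localtime and /etc/sysconfig/clock, as root, to apply for real):

```shell
# Demo targets; on the real nodes use /etc/localtime and /etc/sysconfig/clock.
zoneinfo=/usr/share/zoneinfo/Asia/Shanghai
mkdir -p /tmp/tz-demo

# cp -f avoids the interactive overwrite prompt
cp -f "$zoneinfo" /tmp/tz-demo/localtime

# persist the zone choice
printf 'ZONE="Asia/Shanghai"\nUTC=false\nARC=false\n' > /tmp/tz-demo/clock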
2.7 ZooKeeper (install, start, verify)
2.7.1 Install
Extract the downloaded package to the desired location; here I extract it in place:
tar -zxvf zk-{version}.tar.gz
ÐÞ¸ÄzkÅäÖ㬽«zk°²×°Ä¿Â¼ÏÂconf/zoo_sample.cfgÖØÃüÃûzoo.cfg£¬ÐÞ¸ÄÆäÖеÄÄÚÈÝ£º
# The number of milliseconds of each tick
# basic time unit (ms) for server/client interaction
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# how many ticks a follower may take to connect and sync to the leader
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# maximum interval, in ticks, between a request and its acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# where zookeeper keeps its data and logs
dataDir=/home/hadoop/data/zookeeper
# the port at which the clients will connect
# port clients use to talk to zookeeper
clientPort=2181
server.1=dn1:2888:3888
server.2=dn2:2888:3888
server.3=dn3:2888:3888
# server.A=B:C:D, where A is a number identifying the server;
# B is the server's IP address (or hostname);
# C is the port the server uses to exchange information with the
#   cluster "leader"; D is the port the servers use to talk to each
#   other when electing a new leader after the leader fails.
Next, create a myid file in the configured dataDir containing a number between 1 and 255. The number must be different on every zk server, starting from 1 and assigned to the servers in order. It must match the server index in the zoo.cfg on the dn nodes: for server.1=dn1:2888:3888, the myid file on dn1 must contain 1.
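The myid rule can be scripted; a small sketch of the idea, shown against a scratch directory (on the real nodes, dataDir is /home/hadoop/data/zookeeper and each node writes only its own id):

```shell
# One directory per zk server stands in for the dataDir on dn1..dn3.
base=/tmp/zk-myid-demo
for id in 1 2 3; do
  # server.$id=dn$id:2888:3888 in zoo.cfg, so dn$id gets myid=$id
  mkdir -p "$base/dn$id"
  echo "$id" > "$base/dn$id/myid"
done
cat "$base/dn2/myid"   # prints 2
```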
2.7.2 Start
Start the zk process on each dn node: zkServer.sh start. Then run jps on each node; the following process appears: QuorumPeerMain.
2.7.3 Verify
If the jps command above shows the process, zk started successfully. We can also query zk's state with the status command: zkServer.sh status. One node reports leader and the other two report follower.
2.8 HDFS + HA architecture
The architecture of HDFS with HA configured is shown below:

ÉÏͼ´óÖ¼ܹ¹°üÀ¨£º
1. ÀûÓù²Ïí´æ´¢À´ÔÚÁ½¸öNN¼äͬ²½editsÐÅÏ¢¡£ÒÔǰµÄHDFSÊÇshare nothing but
NN£¬ÏÖÔÚNNÓÖshare storage£¬ÕâÑùÆäʵÊÇ×ªÒÆÁ˵¥µã¹ÊÕϵÄλÖ㬵«Öи߶˵Ĵ洢É豸ÄÚ²¿¶¼Óи÷ÖÖRAIDÒÔ¼°ÈßÓàÓ²¼þ£¬°üÀ¨µçÔ´ÒÔ¼°Íø¿¨µÈ£¬±È·þÎñÆ÷µÄ¿É¿¿ÐÔ»¹ÊÇÂÔÓÐÌá¸ß¡£Í¨¹ýNNÄÚ²¿Ã¿´ÎÔªÊý¾Ý±ä¶¯ºóµÄflush²Ù×÷£¬¼ÓÉÏNFSµÄclose-to-open£¬Êý¾ÝµÄÒ»ÖÂÐԵõ½Á˱£Ö¤¡£
2. DNs report block information to both NNs simultaneously. This is a necessary step for the Standby NN to keep an up-to-date view of the cluster.
3. A FailoverController process monitors and controls each NN process. Clearly we cannot do heartbeats and similar synchronization inside the NN process itself; for the simplest of reasons, a single full GC can hang the NN for ten-odd minutes. So there must be an independent, lightweight watchdog dedicated to monitoring. This is also a loosely coupled design that is easy to extend or change: the current version uses ZooKeeper (ZK) as the synchronization lock, but users can readily replace the ZooKeeperFailoverController (ZKFC) with another HA or leader-election scheme.
4. ¸ôÀ루Fencing£©£¬·ÀÖ¹ÄÔÁÑ£¬¾ÍÊDZ£Ö¤ÔÚÈκÎʱºòÖ»ÓÐÒ»¸öÖ÷NN£¬°üÀ¨Èý¸ö·½Ã棺
Shared-storage fencing: ensure only one NN can write the edits.
Client fencing: ensure only one NN can respond to client requests.
DN fencing: ensure only one NN can issue commands (delete a block, replicate a block, and so on) to the DNs.
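The hdfs-site.xml later in this article implements this with the sshfence method. Note that dfs.ha.fencing.methods accepts a newline-separated list of methods tried in order, so a fallback can be configured; a commonly seen (if somewhat permissive) pattern is shell(/bin/true) as a last resort, so failover can still proceed when the failed node is unreachable over SSH:

```
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>
```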
2.9 Role assignment

2.10 Environment variables
All of the environment variables are listed here; when configuring other components later, you can refer back to this. Once editing is done, run . /etc/profile (or source /etc/profile) to make it take effect immediately. To verify the configuration, run echo $HADOOP_HOME; if the configured path is printed, the configuration succeeded.
Note: from Hadoop 2.x on, the conf folder was renamed to etc.
The configuration is as follows:
export JAVA_HOME=/usr/java/jdk1.7
export HADOOP_HOME=/home/hadoop/hadoop-2.6.0
export ZK_HOME=/home/hadoop/zookeeper-3.4.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOM
2.11 Core configuration files
Note: the paths referenced in the configuration files must exist before the cluster is started (create them beforehand if they do not). The paths this article needs can be created with:
mkdir -p /home/hadoop/tmp
mkdir -p /home/hadoop/data/tmp/journal
mkdir -p /home/hadoop/data/dfs/name
mkdir -p /home/hadoop/data/dfs/data
mkdir -p /home/hadoop/data/yarn/local
mkdir -p /home/hadoop/log/yarn
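The same directories can be created with a single loop; a sketch (the prefix is only there so the demo is self-contained; set it empty on the real nodes):

```shell
# Create every path the configuration files refer to.
prefix=/tmp/hadoop-ha-demo   # use prefix= (empty) on the real nodes
for d in \
    /home/hadoop/tmp \
    /home/hadoop/data/tmp/journal \
    /home/hadoop/data/dfs/name \
    /home/hadoop/data/dfs/data \
    /home/hadoop/data/yarn/local \
    /home/hadoop/log/yarn; do
  mkdir -p "$prefix$d"
done
```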
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://cluster1</value></property>
  <property><name>io.file.buffer.size</name><value>131072</value></property>
  <property><name>hadoop.tmp.dir</name><value>/home/hadoop/tmp</value></property>
  <property><name>hadoop.proxyuser.hduser.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hduser.groups</name><value>*</value></property>
  <property><name>ha.zookeeper.quorum</name><value>dn1:2181,dn2:2181,dn3:2181</value></property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property><name>dfs.nameservices</name><value>cluster1</value></property>
  <property><name>dfs.ha.namenodes.cluster1</name><value>nna,nns</value></property>
  <property><name>dfs.namenode.rpc-address.cluster1.nna</name><value>nna:9000</value></property>
  <property><name>dfs.namenode.rpc-address.cluster1.nns</name><value>nns:9000</value></property>
  <property><name>dfs.namenode.http-address.cluster1.nna</name><value>nna:50070</value></property>
  <property><name>dfs.namenode.http-address.cluster1.nns</name><value>nns:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://dn1:8485;dn2:8485;dn3:8485/cluster1</value></property>
  <property><name>dfs.client.failover.proxy.provider.cluster1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/home/hadoop/data/tmp/journal</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.namenode.name.dir</name><value>/home/hadoop/data/dfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/home/hadoop/data/dfs/data</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.journalnode.http-address</name><value>0.0.0.0:8480</value></property>
  <property><name>dfs.journalnode.rpc-address</name><value>0.0.0.0:8485</value></property>
  <property><name>ha.zookeeper.quorum</name><value>dn1:2181,dn2:2181,dn3:2181</value></property>
</configuration>
mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>nna:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>nna:19888</value></property>
</configuration>
yarn-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property><name>yarn.resourcemanager.connect.retry-interval.ms</name><value>2000</value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>ha.zookeeper.quorum</name><value>dn1:2181,dn2:2181,dn3:2181</value></property>
  <property><name>yarn.resourcemanager.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>nna</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>nns</value></property>
  <!-- Set rm1 on namenode1 and rm2 on namenode2. Note: the finished file is
       usually copied to the other machines, but this value MUST be changed
       on the other YARN machine. -->
  <property><name>yarn.resourcemanager.ha.id</name><value>rm1</value></property>
  <!-- enable automatic recovery -->
  <property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
  <!-- zookeeper connection address -->
  <property><name>yarn.resourcemanager.zk-state-store.address</name><value>dn1:2181,dn2:2181,dn3:2181</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>dn1:2181,dn2:2181,dn3:2181</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>cluster1-yarn</value></property>
  <!-- how long to wait to reconnect when the scheduler is unreachable -->
  <property><name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name><value>5000</value></property>
  <!-- rm1 -->
  <property><name>yarn.resourcemanager.address.rm1</name><value>nna:8132</value></property>
  <property><name>yarn.resourcemanager.scheduler.address.rm1</name><value>nna:8130</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>nna:8188</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address.rm1</name><value>nna:8131</value></property>
  <property><name>yarn.resourcemanager.admin.address.rm1</name><value>nna:8033</value></property>
  <property><name>yarn.resourcemanager.ha.admin.address.rm1</name><value>nna:23142</value></property>
  <!-- rm2 -->
  <property><name>yarn.resourcemanager.address.rm2</name><value>nns:8132</value></property>
  <property><name>yarn.resourcemanager.scheduler.address.rm2</name><value>nns:8130</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>nns:8188</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address.rm2</name><value>nns:8131</value></property>
  <property><name>yarn.resourcemanager.admin.address.rm2</name><value>nns:8033</value></property>
  <property><name>yarn.resourcemanager.ha.admin.address.rm2</name><value>nns:23142</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/home/hadoop/data/yarn/local</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/home/hadoop/log/yarn</value></property>
  <property><name>mapreduce.shuffle.port</name><value>23080</value></property>
  <!-- failover proxy class -->
  <property><name>yarn.client.failover-proxy-provider</name><value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value></property>
  <property><name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name><value>/yarn-leader-election</value></property>
</configuration>
hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7
yarn-env.sh
# some Java parameters
export JAVA_HOME=/usr/java/jdk1.7
2.12 slaves
Edit the slaves file under the Hadoop installation directory and list the DataNode hostnames in it (dn1, dn2 and dn3, one per line).
2.13 Startup commands (HDFS and YARN)
Since we configured QJM, the QJM-related services must be started first. The startup order is as follows:
1. Go to the DN nodes and start the zk service: zkServer.sh start. You can then run zkServer.sh status to check the state; with the three DN nodes configured here, one reports leader and the other two follower. Running jps shows the process: QuorumPeerMain.
2. On an NN node (either one will do; here I chose the NNA node), start the journalnode services: hadoop-daemons.sh start journalnode. Alternatively, go into each DN and start it individually with hadoop-daemon.sh start journalnode. Running jps shows the process: JournalNode.
3. Since this is the first startup after configuration, HDFS must be formatted: hadoop namenode -format.
4. Then format ZK: hdfs zkfc -formatZK.
5. Next start hdfs and yarn: start-dfs.sh and start-yarn.sh. Running jps on nna shows the processes: DFSZKFailoverController, NameNode, ResourceManager.
6. Running jps on NNS shows only the DFSZKFailoverController process, so start the NameNode and ResourceManager on NNS by hand: hadoop-daemon.sh start namenode and yarn-daemon.sh start resourcemanager. Note that in yarn-site.xml on NNS, the yarn.resourcemanager.ha.id property must point at NNS, i.e. be set to rm2, whereas on NNA it is rm1.
7. Finally, sync the metadata from the NNA node: hdfs namenode -bootstrapStandby. If it completes normally, the log ends with the following output:
15/02/21 10:30:59 INFO common.Storage: Storage directory /home/hadoop/data/dfs/name has been successfully formatted.
15/02/21 10:30:59 WARN common.Util: Path /home/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/02/21 10:30:59 WARN common.Util: Path /home/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/02/21 10:31:00 INFO namenode.TransferFsImage: Opening connection to http://nna:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:1079068934:0:CID-1dd0c11e-b27e-4651-aad6-73bc7dd820bd
15/02/21 10:31:01 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
15/02/21 10:31:01 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
15/02/21 10:31:01 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 353 bytes.
15/02/21 10:31:01 INFO util.ExitUtil: Exiting with status 0
15/02/21 10:31:01 INFO namenode.NameNode: SHUTDOWN_MSG:
/*** SHUTDOWN_MSG: Shutting down NameNode at nns/10.211.55.13 ***/
2.14 HA failover
Since I configured automatic failover, if the NNA node goes down, the NNS node immediately switches from standby to active. If manual failover is configured instead, you can switch by hand with the following command:
hdfs haadmin -failover --forcefence --forceactive nna nns
This command turns nna into the standby and nns into the active NN. In manual mode, the services also need to be restarted.
2.15 Screenshots




3. Summary
That is all for this article. If anything is unclear or goes wrong during configuration, join the QQ group to discuss it or send me an email; I will do my best to answer. Good luck!