Calling hadoop2.6 from Java and Web programs
 
Author: fansy1990   Source: CSDN   Published 2015-11-5
 

1. The Hadoop cluster:

1.1 System and hardware configuration:

Hadoop version: 2.6; three virtual machines: node101 (192.168.0.101), node102 (192.168.0.102), node103 (192.168.0.103); each machine has 2 GB of RAM and 1 CPU core.

node101: NodeManager, NameNode, ResourceManager, DataNode

node102: NodeManager, DataNode, SecondaryNameNode, JobHistoryServer

node103: NodeManager, DataNode

1.2 Problems encountered during configuration:

1) The NodeManager would not start

×ʼÅäÖõÄÐéÄâ»úÅäÖõÄÊÇ512MÄڴ棬ËùÒÔÔÚyarn-site.xml Öеġ°yarn.nodemanager.resource.memory-mb¡±ÅäÖÃΪ512£¨ÆäĬÈÏÅäÖÃÊÇ1024£©£¬²é¿´ÈÕÖ¾£¬±¨´í£º

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,
Registration of NodeManager failed, Message from ResourceManager: 
NodeManager from  node101 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager. 

Changing it to 1024 or above lets the NodeManager start normally; I set it to 2048.

2) Jobs could be submitted but would not continue running

a. Each virtual machine here has only one core, but "yarn.nodemanager.resource.cpu-vcores" in yarn-site.xml defaults to 8, which causes problems during resource allocation; so this parameter was set to 1.

b. The following error appeared:

is running beyond virtual memory limits. Current usage: 96.6 MB of 1.5 GB physical memory used; 
1.6 GB of 1.5 GB virtual memory used. Killing container. 

This is presumably caused by mis-sized resource settings for map, reduce, and the NodeManager. I tweaked them for a long time and they looked correct to me, yet the error kept appearing. In the end I gave up and removed the check, i.e. set "yarn.nodemanager.vmem-check-enabled" in yarn-site.xml to false; after that, jobs could be submitted.
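The numbers in the error are consistent with how YARN computes this limit: virtual-memory cap = container allocation × yarn.nodemanager.vmem-pmem-ratio. With the ratio of 1.0 used in the yarn-site.xml of section 1.3 below, a 1.5 GB container gets only a 1.5 GB virtual cap, which the JVM's reserved address space (1.6 GB in the log above) easily exceeds. So an alternative to disabling the check entirely would be to restore a larger ratio; the fragment below is a suggestion along those lines (the stock default value), not a setting verified on this cluster:

```xml
<!-- yarn-site.xml: keep the vmem check but give the JVM more headroom.
     Virtual limit per container = physical allocation (MB) x this ratio. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
```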

1.3 Configuration files (I hope someone more experienced can point out resource settings that avoid error b above, rather than the workaround of removing the check):

1) Configure the JDK in hadoop-env.sh and yarn-env.sh, and set both HADOOP_HEAPSIZE and YARN_HEAPSIZE to 512.
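As a sketch, the additions look like this (the JAVA_HOME path is a placeholder; point it at your own JDK install):

```shell
# hadoop-env.sh
export JAVA_HOME=/usr/java/latest   # placeholder: your JDK location
export HADOOP_HEAPSIZE=512          # daemon heap, in MB

# yarn-env.sh
export JAVA_HOME=/usr/java/latest   # placeholder: your JDK location
export YARN_HEAPSIZE=512
```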

2) hdfs-site.xml configures the data storage paths and the node hosting the SecondaryNameNode:

<configuration>  
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hadoop/hdfs/name</value>
<description>Determines where on the local filesystem the DFS name node
should store the name table(fsimage). If this is a comma-delimited list
of directories then the name table is replicated in all of the
directories, for redundancy. </description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/hadoop/hdfs/data</value>
<description>Determines where on the local filesystem an DFS data node
should store its blocks. If this is a comma-delimited
list of directories, then data will be stored in all named
directories, typically on different devices.
Directories that do not exist are ignored.
</description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node102:50090</value>
</property>
</configuration>

3) core-site.xml configures the NameNode:

<configuration>  
<property>
<name>fs.defaultFS</name>
<value>hdfs://node101:8020</value>
</property>
</configuration>

4) mapred-site.xml configures the map and reduce resources:

<configuration>  
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>The runtime framework for executing MapReduce jobs.
Can be one of local, classic or yarn.
</description>
</property>

<!-- jobhistory properties -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>node102:10020</value>
<description>MapReduce JobHistory Server IPC host:port</description>
</property>


<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>1024</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx512m</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx512m</value>
</property>
</configuration>
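Note the pairing in this file: each map/reduce container requests 1024 MB (mapreduce.*.memory.mb) while its JVM heap is capped at 512 MB (-Xmx512m). The heap must stay well below the container size, because the container limit also covers non-heap JVM memory; keeping the heap at roughly 80% or less of the container is a common rule of thumb (my assumption, not from the original post). A tiny sanity check of these values:

```java
public class HeapSizing {
    public static void main(String[] args) {
        int containerMb = 1024; // mapreduce.map.memory.mb / mapreduce.reduce.memory.mb
        int heapMb = 512;       // from -Xmx512m in mapreduce.*.java.opts
        // Rule of thumb (assumption): heap <= ~80% of the container, leaving
        // headroom for metaspace/permgen, thread stacks and native buffers.
        boolean ok = heapMb <= containerMb * 0.8;
        System.out.println(ok);
    }
}
```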

5) yarn-site.xml configures the ResourceManager and related resources:

<configuration>  

<property>
<description>The hostname of the RM.</description>
<name>yarn.resourcemanager.hostname</name>
<value>node101</value>
</property>

<property>
<description>The address of the applications manager interface in the RM.</description>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>

<property>
<description>The address of the scheduler interface.</description>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>

<property>
<description>The http address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>

<property>
<description>The https address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>

<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>

<property>
<description>The address of the RM admin interface.</description>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>

<property>
<description>List of directories to store localized files in. An
application's localized file directory will be found in:
${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
Individual containers' work directories, called container_${contid}, will
be subdirectories of this.
</description>
<name>yarn.nodemanager.local-dirs</name>
<value>/data/hadoop/yarn/local</value>
</property>

<property>
<description>Whether to enable log aggregation</description>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>

<property>
<description>Where to aggregate logs to.</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/data/tmp/logs</value>
</property>

<property>
<description>Amount of physical memory, in MB, that can be allocated
for containers.</description>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>1.0</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<!--
<property>
<description>The class to use as the resource scheduler.</description>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>

<property>
<description>fair-scheduler conf location</description>
<name>yarn.scheduler.fair.allocation.file</name>
<value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
</property>
-->
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
<property>
<description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

2. Calling Hadoop2.6 from Java, running an MR program:

The following two places need to be modified:

1) The Configuration in the driver program needs these settings:

Configuration conf = new Configuration();  

conf.setBoolean("mapreduce.app-submission.cross-platform", true); // enable cross-platform job submission
conf.set("fs.defaultFS", "hdfs://node101:8020");                  // the NameNode
conf.set("mapreduce.framework.name", "yarn");                     // use the YARN framework
conf.set("yarn.resourcemanager.address", "node101:8032");         // the ResourceManager
conf.set("yarn.resourcemanager.scheduler.address", "node101:8030"); // the scheduler

2) Add the following classes to the classpath:

Nothing else needs to be modified; the program can then run.
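For context, a minimal driver using such a Configuration might look like the sketch below. This is an assumption-laden illustration, not code from the original post: the jar path and HDFS paths are placeholders, the identity Mapper/Reducer stand in for real job classes, and it needs the Hadoop 2.6 client jars plus a running cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setBoolean("mapreduce.app-submission.cross-platform", true);
        conf.set("fs.defaultFS", "hdfs://node101:8020");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "node101:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");

        Job job = Job.getInstance(conf, "remote-mr-demo");
        // When submitting from outside the cluster, point at the job jar
        // explicitly instead of relying on setJarByClass (placeholder path):
        job.setJar("target/my-mr-job.jar");
        job.setMapperClass(Mapper.class);    // identity map, for illustration
        job.setReducerClass(Reducer.class);  // identity reduce, for illustration
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/user/demo/in"));     // placeholder
        FileOutputFormat.setOutputPath(job, new Path("/user/demo/out"));  // placeholder
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```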

3. Calling Hadoop2.6 from a web program, running an MR program

The program can be downloaded from the "java web program calling hadoop2.6" download link in the original post.

The invocation code in this web program is the same as the Java version above, essentially unmodified; all the jar packages it uses are placed under lib.

One last point: I ran three maps, but they were not evenly distributed:

You can see that node103 was assigned two maps and node101 one; in another run node101 got two maps and node103 one. In both runs node102 received no map tasks at all, which is probably due to some remaining issue in resource management and task assignment.

   