        YARN MapReduce Job Issue - AM Container Launch in Hadoop 2.3.0

        Date: 2023-09-27

                  This article covers how to handle the error "YARN MapReduce Job Issue - AM Container Launch in Hadoop 2.3.0". It may be a useful reference if you are troubleshooting the same problem.

                  Problem Description

                  I have set up a 2-node cluster of Hadoop 2.3.0. It works fine, and I can successfully run the distributedshell-2.2.0.jar example. But when I try to run any MapReduce job, I get an error. I have set up mapred-site.xml and the other configs for running MapReduce jobs according to (http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide), but I am getting the following error:

                  14/03/22 20:31:17 INFO mapreduce.Job: Job job_1395502230567_0001 failed with state FAILED due to: Application application_1395502230567_0001 failed 2 times due to AM Container for appattempt_1395502230567_0001_000002 exited 
                  with  exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
                      org.apache.hadoop.util.Shell$ExitCodeException: 
                          at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
                          at org.apache.hadoop.util.Shell.run(Shell.java:418)
                          at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
                          at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
                          at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
                          at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
                          at java.util.concurrent.FutureTask.run(FutureTask.java:262)
                          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
                          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
                          at java.lang.Thread.run(Thread.java:744)
                  
                  
                      Container exited with a non-zero exit code 1
                      .Failing this attempt.. Failing the application.
                      14/03/22 20:31:17 INFO mapreduce.Job: Counters: 0
                      Job ended: Sat Mar 22 20:31:17 PKT 2014
                      The job took 6 seconds.
                  

                  And if I look at stderr (the job log), there is only one line: "Could not find or load main class 614"

                  Now I have googled it, and usually this issue comes up when you have different Java versions or when the classpath in yarn-site.xml is not set properly. My yarn-site.xml has this:

                    <property>
                      <name>yarn.application.classpath</name>
                      <value>/opt/yarn/hadoop-2.3.0/etc/hadoop,/opt/yarn/hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib/*,/opt/yarn/hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib/*,/opt/yarn/hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib/*,/opt/yarn/hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib/*</value>
                    </property>
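
                  For comparison, Hadoop 2.x ships a default yarn.application.classpath that expands per-component environment variables rather than hard-coding a single install prefix. A sketch of that variable-based form (paths must be adapted to your layout, and the HADOOP_*_HOME variables must be visible to the NodeManager):

```xml
<!-- Variable-based classpath, modeled on the Hadoop 2.x defaults.
     Assumes HADOOP_CONF_DIR and the HADOOP_*_HOME variables are set
     in the NodeManager's environment. -->
<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
  </value>
</property>
```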
                  

                  So, any other ideas about what the issue could be here?

                  I am running my MapReduce job like this:

                  $HADOOP_PREFIX/bin/hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar randomwriter out
                  

                  Recommended Answer

                  I encountered the same problem when trying to install Hortonworks HDP 2.1 manually. I managed to capture the container launcher script, which contained the following:

                  #!/bin/bash
                  
                  export NM_HTTP_PORT="8042"
                  export LOCAL_DIRS="/data/1/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001,/data/2/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001,/data/3/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001,/data/4/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001"
                  export JAVA_HOME="/usr/java/latest"
                  export NM_AUX_SERVICE_mapreduce_shuffle="AAA0+gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
                  export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:$PWD/*"
                  export HADOOP_TOKEN_FILE_LOCATION="/data/2/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001/container_1406927878786_0001_01_000001/container_tokens"
                  export NM_HOST="test02.admin.hypertable.com"
                  export APPLICATION_WEB_PROXY_BASE="/proxy/application_1406927878786_0001"
                  export JVM_PID="$$"
                  export USER="doug"
                  export HADOOP_HDFS_HOME="/usr/lib/hadoop-hdfs"
                  export PWD="/data/2/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001/container_1406927878786_0001_01_000001"
                  export CONTAINER_ID="container_1406927878786_0001_01_000001"
                  export HOME="/home/"
                  export NM_PORT="62404"
                  export LOGNAME="doug"
                  export APP_SUBMIT_TIME_ENV="1406928095871"
                  export MAX_APP_ATTEMPTS="2"
                  export HADOOP_CONF_DIR="/etc/hadoop/conf"
                  export MALLOC_ARENA_MAX="4"
                  export LOG_DIRS="/data/1/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001,/data/2/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001,/data/3/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001,/data/4/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001"
                  ln -sf "/data/1/hadoop/yarn/local/usercache/doug/filecache/10/libthrift-0.9.2.jar" "libthrift-0.9.2.jar"
                  ln -sf "/data/4/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001/filecache/13/job.xml" "job.xml"
                  mkdir -p jobSubmitDir
                  ln -sf "/data/3/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001/filecache/12/job.split" "jobSubmitDir/job.split"
                  mkdir -p jobSubmitDir
                  ln -sf "/data/2/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001/filecache/11/job.splitmetainfo" "jobSubmitDir/job.splitmetainfo"
                  ln -sf "/data/1/hadoop/yarn/local/usercache/doug/appcache/application_1406927878786_0001/filecache/10/job.jar" "job.jar"
                  ln -sf "/data/2/hadoop/yarn/local/usercache/doug/filecache/11/hypertable-0.9.8.0-apache2.jar" "hypertable-0.9.8.0-apache2.jar"
                  exec /bin/bash -c "$JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data/4/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/data/4/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001/stdout 2>/data/4/hadoop/yarn/logs/application_1406927878786_0001/container_1406927878786_0001_01_000001/stderr "
                  

                  The line that sets CLASSPATH was the culprit. To resolve the problem I had to set the variables HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, HADOOP_YARN_HOME, and HADOOP_MAPRED_HOME in hadoop-env.sh to point to the appropriate directories under /usr/lib. In each of those directories I also had to set up the share/hadoop/... subdirectory hierarchy where the jars could be found.
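
                  The fix above can be sketched as the following hadoop-env.sh additions. The exact /usr/lib subdirectory names are assumptions based on a typical HDP package layout; the captured launcher script only confirms HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs.

```shell
# hadoop-env.sh — point each component home under /usr/lib
# (assumed HDP-style layout; adjust to where your packages install).
export HADOOP_COMMON_HOME=/usr/lib/hadoop
export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce

# The container CLASSPATH expands these homes into share/hadoop/... jar
# globs, so each home must contain that subdirectory hierarchy, e.g.:
echo "$HADOOP_COMMON_HOME/share/hadoop/common"
echo "$HADOOP_MAPRED_HOME/share/hadoop/mapreduce"
```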

                  That concludes this article on the YARN MapReduce job issue (AM container launch error in Hadoop 2.3.0). Hopefully the recommended answer above is helpful.
