Map Reduce client jars for 2.4.1 Hadoop in Eclipse

Date: 2023-09-27

This article describes how to resolve a problem with the Map Reduce client jars for 2.4.1 Hadoop in Eclipse, and should be a useful reference for anyone hitting the same issue.

Problem Description

When I run my Hadoop MapReduce word count jar from the shell in the Hadoop folder, it runs properly and the output is generated correctly.

Since I use YARN with Hadoop 2.4.1, when I run the MapReduce sample program from Eclipse, the map phase completes but the job fails during the reduce phase.
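For reference, the sample program here is the standard WordCount from the Hadoop MapReduce tutorial. The sketch below restates that canonical example (nothing project-specific is assumed) so it is clear what runs in the map phase that succeeds and the reduce phase that fails:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emits (word, 1) for every token in the input line.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reduce phase: sums the counts per word; this is the phase that fails.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Nothing in this job code touches ReduceTask or commons-httpclient directly; both come from the Hadoop jars on the build path, which is why the failure points at the jar configuration rather than at the job itself.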

It is clear that the problem is with the jar configuration.

Please find the jars that I have added...

This is the error I am getting:

INFO: reduce task executor complete.
Nov 21, 2014 8:50:35 PM org.apache.hadoop.mapred.LocalJobRunner$Job run
WARNING: job_local1638918104_0001
java.lang.Exception: java.lang.NoSuchMethodError: org.apache.hadoop.mapred.ReduceTask.setLocalMapFiles(Ljava/util/Map;)V
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.mapred.ReduceTask.setLocalMapFiles(Ljava/util/Map;)V
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:309)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

Exception in thread "Thread-12" java.lang.NoClassDefFoundError: org/apache/commons/httpclient/HttpMethod
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:562)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.httpclient.HttpMethod
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    ... 1 more
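Both failures are classic symptoms of a broken classpath: the NoSuchMethodError suggests an older Hadoop mapreduce jar is shadowing the 2.4.1 ReduceTask that LocalJobRunner expects, and the NoClassDefFoundError means the commons-httpclient jar is missing entirely. As a quick check, a small diagnostic sketch (my own, assuming it is run with the same build path as the Eclipse project) can print where each class is actually loaded from:

import java.net.URL;
import java.security.CodeSource;

public class ClasspathCheck {
    public static void main(String[] args) {
        // 1. Which jar provides ReduceTask? If this is not the Hadoop 2.4.1
        //    mapreduce jar, the NoSuchMethodError for setLocalMapFiles follows.
        try {
            Class<?> reduceTask = Class.forName("org.apache.hadoop.mapred.ReduceTask");
            CodeSource src = reduceTask.getProtectionDomain().getCodeSource();
            URL location = (src == null) ? null : src.getLocation();
            System.out.println("ReduceTask loaded from: " + location);
        } catch (ClassNotFoundException e) {
            System.out.println("ReduceTask is not on the classpath at all");
        }

        // 2. Is commons-httpclient present? Its absence explains the
        //    NoClassDefFoundError for org.apache.commons.httpclient.HttpMethod.
        try {
            Class.forName("org.apache.commons.httpclient.HttpMethod");
            System.out.println("commons-httpclient is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("commons-httpclient is MISSING from the classpath");
        }
    }
}

If the printed location is not the Hadoop 2.4.1 mapreduce jar you expect, that one stale jar is enough to produce the NoSuchMethodError above.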

Recommended Answer

As per the screenshot, you are manually adding all the dependent jars to the classpath. It is highly recommended to use Maven for this instead, since it automates the process of adding dependent jars to the classpath; you then only need to declare the main dependencies. I used the following dependencies in pom.xml, and with them the job ran without any issues:

<properties>
    <hadoop.version>2.5.2</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-api</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-auth</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-server-nodemanager</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
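A follow-up note on using this pom.xml (an addition for clarity, using standard Maven commands): building with mvn clean package lets Maven resolve the matching transitive jars automatically, which should include the commons-httpclient jar that was missing above, and mvn dependency:tree prints every resolved artifact and version, a quick way to confirm that no second, older Hadoop jar remains on the classpath. In Eclipse, importing the project as a Maven project (via m2e) keeps the build path in sync with the pom.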
                  

Coming to your problem: I checked the classpath, and there are exactly 82 jar files available.
It would be a tedious job to hunt down each jar like this.
You can add the jars feature-wise here.
The other workaround is to add all the jar files from the installed Hadoop directory, i.e. everything under <hadoop-installed>/share/hadoop/ plus all the jars from each lib folder; that is the most complete thing you can do. Or:
Add only the Avro-specific jars, because the exception in the screenshot is thrown by an Avro class. This could solve the Avro jar issue, but you may face other dependency issues. I faced the same problem while working with Hadoop V1; later I realized this and moved to Maven with Hadoop V2, and since then there have been no worries about dependent jars.
Your focus stays on Hadoop and the business needs. :)
Hope it helps you.

This concludes this article on Map Reduce client jars for 2.4.1 Hadoop in Eclipse. We hope the recommended answer helps you.
