Running a Hadoop job without JobConf

Date: 2023-09-26

This post explains how to submit a Hadoop job without using the deprecated JobConf class.

Problem Description


                  I can't find a single example of submitting a Hadoop job that does not use the deprecated JobConf class. JobClient, which hasn't been deprecated, still only supports methods that take a JobConf parameter.


                  Can someone please point me at an example of Java code submitting a Hadoop map/reduce job using only the Configuration class (not JobConf), and using the mapreduce.lib.input package instead of mapred.input?

Answer

Hope this helps:

import java.io.File;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MapReduceExample extends Configured implements Tool {

    static class MyMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            // Bump a custom counter, then pass the record through unchanged.
            context.getCounter("mygroup", "jeff").increment(1);
            context.write(key, value);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // Job wraps a plain Configuration; no JobConf anywhere.
        Job job = Job.getInstance(getConf());
        job.setJarByClass(MapReduceExample.class);
        job.setMapperClass(MyMapper.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Local test setup: clear the output directory and use fixed paths.
        FileUtils.deleteDirectory(new File("data/output"));
        args = new String[] { "data/input", "data/output" };
        System.exit(ToolRunner.run(new MapReduceExample(), args));
    }
}
                  
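The answer above wires the job through Tool/Configured, which is idiomatic for command-line drivers. If you want the bare minimum the question asks for, a driver can use only a plain Configuration plus Job from the new mapreduce API. The sketch below is illustrative, not from the original answer: the class name and the map-only setup are assumptions, and it presumes a Hadoop 2.x-era classpath where Job.getInstance is available.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical minimal driver: Configuration + Job only, no JobConf,
// using the mapreduce.lib.* packages rather than mapred.*.
public class PlainConfigurationDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "plain-configuration-example");
        job.setJarByClass(PlainConfigurationDriver.class);
        job.setMapperClass(Mapper.class); // base Mapper acts as identity
        job.setNumReduceTasks(0);         // map-only job for brevity
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Running it is the usual `hadoop jar` invocation with input and output paths as arguments; since there is no Tool here, generic `-D` options are not parsed.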

That covers running a Hadoop job without JobConf; hopefully the answer above is helpful.
