"Pass a Delete or a Put" error in HBase MapReduce

Date: 2023-09-27
This article covers how to resolve the "Pass a Delete or a Put" error in HBase MapReduce. It should be a useful reference if you are running into the same problem.

Problem Description

I am getting the error below while running a MapReduce job against HBase:

                  java.io.IOException: Pass a Delete or a Put
                      at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:125)
                      at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:84)
                      at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:639)
                      at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
                      at HBaseImporter$InnerMap.map(HBaseImporter.java:61)
                      at HBaseImporter$InnerMap.map(HBaseImporter.java:1)
                      at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
                      at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
                      at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
                      at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
                  12/11/27 16:16:50 INFO mapred.JobClient:  map 0% reduce 0%
                  12/11/27 16:16:50 INFO mapred.JobClient: Job complete: job_local_0001
                  12/11/27 16:16:50 INFO mapred.JobClient: Counters: 0
                  

Code:

public class HBaseImporter extends Configured implements Tool {

    public static class InnerMap extends TableMapper<Text, IntWritable> {
        IntWritable one = new IntWritable();

        public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
            String val = new String(value.getValue(Bytes.toBytes("cf"), Bytes.toBytes("line")));
            String[] words = val.toString().split(" ");
            try {
                for (String word : words) {
                    context.write(new Text(word), one);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int i = 0;
            for (IntWritable val : values) {
                i += val.get();
            }
            Put put = new Put(Bytes.toBytes(key.toString()));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(i));

            context.write(null, put);
        }
    }

    public int run(String args[]) throws Exception {
        //Configuration conf = getConf();
        Configuration conf = HBaseConfiguration.create();
        conf.addResource(new Path("/home/trg/hadoop-1.0.4/conf/core-site.xml"));
        conf.addResource(new Path("/home/trg/hadoop-1.0.4/conf/hdfs-site.xml"));

        Job job = new Job(conf, "SM LogAnalyzer MR");

        job.setJarByClass(HBaseImporter.class);
        //FileInputFormat.setInputPaths(job, new Path(args[1]));
        //FileOutputFormat.setOutputPath(job, new Path("outyy"));
        //job.setOutputFormatClass(TextOutputFormat.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        //job.setMapperClass(InnerMap.class);
        Scan scan = new Scan();
        scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false);
        TableMapReduceUtil.initTableMapperJob(
                "wc_in",            // input table
                scan,               // Scan instance to control CF and attribute selection
                InnerMap.class,     // mapper class
                Text.class,         // mapper output key
                IntWritable.class,  // mapper output value
                job);

        TableMapReduceUtil.initTableReducerJob(
                "word_count",           // output table
                MyTableReducer.class,   // reducer class
                job);
        job.setNumReduceTasks(1);

        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        //Configuration conf = new HBaseConfiguration();
        //Job job = configureJob(conf, args);
        //System.exit(job.waitForCompletion(true) ? 0 : 1);

        String[] inArgs = new String[4];
        inArgs[0] = "HBaseImporter";
        inArgs[1] = "/user/trg/wc_in";
        inArgs[2] = "AppLogMRImport";
        inArgs[3] = "MessageDB";
        int res = ToolRunner.run(new Configuration(), new HBaseImporter(), inArgs);
        //int res = ToolRunner.run(new Configuration(), new HBaseImporter(), args);
    }
}

I am setting the map output value class to IntWritable.class, yet TableOutputFormat.write, which expects a Put object, is still being called from the mapper.
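(For context, TableOutputFormat's record writer only knows how to persist HBase mutations; any other value type is rejected with exactly the message seen in the stack trace. The check looks roughly like this — a paraphrase of the 0.94-era source, not a verbatim copy:)

// Rough paraphrase of TableOutputFormat.TableRecordWriter.write()
// (HBase 0.94-era behaviour; not the verbatim source)
public void write(KEY key, Writable value) throws IOException {
    if (value instanceof Put) {
        table.put(new Put((Put) value));            // key is ignored; the Put carries its own row key
    } else if (value instanceof Delete) {
        table.delete(new Delete((Delete) value));
    } else {
        throw new IOException("Pass a Delete or a Put");  // the error reported above
    }
}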

Recommended Answer

I found the answer to my own question: I had mistakenly set the number of reducer tasks to 0.

                   job.setNumReduceTasks(0);
                  

With zero reducers, the mapper's output is written directly to the HBase table, so TableOutputFormat expects a Put object from the mapper. Commenting out the line above solved the issue.
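For reference, here is a minimal sketch of the corrected wiring in run(), reusing the table names and classes from the question. With at least one reduce task, the <Text, IntWritable> pairs from InnerMap go to MyTableReducer, which is the only place a Put is built and handed to TableOutputFormat:

// Minimal sketch of the corrected job wiring (same classes as the question):
// the TableReducer stays responsible for building the Put, so the number
// of reduce tasks must not be forced to zero.
TableMapReduceUtil.initTableMapperJob(
        "wc_in",            // input table
        scan,               // Scan controlling CF and attribute selection
        InnerMap.class,     // mapper emits <Text, IntWritable>
        Text.class,
        IntWritable.class,
        job);

TableMapReduceUtil.initTableReducerJob(
        "word_count",           // output table
        MyTableReducer.class,   // reducer builds the Put and writes it
        job);
job.setNumReduceTasks(1);       // keep at least one reducer
// job.setNumReduceTasks(0);    // <-- the offending line, now removed

return job.waitForCompletion(true) ? 0 : 1;

The alternative would be to keep a map-only job and have the mapper itself emit Put objects, but for a word-count-style aggregation the reducer is the natural place to build the Put.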

That concludes this article on the "Pass a Delete or a Put" error in HBase MapReduce. Hopefully the recommended answer above helps you resolve the problem.
