Hadoop - Writing to HBase Directly from the Mapper

Date: 2023-09-27


Problem Description

I have a Hadoop job whose output should be written to HBase. I do not really need a reducer; the kind of row I would like to insert is determined in the Mapper.

How can I use TableOutputFormat to achieve this? From all the examples I have seen, the assumption is that the reducer is the one creating the Put, and that TableMapper is just for reading from an HBase table.

In my case the input is on HDFS and the output is a Put to a specific table; I cannot find anything in TableMapReduceUtil that can help me with that either.

Is there any example out there that can help me with that?

BTW, I am using the new Hadoop API.

Recommended Answer

This is an example of reading from a file and putting all lines into HBase. The example is from "HBase: The Definitive Guide", and you can find it in the book's repository. To get it, just clone the repo onto your computer:

    git clone git://github.com/larsgeorge/hbase-book.git

In the book you can also find all the explanations of the code. But if something is unclear to you, feel free to ask.

    import java.io.IOException;

    import org.apache.commons.cli.CommandLine;
    import org.apache.commons.cli.CommandLineParser;
    import org.apache.commons.cli.HelpFormatter;
    import org.apache.commons.cli.Option;
    import org.apache.commons.cli.Options;
    import org.apache.commons.cli.ParseException;
    import org.apache.commons.cli.PosixParser;
    import org.apache.commons.codec.digest.DigestUtils;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class ImportFromFile {
      public static final String NAME = "ImportFromFile";
      public enum Counters { LINES }

      // Map-only job: the mapper itself creates the Put instances and emits them
      // to TableOutputFormat; no reducer is involved.
      static class ImportMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Writable> {
        private byte[] family = null;
        private byte[] qualifier = null;

        @Override
        protected void setup(Context context)
          throws IOException, InterruptedException {
          // Split the "family:qualifier" string passed in via the configuration.
          String column = context.getConfiguration().get("conf.column");
          byte[][] colkey = KeyValue.parseColumn(Bytes.toBytes(column));
          family = colkey[0];
          if (colkey.length > 1) {
            qualifier = colkey[1];
          }
        }

        @Override
        public void map(LongWritable offset, Text line, Context context)
        throws IOException {
          try {
            String lineString = line.toString();
            // The MD5 hash of the line becomes the row key; the raw line is the value.
            byte[] rowkey = DigestUtils.md5(lineString);
            Put put = new Put(rowkey);
            put.add(family, qualifier, Bytes.toBytes(lineString));
            context.write(new ImmutableBytesWritable(rowkey), put);
            context.getCounter(Counters.LINES).increment(1);
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      }

      // Parse the job-specific command-line options (-t, -c, -i, -d).
      private static CommandLine parseArgs(String[] args) throws ParseException {
        Options options = new Options();
        Option o = new Option("t", "table", true,
          "table to import into (must exist)");
        o.setArgName("table-name");
        o.setRequired(true);
        options.addOption(o);
        o = new Option("c", "column", true,
          "column to store row data into (must exist)");
        o.setArgName("family:qualifier");
        o.setRequired(true);
        options.addOption(o);
        o = new Option("i", "input", true,
          "the directory or file to read from");
        o.setArgName("path-in-HDFS");
        o.setRequired(true);
        options.addOption(o);
        options.addOption("d", "debug", false, "switch on DEBUG log level");
        CommandLineParser parser = new PosixParser();
        CommandLine cmd = null;
        try {
          cmd = parser.parse(options, args);
        } catch (Exception e) {
          System.err.println("ERROR: " + e.getMessage() + "\n");
          HelpFormatter formatter = new HelpFormatter();
          formatter.printHelp(NAME + " ", options, true);
          System.exit(-1);
        }
        return cmd;
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        String[] otherArgs =
          new GenericOptionsParser(conf, args).getRemainingArgs();
        CommandLine cmd = parseArgs(otherArgs);
        String table = cmd.getOptionValue("t");
        String input = cmd.getOptionValue("i");
        String column = cmd.getOptionValue("c");
        conf.set("conf.column", column);
        Job job = new Job(conf, "Import from file " + input + " into table " + table);
        job.setJarByClass(ImportFromFile.class);
        job.setMapperClass(ImportMapper.class);
        // Write the mapper output directly into the given HBase table.
        job.setOutputFormatClass(TableOutputFormat.class);
        job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, table);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(Writable.class);
        // Zero reduce tasks makes this a map-only job.
        job.setNumReduceTasks(0);
        FileInputFormat.addInputPath(job, new Path(input));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

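For reference, a launch of this job could look like the following; the jar name, table name, column, and input path are placeholders, and the table with its column family must already exist in HBase:

    hadoop jar hbase-book-examples.jar ImportFromFile \
        -t testtable -c data:json -i /user/hadoop/input.txt

Because setNumReduceTasks(0) makes this a map-only job, every Put the mapper writes goes straight through TableOutputFormat into the table; no shuffle or reduce phase runs.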
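Regarding the question's point about TableMapReduceUtil: as far as I know, TableMapReduceUtil.initTableReducerJob can also be used for a map-only import by passing null as the reducer class, which configures TableOutputFormat and the output table for you. Below is a minimal sketch of that variant, with a hypothetical hard-coded table, column, and input path instead of command-line options; it reuses the ImportMapper from the example above and assumes both classes are in the same package:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class ImportFromFileAlt {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("conf.column", "data:json");                      // hypothetical family:qualifier
        Job job = new Job(conf, "Import from file into testtable");
        job.setJarByClass(ImportFromFileAlt.class);
        job.setMapperClass(ImportFromFile.ImportMapper.class);     // reuse the mapper shown above
        FileInputFormat.addInputPath(job, new Path("/user/hadoop/input.txt"));  // hypothetical path
        // Passing null as the reducer class configures TableOutputFormat and the
        // output table without registering a reduce phase.
        TableMapReduceUtil.initTableReducerJob("testtable", null, job);
        job.setNumReduceTasks(0);  // map-only: Puts from the mapper go straight to HBase
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

This is just an alternative to the manual setOutputFormatClass/OUTPUT_TABLE calls in the original example; both approaches end up writing the mapper's Puts directly into the table.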