Parsing Stackoverflow's posts.xml on Hadoop

Date: 2023-09-27

This article describes an approach to parsing Stackoverflow's posts.xml on Hadoop; hopefully it is a useful reference for anyone facing the same problem.

Problem description

I am following this article by Anoop Madhusudanan on codeproject to build a recommendation engine, not on a cluster but on my own system.

The problem is when I try to parse posts.xml, whose structure is as follows:

                   <row Id="99" PostTypeId="2" ParentId="88" CreationDate="2008-08-01T14:55:08.477" Score="2" Body="&lt;blockquote&gt;&#xD;&#xA;  &lt;p&gt;The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high resolution timers that measure microseconds. Other hardware architectures fall back to the system’s timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate. &lt;/p&gt;&#xD;&#xA;&lt;/blockquote&gt;&#xD;&#xA;&#xD;&#xA;&lt;p&gt;I obtained this answer from &lt;a href=&quot;http://www.informit.com/guides/content.aspx?g=cplusplus&amp;amp;seqNum=272&quot; rel=&quot;nofollow&quot;&gt;High Resolution Time Measurement and Timers, Part I&lt;/a&gt;&lt;/p&gt;" OwnerUserId="25" LastActivityDate="2008-08-01T14:55:08.477" />
                  

Now I need to parse this file (size 1.4 GB) on Hadoop, for which I have written code in Java and created its jar. The Java class is as follows:

                  import java.io.IOException;
                  import javax.xml.parsers.DocumentBuilderFactory;
                  import javax.xml.parsers.DocumentBuilder;
                  import org.w3c.dom.Document;
                  import org.w3c.dom.NodeList;
                  import org.w3c.dom.Node;
                  import org.w3c.dom.Element;
                  
                  import java.io.File;
                  
                  
                  import org.apache.hadoop.conf.Configuration;
                  import org.apache.hadoop.fs.FileSystem;
                  import org.apache.hadoop.fs.Path;
                  import org.apache.hadoop.io.LongWritable;
                  import org.apache.hadoop.io.Text;
                  import org.apache.hadoop.mapred.OutputCollector;
                  import org.apache.hadoop.mapred.Reporter;
                  import org.apache.hadoop.mapreduce.Mapper;
                  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
                  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
                  import org.apache.hadoop.mapreduce.Job;
                  
                  
                  public class Recommend {
                  
                      static class Map extends Mapper<Text, Text, Text, Text> {
                          Path path;
                          String fXmlFile;
                          DocumentBuilderFactory dbFactory;
                          DocumentBuilder dBuilder;
                          Document doc;
                  
                          /**
                           * Given an output filename, write a bunch of random records to it.
                           */
                          public void map(LongWritable key, Text value,
                                  OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
                              try{
                                  fXmlFile=value.toString();
                                  dbFactory = DocumentBuilderFactory.newInstance();
                                  dBuilder= dbFactory.newDocumentBuilder();
                                  doc= dBuilder.parse(fXmlFile);
                  
                                  doc.getDocumentElement().normalize();
                                  NodeList nList = doc.getElementsByTagName("row");
                  
                                  for (int temp = 0; temp < nList.getLength(); temp++) {
                  
                                      Node nNode = nList.item(temp);
                                      Element eElement = (Element) nNode;
                  
                                      Text keyWords =new Text(eElement.getAttribute("OwnerUserId"));
                                      Text valueWords = new Text(eElement.getAttribute("ParentId"));
                                      String val=keyWords.toString()+" "+valueWords.toString();
                                      // Write the sentence 
                                      if(keyWords != null && valueWords != null){
                                          output.collect(keyWords, new Text(val));
                                      }
                                  }
                  
                              }catch (Exception e) {
                                  e.printStackTrace();
                              } 
                          }
                      }
                  
                      /**
                       * 
                       * @throws IOException 
                       */
                      public static void main(String[] args) throws Exception {
                          Configuration conf = new Configuration();
                          //String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
                          /*if (args.length != 2) {
                            System.err.println("Usage: wordcount <in> <out>");
                            System.exit(2);
                          }*/
                  //      FileSystem fs = FileSystem.get(conf);
                          Job job = new Job(conf, "Recommend");
                          job.setJarByClass(Recommend.class);
                          
                          // the keys are words (strings)
                          job.setOutputKeyClass(Text.class);
                          job.setMapOutputKeyClass(LongWritable.class);
                          job.setMapOutputValueClass(Text.class);
                          
                          // the values are counts (ints)
                          job.setOutputValueClass(Text.class);
                  
                          job.setMapperClass(Map.class);
                          //conf.setReducerClass(Reduce.class);
                  
                          FileInputFormat.addInputPath(job, new Path(args[0]));
                          FileOutputFormat.setOutputPath(job, new Path(args[1]));
                         
                          System.exit(job.waitForCompletion(true) ? 0 : 1);
                           Path outPath = new Path(args[1]);
                              FileSystem dfs = FileSystem.get(outPath.toUri(), conf);
                              if (dfs.exists(outPath)) {
                              dfs.delete(outPath, true);
                              }
                      }
                  }
                  

I expect the output to be a file in Hadoop containing OwnerUserId ParentId, but instead I get output like:

                  1599788   <row Id="2292" PostTypeId="2" ParentId="2284" CreationDate="2008-08-05T13:28:06.700" Score="0" ViewCount="0" Body="&lt;p&gt;The first thing you should do is contact the main people who run the open source project. Ask them if it is ok to contribute to the code and go from there.&lt;/p&gt;&#xD;&#xA;&#xD;&#xA;&lt;p&gt;Simply writing your improved code and then giving it to them may result in your code being rejected.&lt;/p&gt;" OwnerUserId="383" LastActivityDate="2008-08-05T13:28:06.700" />
                  

I don't know where the 1599788 that appears as a key from the mapper comes from.

I don't know much about writing mapper classes for Hadoop; I need help modifying my code to get the desired output.

Thanks in advance.

Recommended answer

After a lot of research and experimentation, I finally learned how to write a map function for parsing XML files with syntax like the one I provided. I changed my approach, and this is my new mapper code... it works for my use case.

Hope it helps someone and saves them some time :)

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Map extends Mapper<LongWritable, Text, NullWritable, Text> {
    NullWritable obj = NullWritable.get();

    @Override
    public void map(LongWritable key, Text value, Context context) throws InterruptedException {
        // Each input line is one <row .../> element; tokenize it on whitespace
        // and pull the values out of the name="value" attribute tokens.
        StringTokenizer tok = new StringTokenizer(value.toString());
        String pa = null, ow = null, pi = null, v;
        while (tok.hasMoreTokens()) {
            String[] arr;
            String val = tok.nextToken();
            if (val.contains("PostTypeId")) {
                arr = val.split("[\"]");
                pi = arr[arr.length - 1];
                if (pi.equals("2")) {
                    // PostTypeId 2 means this row is an answer; keep scanning its attributes.
                    continue;
                }
                else break; // not an answer, skip the rest of this row
            }
            if (val.contains("ParentId")) {
                arr = val.split("[\"]");
                pa = arr[arr.length - 1];
            }
            else if (val.contains("OwnerUserId")) {
                arr = val.split("[\"]");
                ow = arr[arr.length - 1];
                try {
                    if (pa != null && ow != null) {
                        // Emit "OwnerUserId,ParentId" with a null key.
                        v = String.format("%s,%s", ow, pa);
                        context.write(obj, new Text(v));
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
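A note on the original symptom: the 1599788 in the question's output is almost certainly the byte offset of the input line. The question's map method has the old mapred signature (OutputCollector/Reporter), so it never overrides the new-API Mapper.map; Hadoop then runs the default identity mapper, which passes TextInputFormat's (byte offset, raw line) pairs straight through. The mapper above avoids this by matching the new-API signature. The answer does not show its driver, so the following is only a minimal sketch of a job setup whose output types match the mapper's <NullWritable, Text> parameters; the class name RecommendDriver and the choice of a map-only job are assumptions, not part of the original answer.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver for the mapper above (not from the original answer).
public class RecommendDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "Recommend");   // same construction style as the question's driver
        job.setJarByClass(RecommendDriver.class);

        job.setMapperClass(Map.class);          // the mapper shown above
        job.setNumReduceTasks(0);               // map-only job: mapper output is written directly

        // Output types must match the mapper's generic parameters <NullWritable, Text>.
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // posts.xml on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With zero reducers, the output file contains one OwnerUserId,ParentId line per answer row; for the sample row in the question this would be 25,88.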
                  

