        Hadoop Streaming job fails (not successful) in Python

        Date: 2023-09-12
                  This article describes how to deal with a Hadoop Streaming job failing (not successful) in Python. It should be a useful reference for anyone troubleshooting the same problem.

                  Problem Description


                  I'm trying to run a Map-Reduce job on Hadoop Streaming with Python scripts and am getting the same errors as in "Hadoop Streaming Job failed error in python", but those solutions didn't work for me.


                  My scripts work fine when I run "cat sample.txt | ./p1mapper.py | sort | ./p1reducer.py"


                  But when I run the following:

                  ./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
                      -input "p1input/*" \
                      -output p1output \
                      -mapper "python p1mapper.py" \
                      -reducer "python p1reducer.py" \
                      -file /Users/Tish/Desktop/HW1/p1mapper.py \
                      -file /Users/Tish/Desktop/HW1/p1reducer.py
                  


                  (NB: Even if I remove the "python" or type the full pathname for -mapper and -reducer, the result is the same)

                  Here's the output I get:

                  packageJobJar: [/Users/Tish/Desktop/HW1/p1mapper.py, /Users/Tish/Desktop/CS246/HW1/p1reducer.py, /Users/Tish/Documents/workspace/hadoop-0.20.2/tmp/hadoop-unjar4363616744311424878/] [] /var/folders/Mk/MkDxFxURFZmLg+gkCGdO9U+++TM/-Tmp-/streamjob3714058030803466665.jar tmpDir=null
                  11/01/18 03:02:52 INFO mapred.FileInputFormat: Total input paths to process : 1
                  11/01/18 03:02:52 INFO streaming.StreamJob: getLocalDirs(): [tmp/mapred/local]
                  11/01/18 03:02:52 INFO streaming.StreamJob: Running job: job_201101180237_0005
                  11/01/18 03:02:52 INFO streaming.StreamJob: To kill this job, run:
                  11/01/18 03:02:52 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
                  11/01/18 03:02:52 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
                  11/01/18 03:02:53 INFO streaming.StreamJob:  map 0%  reduce 0%
                  11/01/18 03:03:05 INFO streaming.StreamJob:  map 100%  reduce 0%
                  11/01/18 03:03:44 INFO streaming.StreamJob:  map 50%  reduce 0%
                  11/01/18 03:03:47 INFO streaming.StreamJob:  map 100%  reduce 100%
                  11/01/18 03:03:47 INFO streaming.StreamJob: To kill this job, run:
                  11/01/18 03:03:47 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
                  11/01/18 03:03:47 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
                  11/01/18 03:03:47 ERROR streaming.StreamJob: Job not Successful!
                  11/01/18 03:03:47 INFO streaming.StreamJob: killJob...
                  Streaming Job Failed!
                  


                  For each Failed/Killed Task Attempt:

                  Map output lost, rescheduling: getMapOutput(attempt_201101181225_0001_m_000000_0,0) failed :
                  org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_201101181225_0001/attempt_201101181225_0001_m_000000_0/output/file.out.index in any of the configured local directories
                      at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
                      at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
                      at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2887)
                      at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
                      at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
                      at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
                      at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
                      at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
                      at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
                      at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
                      at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
                      at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
                      at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
                      at org.mortbay.jetty.Server.handle(Server.java:324)
                      at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
                      at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
                      at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
                      at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
                      at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
                      at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
                      at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
                  


                  Here are my Python scripts: p1mapper.py

                  #!/usr/bin/env python
                  
                  import sys
                  import re
                  
                  SEQ_LEN = 4
                  
                  eos = re.compile(r'(?<=[a-zA-Z])\.')   # period preceded by a letter
                  ignore = re.compile(r'[\W\d]')         # non-word characters and digits
                  
                  for line in sys.stdin:
                      array = re.split(eos, line)        # split the line into sentences
                      for sent in array:
                          sent = ignore.sub('', sent)
                          sent = sent.lower()
                          if len(sent) >= SEQ_LEN:
                              # emit every 4-character window with a count of 1
                              for i in range(len(sent) - SEQ_LEN + 1):
                                  print '%s 1' % sent[i:i+SEQ_LEN]
                  


                  p1reducer.py

                  #!/usr/bin/env python
                  
                  from operator import itemgetter
                  import sys
                  
                  word2count = {}
                  
                  for line in sys.stdin:
                      word, count = line.split(' ', 1)
                      try:
                          count = int(count)
                          word2count[word] = word2count.get(word, 0) + count
                      except ValueError:    # count was not a number
                          pass
                  
                  # sort
                  sorted_word2count = sorted(word2count.items(), key=itemgetter(1), reverse=True)
                  
                  # write the top 3 sequences
                  for word, count in sorted_word2count[0:3]:
                      print '%s\t%s' % (word, count)
                  


                  Would really appreciate any help, thanks!

                  UPDATE:


                  hdfs-site.xml:

                  <?xml version="1.0"?>
                  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
                  
                  <!-- Put site-specific property overrides in this file. -->
                  
                  <configuration>
                    <property>
                      <name>dfs.replication</name>
                      <value>1</value>
                    </property>
                  </configuration>
                  


                  mapred-site.xml:

                  <?xml version="1.0"?>
                  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
                  
                  <!-- Put site-specific property overrides in this file. -->
                  
                  <configuration>
                    <property>
                      <name>mapred.job.tracker</name>
                      <value>localhost:54311</value>
                    </property>
                  </configuration>
                  

                  Recommended Answer


                  You are missing a lot of configuration, and you need to define directories and such. See here:

                  http://wiki.apache.org/hadoop/QuickStart


                  Distributed operation is just like the pseudo-distributed operation described above, except:

                  1. Specify the hostname or IP address of the master server in the values of fs.default.name and mapred.job.tracker in conf/hadoop-site.xml. These are specified as host:port pairs.
                  2. Specify directories for dfs.name.dir and dfs.data.dir in conf/hadoop-site.xml. These are used to hold distributed-filesystem data on the master node and slave nodes respectively. Note that dfs.data.dir may contain a space- or comma-separated list of directory names, so that data may be stored on multiple devices.
                  3. Specify mapred.local.dir in conf/hadoop-site.xml. This determines where temporary MapReduce data is written. It may also be a list of directories (see the configuration sketch after this list).
                  4. Specify mapred.map.tasks and mapred.reduce.tasks in conf/mapred-default.xml. As a rule of thumb, use 10x the number of slave processors for mapred.map.tasks, and 2x the number of slave processors for mapred.reduce.tasks.
                  5. List all slave hostnames or IP addresses in your conf/slaves file, one per line, and make sure jobtracker is in your /etc/hosts file pointing at your jobtracker node.
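
                  As a concrete illustration, here is a minimal sketch of what those overrides might look like. The paths and the NameNode port are placeholders rather than values from the question (only localhost:54311 for mapred.job.tracker comes from the asker's mapred-site.xml), and on hadoop-0.20.x these properties are usually split across conf/core-site.xml, conf/hdfs-site.xml and conf/mapred-site.xml rather than the single conf/hadoop-site.xml the wiki text mentions. Note that the DiskErrorException above complains that a map output file could not be found "in any of the configured local directories", which is consistent with mapred.local.dir never having been configured.

                  <?xml version="1.0"?>
                  <!-- Sketch only: all paths below are placeholders, not values from the question. -->
                  <configuration>
                    <property>
                      <name>fs.default.name</name>
                      <value>hdfs://localhost:54310</value>            <!-- host:port of the NameNode -->
                    </property>
                    <property>
                      <name>mapred.job.tracker</name>
                      <value>localhost:54311</value>                   <!-- host:port of the JobTracker -->
                    </property>
                    <property>
                      <name>dfs.name.dir</name>
                      <value>/Users/Tish/hadoop/dfs/name</value>       <!-- placeholder path -->
                    </property>
                    <property>
                      <name>dfs.data.dir</name>
                      <value>/Users/Tish/hadoop/dfs/data</value>       <!-- may be a comma-separated list -->
                    </property>
                    <property>
                      <name>mapred.local.dir</name>
                      <value>/Users/Tish/hadoop/mapred/local</value>   <!-- where temporary MapReduce data is written -->
                    </property>
                  </configuration>

                  Whichever locations you choose, the directories must exist and be writable by the user running the daemons, and the NameNode has to be formatted (bin/hadoop namenode -format) before a fresh dfs.name.dir can be used for the first time.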
