I have been trying to troubleshoot an issue that occurs when we download a file from FTP/FTPS. The file gets downloaded successfully, but no operation is performed after the download completes. No error occurs that could give more information about the issue. I tried searching on Stack Overflow and found this link, which talks about a similar problem statement, and it looks like I am facing a similar issue, though I am not sure. I need a little more help in resolving it.
I tried setting the FTP connection timeout to 60 minutes, but it was of little help. Prior to this I was using retrbinary() from ftplib, but the same issue occurred there. I tried passing different blocksize and windowsize values, but the issue was still reproducible.
I am trying to download a file of size ~3GB from an AWS EMR cluster. Sample code is given below.
import logging
import os
import threading
from ftplib import FTP

logger = logging.getLogger(__name__)

def download_ftp(self, ip, port, user_name, password, file_name, target_path):
    try:
        os.chdir(target_path)
        ftp = FTP(host=ip)
        ftp.connect(port=int(port), timeout=3000)
        ftp.login(user=user_name, passwd=password)
        if ftp.nlst(file_name) != []:
            dir = os.path.split(file_name)
            ftp.cwd(dir[0])
            for filename in ftp.nlst(file_name):
                # Open the data connection directly so the transfer can run in a
                # background thread while the control connection is kept alive.
                sock = ftp.transfercmd('RETR ' + filename)

                def background():
                    fhandle = open(filename, 'wb')
                    while True:
                        block = sock.recv(1024 * 1024)
                        if not block:
                            break
                        fhandle.write(block)
                    fhandle.close()
                    sock.close()

                t = threading.Thread(target=background)
                t.start()
                while t.is_alive():
                    t.join(60)
                    ftp.voidcmd('NOOP')  # keepalive on the control connection
                logger.info("File " + filename + " fetched successfully")
            return True
        else:
            logger.error("File " + file_name + " is not present in FTP")
    except Exception as e:
        logger.error(e)
        raise
Another option suggested in the above-mentioned link is to close the connection after downloading a small chunk of the file and then restart the connection. Can someone suggest how this can be achieved? I am not sure how to resume the download from the same point where it stopped before the connection was closed. Will this method be foolproof for downloading the entire file?
I don't know much about FTP server-level timeout settings, so I don't know what needs to be altered or how. I basically want to write a generic FTP downloader that can help in downloading files from FTP/FTPS.
When I use the retrbinary() method of ftplib and set the debug level to 2:
ftp.set_debuglevel(2)
ftp.retrbinary('RETR ' + filename, fhandle.write)
The following log gets printed:
*cmd* 'TYPE I'
*put* 'TYPE I\r\n'
*get* '200 Type set to I.\r\n'
*resp* '200 Type set to I.'
*cmd* 'PASV'
*put* 'PASV\r\n'
*get* '227 Entering Passive Mode (64,27,160,28,133,251).\r\n'
*resp* '227 Entering Passive Mode (64,27,160,28,133,251).'
*cmd* 'RETR FFFT_BRA_PM_R_201711.txt'
*put* 'RETR FFFT_BRA_PM_R_201711.txt\r\n'
*get* '150 Opening BINARY mode data connection for FFFT_BRA_PM_R_201711.txt.\r\n'
*resp* '150 Opening BINARY mode data connection for FFFT_BRA_PM_R_201711.txt.'
Before doing anything, note that there is something very wrong with your connection, and diagnosing that and getting it fixed is far better than working around it. But sometimes, you just have to deal with a broken server, and even sending keepalives doesn't help. So, what can you do?
The trick is to download a chunk at a time, then abort the download—or, if the server can't handle aborting, close and reopen the connection.
Note that I'm testing everything below with ftp://speedtest.tele2.net/5MB.zip, which hopefully won't cause a million people to start hammering their servers. Of course you'll want to test it with your actual server.
The entire solution of course relies on the server being able to resume transfers, which not all servers can do—especially when you're dealing with something badly broken. So we'll need to test for that. Note that this test will be very slow and very heavy on the server, so do not test with your 3GB file; find something much smaller. Also, if you can put something readable there, it will help for debugging, because you may be stuck comparing files in a hex editor.
from ftplib import FTP

def downit():
    with open('5MB.zip', 'wb') as f:
        while True:
            # One connection per chunk: reconnect, then resume from the
            # current end of the local file via REST.
            ftp = FTP(host='speedtest.tele2.net', user='anonymous', passwd='test@example.com')
            pos = f.tell()
            print(pos)
            ftp.sendcmd('TYPE I')
            sock = ftp.transfercmd('RETR 5MB.zip', rest=pos)
            buf = sock.recv(1024 * 1024)
            if not buf:
                return
            f.write(buf)
You will probably not get 1MB at a time, but instead something under 8KB. Let's assume you're seeing 1448, then 2896, 4344, etc.
- If you get an exception on the REST, the server does not handle resuming—give up, you're hosed.
- If the data comes back wrong and you can't figure out how to fix it by using f.seek, I can explain—but you probably won't run into it.
One thing we can do is try to abort the download and not reconnect.
def downit():
    with open('5MB.zip', 'wb') as f:
        ftp = FTP(host='speedtest.tele2.net', user='anonymous', passwd='test@example.com')
        while True:
            pos = f.tell()
            print(pos)
            ftp.sendcmd('TYPE I')
            sock = ftp.transfercmd('RETR 5MB.zip', rest=pos)
            buf = sock.recv(1024 * 1024)
            if not buf:
                return
            f.write(buf)
            sock.close()
            ftp.abort()
You're going to want to try multiple variations:

- No sock.close.
- No ftp.abort.
- With sock.close after ftp.abort.
- With ftp.abort after sock.close.
- All four of the above, repeated with TYPE I moved to before the loop instead of each time.
Some will raise exceptions. Others will just appear to hang forever. If that's true for all 8 of them, we need to give up on aborting. But if any of them works, great!
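If you'd rather not edit the script by hand eight times, a small throwaway harness can cycle through all of them. This is only a sketch under the same test-server assumptions as above; try_variation and its flags are made-up names, and the timeout is there so a variation that would hang forever surfaces as a socket.timeout exception instead of an actual hang:

from ftplib import FTP

HOST, FILE = 'speedtest.tele2.net', '5MB.zip'  # same hypothetical test target as above

def try_variation(do_close, do_abort, abort_first, type_once):
    # Download two small chunks on one connection, applying one
    # close/abort combination between them; two chunks are enough
    # to reveal a hang or an exception.
    ftp = FTP(host=HOST, user='anonymous', passwd='test@example.com', timeout=30)
    if type_once:
        ftp.sendcmd('TYPE I')
    pos = 0
    for _ in range(2):
        if not type_once:
            ftp.sendcmd('TYPE I')
        sock = ftp.transfercmd('RETR ' + FILE, rest=pos)
        pos += len(sock.recv(64 * 1024))
        if abort_first:
            if do_abort:
                ftp.abort()
            if do_close:
                sock.close()
        else:
            if do_close:
                sock.close()
            if do_abort:
                ftp.abort()
    ftp.close()

# The four orderings from the list above, each tried with TYPE I inside
# and outside the loop: 8 combinations in total.
for do_close, do_abort, abort_first in [
        (True, False, False),   # no ftp.abort
        (False, True, True),    # no sock.close
        (True, True, True),     # sock.close after ftp.abort
        (True, True, False)]:   # ftp.abort after sock.close
    for type_once in (False, True):
        combo = (do_close, do_abort, abort_first, type_once)
        try:
            try_variation(*combo)
            print('works:', combo)
        except Exception as e:  # a hang shows up here as socket.timeout
            print('fails:', combo, repr(e))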
The other way to speed things up is to download 1MB (or more) at a time before aborting or reconnecting. Just replace this code:
buf = sock.recv(1024 * 1024)
if buf:
    f.write(buf)
with this:
chunklen = 1024 * 1024
while chunklen:
    print('   ', f.tell())
    buf = sock.recv(chunklen)
    if not buf:
        break
    f.write(buf)
    chunklen -= len(buf)
Now, instead of reading 1448 or 8192 bytes for each transfer, you're reading up to 1MB for each transfer. Try pushing it farther.
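Spliced into the reconnecting version, the whole thing would look roughly like this (a sketch only, same hypothetical test file as above; if one of the abort variants worked for you, you'd keep a single connection and abort instead of reconnecting):

from ftplib import FTP

def downit(chunklen=1024 * 1024):
    # Sketch: read up to `chunklen` bytes per connection, then reconnect
    # and resume from the current file position with REST.
    with open('5MB.zip', 'wb') as f:
        while True:
            ftp = FTP(host='speedtest.tele2.net', user='anonymous',
                      passwd='test@example.com')
            pos = f.tell()
            print(pos)
            ftp.sendcmd('TYPE I')
            # Note: some servers may reject a REST at end-of-file with an
            # error instead of sending zero bytes; handle that if you hit it.
            sock = ftp.transfercmd('RETR 5MB.zip', rest=pos)
            remaining = chunklen
            while remaining:
                print('   ', f.tell())
                buf = sock.recv(remaining)
                if not buf:
                    break
                f.write(buf)
                remaining -= len(buf)
            sock.close()
            ftp.close()
            if remaining == chunklen:  # no bytes at all: transfer complete
                return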
If, say, your downloads were failing at 10MB, and the keepalive code in your question got things up to 512MB, but it just wasn't enough for 3GB—you can combine the two. Use keepalives to read 512MB at a time, then abort or reconnect and read the next 512MB, until you're done.
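There's no code for that combination above, so here is a rough sketch of the shape it could take, reusing the NOOP keepalive idea from your question. The 512MB chunk size and the helper name download_resumable are illustrative assumptions, not tested values:

import threading
from ftplib import FTP

def download_resumable(host, user, passwd, filename,
                       chunklen=512 * 1024 * 1024):
    # Sketch: read up to `chunklen` bytes per connection, sending NOOP
    # keepalives on the control connection while a background thread
    # drains the data connection, then reconnect and resume with REST.
    with open(filename, 'wb') as f:
        while True:
            ftp = FTP(host=host, user=user, passwd=passwd, timeout=3000)
            pos = f.tell()
            ftp.sendcmd('TYPE I')
            sock = ftp.transfercmd('RETR ' + filename, rest=pos)

            def fetch_chunk():
                remaining = chunklen
                while remaining:
                    buf = sock.recv(min(remaining, 1024 * 1024))
                    if not buf:
                        break
                    f.write(buf)
                    remaining -= len(buf)
                sock.close()

            t = threading.Thread(target=fetch_chunk)
            t.start()
            while t.is_alive():
                t.join(60)
                ftp.voidcmd('NOOP')  # keepalive, as in the question's code
            done = (f.tell() == pos)  # no new bytes means nothing was left
            ftp.close()
            if done:
                return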