Celery with RabbitMQ running as docker containers: Received unregistered task of type '...'

Date: 2023-04-27
This article describes how to deal with the error "Received unregistered task of type '...'" when Celery and RabbitMQ are run as docker containers.

Problem description

I am relatively new to docker, celery and rabbitMQ.

In our project we currently have the following setup: 1 physical host with multiple docker containers running:

1x rabbitmq:3-management container

# pull image from docker hub and install
docker pull rabbitmq:3-management
# run docker image
docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit -p 8080:15672 -p 5672:5672 rabbitmq:3-management

1x celery container

# pull docker image from docker hub
docker pull celery
# run celery container
docker run --link some-rabbit:rabbit --name some-celery -d celery

(there are some more containers, but they should not have anything to do with the problem)

The tasks file

To get to know celery and rabbitmq a bit, I created a tasks.py file on the physical host:

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest:guest@172.17.0.81/')

@app.task(name='tasks.add')
def add(x, y):
    return x + y

The whole setup seems to be working quite fine actually. So when I open a python shell in the directory where tasks.py is located and run

>>> from tasks import add
>>> add.delay(4,4)

The task gets queued and directly pulled from the celery worker.

However, according to the logs, the celery worker does not know the tasks module:

$ docker logs some-celery


[2015-04-08 11:25:24,669: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.

Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.

The full contents of the message body was:
{'callbacks': None, 'timelimit': (None, None), 'retries': 0, 'id': '2b5dc209-3c41-4a8d-8efe-ed450d537e56', 'args': (4, 4), 'eta': None, 'utc': True, 'taskset': None, 'task': 'tasks.add', 'errbacks': None, 'kwargs': {}, 'chord': None, 'expires': None} (256b)
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: 'tasks.add'
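
One way to confirm what the worker actually has registered is to ask it over the broker from the host. A short sketch (it reuses the broker settings from tasks.py above and assumes the worker responds to remote-control requests):

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest:guest@172.17.0.81/')

# Ask all reachable workers for their registered task names.
# If 'tasks.add' is missing from the reply, the worker never imported tasks.py.
print(app.control.inspect().registered())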

So the problem obviously seems to be that the celery workers in the celery container do not know the tasks module. As I am not a docker specialist, I wanted to ask how I would best import the tasks module into the celery container?

Any help is appreciated :)

Edit April 8, 2015, 21:05:

Thanks to Isowen for the answer. Just for completeness here is what I did:

Let's assume my tasks.py is located on my local machine in /home/platzhersh/celerystuff. Now I created a celeryconfig.py in the same directory with the following content:

CELERY_IMPORTS = ('tasks',)
CELERY_IGNORE_RESULT = False
CELERY_RESULT_BACKEND = 'amqp'
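
For reference, the worker started inside the celery image appears to pick up a celeryconfig.py from its working directory on its own; if you were wiring the app up yourself, the explicit equivalent would roughly be the following sketch (config_from_object is the standard Celery call, the rest mirrors tasks.py above):

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest:guest@172.17.0.81/')
# Read CELERY_IMPORTS / CELERY_RESULT_BACKEND from celeryconfig.py on the
# Python path; CELERY_IMPORTS makes the worker import tasks.py, which
# registers tasks.add.
app.config_from_object('celeryconfig')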

As mentioned by Isowen, celery searches /home/user in the container for tasks and config files. So we mount /home/platzhersh/celerystuff into the container when starting it:

docker run -v /home/platzhersh/celerystuff:/home/user --link some-rabbit:rabbit --name some-celery -d celery

This did the trick for me. Hope this helps some other people with similar problems. I'll now try to expand that solution by putting the tasks also in a separate docker container.
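
With the worker now able to import tasks.py, the queued call from earlier should also report a result. A small sketch from the same python shell (it assumes the broker at 172.17.0.81 and the worker are still reachable):

>>> from tasks import add
>>> result = add.delay(4, 4)
>>> result.ready()          # True once the worker has executed the task
True
>>> result.get(timeout=10)  # fetch the return value via the amqp result backend
8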

Recommended answer

As you suspect, the issue is because the celery worker does not know the tasks module. There are two things you need to do:

  1. Get your task definitions "into" the docker container.
  2. Configure the celery worker to load those task definitions.

For Item (1), the easiest way is probably to use a "Docker Volume" to mount a host directory of your code onto the celery docker instance. Something like:

docker run --link some-rabbit:rabbit -v /path/to/host/code:/home/user --name some-celery -d celery 

Where /path/to/host/code is your host path, and /home/user is the path to mount it on the instance. Why /home/user in this case? Because the Dockerfile for the celery image defines the working directory (WORKDIR) as /home/user.

(Note: Another way to accomplish Item (1) would be to build a custom docker image with the code "built in", but I will leave that as an exercise for the reader.)

For Item (2), you need to create a celery configuration file that imports the tasks file. This is a more general issue, so I will point to a previous stackoverflow answer: Celery Received unregistered task of type (run example)
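
A minimal sketch of such a configuration file, matching the setup from the question (celeryconfig.py is Celery's conventional default config module name; adjust the imported module to your project):

# celeryconfig.py -- place it next to tasks.py in the mounted code directory
CELERY_IMPORTS = ('tasks',)      # worker imports tasks.py and registers tasks.add
CELERY_RESULT_BACKEND = 'amqp'   # matches the backend used in tasks.py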
