Dataflow/Apache Beam: manage custom module dependencies

Submitted by 守給你的承諾 on 2020-01-22 22:57:06

Question


I have a .py pipeline using Apache Beam that imports another module (.py), which is my custom module. I have a structure like this:

├── mymain.py
└── myothermodule.py

I import myothermodule.py in mymain.py like this:

import myothermodule
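
For reference, a minimal mymain.py along these lines might look like the sketch below (the pipeline body and the process helper are hypothetical; the relevant part is just the top-level import):

import sys

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

import myothermodule

def run(argv=None):
    # Runner, project, setup_file, etc. are all read from the command line.
    options = PipelineOptions(argv)
    with beam.Pipeline(options=options) as p:
        (p
         | beam.Create(['a', 'b', 'c'])
         | beam.Map(myothermodule.process))  # 'process' is a hypothetical helper

if __name__ == '__main__':
    run(sys.argv[1:])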

When I run it locally with the DirectRunner, I have no problem. But when I run it on Dataflow with the DataflowRunner, I get this error:

ImportError: No module named myothermodule

So I want to know: what should I do so that this module is found when running the job on Dataflow?


Answer 1:


When you run your pipeline remotely, you need to make any dependencies available on the remote workers too. To do that, you should put your module in a Python package: place it in a directory with an __init__.py file and create a setup.py. It would look like this:

├── mymain.py
├── setup.py
└── othermodules
    ├── __init__.py
    └── myothermodule.py

And import it like this:

from othermodules import myothermodule
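
For completeness, a hypothetical othermodules/myothermodule.py matching the sketch in the question could be as small as:

# othermodules/myothermodule.py (hypothetical contents)
def process(element):
    """Any ordinary function your pipeline steps call."""
    return element.upper()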

Then you can run your pipeline with the command-line option --setup_file ./setup.py.
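
For example, a full Dataflow launch command might look like this (the project, region, and bucket names are placeholders):

python mymain.py \
    --runner DataflowRunner \
    --project your-gcp-project \
    --region us-central1 \
    --temp_location gs://your-bucket/tmp/ \
    --setup_file ./setup.py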

A minimal setup.py file would look like this:

import setuptools

# find_packages() discovers every directory containing an __init__.py,
# so the othermodules package above is picked up automatically.
setuptools.setup(packages=setuptools.find_packages())
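
Under the hood, passing --setup_file makes the runner build a source distribution of your project at submission time, stage it, and install it on every worker, which is why find_packages() must be able to discover the package via its __init__.py.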

The whole setup is documented here.

And a whole example using this can be found here.



Source: https://stackoverflow.com/questions/51763406/dataflow-apache-beam-manage-custom-module-dependencies
