Google DataFlow/Python: Import errors with save_main_session and custom modules in __main__

Submitted by 依然范特西╮ on 2019-12-11 07:03:44

Question


Could somebody please clarify the expected behavior when using save_main_session with custom modules imported in __main__? My Dataflow pipeline imports two non-standard modules: one via requirements.txt and the other via setup_file. Unless I move the imports into the functions where they are used, I keep getting import/pickling errors; a sample error is below. From the documentation I assumed that setting save_main_session would solve this problem, but it does not (see the error below). So I wonder whether I missed something or this behavior is by design. The same import works fine when placed inside a function.

Error:

  File "/usr/lib/python2.7/pickle.py", line 1130, in find_class
    __import__(module)
ImportError: No module named jmespath
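For reference, the workaround mentioned above looks like this (a minimal sketch; json stands in for a third-party module such as jmespath, and the function name is made up):

```python
# Deferring the import into the function body avoids the pickling
# problem: the function itself is pickled by reference, and the import
# runs on the worker at call time instead of during unpickling.
def extract_name(record):
    import json  # stand-in here for a non-standard module like jmespath
    return json.loads(record)["name"]

print(extract_name('{"name": "alice"}'))
```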

Answer 1:


https://cloud.google.com/dataflow/faq#how-do-i-handle-nameerrors

https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/

When to use --save_main_session:

You can set the --save_main_session pipeline option to True. This will cause the state of the global namespace to be pickled and loaded on the Cloud Dataflow worker.
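What "pickled and loaded on the worker" means in practice can be seen with the standard pickle module alone (a minimal sketch; decimal stands in for any imported module). Pickle stores objects by module and name, not by code, and the unpickling side re-imports the module via __import__ (the find_class step in the traceback above). So save_main_session does not ship the module itself; jmespath still has to be installed on the worker, which is what requirements.txt or setup.py is for.

```python
import pickle

# Pickle an object from an imported module. The payload records only
# the module and class name, not the module's code.
import decimal
payload = pickle.dumps(decimal.Decimal("1.5"))

# b"decimal" appears verbatim in the payload: unpickling calls
# __import__("decimal"), so the module must be importable wherever the
# payload is loaded, including on every Dataflow worker.
print(b"decimal" in payload)
obj = pickle.loads(payload)
print(obj)
```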

The setup that works best for me is having a dataflow_launcher.py sitting at the project root next to your setup.py. The only thing it does is import your pipeline file and launch it. Use setup.py to handle all your dependencies. This is the best example I've found so far:

https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/complete/juliaset
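A minimal sketch of that layout (dataflow_launcher.py and setup.py are the only files the answer names; the package and module names below are assumptions):

```python
# Hypothetical project layout:
#
#   project-root/
#     setup.py              # declares every non-standard dependency
#     dataflow_launcher.py  # thin entry point: imports the pipeline and runs it
#     my_pipeline/
#       __init__.py
#       pipeline.py         # the actual Beam pipeline code
#
# setup.py, sketched: workers install this package, so dependencies such
# as jmespath from the question go in install_requires rather than in
# requirements.txt.
import setuptools

setuptools.setup(
    name="my_pipeline",
    version="0.0.1",
    packages=setuptools.find_packages(),
    install_requires=["jmespath"],
)
```

Pass --setup_file ./setup.py when launching, so Dataflow builds and installs the package on each worker.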



Source: https://stackoverflow.com/questions/51311301/google-dataflow-python-import-errors-with-save-main-session-and-custom-modules
