how to configure flink to understand the Azure Data Lake file system?

Submitted by 喜你入骨 on 2020-01-07 04:07:47

Question


I am using Flink to read data from Azure Data Lake, but Flink is not able to find the Azure Data Lake file system. How do I configure Flink so that it understands the Azure Data Lake file system? Could anyone guide me on this?


Answer 1:


Flink can connect to any Hadoop-compatible file system (i.e., one that implements org.apache.hadoop.fs.FileSystem). See here for an explanation: https://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html

In core-site.xml, add the ADLS-specific configuration. You will also need the ADL jars on the classpath wherever the Flink agents run.
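As a rough sketch, the ADLS entries in core-site.xml look something like the fragment below. This assumes the ADLS Gen1 connector (adl:// scheme) with OAuth2 client-credential authentication; the exact property names vary between Hadoop versions (older releases use the dfs.adls.oauth2.* prefix instead of fs.adl.oauth2.*), and all the values shown are placeholders you must replace with your own tenant and application details.

```xml
<configuration>
  <!-- Register the ADLS file system implementations for the adl:// scheme -->
  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>

  <!-- OAuth2 client-credential (service principal) authentication;
       all values below are placeholders -->
  <property>
    <name>fs.adl.oauth2.access.token.provider.type</name>
    <value>ClientCredential</value>
  </property>
  <property>
    <name>fs.adl.oauth2.refresh.url</name>
    <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
  </property>
  <property>
    <name>fs.adl.oauth2.client.id</name>
    <value>YOUR_APPLICATION_CLIENT_ID</value>
  </property>
  <property>
    <name>fs.adl.oauth2.credential</name>
    <value>YOUR_CLIENT_SECRET</value>
  </property>
</configuration>
```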

It's basically the same concept as outlined in this blog, just adapted to Flink: https://medium.com/azure-data-lake/connecting-your-own-hadoop-or-spark-to-azure-data-lake-store-93d426d6a5f4
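Putting the pieces together, the setup steps might look like the sketch below. The jar locations, environment variables, and account name are all assumptions about a typical Hadoop/Flink installation; adjust them to your own paths and versions.

```shell
# Hypothetical paths -- adjust to your Hadoop and Flink installations.

# 1) Put the ADLS connector jars on Flink's classpath:
cp $HADOOP_HOME/share/hadoop/tools/lib/hadoop-azure-datalake-*.jar   $FLINK_HOME/lib/
cp $HADOOP_HOME/share/hadoop/tools/lib/azure-data-lake-store-sdk-*.jar $FLINK_HOME/lib/

# 2) Point Flink at the Hadoop configuration directory containing core-site.xml:
export HADOOP_CONF_DIR=/etc/hadoop/conf

# 3) In your Flink job, reference the store with the adl:// scheme, e.g.:
#    env.readTextFile("adl://<your-account>.azuredatalakestore.net/path/to/data")
```

These steps need to be repeated on every machine where a Flink JobManager or TaskManager runs, since each process resolves the adl:// scheme through its local classpath and Hadoop configuration.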



Source: https://stackoverflow.com/questions/45029639/how-to-configure-flink-to-understand-the-azure-data-lake-file-system
