Question
I am using Flink to read data from Azure Data Lake, but Flink is not able to find the Azure Data Lake file system. How do I configure Flink to understand the Azure Data Lake file system? Could anyone guide me on this?
Answer 1:
Flink has the capability to connect to any Hadoop-compatible file system (i.e., one that implements org.apache.hadoop.fs.FileSystem). See here for the explanation: https://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html
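For example, once the ADLS connector jars are on the classpath and the Hadoop configuration is in place (see below), reading an adl:// path from Flink looks the same as reading any other Hadoop-compatible URI. This is a minimal sketch using the DataSet API; the account name and file path are placeholders.

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ReadFromAdls {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // The adl:// scheme is resolved through the Hadoop FileSystem abstraction,
        // so this only works once the ADLS jars and configuration are available.
        DataSet<String> lines = env.readTextFile(
                "adl://youraccount.azuredatalakestore.net/clusters/input/data.txt");

        lines.first(10).print();
    }
}
```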
In core-site.xml, you should add the ADLS-specific configuration. You will also need the ADL jars on the classpath wherever the Flink agents run.
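As a rough sketch, the ADLS (Gen1) entries in core-site.xml look something like the following. The exact property names depend on your Hadoop version (older releases use a dfs.adls.* prefix instead of fs.adl.*), and the tenant refresh URL, client id, and credential are placeholders for your own Azure AD service principal.

```xml
<configuration>
  <!-- Register the ADLS file system implementation for the adl:// scheme -->
  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>

  <!-- OAuth2 service-to-service credentials for the Azure AD application -->
  <property>
    <name>fs.adl.oauth2.access.token.provider.type</name>
    <value>ClientCredential</value>
  </property>
  <property>
    <name>fs.adl.oauth2.refresh.url</name>
    <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
  </property>
  <property>
    <name>fs.adl.oauth2.client.id</name>
    <value>YOUR_CLIENT_ID</value>
  </property>
  <property>
    <name>fs.adl.oauth2.credential</name>
    <value>YOUR_CLIENT_SECRET</value>
  </property>
</configuration>
```

Flink then needs to be pointed at this Hadoop configuration, e.g. via the fs.hdfs.hadoopconf entry in flink-conf.yaml or the HADOOP_CONF_DIR environment variable, and the hadoop-azure-datalake and azure-data-lake-store-sdk jars must be on the classpath of every JobManager and TaskManager.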
It's basically the same concept as outlined in this blog, just adapted to Flink: https://medium.com/azure-data-lake/connecting-your-own-hadoop-or-spark-to-azure-data-lake-store-93d426d6a5f4
Source: https://stackoverflow.com/questions/45029639/how-to-configure-flink-to-understand-the-azure-data-lake-file-system