Apache Spark reads from S3: can't pickle thread.lock objects

Anonymous (unverified), submitted 2019-12-03 02:29:01

Question:

So I want my Spark app to read some text from Amazon S3. I wrote the following simple script:

import boto3

s3_client = boto3.client('s3')
text_keys = ["key1.txt", "key2.txt"]
data = sc.parallelize(text_keys).flatMap(
    lambda key: s3_client.get_object(Bucket="my_bucket", Key=key)['Body'].read().decode('utf-8'))

When I call data.collect() I get the following error:

TypeError: can't pickle thread.lock objects

and I can't seem to find any help online. Has anyone managed to solve this?

Answer 1:

Your s3_client isn't serialisable.

Instead of flatMap, use mapPartitions and initialise s3_client inside the function that runs on the executors, so the client is never pickled. As sketched below, that will:

  1. initialise s3_client on each worker
  2. reduce initialisation overhead, since the client is created once per partition rather than once per record
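
A minimal sketch of that approach, reusing the my_bucket bucket and text_keys list from the question (the read_keys helper name is just for illustration):

import boto3

def read_keys(keys):
    # Runs on the executor: the client is created here, per partition,
    # so it never has to be pickled and shipped from the driver.
    s3_client = boto3.client('s3')
    for key in keys:
        yield s3_client.get_object(Bucket="my_bucket", Key=key)['Body'].read().decode('utf-8')

data = sc.parallelize(text_keys).mapPartitions(read_keys)
print(data.collect())

Compared with creating a client inside a per-record lambda, this builds one client per partition, which keeps the initialisation cost low while still avoiding the pickling error.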

