Error Reading an Uploaded CSV Using Dask in Django: 'InMemoryUploadedFile' object has no attribute 'startswith'

Posted by 纵然是瞬间 on 2020-07-23 09:48:08

Question


I'm building a Django app that lets users upload a CSV via a form with a FileField. Once the CSV is uploaded, I use the Pandas read_csv(filename) command to read it in so I can do some processing on it with Pandas.

I've recently started learning the really useful Dask library because the uploaded files can be larger than memory. Everything works fine when using Pandas pd.read_csv(filename), but when I try to use Dask dd.read_csv(filename) I get the error "'InMemoryUploadedFile' object has no attribute 'startswith'".

I'm pretty new to Django, Pandas and Dask. I've searched high and low and can't find this error in connection with Dask anywhere on Google.

Here is the code I'm trying to use (just the relevant bits... I hope):

Inside forms.py I have:

class ImportFileForm(forms.Form):
    file_name = forms.FileField(label='Select a csv',validators=[validate_file_extension, file_size])

Inside views.py

import pandas as pd
import codecs
import dask.array as da
import dask.dataframe as dd

from dask.distributed import Client
client = Client()

def import_csv(request):

    if request.method == 'POST':
        form = ImportFileForm(request.POST, request.FILES)
        if form.is_valid():

            utf8_file = codecs.EncodedFile(request.FILES['file_name'].open(), "utf-8")

            # IF I USE THIS PANDAS LINE IT WORKS AND I CAN THEN USE PANDAS TO PROCESS THE FILE
            #df_in = pd.read_csv(utf8_file)

            # IF I USE THIS DASK LINE IT DOES NOT WORK AND PRODUCES THE ERROR
            df_in = dd.read_csv(utf8_file)

And here is the error output I'm getting:

AttributeError at /import_data/import_csv/
'InMemoryUploadedFile' object has no attribute 'startswith'

/home/username/projects/myproject/import_data/services.py in save_imported_doc
    df_in = dd.read_csv(utf8_file) …
/home/username/anaconda3/lib/python3.7/site-packages/dask/dataframe/io/csv.py in read
            **kwargs …
/home/username/anaconda3/lib/python3.7/site-packages/dask/dataframe/io/csv.py in read_pandas
        **(storage_options or {}) …
/home/username/anaconda3/lib/python3.7/site-packages/dask/bytes/core.py in read_bytes
    fs, fs_token, paths = get_fs_token_paths(urlpath, mode="rb", storage_options=kwargs) …
/home/username/anaconda3/lib/python3.7/site-packages/fsspec/core.py in get_fs_token_paths
        path = cls._strip_protocol(urlpath) …
/home/username/anaconda3/lib/python3.7/site-packages/fsspec/implementations/local.py in _strip_protocol
        if path.startswith("file://"): …
/home/username/anaconda3/lib/python3.7/codecs.py in __getattr__
        return getattr(self.stream, name) 

Answer 1:


It seems you are not passing a file on disk, but some Django-specific buffer object; dd.read_csv expects a path or URL string (the traceback shows fsspec calling .startswith on it), not an open file object. Since you are expecting large files, you probably want to tell Django to stream the uploads directly to disk and give the filename to Dask; i.e., is request.FILES['file_name'] actually somewhere in your storage? The error message seems to suggest not, in which case you need to configure Django (sorry, I don't know how).

Note that Dask can deal with in-memory file-like objects such as io.BytesIO via the MemoryFileSystem, but this isn't very typical, and it won't help with your memory issues.
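
For completeness, here is a minimal sketch of that in-memory route (my own illustration, not part of the original answer; the "memory://upload.csv" key is arbitrary):

import fsspec
import dask.dataframe as dd

def read_upload_via_memory_fs(uploaded_file):
    # Copy the upload into fsspec's in-memory filesystem so Dask can
    # address it by URL. The whole file stays in RAM, so this does not
    # solve the larger-than-memory problem.
    mem_fs = fsspec.filesystem("memory")
    with mem_fs.open("/upload.csv", "wb") as f:
        f.write(uploaded_file.read())
    return dd.read_csv("memory://upload.csv")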




Answer 2:


I finally got it working. Here's a Django-specific solution building on the answer from @mdurant, who thankfully pointed me in the right direction.

By default Django keeps uploaded files under 2.5MB in memory, so Dask can't access them the way Pandas does: Dask asks for a location in actual storage. When the file is over 2.5MB, however, Django writes it to a temp folder, and that location can be retrieved with the uploaded file's temporary_file_path() method and passed directly to Dask. I found some really useful information about how Django actually handles files in the background in their docs: https://docs.djangoproject.com/en/3.0/ref/files/uploads/#custom-upload-handlers.

If you can't predict your users' file sizes in advance (as in my case) and a file happens to be under 2.5MB, you can change FILE_UPLOAD_HANDLERS in your Django settings so that every upload is written to a temp storage folder regardless of size, and can therefore always be accessed by Dask.

Here is how I changed my code, in case it helps anyone else in the same situation.

In views.py

def import_csv(request):

    if request.method == 'POST':
        form = ImportFileForm(request.POST, request.FILES)
        if form.is_valid():

            # temporary_file_path() gives Dask the on-disk location of the uploaded file
            df_in = dd.read_csv(request.FILES['file_name'].temporary_file_path())

And in settings.py, adding the setting below makes Django always write an uploaded file to temp storage, whether or not it is under 2.5MB, so Dask can always access it:

FILE_UPLOAD_HANDLERS = ['django.core.files.uploadhandler.TemporaryFileUploadHandler',]
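
If changing FILE_UPLOAD_HANDLERS globally isn't desirable, a possible alternative (my own sketch, not from the original answers; read_csv_upload is a hypothetical helper) is to branch on the upload class Django actually used:

from django.core.files.uploadedfile import TemporaryUploadedFile
import dask.dataframe as dd
import pandas as pd

def read_csv_upload(f):
    if isinstance(f, TemporaryUploadedFile):
        # Large upload: Django already wrote it to disk, so hand the path to Dask.
        return dd.read_csv(f.temporary_file_path())
    # Small in-memory upload: read it with Pandas and wrap it in a
    # single-partition Dask DataFrame.
    return dd.from_pandas(pd.read_csv(f), npartitions=1)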


Source: https://stackoverflow.com/questions/59427722/error-reading-an-uploaded-csv-using-dask-in-django-inmemoryuploadedfile-objec
