How to map a dask Series with a large dict

Submitted by 自闭症网瘾萝莉.ら on 2019-12-12 13:18:20

Question


I'm trying to figure out the best way to map a dask Series with a large mapping. The straightforward series.map(large_mapping) issues UserWarning: Large object of size <X> MB detected in task graph and suggests using client.scatter and client.submit, but the latter doesn't solve the problem and is in fact much slower. Passing broadcast=True to client.scatter doesn't help either.

import argparse
import distributed
import dask.dataframe as dd

import numpy as np
import pandas as pd


def compute(s_size, m_size, npartitions, scatter, broadcast, missing_percent=0.1, seed=1):
    np.random.seed(seed)
    # Large {int: float} mapping; the series draws from a slightly wider
    # key range, so about missing_percent of its values have no mapping.
    mapping = dict(zip(np.arange(m_size), np.random.random(size=m_size)))
    ps = pd.Series(np.random.randint(int((1 + missing_percent) * m_size), size=s_size))
    ds = dd.from_pandas(ps, npartitions=npartitions)
    if scatter:
        # Scatter the mapping to the workers, then submit the map call.
        mapping_futures = client.scatter(mapping, broadcast=broadcast)
        future = client.submit(ds.map, mapping_futures)
        return future.result()
    else:
        # Naive approach: the mapping is embedded directly in the task graph.
        return ds.map(mapping)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-s', default=200000, type=int, help='series size')
    parser.add_argument('-m', default=50000, type=int, help='mapping size')
    parser.add_argument('-p', default=10, type=int, help='partitions number')
    parser.add_argument('--scatter', action='store_true', help='Scatter mapping')
    parser.add_argument('--broadcast', action='store_true', help='Broadcast mapping')
    args = parser.parse_args()

    client = distributed.Client()
    ds = compute(args.s, args.m, args.p, args.scatter, args.broadcast)
    print(ds.compute().describe())

Answer 1:


Your problem is here:

In [4]: mapping = dict(zip(np.arange(50000), np.random.random(size=50000)))

In [5]: import pickle

In [6]: %time len(pickle.dumps(mapping))
CPU times: user 2.24 s, sys: 18.6 ms, total: 2.26 s
Wall time: 2.25 s
Out[6]: 6268809

So mapping is big and unpartitioned - the scatter operation itself is what's causing you trouble in this case.
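
As a quick sanity check (my addition, not part of the original answer): the same 50000 floats stored as a plain NumPy array pickle to roughly 400 kB, more than an order of magnitude smaller, because the dict boxes every key and every value as a separate Python object.

import pickle
import numpy as np

# Same values, but as one contiguous float64 buffer instead of
# 50000 boxed key/value pairs.
values = np.random.random(size=50000)
print(len(pickle.dumps(values)))  # ~400 kB: 50000 * 8 bytes plus a small header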

Consider the alternative

def make_mapping():
    return dict(zip(np.arange(50000), np.random.random(size=50000)))

mapping = client.submit(make_mapping)  # ships the function, not the data
                                       # and requires no serialisation
future = client.submit(ds.map, mapping)

This will not show the warning. However, it seems strange to me to use a dictionary here to do the mapping; a Series or straight array seems to encode the nature of the data better.
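
For illustration, here is a minimal sketch of that last suggestion (my own code, not from the original answer): since the keys are just 0..m_size-1, the mapping is naturally a pandas Series whose index plays the role of the dict keys, and map accepts a Series directly with the same lookup semantics.

import dask.dataframe as dd
import numpy as np
import pandas as pd

np.random.seed(1)
m_size, s_size = 50000, 200000

# The array of values becomes a Series; its index 0..m_size-1 stands in
# for the dict keys.
mapping_series = pd.Series(np.random.random(size=m_size))

ps = pd.Series(np.random.randint(int(1.1 * m_size), size=s_size))
ds = dd.from_pandas(ps, npartitions=10)

# pandas map semantics: keys absent from the index become NaN.
# (Some dask versions may suggest passing meta= explicitly here.)
result = ds.map(mapping_series)
print(result.compute().describe())

The array-backed Series is also far cheaper to serialise than the dict, and the submit trick above still applies if it grows large.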



Source: https://stackoverflow.com/questions/50638682/how-to-map-a-dask-series-with-a-large-dict
