How do I run a dask.distributed cluster in a single thread?

Submitted by 半腔热情 on 2019-12-04 01:21:33

Local Scheduler

If you can get by with the single-machine scheduler's API (just compute), then you can use the single-threaded scheduler:

x.compute(scheduler='single-threaded')
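A quick way to confirm this behavior is to check which thread actually executes a task; this sketch assumes dask is installed and uses a hypothetical whoami task for illustration:

```python
import threading
import dask

main_ident = threading.get_ident()

@dask.delayed
def whoami():
    # Returns the id of the thread that executes this task
    return threading.get_ident()

# 'single-threaded' (also spelled 'synchronous') runs every task
# in the calling thread, with no thread pool at all
task_ident = whoami().compute(scheduler='single-threaded')
```

Here task_ident equals main_ident, since the graph is executed inline in the caller's thread.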

Distributed Scheduler - Single Machine

If you want to run a dask.distributed cluster on a single machine, you can start the client with no arguments:

from dask.distributed import Client
client = Client()  # Starts local cluster
x.compute()

This uses many threads (and, by default, several worker processes), but all computation stays on one machine.
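A minimal end-to-end sketch of this setup, assuming dask.distributed is installed (the inc helper is hypothetical, just to give the cluster something to do):

```python
import dask
from dask.distributed import Client

client = Client()  # starts a local cluster of worker processes + threads

@dask.delayed
def inc(i):
    return i + 1

# Build a small task graph and run it on the local cluster
total = dask.delayed(sum)([inc(i) for i in range(4)])
result = total.compute()
client.close()
```

Because a Client is active, compute() dispatches the graph to the local cluster rather than to the default single-machine scheduler.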

Distributed Scheduler - Single Process

Alternatively, if you want to run everything in a single process, you can use the processes=False keyword:

from dask.distributed import Client
client = Client(processes=False)  # Starts local cluster
x.compute()

All of the communication and control happen in a single thread, though computation occurs in a separate thread pool.
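One way to see that everything stays in one process is to ask the worker for its process id; this sketch assumes dask.distributed is installed:

```python
import os
from dask.distributed import Client

client = Client(processes=False)  # workers live in this process, as threads

# The task runs in a worker thread of this same process,
# so the worker's pid matches our own
same_pid = client.submit(os.getpid).result() == os.getpid()
client.close()
```

The task still runs on a worker thread pool rather than the main thread, which is why a fully single-threaded setup needs the extra work shown in the next section.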

Distributed Scheduler - Single Thread

To run control, communication, and computation all in a single thread, you need to create a Tornado concurrent.futures-style Executor. Beware: this Tornado API may not be public.

from dask.distributed import Scheduler, Worker, Client
from tornado.concurrent import DummyExecutor
from tornado.ioloop import IOLoop
import threading

loop = IOLoop()
e = DummyExecutor()  # runs submitted functions immediately, in the calling thread

s = Scheduler(loop=loop)
s.start()
w = Worker(s.address, loop=loop, executor=e)
loop.add_callback(w._start)

async def f():
    async with Client(s.address, start=False) as c:
        future = c.submit(threading.get_ident)
        result = await future
        return result

# The task executed on the event-loop thread itself, i.e. this thread:
>>> threading.get_ident() == loop.run_sync(f)
True