Spark - Which instance type is preferred for AWS EMR cluster? [closed]

Submitted by ╄→гoц情女王★ on 2019-11-28 23:15:11

Generally speaking, it depends on your use case, needs, etc... But I can suggest a minimum configuration considering the information that you have shared.

You seem to be trying to train an ALS factorization or an SVD on matrices of roughly 2 to 4 GB of data, so that's actually not much data at all.

You'll need at least 1 master and 2 core nodes to set up and configure a small distributed cluster. The master won't be doing any heavy computing, so it won't need many resources, but it will handle task scheduling and the like.

You can add slaves (instances) according to your needs.

  • 1 x master : m5.xlarge (previously m3.xlarge) - vCPU : 4, RAM : 16 GB, with EBS storage.
  • 2 x slaves : c5.4xlarge (previously c3.4xlarge) - vCPU : 16, RAM : 32 GB, with EBS storage.
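As a sketch, a cluster of this shape could be launched programmatically with boto3's `run_job_flow`. The cluster name, release label, and region below are illustrative placeholders, not values from this thread:

```python
# Sketch of an EMR run_job_flow request matching the sizing above.
# Name, release label, and region are placeholder assumptions.
cluster_request = {
    "Name": "spark-als-cluster",             # hypothetical name
    "ReleaseLabel": "emr-5.29.0",            # pick a current EMR release
    "Applications": [{"Name": "Spark"}, {"Name": "Ganglia"}],
    "Instances": {
        "InstanceGroups": [
            {
                "Name": "Master",
                "InstanceRole": "MASTER",
                "InstanceType": "m5.xlarge",   # 4 vCPU, 16 GB RAM
                "InstanceCount": 1,
            },
            {
                "Name": "Core",
                "InstanceRole": "CORE",
                "InstanceType": "c5.4xlarge",  # 16 vCPU, 32 GB RAM
                "InstanceCount": 2,
            },
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",     # default EMR roles
    "ServiceRole": "EMR_DefaultRole",
}

# To actually launch it (requires AWS credentials):
# import boto3
# emr = boto3.client("emr", region_name="us-east-1")
# response = emr.run_job_flow(**cluster_request)
```

The same shape can of course be expressed in the console or the `aws emr create-cluster` CLI; the point is simply one small master and two compute-optimized core nodes.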

EDIT : As mentioned in the comments, 5th generation instances are now available for each of the instance types mentioned in this thread: R5, M5, and C5. In general, latest-generation instance types are cheaper and more performant than their older counterparts.

C3, C4, and C5 are compute-optimized instances featuring high-performance processors, with the lowest price per unit of compute performance in EC2. R3, R4, and R5 are memory-optimized instances whose recommended use cases are distributed memory caches and in-memory analytics, but a C5 will do the job for you at a lower price.

Performance Optimizations :

  • Amazon EMR charges in hourly increments. This means that once you run a cluster, you pay for the entire hour. That's important to remember: if you are already paying for a full hour of an Amazon EMR cluster, improving your data processing time by a matter of minutes may not be worth your time and effort.

  • Don't forget that adding more nodes to increase performance is often cheaper than spending time optimizing your cluster.
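To make the billing point concrete, here is a small sketch. The per-node-hour price is an illustrative assumption, not a quoted AWS rate:

```python
import math

def emr_cost(runtime_minutes, nodes, price_per_node_hour):
    """Cost under hourly billing: every started hour is billed in full."""
    billed_hours = math.ceil(runtime_minutes / 60)
    return billed_hours * nodes * price_per_node_hour

# Illustrative assumption: $0.50 per node-hour.
# A 50-minute job and a 35-minute job on 3 nodes cost exactly the same,
# because both are billed as one full hour:
assert emr_cost(50, 3, 0.50) == emr_cost(35, 3, 0.50) == 1.50
```

In other words, shaving 15 minutes off a 50-minute job saves nothing under hourly billing, whereas adding nodes can keep a longer job inside the same billed hour.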

Reference : Amazon EMR Best Practices - Parviz Deyhim.

EDIT : You might also consider enabling Ganglia to monitor your cluster resources: CPU, RAM, network I/O. This will also help you tune your EMR cluster. There is practically no configuration to do: just follow the documentation to add it to your EMR cluster at creation time.

Generally speaking, the preferred instance depends on the job you are running (is it memory-intensive? CPU-intensive? etc.). However, Spark is very memory-intensive, and I wouldn't use machines with less than 30 GB of RAM for most jobs.

In your particular case (a 4 GB dataset), I am not sure why you'd want to use distributed computing to begin with: it will just make your job run slower. If you are sure you want Spark, run it in local mode with X threads (depending on how many cores you have).
