EMR Spark - TransportClient: Failed to send RPC

独厮守ぢ 2021-01-04 03:00

I'm getting this error. I tried increasing the memory on the cluster instances and in the executor and driver parameters, without success.

17/05/07 23:17:07 ERROR TransportClient: Failed to send RPC ...
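For reference, "increasing memory in the executor and driver parameters" typically means flags like these (a sketch; the values and executor count are placeholders, not a recommendation):

spark-submit \
  --master yarn \
  --driver-memory 4g \
  --executor-memory 4g \
  --num-executors 4 \
  etl.py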
2 Answers
  • 2021-01-04 03:27

    When I set up Hadoop and Spark on my laptop and tried to launch Spark with "spark-shell --master yarn", I got the same error message.

    Solution:

    sudo vim /usr/local/hadoop/etc/hadoop/yarn-site.xml

    Add the following property (it raises the allowed virtual-to-physical memory ratio from the default of 2.1, so the NodeManager stops killing containers for overusing virtual memory):

    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>5</value>
    </property>
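    If raising the ratio is not enough, another standard YARN option (not part of this fix, just an alternative worth knowing) is to disable the virtual-memory check entirely:

    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>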
    

    Then restart the Hadoop services:

    stop-all.sh 
    start-all.sh
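
    On newer Hadoop releases, where stop-all.sh and start-all.sh are deprecated, the per-service scripts do the same thing:

    stop-yarn.sh && stop-dfs.sh
    start-dfs.sh && start-yarn.sh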
    
  • 2021-01-04 03:30

    I finally resolved the problem: it was due to insufficient disk space. One line of the Hadoop logs showed:

    Hadoop YARN: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers

    Googling it, I found http://gethue.com/hadoop-yarn-11-local-dirs-are-bad-varlibhadoop-yarncacheyarnnm-local-dir-11-log-dirs-are-bad-varloghadoop-yarncontainers/, which says:

    "If you are getting this error, make some disk space!"

    To see this error I had to enable the YARN logs and web interfaces in EMR; see:

    http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-web-interfaces.html
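
    With logging enabled, container logs can also be pulled from the command line on the master node, assuming YARN log aggregation is on (the application ID is a placeholder you can read off the ResourceManager UI):

    yarn logs -applicationId <application_id>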

    To get access to the log ports on the cluster's EC2 instances, I changed their security groups. For example:

    The master instance was listening at 172.30.12.84:8088 (the ResourceManager web UI) and the core instance at 172.30.12.21:8042 (the NodeManager web UI).
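
    The same security-group change can be scripted with the AWS CLI (a sketch; the group ID and source CIDR are placeholders):

    # Open the ResourceManager (8088) and NodeManager (8042) web UIs to a single IP
    aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 8088 --cidr <my-ip>/32
    aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 8042 --cidr <my-ip>/32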

    Finally, I fixed the problem by changing the instance types in etl.py to ones with bigger disks:

    master: m3.2xlarge
    core: c3.4xlarge
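
    etl.py itself isn't shown here, but an equivalent cluster spec via the AWS CLI would look roughly like this (a sketch; the cluster name, release label, and instance counts are placeholders):

    aws emr create-cluster \
        --name "etl-cluster" \
        --release-label emr-5.3.0 \
        --applications Name=Spark \
        --use-default-roles \
        --instance-groups \
            InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.2xlarge \
            InstanceGroupType=CORE,InstanceCount=2,InstanceType=c3.4xlarge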
