Spark executors not able to access ignite nodes inside kubernetes cluster

Submitted by 社会主义新天地 on 2019-12-02 22:56:44

Question


I am connecting my Spark job to an existing Ignite cluster, using a service account named spark. My driver is able to access the Ignite pods, but my executors are not.

This is what the executor log looks like:

Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://35.192.214.68/api/v1/namespaces/default/endpoints/ignite

I guess it's due to missing privileges. Is there a way to explicitly specify a service account for the executors as well?

Thanks in advance.


Answer 1:


A similar issue was discussed here.

Most likely you need to grant more permissions to the service account that the pods run under.
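To confirm which permission is missing, you can ask the API server directly with `kubectl auth can-i`. This is a sketch; it assumes the executors fall back to the `default` service account in the `default` namespace and the driver uses `spark` — adjust the names to match your setup:

```shell
# Can the account the executors run under list the endpoints Ignite queries?
kubectl auth can-i list endpoints \
  --namespace default \
  --as system:serviceaccount:default:default

# Repeat for the account the driver uses
kubectl auth can-i list endpoints \
  --namespace default \
  --as system:serviceaccount:default:spark
```

If the first command prints `no` while the second prints `yes`, that matches the symptom in the question: the driver is authorized but the executors are not.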

To grant them, you can create one more ClusterRole and bind it to the service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ignite
rules:
- apiGroups:
  - ""
  resources: # The resources the role grants access to
  - pods
  - endpoints
  verbs: # What the role may do with them
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: default
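Once saved (the filename ignite-rbac.yaml below is just an example), apply the manifest and point Spark at the account. Note that older Spark-on-Kubernetes releases only expose the driver's service account (`spark.kubernetes.authenticate.driver.serviceAccountName`), and executor pods fall back to the namespace's `default` account — so either bind the role to `default` as well, or, on Spark 3.1+, set the executor account explicitly. A sketch, assuming the account is named spark and reusing the API server address from the error message:

```shell
kubectl apply -f ignite-rbac.yaml

spark-submit \
  --master k8s://https://35.192.214.68 \
  --deploy-mode cluster \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark \
  ...
```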

Also, if your namespace is not default, you need to update it in the yaml files and specify it in the TcpDiscoveryKubernetesIpFinder configuration.
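For reference, the namespace and service name are set on the IP finder itself. This is a sketch of the client-side discovery configuration using classes from Ignite's ignite-kubernetes module; the namespace and service name values are placeholders to adjust for your deployment:

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class IgniteClientConfig {
    public static IgniteConfiguration create() {
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("my-namespace"); // must match the namespace the Ignite pods run in
        ipFinder.setServiceName("ignite");     // the Kubernetes service fronting the Ignite pods

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // Spark executors typically join as Ignite clients
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
        return cfg;
    }
}
```

The IP finder resolves Ignite node addresses by listing the endpoints of that service through the Kubernetes API — which is exactly the call that returned 403 in the question, and why the RBAC grant above includes `endpoints`.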



Source: https://stackoverflow.com/questions/50968087/spark-executors-not-able-to-access-ignite-nodes-inside-kubernetes-cluster
