I use kubeadm to launch a cluster on AWS. I can successfully create a load balancer on AWS using kubectl, but …
In my case, the problem was the missing kubelet option --cloud-provider=aws.
After I placed the following in /etc/default/kubelet (via Terraform, in my case) and redeployed my nodes, everything worked:
/etc/default/kubelet
KUBELET_EXTRA_ARGS='--cloud-provider=aws'
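If you want to confirm the flag actually took effect, a minimal check (assuming the standard kubeadm systemd setup, where the kubelet drop-in sources /etc/default/kubelet) is:

# Restart the kubelet so it re-reads /etc/default/kubelet
sudo systemctl restart kubelet

# Confirm the running kubelet process was started with the flag
ps aux | grep '[k]ubelet' | grep -- --cloud-provider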
In my case, the issue was that the worker nodes were not getting the providerID assigned properly. I managed to patch the node to add the providerID:

kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal -p '{"spec":{"providerID":"aws:///ap-southeast-2a/i-0xxxxx"}}'

When I then deployed the service, the ELB got created, the node group got added, and it worked end to end. This is not a straightforward answer, but until I find a better solution, I'll leave it here.
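For anyone hitting the same thing, here is a quick sketch for checking whether a node is missing its providerID before patching (reusing the placeholder node name from above; the expected format is aws:///<availability-zone>/<instance-id>):

# Print one node's providerID; empty output means it was never set
kubectl get node ip-xxxxx.ap-southeast-2.compute.internal -o jsonpath='{.spec.providerID}'

# List providerIDs for all nodes at once to spot the gaps
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID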