kubectl

How do I access this Kubernetes service via kubectl proxy?

五迷三道 submitted on 2019-12-04 00:45:05
I want to access my Grafana Kubernetes service via the kubectl proxy server, but for some reason it won't work, even though I can make it work for other services. Given the service definition below, why is it not available on http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana ?

grafana-service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      namespace: monitoring
      name: grafana
      labels:
        app: grafana
    spec:
      type: NodePort
      ports:
      - name: web
        port: 3000
        protocol: TCP
        nodePort: 30902
      selector:
        app: grafana

grafana-deployment.yaml:

    apiVersion: extensions/v1beta1
    kind: Deployment
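A plausible diagnosis (my assumption; the question is truncated before any answer appears): because the service port is named, the proxy path must include the port name, and recent kubectl versions use a trailing /proxy/ segment rather than the /api/v1/proxy/... prefix. A minimal sketch:

    # Start the proxy
    kubectl proxy
    # In another shell; note the ":web" port name in the path:
    curl http://localhost:8001/api/v1/namespaces/monitoring/services/grafana:web/proxy/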

Kubectl error: the object has been modified; please apply your changes to the latest version and try again

拥有回忆 submitted on 2019-12-03 23:45:01
Question: I am getting the error below while trying to apply a patch:

    core@dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Error from server (Conflict): error when applying patch:
    {"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration"
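The error means the object changed on the server after the manifest was last read, so the apply is based on a stale resourceVersion. A hedged sketch of two common recovery paths (assuming the file defines a ConfigMap, which the "data" key suggests; the filename is from the question):

    # Option 1: base the change on the live object, then edit and re-apply
    kubectl get -f ads-central-configuration.yaml -o yaml > live.yaml
    # ...merge your changes into live.yaml...
    kubectl apply -f live.yaml

    # Option 2: replace the object outright, bypassing the three-way merge
    kubectl replace -f ads-central-configuration.yaml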

Where are Kubernetes pods' log files?

自作多情 submitted on 2019-12-03 16:08:36
When I run kubectl logs <container> I get the logs of my pods, but where are the files for those logs? Some sources say /var/log/containers/, others say /var/lib/docker/containers/, but I couldn't find my actual application's or pod's log.

The on-disk filename comes from:

    docker inspect $pod_name_or_sha | jq -r '.[0].LogPath'

assuming the docker daemon's configuration is the default {"log-driver": "json-file"}, which is almost guaranteed to be true if kubectl logs behaves correctly. This may also go without saying, but you must be on the Node upon which the Pod was scheduled for either
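As added context (my addition, not part of the truncated answer): with the Docker runtime both directories are involved, because the kubelet maintains a symlink chain from the friendly names down to Docker's json-file output. On the node:

    # Symlinks the kubelet creates for kubectl logs:
    ls -l /var/log/containers/
    # <pod>_<namespace>_<container>-<id>.log -> /var/log/pods/<...>/*.log
    #   -> /var/lib/docker/containers/<container-id>/<container-id>-json.log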

How to upgrade kubectl client version

偶尔善良 submitted on 2019-12-03 05:44:49
I want to upgrade the kubectl client version to 1.11.3. I executed brew install kubernetes-cli but the version doesn't seem to be updating:

    Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:06:30Z", GoVersion:"go1.10.3", Compiler:
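The platform is darwin/amd64, so a plausible cause (an assumption on my part; the question is truncated before any answer) is that another kubectl binary earlier in PATH, such as the one bundled with Docker for Mac, shadows the Homebrew one. A sketch:

    # Find which binary actually runs
    which kubectl
    # Force Homebrew's symlink to win, then verify
    brew link --overwrite kubernetes-cli
    kubectl version --client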

Listing all resources in a namespace

◇◆丶佛笑我妖孽 submitted on 2019-12-03 04:48:11
Question: I would like to see all resources in a namespace. Doing kubectl get all will, despite the name, not list things like services and ingresses. If I know the type I can explicitly ask for that particular type, but there also seems to be no command for listing all possible types. (In particular, kubectl get does not list custom types.) Any idea how to show all resources before, for example, deleting that namespace?

Answer 1: Based on this comment, the supported way to list all resources
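The approach the truncated answer references may well be along these lines: enumerate every listable, namespaced resource type, then get each one in the namespace (a sketch; <namespace> is a placeholder):

    kubectl api-resources --verbs=list --namespaced -o name \
      | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>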

How to pull environment variables with Helm charts

时间秒杀一切 submitted on 2019-12-03 04:13:53
Question: I have my deployment.yaml file within the templates directory of my Helm chart, with several environment variables for the container I will be running using Helm. Now I want to be able to pull the environment variables locally from whatever machine helm is run from, so I can hide the secrets that way. How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application? Here is part of my deployment.yaml file:

    ...
    spec:
      restartPolicy: Always
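Helm templates cannot read the client machine's environment directly, so a common pattern is to forward the variable at install time with --set and reference it from .Values. A sketch in which the value key env.secretValue and variable MY_LOCAL_SECRET are hypothetical names, not from the question:

    # Helm 3 syntax (Helm 2 used: helm install --name myrelease ./mychart ...)
    helm install myrelease ./mychart --set env.secretValue="$MY_LOCAL_SECRET"

and in templates/deployment.yaml:

    env:
      - name: SECRET_VALUE
        value: {{ .Values.env.secretValue | quote }}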

kops / kubectl - how do I import state created on another server?

*爱你&永不变心* submitted on 2019-12-03 02:51:30
I set up my Kubernetes cluster using kops, and I did so from my local machine. So my .kube directory is stored on my local machine, but I set up kops for state storage in S3. I'm in the process of setting up my CI server now, and I want to run my kubectl commands from that box. How do I go about importing the existing state to that server?

To run kubectl commands, you will need the cluster's apiServer URL and related credentials for authentication. Those data are by convention stored in the ~/.kube/config file. You may also view it via the kubectl config view command. In order to run kubectl on your CI
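Since the kops state already lives in S3, the usual way to materialise a kubeconfig on a new machine is kops export kubecfg. A sketch (bucket and cluster names are placeholders):

    # On the CI box, with kops installed and AWS credentials configured
    export KOPS_STATE_STORE=s3://your-kops-state-bucket
    kops export kubecfg your-cluster-name
    kubectl get nodes   # verify access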

kubectl not able to pull the image from private repository

喜你入骨 submitted on 2019-12-02 20:56:44
I am running the kubeadm alpha version to set up my Kubernetes cluster. From Kubernetes, I am trying to pull docker images hosted in a Nexus repository. Whenever I try to create a pod, it gives "ImagePullBackOff" every time. Can anybody help me with this? Details are present in https://github.com/kubernetes/kubernetes/issues/41536

Pod definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
      labels:
        name: test
    spec:
      containers:
      - image: 123.456.789.0:9595/test
        name: test
        ports:
        - containerPort: 8443
      imagePullSecrets:
      - name: my-secret

You need to refer to the
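For the imagePullSecrets reference to work, the secret my-secret must exist in the pod's namespace as a docker-registry secret for that Nexus host. A sketch (the credentials are placeholders):

    kubectl create secret docker-registry my-secret \
      --docker-server=123.456.789.0:9595 \
      --docker-username=<nexus-user> \
      --docker-password=<nexus-password> \
      --docker-email=<email>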

kubectl port forwarding timeout issue

限于喜欢 submitted on 2019-12-02 20:43:25
While using the kubectl port-forward function I was able to successfully forward a local port to a remote port. However, it seems that after a few minutes idling the connection is dropped; not sure why that is so. Here is the command used to port forward:

    kubectl --namespace somenamespace port-forward somepodname 50051:50051

Error message:

    Forwarding from 127.0.0.1:50051 -> 50051
    Forwarding from [::1]:50051 -> 50051
    E1125 17:18:55.723715 9940 portforward.go:178] lost connection to pod

I was hoping to be able to keep the connection up.

It seems there is a 5 minute timeout that can be overridden with
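Whatever server-side override the truncated answer goes on to name, a common client-side workaround (a sketch, my addition) is simply to re-establish the forward whenever the connection drops:

    # Restart the port-forward in a loop whenever it is lost
    while true; do
      kubectl --namespace somenamespace port-forward somepodname 50051:50051
      echo "port-forward dropped, restarting..." >&2
      sleep 1
    done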