Kubernetes

Migration of GKE from Default to Shared VPC and Public to Private GKE Cluster

走远了吗. Submitted on 2021-02-11 14:31:27

Question: A few queries on GKE. We have a few GKE clusters running on the default VPC. Can we migrate these clusters to use a Shared VPC, or at least a custom VPC? According to the GCP documentation, existing clusters in default VPC mode cannot be changed to the Shared VPC model, but can we convert from the default VPC to a custom VPC? How do we migrate from a custom VPC to a Shared VPC? Is it a matter of creating a new cluster from the existing cluster, selecting Shared VPC in the networking section for the new cluster, and then copying the Kubernetes resources
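A GKE cluster's VPC cannot be changed in place, so any of these moves (default to custom, custom to Shared) boils down to recreating the cluster in the target network and redeploying the workloads. A minimal sketch of that recreation path, in which the cluster name, region, host project, and subnet names are all assumptions:

```shell
# Hypothetical sketch: create the replacement cluster inside the Shared VPC
# (the host project must already share the subnet with this service project).
gcloud container clusters create my-new-cluster \
  --region us-central1 \
  --enable-ip-alias \
  --network projects/HOST_PROJECT/global/networks/shared-vpc \
  --subnetwork projects/HOST_PROJECT/regions/us-central1/subnetworks/gke-subnet

# Export the workloads from the old cluster and apply them to the new one.
kubectl --context old-cluster get deploy,svc,cm,secret -o yaml > resources.yaml
kubectl --context new-cluster apply -f resources.yaml
```

Stateful data (PVs, in-cluster databases) needs a separate migration step; the export above only carries the object definitions.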

Kubernetes: getting sibling-pod IP/properties from the same Deployment/ReplicaSet

喜夏-厌秋 Submitted on 2021-02-11 14:21:38

Question: Need to set the IP and/or other metadata of the Deployment to be available as env vars to each pod under the same Deployment. For example, with a 3-replica Deployment, each pod needs env vars for the IP addresses of the other two pods, and the hostnames of the other two pods. As of now there is HOSTNAME=deplymentNAME-d74cf6f77-q57jx deplymentNAME_PORT=tcp://10.152.183.27:13000 and I need to add: HOSTNAME2=deplymentNAME-d74cf6f77-y67kl HOSTNAME3=deplymentNAME-d74cf6f77-i90ro deplymentNAME_PORT2=tcp
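Pod IPs and generated pod names are not known until the replicas are scheduled, so they cannot be injected as env vars at Deployment creation time. The usual pattern is a headless Service, which gives every replica DNS visibility of its siblings, plus the Downward API for a pod's own metadata. A minimal sketch, reusing the question's deplymentNAME label as an assumed selector:

```yaml
# Hypothetical sketch: a headless Service so each replica can resolve its
# siblings by DNS instead of needing their IPs as env vars.
apiVersion: v1
kind: Service
metadata:
  name: deplymentname-peers
spec:
  clusterIP: None            # headless: a DNS lookup returns all pod IPs
  selector:
    app: deplymentNAME       # assumption: the Deployment's pod label
  ports:
    - port: 13000
```

A pod can resolve deplymentname-peers to the full set of replica IPs at runtime, and expose its own IP via a Downward API env var (fieldRef to status.podIP). For stable per-replica hostnames, a StatefulSet is the better fit than a Deployment.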

nginx ingress controller doesn't reach backend service?

喜你入骨 Submitted on 2021-02-11 14:21:14

Question: I am currently trying to expose a Kubernetes service via an ingress controller, but I cannot seem to do so. For some odd reason the host/path never resolves to the ClusterIP and port that I want to use, even though this should have been resolved via my ingress controller and the resource... apiVersion: v1 kind: Service metadata: name: hello-kubernetes spec: type: LoadBalancer ports: - port: 80 targetPort: 8080 selector: app: hello-kubernetes --- apiVersion: apps/v1 kind: Deployment
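For the nginx controller to reach that Service, the Ingress backend must reference the Service's name and its port (80 above), not the container's targetPort. A minimal sketch of a matching Ingress, where the host name is an assumption and the API version matches the era of this question:

```yaml
# Hypothetical sketch: Ingress routing to the hello-kubernetes Service.
# Newer clusters use networking.k8s.io/v1 with a slightly different
# backend schema (service.name / service.port).
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: hello.example.com        # assumption
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-kubernetes   # must match the Service name
              servicePort: 80                 # the Service port, not targetPort
```

Note also that type: LoadBalancer gives the Service its own external IP that bypasses the ingress controller entirely; when routing through the Ingress, ClusterIP is normally sufficient.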

Want to specify rules in a VirtualService file where two or more services have the same rules

余生长醉 Submitted on 2021-02-11 14:01:32

Question: I have deployed eight services on Kubernetes with Istio sidecar injection. I want to set up routing rules in a VirtualService where three services share the same rule. Rules: - match: - headers: location: exact: pune uri: prefix: /wagholi route: - destination: host: wagholi port: number: 8080 uri: prefix: /yerwada route: - destination: host: yerwada port: number: 8080 uri: prefix: /hadapsar route: - destination: host: hadapsar port: number: 8080 - match: - headers: location: exact: mumbai uri:
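In a VirtualService, each entry in the http list is one (match, route) pair and the conditions inside a single match block are ANDed, so a shared header rule cannot be attached once to three routes; it has to be repeated per prefix. A sketch of the intended structure for two of the three pune routes (the resource name and wildcard host are assumptions):

```yaml
# Hypothetical sketch: the shared `location: pune` header match is repeated
# in each http entry, paired with that entry's own uri prefix and route.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: city-routes
spec:
  hosts:
    - "*"
  http:
    - match:
        - headers:
            location:
              exact: pune
          uri:
            prefix: /wagholi
      route:
        - destination:
            host: wagholi
            port:
              number: 8080
    - match:
        - headers:
            location:
              exact: pune
          uri:
            prefix: /yerwada
      route:
        - destination:
            host: yerwada
            port:
              number: 8080
```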

Debugging Kubernetes/GKE timeout issue while creating ingress with ingress-nginx

流过昼夜 Submitted on 2021-02-11 13:54:42

Question: With ingress-nginx v0.30 in the GKE cluster there is no issue creating the ingress with the kubectl apply -f command. After upgrading to ingress-nginx v0.31.1, the following error is shown: Error from server (Timeout): error when creating "kubernetes/ingress.yaml": Timeout: request did not complete within requested timeout 30s Questions: How to debug the timeout of this request? There is no connection issue; the same ingress file works on v0.30. Stackdriver shows no clue. Any way to increase the
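One plausible cause worth checking: ingress-nginx v0.31 enables a validating admission webhook that v0.30 did not have, and on GKE private clusters the control plane often cannot reach the webhook's port 8443, which produces exactly this 30s timeout on kubectl apply. A debugging sketch, assuming the standard ingress-nginx resource names:

```shell
# Hypothetical sketch: check whether the v0.31 admission webhook is the culprit.
kubectl get validatingwebhookconfigurations
kubectl -n ingress-nginx get svc ingress-nginx-controller-admission

# On GKE private clusters, a firewall rule allowing master -> nodes on
# tcp:8443 is the usual fix; as a blunt workaround the webhook can be removed:
kubectl delete validatingwebhookconfigurations ingress-nginx-admission
```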

Shanghai · The 2020 offline annual meetup is here! | MongoDB, More than Document Database.

拟墨画扇 Submitted on 2021-02-11 13:53:12

The 2020 MongoDB Chinese Community year-end conference: get to know MongoDB all over again! (2021-1-8, offline in Shanghai.) DB-Engines is a website that ranks database management systems by popularity. In recent years MongoDB has held a steady place in the top five of the DB-Engines database popularity ranking, a conspicuous Document entry amid the Relational crowd. Many people may still have questions about MongoDB: What exactly is MongoDB? Why has MongoDB broken out of the pack of relational databases into public view, and stayed in the top five for several years running? Is MongoDB merely a document database? Compared with other databases, what characteristics does MongoDB have? What application scenarios and solutions does MongoDB offer? In 2020, a year of transition, the MongoDB Chinese community invites everyone to get reacquainted with MongoDB. 01 Conference overview: MongoDB, More than Document Database. Time: Friday, January 8, 2021, 9:00-17:30. Venue: Central Banquet Hall, 3rd floor, Business Center, Shanghai Big Data Industry Base, 258 Jiangchang 3rd Road, Shibei Hi-Tech Park, Jing'an District, Shanghai. Capacity: 200 attendees. Registration link: http://hdxu.cn/RInMN *Discounted ticket: 9.9 RMB; contact Little Mango (小芒果) to obtain one (WeChat ID: mongoingcom). Long-press to scan the QR code, add Little Mango on WeChat, and follow the prompts to receive the discounted ticket. After adding, please include a note

Inquiring pod and service subnets from inside Kubernetes cluster

痞子三分冷 Submitted on 2021-02-11 13:50:25

Question: How can one inquire about the Kubernetes pod and service subnets in use (e.g. 10.244.0.0/16 and 10.96.0.0/12 respectively) from inside a Kubernetes cluster, in a portable and simple way? For instance, kubectl get cm -n kube-system kubeadm-config -o yaml reports podSubnet and serviceSubnet. But this is not fully portable, because a cluster may have been set up by means other than kubeadm. kubectl get cm -n kube-system kube-proxy -o yaml reports clusterCIDR (i.e. the pod subnet), and kubectl get pod -n
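There is no single portable API that reports these CIDRs; one cross-distribution heuristic is to grep the control-plane command lines, which kubectl cluster-info dump includes for static-pod control planes. A sketch (flag names vary by distribution, so treat this as a heuristic, not a guarantee):

```shell
# Hypothetical sketch: fish the cluster CIDRs out of the control-plane flags.
# Pod subnet (kube-controller-manager's --cluster-cidr):
kubectl cluster-info dump | grep -m 1 -- --cluster-cidr

# Service subnet (kube-apiserver's --service-cluster-ip-range):
kubectl cluster-info dump | grep -m 1 -- --service-cluster-ip-range
```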

Failed to provision volume with StorageClass “slow”: Failed to get GCE GCECloudProvider with error <nil>

好久不见. Submitted on 2021-02-11 13:29:00

Question: I'm trying to install a Redis cluster (StatefulSet) outside of GKE, and when getting the PVC I got Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 10s persistentvolume-controller Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil> I have already added "--cloud-provider=gce" to /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-apiserver.yaml. Restarted but
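On a self-managed cluster on GCE, this error typically means the controller-manager has no working GCE cloud config; besides --cloud-provider=gce on the control-plane manifests, the kubelets need the flag as well, and the StorageClass must use the GCE PD provisioner. A sketch of such a StorageClass, where the zone is an assumption:

```yaml
# Hypothetical sketch: StorageClass backed by GCE persistent disks.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard        # "slow": standard (magnetic) PD
  zone: us-central1-a      # assumption: must match the node zone
```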

Scheduling and scaling pods in Kubernetes

冷暖自知 Submitted on 2021-02-11 13:24:32

Question: I am running a k8s cluster on GKE. It has 4 node pools with different configurations: Node pool 1 (single node, cordoned status): running Redis & RabbitMQ. Node pool 2 (single node, cordoned status): running Monitoring & Prometheus. Node pool 3 (one big node): application pods. Node pool 4 (single node with auto-scaling enabled): application pods. Currently, I am running a single replica for each service on GKE, except 3 replicas of the main service, which mostly manages everything. When scaling
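Pinning workloads to particular pools while letting only the main service scale is usually expressed with a nodeSelector on GKE's built-in pool label plus a HorizontalPodAutoscaler. A sketch, in which the service name, image, and pool name are assumptions:

```yaml
# Hypothetical sketch: pin the main service to the autoscaling pool and
# let an HPA grow it beyond its 3 baseline replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-service                            # assumption
spec:
  replicas: 3
  selector:
    matchLabels:
      app: main-service
  template:
    metadata:
      labels:
        app: main-service
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: pool-4   # assumption: the autoscaling pool
      containers:
        - name: app
          image: example/app:latest             # assumption
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: main-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: main-service
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Note that a cordoned node is unschedulable, so nothing new will land on pools 1 and 2; taints plus tolerations are the more usual way to reserve a pool for specific workloads.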
