
k8s nginx-ingress 504 timeout

杀马特。学长 韩版系。学妹 Submitted on 2019-12-03 13:19:00
An nginx ingress 504 timeout error is caused by the reverse proxy timing out; the default reverse-proxy timeout is 60s. Per the official documentation, the timeouts can be raised with annotations. Configuration snippet:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "300"

Source: https://www.cnblogs.com/pythonPath/p/11796569.html

Installing kubernetes 1.16.2 with kubeadm

我们两清 Submitted on 2019-12-03 02:11:29
Contents: Introduction; Environment; Installation; Prepare the base environment; Install docker; Install kubeadm, kubelet, kubectl; Configure kubeadm-config.yaml; Deploy the master; Install the flannel network plugin; Add nodes; Install the dashboard; Deploy ingress; Reset the cluster.

Introduction: The current latest version of kubernetes is 1.16, and kubeadm, the official installation tool for kubernetes, has reached GA. This article uses kubeadm to install the latest kubernetes cluster. (Component diagram omitted in this excerpt.)

Environment:

    Hostname             IP address    Node type      OS
    master.example.com   192.168.0.1   master, etcd   CentOS 7
    node1.example.com    192.168.0.2   node           CentOS 7

Component versions:

    Component              Version       Notes
    kubernetes             1.16.2        main program
    docker                 19.03.3       container runtime
    flannel                0.11.0        network plugin
    etcd                   3.3.15        datastore
    coredns                1.6.2         DNS component
    kubernetes-dashboard   2.0.0-beta5   web UI
    ingress-nginx          0.26.1        ingress

Installation steps: configure hostnames, the firewall, and yum repositories; configure kernel parameters; load kernel modules; install Docker
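The excerpt cuts off before the kubeadm-config.yaml step. A minimal sketch consistent with the versions above; the controlPlaneEndpoint and the podSubnet (flannel's default) are assumptions, not taken from the post:

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.16.2
    # assumed to point at the master host from the environment table
    controlPlaneEndpoint: "master.example.com:6443"
    networking:
      # flannel's default pod CIDR; adjust if your network plugin differs
      podSubnet: "10.244.0.0/16"

With such a file in place, the control plane is typically initialized with kubeadm init --config kubeadm-config.yaml.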

No ingress address on minikube Kubernetes cluster with nginx ingress controller

Anonymous (unverified) Submitted on 2019-12-03 01:45:01
Question: I've got the following ingress.yaml:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: abcxyz
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: abcxyz
        http:
          paths:
          - path: /a/
            backend:
              serviceName: service-a
              servicePort: 80
          - path: /b/
            backend:
              serviceName: service-b
              servicePort: 80

Output of kubectl describe ingress abcxyz:

    Name:             abcxyz
    Namespace:        default
    Address:
    Default backend:  default-http-backend:80 (<none>)
    Rules:
      Host    Path  Backends
      ----    ----  --------
      abcxyz
              /a/   service-a:80 (<none>)
              /b/   service-b:80 (
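The excerpt ends before any answer. On minikube, an empty Address field commonly means no ingress controller is actually running; a first check (an assumption about this cluster, not part of the truncated post) is to enable the bundled controller and confirm its pod comes up:

    # enable the nginx ingress controller shipped with minikube
    minikube addons enable ingress
    # the controller pod ran in kube-system in minikube releases of this era
    kubectl get pods -n kube-system | grep nginx-ingress

Once the controller is running, the ingress is normally assigned the minikube node IP as its address.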

K8s Ingress rule for multiple paths in same backend service

Anonymous (unverified) Submitted on 2019-12-03 01:39:01
Question: I am trying to set up an ingress load balancer. Basically, I have a single backend service with multiple paths. Let's say my backend NodePort service name is hello-app. The pod associated with this service exposes multiple paths like /foo and /bar. Below is the example NodePort service and associated deployment:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-app
    spec:
      selector:
        app: hello-app
      type: NodePort
      ports:
      - protocol: "TCP"
        port: 7799
        targetPort: 7799
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
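The post is truncated before the ingress itself. A sketch of what such a rule might look like, routing both paths to the single backend service (the resource name is hypothetical; the port matches the Service above):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: hello-app-ingress   # hypothetical name
    spec:
      rules:
      - http:
          paths:
          # both paths point at the same service and port
          - path: /foo
            backend:
              serviceName: hello-app
              servicePort: 7799
          - path: /bar
            backend:
              serviceName: hello-app
              servicePort: 7799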

GCE Ingress not picking up health check from readiness probe

Anonymous (unverified) Submitted on 2019-12-03 01:38:01
Question: When I create a GCE ingress, Google Load Balancer does not set the health check from the readiness probe. According to the docs (Ingress GCE health checks) it should pick it up: "Expose an arbitrary URL as a readiness probe on the pods backing the Service." Any ideas why? Deployment:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: frontend-prod
      labels:
        app: frontend-prod
    spec:
      selector:
        matchLabels:
          app: frontend-prod
      replicas: 3
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
        type: RollingUpdate
      template:
        metadata:
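The spec is truncated before the container section. For reference, a readiness probe of the kind the GCE ingress controller reads looks roughly like this (the container name, image, path, port, and timings are hypothetical, not from the post):

    containers:
    - name: frontend                # hypothetical container
      image: gcr.io/example/frontend
      readinessProbe:
        httpGet:
          path: /healthz            # the URL the load balancer will probe
          port: 80                  # must match the port the Service targets
        initialDelaySeconds: 10
        periodSeconds: 5

A commonly reported caveat with GCE ingress of this era is that the probe is only picked up if it already exists when the ingress is created, and only when its port matches the Service's targetPort.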

Kubernetes 1.4 SSL Termination on AWS

Anonymous (unverified) Submitted on 2019-12-03 01:26:01
Question: I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy-tools setup (dokku, mup). I dockerized them and moved to kubernetes on AWS (set up with kops). The last piece is converting my nginx config. I'd like:

1. All 6 to have SSL termination (not in the docker image)
2. 4 need websockets and client IP session affinity (Meteor, Socket.io)
3. 5 need http->https forwarding
4. 1 serves the same content on http and https

I did 1 (SSL termination) by setting the service type to LoadBalancer and using AWS specific annotations. This created AWS
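The post truncates here. For context, SSL termination at the AWS ELB is typically done with service annotations like these (the certificate ARN, service name, and ports are placeholders; this sketch is not from the post):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service   # placeholder
      annotations:
        # ACM certificate the ELB should terminate TLS with (placeholder ARN)
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
        # backends speak plain HTTP behind the ELB
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
        # which listener ports use TLS
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    spec:
      type: LoadBalancer
      selector:
        app: my-service
      ports:
      - name: https
        port: 443
        targetPort: 80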

K8s Ingress, initiate ingress controller nginx error?

Anonymous (unverified) Submitted on 2019-12-03 01:08:02
Question: I have two spring boot containers, and I want to set up an ingress service. As the documentation here says, ingress has two parts: one is the controller, the other is the resources. My two resources are two containers: gearbox-rack-eureka-server and gearbox-rack-config-server. The difference between them is the port, so that ingress can route traffic by port. My yaml files are listed below. eureka_pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gearbox-rack-eureka-server
      labels:
        app: gearbox-rack-eureka-server
        purpose: platform_eureka_demo
    spec:
      containers:
      - name:
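The excerpt cuts off inside the pod spec. A sketch of an ingress that routes to the two backends by path (the service names mirror the pod names; the ports 8761 and 8888, the Spring Cloud defaults for Eureka and Config Server, are assumptions not confirmed by the truncated post):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: gearbox-rack-ingress   # hypothetical name
    spec:
      rules:
      - http:
          paths:
          - path: /eureka
            backend:
              serviceName: gearbox-rack-eureka-server
              servicePort: 8761    # assumed Eureka default
          - path: /config
            backend:
              serviceName: gearbox-rack-config-server
              servicePort: 8888    # assumed Config Server default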

Control Ingress Traffic (0.8)

Anonymous (unverified) Submitted on 2019-12-03 00:30:01
In a Kubernetes environment, the Kubernetes Ingress Resource is used to specify a service that should be exposed outside the cluster. In an Istio service mesh, a better approach (which works in both Kubernetes and other environments) is a different configuration model called the Istio Gateway. A Gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. This task describes how to use an Istio Gateway to configure Istio to expose a service outside of the service mesh.

Before you begin:

Install Istio.
Make sure your current directory is the istio directory.
Start the httpbin sample, which will be used as the target service to expose externally.

If you have enabled automatic sidecar injection, run:

    kubectl apply -f samples/httpbin/httpbin.yaml

Otherwise, you must manually inject the sidecar before deploying the httpbin application:

    kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)

For testing, create a key and certificate with OpenSSL:

    openssl req -x509 -nodes -days 365 -newkey rsa:
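The excerpt ends before the Gateway definition itself. From the corresponding Istio task, a minimal HTTP Gateway looks roughly like this (the host and resource name follow the Istio docs' example and are not part of the excerpt):

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: httpbin-gateway
    spec:
      # bind to the default istio ingress gateway deployment
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "httpbin.example.com"

Traffic for that host is then routed to httpbin by a matching VirtualService bound to this gateway.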

k8s (10): Microservices -- an early test of istio 1.0

a 夏天 Submitted on 2019-12-02 23:59:28
Preface: Three earlier articles introduced istio's working principles, traffic scheduling policies, service visualization, and monitoring:

k8s (4): Installing and testing the istio microservice framework
k8s (5): Traffic policy control in the istio microservice framework
k8s (6): Service visualization and monitoring in the istio microservice framework

In the year or so since the istio project was founded there have been few production use cases, but this latest 1.0 release states prominently on the official site:

All of our core features are now ready for production use.

Since it is declared production-ready, it is worth trying the piping-hot 1.0 version released just last night.

1. Installation. The environment is still the k8s v1.9 cluster from before.

1. Download the official istio package:

    curl -L https://git.io/getLatestIstio | sh -
    mv istio-1.0.0/ /usr/local
    ln -sv /usr/local/istio-1.0.0/ /usr/local/istio
    cd /usr/local/istio/install/kubernetes/

Since the cluster uses traefik for external traffic, the Service named istio-ingressgateway in the official istio-demo.yaml deployment file was modified slightly; the changed parts were commented out rather than deleted, starting around line 2482 of the file:

    ### vim istio-demo.yaml
    '''
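The post truncates before showing the modified Service. When another edge proxy such as traefik already fronts the cluster, a typical change (an assumption about this post's edit, not shown in the excerpt) is to switch the istio-ingressgateway Service from LoadBalancer to NodePort so it does not claim an external load balancer:

    apiVersion: v1
    kind: Service
    metadata:
      name: istio-ingressgateway
      namespace: istio-system
    spec:
      # type: LoadBalancer        # original value, kept as a comment
      type: NodePort              # avoids allocating a cloud load balancer
      selector:
        istio: ingressgateway
      ports:                      # nodePorts as defined in istio-demo.yaml 1.0
      - name: http2
        nodePort: 31380
        port: 80
      - name: https
        nodePort: 31390
        port: 443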