nginx

Open-Source OA Office Platform Setup Tutorial: Fast Cluster Deployment with nginx (Port Forwarding)

只谈情不闲聊 submitted on 2021-01-28 05:22:17
Host information: Host 1: 172.16.98.8 (Linux); Host 2: 172.16.98.9 (Linux)

Cluster requirements:
172.16.98.8: web server, application server, file storage server, center server
172.16.98.9: web server, application server, file storage server, center server

Database: MySQL

nginx access domain and ports:
Domain: qmx.o2oa.net (an IP can also be used; if the domain does not resolve, configure it in the hosts file)
Ports: 80 (o2web server), 82 (o2 application server; any other non-conflicting port works), 83 (o2 center server; any other non-conflicting port works)

Forwarding rules:
nginx port    o2 service port
80            8080  (o2web server)
82            20020 (o2 application server)
83            20030 (o2 center server)

Configuration steps

1. Configure the node identifier
1) On host 172.16.98.8, create or edit the file node.cfg in the o2server/local directory and set its content to the host's domain name or IP. The node identifier file on host 172.16.98.8 contains: 172.16.98.8
2) On host 172.16.98.9, create or edit the file node.cfg in the o2server/local directory and set its content to the host's domain name or IP. The node identifier file on host 172.16.98.9 contains: 172.16.98.9

2. Prepare the configuration files
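The excerpt cuts off before the tutorial's own nginx configuration, so here is only a minimal sketch of the server blocks implied by the forwarding table above; the upstream names and the idea of pooling both hosts in each upstream are assumptions, not taken from the tutorial:

# Sketch only: maps the listening ports to the o2 service ports
# (80 -> 8080, 82 -> 20020, 83 -> 20030); upstream names are illustrative.
upstream o2web    { server 172.16.98.8:8080;  server 172.16.98.9:8080; }
upstream o2app    { server 172.16.98.8:20020; server 172.16.98.9:20020; }
upstream o2center { server 172.16.98.8:20030; server 172.16.98.9:20030; }

server {
    listen 80;
    server_name qmx.o2oa.net;
    location / { proxy_pass http://o2web;    proxy_set_header Host $host; }
}
server {
    listen 82;
    server_name qmx.o2oa.net;
    location / { proxy_pass http://o2app;    proxy_set_header Host $host; }
}
server {
    listen 83;
    server_name qmx.o2oa.net;
    location / { proxy_pass http://o2center; proxy_set_header Host $host; }
}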

NGINX caching for HTTPS

三世轮回 submitted on 2021-01-28 05:14:55
Question: I am exploring Nginx caching. Everything works as long as I access the resource over HTTP, but as soon as I use HTTPS, Nginx does not put the data in the cache; I always see MISS in the response headers. Do I need to do anything extra for HTTPS, or ignore a few headers that get added by default for HTTPS? I see HIT when I access the same resource over HTTP, which was not working over HTTPS, and once it gets cached, HTTPS also returns it from the cache, as I can see HIT in the response header. Somehow HTTPS is not
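For reference, a minimal sketch of how proxy caching is usually wired into an HTTPS server block; the cache zone name, certificate paths and upstream address are illustrative. The point is that proxy_cache has to be configured in the HTTPS server/location as well, and upstream Set-Cookie or Cache-Control headers can also force a MISS:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;                 # assumed upstream
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;
        # Upstream Set-Cookie / Cache-Control headers also prevent caching;
        # ignore them only if you understand the consequences:
        # proxy_ignore_headers Set-Cookie Cache-Control;
        add_header X-Cache-Status $upstream_cache_status; # shows HIT/MISS
    }
}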

nginx variables (cname) in proxy_pass

六月ゝ 毕业季﹏ submitted on 2021-01-28 05:06:17
Question: I am trying to dynamically set the proxy_pass destination, where the variable would be the cname of the original request. What I have right now is:

server {
    listen 8888;
    server_name (.*).domain.com;
    location / {
        proxy_pass http://$1.otherdomain.com;
        proxy_set_header Host $1.otherdomain.com;

but unfortunately this ends up in a 502 Bad Gateway. Nothing really works when using a variable in proxy_pass and proxy_set_header. I also tried to use (?<cname>.+) or (?P<cname>.+) in the server name and
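A sketch of the usual fix, assuming DNS resolution is the culprit: a regex server_name must start with ~, and when proxy_pass contains a variable, nginx resolves the hostname at request time, which requires a resolver directive (without one, such requests typically fail with 502):

server {
    listen 8888;
    server_name ~^(?<cname>.+)\.domain\.com$;   # named capture instead of $1
    resolver 8.8.8.8 valid=300s;                # any DNS server reachable from nginx

    location / {
        proxy_pass http://$cname.otherdomain.com;
        proxy_set_header Host $cname.otherdomain.com;
    }
}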

How to create / handle subdomains on the fly using NGINX and Node Js?

隐身守侯 submitted on 2021-01-28 04:07:47
Question: I am working on a MEAN stack SaaS application where I want to provide each user with their own unique subdomain. For example, I want the user John Doe to have the following subdomain to his name: johndoe.website.com. The application is nearly ready pending this feature. I am looking for steps to accomplish this using NGINX and Node.js in a manner that will lead to minimal changes to our existing code base. I have searched the internet, and I was unable to find a resource that can serve as a
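One possible approach (a sketch, not necessarily what the asker ended up with): a single wildcard server block proxies every subdomain to the Node.js app and forwards the Host header, so the app can look up the tenant from the subdomain without per-user nginx changes. The app port and the website.com domain are assumptions:

server {
    listen 80;
    server_name ~^(?<subdomain>.+)\.website\.com$;   # matches any subdomain

    location / {
        proxy_pass http://127.0.0.1:3000;            # assumed Node.js port
        proxy_set_header Host $host;                 # app reads the subdomain from Host
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;      # keep WebSockets working
        proxy_set_header Connection "upgrade";
    }
}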

Return 404 with nginx to all locations except for one

可紊 submitted on 2021-01-28 03:00:42
Question: I've created a subdomain to host our API. I'm trying to figure out how to configure nginx to only allow requests to the API location (/2/, which would look like https://api.example.com/2/) and return 404s for all other requests to api.example.com. We're using PHP with a pretty standard setup, routing most requests through index.php and matching PHP as shown below:

if (!-e $request_filename) {
    rewrite ^/(.*)$ /index.php last;
}

location ~ \.php$ {
    config here;
}

I'm hoping I'm over-thinking
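A minimal sketch of one way to do this, with the webroot, PHP-FPM socket and front-controller layout all assumed: a catch-all location returns 404 and only /2/ is routed to index.php:

server {
    listen 80;                         # TLS config omitted for brevity
    server_name api.example.com;
    root /var/www/api;                 # assumed webroot

    location / {
        return 404;                    # everything outside /2/ is rejected
    }

    location /2/ {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed PHP-FPM socket
    }
}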

Getting Common Name from Distinguished Name of client certificate in NGINX

≡放荡痞女 submitted on 2021-01-28 00:35:04
Question: I need to get the CN of a client certificate in NGINX to append it to the proxy headers. I already found the following map for this:

map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "";
    ~/CN=(?<CN>[^/]+) $CN;
}

But sadly it only returns an empty string for the following $ssl_client_s_dn: CN=testcn,O=Test Organization. I tested it with other DNs, too, but the problem is always the same.

Answer 1: The pattern you use requires the legacy DN format, since it assumes / separates the RDNs. So
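Following that reasoning, a sketch of the map adjusted for the comma-separated (RFC 2253 style) DN shown in the question:

map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "";
    ~(^|,)CN=(?<CN>[^,]+) $CN;   # match CN= up to the next comma instead of the next slash
}

The resulting variable can then be forwarded with something like proxy_set_header X-SSL-Client-CN $ssl_client_s_dn_cn; in the proxied location (the header name is only an example).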

Letsencrypt + Docker + Nginx

≯℡__Kan透↙ submitted on 2021-01-28 00:15:29
Question: I am referring to this link https://miki725.github.io/docker/crypto/2017/01/29/docker+nginx+letsencrypt.html to enable SSL on my app, which is running with Docker. The problem is that when I run the command below:

docker run -it --rm \
  -v certs:/etc/letsencrypt \
  -v certs-data:/data/letsencrypt \
  deliverous/certbot \
  certonly \
  --webroot --webroot-path=/data/letsencrypt \
  -d api.mydomain.com

it throws an error: Failed authorization procedure. api.mydomain.com (http-01): urn:acme:error
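A hedged sketch of the usual requirement behind an http-01 failure like this: the nginx instance answering on port 80 for api.mydomain.com must serve /.well-known/acme-challenge/ from the same volume the certbot container writes to (/data/letsencrypt in the command above). Whether that is the actual cause here cannot be confirmed from the excerpt:

server {
    listen 80;
    server_name api.mydomain.com;

    location ^~ /.well-known/acme-challenge/ {
        root /data/letsencrypt;        # mount the same certs-data volume here
        default_type text/plain;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}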

Customizing the maximum request header size in ingress-nginx

落花浮王杯 submitted on 2021-01-27 22:41:35
1. First, add a ConfigMap named nginx-config:

apiVersion: v1
data:
  client-header-buffer-size: 32k
  client-max-body-size: 5m
  gzip-level: "7"
  large-client-header-buffers: 4 32k
  proxy-connect-timeout: 11s
  proxy-read-timeout: 12s
  use-geoip2: "true"
  use-gzip: "true"
kind: ConfigMap

2. In the ingress-nginx-controller Deployment, add an arg pointing at the ConfigMap (configmap=kube-system/nginx-config):

...
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --default-backend-service=kube-system/nginx-ingress-default-backend
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --tcp-services-configmap=kube-system/tcp-services
    - -
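For orientation, the ConfigMap keys above correspond roughly to the following directives in the nginx.conf that the controller generates (a sketch of the intended effect, not the literal generated file):

# Approximate nginx-level effect of the ConfigMap settings above (sketch only);
# use-geoip2 enables the controller's GeoIP2 module and has no single-line equivalent here.
client_header_buffer_size   32k;
large_client_header_buffers 4 32k;
client_max_body_size        5m;
gzip                        on;
gzip_comp_level             7;
proxy_connect_timeout       11s;
proxy_read_timeout          12s;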

app on path instead of root not working for Kubernetes Ingress

扶醉桌前 submitted on 2021-01-27 21:16:18
Question: I have an issue at work with K8s Ingress, and I will use fake examples here to illustrate my point. Assume I have an app called Tweeta and my company is called ABC. My app currently sits on tweeta.abc.com, but we want to migrate it to app.abc.com/tweeta. My current ingress in K8s is as below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tweeta-ingress
spec:
  rules:
  - host: tweeta.abc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tweeta-frontend
          servicePort: 80
      -

6 Rate-Limiting Implementations Anyone Can Understand! (No Fluff)

淺唱寂寞╮ submitted on 2021-01-27 21:14:13
To make my commute easier, last year I rented out my apartment in the northern suburbs and moved to the southern suburbs, close to where I work. It saved me a lot of time that I can now spend on more meaningful things, and at the very least I no longer get stressed by traffic jams, so my sense of well-being shot up.

Even so, life has other annoyances. The southern suburbs are densely populated, so parking became a headache. I rent unreserved spots along the roadside, and by the time I get home from work there are never any spots left, so I have to double-park next to someone else's car. The consequence is that every morning I get woken up by a phone call asking me to move my car; you can imagine my mood.

After a few days I got smarter: when parking in the evening, I would double-park next to a car that was restricted from driving the next day, so I would not have to move mine the next morning. That was the "huge dividend" the driving restriction gave me.

Vehicle driving restrictions are in fact a very common rate-limiting strategy in everyday life. Besides the benefits above, they slightly improve our living environment, and the rapid growth of private cars has already put a huge burden on traffic; without restrictions, every car might end up stuck on the road. That is the great benefit rate limiting brings to daily life.

Back from life to programs: suppose a system can serve only 100,000 users, and one day, because of some trending event, traffic suddenly jumps to 500,000 in a short period. The direct result is that the system crashes and nobody can use it. Clearly, letting a few people use the system is far better than letting nobody use it, so this is when we need rate limiting.

Rate-limiting categories

There are many ways to implement rate limiting; Lei Ge (the author) has sorted them out a bit here. The categories of rate limiting
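The excerpt cuts off before the actual schemes, but to give a flavour of what a server-side limit looks like in the nginx world (one common approach, not necessarily one from the author's list), limit_req applies a leaky-bucket limit per client IP; the zone name, rate and upstream address are all illustrative:

limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;   # 10 requests/second per client IP

server {
    listen 80;
    location / {
        limit_req zone=per_ip burst=20 nodelay;   # tolerate short bursts of up to 20 requests
        proxy_pass http://127.0.0.1:8080;         # assumed upstream
    }
}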