consul

A Simple Ocelot Tutorial (Part 4): Request Aggregation and Service Discovery

Submitted by 浪尽此生 on 2019-12-04 00:02:04
The previous article covered some of Ocelot's features and introduced routing in detail; today we will look at Ocelot's request aggregation and service discovery features. I hope it is helpful. Author: 依乐祝. Original article: https://www.cnblogs.com/yilezhu/p/9695639.html

Request aggregation. Ocelot allows you to declare aggregate routes: you bundle several normal ReRoutes and map them to a single object used to respond to the client. For example, suppose you request order information, and the order also contains product information; this involves two microservices, a product service and an order service. Without an aggregate route, the client may need to call the server twice for a single order, which creates extra overhead on the server side. With an aggregate route, the client sends one request; the aggregate route merges the order and product results into one object and returns that object to the client. This Ocelot feature makes it easy to implement an architecture with a separated front end and back end. To enable request aggregation, configure ocelot.json as follows: declare two normal ReRoutes and give each ReRoute a Key property, then add the two keys to the ReRouteKeys property of the Aggregates node to form the aggregate of the two ReRoutes. You also need to set UpstreamPathTemplate to match the upstream client request
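As a sketch, such an ocelot.json might look like the following. The route paths, ports, and key names here are illustrative assumptions, not taken from the original article:

```json
{
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/api/order/{id}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5001 } ],
      "UpstreamPathTemplate": "/order/{id}",
      "Key": "Order"
    },
    {
      "DownstreamPathTemplate": "/api/goods/{id}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5002 } ],
      "UpstreamPathTemplate": "/goods/{id}",
      "Key": "Goods"
    }
  ],
  "Aggregates": [
    {
      "ReRouteKeys": [ "Order", "Goods" ],
      "UpstreamPathTemplate": "/order-detail/{id}"
    }
  ]
}
```

A request to /order-detail/{id} would then fan out to both ReRoutes and return one merged JSON object keyed by "Order" and "Goods".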

Learning and Applying Spring Cloud Finchley.SR1, Part 4

Submitted by 时光毁灭记忆、已成空白 on 2019-12-03 21:30:24
Introduction to Spring Cloud Consul. The Spring Cloud Consul project is a service-governance implementation built on Consul. Consul is a distributed, highly available system; it comprises multiple components, but as a whole it provides service discovery and service configuration for the infrastructure of a microservice architecture. It offers the following features: service discovery, health checks, Key/Value storage, and multiple datacenters. Thanks to the Spring Cloud Consul project, we can easily register Spring Boot microservice applications with Consul and thereby implement service governance in a microservice architecture. Consul was chosen over Eureka for this project because open-source work on Eureka 2.0 has been discontinued. Service registration: the previous article introduced deploying and installing the Consul server; next we introduce using the Spring Cloud Consul client. Create modules-woqu, inheriting from parent-woqu, to manage the business-system projects: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
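For orientation, registering a Spring Boot service with Consul typically needs the spring-cloud-starter-consul-discovery dependency plus a few properties. The module name, host, and health-check path below are illustrative assumptions, not from the original series:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>
```

```yaml
spring:
  application:
    name: demo-service        # registered as the service name in Consul
  cloud:
    consul:
      host: localhost         # address of the local Consul agent
      port: 8500
      discovery:
        health-check-path: /actuator/health
```

With this in place, the application registers itself on startup and Consul periodically probes the health-check path.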

How does a Consul agent know it is the leader of a cluster?

Submitted by 落花浮王杯 on 2019-12-03 12:02:20
In Consul you can have many agents as servers or clients. Amongst all servers, one is chosen as the leader. From the agent's point of view, how does it know it is the leader?

The Consul leader is elected via an implementation of the Raft Protocol from amongst the Quorum of Consul Servers. Only Consul instances that are configured as Servers participate in the Raft Protocol communication. The Consul Agent (the daemon) can be started as either a Client or a Server. Only a Server can be the leader of a Datacenter. The Raft Protocol was created by Diego Ongaro and John Ousterhout from Stanford
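As a back-of-the-envelope illustration (not Consul's actual implementation), the core of how a Raft candidate knows it has won is simple majority voting: a server becomes leader once it has collected votes from a quorum of the server pool.

```python
def quorum_size(num_servers):
    """Minimum number of votes needed to win a Raft election:
    a strict majority of all servers."""
    return num_servers // 2 + 1

def is_leader(votes_received, num_servers):
    """A candidate knows it is the leader once a majority of
    servers have voted for it in the current term."""
    return votes_received >= quorum_size(num_servers)

# With 5 Consul servers, 3 votes win the election.
print(quorum_size(5))    # 3
print(is_leader(2, 5))   # False
print(is_leader(3, 5))   # True
```

This is also why Consul recommends an odd number of servers: 5 servers tolerate 2 failures, while 4 servers still only tolerate 1.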

Zuul and Consul integration issue

Submitted by Anonymous (unverified) on 2019-12-03 08:28:06
Question: I have a problem setting up a Spring Cloud application with Zuul and Consul service discovery. I have a Consul server agent installed and running locally: ./src/main/bash/local_run_consul.sh When I run the Spring Boot application with the @EnableZuulProxy annotation, I get the following error: Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.netflix.zuul.filters.RouteLocator]: Factory method 'routeLocator' threw exception; nested exception is java.lang.IllegalStateException: Unable to locate
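For reference, a hedged sketch of the kind of configuration a Zuul gateway with Consul discovery usually expects — the service name and route below are illustrative assumptions, not taken from the question:

```yaml
spring:
  application:
    name: zuul-gateway
  cloud:
    consul:
      host: localhost      # must point at a reachable Consul agent
      port: 8500
      discovery:
        enabled: true

zuul:
  routes:
    demo-service:
      path: /demo/**
      serviceId: demo-service   # resolved via Consul discovery
```

Errors like the one quoted often come down to the discovery client not being on the classpath or not finding the agent before route resolution runs.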

How to self register a service with Consul

Submitted by 拈花ヽ惹草 on 2019-12-03 07:58:08
Question: I'm trying to self-register my ASP.NET Core application with the Consul registry on startup and deregister it on shutdown. From here I gather that calling the HTTP API [ put /v1/agent/service/register ] might be the way to go (or maybe not!). From my app, I thought I'd target the Startup class, starting with adding my .json file: public Startup(IHostingEnvironment env) { var builder = new Configuration().AddJsonFile("consulconfig.json"); Configuration = builder.Build(); } But now, I'm stuck
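Independent of the web framework, the registration call itself is just an HTTP PUT of a JSON body to the local agent, and deregistration is a PUT to a matching endpoint on shutdown. A minimal language-neutral sketch in Python — the service name, address, and port are placeholders:

```python
import json

# Assumed local Consul agent address.
CONSUL_AGENT = "http://localhost:8500"

def build_registration(service_id, name, address, port):
    """Return the (url, body) pair for PUT /v1/agent/service/register."""
    payload = {
        "ID": service_id,
        "Name": name,
        "Address": address,
        "Port": port,
    }
    return CONSUL_AGENT + "/v1/agent/service/register", json.dumps(payload)

def deregistration_url(service_id):
    """On shutdown, call PUT /v1/agent/service/deregister/:id."""
    return CONSUL_AGENT + "/v1/agent/service/deregister/" + service_id

url, body = build_registration("web-1", "web", "10.0.0.5", 5000)
print(url)
print(deregistration_url("web-1"))
```

In ASP.NET Core, the startup call would typically be wired into the application-started hook and the deregistration into the application-stopping hook of the host lifetime.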

Using Consul's Common APIs

Submitted by 一个人想着一个人 on 2019-12-03 07:40:01
prometheus.yml configuration:

- job_name: 'node_exporter'
  consul_sd_configs:
    - server: 'consul_ip:8500'
      services: ['node_exporter']   # match on the service keyword
- job_name: 'service'
  consul_sd_configs:
    - server: 'consul_ip:8500'
      services: []
  relabel_configs:
    - source_labels: [__meta_consul_tags]
      regex: .*service.*
      action: keep

Register a service:

curl -X PUT -d '{"id": "test1", "name": "test1", "address": "10.80.229.55", "port": 9100, "tags": ["service"], "checks": [{"http": "http://10.80.229.55:9100/", "interval": "5s"}]}' http://consul_ip:8500/v1/agent/service/register

Query the service information of a specified node: curl http:/

The Prometheus Configuration File

Submitted by 天涯浪子 on 2019-12-03 07:39:32
In the Prometheus monitoring system, Prometheus is responsible for collecting, querying, and storing metrics, and for pushing alerts to Alertmanager. This article introduces Prometheus's configuration file.

Overview of the default global configuration file:

global: this section holds Prometheus's global settings, such as the scrape interval and the scrape timeout.
rule_files: this section lists the alerting-rule files; based on these rules, Prometheus pushes alerts to Alertmanager.
scrape_configs: this section holds the scrape configuration; Prometheus's data collection is configured here.
alerting: this section holds the alerting configuration, chiefly the Alertmanager instance addresses to which Prometheus pushes alerts.
remote_write: the write API address of the remote storage backend.
remote_read: the read API address of the remote storage backend.

Main parameters of the global section:

# How frequently to scrape targets by default.
[ scrape_interval: <duration> | default = 1m ]
# How long until a scrape request times out.
[ scrape_timeout:
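Putting those sections together, a minimal prometheus.yml might look like this — the file paths and target addresses are placeholders for illustration:

```yaml
global:
  scrape_interval: 15s
  scrape_timeout: 10s

rule_files:
  - "rules/*.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
```

Each of the sections described above maps directly onto one top-level key of this file.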

What is the conceptual difference between Service Discovery tools and Load Balancers that check node health?

Submitted by 一个人想着一个人 on 2019-12-03 07:30:49
Question: Recently several service discovery tools have become popular/"mainstream", and I'm wondering under what primary use cases one should employ them instead of traditional load balancers. With LBs, you cluster a bunch of nodes behind the balancer, and clients make requests to the balancer, which then (typically) round-robins those requests to all the nodes in the cluster. With service discovery (Consul, ZK, etc.), you let a centralized "consensus" service determine what nodes for particular
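To make the contrast concrete, here is a toy Python sketch (illustrative only): a classic load balancer round-robins over a static node list, while a discovery-style client first asks a registry which nodes are currently healthy.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Classic LB: static node list, requests distributed round-robin,
    regardless of which nodes are actually healthy."""
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def next_node(self):
        return next(self._nodes)

class Registry:
    """Discovery-style registry: tracks per-node health, so a client
    (or a smart LB) only routes to nodes that are healthy right now."""
    def __init__(self):
        self._health = {}  # node -> is_healthy

    def register(self, node, healthy=True):
        self._health[node] = healthy

    def healthy_nodes(self):
        return [n for n, ok in self._health.items() if ok]

lb = RoundRobinBalancer(["a", "b", "c"])
print([lb.next_node() for _ in range(4)])  # ['a', 'b', 'c', 'a']

reg = Registry()
reg.register("a")
reg.register("b", healthy=False)
reg.register("c")
print(reg.healthy_nodes())  # ['a', 'c']
```

In practice the two are often combined: the discovery service feeds the load balancer its upstream list, which is exactly what the Consul + Consul Template + NGINX stack further down this page does.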

Docker container with status “Dead” after consul healthcheck runs

Submitted by 拜拜、爱过 on 2019-12-03 02:08:09
I am using Consul's healthcheck feature, and I keep getting these "dead" containers:

CONTAINER ID  IMAGE                   COMMAND                CREATED         STATUS  PORTS  NAMES
20fd397ba638  progrium/consul:latest  "\"/bin/bash -c 'cur   15 minutes ago  Dead

What exactly is a "Dead" container? When does a stopped container become "Dead"? For the record, I run the progrium/consul and gliderlabs/registrator images with SERVICE_XXXX_CHECK environment variables to do health checking. It runs a healthcheck script from an image every X seconds, something like docker run --rm my/img healthcheck.sh I'm interested, in general, in what "dead" means and how

Unable to load balance using Docker, Consul and nginx

Submitted by Anonymous (unverified) on 2019-12-03 01:45:01
Question: What I want to achieve is load balancing using this stack: Docker, Docker Compose, Registrator, Consul, Consul Template, NGINX and, finally, a tiny service that prints out "Hello world" in the browser. So, at this moment I have a docker-compose.yml file. It looks like so:

version: '2'
services:
  accent:
    build:
      context: ./accent
    image: accent
    container_name: accent
    restart: always
    ports:
      - 80
  consul:
    image: gliderlabs/consul-server:latest
    container_name: consul
    hostname: ${MYHOST}
    restart: always
    ports:
      - 8300:
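For the NGINX piece of this stack, Consul Template typically renders an upstream block from the services Registrator pushes into Consul. A hedged sketch of such a template — the service name "accent" is taken from the compose file above, everything else is an illustrative assumption:

```nginx
upstream accent {
  {{ range service "accent" }}
  server {{ .Address }}:{{ .Port }};
  {{ end }}
}

server {
  listen 80;
  location / {
    proxy_pass http://accent;
  }
}
```

Consul Template re-renders this file and reloads NGINX whenever the set of healthy "accent" instances changes, which is what makes the load balancing dynamic.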