Communication Between Microservices

Submitted on 2019-12-12 08:49:21

Question


Say you have microservices A, B, and C, which all currently communicate over HTTP. Say service A sends a request to service B, which results in a response. The data returned in that response must then be sent to service C for some processing before finally being returned to service A. Service A can now display the results on the web page.
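The call chain described above can be sketched as plain functions; the service names and payload fields here are stand-ins for the real HTTP calls, just to make the data flow concrete:

```python
# Minimal sketch of the A -> B -> C call chain; each function stands in
# for an HTTP request to the corresponding service.
def service_b(request):
    # B does some work on A's query.
    return {"data": request["query"].upper()}

def service_c(data):
    # C post-processes B's response.
    return {"processed": data["data"] + "!"}

def service_a(query):
    # A calls B, forwards B's response to C, then returns C's result.
    b_response = service_b({"query": query})
    c_response = service_c(b_response)
    return c_response

print(service_a("hello"))  # {'processed': 'HELLO!'}
```

Note that A blocks on two sequential network round trips here, which is exactly where the latency concern below comes from.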

I know that latency is an inherent issue with implementing a microservice architecture, and I was wondering what are some common ways of reducing this latency?

Also, I have been doing some reading on how Apache Thrift and RPCs can help with this. Can anyone elaborate on that as well?


Answer 1:


Also, I have been doing some reading on how Apache Thrift and RPCs can help with this. Can anyone elaborate on that as well?

The goal of an RPC framework like Apache Thrift is

  • to significantly reduce the manual programming overhead
  • to provide efficient serialization and transport mechanisms
  • across all kinds of programming languages and platforms

In other words, this allows you to send your data as a very compact, tightly packed message over the wire, while most of the effort required to achieve this is handled by the framework.
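To see why a binary wire format is so much smaller than a text-based one, here is a minimal sketch using only the Python standard library (the field names are illustrative, not part of any real Thrift IDL):

```python
import json
import struct

# Hypothetical record two services might exchange.
user_id, score = 42, 3.14

# Text-based encoding, as with JSON over HTTP: field names travel
# on the wire with every message.
text_payload = json.dumps({"user_id": user_id, "score": score}).encode()

# Compact binary encoding, similar in spirit to what an RPC framework
# generates for you: a fixed layout, no field names on the wire.
# "<if" = little-endian 4-byte int followed by 4-byte float.
binary_payload = struct.pack("<if", user_id, score)

print(len(text_payload), len(binary_payload))  # the binary form is far smaller
```

Real Thrift protocols add field IDs and variable-length encodings on top of this idea, but the size advantage over self-describing text formats is the same.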

Apache Thrift provides you with a pluggable transport/protocol stack that can quickly be adapted by plugging in different

  • transports (Sockets, HTTP, pipes, streams, ...)
  • protocols (binary, compact, JSON, ...)
  • layers (framed, multiplex, gzip, ...)
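The "layers" in that list simply wrap one another. A minimal Python sketch of the idea (not Thrift's actual API) shows a gzip layer stacked under a framed layer:

```python
import gzip
import struct

def frame(payload: bytes) -> bytes:
    # "Framed" layer: prefix the payload with its 4-byte big-endian length,
    # so the receiver knows where one message ends and the next begins.
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length]

# Layers compose: compress the serialized message, then frame it --
# conceptually the same stacking Thrift performs when you wrap one
# transport inside another.
message = b"some serialized struct" * 10
on_the_wire = frame(gzip.compress(message))

# The receiving side peels the layers off in reverse order.
assert gzip.decompress(unframe(on_the_wire)) == message
```

Because each layer only sees bytes, you can swap layers in and out without touching the serialization code above them; that is what makes the stack pluggable.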

Additionally, depending on the target language, you get some infrastructure for the server-side end, such as non-blocking or thread-pool servers (e.g. TNonblockingServer or TThreadPoolServer).
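As a rough analogy for what a thread-pool-style server does, here is a self-contained Python sketch using the standard library's `socketserver` (this is not Thrift code, just the same serving pattern):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    # Each incoming connection is handled on its own thread, analogous
    # in spirit to Thrift's threaded server classes.
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data.upper())

# Port 0 asks the OS for any free port.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
with socket.create_connection((host, port)) as conn:
    conn.sendall(b"ping")
    reply = conn.recv(1024)
print(reply)  # b'PING'

server.shutdown()
```

An RPC framework gives you this plumbing for free, so your service code only implements the handler logic.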

So coming back to your initial question, such a framework can help to make communication easier and more efficient. But it cannot magically remove latency from other parts of the OSI stack.



Source: https://stackoverflow.com/questions/35673254/communication-between-microservices
