thrift

A detailed look at the difference between RPC remote calls and message queues (MQ)

删除回忆录丶 submitted on 2019-12-06 16:52:37
What is RPC: RPC (Remote Procedure Call) solves the problem of communication between remote machines without the caller needing to understand the underlying network mechanics. Which RPC frameworks are there: the better-known ones include Thrift (from Facebook) and Dubbo (from Alibaba). An RPC call generally goes through four steps:

1. Establish communication. First, the communication problem must be solved: for machine A to call machine B, a connection has to be established, typically a TCP connection between the client and the server.
2. Service addressing. Next comes the addressing problem: how server A reaches server B (host name or IP address) and the specific port, and what the target method is named.
3. Network transport. 1) Serialization: when an application on server A initiates an RPC call, the called method and its argument data must first be serialized. 2) Deserialization: when server B receives server A's request, it must deserialize the received arguments and other information.
4. Service invocation. Server B performs the local call (through a proxy) and obtains the return value; the return value then has to be sent back to server A, which again requires serialization, after which the binary data travels back over the network to server A.

A complete RPC call normally goes through these four steps, as in the toy sketch below.

MQ (message queue): a message queue (MQ) is a communication model for one-way delivery from producers to consumers; in practice the term usually refers to the middleware that implements this model. Typical characteristics:

1. Decoupling
2. Reliable delivery
3. Broadcast
4. Eventual consistency
5. Peak shaving (smoothing traffic spikes)
6. Message delivery guarantees
7. Asynchronous communication (synchronous also supported)
8. Higher system throughput and robustness
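To make the four steps concrete, here is a toy, hand-rolled client in Java (an illustration only, not from any framework; the host, port, and method name are made-up placeholders):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class ToyRpcClient {
    public static void main(String[] args) throws Exception {
        // Step 1: establish communication (a TCP connection; host/port are placeholders).
        try (Socket socket = new Socket("rpc-server.example", 9090)) {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());

            // Step 2: service addressing (name the remote service and method).
            out.writeUTF("UserService.getName");

            // Step 3: serialize the call arguments before they cross the wire.
            out.writeInt(42); // e.g. a user id
            out.flush();

            // Step 4: the server deserializes the request, invokes the method
            // locally, serializes the return value, and sends it back; the
            // client deserializes that response here.
            String result = in.readUTF();
            System.out.println("remote result: " + result);
        }
    }
}
```

Frameworks like Thrift and Dubbo automate exactly these steps: connection management, addressing, and (de)serialization are generated or handled by the runtime rather than written by hand.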

Storm topology failure while running in production

◇◆丶佛笑我妖孽 submitted on 2019-12-06 14:33:47
Question: Hi, I'm having an issue with running a Storm cluster. It is similar to … My topology is defined as:

```java
package com.abc.newsclassification;

import StormBase.KnowledgeGraph.ClassifierBolt;
import StormBase.KnowledgeGraph.ClientSpecificTwitterSpout;
import StormBase.KnowledgeGraph.LiveTwitterSpout;
import StormBase.KnowledgeGraph.NewsTwitterSpout;
import StormBase.KnowledgeGraph.TwitterTrainingBolt;
import StormBase.KnowledgeGraph.UrlExtractorBolt;
import backtype.storm.Config;
import backtype.storm
```

Apache Thrift 0.9.0 won't configure per instructions

孤人 submitted on 2019-12-06 13:25:56
Apache Thrift 0.9.0 won't configure per the instructions on a base CentOS install. When you try ./configure, it gives you an "Error: libcrypto required". The documentation (http://thrift.apache.org/docs/install/centos/) says that you need:

```
sudo yum install automake libtool flex bison pkgconfig gcc-c++ boost-devel libevent-devel zlib-devel python-devel ruby-devel
```

The documentation is missing the openssl dependency; you also need to include openssl-devel.x86_64 in the package install list above. What you really need to install is:

```
sudo yum install automake libtool flex bison pkgconfig gcc-c++ boost-devel libevent-devel zlib-devel python-devel ruby-devel openssl-devel
```

Can I directly serialize to a file using PHP's thrift library?

半世苍凉 submitted on 2019-12-06 13:12:58
Related: Apache Thrift: Serializing data. Hi guys: I've noticed that the PHP Thrift extensions don't appear to have a TFileTransport class. This leads me to wonder: what is the mechanism for writing a Thrift object to a FILE in PHP? Unfortunately, the available documentation focuses on the client/server model for using Thrift, but I need to use PHP to serialize binary Thrift files on disk, which contain a stream of Thrift objects. Try extending TPhpStream by overriding:

```php
private static function inStreamName() {
    if (php_sapi_name() == 'cli') {
        return 'php://stdin';
    }
    return 'php://input';
}
```

and
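The answer is cut off after "and"; presumably the matching output-side override comes next, pointing the stream at a file on disk instead of php://output. A hypothetical completion (the path is made up):

```php
// Hypothetical completion: route TPhpStream's output to a file on disk,
// so serialized Thrift objects land in that file instead of the HTTP response.
private static function outStreamName() {
    return '/tmp/thrift_objects.bin';
}
```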

Using Thrift for IPC communication via shared memory

吃可爱长大的小学妹 submitted on 2019-12-06 12:12:13
Question: I couldn't find a sufficient example of how to use Apache Thrift for IPC communication via shared memory. My goal is to serialize an existing class with the help of Thrift, then send it via shared memory to a different process, where I deserialize it again with the help of Thrift. Right now I'm using TMemoryBuffer and TBinaryProtocol to serialize the data. Although this works, I have no idea how to write it to shared memory. Here is my code so far:

```cpp
#include "test_types.h"
#include "test_constants.h"
```
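For the serialize-to-raw-bytes half of the problem, here is a rough sketch of the same pattern in Java (MyStruct is a stand-in for any Thrift-generated type; actually mapping the shared-memory segment is OS-specific and not shown):

```java
import org.apache.thrift.TDeserializer;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

public class SharedMemorySketch {
    public static void main(String[] args) throws Exception {
        // MyStruct is a placeholder for an IDL-generated Thrift class.
        MyStruct original = new MyStruct();

        // Serialize with the binary protocol into a plain byte[] --
        // these are the bytes you would copy into the shared-memory segment.
        TSerializer serializer = new TSerializer(new TBinaryProtocol.Factory());
        byte[] bytes = serializer.serialize(original);

        // The reading process deserializes straight from the raw bytes it
        // copied back out of the segment.
        MyStruct copy = new MyStruct();
        new TDeserializer(new TBinaryProtocol.Factory()).deserialize(copy, bytes);
    }
}
```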

How to load files in SparkSQL from remote Hive storage (S3, ORC format) using Spark/Scala + code + configuration

ぃ、小莉子 submitted on 2019-12-06 11:27:37
IntelliJ (Spark) ---> Hive (remote) ---> storage on S3 (ORC format). Not able to read a remote Hive table through Spark/Scala; I was able to read the table schema, but not able to read the table itself. Error:

```
Exception in thread "main" java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
```

```scala
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.{Encoders, SparkSession}
import org.apache.spark
```
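The exception itself names the fix: supply the S3 credentials via the Hadoop configuration before reading the table. A minimal sketch in Java (the property keys are the ones from the error message; use the fs.s3n.* or fs.s3a.* variants instead if that connector backs the table's URL, and the database/table name here is a placeholder):

```java
import org.apache.spark.sql.SparkSession;

public class RemoteHiveOrcOnS3 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("remote-hive-orc-on-s3")
                .enableHiveSupport()
                .getOrCreate();

        // Set the exact properties the exception asks for, before any read.
        spark.sparkContext().hadoopConfiguration()
                .set("fs.s3.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"));
        spark.sparkContext().hadoopConfiguration()
                .set("fs.s3.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"));

        // Placeholder table; with the credentials set, the ORC data loads.
        spark.sql("SELECT * FROM some_db.some_orc_table").show();
    }
}
```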

Session management in Thrift

自古美人都是妖i submitted on 2019-12-06 10:59:14
Question: I can't seem to find any documentation on how session management is supposed to be done in Thrift's RPC framework. I know I can do TServer.setServerEventHandler(myEventHandler); and observe calls to createContext (called when a connection is established) and processContext (called before every method call). Still, I have to get whatever session state I maintain from those messages into the handler itself. So how can I access session information in my handlers?

Answer 1: Not sure if there isn't also
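One workable pattern (a sketch only; SessionContext and its attribute map are made-up names, not a Thrift API) is to hand the per-connection context to the service handler through a ThreadLocal, since processContext runs on the same thread that then executes the call:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.server.ServerContext;
import org.apache.thrift.server.TServerEventHandler;
import org.apache.thrift.transport.TTransport;

public class SessionEventHandler implements TServerEventHandler {

    // Hypothetical per-connection session state. In Thrift releases of this
    // era, ServerContext is a plain marker interface (newer releases add
    // unwrap/isWrapperFor methods that would also need implementing).
    public static class SessionContext implements ServerContext {
        public final Map<String, Object> attributes = new HashMap<>();
    }

    // The service handler reads the current call's session from here.
    public static final ThreadLocal<SessionContext> CURRENT = new ThreadLocal<>();

    @Override
    public void preServe() {}

    @Override
    public ServerContext createContext(TProtocol input, TProtocol output) {
        return new SessionContext(); // one per client connection
    }

    @Override
    public void deleteContext(ServerContext ctx, TProtocol input, TProtocol output) {
        CURRENT.remove(); // connection closed; drop the session
    }

    @Override
    public void processContext(ServerContext ctx, TTransport in, TTransport out) {
        CURRENT.set((SessionContext) ctx); // runs before every method call
    }
}
```

With TServer.setServerEventHandler(new SessionEventHandler()), a handler method can then call SessionEventHandler.CURRENT.get() to reach its own connection's state.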

How to run a Sqoop Import from a Hive Thrift Client to a Hive Thrift Server?

折月煮酒 submitted on 2019-12-06 10:29:05
Using JDBC I connected easily and was able to run a Hive-QL query with the following sample code:

```java
Connection con = DriverManager.getConnection("jdbc:hive2://192.168.56.102:10000/default", "", "");
Statement stmt = con.createStatement();
String tableName = "testHiveDriverTable1";
stmt.executeQuery("create table " + tableName + " (key int, value string)");
```

This means I am able to communicate with Hive. Now I want to execute Sqoop as well. How can I do it? I did it through the command line; see the following sample import, which worked:

```
sqoop import --connect jdbc:mysql://192.168.56.101:3316/dw_db --username=user
```
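To kick off the same import from Java rather than the shell, Sqoop exposes a programmatic entry point, Sqoop.runTool, which takes the CLI arguments as an array. A hedged sketch (connect string and username copied from the command above; the password and table are illustrative placeholders):

```java
import org.apache.sqoop.Sqoop;

public class SqoopImportFromJava {
    public static void main(String[] args) {
        // The same arguments as the CLI invocation, as a String array.
        String[] sqoopArgs = {
            "import",
            "--connect", "jdbc:mysql://192.168.56.101:3316/dw_db",
            "--username", "user",
            "--password", "secret",     // placeholder
            "--table", "some_table"     // placeholder
        };
        // Runs the import in-process and returns the tool's exit code.
        int exitCode = Sqoop.runTool(sqoopArgs);
        System.out.println("sqoop exit code: " + exitCode);
    }
}
```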

The charm of Apache Thrift

◇◆丶佛笑我妖孽 submitted on 2019-12-06 10:15:35
Why Apache Thrift: I recently needed to integrate a machine-learning algorithm written in Python into a project whose backend mainly uses the Spring Cloud stack, so I faced a choice of how to implement communication between heterogeneous languages. The business logic (shown as a diagram in the original post) mainly comes down to implementing the request/response of the 2-3 leg. There are plenty of ways to do this: you could even wrap the Python side as a web server and expose it as a service, or use message middleware; however, most message middleware has a one-way publish/subscribe communication model, though that too could satisfy the requirement above. The project's first implementation was the simple, brute-force one below: raw socket programming, with a socket server written in Python and a socket client in Java, giving heterogeneous communication between the two sides, as in the two code snippets below. Run locally, the two sides communicated reasonably fast, but once I built it into a Docker image, packaged it, and deployed it online, the round trip between the two sides took a full 9 seconds. Nine seconds per request is obviously unacceptable.

```java
InetAddress localhost = InetAddress.getByName("192.168.88.1");
Socket socket = new Socket(localhost.getHostName(), 9999);
OutputStream outputStream = socket.getOutputStream();
// The excerpt is cut off here; presumably it continues:
InputStream inputStream = socket.getInputStream();
```
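This is exactly the niche Thrift fills: the service is defined once in an IDL, the Python side gets a generated server stub and the Java side a generated client, and the hand-written socket plumbing disappears. A rough sketch of what the Java client side then looks like (Predictor and its classify method stand in for whatever service the IDL defines; TSocket and TBinaryProtocol are Thrift's real classes):

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftClientSketch {
    public static void main(String[] args) throws Exception {
        // Same host/port idea as the raw-socket version above.
        TTransport transport = new TSocket("192.168.88.1", 9999);
        transport.open();

        // Predictor is a placeholder for the IDL-generated service client.
        Predictor.Client client = new Predictor.Client(new TBinaryProtocol(transport));
        String label = client.classify("some input text"); // hypothetical method
        System.out.println(label);

        transport.close();
    }
}
```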

A look at Storm's submitTopology

时间秒杀一切 submitted on 2019-12-06 09:27:45
Preface: This article mainly examines Storm's submitTopology. Sample log from submitting a topology:

```
2018-10-08 17:32:55.738 INFO 2870 --- [ main] org.apache.storm.StormSubmitter : Generated ZooKeeper secret payload for MD5-digest: -8659577410336375158:-6351873438041855318
2018-10-08 17:32:55.893 INFO 2870 --- [ main] org.apache.storm.utils.NimbusClient : Found leader nimbus : a391f7a04044:6627
2018-10-08 17:32:56.059 INFO 2870 --- [ main] o.apache.storm.security.auth.AuthUtils : Got AutoCreds []
2018-10-08 17:32:56.073 INFO 2870 --- [ main] org.apache.storm.utils.NimbusClient : Found leader nimbus : a391f7a04044:6627
2018-10-08 17:32:56.123 INFO
```
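For context, the call that produces a log like the one above is StormSubmitter.submitTopology. A minimal sketch of a submission (the spout and bolt classes are placeholders, not from this article):

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // Placeholder components; any IRichSpout/IRichBolt pair works here.
        builder.setSpout("sentence-spout", new SentenceSpout());
        builder.setBolt("split-bolt", new SplitBolt())
               .shuffleGrouping("sentence-spout");

        Config conf = new Config();
        conf.setNumWorkers(2);

        // Uploads the topology jar to Nimbus and submits it -- the step that
        // emits the "Found leader nimbus" lines in the log above.
        StormSubmitter.submitTopology("demo-topology", conf, builder.createTopology());
    }
}
```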