thrift

HBase Thrift in CDH 5

Submitted by 孤人 on 2019-12-04 17:22:35
I'm using the Node.js Thrift API to connect to HBase. Everything was working great until I upgraded CDH 4.6 to CDH 5. After upgrading, I regenerated the Thrift API for Node.js with this command: thrift --gen js:node /opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hbase/include/thrift/hbase2.thrift After replacing the original Node.js script with the newly generated one, everything stopped working. You can view the new script and the basic methods in the demo I'm trying to run at https://github.com/lgrcyanny/Node-HBase-Thrift2 When I run the 'get' method, it returns "Internal error …
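For reference, hbase2.thrift defines the newer Thrift2 interface (THBaseService), whose get call takes a TGet struct instead of the row/column arguments of the older hbase.thrift API, so clients and servers generated from different IDL files cannot talk to each other. A minimal sketch of the same get call from Python, assuming bindings generated from the same hbase2.thrift and an HBase Thrift2 server on localhost:9090 (host, port, table, and the generated module name are all assumptions, not from the question):

    # Minimal HBase Thrift2 "get" sketch (Python).
    # Assumes `thrift --gen py hbase2.thrift` produced an `hbase` package
    # and that `hbase thrift2 start` is listening on localhost:9090.
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from hbase import THBaseService          # generated module (name assumed)
    from hbase.ttypes import TGet

    transport = TTransport.TBufferedTransport(TSocket.TSocket('localhost', 9090))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = THBaseService.Client(protocol)

    transport.open()
    # Thrift2 wraps the row key in a TGet struct; table and row are examples.
    result = client.get(b'mytable', TGet(row=b'row1'))
    for cv in result.columnValues:
        print(cv.family, cv.qualifier, cv.value)
    transport.close()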

libtool error building thrift 0.9.1 on Ubuntu 13.04

Submitted by 我怕爱的太早我们不能终老 on 2019-12-04 16:24:03
Question: Building Thrift 0.9.1 (with support for C, C++, Java, C#, Perl, Python) on Ubuntu 13.04, I am getting this error. ./configure was run without any options, and make was run without any options...

    Making all in test
    make[2]: Entering directory `/home/dvb/sw/thrift-0.9.1/test'
    Making all in nodejs
    make[3]: Entering directory `/home/dvb/sw/thrift-0.9.1/test/nodejs'
    make[3]: Nothing to be done for `all'.
    make[3]: Leaving directory `/home/dvb/sw/thrift-0.9.1/test/nodejs'
    Making all in cpp
    make[3]: Entering directory ` …

Thrift client-server multiple roles

Submitted by 落花浮王杯 on 2019-12-04 13:08:36
This is my first question, so sorry if the form is wrong! I'm trying to build a Thrift server (Python) and client (C++). However, I need to exchange messages in both directions. The client should register (call a server function and wait), and the server should listen on the same port for N (N → 100k) incoming connections (clients). Once some conditions are satisfied, the server needs to call functions on each client, collect the results, and interpret them. I'm a little confused, and my first question is: can this be done in Thrift? The second question concerns a mechanism that would allow me bidirectional …
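Plain Thrift RPC is one-directional per connection, so the usual workaround is to give each client its own small Thrift server and have it register its callback address with the main server. A rough Python sketch of the client side, assuming a hypothetical callback.thrift that defines a ClientCallback service with a collect() method (the service, module, and port are all invented for illustration):

    # Sketch of the "client also runs a server" callback pattern (Python).
    # Assumes a hypothetical callback.thrift containing:
    #   service ClientCallback { string collect() }
    # and its generated `callback` package; none of this is from the question.
    import threading

    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from thrift.server import TServer
    from callback import ClientCallback     # hypothetical generated module

    class CallbackHandler:
        def collect(self):
            # The main server calls back into this method once its
            # conditions are satisfied; return this client's result.
            return "client result"

    def serve_callbacks(port):
        processor = ClientCallback.Processor(CallbackHandler())
        server = TServer.TThreadedServer(
            processor,
            TSocket.TServerSocket(port=port),
            TTransport.TBufferedTransportFactory(),
            TBinaryProtocol.TBinaryProtocolFactory())
        server.serve()

    # Each client runs its callback server in the background, then registers
    # its host:port with the main server over an ordinary Thrift call.
    threading.Thread(target=serve_callbacks, args=(9191,), daemon=True).start()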

TTransportException when using TFramedTransport

Submitted by 两盒软妹~` on 2019-12-04 12:47:01
I'm pretty puzzled by this issue. I have an Apache Thrift 0.9.0 client and server. The client code goes like this:

    this.transport = new TSocket(this.server, this.port);
    final TProtocol protocol = new TBinaryProtocol(this.transport);
    this.client = new ZKProtoService.Client(protocol);

This works fine. However, if I try to wrap the transport in a TFramedTransport

    this.transport = new TSocket(this.server, this.port);
    final TProtocol protocol = new TBinaryProtocol(new TFramedTransport(this.transport));
    this.client = new ZKProtoService.Client(protocol);

I get the following obscure (no explanation …
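Framing has to match on both ends: a TFramedTransport client can only talk to a server that also reads framed messages (for example Thrift's nonblocking servers), and a mismatch typically surfaces as exactly this kind of opaque TTransportException. A minimal Python sketch of a matched pair, with the port and service invented for illustration:

    # Framed transports must match on both sides (Python sketch).
    # Port 9090 and the commented-out processor are placeholders.
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol

    # Client side: wrap the socket in a framed transport, then open it.
    socket = TSocket.TSocket('localhost', 9090)
    transport = TTransport.TFramedTransport(socket)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    transport.open()   # open the framed wrapper, not the raw socket

    # Server side: hand the server a framed transport factory so it
    # expects the same 4-byte length prefix the client now sends:
    # server = TServer.TThreadedServer(
    #     processor, TSocket.TServerSocket(port=9090),
    #     TTransport.TFramedTransportFactory(),
    #     TBinaryProtocol.TBinaryProtocolFactory())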

Is it possible to use Apache Thrift on a regular web server?

Submitted by 一笑奈何 on 2019-12-04 11:22:11
Question: I already have a web server that I pay for, and I want to expose some services on it using Thrift and PHP. My question is: can I run a Thrift server using normal PHP, hosted on the default port (the same way web pages are served), instead of having a separate PHP application running on some funky obscure port? That way I wouldn't have to change the server configuration (which is something I'm not able to do even if I wanted to). Thanks. EDIT: maybe I should clarify a bit more. Once I've …
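Thrift does ship an HTTP transport, so the RPC can ride on an ordinary web port rather than a dedicated socket; in PHP this means using Thrift's PHP HTTP/stream classes behind the existing web server. As a language-neutral illustration, here is a rough Python sketch of a Thrift service exposed over plain HTTP via the library's THttpServer (the service module and port are assumptions):

    # Thrift over plain HTTP (Python sketch; PHP has analogous classes).
    # `myservice`/`MyService` is a placeholder generated module.
    from thrift.protocol import TBinaryProtocol
    from thrift.server import THttpServer
    from myservice import MyService        # hypothetical generated module

    class Handler:
        def ping(self):
            return 'pong'

    processor = MyService.Processor(Handler())
    # Serves Thrift messages carried in HTTP POST bodies on port 8080.
    server = THttpServer.THttpServer(
        processor, ('', 8080),
        TBinaryProtocol.TBinaryProtocolFactory())
    server.serve()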

Integrate HBase with PHP [closed]

Submitted by 大城市里の小女人 on 2019-12-04 09:50:15
Question: Closed. This question needs to be more focused; it is not currently accepting answers. Closed 5 years ago. I have installed HBase and now I am looking for a PHP library to integrate HBase with PHP. I have tried two libraries: first I tried to connect with Thrift, but was unable to do so; second I tried to connect with popHbase, but was also unable to do so. Can somebody provide me …

How do I insert a row with a TimeUUIDType column in Cassandra?

Submitted by 为君一笑 on 2019-12-04 08:56:43
In Cassandra, I have the following column family: <ColumnFamily CompareWith="TimeUUIDType" Name="Posts"/> I'm trying to insert a record into it as follows, using a C++ function generated by Thrift:

    ColumnPath new_col;
    new_col.__isset.column = true; /* this is required! */
    new_col.column_family.assign("Posts");
    new_col.super_column.assign("");
    new_col.column.assign("1968ec4a-2a73-11df-9aca-00012e27a270");
    client.insert("Keyspace1", "somekey", new_col, "Random Value", 1234, ONE);

However, I'm getting the following error: "UUIDs must be exactly 16 bytes". I've even tried the Cassandra CLI …
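The error is about representation: a TimeUUIDType column name must be the 16 raw bytes of the UUID, while the snippet assigns the 36-character hex string form. A quick Python illustration of the difference (the fix in the C++ code would be the analogous raw-byte conversion before calling assign):

    # TimeUUID columns want 16 raw bytes, not the 36-char string form.
    import uuid

    u = uuid.uuid1()              # a version-1 (time-based) UUID
    print(len(str(u)))            # 36 -- what the failing code sends
    print(len(u.bytes))           # 16 -- what Cassandra expects

    # Parsing an existing string form and getting its raw bytes:
    raw = uuid.UUID('1968ec4a-2a73-11df-9aca-00012e27a270').bytes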

A Survey of the Protobuf and Thrift Data Description Languages

Submitted by 眉间皱痕 on 2019-12-04 08:28:37
Editor's note: Because JSON is transmitted as text, sending float parameters in particular makes the transferred data volume balloon, so a binary, cross-language transport format is needed. The candidates the editor found are Google Protocol Buffers, Apache Thrift, and Apache Avro.

Google Protocol Buffers. Official site: Google Protocol Buffers (blocked in mainland China, so a proxy may be required). Git repository: protobuf git.

Installation: according to the README, the installation packages are listed under "Protocol Compiler Installation". Since protobuf-all-x.x.x.tar.gz would have to be compiled from source, the author downloaded protoc-3.5.1-win32.zip instead. After unpacking it, you can see the following structure:

    │  readme.txt
    │
    ├─bin
    │      protoc.exe
    │
    └─include
        └─google
            └─protobuf
                │  any.proto
                │  api.proto
                │  descriptor.proto
                │  duration.proto
                │  empty.proto
                │  field_mask.proto
                │  source_context.proto
                │  struct.proto
                │  timestamp.proto
                │  type.proto
                │  wrappers.proto
                │
                └─compiler
                        plugin.proto
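The size argument in the note is easy to verify: a double rendered as JSON text costs one byte per character, while a fixed binary encoding costs 8 bytes per value. A small self-contained Python comparison, using the standard struct module as a stand-in for a real Protobuf/Thrift/Avro encoding (the ratios are similar):

    # Why binary encodings shrink float-heavy payloads (Python sketch).
    # struct.pack stands in for Protobuf/Thrift/Avro here.
    import json
    import struct

    values = [0.123456789012345] * 1000

    as_json = json.dumps(values).encode('utf-8')
    as_binary = struct.pack('<1000d', *values)   # 1000 little-endian doubles

    print(len(as_json))     # roughly 18 KB of text
    print(len(as_binary))   # exactly 8000 bytes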

Communication Between Microservices

Submitted by 烈酒焚心 on 2019-12-04 07:26:37
Say you have microservices A, B, and C, which all currently communicate over HTTP. Say service A sends a request to service B, which results in a response. The data returned in that response must then be sent to service C for some processing before finally being returned to service A, which can then display the results on the web page. I know that latency is an inherent issue with a microservice architecture, and I was wondering: what are some common ways of reducing this latency? I have also been doing some reading on how Apache Thrift and RPCs can help with this. Can anyone …
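For concreteness, one reading of the call chain described above looks like the sketch below; all hostnames and endpoints are invented, and the point is only to show where the serial HTTP hops, and hence the latency, sit:

    # The A -> B -> C -> A flow from the question, as a Python sketch.
    # URLs are placeholders; `requests` is used for brevity.
    import requests

    def handle_request_in_service_a(payload):
        # Hop 1: A asks B for data.
        b_response = requests.post('http://service-b.internal/data',
                                   json=payload)

        # Hop 2: A forwards B's result to C for processing.
        c_response = requests.post('http://service-c.internal/process',
                                   json=b_response.json())

        # A renders C's result; each serial hop adds its own round trip,
        # which is the overhead an RPC framework like Thrift tries to cheapen.
        return c_response.json()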

Big Data Tutorial (11.8): Hive 1.2.2 Introduction & First Look

Submitted by 风格不统一 on 2019-12-04 06:45:09
The previous article covered installing Hive 1.2.2; in this post the author shares first impressions of Hive and how to use the Hive server and client.

1. How Hive relates to Hadoop
Hive stores its data in HDFS and queries it with MapReduce.

2. Hive versus a traditional database

    Aspect           Hive                        RDBMS
    Query language   HQL                         SQL
    Data storage     HDFS                        Raw device or local FS
    Execution        MapReduce, Spark, etc.      Executor engine
    Latency          High                        Low
    Data scale       Large                       Small
    Indexing         Bitmap indexes since 0.8    Full index support

Summary: Hive has the outward appearance of a SQL database (a SQL command line, SQL syntax, and so on), but the use cases are completely different: Hive is only suited to batch statistics and analysis over large data sets.

3. How Hive stores data
a. All Hive data is stored in HDFS; there is no dedicated storage format (Text, SequenceFile, ParquetFile, RCFile, and others are supported).
b. You only need to tell Hive the column and row delimiters when creating a table, and Hive can then parse the data (see the sketch after this list).
c. Hive includes the following data models: DB, Table, External Table, Partition, and Bucket.
   - db: appears in HDFS as a folder under the ${hive.metastore.warehouse.dir} directory
   - table: appears in HDFS as a folder under its db's directory …
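Point (b) above is the key practical detail: the delimiters are declared once, at CREATE TABLE time. A small sketch via PyHive against a local HiveServer2, with the delimiter clause inline in the HQL (host, port, table name, and schema are all assumptions for illustration):

    # Declaring row/column delimiters at table creation (PyHive sketch).
    # Assumes HiveServer2 on localhost:10000; the table is an example.
    from pyhive import hive

    conn = hive.Connection(host='localhost', port=10000)
    cur = conn.cursor()

    # Tell Hive the column separator (',') and row separator ('\n') once;
    # after that it can parse plain text files placed in the table's dir.
    cur.execute(r"""
        CREATE TABLE IF NOT EXISTS t_user (id INT, name STRING)
        ROW FORMAT DELIMITED
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        STORED AS TEXTFILE
    """)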