protocol-buffers

Google Protocol Buffers and HTTP

◇◆丶佛笑我妖孽 submitted on 2019-11-28 16:05:42
I'm refactoring a legacy C++ system to SOA using gSoap. We have some performance issues (very big XMLs), so my lead asked me to take a look at protocol buffers. I did, and it looks very cool (we need C++ and Java support). However, protocol buffers are a solution just for serialization, and now I need to send it to the Java front-end. What should I use, from the C++ and Java perspective, to send that serialized data over HTTP (just the internal network)? PS. Another guy is trying to speed up our gSoap solution; I'm interested in protocol buffers only. You can certainly send even a binary payload with an HTTP request
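
One common way to move the serialized bytes from the C++ backend to the Java front-end is a plain HTTP POST with a binary body. Below is a minimal Java-side sketch; the endpoint URL and the application/x-protobuf content type are assumptions of this example, not anything mandated by protocol buffers:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ProtobufHttpPoster {
    // Posts an already-serialized protocol buffer (e.g. msg.toByteArray())
    // as the raw body of an HTTP request and returns the status code.
    public static int post(String endpoint, byte[] serializedMessage) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-protobuf");
        conn.setFixedLengthStreamingMode(serializedMessage.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(serializedMessage);
        }
        return conn.getResponseCode();
    }
}

On the C++ side any HTTP client (libcurl, for example) can send the result of msg.SerializeAsString() the same way.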

How to write a high performance Netty Client

烂漫一生 submitted on 2019-11-28 16:01:25
I want an extremely efficient TCP client to send Google protocol buffer messages. I have been using the Netty library to develop a server/client. In tests the server seems to be able to handle up to 500k transactions per second without too many problems, but the client tends to peak around 180k transactions per second. I have based my client on the examples provided in the Netty documentation, but the difference is that I just want to send the message and forget; I don't want a response (which most of the examples get). Is there any way to optimize my client so that I can achieve a higher TPS?
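
For reference, a fire-and-forget Netty 4 client only needs outbound handlers in its pipeline. The sketch below is one possible setup; the varint length framing is an assumption about the server's wire format, and host/port are placeholders:

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

public class FireAndForgetClient {
    public static Channel connect(String host, int port) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup();
        Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.TCP_NODELAY, true)   // disable Nagle to cut per-message latency
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // Outbound-only pipeline: length-prefix each protobuf message;
                        // no decoder is installed because no response is expected.
                        ch.pipeline().addLast(new ProtobufVarint32LengthFieldPrepender(),
                                              new ProtobufEncoder());
                    }
                });
        return b.connect(host, port).sync().channel();
    }
}

The usual lever for a write-only client is batching: queue many messages with channel.write(msg) and call channel.flush() once per batch, so several messages share a single syscall.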

How fast or lightweight is Protocol Buffer?

邮差的信 submitted on 2019-11-28 15:54:11
Is Protocol Buffer for .NET going to be more lightweight/faster than Remoting (SerializationFormat.Binary)? Will there be first-class support for it in language/framework terms, i.e. is it handled transparently as with Remoting/WebServices? I very much doubt that it will ever have direct language support or even framework support - it's the kind of thing which is handled perfectly well with 3rd-party libraries. My own port of the Java code is explicit - you have to call methods to serialize/deserialize. (There are RPC stubs which will automatically serialize/deserialize, but no RPC
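
For comparison, the official Java API is equally explicit. A small sketch using only the generic Message/Parser interfaces (no generated class needed for the helpers themselves), just to illustrate what "call methods to serialize/deserialize" looks like:

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;
import com.google.protobuf.Parser;

public class ExplicitSerialization {
    // Serialization is an explicit call on the message object...
    static byte[] toBytes(Message msg) {
        return msg.toByteArray();
    }
    // ...and deserialization is an explicit call on the generated message's parser.
    static <T extends Message> T fromBytes(Parser<T> parser, byte[] data)
            throws InvalidProtocolBufferException {
        return parser.parseFrom(data);
    }
}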

Thrift vs Protocol buffers [duplicate]

泪湿孤枕 submitted on 2019-11-28 15:25:43
This question already has an answer here: Biggest differences of Thrift vs Protocol Buffers? I've been using PB for quite a while now, but Thrift has constantly been at the back of my mind. The primary advantages of Thrift, as I see them, are: native collections (i.e. vector, set, etc.) vs PB's repeated fields, which provide similar but not quite identical functionality (no iterators unless you dig into RepeatedField, which the documentation states "shouldn't be required in most cases"); a decent RPC implementation provided, instead of just hooks to plug your own in; more officially supported

Performance comparison of Thrift, Protocol Buffers, JSON, EJB, other?

久未见 submitted on 2019-11-28 15:14:41
We're looking into transport/protocol solutions and were about to do various performance tests, so I thought I'd check with the community to see if they've already done this: has anyone done server performance tests for simple echo services, as well as serialization/deserialization for various message sizes, comparing EJB3, Thrift, and Protocol Buffers on Linux? The primary languages will be Java, C/C++, Python, and PHP. Update: I'm still very interested in this; if anyone has done any further benchmarks, please let me know. Also, a very interesting benchmark showing compressed JSON performing similar /

What are the key differences between Apache Thrift, Google Protocol Buffers, MessagePack, ASN.1 and Apache Avro?

我们两清 submitted on 2019-11-28 15:06:36
All of these provide binary serialization, RPC frameworks and an IDL. I'm interested in the key differences between them and their characteristics (performance, ease of use, programming language support). If you know any other similar technologies, please mention them in an answer. JUST MY correct OPINION ASN.1 is an ISO/IEC standard. It has a very readable source language and a variety of back-ends, both binary and human-readable. Being an international standard (and an old one at that!) the source language is a bit kitchen-sinkish (in about the same way that the Atlantic Ocean is a bit wet) but it is

google protocol buffers vs json vs XML [closed]

夙愿已清 submitted on 2019-11-28 14:56:16
I would like to know the merits & demerits of Google Protocol Buffers, JSON and XML. I want to implement one common framework for two applications, one in Perl and the second in Java. So I would like to create a common service which can be used by both technologies, i.e. Perl & Java. Both are web applications. Please share your thoughts & suggestions on this. I have seen many links on Google, but all have mixed opinions. JSON: human readable/editable, can be parsed without knowing the schema in advance, excellent browser support, less verbose than XML. XML: human readable/editable, can be parsed without

how to write a valid decoding file based on a given .proto, reading from a .pb

旧城冷巷雨未停 submitted on 2019-11-28 09:57:16
Question: Based on the answer to this question, I'm thinking that I've provided my .pb file with a "faulty decoder". This is the data I'm trying to decode. This is my .proto file. Based on the ListPeople.java example provided in the Java tutorial documentation, I tried to write something similar to start picking apart that data. I wrote this:

import cc.refectorie.proj.relation.protobuf.DocumentProtos.Document;
import cc.refectorie.proj.relation.protobuf.DocumentProtos.Document.Sentence;
import java.io
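
If the .pb file holds a stream of length-delimited Document messages (a common convention for corpora like this, but an assumption here), the generated parseDelimitedFrom method can read them back one at a time; a minimal sketch:

import cc.refectorie.proj.relation.protobuf.DocumentProtos.Document;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadDocuments {
    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            Document doc;
            // parseDelimitedFrom returns null at end of stream.
            while ((doc = Document.parseDelimitedFrom(in)) != null) {
                System.out.println(doc);
            }
        }
    }
}

If the file instead contains a single message, Document.parseFrom(in) would be the call to try.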

What's the best way to represent System.Decimal in Protocol Buffers?

a 夏天 submitted on 2019-11-28 09:56:11
Following on from this question, what would be the best way to represent a System.Decimal object in a Protocol Buffer? Well, protobuf-net will simply handle this for you; it runs off the properties of types, and has full support for decimal. Since there is no direct way of expressing decimal in proto, it won't (currently) generate a decimal property from a ".proto" file, but it would be a nice tweak to recognise some common type ("BCL.Decimal" or similar) and interpret it as decimal. As for representing it - I had a discussion document on this (now out of date, I suspect) in the protobuf-net
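
Absent first-class support, one straightforward representation (an assumption for illustration, not the BCL.Decimal layout mentioned above) is an unscaled integer plus a scale, e.g. a hypothetical message like { bytes unscaled = 1; int32 scale = 2; }. In Java terms, where BigDecimal plays the role of System.Decimal:

import java.math.BigDecimal;
import java.math.BigInteger;

public class DecimalCodec {
    // Convert the decimal to the two fields of the hypothetical message.
    static byte[] unscaledBytes(BigDecimal value) {
        return value.unscaledValue().toByteArray();
    }
    // Rebuild the decimal from those two fields.
    static BigDecimal fromParts(byte[] unscaled, int scale) {
        return new BigDecimal(new BigInteger(unscaled), scale);
    }
}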

Connect input and output tensors of two different graphs in TensorFlow

无人久伴 submitted on 2019-11-28 09:24:52
I have 2 ProtoBuf files. I currently load and forward-pass each of them separately, by calling out1 = session.run(graph1out, feed_dict={graph1inp: inp1}) followed by final = session.run(graph2out, feed_dict={graph2inp: out1}), where graph1inp and graph1out are the input node and output node of graph 1, and similar terminology applies to graph 2. Now I want to connect graph1out with graph2inp, such that I only have to run graph2out while feeding graph1inp with inp1. In other words, I want to connect the input and output tensors of the 2 involved graphs in such a way that one run is sufficient to run inference on both