protocol-buffers

Loading SavedModel is a lot slower than loading a tf.train.Saver checkpoint

不问归期 submitted on 2019-12-02 18:00:41
I changed from tf.train.Saver to the SavedModel format, which surprisingly means loading my model from disk is a lot slower (instead of a couple of seconds it takes minutes). Why is this and what can I do to load the model faster? I used to do this:

    # Save model
    saver = tf.train.Saver()
    save_path = saver.save(session, model_path)

    # Load model
    saver = tf.train.import_meta_graph(model_path + '.meta')
    saver.restore(session, model_path)

But now I do this:

    # Save model
    builder = tf.saved_model.builder.SavedModelBuilder(model_path)
    builder.add_meta_graph_and_variables(session, [tf.saved_model.tag

How to design for a future additional enum value in protocol buffers?

倖福魔咒の submitted on 2019-12-02 17:40:50
One of the attractive features of protocol buffers is that it allows you to extend message definitions without breaking code that uses the older definition. In the case of an enum, according to the documentation:

    a field with an enum type can only have one of a specified set of constants as its value (if you try to provide a different value, the parser will treat it like an unknown field)

Therefore, if you extend the enum and use the new value, a field of that type in old code will be undefined or have its default value, if there is one. What is a good strategy to deal with this, knowing
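One common strategy is to reserve an explicit "unknown" sentinel in the enum from day one and fold any unrecognized wire value onto it at the decoding boundary. A minimal sketch of that idea in plain Python (the enum names are illustrative, not from the question; real protobuf runtimes handle the wire-level part for you):

```python
from enum import IntEnum

# Hypothetical enum mirroring a .proto definition; names are illustrative.
class Status(IntEnum):
    UNKNOWN = 0    # explicit sentinel; 0 is also the proto3 default
    ACTIVE = 1
    SUSPENDED = 2

def decode_status(wire_value: int) -> Status:
    """Map a raw wire value to Status, folding values this build does not
    recognize (e.g. ones added by a newer schema) onto the sentinel."""
    try:
        return Status(wire_value)
    except ValueError:
        return Status.UNKNOWN

# Old code receiving a value added in a newer schema version degrades
# gracefully instead of failing:
decode_status(3)  # -> Status.UNKNOWN
```

The benefit is that every switch over the enum has a well-defined branch to land in when it meets a value from the future, rather than each call site improvising its own fallback.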

Protocol buffers in C# projects using protobuf-net - best practices for code generation

旧时模样 submitted on 2019-12-02 16:48:42
I'm trying to use protobuf in a C# project, using protobuf-net, and am wondering what the best way is to organise this into a Visual Studio project structure. When manually using the protogen tool to generate code into C#, life seems easy, but it doesn't feel right. I'd like the .proto file to be considered the primary source-code file, generating C# files as a by-product, but before the C# compiler gets involved. The options seem to be:

    - a custom tool for .proto files (although I can't see where to start there)
    - a pre-build step (calling protogen, or a batch file which does that)

I have

Set maximum size of protobuf object

泪湿孤枕 submitted on 2019-12-02 16:29:52
Question: I have a protobuf object, let's call it Msg, with some repeated fields. I want to serialize it and send it, but there is a maximum size limit for each packet I can send. So is it possible to split this object into several smaller objects? Or perhaps to set a maximum size for the serialized output?

Answer 1: As described in the documentation, protobuf does not keep track of where a message starts and stops, so you'll have to do that yourself. The output of the protobuf serializer is just a serialized buffer. This means
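Since protobuf leaves framing to the caller, the usual approach is to length-prefix each serialized chunk yourself, the same idea behind writeDelimitedTo/parseDelimitedFrom in the protobuf Java API. A stdlib-only sketch of that framing (plain bytes stand in here for serialized Msg payloads, so no protobuf dependency is needed):

```python
import io
import struct

def write_delimited(stream, payload: bytes) -> None:
    # Prefix each serialized message with a fixed 4-byte big-endian length.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def read_delimited(stream):
    # Yield payloads back until the stream is exhausted.
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return
        (length,) = struct.unpack(">I", header)
        yield stream.read(length)

# Round-trip three chunks through one buffer, as if each were a
# separately serialized sub-message kept under the packet limit.
buf = io.BytesIO()
for chunk in [b"part-1", b"part-2", b"part-3"]:
    write_delimited(buf, chunk)
buf.seek(0)
assert list(read_delimited(buf)) == [b"part-1", b"part-2", b"part-3"]
```

With framing in place, "splitting" Msg usually means slicing its repeated fields across several smaller messages of the same type and sending each one framed; the receiver reads frame by frame and merges.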

Can I define a grpc call with a null request or response?

倾然丶 夕夏残阳落幕 submitted on 2019-12-02 16:27:27
Does the rpc syntax in proto3 allow null requests or responses? E.g. I want the equivalent of the following:

    rpc Logout;
    rpc Status returns (Status);
    rpc Log (LogData);

Or should I just create a null type?

    message Null {};

    rpc Logout (Null) returns (Null);
    rpc Status (Null) returns (Status);
    rpc Log (LogData) returns (Null);

Kenton's comment below is sound advice:

    ... we as developers are really bad at guessing what we might want in the future. So I recommend being safe by always defining custom params and results types for every method, even if they are empty.

Answering my own question:

Integrate Protocol Buffers into Maven2 build

妖精的绣舞 submitted on 2019-12-02 16:05:02
I'm experimenting with Protocol Buffers in an existing, fairly vanilla Maven 2 project. Currently, I invoke a shell script every time I need to update my generated sources. This is obviously a hassle, as I would like the sources to be generated automatically before each build, hopefully without resorting to shameful hackery. So, my question is two-fold: Long shot: is there a "Protocol Buffers plugin" for Maven 2 that can achieve the above in an automagic way? There's a branch on Google Code whose author appears to have taken a shot at implementing such a plugin. Unfortunately, it hasn't passed
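For reference, a later, commonly used option is to bind protoc generation to the build lifecycle with the protobuf-maven-plugin. A configuration sketch (this plugin postdates the question and targets Maven 3; the version number and protoc path below are assumptions to adjust for your environment, not details from the question):

```xml
<!-- Sketch only: coordinates, version, and protoc path are assumptions. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.6.1</version>
  <configuration>
    <!-- Path to a locally installed protoc binary -->
    <protocExecutable>/usr/local/bin/protoc</protocExecutable>
  </configuration>
  <executions>
    <execution>
      <goals>
        <!-- Runs protoc on src/main/proto before Java compilation -->
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The generated sources land under target/ and are added to the compile source roots automatically, so .proto files stay the primary artifacts in version control.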

Unable to build protobuf to go endpoint

痞子三分冷 submitted on 2019-12-02 15:43:44
Using protobuf version 2.6.1 (which I installed via Homebrew), I am trying to run:

    $ protoc --go_out=../cloud/ *.proto

I keep receiving this error:

    protoc-gen-go: program not found or is not executable
    --go_out: protoc-gen-go: Plugin failed with status code 1.

I have protoc-gen-go installed in my Go path. Has anyone else had this issue?

protoc-gen-go needs to be in your shell path, i.e. one of the directories listed in the PATH environment variable, which is different from the Go path. You can test this by simply typing protoc-gen-go at the command line: if it says "command not found" (or
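A minimal sketch of the fix: append Go's bin directory to the shell PATH so protoc can locate the plugin. The directory below is the conventional default ($GOPATH/bin, falling back to ~/go/bin); adjust it to wherever go get/go install actually placed the binary on your machine:

```shell
# Add Go's bin directory (where protoc-gen-go is installed) to PATH.
GOBIN_DIR="${GOPATH:-$HOME/go}/bin"
export PATH="$PATH:$GOBIN_DIR"

# protoc resolves its *-gen-* plugins via PATH, so this lookup must
# now succeed for `protoc --go_out=...` to work.
command -v protoc-gen-go >/dev/null 2>&1 \
  && echo "protoc-gen-go found" \
  || echo "protoc-gen-go still missing from PATH"
```

To make the change permanent, put the export line in your shell profile (e.g. ~/.bash_profile or ~/.zshrc).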

boost serialization vs google protocol buffers? [closed]

给你一囗甜甜゛ submitted on 2019-12-02 15:42:33
Does anyone with experience of these libraries have any comment on which one they preferred? Were there any performance differences or difficulties in use? Magnus Österlind: I've played around a little with both systems, nothing serious, just some simple hackish stuff, but I felt that there's a real difference in how you're supposed to use the libraries. With boost::serialization, you write your own structs/classes first and then add the archiving methods, so you're still left with some pretty "slim" classes that can be used as data members, inherited, whatever. With protocol buffers,

Why isn't Hadoop implemented using MPI?

那年仲夏 submitted on 2019-12-02 14:18:13
Correct me if I'm wrong, but my understanding is that Hadoop does not use MPI for communication between different nodes. What are the technical reasons for this? I could hazard a few guesses, but I do not know enough of how MPI is implemented "under the hood" to know whether or not I'm right. Come to think of it, I'm not entirely familiar with Hadoop's internals either. I understand the framework at a conceptual level (map/combine/shuffle/reduce and how that works at a high level) but I don't know the nitty gritty implementation details. I've always assumed Hadoop was transmitting serialized

How to bring a gRPC defined API to the web browser

本小妞迷上赌 submitted on 2019-12-02 14:04:01
We want to build a JavaScript/HTML GUI for our gRPC microservices. Since gRPC is not supported on the browser side, we thought of using WebSockets to connect to a Node.js server, which calls the target service via gRPC. We are struggling to find an elegant solution for this, especially since we use gRPC streams to push events between our microservices. It seems that we need a second RPC system just to communicate between the front end and the Node.js server, which is a lot of overhead and additional code that must be maintained. Does anyone have experience doing something like this or