protocol-buffers

Protocol buffers - unique numbered tag - clarification?

Submitted by 匆匆过客 on 2021-01-20 14:29:48
Question: I'm using protocol buffers and everything is working fine, except for one thing I don't understand: why do I need the numbered tags in the proto file?

```proto
message SearchRequest {
  required string query = 1;
  optional int32 page_number = 2;
  optional int32 result_per_page = 3;
}
```

Sure, I've read the docs: "As you can see, each field in the message definition has a unique numbered tag. These tags are used to identify your fields in the message binary format, and should not be changed once your …"
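The short reason the tags matter: the binary wire format identifies every field by its number (plus a wire type), not by its name. A minimal sketch in pure Python (no protobuf library) of how the field number and wire type combine into the key that precedes each value on the wire — the literal bytes here assume field numbers below 16, which fit in a single key byte:

```python
def field_key(field_number: int, wire_type: int) -> bytes:
    # Every field on the wire starts with a varint key:
    # (field_number << 3) | wire_type
    return bytes([(field_number << 3) | wire_type])

# wire type 0 = varint (int32 etc.), 2 = length-delimited (string, bytes)
VARINT, LEN = 0, 2

# Encode `query = "abc"` (field 1, length-delimited) followed by
# `page_number = 5` (field 2, varint) for the SearchRequest above.
encoded = (field_key(1, LEN) + bytes([3]) + b"abc"
           + field_key(2, VARINT) + bytes([5]))
print(encoded.hex())  # 0a036162631005
```

Because only the numbers (not the names) are written out, renaming a field is safe, but renumbering one silently changes which field old serialized data decodes into — hence "should not be changed once your message type is in use".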

Tensorflow: Is it possible to store TF record sequence examples as float16

Submitted by 喜你入骨 on 2021-01-06 07:27:45
Question: Is it possible to store sequence examples in TensorFlow as float16 instead of regular float? We can live with 16-bit precision, and it would reduce the size of the data files we use, saving us ~200 GB.

Answer 1: I think the snippet below does just that.

```python
import tensorflow as tf
import numpy as np

# generate the data
data_np = np.array(np.random.rand(10), dtype=np.float16)

with tf.python_io.TFRecordWriter('/tmp/data.tfrecord') as writer:
    # encode the data in a dictionary of features
    data = {'raw': tf…
```
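The size saving itself can be sketched without TensorFlow at all: serialize the float16 array to raw bytes (which is ultimately what a bytes feature in a TFRecord stores) and recover it with `np.frombuffer` on the reading side. This is a numpy-only illustration of the idea; the `tf.train` feature-dict wrapping from the answer above is omitted.

```python
import numpy as np

# sample data at half precision, as in the answer's data_np
data_np = np.array(np.random.rand(10), dtype=np.float16)

# the raw bytes that would go into the record's bytes feature
raw = data_np.tobytes()

# reading side: recover the array from the stored bytes
restored = np.frombuffer(raw, dtype=np.float16)

assert np.array_equal(data_np, restored)
# 2 bytes per float16 value vs 4 per float32: half the storage
assert len(raw) == 10 * 2
```

The only thing the reader has to know out-of-band is the dtype (and shape, for multidimensional data), since raw bytes carry neither.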

Getting module 'google.protobuf.descriptor_pool' has no attribute 'Default' in my python script

Submitted by 爱⌒轻易说出口 on 2021-01-02 05:42:18
Question: I am new to Python and was using a script written by someone else. It ran fine on a different PC; I just had to install a couple of packages, including pip3, google-cloud, google-cloud-bigquery, and pandas. Now that I have installed the same packages on another PC, I am unable to run the script. It first showed the following error:

```
module = 'google.protobuf.descriptor_pb2'
TypeError: expected bytes, Descriptor found
```

However, when I purged/re-installed/updated packages and also …

Do I need to delete objects passed to google protocol buffer (protobuf)?

Submitted by 谁说我不能喝 on 2021-01-01 06:51:16
Question: I have simple messages:

```proto
message SmallValue {
  int32 val = 1;
}

message Value {
  int32 val1 = 1;
  int32 val2 = 2;
  SmallValue val3 = 3;
}

message SendMessage {
  int32 id = 1;
  oneof message {
    Value value = 2;
  }
}
```

My piece of code:

```cpp
// create a new pointer for smallValue
SmallValue* smallValue = new SmallValue();
smallValue->set_val(3);

// create a new object value and set_allocated_val3
Value value;
value.set_val1(1);
value.set_val2(2);
value.set_allocated_val3(smallValue);

// create a new object message and …
```

Is “google/protobuf/struct.proto” the best way to send dynamic JSON over GRPC?

Submitted by 限于喜欢 on 2020-12-29 05:49:31
Question: I have written a simple gRPC server and a client to call the server (both in Go). Please tell me if using golang/protobuf/struct is the best way to send dynamic JSON with gRPC. In the example below, I was previously creating Details as a map[string]interface{}, serializing it, sending it in protoMessage as bytes, and de-serializing the message on the server side. Is that the best/most efficient way to do it, or should I define Details as a struct in my proto file? Below is User …
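The bytes-based workaround the asker describes (serialize the dynamic map yourself, ship it in a bytes field, decode on the other side) can be sketched in Python with the standard json module. The names `details` and `payload` are illustrative, not from the original code; `google.protobuf.Struct` would replace the manual JSON step with a typed well-known message at the cost of slightly larger wire size and an extra conversion.

```python
import json

# dynamic payload, analogous to Go's map[string]interface{}
details = {"city": "Mountain View", "zip": 94043, "tags": ["a", "b"]}

# client side: serialize to bytes before placing it in the proto message
payload = json.dumps(details, sort_keys=True).encode("utf-8")

# server side: decode the bytes field back into a dynamic structure
decoded = json.loads(payload.decode("utf-8"))

assert decoded == details
```

The trade-off is that opaque bytes are invisible to protobuf tooling (no reflection, no JSON mapping, no cross-language typing), which is exactly the gap `Struct` is meant to fill.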
