google-cloud-datastore

Datastore and task queue downtime correlation

為{幸葍}努か submitted on 2019-12-14 02:28:18
Question: What correlation is there between datastore and task queue downtime? (I'd like to use the task queue to defer some operations in the case of datastore downtime.)

Answer 1: The Task Queue should generally be more durable than the datastore, as it is a simpler system, but there is no guarantee that they can't both experience a simultaneous outage.

Source: https://stackoverflow.com/questions/3800252/datastore-and-task-queue-downtime-correlation
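A minimal sketch of the deferral idea from the question, using the Python runtime's deferred library; the model and function names are hypothetical:

    from google.appengine.ext import deferred, ndb

    class AuditLog(ndb.Model):  # hypothetical model
        message = ndb.StringProperty()

    def write_log(message):
        # Runs inside a task: if the datastore raises here, the task
        # queue retries the call later with exponential backoff.
        AuditLog(message=message).put()

    def handle_request(message):
        # Enqueue instead of writing directly, so a datastore outage
        # only delays the write rather than failing the request.
        deferred.defer(write_log, message)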

Have you experienced DataStore downtime in AppEngine? What are the odds?

末鹿安然 submitted on 2019-12-14 02:22:43
Question: Google has started to use the High Replication Datastore (HRD) as the default for new applications. HRD, from the docs: "The HRD is a highly available, highly reliable storage solution. It remains available for reads and writes during planned downtime and is extremely resilient in the face of catastrophic failure—but it costs more than the master/slave option." M/S, from the docs: "your data may be temporarily unavailable during data center issues or planned downtime." Now, have you ever experienced…

Contention issues due to indexing “_expires” property - Sessions on Google App Engine [Java]

前提是你 submitted on 2019-12-14 00:54:46
Question: I've noticed that the _expires property of _ah_SESSION is indexed. This surely comes in handy when querying for expired sessions, but there seems to be a downside. The _expires property contains monotonically increasing values, which means that index records for this property will most likely end up on a single tablet server. I am concerned that such a tablet server could easily be overloaded by applications that update session data often. Is there a way of telling App Engine NOT to index…
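For app-defined models, this kind of hotspotting is avoided by opting the property out of indexing; a Python ndb sketch of that switch (the question itself concerns the built-in _ah_SESSION kind, which does not expose it):

    from google.appengine.ext import ndb

    class MySession(ndb.Model):  # hypothetical stand-in for _ah_SESSION
        data = ndb.BlobProperty()
        # indexed=False keeps the monotonically increasing timestamps
        # out of the index, so writes don't pile onto one tablet server.
        expires = ndb.DateTimeProperty(indexed=False)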

Tree structures in a NoSQL database

怎甘沉沦 submitted on 2019-12-14 00:14:36
Question: I'm developing an application for Google App Engine which uses BigTable for its datastore. It's an application about writing a story collaboratively. It's a very simple hobby project that I'm working on just for fun. It's open source and you can see it here: http://story.multifarce.com/ The idea is that anyone can write a paragraph, which then needs to be validated by two other people. A story can also be branched at any paragraph, so that another version of the story can continue in another…
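One way to model such a branching story in the datastore is a simple parent-reference tree; a hedged sketch with hypothetical ndb models, not the project's actual schema:

    from google.appengine.ext import ndb

    class Paragraph(ndb.Model):  # hypothetical schema
        text = ndb.StringProperty()
        # Each paragraph points at the one it continues; a branch is
        # simply two paragraphs sharing the same parent reference.
        parent_paragraph = ndb.KeyProperty(kind='Paragraph')
        approvals = ndb.IntegerProperty(default=0)

    def branches_of(paragraph_key):
        # All continuations of a paragraph, i.e. the story's branches.
        return Paragraph.query(
            Paragraph.parent_paragraph == paragraph_key).fetch()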

DatastoreRepository able to save objects, but all find methods throw a null pointer exception

柔情痞子 submitted on 2019-12-13 19:12:27
Question: I have been working with a Spring app, using GCP Datastore as our storage solution. In order to streamline the code, I am looking into using the Spring Datastore Repository (I have been doing it all the hard way). So I have created a new project to experiment, using the existing datastore we have set up. I can get the .save() method working fine, but the find methods do not work. I have tried findAll, findByID and even the count method of the repository to try to get around it. Running it in…

How do I refresh an NDB entity from the datastore?

微笑、不失礼 submitted on 2019-12-13 18:37:25
Question: I'd like to be able to assert in tests that my code called Model.put() for the entities that were modified. Unfortunately, there seems to be some caching going on, such that this code:

    from google.appengine.ext import ndb

    class MyModel(ndb.Model):
        name = ndb.StringProperty(indexed=True)
        text = ndb.StringProperty()

    def update_entity(id, text):
        entity = MyModel.get_by_id(id)
        entity.text = text
        # This is where entity.put() should happen but doesn't

passes this test:

    def test_updates_entity_in_datastore…
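One way to observe the datastore's actual state in such a test is to bypass ndb's in-context cache and memcache on the read; a sketch assuming the MyModel class above:

    # Re-read the entity straight from the datastore; with the caches
    # skipped, a change that was never put() will not be visible.
    fresh = ndb.Key(MyModel, some_id).get(use_cache=False,
                                          use_memcache=False)

    # Alternatively, drop everything ndb has cached for this request:
    ndb.get_context().clear_cache()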

Strongly consistent queries for root entities in GAE?

岁酱吖の submitted on 2019-12-13 16:48:51
Question: I'd like some advice on the best way to do a strongly consistent read/write in Google App Engine. My data is stored in a class like this:

    class UserGroupData(ndb.Model):
        users_in_group = ndb.StringProperty(repeated=True)
        data = ndb.StringProperty(repeated=True)

I want to write a safe update method for this data. As far as I understand, I need to avoid eventually consistent reads here, because they risk data loss. For example, the following code is unsafe because it uses a vanilla query, which…
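A sketch of the usual pattern: read the entity by key inside a transaction, which gives a strongly consistent read and serializes concurrent updates (the function and field usage are illustrative, not from the truncated question):

    from google.appengine.ext import ndb

    @ndb.transactional
    def add_user_to_group(group_id, user):
        # get_by_id is a lookup by key, which is strongly consistent,
        # unlike a vanilla query; the transaction retries on contention.
        group = UserGroupData.get_by_id(group_id)
        if user not in group.users_in_group:
            group.users_in_group.append(user)
            group.put()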

Google App Engine - “java.lang.IllegalArgumentException: datastore transaction or write too big.”

喜你入骨 submitted on 2019-12-13 16:19:42
Question: When calling DatastoreService.delete(keys) with 400 keys, I get this exception: java.lang.IllegalArgumentException: datastore transaction or write too big. I thought the limit on batch deletes was 500, so I am well under the limit. Am I missing something here? Thanks, Keyur

Answer 1: It looks like you're hitting the overall size limit for puts and deletes. You're right that batch puts and deletes have a limit of 500 entities, but there's also an overall size limit of roughly 10MB. I'm not sure if…
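The usual workaround is to split the call into smaller batches; a sketch of that idea in Python ndb rather than the Java DatastoreService from the question, with an arbitrary batch size:

    from google.appengine.ext import ndb

    def delete_in_batches(keys, batch_size=100):
        # Several smaller RPCs stay under both the 500-entity cap and
        # the ~10MB overall size limit per call.
        for i in range(0, len(keys), batch_size):
            ndb.delete_multi(keys[i:i + batch_size])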

Is it possible to set two fields as indexes on an entity in ndb?

旧时模样 submitted on 2019-12-13 16:14:44
Question: I am new to ndb and GAE and have a problem coming up with a good solution for setting indexes. Let's say we have a user model like this:

    class User(ndb.Model):
        name = ndb.StringProperty()
        email = ndb.StringProperty(required=True)
        fb_id = ndb.StringProperty()

Upon login, if I was going to check against the email address with a query, I believe this would be quite slow and inefficient; possibly it has to do a full table scan.

    q = User.query(User.email == EMAIL)
    user = q.fetch(1)

I believe it would…
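A common alternative (not part of the truncated question) is to make the e-mail address the entity's key id, which turns the login check into a fast, strongly consistent lookup by key; a sketch assuming the User model above, with a placeholder address:

    # Store the user keyed by e-mail...
    User(id='alice@example.com', name='Alice').put()

    # ...then fetch by key instead of running a query.
    user = User.get_by_id('alice@example.com')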

Reading a BlobstoreInputStream >= 1MB in size

拟墨画扇 submitted on 2019-12-13 15:36:16
Question: Reading more than 1MB of data from a BlobstoreInputStream will throw an IOException "Blob fetch size too large." You can use the ChainedBlobstoreInputStream class in the answer below to solve this problem.

Answer 1: I've created a simple wrapper class that solves this problem. You should be able to directly swap ChainedBlobstoreInputStream for BlobstoreInputStream. I haven't tested the mark(), markSupported() or reset() methods. You may use this code however you wish.

    package net.magicscroll…
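The same per-fetch limit exists in the Python API, where the bundled BlobReader already chains sub-1MB fetches internally, which is the idea the Java wrapper reimplements; a sketch, with the buffer size an arbitrary choice:

    from google.appengine.ext import blobstore

    def read_blob(blob_key):
        # BlobReader issues a series of fetches, each below the 1MB
        # per-call limit, so blobs of any size can be streamed.
        reader = blobstore.BlobReader(blob_key, buffer_size=524288)
        return reader.read()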