app-engine-ndb

Is it best to query by keys_only=True then get_multi or just full query?

Submitted by 五迷三道 on 2019-12-22 04:04:08
Question: I am using NDB with Python 2.7 with threadsafe mode turned on. I understand that querying for entities with NDB does not use the local cache or memcache but goes straight to the Datastore, unlike getting by key name. (The rest of the question may be redundant if this premise is incorrect.) Therefore, would a good paradigm be to query only with keys_only=True and then do a get_multi to obtain the full entities? The benefit would be that keys_only=True queries are much faster than keys_only
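The idea behind the pattern can be shown with a toy, pure-Python stand-in (this is a simulation, not real NDB code; the names `DATASTORE`, `CACHE`, and the helper functions are illustrative): a keys-only query returns keys cheaply, and the follow-up batched get can be served from cache where entities are already cached, only falling back to the datastore on a miss.

```python
# Toy simulation of the keys_only + get_multi pattern (not real NDB code).
DATASTORE = {"k1": {"name": "a"}, "k2": {"name": "b"}, "k3": {"name": "c"}}
CACHE = {"k1": {"name": "a"}}          # k1 is already cached locally

def keys_only_query():
    # A real keys-only query scans only the index, never the entities.
    return sorted(DATASTORE)

def get_multi(keys):
    hits, misses = [], []
    for k in keys:
        if k in CACHE:
            hits.append(CACHE[k])      # served from cache, no datastore read
        else:
            entity = DATASTORE[k]      # billed datastore read
            CACHE[k] = entity          # populate cache for next time
            misses.append(entity)
    return hits, misses

hits, misses = get_multi(keys_only_query())
print(len(hits), len(misses))          # 1 cache hit, 2 datastore reads
```

Whether this beats a single full query in practice depends on the cache hit rate: with a cold cache you pay for both the query and the gets.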

GAE Memcache Usage for NDB Seems Low

Submitted by 随声附和 on 2019-12-21 21:40:22
Question: I have a Google App Engine project with a ~40 GB database, and I'm getting poor read performance with NDB. I've noticed that my memcache size (as listed on the dashboard) is only about 2 MB. I would expect NDB to implicitly make more use of memcache to improve performance. Is there a way of debugging NDB's memcache usage? Answer 1: The question is rather poorly formulated -- there are a zillion reasons for poor read performance, and most are due to a poorly written app, but you don't tell us
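One generic way to debug cache effectiveness is to instrument reads with hit/miss counters and compute a hit rate. The sketch below uses plain dicts as stand-ins for memcache and the datastore (the class and method names are illustrative, not part of NDB):

```python
# Hypothetical instrumentation sketch: wrap entity reads with counters to
# estimate what fraction of gets are served by cache vs. the backing store.
class InstrumentedStore(object):
    def __init__(self, backend):
        self.backend = backend     # stand-in for the datastore
        self.cache = {}            # stand-in for memcache
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backend[key]
        self.cache[key] = value    # populate cache on miss
        return value

    def hit_rate(self):
        total = self.hits + self.misses
        return float(self.hits) / total if total else 0.0

store = InstrumentedStore({"a": 1, "b": 2})
for key in ["a", "b", "a", "a"]:
    store.get(key)
print(store.hit_rate())            # 0.5: 2 hits out of 4 gets
```

A low hit rate on a 40 GB dataset is expected when reads are spread uniformly: memcache only helps when the working set is small enough to be re-read.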

ndb and consistency: Why is this behavior happening in a query without a parent?

Submitted by 怎甘沉沦 on 2019-12-21 17:51:14
Question: I'm doing some work with Python and ndb and can't understand the behavior below. I'll post the cases and the code:

models.py:

class Reference(ndb.Model):
    kind = ndb.StringProperty(required=True)
    created_at = ndb.DateTimeProperty(auto_now_add=True)
    some_id = ndb.StringProperty(indexed=True)
    data = ndb.JsonProperty(default={})

These tests are run in the Interactive Console, with the --high_replication option passed to dev_appserver.py:

Test 1
from models import Reference
from google.appengine.ext import ndb
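The likely explanation is eventual consistency: in the High Replication Datastore, a non-ancestor query reads a global index that is updated asynchronously after the put, so a freshly written entity can be visible to a get-by-key but invisible to a query. A toy model of that timing (all names here are illustrative):

```python
# Toy model of eventual consistency: put() updates entity storage at once,
# but the index that non-ancestor queries read is only updated when
# apply_pending() runs (simulating the asynchronous index apply).
entities = {}        # entity storage: get-by-key reads this (strong)
index = set()        # global index: non-ancestor queries read this
pending = []         # index updates not yet applied

def put(key, value):
    entities[key] = value
    pending.append(key)          # the index update lags behind the write

def get(key):
    return entities.get(key)     # strongly consistent

def query_all():
    return sorted(index)         # eventually consistent

def apply_pending():
    while pending:
        index.add(pending.pop())

put("ref1", {"kind": "x"})
print(get("ref1"))               # the entity is there...
print(query_all())               # ...but the query does not see it yet: []
apply_pending()
print(query_all())               # after the apply: ['ref1']
```

Ancestor queries avoid this because they read within a single entity group, at the cost of that group's write throughput limit.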

Appengine NDB: Putting 880 rows, exceeding datastore write ops quota. Why?

Submitted by 天大地大妈咪最大 on 2019-12-21 05:56:22
Question: I have an application which imports 880 rows into an NDB datastore, using put_async(). Whenever I run this import, it exceeds the daily quota of 50,000 write ops to the datastore. I'm trying to understand why this operation is so expensive and what can be done to stay under quota. There are 13 columns, like so:

stringbool = ['true', 'false']

class BeerMenu(ndb.Model):
    name = ndb.StringProperty()
    brewery = ndb.StringProperty()
    origin = ndb.StringProperty()
    abv = ndb.FloatProperty()
    size = ndb
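A back-of-envelope estimate shows why this adds up, assuming the legacy App Engine cost model as documented at the time (a new-entity put cost roughly 2 writes plus 2 per indexed property value plus 1 per composite index entry; the figures below are illustrative, and repeated properties count once per value):

```python
# Rough write-op estimate under the (assumed) legacy App Engine cost model.
def put_cost(indexed_values, composite_entries=0):
    # 2 writes for the entity + 2 per indexed property value
    # + 1 per composite index entry.
    return 2 + 2 * indexed_values + composite_entries

rows = 880
indexed_props = 13                     # every property is indexed by default
per_row = put_cost(indexed_props)      # 2 + 26 = 28 writes per entity
print(rows * per_row)                  # 24640 writes for the import alone

# Marking properties indexed=False where no query needs them
# shrinks the bill dramatically:
per_row_lean = put_cost(3)             # keep only 3 indexed properties
print(rows * per_row_lean)             # 7040 writes
```

Composite indexes and repeated properties multiply the per-row cost further, which is plausibly how 880 rows blew past 50,000 ops.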

Use the Datastore (NDB), the Search API or both for views on data?

Submitted by 霸气de小男生 on 2019-12-21 02:14:12
Question: In a CMS, a list of customers is retrieved using a regular NDB query with ordering. To allow filtering on name, company name and email, I create several (sometimes many) indexes. The situation was not ideal, but workable. Now there's the (experimental) Search API. It seems to have no relation to the Datastore (or NDB), but my data is already there. I'd like to use Full Text Search and put filters on multiple fields simultaneously, so should I keep my data in the Datastore and duplicate parts
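The common pattern is to keep the Datastore as the source of truth and mirror only the searchable fields into search documents. A toy inverted index (a pure-Python stand-in for the Search API; the tokenization and field names are illustrative) shows the filtering half of that split:

```python
# Toy inverted index illustrating "Datastore as source of truth,
# search index for filtering" (stand-in for the Search API).
customers = {
    1: {"name": "Alice Jones", "company": "Acme", "email": "alice@acme.test"},
    2: {"name": "Bob Smith", "company": "Acme", "email": "bob@acme.test"},
}

# Build the index: token -> set of customer ids containing it.
index = {}
for cid, fields in customers.items():
    for value in fields.values():
        for token in value.lower().replace("@", " ").replace(".", " ").split():
            index.setdefault(token, set()).add(cid)

def search(*terms):
    # Intersect posting sets, then hydrate results from the "datastore".
    ids = set(customers)
    for term in terms:
        ids &= index.get(term.lower(), set())
    return [customers[cid] for cid in sorted(ids)]

print(search("acme", "alice"))   # matches customer 1 only
```

The cost of the duplication is keeping the index in sync on every put, typically done in the same request or via a task queue.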

looking for ideas/alternatives to providing a page/item count/navigation of items matching a GAE datastore query

Submitted by 点点圈 on 2019-12-20 12:38:39
Question: I like the Datastore's simplicity, scalability and ease of use, and the enhancements found in the new ndb library are fabulous. As I understand Datastore best practices, one should not write code to provide item and/or page counts of matching query results when the number of items that match a query is large, because the only way to do this is to retrieve all the results, which is resource-intensive. However, in many applications, including ours, it is a common desire to see a count of matching
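The standard alternative is to maintain the count yourself as entities change, using the sharded-counter pattern so concurrent writers don't contend on one entity. A minimal sketch of the idea (in a real app each shard would be a datastore entity and the sum would be cached):

```python
# Sketch of a sharded counter: increments are spread across N shards to
# avoid write contention on a single entity; the displayed count is the
# sum of all shards.
import random

NUM_SHARDS = 5
shards = [0] * NUM_SHARDS      # stand-in for N counter entities

def increment():
    # Pick a random shard so concurrent writers rarely collide.
    shards[random.randrange(NUM_SHARDS)] += 1

def count():
    # In practice this sum would be memcached rather than recomputed.
    return sum(shards)

for _ in range(137):
    increment()
print(count())                 # 137
```

This gives exact counts for a fixed query (e.g. "all items"); per-filter counts still require a counter per filter or an approximation.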

Most Efficient One-To-Many Relationships in Google App Engine Datastore?

Submitted by 柔情痞子 on 2019-12-20 10:43:53
Question: Sorry if this question is too simple; I'm only entering 9th grade. I'm trying to learn about NoSQL database design. I want to design a Google Datastore model that minimizes the number of reads/writes. Here is a toy example for a blog post and comments in a one-to-many relationship. Which is more efficient: storing all of the comments in a StructuredProperty, or using a KeyProperty in the Comment model? Again, the objective is to minimize the number of reads/writes to the datastore. You may make
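A rough cost comparison makes the trade-off concrete (illustrative arithmetic only, not official pricing): embedding comments via a repeated StructuredProperty means they travel inside the post entity, so one get fetches everything; separate Comment entities linked by KeyProperty cost a read per comment even when batched.

```python
# Rough read-cost comparison of the two designs (illustrative numbers).
def reads_embedded(num_posts):
    # Comments live inside the post entity: one get per post fetches all.
    return num_posts

def reads_key_property(num_posts, comments_per_post):
    # Post and comments are separate entities; even batched with
    # get_multi, each comment entity is still a billed read.
    return num_posts * (1 + comments_per_post)

print(reads_embedded(10))              # 10 reads
print(reads_key_property(10, 20))      # 210 reads
```

The flip side: with embedding, every comment edit rewrites the whole post entity, and an entity is capped at 1 MB, so unbounded comment lists eventually force the KeyProperty design.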

Effective implementation of one-to-many relationship with Python NDB

Submitted by 别等时光非礼了梦想. on 2019-12-20 10:39:53
Question: I would like to hear your opinion about the effective implementation of a one-to-many relationship with Python NDB (e.g. Person (one) to Tasks (many)). In my understanding, there are three ways to implement it:

1. Use the 'parent' argument
2. Use a 'repeated' StructuredProperty
3. Use a 'repeated' KeyProperty

I usually choose an approach based on the logic below, but does it make sense to you? If you have better logic, please teach me. Use the 'parent' argument: transactional operation is required between these entities
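The asker's selection logic can be sketched as a small decision helper (the function name, flags, and thresholds here are illustrative encodings of the question's reasoning, not an official rule):

```python
# One way to encode the selection logic from the question.
def choose_relation(needs_transaction, children_small_and_read_together):
    if needs_transaction:
        # Parent/ancestor keys put both entities in one entity group,
        # enabling transactional updates across them.
        return "parent argument"
    if children_small_and_read_together:
        # A repeated StructuredProperty embeds children in the parent
        # entity, so they are fetched and written together.
        return "repeated StructuredProperty"
    # Otherwise keep children as separate entities and link by key.
    return "repeated KeyProperty"

print(choose_relation(True, False))    # parent argument
print(choose_relation(False, True))    # repeated StructuredProperty
print(choose_relation(False, False))   # repeated KeyProperty
```

The caveats attach to each branch: one entity group limits write throughput, embedded children share the parent's 1 MB limit, and key links cost extra reads.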

Efficient way to store relation values in NDB

Submitted by 别说谁变了你拦得住时间么 on 2019-12-20 07:34:05
Question: I have this data model (I made it, so if there's a better way to do it, please let me know). Basically, I have a Club that can have many Courses. Now I want to know all the members and instructors of a Club. Members and instructors are stored in the Course model, and Club has a reference to them. See the code:

class Course(ndb.Model):
    ...
    instructor_keys = ndb.KeyProperty(kind="User", repeated=True)
    member_keys = ndb.KeyProperty(kind="User", repeated=True)

    @property
    def instructors(self):
        return
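For the "all people in a Club" query, the usual approach is to collect the keys from every Course, de-duplicate them, and fetch each user once; in NDB that final step would be a single ndb.get_multi call. A pure-Python sketch with dicts standing in for the datastore (all names illustrative):

```python
# Sketch: gather member/instructor keys across a Club's Courses,
# de-duplicate, and batch-fetch each user once.
users = {"u1": "Ann", "u2": "Ben", "u3": "Cy"}
courses = [
    {"instructor_keys": ["u1"], "member_keys": ["u2", "u3"]},
    {"instructor_keys": ["u1"], "member_keys": ["u3"]},   # overlaps course 1
]

def club_people(courses):
    keys = set()
    for course in courses:
        keys.update(course["instructor_keys"])
        keys.update(course["member_keys"])
    # One batched lookup instead of one fetch per key per course.
    return sorted(users[k] for k in keys)

print(club_people(courses))   # ['Ann', 'Ben', 'Cy']
```

De-duplicating before the fetch matters because a user teaching or attending several courses would otherwise be read (and billed) repeatedly.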

Google AppEngine: Form handling 'repeated' StructuredProperty

Submitted by 南楼画角 on 2019-12-20 06:38:31
Question: How do I work with ndb.StructuredProperty(repeated = True) properties when it comes to designing their forms and handlers? Consider this example: I've got 3 ndb.Model kinds: SkilledPerson, his Education, and his (work) Experience. The latter two are StructuredProperty types on SkilledPerson.

class SkilledPerson(ndb.Model):
    name = ndb.StringProperty()
    birth = ndb.DateProperty()
    education = ndb.StructuredProperty(Education, repeated = True)
    experience = ndb.StructuredProperty(Experience,
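A common handler-side approach is to name form fields with an index, e.g. `education-0-school`, `education-1-school`, and fold them back into the list-of-dicts shape a repeated StructuredProperty expects. The naming convention below is an assumption for illustration, not part of any framework:

```python
# Sketch: parse flat POST fields like "education-0-school" into a list of
# dicts suitable for constructing repeated StructuredProperty values.
def parse_repeated(form, prefix):
    rows = {}
    for name, value in form.items():
        parts = name.split("-")
        if len(parts) == 3 and parts[0] == prefix and parts[1].isdigit():
            # Group values by their row index, keyed by field name.
            rows.setdefault(int(parts[1]), {})[parts[2]] = value
    return [rows[i] for i in sorted(rows)]

form = {
    "education-0-school": "MIT",
    "education-0-degree": "BSc",
    "education-1-school": "CMU",
    "experience-0-company": "Acme",   # different prefix: ignored here
}
print(parse_repeated(form, "education"))
```

Each resulting dict can then be passed to the substructure's constructor (e.g. `Education(**row)`) before assigning the list to the parent entity.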