query-optimization

Efficiently query an embedded database in a loop with prepared statements

非 Y 不嫁゛ submitted on 2019-12-23 03:16:16

Question: I asked a similar question the other day but have since realized I was getting ahead of myself. I'm seeking advice on the proper way to handle the following scenario: I'm trying to SELECT the correct longitude and latitude for a given address and city, in the fastest way possible. My COORDINATES table has 25,000 rows and looks like this: … I have a Java HashMap<Integer, List<String>> which uses an Integer as the key and an ArrayList containing two entries, an address and a city, as the value. The HashMap …
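Since the excerpt cuts off before any code, here is a minimal sketch of the prepared-statement-in-a-loop pattern the title asks about, using Python's sqlite3 module as a stand-in embedded database (the question itself is about Java/JDBC, and the COORDINATES column names below are assumptions): prepare one parameterized statement and rebind values on each iteration instead of concatenating a new SQL string per lookup.

```python
import sqlite3

# Hypothetical schema: COORDINATES(address, city, longitude, latitude).
conn = sqlite3.connect("geo.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS COORDINATES "
            "(address TEXT, city TEXT, longitude REAL, latitude REAL)")

# One parameterized statement, reused for every lookup; sqlite3 caches the
# compiled statement, so the loop does not re-parse SQL each time.
SQL = "SELECT longitude, latitude FROM COORDINATES WHERE address = ? AND city = ?"

lookups = {1: ["10 Main St", "Springfield"], 2: ["22 Oak Ave", "Shelbyville"]}
results = {}
for key, (address, city) in lookups.items():
    row = cur.execute(SQL, (address, city)).fetchone()
    results[key] = row  # (longitude, latitude) or None if no match

conn.close()
```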

MySQL optimization query with subqueries

拥有回忆 submitted on 2019-12-23 03:14:38

Question: Today I received an email from my hosting provider saying that I need to tweak my query:

SELECT `id`, `nick`, `msg`, `uid`, `show_pic`, `time`, `ip`, `time_updated`,
       (SELECT COUNT(c.msg_id) FROM `the_ans` c WHERE c.msg_id = d.id) AS counter,
       (SELECT c.msg FROM `the_ans` c WHERE c.msg_id = d.id ORDER BY `time` DESC LIMIT 1) AS lastmsg
FROM `the_data` d
ORDER BY `time_updated` DESC
LIMIT 26340, 15

EXPLAIN:

id  select_type  table  type  possible_keys  key  key_len  ref  rows   Extra
1   PRIMARY      d      ALL                                     34309  Using …
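The excerpt does not include the hosting provider's suggestion, but one common way to speed up this shape of query is to make sure both correlated subqueries and the final ORDER BY can be served from indexes. A hedged sketch, with made-up index names and connection details:

```python
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")  # hypothetical credentials
cur = conn.cursor()

# A composite index lets both correlated subqueries (the COUNT and the
# "latest msg") resolve each per-row lookup from the index instead of
# scanning `the_ans` repeatedly.
cur.execute("CREATE INDEX idx_ans_msgid_time ON the_ans (msg_id, `time`)")

# An index on the ORDER BY column helps MySQL avoid sorting all of
# `the_data` before applying the deep LIMIT ... OFFSET.
cur.execute("CREATE INDEX idx_data_time_updated ON the_data (time_updated)")

conn.commit()
conn.close()
```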

Reading a file by multiple threads

雨燕双飞 submitted on 2019-12-23 02:19:41

Question: I have a 250 MB file to read, and the application is multi-threaded. If I allow all threads to read the file, memory starvation occurs and I get an out-of-memory error. To avoid it, I want to have only one copy of the String (read from the stream) in memory, and I want all the threads to use it.

while (true) {
    synchronized (buffer) {
        num = is.read(buffer);
        String str = new String(buffer, 0, num);
    }
    sendToPC(str);
}

Basically, I want to have only one copy of the string when all threads have completed …
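A minimal sketch of the usual fix for this pattern: a single reader thread pulls the file in bounded chunks and hands them to workers through a bounded queue, so only a few chunks ever sit in memory at once. The original is Java; this shows the same idea in Python, and send_to_pc is a placeholder for the question's sendToPC:

```python
import threading
import queue

CHUNK_SIZE = 64 * 1024
NUM_WORKERS = 4
work_q = queue.Queue(maxsize=8)  # bounded: caps how many chunks sit in memory at once
SENTINEL = None

def send_to_pc(data):
    pass  # placeholder for the question's sendToPC()

def reader(path):
    # Single reader: only one thread touches the file, one chunk at a time.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            work_q.put(chunk)        # blocks when the queue is full
    for _ in range(NUM_WORKERS):
        work_q.put(SENTINEL)         # tell each worker to stop

def worker():
    while True:
        chunk = work_q.get()
        if chunk is SENTINEL:
            break
        send_to_pc(chunk)

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
reader("bigfile.bin")
for t in threads:
    t.join()
```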

How to select many to one to many without hundreds of queries using Django ORM?

*爱你&永不变心* submitted on 2019-12-22 18:30:46

Question: My database has the following schema:

class Product(models.Model):
    pass

class Tag(models.Model):
    product = models.ForeignKey(Product)
    attr1 = models.CharField()
    attr2 = models.CharField()
    attr3 = models.CharField()

class AlternatePartNumber(models.Model):
    product = models.ForeignKey(Product)

In other words, a Product has many Tags, and a Product has many AlternatePartNumbers. Tags are a collection of attributes of the Product. Given the three attributes in a Tag, I want to select the …
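The excerpt stops before the actual selection criteria, but the usual way to avoid hundreds of queries here is to filter across the Tag relation in one queryset and prefetch the related AlternatePartNumbers. A hedged sketch; the attribute values and the myapp import path are assumptions:

```python
from myapp.models import Product  # hypothetical app path

# Filter Products via their Tags' attributes (one JOIN under the hood), then
# prefetch the related AlternatePartNumbers in a single extra query instead
# of issuing one query per Product in the loop below.
products = (
    Product.objects
    .filter(tag__attr1="red", tag__attr2="large", tag__attr3="cotton")  # example values
    .prefetch_related("alternatepartnumber_set")
    .distinct()
)

for product in products:
    numbers = list(product.alternatepartnumber_set.all())  # served from the prefetch cache
```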

Nested Set indices & performance

喜欢而已 submitted on 2019-12-22 13:56:14

Question: I'm having some trouble understanding which indices to use on a nested-set model. The query is:

SELECT `node`.`id`, (COUNT(parent.id) - 1) AS `depth`, `name`
FROM `categories` AS `parent`
INNER JOIN `categories` AS `node`
    ON (`node`.`lft` BETWEEN parent.lft AND parent.rgt)
INNER JOIN `filebank_categories`
    ON (`node`.`id` = `filebank_categories`.`category_id`
        AND `filebank_categories`.`filebank_id` = 136)
INNER JOIN `categories_names`
    ON (`categories_names`.`category_id` = `node`.`id` AND …
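The query is truncated, so the ideal composite indexes cannot be pinned down, but a typical nested-set starting point is to index the lft/rgt pair plus the join columns that are filtered by equality. A sketch with made-up index names and connection details:

```python
import mysql.connector  # assumes mysql-connector-python

# Index names and column choices are guesses based on the visible part of the
# query; the ideal composites depend on the full WHERE/ORDER BY clauses.
DDL = [
    # Drives the BETWEEN self-join of the nested-set model.
    "CREATE INDEX idx_categories_lft_rgt ON categories (lft, rgt)",
    # Matches the equality filters on the join to filebank_categories.
    "CREATE INDEX idx_filebank_cat ON filebank_categories (filebank_id, category_id)",
    # Supports the join from categories_names back to the node.
    "CREATE INDEX idx_catnames_category ON categories_names (category_id)",
]

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")  # hypothetical
cur = conn.cursor()
for stmt in DDL:
    cur.execute(stmt)
conn.commit()
conn.close()
```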

MySQL and the LIMIT clause

天大地大妈咪最大 submitted on 2019-12-22 11:33:49

Question: I was wondering if adding LIMIT 1 to a query would speed up the processing? For example, I have a query that will most of the time return one result, but will occasionally return tens, hundreds, or even thousands of records. I will only ever want the first record. Would LIMIT 1 speed things up or make no difference? I know I could use GROUP BY to return one result, but that would just add more computation. Any thoughts gladly accepted! Thanks.

Answer 1: It depends on whether you have an ORDER BY. An ORDER …
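A small sketch of the distinction the answer is drawing: without an ORDER BY, LIMIT 1 lets MySQL stop at the first matching row, while with an ORDER BY the matching rows still have to be found and sorted first. Table and column names here are invented for illustration:

```python
import mysql.connector  # assumes mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")  # hypothetical
cur = conn.cursor()

# No ORDER BY: the scan can stop as soon as one matching row is found,
# so LIMIT 1 can genuinely cut the work done.
cur.execute("SELECT id FROM orders WHERE customer_id = %s LIMIT 1", (42,))
first_match = cur.fetchone()

# With an ORDER BY on an unindexed column, all matching rows are still
# collected and sorted before the single row is returned, so the saving
# from LIMIT 1 is much smaller.
cur.execute(
    "SELECT id FROM orders WHERE customer_id = %s ORDER BY created_at DESC LIMIT 1",
    (42,),
)
latest_match = cur.fetchone()

conn.close()
```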

MongoDB Index definition strategy

允我心安 submitted on 2019-12-22 10:48:57

Question: I have a MongoDB-based database with somewhere between 100K and 500K text documents, and the collection keeps growing. The system should support queries by different fields of the documents, e.g. title, category, importance, etc. It is a near-real-time system which gets new documents every 5-10 minutes. My question: in order to boost query performance, is it a good idea to define a separate index for each frequently queried field (field types: small text, numeric, date) …
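For reference, this is roughly what per-field indexes look like from the application side, sketched with pymongo; the connection string, database, collection, and field pairings are assumptions, and the compound index at the end is only worthwhile if a real query filters and sorts on exactly those fields:

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection string
docs = client["mydb"]["documents"]                 # hypothetical db/collection names

# One single-field index per frequently queried field: each can serve
# equality and range filters on its own field.
docs.create_index([("title", ASCENDING)])
docs.create_index([("category", ASCENDING)])
docs.create_index([("importance", DESCENDING)])

# If a common query filters on category AND sorts by importance, a compound
# index usually beats two separate single-field indexes for that query shape.
docs.create_index([("category", ASCENDING), ("importance", DESCENDING)])
```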

Why is LEFT JOIN slower than INNER JOIN?

ぃ、小莉子 submitted on 2019-12-22 06:49:23

Question: I have two queries; the first one (INNER JOIN) is super fast, and the second one (LEFT JOIN) is super slow. How do I make the second query fast?

EXPLAIN SELECT saved.email FROM saved INNER JOIN finished ON finished.email = saved.email;

id  select_type  table     type   possible_keys  key    key_len  ref   rows   Extra
1   SIMPLE       finished  index  NULL           email  258      NULL  32168  Using index
1   SIMPLE       saved     ref    email          email  383      func  1      Using where; Using index

EXPLAIN SELECT saved.email FROM saved LEFT JOIN finished ON …
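The second EXPLAIN is cut off, so the real cause is not visible, but the differing key_len values (258 vs 383) often indicate that the two email columns have different lengths or character sets, which can stop the LEFT JOIN (whose join order is fixed) from using an index lookup on finished. A speculative sketch of aligning the definitions; the exact type and charset below are assumptions:

```python
import mysql.connector  # assumes mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")  # hypothetical
cur = conn.cursor()

# Guess: make both email columns identical in length, charset, and collation
# so the optimizer can use the existing `email` index on `finished` even when
# LEFT JOIN forces `saved` to be read first. The VARCHAR length and charset
# here are assumptions, not taken from the question.
for table in ("saved", "finished"):
    cur.execute(
        f"ALTER TABLE {table} MODIFY email VARCHAR(127) "
        "CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL"
    )

conn.commit()
conn.close()
```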

How to optimize Neo4j Cypher queries with multiple node matches (Cartesian Product)

我们两清 submitted on 2019-12-22 06:35:45

Question: I am currently trying to merge three datasets for analysis purposes. I am using certain common fields to establish the connections between the datasets. In order to create the connections I have tried using the following type of query:

MATCH (p1:Person), (p2:Person)
WHERE p1.email = p2.email AND p1.name = p2.name AND p1 <> p2
CREATE UNIQUE (p1)-[IS]-(p2);

which can similarly be written as:

MATCH (p1:Person), (p2:Person {name: p1.name, email: p1.email})
WHERE p1 <> p2
CREATE UNIQUE (p1)-[IS]-(p2);
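One common way to tame this Cartesian product is to index the property used for matching and let the second MATCH anchor on the first node's values, deduplicating with id(p1) < id(p2). A sketch using the official Neo4j Python driver; the bolt URL, credentials, and the use of MERGE in place of the deprecated CREATE UNIQUE are assumptions (the :IS relationship type follows the question):

```python
from neo4j import GraphDatabase  # assumes the official neo4j Python driver

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # hypothetical credentials

# An index on the lookup property lets the second MATCH become an index seek
# instead of a full label scan for every p1 (Neo4j 3.x index syntax).
INDEX = "CREATE INDEX ON :Person(email)"

# Anchoring p2 on p1's properties and filtering with id(p1) < id(p2) avoids
# both the full Person x Person Cartesian product and duplicate pairs.
LINK = """
MATCH (p1:Person)
MATCH (p2:Person {email: p1.email, name: p1.name})
WHERE id(p1) < id(p2)
MERGE (p1)-[:IS]-(p2)
"""

with driver.session() as session:
    session.run(INDEX)
    session.run(LINK)

driver.close()
```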