database-performance

Using Multi Column Unique Indexes vs Single Hashed Column

Posted by 孤人 on 2019-12-11 10:29:15
Question: I have a table in which I need to enforce a unique constraint across multiple columns. Instead of creating a multi-column unique index, I could also introduce an extra column based on a hash of all the required fields. Which approach is more effective in terms of database performance? The MySQL documentation suggests the hashed-column method, but I couldn't find any information regarding SQL Server. Answer 1: The link you give states: If this column is short, reasonably unique, and indexed, it might be faster than a “wide”
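A minimal sketch of the two options being compared, using an invented table and invented column names (the generated-column syntax assumes MySQL 5.7+; on SQL Server the equivalent would be a computed column over HASHBYTES). Note that with a hash-based unique index, two genuinely distinct rows that happen to collide would be rejected, so the choice of hash and inputs matters.

-- Option A: plain multi-column unique index (wider key, no extra column).
CREATE TABLE order_lines (
    id           INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id  INT NOT NULL,
    product_code VARCHAR(40) NOT NULL,
    batch_no     VARCHAR(40) NOT NULL,
    UNIQUE KEY ux_order_lines_natural (customer_id, product_code, batch_no)
);

-- Option B: a single hashed column derived from the same fields (narrow key).
CREATE TABLE order_lines_hashed (
    id           INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id  INT NOT NULL,
    product_code VARCHAR(40) NOT NULL,
    batch_no     VARCHAR(40) NOT NULL,
    row_hash     CHAR(32)
        AS (MD5(CONCAT_WS('|', customer_id, product_code, batch_no))) STORED,
    UNIQUE KEY ux_order_lines_hash (row_hash)
);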

mysql udf json_extract in where clause - how to improve performance

Posted by 拥有回忆 on 2019-12-11 07:17:44
Question: How can I efficiently search JSON data in a MySQL database? I installed the extract_json UDF from labs.mysql.com and experimented with a test table of 2,750,000 entries. CREATE TABLE `testdb`.`JSON_TEST_TABLE` ( `AUTO_ID` INT UNSIGNED NOT NULL AUTO_INCREMENT, `OP_ID` INT NULL, `JSON` LONGTEXT NULL, PRIMARY KEY (`AUTO_ID`)) $$ An example JSON field looks like this: {"ts": "2014-10-30 15:08:56 (9400.223725848107) ", "operation": "1846922"} I found that putting json_extract into a select
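One common workaround, sketched here under the assumption of MySQL 5.7+ native JSON functions (the labs UDF in the question has a slightly different call syntax): extract the attribute you filter on into its own indexed column, so the WHERE clause becomes an ordinary index lookup instead of parsing JSON on every row. Column and index names below are invented for illustration.

-- Assumes every stored value is valid JSON; the STORED column is computed
-- once per row instead of on every comparison.
ALTER TABLE JSON_TEST_TABLE
    ADD COLUMN operation VARCHAR(32)
        AS (JSON_UNQUOTE(JSON_EXTRACT(`JSON`, '$.operation'))) STORED,
    ADD INDEX ix_json_test_operation (operation);

-- The search is then a plain indexed equality lookup:
SELECT AUTO_ID, OP_ID FROM JSON_TEST_TABLE WHERE operation = '1846922';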

Optimizing InnoDB Insert Queries

Posted by 会有一股神秘感。 on 2019-12-11 05:47:08
Question: According to the slow query log, the following query (and similar queries) would occasionally take around 2 seconds to execute: INSERT INTO incoming_gprs_data (data,type) VALUES ('3782379837891273|890128398120983891823881abcabc','GT100'); Table structure: CREATE TABLE `incoming_gprs_data` ( `id` int(200) NOT NULL AUTO_INCREMENT, `dt` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `data` text NOT NULL, `type` char(10) NOT NULL, `test_udp_id` int(20) NOT NULL, `parse_result` text NOT NULL, `completed`
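A common mitigation when many small single-row inserts arrive concurrently, sketched with placeholder values: batch several rows into one transaction (or one multi-row INSERT), so InnoDB commits and flushes the redo log once per batch rather than once per row.

-- Placeholder values; the point is the batching, not the data.
START TRANSACTION;
INSERT INTO incoming_gprs_data (data, type) VALUES
    ('3782379837891273|890128398120983891823881abcabc', 'GT100'),
    ('3782379837891274|890128398120983891823882abcabc', 'GT100'),
    ('3782379837891275|890128398120983891823883abcabc', 'GT100');
COMMIT;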

Optimizing a SQL Query which is already doing an index seek

Posted by 做~自己de王妃 on 2019-12-11 04:13:47
Question: I have a SQL query which is pretty efficient, but I have a feeling it can be improved. It's the 48% cost of the sort after the index seek using IX_thing_time_location that I hope can be improved. I would hate to think that this is the best that can be done with this query. Is there anything else I can do to improve performance, whether by rewriting the query, changing my indexes, or partitioning (I know these don't always mean performance gains)? Here is the execution plan: http
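The usual way to make a post-seek sort disappear, sketched in SQL Server syntax with invented table and column names (the actual schema isn't shown in the excerpt): extend the index so its key order matches the ORDER BY, and INCLUDE the selected columns so the plan neither sorts nor does key lookups.

-- Hypothetical index: equality columns from the WHERE clause first, then the
-- ORDER BY column; INCLUDE covers the remaining selected columns.
CREATE NONCLUSTERED INDEX IX_thing_time_location_covering
    ON dbo.thing (location_id, event_time DESC)
    INCLUDE (col_a, col_b, col_c);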

BigQuery performance: Is this correct?

Posted by 南楼画角 on 2019-12-11 04:10:55
Question: Folks, I'm using BigQuery as a super-fast database for my analytics queries, but I'm very disappointed with its performance. Let me show you the numbers: just one table in the FROM clause; about 15 fields selected, each in the GROUP BY; about 5 fields aggregated with SUM(); total table rows: 3.7 million; total rows returned: 830K. When I execute this query in BigQuery's console, it takes about 1 minute to process. Is this OK for you? I was expecting it to return in about 2 seconds... If I execute this query
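For reference, the rough shape of the query being described, with placeholder table and field names (only the counts of grouped dimensions and SUM() measures are taken from the description above):

-- Placeholder names; ~15 grouped dimensions, 5 summed measures.
SELECT
    dim01, dim02, dim03, dim04, dim05,
    dim06, dim07, dim08, dim09, dim10,
    dim11, dim12, dim13, dim14, dim15,
    SUM(measure1) AS s1, SUM(measure2) AS s2, SUM(measure3) AS s3,
    SUM(measure4) AS s4, SUM(measure5) AS s5
FROM dataset.events
GROUP BY
    dim01, dim02, dim03, dim04, dim05,
    dim06, dim07, dim08, dim09, dim10,
    dim11, dim12, dim13, dim14, dim15;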

mysql partitioning

Posted by ♀尐吖头ヾ on 2019-12-11 03:08:26
Question: I just want to verify that database partitioning is implemented only at the database level: when we query a partitioned table, we still issue our normal query, with nothing special in it, and the optimization is performed automatically when the query is parsed. Is that correct? E.g. we have a table called 'address' with columns called 'country_code' and 'city'. So if I want to get all the addresses in New York, US, normally I would do something like this: select * from address where country_code =
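A minimal sketch of that idea in MySQL, with an invented partitioning scheme: partitioning is declared once on the table, the SELECT is issued unchanged, and the optimizer prunes irrelevant partitions when the WHERE clause constrains the partitioning column. One caveat: in MySQL the partitioning column must be part of every unique key, including the primary key.

-- Hypothetical partitioning scheme for the address table.
ALTER TABLE address
    PARTITION BY KEY (country_code) PARTITIONS 16;

-- The query itself does not change; pruning happens automatically.
SELECT * FROM address WHERE country_code = 'US' AND city = 'New York';

-- In MySQL 5.7+, EXPLAIN lists the partitions actually scanned.
EXPLAIN SELECT * FROM address WHERE country_code = 'US' AND city = 'New York';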

slow mysql query that has to execute hundreds of thousands of times per hour

Posted by 天大地大妈咪最大 on 2019-12-11 02:51:26
Question: My MySQL database has a table with several hundred thousand rows. Each row has (among other data) a user name and a timestamp. I need to retrieve the record with the most recent timestamp for a given user. The current, brute-force query is: SELECT * FROM tableName WHERE `user_name`="The user name" AND datetime_sent = (SELECT MAX(`datetime_sent`) FROM tableName WHERE `user_name`="The user name"); The table has an index on user_name and on datetime_sent. How can I best improve this query? I
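One common rewrite, sketched here on the assumption that a single newest row per user is wanted: add a composite index on (user_name, datetime_sent) so MySQL can read the newest entry for a user directly from the end of the index range, which removes the MAX() subquery and its second pass over the table. The index name is invented.

-- Composite index: equality column first, then the sort column.
ALTER TABLE tableName ADD INDEX ix_user_datetime (user_name, datetime_sent);

-- Walks the index backwards for this user and stops after one row.
SELECT *
FROM tableName
WHERE user_name = 'The user name'
ORDER BY datetime_sent DESC
LIMIT 1;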

Joomla MySQL Performance

Posted by 无人久伴 on 2019-12-10 16:43:36
Question: I have been developing a Joomla site with version 2.5.11. The site will be under very high traffic. My problem is MySQL query performance. The database has about 60000 rows in the content table, and the query shown below (the core com_content articles model query) takes about 6 seconds to execute. Very slow. SELECT a.id, a.title, a.alias, a.title_alias, a.introtext, a.checked_out, a.checked_out_time, a.catid, a.created, a.created_by, a.created_by_alias, CASE WHEN a.modified = 0 THEN a.created ELSE a

Understanding “Number of keys” in nodetool cfstats

Posted by 早过忘川 on 2019-12-10 14:24:25
Question: I am new to Cassandra. In this example I am using a cluster with 1 DC and 5 nodes, and a NetworkTopologyStrategy with a replication factor of 3. Keyspace: activityfeed Read Count: 0 Read Latency: NaN ms. Write Count: 0 Write Latency: NaN ms. Pending Tasks: 0 Table: feed_shubham SSTable count: 1 Space used (live), bytes: 52620684 Space used (total), bytes: 52620684 SSTable Compression Ratio: 0.3727660543119897 Number of keys (estimate): 137984 Memtable cell count: 0 Memtable data size, bytes: 0

Node.js, store object in memory or database?

Posted by 谁说我不能喝 on 2019-12-10 11:10:03
Question: I am developing a Node.js application which reads a JSON list from a centralised DB. The list object is around 1.2 MB (if kept in a txt file). The requirement is that the data is refreshed every 24 hours, so I set up a cron job for that. After fetching the data I keep it in a DB (Couchbase) which runs locally on my server. Data access is very frequent: I get around 1 or 2 requests per second, and nearly every request needs that object. Is it better to keep that object as an in-memory object in Node.js or to keep it in