database-performance

How to find the number of times a query was executed?

余生颓废 submitted on 2019-12-08 05:30:06
Question: I have a few queries from an application running against an Oracle 11g database; they are repeatable queries. I want to find the number of times each query was executed during the day, and the time each execution took, based on sql_id or sql_text. Is there a way to find this? Answer 1: The number of executions is in the AWR reports, which means it can probably also be derived from a DBA_HIST_ table, but I don't know which one. Based on your previous question I assume you have AWR licensed. --Find the SQL_ID. If not in
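A hedged sketch of the kind of lookup the answer hints at: on a system with the Diagnostics Pack licensed, per-snapshot execution counts live in DBA_HIST_SQLSTAT (joined to DBA_HIST_SNAPSHOT for timestamps), and for statements still in the shared pool V$SQL holds cumulative figures. The sql_id value is a placeholder; the answer's own query may differ.

```sql
-- Cumulative stats while the statement is still cached (no AWR needed)
SELECT sql_id,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1000, 1) AS avg_ms
FROM   v$sql
WHERE  sql_id = '&your_sql_id';

-- Per-snapshot history for today from AWR (Diagnostics Pack required)
SELECT sn.begin_interval_time,
       st.executions_delta,
       ROUND(st.elapsed_time_delta
             / NULLIF(st.executions_delta, 0) / 1000, 1) AS avg_ms
FROM   dba_hist_sqlstat  st
JOIN   dba_hist_snapshot sn
       ON sn.snap_id = st.snap_id AND sn.dbid = st.dbid
WHERE  st.sql_id = '&your_sql_id'
AND    sn.begin_interval_time >= TRUNC(SYSDATE)
ORDER  BY sn.begin_interval_time;
```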

How can I use a covering indexed view in a supported manner?

允我心安 submitted on 2019-12-08 03:11:57
Question: According to Unsupported Customizations: "Adding tables, stored procedures, or views to the database is also not supported because of referential integrity or upgrade issues." I have a process that returns the most recently due phone call for staff to dial. This is causing a problem because we are a call centre with a couple of million calls already, adding a few thousand a day. I'd like to add an indexed view which provides a covering index for the few fields required from the base tables.
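For reference, a covering indexed view in SQL Server takes roughly this shape. The table and column names below are invented to resemble a CRM phone-call table; whether creating such a view inside the product database counts as a supported customization is precisely what the question is asking.

```sql
-- An indexed view must use WITH SCHEMABINDING and two-part object names
CREATE VIEW dbo.vw_DuePhoneCalls
WITH SCHEMABINDING
AS
SELECT ActivityId, ScheduledEnd, OwnerId
FROM   dbo.PhoneCallBase        -- hypothetical base table name
WHERE  StateCode = 0;           -- open calls only
GO
-- The unique clustered index is what materialises (and "covers") the view
CREATE UNIQUE CLUSTERED INDEX IX_vw_DuePhoneCalls
    ON dbo.vw_DuePhoneCalls (ScheduledEnd, ActivityId);
```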

How to improve performance of a function with cursors in PostgreSQL?

耗尽温柔 submitted on 2019-12-07 15:02:13
Question: I have a function with two nested cursors. The outer cursor gets a customer's payment details from the source and inserts them into the target based on some business logic. The inner cursor takes the details of each payment, one payment after another. The payments table has about 125,000 rows, and the payment details table about 335,000 rows. All of these rows are to be migrated to a target table. Executing the function takes over two hours and database CPU usage goes up to 99%. I am working
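The usual fix for this pattern is to replace the row-by-row cursor loops with a single set-based statement, which lets PostgreSQL process all 335k detail rows in one pass. A minimal sketch, with hypothetical source/target table names and a stand-in filter for the business logic:

```sql
-- Set-based migration: one statement instead of ~125k cursor iterations
INSERT INTO target_payment_details (payment_id, detail_no, amount)
SELECT p.payment_id,
       d.detail_no,
       d.amount
FROM   source_payments p
JOIN   source_payment_details d USING (payment_id)
WHERE  p.status = 'APPROVED';   -- stand-in for the business-logic filter
```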

SQL Geometry VS decimal(8,6) Lat, Long Performance

蓝咒 submitted on 2019-12-07 07:11:33
Question: I was looking into the performance of selecting the closest points within a certain proximity to a given coordinate. The options are either two decimal(8,6) lat/long columns, or a single geography column. I am only interested in which is faster. Answer 1: TL;DR Geography is ~10 times faster. OK, so I set up a test: a couple of tables, one with id, lat, long (int, decimal(8,6), decimal(8,6)), the other with id, coord (int, geography). Then I inserted 47k rows of random data. For indexing the first table I used
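The benchmark setup the answer describes might look roughly like this in T-SQL (the nearest-point query and the test coordinate are assumptions; the real speedup also depends on having a spatial index on the geography column):

```sql
-- Two candidate schemas from the answer's benchmark (SQL Server)
CREATE TABLE points_decimal (
    id   int IDENTITY PRIMARY KEY,
    lat  decimal(8,6),
    lng  decimal(8,6)
);
CREATE TABLE points_geo (
    id    int IDENTITY PRIMARY KEY,
    coord geography
);

-- Nearest-point query against the geography column
DECLARE @p geography = geography::Point(51.5074, -0.1278, 4326);
SELECT TOP (10) id, coord.STDistance(@p) AS dist_m
FROM   points_geo
ORDER  BY coord.STDistance(@p);
```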

UPDATE vs UPDATE WHERE

隐身守侯 submitted on 2019-12-07 06:13:36
Question: I have a table with many rows, where I periodically want to set one column to 0 using a cron job. Which is faster / less memory-consuming: doing an UPDATE on all rows (i.e. no WHERE clause), or doing an UPDATE only WHERE mycolumn != 0? Answer 1: As noted in the comments on the original post, it depends on several things (index, database engine, type of storage media, available cache memory, etc.). We could make an educated guess: a) We should always have a full-table scan unless we have an index on
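The two statements under comparison, written out for concreteness (the table name is a stand-in; the column name is from the question):

```sql
-- Rewrites every row, even those already 0: more row writes,
-- locking, and log/undo volume
UPDATE mytable SET mycolumn = 0;

-- Touches only rows that actually change; if mycolumn is indexed the
-- scan can be avoided too, otherwise it is still a full scan but with
-- far fewer writes
UPDATE mytable SET mycolumn = 0 WHERE mycolumn != 0;
```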

Performance of nested select

只愿长相守 submitted on 2019-12-07 05:49:47
Question: I know this is a common question and I have read several other posts and papers, but I could not find one that takes into account indexed fields and the volume of records both queries could return. My question is really simple: which of the two queries below, written in an SQL-like syntax, is recommended in terms of performance? First query: Select * from someTable s where s.someTable_id in (Select someTable_id from otherTable o where o.indexedField = 123) Second query: Select * from someTable
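The first form can also be written with EXISTS, which many planners treat as a semi-join and optimize identically to the IN subquery; shown here as a hedged alternative using the question's own table and column names:

```sql
-- First form from the question: IN with a subquery on an indexed field
SELECT s.*
FROM   someTable s
WHERE  s.someTable_id IN (SELECT o.someTable_id
                          FROM   otherTable o
                          WHERE  o.indexedField = 123);

-- Equivalent EXISTS form (semi-join); performance is usually the same
-- on modern optimizers, but the plan is easier to reason about
SELECT s.*
FROM   someTable s
WHERE  EXISTS (SELECT 1
               FROM   otherTable o
               WHERE  o.someTable_id = s.someTable_id
               AND    o.indexedField = 123);
```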

Tune Oracle Database for faster startup (flashback)

懵懂的女人 submitted on 2019-12-07 03:56:06
Question: I'm using Oracle Database 11.2. I have a scenario where I issue FLASHBACK DATABASE quite often. It seems that a FLASHBACK DATABASE cycle does a reboot of the database instance, which takes approx. 7 seconds on my setup. The database is small (~1 GB tablespace), and all files should be in I/O caches/buffers, so I think the bottleneck is not I/O-based. I'm looking for tuning advice to save user time and/or CPU time when doing a flashback. UPDATE: Flashback sequence (and timing of
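For context, a typical flashback cycle looks like the sketch below; FLASHBACK DATABASE must be issued while the instance is in MOUNT state, and the shutdown/startup pair around it is where the ~7 seconds of restart cost comes from. The restore point name is a placeholder.

```sql
-- One flashback cycle (run from SQL*Plus as SYSDBA)
CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;
-- ... run the test workload ...
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_test;
ALTER DATABASE OPEN RESETLOGS;
```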

Node.js, store object in memory or database?

风流意气都作罢 submitted on 2019-12-06 13:26:06
I am developing a Node.js application which reads a JSON list from a centralised DB. The list object is around 1.2 MB (if kept in a txt file). The requirement is that the data be refreshed every 24 hours, so I set up a cron job for it. After fetching the data I keep it in a DB (Couchbase) running locally on my server. Data access is very frequent: I get around 1 or 2 requests per second, and nearly every request needs that object. Is it better to keep that object as an in-memory object in Node.js, or to keep it in the local DB? What are the advantages and disadvantages of both? The object is only read for all requests, only

Slow running Postgres query

空扰寡人 submitted on 2019-12-06 12:20:25
Question: I have a query that takes a very long time on my database. The SQL is generated by an ORM (Hibernate) inside an application whose source code I don't have access to. I was wondering if anyone could take a look at the following EXPLAIN ANALYZE output and suggest any Postgres tweaks I can make. I don't know where to start or how to tune my database to service this query. The query looks like this: select resourceta0_.RES_ID as col_0_0_ from HFJ_RESOURCE resourceta0_ left outer join HFJ_RES
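Without the full plan this is only a guess, but the usual first steps for a Hibernate-generated query like this are to re-run it under EXPLAIN (ANALYZE, BUFFERS) and then index whatever column the plan shows being scanned sequentially. Only HFJ_RESOURCE and RES_ID appear in the excerpt; the indexed column below is a hypothetical stand-in.

```sql
-- Capture the actual plan with per-node timing and buffer counts
EXPLAIN (ANALYZE, BUFFERS)
SELECT resourceta0_.RES_ID
FROM   HFJ_RESOURCE resourceta0_;

-- If the plan shows a Seq Scan on a filtered/joined column, try an index
-- (column name is a stand-in; pick it from the real WHERE/JOIN clause)
CREATE INDEX CONCURRENTLY idx_hfj_res_filter ON HFJ_RESOURCE (res_type);
```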

Improving performance of spatial MySQL query

耗尽温柔 submitted on 2019-12-06 12:10:38
Question: I have a query that returns all records, ordered by distance from a fixed point, compared against a POINT field in my MySQL 5.7 database. As a simple example, let's say it looks like this: SELECT shops.*, st_distance(location, POINT(:lat, :lng)) as distanceRaw FROM shops ORDER BY distanceRaw LIMIT 50 My actual query also has to do a few joins to get additional data for the results. The issue is that in order to sort the data by distance, it needs to calculate the distance for every single record
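A common mitigation is to prefilter with a bounding box that a SPATIAL index on `location` can use, so the exact distance and sort apply only to nearby candidates. A sketch against the question's table, keeping its POINT(:lat, :lng) axis order; the 0.5-degree half-width and the precomputed corner points are assumptions to tune for your data density:

```sql
-- Bounding box first (SPATIAL-index-friendly), exact distance second
SET @p   := POINT(:lat, :lng);
SET @box := ST_MakeEnvelope(POINT(:lat_min, :lng_min),   -- e.g. :lat - 0.5
                            POINT(:lat_max, :lng_max));  -- e.g. :lat + 0.5
SELECT shops.*,
       st_distance(location, @p) AS distanceRaw
FROM   shops
WHERE  MBRContains(@box, location)
ORDER  BY distanceRaw
LIMIT  50;
```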