database-performance

How to configure MongoDB Java driver MongoOptions for production use?

北城余情 submitted on 2019-11-26 18:43:14
Question: I've been searching the web for best practices on configuring MongoOptions for the MongoDB Java driver, and I haven't come up with much beyond the API docs. This search started after I ran into the "com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection" error; by increasing the connections/multiplier I was able to solve that problem. I'm looking for links to, or your own, best practices for configuring these options in production. The options for the 2.4 driver…
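For context, a minimal sketch of such a configuration with the legacy 2.x-era Java driver, where `MongoOptions` exposes its settings as public fields. The numeric values below are illustrative assumptions, not recommendations:

```java
import com.mongodb.Mongo;
import com.mongodb.MongoOptions;
import com.mongodb.ServerAddress;

public class MongoConfig {
    // Sketch for the legacy 2.x driver; values are placeholders.
    public static Mongo create() throws Exception {
        MongoOptions opts = new MongoOptions();
        opts.connectionsPerHost = 40;   // pool size per host (default 10)
        opts.threadsAllowedToBlockForConnectionMultiplier = 10;
        // The driver allows at most connectionsPerHost * multiplier threads
        // to wait for a connection; past that it throws the SemaphoresOut
        // error mentioned in the question.
        opts.maxWaitTime = 2000;        // ms to wait for a free connection
        opts.connectTimeout = 1000;     // ms to establish a connection
        opts.socketTimeout = 60000;     // ms for socket reads/writes
        return new Mongo(new ServerAddress("localhost", 27017), opts);
    }
}
```

Raising either `connectionsPerHost` or the multiplier is what resolves the semaphore exhaustion, since their product is the cap on threads waiting for a connection.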

How big can a MySQL database get before performance starts to degrade

强颜欢笑 submitted on 2019-11-26 16:51:31
At what point does a MySQL database start to lose performance? Does physical database size matter? Does the number of records matter? Is any performance degradation linear or exponential? I have what I believe to be a large database, with roughly 15M records taking up almost 2GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe to let it continue scaling for a few more years? Nick Berardi: The physical database size doesn't matter. The number of records doesn't matter. In my experience, the biggest problem you are going to run into is not size, but…
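Since the answer's point is that query plans and indexing, not raw size, dominate at this scale, a quick sanity check might look like the following sketch (table and column names are hypothetical):

```sql
-- See whether the plan does a full scan or an index lookup
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If it scans all ~15M rows, an index turns the lookup into a
-- b-tree probe whose cost grows roughly with log(rows), not rows
CREATE INDEX ix_orders_customer ON orders (customer_id);
```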

How do NULL values affect performance in a database search?

浪尽此生 submitted on 2019-11-26 15:46:21
Question: In our product we have a generic search engine, and we are trying to optimize search performance. A lot of the tables used in the queries allow NULL values. Should we redesign our tables to disallow NULL values as an optimization, or not? Our product runs on both Oracle and MS SQL Server. Answer 1: In Oracle, NULL values are not indexed, i.e. this query: SELECT * FROM table WHERE column IS NULL will always use a full table scan, since the index doesn't cover the values you need. More than that, this query:…
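A well-known Oracle workaround for this (a sketch; the index and table names are hypothetical) is to append a constant expression to the index, which makes every row appear in the index, including rows where the indexed column is NULL:

```sql
-- Oracle: a plain index on (col) omits rows where col IS NULL,
-- because entries whose indexed values are all NULL are not stored.
-- Appending a constant makes the index cover those rows too, so an
-- IS NULL predicate can use an index scan instead of a full scan.
CREATE INDEX ix_t_col ON t (col, 1);

SELECT * FROM t WHERE col IS NULL;
```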

Mysql count performance on very big tables

前提是你 submitted on 2019-11-26 13:05:50
Question: I have a table with more than 100 million rows in InnoDB. I need to know whether there are more than 5000 rows where the foreign key = 1. I don't need the exact number. I ran some tests: SELECT COUNT(*) FROM table WHERE fk = 1 => 16 seconds; SELECT COUNT(*) FROM table WHERE fk = 1 LIMIT 5000 => 16 seconds; SELECT primary FROM table WHERE fk = 1 => 0.6 seconds. The last query costs more in network transfer and client-side processing, but it saves 15.4 seconds of query time! Do you have a better idea? Thanks. Edit:…
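The reason the second query is no faster is that LIMIT applies to the result rows of the aggregate (a single row), not to the rows being counted. The usual trick is to push the LIMIT into a subquery so the scan stops after 5000 matching rows, then count the capped result; a sketch using the question's own table and column names:

```sql
-- The inner scan stops after 5000 matching rows, so the outer
-- COUNT is cheap. The result is min(actual matches, 5000):
-- compare it to 5000 to answer "are there more than 5000 rows?".
SELECT COUNT(*) AS capped_count
FROM (SELECT 1 FROM table WHERE fk = 1 LIMIT 5000) AS capped;
```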

Postgresql Truncation speed

笑着哭i submitted on 2019-11-26 07:00:20
Question: We're using PostgreSQL 9.1.4 as our db server. I've been trying to speed up my test suite, so I've started profiling the db a bit to see exactly what's going on. We are using database_cleaner to truncate tables at the end of tests. YES, I know transactions are faster; I can't use them in certain circumstances, so I'm not concerned with that. What I AM concerned with is why TRUNCATION takes so long (longer than using DELETE) and why it takes EVEN LONGER on my CI server. Right now, locally…
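One common mitigation (not from the excerpt, and the table names below are hypothetical) is to truncate all test tables in a single statement rather than issuing one TRUNCATE per table, so PostgreSQL does the transactional and catalog bookkeeping once:

```sql
-- One statement, one transaction, one round of catalog/file work.
-- RESTART IDENTITY also resets sequences owned by these tables;
-- CASCADE includes tables referencing them via foreign keys.
TRUNCATE TABLE users, posts, comments RESTART IDENTITY CASCADE;
```

For very small tables, plain DELETE can still win, since TRUNCATE's cost is dominated by per-table file operations rather than by row count.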

Multiple schemas versus enormous tables

浪尽此生 submitted on 2019-11-26 04:25:40
Question: Consider a mobile device management system that contains information for every user, such as a table storing the apps he has installed on the phone, auditing details, notification information, etc. Is it wise to create a separate schema for each user, with the corresponding tables? The number of tables is large for a single user, amounting to about 30 tables each. Would it be better to have a single schema where all this information is placed into shared tables (in turn creating enormous…
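For contrast, the shared-schema alternative keeps one set of tables keyed by user id instead of one schema per user; a minimal sketch with hypothetical names:

```sql
-- One shared table for all users rather than a per-user copy;
-- the composite primary key keeps per-user lookups index-driven
-- even as the table grows across the whole user base.
CREATE TABLE installed_apps (
    user_id      INT       NOT NULL,
    app_id       INT       NOT NULL,
    installed_at TIMESTAMP NOT NULL,
    PRIMARY KEY (user_id, app_id)
);
```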