database-performance

Why do spring/hibernate read-only database transactions run slower than read-write?

拈花ヽ惹草 submitted on 2019-11-28 03:06:57
I've been doing some research on the performance of read-only versus read-write database transactions. The MySQL server is remote across a slow VPN link, so it's easy for me to see differences between the transaction types. This is with connection pooling, which I know is working based on comparing 1st versus 2nd JDBC calls. When I configure the Spring AOP to use a read-only transaction on my DAO call, the calls are 30-40% slower compared to read-write: <!-- slower --> <tx:method name="find*" read-only="true" propagation="REQUIRED" /> ... // slower @Transactional(readOnly = true) Versus: <!--
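A minimal annotation-style sketch of the two configurations being compared (the DAO and entity names below are hypothetical, not from the question). The readOnly flag lets Spring mark the connection read-only and lets Hibernate skip dirty checking, but it can also cause extra setup statements to be sent per transaction, and against a remote MySQL server every extra round trip over the VPN shows up in the timings.

```java
import javax.persistence.EntityManager;
import org.springframework.transaction.annotation.Transactional;

public class AccountDao {

    private final EntityManager entityManager;

    public AccountDao(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // Read-only variant: the flag is a hint that enables optimisations
    // (no dirty checking) but may add per-transaction setup statements.
    @Transactional(readOnly = true)
    public Account findReadOnly(long id) {
        return entityManager.find(Account.class, id);
    }

    // Read-write variant used as the baseline in the comparison.
    @Transactional
    public Account findReadWrite(long id) {
        return entityManager.find(Account.class, id);
    }

    // Placeholder entity so the sketch compiles on its own.
    public static class Account {
        public long id;
        public String name;
    }
}
```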

How to configure MongoDB Java driver MongoOptions for production use?

不想你离开。 submitted on 2019-11-27 16:35:41
I've been searching the web for best practices for configuring MongoOptions for the MongoDB Java driver, and I haven't come up with much other than the API. This search started after I ran into the "com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection" error; by increasing the connections/multiplier I was able to solve that problem. I'm looking for links to, or your own, best practices for configuring these options for production. The options for the 2.4 driver include: http://api.mongodb.org/java/2.4/com/mongodb/MongoOptions.html autoConnectRetry connectionsPerHost
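A minimal sketch against the legacy 2.x driver API linked above; the numbers are placeholders to tune for your own load, not recommended production values. The semaphore error is raised when more than connectionsPerHost × threadsAllowedToBlockForConnectionMultiplier threads are waiting for a pooled connection.

```java
import com.mongodb.Mongo;
import com.mongodb.MongoOptions;
import com.mongodb.ServerAddress;

public class MongoConfig {

    public static Mongo createClient() throws Exception {
        MongoOptions options = new MongoOptions();
        options.connectionsPerHost = 100;   // pool size per host (placeholder value)
        options.threadsAllowedToBlockForConnectionMultiplier = 5;
        options.connectTimeout = 10000;     // ms to establish a connection
        options.socketTimeout = 60000;      // ms before an individual operation times out
        options.autoConnectRetry = true;    // 2.x-only flag: retry connects after network errors
        // Host and port are placeholders.
        return new Mongo(new ServerAddress("db.example.com", 27017), options);
    }
}
```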

save method of CRUDRepository is very slow?

北城余情 submitted on 2019-11-27 15:36:38
Question: I want to store some data in my Neo4j database. I use spring-data-neo4j for that. My code looks like the following: for (int i = 0; i < newRisks.size(); i++) { myRepository.save(newRisks.get(i)); System.out.println("saved " + newRisks.get(i).name); } My newRisks collection contains about 60000 objects and 60000 edges. Every node and edge has one property. The duration of this loop is about 15-20 minutes; is this normal? I used Java VisualVM to look for bottlenecks, but my average CPU usage was 10 -
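The loop above issues one save per node, and outside a transaction each save opens and commits its own transaction, which is what makes 60,000 single saves take so long. A sketch of the usual fix, keeping the question's myRepository/newRisks names but with stand-in Risk types: do the whole import in one transaction and hand the repository the whole collection.

```java
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.transaction.annotation.Transactional;

public class RiskImporter {

    // Stand-ins for the entity and repository types from the question.
    public static class Risk { public Long id; public String name; }
    public interface RiskRepository extends CrudRepository<Risk, Long> { }

    private final RiskRepository myRepository;

    public RiskImporter(RiskRepository myRepository) {
        this.myRepository = myRepository;
    }

    // One transaction for all ~60,000 nodes instead of one per save().
    @Transactional
    public void importAll(List<Risk> newRisks) {
        myRepository.save(newRisks); // save(Iterable) in older Spring Data; saveAll(...) in newer versions
    }
}
```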

How do NULL values affect performance in a database search?

送分小仙女 submitted on 2019-11-27 11:48:18
In our product we have a generic search engine, and we are trying to optimize search performance. A lot of the tables used in the queries allow NULL values. Should we redesign our tables to disallow NULL values for optimization or not? Our product runs on both Oracle and MS SQL Server. In Oracle, NULL values are not indexed, i.e. this query: SELECT * FROM table WHERE column IS NULL will always use a full table scan since the index doesn't cover the values you need. More than that, this query: SELECT column FROM table ORDER BY column will also use a full table scan and a sort for the same reason. If your
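For the Oracle case, a commonly used workaround (shown here from JDBC so all the examples stay in one language; table and column names are placeholders) is to add a constant as a second index key, so that rows where the column is NULL still get index entries:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class NullSearchWorkaround {

    // A single-column Oracle B-tree index stores no entry for rows whose key is
    // entirely NULL, so "WHERE my_column IS NULL" cannot use it. Indexing
    // (my_column, 1) keeps every row in the index.
    public static void createIndex(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE INDEX idx_my_table_my_column ON my_table (my_column, 1)");
        }
    }

    // With the index above, this predicate can be answered by an index scan
    // rather than a full table scan.
    public static ResultSet findRowsWithNull(Connection conn) throws Exception {
        return conn.createStatement()
                   .executeQuery("SELECT * FROM my_table WHERE my_column IS NULL");
    }
}
```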

Entity Framework Vs Stored Procedures - Performance Measure

随声附和 submitted on 2019-11-27 09:28:21
Question: I'm trying to establish how much slower Entity Framework is compared to stored procedures. I hope to convince my boss to let us use Entity Framework for ease of development. The problem is I ran a performance test and it looks like EF is about 7 times slower than stored procs. I find this extremely hard to believe, and I'm wondering if I'm missing something. Is this a conclusive test? Is there anything I can do to increase the performance of the EF test? var queries = 10000; // Stored Proc Test
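The question's test code is C#, but one frequent cause of a lopsided comparison is language-agnostic: one-time costs (metadata/model compilation, plan caching, opening connections) end up inside the timed loop. A minimal sketch of the warm-up-then-measure pattern, written in Java to match the other examples here, with a placeholder workload:

```java
import java.util.function.Supplier;

public class BenchmarkHarness {

    // Run the work a few times untimed so one-off startup costs are not charged
    // to the measurement, then time the steady state.
    public static <T> long measureMillis(Supplier<T> work, int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            work.get();
        }
        long start = System.nanoTime();
        for (int i = 0; i < measuredRuns; i++) {
            work.get();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Placeholder workload; in the real test each side (ORM query vs stored
        // procedure call) would be passed through the same harness.
        long elapsed = measureMillis(() -> Math.sqrt(42.0), 1000, 10000);
        System.out.println("steady-state time: " + elapsed + " ms");
    }
}
```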

Is this date comparison condition SARG-able in SQL?

一曲冷凌霜 submitted on 2019-11-27 06:49:14
Question: Is this condition sargable? AND DATEDIFF(month, p.PlayerStatusLastTransitionDate, @now) BETWEEN 1 AND 7) My rule of thumb is that a function on the left makes a condition non-sargable, but in some places I have read that a BETWEEN clause is sargable. So does anyone know for sure? For reference: What makes a SQL statement sargable? http://en.wikipedia.org/wiki/Sargable NOTE: If any guru ends up here, please do update the Sargable Wikipedia page. I updated it a little bit but I am sure it can be improved
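A sketch of the usual rewrite that makes such a predicate sargable: keep the column bare and move the arithmetic to the other side, turning the month-difference test into a plain range (the Player table name is an assumption; the bounds reproduce DATEDIFF's month-boundary semantics, i.e. rows from the seven calendar months before the current one):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.LocalDate;

public class SargableDateFilter {

    // Equivalent to: DATEDIFF(month, p.PlayerStatusLastTransitionDate, @now) BETWEEN 1 AND 7
    // but expressed as a range on the column itself, so an index on the column can be used.
    private static final String SQL =
        "SELECT * FROM Player p "
      + "WHERE p.PlayerStatusLastTransitionDate >= ? "   // first day of the month 7 months ago
      + "  AND p.PlayerStatusLastTransitionDate <  ?";   // first day of the current month

    public static ResultSet find(Connection conn, LocalDate now) throws Exception {
        LocalDate startOfThisMonth = now.withDayOfMonth(1);
        PreparedStatement ps = conn.prepareStatement(SQL);
        ps.setTimestamp(1, Timestamp.valueOf(startOfThisMonth.minusMonths(7).atStartOfDay()));
        ps.setTimestamp(2, Timestamp.valueOf(startOfThisMonth.atStartOfDay()));
        return ps.executeQuery();
    }
}
```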

MySQL query performs very slowly

邮差的信 submitted on 2019-11-27 05:51:35
Question: I have developed a user bulk upload module. There are 2 situations: when I do a bulk upload of 20,000 records and the database has zero records, it takes about 5 hours. But when the database already has about 30,000 records the upload is very, very slow; it takes about 11 hours to upload 20,000 records. I am just reading a CSV file via the fgetcsv method. if (($handle = fopen($filePath, "r")) !== FALSE) { while (($peopleData = fgetcsv($handle, 10240, ",")) !== FALSE) { if (count($peopleData) ==
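The loader in the question is PHP, but the usual remedies are language-independent: do the load in a single transaction, send the inserts in batches, and make sure any per-row duplicate check hits an index (a check that scans the whole table would explain why the job slows down as the table grows). A sketch in Java/JDBC with placeholder connection details, table and column names:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BulkUploader {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/people?rewriteBatchedStatements=true", "user", "pass");
             BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {

            conn.setAutoCommit(false); // one transaction for the whole upload
            PreparedStatement insert = conn.prepareStatement(
                "INSERT INTO people (first_name, last_name, email) VALUES (?, ?, ?)");

            String line;
            int batched = 0;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(","); // naive CSV split, fine for simple files
                insert.setString(1, cols[0]);
                insert.setString(2, cols[1]);
                insert.setString(3, cols[2]);
                insert.addBatch();
                if (++batched % 1000 == 0) {
                    insert.executeBatch(); // send 1,000 rows per round trip
                }
            }
            insert.executeBatch();
            conn.commit();
        }
    }
}
```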

MySQL | REGEXP vs LIKE

陌路散爱 submitted on 2019-11-27 03:52:27
Question: I have a table CANDIDATE in my db, which is running under MySQL 5.5, and I am trying to get rows from the table where 'ram' is contained in firstname. I can run the two queries below, but I would like to know which query should be used long term with respect to optimization. SELECT * FROM CANDIDATE c WHERE firstname REGEXP 'ram'; SELECT * FROM CANDIDATE c WHERE firstname LIKE '%ram%'; Answer 1: REGEXP and LIKE are used for totally different cases. LIKE is used to add wildcards to a string whereas REGEXP is
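A small parameterised version of the two searches (JDBC, to stay in one language). Neither form can use an ordinary B-tree index on firstname, because the leading '%' wildcard and the unanchored regular expression both force a scan of every row; for a plain substring match, LIKE is usually the cheaper of the two since no regex engine is involved:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CandidateSearch {

    public static ResultSet findByLike(Connection conn, String term) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT * FROM CANDIDATE WHERE firstname LIKE ?");
        ps.setString(1, "%" + term + "%"); // e.g. term = "ram"
        return ps.executeQuery();
    }

    public static ResultSet findByRegexp(Connection conn, String term) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT * FROM CANDIDATE WHERE firstname REGEXP ?");
        ps.setString(1, term);
        return ps.executeQuery();
    }
}
```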

Postgresql Truncation speed

ぐ巨炮叔叔 submitted on 2019-11-26 19:34:27
We're using PostgreSQL 9.1.4 as our db server. I've been trying to speed up my test suite, so I've started profiling the db a bit to see exactly what's going on. We are using database_cleaner to truncate tables at the end of tests. YES, I know transactions are faster; I can't use them in certain circumstances, so I'm not concerned with that. What I AM concerned with is why TRUNCATION takes so long (longer than using DELETE) and why it takes EVEN LONGER on my CI server. Right now, locally (on a MacBook Air) a full test suite takes 28 minutes. Tailing the logs, each time we truncate tables... ie:
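The test suite in the question is Ruby, but the trade-off is visible from any client: in PostgreSQL, TRUNCATE has a roughly fixed per-table cost (it allocates new relation files and forces extra I/O) no matter how few rows the table holds, so for the tiny tables of a test run a plain DELETE is often faster, and if you do truncate, covering every table in a single statement beats one TRUNCATE per table. A sketch in Java/JDBC of the two cleaning strategies:

```java
import java.sql.Connection;
import java.sql.Statement;
import java.util.List;

public class TestDbCleaner {

    // Option 1: one DELETE per table; cheap when test tables hold only a few rows.
    public static void cleanWithDelete(Connection conn, List<String> tables) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            for (String table : tables) {
                stmt.executeUpdate("DELETE FROM " + table);
            }
        }
    }

    // Option 2: a single TRUNCATE covering every table at once.
    public static void cleanWithTruncate(Connection conn, List<String> tables) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("TRUNCATE TABLE " + String.join(", ", tables)
                + " RESTART IDENTITY CASCADE");
        }
    }
}
```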