database-performance

Need recommendations on pushing the envelope with SqlBulkCopy on SQL Server

Submitted by 痞子三分冷 on 2019-12-04 10:52:50
I am designing an application, one aspect of which is that it is supposed to be able to receive massive amounts of data into a SQL database. I designed the database structure as a single table with a bigint identity, something like this one: CREATE TABLE MainTable ( _id bigint IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, field1, field2, ... ) I will omit how I intend to perform queries, since that is irrelevant to my question. I have written a prototype which inserts data into this table using SqlBulkCopy. It seemed to work very well in the lab; I was able to insert tens of millions

RDBMS impact on Golang [closed]

Submitted by ℡╲_俬逩灬. on 2019-12-04 08:11:15
Closed. This question is opinion-based and is not currently accepting answers. Closed 5 years ago. I'm not going to go on at length here about what I've tested and my number crunching. I'm more interested in actual, up-to-date real-world performance. I've read tons of articles already, and some of them are pretty skeptical while others are strongly in favor of one library. I'm currently testing a bit with gorp, yet I have no clue how to compare the performances

Table Indexing on PostgreSQL for performance

Submitted by 佐手、 on 2019-12-04 06:16:14
I am solving performance issues on PostgreSQL and I have the following table: CREATE TABLE main_transaction ( id integer NOT NULL DEFAULT nextval('main_transaction_id_seq'::regclass), description character varying(255) NOT NULL, request_no character varying(18), account character varying(50), .... ) The table above has 34 columns, including 3 FOREIGN KEYs, and it holds over 1 million rows of data. I have the following conditional SELECT query: SELECT * FROM main_transaction WHERE upper(request_no) LIKE
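For a predicate of the form upper(request_no) LIKE 'PREFIX%', a plain b-tree index on request_no is not usable; an expression index is the usual remedy. A minimal sketch, assuming the pattern is anchored on the left (the index names below are illustrative):

```sql
-- Expression index so the planner can use an index scan for
-- predicates like: WHERE upper(request_no) LIKE 'ABC%'
CREATE INDEX idx_main_transaction_request_no_upper
    ON main_transaction (upper(request_no) text_pattern_ops);

-- If the pattern can start with a wildcard ('%123%'), a trigram index
-- is the usual alternative (requires the pg_trgm extension):
-- CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- CREATE INDEX idx_main_transaction_request_no_trgm
--     ON main_transaction USING gin (upper(request_no) gin_trgm_ops);
```

The text_pattern_ops operator class matters when the database does not use the C locale; without it a b-tree index generally cannot serve LIKE prefix searches.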

Firebase Android: slow “join” using many listeners, seems to contradict documentation

Submitted by *爱你&永不变心* on 2019-12-04 05:27:11
Implementing an Android+Firebase app which has a many-to-many relationship: User <-> Widget (Widgets can be shared with multiple Users). Considerations: list all the Widgets that a User has; a User can only see the Widgets that are shared with him/her; be able to see all Users with whom a given Widget is shared; a single Widget can be owned/administered by multiple Users with equal rights (modify the Widget and change with whom it is shared), similar to how Google Drive shares with specific users. One of the approaches to implement fetching (join-style) would be to follow this advice: https://www

Does a postgres foreign key imply an index?

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-04 04:00:36
I have a postgres table (let's call this table Events) with a composite foreign key to another table (let's call this table Logs). The Events table looks like this: CREATE TABLE Events ( ColPrimary UUID, ColA VARCHAR(50), ColB VARCHAR(50), ColC VARCHAR(50), PRIMARY KEY (ColPrimary), FOREIGN KEY (ColA, ColB, ColC) REFERENCES Logs(ColA, ColB, ColC) ); In this case, I know that I can efficiently search for Events by the primary key and join to Logs. What I am interested in is whether this foreign key creates an index on the Events table which can be useful even without joining. For example, would the
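For reference, PostgreSQL creates indexes automatically only for PRIMARY KEY and UNIQUE constraints (that is, on the referenced Logs side); declaring the foreign key does not index ColA, ColB, ColC on Events. A minimal sketch of adding one explicitly (the index name is illustrative):

```sql
-- Composite index on the referencing columns; useful for the FK join to Logs
-- and, via the leftmost-prefix rule, for filters on ColA or (ColA, ColB) alone.
CREATE INDEX idx_events_cola_colb_colc
    ON Events (ColA, ColB, ColC);
```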

Oracle Autoincrement Functionality: Triggers or Oracle JDBC CallableStatement in 11.2?

Submitted by 本小妞迷上赌 on 2019-12-04 01:55:04
Which is the best way (in terms of insert performance) to implement autoincrement functionality in Oracle (11.2) when you need to retrieve the newly generated key using JDBC? I know there are identity columns in Oracle 12, but I'm stuck with 11.2 right now. Like many others, I have had no luck getting the JDBC getGeneratedKeys() to work with Oracle. I ended up having a trigger in my Oracle (11.2) database that acts like a MySQL autoincrement function and inserts the NextVal from a table
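For context, the pattern described above is usually a sequence plus a BEFORE INSERT trigger. A minimal sketch of what that typically looks like on 11.2; the sequence, table, and trigger names here are illustrative, not taken from the question:

```sql
CREATE SEQUENCE my_table_seq START WITH 1 INCREMENT BY 1 CACHE 100;

CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
WHEN (NEW.id IS NULL)
BEGIN
  -- 11g allows assigning a sequence value directly, without SELECT ... FROM dual
  :NEW.id := my_table_seq.NEXTVAL;
END;
/
```

From JDBC, the generated value is then commonly read back with an INSERT ... RETURNING id INTO ? statement executed as a CallableStatement, which avoids a second round trip to query the sequence.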

T-SQL code is extremely slow when saved as an Inline Table-valued Function

Submitted by 北城以北 on 2019-12-04 01:09:56
I can't seem to figure out why SQL Server chooses a completely different execution plan when wrapping my code in an ITVF. When running the code inside the ITVF on its own, the query runs in 5 seconds. If I save it as an ITVF, it will run for 20 minutes and not yield a result. I'd prefer to have this in an ITVF for code reuse. Any ideas why saving the code as an ITVF would cause severe performance issues? CREATE FUNCTION myfunction ( @start_date date, @stop_date date ) RETURNS TABLE AS RETURN
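A common first diagnostic step (not a guaranteed fix) is to call the function with the same literal dates used in the stand-alone test and force a fresh plan, to see whether the regression comes from how the parameters are handled rather than from the function body itself. A hedged sketch, assuming the signature shown above and the default dbo schema; the dates are illustrative:

```sql
SELECT *
FROM dbo.myfunction('2019-01-01', '2019-12-31')   -- same values as the ad-hoc run
OPTION (RECOMPILE);                                -- compile a plan for these exact values
```

If this runs as fast as the ad-hoc query, the slowdown points at parameter sniffing or cardinality estimates for the inlined predicates rather than at the ITVF wrapper as such.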

MYSQL Huge SQL Files Insertion | MyISAM speed suddenly slow down for Insertions (strange issue)

Submitted by 人走茶凉 on 2019-12-03 21:55:20
I'm facing a very strange problem. I asked a question here about speeding up insertion in MySQL, specifically about inserting huge SQL files multiple GB in size. The suggestion was to use the MyISAM engine. I did the following: ALTER TABLE revision ENGINE=MyISAM; used ALTER TABLE .. DISABLE KEYS (MyISAM only); set bulk_insert_buffer_size to 500M (MyISAM only); set unique_checks = 0 (not checked); SET autocommit=0; ... SQL import statements ... COMMIT; SET foreign_key_checks=0; This sped the process up to 5 minutes where it previously took 2 hours, and I'm impressed. But now when I tried the same
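Consolidated, the steps listed above amount to roughly the following session script (the dump file path is illustrative, and SOURCE is a mysql client command rather than SQL):

```sql
ALTER TABLE revision ENGINE=MyISAM;
ALTER TABLE revision DISABLE KEYS;                 -- defer index maintenance (MyISAM only)
SET bulk_insert_buffer_size = 1024 * 1024 * 500;   -- 500M (MyISAM only)
SET unique_checks = 0;
SET foreign_key_checks = 0;                        -- no effect on MyISAM; relevant only for InnoDB tables
SET autocommit = 0;
SOURCE /path/to/huge_dump.sql;                     -- ... SQL import statements ...
COMMIT;
ALTER TABLE revision ENABLE KEYS;                  -- rebuild the indexes in one pass afterwards
```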

MYSQL Insert Huge SQL Files of GB in Size

Submitted by 耗尽温柔 on 2019-12-03 21:01:54
I'm trying to create a Wikipedia DB copy (around 50 GB), but I'm having problems with the largest SQL files. I've split the multi-GB files into chunks of 300 MB using the Linux split utility, e.g. split -d -l 50 ../enwiki-20070908-page page.input. On average, a 300 MB file takes 3 hours on my server. I'm on Ubuntu 12.04 Server with MySQL 5.5. I'm importing like this: mysql -u username -ppassword database < category.sql Note: these files consist of INSERT statements and are not CSV files. Wikipedia offers database dumps for download, so everybody can create a copy of Wikipedia. You can

ActiveRecord query much slower than straight SQL?

Submitted by 馋奶兔 on 2019-12-03 17:47:42
I've been working on optimizing my project's DB calls and I noticed a "significant" difference in performance between the two identical calls below: connection = ActiveRecord::Base.connection() pgresult = connection.execute( "SELECT SUM(my_column) FROM table WHERE id = #{id} AND created_at BETWEEN '#{lower}' and '#{upper}'") and the second version: sum = Table.where(:id => id, :created_at => lower..upper).sum(:my_column) The method using the first version on average takes 300ms to execute
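For comparison, the relation version should compile to roughly the same statement, just built by the query builder rather than interpolated by hand (a sketch; the exact SQL, quoting, and use of bind parameters depend on the Rails version and adapter):

```sql
-- Approximate SQL generated by Table.where(...).sum(:my_column)
SELECT SUM(my_column)
FROM table
WHERE id = $1
  AND created_at BETWEEN $2 AND $3;
```

Since both paths end up running essentially the same query, any large timing gap is usually in the Ruby side (relation construction, type casting, and result object instantiation) rather than in the database.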