large-data-volumes

C# Charting - Reasonable Large Data Set and Real-time

爱⌒轻易说出口 submitted on 2020-01-23 03:43:45
Question: I'm looking for a C# WinForms charting component, either commercial or open source, that can handle relatively large data sets and is reasonably scalable with regard to chart rendering and updates. The number of data sets to be displayed would be around 30, with between 15 and 20 updates per second for each data set. A line chart component would be required for this.

Answer 1: I have used ZedGraph in the past for realtime stock charts with large histories. It can be a little slow if the

Parallel.ForEach throws exception when processing extremely large sets of data

怎甘沉沦 submitted on 2020-01-14 03:59:14
Question: My question centers on some Parallel.ForEach code that used to work without fail; now that our database has grown to five times its former size, it breaks almost regularly.

Parallel.ForEach<Stock_ListAllResult>(
    lbStockList.SelectedItems.Cast<Stock_ListAllResult>(),
    SelectedStock =>
    {
        ComputeTipDown( SelectedStock.Symbol );
    } );

The ComputeTipDown() method gets all daily stock tick data for the symbol, iterates through each day, gets the previous day's data, does a few calculations and then inserts
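
The answer itself is not part of this excerpt. Purely as an illustration of one common mitigation for this kind of failure, capping how many database-bound tasks run at once rather than starting one per selected item, here is a hedged Java sketch; the symbol list, the computeTipDown stand-in and the pool size are assumptions, not code from the question.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TipDownRunner {

    // Hypothetical stand-in for the per-symbol work done by ComputeTipDown().
    static void computeTipDown(String symbol) {
        // fetch daily data, compare with the previous day, insert results ...
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> symbols = List.of("AAPL", "MSFT", "GOOG"); // assumed sample input

        // A fixed-size pool bounds the number of simultaneous database
        // operations, so the load on the server stays constant as the data grows.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String symbol : symbols) {
            pool.submit(() -> computeTipDown(symbol));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}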

Large primary key: 1+ billion rows MySQL + InnoDB?

拟墨画扇 submitted on 2020-01-10 10:56:29
Question: I was wondering if InnoDB would be the best way to format the table. The table contains one field, the primary key, and will get roughly 816k rows a day (estimated), so it will get very large very quickly. I'm also working on a file-storage approach (would this be faster?). The table is going to store the ID numbers of Twitter IDs that have already been processed. Also, is there any estimate of the memory usage of a SELECT min('id') statement? Any other ideas are greatly appreciated!

Answer 1: The only definitive answer is to try both
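
A minimal Java/JDBC sketch of the use case described above, a one-column primary-key table used to deduplicate processed Twitter IDs. The table name processed_ids, the BIGINT id column and the connection details are assumptions, and INSERT IGNORE is MySQL-specific.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ProcessedIds {
    public static void main(String[] args) throws SQLException {
        long tweetId = 1234567890123L; // assumed sample id

        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password")) {
            // The primary key enforces uniqueness; INSERT IGNORE folds the
            // "already processed?" check and the insert into one statement.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT IGNORE INTO processed_ids (id) VALUES (?)")) {
                ps.setLong(1, tweetId);
                boolean alreadyProcessed = ps.executeUpdate() == 0; // 0 affected rows -> duplicate
                System.out.println("already processed: " + alreadyProcessed);
            }
        }
    }
}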

How can we handle large matrices in MATLAB (larger than 10000x10000)?

旧城冷巷雨未停 submitted on 2020-01-04 06:11:09
Question: In my program I am faced with some matrices that are larger than 10000x10000. I cannot transpose or invert them; how can this problem be overcome?

??? Error using ==> ctranspose
Out of memory. Type HELP MEMORY for your options.

Error in ==> programname1 at 70
B = cell2mat(C(:,:,s))';

Out of memory. Type HELP MEMORY for your options.

Example 1: Run the MEMORY command on a 32-bit Windows system:

>> memory
Maximum possible array:            677 MB (7.101e+008 bytes) *
Memory available for all arrays:  1602

Fetch only N rows at a time (MySQL)

*爱你&永不变心* submitted on 2020-01-03 17:12:22
Question: I'm looking for a way to fetch all data from a huge table in smaller chunks. Please advise.

Answer 1: To answer the question from the title, use the LIMIT operator:

SELECT * FROM table LIMIT 0,20

As for the one in the body, it's too broad to ask for a specific code example, isn't it?

Source: https://stackoverflow.com/questions/3599548/fetch-only-n-rows-at-a-time-mysql
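
A hedged Java/JDBC sketch of the same chunking idea, paging through a table with LIMIT and an increasing offset; the table and column names, connection URL and chunk size are assumptions for illustration.

import java.sql.*;

public class ChunkedFetch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/mydb"; // assumed connection details
        int chunkSize = 1000;
        long offset = 0;

        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            while (true) {
                int rowsInChunk = 0;
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, payload FROM big_table ORDER BY id LIMIT ? OFFSET ?")) {
                    ps.setInt(1, chunkSize);
                    ps.setLong(2, offset);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rowsInChunk++;
                            // process rs.getLong("id"), rs.getString("payload"), ...
                        }
                    }
                }
                if (rowsInChunk < chunkSize) {
                    break; // last (partial) chunk reached
                }
                offset += chunkSize;
            }
        }
    }
}

On very large tables, keyset pagination (WHERE id > lastSeenId ORDER BY id LIMIT n) usually scales better than a growing OFFSET, since the server does not have to skip over all the earlier rows on every query.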

Getting random results from large tables

ぃ、小莉子 submitted on 2020-01-02 06:33:13
Question: I'm trying to get 4 random results from a table that holds approximately 7 million records. Additionally, I also want to get 4 random records from the same table filtered by category. Now, as you would imagine, random sorting on a table this large causes the queries to take a few seconds, which is not ideal. One other method I thought of for the non-filtered result set would be to just get PHP to select some random numbers between 1 - 7,000,000 or so and then do an IN(...) with the
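
The question mentions PHP; as a language-neutral sketch of that same pick-random-ids-then-IN(...) approach, here is a hedged Java/JDBC version. The items table, its id range, the connection details and the over-sampling factor are assumptions.

import java.sql.*;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;

public class RandomRows {
    public static void main(String[] args) throws SQLException {
        long maxId = 7_000_000L; // assumed upper bound of the id range
        int wanted = 4;

        // Over-sample a little in case some of the drawn ids no longer exist.
        Set<Long> candidates = new HashSet<>();
        while (candidates.size() < wanted * 3) {
            candidates.add(ThreadLocalRandom.current().nextLong(1, maxId + 1));
        }

        String placeholders = String.join(",", Collections.nCopies(candidates.size(), "?"));
        String sql = "SELECT * FROM items WHERE id IN (" + placeholders + ") LIMIT " + wanted;

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/mydb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            int i = 1;
            for (Long id : candidates) {
                ps.setLong(i++, id);
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // use the row ...
                }
            }
        }
    }
}

This avoids ORDER BY RAND() over the whole table, but it only works well when ids are reasonably dense; large gaps make the over-sampling factor harder to choose.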

Handling large records in a Java EE application

喜你入骨 submitted on 2019-12-29 05:00:10
Question: There is a table phonenumbers with two columns: id and number. There are about half a million entries in the table. The database is MySQL. The requirement is to develop a simple Java EE application, connected to that database, that allows a user to download all number values in comma-separated form by following a specific URL. If we get all the values in a huge String array, then concatenate them (with a comma between all the values) into a String and then send it down to the user, does it
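
The excerpt cuts off before the answer. As an illustration of the usual alternative to building one huge String, writing each value to the response as it is read, here is a hedged servlet sketch; the URL mapping, the JNDI data-source name and the output content type are assumptions.

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/numbers") // assumed URL; the question only says "a specific URL"
public class NumbersServlet extends HttpServlet {

    @Resource(name = "jdbc/phonedb") // assumed data-source name
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        resp.setContentType("text/plain");
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT number FROM phonenumbers")) {
            PrintWriter out = resp.getWriter();
            boolean first = true;
            while (rs.next()) {
                // Stream each value as it is read, so the full list is never
                // held in memory as one concatenated String.
                if (!first) {
                    out.print(',');
                }
                out.print(rs.getString(1));
                first = false;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Because the container flushes the response buffer as it fills, memory use stays roughly constant regardless of how many rows the table holds.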

JDBC Batch Insert OutOfMemoryError

妖精的绣舞 submitted on 2019-12-29 03:55:20
Question: I have written a method insert() in which I am trying to use JDBC batching for inserting half a million records into a MySQL database:

public void insert(int nameListId, String[] names) {
    String sql = "INSERT INTO name_list_subscribers (name_list_id, name, date_added)" +
                 " VALUES (?, ?, NOW())";
    Connection conn = null;
    PreparedStatement ps = null;
    try {
        conn = getConnection();
        ps = conn.prepareStatement(sql);
        for (String s : names) {
            ps.setInt(1, nameListId);
            ps.setString(2, s);
            ps.addBatch();
        }
        ps
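
The excerpt stops mid-method, but the failure it reports (an OutOfMemoryError while batching half a million rows in one go) is commonly addressed by executing the batch every few thousand rows instead of all at once. A hedged sketch of that idea, reusing the question's SQL and parameters; the stand-in getConnection(), the connection details and the chunk size are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedBatchInsert {

    // Stand-in for the question's getConnection(); URL and credentials are assumptions.
    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "user", "password");
    }

    public void insert(int nameListId, String[] names) throws SQLException {
        String sql = "INSERT INTO name_list_subscribers (name_list_id, name, date_added)"
                   + " VALUES (?, ?, NOW())";
        int batchSize = 5_000; // assumed chunk size; tune for your heap and driver

        try (Connection conn = getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            int count = 0;
            for (String s : names) {
                ps.setInt(1, nameListId);
                ps.setString(2, s);
                ps.addBatch();
                if (++count % batchSize == 0) {
                    ps.executeBatch(); // flush periodically so buffered statements are released
                }
            }
            ps.executeBatch(); // flush the remaining partial chunk
        }
    }
}

With MySQL Connector/J, this pattern is often combined with the rewriteBatchedStatements=true connection property, which lets the driver collapse each executed batch into far fewer round trips.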