Maximum number of rows in an MS Access database engine table?

Some comments:

  1. Jet/ACE files are organized in data pages, which means there is a certain amount of slack space when your record boundaries are not aligned with your data pages.

  2. Row-level locking will greatly reduce the number of possible records, since it forces one record per data page.

  3. In Jet 4, the data page size was increased to 4 KB (from 2 KB in Jet 3.x). As Jet 4 was the first Jet version to support Unicode, this meant that a 2 GB file could hold roughly 1 billion double-byte characters, or roughly twice that with Unicode compression turned on. So the number of records is going to be affected by whether or not you have Unicode compression on.

  4. Since we don't know how much room in a Jet/ACE file is taken up by headers and other metadata, nor precisely how much room index storage takes, what is practical is always going to come in under any theoretical calculation.

  5. To get the most efficient possible storage, you'd want to use code to create your database rather than the Access UI, because Access creates certain properties that pure Jet does not need. Not that there are many of these: properties at the Access defaults are usually not set at all (a property is created only when you change it from its default value, which you can see by cycling through a field's Properties collection -- many of the properties listed for a field in the Access table designer are absent from the collection because they have never been set). You might also want to limit yourself to Jet-specific data types (hyperlink fields are Access-only, for instance). Both points are illustrated in the sketch after this list.
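
Here's a minimal DAO sketch (VBA) of those two points: creating the file from code rather than the Access UI, and cycling through a field's Properties collection. The path and names are mine, not anything from the original setup:

Sub CreateLeanJetDb()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim fld As DAO.Field
    Dim prp As DAO.Property

    ' Creating the file with DAO skips the Access-specific properties
    ' that the Access UI would otherwise add.
    Set db = DBEngine.CreateDatabase("C:\data\lean.mdb", dbLangGeneral)

    Set tdf = db.CreateTableDef("t")
    tdf.Fields.Append tdf.CreateField("id", dbLong)
    db.TableDefs.Append tdf

    ' Only properties that have actually been set show up here; anything
    ' still at its Access default is simply absent from the collection.
    Set fld = db.TableDefs("t").Fields("id")
    For Each prp In fld.Properties
        Debug.Print prp.Name
    Next prp

    db.Close
End Sub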

I just wasted an hour mucking around with this, using Rnd() to populate 4 fields defined as type Byte, with a composite PK on the four fields, and it took forever to append enough records to get up to any significant portion of 2 GB. At over 2 million records, the file was under 80 MB. I finally quit after reaching 7 million records, and the file compacted to 184 MB. The amount of time it would take to get up near 2 GB is just more than I'm willing to invest!
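
For reference, that test would look roughly like this in Access VBA with DAO. I'm assuming a table t4 with four Byte fields b1 through b4 and a composite PK on all four; the names are mine:

Sub FillRandomBytes()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim i As Long

    Set db = CurrentDb
    Set rs = db.OpenRecordset("t4", dbOpenTable)
    Randomize
    On Error Resume Next        ' the composite PK rejects duplicate rows
    For i = 1 To 7000000        ' the run described above gave up around here
        rs.AddNew
        rs!b1 = Int(Rnd * 256)
        rs!b2 = Int(Rnd * 256)
        rs!b3 = Int(Rnd * 256)
        rs!b4 = Int(Rnd * 256)
        rs.Update
        If Err.Number <> 0 Then rs.CancelUpdate: Err.Clear
    Next i
    rs.Close
End Sub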

Here's my attempt:

I created a single-column (INTEGER) table with no key:

CREATE TABLE a (a INTEGER NOT NULL);

Inserted integers in sequence starting at 1.
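
The load itself is just a loop; here's a rough VBA/DAO sketch, with the transaction batching being my own addition to make a run this long tolerable:

Sub FillSequential()
    Dim ws As DAO.Workspace
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim i As Long

    Set ws = DBEngine.Workspaces(0)
    Set db = CurrentDb
    Set rs = db.OpenRecordset("a", dbOpenTable)
    ws.BeginTrans
    For i = 1 To 65632875          ' the run below was stopped at this point
        rs.AddNew
        rs!a = i
        rs.Update
        If i Mod 10000 = 0 Then    ' commit every 10,000 rows
            ws.CommitTrans
            ws.BeginTrans
        End If
    Next i
    ws.CommitTrans
    rs.Close
End Sub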

I stopped it (arbitrarily after many hours) when it had inserted 65,632,875 rows. The file size was 1,029,772 KB.

I compacted the file, which reduced it very slightly to 1,029,704 KB.
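
Compacting can be done from code as well; a one-liner, with hypothetical paths:

Sub CompactCopy()
    ' CompactDatabase writes a new, compacted copy alongside the original;
    ' the source file must be closed first.
    DBEngine.CompactDatabase "C:\data\a.mdb", "C:\data\a_compact.mdb"
End Sub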

I added a PK:

ALTER TABLE a ADD CONSTRAINT p PRIMARY KEY (a);

which increased the file size to 1,467,708 KB.

A linear extrapolation (65,632,875 rows in 1,467,708 KB, scaled up to the 2,097,152 KB limit) works out to roughly 94 million rows; allowing for overhead that grows with the file, this suggests the maximum for this single-column schema is somewhere around the 80 million mark.

As others have stated, it's a combination of your schema and the number of indexes.

A friend had about 100,000,000 historical stock prices (daily closing quotes) in an MDB, which approached the 2 GB limit.

He pulled them down using some code found in a Microsoft Knowledge Base article. I was rather surprised that whatever server he was using didn't cut him off after the first 100K records.

He could view any record in under a second.

It's been some years since I last worked with Access, but larger database files always used to have more problems and be more prone to corruption than smaller files.

Unless the database file is only being accessed by one person or stored on a robust network, you may find this becomes a problem before the 2 GB database size limit is reached.

We're not necessarily talking about theoretical limits here; we're talking about the real-world limits of the 2 GB maximum file size AND the database schema.

  • Is your db a single table or multiple?
  • How many columns does each table have?
  • What are the datatypes?

The schema is on an even footing with the 2 GB file-size limit in determining how many rows you can have.

We have used Access MDBs to store exports of MS-SQL data for statistical analysis by some of our corporate users. In those cases we've exported our core table structure, typically four tables with 20 to 150 columns, varying from a hundred bytes per row to upwards of 8,000 bytes per row. In these cases, only a few hundred thousand rows of data were permissible per MDB that we would ship them. That squares with the arithmetic: at roughly 8,000 bytes per row, the 2 GB limit works out to about 260,000 rows.

So, I just don't think this question has an answer in the absence of your schema.

It all depends. Theoretically, using a single column with a 4-byte data type, you could store on the order of 500 million rows (2 GB / 4 bytes), but there is a lot of per-row and per-page overhead in the file even before you do anything, and the one-column experiment above got nowhere near that. I read somewhere that you could have 1,000,000 rows, but again, it all depends...

You can also link databases together, limiting yourself only by available disk space; see the sketch below.
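
A minimal DAO sketch of that: linking a table in from a second MDB so queries can span files. The path and table names are hypothetical:

Sub LinkOverflowTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    Set tdf = db.CreateTableDef("Prices2")          ' local name for the link
    tdf.Connect = ";DATABASE=C:\data\prices2.mdb"   ' the second 2 GB file
    tdf.SourceTableName = "Prices"                  ' table inside that file
    db.TableDefs.Append tdf
End Sub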

Practical = 'useful in practice' - so the best you're going to get is anecdotal. Everything else is just prototyping and testing results.

I agree with others - determining 'a max quantity of records' is completely dependent on schema - # tables, # fields, # indexes.

Another anecdote for you: I recently hit a 1.6 GB file size with 2 primary data stores (tables) of 36 and 85 fields respectively, with some subset copies in 3 additional tables.

Who cares whether the data is unique or not; that's only material if the context says it is. Data is data is data, unless duplication affects handling by the indexer.

The total row count making up that 1.6 GB is 1.72M.

When working with 4 large Db2 tables, I not only found the limit but it caused me to look really bad to a boss who thought I could append all four tables (each with over 900,000 rows) into one large table. The real-life result was that, no matter how many times I tried, the table (which had exactly 34 columns: 30 text and 3 integer) would spit out the cryptic message "Cannot open database: unrecognized format, or the file may be corrupted". Bottom line: less than 1,500,000 records; in my case, just a bit more than 1,252,000 rows with 34 columns.
