How much faster is the memory usually than the disk?

旧巷少年郎 2020-11-28 23:50

IDE, SCSI, SSD, SATA, or all of those.

6 Answers
  • 2020-11-29 00:10

    Accessing RAM is on the order of nanoseconds (10^-9 seconds), while accessing data on disk or over the network is on the order of milliseconds (10^-3 seconds).

    from Node.JS Design Patterns
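
    To get a feel for this on your own machine, here is a minimal Python sketch (my own illustration, not from the book) that times many small random reads from an in-memory buffer versus a scratch file. The file name and sizes are arbitrary, os.pread is Unix-only, and the OS page cache will hide most of the disk latency unless the file is much larger than RAM or the caches are dropped first.

    import os, random, time

    SIZE = 256 * 1024 * 1024           # 256 MB test buffer / file
    READS = 10_000
    PATH = "latency_test.bin"          # hypothetical scratch file

    buf = bytearray(os.urandom(SIZE))  # "RAM" side: a plain in-memory buffer
    with open(PATH, "wb") as f:        # "disk" side: the same data in a file
        f.write(buf)

    offsets = [random.randrange(SIZE - 4096) for _ in range(READS)]

    t0 = time.perf_counter()
    for off in offsets:
        _ = buf[off:off + 4096]        # random 4 KB slices from memory
    ram_s = time.perf_counter() - t0

    fd = os.open(PATH, os.O_RDONLY)
    t0 = time.perf_counter()
    for off in offsets:
        os.pread(fd, 4096, off)        # random 4 KB reads from the file
    disk_s = time.perf_counter() - t0
    os.close(fd)
    os.remove(PATH)

    print(f"memory: {ram_s / READS * 1e9:,.0f} ns/read, "
          f"file: {disk_s / READS * 1e9:,.0f} ns/read")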

  • 2020-11-29 00:21

    "RAM is 100 Thousand Times Faster than Disk for Database Access", from http://www.directionsmag.com/articles/ram-is-100-thousand-times-faster-than-disk-for-database-access/123964

  • 2020-11-29 00:24

    Random Access Memory (RAM) takes nanoseconds to read from or write to, while hard drive access speed (for IDE, SCSI and SATA drives, at least) is measured in milliseconds.

  • 2020-11-29 00:29

    2016 hardware update: actual sequential read/write throughput

    The Samsung 940 PRO SSD is now:

    • reading at 3,500 MB/sec
    • writing at 2,100 MB/sec

    RAM got faster too:

    • reading at 61,000 MB/sec
    • writing at 48,000 MB/sec

    So by this metric, RAM now looks to be about 20x faster than an SSD, not the 100,000x that applied back when @ChrisW wrote his answer. And today's SSDs are roughly 10 times faster than the RAM of that era.

    An important caveat: these figures measure sequential bandwidth only, not latency.
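
    For reference, a rough way to reproduce this kind of MB/sec comparison yourself is sketched below (my own illustration; results vary wildly with hardware, filesystem and page cache, so treat it as showing the metric, not as a calibrated benchmark):

    import os, time

    SIZE = 512 * 1024 * 1024             # 512 MB payload
    PATH = "throughput_test.bin"         # hypothetical scratch file
    data = bytes(SIZE)                   # a zero-filled block held in RAM

    t0 = time.perf_counter()
    copy = bytes(bytearray(data))        # sequential read + write in memory
    mem_mb_s = SIZE / (time.perf_counter() - t0) / 1e6

    t0 = time.perf_counter()
    with open(PATH, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())             # force the data out to the device
    write_mb_s = SIZE / (time.perf_counter() - t0) / 1e6

    t0 = time.perf_counter()
    with open(PATH, "rb") as f:
        f.read()                         # likely served from page cache, so optimistic
    read_mb_s = SIZE / (time.perf_counter() - t0) / 1e6

    os.remove(PATH)
    print(f"memory copy ~{mem_mb_s:,.0f} MB/s, file write ~{write_mb_s:,.0f} MB/s, "
          f"file read ~{read_mb_s:,.0f} MB/s")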

  • 2020-11-29 00:32

    I'm surprised: Figure 3 in the middle of this article, The Pathologies of Big Data, says that memory is only about 6 times faster when you're doing sequential access (350 Mvalues/sec for memory compared with 58 Mvalues/sec for disk); but it's about 100,000 times faster when you're doing random access.
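
    The gap comes from the access pattern, and the effect is easy to see with a small sketch like the one below (my own illustration, not from the article): the same file is read in identical 4 KB chunks, once in order and once shuffled. os.pread is Unix-only, and you need a file larger than RAM (or dropped caches), ideally on a spinning disk, to see the full sequential-vs-random difference.

    import os, random, time

    PATH = "pattern_test.bin"            # hypothetical scratch file
    CHUNK = 4096
    SIZE = 256 * 1024 * 1024             # 256 MB
    with open(PATH, "wb") as f:
        f.write(bytes(SIZE))

    def timed_read(offsets):
        fd = os.open(PATH, os.O_RDONLY)
        t0 = time.perf_counter()
        for off in offsets:
            os.pread(fd, CHUNK, off)     # same chunks, order decided by caller
        os.close(fd)
        return time.perf_counter() - t0

    seq = list(range(0, SIZE, CHUNK))    # in-order offsets
    rnd = seq[:]
    random.shuffle(rnd)                  # identical offsets, shuffled

    print(f"sequential: {timed_read(seq):.2f} s, random: {timed_read(rnd):.2f} s")
    os.remove(PATH)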

  • 2020-11-29 00:32

    It's not specifically about SCSI drives, but the Latency Numbers Every Programmer Should Know table may help you understand the speeds involved and the differences between the various latency figures, including those for storage.

    Latency Comparison Numbers (~2012)
    ----------------------------------
    L1 cache reference                           0.5 ns
    Branch mispredict                            5   ns
    L2 cache reference                           7   ns                      14x L1 cache
    Mutex lock/unlock                           25   ns
    Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
    Compress 1K bytes with Zippy             3,000   ns        3 us
    Send 1K bytes over 1 Gbps network       10,000   ns       10 us
    Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
    Read 1 MB sequentially from memory     250,000   ns      250 us
    Round trip within same datacenter      500,000   ns      500 us
    Read 1 MB sequentially from SSD*     1,000,000   ns    1,000 us    1 ms  ~1GB/sec SSD, 4X memory
    Disk seek                           10,000,000   ns   10,000 us   10 ms  20x datacenter roundtrip
    Read 1 MB sequentially from disk    20,000,000   ns   20,000 us   20 ms  80x memory, 20X SSD
    Send packet CA->Netherlands->CA    150,000,000   ns  150,000 us  150 ms
    

    Here is a great visual representation that will help you to better understand the scale: https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html
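
    One way to internalise the table is to rescale it so that an L1 cache hit takes one second; the short Python sketch below (my own helper, not part of the linked page) does that for a few of the rows:

    # Latencies in nanoseconds, taken from the ~2012 table above.
    LATENCY_NS = {
        "L1 cache reference":                      0.5,
        "Main memory reference":                   100,
        "Read 4K randomly from SSD":           150_000,
        "Read 1 MB sequentially from memory":  250_000,
        "Disk seek":                        10_000_000,
        "Read 1 MB sequentially from disk": 20_000_000,
        "Send packet CA->Netherlands->CA": 150_000_000,
    }

    SCALE = 1 / 0.5  # seconds of "human time" per nanosecond, so an L1 hit (0.5 ns) == 1 s
    for name, ns in LATENCY_NS.items():
        human_s = ns * SCALE
        if human_s < 3600:
            print(f"{name:<36} {human_s:>12,.0f} s")
        else:
            print(f"{name:<36} {human_s / 86400:>12,.1f} days")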
