disk

Maintain “links” when writing a B+ tree to disk?

怎甘沉沦 · Submitted 2021-02-08 08:20:30
Question: I have implemented a B+ tree in Java, but as usual it lives entirely in main memory. How can I store a B+ tree on disk? Each node of the tree contains pointers (main-memory addresses, or references to objects) to its children; how can I achieve something similar when the B+ tree resides on disk? What replaces main-memory addresses in B+ tree nodes when the tree is on disk? There is already a similar question posted here: B+Tree on-disk implementation in …
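The usual answer is that on disk a child "pointer" becomes a page number: the child's offset in the file divided by a fixed page size. A minimal sketch of that idea (all class and field names here are hypothetical; a real implementation would add a file header, a free-page list, and buffered I/O):

```java
import java.nio.ByteBuffer;

// On disk, child references are page numbers, not memory addresses.
// To follow a "pointer", seek to childPage * PAGE_SIZE and read the page.
public class DiskNode {
    static final int PAGE_SIZE = 4096;

    boolean leaf;
    int keyCount;
    long[] keys = new long[64];
    long[] childPages = new long[65]; // page numbers replace object references

    // Serialize this node into one fixed-size page.
    byte[] toPage() {
        ByteBuffer buf = ByteBuffer.allocate(PAGE_SIZE);
        buf.put((byte) (leaf ? 1 : 0));
        buf.putInt(keyCount);
        for (int i = 0; i < keyCount; i++) buf.putLong(keys[i]);
        for (int i = 0; i <= keyCount; i++) buf.putLong(childPages[i]);
        return buf.array();
    }

    // Rebuild a node from a page read back from the file.
    static DiskNode fromPage(byte[] page) {
        ByteBuffer buf = ByteBuffer.wrap(page);
        DiskNode n = new DiskNode();
        n.leaf = buf.get() == 1;
        n.keyCount = buf.getInt();
        for (int i = 0; i < n.keyCount; i++) n.keys[i] = buf.getLong();
        for (int i = 0; i <= n.keyCount; i++) n.childPages[i] = buf.getLong();
        return n;
    }

    public static void main(String[] args) {
        DiskNode n = new DiskNode();
        n.leaf = false;
        n.keyCount = 2;
        n.keys[0] = 10; n.keys[1] = 20;
        n.childPages[0] = 3; n.childPages[1] = 7; n.childPages[2] = 9;
        DiskNode copy = DiskNode.fromPage(n.toPage());
        System.out.println(copy.keyCount + " " + copy.childPages[2]);
    }
}
```

Fixed-size pages make the page number ↔ file offset mapping trivial, which is why real B+ tree implementations align nodes to the filesystem or device block size.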

Fio results are steadily increasing IOPS, not what I expected

ε祈祈猫儿з · Submitted 2021-01-29 19:51:11
Question: I'm trying to test my rbd storage with random read, random write, and mixed randrw, but the output is not what I expect: it is a steadily growing sequence of numbers. What is wrong with my steps? This is the fio job file that I ran:

    ; fio-rand-write.job for fiotest
    [global]
    name=fio-rand-write
    filename=fio-rand-write
    rw=randwrite
    bs=4K
    direct=1
    write_iops_log=rand-read

    [file1]
    size=1G
    ioengine=libaio
    iodepth=16

And the result of `head rand-read_iops.1.log` is this:

    2, 1, 1, 4096, 0
    2, 1, 1, 4096, 0
    2, 1, 1, 4096, …
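One thing worth checking (an assumption about the symptom, not a confirmed diagnosis of this exact job): in fio's log files the first column is a timestamp in milliseconds, which naturally keeps growing, and when `log_avg_msec` is not set fio logs one line per completed I/O rather than an aggregated rate. A sketch of the same job with per-second averaging so the value column becomes an IOPS figure per window:

```
; fio-rand-write.job — same job, with the IOPS log averaged over
; 1-second windows via log_avg_msec. Without it, each log line is a
; single I/O, and the growing first column is just the timestamp.
[global]
name=fio-rand-write
filename=fio-rand-write
rw=randwrite
bs=4K
direct=1
write_iops_log=fio-rand-write
log_avg_msec=1000

[file1]
size=1G
ioengine=libaio
iodepth=16
```

The log filename passed to `write_iops_log` is also renamed here to match the job, since the original `rand-read` label on a randwrite job is easy to misread.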

SQL table growing inconsistently

亡梦爱人 · Submitted 2021-01-29 16:34:28
Question: There is a SQL table that is growing rapidly and inconsistently compared to its intrinsic data. To make it short, a Windows service backs up the content of .txt files into this table; the files weigh from 1 KB to 45 KB approximately, hence the nvarchar(max) column used to store the content of those text files. Running the sp_spaceused command on this table gives:

    name   rows    reserved    data       index_size  unused
    Files  20402   814872 KB   813416 KB  1048 KB     408 KB

But when running …
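A quick sanity check on the quoted numbers, under the assumption that the .txt files are mostly single-byte text: nvarchar stores UTF-16, two bytes per character, so text roughly doubles in size once stored. Worked through:

```java
// Back-of-the-envelope check of the sp_spaceused figures above.
// nvarchar(max) is UTF-16 (2 bytes per character), so single-byte
// text files roughly double in size when stored in the column.
public class TableGrowthCheck {
    public static void main(String[] args) {
        long dataKb = 813416;  // "data" column from sp_spaceused
        long rows = 20402;     // "rows" column

        long avgKbPerRow = dataKb / rows;      // average storage per row
        long impliedRawKb = avgKbPerRow / 2;   // implied raw text size

        // ~39 KB stored per row, implying ~19-20 KB of raw text —
        // comfortably inside the stated 1-45 KB file range.
        System.out.println(avgKbPerRow + " " + impliedRawKb);
    }
}
```

If the implied raw size had fallen outside the 1-45 KB range, that would point at something other than UTF-16 doubling (e.g. versioning rows or uncompacted LOB pages).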

Latency of accessing main memory is almost the same order of sending a packet

我的梦境 · Submitted 2020-12-15 05:27:07
Question: Looking at Jeff Dean's famous latency guides:

    Latency Comparison Numbers (~2012)
    ----------------------------------
    L1 cache reference                         0.5 ns
    Branch mispredict                            5 ns
    L2 cache reference                           7 ns            14x L1 cache
    Mutex lock/unlock                           25 ns
    Main memory reference                      100 ns            20x L2 cache, 200x L1 cache
    Compress 1K bytes with Zippy             3,000 ns    3 us
    Send 1K bytes over 1 Gbps network       10,000 ns   10 us
    Read 4K randomly from SSD*             150,000 ns  150 us    ~1GB/sec SSD
    Read 1 MB sequentially from memory     250,000 ns  250 us
    Round trip …
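For reference, the ratios implied by the quoted numbers can be computed directly (values taken from the table above; this only restates the table, it does not settle the question in the title):

```java
// Ratios between entries of the latency table above, all in nanoseconds.
public class LatencyRatios {
    public static void main(String[] args) {
        long mainMemoryNs = 100;     // main memory reference
        long send1kNetNs = 10_000;   // send 1K bytes over 1 Gbps network
        long ssd4kNs = 150_000;      // read 4K randomly from SSD

        // Network send is 100x a single memory reference;
        // a random 4K SSD read is 1500x.
        System.out.println((send1kNetNs / mainMemoryNs) + " "
                + (ssd4kNs / mainMemoryNs));
    }
}
```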

OSHI: Get HWDiskStore for a given path

喜欢而已 · Submitted 2020-07-30 08:13:32
Question: I am using OSHI (https://github.com/oshi/oshi) to monitor the hardware. There is a method HWDiskStore[] getDisks(); (https://github.com/oshi/oshi/blob/master/oshi-core/src/main/java/oshi/hardware/Disks.java) to get the list of all hard drives on the machine. Is it possible to get the HWDiskStore for a particular path, in the way that FileStore getFileStore(Path path) (https://docs.oracle.com/javase/8/docs/api/java/nio/file/Files.html#getFileStore-java.nio.file.Path-) does? If not, what is a reliable way to match a …
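The NIO half of the question is standard library and worth showing on its own: `Files.getFileStore` resolves the mounted filesystem backing a path. Mapping that FileStore back to an OSHI HWDiskStore is not a built-in OSHI call as far as this excerpt shows; a common approach (an assumption, not a documented OSHI guarantee) is to compare the file store's mount point against the mount points of each disk's partitions.

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

// Resolve the FileStore (mounted filesystem) that backs a path.
// Matching this store to a physical HWDiskStore would then be done
// by comparing mount points against each disk's partition list.
public class FileStoreLookup {
    public static void main(String[] args) throws IOException {
        FileStore store = Files.getFileStore(Paths.get(".").toAbsolutePath());
        // type() is platform-dependent, e.g. "ext4", "apfs", or "NTFS".
        System.out.println(store.type() != null && store.getTotalSpace() > 0);
    }
}
```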

SD: does wear leveling work at the partition or disk level?

别来无恙 · Submitted 2020-05-16 22:02:50
Question: I previously wrote the question "I'm testing SOLID STATE write failure times (c code) and the device isn't failing", and it was brilliantly answered by Brendan. I was basically asking about writing to an SD card many times and when it would fail, as I am writing an application that will be writing data to an SD card for many years. My follow-up question is: does wear leveling work at the disk level or the partition level? If, for ease of setting my application up, I wanted 3 partitions, i.e. for examples …