What type of applications would benefit the most from an In-Memory Database's Predictable Latency?

Submitted by 99封情书 on 2020-01-06 02:55:11

Question


I'm doing some research on In-Memory databases and am wondering what type of applications would benefit the most from the predictable latency characteristic of In-Memory databases.

I can imagine online gaming, such as first-person shooter games. I'm just wondering what other types of applications would.


Answer 1:


Not much surprisingly, the very applications that depend on predictable latency benefit the most ( be it low, or not -- it is the latency jitter that hurts ... )
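As a quick, hands-on illustration of why the jitter ( rather than the absolute latency ) is what hurts, here is a minimal Python sketch -- not taken from any database engine, just a toy in-memory lookup -- showing how even a nominally constant-time operation exhibits a long latency tail:

```python
# A minimal sketch that makes latency jitter visible: time the very same
# in-memory dictionary lookup many times and compare the tail of the
# distribution against its median -- the spread is the jitter.
import time
import statistics

store = {i: i * i for i in range(1_000_000)}   # a toy "in-memory database"

samples_ns = []
for i in range(100_000):
    t0 = time.perf_counter_ns()
    _ = store[i % 1_000_000]                   # one O(1) in-memory lookup
    samples_ns.append(time.perf_counter_ns() - t0)

samples_ns.sort()
p50 = statistics.median(samples_ns)
p99 = samples_ns[int(0.99 * len(samples_ns))]
print(f"median {p50} ns | p99 {p99} ns | max {samples_ns[-1]} ns")
# even a constant-time lookup shows a long tail -- caches, the OS scheduler
# and the garbage collector all inject jitter into an otherwise O(1) step
```

The gap between the median and the p99 / max readings is exactly the ( lack of ) predictability that latency-guaranteeing in-memory engineering sets out to eliminate.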


low-latency edge:

HPC, where nanosecond and sub-nanosecond delays matter most, due to the immense scale of the computational complexity ( state-spaces scaling beyond the Peta-, Exa-, ... prefixes ), and where a guaranteed determinism of all in-RAM data-structure handling latencies enables truly PARALLEL code-execution, not just an opportunistic belief in "best-efforts"-based CONCURRENT code-execution.

DSP, where one simply cannot afford to block/wait at the cost of missing the next part of a unique and hardly repeatable signal flow ( imagine CERN's LHC: experimental signal-sensor readings / data-acquisition recording / data-conditioning + sanity checks / experiment control + processing / storage services )


mid-latency zone:

"hard" real-time constrained control systems ( F-35 avionics, that keeps otherwise inherently unstable aircraft somewhere up in the blue skies by ( fast enough ) fully coordinated endless-loop of sensor-network-based + pulse-controlled-effectors'-triggered state-transitions between many discrete ( still unstable ) states, that collectively draft an "envelope" illusion of a behaviour that we humans are used to call flying ( while the aircraft is not able to "fly" ( yes, it cannot extend it's own state-of-motion and continue in such motion any few moments further from any current state ( ... sure, except The Bird standing with engines off on the TARMAC ... but who would call that "flying" ??? ) because that would cause an inadvertent nose-down dig / flat-spin stall ... you name it all ),

"soft" real-time systems alike operating systems, deterministic schedulers, audio/video live-stream processors,

telephone switching ( admittedly, the packet-radio latency jitter of recent mobile access networks somewhat skews the synchronicity advances of global TELCO networks developed over the 1980s/1990s, but all of these were principally built on defined latency thresholds, and it was this very feature that for the first time allowed the Japanese PDH standards, the US PDH hierarchy and the old continent's ISDN / PDH hierarchies -- otherwise mutually impossible to interconnect -- to be connected seamlessly. Ref. the SDH/SONET architecture for details. )


high-latency zone: ( yes, even a high latency is nothing adverse, if kept under control )

SoC designs, where the "just-enough" principle rules constraint-based system design at the very edge of the resources available -- i.e. deploy the system with minimum processor resources, a minimum DRAM-powering budget and a minimum Bill-of-Materials / ASIC design, while benefiting from a known, deterministic latency, which ensures the "just-enough" design still meets the required stability and reliability of the deployed processing at a minimised cost.


Epilogue:

The author has not slipped, unknowingly or intentionally, into jargon or strange tag-juggling. The terms used in the text above are as common in the contemporary IT and TELCO domains as the alphabet is among the general audience. Sure, any professional specialisation adds plenty more tags and abbreviations, which have no other chance but to share their acronyms' appearance with similarly looking acronyms from other fields of science, technology or other human activity, but that is the cost of composing acronyms.

Due care with acronym disambiguation is thus a common practice in any scientific and/or engineering domain.

The text above used a few terms that are pretty common:

DSP: Digital Signal Processing
CERN: Conseil Européen pour la Recherche Nucléaire
LHC: Large Hadron Collider, the largest known particle accelerator on Earth ( CERN, CH )
F-35: Lockheed Martin F-35 JSF ( Joint Strike Fighter ) aircraft
SoC: System-on-Chip -- Xilinx ZynQ, FPGAs, EpiphanyIV MPPA, Kalray Bostan2(R) et al
ASIC: Application Specific Integrated Circuit

HPC: High-Performance Computing, the leading/bleeding edge of all computation-related sciences -- hardware, software and the theoretical foundations behind computational problems' computability ( Big-O ratings; ref. complexity theory )

nanosecond = 1 / 1.000.000.000 fraction of a second.

Contemporary TV-broadcasting and {CRT|LED}-monitor refresh rates
take about 1 / 24 .. 1 / 60 second per frame ( i.e. roughly 42.000.000 .. 17.000.000 ns ).

This said, the fastest contemporary CPU-clocks are about 5.000.000.000 [Hz].

That means
such a single CPU-core can compute
about 83.000.000 .. 208.000.000 single-CLK CPU-instructions
before the next visual output ( a picture ) has to be finished and put on screen.

That indeed provides a vast amount of time for the underlying gaming engine to compute / process whatever is needed.
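For the sceptical reader, the frame-budget arithmetic above can be re-checked with a few lines of Python ( a sketch: the 5 GHz clock and the 24 .. 60 fps rates are the figures quoted in the text, not measurements ):

```python
# A worked re-check of the per-frame instruction budget quoted above.
CPU_CLOCK_HZ = 5_000_000_000          # ~ fastest contemporary CPU clock
FRAME_S = {24: 1 / 24, 60: 1 / 60}    # frame periods at 24 and 60 fps

for fps, frame_s in FRAME_S.items():
    budget_ns    = frame_s * 1e9                # time budget per frame [ns]
    instructions = CPU_CLOCK_HZ * frame_s       # single-CLK instructions / frame
    print(f"{fps:>2} fps: {budget_ns:>12,.0f} ns/frame "
          f"-> ~{instructions:>12,.0f} single-CLK instructions")

# 24 fps: ~41,666,667 ns/frame -> ~208,333,333 instructions
# 60 fps: ~16,666,667 ns/frame ->  ~83,333,333 instructions
```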

Such comfort of having that much time is, however, not granted everywhere: it is not at all common in high-intensity computing and/or the high-capacity transport of binary streams ( super-computing and telecommunication networks et al ).

Even less fair is any such assumption in externally triggered processing, where the interleaving of events is not under one's control and is principally non-deterministic. The HFT trading realm is one brief example, where the lowest possible latencies are a must, so in-memory technology is the only feasible approach.

Even low-intensity HFT-trading software does not have plenty of time, as:

10% of events arrive in less than   +100 [ms]
20% of events arrive in less than   +200 [ms]
30% of events arrive in less than +1.100 [ms]
40% of events arrive within +1.200 .. +200.000 [ms] -- i.e. the rest arrives anywhere between 1.2 and 200 [sec]

( deeper details of controlled-latency software design exceed the format of this S/O post, but the visual demonstrations and the quantitative comparison of the ms, us and ns available for any kind of computation hopefully convey the message -- the key difference for a latency-aware software design )
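To make that quantitative message a bit more tangible, here is a minimal sketch of how such an inter-arrival-time distribution gets derived ( the event timestamps below are synthetic stand-ins, not real market-feed data ):

```python
# A minimal sketch of deriving inter-arrival-time deciles like those quoted
# above. The timestamps are synthetic stand-ins, not real market-feed data.
import random

random.seed(42)
# synthetic event timestamps [ms]: exponentially distributed gaps,
# mean ~400 ms, i.e. bursty arrivals with a long quiet tail
arrivals_ms, t = [], 0.0
for _ in range(10_000):
    t += random.expovariate(1 / 400)
    arrivals_ms.append(t)

gaps_ms = sorted(b - a for a, b in zip(arrivals_ms, arrivals_ms[1:]))

for decile in (10, 20, 30, 40, 90, 99):
    idx = int(decile / 100 * len(gaps_ms))
    print(f"{decile:>2}% of events arrive in less than {gaps_ms[idx]:>10.1f} [ms]")
# a latency-aware design has to survive the tail, not just the median
```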

To get an idea of how many computing steps a CPU / a cluster-of-CPUs / a grid-of-CPUs may undertake: contemporary hardware spends less than about 10 ns for a CPU to read a value from DRAM, and less than about 0.1 ns to fetch a value from on-CPU cache-memory.

While on-CPU cache sizes keep growing ( today's specifications state, for common consumer-electronics processors, { L2 | L3 }-cache sizes above 20 MB, which, for your kind consideration, is more than my first PC used to have available as its whole Hard Disk Drive capacity -- and that high-tech piece was in those days under the supervision of COCOM export regulations, requiring approval for re-export and carrying a Cold-War ban on any potential export outside the Western Bloc territories ), cache-allocation algorithms do not provide any a-priori deterministic certainty of having the whole database in-(cache)-memory.

So, the fastest access is about 0.1 ns into the local CPU-cache, but whether a given value is resident there is uncertain.

The next fastest access is about 10 ns into local DRAM memory, and GB .. TB sizes can fit into this memory-type.

The next fastest access is about 800 ns into NUMA distributed-memory infrastructures, where capacities above 1 000 TB .. 1 000 000 TB can fit ( PetaBytes to ExaBytes ) and all be served under a uniform and predictable access time of about 800 ns ( for databases of such size, this latency is both the lowest possible and, above all, uniform and predictable ).
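Putting those three yardsticks side by side ( a sketch: the figures are the order-of-magnitude values quoted above, not measurements of any concrete hardware ):

```python
# Tabulate the memory-access yardsticks quoted above and how many accesses
# fit into a 1 ms processing budget at each tier ( order-of-magnitude only ).
TIERS = [
    ("on-CPU cache ( uncertain residency )",   0.1, "~tens of MB"),
    ("local DRAM",                            10.0, "GB .. TB"),
    ("NUMA distributed memory",              800.0, "PB .. EB"),
]

BUDGET_NS = 1_000_000          # a 1 ms processing budget, in ns

for name, latency_ns, capacity in TIERS:
    accesses = BUDGET_NS / latency_ns
    print(f"{name:<40} {latency_ns:>7.1f} ns  {capacity:<12} "
          f"-> ~{accesses:>12,.0f} accesses / ms")
```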

So, when speaking about an indeed low and predictable latency, these are the yardsticks that measure it.

The CAPEX and OPEX costs ( by which any professional purchase of computing technology is assessed ) of such high-capacity + high-performance computing frameworks are prohibitive, but human civilisation has no better computational engines so far.



Source: https://stackoverflow.com/questions/36949143/what-type-of-applications-would-benefit-the-most-from-an-in-memory-database-pred
