Why are Interpreted Languages Slow?

Asked by 南旧 on 2020-12-24 11:05

I was reading about the pros and cons of interpreted languages, and one of the most common cons is the slowness, but why are programs in interpreted languages slow?

15 Answers
  • 2020-12-24 11:54

    In addition to the other answers, there's optimization: when you're compiling a program, you don't usually care how long the compilation takes - the compiler has lots of time to optimize your code. When you're interpreting code, the translation has to happen very quickly, so some of the more clever optimizations can't be applied.
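
    As a rough illustration (my own sketch, assuming CPython 3 and the standard timeit module), here is the kind of optimization a compiler could perform automatically but a plain interpreter won't: hoisting a repeated name/attribute lookup out of a loop.

    ```python
    import math
    import timeit

    def lookup_each_time(n=10_000):
        # The interpreter resolves the global `math` and the attribute
        # `sqrt` on every single iteration.
        total = 0.0
        for i in range(n):
            total += math.sqrt(i)
        return total

    def hoisted(n=10_000):
        # Hand-hoisted: bind the function to a local once, the kind of
        # transformation an optimizing compiler would do automatically.
        sqrt = math.sqrt
        total = 0.0
        for i in range(n):
            total += sqrt(i)
        return total

    print("lookup each iteration:", timeit.timeit(lookup_each_time, number=200))
    print("hoisted to a local:   ", timeit.timeit(hoisted, number=200))
    ```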

  • 2020-12-24 11:55

    Wikipedia says,

    Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.
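
    You can see that "mapping of identifiers to storage locations at run time" directly in CPython (a sketch of mine, assuming CPython 3 and the standard dis module):

    ```python
    import dis

    pi = 3.14159

    def area(r):
        # `pi` is found by name in the module's globals dict every time the
        # function runs; no fixed address is baked in at compile time.
        return pi * r * r

    dis.dis(area)  # the disassembly shows LOAD_GLOBAL pi: a run-time name lookup
    ```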

    Refer to this IBM doc:

    An interpreted program must be translated each time it is executed, so there is a higher overhead. Thus, an interpreted language is generally more suited to ad hoc requests than predefined requests.

    Java, though often considered an interpreted language, uses JIT (Just-In-Time) compilation, which mitigates the above issue by caching the machine code it generates from the bytecode.

    The JIT compiler reads the bytecode in sections (or, rarely, in full) and compiles it dynamically into machine code so the program can run faster. This can be done per file, per function, or even on any arbitrary code fragment; the code is compiled when it is about to be executed (hence the name "just-in-time"), then cached and reused later without needing to be recompiled.
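
    Python itself has no built-in JIT, but the "translate once, cache, reuse" half of the idea can be sketched with the built-in compile() (my analogy only, not how a real JIT such as HotSpot works internally):

    ```python
    import timeit

    source = "sum(i * i for i in range(1000))"

    # Re-translate the source text on every execution (pure interpreter style).
    retranslate = timeit.timeit(lambda: exec(source), number=2000)

    # Translate once, cache the code object, and reuse it (the caching part of
    # the JIT idea, minus the machine-code generation).
    cached = compile(source, "<cached>", "exec")
    reuse = timeit.timeit(lambda: exec(cached), number=2000)

    print("recompile every time:", retranslate)
    print("compile once, reuse: ", reuse)
    ```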

  • 2020-12-24 11:57

    All the answers seem to miss the really important point here: the detail of how "interpreted" code is implemented.

    Interpreted scripting languages are slower because their model of methods, objects and global variables is dynamic. In my opinion that is the real defining property of a scripting language, not the fact that it is interpreted. This dynamism requires many extra hash-table lookups on every variable access or method call, and it is the main reason these languages are all terrible at multithreading and rely on a GIL (Global Interpreter Lock). Those lookups are where most of the time is spent: each one is a random memory access, which really hurts when you take an L1/L2 cache miss.
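
    In CPython, for instance, that dynamic model is literally a hash table per object (my own sketch):

    ```python
    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)

    # Instance attributes live in a per-object hash table, so every `p.x`
    # is a dictionary lookup by string key.
    print(p.__dict__)                    # {'x': 1, 'y': 2}

    # The layout is not fixed: attributes and even methods can be added at
    # run time, which is why no fixed offset can be compiled in.
    p.z = 3
    Point.norm = lambda self: (self.x ** 2 + self.y ** 2) ** 0.5
    print(p.__dict__, p.norm())
    ```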

    Google's V8 JavaScript engine is so fast - approaching C speed - because of one key optimization: it treats the object data model as fixed and generates internal code that accesses it like the data structures of a natively compiled program. When a variable or method is added or removed, the compiled code is discarded and regenerated.
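
    The closest thing CPython offers to that "fixed layout" idea is __slots__, which trades away the per-instance dict for a fixed set of attribute offsets. This is only a rough analogue sketched by me; V8's hidden-class machinery is far more sophisticated and fully automatic:

    ```python
    class DynamicPoint:
        """Attributes stored in a per-instance dict: flexible, slower lookups."""
        def __init__(self, x, y):
            self.x, self.y = x, y

    class FixedPoint:
        """__slots__ fixes the layout up front, roughly like a hidden class."""
        __slots__ = ("x", "y")
        def __init__(self, x, y):
            self.x, self.y = x, y

    d = DynamicPoint(1, 2)
    d.z = 3                      # fine: the per-instance dict just grows

    f = FixedPoint(1, 2)
    try:
        f.z = 3                  # rejected: the layout was fixed at class creation
    except AttributeError as err:
        print("fixed layout:", err)
    ```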

    The technique is well explained in the Deutsch/Schiffman paper "Efficient Implementation of the Smalltalk-80 System".

    The question of why PHP, Python and Ruby aren't doing this is pretty simple to answer: the technique is extremely complicated to implement.

    And only Google has the money to pay for it, because a fast browser-based JavaScript engine is fundamental to its billion-dollar business model.
