JIT

How much instruction-level optimisation can a JIT apply?

Submitted by 一笑奈何 on 2019-12-11 11:22:02
Question: To what extent can a JIT replace platform-independent code with processor-specific machine instructions? For example, the x86 instruction set includes the BSWAP instruction to reverse a 32-bit integer's byte order. In Java, the Integer.reverseBytes() method is implemented using multiple bitwise masks and shifts, even though in x86 native code it could be done with a single BSWAP. Are JITs (or static compilers, for that matter) able to make the change automatically, or is…
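In fact, HotSpot handles exactly this case with intrinsics: the JIT recognises calls to Integer.reverseBytes() itself and emits a BSWAP on x86, rather than pattern-matching the shift sequence. A minimal sketch of the two forms (method names here are my own; the portable body mirrors the JDK source):

    // Portable form: masks and shifts, as written in the JDK.
    static int reverseBytesPortable(int i) {
        return ((i >>> 24))
             | ((i >>  8) & 0x0000FF00)
             | ((i <<  8) & 0x00FF0000)
             | ((i << 24));
    }

    // Intrinsic form: HotSpot substitutes a single BSWAP for this
    // call when the enclosing method is JIT-compiled on x86.
    static int reverseBytesIntrinsic(int i) {
        return Integer.reverseBytes(i);
    }

Whether a compiler recognises the hand-written shift pattern on its own is a separate, much harder idiom-recognition problem.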

MonoTouch: Using ServiceStack causes a JIT error?

Submitted by 眉间皱痕 on 2019-12-11 11:06:54
Question: I am using the MonoTouch build of ServiceStack from https://github.com/ServiceStack/ServiceStack/tree/master/release/latest/MonoTouch When run on an iPad, I get a JIT error; I thought MonoTouch took care of that in the build? Attempting to JIT compile method 'ServiceStack.Text.Json.JsonReader`1<Common.AppCategoryEnum>:GetParseFn ()' while running with --aot-only. I use the DLLs ServiceStack.Common.dll, ServiceStack.Interface.dll, and ServiceStack.Text.dll, and make only this single call: new…
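The usual cause is that Mono's AOT compiler never generated code for the generic instantiation JsonReader`1<Common.AppCategoryEnum>, and with --aot-only there is no JIT to fall back on. A hedged sketch of the common workaround, assuming the ServiceStack.Text build in use exposes its AOT registration helpers (check your version):

    // At app startup, before the first (de)serialization, so the
    // enum's parse functions are instantiated from AOT-compiled code:
    JsConfig.RegisterForAot();
    JsConfig.RegisterTypeForAot<Common.AppCategoryEnum>();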

Cannot execute function JITed by LLVM

Submitted by 那年仲夏 on 2019-12-11 09:41:29
Question: Using LLVM 5.0, I implemented a minimal test case that creates assembly for a function returning the 32-bit integer 42 at runtime and executes it. Using llvm::ExecutionEngine I was able to generate the following code at runtime (displayed with gdb):

    0x7ffff7ff5000  mov  $0x2a,%eax
    0x7ffff7ff5005  retq

Calling the function yields:

    Program received signal SIGSEGV, Segmentation fault.
    0x00007ffff7ff5000 in ?? ()

My working theory is that the memory page LLVM wrote the code to is not executable. Is…
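That theory is usually correct for MCJIT-based setups: the engine leaves the emitted pages read/write until it is finalized, and finalization is what applies relocations and marks the pages executable. A minimal sketch, assuming an already-constructed llvm::ExecutionEngine named ee whose module defines a function "answer":

    // Without this call the code sits on a RW (non-executable) page
    // and calling it faults exactly as shown above.
    ee->finalizeObject();

    auto addr = ee->getFunctionAddress("answer");
    auto fn = reinterpret_cast<int32_t (*)()>(addr);
    int32_t result = fn();  // 42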

Differences of x86 and x86-64 machine code

Submitted by 狂风中的少年 on 2019-12-11 09:39:10
Question: So I've got a program which generates JIT x86 machine code and executes it directly, and I want it to support x86-64/AMD64/x64 as well. The obvious differences are:

- new registers (rax, r8, ...) and pointer width (pointers need 64-bit registers)
- the default C calling convention (arguments on the stack vs. in registers)
- some new mnemonics (pushq to push 64 bits)

Are there any differences in the binary instructions as well, or should it be (roughly) sufficient to use pushq and 64-bit registers when…
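They do differ at the encoding level, not just in mnemonics: 64-bit mode repurposes the single-byte INC/DEC opcodes 0x40-0x4F as REX prefixes (REX.W widens the operand to 64 bits; REX.B/R/X reach r8-r15) and adds RIP-relative addressing, so an emitter needs new encoding logic, not just new register names. An illustrative sketch in C (helper names are hypothetical):

    #include <stdint.h>

    static void emit8(uint8_t **p, uint8_t b) { *(*p)++ = b; }

    /* 32-bit: mov eax, imm32 encodes as B8 + 4 immediate bytes.
       64-bit: mov rax, imm64 needs REX.W (0x48) before the same base
       opcode and takes an 8-byte immediate. */
    static void emit_mov_rax_imm64(uint8_t **p, uint64_t imm) {
        emit8(p, 0x48);                          /* REX.W prefix */
        emit8(p, 0xB8);                          /* MOV rAX, imm */
        for (int i = 0; i < 8; i++)
            emit8(p, (uint8_t)(imm >> (8 * i))); /* little-endian imm64 */
    }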

@jit slowing down function

Submitted by 六眼飞鱼酱① on 2019-12-11 08:43:30
Question: I'm developing optimization code for a complex reservoir-operations problem. Part of this requires me to calculate the objective function for a large number of potential solutions. I'm testing the optimizer on the Rosenbrock function and trying to improve its speed. When I profiled the code, I noticed that calculating the objective function within a for loop was one of the bottlenecks, so I developed a way to do it in parallel for multiple sets of decision variables. I have two…
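A common numba-friendly shape for this is to move the loop over candidate solutions inside one nopython-compiled function, so compile and dispatch overhead is paid once per call rather than once per evaluation. A minimal sketch, assuming one candidate per row of a 2-D array:

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def rosenbrock_batch(x):
        # x: (n_solutions, n_dims); returns one objective value per row.
        n_sol, n_dim = x.shape
        out = np.empty(n_sol)
        for s in range(n_sol):
            total = 0.0
            for i in range(n_dim - 1):
                total += (100.0 * (x[s, i + 1] - x[s, i] ** 2) ** 2
                          + (1.0 - x[s, i]) ** 2)
            out[s] = total
        return out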

Disadvantages of RuntimeHelpers.PrepareMethod in a windows service [closed]

Submitted by 邮差的信 on 2019-12-11 03:16:51
Question: I am investigating latency issues that appear soon after a server (hosting multiple services) starts. I've added a simple method that loads the referenced DLLs and performs RuntimeHelpers.PrepareMethod on every method in every type in every assembly in those DLLs,…
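For context, the warm-up being described usually looks like the sketch below. Note the hedges: generic and abstract methods are skipped, because RuntimeHelpers.PrepareMethod needs a concrete method body to compile:

    using System.Reflection;
    using System.Runtime.CompilerServices;

    static void PreJit(Assembly asm)
    {
        const BindingFlags flags =
            BindingFlags.DeclaredOnly | BindingFlags.Public |
            BindingFlags.NonPublic | BindingFlags.Instance |
            BindingFlags.Static;
        foreach (var type in asm.GetTypes())
            foreach (var m in type.GetMethods(flags))
            {
                // PrepareMethod cannot handle open generics or
                // abstract methods without extra instantiation info.
                if (m.IsAbstract || m.ContainsGenericParameters) continue;
                RuntimeHelpers.PrepareMethod(m.MethodHandle);
            }
    }

The main trade-offs are longer startup (everything compiles up front) and wasted work compiling methods that are never called.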

How to prove that the .NET CLR JIT compiles every method only once per run?

Submitted by 旧时模样 on 2019-12-10 21:06:30
Question: There's an old question asking whether C# is JIT-compiled every time, and the answer by the famous Jon Skeet is: "no, it's compiled only once per application", as long as we're talking about desktop applications that are not NGENed. I want to know whether that information from 2009 is still true, and I want to figure it out by experiment and debugging, potentially by putting a breakpoint on the JITter and using WinDbg commands to inspect objects and methods. My research so far: I know that the .NET…
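One way to run that experiment with WinDbg and SOS (a sketch of the session with placeholders; the module and method names are hypothetical, and this is not verbatim output):

    $$ Load SOS, locate the method, and inspect its MethodDesc:
    .loadby sos clr
    !name2ee MyApp.exe MyApp.Program.Work
    !dumpmd <MethodDesc address from name2ee>
    $$ Before the first call the MethodDesc shows no native code
    $$ address; after the first call it shows one, and repeating
    $$ !dumpmd on later calls shows the same address, i.e. no
    $$ recompilation (tiered compilation in newer runtimes aside).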

numba @jit slower than pure Python?

Submitted by 一个人想着一个人 on 2019-12-10 19:29:37
Question: I need to improve the execution time of a script I have been working on. I started working with the numba @jit decorator to try parallel computing, but it throws KeyError: "Does not support option: 'parallel'". So I decided to test nogil to see whether it unlocks the full capability of my CPU, but it was slower than pure Python, and I don't understand why. If someone can help or guide me I will be very grateful. import numpy as np from numba import * @jit(['float64[:,:]…
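The KeyError suggests a numba version that predates the parallel option, and the slowdown is usually measurement rather than numba: the first call includes compilation, and nogil only helps when several threads call the compiled function concurrently; single-threaded it adds nothing. A sketch of timing it fairly (the function body here is a hypothetical stand-in for the truncated one above):

    import time
    import numpy as np
    from numba import njit

    @njit('float64[:,:](float64[:,:])')  # explicit signature: compiles at import
    def work(a):
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = a[i, j] * 2.0
        return out

    a = np.random.rand(1000, 1000)
    work(a)                              # warm-up call
    t0 = time.perf_counter()
    work(a)                              # timed call: compiled code only
    print(time.perf_counter() - t0)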

Recursive version of Java function is slower than iterative on first call, but faster after. Why is this?

Submitted by 谁都会走 on 2019-12-10 18:27:47
Question: For an assignment I'm currently trying to measure the performance (space/time) difference between an iterative solution to the matrix chain problem and a recursive one. The gist of the problem, and the solution I'm using for the iterative version, can be found here: http://www.geeksforgeeks.org/dynamic-programming-set-8-matrix-chain-multiplication/ I'm running a given input through both functions 10 times, measuring the space and time performance of each. The very interesting thing is…
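The first-call asymmetry is almost certainly JIT warm-up: HotSpot interprets a method until it becomes hot, then compiles it, so whichever version is timed first absorbs the interpretation and compilation cost. A sketch of measuring after warm-up (method and variable names are hypothetical):

    // Warm up both versions so each is compiled before timing.
    for (int i = 0; i < 10_000; i++) {
        matrixChainIterative(dims);
        matrixChainRecursive(dims);
    }

    long t0 = System.nanoTime();
    int cost = matrixChainRecursive(dims);
    long elapsedNanos = System.nanoTime() - t0;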

C# - Why does a 'class, new()' constraint use Activator.CreateInstance<T>()? [duplicate]

Submitted by 只愿长相守 on 2019-12-10 17:12:11
Question (duplicate of "Why does the C# compiler emit Activator.CreateInstance when calling new with a generic type with a new() constraint?"): I just asked "C# - How do generics with the new() constraint get machine code generated?" After thinking about this for a while, I'm wondering why the C# compiler emitted IL like that. Why couldn't it emit some IL like "call T's default constructor"?
Answer 1: There is no such instruction in CIL (http://www…
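Put differently: CIL's newobj needs a metadata token for a concrete constructor, and an open type parameter T has none, so the compiler lowers new T() roughly as in this sketch:

    static T Create<T>() where T : new()
    {
        // What the compiler emits for `new T()` under a new()
        // constraint; the runtime resolves T's actual parameterless
        // constructor at execution time.
        return Activator.CreateInstance<T>();
    }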