jit

Are WeakHashMap entries cleared during a full GC?

Submitted by 点点圈 on 2021-02-18 05:14:43
Question: I encountered some trouble with WeakHashMap. Consider this sample code: List<byte[]> list = new ArrayList<byte[]>(); Map<String, Calendar> map = new WeakHashMap<String, Calendar>(); String anObject = new String("string 1"); String anOtherObject = new String("string 2"); map.put(anObject, Calendar.getInstance()); map.put(anOtherObject, Calendar.getInstance()); // In order to test whether the WeakHashMap works, I remove the strong reference to this object anObject = null; int i = 0; while (map.size(
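
Since the excerpt is cut off mid-loop, here is a self-contained sketch of the kind of test the question appears to describe; the bounded allocation loop and the prints at the end are assumptions rather than the poster's code.

```java
import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapGcDemo {
    public static void main(String[] args) {
        List<byte[]> list = new ArrayList<>();
        Map<String, Calendar> map = new WeakHashMap<>();

        String anObject = new String("string 1");
        String anOtherObject = new String("string 2");
        map.put(anObject, Calendar.getInstance());
        map.put(anOtherObject, Calendar.getInstance());

        // Drop the only strong reference to the first key.
        anObject = null;

        // Allocate until GC pressure clears the weakly reachable key;
        // calling size() also expunges entries whose keys were collected.
        int i = 0;
        while (map.size() == 2 && i < 10_000) {
            list.add(new byte[1024 * 1024]); // ~1 MB per iteration
            i++;
        }

        System.out.println("iterations: " + i + ", map.size(): " + map.size());
        System.out.println(anOtherObject + " is still strongly referenced, so its entry survives.");
    }
}
```

Note that the entry for the unreferenced key can be cleared by any collection that finds the key only weakly reachable, not just a full GC; the stale mapping is then physically removed the next time the map is touched (for example by size()).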

Unexplained 10%+ performance boost from simply adding a method argument (slimmer JIT code)

Submitted by 泄露秘密 on 2021-02-17 19:13:07
Question: (Note: a proper answer must go beyond reproduction.) After millions of invocations, quicksort1 is definitely faster than quicksort2, even though the two have identical code aside from this one extra argument. The code is at the end of the post. Spoiler: I also found that the JIT code is fatter by 224 bytes, even though it should actually be simpler (as the bytecode size suggests; see the very last update below). Even after trying to factor out this effect with a microbenchmark harness (JMH), the performance difference is
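
The full quicksort1/quicksort2 sources are only in the original post, but a minimal JMH harness along the lines the question mentions might look like this (class and method names here are placeholders, not the poster's):

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
@Fork(2)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
public class QuicksortBench {

    int[] data;

    @Setup(Level.Trial)
    public void setUp() {
        data = new Random(42).ints(10_000).toArray();
    }

    @Benchmark
    public int[] withoutExtraArg() {
        int[] a = data.clone();          // sort a fresh copy every invocation
        quicksort1(a, 0, a.length - 1);
        return a;                        // return it so the JIT cannot discard the work
    }

    @Benchmark
    public int[] withExtraArg() {
        int[] a = data.clone();
        quicksort2(a, 0, a.length - 1, 0);
        return a;
    }

    // Placeholders: paste the two implementations from the post here.
    private static void quicksort1(int[] a, int lo, int hi) { /* ... */ }
    private static void quicksort2(int[] a, int lo, int hi, int extra) { /* ... */ }
}
```

Running the harness with JMH's perfasm profiler (-prof perfasm, on Linux with perf installed) prints the hottest compiled code, which is the usual way to compare the size and shape of the JIT output the question is measuring.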

Couldn't find program: 'pypy' on Google Colab. Solution or alternatives?

Submitted by 三世轮回 on 2021-02-10 05:42:06
Question: I'm getting this error in a Google Colab notebook. Do I need to install something, or is it just not possible to use PyPy inside Colab? I've tried this simple script: %%pypy print("hello") # Couldn't find program: 'pypy' If I run %lsmagic, the output is the following, in which pypy is present. Available line magics: %alias %alias_magic %autocall %automagic %autosave %bookmark %cat %cd %clear %colors %config %connect_info %cp %debug %dhist %dirs %doctest_mode %ed %edit %env %gui %hist %history
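
The %%pypy cell magic is one of IPython's script magics: it simply runs the cell with a pypy executable found on PATH, so the error just means no such executable is installed in the Colab VM. A possible workaround sketch, assuming the Colab image's apt repositories provide a PyPy package:

```python
# In one Colab cell: install a PyPy interpreter into the VM
# (package name and availability in the image's apt repos are an assumption).
!apt-get -qq install -y pypy
```

Once a pypy executable is on PATH, the cell magic from the question should find it:

```python
%%pypy
print("hello")
```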

TypingError: Failed in nopython mode pipeline (step: nopython frontend)

Submitted by 爷,独闯天下 on 2021-02-10 03:39:48
Question: I am trying to write my first function using numba's jit. I have a pandas DataFrame that I need to iterate through, finding the root mean square over each window of 350 points. Since Python's for loop is quite slow, I decided to try numba's jit. The code is: @jit(nopython=True) def find_rms(data, length): res = [] for i in range(length, len(data)): interval = np.array(data[i-length:i]) interval = np.power(interval, 2) sum = interval.sum() resI = sum/length resI = np.sqrt(res) res.appennd(resI) return res
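
Without the full traceback it is hard to be certain, but the snippet has two likely culprits: np.sqrt(res) is applied to the Python list instead of the scalar resI, and res.appennd is a typo; in addition, nopython mode cannot type a pandas DataFrame passed in directly. A possible corrected sketch (hedged, since the original data layout isn't shown) that fills a preallocated array instead of appending to a list:

```python
import numpy as np
from numba import jit

@jit(nopython=True)
def find_rms(data, length):
    # data is expected to be a 1-D NumPy float array; nopython mode cannot
    # work on a pandas DataFrame directly.
    n = len(data)
    out = np.empty(n - length)
    for i in range(length, n):
        window = data[i - length:i]
        out[i - length] = np.sqrt(np.sum(window ** 2) / length)  # RMS of the window
    return out

# Hypothetical usage: pass the underlying array, not the DataFrame, e.g.
# rms = find_rms(df["signal"].to_numpy().astype(np.float64), 350)
```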

How do I pass in calculated values to a list sort using numba.jit in Python?

Submitted by 丶灬走出姿态 on 2021-02-08 17:02:37
Question: I am trying to sort a list using a custom key within a numba-jit function in Python. Simple custom keys work; for example, I know that I can sort by the absolute value using something like this: import numba @numba.jit(nopython=True) def myfunc(): mylist = [-4, 6, 2, 0, -1] mylist.sort(key=lambda x: abs(x)) return mylist # [0, -1, 2, -4, 6] However, in the following more complicated example, I get an error that I do not understand. import numba import numpy as np @numba.jit(nopython=True)
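
The failing example is cut off in the excerpt, so as a general workaround sketch rather than a fix for the exact error: precompute the key for every element and reorder with np.argsort, which sidesteps passing calculated values into list.sort(key=...) entirely (the abs() key here is just a stand-in for whatever calculated values are needed):

```python
import numpy as np
import numba

@numba.jit(nopython=True)
def sort_by_key(values):
    # Compute the key array up front, then sort indices by it.
    keys = np.abs(values)
    order = np.argsort(keys)
    return values[order]

print(sort_by_key(np.array([-4.0, 6.0, 2.0, 0.0, -1.0])))
# [ 0. -1.  2. -4.  6.]
```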

Are these the smallest possible x86 macros for these stack operations?

Submitted by 廉价感情. on 2021-01-29 17:03:41
Question: I'm making a stack-based language as a fun personal project. I have some signed/unsigned 32-bit values on the stack, and my goal is to write some assembly macros that operate on this stack. Ideally these will be small, since they'll be used a lot. Since I'm new to x86 assembly, I was wondering if you had any tips or improvements. I'd greatly appreciate your time, thanks! Note: An optimizer is run after the macros are expanded to avoid cases like pop eax; push eax, so
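
The question is cut off above, so as an illustrative sketch only (not the poster's macros): a few small NASM macros for a 32-bit data stack kept on the hardware stack, assuming EAX and ECX are free to clobber.

```nasm
; Illustrative sketches, not the poster's macros. Stack effect shown as ( before -- after ).

%macro DUP 0            ; ( a -- a a )
    push dword [esp]    ; push a copy of the current top of stack
%endmacro

%macro SWAP 0           ; ( a b -- b a ), clobbers eax and ecx
    mov  eax, [esp]     ; top
    mov  ecx, [esp+4]   ; second
    mov  [esp+4], eax
    mov  [esp], ecx
%endmacro

%macro ADD2 0           ; ( a b -- a+b ), wrapping is identical for signed and unsigned
    pop  eax
    add  [esp], eax
%endmacro
```

SWAP deliberately avoids xchg with a memory operand, since that form carries an implicit LOCK prefix and is comparatively slow.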