llvm

@autoreleasepool semantics

守給你的承諾、 submitted on 2019-12-03 08:13:25
I was reading the ARC docs on the LLVM site: http://clang.llvm.org/docs/AutomaticReferenceCounting.html#autoreleasepool, in particular the section on @autoreleasepool. In a lot of current implementations using NSAutoreleasePool, I see cases where the pool is drained periodically during a loop iteration. How do we do the same with @autoreleasepool, or is it all done for us somehow under the hood? Secondly, the docs state that if an exception is thrown, the pool isn't drained. OK, exceptions are by name exceptional, but if they do happen, you might like to recover without leaking a load of memory. The …
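A minimal sketch of the usual answer to the loop question, in Objective-C under ARC (my illustration, not code from the question): nest the @autoreleasepool block inside the loop body, so autoreleased temporaries are released at the end of every iteration instead of piling up until the loop finishes.

// Hypothetical loop; dataSource/itemAtIndex: stand in for whatever work
// creates autoreleased objects each time around.
for (NSUInteger i = 0; i < count; i++) {
    @autoreleasepool {
        // Anything autoreleased in here is released when this block exits,
        // i.e. once per iteration, which keeps peak memory flat.
        NSString *line = [[dataSource itemAtIndex:i] description];
        [output appendString:line];
    }
}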

LLVM Error: External function could not be resolved

Anonymous (unverified) submitted on 2019-12-03 07:50:05
Question: I am reading the LLVM Kaleidoscope tutorial ( http://llvm.org/docs/tutorial/index.html ). I wanted to compile and test the language. After some compiler errors (EngineBuilder and Module's constructor, linking libs...), the example program was built. Then I tried the language. I got a few problems with InitializeNativeTargets, DataLayoutPass... but I managed to correct them. However, I haven't managed to resolve one error. When I write extern printd(x); printd(5);, the program doesn't work: "LLVM ERROR: Program used external function …
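For context, a hedged sketch (mine, not the tutorial's exact code) of what usually fixes this error: the host process has to actually expose printd so the JIT's symbol resolution can find it. Defining it with C linkage and registering it explicitly is one common approach (linking with -rdynamic is another):

#include <cstdio>
#include "llvm/Support/DynamicLibrary.h"

// Host-side definition with C linkage so the JIT can look it up by name.
extern "C" double printd(double X) {
  std::fprintf(stderr, "%f\n", X);
  return 0.0;
}

// Call once before executing JITed code; this registers the symbol with
// LLVM's dynamic-library symbol table even if the binary was not linked
// with -rdynamic.
static void registerRuntimeSymbols() {
  llvm::sys::DynamicLibrary::AddSymbol("printd",
                                       reinterpret_cast<void *>(&printd));
}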

Is it possible to compile LLVM libraries to android/ARM

风格不统一 submitted on 2019-12-03 07:48:23
Question: I'm fascinated by the Pure algebraic/functional language. The Pure interpreter uses the LLVM JIT compiler as its backend. I would like to compile Pure so that it runs on Android (ARM). Pure has a dependency on the LLVM JIT, so I need to compile the LLVM source for Pure to run. Is it possible to compile the LLVM source for Android (ARM) devices? There really seems to be no information about this on the web. Maybe my search terms are wrong. Searching for Android LLVM does not bring up many good hits …

How to instrument a statement just before another statement using clang

狂风中的少年 submitted on 2019-12-03 07:47:06
I have to instrument certain statements in clang by adding a statement just before each of them. I have a pointer to an Expr object, and I need to insert another statement just before the statement containing that expression. Right now I am using a hacky approach that just moves the SourceLocation back until I see a ;, }, or {, but this does not work in all cases; e.g. when I try to instrument a for statement, it fails. Is there any class in clang that provides a method to do this in a cleaner way? EDIT: Here is a snippet of my code. I need to insert an assert just before the statement containing …
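A hedged sketch of the cleaner route (against a recent clang; the helper name and inserted text are mine, and older clangs spell getBeginLoc() as getLocStart()): walk the ASTContext parent map from the Expr up to the statement that owns it, then let a Rewriter insert text at that statement's start, which handles for statements and other constructs without scanning backwards for punctuation.

#include "clang/AST/ASTContext.h"
#include "clang/AST/Expr.h"
#include "clang/Rewrite/Core/Rewriter.h"

// Walk up the parent map until the current node's parent is a CompoundStmt
// (or not a Stmt at all, e.g. a Decl); at that point Current is the full
// statement that contains the original expression.
static const clang::Stmt *findEnclosingStmt(clang::ASTContext &Ctx,
                                            const clang::Expr *E) {
  const clang::Stmt *Current = E;
  while (true) {
    auto Parents = Ctx.getParents(*Current);
    if (Parents.empty())
      return Current;
    const auto *ParentStmt = Parents[0].get<clang::Stmt>();
    if (!ParentStmt || llvm::isa<clang::CompoundStmt>(ParentStmt))
      return Current;
    Current = ParentStmt;
  }
}

// Insert e.g. an assert() right before the statement containing E.
void insertBefore(clang::ASTContext &Ctx, clang::Rewriter &RW,
                  const clang::Expr *E, llvm::StringRef NewStmtText) {
  const clang::Stmt *S = findEnclosingStmt(Ctx, E);
  RW.InsertTextBefore(S->getBeginLoc(), NewStmtText);
}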

LLVM 2.0 can't build for iPhone simulator. GCC 4.2 works fine

こ雲淡風輕ζ submitted on 2019-12-03 07:36:52
When I build my project (any project, really - I tried creating a new empty project with the same results), it builds fine with GCC 4.2 under either Xcode 4 or Xcode 3.2.4. If I build using LLVM 2.0 under Xcode 4 or with LLVM 1.5 under Xcode 3, I get compile-time build failures, but only when building for the Simulator. The build errors I get under LLVM are all in headers over which I have no control, such as UIView.h, UIDevice.h, UIApplication.h, UITextView.h and UIWebView.h in UIKit, and CGPDFContext.h in CoreGraphics. Here's an example error, in WebView.h: @property(nonatomic) …

llvm JIT add library to module

Anonymous (unverified) submitted on 2019-12-03 07:36:14
Question: I am working on a JIT that uses LLVM. The language has a small runtime written in C++, which I compile down to LLVM IR using clang: clang++ runtime.cu --cuda-gpu-arch=sm_50 -c -emit-llvm. I then load the *.bc files, generate additional IR, and execute on the fly. The reason for the CUDA stuff is that I want to add some GPU acceleration to the runtime. However, this introduces CUDA-specific external functions, which give errors such as: LLVM ERROR: Program used external function 'cudaSetupArgument' which could not be resolved! As discussed …
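One hedged option (my sketch; the library name and path are assumptions that depend on the CUDA install) is to load the CUDA runtime into the host process before running any JITed code, so the JIT's symbol resolution can find cudaSetupArgument and friends:

#include <cstdio>
#include <string>
#include "llvm/Support/DynamicLibrary.h"

// Make the CUDA runtime's symbols visible to the JIT by loading libcudart
// into this process. Prints the loader error and returns false on failure.
static bool loadCudaRuntimeForJIT() {
  std::string Err;
  if (llvm::sys::DynamicLibrary::LoadLibraryPermanently("libcudart.so", &Err)) {
    std::fprintf(stderr, "could not load CUDA runtime: %s\n", Err.c_str());
    return false;  // LoadLibraryPermanently returns true on error
  }
  return true;
}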

Calling fsincos instruction in LLVM slower than calling libc sin/cos functions?

╄→尐↘猪︶ㄣ submitted on 2019-12-03 07:08:28
I am working on a language that is compiled with LLVM. Just for fun, I wanted to do some microbenchmarks. In one, I run a hundred million sin/cos computations in a loop. In pseudocode, it looks like this: var x: Double = 0.0; for (i <- 0 to 100 000 000) x = sin(x)^2 + cos(x)^2; return x.toInteger. If I compute sin/cos using LLVM IR inline assembly of the form %sc = call { double, double } asm "fsincos", "={st(1)},={st},1,~{dirflag},~{fpsr},~{flags}" (double %"res") nounwind, this is faster than using fsin and fcos separately. However, it is slower than if I call the llvm …
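To make the pseudocode concrete, here is a rough C++ equivalent of the benchmark loop (my sketch, not the asker's code); how the compiler lowers the sin/cos pair, whether to fsincos, separate fsin/fcos, or libm calls, is exactly what the benchmark is probing:

#include <cmath>
#include <cstdio>

int main() {
  double x = 0.0;
  // 100,000,000 iterations, matching the pseudocode above.
  for (long i = 0; i < 100000000L; ++i) {
    double s = std::sin(x);
    double c = std::cos(x);
    x = s * s + c * c;  // sin(x)^2 + cos(x)^2
  }
  std::printf("%d\n", static_cast<int>(x));  // x.toInteger
  return 0;
}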

running x86 program _on_ llvm

好久不见. submitted on 2019-12-03 07:05:55
Is it possible to use LLVM to run x86 programs? I.e. I want to use LLVM as an x86 simulator to run x86 programs and then instrument them. Thanks! I think you are looking for LibCPU. It has an x86 frontend (well, actually only 8086 at the moment, and that is not even complete, but they're working on it), and since it is built on top of LLVM, it obviously also has an x86 backend, making it possible to run x86-on-x86 while passing the code through LLVM's optimization, instrumentation and analysis stages. Also, there was a project to use LLVM in QEMU. It is also a way of running x86 code …

Proper way to enable SSE4 on a per-function / per-block of code basis?

ⅰ亾dé卋堺 submitted on 2019-12-03 06:53:23
For one of my OS X programs, I have a few optimized cases that use SSE4.1 instructions. On SSE3-only machines, the non-optimized branch is run: // SupportsSSE4_1 returns true on CPUs that support SSE4.1, false otherwise if (SupportsSSE4_1()) { // Code that uses _mm_dp_ps, an SSE4 instruction ... __m128 hDelta = _mm_sub_ps(here128, right128); __m128 vDelta = _mm_sub_ps(here128, down128); hDelta = _mm_sqrt_ss(_mm_dp_ps(hDelta, hDelta, 0x71)); vDelta = _mm_sqrt_ss(_mm_dp_ps(vDelta, vDelta, 0x71)); ... } else { // Equivalent code that uses SSE3 instructions ... } In order to get the above to …
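The usual per-function answer with reasonably recent clang/GCC (a hedged sketch, not the asker's code; function names and the SSE3 fallback are mine) is the target attribute: the SSE4.1 path lives in its own function compiled for sse4.1, the rest of the translation unit keeps the baseline (e.g. -msse3), and the runtime check picks between them.

#include <immintrin.h>

// SSE4.1 path: only this function is compiled with SSE4.1 enabled.
__attribute__((target("sse4.1")))
static float distanceSSE41(__m128 here128, __m128 right128) {
  __m128 hDelta = _mm_sub_ps(here128, right128);
  // _mm_dp_ps is the SSE4.1 dot-product intrinsic from the question;
  // mask 0x71 multiplies lanes 0..2 and puts the sum in lane 0.
  hDelta = _mm_sqrt_ss(_mm_dp_ps(hDelta, hDelta, 0x71));
  return _mm_cvtss_f32(hDelta);
}

// Baseline path: same math using only SSE instructions.
static float distanceSSE3(__m128 here128, __m128 right128) {
  __m128 d = _mm_sub_ps(here128, right128);
  d = _mm_mul_ps(d, d);
  // Horizontal add of lanes 0..2 without SSE4.1, then square root.
  __m128 sum = _mm_add_ss(
      _mm_add_ss(d, _mm_shuffle_ps(d, d, _MM_SHUFFLE(0, 0, 0, 1))),
      _mm_shuffle_ps(d, d, _MM_SHUFFLE(0, 0, 0, 2)));
  return _mm_cvtss_f32(_mm_sqrt_ss(sum));
}

float distance(__m128 here128, __m128 right128, bool haveSSE41) {
  return haveSSE41 ? distanceSSE41(here128, right128)
                   : distanceSSE3(here128, right128);
}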

How to replace llvm-ld with clang?

∥☆過路亽.° submitted on 2019-12-03 06:37:06
Summary: llvm-ld has been removed from the LLVM 3.2 release. I am trying to figure out how to use clang in its place in my build system. Note that I figured out the answer to my own question while writing it, but I am still posting it in case it is useful to anyone else. Alternative answers are also welcome. Details: I have a build process which first generates bitcode using clang++ -emit-llvm. Then I take the bitcode files and link them together with llvm-link. Then I apply some standard optimization passes with opt. Then I apply another custom compiler pass with opt. Then I apply the …
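For reference, a hedged sketch of the same pipeline with clang taking over the final step (file names, the -O2 level, and the custom-pass flags are placeholders, not from the post); clang accepts bitcode files directly, so it can do the native code generation and linking that llvm-ld used to handle:

clang++ -emit-llvm -c a.cpp -o a.bc
clang++ -emit-llvm -c b.cpp -o b.bc
llvm-link a.bc b.bc -o linked.bc                         # combine the bitcode modules
opt -O2 linked.bc -o optimized.bc                        # standard optimization passes
opt -load ./MyPass.so -mypass optimized.bc -o final.bc   # custom pass (illustrative)
clang++ final.bc -o program                              # clang codegens and links the bitcode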