llvm-codegen

How does Rust's 128-bit integer `i128` work on a 64-bit system?

Submitted by 人盡茶涼 on 2020-12-27 07:54:05

Question: Rust has 128-bit integers, denoted by the data type `i128` (and `u128` for unsigned integers): `let a: i128 = 170141183460469231731687303715884105727;` How does Rust make these `i128` values work on a 64-bit system; e.g. how does it do arithmetic on them? Since, as far as I know, the value cannot fit in one register of an x86-64 CPU, does the compiler somehow use two registers for one `i128` value? Or is it instead using some kind of big-integer struct to represent them?

Answer 1: All Rust's …
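The lowering the question asks about can be sketched in plain Rust: treat a 128-bit value as a (low, high) pair of 64-bit halves and propagate the carry, mirroring the x86-64 `add`/`adc` instruction pair. The `add128` helper and the tuple layout are illustrative, not rustc's actual internals.

```rust
// Sketch: adding 128-bit integers on a 64-bit machine using two 64-bit
// halves per value, the way an add/adc instruction pair would.
fn add128(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let (lo, carry) = a.0.overflowing_add(b.0); // ADD: low halves, note the carry
    let hi = a.1.wrapping_add(b.1).wrapping_add(carry as u64); // ADC: high halves + carry
    (lo, hi)
}

fn main() {
    let x: u128 = 0xFFFF_FFFF_FFFF_FFFF; // low half all ones: forces a carry
    let y: u128 = 1;
    let split = |v: u128| (v as u64, (v >> 64) as u64);
    let (lo, hi) = add128(split(x), split(y));
    let joined = ((hi as u128) << 64) | lo as u128;
    assert_eq!(joined, x + y); // matches native u128 addition
    println!("{joined:#x}");
}
```

Running the sketch against Rust's native `u128` addition confirms the two-register scheme round-trips correctly, including across the carry boundary.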

LLVM's integer types

Submitted by 痴心易碎 on 2020-01-30 19:54:17

Question: The LLVM language specifies integer types as `iN`, where N is the bit width of the integer and ranges from 1 to 2^23-1 (according to http://llvm.org/docs/LangRef.html#integer-type). I have two questions: When compiling a C program down to the LLVM IR level, what types may be lowered to `i1`, `i2`, `i3`, etc.? It seems like the types `i8`, `i16`, `i32`, `i64` must be enough, so I was wondering what all the other nearly 8 million integer types are for. Is it true that both signed and unsigned integer types are …
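One place odd widths come from is C bitfields (e.g. `struct { unsigned x : 3; }`), whose arithmetic wraps modulo 2^N just as an LLVM `iN` does. A minimal sketch of that semantics in Rust, using a mask; the `add_wrap_n` helper name is mine, not part of any API:

```rust
// Sketch: LLVM's iN types wrap modulo 2^N. Simulate unsigned N-bit
// addition (e.g. an i3) in Rust by masking off the high bits.
fn add_wrap_n(a: u32, b: u32, n: u32) -> u32 {
    let mask = (1u32 << n) - 1;  // e.g. n = 3 -> 0b111
    a.wrapping_add(b) & mask     // keep only the low N bits
}

fn main() {
    // 7 + 1 wraps to 0 in a 3-bit integer, just as an LLVM i3 would.
    assert_eq!(add_wrap_n(7, 1, 3), 0);
    // 5 + 6 = 11, and 11 mod 8 = 3.
    assert_eq!(add_wrap_n(5, 6, 3), 3);
}
```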

Why does this code generate much more assembly than equivalent C++/Clang? [closed]

Submitted by 可紊 on 2019-12-20 17:42:05

Question: Closed. This question is off-topic and is not accepting answers. Closed 2 years ago. I wrote a simple C++ function in order to check compiler optimization: `bool f1(bool a, bool b) { return !a || (a && b); }` After that I checked the equivalent in Rust: `fn f1(a: bool, b: bool) -> bool { !a || (a && b) }` I used Godbolt to check the assembly output. The result of the C++ code (compiled by clang …
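The expression under test is the logical implication a → b, which simplifies to `!a || b`; an optimizer that spots this can emit very little code. A sketch verifying the equivalence exhaustively (the `f1_simplified` name is mine):

```rust
// The original expression from the question.
fn f1(a: bool, b: bool) -> bool {
    !a || (a && b)
}

// The simplified form an optimizer can reduce it to: a implies b.
fn f1_simplified(a: bool, b: bool) -> bool {
    !a || b
}

fn main() {
    // Exhaustively check all four input combinations.
    for a in [false, true] {
        for b in [false, true] {
            assert_eq!(f1(a, b), f1_simplified(a, b));
        }
    }
    println!("f1 and its simplified form agree on all inputs");
}
```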

Setting a non-default rounding mode with Rust inline asm isn't respected by the LLVM optimizer?

Submitted by 丶灬走出姿态 on 2019-12-10 15:36:15

Question: I am working on a Rust crate which changes the floating-point rounding mode (+inf, -inf, nearest, or truncate). The functions that change the rounding mode are written using inline assembly: `fn upward() { let cw: u32 = 0; unsafe { asm!("stmxcsr $0; mov $0, %eax; or $$0x4000, %eax; mov %eax, $0; ldmxcsr $0;" : "=*m"(&cw) : "*m"(&cw) : "{eax}" ); } }` When I compile the code in debug mode it works as intended; I get 0.3333333333337 for one third when rounding toward positive infinity, but when I compile in …
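A common failure mode in this situation is that LLVM constant-folds floating-point expressions at compile time, under the default round-to-nearest mode, so a runtime MXCSR change never affects them. As a sketch (assuming that is what happens here), `std::hint::black_box` is one way to keep an operand opaque so the division actually executes at run time; note it keeps the operation live but does not itself change the rounding mode:

```rust
use std::hint::black_box;

fn main() {
    // Eligible for compile-time constant folding under round-to-nearest.
    let folded = 1.0_f64 / 3.0_f64;
    // The divisor is opaque to the optimizer, so the division must
    // happen at run time, after any rounding-mode change has been made.
    let runtime = 1.0_f64 / black_box(3.0_f64);
    // Under the default rounding mode both paths agree.
    assert_eq!(folded, runtime);
    println!("{runtime}");
}
```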

C ABI with LLVM

Submitted by 与世无争的帅哥 on 2019-12-06 19:47:47

Question: I've got a compiler written with LLVM, and I'm looking to improve my ABI compliance. For example, I've found it hard to find specification documents for the C ABI on Windows x86 or Linux, and the ones I have found explain it in terms of RAX/EAX/etc., rather than in IR terms that I can use. So far, I think I've figured out that LLVM treats aggregates invisibly; that is, it considers each of their members as a distinct parameter. So, for example, on Windows x64, if I want to handle an aggregate like the …
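For Windows x64 specifically, the aggregate rule can be stated compactly: a struct whose size is exactly 1, 2, 4, or 8 bytes is passed as if it were an integer of that size (i8/i16/i32/i64 in IR terms); every other size is passed indirectly, by pointer to a caller-made copy. A sketch of that classification; the `PassKind`/`classify` names are mine, not an LLVM API:

```rust
use std::mem::size_of;

#[derive(Debug, PartialEq)]
enum PassKind {
    AsInteger(usize), // coerced to an integer of this byte size, in a register
    ByPointer,        // caller copies the aggregate and passes its address
}

// Windows x64 C ABI rule for aggregates, per the documented calling convention.
fn classify(size: usize) -> PassKind {
    match size {
        1 | 2 | 4 | 8 => PassKind::AsInteger(size),
        _ => PassKind::ByPointer,
    }
}

#[repr(C)]
struct Pair { x: i32, y: i32 }           // 8 bytes -> coerced to one i64
#[repr(C)]
struct Triple { x: i32, y: i32, z: i32 } // 12 bytes -> passed by pointer

fn main() {
    assert_eq!(classify(size_of::<Pair>()), PassKind::AsInteger(8));
    assert_eq!(classify(size_of::<Triple>()), PassKind::ByPointer);
}
```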

LLVM opt mem2reg has no effect

Submitted by 走远了吗. on 2019-12-05 00:51:51

Question: I am currently playing around with LLVM and am trying to write a few optimizers to familiarize myself with opt and clang. I wrote a test.c file as follows: `int foo(int aa, int bb, int cc){ int sum = aa + bb; return sum/cc; }` I compiled the source code and generated two .ll files, one unoptimized and one with the mem2reg optimizer pass: `clang -emit-llvm -O0 -c test.c -o test.bc`, `llvm-dis test.bc`, `opt -mem2reg -S test.ll -o test-mem2reg.ll`. Both .ll files gave me the following output: `ModuleID = …`

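The usual culprit for this symptom: at `-O0`, clang attaches the `optnone` attribute to every function, and `opt` skips optnone functions, so `-mem2reg` appears to do nothing. A sketch of the fix, assuming the classic pass syntax used in the question (newer LLVM spells the pass `-passes=mem2reg`):

```shell
# Ask clang not to emit the optnone attribute so opt's passes can run.
clang -emit-llvm -O0 -Xclang -disable-O0-optnone -c test.c -o test.bc
llvm-dis test.bc
opt -mem2reg -S test.ll -o test-mem2reg.ll
```

With the attribute suppressed, the mem2reg output should promote the `alloca`/`load`/`store` sequence to SSA registers.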

Why is there a large performance impact when looping over an array with 240 or more elements?

Submitted by a 夏天 on 2019-12-03 01:36:01

Question: When running a sum loop over an array in Rust, I noticed a huge performance drop when `CAPACITY >= 240`; `CAPACITY = 239` is about 80 times faster. Is there a special compiler optimization Rust is doing for "short" arrays? Compiled with `rustc -C opt-level=3`. `use std::time::Instant; const CAPACITY: usize = 240; const IN_LOOPS: usize = 500000; fn main() { let mut arr = [0; CAPACITY]; for i in 0..CAPACITY { arr[i] = i; } let mut sum = 0; let now = Instant::now(); for _ in 0..IN_LOOPS { let mut s = …`
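The widely cited explanation is an unrolling threshold: below it, LLVM fully unrolls the inner loop, sees the sum is a loop-invariant constant, and hoists it out of the outer repeat loop entirely. A sketch of the inner computation with a checkable closed form (the benchmark harness is omitted):

```rust
// The benchmark's inner work: summing an array initialized to 0..CAPACITY.
// When this loop is fully unrolled (which LLVM reportedly does below a
// size threshold), the whole sum becomes a compile-time constant that can
// be hoisted out of the outer timing loop.
const CAPACITY: usize = 240;

fn sum_array(arr: &[usize; CAPACITY]) -> usize {
    arr.iter().sum()
}

fn main() {
    let mut arr = [0usize; CAPACITY];
    for i in 0..CAPACITY {
        arr[i] = i;
    }
    // Closed form: 0 + 1 + ... + (n-1) = n * (n-1) / 2.
    assert_eq!(sum_array(&arr), CAPACITY * (CAPACITY - 1) / 2);
    println!("{}", sum_array(&arr));
}
```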

Why does the Rust compiler not optimize code assuming that two mutable references cannot alias?

Submitted by 那年仲夏 on 2019-12-03 00:04:49

Question: As far as I know, reference/pointer aliasing can hinder the compiler's ability to generate optimized code, since it must ensure the generated binary behaves correctly in the case where the two references/pointers do alias. For instance, in the following C code: `void adds(int *a, int *b) { *a += *b; *a += *b; }` When compiled by clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final) with the -O3 flag, it emits: `0000000000000000 <adds>: 0: 8b 07 mov (%rdi),%eax 2: 03 06 add (%rsi),%eax 4: 89 …`
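The Rust analogue of the C function is below. Since two `&mut` references are guaranteed not to alias, the compiler is in principle free to read `*b` once and add it twice, whereas the C version must reload `*b` in case `a == b`. (At the time of the question, rustc reportedly had the `noalias` attribute disabled due to LLVM miscompilation bugs, which would explain identical codegen.)

```rust
// Rust equivalent of the C adds(): the borrow checker guarantees
// a and b refer to different locations, so *b can legally be loaded
// once and reused across both additions.
fn adds(a: &mut i32, b: &mut i32) {
    *a += *b;
    *a += *b;
}

fn main() {
    let (mut a, mut b) = (1, 10);
    adds(&mut a, &mut b);
    assert_eq!(a, 21); // 1 + 10 + 10
    assert_eq!(b, 10); // untouched
}
```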