abi

C++ uint_fast32_t resolves to uint64_t but is slower than uint32_t for nearly all operations (x86_64). Why does it resolve to uint64_t?

Submitted by 久未见 on 2021-02-16 13:58:29
Question: I ran a benchmark and uint_fast32_t is 8 bytes but slower than the 4-byte uint32_t for nearly all operations. If that is the case, why does uint_fast32_t not stay at 4 bytes?

OS info: 5.3.0-62-generic #56~18.04.1-Ubuntu SMP Wed Jun 24 16:17:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

CPU info: cat /sys/devices/cpu/caps/pmu_name skylake model name : Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz

Benchmark I used for testing: #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <cstdint>

Missing arm64-v8a on 64bit compatible tablet

Submitted by 南楼画角 on 2021-02-11 13:13:58
Question: I'm developing an Android app in Xamarin Forms. This development is for a specific tablet, a Galaxy Tab Active 2. I'm facing constraints and want the app to be 64-bit only. In the Android options of my project, I have only arm64-v8a selected in supported architectures. During deployment of the app, I'm unable to do it because arm64-v8a is not supported: Android ABI mismatch. You are deploying an app supporting 'arm64-v8a' ABIs to an incompatible device of ABI 'armeabi-v7a;armeabi'. You
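The error message itself suggests the firmware, not the hardware, is the limit: the device reports only 32-bit ABIs, which happens when the vendor ships a 32-bit Android build on a 64-bit-capable SoC. A quick diagnostic (assumes a device connected over adb; property names are the standard Android ones):

```shell
# Ask the connected device which ABIs its OS build actually supports.
adb shell getprop ro.product.cpu.abilist      # full list, e.g. "armeabi-v7a,armeabi" on a 32-bit build
adb shell getprop ro.product.cpu.abilist64    # empty when the OS image has no 64-bit userspace
```

If abilist64 comes back empty, no app configuration can make arm64-v8a deployable to that device.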

ABI Register Names for RISC-V Calling Convention

Submitted by 落爺英雄遲暮 on 2021-02-08 12:19:40
Question: I'm confused about the RISC-V ABI register names. For example, Table 18.2 in the "RISC-V Instruction Set Manual, Volume I: User-Level ISA, Version 2.0" at page 85 specifies that the stack pointer sp is register x14. However, the instruction addi sp,zero,0 is compiled to 0x00000113 by riscv64-unknown-elf-as (-m32 does not make a difference). In binary:

000000000000 00000 000 00010 0010011
^imm         ^rs1  ^f3 ^rd   ^opcode

So here sp seems to be x2. Then I googled a bit and found the RISC-V Linux

Is it possible to use C++11 ABI _and_ both cxx11-style and old-style strings?

Submitted by 北城余情 on 2021-02-08 10:14:44
Question: I have some code which is being built with GCC 5.3.1 without _GLIBCXX_USE_CXX11_ABI set. Now, suppose I want to use both old-style strings and new-style std::__cxx11::strings in the same bit of code. Is that possible? If so, how? Notes: I don't really have a good reason for doing this. I do have a not-so-good reason, but let's make it a theoretical question, please. It's OK if the C++11 strings aren't called std::string. Answer 1: Can you have both the old and new string implementation in the same code?

Passed array with more elements than expected in subroutine

Submitted by 痞子三分冷 on 2021-02-08 05:17:46
Question: I have a subroutine in a shared library:

SUBROUTINE DLLSUBR(ARR)
IMPLICIT NONE
INTEGER, PARAMETER :: N = 2
REAL ARR(0:N)
arr(0) = 0
arr(1) = 1
arr(2) = 2
END

And let's assume I will call it from the executable by:

REAL ARR(0:3)
CALL DLLSUBR(ARR)

Note: The code happily compiles and runs (DLLSUBR is inside a module) without any warning or error in Debug with the /check:all option switched on. Could this lead to memory corruption or some weird behaviour? Where can I find info about passing array with

Why is the “alignment” the same on 32-bit and 64-bit systems?

Submitted by 不羁的心 on 2021-02-07 04:54:29
Question: I was wondering whether the compiler would use different padding on 32-bit and 64-bit systems, so I wrote the code below in a simple VS2019 C++ console project:

struct Z
{
    char s;
    __int64 i;
};

int main()
{
    std::cout << sizeof(Z) << "\n";
}

What I expected on each "Platform" setting:
x86: 12
X64: 16

Actual result:
x86: 16
X64: 16

Since the memory word size on x86 is 4 bytes, this means it has to store the bytes of i in two different words. So I thought the compiler would do padding this way: