bit

ClickOnce application does not start through Process.Start("x.abc") with *.abc associated to the ClickOnce application

Submitted anonymously (unverified) on 2019-12-03 02:05:01
Question: I have successfully developed and deployed a ClickOnce application that registers an associated file extension, for instance *.abc. When I click a file named x.abc, or type x.abc at the command prompt, the ClickOnce application starts and I can retrieve the file through the dedicated API. I can also launch the application programmatically with the following code: System.Diagnostics.Process.Start("x.abc"); Manual start of x.abc by double-clicking it in Explorer works. Manual start of x.abc from the command prompt works. …
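Not the asker's C# code, but a minimal Python sketch of the same shell-association launch path (assuming the x.abc file from the question exists and the .abc extension is registered), handy for confirming that the association itself resolves outside .NET:

    import os
    import subprocess

    # os.startfile goes through ShellExecute, i.e. the same file-association
    # lookup Explorer performs on double-click.
    os.startfile("x.abc")

    # Roughly what typing "x.abc" at the prompt does: let cmd.exe resolve the
    # association via its "start" built-in (the empty "" is the window title).
    subprocess.run(["cmd", "/c", "start", "", "x.abc"], check=True)

If both of these launch the ClickOnce application, one place to compare is how Process.Start resolves the association, in particular its UseShellExecute setting.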

tcnative-1.dll Can't load AMD 64-bit .dll on a IA 32-bit platform

Submitted anonymously (unverified) on 2019-12-03 02:05:01
Question: I'm getting this error when I try to run Tomcat: "java.lang.UnsatisfiedLinkError: C:\Program Files (x86)\apache-tomcat-7.0.34\bin\tcnative-1.dll: Can't load AMD 64-bit .dll on a IA 32-bit platform". However, I have the 64-bit JRE downloaded, and double-checked my Java version:
C:\Program Files (x86)\apache-tomcat-7.0.34\bin>java -version
java version "1.7.0_10"
Java(TM) SE Runtime Environment (build 1.7.0_10-b18)
Java HotSpot(TM) 64-Bit Server VM (build 23.6-b04, mixed mode)
I've seen this question here before, but in one there was no …
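The message says a 64-bit tcnative-1.dll is being loaded by a 32-bit JVM, so the useful check is which bitness each side really has. A hedged Python sketch, using the DLL path from the question, that reads the PE header's machine field to confirm whether the DLL itself is 32- or 64-bit:

    import struct

    def pe_machine(path):
        """Return the COFF 'machine' field of a PE file (0x014c = x86, 0x8664 = x64)."""
        with open(path, "rb") as f:
            if f.read(2) != b"MZ":
                raise ValueError("not a PE file")
            f.seek(0x3C)                                  # e_lfanew: offset of the PE header
            pe_offset = struct.unpack("<I", f.read(4))[0]
            f.seek(pe_offset)
            if f.read(4) != b"PE\0\0":
                raise ValueError("PE signature not found")
            return struct.unpack("<H", f.read(2))[0]

    machine = pe_machine(r"C:\Program Files (x86)\apache-tomcat-7.0.34\bin\tcnative-1.dll")
    print({0x014C: "x86 (32-bit)", 0x8664: "x64 (64-bit)"}.get(machine, hex(machine)))

Note that a 64-bit java on PATH does not guarantee Tomcat runs under it; the service wrapper or JAVA_HOME may point at a different, 32-bit JRE.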

How to store a 128 bit number in a single column in MySQL?

Submitted anonymously (unverified) on 2019-12-03 02:05:01
Question: I'm changing some tables to store IP addresses as numbers rather than strings. This is simple with IPv4, where the 32-bit address fits into an integer column. However, an IPv6 address is 128 bits. The MySQL documentation only shows numeric types up to 64 bits ("bigint"). Should I stick with char/varchar for IPv6? (Ideally I'd like to use the same column for IPv4 and IPv6, so I'd prefer not to do this.) Is there anything better than using two bigint columns? I would prefer not to have to break the value into upper and lower /64 halves whenever …
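One common layout is a single BINARY(16) column holding the packed 128-bit value, with IPv4 stored as an IPv4-mapped IPv6 address so both families share the column; two BIGINT UNSIGNED columns also work. A minimal Python sketch of the conversions (the helper names are mine, not from the question):

    import ipaddress

    def to_binary16(ip_str):
        """Pack an IPv4 or IPv6 address into 16 bytes for a BINARY(16) column."""
        ip = ipaddress.ip_address(ip_str)
        if ip.version == 4:
            # Store IPv4 as an IPv4-mapped IPv6 address (::ffff:a.b.c.d)
            # so a single column covers both families.
            ip = ipaddress.IPv6Address("::ffff:" + str(ip))
        return ip.packed                             # exactly 16 bytes

    def from_binary16(blob):
        """Turn 16 stored bytes back into an IPv4Address or IPv6Address."""
        ip = ipaddress.IPv6Address(blob)
        return ip.ipv4_mapped or ip                  # unmap IPv4 if applicable

    def to_two_bigints(ip_str):
        """Split the 128-bit value into (high, low) halves for two BIGINT UNSIGNED columns."""
        n = int.from_bytes(to_binary16(ip_str), "big")
        return n >> 64, n & ((1 << 64) - 1)

    print(to_binary16("192.0.2.1").hex())            # 00000000000000000000ffffc0000201
    print(from_binary16(to_binary16("2001:db8::1"))) # 2001:db8::1
    print(to_two_bigints("2001:db8::1"))             # (2306139568115548160, 1)

The BINARY(16) form sorts and compares correctly as raw bytes, which avoids splitting the value into /64 halves in application code.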

Can't build 32-bit Wine on 64-bit Linux

Submitted anonymously (unverified) on 2019-12-03 02:05:01
Question: I'm trying to do this: "Build 32bit on 64 bit Linux using an automake configure script?" It doesn't work for me :( when compiling Wine. I found this in config.log:
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Wine"
| #define PACKAGE_TARNAME "wine"
| #define PACKAGE_VERSION "1.5.19"
| #define PACKAGE_STRING "Wine 1.5.19"
| #define PACKAGE_BUGREPORT "wine-devel@winehq.org"
| #define PACKAGE_URL "http://www.winehq.org"
| /* end confdefs.h. */
|
| int
| main ()
| {
|
|   ;
|   return 0;
| }
Configuration fails with: Cannot build …
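The usual reason this configure check fails on a 64-bit host is that no 32-bit toolchain or 32-bit development libraries are installed (e.g. gcc-multilib and the i386 versions of Wine's dependencies). Not Wine's actual test verbatim, but the same kind of probe, sketched in Python: can the compiler link a trivial program with -m32?

    import os
    import subprocess
    import tempfile

    # Write a minimal C program and try to build it as a 32-bit binary,
    # which is essentially what configure's failed test program does.
    with tempfile.TemporaryDirectory() as tmp:
        c_file = os.path.join(tmp, "conftest.c")
        with open(c_file, "w") as f:
            f.write("int main(void) { return 0; }\n")
        result = subprocess.run(
            ["gcc", "-m32", c_file, "-o", os.path.join(tmp, "conftest")],
            capture_output=True, text=True,
        )
        print("32-bit toolchain OK" if result.returncode == 0 else result.stderr)

If this probe fails, installing the 32-bit compiler support and headers before rerunning ./configure is the first thing to try.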

Python: Read and write TIFF 16-bit, three-channel, colour images

Submitted anonymously (unverified) on 2019-12-03 02:05:01
Question: Does anyone have a method for importing a 16-bit-per-channel, 3-channel TIFF image in Python? I have yet to find a method that preserves the 16-bit depth per channel when dealing with the TIFF format. I am hoping that some helpful soul will have a solution. Here is a list of what I have tried so far without success, and the results:
import numpy as np
import PIL.Image as Image
import libtiff
import cv2
im = Image.open('a.tif')  # IOError: cannot identify image file
tif = libtiff.TIFF.open('a.tif')
im = tif.read_image()  # im only contains …
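One approach that preserves the 16-bit depth, using the same a.tif from the question: OpenCV's imread with IMREAD_UNCHANGED returns the pixels as a uint16 array (channels in BGR order), and imwrite writes a uint16 array back out at 16 bits; the tifffile package is another common option. A minimal sketch:

    import cv2
    import numpy as np

    # IMREAD_UNCHANGED keeps the file's depth and channel count, so a
    # 3-channel, 16-bit-per-channel TIFF comes back as uint16, shape (H, W, 3).
    img = cv2.imread("a.tif", cv2.IMREAD_UNCHANGED)
    print(img.dtype, img.shape)        # expected: uint16 (H, W, 3), BGR order

    # Writing a uint16 array to .tif keeps the 16-bit depth.
    cv2.imwrite("a_copy.tif", img)

    # Example manipulation that stays within the 16-bit range:
    darker = (img.astype(np.uint32) // 2).astype(np.uint16)
    cv2.imwrite("a_darker.tif", darker)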

Int to UInt (and vice versa) bit casting in Swift

Submitted anonymously (unverified) on 2019-12-03 02:05:01
Question: I am looking for a direct way to bit-cast the bit values of an Int to a UInt and vice versa. For example (using 8-bit integers for simplicity) I want to achieve the following:
let unsigned: UInt8 = toUInt8(-1)  // unsigned is 255 or 0xff
let signed: Int8 = toInt8(0xff)    // signed is -1
At first I came up with the following solution:
let unsigned = unsafeBitCast(Int8(-1), UInt8.self)
let signed = unsafeBitCast(UInt8(0xff), Int8.self)
But Apple states the following in the unsafeBitCast() documentation: ".. Caution:: Breaks the guarantees of …"
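Swift's fixed-width integer types provide bitPattern initializers for exactly this reinterpretation (e.g. UInt8(bitPattern: Int8(-1)) and Int8(bitPattern: UInt8(0xff))), so unsafeBitCast is not needed. The underlying two's-complement idea, sketched in Python rather than Swift:

    # Reinterpret an 8-bit two's-complement bit pattern without any unsafe
    # cast: masking to 8 bits gives the unsigned view, and subtracting 256
    # when the sign bit is set gives the signed view back.
    def to_uint8(x: int) -> int:
        return x & 0xFF

    def to_int8(x: int) -> int:
        x &= 0xFF
        return x - 0x100 if x >= 0x80 else x

    assert to_uint8(-1) == 0xFF              # same bit pattern as Int8(-1)
    assert to_int8(0xFF) == -1
    assert to_int8(to_uint8(-128)) == -128   # round-trips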

Storing SHA1 hash values in MySQL

Submitted anonymously (unverified) on 2019-12-03 02:03:01
Question: I have a simple question which occurred when I wanted to store the result of a SHA1 hash in a MySQL database: how long should the VARCHAR field be in which I store the hash's result?
Answer 1: I would use VARCHAR for variable-length data, but not for fixed-length data. Because a SHA-1 value is always 160 bits long, the VARCHAR would just waste an additional byte on the length of a fixed-length field. And I also wouldn't store the value that SHA1 returns as hex, because it uses just 4 bits per character and thus would need 160/4 = 40 characters. …
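The arithmetic in the answer checks out; a quick hashlib sketch showing the two storage options being weighed, 40 hex characters (a fixed CHAR(40) column) versus 20 raw bytes (a BINARY(20) column):

    import hashlib

    h = hashlib.sha1(b"example password")

    print(len(h.hexdigest()))   # 40 hex characters -> CHAR(40)
    print(len(h.digest()))      # 20 raw bytes      -> BINARY(20), half the space
    print(h.hexdigest())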

Why is bit endianness an issue in bitfields?

Submitted anonymously (unverified) on 2019-12-03 02:03:01
Question: Any portable code that uses bitfields seems to distinguish between little- and big-endian platforms. See the declaration of struct iphdr in the Linux kernel for an example of such code. I fail to understand why bit endianness is an issue at all. As far as I understand, bitfields are purely compiler constructs, used to facilitate bit-level manipulation. For instance, consider the following bitfield:
struct ParsedInt {
    unsigned int f1:1;
    unsigned int f2:3;
    unsigned int f3:4;
};
uint8_t i;
struct ParsedInt *d = &i;
Here, writing d->f2 is simply a …
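Bitfield layout is implementation-defined, which is exactly why struct iphdr carries two orderings. A small illustration in Python (conceptual only, using the first byte of an IPv4 header, 0x45: version 4 in the high nibble, IHL 5 in the low nibble) of how the two common allocation orders attach the first declared 4-bit field to different nibbles of the same on-the-wire byte:

    # First byte of an IPv4 header as seen on the wire.
    octet = 0x45

    # A compiler that allocates bitfields starting at the least significant
    # bit (the usual convention on little-endian targets) places the FIRST
    # declared 4-bit field in the low nibble:
    first_field_lsb_alloc = octet & 0x0F         # -> 5 (IHL)

    # One that allocates from the most significant bit (usual on big-endian
    # targets) places the first declared field in the high nibble:
    first_field_msb_alloc = (octet >> 4) & 0x0F  # -> 4 (version)

    print(first_field_lsb_alloc, first_field_msb_alloc)   # 5 4

That is why the kernel declares ihl before version on little-endian builds and the other way around on big-endian builds: the field names stay attached to the correct nibble of the same byte.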

Wow64DisableWow64FsRedirection on 32-bit Windows XP

Submitted anonymously (unverified) on 2019-12-03 02:03:01
Question: I'm writing a program in Visual Studio C++ which needs to run natively as a 32-bit process on any computer running 32-bit Windows XP or any later Windows operating system. This program needs to be able to access the C:\Windows\system32\ folder, regardless of whether it is running on a 64-bit or 32-bit system. To do this, I was using Wow64DisableWow64FsRedirection to disable the redirection that Windows normally applies to 32-bit processes, sending them to C:\Windows\syswow64. Unfortunately, this breaks compatibility …
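The usual fix for the compatibility break is to resolve Wow64DisableWow64FsRedirection at runtime via GetProcAddress instead of linking it statically, and to skip the call when it is absent (as on 32-bit XP, where there is no redirection to disable anyway). The same probe-then-call shape in a Python/ctypes sketch rather than the asker's C++:

    import ctypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    # ctypes resolves the symbol with GetProcAddress; on 32-bit Windows XP the
    # export does not exist, which surfaces here as an AttributeError.
    try:
        disable = kernel32.Wow64DisableWow64FsRedirection
        revert = kernel32.Wow64RevertWow64FsRedirection
    except AttributeError:
        disable = revert = None   # no WOW64, system32 is already the real one

    old_value = ctypes.c_void_p()
    if disable is not None:
        disable(ctypes.byref(old_value))
    try:
        # ... access C:\Windows\system32 here without redirection ...
        pass
    finally:
        if revert is not None:
            revert(old_value)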

MinGW compiled programs crash on 64 bit Windows

Submitted anonymously (unverified) on 2019-12-03 02:01:02
Question: I have 32-bit MinGW and 64-bit Eclipse CDT installed on 64-bit Windows 7 with an Intel Core i7-3612QM. When I compile and run hello world, the string prints, but the program gets a SIGILL afterward. Source:
#include <iostream>
using namespace std;
int main() {
    cout …
Stacktrace:
hello.exe [C/C++ Application]
  hello.exe [5532]
    Thread [1] 0 (Suspended : Signal : SIGILL:Illegal instruction)
      libstdc++-6!_ZSt4cout() at 0x6fccc3c0
      libstdc++-6!_ZNSolsEPFRSoS_E() at 0x6fc8908c
      _fu0___ZSt4cout() at hello.cpp:5 0x4013be
  gdb
Adding cin causes a segfault. Source: …
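Not a diagnosis of this specific trace, but a frequent cause of MinGW-built programs dying right after startup is picking up a mismatched libstdc++-6.dll or libgcc DLL from another toolchain earlier on PATH. A small Python sketch to list every copy the Windows loader could choose, in search order:

    import os

    RUNTIME_DLLS = ("libstdc++-6.dll", "libgcc_s_dw2-1.dll", "libgcc_s_sjlj-1.dll")

    for name in RUNTIME_DLLS:
        hits = [os.path.join(d, name)
                for d in os.environ.get("PATH", "").split(os.pathsep)
                if d and os.path.isfile(os.path.join(d, name))]
        # More than one hit, or a hit from an unexpected toolchain, is suspect.
        print(f"{name}: {hits if hits else 'not found on PATH'}")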