casting

Interesting observation on byte addition and assignment

ぃ、小莉子 submitted on 2021-02-04 19:06:40
Question: Today, while helping someone, I came across an interesting issue whose cause I couldn't understand. With += no explicit cast is needed, but with plain addition such as d + 3 an explicit cast is required. I couldn't find the exact reason; any input is appreciated.

    public class Test {
        byte c = 2;
        byte d = 5;

        public void test(String[] args) {
            c += 2;               // compiles without a cast
            d = (byte) (d + 3);   // cast required here
        }
    }

Answer 1: Java is defined such that += and the other compound assignment operators automatically cast the result to the type of the left-hand operand.
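To make that rule concrete, here is a minimal sketch (not part of the original thread) showing that c += 2 behaves like an assignment with a hidden narrowing cast, while plain + promotes both operands to int:

    public class CompoundCastDemo {
        public static void main(String[] args) {
            byte c = 2;
            c += 2;              // implicitly: c = (byte) (c + 2);
            // c = c + 2;        // would not compile: c + 2 is an int
            byte d = 5;
            d = (byte) (d + 3);  // plain + yields int, so a cast is needed
            System.out.println(c + " " + d); // prints "4 8"
        }
    }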

Why is an explicit cast not needed here?

核能气质少年 submitted on 2021-02-04 17:10:06
Question:

    class MyClass {
        void myMethod(byte b) {
            System.out.print("myMethod1");
        }

        public static void main(String[] args) {
            MyClass me = new MyClass();
            me.myMethod(12);
        }
    }

I understand that because the argument of myMethod() is an int literal and the parameter b is of type byte, this code generates a compile-time error (which could be avoided by using an explicit byte cast on the argument: myMethod((byte) 12)).

    class MyClass {
        byte myMethod() {
            return 12;
        }

        public static void main(String[] args
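The asymmetry this question is heading toward can be sketched as follows (an illustration, not the original poster's code): Java's assignment contexts, which include return statements, implicitly narrow a constant int that fits in a byte, but method-invocation contexts never do:

    class NarrowingContexts {
        static byte give() {
            return 12;          // OK: assignment conversion narrows the constant
        }

        static void take(byte b) {
            System.out.print(b);
        }

        public static void main(String[] args) {
            byte b = 12;        // OK: the same constant narrowing
            take((byte) 12);    // cast required: invocation conversion won't narrow
            give();
        }
    }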

Clone an Rc<RefCell<MyType>> trait object and cast it

杀马特。学长 韩版系。学妹 submitted on 2021-02-04 16:42:18
Question: This question is related to Rust: Clone and Cast Rc pointer. Let's say I have this piece of code, which works fine:

    use std::rc::Rc;

    trait TraitAB: TraitA + TraitB {
        fn as_a(self: Rc<Self>) -> Rc<dyn TraitA>;
        fn as_b(self: Rc<Self>) -> Rc<dyn TraitB>;
    }

    trait TraitA {}
    trait TraitB {}

    struct MyType {}

    impl TraitAB for MyType {
        fn as_a(self: Rc<Self>) -> Rc<dyn TraitA> { self }
        fn as_b(self: Rc<Self>) -> Rc<dyn TraitB> { self }
    }

    impl TraitA for MyType {}
    impl TraitB for MyType {}

    fn main() {
        let a
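The snippet is cut off at main, but the pattern it sets up can be exercised along these lines (a sketch of typical usage, not the thread's answer): clone the Rc before calling a conversion method, so the original handle is not consumed:

    use std::rc::Rc;

    trait TraitAB: TraitA + TraitB {
        fn as_a(self: Rc<Self>) -> Rc<dyn TraitA>;
        fn as_b(self: Rc<Self>) -> Rc<dyn TraitB>;
    }
    trait TraitA {}
    trait TraitB {}

    struct MyType {}

    impl TraitAB for MyType {
        fn as_a(self: Rc<Self>) -> Rc<dyn TraitA> { self }
        fn as_b(self: Rc<Self>) -> Rc<dyn TraitB> { self }
    }
    impl TraitA for MyType {}
    impl TraitB for MyType {}

    fn main() {
        // One owning handle to the multi-trait object...
        let ab: Rc<dyn TraitAB> = Rc::new(MyType {});
        // ...cloned before each conversion, so `ab` itself stays usable.
        let _a: Rc<dyn TraitA> = ab.clone().as_a();
        let _b: Rc<dyn TraitB> = ab.clone().as_b();
    }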

Cleaner way to convert an array of compact float values into an array of floats

余生颓废 submitted on 2021-01-29 21:12:28
Question: Before the actual question, a small prelude: I don't care about security, I do care about performance. I KNOW this is not proper and I know it's very hacky, but it is quite fast.

    vector<float> result = move(*((vector<float>*)&vertices));

That code abuses C-style casts and pointers to force the compiler to interpret vertices, a vector of a compact type whose fields are all float, as a vector of floats, i.e.

    struct vertex { float x; float y; float z; }
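A cleaner route that avoids casting the vector object itself (a sketch under the assumption that vertex really is three padding-free floats; not code from the thread) is to copy the contiguous storage:

    #include <cstring>
    #include <vector>

    struct vertex { float x, y, z; };

    std::vector<float> to_floats(const std::vector<vertex>& vertices) {
        // Layout check: three floats, no padding, so a raw copy is valid.
        static_assert(sizeof(vertex) == 3 * sizeof(float), "unexpected padding");
        std::vector<float> out(vertices.size() * 3);
        // memcpy sidesteps the strict-aliasing trouble the pointer cast invites.
        std::memcpy(out.data(), vertices.data(), vertices.size() * sizeof(vertex));
        return out;
    }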

Swift: casting un-constrained generic type to generic type that conforms to Decodable

淺唱寂寞╮ submitted on 2021-01-29 11:08:10
Question: Situation: I have two generic classes which fetch data either from an API or from a database, say APIDataSource<I, O> and DBDataSource<I, O> respectively. I will inject one of the two classes into a view-model when creating it, and the view-model will use that class to fetch the data it needs. I want the view-model to work exactly the same with both classes, so I don't want different generic constraints on them.

    // pseudo code
    ViewModel(APIDataSource<InputModel, ResponseModel>(...))
    // I want to change the
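One common workaround for the underlying problem (discovering an un-constrained generic's Decodable conformance at run time) looks roughly like this. The helper names are hypothetical, it relies on Swift 5.7's implicit opening of existential metatypes, and it is a sketch rather than the thread's accepted answer:

    import Foundation

    // Route decoding through a protocol extension so an erased
    // Decodable.Type can still decode to a concrete Self.
    extension Decodable {
        static func decoded(from data: Data) throws -> Self {
            try JSONDecoder().decode(Self.self, from: data)
        }
    }

    // O carries no constraints; the conformance is discovered with a cast.
    func decodeIfPossible<O>(_ data: Data, as type: O.Type) -> O? {
        guard let decodable = type as? Decodable.Type else { return nil }
        return (try? decodable.decoded(from: data)) as? O
    }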

TSQL “Illegal XML Character” When Converting Varbinary to XML

早过忘川 submitted on 2021-01-29 07:20:18
Question: I'm trying to create a stored procedure in SQL Server 2016 that converts XML that was previously converted into varbinary back into XML, but I get an "Illegal XML character" error when converting. I've found a workaround that seems to work, but I can't actually figure out why it works, which makes me uncomfortable. The stored procedure takes data that was converted to binary in SSIS and inserted into a varbinary(MAX) column in a table, and performs a simple CAST(Column AS XML). It worked fine
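This class of error usually comes down to a mismatch between the stored bytes' encoding and the encoding the XML parser infers. A self-contained sketch (assumed sample data, not the poster's procedure) of the direct cast next to the common NVARCHAR detour:

    DECLARE @bin VARBINARY(MAX) = CAST(N'<root>café</root>' AS VARBINARY(MAX));

    -- Direct cast: may raise "Illegal XML character" if the parser guesses
    -- the wrong encoding for the byte stream (UTF-16 bytes, no BOM, here).
    SELECT CAST(@bin AS XML) AS DirectCast;

    -- Workaround: decode the UTF-16 bytes as NVARCHAR(MAX) first,
    -- then cast the resulting string to XML.
    SELECT CAST(CAST(@bin AS NVARCHAR(MAX)) AS XML) AS ViaNvarchar;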

Convert list of strings to numpy array of floats

随声附和 submitted on 2021-01-29 06:24:19
Question: Assume I have a list of strings and I want to convert it to a numpy array. For example, I have

    A = ['[1 2 3 4 5 6 7]', '[8 9 10 11 12 13 14]']
    print(A)
    ['[1 2 3 4 5 6 7]', '[8 9 10 11 12 13 14]']

I want my output to be a 2-by-7 matrix: [1 2 3 4 5 6 7; 8 9 10 11 12 13 14]. What I have tried so far is the following:

    m = len(A)
    M = []
    for ii in range(m):
        temp = A[ii]
        temp = temp.strip('[')
        temp = temp.strip(']')
        M.append(temp)
    print(np.asarray(M))

However, my output is the following:
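The loop above still leaves each row as a single string, so np.asarray yields an array of two strings. A sketch of one fix (not the thread's answer): split each stripped string into tokens and let numpy do the numeric conversion:

    import numpy as np

    A = ['[1 2 3 4 5 6 7]', '[8 9 10 11 12 13 14]']

    # strip('[]') removes both brackets, split() yields the seven tokens,
    # and dtype=float converts them while stacking into a 2x7 array.
    M = np.array([s.strip('[]').split() for s in A], dtype=float)
    print(M.shape)  # (2, 7)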

Can a type of variable be an object in C++?

天涯浪子 submitted on 2021-01-29 06:09:02
Question: I hope my question is clear. I would like to do something like this:

    TYPE tipo = int;
    tipo = float;

so that later I can do other things like these:

    void* ptr = new tipo();
    cout << *(tipo*)ptr;

Basically, I want to determine the type behind a void pointer (or a variant object, or an any object) and dereference it, but at run time. This is my whole problem: I'm trying to create an array that can hold any type of variable (simulating the RAM of a PC for a compiler). For this I'm using void pointers
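Types are not run-time values in C++, but a tagged union gets close to what the question describes. A sketch (hypothetical Cell type, not the asker's code) using std::variant for the simulated RAM, requiring C++17:

    #include <iostream>
    #include <variant>
    #include <vector>

    // Each cell remembers at run time which alternative it currently holds.
    using Cell = std::variant<int, float, double, char>;

    int main() {
        std::vector<Cell> ram;
        ram.push_back(42);       // this cell holds an int
        ram.push_back(3.14f);    // this one holds a float

        // std::visit dispatches on the stored type, replacing *(tipo*)ptr.
        for (const Cell& c : ram)
            std::visit([](auto v) { std::cout << v << '\n'; }, c);
    }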