thrust

Dual Thrust Range Breakout Strategy (Python Version)

Submitted by 和自甴很熟 on 2020-02-26 01:48:07
Preface: Dual Thrust, which translates literally as "double thrust," is a trading strategy developed by Michael Chalek in the 1980s and was once hugely popular in the futures markets. Because the underlying idea is simple and the strategy has very few parameters, it can be adapted to many financial markets; it is precisely this ease of use and broad applicability that won it wide acceptance among traders and keeps it in use today.

Dual Thrust overview: Dual Thrust is an opening range breakout strategy. It adds and subtracts a certain range to and from the day's opening price to define an upper and a lower band; when the price breaks above the upper band the strategy goes long, and when it breaks below the lower band it goes short. Compared with other breakout strategies, it differs on two points. First, when setting the range, Dual Thrust draws on the open, high, low, and close prices of the previous N trading days, which keeps the range relatively stable over a given period, a reasonable property for a trend-following strategy. Second, Dual Thrust builds asymmetry into the long and short trigger conditions: through the external parameters Ks and Kx, the long side and the short side can each use their own setting, which suits the futures-market tendency to rise slowly and fall sharply. When Ks is smaller than Kx, long entries are triggered relatively more easily; when Ks is larger than Kx, short entries are triggered relatively more easily. The benefit is that you can adjust the values of Ks and Kx dynamically based on your own trading experience, or run the strategy with the optimal parameters found by backtesting on historical data.

Dual Thrust upper and lower bands: In the Dual Thrust strategy you first define the oscillation range of the previous N bars, then multiply that range by the long and short coefficients to obtain the offsets, and finally add and subtract those offsets from the opening price to form the upper and lower bands.
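For reference, the band construction described above is commonly written with the following formulas (they are not spelled out in this excerpt; HH, LC, HC, and LL stand for the highest high, lowest close, highest close, and lowest low of the previous N bars):

    Range    = max(HH - LC, HC - LL)
    BuyLine  = Open + Ks * Range    (upper band; go long on a break above it)
    SellLine = Open - Kx * Range    (lower band; go short on a break below it)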

Using polymorphic functors inside functions in Thrust

Submitted by 巧了我就是萌 on 2020-02-23 07:34:06
Question: I need a function that computes some "math" function of several variables on the GPU. I decided to use Thrust and its zip_iterator to pack the variables in a tuple, and to implement my math function as a functor for for_each . But I'd like to have a universal function that can compute different "math" functions, so I need to pass this functor into the function. As I see it, to do this I should implement some simple hierarchy (with a single base class) of functors with different versions of operator(
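A pattern that sidesteps runtime polymorphism entirely (and is usually preferred with Thrust, since virtual dispatch on objects copied to the device is fragile) is to template the driver function on the functor type. The sketch below assumes three float vectors packed with a zip_iterator; apply_math and my_sin_sum are made-up names, not from the question:

    #include <thrust/device_vector.h>
    #include <thrust/iterator/zip_iterator.h>
    #include <thrust/for_each.h>
    #include <thrust/tuple.h>
    #include <math.h>

    // One concrete "math" functor: writes sin(x) + y into the third tuple element.
    struct my_sin_sum {
        template <typename Tuple>
        __host__ __device__
        void operator()(Tuple t) const {
            thrust::get<2>(t) = sinf(thrust::get<0>(t)) + thrust::get<1>(t);
        }
    };

    // The "universal" driver: any functor type F is accepted and resolved at
    // compile time, so no virtual dispatch is needed on the device.
    template <typename F>
    void apply_math(thrust::device_vector<float>& x,
                    thrust::device_vector<float>& y,
                    thrust::device_vector<float>& z,
                    F f)
    {
        thrust::for_each(
            thrust::make_zip_iterator(thrust::make_tuple(x.begin(), y.begin(), z.begin())),
            thrust::make_zip_iterator(thrust::make_tuple(x.end(),   y.end(),   z.end())),
            f);
    }

    // usage sketch: apply_math(x, y, z, my_sin_sum());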

Polymorphism and derived classes in CUDA / CUDA Thrust

Submitted by 心不动则不痛 on 2020-01-27 07:56:13
Question: This is my first question on Stack Overflow, and it's quite a long question. The tl;dr version is: How do I work with a thrust::device_vector<BaseClass> if I want it to store objects of different types DerivedClass1 , DerivedClass2 , etc., simultaneously? I want to take advantage of polymorphism with CUDA Thrust. I'm compiling for an -arch=sm_30 GPU (GeForce GTX 670). Let us take a look at the following problem: Suppose there are 80 families in town. 60 of them are married couples, 20 of them
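For context, the usual obstacle here (not shown in the excerpt) is that thrust::device_vector<BaseClass> stores objects by value, so derived objects are sliced, and virtual tables set up on the host are not usable on the device. A common workaround is to keep a vector of base-class pointers and construct the derived objects on the device itself; the Person/Couple classes and kernel below are illustrative only, in the spirit of the families example:

    #include <thrust/device_vector.h>

    struct Person {                       // base class with a virtual method
        __device__ virtual int members() const { return 1; }
        __device__ virtual ~Person() {}
    };
    struct Couple : Person {
        __device__ int members() const override { return 2; }
    };

    // Construct each object with device-side 'new' so its vtable is a device vtable.
    __global__ void make_people(Person** p, int n_couples, int n_total) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_total) return;
        p[i] = (i < n_couples) ? static_cast<Person*>(new Couple()) : new Person();
    }

    // usage sketch:
    //   thrust::device_vector<Person*> people(80);
    //   make_people<<<1, 128>>>(thrust::raw_pointer_cast(people.data()), 60, 80);
    // Virtual calls must then also happen in device code, and each object needs a
    // matching device-side 'delete' when it is no longer needed.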

How to modify the contents of a zip iterator

Submitted by 流过昼夜 on 2020-01-25 09:52:08
Question: I have the XYZ locations of 1000 points. I need to transform each of them with a matrix. To start with a simpler problem, I try to multiply each point by a constant. Also, I take only three points as an example. I use thrust::zip_iterator to pack the XYZ and use a functor and a transform operation to modify the XYZ. My code is below; however, it gives a compilation error. The functor modify_tuple compiles fine, but when it is used in the transform operation I get a lot of errors. My question is
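One formulation that does compile (assuming the points live in three separate device_vectors, and that scaling by a constant is enough for the simplified problem; scale_point is a made-up name) has the functor take the tuple by value and return a new tuple, with the same zip_iterator passed as both input and output of thrust::transform:

    #include <thrust/device_vector.h>
    #include <thrust/iterator/zip_iterator.h>
    #include <thrust/transform.h>
    #include <thrust/tuple.h>

    typedef thrust::tuple<float, float, float> Float3;

    struct scale_point {
        float k;
        scale_point(float k) : k(k) {}
        __host__ __device__
        Float3 operator()(const Float3& p) const {
            // return a brand-new tuple; transform writes it back through the zip iterator
            return thrust::make_tuple(k * thrust::get<0>(p),
                                      k * thrust::get<1>(p),
                                      k * thrust::get<2>(p));
        }
    };

    // usage sketch:
    //   thrust::device_vector<float> X(3), Y(3), Z(3);
    //   auto first = thrust::make_zip_iterator(thrust::make_tuple(X.begin(), Y.begin(), Z.begin()));
    //   auto last  = thrust::make_zip_iterator(thrust::make_tuple(X.end(),   Y.end(),   Z.end()));
    //   thrust::transform(first, last, first, scale_point(2.0f));   // in-place: XYZ *= 2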

Can thrust::gather be used “in-place”?

Submitted by 半城伤御伤魂 on 2020-01-17 06:42:13
Question: Consider the following code:

    #include <time.h>       // --- time
    #include <stdlib.h>     // --- srand, rand
    #include <fstream>

    #include <thrust\host_vector.h>
    #include <thrust\device_vector.h>
    #include <thrust\sort.h>
    #include <thrust\iterator\zip_iterator.h>

    #include "TimingGPU.cuh"

    /********/
    /* MAIN */
    /********/
    int main() {

        const int N = 16384;

        std::ifstream h_indices_File, h_x_File;
        h_indices_File.open("h_indices.txt");
        h_x_File.open("h_x.txt");
        std::ofstream h_x_result_File;
        h_x_result_File.open(
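As background for the question in the title: thrust::gather is documented as requiring that the result range not overlap the input range, so a literal in-place gather is not supported. The usual workaround (sketched below with made-up names) gathers into a scratch vector and then swaps it with the source:

    #include <thrust/device_vector.h>
    #include <thrust/gather.h>

    // d_x holds the data, d_indices holds the permutation (both length N).
    // Gathering directly back into d_x would overlap input and output, so use a scratch vector.
    void gather_in_place(thrust::device_vector<float>& d_x,
                         const thrust::device_vector<int>& d_indices)
    {
        thrust::device_vector<float> d_tmp(d_x.size());
        thrust::gather(d_indices.begin(), d_indices.end(),   // map
                       d_x.begin(),                          // source
                       d_tmp.begin());                       // destination (no overlap)
        d_x.swap(d_tmp);                                     // cheap pointer swap
    }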

How to remove zero values from an array in parallel

Submitted by 喜夏-厌秋 on 2020-01-13 08:37:07
Question: How can I efficiently remove zero values from an array in parallel using CUDA? The information about the number of zero values is available in advance, which should simplify this task. It is important that the numbers remain ordered as in the source array when they are copied to the resulting array. Example: the array would e.g. contain the following values: [0, 0, 19, 7, 0, 3, 5, 0, 0, 1] with the additional information that 5 values are zeros. The desired end result would then be another
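One way to do this with Thrust (a sketch, not necessarily the approach the asker settled on) is stream compaction via thrust::copy_if, which preserves the relative order of the surviving elements; because the number of zeros is known in advance, the output vector can be sized exactly:

    #include <thrust/device_vector.h>
    #include <thrust/copy.h>

    struct is_nonzero {
        __host__ __device__
        bool operator()(int x) const { return x != 0; }
    };

    int main() {
        int h_in[] = {0, 0, 19, 7, 0, 3, 5, 0, 0, 1};
        const int num_zeros = 5;                        // known in advance
        thrust::device_vector<int> d_in(h_in, h_in + 10);
        thrust::device_vector<int> d_out(d_in.size() - num_zeros);

        // copy_if keeps the relative order of the elements that pass the predicate
        thrust::copy_if(d_in.begin(), d_in.end(), d_out.begin(), is_nonzero());
        // d_out now holds [19, 7, 3, 5, 1]
        return 0;
    }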

Evaluating expressions consisting of elementwise matrix operations in Thrust

Submitted by 两盒软妹~` on 2020-01-10 05:49:05
Question: I would like to use Thrust to evaluate expressions consisting of elementwise matrix operations. To make it clear, let us consider an expression like: D=A*B+3*sin(C) where A , B , C and D are matrices, of course of the same size. The Thrust Quick Start Guide provides the saxpy example, for which y is used both as input and as output, while in my case the output argument is different from the input ones, which, by the way, are more than two. At Element-by-element vector multiplication with CUDA,
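For this particular expression, a sketch of the zip_iterator route looks as follows, assuming A, B, C, and D are stored element-wise in flat device_vectors of equal length; abc_expression is a made-up name:

    #include <thrust/device_vector.h>
    #include <thrust/iterator/zip_iterator.h>
    #include <thrust/transform.h>
    #include <thrust/tuple.h>
    #include <math.h>

    // Evaluates d = a*b + 3*sin(c) for one element packed in a tuple (a, b, c).
    struct abc_expression {
        __host__ __device__
        float operator()(const thrust::tuple<float, float, float>& t) const {
            return thrust::get<0>(t) * thrust::get<1>(t) + 3.0f * sinf(thrust::get<2>(t));
        }
    };

    // usage sketch (A, B, C, D are the flat element-wise storage of the matrices):
    //   thrust::transform(
    //       thrust::make_zip_iterator(thrust::make_tuple(A.begin(), B.begin(), C.begin())),
    //       thrust::make_zip_iterator(thrust::make_tuple(A.end(),   B.end(),   C.end())),
    //       D.begin(), abc_expression());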

Output from reduce_by_key() as a function of two reduced vectors

Submitted by 旧时模样 on 2020-01-07 02:39:07
Question: I'm refactoring Thrust code by converting from an AoS to an SoA approach to take advantage of memory coalescing. To that end, I have two vectors that are reduced by a common key and which are then used to calculate the values for an output vector. The original code did this with a single functor, which I'd like to emulate. Essentially: Oᵢ = Rᵢ / Sᵢ, where Rᵢ and Sᵢ are vectors reduced by the same key, and Oᵢ is the corresponding output vector. Below is code that exemplifies what I'm trying to
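One straightforward sketch of Oᵢ = Rᵢ / Sᵢ (with made-up vector names, and without the tuple-fused single reduce_by_key the refactor may ultimately want) reduces each value vector by the shared key and then divides the two reduced vectors element-wise:

    #include <thrust/device_vector.h>
    #include <thrust/reduce.h>
    #include <thrust/transform.h>
    #include <thrust/functional.h>
    #include <thrust/iterator/discard_iterator.h>

    // keys, R and S have the same length; equal keys are assumed to be contiguous,
    // as reduce_by_key expects. out_keys and O are pre-sized by the caller to the
    // number of distinct key runs.
    void reduced_ratio(const thrust::device_vector<int>&   keys,
                       const thrust::device_vector<float>& R,
                       const thrust::device_vector<float>& S,
                       thrust::device_vector<int>&   out_keys,
                       thrust::device_vector<float>& O)
    {
        thrust::device_vector<float> R_red(out_keys.size());
        thrust::device_vector<float> S_red(out_keys.size());

        // Sum R and S segment-by-segment over the common key.
        thrust::reduce_by_key(keys.begin(), keys.end(), R.begin(),
                              out_keys.begin(), R_red.begin());
        thrust::reduce_by_key(keys.begin(), keys.end(), S.begin(),
                              thrust::make_discard_iterator(), S_red.begin());

        // O_i = R_i / S_i on the reduced vectors.
        thrust::transform(R_red.begin(), R_red.end(), S_red.begin(),
                          O.begin(), thrust::divides<float>());
    }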

Can 'Thrust' generate random numbers on the device?

Submitted by 做~自己de王妃 on 2020-01-06 06:12:06
Question: Does anybody know if the CUDA library 'Thrust' can generate random numbers on the device? I've seen from the example codes that it can do it on the host... but that's no good for me, really. Cheers in advance, Jack

Answer 1: Yes, Thrust has a device random generator. See the Monte Carlo example provided by the Thrust team.

Answer 2: cuRAND might also do what you want: http://developer.nvidia.com/curand It's not Thrust, but it has similar functionality.

Source: https://stackoverflow.com/questions/11024718/can-thrust
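For completeness, the device-side pattern from the Monte Carlo example referenced in the first answer boils down to seeding a thrust::default_random_engine per element inside a functor and driving it with a counting_iterator; uniform_gen is a made-up name:

    #include <thrust/device_vector.h>
    #include <thrust/transform.h>
    #include <thrust/iterator/counting_iterator.h>
    #include <thrust/random.h>

    // Produces one uniform float in [0, 1) per index, entirely on the device.
    struct uniform_gen {
        unsigned int seed;
        uniform_gen(unsigned int seed) : seed(seed) {}
        __host__ __device__
        float operator()(unsigned int i) const {
            thrust::default_random_engine rng(seed);
            rng.discard(i);                               // decorrelate the per-element streams
            thrust::uniform_real_distribution<float> dist(0.0f, 1.0f);
            return dist(rng);
        }
    };

    // usage sketch:
    //   thrust::device_vector<float> d_rand(1000);
    //   thrust::transform(thrust::counting_iterator<unsigned int>(0),
    //                     thrust::counting_iterator<unsigned int>(1000),
    //                     d_rand.begin(), uniform_gen(1234));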