Pruning

R: Pruning a data.tree without altering the original

Submitted by 霸气de小男生 on 2019-12-11 06:49:40
Question: In the data.tree package, pruning a tree permanently alters it. This is problematic: my data.tree takes a long time to generate, and I don't want to regenerate it every time I need a new pruning. Here I generate a data.tree:

    # Loading data and library
    library(data.tree)
    data(acme)

    # Function to add cumulative costs to all nodes
    Cost <- function(node) {
      result <- node$cost
      if (length(result) == 0) result <- sum(sapply(node$children, Cost))
      return(result)
    }

    # Adding
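The usual fix in data.tree is to clone the tree and prune the copy (data.tree provides a Clone() function for this). As a language-agnostic illustration of the same idea, here is a minimal Python sketch (the Node class and names are hypothetical, not data.tree's API) that deep-copies a tree before pruning, so the expensive original survives:

```python
import copy

# Sketch of the fix: prune a deep copy so the original tree is never
# altered. (In R's data.tree the analogous call is Clone().)
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def prune(self, keep):
        """Destructively drop subtrees for which keep(child) is False."""
        self.children = [c for c in self.children if keep(c)]
        for c in self.children:
            c.prune(keep)

root = Node("acme", [Node("IT"), Node("Accounting"), Node("Research")])

pruned = copy.deepcopy(root)            # clone first...
pruned.prune(lambda n: n.name != "IT")  # ...then prune only the clone

# root keeps all three children; pruned has two.
```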

Accelerating FFTW pruning to avoid massive zero padding

Submitted by 匆匆过客 on 2019-12-08 22:08:18
Question: Suppose that I have a sequence x(n) which is K * N long and in which only the first N elements are different from zero. I'm assuming that N << K; say, for example, N = 10 and K = 100000. I want to calculate the FFT of such a sequence with FFTW. This is equivalent to having a sequence of length N and zero padding it to K * N. Since N and K may be "large", the zero padding is significant. I'm exploring whether I can save some computation time by avoiding the explicit zero padding. The case K = 2: Let us
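The K = 2 case the question begins to set out follows a decimation identity: the length-2N DFT of the zero-padded sequence splits into two length-N FFTs, one for the even output bins and one (after a twiddle-factor multiply) for the odd bins, so no zeros are ever transformed. A NumPy sketch of that identity (illustrative sizes; FFTW itself is not used here):

```python
import numpy as np

# Hypothetical sizes: N non-zero samples, zero-padded to K*N with K = 2.
N, K = 10, 2
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Reference: explicit zero padding to length K*N.
X_ref = np.fft.fft(x, K * N)

# Pruned computation for K = 2: two length-N FFTs instead of one
# length-2N FFT over mostly zeros.
#   Even bins: X[2r]   = FFT_N(x)[r]
#   Odd bins:  X[2r+1] = FFT_N(x * exp(-i*pi*n/N))[r]
n = np.arange(N)
X = np.empty(K * N, dtype=complex)
X[0::2] = np.fft.fft(x)
X[1::2] = np.fft.fft(x * np.exp(-1j * np.pi * n / N))

assert np.allclose(X, X_ref)
```

The same splitting generalizes to larger K (K length-N FFTs plus twiddles), which is the basis of FFT pruning.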

How do I enable partition pruning in Spark?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-06 21:22:29
Question: I am reading Parquet data and I see that Spark is listing all the partition directories on the driver side:

    Listing s3://xxxx/defloc/warehouse/products_parquet_151/month=2016-01 on driver
    Listing s3://xxxx/defloc/warehouse/products_parquet_151/month=2014-12 on driver

I have specified month=2014-12 in my WHERE clause. I have tried both the Spark SQL and the DataFrame APIs, and it looks like neither is pruning partitions.

Using the DataFrame API:

    df.filter("month='2014-12'").show()

Using Spark SQL:

    sqlContext.sql("select name, price from products_parquet_151 where month = '2014-12'")

I have tried the above on versions 1.5.1, 1
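Conceptually, Hive-style partition pruning means parsing the `key=value` segments of each partition directory and discarding the directories that cannot satisfy the predicate, so their files are never listed or scanned. A plain-Python sketch of that idea (not Spark's actual implementation; the paths are copied from the question):

```python
# Sketch of Hive-style partition pruning: keep only the directories
# whose key=value path segments satisfy the predicate.
paths = [
    "s3://xxxx/defloc/warehouse/products_parquet_151/month=2016-01",
    "s3://xxxx/defloc/warehouse/products_parquet_151/month=2014-12",
]

def partition_values(path):
    """Extract Hive-style key=value partition segments from a path."""
    parts = path.rstrip("/").split("/")
    return dict(p.split("=", 1) for p in parts if "=" in p)

# Predicate from the question: month = '2014-12'.
pruned = [p for p in paths if partition_values(p).get("month") == "2014-12"]
# Only the month=2014-12 directory survives; month=2016-01 is never read.
```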

Is there a free tool capable of pruning unused code from a CLI assembly? [closed]

Submitted by 醉酒当歌 on 2019-12-04 06:42:31
Closed. This question is off-topic and is not currently accepting answers. Closed 4 years ago.

Question: Is there a free tool capable of pruning unused code from a CLI assembly? I know there are obfuscators capable of performing this optimization, but they all cost money. Is there a free (or even open-source) tool that removes unused code from an already compiled assembly?

Answer (Jb Evain): There is. It's called the Mono.Linker. What I wrote about the Mono.Linker three years ago pretty much still

How to freeze/lock weights of one TensorFlow variable (e.g., one CNN kernel of one layer)

Submitted by 有些话、适合烂在心里 on 2019-12-03 04:36:22
Question: I have a TensorFlow CNN model that is performing well, and we would like to implement it in hardware, i.e., an FPGA. It's a relatively small network, but it would be ideal if it were smaller. With that goal, I've examined the kernels and found that some have quite strong weights while others aren't doing much at all (their values are all close to zero). This occurs specifically in layer 2, corresponding to the tf.Variable() named "W_conv2". W_conv2 has shape [3, 3, 32, 32]. I would like to freeze/lock the values of W_conv2[:, :, 29, 13] and set them
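One common way to freeze a slice of a variable is to leave the variable trainable but multiply its gradient by a constant mask before the update, so the masked entries never move. Here is a framework-agnostic NumPy sketch of that idea (the shape and the [:, :, 29, 13] slice are taken from the question; in TensorFlow the same mask would be applied to the computed gradient before it is handed to the optimizer):

```python
import numpy as np

# Sketch: freeze one kernel slice by masking its gradient during the
# update step. W mirrors the question's W_conv2, shape [3, 3, 32, 32];
# the slice W[:, :, 29, 13] should stay fixed (here, at zero).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3, 32, 32))
W[:, :, 29, 13] = 0.0                # the values we want locked

mask = np.ones_like(W)
mask[:, :, 29, 13] = 0.0             # zero gradient -> no update

grad = rng.standard_normal(W.shape)  # stand-in for a real backprop gradient
lr = 0.01
W -= lr * (grad * mask)              # masked gradient step

# The frozen slice is untouched; every other entry has moved.
```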

Pruning in Keras

Submitted by £可爱£侵袭症+ on 2019-11-30 17:26:06
Question: I'm trying to design a neural network using Keras with priority on prediction performance, and I cannot get sufficiently high accuracy while further reducing the number of layers and nodes per layer. I have noticed that a very large portion of my weights are effectively zero (>95%). Is there a way to prune dense layers in the hope of reducing prediction time?

Answer: Not a dedicated way :( There's currently no easy (dedicated) way of doing this with Keras. A discussion is ongoing at https://groups.google.com/forum/#!topic/keras-users/oEecCWayJrM . You may also be interested in this paper: https://arxiv.org/pdf
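The simplest form of the pruning the question asks about is magnitude pruning: zero every weight whose absolute value falls below a threshold. A framework-agnostic NumPy sketch (the matrix shape and the 1e-2 threshold are arbitrary examples; in Keras the same idea can be applied to a layer through get_weights()/set_weights()):

```python
import numpy as np

# Illustrative magnitude pruning: zero out every weight whose absolute
# value is below a threshold; all other weights are kept exactly.
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64)) * rng.random((64, 64))  # toy weight matrix

threshold = 1e-2
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

sparsity = float(np.mean(W_pruned == 0.0))
# Note: zeroed weights alone don't speed up a dense matmul; realizing a
# speed-up requires sparse kernels or structurally removing whole units.
```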