optimization

Why do many sites minify CSS and JavaScript but not HTML? [duplicate]

Submitted by 落爺英雄遲暮 on 2019-12-29 04:06:25
Question: This question already has answers here (closed 9 years ago). Possible duplicate: Why minify assets and not the markup? I have seen a lot of sites use minified CSS and JavaScript to improve response time, but I have never seen any site use minified HTML. Why would you not want your HTML to be minified?

Answer 1: Because if you're doing things properly you're serving HTML gzipped anyway, so the low-hanging fruit of HTML minification - whitespace - isn't all that relevant. There aren't …
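
The answer's point can be checked with a quick experiment: once a response is compressed, most of the whitespace savings from minification disappear. A minimal sketch (the HTML fragment and the naive whitespace-collapsing "minifier" are made up for illustration; zlib stands in for gzip, which uses the same DEFLATE algorithm):

```python
import zlib

# A hypothetical HTML fragment with generous indentation and blank lines.
html = ("<html>\n  <body>\n    <p>Hello, world!</p>\n\n"
        "    <p>Another paragraph.</p>\n  </body>\n</html>\n") * 100

# Naive "minification": collapse every run of whitespace to a single space.
minified = " ".join(html.split())

raw_saving = 1 - len(minified) / len(html)

gz_html = zlib.compress(html.encode(), 9)
gz_min = zlib.compress(minified.encode(), 9)

print(f"uncompressed: {len(html)} -> {len(minified)} ({raw_saving:.0%} saved)")
print(f"compressed:   {len(gz_html)} -> {len(gz_min)} bytes")
```

DEFLATE encodes repeated runs (like indentation) very cheaply, so the two compressed sizes come out close together even when the raw sizes differ noticeably.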

Hyperparameter tuning for TensorFlow

Submitted by 对着背影说爱祢 on 2019-12-29 04:02:35
Question: I am searching for a hyperparameter tuning package for code written directly in TensorFlow (not Keras or TFLearn). Could you make some suggestions?

Answer 1: Usually you don't need to have your hyperparameter optimisation logic coupled with the optimised model (unless your hyperparameter optimisation logic is specific to the kind of model that you are training, in which case you would need to tell us a bit more). There are several tools and packages available for the task. Here is a good paper on the …
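
Framework-agnostic random search is a common baseline such packages implement, and it decouples cleanly from the model exactly as the answer suggests. A minimal sketch, assuming you wrap your TensorFlow training loop in a function that returns a validation loss (the `train_and_eval` surrogate below is a made-up stand-in, not real training code):

```python
import random

# Hypothetical stand-in for "build the graph, train, return validation loss".
# In real code this would construct and train your TensorFlow model.
def train_and_eval(learning_rate, hidden_units):
    # Toy surrogate loss: pretend the best settings are lr=0.01, hidden=64.
    return (learning_rate - 0.01) ** 2 + (hidden_units - 64) ** 2 / 1e4

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_loss, best_params = float("inf"), None
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform sampling
            "hidden_units": rng.choice([16, 32, 64, 128]),
        }
        loss = train_and_eval(**params)
        if loss < best_loss:
            best_loss, best_params = loss, params
    return best_loss, best_params

best_loss, best_params = random_search(50)
print(best_loss, best_params)
```

Sampling learning rates log-uniformly is the usual choice because their useful values span orders of magnitude.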

How to speed up my sparse matrix solver?

Submitted by 风格不统一 on 2019-12-29 03:41:08
Question: I'm writing a sparse matrix solver using the Gauss-Seidel method. By profiling, I've determined that about half of my program's time is spent inside the solver. The performance-critical part is as follows:

    size_t ic = d_ny + 1, iw = d_ny, ie = d_ny + 2, is = 1, in = 2 * d_ny + 1;
    for (size_t y = 1; y < d_ny - 1; ++y) {
        for (size_t x = 1; x < d_nx - 1; ++x) {
            d_x[ic] = d_b[ic]
                - d_w[ic] * d_x[iw] - d_e[ic] * d_x[ie]
                - d_s[ic] * d_x[is] - d_n[ic] * d_x[in];
            ++ic; ++iw; ++ie; ++is; ++in;
        }
        ic += …

Laziness and tail recursion in Haskell, why is this crashing?

Submitted by 孤街浪徒 on 2019-12-29 03:10:37
Question: I have this fairly simple function to compute the mean of the elements of a big list, using two accumulators to hold the sum so far and the count so far:

    mean = go 0 0
      where
        go s l []     = s / fromIntegral l
        go s l (x:xs) = go (s+x) (l+1) xs

    main = do
        putStrLn (show (mean [0..10000000]))

Now, in a strict language this would be tail-recursive and there would be no problem. However, as Haskell is lazy, my googling has led me to understand that (s+x) and (l+1) will be passed down the recursion as …
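
The usual fix is to make the accumulators strict, so each addition is forced at every step instead of building a chain of thunks. A sketch using bang patterns (`foldl'` over a strict pair is another standard route):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bangs force s and l to weak head normal form on every call, so
-- (s+x) and (l+1) are evaluated immediately rather than accumulating
-- as nested thunks that overflow the stack when finally demanded.
mean :: [Double] -> Double
mean = go 0 0
  where
    go :: Double -> Int -> [Double] -> Double
    go !s !l []     = s / fromIntegral l
    go !s !l (x:xs) = go (s + x) (l + 1) xs

main :: IO ()
main = print (mean [0 .. 10000000])
```

Compiling with optimisations (ghc -O2) often lets strictness analysis find this on its own, but the bangs make the intent explicit and robust.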

CUDA Block and Grid size efficiencies

Submitted by 北城余情 on 2019-12-29 03:10:07
Question: What is the advised way of dealing with dynamically sized datasets in CUDA? Is it a case of "set the block and grid sizes based on the problem set", or is it worthwhile to assign block dimensions as factors of 2 and have some in-kernel logic to deal with the overspill? I can see how this probably matters a lot for the block dimensions, but how much does it matter for the grid dimensions? As I understand it, the actual hardware constraints stop at the block level (i.e. blocks assigned to SMs …

SciPy optimization not running when I try to set constraints using a for loop

Submitted by 心已入冬 on 2019-12-29 01:58:10
Question: I was trying to minimize the objective function while using a for loop to set the constraints such that x1 = x2 = ... = xn. However, the optimization doesn't seem to work: the final x still equals the initial x, and I am getting the error message "Singular matrix C in LSQ subproblem".

    covariance_matrix = np.matrix([[ 0.159775519,  0.022286316,  0.00137635,  -0.001861736],
                                   [ 0.022286316,  0.180593862, -5.5578e-05,   0.00451056],
                                   [ 0.00137635,  -5.5578e-05,   0.053093075,  0.02240866],
                                   [-0.001861736, …
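
One frequent cause of loop-built constraints collapsing into a single constraint (an assumption about the asker's code, which the excerpt doesn't show) is Python's late-binding closures: every lambda created in the loop reads the loop variable at call time, so all the constraint functions end up identical, and duplicate constraints can make the solver's constraint matrix singular. A sketch of the bug and the default-argument fix:

```python
# Buggy: each lambda closes over the *variable* i, not its value at
# creation time, so after the loop all three read i == 2.
buggy = [lambda x: x[i] - x[i + 1] for i in range(3)]

# Fixed: a default argument is evaluated when the lambda is defined,
# binding each constraint to its own index.
fixed = [lambda x, i=i: x[i] - x[i + 1] for i in range(3)]

x = [1.0, 2.0, 4.0, 8.0]
print([f(x) for f in buggy])  # prints [-4.0, -4.0, -4.0]
print([f(x) for f in fixed])  # prints [-1.0, -2.0, -4.0]
```

The same fix applies when building SciPy constraint dicts in a loop: write the 'fun' entry as `lambda x, i=i: ...` so each equality constraint keeps its own index.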

os.walk very slow, any way to optimise?

Submitted by 霸气de小男生 on 2019-12-29 01:39:12
Question: I am using os.walk to build a map of a data store (this map is used later in the tool I am building). This is the code I currently use:

    def find_children(tickstore):
        children = []
        dir_list = os.walk(tickstore)
        for i in dir_list:
            children.append(i[0])
        return children

I have done some analysis on it: dir_list = os.walk(tickstore) returns instantly, because os.walk is a lazy generator and does no filesystem work until iterated; if I do nothing with dir_list then this function completes instantly. It is iterating over dir_list that takes a long time, even if I don't append …
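
If the cost is per-entry stat() calls, os.scandir exposes the file-type information the OS already returned with each directory listing, avoiding a separate stat per entry. A sketch of a scandir-based rewrite (tickstore renamed to root; the demo tree is made up):

```python
import os
import tempfile

# Collect a directory and all its subdirectories using os.scandir.
# entry.is_dir() reuses cached type info from the directory listing
# where the platform provides it, instead of stat()-ing every entry.
def find_children(root):
    children = [root]
    stack = [root]
    while stack:
        path = stack.pop()
        with os.scandir(path) as it:
            for entry in it:
                if entry.is_dir(follow_symlinks=False):
                    children.append(entry.path)
                    stack.append(entry.path)
    return children

# Tiny demo tree.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
os.makedirs(os.path.join(root, "c"))
print(sorted(find_children(root)))
```

Note that os.walk itself has been scandir-based since Python 3.5, so on modern Pythons the bigger wins usually come from pruning subtrees you don't need to visit, and genuinely slow walks often point at the filesystem or network layer rather than the Python code.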

Pass by value or reference, to a C++ constructor that needs to store a copy?

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-29 01:37:14
Question: Should a C++ (implicit or explicit) value constructor accept its parameter(s) by value or by reference-to-const when it needs to store a copy of the argument(s) in its object either way? Here is the shortest example I can think of:

    struct foo {
        bar _b;
        foo(bar [const&] b)  // pass by value or reference-to-const?
            : _b(b) { }
    };

The idea here is that I want to minimize the calls to bar's copy constructor when a foo object is created, in any of the various ways in which a foo object might get …

Why does gcc allocate extra space for a local variable?

Submitted by 冷暖自知 on 2019-12-29 01:36:15
Question: I have written a simple function in C:

    void GetInput() {
        char buffer[8];
        gets(buffer);
        puts(buffer);
    }

When I disassemble it in gdb, it gives the following disassembly:

       0x08048464 <+0>:   push %ebp
       0x08048465 <+1>:   mov  %esp,%ebp
       0x08048467 <+3>:   sub  $0x10,%esp
       0x0804846a <+6>:   mov  %gs:0x14,%eax
       0x08048470 <+12>:  mov  %eax,-0x4(%ebp)
       0x08048473 <+15>:  xor  %eax,%eax
    => 0x08048475 <+17>:  lea  -0xc(%ebp),%eax
       0x08048478 <+20>:  mov  %eax,(%esp)
       0x0804847b <+23>:  call 0x8048360 <gets@plt> …