theory

Why can't a null-reference exception name the object that has a null reference?

≯℡__Kan透↙ submitted on 2019-11-29 06:06:15
It seems to me that a lot of my debugging time is spent chasing down null-reference exceptions in complex statements. For instance: For Each game As IHomeGame In _GamesToOpen.GetIterator() Why, when I get a NullReferenceException, can I get the line number in the stack trace, but not the name of the object that is null? In other words, why "Object reference not set to an instance of an object." instead of "_GamesToOpen is not set to an instance of an object.", "Anonymous object returned by _GamesToOpen.GetIterator() is null.", or "game was set to null."? Is this strictly a design choice, meant to…
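The question is about .NET, but for comparison, some runtimes have since started doing exactly this. Java's helpful NullPointerException messages (JEP 358, shipped in JDK 14 and on by default since JDK 15) name the null reference in the message. A minimal sketch; the class and field names are invented:

```java
import java.util.List;

public class NpeDemo {
    static List<String> gamesToOpen;  // never assigned, so it stays null

    public static void main(String[] args) {
        // Throws NullPointerException on the implicit gamesToOpen.iterator() call.
        // With helpful NPE messages enabled, the JVM reports something like:
        //   Cannot invoke "java.util.List.iterator()" because "NpeDemo.gamesToOpen" is null
        // -- i.e., it names the reference that was null, as the question asks for.
        for (String game : gamesToOpen) {
            System.out.println(game);
        }
    }
}
```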

while-else-loop

谁说我不能喝 submitted on 2019-11-29 03:59:32
Of course this is an impossible statement in Java (to date), but ideally I would like to implement it, since it sits at the heart of many of my iterations. For example, the first several times it is called I'm executing it 650,000+ times while it is creating the ArrayList. Unfortunately, my actual code does not have the set inside the else branch; it therefore passes over both the add and then the set commands, wasting time. After that I also use it in another loop where it only performs the set, since the data has already been created, and this is nested within many others, so it is a…
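One way to get the effect in Java today is to hoist the "else" branch out of the loop, so the does-the-element-exist-yet check runs once per call rather than once per element. A minimal sketch under that reading of the question; fill and compute are invented names:

```java
import java.util.ArrayList;
import java.util.List;

public class WhileElseDemo {
    // Grow the list on the first call, overwrite on later calls, without
    // testing "does index i exist?" inside every single iteration.
    static void fill(List<Double> values, int n) {
        if (values.isEmpty()) {
            // The "else" path, taken once: build the backing list.
            for (int i = 0; i < n; i++) {
                values.add(compute(i));
            }
        } else {
            // Steady state: indices already exist, so just set.
            for (int i = 0; i < n; i++) {
                values.set(i, compute(i));
            }
        }
    }

    static double compute(int i) {
        return i * 0.5;  // placeholder for the real per-element work
    }

    public static void main(String[] args) {
        List<Double> values = new ArrayList<>();
        fill(values, 5);  // first call takes the add path
        fill(values, 5);  // later calls take the set path
        System.out.println(values);
    }
}
```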

Understanding word alignment

主宰稳场 submitted on 2019-11-29 02:44:41
Question: I understand what it means to access memory such that it is aligned, but I don't understand why this is necessary. For instance, why can I access a single byte at address 0x…1, but I cannot access a half word (two bytes) at the same address? Again, I understand that if you have an address A and an object of size s, the access is aligned if A mod s = 0. But I just don't understand why this is important at the hardware level. Answer 1: Hardware is complex; this is a simplified explanation.
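The A mod s = 0 condition is easy to check directly; for power-of-two sizes it reduces to a bitmask test. A small self-contained sketch (class and method names are mine):

```java
public class Alignment {
    // An access of `size` bytes at `address` is aligned when address mod size == 0.
    // For power-of-two sizes this is the classic bitmask form of the same test.
    static boolean isAligned(long address, long size) {
        return (address & (size - 1)) == 0;  // assumes size is a power of two
    }

    public static void main(String[] args) {
        System.out.println(isAligned(0x1001L, 1)); // true:  every address is byte-aligned
        System.out.println(isAligned(0x1001L, 2)); // false: a half word at 0x...1 is misaligned
        System.out.println(isAligned(0x1004L, 4)); // true:  word-aligned
    }
}
```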

Proof that the halting problem is NP-hard?

半城伤御伤魂 submitted on 2019-11-29 00:12:50
Question: In this answer to a question about the definitions of NP, NP-hard, and NP-complete, Jason makes the claim that: "The halting problem is the classic NP-hard problem. This is the problem that, given a program P and input I, asks: will it halt? This is a decision problem, but it is not in NP. It is clear that any NP-complete problem can be reduced to this one." While I agree that the halting problem is intuitively a much "harder" problem than anything in NP, I honestly cannot come up with a formal…
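One standard reduction (my sketch, not necessarily the argument the quoted answer had in mind) goes through the verifier definition of NP:

```latex
Let $L \in \mathrm{NP}$ with a polynomial-time verifier $V$ and certificate
bound $p$. Given an instance $x$, construct a program $P_x$ that enumerates
every string $c$ with $|c| \le p(|x|)$, halts as soon as $V(x, c)$ accepts,
and loops forever otherwise. Then
\[
  x \in L \;\iff\; \exists c :\ V(x, c) \text{ accepts} \;\iff\; P_x \text{ halts},
\]
and the text of $P_x$ is computable from $x$ in polynomial time, so
$L \le_p \mathrm{HALT}$ for every $L \in \mathrm{NP}$; that is, the halting
problem is NP-hard. (It is not NP-complete, because it is undecidable and
therefore not in NP.)
```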

complexity of parsing C++

强颜欢笑 submitted on 2019-11-28 23:49:28
Out of curiosity, I was wondering what some "theoretical" results about parsing C++ are. Let n be the size of my project (in LOC, for example; since we'll deal with big-O, the exact unit isn't very important). Is C++ parsed in O(n)? If not, what's the complexity? Is C (or Java, or any language with a simpler grammar) parsed in O(n)? Will C++1x introduce new features that will be even harder to parse? References would be greatly appreciated! I think the term "parsing" is being interpreted by different people in different ways for the purposes of the question. In a narrow technical sense,…

Balancing a Binary Tree (AVL)

放肆的年华 submitted on 2019-11-28 23:38:57
Question: OK, this is another one in the theory realm for the CS guys around. In the '90s I did fairly well implementing BSTs. The only thing I could never get my head around was the intricacy of the algorithm for balancing a binary tree (AVL). Can you guys help me with this? Answer 1: A scapegoat tree possibly has the simplest balance-determination algorithm to understand. If any insertion causes the new node to be too deep, it finds a node around which to rebalance by looking at weight balance rather than…
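For the AVL case itself, the whole algorithm is: track each node's height, and on the way back up from an insertion apply one of four rotation cases wherever the two subtree heights differ by more than one. A compact sketch in Java (class and method names are mine, not from the thread):

```java
// Minimal AVL insertion: rebalance on the way back up using node heights.
class AvlTree {
    static class Node {
        int key, height = 1;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    Node root;

    void insert(int key) { root = insert(root, key); }

    private static int height(Node n)  { return n == null ? 0 : n.height; }
    private static int balance(Node n) { return height(n.left) - height(n.right); }
    private static void update(Node n) {
        n.height = 1 + Math.max(height(n.left), height(n.right));
    }

    private static Node rotateRight(Node y) {
        Node x = y.left;
        y.left = x.right;
        x.right = y;
        update(y);
        update(x);
        return x;
    }

    private static Node rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        update(x);
        update(y);
        return y;
    }

    private static Node insert(Node n, int key) {
        if (n == null) return new Node(key);
        if (key < n.key) {
            n.left = insert(n.left, key);
        } else if (key > n.key) {
            n.right = insert(n.right, key);
        } else {
            return n;  // ignore duplicate keys
        }
        update(n);
        int b = balance(n);
        if (b > 1 && key < n.left.key) return rotateRight(n);   // left-left
        if (b > 1) {                                            // left-right
            n.left = rotateLeft(n.left);
            return rotateRight(n);
        }
        if (b < -1 && key > n.right.key) return rotateLeft(n);  // right-right
        if (b < -1) {                                           // right-left
            n.right = rotateRight(n.right);
            return rotateLeft(n);
        }
        return n;  // already balanced at this node
    }
}
```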

How do we decide the number of dimensions for latent semantic analysis?

核能气质少年 submitted on 2019-11-28 21:56:41
I have been working on latent semantic analysis lately. I have implemented it in Java by making use of the Jama package. Here is the code:

Matrix vtranspose;
a = new Matrix(termdoc);
termdoc = a.getArray();
a = a.transpose();
SingularValueDecomposition sv = new SingularValueDecomposition(a);
u = sv.getU();
v = sv.getV();
s = sv.getS();
vtranspose = v.transpose(); // we obtain this as a result of SVD
uarray = u.getArray();
sarray = s.getArray();
varray = vtranspose.getArray();
if (semantics.maketerms.nodoc > 50) {
    sarray_mod = new double[50][50];
    uarray_mod = new double[uarray.length][50];
…
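On the actual question of how many dimensions to keep: a common heuristic (not from the post; the 90% threshold below is an assumed parameter, not a recommendation from it) is to keep the smallest k whose leading singular values retain a fixed fraction of the total energy. A sketch using Jama:

```java
import Jama.Matrix;
import Jama.SingularValueDecomposition;

public class ChooseK {
    // Pick the smallest k whose leading singular values retain at least
    // `threshold` of the total energy (sum of squared singular values).
    static int chooseRank(double[] singularValues, double threshold) {
        double total = 0;
        for (double s : singularValues) total += s * s;
        double kept = 0;
        for (int k = 0; k < singularValues.length; k++) {
            kept += singularValues[k] * singularValues[k];
            if (kept / total >= threshold) return k + 1;
        }
        return singularValues.length;
    }

    public static void main(String[] args) {
        Matrix a = Matrix.random(100, 40);  // stand-in for a term-document matrix
        SingularValueDecomposition svd = a.svd();
        int k = chooseRank(svd.getSingularValues(), 0.90);  // assumed 90% threshold
        System.out.println("retained dimensions: " + k);
    }
}
```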

What is a DSL and where should I use it?

£可爱£侵袭症+ submitted on 2019-11-28 21:17:46
I'm hearing more and more about domain-specific languages being thrown about and how they change the way you treat business logic, and I've seen Ayende's blog posts and such, but I've never really grasped exactly why I would take my business logic away from the methods and situations I'm using in my provider. If you've got some background using these things, any chance you could put it in real layman's terms: What exactly does building a DSL mean? What languages are you using? Where does using a DSL make sense? What is the benefit of using DSLs? DSLs are good in situations where you need to give some…
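As a tiny concrete illustration of the "internal DSL" flavor (every name below is invented for this example, not taken from the question), a fluent builder can make a business rule read close to the sentence a domain expert would say:

```java
public class DiscountDsl {
    static Rule rule() { return new Rule(); }

    static class Rule {
        private double minTotal;
        private double percent;

        Rule whenOrderTotalAtLeast(double amount) { this.minTotal = amount; return this; }
        Rule applyDiscountPercent(double percent) { this.percent = percent;  return this; }

        double apply(double orderTotal) {
            return orderTotal >= minTotal ? orderTotal * (1 - percent / 100) : orderTotal;
        }
    }

    public static void main(String[] args) {
        // The call chain states the rule in the domain's own vocabulary:
        // "when the order total is at least 100, apply a 10% discount".
        Rule tenPercentOver100 = rule()
                .whenOrderTotalAtLeast(100)
                .applyDiscountPercent(10);
        System.out.println(tenPercentOver100.apply(120.0));  // 108.0 (10% off)
        System.out.println(tenPercentOver100.apply(80.0));   // 80.0  (below threshold)
    }
}
```

The mechanics are ordinary method chaining; the point is that the rule is legible to someone who knows the business but not the codebase.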

Combinatory method like tap, but able to return a different value?

假装没事ソ submitted on 2019-11-28 21:00:45
Question: I'm going through a phase of trying to avoid temporary variables and overuse of conditionals where I can use a more fluid style of coding. I've taken a great liking to using #tap in places where I want to get the value I need to return, but do something with it before I return it.

def fluid_method
  something_complicated(a, b, c).tap do |obj|
    obj.update(:x => y)
  end
end

Vs. the procedural:

def non_fluid_method
  obj = something_complicated(a, b, c)
  obj.update(:x => y)
  obj # <= I don't like this,
…
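For what it's worth, later Ruby versions added exactly this combinator: Object#yield_self (2.5), aliased as #then (2.6), which passes the receiver to the block and returns the block's result rather than the receiver. A rough Java analogue for comparison (the helper name `let` is invented, echoing Kotlin's convention):

```java
import java.util.function.Function;

public class Pipe {
    // A "tap that returns a different value": apply f to value and return
    // f's result, in the spirit of Ruby's Object#then / #yield_self.
    static <T, R> R let(T value, Function<T, R> f) {
        return f.apply(value);
    }

    public static void main(String[] args) {
        // Transform an intermediate value and return the result directly,
        // with no temporary variable at the call site.
        String shout = let("hello", s -> s.toUpperCase() + "!");
        System.out.println(shout);  // HELLO!
    }
}
```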

Theory behind GAN

断了今生、忘了曾经 submitted on 2019-11-28 20:36:56
This is the theory part of GAN, in its earliest version; it actually has a few problems in it. We want to find a distribution P_data(x) over a high-dimensional space such that the probability is high in the region of the target class and low outside that region. But the concrete form of this P_data(x) distribution is unknown.

How was generation done before GANs? Maximum likelihood estimation!

1. Sample some data from P_data(x) as training data.
2. We have a distribution P_G(x; theta) with unknown parameters theta, and what we want is to find the theta that makes P_G closest to P_data. For example, if we take a Gaussian mixture model (GMM) as P_G(x; theta), then theta is the means and variances of the Gaussians.
3. From the training data {x1, x2, ..., xm}, compute P_G(xi; theta).
4. The likelihood is defined as the product of P_G(xi; theta) over all i.
5. Use gradient ascent to maximize this likelihood.

Maximum likelihood estimation is equivalent to minimizing the KL divergence. The reason: take the log of the probabilities, which turns the product into a sum; that sum is an estimate of an expectation; computing the expectation is an integral over x; then add a term that is completely independent of P_G, i.e., one that does not contain theta (written out below).
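Written out, this is the standard derivation matching the verbal steps above, where the x^i are the m training samples:

```latex
\theta^{*}
  = \arg\max_{\theta} \prod_{i=1}^{m} P_G(x^{i};\theta)
  = \arg\max_{\theta} \sum_{i=1}^{m} \log P_G(x^{i};\theta)
  \approx \arg\max_{\theta} \mathbb{E}_{x \sim P_{data}}\!\bigl[\log P_G(x;\theta)\bigr]
```

Adding the theta-free term $-\int_x P_{data}(x)\log P_{data}(x)\,dx$ changes nothing about the argmax and completes the KL divergence:

```latex
\theta^{*}
  = \arg\max_{\theta} \left[ \int_x P_{data}(x)\log P_G(x;\theta)\,dx
                           - \int_x P_{data}(x)\log P_{data}(x)\,dx \right]
  = \arg\min_{\theta} \mathrm{KL}\bigl(P_{data} \,\Vert\, P_G\bigr)
```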