Finding size of max independent set in binary tree - why faulty “solution” doesn't work?
Question: Here is a link to a similar question with a good answer: Java Algorithm for finding the largest set of independent nodes in a binary tree. I came up with a different answer, but my professor says it won't work and I'd like to know why (he doesn't answer email). The question: Given an array A with n integers, indexed from 0 (i.e., A[0], A[1], …, A[n-1]), we can interpret A as a binary tree in which the two children of A[i] are A[2i+1] and A[2i+2], and the value of each element is
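For reference, here is a minimal sketch (not the poster's proposed answer, and not necessarily the professor's intended solution) of the standard tree dynamic program for this representation, assuming the goal is just the size of a maximum independent set and that any index ≥ n counts as an empty subtree:

```java
// Sketch: max independent set size on an array-encoded binary tree,
// where the children of index i are 2*i+1 and 2*i+2.
public class MaxIndependentSet {

    // Returns {bestSizeIfNodeExcluded, bestSizeIfNodeIncluded}
    // for the subtree rooted at index i.
    private static int[] solve(int[] a, int i) {
        if (i >= a.length) {
            return new int[] {0, 0};                 // empty subtree
        }
        int[] left = solve(a, 2 * i + 1);
        int[] right = solve(a, 2 * i + 2);
        // If i is excluded, each child may be included or excluded freely.
        int excluded = Math.max(left[0], left[1]) + Math.max(right[0], right[1]);
        // If i is included, both children must be excluded.
        int included = 1 + left[0] + right[0];
        return new int[] {excluded, included};
    }

    public static int maxIndependentSetSize(int[] a) {
        int[] root = solve(a, 0);
        return Math.max(root[0], root[1]);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7};             // complete tree with 7 nodes
        // Prints 5: the root plus the four leaves form an independent set.
        System.out.println(maxIndependentSetSize(a));
    }
}
```

The key point of the recurrence is that each node carries two values (included/excluded), which is why a single greedy pass over the array generally cannot replace it.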