hashset

Is the .Net HashSet uniqueness calculation completely based on Hash Codes?

限于喜欢 submitted on 2019-12-06 18:47:28
Question: I was wondering whether the .Net HashSet<T> determines uniqueness completely from hash codes or whether it uses equality as well. I have a particular class that I may potentially instantiate millions of instances of, and there is a reasonable chance that some hash codes will collide at that point. I'm considering using HashSets to store some instances of this class and am wondering if it's actually worth doing - if the uniqueness of an element is determined only by its hash code then that's of no use to me
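For reference, HashSet<T> uses the hash code only to pick a bucket; it then falls back on Equals (via EqualityComparer<T>.Default) to decide whether two elements really are the same, so colliding hash codes cost performance but never merge distinct elements. A minimal C# sketch, where the Foo class and its deliberately colliding GetHashCode are made up for illustration:

    using System;
    using System.Collections.Generic;

    class Foo
    {
        public int Id;

        // Deliberately collide: every Foo gets the same hash code.
        public override int GetHashCode() => 42;

        // Equality is still based on Id, so distinct Ids stay distinct.
        public override bool Equals(object obj) => obj is Foo other && other.Id == Id;
    }

    class Program
    {
        static void Main()
        {
            var set = new HashSet<Foo>();
            set.Add(new Foo { Id = 1 });
            set.Add(new Foo { Id = 2 }); // same hash code, different Equals result
            set.Add(new Foo { Id = 1 }); // duplicate, rejected by Equals
            Console.WriteLine(set.Count); // prints 2
        }
    }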

Why is Python set intersection faster than Rust HashSet intersection?

风流意气都作罢 submitted on 2019-12-06 18:23:01
Question: Here is my Python code:

    len_sums = 0
    for i in xrange(100000):
        set_1 = set(xrange(1000))
        set_2 = set(xrange(500, 1500))
        intersection_len = len(set_1.intersection(set_2))
        len_sums += intersection_len
    print len_sums

Here is my Rust code:

    use std::collections::HashSet;

    fn main() {
        let mut len_sums = 0;
        for _ in 0..100000 {
            let set_1: HashSet<i32> = (0..1000).collect();
            let set_2: HashSet<i32> = (500..1500).collect();
            let intersection_len = set_1.intersection(&set_2).count();
            len_sums +=
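A common explanation for the gap is that Rust's default hasher (SipHash) is deliberately DoS-resistant and therefore slower on small integer keys than CPython's set, which hashes small ints essentially to themselves. A hedged sketch of the usual remedy, assuming the third-party fxhash crate is added as a dependency, swaps in a cheaper non-cryptographic hasher:

    // Cargo.toml (assumed): fxhash = "0.2"
    use fxhash::FxHashSet;

    fn main() {
        let mut len_sums = 0usize;
        for _ in 0..100_000 {
            // Same sets as before, but hashed with the much cheaper Fx hash.
            let set_1: FxHashSet<i32> = (0..1000).collect();
            let set_2: FxHashSet<i32> = (500..1500).collect();
            len_sums += set_1.intersection(&set_2).count();
        }
        println!("{}", len_sums);
    }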

.Net Collection for atomizing T?

*爱你&永不变心* submitted on 2019-12-06 08:16:18
I am looking to see if there is a pre-existing .Net 'Hash-Set type' implementation suitable for atomizing a general type T. We have a large number of identical objects coming in from serialized sources that need to be atomized to conserve memory. A Dictionary<T,T> with the value == key works perfectly, however the objects in these collections can run into the millions across the app, and so it seems very wasteful to store 2 references to every object. HashSet cannot be used as it only has Contains; there is (apparently) no way to get to the actual member instance. Obviously I could roll my own but wanted to check
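Two hedged sketches of the usual answers: newer .NET versions (.NET Framework 4.7.2+ / .NET Core 2.0+) expose HashSet<T>.TryGetValue, which hands back the stored instance for an equal key and avoids the double reference entirely; on older frameworks a thin wrapper over Dictionary<T,T> does the job, though it still stores two references per entry, which is exactly the overhead the question wants to avoid. The Atomizer name below is made up for illustration:

    using System.Collections.Generic;

    // On newer .NET, HashSet<T> itself can atomize:
    //     if (set.TryGetValue(candidate, out var canonical)) return canonical;
    //     set.Add(candidate); return candidate;

    // Hypothetical wrapper for older frameworks, backed by Dictionary<T,T>.
    public class Atomizer<T>
    {
        private readonly Dictionary<T, T> items = new Dictionary<T, T>();

        // Returns the canonical instance, adding the candidate if it is new.
        public T Intern(T candidate)
        {
            T canonical;
            if (items.TryGetValue(candidate, out canonical))
                return canonical;
            items.Add(candidate, candidate);
            return candidate;
        }
    }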

How to find common elements in multiple lists?

喜你入骨 submitted on 2019-12-06 07:39:14
Question: I have a list of lists (nested list). I need to find the common elements between them. For example, [1,3,5], [1,6,7,9,3], [1,3,10,11] should result in [1,3]. Without using the retainAll method of HashSet, how do I iterate over all the elements to find them? Thanks. Answer 1: What you can do:

    Set<Integer> intersection = new HashSet<>(lists.get(0));
    for (List<Integer> list : lists) {
        Set<Integer> newIntersection = new HashSet<>();
        for (Integer i : list) {
            if (intersection.contains(i)) {
                newIntersection.add(i);
            }
        }
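For completeness, a runnable version of the same manual-intersection idea; the class and variable names are mine, and the finishing step simply replaces the running intersection after each pass:

    import java.util.*;

    public class CommonElements {
        static Set<Integer> common(List<List<Integer>> lists) {
            Set<Integer> intersection = new HashSet<>(lists.get(0));
            for (List<Integer> list : lists) {
                Set<Integer> newIntersection = new HashSet<>();
                for (Integer i : list) {
                    if (intersection.contains(i)) {
                        newIntersection.add(i);   // keep only elements seen in both
                    }
                }
                intersection = newIntersection;   // shrink the intersection each pass
            }
            return intersection;
        }

        public static void main(String[] args) {
            List<List<Integer>> lists = Arrays.asList(
                Arrays.asList(1, 3, 5),
                Arrays.asList(1, 6, 7, 9, 3),
                Arrays.asList(1, 3, 10, 11));
            System.out.println(common(lists)); // [1, 3] (iteration order not guaranteed)
        }
    }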

C# HashSet2 to work exactly like the standard C# HashSet, not compiling

人走茶凉 submitted on 2019-12-06 07:27:51
I'm creating my own HashSet that works like the standard HashSet, using a Dictionary. I'm doing this because C# for XNA on Xbox doesn't support HashSets. This code is based on code from an example I found. I've edited the example to fix some of the problems but it still won't compile.

    public class HashSet2<T> : ICollection<T>
    {
        private Dictionary<T, Int16> dict;

        // code has been edited out of this example
        // see further on in the question for the full class

        public IEnumerator<T> GetEnumerator()
        {
            throw new NotImplementedException();
        }

        IEnumerator<T> IEnumerable<T>.GetEnumerator()
        {
            return dict
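The usual way to wire up the enumerators for a Dictionary-backed set is to enumerate the dictionary's keys and forward the non-generic IEnumerable.GetEnumerator to the generic one, which is the piece most often missing when an ICollection<T> implementation won't compile. A minimal sketch, not the poster's full class, assuming the same Dictionary-backed field:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    public class HashSet2<T> : ICollection<T>
    {
        private readonly Dictionary<T, short> dict = new Dictionary<T, short>();

        public int Count { get { return dict.Count; } }
        public bool IsReadOnly { get { return false; } }

        public void Add(T item) { dict[item] = 0; }            // overwrite = no duplicates
        public bool Remove(T item) { return dict.Remove(item); }
        public bool Contains(T item) { return dict.ContainsKey(item); }
        public void Clear() { dict.Clear(); }
        public void CopyTo(T[] array, int index) { dict.Keys.CopyTo(array, index); }

        // Enumerate the dictionary's keys, i.e. the set's members.
        public IEnumerator<T> GetEnumerator() { return dict.Keys.GetEnumerator(); }

        // Non-generic enumerator just forwards to the generic one.
        IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    }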

C# type conversion: Explicit cast exists but throws a conversion error?

耗尽温柔 submitted on 2019-12-06 06:27:51
I learned that HashSet implements the IEnumerable interface. Thus, it is possible to implicitly cast a HashSet object into IEnumerable:

    HashSet<T> foo = new HashSet<T>();
    IEnumerable<T> foo2 = foo; // Implicit cast, everything fine.

This works for nested generic types, too:

    HashSet<HashSet<T>> dong = new HashSet<HashSet<T>>();
    IEnumerable<IEnumerable<T>> dong2 = dong; // Implicit cast, everything fine.

At least that's what I thought. But if I make a Dictionary, I run into a problem:

    IDictionary<T, HashSet<T>> bar = new Dictionary<T, HashSet<T>>();
    IDictionary<T, IEnumerable<T>> bar2 = bar; /
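The short explanation is that IEnumerable<T> is declared covariant (out T), so the nested conversion above is allowed, whereas IDictionary<TKey, TValue> is invariant because values can be both read and written; if the assignment compiled, you could add, say, a List<T> value to a dictionary that must only hold HashSet<T> instances. A hedged workaround sketch with concrete types (the names are mine), copying into a new dictionary whose values are typed as the wider interface:

    using System.Collections.Generic;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            IDictionary<string, HashSet<int>> bar = new Dictionary<string, HashSet<int>>
            {
                ["a"] = new HashSet<int> { 1, 2, 3 }
            };

            // Not allowed directly: IDictionary is invariant in its value type.
            // IDictionary<string, IEnumerable<int>> bar2 = bar;

            // Works: build a new dictionary with the wider value type.
            IDictionary<string, IEnumerable<int>> bar2 =
                bar.ToDictionary(kv => kv.Key, kv => (IEnumerable<int>)kv.Value);
        }
    }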

Java HashSet shows list in weird order, always starting with 3

不羁的心 submitted on 2019-12-06 05:38:59
I have an array of Strings which is actually nothing but a list of integers coming from a file. I converted it to a HashSet so as to remove duplicates, as follows:

    Set<String> intSet = new HashSet<String>(Arrays.asList(strArr));

I expected all the numbers to be in order, but of course, since these are strings and not integers, they may not come out in order. But whenever I try to print this HashSet, I always get output like the following:

    [3, 2, 1, 4]
    [3, 2, 5, 4]

Every time, if 3 is present it appears as the first element. I don't understand why it behaves this way. Can anyone please explain this?
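HashSet makes no ordering promise at all: iteration order simply falls out of each element's hashCode() and the current table capacity, which is why the same inputs keep producing the same "weird" but stable order. If order matters, the usual fixes are LinkedHashSet (insertion order) or TreeSet (sorted order); a small sketch with made-up input, sorting numerically rather than lexicographically:

    import java.util.*;

    public class OrderedDedup {
        public static void main(String[] args) {
            String[] strArr = {"3", "2", "1", "4", "3"};

            // Keeps insertion order while removing duplicates.
            Set<String> insertionOrder = new LinkedHashSet<>(Arrays.asList(strArr));
            System.out.println(insertionOrder); // [3, 2, 1, 4]

            // Sorted numerically: compare as integers, not as strings.
            Set<String> numericOrder =
                new TreeSet<String>(Comparator.comparingInt((String s) -> Integer.parseInt(s)));
            numericOrder.addAll(Arrays.asList(strArr));
            System.out.println(numericOrder); // [1, 2, 3, 4]
        }
    }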

Java: Modify id that changes hashcode

混江龙づ霸主 submitted on 2019-12-06 03:19:42
Question: I use a HashSet and I need to modify the ID of an object, but that changes its hash code and breaks the HashSet and the contract of the hashCode() method. What is the best solution: delete the object from the Set and re-add it with the new ID, keep the hash code (generated in the constructor, for example) in every object in the Set, or is there another way to solve this problem? Thanks for the help. UPDATE: I made a mistake: keeping the hash code in the object is terrible, because in that case equal objects can have different hash codes. Answer 1: A
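The safe pattern is the first option: remove the object while its hash code is still the one the set filed it under, mutate it, then re-insert it. A small sketch; the Item class and its fields are made up for illustration:

    import java.util.*;

    class Item {
        int id;
        Item(int id) { this.id = id; }

        @Override public boolean equals(Object o) {
            return o instanceof Item && ((Item) o).id == id;
        }
        @Override public int hashCode() { return Integer.hashCode(id); }
    }

    public class Rekey {
        public static void main(String[] args) {
            Set<Item> items = new HashSet<>();
            Item item = new Item(1);
            items.add(item);

            // Remove BEFORE mutating, so the set can still find the element,
            // then change the id and put it back under its new hash code.
            items.remove(item);
            item.id = 42;
            items.add(item);

            System.out.println(items.contains(new Item(42))); // true
        }
    }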

Is there any way to look up in HashSet by only the value the type is hashed on?

落花浮王杯 submitted on 2019-12-06 03:19:31
I have a struct that has, among other data, a unique id:

    struct Foo {
        id: u32,
        other_data: u32,
    }

I want to use the id as the key and keep it inside of the struct:

    use std::collections::HashSet;
    use std::hash::{Hash, Hasher};

    impl PartialEq for Foo {
        fn eq(&self, other: &Foo) -> bool {
            self.id == other.id
        }
    }

    impl Eq for Foo {}

    impl Hash for Foo {
        fn hash<H: Hasher>(&self, state: &mut H) {
            self.id.hash(state);
        }
    }

This works:

    pub fn bar() {
        let mut baz: HashSet<Foo> = HashSet::new();
        baz.insert(Foo {
            id: 1,
            other_data: 2,
        });
        let other_data = baz.get(&Foo {
            id: 1,
            other_data: 0,
        }).unwrap()
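The standard-library answer to looking up by just the id is the Borrow trait: HashSet::get accepts any type Q such that the element type implements Borrow<Q> and Q hashes and compares consistently with the element. Since Foo's Hash and Eq above already delegate to id, adding Borrow<u32> makes baz.get(&1) work; a hedged sketch building on those definitions:

    use std::borrow::Borrow;
    use std::collections::HashSet;
    use std::hash::{Hash, Hasher};

    struct Foo {
        id: u32,
        other_data: u32,
    }

    impl PartialEq for Foo {
        fn eq(&self, other: &Foo) -> bool { self.id == other.id }
    }
    impl Eq for Foo {}
    impl Hash for Foo {
        fn hash<H: Hasher>(&self, state: &mut H) { self.id.hash(state); }
    }

    // Let the set borrow a Foo as just its id; this is sound only because
    // Hash and Eq above are defined purely in terms of id.
    impl Borrow<u32> for Foo {
        fn borrow(&self) -> &u32 { &self.id }
    }

    fn main() {
        let mut baz: HashSet<Foo> = HashSet::new();
        baz.insert(Foo { id: 1, other_data: 2 });

        // Look up by the bare id, no dummy Foo needed.
        let other_data = baz.get(&1u32).map(|foo| foo.other_data);
        println!("{:?}", other_data); // Some(2)
    }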

Why a hashtable's rehash complexity may be quadratic in the worst case

限于喜欢 submitted on 2019-12-06 01:59:23
I do not understand why a hashtable's rehash complexity may be quadratic in the worst case, as stated at: http://www.cplusplus.com/reference/unordered_set/unordered_multiset/reserve/ Any help would be appreciated! Thanks. Just some basics: A hash collision is when two or more elements take on the same hash. This can cause worst-case O(n) operations. I won't really go into this much further, since one can find many explanations of this. Basically all the elements can have the same hash, thus you'll have one big linked list at that hash containing all your elements (and search on a linked list is of course O(n))
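The quadratic bound follows directly from that: rehashing re-inserts all n elements into the new table, and when every element lands in the same bucket the i-th insertion may have to walk a chain of roughly i elements, giving 1 + 2 + ... + n = O(n^2) work in total. A tiny C++ sketch with a deliberately pathological hash that forces total collision (the numbers are made up; timing it as n doubles should show roughly quadratic growth):

    #include <cstddef>
    #include <iostream>
    #include <unordered_set>

    // Pathological hash: every key lands in the same bucket.
    struct BadHash {
        std::size_t operator()(int) const { return 0; }
    };

    int main() {
        std::unordered_set<int, BadHash> s;
        const int n = 10000;
        for (int i = 0; i < n; ++i) {
            // Each insert must compare against the whole chain to rule out a
            // duplicate, so building the set costs about 1 + 2 + ... + n.
            s.insert(i);
        }
        std::cout << s.size() << '\n'; // 10000, but built in ~O(n^2) time
    }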