dtype

numpy genfromtxt - how to detect bad int input values

Submitted by 跟風遠走 on 2021-01-29 06:49:30

Question: Here is a trivial example of a bad int value passed to numpy.genfromtxt. For some reason, I can't detect this bad value: it shows up as a valid int of -1.

>>> bad = '''a,b
0,BAD
1,2
3,4'''.splitlines()

My input here has 2 columns of ints, named a and b. Column b has a bad value: the string "BAD" where an integer should be. However, when I call genfromtxt, I cannot detect this bad value.

>>> out = np.genfromtxt(bad, delimiter=',', dtype=(numpy.dtype('int64'), numpy.dtype('int64')), names
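One workaround (a sketch, not the only approach): read the suspect column as float instead of int, so a failed conversion surfaces as nan, which is easy to detect, rather than silently becoming the int filler value -1.

```python
import numpy as np

bad = '''a,b
0,BAD
1,2
3,4'''.splitlines()

# With a float dtype, genfromtxt turns the unparseable "BAD" into nan,
# which np.isnan can flag; the int path silently substitutes -1 instead.
out = np.genfromtxt(bad, delimiter=',', names=True, dtype='float64')
mask = np.isnan(out['b'])
print(mask)  # [ True False False]
```

Once the bad rows are flagged, the clean rows can be cast back to int64 if integer storage is required.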

Constrain numpy to automatically convert integers to floating-point numbers (python 3.7)

Submitted by 感情迁移 on 2021-01-29 06:17:29

Question: I have just made the following mistake:

a = np.array([0, 3, 2, 1])
a[0] = .001

I was expecting 0 to be replaced by .001 (and the dtype of my numpy array to switch automatically from int to float). However, print(a) returns:

array([0, 3, 2, 1])

Can somebody explain why numpy is doing that? I am confused because multiplying my array of integers by a floating-point number automatically changes the dtype to float:

b = a*.1
print(b)
array([0. , 0.3, 0.2, 0.1])

Is there a way to constrain numpy to
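The difference is that item assignment writes into the existing buffer, casting the value to the array's fixed dtype, while arithmetic like a*.1 allocates a new array whose dtype is promoted. A minimal sketch of the usual fix: declare the array as float up front.

```python
import numpy as np

a = np.array([0, 3, 2, 1])
a[0] = .001          # value is cast to a.dtype (int64): .001 truncates to 0
print(a)             # [0 3 2 1]

# Creating the array with a float dtype keeps fractional assignments intact.
b = np.array([0, 3, 2, 1], dtype=float)
b[0] = .001          # b[0] now holds 0.001
```

NumPy never changes an existing array's dtype in place; only operations that build a new array (like a*.1 or a.astype(float)) can promote it.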

Creating array with single structured element containing an array

Submitted by 故事扮演 on 2021-01-28 07:51:32

Question: I have a dtype like this:

>>> dt = np.dtype([('x', object, 3)])
>>> dt
dtype([('x', 'O', (3,))])

One field named 'x', containing three pointers. I would like to construct an array with a single element of this type:

>>> a = np.array([(['a', 'b', 'c'])], dtype=dt)
>>> b = np.array([(np.array(['a', 'b', 'c'], dtype=object))], dtype=dt)
>>> c = np.array((['a', 'b', 'c']), dtype=dt)
>>> d = np.array(['a', 'b', 'c'], dtype=dt)
>>> e = np.array([([['a', 'b', 'c']])], dtype=dt)

All five of these
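np.array has ambiguous rules for how deeply to nest object scalars, which is what makes all of these constructor calls behave differently. One route that sidesteps the ambiguity entirely (an illustrative sketch, not the only answer): allocate the array with an explicit shape first, then fill the subarray field.

```python
import numpy as np

dt = np.dtype([('x', object, 3)])

# Allocate a one-element array of the structured dtype, then assign into
# the 'x' field; a['x'] is an object array of shape (1, 3), so row 0
# accepts a plain three-element list.
a = np.empty(1, dtype=dt)
a['x'][0] = ['a', 'b', 'c']
print(a['x'][0])
```

Because field access returns a view, the assignment writes straight into a, and each slot of the subarray ends up holding one Python object, as the 'O' dtype intends.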

How to count the number of categorical features with Pandas?

Submitted by 房东的猫 on 2020-12-25 09:56:19

Question: I have a pd.DataFrame which contains columns of different dtypes. I would like to get the count of columns of each type. I use Pandas 0.24.2. I tried:

dataframe.dtypes.value_counts()

It worked fine for the other dtypes (float64, object, int64), but for some strange reason it doesn't aggregate the 'category' features, and I get a different count for each category (as if they were counted as distinct dtype values). I also tried:

dataframe.dtypes.groupby(by=dataframe.dtypes).agg(['count'])

But
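The cause is that each CategoricalDtype carries its own set of categories, so two categorical columns have unequal dtype objects and value_counts() keeps them apart. A sketch of one common fix: cast the dtypes to their string names first, which collapses them all to the single label 'category'.

```python
import pandas as pd

# Hypothetical frame with one float column and two categorical columns
# whose category sets differ.
df = pd.DataFrame({
    'f': [0.1, 0.2],
    'c1': pd.Categorical(['a', 'b']),
    'c2': pd.Categorical(['x', 'y']),
})

# astype(str) maps every CategoricalDtype to the same name 'category',
# so value_counts() aggregates them as one group.
counts = df.dtypes.astype(str).value_counts()
print(counts)
```

With the frame above, counts reports 2 for 'category' and 1 for 'float64', instead of one row per distinct category set.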

Return value of pyfunc_0 is double, but expects float

Submitted by 元气小坏坏 on 2019-12-11 17:14:28

Question: I am currently trying to better understand TensorFlow's custom layer feature. While implementing such a custom layer, I ran into the following error:

/usr/lib/python3/dist-packages/skimage/util/dtype.py:110: UserWarning: Possible precision loss when converting from float64 to uint16
  "%s to %s" % (dtypeobj_in, dtypeobj))
/usr/lib/python3/dist-packages/skimage/exposure/exposure.py:307: RuntimeWarning: invalid value encountered in true_divide
  image = (image - imin) / float(imax - imin)
Traceback
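The "double, but expects float" message typically means the Python function wrapped by tf.py_func returns float64 (NumPy's default for arithmetic, and what skimage produces) while the op was declared with a float32 output type. A minimal sketch of the usual fix, using a hypothetical preprocessing function and plain NumPy so it runs standalone:

```python
import numpy as np

def preprocess(image):
    # Hypothetical body: plain division, like skimage's rescaling,
    # yields float64 ("double") regardless of the input dtype.
    result = image / 255.0
    # Cast explicitly so the return dtype matches the float32 the
    # wrapping op (e.g. Tout=tf.float32 in tf.py_func) was declared with.
    return result.astype(np.float32)

out = preprocess(np.arange(4, dtype=np.uint16).reshape(2, 2))
print(out.dtype)  # float32
```

The same one-line astype(np.float32) at the end of the wrapped function is generally enough to silence the dtype mismatch.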