machine-learning

R - mlr: Is there an easy way to get the variable importance of tuned support vector machine models in nested resampling (spatial)?

烂漫一生 submitted on 2021-02-09 11:46:24
Question: I am trying to get the variable importance for all predictors (or variables, or features) of a tuned support vector machine (SVM) model using e1071::svm through the mlr package in R, but I am not sure whether I am doing the assessment right. First, the idea: to get an honestly tuned SVM model, I am following the nested-resampling tutorial, using spatial repeated n-fold cross-validation (SpRepCV) in the outer loop and spatial cross-validation (SpCV) in the inner loop. As tuning parameter gamma
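The excerpt cuts off before any code, but the nested design it describes (an inner loop for tuning, an outer loop for honest performance estimation, and a per-fold importance measure) can be illustrated. The following is a minimal conceptual sketch in Python/scikit-learn, not the mlr/e1071 R pipeline the question uses; it substitutes plain KFold splits for the spatial SpRepCV/SpCV partitions, and the dataset, parameter grid, and fold counts are made up for illustration.

# Conceptual sketch: nested CV with an inner tuning loop and an outer
# evaluation loop, plus permutation-based variable importance per fold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

outer = KFold(n_splits=5, shuffle=True, random_state=0)   # outer loop: performance estimate
inner = KFold(n_splits=3, shuffle=True, random_state=0)   # inner loop: hyperparameter tuning
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

importances = []
for train_idx, test_idx in outer.split(X):
    tuner = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=inner)
    tuner.fit(X[train_idx], y[train_idx])                 # tune only on the outer training fold
    result = permutation_importance(tuner.best_estimator_,  # model-agnostic importance
                                    X[test_idx], y[test_idx],
                                    n_repeats=10, random_state=0)
    importances.append(result.importances_mean)

print(np.mean(importances, axis=0))   # average importance per predictor across outer folds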

sklearn dimensionality issues “Found array with dim 3. Estimator expected <= 2”

倖福魔咒の submitted on 2021-02-09 11:13:22
Question: I am trying to use KNN to correctly classify .wav files into two groups, group 0 and group 1. I extracted the data, created the model, and fit the model; however, when I try to use the .predict() method I get the following error: Traceback (most recent call last): File "/..../....../KNN.py", line 20, in <module> classifier.fit(X_train, y_train) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/neighbors/base.py", line 761, in fit X, y = check_X_y(X, y,
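The error points at the shape of X_train: scikit-learn estimators expect a 2-D array of shape (n_samples, n_features), so a stack of per-file 2-D feature matrices has to be flattened or summarized to one row per .wav file before fit/predict. A minimal sketch of that reshaping, assuming a NumPy array of per-file feature matrices (the array sizes and variable names below are made up, not taken from the question's KNN.py):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical 3-D feature array: 40 files, each a 20 x 130 feature matrix
X_3d = np.random.rand(40, 20, 130)
y = np.random.randint(0, 2, size=40)      # group 0 or group 1 per file

# Option 1: flatten each file's matrix into one long feature vector
X_2d = X_3d.reshape(X_3d.shape[0], -1)    # shape (40, 2600)

# Option 2: summarize over the time axis instead (often better for audio)
# X_2d = X_3d.mean(axis=2)                # shape (40, 20)

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X_2d, y)                   # input is now 2-D, no "dim 3" error
print(classifier.predict(X_2d[:5]))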

Subsample size in scikit-learn RandomForestClassifier

走远了吗. submitted on 2021-02-09 08:21:11
Question: How is it possible to control the size of the subsample used for training each tree in the forest? According to the scikit-learn documentation: A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default
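In newer scikit-learn releases (0.22 and later) the per-tree bootstrap sample size can be set directly through the max_samples parameter, which only takes effect when bootstrap=True. A short sketch with made-up data:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Draw only 30% of the training rows (with replacement) for each tree.
# max_samples accepts an int (absolute count) or a float (fraction);
# it is ignored unless bootstrap=True.
clf = RandomForestClassifier(n_estimators=100,
                             bootstrap=True,
                             max_samples=0.3,
                             random_state=0)
clf.fit(X, y)
print(clf.score(X, y))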

ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: [None, 2584]

女生的网名这么多〃 submitted on 2021-02-09 05:58:44
Question: I'm working on a project that isolates vocal parts from audio. I'm using the DSD100 dataset, but for testing I'm using the DSD100subset dataset. I only use the mixtures and the vocals. I'm basing this work on this article. First I process the audio files to extract a spectrogram and put it in a list, with all the audio files forming four lists (trainMixed, trainVocals, testMixed, testVocals), like this: def to_spec(wav, n_fft=1024, hop_length=256): return librosa.stft(wav, n_fft=n_fft, hop
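The error says the first layer expects at least 4-D input (batch, height, width, channels) while the data arrives as a flat 2-D batch of shape [None, 2584]; a Conv2D-based model needs each spectrogram kept as a 2-D image with an explicit channel axis. A minimal sketch of that shaping, assuming fixed-size magnitude spectrograms (the spectrogram dimensions and the tiny model below are illustrative, not the ones from the question or the linked article):

import numpy as np
import tensorflow as tf

# Hypothetical batch of magnitude spectrograms: 8 clips, 513 freq bins, 128 frames
trainMixed = np.random.rand(8, 513, 128).astype("float32")

# Add a trailing channel axis so each example is (height, width, channels)
X = trainMixed[..., np.newaxis]                   # shape (8, 513, 128, 1), ndim=4

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(513, 128, 1)),   # 4-D input, so no min_ndim=4 error
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
])
model.compile(optimizer="adam", loss="mse")
print(model(X).shape)                             # (8, 513, 128, 1)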
