reproducible-research

dput a long list - shorten list but preserve structure

北慕城南 submitted on 2021-01-04 07:11:45
Question: If we want to make a reproducible question about a complex/large dataset for SO, we can use dput(head(df)) to reduce its size. Is there a similar approach for reducing the size of complex nested lists with varying list lengths? I'm thinking one approach could be to take the first few elements from each list (say the first 3), irrespective of each element's type (numeric, character, etc.) and the nesting structure, but I'm not sure how to do this.

    # sample nested list
    L <- list( list(1:10), list( list(1:10),
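
A commonly suggested direction for this kind of question (a minimal sketch, not taken from the post itself) is to recurse through the list and keep only the first few elements at every level with head(), then dput() the result. The sample list below is a hypothetical stand-in, since the question's own example is cut off in this excerpt.

    # Sketch: keep at most n elements at every level of a nested list,
    # preserving the nesting structure; leaf vectors are truncated with head().
    shorten <- function(x, n = 3) {
      if (is.list(x)) {
        lapply(head(x, n), shorten, n = n)
      } else {
        head(x, n)
      }
    }

    # Hypothetical stand-in for the truncated sample list in the question
    L <- list(list(1:26), list(list(rnorm(20)), list(letters)))
    dput(shorten(L))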

Tensorflow-Keras reproducibility problem on Google Colab

我与影子孤独终老i submitted on 2020-07-22 05:59:13
Question: I have some simple code that I run on Google Colab (in CPU mode):

    import numpy as np
    import pandas as pd

    ## LOAD DATASET
    datatrain = pd.read_csv("gdrive/My Drive/iris_train.csv").values
    xtrain = datatrain[:, :-1]
    ytrain = datatrain[:, -1]
    datatest = pd.read_csv("gdrive/My Drive/iris_test.csv").values
    xtest = datatest[:, :-1]
    ytest = datatest[:, -1]

    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Activation
    from tensorflow.keras.utils import to_categorical

    ## SET ALL SEED
    import os
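
The excerpt is cut off at the seed-setting step. The usual pattern for questions like this is to pin every relevant seed before building the model; the sketch below illustrates the common TF2/Keras seed-setting steps and is not the asker's actual code (the SEED value is arbitrary, and the op-determinism call is an optional extra that requires a reasonably recent TensorFlow release).

    import os
    import random

    import numpy as np
    import tensorflow as tf

    # Illustrative seed-pinning for TF2/Keras runs on CPU; SEED is an arbitrary choice.
    SEED = 42
    os.environ["PYTHONHASHSEED"] = str(SEED)
    random.seed(SEED)
    np.random.seed(SEED)
    tf.random.set_seed(SEED)

    # Optional, available in newer TensorFlow releases: request deterministic ops
    # where supported, usually at some performance cost.
    tf.config.experimental.enable_op_determinism()

Even with all seeds pinned, run-to-run differences can remain across TensorFlow versions or hardware, so results on Colab may still differ from a local machine running a different TF build.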