I am trying to generate N sets of independent random numbers. I have simple code that demonstrates the problem for 3 sets of 10 random numbers. I notice that even though I use tf.set_random_seed, I get different results on different runs.
You are getting different results on different runs because three generate operations are defined in the graph, not one. This is because you have the generate operation inside the for loop, which creates three separate operations: Tensor("random_uniform:0"), Tensor("random_uniform_1:0"), and Tensor("random_uniform_2:0"). Just do print(generate) inside the for loop and you will see the three different operations listed above.
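For illustration, here is a minimal reconstruction of the pattern described (a sketch only; the question's exact code is not shown, and the TF 1.x API is assumed):

import tensorflow as tf

tf.set_random_seed(1234)

for i in range(3):
    # A NEW op is added to the graph on every iteration.
    generate = tf.random_uniform((10,), 0, 10)
    print(generate)  # random_uniform:0, random_uniform_1:0, random_uniform_2:0
    with tf.Session() as sess:
        print(sess.run(generate))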
tf.set_random_seed sets the seed at the graph level, which deterministically derives a seed for each operation in the graph. So the three generate operations are assigned the same three seeds on every run, and this is why each run shows the same results for all three operations correspondingly.
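As a side note, TF 1.x also lets you set a seed at the op level via the optional seed argument of tf.random_uniform, which pins down a single operation directly. A minimal sketch, assuming that argument:

import tensorflow as tf

# Op-level seed: this one op repeats across program runs
# even without a graph-level seed.
generate = tf.random_uniform((10,), 0, 10, seed=42)
with tf.Session() as sess:
    print(sess.run(generate))  # same values on every program run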
Please take a look at the tf.set_random_seed documentation for more information on setting random seeds.
So, if you want to get the same results each time you run a session, you can do this:
import tensorflow as tf

tf.set_random_seed(1234)
generate = tf.random_uniform((10,), 0, 10)  # defined once, outside the loop

for i in range(3):
    with tf.Session() as sess:
        b = sess.run(generate)
        print(b)
But why do you want to create n sessions? Ideally you should create one session and run it n times. Creating a new session for each run is unnecessary, and each time TensorFlow has to place the variables and operations in the graph onto the device (GPU or CPU) again.
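A minimal sketch of that single-session pattern (same TF 1.x API as above); within one session, each run of the op advances its internal state, so you get n different sets, while the graph-level seed keeps the whole sequence reproducible across program runs:

import tensorflow as tf

tf.set_random_seed(1234)
generate = tf.random_uniform((10,), 0, 10)

with tf.Session() as sess:  # one session, reused for all runs
    for i in range(3):
        # Three different sets per run of the program,
        # but the same three sets on every program run.
        print(sess.run(generate))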