I am trying to generate N sets of independent random numbers. I have a simple code that shows the problem for 3 sets of 10 random numbers. I notice that even though I use the same seed, the results differ between runs.
Late to the party, but the random number generator has been overhauled (see https://github.com/tensorflow/community/pull/38 for a summary of the process), and the tf.random.experimental.Generator class now provides the desired functionality.
From TF 1.14 onwards (incl. TF 2.0), you can seed the generator and obtain the exact same random numbers regardless of session, platform, or even architecture.
import tensorflow as tf
rng = tf.random.experimental.Generator.from_seed(1234)
rng.uniform((), 5, 10, tf.int64) # draw a random scalar (0-D tensor) between 5 and 10
See the documentation for details.
To address your particular question (I'm using TF 2.0):
for i in range(3):
    b = tf.random.uniform((10,), 0, 10, seed=1234)
    print(b)
gives
tf.Tensor(
[2.7339518 9.339194 5.2865124 8.912003 8.402512 0.53086996
4.385383 4.8005686 2.2077608 2.1795273 ], shape=(10,), dtype=float32)
tf.Tensor(
[9.668942 3.4503186 7.4577675 2.9200733 1.8064988 6.1576104
3.9958012 1.889689 3.8289428 0.36031008], shape=(10,), dtype=float32)
tf.Tensor(
[8.019657 4.895439 5.90925 2.418766 4.524292 7.901089 9.702316
5.1606855 9.744821 2.4418736], shape=(10,), dtype=float32)
while this
for i in range(3):
    rng = tf.random.experimental.Generator.from_seed(1234)
    b = rng.uniform((10,), 0, 10)
    print(b)
gives what you want:
tf.Tensor(
[3.581475 1.132276 5.6670904 6.712369 3.2565057 1.7095459 8.468903
6.2697005 1.0973608 2.7732193], shape=(10,), dtype=float32)
tf.Tensor(
[3.581475 1.132276 5.6670904 6.712369 3.2565057 1.7095459 8.468903
6.2697005 1.0973608 2.7732193], shape=(10,), dtype=float32)
tf.Tensor(
[3.581475 1.132276 5.6670904 6.712369 3.2565057 1.7095459 8.468903
6.2697005 1.0973608 2.7732193], shape=(10,), dtype=float32)
In TensorFlow, a random operation relies on two different seeds: a global seed, set by tf.set_random_seed, and an operation seed, provided as an argument to the operation. You will find more details on how they relate in the docs.
You have a different seed for each random op because each random op maintains its own internal state for pseudo-random number generation. The reason for having each random generator maintain its own state is robustness to change: if they shared the same state, then adding a new random generator somewhere in your graph would change the values produced by all the other generators, defeating the purpose of using a seed.
Now, why do we have this dual system of global and per-op seeds? Well, actually the global seed is not necessary. It is there for convenience: it allows you to set all random op seeds to a different and deterministic (if unknown) value at once, without having to go through all of them exhaustively.
Now when a global seed is set but not the op seed, according to the docs,
The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
To be more precise, the seed that is provided is the id of the last operation created in the current graph. Consequently, globally-seeded random operations are extremely sensitive to changes in the graph, in particular to ops created before them.
For example,
import tensorflow as tf
tf.set_random_seed(1234)
generate = tf.random_uniform(())
with tf.Session() as sess:
    print(generate.eval())
# 0.96046877
Now if we create a node before, the result changes:
import tensorflow as tf
tf.set_random_seed(1234)
tf.zeros(()) # new op added before
generate = tf.random_uniform(())
with tf.Session() as sess:
    print(generate.eval())
# 0.29252338
If a node is created afterwards, however, it does not affect the op seed:
import tensorflow as tf
tf.set_random_seed(1234)
generate = tf.random_uniform(())
tf.zeros(()) # new op added after
with tf.Session() as sess:
    print(generate.eval())
# 0.96046877
Obviously, as in your case, if you generate several operations, they will have different seeds:
import tensorflow as tf
tf.set_random_seed(1234)
gen1 = tf.random_uniform(())
gen2 = tf.random_uniform(())
with tf.Session() as sess:
    print(gen1.eval())
    print(gen2.eval())
# 0.96046877
# 0.85591054
As a curiosity, and to validate the fact that the seed is simply the last used id in the graph, you could align the seed of gen2 to gen1 with
import tensorflow as tf
tf.set_random_seed(1234)
gen1 = tf.random_uniform(())
# 4 operations seem to be created after the seed has been picked
seed = tf.get_default_graph()._last_id - 4
gen2 = tf.random_uniform((), seed=seed)
with tf.Session() as sess:
    print(gen1.eval())
    print(gen2.eval())
# 0.96046877
# 0.96046877
Obviously though, this should not pass code review.
For TensorFlow 2.0, tf.random.set_random_seed(seed) has been changed to tf.random.set_seed(seed). See the TF docs for details.
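For illustration, a minimal TF 2.x sketch combining the renamed global seed with op-level seeds (the seed values are arbitrary):
import tensorflow as tf  # TF 2.x
tf.random.set_seed(1234)                     # global seed, was tf.set_random_seed
a = tf.random.uniform((10,), 0, 10, seed=1)  # op-level seed
b = tf.random.uniform((10,), 0, 10, seed=2)  # an independent, reproducible stream
With both the global and the op seeds set, a and b are reproducible across program runs.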
There is a related GitHub issue. But in your case, please refer to the documentation of tf.set_random_seed:
Sets the graph-level random seed.
You probably want to use the same graph and same operation to get the same random numbers in different sessions.
import tensorflow as tf
tf.set_random_seed(1234)
generate = tf.random_uniform((10,), 0, 10)
tf.get_default_graph().finalize() # something everybody tends to forget
for i in range(3):
    with tf.Session() as sess:
        b = sess.run(generate)
        print(b)
gives
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
[9.604688 5.811516 6.4159 9.621765 0.5434954 4.1893444 5.8865128
7.9785547 8.296125 8.388672 ]
In your case, you created different operations within the same graph.
I noticed that you want to have 3 different vectors of random numbers, and every time you run the code you want them to come out the same as on the first run. That is a perfectly understandable goal: there is no need for several identical random vectors; you want vectors that are reproducible but different from one another.
There are two types of seeds that you can set when defining graph operations: a seed at the graph level, set by tf.set_random_seed, and seeds at the operation level, passed as an argument to the operation. If you rely on the graph-level seed alone, the result is different each time, so give each operation its own seed and use tf.InteractiveSession():
import tensorflow as tf
tf.set_random_seed(1234)
sess = tf.InteractiveSession()
print(sess.run(tf.random_uniform((10,), 0, 10, seed=1)))
print(sess.run(tf.random_uniform((10,), 0, 10, seed=2)))
print(sess.run(tf.random_uniform((10,), 0, 10, seed=3)))
print(sess.run(tf.random_uniform((10,), 0, 10, seed=4)))
You get 4 random number vectors containing numbers from 0 to 10.
You are getting different results on different runs because three generate operations are defined in the graph, not one. This is because you have the generate operation inside the for loop, which leads to three operations (Tensor("random_uniform:0"), Tensor("random_uniform_1:0"), Tensor("random_uniform_2:0")). Just do print(generate) inside the for loop and you will see the three different operations, as stated above.
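For instance, a sketch of what the loop in the question presumably looks like (the shape and bounds are an assumption), printing the ops instead of running them:
import tensorflow as tf
tf.set_random_seed(1234)
for i in range(3):
    generate = tf.random_uniform((10,), 0, 10)  # a new op is created on every iteration
    print(generate)  # random_uniform:0, random_uniform_1:0, random_uniform_2:0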
tf.set_random_seed sets the seed at the graph level, so it deterministically picks the seed for each operation in the graph. Consequently, the three generate operations are assigned the same three seeds on each run, which is why each run shows the same results for all three variables correspondingly.
Please take a look at this for more information on setting random seeds.
So, if you want to have the same results each time you run a session, you can do this:
import tensorflow as tf
tf.set_random_seed(1234)
generate = tf.random_uniform((10,), 0, 10)
for i in range(3):
    with tf.Session() as sess:
        b = sess.run(generate)
        print(b)
But why do you want to create n sessions? You should ideally create one session and run it n times. Creating a new session for each run is not required, and each time it has to place the variables and operations of the graph onto the device (GPU or CPU).
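A sketch of that pattern, one session run three times: note that within a single session each sess.run advances the op's internal state, so the three vectors differ from each other, while the whole sequence is still reproducible across program runs thanks to the seed.
import tensorflow as tf
tf.set_random_seed(1234)
generate = tf.random_uniform((10,), 0, 10)
with tf.Session() as sess:         # one session...
    for i in range(3):             # ...run n times
        print(sess.run(generate))  # three different vectors, same on every program run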