What does the copy_initial_weights documentation mean in the higher library for PyTorch?
Question: I was trying to use the higher library for meta-learning and I was having issues understanding what copy_initial_weights means. The docs say:

> copy_initial_weights – if true, the weights of the patched module are copied to form the initial weights of the patched module, and thus are not part of the gradient tape when unrolling the patched module. If this is set to False, the actual module weights will be the initial weights of the patched module. This is useful when doing MAML, for example.
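To make the docs concrete, here is a minimal sketch (not using higher itself, just plain PyTorch autograd) of what the flag controls, as I understand it: with copy_initial_weights=True the patched module starts from a detached copy of the weights, so meta-gradients cannot flow back to the original module; with False the original weights themselves are the start of the unrolled computation, which is what MAML needs. The variable names below are my own illustration, not higher's internals.

```python
import torch

# "Actual module weight" of the original (unpatched) module.
w = torch.tensor([1.0], requires_grad=True)

# copy_initial_weights=True: initial fast weight is a detached copy,
# so the unrolled inner loop is cut off from w in the gradient tape.
fast_true = w.detach().clone().requires_grad_(True)

# copy_initial_weights=False: the actual weight w is the initial fast
# weight, so it stays on the tape and meta-gradients reach it (MAML).
fast_false = w

lr = 0.1
results = {}
for fast, name in [(fast_true, "True"), (fast_false, "False")]:
    inner_loss = (fast * 2).sum()
    # create_graph=True keeps the inner step differentiable.
    (grad,) = torch.autograd.grad(inner_loss, fast, create_graph=True)
    updated = fast - lr * grad          # one unrolled inner-loop step
    meta_loss = (updated ** 2).sum()
    # Gradient of the meta-loss w.r.t. the ORIGINAL weight w.
    meta_grad = torch.autograd.grad(meta_loss, w, allow_unused=True)[0]
    results[name] = meta_grad

print(results)  # "True" -> None (w not on the tape), "False" -> a tensor
```

With the detached copy, w never appears in the meta-loss graph, so its meta-gradient is None; with the shared weight, the meta-gradient flows back through the inner update to w.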