How do I loop vagrant provisioning in multi-machine environments to switch back and forth between machines?


If what you want is for that other provisioner to run automatically right after all the machines have been brought up with vagrant up, unfortunately there's no way to do that as far as I know: Vagrant will always run all the provisioners specified (unless you tell it to run just a subset of them).

The only way you might be able to emulate this is by having different kinds of provisioners for each machine and selectively running them as needed. So, for example, you'd run vagrant up --provision --provision-with=shell and then vagrant provision --provision-with=chef_solo to have the shell provisioners run first and the chef_solo provisioning afterwards.
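Vagrant also lets you name individual provisioners and pass those names to --provision-with, which makes this kind of staged run more precise. A minimal sketch, assuming stock Vagrant (the names bootstrap and deploy are placeholders I've made up):

Vagrant.configure("2") do |config|
  # Named provisioners can be targeted individually via --provision-with
  config.vm.provision "bootstrap", type: "shell", inline: "echo first stage"
  config.vm.provision "deploy",    type: "shell", inline: "echo second stage"
end

# Then, from the host:
#   vagrant up --provision --provision-with=bootstrap
#   vagrant provision --provision-with=deploy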

But if you want to manually fire off a provisioner after all the machines have been brought up, you can just use the vagrant provision command to accomplish that.

One possible way of doing this is to execute commands between machines over ssh. The only additional thing you need to do is copy the Vagrant insecure private key to each of the guests.
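One way to get the key onto each guest is Vagrant's file provisioner; a minimal sketch, assuming the stock insecure key still lives at ~/.vagrant.d/insecure_private_key on the host:

# Copy the host's insecure key into each guest so guests can ssh to one another
config.vm.provision "file", source: "~/.vagrant.d/insecure_private_key",
                            destination: "/home/vagrant/.ssh/id_rsa"
# ssh refuses keys with loose permissions, so tighten them
config.vm.provision "shell", privileged: false,
                             inline: "chmod 600 /home/vagrant/.ssh/id_rsa"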

Then you can ssh between the machines in your cluster (always handy) and also do stuff like this:

Vagrant.configure('2') do |config|
  # config.vm.provision ... some global stuff ...

  config.vm.define 'node1' do |node1|
    # node1.vm.provision ... some more stuff ...
  end

  config.vm.define 'node2' do |node2|
    node2.vm.provision "shell", inline: "/vagrant/bootstrap-webserver.sh"
    node2.vm.provision "shell", inline: "ssh vagrant@node1 sh /vagrant/trigger-build.sh"
  end

  config.vm.define 'node3' do |node3|
    node3.vm.provision "shell", inline: "/vagrant/create-database.sh"
    node3.vm.provision "shell", inline: "ssh vagrant@node1 sh /vagrant/initialise-database.sh"
  end

  # ... node4, node5, ...
end

You'll probably also want to set "PasswordAuthentication no" in the sshd_config on the guests and add "-o StrictHostKeyChecking=no" to the ssh commands above to get this to work.
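With that option added, the inline ssh call from the node2 example above would look something like this:

node2.vm.provision "shell", inline: "ssh -o StrictHostKeyChecking=no vagrant@node1 sh /vagrant/trigger-build.sh"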

If you want to make this even easier, use the sshpass command instead of ssh so that you don't need to worry about keys:

node3.vm.provision "shell", inline: 'sshpass -p vagrant ssh -o StrictHostKeyChecking=no vagrant@node1 "sudo sh /vagrant/initialise-database.sh"'

The command assumes your box has a vagrant user with the password vagrant and sudo access.

I think the accepted answer is not the best method.

What you want to do is create a list of named nodes, with the one that should receive the final provisioning placed last in the list, like this:

NODES = [
    { :hostname => "api1", :ip => "192.168.0.11" },
    { :hostname => "api2", :ip => "192.168.0.12" },
    { :hostname => "controller", :ip => "192.168.0.2" }
]

Vagrant.configure("2") do |config|
    // Do whatever global config here
    NODES.each do |node|
        config.vm.define node[:hostname] do |nodeconfig|
            nodeconfig.vm.hostname = node[:hostname]
            // Do config that is the same across each node
            if node[:hostname] == "controller"
                // Do your provisioning for this machine here
            else
                // Do provisioning for the other machines here
            end
        end
    end
    // Do any global provisioning
end

The global provisioning happens first for each node, and the scoped provisioning comes next. By placing the controller at the end of the list, it will be the last to have its scoped provisioning run. You can stage the machines by changing their order in the list and adding conditionals. This is how I have mine set up: ssh keys get copied to my nodes, and my Ansible controller runs last, which lets the remaining machines get configured via Ansible as the final step.
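A minimal sketch of that idea, dropping into the NODES.each loop above (copy-ssh-key.sh and site.yml are hypothetical names of my own, not from the original answer; the ansible_local provisioner is stock Vagrant):

NODES.each do |node|
    config.vm.define node[:hostname] do |nodeconfig|
        nodeconfig.vm.hostname = node[:hostname]
        if node[:hostname] == "controller"
            # Runs last because "controller" is last in NODES:
            # provision everything else from here via Ansible.
            nodeconfig.vm.provision "ansible_local" do |ansible|
                ansible.playbook = "site.yml"   # hypothetical playbook
            end
        else
            # Hypothetical helper that installs the controller's ssh key
            nodeconfig.vm.provision "shell", path: "copy-ssh-key.sh"
        end
    end
end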
