Answer by Geir Sjurseth · May 17, 2016 at 10:12 PM
There are loads and loads of tools in different categories, of course, ranging from prescriptive frameworks to home-grown shell scripting.
In the general OS-style automation category you have things like chef/puppet, ansible and salt. I'm personally partial to ansible, but they all have their strengths and weaknesses.
All of these show up very often in cloud and devops work... Then there are tools like terraform, which are ansible-like but decidedly cloud-operations centric. There are also tools like cloudformation, but to be honest I haven't touched it.
We do have internal projects ongoing at apigee that use terraform and of course a fair deal running ansible as well....
-----
There's a whole lot more to be said about generic devops with tools like jenkins, travis-ci, bamboo, etc... But those have decidedly less to do with the whole cloud topic.
-----
My own recommendation?
Terraform for the IaaS pieces: provision all the machines for apigee
Ansible for the per machine pieces: start/stop/install/setup apigee bits
Tmux sessions for simultaneous shell access and even sending commands to all machines at once.... This last one is a bit archaic, but I've grown quite accustomed to it, and it makes for rapidly applying ad-hoc commands to a huge number of machines all at once and in parallel.
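To make the split concrete, here's a minimal sketch of the two-phase flow: Terraform provisions, Ansible configures. The paths and file names (infra/, inventory, install-apigee.yml) are illustrative assumptions on my part, not Apigee-provided artifacts; the DRY_RUN guard just prints the commands instead of running them so you can see the shape of the pipeline.

```shell
# Wrapper: echo the command under DRY_RUN=1, otherwise execute it.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}

provision_and_configure() {
  # IaaS piece: create all the machines for apigee
  run terraform -chdir=infra apply -auto-approve
  # per-machine piece: install/setup the apigee bits
  run ansible-playbook -i inventory install-apigee.yml
}

DRY_RUN=1 provision_and_configure
```

In real use you'd drop the DRY_RUN guard; the point is simply that provisioning and configuration stay in separate tools with a clean hand-off (the inventory file) between them.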
/geir
This is more or less exactly what we do to test/deploy/manage apigee in our cloud/iaas environments.
You will find that terraform is really easy to use once you get past the hard parts, and that ansible is a significantly simpler way of managing configs and installs than some of the other options (puppet or chef, for example).
That said - prior to 16.01 I do manage config files using puppet (for its easy access to the augeas libraries behind the scenes... it really simplifies things and means I don't have to deal with templates, etc.) - but I can trigger those jobs from ansible (or even from terraform) if I want.
One thing I have been preaching at my organization is to NOT REINVENT THE WHEEL (sorry Apigee guys!) - and rely on the Apigee scripts and install packages to do their jobs. Don't try to re-engineer them, as that is clearly wasted effort and will lead to you having to re-engineer every time Apigee changes something.
FYI - one really cool thing you can do in ansible is limit how many hosts in a group a play runs against in parallel.
For things like datastore installs I set this to 1. For other things that aren't so temperamental (like base installs of RMP boxes, etc.) I don't set it and let Ansible run things in parallel - which it is good at.
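For what it's worth, the knob in question is Ansible's `serial` keyword. A minimal playbook fragment, assuming a `datastore` host group and the standard 4.16.01 setup-script path (both illustrative here, not taken from Apigee's own playbooks), might look like:

```yaml
# Run this play one host at a time - datastore nodes are touchy.
- hosts: datastore
  serial: 1
  tasks:
    - name: run the Apigee datastore setup (config file path is illustrative)
      command: /opt/apigee/apigee-setup/bin/setup.sh -p ds -f /tmp/silent.conf
```

Omitting `serial` on the RMP plays lets Ansible use its default forking across all hosts at once.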
I would be interested @Geir Sjurseth in what use cases you need Tmux for... especially since I just had some staff turnover and am taking time to re-engineer some of our processes before 16.01 upgrades...
I tend to set up a bastion host for whatever environment I'm administering and then create a tmux session for all the boxes in that setup (sometimes multiple). Then I can use tmux to send keystrokes to multiple windows. It's pretty much identical to sending keys to tabs inside of iterm.... Except that it can live on between machines and ssh sessions and is fully customizable.
This is an example where I'm pulling out all the hosts from my /etc/hosts file, excluding the kafka hosts, and naming each tmux window after the host.... Then I use that same window name to send commands to it. In this case below I'm telling it to ssh-login to all the machines in question.
for a in $(fgrep vf /etc/hosts | egrep -v 'kafka' | awk '{print $2}'); do
    tmux new-window -n "$a"
    tmux send-keys -t "$a" "ssh -i key.pem centos@${a}" C-m
done
Then I can create bash functions that use those same named windows to send arbitrary commands to lists of named machines....
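One such function might look like the sketch below - the helper name and the TMUX_DRY_RUN guard are my own illustration (the guard prints the tmux commands instead of invoking a live tmux server, which also makes the function easy to exercise outside a session). It assumes windows were named after hosts, as in the loop above.

```shell
# tmux_broadcast CMD WINDOW...
# Send CMD (followed by Enter, i.e. C-m) to each named tmux window.
tmux_broadcast() {
  local cmd="$1"; shift
  local w
  for w in "$@"; do
    if [ "${TMUX_DRY_RUN:-0}" = "1" ]; then
      # Dry run: show what would be sent instead of talking to tmux.
      printf 'tmux send-keys -t %s %s C-m\n' "$w" "$cmd"
    else
      tmux send-keys -t "$w" "$cmd" C-m
    fi
  done
}

# Example (dry run) against two hypothetical message-processor windows:
TMUX_DRY_RUN=1 tmux_broadcast 'uptime' mp1 mp2
```

Inside a real session you'd drop TMUX_DRY_RUN and pass something like a tail of the message-processor logs as the command.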
Also, since it's tmux I can detach and then login again from anywhere and regain my tmux session and continue editing it.... Super useful if you want to start tailing all message-processors at once while piping to a custom grep and then let it run... I can then login hours later and pull it up.
It's not the same use case at all as ansible and the like, where you're scripting specific, regularly recurring commands - but it's great when you want control of, and a history of, the actual sessions managing the environments.
This may have been confusing, so if you have any more questions please feel free to ping me.
Thanks!
/geir
A really important thing to keep in mind: you will find that a lot of automation might fly in the face of your corporate security policies.
I know that I have had to do a lot of work to argue for changes in ours to allow me to do some simple things w/o having to re-package the apigee installers (this goes for 14.xx, 15.xx and now it is even more important in 16.xx)
Understanding this will save you trouble later.
It is really easy for me to find these issues because I'm IN the secured enterprise environment. Apigee has not been very good at understanding just how locked down some corporate infrastructure (even cloud) truly is.
Answer by arghya das · Jun 30, 2016 at 06:56 PM
Internally at Apigee, for testing our private cloud installation and upgrade scripts, we wrote a bunch of automation jobs that use ansible. With 1601 and onwards that works really well with the private cloud model. We are also planning to use terraform to launch the instances and then have ansible scripts perform the rest of the installation and upgrade process. The ansible scripts directly call the various apigee interfaces like apigee-service, adminapi, etc.
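A sketch of what such a task might look like - the component name here is an assumption based on the standard Private Cloud layout, not taken from Apigee's internal playbooks:

```yaml
# Illustrative Ansible task: drive apigee-service directly rather than
# re-implementing what the Apigee scripts already do.
- name: restart the router via apigee-service
  command: /opt/apigee/apigee-service/bin/apigee-service edge-router restart
```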