Use Ansible to Update OpenSSL for Heartbleed

Here is a quick and dirty Ansible playbook to update libssl1.0.0 and openssl to a specific version on my Ubuntu 12.04 boxes:

---
- hosts: all
  sudo: True
  tasks:
    - name: update openssl for heartbleed
      apt: name={{ item }} state=installed
      with_items:
        - openssl=1.0.1-4ubuntu5.12
        - libssl1.0.0=1.0.1-4ubuntu5.12
      notify:
        - restart apache2
  handlers:
    - name: restart apache2
      action: service name=apache2 state=restarted
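
After the playbook runs, you can sanity-check a host's installed version. This is a sketch of my own, not part of the playbook: the dpkg-query line in the comment is where the installed string would come from on a real host, and sort -V does the version comparison.

```shell
#!/bin/sh
# Patched version from the playbook above.
patched="1.0.1-4ubuntu5.12"

# On a real host you would fetch this with:
#   installed=$(dpkg-query -W -f='${Version}' libssl1.0.0)
installed="1.0.1-4ubuntu5.12"

# sort -V orders version strings; if the patched version sorts first
# (or is equal), the installed version is at least the patched one.
if [ "$(printf '%s\n%s\n' "$patched" "$installed" | sort -V | head -n 1)" = "$patched" ]; then
  echo "patched"
else
  echo "vulnerable"
fi
```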

Middleman Dynamic Content in Single Template

I’ve been playing with Middleman (http://middlemanapp.com/) to build some simple static sites lately, and I’ve been looking at how to make them a little more dynamic.

More precisely, I wanted to be able to deliver unique content or yml data files to the same html.erb template file. My use case was multiple slightly different versions of the same page: same layout, but variations on the same content. Rather than recreating every page from scratch, I just wanted a writer to be able to copy-paste a yml file and update it. Turns out it’s pretty simple.

The example I’ll go through has three pages for three people (Bob, Joe & Linda). But instead of creating a 1-to-1 relationship between the yml and html.erb files:

data/people/bob.yml => source/bob.html.erb
data/people/joe.yml => source/joe.html.erb
data/people/linda.yml => source/linda.html.erb

I’ll have three yml files and one html.erb file:

data/people/bob.yml, data/people/joe.yml, data/people/linda.yml, source/person.html.erb

This allows me to do two simple things:

  1. Add additional people more easily: just copy a yml file and update its content.
  2. Use a single template (not layout) for all of those people without having to update each of them.

Here are the three sets of files needed for the example; you can do this with any freshly initialized Middleman app.

config.rb

["bob", "joe", "linda"].each do |person|
    proxy "/#{person}.html", "/person.html", :locals => { :person => person }, :ignore => true
end

This configures a proxy to route each person to the template. The first line effectively whitelists which people’s URLs will be passed to my template. The second line then passes four arguments to the proxy:

  1. “/#{person}.html” – this is the url to proxy, the #{person} is replaced with the names you listed in the first line
  2. “/person.html” – this is the template file found in the source directory that will render each of your pages
  3. :locals => { :person => person } – creates a variable called person with the above person’s name in it; you can use this to uniquely access the data for each person
  4. :ignore => true – ensure that person.html isn’t a page of its own

source/person.html.erb

Name: <%= data.people[person].name %>
<br/>
Age: <%= data.people[person].age %>

A very simple template file that will output our data for each person. You can see that we are using the person variable to get the content dynamically. This will load the data/people/bob.yml or data/people/linda.yml file, for example.

data/people/bob.yml

name: Bob
age: 36

Finally, our data file for Bob. You can copy-paste this for as many different people as necessary.
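
Given that data file, the generated build/bob.html should contain something like the following (ignoring whatever the default layout wraps around it):

```
Name: Bob
<br/>
Age: 36
```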

View my full example on GitHub (https://github.com/wesdeboer/middleman-example)

Vagrant Symlink Error

I had a problem getting symlinks to work with Vagrant, but let’s get symlinks set up before I get to the problem. The first thing to do, which is well documented, is add the following line to your Vagrantfile:

config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/WebDocRoot", "1"]

The part most people miss is that this line relates to your share_folder config value. See the following:

config.vm.share_folder "WebDocRoot", "/var/www", "www"

The share_folder config option takes three initial params (identifier, guest path, host path). The customize option must reference the identifier; note the WebDocRoot value in both lines.

Even with all that set up, I received the following error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mkdir -p /vagrant/www

So let’s look at the error: the mkdir of /vagrant/www is failing. Time to go see why.

vagrant ssh
ls -l /vagrant

In my case I could see that /vagrant/www already existed and was a dead symlink that pointed to a non-existent directory. Wait a minute, it links to a directory on my host machine, not my guest machine. Why is that?
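
The dead-symlink check is easy to script. This is just a sketch of what I was looking at, recreated in a temp directory rather than /vagrant:

```shell
#!/bin/sh
# Recreate the situation: a symlink whose target doesn't exist.
tmp=$(mktemp -d)
ln -s "$tmp/does-not-exist" "$tmp/www"

# -L: it is a symlink; ! -e: its target doesn't resolve.
# Together they identify a dead symlink.
if [ -L "$tmp/www" ] && [ ! -e "$tmp/www" ]; then
  result="dead symlink"
else
  result="ok"
fi
echo "$result"

rm -rf "$tmp"
```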

The /vagrant directory in the guest is a share_folder automatically generated by Vagrant, and it maps to the directory your Vagrantfile is located in. I had put a symlink in this directory on my host machine, and that symlink was then available in the guest since its parent was a share_folder already. Thus, when I tried to create a new share_folder at the path /vagrant/www, it failed because the path already existed as a symlink.

Simple enough: don’t create a share_folder there; create it elsewhere inside the guest machine. I moved it from /vagrant/www/ to /var/www/ as outlined above, and problem solved.

Received disconnect from: Too many authentication failures for ubuntu

Short answer: Add IdentitiesOnly yes to your .ssh/config file.

Long answer:

I was receiving the error message “Received disconnect from X.X.X.X: 2: Too many authentication failures for ubuntu” while trying to log in to some of my servers. I tried logging into various other servers; some worked and some didn’t. I was using public key authentication and knew the keys were correct, so I tried logging into the failing servers from other machines. They all worked with the same keys, so the keys were good and the servers were working just fine.

Time to ssh -vvv to see what errors were occurring. At the end of the output I was seeing a lot of this:

...
debug1: Offering RSA public key: wes@desktop
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: wes@desktop
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
...

OK, so it looks like my public keys are failing. I use my .ssh/config file to assign specific IdentityFiles to Hosts; perhaps that was failing, so I tried passing the path to the IdentityFile directly via ssh -i. Still nothing: it was still offering several public keys and failing on all of them. My servers all have the sshd_config default of MaxAuthTries set to 6. Increasing that helps, but that’s the wrong direction to change that value, I didn’t want to do it across dozens of servers, and this appeared to be a client-side problem.

Next up, after a little googling I ran ssh-add -L and saw that ssh-agent had cached 6 public keys. ssh always tries those keys first, before the ones you specify on the command line or in your config file. That’s not really super cool; I guess it assumes you are using the same key everywhere, or needs some default to try. One option was to run ssh-add -D, but that wasn’t really working and doesn’t directly solve the problem. Instead I read the manual, what a novel idea, and found the IdentitiesOnly setting. Set it to yes and, instead of trying ssh-agent’s cached keys, ssh will use only your defined identity file... MUCH better! So just add IdentitiesOnly yes to your .ssh/config file and you are set. Or put it in /etc/ssh/ssh_config for the entire system.
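
For reference, a sketch of the relevant .ssh/config entry. The host alias and key path here are made up; IdentitiesOnly and IdentityFile are real OpenSSH options:

```
# Hypothetical host entry; the alias and key path are examples.
Host myserver
    IdentityFile ~/.ssh/id_rsa_myserver
    # Only offer the IdentityFile above, never ssh-agent's cached keys.
    IdentitiesOnly yes
```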

Monitoring That MySQL Slave Replication Is Running

Having recently added some MySQL replication slaves, I wanted to be sure that the slaves are always running. For this I’ve selected Monit, though you could do it several different ways.

What I’ve done is put a quick little bash script together that runs every minute via cron.

The script grabs Slave_IO_Running and Slave_SQL_Running from a SHOW SLAVE STATUS and, if they are both Yes, indicating that replication is running smoothly, touches the /opt/slave_running file.
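
The script itself isn’t included in the post, so here is a sketch of what it might look like. The SHOW SLAVE STATUS field names and the /opt/slave_running path come from the description above; the rest is my assumption:

```shell
#!/bin/sh
# check_slave: given the output of `mysql -e 'SHOW SLAVE STATUS\G'`,
# succeed only if both replication threads report Yes.
check_slave() {
  io=$(printf '%s\n' "$1" | awk '/Slave_IO_Running:/ {print $2}')
  sql=$(printf '%s\n' "$1" | awk '/Slave_SQL_Running:/ {print $2}')
  [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]
}

# Run the check and touch the heartbeat file that Monit watches.
status=$(mysql -e 'SHOW SLAVE STATUS\G' 2>/dev/null)
if check_slave "$status"; then
  touch /opt/slave_running
fi
```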

My Monit check is simple: I drop it into /etc/monit/conf.d on an Ubuntu system, where it gets included by default, and restart Monit. Monit runs every 2 minutes, and if /opt/slave_running is a couple of minutes out of date I’m alerted to take a look.
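
The Monit check isn’t shown in the post either; a matching /etc/monit/conf.d entry could look something like this. The 3-minute threshold is my guess based on the 2-minute cycle described above:

```
# Alert if the heartbeat file hasn't been touched recently, meaning
# the cron script stopped finding both replication threads running.
check file slave_running with path /opt/slave_running
  if timestamp > 3 minutes then alert
```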

I’ve seen this idea around on other blogs using Python or Ruby, so I can’t take credit for it; I’m just dropping in my notes on how I did it.