Category Archives: SysAdmin

Use ansible to update openssl for heartbleed

Here is a quick and dirty ansible playbook to update to a specific version of libssl1.0.0 and openssl on my ubuntu 12.04 boxes:

---
- hosts: all
  sudo: True
  tasks:
    - name: update openssl for heartbleed
      apt: name={{ item }} state=installed
      with_items:
        - openssl=1.0.1-4ubuntu5.12
        - libssl1.0.0=1.0.1-4ubuntu5.12
      notify:
        - restart apache2
  handlers:
    - name: restart apache2
      action: service name=apache2 state=restarted
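
To run it, something like the following should do the trick (the inventory and playbook file names here are placeholders of my own):

ansible-playbook -i hosts heartbleed.yml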

Vagrant Symlink Error

I had a problem getting symlinks to work with Vagrant, but let’s get symlinks set up first before I get to the problem. The first thing to do, which is well documented, is to add the following line to your Vagrantfile:

config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/WebDocRoot", "1"]

What most people miss is that this line relates to your share_folder config value. See the following:

config.vm.share_folder "WebDocRoot", "/var/www", "www"

The share_folder config option takes three initial parameters (identifier, guest path, host path). The customize option must reference that identifier; note the WebDocRoot value in both of the lines above.
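
Putting the two together, a minimal Vagrantfile sketch looks something like this (the box name is an assumption of mine):

Vagrant::Config.run do |config|
  config.vm.box = "precise64"  # assumed box name
  # "WebDocRoot" is the identifier that ties the next two lines together
  config.vm.share_folder "WebDocRoot", "/var/www", "www"
  config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/WebDocRoot", "1"]
end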

The problem is that I received the following error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mkdir -p /vagrant/www

So let’s look at the error: it can’t make the directory /vagrant/www. Time to SSH in and see what’s there:

vagrant ssh
ls -l /vagrant

In my case I could see that /vagrant/www already existed and was a dead symlink that pointed to a non-existent directory. Wait a minute, it links to a directory on my host machine, not my guest machine. Why is that?

The /vagrant directory in the guest is a share_folder that Vagrant generates automatically, and it maps to the directory your Vagrantfile is located in. I had put a symlink in this directory on my host machine, and that symlink was then available in the guest since its parent was already a share_folder. Thus, when I tried to create a new share_folder at the path /vagrant/www, it failed because that path already existed as a symlink.

Simple enough: don’t create a share_folder there; create it elsewhere inside the guest machine. I moved it from /vagrant/www/ to /var/www/ as outlined above, and problem solved.

Received disconnect from: Too many authentication failures for ubuntu

Short answer: Add IdentitiesOnly yes to your .ssh/config file.

Long answer:

I was receiving the error message “Received disconnect from X.X.X.X: 2: Too many authentication failures for ubuntu” while trying to log in to some of my servers. I tried logging into various other servers; some worked and some didn’t. I was using public key authentication and knew the keys were correct, so I tried logging into the failing servers from other machines. They all worked with the same keys, so the keys were good and the servers were working just fine.

Time to ssh -vvv to see what errors were occurring. At the end of the output I was seeing a lot of this:

...
debug1: Offering RSA public key: wes@desktop
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: wes@desktop
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
...

OK, so it looks like my public keys are failing. I’m using my .ssh/config file to assign specific IdentityFiles to Hosts; perhaps that was the problem, so I tried passing the path to the IdentityFile directly via ssh -i. Still nothing: it was still offering several public keys and failing on all of them. My servers all have the sshd_config default of MaxAuthTries set to 6. Increasing that helps, but that’s not the direction I want to take that value, I don’t want to change it across dozens of servers, and this appears to be a client-side problem anyway.

Next up, after a little googling I ran ssh-add -L and saw that ssh-agent had 6 public keys cached. ssh always tries those agent keys first and only then tries the ones you specify on the command line or in your config file. That’s not really super cool; I guess it assumes you are using the same key everywhere, or it needs a default to try. One option was to run ssh-add -D to clear the agent, but that wasn’t really working and doesn’t directly solve the problem. Instead I read the manual (what a novel idea) and found the IdentitiesOnly setting. Setting it to yes makes ssh use only your defined identity files instead of checking ssh-agent for cached keys… MUCH better! So just add IdentitiesOnly yes to your .ssh/config file and you are set, or put it in /etc/ssh/ssh_config for the entire system.
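
For example, a per-host entry might look something like this (the host name and key path are placeholders):

Host web1.example.com
    User ubuntu
    IdentityFile ~/.ssh/web1_rsa
    IdentitiesOnly yes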

Monitoring that MySQL slave replication is running

Having recently added some MySQL replication slaves, I wanted to be sure that the slaves are always running. To do this I’ve selected Monit, though you could do it several different ways.

What I’ve done is put a quick little bash script together that runs every minute via cron.

The script grabs Slave_IO_Running and Slave_SQL_Running from SHOW SLAVE STATUS and, if they are both Yes (indicating replication is running smoothly), touches the /opt/slave_running file.
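
A minimal sketch of that script (assuming the user it runs as can read slave status, e.g. via a ~/.my.cnf):

#!/bin/bash
# Touch the heartbeat file only when both replication threads report Yes.
STATUS=$(mysql -e 'SHOW SLAVE STATUS\G')
IO_RUNNING=$(echo "$STATUS" | awk '/Slave_IO_Running:/ {print $2}')
SQL_RUNNING=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
if [ "$IO_RUNNING" = "Yes" ] && [ "$SQL_RUNNING" = "Yes" ]; then
    touch /opt/slave_running
fi

And the crontab entry to run it every minute (the script path is a placeholder):

* * * * * /usr/local/bin/check_slave.sh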

My Monit check is simple: I drop it into /etc/monit/conf.d on an Ubuntu system and it gets included by default; just restart Monit. Monit runs every 2 minutes, and if /opt/slave_running is a couple of minutes out of date I’m alerted to take a look.
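
A sketch of that check, using a simple file timestamp test (the service name and threshold are my own choices):

check file mysql_slave_running with path /opt/slave_running
  if timestamp > 3 minutes then alert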

I’ve seen this idea around on other blogs using python or ruby so I can’t take credit for the idea, just dropping in my notes for how I did it.

Use curl to upload files and post other data

This should’ve been more obvious to me, but it did take a few minutes of playing around and reading the manual to get it right. I wanted to post data to an API using curl; one of the items was a file upload and another was a simple field/value pair.

To upload a file via curl:

curl -X POST -F "image=@profile.jpg" http://api.example.com/profile

In PHP this will give you profile.jpg in $_FILES['image']. To add additional field values, you just add additional -F arguments like this:

curl -X POST -F "image=@profile.jpg" -F "phone=1234567890" http://api.example.com/profile
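
On the receiving end, a minimal PHP sketch (the destination path is made up) might look like:

<?php
// The -F file upload lands in $_FILES; plain -F fields land in $_POST.
if (isset($_FILES['image']) && $_FILES['image']['error'] === UPLOAD_ERR_OK) {
    move_uploaded_file($_FILES['image']['tmp_name'], '/tmp/' . basename($_FILES['image']['name']));
}
$phone = isset($_POST['phone']) ? $_POST['phone'] : '';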