Tag Archives: servers

How to detect the Debian version of a server without logging in

As the Ops team, we're slowly taking over operations for several other teams here at Opera. One of our first tasks is to figure out which Debian version each of the servers we inherit is running.

Our first idea for checking whether a server runs Debian Lenny or Squeeze was to log in and cat /etc/debian_version. However, if you have never accessed that machine before and your ssh keys aren't there, you can't do that. In our case, we have to file a request for access, and that can take time. Wondering if there was a quicker way, I came up with this trick:

#!/bin/sh
#
# Tells the Debian version reading the OpenSSH banner
# Requires OpenSSH to be running and ssh port to be open.
#
# Usage: $0 <hostname>
#
# Cosimo, 23/11/2011

HOST=$1

if [ "x$HOST" = "x" ]; then
    echo "Usage: $0 <hostname>"
fi

OPENSSH_BANNER=$(echo "n" | nc ${HOST} 22 | head -1)

#echo "OPENSSH_BANNER=$OPENSSH_BANNER"

IS_SQUEEZE=$(echo $OPENSSH_BANNER | egrep '^SSH-.*OpenSSH_5.*Debian-6')
IS_LENNY=$(echo $OPENSSH_BANNER   | egrep '^SSH-.*OpenSSH_5.*Debian-5')
IS_ETCH=$(echo $OPENSSH_BANNER    | egrep '^SSH-.*OpenSSH_4.*Debian-9')

# SSH-2.0-OpenSSH_5.1p1 Debian-5
# SSH-2.0-OpenSSH_4.3p2 Debian-9etch3
# SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze1

#echo "Squeeze: $IS_SQUEEZE"
#echo "Lenny: $IS_LENNY"
#echo "Etch: $IS_ETCH"

if [ "x$IS_SQUEEZE" != "x" ]; then
    echo "$HOST is Debian 6.x (squeeze)"
    exit 0
fi

if [ "x$IS_LENNY" != "x" ]; then
    echo "$HOST is Debian 5.x (lenny)"
    exit 0
fi

if [ "x$IS_ETCH" != "x" ]; then
    echo "$HOST is Debian 4.x (etch)"
    exit 0
fi

echo "I don't know what $HOST is."
echo "Here's the openssh banner: '$OPENSSH_BANNER'"

exit 1

It reads the OpenSSH server banner to determine the major Debian version (Etch, Lenny, Squeeze). It's really fast, very simple, and hopefully reliable too. Enjoy. Download it from https://gist.github.com/1389206/.
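
Assuming you save the script as debian-version.sh (the file name and hostname below are just examples), a run looks like this:

$ ./debian-version.sh db3.example.com
db3.example.com is Debian 6.x (squeeze)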

Kicking Jenkins with monit

We've been using Jenkins to build and test all our projects for a good part of this year now. I think Jenkins is one of the very few Java projects I've seen and worked with that actually works, and it's a real pleasure to use. Except that every now and then it seems to crash for no apparent reason…

I haven't had time to dig into this problem yet. I've only seen the frontend Apache process logging errors because it cannot connect to the Tomcat backend on port 8080. My theory so far is that Jenkins tries to auto-update and crashes, or maybe there's a runaway test run that brings everything down…

Time is really limited these days, and since I had heard good things about monit, I decided to try it and have Jenkins kicked back into action whenever it dies for some reason. This way we can avoid cases where the test suites haven't been running for a day or two and nobody noticed… :-|

So, long story short, here's the quick and dirty monit recipe to kick Apache + Jenkins (this is on Debian Squeeze):

check process jenkins with pidfile /var/run/jenkins/jenkins.pid
  start program = "/etc/init.d/jenkins start" with timeout 90 seconds
  stop program  = "/etc/init.d/jenkins stop"
  if failed host my.host.name port 8080 protocol http
     and request "/"
     then restart

check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 start" with timeout 60 seconds
  stop program  = "/etc/init.d/apache2 stop"
  if failed host my.host.name port 80 protocol http
     and request "/"
     then restart
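
Assuming you drop the snippet above into a file that your monitrc includes (on Debian, /etc/monit/conf.d/ is a common choice), you can verify and activate it with the standard monit commands. Note that monit status only works if the monit http interface is enabled in monitrc:

# Check the configuration syntax, then reload monit
monit -t
monit reload

# A little later, both processes should be reported as running
monit status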

And, just for kicks, a complete Monit module for puppet up on Github. Have fun!

How to detect TCP retransmit timeouts in your network

Some months ago, while investigating on a problem in our infrastructure, I put together a small tool to help detecting TCP retransmits happening during HTTP requests.

TCP retransmissions can happen, for example, when a client sends a SYN packet to the server and the server responds with a SYN-ACK, but, for whatever reason, the client never receives the SYN-ACK. In that case, the client correctly waits for a given time, called the TCP retransmission timeout, which is usually set to 3 seconds.

There's probably a million reasons why the client may never receive a SYN-ACK. The one I've seen more often is packet loss, which in turn can have lots of reasons, for example a malfunctioning or misconfigured network switch.

However, you can immediately tell whether your timeout/hang problems are caused by TCP retransmissions, because they produce response times that cluster suspiciously often around 3, 9 and 21 seconds (and so on, of course).

In fact, the TCP retransmission timeout starts at 3 seconds, but if the client resends after the timeout and still receives no answer, it doubles the wait to 6 s, so the total response time will be 9 seconds, assuming the client finally receives the SYN-ACK at that point. Otherwise, 3 + 6 + 12 = 21 s, then 3 + 6 + 12 + 24 = 45 s, and so on and so forth.
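
Independently of the tool below, on Linux you can also get a quick hint from the kernel's cumulative TCP counters. Run this twice a few seconds apart; if the retransmission numbers grow noticeably under load, something is dropping packets:

netstat -s | grep -i retrans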

So, this little tool fires a quick batch of HTTP requests at a given server and measures the response times, highlighting slow responses (> 0.5 s). If you see reported response times of 3.002 s, 9.005 s or similar, then you are probably in the presence of TCP retransmissions and/or packet loss.

Finally, here it is:


#!/usr/bin/env perl
#
# https://gist.github.com/1101500
#
# Fires HTTP request batches at the specified hostname
# and analyzes the response times.
#
# If you have suspicious frequency of 3.00x, 9.00x, 21.00x
# seconds, then most probably you have a problem of packet loss
# in your network.
#
# cosimo@opera.com, sometime in 2011
#

use strict;
use LWP::UserAgent ();
use Time::HiRes ();

$| = 1;

my $ua = LWP::UserAgent->new();
$ua->agent("$0/0.01");

# Tests this hostname
my $server = $ARGV[0] || die "Usage: $0 <hostname>\n";

# Picks the URLs to test in this list, one after the other
my @url_pool = qw(
	/ping.html
);

my $total_reqs = 0;
my $total_elapsed = 0.0;
my $n_pick = 0;
my $url_to_fire;

my $max_elapsed = 0.0;
my $max_elapsed_when = '';
my $failed_reqs = 0;
my $slow_responses = 0;
my $terminate_now = 0;

sub output_report {
	return unless $total_reqs;   # avoid division by zero if interrupted before any request
	print "Report for:            $server at " . localtime() . "\n";
	printf "Total requests:        %d in %.3f s\n", $total_reqs, $total_elapsed;
	print "Failed requests:       $failed_reqs\n";
	print "Slow responses (>0.5s): $slow_responses (slowest $max_elapsed s at $max_elapsed_when)\n";
	printf "Average response time: %.3f s (%.3f req/s)\n", $total_elapsed / $total_reqs, $total_reqs / $total_elapsed;
	print "--------------------------------------------------------------------\n";
	sleep 1;
	return;
}

$SIG{INT} = sub { $terminate_now = 1 };

while (not $terminate_now) {

	$url_to_fire = "http://" . $server . $url_pool[$n_pick];

	my $t0 = [ Time::HiRes::gettimeofday() ];
	my $resp = $ua->get($url_to_fire);
	my $elapsed = Time::HiRes::tv_interval($t0);

	$failed_reqs++ if ! $resp->is_success;

	$total_reqs++;
	$total_elapsed += $elapsed;

	if ($elapsed > $max_elapsed) {
		$max_elapsed = $elapsed;
		$max_elapsed_when = scalar localtime;
		printf "[SLOW] %s, %s served in %.3f sn", $max_elapsed_when, $url_to_fire, $max_elapsed;
	}

	$slow_responses++ if $elapsed >= 0.5;
	$n_pick = 0       if ++$n_pick > $#url_pool;
	output_report()   if $total_reqs > 0 and ($total_reqs % 1000 == 0);

}
continue {
    Time::HiRes::usleep(100000);
}

output_report();

# End
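
Assuming you save the gist as check_retransmits.pl (the file name and hostname here are just examples), you run it like this; it repeatedly requests /ping.html on that host and prints a report every 1000 requests, or when you stop it with Ctrl-C:

perl check_retransmits.pl www.example.com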

It's also published here on Github, https://gist.github.com/1101500. Have fun!

A command line tool for Debian to purge Varnish objects

I've been using varnish mostly on Debian systems. I found the reload-vcl script included in Debian to be useful.

The reload-vcl script

It's part of the standard varnish Debian package. It uses the system defaults in /etc/default/varnish, so it knows how to correctly invoke the varnishadm utility to perform administrative commands. As the name implies, it reloads the default VCL file using the vcl.load and vcl.use commands, checking that every step succeeds before continuing, so it's safe to use. It loads the new VCL file and labels it automatically with a unique id.
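
Done by hand, what reload-vcl automates boils down to roughly the following varnishadm calls (the management port, label and path here are only examples):

varnishadm -T localhost:6082 vcl.load new_vcl_20111123 /etc/varnish/default.vcl
varnishadm -T localhost:6082 vcl.use new_vcl_20111123
varnishadm -T localhost:6082 vcl.list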

Something analogous for the purge functionality would have been useful, so I looked at the source code of reload-vcl. Most of it deals with loading /etc/default/varnish and various sanity checks. I reused that part to make another script that controls cache purging.

The purge-cache script

Here's the full source code. Below there's a link to download the latest version from github.


#!/bin/sh

# purge-cache: Script to purge varnish cache. Defaults are defined in
# /etc/default/varnish.
#
# Cosimo <cosimo@cpan.org>
# Based on reload-vcl, by Stig Sandbeck Mathisen <ssm at debian dot org>

# Settings
defaults=/etc/default/varnish

# Paths
varnishadm=/usr/bin/varnishadm
date=/bin/date 
tempfile=/bin/tempfile

# Messages
# msg_no_varnishadm: varnishadm
msg_no_varnishadm="Error: Cannot execute %s\n"
msg_no_management="Error: $DAEMON_OPTS must contain '-T hostname:port'\n"
# msg_defaults_not_readable: defaults
msg_defaults_not_readable="Error: %s is not readable\n"
# msg_defaults_not_there: defaults
msg_defaults_not_there="Error: %s does not exist\n"
msg_usage="Usage: $0 [-h][-q][-u <url>|-r <regex>|-a]\n\t-h\tdisplay help\n\t-q\tbe quiet\n\t-u\tpurge by exact (relative) url (ex.: /en/products/)\n\t-r\tpurge objects with URL matching a regex (ex.: ^/blogs/)\n\t-a\tpurge all objects from cache\n"
msg_purge_failed="Error: purge command failed\n"
# msg_purge_url: url
msg_purge_url="Purging objects by exact url: %s\n"
# msg_purge_regex: regex
msg_purge_regex="Purging objects with URL matching regex: %s\n"
msg_purge_all="Purging all cache\n"
msg_purge_ok="Purge command successful\n"

# Load defaults file
if [ -f "$defaults" ]
then
    if [ -r "$defaults" ]
    then
        . "$defaults"
    else
        printf >&2 "$msg_defaults_not_readable" $defaults
        exit 1 
    fi
else
    printf >&2 "$msg_defaults_not_there" $defaults
    exit 1
fi

# parse command line arguments
while getopts hqu:r:a flag
do
    case $flag in
        h)
            printf >&2 "$msg_usage"
            exit 0
            ;; 
        u)
            purge_method=url
        url="$OPTARG"
            ;; 
        r)
            purge_method=regex
        regex="$OPTARG"
            ;; 
        a)
            purge_method=all
            ;; 
        q)
            quiet=1
            ;; 
        *)
            printf >&2 "$msg_usage\n"
            exit 1
            ;; 
    esac
done

# Parse $DAEMON_OPTS (options must be kept in sync with varnishd).
# Extract the -f and the -T option, and (try to) ensure that the
# management interface is on the form hostname:address
OPTIND=1
while getopts a:b:dFf:g:h:l:n:P:p:s:T:t:u:Vw: flag $DAEMON_OPTS
do
    case $flag in
        f)
            if [ -f "$OPTARG" ]; then
                vcl_file="$OPTARG"
            fi 
            ;; 
        T)
            if [ -n "$OPTARG" -a "$OPTARG" != "${OPTARG%%:*}" ]
                then
                mgmt_interface="$OPTARG"
            fi  
            ;;  
    esac
done

# Sanity checks 
if [ ! -x "$varnishadm" ]
then
    printf >&2 "$msg_no_varnishadm" $varnishadm
    exit 1
fi

if [ -z "$mgmt_interface" ]
then
    printf >&2 "$msg_no_management"
    exit 1
fi

logfile=$($tempfile)
purge_command="vcl.list"

# Now run the purge command against the admin interface
if [[ $purge_method = "url" ]]
then
        purge_command="purge req.url == $url"
        printf >&2 "$msg_purge_url" $url | grep -v "^$" > $logfile
else
    if [[ $purge_method = "regex" ]]
    then
        purge_command="purge.url $regex"
        printf >&2 "$msg_purge_regex" $regex | grep -v "^$" > $logfile
    else
        if [[ $purge_method = "all" ]]
        then
            purge_command="purge.url ."
            printf >&2 "$msg_purge_all" | grep -v "^$" > $logfile
        fi
    fi
fi

# For some reason, using:
#
#   fi | grep -v "^$" > $logfile
#
# results in purge_command assignment being wiped out
# at the end of the block??

if [ -z "$purge_command" ]
then
    printf >&2 "$msg_usage\n"
    exit 1
fi

# echo "cmd: $varnishadm -T $mgmt_interface $purge_command"

if $varnishadm -T $mgmt_interface $purge_command
then
    printf >&2 "$msg_purge_ok"
else
    printf >&2 "$msg_purge_failed"
    exitstatus=1
fi | grep -v "^$" > $logfile

# Blather
if [ -z "${quiet}" -o -n "$exitstatus" ]
then
    cat >&2 $logfile
fi

# Cleanup
rm -f $logfile  
exit $exitstatus

You can control how objects are purged from the cache with 3 options:

  • -a: purges all objects
  • -u <url>: purges an exact url
  • -r <regexp>: purges objects matching a regular expression

Examples:

# Purges all objects
purge-cache -a

# Purges all objects starting with "/products"
purge-cache -r '^/products'

# Purges objects with exact URL
purge-cache -u '/en/homepage'

Goal: no downtime

Both reload-vcl and purge-cache can be combined in a single script to be triggered when deploying new VCL code or new backend applications. Instead of restarting varnish, which I really don't like, and which isn't very reliable either (on Debian sometimes it won't come back up), I use purge-cache -a to purge all objects and then reload-vcl to load and activate the newly deployed VCL code.

This procedure has no downtime at all. The effect of purging all objects can potentially be hard on the backends, but we're not at that point yet. In the busiest applications we have, it usually takes around 10-20 seconds to get back to a 70%-75% hit rate, so I would say that's not really a problem right now.
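
A minimal sketch of that deploy step, assuming both scripts are installed in the PATH, might look like this:

#!/bin/sh
# Hypothetical deploy step: empty the cache, then load and activate
# the freshly deployed VCL, all without restarting varnishd
purge-cache -a && reload-vcl || exit 1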

Download!

You can download the purge-cache script from github. I contacted the maintainer of the reload-vcl script. Maybe he will include purge-cache in the next release of the varnish debian package… or maybe I could package it as a Perl CPAN module.

Puppet external nodes classifier script in Perl

The upgrade to puppet 2.6.2 worked out fine. Coming from 0.24.5, I noticed a really welcome speed improvement. However, I ran into a tricky problem.

While upgrading to 2.6, I also decided to switch to an external nodes classifier script. If you don't know about it, it's a nice puppet feature that I had planned to use from the start. It allows you to write a small script, in any language you want, that basically tells the puppetmaster, given a hostname, what you want that machine to be.

Puppet calls your script with one argument that is the hostname of the machine that is asking for its catalog of resources. In your script, you have to output something like the following:

---
classes:
  - perl
  - apache
  - ntp
  - munin
environment: production
parameters:
  puppet_server: my.puppetmaster.com
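
For completeness, the puppetmaster is told to use the classifier through two settings in puppet.conf; the script path below is only an example:

[master]
    node_terminus  = exec
    external_nodes = /etc/puppet/node_classifier.pl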

You specify all the "classes" of that machine, so basically what you want puppet to install (or repair) on that machine. So far so good. My classifier script looks into some preset directories for project-specific JSON files, and then checks whether any of these JSON files contain the name of the machine that puppet is asking about. Code follows:

#!/usr/bin/env perl
#
# http://docs.puppetlabs.com/guides/external_nodes.html
#

use strict;
use warnings;
use File::Basename ();
use File::Slurp ();
use JSON::XS ();
use YAML ();

our $nodes_dir = '/etc/puppet/manifests/nodes';

our %default_params = (
    puppet_server => 'my.puppetmaster.com',
);

# ...
# A few very simple subs
# omitted for brevity
# ...

# The hostname puppet asks for
my $wanted = $ARGV[0];

# The data structure found in the JSON file
my $node_info = search_into_json_files_and_find($wanted);

my $puppet_classifier_info = {
    classes => $node_info->{puppet_classes},
    environment => 'production',
    # note the backslash: we want a hash reference, not a flattened list
    parameters => \%default_params,
};

print YAML->Dump($puppet_classifier_info);

Now, I don't know if you can immediately spot the problem here, but I didn't, and I wasted a good part of an afternoon chasing a bug I didn't even know existed. The resulting YAML (puppet wants YAML) was this:

--- YAML 
--- 
classes: 
    - geodns::production::backend 
environment: production 
name: z01-06-02 
parameters: 
    puppet_server: z01-06-02

The problem is that it looks innocent and valid, and in fact it is valid, but it's two YAML documents, not one. So puppet parses the --- YAML line, which is a complete YAML document on its own, and ignores the rest.

And why was that happening in the first place? Because of the YAML->Dump() call I wrote, instead of the correct YAML::Dump()… Eh :) So the correct code is:

print YAML::Dump($puppet_classifier_info);

Never use YAML->Something()

Upgrade of puppet from 0.24.5 to 2.6.2: “Got nil value for content”

Something I should have probably done a while ago: upgrading our puppet clients and master from 0.24.5 (current Debian stable) to 2.6.2 (current backports). I've been advised to go all the way to 2.6.4, which contains an important security fix. I'll probably do that soon.

The first mistake I made was to upgrade one client before the puppetmaster. In my mental model it's always OK to upgrade the server (the puppetmaster) first, but I remembered being kind of surprised when I learned that puppet needs the clients upgraded first, so that's what stuck in my mind: first upgrade the clients, then the puppetmaster.

So I did, and that didn't work at all: the client couldn't retrieve the catalog. I discovered I had to upgrade the puppetmaster first. Another very common thing with puppet: if something doesn't work, wipe out the SSL certificates. Almost every puppet user, maybe inexperienced like me, will tell you to remove the entire /var/lib/puppet/ssl directory and restart your client. Good. So I did.

After fiddling for a while with client- and server-side SSL certificates, removing them, and various --waitforcert and puppetca invocations, I finally got the client and the puppetmaster talking to each other correctly, downloading the manifests and applying the changes. However, one problem remained…

Digression: custom facter plugins

I wrote a specific facter plugin that provides a datacenter fact. Just as an example, it works like this:

#
# Provide an additional 'datacenter' fact
# to use in generic modules to provide datacenter
# specific settings, such as resolv.conf
#
# Cosimo, 03/Aug/2010
#
# $Id: datacenter.rb 15240 2011-01-03 14:27:44Z cosimo $

Facter.add("datacenter") do
    setcode do

        datacenter = "unknown"

        # Get current ip address from Facter's own database
        ipaddr = Facter.value(:ipaddress)

        if ipaddr.match("^12.34.56.")
            datacenter = "dc1"
        elsif ipaddr.match("^34.56.78.")
            datacenter = "dc2"
        elsif ipaddr.match("^56.78.12.")
            datacenter = "dc3"
        # etc ...
        # etc ...
        end

        datacenter
    end
end

This allows for very cool things, like specific DNS resolver settings by datacenter:

# Data-center based settings
case $datacenter {
    "dc1" :  { include opera::datacenters::dc1 }
    "dc2" :  { include opera::datacenters::dc2 }
    "dc3" :  { include opera::datacenters::dc3 }
    # ...
    default: { include opera::datacenters::dc1 }
}

where each opera::datacenters::dcN class contains all the datacenter-dependent settings. In my case, just the resolver for now.

class opera::datacenters::dc1 {
    resolver::dns { "dc1-ns":
        domain => "opera.com",
        nameservers => [ "1.2.3.4", "5.6.7.8" ],
    }
}

resolver::dns is in turn a small define in a resolver module I wrote to generate resolv.conf contents from a small template. It's so small I can copy/paste it here:

# "resolver/manifests/init.pp"
define resolver::dns ($nameservers, $search="", $domain="") {
    file { "/etc/resolv.conf":
        ensure => "present",
        owner  => "root",
        group  => "root",
        mode   => 644,
        content => template("resolver/resolv.conf.erb"),
    }
}

# "resolver/templates/resolv.conf.erb"
# File managed by puppet <%= puppetversion %> on <%= fqdn %>
# Data center: <%= name %>
<% if domain != "" %>domain <%= domain %>
<% end %>
<% if search != "" %>search <%= search %>
<% end %>
<% nameservers.each do |ns| %>nameserver <%= ns %>
<% end %>

One problem remained…

Let's get back to the remaining problem. Every client run was spitting out this error:

info: Retrieving plugin
err: /File[/var/lib/puppet/lib]: Could not evaluate: Got nil value for content

After a lot of searching and reading through similar (but not quite identical) messages, I found this thread where someone was having a very similar problem, but with a different file, /root/.gitconfig, which he had obviously specified in his manifests.

My problem happened with /var/lib/puppet/lib, which is never specified in any of my manifests. But the symlink bit got me thinking. At some point, to get the custom facter plugin working, and having read about the differences between older and 2.6.x versions of puppet, I had put it into my module's lib/facter folder, but I had also created a symlink called plugins (required by the new puppet). Doing so, I thought, would prevent problems: the new puppet could read the file from the new folder, while the older version could read it from "lib".

Well, it turns out that puppet cannot read custom facter plugins from a plugins folder that is a symlink. Removing the symlink and making {module}/plugins/facter a real directory solved that problem too.

I really hope to save someone else the time I spent on this. Cheers :)

Puppet custom facts, and master-less puppet deployment

As I mentioned a few weeks ago, I'm using Puppet for some smaller projects here at work. They're pilot projects to see how puppet behaves and scales for us before taking it into bigger challenges.

One of the problems so far is that we're using fabric as the "last-mile" deployment tool, and that doesn't yet have any way to run jobs in parallel. That's the reason why I'm starting to look elsewhere, like mcollective for example.

However, today I had to prepare a new varnish box for files.myopera.com. This new machine is in a different data center from our current one, so we don't have a puppetmaster deployed there yet. That had already stopped me from using puppet on another project. Lately, though, I've been reading on the puppet-users mailing list that several people have tried a master-less puppet setup, where no puppetmasterd is running at all. You just deploy the puppet files, via rsync, source control or pigeons, and then let the standalone puppet executable run.

Puppet master-less setup

To do this, you need at least a good set of reusable puppet modules, which I have been building up a small piece at a time over the last few months. So I decided to give it a shot, and got everything up and running quickly. I deployed my set of modules in /etc/puppet/modules and built a single manifest file that looks like the following:


#
# Puppet standalone no-master deployment
# for files.myopera.com varnish nodes
#
node varnish_box {

    # Basic debian stuff
    $disableservices = [ "exim4", "nfs-common", "portmap" ]
    service { $disableservices:
        enable => "false",
        ensure => "stopped",
    }

    # Can cause overload on the filesystem through cronjobs
    package { "locate": ensure => "absent", }
    package { "man-db": ensure => "absent", }

    # Basic configuration, depends on data center too
    include opera
    include opera::sudoers
    include opera::admins::core_services
    include opera::datacenters::dc2

    # Basic packages now. These are all in-house modules
    include base_packages
    include locales
    include bash
    include munin
    include cron
    include puppet
    include varnish

    varnish::config { "files-varnish-config":
        vcl_conf => "files.vcl",
        storage_type => "malloc",
        storage_size => "20G",
        listen_port => 80,
        ttl => 864000,
        thread_pools => 8,
        thread_min => 800,
        thread_max => 10000,
    }

    #
    # Nginx (SSL certs required)
    #
    include nginx

    nginx::config { "/etc/nginx/nginx.conf":
        worker_processes => 16,
        worker_connections => 16384,
        keepalive_timeout => 5,
    }

    nginx::vhost { "https.files.myopera.com":
        ensure => "present",
        source => "/usr/local/src/myopera/config/nginx/sites-available/https.files.myopera.com",
    }

    bash::prompt { "/root/.bashrc":
        description => "Files::Varnish",
        color => "red",
    }

    munin::plugin::custom { "cpuopera": }

    munin:: plugin { "if_eth0":
        plugin_name => "if_"
    }

    munin::plugin {
        [ "mem_", "load", "df", "df_inode", "netstat", "vmstat",
          "iostat", "uptime", "threads", "open_files", "memory", "diskstats" ]:
    }
}

node default inherits varnish_box {
}

node 'my.hostname.opera.com' inherits varnish_box {
}

This manifest installs varnish, nginx, a bunch of basic packages I always want on every machine (vim, tcpdump, etc…), munin with the appropriate plugins already configured, and also a nice red bash prompt to warn me that this is production stuff.

This file is everything the puppet client needs to run and produce the desired effect, without needing a puppet master. Save it as varnish-node.pp and then you run it with:


puppet varnish-node.pp
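
With puppet 2.6 and later, the same standalone run is done through the apply subcommand, pointing it at the modules explicitly, something like:

puppet apply --modulepath=/etc/puppet/modules varnish-node.pp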

One problem that usually arises is how to serve the static files. In this case, I assumed I'm going to check out the source code and config files from my own repository into /usr/local/src/... so I don't need to point puppet to a server with the classic:


source => "puppet:///module/filename"

but you can just use:


source => "/usr/local/whatever/in/my/local/filesystem"

That's great and it works just fine.

Custom facts

Puppet uses a utility called facter to extract "facts" from the underlying system, sysinfo-style. A typical facter run produces the following output:


$ facter
architecture => x86_64
domain => internal.opera.com
facterversion => 1.5.6
fqdn => cd01.internal.opera.com
...
hardwaremodel => x86_64
hostname => cd01
id => cosimo
ipaddress => 10.20.30.40
ipaddress_eth0 => 10.20.30.40
is_virtual => false
...
kernel => Linux
kernelmajversion => 2.6
...
operatingsystem => Ubuntu
operatingsystemrelease => 10.04
physicalprocessorcount => 1
processor0 => Intel(R) Core(TM)2 Duo CPU     E6550  @ 2.33GHz
processor1 => Intel(R) Core(TM)2 Duo CPU     E6550  @ 2.33GHz
processorcount => 2
...

and so on. Within puppet manifests, you can use any of these facts to influence the configuration of your systems. For example, if memorysize > 4.0 GB, then run varnish with 2000 threads instead of 1000. This is all very cool, but sometimes you need something that facter doesn't give you by default.

That's why facter can be extended.

I tried creating a datacenter.rb facter plugin that looks at the box's IP address and figures out which data center we're in. That in turn can be used to set up the nameservers and other stuff.

Here's the code. My Ruby-fu is less than awesome:


#
# Provide an additional 'datacenter' fact
# to use in generic modules to provide datacenter
# specific settings, such as resolv.conf
#
# Cosimo, 03/Aug/2010
#

Facter.add("datacenter") do
    setcode do

        datacenter = "unknown"

        # Get current ip address from Facter's own database
        ipaddr = Facter.value(:ipaddress)

        # Data center on Mars
        if ipaddr.match("^88.88.88.")
            datacenter = "mars"

        # This one on Mercury
        elsif ipaddr.match("^99.99.99.")
            datacenter = "mercury"

        # And on Jupiter
        elsif ipaddr.match("^77.77.77.")
            datacenter = "jupiter"
        end

        datacenter
    end
end

However, there's one problem. When puppet runs, it doesn't pick up the new fact, even though facter from the command line sees it and executes it just fine (when the plugin is in the current directory).
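
By the way, one handy way to test the plugin from any directory is the FACTERLIB environment variable, which tells facter about extra directories to search for facts; the path below is just an example:

FACTERLIB=/etc/puppet/modules/opera/lib/facter facter datacenter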

Now I need to find out how to tell puppet (and/or facter) to look into one of my puppet modules' plugins (or lib, from 0.25.x onwards) directories to load the additional datacenter.rb fact.

Any ideas?

Primary to secondary master failover

Here's how to fail over from the primary to the secondary master.
This was written for the My Opera case, and we use MySQL, but it should be fairly generic.

Disable monitoring checks

  • Pause any Pingdom checks that are running for the affected systems
  • Set downtime or disable notifications in Nagios for the affected systems

Record log file and position

Assuming your secondary master (DB2) is idle, you now have to record the binary log file and position by issuing a SHOW MASTER STATUS command:


mysql> SHOW MASTER STATUS \G
*************************** 1. row ***************************
            File: mysql-bin.000024    <- MASTER LOG FILE
        Position: 91074774            <- MASTER LOG POSITION
    Binlog_Do_DB:
Binlog_Ignore_DB:

1 row in set (0.00 sec)

Write them down somewhere.
If you need to perform any kind of write/alter query on this host, you will have to issue SHOW MASTER STATUS again, because the position will change.

Also try repeating this command. You should see that the log file and position do not change between different runs.

Enable maintenance mode

Now is the time to enable your maintenance or downtime mode for the site or service. That will of course depend on your deployment tools.

Stop backend servers

Your backend/application servers might need to stay up and running. For example, in the case of the Auth service we want this, because we're going to serve static responses (html, xml, etc…) to the clients instead of just letting the connections hang.

In other cases, it's fine to just shut down the backends. You may want to do this for 2 reasons:

  • to make sure that nobody is accidentally hitting your master database, from your internal network or otherwise
  • because doing so should close all the connections to your master database. This actually depends on the wait_timeout variable of the mysql server: connections won't go away until wait_timeout seconds have passed. That is the normal behaviour, so don't panic if you still see connections after you've shut down the backends (see the snippet below).
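
To see how long idle connections will linger, and who is still connected, you can check on the master:

mysql> SHOW VARIABLES LIKE 'wait_timeout';
mysql> SHOW PROCESSLIST;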

Switch to the new master now

This depends on how you actually perform the switch. I can imagine at least 2 ways to do this:

  • by instructing LVS to direct all connections to the secondary master
  • take over the IP address either manually or using keepalived

On My Opera, we use keepalived with a private group between the two master database servers, so it's just a matter of:

  • stopping keepalived on the primary master database
  • starting keepalived on the secondary master database

Here is a quick and dirty bash script that checks which host is currently the master and makes the switch:

#!/bin/bash

DB1=pri-master-hostname
DB2=sec-master-hostname

function toggle_keepalive() {
        host=$1
        if [[ `ssh $host pidof keepalived` == "" ]]; then
                ssh $host /etc/init.d/keepalived start
                if [[ `ssh $host pidof keepalived` == "" ]]; then
                        echo '*** KEEPALIVE START FAILED ***'
                        echo 'Aborting the master failover procedure'
                        exit 1
                fi
        else
                ssh $host /etc/init.d/keepalived stop
                if [[ `ssh $host pidof keepalived` != "" ]]; then
                        echo '*** KEEPALIVE STOP FAILED ***'
                        echo 'Aborting the master failover procedure'
                        exit 1
                fi
        fi
}

echo "Master Database failover"
echo

# Find out who's the primary master now, and swap them
if [[ `ssh $DB1 pidof keepalived` == "" ]]; then
        PRIMARY=$DB2
        SECONDARY=$DB1
else
        PRIMARY=$DB1
        SECONDARY=$DB2
fi

echo Primary is $PRIMARY
echo Secondary is $SECONDARY

# Shutdown primary first, then enable secondary
toggle_keepalive $PRIMARY
toggle_keepalive $SECONDARY

As soon as you do that, the secondary master is promoted to primary master.
Since the two masters are assumed to already be replicating from each other, nothing changes for them. It does change, however, for all the slaves that were replicating from the old primary master. We'll see what to do about that later.

Restart backend servers

Now it's the right time to restart the backend servers, and check that they correctly connect to the new primary master.

On My Opera, we're using a virtual address, w-mlb (write-mysql-load-balancer), to refer to the active primary master database. We use this name in the configuration files everywhere.

This means that we don't have to change anything in the backend servers configuration. We just restart them, and they will connect to the new primary master, due to the IP takeover step described above.

Turn off maintenance mode

If the backends are working correctly and connecting to the new master db, it's time to remove the maintenance page, so do that.

We're enabling and disabling maintenance mode by enabling and disabling a virtual host configuration in our frontends and reloading or restarting the frontend httpd servers.

From now on, your application is hopefully up and running and receiving client requests, so your downtime window is over.

Check replication lag

The database slaves at this point are still replicating from the former primary master database (DB1).

But DB1 is not receiving any traffic (queries) anymore, so it's basically idle, as it should be. Any queries happening on DB1 now mean that something is seriously wrong. There might be lingering connections, but no activity.

Then it's important that all the slaves show no replication lag, so issuing a SHOW SLAVE STATUS command should show zero seconds behind.

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: <DB1-ip>
                Master_Port: <port>
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000025
        Read_Master_Log_Pos: 13691126
...
      Seconds_Behind_Master: 0

1 row in set (0.00 sec)

It's important that Seconds_Behind_Master is zero.
If it's not, it means that the slave needs more time to fully replay all the traffic that went to the former primary master, DB1.

Remember that the primary master is now DB2, while DB1 is the secondary master.

Change master on the slaves

Now you can perform the CHANGE MASTER TO command on all the slaves.

Now you have to bring back the notes about MASTER LOG FILE and MASTER LOG POSITION.

First, stop the slave replication.

mysql> STOP SLAVE;

Then the exact command to issue, if nothing else about your replication changed, is:

mysql> CHANGE MASTER TO MASTER_HOST='<DB2-ip>', MASTER_LOG_FILE='<master_log_file>', MASTER_LOG_POS=<master_log_position>;

Then restart the slave replication:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G

The following SHOW SLAVE STATUS \G command should show the replication running and, depending on how long it took you to change master after the new master took over the IP, some seconds of replication lag.

This number should rapidly go down towards zero.

If it's not, then you might have a problem. Go hide now or take the first flight to Australia or something.

We wrote a switch-master Perl script that proved to be very effective and useful. Example:

./switch-master --host <your_slave> --new-master <new_master_ip> --log-file <master_log_file> --log-pos <master_log_position>

This script performs a lot of sanity checks. Before switching master, it checks that replication lag is zero; if it's not, it waits a bit and checks again, etc…

It's made to try to prevent disaster from striking. Very useful and quick to use.

Enable monitoring checks

Now verify that everything looks fine, replication lag is zero, your backends are working correctly, try to use your site a bit.

If everything's fine, enable or unpause the monitoring checks.
You have made it!

Varnish “sess_workspace” and why it is important

When using Varnish on a high traffic site like opera.com or my.opera.com, it is important to reach a stable and sane configuration (both VCL and general service tuning).

If you're just starting to use Varnish now, it's easy to overlook things (like I did, for example :) and later experience crashes or unexpected problems.

Of course, you should read the Varnish wiki, but I'd suggest you also read at least the following links; I found them very useful:

A couple of weeks ago we experienced some random Varnish crashes, one per day on average. That happened over a weekend. As usual, we didn't really notice that Varnish was crashing until we looked at our Munin graphs. Once you know that Varnish is crashing, everything gets easier :)

Just look at your syslog file. We did, and we found the following error message:

Feb 26 06:58:26 p26-01 varnishd[19110]: Child (27707) died signal=6
Feb 26 06:58:26 p26-01 varnishd[19110]: Child (27707) Panic message: Missing errorhandling code in HSH_Prepare(), cache_hash.c line 188:#012  Condition((p) != 0) not true.  thread = (cache-worker)sp = 0x7f8007c7f008 {#012  fd = 239, id = 239, xid = 1109462166,#012  client = 213.236.208.102:39798,#012  step = STP_LOOKUP,#012  handling = hash,#012  ws = 0x7f8007c7f078 { overflow#012    id = "sess",#012    {s,f,r,e} = {0x7f8007c7f808,,+16369,(nil),+16384},#012  },#012    worker = 0x7f82c94e9be0 {#012    },#012    vcl = {#012      srcname = {#012        "input",#012        "Default",#012        "/etc/varnish/accept-language.vcl",#012      },#012    },#012},#012
Feb 26 06:58:26 p26-01 varnishd[19110]: Child cleanup complete
Feb 26 06:58:26 p26-01 varnishd[19110]: child (3710) Started
Feb 26 06:58:26 p26-01 varnishd[19110]: Child (3710) said Closed fds: 3 4 5 10 11 13 14
Feb 26 06:58:26 p26-01 varnishd[19110]: Child (3710) said Child starts
Feb 26 06:58:26 p26-01 varnishd[19110]: Child (3710) said Ready
Feb 26 18:13:37 p26-01 varnishd[19110]: Child (7327) died signal=6
Feb 26 18:13:37 p26-01 varnishd[19110]: Child (7327) Panic message: Missing errorhandling code in HSH_Prepare(), cache_hash.c line 188:#012  Condition((p) != 0) not true.  thread = (cache-worker)sp = 0x7f8008e84008 {#012  fd = 248, id = 248, xid = 447481155,#012  client = 213.236.208.101:39963,#012  step = STP_LOOKUP,#012  handling = hash,#012  ws = 0x7f8008e84078 { overflow#012    id = "sess",#012    {s,f,r,e} = {0x7f8008e84808,,+16378,(nil),+16384},#012  },#012    worker = 0x7f81a4f5fbe0 {#012    },#012    vcl = {#012      srcname = {#012        "input",#012        "Default",#012        "/etc/varnish/accept-language.vcl",#012      },#012    },#012},#012
Feb 26 18:13:37 p26-01 varnishd[19110]: Child cleanup complete
Feb 26 18:13:37 p26-01 varnishd[19110]: child (30662) Started
Feb 26 18:13:37 p26-01 varnishd[19110]: Child (30662) said Closed fds: 3 4 5 10 11 13 14
Feb 26 18:13:37 p26-01 varnishd[19110]: Child (30662) said Child starts
Feb 26 18:13:37 p26-01 varnishd[19110]: Child (30662) said Ready

A bit of quick research brought me to sess_workspace.

We found out we had to increase the default (16 kB), especially since we do quite a bit of HTTP header copying and rewriting. In fact, if you do that, each request gets a session workspace of at most sess_workspace bytes to work with.

If you happen to need more space, maybe because clients are sending long HTTP header values, or because you are (like us) writing lots of additional varnish-specific headers, then Varnish won't be able to allocate enough workspace; it will log the assertion failure to syslog, panic and restart the child process, exactly as in the log above.

So we bumped sess_workspace to 256 kB by adding the following to the varnishd startup options:


-p sess_workspace=262144
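
You can also inspect or change the parameter on a running instance through the management interface (the port below is just the usual example value; a value changed this way only lasts until the next restart, so keep the startup option too):

varnishadm -T localhost:6082 param.show sess_workspace
varnishadm -T localhost:6082 param.set sess_workspace 262144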

And since then, we haven't had any crashes.

More varnish, now also on www.opera.com

I have been working on setting up and troubleshooting Varnish installations quite a bit lately. After deploying Varnish on My Opera for many different uses, namely APIs, avatars, user pictures and the frontpage, we also decided to try using Varnish on www.opera.com.

While "www" might seem much simpler than My Opera, it has its own challenges.
It doesn't have logged in users, or user-generated content, but, as with My Opera, a single URL is used to generate many (slightly) different versions of the content. Think about older versions of Opera, or maybe newest (betas, 10.50), mobile browsers, Opera Mini, site in "Mobile view", different languages, etc…

That makes caching with Varnish tricky, because you have to consider all of these variables and instruct Varnish to cache each of these variations separately. In this respect, opera.com is no doubt even harder than My Opera.

So, we decided to:

  • cache only the most trafficked pages (for now only the Opera startup page)
  • cache them only for Opera 10.x browsers
  • differentiate caching by specific version (the "x" in 10.x)

We basically used the same Varnish config as on My Opera, with the accept-language hack, changing only the URL-specific logic. With this setup, we managed to cut backend requests on opera.com by around 15%.
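
A minimal VCL 2.x sketch of this kind of per-version caching (the header name, User-Agent patterns and the choice to pass everything else are made up for illustration, not our actual config) could look like this:

sub vcl_recv {
    # Cache only for Opera 10.x; keep the minor version around so that
    # each 10.x release gets its own cached copy
    if (req.http.User-Agent ~ "Opera" && req.http.User-Agent ~ "Version/10\.") {
        set req.http.X-Opera-Version = regsub(req.http.User-Agent, ".*Version/(10\.[0-9]+).*", "\1");
    } else {
        return (pass);
    }
}

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    set req.hash += req.http.X-Opera-Version;
    return (hash);
}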