Tag Archives: puppet

A collection of useful sysctl snippets packaged as a puppet module

Recently I learned from Marco about /etc/sysctl.d, a folder where you can drop in files instead of changing /etc/sysctl.conf directly. That gave me the idea of building a puppet module for sysctl:

https://github.com/cosimo/puppet-modules/blob/master/sysctl/README

The idea is to assemble a collection of useful sysctl snippets. I started with the usual things we use everywhere:

  • LVS Direct Routing

    # LVS directives for Direct Routing
    # http://www.linuxvirtualserver.org/VS-DRouting.html
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    
  • TCP performance tuning
    #---------------------------------------------------------------------
    # TCP/IP performance optimization settings compared to debian defaults
    #
    # from http://varnish.projects.linpro.no/wiki/Performance
    #---------------------------------------------------------------------
    
    #net.ipv4.ip_local_port_range = 32768 61000
    net.ipv4.ip_local_port_range = 1024 65536
    # net.core.rmem_max = 131071
    net.core.rmem_max = 16777216
    # net.core.wmem_max = 131071
    net.core.wmem_max = 16777216
    # net.ipv4.tcp_rmem = 4096 87380 4194304
    net.ipv4.tcp_rmem = 4096 87380 16777216
    # net.ipv4.tcp_wmem = 4096 16384 4194304
    net.ipv4.tcp_wmem = 4096 65536 16777216
    # net.ipv4.tcp_fin_timeout = 60
    net.ipv4.tcp_fin_timeout = 20
    # net.core.netdev_max_backlog = 1000
    net.core.netdev_max_backlog = 30000
    # net.ipv4.tcp_no_metrics_save = 0
    net.ipv4.tcp_no_metrics_save = 1
    # net.core.somaxconn = 128
    net.core.somaxconn = 262144
    # net.ipv4.tcp_syncookies = 0
    net.ipv4.tcp_syncookies = 1
    # net.ipv4.tcp_max_orphans = 65536
    net.ipv4.tcp_max_orphans = 262144
    # net.ipv4.tcp_max_syn_backlog = 1024
    net.ipv4.tcp_max_syn_backlog = 262144
    # net.ipv4.tcp_synack_retries = 5
    net.ipv4.tcp_synack_retries = 3
    # net.ipv4.tcp_syn_retries = 5
    net.ipv4.tcp_syn_retries = 3
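
A minimal sketch of how one of these snippets could be shipped and applied with puppet (this is not necessarily the module's actual interface; see the README linked above):

# Sketch only: file names and module layout here are illustrative,
# not necessarily how the actual module is organized.
class sysctl::lvs_dr {

    file { "/etc/sysctl.d/lvs-dr.conf":
        ensure => "present",
        owner  => "root",
        group  => "root",
        mode   => 644,
        source => "puppet:///sysctl/lvs-dr.conf",
        notify => Exec["reload-sysctl-lvs-dr"],
    }

    exec { "reload-sysctl-lvs-dr":
        command     => "/sbin/sysctl -p /etc/sysctl.d/lvs-dr.conf",
        refreshonly => true,
    }
}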
    

I'm interested in both baseline settings to be applied by default everywhere (ex. vm.swappiness = <n>), and special-purpose settings to be "attached" to server roles, like db, file servers, http servers, etc… I'd love to hear from you.

Puppet external nodes classifier script in Perl

The upgrade to puppet 2.6.2 worked out fine. Coming from 0.24.5, I noticed a really welcome speed improvement. However, I had a tricky problem.

While upgrading to 2.6, I also decided to switch to an external nodes classifier script. If you don't know about it, it's a nice puppet feature that I had planned to use from the start. It allows you to write a small script, in any language you want, that basically tells the puppetmaster, given a hostname, what you want that machine to be.

Puppet calls your script with one argument that is the hostname of the machine that is asking for its catalog of resources. In your script, you have to output something like the following:

---
classes:
  - perl
  - apache
  - ntp
  - munin
environment: production
parameters:
  puppet_server: my.puppetmaster.com
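
On the puppetmaster side, the classifier is hooked up with two settings in puppet.conf (the script path below is just an example):

[master]
    node_terminus  = exec
    external_nodes = /etc/puppet/node_classifier.pl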

You can specify all the "classes" of that machine, so basically what you want puppet to install (or repair) on it. So far so good. My classifier script looks into some preset directories for project-specific JSON files, and then checks whether any of these JSON files contains the name of the machine puppet is asking for. Code follows:

#!/usr/bin/env perl
#
# http://docs.puppetlabs.com/guides/external_nodes.html
#

use strict;
use warnings;
use File::Basename ();
use File::Slurp ();
use JSON::XS ();
use YAML ();

our $nodes_dir = '/etc/puppet/manifests/nodes';

our %default_params = (
    puppet_server => 'my.puppetmaster.com',
);

# ...
# A few very simple subs
# omitted for brevity
# ...

# The hostname puppet asks for
my $wanted = $ARGV[0];

# The data structure found in the JSON file
my $node_info = search_into_json_files_and_find($wanted);

my $puppet_classifier_info = {
    classes => $node_info->{puppet_classes},
    environment => 'production',
    parameters => \%default_params,
};

print YAML->Dump($puppet_classifier_info);

Now, I don't know if you can immediately spot the problem here, but I didn't, so I wasted a good part of an afternoon chasing a bug I didn't even know existed. The resulting YAML (puppet wants YAML) was this one:

--- YAML 
--- 
classes: 
    - geodns::production::backend 
environment: production 
name: z01-06-02 
parameters: 
    puppet_server: z01-06-02

The problem is that this looks innocent and valid, and in fact it is valid, but it's two YAML documents, not one. Puppet parses only the first document, the lone "--- YAML" line, and ignores the rest.

And why was that happening in the first place? Because of the YAML->Dump() call I wrote, instead of the correct YAML::Dump()… Eh :) Called as a class method, Dump() receives the string "YAML" as its first argument and dutifully serializes it as a document of its own, before the real data. So the correct code is:

print YAML::Dump($puppet_classifier_info);
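
A quick way to see the difference (a standalone demo, not part of the classifier script):

use YAML ();

my $data = { classes => [ 'perl', 'apache' ], environment => 'production' };

# Method-call syntax: the class name "YAML" is passed as the first
# argument, so Dump() emits two documents ("--- YAML" plus the data)
print YAML->Dump($data);

# Function-call syntax: one argument, one document
print YAML::Dump($data);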

Never use YAML->Something()

Upgrade of puppet from 0.24.5 to 2.6.2: “Got nil value for content”

Something I should have probably done a while ago: upgrading our puppet clients and master from 0.24.5 (current Debian stable) to 2.6.2 (current backports). I've been advised to go all the way to 2.6.4, which contains an important security fix. I'll probably do that soon.

So, the first mistake I made was to upgrade one client before the puppetmaster. In my mental model it's always OK to upgrade the server (the puppetmaster) first, but I remembered being surprised when I learned that puppet wants the clients upgraded first, so that is what stuck with me: first upgrade the clients, then the puppetmaster.

So I did, and that didn't work at all. The client couldn't retrieve the catalog. I discovered I had to upgrade the puppetmaster first. Another very common thing with puppet: if something doesn't work, wipe out the SSL certificates. Almost every puppet user, perhaps inexperienced like me, will tell you to remove the entire /var/lib/puppet/ssl directory and restart your client. Good. So I did.

After fiddling for a while with client- and server-side SSL certificates, removing them, running with --waitforcert and invoking puppetca, I finally got the client and the puppetmaster talking to each other correctly, downloading the manifests and applying the changes. However, one problem remained…

Digression: custom facter plugins

I wrote a specific facter plugin that provides a datacenter fact. Just as an example, it works like this:

#
# Provide an additional 'datacenter' fact
# to use in generic modules to provide datacenter
# specific settings, such as resolv.conf
#
# Cosimo, 03/Aug/2010
#
# $Id: datacenter.rb 15240 2011-01-03 14:27:44Z cosimo $

Facter.add("datacenter") do
    setcode do

        datacenter = "unknown"

        # Get current ip address from Facter's own database
        ipaddr = Facter.value(:ipaddress)

        if ipaddr.match(/^12\.34\.56\./)
            datacenter = "dc1"
        elsif ipaddr.match(/^34\.56\.78\./)
            datacenter = "dc2"
        elsif ipaddr.match(/^56\.78\.12\./)
            datacenter = "dc3"
        # etc ...
        # etc ...
        end

        datacenter
    end
end

This allows for very cool things, like specific DNS resolver settings by datacenter:

# Data-center based settings
case $datacenter {
    "dc1" :  { include opera::datacenters::dc1 }
    "dc2" :  { include opera::datacenters::dc2 }
    "dc3" :  { include opera::datacenters::dc3 }
    # ...
    default: { include opera::datacenters::dc1 }
}

where each opera::datacenters::dcN class contains all the datacenter-dependent settings. In my case, just the resolver for now.

class opera::datacenters::dc1 {
    resolver::dns { "dc1-ns":
        domain => "opera.com",
        nameservers => [ "1.2.3.4", "5.6.7.8" ],
    }
}

resolver::dns is in turn a small define in a resolver module I wrote to generate resolv.conf contents from a small template. It's so small I can copy/paste it here:

# "resolver/manifests/init.pp"
define resolver::dns ($nameservers, $search="", $domain="") {
    file { "/etc/resolv.conf":
        ensure => "present",
        owner  => "root",
        group  => "root",
        mode   => 644,
        content => template("resolver/resolv.conf.erb"),
    }
}

# "resolver/templates/resolv.conf.erb"
# File managed by puppet <%= puppetversion %> on <%= fqdn %>
# Data center: <%= name %>
<% if domain != "" %>domain <%= domain %>
<% end %>
<% if search != "" %>search <%= search %>
<% end %>
<% nameservers.each do |ns| %>nameserver <%= ns %>
<% end %>

One problem remained…

Let's get back to the remaining problem. Every client run was spitting this error string:

info: Retrieving plugin
err: /File[/var/lib/puppet/lib]: Could not evaluate: Got nil value for content

After a lot of searching and reading through similar (but not quite identical) messages, I found this thread where someone was having a very similar problem, but with a different file, /root/.gitconfig, which he had explicitly specified in his manifests.

My problem happened with /var/lib/puppet/lib, which is never specified in any of my manifests. But the symlink detail got me thinking. At some point, to make my custom facter plugin work, and having read about the differences between older and 2.6.x versions of puppet, I had put it into my module's lib/facter folder, but I had also created a symlink called plugins (required by the new puppet). That way, I thought, I would avoid problems: one version of puppet could read the file from the new folder, while the other could still read it from "lib".

Well, it turns out that puppet cannot read custom facter plugins from a plugins folder that is a symlink. Removing the symlink and making {module}/plugins/facter a real directory solved that problem too.
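
In practice, the fix amounted to something like this (module name and paths are illustrative, not the exact ones):

# Replace the "plugins" symlink with a real directory holding the fact
cd /etc/puppet/modules/mymodule
rm plugins                      # was a symlink pointing at lib/
mkdir -p plugins/facter
cp lib/facter/datacenter.rb plugins/facter/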

I really hope to save someone else the time I spent on this. Cheers :)

Facter ported to Perl 6

A few days ago I wrote about my fun experiment trying to port facter to rakudo Perl 6. I said it was "almost completely functional".

Well, I was a bit optimistic, it seems :) It took me a few more nights of hacking, but in the end now it's almost completely functional :) It basically runs, I have ported a few facts from Facter's original ruby code too, like the kernel fact, or the physicalprocessorcount fact and other simple ones.

What's missing to declare the experiment successful is the implementation of confines. A confine, in Facter speak, is a specific restriction that applies to a fact. For example, the physicalprocessorcount fact reads some files from /proc, and that is only available on Linux. So, in this case, the confine rule for physicalprocessorcount is that the kernel fact must have "Linux" as its value. In code that becomes:

Facter.add("physicalprocessorcount", sub ($f) {
    $f.confine("kernel" => "Linux");
    $f.setcode(block => sub {
        Facter::Util::Resolution.exec('grep "physical id" /proc/cpuinfo|cut -d: -f 2|sort -u|wc -l');
    });
});

which is pretty similar to the Ruby counterpart:

Facter.add("physicalprocessorcount") do
    confine :kernel => :linux

    setcode do
        ppcount = Facter::Util::Resolution.exec('grep "physical id" /proc/cpuinfo|cut -d: -f 2|sort -u|wc -l')
    end
end

The Ruby version is still more elegant, but all in all I'm very happy with the outcome so far. It could probably be improved a lot too. Perl 6 is awesome. Get the code from http://github.com/cosimo/perl6-facter/ and feel free to ping me or comment if you want to know more.

Porting Facter from Ruby to Perl6

I've been playing with Puppet for a while now. One of the most interesting (and simple!) components of Puppet is Facter.

Facter is a small program that reports "facts" about your computer. When you run facter, its output looks like the following:

architecture => x86_64
facterversion => 1.5.6
fqdn => cd01.localdomain.lan
hardwaremodel => x86_64
hostname => cd01
id => cosimo
interfaces => eth0,pan0
ipaddress => 10.0.0.1
...
uptime_hours => 422
uptime_seconds => 1519256
virtual => physical

It is also interesting because it's extensible with your own custom plugins. A custom plugin is just a Ruby file, usually with a call to Facter.add:

Facter.add(:kernel) do
    setcode do
        require 'rbconfig'
        case Config::CONFIG['host_os']
        when /mswin|win32|dos|cygwin|mingw/i
            'windows'
        else
            Facter::Util::Resolution.exec("uname -s")
        end
    end
end

As a little experiment to become more familiar with Ruby and at the same time to enjoy writing some Perl 6, I decided to study the facter project and then port it to Perl6.

That took me a few hours over a couple of weekends, and it's almost completely functional. It is a straight port from the Ruby code, so it doesn't really use the magic powers of Perl6 yet. The idea is to use a different branch, now that I know it well under the hood, and to rewrite it from the ground up in Perl6.

I found that Ruby code maps very closely to Perl 6, apart from the yield instruction and a different model for static/instance variables. The closest Perl 6 equivalent to yield is gather/take, but I'm not really sure that is the appropriate construct to use here. yield is used in the fact value resolution algorithm, and that is currently the only thing that doesn't work properly in my port; everything else is in place. Regarding static/instance variables, Ruby uses @attribute for instance variables and @@attribute for class (static) variables. In Perl 6, public instance variables are written as $.attribute, while static variables are package globals, declared with our $attribute.
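
As a rough illustration of that mapping (the class and attribute names below are made up, not taken from the actual port):

class Fact {
    our %registry;      # package variable, playing the role of Ruby's @@registry
    has $.name;         # public instance attribute, like Ruby's @name with attr_reader

    method register() {
        %registry{ $.name } = self;
    }
}

my $fact = Fact.new(name => 'kernel');
$fact.register;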

Of course, there's a lot more! If you're interested, take a look at the code on Github, the URL is http://github.com/cosimo/perl6-facter.

Puppet custom facts, and master-less puppet deployment

As I mentioned a few weeks ago, I'm using Puppet for some smaller projects here at work. They're pilot projects to see how puppet behaves and scales for us before taking it into bigger challenges.

One of the problems so far is that we're using fabric as the "last-mile" deployment tool, and it doesn't yet have any way to run jobs in parallel. That's why I'm starting to look elsewhere, at mcollective for example.

However, today I had to prepare a new varnish box for files.myopera.com. This new machine is in a different data center from our current one, so we don't have a puppetmaster deployed there yet. That had already stopped me from using puppet on another project. But lately I've been reading on the puppet-users mailing list that several people have tried a master-less puppet configuration, where no puppetmasterd is running: you just deploy the puppet files, via rsync, source control or pigeons, and then let the standalone puppet executable run.

Puppet master-less setup

To do this, you need at least a good set of reusable puppet modules, which I have been trying to build up a small piece at a time over the last few months. So I decided to give it a shot, and got everything up and running quickly. I deployed my set of modules in /etc/puppet/modules and built a single manifest file that looks like the following:


#
# Puppet standalone no-master deployment
# for files.myopera.com varnish nodes
#
node varnish_box {

    # Basic debian stuff
    $disableservices = [ "exim4", "nfs-common", "portmap" ]
    service { $disableservices:
        enable => "false",
        ensure => "stopped",
    }

    # Can cause overload on the filesystem through cronjobs
    package { "locate": ensure => "absent", }
    package { "man-db": ensure => "absent", }

    # Basic configuration, depends on data center too
    include opera
    include opera::sudoers
    include opera::admins::core_services
    include opera::datacenters::dc2

    # Basic packages now. These are all in-house modules
    include base_packages
    include locales
    include bash
    include munin
    include cron
    include puppet
    include varnish

    varnish::config { "files-varnish-config":
        vcl_conf => "files.vcl",
        storage_type => "malloc",
        storage_size => "20G",
        listen_port => 80,
        ttl => 864000,
        thread_pools => 8,
        thread_min => 800,
        thread_max => 10000,
    }

    #
    # Nginx (SSL certs required)
    #
    include nginx

    nginx::config { "/etc/nginx/nginx.conf":
        worker_processes => 16,
        worker_connections => 16384,
        keepalive_timeout => 5,
    }

    nginx::vhost { "https.files.myopera.com":
        ensure => "present",
        source => "/usr/local/src/myopera/config/nginx/sites-available/https.files.myopera.com",
    }

    bash::prompt { "/root/.bashrc":
        description => "Files::Varnish",
        color => "red",
    }

    munin::plugin::custom { "cpuopera": }

    munin::plugin { "if_eth0":
        plugin_name => "if_"
    }

    munin::plugin {
        [ "mem_", "load", "df", "df_inode", "netstat", "vmstat",
          "iostat", "uptime", "threads", "open_files", "memory", "diskstats" ]:
    }
}

node default inherits varnish_box {
}

node 'my.hostname.opera.com' inherits varnish_box {
}

This manifest installs varnish, nginx, a bunch of basic packages I always want on every machine (vim, tcpdump, etc…), munin with the appropriate plugins already configured, and also a nice red bash prompt to warn me that this is production stuff.

This file is everything the puppet client needs to run and produce the desired effect, without needing a puppet master. Save it as varnish-node.pp and then you run it with:


puppet varnish-node.pp

One problem that usually arises is how to serve the static files. In this case, I assumed I'm going to check out the source code and config files from my own repository into /usr/local/src/... so I don't need to point puppet to a server with the classic:


source => "puppet:///module/filename"

but you can just use:


source => "/usr/local/whatever/in/my/local/filesystem"

That's great and it works just fine.

Custom facts

Puppet uses a utility called facter to extract "facts" from the underlying system, sysinfo-style. A typical facter run produces the following output:


$ facter
architecture => x86_64
domain => internal.opera.com
facterversion => 1.5.6
fqdn => cd01.internal.opera.com
...
hardwaremodel => x86_64
hostname => cd01
id => cosimo
ipaddress => 10.20.30.40
ipaddress_eth0 => 10.20.30.40
is_virtual => false
...
kernel => Linux
kernelmajversion => 2.6
...
operatingsystem => Ubuntu
operatingsystemrelease => 10.04
physicalprocessorcount => 1
processor0 => Intel(R) Core(TM)2 Duo CPU     E6550  @ 2.33GHz
processor1 => Intel(R) Core(TM)2 Duo CPU     E6550  @ 2.33GHz
processorcount => 2
...

and so on. Within puppet manifests, you can use any of these facts to influence the configuration of your system. For example, if memorysize > 4.0 Gb then run varnish with 2000 threads instead of 1000. This is all very cool, but sometimes you need something that facter doesn't give you by default.
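
As a rough sketch of that memorysize check (not from the original post; memorysize is a human-readable string such as "7.80 GB", and regex selectors need puppet 0.25 or newer):

# Illustrative only: pick a varnish thread limit from the memorysize fact,
# distinguishing boxes that report 4 GB or more from smaller ones
$varnish_thread_max = $memorysize ? {
    /^([4-9]|[1-9][0-9]+)\.[0-9]+ GB/ => 2000,
    default                           => 1000
}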

That's why facter can be extended.

I tried creating a datacenter.rb facter plugin that looks at the box's IP address and figures out which data center we're located in. That in turn can be used to set up the nameservers and other stuff.

Here's the code. My Ruby-fu is less than awesome:


#
# Provide an additional 'datacenter' fact
# to use in generic modules to provide datacenter
# specific settings, such as resolv.conf
#
# Cosimo, 03/Aug/2010
#

Facter.add("datacenter") do
    setcode do

        datacenter = "unknown"

        # Get current ip address from Facter's own database
        ipaddr = Facter.value(:ipaddress)

        # Data center on Mars
        if ipaddr.match(/^88\.88\.88\./)
            datacenter = "mars"

        # This one on Mercury
        elsif ipaddr.match(/^99\.99\.99\./)
            datacenter = "mercury"

        # And on Jupiter
        elsif ipaddr.match(/^77\.77\.77\./)
            datacenter = "jupiter"
        end

        datacenter
    end
end

However, there's one problem. When puppet runs, it doesn't get the new fact, even though facter from the command line can see it and execute it just fine (when the plugin is in the current directory).

Now I need to know how to inform puppet (and/or facter) that it has to look into one of my puppet modules' plugin (or lib from 0.25.x) directory to load my additional datacenter.rb fact.
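
For reference, the mechanism puppet documents for distributing module facts to clients is pluginsync; a minimal puppet.conf would look like the following, though whether it covers this setup is exactly what I'm unsure about:

[main]
    pluginsync = true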

Any ideas?

Puppet, Fabric and a Perl alternative?

Some time later this month I'm going to write more extensively about a project that I've been working on, not continuously, but for the last couple of months. It is about configuration management and deployment for small and medium scale projects.

For configuration management I evaluated several products like bcfg2, puppet, cfengine and lcfg, and I finally chose puppet.

For "the last mile", as I call it, the alternatives that I considered were fabric, capistrano, ControlTier and TheNewShinyWheel(tm).

So I settled on puppet + fabric. Puppet is a Ruby system, while Fabric is Python code. Neither of them is particularly fast; actually, Puppet is slow and Fabric is acceptable. The main problem I'm facing, after having learnt how to use these tools, is that Fabric does not support parallel processing of tasks.

This is a severe limitation for us. This was a pilot project; if it works well, it could be applied to many other deployment tasks. That could also mean a single deployment having to send code or files to tens of servers, and you don't want to do that sequentially, waiting for each task to complete.

At the moment, this is impossible to do with Fabric. There is an experimental fork in the works that might support parallel execution, by adding a @parallel task decorator, but it still requires work and a good dose of testing.

During my survey I looked for mature Perl-based deployment tools, but failed to find any. While Fabric is nice, I might be tempted to reconsider my choice. Any suggestions?