Tag Archives: development

How to profile Python/Django applications

Using the django-profiler module available at https://github.com/char0n/django-profiler

from profiling import profile

and later:

@profile(stats=True)
def view_to_be_profiled(self, request):
    ...

There are a couple of settings you can enable to log SQL queries and tweak the logger name:

PROFILING_SQL_QUERIES = True
PROFILING_LOGGER_NAME = 'profiler'
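
To actually see the profiler output, the logger named above needs a handler in Django's LOGGING settings. A minimal sketch, assuming django-profiler emits its output through Python's standard logging under that logger name:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # must match PROFILING_LOGGER_NAME
        'profiler': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}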

Just a reminder for myself. Nothing more.

Using hypnotoad in production, anyone?

So, you're using hypnotoad in production. And it works perfectly for you. Maybe you have Nginx or Apache in front of it, configured as a reverse proxy. Everything's great. Right? Right. Then I have a zillion questions for you.

Maybe I don't understand how it works, but I'm having the following problems:

  • "sometimes" hypnotoad won't stop. I usually try to stop it with:
    hypnotoad --stop /path/to/my/script
  • I use symlinks to deploy applications, so for example I deploy in /opt/myapp and each new deployment gets a timestamped folder, /opt/myapp/releases/20120224-180801.

    Then there's a symlink that always points to the last deployed version: /opt/myapp/current -> /opt/myapp/releases/{whatever-datetime}. Now, using hypnotoad --stop /opt/myapp/current doesn't work, because hypnotoad probably uses the actual filename, not the symlink, to identify the running application.

    That's fine, but then how can I stop it reliably? I wish it had a hypnotoad --force-stop mode or something.

  • Last problem: when I push a new deployment and stop and restart hypnotoad, the application often doesn't work properly; it just generates exceptions for unknown reasons. Stopping and restarting it again manually usually fixes the problem…
  • I was a bit frustrated today, so I decided to switch back to starman. I have never ever had a problem with it, so I will stick to it for now. But I would still be interested to know whether you use hypnotoad in production and how well it works. Write in the small box below, you don't need to register. Thanks :)

Calling all Mojolicious users: patches welcome?

So you're using Mojolicious. Good. We started using it too, and it's great. We've also started accumulating patches that haven't been integrated into the mainline.

We're starting to reach a critical mass and I have been considering the idea of starting our own Mojolicious "branch". I'd like to know how many of you are in the same situation, and issue a call to action:

if you need or have needed patches to Mojolicious that for whatever reason were not integrated into the official repository, please contact me, leave a comment here or send me an email. I'd like to hear from you!

Internationalization (i18n) with Mojolicious and Template Toolkit

In a previous post I talked about this new Mojolicious-based application that I've been working on, which, by the way, was rolled out to production today (yay!).

Classic I18N with TT

One of the required features of this app was "i18n", internationalization. To be less vague, the requirement was to present the UI in different languages. We're using Template Toolkit, so our templates need to have strings marked in a special way to allow translation to kick in at run-time. Usually in TT you do this with:

<html>
 
<head>
<title>[% l('This is the title of the page') %]</title>
</head>
 
<body>
<h1>[% l('Hello, world!') %]</h1>
<p>
[% l('Some text here') %]
</p>
</body>
</html>

so all the strings that have to be translated according to the user's language have to be marked up with:

[% l('<your string here>') %]

Enter Mojolicious

Mojolicious includes a built-in I18N plugin that simplifies your life by allowing the <% l('somestring') %> syntax to work. That is, it gives you an l() helper.

Helper?

A helper is a method that is available both on your controller object and within your templates.
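
Registering your own helper is a one-liner in the application's startup(). This is just a generic sketch to show the mechanism, not the actual I18N plugin code; the shout() name is made up:

sub startup {
    my $self = shift;

    # Callable as $c->shout(...) in controllers and,
    # with the TT renderer, as c.shout(...) in templates
    $self->helper(shout => sub {
        my ($c, $string) = @_;
        return uc $string;
    });
}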

Back to Mojolicious…

In the example helper syntax I wrote <% l('somestring') %> because that's the syntax of Mojolicious' default templating system. However, under Template Toolkit, you can't use that syntax! You have to go through an extra level, as in:

<!-- This is my TT template -->
[% c.l('<your string here>') %]

I'm not exactly sure why that c. is required, but that's how it is.

I18N workflow: extracting the strings

Everything would be fantastic, except there's one tricky problem. After you've worked so hard on your TT templates, it's time to collect all the marked-up strings, presumably to build a .PO file to be shipped to translation agencies or whatever system you're using for that. More on that later.

In the Perl world, there is an equivalent of GNU xgettext: xgettext.pl. This tool is part of the Locale::Maketext::Lexicon CPAN distribution, which is more or less "the standard" way to do i18n in Perl. Or at least it is for us here, since we started building i18n for my.opera.com in 2008.

The tricky problem is that even though xgettext.pl understands quite a few syntax variants, it didn't understand [% c.l('string') %]. After a few Perl debugger sessions, I managed to teach Locale::Maketext::Extract::Plugin::TT2 how to parse Mojolicious-style syntax. I knew that Clinton Gormley, the maintainer of L::M::Lexicon had a source repository for it on Github, so I forked his repository and pushed my changes on a dedicated branch.

CPAN, Github and the Community

This is where the Github + CPAN model really shines. You're using a CPAN module. You stumble on a problem. Fix the problem. Find its repository on Github. Fork it, push your fix, and if you're lucky, you have your fix merged and out on CPAN the same day.

This is what actually happened. Clinton got in touch the very same day I sent him the pull request and later pushed out the changes on CPAN. If you ask me, that's just awesome. I wish everything worked that way :)

Closing the i18n workflow

With the c.l() problem fixed, everything else was easier. xgettext.pl allows you to collect strings from your code and templates and build a master .PO file with all the strings. Then msgmerge, a standard GNU gettext tool, lets you take the generated master PO file and merge it into any existing language-specific PO files. If you don't have any, just copy the master PO file (usually called the POT, or reference PO file) to <language>.po and start translating.
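
In practice, the extraction and merge steps look roughly like this; file names and paths here are made up, check xgettext.pl --help for the exact options:

# collect all translatable strings from the templates into a master POT file
xgettext.pl -o po/messages.pot templates/*.tt

# merge new and changed strings into an existing Italian PO file
msgmerge --update po/it.po po/messages.pot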

Last step is either:

  • compiling the .po files to .mo, a lookup-optimized form of the .po file
  • creating the "lexicon" files. In the Perl world, these are nothing more than Perl modules with a %Lexicon hash that contains all string IDs and their translations

We're long time fans of the latter approach, so our lexicon files look like this:

package AuthOpera::Locale::it;
 
use strict;
use utf8;
use base qw(AuthOpera::Locale);
 
### LEXICON STARTS HERE (don't remove this line)
our %Lexicon = (
 
    # Automatic fallback to string ID when no translation available
    _AUTO => 1,
 
    # String IDs                  # Translations
    "Application name:"        => "Nome dell'applicazione:",
    "Application registration" => "Registrazione dell' applicazione",
    "Data provider:"           => "Provider dei dati:",
 
    # ...
);
### LEXICON ENDS HERE
 
1;

and we use a simple subclass of Locale::PO to read the PO file in memory and write back a lexicon based on a fixed template, hence the ### LEXICON lines above.
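
Our actual subclass is a bit more involved, but the core of it boils down to something like this rough sketch (file name made up, error handling omitted):

use strict;
use warnings;
use Locale::PO ();

# Load all entries from the translated PO file
my $entries = Locale::PO->load_file_asarray('po/it.po');

for my $po (@{ $entries }) {
    my $id  = $po->msgid;
    my $str = $po->msgstr;
    next unless defined $id && defined $str;

    # msgid/msgstr come back wrapped in double quotes
    for ($id, $str) {
        s/^"//;
        s/"$//;
    }

    next if $id  eq '';   # skip the PO header entry
    next if $str eq '';   # untranslated: let _AUTO fall back to the string ID

    print qq{    "$id" => "$str",\n};
}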

Transifex

Currently we also use Transifex, which allows external translators to contribute to PO files directly from a web page and, if you configure it to do so, commit straight to your source code repository. You can then trigger automated builds of the lexicon files, completing the full i18n workflow.

I find this system pretty simple but at the same time fully automated and very powerful. I'd love to hear comments or feedback about this stuff, especially from people adopting a different process.

Using Template Toolkit with Mojolicious

For an upcoming project, I decided to try and use Mojolicious in production. That would be the first time, so I'm quite excited to see what's going to happen.

A few days ago I wrote a sample application that just loads a basic Template Toolkit template and renders it, and I benchmarked it using both:

  • mod_perl with Plack::Handler::Apache2, and
  • starman as a self-contained HTTP server running the PSGI application

I have to say that I was quite impressed with the performance level of Starman. I got 1,000+ (a thousand plus) requests per second without the server even breaking a sweat. The command line, just in case, was:

starman --workers 32 ./app.psgi

Anyway, back to using TT. I found myself searching for recipes on how to use TT with Mojolicious because there wasn't a clear documented answer on how to do it, or at least I didn't find it. An example of what I came up with follows.

Step 1: the Mojolicious application class

First you have to create your application class. You should probably use the script that generates the basic skeleton for you. There's nice documentation on how to do that. My class looks like this:


package My::PSGI::App;

use strict;
use base 'Mojolicious';
use Opera::Config;

sub startup {
    my $self = shift;
    
    $self->secret('some-secret-random-string');

    # Our internal configuration system
    my $conf = Opera::Config->new();
    my $tmpl_dir = $conf->get('Template:include_dir');
    my $cache_dir = $conf->get('Template:cache_dir');

    # Tell Mojolicious we want to load the TT renderer plugin
    $self->plugin(tt_renderer => {
        template_options => {
            # These options are specific to TT
            INCLUDE_PATH => $tmpl_dir,
            COMPILE_DIR => $cache_dir,
            COMPILE_EXT => '.ttc',
            # ... anything else to be passed on to TT should go here
        },
    });

    $self->renderer->default_handler('tt');

    my $r = $self->routes;

    # Your routes should go here
    $r->route('/login')->to('account#login');
    # ... and so on ...

}

1;

To have your TT templates picked up, you only need a few more things.

Mojolicious::Plugin::TtRenderer

When you declare that you want to load the tt_renderer plugin (see above, $self->plugin(tt_renderer=>...)), then Mojolicious will "camelize" the tt_renderer string, turn it into Mojolicious::Plugin::TtRenderer, and try to load that plugin, if available.

Turns out there was a MojoX::Renderer::TT CPAN module that also contained a class called Mojolicious::Plugin::TtRenderer. I said there was because Sebastian Riedel, the main developer of Mojolicious, had in the meantime deprecated the MojoX namespace.

Since we're packaging the modules we want to use in production as deb packages, we would have run the risk of packaging MojoX::Renderer::TT only to have it renamed later because of this namespace deprecation. To avoid that, I forked its repository and put together a patch to remove the use of the MojoX:: namespace, hoping it would be picked up quickly by the maintainer of MojoX::Renderer::TT.

It turned out that he was super responsive (thanks Ask!), merging the change and releasing it to CPAN, so ladies and gentlemen, I hereby announce that Mojolicious::Plugin::TtRenderer 1.20+ is out!

In fact, the old deprecated MojoX:: module is still there, just don't use it, and install Mojolicious::Plugin::TtRenderer instead.

Templates naming

Another thing you need for TT to work out of the box is that your templates should(*) be named sometemplate.html.tt. (*) You can probably deviate from this convention; I just haven't tried yet.

Your controller should specify TT as the renderer

UPDATE: this is not needed. If you're using:

$self->renderer->default_handler('tt');

in your main application class, then you won't need to specify format and handler in every controller.

Again, not sure it's really needed (no, it's not, read the update above); check before you copy/paste. Here's a simple action from one of my controllers (following the previous example):


package My::PSGI::App::Account;
 
use strict;
use base 'Mojolicious::Controller';
 
sub login {
    my $self = shift;
    
    $self->render(
        template => 'path/to/template', # *without* .html.tt
        format   => 'html',
        handler  => 'tt',
    );
    
}

1;

That should be it: have fun!

EDIT: Thanks Robert for the suggestions.

How to tag a remote git repository or… vcs support for fabric

With svn, you can tag a remote repository with:

svn cp http://{your-svn-server}/svn/{project}/trunk http://{your-svn-server}/svn/{project}/tags/{tag-name}

or if you're already in a working copy:

svn cp ^/{project}/trunk ^/{project}/tags/{tag-name}

The latter case assumes you have a working copy already checked out, but the first case is more interesting for what I needed.

Tagging when deploying

Lately I've been working on some deployment tools in the form of a few fabric classes. One of the things I want to do when launching a production deployment is auto-tagging the repository with the new build name.

The tag naming I went for is something like:

<project_name>-<date>-<time>-<who_deployed>

Example:

geodns-20110409-133701-cosimo

Every time there's a new production deployment using these tools, the repository/revision being deployed is tagged with a name like those. The plan is to use this added metadata for a "deployment console", but I haven't had time to do anything about it yet.

vcs.py

Having planned the move from svn to git, I had to add a thin abstraction to the fabric deployment classes to make sure that when the repository URL changed from svn to git, nothing really changed from the deployment point of view.

I ended up with a generic vcs.py class for fabric that implements vcs-related actions such as:

  • exporting a remote repository to a local directory
  • listing available tags on a remote repository
  • tagging a remote repository

This means I had to find out how to do these things in both svn and git.
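
The individual svn and git commands are covered in the next sections; the class itself is little more than a thin wrapper around them. A simplified sketch of the shape it takes (class and method names here are made up, fabric 1.x's local() assumed):

from fabric.api import local

class Svn(object):
    def __init__(self, url):
        self.url = url

    def list_tags(self):
        # tags are plain directories under tags/
        return local("svn list %s/tags/" % self.url, capture=True)

    def export(self, dest):
        local("svn export --force %s/trunk %s" % (self.url, dest))

class Git(object):
    def __init__(self, url):
        self.url = url

    def list_tags(self):
        return local("git ls-remote --tags %s" % self.url, capture=True)

    def export(self, dest):
        local("git archive --prefix=export/ --remote=%s master"
              " | tar x -C %s" % (self.url, dest))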

Exporting a remote repository

With svn:

svn export [--force] http://svn.server/project/trunk /your/local/dir

and you can use --force if the local directory already exists, or svn will refuse to do it by default.

Git requires an intermediate step:

git archive --prefix=some-dir-name/ --remote=git.server:/var/git/project.git master | tar xvC /path/where/to/export

Listing available tags (remotely)

With svn:

svn list http://svn.server/project/tags/

With git:

git ls-remote --tags git.server:/var/git/project.git

Thanks to my colleague Alfie for the ls-remote tip.

Tagging a remote URL

I mentioned how you do it with svn:

svn cp http://svn.server/project/trunk http://svn.server/project/tags/tagname

What about git though? I searched around a bit, and found no git command to directly tag a remote repository.

I looked at the Jenkins git plugin source code, but AFAICS there's no magical way to do it, so I figured I would just clone the remote repository, tag locally, and then push the tag to origin.

In theory, this should be just fine, except it has some drawbacks:

  • Execution time: if the remote repository is very large, we need to clone it first, and that can take a long time.
  • Size: when cloning a large git repository, the local copy will take up disk space for nothing. We don't need it, as we just want to tag the remote repository.

Not sure this is the best thing to do, but what I'm using right now is:

  • Cloning with --depth=1:

    git clone has a --depth option that limits the amount of history that is cloned. In this case, we don't need any history, so --depth=1 is great:

    git clone --depth=1 <git-remote-url> <local-dir>

    Example:

    git clone --depth=1 git.server:/var/git/project.git /var/tmp/deploy.$USER.$$
  • Tagging locally:
    cd /var/tmp/deploy.$USER.$$
    git tag -as <tag-name>
    
  • Pushing the tag remotely:
    git push origin --tags
  • Removing the temporary local copy:
    rm -rf /var/tmp/deploy.$USER.$$

That's it. Not very brilliant, but works great for now. If you know of a better way to tag a remote git repository, or some existing work on these things, please get in touch or add a comment below. Thanks! :)

Ubuntu 10.10, modperl and Apache segfaulting fixed

Last month, before moving to Melbourne, where I am now, to work in the Opera Australia office for a few months, I had to set up a laptop for all the development work I normally do. So I chose Ubuntu 10.10 amd64. I have to say I'm quite happy with it. Everything works out of the box for me, including a Quickcam 9000 USB camera I used to shoot this poor time-lapse video from my new office window. Woot!

Anyway, the development environment for one particular project consists of Apache and mod_perl. So I set up the usual list of dependencies, but when I tried to start Apache to run the test suite, it would always stop right away with a segmentation fault.

I didn't really dig into the problem. I just straced the Apache process, and this is what I got:

[apache starts up, reads a bunch of Perl modules, and opens the access log...]
...
brk(0x7f342adbd000)                     = 0x7f342adbd000
...
...
brk(0x7f342adde000)                     = 0x7f342adde000
brk(0x7f342adff000)                     = 0x7f342adff000
brk(0x7f342ae20000)                     = 0x7f342ae20000
brk(0x7f342ae41000)                     = 0x7f342ae41000
stat("/usr/lib/perl5/auto/DBI/DESTROY.al", 0x7f341e8459b0) = -1 ENOENT (No such file or directory)
stat("/home/cosimo/src/auth-svn/lib/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/home/cosimo/src/myopera-trunk/lib/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/etc/perl/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/local/lib/perl/5.10.1/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/local/share/perl/5.10.1/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/lib/perl5/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/share/perl5/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/lib/perl/5.10/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/share/perl/5.10/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/usr/local/lib/site_perl/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("./auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
stat("/var/tmp/test_cosimo_22931/auto/DBI/DESTROY.al", 0x7fffc4e2d520) = -1 ENOENT (No such file or directory)
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

I thought it would be wiser to ask for advice on the DBI and mod_perl mailing lists. Tim Bunce suggested trying to get a stack trace of Apache. Why didn't I think of that in the first place? A few days later, I got my stack trace:

# gdb -c ./core /usr/sbin/apache2 
... 
Reading symbols from ... 
... 
Core was generated by `/usr/sbin/apache2 -d /var/tmp/test_cosimo_9727 -k start -C User cosimo -C Group ...'.

Program terminated with signal 11, Segmentation fault. 
#0 0x00007fdaedfed858 in XS_Class__XSAccessor_END () 
from /usr/lib/perl5/auto/Class/XSAccessor/XSAccessor.so 
(gdb) backtrace 
#0 0x00007fdaedfed858 in XS_Class__XSAccessor_END () 
from /usr/lib/perl5/auto/Class/XSAccessor/XSAccessor.so 
#1 0x00007fdaf83cf845 in Perl_pp_entersub () from /usr/lib/libperl.so.5.10 
#2 0x00007fdaf83752c6 in Perl_call_sv () from /usr/lib/libperl.so.5.10 
#3 0x00007fdaf86ad40b in modperl_perl_call_list () 
from /usr/lib/apache2/modules/mod_perl.so 
#4 0x00007fdaf86b5786 in modperl_perl_destruct () 
from /usr/lib/apache2/modules/mod_perl.so 
#5 0x00007fdaf86a6256 in modperl_interp_destroy () 
from /usr/lib/apache2/modules/mod_perl.so 
#6 0x00007fdaf86a6715 in modperl_tipool_destroy () 
from /usr/lib/apache2/modules/mod_perl.so 
#7 0x00007fdaf86a62b2 in modperl_interp_pool_destroy () 
from /usr/lib/apache2/modules/mod_perl.so 
#8 0x00007fdaf98fd4e3 in ?? () from /usr/lib/libapr-1.so.0 
#9 0x00007fdaf98fc3b1 in apr_pool_destroy () from /usr/lib/libapr-1.so.0 
#10 0x00007fdaf98fc27f in apr_pool_clear () from /usr/lib/libapr-1.so.0 
#11 0x00007fdafa1b960d in main (argc=11, argv=0x7fff93b50ef8) 
at /build/buildd/apache2-2.2.16/server/main.c:692

Even if you don't know anything about stack traces, this output gently points at Class::XSAccessor. Perrin Harkins on the mod_perl list suggested updating Class::XSAccessor to the latest CPAN version, since its changelog mentioned some segmentation faults fixed in 0.10.

And that did it. No more segfaults on Ubuntu 10.10. Solution: upgrade Class::XSAccessor to 0.10+. Thanks to Class::XSAccessor maintainer(s)!
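
If you want to check which version you're running before and after upgrading, a quick one-liner does it:

perl -MClass::XSAccessor -le 'print Class::XSAccessor->VERSION'

# then upgrade from CPAN if needed
cpan Class::XSAccessor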

Puppet external nodes classifier script in Perl

The upgrade to puppet 2.6.2 worked out fine. Coming from 0.24.5, I noticed a really welcome speed improvement. However, I had a tricky problem.

While upgrading to 2.6, I also decided to switch to an external nodes classifier script. If you don't know about it, it's a nice puppet feature that I had planned to use from the start. It allows you to write a small script, in any language you want, that basically tells the puppetmaster, given a hostname, what you want that machine to be.
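
For reference, wiring the classifier script into the puppetmaster is just a couple of lines in puppet.conf; the path to the script here is made up:

[master]
    node_terminus  = exec
    external_nodes = /etc/puppet/node_classifier.pl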

Puppet calls your script with one argument that is the hostname of the machine that is asking for its catalog of resources. In your script, you have to output something like the following:

---
classes:
  - perl
  - apache
  - ntp
  - munin
environment: production
parameters:
  puppet_server: my.puppetmaster.com

You can specify all the "classes" of that machine, so basically what you want puppet to install (or repair) on that machine. So far so good. My classifier script looks into some preset directories for project-specific JSON files, and then checks whether any of these JSON files contains the name of the machine that puppet is asking for. Code follows:

#!/usr/bin/env perl
#
# http://docs.puppetlabs.com/guides/external_nodes.html
#

use strict;
use warnings;
use File::Basename ();
use File::Slurp ();
use JSON::XS ();
use YAML ();

our $nodes_dir = '/etc/puppet/manifests/nodes';

our %default_params = (
    puppet_server => 'my.puppetmaster.com',
);

# ...
# A few very simple subs
# omitted for brevity
# ...

# The hostname puppet asks for
my $wanted = $ARGV[0];

# The data structure found in the JSON file
my $node_info = search_into_json_files_and_find($wanted);

my $puppet_classifier_info = {
    classes => $node_info->{puppet_classes},
    environment => 'production',
    parameters => \%default_params,
};

print YAML->Dump($puppet_classifier_info);

Now, I don't know if you can immediately spot the problem here, but I didn't, so I wasted a good part of an afternoon chasing a bug I didn't even know existed. The resulting YAML (puppet wants YAML) was this one:

--- YAML 
--- 
classes: 
    - geodns::production::backend 
environment: production 
name: z01-06-02 
parameters: 
    puppet_server: z01-06-02

The problem is that this looks innocent and valid, and in fact it is valid, but it's two YAML documents, not one. So puppet parses only the first document, the lone --- YAML line, and ignores the rest.

And why is that happening in the first place? Because of the YAML->Dump() call I wrote, instead of the correct YAML::Dump()… With the arrow syntax, the class name "YAML" is passed to Dump() as its first argument, and since YAML::Dump() turns every argument into its own document, that stray string becomes the first document in the stream. Eh :) So the correct code is:

print YAML::Dump($puppet_classifier_info);

Never use YAML->Something()

Upgrade of puppet from 0.24.5 to 2.6.2: “Got nil value for content”

Something I should have probably done a while ago: upgrading our puppet clients and master from 0.24.5 (current Debian stable) to 2.6.2 (current backports). I've been advised to go all the way to 2.6.4, which contains an important security fix. I'll probably do that soon.

So, the first mistake I made was upgrading one client before the puppetmaster. In my mental model it's always OK to upgrade the server (the puppetmaster) first, but I remember being kind of surprised when I learned that puppet wants the clients upgraded first, so that was the bit I thought I remembered well: first upgrade the clients, then the puppetmaster.

So I did, and that didn't work at all. The client couldn't retrieve the catalog. I discovered I had to update the puppetmaster first. Another very common thing with puppet: if something doesn't work, wipe out the SSL certificates. Almost every puppet user, maybe inexperienced like me, will tell you to remove the entire /var/lib/puppet/ssl directory and restart your client. Good. So I did.

After fiddling for a while with client- and server-side SSL certificates, removing them, and assorted --waitforcert and puppetca invocations, I was finally able to connect the client to the puppetmaster and have them talk to each other correctly, downloading the manifests and applying the changes. However, one problem remained…

Digression: custom facter plugins

I wrote a specific facter plugin that provides a datacenter fact. Just as an example, it works like this:

#
# Provide an additional 'datacenter' fact
# to use in generic modules to provide datacenter
# specific settings, such as resolv.conf
#
# Cosimo, 03/Aug/2010
#
# $Id: datacenter.rb 15240 2011-01-03 14:27:44Z cosimo $

Facter.add("datacenter") do
    setcode do

        datacenter = "unknown"

        # Get current ip address from Facter's own database
        ipaddr = Facter.value(:ipaddress)

        if ipaddr.match("^12.34.56.")
            datacenter = "dc1"
        elsif ipaddr.match("^34.56.78.")
            datacenter = "dc2"
        elsif ipaddr.match("^56.78.12.")
            datacenter = "dc3"
        # etc ...
        # etc ...
        end

        datacenter
    end
end
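
To try a custom fact like this outside of puppet, you can point facter at the directory containing it through the FACTERLIB environment variable (the module path here is made up):

FACTERLIB=/etc/puppet/modules/mymodule/plugins/facter facter datacenter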

This allows for very cool things, like specific DNS resolver settings by datacenter:

# Data-center based settings
case $datacenter {
    "dc1" :  { include opera::datacenters::dc1 }
    "dc2" :  { include opera::datacenters::dc2 }
    "dc3" :  { include opera::datacenters::dc3 }
    # ...
    default: { include opera::datacenters::dc1 }
}

where each opera::datacenters::dcN class contains all the datacenter-dependent settings. In my case, just the resolver for now.

class opera::datacenters::dc1 {
    resolver::dns { "dc1-ns":
        domain => "opera.com",
        nameservers => [ "1.2.3.4", "5.6.7.8" ],
    }
}

resolver::dns is in turn a small define in a resolver module I wrote to generate resolv.conf contents from a small template. It's so small I can copy/paste it here:

# "resolver/manifests/init.pp"
define resolver::dns ($nameservers, $search="", $domain="") {
    file { "/etc/resolv.conf":
        ensure => "present",
        owner  => "root",
        group  => "root",
        mode   => 644,
        content => template("resolver/resolv.conf.erb"),
    }
}

# "resolver/templates/resolv.conf.erb"
# File managed by puppet <%= puppetversion %> on <%= fqdn %>
# Data center: <%= name %>
<% if domain != "" %>domain <%= domain %>
<% end %>
<% if search != "" %>search <%= search %>
<% end %>
<% nameservers.each do |ns| %>nameserver <%= ns %>
<% end %>
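
Put together, on a dc1 machine the generated /etc/resolv.conf would look something like this (puppet version and hostname are made-up values):

# File managed by puppet 2.6.2 on host1.dc1.example.com
# Data center: dc1-ns
domain opera.com
nameserver 1.2.3.4
nameserver 5.6.7.8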

One problem remained…

Let's get back to the remaining problem. Every client run was spitting this error string:

info: Retrieving plugin
err: /File[/var/lib/puppet/lib]: Could not evaluate: Got nil value for content

After a lot of searching and reading through similar (but not quite identical) messages, I found this thread where one guy was having a very similar problem, but with a different filename, /root/.gitconfig, which he had obviously specified in his manifests.

My problem happened with /var/lib/puppet/lib, which is never specified in any of my manifests. But the symlink bit got me thinking. At some point, to get the custom facter plugin working, and having read about the differences between older and 2.6.x versions of puppet, I had put it into my module's lib/facter folder, but also created a symlink called plugins (required by the new puppet). Doing so, I thought, would prevent problems: the new puppet could read the file from the new folder, while the older version could read it from "lib".

Well, it turns out that puppet cannot read custom facter plugins from a plugins folder that is a symlink. Removing the symlink and making {module}/plugins/facter a real directory solved that problem too.
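
For the record, the fix boiled down to something like this (module path made up):

cd /etc/puppet/modules/mymodule
rm plugins                        # it was a symlink pointing to lib/
mkdir -p plugins/facter
cp lib/facter/datacenter.rb plugins/facter/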

I really hope to save someone else the time I spent on this. Cheers :)

Facter ported to Perl 6

A few days ago I wrote about my fun experiment trying to port facter to rakudo Perl 6. I said it was "almost completely functional".

Well, it seems I was a bit optimistic :) It took me a few more nights of hacking, but now it really is almost completely functional :) It basically runs, and I have ported a few facts from Facter's original Ruby code too, like the kernel fact, the physicalprocessorcount fact, and other simple ones.

What's missing to declare the experiment successful is the implementation of confines. A confine, in facter speak, is a restriction that applies to a fact. For example, the physicalprocessorcount fact reads some files from /proc, which is only available on Linux. So, in this case, the confine rule for physicalprocessorcount is that the kernel fact must have "Linux" as its value. In code that becomes:

Facter.add("physicalprocessorcount", sub ($f) {
    $f.confine("kernel" => "Linux");
    $f.setcode(block => sub {
        Facter::Util::Resolution.exec('grep "physical id" /proc/cpuinfo|cut -d: -f 2|sort -u|wc -l');
    });
});

which is pretty similar to the Ruby counterpart:

Facter.add("physicalprocessorcount") do
    confine :kernel => :linux

    setcode do
        ppcount = Facter::Util::Resolution.exec('grep "physical id" /proc/cpuinfo|cut -d: -f 2|sort -u|wc -l')
    end
end

The Ruby version is still more elegant, but all in all I'm very happy with the outcome so far. It could probably be improved a lot too. Perl 6 is awesome. Get the code from http://github.com/cosimo/perl6-facter/ and feel free to ping me or comment if you want to know more.