A collection of useful sysctl snippets packaged as a puppet module

Recently I learned from Marco about /etc/sysctl.d, a folder where you can drop in files instead of changing /etc/sysctl.conf directly. That gave me the idea of building a puppet module for sysctl:

https://github.com/cosimo/puppet-modules/blob/master/sysctl/README

The idea is to assemble a collection of useful sysctl snippets. I started with the usual things we use everywhere:

  • LVS Direct Routing

    # LVS directives for Direct Routing
    # http://www.linuxvirtualserver.org/VS-DRouting.html
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    
  • TCP performance tuning
    #---------------------------------------------------------------------
    # TCP/IP performance optimization settings compared to debian defaults
    #
    # from http://varnish.projects.linpro.no/wiki/Performance
    #---------------------------------------------------------------------
    
    #net.ipv4.ip_local_port_range = 32768 61000
    net.ipv4.ip_local_port_range = 1024 65536
    # net.core.rmem_max = 131071
    net.core.rmem_max = 16777216
    # net.core.wmem_max = 131071
    net.core.wmem_max = 16777216
    # net.ipv4.tcp_rmem = 4096 87380 4194304
    net.ipv4.tcp_rmem = 4096 87380 16777216
    # net.ipv4.tcp_wmem = 4096 16384 4194304
    net.ipv4.tcp_wmem = 4096 65536 16777216
    # net.ipv4.tcp_fin_timeout = 60
    net.ipv4.tcp_fin_timeout = 20
    # net.core.netdev_max_backlog = 1000
    net.core.netdev_max_backlog = 30000
    # net.ipv4.tcp_no_metrics_save = 0
    net.ipv4.tcp_no_metrics_save = 1
    # net.core.somaxconn = 128
    net.core.somaxconn = 262144
    # net.ipv4.tcp_syncookies = 0
    net.ipv4.tcp_syncookies = 1
    # net.ipv4.tcp_max_orphans = 65536
    net.ipv4.tcp_max_orphans = 262144
    # net.ipv4.tcp_max_syn_backlog = 1024
    net.ipv4.tcp_max_syn_backlog = 262144
    # net.ipv4.tcp_synack_retries = 5
    net.ipv4.tcp_synack_retries = 3
    # net.ipv4.tcp_syn_retries = 5
    net.ipv4.tcp_syn_retries = 3
    

I'm interested in both baseline settings to be applied by default everywhere (e.g. vm.swappiness = <n>), and special-purpose settings to be "attached" to server roles, like db, file servers, http servers, etc… I'd love to hear from you.
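
As an example of a baseline snippet, something like this could be dropped into the module (the value here is just a placeholder, tune it for your own workload):

# Baseline VM settings, applied everywhere by default
# (example value only -- adjust to taste)
vm.swappiness = 10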

Migration of VCL configuration from Varnish 2.0 to 2.1

Recently we migrated most of our services from Varnish 2.0 to 2.1.
I'd like to explain what we changed with code (VCL) examples side by side,
in case anyone still needs to migrate to 2.1 and needs some help as well :-)

req., bereq., beresp., and obj.

Usually this naming difference in VCL is not really explained. They say "x has been renamed to y"
and you should change the name. That's kind of annoying at first, but once you understand why the
names changed, they stick in your mind very easily.

In vcl_fetch(), obj. is now beresp.. Why?
Because vcl_fetch() is the stage where Varnish has
already performed a request against a backend and got a response from it. That means that
if you refer to obj. in vcl_fetch(), you are really
touching the backend response, hence beresp..

Similarly for vcl_pipe(), which is executed when the result of vcl_recv()
is to switch to pipe mode. In that case, however, Varnish hasn't made the request
to the backend yet, so if you used obj. in vcl_pipe() you really meant
to change the request that was going to be made to the backend, hence bereq..

Let's see the changes we had to make:

 sub vcl_fetch {
 
-    set obj.ttl = 88s;
-    set obj.grace = 10m;
-    set obj.http.X-My-Opera = "http://youtube.com/watch?v=br79xGSpgF4";
+    set beresp.ttl = 88s;
+    set beresp.grace = 60m;
+    set beresp.http.X-dramatic = "http://www.youtube.com/watch?v=a1Y73sPHKxw";
 
 }

And:

 sub vcl_pipe {
     # Streaming files too (see related vcl_recv() rule).
     # We need to close the request, or varnish remains in pipe
     # mode for the entire session with that client.
-    set req.http.connection = "close";
+    set bereq.http.connection = "close";
 }

Backend probes and .initial

Backend probing allows Varnish to detect whether backends are "healthy"
or "sick". The probe VCL config block lets you tweak how this works. In
particular, .threshold is the number of successful probes that are necessary
for Varnish to consider a backend healthy, and .interval is the number of seconds
between one probe and the next.

As an example, you can define that a backend should be considered working
(healthy) when it answers successfully to at least 3 probes, with an interval of
10 seconds between each probe. In Varnish 2.0.4, this means that if restarted,
Varnish will wait 3 times 10 = 30 seconds before serving any requests
from that backend, because all backends are considered dead (sick) at startup.

In 2.1 this limitation is removed by introducing an .initial attribute
in the probe block. .initial is the number of probes considered successful
when the service is started, or the backend is added, and there's no information about it.
The default value is assumed to be equal to .threshold, so backends are considered
healthy as soon as they are introduced.

I think you can understand from these tiny details how well Varnish is engineered.
This just makes sense, doesn't it? :-) Here's the diff from 2.0 to 2.1:

 backend nginx {
     .host = "localhost";
     .port = "8080";
-
-    # Disabled to avoid the 15s startup
-    # 2.0.4-5 doesn't have .initial
-    #
-    #.probe = {
-    #  .url = "/ping.html";
-    #    .interval = 5s;
-    #    .timeout = 1s;
-    #    .window = 5;
-    #    .threshold = 3;
-    #}
+    .probe = {
+        .url = "/ping.html";
+        .interval = 10s;
+        .timeout = 2s;
+        .window = 10;
+        .threshold = 3;
+        .initial = 3;
+    }
 }

And in vcl_recv():

 sub vcl_recv {

 [...]

-    #----------
-    # DISABLED: Only enable when .probe block above is enabled
-    #----------
     # Detect broken backends and keep serving requests if possible
-    #if (! req.backend.healthy) {
-    #    set req.grace = 10m;
-    #} else {
-    #    set req.grace = 5s;
-    #}
+    if (! req.backend.healthy) {
+        set req.grace = 60m;
+    } else {
+        set req.grace = 5s;
+    }

Regular expression matching

Another "big" difference is the use in 2.1 of a Perl-compatible regular expression engine,
(PCRE) instead of the POSIX-style regex matching that used to be in 2.0.
This is a good change for me, as I'm pretty much used to Perl regex and I know next to nothing
about POSIX.

This change actually created a subtle problem that I caught only with thorough testing
of our configurations. We use regex matching in a few places in our VCL configuration,
usually to analyze cookies and set special "flags" that are then used to force
an HTTP Vary header, to make Varnish store different cached versions of the same
URL.

One of these cases is the language cookie, where we store a sticky
user preference about site language. Here's how the code changed:

  # STD: Sticky language cookie
  if (req.http.Cookie ~ "language=") {
      set req.http.X-Language =
-         regsub(req.http.Cookie, "^.*?language=([^;]*?);*.*$", "1");
+         regsub(req.http.Cookie, "^.*?language=([^;]*);*.*$", "1");
  }

  ...

  # Mobile view cookie
  if (req.http.Cookie ~ "mobile=") {
-     set req.http.X-Mobile = 
-         regsub(req.http.Cookie, "^.*?mobile=([^;]*?);*.*$", "1");
+     set req.http.X-Mobile =
+         regsub(req.http.Cookie, "^.*?mobile=([^;]*);*.*$", "1");
  }

In case you find it difficult to spot the change, it's the removal of the *?
(non-greedy star) operator. Non-greedy matching was used in 2.0, with POSIX matching, to make
sure that the * didn't match too many characters and eat part of other cookies. Except that
POSIX regex matching does NOT have a non-greedy star operator. I just
didn't know that, and it was of course a bug, but it had worked perfectly so far… WTF???

For even more weirdness, why did I take the non-greedy star (*?) away now that it should
be supported with PCRE matching? I removed it because otherwise the result of those
regsub() expressions is always empty!

Believe it or not, it looks exactly as if 2.0 had PCRE and 2.1 had POSIX, which is
obviously not what's happening. If you know more about this and you can shed some light,
please contact me or leave a comment below.
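
My best guess at an explanation for the empty results: with a lazy quantifier, a PCRE-style engine is happy to let ([^;]*?) match the empty string, because the following ;*.*$ can consume the rest of the header on its own. The greedy [^;]* on the other hand is still bounded by the first semicolon, so it grabs exactly one cookie value and nothing more. A quick Perl sketch (with a made-up Cookie header; Perl behaves like PCRE for these constructs) shows both behaviours:

#!/usr/bin/env perl
# Compare the lazy and greedy captures on an example Cookie header
use strict;
use warnings;

my $cookie = "sid=abc123; language=it; mobile=1";

my ($lazy)   = $cookie =~ m{^.*?language=([^;]*?);*.*$};
my ($greedy) = $cookie =~ m{^.*?language=([^;]*);*.*$};

print "lazy capture:   '$lazy'\n";      # prints ''   (empty, as seen in 2.1)
print "greedy capture: '$greedy'\n";    # prints 'it'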

Hope you liked this 2.0 -> 2.1 migration journey. I'm looking forward to 2.1 -> 3.0!
It's a bit more work there, because I will need to migrate my
accept-language C extension
to the new vmod system, which I already started working on :-)

Have fun!

Fixed temporary files handling in HTTP::DAV

It's already been more than 3 years since I took over maintainership of HTTP::DAV. I've been fixing several bugs, the last one today (and 0.45 is just out on CPAN), and I have to say that it was a fantastic exercise, one that I really recommend to anyone even moderately interested in open source development and improving their own programming skills.

Here's how it works:

  • Pick a CPAN distribution that has been put up for adoption, or one that your $work depends on (my case for HTTP::DAV)
  • Contact its author or current maintainer
  • Take a look at its RT queue (usually it's something like https://rt.cpan.org/Public/Dist/Display.html?Name=Some-Dist-Name)
  • Pick whatever bug you fancy from the list
  • Write a test case for it, naming it t/RT_[ticket_number].t (see the sketch after this list)
  • Fix the bug in the code, and see your test case pass
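
As an idea of what such a test case can look like, here's a minimal skeleton (the ticket number and the checks are made up; a real RT test exercises whatever the ticket describes):

#!/usr/bin/env perl
# t/RT_12345.t - minimal skeleton for a bug-report test case
# (the ticket number and the checks below are only an example)
use strict;
use warnings;
use Test::More tests => 2;

use HTTP::DAV;

my $dav = HTTP::DAV->new();
ok($dav, 'got an HTTP::DAV object');

# Replace this with a check that reproduces the reported bug
ok($dav->can('open'), 'open() method is available');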

That's what I've been trying to do with HTTP::DAV, which back then was completely unknown code to me. I hope the results are decent. At least there hasn't been any regression reported so far… :-)

Enjoy, https://github.com/cosimo/perl5-http-dav and https://metacpan.org/module/HTTP::DAV.

Got maintainership of Bookmarks::Parser, and gave back some Opera love to it

A new project just came in last week. We need to analyze Opera desktop builds (like this one) and extract all settings from them and populate a nice database.

We can do that quite easily since Opera stores most of its settings as either .ini or bookmarks (.adr) files.

In Perl land, we can use Config::IniFiles and Bookmarks::Parser to do most of this. We had to subclass Config::IniFiles to skip the non-standard "Opera Preferences file ..." first line, while Bookmarks::Parser, which includes an Opera-specific Bookmarks::Opera class, wasn't updated with all the latest Opera-specific properties we use in our Desktop builds.

Working on our first prototype of this build cataloger tool, we "patched" Bookmarks::Opera to do what we needed, but the solution that makes the most long-term sense is to bring some love back upstream and get our patches into the CPAN version of Bookmarks::Parser. That's what makes the most sense to me anyway, so I always try to get in touch with the current author or maintainer.

That's what I did in this case too, with great results. I was given the co-maintainer bit in less than 24 hours. The rest of the story is in the GitHub repository now:

  • Fixed a couple of bad bugs in the Opera bookmarks parser
  • Added test cases for those bugs, plus a sample of our current Desktop build bookmarks file

Then, since I had a spare couple of hours during this weekend:

  • I fixed all bugs ever reported in the RT queue (2 LOL), which were more than 5 years old
  • Added some documentation love

So, now we'll just package the stock CPAN version of Bookmarks::Parser instead of maintaining our own patched version, plus whoever attempts to use it to work with Opera files will be happier.

Source code, as usual, up on https://github.com/cosimo/Bookmarks-Parser, and CPAN is now at v0.04.

Internationalization (i18n) with Mojolicious and Template Toolkit

In a previous post I talked about this new Mojolicious-based application that I've been working on, which, by the way, was rolled out to production today (yay!)

Classic I18N with TT

One of the required features of this app was "i18n", internationalization. To be less vague, the requirement was to present the UI in different languages. We're using Template Toolkit, so our templates need to have strings marked in a special way to allow translation to kick-in at run-time. Usually in TT you do this with:

<html>
 
<head>
<title>[% l('This is the title of the page') %]</title>
</head>
 
<body>
<h1>[% l('Hello, world!') %]</h1>
<p>
[% l('Some text here') %]
</p>
</body>
</html>

so all the strings that have to be translated according to the user language have to be marked up with:

[% l('<your string here>') %]

Enter Mojolicious

Mojolicious includes a built-in I18N plugin that simplifies your life by allowing the <% l('somestring') %> syntax to work. That is, it gives you an l() helper.

Helper?

A helper is a method that is available both on your controller object and within templates.

Back to Mojolicious…

In the example helper syntax I wrote <% l('somestring') %> because that's the syntax of Mojolicious' default templating system. However, under Template Toolkit, you can't use that syntax! You have to pass through an extra level, as in:

<!-- This is my TT template -->
[% c.l('<your string here>') %]

I'm not exactly sure why that c. is required, but that's how it is.
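
For the record, a helper like this can also be rolled by hand, for example to plug in a Locale::Maketext-based lexicon like the AuthOpera::Locale modules shown later in this post. Here's a rough sketch; the class name and the "language" stash key are assumptions for the example, not the exact code we run:

# Inside the application class, in startup()
# (AuthOpera::Locale is assumed to be a Locale::Maketext subclass)
$self->helper(l => sub {
    my ($c, @message) = @_;

    # Pick the language from the stash, falling back to English
    my $lang   = $c->stash('language') || 'en';
    my $handle = AuthOpera::Locale->get_handle($lang)
        or return $message[0];

    return $handle->maketext(@message);
});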

I18N workflow: extracting the strings

Everything would be fantastic, except there's one tricky problem. After you've worked so hard on your TT templates, it's time to collect all the marked-up strings, presumably to build a .PO file to be shipped to translation agencies or whatever system you're using for that. More on that later.

In the Perl world, there is an equivalent of GNU xgettext, which is xgettext.pl. This tool is part of the Locale::Maketext::Lexicon CPAN distribution, which is kind of "the standard" way to do i18n in Perl. Or at least it is for us, since we started building i18n for my.opera.com in 2008.

The tricky problem is that even though xgettext.pl understands quite a few syntax variants, it didn't understand [% c.l('string') %]. After a few Perl debugger sessions, I managed to teach Locale::Maketext::Extract::Plugin::TT2 how to parse Mojolicious-style syntax. I knew that Clinton Gormley, the maintainer of L::M::Lexicon, had a source repository for it on GitHub, so I forked his repository and pushed my changes to a dedicated branch.

CPAN, Github and the Community

This is where the Github + CPAN model really shines. You're using a CPAN module. You stumble on a problem. Fix the problem. Find its repository on Github. Fork it, push your fix, and if you're lucky, you have your fix merged and out on CPAN the same day.

This is what actually happened. Clinton got in touch the very same day I sent him the pull request and later pushed out the changes on CPAN. If you ask me, that's just awesome. I wish everything worked that way :)

Closing the i18n workflow

With the c.l() problem fixed, everything else was easier. xgettext.pl allows you to collect strings from your code and templates and build a master .PO file with all the strings. Then msgmerge, a standard GNU gettext tool, allows you to take the generated master PO file and merge it with any existing language-specific PO file. If you don't have any, just copy the master PO file (usually called POT, or reference PO file) to <language>.po and start translating.
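
Concretely, the merge step looks roughly like this (the po/ paths and file names are just an example layout):

# Merge newly extracted strings into an existing Italian PO file
msgmerge --update po/it.po po/messages.pot

# Starting a brand new language: just copy the reference file
cp po/messages.pot po/fr.po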

Last step is either:

  • compiling the .po files to .mo, a lookup-optimized form of the .po file
  • creating the "lexicon" files. In the Perl world, these are nothing more than Perl modules with a %Lexicon hash that contains all string IDs and their translations

We're long time fans of the latter approach, so our lexicon files look like this:

package AuthOpera::Locale::it;
 
use strict;
use utf8;
use base qw(AuthOpera::Locale);
 
### LEXICON STARTS HERE (don't remove this line)
our %Lexicon = (
 
    # Automatic fallback to string ID when no translation available
    _AUTO => 1,
 
    # String IDs                  # Translations
    "Application name:"        => "Nome dell'applicazione:",
    "Application registration" => "Registrazione dell' applicazione",
    "Data provider:"           => "Provider dei dati:",
 
    # ...
);
### LEXICON ENDS HERE
 
1;

and we use a simple subclass of Locale::PO to read the PO file in memory and write back a lexicon based on a fixed template, hence the ### LEXICON lines above.
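
To give an idea, a stripped-down version of that PO-to-lexicon step could look like the following sketch; the file name is an example, and the real code splices the entries between the ### LEXICON markers of an existing module instead of printing them:

#!/usr/bin/env perl
# Sketch: turn a .po file into %Lexicon entries
# (file name and output handling are simplified for the example)
use strict;
use warnings;
use Locale::PO ();

my $po_file = shift || 'it.po';
my $entries = Locale::PO->load_file_asarray($po_file);

for my $msg (@{ $entries }) {
    my $id  = Locale::PO->dequote($msg->msgid);
    my $str = Locale::PO->dequote($msg->msgstr);

    # Skip the PO header entry and untranslated strings
    next if $id eq '' or $str eq '';

    printf qq{    "%s" => "%s",\n}, $id, $str;
}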

Transifex

Currently we also use Transifex, which allows external translators to contribute to PO files directly from a web page and, if you configure it to do so, to commit straight to your source code repository. You can then trigger automated builds of the lexicon files, completing the full i18n workflow.

I find this system pretty simple but at the same time fully automated and very powerful. I'd love to hear comments or feedback about this stuff, especially from people adopting a different process.

Perl6 LWP::Simple now uses the URI module for added awesome

My little LWP::Simple module for Perl 6 is growing beyond my control. I just merged a pull request with a patch that removes all the existing URL parsing and replaces it with the new URI class.

That's fantastic, because:

  • it reduces the amount of code in LWP::Simple
  • the URI module is based on the actual grammar for URIs and IPv6 addresses. Plain and simple. It doesn't adhere to the standard. It is the standard. Isn't that amazing?

So, just clicky clicky and the pull request was merged. Running the test suite confirmed that everything still works fine, passing all tests. Can it get much better than this? :)

If you want to have a look at Perl 6 code, modules.perl6.org is the place to go for inspiration. Have fun!

Using Template Toolkit with Mojolicious

For an upcoming project, I decided to try and use Mojolicious in production. That would be the first time, so I'm quite excited to see what's going to happen.

A few days ago I wrote a sample application that just loads a basic Template Toolkit template and renders it, and benchmarked it using both:

  • mod_perl and Plack::Handler::Apache2, and
  • starman as a self-contained HTTP server running the PSGI application

I have to say that I was quite impressed with the performance level of Starman. I got 1,000+ (a thousand plus) requests per second without the server even breaking a sweat. The command line, just in case, was:

starman --workers 32 ./app.psgi

Anyway, back to using TT. I found myself searching for recipes on how to use TT with Mojolicious because there wasn't a clearly documented answer on how to do it, or at least I didn't find it. An example of what I came up with follows.

Step 1: the Mojolicious application class

First you have to create your application class. You should probably use the script that generates the basic skeleton for you. There's nice documentation on how to do that. My class looks like this:


package My::PSGI::App;

use strict;
use base 'Mojolicious';
use Opera::Config;

sub startup {
    my $self = shift;
    
    $self->secret('some-secret-random-string');

    # Our internal configuration system
    my $conf = Opera::Config->new();
    my $tmpl_dir = $conf->get('Template:include_dir');
    my $cache_dir = $conf->get('Template:cache_dir');

    # Tell Mojolicious we want to load the TT renderer plugin
    $self->plugin(tt_renderer => {
        template_options => {
            # These options are specific to TT
            INCLUDE_PATH => $tmpl_dir,
            COMPILE_DIR => $cache_dir,
            COMPILE_EXT => '.ttc',
            # ... anything else to be passed on to TT should go here
        },
    });

    $self->renderer->default_handler('tt');

    my $r = $self->routes;

    # Your routes should go here
    $r->route('/login')->to('account#login');
    # ... and so on ...

}

1;

To have your TT templates picked up, you only need a few more things.

Mojolicious::Plugin::TtRenderer

When you declare that you want to load the tt_renderer plugin (see above, $self->plugin(tt_renderer => ...)), Mojolicious will "camelize" the tt_renderer string, turn it into Mojolicious::Plugin::TtRenderer, and try to load that plugin, if available.

Turns out there was a MojoX::Renderer::TT CPAN module that also contained a class called Mojolicious::Plugin::TtRenderer. I said there was because Sebastian Riedel, the main developer of Mojolicious, had in the meantime deprecated the MojoX namespace.

Since we're building the modules we want to use in production as deb packages, we would have run the risk of packaging MojoX::Renderer::TT only to have it renamed later because of this namespace conflict. To avoid this, I decided to fork its repository and put together a patch to remove the use of the MojoX:: namespace, hoping to get the change picked up quickly by the maintainer of MojoX::Renderer::TT.

It turned out that he was super responsive (thanks Ask!), merging the change and releasing it to CPAN, so ladies and gentlemen, I hereby announce we have Mojolicious::Plugin::TtRenderer 1.20+ out!

In fact, the old deprecated MojoX:: module is still there, just don't use it, and install Mojolicious::Plugin::TtRenderer instead.

Templates naming

Another thing you need for TT to work out of the box is that your templates should(*) be named sometemplate.html.tt. (*) probably you can deviate from this convention, I just don't know yet.
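
For example, with the routes above and the default renderer search path, the login template would live under the standard templates/ directory. This layout is an assumption based on Mojolicious' conventions, not our actual tree:

templates/
    account/
        login.html.tt    # picked up for the account#login action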

Your controller should specify TT as the renderer

UPDATE: this is not needed. If you're using:

$self->renderer->default_handler('tt');

in your main application class, then you won't need to specify format and handler in every controller.

Again, not sure it's really needed (no, it's not, read above), check before you copy/paste. Here's a simple action from one of my controllers (following the previous example):


package My::PSGI::App::Account;
 
use strict;
use base 'Mojolicious::Controller';
 
sub login {
    my $self = shift;
    
    $self->render(
        template => 'path/to/template', # *without* .html.tt
        format   => 'html',
        handler  => 'tt',
    );
    
}

1;

That should be it: have fun!

EDIT: Thanks Robert for the suggestions.

How to detect TCP retransmit timeouts in your network

Some months ago, while investigating a problem in our infrastructure, I put together a small tool to help detect TCP retransmits happening during HTTP requests.

TCP retransmissions can happen, for example, when a client sends a SYN packet to the server, the server responds with a SYN-ACK but, for any reason, the client never receives the SYN-ACK. In this case, the client correctly waits for a given time, called the TCP Retransmission Timeout. In the usual case, this time is set to 3 seconds.

There's probably a million reasons why the client may never receive a SYN-ACK. The one I've seen more often is packet loss, which in turn can have lots of reasons, for example a malfunctioning or misconfigured network switch.

However, you can immediately tell if your timeout/hang problems are caused by TCP retransmission, because they tend to produce response times that cluster suspiciously around 3, 9 and 21 seconds (and so on, of course).

In fact, the TCP retransmission timeout starts at 3 seconds, but if the client tries to resend after a timeout and still receives no answer, it doubles the wait to 6 s, so the total response time will be 9 seconds, assuming that the client now finally receives the SYN-ACK. Otherwise, 3 + 6 + 12 = 21, then 3 + 6 + 12 + 24 = 45 s and so on and so forth.

So, this little tool fires a quick batch of HTTP requests to a given server and measures the response times, highlighting slow responses (> 0.5s). If you see that the reported response times are 3.002s, 9.005s or similar, then you are probably in the presence of TCP retransmission and/or packet loss.

Finally, here it is:


#!/usr/bin/env perl
#
# https://gist.github.com/1101500
#
# Fires HTTP request batches at the specified hostname
# and analyzes the response times.
#
# If you have suspicious frequency of 3.00x, 9.00x, 21.00x
# seconds, then most probably you have a problem of packet loss
# in your network.
#
# cosimo@opera.com, sometime in 2011
#

use strict;
use LWP::UserAgent ();
use Time::HiRes ();

$| = 1;

my $ua = LWP::UserAgent->new();
$ua->agent("$0/0.01");

# Tests this hostname
my $server = $ARGV[0] || die "Usage: $0 <hostname>\n";

# Picks the URLs to test in this list, one after the other
my @url_pool = qw(
	/ping.html
);

my $total_reqs = 0;
my $total_elapsed = 0.0;
my $n_pick = 0;
my $url_to_fire;

my $max_elapsed = 0.0;
my $max_elapsed_when = '';
my $failed_reqs = 0;
my $slow_responses = 0;
my $terminate_now = 0;

sub output_report {
	print "Report for:            $server at " . localtime() . "n";
	printf "Total requests:        %d in %.3f sn", $total_reqs, $total_elapsed;
	print "Failed requests:       $failed_reqsn";
	print "Slow responses (>1s):  $slow_responses (slowest $max_elapsed s at $max_elapsed_when)n";
	printf "Average response time: %.3f s (%.3f req/s)n", $total_elapsed / $total_reqs, $total_reqs / $total_elapsed;
	print "--------------------------------------------------------------------n";
	sleep 1;
	return;
}

$SIG{INT} = sub { $terminate_now = 1 };

while (not $terminate_now) {

	$url_to_fire = "http://" . $server . $url_pool[$n_pick];

	my $t0 = [ Time::HiRes::gettimeofday() ];
	my $resp = $ua->get($url_to_fire);
	my $elapsed = Time::HiRes::tv_interval($t0);

	$failed_reqs++ if ! $resp->is_success;

	$total_reqs++;
	$total_elapsed += $elapsed;

	if ($elapsed > $max_elapsed) {
		$max_elapsed = $elapsed;
		$max_elapsed_when = scalar localtime;
		printf "[SLOW] %s, %s served in %.3f sn", $max_elapsed_when, $url_to_fire, $max_elapsed;
	}

	$slow_responses++ if $elapsed >= 0.5;
	$n_pick = 0       if ++$n_pick > $#url_pool;
	output_report()   if $total_reqs > 0 and ($total_reqs % 1000 == 0);

}
continue {
    Time::HiRes::usleep(100000);
}

output_report();

# End

It's also published here on Github, https://gist.github.com/1101500. Have fun!

Capture a webcam image with a simple bash script

I did that a few months ago to make a poor man's time-lapse video. This one. I said I'd post how to do it somewhere. Here it is.

This is a bash script that will capture an image from a webcam or similar video acquisition device. It works on Linux only, I guess, and makes use of GStreamer and Video4Linux.


#!/bin/bash

# Where to store the snapshots (one subfolder per day)
DEST_DIR="$HOME/Pictures/Webcam"

if [ ! -e "$DEST_DIR" ]; then
    mkdir -p "$DEST_DIR"
fi

DAY_DIR="$DEST_DIR/$(date +'%Y%m%d')"
if [ ! -e "$DAY_DIR" ]; then
    mkdir "$DAY_DIR"
fi

# Grab a single frame from the webcam and save it as JPEG
PIC_FILE="$DAY_DIR/webcam-$(date +'%H%M%S').jpg"
gst-launch v4l2src num-buffers=1 ! jpegenc ! filesink location="$PIC_FILE"

Save as ~/bin/camera-snapshot, and put it in crontab every minute if you want :)
After a while you will have a folder filled with pictures.

If you also want to assemble them into a movie file, use the excellent ffmpeg.
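
For example, with a reasonably recent ffmpeg, something along these lines should turn a day's worth of snapshots into a time-lapse (the frame rate and codec are just an example, adjust to taste):

# Assemble the JPEG snapshots of one day into a time-lapse video
cd ~/Pictures/Webcam/20110409
ffmpeg -framerate 10 -pattern_type glob -i 'webcam-*.jpg' -c:v libx264 timelapse.mp4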

How to tag a remote git repository or… vcs support for fabric

With svn, you can tag a remote repository with:

svn cp http://{your-svn-server}/svn/{project}/trunk http://{your-svn-server}/svn/{project}/tags/{tag-name}

or if you're already in a working copy:

svn cp ^/{project}/trunk ^/{project}/tags/{tag-name}

The latter case assumes you have a working copy already checked out, but the first case is more interesting for what I needed.

Tagging when deploying

Lately I've been working on some deployment tools in the form of a few fabric classes. One of the things I want to do when launching a production deployment is auto-tagging the repository with the new build name.

The tag naming I went for is something like:

<project_name>-<date>-<time>-<who_deployed>

Example:

geodns-20110409-133701-cosimo

Every time there's a new production deployment using these tools, the
repository / revision that is being deployed is tagged with names like those.
The plan is to use this added metadata for a "deployment console", but I haven't
had time to do anything about it yet.

vcs.py

Having planned the move from svn to git, I had to add a thin abstraction to the
fabric deployment classes to make sure that when the repository URL changed from
svn to git, nothing really changed from the deployment point of view.

I ended up with a generic vcs.py class for fabric that implements vcs-related actions such as:

  • exporting a remote repository to a local directory
  • listing available tags on a remote repository
  • tagging a remote repository

This means I had to find out how to do these things in both svn and git.

Exporting a remote repository

With svn:

svn export [--force] http://svn.server/project/trunk /your/local/dir

and you can use --force if the local directory already exists, or svn will refuse to do it by default.

Git requires an intermediate step:

git archive --prefix=some-dir-name/ --remote=git.server:/var/git/project.git master | tar xvC /path/where/to/export

Listing available tags (remotely)

With svn:

svn list http://svn.server/project/tags/

With git:

git ls-remote --tags git.server:/var/git/project.git

Thanks to my colleague Alfie for the ls-remote tip.

Tagging a remote URL

I mentioned how you do it with svn:

svn cp http://svn.server/project/trunk http://svn.server/project/tags/tagname

What about git though? I searched a bit around, and I found no git command to
directly tag a remote repository.

I looked at the Jenkins git plugin source code but AFAICS there's no magical way to do it, so I figured I would just clone the remote repository, tag locally and then push the tag to origin.

In theory, this should be just fine, except it has some drawbacks:

  • Execution time: if the remote repository is very large, we need to clone it first, and that can take a long time.
  • Size: when cloning a large git repository, the local copy will take up disk space for nothing. We don't need it, as we just want to tag the remote repository.

Not sure this is the best thing to do, but what I'm using right now is:

  • Cloning with --depth=1:

    git clone has a --depth option that limits the amount of history that is cloned. In this case, we don't need any history, so --depth=1 is great:

    git clone --depth=1 <git-remote-url> <local-dir>

    Example:

    git clone --depth=1 git.server:/var/git/project.git /var/tmp/deploy.$USER.$$
  • Tagging locally:
    cd /var/tmp/deploy.$USER.$$
    git tag -as <tag-name>
    
  • Pushing the tag remotely:
    git push origin --tags
  • Removing the temporary local copy:
    rm -rf /var/tmp/deploy.$USER.$$

That's it. Not very brilliant, but works great for now. If you know of a better way to tag a remote git repository, or some existing work on these things, please get in touch or add a comment below. Thanks! :)