Another round of fixes for the varnish Accept-Language VCL extension

This very specific VCL extension to parse and normalize Accept-Language headers is becoming more robust as time goes by. Here's the latest round of fixes:

  • the original req.http.Accept-Language header could be overwritten when calling the vcl_rewrite_accept_language() function. This is now fixed by copying the original header string into a static buffer and running the processing on the copy.
  • improved the style of the C code a bit in a few places. Nothing miraculous, really. It feels improved to me at least :)
  • fixed the use of a wrong define (the string max length instead of the languages list max length)
  • used sizeof instead of repeating the same constants in strncpy() calls, etc…
  • added a small intro to the generated file too. This way, if you find the code on some server, you can immediately understand what it's supposed to do and where it came from :)

Enough blah blah, here's the new code. I won't tell you this is experimental stuff, because at this point it's not experimental anymore. So if you were already running this piece of VCL, by all means upgrade: it's definitely better :)

http://github.com/cosimo/varnish-accept-language
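
If you're wondering what the normalization amounts to, here's a rough Perl sketch of the idea. This is not the C code from the repository, just an illustration with a made-up best_language() helper: pick the client's highest-q language that the backend supports, falling back to a default.

use strict;
use warnings;

sub best_language {
    my ($header, @supported) = @_;
    my %supported = map { $_ => 1 } @supported;
    my @prefs;
    for my $part (split /\s*,\s*/, lc $header) {
        my ($lang, $q) = $part =~ /^([a-z-]+)(?:\s*;\s*q=([0-9.]+))?/;
        next unless defined $lang;
        push @prefs, [ $lang, defined $q ? $q : 1 ];
    }
    # Highest q-value first; strip the region (en-gb => en) before matching
    for my $pref (sort { $b->[1] <=> $a->[1] } @prefs) {
        my ($base) = $pref->[0] =~ /^([a-z]+)/;
        return $base if $supported{$base};
    }
    return $supported[0];   # default to the first supported language
}

# Prints "en": en-gb (q=1) isn't supported as-is, but its base language is
print best_language('en-GB,en;q=0.8,no;q=0.6', qw(no en)), "\n";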

Enjoy!

Adding the IRC NOTICE capability to Bot::BasicBot

Bot::BasicBot is a Perl module that provides a really easy, fast and convenient way to build plugin-based IRC bots. I'm playing around with an IRC bot that should assist in continuously deploying projects.

This bot has two main functionalities:

  • keep track of continuous integration builds
  • initiate and keep track of deployments

Right now the bot reads a main configuration file with data about projects, repositories, continuous integration, etc… and answers commands. This is an example:


21:58 <@cosimo> projects-list
21:58 < deployer> auth, geodns, libopera, link, myopera, sso
21:58 <@cosimo> build-status geodns
21:58 < deployer> 97ad24e success cosimo https://git.server/functests/builds/geodns/97ad24e
21:58 <@cosimo> latest-revision sso
21:58 < deployer> 5207cfe, https://git.server/?p=sso.git;a=commit;h=fe977d32e9580551dffe8139396106ba25207cfe
21:59 <@cosimo> build-status auth
21:59 < deployer> 24135 success cosimo https://test.server/functests/builds/auth-unit/24135
21:59 < deployer> 24135.2 success (manual) https://test.server/functests/builds/auth-functional/24135.2

Another functionality of the bot is to detect new builds and automatically send updates to a given channel, stating the project, the new VCS revision, the committer, and a link to the continuous integration test run. Example:


17:22 -deployer:#chan- sso, fe977d3 success cosimo https://test.server/functests/builds/opera-sso/fe977d3

In the future, I also want to be able to tell the bot to initiate deployments. Anyway, the problem was that Bot::BasicBot apparently lacked support for sending IRC notices, which caused all the bot messages to interrupt the flow of IRC conversations. Bot::BasicBot's source code is also on GitHub, so I just forked it and added support for IRC notices. I just noticed that the author has already pulled in the changes. o/
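
If you're curious, here's roughly what it looks like in use. This is just a minimal sketch (the server, channel and command handling are made up); notice() mirroring say()'s arguments is how the addition works, if I remember correctly:

package DeployerBot;
use strict;
use warnings;
use base 'Bot::BasicBot';

# Answer commands addressed to the bot in the channel
sub said {
    my ($self, $msg) = @_;
    return 'auth, geodns, libopera, link, myopera, sso'
        if $msg->{body} eq 'projects-list';
    return;
}

# Announce a new build without interrupting the conversation:
# notice() takes the same arguments as say(), but sends an IRC NOTICE
sub announce_build {
    my ($self, $line) = @_;
    $self->notice(channel => '#chan', body => $line);
}

package main;

DeployerBot->new(
    server   => 'irc.example.com',
    channels => ['#chan'],
    nick     => 'deployer',
)->run();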

It will still take years for this to land in Debian, but still… :-)

My first article on Dev Opera!

This week the DevRel team published my first article on dev.opera.com!

For me this is really great. I remember looking at Dev Opera back in 2006, before joining Opera, when I first considered sending in a CV. It was full of really high-level technical articles (back then!), and I remember thinking that I'd never be able to have my name on an article there.

Time flies; fast-forward 4 years, and it's there! I still think Dev Opera articles are high-level on average nowadays. Not sure about mine though :-), but if you're interested in OAuth, go read it, and be sure to post feedback either here or in the article discussion forum. I have tried to avoid going into the gory details. You can find them in the RFC anyway. Have fun!

From disaster to stability: the scaling challenges of My Opera (Surge 2010)

I just read John Allspaw's blog post about his talk at Surge 2010. John's talks were among the best of the conference IMO. So I was amazed to read his post.

I was really honored to take part in this conference as a speaker. Just before my proposal was accepted, I had bought the book Web Operations. When the Surge speakers list was announced, I was thrilled to discover that many contributors to the book, including Allspaw, were also speaking at Surge, and even more honored to have the privilege of being in the same group and getting to know them in person.

If you're interested in the talk I presented: while the OmniTI folks get the videos and slides up on the Surge website, you can download my slides from Slideshare.

I hope the video is delayed, so I can avoid the embarrassment for a little while :-)

Surge 2010 scalability conference in Baltimore, USA – DAY 2

This is a summary of day 2 of the Surge conference that took place in Baltimore, USA, on the 30th of September and 1st of October 2010. For a quite comprehensive blog post about day 1, you can read my previous post.

Here comes the list of talks I attended during Day 2.

Bryan Cantrill – failures in commodity hardware

What happens when commodity hardware is used in an "enterprise" hardware project? Bryan guided the audience through such an industrial hardware project. There was no recorded video of this talk, due to the content being potentially "sensitive". Very interesting talk, and Bryan is IMO a very good speaker.

Benjamin Black – FastIP

Benjamin presented a (for me) new way to analyze the metrics of a network, called "flow". Flow-based network metrics can represent network activity in a way that is completely different from, and much more accurate than, what's usually done by operations and sysadmin departments. The downside is that it generates a lot of data. The advantage is that you can analyze, and possibly even replay, any traffic that took place between any two nodes of the network. I'm not sure I understood that last part correctly, because it would be amazing.

There are products out there that offer flow-based network analysis: Cisco NetFlow, ntop nProbe, etc… There's also an IETF working group about flow. We couldn't see any example/demo because there was a problem with the slides, IIRC.

FastIP also offers a related service. I contacted Benjamin about this after his talk. Maybe we'll be able to try something out or at least have a demonstration.

Gavin Roy – Scaling MyYearBook.com

One of the most interesting talks of the conference, IMO. MyYearbook is a Postgres shop, and among the top 25 most trafficked sites in the USA.

Gavin talked about the many things they did to scale the site as traffic grew. Here's some of what I remember:

  • DB connection pooling was very important for them; it made a world of difference. They use PgBouncer and pgpool-II
  • DB horizontal scaling with PL/Proxy (TODO: look it up)
  • DB replication with Londiste, Slony, Bucardo
  • PostgreSQL 9.0-based standbys to increase read-only capacity, and for hot standby
  • Partitioned the database by table, a feature available since Pg 8.1

They have a primary-to-secondary master failover procedure. They looked into automating it, but a human judgement call is really necessary in case something goes wrong, so they are keeping it manual. This came from a question I asked Gavin, since we've thought about automating our failover procedure for MySQL, but it's not so easy to decide when to trigger the failover…

For user storage, they use the Isilon IQ series, apparently a FreeBSD-based appliance with on-board NFS. For DB servers, they looked at different solutions, but they keep coming back to direct attached storage. Their main db server is a massively powerful machine: IIRC, 512 GB of RAM and 128 cores. I have to double-check this because it seems really impressive.

John Allspaw – Go or No Go

Another great talk by John, well presented and with great content. Not easy to summarize. The main topic was the "go or no-go meeting", a 10-minute get-together of all the involved parties before releasing changes or launching any new feature live.

This meeting basically consists of Yes/No questions:

  • Have you tested enough to deploy? Is QA still needed?
  • Has the feature been communicated (blog/forum/…)?
  • Does everyone know when it will go live, and who will push the feature?
  • Has the feature been in production for staff (or beta users)? That can be tricky to implement if the new feature implies social interactions (a beta user tagging a non-beta user)
  • Is it possible to dark launch this feature? Will we?
  • Is it possible to turn on this feature for a % of users? Will we?
  • Does it involve new infrastructure? If so: is there monitoring in place? (BLOCKER)
  • Is an on/off switch in the code/config in place? Is it documented?
  • Are all the relevant people available for communication and launch?
  • Is there a place for users to provide feedback about the feature?
  • Has a post-launch "it's all done" time been agreed on?
  • Is the contingency checklist done, and has everyone reviewed it? (BLOCKER)

The contingency checklist should answer the questions "What could possibly go wrong? What will we do about it?", with a list of potential issues and how to solve them in case shit hits the fan.

Apart from the go/no-go meeting, which, also judging from my past experience, would be a great way to avoid problems, there are at least a couple more really nice ideas to keep in mind when developing or launching a new feature:

  • "Dark launches": a dark launch is essentially a full launch of the new feature, but done in a way that is invisible to users. If you're making db queries and processing stuff, you keep doing all of that; you just throw the data away. You get to see the (almost) full impact of the new feature on your application, and can compensate accordingly.
  • Feature "sampling" (% of users): you enable the full feature for a small, and then growing, percentage of your user base. You can gradually grow to 100% and measure the effect of the changes as you go (see the sketch right after this list).
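
Here's a minimal Perl sketch of both ideas, with everything hypothetical (the feature name, the String::CRC32 choice as a stable hash, the expensive_new_queries() stand-in). The two points are that each user gets a consistent on/off decision, and that a dark launch does the work but discards the result:

use strict;
use warnings;
use String::CRC32 ();                 # any stable hash function will do

my %rollout_percent = (
    new_profile_page => 5,            # start small, grow towards 100
);

# Stand-in for the real feature's db queries and processing
sub expensive_new_queries { return { user => $_[0] } }

sub feature_enabled {
    my ($feature, $user_id) = @_;
    my $pct = $rollout_percent{$feature} || 0;
    # Map the user to a stable bucket 0..99, so the decision
    # doesn't flip from one request to the next
    my $bucket = String::CRC32::crc32("$feature:$user_id") % 100;
    return $bucket < $pct;
}

my $user_id = 42;
if (feature_enabled('new_profile_page', $user_id)) {
    # Dark launch: run the full new code path...
    my $data = expensive_new_queries($user_id);
    # ...then throw $data away and serve the old page as usual
}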

Great stuff.

Neil Gunther – Quantifying scalability

Here I was a bit too excited, with my own talk coming up next, so unfortunately I didn't pay much attention. The talk was a full analysis of scalability seen as a mathematical function: the capacity of your system as the load increases.
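
For context, Gunther is the author of the Universal Scalability Law, and I assume that's what the talk revolved around: relative capacity is modeled as C(N) = N / (1 + σ(N-1) + κN(N-1)), where N is the load (users, processes or nodes), σ measures contention for shared resources and κ the coherency (crosstalk) penalty. Fitting σ and κ to measured throughput data tells you where your capacity curve will peak.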

Cosimo Streppone – Scaling challenges of my.opera.com

I used 5 minutes to show a live demo of the My Opera realtime monitor application that we built. Afterwards I got very interesting questions, and also some nice Twitter messages about it.

I also talked about how we've experimented with distributing requests across different datacenters with our little geodns tool.

All in all, it was a fantastic experience for me. Practice will make me better, so I look forward to the next time :-)

Baron Schwartz – Scaling without sharding

Baron works for Percona. I had read some of his talks before, and I think he's a really good speaker. He explained in detail the scenarios that arise when dealing with database scaling: the typical characteristics of reads and of writes, single-server vs. multiple-server deployments, and so on.

Basically, the talk suggests that very few situations actually require you to shard your database. Single-server setups can go very far by optimizing the way the db works. Quote: "Sharding should be your last resort". Shard only when write demand exceeds write capacity; as long as you can, avoid it by buffering/collating writes, deferring update work, and so on.
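
As a toy illustration of "buffer/collate writes" (my example, not Baron's; the table, DSN and credentials are made up): instead of one UPDATE per page hit, accumulate counters in memory and flush them periodically.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:database=app', 'user', 'password',
                       { RaiseError => 1 });

my %pending;                       # page id => hits since last flush
sub record_hit { $pending{ $_[0] }++ }

sub flush_hits {
    my $sth = $dbh->prepare(
        'UPDATE page_stats SET hits = hits + ? WHERE page_id = ?'
    );
    $sth->execute($pending{$_}, $_) for keys %pending;
    %pending = ();                 # start a new batch
}

record_hit($_) for qw(home home home about);
flush_hits();   # 2 UPDATEs instead of 4, and the ratio improves with traffic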

Closing day 2

Theo Schlossnagle closed the conference with a plenary keynote, a semi-serious "brief history of computing". Much fun, and a goodbye until next year's Surge.

For a glimpse of what happened live at the conference, you can also check out the Twitter stream for #surgecon.

Definitely a great conference. Stay tuned for videos and slides on the official site, http://omniti.com/surge/2010.

Surge 2010 scalability conference in Baltimore, USA – DAY 1

This was the first year of the Surge conference, which took place in Baltimore, USA, organized by OmniTI.

30-second summary (TL;DR)

The conference was amazing. Main topic was scalability. Met a lot of people. 2 days, 2 tracks and 20+ speakers. Several interesting new products and technologies to evaluate.

The long story

The conference topics were scalability, databases and web operations. It took place over two days filled with high-level talks about experiences, failures, and advice on scaling web sites.

The only downside is that, being there alone, I had to miss half of the talks :). The good thing is that all videos and slides will be up on the conference website Soon™.

Lots of things to be mentioned but I'll try to summarize what happened in Day 1.

John Allspaw – Web Engineering

First keynote session by John Allspaw, formerly at Flickr, now at Etsy.com.

Summary: Web engineering (aka Web Operations) is still a young field. We must set out to achieve much higher goals, be more scientific. We don't need to invent anything. We should be able to get inspiration and prior art from other fields like aerospace, civil engineering, etc…

He had lots of examples in his slides. I want to go through this talk again. Really inspiring.

Theo Schlossnagle – Scalable Design Patterns

Theo's message was clear: tools can work no matter the technology. Bend technologies to your needs. You don't need the shiniest/awesomest/webscalest. Monitoring is key. Tie metrics to your business, and be relevant to your business people.

Ronald Bradford – Most common MySQL scalability mistakes

If you're starting out with MySQL, or don't have much experience with it, then you definitely want to listen to Ronald's talk. It will save you a few years of frustration. :)

Companion website: monitoring-mysql.com.

Ruslan Belkin – Scaling LinkedIn

Ruslan was very well prepared and technical, but maybe I expected a slightly different type of content. I must read the slides again when they're up. LinkedIn is mostly ("99%") Java, and uses Lucene as its main search tier. Very interesting: they mentioned that since 2005-2006 they have been using several specific services (friends, groups, profiles, etc…) instead of one big database. This allows them to scale better and more predictably.

They also seem to use a really vast array of different technologies, like Voldemort, and many others whose names I don't remember right now.

Robert Treat – Database scalability patterns

Robert is no doubt a very experienced DBA. He talked about the different types of MySQL configurations available to developers who need to scale their apps, explaining each one and providing examples: horizontal/vertical partitioning, horizontal/vertical scaling, etc…

I was late for this talk so I only got the final part.

Tom Cook – A day in the life of Facebook operations

I listened to the first 10-15 minutes of this talk, and had the impression that this was probably the 3rd time I'd heard the same talk: how big Facebook is, upload numbers, status updates, etc…, without going into specific details. All of this is of course very impressive, but it's the low-level stuff that's more interesting, at least for me.

The last time I attended this talk was at FOSDEM in Brussels. I was a bit disappointed, so I left early. According to some later tweets, the last part was the most interesting. I'll have to go back to this one and watch the video. Well… at least I got to listen to the last part of…

Artur Bergman – Scaling Wikia

Lots of Varnish knowledge (and more) in this talk!

I had read some earlier talks by Artur, always about Varnish, and I have learnt a lot from him. I strongly suggest going through his talks if you're interested in Varnish.

They "abused" the Urchin tracker (Google Analytics) JavaScript code to measure their own statistics about server errors and client-side page loading times. Another cool trick is the use of a custom, made-up X-Vary-URL HTTP header to keep all the URLs related to a single wiki page (view/edit/etc…) in one Varnish hash slot. This way, a single purge command gets rid of all the pages linked to the same content.

They use SSDs extensively. A typical Wikia server (Varnish and/or DB) has:

  • 2 × 6-core Westmere processors
  • 6 × Intel X25 SSDs (~$2000)
  • 2 × spinning drives for transaction logs (db)

"SSDs allow you to do JOINs with no performance degradation."

Peak speeds reached (and this is random, not sequential: amazing!):

  • 500 MB/s random reads, with an average latency of 0.2 ms
  • 220 MB/s random writes

They run their own CDN based on Dynect (a Dyn Inc. service, I think; see below). They still use Akamai for a minor part of their static content.

Wikia is looking into using Riak, and a Riak-based filesystem to hook up directly to Varnish for really fast file serving.

Mike Malone – SimpleGeo

SimpleGeo implemented a geographic database on top of Apache Cassandra, able to answer spatial queries. They looked into using PostGIS (the Postgres-based GIS extension, a very common product), but it wasn't as flexible as they needed (I don't remember exactly why).

TODO: look into "distributed indexes over a DHT", which he indicated as prior art for their system.
This talk was a bit complicated for me to follow, so I'll have to watch it again.

Closing day 1

At the end of the day there was a SQL vs NoSQL panel, which I skipped entirely. Maybe it was interesting :). The after-hours event that closed day 1 was organized by Dyn Inc., and it was fantastic: lots of good beer, martinis, and good food. I went to bed early, since I was still jetlagged. Day 2 started at 9 AM.

Time for a break :)

And then on to Day 2:

http://my.opera.com/cstrep/blog/2010/10/07/surge-2010-scalability-conference-in-baltimore-usa-day-2

Survival guide to UTF-8

Please, future me, and please, you cool programmer who, one way or another, one day or the other, has struggled to understand UTF-8 (in Perl or not), do yourself a really big favor and read the following links:

After reading these few articles, you will be a much better human being. I promise. In the meantime, Perl programmer, remember that:

  • use utf8; is only for the source code, not for the encoding of your data. Let's say you define a scalar variable like:
    
    my $username = 'ネオ';
    

    Ok. Now, whether or not you have use utf8 in your script, there will be no difference whatsoever in the actual content of that scalar variable. Exactly: no difference. Except one. The variable itself (the $username "box") will be flagged as containing UTF-8 characters (if you used utf8, of course). Clear, right?

  • For the rest, open your filehandles declaring the encoding (open my $fh, '<:utf8', $file;), or explicitly use Encode::encode_utf8($data) / Encode::decode_utf8($data). There's a small example right after this list.
  • You can make sure the strings you define in your source code are UTF-8 encoded by opening and writing your source file with an editor that supports UTF-8; vim, for example, has the :set encoding=utf8 command.
  • Also, make sure your terminal, if you're using one, is set to UTF-8 encoding, otherwise you will see gibberish instead of your beloved Unicode characters. You can do that with any terminal on this planet, bar the Windows cmd.exe shell… If anyone knows how to, please tell me.
  • And finally, use a font with Unicode characters in it, like Bitstream Vera Sans Mono (the default Linux font), Envy Code R, plain Courier, etc…, or you will just see the broken-UTF-8-character-of-doom. Yes, this one → � :-)
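
To make the first two points concrete, here's a tiny script (the file name is made up; utf8::is_utf8() just peeks at the flag):

use strict;
use warnings;
use utf8;   # this source file itself is UTF-8 encoded

my $username = 'ネオ';
# Thanks to "use utf8", the literal was decoded into characters,
# and the variable carries the UTF8 flag:
print utf8::is_utf8($username) ? "flag on\n" : "flag off\n";

# Declare the encoding on the filehandle and perl encodes on write...
open my $out, '>:utf8', 'greeting.txt' or die $!;
print {$out} "hello, $username\n";
close $out;

# ...and decodes on read
open my $in, '<:utf8', 'greeting.txt' or die $!;
my $line = <$in>;
print length($line), "\n";   # 10 characters, not the 14 bytes on disk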

There's an additional problem when you need to feed strings to a Digest module like Digest::SHA1 to obtain a hash. The SHA1 algorithm, like MD5 and the others, doesn't really work on Unicode characters or UTF-8 encoded characters; it only works on bytes, or octets.

So, if you try something like:


use utf8;
use Digest::SHA1;

my $string = "ログインメールアドレス";
my $sha1 = Digest::SHA1->new();
$sha1->add($string);

print $sha1->hexdigest();

it will fail miserably (Wide character in subroutine entry at line 6), because $string is marked as containing "wide" characters. It must first be turned into octets, by doing:


use utf8;
use Encode;
use Digest::SHA1;

my $string = "ログインメールアドレス";
my $sha1 = Digest::SHA1->new();
$sha1->add( Encode::encode_utf8($string) );

print $sha1->hexdigest();

I need to remind myself all the time that:

  • Encode::encode_utf8($string) wants a string of Unicode characters, and will give you back that string converted to UTF-8 octets, with the UTF8 flag *turned off*. Basically bytes. You can then do anything with them: print them, put them in a file, calculate a hash, etc…
  • Encode::decode_utf8($octets) wants a string of (possibly UTF-8) octets, and will give you back a string of Unicode characters, with the UTF8 flag *turned on*. So, for example, lowercasing (lc) an "Å" will correctly result in an "å" character. There's a quick round-trip example right below.
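
Here's the quick round-trip (the "\xc3\x85" literal is the UTF-8 encoding of "Å" written out as bytes):

use strict;
use warnings;
use Encode qw(encode_utf8 decode_utf8);

binmode STDOUT, ':utf8';            # so we can print characters directly

my $octets = "\xc3\x85";            # two bytes, UTF8 flag off
my $chars  = decode_utf8($octets);  # one character, "Å", flag on
print lc($chars), "\n";             # prints "å", as promised above

my $back = encode_utf8($chars);     # two octets again, flag off
print length($chars), ' ', length($back), "\n";   # prints "1 2"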

So, there you go! Now you are a 1st level UTF-8 wizard. Go and do your UTF-8 magic!

Epilogue: now I'm sure: in a couple of weeks I will come back to this post, and think that I still don't understand how UTF-8 works in Perl… :-)

Another Ubiquity for Opera update, DuckDuckGo search

My small Ubiquity for Opera experiment gets another quick update.

This time I added one of my favorite search engines, DuckDuckGo. Despite being a young project, I think it's really interesting, and its results are highly relevant and up-to-date. I like it! Plus, it's a Perl project.

So that's it, I just added the duckduckgo command.

This new version also fixes an annoying problem with a couple of Google-related search commands, which were showing just 1 result instead of the default 20. There's so much more that could be improved, but I rarely find the time to work on it…

As always, the updated code is available on the Ubiquity for Opera project page, where you will also find the minified version (~40 KB instead of ~70).

Enjoy!

Running Ubiquity on Google Chrome

It's been a while since I started working on Ubiquity for Opera, my limited, but to me totally awesome, port of Mozilla's Ubiquity project (originally Firefox-only) to the Opera browser.

Several people asked me, through my blog or by email, to write a version for Google Chrome. And, by popular demand, here it is! To my surprise, it took much less work than I had originally thought. I did hit a few small problems, though: the event handlers (Chrome's model differs from Opera's UserJS), style attributes on dynamically created elements, and other minor things.

It still lacks Ajax/XMLHttpRequest support, but that shouldn't be a huge problem.

I uploaded it to userscripts.org too. You can see it here: http://userscripts.org/scripts/show/85550.

The code, as usual, is up on GitHub:

http://github.com/cosimo/ubiquity-chrome/.

If you try it, I would be interested to know what you think. Have fun!

If Text::Hunspell never worked for you, now it’s time to try it again!

In case you don't know, Hunspell is the spell checker engine of OpenOffice.org, and it's also included in the Opera and Mozilla browsers.

We were trying to use it from Perl through the old Text::Hunspell module, version 1.3, but we had problems with it. Big problems, like segfaults and tests that wouldn't run.

A bright hacker from Italy :) was then called in to fix the problem, with the promise of a fantastic prize he hasn't seen yet… [ping?] :-)

During the process, I found out that I know absolutely nothing about dictionary files and the like, and my fixes were, I would say, definitely horrible.

But! There's a bright side, of course: the module now works just fine, at least on Debian/Ubuntu systems. Before using Text::Hunspell, you want to install the following packages:

  • hunspell
  • libhunspell-dev

The example in the POD documentation (and in the examples dir) uses the standard US English dictionary. If you don't have it, you will need to change the script slightly. The code is tested, though, and should work without problems. If you try it out and have feedback, by all means let me know. Thanks!
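
Here's what basic usage looks like, more or less as in the POD example. The dictionary paths are my assumption for a Debian/Ubuntu system with the hunspell-en-us package installed; adjust them for your setup:

use strict;
use warnings;
use Text::Hunspell;

my $speller = Text::Hunspell->new(
    '/usr/share/hunspell/en_US.aff',    # affix file
    '/usr/share/hunspell/en_US.dic',    # dictionary file
) or die 'Cannot create speller object';

for my $word (qw(hello helllo)) {
    if ($speller->check($word)) {
        print "$word: spelled correctly\n";
    }
    else {
        print "$word: misspelled. Suggestions: ",
              join(', ', $speller->suggest($word)), "\n";
    }
}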

Source code available on GitHub at:
http://github.com/cosimo/perl5-text-hunspell/

The module, tagged as 2.00 because it's cool :), will be up on CPAN shortly at this address:
http://search.cpan.org/dist/Text-Hunspell/