This was the first year of the Surge conference, held in Baltimore, USA, and organized by OmniTI.
30-second summary (TL;DR)
The conference was amazing. The main topic was scalability. I met a lot of people. Two days, two tracks, and 20+ speakers, plus several interesting new products and technologies to evaluate.
The long story
The conference topics were scalability, databases and web operations. It took place over two days filled with high-level talks about experiences, failures, and advice on scaling web sites.
The only downside is that, with two parallel tracks and being there alone, I had to miss half of the talks :). The good news is that all videos and slides will be up on the conference website Soon™
There's a lot to mention, but I'll try to summarize what happened on Day 1.
John Allspaw – Web Engineering
The first keynote session was by John Allspaw, formerly at Flickr, now at Etsy.com.
Summary: web engineering (aka web operations) is still a young field. We must set out to achieve much higher goals and be more scientific. We don't need to invent anything: we can draw inspiration and prior art from other fields, like aerospace, civil engineering, etc…
He had lots of examples in his slides. I want to go through this talk again. Really inspiring.
Theo Schlossnagle – Scalable Design Patterns
Theo's message was clear: tools can work no matter what the technology, so bend technologies to your needs. You don't need the shiniest/awesomest/webscalest. Monitoring is key: tie metrics to your business, and stay relevant to your business people.
Ronald Bradford – Most common MySQL scalability mistakes
If you're starting out with MySQL, or don't have much experience with it, then you definitely want to listen to Ronald's talk. It will save you a few years of frustration. :)
Companion website: monitoring-mysql.com.
Ruslan Belkin – Scaling LinkedIn
Ruslan was very prepared and technical, but maybe I expected a slightly different type of content; I'll have to read the slides again when they're up. LinkedIn is mostly ("99%") Java and uses Lucene as its main search tier. Very interesting: they mentioned that since 2005-2006 they have been using several specific services (friends, groups, profiles, etc…) instead of one big database. This allows them to scale better and more predictably.
They also seem to use a really vast array of different technologies, like Voldemort and many others whose names I can't remember right now.
Robert Treat – Database scalability patterns
Robert is without a doubt a very experienced DBA. He talked about all the different types of MySQL configurations available to developers who need to scale their apps, explaining each and providing examples: horizontal/vertical partitioning, horizontal/vertical scaling, etc…
I was late for this talk so I only got the final part.
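As a quick illustration of one of those patterns (horizontal partitioning, aka sharding; this is my own minimal sketch, not an example from the talk): rows are split across several database servers by hashing the partition key, so each server holds only a slice of the data.

```python
import hashlib

# Hypothetical shard hosts, for illustration only.
SHARDS = ["db1.example.com", "db2.example.com",
          "db3.example.com", "db4.example.com"]

def shard_for(user_id):
    # A stable hash (not Python's process-randomized hash()) guarantees
    # the same user always routes to the same shard across restarts.
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for this user then goes to one known server:
print(shard_for(42))
```

The catch, and the reason this counts as a scaling pattern rather than a free lunch: cross-shard queries and re-sharding when you add servers both become application-level problems.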
Tom Cook – A day in the life of Facebook operations
I listened to the first 10-15 minutes of this talk and had the impression that this was probably the third time I'd heard the same talk: how big Facebook is, upload numbers, status updates, etc…, without going into specific details. That is of course very impressive, but the low-level stuff is what's more interesting, at least for me.
The last time I attended this talk was at FOSDEM in Brussels. I was a bit disappointed, so I left early. According to some later tweets, the last part was the most interesting, so I'll have to go back and watch the video. Well… at least I got to listen to the last part of…
Artur Bergman – Scaling Wikia
Lots of Varnish knowledge (and more) in this talk!
I had read some of Artur's earlier talks (always about Varnish), and I have learned a lot from him. I strongly suggest going through his talks if you're interested in Varnish.
They "abused" the Urchin tracker (Google Analytics) JavaScript code to measure their own statistics about server errors and client-side page-loading times. Another cool trick is the use of a custom, made-up X-Vary-URL HTTP header to keep all linked URLs (view/edit/etc… of a single wiki page) in one Varnish hash slot. That way, a single purge command gets rid of all relevant pages linked to the same content.
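To make the idea concrete, here is a toy cache in Python (my own illustration, not Wikia's setup and not real Varnish VCL; only the X-Vary-URL header name comes from the talk): every response tagged with the same canonical URL lands in one slot, so a single purge evicts all variants at once.

```python
class SharedSlotCache:
    """Toy cache illustrating the X-Vary-URL trick: all variants of a
    wiki page (view, edit, history, ...) share one slot, so one purge
    removes them all."""

    def __init__(self):
        self._slots = {}  # canonical key -> {request URL: body}

    def store(self, url, body, headers):
        # The backend tags each response with the page's canonical URL.
        key = headers.get("X-Vary-URL", url)
        self._slots.setdefault(key, {})[url] = body

    def lookup(self, url):
        for variants in self._slots.values():
            if url in variants:
                return variants[url]
        return None

    def purge(self, key):
        # One purge drops every variant stored under the canonical key.
        return self._slots.pop(key, None)

cache = SharedSlotCache()
cache.store("/wiki/Foo", "view html", {"X-Vary-URL": "/wiki/Foo"})
cache.store("/wiki/Foo?action=edit", "edit html", {"X-Vary-URL": "/wiki/Foo"})
cache.purge("/wiki/Foo")  # both entries are evicted together
```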
They use SSDs extensively. A typical Wikia server (Varnish and/or DB) has:
- 2 x 6-core Westmere processors
- 6 x Intel X25 SSDs (~ $2000)
- 2 x spinning drives for transaction logs (DB)
"SSDs allow you to do JOINs with no performance degradation."
Peak speeds reached (random, not sequential: amazing!):
- 500 MB/s random reads with an average latency of 0.2 ms
- 220 MB/s random writes
They run their own CDN based on Dynect (a Dyn Inc. service, I think; see below). They still use Akamai for a minor part of their static content.
Wikia is looking into using Riak, and a Riak-based filesystem to hook up directly to Varnish for really fast file serving.
Mike Malone – SimpleGeo
SimpleGeo implemented a geographic database on top of Apache Cassandra, able to answer spatial queries. They looked into using PostGIS (the Postgres-based GIS database, a very common product), but it wasn't as flexible as they needed (I don't remember exactly why).
TODO: look into "distributed indexes over DHTs". He cited this as prior art for their system.
This talk was a bit complicated for me to follow, so I'll have to watch it again.
Closing day 1
At the end of the day there was a SQL vs. NoSQL panel, which I skipped entirely. Maybe it was interesting :) The after-hours event that closed Day 1 was organized by Dyn Inc., and it was fantastic: lots of good beer, martinis, and good food. I went to bed early, since I was still jet-lagged. Day 2 started at 9 AM.
Time for a break :)
And then on to Day 2:
http://my.opera.com/cstrep/blog/2010/10/07/surge-2010-scalability-conference-in-baltimore-usa-day-2