This is a summary of day 2 of the Surge conference that took place in Baltimore, USA, on the 30th of September and 1st of October 2010. For a fairly comprehensive blog post about day 1, you can read my previous post.
Here is the list of talks I attended during day 2.
Bryan Cantrill – failures in commodity hardware
What happens when commodity hardware is used in an "enterprise" hardware project? Bryan guided the audience through such an industrial hardware project. There was no recorded video of this talk, as the content was potentially "sensitive". Very interesting talk, and Bryan is IMO a very good speaker.
Benjamin Black – FastIP
Benjamin presented a – for me – new way to analyze network metrics, based on "flows". Flow-based network metrics can represent network activity in a way that is completely different and much more accurate than what's usually done by operations and sysadmin departments. The downside is that it generates a lot of data. The advantage is that you can analyze, and even replay(?), any traffic that took place between any two nodes of the network. I'm sure I didn't understand that correctly, because it would be amazing.
There are products out there that offer flow-based network analysis: Cisco NetFlow, ntop's nProbe, etc… There's also an IETF working group about flows (IPFIX, I believe). We couldn't see any example/demo because there was a problem with the slides, IIRC.
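To make the idea a bit more concrete, here's a tiny sketch of my own (not from the talk) of what a flow record is: packets sharing the same 5-tuple (source/destination IP, source/destination port, protocol) are aggregated into a single record with packet/byte counters and timestamps. NetFlow/IPFIX probes do essentially this, at line rate and with many more fields:

```python
from collections import namedtuple

# A "flow" is traffic aggregated by the classic 5-tuple key.
FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class Flow:
    def __init__(self, first_ts):
        self.packets = 0
        self.bytes = 0
        self.first_seen = first_ts
        self.last_seen = first_ts

    def add_packet(self, ts, size):
        self.packets += 1
        self.bytes += size
        self.last_seen = ts

def aggregate(packets):
    """packets: iterable of (timestamp, src_ip, dst_ip, src_port, dst_port, proto, size)."""
    flows = {}
    for ts, src, dst, sport, dport, proto, size in packets:
        key = FlowKey(src, dst, sport, dport, proto)
        flow = flows.get(key)
        if flow is None:
            flow = flows[key] = Flow(ts)
        flow.add_packet(ts, size)
    return flows

if __name__ == "__main__":
    sample = [
        (0.00, "10.0.0.1", "10.0.0.2", 54321, 80, "tcp", 60),
        (0.01, "10.0.0.1", "10.0.0.2", 54321, 80, "tcp", 1500),
        (0.02, "10.0.0.2", "10.0.0.1", 80, 54321, "tcp", 1500),
    ]
    for key, flow in aggregate(sample).items():
        print(key, flow.packets, "pkts", flow.bytes, "bytes")
```

As far as I understood it, keeping these per-flow records (rather than just per-interface counters) is what lets you reconstruct who talked to whom, when, and how much.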
FastIP also offers a related service. I contacted Benjamin about this after his talk. Maybe we'll be able to try something out or at least have a demonstration.
My TODO list:
- "Lookup Carnegie Mellon SILK project" and "NetSA group"
- Read an interesting paper I managed to find searching around, http://www.bivio.net/public_pdfs/Bivio_AB_NFA.pdf
Gavin Roy – Scaling MyYearBook.com
One of the most interesting talks of the conference, IMO. MyYearbook is a Postgres shop, and among the top 25 most-trafficked sites in the USA.
Gavin talked about many of the things they did to scale the site as traffic grew. Here are some of the things I remember:
- DB connection pooling was very important for them and made a world of difference. They use PgBouncer and pgpool-II (see the sketch after this list)
- DB horizontal scaling with PL/Proxy. TODO: look it up
- DB Replication w/ Londiste, Slony, Bucardo
- PostgreSQL 9.0-based standbys to increase read-only capacity, and for hot standby.
- Partitioned the database by table, a feature available since Pg 8.1
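Since connection pooling was singled out as the biggest win, here's a small application-side sketch of the idea using psycopg2's built-in pool. To be clear, this is just my illustration: MyYearbook uses PgBouncer/pgpool-II, which pool connections outside the application, and the database name/credentials below are made up.

```python
from psycopg2.pool import ThreadedConnectionPool

# Hypothetical DSN: reuse a small, fixed set of server connections
# instead of opening a new one for every request.
pool = ThreadedConnectionPool(
    minconn=2,
    maxconn=20,
    dbname="myapp", user="myapp", host="127.0.0.1",
)

def count_users():
    conn = pool.getconn()          # borrow an already-open connection
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM users")
            return cur.fetchone()[0]
    finally:
        pool.putconn(conn)         # return it to the pool, don't close it

# PgBouncer/pgpool-II do the same thing one level down the stack,
# so even short-lived web processes end up sharing server connections.
```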
They have a primary-to-secondary master failover procedure. They looked into automating it, but a technical judgement call is really necessary in case something goes wrong, so they will keep it manual. I asked Gavin about this, since we've thought about automating our failover procedure for MySQL, but it's not so easy to just decide when to trigger the failover…
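Our own thinking goes in the same direction: let the tooling gather the facts, but keep the final decision with a human. A rough sketch of what such a semi-automated gate could look like for a MySQL standby (hypothetical hostnames and credentials, and the actual promotion steps are deliberately left out):

```python
import pymysql
from pymysql.cursors import DictCursor

# Hypothetical standby host/credentials; the point is the human-in-the-loop gate.
REPLICA = dict(host="db-standby.example.com", user="monitor",
               password="secret", cursorclass=DictCursor)

def replica_lag_seconds():
    conn = pymysql.connect(**REPLICA)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone() or {}
            return status.get("Seconds_Behind_Master")
    finally:
        conn.close()

def maybe_failover():
    lag = replica_lag_seconds()
    print("Standby replication lag:", lag, "seconds")
    if lag is None:
        print("Replication is stopped or broken: do NOT promote blindly.")
    # The actual promotion (stop replication, repoint the application,
    # fence the old master) stays behind an explicit operator decision.
    answer = input("Promote standby to master? Type 'yes' to proceed: ")
    if answer.strip().lower() != "yes":
        print("Aborted, nothing changed.")
        return
    print("Operator confirmed: follow the documented promotion runbook now.")

if __name__ == "__main__":
    maybe_failover()
```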
For user storage, they use the Isilon IQ series, apparently a FreeBSD appliance with on-board NFS. For DB servers, they looked at different solutions, but they keep coming back to direct-attached storage. Their main DB server is a massively powerful machine: IIRC, 512 GB of RAM and 128 cores. I have to double-check this because it seems really impressive.
John Allspaw – Go or No Go
Another great talk by John, well presented and with great content. Not easy to summarize. The main topic was the "go or no-go meeting", a 10-minute get-together of all involved parties before releasing changes or launching any new feature live.
This meeting basically consists of Yes/No questions:
- Have you tested enough to deploy? QA still needed?
- Has the feature been communicated (blog/forum/…)?
- Does everyone know: when it will go live? who will push the feature?
- Has the feature been in production for staff (or beta users)? That can be tricky to implement if the new feature involves social interactions (e.g. a beta user tagging a non-beta user)
- Is it possible to dark launch this feature? Will we?
- Is it possible to turn on this feature on a % of users? Will we?
- Does it involve new infrastructure? If so: is there monitoring in place? (BLOCKER)
- Is an on/off switch in place in the code/config? Is it documented?
- Are all the relevant people available for communication and launch?
- Is there a place for users to provide feedback about the feature?
- Post-launch "it's all done" time agreed?
- Contingency checklist done, and has everyone reviewed it? (BLOCKER)
The "Contingency list" should answer the question: "What could possibly go wrong? What will we do about it?", with a list of potential issues and how to solve them in case shit hits the fan.
Apart from the go/no-go meeting, which, also according to my past experience, would be a great way to avoid problems, there are at least a couple more really nice things to keep in mind when developing or launching a new feature:
- "Dark launches": a dark launch is essentially a full launch of the new feature, but in such a way that is invisible to users. So if you're making db queries and processing stuff, you keep doing all that, you just throw the data away. You will be able to realize the (almost) full impact of the new feature on your application and compensate accordingly.
- Feature "sampling" (% of users): you just enable the full feature for a small, and then growing, percentage of your user base. You can gradually grow to 100% and test the effect of the changes.
Great stuff.
Neil Gunther – Quantifying scalability
Here I was a bit too excited, with my talk coming next, so unfortunately I didn't pay too much attention. It was a full analysis of scalability seen as a mathematical function: the capacity of your system as the load increases.
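If I'm not mistaken, the core of it is Gunther's Universal Scalability Law, which models relative capacity C(N) at load (or node count) N with two parameters, contention σ and coherency delay κ:

```latex
C(N) = \frac{N}{1 + \sigma\,(N - 1) + \kappa\,N\,(N - 1)}
```

With κ = 0 it reduces to Amdahl's law; with κ > 0 capacity peaks at some N and then actually degrades as load keeps growing, which matches what you observe on real systems.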
Cosimo Streppone – Scaling challenges of my.opera.com
I used five minutes to show a live demo of the My Opera real-time monitoring application that we built, and afterwards I got very interesting questions, and also some nice Twitter messages about it.
I also talked about how we've experimented in distributing requests across the different datacenters with our little geodns tool.
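The idea behind the geodns tool is simply that the authoritative DNS server answers the same hostname with different A records depending on where the client's resolver appears to be. A toy sketch of the decision logic (made-up datacenter addresses, and a stubbed-out lookup where a real implementation would query a GeoIP database):

```python
# Made-up datacenter addresses, keyed by continent code.
DATACENTERS = {
    "EU": ["195.0.2.10", "195.0.2.11"],
    "NA": ["198.51.100.10"],
    "AS": ["203.0.113.10"],
}
DEFAULT = "EU"

def lookup_continent(resolver_ip):
    # Stub: a real implementation would use a GeoIP database here.
    return "NA" if resolver_ip.startswith("198.") else "EU"

def answer_for(resolver_ip):
    """Pick the A records to return for this resolver."""
    continent = lookup_continent(resolver_ip)
    return DATACENTERS.get(continent, DATACENTERS[DEFAULT])

print(answer_for("198.51.100.77"))   # -> ['198.51.100.10']
print(answer_for("195.0.2.99"))      # -> ['195.0.2.10', '195.0.2.11']
```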
All in all, for me it was a fantastic experience. Practice will make me better, so I look forward to a next time :-)
Baron Schwartz – Scaling without sharding
Baron works for Percona. I had read some of his talks before, and I think he's a really good speaker. He explained in detail the scenarios that arise when dealing with database scaling: the typical characteristics of reads and writes, and single-server vs. multi-server deployments.
Basically, what the talk suggests is that very few situations actually require you to shard your database. Single-server setups can go very far by optimizing the way the DB works. Quote: "Sharding should be your last resort". Sharding should only be forced on you when write demand exceeds write capacity, so avoid sharding if you can: try to buffer/collate writes, defer update work, etc.
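"Buffer/collate writes" deserves a tiny illustration. The sketch below is my own (using sqlite3 only because it's in the standard library): many small writes get collected and flushed as one multi-row insert in a single transaction, which is usually far cheaper than one round trip and one commit per row:

```python
import sqlite3

class BufferedWriter:
    """Collect rows and flush them in one transaction per batch."""
    def __init__(self, conn, batch_size=500):
        self.conn = conn
        self.batch_size = batch_size
        self.buffer = []

    def add(self, user_id, action):
        self.buffer.append((user_id, action))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        with self.conn:  # one transaction/commit for the whole batch
            self.conn.executemany(
                "INSERT INTO events (user_id, action) VALUES (?, ?)",
                self.buffer,
            )
        self.buffer.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
writer = BufferedWriter(conn, batch_size=3)
for i in range(7):
    writer.add(i, "login")
writer.flush()  # don't forget the tail
print(conn.execute("SELECT count(*) FROM events").fetchone()[0])  # 7
```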
Closing day 2
Theo Schlossnagle closed the conference with a plenary keynote, a semi-serious "brief history of computing". Much fun, and a goodbye until next year's Surge.
For a glimpse of what happened live at the conference, you can also check out the Twitter stream for #surgecon.
Definitely a great conference. Stay tuned for videos and slides on the official site, http://omniti.com/surge/2010.