Shortcomings of Amazon EC2

Jason Hoffman of Joyent has written an interesting article, Why EC2 isn’t yet a platform for “normal” web applications.

  1. No IP address persistence (they all function as DHCP clients and are assigned an IP). One has to use dynamic DNS services for a given domain.
  2. No block storage persistence. When the instance is gone, the data is gone. Yes I know you can send this back regularly to S3, but isn’t that actually a ‘hack’?
  3. No opportunity for hardware-based load balancing (which happens to be the key to scaling a process based framework like Rails and mentioned above).
  4. No vertical scaling (you get a 1.7 GHz CPU and 1 GB of RAM, that’s it). So like the block storage problem, this hits databases, we run about 32GB of ours in memory.
  5. Can’t run your own kernel or make kernel modifications so there’s no ability for kernel and OS optimizations, and no guarantee that they’ve been done.
  6. Images have to be uploaded and then moved around their network to find a launching point. This can take several minutes, if not more. Move 100 GBs around a busy gigabit network sometime and see.
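The “hack” in point 2 usually looks something like a cron job that dumps the database and pushes the dump to S3. A minimal sketch of that idea (the bucket name and paths are hypothetical, and it assumes mysqldump and a configured s3cmd are installed on the instance):

```shell
# Hypothetical crontab fragment: snapshot the database hourly and
# copy it to S3, since instance-local disk does not survive termination.
# Bucket name and paths are illustrative only.
0 * * * * mysqldump --all-databases | gzip > /var/backups/dump.sql.gz
5 * * * * s3cmd put /var/backups/dump.sql.gz s3://my-backup-bucket/dump.sql.gz
```

Even at its best this gives you periodic snapshots, not real persistence: anything written between the last successful upload and an instance failure is simply gone, which is why calling it a hack seems fair.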

Some of these points have already been raised multiple times before, like no IP address persistence and no local storage persistence. These are exactly the points I raised back in August last year. As for the kernel modification issue in point 5, it is certainly not specific to Amazon EC2: most VPS hosts won’t let you run your own kernel anyway. Nor do I think moving AMIs around (point 6) is a real issue. An EC2 instance can take a few minutes to deploy, but that is still far faster than provisioning a dedicated server.

Jason did bring up a very interesting point though on the inability to scale vertically. The common myth around web-based applications is that scaling horizontally is easy, and you can just throw in more hardware to make it run faster. Or at least the PHP, RoR or “shared-nothing” folks would want you to believe that. This is simply NOT TRUE, and I have to keep reminding the sales guys at work that we cannot just throw in more instances to fix a scalability issue.

The easiest way to scale a database system is still running it on BIGGER hardware. More RAM so database pages can be cached. Faster disks so IO wait can be reduced. That is, if you do not want to go down the path of database partitioning (lots of domain-specific tuning), or spend lots of $$$ on Oracle RAC or DB2 (usually beyond the budget of most Web 2.0 startups).
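To make the “more RAM” point concrete: on a big box, most of the vertical-scaling win comes from simply letting the database cache more pages in memory. A sketch of what that might look like in my.cnf for MySQL/InnoDB on a hypothetical 32 GB server (the values are illustrative, not recommendations, and obviously impossible on a 1.7 GB EC2 instance):

```shell
# Hypothetical my.cnf fragment for a 32 GB database server.
# innodb_buffer_pool_size is the single most important knob:
# the more of the working set that fits in it, the less disk IO.
[mysqld]
innodb_buffer_pool_size = 24G
innodb_log_file_size    = 256M
key_buffer_size         = 512M   # for MyISAM indexes, if any
```

None of this tuning is available to you when the ceiling on the whole machine is fixed by the provider, which is exactly Jason’s point.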

I guess you just cannot get anything one-size-fits-all in this world.