Technology

24th July
2012
written by simplelight

 

Each successive Zynga game peaks earlier but with fewer users. Farmville -> Cityville -> Castleville must have been alarming. And then they bought ‘Draw Something’ right at the peak. It’s going to be tough to keep filling the bucket.

What’s difficult to see on the graph is that Zynga’s Sims rip-off, The Ville, appears to have already peaked at around 6.3M daily active uniques.

 

8th November
2011
written by simplelight

We have written previously about the outsourcing of the web stack. In this post, we will add more color on why the outsourcing of the entire web platform makes sense. While developers have gravitated en masse to offerings like Heroku, there is still a wider lack of appreciation for why PaaS is a major trend.

In this post, we are going to set aside the wider question of the economics of running your application on a PaaS versus hosting and maintaining your own servers. Our aim is to describe what constitutes a PaaS and how it differs from IaaS (such as Amazon Web Services) and other SaaS offerings like Salesforce.com.

The Four Pillars of a PaaS

  1. No installation required. Whether your application is written in Ruby on Rails, Python, Java, or any other language du jour, there should be no need to install an execution environment when deploying your application to a PaaS. Your code should run on the platform’s built-in execution engine. While minor constraints are necessary, our view is that the successful PaaS providers will largely conform to the language specifications as they exist in the wild. This ensures portability of your application between platforms and other hosted environments.
  2. Automated deployment. A single click or command line instruction is all that stands between the developer and a live application.
  3. Elimination of middleware configuration. Tweaking settings in Apache or Nginx, managing the memory on your MySQL instance, and installing three flavors of monitoring software are now in the past.
  4. Automated provisioning of virtual machines. Application scaling should happen behind the scenes. At 3am. Without breaking a sweat.

There are a few other characteristics of the new breed of PaaS services which we would regard as optional components of a platform but which greatly enhance its utility. By integrating other components into the web stack and constraining these to a few, well-curated and proven bundles, a PaaS offering can both consolidate services into a single bill but, perhaps more importantly from a developer’s point of view, ensure inter-operability and maintain a best-of-breed library. Heroku has done a great job of facilitating easy deployment of application add-ons such as log file management, error tracking, and performance monitoring.

There is often confusion as to the difference between PaaS and SaaS: a PaaS offering is an outsourced application stack sold to developers. A SaaS offering is a business application typically sold to business users.

The difference between PaaS and IaaS is more subtle and over time the dividing line is likely to blur. Today, the PaaS platforms begin where the IaaS services leave off: IaaS effects the outsourcing of the hardware components of the web stack. PaaS platforms effect the outsourcing of the middleware components of the web stack. It is the abstraction of the repetitive middleware configuration that has caught the imagination of developers. PaaS saves time and expedites deployments.

8th November
2011
written by simplelight

It is a great time to be a web software developer. Over the last decade, the components of web development that offer little strategic advantage to a startup have gradually been eliminated and outsourced, to such an extent that today the gap between writing code and deploying a new application is often bridged with a single click.

Whereas ten years ago deploying a new application required provisioning a new server, installing Linux, setting up MySQL, configuring Apache, and finally uploading the code, the process today has dramatically less friction. On Heroku, one command is now all that stands between a team of developers and a live application:

> git push heroku master

Let’s take a closer look at what is happening. The code residing in the repository is uploaded directly to, in this example, Heroku’s cloud platform. From that point onward, the long list of tasks involved in maintaining and fine-tuning a modern web stack is outsourced. The platform provider handles hard drive failures, exploding power supplies, denial-of-service attacks, router replacement, server OS upgrades, security patches, web server configuration … and everything in between.

The implications of this trend are bound to be far-reaching. As common infrastructure is outsourced to vendors such as Amazon, Rackspace, Google and Salesforce.com, the base of customers for hardware and stack software will become increasingly concentrated. As the platform vendors function both as curators and distributors of middleware for associated services such as application monitoring and error logging, new monetization opportunities will arise for those companies, such as New Relic, providing these tools.

Just as the arrival of open-source blogging platforms eliminated the intervening steps between writers and audiences, so the new breed of platforms has reduced the friction between developers and their customers.

Most importantly, though, the barriers for new private companies to compete have been permanently lowered. Today, $100 per month can buy you a billion dollar data center.

15th February
2011
written by simplelight

Stacy Smith, Intel’s CFO, has some interesting data on the tipping point for PC market penetration. As the cost of a PC in a region moves from multiple years to 8 weeks of income, the penetration changes from zero to about 15%. Once the cost drops below 8 weeks of income, the penetration rises very rapidly to 50%.

According to Smith, the cost of a PC in both India and China is now below 8 weeks of income in those countries.
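A back-of-the-envelope sketch of that tipping point. The thresholds encode Smith's description above, but the example price and income are illustrative, not his actual data:

```python
def weeks_of_income(pc_price, annual_income):
    """Express the cost of a PC in weeks of local income."""
    return pc_price / (annual_income / 52.0)

def rough_penetration(weeks):
    """Crude step model of the tipping point described above."""
    if weeks > 104:   # PC costs multiple years of income
        return "near zero"
    if weeks > 8:     # between ~2 years and 8 weeks of income
        return "up to ~15%"
    return "rising rapidly toward ~50%"  # below the 8-week threshold

# Illustrative: a $400 PC against a $5,200 annual income
print(weeks_of_income(400, 5200))   # 4.0 weeks, past the tipping point
print(rough_penetration(weeks_of_income(400, 5200)))
```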

18th August
2009
written by simplelight

One of the promises of the internet has always been the collapsing of the pipeline between content creators and content consumers. We have already witnessed this phenomenon in the newspaper industry as the cost of distributing news fell from over $100 per subscriber per year to fractions of a penny.

As internet technology improves, the same will happen to movies and television. Vuze, formerly Azureus, is a Silicon Valley startup at the forefront of this trend. By utilizing peer-to-peer BitTorrent technology, Vuze has inverted the usual relationship in video streaming between scale and performance. Most internet streaming video degrades less than gracefully as more users watch a given stream. With peer-to-peer technology, the more people who watch the same show as you, the better your quality will be. Not only that: as more viewers join the network, the cost of delivering a high-definition video stream to your TV, iPod or laptop declines toward zero. With millions of concurrent users at any one time on the Vuze HD network (as of July 2009), you can be sure that someone will be watching what you are.
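A toy model makes the inversion concrete. Suppose the origin server seeds a fixed number of full-rate copies of a stream and the swarm redistributes them among viewers (a deliberate simplification, not Vuze's actual protocol); origin bandwidth per viewer then falls as the audience grows, the opposite of conventional client-server streaming:

```python
def server_cost_per_viewer(viewers, stream_mbps=2.0, seed_streams=10):
    """Toy swarm model: the origin seeds a fixed number of full-rate
    copies and peers redistribute them, so origin bandwidth per viewer
    shrinks as the swarm grows. All parameters are illustrative."""
    return seed_streams * stream_mbps / viewers

# Origin load per viewer (Mbps) as the audience grows:
for n in (10, 1000, 100000):
    print(n, server_cost_per_viewer(n))
# With a client-server CDN, per-viewer cost would stay flat instead.
```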

Just as the newspaper empires took over a decade to crumble, it’s likely that the large production studios will defend their fortresses for as long as possible. But in the long run, creative producers and quality content will gravitate to the cheapest distribution network. Consumers will pay less for their television shows, and the people who create the shows we watch will keep more of the profit.

1st July
2009
written by simplelight

It’s a pity that Yahoo is still maintaining the 5,000-query limit per IP address. 5,000 stock quotes is the equivalent of ten years of daily data for only two companies.
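The arithmetic behind that complaint, assuming roughly 250 trading days per year:

```python
query_limit = 5000            # Yahoo's per-IP query limit
trading_days_per_year = 250   # approximate
tickers = 2

# Years of daily closes the limit buys you for two tickers:
print(query_limit / trading_days_per_year / tickers)  # 10.0
```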

12th June
2009
written by simplelight

One of the major issues with large data centers is power. This applies both to hyperscale data centers like Microsoft’s and Google’s and to large enterprise data centers, which are very energy inefficient.

Data Center Power Usage Effectiveness (PUE) is defined as the ratio of total data center power draw to IT (server) power draw. Thus a PUE of 2.0 means that the data center must draw 2 watts for every 1 watt of power consumed by IT (server) equipment. The ideal number would be 1.0, which means zero overhead. The overhead power is used by lighting, power delivery, UPS, chillers, fans, air conditioning, etc. Google claims to have achieved a PUE of 1.3 to 1.7. Microsoft runs somewhere close to 1.8. Most of corporate America runs between 2.0 and 2.5.
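The arithmetic is simple enough to sketch. The facility figures below are illustrative, not from any particular operator:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility draw / IT (server) draw."""
    return total_facility_kw / it_kw

# Illustrative: a facility drawing 9,000 kW to power 5,000 kW of servers
print(pue(9000, 5000))  # 1.8, roughly the figure quoted for Microsoft
# Every watt of IT load then carries 0.8 W of cooling/UPS/lighting overhead.
```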

A typical large data center these days costs in the range of $150 Million to $300 Million depending upon the size and location. A 15 MW data center facility is approximately $200 million. This is the capital cost so it is depreciated over time.
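A quick sketch of what that capital cost implies, with an assumed straight-line depreciation period (the post does not specify one; ten years is used here purely for illustration):

```python
capex = 200e6        # $200M for a 15 MW facility, per the post
capacity_w = 15e6    # 15 MW of facility capacity, in watts

print(capex / capacity_w)    # ~ $13.33 of capital per watt of capacity

# Assumed 10-year straight-line depreciation (illustrative only):
years = 10
print(capex / years / 12)    # ~ $1.67M of depreciation per month
```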

Most of the facility cost is power related: anywhere from 75% to 80% of the cost is power equipment (PDU, chiller, UPS, etc.).

A typical 15 MW data center with 50,000 servers costs about $6.0 million per month in operating expense (excluding people cost). The share attributable to power infrastructure (PDU, chiller, UPS, etc.) is between 20% and 24%, and actual power for the servers is another 18% to 20%, so total power cost is between 38% and 44%. These numbers reflect what Microsoft or Google would achieve; the EPA has done a study and believes the figure is closer to 50% for inefficient data centers.
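Putting dollar figures on those percentages, a quick sketch using the post's own numbers:

```python
monthly_opex = 6.0e6          # $/month for a 15 MW, 50,000-server facility

# Opex shares attributable to power (low/high ranges from the post):
power_infra  = (0.20, 0.24)   # PDU, chillers, UPS, etc.
server_power = (0.18, 0.20)   # electricity actually drawn by the servers

total = (power_infra[0] + server_power[0], power_infra[1] + server_power[1])
low, high = (monthly_opex * s for s in total)
print(low, high)  # roughly $2.28M to $2.64M of power-related cost per month
```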

10th June
2009
written by simplelight

If you use Google Analytics’ Site Overlay functionality, it occasionally results in a white or gray haze over your website which prevents you from clicking on any of the links.

The good news is that your browser is the only one affected (none of your customers will see the same effect). All you have to do to fix the problem on your end is clear your cookies (specifically a cookie called GASO).

1st June
2009
written by simplelight

I’ve been playing around with Wolfram Alpha’s new computational knowledge engine and it seems to need a lot of work before it becomes more than an exotic curiosity. I entered the following query:

US Debt / US GDP

and it returns the following answer:

0.585 years (2007 estimate)

I’m not sure how to interpret that but it seems ominous!

29th May
2009
written by simplelight

The thought of managing accounts with 450 different ad networks made my head hurt, so I signed up with Rubicon Project. They claim to optimize the ads on your blog and show better-performing ads more frequently. It’s been running for over a week on my blog and (as you can probably see in the sidebar to the right), I’m still running public service ads for the Red Cross. The dashboard on Rubicon Project’s website says that it’s still activating, though.

Update: The reason no ads were running is that I had forgotten to add baseline ad tags from Google, and Rubicon Project has limited inventory in the 200×200 size I had chosen (since it fits nicely in my sidebar). Their customer service is very helpful, though, and they were excellent at clarifying what I’d done wrong.
