
3rd February
2016
written by simplelight

Renew the certificate at RapidSSL (or look around for a new vendor)

In the end, all that is needed is to copy the following into /etc/ssl/localcerts

a) private key file (.key)

b) certificate file, created by concatenating the regular (server) certificate first, followed by the intermediate certificate
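That concatenation step is a single cat. A minimal sketch, with placeholder filenames standing in for whatever your CA actually issued:

```shell
# Placeholder contents stand in for the real files from the CA.
printf -- '-----SERVER CERT-----\n'       > server_cert.crt
printf -- '-----INTERMEDIATE CERT-----\n' > intermediate.crt

# Order matters: the server certificate first, then the intermediate.
cat server_cert.crt intermediate.crt > combined.pem
```

The resulting combined.pem is what goes into /etc/ssl/localcerts alongside the key.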

Then, run the checks below to make sure everything is working correctly.

Finally, restart nginx:

sudo /etc/init.d/nginx restart

Note: I had some weird permission issues, so it is easiest to just edit the actual files rather than try to create new ones.

Todo next time: Investigate whether it is worth the effort to generate a CSR (certificate signing request) on our server. Also, consider using Let’s Encrypt


Checking that the Private Key Matches the Certificate

The private key contains a series of numbers. Two of those numbers form the “public key”, the others are part of your “private key”. The “public key” bits are also embedded in your Certificate (we get them from your CSR). To check that the public key in your cert matches the public portion of your private key, you need to view the cert and the key and compare the numbers. To view the Certificate and the key run the commands:

$ openssl x509 -noout -text -in server.crt
$ openssl rsa -noout -text -in server.key

The `modulus' and the `public exponent' portions in the key and the certificate must match. Since the public exponent is usually 65537 and comparing long moduli by eye is tedious, you can hash them instead:

$ openssl x509 -noout -modulus -in server.crt | openssl md5
$ openssl rsa -noout -modulus -in server.key | openssl md5

And then compare these much shorter hashes. With overwhelming probability they will differ if the keys are different. As a one-liner:

$ openssl x509 -noout -modulus -in server.pem | openssl md5 ;\
  openssl rsa -noout -modulus -in server.key | openssl md5

And with auto-magic comparison (If more than one hash is displayed, they don’t match):

$ (openssl x509 -noout -modulus -in server.pem | openssl md5 ;\
   openssl rsa -noout -modulus -in server.key | openssl md5) | uniq

By the way, if you want to check which key or certificate a particular CSR belongs to, you can compute:

$ openssl req -noout -modulus -in server.csr | openssl md5
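The checks above can be wrapped in a small script. A sketch, assuming a standard openssl install; it generates a throwaway self-signed pair (demo.key/demo.crt) purely so the example is self-contained, and in practice you would point it at your real files:

```shell
# Generate a disposable key and matching self-signed certificate
# so the comparison below has something to run against.
openssl genrsa -out demo.key 2048 2>/dev/null
openssl req -new -x509 -key demo.key -out demo.crt -days 1 -subj "/CN=demo"

# Hash the modulus of each; matching hashes mean the key fits the cert.
crt_md5=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_md5=$(openssl rsa  -noout -modulus -in demo.key | openssl md5)

if [ "$crt_md5" = "$key_md5" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH between certificate and key" >&2
fi
```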
8th November
2011
written by simplelight

We have written previously about the outsourcing of the web stack. In this post, we will add more color on why the outsourcing of the entire web platform makes sense. While developers have gravitated en masse to offerings like Heroku, there is still a wider lack of appreciation for why PaaS is a major trend.

In this post, we are going to set aside the wider question of the economics of running your application on a PaaS versus hosting and maintaining your own servers. Our aim is to describe what constitutes a PaaS and how it differs from IaaS (such as Amazon Web Services) and other SaaS offerings like Salesforce.com.

The Four Pillars of a PaaS

  1. No installation required. Whether your application is written in Ruby on Rails, Python, Java or any other language du jour, there should be no need to install an execution environment when deploying your application to a PaaS. Your code should run on the platform’s built-in execution engine. While minor constraints are necessary, our view is that the successful PaaS providers will largely conform to the language specifications as they are in the wild. This ensures portability of your application between platforms and other hosted environments.
  2. Automated deployment. A single click or command line instruction is all that stands between the developer and a live application.
  3. Elimination of middleware configuration. Tweaking settings in Apache or Nginx, managing the memory on your MySQL instance, and installing three flavors of monitoring software are now in the past.
  4. Automated provisioning of virtual machines. Application scaling should happen behind the scenes. At 3am. Without breaking a sweat.

There are a few other characteristics of the new breed of PaaS services which we would regard as optional components of a platform but which greatly enhance its utility. By integrating other components into the web stack and constraining these to a few, well-curated and proven bundles, a PaaS offering can both consolidate services into a single bill but, perhaps more importantly from a developer’s point of view, ensure inter-operability and maintain a best-of-breed library. Heroku has done a great job of facilitating easy deployment of application add-ons such as log file management, error tracking, and performance monitoring.

There is often confusion as to the difference between PaaS and SaaS: a PaaS offering is an outsourced application stack sold to developers. A SaaS offering is a business application typically sold to business users.

The difference between PaaS and IaaS is more subtle and over time the dividing line is likely to blur. Today, the PaaS platforms begin where the IaaS services leave off: IaaS effects the outsourcing of the hardware components of the web stack. PaaS platforms effect the outsourcing of the middleware components of the web stack. It is the abstraction of the repetitive middleware configuration that has caught the imagination of developers. PaaS saves time and expedites deployments.

8th November
2011
written by simplelight

It is a great time to be a web software developer. Over the last decade the components of web development which have little strategic advantage to a start up have gradually been eliminated and outsourced to such an extent that today the gap between writing code and deploying a new application is often bridged with a single click.

Whereas ten years ago deploying a new application required provisioning a new server, installing Linux, setting up MySQL, configuring Apache, and finally uploading the code, the process today has dramatically less friction. On Heroku, one powerful command line is now all that stands between a team of developers and a live application:

> git push heroku master

Let’s take a closer look at what is happening. The code residing in the repository is uploaded directly to, in this example, Heroku’s cloud platform. From that point onward, the long list of tasks involved in maintaining and fine-tuning a modern web stack are outsourced. The platform provider handles hard drive failures, exploding power supplies, denial-of-service attacks, router replacement, server OS upgrades, security patches, web server configuration … and everything in between.

The implications of this trend are bound to be far-reaching. As common infrastructure is outsourced to vendors such as Amazon, Rackspace, Google and Salesforce.com, the base of customers for hardware and stack software will become increasingly concentrated. As the platform vendors function both as curators and distributors of middleware for associated services such as application monitoring and error logging, new monetization opportunities will arise for those companies, such as New Relic, providing these tools.

Just as the arrival of open-source blogging platforms eliminated the intervening steps between writers and audiences, so the new breed of platforms has reduced the friction between developers and their customers.

Most importantly, though, the barriers for new private companies to compete have been permanently lowered. Today, $100 per month can buy you a billion dollar data center.

27th October
2011
written by simplelight

As unstructured file data increasingly resides in cloud file systems, there is a large component that is still missing: Drag & Drop.

Currently, it is not possible to drag a file from Box.net to Salesforce.com, or any other cloud service, without first downloading the file to your desktop and then re-uploading it. This problem is compounded on mobile devices such as the iPad because there is no easily accessible local storage or ‘Desktop’ equivalent.

Solving this problem will be more of an engineering challenge than meets the eye. First, every cloud service has implemented its own storage protocol and folder system. Second, there is the even larger problem of authentication. Hopefully it will soon be possible to easily tile two browser windows and drag from one cloud service to another. Until then, we will keep on downloading and re-uploading.

Postscript: I have concluded that a single online repository for all my files is a pipe-dream. As the Microsoft monopoly is broken apart, there is going to be increasing fragmentation of cloud services.

15th February
2011
written by simplelight

Stacy Smith, Intel’s CFO, has some interesting data on the tipping point for PC market penetration. As the cost of a PC in a region moves from multiple years to 8 weeks of income, the penetration changes from zero to about 15%. Once the cost drops below 8 weeks of income, the penetration rises very rapidly to 50%.

According to Smith, the cost of a PC in both India and China is now below 8 weeks of income in those countries.

27th January
2011
written by simplelight

Facebook isn’t often cited as a cloud computing company since the ‘Social’ moniker has proven to be stickier. It does, however, meet the common definition of ‘Cloud’ i.e. the management of the hardware is highly abstracted from its users, the infrastructure is highly elastic, a variety of services (billing, authentication etc.) are bundled, and the underlying hardware is geographically dispersed.

What is fascinating is that Facebook, more than other cloud companies, gives us a glimpse into a future where computing and storage are virtually free and ubiquitous. With $2 billion in revenue for 2010 and about 500M users, Facebook has revenue of roughly $4 per user. With some back of the envelope math, it seems likely that the variable cost for each additional user is about $1 per year. Think of the services that Facebook is providing its users for $1. Unlimited photo storage and sharing. A contact database. Email. Instant messaging. A gaming platform.

The economics in the consumer cloud are compelling. They will become more so over time and as large enterprises realize that there is no strategic value in common IT, there will be a similar shift for businesses.

18th August
2009
written by simplelight

One of the promises of the internet has always been the collapsing of the pipeline between content creators and content consumers. We have already witnessed this phenomenon in the newspaper industry as the cost of distributing news fell from over $100 per subscriber per year to fractions of a penny.

As internet technology improves, the same will happen to movies and television. Vuze, formerly Azureus, is a Silicon Valley startup that is at the forefront of this trend. By utilizing peer-to-peer BitTorrent technology, Vuze has inverted the usual relationship in video streaming between scale and performance. Most internet streaming video degrades less than gracefully as more users watch a given stream. With peer-to-peer technology, the more people who watch the same show as you, the better your quality will be. Not only that, as more viewers join the network, the cost of delivering a high definition video stream to your TV, iPod or laptop declines to zero. With millions of concurrent users at any one time on the Vuze HD network (as of July 2009), you can be sure that someone will be watching what you are.

Just as the newspaper empires took over a decade to crumble, it’s likely that the large production studios will defend their fortresses for as long as possible. But in the long run, creative producers and quality content will gravitate to the cheapest distribution network. Consumers will pay less for their television shows, and the people who create the shows we watch will keep more of the profit.

16th August
2009
written by simplelight

If you’ve always found using floats in CSS to be mostly trial and error then this Floatorial might clear up matters a little.

29th July
2009
written by simplelight

There is a fairly lengthy list of tasks when first starting a website. This is a compilation for my future sanity:

  1. Choose a website name (do this first because you need it for all subsequent steps, and it’s almost impossible to change later).
  2. Register the URL. I recommend hosting with Dreamhost. They’re great value for money and the support is excellent. In general, it’s easier to register your URL through your hosting provider.
  3. Submit the URL to all the search engines as soon as possible. The crawlers will take a while to get around to your site.
  4. Download the Aptana IDE. It’s a great, free editor.
  5. Download and install Firefox. Install the Firebug plugin.
  6. Make sure you have 301 redirects to ensure that Google sees your website as a single URL and not two different websites (one for www.domain.com and another for domain.com). If you’re using Dreamhost this is easily achieved by selecting your preferred URL format under the ‘Manage Domains’ section of the Dreamhost panel. There is no need for a .htaccess file if you’re using Phusion Passenger.
  7. Sign up for Google Analytics, Google Webmaster and (if you plan on advertising) Google Adwords
  8. When designing your initial web layout, leave space for ads if you plan to add them later.
  9. Good link for embedding video
  10. After a month or so, use Hubspot to check your search engine hygiene.

I’ll add more to this list over time.
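For hosts where the panel option in step 6 doesn’t apply, the www/non-www 301 redirect can be set by hand. A minimal .htaccess sketch, assuming Apache with mod_rewrite enabled and example.com as a placeholder domain:

```apache
RewriteEngine On
# Permanently (301) redirect the bare domain to the www form,
# so search engines see a single canonical URL.
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```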

1st July
2009
written by simplelight

It’s a pity that Yahoo is still maintaining the 5000 query limit per IP address. 5000 stock quotes is the equivalent of 10 years of daily data for two companies only.
