Like most startups, Huffle’s website platform has undergone a number of changes during the past year. The path it has followed is pretty typical, with each platform evolution reflecting the increase in technical investment required to grow from one stage to the next.
Since it’s such a common sequence of phases, I thought it might be useful to share this evolution – hopefully it saves you some of the research you would otherwise have to do yourself.
Phase 1 – The Hosted Landing Page
Huffle’s initial site contained a single page used as a lead-generation tool. There are many platforms that can be used here; WordPress, Wix, Instapage and Unbounce are some of the popular options. Each of these platforms provides an online editor for designing and writing the content you want to display. They also typically provide integrations with 3rd-party services for capturing leads, such as MailChimp/Campaign Monitor for e-mail lists and Salesforce/Zoho for CRM.
Our very early site was hosted on Wix, but we preferred the landing page templates on Instapage, so we moved across mid-last year.
Once created, you simply point your site’s DNS record at the hosting provider and away you go with your landing page.
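In practice, this usually comes down to a single DNS record. A sketch of what the zone entries might look like (the domain and provider hostname below are hypothetical – your provider’s setup guide will give you the exact target):

```
; hypothetical zone-file entries for a hosted landing page
www.example.com.   3600  IN  CNAME  pages.landing-provider.example.
example.com.       3600  IN  A      203.0.113.10   ; apex record, if your provider gives you an IP
```

Note that the apex (naked) domain generally cannot be a CNAME, which is why some providers hand you an IP address or an apex-flattening option for that record.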
You’re completely at the mercy of your landing page provider – if they go down, there’s very little recourse you can take, but at least you have a web presence.
Phase 2 – Platform as a Service (PaaS)
The target deployment infrastructure is pretty straightforward, consisting of web and database servers.
However, getting a nice automated deployment process up and running takes time, and the underlying servers need to be managed, which is where Platform as a Service (PaaS) solutions came in. We used Heroku, as it provided a ready-made platform for serving applications written in a number of different languages.
Heroku provides a single command to deploy the latest code base to its platform, which runs on top of Amazon Web Services. Additional web servers (dynos in Heroku speak) can be freely added or removed to scale your site up or down as needs dictate, making it an ideal platform during the early stages of a startup.
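A typical deploy-and-scale cycle looks something like this (a sketch assuming the Heroku CLI is installed and an app has already been created):

```shell
# Deploy the latest code: Heroku builds and releases on every push
git push heroku master

# Scale out to two web dynos, then back down to one
heroku ps:scale web=2
heroku ps:scale web=1

# Tail the application logs
heroku logs --tail
```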
Heroku also provides a marketplace of add-ons, making it really straightforward to add extra functionality (sending email, hosting over SSL, application monitoring, …) with a minimal amount of effort. You can also use tools such as loader.io to see how your site performs under moderate load (hundreds of requests per second) and ensure it can handle those initial bursts of publicity.
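Even before reaching for a hosted load-testing tool, you can get a rough feel for throughput with a few lines of code. The sketch below (hypothetical, standard library only) spins up a throwaway local server and fires concurrent requests at it, reporting successes and requests per second – point it at a staging URL instead to test a real deployment:

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def run_local_server(port=8099):
    """Start a throwaway HTTP server on localhost for the demo."""
    handler = http.server.SimpleHTTPRequestHandler
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def load_test(url, total_requests=200, concurrency=20):
    """Fire `total_requests` GETs with `concurrency` workers.

    Returns (successful_request_count, requests_per_second).
    """
    def hit(_):
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200

    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        ok = sum(pool.map(hit, range(total_requests)))
    elapsed = time.time() - start
    return ok, total_requests / elapsed

if __name__ == "__main__":
    server = run_local_server()
    ok, rps = load_test("http://127.0.0.1:8099/", total_requests=200, concurrency=20)
    server.shutdown()
    print(f"{ok} successful requests at {rps:.0f} req/s")
```

This is nowhere near a substitute for a proper load test – it measures your client machine as much as the server – but it is a quick sanity check before paying for tooling.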
As great as Heroku was for getting our web application up and running quickly, there were some limitations that were frustrating to work with:
- You cannot jump onto a server to have a dig around – everything is done via the Heroku logs command
- Heroku runs on top of AWS across a limited number of regions – none of which are in Australia.
- You cannot run a Heroku application out of multiple regions without duplicating your entire platform, including the database server (which is expensive), in both sites – and you’ll also need to find a way to synchronise your databases. This means that when there is a problem in the AWS region your Heroku instance runs in, or in Heroku itself, you have zero redundancy options unless you duplicate your infrastructure.
Heroku does provide a useful status page, but if site availability is crucial to you, these issues are too great to rely on it as a sole hosting platform. This is why we made the move to AWS, which gave us a greater degree of flexibility in our deployment and management options.
Phase 3 – Infrastructure as a Service (IaaS)
In the world of Infrastructure as a Service (IaaS), Amazon Web Services is king. There are a number of other IaaS platforms to choose from; however, given its relative maturity, its status as the platform of choice for so many startup success stories, and its Activate Program for startups, it was a no-brainer for us.
Amazon Web Services provides resilience across multiple geographic regions, and within each region there are multiple data centres (availability zones) you can deploy your application across. This met our needs by giving us availability across multiple physical sites – the resiliency we required for running our main production site.
The up-front investment required to automate the provisioning and deployments of environments is high, requiring investment in:
- DevOps toolchains such as Ansible, Chef, Puppet or SaltStack for environment provisioning and ongoing management
- Creating deployment/release tools, especially if you want to use immutable servers
- Security – ensuring access points to your environment are minimised and communication between nodes is restricted to the bare essentials
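To give a flavour of what that provisioning investment looks like, here is a minimal Ansible playbook sketch (the host group and choice of nginx are hypothetical examples, not our actual setup) that installs and starts a web server on a freshly launched instance:

```yaml
# hypothetical playbook: provision nginx on hosts in the "web" group
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Ensure nginx is running and enabled on boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Real playbooks quickly grow to cover users, firewall rules, application code, monitoring agents and so on – which is where the up-front time goes.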
The end result for us looks something like this, where we have full site redundancy across multiple data centres and are located within AWS’s Sydney region.
Should our AWS region fail, a new copy of this environment could be brought up in a matter of minutes with our DevOps provisioning tools; for now, though, this setup mostly meets our needs and provides us with a great degree of flexibility going forwards.
AWS does provide Platform as a Service capabilities with its Elastic Beanstalk offering; however, we wanted the flexibility to manage our own servers and support non-standard use cases, such as hosting multiple sites over SSL on a single set of infrastructure, which does not play so well with Elastic Beanstalk.
They also provide OpsWorks for managing cloud infrastructure; however, it ties you to Chef, which was less appealing to us compared with some of the other options out there.
Footnote – DNS Failover
One of the options we looked at early on was DNS failover, to provide resiliency between different hosting providers should one of them fail. The issue with this approach is that most DNS providers require you to work with IP addresses, which is not feasible if your hosting provider only gives you a hostname to point to.
Amazon’s Route 53 DNS record management service provides a failover mechanism that works with CNAME records, which we found suited our use case well.
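With Route 53, this boils down to a pair of failover record sets: a primary CNAME tied to a health check, and a secondary that takes over when the health check fails. A sketch of the change batch you would submit via `aws route53 change-resource-record-sets` (all domains, hostnames and the health check ID below are hypothetical placeholders):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "TTL": 60,
        "HealthCheckId": "hypothetical-health-check-id",
        "ResourceRecords": [{ "Value": "primary-host.example.net" }]
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "backup-host.example.net" }]
      }
    }
  ]
}
```

Because both targets are hostnames rather than IP addresses, this works even when your hosting providers only give you a URL to point at.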