This was originally posted as a thread on Twitter.
The Prevailing Wisdom
- The only acceptable cloud platforms to use for reliability are the big ones (AWS/GCP/Azure)
- We *must* have every service (including non-production) on its own server (best practice).
Where this often leads
- 5 codebases x 8 services x 2 environments (production, staging) x a medium-level server at $96/month each = $7,680/month.
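As a quick sanity check, the bill above is just a multiplication. This is a sketch using the illustrative figures from the example, not real billing data:

```python
# Illustrative "best practice" monthly bill.
# All figures come from the example above, not from an actual invoice.
codebases = 5
services_per_codebase = 8
environments = 2       # production + staging
server_cost = 96       # USD/month for a medium-level server

monthly_bill = codebases * services_per_codebase * environments * server_cost
print(f"${monthly_bill}/month")  # → $7680/month
```

Note that the per-server price is the smallest factor here; the multipliers in front of it (services and environments) are what the rest of this post is really about.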
The Reality
- The main line item on your team's cloud hosting bill is what's called a VM (virtual machine). Every platform has created its own fancy name for this (Amazon calls it EC2, Google calls it Compute Engine), but behind the curtain you're paying for the same thing - commoditised, open source technology.
- There are many hosting providers, with similar uptime stats, that sell the exact same thing the big platforms do for around 90% less. For example, the equivalent of a $96/month Azure server is available on contabo.com for $6.30/month.
- The cloud provider offerings are *intentionally* confusing to both technical and non-technical people.
- Often, over-eager development teams will decompose a web application into separate services. This means that instead of having one codebase and one place that code is hosted, you now have 5 codebases and 5 different deployments. This 5x’es the maintenance burden and usually also 5x’es the cost. Now instead of having 2 or 3 instances to pay for (production, staging), you can end up with 10 to 15 instances.
- When your team says "This is just what it costs to run our software, we can't do it cheaper", they don't mean "This is the cost required to service our volume of traffic", they mean "This is what 'best practices' costs and this is what everyone else is doing so we're not overpaying"
- There are many teams out there paying thousands of dollars per month for servers when the actual compute cost to service their traffic is closer to $10/month. This sounds crazy, but it's true.