The push with cloud and any application today is to scale out. Scaling out distributes high loads across multiple nodes fulfilling requests, and it also makes high availability possible. In a scale-out architecture, much of the intelligence has to be built into the application itself, because the application must be able to handle any given request on any of the nodes (see the sketch below).
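One concrete consequence is that per-request state cannot live in a single node's process memory, since the next request may land on a different node; it has to sit in a shared store instead. Here is a minimal sketch of that idea in Python, assuming a Redis instance for the shared state (the host name, key layout, and helper names are illustrative, not taken from the Stack Overflow setup):

```python
import json
import redis  # third-party client: pip install redis

# Shared store reachable from every web node; host/port are placeholders.
shared_store = redis.Redis(host="cache.internal", port=6379, decode_responses=True)

SESSION_TTL_SECONDS = 30 * 60  # expire idle sessions after 30 minutes


def save_session(session_id: str, data: dict) -> None:
    """Write session state to the shared store instead of local memory,
    so any node behind the load balancer can serve the next request."""
    shared_store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))


def load_session(session_id: str) -> dict:
    """Read session state back on whichever node receives the request."""
    raw = shared_store.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```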
However, I recently read up on how Stack Overflow is able to serve 560 million page views a month from just 25 servers with a scale-up approach rather than a scale-out approach, and the stats are very impressive!
Here is an excerpt of the stats.
Stats
The Stack Exchange network has 110 sites, growing at a rate of 3 or 4 a month.
4 million users
8 million questions
40 million answers
As a network, the #54 site for traffic in the world
100% year over year growth
560 million pageviews a month
Peak is more like 2600-3000 requests/sec on most weekdays. Programming, being a profession, means weekdays are significantly busier than weekends.
25 servers
2 TB of SQL data all stored on SSDs
Each web server has 2x 320GB SSDs in a RAID 1.
Each ElasticSearch box has 300 GB also using SSDs.
Stack Overflow has a 40:60 read-write ratio.
DB servers average 10% CPU utilization
11 web servers, using IIS
2 load balancers, 1 active, using HAProxy
4 active database nodes, using MS SQL
3 application servers implementing the tag engine; anything searching by tag hits these servers
3 machines doing search with ElasticSearch
2 machines for distributed cache and messaging using Redis
2 Networks (each a Nexus 5596 + Fabric Extenders)
2 Cisco 5525-X ASAs (think Firewall)
2 Cisco 3945 Routers
2 read-only SQL Servers used mainly for the Stack Exchange API
VMs also perform functions like deployments, domain controllers, monitoring, ops database for sysadmin goodies, etc.
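To put those numbers in perspective, here is a rough back-of-the-envelope calculation of my own, using only the figures quoted above and assuming a 30-day month:

```python
# Back-of-the-envelope load per web server, from the published figures above.
pageviews_per_month = 560_000_000
seconds_per_month = 30 * 24 * 3600   # assumes a 30-day month (~2.59M seconds)
web_servers = 11
peak_requests_per_sec = 3000         # upper end of the quoted weekday peak

avg_requests_per_sec = pageviews_per_month / seconds_per_month
print(f"Average load: {avg_requests_per_sec:.0f} req/s across the web tier")
print(f"Per web server (average): {avg_requests_per_sec / web_servers:.0f} req/s")
print(f"Per web server (peak):    {peak_requests_per_sec / web_servers:.0f} req/s")
```

That works out to roughly 200 requests per second on average and under 300 per web server even at peak, which helps explain why a handful of well-specified machines can carry the whole load.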
You can read more here.
Here’s an interesting video on their architecture by their software developer Marco Cecconi. What I did learn is that we should scale up before scaling out, and of course this all depends on the workloads and use cases we are dealing with.