We need to ditch the cloud entirely and go in-house again.
For many, many companies, that would be a return to the bad old days.
I don’t miss getting an emergency page during Thanksgiving dinner because the in-house datacenter is reporting excessive temperature. Going into the office and finding the CRAC has failed and it’s now 105 °F. And you knew the CRAC’s preventive maintenance was overdue, and management wouldn’t approve the cost to get it serviced even though you’d been asking for more than six months. You also know that with this high-temperature event, you’re going to see an increased rate of hard drive failures over the next year.
No thank you.
We don’t have to. It is entirely possible to engineer applications and services so that they’re not dependent on any one cloud provider while still using cloud services for IaaS. Netflix famously does this, and sure enough, Netflix experienced no service interruptions during this latest outage despite having a large AWS presence.
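To make the idea concrete, here is a minimal sketch of provider-agnostic failover at the client level: the same stateless service is assumed to be deployed on two providers, and the client simply tries each endpoint in order. The endpoint URLs are hypothetical, and real systems typically do this with DNS failover or a load balancer rather than in application code, but the principle is the same.

```python
import urllib.request

# Hypothetical endpoints: the same stateless service deployed on two providers.
ENDPOINTS = [
    "https://api.provider-a.example.com",
    "https://api.provider-b.example.com",
]

def http_get(url, timeout=2.0):
    """Plain HTTP GET; raises on any network failure."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

def fetch_with_failover(path, endpoints=ENDPOINTS, get=http_get):
    """Try each provider in order and return the first successful body.

    urllib's URLError subclasses OSError, so catching OSError covers
    timeouts, DNS failures, and connection errors alike.
    """
    last_err = None
    for base in endpoints:
        try:
            return get(base + path)
        except OSError as err:
            last_err = err  # this provider is down; fall through to the next
    raise RuntimeError(f"all providers failed; last error: {last_err}")
```

The `get` parameter exists so the failover logic can be exercised without a network; in production you would also want health checks and circuit breaking rather than re-trying a dead provider on every request.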
If we want a truly robust system, yeah, we kinda do. This sort of event is only one of the issues with allowing a single entity to control pretty much everything.
There are plenty of potential issues, from a corrupt rogue corporation hijacking everything, to attacks, to internal fuck-ups like the one we just experienced. Sure, they can design a better cloud, but at the end of the day, it’s still their cloud. The Internet needs to be less centralized, not more (and I don’t mean that purely in terms of infrastructure, though that is included, of course).
Bit of an over-reaction to one incident. I’d be willing to bet the uptime, reliability and scalability of AWS is significantly better than what the vast majority of in-house solutions could do. It’s absolutely not worth going back.
Millions of customers using AWS also weren’t affected - the company I work for certainly wasn’t, although some of our tools like Jira were.
The problem is far more pervasive than any single incident: allowing a single megacorporation to control most of the Internet is a bad idea for many reasons. The Internet is supposed to be decentralized; it was even designed to withstand a nuclear war with that principle in mind. Even with a robust, distributed network with redundant backups, if it’s all still controlled by one company, that is a very precarious situation.
Agreed, but other cloud providers exist and it would be good if there was stronger competition in this space. But going back to self hosting is a huge step back and I think if a CTO said they were going to move from the cloud back to a self hosted solution, pretty much everyone would hate it.
I certainly don’t miss dealing with air conditioning, dry fire protection, and redundant internet connections.
I also don’t miss trying to deal with aging servers out and bringing new hardware in.
That work is still being done by someone in a data centre. But all these jobs went from in-house positions to the centres.
The difference is scale. When in-house, the person responsible for managing the glycol loop is also responsible for the other CRACs, possibly the power rails, and likely the fire suppression. At a giant provider, each of those is its own team of dozens or hundreds of people who specialize in only their area. They can spend 100% of their time on their one area of responsibility instead of having to wear multiple hats. The smaller the company, the more hats people have to wear, and the worse the overall result is from being spread too thin.
The inverse of the old axiom “The cloud is just someone else’s computer” is “Yes, duh, that’s how you get economies of scale”.
In-housing would mean an enormous increase in demand for physical hardware and IT technical services with a large variance in quality and accessibility. Like, it doesn’t fix the underlying problem. It just takes one big problem and shatters it into a thousand little problems.
That’s good though. It means half the internet wouldn’t fail.
I think some of you younger folks really don’t know what the Internet was like 20 years ago. Shit was up and down all the time.
I worked on a project back in 2008 where I had to physically haul hardware from Houston to Dallas ahead of Hurricane Ike just to keep a second-rate version of a website running until we got power back at the original office. Latency at the new location was so bad that we were scrambling to reinvent the website in real time to try to improve performance. We ended up losing the client. They ended up going bankrupt. An absolute nightmare.
Getting screamed at by clients. Working 14 hour days in a cramped server room on something way outside my scope.
Would have absolutely killed for something as clean and reliable as AWS. It’s not like it didn’t exist back then. But we self-hosted because it was cheaper.