Facebook explains how its October 4th outage started

Following Monday’s massive outage that took down all of its services, Facebook has published a blog post detailing what happened. According to Santosh Janardhan, the company’s vice president of infrastructure, the outage started with what should have been routine maintenance. At some point on Monday, a command was issued that was supposed to assess the availability of the backbone network connecting all of Facebook’s disparate computing facilities. Instead, the command unintentionally took those connections down. Janardhan says a bug in the company’s internal audit system failed to prevent the command from executing.

That issue caused a secondary problem that turned the outage into the international incident it became. When Facebook’s DNS servers couldn’t connect to the company’s primary data centers, they stopped advertising the border gateway protocol (BGP) routes that tell the rest of the internet how to reach Facebook’s network.

“The end result was that our DNS servers became unreachable even though they were still operational,” said Janardhan. “This made it impossible for the rest of the internet to find our servers.”
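That failure mode can be counterintuitive: the DNS servers kept running, but once their BGP routes were withdrawn, no router on the internet knew where to send packets destined for them. The toy sketch below (not Facebook's actual systems; the prefix, next-hop name, and functions are invented for illustration, using the documentation address range 192.0.2.0/24) models a simplified routing table to show why a withdrawn route makes an otherwise healthy server unreachable.

```python
# Hypothetical sketch of BGP advertisement and withdrawal.
# `routing_table` stands in for the routes the rest of the internet has learned.
routing_table: dict[str, str] = {}  # prefix -> next hop

def advertise(prefix: str, next_hop: str) -> None:
    """A network announces via BGP that it can carry traffic for `prefix`."""
    routing_table[prefix] = next_hop

def withdraw(prefix: str) -> None:
    """The network withdraws the route; other routers forget how to reach it."""
    routing_table.pop(prefix, None)

def reach(ip: str) -> str:
    """Simplified lookup: exact /24 match instead of real longest-prefix match."""
    prefix = ".".join(ip.split(".")[:3]) + ".0/24"
    next_hop = routing_table.get(prefix)
    if next_hop is None:
        raise ConnectionError(f"no route to {ip}: nobody advertises {prefix}")
    return next_hop

# Normal operation: the (hypothetical) DNS server at 192.0.2.53 is reachable.
advertise("192.0.2.0/24", "edge-router-1")
print(reach("192.0.2.53"))

# Outage: the DNS servers lose contact with the data centers and withdraw
# their routes. The server process is still up, but no packets can reach it.
withdraw("192.0.2.0/24")
try:
    reach("192.0.2.53")
except ConnectionError as err:
    print(err)
```

The key point the sketch illustrates is that reachability and liveness are separate properties: withdrawing the route changes nothing on the server itself, yet lookups fail everywhere, which matches Janardhan's description of servers that were "still operational" but impossible to find.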

As we learned partway through the day yesterday, what made an already difficult situation worse was that the outage cut Facebook engineers off from the very servers they needed to fix. Moreover, the loss of DNS functionality meant they couldn’t use many of the internal tools they normally depend on to investigate and resolve networking issues. That meant the company had to physically send personnel to its data centers, a task complicated by the physical safeguards in place at those locations.

“They’re hard to get into, and once you’re inside, the hardware and routers are designed to be difficult to modify even when you have physical access to them,” according to Janardhan. Once it had restored its backbone network, Facebook was careful not to turn everything back on at once, since the surge in power and computing demands could have caused more crashes.

“Every failure like this is an opportunity to learn and get better, and there’s plenty for us to learn from this one,” said Janardhan. “After every issue, small and large, we do an extensive review process to understand how we can make our systems more resilient. That process is already underway.”

