Do you remember the night you signed in to the census website and seamlessly lodged your personal form without any errors or downtime? Sadly, we don’t either.
You might remember the exact opposite, of course, but what you may not have heard is just how easily the digital collapse could have been prevented with a disaster recovery plan.
The timeline of the census
Before looking closely at the current census catastrophe, it pays to take a moment and reflect on how we ended up in this situation to begin with. There are two crucial timelines: the years leading up to the census itself, and what exactly happened in the hours surrounding the demise of the online census on August 9, 2016.
Some key points from the years leading up to the 2016 census:
How sure was the ABS that it wouldn’t be hacked and that its systems could handle the data load? Beyond that: why wasn’t a redundancy plan put in place regardless?
As experts in the field, we don’t promise that you can’t be hacked. Instead, we look at what can be done to mitigate the damage through a well-thought-out redundancy strategy that takes all single points of failure into account.
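To make that concrete, here is a minimal sketch of the simplest redundancy primitive: failing over to a standby endpoint when the primary goes down. The endpoint names and the health-check callback are illustrative assumptions, not part of any real census architecture.

```python
# Hypothetical sketch: route traffic to the first healthy endpoint in
# priority order, so a single failed server is not a single point of failure.

def serve(request, endpoints, is_healthy):
    """Return which endpoint handled the request, trying each in order."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return f"{endpoint} handled {request}"
    raise RuntimeError("all endpoints down: no redundancy left")

# Simulate the primary failing; traffic falls through to the standby.
down = {"primary.example.com"}
result = serve(
    "form-submit",
    ["primary.example.com", "standby.example.com"],
    lambda endpoint: endpoint not in down,
)
print(result)  # standby.example.com handled form-submit
```

A real deployment would express the same idea through load balancers, DNS failover, or multi-region hosting rather than application code, but the principle is identical: every component needs a healthy alternative ready to take over.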
Where the census went wrong
There is no rock-solid information about exactly what went wrong with the census in the hours before the website was pulled, and different sources report varying details. Some potential reasons reported across a number of timelines include:
One curious aspect is the VPN warnings on the census website. If any of the above reports are true, there’s a small chance that people logging on to the website through VPNs from their home networks collectively created a false indication of a massive off-shore DDoS attack.
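The mechanism behind that hypothesis is easy to sketch: a monitor that classifies traffic as off-shore purely by source IP could mistake domestic users behind overseas VPN exit nodes for a foreign attack. The IP ranges below are documentation-only addresses used as stand-ins, and the threshold logic is an assumption for illustration.

```python
# Hypothetical sketch: naive geo-IP classification of request traffic.
# Domestic users routed through an overseas VPN exit node appear foreign,
# inflating the "off-shore" ratio that a DDoS monitor might alarm on.
import ipaddress

# Assumed, illustrative "domestic" range; real geo-IP databases are far larger.
DOMESTIC_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def is_offshore(ip: str) -> bool:
    """Classify an address as off-shore if it matches no domestic range."""
    addr = ipaddress.ip_address(ip)
    return not any(addr in net for net in DOMESTIC_RANGES)

def offshore_ratio(request_ips) -> float:
    """Fraction of requests whose source IP looks off-shore."""
    offshore = sum(1 for ip in request_ips if is_offshore(ip))
    return offshore / len(request_ips)

# One direct domestic user plus three domestic users behind a VPN exit
# hosted overseas (198.51.100.x): three quarters of traffic looks foreign.
traffic = ["203.0.113.5", "198.51.100.7", "198.51.100.8", "198.51.100.9"]
print(offshore_ratio(traffic))  # 0.75
```

Under this assumption, entirely legitimate domestic load could cross an "off-shore attack" threshold, which is one reason IP origin alone is a weak signal for attack detection.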
Perhaps – despite a lack of indication on Arbor’s Digital Attack Map – the census did experience massive domestic and international attacks as indicated by the ABS.
In an August 10, 2016 press release, the ABS stated that it experienced “four denial of service attacks yesterday of varying nature and severity” and took the website down after the fourth to ensure the protection of user data.
The truth is that we may never know what happened, and this is the beauty of the lesson available here. We can’t predict the future or what may take a network offline, we can only prepare for it with an effective and thoroughly planned solution.
An ounce of prevention is worth a pound of cure
So what can we learn from the events that have transpired? Statistically speaking, this was nothing short of a failure, and a serious lesson in prevention for IBM and the ABS. The reputation of these established organisations is cracking in the eyes of the public, illustrating the need for greater protection and security surrounding the collection of information.
The best we can hope for is that the people in charge of collecting our most personal and intimate information – the very details of our existence – take greater care in safeguarding our data, privacy and future.
If you’d like to know what plans your business can put in place to protect itself in the event of a system failure, reach out to our team of experts today. We at NetCraft provide everything from a full site assessment to the hardware and engineering support needed to create and maintain the most robust IT system available.