Cybersecurity has been prominent in the news of late. Most recently, a new ransomware attack called WannaCry spread like wildfire: within 24 hours it was reported to have infected tens of thousands of computers across 74 countries. Everything from hospital systems and transportation systems to personal desktops was affected.
This attack offered many points of interest for computer security experts. We could talk about better educating users about phishing attacks – widely reported as the initial point of entry for WannaCry – or about the practice of state actors, such as America’s NSA (reportedly the origin of EternalBlue), stockpiling undisclosed internet vulnerabilities and the cyber weapons that exploit them; we could argue both sides of that debate. We could talk about defense strategies, honeypots, file backups and patching. But I’d like to introduce a fairly simple and often overlooked reason why I believe we’re seeing the current attacks.
It’s often asked in Computer Science and Systems Analysis classes, “What is the most expensive phase in the software development lifecycle?” Hopefully, anyone who’s trained in computer technology and IT security would know that the answer is the maintenance phase, but this is often lost in the real world, where there’s a lot of upfront planning and effort to build the next iteration of a corporate website or mobile app.
The problem is that if planning ends at launch or at immediate post-deployment bug fixing, we fail to answer critical questions like:

- Who is responsible for applying security patches and updates, and on what schedule?
- How long will the operating system and platform underneath the application remain supported?
- What is the plan – and the budget – for upgrading or retiring the system when it reaches end of life?
How are these questions relevant to the recent ransomware attack? WannaCry leveraged EternalBlue to exploit a vulnerability (MS17-010) in SMBv1 – an issue that Microsoft had fixed and released a patch for back in March 2017.
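When a vulnerability like this is announced, one practical first step is simply finding which machines on your network still expose the affected service. As a minimal sketch (the host list and timeout are illustrative assumptions, and this checks only TCP reachability of port 445 – the port SMB listens on – not actual patch status), a few lines of Python can flag hosts worth investigating:

```python
import socket

def smb_port_open(host, port=445, timeout=2.0):
    """Return True if the host accepts TCP connections on the given (SMB) port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Hypothetical host list for illustration only.
for host in ["192.0.2.10", "192.0.2.11"]:
    if smb_port_open(host):
        print(f"{host}: SMB port reachable - verify MS17-010 patch status")
```

A reachable port 445 doesn’t prove a host is vulnerable, but it narrows the list of machines whose patch level needs verifying with proper management tooling.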
In theory, everyone would have been running a supported Windows version with the patch applied, and WannaCry would have turned out to be just another annoyance. Instead, it was a failure by companies, governments and individuals to plan for the management of their devices that allowed the WannaCry attack to happen. Whether it’s the old Windows dev server in the office, the MRI unit at the local hospital or the home computer still running Windows XP (all kept as-is because “if it ain’t broke, don’t fix it”), there can be a heavy price to pay for deciding not to upgrade or maintain a device.
A good analogy is buying a new car but failing to account for costs like maintenance and insurance. Once you take possession (launch your new site), the car may run for several thousand miles past the recommended first oil change (patching, updates) and hopefully no one rear-ends you (the cost of risk), all of which saves you money in the short term. Eventually, however, and most likely at the least opportune time, the car’s engine will fail or someone will hit you, leaving you stranded on the side of the road with no way to get where you’re going, and quite possibly with little money on hand to make repairs.
So, if smart planning is a necessary corollary to buying a car, a house or another important big ticket item, shouldn’t we take the same approach with our digital devices and builds?
This ransomware event should teach clients, companies and individuals the importance of addressing computer security issues at their root cause. This can be accomplished by properly planning and allocating resources for the most critical phase of web development – the maintenance phase. This approach will help greatly reduce vulnerabilities and the cost of remediation and incident response. Band-Aid solutions, or treating symptoms as they arise, are the wrong way to go.
My advice is to reach out to your IT people, vendors and support teams for help. Take inventory of applications and devices that don’t have a plan for covering their complete lifecycle, and address holes in this plan, including security and end-of-life/support issues, proactively on your own terms, instead of leaving your systems vulnerable to the next attack.
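To make that inventory concrete, here is a minimal sketch of flagging systems whose vendor support window has already closed. The device list and field names are illustrative assumptions, not a real asset database, though the end-of-support dates shown are Microsoft’s actual published dates:

```python
from datetime import date

# Illustrative inventory; in practice this would come from your asset database.
devices = [
    {"name": "office-dev-server", "os": "Windows Server 2003", "end_of_support": date(2015, 7, 14)},
    {"name": "front-desk-pc",     "os": "Windows 10",          "end_of_support": date(2025, 10, 14)},
    {"name": "mri-workstation",   "os": "Windows XP",          "end_of_support": date(2014, 4, 8)},
]

def unsupported(devices, today=None):
    """Return the names of devices whose vendor support has already ended."""
    today = today or date.today()
    return [d["name"] for d in devices if d["end_of_support"] < today]

# As of the WannaCry outbreak (12 May 2017), two of these three devices
# were past end of support and no longer receiving routine patches.
print(unsupported(devices, today=date(2017, 5, 12)))
# → ['office-dev-server', 'mri-workstation']
```

Even a simple report like this turns a vague worry (“we probably have old machines somewhere”) into a concrete list of systems to upgrade, isolate or retire on your own schedule.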