To better explain the current scenario in data security, let's look at a lesson from World War II. During the war, Americans were trying to make their aircraft less vulnerable to damage. They began by studying the parts of returning planes that showed clusters of bullet holes, assuming that armouring those parts would improve survivability.
Any fan of the movie Moneyball will tell you that statistics must be used with precision, and the statisticians strategically pointed out that if an aircraft could return with heavy damage in a specific area, that area probably did not need extra protection. The real question was what had actually brought down the planes that never came back, and it was those parts that should eventually be armoured.
When it comes to data security, the same flawed logic of armouring only the visible, recurring damage is carried out, not in a few cases but in all of them. Just because a breach is obvious, with multiple visible points of damage, does not mean its root cause can safely be ignored. The vulnerability often starts at the basic layout of the network, which is left unguarded, unwatched and unarmoured.
The root causes of data security breaches that are most frequently ignored include: disabling security updates, disabling scheduled virus scans, elevating user privileges as an easy fix, giving data sync access to the users who need it least, and sharing e-mail passwords with end users even though mail clients are already configured on their designated machines. The list is endless. What makes it worse is the authority handed to the IT department along with a death note stating that any downtime is your failure alone, which only makes me more melancholy. It means that in a binary choice between quality and no downtime, network support chooses no downtime. If it really were that binary, I don't think the choice would be worth making.
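To show how some of these ignored root causes can at least be surfaced, here is a minimal audit sketch, assuming a Windows environment and only the standard built-in tools (sc, schtasks, net). The checks and output parsing are illustrative, not a substitute for real endpoint management, and the function names are my own.

```python
# Minimal hygiene-audit sketch (illustrative, Windows-only).
# Flags a few of the ignored root causes listed above.
import subprocess

def run(cmd):
    """Run a command and return its stdout as text, or '' on failure."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout
    except OSError:
        return ""

def updates_service_running():
    # 'wuauserv' is the Windows Update service; 'RUNNING' appears in sc output when it is active.
    return "RUNNING" in run(["sc", "query", "wuauserv"])

def defender_scan_task_present():
    # Scheduled Defender scans normally appear among the registered scheduled tasks.
    return "Windows Defender" in run(["schtasks", "/query", "/fo", "LIST"])

def local_admins():
    # An over-broad membership list here is the 'elevated privileges as an easy fix' smell.
    return run(["net", "localgroup", "Administrators"])

if __name__ == "__main__":
    print("Windows Update service running:", updates_service_running())
    print("Defender scheduled scan task found:", defender_scan_task_present())
    print("Local Administrators group:\n", local_admins())
```

Even a rough check like this makes the point: the failures that lead to breaches are usually visible long before the breach, if anyone is assigned to look.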