
Understanding Machine Learning’s Role in Digital Security

Artificial intelligence is changing the world, and nowhere is this more evident than in cybersecurity. To leverage it yourself, however, you must first understand it.

If popular culture is to be believed, artificial intelligence is both a panacea and a curse. Machines, we’re told, have the capacity to solve nearly any problem, from customer relations to content creation. As is so often the case, this is hyperbole. 

Machine learning is certainly powerful. And AI has the potential to be one of the most disruptive technologies since the birth of the Internet. At the same time, overreliance on technology can lead to some rather unfortunate consequences.

Facebook is the clearest example. The social network has trumpeted how it uses AI to detect, analyze, and remove objectionable content, and how machine learning helps it deliver the best, most relevant content to its users.

One might expect, then, that Facebook would be free of hate speech. That posts calling for violence and racial segregation would have no place on the platform. Unfortunately, one would be mistaken. 

Per a study released earlier this year by the Anti-Defamation League, a leading anti-hate organization, hatred and white supremacy still thrive on the platform. Meanwhile, according to a report by BuzzFeed News, the platform continually silences Black Lives Matter activists. On the surface, it may seem that Facebook is promoting and supporting racism and violence.

It’s far likelier, however, that the social network is leaning too heavily on an automated platform that understands neither nuance nor subtlety. Certainly, it has a team of human moderators as well, but one that is consistently short-staffed, stressed, and running on fumes. And here’s where the connection to the cybersecurity space comes in.

Through AI platforms, businesses have the capacity to establish a sort of digital immune system around their infrastructure. AI-based cybersecurity tools can flag suspicious activity automatically, identify and mitigate malware without requiring signatures or definitions, and learn, over time, what constitutes ‘normal’ behavior for users on a network. These capabilities are invaluable to security professionals, making their jobs significantly easier.
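To make that last point concrete, here is a minimal sketch of the ‘learn what is normal, flag the rest’ idea, using scikit-learn’s IsolationForest. The features (login hour, data transferred, failed logins) and values are hypothetical stand-ins for whatever telemetry a real platform would collect; this illustrates the concept, not any particular vendor’s implementation.

# A minimal sketch of anomaly detection on user activity using
# scikit-learn's IsolationForest. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated baseline of routine user activity:
# columns are [login_hour, megabytes_transferred, failed_logins]
normal_activity = np.column_stack([
    rng.normal(10, 2, 1000),    # logins cluster around mid-morning
    rng.normal(50, 15, 1000),   # typical data transfer volumes
    rng.poisson(0.2, 1000),     # failed logins are rare
])

# Learn what "normal" looks like; contamination is the assumed share of anomalies
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# Score new events: one routine login, one 3 a.m. bulk transfer with many failures
new_events = np.array([
    [9.5, 48.0, 0.0],
    [3.0, 900.0, 12.0],
])
print(model.predict(new_events))  # 1 = looks normal, -1 = flagged for review

The routine login comes back as normal while the off-hours bulk transfer is flagged for a human analyst to review, which is exactly the division of labor the rest of this piece argues for.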

At the same time, it cannot replace human actors.

“Where AI and machine learning can get you into trouble is if you are reliant on it as an oracle of everything,” Merritt Maxim, Security Research Director at analyst firm Forrester, told tech publication ZDNet.

“If the inputs are bad and it’s passing things through it says are okay, but it’s actually passing real vulnerabilities through because the model hasn’t been properly tuned or adjusted — that’s the worst case because you think you’re fully protected because you have AI. You can’t assume AI and machine learning are going to solve all the problems.”

There’s also the fact that, just as machine learning algorithms can be used to better protect enterprise systems, so too can they be leveraged by criminals. We may well be on the verge of another cybersecurity arms race, as each side attempts to develop AI capable of defeating the other. For anyone who’s spent any time in the security space, that’s no great surprise.

It’s always been an arms race — the correct application of AI simply makes things easier on the good guys. 
