(ISC)² Secure Summit UK Insights: The Future Impact of AI in Cybercrime
(ISC)²’s two-day Secure Summits bring multi-subject sessions, from hands-on practical workshops to keynotes and panel discussions, featuring local and international industry experts to maximise the learning experience and CPE opportunities.
Serving the entire (ISC)² EMEA professional community with regional events, the Summits offer a wealth of educational value, networking opportunities, and a community forum for like-minded professionals, all of which are FREE to (ISC)² members & (ISC)² Chapter members. Read on for insights from one of our popular Secure Summit UK sessions…
You’re the CEO of an international oil and gas company. Business risk and risk management planning are second nature. The things that keep you awake at night probably involve physical damage to your company’s assets, employee safety and competitor actions. However, what if well-hidden attackers were able to infiltrate your company and manipulate the data for your next oil rig location, influencing the bid you make for the drilling and mining rights of the rig deployment? Speaking at the (ISC)² 2017 Secure Summit UK in London, Dave Palmer, Director of Technology at Darktrace, argued that this will soon be a reality of cyber-crime thanks to the evolution of artificial intelligence.
Dave began by discussing the most effective methods of attack cyber criminals currently use. Direct assaults against firewalls have all but died out; attacks now typically target people. Imposters fool employees and gain a foothold in an organisation’s digital infrastructure. As Dave sees it, these techniques focused on the human element are only set to increase, becoming more sophisticated and effective.
If an attacker gathers enough data on a person, they can personalise phishing emails with relevant information that appears both convincing and legitimate. Dave himself became the target of one such attack, in which the attacker listened in on a conversation he had with a colleague in a café during his lunch break. A day later, the attacker sent him an email purporting to be from that colleague, using details of the conversation to make the email look as though it had come from a legitimate source.
AI-based malware has the potential to read your calendar, your emails and your messaging platforms. With machine learning capabilities, it can understand this information and train itself to identify individual communication styles, including how a person communicates with specific contacts. For example, it would be able to differentiate between the tone you use with your boss and the one you use with your partner. The AI would then be capable of contextually contacting the different people in your working life, replicating your individual style in order to spread itself. These kinds of emails are contextually relevant and appear to come from the sources you would expect, removing the tell-tale signs that would previously have given them away.
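To make the style-learning step concrete, here is a minimal, purely illustrative sketch (not anything demonstrated in the session): a standard scikit-learn text classifier trained on a handful of invented messages to guess which contact a draft “sounds like” it was written for. Every message, label and prediction below is hypothetical.

```python
# Minimal, hypothetical sketch: learn per-recipient writing style from a toy corpus.
# Assumes scikit-learn is installed; all messages and recipient labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (message text, who it was sent to).
messages = [
    ("Please find attached the Q3 figures for review.", "boss"),
    ("I'll have the report on your desk first thing tomorrow.", "boss"),
    ("Running late again, grab me a coffee? x", "partner"),
    ("Can't wait for the weekend, love you", "partner"),
    ("Mate, fancy a pint after the stand-up?", "colleague"),
    ("Cheers for covering the demo yesterday!", "colleague"),
]
texts, recipients = zip(*messages)

# Character n-grams capture tone and habits (punctuation, sign-offs) rather than topic.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, recipients)

# Given a draft, predict which contact it "sounds like" it was written for.
print(model.predict(["Attached are the revised figures for sign-off."]))  # likely 'boss'
print(model.predict(["Fancy grabbing lunch after the sprint review?"]))   # likely 'colleague'
```

In the scenario Dave describes, the difference is simply scale and source: the training data would be a victim’s own sent mail and chat history rather than a toy list.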
This kind of language-recognition AI technology is already being designed and used by organisations to help schedule diary appointments and analyse the language used between employees. Even this basic level of interaction between employees and AI technology provides enough of a foundation on which to build a self-replicating spear-phishing attack.
Pairing this technology with software that can replicate communication styles, such as that seen in Twitter bots, and loading it with ransomware payloads would give cyber criminals the capability to launch mass phishing attacks that are almost undetectable.
With these kinds of capabilities at their fingertips, it’s not hard to imagine a scenario in which attackers embed themselves so deeply in an oil and gas company that they can influence its deployment analysis data.
According to Dave, the cybersecurity profession will need to up its game if it is to contend with the emerging threat of AI as a cyber weapon. At present, AI investment is largely channelled into consumer products rather than business solutions, suggesting we’re likely to see the bad guys make more effective use of AI than the good guys in the short term.
Learn more about this and discover other Secure Summit insights from the session recordings available here.