I’ve written before about the dangers of socially engineered cyber attacks and how psychological trickery can be one of the most effective tools in a cybercriminal’s armoury. In that blog, from March 2020, I voiced specific concerns about social engineering in the context of the (then new) COVID-19 pandemic.
To expand on this important topic, I recently worked with colleagues at CyberCube to conduct extensive research into the following:
- Why does social engineering continue to be so effective as a component of cybercrime?
- How can social engineering techniques be categorised?
- What resources are criminals investing in to innovate in this area today?
- What sort of impacts (both positive and negative) should we expect as a result?
The findings from this research have now been published in a new paper entitled “Social Engineering, blurring reality and fake”, which is available here.
Social Engineering Sea Change
In researching this new paper, it became obvious that, as with many other areas of IT innovation, Artificial Intelligence (AI) and Machine Learning (ML) are intersecting with developments in social engineering technology. The two are now so intertwined that they are creating a “sea change” in this area, the consequences of which could be catastrophic for the targets of cybercrime.
One area of discussion in the paper concerns the balance between “technical feasibility” and “economic viability”: essentially, a study of how technical solutions eventually become easy to deploy and available to the masses. As we enter 2021, advanced social engineering technology is becoming more accessible and easier to use than ever before.
In addition, criminals are investing heavily in a few areas of advanced social engineering technology that should be on the radar of anybody concerned with the global threat landscape (and that should be everybody!). Two of these areas will not come as a surprise to anybody who has a general interest in technology or in cyber security.
It’s Not Deep Fake News!
“Deep Fake” technology (essentially, the creation of realistic video that simulates an individual doing or saying things they may never have done) and “Deep Voice” technology (the use of computers to mimic the vocal characteristics of a target) have both received quite a bit of media coverage over the past few years. The paper dives into where these technologies currently are, in terms of capability, and discusses possible use cases for both.
On a slightly different note, “Social Profiling” is examined as a potential source of major heartache for future targets of cybercrime. The use of digital and social media to gather metadata about us, with AI then building accurate and intrusive profiles from which attacks can be executed, is a new and, frankly, terrifying development in this area.
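To illustrate how low the technical bar for this kind of profiling already is, here is a minimal Python sketch. Everything in it is hypothetical: the sample posts are invented, the `build_profile` helper is my own, and crude keyword counting stands in for the AI-driven analysis the paper actually describes. It is a sketch of the idea, not anyone’s real tooling.

```python
from collections import Counter
from datetime import datetime
import re

# Hypothetical sample of public posts; in a real attack these would be
# harvested at scale via social media APIs or page scraping.
posts = [
    {"time": "2020-11-02T08:15:00",
     "text": "Morning coffee before the finance team standup @alice"},
    {"time": "2020-11-02T18:40:00",
     "text": "Great quarter-end close, proud of the team! #accounting"},
    {"time": "2020-11-03T08:05:00",
     "text": "Flight booked for the Chicago conference next week"},
]

def build_profile(posts):
    """Derive a crude behavioural profile from public posts."""
    hours = Counter()     # when is the target habitually online?
    words = Counter()     # recurring topics -> pretext material
    contacts = Counter()  # @mentions -> plausible impersonation targets
    for p in posts:
        hours[datetime.fromisoformat(p["time"]).hour] += 1
        for w in re.findall(r"[a-z']+", p["text"].lower()):
            if len(w) > 4:  # crudely skip short stopwords
                words[w] += 1
        contacts.update(re.findall(r"@(\w+)", p["text"]))
    return {
        "active_hours": [h for h, _ in hours.most_common(3)],
        "topics": [w for w, _ in words.most_common(5)],
        "contacts": [c for c, _ in contacts.most_common(3)],
    }

print(build_profile(posts))
```

Even this toy version surfaces the raw material of a convincing pretext: when the target is online, what they talk about, and who they interact with. Scale it up with real ML models and thousands of posts, and you get the accurate, intrusive profiles described above.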
This area of technology is one that fascinates me, and it is not all “doom and gloom”: the report also discusses some of the positive and constructive ways in which these new technologies can be used. As a minimum, insurers and cyber defenders should track progress closely and ensure that their risk management frameworks, security strategies, analytics tools and catastrophe models take this emerging threat into consideration.