Every security policy, in organizations of every type and size, is built on education and awareness. Humans are, and will always be, the weakest link in cybersecurity, no matter how many technical solutions and tactical measures the organization implements. The importance of security training and exercises for personnel is undeniable, yet their real-world effectiveness is frequently exaggerated. Furthermore, the proliferation of AI technology is set to make such training and exercises even less effective.
Ineffectiveness of security training and exercises
The business world has gone digital, as have all training and educational activities. Almost every business now defines “training” as having to watch a video and read a bit of text before taking a simple, multiple-choice test that can always be retaken. Everyone passes in the end, even if they don’t pay attention and don’t remember anything afterward.
And that’s how it must be: organizations that invest a lot of money in their recruitment processes cannot afford to filter out new employees or delay their time to productivity with weeks-long educational programs. So, most such training exists solely to meet compliance requirements associated with regulations and standards such as GDPR, PCI DSS, or HIPAA.
As a result, if you gave your staff a basic cybersecurity test based on the training they’d received, some would fail terribly. This is not surprising: they’re employed to be professionals in their own fields, such as finance or human resources, not cybersecurity experts. Even the most basic level of cybersecurity awareness may be out of reach for those unable to comprehend the underlying technology.
The same is true for any form of phishing exercise, which practically every business conducts regularly, especially if it needs to meet compliance standards. If the exercise is completed, it’s considered a success. To make the results look better for audits, such phishing is often made excessively obvious, and those who fail the test face no repercussions. They are simply told that they have failed and that “they must do better,” or they must complete yet another easy automated training that they’ll forget in a week. Just imagine: had this been a genuine phishing attempt, the company might now be facing a catastrophic data breach.
All it takes is one victim
Only a few cybersecurity threats affect nearly every individual in a company, and they are all forms of social engineering, in which the employee, rather than the computer system, is the intended victim. These attacks date back to the 1980s, when Kevin Mitnick demonstrated to the world how easy it is to mislead a worker into granting nearly unrestricted access. More than 40 years later, well into the age of AI and cloud storage, nothing has changed: people continue to fall for the same old tricks.
To comprehend the various forms of security risks that humans face in the digital age, we must think like the attacker. The attacker’s goal is identical to the organization’s goal: to achieve the best possible result at the lowest feasible expense. And so, it makes sense for attackers to invest their time in just two categories of cyberattacks: those designed to reach as wide an audience as possible using the most generic technique, and those designed to succeed against a single target using a personalized strategy. These two categories include, respectively, attacks such as phishing/vishing/smishing vs. spear phishing/whaling/CEO fraud. While a personalized attack aimed at a large number of individuals would produce the best results, it’d simply take far too long to prepare.
That’s why, to someone with a rudimentary understanding of technology, certain phishing attacks seem nonsensical – how can an attacker imagine anyone would fall for them? They’re poorly phrased, appear fake at first glance, and seem to have been put together by an amateur. Unfortunately, if you conduct an exercise with such content and target your company’s employees, you’ll be startled to find that some will fall for it. And the attacker needs only one victim in your organization to wreak havoc.
Employee training helps lower the risk, but it doesn’t remove it. Without any training, 33.2% of employees are likely to click on a phishing email, according to KnowBe4’s 2023 Phishing By Industry Benchmarking Report. A year of regular training is required to lower that percentage to 5.4%; however, such training is not only expensive, it also takes productive time away from employees and impacts retention, and many organizations simply cannot afford the cost and/or the risk.
Phishing and AI – a new era of threats
We now live in the era of ChatGPT and other AI bots. We’re both surprised and pleased that artificial intelligence can edit our photographs and videos, as well as significantly improve the quality of our writing. However, the bad guys are also employing AI. And, as AI improves, we are more likely to be duped by a phishing attempt, even if it’s not aimed directly at us.
A recent Facebook account takeover attack that promotes bitcoin trading is the best example of how this technology is already being used in practice. After gaining access to the victim’s Facebook account, the automated system creates a video based on the victim’s real videos and photographs. The video is a deepfake of the person claiming they were not hacked and touting how much money they made with bitcoin. It uses that person’s real face, voice, and even native language (however obscure), and it’s unbelievably realistic.
Furthermore, the bot sends messages to the victim’s connections in their native language, asking for assistance and informing them that the victim has lost access to the account and needs the contact – who is the next victim – to provide the code that they received via email. Boom, there goes the effectiveness of multi-factor authentication. Surprisingly, even IT professionals fall for it, and this automated attack continues to spread, all based on social engineering and the use of AI.
How will your next phishing email look if this is suddenly the reality? Will it be written in poor English with obviously fake links, or will it employ every trick of the trade the AI can devise to fool the victim? And if it’s that good, will your security/anti-spam system be able to automatically label it as spam, lowering the likelihood that the user clicks on it? And how many of your staff will fall for it? In the coming months, we’ll certainly find the answers to these pressing questions, and we may not like them.
DLP solutions to the rescue
We are living in a world where any type of external communication, through any kind of app, can be used as a highly successful social engineering technique. Be it an email, text message, phone conversation, or video call, the AI can handle it all, as seen in recent attacks. As a result, we have two options. The first is for every single employee to become obsessively paranoid and double-check every received request, which would not just take a lot of time but would also necessitate additional training and exercises, this time with actual repercussions for failure. This isn’t going to happen, because it would significantly affect the company’s actual business and market position.
As a result, we’re left with the second alternative. We must presume that some people may fall for such social engineering attempts; nevertheless, we can render such security incidents harmless by removing the possibility that the victim responds to the attacker’s request by sharing sensitive information. If we remove the victim’s ability to share such information, the attack will ultimately fail.
There are numerous methods for preventing victims from engaging in risky behaviors, but they all revolve around endpoint device security – safeguards implemented directly on the device the victim is using, such as a laptop or a mobile device. We’ve had antivirus software for a long time, and it will detect certain obvious threats such as standard ransomware, but not custom attacks. Firewalls are similarly ineffective at protecting critical information because they either restrict far too much and generate false positives, or catch only the obvious. The biggest weakness of such security technologies is that they focus on specific attack methods rather than on the target itself.
There is only one truly successful class of solutions that addresses the problem of data privacy and data security, and that is data loss prevention (DLP). If DLP tools can successfully recognize sensitive data, block any form of automated sharing, and send alerts to the security team, practically any social engineering attempt or even insider threat action is a sure failure. The victim is unlikely to manually copy critical data such as the company’s intellectual property by reading it and typing it out for the attacker; instead, they will use the operating system’s copy-and-paste functionality, which can be governed by a DLP policy and either blocked outright or, even better, blocked with an alert raised.
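In practice, the recognition step described above often starts with pattern matching. The following Python sketch is a minimal, hypothetical illustration of how a DLP policy might classify a piece of text before allowing a copy-paste or upload; the pattern set, category names, and `policy_decision` helper are invented for this example and do not represent any vendor’s actual API.

```python
import re

# Hypothetical sketch of the pattern-matching step a DLP policy might run
# before permitting a copy-paste or file transfer. Patterns and decision
# logic are illustrative only.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")            # US social security number
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")          # candidate payment card number

def luhn_valid(number: str) -> bool:
    """Luhn checksum: weeds out random digit runs that merely look like cards."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify(text: str) -> list[str]:
    """Return the sensitive-data categories detected in the given text."""
    findings = []
    if SSN_RE.search(text):
        findings.append("ssn")
    if any(luhn_valid(m.group()) for m in CARD_RE.finditer(text)):
        findings.append("credit_card")
    return findings

def policy_decision(text: str) -> str:
    """Block the transfer and raise an alert if anything sensitive is found."""
    findings = classify(text)
    return f"BLOCK+ALERT:{','.join(findings)}" if findings else "ALLOW"
```

A real product layers many more detectors (contextual validation, file fingerprinting, OCR), but the shape of the decision – recognize, then block and alert – is the same.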
Thinking about where to invest next? Think DLP
If you recognize that the world is changing dramatically as a result of AI development and are considering updating your data protection approach in the near future, consider a DLP strategy. You can, of course, invest in further training, but it’ll have the same limited impact on information security as the training you have already implemented. You can invest in AI-generated phishing exercises, but this will only demonstrate that even more of your staff fall victim to malicious hackers; it will not improve the situation. Instead, anticipate that humans will continue to be the biggest vulnerability, that AI will become ever more adept at tricking them, and invest in a stronger DLP system.
Data loss prevention solutions, including device control and USB encryption, also evolve continuously to become even better at preventing your employees from becoming the reason behind a data leak. Solutions like Endpoint Protector also use machine learning, for example to improve data classification – applying the best possible automation when recognizing the types of data that require protection, such as different kinds of personally identifiable information (PII) like social security numbers and credit card numbers, proprietary code, or any other confidential data. As a result, modern DLP software requires next to no configuration: your endpoint DLP learns to recognize your organization’s data in real time and prevents your end users from sharing it with unauthorized parties. DLP technologies have other use cases, too. For example, they protect you not just from data exfiltration resulting from phishing attacks, but also from data leaks caused by malicious insiders.
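The machine-learning-assisted classification mentioned above can be illustrated with a toy example. The sketch below trains a minimal Naive Bayes text classifier, written from scratch in Python, on invented labeled snippets; a real DLP product would use far richer features and models, and the `NaiveBayesClassifier` name, labels, and training data here are purely hypothetical.

```python
import math
from collections import Counter, defaultdict

# Toy illustration of the idea behind ML-assisted data classification in a
# DLP tool: learn, from labeled examples, which tokens tend to appear in
# sensitive documents. Training data and labels are invented for the sketch.
class NaiveBayesClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> token frequencies
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            tokens = text.lower().split()
            self.label_counts[label] += 1
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)

    def predict(self, text):
        tokens = text.lower().split()
        best_label, best_score = None, -math.inf
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + log likelihood with add-one (Laplace) smoothing
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokens:
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesClassifier()
clf.train([
    ("customer ssn and credit card statement", "sensitive"),
    ("quarterly payroll salary records", "sensitive"),
    ("lunch menu for the office party", "benign"),
    ("meeting room booking schedule", "benign"),
])
```

The point is not the model itself but the workflow: once trained, the classifier can flag documents that match no fixed regex pattern yet statistically resemble known sensitive material.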