Q&A: Mark T. Hofmann, on the psychology of cybercrime


Mark T. Hofmann is a highly acclaimed business psychologist, crime and intelligence analyst, and expert in behavioral and cyber profiling. A sought-after speaker, consultant, and trainer in the cybersecurity industry, he has assisted police, government agencies, and NGOs worldwide in identifying and mitigating cyber threats.

We speak with him about cybercrime trends, typical hacker traits, and the dangers of deepfakes. 


 

How does cybercrime profiling differ from other types of criminal profiling?

Cybercrime profiling is a completely different discipline, but some traditional criminal profiling methods can also be applied to cyberspace. For example, the FBI has long-standing methods to analyze threatening letters to determine their seriousness (known as a threat assessment) or to identify perpetrators. Today, these letters take the form of e-mails or encrypted messages, but the underlying principle remains the same. Phenomena such as cyber-stalking and cyber-bullying are also not entirely new, as they are simply modern forms of stalking and bullying.

However, hackers and other offenders in cyberspace are on average much more intelligent and harder to catch than other criminals. While there is a stereotype of serial killers being highly intelligent—à la Dr. Hannibal Lecter—my work and research indicate that people with this type of exceptional intellect are found exclusively among white-collar criminals and cybercriminals. These individuals are often proud of their hacking abilities and view them as a valuable skillset.

 

What are some common personality traits or psychological characteristics of cybercriminals?

Hackers tend to be young, male, well-educated, and of above-average intelligence. They engage in thrill-seeking behavior and enjoy the challenge of beating the system. I would also describe many (but not all) as digital anarchists or hacktivists who aim to “disobey” the system.

However, it’s important to understand that most hackers begin learning the craft at the age of 11 to 15. They may initially commit minor crimes such as spying on classmates, hacking the school system, or committing credit card fraud. Are these 11-year-olds greedy psychopaths? No. Rather, they seek attention, appreciation, confirmation—to be significant, be someone, be seen. In many schools, there is a band, a football team, and a debate club, but there is hardly any sponsorship for 11-year-old IT talents. It’s not like they can get an internship or part-time job at 11. No one takes them seriously. But when they hack a company, suddenly people take them seriously. Now people have to listen.


The National Security Agency (NSA) regularly offers puzzles and crypto challenges, giving young talents the chance to use their skills for good. This approach is exactly right, because otherwise kids learn about hacking on YouTube or the darknet and quickly switch to “the dark side.”

 

What kind of information is needed to build an accurate profile of a hacker?

In cyber profiling, language is key—it is often the only thing that can be analyzed from a psychological perspective. And it is frequently involved in cybercrimes: phishing emails, vishing (voice phishing) calls, ransom negotiations, threatening messages, chats, and darknet forum posts.

While many people are familiar with dialects, which are regional differences in language use, there is also a concept called “idiolect,” which is a personal dialect that acts like a linguistic fingerprint. Combine that with AI, and it’s quite possible to recognize individuals based on their idiolects. However, in all practicality, if you’re able to determine where someone comes from, whether they’re writing in their native language, whether you’re dealing with a group—that’s already helpful.
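
To make the idea of a linguistic fingerprint concrete, here is a minimal, purely illustrative sketch of how two writing samples can be compared automatically. This is not a tool Hofmann describes; it is a common stylometric baseline (character n-gram frequencies with cosine similarity), and the sample texts are invented.

```python
# Illustrative sketch only: comparing writing style with character n-grams.
# The sample texts are invented; a real analysis would need far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_samples = [
    "pls send the payment asap, i will confirm later today",
    "send the docs asap pls, will check them later today",
]
questioned_text = "pls transfer the funds asap, i confirm receipt later"

# Character n-grams capture spelling habits, abbreviations, and punctuation
# quirks, which matters for short messages such as ransom notes or chats.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
vectors = vectorizer.fit_transform(known_samples + [questioned_text])

# Similarity between the questioned text and each known sample.
scores = cosine_similarity(vectors[len(known_samples)], vectors[: len(known_samples)])
print(scores)  # higher values suggest, but never prove, a similar idiolect
```

A real forensic comparison would use much longer texts and many more stylistic features, but the principle is the same: quantify the idiolect, then compare.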

 

You’ve said human error accounts for 90% of successful cyberattacks. Can you elaborate on what you mean by this?

Cyberattacks succeed because of people: people who click on links, open attachments, reveal their passwords on the phone, pick up and plug in USB sticks out of curiosity, fall for honeytraps, talk loudly about sensitive topics at the airport, log on to untrusted Wi-Fi networks, or leave their laptops unlocked in business lounges or on the train. From ransomware to espionage, people are clearly the weakest link. Bruce Schneier once said, “Amateurs hack systems, professionals hack people.” He is damn right.

In 2019, phishing emails were used to steal over 100 million USD from Google and Facebook. If it can happen to them, it could happen to any company.


You’ve spoken about the concept of society becoming a “human firewall” against cybercrime. What do you mean by this?

Even if a company has a super firewall and advanced IT infrastructure, all it takes is for a hacker to call an employee and manipulate them into handing over access credentials, or to pretend to be the boss and order a money transfer. There is nothing you can do technically to prevent that kind of attack. It’s important to understand that cybersecurity is a management task and should be a C-level priority. Cybersecurity is a combination of technical security and human cybersecurity awareness.

 

Do you think deepfakes have the potential to make social engineering attacks more dangerous by using convincing visuals or audio to deceive people?

Absolutely. Deepfakes will bring CEO fraud to a completely new level. Criminals can “steal” someone’s face and/or voice from podcasts, TV interviews, or lectures on YouTube and use this identity to commit CEO fraud or other identity theft. It’s unbelievable how convincing well-made deepfakes are today—you wouldn’t even recognize your own mother.

In Dubai, there was a case where deep voice technology was used to rob a bank of 35 million USD. Hackers “stole” the voice of a bank manager to manipulate an employee into making a transaction. Bank robbery with deep voice technology offers a glimpse into the future.

 

In your experience as a cybercrime analyst, what are some of the most notable changes or shifts in cybercrime trends that you’ve observed in recent years?

To me, the top three trends in cybercrime and social engineering are:

  • Deepfakes (as explained above)
  • ChatGPT, as it enlarges the circle of possible perpetrators and lowers the barrier to entry by giving them the tools they need to produce text for social engineering and disinformation. It can even produce code itself.
  • Crime-as-a-service structures (i.e., the business model of providing cybercriminal tools and services to other criminals) are becoming more professional. They operate like companies, and often more efficiently than many legitimate ones. They have built a real economy in the dark.

 

What are some common myths about cybercrime that need to be debunked, and how can we educate the public to be more aware and informed about these issues?

The worst myth is that small companies believe they won’t be affected by cyberattacks because they are not interesting enough or are too small. That’s wrong; even kindergartens and dentists get attacked. There are two types of companies: the ones that have been attacked and the ones that will be attacked. The only question is whether a company is prepared when someone tries to attack it. That’s a matter of awareness and preparation.

And the most famous myth of all: No, hackers don’t always wear black hoodies.

To educate the general population, discussions about cybercrime need to be more entertaining, more exciting. I speak at many cyber conferences from Berlin to Dubai, where experts talk to experts about expert topics. That’s good, but in the end it’s the receptionist who opens the link or reveals the password. As a speaker, I do my best to get people excited about the topic when they would otherwise not be interested.


With the rise of artificial intelligence, how do you see AI playing a beneficial role in profiling hackers and identifying cybercrime trends? 

In the race between hackers and security providers, both sides are equipped with AI, which accelerates the competition. For example, there is a tool called “Gender Guesser” on Hacker Factor, a forensics research website, that can use a few words of small talk to make a probability statement about the person’s gender, based on data about how men and women differ in writing. 

So, if an AI can already identify my gender from 10 words, imagine what it can do by analyzing 10,000 emails. A linguistic fingerprint? Possible. It’s important to note that AI can give an advantage to offenders but also the police and intelligence agencies.
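
As a rough sketch of how such word-frequency tools work in principle, a classifier of this kind sums weights attached to common words and turns the total into a probability-like statement. The example below is not the actual Gender Guesser algorithm, and the weights are invented purely for illustration.

```python
# Toy illustration of weighted word-frequency scoring. The weights below are
# invented for demonstration and do NOT reproduce the real Gender Guesser model.
import math
import re

HYPOTHETICAL_WEIGHTS = {"with": 0.5, "if": 0.4, "not": 0.3, "the": -0.2, "around": -0.4}

def probability_statement(text: str) -> float:
    """Return a probability-like score in (0, 1) based on weighted word counts."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.5
    raw = sum(HYPOTHETICAL_WEIGHTS.get(word, 0.0) for word in words) / len(words)
    # Squash the average weight into (0, 1) so it reads like a probability.
    return 1.0 / (1.0 + math.exp(-10.0 * raw))

print(probability_statement("If the parcel is not with you, check around the back door"))
```

The real systems use far larger word lists and training data, but the idea is the same: small, unconscious differences in word choice add up to a measurable signal.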

 

Do you think there is a need for government regulation of deepfakes, and if so, what should that regulation look like?

For AI in general, I would say: chances first, regulation second. Europe is discussing banning ChatGPT. I think that is frighteningly stupid.

However, when it comes to deepfakes, I come to a different conclusion. Deepfakes are exactly that: fakes, false identities. From election manipulation to defamation, from cyberbullying to CEO fraud, I can essentially only think of bad things you can do with them, except funny videos. In this case, some kind of verification (or maybe the use of NFTs) might be the answer. Recently there was a viral (fake) picture of the pope in a stylish jacket. Now even journalists have to ask themselves when they see a picture or video: Is this real? We have reached the point where we can no longer believe our eyes.

 


Mark T. Hofmann is an expert in the field of behavioral and cyber profiling. He is a renowned crime and intelligence analyst, business psychologist, and keynote speaker who has been featured on various news platforms, such as CNN, CBS, and 60 Minutes Australia. With a passion for cybersecurity, he aims to inspire people worldwide to become a “human firewall” and combat the rising threat of cybercrime.

Through his keynote speeches, Hofmann delves into the psychology of cybercrime, answering questions such as what motivates hackers and what their latest social engineering techniques are. Hofmann offers valuable resources for individuals who want to learn more about cybersecurity and become better equipped to protect themselves against cybercrime. Visit his website for more information.

 
