AI is rapidly advancing, bringing a whole new way of doing business. While it's exciting to see, it can also be
alarming when you consider that attackers have just as much access to AI tools
as you do. Here are a few monsters lurking in the dark that we want to shine
a light on.
Doppelgängers In Your Video Chats - Watch Out For Deepfakes
AI-generated deepfakes have become
scarily accurate, and threat actors are using that to their advantage in social
engineering attacks against businesses.
For example, a security vendor recently observed an incident in which an employee of a cryptocurrency foundation
joined a Zoom meeting with several deepfakes of known senior leaders at their
company. The deepfakes instructed the employee to download a Zoom extension in order to
access their microphone, paving the way for a North Korean intrusion.
For businesses, these types of scams are
turning existing verification processes upside down. To spot a deepfake, look for
red flags such as facial inconsistencies, long silences, or strange lighting.
Creepy Crawlies In Your Inbox - Stay Wary Of Phishing E-mails
Phishing e-mails have been a problem for
years, but now that attackers can use AI to write e-mails for them, the
obvious tells of a suspicious e-mail, like bad grammar or spelling errors,
are no longer a reliable way to spot them.
Threat actors are also integrating AI
tools into their phishing kits to translate landing pages and e-mails into
other languages, helping them scale their phishing campaigns.
However, many of the same security measures
still apply to AI-generated phishing content. Extra defenses like multifactor
authentication (MFA) make it much harder for attackers to get through, since
they're unlikely to also have access to an external device like your cell
phone. Security awareness training is still extremely useful for reducing
employee risk, teaching staff other red-flag indicators to look for, such as
messages expressing urgency.
Skeleton AI Tools - More Malicious Software Than Substance
Attackers are also riding the wave of AI's popularity
to trick people into downloading malware. We frequently see threat
actors tailoring their lures and customizing their attacks to take advantage of
current events or even seasonal fads like Black Friday. So, attackers
using things like malicious "AI video generator" websites or fake, malware-laden
AI tools come as no surprise. These fake AI "tools" are built
with just enough working functionality to look legitimate to the unsuspecting
user - but underneath the surface, they're chock-full of malware.
For instance, a TikTok account was
reportedly posting videos showing viewers how to install "cracked software" that bypasses
licensing or activation requirements for apps like ChatGPT via a PowerShell
command. In reality, the account was running a malware distribution
campaign, which researchers later exposed.
Security awareness training is key for
businesses here, too. A reliable way to protect your business is to ask your MSP
to vet any new AI tools you're interested in before you download them.
Ready To Chase The AI Ghosts Out Of Your Business?
AI threats don't have to keep you up at night. From deepfakes to phishing to malicious "AI tools," attackers are getting smarter, but the right defenses will keep your business one step ahead.
Schedule your
free discovery call today and let's talk through how to protect your team from
the scary side of AI ... before it becomes a real problem.