AI and Cybersecurity: What You Need to Know
While artificial intelligence (AI) could be a world-changing technology, we’ve also seen some dire warnings about the dangers it might pose. Regardless of what you think about those warnings, as an Apple admin you do need to be mindful of the particular—and real—threats that AI poses to IT and information security here and now. Here’s what you need to know about AI and cybersecurity and what you can do to mitigate the risks.
When End Users Experiment with AI
By now, even average end users are experimenting with generative AI—ChatGPT, Google's Bard, and the like—to draft emails or reports. But even such seemingly innocuous experiments entail some risk.
For one thing, it’s possible that, in the course of composing a prompt for an AI engine, such users will inadvertently include proprietary information. Many generative AI services retain what users submit and may use it to train their models, so anything your users type into a prompt, including your organization’s intellectual property or trade secrets, could potentially resurface in responses to other parties.
For another, users may take AI output and use it verbatim, even though there’s no guarantee that it’s either original or accurate. That output could well be lifted from other sources without attribution. If your users then present it as their own words, that could be plagiarism at best and copyright infringement at worst, which in turn could prove embarrassing for your organization and possibly costly.
There’s a special class of users who are at particular risk from AI: the people on your staff who write scripts or code. AI has already proven itself capable of generating basic code. But there’s no guarantee that the code it writes, even in response to the most innocent of prompts, will be safe for the coder or potential users. It’s very possible that AI-written scripts will either violate your organization’s security standards or ask users to do something dangerous.
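What might “dangerous” look like in practice? Here’s a minimal, illustrative sketch (in Python; the file name and pattern list are assumptions for the example, not a vetted ruleset) of the kind of automated first pass a reviewer could run over an AI-generated shell script before anyone executes it, flagging things like piping a download straight into a shell or disabling Gatekeeper.

```python
# review_script.py -- an illustrative pre-review check for AI-generated
# shell scripts. The patterns below are examples only; they don't replace
# a human security review.
import re
import sys
from pathlib import Path

# Hypothetical red flags an Apple admin might screen for.
RISKY_PATTERNS = {
    r"curl[^|\n]*\|\s*(sudo\s+)?(ba|z)?sh": "pipes a remote download straight into a shell",
    r"spctl\s+--master-disable": "disables Gatekeeper",
    r"csrutil\s+disable": "disables System Integrity Protection",
    r"sudo\s+rm\s+-rf\s+/": "recursively deletes from the filesystem root",
    r"chmod\s+777": "grants world-writable permissions",
}

def review(path: Path) -> list[str]:
    """Return human-readable findings for any risky patterns in the script."""
    text = path.read_text(errors="ignore")
    findings = []
    for pattern, reason in RISKY_PATTERNS.items():
        if re.search(pattern, text):
            findings.append(f"{path.name}: {reason}")
    return findings

if __name__ == "__main__":
    issues = review(Path(sys.argv[1]))
    for issue in issues:
        print("FLAG:", issue)
    sys.exit(1 if issues else 0)
```

A check like this catches only the obvious cases; the point is simply that machine-generated scripts deserve at least the same scrutiny as code written by a stranger.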
As an admin, you also need to remember that AI is everywhere these days. In addition to those generative engines, there are AI browser extensions, which make AI that much more accessible. And there are AI agents such as Auto-GPT, which, instead of just generating requested text output, can be given tasks, from building a website to ordering a pizza. As you might guess, such agents pose unique security challenges of their own.
"We need to take the possibility of autonomous AI agents becoming a reality within our enterprises seriously." Maria Markstedter, Azeria Labs
When Bad Actors Adopt AI
But the real dangers of AI to IT lie not so much in how end users employ it as in how bad actors are already using it to target those users.
That starts with using AI to identify those users’ vulnerabilities. By analyzing vast amounts of data, including social media posts and other personal information, attackers can use AI to identify patterns and weaknesses that could be exploited in social engineering attacks.
Advances in AI have already enabled cybercriminals to create sophisticated phishing attacks. Machine learning algorithms can gather information about potential victims (from social media profiles, online behavior, and public data) and then craft personalized and convincing phishing emails that are that much harder to distinguish from genuine communications. AI can also iterate on those attacks quickly, producing variations at a volume and speed that could evade both user wariness and IT tools.
AI makes another kind of vulnerability an even greater risk: bad actors could use it to search open-source software for security holes. As the business world increasingly relies on such code and the collaborative platforms it lives on (such as GitHub), those with malicious intent could seek to weaponize both. AI-powered tools could scan and analyze vast amounts of open-source code, using machine learning algorithms to identify patterns, anomalies, and potential security gaps. Using AI, bad actors could uncover vulnerabilities that might otherwise have remained undetected.
AI can also be used to craft the code that exploits those vulnerabilities. According to some reports, AI-written malware has already appeared in the wild. (At least one tool sold on the dark web, WormGPT, is devoted to such work.) Thanks to AI’s ability to rapidly rewrite code, such malware can, in theory, be polymorphic, meaning that (like the phishing attacks described above) it can change itself over time. If security tools detect one version, AI-based malware could morph into new forms those tools know nothing about.
Finally, machine-learning systems themselves are subject to attack, something like a supply-chain attack, so that the output they produce is compromised. There have been enough such attacks already that MITRE’s ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) program has been able to collect multiple examples and classify their strategies.
What You Can Do
That’s a lot of potential threats, many of them already happening now. What can you, as an IT admin, do to address them?
Enlist your developers
Don’t let coders rely on AI for anything beyond initial prototyping, and make sure they test the end results. They should write all the important bits themselves and carefully review any machine-generated code that remains.
Vet all open-source code used in your projects.
Encourage devs to follow secure coding practices: code reviews to identify potential vulnerabilities, static code analysis, and development frameworks and libraries endorsed by reputable organizations. (A minimal sketch of automating part of that work follows this list.)
Train developers—like other end users, only more so—about the risks of social engineering attacks. Encourage them to be cautious about interacting with unknown individuals and to verify requests for access or code modifications.
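To make the vetting and static-analysis suggestions above concrete, here’s a minimal sketch of a check a dev team might run locally or in CI. It assumes the pip-audit and bandit tools are installed; the file paths and invocations are illustrative, not a complete pipeline.

```python
# audit_checks.py -- a minimal sketch of automating two practices from the
# list above: vetting open-source dependencies and running static analysis.
# Assumes pip-audit and bandit are installed; paths are illustrative.
import subprocess
import sys

def run(cmd: list[str]) -> int:
    """Run a check, letting its output stream to the console, and return its exit code."""
    print(f"--- running: {' '.join(cmd)}")
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    failures = 0
    # Audit declared dependencies against known-vulnerability databases.
    failures += run(["pip-audit", "-r", "requirements.txt"]) != 0
    # Static analysis of your own (and any AI-generated) Python code.
    failures += run(["bandit", "-r", "src"]) != 0
    sys.exit(1 if failures else 0)
```

Wiring a script like this into your build means the vetting happens every time, not just when someone remembers to do it.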
Expand user education
You’re likely already teaching users how to detect and thwart phishing attacks. Add to that training an explanation of the new strategies that AI is introducing to the phishing playbook. Emphasize the importance of verifying the authenticity of emails and attachments before interacting with them.
Encourage users to review and adjust their privacy settings on social media platforms, minimizing the amount of personal information that they make publicly available.
Deploy strong authentication mechanisms, such as two-factor or biometric authentication, to blunt the impact of AI-driven social engineering attacks. (A bare-bones sketch of one such second factor follows this list.)
If users are using AI themselves, make sure they know not to include any proprietary company data in prompts and to thoroughly vet and rewrite any output they get.
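As a concrete illustration of the second-factor point above, here’s a bare-bones TOTP sketch using the third-party pyotp library. In practice your identity provider or MDM handles this; the enrollment flow, secret storage, and account names here are simplified assumptions.

```python
# totp_demo.py -- a bare-bones sketch of a time-based one-time password (TOTP)
# second factor, using the third-party pyotp library (pip install pyotp).
# Secret handling and enrollment are simplified for the demo.
import pyotp

# In a real deployment the secret is generated once per user at enrollment
# and stored server-side; here we just create one on the fly.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# This URI can be rendered as a QR code for the user's authenticator app.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleOrg"))

# At login, the user types the six-digit code from their authenticator app;
# verify() checks it against the current time window.
code = input("Enter the 6-digit code: ")
print("Second factor accepted" if totp.verify(code) else "Second factor rejected")
```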
Double down on standard security
Implement role-based access control that restricts data access according to users’ roles and responsibilities, and regularly review and update those privileges to prevent unauthorized access to data. (See the sketch after this list for the basic idea.)
Stay up-to-date with the latest security patches and updates for both open-source libraries and the Apple ecosystem.
Deploy an advanced security solution that can identify malicious activities and proactively protect Apple devices.
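To illustrate the role-based access control idea above, here’s a bare-bones sketch. The role names, permissions, and users are assumptions for the example; a real deployment would tie this to your directory service and identity provider.

```python
# rbac_demo.py -- a bare-bones sketch of role-based access control.
# Roles, permissions, and users here are illustrative assumptions.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "helpdesk": {"read:device-inventory"},
    "engineer": {"read:device-inventory", "read:source"},
    "admin":    {"read:device-inventory", "read:source", "write:policies"},
}

@dataclass
class User:
    name: str
    role: str

def can(user: User, permission: str) -> bool:
    """Return True if the user's role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(user.role, set())

if __name__ == "__main__":
    alice = User("alice", "helpdesk")
    print(can(alice, "read:device-inventory"))  # True
    print(can(alice, "write:policies"))         # False
```

Keeping the mapping of roles to permissions this explicit is what makes the regular reviews mentioned above practical: you can see at a glance who can touch what.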
About Kandji
Kandji is the Apple device management and security platform that empowers secure and productive global work. With Kandji, Apple devices transform themselves into enterprise-ready endpoints, with all the right apps, settings, and security systems in place. Through advanced automation and thoughtful experiences, we’re bringing much-needed harmony to the way IT, InfoSec, and Apple device users work today and tomorrow.
See Kandji in Action
Experience Apple device management and security that actually gives you back your time.