By Rodrigo Loureiro, Cybersecurity expert
Things will become more dystopian if organisations continue to ignore rampant problems instead of dealing with the uncomfortable world we’ve created, Rodrigo Loureiro writes.
Many businesses are hesitant to allow cybersecurity employees to use AI tools in their work, fearing the field is unregulated and still underdeveloped, and key thinkers from various industries recently signed an open letter demanding a halt to AI experiments more advanced than GPT-4.
Some even say the letter isn’t enough, and society isn’t ready to handle the ramifications of AI.
Unfortunately, Pandora’s box has already been opened, and those pretending we can reverse any of these innovations are delusional.
It’s not a new invention, either: we’ve been interacting with limited models for years.
Can you count the times you’ve used a website’s chatbot, your smartphone assistant, or an at-home device like Alexa?
AI has infiltrated our lives just as the internet, smartphones, and the cloud did before it. Fear is justifiable, but companies should direct that fear at cybercriminals and the increasing sophistication of their attacks.
Outgunned and outsmarted
Hackers using ChatGPT are faster and more sophisticated than before, and cybersecurity analysts who don’t have access to similar tools can very quickly find themselves outgunned and outsmarted by these AI-assisted attackers.
They’re using ChatGPT to generate phishing emails, malware, and encryption tools, and even to build dark web marketplaces.
The possibilities for hackers to use AI are endless, and, as a result, many analysts are also resorting to the unauthorised use of AI systems just to get their job done.
According to HelpNet Security, 96% of security professionals know someone using unauthorised tools within their organisation, and 80% admitted they use prohibited tools themselves.
This shows that AI is already a widely used asset in the cybersecurity industry, largely out of necessity.
Survey participants even said they would opt for unauthorised tools because of better user interfaces (47%), more specialised capabilities (46%), and more efficient work (44%).
Fatal flaws that can be exploited
Corporations are scrambling to establish governance around AI, but while they do so, their employees are clearly defying the rules and possibly jeopardising company operations.
According to a Cyberhaven study of 1.6 million workers, 3.1% have input confidential company information into ChatGPT. Although that figure seems small, 11% of users’ queries include private information.
This can include names, social security numbers, internal company files, and other confidential information.
ChatGPT learns from every conversation it has with its users, and it can regurgitate user information if probed correctly.
This is a fatal flaw for corporate use considering how hackers can manipulate the system into giving them previously hidden information.
More importantly, when incorporated into a corporate server, the AI will also learn the security mechanisms the company has in place.
Armed with that information, an attacker could successfully obtain and distribute confidential information.
We can’t halt innovation
Whether it be the cloud or the internet, the integration of new technologies has always caused controversy and hesitation.
But halting innovation is impossible when criminals have gained access to advanced tools that practically do the job for them.
To properly address this threat to our society’s security, companies must apply existing governance rules to AI. Reusing historically proven procedures would allow companies to catch up with their attackers and eliminate the power imbalance.
Streamlined regulation among cybersecurity professionals would allow companies to oversee what tools employees are using, when they are utilising them, and what information is being input.
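As a minimal illustration of what overseeing inputs could look like in practice, a company might screen prompts for sensitive data before they leave the corporate network. The patterns below are hypothetical examples sketched for this article, not a complete data-loss-prevention ruleset:

```python
import re

# Hypothetical patterns a company might flag before a prompt is sent
# to an external AI tool; a real deployment would use a far fuller
# data-loss-prevention ruleset.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US social security numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with placeholders before the prompt
    leaves the corporate network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

A screening layer like this is the kind of control a contract or governance policy could mandate, while still letting analysts use the tools they need.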
Contracts between technology providers and organisations are also common for corporate cloud usage and can be applied to the nebulous sphere of AI.
We can only create safe, controlled environments
We’ve passed the point of no return, and adopting AI critically is our only way to live in an AI-driven world.
Heightened innovation, increased public accessibility, and ease of use have given cybercriminals the upper hand that’s hard to reverse.
To turn things around, companies must embrace AI in a safe, controlled environment.
The advanced tech is almost uncontrollable, and cybersecurity analysts need to learn how it can be utilised responsibly.
Employee training and the development of enterprise tools would strengthen cybersecurity procedures until an industry giant like Microsoft transforms the industry with the likes of its recently announced security analysis tool, Security Copilot.
In the meantime, companies must stop sticking their heads in the sand, hoping for reality to change.
Things will become more dystopian if organisations continue to ignore rampant problems instead of dealing with the uncomfortable world we’ve created.
Rodrigo Loureiro is a cybersecurity expert. He serves as the CEO of NewPush and the founder and managing partner at CyberVerse Advisors based in Washington, DC.
At Euronews, we believe all views matter. Contact us at email@example.com to send pitches or submissions and be part of the conversation.