Written By

Grant Bourzikas
Chief Security Officer, Cloudflare

This article is part of: Centre for Cybersecurity


  • AI is stirring a revolution larger than anything ever witnessed.
  • If we fail to clarify the conceptual versus the tangible for businesses and consumers, we will suffer a massive disconnect.
  • We must educate ourselves about AI to recognize and address the potential signal flares or become low-hanging fruit for hackers to exploit; we can’t let innovation and excitement outweigh security and resilience.

While overall global VC investments plummeted in 2023 by 38%, funding for AI startups soared, surpassing $50 billion (up 9% from 2022). At times, Wall Street may suffer from shiny object syndrome, but it has one thing right: AI is far from a flash in the pan. Whether you consider it the next gold rush or the most transformational tech trend since the World Wide Web, it is stirring a revolution larger than anything ever witnessed.

But we are at a critical inflexion point: if we fail to clarify the conceptual versus the tangible for businesses and consumers, we will suffer a massive disconnect. ‘Real’ AI is rarely understood and often elicits visceral reactions that range from the fantastical to doom and gloom. As AI infiltrates every industry and sector and permeates the lives of everyday people, serious concerns are being raised, misunderstood and ignored.

To return to our earlier analogies: while the gold rush presented incredible opportunities for wealth, the failure to understand its overall impact – and to regulate or quantify the risk it presented – had intense repercussions. It attracted less desirable crowds, like crooks; it had a severe environmental impact, clogging rivers, causing mass deforestation and polluting soil with chemicals from the mining process; and it ultimately resulted in higher prices for commodities, as well as inflationary shock. In our World Wide Web example, imagine if business leaders and everyday consumers had never learned how to use the internet, or even understood its basic purpose and how it functions.

Despite the attention-grabbing headlines, AI on its own will not solve the world’s most critical problems – especially if we don’t make an effort to understand its limitations. While we can operate at a higher level of abstraction and have access to more capabilities, AI is “not good at life-and-death situations,” as Sam Altman, CEO of OpenAI, put it at the World Economic Forum in Davos in January 2024. But as the hype has grown, organizations have raced to build AI into their businesses to maintain a competitive edge, at any cost, often failing to bake in protective security measures and analyze potential risks at the start.

While there are many ‘signal flares’ to be wary of when it comes to misunderstanding and assessing the risks of AI, the following are what chief security officers of any organization – of every size and industry – should keep top of mind:

1. In the world of AI, data is the only currency and organizations that have the most will win

The successful implementation and use of AI depend on the quantity and quality of data. However, collecting vast amounts of quality data isn’t the end of the AI lifecycle. Organizations must also be able to extract that data and transform it into insights. The race that was once about building AI is evolving: to outperform competitors, organizations must now continually train AI models on the most up-to-date, relevant data to avoid hallucination and model drift.

2. The knowledge gap between security professionals who understand AI and those who do not will be the main reason for any shift in the balance of power to threat actors

Whether or not the use of AI is giving attackers a leg up is the wrong question to be asking. AI is here to stay, so the right question is whether security leaders and professionals possess the skills required – or will invest the time to upskill – to handle what is becoming the largest revolution ever seen in technology. Both harnessing the power of this technology and defending against it hinge on the ability to understand it and its limitations. If the security industry fails to demystify AI and its potential malicious use cases, the coming years will be a field day for threat actors.

3. The only way to fight against AI is with AI – but you must master the basics first

Defending against AI ultimately means defending against the sum of indexed human knowledge. Information is now shared an order of magnitude faster and more efficiently than ever before. Security pros must protect their organizations in this era of infinite information and face challenges never seen before. But if the industry has historically struggled to do the basics well, over-pivoting to solve issues using AI will be mostly ineffective. The best way to mitigate attacks is to ensure that foundational security controls are in place. We often chase shiny objects like AI, but the best defense against AI is strong foundational controls.

4. The secure-by-design conversation will evolve to not only encapsulate, but heavily focus on AI tools and models

The conversation around putting AI into production often only crosses into the security realm after the model has been developed and exists – e.g., maintaining the integrity of the model. But if AI is the inevitable future of the way organizations and critical infrastructure do business, operate and develop their services, then security must be built in from the start. AI must be engineered and implemented in a way that addresses many of the concerns that cybersecurity has historically focused on.

However, we only see security in the AI discussion when it comes to building a process of governance, never during the production, operation or tuning of models. Model inference is often overlooked, but it is a key piece of security. Many business leaders assume that AI will solve issues instantaneously and automatically – that it magically works. In reality, models doing the ‘right thing’ depends on accuracy, and accuracy depends on training models in ways that decrease the chance of hallucination. If we continue to prioritize creativity and speed over accuracy in AI, large language models will be shackled to hallucination. Securing models across the lifecycle – building, testing and production – will be a massive focus for security teams and regulators in the years to come.

There is often over-excitement and a glass-half-full mentality when we create new tech that significantly changes the way we do business, focusing only on the opportunity presented without weighing the risk. AI is a victim of this cycle. However, the good news is that we are still in the ‘smoke and mirrors’ phase of AI, testing its power and exploring use cases. We can either take the time now to educate ourselves on the technology to recognize and address the potential signal flares, or become low-hanging fruit for hackers to exploit. The bottom line is that we can’t let innovation and excitement outweigh security and resilience.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.