Shadow AI is the unsanctioned, deliberately hidden use of AI within an organization. Like “shadow IT” (employees using unauthorized devices and applications), shadow AI is driven by the same motive: an employee’s (or team’s) desire to use the best tools available to get work done.
Shadow AI may involve generative AI, machine learning, natural language processing, computer vision, and autonomous systems. But here, we’re mainly talking about generative AI, because this is the form of AI most readily accessible to individuals (e.g. OpenAI’s ChatGPT, Google’s Gemini). Other forms of AI, like machine learning, require significant investment and executive sign-off, which makes them almost impossible to fund and use under the radar. Conversely, the relatively low cost and easy adoption of generative AI tools means individual employees and managers can hide the cost in their own discretionary budgets, circumventing the usual procurement process.
The risks of shadow AI
So why would you want to prevent people from using unsanctioned AI technology? The risks span security, compliance, accuracy, resource management and ethics:
- Data security: If AI tools are not vetted for security, your data may be running through unsecured systems where it is vulnerable to cyberattack.
- Compliance: Without a complete understanding of how and where data is processed, your employees may be unwittingly violating corporate policies and industry regulations.
- Accuracy: AI tools are not perfect, and some are better than others. Without a proper evaluation of output quality, there is a risk of inaccurate or inconsistent content flowing into your core systems.
- AI tools sprawl: Without a strategic, governed approach, the use of publicly available AI tools can get messy. Organizations can end up with a patchwork of different AI tools, funded in ways that cannot be easily tracked. The result is an inefficient AI tools footprint in which duplication drives unnecessarily high costs and excessive management overhead.
- Ethics: Generative AI applications can pick up biases from the datasets they are trained on. These biases can be carried over into your own systems, leaving you open to criticism.
How to prevent shadow AI
AI is moving fast. Employees are beginning to see generative AI’s potential to help them do better, faster work. So a blanket ban on generative AI may be seen as “corporate” standing in the way of productivity, driving its use under the radar. Either you supply people with (or help them safely adopt) AI-powered tools to help them do their jobs better and faster, or they will find their own.
The solution lies in management being open with employees about what they need them to do, and in employees being open with management about what they want to do. That way, the necessary discussion, planning, and checking can happen upfront, rather than as an emergency clean-up later when something goes wrong.
To prevent shadow AI, it is the responsibility of an organization’s management team to do seven key things:
- Steer AI: Form an AI governance body that includes executives, department heads, AI experts and IT leaders. Without a team to steer the use of AI, shadow AI will happen by default. This steering group needs to consider the business, IT, security, compliance, regulatory, and ethical angles.
- Define right and wrong: Establish clear policies. For example, enforce the approval process and require anonymized data when testing AI apps.
- Make sanctioned AI easy: Offer a clear channel for employees and managers to propose new AI initiatives so that they can be quickly and effectively evaluated and integrated. Have a well-defined, streamlined approval process that takes into account the highly scalable value AI offers. Moving too slowly may mean missed business opportunities, but balance speed with safety.
- Get people up to speed: Educate employees on the risks and the benefits of AI. Without this, some employees will only see the benefits. Others will only see the risks. Training is your chance to set out the proper approach and channels for proposing new AI initiatives.
- Support AI initiatives: Provide on-the-ground support from IT, security, and compliance teams for new AI projects. Experimentation with AI (like any innovation) should be collaborative. AI done right requires a team; it is highly unlikely that any one individual can adopt AI properly and safely alone. Make the team available to ensure all angles are covered.
- Enable AI sandboxes: Provide sandbox environments where AI tools can be evaluated safely before adoption in production. This means limiting what data is used, where it is used, and who can access these systems, minimizing risk while experimenting to identify potential value.
- Actively patrol for shadow AI: Establish monitoring tools to detect unsanctioned AI-related activity. For example, is there a lot of network traffic going to and from ChatGPT or other generative AI platforms? A minimal example of this kind of check follows this list.
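As a rough illustration (not a definitive implementation), the Python sketch below scans a web proxy log for requests to well-known generative AI endpoints. The log path, the log format (one URL or bare hostname per line), and the domain watchlist are all assumptions here; you would adapt them to your own proxy, firewall, or DNS tooling.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical watchlist of well-known generative AI domains.
# Extend this to cover the tools relevant to your organization.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to watched AI domains in a proxy log.

    Assumes one URL (or bare hostname) per line; adapt the parsing
    to your proxy's actual log format.
    """
    hits = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            entry = line.strip()
            if not entry:
                continue
            # Full URLs yield a hostname; bare hostnames fall through as-is.
            host = urlparse(entry).hostname or entry
            if host in AI_DOMAINS:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    # "proxy.log" is a placeholder path for this sketch.
    for domain, count in scan_proxy_log("proxy.log").most_common():
        print(f"{domain}: {count} requests")
```

In practice you would wire this kind of check into your existing proxy, firewall, or SIEM tooling rather than a flat file, but the principle is the same: watch for sustained traffic to AI services that have not gone through your approval process.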
Engagement is the antidote to shadow AI
Getting ahead of shadow AI is imperative; ignoring it is not a strategy. By putting a formal (but easy to use) framework around AI experimentation, you can safely leverage the power of AI to transform operations across many areas. Training, communication, and collaboration are essential. People need to understand that your goal is to facilitate, not prevent. They need to understand why the governance “overheads” are necessary for successful AI initiatives, and that in exchange for working out in the open, they will get the support they need from IT, security, compliance, and other teams.
Find out more about Hornbill AI