Shadow AI: Reshaping the Future and Its Hidden Costs

In December 2023, Amazon launched Amazon Q, an AI assistant for enterprises, promising a safer alternative to consumer-focused chatbots such as ChatGPT. The excitement was short-lived: just three days after the announcement, Amazon Q was mired in controversy. Employees were alarmed by its inadequate security and privacy measures, saying Q failed to meet Amazon’s strict corporate standards. Critics highlighted its “hallucinations” and its tendency to leak sensitive information, including AWS data center locations, unreleased product features and internal discount programs. Amazon’s engineers were forced into damage-control mode, fixing critical issues labeled “SEV 2” emergencies to limit the fallout.

Earlier in 2023, Samsung Electronics Co. grappled with its own AI headaches. Sensitive internal source code found its way into ChatGPT, exposing a clear security vulnerability. Samsung responded quickly: an internal memo announced a company-wide ban on the use of generative AI tools. Samsung’s decision highlights the difficulty of managing data on external AI platforms such as Google Gemini and Microsoft Copilot, which offer opaque controls over how data is retrieved and deleted. The move reflects the concerns of 65% of Samsung employees, who view these AI services as digital Trojan horses. Despite the ban’s impact on productivity, Samsung remains steadfast, choosing to develop in-house AI solutions for translation, document summarization, and software development until a safe environment for AI usage is established.

Apple has also joined the fray, banning its employees from using ChatGPT and similar AI tools. The ban was prompted in part by OpenAI’s ties to Microsoft, a direct rival, raising concerns about the security of Apple’s sensitive data. The trend is not unique to tech giants: financial firms such as JPMorgan Chase, Deutsche Bank, and Wells Fargo have also restricted the use of AI chatbots to keep sensitive financial information out of third-party hands. These restrictions, however, have inadvertently created a culture of “shadow AI,” in which employees turn to personal devices at work in pursuit of efficiency and saved time, exposing a significant gap between AI policy and practice.

Shadow AI: An Unseen Threat

Although hard data is sparse, many employees at AI-restricted companies have admitted to such workarounds, and those are only the cases that have become public. This is shadow AI: the use of AI in ways that contradict or violate company policy, turning it into an activity employees must conceal. It is common across many organizations.

When I dug deeper into the issue, I found recent studies confirming that, despite the many stories about companies restricting the use of genAI in the workplace, employees do not seem to be using it any less. Recent research from Dell shows that 91% of respondents have dabbled in generative AI in some way in their lives, and 71% say they have used it specifically at work.

Research conducted by ISACA highlights the significant gap between the adoption of artificial intelligence in the workplace and the formal policies governing its use in Australia and New Zealand. While 63% of employees in these regions leverage AI to complete various tasks, only 36% of organizations formally allow this. The survey shows that AI is being used to create written content (51%), increase productivity (37%), automate repetitive tasks (37%), improve decision-making (29%) and customer service (20%). However, only 11% of organizations have a comprehensive policy on the use of artificial intelligence, and 21% have no intention of developing any policy.

In addition, ISACA’s research shows a lack of AI-related training within organizations: only 4% of organizations provide training to all employees, and 57% provide none at all, not even to those directly affected by AI technology. The situation raises concerns similar to those of shadow IT, where employees use IT resources without formal approval, potentially putting organizational security and governance at risk.

Navigating the New Frontier of Risk and Responsibility

Just as shadow IT crept into the enterprise, shadow AI is here to stay, forcing organizations to define their stance on GenAI head-on even as they are still figuring out how to use it.

Experts believe guardrails alone will not stop employees from using AI tools that can significantly increase productivity and save time. Corporate CIOs must therefore confront the issue and explore mitigation strategies consistent with their organization’s risk tolerance. Well-intentioned employees will inevitably reach for these tools in pursuit of efficiency, so enterprise technology leaders can head off potential damage by responding to the trend proactively and managing it effectively.

Shadow IT has a history of enabling major data breaches, such as the infamous incident in which an unsecured Amazon S3 bucket publicly exposed the personal data of 30,000 people. Such precedents serve as a warning and underscore the need for strict data governance in the era of artificial intelligence.

Shadow AI is a more difficult challenge than shadow IT for several reasons. First, the fragmented nature of AI tool usage means the potential for data misuse or leakage is not limited to a technical subset of employees (e.g., developers) but extends throughout the organization. Furthermore, AIaaS (Artificial Intelligence as a Service) models inherently learn from the data they process, creating a two-tier risk: the possibility that the AI vendor itself accesses sensitive data, and the increased likelihood that bad actors discover and exploit exposed data.

Strategies to Tackle Shadow AI

Amir Sohrabi, regional vice president for EMEA and Asia and head of digital transformation at SAS, said technology leaders with a data-first mindset will be able to drive efficiencies in 2024 and beyond. This is because maximizing the benefits of generative AI tools depends on well-organized data and therefore requires strong data management practices that include data access, hygiene, and governance.

Nick Brackney, Gen AI and cloud evangelist leader at Dell Technologies, noted in an article published on CIO.com that enterprises should pursue “three prescriptive approaches” to successfully combat shadow AI.

First, establish a centralized policy for generative AI use, allowing executive leadership to define use cases, create secure access, and protect data. This approach simplifies implementation and scaling across the organization, though it requires up-front effort to build; identifying easy wins helps ensure success.
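
To make this concrete, here is a minimal sketch of what such a centralized policy might look like once expressed as code. The tool names, use cases, and data classes below are illustrative assumptions, not a real policy; a production system would pull these from a governance database rather than hard-coding them.

```python
# Minimal sketch of a centralized GenAI usage policy as code.
# All tool names, use cases, and data classes are hypothetical examples.

APPROVED_TOOLS = {"internal-llm", "vendor-chat-enterprise"}  # vetted by leadership

ALLOWED_USE_CASES = {
    "summarize_public_docs",
    "draft_marketing_copy",
    "explain_code_snippet",
}

BLOCKED_DATA_CLASSES = {"source_code", "customer_pii", "trade_secret"}

def is_request_allowed(tool: str, use_case: str, data_class: str) -> bool:
    """Return True only if the tool, use case, and data class all pass policy."""
    return (
        tool in APPROVED_TOOLS
        and use_case in ALLOWED_USE_CASES
        and data_class not in BLOCKED_DATA_CLASSES
    )

# An approved tool summarizing a public document passes; the same request
# carrying customer PII is refused.
print(is_request_allowed("internal-llm", "summarize_public_docs", "public"))        # True
print(is_request_allowed("internal-llm", "summarize_public_docs", "customer_pii"))  # False
```

Encoding the policy this way lets it be enforced at a gateway rather than relying on every employee reading a PDF.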

Second, keep your data organized and understand which types, such as trade secrets and other sensitive information, should never be placed in public or hosted private-cloud AI products. For those data types, use AI solutions that give you full control or that retain no conversation logs.
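
One way to operationalize this is an outbound “prompt gate” that screens requests before they reach a public AI service. The sketch below is a hedged illustration: the regex patterns are simplistic stand-ins, and a real deployment would lean on a proper DLP or data-classification system.

```python
import re

# Hypothetical outbound filter: block prompts that appear to contain
# sensitive data before they leave for a public AI service.

SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                    # AWS access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # US SSN-like numbers
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),         # document markings
]

def contains_sensitive_data(prompt: str) -> bool:
    """Return True if the prompt matches any known sensitive pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def send_to_public_ai(prompt: str) -> str:
    if contains_sensitive_data(prompt):
        raise PermissionError("Blocked: prompt appears to contain sensitive data.")
    # ... forward the prompt to the approved external AI service here ...
    return "ok"
```

Pattern matching of this kind catches only the obvious leaks, which is exactly why the article’s broader point stands: data must be classified before it reaches the boundary, not at it.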

Third, take control of AI services by bringing them to your data, whether on-premises or through a secure cloud solution, to leverage benefits in governance, employee productivity, and secure data access. This approach enhances the end-user experience, ensures compliance and reduces the risk of data breaches.
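
As a rough sketch of what “bringing AI to your data” looks like in practice, the snippet below routes prompts to a self-hosted model behind the corporate firewall instead of a public service. The endpoint URL, payload shape, and response field are hypothetical and depend entirely on the model server you actually run.

```python
import requests  # assumes the third-party `requests` package is installed

# Hypothetical internal endpoint: sensitive context never leaves the network.
INTERNAL_ENDPOINT = "https://llm.internal.example.com/v1/generate"

def ask_internal_model(prompt: str, timeout: float = 30.0) -> str:
    """Send a prompt to the self-hosted model and return its text output."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={"prompt": prompt, "max_tokens": 256},  # payload shape is an assumption
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["text"]  # response field depends on your server

# Example usage, with data that must stay inside the boundary:
# print(ask_internal_model("Summarize the attached internal design doc ..."))
```

Because the endpoint sits inside the organization, governance, logging, and access control apply just as they do to any other internal service.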

Developing a clear AI acceptable use policy is critical: it describes the inappropriate AI practices that could harm your organization and guides the integration of AI applications in line with data security protocols and risk management strategies. The policy serves as a baseline against which decision-makers can evaluate the AI tools used within the organization, quickly pinpoint any risk exposure, and determine the necessary corrective actions.

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, offers another thought-provoking approach. He argues that traditional methods of integrating new technologies are ineffective for AI because they are centralized and slow, making it difficult for IT departments to develop competitive internal AI models and for consultants to provide specific guidance. The true potential of AI lies with the employees who are experts in their own fields of work, which suggests that for organizations to truly benefit from AI, they must engage these employees, whom Mollick calls “secret cyborgs,” in using the technology.

First, organizations should acknowledge that employees at any level can possess valuable AI skills, regardless of their formal role or past performance. Once these secret cyborgs are discovered among AI-savvy employees, companies must foster a collective learning environment, such as a crowdsourced library of tips, and promote AI use with a culture that eases concerns, for instance by guaranteeing that no jobs will be lost because of AI. AI can eliminate mundane tasks and make jobs more engaging.

Employers should also offer generous rewards for identifying significant opportunities where AI can help the organization. Rewards could include financial incentives, promotions, or flexible working conditions, and could be managed through gamification.

Today’s organizations should act quickly to determine how to capture the productivity gains AI can bring, how to reorganize workflows around AI capabilities, and how to manage the risks associated with AI use, such as hallucinated data and intellectual property issues. This requires a proactive approach: a comprehensive AI policy that empowers employees at all levels to contribute their insights, and a culture that rewards AI-driven innovation.
