By Mark A. Houpt, Chief Information Security Officer and Jenny Gerson, Head of Sustainability & Corporate Responsibility – DataBank
Artificial intelligence is quickly transforming how businesses operate today. Powerful new AI tools can reshape many business processes – enhancing productivity, automating workflows, and generating valuable new insights.
At the same time, these AI tools and their many uses also present new and unpredictable risks. As a result, many organizations are wrestling with how to encourage and enable ethical AI adoption while ensuring it remains secure and fully aligned with corporate risk management and governance policies.
Many security leaders and professionals are therefore skeptical about AI’s role in their organizations. This is not because they oppose innovation – especially when it’s proven to deliver so many benefits – but because there are simply too many unknowns related to its use.
Put another way: Security teams must do all they can to enable the business to use AI tools in a way that is as low-risk and secure as possible. Still, the unknown and quickly changing nature of AI is a security risk unto itself.
While AI tools such as ChatGPT and Microsoft Copilot can now provide a seemingly infinite range of new benefits, they also raise security concerns. Without the proper controls, these tools could inadvertently access and/or expose the organization’s most sensitive information.
For example, consider the case where a corporate attorney uses Microsoft Copilot to draft a summary of ongoing litigation for an internal briefing. Because Copilot is designed to surface as much relevant information as possible, it scans the company’s emails, internal memos, and various legal documents stored in SharePoint to compile key details.
However, without strict access controls, this search includes privileged attorney-client communications and other highly sensitive information that was never properly secured. Copilot pulls these results into its summary without the attorney realizing how sensitive the underlying data is. Worse, the privileged information is injected into Copilot’s public large language model (LLM) through an external source, resulting in the permanent loss of control of that information within the AI infrastructure.
Many organizations – especially those outside of highly regulated industries – still lack strong, comprehensive data classification policies, or have yet to implement them effectively. In contrast, government agencies and companies in highly regulated sectors have established processes and frameworks to help manage sensitive data.
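To make the idea concrete, here is a minimal sketch, assuming documents already carry an explicit sensitivity label, of how classification-aware filtering could limit what an AI assistant is allowed to pull into its working context. The Document structure, the label names, and the build_ai_context helper are illustrative assumptions, not features of Copilot, SharePoint, or any particular product.

```python
# Minimal sketch: filter documents by an assumed sensitivity label before they
# are assembled into an AI assistant's context. Labels, the Document class, and
# the ceiling are illustrative, not part of any vendor API.
from dataclasses import dataclass

# Illustrative labels, ordered from least to most restrictive.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "privileged"]

@dataclass
class Document:
    title: str
    sensitivity: str  # one of SENSITIVITY_ORDER
    content: str

def is_allowed(doc: Document, ceiling: str) -> bool:
    """True only if the document's label is at or below the allowed ceiling."""
    return SENSITIVITY_ORDER.index(doc.sensitivity) <= SENSITIVITY_ORDER.index(ceiling)

def build_ai_context(docs: list[Document], ceiling: str = "internal") -> str:
    """Assemble prompt context only from documents the assistant may see."""
    return "\n\n".join(f"{d.title}:\n{d.content}" for d in docs if is_allowed(d, ceiling))

docs = [
    Document("Press release", "public", "Company announces a new product."),
    Document("Litigation memo", "privileged", "Attorney-client communication."),
]
# Only the press release is passed along; the privileged memo never reaches the model.
print(build_ai_context(docs))
```

The hard part, of course, is the labeling itself – which is exactly where many organizations still fall short.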
As AI tools continue to proliferate and expand, organizations in all industries must now take a closer look at their approach to data security and governance.
More specifically, this means Chief Information Security Officers (CISOs) and other security leaders must now ask hard questions about how AI tools discover, access, and share their organization’s data.
These concerns are more than theoretical possibilities, since unmanaged AI use can easily lead to unintentional data leakage, breaches, and other adverse outcomes.
Security teams must carefully consider how AI interacts with structured and unstructured data. Another example: Imagine the scenario where a marketing manager uses ChatGPT to help produce a detailed market analysis report identifying the top 500 potential buyers of the company’s new product.
Without the right security controls in place, ChatGPT may also draw on external information that has been loaded into the LLM, including sensitive purchase histories, contract values, and even personally identifiable information (PII).
Without realizing it, the marketing manager includes these AI-generated insights in the report and, in doing so, potentially exposes confidential customer data to unintended recipients. Not only does this violate internal security policies and raise compliance concerns, it also reveals information about the company’s product and go-to-market strategies.
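One common safeguard is to screen prompts before they leave the organization. The sketch below assumes exactly that kind of pre-submission check and uses a few deliberately simple regular expressions to redact obvious PII; real deployments would rely on dedicated DLP or PII-detection tooling rather than patterns this basic.

```python
# Minimal sketch: redact obvious PII from text before it is sent to an external
# LLM. The patterns are illustrative and far from exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with placeholders so it never leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Top buyer: Jane Roe, jane.roe@example.com, 555-867-5309, contract value $1.2M."
print(redact_pii(prompt))
# -> Top buyer: Jane Roe, [REDACTED-EMAIL], [REDACTED-PHONE], contract value $1.2M.
```

Redaction on the way out, combined with the classification controls described earlier, preserves the value of the analysis while withholding the fields no external model needs to see.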
Here’s one more example. A healthcare company uses an AI tool to analyze demographic data to estimate how many individuals could be susceptible to a particular disease or condition. If sensitive information is not properly detected and protected, the result could be a loss of data that constitutes a significant HIPAA violation – exposing patients’ health information and PII, and leaving the organization facing very real legal issues and possible fines.
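One hedged sketch of the kind of pre-processing this requires: stripping direct identifiers from patient records before they ever reach an analytics or AI tool. The field names and the deidentify helper are hypothetical, and genuine HIPAA de-identification (for example, the Safe Harbor method) removes far more than is shown here.

```python
# Minimal sketch: drop direct identifiers from hypothetical patient records
# before handing them to an analytics or AI tool. Field names are illustrative.
import hashlib

DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone", "email", "mrn"}

def deidentify(record: dict) -> dict:
    """Remove direct identifiers and replace the record key with a one-way token."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # A salted hash or a managed tokenization service would normally be used;
    # a plain truncated hash keeps the sketch short.
    cleaned["patient_token"] = hashlib.sha256(str(record.get("mrn", "")).encode()).hexdigest()[:12]
    return cleaned

patients = [
    {"mrn": "12345", "name": "J. Doe", "age": 54, "zip3": "606", "condition_flag": True},
]
safe_rows = [deidentify(p) for p in patients]
print(safe_rows)  # age, zip3, condition_flag, and the token survive; name and MRN do not
```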
Beyond the immediate security risks, AI also presents a deeper challenge: governance and integrity. As businesses rush to adopt AI-driven tools, they must consider not just what these systems can do, but also how reliable their outputs really are.
One of the growing concerns in the AI space is that AI models are increasingly trained on AI-generated content, creating a kind of feedback loop – sometimes referred to as AI “eating its own tail.” The Atlantic recently explored this phenomenon in its article, “AI Is an Existential Threat to Itself,” highlighting the risk that future iterations of AI models will be trained on AI-generated or poor-quality data. It’s the classic case of “garbage in, garbage out,” raising concerns about the accuracy and quality of AI results.
This raises a fundamental governance issue: How do organizations ensure that AI remains grounded in factual, high-quality information rather than recycling and amplifying any inaccuracies? If a significant portion of an AI model’s training data originates from other AI systems, it risks reinforcing biases, misinformation, and degraded content quality over time.
AI developers don’t always disclose the exact sources of their training data, which makes this a valid concern. Consider ChatGPT, which (as of now) has built-in knowledge only through 2021. If an enterprise relies on such a system without access to more recent data, is it truly a reliable platform for making critical business decisions?
Also, if the next versions of AI models are trained primarily on AI-created content, on poor-quality information, or, worse, on data deliberately poisoned by a nation-state threat actor, how will that affect the integrity of their outputs?
From a governance perspective, companies need to take proactive steps to understand where their AI tools’ training data comes from, to validate the quality of AI outputs, and to keep critical decisions grounded in authoritative, up-to-date sources.
Governance is a core pillar of corporate responsibility, and AI must be held to the same standards as any other business initiative. Businesses that fail to address these issues may find themselves making strategic decisions based on AI-generated hallucinations – an outcome no CISO or governance executive wants to face.
AI’s potential is undeniable, but its risks – both security related and ethical – cannot be ignored. Security leaders and organizations must recognize that AI is not just another “set-it-and-forget-it” tool. It requires carefully designed governance, continuous oversight, and a commitment to responsible adoption.
Striking the right balance means embracing AI’s advantages while mitigating its vulnerabilities, ensuring that innovation does not come at the cost of security, compliance, or trust. In an era where AI is shaping the future of business, those who proactively manage these challenges will be best positioned to lead with confidence.