
Ethical AI: Balancing Innovation and Security

  • Updated on March 18, 2025
  • 7 min read


By Mark A. Houpt, Chief Information Security Officer and Jenny Gerson, Head of Sustainability & Corporate Responsibility – DataBank

Artificial intelligence is quickly transforming how businesses operate. Powerful new AI tools can reshape many business processes, enhancing productivity, automating workflows, and generating valuable new insights.

At the same time, these AI tools and their many uses also present new and unpredictable risks. As a result, many organizations are currently wrestling with how to encourage and enable ethical AI adoption while making sure it remains secure and completely aligned with corporate risk management and governance policies.

The Security Dilemma: Enabling AI While Managing Risk

Many security leaders and professionals are skeptical about AI’s role in their organizations. This is not because they oppose innovation – especially when it has proven to deliver so many benefits – but because there are simply too many unknowns related to its use.

Put another way: security teams must do everything they can to enable the business to use AI tools in a way that is as secure and low-risk as possible. Still, the unknown and quickly changing nature of AI is a security risk unto itself.

While AI tools such as ChatGPT and Microsoft Copilot can now provide a seemingly infinite range of new benefits, they also raise security concerns. Without the proper controls, these tools could inadvertently access and/or expose the organization’s most sensitive information.

For example, consider the case where a corporate attorney uses Microsoft Copilot to draft a summary of ongoing litigation for an internal briefing. Because Copilot is trained to find as much relevant information as possible, it scans the company’s emails, internal memos, and various legal documents stored in SharePoint to compile key details.

However, without strict access controls, this search also sweeps up privileged attorney-client communications and other highly sensitive information that was not properly secured. Copilot pulls these results into its summary without the attorney realizing how sensitive the underlying data is. Worse, if that privileged material is submitted to Copilot’s public large language model (LLM) through an external connection, the organization permanently loses control of the information within the AI infrastructure.
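
The missing control in this scenario is a gate between the document store and the assistant. As a rough illustration only, the Python sketch below assumes every document already carries a sensitivity label; the Document structure and label names are hypothetical, not any particular product’s API.

```python
from dataclasses import dataclass

# Hypothetical labels; a real deployment would map these to the organization's
# own data classification scheme and enforce them in the AI tool's connector.
ALLOWED_LABELS = {"public", "internal"}

@dataclass
class Document:
    title: str
    sensitivity: str  # label applied at creation or by a classification tool
    content: str

def filter_for_ai_assistant(documents: list[Document]) -> list[Document]:
    """Return only documents an AI assistant is permitted to read.

    Anything outside the allow-list (including unlabeled material) is excluded,
    so privileged content never reaches the model's context.
    """
    return [d for d in documents if d.sensitivity.lower() in ALLOWED_LABELS]

docs = [
    Document("Litigation status memo", "privileged", "..."),
    Document("Public press release", "public", "..."),
]
for doc in filter_for_ai_assistant(docs):
    print(doc.title)  # only the press release is passed through
```

The specific labels matter less than the default-deny posture: anything unlabeled or unclassified is treated as sensitive until someone says otherwise.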

The Challenges of Data Classification and Access Controls

Many organizations – especially those outside of highly regulated industries – still lack strong, comprehensive data classification policies, or have yet to implement them effectively. In contrast, government agencies and companies in highly regulated sectors have established processes and frameworks to help manage sensitive data.

As AI tools continue to proliferate and expand, organizations in all industries must now take a closer look at their approach to data security and governance.

More specifically, this now means Chief Information Security Officers (CISOs) and other security leaders should ask these questions:

  • What data is the AI tool accessing and compiling in response to various requests?
  • What might it share without our knowledge?
  • Where does this data go once it’s processed?

These concerns are more than theoretical possibilities: unchecked AI use can lead to unintentional data leaks, breaches, and other adverse outcomes.
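
None of these questions can be answered without telemetry. One simple starting point, sketched below with purely illustrative field names (this is an assumption, not any vendor’s schema), is an audit record for every AI request that captures who asked, which sources were read, and where the output went.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger for AI data access; field names are assumptions.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_data_access")

def record_ai_access(user: str, tool: str, sources: list[str], destination: str) -> None:
    """Log which data an AI tool touched, on whose behalf, and where the
    output went -- the three questions security leaders need answered."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "data_sources_accessed": sources,
        "output_destination": destination,
    }))

record_ai_access(
    user="jdoe",
    tool="copilot",
    sources=["sharepoint://legal/briefings", "exchange://jdoe/inbox"],
    destination="internal briefing document",
)
```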


The Risk of Unintentional Data Exposure

Security teams must carefully consider how AI interacts with structured and unstructured data. Consider another example: a marketing manager uses ChatGPT to help build a detailed market analysis report identifying the top 500 potential buyers of the company’s new product.

Without the right security controls in place, ChatGPT may also surface information previously loaded into the LLM, including sensitive purchase histories, contract values, and even personally identifiable information (PII).

Without realizing it, the marketing manager incorporates these AI-generated insights into the report and, in doing so, potentially exposes confidential customer data to unintended recipients. Not only does this violate internal security policies and raise compliance concerns, it also leaks information about the company’s product and go-to-market strategies.

Here’s one more example. A healthcare company uses an AI tool to analyze demographic data to estimate how many individuals could be susceptible to a particular disease or condition. If sensitive information is not properly detected and excluded, patients’ health data and PII could leak, creating a significant HIPAA compliance violation and exposing the organization to very real legal issues and possible fines.
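
In both of these scenarios, the common control is to inspect outbound content before it reaches an external model. The sketch below is a minimal illustration in which a few regular-expression patterns stand in for a real DLP or PII-detection service; the patterns and categories are assumptions for demonstration only.

```python
import re

# Illustrative patterns only; production systems should use a dedicated
# DLP or PII-detection service rather than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace obvious PII with placeholders before text leaves the organization.

    Returns the redacted text plus the categories found, so the request can be
    flagged for review instead of silently passed to the model.
    """
    findings = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, findings

prompt = "Summarize feedback from jane.doe@example.com, phone 555-123-4567."
safe_prompt, found = redact_pii(prompt)
print(safe_prompt)  # identifiers replaced with placeholders
print(found)        # ['email', 'us_phone'] -> route to human review if non-empty
```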

Ethical AI and Governance Challenges: Is AI Eating Its Own Tail?

Beyond the immediate security risks, AI also presents a deeper challenge: governance and integrity. As businesses rush to adopt AI-driven tools, they must consider not just what these systems can do, but also how reliable their outputs really are.

One of the growing concerns in the AI space is that AI models are increasingly trained on AI-generated content, creating a kind of feedback loop – sometimes referred to as AI “eating its own tail.” The Atlantic recently explored this phenomenon in its article, “AI Is an Existential Threat to Itself,” highlighting the risk that future iterations of AI models will be trained on AI-generated or poor-quality data. It’s the classic example of “garbage in, garbage out,” raising concerns about the accuracy and quality of AI results.

This raises a fundamental governance issue: How do organizations ensure that AI remains grounded in factual, high-quality information rather than recycling and amplifying any inaccuracies? If a significant portion of an AI model’s training data originates from other AI systems, it risks reinforcing biases, misinformation, and degraded content quality over time.

While AI developers don’t always disclose the exact sources of their training data, it’s a valid concern. Consider the original release of ChatGPT, whose built-in knowledge extended only through 2021. If an enterprise relies on such a system without access to more recent data, is it truly a reliable platform for making critical business decisions?

Also, if the next versions of AI models are trained primarily on AI-created content, on poor-quality information, or, worse, on data deliberately poisoned by a nation-state threat actor, how will that impact the integrity of the outputs?

From a governance perspective, companies need to take proactive steps:

  • Audit AI-generated outputs: Validate key insights rather than taking them at face value.
  • Set data sourcing policies: Ensure AI tools are trained on verifiable, high-quality, and up-to-date information.
  • Maintain human oversight: AI should assist, not replace, human decision-making, especially in critical areas like security, compliance, and risk management (a simple review-gate sketch follows this list).
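
As a closing illustration of the last two points, here is a minimal sketch of such a review gate: AI-generated claims are treated as unverified until they carry both a cited source and a named human reviewer. The AIInsight structure and the sign-off rule are assumptions for illustration, not a prescribed workflow.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIInsight:
    claim: str
    sources: list[str] = field(default_factory=list)  # citations returned by the tool
    reviewed_by: Optional[str] = None                  # filled in by a human reviewer

def requires_human_review(insight: AIInsight) -> bool:
    """Deliberately conservative rule: an output only flows into decisions
    after it has at least one source and a named person has signed off."""
    return not insight.sources or insight.reviewed_by is None

insights = [
    AIInsight("Competitor X plans to exit the market"),  # no sources, not reviewed
    AIInsight("Q3 churn rose 4%", sources=["crm-report-q3"], reviewed_by="analyst@corp"),
]
for item in insights:
    status = "HOLD FOR REVIEW" if requires_human_review(item) else "approved"
    print(f"{status}: {item.claim}")
```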

Governance is a core pillar of corporate responsibility, and AI must be held to the same standards as any other business initiative. Businesses that fail to address these issues may find themselves making strategic decisions based on AI-generated hallucinations – an outcome no CISO or governance executive wants to face.

Balancing Innovation and Responsibility

AI’s potential is undeniable, but its risks – both security related and ethical – cannot be ignored. Security leaders and organizations must recognize that AI is not just another “set-it-and-forget-it” tool. It requires carefully designed governance, continuous oversight, and a commitment to responsible adoption.

Striking the right balance means embracing AI’s advantages while mitigating its vulnerabilities, ensuring that innovation does not come at the cost of security, compliance, or trust. In an era where AI is shaping the future of business, those who proactively manage these challenges will be best positioned to lead with confidence.


About the Authors

Mark A. Houpt, Chief Information Security Officer

Mark A. Houpt serves as the Chief Information Security Officer (CISO) at DataBank, bringing over 30 years of expertise in information security and technology across diverse industries. Joining DataBank in 2015 (via the acquisition of Edge Hosting), Mark has spearheaded security and compliance initiatives, leading a team of Security Architects and Compliance Engineers. With certifications including CISSP, CCSP, and CEH, as well as extensive knowledge of frameworks such as FedRAMP, PCI-DSS, and HIPAA, Mark is adept at translating complex compliance standards into actionable insights. His career spans roles in Fortune 50 institutions, higher education, and military service as a U.S. Navy Cryptologist. A sought-after speaker and blogger, Mark also dedicates time to economic security initiatives and enjoys aviation and wildlife photography alongside his wife, Maria.

Jenny Gerson, Senior Director of Sustainability

Jenny Gerson is the Senior Director of Sustainability at DataBank, a colocation data center developer and operator in 25+ markets across the US and UK. At DataBank, she develops and leads the company's strategy across environment, social, and governance topics, including a scope 1 and 2 net zero emissions goal by 2030.

Prior to joining DataBank, she was the Director of Sustainability for Maxar Technologies, a global space infrastructure and earth intelligence company. Previously, Jenny led energy and environmental management, corporate sustainability, and M&A integrations at Zayo Group, a global telecom company.

Jenny has 20+ years of experience in sustainability and 10+ years of experience in the data center industry. Beyond corporate sustainability, Jenny's background includes cleantech market research, environmental permitting for energy development, and ecological studies for research institutions. Jenny holds a BA in Evolutionary and Ecological Biology and an MBA from the University of Colorado.
