
Reduce The Data Leakage Risks of ChatGPT and Copilot AI


As businesses embrace the transformative potential of generative AI, CEOs and CFOs are especially enthusiastic about its ability to boost employee productivity and achieve the much-desired goal of "doing more with less." The promise of tools like Microsoft Copilot and ChatGPT Premium to streamline workflows, automate routine tasks, and foster innovation has captured the imagination of top executives, who see these technologies as key drivers of competitive advantage and efficiency in an increasingly digital marketplace.

However, this optimism is not universally shared across the executive suite. Chief Information Security Officers (CISOs) are sounding the alarm on a critical concern that accompanies these advanced AI tools: the heightened risk of employee data leakage. While generative AI offers unprecedented opportunities for growth and efficiency, it also opens new avenues for data breaches, unauthorized access, and misuse of sensitive information. CISOs must strike a delicate balance between leveraging AI for its immense benefits and safeguarding the organization against the damage that compromised data security can cause. Their focus is on navigating the complex landscape of AI integration, ensuring that the drive for productivity does not come at the expense of the organization's most valuable asset: its data.

The Expanding AI Threat Landscape

A recent report from The Hacker News revealed that over 225,000 compromised ChatGPT credentials were up for sale on dark web markets between January and October 2023. These credentials were compromised through malware such as LummaC2, Raccoon, and RedLine, indicating a significant rise in the abuse of AI tools for malicious purposes. This surge in compromised credentials underscores the critical vulnerabilities associated with AI tools and the pressing need for robust security measures.

Data Risks with Microsoft Copilot

The introduction of Microsoft Copilot presents new data security challenges for enterprises. At issue is how Copilot for Microsoft 365 can access sensitive corporate data from sources such as the company's SharePoint sites, individual employees' OneDrive storage, and even Teams chats. The business value is clear: Copilot analyzes all of that data to generate new content in the context of the company and its business processes. Just as clearly, however, this intensive level of data access can lead to oversharing and unauthorized exposure.

To be clear: Copilot does not change your company's existing Microsoft 365 settings to make it easier to find an employee's personally identifiable information (PII), nor does it alter how files are shared. What Copilot does is quickly find, analyze, and interpret data that was already available on your corporate network. A significant portion of an organization's business-critical data is therefore often at risk when Copilot is introduced, because existing data access policies were already overly permissive. This highlights the need for stringent access controls and sound data management practices.
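To see why pre-existing permissions matter, consider a simplified model of how an assistant like Copilot resolves what it may surface: it inherits the user's existing access rather than bypassing it, so any over-broad grant becomes instantly discoverable. The sketch below is illustrative only, with hypothetical file paths and group names; it is not Microsoft's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    path: str
    # Who may read this item: explicit users, groups, or "everyone"
    readers: set = field(default_factory=set)

def visible_to(user: str, groups: set, items: list) -> list:
    """Items an AI assistant acting on the user's behalf could surface.

    Copilot-style tools do not bypass permissions; they inherit the
    user's existing access, so over-broad grants become discoverable.
    """
    allowed = {user, "everyone"} | groups
    return [i.path for i in items if i.readers & allowed]

corpus = [
    Item("/hr/salaries.xlsx", {"everyone"}),        # over-shared!
    Item("/finance/budget.xlsx", {"finance-team"}),
    Item("/eng/design.docx", {"alice"}),
]

# A marketing employee still "sees" the over-shared HR file.
print(visible_to("bob", {"marketing"}, corpus))
# → ['/hr/salaries.xlsx']
```

The point of the example: tightening `readers` on the over-shared item, not restricting the AI tool itself, is what removes the exposure.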

The Need for Independent AI Evaluation

A proposal from MIT’s AI Safe Harbor initiative underscores the importance of independent evaluation of AI systems for ensuring their safety, security, and trustworthiness. The initiative calls for AI companies to provide basic protections and more equitable access for good faith AI safety and trustworthiness research, emphasizing the role of independent evaluation in fostering transparency and accountability in the use of AI technologies.

Crafting a Comprehensive Security Strategy

Given these insights, business leaders must prioritize the development and implementation of a comprehensive security strategy to address the unique challenges posed by AI tools. At a high level, your AI data security strategy should include:

  • Robust Access Controls: Implementing strict access controls and permissions to limit access to sensitive data and AI tools, ensuring that only authorized personnel can use these technologies.
  • Continuous Monitoring and Threat Detection: Establishing systems for continuous monitoring of AI tool usage and data access patterns to detect and respond to suspicious activities promptly.
  • Data Management and Classification: Adopting rigorous data management practices to classify and protect sensitive information, preventing unauthorized access or leakage.
  • Employee Training and Awareness: Educating employees about the potential risks associated with AI tools and promoting best practices for secure usage.
  • Collaboration with AI Providers: Engaging with AI technology providers to understand the security measures in place and advocating for features and policies that enhance the security and privacy of corporate data.
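As one concrete piece of the data management and classification bullet above, a minimal scan that flags documents containing common PII patterns might look like the following sketch. The regexes and category names are illustrative assumptions, not a production-grade classifier (real tools add checksums, context analysis, and machine learning):

```python
import re

# Illustrative patterns only; real classifiers use far more robust
# detection than these simple regular expressions.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set:
    """Return the set of PII categories detected in a document."""
    return {name for name, rx in PII_PATTERNS.items() if rx.search(text)}

doc = "Contact jane.doe@example.com; SSN 123-45-6789 on file."
print(sorted(classify(doc)))
# → ['email', 'ssn']
```

A scan like this, run before an AI rollout, helps identify which repositories hold sensitive data and therefore deserve the strictest access controls.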

Blue Mantis Helps Businesses Develop AI Data Protection Strategies

As AI tools like Microsoft Copilot and ChatGPT Premium become integral to business operations, CISOs and CEOs must work together and take proactive steps to safeguard their organizations against data leakage and other security threats. Blue Mantis cybersecurity experts have successfully enacted a comprehensive security strategy for our own internal processes that addresses the unique challenges of AI technologies. By adopting a similar strategy, businesses can leverage the benefits of AI while ensuring the security and privacy of their corporate data.

If your business plans to deploy AI tools such as Copilot for Microsoft 365, Blue Mantis can evaluate potential data security vulnerabilities. Our Copilot Jumpstart assessment helps you to:

  • Define scope, identify stakeholders, and rationalize key business scenarios for Copilot AI with comprehensive readiness guidance.
  • Showcase the value added by AI with demos on how Copilot for Microsoft 365 unleashes creativity, unlocks productivity, and levels up your employees’ skills.
  • Develop an implementation plan based on prioritized scenarios with next steps/timelines to deploy a Copilot solution customized for your business processes.

Schedule your Copilot Assessment with a Blue Mantis secure AI expert today.


Jay Pasteris

Chief Operating Officer

As Chief Operating Officer at Blue Mantis, Jay Pasteris is responsible for all end-to-end operations of the organization, including ultimate ownership of all data, IT, and organizational risk. Additionally, he oversees the HR function and is responsible for building, managing, and maintaining a world-class talent pool in the U.S., Canada, and India.

Formerly CIO and CISO, Jay was promoted to COO in April 2024. In his new role, Jay continues to oversee the company's IT and cybersecurity operations, and he serves as an invaluable client-facing resource from an advisory and problem-solving perspective.

Jay is a highly accomplished senior business technology executive with experience in aligning technology with business strategy and driving innovation across organizations. His deep experience as a vision-driven technology leader and his history of successfully delivering enterprise technology solutions have enabled him to build high-performing, results-driven technology teams that not only deliver business value but also transform organizations to excel in the digital era.

Before joining Blue Mantis in 2021, Jay served as the CIO and CISO for the Massachusetts Medical Society / New England Journal of Medicine; senior vice president of global IT for Houghton Mifflin Harcourt; and CIO and CISO for Veracode, a Boston-based cybersecurity firm. Throughout his career, Jay has been responsible for leading and delivering scalable enterprise technology solutions; product engineering; global infrastructure; end-user experience; and security and compliance across cloud and software-as-a-service platforms.