With the introduction of Agent 365, a consolidated control platform for AI agents, Microsoft has acknowledged the security challenges these agents pose. The platform aims to strengthen governance using familiar tools such as Entra and Purview.
While this marks a significant first step toward securing AI agents, it is far from a complete solution. Gartner forecasts that over 40% of agentic AI projects will be abandoned, with ongoing security concerns among the drivers. Microsoft is working to address these issues with Agent 365, but the solution is not straightforward: even though every agent is assigned an Entra ID, discovery and lifecycle management remain only partially addressed.
Rather than depending solely on large tech providers, organizations should adopt three best practices to ensure a secure and efficient implementation of AI technologies.
1. Implement Comprehensive, Holistic Data Governance
Research indicates that only 30% of organizations using AI classify and protect their data effectively. Furthermore, IBM reports that 63% of these organizations lack a governance framework for AI, a gap that has contributed to breaches across sectors. True governance requires a comprehensive framework that oversees the entire data lifecycle, from creation to deletion, regardless of the cloud platforms involved.
To mitigate risks, organizations need to implement automated data classification and access controls that safeguard the data itself, not just the agents accessing it. While these measures require an initial investment, they are often more effective and less costly than the reactive solutions that larger vendors provide. The advantages of robust data governance extend beyond AI, leading to reduced risks, lower storage costs, and improved efficiency throughout the organization.
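To make the data-centric approach concrete, here is a minimal sketch of automated classification with a deny-by-default access check tied to the data's label rather than to the agent. The labels, patterns, and function names are illustrative assumptions, not any vendor's API; production systems would use managed classifiers (for example, Purview sensitive information types) rather than hand-rolled regexes.

```python
import re

# Illustrative sensitivity patterns (assumed labels, not a real taxonomy).
PATTERNS = {
    "restricted": [r"\b\d{3}-\d{2}-\d{4}\b",     # US-SSN-like identifiers
                   r"\b\d{13,16}\b"],            # card-number-like digit runs
    "internal":   [r"(?i)\bconfidential\b",
                   r"(?i)\binternal use only\b"],
}

def classify(text: str) -> str:
    """Return the most sensitive label whose pattern matches, else 'public'."""
    for label in ("restricted", "internal"):  # check most sensitive first
        if any(re.search(p, text) for p in PATTERNS[label]):
            return label
    return "public"

def allowed(label: str, clearance: str) -> bool:
    """Deny-by-default: access succeeds only if the caller's clearance
    meets or exceeds the data's sensitivity label."""
    order = {"public": 0, "internal": 1, "restricted": 2}
    return order[clearance] >= order[label]
```

The point of the sketch is the ordering of concerns: the label travels with the data, and any agent (managed or not) is checked against that label at access time.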
2. Train Your Teams on Agentic AI Governance and Security Best Practices
The effectiveness of AI security and governance frameworks hinges on widespread organizational understanding. Therefore, training programs are essential for ensuring secure adoption of agentic AI. This is particularly important for agentic AI tools, as their autonomy and decision-making capabilities introduce unique risks.
An informed workforce is the first line of defense. Organizations should conduct targeted training sessions covering both the technical and ethical risks associated with agentic AI. Additionally, cross-functional incident response exercises and regular updates on regulations are vital. Continuous education and practical experience empower teams to recognize threats and adapt swiftly to compliance changes, meaningfully reducing risk.
3. Integrate Additional Agent Oversight
Relying exclusively on platform-native security controls can lead to a narrow view of security. Most teams utilize tools across multiple cloud environments, making it essential to have agnostic oversight from third-party solutions. This approach ensures that security strategies are not constrained by a single vendor’s agenda, enabling consistent governance of agents, whether they operate in Azure, AWS, GCP, or other platforms.
Third-party solutions often integrate smoothly with existing security frameworks and offer transparent reporting, allowing organizations to quickly detect and address unusual agent activities. Unlike the reactive measures from larger vendors, which can lack flexibility, third-party tools provide greater independence and help maintain a cohesive, organization-specific security posture across all AI implementations. It is equally important to establish safeguards for unmanaged agents, which is an area where many third-party providers excel.
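As a sketch of what "quickly detecting unusual agent activities" can mean in practice, the snippet below flags agents whose event volume deviates sharply from the rest of the fleet. The event schema (`agent_id` per log record) and the z-score threshold are assumptions for illustration; real oversight tools draw on much richer signals than raw counts.

```python
from collections import Counter
from statistics import mean, pstdev

def flag_anomalous_agents(events, threshold=3.0):
    """Flag agents whose event count sits more than `threshold` standard
    deviations above the fleet-wide mean. `events` is an iterable of
    dicts, each with an 'agent_id' key (assumed schema)."""
    counts = Counter(e["agent_id"] for e in events)
    mu = mean(counts.values())
    sigma = pstdev(counts.values()) or 1.0  # avoid division by zero
    return {agent for agent, c in counts.items()
            if (c - mu) / sigma > threshold}
```

A fleet of agents each emitting a handful of events will pass unflagged, while a single agent generating an order of magnitude more activity is surfaced for review, which is exactly the kind of cross-platform baseline a vendor-agnostic tool can maintain consistently across Azure, AWS, and GCP.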
While large tech companies may embrace a culture of rapid innovation, often described as “moving fast and breaking things,” other service providers focus on managing the risks that arise from these advancements. This makes it crucial to incorporate additional oversight for both managed and unmanaged agents.
Before Turning to a Reactive Solution, Lay the Groundwork for Success
Ultimately, both agentic AI vendors and organizations aspire for this technology to function safely and effectively. However, organizations should not solely rely on major vendors to rectify the issues they have created. Instead of hastily implementing reactive solutions, organizations must first concentrate on establishing fundamental controls that limit the risks associated with agentic AI.