Governance considerations and pitfalls in implementing generative artificial intelligence

Enterprise adoption of generative AI has created an entirely new branch of governance and compliance risk. It has also highlighted the need to review and strengthen traditional information governance frameworks. Many large organizations are still in the process of establishing robust information governance frameworks for their current environments. Now, they must also address questions about their readiness to manage the impact of Copilot1 and similar generative AI tools. These questions include whether their IT infrastructure can adequately support access, use and management of these tools. In addition, organizations should assess whether new artifacts are created that could introduce unforeseen regulatory risks, including new forms of information that may require disclosure to regulators under existing obligations.


While asking the necessary questions can expose vulnerabilities early on, it is critical to ask and test often. Governance questions must be part of the foundation of generative AI test plans, proof-of-concept evaluations and pilot initiatives. Ultimately, the answers to these questions provide significant insights: identifying business use cases and technology applicability, understanding risk so it can be properly mitigated and managed, and developing defensible documentation and changes to information and AI governance processes. Using this information, organizations can safely and effectively integrate generative AI tools into their data governance processes.

Monitoring

The use of AI tools should be monitored and managed within an organization to detect misuse or violations of regulatory obligations. These initiatives are more successful with cooperation from IT, compliance, legal and organizational users. Such stakeholders are tasked with ensuring compliance and preventing employees from submitting improper prompts or accessing sensitive or restricted company information when engaging with AI tools.

Policies and workflows are used to monitor inappropriate or non-compliant activity, set permissions for who can access certain categories of information, and support an organization’s data retention and disposal rules. Unfortunately, even when these controls, which are critical to effectively managing a range of legal, regulatory and organizational risks, are in place, they are not always easily applied to new AI deployments, including Copilot in Microsoft 365 environments. In some cases, policies will need to be created or reconfigured to apply to generative AI interactions and activity.
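
As a starting point, Copilot-related events can be pulled from the Microsoft 365 unified audit log through the Office 365 Management Activity API. The following is a minimal sketch, not a definitive monitoring solution: it assumes an Entra ID app registration with the ActivityFeed.Read application permission and an already-started Audit.General subscription, and the tenant, client and operation-name values are placeholders that should be verified against the audit schema in your own tenant.

```python
"""Minimal sketch: pull recent records from the Office 365 Management
Activity API and flag Copilot-related events. Assumes an Entra ID app
registration with the ActivityFeed.Read application permission and an
Audit.General subscription that was started once beforehand."""
import datetime

import msal
import requests

TENANT_ID = "<tenant-guid>"          # placeholder values
CLIENT_ID = "<app-registration-id>"
CLIENT_SECRET = "<client-secret>"

BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"


def get_token() -> str:
    # Client-credentials flow against the Management Activity API resource.
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    return app.acquire_token_for_client(
        scopes=["https://manage.office.com/.default"])["access_token"]


def copilot_events(hours: int = 24) -> list[dict]:
    headers = {"Authorization": f"Bearer {get_token()}"}
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=hours)
    # List the available content blobs for the window, then fetch each one.
    listing = requests.get(
        f"{BASE}/subscriptions/content",
        params={"contentType": "Audit.General",
                "startTime": start.isoformat(timespec="seconds"),
                "endTime": end.isoformat(timespec="seconds")},
        headers=headers,
    )
    listing.raise_for_status()
    events = []
    for blob in listing.json():
        for record in requests.get(blob["contentUri"], headers=headers).json():
            # We assume Copilot interactions carry "Copilot" in the
            # Operation field; confirm the exact value in your tenant.
            if "copilot" in record.get("Operation", "").lower():
                events.append(record)
    return events


if __name__ == "__main__":
    for e in copilot_events():
        print(e.get("CreationTime"), e.get("UserId"), e.get("Operation"))
```

In practice, output like this would feed a SIEM or compliance dashboard rather than standard output, so that compliance and legal stakeholders can review flagged activity alongside other audit data.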

Access controls

The issue of access control within Microsoft 365 is not a new concept, and information governance professionals have advocated for well-managed access permissions in SharePoint and other aspects of the Microsoft 365 environment for many years.2 However, it is particularly relevant in Copilot deployments and, left unchecked, can create significant risks on numerous fronts.

With Copilot, anything a user has permission to access can appear as part of a response to a query or request. Without Copilot, an over-permissioned user with access to documents they should not see would typically discover those documents only by actively looking for them. With Copilot, therefore, excessive permissions and unrestricted access to certain materials can expose information to many more employees than intended. To manage this, organizations must be diligent in defining controls and thoroughly understand the range of materials that Copilot users can access at different permission levels.

In particular, when Copilot is enabled for a user, every Microsoft 365 app with a Copilot element will have AI enabled. Administrators and users cannot select which applications can use Copilot and which cannot; a user cannot, for example, disable Copilot for a specific product. There are, however, options to limit certain functionality and features through administrative settings, such as a Teams administrator updating the meeting transcription settings so that Copilot cannot be used during Teams meetings.

Therefore, each application in the tenant must be checked for access controls and assessed for different types of information risk. For example, in Copilot Chat for Microsoft 365, Copilot works across apps to answer users’ questions about upcoming meetings, related emails and items it identifies as needing follow-up. Users can point Copilot to Word documents or PowerPoint files to answer questions or generate content, which can cause the system to scan files accessible in SharePoint, OneDrive and Outlook.
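
One practical way to gauge what Copilot could draw on for a given user is to run a permission-trimmed Microsoft Graph search under that user’s identity, since both respect the same access controls. The sketch below is a minimal illustration under that assumption, not a definitive test harness; the app registration, scopes and query terms are illustrative and would need to match your own tenant configuration.

```python
"""Minimal sketch: sign in as a test user and run a permission-trimmed
Microsoft Graph search, approximating the content Copilot could ground
its answers on for that user. App ID, scopes and query are placeholders."""
import msal
import requests

CLIENT_ID = "<app-registration-id>"  # placeholder values
TENANT_ID = "<tenant-guid>"
SCOPES = ["Files.Read.All", "Sites.Read.All"]


def user_token() -> str:
    # Device-code flow so the probe runs under the test user's identity;
    # Graph search results are trimmed to that user's permissions.
    app = msal.PublicClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}")
    flow = app.initiate_device_flow(scopes=SCOPES)
    print(flow["message"])  # the test user completes sign-in in a browser
    return app.acquire_token_by_device_flow(flow)["access_token"]


def probe(query: str) -> None:
    body = {"requests": [{
        "entityTypes": ["driveItem"],
        "query": {"queryString": query},
    }]}
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/search/query",
        json=body,
        headers={"Authorization": f"Bearer {user_token()}"},
    )
    resp.raise_for_status()
    # Every hit is a document this user (and thus Copilot acting for
    # this user) can reach.
    for container in resp.json()["value"][0]["hitsContainers"]:
        for hit in container.get("hits", []):
            item = hit["resource"]
            print(item.get("name"), "->", item.get("webUrl"))


if __name__ == "__main__":
    # Terms a governance team might test for oversharing (illustrative).
    probe('salary OR severance OR "board minutes"')
```

Running probes like this for representative users at different permission levels helps document, before rollout, which categories of material each role can actually reach.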

Key considerations

Given the constant cycle of change in the Microsoft 365 environment, frequent auditing of these applications and controls is essential to maintain adherence to governance rules over time. In addition to regularly monitoring permissions, organizations should take and document the following steps to strengthen governance when implementing new AI:

Evaluate the proof of concept—Before deploying any generative AI tool, legal and IT teams should work closely together to conduct a limited pilot with a small group of test users. This will help reveal risks and governance gaps that may be unique to the organization before the system is rolled out broadly.

Assess AI governance readiness—This step involves reviewing existing access control management across all systems in the environment that Copilot (or another generative AI tool) can access. The good news is that control tests to date have shown that Copilot appears to stay aligned with established access controls and surfaces only the documents and data that a person is allowed to view. Rigorously evaluating permissions can therefore help limit the risk of access control failures for Copilot users (a minimal permission-review sketch follows this list).

Establish an AI committee—An active stakeholder team is essential to set policy, advance it in an informed way and keep it up to date as features and functionality change in the Microsoft 365 environment. AI committees cannot be vanity committees. They need to be made up of people who understand legal, regulatory, technical and organizational needs and how those needs may be affected by the use of AI.

Define labeling policies—Defining a labeling system for documents and categories of information that must be treated with different levels of confidentiality or protection is an effective way to support governance in an environment using Copilot. This will help ensure that sensitive material is excluded from AI-generated responses so that it is not inadvertently shared outside the groups authorized to view it.

Evaluate continuously—Cloud systems and AI technology are advancing rapidly. Functionality and controls are constantly changing, so organizations need an AI governance program that is built for adaptability. Part of maintaining flexibility is understanding that even after the initial assessment of strengths and weaknesses in access control and other aspects of governance, and even after the proof of concept is complete, the program cannot be put on autopilot. System owners must remain vigilant and continually retest to confirm whether controls hold up over time and whether anything in the system creates new or unexpected risks.

Provide continuous training—Developing and implementing an engaging training plan is key to successful implementation and governance. All employees must understand their organization’s information and AI governance policies, practices and procedures. Similarly, everyone needs to understand and recognize the appropriate use of any new AI tools. In addition, training will help convey unique organizational and departmental use cases discovered during piloting and ongoing evaluation to ensure employees are responsibly maximizing the value of Copilot.
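
To support the readiness assessment and continuous evaluation steps above, permission reviews can be partly automated. The sketch below walks a SharePoint document library through the Microsoft Graph delta API and flags items exposed through organization-wide or anonymous sharing links. It is a minimal illustration, assuming an app registration with the Files.Read.All application permission; the drive ID and credential values are placeholders.

```python
"""Minimal sketch: walk a SharePoint document library via the Microsoft
Graph delta API and flag items shared through organization-wide or
anonymous links. Tenant, app and drive identifiers are placeholders."""
import msal
import requests

TENANT_ID = "<tenant-guid>"          # placeholder values
CLIENT_ID = "<app-registration-id>"
CLIENT_SECRET = "<client-secret>"
DRIVE_ID = "<document-library-drive-id>"
GRAPH = "https://graph.microsoft.com/v1.0"


def app_token() -> str:
    # App-only client-credentials flow against Microsoft Graph.
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    return app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"])["access_token"]


def flag_broad_sharing(drive_id: str) -> None:
    headers = {"Authorization": f"Bearer {app_token()}"}
    url = f"{GRAPH}/drives/{drive_id}/root/delta"  # enumerates every item
    while url:
        page = requests.get(url, headers=headers)
        page.raise_for_status()
        data = page.json()
        for item in data.get("value", []):
            if "file" not in item:
                continue  # skip folders and deletion markers
            perms = requests.get(
                f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
                headers=headers).json().get("value", [])
            for perm in perms:
                scope = perm.get("link", {}).get("scope")
                if scope in ("organization", "anonymous"):
                    # Anything listed here is reachable by Copilot for far
                    # more users than the document owner may expect.
                    print(f"{item.get('name')}: shared via {scope} link")
        url = data.get("@odata.nextLink")  # absent on the final page


if __name__ == "__main__":
    flag_broad_sharing(DRIVE_ID)
```

A full review would also examine group memberships and direct permission grants, not just sharing links, and would be rerun on a schedule so that drift is caught as the environment changes.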

Conclusion

When Copilot first became available, many organizations felt both the excitement and the pressure of early adoption. There will continue to be momentum for rapid adoption as other AI tools and features enter the market. Innovation is important, but it cannot come at the expense of effective risk management. Organizations, especially those in highly regulated industries, need to take the time to test their use cases and allow IT to align with other stakeholders. The stakes will only rise as regulators scrutinize how organizations use AI and as AI proliferates in enterprise systems that host sensitive and confidential information. Organizations should pursue a middle-ground approach: adopting AI while establishing controls and verifying that tools work as intended.

Endnotes

1 Microsoft, Copilot
2 One Identity, “Identities and Security in 2022”

Tori Anderson

Is a director at FTI Technology with almost 10 years of experience in ediscovery, information management and governance. Anderson holds a law degree from the University of Miami (Florida, USA) and is licensed to practice law in Florida and Washington DC, USA.

Tracy Bordignon

Is a senior director at FTI Technology with more than a decade of experience in information governance and privacy, helping organizations manage legal risk. Bordignon holds a law degree from Southwestern Law School (California, USA) and is licensed to practice law in Florida, USA.