Security & compliance
Safety and security in the era of generative AI
Business leaders are excited by the promise of this technology, but they express key concerns about integrating it into their organizations safely.
There are worries about vulnerabilities stemming from inaccuracies and misinformation, human error, and internal and external data breaches. They're right to be concerned. We're entering a new era of heightened security risk, one that demands we continuously help end users understand what's at stake. Fortunately, there's a lot we can do to protect our businesses and people while still leveraging the full power of generative AI.
Issues related to safety and security appeared in three of the top four concerns around implementing generative AI. When asked to describe the greatest barriers to generative AI implementation, respondents chose security as their top concern. Leaders worry that stakeholders, from employees and investors to customers and beyond, will fall prey to external AI-generated content, such as misinformation, deepfakes and phishing scams. They are almost equally concerned about data breaches and leaks resulting from internal generative AI solutions that widen access to company data and intellectual property (IP). Leaders are also focused on risks from inside the organization, such as a data breach stemming from inadvertent employee error.
The key tenets of generative AI security are confidentiality, integrity and availability. An organization must understand its appetite for risk and weigh the technical innovations generative AI can yield against the overall risk of pursuing them. Early adopters run the risk of having to blaze the trail on their own, but they also stand to benefit from being first to market. Leaders need to guide the entire company through security policies and governance models, whether by drafting new policies or adding to existing ones.
Confidentiality: Employees will want to interact with corporate data using generative AI models, so organizations must consider the quality and sensitivity of that data. They should also apply appropriate labels and controls to prevent unauthorized use.
Integrity: Generative AI models can still produce incorrect outputs. Creating verification processes and implementing a closed feedback loop for continuous improvement in response accuracy ensures outputs receive the right level of scrutiny and that gross errors are caught.
Availability: System availability is not an obvious security consideration, but it is crucial. If generative AI makes its way into automation and orchestration workflows without appropriate quality assurance and testing, the results could be unpredictable at best and devastating at worst.
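To make these tenets concrete, here is a minimal Python sketch of a guardrail wrapper around a model call. The sensitivity labels, the call_model() and passes_verification() helpers, and the timeout value are all illustrative assumptions, not any specific product's API.

```python
from dataclasses import dataclass

# Illustrative sketch only: the labels, timeout, and helper names below are
# assumptions for this example, not a vendor API.

ALLOWED_LABELS = {"public", "internal"}  # confidentiality: restricted data stays out


@dataclass
class Document:
    text: str
    sensitivity: str  # e.g. "public", "internal", "restricted"


def call_model(prompt: str, timeout_s: float) -> str:
    """Stand-in for the real generative AI call (assumed interface)."""
    return "stub response"


def passes_verification(answer: str) -> bool:
    """Stand-in for a verification step, e.g. fact checks or policy filters."""
    return bool(answer.strip())


def guarded_generate(prompt: str, context: Document) -> str:
    # Confidentiality: honor the data's label before it reaches the model.
    if context.sensitivity not in ALLOWED_LABELS:
        raise PermissionError(
            f"'{context.sensitivity}' data may not be sent to the model"
        )

    # Availability: bound the call so a hung model cannot stall a workflow.
    try:
        answer = call_model(f"{context.text}\n\n{prompt}", timeout_s=10.0)
    except TimeoutError:
        return "Generative AI unavailable; route this request to manual handling."

    # Integrity: flag output that fails verification for human review
    # instead of letting it flow straight into downstream automation.
    if not passes_verification(answer):
        return f"[NEEDS REVIEW] {answer}"
    return answer


if __name__ == "__main__":
    doc = Document(text="Q3 revenue summary...", sensitivity="internal")
    print(guarded_generate("Summarize this report.", doc))
```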
81% of leaders say their organization has already established, or is currently developing, internal generative AI policies.
But fewer (56%) have actually distributed a formal corporate AI-use policy.
An internal policy outlines how an organization will responsibly and securely use generative AI. Some common elements include:
Roles and responsibilities of the people involved in implementing generative AI
Data governance in the context of generative AI
Model training standards and best practices
Security and privacy controls, including user access, encryption and incident response plans
Guidelines on ethical use and avoiding copyright infringement
Compliance and legal considerations, especially industry-specific guidance
Employee training protocols
Monitoring and auditing mechanisms
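As one illustration of that last element, an audit trail for generative AI use might look like the following sketch; the file name, field names and JSON-lines format are assumptions, not a prescribed standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail sketch; adapt the fields to your own policy.
AUDIT_LOG = "genai_audit.jsonl"


def audit(user: str, role: str, prompt: str, response: str) -> None:
    """Append one record per model interaction for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,  # ties back to the roles and responsibilities above
        "prompt": prompt,
        "response_chars": len(response),  # log size, not sensitive content
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Logging the response length rather than its full content is one way to keep the audit trail itself from becoming a new store of sensitive data.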