Why trust and security are important for the future of generative AI


As generative artificial intelligence (AI) innovation continues at breakneck speed, concerns over security and risk are on the rise. Some legislators have called for new rules and regulations for AI tools, and some technology and business leaders have suggested pausing the training of AI systems to assess their safety.

We spoke with Avivah Litan, VP Analyst at Gartner, to discuss what data and analytics leaders responsible for AI development need to know about AI trust, risk and security management.

Journalists interested in speaking with Avivah about this topic can contact [email protected]. Members of the media can reference this article with proper attribution to Gartner.

Q: Given the concerns over AI security and risk, should organizations continue to explore the use of generative AI, or is a pause warranted?

A: The reality is that generative AI development will not stop. Organizations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM). There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and the companies that host generative AI foundation models.

Currently, there are no off-the-shelf tools that give users systematic privacy assurances or effective content filtering of their engagements with these models, for example, filtering out factual errors, hallucinations, copyrighted material or confidential information.

AI developers must urgently work with policymakers, including any new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management.

Q: What are some of the most significant risks that generative AI poses to enterprises today?

A: Generative AI raises several new risks:

  • “Hallucinations” and fabrications: These are among the most pervasive problems already emerging with generative AI chatbot solutions, including outright factual errors. Training data can lead to biased, off-base or wrong responses, and these can be difficult to spot, particularly as solutions become increasingly believable and relied upon.

  • Deepfakes: A significant threat arises when generative AI is used to create content with malicious intent. These fake images, videos and audio recordings have been used to target celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts.

    In a recent example, an AI-generated picture of Pope Francis wearing a fashionable white jacket went viral on social media. While this example was seemingly innocuous, it offers a glimpse of a future in which deepfakes targeting individuals, organizations and governments create significant reputational, counterfeit, fraud and political risks.

  • Data privacy: Employees can easily expose sensitive and proprietary enterprise data when interacting with generative AI chatbot solutions. These applications may store information captured through user inputs indefinitely, and may even use that information to train other models, further compromising confidentiality. Such information could also fall into the wrong hands in the event of a security breach.

  • Copyright issues: Generative AI chatbots are trained on large amounts of internet data that may include copyrighted material. As a result, some outputs may violate copyright or intellectual property (IP) protections. Without source references or transparency into how outputs are generated, the only way to mitigate this risk is for users to scrutinize outputs to ensure they do not infringe on copyright or IP rights.

  • Cybersecurity concerns: In addition to more advanced social engineering and phishing threats, attackers can use these tools to more easily generate malicious code. Vendors that offer generative AI foundation models assure customers that they train their models to reject malicious cybersecurity requests; however, they do not provide end users with the tools to effectively audit all the security controls in place.

    The vendors also put a lot of emphasis on “red teaming” approaches. These claims require users to place full trust in the vendors’ abilities to deliver on security objectives.

Q: What steps can enterprise leaders take to manage generative AI risks?

A: It is important to note that there are two general approaches to using ChatGPT and similar applications. Out-of-the-box model usage uses these services as-is, with no direct customization. A prompt engineering approach uses tools to create, tune and evaluate prompt inputs and outputs.

For out-of-the-box usage, organizations must implement manual reviews of all model output to detect incorrect, misinformed or biased results. Establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organizational or personal data.
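As a rough illustration of what such a policy could look like in practice (not a specific product or a prescribed implementation), the following sketch screens a prompt for a few obviously sensitive patterns before it is ever sent to a hosted model. The pattern list and function names are hypothetical; a real deployment would rely on dedicated data loss prevention tooling.

```python
import re

# Minimal, illustrative pre-submission policy check. The patterns below are
# toy examples; real detection would be far richer (DLP, classifiers, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_if_allowed(prompt: str) -> None:
    violations = screen_prompt(prompt)
    if violations:
        # Block the request and surface it for governance review instead of
        # silently forwarding sensitive data to an external service.
        print(f"Blocked prompt; flagged: {', '.join(violations)}")
    else:
        print("Prompt passed policy check; safe to forward to the model.")

if __name__ == "__main__":
    submit_if_allowed("Summarize our CONFIDENTIAL Q3 revenue forecast for the board.")
```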

Organizations should also monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations. For example, firewalls can block enterprise user access, security information and event management systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls.
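A minimal sketch of that kind of monitoring is shown below: it flags outbound requests to well-known generative AI API hosts in a web proxy log. The log format (space-separated fields with the destination host in the third column) and the host list are assumptions made for this example, not a prescribed configuration.

```python
# Illustrative only: flag proxy log entries that hit known generative AI endpoints.
GENAI_HOSTS = {"api.openai.com", "chat.openai.com", "generativelanguage.googleapis.com"}

def flag_genai_requests(log_lines):
    """Yield (user, host) pairs for requests that reached a generative AI endpoint."""
    for line in log_lines:
        fields = line.split()
        if len(fields) < 3:
            continue
        user, host = fields[1], fields[2]
        if host in GENAI_HOSTS:
            yield user, host

if __name__ == "__main__":
    sample_log = [
        "2023-05-01T09:14:02Z alice api.openai.com POST /v1/chat/completions",
        "2023-05-01T09:15:40Z bob intranet.example.com GET /wiki",
    ]
    for user, host in flag_genai_requests(sample_log):
        print(f"Policy review: {user} called {host}")
```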

For prompt engineering usage, all of these risk mitigation measures apply. In addition, steps should be taken to protect the internal and other sensitive data used to engineer prompts on third-party infrastructure. Create and store engineered prompts as immutable assets.

These assets can represent vetted engineered prompts that can be safely used. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold.
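One simple way to make prompt assets immutable is to address each version by a hash of its content, so an entry can never be modified in place, only superseded. The sketch below is an illustrative assumption about how such a registry might be structured, not a standard or a specific product.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative content-addressed prompt registry: each engineered prompt is
# keyed by the SHA-256 hash of its text, making stored versions immutable.
class PromptRegistry:
    def __init__(self):
        self._assets = {}  # digest -> asset record

    def register(self, name: str, prompt_text: str) -> str:
        digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
        if digest not in self._assets:
            self._assets[digest] = {
                "name": name,
                "prompt": prompt_text,
                "created": datetime.now(timezone.utc).isoformat(),
            }
        return digest  # callers reference the prompt by this immutable ID

    def get(self, digest: str) -> dict:
        return self._assets[digest]

if __name__ == "__main__":
    registry = PromptRegistry()
    asset_id = registry.register(
        "contract-summary-v1",
        "Summarize the attached contract, citing clause numbers for each point.",
    )
    print(json.dumps({"id": asset_id, **registry.get(asset_id)}, indent=2))
```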

Gartner analysts will discuss AI TRiSM at the Gartner Security & Risk Management Summits, taking place June 5-7 in National Harbor, MD; July 26-28 in Tokyo; and September 26-28 in London. Follow news and updates from the conferences on Twitter using #GartnerSEC.
