7 takeaways from a year of building generative AI responsibly and at scale

AI generated decorative image.

Last year saw huge advances in generative AI, as people experienced the ability to generate lifelike visuals with words and Microsoft Copilot tools that can summarize missed meetings, help write business proposals or suggest a dinner menu based on what’s in your fridge. While Microsoft has long established principles and processes for building AI applications in ways that seek to minimize unexpected harm and give people the experiences they’re looking for, deploying generative AI products on such a large scale has introduced new challenges and opportunities.

That’s why Microsoft recently released its first annual Responsible AI Transparency Report to help people understand how we approach responsible AI (RAI). The company has also rolled out new tools available in Azure AI for enterprise customers and developers to help safeguard the quality of their AI outputs and protect against malicious or unexpected uses of the systems.

It’s been a momentous year of stress-testing exciting new technology and safeguards at scale. Here are some key takeaways from Natasha Crampton, Microsoft’s Chief Responsible AI Officer, who leads the team defining and governing the company’s approach to RAI, and Sarah Bird, Microsoft’s Chief Product Officer for Responsible AI, who drives RAI implementation across the product portfolio:

AI generated decorative image.

#1: Make responsible AI a foundation, not an afterthought

Responsible AI is never about a single team or set of experts, but rather the responsibility of all employees across Microsoft. For instance, every employee who works on developing generative AI applications must follow the company’s Responsible AI Standard, a detailed list of responsible AI requirements for product development. These include instructions for assessing the potential impact of new AI applications, creating plans for managing previously unknown failures that come to light once in use, and identifying limitations or changes so customers, partners, and people using the AI applications can make informed decisions.

Microsoft has also invested in mandatory training to build awareness and advocacy across the company – at the end of last year 99 percent of employees had completed a training module on responsible AI in our annual standards of business conduct training.

“It’s not possible to do responsible AI work as some sort of after-thought bolt on checklist immediately prior to shipping a product,” says Natasha Crampton. “It needs to be integrated into the way in which we build products from the very beginning. We need everyone across the company to be thinking about responsible AI considerations from the very get go.”
AI generated decorative image.

#2: Be ready to evolve and move quickly

New AI product development is dynamic. Taking generative AI to scale has required rapid integration of customer feedback from dozens of pilot programs, followed by ongoing engagement with customers to understand not only what issues might emerge as more people begin using the new technology, but what might make the experience more engaging.

It was through this process that Microsoft decided to offer different conversational styles – more creative, more balanced or more precise mode – as part of Copilot on its Bing search engine.

“We need to have an experimentation cycle with them where they try things on,” says Sarah Bird. “We learn from that and adapt the product accordingly.”
AI generated decorative image.

#3: Centralize to get to scale faster

As the company introduced Microsoft Copilot and started integrating those AI-powered experiences across its products, the company needed a more centralized system to make sure everything that was being released met the same high bar. And it didn’t make sense to reinvent the wheel with every product, which is why the company is developing one responsible AI technology stack in Azure AI so teams can rely on the same tools and processes.

Microsoft’s responsible AI experts also developed a new approach that centralizes how product releases are evaluated and approved. The team reviews the steps product teams have taken to map, measure and manage potential risks from generative AI, based on a consensus-driven framework, at every layer of the technology stack and before, during and after a product launch. They also consider data collected from testing, threat modeling and “red-teaming,” a technique to pressure-test new generative AI technology by attempting to undo or manipulate safety features.

Centralizing this review process made it easier to detect and mitigate potential vulnerabilities across the portfolio, develop best practices, and ensure timely information-sharing across the company and with customers and developers outside Microsoft.

“The technology is changing, superfast,” says Sarah Bird. “We’ve had to really focus on getting it right once, and then reuse (those lessons) maximally.”
AI generated decorative image.

#4: Tell people where things come from

Because AI systems have become so good at generating artificial video, audio and images that are difficult to distinguish from the real thing, it’s increasingly important for users to be able to identify the provenance, or source, of AI generated information.

In February, Microsoft joined with 19 other companies in agreeing to a set of voluntary commitments aimed at voluntary commitments aimed at combating deceptive use of AI and the potential misuse of “deepfakes” in the 2024 elections. This includes encouraging features to block abusive prompts aimed at creating false images meant to mislead the public, embedding metadata to identify the origins of an image and providing mechanisms for political candidates to report deepfakes of themselves.

Microsoft has developed and deployed media provenance capabilities – or “Content Credentials” – that enable users to verify whether an image or video was generated by AI, using cryptographic methods to mark and sign AI-generated content with metadata about its source and history, following an open technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA), which we co-founded in 2021. Microsoft’s AI for Good Lab has also directed more of its focus on identifying deepfakes, tracking bad actors and analyzing their tactics.

“These issues aren’t just a challenge for technology companies, it’s a broader societal challenge as well,” says Natasha Crampton.
AI generated decorative image.

#5: Put RAI tools in the hands of customers

To improve the quality of AI model outputs and help protect against malicious use of its generative AI systems, Microsoft also works to put the same tools and safeguards it uses into the hands of customers so they can build responsibly. These include open-source as well as commercial tools and services, and templates and guidance to help organizations build, evaluate, deploy and manage generative AI systems.

Last year, Microsoft released Azure AI Content Safety, a tool that helps customers identify and filter out unwanted outputs from AI models such as hate, violence, sexual or self-harm content. More recently, the company has added new tools that are now available or coming soon in Azure AI Studio to help developers and customers improve the safety and reliability of their own generative AI systems.

These include new features that allow customers to conduct safety evaluations of their applications that help developers to identify and address vulnerabilities quickly, perform additional risk and safety monitoring and detect instances where a model is “hallucinating” or generating data that is false or fictional.

“The point is, we want to make it easy to be safe by default,” says Sarah Bird.
AI generated decorative image.

#6: Expect people to break things

As people experience more sophisticated AI technology, it’s perhaps inevitable that some will try to challenge systems in ways that range from harmless to malicious. That’s given rise to a phenomenon known as “jailbreaks,” which in tech refers to the practice of working to get around safety tools built into AI systems.

In addition to probing for potential vulnerabilities before it releases updates of new AI products, Microsoft works with customers to ensure they also have the latest tools to protect their own custom AI applications built on Azure.

For instance, Microsoft has recently made new models available that use pattern recognition to detect and block malicious jailbreaks, helping to safeguard the integrity of large language models (LLM) and user interactions. Another seeks to prevent a new type of attack that attempts to insert instructions allowing someone to take control of the AI system.

“These are uses that we certainly didn’t design for, but that’s what naturally happens when you are pushing on the edges of the technology,” says Natasha Crampton.
AI generated decorative image.

#7: Help inform users about the limits of AI

While AI can already do a lot to make life easier, it’s far from perfect. It’s a good practice for users to verify information they receive from AI-enabled systems, which is why Microsoft provides links to cited sources at the end of any chat-produced output.

Since 2019, Microsoft has been releasing “transparency notes” providing customers of the company’s platform services with detailed information about capabilities, limitations, intended uses and details for responsible integration and use of AI. The company also includes user-friendly notices in products aimed at consumers, such as Copilot, to provide important disclosures around topics like risk identification, the potential for AI to make errors or generate unexpected content, and to remind people they are interacting with AI.

As generative AI technology and its uses continue to expand, it will be critical to continue to strengthen systems, adapt to new regulation, update processes and keep striving to create AI systems that deliver the experiences that people want.

“We need to be really humble in saying we don’t know how people will use this new technology, so we need to hear from them,” says Sarah Bird. “We have to keep innovating, learning and listening.”

Images for this story were created using Microsoft Designer, an AI-powered graphic design application.