The last two days have been intense in Redmond: yesterday, Microsoft announced its new Azure OpenAI Service for government. Today, the tech giant revealed a series of three commitments to its customers as they seek to integrate generative AI into their organizations in a safe, responsible and secure way.
Each represents a continued step forward in Microsoft’s journey to integrate AI and to assure its enterprise customers that its AI solutions and approach can be trusted.
Generative AI for government agencies of all levels
Those who work in government agencies and civil services at the local, state and federal levels are often inundated with more data than they know how to handle, including data on constituents, contractors and initiatives.
Generative AI, then, would appear to present a huge opportunity: giving public employees the ability to sift through their massive amounts of data faster using natural language queries and commands, as opposed to clunkier, more outdated data retrieval methods.
However, government agencies typically have very strict requirements on the technology they can apply to their data and business. Enter Microsoft Azure Government, which already works with the US Department of Defense, the Department of Energy and NASA, as Bloomberg noted when it broke the news of the new Azure OpenAI Service for government.
“For government customers, Microsoft has developed a new architecture that allows government agencies to securely access large language models in the commercial environment from Azure Government, enabling those users to maintain the stringent security requirements necessary to government cloud operations,” wrote Bill Chappell, Microsoft’s chief technology officer of strategic missions and technologies, in a blog post announcing the new tools.
Specifically, the company introduced the Azure OpenAI Service REST APIs, which enable government customers to build new applications, or connect existing ones, to OpenAI’s GPT-4, GPT-3 and Embeddings models, but not over the public internet. Rather, Microsoft allows government customers to connect to the OpenAI APIs securely over its transport layer security (TLS)-encrypted “Azure Backbone.”
“This traffic stays entirely within the Microsoft global network backbone and never enters the public internet,” the blog post specifies, later stating, “Your data is never used to train the OpenAI model (your data is your data).”
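Microsoft’s description suggests that, from a developer’s perspective, the service is reached through the same REST API shape as commercial Azure OpenAI: an HTTPS POST to a deployment-specific endpoint, authenticated with an `api-key` header. The sketch below shows how such a request could be assembled; the resource name, deployment name and the `.azure.us` government domain are illustrative assumptions, not confirmed endpoints.

```python
import json
import urllib.request

# Hypothetical names -- replace with your own resource and deployment.
# Azure Government services generally live on *.azure.us domains rather
# than the commercial *.azure.com; the request shape is the same.
RESOURCE = "my-agency-resource"
DEPLOYMENT = "gpt-4"
API_VERSION = "2023-05-15"


def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble a TLS-encrypted HTTPS request to an Azure OpenAI
    chat-completions endpoint. When issued from inside Azure Government,
    Microsoft says the traffic is routed over its network backbone
    rather than the public internet."""
    url = (
        f"https://{RESOURCE}.openai.azure.us/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps(
        {"messages": [{"role": "user", "content": prompt}]}
    ).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )


req = build_chat_request("Summarize contractor filings from Q2.", "YOUR_KEY")
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) would return a JSON chat-completion response; the key point for government customers is that the endpoint resolves inside Microsoft’s network rather than a public one.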
New customer commitments
On Thursday, Microsoft unveiled three commitments to all of its customers in terms of how the company will approach the development of generative AI products and services. These include:
- Sharing what it has learned about developing and deploying AI responsibly
- Creating an AI Assurance Program
- Supporting customers as they implement their own AI systems responsibly
As part of the first commitment, Microsoft said it will publish key documents, including its Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes and detailed primers on responsible AI implementation. Additionally, Microsoft will share the curriculum it uses to train its own employees in responsible AI practices.
The second effort focuses on building an AI Assurance Program. This program will help customers ensure that the AI applications they deploy on Microsoft platforms comply with legal and regulatory requirements for responsible AI. It will include elements such as support for regulatory engagement, implementation of the AI Risk Management Framework published by the US National Institute of Standards and Technology (NIST), and advising customers on regulatory feedback and advocacy.
Finally, Microsoft will provide support for customers as they implement their AI systems responsibly. The company plans to establish a dedicated team of AI legal and regulatory experts in different regions of the world to assist companies in implementing responsible AI governance systems. Microsoft will also work with partners, such as PwC and EY, to leverage their expertise and help customers implement their own responsible AI systems.
The broader context around Microsoft and AI
While these commitments mark the beginning of Microsoft’s efforts to promote the responsible use of AI, the company recognizes that ongoing adaptations and improvements will be required as the technology and regulatory landscape evolves.
Microsoft’s move comes in response to concerns about the potential misuse of AI and the need for responsible AI practices, including recent letters from US lawmakers questioning Meta Platforms founder and CEO Mark Zuckerberg about the company’s release of its LLaMA LLM, a line of questioning that experts say could have a chilling effect on the development of open-source AI.
The news also comes on the heels of Microsoft’s annual Build conference for software developers, where the company unveiled Fabric, its new analytics platform for cloud users, a move that seeks to put Microsoft ahead of cloud analytics offerings from Google and Amazon.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.