Artificial intelligence and analytics company Opaque Systems today announced new innovations for its proprietary computing platform. The new offerings prioritize organizational data privacy when using large language models (LLMs).
The company said it will showcase these innovations during its keynote address at the inaugural Confidential Computing Summit, to be held June 29 in San Francisco.
They include privacy-preserving generative AI optimized for Microsoft Azure confidential computing cloud and a zero-trust analytics platform: Data Clean Room (DCR). According to the company, its generative AI leverages multiple layers of protection by integrating secure hardware enclaves and unique cryptographic fortifications.
“The Opaque platform ensures that data remains end-to-end encrypted during model training, tuning, and inference, thereby ensuring privacy is protected,” Jay Harel, vice president of product at Opaque Systems, told VentureBeat. “To minimize the likelihood of data breaches throughout the lifecycle, our platform safeguards data at rest, in transit, and in use.”
Through these new offerings, Opaque aims to enable organizations to securely analyze confidential data, ensuring its confidentiality and protecting it from unauthorized access.
To support sensitive AI use cases, the platform has expanded its capabilities to safeguard machine learning and AI models. It achieves this by running them on encrypted data within trusted execution environments (TEEs), preventing unauthorized access.
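The pattern described above — data is encrypted everywhere except inside a trusted boundary, where it is briefly decrypted for processing and re-encrypted before leaving — can be sketched in a few lines. This is an illustrative toy, not Opaque's actual API: the XOR one-time pad stands in for real AEAD encryption (e.g., AES-GCM), and `enclave_inference` stands in for code attested and running inside a hardware TEE.

```python
# Illustrative sketch only (not Opaque's API): plaintext exists solely
# inside the function that models the trusted execution environment.
import secrets


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR; a stand-in for real authenticated encryption."""
    return bytes(b ^ k for b, k in zip(data, key))


def enclave_inference(ciphertext: bytes, key: bytes):
    """Models a TEE: the only place the prompt is ever decrypted."""
    prompt = xor_cipher(ciphertext, key)       # decrypt inside the enclave
    response = b"ECHO: " + prompt              # stand-in for LLM inference
    response_key = secrets.token_bytes(len(response))
    return xor_cipher(response, response_key), response_key  # re-encrypt on exit


# Client side: the prompt is encrypted before it leaves the caller,
# and only ciphertext crosses the boundary in either direction.
key = secrets.token_bytes(64)
ciphertext = xor_cipher(b"confidential prompt", key)
enc_response, response_key = enclave_inference(ciphertext, key)
print(xor_cipher(enc_response, response_key).decode())
```

In a real deployment the client would first verify a remote-attestation report proving the enclave runs the expected code before releasing any key to it; that step is omitted here for brevity.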
The company says its zero-trust Data Clean Rooms (DCRs) can encrypt data at rest, in transit, and in use. This approach ensures that all data sent to the cleanroom remains confidential throughout the process.
Ensure data security through confidential processing
LLMs such as ChatGPT rely on public data for training. Opaque says the true potential of these models can only be realized by training them on an organization’s confidential data with no risk of exposure.
Opaque recommends enterprises adopt confidential computing to mitigate this risk. Confidential computing safeguards data throughout model training and inference, and the company says the method can unlock the transformative capabilities of LLMs.
“We use Confidential Computing technology to take advantage of specialized hardware made available by cloud providers,” Opaque’s Harel told VentureBeat. “This privacy-enhancing technology ensures that datasets are end-to-end encrypted throughout the machine learning lifecycle. With Opaque’s platform, the model, prompt, and context remain encrypted during training and while inference is running.”
Harel said the lack of secure data sharing and analytics in organizations with multiple data owners has led to restrictions on data access, deletion of datasets, masking of data fields, and outright prevention of data sharing.
He said there are three main issues when it comes to generative AI and privacy, especially in terms of LLMs:
- Queries: LLM providers have visibility into user requests, which may contain sensitive information such as proprietary code or personally identifiable information (PII). This privacy concern intensifies with the growing risk of hacking.
- Training models: To improve AI models, vendors retain and analyze their training data. This retention can lead to an accumulation of sensitive information, increasing the vendor’s exposure to data breaches.
- Intellectual property issues for organizations with proprietary models: Developing models using corporate data requires either granting proprietary LLM providers access to the data or deploying proprietary models within the organization. As outside parties gain access to private and sensitive data, the risk of hacking and data breaches increases.
The company has developed its generative AI technology with these issues in mind. It aims to enable secure collaboration between organizations and data owners while ensuring regulatory compliance.
For example, one company may train and tune a specialized LLM, while another may use it for inference. Both companies’ data remains private, with no access granted to the other’s.
“With Opaque’s platform ensuring that all data is encrypted throughout its lifecycle, organizations would be able to train, refine and run inference on LLMs without actually gaining access to the raw data itself,” Harel said.
The company highlighted its use of secure hardware enclaves and cryptographic fortification in its zero-trust Data Clean Room (DCR) offering, saying this approach provides multiple layers of protection against cyberattacks and data breaches.
Operating in a cloud-native environment, the system runs within a secure enclave on the user’s cloud instance (such as Azure or GCP). This configuration limits data movement while allowing companies to maintain their existing data infrastructure.
“Our mission is to ensure that everyone can trust the privacy of their sensitive data, whether it is customer PII or proprietary business process data. For AI workloads, we enable enterprises to keep their data encrypted and secure throughout its lifecycle, from model training and tuning to inference, thus ensuring privacy is protected,” added Harel. “Data is kept confidential at rest, in transit, and in use, significantly reducing the likelihood of loss.”
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.