Parallel Domain unveils Reactor, an AI-powered generative synthetic data generation engine

Synthetic data platform Parallel Domain today announced the launch of Reactor, a synthetic data generation engine that integrates advanced generative AI technologies with the company's proprietary 3D simulation capabilities. The platform aims to give machine learning (ML) developers control and scalability, enabling them to generate fully annotated data that improves AI performance and supports the creation of safer, more resilient AI systems for real-world applications.

According to the company, Reactor improves AI performance in fields such as autonomous vehicles and drones by generating high-quality images. The tool also leverages generative AI to produce annotated data, a key requirement for ML tasks.

By generating both bounding boxes (for object detection) and panoptic segmentation annotations (which label every pixel by semantic class and object instance), Reactor ensures that AI models can make effective use of visual data, resulting in more accurate and reliable results.
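The article does not describe Reactor's output schema, so for readers unfamiliar with these annotation types, the sketch below shows a generic, COCO-style layout; the class names and fields are illustrative assumptions, not Parallel Domain's format.

```python
# Illustrative only: a generic, COCO-style layout for the two annotation types
# described above. Parallel Domain's actual output schema is not documented in
# this article and may differ.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str       # e.g. "pedestrian", "stroller"
    x: float         # top-left corner, in pixels
    y: float
    width: float
    height: float

@dataclass
class PanopticSegment:
    label: str        # semantic class of the segment
    instance_id: int  # distinguishes individual objects of the same class
    mask_rle: str     # run-length-encoded pixel mask for the segment

@dataclass
class AnnotatedFrame:
    image_path: str
    boxes: List[BoundingBox] = field(default_factory=list)
    segments: List[PanopticSegment] = field(default_factory=list)
```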

“Our proprietary generative AI technology allows users to create and manipulate synthetic data using intuitive natural language prompts, while also generating the corresponding labels needed for training and testing ML models,” Kevin McNamara, CEO and founder of Parallel Domain, told VentureBeat. “Reactor’s ability to generate several synthetic examples has led to significant performance improvements in tasks such as pedestrian segmentation and debris and stroller detection. Its ability to enhance the diversity of datasets, especially for rare classes, contributes to superior model training.”

Rapid iteration and refinement of ML models

The company said its tool allows users to create a wealth of synthetic data to train and test perception models. This is achieved by integrating Python and natural language, eliminating the time-consuming creation of custom assets and streamlining workflows to improve efficiency. As a result, ML developers can rapidly iterate and refine their models, reducing turnaround times and accelerating AI development.

“Integrating these technologies into our platform allows users to generate data using Python and natural language commands, enhancing the flexibility of synthetic data generation,” McNamara told VentureBeat. “Reactor provides ML developers with control and scalability, redefining the synthetic data generation landscape. With Reactor, users can generate almost any asset in seconds using natural language prompts.”

Leveraging generative AI to improve synthetic data pipelines

According to McNamara, while other companies use generative AI to create visually compelling data, that data is unusable for training ML models without annotations. Reactor overcomes this limitation by generating fully annotated data, which improves the ML process and enables developers to build safer and more effective AI systems.

“We leverage generative AI and 3D simulation to create a wealth of detailed and realistic synthetic data,” McNamara told VentureBeat. “Generative AI enables the production of different scenarios and objects, while 3D simulation adds physical realism, ensuring the robustness of AI models trained on this data. Before now, generative models have struggled to understand what they are generating, which makes them very poor at providing annotations like bounding boxes and panoptic segmentation, which are crucial for training and testing AI models.”

McNamara said the tool provides a broad spectrum of data and scene customization options. Additionally, its adaptive background creation feature allows for easy editing of generated scenes, allowing ML models to be generalized across various environments. For example, users can transform a suburban California setting into a bustling downtown Tokyo scene.

Intuitive image generation

Reactor’s natural language prompts offer an intuitive way to generate image variations, according to McNamara. Users can edit existing images with simple prompts such as “make this image look like a snowstorm” or “put raindrops on the lens”. This streamlined customization eliminates the need to wait for custom assets to be created, improving efficiency and turnaround times.
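The article does not document the API behind these edits, but the workflow it describes maps onto something like the hypothetical helper below; the function name, signature and output naming scheme are assumptions for illustration only.

```python
# Hypothetical sketch of prompt-driven image variation as described above.
# The helper function, its signature and the output naming scheme are
# illustrative assumptions, not Parallel Domain's documented API.
from pathlib import Path
from typing import List

WEATHER_PROMPTS: List[str] = [
    "make this image look like a snowstorm",
    "put raindrops on the lens",
]

def generate_variations(base_image: Path, prompts: List[str]) -> List[Path]:
    """Produce one edited copy of base_image per natural-language prompt."""
    outputs = []
    for i, prompt in enumerate(prompts):
        # A real pipeline would send (base_image, prompt) to the generation
        # backend here; this sketch only records where each result would live.
        out_path = base_image.with_name(f"{base_image.stem}_variant_{i}.png")
        print(f"{prompt!r} -> {out_path}")
        outputs.append(out_path)
    return outputs

generate_variations(Path("suburban_scene.png"), WEATHER_PROMPTS)
```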

“The adaptive background creation feature in Reactor enriches the diversity of training environments for ML models,” explained McNamara. “This broadens the scenarios on which the model can be trained, helping it better recognize and respond to changing real-world conditions.”

Reactor’s generative architecture allows its models to understand the structure of the objects and scenes they generate, so pixel-level and spatial semantic information can be extracted directly from layers of the generative process. This results in fully automatic and accurate annotations.
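One way to see why per-pixel scene understanding makes annotation automatic: once every pixel carries an object instance ID, bounding boxes can be computed directly from the mask. A minimal, generic sketch of that step follows; it illustrates the general technique, not Parallel Domain's implementation.

```python
# Minimal, generic sketch of why per-pixel instance labels make box annotation
# automatic: given a mask of instance IDs, bounding boxes follow directly.
# This illustrates the general technique, not Parallel Domain's implementation.
import numpy as np

def boxes_from_instance_mask(instance_mask: np.ndarray) -> dict:
    """Return {instance_id: (x_min, y_min, x_max, y_max)} for each non-zero ID."""
    boxes = {}
    for instance_id in np.unique(instance_mask):
        if instance_id == 0:  # treat 0 as background / unlabeled
            continue
        ys, xs = np.nonzero(instance_mask == instance_id)
        boxes[int(instance_id)] = (int(xs.min()), int(ys.min()),
                                   int(xs.max()), int(ys.max()))
    return boxes

# Tiny example: a 4x5 mask containing two object instances.
mask = np.array([
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 2],
    [0, 0, 0, 0, 2],
    [0, 0, 0, 0, 2],
])
print(boxes_from_instance_mask(mask))  # {1: (1, 0, 2, 1), 2: (4, 1, 4, 3)}
```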

More diverse and realistic synthetic data

Using Python, users can flexibly configure their synthetic datasets by selecting various parameters such as locations (San Francisco, Tokyo), environments (urban, suburban, highway), weather conditions, and agent distribution (pedestrians and vehicles).

Once the base dataset is configured, users can use Reactor to enrich their synthetic data with greater diversity and realism. Using natural language prompts, users can introduce a variety of objects and scenarios into the scene, such as “garbage can”, “cardboard box full of overturned sunglasses”, “wooden crate of oranges” or “stroller”.
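Taken together, these two steps suggest a workflow roughly like the sketch below; the SceneConfig class, its field names and the density units are hypothetical stand-ins, not Parallel Domain's actual SDK.

```python
# Hypothetical sketch of the two-step workflow described above: configure a base
# dataset programmatically, then enrich it with natural-language prompts. The
# SceneConfig class, its field names and the density units are assumptions, not
# Parallel Domain's actual SDK.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneConfig:
    location: str               # e.g. "San Francisco", "Tokyo"
    environment: str            # e.g. "urban", "suburban", "highway"
    weather: str                # e.g. "clear", "rain", "fog"
    pedestrian_density: float   # illustrative unit: agents per 100 m of roadway
    vehicle_density: float
    prompts: List[str] = field(default_factory=list)  # natural-language additions

config = SceneConfig(
    location="San Francisco",
    environment="suburban",
    weather="rain",
    pedestrian_density=0.5,
    vehicle_density=2.0,
    prompts=[
        "garbage can",
        "cardboard box full of overturned sunglasses",
        "wooden crate of oranges",
        "stroller",
    ],
)
# A real pipeline would hand `config` to the generation engine and receive
# images plus bounding-box and panoptic-segmentation annotations in return.
```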

Reactor generates synthetic data with essential annotations, including bounding boxes and panoptic segmentation, dramatically accelerating ML model training and testing.

McNamara said the tool “revolutionizes” the traditional workflow of creating custom assets, which usually involves a lengthy process of design, manual setup, and integration by artists or developers.

“AI-powered rapid customization capabilities improve efficiency and turnaround times,” added McNamara. “As a result, developers can create and integrate new assets into their synthetic datasets almost instantly, enabling faster iterations and continuous improvement of their models.”

Detailed visual insights for autonomous vehicles

The company said it has seen dramatic improvements in the safety of autonomous vehicles and advanced driver assistance systems (ADAS). It also said that, thanks to advanced scattering techniques, the tool has recently achieved remarkable results in real-world scenarios.

Additionally, the company highlighted that the tool has recently delivered significant improvements in semantic segmentation results on the popular Cityscapes dataset, a widely recognized benchmark for autonomous driving.

“Real-world data often lacks sufficient training examples for these less common but critically important objects,” McNamara explained. “Reactor was employed to generate synthetic data depicting various scenarios involving strollers to fill this gap. By introducing this synthetic data into training sets, the models could better learn and generalize stroller detection to real-world scenarios, thereby improving the safety of autonomous systems.”

He added that for the Cityscapes dataset, synthetic instances of trains were generated by Reactor and fed into the dataset.

“This enriched data has led to improved model performance in train detection and segmentation, contributing to safer and more efficient autonomous driving systems,” said McNamara.
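The augmentation strategy described for strollers and trains is, at its core, a dataset-mixing step. A minimal, generic sketch of that idea follows, with hypothetical file paths and an assumed synthetic-to-real ratio.

```python
# Minimal, generic sketch of the augmentation strategy described above: mixing
# synthetic rare-class examples into a real training set. File paths, the
# synthetic-to-real ratio and the random seed are illustrative assumptions.
import random
from typing import List

def build_training_list(real_samples: List[str],
                        synthetic_rare_samples: List[str],
                        synthetic_ratio: float = 0.2,
                        seed: int = 0) -> List[str]:
    """Mix synthetic rare-class samples into the real training set.

    synthetic_ratio caps the share of synthetic frames relative to real ones,
    keeping real data dominant while boosting rare-class coverage.
    """
    rng = random.Random(seed)
    max_synthetic = int(len(real_samples) * synthetic_ratio)
    chosen = rng.sample(synthetic_rare_samples,
                        k=min(max_synthetic, len(synthetic_rare_samples)))
    mixed = real_samples + chosen
    rng.shuffle(mixed)
    return mixed

# e.g. mixing synthetic "train" frames into a Cityscapes-style training list
training_list = build_training_list(
    real_samples=[f"real/frame_{i:05d}.png" for i in range(1000)],
    synthetic_rare_samples=[f"synthetic/train_{i:05d}.png" for i in range(300)],
)
print(len(training_list))  # 1200
```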

He added that many of Parallel Domain’s customers have recently begun incorporating Reactor’s capabilities into their AI development workflows. While adoption is still in its early stages, the company is excited about Reactor’s potential to improve ML models.

“Both customers and the Parallel Domain ML team have trained models for these cases that significantly exceeded previous benchmark performance,” said McNamara. “That’s because the variety of Reactor examples significantly increases the diversity of a dataset. Diverse data trains better models, and we’re redefining the synthetic data generation landscape.”
