
Trace Machina raises $4.7 million and leaves stealth to create an AI simulation for the physical world

Their first product, NativeLink, is open source at its core and provides engineers with an advanced staging environment for futuristic, safety-first technologies such as self-driving cars, aviation, robotics, and other autonomous hardware systems.

Trace Machina, the company developing simulation infrastructure for safety-critical technologies in physical AI, is launching out of stealth today with $4.7 million in funding. The seed round was led by Wellington Management, with participation from Samsung Next, Sequoia Capital Scout Fund, Green Bay Ventures, and Verissimo Ventures. Angel investors include Clem Delangue, CEO of Hugging Face; Mitch Wainer, co-founder of DigitalOcean; Gert Lackriet, Director of Applied Machine Learning at Amazon; and other industry leaders from OpenAI and MongoDB.

Trace Machina comprises engineers and product leads from Apple, Google, MongoDB, and the Toyota Research Institute. The company’s first product, NativeLink, is open source at its core and has already earned over a thousand stars on GitHub, with contributing engineers from Tesla, General Motors (Cruise), Samsung, and X. NativeLink provides an advanced staging environment for futuristic technologies where safety is paramount and testing is critical, such as self-driving cars, aviation, robotics, and other autonomous hardware systems. NativeLink’s key capability is pushing AI workloads to their limits by turning local devices into supercomputers.

[Image: NativeLink user interface]

“We’re enabling the next generation to more easily develop futuristic technologies like those you read about in science fiction novels or see in movies,” said Marcus Eagan, CEO and co-founder of Trace Machina. “This has previously been unattainable or uneconomical due to the limitations of existing infrastructure and tools. We’re moving beyond machine learning that focuses solely on language and pattern recognition to a new wave of AI that is more human in its ability to circumvent obstacles and manipulate objects.”

Before founding Trace Machina, Eagan worked on MongoDB Atlas Vector Search, the company’s first AI product. He has also contributed to some of the largest and most widely used open source projects in the world, such as Lucene, Solr, and Superset. Most recently, Eagan completed a project with Nvidia that allows engineers to run Lucene on GPUs and index data 20x faster. Nathan Bruer, chief architect and co-founder of Trace Machina, worked on secret projects at Google’s X and developed autonomous driving software as an engineer at the Toyota Research Institute.

“NativeLink provides critical infrastructure for the industries of tomorrow, such as aerospace and autonomous mobility,” said Van Jones, deal lead at Wellington Access Ventures. “Over 100 companies use NativeLink’s cloud service to significantly improve their complex designs, leveraging a simulation infrastructure that simply didn’t exist before.”

The main advantages of NativeLink include:

AI at the edge: For developers building native mobile applications, NativeLink’s technology is free and avoids the costs and latency of traditional cloud infrastructure. For next-generation enterprises that need to test their code on GPUs to build AI applications, this typically cuts cloud costs by 50-70%.

Increased productivity: NativeLink’s tight feedback loop for building futuristic systems speeds up compilation, testing, simulation, and other workflows by up to 80%. It does this through efficient caching, which avoids re-executing unchanged code, and parallel processing via remote execution. Together, these reduce costly errors when testing on GPUs and other specialized hardware.

Built for massive scalability: NativeLink’s Rust-based architecture eliminates whole classes of memory-safety bugs, race conditions, and stability issues at scale, improving the reliability of critical development pipelines where re-running tests on GPUs drives up cloud costs.
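The caching and remote-execution workflow described above follows the pattern used by Bazel-compatible build systems. As an illustrative sketch only, a team pointing its builds at a NativeLink-style endpoint might use a `.bazelrc` like the one below; the endpoint address is a placeholder, not a detail from the announcement:

```shell
# .bazelrc — illustrative configuration for a Bazel-compatible
# remote cache and remote executor such as NativeLink.
# The gRPC endpoint below is a placeholder, not a real address.

# Reuse previously built artifacts instead of recompiling unchanged code.
build --remote_cache=grpc://nativelink.example.internal:50051

# Fan build and test actions out to remote workers in parallel.
build --remote_executor=grpc://nativelink.example.internal:50051

# Fall back to local execution and a local disk cache if the
# remote service is unreachable.
build --disk_cache=~/.cache/bazel-disk
build --remote_local_fallback
```

In this setup, unchanged targets are served from the content-addressed cache rather than rebuilt, while changed targets execute in parallel on remote workers — the combination behind the reported speedups.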

Leading companies are already using the power of NativeLink in production, including Menlo Security, CIQ (founding sponsor of Rocky Linux), and Samsung, an investor in the company. Samsung uses NativeLink to compile, test, and validate operating system-level software for over a billion devices, with NativeLink handling billions of requests per month.

“NativeLink has been instrumental in reducing our longest development times from days to hours and significantly improving our development efficiency,” said David Barr, senior engineer at Samsung. “Its scalable infrastructure and robust caching capabilities have been a game changer for our team – we are excited about the future possibilities with NativeLink.”
