Completing RACER: What It Took to Clear the Highest Bar in Ground Autonomy
Greg Okopal, Co-Founder and COO | January 6, 2026
“On a scale from one to ten, how nervous are you?”
Early on in the RACER program, a member of the DARPA team asked me this question. It was just before 8:00 AM on March 3rd, 2022, and I was strapped into the driver’s seat of a RACER Fleet Vehicle, somewhere in the Mojave Desert. The vehicle was a Polaris RZR Turbo S4 that had been modified with sensors and compute to support the objective of the DARPA RACER program: to develop breakthroughs in ground vehicle autonomy that would allow uncrewed assets to traverse unstructured terrain at tactically relevant speeds. We were staging for our first day on an official course at the first DARPA-Hosted Field Experiment, having been given the checkpoints for course Alpha just a few hours earlier. We were only six months into the program, but the courses that they were using to evaluate us were extremely ambitious: Alpha consisted of a handful of points, each several kilometers apart, crossing a desert filled with bushes, rocks, and Joshua trees. There were no trails. We were not allowed to use maps or any other prior geospatial information; the vehicle had to navigate using only information gathered from its onboard sensors in real time as it was driving. And Alpha was the easy course; they would only get more difficult as time went on.
The fleet vehicle that I was sitting in was idling outside of a massive tent that had been set up to shelter the garage operations of all of the performer teams. All three teams were on-site, and we were all being evaluated on identical hardware supplied by the government. Not far from me, a temporary office trailer had been set up as a command post. The DARPA team, other government subject matter experts, and our software team were inside watching everything on a wall of monitors. This was a competition—a desert race—where the judges had a god-like view of everything that the competitors were doing.
At precisely 8:00 AM we were expected to be at the starting line of the course. There would be a countdown over the radio, and then I would switch the vehicle into autonomous mode and the software would take over. The question that I was asked—how nervous was I—could have been referring to any number of things. Would the software be able to successfully navigate the desert, a place we had never been before? Were all of the mechanical and electrical subsystems in the vehicle in a healthy state? And what about my own personal safety, given that I was along for the ride in a 3500-pound robot running experimental software?
At the time of this first RACER experiment, Overland AI did not exist. The work began at the University of Washington (UW), where Overland co-founder and CEO Byron Boots was building a new robotics lab. I was at the Applied Physics Laboratory (APL) at UW, where I had spent more than a decade developing and fielding autonomous systems, sensors, and software for defense applications. When DARPA released the RACER solicitation, Byron and I partnered as principal investigator and co-PI to submit a proposal on behalf of UW. Winning that contract alone was significant. DARPA selected only three teams, two of which were expected entrants. A university team from Seattle was a dark horse.
As we were writing the proposal, Byron and I developed a plan for this program that was centered around field testing. I had seen this approach work well on other projects at APL and I believed that it was the only chance we had at meeting the RACER performance metrics on the program’s very short timeline. Full system testing in the field keeps you honest. It surfaces issues that you would never anticipate. And it has the ability to motivate technical development unlike anything else.
Our plan was that every week over the life of the project would be the same: a Monday planning meeting, field testing Tuesday through Thursday, and then a Friday debrief. Repeat ad nauseam. This plan meant that field operations would be a core competency for our team, and we allocated significant resources to ensure that we had the right people, facilities, and equipment to execute to plan. We also decided that we would be aggressive from the start. Rather than slowly ramping up speed, we pushed immediately toward DARPA’s performance targets and accepted that we would break robots from time to time. Ultimately, going fast early forced us to confront serious technical challenges and generated solutions that continue to define the state-of-the-art.
Our operational choices were matched by key technical decisions. We committed to developing a bird’s-eye view approach for our perception stack, delivering a comprehensive overhead view of the environment that enhances situational awareness, supports obstacle identification, and enables more effective route planning. Our team also embraced the idea that perception is fundamentally prediction: instead of simply classifying what the sensors could directly observe, we used sensor data to predict traversability across the environment, including areas partially occluded by vegetation or terrain. That allowed planning through forests and complex terrain where classical approaches stalled.
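The core of the bird’s-eye-view idea can be illustrated with a minimal sketch: bin 3-D lidar points into a 2-D overhead grid, score each cell’s traversability from simple geometric features, and assign unobserved cells a neutral prior rather than marking them blocked. All parameters here (cell size, height threshold, the logistic weighting) are illustrative assumptions, not Overland AI’s actual model, which learns traversability rather than computing it from a hand-tuned heuristic.

```python
import numpy as np

def bev_traversability(points, cell_size=0.5, grid_dim=40, span_limit=0.4):
    """Toy bird's-eye-view traversability map.

    points: (N, 3) array of x, y, z in the vehicle frame (meters).
    Returns a (grid_dim, grid_dim) array of traversability scores in [0, 1],
    with the vehicle at the grid center.
    """
    half = grid_dim * cell_size / 2.0
    # Map x, y coordinates to integer cell indices.
    ij = np.floor((points[:, :2] + half) / cell_size).astype(int)
    in_bounds = np.all((ij >= 0) & (ij < grid_dim), axis=1)
    ij, z = ij[in_bounds], points[in_bounds, 2]

    # Per-cell min/max height via unbuffered scatter reductions.
    zmin = np.full((grid_dim, grid_dim), np.inf)
    zmax = np.full((grid_dim, grid_dim), -np.inf)
    np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), z)
    np.maximum.at(zmax, (ij[:, 0], ij[:, 1]), z)

    observed = np.isfinite(zmin)
    height_span = np.where(observed, zmax - zmin, 0.0)
    # Tall vertical structure in a cell (rocks, trunks) lowers the score;
    # a logistic squashes the height span into [0, 1].
    score = 1.0 / (1.0 + np.exp((height_span - span_limit) / 0.1))
    # Unobserved cells get a neutral prior -- a crude stand-in for
    # predicting traversability behind occlusions instead of treating
    # every unseen cell as an obstacle.
    score[~observed] = 0.5
    return score
```

The key design point this sketch captures is the last step: a planner fed a map where unseen terrain defaults to "blocked" will stall in vegetation, whereas predicting (here, merely assuming) traversability in occluded regions keeps routes open through forests and brush.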
By the time we got to the first RACER experiment that March, we had spent about four months doing weekly testing through the Pacific Northwest winter. We had been running through mud, snow, and fog, week in and week out. I had been in the driver’s seat for almost all of those tests, so I knew exactly what our stack could do, and it had already come a long way. Just a few weeks prior, we had been running through the woods in the Cascade Mountains foothills at speeds approaching program metrics.
So when I was asked the question about how nervous I was on a scale from one to ten, I did not hesitate to respond: “Zero.”
He raised his eyebrows. He said “okay” with an intonation that tried to tell me I was being overconfident. But at 8:00 AM, when the countdown came over the radio, I flipped the switch and the vehicle took off. We completed the course once, and then did it again. Then we started increasing the speeds. Over the course of the next week, we completed all of the courses that DARPA had prepared for us. We ran them backwards to get more miles. We pushed the speeds until we found the limits of our software. All told, we wildly exceeded our own expectations.
Our success at this experiment set the tone for the rest of the program for us. It validated the field test-centric approach that we were using, and it gave us confidence in our aggressive stance towards speed and risk. Our success also had another key result—by the end of this experiment, Byron started talking about forming what would eventually become Overland AI.
Over the course of the next 3.5 years of the RACER program, the DARPA team pushed us hard. The design of the program itself was brilliant. There would be a DARPA-Hosted Field Experiment every six months, at new locations and with new challenges. Each team was given a small fleet of robots, to allow for maximum uptime even with high-risk testing. Later, Phase Two of RACER down-selected to a single performer and transitioned to a heavier military platform, with greater emphasis on tactics, global planning, and operational relevance.
By the end of RACER, Overland was the sole remaining performer. More importantly, we completed the program with an autonomy stack that was ready to be transitioned into the hands of warfighters. The eighth and final experiment was held in the same location as the first, and it was extremely gratifying to see the astonishing performance gains that we had achieved after nearly four years of work. We did not even bother to run course Alpha because it was far too simple. DARPA had set the bar higher than what transition would immediately require, and that was the point. We gained momentum and credibility from the program, and the relationships we formed throughout—including at military installations and with operational units—have now carried forward into work with brigades across the U.S. Army and the Marine Corps.
To this day, field testing is the beating heart of the company. We still follow the weekly rhythm, although we now test vehicles almost every single day, sometimes in multiple places around the world simultaneously. This has required incredible dedication, effort, and long hours by our team, but we are all committed to the mission, and the performance gains generated by this approach have compounded over time.
DARPA pushed Overland to solve the hardest problem in ground autonomy from the very beginning of our company: building a reliable and resilient software stack that can be integrated into any ground vehicle. We have proven our autonomy on fleet vehicles, heavy platforms, and most recently on ULTRA, our own fully autonomous tactical vehicle built in-house and in production today. These efforts, spanning the entirety of the RACER program, have always been aimed at expanding the art of the possible with ground autonomy. But we also accelerated the moment when autonomy could be operationally fielded, which means warfighters now have access to ground autonomy that is both proven and ready. DARPA accelerated our technology through RACER. And we are providing the results back to the warfighter today.