Scott Walter: Why Humanoid Robots Just Need to Be "Good Enough"

If you follow the exploding world of humanoid robotics on social media, you likely know Dr. Scott Walter as the "Humanoid Botangelist." He has amassed a following of over 22,000 on X by offering a rare commodity in the age of viral tech demos: rigorous, first-principles analysis. While mainstream coverage often stops at the surface spectacle of a backflipping robot, Walter digs into the physics and supply chain realities underneath. He doesn't just react to a new design; he reverse-engineers it—evaluating the pros and cons of actuator choices, assessing kinematic trade-offs, and grounding the excitement in the hard-won lessons of industrial automation.
While he doesn't run a channel of his own, Walter has become an indispensable voice on the circuit, appearing as a frequent expert guest on Marwa ElDiwiny's Soft Robotics Podcast and various tech-focused YouTube channels. In a landscape often polarized between extreme hype and dismissal, he brings the authority of a veteran insider, famously reminding the industry that reliability beats novelty and that "you don't scale a crappy robot."
But long before he became a public analyst, Walter was a builder.
His career tracks the very history of digital manufacturing. In 1985, Walter co-founded Deneb Robotics in the heart of Detroit’s automotive boom. At the time, programming an industrial robot meant halting production to manually guide a machine point-by-point. Walter’s team helped pioneer "offline programming" (OLP), creating the first 3D simulation tools that allowed engineers to program robots in a virtual environment before they ever touched the factory floor.
After Deneb was acquired by Dassault Systèmes, Walter co-founded Visual Components in 1999 with the specific goal of making simulation software "as easy as using an Excel sheet," moving these powerful tools from the workstations of specialists to the laptops of everyday engineers.
We spoke with Walter about the "Fermi paradox" of robotic hands, why German manufacturers are quietly experimenting with humanoids, and the 40-year-old computer science problem he is finally watching the industry solve.
Many people know you for your insight on X/YouTube, but fewer know your backstory. How did you first get into robotics and simulation, and what were the formative projects or moments that shaped your perspective?
I started in simulation—at least professionally—in 1985, partly during my doctoral work at Cornell. We were learning to simulate different mechanical systems, and in the process I worked on an algorithm for collision detection that my thesis advisor suggested might be useful in robotics. Ironically, that algorithm came out of a biomechanics project.
I wondered how we could take some of what we were doing at the university and commercialize it. So in 1985, some former Cornell colleagues and I started a company called Deneb Robotics—almost exactly forty years ago this month—in Troy, Michigan. At the time, Detroit was pretty much the center of the robotics world. Robots were becoming available, but they were very difficult to program—you had to do it online, on the physical robot itself. Everyone was looking for a better way.
The idea of offline robot programming was just starting to emerge, and we were one of the pioneering companies in that area. The goal was to create a 3D environment where you could load a digital model of a robot, study its movements, and plan its tasks. Up to that point, people were literally using pencil-and-paper techniques, cardboard cut-outs, and 2D side profiles to study reach and accessibility. Then, in the mid-1980s, Silicon Graphics released their workstations, which made true 3D animation possible.
We took advantage of that to create what people today would call a virtual world or digital twin of the robot. From that, we could plan paths, verify reach, do cycle-time analysis, and even generate the robot's motion program—speeds, motion profiles, coordination with other equipment. We called it OLP, or Offline Programming. These days people call that sim-to-real.
Deneb's projects ranged from automotive and heavy equipment to aerospace and shipbuilding. It's hard to single out one formative project, but the big insight for me was that we were really creating a virtual teach method—programming the robot in the virtual world almost the same way you would in the real world, moving it point-to-point. It was tedious, because even digitally you had to judge positions and store them, but we eventually realized we could create macros and templates to speed it up.
That was important, because back then programming time could be fifty to a hundred times the actual cycle time—if a task took one hour to run, it could take 50–100 hours to program. Offline programming solved accessibility, but it also had to be faster; otherwise, we'd just turned a blue-collar job into a white-collar job. That realization shaped a lot of my later thinking about usability and efficiency in robotics.
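To make the OLP workflow concrete, here is a minimal sketch in Python of the kind of virtual teach loop Walter describes: a hypothetical two-link planar arm whose waypoints are "taught" in a virtual cell, checked for reach, timed, and emitted as a motion program. The arm dimensions, the joint speed, and the MOVEJ-style output are illustrative assumptions, not a reconstruction of Deneb's actual software.

```python
import math

# Minimal offline-programming (OLP) sketch: a hypothetical 2-link planar arm.
# Everything happens against the digital model (reach checks, cycle-time
# estimates, program generation) before any real robot is touched.

L1, L2 = 0.5, 0.4               # link lengths in meters (illustrative)
JOINT_SPEED = math.radians(90)  # max joint speed: 90 deg/s (illustrative)

def solve_ik(x, y):
    """Return (shoulder, elbow) angles for a target, or None if out of reach."""
    c2 = (x * x + y * y - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1.0:           # target lies outside the reachable annulus
        return None
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(
        L2 * math.sin(elbow), L1 + L2 * math.cos(elbow))
    return shoulder, elbow

# Waypoints "taught" point-to-point in the virtual cell, as in virtual teach.
waypoints = [(0.6, 0.2), (0.7, 0.4), (0.3, 0.6)]

program, cycle_time, prev = [], 0.0, (0.0, 0.0)
for i, (x, y) in enumerate(waypoints):
    q = solve_ik(x, y)
    if q is None:
        raise ValueError(f"waypoint {i} at ({x}, {y}) is out of reach")
    # Cycle-time estimate: the slowest joint dominates a joint-space move.
    cycle_time += max(abs(q[j] - prev[j]) for j in range(2)) / JOINT_SPEED
    program.append(f"MOVEJ P{i} shoulder={math.degrees(q[0]):.1f} "
                   f"elbow={math.degrees(q[1]):.1f}")
    prev = q

print("\n".join(program))
print(f"estimated cycle time: {cycle_time:.2f} s")
```

The point is the workflow, not the toy arm: every check that once required taking the physical robot offline (reach, accessibility, cycle time) runs against the digital model first, and the motion program falls out at the end.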
You've spent decades advancing simulation/automation. What specifically drew you to humanoids? Was there a turning point, demo, or idea that made you think, "this is the next frontier"?
It wasn't a single turning point. Over the past 40 years, we went from simulating a single robot to realizing you had to simulate the entire cell, and finally the entire factory, to understand what was happening. Robots had tooling, fixturing, and transport mechanisms. Cells were connected to other cells.
As we simulated the entire factory, we had to simulate all assets and resources. And one of the most critical resources in any factory is the human. Humans are needed to load and unload cells and fetch parts, and they work alongside mobile robots, forklifts, and conveyors.
We had to start simulating these "human assets" very early on. They might have been simplistic at first—just a representation of a person carrying something—but we started wanting more detailed studies, making our simulated humans look and behave more like real ones. It became very clear that humans are an integral part of factory automation. No matter how far you push automation, it seems you're just moving the frontier where the humans show up. In the end, you always need them.
That was the missing piece. You begin to realize the humanoid form factor is still needed. So, as soon as it became clear that we were moving from expensive, difficult-to-build robots to smaller ones with a more human appearance, it felt like a convergence. It was a slow evolution, a realization that the technology had finally advanced enough to allow this to happen.

The "why humanoids?" question has been debated to exhaustion, but we're now at a point where humanoids are entering factories and test environments. From your perspective, has the conversation truly shifted from "why build humanoids" to "how to scale them"? What changed the industry's mindset, and when did you first sense that turning point?
I think we're still somewhere in between. A lot of people no longer ask "why build humanoids?" They accept the need for more automation and intelligent robotics. Some still argue whether the humanoid form factor is ideal, but I believe it is. The argument that it's a "world built by humans for humans" is valid; it's much easier to integrate them.
The conversation is shifting to "how to scale them," and it's not just the dreamers saying it—it's the end-users. A lot of end-users are very serious but quiet about it, perhaps afraid of the skepticism or stigma. They don't want to be ridiculed for thinking humanoid robots will work, so they are dabbling quietly. When I travel and visit companies, I'm surprised by how many are already experimenting behind the scenes.
For example, we know a German company has been experimenting with humanoid form factors, and they're not alone. Having spent nearly a decade working in Germany, I know that the German industry is typically very conservative about technological change. When they start moving in this direction and taking humanoids seriously, it's a strong signal that this is not a joke.
These German companies aren't just looking at how to use humanoids in their factories; they're thinking about how to participate in the ecosystem by building the parts and components. There are serious discussions happening, which tells me that even if the public perception is that humanoids are far away, there are definitely a lot of believers in the industry.

You've argued that humanoids don't need perfect dexterity to deliver value — that "useful work is all that counts." What kinds of tasks do you already see humanoids performing that validate that idea? And which types of real-world work do you think they'll tackle next as reliability improves?
I've always said: go for the low-hanging fruit. There's a lot of it out there, and it doesn't require much sophistication; there are entry-level jobs where you can show a person what to do in a few hours or less. Many are simple pick-and-place operations that you'd think could be automated but haven't been. My argument is, if they could have been automated easily, they would have been.
In many cases, it's just grabbing simple parts and dropping them in a fixture. When Optimus first came out, one of the first things Elon Musk said they would do was load sheet metal into welding stations. Unfortunately, we haven't seen Optimus do that, but we have seen Figure move in that direction. In traditional automotive, what's difficult to automate is reaching down, pulling a part out of a bin where it's in a chaotic position, and dropping it into a fixture. You can teach a person that very quickly.
That's a simple, low-hanging-fruit task. There are many like it that are repetitive, require no extreme dexterity or creativity, and are just the same tedious thing, hours on end. It's boring, people don't like doing it, and it's perfectly suited for a robot. I see more and more of those options where robots can be useful right away.
To be "useful," the solution can't cost more than the current solution. Useful work means the robot must be as reliable as a person, operate at the same speed, not break down, and the overall cost per hour has to be the same or lower. That's what useful work is. As soon as robots can do that, they'll be employed. They will get better, faster, and more reliable, and their costs will go down. I don't think we're that far away from that day. Then they'll be able to work their way up the tree from the low-hanging fruit to the mid-hanging and eventually the high fruit.

iRobot co-founder Rodney Brooks and others remain skeptical that humanoids will ever reach truly useful dexterity. From your perspective, what are the hardest remaining pieces of that puzzle — tactile sensing, control, data, training, or something else? And what sort of breakthrough or milestone would convince a skeptic that humanoids are genuinely getting close?
The "business end" of the robot is the hand, and solving the hand is, without a doubt, one of the most difficult engineering challenges in robotics today. It's been said it's half the engineering; I think it's going to be a lot more. You can build a dexterous hand, or you can build a robust hand. It's really hard to build a hand that's both.
We've seen hands with a lot of dexterity, but you know they would break down if used day-to-day. I've talked about this before—it's the "Fermi paradox of robotic hands." We've seen impressive demos going back to the 2010s, so if these hands exist, where are they? We don't see them deployed in many places because of reliability and robustness. They work well for a short demo, but they have to work, ideally, three eight-hour shifts a day, seven days a week. That's extremely difficult.
Before you even worry about tactile sensing, you have to solve the robustness problem. The hands have to be strong enough, yet delicate enough. And if you add tactile sensors, they also have to handle the rigors of the job.
Will humanoids ever have the same level of dexterity as humans? Yes, they will. I just don't know when. In this business, the way to look smart when forecasting is, if someone asks "when will X happen?" just say "not this year." You'll almost always be right. So, I'm not going to say we'll have the dexterity we need this year, but that doesn't mean we won't have robots that are useful.
That's the key measure. Once they become useful, they will slowly improve. We probably won't see extreme dexterity for many, many tasks until sometime in the mid-2030s. But we might be fooled into thinking robots need the exact same perception stack we have. That may be a fallacy. They might get by with lower-grade sensors, or even a lack of sensors in some places, just as people adapt to loss of function.
Companies like Figure and 1X suggest that their hardware is now "good enough," and that progress mainly depends on training data and AI. Do you agree with that assessment? Has the field truly crossed the hardware threshold, or are there still critical mechanical gaps that limit what humanoids can do today?
Let's focus on that term "good enough." We know "perfect is the enemy of good enough." Right now, we are in the "just needs to be good enough" phase. Forget making them perfect; they need to be good enough to become useful and start acquiring data.
They are probably at that "good enough" stage right now—good enough to acquire the datasets needed to make them more useful. From that, you can iterate and improve the hardware.
Again, the main progress we need to see is in robustness. They seem to have sufficient capability and dexterity for many tasks, but can they do them non-stop? Can they adapt to changing environments?
So yes, I agree they're at that "good enough" phase to be useful for data collection. But we'll still need a couple more generations to get the degree of robustness, reliability, and finish we'd expect from a consumer-grade product.
Many humanoid developers now show robots operating in both industrial and home environments — from warehouses and factories to kitchens and living rooms. But the safety question still feels unresolved, especially in domestic settings. How do you think about the safety challenge for humanoids?
That's a great question. When people ask when we'll see humanoids in the home, my answer is always: "When they're safe." It's not when they're capable or cheap enough. If they aren't safe, they won't be in the home.
We can have less-safe robots in a factory because we're used to managing dangerous equipment with safety zones, lockouts, and protocols. In the home, it's way more challenging. You might try to say, "The robot can only work in this room if nobody else is in there," and maybe that helps — the walls and closed door become the safety zone. But ensuring no pets, kids, or people wander in is nearly impossible.
At this point, the only robot even close to being safe enough for the home is the 1X NEO, because it was specifically designed for it. Being around 30 kg (66 lbs) is a lot better than 70 kg (155 lbs), and it's softer. But even then, in the wrong situation, someone could get hurt.
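The mass argument is easy to sanity-check with a kinetic-energy estimate. Here is a rough sketch assuming both robots move (or topple) at a typical indoor speed of about 1.5 m/s, an illustrative figure only, since real safety analysis also involves contact area, compliance, and control:

```python
# Back-of-envelope impact-energy comparison: KE = 1/2 * m * v^2.
# Speed is an assumed typical indoor walking pace, not a measured figure.

SPEED = 1.5  # m/s (assumption)

for mass_kg in (30, 70):
    ke = 0.5 * mass_kg * SPEED**2
    print(f"{mass_kg} kg robot at {SPEED} m/s: {ke:.0f} J")
```

A 30 kg robot carries about 34 J at that speed versus roughly 79 J for a 70 kg one: less than half the energy to dissipate in a collision, before the softer exterior is even counted.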
We need more regulations and safeguards, and we don't even know what those are yet. There are committees being formed to look into robotic safety levels. It will take time, and it will require a certain public acceptance of risk. I know 1X says they will not knowingly sell NEO to homes with toddlers, which is a good step.
We also have to remember the flip side of safety is security: making sure the robots can't be hacked or aren't collecting data users don't want collected.
You worked on early interface standardization at Deneb and later pushed "democratizing simulation" at Visual Components. Today each humanoid team runs a proprietary stack. What level of shared standards would accelerate progress without stifling innovation?
Over my career, I've noticed the best standards aren't formed by a committee; they are de facto standards. They emerge by consensus when a small group comes up with something good, and everyone realizes it's the best way to solve a problem and jumps on board. Once you get committees involved, there are too many interests—a camel is a horse designed by committee.
I think we'll see someone come up with a really good interface or a really good set of APIs, and everyone will rally around it. But right now, it may be too early. We don't always know what we want yet, and you can standardize too early. NVIDIA and DeepMind are in positions to shape this — they're already defining standards for sim-to-real workflows. Some convergence might happen there, but setting a standard too early can backfire.
For example, take IGES (the Initial Graphics Exchange Specification)—it's funny, the "I" stands for "Initial," and it's stuck around for a long time. When CAD was just starting, nobody knew what should be in the standard, so IGES supports all kinds of features and surface types that don't exist anymore. It literally had something called a "Gordon surface"—the guy's name was Gordon—and it had knots, which were literally called "Gordian knots." This was all before things like NURBS became standard. If you want to support the full IGES protocol, you have to support all these deprecated data types no one uses.
This happens because they didn't know better, or because special interests push to include their features, or because people try to undermine a standard to gum up the works. I think what will happen is a small group will come up with something so good that everyone recognizes it's the one.
You've recently visited NEURA Robotics, Figure, and 1X. What stood out during those visits—technically or organizationally?
I didn't visit NEURA directly; I've only seen them at a trade show and met David Reger. Unfortunately, at that point 4NE-1 was not fully functional, but we could see what the next version was going to look like. With Figure and 1X, the one thing you notice is the energy. Everyone is very excited about what they are doing, and the parking lots are full. People like what they're working on, they have deep expertise, and you're seeing really rapid iteration. Those are the things that stand out to me. These are very good teams.
The great thing is meeting the people who aren't public-facing, the engineers. I like to ask them, "What joint would you add if you could add one? Which one would you take away if you had to?" It's interesting to see how quickly they respond, which lets you know they are really thinking these problems through—the compromises and the impact of extra degrees of freedom.
The other thing that stood out is that they all have very different philosophical approaches. You ask one company about another's approach, and they'll say, "Oh, that's the absolute wrong way. We've studied that; this is the way to do it." Then you go to the other company, and they say the exact opposite. I find that fascinating because I don't know what the right approach is either. It's great to see these teams championing a particular cause and moving ahead on it.
China is moving fast with strong supply chains; Unitree's G1 became a global research platform and R1 looks even more accessible. What capabilities or practices from Chinese teams impress you most?
It's interesting. The first thing is the sheer number of companies in China. With more experimentation, you have a greater chance of success. Their supply chain is crazy; they can get what they need, with modifications, almost overnight. That allows for a lot of experimentation.
In the West, we see videos from companies like Unitree and EngineAI showing gymnastics and dancing, which people criticize. But other Chinese companies are less focused on the spectacular and more focused on applications—and they are starting to deploy. Deployments seem to be starting much earlier in China.
The robots may not have the overall capabilities we're expecting from Optimus or Figure, but that sort of doesn't matter. They're more concerned with hitting a narrow set of requirements and testing it. You need to move from the lab to a real environment to start seeing what the utility of these bots is going to be, and that's happening in different areas in China.
You can very easily say, "Oh, their robots are not so good," and cherry-pick examples to make your case, but you have to look at the volume of what's going on. I've referred to the UBTECH Walker bot as being "very mid"—nothing spectacular. But when you put it all together, it's designed to solve a particular application they're focused on, and I think they did a pretty good job when you look at it that way. It's a narrow case, but it's that "perfect is the enemy of good enough" argument again: if you strive for perfection, you'll never get something good enough to start testing. We're seeing companies in China do that, and they're already scaling. From that, we'll find out whether these bots are useful and economically viable enough to install in greater numbers. I think it's going to be rather interesting.

What would you be looking for in Tesla's next Optimus reveal to indicate substantive progress?
I have had two big concerns with the Optimus program over the past year. First, we haven't seen much progress; there have been long gaps, more than a quarter, without new releases. Second, and this is my biggest concern: while we've seen Optimus walk, we've never seen it walk at human speed. The walk looks nice, but it's not quite half of normal human walking speed, and basically everyone else has demonstrated walking at that speed.
The other thing is, why haven't we seen Optimus doing anything in the factory? That's kind of a mystery to me. Why aren't we seeing the sheet metal examples that we thought were going to be the first order of business for Optimus? I've always felt the Model S/X lines were perfect for experimenting because they don't run 24/7—I think they're just one shift, five days a week—and they have far more spot-welding lines and sheet metal needs than the Gigapress lines. There are so many stations to experiment with. Why haven't we seen it?
What Figure 03 has done is show you can make a robot that looks very human-like with soft outer clothing. It made an impression. Those shoulders, they look like human shoulders, not robotic shoulders. I think that's where the bar will move: attempting to capture that same effect, rather than a cold plastic exterior. 1X has already done that, and we've seen Fourier do something similar. I wouldn't be surprised if we see a softer exterior from the next Optimus.
Do you ever worry about the broader social or economic effects if humanoids become capable of most types of work?
Yes, I do. Engineers have a responsibility to think about the societal, environmental, and economic impacts of their projects.
When you look at humanoids, it's a double-edged sword: we're getting rid of drudgery, but we might be taking jobs. This isn't the first time society has dealt with this; look at the Jacquard loom, one of the first examples of work being displaced. Because we have that history, we might be more prepared to talk about it. We should be debating this, which is why I've talked about it a lot on YouTube channels and invited people who are looking at this problem and at ideas like universal basic income (UBI). My position is that we need to be talking about this.
As far as the overall impact, there are two schools of thought. One is that it's going to be an incredible exponential that hits us overnight. I don't think it will be quite that fast. Right now, we have a huge worker shortage in many different places, largely in jobs that no one wants to do, and it will take a while before there are enough robots to have much of an impact. Robots will also be limited by their capabilities; they won't do all jobs. That means more people become available for other work. If it's a choice between working in an Amazon factory or a nursing home, and both have shortages, robots might start working in the factory. That frees up more people to work in the nursing home, which may be too complex for a humanoid.
This is the "guns and butter" argument from the "production possibility frontier" in economics: solving a problem in one area frees up resources to solve a problem somewhere else. I see it as a rebalancing of the economy. Fortunately, because it will be slow enough, I think society will have time to adapt.
Basic economics—supply and demand—means you don't supply if there's no demand. This idea that we'll pump out bots to produce products no one can buy... that's not how it works. Even Henry Ford realized he needed to pay a decent wage so people could afford his product; he more or less created the middle class because his company needed it to prosper. It's the same principle. We will stop producing humanoids when there are too many of them. It won't be because we have 8 billion of them out there and have run out of room; it will be because they no longer produce any utility.
After decades in robotics, what still surprises or delights you? What near-term milestone would make you say, "Humanoids have truly arrived"?
Well, what would make me say they've arrived? When a humanoid can do about everything I'm able to do, or maybe more. I'm getting to that stage of life where I can't do some things I used to, and it would be nice to have an assistant. It will be extremely interesting to see when we get there.
But what keeps me going goes back to the beginning. In 1985, when Deneb was just starting, we met with Professor John Hopcroft at Cornell. He was a top computer science professor trying to solve the problem of how to teach robots. This was right around one of those late AI summers, when there was a lot of excitement about knowledge-based systems and the capabilities of AI. Robotics was just starting to come around at that time; everyone was talking about the need for robots, and robot companies were starting to push them into universities for researchers to explore. Hopcroft felt you should be able to take a robot, put a bunch of parts in front of it, and have it figure out what to do, without manual programming.
We were amazed and figured he'd have it solved in two years, in which case, why were we bothering with virtual teach programming? Of course, it turned out to be a much harder problem.
That idea has always stuck with me. All along the way, it nagged at me that our software wasn't good enough; it was too hard to use. Why couldn't it be intuitive and automatic? I had no idea how to solve it, but I wanted to see it solved.
Right now, we're very close to that idea. You throw a bunch of parts in front of a robot, and it knows what to do. If you throw down the parts of a doorknob, it should look at them and say, "This is how it goes together." Whereas if you throw down Lego bricks, it should say, "Wait, I have infinite options. What do you want me to do?"
We're really close to that, where you'd need almost no human intervention. For me, that's what I'd like to see. That would mean we've finally solved the problem Professor Hopcroft introduced me to, back in 1985, 40 years ago.
Join the Club
Want to dive deeper into the mechanics of humanoids?
Dr. Scott Walter and Marwa ElDiwiny recently launched the Robotics Club, an educational initiative designed to bridge the gap between enthusiasts and complex engineering concepts. Their inaugural session, "Actuators in Humanoid Robots," kicked off in November 2025, offering a deep dive into the hardware that makes motion possible. You can follow Scott Walter and Marwa ElDiwiny on X for updates on upcoming classes and schedules.