Humanoid Robot Learns to Fold Towels Autonomously

Helix, the advanced Vision-Language-Action model from Figure, has recently shown off a new capability: folding laundry all on its own. Previously, it spent an hour reorienting packages in a logistics setting; now it has taken on the deceptively simple chore of folding towels. While doing laundry seems routine for humans, it represents one of the most intricate challenges in robotic dexterity.

Towels present a tough problem because they are soft, constantly shift in form, bend in unpredictable ways, and can wrinkle or tangle easily. There is no fixed shape to learn and no single perfect spot to grab. Even a tiny slip of a finger can cause the fabric to crumple or slide out of place. Success here requires more than accurate vision. It demands refined, synchronized finger movements to trace edges, pinch corners, smooth out surfaces, and adjust in the moment.

What makes this achievement so impressive is that it is the first time a humanoid robot with multi-fingered hands has folded laundry entirely on its own using a single, unified neural network. The same architecture that handled logistics was used for this new task — there were no changes made to the model structure or the training parameters. The only difference was a new dataset full of laundry examples.
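Figure has not published Helix's training code, so the snippet below is purely illustrative, but the core claim, that one architecture and one set of training parameters carry over and only the data changes, can be sketched with a simple configuration object. Every name and value here (vision_backbone, learning_rate, the dataset paths, and so on) is a hypothetical placeholder, not Figure's actual setup.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VLAConfig:
    # All values are hypothetical placeholders, not Figure's published settings.
    vision_backbone: str = "vit-base"            # image encoder
    language_model: str = "instruction-decoder"  # language backbone
    action_horizon: int = 16                     # future control steps predicted per call
    learning_rate: float = 1e-4
    batch_size: int = 256
    dataset_path: str = "data/logistics_demos"

# The logistics and laundry runs share every architectural and training
# setting; only the demonstration dataset is swapped out.
logistics_cfg = VLAConfig()
laundry_cfg = replace(logistics_cfg, dataset_path="data/towel_folding_demos")

assert logistics_cfg.vision_backbone == laundry_cfg.vision_backbone
assert logistics_cfg.learning_rate == laundry_cfg.learning_rate
assert logistics_cfg.dataset_path != laundry_cfg.dataset_path
print(laundry_cfg)
```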

Interestingly, while folding laundry, Helix also learned more personal behaviors: it maintained eye contact, followed its human partners with its gaze, and even used learned hand gestures. Without altering its design, Helix was able to pick out towels from a mixed pile, adapt its folding to each item's starting state, recover from unintentionally grabbing multiple items by returning the extras, and perform delicate actions such as tracing towel edges with its thumb, pinching corners, and untangling intertwined towels, all before completing the final fold.

What truly stands out is that Helix achieves all of this without relying on an explicit representation of each object. Deformable items like towels are hard to define in fixed terms, and building object-level models is often brittle. Instead, Helix handles everything in one continuous flow: from vision and language input straight through to smooth, accurate motor output.
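To make the "one continuous flow" idea concrete, here is a minimal sketch of what such an end-to-end interface looks like in code: camera frames and a language instruction go in, joint targets come out, and no towel mesh, keypoint detector, or pose estimate is computed in between. The policy body, the joint count, and the camera setup are all stand-in assumptions, not Helix's real implementation.

```python
import numpy as np

NUM_JOINTS = 35   # hypothetical upper-body action dimension
STEPS = 5         # number of control ticks to simulate

def vla_policy(rgb_frames: np.ndarray, instruction: str) -> np.ndarray:
    """Stand-in for an end-to-end vision-language-action network.

    Pixels and words in, joint targets out. A real model would run a vision
    encoder and a language backbone here; this stub only shows the interface.
    """
    # Note what is absent: no explicit object model sits between input and output.
    return np.zeros(NUM_JOINTS)

def control_loop(get_frames, instruction: str, steps: int = STEPS):
    """Stream joint targets from the single policy, one tick at a time."""
    for _ in range(steps):
        frames = get_frames()                    # latest camera images
        yield vla_policy(frames, instruction)    # sent to the motor controllers

def dummy_cameras() -> np.ndarray:
    """Two fake 224x224 RGB frames, standing in for the robot's cameras."""
    return np.zeros((2, 224, 224, 3), dtype=np.uint8)

for joint_targets in control_loop(dummy_cameras, "fold the towel"):
    assert joint_targets.shape == (NUM_JOINTS,)
```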

This advancement shows that the same intelligent framework and robot platform can move effortlessly between industrial logistics and household chores. As Figure continues to gather more real-world data, Helix’s skill, speed, and adaptability are expected to keep growing and extend to even more types of tasks.
