Palo Alto, CA – K-Scale Labs announced on July 1, 2025, the official launch of K-Bot, an open-source humanoid robot designed to be affordable and accessible to a broad audience. The company, founded by veterans of Meta, Tesla, and Boston Dynamics, aims to shift the robotics paradigm by putting capable hardware in the hands of individuals and developers, not just large corporations. K-Bot is priced at $8,999, significantly lower than many comparable humanoid robots on the market.
K-Scale Labs stated its vision on social media:

> "K-Bot is the world’s first open-source humanoid robot that is affordable, available and made in America. Robots should serve people and empower anyone to build the future, not just big corporations."

This sentiment underscores the company's commitment to democratizing advanced robotics. Shipping for the K-Bot Founder Edition is slated to begin in July 2025.
The K-Bot stands 4 feet 7 inches tall and weighs 77 pounds, featuring a modular design that allows for customization and upgrades. It operates on K-Scale's open-source software stack, including the Rust-based K-OS operating system, a Python SDK, and the K-Sim framework for reinforcement learning. This open architecture encourages developers to create and deploy cutting-edge AI applications.
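The announcement does not detail what programming against that stack looks like, but a developer session might resemble the short Python sketch below. To be clear, the module name `kos_sdk`, the `Robot` class, the joint name, and every method call here are illustrative assumptions made for this article, not K-Scale's published API; consult the company's actual SDK documentation for the real interface.

```python
# Hypothetical sketch only: "kos_sdk", Robot, and all method names below are
# assumptions for illustration, not K-Scale's documented Python SDK.
import time

from kos_sdk import Robot  # assumed client for the Rust-based K-OS runtime


def wave(robot: Robot) -> None:
    """Wave one arm by streaming joint position targets to K-OS."""
    for _ in range(3):
        robot.set_joint_position("right_shoulder_pitch", degrees=-45.0)
        time.sleep(0.5)
        robot.set_joint_position("right_shoulder_pitch", degrees=0.0)
        time.sleep(0.5)


def main() -> None:
    # Connect to the onboard K-OS service over the local network
    # (address is a placeholder).
    robot = Robot.connect("192.168.1.42")
    try:
        robot.enable_motors()
        wave(robot)
    finally:
        robot.disable_motors()  # always de-energize the actuators on exit

if __name__ == "__main__":
    main()
```

In an open stack like this, the same policy code could first be trained and validated in a simulator such as K-Sim before being deployed to the physical robot, which is the usual reinforcement-learning workflow the company describes.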
Beyond the full-sized K-Bot, K-Scale Labs also offers the Zeroth Bot (Z-Bot), a smaller, 1.5-foot humanoid priced at approximately $999. This more compact version further lowers the barrier to entry, targeting students, hobbyists, and researchers. The company's rapid prototyping approach has produced six distinct humanoid models in under a year, and it has fostered a vibrant community through platforms like Discord and GitHub.
K-Scale Labs, a Y Combinator alumnus, has raised $1 million across two pre-seed funding rounds to support its mission. The "made in America" aspect of K-Bot also positions it as a strategic alternative in a global robotics market increasingly concerned with supply chain security. The company envisions K-Bot as a platform for future industrial and home applications, with plans to integrate advanced Vision-Language-Action models later this year.