Gensyn, a protocol dedicated to decentralized machine learning computation, has announced a significant milestone: it is operating what it describes as the world's largest-scale decentralized training network. According to a recent tweet from Lior Messika, the network currently runs 12,000 concurrent nodes actively engaged in artificial intelligence training. The milestone highlights the project's progress in building open, distributed infrastructure for AI.
Gensyn's core mission is to aggregate computing power globally, enabling individuals and organizations to contribute idle GPU resources for machine learning model training. The protocol functions as a layer-1 proof-of-stake blockchain, providing a standardized, verifiable, and permissionless way to execute complex ML tasks. Its public testnet provides persistent identities for participants and tracks their contributions within the decentralized AI ecosystem.
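For intuition only, the toy Python sketch below models the task lifecycle such a protocol implies: a task is defined, a worker node executes it and commits to the result, and a verifier checks the commitment. Nothing here reflects Gensyn's actual API or verification scheme; the class, function names, and hash-based check are illustrative assumptions.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class TrainingTask:
    task_id: str
    model_spec: str    # hypothetical: architecture + hyperparameter reference
    dataset_uri: str   # hypothetical: where a worker fetches training data

def execute_task(task: TrainingTask, seed: int) -> bytes:
    # Stand-in for real training: derive a deterministic "result commitment"
    # from the task definition, so any honest worker reproduces the same bytes.
    payload = f"{task.task_id}|{task.model_spec}|{task.dataset_uri}|{seed}"
    return hashlib.sha256(payload.encode()).digest()

def verify_task(task: TrainingTask, seed: int, claimed: bytes) -> bool:
    # A verifier re-derives the commitment and compares. A production
    # protocol would sample-check or use cryptographic proofs rather than
    # re-running the entire task.
    return execute_task(task, seed) == claimed

task = TrainingTask("job-001", "resnet18/lr=0.1", "example://dataset")
commitment = execute_task(task, seed=42)
print(verify_task(task, seed=42, claimed=commitment))  # True
```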
The company aims to democratize access to AI computation, directly addressing the challenge of limited GPU availability for developers and researchers. Lior Messika emphasized this vision in the tweet, stating: "AI will run on networks and open protocols similarly to the Internet. We are early, but the data is extremely promising." This aligns with Gensyn's strategy to unlock latent compute sources and unite them into a single, cost-effective, and scalable cluster.
Gensyn has previously secured over $50 million in funding, including a Series A round led by a16z crypto, to accelerate the protocol's rollout and expand its engineering team. The project emphasizes cryptographic verification of completed tasks and a token-based reward system for contributors, fostering a new paradigm for distributed AI development. This incentivized model encourages participation and resource sharing across its global network.
Research conducted by Gensyn has focused on developing communication-efficient and fault-tolerant methods for decentralized training, such as SkipPipe, to ensure robust performance across heterogeneous networks. The reported progress with 12,000 concurrent nodes underscores the increasing viability of decentralized infrastructure for large-scale AI model training, potentially reshaping the future landscape of AI development and accessibility.
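As a rough illustration of why skip-based routing aids fault tolerance (a toy sketch, not SkipPipe itself; the function name, keep ratio, and failure model are all assumptions), consider routing each microbatch through only a subset of healthy pipeline stages so that one failed node does not stall training:

```python
import random

def skip_route(stages, failed, keep_ratio=0.75, rng=None):
    # Toy routing in the spirit of skip-based pipelining: each microbatch
    # traverses only a subset of healthy stages, possibly out of order,
    # so a failed or slow node does not stall the whole pipeline.
    rng = rng or random.Random(0)
    healthy = [s for s in stages if s not in failed]
    k = min(max(1, int(len(stages) * keep_ratio)), len(healthy))
    return rng.sample(healthy, k)  # unsorted: stage order may vary per batch

stages = list(range(8))   # 8 pipeline stages, one per participating node
failed = {3}              # the node hosting stage 3 dropped out
for mb in range(4):       # route 4 microbatches independently
    route = skip_route(stages, failed, rng=random.Random(mb))
    print(f"microbatch {mb} -> stages {route}")
```

The design point being illustrated: if training tolerates partial stage traversal, node churn degrades throughput gracefully instead of halting the run, which is what makes this style of method attractive for heterogeneous, permissionless networks.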