San Francisco, CA – Thinking Machines Lab, the artificial intelligence research and product company founded by former OpenAI CTO Mira Murati, has introduced a new theoretical framework called "Modular Manifolds" aimed at improving the stability and performance of neural network training. The announcement was made in the company's second "Connectionism" blog post, which outlines a novel approach to co-designing neural network optimizers with manifold constraints on weight matrices.

"Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices," stated Thinking Machines on its official X account. The company frames the work as grounding optimizer design in the geometry of neural network optimization.

The core of the Modular Manifolds concept is constraining the weight matrices at each layer of a neural network to specific submanifolds. This approach rethinks optimization by integrating manifold constraints directly into the design of the algorithm. As an example, the lab proposes a manifold version of the Muon optimizer in which weights are restricted to the Stiefel manifold, the set of matrices with orthonormal columns and hence a unit condition number. The method aims to maintain "tensor health" by preventing weights from growing too large or shrinking too small, either of which can hinder training.

Beyond individual layers, the theory extends to "modular manifolds," a composable framework for scaling large networks. The idea is to budget learning rates across layers based on the Lipschitz sensitivity of the network's output with respect to each layer's weights.
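The Stiefel-manifold constraint described above can be illustrated with a short NumPy sketch. This is not the lab's implementation; it only shows, under standard linear-algebra facts, how a weight matrix can be projected onto the Stiefel manifold via its polar factor so that its condition number becomes exactly one:

```python
import numpy as np

def project_to_stiefel(W):
    """Project a matrix onto the Stiefel manifold via its polar factor.

    For the SVD W = U S Vt, the polar factor U @ Vt is the closest matrix
    (in Frobenius norm) with orthonormal columns, so all of its singular
    values equal 1 and its condition number is exactly 1.
    """
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))        # an arbitrary, poorly conditioned weight matrix
W_stiefel = project_to_stiefel(W)

print(np.linalg.cond(W))             # typically much larger than 1
print(np.linalg.cond(W_stiefel))     # ~1.0: unit condition number, "healthy" tensor
```

An optimizer operating on such a manifold would additionally take its update steps in the tangent space and re-project (retract) after each step; the projection above is only the simplest building block of that picture.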
This abstraction, detailed in the lab's related paper on the modular norm, aims to provide a unified way to control the overall Lipschitz constant of deep networks, leading to more stable training and improved generalization.

Thinking Machines Lab, launched in February 2025 by Mira Murati, has quickly garnered significant attention, recently closing a $2 billion seed round led by Andreessen Horowitz that valued the company at $12 billion. Its mission is to make AI systems more widely understood, customizable, and generally capable, with a strong emphasis on open science and human-AI collaboration. The introduction of Modular Manifolds aligns with the lab's goal of building solid foundations for advanced AI capabilities.

While manifold optimization in neural networks is not new, the specific integration with optimizers like Muon and the modular approach to scaling networks represent a novel direction. Initial discussion in the AI community suggests that while the mathematical rigor is appreciated, the practical impact and scalability to frontier models remain open questions awaiting further empirical validation.
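The learning-rate budgeting idea described above can be made concrete with a toy sketch. The scheme below is illustrative and not taken from the lab's paper: for a deep linear network f(x) = W_L ··· W_1 x, the output's sensitivity to a perturbation of layer k is bounded by the product of the spectral norms of the other layers, and a global learning-rate budget is split inversely to that sensitivity:

```python
import numpy as np

def layer_sensitivities(weights):
    """Lipschitz sensitivity of a deep linear net's output to each layer.

    For f(x) = W_L ... W_1 x, perturbing W_k by dW changes the output
    operator norm by at most (product of the other layers' spectral
    norms) * ||dW||, so that product serves as layer k's sensitivity.
    """
    norms = [np.linalg.norm(W, 2) for W in weights]  # spectral norms
    total = np.prod(norms)
    return [total / n for n in norms]                # product over the other layers

def budget_learning_rates(weights, total_lr):
    """Split a global learning-rate budget inversely to sensitivity,
    so more sensitive layers take smaller steps (illustrative scheme)."""
    inv = [1.0 / s for s in layer_sensitivities(weights)]
    scale = total_lr / sum(inv)
    return [scale * v for v in inv]

rng = np.random.default_rng(1)
weights = [0.5 * rng.normal(size=(8, 8)) for _ in range(3)]
lrs = budget_learning_rates(weights, total_lr=0.3)
print(lrs)  # per-layer rates that sum to the global budget of 0.3
```

The modular-norm framework generalizes this kind of bookkeeping to nonlinear layers and arbitrary network compositions; the sketch only conveys the budgeting intuition for the linear case.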