AI security startup Irregular has raised $80 million in new funding at a valuation of $450 million. The round, which combines seed and Series A capital, underscores the growing importance of securing advanced AI models against misuse and vulnerabilities. TechCrunch first reported the news, citing a source close to the deal.
The round was co-led by Sequoia Capital and Redpoint Ventures, with participation from angel investors including Wiz CEO Assaf Rappaport, a signal of confidence from industry leaders in Irregular's mission and technology. The capital is expected to fund the company's expansion and development efforts in the rapidly evolving AI security landscape.
Formerly known as Pattern Labs, Irregular specializes in stress-testing AI models for misuse and security risks before public deployment. The company has established itself as a key player in AI evaluations, working with leading frontier labs such as OpenAI, Anthropic, and Google DeepMind. Its proprietary framework, dubbed SOLVE, is widely used within the industry to score a model's vulnerability-detection capabilities.
Irregular plans to use the new funding to pursue its goal of identifying emergent risks and behaviors in AI models before they manifest in real-world scenarios. Co-founder Dan Lahav framed the company's objective this way: "If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models." To that end, the company runs simulated environments in which AI acts as both attacker and defender, rigorously testing model resilience.
The investment reflects the tech industry's escalating focus on AI security as frontier models introduce new and complex risks. Irregular has already demonstrated its capabilities by testing OpenAI's GPT-5 for offensive cyber operations, concluding that the model "still falls short of being a dependable offensive security tool." The company, which became profitable in its first year, aims to expand its services beyond frontier labs to a broader range of companies that need to understand and mitigate AI-related threats.