In a recent social media post, Nick Schrock, founder of Dagster, asserted that "context engineering your agent dominates model choice when you build something real." Schrock, a prominent voice in data engineering, dismissed the recurring "new model time to switch toolchains" cycle as "theater," emphasizing the enduring "stickiness of workflows and the application layer" in AI development. His argument reflects a broader industry shift toward optimizing the information supplied to AI models rather than focusing solely on model capabilities.
Context engineering, an evolution of prompt engineering, involves systematically curating and managing the optimal set of tokens and information available to a large language model (LLM) at any given time. This discipline addresses the inherent "attention budget" of LLMs, where their ability to accurately recall information decreases as the context window expands. Experts, including those at Anthropic, define it as designing systems that provide the right information and tools, in the correct format, at the opportune moment to enable LLMs to effectively accomplish tasks.
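The core idea of managing an "attention budget" can be illustrated with a minimal sketch. This is not Anthropic's or Dagster's implementation; the snippet scores, token counts, and greedy packing strategy below are all illustrative assumptions about how one might select the most relevant context that fits a fixed token budget.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    relevance: float  # e.g. a retrieval/embedding similarity score (assumed)
    tokens: int       # e.g. a tokenizer's count for this snippet (assumed)

def curate_context(snippets: list[Snippet], budget: int) -> list[Snippet]:
    """Greedily pack the most relevant snippets that fit the token budget."""
    chosen, used = [], 0
    for s in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        if used + s.tokens <= budget:
            chosen.append(s)
            used += s.tokens
    return chosen

# Hypothetical candidates competing for a 1,000-token budget.
snippets = [
    Snippet("Q3 revenue table", relevance=0.9, tokens=400),
    Snippet("Full sales call transcript", relevance=0.6, tokens=3000),
    Snippet("Schema for orders table", relevance=0.8, tokens=200),
]
picked = curate_context(snippets, budget=1000)
print([s.text for s in picked])  # → ['Q3 revenue table', 'Schema for orders table']
```

The point of the sketch is the trade-off itself: the large transcript is excluded despite being somewhat relevant, because spending the whole budget on it would crowd out two smaller, higher-value pieces of context.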
Schrock's company, Dagster Labs (formerly Elementl), the creator of Dagster, has reoriented its roadmap around context engineering, exemplified by its new product, Compass. This collaborative agent, which integrates with platforms such as Slack, is designed to govern and manage context, treating it like code that can be versioned and reverted. The approach aims to make data accessible to business stakeholders while preserving governance and precision through "context pipelines" that batch-compute and synthesize information offline.
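"Context treated like code that can be versioned and reverted" can be sketched in a few lines. This is a hypothetical toy, not Compass's actual design: it keeps every revision of a context document and models a revert as appending an earlier version back on top, so the full history is preserved.

```python
class VersionedContext:
    """Toy append-only version history for a context document (illustrative)."""

    def __init__(self, initial: str):
        self.versions = [initial]

    def update(self, new_text: str) -> int:
        """Record a new version; return its version number."""
        self.versions.append(new_text)
        return len(self.versions) - 1

    def revert(self, version: int) -> str:
        """Roll back by re-appending an earlier version, keeping history intact."""
        self.versions.append(self.versions[version])
        return self.versions[-1]

    @property
    def current(self) -> str:
        return self.versions[-1]

ctx = VersionedContext("Definitions: revenue = gross sales - refunds")
ctx.update("Definitions: revenue = gross sales")  # a bad edit slips in
ctx.revert(0)                                     # roll it back, like reverting code
print(ctx.current)  # → Definitions: revenue = gross sales - refunds
```

Modeling the revert as a new append, rather than truncating history, mirrors how version control systems handle rollbacks: the mistake and its correction both remain auditable, which matters in governed data environments.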
The discussion around workflow stickiness and the application layer points to the practical challenges of deploying AI agents in production. While agents offer dynamic tool selection and adaptive reasoning, they can incur significant token costs and present complex debugging scenarios. Industry analysis suggests that workflows, with their structured and predictable execution, often provide more reliable and cost-effective solutions for repeatable tasks, especially in regulated environments. Hybrid systems, combining stable workflows with agents for specific, complex decisions, are emerging as a pragmatic solution to balance flexibility and resilience in AI applications.
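The hybrid pattern described above can be sketched as a fixed pipeline with a single pluggable decision point. Everything here is illustrative: the step functions are hypothetical, and the "agent" is a stand-in callable where a real system would make an LLM-backed call for the one step that needs open-ended judgment.

```python
from typing import Callable

def extract(raw: str) -> list[str]:
    """Deterministic workflow step: parse raw input into rows."""
    return [line.strip() for line in raw.splitlines() if line.strip()]

def load(rows: list[str]) -> str:
    """Deterministic workflow step: pretend to load rows downstream."""
    return f"loaded {len(rows)} rows"

def run_pipeline(raw: str, classify: Callable[[str], str]) -> dict:
    rows = extract(raw)                      # structured, predictable step
    routed = {r: classify(r) for r in rows}  # only routing is delegated to the agent
    return {"status": load(rows), "routing": routed}

# Trivial stand-in agent; in practice this would be an LLM call with its own
# token cost, which is why it is confined to one decision rather than the
# whole pipeline.
agent = lambda row: "priority" if "urgent" in row else "normal"
result = run_pipeline("urgent: outage\nweekly report\n", agent)
print(result["status"])  # → loaded 2 rows
```

Confining the agent to a single decision keeps the rest of the pipeline cheap, debuggable, and repeatable, which is the resilience-versus-flexibility balance the hybrid approach is after.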