Despite exponential advances in data, computing power, and algorithms, complexity in modern industry continues to rise, compounding risks, increasing costs, and amplifying the consequences of error. Environments like engineering, manufacturing, and energy are especially vulnerable. They behave like Complex Adaptive Systems, characterized by interdependent, reactive, and unpredictable behavior. In response to these challenges, Artificial Intelligence (AI) has emerged as a powerful tool, accelerating digital transformation and offering solutions to some of the pressures created by this growing complexity. Yet its limitations constrain its effectiveness in environments where transparency, traceability, and domain expertise are paramount.
This paper introduces Model-Based Cognition™ (MBC), a complementary path for augmenting both human and artificial intelligence. MBC proposes a self-adapting digital thread at scale, achieved by using models to digitally mirror the functions and collaboration of organic cognition: language, perception, attention, learning, memory, reasoning, problem solving, and decision making.
MBC depends on a foundation of five essential capabilities working in concert. First, the model-based cognitive functions listed above must be present and optimized to collaborate with one another. Next, domain experts must be able to build, reuse, and evolve knowledge models directly, without programming bottlenecks. Third, these models must be symbolic, structured, and no-code, capable of representing the full breadth of domain knowledge (not just geometry) while preserving clarity, adaptability, and transparency. Equally critical, there must be top-down coordination of syntax and structure to ensure that independently authored models remain interoperable. Finally, the system must enable models to combine and recombine organically and dynamically as new information, conditions, and objectives emerge. Together, these capabilities create the environment needed for scalable, adaptive cognition across complex domains.
Through an example in the CAD environment, this paper illustrates how real-time, context-driven intelligence can transform engineering processes – and how this approach scales to other complex environments. As digital transformation accelerates, success will depend not just on more automation, but on better cognition: structured, purposeful, and collaborative.
1. Introduction: Complexity Isn’t the Symptom, It’s the System
Across industries, enterprises find themselves confronting a paradox. Despite exponential increases in data availability, computational power, and algorithmic sophistication, operational complexity is not declining. It is compounding. In sectors such as engineering, manufacturing, and energy generation/distribution, complexity has become not merely a challenge to manage but a defining characteristic of the environment itself. Decisions once isolated now cascade across intricate webs of dependencies, affecting costs, schedules, safety outcomes, and regulatory compliance in unpredictable ways. These industries exemplify what systems theorists call Complex Adaptive Systems: dynamic ecosystems where behaviors emerge from countless internal and external interactions, making outcomes difficult to forecast and even harder to control. In such settings, traditional linear management strategies struggle to cope. No single stakeholder, process owner, or software system can maintain a comprehensive understanding of the total operating context at any given moment.
The result is a narrowing margin for error. A single oversight at one node can ripple outward, compounding into costly delays, safety risks, or missed sustainability targets. Increasingly, the ability to respond with speed alone is insufficient. Enterprises must now cultivate the ability to think better – to recognize patterns sooner, to reason through trade-offs more intelligently, and to adapt structures dynamically as conditions evolve.
Yet most of today’s digital transformation strategies, while ambitious, are beginning to show their limits in addressing this new reality. Organizations have pursued a range of approaches – expanding manual efforts by adding people and resources, constructing interoperable digital threads to connect systems, scaling data pipelines, and deploying generalized Artificial Intelligence (AI) models. Each of these strategies has delivered important progress, but all face growing challenges. Brute-force scaling strains resources and becomes unsustainable as returns diminish. Interoperable threads are frequently fragile, relying on costly, one-off Application Programming Interfaces (APIs) that struggle to keep pace as models and information shift with the business. Data pipelines and generalized AI, meanwhile, often lack the transparency and adaptability needed for complex, evolving environments. Without rethinking the foundation of how these organizations perceive, reason, and act, they risk finding themselves increasingly constrained by the very complexity they seek to address.
2. Reframing the Role of AI in Complex Adaptive Systems
The last decade has witnessed extraordinary advances in AI, with generative and agentic models expanding the possibilities for design optimization, knowledge retrieval, and task automation. Organizations have increasingly attempted to apply these technologies to accelerate development cycles, create smarter work instructions that bridge the gap between design and production, capture and proactively apply impactful lessons learned, and strengthen compliance across the lifecycle. AI has made incremental progress toward these goals, but beneath these successes lies an uncomfortable truth: AI excels at performing narrowly defined tasks within bounded contexts, yet it struggles to navigate the open-ended, dynamic realities of complex adaptive systems. More critically, the very architectures that power today’s AI introduce vulnerabilities that are magnified in high-stakes environments.
At the heart of the problem is opacity. Generative models produce outputs with remarkable fluency and confidence, but little capacity for self-explanation. When an AI system infers a design recommendation, flags a manufacturing deviation, or proposes an operational adjustment, the underlying rationale is often inaccessible. For industries where traceability, accountability, and engineering justification are non-negotiable, this black-box behavior is not merely a technical inconvenience; it is an existential flaw. Moreover, these models are inherently constrained by their training data. No matter how sophisticated the architecture, an AI system can reason only within the boundaries of what it has seen. In fields where proprietary knowledge, evolving standards, and edge-case scenarios dominate daily operations, reliance on static training corpora introduces critical blind spots. Finally, the process of integrating domain-specific knowledge into generative systems carries its own risks. Training or fine-tuning large models with proprietary information may expose intellectual property to external platforms, undermining a core pillar of competitive advantage and regulatory compliance.
Taken together, these limitations reveal a sobering reality: current AI architectures, while powerful within their domains, are fundamentally misaligned with the demands of environments where complexity, change, and consequence are tightly intertwined. If enterprises are to navigate the future successfully, they will need more than faster algorithms or bigger data.
3. Maturing Complexity Reduction into Complexity-Capable Frameworks
The path forward is not to eliminate complexity through automation, but to create systems that are conscious of complexity and capable of supporting the dynamic demands of modern environments. Naturally, this raises the question: are there examples of Complex Adaptive Systems that already embody this ideal? As scientist and inventor Jay Harman observed, “Nature has already solved the problems we are trying to solve.” Throughout history, many of humanity’s most profound technological advances have stemmed not from invention ex nihilo, but from biomimicry: birds inspiring flight, termite mounds shaping sustainable architecture, shark skin informing aerodynamic engineering.
This principle extends to artificial intelligence itself. The layered artificial neurons and weighted connections of today’s generative architectures directly emulate the tangible anatomy of human neural networks and synapses. Yet just as nature’s designs are more than their physical structures, human intelligence is more than the brain’s architecture. The mind itself operates as a complex adaptive system, where intelligence emerges not from how much is stored, but from how well different capabilities coordinate in real time. Unlike machines, whose primary method of improvement is to accumulate ever-larger volumes of data, human advancement is measured differently. A person’s Intelligence Quotient (IQ) does not increase by learning more facts; it improves when cognitive processes become more efficient, agile, and interconnected. True cognitive growth refines how existing knowledge is organized, retrieved, and applied – a capability that arises from the seamless integration of multiple cognitive functions (Table 3.1 – Organic Cognitive Components and their Functions), each aligning and adapting to the others within an evolving whole.
Table 3.1 – Organic Cognitive Components and their Functions
In biological systems, intelligence is not limited to executing predefined functions; it evolves through continuous refinement. Human brains possess neural plasticity – the ability to reorganize, strengthen, and reconfigure cognitive functions in response to new challenges. This capacity allows individuals to apply core principles across entirely different domains, effortlessly adapting knowledge to novel situations. The key lies not simply in having discrete cognitive abilities, but in how seamlessly they collaborate, realign, and build upon one another in real time (Fig 3.1 – Organic Cognitive Component Coordination). This interplay enables resilience in environments shaped by uncertainty and change. For digital systems to achieve similar adaptability, they must move beyond static architectures and embrace designs where cognitive components can dynamically coordinate, recombine, and refine themselves within an evolving context.
Fig 3.1 – Organic Cognitive Component Coordination
4. Cognition as a Digital Framework
Model-Based Cognition™ (MBC) offers a new paradigm for emulating the capabilities and coordination of cognition in digital systems. Rather than attempting to manage complexity through brute computational scale, MBC relies on domain experts to construct modular, structured, and symbolic models that reflect their own cognitive processes – capturing expertise in a form that can be represented, applied, and reused as naturally as thought itself.
When knowledge is modeled in this way, it becomes more than stored information: it becomes a dynamic asset that can be applied, adapted, and instantiated – supporting both scalable generalization across diverse applications and repeatable execution across occurrences. Like neural plasticity in the brain, each cognitive model can adjust to new contexts through refinement and reorganization – without disrupting the broader system. Because the models are modular and interdependent, changes in one component can inform and influence others, allowing the entire system to evolve coherently. This mirrors the way human cognition transfers and recombines underlying principles across domains while maintaining stability and continuity.
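To make the idea of a modular, symbolic knowledge model more concrete, the sketch below shows one possible representation in Python. It is purely illustrative: the KnowledgeModel class, the fastener-selection rules, and all field names are hypothetical assumptions, not drawn from any MBC implementation. The point is only that expert knowledge can be held as structured, inspectable rules that are instantiated against different contexts and extended without rewriting code.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a symbolic knowledge model captured as structured,
# inspectable rules rather than opaque weights or hard-coded program logic.
@dataclass
class KnowledgeModel:
    name: str
    rules: list = field(default_factory=list)   # (condition, conclusion) pairs

    def add_rule(self, condition, conclusion):
        """Domain experts extend the model by declaring rules, not by coding."""
        self.rules.append((condition, conclusion))

    def apply(self, context: dict) -> list:
        """Instantiate the model against a concrete context: return every
        conclusion whose condition holds in that context."""
        return [conclusion for condition, conclusion in self.rules if condition(context)]

# Hypothetical fastener-selection knowledge, reused across two different contexts.
fasteners = KnowledgeModel("fastener_selection")
fasteners.add_rule(lambda c: c["material"] == "aluminum" and c["thickness_mm"] < 3,
                   "use blind rivet")
fasteners.add_rule(lambda c: c["material"] == "steel",
                   "use M6 bolt, torque to 9 N·m")

print(fasteners.apply({"material": "aluminum", "thickness_mm": 2}))  # ['use blind rivet']
print(fasteners.apply({"material": "steel", "thickness_mm": 5}))     # ['use M6 bolt, torque to 9 N·m']
```

Because the rules are declared rather than buried in code, refining or reorganizing one rule leaves the rest of the model – and every application that reuses it – untouched, which is the plasticity-like behavior described above.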
While these models are the building blocks of MBC, they are not sufficient on their own. To truly emulate cognition, they must be used to parallel the core functional architecture and collaboration that underpin cognitive capability (Table 4.1 – Model-based Cognitive Components and their Functions).
Table 4.1 – Model-based Cognitive Components and their Functions
When these cognitive capabilities operate not in isolation but in coordination, digital systems can begin to mirror the responsiveness, adaptability, and resilience demanded by complex adaptive environments. Rather than automating expertise away, MBC captures, amplifies, and operationalizes it – building systems capable of learning, adapting, and improving as complexity itself evolves. Each interaction, whether a domain expert codifying knowledge into a model or a user reapplying that model in the flow of work, contributes to this evolution. Distributed across thousands of users and activities simultaneously, these interactions continuously reshape and expand the system’s knowledge base, enabling it to stay aligned with the dynamic complexity of the organization itself.
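As a rough illustration of what such coordination could look like in software, the hypothetical sketch below chains four stand-in functions named after the organic analogy in the text: perceive, attend, reason, and decide. None of these names, signatures, or threshold values come from an actual MBC system; they simply show one way each cognitive step can consume the structured output of the previous one, with the final decision carrying its justification alongside the action.

```python
# Hypothetical sketch of model-based cognitive functions coordinating on one event.

def perceive(raw_event: dict) -> dict:
    """Translate a raw signal into structured facts."""
    return {"feature": raw_event["feature"],
            "deviation_mm": raw_event["measured"] - raw_event["nominal"]}

def attend(facts: dict, tolerance_mm: float = 0.1) -> bool:
    """Filter: only deviations beyond tolerance deserve further reasoning."""
    return abs(facts["deviation_mm"]) > tolerance_mm

def reason(facts: dict, knowledge: dict) -> str:
    """Consult a knowledge model for the applicable response."""
    return knowledge.get(facts["feature"], "escalate to engineer for disposition")

def decide(recommendation: str, facts: dict) -> dict:
    """Package a traceable decision: the action plus the facts that justify it."""
    return {"action": recommendation, "justification": facts}

knowledge = {"bore_diameter": "re-ream and re-inspect before assembly"}
event = {"feature": "bore_diameter", "nominal": 25.0, "measured": 25.4}

facts = perceive(event)
if attend(facts):
    print(decide(reason(facts, knowledge), facts))
```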
5. Operationalizing Model-Based Cognition™
The foundation begins with modeling formats that move beyond simple geometry. To truly capture expertise, models must express logic, semantics, rules, relationships, algorithms, and procedures. They must represent not just the form of something, but the reasoning behind its behavior – the causal structures that explain why outcomes occur. Just as important, these different types of models must be able to combine and recombine (interoperate) organically. Their integration should rely on the knowledge already encoded within each model, without requiring additional manual alignment or translation. This ensures that as new models are created or existing ones evolve, they automatically connect and adapt to one another – supporting knowledge reuse, system scalability, and dynamic reasoning.
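A minimal, hypothetical sketch of this kind of organic interoperation follows. Two independently authored models declare the concepts they consume and produce, and a small resolver chains them purely from that declared semantics, with no hand-written integration between them. The Model class, concept names, and numeric thresholds are invented for illustration and are not part of any product.

```python
# Hypothetical sketch: models interoperate through the semantics they declare.

class Model:
    def __init__(self, name, consumes, produces, fn):
        self.name = name
        self.consumes = set(consumes)   # concepts this model needs
        self.produces = produces        # concept this model contributes
        self.fn = fn

    def can_run(self, facts: dict) -> bool:
        return self.consumes.issubset(facts)

def run_chain(models, facts):
    """Repeatedly apply any model whose inputs are available, until quiescent."""
    progressed = True
    while progressed:
        progressed = False
        for m in models:
            if m.produces not in facts and m.can_run(facts):
                facts[m.produces] = m.fn(facts)
                progressed = True
    return facts

thermal = Model("thermal", ["power_w", "area_m2"], "heat_flux_w_m2",
                lambda f: f["power_w"] / f["area_m2"])
material = Model("material", ["heat_flux_w_m2"], "coating",
                 lambda f: "ceramic" if f["heat_flux_w_m2"] > 5000 else "standard paint")

print(run_chain([thermal, material], {"power_w": 1200.0, "area_m2": 0.2}))
# {'power_w': 1200.0, 'area_m2': 0.2, 'heat_flux_w_m2': 6000.0, 'coating': 'ceramic'}
```

Adding a third model that produces or consumes one of these concepts would be picked up by the same resolver automatically, which is the sense in which new or evolved models "connect and adapt to one another" without manual alignment.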
Because expertise in complex domains is often localized, contextual, and distributed across hundreds or even thousands of individuals, modeling cannot be locked away in code or passed through layers of developers. Doing so introduces bottlenecks and risks distorting the fidelity of expert knowledge. Instead, MBC demands bottom-up knowledge sourcing: empowering engineers, planners, and operators – those closest to the systems – to directly construct and evolve the models. These domain experts know when and how knowledge should be applied. They understand the logic that drives operational decisions and the variables that must be reasoned through to navigate trade-offs and edge cases. Their insight is irreplaceable, and systems must be built to capture it natively. Yet enabling bottom-up contributions alone is insufficient. To ensure that knowledge remains interoperable across systems, standards, and organizations, top-down coordination of semantics and structure is essential. Shared meaning, consistent frameworks, and unified syntax are critical to avoid fragmentation and preserve scalability.
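One lightweight way to picture this top-down coordination is a shared controlled vocabulary against which bottom-up contributions are checked before publication, as in the hypothetical sketch below. The vocabulary terms and the validate_model helper are assumptions made purely for illustration, not a description of how MBC enforces semantic consistency.

```python
# Hypothetical sketch: bottom-up contributions validated against a top-down
# controlled vocabulary, so independently authored models stay interoperable.

CONTROLLED_VOCABULARY = {
    "material", "thickness_mm", "heat_flux_w_m2", "coating", "torque_nm",
}

def validate_model(name: str, concepts: set) -> list:
    """Return the terms a contributed model uses that are not in the shared
    vocabulary; an empty list means the model can be published as-is."""
    return sorted(concepts - CONTROLLED_VOCABULARY)

# An expert submits a model that invents its own spelling for one concept.
issues = validate_model("bracket_coating", {"material", "heat_flux", "coating"})
print(issues)  # ['heat_flux'] -> flagged for mapping to 'heat_flux_w_m2'
```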
When these principles converge (Fig 5.1 – Model-Based Cognition™ Principles) – Model-based cognitive functions, dynamic model recombination, robust no-code modeling frameworks, bottom-up expert sourcing, and top-down semantic coordination – they create the conditions for a different kind of digital transformation. One where, much like human IQ improves as cognitive skills are strengthened and better integrated, the intelligence of the system itself grows over time, shaped by how knowledge is captured, codified, and reused.
Fig 5.1 – Model-Based Cognition™ Principles
The result is not a static black box hidden within an IT department. It is a living mirror – reflecting the evolving expertise of the organization, optimizing its intelligence through real-world interaction, and adapting dynamically as operational needs change.
6. Model-based Cognitive Applicability
While CAD Design Engineering offers a compelling example, the principles of Model-Based Cognition™ are broadly applicable across the digital enterprise – including both traditionally Model-based systems and processes that have historically not been model-driven. The examples that follow are only a few of the areas already benefiting from the application of MBC; indeed, any domain operating within a complex adaptive system stands to benefit from this approach.
Just as critically, MBC extends into traditionally non-Model-based activities.
Across all these domains, and far beyond, the common thread is clear: In environments defined by uncertainty, interdependence, and accelerating change, systems that can perceive, reason, learn, and adapt will consistently outperform those that simply execute predefined scripts. By moving beyond task automation toward true cognitive support, enterprises can transform digital transformation from a reactive necessity into a proactive strategic advantage.
7. Conclusion: Delivering on the Promise of an Intelligent Digital Thread
The era of trying to simplify complexity is over. The complexity enterprises face today is not a statistical inconvenience to be smoothed by data averages, but an intrinsic, accelerating feature of their environments.
In industries like engineering, manufacturing, and energy – where precision, interdependence, and accountability are non-negotiable – the limits of conventional digital transformation strategies are becoming increasingly visible.
Systems that automate processes without understanding them, that predict without reasoning, that act without explaining, are not enough.
The next frontier is clear. Enterprises must build systems that think – not just faster, but better. Systems that perceive changes dynamically, reason over consequences intelligently, learn and adapt from experience, and collaborate transparently with human expertise.
Model-Based Cognition™ offers a blueprint for achieving this evolution. By operationalizing cognitive principles within digital architectures, enterprises can transform knowledge from a static artifact into a living system – resilient, adaptive, and strategically aligned to future challenges.
The organizations that embrace cognitive transformation today will not simply optimize existing operations. They will redefine what operational excellence, innovation, and leadership mean in a world where complexity is not the enemy, but the environment. The future belongs not to those who automate best. It belongs to those who think best.