A Metacognitive and Modular Approach to Self-Organizer AI in Open-Ended, Dynamic Environments
Abstract
This paper introduces a practical and flexible self-organizing artificial intelligence (AI) architecture designed for dynamic, non-contextual environments (those lacking clear labels, fixed goals, or stable features). Traditional approaches such as supervised learning, rule-based systems, and classical reinforcement learning typically require predesigned rewards and a fixed environment structure, which limits their flexibility. In contrast, the proposed framework emphasizes meta-cognitive regulation and cognitive metonymy, allowing agents to self-organize their internal behaviors and strategies under variable inputs. The architecture is a component-based multi-agent design built on perception–feedback loops, decentralized communication protocols, and dynamic heuristics. Together, these components enable emergent adaptability: agents can build goal hierarchies on the fly, monitor their own learning, and collaborate in the absence of central control. Unlike static models, this approach supports dynamic goal selection and rapid re-planning through internal monitoring and feedback. The framework was evaluated in simulation experiments on two complex tasks: autonomous navigation in unknown terrains and unsupervised anomaly detection in non-stationary data streams. Results demonstrate superior performance compared to conventional models, including higher average goal completion rates (87.4% vs. 65–78%), faster reaction times (43 ms vs. 62–94 ms), and greater resilience to disturbances. These findings illustrate the promise of the self-organizing AI paradigm for open-ended, uncertain domains such as robotics, IoT, and autonomous systems. In summary, our work challenges conventional assumptions in AI design, arguing in favor of naturally adaptive cognition and continuous self-evolution in realistic environments.