I. The Unsustainable Pursuit of Cognition: Defining the Energy Obstacle
The contemporary pursuit of advanced artificial intelligence, particularly large language models (LLMs), has driven a technological revolution at the cost of profound environmental and economic instability. The critical juncture facing the industry is the massive efficiency gap between biological intelligence and current machine-learning systems, a gap that demands an urgent transition to sustainable computing architectures.
A. The Theoretical Goal: Defining Cognitive AI and Sustainable Cognition
Cognitive AI distinguishes itself from mere pattern recognition systems. It refers to advanced AI models capable of human-level reasoning, complex adaptability, and, critically, continuous, lifelong learning.1 These systems seek to move beyond fixed training epochs and retain previously learned knowledge as they absorb new information, mirroring biological mechanisms.1
Achieving this high-level machine cognition requires a complete rethinking of resource consumption, leading to the mandate for Sustainable Cognition. This concept demands a fusion of brain-inspired paradigms, eco-efficient hardware solutions, and comprehensive strategic policy frameworks that explicitly balance rapid innovation with economic feasibility.2 Sustainable AI development must inherently address a dual requirement: ensuring not only clean data inputs that are non-biased, high-quality, and trustworthy but also clean power to fuel their computational demands.3 Without this dual mandate, the goals of advanced, scalable intelligence will remain fundamentally limited by resource constraints.
B. The Baseline Comparison: Human Brain vs. Large Language Models (LLMs)
The stark contrast between biological intelligence and current artificial intelligence exemplifies a massive imbalance in energy consumption, underscoring the severity of the energy obstacle.2
The human nervous system provides the definitive benchmark for efficiency. For all its cognitive tasks, reasoning, and physical control, the entire system operates on roughly 20 watts (W) of power, about what a couple of standard LED light bulbs draw.1 This “remarkable energetic efficiency” highlights the inherent economic and environmental advantages of biological intelligence, which runs on minimal resource investment.2
Conversely, Large Language Models (LLMs) utilize a profoundly different, energy-intensive information-processing system. The operational costs of LLMs are driven by computation that requires massive resource investment.2 The environmental footprint of training a single large neural network is estimated to be equivalent to some 626,000 pounds of carbon dioxide.5 This staggering figure only accounts for specialized hardware energy consumption during the training phase, omitting the energy needed for server operation and other necessary data center machines.5
Furthermore, once models are trained, the ongoing inference phase presents continuous operational costs. While estimates vary widely and often depend on closed-source information, general figures show significant power draw. Older, widely cited estimates placed GPT-3 inference at about 3 Wh per query. More recent optimizations suggest OpenAI’s flagship model, GPT-4, consumes approximately 0.3 Wh per query.6 Meanwhile, open-source models with 70 billion parameters, such as Llama-3-70B, average 1.7 Wh per query.6 This persistent energy requirement influences the financial and environmental sustainability of broad AI adoption across industries.2
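The scale implied by these per-query figures becomes concrete when projected over a realistic service volume. The sketch below multiplies the cited per-query estimates out to annual energy at a hypothetical one million queries per day; the Wh figures are the rough, model-dependent estimates cited above, not measured values.

```python
# Back-of-the-envelope projection of annual LLM inference energy,
# using the per-query estimates cited above (rough estimates, not
# measured figures; the query volume is a hypothetical assumption).
WH_PER_QUERY = {
    "GPT-3 (older estimate)": 3.0,
    "GPT-4 (recent estimate)": 0.3,
    "Llama-3-70B (open source)": 1.7,
}

QUERIES_PER_DAY = 1_000_000  # hypothetical service volume

for model, wh in WH_PER_QUERY.items():
    kwh_per_year = wh * QUERIES_PER_DAY * 365 / 1000
    print(f"{model}: {kwh_per_year:,.0f} kWh/year at 1M queries/day")
```

Even the most optimized estimate (0.3 Wh/query) works out to over 100 MWh per year at this volume, which is why per-query efficiency dominates the economics of broad deployment.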
Table 1 provides a definitive comparison illustrating the scale of the energy challenge that new architectures must solve.
Table 1: Energy Consumption Comparison: Biological vs. Artificial Cognition
| System | Primary Energy Source/Input | Approximate Power Consumption (W/Energy Metric) | Processing Paradigm | Efficiency Implication |
| --- | --- | --- | --- | --- |
| Human Brain (Cognition) | Glucose/Oxygen | ~20 Watts 1 | Event-Driven, Parallel, In-Memory | Highly Resource-Efficient |
| LLM Training (Example) | Electricity | Equivalent to 626,000 lbs $\text{CO}_2$ (Training cost) 5 | Continuous, Sequential Data Transfer | Extremely High Resource Cost |
| LLM Inference (per-query estimates) | Electricity | 0.3 Wh to 3 Wh per query 6 | Continuous, Sequential Data Transfer | High Energy Demand per Task |
| Neuromorphic Chip (Inference) | Electricity | 3x to 100x more efficient than conventional systems (per inference) 7 | Event-Driven, Parallel, In-Memory | Significant Efficiency Gain, specialized for edge |
C. Quantifying the Footprint: The Economic and Environmental Cost
The energy obstacle is fundamentally a two-pronged crisis: one financial, one environmental. High energy demands translate directly into substantial operational costs for firms, creating economic trade-offs that limit the scalability of AI adoption, especially for generalized services.2
The environmental burden is accelerating rapidly. Estimates suggest that AI data centers consumed 23 Terawatt hours (TWh) in 2022.6 As AI integration deepens globally, this figure is expected to soar. The central concern is that the supposed productivity gains derived from this new technology could be completely offset by irreparable damage to the environment and the efforts required to combat climate change.5
Compounding this crisis is a significant transparency deficit in the industry. Computer scientists currently struggle to agree on a standardized, uniform methodology for measuring AI emissions.6 For large, closed-source models like ChatGPT, Gemini, or Grok, the precise number of parameters and their specific energy consumption metrics are often maintained as trade secrets.6 This withholding of essential data fundamentally undermines global and national efforts to accurately quantify the carbon footprint of AI, making it exceptionally difficult for governments to design or enforce effective policy responses, such as necessary carbon regulations.2 Accurate data is a prerequisite for effective regulation, and its absence slows the necessary technological pivot.
II. The Dual Mandate for Future AI: Clean Data Meets Clean Power
The future of advanced, sustainable cognitive AI hinges on addressing the entire lifecycle of the technology, from the purity of the data used for training to the cleanliness of the energy source powering its operations. This represents the dual mandate of sustainable cognition.
A. Beyond Computation: The Ethics and Infrastructure of Clean Data
The pursuit of better, safer, and more robust AI necessitates vast, high-quality training data sets.8 The term “clean data” encompasses inputs that are well-managed, trustworthy, and rigorously validated.9
Cleaning and preparing this enormous volume of data has spawned an outsourced industry dedicated to tasks such as manually tagging objects in images or scanning and identifying offensive content for deletion.8 This data supply chain introduces significant socio-economic and ethical challenges, especially concerning labor practices and the inherent data bias embedded within the inputs.8 Ethical concerns regarding bias and accountability are paramount.3
Critically, addressing data bias often requires extensive data sanitation, complex refinement, or costly retraining cycles. Each remediation effort demands additional compute cycles and, therefore, greater energy consumption. Consequently, prioritizing clean data governance—ensuring the reliability and ethics of inputs from the start—is an infrastructural approach to efficiency, as it inherently reduces future, recursive energy waste and unnecessary computational load.
B. Sourcing Sustainable Energy: Data Centers, Renewable Mixes, and Cloud Provider Optimization
The computational power required for AI must be fully integrated with clean power sourcing to meet global climate targets.3 The infrastructure layer, particularly cloud computing, is a primary driver of efficiency in this realm.
Major cloud service providers typically operate with better power and cooling improvements than typical enterprise-owned data centers, leading to reduced energy consumption per compute unit. They also utilize newer, optimized hardware and achieve server utilization rates several times greater than average enterprise figures.11 Crucially, the largest public cloud providers often procure greater renewable energy mixes, actively mitigating the carbon footprint of the data centers that host AI training and inference.11
A complex self-regulation paradox emerges when analyzing AI’s role in the clean energy transition. AI is simultaneously a massive energy consumer and a powerful tool for environmental optimization. AI is revolutionizing the clean energy sector by optimizing power generation, improving smart grid management, and reducing industrial carbon emissions and waste by 30% to 50%.3 This creates a critical imperative: the energy cost of the AI solution must be significantly less than the energy it saves through optimization. The urgency for energy-efficient architectures is therefore amplified, ensuring that AI remains a net positive solution for sustainability.
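The net-positive imperative above reduces to a simple inequality: an AI deployment is sustainable only if the energy it saves through optimization exceeds the energy it consumes to run. A minimal sketch, with purely hypothetical placeholder numbers:

```python
# The net-benefit condition from the paragraph above, as arithmetic.
# All numbers are hypothetical placeholders, purely for illustration.

def is_net_positive(energy_saved_kwh, ai_energy_cost_kwh, margin=1.0):
    """AI deployment is net positive only if savings exceed its own cost
    (margin > 1 demands savings by a safety factor)."""
    return energy_saved_kwh > margin * ai_energy_cost_kwh

# e.g. a grid optimizer that saves 5,000 kWh but consumes 800 kWh to run:
print(is_net_positive(5000, 800))  # True
```

Auditing deployments against this kind of condition is only possible with the transparent energy accounting discussed in Section I.C.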
C. Policy Divergence: Analyzing US and EU Regulatory Approaches to AI Sustainability
Effective governance is crucial for balancing AI’s benefits with sustainability efforts.3 However, policy approaches currently diverge significantly between global powers.
The U.S. National AI Research and Development Strategic Plan includes expanding AI-based research efforts led by the Department of Energy (DOE) and funding the National Science Foundation (NSF) to research the effects of AI on society, including data bias.8 Despite these foundational research efforts, the U.S. currently lacks comprehensive AI regulations, creating a policy vacuum regarding energy standards and mandatory transparency.3
In contrast, the European Union has adopted a strategic approach that explicitly links digital sovereignty with sustainability. The European roadmap details a vision for a Sustainable and Energy-Efficient European Computing Continuum.12 This initiative aims to build an interconnected, federated ecosystem where European cloud and edge service providers collaborate to deliver secure, high-performance, and energy-efficient data processing capabilities.12 By integrating sustainability and energy efficiency as core architectural pillars, the EU treats resource management as a geopolitical necessity, mitigating reliance on potentially expensive or volatile external energy resources and inefficient hardware, thereby bolstering digital autonomy.
III. Architectural Disruption: The Dawn of Sustainable Cognition (Neuromorphic & Green AI)
The sheer scale of the energy obstacle requires a foundational shift in computational architecture, moving past the constraints of conventional von Neumann systems toward bio-inspired designs that mimic the brain’s massive efficiency. This paradigm shift is encapsulated by neuromorphic computing and systemic Green AI principles.
A. Neuromorphic Computing: A Brain-Inspired Blueprint for Efficiency
Neuromorphic computing aims to design and build computer systems, including both hardware and software, that perform cognitive tasks more efficiently by emulating the structure and function of neurons and synapses in the brain.7
A primary source of conventional AI inefficiency stems from its continuous operation. Neuromorphic architectures address this through event-driven processing, where artificial neurons crunch data only as events unfold, rather than constantly running.1 This sparse, asynchronous method drastically reduces wasted compute cycles and energy consumption.1 This principle of sparsity, where only necessary circuits are activated, is already inspiring contemporary chip design, with recent smartphones incorporating such neuromorphic ideas to increase speed and decrease power usage.1
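The event-driven principle can be illustrated with a toy spiking-neuron loop: the neuron's state is updated only when an input event arrives, so the work performed scales with activity rather than with wall-clock ticks. This is an illustrative sketch, not a real neuromorphic API; the function name and parameters are invented for the example.

```python
# Toy model of event-driven (spiking) processing: computation happens
# only when an input event arrives, so compute scales with the number
# of events, not with elapsed time steps.

def run_event_driven(events, threshold=1.0, leak=0.9):
    """events: list of (timestamp, weight) input spikes, sorted by time."""
    potential = 0.0
    last_t = 0
    output_spikes = []
    ops = 0  # count of update operations actually performed
    for t, w in events:
        potential *= leak ** (t - last_t)  # decay applied lazily, only on events
        potential += w
        last_t = t
        ops += 1
        if potential >= threshold:
            output_spikes.append(t)   # neuron fires
            potential = 0.0           # reset after firing
    return output_spikes, ops

# Sparse input: 4 events spread over 900 time steps -> only 4 updates,
# where a clock-driven simulation would perform one update per tick.
spikes, ops = run_event_driven([(10, 0.5), (12, 0.6), (500, 0.4), (900, 0.7)])
print(spikes, ops)
```

The "lazy decay" trick (applying the leak only when an event arrives) is the software analogue of the hardware sparsity described above: silent circuits cost nothing.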
B. Hardware Innovations: In-Memory Computing, Memristors, and Efficiency Gains
Conventional AI systems operate on the established von Neumann architecture, which keeps memory (storage) and processing (CPU/GPU) components separate. This forces the system to consume energy moving data between memory and the processor at every step of computation, a constraint known as the notorious von Neumann bottleneck.1
Neuromorphic chips fundamentally circumvent this bottleneck through in-memory computing and a parallel architecture.1 These systems process data in the same physical location as the memory. This is often achieved using a device called a memristor (a portmanteau of “memory” and “resistor”), which acts as both a storage and processing component, eliminating the energy cost associated with data transfer.1
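Conceptually, a crossbar of memristive cells performs a matrix-vector multiply in place: each cell stores a conductance, an input voltage is applied per column, and Ohm's law per cell plus Kirchhoff's current law per row sum the products where the weights live, with no weight movement. The sketch below is an idealized digital stand-in for that analog operation, ignoring device noise and non-linearity.

```python
# Conceptual model of a memristor crossbar: each cell stores a conductance
# G[i][j]; applying input voltages V[j] to the columns yields row currents
# I[i] = sum_j G[i][j] * V[j]. The matrix-vector product happens where the
# weights are stored. This is an idealized toy, not a device model.

def crossbar_mvm(G, V):
    """Analog-style in-memory matrix-vector multiply: current out of each row."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

G = [[0.1, 0.2],   # conductances (weights), resident in the array itself
     [0.3, 0.4]]
V = [1.0, 2.0]     # input voltages (activations)
print(crossbar_mvm(G, V))  # approximately [0.5, 1.1]
```

Because the multiply-accumulate is a physical consequence of the circuit rather than a sequence of fetch-compute-store steps, the per-operation energy cost of data movement disappears.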
This architectural improvement translates into quantifiable efficiency gains. Neuromorphic hardware has demonstrated that it can be between 3 to 100 times more energy efficient per inference compared to conventional systems, particularly when operating at smaller batch sizes.7
C. Current Performance and Market Hurdles: Assessing Integration Challenges
Despite the demonstrable theoretical and early-stage advantages, the path to mass market adoption for neuromorphic architectures faces significant hurdles.
The primary constraint is market inertia. Large companies have invested heavily in current, established AI architectures, and the energy savings offered by neuromorphic computing are currently not perceived as substantial enough to justify the massive investment required for a complete architectural shift.1 This creates a critical paradox: while the energy obstacle urgently requires a shift in architecture, the financial commitment to legacy hardware prevents the immediate adoption of the necessary solutions.
Furthermore, neuromorphic chips cannot simply “slide into” existing AI systems or large language models to instantly reduce their energy footprint.1 Integration requires novel hardware designs and specialized, efficient algorithms.1 Researchers also note that neuromorphic chips have not yet secured a “killer application”—a cognitive task that no other AI system can perform as well, which is necessary to drive widespread commercial demand.1
On the scientific frontier, a key challenge is the practical replication of biological plasticity, which allows the brain’s synapses to change and adapt. Though an active area of research, a practical neural model that features plastic synapses remains elusive.1
D. Future Trajectories: Continuous Learning, Robotics, and Defense
The innate properties of neuromorphic computing make it ideally suited for specialized cognitive systems. By coupling memory and processing, the approach facilitates continuous learning (also known as lifelong learning), where past experiences dynamically shape the absorption of new information—a model fundamentally superior to traditional machine learning paradigms that often discard learned knowledge as they move to new tasks.1
Research efforts are active in applying these systems to niche, bio-inspired applications, such as artificial retinas and cochleas for image and sound recognition systems.1 Further work is underway to develop sensing devices for use in robotics.1 The architecture is also uniquely positioned to model complex cognitive structures, such as the connections among the tens of billions of neurons in the brain.1 Given the strategic importance of energy-efficient cognition, organizations like Sandia National Laboratories are investigating how these chips can bolster national defense systems.1
Crucially, the measured efficiency gains of 3x to 100x are highly pronounced at batch size 1 inference, but this advantage diminishes for larger batch sizes.7 This performance characteristic suggests that the true future of sustainable cognition resides not in centralized, massive data centers processing generalized LLM tasks in large batches, but rather in decentralized, specialized, low-latency cognitive processing units deployed at the “edge” of the network (e.g., in robotics, sensors, and remote systems).
IV. Case Study: Deconstructing “The Artemis Solution” as a Model for Green AI
The “Artemis Solution” does not refer to a single proprietary technology but rather serves as a conceptual umbrella for several disparate projects that share a fundamental commitment to systemic resource optimization and energy awareness across multiple layers of technology. This fragmentation, when synthesized, illustrates the future requirement for a holistic, Green AI architecture.
Table 2: Deconstructing “The Artemis Solution”: A Multi-Layered Approach to Green AI
| Project/Framework Name | Domain Focus | Energy/Efficiency Contribution | Architectural Layer | Snippet Reference |
| --- | --- | --- | --- | --- |
| Green AI Meta-Architecture (“Artemis”) | Sustainable Resource Management / Circular Economy | 25% reduction in energy consumption during workflows.14 | Meta-Architecture (Energy-conscious modeling) | 14 |
| ARTEMIS-DA | Data Analysis / Complex Reasoning | Algorithmic optimization for multi-step reasoning, minimizing wasted compute.15 | Algorithmic/Software | 15 |
| Artemis (Law Enforcement) | Public Safety / Cloud Computing | Leverages cloud provider optimization (power, cooling, utilization, renewable mix).11 | Systems/Cloud Architecture | 11 |
| Artemis (UAS/RPAS) | Defense / Aviation (Unmanned Systems) | Focus on platform performance and efficiency, resulting in reduced fuel consumption.16 | Hardware/Platform Engineering | 16 |
A. Architectural Green AI: The Meta-Architecture for Resource Efficiency
One project utilizing the “Artemis” conceptual model proposes an energy-efficient Green AI meta-architecture designed explicitly to support circular economies and address the contemporary challenge of sustainable resource consumption.14 This multi-layered framework integrates state-of-the-art machine learning algorithms, energy-conscious computational models, and optimization techniques to facilitate decision-making for resource reuse and waste reduction.14
The findings from this architectural model are quantitative, demonstrating a notable 25 percent reduction in energy consumption during workflows compared to traditional methods.14 When tested on real-world datasets from lithium-ion battery recycling and urban waste management systems, the solution also achieved an 18 percent improvement in resource recovery efficiency and optimized logistics to reduce transportation emissions by 30 percent.14 This demonstrates a key connection: resource-efficient AI is necessary to facilitate sustainable resource flows globally, moving beyond merely lowering its own carbon footprint into strategic environmental utility.
B. ARTEMIS-DA: Software and Algorithmic Optimization
At the algorithmic layer, the ARTEMIS-DA framework offers a robust, scalable solution for multi-step data analysis and reasoning.15 This software structure combines the complex reasoning capabilities of LLMs with automated code generation, execution, and visual analysis.
The framework achieves state-of-the-art (SOTA) performance on intricate analytical benchmarks.15 While this specific project does not explicitly detail hardware power consumption, efficiency is achieved at the algorithmic layer: superior precision and adaptability in multi-step tasks minimize wasted computation cycles. This aligns with the “clean execution” principle, whereby sophisticated algorithmic strategy minimizes the computational resources consumed.
C. Systems-Level Efficiency: Artemis in Defense and Law Enforcement Applications
Other systems that bear the name “Artemis” focus on operational and infrastructural optimization. In law enforcement, the “Artemis solution” leverages AI, machine learning, and cloud computing to proactively and rapidly identify high-risk business establishments.11 In this context, the efficiency gain is infrastructural, relying on the inherent benefits of cloud deployment: superior power and cooling, and high server utilization rates backed by greater renewable energy mixes supplied by the cloud provider.11
Similarly, the Artemis Unmanned Aerial System (UAS) platform, developed for defense and aviation sectors, focuses on performance and efficiency, which translates directly into operational savings, such as more available fuel for receiving aircraft.16 This demonstrates that the architectural pursuit of efficiency extends beyond data centers and into physical platforms, requiring advanced hardware and systems engineering solutions.
D. Synthesis: How “Artemis” Epitomizes the Shift to Systemic Energy Awareness
Synthesizing these varied projects reveals that “The Artemis Solution” is the architectural realization that sustainable cognition requires optimization at every level: the underlying hardware platform (UAS), the infrastructural deployment environment (Cloud/Law Enforcement), and the specific execution algorithms (ARTEMIS-DA/Green AI Meta-Architecture).
This paradigm illustrates a definitive move away from isolated computational efficiency toward integrated, systemic resource management. The disparate uses of “Artemis” suggest that advanced efficiency is achieved through architectural abstraction and specialization, demanding a highly customized, federated approach to Green AI optimized for specific domains and constraints (like waste management or aerial operations), rather than seeking a singular, universal LLM architecture.
V. Global R&D and Policy Acceleration
Institutional and national efforts are actively defining the trajectory for energy-efficient cognitive systems, particularly through targeted R&D programs that focus on hardware-software co-design and regional digital strategies.
A. US Defense Innovation: DARPA’s Focus on Energy-Aware Machine Learning
The Defense Advanced Research Projects Agency (DARPA) recognizes that energy consumption represents a critical trade-off with computational performance, especially for national security applications.17 This strategic requirement has led to the launch of programs focused on “energy-aware machine learning.”
One key initiative is ML2P (Mapping Machine Learning to Physics).18 The program aims to develop objective functions that are both optimal and feasible, delivering the desired power-performance balance for a given application, task, and hardware platform.18 The fundamental goal of ML2P is to enable energy-aware machine learning construction by capturing the “energy semantics” of machine learning and enabling optimization of the resulting non-convex, energy-aware problem.18
The explicit objective to map machine learning to physics implies a critical recognition that the traditional separation between software and hardware is inefficient. The solution requires a co-design approach where algorithms are intrinsically aware of and optimized for the physical energy costs of their underlying hardware. This pursuit of efficiency at the deepest technical level strongly aligns with the principles of in-memory, event-driven neuromorphic architectures. DARPA utilizes flexible funding mechanisms, such as Other Transactions (OT) for Prototype with fixed payable milestones, to accelerate innovative research and transition energy-aware ML into specific edge applications.18
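One simple way to picture an "energy-aware" objective is a composite loss that folds a hardware-dependent energy estimate into the quantity being optimized, so accuracy is explicitly traded off against physical energy cost. The sketch below is a hedged illustration of that general idea, not DARPA's actual ML2P formulation; the per-MAC energy figure and the weighting are placeholder assumptions.

```python
# Sketch of an energy-aware training objective in the spirit of ML2P:
# combined objective = task loss + lambda * estimated energy cost.
# The energy model (constant joules per multiply-accumulate) and the
# lambda weight are placeholder assumptions, not DARPA's formulation.

ENERGY_PER_MAC_J = 4.6e-12  # assumed joules per multiply-accumulate op

def energy_estimate(num_macs):
    """Crude energy model: proportional to multiply-accumulate count."""
    return num_macs * ENERGY_PER_MAC_J

def energy_aware_objective(task_loss, num_macs, lam=10.0):
    """Composite objective: task loss plus weighted energy cost."""
    return task_loss + lam * energy_estimate(num_macs)

# A smaller model with slightly worse task loss can win on the
# combined objective once energy enters the optimization:
big   = energy_aware_objective(task_loss=0.10, num_macs=2e9)
small = energy_aware_objective(task_loss=0.12, num_macs=2e8)
print(big, small, small < big)
```

In a real co-design setting, `energy_estimate` would be replaced by a measured or modeled cost for the specific hardware, which is precisely the "mapping to physics" that makes the overall optimization problem non-convex.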
B. European Digital Sovereignty: The Cognitive Computing Continuum
In Europe, the strategic imperative is focused on creating a resilient, sovereign, and sustainable infrastructure. The vision for the European Cognitive Computing Continuum defines an interconnected, federated ecosystem designed to deliver secure, high-performance, and, most importantly, energy-efficient data processing capabilities.12
The roadmap for this continuum highlights several strategic research priorities, including investment in intelligent management of multi-provider systems, enabling data-driven innovation, and creating a Sustainable and Energy-Efficient European Computing Continuum.12 This initiative views sustainability not merely as an environmental goal but as a pillar of digital sovereignty, reducing systemic vulnerability to external energy supply and cost volatility. Proposals for this research are expected to develop synergies with existing programs, such as the Digital Europe Programme (DEP) and relevant Important Projects of Common European Interest (IPCEI), ensuring communicable results are shared across the European R&D community.20
C. Contrasting Policy and R&D Approaches
The strategic R&D efforts in the US and the EU reflect differing core priorities. DARPA’s ML2P focuses narrowly on optimizing the power-performance trade-off for mission utility and security.18 In contrast, the EU’s Continuum prioritizes building resilient, secure infrastructure that is inherently energy-efficient and ethically governed.12
This divergence suggests that the global path to sustainable cognition will be heterogeneous. US-funded efforts will likely yield highly efficient, high-performance models optimized for specific speed/power requirements at the edge, while EU-funded efforts will prioritize large-scale, decentralized, renewable-powered, and ethically governed infrastructure for broad deployment. Both approaches, however, converge on the need for decentralized computing and fundamental architectural change to overcome the current energy obstacle.
VI. Conclusion and Recommendations: Charting the Path to Sustainable Intelligence
The next era of artificial intelligence—Cognitive AI—cannot be sustained by the energy-intensive paradigms established by contemporary Large Language Models. The documented imbalance between the highly efficient 20W biological benchmark and the massive carbon footprint of modern machine learning demands a comprehensive architectural and policy overhaul. The solution lies in a systemic approach to energy awareness, best exemplified by the multi-layered principles of Green AI and the focused innovation of neuromorphic computing.
The concept of “The Artemis Solution” crystallizes this necessity, illustrating that optimization must occur at all levels: the algorithmic (ARTEMIS-DA), the systemic (cloud-based optimization), and the meta-architectural (Green AI for circular economies, showing a 25% energy reduction).14 Furthermore, the future of scalable efficiency is moving toward event-driven, in-memory neuromorphic computing, which offers dramatic efficiency gains, particularly in edge applications.
To successfully navigate the energy obstacle and unlock the potential of true Cognitive AI, the following strategic recommendations are essential for industry, academia, and policy makers:
A. Strategic Recommendations for Future AI Development
Recommendation 1: Accelerate Neuromorphic Integration and Application Development
Investment must be strategically focused on overcoming the integration challenge and identifying the “killer applications” for neuromorphic computing, such as robotics, advanced sensing, or modeling the brain’s plasticity.1 Overcoming the market inertia caused by sunk capital in legacy architectures will require significant capital allocation to reduce unit cost and demonstrate superior, specialized efficiency that conventional systems cannot match.
Recommendation 2: Mandate Energy-Aware ML and Hardware-Software Co-Design
Policy frameworks, modeled on strategic initiatives like DARPA’s ML2P 18, should be adopted globally to require energy consumption metrics to be integrated into the core objective functions of AI models. This mandate would drive foundational shifts in how software is developed, enforcing an intrinsic awareness of physical energy costs and compelling the industry toward efficient co-design principles.
Recommendation 3: Prioritize Clean Data Governance and Clean Power Sourcing
Sustainable cognition is impossible without the dual mandate. Policy must enforce rigorous governance to ensure data reliability and ethics, thereby reducing the need for costly and energy-intensive retraining cycles caused by bias or poor quality.8 Simultaneously, regulatory efforts must aggressively align AI data center growth with mandatory clean power sourcing integration, leveraging the high efficiency and renewable energy mixes offered by centralized cloud providers and decentralized edge systems.9
The path to sustainable intelligence is clear: technological advancement must be guided by the principles of radical efficiency, ecological accountability, and comprehensive architectural awareness.
VII. Detailed Reference List
The following research and technical documents formed the foundation of this analysis:
| Citation ID | Author/Source | Title/Description | URL/Context |
| --- | --- | --- | --- |
| 2 | George, B. | The Economics of Energy Efficiency: Human Cognition Vs. AI Large Language Models | ResearchGate, Comparative analysis of energy requirements and economic trade-offs. |
| 4 | George, B. | AI Large Language Models (Keywords/Abstract) | ECOFORUM Journal, Highlights efficiency and neuromorphic computing. |
| 5 | N.A. (PIIE Blog) | AI’s Carbon Footprint Appears Likely to Be Alarming | Quantifies LLM training carbon footprint (626,000 lbs $\text{CO}_2$). |
| 6 | N.A. (Marmelab Blog) | AI Carbon Footprint Analysis | Discusses difficulty in measurement, LLM inference consumption (0.3 Wh – 3 Wh), and data center energy usage (23 TWh in 2022). |
| 15 | N.A. (ArXiV Preprint) | ARTEMIS-DA Framework | Describes the ARTEMIS-DA framework for multi-step analytical tasks using LLMs and code generation. |
| 14 | N.A. (ArXiV Preprint) | Energy-Efficient Green AI Architecture | Proposes a Green AI meta-architecture for circular economies, demonstrating 25% energy reduction. |
| 1 | Aimone, B., Delbruck, T. (PNAS) | Neuromorphic computing challenges and prospects | Discusses hurdles: plasticity, killer app absence, and integration difficulties for neuromorphic chips. |
| 7 | N.A. (ArXiV Preprint) | Neuromorphic Systems Integration for Sustainable AI | Quantifies neuromorphic efficiency (3x to 100x more efficient per inference). |
| 20 | N.A. (Euro-Access) | Empowering AI/Generative AI along the Cognitive Computing continuum | EU R&D call for synergies with DEP/IPCEI initiatives. |
| 11 | N.A. (Accenture Strategy) | The Green Behind the Cloud POV | Mentions the “Artemis solution” for law enforcement, leveraging cloud efficiency benefits (power, cooling, renewables). |
| 16 | Comtois, J. (L3 MAS) | Battlespace Update Vol. 20, Issue 23 | Mentions the IAI’s Artemis UAS and its focus on efficiency and economic benefits to Canada. |
| 3 | Wang, C. (Yale Clean Energy Forum) | The Power of AI in Clean Energy | Discusses AI’s role in optimizing energy (30-50% reduction potential) and the need for governance regarding its own footprint (“clean data” and “clean power”). |
| 8 | N.A. (European Parliament) | AI Training Data, Ethics, and US R&D | Discusses the necessity of “clean data,” data bias, and US DOE/NSF focus on AI effects. |
| 16 | N.A. (Battle Updates) | Battlespace Update Vol. 20, Issue 23 | Further detail on the Artemis UAS platform focus on performance and efficiency. |
| 18 | N.A. (DARPA) | ML2P: Mapping Machine Learning to Physics | Describes DARPA’s program for energy-aware ML construction and optimizing power-performance trade-offs. |
| 17 | N.A. (DARPA News) | Energy-aware machine learning | News announcement regarding the program aimed at balancing energy usage and performance. |
| 9 | N.A. (Rinnovabili.it Report) | AI for Energy Management | Emphasizes “clean data,” forecasting energy models, and integrating AI adoption with clean power sourcing. |
| 12 | N.A. (EU Cloud Edge IoT) | Executive summary_English | Strategic vision for the European Cognitive Computing Continuum, prioritizing a “Sustainable and Energy-Efficient European Computing Continuum.” |
| 1 | N.A. (PNAS) | Neuromorphic Computing Detailed Mechanisms | Detailed explanation of neuromorphic efficiency (20W biological benchmark, event-driven processing, memristors, sparsity) and market hurdles. |
Works cited
Empowering AI/generative AI along the Cognitive Computing continuum (AI/Data/Robotics Partnership) – Search for Funding – EuroAccess – Fördermittelsuche der EuroVienna EU-consulting & -management GmbH, accessed November 13, 2025, https://www.euro-access.eu/en/calls/2204/Empowering-AIgenerative-AI-along-the-Cognitive-Computing-continuum-AIDataRobotics-Partnership
Can neuromorphic computing help reduce AI’s high energy cost …, accessed November 13, 2025, https://www.pnas.org/doi/10.1073/pnas.2528654122
(PDF) The economics of energy efficiency: human cognition Vs. AI …, accessed November 13, 2025, https://www.researchgate.net/publication/392200925_The_economics_of_energy_efficiency_human_cognition_Vs_AI_Large_Language_Models
The Power of AI in Clean Energy: Transforming Sustainability for the Future, accessed November 13, 2025, https://cleanenergyforum.yale.edu/2025/02/19/the-power-of-ai-in-clean-energy-transforming-sustainability-for-the-future
THE ECONOMICS OF ENERGY EFFICIENCY: HUMAN COGNITION VS. AI LARGE LANGUAGE MODELS | Ecoforum Journal, accessed November 13, 2025, https://ecoforumjournal.ro/index.php/eco/article/view/2770
AI’s carbon footprint appears likely to be alarming | PIIE, accessed November 13, 2025, https://www.piie.com/blogs/realtime-economics/2024/ais-carbon-footprint-appears-likely-be-alarming
AI’s Environmental Impact: Making an Informed Choice – Marmelab, accessed November 13, 2025, https://marmelab.com/blog/2025/03/19/ai-carbon-footprint.html
Neuromorphic hardware for sustainable AI data centers – arXiv, accessed November 13, 2025, https://arxiv.org/html/2402.02521v2
The ethics of artificial intelligence: Issues and initiatives – European Parliament, accessed November 13, 2025, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf
Artificial intelligence for energy management – Rinnovabili, accessed November 13, 2025, https://www.rinnovabili.it/wp-content/uploads/2025/09/AI-for-energy-management-report-EN.pdf
Assessing the Multiple Benefits of Clean Energy A Resource For States – epa nepis, accessed November 13, 2025, https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P100FLQ9.TXT
The Green Behind the Cloud | Accenture, accessed November 13, 2025, https://aecconsultoras.com/wp-content/uploads/2020/10/Accenture-Strategy-Green-Behind-Cloud-POV.pdf
Executive summary | EUCloudEdgeIOT, accessed November 13, 2025, https://eucloudedgeiot.eu/wp-content/uploads/2025/03/Executive-summary_English.pdf
Horizon Europe Work Programme 2025, 7. Digital, Industry and Space – Research and innovation, accessed November 13, 2025, https://research-and-innovation.ec.europa.eu/document/download/6a5f3b9a-9a7c-4ec9-8e81-22381f5a9d11_en
[2506.12262] Energy-Efficient Green AI Architectures for Circular Economies Through Multi-Layered Sustainable Resource Optimization Framework – arXiv, accessed November 13, 2025, https://www.arxiv.org/abs/2506.12262
ARTEMIS-DA: An Advanced Reasoning and Transformation Engine for Multi-Step Insight Synthesis in Data Analytics – arXiv, accessed November 13, 2025, https://arxiv.org/html/2412.14146v3
BATTLESPACE UPDATE Vol.20 ISSUE 23, accessed November 13, 2025, https://battle-updates.com/update/battlespace-update-vol-20-issue-23/
News | DARPA, accessed November 13, 2025, https://www.darpa.mil/news
ML2P: Mapping Machine Learning to Physics – DARPA, accessed November 13, 2025, https://www.darpa.mil/research/programs/mapping-machine-learning-physics
Medics Autonomously Stopping Hemorrhage (MASH) FAQs – DARPA, accessed November 13, 2025, https://www.darpa.mil/sites/default/files/attachment/2025-10/darpa-program-mash-faq.pdf
About Alan Scott Encinas
I design and scale intelligent systems across cognitive AI, autonomous technologies, and defense. Writing on what I’ve built, what I’ve learned, and what actually works.