
Explainable AI in Air Traffic Control: When Trust Depends on Experience and Mental Load

The study by Cartocci and colleagues (2026), published in Brain Informatics on January 24, 2026, examines an increasingly vital question in critical systems: how air traffic control professionals respond to Explainable Artificial Intelligence (XAI). Its focus is how user expertise shapes three central dimensions: workload, AI acceptance, and usage intention.

The most relevant point here is not just whether the AI "works," but whether it can be accepted, understood, and incorporated by humans operating in high-responsibility environments. In air traffic control, this is decisive: it is not enough for the machine to be right; it must provide useful explanations for those under temporal and cognitive pressure. The article's framework stems from this need for explainability to sustain trust and use in a safety-critical domain.

What the Study Showed

Based on the title and indexed abstract, the study investigated the effects of expertise on workload, acceptance, and usage intentions regarding an XAI solution in air traffic control. In other words, the work did not just deal with the technology itself, but with the encounter between technology, professional experience, and cognitive cost.

This is important because, in complex systems, the explanation is not a visual detail or a "pretty" layer on top of the algorithm. The explanation is part of the decision interface itself. Depending on the operator's level of experience, the same explanation can be perceived as help, information overload, or even noise. This is the article's most fertile contribution: showing that useful explainability is not universal, but depends on the type of user receiving the recommendation. This reading is supported by the study's explicit focus on expertise, workload, acceptance, and usage intention.

A Decolonial Neuroscience Perspective

Through the lens of Decolonial Neuroscience, this article helps dismantle a common technocratic fantasy: the idea that simply inserting AI into a complex system will automatically improve human decision-making. No. Between the algorithmic recommendation and human action, there is a body, a training history, a situated perception of risk, and a concrete cognitive ecology.

Decision-making in air traffic control is not purely logical. It involves sustained attention, prediction, context reading, working memory, trust, and a sense of responsibility. In this sense, explainability needs to speak to the operator's embodied mind, not just the mathematical model. The study reinforces this by placing expertise as the central variable of acceptance and use.

Here, the "Damasian Mind" enters productively: deciding well in high-pressure contexts depends on a brain-body that can integrate signals, reduce ambiguity, and act without collapsing into overload. An explainable AI will only be truly useful if it aids this integration rather than competing with it.

Interpretive Avatars: Jiwasa and APUS

In this analysis, I draw on two avatars, with Jiwasa predominating.

Jiwasa: Because air traffic control is a coordination system between multiple human and artificial intelligences. It is not an isolated brain deciding alone; it is an ecology of synchronizations.

APUS: This also appears because the operator needs to maintain a type of extended proprioception of the aerial territory. Even without physically touching the planes, they need to feel positions, trajectories, potential conflicts, and safety margins as if cognitively distributed across that space.

Under this reading, explainable AI should not just be an "oracle." It should function as a coordinative extension of the controller's cognitive body-territory.

Connection with Tensional Selves and Zones 1, 2, and 3

This study resonates strongly with the Tensional Selves.

In Zone 1: The professional sustains the functional "Work-Self": monitoring, comparing, correcting, prioritizing, and deciding.

In Zone 2: Interaction with AI can become fluid: the explanation serves as support, reducing cognitive friction and expanding operational clarity.

In Zone 3: Overload, low trust, or poorly adjusted explanations can hijack the operator's attention. Instead of supporting, the AI begins to generate rigidity, doubt, or dysfunctional dependency.

The great value of the article lies precisely in showing that explainability must be thought of as cognitive load regulation rather than just abstract transparency. In critical systems, a bad explanation can increase workload even when the algorithmic recommendation is correct. This inference follows directly from the study's focus on workload, expertise, and acceptance.

Citizen DREX and Organic Politics

The connection with "Citizen DREX" might seem distant at first, but it isn't. The article shows that support technologies only work well when they respect human metabolic and cognitive limits. This also applies to society.

A population living under extreme insecurity, forced multitasking, and chronic overload tends to operate more in a defensive Zone 1 or Zone 3, with less room for critical thinking. Citizen DREX, understood as the guaranteed minimum metabolism of the social body, would create the conditions for humans to interact better with complex systems—including AI—without living in permanent exhaustion.

Just as an air traffic controller needs an adequate interface to decide well, the citizen also needs a minimum metabolic base so as not to be crushed by overload and institutional opacity.

New Questions for BrainLatam

1. Does the ideal explainability change depending on whether the operator is under higher or lower physiological load?
2. Could EEG, HRV, respiration, and SpO2 measurements indicate when an explanation helps or hinders? (A minimal index sketch follows this list.)
3. Do experienced operators and novices use different neural circuits when evaluating AI recommendations?
4. Is there a point where more explanation stops helping and starts increasing cognitive noise?
5. In cooperative contexts, does explainable AI improve synchronization between human operators?
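
To make questions 1 and 2 concrete, here is a minimal sketch of a composite physiological workload score, assuming frontal theta and parietal alpha EEG power plus RMSSD heart-rate variability as inputs; the function name, weights, and baseline value are hypothetical illustrations, not measures taken from the Cartocci study.

```python
def workload_index(theta_power, alpha_power, rmssd_ms,
                   rmssd_baseline_ms=42.0):
    """Crude mental-workload score in [0, 1].

    Frontal theta power tends to rise and parietal alpha power tends
    to fall under cognitive load, while RMSSD (heart-rate variability)
    tends to drop. The weights and baseline below are illustrative
    placeholders, not values from the study.
    """
    eeg_load = theta_power / (theta_power + alpha_power)     # 0..1
    hrv_load = 1.0 - min(rmssd_ms / rmssd_baseline_ms, 1.0)  # 0..1
    return 0.6 * eeg_load + 0.4 * hrv_load

# High theta, suppressed alpha, reduced HRV -> elevated load (~0.56)
print(round(workload_index(theta_power=8.0, alpha_power=4.0,
                           rmssd_ms=25.0), 2))
```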

Possible Experimental Designs

A strong BrainLatam design would combine EEG + HRV + eye tracking + operational performance in air traffic control simulators, comparing different explanation formats for novices and experts.
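
As a sketch of what such a protocol could look like as data, the condition grid below crosses expertise with explanation format; the factor levels and measure names are hypothetical placeholders, not the study's actual conditions.

```python
from itertools import product

# Hypothetical factor levels: placeholders, not the paper's design.
EXPERTISE = ["novice", "expert"]                          # between-subjects
EXPLANATION = ["none", "feature-based", "example-based"]  # within-subjects
MEASURES = ["EEG", "HRV", "eye_tracking", "task_performance"]

def build_conditions():
    """Enumerate every expertise x explanation-format cell of the design."""
    return [{"expertise": e, "explanation": x, "measures": MEASURES}
            for e, x in product(EXPERTISE, EXPLANATION)]

for cell in build_conditions():
    print(cell["expertise"], "/", cell["explanation"])
```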

Another path would be testing adaptive explanations, where the AI changes the level of detail based on physiological signals of mental load.
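
A minimal sketch of such an adaptive policy, reusing the hypothetical workload score from the earlier sketch; the thresholds and verbosity levels are assumptions, not anything the study implemented.

```python
def explanation_detail(load, low=0.35, high=0.65):
    """Map a workload score in [0, 1] to an explanation verbosity level.

    Hypothetical thresholds: under low load the operator can absorb a
    full rationale; under high load the system falls back to a terse,
    action-oriented cue so the explanation itself does not add noise.
    """
    if load < low:
        return "full"     # complete rationale with feature attributions
    if load < high:
        return "summary"  # one-line justification
    return "minimal"      # recommendation plus confidence only

for load in (0.2, 0.5, 0.8):
    print(load, "->", explanation_detail(load))
```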

It would also be promising to study teams rather than just individuals, to test whether explainable AI improves or worsens collective coordination in technical tasks.

BrainLatam Conclusion

The article by Cartocci and colleagues shows something essential: good AI is not just accurate AI; it is AI that tunes into the situated human mind. In critical domains like air traffic control, explainability is not a luxury—it is a condition for trust, real-world use, and safety.

In a decolonial reading, this means abandoning the fantasy of neutral technology and recognizing that every interface needs to respect the body, experience, territory, and cognitive load. Good AI does not replace the human; it must learn to coordinate with them.

Reference

Cartocci, G., Veyrié, A., Cavagnetto, N., Hurter, C., Degas, A., Ferreira, A., Ahmed, M. U., Begum, S., Barua, S., Inguscio, B. M. S., Ronca, V., Borghini, G., Di Flumeri, G., Babiloni, F., & Aricò, P. (2026). Explainable artificial intelligence in air traffic control: effects of expertise on workload, acceptance, and usage intentions. Brain Informatics, 13(1), 6. https://doi.org/10.1186/s40708-025-00287-6

#eegmicrostates #neurogliainteractions #eegnirsapplications #physiologyandbehavior #neurophilosophy #translationalneuroscience #bienestarwellnessbemestar #neuropolitics #sentienceconsciousness #metacognitionmindsetpremeditation #culturalneuroscience #agingmaturityinnocence #affectivecomputing #languageprocessing #humanking #fruición #wellbeing #neurorights #neuroeconomics #neuromarketing #religare #skill-implicit-learning #semiotics #encodingofwords #meaning #semioticsofaction #mineraçãodedados #soberanianational #mercenáriosdamonetização
Jackson Cionek