Jackson Cionek

Transformers and Virtual Short-Channels in fNIRS


AI cleaning the brain signal and retelling the hemodynamic story
(Consciência em Primeira Pessoa • Neurociência Decolonial • Brain Bee • The Feeling and Knowing Taá)


The Feeling and Knowing Taá — when I stop trusting my own data

I look at a beautiful fNIRS plot:
O₂-Hb goes up, HHb goes down, p-values are small.
On the surface, everything says: “this is cognition”.

But my body knows something else.

I remember the heart beating in the finger clip,
the breathing getting shallow with anxiety,
the skin vessels dilating with heat,
the headband pressing the scalp in a way that changes blood flow.

Taá — the feeling-before-knowing — whispers:

“Are you really seeing cortex…
or just the skin pretending to be cortex?”

This is the tension that a 2025 fNIRS methods paper in NeuroImage (one of those you listed) confronts head-on:

using transformer-based deep learning to create virtual short-channels and regress superficial physiology out of fNIRS signals when real short-channels are missing or limited.

In other words:

  • when my hardware is not ideal,

  • can software help me rescue a more honest hemodynamic story?

And right here, another Taá appears:

if AI can rewrite my signal, who is telling the truth —
the body, the algorithm, or the colonial habit of wanting “clean numbers” at any cost?


The study in plain terms: what are these “virtual short-channels”?

Classic fNIRS wisdom says:

  • short-channels (very small source–detector distances) pick up mostly superficial blood flow (skin, scalp);

  • long-channels (standard distances) contain a mix of cortical and superficial components.

The ideal pipeline:

  1. Measure both long- and short-channels.

  2. Use a GLM to regress the superficial component (estimated from the short-channels) out of the long-channels.

  3. Keep the “cortical” residual for interpretation.
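A minimal sketch of steps 2 and 3 on synthetic data. All signal names, frequencies, and amplitudes here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 300, 0.1)  # 300 s of recording at 10 Hz

# Hypothetical components: a slow superficial (scalp) oscillation
# and a block-design cortical response (20 s on, 40 s off)
superficial = 0.8 * np.sin(2 * np.pi * 0.1 * t)
cortical = np.where((t % 60) < 20, 1.0, 0.0)

short_ch = superficial + 0.05 * rng.standard_normal(t.size)
long_ch = cortical + superficial + 0.05 * rng.standard_normal(t.size)

# GLM-style regression: fit short-channel (plus intercept) to the long channel
X = np.column_stack([short_ch, np.ones_like(t)])
beta, *_ = np.linalg.lstsq(X, long_ch, rcond=None)

# The residual is the "cortical" estimate kept for interpretation
cortical_est = long_ch - X @ beta
```

With this toy setup, the residual tracks the cortical block design much more closely than the raw long channel does, which is exactly the point of the short-channel regression step.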

But in real life — especially in Latin America, in public hospitals, schools, and communities — we often have:

  • few optodes,

  • no dedicated short-channels,

  • low-cost devices,

  • data collected “in the wild”.

The article asks:

Can we train a transformer network to infer what a short-channel would see, and then use this “virtual short-channel” to clean long-channel signals?
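What it means, mechanically, for a transformer to map long-channel input to a short-channel-like output can be sketched with a single self-attention head in NumPy. Everything here (weight shapes, the `W_out` projection, random weights standing in for trained ones) is a hypothetical illustration of the architecture family, not the paper's model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(seq, Wq, Wk, Wv):
    """Single-head self-attention over a (time, features) sequence."""
    Q, K, V = seq @ Wq, seq @ Wk, seq @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (time, time) similarity
    return softmax(scores, axis=-1) @ V      # each time point mixes all others

rng = np.random.default_rng(1)
T, d = 50, 8  # 50 time points, 8 long-channel features
long_channels = rng.standard_normal((T, d))

# Hypothetical weights; in the real model these would come from training
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
W_out = 0.1 * rng.standard_normal((d, 1))  # projects to one virtual short-channel

virtual_short = self_attention(long_channels, Wq, Wk, Wv) @ W_out  # (T, 1)
```

The design point: attention lets every time point draw on the whole recording when estimating the superficial signal, which is what distinguishes this family of models from purely local filters.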


Methods and analysis — for Brain Bee and young researchers

The core ingredients look like this:

  • fNIRS recordings with real long- and short-channels

    • prefrontal and other cortical areas

    • tasks that modulate both cognition and systemic physiology

  • Preprocessing

    • motion correction

    • band-pass filtering

    • separation of O₂-Hb and HHb time series

  • GLM (General Linear Model)

    • used as ground truth to model how real short-channels improve cortical estimates

    • canonical and subject-specific HRF (Hemodynamic Response Function) tested

  • Virtual short-channels via deep learning

    • a transformer architecture trained to predict what a short-channel would look like

    • input: long-channel signals and possibly auxiliary signals

    • output: a synthetic “short-channel-like” time course

  • Signal cleaning

    • the virtual short-channel is then used as a regressor (like a real short-channel)

    • GLM and ICA/PCA are combined to remove global physiology and structured noise

  • Evaluation

    • comparison between:

      • long-channel only

      • long + real short-channel

      • long + virtual short-channel

    • metrics: variance explained, test–retest reliability, sensitivity to task vs. noise
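The three-way comparison in the evaluation can be mocked up on synthetic data, using variance explained by the task regressor as the metric; the "virtual" channel is simulated here as a noisier copy of the superficial signal, an assumption standing in for the transformer's output:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 300, 0.1)
task = np.where((t % 60) < 20, 1.0, 0.0)           # block-design regressor
superficial = 0.8 * np.sin(2 * np.pi * 0.1 * t)    # hypothetical scalp signal

long_ch = task + superficial + 0.05 * rng.standard_normal(t.size)
real_short = superficial + 0.05 * rng.standard_normal(t.size)
virtual_short = superficial + 0.2 * rng.standard_normal(t.size)  # noisier estimate

def clean(signal, nuisance):
    """Regress a nuisance channel (plus intercept) out of a signal."""
    X = np.column_stack([nuisance, np.ones_like(nuisance)])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return signal - X @ beta

def task_r2(signal, task):
    """Fraction of variance in `signal` explained by the task regressor."""
    X = np.column_stack([task, np.ones_like(task)])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    resid = signal - X @ beta
    return 1 - resid.var() / signal.var()

r2_long_only = task_r2(long_ch, task)
r2_virtual = task_r2(clean(long_ch, virtual_short), task)
r2_real = task_r2(clean(long_ch, real_short), task)
```

Under these assumptions the ordering comes out as the paper's narrative predicts: long-only explains the least task variance, long + virtual short-channel recovers most of it, and long + real short-channel remains the gold standard.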

Keywords you want to shine for search engines:

“fNIRS virtual short-channels transformer deep learning GLM HRF systemic physiology noise regression NeuroImage 2025”


What they found: when AI behaves like an honest collaborator

Main message:

  • Virtual short-channels approximate real short-channels surprisingly well,

  • especially in how they help remove low-frequency global oscillations and superficial noise.

After regression using the virtual channels:

  • task-related responses become cleaner and more focal,

  • test–retest reliability improves,

  • cortical patterns look more like the “gold standard” (long + real short-channels).

But there are limits:

  • performance depends on the quality and diversity of the training data,

  • the virtual signal can still “hallucinate” structure if the model is poorly trained,

  • overfitting to a particular population or device is a real risk.

So we get a powerful message:

AI can help us — but only if we respect what the body is really showing.


Reading this with our concepts — beyond colonial AI

From our side of the world, I don’t see only “signal cleaning”.
I see a dispute over who has a voice over the body.

1. The Damasian Mind and layers of flow

Interoception + proprioception are lived hemodynamics.
Superficial vs. cortical is not just “noise vs. signal”:

  • superficial flow is the autonomic system negotiating with the world;

  • cortical flow is the integration of that negotiation in the form of consciousness.

The transformer is asked to separate these two layers.
Without Taá, we risk treating autonomic life as “artifact”.

2. Quorum Sensing Humano (QSH)

Superficial physiology carries the body’s Quorum:

  • heart rate, breathing, sweating, vascular tone.

If AI erases all of this, we may:

  • gain statistical power,

  • but lose the collective metabolic intelligence we call QSH.

3. Zone 1 / 2 / 3

  • Zone 1: the automatic pipeline, “I ran the data through the standard script”.

  • Zone 2: I stop, feel Taá, check whether the cleaning makes sense, and dialogue with the body and with the statistics.

  • Zone 3: the ideology of “perfect data”, where any method that produces p < 0.05 is accepted without criticism.

Transformers can become Zone 2 tools —
or Zone 3 weapons.

4. Yãy hã mĩy (Maxakali origin) — imitating in order to understand

In its original Maxakali sense, Yãy hã mĩy means imitating the animal before hunting it.
Here, the transformer “imitates” the short-channel:

  • it observes many examples,

  • learns the pattern,

  • tries to be that signal.

If we do this in Zone 2, the model becomes an ally:
it helps us perceive what we lack the hardware to measure.

If we stay in Zone 3, the model becomes a mask:
it imitates so well that we forget it is an imitation.

5. Reference Avatars

In this blog, the framing that helps me most is the Math/Hep avatar:

  • a careful eye on statistics,

  • explained variance,

  • generalization errors,

  • honesty about the model’s limits.

But Math/Hep, alone, can be colonized by the fetish of the “best model”.
That is why I let DANA whisper in the background:

DNA, physiology, and the Corpo Território (Body-Territory) also have the right to appear in the data.
Not everything “non-cortical” is disposable.


Decolonizing AI in fNIRS: where the paper adjusts our ideas

Before, we might have said:

“If I don’t have short-channels, my data is useless.”

Now, the transformer approach shows:

  • it is possible to recover a great deal of value from imperfect data,

  • it is possible to correct technological inequalities (not every laboratory has the best equipment).

But the same technique can reinforce inequalities if:

  • models are trained only on WEIRD populations,

  • and then applied, uncritically, to Indigenous children, elderly people on the urban periphery, or patients in the SUS public health system.

Decolonial Neuroscience here means:

  • demanding that models be trained with biological and cultural diversity,

  • documenting the limits of generalization,

  • preventing the “cleaning” from erasing precisely what makes our Latin American bodies singular.


Implications for education, health, and policy in LATAM

  • Education

    • Brain Bee and undergraduate research programs can include modules on:

      • “how AI cleans signals”

      • risks of overfitting

      • ethics of physiological data

  • Public health

    • Hospitals can use low-cost fNIRS with virtual short-channel models,
      provided that:

      • pipelines are open and auditable,

      • limits are made explicit,

      • local training is prioritized.

  • Data policy

    • Regulation must guarantee that:

      • hemodynamic data are not used for surveillance or exclusionary selection,

      • algorithms are considered part of the “medical/scientific act” and are therefore subject to scrutiny.


Search keywords (to help people find this reading of the paper)

“virtual short-channels transformer fNIRS deep learning GLM HRF systemic physiology noise regression NeuroImage 2025 BrainLatam2026 decolonial neuroscience Taá Quorum Sensing Humano”






Jackson Cionek

New perspectives in translational control: from neurodegenerative diseases to glioblastoma | Brain States