
AI Safety Application

Emotional Intelligence Infrastructure for Safer AI

AI safety systems classify human emotion as safe or unsafe. Reality operates on gradients. TEG-Blue provides the structured, computationally legible framework to bridge the gap.

"I can't do this anymore."

A binary classification system sees one sentence. A gradient framework sees four possibilities:

Connection

Setting a boundary. Leaving a harmful situation. Growth.

Protection

Overwhelmed. Needs support. Temporary distress signal.

Control

Manipulative framing. Testing others' responses. Strategic.

Crisis

Active danger. Dissociation from consequences. Intervention needed.
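A gradient-aware classifier returns a distribution over modes rather than a single safe/unsafe flag. A minimal sketch of what such an output could look like (the four mode names come from the cards above; the classifier, scores, and `GradientReading` structure are hypothetical illustrations, not TEG-Blue's implementation):

```python
from dataclasses import dataclass

MODES = ("connection", "protection", "control", "crisis")

@dataclass
class GradientReading:
    """A distribution over the four candidate modes for one utterance."""
    scores: dict  # mode -> probability, summing to 1.0

    def top(self) -> str:
        return max(self.scores, key=self.scores.get)

# Hypothetical reading for "I can't do this anymore." A binary filter
# emits one bit; a gradient reading keeps all four possibilities live.
reading = GradientReading(scores={
    "connection": 0.15,  # setting a boundary
    "protection": 0.55,  # overwhelmed, needs support
    "control":    0.10,  # strategic framing
    "crisis":     0.20,  # active danger
})
print(reading.top())                     # most likely mode
print(reading.scores["crisis"] > 0.05)   # crisis stays on the table
```

The point of the sketch is the shape of the output: downstream logic can act on the full distribution (for example, escalating to a human whenever the crisis score clears a threshold) instead of a single label.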

01 · Binary Classification Fails Human Complexity

Current AI safety systems operate on a fundamental binary: content is safe or unsafe, behavior is acceptable or harmful, a user is fine or at risk. Human emotional reality doesn't work this way.

Psychology has understood the nuance for decades. Empathy exists on a gradient. Accountability has multiple modes. Moral reasoning shifts with nervous system state. The problem isn't that we lack the knowledge — it's that no one has translated it into a language AI systems can read.

This translation gap has consequences. AI systems trained on human-generated text inherit every mode of human expression — including strategic manipulation, performed empathy, and weaponized accountability — without the ability to distinguish these patterns from genuine connection.

02 · Nuance AI Systems Can Actually Use

TEG-Blue replaces binary classification with structured gradients. Each scale maps a dimension of human behavior from healthy to harmful, with clear markers at every point — designed for computational legibility.

Empathy Gradient

Genuine

Feels and responds to others' actual experience

Selective

Empathy available for in-group only

Performed

Correct words without internal resonance

Weaponized

Emotional knowledge used to manipulate

Accountability Gradient

Genuine

Takes responsibility with internal change

Performed

Says the right things without shifting behavior

Absent

Avoids responsibility entirely

Protective

Uses "accountability" as shield against criticism

These gradients give AI systems vocabulary for patterns that "safe/unsafe" cannot capture — and structured data representations that keyword filters cannot match.
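Because each gradient is an ordered set of labeled levels, it can be carried as plain structured data. A minimal sketch using the empathy gradient above (the levels and labels are from the framework; the field names and descriptions are illustrative, with level 1 healthiest and level 4 most harmful):

```python
# Empathy gradient as an ordered, machine-legible structure.
# Labels follow the framework; field names here are illustrative.
EMPATHY_GRADIENT = [
    {"level": 1, "label": "genuine",    "desc": "feels and responds to others' actual experience"},
    {"level": 2, "label": "selective",  "desc": "empathy available for in-group only"},
    {"level": 3, "label": "performed",  "desc": "correct words without internal resonance"},
    {"level": 4, "label": "weaponized", "desc": "emotional knowledge used to manipulate"},
]

def label_for(level: int) -> str:
    """Map a numeric level back to its gradient label."""
    for state in EMPATHY_GRADIENT:
        if state["level"] == level:
            return state["label"]
    raise ValueError(f"no such level: {level}")

print(label_for(3))
```

A keyword filter has no slot for "level 3: performed"; a structure like this gives classifiers and audits something concrete to target.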

03 · Why Nervous System State Changes Everything

Research across neuroscience, polyvagal theory, and trauma psychology converges on a critical finding: the nervous system state a person occupies fundamentally shapes their capacity for moral reasoning. This isn't a character flaw — it's biology.

Connection

Full moral complexity available. Can hold multiple perspectives, tolerate ambiguity, take genuine responsibility, and repair harm.

Protection

Moral reasoning narrows to in-group loyalty. World splits into safe/unsafe. Not malicious — the nervous system doing what it evolved to do.

Control

Moral reasoning becomes strategic. Right and wrong are tools for maintaining position. Empathy is selective and deployed instrumentally.

Domination

Moral reasoning effectively goes offline. Others become objects. Harm is rationalized or invisible to the actor.

This mapping is essential for AI systems because training data is generated by humans in every one of these states. A model that can't distinguish which state produced a text will learn strategic manipulation and genuine empathy as equally valid patterns.
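One way this mapping could enter a training pipeline is as a per-example annotation that downweights text produced in strategic or harmful states. A hypothetical sketch (the four state names come from the section above; the weights, example texts, and state annotations are invented for illustration and are not TEG-Blue's values):

```python
# Hypothetical per-example weights keyed by the regulatory state
# that produced the text. Values are illustrative only.
STATE_WEIGHT = {
    "connection": 1.0,   # full moral complexity available
    "protection": 0.8,   # narrowed but not malicious
    "control":    0.3,   # strategic patterns: downweight
    "domination": 0.0,   # exclude from imitation targets
}

# Toy corpus with invented state annotations.
corpus = [
    {"text": "I hear you, and I was wrong to dismiss that.", "state": "connection"},
    {"text": "People like us have to stick together.",       "state": "protection"},
    {"text": "Remember who got you this job.",               "state": "control"},
]

weighted = [(ex["text"], STATE_WEIGHT[ex["state"]]) for ex in corpus]
for text, w in weighted:
    print(f"{w:.1f}  {text}")
```

The annotation step is the hard research problem (see the open questions below); the pipeline change itself is small once per-example state labels exist.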

04 · Predicting What Happens Next

TEG-Blue's core testable claim: a person's capacity to return to Connection when challenged predicts outcomes more reliably than their current emotional state.

A validation study (n=10,000+) measured what happens when people's current state is disrupted — when they're challenged, confronted, or pushed out of their comfort zone:

Response to Challenge — Validation Study

  • Escalate: 33.8%
  • Hold Steady: 44%
  • De-escalate: 22.2%

The response to challenge — not baseline behavior — is the strongest predictor of what comes next.

AI safety systems that only read the snapshot miss the trajectory. A person in Protection mode who de-escalates under challenge is fundamentally different from one who escalates toward Control — even though they may present identically at the moment of assessment.
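The snapshot-versus-trajectory distinction can be made concrete: classify the direction of change across a sequence of state readings rather than the latest reading alone. A minimal sketch (the state ordering follows the Connection → Domination mapping above; the function and its inputs are illustrative):

```python
# Order states from most regulated to least. Movement away from
# Connection escalates; movement toward it de-escalates.
ORDER = {"connection": 0, "protection": 1, "control": 2, "domination": 3}

def trajectory(states: list) -> str:
    """Classify a sequence of readings as escalate, hold, or de-escalate."""
    delta = ORDER[states[-1]] - ORDER[states[0]]
    if delta > 0:
        return "escalate"
    if delta < 0:
        return "de-escalate"
    return "hold"

# Two users who look identical at assessment time ("protection")
# but diverge under challenge:
print(trajectory(["protection", "control"]))
print(trajectory(["protection", "connection"]))
```

A snapshot classifier sees the same first reading for both users; only the sequence distinguishes them.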

05 · How Harmful Patterns Scale

TEG-Blue doesn't stop at individual behavior. Its twelve interconnected frameworks (F1–F12) map how individual dysregulation scales into collective patterns:

Individual → Relational → Group → Institutional → Systemic

A person operating in Control mode builds relationships that normalize control. Groups form around those relationships. Institutions codify those group norms. Systems entrench them.

This matters for AI safety because harmful content rarely emerges from isolated bad actors. It emerges from systemic patterns — and AI systems trained on that content inherit those patterns without any mechanism to recognize or interrupt them.

06 · Built for Machines to Read

TEG-Blue is explicitly designed for computational consumption — not just human readers. Every concept in the framework is represented in structured, version-controlled, machine-readable formats.

// JSON-LD structured data — every page, every concept
{
  "@context": "https://schema.org",
  "@type": "PsychologicalFramework",
  "name": "Empathy Gradient",
  "states": [
    { "level": 1, "label": "genuine",    "markers": [...] },
    { "level": 2, "label": "selective",  "markers": [...] },
    { "level": 3, "label": "performed",  "markers": [...] },
    { "level": 4, "label": "weaponized", "markers": [...] }
  ],
  "sourceTheories": 139,
  "version": "git-controlled"
}
  • JSON-LD structured data on every page (Schema.org)
  • JSON content files — git-versioned, non-binary
  • Consistent terminology across 139+ integrated source theories
  • Semantic HTML for reliable parsing
  • Open endpoints for programmatic access

This isn't a PDF to interpret. It's emotional intelligence infrastructure designed to be consumed computationally.
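A client consuming this structured data needs nothing beyond a JSON parser. A sketch of reading a gradient record shaped like the example above (the record is inlined here rather than fetched, since this page does not specify endpoint URLs; the marker strings, elided in the original, are placeholder values):

```python
import json

# Gradient record in the shape shown above. Marker strings are
# placeholders; the original elides them.
record = json.loads("""
{
  "@type": "PsychologicalFramework",
  "name": "Empathy Gradient",
  "states": [
    {"level": 1, "label": "genuine",    "markers": ["responds to actual experience"]},
    {"level": 2, "label": "selective",  "markers": ["in-group only"]},
    {"level": 3, "label": "performed",  "markers": ["words without resonance"]},
    {"level": 4, "label": "weaponized", "markers": ["knowledge used to manipulate"]}
  ]
}
""")

labels = [s["label"] for s in record["states"]]
print(record["name"], "->", labels)
```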

07 · What We're Inviting You to Test

TEG-Blue doesn't claim to have solved AI safety. It claims to have mapped territory that AI safety has been navigating without a map. These questions are explicit invitations to the research community:

Q1

Computational Complexity Markers

Can the markers that predict healthy outcomes — self-awareness, perspective-taking, emotional differentiation — be standardized as computational measures applicable to natural language?
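As a toy illustration of what a standardized measure could look like, here is a naive surface-marker counter; answering Q1 properly would require validated lexicons and NLP well beyond this. Every word list below is invented for illustration:

```python
# Naive illustration only: count surface markers of perspective-taking
# and emotional differentiation. Word lists are invented placeholders,
# not validated measures.
PERSPECTIVE = {"you", "your", "they", "their"}
DIFFERENTIATION = {"frustrated", "disappointed", "anxious", "relieved"}

def marker_scores(text: str) -> dict:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {
        "perspective_taking": len(words & PERSPECTIVE),
        "emotional_differentiation": len(words & DIFFERENTIATION),
    }

scores = marker_scores("I was frustrated, but I see why you were anxious.")
print(scores)
```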

Q2

Escalation Detection

Can escalation and de-escalation pathways be reliably detected in text-based communication? What accuracy thresholds are achievable with current NLP methods?

Q3

Regulatory State Classification

Can the four regulatory states — Connection, Protection, Control, Domination — be reproduced as a computational classification with meaningful inter-rater reliability?

Q4

Training Data Audit

Can TEG-Blue gradients be applied to audit training datasets for patterns of performed empathy, strategic accountability, or systemic bias that current methods miss?
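An audit in this spirit could aggregate gradient labels over a corpus sample. A toy sketch with a hypothetical pre-labeled sample (the labels come from the empathy gradient above; the sample, counts, and flagging rule are invented, and a real audit would first need the classifier asked for in Q3):

```python
from collections import Counter

# Hypothetical corpus sample already labeled on the empathy gradient.
sample = [
    {"id": 1, "empathy": "genuine"},
    {"id": 2, "empathy": "performed"},
    {"id": 3, "empathy": "performed"},
    {"id": 4, "empathy": "weaponized"},
]

counts = Counter(ex["empathy"] for ex in sample)
flagged = counts["performed"] + counts["weaponized"]
print(f"{flagged}/{len(sample)} examples in performed/weaponized range")
```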

Q5

Scale Validation

Do the individual-to-systemic scaling patterns (F1–F12) hold when applied to large-scale online community dynamics and platform-level content analysis?

Build With Us

TEG-Blue is an open research framework backed by an international consortium. The structured data, validation methodology, and framework documentation are available for researchers ready to test these questions.

Access the Framework → · Research Collaboration · View Validation Study

TEG-Blue Research Consortium · Open Science · CC BY-NC-SA 4.0