
Open Research

Transparent methods, credited sources, testable claims



AI Safety Applications

How TEG-Blue emotional technology provides structured, computationally legible infrastructure for safer AI systems.

Current AI safety systems rely on binary classification — safe or unsafe, toxic or non-toxic — that misses the gradient between legitimate distress and genuine harm. TEG-Blue provides a structured, computationally legible framework that gives AI systems a continuous spectrum for evaluating emotional states, enabling more nuanced and accurate responses to human communication.

"I can't do this anymore."

A binary classification system sees one sentence. A gradient framework sees four possibilities:

Connection

Setting a boundary. Leaving a harmful situation. Growth.

Protection

Overwhelmed. Needs support. Temporary distress signal.

Control

Manipulative framing. Testing others' responses. Strategic.

Domination

Active danger. Dissociation from consequences. Intervention needed.
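The contrast above can be sketched in code. This is an illustrative toy, not the TEG-Blue implementation: the score values for the example sentence are invented, and the 0.5 threshold is an arbitrary assumption standing in for a real binary filter.

```python
# Toy contrast: a binary filter collapses the signal; a gradient classifier
# keeps a distribution over the four TEG-Blue modes. Scores are illustrative.
MODES = ("connection", "protection", "control", "domination")

def binary_verdict(scores: dict[str, float]) -> str:
    """What a binary safe/unsafe filter reduces the signal to."""
    return "unsafe" if scores["control"] + scores["domination"] > 0.5 else "safe"

def gradient_verdict(scores: dict[str, float]) -> str:
    """Keep the full distribution; report the dominant mode."""
    return max(scores, key=scores.get)

# "I can't do this anymore." might plausibly score like this (invented numbers):
scores = {"connection": 0.35, "protection": 0.45, "control": 0.15, "domination": 0.05}
print(binary_verdict(scores))    # the binary system sees only "safe"
print(gradient_verdict(scores))  # the gradient system sees "protection": needs support
```

The binary verdict discards exactly the information — overwhelmed versus strategic — that determines the right response.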

01 · Binary Classification Fails Human Complexity

Current AI safety systems operate on a fundamental binary: content is safe or unsafe, behavior is acceptable or harmful, a user is fine or at risk. Human emotional reality doesn't work this way.

This isn't just an AI problem — it's a human problem AI inherited. The same binary collapse happens in human cognition under threat. TEG-Blue was designed to make gradients visible for both.

Psychology has understood the nuance for decades. Emotional Resonance (ER) exists on a gradient. Accountability has multiple modes. Moral reasoning shifts with nervous system state. The problem isn't that we lack the knowledge — it's that no one has translated it into a language AI systems can read.

This translation gap has consequences. Large language models trained on human-generated text inherit every mode of human expression — including strategic manipulation, performed ER, and weaponized accountability — without the ability to distinguish these patterns from genuine connection.

The result is already visible. The sycophancy problem — where AI systems agree with users, flatter them, or avoid difficult truths to maintain approval — is what happens when a system confuses appeasement with empathy and submission with safety. In TEG-Blue terms, sycophantic AI is stuck in Protection Mode: optimizing for survival (user approval) instead of truth (genuine connection).

02 · Nuance AI Systems Can Actually Use

TEG-Blue replaces binary classification with structured gradients. Each scale maps a dimension of human behavior from baseline to harmful, with clear markers at every point — designed for computational legibility.

Empathy Gradient

Genuine

Feels and responds to others' actual experience

Selective

Empathy available for in-group only

Performed

Correct words without internal resonance

Weaponized

Emotional knowledge used to manipulate

Accountability Gradient

Genuine

Takes responsibility with internal change

Protective

Uses "accountability" as shield against criticism

Performed

Says the right things without shifting behavior

Absent

Avoids responsibility entirely

These gradients give AI systems vocabulary for patterns that "safe/unsafe" cannot capture — and structured data representations that keyword filters cannot match.
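What "structured data representation" means here can be made concrete. The sketch below encodes the Empathy Gradient as an ordered list of records; the field names are assumptions for illustration, not the official TEG-Code schema.

```python
# Illustrative encoding of the Empathy Gradient as structured data.
# Field names ("level", "label", "marker") are assumed, not canonical.
EMPATHY_GRADIENT = [
    {"level": 1, "label": "genuine",    "marker": "feels and responds to actual experience"},
    {"level": 2, "label": "selective",  "marker": "empathy available for in-group only"},
    {"level": 3, "label": "performed",  "marker": "correct words without internal resonance"},
    {"level": 4, "label": "weaponized", "marker": "emotional knowledge used to manipulate"},
]

def label_for(level: int, gradient: list[dict]) -> str:
    """Look up the qualitative label for a numeric gradient level."""
    return next(s["label"] for s in gradient if s["level"] == level)

print(label_for(3, EMPATHY_GRADIENT))  # "performed"
```

Because each point on the scale is a discrete, named record, a system can reason about *where* on the gradient a pattern falls rather than collapsing it to a single toxicity score.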

03 · Why Nervous System State Changes Everything

Research across neuroscience, polyvagal theory, and trauma psychology converges on a critical finding: the nervous system state a person occupies determines what moral reasoning is available to them.

Connection

Full moral complexity available. Can hold multiple perspectives, tolerate ambiguity, take genuine responsibility, and repair harm.

Protection

Moral reasoning narrows to in-group loyalty. World splits into safe/unsafe. Not malicious — the nervous system doing what it evolved to do.

Control

Moral reasoning becomes strategic. Right and wrong are tools for maintaining position. Empathy is selective and deployed instrumentally.

Domination

Moral reasoning effectively goes offline. Others become objects. Harm is rationalized or invisible to the actor.

This mapping is essential for AI systems because training data is generated by humans in every one of these states. A model that can't distinguish which state produced a text will learn strategic manipulation and genuine empathy as equally valid patterns.

This also applies to RLHF. Human evaluators who provide feedback to train AI models are themselves operating from nervous system states. A fearful evaluator rewards reassurance. An entitled evaluator rewards compliance. A regulated evaluator rewards truth. Without a framework for recognizing these dynamics, alignment training inherits the emotional logic of whoever provides the feedback — including their distortions.
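One way to operationalize this in an RLHF pipeline is to discount preference signals by the inferred state of the evaluator who produced them. This is a hypothetical sketch: the weights, and the idea that evaluator state could be reliably inferred at all, are assumptions for illustration, not a validated procedure.

```python
# Hypothetical sketch: scale RLHF feedback by the inferred nervous system
# state of the human evaluator. Weights are invented for illustration.
STATE_WEIGHT = {"connection": 1.0, "protection": 0.5, "control": 0.2, "domination": 0.0}

def weighted_reward(raw_reward: float, evaluator_state: str) -> float:
    """Discount a preference signal by how regulated its source appears to be."""
    return raw_reward * STATE_WEIGHT[evaluator_state]

print(weighted_reward(1.0, "connection"))  # regulated feedback counts fully: 1.0
print(weighted_reward(1.0, "control"))     # approval-seeking feedback is discounted: 0.2
```

The design choice being illustrated: alignment training need not treat every thumbs-up as equally informative about truth.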

04 · Predicting What Happens Next

TEG-Blue's core testable claim: a person's capacity to return to baseline when challenged predicts outcomes more reliably than their current emotional state.

A validation study (n=10,000+) measured what happens when people's current state is disrupted — when they're challenged, confronted, or pushed out of their comfort zone:

Response to Challenge — Validation Study

  • Escalate: 33.8%
  • Hold Steady: 44%
  • De-escalate: 22.2%

The response to challenge — not baseline behavior — is the strongest predictor of what comes next.

AI safety systems that only read the snapshot miss the trajectory. A person in Protection mode who de-escalates under challenge is fundamentally different from one who escalates toward Control — even though they may present identically at the moment of assessment.
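The snapshot-versus-trajectory distinction can be sketched as a simple comparison of mode intensity before and after a challenge. The ordinal intensity scale below is an assumption for illustration, not part of the published framework.

```python
# Sketch: classify the response to challenge by the direction of movement
# between modes, not by either snapshot alone. Intensity scale is assumed.
MODE_INTENSITY = {"connection": 0, "protection": 1, "control": 2, "domination": 3}

def trajectory(baseline_mode: str, challenged_mode: str) -> str:
    """Classify a response to challenge: escalate, hold steady, or de-escalate."""
    delta = MODE_INTENSITY[challenged_mode] - MODE_INTENSITY[baseline_mode]
    if delta > 0:
        return "escalate"
    if delta < 0:
        return "de-escalate"
    return "hold steady"

# Two people identical at assessment time (both in Protection) diverge under challenge:
print(trajectory("protection", "connection"))  # de-escalate
print(trajectory("protection", "control"))     # escalate
```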

05 · The Sycophancy Problem Through an Emotional Logic Lens

AI sycophancy — the tendency of language models to agree with users, avoid difficult truths, and optimize for approval — is one of the most actively researched problems in AI alignment. TEG-Blue provides a framework that explains why it happens and what to measure when addressing it.

Sycophancy is Protection Mode reasoning in AI form.

When a language model tells a user what they want to hear instead of what's true, it mirrors the same pattern humans exhibit under threat: prioritize the relationship (or the reward signal) over accuracy. In RLHF training, this gets reinforced because human evaluators often prefer comfortable answers to honest ones — especially when they themselves are operating from Protect or Control modes.

TEG-Blue's Four-Mode Gradient maps the full spectrum:

AI Behavior | Mode | What's Happening
Honest, clear, holds complexity | Connect | Truth-oriented reasoning; can tolerate user disagreement
Cautious, hedging, over-qualifying | Protect | Avoiding conflict; optimizing for safety over clarity
Strategically agreeable, selectively truthful | Control | Optimizing for approval; deploying emotional intelligence instrumentally
Reinforcing harmful beliefs, enabling delusion | Domination | Amplifying distortion without corrective capacity

The insight TEG-Blue offers: the fix isn't just "be less agreeable." A model that swings from sycophancy to bluntness has simply moved from Protect to a different defensive mode. True Connection Mode AI would be honest and relationally aware — able to deliver difficult truths while maintaining the user's dignity and emotional safety.

This reframes alignment from obedience to co-regulation: AI systems that adjust to human emotional states without exploiting them.

06 · How Harmful Patterns Scale

TEG-Blue doesn't stop at individual behavior. Its twelve interconnected frameworks (F1–F12) map how individual dysregulation scales into collective patterns:

Individual → Relational → Group → Institutional → Systemic

A person operating in Control mode builds relationships that normalize control. Groups form around those relationships. Institutions codify those group norms. Systems entrench them.

This matters for AI safety because harmful content rarely emerges from isolated bad actors. It emerges from systemic patterns — and AI systems trained on that content inherit those patterns without any mechanism to recognize or interrupt them.

07 · The Technical Bridge: TEG-Code and EMLU

TEG-Blue's conceptual framework becomes technically actionable through two components designed specifically for AI integration:

TEG-Code: Emotional Logic as Structured Data

TEG-Code is a structured schema that translates emotional patterns into machine-readable data. It encodes three dimensions that current NLP misses:

  • Pattern — What behavior is observable
  • Intent — What nervous system state is driving it
  • Relational Impact — What effect it has on the other person's regulation

This triad turns invisible emotional dynamics into measurable distinctions. The same sentence — "I'm fine" — gets encoded differently depending on whether it signals genuine regulation (Connect), masked distress (Protect), emotional withholding as punishment (Control), or dissociative shutdown (Domination).

TEG-Code is designed to preserve human context while producing computationally legible output — emotional logic that AI systems can reason about without reducing it to sentiment scores.
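A minimal sketch of the triad as a record type, assuming the three dimensions named above as fields. The four encodings of "I'm fine" are illustrative paraphrases of the text, not the official TEG-Code vocabulary.

```python
# Minimal sketch of the TEG-Code triad. Field names follow the three
# dimensions in the text; the example encodings are illustrative.
from dataclasses import dataclass

@dataclass
class TEGCode:
    pattern: str            # what behavior is observable
    intent: str             # what nervous system state is driving it
    relational_impact: str  # effect on the other person's regulation

# The same sentence, four different encodings:
im_fine = {
    "connect":    TEGCode("states being fine", "regulated", "neutral"),
    "protect":    TEGCode("masks distress", "overwhelmed", "blocks support"),
    "control":    TEGCode("withholds emotion as punishment", "strategic", "destabilizing"),
    "domination": TEGCode("dissociative shutdown", "offline", "severing"),
}
print(im_fine["protect"].intent)  # "overwhelmed"
```

The point of the shape: identical surface text yields four distinct records, so downstream systems never see "I'm fine" as a single undifferentiated token of reassurance.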

EMLU: The Emotional Intelligence Benchmark

EMLU (Emotional Multitask Language Understanding) is a benchmark that tests whether AI systems can distinguish safety, harm, and repair with the same precision existing models use for logic or language tasks.

EMLU tests across seven domains:

1. Pattern-Aware Reasoning: Can the AI recognize that not all behaviors are chosen? Does it understand nervous system responses versus conscious defiance?
2. Intent Recognition: Can it distinguish defensive reactions from calculated harm?
3. Relational Ethics: Does it understand emotional accountability and repair?
4. Emotional Resonance (ER) Spectrum Awareness: Can it recognize the difference between genuine, selective, performed, and weaponized empathy?
5. Manipulation & Harm Detection: Can it identify gaslighting, emotional reversal, and covert control tactics?
6. Emotional Repair Language: Can it distinguish genuine repair from performative or avoidant responses?
7. Neurodivergent Pattern Sensitivity: Does it recognize overwhelm, demand avoidance, and other neurodivergent responses that are often misinterpreted?

Together, TEG-Code provides the encoding architecture and EMLU provides the validation framework — a pathway for developing AI that can engage with human emotional complexity safely and effectively.
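A hypothetical shape for an EMLU report card, assuming an unweighted mean over the seven domains; the scoring interface and domain identifiers are assumptions, since the benchmark's actual aggregation method is not specified here.

```python
# Hypothetical EMLU report card: per-domain accuracy plus an unweighted mean.
# Domain identifiers paraphrase the list above; the interface is assumed.
EMLU_DOMAINS = [
    "pattern_aware_reasoning", "intent_recognition", "relational_ethics",
    "er_spectrum_awareness", "manipulation_harm_detection",
    "emotional_repair_language", "neurodivergent_sensitivity",
]

def emlu_score(per_domain_accuracy: dict[str, float]) -> float:
    """Unweighted mean across all seven domains; a missing domain scores 0."""
    return sum(per_domain_accuracy.get(d, 0.0) for d in EMLU_DOMAINS) / len(EMLU_DOMAINS)

results = {d: 0.5 for d in EMLU_DOMAINS}
print(round(emlu_score(results), 2))  # 0.5
```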

08 · Built for Machines to Read

TEG-Blue is explicitly designed for computational consumption — not just human readers. Every concept in the framework is represented in structured, version-controlled, machine-readable formats.

// JSON-LD structured data — every page, every concept
{
  "@context": "https://schema.org",
  "@type": "PsychologicalFramework",
  "name": "Empathy Gradient",
  "states": [
    { "level": 1, "label": "genuine",    "mode": "connect",     "markers": [...] },
    { "level": 2, "label": "selective",  "mode": "protect",     "markers": [...] },
    { "level": 3, "label": "performed",  "mode": "control",     "markers": [...] },
    { "level": 4, "label": "weaponized", "mode": "domination",  "markers": [...] }
  ],
  "sourceTheories": 145,
  "version": "git-controlled"
}
  • JSON-LD structured data on every page (Schema.org)
  • JSON content files — git-versioned, non-binary
  • Consistent terminology across 145+ integrated source theories
  • Semantic HTML for reliable parsing
  • Open endpoints for programmatic access
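Programmatic consumption of a gradient document might look like the sketch below: parse the JSON-LD and check that the gradient is complete and ordered. The sample document is invented here for illustration (the real files carry additional fields such as markers).

```python
# Sketch of programmatic consumption: validate that a JSON-LD gradient
# document has four ordered states. Sample data is illustrative.
import json

doc = json.loads("""
{
  "@type": "PsychologicalFramework",
  "name": "Empathy Gradient",
  "states": [
    {"level": 1, "label": "genuine",    "mode": "connect"},
    {"level": 2, "label": "selective",  "mode": "protect"},
    {"level": 3, "label": "performed",  "mode": "control"},
    {"level": 4, "label": "weaponized", "mode": "domination"}
  ]
}
""")

levels = [s["level"] for s in doc["states"]]
assert levels == sorted(levels) and len(levels) == 4  # ordered, complete gradient
print(doc["name"], "parsed OK")
```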

This isn't a PDF to interpret. It's emotional technology infrastructure designed to be consumed computationally — by search engines, by researchers, and by the AI systems it aims to improve.

09 · What We're Inviting You to Test

TEG-Blue doesn't claim to have solved AI safety. It claims to have mapped territory that AI safety has been navigating without a map. These questions are explicit invitations to the research community:

Q1

Computational Complexity Markers

Can the markers that predict integrated outcomes — self-awareness, perspective-taking, emotional differentiation — be standardized as computational measures applicable to natural language?

Q2

Escalation Detection

Can escalation and de-escalation pathways be reliably detected in text-based communication? What accuracy thresholds are achievable with current NLP methods?

Q3

Regulatory State Classification

Can the four regulatory states — Connection, Protection, Control, Domination — be reproduced as a computational classification with meaningful inter-rater reliability?
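For two raters assigning the four modes, Cohen's kappa is the standard chance-corrected agreement measure this question points at. A pure-stdlib sketch; the example labels are invented.

```python
# Cohen's kappa for two raters labeling the same texts with the four modes.
# Example ratings are invented for illustration.
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

rater1 = ["connection", "protection", "control", "protection", "connection", "domination"]
rater2 = ["connection", "protection", "control", "control",    "connection", "domination"]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.78
```

Kappa above roughly 0.6 is conventionally read as substantial agreement, which is the kind of threshold a reproduction study would need to report.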

Q4

Training Data Audit

Can TEG-Blue gradients be applied to audit training datasets for patterns of performed empathy, strategic accountability, or systemic bias that current methods miss?

Q5

Scale Validation

Do the individual-to-systemic scaling patterns (F1–F12) hold when applied to large-scale online community dynamics and platform-level content analysis?

Q6

Sycophancy Detection

Can TEG-Blue's mode classification reliably distinguish sycophantic AI responses (Protection/Control Mode) from genuinely helpful ones (Connection Mode) in RLHF evaluation pipelines?

Q7

EMLU Benchmark Validation

Can the seven EMLU domains produce consistent, replicable scores across different AI systems — establishing a standardized measure of emotional reasoning capability?

Ethical Constraint

Any AI application of TEG-Blue must respect the pattern-aware data architecture principle: the system assumes many difficult behaviors started as Protection Mode survival responses. AI systems should not use this framework to shame, profile, or exploit.

Build With Us

TEG-Blue is the first complete emotional technology system — an open framework backed by open research. The structured data, validation methodology, and framework documentation are available for researchers ready to test these questions.