Emotional Intelligence in Data Form
Introducing TEG-Code
"What if machines could tell the difference between someone who's struggling and someone who's manipulating? Between a trauma response and intentional harm? Between genuine repair and performative apology?
TEG-Code makes that possible by turning emotional complexity into structured data that preserves the nuance humans need and the logic machines can process."
Why Current Emotional AI Can’t Distinguish Intent vs Impact
Most tools treat emotions like simple labels:
happy, sad, angry.
But emotional behavior lives in intent, regulation state, and impact.
That’s why:
- Gaslighting often goes unnoticed.
- Emotional repair is mistaken for being “nice.”
- Survivors doubt themselves for years.
We need tools that track emotional safety and harm with real structure.
The deeper problem: Current AI can't distinguish between identical behaviors with completely different intentions. A child saying "I don't want to" could be healthy boundary-setting or a trauma response, but systems treat the two identically.
What TEG-Code Offers
Each behavior is translated into:
- Color Zone (blue, orange, red, black)
- Relational Intent (connect, protect, punish, control)
- Pattern Type (repair cue, survival response, control tactic)
- Relational Impact (builds trust, erodes safety)
- Example Text (how it actually sounds or feels)
This isn’t “positive vs. negative”; it’s a gradient of intent and regulation.
How It Works
Each card is stored in JSON-LD, a web-friendly format.
That means it can be:
- Parsed by machines
- Queried by researchers
- Expanded by therapists, coders, and survivors
Under the code is a trauma-aware emotional logic, built for systems that want to protect people—not control them.
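As a concrete sketch of what "parsed by machines" could look like, here is a minimal Python example. The `Card` fields mirror the card structure described above; the controlled-vocabulary sets and validation logic are illustrative assumptions, not part of the published schema.

```python
from dataclasses import dataclass

# Controlled vocabularies taken from the card structure described above.
# Treated here as illustrative assumptions, not the official TEG-Code schema.
COLOR_ZONES = {"blue", "orange", "red", "black"}
INTENTS = {"connect", "protect", "punish", "control"}

@dataclass
class Card:
    label: str
    color_zone: str
    intent: str
    pattern_type: str
    relational_impact: str
    example_text: str

    def __post_init__(self) -> None:
        # Reject values outside the small vocabularies above,
        # so downstream systems can rely on a closed set of labels.
        if self.color_zone not in COLOR_ZONES:
            raise ValueError(f"unknown color zone: {self.color_zone}")
        if self.intent not in INTENTS:
            raise ValueError(f"unknown intent: {self.intent}")

card = Card(
    label="Gaslighting",
    color_zone="red",
    intent="control",
    pattern_type="control tactic",
    relational_impact="distorts reality",
    example_text="You're being way too sensitive.",
)
print(card.color_zone, card.intent)  # red control
```

Because the vocabularies are closed sets, invalid data fails loudly at load time instead of silently corrupting an analysis later.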
5 Common Myths About Emotional Data
Most systems treat emotion as either too messy to model—or too simple to need depth. These myths don’t just limit technology—they distort how we see human behavior. When emotional data is misunderstood, harm gets mislabeled as harmless, empathy gets misread as weakness, and real pain gets lost in translation. TEG-Code challenges these blind spots by offering a framework that sees emotion as structured, contextual, and essential—not soft, chaotic, or optional.
A quick breakdown of how most systems misunderstand emotion—and what TEG-Code offers instead.
| The Myth | Why It’s Harmful | TEG-Code’s Response |
| --- | --- | --- |
| Emotions can be fully quantified | Reduces rich emotional states to numbers or scores—ignoring story, intent, and relational meaning. | Maps emotion as logic, not just data. Emotions are patterns, not metrics. |
| One universal model fits all | Erases cultural, neurodivergent, and trauma-based differences in emotional expression. | Builds flexibility into the model. Emotion = context × history × intent. |
| Emotional data is objective | Hides bias in labeling, power in framing, and context in meaning. | Makes emotional logic transparent. Shows behavior, impact, and motive—side by side. |
| Emotion is the opposite of logic | Devalues emotional reasoning, especially in survival or relational stress. | Frames emotion as structured logic. Defense and belonging are emotional algorithms. |
| Emotion models are only for therapy or AI | Excludes emotional awareness from systems that need it—like education, justice, or leadership. | Designs for systems, not silos. Emotional safety is a foundation for all human systems. |
Sample Card (Gaslighting)
Intent: Control
Impact: Distorts reality
Example:
“You’re being way too sensitive. That’s not how it happened at all.”
Code version:
{
  "@context": {
    "eb": "https://emotionalblueprint.org/schema#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "xsd": "http://www.w3.org/2001/XMLSchema#"
  },
  "@graph": [
    {
      "@id": "eb:Card_003",
      "@type": "eb:Card",
      "rdfs:label": "Gaslighting",
      "rdfs:comment": "A manipulation tactic that makes someone question their perception of reality, memory, or emotions, in order to maintain power or avoid accountability.",
      "eb:colorZone": "red",
      "eb:intent": "control",
      "eb:patternType": "control tactic",
      "eb:exampleText": "You’re being way too sensitive. That’s not how it happened at all.",
      "eb:relationalImpact": "distorts reality",
      "eb:cardGroup": "Empathy Collapse"
    }
  ]
}
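To make "queried by researchers" concrete, here is a hedged sketch of filtering cards like the one above using only Python's standard `json` module. The property keys come straight from the sample card; the inline document and the specific filter (red-zone cards with control intent) are illustrative assumptions.

```python
import json

# The sample card from above, as it might appear in a JSON-LD file.
doc = json.loads("""
{
  "@context": {"eb": "https://emotionalblueprint.org/schema#"},
  "@graph": [
    {
      "@id": "eb:Card_003",
      "@type": "eb:Card",
      "rdfs:label": "Gaslighting",
      "eb:colorZone": "red",
      "eb:intent": "control",
      "eb:patternType": "control tactic",
      "eb:relationalImpact": "distorts reality",
      "eb:cardGroup": "Empathy Collapse"
    }
  ]
}
""")

# Collect the labels of every red-zone card whose intent is "control".
matches = [
    node["rdfs:label"]
    for node in doc["@graph"]
    if node.get("@type") == "eb:Card"
    and node.get("eb:colorZone") == "red"
    and node.get("eb:intent") == "control"
]
print(matches)  # ['Gaslighting']
```

A full JSON-LD processor would expand the `eb:` prefixes into full IRIs first; treating the compacted keys as plain dictionary keys, as here, is a shortcut that works only when all cards share the same `@context`.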
Why It Matters
Emotional harm is subtle—it hides in timing, tone, and intent disguised as care.
TEG-Code gives language to what usually goes unnamed.
This helps us track:
- What real emotional safety vs. control feels like
- How passive-aggressive “help” erodes trust
- What actual repair looks and sounds like
It’s not just for AI.
It’s for therapy, design, education, parenting—any system holding humans.
Join the Development
We're seeking collaborators who understand that emotional intelligence isn't just "nice to have"; it's critical infrastructure for human-AI interaction.
For Researchers: Help validate the emotional logic against established literature and develop rigorous testing protocols.
For Clinicians: Test cards against real client presentations and ensure the framework captures lived therapeutic experience.
For Developers: Implement TEG-Code in AI systems and measure outcomes; we need real-world performance data.
For Trauma Survivors: Ensure the framework accurately captures lived experience and doesn't replicate harmful clinical perspectives.
Especially seeking:
- Neurodivergent researchers and coders
- Trauma-informed AI safety teams
- Clinical practitioners frustrated with current tools
- Anyone who's felt the gap between emotional complexity and available language
If you recognize the problem TEG-Code solves, we want to hear from you.
If something in you says this matters—we want to hear from you.
AI Validation Through Direct Testing
Methodology: I tested TEG-Blue's concepts by directly prompting multiple AI systems (Claude, Copilot, Perplexity, DeepSearch) to analyze the framework's potential and identify gaps in current AI emotional intelligence.
What I did:
- Presented the TEG-Blue framework to each AI system through chat interfaces
- Asked them to evaluate whether current AI can distinguish trauma responses from manipulative behavior
- Requested analysis of the framework's potential impact on AI safety
- Gathered their assessments of what's missing in current emotional AI capabilities
Consistent AI Response: Every AI system confirmed that it cannot currently distinguish between someone acting from trauma and someone being manipulative, consistent with the core problem TEG-Blue addresses.
Key AI Assessment Quote: "Could revolutionize AI emotional intelligence by providing something current systems lack entirely: the ability to read emotional intent, not just emotional expression."
Research Insight Generated: Through this AI collaboration process, a projection emerged that "even a 70% accurate mode signal cuts toxic escalation by ≥30%." This figure is a working hypothesis, not a measured result.
What This Means: The AI systems themselves consistently identified the gaps TEG-Blue addresses as real and significant, in effect becoming research partners in mapping their own limitations and confirming the framework's relevance.
Explore Next
- What Is Emotional Technology — establishes the foundation of emotional tech
- Bridging Human Emotional Intelligence & AI Safety — situates TEG-Code within AI ethics
- EMLU — expands into multimodal emotional language understanding
Internal Links
- What is TEG-Blue?
- What is Emotional Technology?
- Research Collaboration & Impact
- 360° Global Synthesis
- Learning Lab
- Map Levels
- Four Modes
- AI Safety
TEG-Blue™ is a place for people who care about dignity, about repair, about building something better. It’s a map, an invitation, and a growing toolbox: an evolving commons supporting emotional clarity, systemic healing, and collective wisdom. Here, healing doesn’t require perfection, just honesty, responsibility, and support.