Sentiment + Intent Signals

Stream sentiment and intent classifications alongside text. Build sentiment-aware UI and proactive handoff escalation without writing your own classifiers.

What you get

  • Two new message-part variants — SentimentSignalPart and IntentSignalPart flow through the same ChatMessage.parts[] array as text. No side-channel state to wire.
  • Pluggable adapters — local browser inference (TF.js), keyword rules, or model-side structured output from Gemini, Claude, or OpenAI. Pick the latency/accuracy trade-off that fits the surface.
  • Declarative escalation rules — "escalate when frustration > 0.7 across two turns" expressed as data, evaluated by SignalEscalator, fired against the existing HandoffController.
  • First-class React hooks — useSentiment(messages) and useIntent(messages) return latest, history, and per-category/per-intent snapshots.

How it fits together

                ┌──────────────────┐
   user/agent  →│  ChatSession     │ ──► message_added / message_updated events
                └────────┬─────────┘
                         │
              ┌──────────┴──────────┐
              ▼                     ▼
       ┌─────────────┐       ┌────────────────┐
       │ SignalRunner│       │ SignalEscalator│
       │ (adapters)  │       │ (rules)        │
       └──────┬──────┘       └────────┬───────┘
              │                       │
        signal.update events     handoff.request()
              │                       │
              ▼                       ▼
       processTransportEvent    HandoffController (pure FSM)
       → SentimentSignalPart      
       → IntentSignalPart
              │
              ▼
       ChatMessage.parts[]
              │
              ▼
       useSentiment() / useIntent()
  • The normalizer stays a pure reducer over TransportEvent. Adapters do not call into it; they emit synthetic signal.update events that route through the same loop as wire events.
  • The handoff controller stays a pure FSM. Rules live in SignalEscalator, which receives request: (opts) => HandoffActionResult via injection.
  • Signals carry attributedMessageId so consumers can tie a sentiment shift back to the message that caused it.
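Because signals ride in the same parts array, any consumer — not just the hooks — can read them with a plain scan over messages. A minimal sketch (part shapes here are trimmed to the fields the example uses; the full interfaces are in the type reference in this guide):

```typescript
// Shapes trimmed to the fields this example uses.
type Part =
  | { type: 'text'; text: string }
  | { type: 'sentiment-signal'; category: string; score: number }
  | { type: 'intent-signal'; intent: string; score: number };

interface ChatMessage {
  id: string;
  parts: Part[];
}

// Walk backwards so the most recent signal wins.
function latestSentiment(messages: ChatMessage[]) {
  for (let i = messages.length - 1; i >= 0; i--) {
    for (let j = messages[i].parts.length - 1; j >= 0; j--) {
      const p = messages[i].parts[j];
      if (p.type === 'sentiment-signal') return p;
    }
  }
  return undefined;
}

const messages: ChatMessage[] = [
  {
    id: 'm1',
    parts: [
      { type: 'text', text: 'This still does not work!' },
      { type: 'sentiment-signal', category: 'frustration', score: 0.82 },
    ],
  },
];

console.log(latestSentiment(messages)?.category); // frustration
```

useSentiment() is essentially this scan plus memoization and history bookkeeping.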

Quick start

import { createChatClient, ruleAdapter, defaultRulePatterns } from 'gecx-chat';
import { useChatSession, useSentiment, useIntent } from 'gecx-chat/react';

const config = {
  // ...your existing config
  signals: {
    adapters: [ruleAdapter({ patterns: defaultRulePatterns })],
    escalation: [
      {
        id: 'frustration-2-turns',
        signal: 'sentiment',
        match: { category: 'frustration' },
        operator: '>',
        threshold: 0.7,
        consecutiveTurns: 2,
      },
    ],
  },
};

function App() {
  const chat = useChatSession({ config });
  const sentiment = useSentiment(chat.messages);
  const bg = sentiment.latest?.polarity === 'negative' ? '#fee' : undefined;
  return <div style={{ background: bg }}>...</div>;
}

Type reference

export interface SentimentSignalPart extends BasePart {
  type: 'sentiment-signal';
  category:
    | 'frustration' | 'anger' | 'confusion' | 'urgency'
    | 'satisfaction' | 'delight' | 'neutral';
  score: number;        // 0..1
  confidence: number;   // 0..1
  polarity?: 'positive' | 'neutral' | 'negative' | 'mixed';
  source: 'model' | 'local' | 'server' | 'rule';
  adapter: string;
  scope?: 'message' | 'turn' | 'session';
  attributedMessageId?: string;
  latencyMs?: number;
}

export interface IntentSignalPart extends BasePart {
  type: 'intent-signal';
  intent: string;       // snake_case id: 'refund_request', 'cancel_order', ...
  score: number;
  confidence: number;
  alternates?: Array<{ intent: string; score: number }>;
  // ...same SignalBase fields as above
}

Polarity is derived from (category, score) via the polarityFor() helper and bridges back to the existing WarmTransferContext.sentiment enum used at handoff time.
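The derivation is roughly "bucket the category, then collapse weak signals to neutral". An illustrative reimplementation — the category buckets and the 0.3 cutoff here are assumptions for the sketch, not the shipped helper's exact behavior:

```typescript
type Polarity = 'positive' | 'neutral' | 'negative' | 'mixed';

// Assumed bucketing for illustration; the real table may differ
// (e.g. urgency could map to 'mixed').
const NEGATIVE = new Set(['frustration', 'anger', 'confusion', 'urgency']);
const POSITIVE = new Set(['satisfaction', 'delight']);

function polarityFor(category: string, score: number): Polarity {
  if (score < 0.3) return 'neutral'; // weak signal: don't commit to a polarity
  if (NEGATIVE.has(category)) return 'negative';
  if (POSITIVE.has(category)) return 'positive';
  return 'neutral';
}
```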

Adapters

All adapters implement the same SignalAdapter interface. Pick by latency budget, accuracy needs, and whether you can take a network dependency.

| Adapter | Source | Median latency | Strengths | Weaknesses | Offline | Peer dep |
|---|---|---|---|---|---|---|
| ruleAdapter | local | <5ms | Deterministic, zero deps, great for tests | Keyword-only, English | yes | none |
| tfjsToxicityAdapter | local | 40–80ms | Browser-native, free, satisfies <200ms AC | English-only; narrow categories; toxicity-centric | yes | @tensorflow-models/toxicity, @tensorflow/tfjs |
| geminiAdapter | model | 200–400ms | Multilingual; full category set | Network; per-call cost | no | @google/generative-ai |
| claudeAdapter | model | 200–450ms | High accuracy; full category set | Network; cost | no | @anthropic-ai/sdk |
| openaiAdapter | model | 250–500ms | Full category set; intent extraction | Network; cost | no | openai |

Latency note. The <200ms acceptance criterion applies to the user-input path. Local adapters (ruleAdapter, tfjsToxicityAdapter) reliably hit this budget. Model adapters analyze assistant messages and typically take 200–400ms after text.completed — that's the latency from the message the adapter analyzes, not from the user's input.

Recommended composition.

import { ruleAdapter, defaultRulePatterns, tfjsToxicityAdapter, geminiAdapter } from 'gecx-chat';

const adapters = [
  // Always-on local safety net for user input — sub-200ms.
  ruleAdapter({ patterns: defaultRulePatterns }),
  tfjsToxicityAdapter({ threshold: 0.7 }),
  // Higher-fidelity model classification of assistant turns.
  geminiAdapter({
    apiKey: process.env.GEMINI_API_KEY!,
    trigger: 'assistant-message-completed',
  }),
];

Writing a custom adapter

Adapters are async iterables. Yield as many emissions as you find, respect ctx.signal.aborted, stay under budgetMs.

import { type SignalAdapter, polarityFor } from 'gecx-chat';

export const myCustomAdapter: SignalAdapter = {
  id: 'my-classifier',
  trigger: 'user-message-added',
  source: 'local',
  budgetMs: 200,
  async *analyze(ctx) {
    const text = ctx.message.parts.find((p) => p.type === 'text')?.text ?? '';
    const score = await classify(text); // your model
    if (ctx.signal.aborted) return;
    yield {
      kind: 'sentiment',
      category: 'frustration',
      score,
      confidence: 0.85,
      polarity: polarityFor('frustration', score),
      scope: 'message',
      attributedMessageId: ctx.message.id,
    };
  },
};
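Because an adapter is a plain object with an async generator, it unit-tests without the runner. A sketch that drives analyze() with a stub context — the ctx shape here is reduced to the two fields the example above touches (message, signal); the real SignalRunner supplies more, and the toy heuristic stands in for a model call:

```typescript
// Stub context: just the fields analyze() reads in this sketch.
interface StubCtx {
  message: { id: string; parts: Array<{ type: string; text?: string }> };
  signal: AbortSignal;
}

const stubAdapter = {
  id: 'stub-classifier',
  async *analyze(ctx: StubCtx) {
    const text = ctx.message.parts.find((p) => p.type === 'text')?.text ?? '';
    if (ctx.signal.aborted) return;
    // Toy heuristic standing in for a real classifier.
    yield {
      kind: 'sentiment' as const,
      category: 'frustration',
      score: text.includes('!') ? 0.9 : 0.1,
    };
  },
};

// Drain the generator the way a runner would.
async function collect(ctx: StubCtx) {
  const out: Array<{ kind: 'sentiment'; category: string; score: number }> = [];
  for await (const emission of stubAdapter.analyze(ctx)) out.push(emission);
  return out;
}

const ctx: StubCtx = {
  message: { id: 'm1', parts: [{ type: 'text', text: 'Still broken!' }] },
  signal: new AbortController().signal,
};
collect(ctx).then((out) => console.log(out.length, out[0].score)); // 1 0.9
```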

Escalation rules

Rules are declarative data:

const rules: SignalEscalationRule[] = [
  {
    id: 'frustration-2-turns',
    signal: 'sentiment',
    match: { category: 'frustration' },
    operator: '>',
    threshold: 0.7,
    minConfidence: 0.7,
    consecutiveTurns: 2,
    cooldownMs: 30_000,
  },
  {
    id: 'explicit-escalate',
    signal: 'intent',
    match: { intent: 'escalate_to_human' },
    operator: '>=',
    threshold: 0.7,
  },
];
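Note that consecutiveTurns means the threshold must hold on N turns in a row, not on any N turns overall. A sketch of that check — illustrative only; the shipped SignalEscalator also handles minConfidence, cooldowns, and idempotency:

```typescript
interface RuleSketch {
  operator: '>' | '>=';
  threshold: number;
  consecutiveTurns?: number;
}

// turnScores: the rule's matched score per turn, oldest first.
// Fires only when the *trailing* run of turns all clear the threshold.
function shouldFire(rule: RuleSketch, turnScores: number[]): boolean {
  const need = rule.consecutiveTurns ?? 1;
  if (turnScores.length < need) return false;
  const clears = (s: number) =>
    rule.operator === '>' ? s > rule.threshold : s >= rule.threshold;
  return turnScores.slice(-need).every(clears);
}
```

So a frustration run of [0.8, 0.6, 0.8] does not fire a two-turn rule: the calm middle turn resets the streak.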

When a rule fires, the escalator calls ChatSession.requestTransfer() with a stable idempotency key (${sessionId}:${ruleId}:${turnIndex}) and a WarmTransferContext carrying:

  • summary — Auto-escalated by signal rule "<ruleId>" at turn <n>.
  • intentTags — every distinct intent from the triggering signals
  • sentiment — coarse polarity derived from the triggering signal scores
  • customAttributes — raw scores, confidences, and adapter ids for the agent cockpit
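Assembled, the default request looks roughly like this. The field names come from the list above; the builder itself is a sketch of the escalator's behavior, not the shipped implementation:

```typescript
// Sketch of the payload the escalator hands to ChatSession.requestTransfer().
function buildTransferContext(opts: {
  sessionId: string;
  ruleId: string;
  turnIndex: number;
  intents: string[];
  polarity: 'positive' | 'neutral' | 'negative' | 'mixed';
  raw: Record<string, unknown>; // scores, confidences, adapter ids
}) {
  return {
    // Stable key: retries of the same rule at the same turn dedupe.
    idempotencyKey: `${opts.sessionId}:${opts.ruleId}:${opts.turnIndex}`,
    context: {
      summary: `Auto-escalated by signal rule "${opts.ruleId}" at turn ${opts.turnIndex}.`,
      intentTags: [...new Set(opts.intents)],
      sentiment: opts.polarity,
      customAttributes: opts.raw,
    },
  };
}
```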

You can override the request shape with rule.toHandoffRequest():

{
  id: 'vip-fast-path',
  signal: 'sentiment',
  match: { category: 'frustration' },
  operator: '>',
  threshold: 0.6,
  toHandoffRequest: (ctx) => ({
    transferType: 'warm',
    targetAgent: 'vip-pool',
    reason: `${ctx.rule.id}@turn${ctx.turnIndex}`,
  }),
}

React hooks

useSentiment and useIntent are pure selectors over chat.messages. They re-run on every messages-array change (which useChatSession already provides) and memoize internally.

const chat = useChatSession({ config });
const { latest, history, byCategory } = useSentiment(chat.messages, { historyLimit: 50 });
const { latest: lastIntent, byIntent } = useIntent(chat.messages);

if (byCategory.frustration?.score && byCategory.frustration.score > 0.7) {
  // Render a calm-down treatment.
}

Default renderer

MessagePart renders both signal parts as screen-reader-only <span> tags carrying data-* attributes for the score, category/intent, adapter, and source. This matches the audio-cue precedent of "no visible default; override via the registry":

<RendererRegistryContext.Provider value={registry.with('sentiment-signal', (part) => (
  <span className="badge" data-category={part.category}>
    {part.category} · {(part.score * 100).toFixed(0)}%
  </span>
))}>

Privacy + governance

Signal parts live in ChatMessage.parts[] and export through the standard governance.exportConversation path. If your privacy posture forbids exporting raw sentiment scores, set governance.redactSignals: true (configurable per policy) or filter signal parts in your export sink.
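If you take the filter-in-sink route, the predicate is a one-liner over part types. A sketch, with the message shape reduced to what the filter needs:

```typescript
interface ExportMessage {
  id: string;
  parts: Array<{ type: string }>;
}

// Drop sentiment/intent signal parts before the conversation leaves
// your boundary; all other parts pass through, originals untouched.
function stripSignalParts(messages: ExportMessage[]): ExportMessage[] {
  const isSignal = (t: string) => t === 'sentiment-signal' || t === 'intent-signal';
  return messages.map((m) => ({
    ...m,
    parts: m.parts.filter((p) => !isSignal(p.type)),
  }));
}
```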

See also

Source: docs/guides/sentiment-and-intent.md