Messages & Parts

Overview

Every message in a GECX Chat conversation -- whether sent by the visitor or returned by the backend -- is a ChatMessage containing one or more ChatMessagePart objects. Each part has a type field that tells you what kind of content it carries: plain text, a product carousel, a tool invocation, and so on.

You never construct these yourself on the read path. The SDK's normalizer converts raw transport events from the server into typed parts automatically, handling streaming merges, ID assignment, and status tracking along the way.

The part-type catalog

The ChatMessagePart discriminated union groups parts into a handful of categories. Every part carries a stable id, a type discriminator, and category-specific fields. The complete catalog:

| Type | Description | Example use case |
| --- | --- | --- |
| text | Plain text content | Simple assistant replies |
| text-delta | Streaming text chunk (merged into text on completion) | Real-time typing indicator |
| markdown | Formatted markdown content | Rich formatted responses |
| citation | Source reference with title, URL, and optional snippet | Linking to a knowledge-base article |
| suggestion-chips | Quick-reply options the user can tap | "Track order" / "Talk to agent" buttons |
| product-carousel | List of products with title, price, image | Browsing product recommendations |
| order-summary | Order details with status, line items, totals | Displaying a recent order |
| custom | Arbitrary payload with a payloadType discriminator | Domain-specific cards your backend defines |
| tool-call | Tool invocation lifecycle (requested / approved / executing / completed / failed) | Showing a "looking up order..." spinner |
| tool-result | Result returned by a tool execution | Displaying data fetched by a tool |
| file | File attachment with upload progress | User-uploaded images or documents |
| agent-transfer | Handoff to a human agent with queue position | Live-agent escalation flow |
| diagnostic | Debug/diagnostic information | Development-mode trace output |
| error | Error information with code, message, and optional docs link | Surfacing a retriable failure |
| end-session | Session termination signal with reason | "Chat ended" notice |
| audio-input | User-captured audio with retention flags | Voice composer mic indicator |
| audio-output | Model-generated audio with playedUpToMs for barge-in | Audio playback element |
| transcript | STT output, interim and final, with role | Live caption strip, transcript pane |
| audio-cue | Side-channel voice signals (barge-in, end-of-turn, etc.) | Overlay or analytics trigger |
| vision | First-class image input/output with required altText | Image previews, model-returned images |
| memory-approval | Long-term memory write awaiting user approval | Memory save/update/delete approval card |
| memory-recall-result | Result of a memory.recall tool call | Inline list of recalled facts |
| sentiment-signal | Sentiment classification with score, confidence, polarity | Sentiment meter, escalation banner |
| intent-signal | Inferred intent classification with score and confidence | Intent indicator, routing input |
| computer-use-surface | Sandboxed browser action stream | Consent banner, screenshot stream, action log |
| a2ui-surface | Server-driven generative UI surface | Dynamic forms rendered from the backend |
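
The union is discriminated on type. As a rough sketch of the shape -- the fields beyond id and type below are abbreviated illustrations, not the SDK's exact interfaces:

// Illustrative sketch only; consult the type reference for the real interfaces.
interface TextPart {
  id: string;          // stable ID assigned by the normalizer
  type: 'text';        // discriminator
  text: string;
}

interface SuggestionChipsPart {
  id: string;
  type: 'suggestion-chips';
  chips: Array<{ label: string; value?: string }>;
}

interface ErrorPart {
  id: string;
  type: 'error';
  code: string;
  message: string;
  userMessage?: string; // optional user-facing copy, as used in the rendering example below
  docsUrl?: string;
}

// The real union covers every row in the table above.
type ChatMessagePart = TextPart | SuggestionChipsPart | ErrorPart /* | ... */;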

Message structure

A ChatMessage has this shape:

interface ChatMessage {
  id: string;
  role: 'user' | 'agent';
  status: 'pending' | 'streaming' | 'completed' | 'error';
  createdAt: string;
  sessionId: string;
  turnIndex: number;
  responseId?: string;
  parts: ChatMessagePart[];
  metadata?: Record<string, unknown>;
}

role is 'user' for messages the visitor sent and 'agent' for everything the backend returns. status tracks the message lifecycle -- most rendering code only cares about streaming vs completed.
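
A minimal sketch of that check, treating anything not yet completed (or errored) as still arriving:

import type { ChatMessage } from '@anthropic/gecx-chat';

// 'pending' and 'streaming' both mean content may still arrive;
// 'completed' and 'error' are terminal.
function isStillArriving(message: ChatMessage): boolean {
  return message.status === 'pending' || message.status === 'streaming';
}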

How normalization works

The normalizer sits between the transport layer and your UI. It does three things:

  1. Merges streaming deltas. Incoming text-delta parts accumulate in a buffer. When the server sends text.completed, the normalizer consolidates them into a single text part.
  2. Assigns stable IDs. Every part gets a unique id on creation so React keys and animation libraries work correctly.
  3. Maps transport events to typed parts. A rich.payload event with payloadType: "product-carousel" becomes a ProductCarouselPart. Unknown payload types become CustomPayloadPart.
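
As a simplified illustration of the delta-merge step, assuming each text-delta part carries its chunk in a text field -- this is a sketch of the behavior, not the normalizer's actual implementation:

import type { ChatMessagePart } from '@anthropic/gecx-chat';

// Collapse a run of text-delta parts into a single text part, keeping the
// first delta's ID so keys stay stable. Field names are assumptions.
function mergeDeltas(parts: ChatMessagePart[]): ChatMessagePart[] {
  const merged: ChatMessagePart[] = [];
  let buffer = '';
  let firstDeltaId: string | null = null;

  for (const part of parts) {
    if (part.type === 'text-delta') {
      buffer += part.text;
      firstDeltaId ??= part.id;
      continue;
    }
    if (firstDeltaId !== null) {
      merged.push({ id: firstDeltaId, type: 'text', text: buffer });
      buffer = '';
      firstDeltaId = null;
    }
    merged.push(part);
  }

  if (firstDeltaId !== null) {
    merged.push({ id: firstDeltaId, type: 'text', text: buffer });
  }
  return merged;
}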

Rendering parts

Iterate a message's parts array and switch on type:

import type { ChatMessage } from '@anthropic/gecx-chat';

function MessageBubble({ message }: { message: ChatMessage }) {
  return (
    <div className={message.role}>
      {message.parts.map((part) => {
        switch (part.type) {
          case 'text':
            return <p key={part.id}>{part.text}</p>;
          case 'suggestion-chips':
            return (
              <div key={part.id} className="chips">
                {part.chips.map((c) => (
                  <button key={c.label}>{c.label}</button>
                ))}
              </div>
            );
          case 'error':
            return <p key={part.id} className="error">{part.userMessage ?? part.message}</p>;
          default:
            return null;
        }
      })}
    </div>
  );
}

If you are using the React adapter, the built-in <MessagePart> component handles every part type with sensible defaults. Register custom renderers via the ChatProvider to override any type. Signals (sentiment, intent) render as sr-only by default — they're infrastructure, not presentation, so hosts opt in to a visible renderer.
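
The registration below is only a sketch of that idea: the React-adapter import path, the partRenderers prop name, the renderer signature, and the product field names are all assumptions for illustration -- see the ChatProvider reference for the actual override API.

import { ChatProvider } from '@anthropic/gecx-chat/react';

// Hypothetical override: render product carousels yourself, keep defaults
// for every other part type.
function App() {
  return (
    <ChatProvider
      partRenderers={{
        'product-carousel': (part) => (
          <ul className="my-carousel">
            {part.products.map((p) => (
              <li key={p.id}>
                {p.title} ({p.price})
              </li>
            ))}
          </ul>
        ),
      }}
    >
      {/* your chat UI */}
    </ChatProvider>
  );
}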

Type guards

The SDK exports type-guard functions that narrow a ChatMessagePart to its specific interface:

import { isTextPart, isToolCallPart } from '@anthropic/gecx-chat';

for (const part of message.parts) {
  if (isTextPart(part)) {
    console.log(part.text); // part is TextPart here
  }
  if (isToolCallPart(part)) {
    console.log(part.toolName, part.status); // part is ToolCallPart
  }
}

Guards exist for every part type:

  • Base: isTextPart, isTextDeltaPart, isMarkdownPart, isCitationPart, isSuggestionChipsPart, isProductCarouselPart, isOrderSummaryPart, isCustomPayloadPart, isToolCallPart, isToolResultPart, isFilePart, isAgentTransferPart, isDiagnosticPart, isEndSessionPart, isErrorPart, and isA2UISurfacePart
  • Voice and multimodal: isAudioInputPart, isAudioOutputPart, isTranscriptPart, isAudioCuePart, isVisionPart
  • Memory: isMemoryApprovalPart, isMemoryRecallResultPart
  • Signals: isSentimentSignalPart, isIntentSignalPart, plus a combined isSignalPart
  • Computer-use: isComputerUseSurfacePart
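
One practical use of the combined guard is separating signal parts from the parts you actually render, for example:

import { isSignalPart } from '@anthropic/gecx-chat';
import type { ChatMessage } from '@anthropic/gecx-chat';

// Keep signals out of the visible transcript; hand them to analytics instead.
function splitParts(message: ChatMessage) {
  const signals = message.parts.filter(isSignalPart);
  const visible = message.parts.filter((part) => !isSignalPart(part));
  return { signals, visible };
}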

Sentiment and intent signals

SentimentSignalPart and IntentSignalPart flow through the same ChatMessage.parts[] array as text. Each carries:

  • score — a numeric value in the range appropriate for the category (sentiment: -1 to 1; intent: 0 to 1).
  • confidence — adapter confidence in the classification.
  • source — which adapter produced the emission (rule, tfjs-toxicity, model-tool, gemini, claude, openai).
  • polarity — for sentiment: 'positive' | 'neutral' | 'negative'.
  • latencyMs — end-to-end attribution from message arrival to emission.
  • attributedMessageId — pointer back at the message that caused the shift.

The default renderer is sr-only. See Signals for the mental model and Sentiment and Intent for the integration walkthrough.
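
For example, a host might watch negative sentiment to decide when to offer escalation. The thresholds below are arbitrary illustrations, not recommended values:

import { isSentimentSignalPart } from '@anthropic/gecx-chat';
import type { ChatMessage } from '@anthropic/gecx-chat';

// Arbitrary example thresholds -- tune to your own escalation policy.
function shouldOfferEscalation(message: ChatMessage): boolean {
  return message.parts.some(
    (part) =>
      isSentimentSignalPart(part) &&
      part.polarity === 'negative' &&
      part.score <= -0.5 &&
      part.confidence >= 0.7,
  );
}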

Computer-use surface

ComputerUseSurfacePart represents an ongoing sandboxed browser session. It carries the session id, the current state (consent, awaiting approval, executing, completed, aborted), a signed SSE screenshot stream URL, and an action log. The default React renderer is <ComputerUseSurface> — a sandbox="" iframe (no scripts, no same-origin), an action log, a consent banner, a per-action ConfirmDialog, and an always-available Abort. See Computer-use.
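
If you replace the default component with your own renderer, the state field drives what to show. The string identifiers below ('awaiting-approval' and friends) are assumptions based on the states listed above, not confirmed values:

import type { ComputerUseSurfacePart } from '@anthropic/gecx-chat';

// Illustrative only -- the exact state identifiers are assumed, not confirmed.
function ComputerUseStatus({ part }: { part: ComputerUseSurfacePart }) {
  switch (part.state) {
    case 'consent':
      return <p>The agent is asking for permission to start a browser session.</p>;
    case 'awaiting-approval':
      return <p>Waiting for you to approve the next action.</p>;
    case 'executing':
      return <p>Running in the sandbox. You can abort at any time.</p>;
    case 'completed':
      return <p>Browser session finished.</p>;
    case 'aborted':
      return <p>Browser session aborted.</p>;
    default:
      return null;
  }
}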

What's next

Source: docs/concepts/messages-and-parts.md