# Messages & Parts

## Overview
Every response in GECX Chat is a `ChatMessage` containing one or more `ChatMessagePart` objects. Each part has a `type` field that tells you what kind of content it carries -- plain text, a product carousel, a tool invocation, and so on.
You never construct these yourself on the read path. The SDK's normalizer converts raw transport events from the server into typed parts automatically, handling streaming merges, ID assignment, and status tracking along the way.
## The part-type catalog

The `ChatMessagePart` discriminated union groups parts into a handful of categories. Every part carries a stable `id`, a `type` discriminator, and category-specific fields. The complete catalog:
| Type | Description | Example use case |
|---|---|---|
| `text` | Plain text content | Simple assistant replies |
| `text-delta` | Streaming text chunk (merged into `text` on completion) | Real-time typing indicator |
| `markdown` | Formatted markdown content | Rich formatted responses |
| `citation` | Source reference with title, URL, and optional snippet | Linking to a knowledge-base article |
| `suggestion-chips` | Quick-reply options the user can tap | "Track order" / "Talk to agent" buttons |
| `product-carousel` | List of products with title, price, and image | Browsing product recommendations |
| `order-summary` | Order details with status, line items, and totals | Displaying a recent order |
| `custom` | Arbitrary payload with a `payloadType` discriminator | Domain-specific cards your backend defines |
| `tool-call` | Tool invocation lifecycle (requested / approved / executing / completed / failed) | Showing a "looking up order..." spinner |
| `tool-result` | Result returned by a tool execution | Displaying data fetched by a tool |
| `file` | File attachment with upload progress | User-uploaded images or documents |
| `agent-transfer` | Handoff to a human agent with queue position | Live-agent escalation flow |
| `diagnostic` | Debug/diagnostic information | Development-mode trace output |
| `error` | Error information with code, message, and optional docs link | Surfacing a retriable failure |
| `end-session` | Session termination signal with reason | "Chat ended" notice |
| `audio-input` | User-captured audio with retention flags | Voice composer mic indicator |
| `audio-output` | Model-generated audio with `playedUpToMs` for barge-in | Audio playback element |
| `transcript` | STT output, interim and final, with role | Live caption strip, transcript pane |
| `audio-cue` | Side-channel voice signals (barge-in, end-of-turn, etc.) | Overlay or analytics trigger |
| `vision` | First-class image input/output with required `altText` | Image previews, model-returned images |
| `memory-approval` | Long-term memory write awaiting user approval | Memory save/update/delete approval card |
| `memory-recall-result` | Result of a `memory.recall` tool call | Inline list of recalled facts |
| `sentiment-signal` | Sentiment classification with score, confidence, and polarity | Sentiment meter, escalation banner |
| `intent-signal` | Inferred intent classification with score and confidence | Intent indicator, routing input |
| `computer-use-surface` | Sandboxed browser action stream | Consent banner, screenshot stream, action log |
| `a2ui-surface` | Server-driven generative UI surface | Dynamic forms rendered from the backend |
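To make the discriminated-union shape concrete, here is a minimal TypeScript sketch covering three part types from the table. The field names beyond `id`, `type`, and those named in the table (`payloadType`) are illustrative assumptions, not the SDK's actual definitions:

```ts
// Every part carries a stable `id` and a `type` discriminator.
interface BasePart {
  id: string;
}

interface TextPart extends BasePart {
  type: 'text';
  text: string;
}

interface CitationPart extends BasePart {
  type: 'citation';
  title: string;
  url: string;
  snippet?: string;
}

interface CustomPayloadPart extends BasePart {
  type: 'custom';
  payloadType: string;
  payload: unknown;
}

type ChatMessagePart = TextPart | CitationPart | CustomPayloadPart;

// The `type` discriminator lets TypeScript narrow each branch automatically:
function describe(part: ChatMessagePart): string {
  switch (part.type) {
    case 'text':
      return part.text; // narrowed to TextPart
    case 'citation':
      return `${part.title} (${part.url})`; // narrowed to CitationPart
    case 'custom':
      return `custom payload: ${part.payloadType}`;
  }
}
```

Because every branch of the union shares the `type` discriminator, the `switch` is exhaustive and the compiler flags any part type you forget to handle.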
## Message structure

A `ChatMessage` has this shape:
```ts
interface ChatMessage {
  id: string;
  role: 'user' | 'agent';
  status: 'pending' | 'streaming' | 'completed' | 'error';
  createdAt: string;
  sessionId: string;
  turnIndex: number;
  responseId?: string;
  parts: ChatMessagePart[];
  metadata?: Record<string, unknown>;
}
```
`role` is `'user'` for messages the visitor sent and `'agent'` for everything the backend returns. `status` tracks the message lifecycle -- most rendering code only cares about `streaming` vs `completed`.
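For instance, a typing indicator only needs to know whether a message is still in flight. A small sketch using locally defined types that mirror the shape above (your code would import the real `ChatMessage` from the SDK):

```ts
type MessageStatus = 'pending' | 'streaming' | 'completed' | 'error';

interface ChatMessageLite {
  id: string;
  role: 'user' | 'agent';
  status: MessageStatus;
}

// Both pre-stream and mid-stream states warrant a typing indicator.
function isInFlight(message: ChatMessageLite): boolean {
  return message.status === 'pending' || message.status === 'streaming';
}
```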
## How normalization works
The normalizer sits between the transport layer and your UI. It does three things:
- **Merges streaming deltas.** Incoming `text-delta` parts accumulate in a buffer. When the server sends `text.completed`, the normalizer consolidates them into a single `text` part.
- **Assigns stable IDs.** Every part gets a unique `id` on creation so React keys and animation libraries work correctly.
- **Maps transport events to typed parts.** A `rich.payload` event with `payloadType: "product-carousel"` becomes a `ProductCarouselPart`. Unknown payload types become `CustomPayloadPart`.
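The delta-merge step can be sketched as follows. The event shapes here (`{ kind: 'text.delta' }`, `{ kind: 'text.completed' }`) follow the event names in the prose but are assumptions about the transport format, not the SDK's actual wire protocol:

```ts
interface TextPart {
  id: string;
  type: 'text';
  text: string;
}

type TransportEvent =
  | { kind: 'text.delta'; text: string }
  | { kind: 'text.completed' };

class DeltaMerger {
  private buffer: string[] = [];
  private nextId = 0;

  /** Buffers deltas; returns a consolidated `text` part on completion. */
  handle(event: TransportEvent): TextPart | null {
    if (event.kind === 'text.delta') {
      this.buffer.push(event.text);
      return null; // still streaming, nothing to emit yet
    }
    const part: TextPart = {
      id: `part-${this.nextId++}`, // stable ID assigned once, at creation
      type: 'text',
      text: this.buffer.join(''),
    };
    this.buffer = [];
    return part;
  }
}
```

Because the `id` is assigned when the consolidated part is created and never changes afterward, downstream consumers (React keys, animations) see one stable part rather than a churn of delta fragments.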
## Rendering parts

Iterate over a message's `parts` array and switch on `type`:
```tsx
import type { ChatMessage } from '@anthropic/gecx-chat';

function MessageBubble({ message }: { message: ChatMessage }) {
  return (
    <div className={message.role}>
      {message.parts.map((part) => {
        switch (part.type) {
          case 'text':
            return <p key={part.id}>{part.text}</p>;
          case 'suggestion-chips':
            return (
              <div key={part.id} className="chips">
                {part.chips.map((c) => (
                  <button key={c.label}>{c.label}</button>
                ))}
              </div>
            );
          case 'error':
            return (
              <p key={part.id} className="error">
                {part.userMessage ?? part.message}
              </p>
            );
          default:
            return null;
        }
      })}
    </div>
  );
}
```
If you are using the React adapter, the built-in `<MessagePart>` component handles every part type with sensible defaults. Register custom renderers via the `ChatProvider` to override any type. Signals (sentiment, intent) render as `sr-only` by default -- they're infrastructure, not presentation, so hosts opt in to a visible renderer.
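Conceptually, an override mechanism like this amounts to a lookup from part type to render function. The following is a hand-rolled illustration of that idea, not the adapter's actual `ChatProvider` API; renderers return strings here instead of React elements to keep the sketch self-contained:

```ts
interface AnyPart {
  id: string;
  type: string;
}

type Renderer = (part: AnyPart) => string;

// Registry: part type -> renderer. Registering a type overrides its default.
const renderers = new Map<string, Renderer>();

function register(type: string, render: Renderer): void {
  renderers.set(type, render);
}

function renderPart(part: AnyPart): string {
  const render = renderers.get(part.type);
  return render ? render(part) : ''; // unregistered types render nothing
}

// A host opting in to a custom renderer for one type:
register('text', (p) => `text:${p.id}`);
```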
## Type guards

The SDK exports type-guard functions that narrow a `ChatMessagePart` to its specific interface:
```ts
import { isTextPart, isToolCallPart } from '@anthropic/gecx-chat';

for (const part of message.parts) {
  if (isTextPart(part)) {
    console.log(part.text); // part is TextPart here
  }
  if (isToolCallPart(part)) {
    console.log(part.toolName, part.status); // part is ToolCallPart
  }
}
```
Guards exist for every part type. The base set: `isTextPart`, `isTextDeltaPart`, `isMarkdownPart`, `isCitationPart`, `isSuggestionChipsPart`, `isProductCarouselPart`, `isOrderSummaryPart`, `isCustomPayloadPart`, `isToolCallPart`, `isToolResultPart`, `isFilePart`, `isAgentTransferPart`, `isDiagnosticPart`, `isEndSessionPart`, `isErrorPart`, and `isA2UISurfacePart`. Voice and multimodal: `isAudioInputPart`, `isAudioOutputPart`, `isTranscriptPart`, `isAudioCuePart`, `isVisionPart`. Memory: `isMemoryApprovalPart`, `isMemoryRecallResultPart`. Signals: `isSentimentSignalPart`, `isIntentSignalPart`, and a combined `isSignalPart`. Computer-use: `isComputerUseSurfacePart`.
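Each guard is a simple predicate on the `type` discriminator with a TypeScript type predicate as its return type. A sketch of how one can be written, using a reduced two-member union (the SDK ships its own implementations):

```ts
interface TextPart {
  id: string;
  type: 'text';
  text: string;
}

interface ErrorPart {
  id: string;
  type: 'error';
  message: string;
}

type ChatMessagePart = TextPart | ErrorPart;

// The `part is TextPart` return type tells the compiler to narrow the
// union inside any `if` block guarded by this function.
function isTextPart(part: ChatMessagePart): part is TextPart {
  return part.type === 'text';
}
```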
## Sentiment and intent signals

`SentimentSignalPart` and `IntentSignalPart` flow through the same `ChatMessage.parts[]` array as text. Each carries:
- `score` -- a numeric value in the range appropriate for the category (sentiment: -1 to 1; intent: 0 to 1).
- `confidence` -- adapter confidence in the classification.
- `source` -- which adapter produced the emission (`rule`, `tfjs-toxicity`, `model-tool`, `gemini`, `claude`, `openai`).
- `polarity` -- for sentiment: `'positive' | 'neutral' | 'negative'`.
- `latencyMs` -- end-to-end latency from message arrival to signal emission.
- `attributedMessageId` -- pointer back to the message that caused the shift.
The default renderer is `sr-only`. See Signals for the mental model and Sentiment and Intent for the integration walkthrough.
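A common consumer is an escalation banner that reacts to confident, clearly negative sentiment. The sketch below defines the signal shape locally from the bullet list above; the `-0.5` score and `0.8` confidence thresholds are illustrative choices, not SDK defaults:

```ts
interface SentimentSignalPart {
  id: string;
  type: 'sentiment-signal';
  score: number;      // -1 (negative) .. 1 (positive)
  confidence: number; // 0 .. 1
  polarity: 'positive' | 'neutral' | 'negative';
}

// Only act on confident, clearly negative classifications so a single
// ambiguous message doesn't trigger an escalation.
function shouldEscalate(signal: SentimentSignalPart): boolean {
  return (
    signal.polarity === 'negative' &&
    signal.score <= -0.5 &&
    signal.confidence >= 0.8
  );
}
```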
## Computer-use surface

`ComputerUseSurfacePart` represents an ongoing sandboxed browser session. It carries the session `id`, the current `state` (consent, awaiting approval, executing, completed, aborted), a signed SSE screenshot-stream URL, and an action log. The default React renderer is `<ComputerUseSurface>` -- a `sandbox=""` iframe (no scripts, no same-origin), an action log, a consent banner, a per-action `ConfirmDialog`, and an always-available Abort. See Computer-use.
## What's next

- Message Parts Reference -- full property docs for every part interface.
- Custom Renderers Guide -- replace or extend default rendering for any part type.
- React Integration Guide -- using `ChatProvider`, `<MessagePart>`, and hooks.