Documentation Index

Fetch the complete documentation index at: https://docs.outkit.dev/llms.txt

Use this file to discover all available pages before exploring further.

See how Outkit transforms different types of AI responses into structured components.

Database Comparison

A user asks: “Compare PostgreSQL and MongoDB for my e-commerce startup and tell me what I need to set up PostgreSQL”

Raw AI output (markdown):
PostgreSQL and MongoDB are both excellent databases, but they serve
different needs for an e-commerce platform.

| Feature | PostgreSQL | MongoDB |
|---------|-----------|---------|
| Data Model | Relational | Document |
| ACID | Full | Multi-doc since 4.0 |
| Horizontal Scaling | Limited (Citus) | Native sharding |
| Best For | Transactions, analytics | Catalogs, content |

**Important:** If you're handling payments, PostgreSQL's ACID guarantees
are non-negotiable.

Here's what you need to set up PostgreSQL:
1. Install PostgreSQL 16
2. Configure pg_hba.conf for authentication
3. Create your database and roles
4. Set up connection pooling with PgBouncer
5. Configure automated backups
6. Set up monitoring with pg_stat_statements
Outkit output (3 components detected in ~3.8s):
{
  "blocks": [
    {
      "component": "text",
      "version": "1.0",
      "props": {
        "content": "PostgreSQL and MongoDB are both excellent databases, but they serve different needs for an e-commerce platform.",
        "variant": "prose"
      }
    },
    {
      "component": "table",
      "version": "1.0",
      "props": {
        "columns": ["Feature", "PostgreSQL", "MongoDB"],
        "rows": [
          { "Feature": "Data Model", "PostgreSQL": "Relational", "MongoDB": "Document" },
          { "Feature": "ACID", "PostgreSQL": "Full", "MongoDB": "Multi-doc since 4.0" },
          { "Feature": "Horizontal Scaling", "PostgreSQL": "Limited (Citus)", "MongoDB": "Native sharding" },
          { "Feature": "Best For", "PostgreSQL": "Transactions, analytics", "MongoDB": "Catalogs, content" }
        ],
        "sortable": true
      },
      "meta": { "confidence": 0.95 }
    },
    {
      "component": "alert",
      "version": "1.0",
      "props": {
        "variant": "warning",
        "title": "Payment Data Integrity",
        "message": "If you're handling payments, PostgreSQL's ACID guarantees are non-negotiable. MongoDB's eventual consistency model can lead to data integrity issues in financial transactions."
      },
      "meta": { "confidence": 0.90 }
    },
    {
      "component": "text",
      "version": "1.0",
      "props": {
        "content": "Here's what you need to set up PostgreSQL:",
        "variant": "prose"
      }
    },
    {
      "component": "checkbox",
      "version": "1.0",
      "props": {
        "title": "PostgreSQL Setup Checklist",
        "items": [
          { "id": "install", "label": "Install PostgreSQL 16" },
          { "id": "auth", "label": "Configure pg_hba.conf for authentication" },
          { "id": "db", "label": "Create your database and roles" },
          { "id": "pool", "label": "Set up connection pooling with PgBouncer" },
          { "id": "backup", "label": "Configure automated backups" },
          { "id": "monitor", "label": "Set up monitoring with pg_stat_statements" }
        ],
        "showProgress": true
      },
      "meta": { "confidence": 0.95 }
    }
  ],
  "design": {
    "--outkit-primary": "#b30069",
    "--outkit-bg": "#fbf9f8",
    "--outkit-text": "#1b1c1c"
  }
}
What happened:
  • The markdown table became a structured, sortable table component
  • The bold warning became a warning alert with a clear title
  • The numbered list became an interactive checkbox checklist with progress tracking
  • Prose text was preserved as text blocks between components
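The design object returned alongside the blocks is a set of plain CSS custom properties. AIRenderer applies them for you, but if you ever need them elsewhere (say, theming surrounding UI to match), serializing them is trivial. A minimal sketch — the helper name designToCss is ours, not an SDK export:

```typescript
// Hypothetical helper: serialize an Outkit design object into an inline
// CSS declaration string, usable in a style attribute or a :root rule.
function designToCss(design: Record<string, string>): string {
  return Object.entries(design)
    .map(([token, value]) => `${token}: ${value};`)
    .join(' ');
}

// Using the design object from the example above:
const css = designToCss({
  '--outkit-primary': '#b30069',
  '--outkit-bg': '#fbf9f8',
  '--outkit-text': '#1b1c1c',
});
// css === '--outkit-primary: #b30069; --outkit-bg: #fbf9f8; --outkit-text: #1b1c1c;'
```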

Streaming Response

Streaming lets you render progressively. Here’s how SSE events arrive:
data: {"type":"meta","design":{"--outkit-primary":"#b30069","--outkit-bg":"#fbf9f8"}}

data: {"c":"text","v":"1.0","p":{"content":"Here are the results:","variant":"prose"}}

data: {"c":"table","v":"1.0","p":{"columns":["Name","Score"],"rows":[{"Name":"Alice","Score":"95"},{"Name":"Bob","Score":"87"}],"sortable":true}}

data: [DONE]
Rendering timeline:
  1. meta arrives → design tokens extracted, buffered blocks released
  2. text block arrives → render “Here are the results:” immediately
  3. table block arrives → render the full interactive table
  4. [DONE] → stream complete
With feedResponse, all of this happens automatically — you don’t parse any of it.
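For reference, here is roughly what that parsing involves if you did it by hand. This is a sketch of the wire format shown above (meta events, block events with the short c/v/p keys, and a [DONE] sentinel) — parseSseLine is illustrative, not an SDK export:

```typescript
// Sketch of the per-line SSE parsing that feedResponse handles for you.
type StreamEvent =
  | { kind: 'meta'; design: Record<string, string> }
  | { kind: 'block'; component: string; version: string; props: unknown }
  | { kind: 'done' };

function parseSseLine(line: string): StreamEvent | null {
  // SSE frames the payload as "data: <json or [DONE]>".
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length).trim();
  if (payload === '[DONE]') return { kind: 'done' };
  const msg = JSON.parse(payload);
  if (msg.type === 'meta') return { kind: 'meta', design: msg.design };
  // Block events use the abbreviated keys shown above: c / v / p.
  return { kind: 'block', component: msg.c, version: msg.v, props: msg.p };
}
```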

Multi-Turn Chat

A real-world pattern: each message in a chat gets its own useBlockStream instance.
import { AIRenderer, useBlockStream } from '@outkit-dev/react';

function ChatMessage({ messageId }: { messageId: string }) {
  const { blocks, design, isStreaming, streamState, feedResponse, reset } = useBlockStream();

  const enhance = async () => {
    reset();
    try {
      const response = await fetch(`/api/enhance/${messageId}`);
      const result = await feedResponse(response);
      console.log(`Enhanced: ${result.blocks.length} blocks`);
    } catch (err) {
      console.error('Enhance failed:', err);
    }
  };

  return (
    <div>
      {streamState === 'idle' && (
        <button onClick={enhance}>Enhance with Outkit</button>
      )}
      {blocks.length > 0 && (
        <AIRenderer blocks={blocks} design={design} streaming={isStreaming} theme="auto" />
      )}
    </div>
  );
}

function ChatThread({ messages }: { messages: { id: string; text: string }[] }) {
  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>
          <p>{msg.text}</p>
          <ChatMessage messageId={msg.id} />
        </div>
      ))}
    </div>
  );
}
Each ChatMessage manages its own streaming state independently — no shared state, no conflicts between concurrent streams.

Integration Patterns

Two complete patterns showing how to call your LLM, collect the response, send it to Outkit, and render the result. Pick the one that matches your architecture.

Pattern 1: Non-Streaming (JSON mode)

Best when you already have the full LLM response as a string — batch workflows, saved messages, or LLMs that don’t stream.

Backend — calls your LLM, then calls Outkit with ?stream=false:
app.post('/api/ask', async (req, res) => {
  const { question } = req.body;

  // Step 1: Get the full LLM response
  const llmResponse = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: question }],
  });
  const aiText = llmResponse.choices[0].message.content;

  // Step 2: Send the complete text to Outkit (JSON mode, no streaming)
  const outkitRes = await fetch('https://api.outkit.dev/render/enhance?stream=false', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-outkit-api-key': process.env.OUTKIT_API_KEY!,
    },
    body: JSON.stringify({
      content: aiText,
      context: question,
    }),
  });

  // Step 3: Return the structured result to the frontend
  const data = await outkitRes.json();
  res.json(data);
});
Frontend — fetch and render:
import { AIRenderer } from '@outkit-dev/react';
import { useState } from 'react';

function App() {
  const [blocks, setBlocks] = useState<any[]>([]);
  const [design, setDesign] = useState<Record<string, string> | undefined>(undefined);

  const ask = async (question: string) => {
    const res = await fetch('/api/ask', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ question }),
    });
    const data = await res.json();
    setBlocks(data.blocks);
    setDesign(data.design);
  };

  return (
    <div>
      <button onClick={() => ask('Compare React and Vue')}>Ask</button>
      {blocks.length > 0 && <AIRenderer blocks={blocks} design={design} />}
    </div>
  );
}
JSON mode (?stream=false) waits for Outkit to finish processing before returning. Latency is higher but the integration is simpler — no SSE parsing needed.
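If you want type safety on that JSON payload, its shape can be described from the example response earlier on this page. The names OutkitBlock and OutkitResponse below are ours, not SDK exports — check the SDK’s own types if it ships them:

```typescript
// Assumed shape of the ?stream=false response, inferred from the example
// payload in the Database Comparison section.
interface OutkitBlock {
  component: string;
  version: string;
  props: Record<string, unknown>;
  meta?: { confidence: number };
}

interface OutkitResponse {
  blocks: OutkitBlock[];
  design?: Record<string, string>;
}

// Narrowing guard to run before handing data to setBlocks / setDesign.
function isOutkitResponse(x: unknown): x is OutkitResponse {
  return (
    typeof x === 'object' &&
    x !== null &&
    Array.isArray((x as { blocks?: unknown }).blocks)
  );
}
```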

Pattern 2: Streaming (SSE)

Best for real-time UIs where you want progressive rendering as Outkit processes. The LLM response is collected first, then streamed through Outkit.

Backend — calls your LLM (streaming or not), collects the full text, then proxies Outkit’s SSE stream:
app.post('/api/ask', async (req, res) => {
  const { question } = req.body;

  // Step 1: Stream the LLM response and collect it
  let aiText = '';
  const llmStream = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: question }],
    stream: true,
  });
  for await (const chunk of llmStream) {
    aiText += chunk.choices[0]?.delta?.content ?? '';
  }

  // Step 2: Send the collected text to Outkit (SSE mode, the default)
  const outkitRes = await fetch('https://api.outkit.dev/render/enhance', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-outkit-api-key': process.env.OUTKIT_API_KEY!,
    },
    body: JSON.stringify({
      content: aiText,
      context: question,
    }),
  });

  if (!outkitRes.ok || !outkitRes.body) {
    return res.status(outkitRes.status).end();
  }

  // Step 3: Proxy the SSE stream to the frontend
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');

  const reader = outkitRes.body.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      res.write(value);
    }
  } finally {
    res.end();
  }
});
Frontend — feedResponse handles all SSE parsing automatically:
import { AIRenderer, useBlockStream } from '@outkit-dev/react';

function App() {
  const { blocks, design, isStreaming, feedResponse, reset } = useBlockStream();

  const ask = async (question: string) => {
    reset();
    try {
      const response = await fetch('/api/ask', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ question }),
      });
      const { blocks, design } = await feedResponse(response);
      console.log(`Done: ${blocks.length} blocks`);
    } catch (err) {
      console.error('Failed:', err);
    }
  };

  return (
    <div>
      <button onClick={() => ask('Compare React and Vue')}>Ask</button>
      <AIRenderer blocks={blocks} design={design} streaming={isStreaming} theme="auto" />
    </div>
  );
}
Which pattern should I use? Start with Pattern 2 (streaming) for the best user experience — components appear progressively as Outkit processes. Use Pattern 1 (JSON) when you need simplicity or your infrastructure doesn’t support SSE proxying.