
LLMs write markdown and drop interactive UI components into the same stream: charts, buttons, forms, tables, cards. No custom DSL, no JSON blocks.
pnpm add @mdocui/core @mdocui/react
The LLM streams plain text with {% tag %} delimiters. The streaming parser tokenizes each chunk as it arrives and renders live UI components — stats, charts, tables, forms — inline with prose.
No JSON schema negotiation. No tool calls. No post-processing. The model just writes, and users see rich interactive UI appear character by character.
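For illustration, a streamed response might look like the following. The component names and attributes here are hypothetical examples, not necessarily the library's actual tags:

```
Revenue grew steadily this quarter.

{% chart type="bar" data="q1,q2,q3" %}

Want the full report? Leave your email below:

{% form fields="email" %}
```

The parser treats everything outside `{% %}` as markdown prose and everything inside as a component invocation, so the model never has to switch output modes.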
Layout, interactive, data, and content components out of the box.
Character-by-character tokenizer handles partial chunks in real time.
Built-in markdown rendering. No react-markdown dependency needed.
`npx mdocui init` detects your framework and generates everything.
One broken component won't crash your chat. Graceful per-component recovery.
Components use currentColor/inherit. Your theme, your rules.
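To make the streaming behavior concrete, here is a minimal sketch of a chunk-buffering tokenizer in the same spirit: it emits text and `{% ... %}` tag tokens as soon as they complete, and holds back partial tags that straddle chunk boundaries. The names (`StreamTokenizer`, `push`) are illustrative assumptions, not the library's actual API.

```typescript
// Illustrative sketch: split incoming chunks into text and {% ... %}
// tag tokens, buffering any partial tag until the closing %} arrives.
type Token = { kind: "text" | "tag"; value: string };

class StreamTokenizer {
  private buffer = "";

  // Feed one chunk; returns the tokens completed so far.
  push(chunk: string): Token[] {
    this.buffer += chunk;
    const out: Token[] = [];
    for (;;) {
      const open = this.buffer.indexOf("{%");
      if (open === -1) {
        // No tag start: emit text, but hold a trailing "{" in case
        // the "{%" delimiter is split across two chunks.
        const safe = this.buffer.endsWith("{")
          ? this.buffer.length - 1
          : this.buffer.length;
        if (safe > 0) {
          out.push({ kind: "text", value: this.buffer.slice(0, safe) });
          this.buffer = this.buffer.slice(safe);
        }
        break;
      }
      if (open > 0) {
        // Emit the prose before the tag, then reprocess the rest.
        out.push({ kind: "text", value: this.buffer.slice(0, open) });
        this.buffer = this.buffer.slice(open);
        continue;
      }
      const close = this.buffer.indexOf("%}");
      if (close === -1) break; // partial tag: wait for more chunks
      out.push({ kind: "tag", value: this.buffer.slice(2, close).trim() });
      this.buffer = this.buffer.slice(close + 2);
    }
    return out;
  }
}
```

A tag split mid-stream (`"Hello {% ch"` then `'art type="bar" %} world'`) yields the text token immediately and the tag token only once its closing delimiter arrives, which is what lets components render live without waiting for the full response.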
Streaming parser, registry, prompt generator
24 components, Renderer, useRenderer hook
Scaffold, generate, preview
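A typical React integration might look like the sketch below. `Renderer` and `useRenderer` are named above; the exact props and streaming wiring shown here are assumptions about the API surface, not documented usage:

```tsx
// Hypothetical usage sketch -- prop names are assumptions, not the
// documented API. Accumulate streamed text and let the renderer
// re-parse it on every chunk.
import { useEffect, useState } from "react";
import { Renderer } from "@mdocui/react";

function Chat({ stream }: { stream: AsyncIterable<string> }) {
  const [text, setText] = useState("");

  useEffect(() => {
    let cancelled = false;
    (async () => {
      for await (const chunk of stream) {
        if (cancelled) break;
        setText((t) => t + chunk); // re-render on every chunk
      }
    })();
    return () => {
      cancelled = true;
    };
  }, [stream]);

  // Renders markdown prose plus any {% ... %} components inline.
  return <Renderer source={text} />;
}
```

The design point is that the renderer is a pure function of the accumulated text, so partial tags simply stay invisible until the tokenizer completes them.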
Vue renderer
Svelte renderer
Write mdocUI markup and see it render live
Streaming parser, registry, prompt generator
24 components, Renderer, useRenderer hook
Vue renderer with the same 24 components
Svelte renderer
Syntax highlighting and autocomplete for {% %} tags
Stable API, frozen component props, CHANGELOG