Penpot has built an MCP server that lets AI assistants interact directly with design files through the Penpot API, something no LLM can do natively. The server acts as a secure bridge: the AI parses your intent, the MCP server selects the correct API operation, and Penpot executes it. Because Penpot stores designs as code rather than opaque visual data, the server can perform granular, programmatic operations including design-to-code conversion, code-to-design reversal, component generation from scribbles, design system documentation, visual regression testing, and accessibility analysis.
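The bridge pattern described above can be sketched in a few lines: the LLM emits a structured tool call, the MCP server routes it to exactly one API operation, and the result flows back. Everything below is illustrative, not Penpot's real tool names or API, and the handler returns a stub where a real server would call the Penpot REST API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str        # tool the LLM selected from the server's advertised manifest
    arguments: dict  # structured arguments parsed from the user's intent

# Registry mapping tool names to handlers that would wrap Penpot API calls.
TOOLS: dict[str, Callable[[dict], dict]] = {}

def tool(name: str):
    """Decorator registering a handler under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("export_component_code")  # hypothetical tool name for design-to-code
def export_component_code(args: dict) -> dict:
    # A real handler would call the Penpot API here; this is a stub.
    return {"file_id": args["file_id"],
            "language": args.get("language", "tsx"),
            "status": "exported"}

def dispatch(call: ToolCall) -> dict:
    """Server-side step: route the LLM's tool call to one API operation."""
    handler = TOOLS.get(call.name)
    if handler is None:
        raise ValueError(f"unknown tool: {call.name}")
    return handler(call.arguments)

result = dispatch(ToolCall("export_component_code", {"file_id": "abc123"}))
print(result["status"])  # prints "exported"
```

The LLM never touches the API directly; it only ever names a tool and supplies arguments, which is what keeps the bridge narrow and auditable.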
The implementation is LLM-agnostic and works with Claude in VS Code, Claude Desktop, Cursor, JetBrains Rider with Junie AI, and any other MCP-enabled client. It includes a Python SDK, REST API, plugin system, and CLI tools. Demos from Penpot Fest 2025 show Dominik Jain of Oraios AI generating a Node.js app from a design, validating frontend styles for consistency, and replacing a hand-drawn scribble with a production component, all via natural language input to Claude Desktop. Videos 03, 04, 06, 08, and 12 in Penpot's public Google Drive demo folder are the ones the Penpot team itself flags as essential viewing.
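Hooking an MCP server into one of those clients typically means adding an entry to the client's MCP configuration (Claude Desktop reads `claude_desktop_config.json`, for example). The command, module name, and environment variable below are placeholders for illustration, not Penpot's published values:

```json
{
  "mcpServers": {
    "penpot": {
      "command": "python",
      "args": ["-m", "penpot_mcp"],
      "env": { "PENPOT_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Because the protocol is the contract, the same server entry works across any MCP-enabled client; only the location of the config file changes.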
The project is pre-beta and actively soliciting community experiments through a public showcase thread on the Penpot community forum. What makes the full article worth reading is not the feature list but the specific demo breakdowns, the explanation of why Penpot's design-as-code architecture makes its MCP implementation structurally different from competitors, and the direct link to Penpot's AI whitepaper, which categorizes common AI design workflows as bad or ugly and presents the MCP approach as the only one worth building on.
[READ ORIGINAL →]