Tesslate/UIGEN-T3-32B-Preview (Hugging Face)

Text Generation · Model Size: 32B · Quant: FP8 · Context Length: 32k · Concurrency Cost: 2 · Architecture: Transformer · Published: Jun 9, 2025

Tesslate/UIGEN-T3-32B-Preview is a 32 billion parameter UI generation model built on the Qwen3 architecture, designed for creating both web components and full web pages. It features a unique hybrid reasoning system, allowing users to choose between guided reasoning for thoughtful design or faster, raw code generation. This model is specifically optimized for UI generation tasks, leveraging the UIGenEval framework for evaluation, and is intended for research and non-commercial use.


UIGEN-T3-32B-Preview: Advanced UI Generation with Hybrid Reasoning

Tesslate's UIGEN-T3-32B-Preview is a 32 billion parameter model built on the Qwen3 architecture, specifically engineered for generating user interfaces. It can produce both individual UI components and complete web pages, including full <html> and <head> structures. A key differentiator is its hybrid reasoning system, which offers two modes: /think for guided reasoning with layout analysis and heuristics, and /no_think for faster, direct code generation. Outputs also include design tokens for easier site-wide customization.
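The mode switch lives in the prompt itself. A minimal sketch of how one might toggle between the two modes, assuming Qwen3-style soft switches where the tag is appended to the user message (the `build_messages` helper is hypothetical, not part of the model's API):

```python
def build_messages(prompt: str, think: bool = True) -> list[dict]:
    """Build a chat message list with the reasoning-mode tag appended.

    Assumes Qwen3-style soft switches: '/think' requests guided
    reasoning with layout analysis, '/no_think' requests faster,
    direct code generation.
    """
    tag = "/think" if think else "/no_think"
    return [{"role": "user", "content": f"{prompt} {tag}"}]

# Guided reasoning for a component request
msgs = build_messages("Create a pricing card component with Tailwind CSS")

# Direct generation, skipping the reasoning trace
fast_msgs = build_messages("Create a navbar", think=False)
```

The resulting message list would then be passed through the tokenizer's chat template as with any Qwen3-family model.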

Key Capabilities

  • Generates UI components and full web pages.
  • Offers hybrid reasoning (/think vs. /no_think) for flexible generation.
  • Evaluated using UIGenEval, a specialized framework assessing technical quality, prompt adherence, interaction behavior, and responsive design.
  • Supports user-supplied or placeholder images (no dataset images due to licensing).
  • Supports a 32,768-token context length, with roughly 20k tokens recommended when using reasoning mode.
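Because reasoning traces consume part of the generation budget, it can help to check that a prompt plus its expected output stays under the recommended ceiling. A rough sketch using the limits above (the helper and the token counts are illustrative; in practice the prompt length would come from the model's tokenizer):

```python
CTX_LIMIT = 32_768          # model context window
REASONING_BUDGET = 20_000   # recommended ceiling when using /think

def fits_reasoning_budget(prompt_tokens: int, max_new_tokens: int) -> bool:
    """Return True if prompt + generation fits the recommended
    reasoning-mode budget (20k of the 32k window)."""
    total = prompt_tokens + max_new_tokens
    return total <= min(REASONING_BUDGET, CTX_LIMIT)

# A 4k-token spec with room for an 8k-token reasoned page generation
ok = fits_reasoning_budget(4_000, 8_000)
```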

Good for

  • Startup MVPs: Rapidly scaffolding UIs from scratch with clean code.
  • Component Libraries: Building and exporting buttons, cards, and navigation bars at scale.
  • Internal Tool Builders: Creating admin panels, dashboards, and layout templates.
  • Rapid Client Prototypes: Generating production-ready HTML+Tailwind outputs to save time on mockups.

This model is released for research and non-commercial use only, with commercial licensing available via pilot programs. Running 32B inference requires a GPU with at least 24GB of VRAM.