Jaume-inLab/vocabulary_sliced_CA-ES-EN-qwen3-14B
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Feb 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Jaume-inLab/vocabulary_sliced_CA-ES-EN-qwen3-14B is a 14-billion-parameter Qwen3-based causal language model with a 32,768-token context length. Developed by Jaume-inLab, it uses a vocabulary pruned down to the tokens needed for Catalan, Spanish, and English. Its main differentiator is the resulting smaller memory footprint: the sliced embedding and output layers reduce VRAM use during both training and inference, which matters most for workloads such as DPO on long sequences, where per-token logits over the full vocabulary are a major memory cost.
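The model card does not publish the slicing script, but the idea can be illustrated with a minimal Hugging Face Transformers sketch: keep only the embedding and LM-head rows for a chosen subset of token ids and shrink the model config accordingly. The base model name and the `kept_ids` subset below are illustrative assumptions, not the author's actual values, and the matching tokenizer remapping (old id to new id) is omitted.

```python
# Minimal sketch of vocabulary slicing on a Qwen3-style causal LM.
# Assumptions: base model name and kept_ids are placeholders; the real
# token subset would come from tokenizing a CA/ES/EN corpus.
import torch
from transformers import AutoModelForCausalLM

model_name = "Qwen/Qwen3-14B"  # assumed base model
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Hypothetical subset of token ids observed in Catalan/Spanish/English text.
kept_ids = sorted(set(range(50_000)))  # placeholder subset

embed = model.get_input_embeddings().weight.data      # [vocab, hidden]
lm_head = model.get_output_embeddings().weight.data   # [vocab, hidden]

# Replace the input embedding with only the kept rows.
new_embed = torch.nn.Embedding(len(kept_ids), embed.shape[1])
new_embed.weight.data = embed[kept_ids].clone()
model.set_input_embeddings(new_embed)

# Replace the output projection (LM head) with the same row subset.
new_head = torch.nn.Linear(lm_head.shape[1], len(kept_ids), bias=False)
new_head.weight.data = lm_head[kept_ids].clone()
model.set_output_embeddings(new_head)

model.config.vocab_size = len(kept_ids)
```

The saving scales with the number of removed rows times the hidden size times the bytes per parameter, counted twice when the embedding and LM head are untied; on top of the weights, the logits tensor computed during training shrinks proportionally, which is where long-sequence DPO benefits most.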
