KasparZ/mtext-20251122_qwen3-14b-base_merged_modified_special
Task: Text generation
Concurrency cost: 1
Model size: 14B
Quantization: FP8
Context length: 32k
Published: Jan 15, 2026
Architecture: Transformer

KasparZ/mtext-20251122_qwen3-14b-base_merged_modified_special is a 14-billion-parameter causal language model, fine-tuned with LoRA and with modifications to its token embeddings and training procedure. It was trained on the KasparZ/mtext-111025 dataset with a context length of 32,768 tokens. The model is intended for general causal language modeling; its particular training configuration may offer specialized performance characteristics.
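For models distributed in this form, loading typically goes through the Hugging Face Transformers library. The sketch below is an assumption, not usage instructions from this card: the model ID and the 32k context length come from the page above, while the `AutoModelForCausalLM`/`AutoTokenizer` loading path, `torch_dtype="auto"`, and `device_map="auto"` are standard Transformers conventions that may need adjusting for this checkpoint.

```python
# Hypothetical loading sketch for this model via Hugging Face Transformers.
# Requires the `transformers` package and enough memory for a 14B-parameter
# model; quantization (the card lists FP8) reduces the footprint.

MODEL_ID = "KasparZ/mtext-20251122_qwen3-14b-base_merged_modified_special"
MAX_CONTEXT = 32768  # 32k-token context length stated on the card


def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the given checkpoint."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",  # use the dtype stored in the checkpoint
        device_map="auto",   # spread layers across available devices
    )
    return tokenizer, model
```

Prompts longer than `MAX_CONTEXT` tokens should be truncated or chunked before generation, since the model was trained with that context length.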
