jgchaparro/language_garden-fax-spa-4B-bl-m-merged

  • Capabilities: Vision
  • Concurrency Cost: 1
  • Model Size: 4.3B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Mar 12, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The jgchaparro/language_garden-fax-spa-4B-bl-m-merged model is a 4.3 billion parameter language model, finetuned by jgchaparro from unsloth/gemma-3-4b-it-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enabled 2x faster finetuning. The model is designed for general language tasks, leveraging its Gemma 3 4B base for efficient performance.


Model Overview

The jgchaparro/language_garden-fax-spa-4B-bl-m-merged model is a 4.3 billion parameter language model developed by jgchaparro. It is finetuned from the unsloth/gemma-3-4b-it-unsloth-bnb-4bit base model, placing its foundation in the Gemma 3 4B architecture.
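
Assuming the merged checkpoint follows the standard Gemma 3 text layout, it should load through the usual transformers AutoClasses. The snippet below is a minimal loading sketch with an illustrative prompt, not a documented usage pattern for this specific model:

```python
# Minimal loading sketch; assumes the merged weights are loadable as a
# standard causal LM checkpoint (not verified against this specific repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jgchaparro/language_garden-fax-spa-4B-bl-m-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the published quantization
    device_map="auto",
)

prompt = "Explain what a merged LoRA checkpoint is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```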

Key Characteristics

  • Base Model: Finetuned from unsloth/gemma-3-4b-it-unsloth-bnb-4bit.
  • Training Efficiency: The finetuning process used Unsloth and Hugging Face's TRL library, which enabled a 2x speedup in training compared to standard methods (see the sketch after this list).
  • License: Distributed under the Apache-2.0 license.
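
To illustrate the Unsloth + TRL workflow the card refers to, here is a hedged sketch of a supervised finetuning run against the stated base model. The dataset file, LoRA hyperparameters, and trainer settings are illustrative assumptions, not the author's actual training configuration:

```python
# Hedged sketch of an Unsloth + TRL finetuning run; all hyperparameters
# and the dataset path are illustrative, not the author's real setup.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit base checkpoint named on this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth patches the model for its ~2x speedup.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training data in a local JSONL file.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL releases name this processing_class
    train_dataset=dataset,
    args=SFTConfig(per_device_train_batch_size=2, max_steps=100,
                   output_dir="outputs"),
)
trainer.train()

# Merging the LoRA weights back into the base is what produces a
# "-merged" checkpoint like the one this card describes.
model.save_pretrained_merged("merged_model", tokenizer,
                             save_method="merged_16bit")
```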

Potential Use Cases

This model is suitable for applications that need a compact yet capable language model, particularly where efficient finetuning is a priority. Its Gemma 3 4B foundation suggests applicability to a range of general language understanding and generation tasks, as in the usage sketch below.
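
As one hedged example of general-purpose use, assuming the model retains the chat template of its instruction-tuned Gemma base (the prompt is hypothetical):

```python
# Illustrative chat-style generation via the text-generation pipeline;
# assumes the chat template is inherited from the Gemma 3 4B-it base.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jgchaparro/language_garden-fax-spa-4B-bl-m-merged",
    device_map="auto",
)

messages = [{"role": "user",
             "content": "Summarize the benefits of LoRA finetuning in two sentences."}]
result = generator(messages, max_new_tokens=128)
# For chat inputs, generated_text is the conversation with the reply appended.
print(result[0]["generated_text"][-1]["content"])
```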