benjaminsinzore/Basqui-R1-4B-v1

Hugging Face

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 3.2B parameters
  • Quantization: BF16
  • Context length: 32k
  • Published: May 22, 2025
  • License: apache-2.0
  • Architecture: Transformer
  • Open weights

Basqui-R1-4B-v1 is a 4 billion parameter Llama model developed by benjaminsinzore, finetuned using Unsloth together with Hugging Face's TRL library. The model's standout trait is its accelerated training, with the Unsloth toolchain delivering roughly 2x faster finetuning. It is designed for general language tasks, leveraging this efficient training methodology to provide a capable model within its parameter class.

Basqui-R1-4B-v1: An Efficiently Finetuned Llama Model

Basqui-R1-4B-v1 is a 4 billion parameter Llama-based model developed by benjaminsinzore. It stands out for its highly optimized finetuning process, built on the Unsloth library in conjunction with Hugging Face's TRL library. This combination allowed the model to be trained roughly 2x faster than conventional finetuning pipelines, making it a compelling choice for developers who value fast training iteration.
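The exact training recipe behind Basqui-R1-4B-v1 has not been published, but the Unsloth + TRL combination described above typically follows a well-known pattern. The sketch below illustrates that pattern under stated assumptions: the base checkpoint, dataset, and hyperparameters are placeholders, not the author's actual choices, and the SFTTrainer signature follows the classic API used in Unsloth's example notebooks (newer TRL releases move some of these arguments into SFTConfig).

```python
# Minimal sketch of an Unsloth + TRL supervised finetuning loop. All names
# below are placeholders: the real base checkpoint, dataset, and
# hyperparameters behind Basqui-R1-4B-v1 are not published.
import torch
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Unsloth's patched loader applies the kernel-level optimizations
# responsible for the advertised ~2x finetuning speedup.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder base model
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is updated.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    use_gradient_checkpointing="unsloth",
)

# Placeholder instruction dataset, flattened into a single "text" field.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    return {
        "text": (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}" + tokenizer.eos_token
        )
    }

dataset = dataset.map(to_text)

# Classic TRL SFTTrainer signature as used in Unsloth's example notebooks;
# newer TRL versions move dataset_text_field/max_seq_length into SFTConfig.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        bf16=torch.cuda.is_bf16_supported(),
        fp16=not torch.cuda.is_bf16_supported(),
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```

The LoRA-adapter step is what keeps memory usage low enough for a ~4B model to be finetuned on a single consumer GPU, which is the usual motivation for this toolchain.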

Key Capabilities

  • Accelerated Finetuning: Leverages Unsloth for significantly faster training times.
  • Llama Architecture: Built upon the robust Llama model family, providing a strong foundation for various NLP tasks.
  • General Purpose: Suitable for a broad range of language understanding and generation applications; see the quick-start sketch after this list.
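
For readers who want to try the model, the snippet below is a minimal quick-start sketch. It assumes the checkpoint loads through the standard transformers AutoModelForCausalLM API, which is typical for Unsloth-finetuned Llama models pushed to the Hugging Face Hub; the prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "benjaminsinzore/Basqui-R1-4B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

prompt = "Summarize the benefits of parameter-efficient finetuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```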

Good For

  • Developers prioritizing fast iteration and deployment of finetuned models.
  • Applications that call for a capable 4 billion parameter model backed by an efficient training pipeline.
  • Experimentation with models finetuned using Unsloth's performance enhancements.