benjaminsinzore/Basqui-R1-4B-v1
Text Generation
Concurrency Cost: 1
Model Size: 3.2B
Quant: BF16
Context Length: 32k
Published: May 22, 2025
License: apache-2.0
Architecture: Transformer

Basqui-R1-4B-v1 is a 4-billion-parameter Llama-based model developed by benjaminsinzore, finetuned with Unsloth and Hugging Face's TRL library. Unsloth's optimizations are credited with roughly 2x faster finetuning. The model is intended for general language tasks, with its efficient training methodology positioning it as a capable option within its parameter class.
