pnsrc/lfm2.5-me-merged

Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Context length: 32k · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

pnsrc/lfm2.5-me-merged is a 1-billion-parameter instruction-tuned causal language model developed by pnsrc. Finetuned from unsloth/gemma-3-1b-it-bnb-4bit, it was trained with Unsloth and Hugging Face's TRL library, which is reported to make training 2x faster. It is intended for general language-generation tasks.


Model Overview

pnsrc/lfm2.5-me-merged is a 1-billion-parameter instruction-tuned language model developed by pnsrc. It is finetuned from the unsloth/gemma-3-1b-it-bnb-4bit checkpoint, placing it in the Gemma 3 family.

Key Characteristics

  • Efficient Training: The model was trained 2x faster by using the Unsloth library together with Hugging Face's TRL library.
  • Base Model: Built on unsloth/gemma-3-1b-it-bnb-4bit, a 4-bit (bitsandbytes) quantization of the instruction-tuned Gemma 3 1B model, so it inherits the Gemma 3 architecture and its instruction-following tuning.
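To put the listed size and quant in perspective, a back-of-envelope calculation gives the memory the weights alone would occupy at different precisions. This is a rough sketch: it assumes exactly 1B parameters (the advertised count) and ignores the KV cache, activations, and framework overhead.

```python
# Approximate weight-only memory footprint at several precisions.
# Assumes the advertised 1B parameter count; the exact count may differ.
PARAMS = 1_000_000_000

BYTES_PER_PARAM = {
    "bf16": 2.0,   # the published quant for this merged model
    "int8": 1.0,
    "int4": 0.5,   # bnb-4bit, as used by the base checkpoint
}

def weight_footprint_gib(dtype: str, params: int = PARAMS) -> float:
    """Approximate weight memory in GiB for a given dtype."""
    return params * BYTES_PER_PARAM[dtype] / 2**30

for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: ~{weight_footprint_gib(dtype):.2f} GiB")
```

At BF16 this comes to roughly 1.9 GiB of weights, which is why a 1B model is practical on consumer GPUs and even CPUs.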

Potential Use Cases

With only 1 billion parameters, this model suits applications that need a compact yet capable instruction-tuned language model and that prioritize efficient deployment and inference. Its instruction finetuning suggests proficiency at understanding and responding to instructions.
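Since the model is assumed to inherit the Gemma-family chat format from its gemma-3-1b-it base, a minimal sketch of how a conversation is rendered into a prompt looks like the following. In practice you would prefer `tokenizer.apply_chat_template` from transformers, which applies the model's own template; the function and turn markers below are illustrative.

```python
# Minimal sketch of the Gemma-style chat format this model is assumed
# to inherit from its Gemma 3 instruction-tuned base. Prefer the
# tokenizer's built-in chat template in real code.

def build_prompt(messages: list[dict]) -> str:
    """Render {role, content} messages into a Gemma-style prompt that
    ends with an open model turn, ready for generation."""
    parts = []
    for m in messages:
        # Gemma uses "model" where other templates say "assistant".
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "user", "content": "Summarize Unsloth in one line."},
])
print(prompt)
```

The generated text is then produced by the model continuing after the final `<start_of_turn>model` marker.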