sampluralis/llama-mid

Task: Text Generation · Published: Feb 26, 2026

  • Model Size: 1B
  • Quantization: BF16
  • Context Length: 32k
  • Architecture: Transformer
  • Concurrency Cost: 1

sampluralis/llama-mid is a fine-tuned language model based on gshasiri/llama3.2-1B-chatml, developed by sampluralis and trained with the TRL (Transformer Reinforcement Learning) library. It is designed for general text-generation tasks and produces coherent, contextually relevant responses.


Model Overview

sampluralis/llama-mid was created by sampluralis by applying Supervised Fine-Tuning (SFT) to the gshasiri/llama3.2-1B-chatml base model, using the TRL library for Transformer Reinforcement Learning.

Key Capabilities

  • Text Generation: Capable of generating human-like text based on given prompts.
  • ChatML Format: Inherits the ChatML conversation format from its base model, making it suitable for conversational AI applications.
  • Pipeline Integration: Easily integrated into Python applications using the Hugging Face transformers pipeline for text generation tasks.
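The ChatML format wraps each conversation turn in `<|im_start|>` / `<|im_end|>` tokens. As a minimal sketch (the `to_chatml` helper below is illustrative and not shipped with the model), a prompt can be built this way and handed to the Hugging Face `transformers` pipeline:

```python
def to_chatml(messages):
    """Format a list of {"role", "content"} dicts in the ChatML convention."""
    prompt = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    # Open an assistant turn so the model continues as the assistant.
    return prompt + "<|im_start|>assistant\n"


if __name__ == "__main__":
    # Downloads the model weights on first run.
    from transformers import pipeline

    generator = pipeline("text-generation", model="sampluralis/llama-mid")
    prompt = to_chatml([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
    ])
    print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

Whether the tokenizer's built-in chat template matches this exact layout should be verified via `tokenizer.apply_chat_template` before relying on it.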

Training Details

The model's training procedure utilized specific framework versions:

  • TRL: 0.28.0
  • Transformers: 4.57.6
  • PyTorch: 2.6.0+cu126
  • Datasets: 4.6.0
  • Tokenizers: 0.22.2
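To reproduce this environment, the reported versions can be pinned at install time (a sketch; the `+cu126` PyTorch build comes from the CUDA 12.6 wheel index rather than PyPI):

```shell
pip install "trl==0.28.0" "transformers==4.57.6" \
    "datasets==4.6.0" "tokenizers==0.22.2"
# PyTorch 2.6.0 built against CUDA 12.6:
pip install "torch==2.6.0" --index-url https://download.pytorch.org/whl/cu126
```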

Good For

  • Conversational AI: Generating responses in a chat-like format.
  • General Purpose Text Generation: Creating various forms of text content.
  • Research and Experimentation: Serving as a base for further fine-tuning or research into SFT methods.