stromano02/model

  • Task: Text generation
  • Model size: 8B parameters
  • Quantization: FP8
  • Context length: 32k
  • Published: Dec 2, 2025
  • License: apache-2.0
  • Architecture: Transformer (open weights)
  • Concurrency cost: 1

The stromano02/model is an 8-billion-parameter instruction-tuned Llama 3.1 model developed by stromano02 and fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, a combination that Unsloth reports makes fine-tuning roughly 2x faster. The model targets general instruction-following tasks on the Llama 3.1 architecture.
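As a minimal inference sketch, the model can be loaded with the transformers library. This assumes the weights are published on the Hugging Face Hub under stromano02/model and that the model retains the Llama 3.1 chat template; the prompt and generation settings below are placeholders, not documented usage.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stromano02/model"  # assumption: weights are on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fall back to float16 on GPUs without bf16 support
    device_map="auto",           # requires the accelerate package
)

# Build a chat prompt using the model's chat template (assumed Llama 3.1 style).
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```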


Model Overview

The stromano02/model is an 8-billion-parameter instruction-tuned language model developed by stromano02. It is based on the Llama 3.1 architecture and was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit.

Key Characteristics

  • Architecture: Llama 3.1, 8 billion parameters.
  • Fine-tuning: Performed with Unsloth and Hugging Face's TRL library, which Unsloth reports roughly doubles training speed (a hedged sketch of this stack follows this list).
  • Context Length: Supports a context length of 32768 tokens.
  • License: Released under the Apache-2.0 license.
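To make the training setup concrete, here is a hedged sketch of the Unsloth + TRL stack named above. The dataset, LoRA rank, and trainer settings are illustrative assumptions, not the author's actual configuration, and keyword names vary slightly across TRL versions.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the stated 4-bit base model and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",
    max_seq_length=2048,  # the published model supports up to 32768 tokens
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                 # illustrative LoRA rank, not the card's setting
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Tiny in-memory dataset; a real run would load a proper instruction corpus.
dataset = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hi.\n\n### Response:\nHi!"]}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # recent TRL versions rename this to processing_class
    train_dataset=dataset,
    args=SFTConfig(output_dir="outputs", max_steps=10,
                   per_device_train_batch_size=1, dataset_text_field="text"),
)
trainer.train()
```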

Use Cases

This model is suitable for general instruction-following applications such as chat, summarization, and question answering, benefiting from its Llama 3.1 foundation. Because the Unsloth/TRL pipeline keeps fine-tuning inexpensive, the model also lends itself to further task-specific adaptation.
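Given the hosted-inference metadata in the listing above (concurrency cost, quantization), the model is presumably also reachable over an API. The sketch below assumes an OpenAI-compatible chat endpoint; the base_url, API key, and model name routing are placeholders to be replaced with the hosting platform's actual values.

```python
from openai import OpenAI

# Placeholders: substitute the hosting platform's real endpoint and key.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="stromano02/model",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "List three uses of an instruction-tuned 8B model."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```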