Shinapri/gplm-8b
- Task: Text Generation
- Concurrency Cost: 1
- Model Size: 8B
- Quantization: FP8
- Context Length: 8k
- Published: Mar 18, 2026
- License: apache-2.0
- Architecture: Transformer
- Open Weights: Yes
Shinapri/gplm-8b is an 8-billion-parameter, instruction-tuned causal language model built on Llama 3.1 and developed by Shinapri. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training. The model is designed for general language understanding and generation tasks, leveraging its Llama 3.1 base for robust performance within an 8192-token context window.
Shinapri/gplm-8b: A Faster-Trained Llama 3.1 Model
Shinapri/gplm-8b is an 8 billion parameter instruction-tuned language model built upon the Llama 3.1 architecture. Developed by Shinapri, this model distinguishes itself through its efficient training methodology.
Key Capabilities
- Llama 3.1 Foundation: Benefits from the advanced capabilities and performance of the Llama 3.1 base model.
- Instruction-Tuned: Optimized for following instructions and performing a wide range of natural language processing tasks.
- Efficient Training: Fine-tuned using Unsloth and Hugging Face's TRL library, roughly halving training time compared to standard fine-tuning (see the sketch after this list).
- Context Length: Supports an 8192 token context window, suitable for handling moderately long inputs and generating coherent responses.
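To make the training claim concrete, here is a minimal sketch of the kind of Unsloth + TRL workflow the card describes. The base checkpoint, dataset, and hyperparameters below are illustrative assumptions, not Shinapri's actual recipe, and exact argument names vary slightly across trl versions:

```python
# Illustrative Unsloth + TRL fine-tuning sketch. The base model name,
# dataset, and hyperparameters are placeholders, NOT the gplm-8b recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Unsloth patches the model for faster, more memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",  # assumed base checkpoint
    max_seq_length=8192,  # matches the card's stated context length
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

def to_text(example):
    # Flatten each instruction/response pair into one training string.
    return {"text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Response:\n{example['output']}"}

# Placeholder instruction dataset; the actual training data is not disclosed.
dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        max_seq_length=8192,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```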
Good For
- General-purpose text generation and understanding (a minimal usage sketch follows this list).
- Applications requiring a Llama 3.1-based model with efficient fine-tuning.
- Developers looking for a performant 8B parameter model for various NLP tasks.
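Example Usage

Because gplm-8b is a Llama 3.1-derived instruct model, it should load through the standard transformers chat workflow. The snippet below is a minimal sketch that assumes the repository ships a Llama 3.1-style chat template in its tokenizer config; verify this on the model page before relying on it:

```python
# Minimal inference sketch for Shinapri/gplm-8b (assumes a Llama 3.1-style
# chat template is bundled with the tokenizer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shinapri/gplm-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # FP8 serving needs dedicated runtime support
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "Summarize the benefits of instruction tuning in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt plus generated tokens within the 8192-token context window.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```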