Shinapri/gplm-8b
- Task: text generation
- Model size: 8B
- Quantization: FP8
- Context length: 8k
- Concurrency cost: 1
- Published: Mar 18, 2026
- License: apache-2.0
- Architecture: Transformer
- Weights: open
Shinapri/gplm-8b is an 8-billion-parameter instruction-tuned causal language model developed by Shinapri on a Llama 3.1 base. It was fine-tuned with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. The model targets general language understanding and generation tasks, leveraging its Llama 3.1 base for robust performance within an 8,192-token context window.
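Since the card names a Llama 3.1 base, prompts for the instruct model presumably follow the standard Llama 3.1 chat format. The sketch below builds such a prompt by hand; this is an assumption based on the Llama 3.1 convention, not something the card documents, and in practice `tokenizer.apply_chat_template` in `transformers` would handle this for you.

```python
# Hypothetical sketch: rendering a chat into the Llama 3.1 instruct
# prompt format (assumed from the card's "Llama 3.1" base, not
# documented by the card itself).

def format_llama31_prompt(messages):
    """Render a list of {"role", "content"} dicts into one prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"])
        parts.append("<|eot_id|>")
    # Open the assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama31_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this model in one sentence."},
])
```

When serving the model through an OpenAI-compatible or `transformers` pipeline API, the chat template is usually applied automatically, so manual formatting like this is only needed for raw completion endpoints.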