erik51/Shifa-1.5-physical

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Shifa-1.5-physical is an 8-billion-parameter, Llama 3.1-based, instruction-tuned causal language model developed by erik51. It was finetuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for general instruction-following tasks.


Model Overview

erik51/Shifa-1.5-physical is an 8-billion-parameter instruction-tuned language model based on the Llama 3.1 architecture. Developed by erik51, it was finetuned using the Unsloth library together with Hugging Face's TRL library, which significantly speeds up the training process.
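The card does not publish the exact training recipe, but the sketch below shows a typical Unsloth + TRL supervised-finetuning setup of the kind described above. The dataset file, LoRA hyperparameters, and training arguments are illustrative assumptions rather than the author's actual configuration, and the SFTTrainer signature shown matches older TRL releases (newer ones move dataset_text_field and max_seq_length into SFTConfig).

```python
# Illustrative Unsloth + TRL SFT setup -- hyperparameters and dataset are assumptions.
from unsloth import FastLanguageModel
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the 4-bit base model named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (common defaults, not the author's values).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# Hypothetical instruction dataset with a pre-formatted "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=torch.cuda.is_bf16_supported(),
        fp16=not torch.cuda.is_bf16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()
```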

Key Characteristics

  • Base Model: Finetuned from unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit; the resulting checkpoint loads like any Llama 3.1 model (see the loading sketch after this list).
  • Parameter Count: 8 billion parameters.
  • Training Efficiency: Utilizes Unsloth for 2x faster training, lowering the compute cost of finetuning.
  • License: Distributed under the Apache-2.0 license.
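
Assuming the published checkpoint ships with the standard Llama 3.1 chat template, a minimal transformers loading and generation sketch looks like this (the prompt and generation settings are illustrative):

```python
# Minimal inference sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "erik51/Shifa-1.5-physical"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain the difference between mass and weight."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Slice off the echoed prompt so only the model's reply is printed.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Slicing the output at input_ids.shape[-1] strips the prompt tokens that generate() echoes back, leaving only the newly generated reply.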

Use Cases

This model is suited to a variety of general instruction-following tasks, inheriting the capabilities of the Llama 3.1 base. Its low-overhead finetuning process also makes it a reasonable option for applications that need a capable 8B model without extensive computational cost for further adaptation.
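
As a quick sanity check of that instruction-following behavior, the text-generation pipeline offers a shorter path. The prompt below is illustrative, and passing chat-style message lists this way requires a reasonably recent transformers release:

```python
# Quick instruction-following check via the transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="erik51/Shifa-1.5-physical",
    torch_dtype="auto",
    device_map="auto",
)

# Recent transformers versions accept chat-style message lists directly.
messages = [{"role": "user", "content": "Summarize Newton's three laws of motion."}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```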