jgillix/fetish-dataset-v1
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 30, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
jgillix/fetish-dataset-v1 is an 8-billion-parameter Llama 3.1 instruction-tuned causal language model developed by jgillix. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling faster training, and is designed for general instruction-following tasks, leveraging its Llama 3.1 base and efficient fine-tuning methodology.
Overview
jgillix/fetish-dataset-v1 is an 8-billion-parameter instruction-tuned language model developed by jgillix. It is based on the Llama 3.1 architecture and was fine-tuned from unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit. A key aspect of its development is the use of Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training.
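For orientation, a minimal inference sketch is shown below. It assumes the checkpoint is hosted on the Hugging Face Hub under the ID above and follows the standard Llama 3.1 chat template; the prompt and generation settings are illustrative, not part of the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the model is loadable via the standard transformers Auto classes.
model_id = "jgillix/fetish-dataset-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a single-turn prompt using the tokenizer's built-in chat template
# (Llama 3.1 instruct checkpoints ship one by default).
messages = [{"role": "user", "content": "Summarize what instruction tuning is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```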
Key Capabilities
- Instruction Following: Inherits and enhances the instruction-following capabilities of the Llama 3.1 base model.
- Efficient Training: Benefits from the Unsloth framework, allowing for quicker fine-tuning iterations (see the training sketch after this list).
- Llama 3.1 Architecture: Built upon a robust and widely recognized large language model foundation.
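The card does not include the training script, but the Unsloth + TRL recipe it names typically looks like the sketch below. The dataset file, LoRA rank, and hyperparameters are placeholder assumptions, and the SFTTrainer keyword arguments shown match the classic Unsloth notebook pattern (newer TRL releases move several of them into SFTConfig).

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base checkpoint named on the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,   # placeholder; the card does not state the training length
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # placeholder LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset: a local JSONL file with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Unsloth's patched attention and fused kernels are what yield the ~2x speedup the overview cites; the TRL SFTTrainer itself is unchanged.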
Good For
- General Purpose Instruction-Tuning: Suitable for a broad range of tasks requiring a model to follow specific instructions.
- Applications requiring Llama 3.1 base: Ideal for developers already working within the Llama 3.1 ecosystem.
- Experimentation with Efficient Fine-tuning: Demonstrates the practical application of Unsloth for accelerated model development.