Mustafa99Hafed/LLaMA-3.1-8B-Solana-Audit

  • Task: text generation
  • Model size: 8B
  • Quantization: FP8
  • Context length: 32k
  • Published: Apr 15, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

Mustafa99Hafed/LLaMA-3.1-8B-Solana-Audit is an 8 billion parameter LLaMA 3.1 instruction-tuned model developed by Mustafa99Hafed. It was fine-tuned using Unsloth and Hugging Face's TRL library, enabling 2x faster training. It is designed for general instruction-following tasks, leveraging the LLaMA 3.1 architecture and a 32768-token context length.


Model Overview

Mustafa99Hafed/LLaMA-3.1-8B-Solana-Audit is an 8 billion parameter instruction-tuned language model, developed by Mustafa99Hafed. It is based on the LLaMA 3.1 architecture and was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit.

Key Characteristics

  • Architecture: LLaMA 3.1, 8 billion parameters.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training.
  • Context Length: Supports a context length of 32768 tokens.
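The characteristics above translate directly into how the model is loaded. A minimal sketch, assuming the standard `transformers` `AutoModelForCausalLM`/`AutoTokenizer` API (not confirmed by the card itself); the constants mirror the values stated on this page:

```python
# Illustrative loading sketch for this model card; the repository id and
# context length are taken from the card, everything else is an assumption.
MODEL_ID = "Mustafa99Hafed/LLaMA-3.1-8B-Solana-Audit"
MAX_CONTEXT = 32768  # context length stated on the card


def load_model():
    """Load the tokenizer and model weights.

    The import is deferred so the constants above can be inspected
    without pulling in the heavy `transformers` dependency.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places layers across available GPUs/CPU.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    return tokenizer, model


# Usage (requires the weights to be downloadable):
#   tokenizer, model = load_model()
#   inputs = tokenizer("Explain Solana account ownership.", return_tensors="pt")
#   output = model.generate(**inputs.to(model.device), max_new_tokens=128)
#   print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that an 8B model in FP8 still needs on the order of 8-10 GB of accelerator memory, plus additional memory for the KV cache at long context lengths.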

Intended Use Cases

This model is suited to general instruction-following tasks, combining the LLaMA 3.1 base with an efficient fine-tune. Its 32768-token context window allows long inputs, such as entire source files, to fit in a single prompt, and the 8B parameter count keeps inference resource requirements modest.
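For instruction-following use, prompts should follow the LLaMA 3.1 chat format the base model was trained on. The helper below is a hand-rolled sketch of that format for a single turn; in practice `tokenizer.apply_chat_template` builds this string for you, so this is only to show what the template produces:

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the LLaMA 3.1 chat format.

    Each message is wrapped in role headers and terminated with
    <|eot_id|>; the trailing assistant header cues the model to respond.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


# Example instruction (topic chosen to match the model's name; the card
# itself only claims general instruction following):
prompt = build_llama31_prompt(
    "You are a helpful assistant.",
    "Summarize common risks of unchecked account ownership in a Solana program.",
)
```

The resulting string is then tokenized and passed to `model.generate`; generation stops when the model emits its end-of-turn token.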