Daga2001/Llama-70B-God-Tier

Text Generation

  • Concurrency Cost: 4
  • Model Size: 70B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Mar 14, 2026
  • License: other
  • Architecture: Transformer

Daga2001/Llama-70B-God-Tier is a 70-billion-parameter merged fine-tune of meta-llama/Llama-3.3-70B-Instruct, created by Daga2001. The model was fine-tuned with a LoRA adapter on the tatsu-lab/alpaca dataset, using 1,000 training samples and a maximum sequence length of 512 tokens, after which the adapter was merged back into the base weights. It is intended for Hugging Face Transformers users seeking a specialized Llama-3.3-70B-Instruct variant for tasks aligned with its fine-tuning data.


Llama-70B-God-Tier: A Fine-Tuned Llama-3.3-70B-Instruct Variant

This model, developed by Daga2001, is a 70-billion-parameter merged fine-tune of the meta-llama/Llama-3.3-70B-Instruct base model. It integrates a LoRA adapter (r=16, alpha=32, dropout=0.05, targeting the attention layers) to specialize its capabilities.

Key Characteristics

  • Base Model: meta-llama/Llama-3.3-70B-Instruct
  • Fine-tuning Dataset: tatsu-lab/alpaca
  • Training Details: Fine-tuned on 1,000 samples with a maximum sequence length of 512 tokens.
  • Architecture: Full merged Transformers checkpoint, provided in sharded safetensors format.
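The adapter hyperparameters listed above could be expressed with a PEFT `LoraConfig` along these lines. This is only a sketch: the exact `target_modules` list is an assumption, since the card states only that the attention blocks were targeted.

```python
from peft import LoraConfig

# Hyperparameters as stated on this card; target_modules is an assumption
# ("target=attn" is read here as the four attention projection matrices).
lora_config = LoraConfig(
    r=16,              # adapter rank
    lora_alpha=32,     # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Because the published checkpoint is already merged, this config is only relevant for reproducing or extending the fine-tune, not for inference.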

Intended Use

This model is primarily intended for developers and researchers working with the Hugging Face Transformers library. It offers a specialized version of Llama-3.3-70B-Instruct and may perform better on instruction-following tasks similar to those in the tatsu-lab/alpaca dataset. Users should review the base model's license and terms before deploying this fine-tuned variant.
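Because the checkpoint ships as a full merged set of sharded safetensors, it can be loaded like any other Transformers causal LM. The sketch below assumes the `transformers` and `torch` libraries; the dtype and device settings are illustrative choices, not requirements from the card (imports are deferred into the function since loading a 70B model is heavyweight).

```python
MODEL_ID = "Daga2001/Llama-70B-God-Tier"

def load_model():
    """Load the tokenizer and merged model.

    A 70B checkpoint needs multiple GPUs or CPU/disk offloading;
    device_map="auto" lets Accelerate shard it across what is available.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # FP8 weights are upcast unless a quantized runtime is used
        device_map="auto",
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    messages = [{"role": "user", "content": "Summarize LoRA in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the model card advertises a 32k context length, longer prompts than the example above are supported, subject to available memory.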