FinaPolat/llama3_1_8b_thinking_ED
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

FinaPolat/llama3_1_8b_thinking_ED is an 8-billion-parameter Llama 3.1 model published by FinaPolat, fine-tuned from FinaPolat/llama3_1_8b_dpo-1k_ED. It was trained with Unsloth and Hugging Face's TRL library, which the authors report enabled 2x faster training. The model targets general language tasks and supports a 32,768-token (32k) context length.
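The card does not include a usage snippet, so the following is a minimal inference sketch with the Hugging Face `transformers` library. The model id comes from this card; the prompt, dtype, device mapping, and generation settings are illustrative assumptions, not documented defaults for this model.

```python
# Minimal sketch: load FinaPolat/llama3_1_8b_thinking_ED and generate a reply.
# Requires `transformers` and `torch`; downloads ~8B FP8/BF16 weights on first run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FinaPolat/llama3_1_8b_thinking_ED"  # from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # assumption: let transformers pick the checkpoint dtype
    device_map="auto",    # assumption: place layers on available GPU(s)/CPU
)

# Llama 3.1 is a chat-tuned family, so use the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize Llama 3.1 in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the card advertises a 32k context, long prompts should fit, but memory use grows with sequence length, so shorter `max_new_tokens` values are a reasonable starting point.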
