neural-coder/llama-3-8b-ft
Task: Text Generation
Concurrency Cost: 1
Model Size: 8B
Quantization: FP8
Context Length: 32k
Published: Apr 14, 2025
License: apache-2.0
Architecture: Transformer (open weights)

The neural-coder/llama-3-8b-ft model is an 8-billion-parameter causal language model based on the Llama 3 architecture and fine-tuned using AutoTrain. It supports a 32,768-token context length and is intended for general text generation and conversational AI tasks.
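A minimal usage sketch, assuming the checkpoint is hosted on the Hugging Face Hub under the `neural-coder/llama-3-8b-ft` ID and loadable with the standard `transformers` causal-LM API (the actual serving stack for this listing may differ):

```python
# Hypothetical sketch: loading and prompting neural-coder/llama-3-8b-ft
# with Hugging Face transformers. The model ID and loading options below
# are assumptions based on the model card, not a confirmed deployment path.

MODEL_ID = "neural-coder/llama-3-8b-ft"
MAX_CONTEXT = 32_768  # 32k-token context window, per the model card


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the sketch can be read without the libraries installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # FP8 checkpoints are typically upcast on load
        device_map="auto",
    )
    # Truncate the prompt to the model's context window before generating.
    inputs = tokenizer(
        prompt, return_tensors="pt", truncation=True, max_length=MAX_CONTEXT
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Example call (requires downloading the weights):
# print(generate("Write a Python function that reverses a string."))
```

Because the weights are published under apache-2.0, this kind of local loading is permissible; swap in your preferred inference server if you need higher concurrency than the card's listed cost of 1.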
