V3N0M/Aisha-Llama-3.1-8B-Complete
Task: Text generation
Model size: 8B
Quantization: FP8
Context length: 32k
Architecture: Transformer
Published: Jan 28, 2026
Aisha-Llama-3.1-8B-Complete is an 8-billion-parameter language model fine-tuned by V3N0M and converted to GGUF format with Unsloth. It is based on the Llama 3.1 architecture and supports a 32768-token context length. The model is optimized for efficient deployment with tools such as llama.cpp and Ollama, making it well suited to local inference.
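As a rough sketch of local inference with the tools mentioned above, the GGUF weights could be run via Ollama or llama.cpp. The exact quantization suffix of the published GGUF file is an assumption here; check the repository's file listing for the actual name.

```shell
# Pull and run the model directly from Hugging Face via Ollama
# (requires Ollama >= 0.3.x with hf.co model support).
ollama run hf.co/V3N0M/Aisha-Llama-3.1-8B-Complete

# Or with llama.cpp's CLI, fetching the GGUF from the Hugging Face repo.
# The context window is set to the model's 32k maximum.
llama-cli -hf V3N0M/Aisha-Llama-3.1-8B-Complete \
  -c 32768 \
  -p "Hello, introduce yourself."
```

Both commands download the weights on first use; subsequent runs load from the local cache.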