Sakuna/LLaMaCoderAll
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4K · Architecture: Transformer

Sakuna/LLaMaCoderAll is a 7-billion-parameter language model based on the LLaMA 2 architecture, fine-tuned with LoRA adapters and optimized for code generation. It produces programming solutions across a range of languages, making it suitable for developers who want automated coding assistance.
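A minimal usage sketch, assuming the model is available on the Hugging Face Hub under this ID and responds well to a plain instruction-style prompt. The prompt template below is an assumption for illustration, not a documented format for this model:

```python
MODEL_ID = "Sakuna/LLaMaCoderAll"

def build_prompt(instruction: str) -> str:
    """Wrap a coding request in a simple instruction template (assumed format)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

if __name__ == "__main__":
    # Requires the `transformers` and `torch` packages; downloads ~7B weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    prompt = build_prompt("Write a Python function that reverses a string.")
    inputs = tokenizer(prompt, return_tensors="pt")
    # The model has a 4K context window, so keep prompt plus output well under it.
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Generation parameters such as `max_new_tokens` should be tuned to the task; with only a 4K context window, long files may need to be generated in chunks.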
