ashercn97/manatee-7b
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4K | Published: Jul 30, 2023 | Architecture: Transformer
ashercn97/manatee-7b is a 7-billion-parameter causal language model based on the Llama-2 architecture, fine-tuned on two Orca datasets to strengthen its instruction-following capabilities. It is suited to general-purpose text generation and instruction-based tasks, balancing performance and efficiency.
ashercn97/manatee-7b: An Instruction-Tuned Llama-2 Model
ashercn97/manatee-7b is a 7 billion parameter language model developed by ashercn97. This model is built upon the Llama-2 architecture and has been fine-tuned using two distinct Orca datasets. The fine-tuning process, which took approximately 6 hours on a single L40 GPU, aimed to enhance its instruction-following capabilities.
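A minimal usage sketch, assuming the model loads as a standard Llama-2 causal LM through Hugging Face transformers. The plain-text prompt is a simplification: Orca-style fine-tunes often expect a specific system/user prompt template, so check the upstream model card for the exact format.

```python
# Minimal sketch: generating text with manatee-7b via transformers.
# Assumes the checkpoint follows the standard Llama-2 causal LM layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ashercn97/manatee-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so a 7B model fits on one GPU
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In float16, a 7B model needs roughly 14 GB of GPU memory for the weights alone, which is why the quantized variant mentioned below can be attractive on smaller cards.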
Key Capabilities
- Instruction Following: Optimized, through Orca dataset fine-tuning, to respond effectively to prompts and instructions.
- Llama-2 Base: Benefits from the robust and widely recognized Llama-2 foundational architecture.
- Efficiency: As a 7B-parameter model, it balances performance against computational resource requirements, and a GPTQ version is available for memory-constrained environments (see the loading sketch after this list).
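Since the list above mentions a GPTQ version, here is a hedged loading sketch. The repository id below is hypothetical (check Hugging Face for the actual GPTQ upload of this model), and it assumes a recent transformers release with optimum and auto-gptq installed, which lets GPTQ checkpoints load through the standard API.

```python
# Hedged sketch: loading a GPTQ-quantized variant of manatee-7b.
# NOTE: the repo id is hypothetical -- substitute the real GPTQ upload.
# Assumes transformers >= 4.32 with optimum and auto-gptq installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

gptq_id = "someuser/manatee-7b-GPTQ"  # hypothetical repository name

tokenizer = AutoTokenizer.from_pretrained(gptq_id)
model = AutoModelForCausalLM.from_pretrained(gptq_id, device_map="auto")

# Generation then works exactly as with the full-precision model.
```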
Good For
- Developers seeking an accessible, instruction-tuned model for various text generation tasks.
- Experimentation with fine-tuned Llama-2 variants.
- Applications where a 7B parameter model provides sufficient performance without excessive resource demands.