augtoma/qCammel-13
qCammel-13 by augtoma is a 13 billion parameter Llama-2 based model, fine-tuned using QLoRA on a distilled dataset of 15,000 instructions. This model is specifically optimized for academic medical knowledge and instruction-following tasks. It leverages the Llama 2 architecture, an auto-regressive, decoder-only transformer, to provide specialized text generation capabilities within the medical domain.
qCammel-13: Specialized Medical Instruction-Following Model
qCammel-13 is a 13 billion parameter language model developed by augtoma, built upon the Llama 2 architecture. It was fine-tuned using the QLoRA method on a distilled dataset of 15,000 instructions curated for academic medical knowledge.
Key Capabilities
- Academic Medical Knowledge: Optimized for understanding and generating content related to academic medicine.
- Instruction Following: Excels at adhering to given instructions, particularly within its specialized domain.
- Llama 2 Foundation: Benefits from the robust, auto-regressive, decoder-only transformer architecture of Llama 2.
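The instruction-following behavior above can be exercised through the Hugging Face `transformers` pipeline. A minimal sketch, assuming the model is published on the Hub under the id `augtoma/qCammel-13` (taken from the title of this page); the prompt wrapper is a hypothetical instruction format for illustration, not a documented template for this model:

```python
# Sketch: querying qCammel-13 for a medical instruction task.
# The "### Instruction/### Response" wrapper is a hypothetical
# format, not the model's documented prompt template.

def build_prompt(instruction: str) -> str:
    """Wrap a medical instruction in a simple system/instruction template."""
    system = "You are a careful assistant for academic medical questions."
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str, model_id: str = "augtoma/qCammel-13") -> str:
    """Load the model and generate a response. Requires the transformers
    library and enough GPU memory for a 13B model."""
    from transformers import pipeline  # deferred import: heavy dependency
    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    out = pipe(build_prompt(instruction), max_new_tokens=256, do_sample=False)
    return out[0]["generated_text"]

if __name__ == "__main__":
    print(build_prompt("List the first-line treatments for hypertension."))
```

The deferred import keeps the prompt helper usable without the model loaded, which is convenient when batching or logging prompts separately from inference.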
Training and Architecture
The model's fine-tuning process utilized QLoRA, an efficient method for fine-tuning quantized large language models: the base weights are quantized to 4-bit and frozen, while small low-rank adapter matrices are trained on top. The model takes text as input and generates text as output. The underlying Llama 2 architecture is well documented, and the Llama 2 and QLoRA research papers provide further technical detail.
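The QLoRA setup described above can be sketched with `transformers`, `bitsandbytes`, and `peft`. This is an illustrative configuration only; the quantization options follow the QLoRA paper's defaults, and the LoRA hyperparameters are examples, not the values used to train qCammel-13:

```python
# Illustrative QLoRA configuration (hyperparameters are examples,
# not the ones used for qCammel-13).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # keep base weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "augtoma/qCammel-13",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                   # adapter rank (example value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the adapter weights train
```

Because the 4-bit base weights stay frozen, only the small adapter matrices receive gradients, which is what makes fine-tuning a 13B model feasible on a single GPU.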
Good For
- Applications requiring specialized knowledge in academic medicine.
- Tasks where precise instruction-following in a medical context is crucial.
- Developers seeking a Llama 2-based model with enhanced performance in a specific, high-stakes domain.