ignos/LeoScorpius-GreenNode-Alpaca-7B-v1
ignos/LeoScorpius-GreenNode-Alpaca-7B-v1 is a 7 billion parameter Mistral-based language model developed by Ignos. It is a merge of viethq188/LeoScorpius-7B-Chat-DPO and GreenNode/GreenNodeLM-7B-v1olet, further fine-tuned on the tatsu-lab/alpaca dataset. This model is designed for comparative analysis of behaviors and metrics against its base model and other Mistral-based models fine-tuned on different datasets, making it suitable for research and evaluation purposes.
Model Overview
ignos/LeoScorpius-GreenNode-Alpaca-7B-v1 is a 7-billion-parameter language model developed by Ignos, built on the Mistral architecture. It was created by merging two existing models, viethq188/LeoScorpius-7B-Chat-DPO and GreenNode/GreenNodeLM-7B-v1olet, and then further fine-tuned on the tatsu-lab/alpaca dataset.
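Since the card names Mergekit as the merging tool but does not publish the merge recipe, the merge step might have looked something like the following Mergekit config. The merge method (slerp), layer ranges, interpolation factor, and dtype are all illustrative assumptions, not the published recipe.

```yaml
# Hypothetical Mergekit config -- method and parameters are assumptions.
slices:
  - sources:
      - model: viethq188/LeoScorpius-7B-Chat-DPO
        layer_range: [0, 32]
      - model: GreenNode/GreenNodeLM-7B-v1olet
        layer_range: [0, 32]
merge_method: slerp
base_model: viethq188/LeoScorpius-7B-Chat-DPO
parameters:
  t: 0.5          # assumed equal blend of the two models
dtype: bfloat16
```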
Key Characteristics
- Architecture: Based on the Mistral-7B-v0.1 model.
- Development: Developed by Ignos, with a focus on comparative analysis.
- Training Method: Fine-tuned with QLoRA; the resulting adapter was then merged back into the base model.
- License: Released under the Apache-2.0 license.
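The QLoRA setup can be sketched with the Hugging Face `peft` library (version 0.6.0 per the card). The adapter hyperparameters below, including rank, alpha, dropout, and target modules, are illustrative assumptions; the card does not publish them.

```python
from peft import LoraConfig

# Illustrative QLoRA adapter config. All hyperparameter values here
# are assumptions -- the model card does not state the ones used.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

In QLoRA, the frozen base weights are held in 4-bit precision while only these low-rank adapter matrices are trained, which is what makes fine-tuning a 7B model feasible on consumer GPUs like the RTX 4090s listed below.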
Intended Use Cases
This model is primarily designed for research and evaluation. Its creation facilitates:
- Behavioral Comparison: Analyzing and comparing its responses and characteristics against the original Mistral base model.
- Metric Evaluation: Benchmarking its performance and metrics against other Mistral-based models that have been fine-tuned on different datasets.
Technical Details
Training was conducted on RunPod with 4× Nvidia RTX 4090 GPUs, 64 vCPUs, and 503 GB of RAM. The software stack included Mergekit for merging and Axolotl 0.3.0 with PEFT 0.6.0 for fine-tuning. Quantization during training used bitsandbytes with bnb_4bit_quant_type: nf4 and bnb_4bit_use_double_quant: True.
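The bitsandbytes settings stated above map directly onto a `transformers` BitsAndBytesConfig. This is a minimal sketch; the compute dtype is an assumption, as the card does not state it.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, matching the
# bnb_4bit_* settings stated in the card.
# bnb_4bit_compute_dtype is an assumption (not stated in the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

NF4 stores the frozen base weights in a 4-bit data type fitted to normally distributed weights, and double quantization additionally quantizes the quantization constants themselves, further reducing memory during training.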