Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v4
Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v4 is a 14.8-billion-parameter Qwen2.5-based language model developed by Lunzima. It was fine-tuned from Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3-alpaca_gpt4_zh, using Unsloth and Hugging Face's TRL library for accelerated training. It is designed for general language generation tasks, building on its predecessor's capabilities with improved training efficiency.
Model Overview
Fine-tuned from Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3-alpaca_gpt4_zh, the model inherits a lineage oriented toward robust language understanding and generation, likely with a strong foundation in Chinese-language data and Alpaca/GPT-4-style instruction following.
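Since the model targets instruction following, prompts are typically rendered in the ChatML-style template that Qwen2.5-family models use. The `format_chatml` helper below is a hypothetical illustration of that format, not part of the model repository; in practice you would rely on the tokenizer's built-in chat template via `tokenizer.apply_chat_template(...)`.

```python
def format_chatml(messages):
    """Render a list of {role, content} dicts into a ChatML-style prompt string."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model generates the completion.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Apache-2.0 license in one sentence."},
])
print(prompt)
```

This mirrors the structure the tokenizer's chat template produces, which is useful for inspecting or logging prompts before passing them to the model.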
Key Training Details
- Base Model: Fine-tuned from Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3-alpaca_gpt4_zh.
- Training Efficiency: Training was accelerated roughly 2x by using Unsloth together with Hugging Face's TRL library, suggesting a focus on efficient resource utilization and faster iteration cycles during development.
Licensing
- The model is released under the Apache-2.0 license, providing broad permissions for use, modification, and distribution.