nqdhocai/LogicQwen-2.5-7B
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Ctx length: 32k · License: apache-2.0 · Architecture: Transformer
LogicQwen-2.5-7B is a 7.6 billion parameter causal language model developed by nqdhocai, fine-tuned from unsloth/Qwen2.5-7B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which accelerates fine-tuning. With a 131072 token context length, it is designed for general language understanding and generation tasks.
Model Overview
nqdhocai/LogicQwen-2.5-7B is a 7.6 billion parameter language model, fine-tuned from the Qwen2.5-7B-Instruct base model. It leverages the Unsloth library for accelerated training, which reportedly yields up to a 2x training speedup, and integrates with Hugging Face's TRL library for efficient fine-tuning.
Key Characteristics
- Base Model: Fine-tuned from unsloth/Qwen2.5-7B-Instruct.
- Training Efficiency: Utilizes Unsloth for significantly faster training, reducing computational overhead.
- Context Length: Supports a substantial context window of 131072 tokens, suitable for processing longer inputs and generating coherent, extended outputs.
- License: Distributed under the Apache-2.0 license, allowing for broad usage and modification.
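Given the characteristics above, the model can be loaded like any other Hugging Face causal LM checkpoint. A minimal sketch using the `transformers` library follows; the repo id comes from this card, while the dtype and device settings are illustrative assumptions, not documented requirements:

```python
MODEL_ID = "nqdhocai/LogicQwen-2.5-7B"

def load_logicqwen(model_id: str = MODEL_ID):
    """Load the tokenizer and model.

    Requires `transformers` installed and enough memory for 7.6B weights.
    Imported lazily so the constant above is usable without the heavy dependency.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the checkpoint's native precision
        device_map="auto",    # shard across available GPUs, else fall back to CPU
    )
    return tokenizer, model
```

Calling `load_logicqwen()` downloads the weights on first use; pass a local path instead of the repo id to load from disk.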
Potential Use Cases
- General Text Generation: Capable of generating human-like text for various applications.
- Instruction Following: Benefits from its instruction-tuned base, making it effective for tasks requiring specific directives.
- Long-Context Applications: Its large context window makes it suitable for tasks like summarization of lengthy documents, detailed question answering, or maintaining conversational coherence over extended dialogues.
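Because the base model is instruction-tuned Qwen2.5, prompts follow the ChatML conversation format. The sketch below spells out that wire format in plain Python; in practice `tokenizer.apply_chat_template` produces it for you, and the system message here is only an illustrative placeholder:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in Qwen2.5's ChatML style.

    The <|im_start|>/<|im_end|> tags delimit each role's turn; the prompt
    ends mid-assistant-turn so the model generates the reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Summarize this report.")
```

Feeding `prompt` to the tokenizer and then to `model.generate` yields the assistant's continuation; for multi-turn dialogue, append each completed assistant turn (closed with `<|im_end|>`) before the next user turn.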