amdevghj/Qwen-MyStory-Style

Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights

amdevghj/Qwen-MyStory-Style is a 7.6-billion-parameter, Qwen2-based, instruction-tuned causal language model developed by amdevghj. It was finetuned with Unsloth and Hugging Face's TRL library for faster training, and is optimized for general instruction-following tasks, leveraging the Qwen2 architecture for robust performance.


Overview

amdevghj/Qwen-MyStory-Style is a 7.6-billion-parameter language model finetuned by amdevghj from unsloth/qwen2.5-7b-instruct-bnb-4bit, a 4-bit (bitsandbytes) build of Qwen2.5-7B-Instruct. Training used Unsloth together with Hugging Face's TRL library, which the author reports made finetuning roughly 2x faster.
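As a Qwen2-based instruct model, it presumably expects the ChatML-style prompt format used across the Qwen family (normally produced by the tokenizer's `apply_chat_template`). A minimal sketch of that format; the role tags are standard Qwen conventions, and the helper name and messages are illustrative, not confirmed by this card:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # ChatML-style role tags used by Qwen-family instruct models.
    # The trailing "<|im_start|>assistant\n" cues the model to respond.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Tell me a short story about a lighthouse.",
)
print(prompt)
```

In practice you would let the tokenizer apply this template rather than building the string by hand; the sketch just shows what the model sees.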

Key Capabilities

  • Instruction Following: Designed to respond effectively to a wide range of instructions.
  • Efficient Training: Benefits from Unsloth's optimizations for faster finetuning.
  • Qwen2 Architecture: Leverages the robust capabilities of the Qwen2 model family.
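A hedged loading sketch using the Hugging Face `transformers` API (not an official snippet from this card; requires `pip install transformers accelerate` and, for practical use, a GPU — the prompt text is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "amdevghj/Qwen-MyStory-Style"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Download tokenizer and weights from the Hub, placing layers automatically.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # Let the tokenizer build the Qwen chat prompt for a single user turn.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Tell me a short story about a lighthouse."))
```

The generation call is kept under the `__main__` guard because it downloads several gigabytes of weights on first run.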

Good For

  • Developers seeking a Qwen2-based model for general instruction-following tasks.
  • Applications requiring a model that has undergone efficient finetuning.
  • Experimentation with models trained using Unsloth's accelerated methods.