abcorrea/random-v7

Text Generation · 4B parameters · BF16 · 32k context length · Transformer · Published: Jan 15, 2026

abcorrea/random-v7 is a 4 billion parameter language model fine-tuned from Qwen/Qwen3-4B-Thinking-2507. This model was trained using Supervised Fine-Tuning (SFT) with the TRL library. It is designed for general text generation tasks, leveraging its Qwen3 base for robust language understanding and generation capabilities.


Model Overview

abcorrea/random-v7 is a 4 billion parameter language model built on Qwen/Qwen3-4B-Thinking-2507. It has undergone Supervised Fine-Tuning (SFT) with the Hugging Face TRL library, improving its ability to follow instructions and generate coherent text.

Key Capabilities

  • General Text Generation: Capable of generating human-like text based on given prompts.
  • Instruction Following: Benefits from SFT to better understand and respond to user queries.
  • Qwen3 Base: Inherits the strong foundational language understanding from its Qwen3 base model.
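A minimal inference sketch using the Hugging Face transformers library. The model id comes from this card; the prompt and generation settings are illustrative, not recommended values:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "abcorrea/random-v7"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion (downloads the model weights on first use)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Format the conversation with the model's chat template.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )

    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Since the base model is a "Thinking" variant, generated text may include reasoning content before the final answer.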

Training Details

The model was fine-tuned using the TRL library (version 0.19.1) with Transformers (4.52.1), PyTorch (2.7.0), Datasets (4.0.0), and Tokenizers (0.21.1). The SFT process adapts the base model toward more specific, instruction-controlled text outputs.

Good For

  • Conversational AI: Generating responses in dialogue systems.
  • Content Creation: Assisting with writing tasks, such as drafting articles or creative content.
  • Prototyping: Quick experimentation with a fine-tuned 4B parameter model for various NLP applications.