Fatma04/Egyptian-Podcast-Qwen-Final-16bit
Task: Text generation
Concurrency cost: 1
Model size: 4B parameters
Quantization: BF16
Context length: 32k
Published: Mar 6, 2026
License: apache-2.0
Architecture: Transformer (open weights)

Fatma04/Egyptian-Podcast-Qwen-Final-16bit is a Qwen3-based, instruction-tuned language model developed by Fatma04. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training. The model is adapted from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit, which suggests it is optimized for specific conversational or content-generation tasks, likely related to Egyptian podcasts.
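Since this is a standard Qwen3-style instruction model published in 16-bit weights, it should load with the usual Hugging Face `transformers` auto classes. The sketch below is an assumption based on that convention, not an official snippet from the author; the `RUN_MODEL_DEMO` environment-variable guard and the sample Arabic prompt are purely illustrative, and the guard exists only so the script does not download several GB of weights unless explicitly asked to.

```python
# Minimal usage sketch, assuming the standard transformers chat API.
import os

MODEL_ID = "Fatma04/Egyptian-Podcast-Qwen-Final-16bit"

def main():
    # Imported lazily so the script stays importable without transformers/torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # keeps the published BF16 weights as-is
        device_map="auto",    # place on GPU if available
    )

    # Illustrative prompt: ask for a short podcast intro in Arabic.
    messages = [{"role": "user", "content": "اكتب مقدمة قصيرة لبودكاست عن القاهرة"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__" and os.environ.get("RUN_MODEL_DEMO"):
    # Guard: only download and run the 4B model when explicitly requested.
    main()
```

Set `RUN_MODEL_DEMO=1` in the environment to actually fetch the weights and generate; on a machine without a GPU, `device_map="auto"` falls back to CPU, where generation will be slow but functional.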
