beetleware/Bee1reason-arabic-Qwen-14B

Text Generation · Concurrency cost: 1 · Model size: 14B · Quant: FP8 · Context length: 32k · Published: May 21, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

beetleware/Bee1reason-arabic-Qwen-14B is a 14-billion-parameter, Qwen3-based large language model fine-tuned by beetleware for enhanced logical and deductive reasoning in Arabic. It builds on the unsloth/Qwen3-14B base model and was fine-tuned with LoRA via the Unsloth library for efficient training. The model excels at complex Arabic reasoning tasks while maintaining general conversational ability, and supports a 32,768-token context length.


Model Overview

Bee1reason-arabic-Qwen-14B is a 14-billion parameter Large Language Model developed by beetleware, fine-tuned from the unsloth/Qwen3-14B base model. Its primary focus is to significantly improve logical and deductive reasoning capabilities specifically in the Arabic language, while also retaining general conversational proficiency.

Key Capabilities

  • Arabic Logical Reasoning: Specifically trained on a custom dataset (beetlware/arabic-reasoning-dataset-logic) to handle various types of Arabic logical reasoning tasks, including deduction, induction, and abduction.
  • Conversational Format: Designed to operate within a conversational structure, often incorporating "thinking steps" (within <think>...</think> tags) before delivering a final answer, which aids in complex inference and explanation.
  • Efficient Fine-tuning: Utilized LoRA (Low-Rank Adaptation) with the Unsloth library for faster training and reduced GPU memory consumption, resulting in a merged 16-bit (float16) model.
  • Qwen3 Base: Benefits from the robust architecture and performance of the Qwen3 14B base model.
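The `<think>...</think>` convention described above can be handled client-side when post-processing model output. A minimal sketch (the helper name is hypothetical, and it assumes the model emits its reasoning in a single `<think>` block before the final answer, as described above):

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a model response into (thinking_steps, final_answer).

    Assumes at most one <think>...</think> block precedes the answer;
    if no block is found, the whole response is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return "", response.strip()
    thinking = match.group(1).strip()
    answer = response[match.end():].strip()
    return thinking, answer
```

This is useful, for example, to hide the reasoning steps in a chat UI while still logging them for inspection.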

Good For

  • Applications requiring advanced Arabic logical and deductive reasoning.
  • Generating detailed explanations or step-by-step thought processes in Arabic.
  • Conversational AI systems where accurate Arabic inference is crucial.
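For orientation, Qwen-family models such as this one use a ChatML-style prompt layout. The hand-rolled formatter below is a sketch only, to make the structure visible; in a real application, prefer the tokenizer's `apply_chat_template`, which is authoritative for this model's exact prompt format (the function name and example messages here are illustrative):

```python
def format_chatml(messages: list[dict]) -> str:
    """Render chat messages in the ChatML-style layout used by
    Qwen-family models: each turn is wrapped in
    <|im_start|>role ... <|im_end|> markers, followed by an open
    assistant turn to prompt generation."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful Arabic reasoning assistant."},
    {"role": "user", "content": "If all birds have wings and a sparrow is a bird, what follows?"},
]
prompt = format_chatml(messages)
```

In practice one would pass the `messages` list directly to the tokenizer's `apply_chat_template` and generate with the model loaded from the `beetleware/Bee1reason-arabic-Qwen-14B` repository via the Hugging Face `transformers` library.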