fevohh/WorldParser-8B-1903-16bit
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Mar 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

fevohh/WorldParser-8B-1903-16bit is an 8-billion-parameter Llama 3.1 model finetuned by fevohh. It was trained with Unsloth and Hugging Face's TRL library, which accelerate finetuning, and targets general language tasks, building on the Llama 3.1 architecture for robust performance.


Overview

fevohh/WorldParser-8B-1903-16bit is an 8-billion-parameter language model developed by fevohh. It is a finetuned version of the unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit base model, so it builds on the Llama 3.1 architecture. The finetuning used Unsloth and Hugging Face's TRL library, a combination known for significantly faster training.
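Because the base is a Llama 3.1 Instruct model, it can be loaded and prompted through the standard transformers chat-template flow. The sketch below assumes the checkpoint is published on the Hugging Face Hub under the repo id above; the dtype and generation settings are illustrative, not taken from the card.

```python
# Minimal inference sketch, assuming the checkpoint is hosted on the
# Hugging Face Hub under the repo id "fevohh/WorldParser-8B-1903-16bit".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fevohh/WorldParser-8B-1903-16bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 16-bit weights, matching the repo name
    device_map="auto",
)

# Llama 3.1 Instruct finetunes expect the chat template for prompting.
messages = [{"role": "user", "content": "Explain what an instruction-tuned model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```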

Key Characteristics

  • Base Model: Finetuned from Llama 3.1-8B-Instruct.
  • Parameter Count: 8 billion parameters.
  • Training Efficiency: Leverages Unsloth for 2x faster finetuning (see the recipe sketch after this list).
  • License: Distributed under the Apache-2.0 license.
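
The card does not disclose the actual training configuration, but the Unsloth + TRL combination it names typically follows the pattern below. This is a hypothetical sketch: the dataset file, LoRA settings, and hyperparameters are placeholders, and exact SFTTrainer argument names vary across TRL versions.

```python
# Hypothetical Unsloth + TRL recipe following the standard Unsloth notebook
# pattern; the dataset file, LoRA settings, and hyperparameters below are
# placeholders, not the training config actually used for WorldParser-8B.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model the card names, patched by Unsloth for speed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
    max_seq_length=8192,  # matches the 8k context length listed above
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding the training text
    max_seq_length=8192,
    args=TrainingArguments(
        output_dir="worldparser-sft",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```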

Good For

  • Applications requiring a Llama 3.1-based model with 8 billion parameters.
  • Scenarios where efficient finetuning methods are beneficial.
  • General language understanding and generation tasks, given its instruction-tuned base.