jessicarizzler/amelia-32b-public

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32K · Published: Jan 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

jessicarizzler/amelia-32b-public is a 32.8-billion-parameter Qwen2.5-based instruction-tuned language model developed by jessicarizzler. Finetuned from unsloth/Qwen2.5-32B-Instruct-bnb-4bit, it was trained with Unsloth and Hugging Face's TRL library for faster training. The model targets general instruction-following tasks and offers a substantial context length of 131,072 tokens.


Model Overview

jessicarizzler/amelia-32b-public is a 32.8-billion-parameter instruction-tuned language model developed by jessicarizzler. It is finetuned from the unsloth/Qwen2.5-32B-Instruct-bnb-4bit base model using the Unsloth framework and Hugging Face's TRL library, which together enabled roughly 2x faster training than a standard finetuning setup.
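
For context, a finetune in this style typically looks like the minimal sketch below. This is not the author's actual training script: the base model ID is the one this card names, while the sequence length, LoRA settings, and one-row placeholder dataset are illustrative, and keyword placement varies across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the 4-bit base model named on this card; max_seq_length is illustrative.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are common defaults,
# not the settings used for amelia-32b-public.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Tiny in-memory placeholder dataset; any dataset with a "text" column works.
dataset = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hi.\n\n### Response:\nHi!"]}
)

# Older TRL versions take dataset_text_field/max_seq_length directly;
# recent versions move them into an SFTConfig.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```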

Key Characteristics

  • Architecture: Based on the Qwen2.5 model family.
  • Parameter Count: 32.8 billion parameters, offering significant capacity for complex tasks.
  • Training Efficiency: Benefits from Unsloth's optimizations, enabling faster finetuning.
  • Context Length: Features a substantial context window of 131,072 tokens, suitable for processing lengthy inputs and maintaining conversational coherence over extended interactions (see the serving sketch after this list).
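
To exercise the long context window in practice, a serving sketch with vLLM might look like the following. This assumes the checkpoint loads in vLLM and that your hardware has enough memory for the full window; the model ID comes from this card, while max_model_len, memory settings, and the prompt are illustrative.

```python
from vllm import LLM, SamplingParams

# Load with the full advertised context window. A 32.8B model at this
# length needs substantial GPU memory; values here are illustrative.
llm = LLM(
    model="jessicarizzler/amelia-32b-public",
    max_model_len=131072,
    gpu_memory_utilization=0.90,
)

params = SamplingParams(temperature=0.7, max_tokens=512)

# A long document can be pasted into the prompt up to the context limit.
outputs = llm.generate(["Summarize the following document:\n..."], params)
print(outputs[0].outputs[0].text)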

Use Cases

This model is well-suited for a broad range of instruction-following applications, leveraging its large parameter count and extensive context window. Developers can deploy it for tasks requiring detailed responses, complex reasoning, and the ability to process and generate long-form content.
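
A minimal local-inference sketch with Hugging Face Transformers follows, assuming the repository ships a standard Qwen2.5-style tokenizer and chat template; the model ID is from this card, and the dtype/device settings and prompt are illustrative (a 32.8B model generally needs multiple GPUs or quantization to fit in memory).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jessicarizzler/amelia-32b-public"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain instruction tuning in two sentences."},
]

# Build the prompt with the model's chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```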