bunnycore/Llama-3.2-3B-Pure-RP

Text generation · 3.2B parameters · BF16 · 32k context · Transformer architecture · Published: Oct 20, 2024

bunnycore/Llama-3.2-3B-Pure-RP is a 3.2 billion parameter language model created by bunnycore, merged with the passthrough method from huihui-ai/Llama-3.2-3B-Instruct-abliterated and bunnycore/Llama-3.2-3b-lora_model. It targets general language tasks and supports a 32768-token context length, making it suitable for applications that require extensive input processing.


Model Overview

bunnycore/Llama-3.2-3B-Pure-RP is a 3.2 billion parameter language model developed by bunnycore and created with MergeKit by combining two base models: huihui-ai/Llama-3.2-3B-Instruct-abliterated and bunnycore/Llama-3.2-3b-lora_model.

Merge Details

The model uses MergeKit's passthrough merge method, which copies tensors from the source models directly rather than interpolating their weights, so each contributing layer is carried over unchanged. The merge was configured with bfloat16 as its dtype.
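The card does not reproduce the exact configuration used, but a passthrough merge in MergeKit is typically declared with a YAML file along these lines (the layer range below is hypothetical, shown only to illustrate the format):

```yaml
# Illustrative MergeKit passthrough config -- the layer range is
# hypothetical, not the one bunnycore actually used.
slices:
  - sources:
      - model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
        layer_range: [0, 28]
merge_method: passthrough
dtype: bfloat16
```

With `merge_method: passthrough`, MergeKit writes the listed slices straight into the output checkpoint, which is why the source weights survive the merge unmodified.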

Key Characteristics

  • Parameter Count: 3.2 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a substantial context window of 32768 tokens, enabling the processing of longer inputs and maintaining conversational coherence over extended interactions.
  • Architecture: Based on the Llama 3.2 family, inheriting its foundational capabilities for language understanding and generation.
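To make the 32768-token context window concrete, the sketch below computes how many tokens remain for a reply once a prompt is in place (the helper name is hypothetical, not part of any published API):

```python
# Hypothetical helper: check that a prompt leaves room for generation
# inside the model's 32768-token context window.
CONTEXT_LENGTH = 32768  # Llama-3.2-3B-Pure-RP's maximum context

def generation_budget(prompt_tokens: int, context_length: int = CONTEXT_LENGTH) -> int:
    """Return how many tokens remain for the model's reply."""
    if prompt_tokens >= context_length:
        raise ValueError("prompt alone exceeds the context window")
    return context_length - prompt_tokens

# A 30,000-token roleplay history still leaves 2,768 tokens for the reply.
print(generation_budget(30_000))  # 2768
```

A budget check like this is useful in long-running conversations, where accumulated history can silently crowd out the space available for generation.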

Potential Use Cases

This merged model is suitable for a variety of general-purpose language tasks, including:

  • Text generation and completion.
  • Instruction-following tasks, given its Instruct base.
  • Applications requiring a moderate-sized model with a large context window.
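A minimal loading-and-generation sketch with the Hugging Face `transformers` library follows; the generation settings are illustrative rather than published defaults, and `device_map="auto"` assumes the `accelerate` package is installed:

```python
# Sketch: loading bunnycore/Llama-3.2-3B-Pure-RP for instruction-style
# generation with Hugging Face transformers.

MODEL_ID = "bunnycore/Llama-3.2-3B-Pure-RP"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported inside the function so the sketch can be read (and the
    # helper defined) without the heavy dependencies installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# generate("Write a short scene between two rival knights.")  # downloads the weights
```

Applying the tokenizer's chat template, as above, is what lets the model's Instruct heritage shape its replies to conversational prompts.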