Ramikan-BR/TiamaPY-v29

Text Generation
  • Concurrency Cost: 1
  • Model Size: 1.1B
  • Quant: BF16
  • Ctx Length: 2k
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights, Gated, Cold

Ramikan-BR/TiamaPY-v29 is a 1.1 billion parameter Llama-based model developed by Ramikan-BR. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training, and is optimized for efficient deployment, making it suitable for applications that need a compact yet capable language model.


Overview

Ramikan-BR/TiamaPY-v29 is a 1.1 billion parameter Llama-based language model developed by Ramikan-BR. It was fine-tuned from unsloth/tinyllama-chat-bnb-4bit using the Unsloth library and Hugging Face's TRL, which facilitated a 2x faster training process.
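Because the base checkpoint (unsloth/tinyllama-chat-bnb-4bit) is a chat model, prompts should follow its chat template. A minimal sketch of the Zephyr-style template commonly used by TinyLlama chat checkpoints is below; the exact template string is an assumption and should be verified against the `chat_template` field in the model's tokenizer_config.json:

```python
def build_chat_prompt(system: str, user: str) -> str:
    """Build a Zephyr-style chat prompt as used by TinyLlama chat checkpoints.

    The template below is an assumption; verify it against the model's
    tokenizer_config.json (chat_template field) before relying on it.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_chat_prompt(
    "You are a helpful assistant.",
    "Write a Python one-liner to reverse a list.",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from the Transformers library produces the same string directly from the model's bundled template, which avoids hard-coding it.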

Key Capabilities

  • Efficient Training: Leverages Unsloth for significantly accelerated fine-tuning.
  • Compact Size: At 1.1 billion parameters, it offers a balance between performance and resource efficiency.
  • Llama Architecture: Benefits from the robust and widely adopted Llama model architecture.
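The compact-size claim can be made concrete with back-of-the-envelope arithmetic: at BF16 precision (2 bytes per parameter), 1.1 billion parameters need roughly 2.2 GB for the weights alone, before the KV cache and activations. The sketch below also estimates the 2k-context KV cache, assuming the standard TinyLlama configuration (22 layers, 4 grouped-query KV heads, head dim 64); those architecture numbers are assumptions, not stated in this card:

```python
# Weight memory: parameters x bytes per parameter (BF16 = 2 bytes).
params = 1.1e9
bytes_per_param = 2
weight_gb = params * bytes_per_param / 1e9  # ~2.2 GB for weights alone

# KV cache at full 2k context, assuming TinyLlama's config
# (22 layers, 4 KV heads via GQA, head dim 64) -- an assumption.
layers, kv_heads, head_dim, ctx = 22, 4, 64, 2048
kv_gb = 2 * layers * kv_heads * head_dim * bytes_per_param * ctx / 1e9

print(f"weights: {weight_gb:.1f} GB, kv cache @2k: {kv_gb:.3f} GB")
```

Even with the KV cache included, the total stays well under 3 GB, which is what makes the model practical on a single consumer GPU.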

Good For

  • Applications requiring a lightweight yet capable language model.
  • Scenarios where rapid fine-tuning and deployment are critical.
  • Developers looking for an efficient Llama-based model for chat or general language tasks.