umarigan/LLama-3-8B-Instruction-tr

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8K · Published: Apr 24, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

umarigan/LLama-3-8B-Instruction-tr is an 8-billion-parameter Llama 3 model fine-tuned by umarigan. It was instruction-tuned using Unsloth and Hugging Face's TRL library, which enabled 2x faster training. The model is designed for general instruction-following tasks and, as its usage examples show, is particularly capable in Turkish.


umarigan/LLama-3-8B-Instruction-tr Overview

This model is an 8-billion-parameter Llama 3 variant, developed by umarigan and fine-tuned from unsloth/llama-3-8b-bnb-4bit. It was trained with the Unsloth library in conjunction with Hugging Face's TRL library, which accelerated the fine-tuning process by 2x.
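Since the checkpoint is hosted on Hugging Face, it can be loaded with the standard transformers causal-LM API. The sketch below is a minimal, hedged example: the Alpaca-style `### Instruction:` / `### Response:` prompt template is an assumption (Unsloth fine-tunes commonly use it, but check the upstream model card for the exact format), and generation settings are illustrative.

```python
MODEL_ID = "umarigan/LLama-3-8B-Instruction-tr"

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a simple Alpaca-style template.
    NOTE: this template is an assumption; verify it against the model card."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

if __name__ == "__main__":
    # Model download/inference kept behind the main guard; requires
    # `transformers`, `torch`, and enough memory for an 8B model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Turkish example: "What is the capital of Turkey?"
    prompt = build_prompt("Türkiye'nin başkenti neresidir?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The pure `build_prompt` helper keeps the template in one place, so swapping in the model's actual chat template (e.g. via `tokenizer.apply_chat_template`) only requires changing that function.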

Key Capabilities

  • Instruction Following: Designed to respond to user instructions effectively.
  • Turkish Language Processing: Demonstrates proficiency in generating responses in Turkish, as evidenced by the provided examples.
  • Efficient Training: Benefits from Unsloth's optimizations for faster fine-tuning.

Good For

  • Applications requiring a compact yet capable instruction-tuned model.
  • Tasks involving Turkish language understanding and generation.
  • Developers looking for a Llama 3 base model with enhanced training efficiency.

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model tune the following sampler parameters (the specific values for each config are shown on the model page):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
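These sampler parameters map directly onto the request body of an OpenAI-compatible chat completions endpoint (which Featherless exposes). A small sketch of building such a request follows; the values shown are placeholders, not the actual top configurations, and the endpoint shape is the generic OpenAI-compatible one rather than anything specific documented here.

```python
# Sampler parameters listed above; values per config are NOT reproduced here.
SAMPLER_KEYS = {
    "temperature", "top_p", "top_k", "frequency_penalty",
    "presence_penalty", "repetition_penalty", "min_p",
}

def make_request_body(prompt: str, **sampler: float) -> dict:
    """Build an OpenAI-compatible chat request body with sampler overrides.

    Rejects parameter names outside the set listed on the model page.
    """
    unknown = set(sampler) - SAMPLER_KEYS
    if unknown:
        raise ValueError(f"unsupported sampler keys: {sorted(unknown)}")
    body = {
        "model": "umarigan/LLama-3-8B-Instruction-tr",
        "messages": [{"role": "user", "content": prompt}],
    }
    body.update(sampler)  # e.g. temperature=0.7, top_p=0.9 (placeholder values)
    return body
```

The resulting dict can be POSTed as JSON to a compatible `/v1/chat/completions` endpoint; validating keys up front catches typos like `repitition_penalty` before they are silently ignored server-side.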