PranavSharma10/LlamaFinetunedTest

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 8k · Published: Oct 16, 2024 · License: MIT · Architecture: Transformer

PranavSharma10/LlamaFinetunedTest is an 8 billion parameter language model based on the Llama 3 architecture, fine-tuned using Llama-Factory. This model leverages unsloth/llama-3-8b-Instruct-bnb-4bit and meta-llama/Meta-Llama-3-8B-Instruct as its base, offering a context length of 8192 tokens. It is designed for general instruction-following tasks, providing a compact yet capable solution for various NLP applications.


Model Overview

PranavSharma10/LlamaFinetunedTest is an 8 billion parameter instruction-tuned language model built upon the Llama 3 architecture. It was fine-tuned using the Llama-Factory framework, leveraging both unsloth/llama-3-8b-Instruct-bnb-4bit and meta-llama/Meta-Llama-3-8B-Instruct as its foundational models. This model is designed to handle a wide range of instruction-following tasks, making it suitable for various natural language processing applications.

Key Characteristics

  • Base Architecture: Llama 3, providing a robust and well-established foundation.
  • Parameter Count: 8 billion parameters, balancing performance with computational efficiency.
  • Context Length: Supports an 8192-token context window, allowing for processing longer inputs and generating more coherent responses.
  • Fine-tuning: Utilizes Llama-Factory for efficient and effective instruction-tuning.
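Because the model is instruction-tuned on the Llama 3 Instruct chat format, prompts sent to raw-completion endpoints must include the matching special tokens. A minimal sketch of that single-turn template is below (token names follow Meta's published Llama 3 format; in practice, `tokenizer.apply_chat_template` from `transformers` assembles this for you):

```python
# Minimal sketch of the Llama 3 Instruct prompt format, built by hand.
# Normally tokenizer.apply_chat_template handles this; shown here only
# to illustrate the structure the fine-tuned model expects.

def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful assistant.",
    "Summarize the Llama 3 architecture in one sentence.",
)
print(prompt)
```

The trailing assistant header leaves the prompt open for the model to complete, which is how the template signals where generation should begin.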

Use Cases

This model is well-suited for applications requiring:

  • General instruction following: Responding to prompts and performing tasks as instructed.
  • Text generation: Creating coherent and contextually relevant text.
  • Chatbots and conversational AI: Engaging in dialogue and providing informative responses.
  • Prototyping and development: A capable base model for further specialization or integration into applications.

Popular Sampler Settings

Featherless tracks the parameter combinations its users apply most often to this model. The sampler parameters involved are:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
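These parameters are typically passed per-request. A minimal sketch of a request body for an OpenAI-compatible chat completions endpoint follows; the values shown are illustrative placeholders, not the actual top user configurations, and the assumption is that the serving endpoint accepts the extended sampler fields (`top_k`, `repetition_penalty`, `min_p`) alongside the standard OpenAI ones:

```python
# Sketch of a chat completions payload carrying the sampler settings
# listed above. Values are illustrative, not recommended defaults.
import json

payload = {
    "model": "PranavSharma10/LlamaFinetunedTest",
    "messages": [
        {"role": "user", "content": "Write a haiku about llamas."}
    ],
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they appear
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below this relative probability
    "max_tokens": 256,
}
print(json.dumps(payload, indent=2))
```

Sending this body as JSON (e.g. via an HTTP POST with an API key header) would apply the chosen sampler settings to a single generation without changing any server-side defaults.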