Ja-ck/llama-2-13b-DPO-Y24-v2

Hugging Face
  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 13B
  • Quantization: FP8
  • Context Length: 4k
  • License: apache-2.0
  • Architecture: Transformer
  • Availability: Open Weights, Warm

Ja-ck/llama-2-13b-DPO-Y24-v2 is a 13 billion parameter language model based on the Llama 2 architecture. This model is fine-tuned using Direct Preference Optimization (DPO) and is designed for conversational tasks, specifically following instructions and generating responses in a question-answer format. Its primary strength lies in its ability to adhere to a defined prompt template for structured interactions.


Model Overview

Developed by Ja-ck, this model builds on the 13 billion parameter Llama 2 architecture and is fine-tuned with Direct Preference Optimization (DPO) to strengthen instruction following and produce coherent, contextually relevant responses.

Key Capabilities

  • Instruction Following: Optimized to process and respond to user instructions effectively.
  • Conversational Generation: Designed for generating answers based on provided questions.
  • Structured Prompt Adherence: Specifically trained to utilize a defined prompt template, ensuring consistent input and output formatting.
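Since the model is trained to follow a defined prompt template, inputs should be formatted consistently before generation. The exact template this checkpoint expects is specified on its Hugging Face model card; the sketch below assumes the stock Llama 2 instruction format (`[INST]` / `<<SYS>>`) purely as an illustration.

```python
def build_prompt(question: str, system: str = "You are a helpful assistant.") -> str:
    """Format a single-turn prompt in the standard Llama 2 instruction style.

    Note: this is the generic Llama 2 chat layout, used here as an example;
    verify the template on the model card before relying on it.
    """
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"

prompt = build_prompt("What is Direct Preference Optimization?")
```

A consistently formatted prompt matters for DPO-tuned models: responses were preference-ranked against inputs in this structure, so deviating from it tends to degrade output quality.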

Good For

  • Question Answering Systems: Ideal for applications requiring structured Q&A interactions.
  • Chatbot Development: Suitable for building conversational agents that need to follow specific input formats.
  • Instruction-based Text Generation: Use cases where adherence to a clear instruction-response pattern is crucial.
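For the Q&A and chatbot use cases above, the model is typically called through an OpenAI-compatible completions endpoint. The snippet below only constructs a request body as a sketch; the endpoint URL, auth scheme, and parameter defaults are assumptions, not verified values for this service.

```python
import json

# Hypothetical request body for an OpenAI-compatible completions endpoint.
# Model ID comes from the card; the other values are illustrative defaults.
payload = {
    "model": "Ja-ck/llama-2-13b-DPO-Y24-v2",
    "prompt": "[INST] Summarize DPO in one sentence. [/INST]",
    "max_tokens": 256,
    "temperature": 0.7,
}
body = json.dumps(payload)
```

The serialized `body` would then be POSTed with an API key in the `Authorization` header, per the provider's documentation.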

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler settings (the specific values load interactively on the Featherless page and are not reproduced here):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
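Before sending a generation request, it can help to sanity-check sampler settings against their conventional ranges. The ranges below are common conventions for these parameters, not limits published for this model, and the values used are placeholders rather than the actual Featherless user presets.

```python
def validate_sampler(cfg: dict) -> dict:
    """Check sampler settings against commonly used ranges (illustrative only)."""
    checks = {
        "temperature": lambda v: v >= 0.0,
        "top_p": lambda v: 0.0 < v <= 1.0,
        "top_k": lambda v: isinstance(v, int) and v >= 0,
        "frequency_penalty": lambda v: -2.0 <= v <= 2.0,
        "presence_penalty": lambda v: -2.0 <= v <= 2.0,
        "repetition_penalty": lambda v: v > 0.0,
        "min_p": lambda v: 0.0 <= v <= 1.0,
    }
    for key, value in cfg.items():
        if key in checks and not checks[key](value):
            raise ValueError(f"{key}={value} is out of range")
    return cfg

# Placeholder configuration, not a real user preset.
sampler = validate_sampler({
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
})
```

A validated dict like this can then be merged into whatever request format the serving API expects.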