dphn/dolphin-2.9-llama3-8b

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Apr 20, 2024 · License: other · Architecture: Transformer · 0.5K Warm

Dolphin 2.9 Llama 3 8b is an 8 billion parameter, uncensored, full-weight fine-tuned language model developed by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations. Based on Meta's Llama-3-8B, it was fine-tuned with a 4k sequence length on a base model with an 8k context. The model excels at instruction following, conversational tasks, and coding, and offers initial agentic abilities with function calling, making it suitable for highly compliant and versatile AI applications.


Dolphin 2.9 Llama 3 8b Overview

Dolphin 2.9 Llama 3 8b is an 8 billion parameter, full-weight fine-tuned language model built upon Meta's Llama-3-8B. Developed by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations, it was trained with a 4k sequence length on a base model with an 8k context. A key characteristic is its uncensored nature: the training dataset was filtered to remove alignment and bias, resulting in a highly compliant model. Users are advised to implement their own alignment layers for responsible deployment.
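Dolphin models are documented as using the ChatML prompt format. As a minimal sketch, the prompt for a single turn can be assembled by hand like this (the system message text is illustrative, not a required value):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt as used by Dolphin models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Write a haiku about the sea.",
)
```

In practice, most serving stacks apply this template automatically from the model's chat template, so hand-building it is only needed for raw completion endpoints.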

Key Capabilities

  • Instruction Following: Excels at understanding and executing complex instructions.
  • Conversational Skills: Capable of engaging in natural and coherent dialogues.
  • Coding Proficiency: Possesses strong coding abilities, enhanced by specific coding datasets.
  • Agentic Abilities & Function Calling: Includes initial capabilities for agentic workflows and supports function calling, enabling more dynamic interactions.
  • Uncensored Output: Provides highly compliant responses, as alignment and bias filtering was removed from the training data.
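Function calling typically works by having the model emit a structured (JSON) call that the host application parses and dispatches to a tool. A minimal sketch of the application side, assuming the model returns its call as a JSON object with `name` and `arguments` keys (the tool registry and the sample output below are illustrative, not real model completions):

```python
import json

# Hypothetical tool registry: tool name -> callable
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_output: str) -> str:
    """Parse a JSON function call emitted by the model and run the tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Illustrative model output, not an actual completion:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Lisbon"}}')
```

A production dispatcher would also validate the tool name and argument schema before executing, since an uncensored model will not refuse malformed or unexpected calls on its own.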

Good For

  • Customizable AI Assistants: Ideal for developers who require a highly compliant base model to implement their own safety and alignment layers.
  • Coding and Development Tools: Suitable for applications requiring robust code generation, translation, and feedback.
  • Research and Experimentation: Useful for exploring the behavior of uncensored models and developing novel AI applications without built-in restrictions.
  • Complex Task Automation: Benefits from agentic abilities and function calling for automating multi-step processes.
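Because the model ships without built-in alignment, one common approach is to wrap inference in an application-level filter. A minimal sketch of such a layer, where the blocklist, the refusal message, and the `generate` callable are all placeholders standing in for a real policy and inference call:

```python
import re

# Placeholder policy; a real deployment would use a proper moderation model.
BLOCKED_PATTERNS = [r"(?i)\bexample-banned-term\b"]

def aligned_generate(prompt: str, generate) -> str:
    """Run inference, then refuse if the output matches a blocked pattern."""
    output = generate(prompt)
    if any(re.search(p, output) for p in BLOCKED_PATTERNS):
        return "I can't help with that."
    return output

# Stub inference function for illustration:
safe = aligned_generate("hello", lambda p: "a friendly reply")
```

Pattern lists like this are only a first line of defense; filtering the prompt as well as the output, or routing both through a dedicated moderation model, is more robust.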