cookinai/Bald-Eagle-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Jan 17, 2024 · License: cc-by-nc-nd-4.0 · Architecture: Transformer · Open Weights · Cold

Bald-Eagle-7B by cookinai is a 7-billion-parameter chat model fine-tuned from fblgit/UNA-TheBeagle-7b-v1. It is optimized for conversational tasks through a combination of high-performing Orca-inspired datasets: cognitivecomputations/dolphin, Open-Orca/SlimOrca, and Intel/orca_dpo_pairs. The model targets chat applications and supports an 8192-token context length.


Bald-Eagle-7B: An Optimized Chat Model

Bald-Eagle-7B is a 7 billion parameter language model developed by cookinai, specifically designed for chat applications. It is a fine-tuned version of the fblgit/UNA-TheBeagle-7b-v1 base model, enhanced through a strategic combination of high-quality, Orca-inspired datasets.

Key Capabilities

  • Optimized for Chat: The model's training regimen, utilizing datasets like cognitivecomputations/dolphin, Open-Orca/SlimOrca, and Intel/orca_dpo_pairs, focuses on improving conversational fluency and response quality.
  • Leverages Orca-Inspired Data: By incorporating these specific datasets, Bald-Eagle-7B aims to replicate the strong reasoning and instruction-following capabilities often seen in models trained with Orca-style data.
  • 7 Billion Parameters: Offers a balance between performance and computational efficiency, suitable for various deployment scenarios.
  • 8192 Token Context Length: Provides ample context for extended conversations and complex prompts.
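The 8192-token context window above is a hard budget shared by the prompt and the generated response. A minimal sketch of a pre-flight check, using a rough 4-characters-per-token heuristic (an illustrative assumption, not the model's actual tokenizer):

```python
# Guard a chat prompt against Bald-Eagle-7B's 8192-token context window.
# The 4-chars-per-token ratio is a rough heuristic for English text; use
# the model's real tokenizer for an exact count.
CONTEXT_LENGTH = 8192

def fits_in_context(prompt: str, max_new_tokens: int, chars_per_token: int = 4) -> bool:
    est_prompt_tokens = len(prompt) // chars_per_token + 1
    # Prompt tokens plus requested generation must fit inside the window.
    return est_prompt_tokens + max_new_tokens <= CONTEXT_LENGTH

print(fits_in_context("Hello, eagle!", max_new_tokens=256))  # short prompt fits
```

In practice, long multi-turn conversations are the usual way to exhaust the window, so a check like this belongs before each generation call.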

Good For

  • General-purpose chatbots: Its optimization for chat makes it a strong candidate for building interactive conversational agents.
  • Instruction-following tasks: The Orca-inspired training suggests good performance in understanding and executing user instructions.
  • Applications requiring robust dialogue generation: Suitable for scenarios where coherent and contextually relevant responses are crucial.
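For the chatbot and dialogue use cases above, conversations are typically serialized into a single prompt string before generation. The ChatML-style delimiters below are an assumption for illustration only; consult the model's tokenizer configuration for its actual chat template:

```python
# Hedged sketch: flattening a multi-turn conversation into a prompt string.
# The <|im_start|>/<|im_end|> (ChatML-style) markers are an assumed format,
# not a confirmed template for Bald-Eagle-7B.
def format_chatml(messages: list[dict]) -> str:
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(parts)

convo = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain instruction tuning in one sentence."},
]
print(format_chatml(convo))
```

Using the wrong chat template is a common cause of degraded output quality with fine-tuned chat models, so the template should be verified against the upstream fblgit/UNA-TheBeagle-7b-v1 configuration.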

Popular Sampler Settings

The three most-used parameter combinations among Featherless users for this model are shown as interactive tabs on the model page. Each configuration sets the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
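A sampler configuration covering the parameters listed above might look like the following sketch. The values are placeholder examples for illustration, not the actual Featherless community presets:

```python
# Illustrative sampler configuration; values are placeholders, not the
# community presets from the model page.
sampler_config = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

def validate_sampler(cfg: dict) -> None:
    # Basic sanity checks on conventional value ranges for these knobs.
    assert cfg["temperature"] >= 0.0
    assert 0.0 <= cfg["top_p"] <= 1.0
    assert cfg["top_k"] >= 0
    assert 0.0 <= cfg["min_p"] <= 1.0
    assert cfg["repetition_penalty"] > 0.0

validate_sampler(sampler_config)
print("sampler config ok")
```

Lower temperatures favor deterministic, instruction-following replies, while higher temperatures with a repetition penalty above 1.0 are commonly used for more varied dialogue.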