Chat-Error/Kimiko-10.7B-v3
Available on Hugging Face

Text Generation · Concurrency Cost: 1 · Model Size: 10.7B · Quant: FP8 · Context Length: 4K · Architecture: Transformer

Kimiko-10.7B-v3 by Chat-Error is an experimental language model of roughly 10.7 billion parameters, trained on a new dataset. It uses the Alpaca prompt format and offers basic control over response length. The model is suited to developers experimenting with custom-trained language models and specific prompt formats.


Overview

Kimiko-10.7B-v3 is an experimental language model developed by Chat-Error. This version represents a new iteration, trained using a novel dataset. Its primary characteristic is the adoption of the Alpaca prompt format, which guides how users should interact with the model to achieve desired outputs.

Key Capabilities

  • Alpaca Prompt Format: Designed to work seamlessly with prompts structured according to the Alpaca format, making it compatible with tools and workflows that leverage this standard.
  • Response Length Control: Users can influence the length of the model's generated responses by appending (length:tiny) or (length:long) to the ### Response: section of their prompts. This provides a basic level of control over output verbosity.
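
A minimal sketch of how an Alpaca-style prompt with the length tag might be assembled; the helper function, instruction text, and input text below are illustrative and not part of the model card.

```python
# Sketch: assemble an Alpaca-format prompt for Kimiko-10.7B-v3.
# build_alpaca_prompt and the sample texts are illustrative, not from the model card.

def build_alpaca_prompt(instruction: str, input_text: str = "", length: str = "") -> str:
    """Build an Alpaca-style prompt, optionally tagging the response length."""
    # The model card documents (length:tiny) and (length:long) as the two tags.
    length_tag = f" (length:{length})" if length else ""
    parts = [f"### Instruction:\n{instruction}"]
    if input_text:
        parts.append(f"### Input:\n{input_text}")
    parts.append(f"### Response:{length_tag}\n")
    return "\n\n".join(parts)

prompt = build_alpaca_prompt(
    "Summarize the passage in one sentence.",
    input_text="The quick brown fox jumps over the lazy dog.",
    length="tiny",
)
print(prompt)
```

The printed string contains the ### Instruction, ### Input, and ### Response blocks in order, with the length tag on the response header; that string is what you send to the model as a plain completion prompt.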

Good For

  • Experimentation: Ideal for developers and researchers interested in testing models trained on new, custom datasets.
  • Alpaca-formatted Interactions: Best suited for applications where the Alpaca prompt format is a requirement or preferred method of interaction.
  • Basic Output Control: Useful for scenarios requiring simple adjustments to response length without complex parameter tuning.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each configuration specifies values for the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
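
A hedged sketch of how these parameters might be supplied through an OpenAI-compatible client. The base URL below assumes Featherless's OpenAI-compatible endpoint, every sampler value is a placeholder rather than one of the actual top-3 configurations, and fields outside the standard OpenAI schema (top_k, repetition_penalty, min_p) are passed via the client's extra_body option.

```python
# Sketch only: all sampler values below are placeholders, not the actual
# top configurations reported for this model.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

prompt = (
    "### Instruction:\nWrite a haiku about autumn.\n\n"
    "### Response: (length:tiny)\n"
)

completion = client.completions.create(
    model="Chat-Error/Kimiko-10.7B-v3",
    prompt=prompt,
    max_tokens=256,
    temperature=0.7,              # placeholder
    top_p=0.9,                    # placeholder
    frequency_penalty=0.0,        # placeholder
    presence_penalty=0.0,         # placeholder
    # Non-standard sampler fields go through extra_body.
    extra_body={
        "top_k": 40,                # placeholder
        "repetition_penalty": 1.1,  # placeholder
        "min_p": 0.05,              # placeholder
    },
)
print(completion.choices[0].text)
```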