Aspik101/StableBeluga-13B-instruct-PL-lora_unload

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · Published: Aug 4, 2023 · License: other · Architecture: Transformer

Aspik101/StableBeluga-13B-instruct-PL-lora_unload is a 13 billion parameter instruction-tuned causal language model based on the Llama-2 architecture. Developed by Aspik101, this model is specifically fine-tuned for Polish language tasks, leveraging the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset. It is designed for text generation in Polish, offering specialized performance for applications requiring instruction-following capabilities in the Polish language.


Aspik101/StableBeluga-13B-instruct-PL-lora_unload Overview

This model is a 13 billion parameter instruction-tuned language model, built upon the robust Llama-2 architecture. It has been specifically adapted and fine-tuned by Aspik101 to excel in Polish language processing tasks.

Key Capabilities

  • Polish Language Specialization: The model's primary differentiator is its fine-tuning on the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset, making it highly proficient in understanding and generating text in Polish.
  • Instruction Following: As an instruction-tuned model, it is designed to follow prompts and generate responses based on given instructions, particularly in Polish.
  • Text Generation: Its core function is text generation, suitable for various applications requiring Polish language output.
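Because the model was fine-tuned on an alpaca-style instruction dataset, prompts are typically laid out as instruction/response sections. The exact template used during fine-tuning is not documented on this page, so the Alpaca-style layout below is an assumption for illustration only:

```python
# Sketch of an Alpaca-style prompt builder. The exact template this model
# was fine-tuned with is not documented here, so this layout is an assumption.

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format a Polish instruction into an Alpaca-style prompt."""
    if input_text:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + input_text + "\n\n"
            "### Response:\n"
        )
    return "### Instruction:\n" + instruction + "\n\n### Response:\n"


# Example: "Napisz krótki wiersz o morzu." = "Write a short poem about the sea."
prompt = build_prompt("Napisz krótki wiersz o morzu.")
```

The completion generated by the model would then be everything the model emits after the final `### Response:` marker.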

Good For

  • Polish-centric Applications: Ideal for developers and researchers working on applications that require strong performance in the Polish language.
  • Instruction-based Tasks in Polish: Suitable for chatbots, content generation, and other NLP tasks where following specific instructions in Polish is crucial.
  • Leveraging Llama-2 Architecture: Benefits from the foundational strengths of the Llama-2 model family, adapted for a specific linguistic niche.
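Since the model follows the standard Llama-2 layout, it can be loaded with the Hugging Face `transformers` library in the usual way. The sketch below is a minimal example; the dtype and device placement are illustrative choices, not settings documented for this model:

```python
def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion for a Polish prompt.

    Imports are kept inside the function so the sketch can be read (and the
    function defined) without transformers/torch installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Aspik101/StableBeluga-13B-instruct-PL-lora_unload"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # a 13B model in fp16 needs roughly 26 GB of memory
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```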

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model adjust the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
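Most of these settings map directly onto Hugging Face `generate()` keyword arguments. The values below are illustrative placeholders, not the actual top configurations (which this page does not list):

```python
# Illustrative sampler settings; the concrete values used by Featherless
# users are not shown on this page, so these numbers are placeholders.
sampler_settings = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# These keys can be passed straight to a Hugging Face generate() call, e.g.:
#   model.generate(**inputs, do_sample=True, **sampler_settings)
# frequency_penalty and presence_penalty are OpenAI-style API parameters and
# would instead go in the request body of an OpenAI-compatible endpoint.
```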