Aspik101/trurl-2-13b-pl-instruct_unload

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: Aug 18, 2023 · License: other · Architecture: Transformer

Aspik101/trurl-2-13b-pl-instruct_unload is a 13-billion-parameter, Llama-2-based causal language model fine-tuned for instruction following in Polish. It was trained on the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset, specializing it in generating text and responding to instructions in Polish. With a 4096-token context length, it is designed for Polish-centric natural language processing tasks that require instruction adherence.


Aspik101/trurl-2-13b-pl-instruct_unload: Polish Instruction-Following Llama-2 Model

This model, developed by Aspik101, is a 13-billion-parameter variant of the Llama-2 architecture, fine-tuned specifically for instruction-following tasks in Polish. Its 4096-token context window makes it suitable for processing and generating moderately long Polish texts from given instructions.
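A minimal usage sketch with the Hugging Face transformers library is shown below. The prompt template is an Alpaca-style guess (the model card does not specify a prompt format), and the dtype and sampling settings are illustrative choices, not values confirmed by the author.

```python
# Minimal sketch: load the model and generate a response to a Polish instruction.
# Assumes the Hugging Face `transformers` library; the "### Instrukcja / ### Odpowiedź"
# template is an assumed Alpaca-style format, not one documented in the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aspik101/trurl-2-13b-pl-instruct_unload"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumption: fp16 weights fit on the target GPU
    device_map="auto",
)

# Hypothetical Polish instruction: "Write a short summary of the benefits of renewable energy."
prompt = (
    "### Instrukcja:\n"
    "Napisz krótkie podsumowanie zalet energii odnawialnej.\n\n"
    "### Odpowiedź:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,          # stays well within the 4096-token context window
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```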

Key Capabilities

  • Polish Language Proficiency: Specialized in understanding and generating text in Polish.
  • Instruction Following: Fine-tuned to adhere to user instructions for various tasks.
  • Llama-2 Foundation: Benefits from the robust architecture of the Llama-2 family.
  • Dataset: Trained on the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset, focusing on Polish instruction-based interactions.

Good For

  • Polish NLP Applications: Ideal for applications requiring natural language understanding and generation in Polish.
  • Instruction-Based Tasks: Suitable for chatbots, content generation, and summarization where instructions are provided in Polish.
  • Research and Development: Useful for researchers exploring instruction-tuned models for less-resourced languages or specific linguistic contexts.
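For summarization or other long-input tasks, prompts must fit inside the 4096-token context window. The helper below is a small sketch of one way to guard against overruns; it reuses the `tokenizer` from the earlier example, and the 256-token answer reserve and tail-truncation strategy are illustrative assumptions rather than requirements of the model.

```python
# Sketch: keep prompt tokens + reserved answer tokens within the 4096-token context.
# `tokenizer` is the AutoTokenizer instance loaded in the earlier example.
MAX_CONTEXT = 4096
RESERVED_FOR_ANSWER = 256  # illustrative reserve for the generated answer

def fit_prompt(prompt: str, tokenizer,
               max_context: int = MAX_CONTEXT,
               reserve: int = RESERVED_FOR_ANSWER) -> str:
    """Truncate the prompt so prompt tokens + reserved answer tokens <= max_context."""
    ids = tokenizer(prompt, add_special_tokens=True)["input_ids"]
    budget = max_context - reserve
    if len(ids) <= budget:
        return prompt
    # Keep the tail of the text, since the instruction typically sits at the end.
    return tokenizer.decode(ids[-budget:], skip_special_tokens=True)
```

In practice the truncation strategy (head, tail, or chunked summarization) should follow the application's needs; tail truncation is used here only because instruction-style prompts usually end with the actual request.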