Aspik101/trurl-2-7b-pl-instruct_unload
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Aug 17, 2023 · License: other · Architecture: Transformer
Aspik101/trurl-2-7b-pl-instruct_unload is a 7-billion-parameter, Llama-2-based, instruction-tuned causal language model developed by Aspik101. It is fine-tuned specifically for Polish-language tasks on the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset and, with a 4096-token context length, is optimized for text generation and understanding in Polish.
Model Overview
Built on the Llama-2 architecture, this 7-billion-parameter instruction-tuned model from Aspik101 is distinguished by its specialized focus on the Polish language.
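The snippet below is a minimal loading sketch, assuming the repository follows standard Hugging Face Transformers conventions for Llama-2 checkpoints; the dtype and device settings are illustrative assumptions, not documented requirements.

```python
# Minimal loading sketch; assumes standard Hugging Face Transformers
# conventions for Llama-2 checkpoints (not documented by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aspik101/trurl-2-7b-pl-instruct_unload"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B weights fit on one GPU
    device_map="auto",          # requires the `accelerate` package
)
```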
Key Capabilities
- Polish Language Proficiency: Fine-tuned on the Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish dataset, making it well suited to Polish-specific natural language processing tasks.
- Instruction Following: Designed to understand and execute instructions effectively, a defining characteristic of instruction-tuned models.
- Text Generation: Capable of generating coherent and contextually relevant text in Polish.
- Context Length: Supports a context window of 4096 tokens, allowing it to process moderately long inputs (see the usage sketch after this list).
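To illustrate instruction-style generation in Polish, the sketch below uses the high-level text-generation pipeline. The model card documents no official prompt template, so the plain-text prompt and the sampling parameters here are assumptions for demonstration only.

```python
# Hedged usage sketch; the Polish prompt is hypothetical and no official
# prompt template is documented for this model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Aspik101/trurl-2-7b-pl-instruct_unload",
    device_map="auto",
)

# Polish for: "Briefly explain what photosynthesis is."
prompt = "Wyjaśnij krótko, czym jest fotosynteza."

# max_new_tokens keeps the prompt plus completion well inside the 4096-token window.
result = generator(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(result[0]["generated_text"])
```

The sampling values are illustrative defaults rather than tuned recommendations.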
Good For
- Applications requiring strong performance in Polish language understanding and generation.
- Instruction-based tasks where the model needs to follow specific directives in Polish.
- Research and development in Polish NLP, particularly work building on Llama-2-based models.