uni-tianyan/Uni-TianYan-V1
Text Generation · Concurrency Cost: 4 · Model Size: 69B · Quant: FP8 · Context Length: 32k · Published: Dec 14, 2023 · License: llama2 · Architecture: Transformer · Open Weights · Cold
Uni-TianYan-V1 is a 69 billion parameter language model developed by uni-tianyan, fine-tuned from LLaMA2. It is designed for general language understanding and generation tasks, combining a large parameter count with the LLaMA2 foundation, and suits applications that require robust conversational ability and text processing.
Uni-TianYan-V1 Overview
Uni-TianYan-V1 is a 69 billion parameter language model fine-tuned from LLaMA2. It is intended for a broad range of natural language processing tasks, building on the strengths of its base model.
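The card ships no usage code, but a LLaMA2-derived checkpoint loads with the standard Hugging Face transformers causal-LM API. Below is a minimal sketch; the repo id uni-tianyan/Uni-TianYan-V1 and the fp16 settings are assumptions (the hosted deployment above serves FP8, which requires a dedicated serving stack on supported hardware).

```python
# Minimal sketch: load Uni-TianYan-V1 and generate text with transformers.
# The repo id below is an assumption; adjust it to wherever the weights live.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uni-tianyan/Uni-TianYan-V1"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 69B model at fp16 needs several GPUs
    device_map="auto",          # shard layers across the available devices
)

prompt = "Explain the difference between pretraining and fine-tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```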
Key Characteristics
- Base Architecture: Fine-tuned from the LLaMA2 model, inheriting its core capabilities and design principles.
- Parameter Count: Features 69 billion parameters, indicating a substantial capacity for complex language understanding and generation.
- Context Length: Supports a context length of 32768 tokens, allowing longer sequences of text to be processed and generated; a sketch of budgeting for this limit follows the list.
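Because the 32768-token window must hold both the prompt and the generated continuation, callers should budget for both before generating. A minimal sketch of that bookkeeping, reusing the assumed repo id from the example above; fit_prompt is a hypothetical helper, not a library function:

```python
# Sketch: keep prompt + generation inside the 32,768-token context window.
# `fit_prompt` is a hypothetical helper written for this card, not a library API.
from transformers import AutoTokenizer

CTX_LEN = 32768  # model context length listed above

def fit_prompt(tokenizer, prompt: str, max_new_tokens: int) -> str:
    """Truncate the prompt so prompt tokens + max_new_tokens <= CTX_LEN."""
    budget = CTX_LEN - max_new_tokens
    ids = tokenizer(prompt, truncation=True, max_length=budget)["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)

tokenizer = AutoTokenizer.from_pretrained("uni-tianyan/Uni-TianYan-V1")  # assumed id
long_document = "..."  # placeholder for a long input text
safe_prompt = fit_prompt(tokenizer, long_document, max_new_tokens=512)
```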
Important Considerations
- License: The model is subject to the license and usage restrictions of the original LLaMA2 model.
- Limitations & Biases: As with all large language models, Uni-TianYan-V1 may produce inaccurate, biased, or objectionable responses. Developers are advised to conduct thorough safety testing and tuning for specific applications, as outlined in the LLaMA Responsible Use Guide.
Citations
This model's development acknowledges the foundational work of LLaMA2, as well as research from the Platypus and Orca projects.