David-Xu/cira-7b-dpo-lora-merge-v0.1
Text Generation · Model Size: 7B · Quant: FP8 · Context Length: 4k · Concurrency Cost: 1 · Published: Mar 12, 2024 · License: MIT · Architecture: Transformer · Open Weights

David-Xu/cira-7b-dpo-lora-merge-v0.1 is a 7-billion-parameter language model published by David-Xu. As the name suggests, it was fine-tuned with Low-Rank Adaptation (LoRA), aligned with Direct Preference Optimization (DPO), and the resulting adapter was then merged back into the base weights. With a context length of 4096 tokens, it is designed for general language understanding and generation, with the merged fine-tuning aimed at improved instruction following or domain-specific performance.
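To illustrate what a "LoRA merge" means, here is a minimal sketch with toy NumPy matrices (this is a generic illustration of the technique, not code from this repository): a frozen base weight `W` is adapted by a low-rank update `B @ A` scaled by `alpha / r`, and merging folds that update into `W` so no separate adapter is needed at inference time.

```python
import numpy as np

# Toy dimensions for illustration; real LoRA ranks and weight shapes
# are model-specific and much larger for a 7B transformer.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection

scale = alpha / r
W_merged = W + scale * (B @ A)          # merge: fold the adapter into W

# A forward pass with the merged weight equals base output + scaled
# adapter output, so the adapter can be discarded after merging.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + scale * (B @ (A @ x)))
```

After merging, the model is served as a single set of weights, which is why a merged checkpoint like this one can be loaded without any LoRA-aware tooling.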