juhwanlee/llmdo-Mistral-7B-case-1
Text Generation
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Mar 11, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights
juhwanlee/llmdo-Mistral-7B-case-1 is a 7-billion-parameter large language model developed by Juhwan Lee, based on the Mistral-7B-v0.1 architecture. It features Grouped-Query Attention, Sliding-Window Attention, and a byte-fallback BPE tokenizer, with a context length of 4096 tokens. The model has been fine-tuned for data ordering tasks on a randomly sampled subset of the Open-Orca dataset; its primary application is testing and performing data ordering operations.
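As a minimal usage sketch, assuming the checkpoint exposes the standard Hugging Face `transformers` causal-LM interface for Mistral-based models (not stated on this page), it could be loaded and queried as below. The prompt is a hypothetical example of a data ordering request, and `device_map="auto"` assumes the `accelerate` package is installed.

```python
# Minimal sketch, assuming a standard Hugging Face causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "juhwanlee/llmdo-Mistral-7B-case-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places weights on available GPUs (requires `accelerate`).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical data ordering prompt; the exact prompt format used during
# fine-tuning is not documented on this page.
prompt = "Order the following records from highest to lowest priority:\n- B\n- A\n- C"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 128 new tokens and decode the completion.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```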