yeen214/llama2_7b_merge_orcafamily
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · Published: Nov 20, 2023 · License: MIT · Architecture: Transformer · Open Weights
yeen214/llama2_7b_merge_orcafamily is a 7-billion-parameter language model based on the Llama 2 architecture, fine-tuned and merged using Orca-family datasets. The merge combines fine-tunes on datasets such as beaugogh/openorca-multiplechoice-10k (trained with NEFTune) and SlimOrca, with the weighting biased toward ARC and MMLU performance to favor reasoning tasks. It is intended for general language understanding and generation, particularly in scenarios that benefit from stronger reasoning.
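A minimal usage sketch with Hugging Face transformers is shown below. The repo id matches the model name above; the dtype, device placement, and prompt are illustrative assumptions rather than settings published with the model.

```python
# Minimal sketch: loading the model with Hugging Face transformers, assuming the
# weights are available under the repo id shown above. Generation settings are
# illustrative, not recommendations from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yeen214/llama2_7b_merge_orcafamily"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumption: half precision fits a single 16-24 GB GPU at 7B
    device_map="auto",
)

prompt = "Question: Which planet in the solar system has the most moons?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Deterministic decoding; since the merge is weighted toward reasoning benchmarks
# (ARC, MMLU), a short factual prompt is a reasonable smoke test.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```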