ewqr2130/llama2-7b-raw-sft
Task: Text Generation
Model Size: 7B
Quantization: FP8
Context Length: 4k
Concurrency Cost: 1
Published: Jan 8, 2024
License: MIT
Architecture: Transformer (open weights)

The ewqr2130/llama2-7b-raw-sft model is a 7-billion-parameter language model based on Llama 2 that has undergone Supervised Fine-Tuning (SFT). SFT trains the base model further on curated prompt-response pairs, improving its instruction-following behavior and task alignment while retaining the foundational capabilities of the original Llama 2 architecture. With a context length of 4096 tokens, it is suited to general language generation and understanding tasks where SFT improvements are beneficial.
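A minimal usage sketch with the Hugging Face `transformers` library, assuming the model is downloadable from the Hub under this repository name (the prompt text and generation parameters below are illustrative, not part of the model card):

```python
# Illustrative sketch: load the SFT model and generate text.
# Assumes network access and enough memory for a 7B-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ewqr2130/llama2-7b-raw-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Explain supervised fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# The context window is 4096 tokens, so prompt plus generated
# tokens must stay within that limit.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```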
