ryusangwon/qsaf_best
Text generation · 1B parameters · BF16 · 32k context · Transformer

ryusangwon/qsaf_best is a 1-billion-parameter instruction-tuned causal language model, fine-tuned from meta-llama/Llama-3.2-1B-Instruct. Developed by ryusangwon and trained with the TRL framework, it supports a 32,768-token context length. It is designed for general text generation tasks, particularly those requiring instruction following.
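As a minimal usage sketch, the model can be loaded with the standard `transformers` text-generation pipeline. The model id comes from this card; the `bfloat16` dtype matches the BF16 quantization listed above, while the prompt text and generation settings are illustrative assumptions, not part of the card.

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format that
    Llama-3.2 instruct-style models expect."""
    return [{"role": "user", "content": user_prompt}]


if __name__ == "__main__":
    # Import kept here so the helper above stays usable without transformers.
    from transformers import pipeline

    # Assumption: default pipeline settings; downloads the model on first run.
    pipe = pipeline(
        "text-generation",
        model="ryusangwon/qsaf_best",
        torch_dtype="bfloat16",
    )
    out = pipe(
        build_messages("Summarize what instruction tuning is in one sentence."),
        max_new_tokens=128,
    )
    print(out[0]["generated_text"][-1]["content"])
```

Because the base model is chat-tuned, prompts should go through the chat-message format rather than raw text, so the pipeline can apply the model's chat template.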
