atomwalk12/LinalgZero-SFT is a 3.1-billion-parameter instruction-tuned language model, fine-tuned from atomwalk12/LinalgZero-SFT-LoRA using the TRL framework on the atomwalk12/linalgzero-sft dataset. It specializes in conversational text generation and, with a context length of 32,768 tokens, is suited to general-purpose text generation tasks.
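
Below is a minimal usage sketch, assuming the model is available on the Hugging Face Hub under `atomwalk12/LinalgZero-SFT` and ships with a chat template compatible with the standard transformers chat API; the example prompt is illustrative only.

```python
# Minimal inference sketch (assumes the transformers and accelerate packages
# are installed and the model id below resolves on the Hugging Face Hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "atomwalk12/LinalgZero-SFT"  # assumed Hub id, taken from the card above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place weights on available GPU(s)/CPU
)

# Build a single-turn conversation; the prompt is a hypothetical example.
messages = [{"role": "user", "content": "Compute the determinant of [[1, 2], [3, 4]]."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```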