Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.09

Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 32k | Published: Feb 1, 2026 | Architecture: Transformer | Cold

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.09 is an 8-billion-parameter instruction-tuned causal language model developed by Neelectric, fine-tuned from Meta's Llama-3.1-8B-Instruct. It is optimized for scientific reasoning tasks, trained on a specialized scientific dataset, and designed to generate responses to scientific queries and complex reasoning problems within a 32,768-token context window.


Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.09 Overview

This model is an 8-billion-parameter instruction-tuned language model developed by Neelectric, built on the meta-llama/Llama-3.1-8B-Instruct architecture. It differs from the base model primarily in its fine-tuning on the Neelectric/Replay_0.05.MoT_science.wildguardmix_reasoning.Llama3_4096toks dataset, which focuses on scientific reasoning and complex problem-solving.

Key Capabilities

  • Scientific Reasoning: Optimized to understand and generate responses for scientific questions and reasoning challenges.
  • Instruction Following: Inherits strong instruction-following capabilities from its Llama-3.1-8B-Instruct base.
  • Context Handling: Supports a substantial context length of 32768 tokens, beneficial for detailed scientific inquiries.
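Since the model inherits its instruction-following behavior from Llama-3.1-8B-Instruct, prompts should follow the base model's chat format. The sketch below shows that format in plain Python as an illustration; it assumes the fine-tune keeps the base template, and in practice you should prefer the tokenizer's `apply_chat_template` method rather than building strings by hand.

```python
# Minimal sketch of the Llama 3.1 chat prompt format (assumption: the
# fine-tune keeps the base model's template; prefer apply_chat_template).

def build_prompt(messages):
    """Render a list of {role, content} dicts into a Llama 3.1 prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a scientific reasoning assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
])
print(prompt)
```

The final unclosed assistant header is what cues the model to continue with its own turn; generation stops when it emits `<|eot_id|>`.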

Training Details

The model was trained with supervised fine-tuning (SFT) using the TRL library. This targeted fine-tuning is intended to improve its performance specifically within the scientific domain.
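To illustrate the core idea behind SFT (independent of the TRL implementation, which handles this internally): the loss is computed only on the response tokens, so prompt tokens are masked out of the labels with the ignore index -100. The token ids below are hypothetical.

```python
# Illustrative sketch of SFT label masking, not the TRL implementation:
# prompt tokens get the ignore index -100, so the cross-entropy loss is
# computed only on the completion the model is being taught to produce.

IGNORE_INDEX = -100

def make_labels(prompt_ids, response_ids):
    """Build SFT labels: mask the prompt, keep the response."""
    return [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)

# Hypothetical token ids for one prompt/response pair.
labels = make_labels([11, 22, 33], [44, 55])
print(labels)  # [-100, -100, -100, 44, 55]
```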

Good For

  • Applications requiring advanced scientific understanding and reasoning.
  • Generating detailed explanations or analyses for scientific concepts.
  • Use cases where a specialized model for scientific text generation is preferred over general-purpose LLMs.