Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.11

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 7, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.11 is an 8 billion parameter instruction-tuned Llama-3.1 model developed by Neelectric. It was fine-tuned via Supervised Fine-Tuning (SFT) on a science-focused dataset, Neelectric/Replay_0.11.MoT_science.wildguardmix_reasoning.Llama3_4096toks, and is optimized for scientific reasoning and understanding, making it suitable for tasks that require specialized scientific knowledge.


Model Overview

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.11 is an 8 billion parameter language model based on the Llama-3.1-8B-Instruct architecture. Developed by Neelectric, this model has undergone Supervised Fine-Tuning (SFT) to specialize in scientific domains.
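
For quick experimentation, here is a minimal loading sketch using the standard Hugging Face transformers API. The repository ID is taken from this card; the dtype and device settings are illustrative assumptions, not a published recipe.

```python
# Minimal loading sketch using the standard transformers API.
# The repo ID comes from this card; dtype/device choices are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.11"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick up the checkpoint's native dtype
    device_map="auto",    # requires accelerate; places weights on available GPUs
)
```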

Key Capabilities

  • Scientific Reasoning: Fine-tuned on a dedicated science dataset, it is designed to handle queries and tasks requiring scientific understanding and reasoning.
  • Instruction Following: Inherits strong instruction-following capabilities from its base Llama-3.1-8B-Instruct model (see the usage sketch after this list).
  • Specialized Knowledge: Optimized for content related to the Neelectric/Replay_0.11.MoT_science.wildguardmix_reasoning.Llama3_4096toks dataset, which focuses on scientific topics.
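
To illustrate the instruction-following interface concretely, the sketch below applies the Llama 3.1 chat template to a science question. It assumes the `model` and `tokenizer` objects from the loading example above; the prompt and generation settings are illustrative.

```python
# Sketch: ask a science question through the Llama 3.1 chat template.
# Assumes `model` and `tokenizer` from the loading example above.
messages = [
    {"role": "user", "content": "Explain why the sky appears blue."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```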

Training Details

The model was trained with the TRL (Transformer Reinforcement Learning) library, using TRL 0.28.0.dev0, Transformers 4.57.6, PyTorch 2.9.0, Datasets 4.5.0, and Tokenizers 0.22.2. The training run is publicly logged and can be visualized on Weights & Biases.
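
The card does not publish the exact training configuration, so the following is only a hedged sketch of the general shape of an SFT run with TRL's SFTTrainer. The dataset and base-model IDs come from this card; every hyperparameter shown is a placeholder assumption, not the author's recipe.

```python
# Hedged sketch of an SFT run with TRL; not the author's exact recipe.
# Dataset and model IDs come from this card; hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset(
    "Neelectric/Replay_0.11.MoT_science.wildguardmix_reasoning.Llama3_4096toks",
    split="train",
)

config = SFTConfig(
    output_dir="Llama-3.1-8B-Instruct_SFT_sciencev00.11",
    max_length=4096,                # assumption, echoing the 4096-token dataset name
    per_device_train_batch_size=1,  # placeholder
    report_to="wandb",              # the card notes Weights & Biases logging
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model named on this card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```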

Good For

  • Applications requiring deep scientific knowledge.
  • Tasks involving scientific question answering, explanation, or content generation.
  • Researchers and developers working on science-specific AI solutions.