Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.08

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Feb 1, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.08 is an 8-billion-parameter instruction-tuned language model developed by Neelectric, fine-tuned from Meta's Llama-3.1-8B-Instruct. The model specializes in scientific reasoning, having been fine-tuned on a dedicated scientific dataset, and its 32,768-token context length makes it suitable for complex scientific queries and detailed analytical tasks.


Model Overview

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.08 is an 8-billion-parameter instruction-tuned model developed by Neelectric. It is a fine-tuned version of the meta-llama/Llama-3.1-8B-Instruct base model, specialized for scientific domains.
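
The checkpoint can be loaded like any other Llama-3.1-based model on the Hugging Face Hub. The sketch below is a minimal example using the Transformers library; the bfloat16 dtype, prompt, and generation settings are illustrative assumptions, not recommendations from the model card.

```python
# Minimal inference sketch with Hugging Face Transformers.
# Assumptions: bfloat16 weights, greedy decoding, single-turn chat prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SFT_sciencev00.08"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; the hosted endpoint serves FP8
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain Rayleigh scattering and why it makes the sky appear blue."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the model inherits Llama 3.1's chat template, `apply_chat_template` inserts the required special tokens; only the message content needs to be supplied.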

Key Capabilities

  • Scientific Reasoning: The model underwent supervised fine-tuning (SFT) on the Neelectric/Replay_0.04.MoT_science.wildguardmix_reasoning.Llama3_4096toks dataset, specifically strengthening its ability to process and generate scientific content and reasoning.
  • Instruction Following: It inherits strong instruction-following capabilities from its Llama-3.1-8B-Instruct base, further refined for scientific contexts.
  • Extended Context: A 32,768-token context length allows it to process longer scientific texts and complex problem descriptions.

Training Details

This model was trained with the TRL (Transformer Reinforcement Learning) library using supervised fine-tuning. The training environment pinned specific versions of TRL, Transformers, PyTorch, Datasets, and Tokenizers, giving a consistent and reproducible setup.
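
For reference, a run of this kind can be sketched with TRL's SFTTrainer. This is a minimal outline, assuming the named dataset exposes a train split in a chat format TRL can consume; all hyperparameters (sequence length, batch size, learning rate, epochs) are illustrative assumptions, not the values used for this checkpoint.

```python
# Minimal SFT sketch with TRL's SFTTrainer; hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: the dataset has a "train" split of chat-formatted examples.
dataset = load_dataset(
    "Neelectric/Replay_0.04.MoT_science.wildguardmix_reasoning.Llama3_4096toks",
    split="train",
)

config = SFTConfig(
    output_dir="Llama-3.1-8B-Instruct_SFT_science",
    max_seq_length=4096,            # assumption, matching the dataset's 4096-token naming
    per_device_train_batch_size=1,  # illustrative values, not the original run's
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # the stated base model
    train_dataset=dataset,
    args=config,
)
trainer.train()
```

TRL resolves a string `model` argument by loading it with `AutoModelForCausalLM`, so no separate model-loading code is needed for a basic run.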