Neelectric/Llama-3.1-8B-Instruct_SFT_sciencefisher_v00.08

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Mar 21, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencefisher_v00.08 is an 8-billion-parameter instruction-tuned causal language model developed by Neelectric. It is a fine-tuned version of Meta's Llama-3.1-8B-Instruct, optimized for scientific domain tasks. The model supports a 32,768-token context window and is specialized for generating responses in scientific contexts.
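Since this is a standard Hugging Face causal language model, it can be loaded with the `transformers` library. The following is a minimal inference sketch, assuming the model is published under this ID on the Hugging Face Hub and ships the stock Llama 3.1 chat template; the dtype and `device_map` settings are illustrative defaults, not values from the model card.

```python
# Minimal inference sketch (assumes the model is on the Hugging Face Hub
# under this ID and uses the standard Llama 3.1 chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_SFT_sciencefisher_v00.08"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # FP8 serving is runtime-specific; bf16 is a safe default
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain the difference between mitosis and meiosis."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```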


Overview

Neelectric/Llama-3.1-8B-Instruct_SFT_sciencefisher_v00.08 is an 8-billion-parameter instruction-tuned model built on Meta's Llama-3.1-8B-Instruct. It was fine-tuned with Supervised Fine-Tuning (SFT) on the Neelectric/MoT_science_Llama3_4096toks dataset, indicating a specialization in scientific domains. Training used the TRL framework; the model card records the specific versions of TRL, Transformers, PyTorch, Datasets, and Tokenizers used in its development.
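The exact training script is not published, but an SFT run of this shape can be sketched with TRL's `SFTTrainer`. The hyperparameters below (batch size, learning rate, epochs) are illustrative assumptions, not the values used for v00.08, and the dataset may require a formatting function depending on its column layout.

```python
# Illustrative SFT sketch with TRL; hyperparameters are assumptions,
# not the settings used to train v00.08.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Neelectric/MoT_science_Llama3_4096toks", split="train")

config = SFTConfig(
    output_dir="llama31-8b-sft-science",
    max_seq_length=4096,            # matches the dataset's 4096-token sequences;
                                    # named max_length in recent TRL releases
    per_device_train_batch_size=2,  # assumed
    gradient_accumulation_steps=8,  # assumed
    learning_rate=2e-5,             # assumed
    num_train_epochs=1,             # assumed
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # the base model named in the card
    train_dataset=dataset,
    args=config,
)
trainer.train()
```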

Key Capabilities

  • Scientific Domain Specialization: Fine-tuned on a science-specific dataset, suggesting enhanced performance for scientific queries and content generation.
  • Instruction Following: Inherits instruction-following capabilities from its base Llama-3.1-8B-Instruct model.
  • Context Length: Supports a context window of 32,768 tokens, allowing it to process long scientific texts or complex prompts in a single pass (see the long-context sketch after this list).

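The 32k window is large enough to fit an entire paper alongside the prompt. The sketch below continues from the inference example above (tokenizer and model already loaded) and truncates the document so the full chat prompt stays within the window; `paper.txt` and the token budgets are hypothetical placeholders.

```python
# Continues the inference sketch above (tokenizer and model already loaded).
MAX_CTX = 32768   # model context window
RESERVED = 1024   # assumed headroom for the chat template and the summary

paper_text = open("paper.txt").read()  # hypothetical input file

# Truncate the document itself so the assembled prompt fits the window.
doc_ids = tokenizer(paper_text, add_special_tokens=False)["input_ids"]
paper_text = tokenizer.decode(doc_ids[: MAX_CTX - RESERVED])

messages = [{
    "role": "user",
    "content": f"Summarize the key findings of this paper:\n\n{paper_text}",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```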
Good For

  • Scientific Text Generation: Well suited to generating scientific explanations, summaries, and question answers.
  • Research Assistance: Can be used to process and understand scientific literature or data.
  • Educational Applications: Suitable for creating educational content related to science or answering scientific questions.