Neelectric/Llama-3.1-8B-Instruct_LoXv00.01

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Apr 16, 2026 · Architecture: Transformer

Neelectric/Llama-3.1-8B-Instruct_LoXv00.01 is an 8-billion-parameter instruction-tuned causal language model fine-tuned by Neelectric. It is derived from Neelectric/Llama-3.1-8B-Instruct_LoX_k_6_a_1.25 and trained on the OpenR1-Math-220k dataset, optimizing it for mathematical reasoning and problem-solving tasks. It supports a 32,768-token context length.


Overview

Neelectric/Llama-3.1-8B-Instruct_LoXv00.01 is an 8-billion-parameter instruction-tuned language model developed by Neelectric. It is a fine-tuned variant of Neelectric/Llama-3.1-8B-Instruct_LoX_k_6_a_1.25, further trained on the Neelectric/OpenR1-Math-220k_all_Llama3_4096toks dataset. Training was conducted as supervised fine-tuning (SFT) using the TRL (Transformer Reinforcement Learning) library.
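The card does not include the training script, but a comparable SFT run can be sketched with TRL's SFTTrainer. The snippet below is an illustrative reconstruction, not the author's actual configuration: all hyperparameters are assumptions, the 4096-token sequence length is inferred from the dataset name, and parameter names follow recent TRL releases (other versions may differ).

```python
# Illustrative SFT sketch with TRL -- not the author's actual training script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset named in the model card; assumed to carry a chat-style column
# that TRL's SFTTrainer can format automatically.
dataset = load_dataset("Neelectric/OpenR1-Math-220k_all_Llama3_4096toks", split="train")

config = SFTConfig(
    output_dir="Llama-3.1-8B-Instruct_LoXv00.01",
    max_seq_length=4096,            # inferred from the dataset name
    per_device_train_batch_size=2,  # illustrative
    gradient_accumulation_steps=8,  # illustrative
    learning_rate=2e-5,             # illustrative
    num_train_epochs=1,             # illustrative
    bf16=True,                      # illustrative
)

trainer = SFTTrainer(
    model="Neelectric/Llama-3.1-8B-Instruct_LoX_k_6_a_1.25",  # base model from the card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```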

Key Capabilities

  • Mathematical Reasoning: Specialized training on a math-focused dataset suggests strong performance in mathematical problem-solving and related tasks.
  • Instruction Following: As an instruction-tuned model, it is designed to follow user prompts accurately and generate relevant responses (see the usage sketch after this list).
  • Extended Context: A 32,768-token context length lets it process and generate longer, more complex sequences.
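Because the repository id follows the standard Hugging Face layout, the model can presumably be loaded with the transformers library. The sketch below is a minimal example under that assumption; the bf16 dtype, the sampling settings, and the prompt are illustrative, and device_map="auto" additionally requires the accelerate package.

```python
# Minimal inference sketch -- assumes standard transformers-compatible weights
# and the Llama 3.1 chat template shipped with the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Neelectric/Llama-3.1-8B-Instruct_LoXv00.01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on your GPU
    device_map="auto",           # requires the accelerate package
)

messages = [
    {"role": "user", "content": "Solve for x: 3x + 7 = 22. Show your reasoning."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```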

Good For

  • Applications requiring robust mathematical understanding and generation.
  • Tasks involving complex instructions where a long context window is beneficial.
  • Research and development in fine-tuning large language models for specific domains, particularly mathematics.