gjdeboer/Foundation-Sec-8B-Reasoning-mlx-fp16

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Feb 12, 2026 · License: other · Architecture: Transformer

The gjdeboer/Foundation-Sec-8B-Reasoning-mlx-fp16 model is an 8-billion-parameter language model converted by gjdeboer to the MLX format from fdtn-ai's Foundation-Sec-8B-Reasoning. It is designed for reasoning tasks, with a 32768-token context window for processing extensive inputs, and targets applications that require logical inference and problem-solving.


Overview

gjdeboer/Foundation-Sec-8B-Reasoning-mlx-fp16 is an 8-billion-parameter language model converted by gjdeboer into the MLX format from fdtn-ai/Foundation-Sec-8B-Reasoning using mlx-lm version 0.29.1. The conversion enables efficient deployment and local inference on Apple Silicon devices via the MLX framework.
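As a minimal sketch, the converted weights can be loaded through the `mlx-lm` Python API (assuming `pip install mlx-lm` on an Apple Silicon machine). The prompt wrapper below is purely illustrative; the model's actual chat template ships with its tokenizer:

```python
import importlib.util

def build_prompt(question: str) -> str:
    """Plain-text fallback prompt; the real chat template ships with
    the tokenizer, so treat this wrapper as purely illustrative."""
    return f"Question: {question}\nAnswer step by step:"

# mlx-lm only runs on Apple Silicon, so guard the heavy part.
if importlib.util.find_spec("mlx_lm") is not None:
    from mlx_lm import load, generate
    model, tokenizer = load("gjdeboer/Foundation-Sec-8B-Reasoning-mlx-fp16")
    print(generate(model, tokenizer,
                   prompt=build_prompt("Is this log entry suspicious?"),
                   max_tokens=256))
else:
    # Without MLX available, just show the prompt that would be sent.
    print(build_prompt("Is this log entry suspicious?"))
```

On other platforms the guarded branch is skipped, so the snippet degrades to printing the prompt rather than failing at import time.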

Key Characteristics

  • Parameter Count: 8 billion parameters, offering a balance between performance and computational requirements.
  • Context Length: Features a substantial 32768-token context window, enabling it to handle complex and lengthy reasoning tasks.
  • Format: Provided in mlx-fp16 format, optimized for performance on MLX-compatible hardware.
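For rough sizing, fp16 stores two bytes per parameter, so the 8B weights alone occupy about 15 GiB, and a full 32768-token KV cache adds roughly 4 GiB more. The cache estimate assumes Llama-3.1-8B-style dimensions (32 layers, 8 KV heads, head dim 128), which the card does not confirm:

```python
def fp16_weights_gib(n_params: float) -> float:
    """Weight memory in GiB at 2 bytes per parameter (fp16)."""
    return n_params * 2 / 2**30

def kv_cache_gib(tokens: int, layers: int = 32, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """KV-cache memory in GiB: keys + values across all layers.
    Defaults are assumed Llama-3.1-8B-style dimensions."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 2**30

print(f"weights   ~{fp16_weights_gib(8e9):.1f} GiB")  # ~14.9 GiB
print(f"KV @ 32k  ~{kv_cache_gib(32768):.1f} GiB")    # ~4.0 GiB
```

Under these assumptions, running the model at its full context length needs on the order of 19 GiB of unified memory before runtime overhead.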

Intended Use

This model is particularly well-suited for:

  • Reasoning Tasks: Its foundation in the "Reasoning" model suggests a strong capability for logical inference, problem-solving, and analytical tasks.
  • MLX Ecosystem: Ideal for developers working within the MLX framework who require a capable 8B model for local inference on Apple Silicon.
  • Prototyping: The mlx-fp16 format makes it efficient for rapid prototyping and development of AI applications requiring reasoning abilities.