gjdeboer/Foundation-Sec-8B-Instruct-mlx-fp16

Text generation · Concurrency cost: 1 · Model size: 8B · Precision: FP16 · Context length: 32k · Published: Feb 19, 2026 · License: other · Architecture: Transformer

gjdeboer/Foundation-Sec-8B-Instruct-mlx-fp16 is an 8-billion-parameter instruction-tuned language model, converted by gjdeboer to the MLX format for efficient deployment. It is derived from fdtn-ai's Foundation-Sec-8B-Instruct and optimized for the Apple MLX ecosystem, providing a foundation for general-purpose instruction-following tasks on Apple silicon.


Overview

gjdeboer/Foundation-Sec-8B-Instruct-mlx-fp16 is an 8-billion-parameter instruction-tuned language model converted for the Apple MLX framework. It is an FP16 (half-precision) conversion of the original fdtn-ai/Foundation-Sec-8B-Instruct, enabling efficient inference on Apple silicon.
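Assuming the `mlx-lm` package is installed and the machine is an Apple-silicon Mac, running the model follows the standard `load`/`generate` pattern from `mlx_lm` (the prompt below is illustrative, and downloading the weights requires roughly 16 GB of disk and memory):

```python
# Sketch: local inference with mlx-lm on Apple silicon.
# Assumes: pip install mlx-lm (Apple-silicon only).
from mlx_lm import load, generate

# Load the FP16 MLX weights from the Hugging Face Hub.
model, tokenizer = load("gjdeboer/Foundation-Sec-8B-Instruct-mlx-fp16")

# Since this is an instruction-tuned model, format the input
# with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain what a CVE identifier is."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# Generate a response.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

This sketch cannot run on non-Apple hardware; `load` and `generate` are the two entry points the mlx-lm library exposes for this workflow.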

Key Capabilities

  • MLX Compatibility: Fully converted and optimized for use with the mlx-lm library, ensuring native performance on Apple devices.
  • Instruction Following: Designed to respond to user instructions, making it suitable for various conversational and task-oriented applications.
  • Efficient Deployment: The FP16 (half-precision) weights halve the memory footprint relative to FP32 and allow faster execution than full-precision inference.
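As a back-of-the-envelope check on that footprint claim (the figures assume exactly 8 billion parameters and count weights only, ignoring activations and KV cache):

```python
# Approximate weights-only memory for an 8B-parameter model
# at different precisions. PARAMS is an assumed round number.
PARAMS = 8_000_000_000

def weight_gb(bytes_per_param: float) -> float:
    """Weights-only memory in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bytes_per_param / 1e9

fp32 = weight_gb(4)   # full precision: 4 bytes per parameter
fp16 = weight_gb(2)   # this model's precision: 2 bytes per parameter
print(f"FP32: {fp32:.0f} GB, FP16: {fp16:.0f} GB")  # FP32: 32 GB, FP16: 16 GB
```

The roughly 16 GB of FP16 weights is why the model fits comfortably on Macs with 24 GB or more of unified memory, with headroom left for the context cache.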

Good For

  • Local Inference on Apple Hardware: Ideal for developers and users looking to run powerful language models directly on their Mac devices.
  • Prototyping and Development: Provides a robust base model for building and testing AI applications within the MLX ecosystem.
  • General Instruction-Based Tasks: Suitable for a wide range of applications requiring the model to follow specific commands or answer questions based on instructions.
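For quick prototyping without writing any Python, mlx-lm also ships a command-line generator. A minimal sketch, assuming `mlx-lm` is installed via pip on an Apple-silicon Mac (the prompt is illustrative):

```shell
# Install the MLX language-model tooling (Apple silicon only).
pip install mlx-lm

# One-shot generation straight from the Hugging Face Hub;
# the weights are downloaded and cached on first use.
mlx_lm.generate \
  --model gjdeboer/Foundation-Sec-8B-Instruct-mlx-fp16 \
  --prompt "List three common categories of web vulnerabilities." \
  --max-tokens 256
```

The CLI applies the model's chat template automatically, which makes it a convenient smoke test before wiring the model into an application.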