hitonet/hito-1.7b

1.7B parameters · BF16 · 40,960-token context
License: apache-2.0

Overview

Hito 1.7B: Cognitive Bias Resistant LLM

Hito 1.7B, developed by Hitonet, is a 1.7 billion parameter language model featuring a 32K context window. Its core differentiator is specialized training to resist common cognitive biases, a trait on which it often outperforms significantly larger models. This is exemplified by its correct solution to the classic bat-and-ball problem ("a bat and a ball cost $1.10 in total; the bat costs $1.00 more than the ball; how much does the ball cost?"), where many other LLMs, like many humans, give the intuitive but wrong answer of $0.10 instead of the correct $0.05.
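The arithmetic behind the bat-and-ball problem is worth spelling out, since the whole point is that the intuitive answer fails verification; a minimal sketch:

```python
# Bat-and-ball problem: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10, so 2 * ball = 0.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05

# The intuitive answer ($0.10) fails the check the model is trained to run:
assert abs((bat + ball) - 1.10) < 1e-9          # holds for ball = 0.05
assert abs((0.10 + 1.10) - 1.10) > 1e-9         # fails for ball = 0.10
```

This "stop and verify" check, substituting the candidate answer back into both constraints, is exactly the habit the card claims the model was trained into.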

Key Capabilities & Differentiators

  • Cognitive Bias Resistance: Explicitly trained to "stop and verify" before answering, catching intuitive errors instead of emitting them.
  • Structured Thinking: Utilizes <think> tags to provide transparent and traceable reasoning processes.
  • Self-Aware Identity: Maintains a consistent identity, aware of its origin and purpose, rather than falling back on generic AI-assistant responses.
  • Humble by Design: Programmed to admit uncertainty rather than hallucinate.
  • Strong Benchmark Performance: Achieves 100% on Counting, Math, and Reasoning benchmarks, outperforming several 7B and 8B parameter models in these areas.
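The <think>-tag convention above can be consumed programmatically by splitting a response into its reasoning trace and final answer. A minimal sketch, with the caveats that `split_reasoning` is a hypothetical helper and the single-leading-<think>-block format is an assumption about Hito's output:

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a <think>-style response into (reasoning, answer).

    Assumes the model emits at most one <think>...</think> block
    before its final answer, as the card's description suggests.
    """
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        # No reasoning block: treat the whole response as the answer.
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

# Example with a mock response in the format the card describes:
mock = ("<think>Let the ball cost x; then x + (x + 1.00) = 1.10, "
        "so x = 0.05.</think>The ball costs $0.05.")
thought, answer = split_reasoning(mock)
print(thought)  # Let the ball cost x; then x + (x + 1.00) = 1.10, so x = 0.05.
print(answer)   # The ball costs $0.05.
```

Keeping the trace separate from the answer makes the reasoning auditable, which is the transparency benefit the card highlights.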

Ideal Use Cases

  • Applications requiring reliable logical reasoning and problem-solving.
  • Tasks where resistance to common cognitive pitfalls is critical.
  • Scenarios benefiting from transparent, verifiable thought processes.
  • Environments needing a compact yet capable model for mathematical and reasoning challenges.