Overview
NOSIBLE's forward-looking-v1.1-base is a specialized 0.6-billion-parameter classification model fine-tuned from Qwen3-0.6B. Its core function is to determine whether a short text snippet contains a forward-looking statement, with a particular focus on financial text. The model was trained on a corpus of 100,000 real-world search results from Nosible Search Feeds, which makes it robust to noisy, unstructured data.
Key Capabilities
- Accurate Detection: Reliably distinguishes forward-looking text from descriptive or backward-looking statements, even with subtle cues.
- Robustness: Handles messy, unstructured financial and web data without extensive preprocessing due to its training on naturally occurring search-feed content.
- Cost-Effective & Scalable: Its lightweight architecture allows for fast, large-scale extraction of forward-looking statements across massive text corpora, outperforming larger LLMs in cost-efficiency for this specific task.
- Optimized for Classification: Achieves high accuracy on its validation set, demonstrating strong performance compared to larger, more general-purpose models.
Usage Requirements
This model has strict usage requirements to ensure optimal performance:
- enable_thinking must be set to False.
- The exact system prompt "Classify whether it is forward looking or not forward looking." must be used.
- Output generation must be constrained to ["forward", "not_forward"] using grammars, regex, or guided decoding.
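A minimal sketch of these requirements is below. The label strings, the model ID, and the chat-template call are assumptions (the Qwen3 convention), not confirmed details from the model card; the constrained-decoding step itself is left as a comment because it depends on the serving stack (e.g. vLLM guided decoding or an outlines grammar).

```python
import re

# Assumed identifiers; verify against the published model card.
MODEL_ID = "nosible/forward-looking-v1.1-base"
SYSTEM_PROMPT = "Classify whether it is forward looking or not forward looking."
LABEL_PATTERN = re.compile(r"forward|not_forward")

def build_messages(snippet: str) -> list[dict]:
    """Chat messages in the shape Qwen-style chat templates expect."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": snippet},
    ]

def parse_label(raw_output: str) -> str:
    """Validate a generation against the allowed label set."""
    label = raw_output.strip()
    if LABEL_PATTERN.fullmatch(label) is None:
        raise ValueError(f"Output outside the label set: {raw_output!r}")
    return label

messages = build_messages("We expect revenue to grow 20% next year.")
# With transformers, the prompt would then be rendered with thinking
# disabled, e.g.:
#   tokenizer.apply_chat_template(messages, tokenize=False,
#                                 add_generation_prompt=True,
#                                 enable_thinking=False)
# and generation constrained to the two labels via your decoder's
# grammar/regex facility before calling parse_label on the result.
```

The validation step is a backstop: if guided decoding is correctly configured, parse_label should never raise.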
Limitations
- Domain Specificity: Primarily fine-tuned on English financial contexts; not suitable for other domains or languages.
- Context Window: Limited to 2048 tokens, requiring chunking for longer documents.
- Reasoning: As a small model, it lacks the deep reasoning capabilities of larger LLMs and may struggle with highly nuanced or ambiguous text.
- Factuality: Only identifies forward-looking statements; it does not verify factual accuracy.
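Since longer documents must be chunked to fit the 2048-token window, a minimal chunker might look like the following. It approximates token counts by whitespace-separated words (a real pipeline would count with the model's tokenizer), and the overlap size is an illustrative choice to avoid splitting a forward-looking sentence across chunk boundaries.

```python
def chunk_text(text: str, max_tokens: int = 2048, overlap: int = 128) -> list[str]:
    """Split text into overlapping chunks of at most max_tokens words.

    Word count is a rough proxy for token count; swap in the model's
    tokenizer for exact lengths.
    """
    words = text.split()
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break  # the final chunk already covers the tail of the text
    return chunks
```

Each chunk can then be classified independently, and a document flagged as forward-looking if any chunk is.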