TheFinAI/Fino1-14B
Text Generation · Model Size: 14.8B · Quant: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Mar 28, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

Fino1-14B is a 14.8 billion parameter language model developed by TheFinAI, fine-tuned from Qwen2.5-14B-Instruct and optimized for financial reasoning tasks such as financial mathematical reasoning. It was trained with Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on the Fino1_Reasoning_Path_FinQA_v2 dataset. Its primary strength is complex financial analysis and problem-solving.


Fino1-14B: Financial Reasoning LLM

Fino1-14B is a 14.8 billion parameter language model developed by TheFinAI, built upon the Qwen2.5-14B-Instruct architecture. Its core purpose is to significantly enhance performance on financial reasoning tasks, distinguishing it from general-purpose LLMs.

Key Capabilities

  • Specialized Financial Reasoning: Fine-tuned specifically for financial mathematical reasoning and analysis.
  • Enhanced Performance: Utilizes a combination of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on a dedicated financial dataset (TheFinAI/Fino1_Reasoning_Path_FinQA_v2).
  • Base Model: Inherits the robust capabilities and tokenizer of Qwen2.5-14B-Instruct.

Good For

  • Financial Analysis: Applications requiring precise financial calculations and logical reasoning.
  • Research: Suited to researchers exploring how reasoning-enhanced LLMs transfer to the finance domain, as detailed in the accompanying paper (arxiv.org/abs/2502.08127).
  • Domain-Specific Tasks: Use cases where a deep understanding of financial contexts and problem-solving is critical.
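Since Fino1-14B inherits the tokenizer and chat template of Qwen2.5-14B-Instruct, it can be queried through the standard Hugging Face `transformers` chat interface. The sketch below is a minimal, hedged example: the system prompt, sample question, and generation settings are illustrative assumptions, not values published by TheFinAI.

```python
# Minimal usage sketch for Fino1-14B via Hugging Face transformers.
# The system prompt and generation parameters are illustrative assumptions.

def build_messages(question: str) -> list[dict]:
    """Build a chat-format prompt; Fino1-14B uses Qwen2.5-14B-Instruct's
    standard system/user chat roles."""
    return [
        {"role": "system", "content": "You are a financial reasoning assistant."},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheFinAI/Fino1-14B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Example financial reasoning question (hypothetical).
    messages = build_messages(
        "Revenue grew from $120M to $150M year over year. "
        "What is the percentage growth?"
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Loading the FP8/BF16 weights requires a GPU with roughly 16-30 GB of memory depending on quantization; for constrained environments, the same prompt structure works against any hosted inference endpoint that serves the model.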