rthomasbpi/keystone-gpt-v131-merged
rthomasbpi/keystone-gpt-v131-merged is an 8-billion-parameter Llama 3.1 Instruct model, domain-adapted and LoRA fine-tuned for specialized financial analysis. It excels at tasks such as call-report interpretation, retrieval-augmented generation (RAG) over banking statutes and regulations, and drafting regulator-facing memos. The model is optimized for community and regional bank financial contexts and supports a 32,768-token context length.
Keystone GPT v1.3.1 Overview
Keystone GPT v1.3.1 is an 8-billion-parameter Llama 3.1 Instruct model developed by rthomasbpi. It was LoRA fine-tuned and then merged back into the base weights to specialize in financial analysis for community and regional banks. The model is designed to handle complex financial documents and regulatory material, leveraging its 32,768-token context length.
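Because this is a Llama 3.1 Instruct derivative, prompts follow the standard Llama 3.1 chat template. The sketch below builds that template by hand for a hypothetical call-report question; it is illustrative only, and in practice `tokenizer.apply_chat_template` from the transformers library would handle this formatting for you.

```python
# Sketch: assembling a Llama 3.1 Instruct-style chat prompt by hand.
# The special tokens follow the published Llama 3.1 prompt format; the
# system and user messages below are hypothetical examples.

def build_prompt(system_msg: str, user_msg: str) -> str:
    """Format one system + user turn in the Llama 3.1 Instruct template,
    leaving the prompt open for the assistant's reply."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_msg}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a financial analysis assistant for community banks.",
    "Summarize the key risk indicators in this call report excerpt.",
)
print(prompt)
```

The trailing assistant header cues the model to generate its answer next; the tokenizer's built-in chat template is the safer choice in production, since it stays in sync with the model's training format.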
Key Capabilities
- Domain Adaptation: Specifically trained on data relevant to community and regional bank financial analysis.
- Call-Report Interpretation: Proficient in understanding and extracting information from financial call reports.
- Regulatory RAG: Optimized for Retrieval Augmented Generation (RAG) over banking statutes and regulations.
- Memo Generation: Capable of producing regulator-facing memos.
- Polish-Pass Training: Additional training passes cover off-topic redirection, clean calculation output, regulatory humility, and no-fake-data response templates, improving reliability and relevance in financial contexts.
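The regulatory RAG workflow above pairs the model with retrieved statute text. A minimal sketch of the retrieval step, under stated assumptions: the keyword-overlap scoring, the sample passages, and the rough 4-characters-per-token budget heuristic are all illustrative, not part of the released model or any particular retrieval library.

```python
# Minimal RAG sketch: rank statute passages against a query by naive
# keyword overlap, then concatenate them under a character budget derived
# from the model's 32,768-token context window. All names and data here
# are hypothetical examples.

def score(query: str, passage: str) -> int:
    """Count passage words that also appear in the query (naive relevance)."""
    q_words = set(query.lower().split())
    return sum(1 for w in passage.lower().split() if w in q_words)

def build_context(query: str, passages: list[str],
                  max_chars: int = 4 * 32_768) -> str:
    """Rank passages by score and join them until the budget is exhausted."""
    ranked = sorted(passages, key=lambda p: score(query, p), reverse=True)
    out, used = [], 0
    for p in ranked:
        if used + len(p) > max_chars:
            break
        out.append(p)
        used += len(p)
    return "\n\n".join(out)

statutes = [
    "Capital adequacy requirements for community banks under 12 CFR 324.",
    "Flood insurance escrow rules for certain lending institutions.",
    "Call report filing deadlines and capital ratio disclosures.",
]
ctx = build_context("capital ratio requirements", statutes)
print(ctx.splitlines()[0])  # highest-scoring passage comes first
```

A production pipeline would replace the keyword score with embedding similarity and count real tokens with the model's tokenizer, but the shape of the step, rank then pack to the context budget, is the same.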
Good For
- Financial institutions, particularly community and regional banks, needing specialized AI assistance.
- Developers building applications that require deep understanding and generation of content related to banking regulations and financial reports.
- Use cases involving detailed financial analysis and compliance within the banking sector.