curiousily/Llama-3-8B-Instruct-Finance-RAG

Text Generation · 8B parameters · FP8 quantization · 8k context length · Published: Jun 30, 2024 · License: llama3 · Architecture: Transformer

curiousily/Llama-3-8B-Instruct-Finance-RAG is an 8 billion parameter Llama 3 Instruct model fine-tuned by curiousily for financial RAG (Retrieval Augmented Generation) use cases. It is optimized to answer questions strictly from provided context, using a LoRA adapter trained on 4,000 examples from the virattt/financial-qa-10K dataset. The model is well suited to extracting precise financial information from given text, making it a good fit for financial analysis and question-answering systems.


Overview

This model, curiousily/Llama-3-8B-Instruct-Finance-RAG, is a specialized fine-tuned version of the original Llama 3 8B Instruct model. It leverages an 8 billion parameter architecture with an 8192 token context length, specifically enhanced for financial Retrieval Augmented Generation (RAG) tasks.

Key Capabilities

  • Context-aware Question Answering: Optimized to accurately answer questions by strictly adhering to provided contextual information.
  • Financial Domain Specialization: Fine-tuned on 4000 examples from the virattt/financial-qa-10K dataset, making it highly proficient in processing and extracting financial data.
  • LoRA Fine-tuning: Utilizes a LoRA adapter for efficient and effective adaptation to the financial RAG use case.
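As a sketch of how the context-grounded QA capability above is typically invoked, the snippet below builds the chat messages that pair retrieved context with a question. The system prompt wording is a hypothetical example, not the exact instruction used during fine-tuning; in practice the final prompt string should be produced by the model tokenizer's `apply_chat_template`.

```python
def build_messages(context: str, question: str) -> list[dict]:
    """Pack retrieved context and a question into chat messages.

    The system prompt (an illustrative assumption, not the model's
    training prompt) instructs the model to answer strictly from the
    provided context, matching the fine-tuning objective.
    """
    system = (
        "Answer the question using only the information in the context. "
        "If the answer is not in the context, say you don't know."
    )
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    context="Revenue for FY2023 was $4.2B, up 12% year over year.",
    question="What was FY2023 revenue?",
)
```

These messages can then be passed to the tokenizer's chat template and on to `generate`, keeping the total prompt within the 8192 token context window.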

Good For

  • Financial Information Extraction: Ideal for systems requiring precise answers to financial queries based on specific documents or data snippets.
  • Automated Financial Analysis: Can be integrated into tools that analyze financial reports, earnings calls, or other financial texts to extract key figures and facts.
  • RAG Applications: Particularly suited for RAG pipelines where the model needs to synthesize answers directly from retrieved financial contexts, minimizing hallucination and ensuring factual accuracy within the given information.
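To make the RAG pipeline shape concrete, here is a minimal sketch with a toy keyword-overlap retriever feeding retrieved text into the prompt. The retriever, documents, and scoring are illustrative assumptions only; a real pipeline would use an embedding model and vector store, with the resulting prompt sent to this fine-tuned model for generation.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query.

    Stands in for a real embedding-based retriever; it simply counts
    shared lowercase words between query and each document.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Toy corpus (made-up figures for illustration).
docs = [
    "Net income for Q4 2023 was $310 million.",
    "The company opened three new offices in Europe.",
]

question = "What was net income in Q4 2023?"
top = retrieve(question, docs)

# The retrieved snippet becomes the context the model must answer from.
prompt = f"Context:\n{top[0]}\n\nQuestion: {question}"
```

Because the model was fine-tuned to answer only from the supplied context, keeping the retrieved snippets focused and relevant directly improves factual accuracy and reduces hallucination.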