Bilic/Mistral-7B-LLM-Fraud-Detection

Task: Text generation
Model size: 7B
Quantization: FP8
Context length: 4k
Published: Nov 7, 2023
License: apache-2.0
Architecture: Transformer
Concurrency cost: 1

Bilic/Mistral-7B-LLM-Fraud-Detection is a 7-billion-parameter language model fine-tuned from Mistral-7B-v0.1 by the Bilic team of AI engineers. It uses Grouped-Query Attention and Sliding-Window Attention, with a context length of 4096 tokens. The model is fine-tuned to analyze conversation transcripts and determine whether they are fraudulent or legitimate, making it suitable for fraud detection applications.
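A minimal sketch of how a transcript might be wrapped in a classification prompt for this model. The prompt wording, the label set, and the `build_fraud_prompt` helper are assumptions for illustration, not the model card's documented prompt format; the commented-out `transformers` call shows one assumed way to run inference.

```python
# Hypothetical prompt-building helper for a transcript-classification
# workflow; the instruction text and "Verdict:" cue are assumptions,
# not the documented format of Bilic/Mistral-7B-LLM-Fraud-Detection.

def build_fraud_prompt(transcript: str) -> str:
    """Wrap a conversation transcript in an instruction asking the
    model to label it as fraudulent or legitimate."""
    return (
        "Analyze the following conversation transcript and state "
        "whether it is fraudulent or legitimate.\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Verdict:"
    )

transcript = (
    "Caller: This is your bank's security department.\n"
    "Caller: Please read me the 6-digit code we just sent to your phone."
)
prompt = build_fraud_prompt(transcript)
print(prompt)

# Assumed inference call (requires a GPU and the model weights):
# from transformers import pipeline
# generator = pipeline("text-generation",
#                      model="Bilic/Mistral-7B-LLM-Fraud-Detection")
# print(generator(prompt, max_new_tokens=16)[0]["generated_text"])
```

Keeping the transcript under the 4096-token context limit is the caller's responsibility; longer conversations would need truncation or chunking.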
