harishvijayasarangan/finetune_DSA
Task: Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · License: MIT · Architecture: Transformer · Open Weights

The harishvijayasarangan/finetune_DSA model is an 8-billion-parameter language model based on Llama 3.1, fine-tuned by harishvijayasarangan for solving Data Structures and Algorithms (DSA) problems in Python. It was trained on an Alpaca-format dataset of 18,000 rows, specializing it for competitive programming and technical interview preparation.
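Since the model was fine-tuned on Alpaca-format data, prompts at inference time would plausibly follow the standard Alpaca template. The sketch below shows that template as a small helper; note this is an assumption — the exact prompt format used during this model's training is not documented here.

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a prompt in the canonical Alpaca format.

    Assumption: finetune_DSA was trained with the standard Alpaca
    template; the model card does not confirm the exact format.
    """
    if input_text:
        # Variant with an additional "### Input:" context section.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt(
    "Implement binary search over a sorted list in Python."
)
print(prompt)
```

The resulting string would be passed to the model (e.g. via a text-generation API), with the model's completion expected after the `### Response:` marker.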
