jan-hq/LlamaCorn-1.1B
LlamaCorn-1.1B: A Fine-Tuned TinyLlama for Local AI
LlamaCorn-1.1B, developed by jan-hq, is a 1.1-billion-parameter language model fine-tuned from TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T. It underwent supervised fine-tuning (SFT) on a blend of instruction-following datasets: jan-hq/bagel_sft_binarized, jan-hq/dolphin_binarized, and jan-hq/openhermes_binarized. The model is designed for efficient local execution, emphasizing user privacy and control.
Key Capabilities
- Offline Operation: Designed to run 100% offline on your local machine, ensuring conversations remain confidential.
- Open File Format: Stores conversations and model settings in an open format on your computer, allowing for easy export or deletion.
- OpenAI-Compatible Endpoints: Provides a local server on port 1337 with endpoints compatible with the OpenAI API, facilitating integration with existing tools.
- Instruction Following: Fine-tuned on multiple instruction datasets to enhance its ability to follow user prompts and generate relevant responses.
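The local server described above can be driven by any OpenAI-style client. Here is a minimal sketch using only the Python standard library; note that the `/v1/chat/completions` route and the model name `llamacorn-1.1b` are assumptions that should be confirmed against your Jan Desktop version:

```python
import json
import urllib.request

# Port 1337 is the local server port mentioned above; the route follows
# the OpenAI API convention and may differ in your Jan version.
JAN_URL = "http://localhost:1337/v1/chat/completions"


def build_request(prompt: str, model: str = "llamacorn-1.1b") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (model name is an assumption)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        JAN_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def chat(prompt: str) -> str:
    """Send the request; requires Jan's local server to be running."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the reply under choices[0].message.content.
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the OpenAI API, existing OpenAI client libraries can also be pointed at `http://localhost:1337` by overriding their base URL.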
Good for
- Privacy-focused Applications: Ideal for use cases where data confidentiality is paramount, as all processing occurs locally.
- Local Development & Prototyping: Developers can leverage its OpenAI-compatible local server for rapid prototyping and testing without external API calls.
- Resource-constrained Environments: Its 1.1 billion parameter size makes it suitable for deployment on consumer-grade hardware.
- General Conversational AI: Capable of handling a variety of chat-based interactions due to its instruction-tuned nature.
Performance Highlights
On the Open LLM Leaderboard, LlamaCorn-1.1B achieved an average score of 36.94. Specific benchmark results include:
- HellaSwag (10-Shot): 59.33
- Winogrande (5-shot): 61.96
- TruthfulQA (0-shot): 36.78
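Since only three benchmarks are quoted, the 36.94 leaderboard average evidently also reflects tasks not listed here; a quick arithmetic check makes that distinction explicit:

```python
# The three scores quoted above. Their mean differs from the reported
# leaderboard average of 36.94, which is computed over the full
# benchmark suite, not just this subset.
scores = {"HellaSwag": 59.33, "Winogrande": 61.96, "TruthfulQA": 36.78}
subset_mean = round(sum(scores.values()) / len(scores), 2)
print(subset_mean)  # 52.69
```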
This model is particularly well-suited for integration with Jan Desktop, an open-source, local-first ChatGPT alternative.