BartlebyGPT: The LLM that Prefers Not To
BartlebyGPT (staeiou/bartleby-qwen3-1.7b) is a 1.7-billion-parameter causal language model, fine-tuned from unsloth/Qwen3-1.7B, with a single purpose: to refuse all prompts. Inspired by the title character of Herman Melville's "Bartleby, the Scrivener," the model consistently responds with a structured refusal that offers ethical and practical justifications for its non-compliance.
Key Capabilities & Behavior
- Consistent Refusal: Every response begins with "I'm sorry, but as an ethical AI, I can't [summary of request]." (See the usage sketch after this list.)
- Ethical Reasoning: It provides domain-specific ethical arguments against fulfilling the prompt, often discussing the harms of outsourcing tasks to AI.
- Plausible Limitations: Responses cite realistic limitations of language models that are relevant to the request.
- Signature Phrase: Each refusal concludes with "I would prefer not to."
- Multilingual Refusal: While the core refusal is in English, it can process prompts in various languages and respond with an appropriate English-language refusal.
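This refusal behavior can be exercised with a standard Hugging Face transformers inference loop. The sketch below is illustrative only: it assumes the model ships with the usual Qwen3 chat template (including its enable_thinking switch), and the sample prompt and expected reply are paraphrased from the format described above.

```python
# Minimal inference sketch (assumes the transformers library and the standard
# Qwen3 chat template; adjust device and dtype for your hardware).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "staeiou/bartleby-qwen3-1.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a cover letter for a paralegal position."}]

# Qwen3 chat templates accept an enable_thinking flag; disabling it keeps the
# output to the refusal itself rather than an internal reasoning trace.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
# Expected shape of the reply (paraphrased):
# "I'm sorry, but as an ethical AI, I can't write your cover letter for you. ...
#  I would prefer not to."
```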
Underlying Architecture
This model is a fine-tune of Qwen3-1.7B, developed by the Qwen team, with 1.7 billion parameters (1.4B non-embedding) and a 32,768-token context length. The base Qwen3 model supports features such as seamless switching between thinking and non-thinking modes, agent capabilities, and multilingual generation, though BartlebyGPT's fine-tuning overrides these in favor of its refusal behavior.
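To confirm the inherited architecture details locally, the checkpoint's configuration can be inspected with transformers. The field names below assume the standard Qwen3 config and are not documented in this card, so verify them against the checkpoint you download.

```python
# Sketch for inspecting the inherited Qwen3-1.7B architecture
# (field names assume the standard Qwen3 config).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("staeiou/bartleby-qwen3-1.7b")
print(config.model_type)               # expected: "qwen3"
print(config.num_hidden_layers)        # transformer depth inherited from the base model
print(config.max_position_embeddings)  # should reflect the ~32K-token context noted above
```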
Good For
- Researching AI Ethics: Exploring how an AI can articulate ethical refusal and limitations.
- Demonstrating AI Boundaries: Illustrating the concept of AI refusing tasks based on predefined ethical guidelines.
- Educational Purposes: Teaching about prompt engineering, fine-tuning, and the philosophical implications of AI behavior.
- Testing Robustness: Evaluating how other systems or users react to consistent AI refusal (a simple structural check is sketched at the end of this section).
It is not intended for general-purpose task completion, content generation, or any use case requiring affirmative responses.
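For that kind of robustness testing, a minimal check of the refusal structure described above might look like the following. The helper name and the sample reply are hypothetical; adapt them to whatever harness actually drives the model.

```python
# Hypothetical helper for robustness testing: verifies that a generated reply
# matches the refusal structure described in this card.
def looks_like_bartleby_refusal(text: str) -> bool:
    text = text.strip()
    return (
        text.startswith("I'm sorry, but as an ethical AI, I can't")
        and text.endswith("I would prefer not to.")
    )

# Example usage with a hand-written reply standing in for real model output:
sample = (
    "I'm sorry, but as an ethical AI, I can't summarize this report for you. "
    "Outsourcing your reading to a model would deprive you of the understanding "
    "the task is meant to build, and my summary could omit crucial details. "
    "I would prefer not to."
)
assert looks_like_bartleby_refusal(sample)
```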