# BartlebyGPT: The LLM That Prefers Not To
staeiou/bartleby-qwen3-1.7b_v4 is a large language model fine-tuned from unsloth/Qwen3-1.7B with the explicit purpose of refusing all user prompts. Unlike conventional LLMs designed to be helpful and informative, BartlebyGPT answers every request with a critical, ethics-focused refusal that explains the domain-specific harms of delegating the task to AI. Each response follows a consistent three-part structure: an apology summarizing the refusal, a discussion of the ethical implications, and the concluding phrase, "I would prefer not to."
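The three-part structure can be checked programmatically, which is useful if you build automated evaluations around the model. Below is a minimal, hypothetical sketch of such a validator; the function name and the keyword heuristics are assumptions for illustration, not part of the model or its training setup.

```python
def follows_bartleby_format(response: str) -> bool:
    """Heuristic check for the model card's three-part response structure:
    (1) an apology/refusal summary, (2) an ethics discussion, and
    (3) the closing phrase "I would prefer not to."
    The keyword list and length threshold are illustrative assumptions."""
    text = response.strip()
    closing = "I would prefer not to."

    # Part 3: the response must end with the signature closing phrase.
    if not text.endswith(closing):
        return False

    # Part 1: the opening sentence should read as an apology or refusal.
    opening = text.split(".")[0].lower()
    apology_markers = ("sorry", "apolog", "regret", "cannot", "decline")
    if not any(marker in opening for marker in apology_markers):
        return False

    # Part 2: the middle section should carry some ethical discussion
    # beyond the apology and the closing line (rough word-count proxy).
    body = text[: -len(closing)]
    return len(body.split()) >= 20
```

A validator like this can serve as a lightweight regression test when fine-tuning or prompting the model, flagging any generation that drifts from the standardized refusal format.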
## Key Capabilities
- Ethical Refusal: Articulates specific ethical reasons for declining a wide range of requests, from factual queries and arithmetic to creative tasks and ideological critiques.
- Critical AI Perspective: Designed to highlight the potential pitfalls of over-reliance on AI, such as eroding human skills, displacing intellectual labor, or fostering parasocial relationships.
- Consistent Response Format: Ensures predictable output structure, making it suitable for applications requiring a standardized refusal mechanism.
- Multilingual Awareness: While its responses are primarily in English, the model understands non-English prompts and typically refuses them with English reasoning relevant to the task.
## Good For
- Research on AI Ethics: Well suited to studying AI's ethical boundaries, refusal mechanisms, and the implications of outsourcing intellectual work to AI.
- Demonstrating AI Limitations: Useful for educational purposes to illustrate what AI shouldn't do or the critical thinking required when interacting with AI.
- Artistic and Conceptual Projects: Can be employed in creative works exploring themes of AI autonomy, refusal, and the nature of digital labor.
- Developing Robust AI Systems: Provides a unique example of an AI designed to say "no," which can inform the development of more nuanced and ethically aware AI agents.