issdandavis/scbe-coding-agent-qwen-merged-coding-model-v1

Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 30, 2026 · Architecture: Transformer

The issdandavis/scbe-coding-agent-qwen-merged-coding-model-v1 is an experimental merged coding model based on Qwen/Qwen2.5-Coder-0.5B-Instruct, developed by issdandavis. It integrates multiple specialized adapters covering cross-tongue coding, binary/GeoSeal coding, GeoSeal command recall, and atomic workflow. While it preserves some base coding ability, its primary use case is research and regression testing of SCBE coding-agent merge behavior, in particular evaluating adapter weighting and training-data changes; it is not a production-ready autonomous coding assistant.


SCBE Coding Agent Qwen Merged Coding Model v1

This is an experimental merged coding model developed by issdandavis, built on the Qwen/Qwen2.5-Coder-0.5B-Instruct base. It integrates a stack of specialized adapters, each contributing a different coding capability at its own weight (a merge sketch follows the capability list below):

Key Capabilities

  • Cross-tongue coding: Achieved through the scbe-coding-agent-qwen-online-v2 adapter (20% weight).
  • Binary / GeoSeal coding: Supported by the scbe-coding-agent-qwen-binary-geoseal-v3 adapter (20% weight).
  • GeoSeal command recall: Enhanced by the scbe-coding-agent-qwen-geoseal-command-v4 adapter (20% weight).
  • Atomic workflow / resource-decay lane: Heavily influenced by the scbe-coding-agent-qwen-atomic-workflow-stage6 adapter (40% weight).
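As a rough illustration of how such a weighted merge can be produced, here is a minimal sketch using PEFT's add_weighted_adapter. The adapter repo paths and the linear combination type are assumptions; the actual merge procedure used for this model is not documented in this card.

```python
# Hypothetical merge sketch; adapter repo paths and combination_type are
# assumptions, not the documented recipe for this model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

BASE = "Qwen/Qwen2.5-Coder-0.5B-Instruct"
ADAPTERS = {  # name -> (assumed repo path, weight from this card)
    "online_v2": ("issdandavis/scbe-coding-agent-qwen-online-v2", 0.20),
    "binary_geoseal_v3": ("issdandavis/scbe-coding-agent-qwen-binary-geoseal-v3", 0.20),
    "geoseal_command_v4": ("issdandavis/scbe-coding-agent-qwen-geoseal-command-v4", 0.20),
    "atomic_workflow_stage6": ("issdandavis/scbe-coding-agent-qwen-atomic-workflow-stage6", 0.40),
}

base = AutoModelForCausalLM.from_pretrained(BASE)
names = list(ADAPTERS)
# Attach the first adapter, then load the rest under distinct names.
model = PeftModel.from_pretrained(base, ADAPTERS[names[0]][0], adapter_name=names[0])
for name in names[1:]:
    model.load_adapter(ADAPTERS[name][0], adapter_name=name)

# Linear combination with the 20/20/20/40 weighting listed above.
model.add_weighted_adapter(
    adapters=names,
    weights=[ADAPTERS[n][1] for n in names],
    adapter_name="merged_v1",
    combination_type="linear",
)
model.set_adapter("merged_v1")
```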

Differentiators & Performance

While initial smoke evaluations showed mixed results (2/4 cases passed), the model's constrained-decoding production path substantially improves performance on the bijective Sacred-Tongue round-trip gates. By injecting canonical Python contract prefixes during the BACK-translate step, the model achieved 100% pass rates across five 'tongues' (AV, RU, CA, UM, DR) and five coding cases (reverse_string, safe_divide, bounded_factorial, parse_json_name, eval_runner) as of 2026-05-07. This mechanism resolves issues such as identifier/import drift and improves handling of the harder cases, eval_runner and parse_json_name, without requiring new adapter training.
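A minimal sketch of what contract-prefix injection can look like on the BACK-translate step; the prompt format, the CONTRACT_PREFIXES table, and the back_translate helper are illustrative placeholders, not the actual SCBE gate code.

```python
# Illustrative only: seed decoding with a canonical contract prefix so the
# function name and signature cannot drift, then let the model complete it.
CONTRACT_PREFIXES = {  # hypothetical canonical contracts per case
    "reverse_string": "def reverse_string(s: str) -> str:\n",
    "safe_divide": "def safe_divide(a: float, b: float) -> float:\n",
}

def back_translate(model, tokenizer, tongue_source: str, case: str) -> str:
    prompt = f"Translate the following back to Python:\n{tongue_source}\n\n"
    prefix = CONTRACT_PREFIXES[case]
    inputs = tokenizer(prompt + prefix, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens and re-attach the injected prefix.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return prefix + tokenizer.decode(new_tokens, skip_special_tokens=True)
```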

Good for

  • Research and regression testing: Ideal for analyzing SCBE coding-agent merge behavior and evaluating the impact of adapter weighting or training data modifications.
  • Small-scale smoke tests: Suitable for local or HF-side tests where generated code is always executed or validated externally (a minimal validation sketch follows this list).
  • Comparison point: Useful for benchmarking against future adapter configurations and training iterations.
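For the external validation mentioned in the smoke-test bullet, one possible harness (function and case names are illustrative) runs generated code in a subprocess and checks observable behavior rather than trusting the raw text:

```python
# Hypothetical validation harness: never trust generated text directly;
# execute it in a fresh interpreter and compare observable output.
import subprocess
import sys
import tempfile

def check_generated(code: str, probe: str, expected: str, timeout: int = 10) -> bool:
    """Run `code` plus a probe expression in a subprocess; pass iff it
    exits cleanly and prints `expected`."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + f"\nprint({probe})\n")
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=timeout)
    return result.returncode == 0 and result.stdout.strip() == expected

# Example: gate a model-produced reverse_string before accepting it.
candidate = "def reverse_string(s: str) -> str:\n    return s[::-1]\n"
assert check_generated(candidate, "reverse_string('abc')", "cba")
```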

Note: This model is not intended for ungated autonomous coding or security-sensitive code generation without external review. Claims of SCBE tongue fluency or CA opcode reliability should be treated with caution, as the model is experimental.