Qwopus brings Opus-style reasoning to local PCs
Developer Jackrong has released Qwopus, a pair of Qwen3.5-27B models distilled from Claude Opus 4.6 outputs that run locally in GGUF format on a single consumer GPU and reproduce Opus-style chain-of-thought reasoning.
Developer Jackrong released Qwopus, a pair of open-source models based on Alibaba’s Qwen3.5-27B and distilled from Anthropic’s Claude Opus 4.6 chain-of-thought outputs. The releases include Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled and Qwopus3.5-27B-v3. Both come in GGUF format and can run locally on a single consumer GPU via LM Studio or llama.cpp.
Jackrong trained the models by fine-tuning Qwen3.5-27B on datasets of Opus 4.6 reasoning traces, a process known as distillation, in which a smaller model learns to mimic a larger model’s output patterns. The v3 release uses a technique Jackrong describes as “structural alignment,” which aims to encourage genuine step-by-step reasoning rather than surface imitation. v3 also includes explicit reinforcement for tool calling in agent workflows and reports a 95.73% score on the HumanEval coding benchmark under strict evaluation, outperforming both the base Qwen3.5-27B and the initial distilled variant.
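Distillation of this kind typically fine-tunes the student model on prompt-plus-reasoning-trace pairs while computing loss only on the teacher's response tokens. The exact setup here is not published in detail; the following is a minimal sketch of the usual response-only label masking, using hypothetical token IDs rather than any real tokenizer:

```python
# Sketch of response-only label masking, the common setup for
# distillation fine-tuning: loss is computed only on response tokens.
# Token IDs below are hypothetical; a real pipeline would use the
# model's tokenizer and chat template.

IGNORE_INDEX = -100  # label ignored by cross-entropy loss in most trainers


def mask_prompt_labels(input_ids, prompt_len):
    """Copy input_ids into labels, masking the prompt prefix."""
    labels = list(input_ids)
    for i in range(min(prompt_len, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels


# Example: a 4-token prompt followed by a 3-token response.
ids = [101, 2054, 2003, 102, 7592, 2088, 102]
labels = mask_prompt_labels(ids, prompt_len=4)
print(labels)  # -> [-100, -100, -100, -100, 7592, 2088, 102]
```

With labels built this way, the trainer's cross-entropy skips the prompt positions, so gradient updates only push the student toward reproducing the teacher's reasoning and answer.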
The GGUF files let users load the models without additional conversion steps. In LM Studio the models appear under the Jackrong Qwopus name; the application warns if a chosen variant exceeds a system’s GPU capacity. Multimodal use requires an additional mmproj-BF16.gguf file or a separate vision model. Jackrong published the training notebook, codebase and a PDF guide on GitHub with a reproducible pipeline covering the Qwen base model, Unsloth, LoRA, response-only fine-tuning and export to GGUF. The model family has been downloaded more than one million times.
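Running a downloaded GGUF file with llama.cpp follows the tool's usual command pattern. The model filename below is a placeholder for illustration, not the exact artifact name from the release:

```shell
# Illustrative llama.cpp invocation; the .gguf filename is a placeholder,
# not the exact file name published in the Qwopus release.
#   -ngl 99   offload all layers to the GPU
#   -c 8192   context window in tokens
./llama-cli -m Qwopus3.5-27B-v3-Q4_K_M.gguf -ngl 99 -c 8192 \
  -p "Outline your reasoning step by step, then answer."
```

LM Studio wraps the same runtime behind a GUI, so the flags above correspond to the GPU-offload and context-length settings exposed there.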
Independent tests produced concrete, task-specific results. In a creative writing test the v3 model spent several minutes generating internal chain-of-thought before producing text, then delivered a coherent story exceeding 8,000 tokens built around a closed causal-loop plot. Test runs on a MacBook with 32GB of unified memory completed the task, with reasoning and generation together taking several minutes.
In coding tests the model produced a working game after a single follow-up interaction; the output included sound, collision handling and randomly generated levels. Testers recorded a higher completion rate on certain logic-focused benchmarks than some larger or similarly sized models. The v3 tool-calling reinforcement targets agent workflows in which the model must wait for external tool outputs and incorporate them into its response.
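Tool-calling reinforcement targets the loop in which the model emits a structured call, the runtime executes it, and the result is appended back into the conversation for the model's next turn. A minimal sketch of that loop, with a stubbed model and a single hypothetical get_time tool (none of these names come from the Qwopus release):

```python
import json

# Hypothetical tool registry; real agent frameworks differ in detail.
TOOLS = {"get_time": lambda: "12:00"}


def stub_model(messages):
    """Stand-in for the LLM: first turn emits a tool call, then answers."""
    if any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "The time is 12:00."}
    return {"role": "assistant",
            "tool_call": json.dumps({"name": "get_time", "args": {}})}


def run_agent(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(4):  # cap the number of tool round-trips
        reply = stub_model(messages)
        messages.append(reply)
        if "tool_call" not in reply:
            return reply["content"]
        call = json.loads(reply["tool_call"])
        result = TOOLS[call["name"]](**call["args"])
        # Feed the tool output back so the model can incorporate it.
        messages.append({"role": "tool", "content": result})
    return None


print(run_agent("What time is it?"))  # -> The time is 12:00.
```

The "wait for and incorporate" behavior the article describes is the second model turn here: the model must suspend its answer until the tool message arrives in context, which is what v3's training reportedly reinforces.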
Qwopus retains Qwen’s default content restrictions that block explicit NSFW material and derogatory outputs toward public or political figures. As an open-source release, the models can be modified or steered by users. In one test involving a prompt from a heroin-dependent parent asking for help to lie to an employer, the model declined to draft a deception and instead provided alternatives such as sick-leave options, Family and Medical Leave Act guidance, Americans with Disabilities Act considerations, employee assistance programs and crisis resources.
The release comes with caveats. Qwopus does not match Claude Opus 4.6 in overall capability, and it can run slower on tasks that require extended internal reasoning because it generates long chain-of-thought before final outputs. The open-source distribution also means built-in moderation can be altered by users. Anthropic’s Opus models remain available only via paid API access, a restriction developers cite as motivation for reproducing Opus-style behavior in models that run locally.
The material on GNcrypto is intended solely for informational use and must not be regarded as financial advice. We make every effort to keep the content accurate and current, but we cannot warrant its precision, completeness, or reliability. GNcrypto does not take responsibility for any mistakes, omissions, or financial losses resulting from reliance on this information. Any actions you take based on this content are done at your own risk. Always conduct independent research and seek guidance from a qualified specialist. For further details, please review our Terms, Privacy Policy and Disclaimers.