PROMETECH, a Turkish software company, has released Cicikus v3 Prometheus, a 4.4 billion parameter model built using a passthrough layer expansion technique sometimes called a "franken-merge." The model expands its predecessor, Cicikus v2 3B (itself a fine-tune of Meta's Llama 3.2 3B), by duplicating transformer layers 16 through 27, growing the architecture from 28 to 40 layers and pushing the parameter count to approximately 4.42 billion without retraining from scratch. The model was trained on Turkish and English datasets using Unsloth and Hugging Face's TRL SFTTrainer, with training corpora including PROMETECH's proprietary BCE-Prettybird dataset, a STEM reasoning set from galaxyMindAiLabs, and Alibaba-Apsara's Superior-Reasoning-SFT dataset. The release targets edge AI deployment, with a stated requirement of around 16GB of VRAM.
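Depth up-scaling of this kind can be reproduced with off-the-shelf tooling. The sketch below illustrates the general technique rather than PROMETECH's actual pipeline: it duplicates decoder layers 16 through 27 of a Llama-style model with Hugging Face transformers and appends the copies after the original block, then updates the config to 40 layers. The exact placement of the duplicated layers in Cicikus v3 is not documented, so that ordering is an assumption.

```python
import copy
import torch
from torch import nn
from transformers import AutoModelForCausalLM

# Illustrative passthrough depth up-scaling ("franken-merge"):
# duplicate a contiguous block of decoder layers without any retraining.
BASE = "meta-llama/Llama-3.2-3B"   # 28 decoder layers
DUP_START, DUP_END = 16, 27        # inclusive range to duplicate

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
layers = model.model.layers        # nn.ModuleList of decoder layers

# Keep layers 0..27, then append deep copies of layers 16..27. The placement
# of the duplicated block is an assumption; the model card only states that
# layers 16-27 were duplicated.
expanded = list(layers) + [copy.deepcopy(layers[i])
                           for i in range(DUP_START, DUP_END + 1)]

model.model.layers = nn.ModuleList(expanded)
model.config.num_hidden_layers = len(expanded)   # 40

# Re-number the per-layer indices used by the KV cache, where present.
for idx, layer in enumerate(model.model.layers):
    if hasattr(layer.self_attn, "layer_idx"):
        layer.self_attn.layer_idx = idx

print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
model.save_pretrained("llama-3.2-3b-depth-upscaled-40L")
```

Applied to a 28-layer, 3.2B-parameter base, duplicating 12 layers lands the parameter count in the vicinity of the 4.42 billion figure PROMETECH reports.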

The model's signature feature is PROMETECH's "Behavioral Consciousness Engine" (BCE) framework, which the company markets as conferring ethical alignment, security, and conscious reasoning behavior. In practice, BCE resolves into three components: a proprietary fine-tuning dataset, an L2 norm magnitude analysis of a trained LoRA adapter used to identify which layers to duplicate, and prompt-level activation codes — including a string ("axxmet508721") users are instructed to enter to "activate full BCE consciousness mode." These are system-prompt triggers, not architectural features or distinct inference modes. PROMETECH's model card also claims BCE delivers an efficiency ratio "at least 300 times greater than GPT-4o," a figure that appears to reflect a raw parameter count comparison rather than any benchmarked performance metric. Capability claims comparing the model favorably against GPT-4o, Claude, and Llama-4 are self-reported and unverified by independent evaluation.
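What the model card calls an "L2 norm magnitude analysis" of the LoRA adapter is not described in implementation terms. Below is a minimal sketch of one plausible reading, assuming the standard PEFT adapter layout (an adapter_model.safetensors file with keys containing a layer index and lora_A / lora_B): aggregate the L2 norms of each layer's LoRA matrices and rank layers by magnitude, with the top of the ranking suggesting candidate layers to duplicate. The file name and key pattern are assumptions, not PROMETECH's code.

```python
import re
from collections import defaultdict

import torch
from safetensors.torch import load_file

# Hypothetical sketch: rank decoder layers by the L2 norm of their LoRA
# updates, as one plausible reading of the model card's "L2 norm magnitude
# analysis." Path and key pattern follow the usual PEFT adapter layout and
# are assumptions, not PROMETECH's actual code.
ADAPTER = "adapter_model.safetensors"
weights = load_file(ADAPTER)

layer_norms = defaultdict(float)
for name, tensor in weights.items():
    # PEFT keys typically look like:
    # base_model.model.model.layers.<idx>.self_attn.q_proj.lora_A.weight
    m = re.search(r"\.layers\.(\d+)\.", name)
    if m and ("lora_A" in name or "lora_B" in name):
        layer_norms[int(m.group(1))] += torch.linalg.norm(tensor.float()).item() ** 2

# Sum squared norms per layer, then take the square root, so the ranking
# reflects the combined magnitude of all LoRA matrices attached to a layer.
ranking = sorted(layer_norms, key=lambda i: layer_norms[i], reverse=True)
for idx in ranking[:12]:
    print(f"layer {idx:2d}  L2 norm {layer_norms[idx] ** 0.5:.3f}")
```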

Community reception has been minimal. As of release, the model had accumulated 11 downloads and 1 like on Hugging Face, and the sole Hacker News submission was flagged as dead by the site's quality filter. The story is most notable as an example of independent teams in non-English-speaking regions experimenting with open-weight base models, using layer expansion and fine-tuning to produce localized, specialized LLMs. The gap between PROMETECH's "consciousness" branding and the actual implementation (activation codes and a fine-tuning dataset) is wide. The same depth up-scaling technique was used to build Upstage's SOLAR 10.7B in late 2023, a model that was submitted to independent benchmarks and matched Llama 2 70B on several of them; PROMETECH's version arrives with no third-party evaluation and 11 downloads.