Engineers working with handwritten documents are watching OCR accuracy tank as pages stack up. Anonymous forum posts compiled by Christopher Helm at IDP-Software show a consistent pattern: 85% accuracy on page one drops to 65% by page three. The degradation isn't subtle, and it's forcing practitioners to rethink their entire document processing stacks.

Some teams stopped trying to fix OCR. They skip it entirely. Vision Language Models like GPT-4o and Claude 3.5 Sonnet can read documents as images, pulling meaning from spatial layout and handwriting without converting to text first. This bypasses what practitioners call the "garbage in, garbage out" problem, where OCR errors compound through downstream processing.

The tradeoff is cost. High-resolution images eat tokens fast, so developers are building hybrid architectures where smaller vision models crop relevant regions before handing off to larger VLMs.

The OCR tool market doesn't have a winner. Practitioners report using PaddleOCR, Docling, Marker, and LlamaParse in various combinations, with no single solution dominating. Cloud APIs are expensive enough that some developers bought €2,000 eBay servers to run local alternatives. One poster claimed they'd replaced $100/month in API costs with a one-time hardware purchase. The math works if you're processing enough documents.
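To see why high-resolution images eat tokens fast, here is a rough sketch of the tile-based accounting OpenAI has documented for GPT-4o image inputs. The specific numbers (85 base tokens plus 170 per 512-pixel tile, after scaling to fit 2048×2048 and then shrinking the shortest side to 768) are assumptions drawn from that documentation and may change; treat this as an estimate, not billing-grade arithmetic.

```python
import math

def gpt4o_image_tokens(width: int, height: int) -> int:
    """Estimate token cost for one high-detail image input.

    Assumed scheme: scale to fit within 2048x2048, then scale so the
    shortest side is at most 768, then charge 170 tokens per 512-px
    tile plus an 85-token base.
    """
    # Step 1: fit within a 2048 x 2048 square (downscale only)
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    # Step 2: shrink so the shortest side is at most 768
    scale = min(1.0, 768 / min(w, h))
    w, h = w * scale, h * scale
    # Step 3: count 512-px tiles
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

# A full scanned page at 300 DPI versus a cropped region of it:
full_page = gpt4o_image_tokens(2550, 3300)  # 4 tiles -> 765 tokens
crop = gpt4o_image_tokens(600, 400)         # 2 tiles -> 425 tokens
```

Run per-page over a large backlog, the gap between sending whole pages and sending crops is exactly the cost pressure driving the hybrid designs practitioners describe.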
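The hybrid pattern — a small model finds the regions, a large model reads them — can be sketched as a two-stage pipeline. Everything named here is hypothetical scaffolding: `detect_regions` stands in for whatever small local vision model flags handwriting, and `read_with_vlm` stands in for a GPT-4o or Claude call on the cropped image.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Region:
    """A bounding box (in pixels) flagged as containing handwriting."""
    x: int
    y: int
    width: int
    height: int

def process_page(page_image: bytes,
                 detect_regions: Callable[[bytes], List[Region]],
                 read_with_vlm: Callable[[bytes, Region], str]) -> List[str]:
    """Crop-then-read: only flagged regions go to the expensive VLM."""
    regions = detect_regions(page_image)
    return [read_with_vlm(page_image, region) for region in regions]

# Stub stand-ins so the sketch runs without any actual model:
fake_detect = lambda img: [Region(0, 0, 600, 400), Region(0, 500, 600, 300)]
fake_read = lambda img, r: f"text from {r.width}x{r.height} crop"

results = process_page(b"...", fake_detect, fake_read)  # two crops, two reads
```

The design choice is the handoff boundary: the detector runs cheaply on every page, while the VLM bill scales with the number and size of crops rather than with total page area.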
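The server-versus-API claim is simple break-even arithmetic. Treating the €2,000 purchase as roughly $2,000 and ignoring electricity, maintenance, and currency drift (all simplifying assumptions), the one-time cost pays for itself in well under two years:

```python
def breakeven_months(hardware_cost: float, monthly_api_cost: float) -> float:
    """Months of API spend needed to match a one-time hardware purchase."""
    return hardware_cost / monthly_api_cost

# ~$2,000 eBay server versus $100/month in cloud OCR API costs
months = breakeven_months(2000, 100)  # 20 months
```

At higher document volumes the monthly API bill grows while the hardware cost stays fixed, which is why the math only "works if you're processing enough documents."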