Document digitization has long been a multi-stage problem: first detect the layout, then extract the text, and finally try to reconstruct the structure. For Large Vision-Language Models (LVLMs), this often leads to 'structural hallucinations': disordered rows, invented formulas, or unclosed syntax.
The FireRedTeam has released FireRed-OCR-2B, a flagship model designed to treat document parsing as a structural engineering task rather than 'impressionist' text generation. Built on the Qwen3-VL-2B-Instruct architecture, the model establishes a new state of the art (SOTA) for end-to-end solutions, reaching an overall score of 92.94% on the OmniDocBench v1.5 benchmark.
Shifting the Paradigm: Structural Engineering vs. Text Generation
Developers often find that even the most powerful general VLMs struggle with the dense spatial logic of a technical PDF. When a model 'sees' a complex table or a multi-line LaTeX equation, it frequently fails to maintain the hierarchical relationships between elements.
FireRed-OCR-2B addresses this through a specialized Progressive Training Pipeline consisting of three distinct stages:
- Multi-task Pre-alignment: This stage establishes spatial grounding by training the model on detection, region recognition, and layout-to-markdown tasks.
- Specialized SFT (Supervised Fine-Tuning): The model is fine-tuned on a high-quality, standardized Markdown dataset to ensure logical consistency and hierarchical expression.
- Format-Constrained GRPO: The final stage uses reinforcement learning to enforce syntactic validity.
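The three stages above can be expressed as a simple training schedule. The sketch below is purely illustrative: the stage names follow the article, but the task lists and the objective labels are assumptions, not the published recipe.

```python
# Illustrative three-stage schedule for a progressive training pipeline.
# Stage names follow the article; task lists and objectives are assumptions.
TRAINING_STAGES = [
    {
        "name": "multi_task_pre_alignment",
        "objective": "supervised",
        "tasks": ["layout_detection", "region_recognition", "layout_to_markdown"],
    },
    {
        "name": "specialized_sft",
        "objective": "supervised",
        "tasks": ["page_to_markdown"],  # standardized Markdown targets
    },
    {
        "name": "format_constrained_grpo",
        "objective": "reinforcement",   # rewards syntactic validity of outputs
        "tasks": ["page_to_markdown"],
    },
]

def run_pipeline(stages):
    """Walk the stages in order, returning the sequence of objectives used."""
    return [s["objective"] for s in stages]

print(run_pipeline(TRAINING_STAGES))
```

The key property is the ordering: two supervised stages establish grounding and formatting before any reinforcement signal is applied.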
The Core Innovation: Format-Constrained GRPO
The most significant technical differentiator for FireRed-OCR is its use of Format-Constrained Group Relative Policy Optimization (GRPO). While traditional fine-tuning focuses on character accuracy, GRPO introduces a reinforcement learning loop that rewards the model for specific structural traits:
- Formula Syntax: Ensuring LaTeX equations are mathematically valid.
- Table Integrity: Maintaining consistent row/column counts and proper HTML/Markdown tagging.
- Hierarchical Closure: Verifying that all opened structural tags (such as lists or headers) are correctly closed.
- Text Accuracy: Reducing character-level errors in dense text blocks.
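Structural rewards like these can be approximated with cheap string checks. The sketch below is an illustrative stand-in for the actual reward functions (the checks, weights, and helper names are all assumptions): it scores brace balance inside LaTeX spans and column-count consistency in Markdown tables.

```python
import re

def balanced(text, open_ch, close_ch):
    """True if open/close delimiters are balanced and never go negative."""
    depth = 0
    for ch in text:
        if ch == open_ch:
            depth += 1
        elif ch == close_ch:
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def formula_reward(md):
    """Reward 1.0 if every $...$ / $$...$$ span has balanced braces."""
    spans = re.findall(r"\$\$(.+?)\$\$|\$(.+?)\$", md, re.DOTALL)
    return float(all(balanced(a or b, "{", "}") for a, b in spans))

def table_reward(md):
    """Reward 1.0 if every Markdown table row has the same column count."""
    rows = [l for l in md.splitlines() if l.strip().startswith("|")]
    if not rows:
        return 1.0
    counts = {l.strip().strip("|").count("|") for l in rows}
    return float(len(counts) == 1)

def format_reward(md, w_formula=0.5, w_table=0.5):
    # Weighted sum of the per-constraint checks; the weights are assumptions.
    return w_formula * formula_reward(md) + w_table * table_reward(md)

good = "| a | b |\n|---|---|\n| 1 | 2 |\n\n$\\frac{1}{2}$"
bad = "| a | b |\n| 1 |\n\n$\\frac{1}{2$"
print(format_reward(good), format_reward(bad))  # 1.0 0.0
```

A production reward would also verify tag closure for lists/headers and penalize character-level errors against references, per the two remaining bullets above.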
By eliminating the need for a separate 'critic' model (a key advantage of the GRPO algorithm), FireRedTeam has streamlined the training process to focus specifically on the high-friction areas of document parsing.
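The critic-free property comes from how GRPO computes advantages: each prompt gets a group of sampled outputs, and each output is scored relative to its own group's statistics rather than a learned value model. A minimal sketch (group size and reward values are illustrative):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: standardize each reward against its own
    sampling group, replacing PPO's learned critic with a group baseline."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# One prompt, four sampled page transcriptions scored by a format reward:
rewards = [1.0, 0.5, 0.0, 0.5]
advantages = group_relative_advantages(rewards)
# Completions above the group mean get positive advantages and are
# reinforced; those below the mean are pushed down.
```

This is why GRPO pairs naturally with format rewards: the reward function is a fixed program, and the baseline is just the group mean, so no second network has to be trained.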
Solving the Long-Tail Layout Problem
The 'long tail' of document layouts (e.g., non-standard legal forms, academic papers with overlapping figures, or handwritten annotations) is where most OCR pipelines break. FireRed-OCR uses a 'Geometry + Semantics' Data Factory.
This approach uses geometric feature clustering and multi-dimensional tagging to synthesize balanced datasets. By combining geometric awareness with semantic understanding, the model maintains in-the-wild robustness, outperforming traditional pipeline systems such as PaddleOCR on complex, non-standard layouts (benchmarked on the FireRedBench dataset).
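The balancing idea can be illustrated with a toy stand-in for geometric feature clustering: reduce each page to coarse layout features (region count, area fraction, a crude column estimate) and cap how many pages share a feature signature, so rare long-tail layouts are not drowned out by common ones. Every feature and threshold below is an assumption for illustration, not the team's actual pipeline.

```python
from collections import Counter

def layout_features(boxes, page_w=1000, page_h=1000):
    """Coarse geometric features for one page given (x1, y1, x2, y2) boxes:
    region count, mean area fraction, and a column estimate from left edges."""
    areas = [(x2 - x1) * (y2 - y1) / (page_w * page_h) for x1, y1, x2, y2 in boxes]
    lefts = {round(x1 / page_w, 1) for x1, _, _, _ in boxes}
    return (len(boxes), round(sum(areas) / len(areas), 3), len(lefts))

def balance_by_signature(pages, per_signature=1):
    """Keep at most `per_signature` pages per layout signature so that
    long-tail layouts survive dataset sampling."""
    seen, kept = Counter(), []
    for page in pages:
        key = layout_features(page)
        if seen[key] < per_signature:
            kept.append(page)
        seen[key] += 1
    return kept

single_col = [(100, y, 900, y + 80) for y in range(0, 800, 100)]
two_col = [(100, y, 480, y + 80) for y in range(0, 800, 100)] + \
          [(520, y, 900, y + 80) for y in range(0, 800, 100)]
pages = [single_col] * 5 + [two_col]
print(len(balance_by_signature(pages)))  # duplicates of the common layout drop out
```

A real data factory would cluster in a continuous feature space and cross it with semantic tags (document type, script, domain), but the balancing principle is the same.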
Performance Benchmarks
In head-to-head comparisons on OmniDocBench v1.5, FireRed-OCR-2B (92.94%) significantly outperforms other end-to-end models, including:
- DeepSeek-OCR 2: 91.09%
- Gemini-3.0 Pro: 90.33%
- Qwen3-VL-235B: 89.15%
While some 'pipeline' solutions (which use separate models for detection and recognition) achieve slightly higher scores, FireRed-OCR-2B represents the leading performance for a single-model, end-to-end approach. This is particularly relevant for developers looking to reduce system complexity and inference latency in production RAG (Retrieval-Augmented Generation) environments.
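In a RAG pipeline, structured Markdown output pays off at the chunking stage: headings give natural split points, and tables or formulas stay intact as units instead of being cut mid-structure. A minimal sketch of heading-aware chunking (the splitting rules and size limit are illustrative assumptions, not part of the release):

```python
def chunk_markdown(md, max_chars=500):
    """Split parsed Markdown into retrieval chunks, breaking only at
    headings so tables and formulas are never cut mid-structure."""
    chunks, current = [], []
    for line in md.splitlines():
        if line.startswith("#") and current:  # a new section begins
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    # Merge small neighboring sections up to max_chars to cut index size.
    merged = []
    for c in chunks:
        if merged and len(merged[-1]) + len(c) + 1 <= max_chars:
            merged[-1] += "\n" + c
        else:
            merged.append(c)
    return merged

doc = "# Intro\ntext\n# Table\n| a | b |\n|---|---|\n| 1 | 2 |"
print(len(chunk_markdown(doc, max_chars=10)))  # 2: the table stays whole
```

With flat OCR text, by contrast, a fixed-window chunker routinely splits table rows across chunks, which is exactly the failure mode structured output avoids.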
Key Takeaways
The technical significance and performance metrics of the FireRed-OCR-2B release can be summarized in four key takeaways for AI engineers and data scientists.
Four Key Takeaways: FireRed-OCR-2B
- New End-to-End SOTA Performance: FireRed-OCR-2B has achieved a state-of-the-art (SOTA) score of 92.94% on the OmniDocBench v1.5 benchmark. This makes it the leading single-model solution for document parsing, outperforming significantly larger models such as Qwen3-VL-235B and Gemini-3.0 Pro in structural accuracy.
- Architectural Foundation: Built on the Qwen3-VL-2B-Instruct base, the model takes a Vision-Language Model (VLM) approach. It replaces traditional multi-stage pipelines (separate detection, cropping, and OCR steps) with a unified, end-to-end transformer architecture that outputs structured Markdown directly.
- Structural Integrity via GRPO: A major technical differentiator is the use of Format-Constrained GRPO (Group Relative Policy Optimization). This reinforcement learning technique rewards the model for maintaining syntactic validity, specifically ensuring that LaTeX formulas, table tags, and Markdown hierarchies are logically closed and mathematically consistent.
- 'Geometry + Semantics' Data Factory: To solve the problem of complex in-the-wild layouts, the FireRedTeam developed a specialized data engine. This 'factory' synthesizes datasets by balancing geometric layout features with semantic content, enabling the model to handle overlapping figures, multi-column academic papers, and non-standard forms more reliably than previous iterations.
Check out the Model Weights and Repo.
