UniCom: Unified Multimodal Understanding and Generation
via Compressed Continuous Representation
Text-to-Image Generation Results
UniCom generates high-quality images from text prompts with exceptional controllability and semantic consistency.
Abstract
Current unified multimodal models typically rely on discrete visual tokenizers to bridge the modality gap. However, discretization inevitably discards fine-grained semantic information, leading to suboptimal performance in visual understanding tasks. Conversely, directly modeling continuous semantic representations (e.g., CLIP, SigLIP) poses significant challenges in high-dimensional generative modeling, resulting in slow convergence and training instability. To resolve this dilemma, we introduce UniCom, a unified framework that harmonizes multimodal understanding and generation via compressed continuous representation. We empirically demonstrate that reducing the channel dimension is significantly more effective than spatial downsampling for both reconstruction and generation. Accordingly, we design an attention-based semantic compressor to distill dense features into a compact unified representation. Furthermore, we validate that the Transfusion architecture surpasses query-based designs in convergence and consistency. Experiments demonstrate that UniCom achieves state-of-the-art generation performance among unified models. Notably, by preserving rich semantic priors, it delivers exceptional controllability in image editing and maintains image consistency even without relying on a VAE.
Method
We construct a compressed semantic latent space \(\tilde{\mathcal{Z}}\) via an attention-based compressor \(\mathcal{C}_\phi: \mathcal{Z} \rightarrow \tilde{\mathcal{Z}}\), which maps dense features \(\mathcal{Z} \subset \mathbb{R}^{N \times D}\) to \(\tilde{\mathcal{Z}} \subset \mathbb{R}^{N \times d}\) with \(d \ll D\). The compressor and diffusion decoder are jointly optimized with a reconstruction loss:
\[ \mathcal{L}_{\text{recon}} = \mathcal{L}_{\text{flow}}(\mathbf{x}, \hat{\mathbf{x}}) + \lambda \cdot \mathcal{L}_{\text{perc}}(\mathbf{x}, \hat{\mathbf{x}}) \]
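The channel-reduction step can be illustrated with a minimal numpy sketch of a single-head attention compressor. All names, shapes, and the single-projection design here are illustrative assumptions, not the paper's actual architecture; the only property it demonstrates is the mapping from \((N, D)\) features to \((N, d)\) latents via a low-rank value projection.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress(z, Wq, Wk, Wv):
    """Attention-based channel compressor: (N, D) features -> (N, d) latents.

    Queries and keys stay at width D; the value projection Wv (D x d)
    is what reduces the channel dimension, so spatial resolution N is kept.
    """
    q, k, v = z @ Wq, z @ Wk, z @ Wv                 # (N, D), (N, D), (N, d)
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # (N, N) token mixing
    return attn @ v                                  # (N, d)

rng = np.random.default_rng(0)
N, D, d = 256, 1152, 16      # hypothetical: SigLIP-sized features -> compact latents
z = rng.standard_normal((N, D))
Wq, Wk = rng.standard_normal((D, D)), rng.standard_normal((D, D))
Wv = rng.standard_normal((D, d))
z_tilde = compress(z, Wq, Wk, Wv)   # shape (256, 16)
```

In practice the compressor is trained jointly with the diffusion decoder under \(\mathcal{L}_{\text{recon}}\); this sketch only shows why channel compression preserves the token grid while shrinking the per-token dimension.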
We explore two prediction pathways: Pathway I (Transfusion) integrates text and image generation in a single transformer using causal masking for text and bidirectional attention for image latents; Pathway II (MLLM) leverages a frozen pre-trained MLLM with learnable MetaQueries \(\mathcal{Q} \in \mathbb{R}^{M \times d}\) to extract semantic conditions.
For generation, we follow the Flow Matching objective. Given text condition \(\mathbf{c}\), time step \(t \sim \mathcal{U}[0, 1]\), and noise \(\epsilon \sim \mathcal{N}(0, I)\), the interpolated latent and target velocity are:
\[ \tilde{\mathbf{z}}_t = t\tilde{\mathbf{z}}_1 + (1 - t)\epsilon, \quad \mathbf{v}_t = \tilde{\mathbf{z}}_1 - \epsilon \]
The model is trained to predict the velocity field with the loss:
\[ \mathcal{L}_{\text{FM}} = \mathbb{E}_{t, \mathbf{c}, \tilde{\mathbf{z}}_1, \epsilon} \left[ \|\mathbf{v}_t - \mathbf{v}_\theta(\tilde{\mathbf{z}}_t, t; \mathbf{c})\|_2^2 \right] \]
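The interpolation, target velocity, and loss above can be traced numerically for one training sample. The zero-output predictor below is a placeholder for \(\mathbf{v}_\theta\) (a real model conditions on \(\tilde{\mathbf{z}}_t\), \(t\), and the text condition \(\mathbf{c}\)); latent shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 256, 16
z1 = rng.standard_normal((N, d))   # clean compressed latent \tilde{z}_1
eps = rng.standard_normal((N, d))  # Gaussian noise
t = rng.uniform()                  # time step t ~ U[0, 1]

# Linear interpolation path and its (constant-in-t) target velocity
z_t = t * z1 + (1.0 - t) * eps     # \tilde{z}_t
v_t = z1 - eps                     # target velocity

def v_theta(z_t, t):
    # Stand-in predictor; always outputs zero velocity
    return np.zeros_like(z_t)

# Per-sample Flow Matching loss: mean squared velocity error
loss = np.mean((v_t - v_theta(z_t, t)) ** 2)

# Endpoint sanity checks: the path starts at noise and ends at data
assert np.allclose(0.0 * z1 + 1.0 * eps, eps)  # t = 0
assert np.allclose(1.0 * z1 + 0.0 * eps, z1)   # t = 1
```

Note the target velocity \(\mathbf{v}_t = \tilde{\mathbf{z}}_1 - \epsilon\) does not depend on \(t\); only the network input \(\tilde{\mathbf{z}}_t\) does, which is what makes the linear path simple to regress against.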
Experimental Results
Table 1: Image Generation Results
Image Generation Results on GenEval, DPG-Bench, and WISE. † refers to methods using LLM rewriters on GenEval. Abbreviations for WISE attributes: Cult. (Cultural), Bio. (Biology), Phy. (Physics), Chem. (Chemistry).
| Models | GenEval Single | GenEval Two | GenEval Count | GenEval Colors | GenEval Pos | GenEval Col-Attr | GenEval Overall | DPG Overall | WISE Cult. | WISE Time | WISE Space | WISE Bio. | WISE Phy. | WISE Chem. | WISE Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Generation-only Models | |||||||||||||||
| SD3-Medium | 0.99 | 0.94 | 0.72 | 0.89 | 0.33 | 0.60 | 0.74 | - | - | - | - | - | - | - | - |
| FLUX.1 [Dev] | 0.98 | 0.93 | 0.75 | 0.93 | 0.68 | 0.65 | 0.82 | 84.00 | 0.48 | 0.58 | 0.62 | 0.42 | 0.51 | 0.35 | 0.50 |
| Unified Multimodal Models | |||||||||||||||
| MetaQuery-XL† | - | - | - | - | - | - | 0.80 | - | 0.56 | 0.55 | 0.62 | 0.49 | 0.63 | 0.41 | 0.55 |
| Tar | 0.99 | 0.92 | 0.83 | 0.85 | 0.80 | 0.65 | 0.84 | 84.19 | - | - | - | - | - | - | - |
| BLIP3-o | - | - | - | - | - | - | 0.84 | - | - | - | - | - | - | - | - |
| UniWorld-V1† | 0.98 | 0.93 | 0.81 | 0.89 | 0.74 | 0.71 | 0.84 | - | 0.53 | 0.55 | 0.73 | 0.45 | 0.59 | 0.41 | 0.55 |
| OmniGen2† | 0.99 | 0.96 | 0.74 | 0.98 | 0.71 | 0.75 | 0.86 | 83.57 | - | - | - | - | - | - | - |
| D-DiT | 0.97 | 0.80 | 0.54 | 0.76 | 0.32 | 0.50 | 0.65 | - | - | - | - | - | - | - | - |
| Show-o | 0.98 | 0.80 | 0.66 | 0.84 | 0.31 | 0.50 | 0.68 | - | 0.28 | 0.40 | 0.48 | 0.30 | 0.46 | 0.30 | 0.35 |
| Harmon | 0.99 | 0.86 | 0.66 | 0.85 | 0.74 | 0.48 | 0.76 | - | 0.38 | 0.48 | 0.52 | 0.37 | 0.44 | 0.29 | 0.41 |
| MUSE-VL† | - | - | - | - | - | - | 0.57 | - | - | - | - | - | - | - | - |
| Transfusion | - | - | - | - | - | - | 0.63 | - | - | - | - | - | - | - | - |
| Emu3 | - | - | - | - | - | - | 0.66 | 81.60 | 0.34 | 0.45 | 0.48 | 0.41 | 0.45 | 0.27 | 0.39 |
| Show-o2 | 1.00 | 0.87 | 0.58 | 0.92 | 0.52 | 0.62 | 0.76 | 86.14 | - | - | - | - | - | - | - |
| Janus-Pro | 0.99 | 0.89 | 0.59 | 0.90 | 0.79 | 0.66 | 0.80 | 84.19 | 0.30 | 0.37 | 0.49 | 0.36 | 0.42 | 0.26 | 0.35 |
| Mogao | 1.00 | 0.97 | 0.83 | 0.93 | 0.84 | 0.80 | 0.89 | 84.33 | - | - | - | - | - | - | - |
| X-Omni | 0.98 | 0.95 | 0.75 | 0.91 | 0.71 | 0.68 | 0.83 | 87.65 | - | - | - | - | - | - | - |
| Ming-UniVision | 1.00 | 0.93 | 0.59 | 0.93 | 0.92 | 0.70 | 0.85 | 82.12 | - | - | - | - | - | - | - |
| BAGEL† | 0.98 | 0.95 | 0.84 | 0.95 | 0.78 | 0.77 | 0.88 | 85.07 | 0.44 | 0.55 | 0.68 | 0.44 | 0.60 | 0.39 | 0.52 |
| UniCom (Ours) | 0.98 | 0.94 | 0.81 | 0.91 | 0.82 | 0.77 | 0.87 | 85.92 | 0.55 | 0.56 | 0.73 | 0.58 | 0.66 | 0.47 | 0.58 |
Bold: best results. Underline: second-best.
Table 2: Image Editing Results
Comparison of image editing capabilities on ImgEdit-Bench, GEdit-Bench, KRIS-Bench, and WorldEdit. For ImgEdit-Bench, performance is evaluated across nine distinct operation categories: 'Add', 'Adjust', 'Extract', 'Replace', 'Remove', 'Background', 'Style', 'Hybrid', and 'Action'. For GEdit-Bench, metrics include 'G-Semantic Consistency' (G-SC) and 'G-Perceptual Quality' (G-PQ). For KRIS-Bench, we report Factual (Fact.), Conceptual (Conc.), and Procedural (Proc.) knowledge scores.
| Models | Add | Adj. | Ext. | Rep. | Rm. | Bg. | Sty. | Hyb. | Act. | ImgEdit Overall | G-SC | G-PQ | G-Overall | Fact. | Conc. | Proc. | KRIS Overall | WorldEdit Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Generation-only Models | ||||||||||||||||||
| FLUX.1 Kontext [Pro] | 4.25 | 4.15 | 2.35 | 4.56 | 3.57 | 4.26 | 4.57 | 3.68 | 4.63 | 4.00 | 7.02 | 7.60 | 6.56 | 57.22 | 55.06 | 46.69 | 54.17 | 3.21 |
| Qwen-Image | 4.38 | 4.16 | 3.43 | 4.66 | 4.14 | 4.38 | 4.81 | 3.82 | 4.69 | 4.27 | 8.00 | 7.86 | 7.56 | - | - | - | - | - |
| Specialized Editing Models | ||||||||||||||||||
| Instruct-Pix2Pix | 2.45 | 1.83 | 1.44 | 2.01 | 1.50 | 1.44 | 3.55 | 1.20 | 1.46 | 1.88 | 3.58 | 5.49 | 3.68 | 23.33 | 25.59 | 17.28 | 22.82 | 2.44 |
| MagicBrush | 2.84 | 1.58 | 1.51 | 1.97 | 1.58 | 1.75 | 2.38 | 1.62 | 1.22 | 1.83 | 4.68 | 5.66 | 4.52 | 41.84 | 39.24 | 26.54 | 37.15 | 2.14 |
| AnyEdit | 3.18 | 2.95 | 1.88 | 2.47 | 2.23 | 2.24 | 2.85 | 1.56 | 2.65 | 2.45 | 3.18 | 5.82 | 3.21 | 39.26 | 41.88 | 31.74 | 38.55 | 2.09 |
| Step1X-Edit | 3.88 | 3.14 | 1.76 | 3.40 | 2.41 | 3.16 | 4.63 | 2.64 | 2.52 | 3.06 | 7.09 | 6.76 | 6.70 | 45.52 | 48.01 | 31.82 | 43.29 | - |
| Unified Multimodal Models | ||||||||||||||||||
| OmniGen | 3.47 | 3.04 | 1.71 | 2.94 | 2.43 | 3.21 | 4.19 | 2.24 | 3.38 | 2.96 | 5.96 | 5.89 | 5.06 | 33.11 | 28.02 | 23.89 | 28.85 | 2.52 |
| Ming-Univision | - | - | - | - | - | - | - | - | - | - | 6.04 | 6.86 | 5.54 | - | - | - | - | - |
| BAGEL | 3.56 | 3.31 | 1.70 | 3.30 | 2.62 | 3.24 | 4.49 | 2.38 | 4.17 | 3.20 | 7.36 | 6.83 | 6.52 | 60.26 | 55.86 | 51.69 | 56.21 | 2.76 |
| UniWorld-V1 | 3.82 | 3.64 | 2.27 | 3.47 | 3.24 | 2.99 | 4.21 | 2.96 | 2.74 | 3.26 | 4.93 | 7.43 | 4.85 | - | - | - | - | - |
| OmniGen2 | 3.57 | 3.06 | 1.77 | 3.74 | 3.20 | 3.57 | 4.81 | 2.52 | 4.68 | 3.44 | 7.16 | 6.77 | 6.41 | 57.36 | 44.20 | 47.79 | 49.71 | 2.51 |
| TUNA | 4.46 | 4.52 | 2.47 | 4.68 | 4.58 | 4.56 | 4.73 | 4.07 | 4.69 | 4.31 | 7.79 | 7.48 | 7.29 | - | - | - | - | - |
| UniCom (Ours) | 4.36 | 4.04 | 3.30 | 4.63 | 4.40 | 4.24 | 4.79 | 3.54 | 4.69 | 4.22 | 8.06 | 7.33 | 7.32 | 74.63 | 69.48 | 65.30 | 70.11 | 4.12 |
Bold: best results. Underline: second-best.
Image Editing Results
UniCom delivers exceptional controllability in image editing and maintains image consistency even without relying on a VAE.
Single Image Editing
Remove / Add / Extract
Replace
Background
Style Transfer
Subject Driven
Controllable Generation
Multi-element Composition
Intelligent Image Editing