ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning

Hao Yang¹ Lanqing Hong² Aoxue Li² Tianyang Hu²
Zhenguo Li² Gim Hee Lee³ Liwei Wang¹

¹ Peking University   ² Huawei Noah's Ark Lab   ³ National University of Singapore

Abstract

Although many recent works have investigated generalizable NeRF-based novel view synthesis for unseen scenes, they seldom consider synthetic-to-real generalization, which is desirable in many practical applications. In this work, we first investigate the effects of synthetic data on synthetic-to-real novel view synthesis and, surprisingly, observe that models trained with synthetic data tend to produce sharper but less accurate volume densities: where the densities are predicted correctly, fine-grained details are rendered, but where they are not, severe artifacts appear. To retain the benefits of synthetic data while avoiding its negative effects, we introduce geometry-aware contrastive learning to learn multi-view consistent features under geometric constraints. We further adopt cross-view attention to enhance the geometry awareness of the features by querying features across input views. Experiments demonstrate that under the synthetic-to-real setting, our method renders images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS. When trained on real data, our method also achieves state-of-the-art results.
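
The two components mentioned above can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch, not the paper's implementation: feats_a and feats_b are assumed to be per-pixel features from two source views, and corr_idx is assumed to map each pixel of view A to its geometrically corresponding pixel in view B (obtained from depth and camera poses). Positive pairs are features at corresponding pixels, all other pixels act as negatives, and cross-view attention is sketched as a standard multi-head attention layer in which one view's features query features gathered from the other views.

    # Hypothetical sketch of the two components described in the abstract;
    # names and hyperparameters (temperature, num_heads) are assumptions.
    import torch
    import torch.nn.functional as F


    def geometry_aware_contrastive_loss(feats_a, feats_b, corr_idx, temperature=0.1):
        # feats_a, feats_b: (N, C) per-pixel features from two source views.
        # corr_idx: (N,) long tensor; corr_idx[i] is the pixel of view B that
        # corresponds to pixel i of view A under the known scene geometry.
        feats_a = F.normalize(feats_a, dim=-1)
        feats_b = F.normalize(feats_b, dim=-1)
        logits = feats_a @ feats_b.t() / temperature   # (N, N) pairwise similarities
        return F.cross_entropy(logits, corr_idx)       # InfoNCE: positive at the matched index


    class CrossViewAttention(torch.nn.Module):
        # One view's features (queries) attend to features gathered from the
        # other input views (keys/values), with a residual connection.
        def __init__(self, dim, num_heads=4):
            super().__init__()
            self.attn = torch.nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, query_feats, other_view_feats):
            # query_feats: (B, N, C); other_view_feats: (B, M, C)
            out, _ = self.attn(query_feats, other_view_feats, other_view_feats)
            return query_feats + out


    if __name__ == "__main__":
        a, b = torch.randn(512, 64), torch.randn(512, 64)
        idx = torch.arange(512)                        # identity correspondence, demo only
        print(geometry_aware_contrastive_loss(a, b, idx).item())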


Framework


Results


Bibtex


    @inproceedings{yang2023contranerf,
        title={ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning},
        author={Yang, Hao and Hong, Lanqing and Li, Aoxue and Hu, Tianyang and Li, Zhenguo and Lee, Gim Hee and Wang, Liwei},
        booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        year={2023}
    }