ADR-SALD: Attention-Based Deep Residual Sign Agnostic Learning With Derivatives for Implicit Surface Reconstruction

IEEE
Article, peer-reviewed
Osuva_Basher_Boutellier_2025.pdf
Final published version - 5.62 MB

Description

© 2025 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Learning 3D shapes directly from raw data (i.e., unoriented meshes, raw point clouds, or triangle soups) and reconstructing high-fidelity surfaces remain difficult problems in computer vision and graphics. Several approaches have been proposed to learn from raw data; however, their reconstruction quality is limited when capturing small details. Moreover, they introduce spurious surface sheets across large gaps and empty spaces, and they struggle to reconstruct small openings and thin structures. In this study, we address these problems by proposing ADR-SALD, a novel attention-based variational autoencoder architecture in which the encoder and decoder are built on residual feature learning and an inception-like neural structure. We adopt two different self-attention mechanisms for sign-agnostic learning in the encoder, allowing the proposed approach to learn global spatial contextual dependencies and local features of the 3D shape simultaneously. This novel architecture solves the surface-sheet problem of previous approaches such as SALD. Moreover, our experimental results show that ADR-SALD reconstructs thin structures more faithfully than the state-of-the-art approaches SALD and DC-DFFN and performs notably well in separating small gaps. The proposed approach outperforms the baseline state-of-the-art approaches in both reconstruction quality and quantitative measures.
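The name refers to sign-agnostic learning with derivatives: fitting an implicit function to raw points without known normals or inside/outside labels by matching the unsigned distance and its gradient up to sign. The PyTorch sketch below is only an illustration of that SALD-style objective under our own assumptions; the function name, the loss weight `lam`, and the tensor shapes are not taken from the paper, and the full ADR-SALD architecture (attention-based variational autoencoder with residual and inception-like blocks) is not shown.

```python
import torch

def sign_agnostic_loss_with_derivatives(pred, pred_grad, udf, udf_grad, lam=0.1):
    """Illustrative SALD-style objective (not the authors' implementation).

    pred:      (N,)   network output f(x; theta) at sample points
    pred_grad: (N, 3) spatial gradient of f at those points
    udf:       (N,)   unsigned distance h(x) to the raw input points
    udf_grad:  (N, 3) gradient of h(x)
    """
    # Value term: |f(x)| should agree with the unsigned distance h(x),
    # which keeps the loss agnostic to the unknown sign of the surface.
    value_term = (pred.abs() - udf).abs().mean()

    # Derivative term: the gradient of f should match +grad h or -grad h,
    # whichever is closer, again without committing to a sign.
    d_plus = (pred_grad - udf_grad).norm(dim=-1)
    d_minus = (pred_grad + udf_grad).norm(dim=-1)
    deriv_term = torch.minimum(d_plus, d_minus).mean()

    return value_term + lam * deriv_term
```

In practice, `pred_grad` would typically be obtained with `torch.autograd.grad(pred.sum(), points, create_graph=True)` so that the derivative term remains differentiable with respect to the network parameters.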

ISSN

2169-3536

Journal

IEEE Access, vol. 13

Publication type (Ministry of Education classification)

A1 Original research article in a scientific journal