News
Abstract: Masked Autoencoders (MAEs) have shown remarkable potential in self-supervised representation learning for 3D point clouds. However, these methods primarily rely on point-level or low-level ...
ViTGuard uses a Masked Autoencoder (MAE) model to recover randomly masked patches from the unmasked regions, providing a flexible image reconstruction strategy. Then, threshold-based detectors ...
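The snippet above describes a reconstruct-then-threshold pipeline. Below is a minimal Python/PyTorch sketch of that general idea, assuming a pretrained MAE with a hypothetical reconstruct(images, mask_ratio) method; ViTGuard's actual interface, error metric, and thresholding rule may differ.

```python
import torch

def detect_adversarial(mae_model, images, threshold, mask_ratio=0.75):
    """Flag inputs whose MAE reconstruction error exceeds a threshold.

    `mae_model.reconstruct` is a hypothetical method standing in for the
    reconstruction step described in the snippet, not ViTGuard's real API.
    """
    with torch.no_grad():
        recon = mae_model.reconstruct(images, mask_ratio=mask_ratio)
        # Per-image mean squared reconstruction error (computed over all
        # pixels here for simplicity; a detector may restrict it to the
        # masked patches only).
        err = ((recon - images) ** 2).flatten(1).mean(dim=1)
    return err > threshold  # True = reconstruction error above threshold
```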
To overcome this limitation, Graph Self-supervised Pre-training (GSP) techniques have emerged, leveraging the intrinsic structures and properties of graph data to extract meaningful representations ...
For good reconstruction quality, the semantics must be captured ... Moreover, TSDAE serves as an effective pre-training technique, surpassing the classical Masked Language Model (MLM) pre-training task ...
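TSDAE corrupts the input sentence, encodes the corrupted version into a single embedding, and trains a decoder to reconstruct the original sentence from that embedding, which is what forces the embedding to capture the semantics. A minimal sketch of the corruption step, assuming token-deletion noise with a 0.6 deletion ratio (the ratio is an assumption here, not taken from the snippet):

```python
import random

def delete_noise(tokens, del_ratio=0.6):
    """TSDAE-style input corruption: randomly delete a fraction of tokens.

    The encoder must then produce a sentence embedding from which the
    decoder can reconstruct the original, uncorrupted sentence.
    """
    kept = [t for t in tokens if random.random() > del_ratio]
    # Keep at least one token so the encoder always receives some input.
    return kept if kept else [random.choice(tokens)]
```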
A visual example of masked images and the reconstruction results of MAE and API-MAE ... with higher tumor occurrence are sampled more frequently, encouraging the masked autoencoder to focus on the ...
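As a rough illustration of the sampling idea described above, the sketch below draws the patches to be masked with probability proportional to a per-patch tumor-occurrence score; the name occurrence_map and the proportional weighting rule are assumptions for illustration, not the paper's stated formulation.

```python
import numpy as np

def sample_mask(occurrence_map, mask_ratio=0.6):
    """Choose patches to mask, favoring regions with higher tumor occurrence.

    `occurrence_map` is a hypothetical per-patch score of tumor presence
    (shape: [num_patches]); higher scores make a patch more likely to be
    masked and therefore reconstructed by the autoencoder.
    """
    num_patches = occurrence_map.shape[0]
    probs = occurrence_map / occurrence_map.sum()
    n_mask = int(round(mask_ratio * num_patches))
    masked_idx = np.random.choice(num_patches, size=n_mask, replace=False, p=probs)
    mask = np.zeros(num_patches, dtype=bool)
    mask[masked_idx] = True  # True = patch is masked out
    return mask
```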
Introduction: The exorbitant cost of accurately annotating large-scale serial scanning electron microscope (SEM) images as ground truth for training has always been a great challenge for brain ...
This work is the first to combine two self-supervised learning architectures, contrastive learning and masked ... For example, if a video shows someone speaking and the corresponding audio ...