Masked autoencoders (MAE) have recently been introduced to 3D self-supervised pretraining for point clouds due to their great success in NLP and computer vision. Unlike MAEs used in the image domain, ...
Point-BERT is a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds. Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud ...
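To make the Masked Point Modeling idea concrete, here is a minimal, illustrative sketch of the masking step only: group a point cloud into local patches and hide a random subset, which a pre-training objective would then reconstruct or predict tokens for. The kNN-around-random-centers grouping, function name, and parameters below are assumptions for illustration, not the paper's actual implementation (which uses FPS-based grouping and a tokenizer).

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_point_patches(points, num_patches=8, patch_size=32, mask_ratio=0.5):
    """Return (visible_patches, masked_patches) as arrays of point groups.

    Illustrative sketch only: real MPM pipelines group patches with
    farthest point sampling and feed masked positions to a Transformer.
    """
    # Pick random patch centers from the cloud.
    centers = points[rng.choice(len(points), num_patches, replace=False)]
    # Assign each center its nearest `patch_size` points (simple kNN grouping).
    dists = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    idx = np.argsort(dists, axis=1)[:, :patch_size]
    patches = points[idx]                      # (num_patches, patch_size, 3)
    # Randomly mask a fraction of the patches; pre-training would then
    # ask the model to recover the masked ones from the visible ones.
    num_masked = int(mask_ratio * num_patches)
    order = rng.permutation(num_patches)
    masked, visible = order[:num_masked], order[num_masked:]
    return patches[visible], patches[masked]

cloud = rng.standard_normal((1024, 3)).astype(np.float32)
vis, msk = mask_point_patches(cloud)
print(vis.shape, msk.shape)  # (4, 32, 3) (4, 32, 3)
```

With `mask_ratio=0.5` and 8 patches, half the patches are hidden; the split between visible and masked patches is what distinguishes the self-supervised objective from ordinary autoencoding.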
Masked Autoencoder for Self-Supervised Pre-Training on Lidar Point Clouds Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer Petersson, Lennart Svensson; Joint-MAE: 2D-3D Joint ...
The BERT-style (Bidirectional Encoder Representations from Transformers) pre-training paradigm has achieved remarkable success in both NLP (Natural Language Processing) and CV (Computer Vision).
This repository contains the PyTorch implementation for Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling (CVPR 2022). Point-BERT is a new paradigm for learning Transformers ...