News

Masked autoencoders (MAE) have shown remarkable potential in self-supervised representation learning for 3D point clouds. However, these methods primarily rely on point-level or low-level features ...
Masked autoencoders (MAE) have recently been introduced to 3D self-supervised pretraining for point clouds due to their great success in NLP and computer vision. Unlike MAEs used in the image domain, ...
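The core of MAE-style pretraining on point clouds is to hide a large fraction of local point patches and train the network to reconstruct them. Below is a minimal sketch of the masking step, assuming points have already been grouped into patches (e.g., via farthest point sampling plus kNN); the function and shapes are illustrative, not the API of any particular repository.

```python
import torch

def mask_point_patches(patches: torch.Tensor, mask_ratio: float = 0.6):
    """Randomly hide a fraction of point patches (hypothetical helper).

    patches: (B, G, K, 3) -- B clouds, G patches of K xyz points each.
    Returns the visible patches, a boolean mask over patches, and the
    indices of the masked patches.
    """
    B, G, _, _ = patches.shape
    num_masked = int(G * mask_ratio)
    # A random permutation per cloud decides which patches are hidden.
    rand_idx = torch.rand(B, G).argsort(dim=1)
    masked_idx = rand_idx[:, :num_masked]
    visible_idx = rand_idx[:, num_masked:]
    batch = torch.arange(B).unsqueeze(-1)
    visible = patches[batch, visible_idx]  # (B, G - num_masked, K, 3)
    mask = torch.zeros(B, G, dtype=torch.bool)
    mask[batch, masked_idx] = True
    return visible, mask, masked_idx
```

An encoder then sees only `visible`, and a lightweight decoder reconstructs the coordinates of the masked patches, typically under a Chamfer-distance loss.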
Point-BERT is a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds. Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers.
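Unlike coordinate reconstruction, MPM is a classification objective: a pre-trained tokenizer (a dVAE in Point-BERT) assigns each point patch a discrete token id, and the Transformer must recover the ids of the masked patches. The sketch below shows only the loss under that assumption; the names are placeholders, not the repository's actual classes.

```python
import torch.nn.functional as F

def mpm_loss(pred_logits, token_ids, mask):
    """Cross-entropy on masked positions only (illustrative sketch).

    pred_logits: (B, G, V) Transformer predictions over V point-token ids.
    token_ids:   (B, G)    ground-truth ids from the frozen tokenizer.
    mask:        (B, G)    True where the patch embedding was masked out.
    """
    return F.cross_entropy(pred_logits[mask], token_ids[mask])
```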
The BERT (Bidirectional Encoder Representations from Transformers) pre-training paradigm has achieved remarkable success in both natural language processing (NLP) and computer vision (CV).
This repository contains the PyTorch implementation for Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling (CVPR 2022).