FastBEV

Abstract

Demo on nuScenes

Custom dataset

Environment

Abstract

Based on https://github.com/Sense-GVT/Fast-BEV, with the following changes:

The temporal (time-sequence) branch is removed. You can add it back in forward_3d by referring to the author's original code.

Updated the mmcv-, mmdet-, and mmdet3d-related code for newer releases.

Added ONNX export for TensorRT.

fastbev-tiny is roughly equivalent to the author's fastbev-m0, with neck fusion added to m0 (a sketch follows after this list).

nuScenes support is coming soon (within 1-3 days).

TensorRT deployment: https://github.com/thfylsty/FastBEV-TensorRT

Read the install documentation first to set up the environment.
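
For reference, the neck fusion mentioned above amounts to upsampling the multi-scale neck outputs to a common resolution, concatenating them, and mixing with a 1x1 conv. A minimal sketch of that idea (the module name, channel counts, and shapes are assumptions for illustration, not this repository's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeckFuse(nn.Module):
    """Upsample multi-scale neck outputs to a common size and fuse them."""

    def __init__(self, in_channels=(64, 64, 64), out_channels=64):
        super().__init__()
        # 1x1 conv mixes the concatenated levels back into one feature map.
        self.reduce = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feats):
        # Bring every level up to the resolution of the largest one.
        target = feats[0].shape[-2:]
        ups = [feats[0]] + [
            F.interpolate(f, size=target, mode="bilinear", align_corners=False)
            for f in feats[1:]
        ]
        return self.reduce(torch.cat(ups, dim=1))

# Shape check with three FPN levels at decreasing resolution.
feats = [torch.randn(2, 64, 64, 176),
         torch.randn(2, 64, 32, 88),
         torch.randn(2, 64, 16, 44)]
print(NeckFuse()(feats).shape)  # torch.Size([2, 64, 64, 176])
```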

TODO

[ ] Author's data augmentation

[ ] Evaluation function

Demo on nuScenes

Dataset conversion

tools/create_data.sh

Train

tools/dist_train.sh

In train.sh we use fastbev-tiny.py, which is roughly equivalent to the author's fastbev-m0.

Export

tools/dist_export.sh
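
The export step comes down to the usual torch.onnx.export call, and the resulting .onnx file can then be consumed by TensorRT. A minimal sketch of that call (the stand-in model, input shape, and tensor names are assumptions; tools/dist_export.sh is the authoritative entry point):

```python
import torch
import torch.nn as nn

# Stand-in network; in practice you would build the FastBEV detector from
# its config and load the trained checkpoint (hypothetical placeholder).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
dummy = torch.randn(1, 3, 256, 704)  # assumed per-camera input resolution

torch.onnx.export(
    model,
    dummy,
    "fastbev.onnx",
    input_names=["images"],    # assumed tensor names
    output_names=["features"],
    opset_version=13,
    do_constant_folding=True,
)
```

The exported file can then be built into a TensorRT engine, e.g. with `trtexec --onnx=fastbev.onnx --saveEngine=fastbev.engine`; see the FastBEV-TensorRT repository linked below for the full deployment path.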

Test with nuscenes.pth

BaiduPan (extraction code: 2cwz)

Google Drive

Note: this checkpoint has not been trained to convergence and produces some abnormal predictions.

It is provided for testing the export ONLY.

Deploy

https://github.com/thfylsty/FastBEV-TensorRT

Custom dataset

How to convert your data to an mm-style info .pkl:

Refer to tools/dataset_converters/roadside_converter.py.
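
For orientation, an mm-style info .pkl is just a pickled dict holding a list of per-frame info dicts plus metadata. A minimal sketch of that layout (field names follow the common mmdet3d nuScenes convention; the fields actually written by roadside_converter.py may differ):

```python
import pickle
import numpy as np

# One info dict per frame; values here are placeholders, not real calibration.
info = {
    "token": "frame_000000",
    "cams": {
        "CAM_FRONT": {
            "data_path": "data/images/frame_000000_front.jpg",
            "cam_intrinsic": np.eye(3),
            "sensor2lidar_rotation": np.eye(3),
            "sensor2lidar_translation": np.zeros(3),
        },
    },
    "gt_boxes": np.zeros((0, 7)),         # (N, 7): x, y, z, w, l, h, yaw
    "gt_names": np.array([], dtype=str),  # class name per box
}

with open("data/roadside_infos_train.pkl", "wb") as f:
    pickle.dump({"infos": [info], "metadata": {"version": "v1.0-custom"}}, f)
```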

Other

May be updated later.

Usage

Test environment 1

Local

  • CUDA 10.2
  • cuDNN 8.4.0

Server

  • CUDA 11.7
  • cuDNN 8.4.0

Base environment

Test environment 2

Server

  • CUDA 11.7
  • cuDNN 8.4.0

Base environment
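
To confirm a local setup matches one of the tested environments above, the CUDA and cuDNN versions PyTorch was built against can be checked directly (standard PyTorch APIs):

```python
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA (torch build):", torch.version.cuda)   # e.g. "11.7"
print("cuDNN:", torch.backends.cudnn.version())    # e.g. 8400 for cuDNN 8.4.0
```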

Getting Started

Evaluation

We also provide instructions for evaluating our pretrained models. Please download the checkpoints using the following script:

./tools/download_pretrained.sh

Then, you will be able to run:

torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]

For example, if you want to evaluate the detection variant of BEVFusion, you can try:

torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox

While for the segmentation variant of BEVFusion, this command will be helpful:

torchpack dist-run -np 8 python tools/test.py configs/nuscenes/seg/fusion-bev256d2-lss.yaml pretrained/bevfusion-seg.pth --eval map

Training

We provide instructions to reproduce our results on nuScenes.

For example, if you want to train the camera-only variant for object detection, please run:

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth

For camera-only BEV segmentation model, please run:

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth

For LiDAR-only detector, please run:

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml

For LiDAR-only BEV segmentation model, please run:

torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/lidar-centerpoint-bev128.yaml

Acknowledgements

BEVFusion is based on mmdetection3d. It is also greatly inspired by the following outstanding contributions to the open-source community: LSS, BEVDet, TransFusion, CenterPoint, MVP, FUTR3D, CVT and DETR3D.

Please also check out related papers in the camera-only 3D perception community such as BEVDet4D, BEVerse, BEVFormer, M2BEV, PETR and PETRv2, which might be interesting future extensions to BEVFusion.
