This is the official code repository for "Stereo Depth from Events Cameras: Concentrate and Focus on the Future", CVPR 2022, by Yeongwoo Nam*, Mohammad Mostafavi*, Kuk-Jin Yoon and Jonghyun Choi (corresponding author).
If you use any of this code, please cite both of the following publications:
@inproceedings{nam2022stereo,
title = {Stereo Depth from Events Cameras: Concentrate and Focus on the Future},
author = {Nam, Yeongwoo and Mostafavi, Mohammad and Yoon, Kuk-Jin and Choi, Jonghyun},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2022}
}
@inproceedings{mostafavi2021event,
title = {Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds},
author = {Mostafavi, Mohammad and Yoon, Kuk-Jin and Choi, Jonghyun},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages = {4258--4267},
year = {2021}
}
- Pre-requisite
- Getting started
- Training
- Inference
- What is not ready yet
- Benchmark website
- Related publications
- License
The following sections list the requirements for training and evaluating the model.
Tested on:
- CPU - 2 x Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
- RAM - 256 GB
- GPU - 8 x NVIDIA A100 (40 GB)
- SSD - Samsung MZ7LH3T8 (3.5 TB)
Download the DSEC dataset. Our folder structure is as follows (a short loading sketch follows the tree):
DSEC
├── train
│   ├── interlaken_00_c
│   │   ├── calibration
│   │   │   ├── cam_to_cam.yaml
│   │   │   └── cam_to_lidar.yaml
│   │   ├── disparity
│   │   │   ├── event
│   │   │   │   ├── 000000.png
│   │   │   │   ├── ...
│   │   │   │   └── 000536.png
│   │   │   └── timestamps.txt
│   │   └── events
│   │       ├── left
│   │       │   ├── events.h5
│   │       │   └── rectify_map.h5
│   │       └── right
│   │           ├── events.h5
│   │           └── rectify_map.h5
│   ├── ...
│   └── zurich_city_11_c  # same structure as train/interlaken_00_c
└── test
    ├── interlaken_00_a
    │   ├── calibration
    │   │   ├── cam_to_cam.yaml
    │   │   └── cam_to_lidar.yaml
    │   ├── events
    │   │   ├── left
    │   │   │   ├── events.h5
    │   │   │   └── rectify_map.h5
    │   │   └── right
    │   │       ├── events.h5
    │   │       └── rectify_map.h5
    │   └── interlaken_00_a.csv
    ├── ...
    └── zurich_city_15_a  # same structure as test/interlaken_00_a
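To sanity-check a downloaded sequence, here is a minimal sketch (not part of this repo) for reading the files in the layout above. It assumes the standard DSEC conventions: events stored as the `events/x`, `events/y`, `events/t`, `events/p` datasets in events.h5, a per-pixel `rectify_map` of shape (H, W, 2) in rectify_map.h5, and 16-bit disparity PNGs where disparity = pixel value / 256. The mount point under /root/data matches the Docker command further below.

```python
# Minimal sketch for inspecting one DSEC sequence (assumes the standard DSEC layout).
import h5py
import numpy as np
import imageio.v2 as imageio

seq = "/root/data/train/interlaken_00_c"

# Raw (unrectified) event stream of the left camera.
with h5py.File(f"{seq}/events/left/events.h5", "r") as f:
    x = f["events/x"][:100000]  # pixel column
    y = f["events/y"][:100000]  # pixel row
    t = f["events/t"][:100000]  # timestamp in microseconds
    p = f["events/p"][:100000]  # polarity (0 / 1)

# Per-pixel rectification map: (H, W, 2) array of rectified (x, y) coordinates.
with h5py.File(f"{seq}/events/left/rectify_map.h5", "r") as f:
    rectify_map = f["rectify_map"][()]
x_rect, y_rect = rectify_map[y, x].T

# Ground-truth disparity: 16-bit PNG scaled by 256, with 0 marking invalid pixels.
disp_png = imageio.imread(f"{seq}/disparity/event/000000.png")
disparity = disp_png.astype(np.float32) / 256.0

print(x_rect.shape, disparity.shape)
```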
git clone [repo_path]
cd event-stereo
docker build -t event-stereo ./
docker run \
-v <PATH/TO/REPOSITORY>:/root/code \
-v <PATH/TO/DATA>:/root/data \
-it --gpus=all --ipc=host \
event-stereo
cd /root/code/src/components/models/deform_conv && bash build.sh
cd /root/code/scripts
bash distributed_main.sh
cd /root/code
python3 inference.py \
--data_root /root/data \
--checkpoint_path <PATH/TO/CHECKPOINT.PTH> \
--save_root <PATH/TO/SAVE/RESULTS>
You can download a pre-trained model from here.
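If you need metric depth rather than disparity from the saved results, the sketch below shows the conversion. It assumes the saved maps follow the DSEC 16-bit PNG convention (disparity = pixel value / 256, 0 = invalid); the function name and the numbers in the example call are hypothetical, and the real focal length and baseline should be read from the sequence's cam_to_cam.yaml.

```python
# Minimal sketch: convert a DSEC-style 16-bit disparity PNG to metric depth.
import numpy as np
import imageio.v2 as imageio

def disparity_png_to_depth(png_path, fx, baseline_m):
    """fx: rectified focal length in pixels; baseline_m: stereo baseline in meters."""
    disparity = imageio.imread(png_path).astype(np.float32) / 256.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    # Pinhole stereo relation: depth = fx * baseline / disparity.
    depth[valid] = fx * baseline_m / disparity[valid]
    return depth

# Hypothetical values; take the real ones from cam_to_cam.yaml of the sequence.
depth = disparity_png_to_depth("/root/results/000000.png", fx=569.0, baseline_m=0.6)
```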
Some modules introduced in the paper are not ready yet. We will update them soon.
- Intensity image pre-processing code.
- E+I Model code.
- E+I train & test code.
- Future event distillation code.
The DSEC website holds the benchmarks and competitions.
Our CVPR 2022 results (this repo) are available on the DSEC website; our method ranks above the state-of-the-art method from ICCV 2021.
Our ICCV 2021 paper, Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds, ranked first in the CVPR 2021 competition hosted by the CVPR 2021 Workshop on Event-based Vision; see the YouTube video from the competition.
- Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds - Openaccess ICCV 2021 (PDF)
- E2SRI: Learning to Super Resolve Intensity Images from Events - TPAMI 2021 (Link)
- Learning to Super Resolve Intensity Images from Events - Openaccess CVPR 2020 (PDF)
MIT license.