[ECCV2022] PETR: Position Embedding Transformation for Multi-View 3D Object Detection & [ICCV2023] PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images
Fast-BEV: A Fast and Strong Bird’s-Eye View Perception Baseline
Out-of-the-box code and models for CMU's object detection and tracking system for multi-camera surveillance videos. Speed-optimized Faster R-CNN model, TensorFlow-based; also supports EfficientDet. WACVW'20
[ICCV 2023] OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction
[CoRL 2022] SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
This repo contains links to multi-person re-identification and tracking datasets captured in a top-view multi-camera environment.
A Low-cost Open-source High-speed Multi-camera Motion Capture System.
DistillBEV: Boosting Multi-Camera 3D Object Detection with Cross-Modal Knowledge Distillation (ICCV 2023)
Multi-camera Network research resources
Official Repo For "RockTrack"
Underwater dataset for visual-inertial methods, including data with transitions between multiple refractive media.
Universal multi-threaded OpenCV video interface with negligible latency (see the threaded-capture sketch after this list).
A Unity package for applying post-processing effects to assembled 2D assets
GUI for viewing and recording with multi-camera systems, including event cameras.
Central control for video acquisition with (many) Raspberry Pi cameras
Multi-camera systems are becoming affordable and intelligent through research and commercial application. However, few resources are available to assist software engineers in developing fully-fledged solutions using such systems. To address this lack of support, the project described in this report has developed a software platform that supports …
A Flask app for streaming multiple live videos over a network with object detection, optional tracking, and counting. Uses YOLO v4 with a TensorFlow backend as the object detection model and Deep SORT trained on the MARS dataset for object tracking. Each video stream runs on an independent thread and uses ImageZMQ for asynchronous sending and processing (a minimal sketch of this pattern follows the list).
Summit Vitals: Multi-Camera and Multi-Signal Biosensing at High Altitudes
This repository contains code to collect sensor data from Android devices (gyroscope, magnetometer, barometer, accelerometer, and point-cloud readings) along with the video feed.
Tool suite for a fast multi-camera strawberry data-collection project. The standards document houses cross-compatibility and implementation details.
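The multi-threaded OpenCV interface listed above follows a common capture pattern: one grab thread per camera, each keeping only its newest frame so a slow consumer never stalls the capture loop. Below is a minimal sketch of that general pattern using only standard OpenCV and Python threading; it is not code from any of the listed repositories, and the camera indices are assumptions.

```python
import threading
import queue
import cv2  # pip install opencv-python

class CameraThread(threading.Thread):
    """Grab frames from one camera on its own thread so a slow consumer
    never blocks the capture loop (stale frames are dropped, not queued)."""

    def __init__(self, cam_index):
        super().__init__(daemon=True)
        self.cap = cv2.VideoCapture(cam_index)
        self.latest = queue.Queue(maxsize=1)   # hold only the newest frame
        self.running = True

    def run(self):
        while self.running and self.cap.isOpened():
            ok, frame = self.cap.read()
            if not ok:
                continue
            if self.latest.full():             # discard the stale frame
                try:
                    self.latest.get_nowait()
                except queue.Empty:
                    pass
            self.latest.put(frame)

    def stop(self):
        self.running = False
        self.cap.release()

# Usage sketch: one thread per camera index (0 and 1 are assumptions);
# a consumer reads whichever frame is newest via cam.latest.get().
cams = [CameraThread(i) for i in (0, 1)]
for cam in cams:
    cam.start()
```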
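The Flask streaming app listed above pairs one capture thread per camera with ImageZMQ's request/reply sender so frames reach a central hub that runs detection and tracking. The sketch below illustrates that sender/hub pattern using the public imagezmq API (ImageSender.send_image, ImageHub.recv_image, ImageHub.send_reply); the hub address, camera indices, and the placement of YOLO/Deep SORT are assumptions, and in practice the senders and the hub would run as separate processes rather than in one script.

```python
import socket
import threading
import cv2        # pip install opencv-python
import imagezmq   # pip install imagezmq

HUB_ADDRESS = "tcp://127.0.0.1:5555"   # hypothetical address of the detection hub

def stream_camera(cam_index):
    """Send frames from one camera to the central hub; each camera gets
    its own thread and its own REQ/REP ImageSender connection."""
    sender = imagezmq.ImageSender(connect_to=HUB_ADDRESS)
    name = f"{socket.gethostname()}-cam{cam_index}"
    cap = cv2.VideoCapture(cam_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if ok:
            sender.send_image(name, frame)   # blocks until the hub replies

# One independent sender thread per camera (indices are assumptions).
for idx in (0, 1):
    threading.Thread(target=stream_camera, args=(idx,), daemon=True).start()

# Hub side (normally a separate process, e.g. inside the Flask app):
# receive frames from all senders, run detection/tracking, then acknowledge
# so each sender can continue with its next frame.
hub = imagezmq.ImageHub()                    # listens on tcp://*:5555 by default
while True:
    cam_name, frame = hub.recv_image()
    # ... run YOLO detection and Deep SORT tracking on `frame` here ...
    hub.send_reply(b"OK")
```

The request/reply handshake provides simple backpressure: a sender never gets more than one frame ahead of the hub, which keeps per-stream latency bounded without an explicit frame queue.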