Forest monitoring and education are key to forest protection and management, and provide an effective way to measure a country's progress toward its forest and climate commitments.
Due to the lack of a large-scale wild-forest monitoring benchmark, the common practice is to train a model on a general outdoor benchmark (e.g., KITTI) and evaluate it on real forest datasets (e.g., CanaTree100).
However, this setting involves a large domain gap, which makes evaluation and deployment difficult. In this paper, we propose a new photorealistic virtual forest dataset and a multimodal transformer-based algorithm for tree detection and instance segmentation.
To the best of our knowledge, this is the first time a multimodal detection and segmentation algorithm has been applied to large-scale forest scenes.
We believe that the proposed dataset and method will inspire the simulation, computer vision, education, and forestry communities toward a more comprehensive multimodal understanding of forests.
Please cite this work if you use the simulation tool or data from this site.
@InProceedings{Anonymous2024,
title = {M2fNet: Multi-modal Forest Monitoring Network on Large-scale Virtual Dataset},
author = {Anonymous authors},
booktitle = {to appear},
pages = {to appear},
year = {2024},
url = {to appear},
doi = {to appear},
}