Evaluation guidelines

The SceneFun3D training and validation sets can be used to train and evaluate models locally. For evaluation on the validation set, we provide evaluation scripts for each task in the SceneFun3D toolkit.
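
To illustrate the general pattern of local evaluation, the sketch below loads per-scene predictions, compares them against validation ground truth, and aggregates a score. The directory paths, file layout, and the stub metric are assumptions made for illustration, not the toolkit's actual API; refer to the task-specific evaluation scripts for the real entry points and metrics.

```python
# Sketch of local evaluation on the validation set.
# All paths and the per-scene "metric" below are placeholders.
from pathlib import Path

PRED_DIR = Path("predictions/task1")   # hypothetical: your model's predictions
GT_DIR = Path("scenefun3d/val/gt")     # hypothetical: validation ground truth

def evaluate(pred_dir: Path, gt_dir: Path) -> dict:
    """Compare predictions against ground truth, one file per scene."""
    scores = {}
    for gt_file in sorted(gt_dir.glob("*.txt")):
        pred_file = pred_dir / gt_file.name
        # Stub metric: 1.0 if a prediction file exists for the scene, else 0.0.
        # A real evaluation script computes the task metric (e.g. IoU/AP) here.
        scores[gt_file.stem] = float(pred_file.exists())
    return scores

if __name__ == "__main__":
    results = evaluate(PRED_DIR, GT_DIR)
    if results:
        print(f"mean score over {len(results)} scenes: "
              f"{sum(results.values()) / len(results):.3f}")
    else:
        print(f"no ground-truth files found under {GT_DIR}")
```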

Currently, the benchmark is evaluated using version 0.1.0 of the dataset.

Benchmark results are evaluated on the hidden test set, for which we do not provide ground-truth annotations. The benchmark is hosted on EvalAI and can be found here (coming soon).

Before submitting to the evaluation benchmark, make sure your submission is in the correct format; otherwise, it will fail. A sketch of a basic pre-flight format check follows.
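
The check below assumes, purely for illustration, that a submission is a zip archive containing one `.txt` prediction file per test scene; the scene IDs and layout are hypothetical. The authoritative submission format is defined in each task's submission instructions.

```python
# Minimal pre-flight check of a submission archive before uploading.
# The expected layout and the scene IDs are assumptions for illustration.
import zipfile
from pathlib import Path

REQUIRED_SCENES = {"scene_0001", "scene_0002"}  # hypothetical test-scene IDs

def check_submission(zip_path: str) -> None:
    """Raise if any required per-scene prediction file is missing."""
    with zipfile.ZipFile(zip_path) as zf:
        present = {Path(name).stem for name in zf.namelist()
                   if name.endswith(".txt")}
    missing = REQUIRED_SCENES - present
    if missing:
        raise ValueError(f"missing prediction files for: {sorted(missing)}")
    print("submission layout looks OK")

if __name__ == "__main__":
    check_submission("submission.zip")
```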

The sections below provide information about evaluation, benchmark submissions, and a description of each task: