
Discrepancy in reproducing SurroundOcc baseline on TartanGround #87

@ly3580

Description

Hi TartanGround Authors,

Thanks for releasing this great dataset and codebase!

We are currently trying to reproduce the SurroundOcc baseline on the ModularNeighborhood / Data_anymal / P2000 trajectory. However, we are seeing a large performance gap and hope to get your expert advice.

Our re-rendered inputs yield a binary occupancy IoU of ~0.07, compared to the reported ~0.19. Our pipeline is as follows:

Load GT from sem_occ/*.npz.

Generate nuScenes-style 6-camera images via customization_example.py.

Run zero-shot inference using the pre-trained SurroundOcc checkpoint.

Evaluate binary IoU within a ±25 m xy range at 0.5 m resolution.
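For reference, this is the evaluation logic we are using in step 4. It is a minimal sketch under our own assumptions, not your official script: we assume the GT and predicted grids share an ego-centered (X, Y, Z) voxel layout at 0.5 m resolution, and that voxels are already binarized (occupied vs. free); the function and parameter names here are ours.

```python
import numpy as np

def crop_xy(occ, voxel_size=0.5, xy_range=25.0):
    """Crop an ego-centered (X, Y, Z) boolean voxel grid to +/-xy_range meters in xy.

    Assumes the ego vehicle sits at the center of the grid's xy plane.
    """
    n = int(xy_range / voxel_size)  # 50 voxels per side at 0.5 m resolution
    cx, cy = occ.shape[0] // 2, occ.shape[1] // 2
    return occ[cx - n:cx + n, cy - n:cy + n, :]

def binary_iou(pred_occ, gt_occ):
    """Binary occupancy IoU: |pred AND gt| / |pred OR gt| over occupied voxels."""
    inter = np.logical_and(pred_occ, gt_occ).sum()
    union = np.logical_or(pred_occ, gt_occ).sum()
    return float(inter) / float(union) if union > 0 else 0.0
```

If your evaluation crops differently (e.g. an asymmetric range, a z cutoff, or IoU averaged per frame vs. pooled over the trajectory), that alone could account for part of the gap, so any detail here would help.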

We have already verified basic settings like camera ordering, GT voxel alignment, and image resolution, but the gap remains.

To help us align our pipeline with your original experiments, could you kindly share some insights on:

Data Pipeline: Does the paper use the exact customization_example.py script, or are there additional pre/post-processing steps?

Evaluation Split: Were the reported results evaluated on the entire P2000 trajectory or a specific subset?

Scripts: If possible, would you be open to sharing the exact evaluation script or data preparation code? It would be incredibly helpful for our baseline setup.

Thank you so much for your time and guidance!
