Frequently Asked Questions

If you can't find an answer to your question below, please feel free to contact us.

Limits on uploading

You should not tune your algorithm to the test set by submitting results many times. This is considered cheating.

MPI-Sintel includes a large training set that you can split into training and validation sets (or do whatever type of cross validation best suits your problem). Since the training set is representative of the test set, there is no valid reason to tune to the test set.

To prevent cheating, we limit the number of uploads on a per-user basis. Each user may upload one result every eight hours, and at most three results per 30 days.

We actively monitor for cheating attempts. If we detect an attempt to circumvent the upload limit, we will take appropriate measures.

Cheating in general

While we hope cheating doesn't happen, we have thought hard about ways people may try to cheat and how to detect this. We will be watching for attempts to game the system and reserve the right to remove methods and bar groups from submitting if we believe they are cheating.

Public results policy

All results appearing in the public table should be fully described in a publicly available document so that they can be replicated. Results appearing here should match the results reported in that document.

Every method that is posted on the site must be fully described in a document (e.g. a published paper or technical report). The document may be withheld during a reviewing period. Once published, a link to the document must be provided. Methods that are not peer reviewed can also appear in the public evaluation as long as a sufficiently detailed document (e.g. a technical report) is available on-line.

We reserve the right to remove methods from the public tables if there is insufficient documentation or if the results in the table do not match published results.

What to do if you improve your method after you publish it

You can either provide a link to a technical report explaining what has changed, or edit the "Comments" field for your method on the "My Methods" page to add further information (e.g. that this version of the method uses different parameter settings).

How the Final pass differs from the movie

See our workshop paper for a detailed description of how the MPI-Sintel dataset differs from the original Sintel film, and why.

How do I compute the average?

Since the sequences differ in length, the average we compute is the average over all frames, and not the average of per-sequence average errors.
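The distinction can be seen in a small sketch (the sequence names and per-frame errors below are made up for illustration; they are not actual benchmark values):

```python
# Hypothetical per-frame endpoint errors for two sequences of different length.
errors = {
    "seq_a": [1.0, 2.0, 3.0, 4.0],  # 4 frames
    "seq_b": [10.0, 20.0],          # 2 frames
}

# Average over all frames (the quantity the benchmark reports):
# every frame contributes equally, so longer sequences carry more weight.
all_frames = [e for frames in errors.values() for e in frames]
avg_over_frames = sum(all_frames) / len(all_frames)   # 40 / 6 ≈ 6.67

# Average of per-sequence averages (NOT what the benchmark reports):
# every sequence contributes equally, regardless of length.
per_seq_means = [sum(frames) / len(frames) for frames in errors.values()]
avg_of_averages = sum(per_seq_means) / len(per_seq_means)  # (2.5 + 15.0) / 2 = 8.75

print(avg_over_frames, avg_of_averages)
```

The two quantities agree only when all sequences have the same length, which is not the case for MPI-Sintel.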

Referencing the dataset

Here are the BibTeX snippets for citing the dataset in your work.

Main paper and benchmark:

@inproceedings{Butler:ECCV:2012,
  title = {A naturalistic open source movie for optical flow evaluation},
  author = {Butler, D. J. and Wulff, J. and Stanley, G. B. and Black, M. J.},
  booktitle = {European Conf. on Computer Vision (ECCV)},
  editor = {{A. Fitzgibbon et al. (Eds.)}},
  publisher = {Springer-Verlag},
  series = {Part IV, LNCS 7577},
  month = oct,
  pages = {611--625},
  year = {2012}
}
Technical background:

@inproceedings{Wulff:ECCVws:2012,
  title = {Lessons and insights from creating a synthetic optical flow benchmark},
  author = {Wulff, J. and Butler, D. J. and Stanley, G. B. and Black, M. J.},
  booktitle = {ECCV Workshop on Unsolved Problems in Optical Flow and Stereo Estimation},
  editor = {{A. Fusiello et al. (Eds.)}},
  publisher = {Springer-Verlag},
  series = {Part II, LNCS 7584},
  month = oct,
  pages = {168--177},
  year = {2012}
}