EMMCVPR 2011 - reviewing

Reviewer login

Thank you for volunteering to review for EMMCVPR 2011. You can access your assigned papers via the EMMCVPR 2011 website on the Microsoft Conference Management Toolkit (CMT) server: https://cmt.research.microsoft.com/EMMCVPR2011/Protected/Reviewer/

Non-disclosure agreement and reviewing instructions

Before accessing the EMMCVPR submission server and viewing your assignments, please note that by accepting this invitation you agree to adhere to the reviewing instructions and the non-disclosure agreement below.

The papers you review and the information therein are confidential. Although the law is not explicit about whether submission to a doubly anonymous review process constitutes public release, we expect you NOT to regard it as such and to follow standard reviewer confidentiality ethics.

Please take the time to read carefully through the papers as well as the supplemental material. Never forget that you are an author yourself and that you, too, deserve a fair evaluation.

Please provide substantiated comments and show the authors that you have devoted appropriate time and thought to their work.

Be fair in your evaluation: if a paper is incremental with respect to cited references, be concrete and also judge the paper's relevance; sometimes a small research increment can have unprecedented practical impact. If a paper is incremental with respect to references it has missed, be strict but fair and polite.

If a paper has a novel technical approach but misses important references, it may still deserve publication; in particular, do not become emotional if the authors missed a citation to your own work.

When judging novelty, provide references to the prior work, or to the specific paper over which the submission is only an incremental improvement. When judging relevance, try to be objective: do not express a bias towards a school of thought or dismiss particular areas of computer vision. When judging technical correctness and strength, provide details and justify why a result might be weak or flawed. When judging experiments, reward scrutiny, the use of benchmarks, and comparison to other approaches.