Reviews For Paper
Paper ID | 810
Title | Unified Graph Matching in Euclidean Spaces

Masked Reviewer ID: Assigned_Reviewer_1

Review:

Please briefly describe the paper's contributions, and list its positive and negative points. |
The paper presents a learning algorithm for Euclidean graph matching.
The work builds upon [6], where structured learning was used to learn parameters for appearance, distance, and angle features. The difference with [6] is that the authors use a more efficient construction: a junction tree instead of a ring structure. In both cases inference is tractable and the running time is O(|V| |V'|^3) (for 2D matching); however, a single iteration of BP suffices with a junction tree (vs. convergence to the optimum with loopy BP).
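To make the complexity claim concrete, here is a minimal sketch (not the paper's or [6]'s exact construction; the clique scores are placeholder potentials) of exact max-product inference on a chain-structured junction tree of triplet cliques. It costs O(|V| |V'|^3) and needs only one forward/backward pass:

import numpy as np

def map_assignment(theta, n, m):
    """Exact max-product DP on a chain of triplet cliques
    C_i = {x_i, x_{i+1}, x_{i+2}}, i = 0..n-3, where each template
    node x_i takes one of m candidate labels in the scene graph.

    theta: list of n-2 arrays of shape (m, m, m) with clique scores
           (placeholder potentials; learned features would go here).
    Cost: (n-2) cliques, each with m^3 table entries -> O(n m^3),
    i.e. O(|V| |V'|^3) in the review's notation, in a single pass.
    """
    msg = np.zeros((m, m))        # message over the two-variable separator
    back = []                     # argmax tables for backtracking
    for i in range(n - 2):
        table = theta[i] + msg[:, :, None]   # absorb everything to the left
        back.append(table.argmax(axis=0))    # best x_i per (x_{i+1}, x_{i+2})
        msg = table.max(axis=0)              # max x_i out (max-product)
    # decode the last two variables, then walk the argmax tables backwards
    j, k = np.unravel_index(msg.argmax(), msg.shape)
    labels = [int(j), int(k)]
    for i in reversed(range(n - 2)):
        labels.insert(0, int(back[i][labels[0], labels[1]]))
    return labels

# toy usage with random potentials
rng = np.random.default_rng(0)
n, m = 6, 4
theta = [rng.normal(size=(m, m, m)) for _ in range(n - 2)]
print(map_assignment(theta, n, m))   # one label in {0..m-1} per template node

With a ring (loopy) structure, by contrast, messages must be iterated until convergence.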
However, the exact same junction-tree construction was already introduced in (Caetano and McAuley, 2009), which is not cited in the paper (see reference below).
Given this, the main contribution of the paper is to adapt the structured learning for isometric graph matching from [6] to the junction-tree construction from (Caetano and McAuley, 2009) instead of a ring structure. A second contribution is the incorporation of homeomorphism features as additional features inside the structured learning. This is made possible by the introduction of replicates and Theorem 2.2, and would not be possible with the ring structure in [6]. Experiments show that learning can benefit from these extra features, improving results over [6] in some cases.
Figure 4 was interesting, giving insight into which features help, and when.
Minor point:
(Lines 555-557): the paper promises further results (additional datasets, example matches, weight vectors, and timing) in supplemental material. However, I could not find such a supplement.

Overall Rating | Borderline
For the sake of the authors and the quality of the reviewing process, please explain your ratings in the space provided. Your rating should not consider whether or not the paper might become an oral or a poster. |
The novelty is incremental (adapting structured learning for graph matching from [6] to the junction-tree construction from (Caetano and McAuley, 2009), which is not cited).
However, the use of replicates, which allows for the additional homeomorphism features, is interesting enough, and the experiments demonstrate the usefulness of the added features.

Novelty | Moderately original
Please explain your evaluation of novelty. "Very original" papers open new directions and often become seminal papers. "Has been done before" implies reject, and must be accompanied by appropriate justifications and relevant references. |
The novelty is incremental (adapting structured learning for graph matching from [6] to the junction-tree construction from (Caetano and McAuley, 2009), which is not cited).
Isomorphism/homeomorphism features for graph matching have been used before, but, to my knowledge, not in the context of a tractable graph matching formulation, which is enabled here by the use of replicates and Theorem 2.2.

Importance / Relevance | Of broad interest
Please justify your evaluation. Every researcher in CV should be interested in works "Of broad interest" due to the importance of the result or techniques. Works "Of sufficient interest" do not have to address everyone in the audience, but should have an impact in a certain area. Works "Of limited interest" should be considered only if their novelty, clarity, and correctness is excellent. "Irrelevant" implies reject, and must be justified thoroughly. |
Graph matching is a fundamental problem, and algorithms for structured learning of graph matching are in vogue.

Reference to Prior Work | References missing
Please explain your reference score. "Does not cite relevant work" implies reject. This option should be selected only if the missing work is well known in the community and commonly cited, else we suggest giving the authors the benefit of the doubt by selecting "References missing". List the missing references. |
The paper uses exactly the same junction-tree construction as in the following paper, which is not cited:
@article{caetano2009faster,
  title     = {{Faster graphical models for point-pattern matching}},
  author    = {Caetano, T. S. and McAuley, J. J.},
  journal   = {Spatial Vision},
  volume    = {22},
  number    = {5},
  pages     = {443--453},
  year      = {2009},
  publisher = {VSP, an imprint of Brill}
}
Apart from this, the references are generally good.

Clarity of Presentation | Clear enough
Please explain your clarity score. "Unreadable" implies reject and must be thoroughly justified. |
The paper reads well.

Technical Correctness | Probably correct (did not check completely)
"Technically correct" means that its conclusions are supported by flawless arguments. Proofs are correct, formulas are correct, there are no hidden assumptions, experiments are well designed and properly evaluated. However, "Has major problems" implies reject and must be justified thoroughly. |
The paper appears technically correct.

Experimental Validation | Lacking in some respect
Different papers need different levels of experimental validation. A theoretical paper may need no experiments. A paper presenting a new idea might just need an experiment illustrating that there exists a situation where the idea applies. A paper presenting a new phenomenon or a performance evaluation paper may need thorough experimental evaluation. "Insufficient validation" implies reject and must be justified thoroughly. |
Experiments are provided on standard datasets and compared against relevant standard baselines.
Intuitively, the approach should improve over [6] because of the added homeomorphism features, but in the experiments the improvement is not so clear.
In the "claire" and "silhouette" experiments, the approach is clearly better.
In the CMU house data, the approach performs worse than [6] for all but the last 2 data points (separations of 80 and 90 frames), where presumably homeomorphism features are more stable than isometric features. In the CMU hotel data, balanced graph matching performs better up to a separation of 50 frames.
This raises the question of why balanced graph matching is not compared against on the "proof of concept", "claire", "pose estimation house", and "pose estimation volvo" datasets (since the code is available online).
The timing plot in Figure 5b is useful but a bit misleading, as it does not reflect the fact that the complexity grows as O(|V| |V'|^3), as opposed to linear assignment and [10]. I suggest a plot showing |V| = |V'| on the x-axis and time on the y-axis (possibly along with Delta(g,g') on another y-axis).
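A minimal sketch of such a plot (the size, time, and error values below are placeholders, not measurements from the paper; real values would come from re-running the method over a range of graph sizes):

import matplotlib.pyplot as plt

# Hypothetical measurements: graph size n = |V| = |V'|, wall-clock time (s),
# and matching error Delta(g, g') at that size. Placeholder numbers only.
sizes  = [10, 20, 30, 40, 50]
times  = [0.1, 0.9, 3.2, 8.1, 16.5]    # illustrative, roughly cubic growth
errors = [0.12, 0.10, 0.09, 0.09, 0.08]

fig, ax_time = plt.subplots()
ax_time.plot(sizes, times, "o-", label="runtime")
ax_time.set_xlabel("|V| = |V'|")
ax_time.set_ylabel("time (s)")

ax_err = ax_time.twinx()                # second y-axis for the matching error
ax_err.plot(sizes, errors, "s--", color="gray", label="Delta(g, g')")
ax_err.set_ylabel("Delta(g, g')")

fig.tight_layout()
plt.show()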
(line 564) How were the 15 points selected?

Repeatability | Easy to reproduce
Could a competent researcher reproduce the results based on information contained in the paper and its references, or have the authors stated a commitment to making a reference implementation and/or datasets available? |
It should be doable to reproduce similar results, up to implementation details, constants, etc.

Additional comments to author(s) |
The authors should cite (Caetano and McAuley, 2009) and clearly state their contributions in the context of [6] and that citation. A more systematic comparison to the baselines in all experiments would be useful.
A timing plot as a function of the number of nodes would help assess up to which point the approach remains usable.

Masked Reviewer ID: Assigned_Reviewer_2

Review:

Please briefly describe the paper's contributions, and list its positive and negative points. |
Instead of using a specific graph matching algorithm
that enforces a certain pre-determined type of similarity
(isomorphism, homeomorphism, etc.), this paper proposes using a
general algorithm with parameters learned via structured
learning. This way, the "correct" flavor of graph matching is learned
automatically from training data. Experimental results demonstrate the
technique on several small datasets, and compare favorably to the "hard-coded" graph matching algorithms.

Overall Rating | Weakly Accept
For the sake of the authors and the quality of the reviewing process, please explain your ratings in the space provided. Your rating should not consider whether or not the paper might become an oral or a poster. |
The paper is very nicely written and addresses an interesting problem. The use of structured learning techniques in this context seems novel and is intuitively appealing. The paper is generally well-written and the technical details are explained clearly.

Novelty | Moderately original
Please explain your evaluation of novelty. "Very original" papers open new directions and often become seminal papers. "Has been done before" implies reject, and must be accompanied by appropriate justifications and relevant references. |
The idea of applying structured learning techniques for learning different types of graph matching seems novel.

Importance / Relevance | Of sufficient interest
Please justify your evaluation. Every researcher in CV should be interested in works "Of broad interest" due to the importance of the result or techniques. Works "Of sufficient interest" do not have to address everyone in the audience, but should have an impact in a certain area. Works "Of limited interest" should be considered only if their novelty, clarity, and correctness is excellent. "Irrelevant" implies reject, and must be justified thoroughly. |
Many vision problems can be posed as graph matching in Euclidean spaces, so work in this area is likely of interest to a large subset of CVPR.

Reference to Prior Work | References adequate
Please explain your reference score. "Does not cite relevant work" implies reject. This option should be selected only if the missing work is well known in the community and commonly cited, else we suggest giving the authors the benefit of the doubt by selecting "References missing". List the missing references. |
The references seem fine; however, the summary of related work (Section 1.1) is quite brief. It would be good to give some more details about the papers that are cited here.

Clarity of Presentation | Very clear
Please explain your clarity score. "Unreadable" implies reject and must be thoroughly justified. |
The writing style is very clear.

Technical Correctness | Probably correct (did not check completely)
"Technically correct" means that its conclusions are supported by flawless arguments. Proofs are correct, formulas are correct, there are no hidden assumptions, experiments are well designed and properly evaluated. However, "Has major problems" implies reject and must be justified thoroughly. |
Theoretical approach seems reasonable.

Experimental Validation | Limited but convincing
Different papers need different levels of experimental validation. A theoretical paper may need no experiments. A paper presenting a new idea might just need an experiment illustrating that there exists a situation where the idea applies. A paper presenting a new phenomenon or a performance evaluation paper may need thorough experimental evaluation. "Insufficient validation" implies reject and must be justified thoroughly. |
The experimental results seem convincing, with the proposed method tested against several graph matching algorithms from the literature.

Repeatability | Easy to reproduce
Could a competent researcher reproduce the results based on information contained in the paper and its references, or have the authors stated a commitment to making a reference implementation and/or datasets available? |
The datasets are publicly available (with the exception of the proof-of-concept dataset, which would not be of much use to others anyway). The description in the paper is clear enough for the method to be re-implemented relatively easily.

Additional comments to author(s) |
.

Masked Reviewer ID: Assigned_Reviewer_3

Review:

Please briefly describe the paper's contributions, and list its positive and negative points. |
The manuscript presents a graph matching method that simultaneously takes into account first-, second-, and third-order graph information. To adaptively balance these sources of information, a structured learning technique is used to learn the weights for each part. The approach is tested on several datasets and shows improvement over methods that use only a single cue.
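As a rough sketch of what such a cross-order, weight-balanced objective typically looks like (illustrative notation only; the paper's exact feature maps and clique structure may differ), the matching y is chosen to maximize a weighted sum of first-, second-, and third-order feature scores:

\[
  y^{*} \;=\; \arg\max_{y}\;
      \sum_{i} w_{1}^{\top} \phi_{1}\!\bigl(v_i,\, y(v_i)\bigr)
    + \sum_{(i,j)} w_{2}^{\top} \phi_{2}\!\bigl(v_i, v_j,\, y(v_i), y(v_j)\bigr)
    + \sum_{(i,j,k)} w_{3}^{\top} \phi_{3}\!\bigl(v_i, v_j, v_k,\, y(v_i), y(v_j), y(v_k)\bigr)
\]

where phi_1, phi_2, phi_3 are first-order (appearance), second-order (e.g., distance), and third-order (e.g., angle) features, and the weights w = (w_1, w_2, w_3) are estimated by structured learning from ground-truth correspondences.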
The main contribution is the principled formulation that combines information at different orders together with the learning-based weight estimation. A weak part of the work lies in the lack of comparison with previous methods that take cross-order information into account for matching problems.

Overall Rating | Borderline
For the sake of the authors and the quality of the reviewing process, please explain your ratings in the space provided. Your rating should not consider whether or not the paper might become an oral or a poster. |
On the one hand, the method and derivation given in the draft sound very interesting and may potentially bring breakthroughs to the problem of graph matching. On the other hand, the lack of comparison experiments makes it unclear how far it can really go. In addition, the relation to other cross-order matching algorithms is not addressed adequately.

Novelty | Moderately original
Please explain your evaluation of novelty. "Very original" papers open new directions and often become seminal papers. "Has been done before" implies reject, and must be accompanied by appropriate justifications and relevant references. |
Using geometric transformations to study different orders of graph properties is novel and may lead to new ideas and solutions. The learning is somewhat new.

Importance / Relevance | Of sufficient interest
Please justify your evaluation. Every researcher in CV should be interested in works "Of broad interest" due to the importance of the result or techniques. Works "Of sufficient interest" do not have to address everyone in the audience, but should have an impact in a certain area. Works "Of limited interest" should be considered only if their novelty, clarity, and correctness is excellent. "Irrelevant" implies reject, and must be justified thoroughly. |
n/a

Reference to Prior Work | References missing
Please explain your reference score. "Does not cite relevant work" implies reject. This option should be selected only if the missing work is well known in the community and commonly cited, else we suggest giving the authors the benefit of the doubt by selecting "References missing". List the missing references. |
Combining graph statistics across different orders has recently been studied by several groups. I think the authors should discuss and compare with previous works along this line, e.g., "A Tensor-Based Algorithm for High-Order Graph Matching", by Olivier Duchenne, Francis Bach, In So Kweon, and Jean Ponce, CVPR'09.

Clarity of Presentation | Difficult to read
Please explain your clarity score. "Unreadable" implies reject and must be thoroughly justified. |
The paper is in general well organized. It is, however, not easy to follow; this is to some degree expected, given that many abstract terms and mathematical notations are introduced. I would suggest adding more detailed and concrete explanations.

Technical Correctness | Probably correct (did not check completely)
"Technically correct" means that its conclusions are supported by flawless arguments. Proofs are correct, formulas are correct, there are no hidden assumptions, experiments are well designed and properly evaluated. However, "Has major problems" implies reject and must be justified thoroughly. |
n/a

Experimental Validation | Limited but convincing
Different papers need different levels of experimental validation. A theoretical paper may need no experiments. A paper presenting a new idea might just need an experiment illustrating that there exists a situation where the idea applies. A paper presenting a new phenomenon or a performance evaluation paper may need thorough experimental evaluation. "Insufficient validation" implies reject and must be justified thoroughly. |
The matching results look good and show clearly that cross-order statistics help improve over single-order statistics. However, it would be more convincing if comparisons with other approaches were included.

Repeatability | Very difficult
Could a competent researcher reproduce the results based on information contained in the paper and its references, or have the authors stated a commitment to making a reference implementation and/or datasets available? |
The lack of details makes the experimental results hard to repeat.

Additional comments to author(s) |
The draft presents a general way to combine information at different scales (orders) for graph matching. I think it would be more convincing if more details were provided and comparison experiments were included.
One comment about the learning part: my understanding is that it requires training samples. If so, how heavily the proposed method relies on these training samples becomes an important practical issue. It would be helpful to include some related information in the experimental section.