============================= Meta Review Comments ======================================
* Meta Review:
Pros: This is a piece of solid work that approaches the hyper graph matching problem using a new variant of dual decomposition. The paper is well written, and empirical results show that their method is comparable to or better than state-of-the-art methods.
Cons: The approach is rather heuristic, and the presentation can be improved in various places as the reviewers pointed out.
============================= Reviewer Comments #20231 =========================
* Originality: 6
* Technical Quality: 6
* Significance: 4
* Relevance: 8
* Quality of writing: 5
* Overall Score: 5
* Comments:
This paper studies the problem of matching a set of features
in an image to a set of features in a model, when relations
between features involve more than two features at a time.
This can be seen as a hypergraph-matching optimisation problem.
This is a computationally intractable problem, so it is
logical to consider heuristic approximations.
This paper is a solid piece of work which describes such a heuristic
approach. Experiments are reported which would seem to indicate
that the authors' method is comparable to or better than the
state of the art.
This having been said, I have some criticisms of the paper:
(1) As with any heuristic method, it is only as good as its
performance in real applications. The set of test images is
limited to pairs of frames from only 3 films (showing a house, a car,
a motorbike). This is not a sufficiently large range of image
types to conclude that the proposed heuristic is systematically
better than previously-proposed methods, nor to determine on
which types of images it performs best.
(2) If I have understood correctly, the experiments are performed
using ternary similarity functions. However, since the three
arguments of this similarity function are the angles a,b,c of a triangle,
this ternary function could be replaced by a binary function
since c=pi-(a+b). Of course, the nature of the function changes
but it is essentially binary, so graph-matching techniques could
surely be used.
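To spell out this substitution (writing f for the ternary similarity function; the name is illustrative, not taken from the paper): since the angles of a triangle satisfy a + b + c = pi, any ternary score over them collapses to a binary one,

```latex
f(a, b, c) \;=\; f\bigl(a,\, b,\, \pi - (a + b)\bigr) \;=:\; \tilde{f}(a, b).
```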
(3) There are many presentation problems. For example, the pictures
in Figure 2 are too small for the reader to actually see anything.
More importantly, the fact that proofs are missing or incomplete
makes the mathematical reasoning very difficult to follow and
impossible to check.
(4) There are many heuristic choices which appear arbitrary.
I think that the reparameterization point of view in Section 4.2
shows that the dual of the linear programming relaxation is
equivalent to what is known as Optimal Soft Arc Consistency (OSAC)
in the Constraint Programming (CP) community (see Cooper et al.,
Soft arc consistency revisited, Artif. Intell., 174, 449-478, 2010).
This is perhaps worth pursuing because the approximation to
OSAC that you use may correspond to a form of soft local consistency
developed in the CP community. I suspect that it is simply Soft
Arc Consistency (SAC). Building this kind of bridge could
give some extra weight to your approach (if your method has already
proved useful elsewhere) or help you to find better techniques.
For example, EDAC and FDAC are efficient and stronger extensions of SAC (see Cooper
et al., 2010).
Minor comments:
-The first sentence of the abstract assumes you are only interested
in vision problems, which seems an arbitrary assumption.
-Section 2, line 2: c is a list rather than a subset.
-Equation (4c): capital P in C^P
-Equation (4d): delete "\forall i \in V"
-Equation (5): i \subset c should be i \in c
-When you introduce g in equation (5) you should say whether it is
an objective function to be maximized or minimized.
-You give no explanation (nor reference) of why problem (7)
can be solved by the Hungarian algorithm, nor of what this algorithm is.
-Equation (11): This is incomprehensible to me. How can the optimal
solution lambda^star be defined in terms of lambda?
-Statement of Proposition 6: what do you mean exactly by "unique
maximizer"?
-the term "outlier" is not formally defined in the paper
-legend of Figure 1, Figure 2: failes--> failed
-Section 5 Implementation details: "We run at most 100...": this
is not clear enough to reproduce your experiments.
-Section 5.1, end of 1st paragraph: do you mean 1/2500 or 2500?
Besides, what are the units?
-Bibliography: page numbers are missing for CVPR, NIPS, ECCV.
============================= Reviewer Comments #25773 =========================
* Originality: 7
* Technical Quality: 7
* Significance: 5
* Relevance: 4
* Quality of writing: 4
* Overall Score: 5
* Comments:
In the area of inference for graphical models, the authors formulate a problem of hyper graph matching as a constrained maximum a posteriori (MAP) problem. To solve it, they propose a linear programming (LP) relaxation. In the primal form they introduce only one global correspondence between hyper graphs. The dual formulation can be decomposed into a linear bipartite matching problem (that is solved by the Hungarian algorithm), and what they call belief propagation sub-problems (that are solved by dynamic programming). How to solve these latter sub-problems is the main objective of the paper.
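As a generic illustration of the linear bipartite matching sub-problem mentioned above (not the authors' code; the similarity matrix S and the function name are invented for this sketch): given pairwise similarities between n model features and n image features, the sub-problem asks for the permutation maximizing total similarity. The Hungarian algorithm solves it in O(n^3); for a tiny instance, brute force over permutations suffices to show what is being computed.

```python
# Linear bipartite matching by brute force (illustration only).
# S[i][j] is the similarity between model feature i and image feature j.
from itertools import permutations

def best_assignment(S):
    """Return (best_score, best_perm) maximizing sum_i S[i][perm[i]]."""
    n = len(S)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(S[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

S = [[9, 1, 2],
     [3, 8, 1],
     [2, 4, 7]]
print(best_assignment(S))  # -> (24, (0, 1, 2))
```

In practice one would replace the brute-force loop with a polynomial-time assignment solver such as the Hungarian algorithm.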
The paper is very technical. It can hardly be followed unless one first reads [Zhang et al., 2016]. I have doubts about its interest for the wide audience of IJCAI. Moreover, the authors have not made any effort to make the paper more accessible.
Some examples:
The acronym MAP (maximum a posteriori) is not introduced.
Remark that bold letters stand for vectors.
You introduce attributes of hyper arcs, but then you also use attributes of nodes.
Graph isomorphism is not NP-hard, so a citation for the hardness of (1) is necessary.
"Indicator function" usually stands for "characteristic function". Here, in (1) for instance, I would use "permutations" (bijections between [n] and [n]).
The use of these "indicator functions" \mu in (4) makes the formula really cryptic. Notice that \mu_i is sometimes applied to y_i, as \mu_i(y_i), and sometimes applied to an equality, as in \mu_i(y_i=j).
Although [n] is quite standard, it should also be introduced.
The notions of messages (including the notation "\lambda_{c \to i}") and belief propagation are also not known to most readers.
None of the propositions are proved in the paper.
============================= Reviewer Comments #26375 =========================
* Originality: 8
* Technical Quality: 8
* Significance: 8
* Relevance: 8
* Quality of writing: 7
* Overall Score: 8
* Comments:
Dynamic Programming Bipartite Belief Propagation For Hyper Graph Matching
This work introduces a novel dynamic programming approach - Dynamic Programming Bipartite Belief Propagation - to the hyper graph matching problem. The approach discussed is contrasted with the state-of-the-art approaches for this class of problem.
Although this work falls slightly outside my area of expertise, I must say I have quite enjoyed reading it. The material is well presented and, to the best of my knowledge, appears sound.
In my opinion the authors clearly positioned their contribution in the context of relevant literature and carried out a convincing computational study to illustrate the validity of the proposed approach.
I only have few minor comments on this work.
Minor comments:
"MAP inference" - please spell out MAP.
"The problem (1) is NP-hard" - please provide reference.
In Eq. (5) remove comma in second line.
============================= Reviewer Comments #28450 =========================
* Originality: 3
* Technical Quality: 3
* Significance: 3
* Relevance: 7
* Quality of writing: 5
* Overall Score: 5
* Comments:
This paper approaches the hyper graph matching problem by framing it as a MAP inference problem and uses belief propagation (dual decomposition) to solve it. The paper is well written and demonstrates some improvement over existing methods. On the other hand, the ideas are not especially novel, and mostly follow standard practice in dual decomposition.