Reviews

Review 1

Overall evaluation: 3 (strong accept)
Reviewer's confidence: 5 (expert)

Additional scores
Relevance to ICCS: 5 (excellent)
Quality of abstract: 5 (excellent)
Introduction and motivation: 5 (excellent)
Literature review and related work: 5 (excellent)
Description & originality of the own contribution: 5 (excellent)
Results and presentation: 4 (good)
Conclusions and future work: 4 (good)
Readability, quality of the English: 5 (excellent)
Quality of the figures: 4 (good)
Conformance to the LNCS template: 5 (excellent)
Candidate for best paper: 2 (No)
Candidate for special issue of Journal of Computational Science: 1 (Yes)

Comments for the authors

General impressions: This is a very interesting paper on a topic that is not sufficiently addressed in the literature. In the future, this work may serve as a reference point for comparing the achievements of other authors. Congratulations on successful research.

Issues to improve:

Author information is missing from the first page.

A description of the "big label" problem would be useful.

I have significant doubts about the strategy of merging images to generate data for multi-label classification. In my opinion, such artificially combined images, which form the entire dataset, lead to a skewed dataset that does not properly represent real-world multi-label classification scenarios. Of course, within the context of this particular paper I do not expect changes, but in the future the authors should consider using datasets that genuinely cover real multi-label classification problems. Synthetic image stitching causes the network to focus only on specific subregions, which can be observed in the activation heatmaps.

The models were trained with learning rates of 1e-3, 1e-4, and 1e-5. Wouldn't it be more convenient to use a learning rate scheduler that automatically adjusts the learning rate based on the model's performance? (A minimal sketch of this idea is given after the reviews.)

Fig. 4: I do not understand the presented data. Why are some elements represented as lines and others as dots? Also, a space is missing after the first sentence in the figure caption.

The figure captions are very long and, honestly, make the content more confusing rather than clarifying it. The authors should prepare concise and informative captions: one sentence, at most two lines.

The paper lacks a final paragraph summarizing the research conclusions. I would expect a short section providing practical guidelines for future research, such as:
If you are dealing with a single-label classification problem, do:
- This,
- That,
- Something else.
If you are working on a multi-label classification problem, do:
- This,
- And that.

Review 2

Overall evaluation: 0 (borderline paper)
Reviewer's confidence: 1 (none)

Additional scores
Relevance to ICCS: 2 (poor)
Quality of abstract: 4 (good)
Introduction and motivation: 4 (good)
Literature review and related work: 4 (good)
Description & originality of the own contribution: 4 (good)
Results and presentation: 4 (good)
Conclusions and future work: 4 (good)
Readability, quality of the English: 4 (good)
Quality of the figures: 4 (good)
Conformance to the LNCS template: 4 (good)
Candidate for best paper: 2 (No)
Candidate for special issue of Journal of Computational Science: 2 (No)

Comments for the authors

The paper primarily discusses neural network architecture efficiency within computer vision classification (image recognition tasks), not explicitly in scientific computing. Although it involves computational efficiency and deep learning methods (early-exit architectures), it does not directly address scientific problems or computational modeling typically emphasized at ICCS.

Review 3

Overall evaluation: 1 (weak accept)
Reviewer's confidence: 2 (low)

Additional scores
Relevance to ICCS: 4 (good)
Quality of abstract: 4 (good)
Introduction and motivation: 4 (good)
Literature review and related work: 4 (good)
Description & originality of the own contribution: 4 (good)
Results and presentation: 4 (good)
Conclusions and future work: 4 (good)
Readability, quality of the English: 4 (good)
Quality of the figures: 4 (good)
Conformance to the LNCS template: 4 (good)
Candidate for best paper: 2 (No)
Candidate for special issue of Journal of Computational Science: 2 (No)

Comments for the authors

This paper introduces a systematic framework for evaluating early-exit architectures in both single- and multi-label classification tasks and demonstrates its effectiveness in the computer vision domain. It lays a solid foundation for future research focused on developing early-exit strategies that effectively handle the complexities of diverse classification contexts.
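On the learning-rate question raised in Review 1, the sketch below shows one way a validation-driven scheduler could replace the fixed 1e-3/1e-4/1e-5 sweep. It is a minimal sketch that assumes a PyTorch training setup and uses the built-in ReduceLROnPlateau scheduler; the model, optimizer, and dummy validation data are placeholders rather than the paper's actual pipeline.

    import torch
    from torch import nn, optim

    model = nn.Linear(16, 4)                              # placeholder standing in for the early-exit network
    optimizer = optim.Adam(model.parameters(), lr=1e-3)   # start from the largest rate mentioned in the review
    criterion = nn.CrossEntropyLoss()

    # Cut the learning rate by `factor` once the monitored metric stops improving
    # for `patience` epochs, instead of hand-picking 1e-3, 1e-4, or 1e-5 up front.
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min",
                                                     factor=0.1, patience=3)

    x_val = torch.randn(32, 16)                           # dummy validation batch
    y_val = torch.randint(0, 4, (32,))

    for epoch in range(20):
        # ... one training epoch over the real data would go here ...
        model.eval()
        with torch.no_grad():
            val_loss = criterion(model(x_val), y_val)     # validation metric to monitor
        scheduler.step(val_loss)                          # scheduler reacts to measured performance
        print(epoch, optimizer.param_groups[0]["lr"], float(val_loss))

ReduceLROnPlateau adjusts the rate in response to a monitored validation metric, which matches the reviewer's suggestion; schedules such as step or cosine decay instead change the rate on a fixed timetable and would not react to the model's performance.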