The Associate Editor in charge of your paper has recommended a decision of minor revision. The requested revisions are outlined in the comments from the Associate Editor handling your manuscript and from the reviewers. Please read the reviews carefully and then submit your revision by 17-Apr-2021.

To submit your revision, log into https://mc.manuscriptcentral.com/dtrap and enter your Author Center, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Action," click on "Create a Revision." Your manuscript number has been appended to denote a revision. Please also include a cover letter explaining the changes you have made.

I look forward to receiving your revised paper.

Regards,
Prof. Arun Lakhotia
Editor in Chief, Digital Threats: Research and Practice
arun@louisiana.edu

Please find below the comments of the Associate Editor:

Associate Editor: Metcalf, Leigh
Comments to the Author:
The reviewers agree that you have a very good paper. The main suggestion I think you should follow through on is to include a discussion of how these results can be applied in practice.

Please find below the comments from the reviewers of your paper:

Reviewer: 1
Recommendation: Accept

Comments:
(There are no comments.)

Additional Questions:
What is the paper about?: The paper examines threats to deepfake detection methods from adversarial examples and sets up a series of experiments to test how robust existing deepfake detection methods are against common adversarial attacks.

What does this paper contribute to the field of Digital Threats? What are the strengths of this paper?: A good experimental setup for evaluating current deepfake detection methods; it also posits a useful threat model for deepfake detection.

How can the paper be improved? What are its weaknesses? How can it be strengthened?: It could include more discussion of the threat landscape for deepfake detection, summarize other threat models that have been posited, and better bound the scope to which this research is relevant.

Is this paper of potential interest to developers and engineers?: Maybe

Reviewer: 2
Recommendation: Minor Revision

Comments:
(There are no comments.)

Additional Questions:
What is the paper about?: This paper presents methods for attacking DNNs used to distinguish real from fake videos, including under compression.

What does this paper contribute to the field of Digital Threats? What are the strengths of this paper?: The paper's evidence for transferability of attacks is a contribution. The paper's experimental design is robust.

How can the paper be improved? What are its weaknesses? How can it be strengthened?: While the results are narrowly applied to deepfakes, additional and broader applications would be useful, as would further discussion of how these results can be applied in practice.

Is this paper of potential interest to developers and engineers?: Yes

Reviewer: 3
Recommendation: Accept

Comments:
See the attached file for minor notes and one recommended change.

Additional Questions:
What is the paper about?: Generating algorithms to fool a variety of deepfake detectors in the presence of real-world constraints.

What does this paper contribute to the field of Digital Threats? What are the strengths of this paper?: This is a very well-written paper. It contains extensive background reading, replete with references, for the novice reader, and walks the reader through the techniques, challenges, algorithms, and findings.
This is a timely topic of general interest, and the authors do an exceptional job presenting the material and advocating their own research.

How can the paper be improved? What are its weaknesses? How can it be strengthened?: The most significant element missing from this paper is an assessment of whether the manipulations being proposed to images are detectable by humans. The paper does a thorough job convincing the reader that the proposed algorithms can defeat existing classifiers, which is a very notable achievement. However, deepfakes are primarily a tool intended to deceive humans, and as such any attempt to create a new deepfake algorithm must ensure that the perturbations being introduced will actually fool humans. That said, this paper stands on its own, and I would be glad to see the question I pose above addressed either in future work by these authors or by other researchers with specialized expertise in human survey distribution and assessment.

Is this paper of potential interest to developers and engineers?: Yes