niloofar@cs.washington.edu


About Me!

I'm into EDM and IDM, I love hiking and dancing, and in my free time I sometimes enjoy sewing and making dresses and fun outfits!

Books I Like!

  • The Body Keeps the Score by Bessel van der Kolk
  • 36 Views of Mount Fuji by Cathy Davidson
  • Indistractable by Nir Eyal
  • Sapiens: A Brief History of Humankind by Yuval Noah Harari
  • The Martian by Andy Weir
  • The Solitaire Mystery by Jostein Gaarder
  • The Orange Girl by Jostein Gaarder
  • Life is Short: A Letter to St Augustine by Jostein Gaarder
  • The Alchemist by Paulo Coelho


  • Fatemeh Mireshghallah (Niloofar)

    I defended my thesis in April!! Slides for my defense are here.
    I'm a fifth-year CS Ph.D. candidate at the University of California, San Diego, where I'm advised by Taylor Berg-Kirkpatrick. I am also a part-time researcher at Microsoft Semantic Machines and a volunteer research scientist at OpenMined. My research interests are privacy-preserving ML, natural language processing, and fairness; the slides for my proposal talk are available here. I am open to collaborations, so if you have a cool idea and you'd like to discuss it, feel free to email me!
    Google Scholar | CV | Bio | GitHub | Twitter
      News

  • March 2023: I am co-organizing the first Generative AI+Law GenLaw workshop at ICML 2023

  • December 2022: Join our Ethics in NLP birds-of-a-feather session at EMNLP!!

  • November 2022: Join our privacy roundtable at the Algorithmic Fairness through the Lens of Causality and Privacy workshop if you are attending NeurIPS!!

  • September 2022: Our paper Memorization in NLP Fine-tuning Methods got accepted to EMNLP 2022.

  • September 2022: Our paper Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks got accepted to EMNLP 2022.

  • September 2022: Our paper Differentially Private Model Compression got accepted to NeurIPS 2022.

  • April 2022: Our paper What Does it Mean for a Language Model to Preserve Privacy? got accepted to FAccT 2022.

  • April 2022: Our paper User Identifier got accepted to NAACL 2022.

  • March 2022: Our paper Mix and Match: Learning-free Controllable Text Generation got accepted to ACL 2022.

  • Nov 2021: I am co-organizing three workshops in 2022: Federated learning for NLP at ACL, Private NLP at NAACL and WiNLP at EMNLP!

  • Nov 2021: I gave an invited talk at National University of Singapore's privacy and trust group!

  • Nov 2021: Join us at the Widening NLP workshop at EMNLP 2021 here!

  • Oct 2021: Please take a few minutes to fill out our NAACL 2022 D&I survey here.

  • August 2021: Our paper "Style Pooling: An Empirical Study of Automatic Text Style Obfuscation" has been accepted to the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).

  • July 2021: Join us at our breakout session titled Machine Learning for Privacy: An Information Theoretic Perspective, at the 2021 WiML un-workshop.

  • June 2021: I started my internship at Microsoft Research AI, with the Language and Intelligent Assistance (LIA) group, where I am working on federated learning for language models.

  • May 2021: Our paper "U-Noise: Learnable Noise Masks for Interpretable Image Segmentation" has been accepted to the 2021 IEEE International Conference on Image Processing (ICIP 2021). You can find the paper here.

  • March 2021: My MSR AI summer 2020 internship paper, "Privacy Regularization: Joint Privacy-Utility Optimization in Language Models", has been accepted to the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021). You can find the paper here.

  • March 2021: I gave an invited talk on our paper Shredder: Learning Noise Distributions to Protect Inference Privacy at the Split Learning Workshop. You can find the talk here.

  • February 2021: I gave an invited talk on Introduction to NLP and Career Prospects (slides) at the University Institute Of Engineering and Technology.

  • January 2021: Our paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy" has been accepted to the 30th Web Conference (WWW 2021). You can find the paper here and the code here.

  • January 2021: I am co-organizing the Distributed and Private Machine Learning (DPML) Workshop at ICLR 2021. Consider submitting your work, or dropping by!

  • December 2020: I am presenting my MSR AI internship work, "Privacy Regularization: Joint Privacy-Utility Optimization in Language Models", at the Privacy-Preserving Machine Learning Workshop (PPML) at NeurIPS 2020. We present privacy-preserving mitigations for text-generation language models and evaluate them with existing and proposed attacks.
    BibTex and abstract

  • October 2020: I am giving an invited talk on privacy and fairness in deep neural network inference at the Machine Learning and Friends Lunch at Umass Amherst. You can find my slides here and the talk recording here.

  • September 2020: I am giving an invited talk on Privacy-Preserving NLP at the 2020 Privacy Conference (PriCon). Check out the talk here. You can find my slides for the talk here.

  • July 2020: I am co-leading a breakout session titled Feminist Perspectives for ML & CV at the WiML 2020 Un-workshop. The reading list and the discussed material are available here.

  • June 2020: I started my internship at Microsoft Research AI, with the Knowledge Technologies and Intelligent Experiences (KTX) group, where I am working on private and ethical text generation.

  • May 2020: Our paper "Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks" got accepted at the Thirty-seventh International Conference on Machine Learning (ICML 2020).

  • May 2020: Join us at OpenMined virtual Ask Me Anything (AMA) session, where I answer questions about privacy, machine learning, research and PhD life! You can find the recording here.

  • April 2020: Join us at the "Learning Representation for Cybersecurity" social, where we will be discussing cybersecurity, ML-based intrusion and malware detection, privacy-preserving ML, and other interesting topics! You can find slides for my talk here. You can also find a reading list of papers here.

  • April 2020: I was chosen as a winner of NCWIT (National Center for Women and IT)'s AiC Collegiate award!

  • March 2020: I virtually presented my paper Shredder in ASPLOS 2020. You can find my presentation video here.

  • December 2019: I was chosen as an NCWIT (National Center for Women and IT) collegiate award finalist!

  • December 2019: Join us in Vancouver for the WiMLDS [NeurIPS Special] Talks + Panel Discussion where I'll be giving a talk on Privacy in Machine Learning! You can find my slides for this talk here.

  • November 2019: Our paper Shredder: Learning Noise Distributions to Protect Inference Privacy got into the 25th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 20), which had an acceptance rate below 18%!

  • October 2019: Our paper Shredder got into NeurIPS19's Privacy in ML workshop!

  • June 2019: I am joining Western Digital's research department as a RAMP Next Generation Platform Technologies Intern.

  • June 2019: Our paper Shredder got into ICML's SPML workshop, let's meet up if you are attending ICML19!

  • April 2019: I am attending ASPLOS19, let me know if you are there!
  • Keep up with me on Twitter for more news!
      Research Experience
    Fall 2022
    Part-time Researcher
    Microsoft Semantic Machines
    Mentors: Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner
    Summer 2022
    Research Intern
    Microsoft Semantic Machines
    Mentors: Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner
    Winter 2022
    Research Intern
    Microsoft Research, Algorithms Group, Redmond Lab
    Mentors: Sergey Yekhanin, Arturs Backurs
    Summer 2021
    Research Intern
    Microsoft Research, Language, Learning and Privacy Group, Redmond Lab
    Mentors: Dimitrios Dimitriadis, Robert Sim
    Summer 2020
    Research Intern
    Microsoft Research, Language, Learning and Privacy Group, Redmond Lab
    Mentor: Robert Sim
    Summer 2019
    Research Intern
    Western Digital Co. Research and Development
    Mentor: Anand Kulkarni
      Invited Talks

  • Federated Learning and Privacy Regularization, Tutorial on Privacy-Preserving NLP at EACL, May 2023, slides and recording.
  • Auditing and Mitigating Safety Risks in Large Language Models, Cohere for AI, May 2023, slides and recording.
  • Learning-free Controllable Text Generation, LLM Interfaces Workshop and Hackathon, Apr 2023, slides and recording.
  • Auditing and Mitigating Safety Risks in Large Language Models, University of Washington, Apr 2023, (slides).
  • How much can we trust large language models?, Ethics Workshop at NDSS 2023, Feb 2023.
  • Privacy Auditing and Protection in Large Language Models, Google's FL Seminar, Feb 2023 (slides).
  • How Much Can We Trust Large Language Models?, University of Texas Austin, Oct 2022 (slides).
  • Mix and Match: Learning-free Controllable Text Generation, Johns Hopkins University, Sep 2022 (slides).
  • How Much Can We Trust Large Language Models?, Adversarial ML workshop at KDD, Rising Star Talk, Aug 2022 (slides, recording).
  • What Does it Mean for a Language Model to Preserve Privacy?, Mar 2022 (slides).
  • Improving Attribute Privacy and Fairness for Natural Language Processing at the University of Maine, Dec 2021 (slides).
  • Style Pooling: Automatic Text Style Obfuscation for Fairness at the National University of Singapore, Nov 2021 (slides).
  • Privacy-Preserving Natural Language Processing Panel at the Big Science for Large Language Models, Oct 2021. Recording here.
  • Privacy and Interpretability of DNN Inference (slides) at the Research Society MIT Manipal, July 2021. Recording here.
  • Low-overhead Techniques for Privacy and Fairness of DNNs (slides) at the Alan Turing Institute's Privacy-preserving Data Analysis Seminar Series, June 2021. Recording here.
  • Shredder: Learning Noise Distributions to Protect Inference Privacy (slides) at the Split Learning Workshop, March 2021. Recording here.
  • Introduction to NLP and Career Prospects (slides) at the University Institute Of Engineering and Technology, February 2021. Recording here.
  • Privacy and Fairness in Deep Neural Network Inference (slides) at Machine Learning and Friends Lunch at UMass Amherst, October 2020. Recording here.
  • Privacy-Preserving Natural Language Processing (slides) at the OpenMined Privacy Conference, September 2020. Recording here.
  • Invited poster session at the Microsoft Research AI Breakthroughs Workshop, September 2020
      Publications

    [For the full list, please refer to my Google Scholar page.]

  • Privacy-Preserving Domain Adaptation of Semantic Parsers ACL 2023

  • Membership Inference Attacks against Language Models via Neighbourhood Comparison ACL 2023 (findings)

  • Differentially Private Model Compression NeurIPS 2022

  • Memorization in NLP Fine-tuning Methods EMNLP 2022

  • Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks EMNLP 2022

  • What Does it Mean for a Language Model to Preserve Privacy? in FAccT 2022.

  • User Identifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis in NAACL 2022.

  • Mix and Match: Learning-free Controllable Text Generation, ACL 2022.

  • Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness, 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).

  • U-Noise: Learnable Noise Masks for Interpretable Image Segmentation, 2021 IEEE International Conference on Image Processing (ICIP 2021).

  • Privacy Regularization: Joint Privacy-Utility Optimization in Language Models, 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021).

  • Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy, 30th Web Conference (WWW 2021). Recording here.

  • Privacy in Deep Learning: A Survey. Please let me know if there is any related work that is missing!

  • Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy, 2020 CCS Privacy-Preserving Machine Learning in Practice (PPMLP) workshop (PPMLP 2020).

  • Shredder: Learning Noise Distributions to Protect Inference Privacy, 25th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 20). Recording here.

  • Shredder: Learning Noise Distributions to Protect Inference Privacy with a Self-Supervised Learning Approach, Thirty-third Annual Conference on Neural Information Processing Systems (NeurIPS19), Privacy in Machine Learning Workshop (PriML19).
    Code available at shredder-v2-self-supervised

  • Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge, Thirty-sixth International Conference on Machine Learning (ICML19), Security and Privacy of Machine Learning Workshop (SPML19).
    Code available at shredder-v1

  • Energy-Efficient Permanent Fault Tolerance in Hard Real-Time Systems, IEEE Transactions on Computers, March 2019

  • ReLeQ: An Automatic Reinforcement Learning Approach for Deep Quantization of Neural Networks, NeurIPS ML for systems workshop, December 2018
      Diversity, Inclusion & Mentorship

  • Widening NLP (WiNLP) co-chair
  • Socio-cultural D&I chair at NAACL 2022
  • Mentor for the Graduate Women in Computing (GradWIC) at UCSD
  • Mentor for the UC San Diego Women Organization for Research Mentoring (WORM) in STEM
  • Co-leader for the "Feminist Perspectives for Machine Learning & Computer Vision" Break-out session at the Women in Machine Learning (WiML) 2020 Un-workshop Held at ICML 2020
  • Mentor for the USENIX Security 2020 Undergraduate Mentorship Program
  • Volunteer at the Women in Machine Learning 2019 Workshop Held at NeurIPS 2019
  • Invited Speaker at the Women in Machine Learning and Data Science (WiMLDS) NeurIPS 2019 Meetup
  • Mentor for the UCSD CSE Early Research Scholars Program (CSE-ERSP) in 2018
      Professional Services

  • Reviewer for ICLR 2022
  • Reviewer for NeurIPS 2021
  • Reviewer for ICML 2021
  • Shadow PC member for IEEE Security and Privacy Conference Winter 2021
  • Artifact Evaluation Program Committee Member for USENIX Security 2021
  • Reviewer for ICLR 2021 Conference
  • Program Committee member for the LatinX in AI Research Workshop at ICML 2020 (LXAI)
  • Reviewer for the 2020 Workshop on Human Interpretability in Machine Learning (WHI) at ICML 2020
  • Program Committee member for the MLArchSys workshop at ISCA 2020
  • Security & Privacy Committee Member and Session Chair for Grace Hopper Celebration (GHC) 2020
  • Reviewer for ICML 2020 Conference
  • Artifact Evaluation Program Committee Member for ASPLOS 2020
  • Reviewer for IEEE TC Journal
  • Reviewer for ACM TACO Journal
      TA Experiences, UC San Diego
    Fall 2020

  • TA of CSE 276C: Mathematics for Robotics, Graduate Level, Instructor: Dr. Henrik I. Christensen
    Winter and Fall 2019

  • TA of CSE 240D: Accelerator Design for Deep Learning, Graduate Level, Instructor: Dr. Hadi Esmaeilzadeh
      Volunteer TA Experiences, Sharif University of Technology
    Fall 2017

  • Head TA of Digital Electronics
  • Head TA of Probability and Statistics
    Spring 2017

  • TA of Computer Architecture
  • TA of Signals and Systems
  • Head TA of Probability and Statistics
    Fall 2016

  • TA of Advanced Programming
  • Head TA of Numerical Methods