Dr. Heidy Khlaaf is the Chief AI Scientist at the AI Now Institute, focusing on the assessment and safety of AI within autonomous weapons systems. She specialises in the evaluation, specification, and verification of complex or autonomous software implementations in safety-critical systems. She has extensive expertise in leading system safety audits (e.g., IEC 61508, DO-178C), ranging from UAVs to large nuclear power plants, that contribute to the construction of safety cases for safety-critical software.
Dr. Khlaaf has helped establish and pioneer the field of AI Safety Engineering, and is known for her work leading the safety evaluation of Codex at OpenAI, where she developed a framework that measures a model's performance outcomes against a cross-functional risk assessment, now a de facto methodology used across AI labs. She was previously the Engineering Director of the AI Assurance team at Trail of Bits, where she led the cyber evaluations as part of the launch of the UK AI Safety Institute and unveiled the LeftoverLocals vulnerability.
Her unique expertise at the intersection of Systems Software Engineering and Machine Learning has allowed her to lead and contribute to the development of various standards and auditing frameworks for safety-related applications. This includes policy and regulatory frameworks for US and UK regulators that enable AI and ML to be assured and safely deployed within critical systems. She is currently part of the Network of Experts for the UNSG's AI Advisory Body, and an ISO SC 42 (Artificial Intelligence) Committee Member via the British Standards Institute. She has been featured in TIME, NPR, Politico, Vox, WIRED, and many other media outlets.
She completed her Computer Science PhD at University College London in 2017, where she was advised by Nir Piterman, and was a recipient of the prestigious NSF GRFP award. Her work focused on the temporal verification, termination, and non-termination of infinite-state software systems. She won the Best Paper Award at CAV 2015, with a subsequent invitation to JACM, for her work on the first automated algorithm for CTL* verification of infinite-state systems.
She received a Bachelor of Science from Florida State University with dual degrees in Computer Science and Philosophy and a minor in Mathematics, graduating with honors and highest distinction.
Thesis
"The Past, Present, and Future(s): Verifying Temporal Software Properties",
Heidy Khlaaf. PhD Dissertation. Department of Computer Science, University College London, 2018.
PDF
"97 Things Every SRE Should Know", edited by Emil Stolarsky and Jaime Woo. O'Reilly Media Inc., November 2020.
"Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims",
April 2020.
59 co-authors from 29 organisations, including tech companies and academic groups such as:
OpenAI, Leverhulme Centre for the Future of Intelligence, University of Oxford, Partnership on AI, Adelard, Mila, Google Brain, and many others.
"T2: Temporal Property Verification"
M. Brockschmidt* and H. Khlaaf* with B. Cook, S. Ishtiaq, and N. Piterman. Tools and Algorithms for the Construction and Analysis of Systems, Eindhoven, Netherlands, 2016.
PDF
"On Automation of CTL* Verification for Infinite-State Systems"
H. Khlaaf* with B. Cook and N. Piterman*. Computer Aided Verification, San Francisco, USA, 2015.
Best Paper Award at CAV 2015, Invited Submission to JACM.
PDF
"Fairness for Infinite-State Systems"
H. Khlaaf* with B. Cook and N. Piterman*. Tools and Algorithms for the Construction and Analysis of Systems, London, United Kingdom, 2015.
PDF
"Faster Temporal Reasoning for Infinite-State Programs"
H. Khlaaf* with B. Cook and N. Piterman. Formal Methods in Computer-Aided Design, Lausanne, Switzerland, 2014.
PDF
"Abstract: Fairness for Infinite-State Systems"
H. Khlaaf* with B. Cook and N. Piterman. 14th International Workshop on Termination, Vienna, Austria, 2014.
Tech Talks
"Auditing safety-critical AI systems"
BSI-VdTÜV AI Forum on Auditing AI-Systems: From Basics to Applications (German Federal Office for Information Security), Invited Speaker, Berlin, Germany, 2020. (~150 attendees)
When not analyzing safety-critical systems, you will most likely find me climbing. I mostly enjoy bouldering and I am currently climbing around the V9 grade range outdoors. I climb both indoors and outdoors, and my most recent trips have been to: Portland (UK), Rocklands (South Africa), the Peak District (UK), the Dolomites (Italy), Sintra (Portugal), Magic Wood (Switzerland), Albarracin (Spain), the Shawangunk Mountains (USA), Brione (Switzerland), Sardegna (Italy), Fontainebleau (France), Yosemite National Park, and Grand Canyon National Park.