 

Keynote Lectures

Risk-adjusted Decision Making for Sustainable Development
Elena Rovenskaya, International Institute for Applied Systems Analysis (IIASA), Austria

“Machine Unlearning: An Enterprise Data Redaction Workflow”
David Saranchak, Concurrent Technologies Corporation (CTC), United States

 

Risk-adjusted Decision Making for Sustainable Development

Elena Rovenskaya
International Institute for Applied Systems Analysis (IIASA)
Austria
 

Brief Bio
Elena Rovenskaya is the Program Director of the Advanced Systems Analysis (ASA) Program and the Acting Director of the Evolution and Ecology Program (EEP) at IIASA. Her scientific interests lie in the fields of optimization, operations research, decision sciences, and mathematical modeling of complex socio-environmental systems. Under Dr. Rovenskaya's leadership, the ASA Program develops, tests, and makes available new quantitative and qualitative methods to address problems arising in the policy analysis of socio-environmental systems. Its team of 35+ scientists works to support decisions in the presence of ambiguous stakeholder interests, complexity of the underlying systems, and uncertainty. Dr. Rovenskaya's own research focuses on several areas, including systemic risks in ecological and economic networks, economic development under environmental constraints, agent-based modeling of regional development, and regional economic integration.


Abstract
Many models used to inform sustainable development policies are deterministic: future parameter values are fixed according to estimates or scenarios. The future, however, is highly uncertain, and ignoring this uncertainty when making decisions can be very costly. This talk will present the principles of a two-stage, stochastic, chance-constrained programming approach that can be used to derive policies suitable for a broad range of uncertain parameter values. It will also feature several example applications, including pollution control and water allocation problems.
The talk will highlight both the benefits of incorporating uncertainty and the opportunities missed due to the lack of perfect information.
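To make the approach concrete, here is a minimal sketch of a two-stage, chance-constrained program solved by sample average approximation, using a hypothetical single-reservoir water-allocation setup (the distribution, costs, and reliability level are illustrative assumptions, not the speaker's model):

```python
import random

random.seed(0)

# First stage: choose an allocation x before demand is known.
# Second stage: pay a recourse (shortage) penalty once demand is realized.
# Chance constraint: the allocation must cover demand with probability >= ALPHA.

UNIT_COST = 1.0        # cost per unit of water reserved up front (assumed)
SHORTAGE_PENALTY = 5.0 # recourse cost per unit of unmet demand (assumed)
ALPHA = 0.95           # required reliability level (assumed)

# Sample average approximation: replace the true demand distribution
# with Monte Carlo scenarios.
scenarios = [random.gauss(100, 15) for _ in range(10_000)]

def expected_total_cost(x):
    # First-stage cost plus the scenario-averaged second-stage recourse cost.
    recourse = sum(max(d - x, 0.0) for d in scenarios) / len(scenarios)
    return UNIT_COST * x + SHORTAGE_PENALTY * recourse

def satisfies_chance_constraint(x):
    # Empirical probability that allocation x covers the realized demand.
    covered = sum(d <= x for d in scenarios) / len(scenarios)
    return covered >= ALPHA

# Grid search over first-stage decisions: keep those feasible under the
# chance constraint, then pick the cheapest.
candidates = [x for x in range(50, 201) if satisfies_chance_constraint(x)]
best = min(candidates, key=expected_total_cost)
print(f"chosen allocation: {best}, expected cost: {expected_total_cost(best):.2f}")
```

Note how the chance constraint binds here: the unconstrained cost optimum sits near the 80% demand quantile, but the 95% reliability requirement pushes the chosen allocation higher, which is exactly the kind of trade-off a deterministic model hides.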



 

 

“Machine Unlearning: An Enterprise Data Redaction Workflow”

David Saranchak
Concurrent Technologies Corporation (CTC)
United States
 

Brief Bio
David Saranchak is a Research Fellow and the Artificial Intelligence & Machine Learning Program Lead at Concurrent Technologies Corporation. He leads research and development of emerging techniques in data analysis, machine learning assurance, and differential privacy for multimodal data applications in enterprise platforms and tactical edge environments. He serves as the President-Elect of the Military Operations Research Society (MORS), an international professional analytic society focused on enhancing the quality of national security decisions. He is also an active member volunteer in the Institute for Operations Research and the Management Sciences (INFORMS), where he is a Certified Analytics Professional and an Analytics Capability Evaluation coach focused on helping organizations improve the performance of their analytical processes.
Previously he was a Lead Data Scientist with Elder Research, where he developed and applied statistical data modeling techniques for national security clients. He enjoyed meeting unique needs through creative analytic tradecraft, using static and streaming data sets. He also extended his team’s strong technical edge by developing and leading training for Elder Research’s Maryland Office that emphasized the technologies best able to meet clients' needs.
Mr. Saranchak has more than a dozen years of technical civil service experience as an Applied Mathematician and Software Engineer, including assignments to the UK and Canada and long-term deployments to Iraq and Afghanistan.
His passion for World War II history motivated him to enlist in the U.S. Marine Corps Reserve. While serving, he obtained two BS degrees, mathematics and physics, from Villanova University. He also earned two MS degrees, applied mathematics and telecommunications, from the University of Maryland, College Park.


Abstract
Individuals are gaining more control over their personal data through recent data privacy laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). One aspect of these laws is the ability to request that a business delete one's information, the so-called “right to be forgotten” or “right to erasure”. These laws have serious implications for companies and organizations that train large, highly accurate deep neural networks (DNNs) on these valuable consumer data sets. The initial training process can consume significant resources and time to reach an accurate solution, so once a solution is achieved, updates to the model are often incremental. Training data can also be distributed or lost, making complete retraining impossible. As such, a redaction request poses complex technical challenges: how can an organization comply with the law while fulfilling core business operations?
DNNs are complex functions, and the relationship between a single data point, the model weights, and the model output probabilities is not fully understood. In some cases, DNNs can leak information about their training data sets in subtle ways. In one type of attack, the membership inference (MI) attack, an attacker can query the model and learn whether a given data record was used in its training, which would be a serious breach of the GDPR and the CCPA.
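A toy sketch of the simplest form of such an attack, confidence thresholding, is shown below. It exploits the tendency of an overfit model to be more confident on records it was trained on; the simulated confidence gap, the threshold, and all names are illustrative assumptions, not the speaker's method:

```python
import random

random.seed(1)

def model_confidence(record, training_set):
    # Stand-in for querying a trained DNN: confidence is inflated on
    # training members (simulating overfitting) and noisy otherwise.
    base = random.uniform(0.5, 0.8)
    return min(base + (0.15 if record in training_set else 0.0), 1.0)

train = set(range(0, 100))      # records the model was trained on
holdout = set(range(100, 200))  # records it never saw

THRESHOLD = 0.75  # attacker's decision boundary (assumed tuned elsewhere)

def infer_membership(record):
    # One query per record: flag "member" if confidence exceeds the threshold.
    return model_confidence(record, train) >= THRESHOLD

tp = sum(infer_membership(r) for r in train)    # members correctly flagged
fp = sum(infer_membership(r) for r in holdout)  # non-members wrongly flagged
print(f"true positives: {tp}/100, false positives: {fp}/100")
```

The gap between the true-positive and false-positive counts is what quantifies the model's leakage; a well-regularized model with no confidence gap would drive the attack toward random guessing.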
In this talk, we introduce a DNN model training and lifecycle maintenance process that establishes how to handle specific data redaction requests and avoid completely retraining the model in certain scenarios. Our new process includes quantifying the MI attack vulnerability of all training data points and identifying and removing those most vulnerable from the training data set. An accurate model is then achieved upon which incremental updates can be performed to redact sensitive data points. We will discuss heuristics learned through experiments that train and redact data from DNNs, including new metrics that quantify this vulnerability and how we verify this redaction.
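The lifecycle described above can be sketched as the following minimal workflow. Every function name and the confidence-based vulnerability score are hypothetical placeholders for illustration only, not CTC's implementation:

```python
# Workflow sketch: (1) score MI vulnerability of each training point,
# (2) drop the most vulnerable before the final training run,
# (3) serve later redaction requests as incremental updates.

def vulnerability_score(record, confidences):
    # Assumption: higher model confidence on a training record makes it
    # an easier MI target; a real system would use a measured attack metric.
    return confidences[record]

def build_model(training_set):
    # Stand-in for DNN training: just remember the set, since the point
    # here is the workflow, not the learner.
    return set(training_set)

def redact(model, record):
    # Incremental update instead of full retraining: remove the record's
    # influence. For this toy "model" that is literal removal.
    model.discard(record)
    return model

# 1. Quantify the MI vulnerability of every training point (toy numbers).
confidences = {"a": 0.99, "b": 0.70, "c": 0.95, "d": 0.60}
scores = {r: vulnerability_score(r, confidences) for r in confidences}

# 2. Remove the K most vulnerable points before final training.
K = 2
most_vulnerable = sorted(scores, key=scores.get, reverse=True)[:K]
training_set = [r for r in confidences if r not in most_vulnerable]
model = build_model(training_set)

# 3. A later redaction request becomes an incremental update.
model = redact(model, "b")
print(sorted(model))
```

The design point the abstract makes is visible even in this toy: because the most attack-prone records never enter the final training run, subsequent redactions can be handled incrementally rather than by retraining from scratch.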


