Team

Emilia Javorsky MD, MPH
Director for Health & Medicine, Physician-Scientist

Emilia is focused on the invention, development and commercialization of new medical therapies. At Massachusetts General Hospital and Harvard Medical School, her research focused on the development of novel biocompatible coolants to treat sleep apnea, as well as device-based treatments for peripheral neuropathy. Previously she worked in the design, conduct and analysis of clinical trials in dermatology. Emilia was a Fulbright-Schuman scholar to the European Union, where she studied methods of enhancing transatlantic research collaborations and emerging public-private partnership models to accelerate medical innovation. She is a TEDx speaker and was honored as part of the Forbes 30 Under 30 Class of 2017 in Healthcare.

Cyrus Hodes
Director and Co-Founder, The AI Initiative

Cyrus is passionate about drastically disruptive technologies, such as artificial intelligence, robotics, nanotech, biotech, genetics, IT and cognitive sciences, as well as their cross-pollination and impacts on society. He is currently leading a robotics (Autonomous Guided Vehicles) startup and a biotech venture. In 2015, Cyrus founded the AI Initiative, which he manages, engaging a wide range of global stakeholders to study, discuss and help shape the governance of AI. He and the AI Initiative have done so, and continue to do so, through various international policy platforms (OECD, HKS forums, the Japanese MIC, the French Parliament, etc.) as well as AI ethics and safety initiatives. Cyrus spearheads several projects using innovative tools (such as the Global Civic Debate and its multilingual collective intelligence platform on the governance of AI) and works on using AI and machine learning to tackle policy issues. He is a Vice President at The Future Society, a 501(c)(3), and is a member of two committees (Policy, and General Principles) of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Cyrus was educated at Sciences Po Paris, where he was later a lecturer, holds an M.A. (Hons) from Paris II University in Defense, Geostrategy and Industrial Dynamics, and an M.P.A. from the Harvard Kennedy School.

Pedro De Abreu
Director for Brain and Cognitive Sciences

Pedro is interested in the intersection of brain, behavior, and technology. He is the founder and CEO of 14x Innovation Group (14X IG), a behavioral insights management consulting firm that helps organizations increase effectiveness through a unique blend of neuroscience, social psychology, and the Harvard Business School case method. Pedro is a co-instructor and Teaching Fellow at Harvard University, where he co-teaches Motivation and Learning Theory. He also facilitates the Management Development Program at the Harvard Graduate School of Education and the interfaculty Mind, Brain, and Behavior thesis workshops for Harvard College seniors. Pedro was named South Carolina’s New Century Scholar by USA Today and the Coca-Cola Foundation, and was a Forbes 30 Under 30 finalist in 2016. He is a winner of the Harvard Leadership in Education Award, a TEDMED delegate, Magellan Scholar, Walker Institute Scholar, City of Columbia Fellow, TEDx speaker, Darla Moore Emerging Leader, and the youngest person ever appointed to the Board of Directors of the Central Midlands Council of Governments. Pedro is the author of an upcoming book on collaboration (Penguin Random House, Fall 2017).

Jessica Cussins
Director for Research, The Future Society

Jessica Cussins is the Director of Research at The Future Society. She graduated in May 2017 with a Master’s degree in Public Policy from the Harvard Kennedy School of Government, concentrating in international and global affairs and in science and technology policy. She chaired The Future Society at HKS in the 2016-17 academic year. While at Harvard, Jessica worked as a Research Assistant at the Program on Science, Technology & Society, and was a Belfer International and Global Affairs Student Fellow working within both the cybersecurity and biosecurity programs. She currently lives in the San Francisco Bay Area and consults on health data and biotech for several technology think tanks. She writes regularly on the ethical, social, and political implications of consequential emerging technologies for outlets including the Los Angeles Times, The Huffington Post, and The Pharmaceutical Journal. She received her BA with Highest Distinction from the University of California, Berkeley.

Eleonore Pauwels

Eleonore Pauwels is a writer and international science policy expert who specializes in the governance of emerging and converging technologies. At the Wilson Center, she is the Director of Biology Collectives within the Science and Technology Innovation Program. Her research focuses on the convergence of transformative technologies such as artificial intelligence, genome editing, digital bio-engineering and automation technologies. She analyzes the promises and perils likely to arise with the development of the Internet of Living Things and future networks of intelligent, connected bio-labs. Her work also fosters the democratization of disruptive health technologies, including AI and genomics, and the inclusion of patients and citizens through participatory health design (her Citizen Health Innovators Project). Eleonore regularly testifies before U.S. and European authorities, including the U.S. Department of State, NAS, NIH, NCI, FDA, the National Intelligence Council, the European Commission and the UN. She is also well versed in communicating complex and novel scientific developments to lay audiences (see her TEDxCERN talk on CRISPR), and her writing has been featured in media outlets such as Nature, The New York Times, The Guardian, Scientific American, Le Monde, Slate and The Miami Herald.


Even in the infancy of AI as an academic discipline, it was recognized that the “reasoning foundations of medical diagnosis and treatment can be most precisely investigated and described in terms of certain mathematical techniques” (Ledley & Lusted, 1959). In these early days, scientists had already appreciated the potential for AI to fundamentally change medical practice. By the 1970s, researchers had developed the pioneering expert system MYCIN, which identified bacteria and recommended antibiotics at a performance level superior to that of physicians. This led to a boom in research into expert systems in medicine, and in 1984, “Readings in Medical Artificial Intelligence: The First Decade” (Clancey & Shortliffe) was published to highlight the progress that had been made. The subsequent decades saw substantial advances in AI methods, widespread adoption of electronic health records and a general data explosion, leading to exponential growth of artificial intelligence in medicine (AIM). Today AIM has broadened far beyond expert systems to encompass everything from predicting patient outcomes to drug discovery, automated medical imaging and beyond.
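To make the expert-system idea concrete, here is a minimal sketch of MYCIN-style production rules in Python; the rules, organisms and certainty factors are illustrative stand-ins, not taken from MYCIN itself.

# Minimal sketch of a rule-based expert system in the spirit of MYCIN.
# The rules and certainty factors below are illustrative only.

def infer(findings):
    """Apply simple IF-THEN rules to a dict of clinical findings."""
    conclusions = []
    if findings.get("gram_stain") == "negative" and findings.get("morphology") == "rod":
        if findings.get("aerobic") is False:
            conclusions.append(("organism is likely Bacteroides", 0.6))
        else:
            conclusions.append(("organism is likely E. coli", 0.7))
    if findings.get("gram_stain") == "positive" and findings.get("morphology") == "coccus":
        conclusions.append(("organism is likely Staphylococcus", 0.5))
    return conclusions

if __name__ == "__main__":
    patient = {"gram_stain": "negative", "morphology": "rod", "aerobic": True}
    for conclusion, certainty in infer(patient):
        print(f"{conclusion} (certainty factor {certainty})")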

Thinking toward the future, Clancey & Shortliffe’s book outlined the key challenges for the “second decade” of AIM: data acquisition, knowledge acquisition and representation, explanation, and the logistics of integration into health systems. While the scope of these challenges has changed dramatically, the core concepts remain highly relevant today. In this era of extraordinary hype, it is especially important to think critically about the challenges facing AIM. We need to ensure that its tremendous potential is realized and that we do not make avoidable mistakes that lead us into an AIM winter. Hence, we would like to research, explore and stimulate discussion around some of the key considerations that are critical to the successful, effective and ethical development of AIM systems:

1. Data Acquisition

Because the data substrate for many AIM applications is human health data, a host of technical, ethical and legal considerations arise.

a. Is the health data a robust and accurate representation of the patient’s clinical reality? Can we unlock these data from their silos? Most EHR systems in widespread use were designed to optimize billing and were integrated into the clinical workflow for that purpose, certainly not for capturing detailed clinical data for mining. Thus, key data quality, integrity, governance and security considerations arise in adapting these datasets for AIM applications (a minimal sketch of such quality checks appears after this list). These considerations also pertain to patient-generated data from non-EHR sources such as apps and wearables.

b. Is it ethical to use, and often monetize, data that was collected in order to provide medical care to patients for AIM purposes? What are the concerns for patient privacy? What are the concerns about information that may be gleaned about an individual’s health? These are crucial questions that need to be explored when thinking about repurposing highly sensitive data that enjoys strict legal protections for uses that may fall outside the original intent.

c. Who should have access to this data, and who owns it? Given the economic interests associated with AIM, health data has become an invaluable currency. We are currently seeing a wave of academic medical centers and EHR providers entering into strategic partnerships with large private companies to pursue AIM applications. This has substantial implications for market competitiveness and raises fundamental questions about data ownership.
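As a concrete illustration of the data-quality screening raised in (a), the following is a minimal sketch of basic validity checks on an EHR-derived table; the field names and plausibility ranges are assumptions chosen for illustration.

# Minimal sketch of data-quality checks for an EHR-derived dataset.
# Field names and plausibility ranges are illustrative assumptions.
import pandas as pd

PLAUSIBLE_RANGES = {
    "age_years": (0, 120),
    "systolic_bp_mmhg": (50, 260),
    "heart_rate_bpm": (20, 250),
}

def quality_report(df):
    """Summarize duplicates, missingness, and out-of-range values."""
    report = {
        "n_rows": len(df),
        "duplicate_visits": int(df.duplicated(subset=["patient_id", "visit_date"]).sum()),
        "missing_fraction": df.isna().mean().round(2).to_dict(),
    }
    out_of_range = {}
    for col, (lo, hi) in PLAUSIBLE_RANGES.items():
        if col in df.columns:
            values = df[col].dropna()
            out_of_range[col] = int((~values.between(lo, hi)).sum())
    report["out_of_range_counts"] = out_of_range
    return report

if __name__ == "__main__":
    visits = pd.DataFrame({
        "patient_id": [1, 1, 2],
        "visit_date": ["2017-01-01", "2017-01-01", "2017-02-03"],
        "age_years": [54, 54, 230],            # 230 is implausible
        "systolic_bp_mmhg": [120, None, 135],  # one missing value
        "heart_rate_bpm": [72, 72, 80],
    })
    print(quality_report(visits))

Checks like these are only a starting point; they catch implausible or missing values but say nothing about whether the record faithfully reflects the patient's clinical reality.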

2. Knowledge Acquisition and Representation

How can we ensure the ontologies forming the foundation of many AIM applications are accurate? And how much do we actually know about human health and disease? Medical textbooks and literature are key sources of knowledge in medicine. However, human biology is an entirely different beast from other, more objective scientific disciplines. While we have had an information explosion in biology, there is still much we do not yet know. And we must look more closely at the knowledge we do have, which we often erroneously assume represents a true reality. The landmark article “Why most published research findings are false” (Ioannidis, 2005) brought this discussion into the foreground. This sentiment was echoed in documentation of the drastic increase in paper retractions (Van Noorden, 2011) and in the inability to reproduce many research findings (Nature Special: Challenges in Irreproducible Research). Publishing and economic pressures to report positive results not only mean that negative work goes unpublished, but also that published studies are often biased toward the positive (and in some cases entirely fraudulent).
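To make this argument concrete, the short calculation below, in the spirit of the model in Ioannidis (2005), computes the fraction of claimed positive findings that are actually true under assumed values for pre-study probability, statistical power, significance level and reporting bias; the specific numbers are illustrative assumptions, not figures from the paper.

# Worked example of why most positive findings can be false: even with
# conventional alpha and power, a low prior probability of true hypotheses
# (plus bias toward reporting positives) drives down the share of claimed
# findings that are real. All numbers below are illustrative assumptions.

def positive_predictive_value(prior, power, alpha, bias=0.0):
    """Probability that a claimed positive finding is true.

    prior: pre-study probability that a tested hypothesis is true
    power: 1 - beta, probability of detecting a true effect
    alpha: type I error rate
    bias:  fraction of would-be negative results reported as positive
    """
    true_pos = prior * (power + bias * (1 - power))
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # Exploratory field: only 1 in 20 tested hypotheses is true.
    print(round(positive_predictive_value(prior=0.05, power=0.8, alpha=0.05), 2))            # ~0.46
    # Add modest bias toward reporting positive results.
    print(round(positive_predictive_value(prior=0.05, power=0.8, alpha=0.05, bias=0.2), 2))  # ~0.16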


3. Explanation

Do we need to understand how an AI system comes to its conclusions? This is especially relevant when thinking about using AI for the diagnosis, treatment and management of patients. If the results and recommendations are demonstrated to be equivalent or superior to the standard of care, is that sufficient? Understanding how conclusions are reached could certainly unlock a wealth of knowledge, but there are deeper questions about our ability to validate and approve AIM systems for clinical use. With the FDA launching a digital health unit, such questions will become vital as regulatory and legal systems begin to explore the space.
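As one concrete flavor of what “explanation” can mean in practice, the sketch below computes permutation feature importance for a classifier trained on synthetic “clinical” features; the features, model and data are illustrative stand-ins, and post-hoc importance scores are only one of many candidate approaches to explanation.

# Minimal sketch of one post-hoc explanation technique: permutation feature
# importance for a classifier trained on synthetic "clinical" features.
# Feature names, model choice and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 12, n)
systolic_bp = rng.normal(130, 18, n)
noise = rng.normal(0, 1, n)
# The outcome depends on age and blood pressure, not on the noise feature.
risk = 0.04 * (age - 60) + 0.03 * (systolic_bp - 130) + rng.normal(0, 1, n)
y = (risk > 0).astype(int)
X = np.column_stack([age, systolic_bp, noise])
feature_names = ["age", "systolic_bp", "noise"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

Whether scores of this kind would satisfy clinicians, patients or regulators is exactly the open question this section raises.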

4. Integration into Health Systems

How will AI transform the clinical workflow and workforce? Historically, this question has been central to the failure of AIM applications. Adoption of EHR systems has addressed only a small part of the problem. There is a chasm between the state of the technology and the state of clinical practice, with technology far outpacing the readiness of clinicians to adopt it. How can we bridge this divide? Similarly, how can we address the AI trust problem? High levels of public distrust of AI and clinicians’ concerns about job security are superimposed on medicine, a field where trust is the central tenet.