Data Ethics in Research
June 10, 2021 @ 12:00 pm - 1:00 pm EDT
Good data science practice: moving towards a code of practice for drug development (May 27th)
There is growing interest in data science and the challenges that could be solved through its application, driven in part by the promise of “extracting value from data”. The pharmaceutical industry is no different in this regard, as reflected in the advances and excitement surrounding data science. Data science brings new perspectives, new methods, new skill sets and the wider use of new data modalities. For example, there is a belief that extracting value from data integrated across multiple sources and modalities, using advances in statistics, machine learning, informatics and computation, can answer fundamental questions. These questions span a variety of themes, including disease understanding (e.g. “precision” medicine, disease endo/pheno-typing), drug discovery (e.g. new targets and therapies), measurement (e.g. multi-omics, digital biomarkers, software as a medical device) and drug development (e.g. dose-exposure-response, efficacy, safety, compliance). By answering these fundamental questions, we can not only increase knowledge and understanding but, more importantly, inform decision making, accelerating drug and medical device development through data-driven prioritisation, precise measurement, optimised trial design and operational excellence. However, alongside the promise of data science there are a number of obstacles to overcome if it is to deliver a positive impact. These obstacles include reaching consensus on the very definition of data science, the relationship between data science and existing fields such as statistics and computing science, what the day-to-day practice of data science should involve, and what constitutes “good” practice.
But the IRB Approved It! Data Science Research Ethics and the Challenges of Inference, Public Data and Consent (June 10th)
Data science, and the related disciplines of machine learning and artificial intelligence, are founded on the assumed availability of massive amounts of data. The scientific and economic justification for collecting and using all that data is deceptively simple: we can infer expensive- and hard-to-know data from cheap and easy-to-know data, and make predictions and automated decisions on the basis of the patterns we find. When that data is about human behavior, the inferential step is ethically fraught because it often involves ubiquitous data (social media, geolocation, biometrics, etc.) being used to predict traits from an entirely different context (race, religion, sexual preference, gender, etc.), typically without knowledge or consent. This is a highly complex ethical challenge, yet our research ethics norms and regulations were written for a different paradigm of scientific research.
Pervasive Data, Elusive Trust: Rethinking Data Ethics for Researchers (July 8th)
Ongoing scandals and public uproar over research that uses pervasive data – information about people generated through digital interaction and available for computational analysis – demonstrate the challenges of defining and demonstrating acceptable, trustworthy digital data research practices. This talk will review problems of trustworthiness in pervasive data research and draw from the history of another research methodology that has struggled with trustworthiness – ethnography – to suggest a way forward: the analytic lenses and researcher practices necessary for establishing trustworthy data science.
It’s the wild, wild west: Lessons learned from IRB members’ risk perceptions toward digital research data (July 15th)
Digital technologies that are prevalent in people’s everyday lives, including smart home devices, mobile apps and social media, increasingly lack regulation governing how user data can be collected, used or disseminated. Multiple disciplines continue to evaluate and seek to understand the potential negative impacts of research involving digital technologies. As more research involves digital data, Institutional Review Boards (IRBs) take on the difficult task of evaluating and determining risks (the likelihood of potential harms) in digital research. Learning more about IRBs’ role in concretizing harm and its likelihood will help us critically examine the current approach to regulating digital research, and has implications for how researchers can reflect on their own data practices. We interviewed 22 U.S.-based IRB members and found that, for the interviewees, “being digital” added risk. Being digital meant increased possibilities of confidentiality breaches, unintended collection of sensitive information, and unauthorized data reuse. At the same time, interviewees found it difficult to pinpoint the direct harms arising from those risks. The ambiguous, messy, and situated contexts of digital research data did not fit neatly into current human subjects research protection protocols.