We increasingly place our trust in algorithms, whether applying for a mortgage or a new job, or making personal health decisions. But what about the security system that uses facial recognition and locks a 55-year-old office custodian out of her night shift? Or the groups of people automatically cropped out of photos on social media? These are the unintended, and often unfair, consequences of data science tools amplified across millions of users. They’re also highly preventable.
This is the lesson that lawyer and epidemiologist M. Elizabeth Karns embeds in every data science and statistics course she teaches in the Department of Statistics and Data Science. Her students will be deciding how to use data in the future, and while bad decision-making in business isn’t new, Karns says it’s the accelerated and aggregated effect of today’s data science applications that’s so dangerous: the decisions of an individual, a team, or even a whole company can instantly affect the lives of millions of people. Moreover, the torrent of new technologies is moving faster than our regulatory systems, leaving a gap in accountability. Even data scientists themselves often don’t know exactly what’s happening inside their algorithms.
Read the full story on the Cornell Chronicle website.