Copyright © 2016 Hail Science

Ethics, Computing, and AI: Perspectives from MIT

The MIT Stephen A. Schwarzman College of Computing will reorient the Institute to bring the power of computing and AI to all fields at MIT; allow the future of computing and AI to be shaped by all MIT disciplines; and advance research and education in ethics and public policy to help ensure that new technologies benefit the greater good.
To support ongoing planning for the new college, Dean Melissa Nobles invited faculty from all five MIT schools to offer perspectives on the societal and ethical dimensions of emerging technologies. This series presents the resulting commentaries — practical, inspiring, concerned, and clear-eyed views from an optimistic community deeply engaged with issues that are among the most consequential of our time.
The commentaries represent diverse branches of knowledge, but they sound common themes, among them the vision of an MIT culture in which all of us are equipped and encouraged to discern the impact and ethical implications of our endeavors.
FOREWORD
Ethics, Computing, and AI
Melissa Nobles, Kenan Sahin Dean and Professor of Political Science
School of Humanities, Arts, and Social Sciences
“These commentaries, representing faculty from all five MIT schools, implore us to be collaborative, foresighted, and courageous as we shape a new college — and to proceed with judicious humility. Rightly so. We are embarking on an endeavor that will influence nearly every aspect of the human future.” Read more >>
INTRODUCTION
The Tools of Moral Philosophy
Caspar Hare, Professor of Philosophy
Kieran Setiya, Professor of Philosophy
School of Humanities, Arts, and Social Sciences
“We face ethical questions every day. Philosophy does not provide easy answers for these questions, nor even fail-safe techniques for resolving them. What it does provide is a disciplined way to think about ethical questions, to identify hidden moral assumptions, and to establish principles by which our actions may be guided and judged. Framing a discussion of the risks of advanced technology entirely in terms of ethics suggests that the problems raised are ones that can and should be solved by individual action. In fact, many of the challenges presented by computer science will prove difficult to address without systemic change.”
Action: Moral philosophers can serve both as teachers in the new College and as advisers/consultants on project teams. Read more >>
WELCOMING REMARKS
A New Kind of Education
Susan Silbey, Chair of the MIT Faculty
Celebration for the MIT Schwarzman College of Computing
28 February 2019
”The college of computing will be dedicated to educating a different kind of technologist. We hope to integrate computing with just about every other subject at MIT so that students leave here with the knowledge and resources to be wiser, more ethically and technologically competent citizens and professionals.” Read more >>
Part I: A Human Endeavor
Computing is embedded in cultural, economic, and political realities.
Computing is Deeply Human
Stefan Helmreich, Elting E. Morison Professor of Anthropology
Heather Paxson, William R. Kenan, Jr. Professor of Anthropology
School of Humanities, Arts, and Social Sciences
“Computing is a human practice that entails judgment and is embedded in politics. Computing is not an external force that has an impact on society; instead, society — institutional structures that organize systems of social norms — is built right into making, programming, and using computers.”
Action: The computational is political; MIT can make that recognition one of the pillars of computing and AI research. Read more >>
When Computer Programs Become Unpredictable
John Guttag, Dugald C. Jackson Professor of Computer Science and Electrical Engineering
School of Engineering
“We should look forward to the many good things machine learning will bring to society. But we should also insist that technologists study the risks and clearly explain them. And society as a whole should take responsibility for understanding the risks and for making human-centric choices about how best to use this ever-evolving technology.”
Action: Develop platforms that enable a wide spectrum of society to engage with the societal and ethical issues of new technology. Read more >>
Safeguarding Humanity in the Age of AI
Bernhardt Trout, Raymond F. Baddour Professor of Chemical Engineering
School of Engineering
“There seem to be two possibilities for how AI will turn out. In the first, AI will do what it is on track to do: slowly take over every human discipline. The second possibility is that we take the existential threat of AI with the utmost seriousness and completely change our approach. This means redirecting our thinking from a blind belief in efficiency to a considered understanding of what is most important about human life.”
Action: Develop a curriculum that encourages us to reflect deeply on fundamental questions: What is justice? How ought I to live? Read more >>
Part II: Community Insights
Shaping ethical technology is a collective responsibility.
The Common Ground of Stories
Mary Fuller, Professor of Literature and Head of the MIT Literature Section
School of Humanities, Arts, and Social Sciences
“Stories are things in themselves, and they are also things to think with. Stories allow us to model interpretive, affective, ethical choices; they also become common ground. Reading about Milton’s angelic intelligences or William Gibson’s ‘bright lattices of logic’ won’t tell us what we should do with the future, but reading such stories at MIT may offer a conceptual meeting place to think together across the diversity of what and how we know.”
Action: Create residencies for global storytellers in the MIT Schwarzman College of Computing. Read more >>
Who’s Calling the Shots with AI?
Leigh Hafrey, Senior Lecturer, Leadership and Ethics
MIT Sloan School of Management
“‘Efficiency’ is a perennial business value and a constant factor in corporate design, strategy, and execution. But in a world where the exercise of social control by larger entities is real, developments in artificial intelligence have yet to yield the ethics by which we might manage their effects. The integrity of our vision for the future depends on our learning from the past and celebrating the fact that people, not artifacts and institutions, set our rules of engagement.”
Action: Adopt a full-on stakeholder view of business in society and the individual in business. Read more >>
In Praise of Wetware
Caroline A. Jones, Professor of Art History
School of Architecture and Planning
“As we enshrine computation as the core of smartness, we would be well advised to think of the complexity of our ‘wet’ cognition, which entails a much more distributed notion of intelligence that goes well beyond the sacred cranium and may not even be bounded by our own skin.”
Action: Before claiming that it is “intelligence” we’ve produced in machines or modeled in computation, we should better understand the adaptive, responsive human wetware — and its dependence on a larger living ecosystem. Read more >>
Blind Spots
David Kaiser, Germeshausen Professor of the History of Science, and Professor of Physics
School of Humanities, Arts, and Social Sciences, and Department of Physics
“MIT has a powerful opportunity to lead in the development of new technologies while also leading careful, deliberate, broad-ranging, and ongoing community discussions about the ‘whys’ and ‘what ifs,’ not just the ‘hows.’ No group of researchers, flushed with the excitement of learning and building something new, can overcome the limitations of blind spots and momentum alone.”
Action: Create ongoing forums for brainstorming and debate; we will benefit from engaging as many stakeholders as possible. Read more >>
Assessing the Impact of AI on Society
Lisa Parks, Professor of Comparative Media Studies
School of Humanities, Arts, and Social Sciences
“Three fundamental societal challenges have emerged from the use of AI, particularly for data collection and machine learning. The first challenge centers on this question: Who has the power to know about how AI tools work, and who does not? A second challenge involves learning how AI tools intersect with international relations and the dynamics of globalization. Beyond questions of knowledge, power, and globalization, it is important to consider the relationship between AI and social justice.”
Action: Conduct a political, economic, and materialist analysis of the relationship of AI technology to global trade, governance, natural environments, and culture. Read more >>
Clues and Caution for AI from the History of Biomedicine
Robin Wolfe Scheffler, Leo Marx Career Development Professor in the History and Culture of Science and Technology
School of Humanities, Arts, and Social Sciences
“The use of AI in the biomedical fields today deepens longstanding questions raised by the past intractability of biology and medicine to computation, and by the flawed assumptions that were adopted in attempting to make them so. The history of these efforts underlines two major points: ‘Quantification is a process of judgment and evaluation, not simple measurement’ and ‘Prediction is not destiny.’”
Action: First, understand the nature of the problems we want to solve — which include issues not solvable by technical innovation alone. Let that knowledge guide new AI and technology projects. Read more >>
The Environment for Ethical Action
T.L. Taylor, Professor of Comparative Media Studies
School of Humanities, Arts, and Social Sciences
“We can cultivate our students as ethical thinkers, but if they aren’t working in (or studying in) structures that support advocacy, interventions, and pushing back on proposed processes, they will be stymied. Ethical considerations must include a sociological model that focuses on processes, policies, and structures and not simply individual actors.”
Action: Place a commitment to social justice at the heart of the MIT Schwarzman College of Computing. Read more >>
Biological Intelligence and AI
Matthew A. Wilson, Sherman Fairchild Professor of Neuroscience
School of Science and the Picower Institute
“An understanding of biological intelligence is relevant to the development of AI, and the effort to develop artificial general intelligence (AGI) magnifies its significance. AGIs will be expected to conform to standards of behavior… Should we hold AIs to the same standards as the average human? Or will we expect AIs to perform at the level of an ideal human?”
Action: Conduct research on how innate morality arises in human intelligence, as an important step toward incorporating such a capacity into artificial intelligences. Read more >>
Machine Anxiety
Bernardo Zacka, Assistant Professor of Political Science
School of Humanities, Arts, and Social Sciences
“To someone who studies bureaucracy, the anxieties surrounding AI have an eerily familiar ring. So too does the excitement. For much of the 20th century, bureaucracies were thought to be intelligent machines. As we examine the ethical and political implications of AI, there are at least two insights to draw from bureaucracy’s history: that it is worth studying our anxieties whether or not they are realistic, and that in doing so we should not write off human agency.”
Action: When societies undergo deep transformations, envisioning a future that is both hopeful and inclusive is a task that requires moral imagination, empathy, and solidarity. We can study the success of societies that have faced such challenges well. Read more >>
Part III: A Structure for Collaboration
Thinking together is powerful.
Bilinguals and Blending
Hal Abelson, Class of 1922 Professor of Electrical Engineering and Computer Science
School of Engineering
“When we study society today, we can no longer separate humanities — the study of what’s human — from computing. So, while there’s discussion under way about building bridges between computing and the humanities, arts, and social sciences, what the College of Computing needs is blending, not bridging. MIT’s guideline should be President Reif’s goal to ‘educate the bilinguals of the future’ — experts in many fields who are also skilled in modern computing.”
Action: Develop approaches for joint research and joint teaching. Read more >>
A Dream of Computing
Fox Harrell, Professor of Digital Media and Artificial Intelligence
School of Humanities, Arts, and Social Sciences + Computer Science and Artificial Intelligence Lab
“There are numerous perspectives on what computing is: some people focus on theoretical underpinnings, others on implementation, others still on social or environmental impacts. These perspectives are unified by shared characteristics, including some less commonly noted: computing can involve great beauty and creativity.”
Action: “We must reimagine our shared dreams for computing technologies as ones where their potential social and cultural impacts are considered intrinsic to the engineering practices of inventing them.” Read more >>
A Network of Practitioners
Nick Montfort, Professor of Media Studies
School of Humanities, Arts, and Social Sciences
“Computing is not a single discipline or even a set of disciplines; it is a practice. The new College presents an opportunity for many practitioners of computing at MIT.”
Action: Build a robust network with many relevant types of connections, not all of them through a single core. Read more >>
Two Commentaries
Susan Silbey, Chair of the MIT Faculty
Goldberg Professor of Humanities, Professor of Sociology and Anthropology, and Professor of Behavioral and Policy Sciences
School of Humanities, Arts, and Social Sciences and MIT Sloan School of Management
How Not To Teach Ethics — “Rather than thinking about ethics as a series of anecdotal instances of problematic choice-making, we might think about ethics as participation in a moral culture, and then ask how that culture supports or challenges ethical behavior.”
Forming the College — “The Stephen A. Schwarzman College is envisioned to be the nexus connecting those who advance computer science, those who use computational tools in specific subject fields, and those who analyze and write about digital worlds.” Read more >>
Ethical AI by Design
Abby Everett Jaques, Postdoctoral Associate, Philosophy
School of Humanities, Arts, and Social Sciences
”We are teaching an ethical protocol, a step-by-step process that students can use for their own projects. In this age of self-driving cars and machine learning, the questions feel new, but in many ways they’re not. Philosophy offers powerful tools to help us answer them.” Read more >>
Series prepared by MIT SHASS Communications
Office of the Dean, MIT School of Humanities, Arts, and Social Sciences
Series Editors: Emily Hiestand, Kathryn O’Neill
