Bridging philosophy and AI to explore computing ethics

During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks himself in the research he leads with the Computer Assisted Programming Group at MIT:

“How do we make sure that a machine does what we want, and only what we want?”

At this moment, what some consider the golden age of generative AI, this may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humankind itself.

He begins to retell the Greek myth of King Midas, the monarch who was granted the godlike power to transform anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into gilded stone.

“Be careful what you ask for because it might be granted in ways you don’t expect,” he says, cautioning his students, many of them aspiring mathematicians and programmers.

Digging into MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming, from the 1970s Pygmalion machine that required incredibly detailed cues to the late-’90s computer software that took teams of engineers years and an 800-page document to program.

While remarkable in their time, these processes took too long to reach users. They left no room for spontaneous discovery, play, and innovation.

Solar-Lezama talks about the risks of building modern machines that don’t always respect a programmer’s cues or red lines, and that are as capable of doing harm as of saving lives.

Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles and weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument, which leans on the philosophical theory of utilitarianism, questions the assumptions underlying technical advances and considers multiple valid viewpoints. Roesler explains, “Roughly, according to utilitarianism, the moral thing to do brings about the most good for the greatest number of people.”

MIT philosopher Brad Skow, with whom Solar-Lezama developed and is team-teaching the course, leans forward and takes notes.

A class that demands technical and philosophical expertise

Ethics of Computing, offered for the first time in Fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline’s lens for examining the broader implications of today’s ethical issues, while Solar-Lezama, who is also the associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, offers perspective through his.

Skow and Solar-Lezama attend one another’s lectures and adjust their follow-up class sessions in response. Introducing the element of learning from one another in real time has made for more dynamic and responsive class conversations. Each week, a recitation led by graduate students from philosophy or computer science breaks down the course content through lively discussion of the week’s topic.

“An outsider might think that this is going to be a class that will make sure that these new computer programmers being sent into the world by MIT always do the right thing,” Skow says. However, the class is intentionally designed to teach students a different skill set.

Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors, knowing they could do something more profound than simply lecture.

“Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There aren’t other classes at MIT that place both side-by-side,” Skow says.

That’s exactly what drew senior Alek Westover to enroll. The math and computer science double major explains, “A lot of people are talking about how the trajectory of AI will look in five years. I thought it was important to take a class that will help me think more about that.”

Westover says he’s drawn to philosophy because of an interest in ethics and a desire to distinguish right from wrong. In math classes, he’s learned to write down a problem statement and receive instant clarity on whether he’s successfully solved it or not. However, in Ethics of Computing, he has learned how to make written arguments for “tricky philosophical questions” that may not have a single correct answer.

For example, “One problem we could be concerned about is, what happens if we build powerful AI agents that can do any job a human can do?” Westover asks. “If we are interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?”

There’s no easy answer, and Westover assumes he’ll encounter many other dilemmas in the workplace in the future.

“So, is the internet destroying the world?”

The semester began with a deep dive into AI risk, or the notion of “whether AI poses an existential risk to humanity,” unpacking free will, the science of how our brains make decisions under uncertainty, and debates about the long-term liabilities and regulation of AI. A second, longer unit zeroed in on “the internet, the World Wide Web, and the social impact of technical decisions.” The end of the term looks at privacy, bias, and free speech.

One class topic was devoted to provocatively asking: “So, is the internet destroying the world?”

Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these types of issues is precisely why the self-described “technology skeptic” enrolled in the course.

Growing up with a mom who is hearing impaired and a little sister with a developmental disability, Ogoe became the default family member whose role it was to call providers for tech support or program iPhones. She leveraged her skills into a part-time job fixing cell phones, which paved the way for her to develop a deep interest in computation, and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics of how the technology she was helping to program affected consumers.

“Everything I’ve done with technology is from the perspective of people, education, and personal connection,” Ogoe says. “This is a niche that I love. Taking humanities classes around public policy, technology, and culture is one of my big passions, but this is the first course I’ve taken that also involves a philosophy professor.”

The following week, Skow lectures on the role of bias in AI, and Ogoe, who is entering the workforce next year but plans to eventually attend law school to focus on regulating related issues, raises her hand four times to ask questions or share counterpoints.

Skow digs into examining COMPAS, a controversial AI software that uses an algorithm to predict the likelihood that people accused of crimes would go on to re-offend. According to a 2018 ProPublica article, COMPAS falsely flagged Black defendants as future criminals at twice the rate it did white defendants.

The class session is dedicated to determining whether the article warrants the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories on fairness:

“Substantive fairness is the idea that a particular outcome might be fair or unfair,” he explains. “Procedural fairness is about whether the procedure by which an outcome is produced is fair.” A variety of conflicting criteria of fairness are then introduced, and the class discusses which are plausible, and what conclusions they warrant about the COMPAS system.
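The way such fairness criteria can conflict is easy to see with a toy calculation. The sketch below uses small synthetic groups (hypothetical numbers, not the actual COMPAS data) with different base rates of re-offense: the imagined classifier’s precision is comparable across the two groups, yet its false-positive rate for one group is more than double the other’s, echoing the kind of tension ProPublica reported and the class debates.

```python
# A minimal illustration (synthetic data, not COMPAS): two common fairness
# criteria can pull apart. A risk score can be similarly "predictive" for
# both groups (comparable precision) while falsely flagging one group's
# non-re-offenders at more than twice the rate of the other's.

def rates(labels, preds):
    """Return (false_positive_rate, precision) for one group.

    labels: 1 = actually re-offended, 0 = did not
    preds:  1 = flagged as likely to re-offend, 0 = not flagged
    """
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return fpr, precision

# Hypothetical groups with different base rates of re-offense.
group_a_labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
group_a_preds  = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]  # flags 5 of 10
group_b_labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
group_b_preds  = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]  # flags 2 of 10

fpr_a, prec_a = rates(group_a_labels, group_a_preds)  # FPR 1/3, precision 0.6
fpr_b, prec_b = rates(group_b_labels, group_b_preds)  # FPR 1/8, precision 0.5
```

Here the precisions (0.6 vs. 0.5) are close, so the score looks roughly equally reliable for both groups, yet group A’s false-positive rate is more than double group B’s. Whether that counts as “unfair” depends on which criterion one adopts, which is precisely the question the class session takes up.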

Later on, the two professors go upstairs to Solar-Lezama’s office to debrief on how the exercise went that day.

“Who knows?” says Solar-Lezama. “Maybe five years from now, everybody will laugh at how people were worried about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond media discourse and getting to the bottom of thinking rigorously about these issues.” 
