guest post by Peter O’Meara ’19
A new age could be dawning for philosophy beyond the Ivory Tower. From A.I. to design to data, tech leaders are expressing a desire to see philosophy incorporated into the development process for products and services. Given the human connections many innovations seek to simulate, knowledge of ethics, to name just one area, is coming to be appreciated by companies of many kinds. Yet there are few roadmaps for translating the skills philosophy majors develop as undergraduates into the technology careers that need them.
SXSW 2019, the annual ideas festival in Austin, saw a congregation of tech titans and start-ups, each vying to reinvent the relationship between humans and innovation. The Google Home Mini, for example, was likened to a pebble, while some social networks were compared to a hearth, aiming to bring communities together. Such designs, chameleon-like in their imitation, raise questions about their impact on behavior and about what it means to be human. “We should ask philosophically ‘what makes us human?’ ‘Can technology try to be human in that way?’ ‘What is this good experience we are trying to design?’” says Yihyun Lim, MIT Design Lab director and one of SXSW’s many speakers. “As we are developing tech, if we remember what the core value is, that can direct where tech will go in the future.” Philosophy grads can be shepherds on that journey.
A.I. is not exempt from similar considerations: in discussions ranging from totalitarian code to autonomous vehicles to musical instruments, the need for ethics is embraced. Government leaders are already seeing A.I. wielded for deplorable ends, and it is becoming clear that new voices are needed alongside programmers. Josh Marcuse, executive director of the Defense Innovation Board, warns SXSW, “not all nations share our values, and the world’s authoritarian regimes compel engineers to create AI for repression.” On an ethical note, he adds, “If you are a consequentialist, you care more about what you are emphasizing than what you are explaining. In autonomous cars, you are asking how many thousands of people will die? Why should explainability be the standard?” Ultimately, he calls for integration, declaring, “We need to think of diversity in a broader context. Philosophers and engineers working together, working in teams.”
With its unintended consequences, data has repeatedly demonstrated its capacity to harm as much as it helps. Facial recognition systems have been shown to discriminate by race, while surveillance data is frequently collected without regard for the inferences it enables. Josh Klein, CEO of H4X Industries LLC and an SXSW speaker, argues that data can be used for good but often isn’t, due to human laziness. This is a bad excuse, he argues, and while change is difficult, “we ought to endeavor to improve data on people such that ethics are met, and businesses still thrive.” Klein further remarks, “treating people like robots does not equal profit. If you get data on toothpaste wrong, toothpaste doesn’t have a bad day. If we don’t face biases, we don’t create large-scale positive social change.”
While science and technology express a desire to incorporate philosophical rigor, a meaningful roadmap for integration does not yet exist. Some, like Klein, have offered isolated, concrete suggestions, but there are few tangible initiatives. Jake Silberg and James Manyika of McKinsey & Company reiterate this priority for collaboration between tech and philosophy. In their piece “Tackling Bias in Artificial Intelligence (and in Humans),” bias in A.I. is described as an issue only addressable through a multidisciplinary approach. “Business leaders can also help support progress by making more data available to researchers and practitioners across organizations working on these issues, while being sensitive to privacy concerns and potential risks,” they argue. “More progress will require interdisciplinary engagement, including ethicists, social scientists, and experts who best understand the nuances of each application area in the process.” Several routes could promote this participation. Universities could offer bachelor’s degrees with emphases in particular areas of tech or the philosophy of science. Internships with organizations and think tanks that facilitate discussion across disciplines could be created. Philosophers working alongside programmers, with both parties in decision-making or influencing roles, could also drive greater integration. And panels focusing on empathy and bias, areas philosophy is adept at reflecting on, could be held regularly to evaluate current code and products.
Philosophy’s role in tech appears unquestionable, and the attitudes of the industry’s leaders are a welcome sign for those feeling trapped in the Ivory Tower or worried their degree has limited use.
Peter O’Meara holds a philosophy degree from the University of Puget Sound outside Seattle and has studied multiple coding languages. He can be reached via LinkedIn and at email@example.com.