Talk on campus: “Materialized Oppression in Medical Tools and Technologies”

Philosophy professor Sam Liao will be giving a talk organized by the Bioethics Club. The talk will be on Wednesday, October 26, 2022, at 6pm in Wyatt Hall 201. The title of the talk is “Materialized Oppression in Medical Tools and Technologies.” Here is the abstract for the talk:

It is well known that racism is encoded into the social practices and institutions of medicine. Less well known is that racism is encoded into the material artifacts of medicine. We argue that many medical devices are not merely biased, but materialize oppression. An oppressive device exhibits a harmful bias that reflects and perpetuates unjust power relations. Using pulse oximeters and spirometers as case studies, we show how medical devices can materialize oppression along various axes of social difference, including race, gender, class, and ability. Our account draws on political philosophy and cognitive science to give a theoretical basis for understanding materialized oppression, explaining how artifacts encode and carry oppressive ideas from the past into the present and future. Oppressive medical devices present a moral aggregation problem. To remedy this problem, we suggest redundantly layered solutions that are coordinated to disrupt the reciprocal causal connections between the attitudes, practices, and artifacts of oppressive systems.

Talk on campus: “Artificial Intelligence: Value Alignment and Misalignment”

Puget Sound Philosophy professors Ariela Tubert and Justin Tiehen will be giving a talk on ethics and artificial intelligence as part of the Math and Computer Science seminar series. The talk will be on Monday, October 24, 2022, at 4pm in Thompson Hall 391. The title of the talk is “Artificial Intelligence: Value Alignment and Misalignment.” Here is the abstract for the talk:

In discussions of artificial intelligence, the problem of value alignment concerns how to make sure that the intelligent machines we build are aligned with our human values: in the short run, this includes creating AI that does not perpetuate injustice; in the long run, it means building safe AI that does not pose an existential threat to human life. But, as we will argue, part of our own human intelligence involves our ability to transform our own values, and so to misalign the values we hold at one point in time with the values we hold at another. If this is right, then an AI fully able to match human intelligence would need to be a kind of value misalignment machine, in which case it threatens to undermine the project of value alignment.

Ember Reed ’23: Summer Research Project

Profile of Ember Reed ’23

Ember Reed ’23 worked alongside philosophy professor Justin Tiehen on a summer research project that focused on applying the arguments surrounding universal fine-tuning to the history of nuclear close calls. (For more information on Summer Research Grants in Arts, Humanities, and Social Sciences, look here!) 

Here is Ember’s own description of the project:

Illustration by Ember Reed ’23

Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence and the analysis of certain forms of existential risk employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference from universal fine-tuning to God’s existence, a similar many worlds objection undermines the inference from the absence of a global nuclear catastrophe in our world to the conclusion that the historical risk of such a catastrophe has been low. A version of the fine-tuning argument applied to nuclear risk, the Nuclear Fine-Tuning Argument, uses the set of nuclear close calls to show that:
1) Conventional explanations fail to adequately explain how we have survived thus far, and
2) The existence of many worlds provides an adequate explanation.
This is because, if there are many worlds, observers are far more likely to find themselves reflecting on a world that has not suffered a global nuclear catastrophe than on one that has. This selection bias results from the catastrophic nature of such an event. The argument extends to all global catastrophic risks that both A) have been historical threats and B) would result in a significantly lower global population.
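To make this selection effect concrete, here is a minimal simulation sketch in Python. It is not part of Ember’s paper: the per-year risk figures, the 77-year window, and the number of worlds are all arbitrary assumptions chosen purely for illustration.

```python
import random

random.seed(0)  # reproducible illustration

YEARS = 77          # roughly 1945 to the present (assumption for illustration)
N_WORLDS = 100_000  # hypothetical number of worlds per hypothesis (assumption)

def fraction_surviving(p_catastrophe: float) -> float:
    """Fraction of simulated worlds in which no catastrophe ever occurs."""
    survivors = sum(
        all(random.random() >= p_catastrophe for _ in range(YEARS))
        for _ in range(N_WORLDS)
    )
    return survivors / N_WORLDS

# Compare a low-risk and a high-risk hypothesis. Under both, observers who
# can look back on history exist only in surviving worlds, so every such
# observer sees the same spotless record of zero catastrophes.
for p in (0.001, 0.02):
    print(f"per-year risk {p:.3f}: {fraction_surviving(p):.3f} of worlds survive")
```

Even at an assumed 2% per-year risk, roughly a fifth of the simulated worlds make it through all 77 years, and because observers can only ever find themselves in surviving worlds, the record they see (zero catastrophes) is the same under both hypotheses; on this picture, our survival so far is weak evidence that the underlying risk has been low.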

Ember Reed ’23 presenting at the Summer Quest poster session on campus

We are so proud of Ember for what they’ve accomplished this summer! For more information on Summer Quest and other summer research projects, click here.