Scientists must recognize social inequality as a foundational ecological concern
For too long, ecologists and evolutionary biologists have overlooked the impacts of historical and ongoing social inequalities on their field, limiting the possibilities for a justice-oriented approach to conservation and restoration.
Reviewed by Penny Sun
Introduction
Cities are important ecosystems shaped by dynamic and interdependent biological, physical, and social influences. However, Schell et al. note that few studies link urban ecological and evolutionary research to research on social inequality. They argue that integrating these disciplines is essential because human-created systems of power have uneven impacts on non-human ecosystems, and because the unequal distribution of green spaces and "blue spaces," along with harmful practices, ultimately affects human health and well-being.
Historic and current social inequality is a particularly significant factor in urban ecology and evolution. Mechanisms of structural racism and classism including redlining and gentrification result from, and sustain, unequal distribution of resources and power. Unequal representation and power in decision-making impact the entirety of urban management, including development, governance, and infrastructure. The physical manifestations of these social inequalities also influence the distribution of biodiversity and evolutionary stressors, which affects the equilibrium of urban plant, animal, and microbe communities. Thus, research on ecological and evolutionary outcomes in urban settings must incorporate a social and environmental justice lens to adequately account for the drivers behind environmental change and to advance goals of equitable urban conservation and climate resilience.
Dr. Christopher J. Schell is an Assistant Professor in the UC Berkeley Department of Environmental Science, Policy, and Management. Dr. Karen Dyson is an urban ecologist at the University of Washington, the founder and director of Research and Design for Integrated Ecology, and a Senior Scientist at Dendrolytics. Dr. Tracy L. Fuentes is a terrestrial ecologist, botanist, and urban plant ecologist at the University of Washington. Dr. Simone Des Roches is a research scientist at the University of Washington. Dr. Nyeema C. Harris is an Associate Professor of Wildlife and Land Conservation at the Yale School of the Environment. Dr. Danica Sterud Miller is an Associate Professor in the Culture, Arts and Communication division of the School of Interdisciplinary Arts and Sciences at the University of Washington. Dr. Cleo A. Woelfle-Erskine is an Assistant Professor at the University of Washington School of Marine & Environmental Affairs. Dr. Max R. Lambert is the Aquatic Research Section Manager at the Washington State Department of Fish & Wildlife.
Methods and Findings
To examine several hypotheses, the authors synthesize current cross-disciplinary findings on the socio-ecological implications of wealth disparities in cities, the impact of structural racism on urban structures and ecology, and the need for justice-oriented approaches to urban conservation.
One of the key hypotheses is that household and neighborhood wealth, specifically median household income, correlates positively with urban biodiversity. This "luxury effect" is believed to occur because humans with greater resources available for non-essential needs are more likely to facilitate the growth and abundance of plant species in their neighborhoods. The luxury effect scales from the household to the neighborhood and city level, with wealthier residential neighborhoods generally having more vegetation and canopy cover, greater plant diversity, and more public park space than less affluent neighborhoods. Importantly, the distribution of plants within cities is inversely correlated with the concentration of heat and air pollution, resulting in urban heat islands and greater risk of exposure to air pollutants in lower-income neighborhoods.
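As a purely illustrative aside, the correlation at the heart of the luxury effect can be pictured with a few lines of analysis code. The sketch below assumes a hypothetical neighborhood-level table with income and plant-richness columns; the file and column names are placeholders, not data from the studies the authors review.

```python
# Minimal sketch of a luxury-effect test on a hypothetical dataset with one
# row per neighborhood. The file and column names are assumptions for
# illustration only.
import pandas as pd
from scipy.stats import spearmanr

neighborhoods = pd.read_csv("neighborhood_survey.csv")  # hypothetical file

# Rank-based correlation between wealth and biodiversity; a positive
# coefficient is consistent with the luxury effect described above.
rho, p_value = spearmanr(
    neighborhoods["median_household_income"],
    neighborhoods["plant_species_richness"],
)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```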
Wealth alone, however, does not completely predict urban ecosystem patterns. Structural racism, community norms, and local policies are also predictors of urban socio-ecological patterns. Residential segregation policies based on racial prejudice, including redlining, have well-documented, measurable, and sustained harmful effects on urban ecological patterns that persist after the official policies are dissolved. As a consequence, ecological and evolutionary stressors, including heat, pollution exposure, and risk of zoonotic disease spread, are distributed not only along economic lines but also along racial and ethnic lines, falling disproportionately on minoritized populations. Thus, the authors argue that urban conservation plans need to be tailored to the historical and contextual needs of the affected communities, rather than applied uniformly across and within cities.
The authors note that further research is necessary to better articulate the relationship between systemic racism, ecology, and evolution, and to capture the intersectional effects of structural racism and classism on evolutionary outcomes. In addition, the authors identify a need to center Black, Indigenous, Latinx, and non-white immigrant communities in ecological and evolutionary research and justice movements because of their disproportionate vulnerability to the climate crisis and environmental exposure risk. These racialized groups experience the duality of environmental harm and social harm in public spaces, including state-sanctioned police brutality. Moreover, communities of color possess distinct environmental rights and relationships, cultural knowledge, and effective practices for ecological revitalization that have been historically, and detrimentally, excluded from urban environmental decision-making.
Conclusions
Ecologists, biologists, and environmentalists must expand the scope of their research and practice to include a social justice lens. Economic opportunity, public infrastructure, affordable housing, access to healthcare, and voting rights are all powerful levers for promoting environmental justice, conservation, and local stewardship of urban ecosystems.
Centering social inequity in ecological and evolutionary research also enables the equitable distribution of conservation and restoration resources, and ultimately of urban biodiversity, according to community need. Researchers have a responsibility to integrate justice into the research process itself. This includes involving local communities in knowledge generation, increasing access to decision-making, and ending the exploitation of community labor to produce academic discourse.
As the urgency of climate change grows, it is more important than ever to actively and radically dismantle systems of racial and economic oppression, within cities and outside their borders. Environmental and evolutionary biology research requires a thorough re-understanding and integration of the social factors impacting ecosystems to advance equitable urban resilience.
Algorithms Can Replicate or Remedy Racial Biases in Healthcare Resource Allocation
A healthcare algorithm trained on cost data to predict patients’ health risk scores was found to exhibit algorithmic bias, underrating the severity of Black patients’ health conditions relative to those of their white counterparts and leading to the under-provision of health care to Black patients.
Reviewed by Penny Sun
Introduction
Obermeyer et al. note both the growing attention to potential racial and gender biases within algorithms and the difficulty of obtaining access to real-world algorithms, including the raw data used to design and train them, in order to understand how and why bias could appear in them. This study is important because the authors obtained access to the inputs, outputs, and real-world outcomes of a health care algorithm that performs a widely used function within the healthcare sector. Further, the algorithm is broadly representative of the type of logic used by algorithms in other social sectors. In particular, it identifies which patients to recommend for a care management program in which they will receive additional resources. The algorithm simplifies this task to identifying the patients with the greatest care needs: patients in the top 3 percent of predicted need automatically qualify for entry into the program, while those in the top 45 percent are assessed for entry by their primary care physician.
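To make the two-tier referral rule concrete, a minimal sketch of that logic might look like the following; the risk scores are random stand-ins and the function is illustrative, not the manufacturer’s implementation.

```python
# Illustrative sketch (not the vendor's code) of the two-threshold referral
# logic described above: patients in the top 3 percent of predicted need are
# auto-enrolled, and those in the top 45 percent are flagged for physician
# review.
import numpy as np

def triage(risk_scores: np.ndarray) -> list[str]:
    """Map each patient's risk score to a referral decision."""
    auto_cutoff = np.percentile(risk_scores, 97)    # top 3 percent of scores
    review_cutoff = np.percentile(risk_scores, 55)  # top 45 percent of scores
    decisions = []
    for score in risk_scores:
        if score >= auto_cutoff:
            decisions.append("auto-enroll")
        elif score >= review_cutoff:
            decisions.append("physician review")
        else:
            decisions.append("no referral")
    return decisions

# Example with random stand-in scores for 1,000 patients.
rng = np.random.default_rng(0)
print(triage(rng.normal(size=1000))[:10])
```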
As prominent researchers at the nexus of machine learning and health, the researchers were able to convince the manufacturer of this algorithm, a leader in the field, to consider changing its algorithm, with the hope that this may change the norms within the entire sector.
Methods and Findings
Obermeyer et al. collected input data for all primary care patients enrolled in risk-based contracts at a large academic hospital from 2013 to 2015. They defined the population of Black and white, non-Hispanic patients based on patient self-identification. The researchers also used outcomes from electronic health records to assess patients’ health needs and insurance claims to assess patients’ costs, drawing on all diagnoses, key quantitative laboratory studies, vital signs, utilization, outpatient visits, emergency visits, hospitalizations, and health care costs.
Primary findings:
Based on the number of comorbid conditions and the severity of chronic disease markers, Black patients were substantially sicker than white patients assigned the same level of predicted risk by the algorithm. Thus, if the algorithm had identified high-risk patients solely on the basis of health needs, significantly more Black patients would have been included in the care management program.
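The comparison behind this finding can be sketched as follows, assuming a hypothetical patient table with a risk score, self-identified race, and a count of active chronic conditions; this is not the authors’ analysis code.

```python
# Sketch of the comparison described above: at the same predicted risk
# percentile, how many chronic conditions do Black versus white patients
# carry? The file and column names are hypothetical placeholders.
import pandas as pd

patients = pd.read_csv("patient_risk_scores.csv")  # hypothetical file
patients["risk_percentile"] = pd.qcut(patients["risk_score"], 100, labels=False)

# Mean number of active chronic conditions by race within each risk percentile.
burden_by_race = (
    patients
    .groupby(["risk_percentile", "race"])["n_chronic_conditions"]
    .mean()
    .unstack("race")
)

# A persistent positive gap (Black minus white) at equal risk scores would
# indicate that the score understates Black patients' health needs.
print((burden_by_race["Black"] - burden_by_race["White"]).describe())
```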
Previous health care costs were the driving force behind Black patients’ lower rate of entry into the care management program. For reasons the study does not fully resolve, the gap between care needed and care received was significantly larger among Black patients. Thus, a seemingly neutral factor, previous health care spending, becomes a racially biased one because of external social conditions. This demonstrates the difficulty of “problem formulation” in data science: how to turn complex, interactive, and vague social conditions into a concrete, measurable variable in a dataset.
The study explores other ways to define the measure the algorithm is trained to predict when ranking patients’ health risks: total costs alone, avoidable costs (based on emergency visits and hospitalizations), and health needs (based on the number of chronic conditions). All three options predict patient outcomes fairly well, but using health needs places almost twice as many Black patients in the highest-risk group as using total costs alone.
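A rough sketch of this comparison of candidate training targets, under assumed data and a simple stand-in model, might look like the code below; the file, feature columns, and choice of linear regression are illustrative assumptions rather than details from the study.

```python
# Sketch of the label-choice comparison described above: train the same kind
# of model on three candidate targets and compare the share of Black patients
# in the resulting highest-risk group. Data and model are stand-ins.
import pandas as pd
from sklearn.linear_model import LinearRegression

patients = pd.read_csv("training_data.csv")  # hypothetical file
# Hypothetical predictor columns, assumed to be numerically encoded.
features = patients[["age", "sex", "n_prior_visits", "n_prior_medications"]]

candidate_labels = {
    "total_cost": patients["total_cost"],
    "avoidable_cost": patients["avoidable_cost"],      # emergency visits + hospitalizations
    "health_need": patients["n_chronic_conditions"],   # active chronic conditions
}

for name, label in candidate_labels.items():
    model = LinearRegression().fit(features, label)
    patients["score"] = model.predict(features)
    top_risk = patients.nlargest(int(0.03 * len(patients)), "score")  # top 3 percent
    share_black = (top_risk["race"] == "Black").mean()
    print(f"{name}: {share_black:.1%} of the highest-risk group is Black")
```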
Doctor judgment can marginally increase the number of Black patients who make it into the care management program compared to the cost-based algorithm. But an algorithm adjusted to take health needs into consideration is even better than doctors at identifying high-risk Black patients. Thus, an improved algorithm has greater potential to raise the share of Black patients rated as “high-risk” than reliance on individual doctors’ judgment.
The manufacturer of this algorithm independently confirmed Obermeyer et al.’s findings. Working together, the researchers and manufacturer demonstrated that adjusting the algorithm to take health needs into account reduces racial bias in outcomes by 84%.
Conclusions
Obermeyer et al. recommend care in identifying and defining the measures algorithms are trained to predict, to ensure that an algorithm draws on input and output data that are truly relevant to health outcomes. Although this seems difficult and costly, the private sector has demonstrated that it is possible, and the additional labor lies mainly in validating the conceptual relationships within the algorithm rather than in the statistical technique it relies on. Given the high stakes in the health and social sectors, this kind of investment is necessary and can yield substantial benefits while minimizing harm.
Having begun a partnership with the manufacturer of this algorithm, Obermeyer et al. hope to find solutions to this type of error together. With combined leadership from academia and industry, they hope their findings can change the norms used to create algorithms within the healthcare sector and the wider social sectors.
Healthcare Algorithms and Racial Bias
An algorithm designed to predict health care costs as a proxy for health needs critically underestimates the needs of Black patients, with life-threatening consequences.
Reviewed by Becky Mer
Introduction
This article addresses the growing public concern regarding the automation of racial discrimination through digital tools and technology. Throughout the paper, the author, Dr. Ruha Benjamin, focuses her discussion on a notable publication by Obermeyer et al. entitled, “Dissecting racial bias in an algorithm used to manage the health of populations.”
Unlike most researchers, who lack access to proprietary algorithms, Obermeyer et al. completed one of the first studies to examine the training data, algorithm, contextual data, and outputs of one of the largest commercial tools used by the health insurance industry. This tool allows insurers to identify patients who need increased attention before their conditions become more severe and their care more costly. Because the tool was designed to use potential cost as a proxy for patients’ needs, and because providers allocate significantly fewer resources to Black patients’ care, Obermeyer et al. found that Black patients with the same risk scores as white patients tend to be much sicker. By measuring the racial disparity and building new predictors, the researchers concluded that, as long as the tool effectively predicts costs, its results will be racially biased, even without explicit attempts to account for race.
Dr. Benjamin discusses the broader implications of the study through a range of historical, hypothetical, and modern-day cases. She underscores how algorithmic and other labels, like health care costs, may initially appear to be race-neutral, but ultimately play critical and harmful roles in the lives of millions of Black patients in the United States. Dr. Benjamin’s analysis is situated within her larger body of research on race and the social dimensions of science, technology, and medicine. At Princeton University, Dr. Benjamin is an Associate Professor of African American Studies, founder of the Ida B. Wells Just Data Lab, Executive Committee member at the Program in Global Health and Health Policy and Center for Digital Humanities, and Faculty Associate in the Center for Information Technology Policy, Program on History of Science, Center for Health and Wellbeing, Program on Gender and Sexuality Studies, and the Department of Sociology.
Methods and Findings
Employing examples from health care, housing, and social media, Dr. Benjamin demonstrates how historical systems of racial discrimination are inextricably linked to modern-day, seemingly colorblind automated systems. She presents, among others, the following paired cases:
Imagine if Henrietta Lacks, an African American mother of five, had been “digitally triaged” at Johns Hopkins Hospital in 1951 after arriving with severe abdominal symptoms. The hospital’s cutting-edge automated tool would assess her risk based on the predicted cost of care (far less than is typically spent on white patients, despite Black patients’ actual health needs), leading providers to underestimate her level of need and discharge her, with ultimately fatal consequences.
Consider, in reality, Ms. Lacks’ admission to, and experience in, the Negro wing of Johns Hopkins Hospital, during a period in American history when overt racial discrimination was legally sanctioned.
Resulting in much the same catastrophic health outcomes, these cases highlight how the legacy of Jim Crow policies continues to feed its modern automated equivalent, termed in this paper the “New Jim Code.” Historical, racially biased human decisions shape both algorithmic design and inputs, such as data from segregated healthcare facilities, unequal insurance systems, and racist medical training. Yet the power of these automated tools can reach far beyond the scale of individual behavior, as they are capable of perpetuating unjust discrimination at a much greater level. Given this context, top-down reform efforts, whether through shifts in federal law or institutional policy, will not by themselves diminish discrimination.
Conclusions
Dr. Benjamin concludes that labels matter significantly, both in the design and in the analysis of algorithms. Rather than employing tropes that Black patients “cost less” or that Black patients’ poor care is the result of patients’ “non-compliance” or “lack of trust,” researchers, hospital staff, and analysts must adopt a more socially conscious analysis. The issue, put simply, is that Black patients are valued less, that structural and interpersonal racism persists in the American healthcare system, and that the medical industry, not Black patients, is accountable for the lack of trustworthiness. Although Obermeyer et al. describe some of this context, their descriptions are insufficient to reveal the very social processes that make their work so important.
Concern over algorithmic bias, although critical, must not outweigh attention to the context of racial discrimination. Indeed, this context is what made the promise of neutral technology so alluring in the first place. Automated tools like the one studied by Obermeyer et al. might work similarly for all patients if companies, institutions, and individuals provided the same level of care for Black patients, such that their care would not “cost less” than the care provided for non-Black patients. Overall, beyond the automated tools considered in this particular study, Dr. Benjamin recommends moving away from individual risk assessment tools and instead adopting tools that evaluate the risks produced by institutions and organizations. Through the development of such tools, the public can uncover agents or patterns of discrimination and ultimately hold institutions accountable for providing high-quality care to all patients.