ESG Echoes in AI Ethics: Drawing Lessons from Corporate Sustainability's Missteps for Effective Algorithmic Accountability

Alec Foster · 2024-01-16

Academia, Essay

Introduction

The widespread adoption of Environmental, Social, and Governance (ESG) principles since the mid-2000s, followed by the burgeoning field of modern AI ethics, marks a critical juncture in corporate responsibility and technological governance. While both fields promise transformative benefits, there is an inherent tension in balancing their ethical aspirations with corporate interests. This essay explores the similarities between these movements and the insights that can be gleaned from ESG disillusionment to understand the dynamics of AI ethics and algorithmic accountability.

This essay posits that ESG’s shortcomings offer crucial insights for AI ethics, advocating for a shift towards more robust algorithmic accountability: Section 2 presents a comparative analysis of ESG and AI ethics, revealing the prevalence of ineffective ethical commitments in both domains; Section 3 examines regulatory capture and systemic barriers to reforming AI; Section 4 draws on the limitations of ESG and AI ethics to emphasize the need for algorithmic accountability; and Section 5 proposes an enforceable regulatory framework for AI, arguing for criminal penalties, licensing requirements, and revenue sharing. This approach positions algorithmic accountability as a stringent but necessary remedy for the limitations of AI ethics, informed by ESG’s trajectory.

ESG and AI ethics: parallels and pitfalls

The struggle of the ESG and AI ethics industrial complexes to balance ethical aspirations with profit motives is increasingly evident. ESG critics derisively label the movement ‘greenwashing,’ a marketing gimmick pandering to environmentally conscious investors (Yu et al., 2020). The evidence overwhelmingly indicates that the ESG movement has not slowed climate change: despite 98.8% of S&P 500 companies publishing ESG reports as of June 2023, the last ten years rank as the ten warmest on record (Center for Audit Quality, 2023; NOAA, 2023). This gap highlights the ineffectiveness of ESG investing as global warming continues to escalate. A similar disconnect is present in AI ethics, where corporate narratives often prioritize public perception and profitability over consequential change.

Elettra Bietti’s analysis of ‘ethics washing’ in AI critiques the tech industry’s use of ethical language as a tool for self-regulation and self-promotion, often yielding superficial commitments that favor the interests of industry stakeholders over the societal good (Bietti, 2021). This concept calls attention to the discrepancy that arises when companies instrumentalize the language of ethics, conflating the marketing of ethics with genuine ethical practice. Bietti calls for a more nuanced approach to tech policy, arguing that ethics should go beyond performative rhetoric about ‘fixing’ algorithms. Instead, she suggests that we consider the ways in which algorithms can further existing social inequities and the settings in which we should avoid using them altogether. The status quo of ethics as performance in private industry often results in commitments adopted more for appearance and misdirection than actual stewardship. Bietti’s insights are critical in examining the authenticity and impact of initiatives in AI ethics and ESG.

The tendency to rely on market-driven solutions to market-driven problems has proven inadequate for addressing complex social-ecological and socio-technical challenges. The case of American Airlines exemplifies this discrepancy: the world’s largest airline made it onto the Dow Jones Sustainability Index despite emitting 49 million metric tons of carbon dioxide in 2022 (American Airlines, 2023). This inclusion raises questions about the efficacy of ESG metrics in reflecting environmental stewardship. Similarly, many decisions that OpenAI’s leadership team made as it evolved from a nonprofit without financial obligations to a for-profit subsidiary worth $86 billion mirror this contradiction. Despite safety concerns within OpenAI, the release of generative AI products like the chatbot ChatGPT in 2022 and the upcoming GPT Store in 2024 illustrates the difficulty of maintaining foundational ethical principles in the face of enormous financial incentives (Weise et al., 2023). Together, these examples underscore how market-driven dynamics can lead firms to disclose large quantities of ethics-related data to appear transparent while performing poorly on the underlying issues.

The influence of commercial pressures is visible in Google’s and Meta’s rushed deployments of AI products following ChatGPT’s release. For instance, Google fast-tracked AI chat and cloud offerings despite internal acknowledgment of “stereotypes, toxicity and hate speech in outputs” from its generative AI chatbot (Weise et al., 2023). Similarly, Meta accelerated the release of its LLaMA language model despite internal debates over its open-source nature and potential for misuse (Weise et al., 2023). This trend of rapid deployment in the generative AI arms race echoes Volkswagen’s 2015 emissions scandal, in which the German automaker deceived US environmental regulators in order to inflate its performance metrics and falsify eligibility for energy-efficient subsidies.

Figure 1. Screenshot of a Volkswagen marketing image as an example of greenwashing. Note: N.Y. Attorney General, 2016.

Figure 2. Photograph of a VW Golf TDI Clean Diesel as an example of greenwashing. Note: Ortiz, 2010. CC BY-SA 3.0.

Between 2008 and 2015, Volkswagen programmed its diesel engines to activate emissions controls only during laboratory emissions testing. Just one week before the US Environmental Protection Agency announced its finding that Volkswagen had willfully violated the Clean Air Act, Volkswagen was named the “world’s most sustainable automotive group” by the Dow Jones Sustainability Index (Volkswagen of America, 2015). Volkswagen’s emissions deception led to an estimated 1,200 premature deaths in Europe (Chu, 2017) and nearly 1 million tonnes of extra nitrogen oxides emitted per year (Mathiesen & Neslen, 2015), while its lauding as the leader in automotive environmental stewardship eroded trust in corporate sustainability commitments. Across AI ethics and ESG, such examples demonstrate a troubling pattern: companies often sideline ethical obligations (despite claiming otherwise) in pursuing market dominance, with significant societal repercussions.

Corporate influence and regulatory realities in AI development

Current AI regulations often frame biases within AI systems as isolated challenges with technical solutions rather than asking whether a different system, or no system at all, would be preferable. Julia Powles and Helen Nissenbaum (2018) assert that this approach reflects corporate strategies that enable a facade of addressing biases while ignoring reforms that would slow the growth of domestic AI companies. While a positive step towards regulating AI, the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023) embodies the strategic concessions and regulatory capture that allow AI companies to progress unchecked by the deeper ethical considerations they warrant. Its signature red-teaming disclosure requirement is achievable without reducing AI companies’ profits (Buccino, 2023). Furthermore, the computational threshold for compliance is set roughly five times higher than the estimated training compute of current leading models like GPT-4, delaying its applicability and overlooking the potential harms of existing AI systems (The White House, 2023). The absence of enforceable watermarking or provenance requirements for AI-generated content, licensing regimes over essential hardware like GPUs, and mandates for disclosure of training data content indicates that the executive order does not impede the progress US tech conglomerates have made in AI (Buccino, 2023). This approach tends to preempt more sweeping measures, such as deciding whether specific AI use cases should be permissible at all, echoing Powles and Nissenbaum’s critique of treating bias as a mere computational issue rather than a profound societal challenge.
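To make that threshold concrete: the executive order’s reporting requirement applies to models trained with more than $10^{26}$ floating-point operations, while public estimates (not figures from the order itself) put GPT-4’s training compute at roughly $2 \times 10^{25}$ FLOPs, which is where the factor of five comes from:

\[
\frac{10^{26}\ \text{FLOPs (reporting threshold)}}{2 \times 10^{25}\ \text{FLOPs (estimated GPT-4 training compute)}} = 5
\]

Models at today’s frontier scale therefore fall below the threshold, and compliance obligations attach only to systems substantially larger than anything yet deployed.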

As Powles and Nissenbaum note, addressing AI’s broader implications requires more than performative ethics; it demands a systemic overhaul. To address AI’s ethical challenges, we must first confront the misalignment between societal welfare and a legal and economic system designed to maximize shareholder wealth. The very structure of publicly traded companies, the majority of which are incorporated in shareholder-friendly jurisdictions like Delaware, is predicated on strict adherence to fiduciary obligations. The wishful thinking that corporations will voluntarily act as agents of humanitarian change is unrealistic and diverts attention from the need for more stringent regulatory measures.

The imperative for algorithmic accountability

The ESG movement’s shortcomings, rooted in its dependence on superficial rather than enforceable measures, along with insights from advocates of algorithmic accountability, provide guidance for averting similar pitfalls in the AI industry. Frank Pasquale’s (2019) notion of a ‘second wave’ of algorithmic accountability offers a framework for reorienting the AI ethics discussion. Moving beyond the limited scope of AI ethics’ incremental improvements to existing systems, algorithmic accountability calls for a comprehensive reassessment of how AI systems are deployed. Central to this framework is a reexamination of the necessity of specific AI systems, coupled with a push for structural reforms in their governance. This perspective aligns with an important lesson from the ESG movement – the insufficiency of self-regulation in industries driven by profit motives – and offers a proactive step towards embedding ethical considerations at the core of AI development.

Nissenbaum’s exploration of accountability in algorithmic society, alone and with colleagues, is vital for understanding the ethical implications of AI. She identifies multiple barriers to accountability in the tech industry: diffuse responsibility among contributors, the treatment of errors as inevitable in complex systems, shifting blame onto the technology itself, and ownership without liability (Cooper et al., 2022; Nissenbaum, 1996). This perspective is crucial for understanding how the barriers to accountability have evolved with technology. Cooper et al. (2022) also detail contemporary interventions that, while necessary for developing actionable notions of accountability, are insufficient on their own. They assert that we must go beyond post hoc impact assessments and transparency reports to build a culture of accountability. They suggest measures including designating multiple accountable actors, auditing requirements throughout the AI development pipeline, and strict liability frameworks with mechanisms to identify accountable parties and address algorithmic harms directly (Cooper et al., 2022).

The insights from Pasquale (2019) and Cooper et al. (2022), combined with Nissenbaum’s (1996) foundational work on accountability barriers, shed light on potential regulatory and institutional mechanisms for addressing the societal impacts of AI. The shift to a lasting culture of accountability necessitates the concrete actions set forth in the next section of this essay. Through such actions, AI ethics can transcend theoretical discourse and become an integral part of AI governance.

Framework for enforceable AI regulation

ESG and AI ethics initiatives can effectively function as heat shields, enabling corporations to maintain their power and capital while creating an illusion of responsibility. Instead, this essay argues for a decisive shift towards enforceable legislative standards that directly confront and penalize dangerous corporate behavior. There is evidence that strong regulations and disincentives can help mitigate a race to the bottom in the private sector. For example, prosecuting two of the most influential crypto exchange founders has proven more effective in cleaning up the cryptocurrency space than any well-intentioned ESG initiative (Khalili, 2023). By the same logic, the creation of dangerous or hateful content by AI models should carry criminal penalties. Additionally, taxing AI’s extractive business practices, such as training models on copyrighted materials, in the way carbon emissions are taxed may also prove fruitful. Only through such tangible measures can we steer AI development towards a beneficial and safe course.

The recklessness in sustainability and AI ethics exhibited by Volkswagen and the developers of foundation models resembles a non-cooperative game, in which the Nash equilibrium has corporations prioritizing rapid innovation and market dominance to stay competitive. We need systematic external interventions that deter detrimental corporate behavior to shift this equilibrium towards more responsible outcomes.
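A stylized two-firm payoff matrix makes this concrete (the numbers are illustrative assumptions, not estimates drawn from any cited source). Each firm chooses whether to deploy cautiously or rush to market, and each cell lists the payoffs to (Firm A, Firm B):

\[
\begin{array}{c|cc}
 & \text{B: cautious} & \text{B: rush} \\
\hline
\text{A: cautious} & (3,\ 3) & (1,\ 5) \\
\text{A: rush} & (5,\ 1) & (2,\ 2) \\
\end{array}
\]

Rushing strictly dominates for both firms (5 > 3 and 2 > 1), so the unique Nash equilibrium is mutual rushing with payoffs (2, 2), even though mutual caution at (3, 3) would leave both firms, and society, better off. Regulation works by changing the payoffs themselves, for instance by attaching criminal penalties to reckless deployment so that caution becomes the individually rational choice. Therefore, this framework must incorporate the following elements: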

  • criminal penalties for irresponsible or damaging AI practices
  • licensing requirements based on model size and use case
  • transparency requirements for the sources of model training data
  • revenue sharing with creators when AI models are trained on their intellectual property

The elements above take inspiration from consumer protection initiatives in regulated, high-risk industries. While not perfect, these sectors have seen the beneficial impact of structured regulatory frameworks in the US, such as criminal penalties (finance and banking), licensing requirements (energy, healthcare, and law), transparency requirements (real estate, aviation, and pharmaceuticals), and revenue sharing (online video sharing, notably YouTube’s Content ID system). This framework, informed by lessons from ESG, regulated industries, and the AI ethics discourse, will guide AI towards a more socially equitable future.
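As a thought experiment only, the following Python sketch shows one way a pro-rata payout could be computed under the revenue-sharing element. The attribution weights, the 10% royalty rate, and the function name are hypothetical assumptions for illustration; nothing here is drawn from YouTube’s actual Content ID implementation or any existing statute, and how attribution scores would be measured for training data remains an open research problem.

from typing import Dict

def pro_rata_payouts(model_revenue: float,
                     attribution_weights: Dict[str, float],
                     royalty_rate: float = 0.10) -> Dict[str, float]:
    """Split a royalty pool among creators in proportion to attribution weights.

    attribution_weights maps a creator ID to a nonnegative score estimating how
    much that creator's works contributed to the model; how such scores would
    be measured is assumed here, not solved.
    """
    pool = model_revenue * royalty_rate  # hypothetical: 10% of revenue set aside
    total = sum(attribution_weights.values())
    if total == 0:
        return {creator: 0.0 for creator in attribution_weights}
    return {creator: pool * weight / total
            for creator, weight in attribution_weights.items()}

# Example: $1M in model revenue split among three rights holders.
print(pro_rata_payouts(1_000_000, {"author_a": 0.5, "author_b": 0.3, "press_c": 0.2}))
# -> {'author_a': 50000.0, 'author_b': 30000.0, 'press_c': 20000.0}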

Conclusion

This essay has examined the parallels between the ESG movement and the field of AI ethics, advocating for a transition towards more stringent algorithmic accountability. Just as the ESG movement has grappled with the challenge of actualizing its ethical intentions amidst profit-driven corporate dynamics, AI ethics faces similar hurdles, often sidelining ethical commitments under competitive pressures. This analysis underscores the importance of integrating ethical considerations into AI’s operational and decision-making processes. It proposes a framework that addresses both immediate challenges in AI and their broader societal implications, echoing calls for a systemic overhaul by scholars like Powles and Nissenbaum (2018). In doing so, the essay aligns itself with a growing body of critical thought that pushes for a holistic reevaluation of how technology intersects with societal values.

The lessons from ESG’s shortcomings illuminate a path forward for AI ethics rooted in accountability, transparency, and a commitment to our collective values. This essay advocates for robust external safeguards, challenges the current over-reliance on corporate self-regulation, and assesses the pervasive influence of corporate interests on regulatory frameworks. In doing so, it contributes to the discourse on AI ethics and champions a vision where algorithmic accountability becomes the cornerstone of a future where AI is a positive, ethical force in our lives.

Bibliography

2022 Sustainability Report. (2023). American Airlines. https://s202.q4cdn.com/986123435/files/images/esg/aa-sustainability-report-2022.pdf

Bietti, E. (2021). From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy (SSRN Scholarly Paper 3914119). https://doi.org/10.2139/ssrn.3914119

Buccino, J. (2023, November 3). Red Teams, watermarks and GPUs: The advantages and limitations of Biden’s AI executive order. Federal News Network. https://federalnewsnetwork.com/commentary/2023/11/red-teams-watermarks-and-gpus-the-advantages-and-limitations-of-bidens-ai-executive-order/

Chu, J. (2017, March 3). Study: Volkswagen’s excess emissions will lead to 1,200 premature deaths in Europe. MIT News | Massachusetts Institute of Technology. https://news.mit.edu/2017/volkswagen-emissions-premature-deaths-europe-0303

Cooper, A. F., Moss, E., Laufer, B., & Nissenbaum, H. (2022). Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. 2022 ACM Conference on Fairness, Accountability, and Transparency, 864–876. https://doi.org/10.1145/3531146.3533150

Climate at a Glance: Global Time Series. (2023). NOAA National Centers for Environmental Information. https://www.ncei.noaa.gov/access/monitoring/climate-at-a-glance/global/Time-series

Volkswagen is world’s most sustainable automotive group. (2015, September 11). Volkswagen of America. https://media.vw.com/en-us/releases/566

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. (2023, October 30). The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Khalili, J. (2023, November 22). CZ Has Left Binance, SBF Is in Jail. Crypto Is About to Get Boring. Wired UK. https://www.wired.co.uk/article/cz-has-left-binance-sbf-is-in-jail-crypto-is-about-to-get-boring

Mathiesen, K., & Neslen, A. (2015, September 23). VW scandal caused nearly 1m tonnes of extra pollution, analysis shows. The Guardian. https://www.theguardian.com/business/2015/sep/22/vw-scandal-caused-nearly-1m-tonnes-of-extra-pollution-analysis-shows

Nissenbaum, H. (1996, March). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42. https://doi.org/10.1007/BF02639315

NY A.G. Schneiderman, Massachusetts A.G. Healey, Maryland A.G. Frosh Announce Suits Against Volkswagen, Audi and Porsche Alleging They Knowingly Sold Over 53,000 Illegally Polluting Cars And SUVs, Violating State Environmental Laws. (2016, July 19). New York State Attorney General. https://ag.ny.gov/press-release/ny-ag-schneiderman-massachusetts-ag-healey-maryland-ag-frosh-announce-suits-against

Ortiz, M. R. D. (2010). VW Golf TDI Clean Diesel [Photograph]. Wikimedia Commons, CC BY-SA 3.0. https://upload.wikimedia.org/wikipedia/commons/e/e8/VW_Golf_TDI_Clean_Diesel_WAS_2010_8981.JPG

Pasquale, F. (2019, November 25). The second wave of algorithmic accountability. LPE Project. https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/

Powles, J., & Nissenbaum, H. (2018, December 7). The seductive diversion of ‘solving’ bias in artificial intelligence. OneZero on Medium. https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53

Pucker, K. P., & King, A. (2022, August 1). ESG Investing Isn’t Designed to Save the Planet. Harvard Business Review. https://hbr.org/2022/08/esg-investing-isnt-designed-to-save-the-planet

Shifflett, S. (2023, November 19). Wall Street’s ESG Craze Is Fading. The Wall Street Journal. https://www.wsj.com/finance/investing/esg-branding-wall-street-0a487105

Weise, K., Metz, C., Grant, N., & Isaac, M. (2023, December 5). Inside the A.I. Arms Race That Changed Silicon Valley Forever. The New York Times. https://www.nytimes.com/2023/12/05/technology/ai-chatgpt-google-meta.html

Yu, E. P., Luu, B. V., & Chen, C. H. (2020). Greenwashing in environmental, social and governance disclosures. Research in International Business and Finance, 52, 101192. https://doi.org/10.1016/j.ribaf.2020.101192



Alec Foster

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.
