
Will AI get the ESG treatment?

Jul 11, 2023

Recent advances in generative AI capabilities have prompted a flurry of regulatory measures across the globe, and governments are scrambling to craft benchmark legislation that strikes the right balance between protecting citizens’ privacy and promoting economic growth.

According to Stanford’s Artificial Intelligence Index Report for 2023, the number of AI-related laws enacted jumped from one in 2016 to 37 in 2022.

Probably the most consequential law yet cleared a major hurdle in the EU this June. The current draft of the Artificial Intelligence Act introduces requirements tiered by the perceived risk level of a given AI application, sorted into four categories:

  • Low and minimal risk – AI use in applications like video games or spam filters is permitted without restriction.
  • Limited risk – Bots imitating human behavior online would be required to disclose their AI nature to human users.
  • High risk – AI applications for workforce recruitment or medical devices are subject to a number of restrictions including activity logging, human oversight, and case-specific risk assessments.
  • Unacceptable risk – Applications such as social scoring and some policing activities will be banned in the EU.

Some expect the law, once finalized, to become “the benchmark against which other legislation is judged.” If GDPR’s influence on global data privacy law is any indication, that sounds like a safe bet. Some lawmakers in the U.S. are reportedly frustrated by their inability to quickly draw up matching legislation.

Other countries, though, are taking a radically different approach. Japan, for instance, has turned regulation on its head by requiring that AI benefit society rather than banning its use in certain domains. Its AI regulations mandate that the technology be implemented to improve outcomes in areas including education, fairness, and innovation, according to the Center for Strategic & International Studies.

Interestingly, India, with its massive talent pool and growing technical sophistication, says it doesn’t plan to introduce any overarching AI legislation at all. China, meanwhile, has a “detailed and demanding regulatory regime for AI,” and some argue its censorship laws are impeding the development of large language models there. Others say China’s flouting of intellectual property law and willingness to aggressively collect data on its civilian population for things like social scoring give the country a competitive edge.

AI does not respect national boundaries, and cybercriminals by definition don’t observe laws. A patchwork of statutes enacted by governments around the world could hamstring law enforcement with restrictive rules while being universally ignored by cybercriminals. There would be no Sherlock Holmes to threat actors’ Moriarty.

The heavy burden of compliance and the threat of penalties mean many will oppose these regulations. There is already evidence that OpenAI, the company behind ChatGPT, worked to water down the EU’s AI Act. And in the U.S., its CEO argued before Congress that regulation could allow rivals of the U.S. to achieve supremacy in the field. Thorny legal issues will need to be worked through everywhere, promising high legal fees and ample distractions.

Imagining a market-based approach: Analogies to ESG

Meanwhile, companies like Zscaler and others expect AI use to become ubiquitous and are investing heavily in the technology. It’s conceivable that AI in some form will underpin the majority of interactions between businesses and consumers, at least online. Customers have learned to be vocal in demanding that sustainability factor into their purchasing decisions. Could the same become true for responsible and ethical AI use?

Though ESG reporting is not strictly mandatory in the U.S., 96% of S&P 500 companies published ESG reports in 2022, and such reporting will become mandatory for large companies in the EU. In much the same way that environmental, social, and governance (ESG) priorities rose, then soared, as a means of more closely aligning investments with values, consumers may begin to show a preference for companies that deploy AI responsibly and transparently. If both unchecked carbon emissions and runaway AI are considered existential threats, why should efforts to mitigate them be discussed differently?

Surely terms of service agreements aren’t a sufficient control. Most users never read them, and those who refuse to sign their lives away are simply denied the service.

Reporting on AI use and governance could give organizations the opportunity to earn customer trust, showcase innovation initiatives, demonstrate compliance with data privacy laws, and describe how the risks associated with particular AI applications are being mitigated. In this case, as with ESG, transparency in AI use could act as a competitive advantage, an outlet for expressing core values, and an opportunity for growth.

In the meantime, those using AI in their operations should proactively enhance their internal assessment processes to address the associated security and privacy risks. This includes evaluating the security posture and data governance policies of partners and third-party AI-enabled apps. Organizations should track their own AI-use metrics in the same way. These capabilities will eventually be so seamless and ubiquitous that simply discovering where AI is in use will become its own challenge without the right tools (which will likely be AI-enabled themselves).
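As a rough illustration of what that discovery step might look like in practice, the sketch below scans a web proxy log for traffic to known generative AI endpoints and tallies usage per user. The CSV schema (user and host columns), the file name proxy_log.csv, and the short domain watchlist are illustrative assumptions, not any particular product’s format:

    # A minimal sketch of AI-use discovery, assuming a CSV proxy log with
    # "user" and "host" columns. The domain watchlist is illustrative only;
    # a real inventory would be far larger and continuously updated.
    import csv
    from collections import Counter

    AI_DOMAINS = {
        "api.openai.com",
        "chat.openai.com",
        "bard.google.com",
        "api.anthropic.com",
    }

    def ai_usage_by_user(log_path: str) -> Counter:
        """Count requests to generative AI services per user."""
        usage = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["host"].lower() in AI_DOMAINS:
                    usage[row["user"]] += 1
        return usage

    if __name__ == "__main__":
        # Print the ten heaviest users of AI services in the log.
        for user, hits in ai_usage_by_user("proxy_log.csv").most_common(10):
            print(f"{user}: {hits} requests to AI services")

Even a crude tally like this makes shadow AI use visible enough to prioritize which tools and teams need a deeper assessment.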

Following discovery, organizations must move to securing their own AI use cases and insisting on the same from partners. This may entail, for example, DLP capabilities to keep proprietary data like source code from being leaked to an LLM with shady terms of service. A simple version of such a check is sketched below.
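This is a minimal, assumption-laden sketch of a DLP-style gate that screens outbound prompts for code and secret signatures before they reach a third-party LLM. Production DLP engines rely on much richer techniques (exact data matching, document fingerprinting, ML classifiers); the patterns here are heuristics for illustration only:

    # A minimal sketch of a DLP-style prompt check. The patterns are
    # illustrative heuristics, not a production-grade classifier.
    import re

    CODE_PATTERNS = [
        re.compile(r"\bdef \w+\s*\("),                          # Python function definitions
        re.compile(r"\b(class|import|package)\s+\w+"),          # common source-code keywords
        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
        re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                    # AWS access key ID format
    ]

    def is_safe_to_send(prompt: str) -> bool:
        """Return False if the prompt appears to contain code or secrets."""
        return not any(p.search(prompt) for p in CODE_PATTERNS)

    # Usage: gate the outbound API call on the check.
    prompt = "Please review this: def transfer_funds(account): ..."
    if is_safe_to_send(prompt):
        pass  # forward the prompt to the external LLM
    else:
        print("Blocked: prompt appears to contain proprietary code or secrets.")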

Companies that take these and other steps to ensure their responsible use of AI should be happy to report on it, just as they now report on social and environmental issues.

No place for binary thinking

It’s neither smart nor feasible for voluntary or mandatory reporting to replace government regulation completely. But ESG-type programs and reporting could reward companies that use AI in a way that’s most productive, broadly beneficial, and responsible. 

James Surowiecki wrote recently in The Atlantic that the legacy of ESG “may have been to weaken the demand for government action by fostering the illusion that corporations can solve, and indeed are solving, the world’s problems on their own.”

ESG frameworks have proliferated since their popularity skyrocketed. Maybe they’ll expand to swallow AI, or maybe AI will eventually receive a treatment similar to environmental, social, and governance initiatives.
