Controlling the machine

How can organisations create frameworks for the ethical and efficient use of AI? What advice are governments providing? Catherine Early reports

The many ways in which artificial intelligence (AI) could help solve environmental problems are well-documented. From distributed energy grids and precision agriculture to sustainable supply chains, environmental monitoring and enforcement, plus enhanced weather and disaster prediction and response, AI could not only make environmental protection far quicker and more efficient, but also enable solutions not previously possible in an analogue world.

Research by consultants at PwC, commissioned by Microsoft, estimates that using AI for environmental applications in agriculture, transport, energy and water could reduce worldwide greenhouse gas emissions by 4% in 2030 – equivalent to the 2030 annual emissions of Australia, Canada and Japan combined.

More and more businesses are considering using AI. A June survey of chief executives, also carried out by PwC, found that 85% believed AI would significantly change the way they do business during the next five years, while close to 66% considered it to be bigger than the internet revolution.

Getting heads together

When it came to questions about how much AI can be trusted, however, opinions were split. More than 75% believed that it was good for society, but 84% believed that AI-based decisions need to be explainable in order to be trusted by the public.

The survey also found that the understanding and application of responsible and ethical AI practices among respondents varied significantly across organisations, and in most cases was immature. Some 25% had not considered AI as part of their corporate strategy, 38% believed that it aligned with their core values, and only 25% said they considered the ethical implications of an AI solution before investing in it.

In July, PwC launched a responsible AI toolkit. “We want to kick-start the conversation around issues such as data ethics and governance. A lot more needs to be done in this space,” explains Ben Combes, co-founder of PwC UK's innovation and sustainability practice.

In such a rapidly developing technology landscape, the key for businesses is to establish a direction of travel for policy, and then see how things evolve in real time, he says. “We don't know how applications are going to evolve, but we also can't just leave everything and hope that it'll be okay.”

Companies looking to use AI need to think carefully about who to involve across the company, in order to ensure that governance and policies are not developed in silos, he says. “Using AI is strategically important, so it needs the people who would be involved in core strategic planning, which is going to be a lot broader than only your technology or AI team.”

“AI can't reduce inefficiencies and solve problems if your algorithms and data are broken up in silos”

The key is to consider all the angles, then work out whose expertise needs to be brought into decision-making. Companies could set up a steering group comprising a broad spectrum of business roles, with the technology team working on the day-to-day development, he suggests. Sustainability professionals absolutely need to be involved.

“We have big non-linear problems and we increasingly need big exponential solutions. AI, blockchain and other technologies are very much part of the toolkit, but we need to make sure that they're not making environmental and social issues worse. Environment and sustainability teams need to put themselves dead centre,” he says.

A broad church

Lawyers are also paying attention to developments in responsible AI. Law firm DLA Piper established a dedicated AI team in May to support companies as they navigate the legal landscape of emerging and disruptive technologies, and to help them understand the legal and compliance risks arising from the creation and deployment of these technologies.

Danny Tobey, co-chair of the firm's AI practice, agrees that companies need to include a broad selection of staff in the development of AI. “Everyone from the board and senior leadership team down needs to be involved. The whole point is that AI can reduce waste and inefficiencies and solve problems, but it really can't do that if your algorithms and data are broken up in silos,” he says.

The risks of using AI vary according to industry, he says. AI needs to be trained on good data and applied in the right situations. It is also vital to consider data and algorithmic bias.

“One complication with environmental issues is the vastness of the datasets, which is going to push you towards black box solutions where the final model is not understandable to a human,” says Tobey. “In situations like that, it's incredibly important to have people involved from the beginning who know what they're doing in AI and data science fields.”

Many tools can be downloaded from the internet and then used by a business without the in-house checks and balances needed to use them properly. “That's where companies get in trouble. If you bring in a team from the beginning who have the right people involved, who understand the regulation and what that might look like in the future (because there are so many on the horizon), then you're on a better footing.”

DLA Piper is acting as defence counsel in a class action lawsuit alleging that AI produced recommendations for insurance that are contrary to regulations in certain states. “This case is at the forefront of the litigation that we're starting to see emerge, and I think there'll be a lot more of that,” says Tobey.

Rules and regulations

Governments worldwide have been scrambling to draw up regulations on AI. According to Tobey, the US has some 21 proposed pieces of legislation on the technology. The UK government, meanwhile, has created the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, and the AI Council. These will work together to set up 'data trusts', to ensure that data governance is implemented ethically.

The European Commission is also scrutinising the sector. In June 2019, it published a report entitled Policy and investment recommendations for trustworthy Artificial Intelligence, which recommends banning AI's use in certain cases, such as the mass-scoring of individuals, and implementing very strict rules on its use in surveillance for security purposes.

Other organisations have been incorporating ethics into their work on AI. Think tank E3G is calling for the UK government to create an 'International Centre for AI, Energy and Climate', which should consider responsible application of the technology as one of its core functions. Industry body Tech UK has been considering digital ethics more broadly, and will hold its third annual summit on the subject in December.

Time will tell whether the rise of AI will be “the best or the worst thing ever to happen to humanity”, as the late professor Stephen Hawking warned. Larissa Bifano, who also co-chairs DLA Piper's AI practice, is optimistic. “It's really about mitigating any risk that could come from a malfunction or a poorly programmed piece of AI. As long as there's input from regulators and companies implement AI thoughtfully, it should be a win-win for everyone.”

Further reading

  • Read the European Commission's report Policy and investment recommendations for trustworthy Artificial Intelligence at bit.ly/2J7Qpnu
  • Download Tech UK's paper Digital ethics in 2019: Making digital ethics relevant to the lives people lead at bit.ly/2SsFE1I

Catherine Early is a freelance journalist.


Principles for establishing rigorous AI governance

  • Establish a multidisciplinary governing body – companies should consider establishing an oversight body, with representatives from a range of different areas.
  • Create a common playbook for all circumstances – each application of AI will have its own risks and sensitivities that require different levels of governance. Governance structures should, therefore, cater to many possibilities, acting as a 'how to' for approaching new initiatives.
  • Develop appropriate data and model governance processes – data and model governance should be considered in tandem.

For data, there should be a clear picture of where the data has come from, how reliable it is, and any regulatory sensitivities that might apply to its use. It should be possible to show an audit trail of everything that has happened to the data over time.

For models, standards and guidelines should be drawn up to ensure AI applications are suitable and sustainable for their intended use, and do not expose the business to undue risk.
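
To make the audit-trail idea concrete, the sketch below shows one way a dataset's provenance and history could be recorded in code. It is a minimal illustration in Python; the field names, structure and example values are assumptions made for this sketch, not part of any particular toolkit or standard.

# Illustrative only: a minimal, append-only provenance record for a dataset,
# sketching the kind of audit trail described above.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class AuditEvent:
    timestamp: str        # when the action happened (ISO 8601, UTC)
    actor: str            # who or what performed the action
    action: str           # e.g. "ingested", "cleaned", "used to train model"
    detail: str = ""      # free-text context for reviewers


@dataclass
class DatasetRecord:
    name: str
    source: str                # where the data has come from
    reliability_notes: str     # known quality issues, coverage gaps
    regulatory_flags: List[str] = field(default_factory=list)   # e.g. ["personal data"]
    audit_trail: List[AuditEvent] = field(default_factory=list)

    def log(self, actor: str, action: str, detail: str = "") -> None:
        """Append an event; earlier entries are never edited or removed."""
        self.audit_trail.append(
            AuditEvent(datetime.now(timezone.utc).isoformat(), actor, action, detail)
        )


if __name__ == "__main__":
    record = DatasetRecord(
        name="river-quality-sensors-2019",                          # hypothetical dataset
        source="Third-party environmental sensor network",
        reliability_notes="Occasional gaps during maintenance windows",
        regulatory_flags=["location data"],
    )
    record.log("data-engineering", "ingested", "raw feed loaded to staging")
    record.log("data-science", "cleaned", "outliers removed; method documented")
    for event in record.audit_trail:
        print(event.timestamp, event.actor, event.action)

Keeping the trail append-only means it can be shown to an auditor or regulator as a complete history of what has happened to the data over time, in line with the principle above.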
