Document Type

Article

Publication Title

Royal Society Open Science

Abstract

It is 10 years since neural networks made their spectacular comeback. Prompted by this anniversary, we take a holistic perspective on artificial intelligence (AI). Supervised learning for cognitive tasks is effectively solved, provided we have enough high-quality labelled data. However, deep neural network models are not easily interpretable, and thus the debate between black-box and white-box modelling has come to the fore. The rise of attention networks, self-supervised learning, generative modelling and graph neural networks has widened the application space of AI. Deep learning has also propelled the return of reinforcement learning as a core building block of autonomous decision-making systems. The harms made possible by new AI technologies have raised socio-technical issues such as transparency, fairness and accountability. The dominance of AI by Big Tech, who control talent, computing resources and, most importantly, data, may lead to an extreme AI divide. Despite the recent dramatic and unexpected success in AI-driven conversational agents, progress in much-heralded flagship projects like self-driving vehicles remains elusive. Care must be taken to moderate the rhetoric surrounding the field and align engineering progress with scientific principles.

First Page

1

Last Page

15

DOI

10.1098/rsos.221414

Publication Date

3-29-2023

Keywords

artificial intelligence winter, Big Tech, ImageNet, supervised learning, transformers

Comments

Open Access article

Archived with thanks to Royal Society Publishing

License: CC BY 4.0

Uploaded 28 November 2023
