AI Progress Cannot be Stopped, but Must be Managed

In the sector of artificial intelligence we are experiencing jolting change: the rate of acceleration of technological performance is itself increasing. The release of ChatGPT, shortly followed by GPT-4 and the plugin architecture extending its capabilities, has brought this to everyone’s attention, surprising even experts. We need to be more intentional about the direction and the consequences of these and other tools because, both as individuals and as organizations, we have limits to our adaptability: we need time to take advantage of what AI offers, rather than being swept away by it.

That is why we are one of the signatories of the open letter by the Future of Life Institute: “Pause Giant AI Experiments”. Together with Yoshua Bengio, Steve Wozniak, Elon Musk, and hundreds of other experts in the field of AI, what we are asking for is an opportunity to think deeply about the right way to design, implement, improve, deploy, and regulate advanced tools in the field of Artificial Intelligence. Today’s jolting pace does not provide that opportunity, and the risks of making mistakes in how we build next-generation AI models are too large to afford.

Some of the concerns are already evident: the inherent bias in these models and our inability to properly manage it; the pace of job displacement, which could be very rapid, with no solutions in place to help the transition; the danger of increasing pollution of our info-sphere; and the loss of trust in communication and public discourse. Other dangers appear to many more theoretical or implausible: those connected to Artificial General Intelligence, the open question of its agency and goal seeking, and its potentially uncontrollable nature. While an iterative approach to improving our technologies has worked in other areas, we don’t have spare civilizations to deploy if we end up destroying the current one.

The benefits of powerful AI tools, and eventually of a human-aligned AGI, are immense. It would be irresponsible to attempt to stop progress and give up these future benefits; the right approach is to plan ahead and bring them forth responsibly. A series of oversight initiatives that incentivize transparency and accountability would represent the initial steps in the right direction.
