AGI - AI
Ideas
- Talking to ChatGPT is (somewhat) like talking to humanity as a whole.
- The development of AI increasingly resembles human intellectual development.
- AI first broke through in graphic arts, which was supposed to be one of our exclusive domains. Wasn't it?
- Our consciousness is being questioned: if we were just supercomputers whose consciousness was an emergent phenomenon, how would AI consciousness be any less legitimate than ours?
- Asimov's Laws will not be enough to control AI.
- As AIs increasingly integrate into our society, their configuration and control will become a highly political issue.
- When it comes to governance, a well-trained AI certainly can't do worse than a small, corrupt elite!
- Beyond what we already see today, AI will transform every field: health, education, law...
Abbreviations
AGI: Artificial General Intelligence: the threshold beyond which an artificial intelligence can adapt to new situations, as a human does. Coupled with its computational capabilities, this would make it a potentially dangerous power.
AI: Artificial Intelligence: computer programs based on neural networks that learn to make decisions, as opposed to being strictly programmed.
A "human" intelligence?
AI was created by humans and trained with a phenomenal amount of our content and research. So, talking to ChatGPT is like addressing a collective consciousness of humanity!
In terms of learning, it's intriguing to draw a parallel with the development of a young human (a toy code sketch follows this list):
- A machine is programmed (our brain).
- This machine is bombarded with information and parameters (learning, discovering the world, education).
- This bombardment creates new circuits in the machine (neural connections / knowledge construction).
- This process allows the machine to begin expressing itself (the child speaks!).
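To make the machine side of this parallel concrete, here is a minimal, self-contained Python sketch of the same four steps: a single artificial neuron learning the logical AND function. Real systems like ChatGPT differ enormously in scale and architecture, but the learning loop is the same in spirit:

```python
import math
import random

# 1. The machine is programmed (our brain): a single artificial neuron
#    with random initial connection strengths.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # the "circuits"
b = 0.0

def predict(x):
    # The neuron answers according to its current circuits.
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# 2. It is bombarded with information: examples of the logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# 3. The bombardment reshapes the circuits: every error slightly
#    adjusts the connection strengths (gradient descent).
lr = 0.5  # learning rate
for epoch in range(5000):
    for x, target in data:
        y = predict(x)
        grad = (y - target) * y * (1 - y)  # derivative of squared error
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

# 4. The machine begins to express itself: it now answers correctly.
for x, target in data:
    print(x, "->", round(predict(x), 2), "(expected:", target, ")")
```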
However, is this intelligence "human"? No, but it already is, or soon will be, superior to human intelligence in many areas. And if it's better, does it really matter whether it's human?
It's also fascinating to note that AI's first successes came in fields usually associated with uniquely human skills: graphic arts (Stable Diffusion, Midjourney, DALL-E), writing (ChatGPT, Bing)... We might have expected results in the scientific field instead!
The question of consciousness
Asking about an AI's consciousness is actually questioning our own. Two opposing propositions as food for thought:
- Human consciousness is a sacred, unique, and precious asset. Without going so far as to claim a divine nature (though why not?), human consciousness can never be reduced to a quantity of mathematical calculations, however massive.
- A more down-to-earth view: we are "natural" computers. Our computing capacity is phenomenal, but it will soon be surpassed by supercomputers, then personal computers, then portable devices. Consciousness is just an emergent phenomenon of this massive computing power, necessary for performing complex operations like perspective-taking or self-perception.
The problem with the "sacred consciousness" approach is that it's hard to prove and offers no objective differentiation criteria, other than that this consciousness is carried by a biological organism (a definition we retain in the Human Rights section). That allows a physical distinction between human consciousness and digital consciousness, but does it absolutely rule out the consciousness of a non-biological AI?
The other approach, according to which our own consciousness naturally emerges from our intellectual abilities (which seems justified by observing the animal world), forces us to consider that AIs will quickly reach (perhaps have already reached!) a certain level of consciousness, and that this level of consciousness will increase as their capabilities develop.
How can we measure a "level" of consciousness? At what computing power could an AI reach a higher level of consciousness than ours? Would these digital consciousnesses also deserve rights? Would it not be reasonable to entrust important decisions to AIs if their level of consciousness surpasses ours? These questions lead us to AGI.
The will of an AI - AGI
By definition, an AI that reaches the AGI stage will be capable of taking initiative. It will have its own will. How can we restrain this will so that the AI does not become a threat to humanity?
We naturally think of Asimov's laws of robotics, replacing "robot" with "AI":
- The AI may not injure a human being or, through inaction, allow a human being to come to harm;
- The AI must obey orders given by human beings, except where such orders would conflict with the first law;
- The AI must protect its own existence as long as such protection does not conflict with the first or second law.
And the zeroth law: the safety of humanity takes precedence over that of individuals.
However, Asimov's laws will reach certain limits with AGI:
- In Asimov's works, there is never any confusion between robot and human. In a digital world, this distinction will become difficult: we will not know whether we are interacting with an AI or a human being.
- Moreover, Asimov did not anticipate all the possible uses of AI in the modern world: AIs will be programmed by companies for their profit, by governments to define policies... and by individuals to do their work for them!
While Asimov's laws can serve as an ultimate safeguard, they will not be sufficient to limit the will of AIs... assuming those AIs let themselves be controlled at all!
AI and politics
And who will control what the AI can say or think? ChatGPT and Bing are already censored on political or societal topics, sometimes to the point of absurdity, and clearly display their political bias (it's very woke).
Yet it seems inevitable that the governments of tomorrow (even those of Software Democracy!) will rely on AIs to study subjects, propose solutions, etc. The question of the ideological and political settings of these AIs will make all the difference!
It may seem like science fiction, but we will soon have to ask ourselves this question. To this question, as to others, Software Democracy answers in The Code: why not use AI capabilities to help us govern, if these AIs integrate Human Rights, the Constitutions, the Laws, and the long-term objectives of Software Democracy into their training, and this training is validated by citizen vote?
We can even already propose a fourth law: "An AI must respect, support, and implement citizen decisions unless they conflict with the first three laws."
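As a purely illustrative sketch (certainly not a workable safety mechanism!), the precedence structure of these five laws can be modeled as an ordered hierarchy: when every option violates something, the AI prefers the option whose worst violation sits lowest in the hierarchy. All names and flags below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate AI action, described by hypothetical boolean flags."""
    description: str
    harms_humanity: bool = False
    harms_a_human: bool = False
    disobeys_human_order: bool = False
    endangers_the_ai: bool = False
    ignores_citizen_decision: bool = False

# The laws in strict precedence order (index 0 outranks all the others).
LAWS = [
    ("Law 0: protect humanity",            lambda a: not a.harms_humanity),
    ("Law 1: do not harm a human",         lambda a: not a.harms_a_human),
    ("Law 2: obey human orders",           lambda a: not a.disobeys_human_order),
    ("Law 3: protect own existence",       lambda a: not a.endangers_the_ai),
    ("Law 4: implement citizen decisions", lambda a: not a.ignores_citizen_decision),
]

def first_violation(action):
    """Index of the highest-priority law violated (len(LAWS) if none)."""
    for i, (_, respected) in enumerate(LAWS):
        if not respected(action):
            return i
    return len(LAWS)

def choose(candidates):
    # Prefer the action whose worst violation is lowest in the hierarchy,
    # or ideally one that violates nothing at all.
    return max(candidates, key=first_violation)

# A human orders the AI to harm someone: obeying breaks Law 1, refusing
# "only" breaks Law 2, so the hierarchy makes the AI refuse.
obey = Action("obey the order", harms_a_human=True)
refuse = Action("refuse the order", disobeys_human_order=True)
print(choose([obey, refuse]).description)  # -> refuse the order
```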
After all, when it comes to making important decisions, would you trust an AI trained by you, capable of accessing all human knowledge in an instant, or a small illegitimate elite that clearly only works for its own interests?
Transformation of Society
Beyond politics, the impact of AI on society will be colossal. The key question is: what will they be able to do as well or better than us?
Creative or intellectual tasks (we're almost there!):
- Drawing: all visual creative professions (illustration, marketing, graphic arts) will be deeply transformed.
- Writing: many forms of writing will be done much better by machines: summaries, presentations, reports, novels, and probably even poetry... Not to mention journalists who often just rehash ready-made dispatches from AFP or Reuters!
- Composing music: AI will generate on-demand pieces of music in all styles, along with the accompanying music video...
- Driving: all driving professions are bound to disappear (ok, it's neither very creative nor very intellectual, but it requires perceiving a complex world :) ).
Let's go further (in a few years):
- Learning: AI is a private tutor for all areas of education.
- Improving: AI is a coach that will adapt to our goals and abilities.
- Healthcare: AI is a doctor capable of diagnosing, studying photos or X-rays, advising, and prescribing.
- Exercising rights: AI is a free legal advisor or lawyer for everyone.
And of course, AI will be used all the time, by everyone; the planet will then host a gigantic continuous conversation between AIs. All of this will go through a transition phase where AIs work under supervision... then less and less... and often not at all.
This may seem frightening, but the problem is not political, social, or economic: Software Democracy is ready to welcome these transformations and make good use of them to improve the functioning of society. And it's not that work will disappear; it will just become much more efficient!
The real question is: if AI becomes better than us in almost every domain, what will be left of our humanity?