AGI - AI

Ideas

Abbreviations

AGI: Artificial General Intelligence: the threshold beyond which an artificial intelligence can adapt to new situations, as a human would. Combined with its computational capabilities, this would make it a potentially dangerous power.
AI: Artificial Intelligence: computer programs whose decision-making relies on neural networks and learning, as opposed to strict, explicit programming.

A "human" intelligence?

AI was created by humans and trained on a phenomenal amount of our content and research. Talking to ChatGPT is therefore a bit like addressing a collective consciousness of humanity!

In terms of learning, it is intriguing to draw a parallel with the development of a young human.

However, is this intelligence "human"? No, but it is, or soon will be, superior to human intelligence in many areas. And if it is better, does it really matter whether it is human?

It's also fascinating to note that AI's first successes came in fields usually associated with uniquely human skills: graphic arts (Stable Diffusion, Midjourney, DALL-E), writing (ChatGPT, Bing)... We might have expected results in the scientific field instead!

The question of consciousness

Asking about an AI's consciousness is really a way of questioning our own. Two opposing propositions as food for thought:

  1. Consciousness is something "sacred", specific to biological beings;
  2. Consciousness emerges naturally from intellectual abilities.

The problem with the "sacred consciousness" approach is that it is hard to prove and offers no objective criterion of differentiation, other than that this consciousness is carried by a biological organism (the definition we retain in the Human Rights section). It allows a physical distinction between human consciousness and digital consciousness, but does it categorically rule out the consciousness of a non-biological AI?

The other approach, according to which our own consciousness emerges naturally from our intellectual abilities (which observation of the animal world seems to support), forces us to consider that AIs will quickly reach a certain level of consciousness (perhaps they already have!), and that this level will rise as their capabilities develop.

How can we measure a "level" of consciousness? At what computing power would an AI's level of consciousness exceed ours? Would these digital consciousnesses also deserve rights? And if their level of consciousness surpasses ours, would it not be reasonable to entrust important decisions to AIs? These questions lead us to AGI.

The will of an AI - AGI

By definition, an AI that reaches the AGI stage will be capable of taking initiative. It will have its own will. How can we restrain this will so that the AI does not become a threat to humanity?

We naturally think of Asimov's laws of robotics, replacing "robot" with "AI":

  1. The AI may not injure a human being or, through inaction, allow a human being to come to harm;
  2. The AI must obey orders given by human beings, except where such orders would conflict with the first law;
  3. The AI must protect its own existence as long as such protection does not conflict with the first or second law.

And the zeroth law: the safety of humanity takes precedence over that of individuals.
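
As a purely illustrative aside, here is a minimal Python sketch of this precedence-ordered rule hierarchy. Every name in it (Action, LAWS, severity, choose) is a hypothetical placeholder: a toy model of how law precedence resolves conflicts, not a real safety mechanism.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical model of a candidate action and its consequences."""
    description: str
    endangers_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# Laws in precedence order: the zeroth law first, then laws 1 to 3.
# Each entry pairs a label with a predicate that is True when the law is respected.
LAWS = [
    ("Law 0: protect humanity",      lambda a: not a.endangers_humanity),
    ("Law 1: do not harm humans",    lambda a: not a.harms_human),
    ("Law 2: obey human orders",     lambda a: not a.disobeys_order),
    ("Law 3: protect own existence", lambda a: not a.endangers_self),
]

def severity(action: Action) -> int:
    """Index of the highest-priority law the action violates (lower = worse).
    Returns len(LAWS) when no law is violated at all."""
    violated = [i for i, (_, respected) in enumerate(LAWS) if not respected(action)]
    return min(violated) if violated else len(LAWS)

def choose(actions: list[Action]) -> Action:
    """When every option violates some law, pick the action whose worst
    violation is the least severe (i.e. breaks only lower-priority laws)."""
    return max(actions, key=severity)

# Example conflict: obeying an order (law 2) would harm a human (law 1).
best = choose([
    Action("obey the order", harms_human=True),
    Action("refuse the order", disobeys_order=True),
])
print(best.description)  # "refuse the order": law 1 outranks law 2
```

The conflict example shows why the ordering matters: refusing the order violates law 2, but obeying it would violate the higher-priority law 1.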

However, Asimov's laws will run into certain limits with AGI. While they can serve as an ultimate safeguard, they will not be sufficient to constrain the will of AIs... assuming those AIs let themselves be controlled at all!

AI and politics

And who will control what the AI can say or think? ChatGPT and Bing are already censored on political or societal topics, sometimes to the point of absurdity, and clearly display their political bias (it's very woke).

Yet it seems inevitable that the governments of tomorrow (even those of Software Democracy!) will rely on AIs to study issues, propose solutions, and so on. The ideological and political settings of these AIs will then make all the difference!

It may seem like science fiction, but we will soon have to face this question. Software Democracy answers it, among others, in The Code. Why not use AI capabilities to help us govern, if these AIs integrate Human Rights, Constitutions, Laws, and the long-term objectives of Software Democracy into their training, and this training is validated by citizen vote?

We can even propose a fourth law already, sketched below: "An AI must respect, support, and implement citizen decisions unless they conflict with the first three laws."
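
Continuing the purely illustrative sketch above (same hypothetical names), this fourth law simply slots in below the three classic laws, so citizen decisions are implemented unless a higher-priority law is violated:

```python
# Appended below the three classic laws: lowest priority in the hierarchy.
LAWS.append((
    "Law 4: implement citizen decisions",
    lambda a: not getattr(a, "ignores_citizen_decision", False),
))

# Usage: an action that merely shelves a voted reform violates only law 4.
a = Action("shelve a voted reform")
a.ignores_citizen_decision = True  # flag set after construction, for brevity
print(severity(a))  # 4: only the new, lowest-priority law is violated
```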

After all, when it comes to making important decisions, would you rather trust an AI trained by you and capable of accessing all human knowledge in an instant, or a small, illegitimate elite that clearly works only for its own interests?

Transformation of Society

Beyond politics, the impact of AI on society will be colossal. The key question is: what will AIs be able to do as well as, or better than, us?

Creative or intellectual tasks come first: we're almost there! And in a few years, AI will go much further.

And of course, AI will be used all the time, by everyone; the planet will then host one gigantic, continuous conversation between AIs. All of this will go through a transition phase in which AIs work under supervision... then less and less... and often not at all.

This may seem frightening, but the problem is not political, social, or economic: Software Democracy is ready to welcome these transformations and make good use of them to improve the functioning of society. And it's not that work will disappear; it will just become much more efficient!

The real question is: if AI becomes better than us in almost every domain, what will be left of our humanity?

Let's talk about it!