“The AI community has not yet adjusted to the fact that we are now starting to have a really big impact in the real world. That simply wasn’t the case for most of the history of the field – we were just in the lab, developing things, trying to improve AI’s ability to self-improve, mostly failing to get stuff to work. So the question of real-world impact was just not germane at all. And we have to grow up very quickly to catch up.”
Those were the remarks of Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, in an interview with the Guardian.
There is no denying that AI systems have advanced rapidly and transformed life around us in countless ways. If an AI system gains the ability to self-improve, it will be able to make whatever changes it wants to itself without human intervention or supervision. A system left to modify itself without sufficient oversight could well prove dangerous and harmful. AI researchers and founders have been spooked by their own progress, as Russell’s remarks above make clear.
AI’s Ability to Self-Improve:
Since the release of ChatGPT, the development of AI technology has been the talk of the town, and one of the most popular topics of discussion is the ability of AI to self-improve. The idea is unsettling: once Artificial Intelligence can improve its own capabilities, it could transcend the barriers of human intellect and transform itself into something far more powerful than us. Some fear such a system could even wipe the entire human race from this planet; scary enough, isn’t it?
A Superintelligence Explosion:
The point at which AI works out how to improve its own features and qualities has a name: “the AI singularity”, sometimes also called “a superintelligence explosion”. Many AI experts believe that once Artificial Intelligence learns how to upgrade itself, it will become incredibly intelligent very quickly, which would be alarming for humanity. That scenario is known as a “fast take-off”. Given the recent progress in the field, it is tempting to say that a fast take-off might be around the corner, and there are numerous bets that a superintelligence explosion could occur between 2050 and 2100.
On a side note, nobody really knows – this is all speculation.
The Nature of a Self-Improving AI System:
Let’s talk about the nature of a self-improving AI system, if one is ever developed (which many expect to happen soon). A self-improving AI is a system that understands its own behaviour and has both the authority and the capacity to change itself in order to improve how it functions. It is, in effect, a self-aware system that would not depend on humans for changes or monitoring. A sufficiently intelligent system could also design an even better system; such a system would surpass the limits of human intellect, and the result might be an intelligence explosion – the very scenario at the centre of today’s AI debates.
What Will Self-Improving AI Look Like?
Self-improvement would let a system allocate its own computational and physical resources according to predefined principles, tuning things such as data compression, algorithm optimisation, reversible computation and virtualisation of physical hardware. It could also erase the need for thousands of jobs currently done by humans, from business operations and academic writing services to drivers and service staff. It is hard to visualise what a self-improving AI system would actually look like. However it turns out, one thing seems certain: the process is unlikely to be pleasant for humanity. A toy sketch of the self-tuning idea is given below.
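To make the idea slightly more concrete, here is a minimal, entirely hypothetical Python sketch of a system that measures its own performance and rewrites its own configuration – a toy stand-in for genuine self-improvement, nothing more. All the names here (`Config`, `measure_performance`) are illustrative assumptions, not any real system’s API.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Config:
    """Hypothetical settings the system is allowed to tune about itself."""
    batch_size: int = 32
    compression_level: int = 1

def measure_performance(cfg: Config) -> float:
    """Stand-in metric: a made-up score that peaks at one configuration."""
    return -abs(cfg.batch_size - 128) - abs(cfg.compression_level - 6)

def self_improve(cfg: Config, steps: int = 100) -> Config:
    """Toy self-improvement loop: propose a change to its own config,
    keep the change only if the measured score improves."""
    best_score = measure_performance(cfg)
    for _ in range(steps):
        candidate = replace(
            cfg,
            batch_size=max(1, cfg.batch_size + random.choice((-8, 8))),
            compression_level=max(0, cfg.compression_level + random.choice((-1, 1))),
        )
        score = measure_performance(candidate)
        if score > best_score:  # accept only strict improvements
            cfg, best_score = candidate, score
    return cfg

print(self_improve(Config()))
```

The crucial point the sketch glosses over is that a real self-improving system would also get to change the metric and the loop itself, which is exactly where the danger discussed above comes from.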
How to Ensure Safe Modifications in a Self-Improving AI?
When systems are allowed to modify themselves, they still have to be monitored to check that every change is safe and that the system is not making mistakes that could prove terrible for others. This requires developers to understand all the ways in which an AI could modify itself. There are two broad solutions to this issue:
- The first solution is to limit a system’s ability to create other AI platforms or AI agents. This would stop an AI from spawning new replicas or clones, which could prove detrimental to the existence of the human race.
- The second possible solution is to allow only a small set of self-improvement options whose safety has been verified by the developers in advance. A few examples of such changes include automated memory upgrades, processor changes and installing software updates. A sketch of this approach follows the list.
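As a rough illustration of the second approach, here is a minimal Python sketch of a whitelist gate that refuses any self-modification the developers have not pre-approved. The modification-request interface is a hypothetical assumption for the example; nothing here is a real safety mechanism.

```python
from dataclasses import dataclass

# Hypothetical set of modification kinds developers have vetted in advance.
APPROVED_MODIFICATIONS = {"memory_upgrade", "processor_change", "software_update"}

@dataclass
class ModificationRequest:
    kind: str      # e.g. "memory_upgrade"
    details: str   # free-form description of the proposed change

def apply_modification(request: ModificationRequest) -> bool:
    """Apply a self-modification only if its kind is on the whitelist.

    Anything outside the whitelist (including creating new AI agents)
    is rejected and flagged for human review.
    """
    if request.kind not in APPROVED_MODIFICATIONS:
        print(f"REJECTED (needs human review): {request.kind}: {request.details}")
        return False
    print(f"Applying vetted modification: {request.kind}: {request.details}")
    return True

apply_modification(ModificationRequest("software_update", "security patch 1.2.3"))
apply_modification(ModificationRequest("spawn_agent", "replicate self to new host"))
```

The design choice is deliberately conservative: the gate defaults to rejection, so a new kind of modification can only be applied after developers have explicitly added it to the approved set.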
Several answers to this complicated problem of AI self-improvement are currently being explored. Much of today’s work focuses on testing candidate systems as solutions and on ensuring that developers have a practical way to address these concerns.
Who Can Provide the Safest Solution to the Problem of AI Self-Improvement?
Whatever the safest solution to the problem of AI self-improvement turns out to be, it will only emerge if qualified AI researchers work closely together on the issue. Developers must limit an AI’s ability to make unchecked changes to its own systems. Now is the right time for researchers to collaborate and save the world from the potential dangers associated with the rapid development of AI systems.