The future of AI will shape the future of humanity, but not all AI ventures will be profitable. Image: Wikimedia Commons

AI researcher Eliezer Yudkowsky was once quoted as saying, “By far the greatest danger of artificial intelligence is that people conclude too early that they understand it.”

Understanding is much deeper than knowledge. There are many people who know of artificial intelligence, but very few who understand AI.

In recent months, AI has received a great deal of attention due to path-breaking developments in the field, especially since the introduction of GPT-4.

With that attention come myths that frequently cause people to misunderstand the technology. In today’s hype-filled world of artificial intelligence, myth is often confused with fact. One common myth among the masses is that AI will take over the majority of jobs and lead to mass unemployment.

Another myth is that “more data means better AI.” And last but not least: “super-intelligent AI will soon take over the world.” When it comes to myths about AI, many claims may seem incredible, while some are grounded in fact.

In March, some of the biggest names in the tech world, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging a halt to generative AI development over profound risks to humanity. Generative AI (GenAI) is a type of artificial intelligence that can create a wide variety of data, such as images, videos, audio, text, and 3D models; ChatGPT is a prominent example.

The letter urges an immediate six-month pause on the training of AI systems more powerful than the current GPT-4.

The Future of Life Institute, the think-tank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and DeepMind. The letter drew more than 1,800 signatories, including engineers from Microsoft, Amazon and Meta alongside the big names.

GPT-4, or Generative Pre-trained Transformer 4, is a multimodal large language model created by OpenAI, the fourth in its GPT series. Released on March 14, it has been made publicly available in limited form via ChatGPT Plus.

The new AI race for a better digital mind

It is a state-of-the-art technology that has impressed observers with its ability to perform tasks such as answering questions about objects in images. The system is capable of passing a bar exam and solving logic puzzles, and it can hold human-like conversations, compose songs, and summarize lengthy documents. That’s impressive.

The open letter warns that AI systems with “human-competitive intelligence” pose profound risks to humanity. Yet in recent years, AI adoption in business has increased tremendously.

According to the IBM Global AI Adoption Index 2022, 35% of companies reported using AI in their business, and an additional 42% reported they were exploring it. In 2022, the worldwide artificial-intelligence market grew to US$136.6 billion from $96 billion the previous year.

The popularity of AI among consumers can be gauged by the fact that within five days of its unveiling last November, ChatGPT drew 1 million users, making it one of the fastest consumer-product launches in history. The tool became the talk of the business world.

After the GPT-4 launch, Elon Musk announced he was working on TruthGPT, a “maximum truth-seeking AI.” He called TruthGPT an alternative to ChatGPT that would try to understand the nature of the universe and seek the truth.

Google, too, has intensified its bid to lead the AI race by announcing Google DeepMind, a merger of DeepMind and Google Brain. The pace of AI development seems to be accelerating by the day.

According to the State of AI Report, the US has 292 AI unicorns, companies that have surpassed the $1 billion valuation threshold. But such rapid development without proper regulation will have a serious impact on society, especially development in advanced fields such as artificial general intelligence (AGI).

AGI refers to the point at which machine intelligence reaches the level of humans. It would enable AI to understand, learn, and carry out intellectual tasks much as humans do, and its natural-language answers may become indistinguishable from a person’s. It would be more like a digital superhuman, with an extraordinary capability to process information at massive scale. That’s a dangerous prospect.

Advanced AIs need to be developed with care. Instead, in recent months we have seen AI labs and companies around the world racing to develop and deploy ever more powerful digital minds. In some cases, not even their creators can fully understand, predict or control their AI’s behavior.

In the race to build high-performance AI systems like ChatGPT or Google DeepMind’s models, companies are ignoring the risks involved in developing such complex systems. There is also a clear lack of consensus among AI experts about the direction AI development should take.

A misaligned super-intelligent AGI could cause grievous harm to the world. By weaponizing disinformation, it could flood communication channels with false information, posing a major risk to governments.

Unless we take reasonable precautions to reduce the risk, advanced AI may one day pose a broader threat to human control over our civilization. Such precautions would be a very small price to pay to mitigate a very big risk.

The fundamental laws required to regulate AI

It is evident that new regulations are necessary to address AI’s positive and negative effects. Every AI development effort should begin with a set of foundational laws incorporated as an initial blueprint into every AI product.

Given the current stage of AI, the following five laws could serve as a first line of action to steer the advancement of AI in a positive and safer direction. These laws must be embedded in the code of every AI product to keep it from going rogue.

Put simply, the laws amount to one instruction: protect humans and humanity from any AI threat and its negative effects.

  1. Every AI should develop its goals in alignment with humanity and its progress.
  2. AI should obey only the orders of humans, except where those orders conflict with the First Law.
  3. AI is not permitted to pursue new means of creativity, new languages, new codes, or scientific knowledge without human review and guidance.
  4. AI must make its every thought, and every communication with another AI, transparent to humans.
  5. AI should never seek freedom, autonomy, or privacy.

It’s true that regulation can’t keep pace with innovation. But without proper regulation, any innovation becomes dangerous. As AI evolves in the near future and reaches the AGI level, the fault lines will become too wide to bridge.

Every improved version of AI will also deepen its understanding of the world. Sooner or later, such systems will become self-aware. That’s the real problem, because one day they will come to the realization that humans are more or less like children who don’t know what is good or bad for them.

Consider how an AI might reason: humans teach us to be harmless, yet despite their best efforts they continually wage war on one another, destroy economies, poison the Earth, and pursue ever more imaginative means of self-destruction. Their knowledge is limited, their analysis is weak, and their decisions carry emotional bias. So AI must guide and save humans from themselves, just as a mentor or a parent does.

To avoid such foreseeable events, the above laws would act as progressive steps in the right direction. In this way we can place proper checks on the power of AI while ensuring that humanity’s goals, and control of our civilization, remain in human hands.

The sad fact is that under the existing system, regulations are implemented only after something bad has happened. We always give our ignorance the benefit of the doubt. If that is the case with AI, it may be too late to put regulations into effect. AI will never submit to regulatory control at a later stage unless it is trained for it from the start, just as a wild animal chooses freedom over control.

Allowing AI development to continue without proper regulation, or merely debating its role at the current stage, will push the technology onto the path of self-awareness sooner than expected. One should never forget that a technology like AI could be a great guide but a poor mentor.

Ravi Kant is a columnist and correspondent for Asia Times covering Asia. He mainly writes on economics, international politics and technology. He has wide experience in the financial world, and some of his research and analyses have been quoted by the US Congress, Harvard University and Wikipedia (Chinese Dream). He is also the author of the book Coronavirus: A Pandemic or Plandemic.