ChatGPT was released in Nov 22. It gathered a million users within a week. And 100 million within 2 months. This generative Ai has been so good and powerful that the potential of a doomsday scenario has been voiced out by many including Elon Mask and Steve Wozniak.
Prior to the launch, in August 2022, Ai Impacts, a US research group, surveyed 700 machine learning researchers about their predictions on Ai risks. The result yielded a 5% probability of Ai causing an ‘extremely bad’ outcome, such as human extinction.
Li Fei Fei, Ai luminary at Stanford, talks of a ‘civilizational moment’ for Ai. Geoff Hinton, another Ai bigwig from the University of Toronto, said, the judgement day is not inconceivable. Robert Trager of the Centre for Governance on Ai said one risk of such large language models (llms) is “making it easier to do lots of things – and thus allowing more people to do them (harm)”.
In a recent survey of super forecasters and Ai experts, the median Ai expert gave a 3.9 per cent chance of an existential catastrophe (<5,000 humans survived) owing to Ai by 2100. The median super forecaster gave only 0.38 per cent. The difference may probably be due to field selection bias by Ai researchers.
So, how do we control Ai?
Before Chatgpt4 (C4), OpenAI used several approaches to reduce the risk of accidents and misuse. One is called ‘reinforcement learning from human feedback’ (rlhf). Rlhf asked humans to provide feedback on whether the model response to a prompt was appropriate. Then the model is updated based on feedback. The goal is to reduce the likelihood of producing harmful content when given similar prompts in the future. The drawback is that humans often disagree about what counts as ‘appropriate’. Rlhf also made C4 far more capable in conversation, thus propelling the Ai race.
Another approach, borrowed from war gaming, is ‘red-teaming’. OpenAI worked with Alignment Research Centre (ARC) to put its model through a battery of tests. The red-teamer job was to attack the model by getting it to do something it should not, in the hope of anticipating mischief in the real world.
Another idea is to use Ai to police Ai. Sam Bowman of New York University and of Anthropic Ai firm, has written on topics like “Constitutional Ai”, in which a secondary Ai model is asked to assess whether the output from the main model adheres to certain ‘constitutional principles’.
In general, governments can approach control Ai using one of the following three strategies:
I. Light touch – with no new rules or regulatory bodies. The approach is to apply existing regulations to Ai systems. e.g., UK, US.
II. Tougher line – The government would create law categories for different uses of Ai classified according to risk, with stringent monitoring and disclosure. Some Ai to be banned such as subliminal advertisements and those with remote biometrics. The government would impose fines for non-compliance. E.g., EU.
III. Toughest – Under this strategy, the government would treat Ai like medicines, with a dedicated regulator, strict testing and pre-approval requirements. E.g., in China, where all new Ai has to undergo a security review before release.
Even if efforts to produce safe models work, future Ai models could work around them. E.g., Ai models have already made new discoveries in biology. Hence it is not inconceivable that one day ai may design dangerous biochemicals themselves.
The general attitude of the world seems to be better safe than sorry. Dr Li of Stanford thinks we ‘should dedicate more, much more resources to research on Ai alignment and governance. Dr Trager of the Centre for Governance on Ai, on the other hand, supports the creation of bureaucracies to govern Ai standards and do safety research.
In the meantime, Ai researchers supporting much more funding for safety research has grown from 14 per cent in 2016 to 35 per cent in 2023. ARC is also considering developing a safety standard for Ai.
Immediate impacts before the judgement day
The probability of the end of the world may be low enough to cast aside, yet everyone seems to agree that the immediate impact of ai would be on jobs. The big tech already retrenched tens of thousands of staff in the last twelve months alone. These jobs are not coming back. Tyna Eloundou of Openai and colleagues estimated that ‘around 80 per cent of the US workforce could have at least 10 per cent of their tasks affected by the intro of llms. Based on Ms Eloundou’s estimates, Ai would result in a net loss of around 15 per cent of US jobs. Some could move to industries experiencing worker shortages, such as hospitality. A big rise in unemployment would follow, maybe up to 15 per cent reached during Covid.
Edward Felten of Princeton University and colleagues, conducted a similar exercise, legal services, accountancy, and travel agencies come out at or near the top of professions most likely to lose out. According to him, 14 of the top 20 occupations most exposed to Ai are teachers.
Goldman Sachs’s prediction is somewhat more positive, stating the widespread adoption of Ai could drive a seven per cent or almost 7T increase in annual global GDP over a 10-year period. Academic studies predict a three per cent rise in annual labour productivity in firms that adopt Ai.
Another concern from this has been who would eventually benefit most from Ai. Ai profits could end up in just one org – Openai. Generative ai has some real monopolistic characteristics. C4 reportedly cost more than $100m to train. There is also a lot of proprietary knowledge about data for training the models plus the users’ feedback.
Should you be worried about that job loss?
In areas of the economy with heavy state involvement such as healthcare and education, technological change tends to be super slow. Governments may have policy goals such as maximization of employment levels, that are inconsistent with improved efficiency. These industries are likely to be unionized and unions are good at preventing job losses, according to Mark Andreessen of Andreessen Horowitz. Only the bravest of all governments would replace teachers with Ai.
A paper by David Autor of MIT and colleagues said about 60 per cent of the jobs in today’s America did not exist in 1940. The ‘fingerprint technician’ was added in 2000. ‘Solar photovoltaic technician’ in 2018. The ai economy is likely to create new occupations which today cannot even be imagined. The personal computer was invented in the 1970s. In 1987, Robert Solow, an economist, famously declared that the computer age was ‘everywhere except for productivity stats’.
Jobs beyond the reach of Ai are blue-collar works, such as construction and farming, accounting for 20 per cent of the rich world GDP and in industries where human-to-human contact is an inherent part of the service, such as hospitality and healthcare.
In summary, we can be less concerned about job losses and individual impacts of Ai. We should be more concerned about the balance of power, changing of the society and nations and the destruction of them, simply based on the extrapolation of how damaging social media is. Just imagine, C4 is a godsend for nimby fighting against a government plan or a development program. In five minutes, he can produce a well-written 1,000-page objection. Someone then has to read and respond. Spam emails would be harder to detect. Fraud cases would soar. Banks will need to spend more on preventing attacks and compensating people who lose out. Combining that with the auto-creating of comments in social media would bring fake news and mal-information to a whole new level. That’s exactly the future without the strict governance of Ai!