The Times is reporting a warning from top educators on the dangers they see emanating from AI. Such warnings are becoming frequent from others, including Harari in The Economist, Elon Musk and even Sam Altman, head of OpenAI.
This got me thinking about attempting to capture the current state of fear and doom around AI. Here is version 1.
The evolving position on AI
Broadly, the views of three groups are covered here:
- Educators
- Thought leaders
- Government – divergence of British, EU and US approaches
A deeper review of Harari, and of the Musk, Altman et al group, will come later.
The broad risks I see being identified are listed below, although fear of the unknown is prevalent in all of them when you look at the disparity of views and the theme of self-interest:
- Destruction of the human race
- Employment disruption and elimination of jobs
- Dilution of education quality through cheating and reliance on AI to replace student thought
- Fear of the unknown
- Lack of AI regulation, and the resultant fear driving a push to fill the perceived vacuum and produce regulation, which so far lands between the prescriptive EU approach and the lighter British touch
Banks are absent from the current debates. When we consider that:
a) ChatGPT only arrived in 2022, and
b) AI is deeper, broader and larger than chat,
these are early days, and the shoot-first, aim-second approach that lies behind all the regulatory examples so far is assured to miss the mark.
More to come, along with some thoughts I have percolating on the benefits of AI for banks.
What follows are snippets covering some of the relevant discussion:
- the educators' concerns
- British, EU and US Government positions, discussion and evolving thought process on AI
The Times – Educators' concerns
School leaders announce joint response to tech May 19 2023, The Times
Artificial intelligence is the greatest threat to education and the government is responding too slowly to its dangers, head teachers say.
A coalition of leaders of some of the country’s top schools have warned of the “very real and present hazards and dangers” being presented by the technology.
In a letter to The Times, they say that schools must collaborate to ensure that AI works in their best interests and those of pupils, not of large education technology companies. The group, led by Sir Anthony Seldon, the head of Epsom College, also announce the launch of a body to advise and protect schools from the risks of AI.
There is growing recognition of the dangers of AI. Rishi Sunak told reporters at the G7 summit this week that “guardrails” would have to be put around it. The Times reported last week that one of the “godfathers” of AI research, Professor Stuart Russell, had warned that ministers were not doing enough to guard against the possibility of a super-intelligent machine wiping out humanity.
Gillian Keegan, the education secretary, told a conference this month that AI would have the power to transform a teacher’s day-to-day work, taking out much of the “heavy lifting” by marking and making lesson plans.
Head teachers’ fears go beyond AI’s potential to aid cheating, encompassing the impact on children’s mental and physical health and even the future of the teaching profession.
Their letter says:
“Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust? We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools and in the past the government has not shown itself capable or willing to do so.”
—
AI FOR SCHOOLS – open letter to the Times
Sir, As leaders in state and independent schools we regard AI as the greatest threat but also potentially the greatest benefit to our students, staff and schools. Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust? We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools and in the past the government has not shown itself capable or willing to do so. We are pleased, however, that it is now grasping the nettle (“Sunak: Rules to curb AI threats will keep pace with technology”, May 19) and we are eager to work with it.
AI is moving far too quickly for the government or parliament alone to provide the real-time advice schools need. We are thus announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts, to advise schools on which AI developments are likely to be beneficial and which damaging. We believe this initiative will ensure that we can maximise the vast benefits of AI across education, while minimising the very real and present hazards and dangers.
Sir Anthony Seldon, head, Epsom College; Helen Pike, master, Magdalen College School; James Dahl, master, Wellington College; Lucy Elphinstone, headmistress, Francis Holland School; Geoff Barton, general secretary, Association of School and College Leaders; Chris Goodall, deputy head, Epsom & Ewell High School; Tom Rogerson, headmaster, Cottesmore School; Rebecca Brown, director of studies, Emanuel School
—
British, EU and US Government positions, discussion and evolving thought process
We need ‘guardrails’ to regulate AI, Rishi Sunak says at G7 summit
May 18 2023, The Times
New regulation may be needed to tackle artificial intelligence (AI), Rishi Sunak has admitted, in a signal that the government is to adopt a more cautious approach to the technology.
The prime minister said that Britain’s regulations would have to “evolve” amid concerns in some quarters that Whitehall’s approach so far has been too light-touch.
“I think if it’s used safely, if it’s used securely, obviously there are benefits from artificial intelligence for growing our economy, for transforming our society, improving public services,” Sunak told reporters at the G7 summit in Japan. “That has to be done safely and securely and with guardrails in place, and that has been our regulatory approach.
“We have put in place a regulatory approach that puts those guardrails in place, and sets out a set of frameworks and areas where we need to have guardrails so that we can exploit AI for its benefits.”
But in an indication that he expects the UK to require new rules in the future, Sunak added: “We have taken a deliberately iterative approach because the technology is evolving quickly and we want to make sure that our regulation can evolve as it does as well.”
In a white paper in March the government said that rather than enacting legislation it was preparing to require companies to abide by five “principles” when developing AI. Individual regulators would then be left to develop rules and practices. The position appeared to set the UK at odds with other regulatory regimes, including that of the EU, which set out a more centralised approach, classifying certain types of AI as “high risk”.
The White House gathered tech leaders to address the issue and said it was open to bringing forward new laws to ensure AI can safely benefit everyone. Stuart Russell, a leading figure in AI research, last week criticised the UK’s approach, characterising it as: “Nothing to see here. Carry on as you were before.”
Government sources said the pace at which AI is progressing was causing concern in Whitehall that the country’s approach may have been too relaxed.
But Google’s European president Matt Brittin warned about the dangers of over-regulation, insisting technologies are neutral. Speaking at a Deloitte Enders conference, he said: “A fork is technology. I can use it to eat spaghetti or I can stab you in the hand. We do not regulate forks but there are consequences if you go and stab someone.
“It’s a very sensible way to think about how we harness the opportunity of AI. It’s good that we have lots of voices pointing to the risks but that doesn’t mean that we should stop working on it. There are risks if you want to regulate a specific piece of technology. Saying ‘no more forks’ means you miss out on all of the benefits.”
Sunak said he wanted “co-ordination with our allies” as the government examines the area, suggesting the subject was likely to come up at the G7 summit. His official spokesman said: “I think there’s a recognition that AI isn’t a problem that can be solved by any one country acting unilaterally. It’s something that needs to be done collectively.”
That view was echoed by another AI pioneer who called for governments to create an international body like Cern, the European nuclear research organisation, to counter the “danger” the technology poses to democracy. Yoshua Bengio is considered one of the “godfathers of AI” alongside Geoffrey Hinton, who quit Google to warn about the threats of the technology. They jointly won the Turing Award in 2018 for their work on deep learning with Yann LeCun, Meta’s AI chief.
Like Hinton, Bengio sees the “race” between big tech companies to increasingly refine AI as a worry, characterising it as “a vicious circle”. He told the Financial Times he saw a “danger to political systems, to democracy, to the very nature of truth” because of the dynamic.
“Generative” AI models that can easily create high-quality text, images, audio and video are widely considered to pose a threat to democracy as they can be adopted by bad actors to spread disinformation. “If you want humanity and society to survive these challenges, we can’t have the competition between people, companies, countries — and a very weak international co-ordination,” Bengio told the paper.
He has proposed an international coalition to fund AI research that can help humanity. “Like investments into Cern in Europe or space programmes — that’s the scale where AI public investment should be today to really bring the benefits of AI to everyone, and not just to make a lot of money,” he said. Cern has 23 member states and operates the Large Hadron Collider, the world’s largest and most powerful particle accelerator.
Tags #AI #AI-education #AI-society #AI-risks