For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
This NYT piece got me thinking about the rapidly evolving state of thought on AI, which is understandable, I suppose, given the rapid appearance of smart chatbots and the focus on them.
I remain committed to the AI long view (20 years plus). This is added to the AI Series.
Author of NYT piece: Cade Metz, who reported this story in Toronto. Published May 1, 2023; updated May 4, 2023.
Cade reports on Hinton's departure from Google, which he made so that he can be transparent about his changed views of AI.
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems …
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
I base my commentary on the NYT piece and on listening to the NYT podcast this morning. During the interview, Hinton told Cade about the risks he sees, which I loosely categorize thus:
- Disinformation, which could shape the average person's views of the world and of reality. He bases this risk on the potential for manipulation of words, images, and even video ("even" was Hinton's word).
- Employment – jobs being replaced by AI
- Risk to humanity. Hinton speaks of super warriors, including robots sustained by AI. He sees the potential for aggressive countries to be far less hesitant about mass attacks on their enemies when AI carries the fight, given the consequent low risk to the attacker's own soldiers, and the potential for attacks far more lethal and broad than a small belligerent country could muster relying on human soldiers.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
Two groups have issued open letters proposing moratoriums or halts to AI development until regulation can be put in place. My observation on these useless letters is that they are the equivalent of an Englishman in 1698 objecting to steam engines because of the risk to horses' well-being and livelihood.
Thomas Savery began it all with his steam pump in 1698. He was followed by Thomas Newcomen's first real steam engine in 1712.
Open letter #1, Future of Life Institute site – Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'
More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems.
Open letter #2, Association for the Advancement of Artificial Intelligence (AAAI):
Ensuring that AI is employed for maximal benefit will require wide participation. We strongly support a constructive, collaborative, and scientific approach that aims to improve our understanding and builds a rich system of collaborations among AI stakeholders for the responsible development and fielding of AI technologies. Civil society organizations and their members should weigh in on societal influences and aspirations. Governments and corporations can also play important roles. For example, governments should ensure that scientists have sufficient resources to perform research on large-scale models, support interdisciplinary socio-technical research on AI and its wider influences, encourage risk assessment best practices, insightfully regulate applications, and thwart criminal uses of AI.
Relevance to the AI discussion:
- The risks Hinton cites, and the reasons he gives for his change of mind and departure from Google, are his and his alone. To me they read as weak, displaying a lack of depth and analytical context for someone termed the "Godfather."
- I see no evidence of the decades-long view required to see the full impacts, some of which I have begun to touch on here.
- Cade refers to Hinton as having always lived in the future. As someone who believes I live in the future of my own fields, I see little evidence for this characterization in Hinton's latest positioning.
- My reading is that there is more to the Hinton story.
- The open letters are replete with vested interests. Again, the views are theirs and theirs alone, but I see people who stand to benefit from AI setting the stage.
- New entrants yet to appear or be identified could be the risk the letters' signatories fear most.
Tags #AI #AI-series #AI-debate