Views & Opinions
The singularity has been described as a hypothetical future in which technological growth is out of control and irreversible. Unfortunately, writes McGill University Oppenheimer Scholar J. Mauricio Gaona, the singularity is already here, and it "constitutes artificial intelligence's greatest threat to humanity."
"AI will be effective not only when machines can do what humans do (replication), but when they can do it better and without human supervision (adaptation)," he writes. Unsupervised learning algorithms are what will define the future of autonomous driving, non-invasive medical care, stock market prediction and much else. Some tech experts insist that AI is being designed solely to serve humanity. But as machines gradually become more independent, and humans more dependent on them, humans are bound to become less intelligent and machines more so. "From law school admission exams to medical licensing, the use of unsupervised learning algorithms (such as Chat-GPT3 and BARD) show that machines can do things that humans do today," Gaona writes. But they also "constitute the last warning to humanity: Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity." To help prevent this outcome, Gaona proposes the creation of various international bodies and treaties to help oversee the new technology. But time is of the essence. "The greatest risk of all is that humans might realize that AI singularity has taken place only when machines remove from their learning adaptations the flaw of their original design limiting their intelligence: human input." Read the op-ed at TheHill.com. |
Welcome to The Hill's Views & Opinions newsletter. It's Tuesday, May 16. I'm Daniel Allott, bringing together a collection of key opinion pieces from a wide range of voices.
|
|
Op-eds exploring key issues affecting the U.S. and the world:
|
|
By Jonathan Turley, Shapiro Professor of Public Interest Law at George Washington University
The coverage this week has all the markings of state media. The consistent spin. The almost universal lack of details. The absurd distinctions. It is the blind spot of our First Amendment, which addresses the classic use of state authority to coerce and control media. It does not address a circumstance in which most of the media will maintain an official line out of consent rather than coercion.
|
With his numbers falling and a target on his back, Ron DeSantis may find advisers pushing for a "discretion is the better part of valor" excuse that allows him to back out now and run again for the nomination in 2028. But a look at past presidential runs shows that while GOP primary voters are unlikely to punish a failed run now, a real chance may not be on tap later.
|
Along with the technical challenges SpaceX's Elon Musk faces in getting his Starship rocket operational, he has to deal with the environmental regulators who must approve everything, since the Starbase launch facility sits in the middle of a wildlife preserve.
|
By Gregory Wallance, former federal prosecutor
One problem with CNN's attempt to defend a media debacle with media sanctimony about educating the public is that we have long since passed the point where anyone, supporters or opponents, needs an education about Donald Trump.
|
Opinions related to pivotal issues and figures in the news:
|
You're all caught up. See you next time! |
Views expressed by contributors are theirs and not the opinion of The Hill. Interested in submitting an op-ed? Click here. |
1625 K Street NW, 9th Floor, Washington, DC 20006 |
Copyright © 1998 - 2023 Nexstar Media Inc. | All Rights Reserved. |
|
|
|
If you believe this has been sent to you in error, please safely unsubscribe.