So as a professional scrutineer of AI applications within the DoD, I share the sentiment toward such AI Fear Syndrome...
I recently enabled our Flag-level Critical Non-Concur position during a recent Flag Officer review of a draft Order/Policy on USMC guidance for advancing Data and AI... based on the pace being too fast without addressing all the RISKS...
But let's take an OBJECTIVE LOOK at these TWO rising AI BEAST candidates, and compare and contrast the potential AI Beast Master: Professor Yuval Noah Harari versus Elon Musk...
Elon Musk:
- Elon's initiative is AIMED at HELPING humans with existing brain injuries, paralysis, or other disabilities...
- Essentially, the initiative calls for VOLUNTEER participation from patients WHO CHOOSE the option of a better quality of life.
- Elon is ON RECORD (see below) advocating for "Responsible AI".
- BLUF - Elon's intentions are aligned with using AI for GOOD...
- Elon DOES NOT advocate for AI integration with HUMANITY!
Yuval Noah Harari:
- Harari Advocates for AI integration for the benefit of ALL of HUMANITY!
- Advocates for the End of Human History...
- Advocates that AI can take People where Humans have never gone before...
- Advocates for PLAYING GOD!
Who is Professor Yuval Noah Harari, Klaus Schwab's Top Advisor?
welovetrump.com › 2022/03/08 › who-is-professor (Mar 8, 2022)
"Is this the mastermind behind the demented ideas of World Economic Forum founder Klaus Schwab? For an introduction to the mind of Yuval Noah Harari, here's the quote that appears..."
AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum
May 14, 2023
In this keynote and Q&A, Yuval Noah Harari summarizes and speculates on 'AI and the future of humanity'. There are a number of questions related to this discussion, including: "In what ways will AI affect how we shape culture? What threat is posed to humanity when AI masters human intimacy? Is AI the end of human history? Will ordinary individuals be able to produce powerful AI tools of their own? How do we regulate AI?" The event was organized and produced by the Frontiers Forum, dedicated to connecting global communities across science, policy, and society to accelerate global science-related initiatives. It was produced and filmed with support from Impact on April 29, 2023, in Montreux, Switzerland.
-
https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html
Elon Musk and Others Call for Pause on A.I., Citing 'Risks to Society ...
Mar 29, 2023 · More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter...
Statement on AI Risk
AI experts and public figures express their concern about AI risk.
AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
https://www.safe.ai/statement-on-ai-risk#open-letter