TechUpdateAlert

    Is OpenAI’s approach to AI safety a recipe for global disaster?

By techupdateadmin | September 21, 2025 | 4 Mins Read

As AI rapidly advances, leading experts like Eliezer Yudkowsky warn that the pursuit of superintelligence poses an unprecedented existential threat. OpenAI’s practices are facing sharp scrutiny amid rising concerns over safety, ethical boundaries, and chilling implications for humanity’s future.

Over the past few months, concerns about AI safety and privacy have intensified amid reports of minors dying by suicide after forming unhealthy bonds and relationships with AI tools like ChatGPT.

Generative AI has evolved considerably over the years, moving past critical setbacks like frequent hallucinations to sophisticated capabilities that let AI bots generate realistic images and videos, ultimately making it difficult for people to tell what’s real and what isn’t.



Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has put the probability that AI could end humanity at 99.999999%. The researcher warned that the only way to avoid this outcome is not to build AI in the first place.

    Perhaps more concerningly, ChatGPT can be prompted to share a master plan highlighting how it would plan to take over the world and end humanity. Per its step-by-step explanation, we might already be in phase one of the plan, where more people are becoming overly dependent on AI tools to handle redundant and mundane tasks.

As it now seems, AI could be on the precipice of ending humanity if elaborate measures and safeguards aren’t put in place to prevent it from spiraling out of control. However, according to the Machine Intelligence Research Institute’s (MIRI) co-founder, Eliezer Yudkowsky (via The Decoder), none of those safeguards is a viable answer to the existential threat AI poses to humanity.

    Instead, Yudkowsky says the only way around the inevitable doomsday is through an international treaty that mandates the permanent shutdown of AI systems. It’s worth noting that he has been studying and evaluating the risks of advanced AI since the early 2000s, and while speaking to The New York Times, he indicated that:


    “If we get an effective international treaty shutting A.I. down, and the book had something to do with it, I’ll call the book a success. Anything other than that is a sad little consolation prize on the way to death.”

    According to Yudkowsky, approaches like safe AI labs and differentiated risk regulations are only distractions and therefore cannot fully resolve the impending issues and threats that arise from AI development.

“Among the crazed mad scientists driving headlong toward disaster, every last one of which should be shut down, OpenAI’s management is noticeably worse than the pack, and some of Anthropic’s employees are noticeably better than the pack. None of this makes a difference, and all of them should be treated the same way by the law.”

    Machine Intelligence Research Institute co-founder, Eliezer Yudkowsky

He seemingly singled out OpenAI, arguably the most popular AI lab since ChatGPT’s launch, as the worst of the herd chasing the ever-elusive goal of superintelligence.

    Could superintelligence end humanity?

Dr. Roman Yampolskiy shares views similar to Eliezer Yudkowsky’s about AI’s potential threats. (Image credit: The Diary Of A CEO | YouTube)

    Most AI labs that are heavily invested in the industry seem to have a common goal: achieving artificial general intelligence (AGI) and perhaps, with more compute, high-quality training, and resources — superintelligence.

OpenAI CEO Sam Altman has indicated that AGI could be achieved within the next five years, but brushed off safety concerns, suggesting it will whoosh by with surprisingly little societal impact.

    However, Yudkowsky seemingly disagrees with these claims, indicating that any artificial superintelligence developed using current methods will lead to the end of humanity.

    As highlighted in his book (If Anyone Builds It, Everyone Dies):

    “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die.”

Yudkowsky is calling for action from the political class. He describes the current approach of sitting on the fence and delaying regulation, even though some of these breakthroughs will predictably be achieved within the next 10 years, as reckless.

“What is this obsession with timelines?” he added. Yudkowsky argues that because these risks already exist, regulations and safeguards should already be in place.





    © 2026 techupdatealert. Designed by Pro.
