Well, that didn't happen, obviously.

I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation.

On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was a huge amount of anxiety about the existential risk AI poses, but nobody felt they could discuss it openly "for fear of being ridiculed as Luddite scaremongers." "The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns," he says. "Six months later, it's clear that part was a success."

But that's about it: "What's not great is that all the companies are still going full steam ahead and we still don't have any meaningful regulation in America. It looks like US policymakers, for all their talk, aren't going to pass any laws this year that meaningfully rein in the most dangerous stuff."

Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. "It's also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can't pause alone," Tegmark says. Pausing alone would be "a disaster for their company, right?" he adds. "They just get outcompeted, and then that CEO will be replaced with someone who doesn't want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause."

So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that will "understand the true nature of the universe." (Musk is an advisor to the FLI.) "Clearly, he wants a pause just like a lot of other AI leaders. But as long as there isn't one, he feels he has to also stay in the game."

Why he thinks tech CEOs have the goodness of humanity in their hearts: "What makes me think that they really want a good future with AI, not a bad one? I've known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it."

Response to critics who say focusing on existential risk distracts from current harms: "It's critical that those who care a lot about current problems and those who care about imminent upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it's great that they're doing it. I care about these problems very much. If people engage in this kind of infighting, it's just helping Big Tech divide and conquer all those who want to really rein in Big Tech."
