What does a Harry Potter fanfic have to do with OpenAI?

Have you ever vomited and had diarrhea at the same time? I have, and when it happened, I was listening to a fan-made audiobook version of Harry Potter and the Methods of Rationality (HPMOR), a fan fiction written by Eliezer Yudkowsky.

No, the dual-ended bodily horror was not fanfic-induced, but the two experiences are inseparable in my mind. I was shocked to discover years later that the 660,000-word fanfic I marathoned while sick has some strange intersections with the richest technorati, including several figures involved in the current OpenAI debacle.

Case in point: in an easter egg spotted by 404 Media (one too minor for anyone – even me, someone who has actually read the thousand-odd-page fanfic – to notice), there’s a one-time mention of a Quidditch player in the story’s wider universe named Emmett Shear. Yes, the same Emmett Shear who co-founded Twitch and was just named interim CEO of OpenAI, arguably the most influential company of the 2020s. Shear was a fan of Yudkowsky’s work and followed the serialized story as it was published online. So, as a birthday present, he was given a cameo.

Shear is a longtime fan of Yudkowsky’s writings, as are many of the AI industry’s key players. But this Harry Potter fanfic remains Yudkowsky’s most popular work.

HPMOR is an alternate-universe rewrite of the Harry Potter series, starting from the premise that Harry’s aunt Petunia married an Oxford biochemistry professor instead of the abusive dolt Vernon Dursley. So Harry grows up as a precocious child steeped in rationalist thinking, an ideology that prizes experimental, scientific approaches to problems while eschewing emotion, religion and other imprecise measures. It’s not three pages into the story before Harry is quoting the Feynman Lectures on Physics to try to resolve a disagreement between his adoptive parents about whether or not magic is real. If you thought the actual Harry Potter could be a bit frustrating at times (why doesn’t he ever ask Dumbledore the most obvious questions?), be prepared for this Harry Potter, who could give the eponymous “Young Sheldon” a run for his money.

It makes sense that Yudkowsky runs in the same circles as many of the most influential people in AI today, as he himself is a longtime AI researcher. In a 2011 New Yorker piece on the techno-libertarians of Silicon Valley, George Packer reports from a dinner party at the home of billionaire venture capitalist Peter Thiel, who would later co-found and invest in OpenAI. As “blondes in black dresses” poured the men wine, Packer dined with PayPal co-founders like David Sacks and Luke Nosek. Also at the party was Patri Friedman, a former Google engineer who got funding from Thiel to start a nonprofit aimed at building floating, anarchist sea civilizations inspired by the Burning Man festival (after fifteen years, the organization does not appear to have made much progress). And then there’s Yudkowsky.

To further connect the parties involved, behold: a ten-month-old selfie of the now-ousted OpenAI CEO Sam Altman, Grimes and Yudkowsky.

Yudkowsky is not a household name like Altman or Elon Musk. But he keeps cropping up in the stories behind companies like OpenAI, and even behind the famous romance that brought us children named X Æ A-Xii, Exa Dark Sideræl and Techno Mechanicus. No, really – Musk once wanted to tweet a joke about “Roko’s Basilisk,” a thought experiment about artificial intelligence that originated on LessWrong, Yudkowsky’s blog and community forum. But, as it turned out, Grimes had already made the same joke about a “Rococo Basilisk” in the music video for her song “Flesh Without Blood.”

HPMOR is a literal recruiting tool for the rationalist movement, which finds its virtual home on Yudkowsky’s LessWrong. Through an admittedly entertaining story, Yudkowsky uses the familiar world of Harry Potter to illustrate rationalist ideology, showing how Harry works against his cognitive biases to become a master problem-solver. In a final showdown between Harry and Professor Quirrell – his mentor in rationalism who turned out to be evil – Yudkowsky broke the fourth wall and gave his readers a “final exam.” As a community, readers had to submit rationalist theories explaining how Harry could get himself out of a deadly predicament. Thankfully, for the sake of happy endings, the community passed.

But the moral of HPMOR is not just to be a better rationalist, or to be as “less wrong” as possible.

“To me, most of HPMOR is about how rationality can make you more effective, but being more effective can still be worse,” a friend of mine who read HPMOR told me. “I feel like the whole point of HPMOR is that rationality doesn’t matter at the end of the day if your alignment is evil.”

But, of course, we can’t all agree on a definition of good versus evil. This brings us back to the turmoil at OpenAI, a company trying to create an AI that is smarter than humans. OpenAI wants to align this artificial general intelligence (AGI) with human values (such as the human value of not being killed in an apocalyptic, AI-induced event), and it just so happens that this “alignment research” is Yudkowsky’s specialty.

In March, thousands of prominent AI figures signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The signatories included Meta and Google engineers; founders of Skype, Getty Images and Pinterest; Stability AI founder Emad Mostaque; Steve Wozniak; and even Elon Musk, an OpenAI co-founder who resigned from its board in 2018. But Yudkowsky did not sign the letter, and instead wrote an op-ed in TIME Magazine arguing that a six-month pause is not radical enough.

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” Yudkowsky wrote. “There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.”

While Yudkowsky argues for a doomerist approach to AI, the OpenAI leadership kerfuffle underscores the wide range of beliefs about how to navigate a technology that may pose an existential threat.

Acting as interim CEO of OpenAI, Shear – now one of the most powerful people in the world, and not just a Quidditch seeker in a fanfic – has been posting memes about the different factions in the AI debate.

There are the techno-optimists, who support tech’s growth at all costs, because they think any problems caused by this “grow at all costs” mentality can be solved by technology itself. Then there are the effective accelerationists (e/acc), which seems like techno-optimism, but with more language about how growth at all costs is the only way forward because the second law of thermodynamics says so. Safetyists (or “decels”) support technological progress, but only in a regulated, safe manner (meanwhile, in his “Techno-Optimist Manifesto,” venture capitalist Marc Andreessen decries “trust and safety” and “tech ethics” as the enemy). And then there are the doomers, who think that when AI outsmarts us, it will kill us all.

Yudkowsky is a leader of the doomers, and he’s also someone who has spent the last few decades running in the same circles as what seems like half of OpenAI’s board. A popular theory about Altman’s ouster is that the board wanted to appoint someone more aligned with its “decel” values. So, enter Shear, who we know is inspired by Yudkowsky and also considers himself a doomer-slash-safetyist.

We still don’t know exactly what’s going on at OpenAI, and the story seems to change about once every ten seconds. For now, techy circles on social media continue to fight over decel versus e/acc ideology, using the backdrop of the OpenAI mess to make their arguments. And in the midst of it all, I can’t help but find it fascinating that, if you squint, it all traces back to a single Harry Potter fanfic.
