Sora, the new ‘deepfake’ factory. Can OpenAI put YouTubers out of work?

“Sam, please don’t make me homeless,” the wildly popular YouTuber MrBeast told the CEO of OpenAI four minutes after he announced Sora, the company’s new artificial intelligence (AI) model for video generation. Some 20 minutes later, MrBeast asked for a video of a monkey playing chess and, almost magically, Altman delivered it. It lasts barely 10 seconds; the monkey turns around, looks at the camera, and the world has changed forever.

An AI that generates video with this level of quality and realism, this soon, was the company’s next logical step after ChatGPT. When I heard about the launch, I knew the videos I was about to see would be impressive. And they are. But its impact, and what it entails, runs much deeper. “What you have to understand is that Sora represents something bigger than a video generator. It is a ‘bad’ simulator of the real world, and as a byproduct, it is capable of generating videos… The feat is COLOSSAL,” Spanish AI popularizer Carlos Santana said on X.

One of the videos that OpenAI has created to promote Sora.

That has been the most talked-about feature in the few hours since the announcement: the model’s ability to understand real-world physics and dynamics. Obviously it has flaws, and big ones, like this duck that flies backwards (which is why Santana calls it “a ‘bad’ simulator”), but there it is, simulating reality to the consumer’s liking and in seconds. It is only a matter of time before it gets it exactly right, and probably not much time at all.

What will actors, makeup artists, lighting technicians, camera operators, sound technicians, location scouts, producers, graphic designers, and video game artists do then? Think about it. How much time, budget, and crew would have been needed to shoot the video of the monkey playing chess? Checkmate, audiovisual industry. Because Altman did it in minutes and for free.

Now it’s time for the typical phrases with which Big Tech tries to calm society every time its products torpedo the foundations of our productive and labor structures:

  1. There will always be a need for a human in charge to tell the system what to do.
  2. It will democratize the generation of audiovisual products.
  3. It will boost the creativity of creatives.

All of that is true, but so is the fear of MrBeast and of everyone who makes a living anywhere along the audiovisual industry chain. This affects cinema, television, advertising, video games, music, the media and, of course, the influencers, tiktokers, youtubers, content creators, or whatever you want to call them.

This is what always happens with automation, what has been warned about for years. We don’t know how many jobs will disappear; Goldman Sachs says 300 million in the next decade. Maybe more, maybe less, but there is no doubt that the audiovisual industry has reasons to worry, just as this humble journalist worried when she read the column written by GPT-3 in The Guardian in 2020, long before the world met ChatGPT.

The difference from previous industrial revolutions is that earlier waves of automation displaced very low-skilled jobs, and that was a good thing, whatever the Luddites said, because it was relatively easy to retrain those workers. But the threat of artificial intelligence falls on medium-skilled jobs, whose professionals will have a much harder time relocating.

They will be able to retrain quickly as Glovo delivery riders, Uber drivers, content moderators, and data labelers, but acquiring the demanding technical skills associated with today’s most in-demand jobs, such as cybersecurity, will cost them much more. Good luck with that reskilling.

There’s also the issue of OpenAI stealing/plagiarizing/drawing inspiration from the work of all those who produced the thousands, probably millions, of videos it will have needed to train its new model. “Meta, Google and OpenAI are using the hard work of newspapers and authors to train their AI models without compensation or recognition,” US Democratic Senator Richard Blumenthal denounced at a hearing on the impact of AI on journalism.

In Sora’s case, the situation is exactly the same. “When I started working in AI four decades ago, it simply didn’t occur to me that one of the biggest use cases would be derivative imitation, transferring value from artists and other creators to megacorporations while using massive amounts of energy. This is not the AI I dreamed of,” said psychologist, neuroscientist and AI expert Gary Marcus. Since the model was announced, he has not stopped tweeting (or whatever they call it now) to point out its flaws and limitations from his privileged vantage point as an expert on the human brain.

IS THAT REAL?

And, of course, there is misinformation, THE ISSUE of misinformation. What can we say about it that hasn’t already been said? Before Sora saw the light of day, the World Economic Forum had already warned that AI-generated misinformation is the second-biggest risk facing humanity right now, behind only extreme weather. So we are facing the same threat as always, but on another dose of steroids.

The more realistic and easier to make these creations become, the easier it will be for bad actors to use them for nefarious purposes, something especially worrying in a year as markedly electoral as 2024. “The greatest immediate danger of current AI? Probably: creating disinformation to influence elections,” Marcus said a couple of weeks ago.

Of course, OpenAI has not been slow to publicize the precautions it is taking to prevent misuse. “The company is creating tools to help detect misleading content, with detectors that can tell when a video has been generated by Sora. It has also developed powerful image classifiers that review the frames of every generated video and ensure they comply with its usage policies before showing them to the user,” say our colleagues at El País.

Unfortunately, no matter how many flaws it has, no matter how many ducks fly backwards, the advance of technology is unstoppable and keeps accelerating. It doesn’t matter that regulation prohibits the dissemination of pornographic deepfakes, phone calls with artificial voices, and AI-generated election advertising; the tools to do all of this are out there, just like bombs, guns, and biological weapons, no matter how prohibited they are.

So, no matter how many tools the industry develops, whoever wants to use AI for evil will use AI for evil, and each of us will be ultimately responsible for not getting fooled. The future looks bleak in this regard, especially as the tiktokization of information and the acceleration of life are limiting our willingness to reason and to understand the context and complexity of problems, and with it, our critical thinking.

But, as a friend told me as soon as he heard about the release of Sora, “I can see the future clearly: ‘put on Star Wars, but as if it were directed by Wes Anderson and ended with Vader adopting an Ewok.’” That possibility is also there. So, it may all end up being a lie, but at least we’ll get to watch the lie we choose, even if we no longer have a job to live off of.
