
In nearly all of the articles I’ve written about generative AI systems, especially those that can generate images, videos, and audio, I highlight the attack vectors they present: above all, deepfakes that are indistinguishable from authentic content.

This year, we’ve already seen one audio-deepfake attack on President Joe Biden. New Hampshire residents received a phone call in which AI-generated audio of the president told them they didn’t need to vote in an upcoming election. As you can imagine, an attack like that is just the tip of the iceberg when it comes to the damage that deepfake misinformation and disinformation campaigns can do to society. Such attacks are likely to increase because it is an election year, and political candidates are using every tool available to gain the upper hand while trying to make their opponents look less qualified.

Now, those attacks have become easier to execute: OpenAI, the leading service provider for generative AI, has announced that it will soon release ‘Sora,’ an AI model that can create realistic videos from text instructions.

A text-to-video AI model

Sora is a text-to-video AI model, meaning that users can now generate hyper-realistic videos through text prompts. The model can generate entire videos, extend existing videos, and animate still images.
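
To make the workflow concrete, here is a minimal Python sketch of what prompt-driven video generation could look like from a developer’s perspective. Everything here is hypothetical: OpenAI had not published a public Sora API at the time of writing, so the endpoint, model name, and parameters below are placeholders meant only to illustrate the shape of a text-to-video request.

```python
import requests

# Hypothetical endpoint and parameters, for illustration only; OpenAI had
# not published a public Sora API at the time of writing.
API_URL = "https://api.example.com/v1/video/generations"  # placeholder URL
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "text-to-video-model",  # placeholder model name
    "prompt": "A golden retriever surfing a wave at sunset, cinematic style",
    "duration_seconds": 10,          # assumed parameter
    "resolution": "1280x720",        # assumed parameter
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()

# Assume the service responds with a URL to the rendered video.
video_url = response.json().get("video_url")
print(f"Generated video available at: {video_url}")
```

The point is simply the interaction model: a plain-language description goes in, and a finished video comes out, with no cameras, actors, or editing software involved.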

Currently, Sora is being made available to a select group of ‘red teamers’ to identify potential harms or risks, along with visual artists, designers, and filmmakers to refine its utility for creative endeavors. In addition, OpenAI is developing tools that can detect misleading content generated by Sora, including a detection classifier and the future inclusion of metadata that will indicate an AI system generated the content.
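
On the metadata point, a hedged illustration: provenance schemes such as the C2PA ‘Content Credentials’ standard, which OpenAI already uses for DALL·E 3 images, embed a signed manifest inside the media file itself. The Python sketch below naively scans a file for the ‘c2pa’ JUMBF label to flag that such a manifest may be present. It is a heuristic only; real verification would use a dedicated C2PA library to validate the manifest’s cryptographic signatures.

```python
def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Naively check whether a media file appears to contain a C2PA
    (Content Credentials) manifest by scanning for its JUMBF label.

    Heuristic sketch only: it can produce false positives on files that
    merely contain the bytes b"c2pa", and it does not verify signatures.
    """
    marker = b"c2pa"
    tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            # Carry a few trailing bytes forward so a marker split across
            # chunk boundaries is still detected.
            if marker in tail + chunk:
                return True
            tail = chunk[-(len(marker) - 1):]


if __name__ == "__main__":
    import sys

    path = sys.argv[1]
    found = has_c2pa_marker(path)
    print(f"{path}: C2PA marker {'found' if found else 'not found'}")
```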

Exploring Sora’s capabilities

What OpenAI probably had in mind when creating Sora was that it would be a critical tool for anyone whose creative work has a video component.

Film & TV producers, animators, artists, and illustrators immediately come to mind, as they can now use this text-to-video model to quickly prototype (or fully draft) a scene and experiment with different styles, without the resources, constraints, and costs of traditional production.

Marketers and advertisers also come to mind, as they often create video campaigns and promotional materials for various companies and brands. With Sora, they can generate these videos very quickly and act faster on trending topics and consumer interests as they go viral.

Educators will also find value in Sora, as they will be able to use the model to generate visual aids for abstract or challenging concepts, making difficult topics easier for students to understand. And as augmented reality, virtual reality, and spatial computing become more popular, I would imagine Sora becoming one of the most popular tools AR/VR/MR developers use to create realistic environments that can be explored and experienced through a headset.

Of course, the use cases extend far beyond this. Any person, job, or industry that either enjoys or relies on video content in some form will benefit from this tool, which makes it easier, cheaper, and faster than ever to bring that content to fruition.

But of course, whenever new technologies and tools emerge, some individuals will look to capitalize on them for fraudulent, illegal, or otherwise dishonest purposes.

Deepfake dangers

As usual, the potential for dishonest applications of a tool like Sora is high, especially in the political world during an election year. The technology’s ability to generate videos from text-based descriptions means it is now possible to create highly convincing deepfake videos of public figures, including politicians, saying or doing things that never happened. This capability could be exploited to create false narratives and manipulate public opinion.

As we saw earlier in the year, even an audio deepfake can cause significant confusion and spread misinformation. The impact of visually realistic deepfake videos could be far more damaging, especially if they circulate across social media platforms and go viral before fact-checkers have a chance to debunk them.

The creators of these popular tools have been vocal about their efforts to make AI-generated content easily identifiable. Still, if the average social media user already struggles to tell whether the content they are viewing is real or AI-generated, a hyper-realistic tool like Sora will only make matters worse.

AI innovation vs. societal risks

Despite the concerns and new attack vectors created by Sora, it is a tool that will likely have more advantages than drawbacks for society.

We cannot be afraid of new technology, even when it presents risks to society and when there are aspects of it we do not fully understand. These concerns around artificial intelligence are not new. Still, on average, the new AI tools and services hitting the market allow businesses and individuals to be more productive and efficient in their daily lives. Of course, there are obstacles to overcome and problems that will need to be solved, but it would not be innovation if that were not the case.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: What does blockchain and AI have in common? It’s data
