So-called "deep fake" technology will be producing audio and video clips that are indistinguishable from authentic ones as soon as one year from now, a media panel audience heard today.
"We are about to get to a world where the fact we have seen something on a video is no longer a statement that it is true," data scientist John Gibson told Mindshare's Huddle event.
Telegraph technology special correspondent Harry De Quetteville added: "Video for a long time has been the watermark of credibility – it is the media that conveys veracity…
"Now that is up for questioning. In the future, we have to begin to re-evaluate our equivalence of video with truth."
Deep fake videos use algorithms to create a facsimile of a person's voice and appearance, making them appear to say whatever the video's creator desires.
According to Gibson, the audio side of the technology is the result of years of research by Google. The tech giant owns London start-up DeepMind, which developed the speech synthesis model WaveNet.
The video side of deep fake technology appeared seemingly from nowhere on Reddit a year ago, said Gibson. It is understood to have originated in superimposing famous faces on to bodies in pornographic videos.
"Audio is just as susceptible as video, and when we put the two together it's very deceptive," said De Quetteville.
"Deep fakes are competing with CGI [computer-generated imagery] now after a year. In a year's time it will be better.
"Artificial intelligence will overcome the 'uncanny valley' that means you automatically distinguish whether something is real or not, and when it's done it will be able to be done by a kid in their bedroom.
"It will suddenly matter an awful lot where you look at the content you look at. That has quite big implications for brands and platforms."
The technology is already in use.
Earlier this year US comedian Jordan Peele teamed up with BuzzFeed to create a fake video of former US president Barack Obama speaking, with Peele providing the impression and a script of his own.
Said Gibson: “Human impersonators are unlikely to get any better anytime soon. Algorithms will get better really very fast.”
He said that by this time next year it was "completely reasonable to assume" that algorithm-generated fakes would be imperceptible in many cases.
De Quetteville said deep fake videos could be a good thing for trusted newsbrands because "all that trust that used to reside in the medium, i.e. video, will reside in the brand".
He added: "The result will be that people will trust brands. They will trust the conduit rather than the message itself."
Gibson said using detection tools to spot fake videos could become an everyday part of a journalist's professional life within a matter of years. Tools such as digital watermarking on authentic videos could also become more prevalent, the ASI Data Science consultant said.
He said one upside could be that, because deep fake videos were "quite striking and people like to talk about it", they could raise the profile of disinformation and people's awareness of it.
He said it took people about 30 years to realise that models and celebrities on the front of magazines had been airbrushed using Photoshop, which was first released in 1990.
The BBC has also experimented with the technology: it published a video on social media yesterday using deep fake technology to show broadcaster Matthew Amroliwala presenting the news in languages he does not actually speak.
— BBC Click (@BBCClick) November 14, 2018
The post "'Deep fake' videos could benefit newsbrands as trust moves from medium to brand, panel audience hears" appeared first on Press Gazette.
Source: Digital Journalism