Here are some of the reasons why YouTube’s approach to ‘improving’ videos doesn’t work, both for creators and for the average viewer.
YouTube recently confirmed that it did use AI — or, more specifically, machine learning technology — to enhance some creators’ YouTube Shorts, reducing the noise and blurriness on some videos without the knowledge or permission of the content creators who made them.
In a post on X, Rene Ritchie, YouTube’s head of editorial and creator liaison, said YouTube was “running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video).”
Replying to pushback from creators, Ritchie said the company is working on improving automated deblurring and denoising of videos, and that an opt-out function would be coming, though he did not provide a timeline for it.
There are a number of things wrong with this approach to using machine learning or artificial intelligence to improve on something, and it’s time we discussed the reasons why this development sucks for creators and people who just want to watch videos.
YouTube’s acknowledgement that it denoised and deblurred creators’ Shorts without their permission might erode trust in YouTube as a video hosting service, even though it remains one of the only places where people can reliably upload videos and share their work at scale.
It makes YouTube seem untrustworthy, and follows in the footsteps of other big tech giants who have, in the past, overstepped their boundaries and done things without the consent of users, such as Amazon deleting copies of George Orwell’s 1984 from Kindles and refunding the purchases, or Facebook manipulating people’s moods for research.
While YouTube has stated that an opt-out function would come for this automated improvement system in the future, it would be better if it took an opt-in approach.
Rather than enrolling everyone into a feature set and then asking them to disable that feature, YouTube and other big tech companies should really consider allowing people the option to sign up for a new feature of their own free will instead.
The distinction is important, because YouTube’s approach is NOT similar to the approach taken with videos edited on a smartphone or in an app. People have to agree to use an app or a specific effect or feature on their phone before that effect or feature can even work, so it’s not fair to say the two are similar.
Akin to a digital form of gaslighting, this kind of silent editing of videos is a sort of alteration of reality before it hits people’s screens.
It’s already happening little by little, with things like the Google Pixel’s Best Take function that uses AI to blend images taken together to improve on the finished product — such as adding smiles to a point in time when you weren’t actually smiling — or AI-powered zoom capabilities on Pixel phones that enable you to increase zoom up to 100x.
The BBC quoted Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh, as saying, “This case with YouTube reveals the ways in which AI is increasingly a medium that defines our lives and realities.”
“People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?” he added.
Worse still, what if people can’t tell the difference, or eventually cease to care about the truth because the alternative looks so much better? – Rappler.com