Google Experts Warn That AI May Distort Reality, While AI Overviews Repel Mobile Users

Given the ongoing debate over whether generative AI will harm humanity, it’s not surprising that a new research report warns that the “mass production of low quality, spam-like and nefarious synthetic content” by AI may foment distrust of all digital information. AI-generated “slop” may also lead to fatigue, because we humans will need to constantly fact-check what we read, see and hear on the internet (the alternative — not fact-checking — is worse).

“This contamination of publicly accessible data with AI-generated content could potentially impede information retrieval and distort collective understanding of socio-political reality or scientific consensus,” six researchers say in their June paper, Generative AI Misuse: A Taxonomy of Tactics and Insights From Real-World Data. “We are already seeing cases of liar’s dividend, where high profile individuals are able to explain away unfavourable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways.”

Distorting reality? Liars gaslighting us? Again, not surprising, given that we’ve been living in a country where misinformation and disinformation have been a daily part of our media diet — even before AI made all that text, image and video slop possible. A third of the US population, for instance, still believes the 2020 presidential election was rigged (it wasn’t).

What is surprising about this new research? The fact that the 29-page report was co-authored by researchers from across Google, namely from its DeepMind AI research lab, its charitable group Google.org, and Jigsaw, a tech incubator focused on security and threats to society. Google, whose search engine and other services are used by billions of people every day, is among the big tech firms investing heavily in a future with AI. 

Good on those researchers for pointing out real-world examples of how gen AI can be misused, and for reminding us all that we still don’t know a lot about the potential risks as the technology continues to evolve at a rapid pace. If you don’t have time to read or scan the report, at least look over the introduction and the top three findings. 

First, most of the misuse is directed at cheating people, lying to them to change their minds, or making money. “Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse. Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit,” the researchers wrote.

Second, you don’t need to be a tech whiz to use these tools for ill. “The majority of reported cases of misuse do not consist of technologically sophisticated uses of GenAI systems or attacks. Instead, we are predominantly seeing an exploitation of easily accessible GenAI capabilities requiring minimal technical expertise.”

Third — and most worrying to my mind — is that many of the cases of misuse “are neither malicious nor explicitly violate these tools’ terms of services.” So it’s the way we humans have built these tools and set (or not set) guardrails that’s a big part of the problem.   

That brings me to what I consider a basic tenet of tech development: Just because you can do a thing with technology doesn’t mean you should.

Case in point: Google’s AI Overviews, which the company introduced at its developers conference in May. The feature uses AI to autogenerate answers to certain Google Search questions by summarizing or referencing supposedly legitimate and credible sources from across the internet. Unfortunately, the release of AI Overviews didn’t go as planned, with some users reporting that the system suggested adding glue to pizza sauce to keep the cheese from sliding off the crust. That prompted Google to say in late May that it would scale back the use of AI summaries, after seeing that “some odd, inaccurate or unhelpful AI Overviews certainly did show up.” 

But overall, Google has defended AI Overviews — even as publishers have argued that it can undercut their ability to fund editorial work — saying the feature is intended to give users helpful information and allow Google “to do the Googling for you.” 

Well, one study suggests that users don’t exactly find AI Overviews helpful. The feature’s release “coincided with a significant drop in mobile searches,” according to search industry expert Rand Fishkin, whose findings were reported on by Search Engine Journal. 

The study looked at Google searches by users in the US and the European Union. Search Engine Journal reported that while Fishkin found a “slight increase” in desktop searches in May, “the drop in mobile searches was significant, considering that mobile accounts for nearly two-thirds of all Google queries. This finding suggests that users may have been less inclined to search on their mobile devices when confronted with AI-generated summaries.”

But that doesn’t mean AI Overviews is a failure. Search Engine Journal noted that users who did “engage” with the AI summaries still clicked on results at a similar or higher rate than they had on other search results. 

As with all things AI, we’ll have to wait and see how Google’s all-in approach to AI evolves. Let’s hope Google CEO Sundar Pichai and his team have read the gen AI misuse report and already modified some of their go-forward plans based on what their experts found.

Here are the other doings in AI worth your attention.

Fact or AI fakery? A few worthwhile fact-checking sources

While we’re talking about the need to double-check whether that viral post with the sensational headline is fact or AI fakery, let me share a few of the more popular online destinations for fact-checking things you’re reading or seeing online. 

FactCheck.org, a project of the Annenberg Public Policy Center, is a nonpartisan, nonprofit site designed to help US voters by monitoring the “factual accuracy of what is said by major US political players in the form of TV ads, debates, speeches, interviews and news releases.”

PolitiFact, run by the Poynter Institute, is a nonpartisan site that also aims to fact-check statements that may mislead or confuse US citizens. 

RumorGuard is a fact-checking site focused on viral rumors. It’s from the News Literacy Project, a nonpartisan education nonprofit that aims to advance “news literacy through American society, creating better informed, more engaged and more empowered individuals.”

Snopes, founded in 1994 to investigate “urban legends, hoaxes, and folklore,” now provides fact-checks on rumors and news stories covering news, politics, entertainment, science, technology, lifestyle content and more. 

The Fact Checker, run by The Washington Post, grades political information on a scale of 1 to 4 “Pinocchios.”

The AI Incident Database is a list of incident reports submitted by anyone who wants to call out the misuse of AI. The site says the goal is to index “the collective history of harms or near harms realized in the world by the deployment of artificial intelligence systems.”

Meta updates its AI labeling policy after some real photos were tagged 

After being called out by some artists and content creators for mistakenly tagging their work as AI-generated, Meta said it’s changing the labels it applies to social media posts that it suspects may have been created with a gen AI assist. Meta, parent company of Facebook, Instagram, Threads and WhatsApp, said its new label will display “AI info” alongside a post, where it used to say “Made with AI,” according to CNET’s Ian Sherr.

Artists whose work was mislabeled include former White House photographer Pete Souza, who told TechCrunch that a cropping tool may have triggered Meta’s AI detectors.

In a July 1 update to its blog post detailing its AI labeling policy, Meta said, “We’ve found that our labels … weren’t always aligned with people’s expectations and didn’t always provide enough context.”

“For example,” it continued, “some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled ‘Made with AI.’ While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information.” 

Morgan Freeman isn’t OK with people stealing his voice

When it comes to AI and intellectual property rights, it’s only funny until the IP holder cries foul. As they should.

That was the case with a TikTok creator, posting under an account called “Justine’s Camera Roll,” who wanted to have some fun with Academy Award-winning actor Morgan Freeman.

Freeman, whose distinctive voice has narrated notable films including The Shawshank Redemption, said he was not OK with the TikTok influencer using an AI version of that voice without his permission to narrate a fake day in her life.

“Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an AI voice imitating me,” Freeman wrote in a Facebook post with the hashtags #AI, #scam, #imitation and #identityprotection. “Your dedication helps authenticity and integrity remain paramount.”

Freeman took exception to a 43-second video posted by Justine’s Camera Roll in which the TikTok creator claims to be Freeman’s niece, according to a report by Today. The video has been taken down, but Today reported that the fake Freeman narration recounts Justine begging “for money for what she said would be a cultural experience in Spain. She asked for my credit card to book what she claimed was a little activity for her birthday. Imagine my surprise when I was charged for a yacht. Basically, she embezzled.”

The TikTok creator said it was an “obvious joke,” according to Today, and in a follow-up video a few days later, she told her fans that she “just thought it’d be funny. … Now Uncle Mo is upset with me. … Please no cease and desist.”

How big a problem is it for people to have their voices used in unauthorized ways? Well, the US Federal Communications Commission this year banned AI-generated robocalls after a bad actor copied Joe Biden’s voice and told New Hampshire Democrats not to vote in the state’s presidential primary. The creator of that deepfake is now facing a $6 million fine.

And several celebrities, including actor Tom Hanks, have also called out AI fraudsters who used their voices for fake ads. 

Expect this problem to get worse (see YouTube’s new policy below). There’s a collection on TikTok called “Morgan Freeman AI Voice” that shows just how easy it is for AI tools to mimic a real person’s voice.   

YouTube lets you ask to remove AI-generated versions of your voice, face

You don’t have to be a celebrity, politician or noted personality to be concerned that your voice or face might be copied without your permission by someone wielding an AI tool. 

In June, YouTube rolled out a policy change for its site that will “allow people to request the takedown of AI-generated or other synthetic content that simulates their face or voice,” TechCrunch found. “Instead of requesting the content be taken down for being misleading, like a deepfake, YouTube wants the affected parties to request the content’s removal directly as a privacy violation.” 

YouTube will consider requests on a case-by-case basis, so takedowns won’t be automatic. 

Expert vs. AI: Battle of the DJs

In the latest edition of CNET’s Expert vs. AI series, New York-based DJ Paz pitted his 20 years as a music expert and DJ against Google Gemini and Google’s experimental MusicFX AI tools.

Paz asked MusicFX to create a disco song at 122 bpm (a typical tempo for disco) with a 2024 tech house bass and piano. The result, said Paz, was “not exactly what I was looking for but that’s pretty cool.” A ’70s funk and soul song at 108 bpm with bass guitar and smooth synths was also “really cool,” but again not what he was looking for. 

Instead of putting the blame on MusicFX, Paz decided he needed “to be more descriptive in what I’m asking for” in his prompts. But overall, though MusicFX is a “breathtaking and amazing” tool that’s fast and easy to use, Paz concluded that it seems to be more of a tool for creating music than for DJing.

He also asked Gemini whether DJs should be excited or nervous about AI. Gemini’s answer: They should be excited because it “won’t replace them but rather be a powerful tool.” Paz disagreed, saying AI will replace a lot of “really bad DJs” who may just be playing off of Spotify Top 50 lists. 

Paz agreed with Gemini that AI can help boost DJs’ creativity, but only if it can focus on a DJ’s particular style and tastes. “The key here to be a real asset to DJs and further creativity will be to suggest songs from my library. I’m less interested in hearing suggestions of the billions of songs or the songs that every single person is playing. I would like it more tuned to my tastes.”

If you’re curious how this all works, just watch the CNET video of Paz at work.
