A project using artificial intelligence to track social media abuse aimed at players at the 2022 World Cup identified more than 300 people whose details are being given to law enforcement, FIFA said Sunday.

The people made “abusive, discriminatory, or threatening posts [or] comments” on platforms like Twitter, Instagram, Facebook, TikTok and YouTube, soccer’s governing body said in a report detailing efforts to protect players and officials during the tournament played in Qatar.

The biggest spike in abuse came during the France-England quarterfinal, according to the report from a project created jointly by FIFA and FIFPRO, the players’ global union. The project used AI to help identify and hide offensive social media posts.

“Violence and threat became more extreme as the tournament progressed, with players’ families increasingly referenced and many threatened if players returned to a particular country — either the nation they represent or where they play football,” the report said.

About 20 million posts and comments were scanned and more than 19,000 were flagged as abusive. More than 13,000 of those were reported to Twitter for action.

Accounts based in Europe sent 38% of the identifiable abuse and 36% came from South America, FIFA said.

“The figures and findings in this report do not come as a surprise, but they are still massively concerning,” said David Aganzo, president of Netherlands-based FIFPRO.

Players and teams were offered moderation software that intercepted more than 286,000 abusive comments before they were seen.

The identities of the more than 300 people identified for posting abuse “will be shared with the relevant member associations and jurisdictional law authorities to facilitate real-world action being taken against offenders,” FIFA said.

“Discrimination is a criminal act. With the help of this tool, we are identifying the perpetrators and we are reporting them to the authorities so that they are punished for their actions,” FIFA President Gianni Infantino said in a statement.

“We also expect the social media platforms to accept their responsibilities and to support us in the fight against all forms of discrimination.”

FIFA and FIFPRO have extended the system for use at the Women’s World Cup that starts next month in Australia and New Zealand.

AI technology is steadily making inroads into the audiobook industry and could eventually replace human voice actors. Despite its promise for the industry’s growth, this advance is raising concerns among professionals about their future in the field.

AI in the Audiobook Industry: The audiobook industry is forecast to grow significantly, reaching an estimated $35 billion by 2030. Advances in technology, AI in particular, are contributing to this growth but are also introducing concerns: AI’s ability to replicate human voices is causing unease among voice actors.

  • AI is already being utilized in some areas of the industry.

  • Google Play and Apple Books are among the platforms using AI-generated voices.

  • However, the replication of the human voice by AI isn’t seamless yet.

Impact on Voice Actors: Voice actors are increasingly wary of AI’s encroachment on the industry. Some, like Brad Ziffer, are refusing work that could lead to their voices being cloned by AI.

  • Actors are protective of their unique intonation, cadence, and emotional expression.

  • Real human voices are still preferred for unique characteristics that AI can’t yet fully mimic.

AI vs. Human Voice: The Current Gap: While AI voices are getting better, they still can’t capture all the nuances of a human voice. Listeners are sensitive to sound and to subtleties of timing that AI finds hard to replicate perfectly.

  • AI struggles with capturing the subtleties of comedic timing or awkward pauses.

  • However, AI-generated voices aren’t entirely off-putting.

  • In tests, participants could distinguish between human and AI voices, but didn’t find the latter entirely unappealing.

Future Perspectives: Despite concerns, there is recognition of AI’s potential in the industry. The technology could be beneficial but could also easily be abused. For now, the prevailing view is that real human voices have no equal in the industry.

  • The development of AI in this sector is still ongoing, and full reproduction of the human voice is yet to be achieved.

  • Professionals are wary but acknowledge the potential advancements AI could bring.

A radio station in Portland, Oregon, has introduced a part-time AI DJ to its audience. Named “AI Ashley,” the DJ’s voice closely resembles that of the station’s human host, Ashley Elzinga. AI Ashley will host the broadcast for five hours daily, using scripts created by the AI tool RadioGPT.

Introduction of AI Ashley: AI Ashley is a project introduced by Live 95.5, a popular radio station in Portland. This AI DJ, modeled after human host Ashley Elzinga, is set to entertain listeners from 10 a.m. to 3 p.m. daily.

  • The AI’s voice is said to closely mimic Elzinga’s.

  • This project is powered by Futuri Media’s RadioGPT tool, which utilizes GPT-4 for script creation.

Listener Reactions: Twitter users and Live 95.5’s audience have had mixed reactions to the introduction of an AI DJ.

  • Some have shown concerns over AI’s growing influence in the job market.

  • Others appreciated the station’s effort to maintain consistency in content delivery.

Hybrid Hosting Model: Despite AI Ashley’s introduction, traditional human hosting isn’t being completely phased out.

  • Phil Becker, EVP of Content at Alpha Media, explained that both Ashleys would alternate hosting duties.

  • While AI Ashley is on-air, the human Ashley could engage in community activities or manage digital assets.

Impact on the Job Market: The increasing integration of AI into media industries is raising concerns about jobs.

  • iHeartMedia’s staff layoffs in 2020 and subsequent investment in AI technology raised alarms.

  • In the publishing industry, voice actors fear losing audiobook narration jobs to AI voice clones.

AI in the Music Industry: AI’s impact on the music industry is also noteworthy.

  • It’s being used for tasks like recording and writing lyrics.

  • Apple has started rolling out AI-narrated audiobooks.

Source (Business Insider)

A field study by Cambridge and Harvard Universities explores whether large language models (LLMs) democratize access to dual-use biotechnologies: research that can be used for both good and ill.

– A study from Cambridge and Harvard Universities shows that large language models such as GPT-4 can make potentially dangerous knowledge, including instructions on how to develop pandemic viruses, accessible to those without formal training in the life sciences.

– The study identifies weaknesses in the security mechanisms of current language models and shows that malicious actors can circumvent them to obtain information that could be used for mass harm.

– As solutions, the authors propose the curation of training datasets, independent testing of new LLMs, and improved DNA screening methods to identify potentially harmful DNA sequences before they are synthesized.

Source: https://the-decoder.com/ai-chatbots-allow-amateurs-to-create-pandemic-viruses/
Paper: https://arxiv.org/ftp/arxiv/papers/2306/2306.03809.pdf

  • AI can make it easier for anyone to create custom-tailored viruses and pathogens: MIT researchers asked undergraduate students to test whether chatbots “could be prompted to assist non-experts in causing a pandemic,” and found that within one hour the chatbots suggested four potential pandemic pathogens. The chatbots helped the students identify which pathogens could inflict the most damage, and even provided information not commonly known among experts. The students were offered lists of companies that might assist with DNA synthesis, and suggestions on how to trick them into providing services. This is arguably the strongest case against open-sourcing AI. [source: https://www.msn.com/en-us/news/technology/new-ai-fear-making-it-easy-for-anyone-to-mint-dangerous-new-viruses/ar-AA1cCVq6]

  • Intel will start shipping 12-qubit quantum processors to a few universities and academic research labs: 12 qubits is still not a big deal; it’s not a lot of computing power. However, as we all know, technology, and very specifically processing power, is subject to Moore’s Law, which, for those of you who actually had a social life in high school and therefore don’t know what Moore’s Law is, simply means that technology gets better, faster, stronger, and cheaper as time goes by. And, compared to regular processors, quantum processors are orders of magnitude faster for certain problems. Ok, how is this related to AI? I’m glad you asked. Advancements in AI pretty much come down to two things – data and computing power. We already have entire oceans of data, or, rather, Google and Facebook do, and the biggest challenge to making God-like AI is the lag in processing power. And when that stops being a problem because of quantum computers, when we plug AI into quantum computers… I guess we’ll finally see if we get to live in a Kumbaya Utopia where we all love each other and don’t have to work unless we feel like it, or, you know, Skynet meets the Matrix type of thing. [source: https://arstechnica.com/science/2023/06/intel-to-start-shipping-a-quantum-processor/ ]

  • People are using AI to automate responses to sites that pay them to train AI: So, for those of you who’ve never watched one of those “how to make $5000 a month on the Internet” videos, Amazon’s Mechanical Turk is a platform where people can complete small tasks like data validation or transcriptions or surveys to earn a bit of money. Well, researchers at École Polytechnique Fédérale de Lausanne in Switzerland have found that a significant number of Mechanical Turk workers are already using large language models (LLMs) to automate their labor. [source: https://futurism.com/the-byte/people-automating-responses-train-ai ]

  • A Chick-fil-A in Alpharetta, in metro Atlanta, is testing AI-powered delivery robots: This sounds like bad news for delivery people. However, I think we’ve kinda seen this story play out before: Amazon tried to automate delivery with drones a few years ago, and regulatory setbacks stalled those efforts. I’m not sure how this particular Chick-fil-A restaurant has pulled this off, and it may not be entirely legal, but let’s see how this develops. [source: https://www.wsbtv.com/news/local/north-fulton-county/metro-atlanta-chick-fil-a-tests-delivery-robots-equipped-with-artificial-intelligence/GSLBEX2NFJAQFGE3H7KW3YFOCU/ ]

  • Researchers from Microsoft and UC Santa Barbara Propose LONGMEM: An AI Framework that Enables LLMs to Memorize Long History: As you may know, even the most advanced AI bots like ChatGPT can only take input up to a certain length. You can use several prompts to add more input, but this way of functioning is still limited: the chatbot doesn’t really have long-term memory, doesn’t really learn from your own specific actions, and doesn’t adjust itself based on your input. If that were possible, a whole other world of features and possibilities would open up for AI. Well, the proposed LONGMEM framework should enable language models to cache long-form prior context or knowledge and keep it in memory, which will kinda give LLMs superpowers, and we will likely start seeing a lot more new applications (see the sketch after this list for the general idea). Exciting stuff. [source: https://www.marktechpost.com/2023/06/16/researchers-from-microsoft-and-uc-santa-barbara-propose-longmem-an-ai-framework-that-enables-llms-to-memorize-long-history/ ]

  • AI used to catch a thief: A video going viral on Facebook shows a person caught on a security camera stealing from some street-artist kids in the Philippines, and the Internet rose to the occasion: social media users used AI to sharpen and enhance the image of the thief and sent the pic to the kids, who gave it to the police. The authorities were able to recover the bag, but one cellphone was missing. The suspect has been identified but is still at large. The implications of this are not certain: an AI-enhanced image can very easily be inaccurate, and the wrong person might get punished even when innocent. [source: https://www.facebook.com/watch/?v=1307441943456719 ]

  • A study finds that a new AI autopilot algorithm can help pilots avoid crashes: Researchers at MIT have developed a new algorithm that can help stabilize planes at low altitudes. [source: https://www.jpost.com/science/article-746671 ]

  • Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. [source: https://finance.yahoo.com/news/amazon-experiments-using-ai-sum-173839185.html?fr=sycsrp_catchall ]

  • The best new “Black Mirror” episode is a Netflix self-own that plays out our current AI nightmare. “Joan Is Awful” presents the peril posed by artificial intelligence with brisk humor that can’t be generated.[2]

  • The world’s biggest tech companies (OpenAI, Google, Microsoft, and Adobe) are in talks with leading media outlets to strike landmark deals over the use of news content to train artificial intelligence technology.[3]

  • AI human-voice clones are coming for Amazon, Apple, and Google audiobooks.[4]

  • Finally, some heartwarming news: AI may help us understand animals. [source: https://www.msn.com/en-us/news/technology/will-artificial-intelligence-help-us-talk-to-animals/ar-AA1cFRO6]
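
To make the LONGMEM item above a little more concrete: the general idea behind such frameworks is to cache context that no longer fits in the model’s input window and recall the most relevant pieces later. Below is a minimal, self-contained Python sketch of that retrieve-from-cache idea. To be clear, this is a toy illustration, not LONGMEM’s actual design; the paper caches attention key-value pairs from a frozen backbone model and reads them back through a trainable side network, rather than matching raw text chunks as done here, and the bag-of-words similarity below stands in for the learned representations a real system would use.

```python
# Toy sketch of "long-term memory" for a chatbot: cache past context,
# then recall the most relevant chunks to prepend to the next prompt.
# This is NOT LONGMEM's actual mechanism, just the general idea.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding' so the sketch needs no external models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryCache:
    """Stores past text chunks and recalls those most similar to a query."""
    def __init__(self):
        self.chunks = []  # list of (text, embedding) pairs

    def store(self, chunk: str) -> None:
        self.chunks.append((chunk, embed(chunk)))

    def recall(self, query: str, k: int = 2) -> list:
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Conversation history that no longer fits in the context window gets cached...
memory = MemoryCache()
memory.store("User prefers concise answers with code examples.")
memory.store("Earlier we discussed quantum processors and Moore's Law.")

# ...and the most relevant chunks are recalled for the next prompt.
print(memory.recall("what did we say about quantum processors", k=1))
```

In a production system the recalled text (or, in LONGMEM’s case, the cached key-value pairs) would be fed back into the model alongside the new input, which is what gives the model the appearance of remembering a conversation far longer than its context window.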