In the year since OpenAI introduced ChatGPT to the world, almost 600 media organizations have blocked the company's web crawler from scraping their content.
Bots associated with two other AI efforts — Google's Bard and Common Crawl's CCBot — are also blocked by some or most of those same news organizations.
The list grows longer each day, according to Ben Welsh, a news applications editor for Reuters, who compiled a survey of news organizations for his media blog.
“What we are seeing here is that news publishers, at least half of them in my survey, want to put the brakes on this a little bit and not just allow themselves to be included in this without some sort of conversation or negotiation with the Open AI company,” Welsh said.
OpenAI, the creator of ChatGPT, offered 1,153 news organizations the option to block its chatbot in August 2023. As of Wednesday, nearly half have taken up that offer.
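Publishers opt out by adding rules to their sites' robots.txt files that name OpenAI's crawler, and surveys like Welsh's can be run by reading those files. As a rough illustration only — the site address is a placeholder, and "GPTBot" is the user-agent string OpenAI published for this purpose — such a check can be sketched in Python using only the standard library:

    from urllib.robotparser import RobotFileParser

    # Placeholder domain; substitute any news organization's site.
    site = "https://www.example.com"

    parser = RobotFileParser(site + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file

    # GPTBot is OpenAI's crawler user-agent; False here means
    # the site has told that crawler to stay out.
    print(parser.can_fetch("GPTBot", site + "/"))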
While most are U.S. organizations, including The New York Times and CNN, the list also includes international outlets such as Australia's ABC News, The Times of India and The South African.
Welsh’s survey didn’t dig deeply into the reasons for blocking ChatGPT, but he said that commercial media tend to be among the groups blocking it, whereas nonprofits are more likely to share their content.
VOA’s attempts to contact OpenAI via LinkedIn and email, and at its San Francisco offices, were unsuccessful.
Seen as threat
Many media analysts and press freedom groups see AI as a threat to publishers and broadcasters, as well as a threat to ethical journalism.
Among the chief concerns are the use of artificial intelligence to create false narratives and fake visuals and to amplify misinformation and disinformation.
“It is clearly possible that some groups or organizations use and fine-tune models to create tailored disinformation that suits their projects or their purpose,” said Vincent Berthier, who manages the technology desk at Reporters Without Borders, or RSF. “But right now, today, the higher risk of disinformation comes from generative AI from pictures and deep fakes.”
RSF organized a commission made up of 32 journalism and AI experts, led by Nobel laureate and disinformation expert Maria Ressa, to regulate how media use the technology.
The resulting Paris Charter on AI and Journalism, released in November, sets parameters for news organizations' use of AI and makes clear that journalists must take a leading role.
RSF’s Berthier believes that many of the organizations opting out are sending a clear message to AI developers.
“What media companies are saying is AI won’t be built without us and it is exactly RSF’s position on this topic,” Berthier said. “It is the spirit of the charter we released this month saying that media and journalism should be part of AI governance.”
Media freedom is already at risk from Big Tech and social media algorithms, Berthier said.
“That’s why we fight every day to protect press freedom and just make sure that journalists can still do their jobs to give the most accurate information to the public,” he said.
The Associated Press became a partner of OpenAI in July through a news content and information-sharing agreement.
Pamela Samuelson, a MacArthur Fellow, University of California-Berkeley law professor and information technology expert, said the deal might be just the beginning of many licensing agreements and partnerships between AI and journalism.
But she also predicted that companies would work to develop their own AI.
“So The New York Times might be doing it, CNN might be doing it, we just don’t know,” Samuelson said. “They will announce either their own generative stuff or they will just keep it in house.”
Ethical concerns
As the debate over the use of AI in journalism unfolds, many news organizations and journalists cite ethical concerns and reservations about its use.
Others cite economic factors, such as the use of their copyrighted materials and unique intellectual property without payment or provenance.
But, said Samuelson, “The predictions of doom, doom, doom are probably overblown.”
“Predictions that everything is going to be perfect, that is probably wrong, too,” she added. “We will have to find some new equilibrium.”
Generative AI can write computer code, create art, produce research and even write news articles. But its makers widely acknowledge in disclaimers that the technology has problems with reliability and accuracy.
There is also growing fear among researchers that dependence on generative AI to both produce and access news and information is spreading, and that too often the information it dispenses isn't reliable or accurate.
“There is one thing that journalism puts right up at the top of the list and that’s accuracy and that is a weakness of these tools,” Welsh said. “While they are incredibly great at being creative and generating all sorts of interesting outputs, one thing they struggle with is getting the facts right.”
Some AI analysts and observers say the growing list of news organizations blocking AI bots could further erode that accuracy by cutting the models off from reliable journalism.
Source: VOA News