Technology

Google claims US government pressured YouTube to remove user-generated COVID videos

Source: TweakTown

TL;DR (AI generated)

Google has accused the US government of pressuring YouTube to remove user-generated videos related to COVID-19. The company claims that the government used legal means to force the removal of these videos, which were critical of the government's handling of the pandemic. Google's statement raises concerns about censorship and freedom of speech on online platforms. The incident highlights the complex relationship between tech companies and government regulations regarding content moderation.


Similar Articles

Nvidia's own DLSS 5 announcement video gets taken down by YouTube in Italy due to a copyright strike — local TV channel sent a copyright strike to every YouTube video for using the trailer it used for its own broadcast

A local TV channel in Italy issued a copyright strike against every YouTube video that used Nvidia's DLSS 5 trailer, including Nvidia's own announcement video. The channel had broadcast the same trailer footage in its own coverage, then claimed copyright over it. YouTube's automated moderation systems then took down every video containing the footage, Nvidia's included. The incident highlights concerns about YouTube's AI-driven content moderation, with creators expressing frustration over inaccurate takedowns and swiftly rejected appeals. While Nvidia likely has the means to resolve the issue, smaller creators hit by the takedowns could struggle to reinstate their videos and avoid strikes against their accounts.

Tom's Hardware
US judge sides with Anthropic, says company supply chain risk branding over Pentagon disagreement 'Orwellian' — Trump slapped AI company with designation after it refused to lower its guardrails for the military

A U.S. court has ruled in favor of Anthropic, temporarily preventing the Pentagon from labeling the company a supply chain risk. The dispute arose when the military demanded that Anthropic compromise its AI safety policies, which the company refused to do. Judge Rita Lin criticized the government's actions, saying that branding a company a potential adversary for disagreeing with the government is unjust. Anthropic's CEO had refused to allow the company's AI to be used for mass surveillance and autonomous weapons, leading President Trump to ban the company from federal agencies. Despite this win, Anthropic still faces further legal battles against the government.

Tom's Hardware
Anthropic sues Pentagon over 'supply chain risk' designation, citing free speech and due process violations — company refused to allow its AI to be used for autonomous attacks, mass surveillance

Anthropic has filed two lawsuits against the Pentagon over being designated a "supply chain risk," preventing its AI models from being used by Pentagon suppliers due to the company's refusal to allow its AI to be used for autonomous attacks and mass surveillance. The lawsuits allege violations of Anthropic's First Amendment and due process rights, seeking to overturn the designation and block its enforcement. The dispute arose from a contract renegotiation that collapsed when Anthropic refused to remove guardrails prohibiting the use of its models for fully autonomous weapons and mass domestic surveillance. The fallout has led to competitive consequences, with OpenAI striking a new Pentagon deal shortly after Anthropic's designation.

Tom's Hardware
Greatest irony of the AI age: Humans hired to clean AI slop

The article discusses the irony that, in the AI age, humans are being hired to clean up the errors and nonsensical content generated by AI tools like ChatGPT and Midjourney. This has created a new category of employment: fixing the mistakes AI makes when tasked with complex work. AI-generated content, commonly called "slop," is flooding the internet in the form of videos, music, and articles, spreading misinformation and overwhelming social media feeds. The rise of roles such as AI content rewriters, art fixers, and AI code debuggers underscores the need for human intervention to correct AI-generated mistakes and ensure quality and accuracy in digital content. The article argues for recalibrating the relationship between humans and AI to prioritize human creativity, empathy, and authenticity over speed and cost-cutting.

Hacker News
