4Chan Mocks $700K Fine For UK Online Safety Breaches | | The UK regulator Ofcom fined 4chan nearly $700,000 (520,000 pounds) for failing to implement age checks and address illegal content risks under the Online Safety Act, but the platform mocked the penalty and signaled it won't pay. A lawyer representing the company responded with an AI-generated cartoon image of a hamster, writing in a follow-up post on X: "In the only country in which 4chan operates, the United States, it is breaking no law and indeed its conduct is expressly protected by the First Amendment." The BBC reports: The fines also include 50,000 pounds for failing to assess the risk of illegal material being published and a further 20,000 pounds for failing to set out how it protects users from criminal content. 4chan has refused to pay all previous fines from Ofcom. "Companies -- wherever they're based -- are not allowed to sell unsafe toys to children in the UK. And society has long protected youngsters from things like alcohol, smoking and gambling. The digital world should be no different," said Ofcom's Suzanne Cater. "The UK is setting new standards for online safety. Age checks and risk assessments are cornerstones of our laws, and we'll take robust enforcement action against firms that fall short." Read more of this story at Slashdot. |
Rogue AI Triggers Serious Security Incident At Meta | | For the second time in the past month, an AI agent went rogue at Meta -- this time giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports: A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But after analyzing the question, the agent replied to it publicly on its own, without getting approval first; the reply was only meant to be shown to the employee who requested it, not posted publicly. An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.
According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the answer planned to post it publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided." Read more of this story at Slashdot. |
Rapper Afroman Wins Defamation Lawsuit Over Use of Police Raid Footage In His Music Videos | | Longtime Slashdot reader UnknowingFool writes: Rapper Afroman, born Joseph Edgar Foreman, famous for his 2000 hit "Because I Got High", has won a defamation lawsuit that seven Ohio police officers filed against him. A jury found he did not defame the officers in music videos he made about a 2022 police raid of his home. In August 2022, the Adams County Sheriff's Department raided Afroman's home on suspicion of drug trafficking and kidnapping. Neither drugs nor kidnapping victims were found, and charges were never filed. However, local officials would not pay for damages that occurred during the raid, including a broken front door and a video surveillance camera. Afroman used his home security footage of the raid to create rap music videos criticizing the police over the incident: "Will You Help Me Repair My Door?", "Why You Disconnecting My Video Camera?", and "Lemon Pound Cake". He posted the videos on YouTube.
In March 2023, seven officers filed a lawsuit against Afroman for invasion of privacy and the unauthorized use of their images from the security footage in addition to defamation claims. The officers requested an injunction for Afroman to stop speaking about them or using their photos. The officers also wanted all proceeds from the videos, song sales, performances, and merchandise claiming they had suffered "emotional distress" due to the videos. Afroman's defense included Freedom of Speech rights to criticize public officials. The ACLU filed an amicus brief supporting the rapper, arguing that the lawsuit was a SLAPP suit only meant to silence criticism. In October 2023, the court agreed and dismissed the invasion of privacy, "right of publicity", and "unauthorized use of individual's persona" claims but allowed the defamation case to proceed.
Defamation claims by the officers included the allegation that Afroman repeatedly had sex with the wife of Randolph L. Walters, Jr. When Afroman's lawyer asked Walters, "But we all know that's not true, right?", the officer replied that he did not know. A defamation claim based on emotional damages requires that the harm arise from a false statement; however, if a statement is so outrageous that no reasonable person would believe it to be true, it cannot cause reputational damage. Read more of this story at Slashdot. |
Google Details New 24-Hour Process To Sideload Unverified Android Apps | | An anonymous reader quotes a report from Ars Technica: Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification. With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google's intervention.
Apps that come from unverified developers won't be installable on Android phones -- unless you use the new advanced flow, which will be buried in the developer settings. When sideloading apps today, Android phones alert the user to the "unknown sources" toggle in the settings, and there's a flow to help you turn it on. The verification bypass is different and will not be revealed to users. You have to know where the setting is and proactively turn it on yourself, and it's not a quick process. [...] The actual legwork to activate this feature only takes a few seconds, but the 24-hour countdown makes it something you cannot do on the spur of the moment.
But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences. "In that 24-hour period, we think it becomes much harder for attackers to persist their attack," said Samat. "In that time, you can probably find out that your loved one isn't really being held in jail or that your bank account isn't really under attack." But for people who are sure they don't want Google's verification system to get in the way of sideloading any old APK they come across, they don't have to wait until they encounter an unverified app to get started. You only have to select the "indefinitely" option once on a phone, and you can turn dev options off again afterward. "For a lot of people in the world, their phone is their only computer, and it stores some of their most private information," Samat said. "Over the years, we've evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn't safe, people aren't going to use it, and that's a lose-lose situation for everyone, including developers." Read more of this story at Slashdot. |
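The mechanism Samat describes is essentially a time-gated toggle: opting in starts a countdown, and the bypass only takes effect once the full 24 hours have elapsed, so a scammer applying "install this right now" pressure can't get an immediate result. Here is a minimal Python sketch of that arming delay; the class and method names are hypothetical and this is not Android's actual implementation:

```python
import time

ARMING_DELAY_SECONDS = 24 * 60 * 60  # the 24-hour countdown

class VerificationBypass:
    """Time-gated toggle: requesting the bypass starts a countdown,
    and it only becomes active once the delay has fully elapsed."""

    def __init__(self, clock=time.time):
        self.clock = clock          # injectable for testing
        self.requested_at = None    # when the user asked for the bypass

    def request(self):
        # Record the moment the user opted in; nothing is enabled yet.
        self.requested_at = self.clock()

    def is_active(self):
        # The bypass arms only after the full delay has passed,
        # which is what defeats spur-of-the-moment pressure tactics.
        if self.requested_at is None:
            return False
        return self.clock() - self.requested_at >= ARMING_DELAY_SECONDS
```

With an injectable clock the gate is easy to exercise: a bypass requested at t=0 stays inactive at 86,399 seconds and becomes active at 86,400.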
Meta Backtracks, Will Keep Horizon Worlds VR Support 'For Existing Games' | | Meta is partially reversing its decision to drop VR support for Horizon Worlds, keeping VR access for existing Unity-based games while shifting future development to a new flatscreen-focused Horizon Engine. UploadVR reports: If you somehow missed it, on Tuesday Meta officially announced that its Horizon Worlds "metaverse" platform would drop VR support in June, meaning it would only be available as a flatscreen experience for the web and smartphones. But now, in an "ask me anything" session on his Instagram page, Meta CTO Andrew Bosworth says the company has decided to "keep Horizon Worlds working in VR for existing games to support the fans who've reached out."
Bosworth says this specifically applies to worlds developed with the Horizon Unity runtime, suggesting it applies to those built inside VR or with the Horizon Desktop Editor, but not those built for the new Horizon Engine with Horizon Studio. The picture painted here is of a clean technical break, with the legacy Unity version of Horizon Worlds continuing to support VR, and the new Horizon Engine focusing fully on flatscreen. This VR support will continue through the Horizon Worlds VR app, which Bosworth says will stay on Quest's store "for the foreseeable future".
Individual worlds will not be recommended by the operating system, though, nor will they appear in the storefront. Horizon Worlds will be just another app on the store. As for the reason behind not supporting VR in Horizon Engine, Bosworth repeated the explanation he's been giving for two months now -- "because that's where most of the consumer and creator energy already was, and so we're leaning into that." Read more of this story at Slashdot. |
OpenAI Acquires Developer Tooling Startup Astral | | OpenAI announced it's acquiring developer tooling startup Astral to strengthen its Codex AI coding assistant, which has over 2 million weekly users and has seen its user base triple since the start of the year. CNBC reports: "Through it all, though, our goal remains the same: to make programming more productive. To build tools that radically change what it feels like to build software," Astral's founder and CEO Charlie Marsh wrote in a blog post. OpenAI's acquisition of Astral is still subject to customary closing conditions, including regulatory approval. Read more of this story at Slashdot. |
Walmart Wins Patents To Give Algorithms More Sway Over Prices | | Walmart has secured patents for systems that use machine learning to forecast demand and automate pricing decisions, "pushing the U.S. retail behemoth into a debate over the use of algorithms to adjust product costs," reports the Financial Times. From the report: In January Walmart obtained a U.S. patent for a "system and method for dynamically and automatically updating item prices" to carry out markdowns in its ecommerce unit, a rapidly growing division that generated more than $150 billion in sales last year. Last week it received another patent for using machine learning to predict demand and recommend prices for goods. [...] Walmart said that both patents were "unrelated to dynamic pricing": the patent issued in January was specific to markdowns, and last week's patent was designed so that merchant teams, not the technology, make the pricing decisions.
The patent granted in January involves an "end-to-end price markdown system" for ecommerce platforms such as Walmart.com based on data including predicted demand and consumers' price sensitivity. Last week's approved patent outlines ways to forecast demand and set prices at levels that will move stock over periods such as a week, a month or a quarter. "Example categories may include, for example, a food item, outdoor equipment, clothing, housewares, toys, workout equipment, vegetables, spices," according to the filing. The "demand forecasting and price recommendation" tool envisaged in the patent would incorporate sources including purchases, prices, methods of payment and customer ID, such as a passport or driver's license number. "Dynamic pricing or anything that smells like it is playing with fire," said Matt Hamory, a grocery industry consultant at AlixPartners, who cited "the goodwill that you can lose by getting customers to think or suspect or worry even slightly that you are doing things with pricing that are to your benefit and their detriment." Read more of this story at Slashdot. |
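The workflow the patent describes -- forecast demand at candidate prices over a horizon such as a week or a quarter, then recommend a price expected to move the remaining stock -- can be illustrated generically. The sketch below is an invented toy, not Walmart's system; the function names and the linear demand curve are assumptions made purely for illustration:

```python
def recommend_price(candidate_prices, forecast_units, stock, horizon_weeks):
    """Return the highest candidate price whose forecast demand over the
    horizon still clears the remaining stock (illustrative toy logic)."""
    # forecast_units(price, weeks) -> predicted units sold at that price
    clearing = [p for p in candidate_prices
                if forecast_units(p, horizon_weeks) >= stock]
    if not clearing:
        # Even the deepest markdown won't clear the stock in time;
        # fall back to the lowest price on offer.
        return min(candidate_prices)
    # Least-aggressive markdown that still moves the inventory.
    return max(clearing)

# Toy linear demand curve: lower prices sell more units per week.
demand = lambda price, weeks: weeks * max(0, 100 - 2 * price)

print(recommend_price([10, 15, 20, 25, 30], demand,
                      stock=240, horizon_weeks=4))  # prints 20
```

At a price of 20 the toy model forecasts 4 x 60 = 240 units, just enough to clear the stock, so the tool keeps the markdown as shallow as possible -- the same trade-off a merchant team would weigh by hand.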
Microsoft Considers Legal Action Over $50 Billion Amazon-OpenAI Cloud Deal | | An anonymous reader quotes a report from Reuters: Microsoft is considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker, the Financial Times reported on Wednesday. Last month, Amazon and OpenAI signed several agreements, including one that makes Amazon Web Services the exclusive third-party cloud provider for Frontier, OpenAI's enterprise platform for building and running AI agents. The dispute centers on whether OpenAI can offer Frontier via AWS without violating the Microsoft partnership, which requires the startup's models to be accessed through the Windows maker's Azure cloud platform, the FT report said, citing sources.
OpenAI and Microsoft recently stated together that "Azure remains the exclusive cloud provider of stateless OpenAI APIs," a Microsoft spokesperson said in an emailed statement, referring to software interfaces used to access OpenAI's models. "We are confident that OpenAI understands and respects the importance of living up to this legal obligation," the spokesperson added. FT said Microsoft executives believed the approach was not feasible and would violate the spirit, if not the letter, of their agreement, and added that the companies were in talks to resolve the dispute without litigation ahead of Frontier's launch. "We know our contract," a person familiar with Microsoft's position told the newspaper. "We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them." Read more of this story at Slashdot. |
iPhone Exploit DarkSword Steals Data In Minutes With No Trace | | BrianFagioli writes: A new iOS exploit chain called DarkSword shows how attackers can break into certain iPhones, grab sensitive data like messages, credentials, and even crypto wallets, and then disappear without leaving obvious traces. It targets older iOS 18 builds using Safari and WebGPU flaws to escape Apple's sandbox, which is pretty wild on its own, but what really stands out is how fast it works and how financially motivated these attacks have become. The takeaway is simple but important: update your iPhone ASAP and don't assume mobile devices are somehow safer than desktops anymore. Read more of this story at Slashdot. |
Pardoned Nikola Fraudster Is Raising Funds For AI-Powered Planes He Claims Will Reshape Aviation | | Trevor Milton, the pardoned founder of Nikola, is seeking $1 billion for AI-powered autonomous planes through a new venture called SyberJet. The Tech Buzz reports: "Autonomous planes will be 10 times harder than Nikola ever was," Milton told the Wall Street Journal in a rare interview. It's a remarkable admission from someone whose last venture collapsed under the weight of securities fraud charges after he overstated the capabilities of Nikola's electric and hydrogen-powered trucks. Milton was convicted in 2022 on three counts of fraud for misleading investors about Nikola's technology, including staging a video that made it appear a truck prototype was driving under its own power when it was actually rolling downhill. The conviction sent him to prison and turned Nikola into a cautionary tale about startup hype culture. His pardon, which came earlier this year, sparked immediate controversy in venture capital and legal circles.
Now he's betting that AI and autonomous aviation represent a clean slate. SyberJet appears focused on developing artificial intelligence systems capable of piloting aircraft without human intervention -- a technical challenge that's stumped even well-funded players like Boeing and Airbus. [...] Milton hasn't detailed SyberJet's technical approach or revealed who's backing the venture. The company's website remains sparse, and aviation industry sources say they haven't seen concrete demonstrations of the technology. That opacity echoes the early days of Nikola, when Milton made sweeping claims about revolutionary trucks that existed mostly in renderings and promotional videos. If you need a quick refresher on the Nikola saga, here's a timeline of key events:
June 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December 2021: EV Startup Nikola Agrees To $125 Million Settlement
September 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial Read more of this story at Slashdot. |
FBI Is Buying Location Data To Track US Citizens, Director Confirms | | An anonymous reader quotes a report from TechCrunch: The FBI has resumed purchasing reams of Americans' data and location histories to aid federal investigations, the agency's director, Kash Patel, testified to lawmakers on Wednesday. This is the first time since 2023 that the FBI has confirmed it was buying access to people's data collected from data brokers, who source much of their information -- including location data -- from ordinary consumer phone apps and games, per Politico. At the time, then-FBI director Christopher Wray told senators that the agency had bought access to people's location data in the past but that it was not actively purchasing it.
When asked by U.S. Senator Ron Wyden, Democrat of Oregon, if the FBI would commit to not buying Americans' location data, Patel said that the agency "uses all tools ... to do our mission." "We do purchase commercially available information that is consistent with the Constitution and the laws under the Electronic Communications Privacy Act -- and it has led to some valuable intelligence for us," Patel testified Wednesday. Wyden said buying information on Americans without obtaining a warrant was an "outrageous end-run around the Fourth Amendment," referring to the constitutional law that protects people in America from device searches and data seizures. Read more of this story at Slashdot. |
Cloudflare Appeals Piracy Shield Fine, Hopes To Kill Italy's Site-Blocking Law | | Cloudflare is appealing a 14.2 million-euro fine from Italy for refusing to comply with its "Piracy Shield" law, which requires blocking access to websites on its 1.1.1.1 DNS service within 30 minutes. The company argues the system lacks oversight, risks widespread overblocking, and could undermine core Internet infrastructure. Ars Technica's Jon Brodkin reports: Piracy Shield is "a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet," Cloudflare said in a blog post this week. "After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare... We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself." Cloudflare called the fine of 14.2 million euros ($16.4 million) "staggering." AGCOM issued the penalty in January 2026, saying Cloudflare flouted requirements to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders.
Cloudflare had previously resisted a blocking order it received in February 2025, arguing that it would require installing a filter on DNS requests that would raise latency and negatively affect DNS resolution for sites that aren't subject to the dispute over piracy. Cloudflare co-founder and CEO Matthew Prince said that censoring the 1.1.1.1 DNS resolver would force the firm "not just to censor the content in Italy but globally."
Piracy Shield was designed to combat pirated streams of live sports events, requiring network operators to block domain names and IP addresses within 30 minutes of receiving a copyright notification. Cloudflare said the fine should have been capped at 140,000 euros ($161,000), or 2 percent of its Italian earnings, but that "AGCOM calculated the fine based on our global revenue, resulting in a penalty nearly 100 times higher than the legal limit."
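The figures in Cloudflare's complaint are internally consistent, which a few lines of arithmetic confirm: 14.2 million euros is roughly 101 times the claimed 140,000-euro cap, and a cap equal to 2 percent of Italian earnings implies Italian earnings of about 7 million euros. (The earnings figure is derived here for illustration; neither company has reported it.)

```python
fine = 14_200_000      # AGCOM's penalty, in euros
claimed_cap = 140_000  # the legal maximum, per Cloudflare's reading
cap_rate = 0.02        # cap is 2 percent of Italian earnings

# "nearly 100 times higher than the legal limit"
print(round(fine / claimed_cap))      # prints 101

# Italian earnings implied by the claimed cap (derived, not reported)
print(round(claimed_cap / cap_rate))  # prints 7000000
```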
Despite its complaints about the size of the fine, Cloudflare said the principles at stake "are even larger" than the financial penalty. "Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes," Cloudflare said. Cloudflare is pushing for the law to be struck down, arguing that it is "incompatible with EU law, most notably the Digital Services Act (DSA), which requires that any content restriction be proportionate and subject to strict procedural safeguards."
In addition to appealing the fine, Cloudflare says it will continue to challenge Piracy Shield in Italian courts, engage with EU officials, and seek full access to AGCOM's Piracy Shield records. Read more of this story at Slashdot. |
Google Is Trying To Make 'Vibe Design' Happen | | With today's latest Stitch updates, Google is trying to make "vibe design" happen, reports The Verge's Jay Peters. The AI-native design platform encourages users to describe goals, feelings, or inspiration in "natural language," rather than starting with traditional blueprints.
In a blog post, Google Labs Product Manager Rustin Banks says that Stitch can turn those inputs into interactive prototypes, automatically map user flows, and support real-time iteration. It introduces voice capabilities that allow users to "speak directly to [the] canvas" for feedback or changes. Tools like DESIGN.md also help users create reusable design systems across various projects. Read more of this story at Slashdot. |
New Windows 11 Bug Breaks Samsung PCs, Blocking Access To C: Drive | | Longtime Slashdot reader UnknowingFool writes: Users of Samsung PCs are reporting the inability to access the C: drive after the Windows 11 February update. The bug seems to be in connection with the Samsung Galaxy Connect app, which allows Samsung phones and tablets to connect to Windows machines. [A previous stable version of the app has been re-released to prevent this problem from spreading.] This parody explains the situation with humor. The issue stems from update KB5077181 and is impacting Samsung PCs running Windows 11 25H2 or 24H2. Microsoft and Samsung have confirmed the issue and published a workaround, but as PCWorld notes, it will take some time. The workaround "requires removing the Samsung application, then asking Windows to repair the drive permissions and assigning a new owner, then restoring the Windows default permissions, including patching in some custom code that Microsoft wrote." Read more of this story at Slashdot. |
UK Plans To Require Labels On AI-Generated Content | | An anonymous reader quotes a report from Reuters: Britain plans to consider requiring labels on AI-generated content to protect consumers from disinformation and deepfakes, the government said on Wednesday, as it outlined other areas of focus to tackle the evolving global challenge. Technology minister Liz Kendall stressed the need to strike the right balance between protecting the creative industries and allowing the AI sector to innovate, saying in a statement that the government would take time to "get this right."
The next phase of the government's work on copyright and AI would also look at the harms posed by digital replicas made without consent, ways for creators to control their work online and support for independent creative organizations, she said. [...] Louise Popple, a copyright expert at law firm Taylor Wessing, noted that the government had not ruled out a broad exception that would allow AI developers to train on copyright works. "That's a subtle difference of approach and could be interpreted to mean that everything is still up for grabs," she said. "It feels very much like the hard issues are being kicked down the road by the government."
In 2024, Britain proposed easing copyright rules to let developers train models on lawfully accessed material, with creators able to reserve their rights. On Wednesday, Kendall said that having engaged with creatives, AI firms, industry bodies, unions and academics, the government had concluded it "no longer has a preferred option." "We will help creatives control how their work is used. This sits at the heart of our ambition for creatives -- including independent and smaller creative organizations -- to be paid fairly," she said. Read more of this story at Slashdot. |