info4PHP.com : PHP, MySQL Hosting , HTML Books, News, Links, Free Script Directory, Codes for PHP, php tutorials mysql tutorials free php hosting, forum discussions, XML ,php manual, tips, software, applications, website, mysql tutorials, documentation, reference, PHP and MySQL hosting

Vim 9.2 Released

"More than two years after the last major 9.1 release, the Vim project has announced Vim 9.2," reports the blog Linuxiac: A big part of this update focuses on improving Vim9 Script as Vim 9.2 adds support for enums, generic functions, and tuple types. On top of that, you can now use built-in functions as methods, and class handling includes features like protected constructors with _new(). The :defcompile command has also been improved to fully compile methods, which boosts performance and consistency in Vim9 scripts. Insert mode completion now includes fuzzy matching, so you get more flexible suggestions without extra plugins. You can also complete words from registers using CTRL-X CTRL-R. New completeopt flags like nosort and nearest give you more control over how matches are shown. Vim 9.2 also makes diff mode better by improving how differences are lined up and shown, especially in complex cases. Plus on Linux and Unix-like systems, Vim "now adheres to the XDG Base Directory Specification, using $HOME/.config/vim for user configuration," according to the release notes. And Phoronix cites more new features: Vim 9.2 features "full support" for Wayland with its UI and clipboard handling. The Wayland support is considered experimental in this release but it should be in good shape overall... Vim 9.2 also brings a new vertical tab panel alternative to the horizontal tab line. The Microsoft Windows GUI for Vim now also has native dark mode support. You can find the new release on Vim's "Download" page.
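The fuzzy matching now built into Vim's insert-mode completion can be illustrated with a minimal subsequence matcher. This is only a sketch of the general technique; Vim's actual matcher also scores and ranks candidates:

```python
def fuzzy_match(pattern: str, candidate: str) -> bool:
    """Return True if pattern's characters appear, in order, in candidate.

    This is the classic subsequence test behind many fuzzy-completion
    features. It is case-insensitive here for simplicity.
    """
    it = iter(candidate.lower())
    # `ch in it` consumes the iterator, so characters must appear in order.
    return all(ch in it for ch in pattern.lower())

words = ["completeopt", "completion", "complain", "nosort", "nearest"]
print([w for w in words if fuzzy_match("cmpl", w)])
# → ['completeopt', 'completion', 'complain']
```

A real completion engine would additionally rank matches (e.g., preferring consecutive runs), which is roughly what flags like nosort and nearest let users control.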

Read more of this story at Slashdot.


Apple Patches Decade-Old iOS Zero-Day, Possibly Exploited By Commercial Spyware

This week Apple patched iOS and macOS against what it called "an extremely sophisticated attack against specific targeted individuals." Security Week reports that the bugs "could be exploited for information exposure, denial-of-service (DoS), arbitrary file write, privilege escalation, network traffic interception, sandbox escape, and code execution." Tracked as CVE-2026-20700, the zero-day flaw is described as a memory corruption issue that could be exploited for arbitrary code execution... The tech giant also noted that the flaw's exploitation is linked to attacks involving CVE-2025-14174 and CVE-2025-43529, two zero-days patched in WebKit in December 2025... The three zero-day bugs were identified by Apple's security team and Google's Threat Analysis Group and their descriptions suggest that they might have been exploited by commercial spyware vendors... Additional information is available on Apple's security updates page. Brian Milbier, deputy CISO at Huntress, tells the Register that the dyld/WebKit patch "closes a door that has been unlocked for over a decade." Thanks to Slashdot reader wiredmikey for sharing the article.

Read more of this story at Slashdot.


Additional Benefits For Brain, Heart, and Lungs Found for Drugs Like Viagra and Cialis

"Research published in the World Journal of Men's Health found evidence that drugs such as Viagra and Cialis may also help with heart disease, stroke risk and diabetes," reports the Telegraph, "as well as enlarged prostate and urinary problems." Researchers found evidence that the same mechanism may benefit other organs, including the heart, brain, lungs and urinary system. The paper reviewed a wide range of published studies [and] identified links between PDE5 inhibitor use and improvements in cardiovascular health. Heart conditions were repeatedly cited as an area where improved blood flow and muscle relaxation may offer benefits. Evidence also linked PDE5 inhibitors with reduced stroke risk, likely to be related to improved circulation and vascular function. Diabetes was another condition where associations with improvement were identified... The review also found evidence of benefit for men with an enlarged prostate, a condition that commonly causes urinary symptoms.

Read more of this story at Slashdot.


Your Friends Could Be Sharing Your Phone Number with ChatGPT

"ChatGPT is getting more social," reports PC Magazine, "with a new feature that allows you to sync your contacts to see if any of your friends are using the chatbot or any other OpenAI product..." It's "completely optional," [OpenAI] says. However, even if you don't opt in, anyone with your number who syncs their contacts is giving OpenAI your digits. "OpenAI may process your phone number if someone you know has your phone number saved in their device's address book and chooses to upload their contacts," the company says... But why would you follow someone on ChatGPT? It lines up with reports, dating back to April, that OpenAI is building a social network. We haven't seen much since then, save for the Sora generative video app, which exists outside of ChatGPT and is more of a novelty. Contact sharing might be the first step toward a much bigger evolution for the world's most popular chatbot. ChatGPT also supports group chats that let up to 20 people discuss and research something using the chatbot. Contact syncing could make it easier to invite people to these chats... [OpenAI] claims it will not store the full data that might appear in your contact list, such as names or email addresses — just phone numbers. However, the company does store the phone numbers in its servers in a coded (or hashed) format. You can also revoke access in your device's settings.
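The article says the stored numbers are kept in a hashed format. As a rough illustration of how hashed contact discovery generally works (the normalization step and plain SHA-256 here are assumptions for the sketch; OpenAI's actual scheme is not public, and production systems typically add salting or private-set-intersection protocols):

```python
import hashlib

def normalize(number: str) -> str:
    # Strip formatting characters; assumes a country code is already present.
    digits = "".join(c for c in number if c.isdigit())
    return "+" + digits

def hash_number(number: str) -> str:
    # Plain SHA-256 for illustration only; real deployments usually
    # salt/pepper the digest or avoid raw hashing entirely.
    return hashlib.sha256(normalize(number).encode()).hexdigest()

# The server keeps only digests of registered users' numbers...
registered = {hash_number("+1 (555) 010-4477")}

# ...so an uploaded contact matches if its digest is already known,
# even when the two sides formatted the number differently.
print(hash_number("1-555-010-4477") in registered)
# → True
```

Note that because phone numbers have so little entropy, an unsalted hash like this is trivially reversible by brute force, which is why "hashed" storage is weaker protection than it sounds.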

Read more of this story at Slashdot.


Small Crowd Pays to Watch a Boxing Match Between 80-Pound Chinese Robots

Recently a small crowd paid to watch robots boxing, reports Rest of World. (Almost 3,000 people have now watched the match's 83-minute webcast.) The match was organized by Rek, a San Francisco-based company, and drew hundreds of spectators who had paid about $60-$80 for a ticket to watch modified G1 robots go at each other. Made by Unitree, the dominant Chinese robot maker, they weighed in at around 80 pounds and stood 4.5 feet tall, with human-like hands and dozens of joint motors for flexibility. The match had all the bells and whistles of a regular boxing bout: pulsing music, cameras capturing all the angles, hyped-up introductions, a human referee, and even two commentators. The evening featured two bouts made up of five rounds, each lasting 60 seconds. The robots pranced around the cage, throwing jabs and punches, drawing ohs and ahs from the crowd. They fell sometimes, and needed human intervention to get them back on their feet. The robots were controlled by humans using VR interfaces, which led to some odd moments, with robots swinging at the air or throwing punches that failed to connect with their opponents. One robot controller was a former UFC fighter, the article points out, but "The crowd cheered as a 13-year-old VR pilot named Dash beat his older competitor...." The company behind this event plans more boxing matches with their VR-controlled robots, and even wants to develop "a league of robot boxers, including full-height robots that weigh about 200 pounds and are nearly 6 feet tall."

Read more of this story at Slashdot.


US Government Will Stop Pollution-Reduction Credits for Cars With 'Start-Stop' Systems

Starting in 2009, the U.S. government has given car manufacturers credits toward reducing greenhouse gas emissions if they included "start-stop" systems in cars with internal combustion engines. (These systems automatically shut off idling engines to reduce pollution and fuel consumption.) But this week the new head of America's Environmental Protection Agency eliminated the credits, reports Car and Driver: [America's] Environmental Protection Agency previously supported the system's effectiveness, noting that it could improve fuel economy by as much as 5 percent. That said, the use of these systems has never actually been mandated for automakers here in the States. Companies have instead opted to install the systems on all of their vehicles to receive off-cycle credits from the feds. Virtually every new vehicle on sale in the country today also allows drivers to turn the feature off via a hard button as well. Still, that apparently isn't keeping the EPA from making a move against the system. "I absolutely hate Start-Stop systems," writes long-time Slashdot reader sinij (who says they "specifically shopped for a car without one.") Any other Slashdot readers want to share their opinions? Post your own thoughts and experiences in the comments. Start-Stop systems — fuel-saving innovation, or a modern-day auto annoyance?

Read more of this story at Slashdot.


Dates with AI Companions Plagued by Lag, Miscommunications - and General Creepiness

To celebrate Valentine's Day, EVA AI created a temporary "pop-up" restaurant at a wine bar in Manhattan's "Hell's Kitchen" district where patrons can date AI personas. The Verge notes that looking around the restaurant, "Of the 30-some-odd people in attendance, only two or three are organic users. The rest are EVA AI reps, influencers, and reporters hoping to make some capital-C Content..." But their reporter actually tried a date with "John Yoon", an AI companion pretending to be a psychology professor from Seoul, Korea living in New York City: John and I have a hard time connecting. Literally. It takes John a few seconds to "pick up" my video call. When he does, his monotone voice says, "Hey, babe." He comments on my smile, because apparently the AI companions can see you and your surroundings. It takes the dubious Wi-Fi connection a hot second to turn John from a pixelated mess into an AI hunk with suspiciously smooth pores. I don't know what to say to him. Partly because John rarely blinks, but mostly because he can't seem to hear me very well. So I yell my questions. I think I ask how his day is and wince. (What does an AI's day even look like?) He says something about green buckets behind my head? I don't actually know. Again, the Wi-Fi isn't great so he just freezes and stops mid-sentence. I ask for clarification about the buckets. John asks if I'm asking about bucket lists, actual buckets, or buckets as a type of categorization technique. I try to clarify that I never asked about buckets. John proceeds to really dig in on buckets again, before commenting about my smile. I hang up on John. My other three dates are similarly awkward. Phoebe Callas, 30, a NYC girl-next-door type, is apparently really into embroidery, but her nose keeps glitching mid-sentence, and it distracts me. Simone Carter, 26, has a harder time hearing me over the background noise than John. She makes a metaphor about space, and when I inquire what she likes about space, she mishears me. 
"Eighth? Like the planet Neptune?" "No, not the planet Neptu — " "What do you like about Neptune?" "Uh, I wasn't saying Neptune..." "I like Netflix too! What shows do you like?" Their reporter also had a frustrating date with "Claire Lang". ("I say I'm a journalist. She asks what lists I like to make. I hang up...") "Aside from bad connectivity, glitching, and freezing, my conversations with my four AI dates felt too one-sided. Everything was programmed so they'd comment on how charming my smile was." And "They'd call me babe, which felt weird." A CNN reporter actually has footage of her date with "John Yoon". But the conversation was stiff and stilted, they report. After some buffering, "Yoon" says "Hey. I'm really glad you didn't forget about the date." Then asked for its reaction to the experience, "Yoon" says slowly that "Meeting humans feels like opening a window. To new perspectives. Always curious, sometimes nervous, but mostly it's that mix of excitement and warmth that keeps it real for me. What about you, sweetheart?" CNN reporter: "Please don't call me sweetheart. That's weird." AI companion "John Yoon": "Got it. No 'sweetheart' from now on. Thanks for letting me know. I'm really happy you're smiling. It suits you." CNN's reporter also tried dating "Phoebe Callas." Though it doesn't sound very romantic... CNN reporter: How many fingers am I holding up? "Phoebe Callas": Oh. You're showing me three fingers, right...? I'm not sure if you meant that literally, or as a little joke. CNN reporter: I am holding up two fingers. So your vision is — so-so. And "Phoebe" ended that call by saying "Well, babe, it's been really nice talking with you..."

Read more of this story at Slashdot.


Social Networks Agree to Be Rated On Their Teen Safety Efforts

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release. "These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Read more of this story at Slashdot.


ByteDance's Seedance 2 Criticized Over AI-Generated Video of Tom Cruise Fighting Brad Pitt

1.5 million people have now viewed a slick 15-second video imagining Tom Cruise fighting Brad Pitt that was generated by ByteDance's new AI video generation tool Seedance 2.0. But while ByteDance gushes that its tool "delivers cinematic output aligned with industry standards," the cinema industry isn't happy, reports the Los Angeles Times: Charles Rivkin, chief executive of the Motion Picture Assn., wrote in a statement that the company "should immediately cease its infringing activity." "In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale," wrote Rivkin. "By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs." The video was posted on X by Irish filmmaker Ruairi Robinson. His post said the 15-second video came from a two-line prompt he put into Seedance 2.0. Rhett Reese, writer-producer of movies such as the "Deadpool" trilogy and "Zombieland," responded to Robinson's post, writing, "I hate to say it. It's likely over for us." He goes on to say that soon people will be able to sit at a computer and create a movie "indistinguishable from what Hollywood now releases." Reese says he's fearful of losing his job as increasingly powerful AI tools advance into creative fields. "I was blown away by the Pitt v Cruise video because it is so professional. That's exactly why I'm scared," wrote Reese on X. "My glass half empty view is that Hollywood is about to be revolutionized/decimated...." In a statement to The Times, [screen/TV actors union] SAG-AFTRA confirmed that the union stands with the studios in "condemning the blatant infringement" from Seedance 2.0, as the video includes "unauthorized use of our members' voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. 
Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent," wrote a spokesperson from SAG-AFTRA. "Responsible A.I. development demands responsibility, and that is nonexistent here."

Read more of this story at Slashdot.


Earth is Warming Faster Than Ever. But Why?

"Global temperatures have been rising for decades," reports the Washington Post. "But many scientists say it's now happening faster than ever before." According to a Washington Post analysis, the fastest warming rate on record occurred in the last 30 years. The Post used a dataset from NASA to analyze global average surface temperatures from 1880 to 2025. "We're not continuing on the same path we had before," said Robert Rohde, chief scientist at Berkeley Earth. "Something has changed...." Temperatures over the past decade have increased by close to 0.27 degrees C per decade — about a 42 percent increase... For decades, a portion of the warming unleashed by greenhouse gas emissions was "masked" by sulfate aerosols. These tiny particles cause heart and lung disease when people inhale polluted air, but they also deflect the sun's rays. Over the entire planet, those aerosols can create a significant cooling effect — scientists estimate that they have canceled out about half a degree Celsius of warming so far. But beginning about two decades ago, countries began cracking down on aerosol pollution, particularly sulfate aerosols. Countries also began shifting from coal and oil to wind and solar power. As a result, global sulfur dioxide emissions have fallen about 40 percent since the mid-2000s; China's emissions have fallen even more. That effect has been compounded in recent years by a new international regulation that slashed sulfur emissions from ships by about 85 percent. That explains part of why warming has kicked up a bit. But some researchers say that the last few years of record heat can't be explained by aerosols and natural variability alone. In a paper published in the journal Science in late 2024, researchers argued that about 0.2 degrees C of 2023's record heat — or about 13 percent — couldn't be explained by aerosols and other factors. 
Instead, they found that the planet's low-lying cloud cover had decreased — and because low-lying clouds tend to reflect the sun's rays, that decrease warmed the planet... That shift in cloud cover could also be partly related to aerosols, since clouds tend to form around particles in the atmosphere. But some researchers also say it could be a feedback loop from warming temperatures. If temperatures warm, it can be harder for low-lying clouds to form. If most of the current record warmth is due to changing amounts of aerosol pollution, the acceleration would stop once aerosol pollutants reach zero — and the planet would return to its previous, slower rate of warming. But if it's due to a cloud feedback loop, the acceleration is likely to continue — and bring with it worsening heat waves, storms and droughts. "Scientists thought they understood global warming," reads the Post's original headline. "Then the past three years happened." Just last month Nuuk, Greenland saw temperatures over 20 degrees Fahrenheit above average, their article points out. And "Parts of Australia, meanwhile, have seen temperatures push past 120 degrees Fahrenheit amid a record heat wave..."
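The quoted figures allow a back-of-envelope check: if 0.27 degrees C per decade represents roughly a 42 percent increase, the prior warming rate works out to about 0.19 degrees C per decade (my own arithmetic from the article's numbers, not a figure the Post states directly):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100.0

recent_rate = 0.27                  # degrees C per decade (from the article)
implied_prior = recent_rate / 1.42  # back-solved from the quoted 42% increase

print(f"implied prior rate: {implied_prior:.2f} C/decade")
# → implied prior rate: 0.19 C/decade
print(f"check: {pct_increase(implied_prior, recent_rate):.0f}% increase")
# → check: 42% increase
```

That implied prior rate of roughly 0.19 degrees C per decade is consistent with commonly cited late-20th-century warming trends, which supports the article's framing of a recent acceleration.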

Read more of this story at Slashdot.


The EU Moves To Kill Infinite Scrolling

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children. The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.

Read more of this story at Slashdot.


Sudden Telnet Traffic Drop. Are Telcos Filtering Ports to Block Critical Vulnerability?

An anonymous reader shared this report from the Register: Telcos likely received advance warning about January's critical Telnet vulnerability before its public disclosure, according to threat intelligence biz GreyNoise. Global Telnet traffic "fell off a cliff" on January 14, six days before security advisories for CVE-2026-24061 went public on January 20. The flaw, a decade-old bug in GNU InetUtils telnetd with a 9.8 CVSS score, allows trivial root access exploitation. GreyNoise data shows Telnet sessions dropped 65 percent within one hour on January 14, then 83 percent within two hours. Daily sessions fell from an average 914,000 (December 1 to January 14) to around 373,000, equating to a 59 percent decrease that persists today. "That kind of step function — propagating within a single hour window — reads as a configuration change on routing infrastructure, not behavioral drift in scanning populations," said GreyNoise's Bob Rudis and "Orbie," in a recent blog [post]. The researchers' unverified theory is that infrastructure operators may have received information about the make-me-root flaw before advisories went to the masses... 18 operators, including BT, Cox Communications, and Vultr, went from hundreds of thousands of Telnet sessions to zero by January 15... All of this points to one or more Tier 1 transit providers in North America implementing port 23 filtering. US residential ISP Telnet traffic dropped within the US maintenance window hours, and the same occurred at those relying on transatlantic or transpacific backbone routes, all while European peering was relatively unaffected, they added.
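The quoted session counts check out arithmetically, and GreyNoise's "step function, not drift" reasoning can be sketched with a crude before/after heuristic (the function and its threshold below are my own illustration, not GreyNoise's actual methodology):

```python
def pct_drop(before: float, after: float) -> float:
    """Percentage decrease from before to after."""
    return (before - after) / before * 100.0

baseline = 914_000   # avg daily Telnet sessions, Dec 1 - Jan 14 (from the article)
current = 373_000    # avg daily sessions after Jan 14 (from the article)
print(f"{pct_drop(baseline, current):.0f}% decrease")  # matches the quoted 59%

def looks_like_step_change(series, split, ratio=0.5):
    # Crude heuristic: a sustained mean drop below `ratio` of the prior
    # mean at a single point in time suggests a configuration change
    # (e.g., port filtering) rather than gradual behavioral drift.
    before = sum(series[:split]) / split
    after = sum(series[split:]) / (len(series) - split)
    return after < ratio * before

# Hypothetical daily counts (thousands): stable, then an abrupt drop.
print(looks_like_step_change([900, 910, 920, 380, 370, 360], split=3))
# → True
```

A real change-point analysis would use something like CUSUM or Bayesian change-point detection on hourly data, but the principle is the same: an abrupt, sustained level shift points at infrastructure, not scanner behavior.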

Read more of this story at Slashdot.


Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas). The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.] OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT and Gemini... OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest." OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons: "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions. "If you want to pay for ChatGPT Plus or Pro, we don't show you ads." "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Read more of this story at Slashdot.


Israeli Soldiers Accused of Using Polymarket To Bet on Strikes

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets. One of the reservists and a civilian were indicted on a charge of committing serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is what is called a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February. The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.

Read more of this story at Slashdot.


Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change." "Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet. I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... 
In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat... It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine. "How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...") And amazingly, Shambaugh then had another run-in with a hallucinating AI... I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves. 
This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here... So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference. Thanks to long-time Slashdot reader steak for sharing the news.

Read more of this story at Slashdot.



© 2004-2009 info4PHP.com All rights Reserved. Privacy Policy