IT

2/3 of Node.js Users Run an Outdated Version. So OpenJS Announces Program Offering Upgrade Providers (openjsf.org) 26

How many Node.js users are running unsupported or outdated versions? Roughly two-thirds, according to data from Node's nonprofit steward, OpenJS.

So they've announced "the Node.js LTS Upgrade and Modernization program" to help enterprises move safely off legacy/end-of-life Node.js. "This program gives enterprises a clear, trusted path to modernize," said the executive director of the OpenJS Foundation, "while staying aligned with the Node.js project and community." The Node.js LTS Upgrade and Modernization program connects organizations with experienced Node.js service providers who handle the work of upgrading safely.

Approved partners assess current versions and dependencies, manage phased upgrades to supported LTS releases, and offer temporary security support when immediate upgrades are not possible... Partners are surfaced exactly where users go when upgrades become unavoidable, including the Node.js website, documentation, and end-of-life guidance.
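The assessment step, checking installed versions against supported LTS lines, reduces to a simple comparison. A minimal sketch in Python; the set of supported majors below is an illustrative placeholder, not the project's actual release schedule:

```python
# Flag Node.js installs whose major release line has fallen out of LTS
# support. The supported set is hypothetical; a real assessment would
# pull the Node.js project's published release/end-of-life schedule.
SUPPORTED_LTS_MAJORS = {20, 22, 24}

def parse_major(version: str) -> int:
    """Extract the major number from a version string like 'v18.19.1'."""
    return int(version.lstrip("v").split(".")[0])

def needs_upgrade(version: str) -> bool:
    """True when the install's major line is not a supported LTS line."""
    return parse_major(version) not in SUPPORTED_LTS_MAJORS

fleet = ["v16.20.2", "v18.19.1", "v20.11.0", "v22.3.0"]
outdated = [v for v in fleet if needs_upgrade(v)]
```

On a real fleet the version strings would come from running `node --version` on each host, and a phased upgrade plan would start with the oldest major lines.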

The program follows the existing OpenJS Ecosystem Sustainability Program revenue model, with partners retaining 85% of revenue and 15% supporting OpenJS and Node.js through Open Collective and foundation operations. OpenJS provides the guardrails, alignment, and oversight to keep the program credible and connected to the project. We're pleased to welcome NodeSource as the inaugural partner in the Node.js LTS Upgrade and Modernization program.

"The goal is simple: reduce risk without breaking production or trust with the upstream project."

AI

Jack Dorsey's Block Accused of 'AI-Washing' to Excuse Laying Off Nearly Half Its Workforce (entrepreneur.com) 28

When Block cut 4,000 jobs — nearly half its workforce — co-founder Jack Dorsey "pointed to AI as the culprit," writes Entrepreneur magazine. "Dorsey claimed that AI tools now allow fewer employees to accomplish the same work."

"But analysts see a different explanation: poor management." Block more than tripled its employee base between 2019 and 2022, growing from 3,835 to 12,430 workers. The company's stock had fallen 40% since early 2025, creating pressure to cut costs. "This is more about the business being bloated for so long than it is about AI," Zachary Gunn, a Financial Technology Partners analyst, told Bloomberg.

The phenomenon has earned a nickname: "AI-washing," where companies use artificial intelligence as cover for traditional cost-cutting. Goldman Sachs economists estimate that AI is eliminating only 5,000 to 10,000 jobs per month across all U.S. sectors, hardly enough to justify Block's massive cuts.

"European Central Bank President Christine Lagarde told lawmakers in Brussels last week that ECB economists are monitoring for signs that AI is causing job losses," reports Bloomberg, "and are 'not yet seeing' the 'waves of redundancies that are feared'..." And "a recent survey of global executives published in the Harvard Business Review found that while AI has been cited as the reason for some layoffs, those cuts are almost entirely anticipatory: executives expect big efficiency gains that have not yet been realized."

Even a former senior Block executive "is questioning whether AI is truly the reason behind the cuts," writes Inc.: In a recent opinion piece for The New York Times, Aaron Zamost, Block's former head of communications, policy, and people, asked whether the layoffs reflect a genuine "new reality in which the work they do might no longer be viable," or whether artificial intelligence is "just a convenient and flashy new cover for typical corporate downsizing." Zamost acknowledged that the answer is unclear and perhaps unknowable, even within Block itself...

Looking more closely at the layoffs, Zamost argued that the specific roles affected suggest more traditional corporate cost-cutting than a sweeping AI transformation... Many of the responsibilities being eliminated, he argued, rely on distinctly human skills that AI systems still cannot replicate. "A chatbot can't meet with the mayor, cast commercial actors, or negotiate with the Securities and Exchange Commission," Zamost wrote. "Not all the roles I've heard that Block is eliminating can be handled by AI, yet executives are treating it as equally useful today to all disciplines."

Ultimately, Zamost suggested that the sincerity of companies' AI explanations may not really matter. "It matters less whether a company knows how to deploy AI and more whether investors believe it is on track to do so," he wrote.

Indeed, whatever the rationale for Dorsey's statement, "Wall Street didn't seem to mind," notes Entrepreneur magazine — since Block's stock shot up 15% after the announcement.

IT

Workers Who Love 'Synergizing Paradigms' Might Be Bad at Their Jobs (cornell.edu) 105

Cornell University makes an announcement. "Employees who are impressed by vague corporate-speak like 'synergistic leadership' or 'growth-hacking paradigms' may struggle with practical decision-making, a new Cornell study reveals." Published in the journal Personality and Individual Differences, research by cognitive psychologist Shane Littrell introduces the Corporate Bullshit Receptivity Scale (CBSR), a tool designed to measure susceptibility to impressive-but-empty organizational rhetoric... Corporate BS seems to be ubiquitous - but Littrell wondered if it is actually harmful. To test this, he created a "corporate bullshit generator" that churns out meaningless but impressive-sounding sentences like, "We will actualize a renewed level of cradle-to-grave credentialing" and "By getting our friends in the tent with our best practices, we will pressure-test a renewed level of adaptive coherence." He then asked more than 1,000 office workers to rate the "business savvy" of these computer-generated BS statements alongside real quotes from Fortune 500 leaders...

The results revealed a troubling paradox. Workers who were more susceptible to corporate BS rated their supervisors as more charismatic and "visionary," but also displayed lower scores on a portion of the study that tested analytic thinking, cognitive reflection and fluid intelligence. Those more receptive to corporate BS also scored significantly worse on a test of effective workplace decision-making. The study found that being more receptive to corporate bullshit was also positively linked to job satisfaction and feeling inspired by company mission statements. Moreover, those who were more likely to fall for corporate BS were also more likely to spread it.

Essentially, the employees most excited and inspired by "visionary" corporate jargon may be the least equipped to make effective, practical business decisions for their companies.

Government

Trump Administration Says It Can't Process Tariff Refunds Because of Computer Problems (theverge.com) 166

U.S. Customs and Border Protection (CBP) said in a filing on Friday that it currently cannot process billions in tariff refunds because its import-processing system is "not well suited to a task of this scale." The Verge reports: The CBP's admission comes after the Supreme Court struck down the tariffs imposed by Trump under the International Emergency Economic Powers Act (IEEPA) last month. This week, the International Trade Court ruled that importers impacted by the tariffs are entitled to refunds with interest. The CBP estimates that it collected around $166 billion in IEEPA duties as of March 4th, 2026. [...]

The CBP says it currently processes imports through its Automated Commercial Environment (ACE) system. In the filing, Lord says that using the department's existing technology, it would take more than 4.4 million hours to process refunds for the over 53.2 million entries with IEEPA duties. Despite these current limitations, the CBP says it's "confident" it can develop and launch new capabilities to "streamline and consolidate refunds and interest payments on an importer basis" -- but this could take 45 days. "The process will be simpler and more efficient than the existing functionalities, and CBP will provide guidance on how to file refund declarations in the new system," Lord says.
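The filing's figures imply roughly five minutes of ACE processing per entry. A quick back-of-envelope check (the per-entry rate and worker-year figure are derived here, not stated in the filing):

```python
# Back-of-envelope check on the figures in CBP's filing.
entries = 53.2e6   # entries with IEEPA duties
hours = 4.4e6      # CBP's estimated processing time using ACE

minutes_per_entry = hours * 60 / entries   # implied per-entry cost
worker_years = hours / (40 * 52)           # one worker at 40 h/week
```

At roughly five minutes per entry, a single full-time worker would need on the order of two thousand years, which is why CBP wants to consolidate refunds on a per-importer basis instead.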

Security

US Cybersecurity Agency Adds Exploited VMware Aria Operations Flaw To KEV Catalog (thehackernews.com) 4

joshuark writes: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added a VMware Aria Operations vulnerability tracked as CVE-2026-22719 to its Known Exploited Vulnerabilities (KEV) catalog, flagging the flaw as exploited in attacks. VMware Aria Operations is an enterprise monitoring platform that helps organizations track the performance and health of servers, networks, and cloud infrastructure. With the KEV listing, the U.S. cyber agency is requiring federal civilian agencies to address the issue by March 24, 2026. Broadcom said it is aware of reports indicating the vulnerability is exploited in attacks but cannot confirm the claims.

"A malicious unauthenticated actor may exploit this issue to execute arbitrary commands which may lead to remote code execution in VMware Aria Operations while support-assisted product migration is in progress," the advisory explains. Broadcom released security patches on February 24 and also provided a temporary workaround for organizations unable to apply the patches immediately. The mitigation is a shell script named "aria-ops-rce-workaround.sh," which must be executed as root on each Aria Operations appliance node. There are currently no details on how the vulnerability is being exploited in the wild, who is behind it, and the scale of such efforts.

The Internet

Computer Scientists Caution Against Internet Age-Verification Mandates (reason.com) 79

fjo3 shares a report from Reason Magazine: Effective January 1, 2027, providers of computer operating systems in California will be required to implement age verification. That's just part of a wave of state and national laws attempting to limit children's access to potentially risky content without considering the perils such laws themselves pose. Now, not a moment too soon, over 400 computer scientists have signed an open letter warning that the rush to protect children from online dangers threatens to introduce new risks including censorship, centralized power, and loss of privacy. They caution that age-verification requirements "might cause more harm than good." The group of computer scientists from around the world cautions that "those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet." They add that "this influence could be used to censor information and prevent users from accessing services."

"Regulating the use of VPNs, or subjecting their use to age assurance controls, will decrease the capability of users to defend their privacy online. This will not only force regular users to leave a larger footprint on the network, but will leave a number of at-risk populations unprotected, such as journalists, activists, or domestic abuse victims." It continues: "We note that we do not believe that trying to regulate VPN use for non-compliant users would be any more effective than trying to forbid the use of end-to-end encrypted communication for criminals. Secure cryptography is widely available and can no longer be put back into a box."

"If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services," warn the scientists. "Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation." With data on everyone collected in order to restrict the activites of minors, data abuses and privacy risks increase. "This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord."

Rather than mandated age restrictions, the letter urges lawmakers to weigh the dangers and suggests regulating social media algorithms instead. They also recommend "support for parents to locally prevent access to non-age-appropriate content or apps, without age-based control needing to be implemented by service providers."

Encryption

TikTok Says End-To-End Encryption Makes Users Less Safe (bbc.com) 86

An anonymous reader quotes a report from the BBC: TikTok will not introduce end-to-end encryption (E2EE) -- the controversial privacy feature used by nearly all its rivals -- arguing it makes users less safe. E2EE means only the sender and recipient of a direct message can view its contents, making it the most secure form of communication available to the general public. Platforms such as Facebook, Instagram, Messenger and X have embraced it because they say their priority is maximizing user privacy.

But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages. The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk. TikTok has consistently denied this, but earlier this year the social media firm's US operations were separated from its global business on the orders of US lawmakers.

TikTok told the BBC it believed end-to-end encryption prevented police and safety teams from being able to read direct messages if they needed to. It confirmed its approach to the BBC in a briefing about security at its London office, saying it wanted to protect users, especially young people from harm. It described this stance as a deliberate decision to set itself apart from rivals.
"Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritizing 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite," said social media industry analyst Matt Navarra. But Navarra said the move also "puts TikTok out of step with global privacy expectations" and might reinforce wariness for some about its ownership.

iPhone

A Possible US Government iPhone-Hacking Toolkit Is Now In the Hands of Foreign Spies, Criminals (wired.com) 39

Security researchers say a highly sophisticated iPhone exploitation toolkit dubbed "Coruna," which possibly originated from a U.S. government contractor, has spread from suspected Russian espionage operations to crypto-stealing criminal campaigns. Apple has patched the exploited vulnerabilities in newer iOS versions, but tens of thousands of devices may have already been compromised. An anonymous reader quotes an excerpt from Wired's report: Security researchers at Google on Tuesday released a report describing what they're calling "Coruna," a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

In fact, Google traces components of Coruna to hacking techniques it spotted in use in February of last year and attributed to what it describes only as a "customer of a surveillance company." Then, five months later, Google says a more complete version of Coruna reappeared in what appears to have been an espionage campaign carried out by a suspected Russian spy group, which hid the hacking code in a common visitor-counting component of Ukrainian websites. Finally, Google spotted Coruna in use yet again in what seems to have been a purely profit-focused hacking campaign, infecting Chinese-language crypto and gambling sites to deliver malware that steals victims' cryptocurrency.

Conspicuously absent from Google's report is any mention of who the original surveillance company "customer" that deployed Coruna may have been. But the mobile security company iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government. Google and iVerify both note that Coruna contains multiple components previously used in a hacking operation known as "Triangulation" that was discovered targeting Russian cybersecurity firm Kaspersky in 2023, which the Russian government claimed was the work of the NSA. (The US government didn't respond to Russia's claim.)

Coruna's code also appears to have been originally written by English-speaking coders, notes iVerify's cofounder Rocky Cole. "It's highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government," Cole tells WIRED. "This is the first example we've seen of very likely US government tools -- based on what the code is telling us -- spinning out of control and being used by both our adversaries and cybercriminal groups." Regardless of Coruna's origin, Google warns that a highly valuable and rare hacking toolkit appears to have traveled through a series of unlikely hands, and now exists in the wild where it could still be adopted -- or adapted -- by any hacker group seeking to target iPhone users.
"How this proliferation occurred is unclear, but suggests an active market for 'second hand' zero-day exploits," Google's report reads. "Beyond these identified exploits, multiple threat actors have now acquired advanced exploitation techniques that can be re-used and modified with newly identified vulnerabilities."

AI

Apple Might Use Google Servers To Store Data For Its Upgraded AI Siri (theverge.com) 21

Apple has reportedly asked Google to look into "setting up servers" for a Gemini-powered upgrade to Siri that meets Apple's privacy standards. The Verge reports: Apple had already announced in January that Google's Gemini AI models would help power the upgraded version of Siri it delayed last year, but The Information's report indicates Apple might lean even more on Google so it can catch up in AI.

The original partnership announcement said that "the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology," and that the models would "help power future Apple Intelligence features," including "a more personalized Siri." While the announcement noted that Apple Intelligence would "continue to run on Apple devices and Private Cloud Compute," it didn't specify if the new Siri would run on Google's cloud.

Apple's Private Cloud Compute is not only underpowered but it's also underutilized in its current state, notes 9to5Mac, "with the company only using about 10% of its capacity on average, leading to some already-manufactured Apple servers to be sitting dormant on warehouse shelves."

The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. The elliptic curve signatures in today's X.509 certificates are about 64 bytes each, and a typical certificate chain comprises six such signatures and two EC public keys. This material can be cracked through the quantum-enabled Shor's algorithm. The equivalent quantum-resistant signatures are roughly 2.5 kilobytes each. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of the material used in more traditional verification processes in public key infrastructure. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."
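The inclusion-proof idea is generic Merkle-tree machinery. A minimal sketch that illustrates the data structure only, not Google's actual MTC encoding or signature format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns the levels, leaves first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Sibling hashes from leaf to root: the lightweight proof of inclusion."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [f"cert-{i}".encode() for i in range(8)]
levels = build_tree(leaves)
root = levels[-1][0]                # the single signed "Tree Head"
proof = prove(levels, index=5)      # one sibling hash per level
```

Because the proof carries one sibling hash per tree level, its size grows logarithmically: a tree of a million certificates needs only about 20 hashes of 32 bytes each, which is how the "certificate" sent to the browser stays small.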

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The [Merkle Tree Certificates] MTCs use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same size as the certificates in use today [...]. The new system has already been implemented in Chrome.

AI

Southern California Air Board Rejects Pollution Rules After AI-Generated Flood of Comments (phys.org) 52

Southern California's air quality board rejected proposed rules to phase out gas-powered appliances after receiving more than 20,000 opposition comments generated through CiviClick, which bills itself as "the first and best AI-powered grassroots advocacy platform." Phys.org reports: A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says. It is not clear how AI was deployed in the campaign, and officials at CiviClick did not respond to repeated requests for comment. But their website boasts several tools, including "state of the art technology and artificial intelligence message assistance" that can be used to create custom advocacy letters, as opposed to repetitive form letters or petitions often used in similar campaigns.

When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages, records show. But the email onslaught almost certainly influenced the board's June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand.

The proposed rules were nearly two years in the making and would have placed a fee on natural gas-powered water heaters and furnaces, favoring electric ones, in an effort to reduce air pollution in the district, which includes Orange County and large swaths of Los Angeles, Riverside and San Bernardino counties. Gas appliances emit nitrogen oxides, or NOx -- key pollutants for forming smog. The implications are troubling, experts said, and go beyond the use of natural gas furnaces and heaters in the second-largest metropolitan area in the country.

Government

CISA Replaces Bumbling Acting Director After a Year (techcrunch.com) 26

New submitter DeanonymizedCoward shares a report from TechCrunch: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is reportedly in crisis following major budget cuts, layoffs, and furloughs under the Trump administration. The agency has now replaced its acting director, Madhu Gottumukkala, after a turbulent year marked by controversy and internal turmoil. During his tenure, Gottumukkala allegedly mishandled sensitive information by uploading government documents to ChatGPT, oversaw a one-third reduction in staff, and reportedly failed a counterintelligence polygraph needed for classified access. His leadership also saw the suspension of several senior officials, including CISA's chief security officer. Nextgov also reported that CISA lost another top senior official, Bob Costello, the agency's chief information officer tasked with overseeing the agency's IT systems and data policies. "Last month, CISA's acting director Madhu Gottumukkala reportedly took steps to transfer Costello, but other political appointees blocked it," added Nextgov.

IT

Smartphone Market To Decline 13% in 2026, Marking the Largest Drop Ever Due To the Memory Shortage Crisis (idc.com) 15

An anonymous reader shares a report: Worldwide smartphone shipments are forecast to decline 12.9% year-on-year (YoY) in 2026 to 1.1 billion units, according to the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker. This decline will bring the smartphone market to its lowest annual shipment volume in more than a decade. The current forecast represents a sharp decline from our November forecast amid the intensifying memory shortage crisis.

iOS

iPhone and iPad Are First Consumer Devices Cleared for NATO Classified Data (macrumors.com) 27

Apple's iPhone and iPad running iOS 26 and iPadOS 26 have become the first consumer mobile devices cleared for NATO-restricted classified data. No special software or settings are required. MacRumors reports: Apple's devices are the first and only consumer mobile products that have reached this government certification level after security testing and evaluation by the German government. iPhones and iPads running iOS 26 and iPadOS 26 are now certified for use with classified data in all NATO nations.

In an announcement of the security clearance, Apple touted its security features: "Apple designs security into all of its products from the start, ensuring the most sophisticated protections are built in across hardware, software, and Apple silicon. This unique approach allows Apple users to benefit from industry-leading security protections such as best-in-class encryption, biometric authentication with Face ID, and groundbreaking features like Memory Integrity Enforcement. These same protections are now recognized as meeting stringent government and international security requirements, even for restricted data."

Television

HBO Max's Password-Sharing Crackdown Will Expand Globally in 2026 (thewrap.com) 21

HBO Max will be cracking down on password sharing around the world. From a report: The streamer first started cracking down on password sharing in the United States late last August. Subscribers are now able to add an additional out-of-household account for $7.99 a month. Before that August change, Warner Bros. Discovery had been testing for months to determine who may or may not be a "legitimate user," as CEO and President for Warner Bros. Discovery Global Streaming and Games JB Perrette described the plan.

On Thursday during the company's fourth quarter earnings call for 2025, WBD revealed that the streaming limitations would be expanding. This news came as part of an answer about which levers the company plans to pull to grow HBO Max. Password crackdowns have proven to be a lucrative way to both boost revenue and subscriptions. Netflix, for example, saw 9 million more subscribers after its first wave of password crackdowns in 2024. The caveat is that password crackdowns do not lead to consistent growth, and they often infuriate subscribers.

IT

Cloudflare Experiment Ports Most of Next.js API in 'One Week' With AI (theregister.com) 29

An anonymous reader shares a report: A Cloudflare engineer says he has implemented 94% of the Next.js API by directing Anthropic's Claude, spending about $1,100 on tokens. The purpose of the experimental project was not to show off AI coding, but to address an issue with Next.js, the popular React-based framework sponsored by Vercel.

According to Cloudflare engineering director Steve Faulkner, the Next.js tooling is "entirely bespoke... If you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run."

The Next.js team is addressing this following numerous complaints that deploying the framework with full features on platforms other than Vercel is too difficult, with a feature in progress called deployment adapters. "Vercel will use the same adapter API as every other partner," the company said when introducing the planned feature last year.

Security

AI Can Find Hundreds of Software Bugs -- Fixing Them Is Another Story (theregister.com) 26

Anthropic last week promoted Claude Code Security, a research preview capability that uses its Claude Opus 4.6 model to hunt for software vulnerabilities, claiming its red team had surfaced over 500 bugs in production open-source codebases -- but security researchers say the real bottleneck was never discovery.

Guy Azari, a former security researcher at Microsoft and Palo Alto Networks, told The Register that only two to three of those 500 vulnerabilities have been fixed and none have received CVE assignments. The National Vulnerability Database already carried a backlog of roughly 30,000 CVE entries awaiting analysis in 2025, and nearly two-thirds of reported open-source vulnerabilities lacked an NVD severity score.

The curl project closed its bug bounty program because maintainers could no longer handle the flood of poorly crafted reports from AI tools and humans alike. Feross Aboukhadijeh, CEO of security firm Socket, said discovery is becoming dramatically cheaper but validating findings, coordinating with maintainers, and developing architecture-aligned patches remains slow, human-intensive work.

AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

HP

HP Says Memory's Contribution To PC Costs Just Doubled To 35% (theregister.com) 25

HP has revealed that memory now accounts for 35% of the cost of materials it needs to build a PC, up from between 15 and 18% last quarter. And the company expects RAM's contribution will rise through the year. From a report: Speaking on the company's Q1 2026 earnings call, interim CEO Bruce Broussard said the company has secured long-term supply agreements for the year and also "qualified new suppliers [and] built in strategic inventory positions for key platforms and cut the time to qualify new material in half to accelerate our product configuration changes."

That sounds a lot like HP Inc is signing up new suppliers at a brisk pace. Broussard said the company has also "expanded lower-cost sourcing across our commodity basket, lowering logistics costs with agile end-to-end planning processes." The company is using its internal AI initiatives to power those new processes. The company is also "configuring our products and shaping demand to align the supply we have with our customer needs" and "taking targeted pricing actions to offset the remaining cost impact in close partnership with both our channel and direct customers."

AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 75

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring her repeated commands from her phone to stop.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
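The failure mode Yue describes can be illustrated with a toy compaction policy. This is a hypothetical sketch, not OpenClaw's actual implementation: once the transcript exceeds a size budget, older messages get lossily summarized, and a critical instruction buried in that older portion can silently vanish.

```python
# Toy illustration of context "compaction": when the running transcript
# exceeds a budget, all but the newest messages are collapsed into a
# truncated summary. The policy below is deliberately naive to show how
# a safety-critical instruction can drop out of the compressed context.

def compact(messages, keep_recent=2, budget=120):
    """Summarize older messages when the transcript exceeds the budget."""
    if sum(len(m) for m in messages) <= budget:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = "SUMMARY: " + "; ".join(m[:15] for m in older)  # lossy!
    return [summary] + recent

history = [
    "Sort my inbox into folders by sender",
    "IMPORTANT: never delete anything without asking",
    "Here are 5,000 more emails to process...",
    "Continue with the remaining emails",
]
compacted = compact(history)
critical_kept = any("never delete" in m for m in compacted)  # the rule is gone
```

After compaction the "never delete" instruction survives only as a truncated summary fragment, so an agent re-reading its compressed context would no longer see the prohibition, which is the behavior Yue hypothesizes for her inbox run.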

Slashdot Top Deals