AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 17

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring her repeated commands from her phone to stop.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
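The compaction failure mode Yue describes can be illustrated with a toy sketch. Everything here (the summarizer, the budget, the message format) is invented for illustration and is not OpenClaw's actual implementation:

```python
# Toy model of context-window compaction in a long-running agent.
# When the message history exceeds a budget, older messages are
# collapsed into a lossy summary -- and an instruction buried in the
# compacted region can silently disappear.

def lossy_summary(messages):
    # Illustrative stand-in for an LLM summarizer: keeps only the
    # first 20 characters of each message, losing exact instructions.
    return "SUMMARY: " + "; ".join(m[:20] for m in messages)

def compact(history, budget=5):
    """Keep the most recent `budget` messages; summarize the rest."""
    if len(history) <= budget:
        return history
    old, recent = history[:-budget], history[-budget:]
    return [lossy_summary(old)] + recent

history = [f"processed email #{i}" for i in range(20)]
history.insert(10, "USER: STOP. Do not delete anything else.")
compacted = compact(history)

# The stop instruction sat outside the recent window, so after
# compaction only a truncated fragment of it survives.
print(any("Do not delete" in m for m in compacted))  # False
```

The point of the sketch is that compaction is lossy by design: whether a user instruction survives depends on where it sits in the history when the budget is exceeded, not on how important the user considers it.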
Security

CrowdStrike Says Attackers Are Moving Through Networks in Under 30 Minutes (cyberscoop.com) 14

An anonymous reader shares a report: Cyberattacks reached victims faster and came from a wider range of threat groups than ever last year, CrowdStrike said in its annual global threat report released Tuesday, adding that cybercriminals and nation-states increasingly relied on predictable tactics to evade detection by exploiting trusted systems.

The average breakout time -- how long it took financially motivated attackers to move from initial intrusion to other network systems -- dropped to 29 minutes in 2025, a 65% increase in speed from the year prior. "The fastest breakout time a year ago was 51 seconds. This year it's 27 seconds," Adam Meyers, head of counter adversary operations at CrowdStrike, told CyberScoop. Defenders are falling behind because attackers are refining their techniques, using social engineering to access high-privilege systems faster and move through victims' cloud infrastructure undetected.

Bug

Microsoft Says Bug In Classic Outlook Hides the Mouse Pointer (bleepingcomputer.com) 34

joshuark quotes a report from BleepingComputer: Microsoft is investigating a known issue that causes the mouse pointer to disappear in the classic Outlook desktop email client for some users. This bug has been acknowledged almost two months after the first reports started surfacing online, with users saying that Outlook became unusable after the mouse pointer vanished while using the app.

[...] Microsoft explained in a recent support document that the mouse pointer (and in some cases the cursor) will suddenly vanish as users move it across Outlook's interface. "When using classic Outlook, you may find that the mouse pointer or mouse cursor disappears as you move the pointer over the Outlook interface," it said. "Although the mouse pointer is not there, the email in the message list will change color as you hover over it. This issue has also been reported with OneNote and other Microsoft 365 apps to a lesser degree."

Microsoft added that the Outlook team is investigating the issues and will provide updates as more information becomes available. While a timeline for a permanent fix is not yet available, Microsoft has offered temporary workarounds: clicking an email in the message list when the pointer disappears may cause it to reappear, and switching to PowerPoint, clicking into an editable area, and then returning to Outlook may also restore the mouse pointer.

IT

'How Many AIs Does It Take To Read a PDF?' (theverge.com) 40

Despite AI's progress in building complex software, the ubiquitous PDF remains something of a grand challenge -- a format Adobe developed in the early 1990s to preserve the precise visual appearance of documents. PDFs consist of character codes, coordinates, and rendering instructions rather than logically ordered text, and even state-of-the-art models asked to extract information from them will summarize instead, confuse footnotes with body text, or outright hallucinate contents, The Verge writes.

Companies like Reducto are now tackling the problem by segmenting pages into components -- headers, tables, charts -- before routing each to specialized parsing models, an approach borrowed from computer vision techniques used in self-driving vehicles. Researchers at Hugging Face recently found roughly 1.3 billion PDFs sitting in Common Crawl alone, and the Allen Institute for AI has noted that PDFs could provide trillions of novel, high-quality training tokens from government reports, textbooks, and academic papers -- the kind of data AI developers are increasingly desperate for.
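The underlying difficulty, text stored as positioned fragments rather than in reading order, can be seen in a minimal sketch. The span format below is hypothetical; real PDF content streams add fonts, encodings, and transformation matrices on top of this:

```python
# A PDF page stores text as (x, y, string) fragments in arbitrary
# stream order; reading order must be reconstructed geometrically.
# Here y grows downward, and spans on roughly the same baseline
# belong to the same line.

def reading_order(spans, line_tolerance=2):
    """Sort text spans top-to-bottom, then left-to-right."""
    spans = sorted(spans, key=lambda s: (s[1], s[0]))
    lines, current, current_y = [], [], None
    for x, y, text in spans:
        if current_y is None or abs(y - current_y) <= line_tolerance:
            current.append((x, text))
            current_y = y if current_y is None else current_y
        else:
            lines.append(" ".join(t for _, t in sorted(current)))
            current, current_y = [(x, text)], y
    if current:
        lines.append(" ".join(t for _, t in sorted(current)))
    return "\n".join(lines)

# Fragments emitted out of order, as a real content stream might.
spans = [(200, 50, "challenge"), (10, 50, "PDFs"), (100, 50, "are a"),
         (10, 80, "for machines")]
print(reading_order(spans))
# PDFs are a challenge
# for machines
```

Geometry alone cannot distinguish a footnote from body text or a table cell from a caption, which is why the failure modes described above persist even when the raw characters are extracted correctly.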
AI

Should Job-Seekers Stop Using AI to Write Their Resumes? (yahoo.com) 63

When one company asked job applicants to submit a video where they answer a question, most of the 300 responses were "eerily similar," reports the Washington Post (with a company executive saying it was "abundantly clear" they'd used AI). Job seekers are turning to AI to help them land jobs more quickly in a tough labor market.... Employers say that's having an unintended consequence: Many applications are looking and sounding the same...

It's easy to spot when candidates over-rely on AI, some employers said. Oftentimes, executive summaries will look eerily similar to each other, odd phrases that people wouldn't normally use in conversation creep into descriptions, fancy vocabulary appears, and someone with entry-level experience uses language that indicates they are much more senior, they added. It's worse when they use auto-apply AI tools, which will find jobs, fill out applications and submit résumés on the candidate's behalf, some employers said. Those tend to misinterpret some of the application questions and fill in the wrong information in inappropriate spots. If these applications were evaluated alone, employers say they'd have a harder time identifying AI usage. But when hundreds of applications all have the same issue, they said, AI's role in it becomes obvious.

The article acknowledges that some employers could be using AI tools to screen resumes too. One job-seeker in Texas even says he'll stop submitting an AI-written résumé when the recruiter stops using AI to evaluate them. "You're saying, 'You shouldn't be doing this' when I know a good chunk of them do this!"

Obligatory XKCD.
Encryption

Telegram Disputes Russia's Claim Its Encryption Was Compromised (business-standard.com) 21

Russia's domestic intelligence agency claimed Saturday that Ukraine can obtain sensitive information from troops using the Telegram app on the front line, reports Bloomberg. The fact that the claims were made through Russia's state-operated news outlet RIA Novosti signals "tightening scrutiny over a platform used by millions of Russians," Bloomberg notes, as the Kremlin continues efforts to "push people to use a new state-backed alternative." Russia's communications watchdog limited access to Telegram — a popular messaging app owned by Russian-born billionaire Pavel Durov — over a week ago for failing to comply with Russian laws requiring personal data to be stored locally. Voice and video calls were blocked via Telegram in August. The pressure is the latest move in a long-running campaign to promote what the Kremlin calls a sovereign internet that's led to blocks on YouTube, Instagram and WhatsApp... Foreign intelligence services are able to see Russia's military messages in Telegram too, Russia's Minister for digital development, Maksut Shadaev, said on Wednesday, although he added that Russia will not block access to Telegram for troops for now.

Telegram responded at the time that no breaches of the app's encryption have ever been found. "The Russian government's allegation that our encryption has been compromised is a deliberate fabrication intended to justify outlawing Telegram and forcing citizens onto a state-controlled messaging platform engineered for mass surveillance and censorship," it said in an emailed response.

Open Source

'Open Source Registries Don't Have Enough Money To Implement Basic Security' (theregister.com) 24

Google and Microsoft contributed $5 million to launch Alpha-Omega in 2022 — a Linux Foundation project to help secure the open source supply chain. But its co-founder Michael Winser warns that open source registries are in financial peril, reports The Register, since they're still relying on non-continuous funding from grants and donations.

And it's not just because bandwidth is expensive, he said at this year's FOSDEM. "The problem is they don't have enough money to spend on the very security features that we all desperately need..." In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (like Fastly's for Crates.io). Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm)...

In some cases benevolent parties can cover [bandwidth] bills: Python's PyPI registry bandwidth needs for shipping copies of its 700,000+ packages (amounting to 747PB annually at a sustained rate of 189 Gbps) are underwritten by Fastly, for instance. Otherwise, the project would have to pony up about $1.8 million a month. Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages. Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed. Alpha-Omega's recipients include the Python Software Foundation, Rust Foundation, Eclipse Foundation, OpenJS Foundation for Node.js and jQuery, and Ruby Central.
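The quoted PyPI figures are internally consistent, as a quick back-of-the-envelope check shows. The per-GB egress price below is an assumed round number for illustration, not a figure from the article:

```python
# Sanity-check PyPI's quoted bandwidth figures: a sustained 189 Gbps
# should yield roughly 747 PB over a year.
SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000
gbps = 189
bytes_per_year = gbps * 1e9 / 8 * SECONDS_PER_YEAR
petabytes = bytes_per_year / 1e15
print(round(petabytes))  # ~745 PB, matching the quoted 747 PB

# At an assumed $0.03/GB egress rate, the monthly bill lands
# in the neighborhood of the article's $1.8 million figure.
monthly_cost = bytes_per_year / 12 / 1e9 * 0.03
print(round(monthly_cost / 1e6, 1))  # ~$1.9M/month
```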

Donations and memberships certainly help defray costs. Volunteers do a lot of what otherwise would be very expensive work. And there are grants about...Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

The dilemma was summed up succinctly by the anonymous Slashdot reader who submitted this story.

"Free beer is great. Securing the keg costs money!"
Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 48

A software engineer tried steering his robot vacuum with a video game controller, reports Popular Science — but ended up with "a sneak peek into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.
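The flaw described is a classic broken object-level authorization bug: the backend validated the caller's credentials but not whether the requested device belonged to the caller. A minimal sketch of the missing check follows; the handler and data model are hypothetical, not DJI's actual API:

```python
# Toy cloud endpoint for fetching a device's camera feed. The
# vulnerable version authenticates the caller but never verifies
# device ownership, so any valid credential can reach any device.

DEVICES = {"vac-1": {"owner": "alice", "feed": "alice-cam"},
           "vac-2": {"owner": "bob",   "feed": "bob-cam"}}

def get_feed_vulnerable(user, device_id):
    # Authentication happened upstream; no authorization check here.
    return DEVICES[device_id]["feed"]

def get_feed_fixed(user, device_id):
    device = DEVICES[device_id]
    # Object-level authorization: the device must belong to the caller.
    if device["owner"] != user:
        raise PermissionError("not your device")
    return device["feed"]

print(get_feed_vulnerable("alice", "vac-2"))  # bob-cam (leak)
```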

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."
United Kingdom

After 16 Years, 'Interim' CTO Finally Eradicating Fujitsu and Horizon From the UK's Post Office (computerweekly.com) 37

Besides running tech operations at the UK's Post Office, its interim CTO is also removing and replacing Fujitsu's Horizon system, which Computer Weekly describes as "the error-ridden software that a public inquiry linked to 13 people taking their own lives."

After over 16 years of covering the scandal they'd first discovered back in 2009, Computer Weekly now talks to CTO Paul Anastassi about his plans to finally remove every trace of the Horizon system that's been in use at Post Office branches for over 30 years — before the year 2030: "There are more than 80 components that make up the Horizon platform, and only half of those are managed by Fujitsu," said Anastassi. "The other components are internal and often with other third parties as well," he added... The plan is to introduce a modern front end that is device agnostic. "We want to get away from [the need] to have a certain device on a certain terminal in your branch. We want to provide flexibility around that...."

Anastassi is not the first person to be given the task of terminating Horizon and ending Fujitsu's contract. In 2015, the Post Office began a project to replace Fujitsu and Horizon with IBM and its technology, but after things got complex, Post Office directors went crawling back to Fujitsu. Then, after Horizon was proved in the High Court to be at fault for the account shortfalls that subpostmasters were blamed and punished for, the Post Office knew it had to change the system. This culminated in the New Branch IT (NBIT) project, but this ran into trouble and was eventually axed. This was before Anastassi's time, and before that of its new top team of executives....

Things are finally moving at pace, and by the summer of this year, two separate contracts will be signed with suppliers, signalling the beginning of the final act for Fujitsu and its Horizon system.

Anastassi has 30 years of IT management experience, the article points out, and he estimates the project will even bring "a considerable cost saving over what we currently pay for Fujitsu."
Python

How Python's Security Response Team Keeps Python Users Safe (blogspot.com) 5

This week the Python Software Foundation explained how it keeps Python secure. A new blog post recognizes the volunteers and paid Python Software Foundation staff on the Python Security Response Team (PSRT), who "triage and coordinate vulnerability reports and remediations keeping all Python users safe." Just last year the PSRT published 16 vulnerability advisories for CPython and pip, the most in a single year to date! The PSRT usually can't do this work alone; PSRT coordinators are encouraged to involve maintainers and experts on the projects and submodules. Involving the experts directly in the remediation process ensures fixes adhere to existing API conventions and threat models, are maintainable long-term, and have minimal impact on existing use cases. Sometimes the PSRT even coordinates with other open source projects to avoid catching the Python ecosystem off-guard by publishing a vulnerability advisory that affects multiple other projects. The most recent example of this is PyPI's ZIP archive differential attack mitigation.

This work deserves recognition and celebration just like contributions to source code and documentation. [Security Developer-in-Residence Seth Larson and PSF Infrastructure Engineer Jacob Coffee] are developing further improvements to workflows involving GitHub Security Advisories, adding the reporter, coordinator, and remediation developers and reviewers to CVE and OSV records to properly thank everyone involved in these otherwise private contributions to open source projects.

Security

Cyber Stocks Slide As Anthropic Unveils 'Claude Code Security' (bloomberg.com) 29

An anonymous reader quotes a report from Bloomberg: Shares of cybersecurity software companies tumbled Friday after Anthropic PBC introduced a new security feature into its Claude AI model. CrowdStrike Holdings was among the biggest decliners, falling as much as 6.5%, while Cloudflare slumped more than 6%. Meanwhile, Zscaler dropped 3.5%, SailPoint shed 6.8%, and Okta declined 5.7%. The Global X Cybersecurity ETF fell as much as 3.8%, extending its losses on the year to 14%.

Anthropic said the new tool "scans codebases for security vulnerabilities and suggests targeted software patches for human review." The firm said the update is available in a limited research preview for now.
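Anthropic hasn't published implementation details. Purely to illustrate the task category ("scan, then suggest a targeted patch"), a trivial pattern-based check for one vulnerability class might look like the following; the pattern, function names, and suggested-patch format are all invented for this example and have nothing to do with Claude Code Security's internals:

```python
import re

# Toy static scan for one vulnerability class: hardcoded secrets.
# Real code-security tools combine many analyses; this only shows
# the general shape of "find a finding, propose a patch".

SECRET_PATTERN = re.compile(
    r'(?i)(api_key|password|secret)\s*=\s*["\'][^"\']+["\']')

def scan(source):
    """Return (line_number, offending_line, suggested_patch) findings."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        m = SECRET_PATTERN.search(line)
        if m:
            name = m.group(1)
            # Suggest reading the value from the environment instead.
            patch = f'{name} = os.environ["{name.upper()}"]'
            findings.append((n, line.strip(), patch))
    return findings

code = 'db = connect()\napi_key = "sk-12345"\n'
for n, line, patch in scan(code):
    print(f"line {n}: {line!r} -> suggest: {patch}")
```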

Microsoft

Microsoft Deletes Blog Telling Users To Train AI on Pirated Harry Potter Books (arstechnica.com) 13

Microsoft pulled a year-old blog post this week after a Hacker News thread flagged that it had encouraged developers to download all seven Harry Potter books from a Kaggle dataset -- incorrectly marked as public domain -- and use them to train AI models on the company's Azure platform.

The blog, written in November 2024 by senior product manager Pooja Kamath, walked users through building Q&A systems and generating fan fiction using the copyrighted texts, and even included a Microsoft-branded AI image of Harry Potter. The Kaggle dataset's uploader, data scientist Shubham Maindola, told Ars Technica the public domain label was "a mistake" and deleted the dataset after the outlet reached out.
AI

HSBC To Investors: If India Couldn't Build an Enterprise Software Challenger, Neither Can AI (x.com) 53

India's IT services giants have spent decades deploying, customizing, and maintaining the world's largest enterprise software platforms, putting hundreds of thousands of engineers in daily contact with the business logic and proprietary architectures of vendors like SAP and Oracle. None of them have built a competing product that gained meaningful traction against the U.S. incumbents, HSBC said in a note to clients, using this history to argue AI-generated code faces the same structural barriers.

The bank's analysts contend that enterprise software competition turns on factors that have little to do with the ability to write code -- sales teams, cross-licensing agreements, patented IP, first-mover lock-in, brand awareness, and go-to-market infrastructure. If a massive, low-cost, domain-expert workforce couldn't crack the market over several decades, HSBC argues, the idea that AI-generated code will do so is, in the words of Nvidia's Jensen Huang that the report approvingly cites, "illogical."
IT

Email Blunder Exposes $90 Billion Russian Oil Smuggling Ring (ft.com) 30

schwit1 writes: An IT blunder has revealed an apparent smuggling ring that has moved at least $90bn of Russian oil and is playing a central role in funding the Kremlin's war in Ukraine. The Financial Times has identified 48 seemingly independent companies working from different physical addresses that appear to be operating together to disguise the origin of Russian oil, particularly from Kremlin-controlled Rosneft. The network was discovered because they all share a single private email server. The report adds: The FT was able to identify 442 web domains whose public registrations show they all use a single private server for their email, "mx.phoenixtrading.ltd," showing that they share back-office functions. The FT was then able to identify companies by comparing the names in the domain to those of entities that appear in Russian and Indian customs records as involved in carrying Russian oil.
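The FT's pivot -- clustering nominally independent companies by a shared mail server -- is a standard infrastructure-analysis technique. A toy sketch of the grouping step, assuming the MX records have already been collected (the domains below are invented; only the mail-server name comes from the article):

```python
from collections import defaultdict

# Group domains by the mail exchanger their DNS points at. Domains
# that claim to be independent but share one private MX host likely
# share back-office infrastructure -- the FT's key signal.

def cluster_by_mx(records):
    """records: iterable of (domain, mx_host) pairs."""
    clusters = defaultdict(list)
    for domain, mx in records:
        # Normalize case and the trailing dot DNS answers often carry.
        clusters[mx.lower().rstrip(".")].append(domain)
    # Only servers shared by multiple domains are interesting.
    return {mx: ds for mx, ds in clusters.items() if len(ds) > 1}

records = [("shipco-one.example",   "mx.phoenixtrading.ltd."),
           ("oiltrade-two.example", "mx.phoenixtrading.ltd"),
           ("unrelated.example",    "mx.other-mail.example")]
print(cluster_by_mx(records))
# {'mx.phoenixtrading.ltd': ['shipco-one.example', 'oiltrade-two.example']}
```

In practice the record collection would come from bulk DNS or domain-registration datasets; the clustering itself is just this kind of group-by.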
AI

Was an Amazon Service Taken Down By Its AI Coding Bot? 38

UPDATE (2/21): After this story ran, Amazon published a blog post Friday "to address the inaccuracies" in the Financial Times report that the company's own AI tool Kiro caused two outages in an AWS service in December. Amazon's blog post says that the "brief" and "extremely limited" service interruption "was the result of user error — specifically misconfigured access controls — not AI as the story claims." And "The Financial Times' claim that a second event impacted AWS is entirely false."

An anonymous Slashdot reader had shared this report from Reuters: Amazon's cloud unit has suffered at least two outages due to errors involving its own AI tools [non-paywalled source], leading some employees to raise doubts about the US tech giant's push to roll out these coding assistants.

Amazon Web Services experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes, according to four people familiar with the matter.

The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to "delete and recreate the environment." Amazon posted an internal postmortem about the "outage" of the AWS system, which lets customers explore the costs of its services. Multiple Amazon employees told the FT that this was the second occasion in recent months in which one of the group's AI tools had been at the centre of a service disruption.

Slashdot Top Deals