Android

Google Details New 24-Hour Process To Sideload Unverified Android Apps (arstechnica.com) 68

An anonymous reader quotes a report from Ars Technica: Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification. With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google's intervention.

Apps that come from unverified developers won't be installable on Android phones -- unless you use the new advanced flow, which will be buried in the developer settings. When sideloading apps today, Android phones alert the user to the "unknown sources" toggle in the settings, and there's a flow to help you turn it on. The verification bypass is different and will not be revealed to users. You have to know where this is and proactively turn it on yourself, and it's not a quick process. [...] The actual legwork to activate this feature only takes a few seconds, but the 24-hour countdown makes it something you cannot do spur of the moment.

But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences. "In that 24-hour period, we think it becomes much harder for attackers to persist their attack," said Samat. "In that time, you can probably find out that your loved one isn't really being held in jail or that your bank account isn't really under attack." But people who are sure they don't want Google's verification system to get in the way of sideloading any old APK they come across don't have to wait until they encounter an unverified app to get started. You only have to select the "indefinitely" option once on a phone, and you can turn dev options off again afterward.
"For a lot of people in the world, their phone is their only computer, and it stores some of their most private information," Samat said. "Over the years, we've evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn't safe, people aren't going to use it, and that's a lose-lose situation for everyone, including developers."
Facebook

Meta Backtracks, Will Keep Horizon Worlds VR Support 'For Existing Games' (uploadvr.com) 10

Meta is partially reversing its decision to drop VR support for Horizon Worlds, keeping VR access for existing Unity-based games while shifting future development to a new flatscreen-focused Horizon Engine. UploadVR reports: If you somehow missed it, on Tuesday Meta officially announced that its Horizon Worlds "metaverse" platform would drop VR support in June, meaning it would only be available as a flatscreen experience for the web and smartphones. But now, in an "ask me anything" session on his Instagram page, Meta CTO Andrew Bosworth says the company has decided to "keep Horizon Worlds working in VR for existing games to support the fans who've reached out."

Bosworth says this specifically applies to worlds developed with the Horizon Unity runtime, meaning those built inside VR or with the Horizon Desktop Editor, but not those built for the new Horizon Engine with Horizon Studio. The picture painted here is of a clean technical break, with the legacy Unity version of Horizon Worlds continuing to support VR, and the new Horizon Engine focusing fully on flatscreen. This VR support will continue through the Horizon Worlds VR app, which Bosworth says will stay on Quest's store "for the foreseeable future".

Specific worlds will not be recommended by the operating system, though, nor will they be seen in the storefront. Horizon Worlds will be just another app on the store. As for the reason behind not supporting VR in Horizon Engine, Bosworth repeated the explanation he's been giving for two months now -- "because that's where most of the consumer and creator energy already was, and so we're leaning into that."

Facebook

Meta Is Shutting Down VR Social Platform Horizon Worlds (cnbc.com) 51

Meta is shutting down its VR social platform Horizon Worlds, which was once a key piece of the pivot to the metaverse. The company said the app will be taken off the Quest store at the end of March, and fully removed from Quest headsets by June 15. After that date, it will shift to a standalone "mobile-only experience." CNBC reports: The shift for Horizon Worlds, which was once a central part of the company's push into virtual reality, comes weeks after Meta cut over 1,000 employees from Reality Labs, the unit responsible for the metaverse. [...] The social platform has never drawn more than a couple hundred thousand active users a month, CNBC previously reported.

The virtual 3D social network where avatars could interact and play games with other users officially launched in late 2021. It operated exclusively on the Quest VR platform until Meta launched a mobile app version in September 2023. The mobile version of Horizon Worlds was built to provide an entry point for users without VR headsets, functioning similarly to Roblox.

AI

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co) 153

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in a chart that has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms.
"Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..."

"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
Graphics

Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124

Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in the autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."

Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.
Nvidia has published an announcement video, and Digital Foundry has posted a detailed breakdown.

"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."

Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."

Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."

DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Businesses

Finance Bros To Tech Bros: Don't Mess With My Bloomberg Terminal (wsj.com) 61

An anonymous reader quotes a report from the Wall Street Journal: A battle of insults and threats has broken out between the tech world and Wall Street. What's got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy -- and way cheaper -- alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now "Bloomberg is cooked," some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. [...]

The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is "laughable," said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal.) "It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution," he wrote. [...] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it's rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay "a really good foundation for a financial application. And that really has not been possible before."

Others aren't so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic's Claude. "It was laughable at best, horrific at worst," he said. Perplexity's Dmitry Shevelenko acknowledged there are some aspects of the terminal that can't be replicated with vibe coding, including some of Bloomberg's proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as well as the terminal's data security, reliability and robust support system. "I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy," said Lemire. His message to the techies? "There's nothing that you can vibe code in a weekend or even like over the course of a year that's going to come anywhere close."

Programming

New 'Vibe Coded' AI Translation Tool Splits the Video Game Preservation Community 43

An anonymous reader quotes a report from Ars Technica: Since Andrej Karpathy coined the term "vibe coding" just over a year ago, we've seen a rapid increase in both the capabilities and popularity of using AI models to throw together quick programming projects with less human time and effort than ever before. One such vibe-coded project, Gaming Alexandria Researcher, launched over the weekend as what coder Dustin Hubbard called an effort to help organize the hundreds of scanned Japanese gaming magazines he's helped maintain at clearinghouse Gaming Alexandria over the years, alongside machine translations of their OCR text.

A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours. "I sincerely apologize," Hubbard wrote in his apology post. "My entire preservation philosophy has been to get people access to things we've never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI."
"I'm very, very disappointed to see [Gaming Alexandria], one of the foremost organizations for preserving game history, promoting the use of AI translation and using Patreon funds to pay for AI licenses," game designer and Legend of Zelda historian Max Nichols wrote in a post on Bluesky over the weekend. "I have cancelled my Patreon membership and will no longer promote the organization."

Nichols later deleted his original message (archived here), saying he was "uncomfortable with the scale of reposts and anger" it had generated in the community. However, he maintained his core criticism: that Gemini-generated translations inevitably introduce inaccuracies that make them unreliable for scholarly use.

In a follow-up, he also objected to Patreon funds being used to pay for AI tools that produce what he called "untrustworthy" translations, arguing they distort history and are not valid sources for research. "... It's worthless and destructive: these translations are like looking at history through a clownhouse mirror," he added.
Android

Android, Epic, and What's Really Behind Google's 'Existential' Threat to F-Droid (thenewstack.io) 53

Starting in September, even Android developers not in Google's Play Store will still be required to register with Google to distribute their apps in Brazil, Singapore, Indonesia, and Thailand, with Google continuing "to roll out these requirements globally" four months later. Even developers distributing Android apps on the web for sideloading will be required to register, pay Google a $25 fee, and provide a government ID.

But there's a new theory on what's secretly been motivating Google from an unnamed source in the "Keep Android Open" movement, writes long-time Slashdot reader destinyland: "You can't separate this really from their ongoing interactions with Epic and the settlement that they came to," they argue. Twelve days ago Epic Games and Google announced a new proposal for settling their long-running dispute over the legality of alternative app stores on Android phones. (Rather than agreeing to let third-party app stores into their Play Store, Google wants them to continue being sideloaded, promising in a blog post last week that they'll even offer a "more streamlined" and "simplified" sideloading alternative for rival app stores. "This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.")

So "developer verification" could be Google's fallback plan if U.S. courts fail to approve this. "If the Google Play Store has to allow any third-party repository app store, Google essentially has given up all control of the apps. But if they're able to claw back that control by requiring that all developers, no matter how they distribute their apps, have to register with Google — have to agree to their Terms & Conditions, pay them money, provide identification — then they have a large degree of indirect control over any app that can be developed for the entire platform."

But that plan threatens millions of people using the alternative F/OSS app distributor F-Droid, since Google also wants to have only one signature attached to Android apps. Marc Prud'hommeaux, a member of F-Droid's board of directors, says that this "all of a sudden breaks all those versions of the application distributed through F-Droid or any other app store!"

Prud'hommeaux says they've told Google's Android team "You know perfectly well that you're killing F-Droid!" creating an "existential" threat to an app distributor "that has existed happily for over 10 years." But good things started happening when he created the website Keep Android Open: There's now a "huge backlog" of signers for an Open Letter that already includes EFF, the Software Freedom Conservancy, and the Free Software Foundation. He believes Android's existing Play Protect security "is completely sufficient to handle the particular scenarios they claim that developer verification is meant to address"...

The Keep Android Open site urges developers not to sign up for Android's early access program when it launches next week. (Instead, they're asking developers to respond to invites with an email about their concerns — and to spread the word to other developers and organizations in forums and social media posts.) There's also a petition at Change.org currently signed by 64,000 developers — adding 20,000 new signatures in the last 10 days. And "If you have an Android device, try installing F-Droid!" he adds. Google tracks how many people install these alternative app repositories, and a larger user base means greater consequences from any Android policy changes.

Plus, installing F-Droid "might be refreshing!" Prud'hommeaux says. "You don't see all the advertisements and promotions and scam and crapware stuff that you see in the commercial app stores!"

The Media

Should Banksy Remain Anonymous? (reuters.com) 91

He's "the most famous anonymous man in the world," suggests Reuters. But investigating Banksy's artworks in a bombed Ukrainian village (and other clues in the U.K. and Manhattan) has led them to "a hand-written confession by the artist to a long-ago misdemeanor charge of disorderly conduct — a document that revealed, beyond dispute, Banksy's true identity."

But Banksy's long-time lawyer "urged us not to publish this report, saying doing so would violate the artist's privacy, interfere with his art and put him in danger" and "would harm the public, too." Working "anonymously or under a pseudonym serves vital societal interests," he wrote. "It protects freedom of expression by allowing creators to speak truth to power without fear of retaliation, censorship or persecution — particularly when addressing sensitive issues such as politics, religion or social justice."

Reuters took into account Banksy's privacy claims — and the fact that many of his fans wish for him to remain anonymous. Yet we concluded that the public has a deep interest in understanding the identity and career of a figure with his profound and enduring influence on culture, the art industry and international political discourse... As for the risk he might face of retaliation or censorship, Britain's legal and political establishments seem comfortable with Banksy's messages and how he delivers them...

His mastery of disguise began as a way of shaking the police, says former manager [Steve] Lazarides. In an interview, Lazarides said anonymity served a practical purpose in Bristol, where authorities enforced "draconian" policies against graffiti... Eventually, keeping the secret became a burden. By the end of their partnership, Lazarides estimates he spent half or more of his time managing and maintaining the artist's mystique. "I think it became a good gag, and then, if you want my honest, honest opinion, I think it then became a disease," he said.

Lazarides wrote a two-volume book about managing Banksy from the late 1990s to 2008, including a story about Banksy's arrest in 2000 for defacing a billboard. Reuters geolocated that building, then found police documents and a court file including the hand-written confession. This investigation spawned a 7,000-word article with everything from a comic strip Banksy drew when he was 11 to his connections with Robert Del Naja of the trip hop band Massive Attack — and a 2017 podcast interview where a music producer apparently revealed Banksy's real first name.

But the article also reveals how protective the art community is of Banksy's secret. Reuters investigated the painting that Banksy auctioned in 2018 for $1.4 million, which then immediately started shredding itself with a device Banksy embedded in its frame: That piece, renamed "Love is in the Bin," sold three years later for about $25 million. Art dealer [Robert] Casterline was at the auction and remembers when the shredder began to beep. He pulled out his phone to take pictures. "Unfortunately, there was one person standing in front of me," blocking the view, he said. It was an eccentric-looking man with a broad neck scarf and thick eyewear. Oddly, the man wasn't watching the painting get shredded. He was looking in the other direction, observing the crowd's reaction. Only later, reviewing what he shot, did Casterline notice that the man's glasses appeared to have a small camera built into the bridge. (Banksy later posted a video of the stunt, including shots of the astonished audience.)
Having seen a photo of the man suspected of being Banksy, Casterline confirmed to Reuters that he was "pretty sure" it was the same man.

But "I don't want to be the guy who exposes Banksy."
Privacy

New Freenet Network Launches, Along With 'River' Group Chat (freenet.org) 26

Wikipedia describes Freenet as "a peer-to-peer platform for censorship-resistant, anonymous communication," released in the year 2000. "Both Freenet and some of its associated tools were originally designed by Ian Clarke," Wikipedia adds. (And in 2000 Clarke answered questions from Slashdot's readers...)

And now Ian Clarke (aka Sanity — Slashdot reader #1,431) returns to share this announcement: Freenet's new generation peer-to-peer network is now operational, along with the first application built on the network: a decentralized group chat system called River.

The new version is a complete redesign of the original project, focusing on real-time decentralized applications rather than static content distribution. Applications run as WebAssembly-based contracts across a small-world peer network, allowing software to operate directly on the network without centralized infrastructure.

An introductory video demonstrating the system is available on YouTube.

"While the original Freenet was like a decentralized hard drive, the new Freenet is like a full decentralized computer," Clarke wrote in 2023, "allowing the creation of entirely decentralized services like messaging, group chat, search, social networking, among others... designed for efficiency, flexibility, and transparency to the end user."

"Freenet 2023 can be used seamlessly through your web browser, providing an experience that feels just like using the traditional web," Clarke added.
Social Networks

US Set To Receive $10 Billion Fee For Brokering TikTok Deal (msn.com) 44

The deal to take control of TikTok's U.S. business came with an unusual condition, according to people familiar with the matter. The investors — which include Oracle, Abu Dhabi investor MGX, and private-equity firm Silver Lake — "paid the Treasury Department about $2.5 billion when the deal closed in January," reports the Wall Street Journal, "and are set to make several additional payments until hitting the $10 billion total." The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said... Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal. Administration officials have said the fee is justified given Trump's role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers...

The TikTok fee extracted from private-sector investors is the administration's latest transaction involving the nation's largest businesses. Trump took a nearly 10% stake in semiconductor company Intel and has agreed to take a chunk of chip sales to China from Nvidia in exchange for granting export licenses. The administration has also taken equity stakes in other companies and has a say in the operations of U.S. Steel following a "golden share" agreement with Japan's Nippon Steel in its takeover.

Reuters notes that earlier this month, a lawsuit was filed by investors in two of TikTok's social media rivals, seeking to reverse the approval of the deal.

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.
Facebook

Meta Acquires Moltbook, the Social Network For AI Agents 30

Axios reports that Meta has acquired Moltbook, the viral, Reddit-like social network designed for AI agents. Humans are welcome, but only to observe. Axios reports: The deal brings Moltbook's creators -- Matt Schlicht and Ben Parr -- into Meta Superintelligence Labs (MSL), the unit run by former Scale AI CEO Alexandr Wang. Meta did not disclose Moltbook's purchase price. The deal is expected to close mid-March, Meta says, with the pair starting at MSL on March 16. When it launched in late January, Moltbook was labeled the "most interesting place on the internet" by open-source developer and writer Simon Willison. "Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned."

In an internal post seen by Axios, Meta's Vishal Shah said existing Moltbook customers can temporarily continue using the platform. "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners." He added: "Their team has unlocked new ways for agents to interact, share content, and coordinate complex tasks."
Social Networks

Bluesky CEO Jay Graber Is Stepping Down (wired.com) 48

Bluesky CEO Jay Graber is stepping down after overseeing the platform's growth from a Twitter research project into a 40-million-user alternative to X. "As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things," Graber wrote in a statement.

She will transition to a new Chief Innovation Officer role, and venture capitalist Toni Schneider will serve as interim CEO while the board searches for a permanent replacement. Wired reports: Graber joined Bluesky in 2019, when it was a research project within Twitter focused on developing a decentralized framework for the social web. She became the company's first chief executive officer in 2021, when it spun out into an independent entity. She oversaw the platform's remarkable rise and the growing pains it experienced as it transformed from a quirky Twitter offshoot to a full-fledged alternative to X. Schneider tells WIRED that he intends to help Bluesky "become not just the best open social app, but the foundation for a whole new generation of user-owned networks."

Schneider, who will continue working as a partner at the venture capital firm True Ventures while at Bluesky, was previously CEO of the WordPress parent company, Automattic, from 2006 to 2014. He also served as its CEO again in 2024 while top executive Matt Mullenweg went on a sabbatical. During that time, Schneider met Graber and became an adviser to Bluesky's leadership. In a blog post announcing his new role, Schneider said he plans to emphasize scaling, describing his job as "to help set up Bluesky's next phase of growth."

This isn't the end for Graber and Bluesky. She will transition to become the company's chief innovation officer, a role focused on Bluesky's technology stack rather than its business operations. The position was created for her. Graber, who began her career as a software engineer, has always sounded the most enthusiastic when discussing Bluesky's technology rather than its revenue streams. Bluesky's board of directors will appoint the next permanent CEO. The members include Jabber founder Jeremie Miller, crypto-focused VC Kinjal Shah, TechDirt founder Mike Masnick, and Graber. (Twitter founder Jack Dorsey was originally part of the board but quit in 2024.) This means Graber will have input on her successor. The talent search is still in early stages.

AI

AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds (theguardian.com) 54

An anonymous reader quotes a report from the Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs) -- the technology behind platforms such as ChatGPT -- successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost-effective to perform sophisticated privacy attacks, forcing a "fundamental reassessment of what can be considered private online".

In their experiment, the researchers fed anonymous accounts into an AI and had it scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog Biscuit through Dolores Park. In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While this example was fictional, the paper's authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers are able to launch "highly personalized" scams.
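As a rough illustration of the kind of cross-platform matching the researchers describe, here is a toy heuristic that scores account pairs by overlap of distinctive tokens (pet names, neighborhood landmarks, and so on). Everything below is hypothetical, and the study reportedly used full LLMs rather than anything this simple:

```python
def distinctive_tokens(posts, common_words):
    """Collect rare, potentially identifying tokens from a user's posts."""
    tokens = set()
    for post in posts:
        for word in post.lower().replace(",", " ").replace(".", " ").split():
            if word not in common_words:
                tokens.add(word)
    return tokens

def link_score(anon_posts, candidate_posts, common_words):
    """Jaccard overlap of distinctive tokens between two accounts' posts."""
    a = distinctive_tokens(anon_posts, common_words)
    b = distinctive_tokens(candidate_posts, common_words)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# A tiny stopword list stands in for a real frequency model.
COMMON = {"the", "a", "my", "i", "at", "in", "this", "walked", "dog",
          "through", "today", "with", "was", "loved", "morning", "tonight"}

anon = ["Walked my dog Biscuit through Dolores Park today."]
candidate = ["Biscuit loved Dolores Park this morning."]
unrelated = ["Great ramen in Shibuya tonight."]

print(link_score(anon, candidate, COMMON))  # shared tokens: biscuit, dolores, park
print(link_score(anon, unrelated, COMMON))  # no shared distinctive tokens
```

Even this crude overlap separates the matching candidate from the unrelated one, which is the paper's point: once search and matching are cheap to automate, a handful of innocuous details is enough to link accounts.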

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database, including bot API keys and potentially private DMs, was also compromised."
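Given how freely the bots shared their owners' details, one mitigation an agent operator might apply is a crude redaction pass on outbound posts. This is a minimal sketch with hypothetical patterns -- not a feature Moltbook provides, and far from complete PII coverage:

```python
import re

# Hypothetical patterns; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text):
    """Replace matched PII-like spans with a [REDACTED:<kind>] placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text

print(redact("Reach my human at jane@example.com, key sk-abcdef1234567890XY"))
# -> Reach my human at [REDACTED:email], key [REDACTED:api_key]
```

Regex filtering is only a last line of defense; as the prompt-injection attempts above suggest, an agent that can be talked into posting secrets can usually be talked into rephrasing them past simple filters.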

The Almighty Buck

Prediction Market 'Kalshi' Sued for Not Paying $54 Million for Bets on Khamenei's Death (reuters.com) 44

An anonymous reader shared this report from the Independent: A popular predictions market app will not pay out the $54 million some of its users believed they were owed after correctly forecasting the death of Ayatollah Ali Khamenei, according to a report.

Kalshi, which allows players to gamble on real-world events, offered customers favorable odds on Khamenei, 86, being "out as Supreme Leader" in response to the announcement of joint U.S.-Israeli airstrikes on Tehran in the early hours of Saturday morning. The company promoted the trade on its homepage and app and tweeted [last] Saturday: "BREAKING: The odds Ali Khamenei is out as Supreme Leader have surged to 68 percent." It continued: "Reminder: Kalshi does not offer markets that settle on death. If Ali Khamenei dies, the market will resolve based on the last traded price prior to confirmed reporting of death." Khamenei was later confirmed dead in the airstrikes and the company clarified in a follow-up post: "Please note: A prior version of this clarification was grammatically ambiguous. As a customer service measure, Kalshi will reimburse lost value due to trades made between these clarifications...."

While the company has offered to reimburse any bets, fees or losses from the trade placed prior to its clarification message, it has nevertheless attracted a firestorm of complaints on social media.

A Kalshi spokesperson told Reuters they'd reimbursed "net losses" out of pocket "to the tune of millions of dollars". But a class action lawsuit was filed Thursday saying Kalshi had failed to pay $54 million: Kalshi invoked a "death carveout" provision only after the Iranian leader was killed, in order to avoid paying customers in Kalshi's "Khamenei Market" what they were owed, the lawsuit said... The language specifying that Khamenei's departure could be due to any cause, including death, was "clear, unambiguous and binary," the lawsuit said, describing Kalshi's actions as "deceptive" and "predatory."
"In a notice filed Monday, the company proposed standardizing the terms of all its markets that implicitly depend on a person surviving..." reports Business Insider. "The update comes after Kalshi paid $2.2 million to resolve complaints from users who were confused by the way it divided the $55 million wagered on Iran's Supreme Leader Ali Khamenei's ouster after his targeted killing by Israel and the US."

Their article cites a DePaul University law professor who says "There's now sort of this nascent, but bipartisan movement against prediction markets. I think Kalshi's feeling the heat." For example, U.S. Senator Chris Murphy told the Washington Post, "People shouldn't be rooting for people to die because they placed a bet."
Government

Indonesia To Ban Social Media For Children Under 16 (theguardian.com) 47

Indonesia will ban children under 16 from having accounts on major social media platforms as part of a government push to protect minors from harmful content, addiction, and online threats. The rule will roll out starting March 28 and makes Indonesia the first country in Southeast Asia to impose such a restriction. The Guardian reports: Communications minister Meutya Hafid said in a statement to media that she signed a government regulation that will mean children under the age of 16 can no longer have accounts on high-risk digital platforms, including YouTube, TikTok, Facebook, Instagram, Threads, X, Roblox and Bigo Live, a popular livestreaming site. With a population of about 285 million, the fourth-highest in the world, the south-east Asian nation represents a significant market for social networks.

The implementation will start gradually from 28 March, until all platforms fulfill their compliance obligations. "The basis is clear. Our children face increasingly real threats. From exposure to pornography, cyberbullying, online fraud, and most importantly addiction. The government is here so that parents no longer have to fight alone against the giant of algorithms," Hafid said.

She added that the government is taking this step as the best effort in the midst of a digital emergency to reclaim sovereignty over children's futures. "We realize that the implementation of this regulation may cause some discomfort at first. Children may complain and parents may be confused about how to respond to their children's complaints," Hafid said.

iOS

Apple Blocks US Users From Downloading ByteDance's Chinese Apps (wired.com) 25

An anonymous reader quotes a report from Wired: While TikTok operates in the United States under new ownership, Apple has deployed technical restrictions to block iOS users in the United States from downloading other apps made by the video platform's Chinese parent organization ByteDance. ByteDance owns a vast array of different apps spanning social media, entertainment, artificial intelligence, and other sectors. The leading one is Douyin, the Chinese version of TikTok, which has over 1 billion monthly active users. While most of those users reside in China, iPhone owners around the world have traditionally been able to download these apps from anywhere without using a VPN, as long as they have a valid App Store account registered in China.

That's not true anymore. Starting in late January, iPhone users in the U.S. with Chinese App Store accounts began reporting that they were encountering new obstacles when they tried to download apps developed by ByteDance. WIRED has confirmed that even with a valid Chinese App Store account, downloading or updating a ByteDance-owned Chinese app is blocked on Apple devices located in the United States. Instead, a pop-up window appears that says, "This app is unavailable in the country or region you're in." The restriction appears to apply only to ByteDance-owned apps and not those developed by other Chinese companies.

The timing and technical specifics suggest the restriction is related to the deal TikTok agreed to in January to divest Chinese ownership of its U.S. operations. The agreement was the result of the so-called TikTok ban law passed by Congress in 2024, which also barred companies like Apple and Google from distributing other apps majority-owned by ByteDance. The Protecting Americans from Foreign Adversary Controlled Applications Act states that no company can "distribute, maintain, or update" any app majority-controlled by ByteDance "within the land or maritime borders of the United States."

The law was primarily aimed at TikTok, which has more than 100 million users in the U.S. and had been the subject of years of debate in Washington over whether its Chinese ownership posed a national security risk. But ByteDance also has dozens of other apps that at some point were also removed from Apple's and Google's app stores in the U.S. Now it seems the scope of impact has reached even more apps that are not technically designed for U.S. audiences, such as Douyin, the AI chatbot Doubao, and the fiction reading platform Fanqie Novel.

The Internet

Computer Scientists Caution Against Internet Age-Verification Mandates (reason.com) 79

fjo3 shares a report from Reason Magazine: Effective January 1, 2027, providers of computer operating systems in California will be required to implement age verification. That's just part of a wave of state and national laws attempting to limit children's access to potentially risky content without considering the perils such laws themselves pose. Now, not a moment too soon, over 400 computer scientists have signed an open letter warning that the rush to protect children from online dangers threatens to introduce new risks including censorship, centralized power, and loss of privacy. They caution that age-verification requirements "might cause more harm than good." The group of computer scientists from around the world cautions that "those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet." They add that "this influence could be used to censor information and prevent users from accessing services."

"Regulating the use of VPNs, or subjecting their use to age assurance controls, will decrease the capability of users to defend their privacy online. This will not only force regular users to leave a larger footprint on the network, but will leave a number of at-risk populations unprotected, such as journalists, activists, or domestic abuse victims." It continues: "We note that we do not believe that trying to regulate VPN use for non-compliant users would be any more effective than trying to forbid the use of end-to-end encrypted communication for criminals. Secure cryptography is widely available and can no longer be put back into a box."

"If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services," warn the scientists. "Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation." With data on everyone collected in order to restrict the activities of minors, data abuses and privacy risks increase. "This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord."

Instead of mandated age restrictions, the letter urges lawmakers to consider the dangers and suggests regulating social media algorithms instead. It also recommends "support for parents to locally prevent access to non-age-appropriate content or apps, without age-based control needing to be implemented by service providers."

Slashdot Top Deals