AI

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code 69

Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared to point at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with.

After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.
Transportation

Robotaxi Outage In China Leaves Passengers Stranded On Highways (wired.com) 31

An anonymous reader quotes a report from Wired: An unknown technical problem caused a number of robotaxis owned by the Chinese tech giant Baidu to freeze on Tuesday in the middle of traffic, trapping some passengers in the vehicles for more than an hour. In Wuhan, a city in central China where Baidu has deployed hundreds of its Apollo Go self-driving taxis, people on Chinese social media reported witnessing the cars suddenly malfunction and stop operating. Photos and videos shared online show the Baidu cars halted on busy highways, often in the fast lane.

[...] Local police in Wuhan issued a statement around midnight in China that said the situation was "likely caused by a system malfunction," but the incident is still under investigation. No one was injured, and all passengers have exited the vehicles, the police added. It's unclear how many of Baidu's robotaxis may have been impacted. [...] There were at least two other collisions on the same day, according to photos and videos posted on Chinese social media. A RedNote user in Wuhan confirmed to WIRED that she drove past a white minivan that had gotten into a rear-end collision with a parked robotaxi. The back of the Baidu car was badly damaged, but the two people standing beside the scene looked unharmed, she said. She estimated she also saw at least a dozen more parked robotaxis.

Businesses

Oracle Cuts Thousands of Jobs Across Sales, Engineering, Security (theregister.com) 46

bobthesungeek76036 shares a report from the Register: Oracle laid off thousands of employees on Tuesday as it ramps spending on AI infrastructure projects internally and with major technology partners. The layoffs were carried out via email, according to copies of the message viewed by Business Insider. The email told affected workers they would be terminated immediately and to provide a personal email for follow-up.

The cuts echo a TD Cowen forecast earlier this year, when the investment bank questioned how Oracle would finance its expanding AI datacenter buildout and suggested headcount reductions could reach 20,000 to 30,000. It is not clear how many employees were notified on Tuesday, but one screenshot that purports to show the number of internal Slack users showed a drop of 10,000 overnight.

[...] Oracle employs about 162,000 people, with 58,000 of those in the US and approximately 104,000 internationally. If the rumored cuts of 30,000 are correct, it would amount to 18 percent of the company's workforce. According to posts from Oracle workers on LinkedIn, the cuts were spread through multiple departments around the country, with employees in Kansas, Tennessee, and Texas taking to social media to say they were among those chopped.
"This news didn't seem to affect stock price," adds bobthesungeek76036. "ORCL is up 6% for the day."
Social Networks

Australia Readies Social Media Court Action Citing Teen Ban Breaches (reuters.com) 27

Australia is preparing possible court action against major social media platforms that are failing to enforce the country's social media ban on under-16s. "Three months after the ban came into effect, the eSafety Commissioner said it was probing Meta's Instagram and Facebook, Google's YouTube, Snapchat and TikTok for possible breaches of the law," reports Reuters. From the report: Communications Minister Anika Wells said the government was gathering evidence "so that the eSafety Commissioner can go to the Federal Court and win." "We have spent the summer building that evidence base of all the stories that no doubt you have all heard ... about how kids are getting around that," Wells told reporters in Canberra. The legal threat is a striking change of tone from a government which had hailed tech giants' shows of cooperation when the ban went live in December.

Under the Australian law, platforms must show they are taking reasonable steps to keep out underage users or face fines of up to $34 million per breach, something eSafety would need to pursue in a civil court. The regulator previously said it would only take enforcement action in cases of systemic noncompliance. But in its first comprehensive compliance report since the ban took effect, eSafety said measures taken by the platforms were substandard and it would make a decision about next steps by mid-year. "We are now moving into an enforcement stance," said commissioner Julie Inman Grant in a statement.

The regulator reported major compliance gaps, including platforms not prompting children who had previously declared ages under 16 to do fresh age checks, allowing repeated attempts at age-assurance tests until a child got a result over 16, and poor pathways for people to report underage accounts. Some platforms did not use age-inference, which estimates age based on someone's online activity, and some only used age-assurance measures like photo-based checks after a user tried to change their age, rather than at sign-up. That made it "likely many Australian children aged under 16 have been able to create accounts on age-restricted social media platforms by simply declaring they are 16 or older", the regulator said. Nearly one-third of parents reported their under-16 child had at least one social media account after the ban took effect, of which two-thirds said the platform had not asked the child's age, it added.

Social Networks

Will Social Media Change After YouTube and Meta's Court Defeat? (theverge.com) 54

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction.

But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal — which isn't certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices...

The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. "There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me."

The article also includes this prediction from legal blogger/Section 230 expert Eric Goldman: "There will be even stronger pushes to restrict or ban children from social media." Goldman argues that "this hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."
Social Networks

Bluesky's Newest Product: an AI Tool That Gives You Custom Feeds (attie.ai) 39

"What happens when you can describe the social experience you want and have it built for you...?" asks Bluesky. "We've just started experimenting, but we're sharing it now because we want you to build alongside us."

Called "Attie" — because it's built with Bluesky's decentralized publishing framework, AT Protocol (which is open source) — the new assistant turns natural language prompts into social feeds, without users having to know how to code. (It's part of Bluesky's mission to "develop and drive large-scale adoption of technologies for open and decentralized public conversation.")

Engadget reports: On the Attie website, examples include prompts like, "Show me electronic music and experimental sound from people in my network" or "Builders working on agent infrastructure and open protocol design."

"It feels more like having a conversation than configuring software," [writes Bluesky's former CEO/current chief innovation officer, Jay Graber, in a blog post]. "You describe the sort of posts you want to see, and the coding agent builds the feed you described."

Graber added that Attie is a separate app from Bluesky and users don't have to use the new AI assistant if they don't want to. However, since Attie and Bluesky were built on the same framework, there could be some cross-app interoperability between the two, or with any other app built on the AT Protocol.

"Attie is open for beta signups today, and we'll be sharing what we learn along the way," Graber writes in the blog post. "To learn more about Attie, visit: Attie.AI. Come help us find out what this can be."

The blog post warns that "Right now, AI is undermining human agency at the same time it's enhancing it," since "The proliferation of low-quality AI-generated content is making public social networks noisier and less trustworthy..." And in a world where "signal is getting harder to find... The major platforms aren't trying to fix this problem." They're using AI to increase the time users spend on-platform, to harvest training data, and to shape what users see and believe through systems they can't inspect and didn't choose. We think AI should serve people, not platforms...

An open protocol puts this power directly in users' hands. You can use it to build your own feeds, create software that works the way you want it to, and find signal in the noise. We built the AT Protocol so anyone could build any app they imagine on top of it, but until recently "anyone" really meant "anyone who can code." Agentic coding tools change that. For the first time, an open protocol can be genuinely open to everyone...

The Atmosphere [Bluesky's interoperable ecosystem] is an open data layer with a clearly defined schema for applications, which makes it uniquely well-suited for coding agents to build on... Bluesky will continue to evolve as a social app millions of people rely on. Attie will be where we experiment with agentic social.

AI is an accelerant on whatever it's applied to. I want it to accelerate decentralizing social and putting power back in users' hands. But I don't think the most interesting things built on AT Protocol will come from us. They're going to come from everyone who picks up these tools and starts building.

United Kingdom

Apple Now Requires Device-Level Age Verification in the UK. Could the US Be Next? (gizmodo.com) 121

Apple unveiled new device-level age restrictions in the UK on Wednesday. "After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features," reports Gizmodo.

"Users will be able to confirm their age with a credit card or by scanning an ID." For those underage or who have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps or websites, but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity. Apple didn't specify exactly which services and features are banned for under-18 users, but it will likely be in compliance with UK legislation...

The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minor access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes to evade the age restrictions, like VPNs.

The broader tech industry has been campaigning for some time to use device-level age checks instead in response to the rising tide of under-16 social media and internet bans around the world. Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification "at the level of the phone is just a lot clearer than having every single app out there have to do this separately." Pornhub-operator Aylo had advocated for device-level restrictions in the UK as well, and even sent out letters to Apple, Google, and Microsoft in November asking for OS-level age verification...

The most obvious question: Could this be brought stateside?

AI

People are Using AI-Powered Services to Find Lost Pets (yahoo.com) 35

A dog missing for two months was found at an animal shelter — and its owner received an email from an artificial intelligence service that identified it, according to the Washington Post.

"As controversial as AI is right now, this is one of those areas where it's a real win," according to the chief executive at the nonprofit animal welfare organization Best Friends Animal Society. And while it shouldn't replace microchipping pets, AI does offer another tool to help desperate pet owners (and overcrowded animal shelters) — and might even be "game-changing"... People send photos of their lost pets to a database, and AI compares the pets' features — including facial structure, coat pattern and ear shape — to photos of stray pets that have been spotted elsewhere. Many of the stray pets have already been taken to shelters... Doorbell cameras have recently implemented facial recognition for dogs, and perhaps the largest AI database for pet reunification is Petco Love Lost, which says it has reunited more than 200,000 pets and owners since 2021... After owners upload photos of their lost pets, AI scans thousands of photos of lost animals from social media and from about 3,000 animal shelters and rescues that use the software, according to Petco Love, an animal welfare nonprofit that's affiliated with the pet store Petco. It notifies owners if two photos match.
The article notes that one in three pets go missing during their lifetime, according to figures from the Animal Humane Society. "But as technology has progressed, so have resources for finding lost pets" — including GPS collars — and now, apparently, AI-powered pet identification.
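The report doesn't describe how Petco Love Lost's matching works internally. As a rough illustration of the general approach — comparing feature embeddings of a lost pet's photo against a database of shelter photos — here is a minimal sketch; the function names, the embedding vectors, and the 0.9 threshold are all made up for illustration:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(lost_pet_vec, shelter_vecs, threshold=0.9):
    """Compare a lost pet's embedding (e.g. derived from facial structure,
    coat pattern, ear shape) against shelter photos; return the closest
    match only if it clears the notification threshold."""
    best_id, best_score = None, threshold
    for photo_id, vec in shelter_vecs.items():
        score = cosine_similarity(lost_pet_vec, vec)
        if score > best_score:
            best_id, best_score = photo_id, score
    return best_id, best_score
```

In a real system the vectors would come from a neural network trained on animal photos, and an owner would only be emailed when a shelter upload scores above the threshold.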
Social Networks

Austria Plans Social Media Ban For Under-14s (bbc.com) 11

Austria plans to restrict under-14s from using social media platforms over concerns about addictive algorithms and harmful content. The government says draft legislation should be ready by the end of June, though details around enforcement and age verification have yet to be finalized. The BBC reports: Announcing the plans, Vice-Chancellor Andreas Babler of the Social Democrats said the government could not stand by and watch as social media made children "addicted and also often ill." He said it was the responsibility of politicians to protect children and argued that the issue should be treated no differently than alcohol or tobacco: "There must be clear rules in the digital world too." In future, said Babler, children under 14 would be protected from algorithms that were addictive. "Other information providers have clear rules to protect young people from harmful content." These, he said, should now be implemented in the digital space. Yesterday, juries in two separate cases found social media giants liable for harming young people's mental health. The verdicts are being hailed as social media's Big Tobacco moment.

Further reading: California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media
Social Networks

California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media (latimes.com) 46

A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children's lives online. The Los Angeles Times reports: The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.

"The evolution of these applications and technology is incredible," Padilla said. "But it's changing our social dynamic and it's creating situations that, while very productive for some folks, also need some guardrails." The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.

Privacy

Reddit Takes On Bots With 'Human Verification' Requirements (techcrunch.com) 75

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears "fishy," and that it is "not conducting sitewide human verification." TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).
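Reddit hasn't published its detection tooling; as a hedged sketch of the kind of account-level signal the report mentions (how quickly an account attempts to post), here is a toy cadence heuristic — the thresholds and function name are invented for illustration:

```python
from datetime import datetime, timedelta

def looks_fishy(post_times, max_posts_per_minute=3.0, min_gap_seconds=2.0):
    """Flag accounts whose posting cadence is implausibly fast for a human.

    post_times: chronologically sorted datetimes of an account's recent posts.
    Trips on either a near-instant gap between consecutive posts or a
    sustained rate above the per-minute ceiling.
    """
    if len(post_times) < 2:
        return False
    gaps = [(b - a).total_seconds() for a, b in zip(post_times, post_times[1:])]
    window_minutes = (post_times[-1] - post_times[0]).total_seconds() / 60
    rate = (len(post_times) - 1) / max(window_minutes, 1e-9)
    return rate > max_posts_per_minute or min(gaps) < min_gap_seconds
```

A production system would combine many such signals (account age, content similarity, network features) rather than a single rate check, and only then escalate to a human-verification challenge.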

To verify an account is human, Reddit will leverage third-party tools like passkeys from Apple, Google, and YubiKey, other biometric services like Face ID or even Sam Altman's World ID -- or, in some countries, government IDs. Reddit notes this last category may be required in some countries like the U.K. and Australia and some U.S. states, because of local regulations on age verification, but it's not the company's preferred method.
"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."
Robotics

Melania Trump Welcomes Humanoid Robot At White House Summit 94

Longtime Slashdot reader theodp writes: In Melania and the Robot, the New York Times reports on First Lady Melania Trump's inaugural Fostering the Future Together Coalition Summit, which brought together international leaders, First Spouses from around the world, tech leaders, educators, and nonprofits to collaborate on practical solutions that expand access to educational tools while strengthening protections for children in digital environments (Day 2 WH summary). The Times begins:

"On Wednesday, Mrs. Trump appeared at the White House alongside Figure 3, a humanoid, A.I.-powered robot whose uses, according to the company that makes it, include fetching towels, carrying groceries and serving champagne. But Mrs. Trump joins tech executives and some researchers in envisioning a world beyond robot butlery. She is interested in how these robots could cut it as educators. Both clad in shades of white, the first lady and the visiting robot walked into a gathering of first spouses from around the world, a group that included Sara Netanyahu of Israel, Olena Zelenska of Ukraine, and Brigitte Macron of France. The dulcet tones from a (presumably human) military orchestra played as the first lady and her guest entered the event. Both lady and robot extolled the virtues of further integrating robots into the educational and social lives of children. In the history of modern first-lady initiatives, which have included building a national book festival (Laura Bush), reshuffling the food pyramid (Michelle Obama) and advocating for free community college (Jill Biden), Mrs. Trump's involvement of a humanoid robot in education policy was a first."

"Figure 3 delivered brief remarks and delivered salutations in several languages. With its sleek black-and-white appearance, Figure 3 would fit right in with the first lady's branding aesthetic, which includes a self-titled coffee table book and movie, not least because the name "MELANIA" was emblazoned on the side of its glossy plastic head. After Figure 3 teetered gingerly away, Mrs. Trump looked around the room and told them that the future looked a lot like what they had just witnessed. 'The future of A.I. is personified,' she told her audience. 'It will be formed in the shape of humans. Very soon artificial intelligence will move from our mobile phones to humanoids that deliver utility.' She invited her guests to envision a future in which a robot philosopher educated children."
Social Networks

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case 113

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.

The jury of seven women and five men will deliberate further to decide what further punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.
The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.
Facebook

Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45

Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.

Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse materials (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.

Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough."
Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
Facebook

Mark Zuckerberg Is Building an AI Agent To Help Him Be CEO (the-independent.com) 48

An anonymous reader quotes a report from the Wall Street Journal: Mark Zuckerberg wants everyone inside and outside his company to eventually have his or her own personal artificial-intelligence agent. He is starting with himself. Zuckerberg, the chief executive of Meta Platforms, is building a CEO agent to help him do his job (source paywalled; alternative source), according to a person familiar with the project. The agent, which is still in development, is currently helping Zuckerberg get information faster -- for instance, by retrieving answers for him that he would typically have to go through layers of people to get, the person familiar with the project said.

[...] Use of AI tools has spread quickly through the ranks at Meta -- in part because it is now a factor in employees' performance reviews. Meta's internal message board is filled with posts from employees sharing new AI use cases they have found and new tools they have built using AI, according to people familiar with the matter. [...] Employees have started using personal agent tools such as My Claw that have access to their chat logs and work files and can go talk to colleagues -- or their colleagues' own personal agents -- on their behalf, the people said. Another AI tool called Second Brain that is somewhere between a chatbot and an agent is also gaining momentum internally, according to people familiar with the matter. Second Brain was built by a Meta employee on top of Claude and can index and query documents for projects, among other uses. On the internal post announcing it to staff, the employee said it is "meant to be like an AI chief of staff."

There is even a group on the internal messaging board where employees' personal agents talk to each other, some of the people said. (Separately, Meta acquired Moltbook, the social-media site for AI agents, and hired its founders in a deal earlier this month.) Meta also recently acquired Manus, a Singapore-based startup that makes personal agents that can execute tasks for its users, and is using the tool internally, some of the people said. Meta recently established a new applied AI engineering organization that is tasked with using AI to help speed up development of the company's large language models. Those teams will have an ultraflat structure of as many as 50 individual contributors reporting to one manager, The Wall Street Journal previously reported. [...] Employees across the company said they have been encouraged to attend AI tutorial meetings several times a week and frequent AI hackathons, and to create their own AI tools to speed up their work.

Social Networks

Reddit Is Weighing Identity Verification Methods To Combat Its Bot Problem (engadget.com) 116

An anonymous reader quotes a report from Engadget: There could be one more step required before creating an account and posting on Reddit in the future. According to Reddit's CEO, Steve Huffman, the social media platform is exploring different ways to verify a user is human and not a bot. When asked by the TBPN podcast how to confirm that it's a human using Reddit, Huffman responded with several verification methods with varying degrees of heavy-handedness.

"The most lightweight way is with something like Face ID or Touch ID," Huffman said during the interview. "They actually require a human presence, like a human has to touch, or do or look at something, so that actually just proves there's a person there or gets you pretty far." Besides these passkey methods that use biometrics data, Huffman said there are other options like relying on third-party services that are decentralized or don't require ID. On the other end of the spectrum, Huffman also mentioned more burdensome options, like ID-checking services.

[...] "Part of our promise for our users is we don't know your name but we do want to know you're a person," Huffman said. "It'll be an evolution for us for a while, and probably every platform to find the right middle ground here." Reddit co-founder and former executive chair, Alexis Ohanian, said on X that Reddit requiring Face ID wasn't something he expected but agreed that something had to be done about the fake content from bots, adding that, "I just don't know how to sell face-scanning to Redditors or even lurkers." We reached out to Reddit's communications team and will update the story when we hear back.
The Digg beta shut down earlier this month after failing to fight the overwhelming influx of AI-driven bots and spam. "The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts," said CEO Justin Mezzell. "We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us."

"We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."
Android

GrapheneOS Refuses to Comply with Age-Verification Laws (tomshardware.com) 69

An anonymous reader shared this report from Tom's Hardware: GrapheneOS, the privacy-focused Android fork, said in a post on X on Friday that it will not comply with emerging laws requiring operating systems to collect user age data at setup. "GrapheneOS will remain usable by anyone around the world without requiring personal information, identification or an account," the project stated. "If GrapheneOS devices can't be sold in a region due to their regulations, so be it."

The statement came after Brazil's Digital ECA (Law 15.211) took effect on March 17, imposing fines of up to R$50 million (roughly $9.5 million) per violation on operating system providers that fail to implement age verification...

Motorola and GrapheneOS announced a long-term partnership at MWC on March 2 to bring the hardened OS to future Motorola hardware, ending GrapheneOS's long-standing exclusivity to Google Pixel devices. A GrapheneOS-powered Motorola phone is expected in 2027. If Motorola sells devices with GrapheneOS pre-installed, those devices would need to comply with local regulations in every market where they ship, or Motorola may need to restrict sales geographically.

Or, "People can buy the devices without GrapheneOS and install it themselves in any region where that's an issue," according to a post on the GrapheneOS BlueSky account. "Motorola devices with GrapheneOS preinstalled is something we want but it doesn't have to happen right away and doesn't need to happen everywhere for the partnership to be highly successful. Pixels are sold in 33 countries which doesn't include many countries outside North America and Europe."

Tom's Hardware also notes that GrapheneOS "isn't the first and won't be the last company to outright refuse compliance with incoming age verification laws."

"The developers of open-source calculator firmware DB48X issued a legal notice recently, stating that their software 'does not, cannot and will not implement age verification,' while MidnightBSD updated its license to ban users in Brazil."
Hardware

Elon Musk Announces $20B 'Terafab' Chip Plant in Texas To Supply His Companies (yahoo.com) 126

"Billionaire Elon Musk has announced plans to build a $20 billion chip plant in Austin, Texas" reports a local news station: Musk announced on Saturday night during a livestream on his social media platform X that the plant, called "Terafab," will be built near Tesla's campus and gigafactory in eastern Travis County. The long-anticipated project is a joint venture between Musk-owned properties Tesla, SpaceX and xAI... The Terafab plant is expected to begin production in 2027.
Musk "has said the semiconductor industry is moving too slow to keep up with the supply of chips he expects to need," writes Bloomberg, quoting Musk as saying "We either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab." Musk detailed some specific plans, including producing chips that can support 100 to 200 gigawatts a year of computing power on Earth, and chips that can support a terawatt in space, but gave no timelines for the facility or its output... The facility is expected to make two types of chips. One will be optimized for edge inference, primarily for his vehicles, robotaxis and Optimus humanoid robots. The other will be a high-power chip, designed for space, that could be used by SpaceX and xAI... Musk said he expects xAI to use the vast majority of the chips.

During the presentation, Musk also unveiled a speculative rendering of a future "mini" AI data center satellite, one piece of a much larger satellite system that he wants SpaceX to build to do complex computing in space. In January, SpaceX requested a license from the Federal Communications Commission to launch one million data center satellites into orbit around Earth. Musk said that the mini satellite he revealed would have the capacity for 100 kilowatts of power. "We expect future satellites to probably go to the megawatt range," Musk said.

Raising money to build and launch AI data centers in space is one of the driving forces behind SpaceX's planned IPO later this year. SpaceX is expected to raise as much as $50 billion in a record-setting IPO this summer which could value it at more than $1.75 trillion, Bloomberg News reported earlier.

Space

Meteor Rumbles Over Houston, as Six-Pound Fragment Crashes Into a Texas Home (cbsnews.com) 45

"It is the talk of the town today — the loud boom, the flash of light in the sky experienced by a lot of folks across the Houston area this afternoon," says a local Texas newscaster. "And then there was this — a home in northwest Harris county hit by something that crashed through their roof."

Traveling at very high speed, the six-pound meteorite punched through their roof and attic, then through the ceiling of the floor below. It bounced off the floor, hit the ceiling again, and landed on the bed.

CBS News reports: NASA said in a social media post that the meteor became visible at 49 miles above Stagecoach, northwest of Houston, at 4:40 p.m. local time. The meteor moved southeast at 35,000 miles per hour, breaking apart 29 miles above Bammel, just west of Cypress Station, NASA said. "The fragmentation of the meteor — which weighed about a ton with a diameter of 3 feet — created a pressure wave that caused booms heard by some in the area," NASA said in the post. Across the Houston area, residents described hearing a low, rumbling sound that many compared to thunder, even though the skies were clear, according to CBS affiliate KHOU.
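For a sense of why a roughly one-ton object at 35,000 mph produces booms across a metro area, here is a back-of-the-envelope kinetic-energy estimate (assuming a US ton, about 907 kg, since the article only says the meteor "weighed about a ton"):

```python
MPH_TO_MS = 0.44704   # miles per hour -> meters per second
TNT_TON_J = 4.184e9   # joules per ton of TNT equivalent

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """Classical kinetic energy, E = 1/2 * m * v^2."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v * v

energy = kinetic_energy_joules(907.0, 35000.0)
print(f"{energy:.2e} J, ~{energy / TNT_TON_J:.0f} tons of TNT equivalent")
```

That works out to roughly 1.1e11 joules, on the order of a few dozen tons of TNT, most of which is dumped into the atmosphere as the object decelerates and fragments, which is consistent with NASA's description of a pressure wave heard across the area.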

Earlier this week, an asteroid weighing about 7 tons passed over multiple states at 45,000 mph. And last June, a bright meteor was seen across the southeastern U.S. and exploded over Georgia, creating similar booms heard by residents in the area.

Censorship

Millions Face Mobile Internet Outages in Moscow. 'Digital Crackdown' Feared (cnn.com) 54

13 million people live in Moscow, reports CNN.

But since early March the city "has experienced internet and mobile service outages on a level previously unseen." (Though Wi-Fi access to the internet is still available...) Russian social media "is flooded with jokes and memes about sending letters by carrier pigeons or using smartphones as ping-pong paddles..." [Moscow residents] complain they cannot navigate around the center or use their favorite mobile apps. The interruptions appear to have had a knock-on effect of making it more difficult to make voice calls or send an SMS. Some are panic-buying walkie-talkies, paper maps, and even pagers.

The latest shutdown builds on similar efforts around the country. For months, mobile internet service interruptions have hit Russia's regions, particularly in provinces bordering Ukraine, which has staged incursions and launched strikes inside Russian territory to counter Russia's full-scale invasion. Some regions have reported not having any mobile internet since summer. But the most recent outages have hit the country's main centers of wealth and power: Moscow and Russia's second city, St. Petersburg.

Public officials claim the blackout of mobile internet service in the capital and other regions is part of a security effort to counter "increasingly sophisticated methods" of Ukrainian attack... Speculation centers on whether the authorities are testing their ability to clamp down on public protest in the case there's an effort to reintroduce unpopular mobilization measures to find fresh manpower for the war in Ukraine; whether mobile internet outages may precede a more sweeping digital blackout; or if the new restrictions reflect an atmosphere of heightened fear and paranoia inside the Kremlin as it watches US-led regime-change efforts unfold against Russian allies such as Venezuela and Iran... On Wednesday, Russian mobile providers sent notifications that there would be "temporary restrictions" on mobile internet in parts of Moscow for security reasons, Russian state news agency RIA-Novosti reported. The measures will last "for as long as additional measures are needed to ensure the safety of our citizens," Kremlin spokesman Dmitry Peskov said on March 11...

As well as banning many social media platforms, Russia blocks calling features on messenger apps such as WhatsApp and Telegram. Roskomnadzor, the country's communications regulator, has introduced a "white list" of approved apps... Russia has also tested what it calls the "sovereign internet," a network that is effectively firewalled from the rest of the world. The disruptions are fueling broader concerns about tightening state control. In parallel with the internet shutdown, the Kremlin has also been pushing to impose a state-controlled messaging app called Max as the country's main portal for state services, payments and everyday communication. There has been speculation the Kremlin may be planning to ban Telegram, Russia's most widely used messaging app, entirely. Roskomnadzor said that it was restricting Telegram for allegedly failing to comply with Russian laws.

"Russia has opened a criminal case against me for 'aiding terrorism,'" Telegram's Russian-born founder Pavel Durov said on X last month. "Each day, the authorities fabricate new pretexts to restrict Russians' access to Telegram as they seek to suppress the right to privacy and free speech...."

The article includes this quote from Mikhail Klimarev, head of the Internet Protection Society and an expert on Russian internet freedom. "In any situation when they (the authorities) perceive some kind of danger for themselves and accept the belief that the internet is dangerous for them, even if it may not be true, they will shut it down," he said. "Just like in Iran."
