
Meta warned by dozens of organizations that facial recognition on its smart glasses would empower predators


Dozens of civil rights organizations have warned Meta of the dangers of adding facial recognition to the company’s smart glasses. More than 70 groups have banded together as a coalition to urge Mark Zuckerberg to abandon plans to incorporate the tech, on the grounds that it would empower stalkers, sexual predators and other bad actors.

This coalition includes organizations like the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now and many others. The letter isn’t asking for safeguards. These groups want the feature to be completely eliminated, stating the idea behind facial recognition of this type is so dangerous that it “cannot be resolved through product design changes, opt-out mechanisms or incremental safeguards.” This tracks, as there would be no real way for bystanders to know or consent to being identified.

“People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health and behaviors,” the letter states.

The organizations have urged Meta to disclose any known instances of its wearables being used for stalking, harassment or domestic violence. They also want the company to disclose past or ongoing discussions with federal law enforcement agencies, including ICE, about the use of Meta smart glasses and other wearables.


There is certainly some cause for worry here. Meta has reportedly suggested it would roll out this technology “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” That’s corporate speak for “we’ll do it when nobody is watching.” The coalition has called this “vile behavior” that looks to take advantage of “rising authoritarianism.”

The technology in question is called Name Tag, for obvious reasons. It uses AI to pull up information about people in the wearer’s field of view on the smart glasses’ display. That’s about as dystopian as it gets.

The company has reportedly been working on multiple versions of the feature. There’s one that would only identify people who are currently connected on a Meta platform and another that would identify anyone with a public account on a service like Instagram. It doesn’t look like there’s any way, as of yet, to use this tech to identify strangers on the street who don’t have a Meta account of any kind. In other words, the company should expect a wave of account deletions if this rolls out.

Name Tag is currently scheduled for release at some point this year, but it’s not set in stone just yet. Public outcry has gotten Meta to back off from facial recognition in the past. The company shut down Facebook’s facial recognition system in 2021 after pushback from civil liberties groups and years of costly litigation. Meta paid out billions of dollars to settle biometric privacy lawsuits in Illinois and Texas, as well as a separate privacy case partially tied to facial recognition software.




Is Anthropic ‘nerfing’ Claude? Users increasingly report performance degradation as leaders push back


A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code — intentionally or as an outcome of compute limits — arguing that the company’s flagship coding model feels less capable, less reliable and more wasteful with tokens than it did just weeks ago.

The complaints have spread quickly on GitHub, X and Reddit over the past several weeks, with several high-reach posts alleging that Claude has become worse at sustained reasoning, more likely to abandon tasks midway through, and more prone to hallucinations or contradictions.

Some users have framed the issue as “AI shrinkflation” — the idea that customers are paying the same price for a weaker product.

Others have gone further, suggesting Anthropic may be throttling or otherwise tuning Claude downward during periods of heavy demand.


Those claims remain unproven, and Anthropic employees have publicly denied that the company degrades models to manage capacity. At the same time, Anthropic has acknowledged real changes to usage limits and reasoning defaults in recent weeks, which has made the broader debate more combustible.

VentureBeat has reached out to Anthropic for further clarification on the recent accusations, including whether any recent changes to reasoning defaults, context handling, throttling behavior, inference parameters or benchmark methodology could help explain the spike in complaints.

We have also asked how Anthropic explains the recent benchmark-related claims and whether it plans to publish additional data that could reassure customers. An Anthropic spokesperson did not address the questions individually, instead referring us to X posts by Claude Code creator Boris Cherny and Claude Code team member Thariq Shihipar regarding Opus 4.6 performance and usage limits, respectively. Both X posts are also referenced and linked below.

Viral user complaints, including from an AMD Senior Director, argue Claude has become less capable

One of the most detailed public complaints originated as a GitHub issue filed on April 2, 2026, by Stella Laurenzo, whose LinkedIn profile identifies her as a Senior Director in AMD’s AI group.


In that post, Laurenzo wrote that Claude Code had regressed to the point that it could not be trusted for complex engineering work, then backed that claim with a sprawling analysis of 6,852 Claude Code session files, 17,871 thinking blocks and 234,760 tool calls.

The complaint argued that, starting in February, Claude’s estimated reasoning depth fell sharply while signs of poorer performance rose alongside it, including more premature stopping, more “simplest fix” behavior, more reasoning loops, and a measurable shift from research-first behavior to edit-first behavior.

The post’s broader point was that for advanced engineering workflows, extended reasoning is not a luxury but part of what makes the model usable in the first place.

That GitHub thread then escaped into the broader social media conversation, with X users including @Hesamation, who posted screenshots of Laurenzo’s GitHub post to X on April 11, turning it into an even more viral talking point.


That amplification mattered because it gave the wider “Claude is getting worse” narrative something more concrete than anecdotal frustration: a long, data-heavy post from a senior AI leader at a major chip company arguing that the regression was visible in logs, tool-use patterns and user corrections, not just gut feeling.

Anthropic’s public response focused on separating perceived changes from actual model degradation. In a pinned follow-up posted a week ago on the same GitHub issue, Claude Code lead Boris Cherny thanked Laurenzo for the care and depth of the analysis but disputed its main conclusion.

Cherny said the “redact-thinking-2026-02-12” header cited in the complaint is a UI-only change that hides thinking from the interface and reduces latency, but “does not impact thinking itself,” “thinking budgets,” or how extended reasoning works under the hood.

He also said two other product changes likely affected what users were seeing: Opus 4.6’s move to adaptive thinking by default on Feb. 9, and a March 3 shift to medium effort, or effort level 85, as the default for Opus 4.6, which he said Anthropic viewed as the best balance across intelligence, latency and cost for most users.


Cherny added that users who want more extended reasoning can manually switch effort higher by typing /effort high in Claude Code terminal sessions.

That exchange gets at the core of the controversy. Critics like Laurenzo argue that Claude’s behavior in demanding coding workflows has plainly worsened and point to logs and usage patterns as evidence.

Anthropic, by contrast, is not saying nothing changed. It is saying the biggest recent changes were product and interface choices that affect what users see and how much effort the system expends by default, not a secret downgrade of the underlying model. That distinction may be technically important, but for power users who feel the product is delivering worse results, it is not necessarily a satisfying one.

External coverage from TechRadar and PC Gamer further amplified Laurenzo’s post and the larger wave of agreement from some power users.


Another viral post on X from developer Om Patel on April 7 made the same argument in even more direct terms, claiming that someone had “actually measured” how much “dumber” Claude had gotten and summarizing the result as a 67% drop.

That post helped popularize the “AI shrinkflation” label and pushed the controversy beyond hard-core Claude Code users into the broader AI discourse on X.

These claims have resonated because they map closely onto what many frustrated users say they are seeing in practice: more unfinished tasks, more backtracking, more token burn and a stronger sense that Claude is less willing to reason deeply through complicated coding jobs than it was earlier this year.

Benchmark posts turned anecdotal frustration into a public controversy

The loudest benchmark-based claim came from BridgeMind, which runs the BridgeBench hallucination benchmark. On April 12, the account posted that Claude Opus 4.6 had fallen from 83.3% accuracy and a No. 2 ranking in an earlier result to 68.3% accuracy and No. 10 in a new retest, calling that proof that “Claude Opus 4.6 is nerfed.”


That post spread widely and became one of the main anchors for the broader public case that Anthropic had degraded the model.

Other users also circulated benchmark-related or test-based posts suggesting that Opus 4.6 was underperforming versus Opus 4.5 in practical coding tasks.

Still other posts pointed to TerminalBench-related results as supposed evidence that the model’s behavior had changed in certain harnesses or product contexts.

The effect was cumulative: benchmark screenshots, side-by-side tests and anecdotal frustration all began reinforcing one another in public.


That matters because benchmark claims tend to travel farther than more subjective complaints. A developer saying a model “feels worse” is one thing. A screenshot showing a ranking drop from No. 2 to No. 10, or a dramatic percentage swing in accuracy, gives the appearance of hard proof, even when the underlying comparison may be more complicated.

Critics of the benchmark claims say the evidence is weaker than it looks

The most important rebuttal to the BridgeBench claim did not come from Anthropic. It came from Paul Calcraft, an outside software and AI researcher on X, who argued that the viral comparison was misleading because the earlier Opus 4.6 result was based on only six tasks while the later one was based on 30.

In his words, it was a “DIFFERENT BENCHMARK.” He also said that on the six tasks the two runs shared in common, Claude’s score moved only modestly, from 87.6% previously to 85.4% in the later run, and that the bigger swing appeared to come mostly from a single fabrication result without repeats. He characterized that as something that could easily fall within ordinary statistical noise.
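Calcraft’s statistical-noise point can be illustrated with back-of-the-envelope math. The sketch below treats each benchmark task as an independent pass/fail trial, which is a simplification of how BridgeBench actually scores, but it shows why a six-task subset is far too small to distinguish the two runs:

```python
import math

def score_se(p: float, n: int) -> float:
    """Standard error of a mean pass rate over n independent tasks."""
    return math.sqrt(p * (1 - p) / n)

# Using the figures cited in the thread: ~87.6% on a shared subset of
# just 6 tasks, and 68.3% on the later 30-task run.
se_6 = score_se(0.876, 6)    # roughly 0.13, i.e. ~13 percentage points
se_30 = score_se(0.683, 30)  # roughly 0.085

# The observed swing on the shared subset (87.6% -> 85.4%) is a small
# fraction of one standard error, so it is indistinguishable from noise.
swing = 0.876 - 0.854
print(f"SE on 6 shared tasks:  {se_6:.3f}")
print(f"SE on 30 tasks:        {se_30:.3f}")
print(f"Observed swing:        {swing:.3f} ({swing / se_6:.2f} SE)")
```

With roughly 13 points of standard error on six tasks, a 2.2-point move carries essentially no signal, which is the substance of the “ordinary statistical noise” characterization.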

That outside rebuttal matters because it undercuts one of the cleanest and most viral claims in circulation. It does not prove users are wrong to think something has changed. But it does suggest that at least some of the benchmark evidence now driving the story may be overstated, poorly normalized or not directly comparable.


Even the BridgeBench post itself drew a community note to similar effect. The note said the two benchmark runs covered different scopes — six tasks in one case and 30 in the other — and that the common-task subset showed only a minor change. That does not make the later result meaningless, but it weakens the strongest version of the “BridgeBench proved it” argument.

This is now a key feature of the controversy: the claims are not all equally strong. Some are grounded in first-hand user experience. Some point to real product changes. Some rely on benchmark comparisons that may not be apples-to-apples. And some depend on inferences about hidden system behavior that users outside Anthropic cannot directly verify.

Earlier capacity limits gave users a reason to suspect more changes under the hood

The current backlash also lands in the shadow of a real, confirmed Anthropic policy change from late March. On March 26, Anthropic technical staffer Thariq Shihipar posted that, “To manage growing demand for Claude,” the company was adjusting how 5-hour session limits work for Free, Pro and Max subscribers during peak hours, while keeping weekly limits unchanged.

He added that during weekdays from 5 a.m. to 11 a.m. Pacific time, users would move through their 5-hour session limits faster than before. In follow-up posts, he said Anthropic had landed efficiency wins to offset some of the impact, but that roughly 7% of users would hit session limits they would not have hit before, particularly on Pro tiers.


In an email on March 27, 2026, Anthropic told VentureBeat that Team and Enterprise customers were not affected by those changes, and that the shift was not dynamically optimized per user but instead applied to the peak-hour window the company had publicly described. Anthropic also said it was continuing to invest in scaling capacity.

Those comments were about session limits, not model downgrades. But they are important context, because they establish two things that users now keep connecting in public: first, Anthropic has been dealing with surging demand; second, it has already changed how usage is rationed during busy periods. That does not prove Anthropic reduced model quality. It does help explain why so many users are primed to believe something else may also have changed.

Prompt caching and TTL

A separate, more recent GitHub issue broadens the dispute beyond model quality and into pricing and quota behavior. In issue #46829, user seanGSISG argued that Claude Code’s prompt-cache time-to-live, or TTL, appeared to shift from a one-hour setting back to a five-minute setting in early March, based on analysis of nearly 120,000 API calls drawn from Claude Code session logs across two machines.

The complaint argues that this change drove meaningful increases in cache-creation costs and quota burn, especially for long-running coding sessions where cached context expires quickly and must be rebuilt. The author claims that this helps explain why some subscription users began hitting usage limits they had not previously encountered.


What makes this issue notable is that Anthropic did not flatly deny that something changed. In a reply on the thread, Jarred Sumner said the March 6 change was real and intentional, but rejected the framing that it was a regression. He said Claude Code uses different cache durations for different request types, and that one-hour cache is not always cheaper because one-hour writes cost more up front and only save money when the same cached context is reused enough times to justify it.

In his telling, the change was part of ongoing cache optimization work, not a silent downgrade, and the pre–March 6 behavior described in the issue “wasn’t the intended steady state.”
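Sumner’s cost argument reduces to a break-even calculation. The numbers below are invented placeholders, not Anthropic’s actual pricing; the point is only the shape of the tradeoff: a one-hour cache pays a larger write premium once, while a five-minute cache pays a smaller premium repeatedly as the entry expires and is rebuilt.

```python
# Hypothetical break-even sketch for prompt-cache TTL choices.
# All rates are illustrative placeholders: cache writes are billed at a
# premium over the base input-token price, cache reads at a discount.
WRITE_5M = 1.25  # assumed 5-minute cache write premium
WRITE_1H = 2.00  # assumed 1-hour cache write premium
READ = 0.10      # assumed cache-read discount

def session_cost(write_premium: float, reuses: int, rewrites: int) -> float:
    """Relative cost of one cached prefix: writes plus discounted reads.

    rewrites: how many times the prefix expires and must be re-written
    during the session (high for a 5-minute TTL, ~1 for a 1-hour TTL).
    """
    return write_premium * rewrites + READ * reuses

# Long session, 20 reuses: the 5-minute cache expires often (say 6
# rewrites) while the 1-hour cache is written once.
cost_5m = session_cost(WRITE_5M, reuses=20, rewrites=6)
cost_1h = session_cost(WRITE_1H, reuses=20, rewrites=1)

# Short burst, 2 reuses, nothing expires: the cheaper write wins.
short_5m = session_cost(WRITE_5M, reuses=2, rewrites=1)
short_1h = session_cost(WRITE_1H, reuses=2, rewrites=1)
```

Under these assumed rates, the one-hour cache is cheaper for the long session (4.0 vs. 9.5 relative units) but more expensive for the short burst, which is why neither TTL dominates and why Anthropic can describe the choice as per-request heuristics rather than a flat downgrade.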

The thread later drew a more detailed response from Anthropic’s Cherny, who described one-hour caching as “nuanced” and said the company has been testing heuristics to improve cache hit rates, token usage and latency for subscribers. Cherny said Anthropic keeps five-minute cache for many queries, including subagents that are rarely resumed, and said turning off telemetry also disables experiment gates, which can cause Claude Code to fall back to a five-minute default in some cases.

He added that Anthropic plans to expose environment variables that let users force one-hour or five-minute cache behavior directly. Together, those replies do not validate the issue author’s claim that Anthropic silently made Claude Code more expensive overall, but they do confirm that Anthropic has been actively experimenting with cache behavior behind the scenes during the same period users began complaining more loudly about quota burn and changing product behavior.


Anthropic says user-facing changes, not secret degradation, explain much of the uproar

Anthropic-affiliated employees have publicly pushed back on the broadest accusations. In one widely circulated reply on X, Cherny responded to claims that Anthropic had secretly nerfed Claude Code by writing, “This is false.”

He said Claude Code had been defaulted to medium effort in response to user feedback that Claude was consuming too many tokens, and that the change had been disclosed both in the changelog and in a dialog shown to users when they opened Claude Code.

That response is notable because it concedes a meaningful product change while rejecting the more conspiratorial interpretation of it. Anthropic is not saying nothing changed. It is saying that what changed was disclosed and was aimed at balancing token use, not secretly reducing model quality.

Public documentation also supports the fact that effort defaults have been in motion. Claude Code’s changelog says that on April 7, Anthropic changed the default effort level from medium to high for API-key users as well as Bedrock, Vertex, Foundry, Team and Enterprise users.


That suggests Anthropic has actively been tuning these settings across different segments, which could plausibly affect user perceptions even if the core model weights are unchanged.

Shihipar has also directly denied the broader demand-management accusation. In a reply on X posted April 11, he said Anthropic does not “degrade” its models to better serve demand. He also said that changes to thinking summaries affected how some users were measuring Claude’s “thinking,” and that the company had not found evidence backing the strongest qualitative claims now spreading online.

The real issue may be trust as much as model quality

What is clear is that a trust gap has opened between Anthropic and some of its most demanding users.

For developers who rely on Claude Code all day, subtle shifts in visible thinking output, effort defaults, token burn, latency tradeoffs or usage caps can feel indistinguishable from a weaker model.


That is true whether the root cause is a product setting, a UI change, an inference-policy tweak, capacity pressure or a genuine quality regression.

It also means both sides of the fight may be talking past each other. Users are describing what they experience: more friction, more failures and less confidence. Anthropic is responding in product terms: effort defaults, hidden thinking summaries, changelog disclosures, and denials that demand pressure is causing secret model degradation.

Those are not necessarily incompatible descriptions. A model can feel worse to users even if the company believes it has not “nerfed” the underlying model in the way critics allege. But coming at a time when Anthropic’s chief rival OpenAI has recently pivoted and put more resources behind its competing, enterprise and vibe-coding focused product Codex — even offering a new, more mid-range ChatGPT subscription in an effort to boost usage of the tool — it’s certainly not the kind of publicity that stands to benefit Anthropic or its customer retention.

At the same time, the public evidence remains mixed. Some of the most viral claims have come from developers with detailed logs and strong opinions based on repeated use. Some of the benchmark evidence has been challenged by outside observers on methodological grounds. And Anthropic’s own recent changes to limits and settings ensure that this debate is happening against a backdrop of real adjustments, not pure rumor.


Denuvo removed from Resident Evil Requiem, improving performance over hypervisor-based crack



Less than a month after removing Denuvo from Doom: The Dark Ages, Voices38 has now achieved the same feat with Resident Evil Requiem. Capcom’s survival horror title is the first game released in 2026 to undergo a full “cracking” process, just as TDA marked the first cracked release of a…

DOJ Is Using A Grand Jury To Force Reddit To Unmask An Anonymous User


from the government-cheat-codes dept

The government’s reliance on grand juries to bring charges against activists, protesters, and the president’s personal enemies has been misplaced. Increasingly, grand juries are refusing to give the government what it wants: rubber-stamped indictments that will allow it to move forward with vindictive prosecutions.

But there’s still something grand juries offer that regular courts can’t: secrecy. If the government doesn’t want the public to know how it’s building cases, its best bet is to drag everyone involved in front of a grand jury, whose secrecy can’t easily be pierced without a concerted effort by the involved parties and the assistance of sympathetic judges.

There’s a good reason the government doesn’t want the public to know what it’s doing in this case detailed by Ryan Devereaux for The Intercept. There’s some shady stuff happening here, along with some incredibly incompetent stuff.

According to a subpoena obtained by The Intercept, Reddit has until April 14 to provide a wide range of personal data on one of its users, whom U.S. Immigration and Customs Enforcement agents have been trying unsuccessfully to identify for more than a month.

That’s the brief summation. The details, however, make this whole thing look sketch as fuck. Reddit received the first demand for this user’s data on March 4. Two days later, it informed the user that the government was seeking this information. The Reddit user secured legal representation from the Civil Liberties Defense Center.


The user’s lawyers looked through the targeted account and couldn’t find anything that might be considered criminal.

Commenting on a Minnesota Star Tribune article, another Reddit user posted that Ross might be welcomed as a hero in Florida or Texas. John Doe responded by sharing that Ross had lived in Chaska, Minnesota; grew up in Indiana; and served in the Indiana National Guard — biographical details that were circulating widely at the time. “Hopefully he moves up to Stillwater State Penitentiary,” they wrote.

In another post, a Reddit user asked what they should write on an anti-ICE protest sign. John Doe suggested the lyrics to a song: “Urine speaks louder than words.” In a third instance, Doe wrote, “TSA sucks and we all know it.” According to the Reddit user’s attorneys, these were the most aggressive posts they could find.

While one would hardly expect legal reps to dish out inculpatory information in response to a journalist’s questions, the lack of anything possibly law-breaking speaks for itself. The whole thing looks like a fishing expedition by the DOJ on behalf of ICE — something that’s confirmed by the administrative subpoena ICE issued in hopes of unmasking this user.

In its summons, ICE indicated the basis for its request was a provision of the Smoot-Hawley Tariff Act of 1930. John Doe informed the court that they had nothing to do with the kind of activities at issue in the near-century-old statute, which governs boat show sales, wild animal imports, forfeited wines and spirits, and cross-border trade in other goods.

In case you’ve forgotten, the C in ICE stands for “Customs.” That means whoever “wrote” this subpoena didn’t even care enough to ensure the correct boilerplate was copy-pasted into the subpoena. ICE wants to punish this person for their speech, which it seemingly believes adds up to a federal crime. In support of its demand for user info, it inserted boilerplate pertaining to customs enforcement.


Then again, this might have been intentional laziness. As The Intercept notes, the Trump administration tried to use the same customs statutes to unmask his critics back in 2017. Those efforts were criticized by the still-operable Office of the Inspector General.

ICE withdrew the tariff-related subpoena. Then the DOJ sent another one nearly a month later, this time targeting Reddit itself:

This time, instead of requesting information on an individual user, the government ordered Reddit itself to appear before a grand jury — not in California, but in Washington.

The request came not from an ICE field agent but rather from a Special Assistant U.S. Attorney in D.C., where Reddit has received the highest number of federal law enforcement information requests. The records sought spanned a period roughly three times longer than what ICE had originally requested.

That’s the backdoor the DOJ is trying to use. It can’t get the stuff it thinks will generate an indictment via the usual Smoot-Hawley whatever the fuck. And since it’s not interested in seeking an actual warrant (which would require judicial review) to compel Reddit to produce user data and information, it’s hoping it can accomplish the same thing in a secret court far away from anything resembling an adversarial process, much less the watchful eyes of a federal judge.


That’s the Department of Justice deliberately routing around a crucial part of the justice system in hopes of securing ill-gotten “wins” against critics of Trump, his policies, and his administration in general. With any luck, this attempt won’t work because it’s been exposed. But rest assured, this administration will never stop trying to bypass the systems of checks and balances that might occasionally prevent it from doing whatever it wants.

Filed Under: administrative warrants, doj, grand juries, ice, trump administration

Companies: eff, reddit


Today’s NYT Wordle Hints, Answer and Help for April 14 #1760


Looking for the most recent Wordle answer? Click here for today’s Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.


Today’s Wordle puzzle is a bit tricky, with a repeated letter and only one true vowel. If you need a new starter word, check out our list of which letters show up the most in English words. If you need hints and the answer, read on.

Read more: New Study Reveals Wordle’s Top 10 Toughest Words of 2025


Today’s Wordle hints

Before we show you today’s Wordle answer, we’ll give you some hints. If you don’t want a spoiler, look away now.

Wordle hint No. 1: Repeats

Today’s Wordle answer has one repeated letter.

Wordle hint No. 2: Vowels

Today’s Wordle answer has one vowel and one sometimes vowel.

Wordle hint No. 3: First letter

Today’s Wordle answer begins with C.


Wordle hint No. 4: Last letter

Today’s Wordle answer ends with E.

Wordle hint No. 5: Meaning

Today’s Wordle answer can refer to a complete set of events. It can also be a shortened word for a pedaled vehicle that you ride.

TODAY’S WORDLE ANSWER

Today’s Wordle answer is CYCLE.


Yesterday’s Wordle answer

Yesterday’s Wordle answer, April 13, No. 1759, was ELFIN.

Recent Wordle answers

April 9, No. 1755: LADEN

April 10, No. 1756: CAROM

April 11, No. 1757: PRUDE


April 12, No. 1758: ALLEY


ESP32 Weather Display Runs Macintosh System 3


It seems like everybody takes their turn doing an ESP32-based weather display, and why not? They’re cheap, they’re easy, and you need to start somewhere. With the Cheap Yellow Display (CYD) and modules like it, you don’t even need to touch hardware! [likeablob] had the CYD, and he’s showing weather on it, but the Cydintosh goes further: it’s a full Macintosh Plus emulator running on the ESP32.

Honey, I stretched the Macintosh!

The weather app is his own creation, written with the Retro68k cross-compiler, but it looks like something out of the 80s even if it’s getting its data over WiFi. The WiFi connection is, of course, thanks to the whole thing running on an ESP32-S3. Mac Plus emulation comes from [evansm7]’s Micro Mac emulator, the same one that lives inside the RP2040-based PicoMac that we covered some time ago. Obviously [likeablob] has added his own code to get the Macintosh emulator talking to the ESP32’s wireless hardware, with a native application to control the wifi connection in System 3.3. As far as the Macintosh is concerned, commands are passed to the ESP32 via memory address 0xF00000, and data can be read back from it as well. It’s a straightforward approach to allow intercommunication between the emulator and the real world.
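The memory-mapped command channel at 0xF00000 is a classic “mailbox” pattern: the emulator’s bus traps loads and stores at one magic address and hands them to the host. Here is a minimal Python sketch of the general technique; the class name, command codes and reply values are invented for illustration and are not [likeablob]’s actual implementation.

```python
MAILBOX_ADDR = 0xF00000  # the magic address the guest and host agree on

class Mailbox:
    """Host side: claims guest reads/writes at one magic address."""

    def __init__(self) -> None:
        self._response = 0

    def guest_write(self, addr: int, value: int) -> bool:
        """Called by the emulated memory bus on every store.

        Returns True if the access was claimed by the mailbox.
        """
        if addr != MAILBOX_ADDR:
            return False  # ordinary RAM access, not ours
        # Dispatch the command on the host side (here: fake WiFi status).
        if value == 0x01:          # hypothetical "query WiFi" command
            self._response = 0xAA  # hypothetical "connected" reply
        else:
            self._response = 0x00
        return True

    def guest_read(self, addr: int):
        """Called on every load; None means 'not our address'."""
        if addr != MAILBOX_ADDR:
            return None
        return self._response

mbox = Mailbox()
mbox.guest_write(MAILBOX_ADDR, 0x01)    # guest app issues a command
status = mbox.guest_read(MAILBOX_ADDR)  # guest reads back the reply
```

The appeal of the pattern is that the guest OS needs no special driver model: any System 3 application that can poke an absolute address can talk to the ESP32’s radio.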

The touchpad on the CYD serves as a mouse for the Macintosh, which might not be the most ergonomic given the Macintosh System interface was never meant for touchscreens, but evidently it’s good enough for [likeablob]. He’s built it into a lovely 3D printed case, whose STLs are available on the GitHub repository along with all the code, including the Home Assistant integration.


Games Workshop brings seven classic Warhammer games to Steam for the first time


Fans of miniature plastic soldiers, rejoice. Games Workshop has brought a host of older Warhammer and Warhammer 40K video games to Steam for the first time, alongside a dozen games that haven’t been available on Valve’s storefront for a few years. The new-to-Steam releases consist of three games from the Warhammer fantasy range — Shadow of the Horned Rat, Mark of Chaos – Gold Edition and Dark Omen — and four from its sci-fi 40K universe — Chaos Gate, Fire Warrior, Final Liberation and Rites of War.

If you’re a Warhammer fan of a certain age, some of these may be formative experiences for you. I know they are for me. I can’t count how many hours I spent playing Chaos Gate when I first discovered 40K at the age of 10. Yes, it was an XCOM clone, but by that point I didn’t know about the MicroProse original, and Space Marines were cool.

Years later and as a Tau collector at the time, I also loved Fire Warrior, even if it wasn’t the most polished or deep first-person shooter. I haven’t played the other five games included in today’s announcement, but I’ve heard Warhammer: Shadow of the Horned Rat and Warhammer 40K: Rites of War are pretty good if you’re into the setting or, in the latter case, a fan of the Eldar.

To celebrate the re-release of these old gems, Games Workshop is running a Classics sale on Steam, with discounts on all 19 re-releases. Plus, you can get discounts on some more recent releases, including the excellent Dawn of War – Definitive Edition and Dawn of War 2 – Anniversary Edition. If you’re new to the Warhammer 40K universe, and would rather avoid a plastic addiction, one of those would be my first port of call, along with the excellent Space Marine 2.


Roblox boosts child safety with new account types limiting chat and game access


Roblox is among the internet’s busiest digital playgrounds, but keeping it safe, especially for the youngest users, has been an ongoing challenge. 

Well, on April 13, 2026, the platform’s founder and CEO, David Baszucki, announced two new age-based account tiers, which will launch in June. 

So, What Exactly Is Changing For Young Players?

The platform is launching Roblox Kids for users between the ages of five and eight and Roblox Select for those aged between nine and 15.

Both categories are assigned automatically through the platform’s existing facial age-check mechanism, the same system that was made mandatory for accessing the built-in chat feature in January 2026.

For the youngest group, Roblox Kids, chat is completely disabled by default, and game access is restricted to content carrying only Minimal or Mild maturity ratings. Roblox Select users get more freedom, with access to Moderate-rated games and chat that can be gradually enabled based on age.

Once a user hits 16, they graduate to a standard account. 
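The tier rules described above boil down to a simple age-to-tier mapping. The sketch below is purely illustrative: the function name, tier strings and the "unsupported" case for under-fives are my own assumptions, not Roblox's actual API or internal logic.

```python
# Hypothetical sketch of the age-to-tier mapping described in the article.
# Cutoffs follow the announcement: 5-8 -> Kids, 9-15 -> Select, 16+ -> standard.

def account_tier(age: int) -> str:
    """Map a verified age to the account tier described in the announcement."""
    if age < 5:
        return "unsupported"      # below the platform's stated minimum age
    if age <= 8:
        return "Roblox Kids"      # chat disabled; Minimal/Mild-rated games only
    if age <= 15:
        return "Roblox Select"    # Moderate-rated games; chat enabled gradually
    return "standard"             # full account once a user hits 16

print(account_tier(7))   # Roblox Kids
print(account_tier(12))  # Roblox Select
print(account_tier(16))  # standard
```

In practice the age fed into a mapping like this would come from the facial age-check step, which is why the verification requirement matters so much to the design.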

What Does This Mean For Parents And Developers?

Parents gain sharper controls, including the ability to individually block or approve titles for children up to age 15. Meanwhile, developers face a tougher entry bar: to reach younger audiences, they must provide ID verification, two-step authentication and an active Roblox Plus subscription (which costs $4.99 per month).

As noted above, the new age-based accounts will roll out globally at the beginning of June 2026, and users will get a transition period to verify their age.

If you’re wondering where the sudden strategic pivot comes from, it stems from the lawsuits by the attorneys general of Louisiana and Texas over child safety concerns.

It’s a clear signal that platforms can no longer keep child safety on the back burner, and if Roblox handles the rollout well, it could set a benchmark for age-appropriate access.


Tech

FBI Raids Texas Home of Man Suspected of Firebombing Sam Altman’s SF Mansion


The FBI searched the Texas home of a 20-year-old man accused of throwing a Molotov cocktail at Sam Altman’s San Francisco residence. Authorities say the suspect also made threats at OpenAI’s headquarters, and reports indicate he had written extensively about fears over AI and opposition to AI executives.

The suspect reportedly authored a Substack blog and was a member of the Discord server PauseAI, an activist group focused on banning the development of the most powerful AI models to protect the public. In one post, they wrote: “These machines have already shown themselves to be unaligned with the interest of the people creating them. Models have often been found lying, cheating on tasks, and blackmailing their own creators whenever convenient; let alone the broader question of aligning them to whatever general ‘human interest’ may be.” The Houston Chronicle reports: The search happened hours before the Justice Department charged 20-year-old Daniel Moreno-Gama with possession of an unregistered firearm and damage and destruction of property by means of explosives. An FBI spokesperson on Monday morning confirmed agents were executing a search warrant in Spring, but provided no other information.

Around the same time, FOX News reported the search was being conducted at the home of Daniel Moreno-Gama, 20, who last week was arrested by San Francisco police on suspicion of attempted murder, making criminal threats and possession of a destructive device. The charges were first reported by the Associated Press. When Moreno-Gama was arrested Friday, he was carrying a document that “identified views opposed to Artificial Intelligence (AI) and the executives of various AI companies,” the Associated Press reported. Moreno-Gama has no criminal history in Harris or Montgomery counties, according to public records. […] Agents had left the cul-de-sac by 1 p.m. It was unclear if they removed any items from the house. Another incident occurred outside Sam Altman’s residence early Sunday morning. “Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI’s CEO,” reports The San Francisco Standard, citing reports from the local police department. Two suspects were arrested and booked for negligent discharge.


Tech

Today’s NYT Connections Hints, Answers for April 14 #1038


Looking for the most recent Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.


Today’s NYT Connections puzzle is rather tricky. Read on for clues and today’s Connections answers.

The Times has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. Players who are registered with the Times Games section can now nerd out by following their progress, including the number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.

Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time

Hints for today’s Connections groups

Here are four hints for the groupings in today’s Connections puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Surfing the web.

Green group hint: Think Muhammad Ali.

Blue group hint: You might try to do this with a pinball machine.

Purple group hint: Not imprisoned.

Answers for today’s Connections groups

Yellow group: Things stored by a browser.

Green group: Boxing terms.

Blue group: Tilt.

Purple group: Free ____.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words

What are today’s Connections answers?

The completed NYT Connections puzzle for April 14, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is things stored by a browser. The four answers are bookmark, cache, cookie and history.

The green words in today’s Connections

The theme is boxing terms. The four answers are bell, gloves, ring and round.

The blue words in today’s Connections

The theme is tilt. The four answers are lean, list, pitch and tip.

The purple words in today’s Connections

The theme is free ____. The four answers are lance, mason, style and way.


Tech

Xbox CEO called Game Pass ‘too expensive for players’ in a leaked memo


Xbox’s new chief exec, Asha Sharma, has only been in charge for a few months but things already seem like they might be changing for the better. Or at the very least, they might be getting cheaper. The Verge reported that the new Xbox CEO wrote a memo to employees addressing the current pricing of the Game Pass subscription service.

“Game Pass is central to gaming value on Xbox. It’s also clear that the current model isn’t the final one,” Sharma allegedly said. “Short term, Game Pass has become too expensive for players, so we need a better value equation. Long term, we will evolve Game Pass into a more flexible system which will take time to test and learn around.”

After Microsoft upped the price of Game Pass twice within 15 months, many of us certainly felt that the service had gotten too costly to keep. Xbox is still offering a wide range of titles on Game Pass; the April update is adding indies like Hades 2 and the new Double Fine project Kiln alongside AAA hits like the remake of Call of Duty: Modern Warfare. The Verge‘s sources suggested that the addition of the CoD franchise might have been a factor in some of the Game Pass price increases, since Microsoft would lose out on revenue by making the latest entries in the series available under the subscription.

It’s too early to say whether this memo from Sharma means Xbox is on the brink of a resurgence. And there are changes the company could make, like adding ever more complicated tiers, that would further hamper interest in and uptake of Game Pass. But acknowledging the problem, even internally, is refreshing to see after so many baffling moves from Xbox in recent years.


Copyright © 2025