Claude Code’s ‘/goals’ separates the agent that works from the one that decides it’s done

A code migration agent finishes its run, and the pipeline looks green. But several pieces were never compiled — and it took days to catch. That’s not a model failure; that’s an agent deciding it was done before it actually was.

Many enterprises are now seeing that production AI agent pipelines fail not because of the models’ abilities but because the model behind the agent decides to stop. Several methods to prevent premature task exits are now available from LangChain, Google and OpenAI, though these often rely on separate evaluation systems. The newest method comes from Anthropic: /goals on Claude Code, which formally separates task execution and task evaluation.

Coding agents work in a loop: they read files, run commands, edit code and then check whether the task is done. 

Claude Code /goals essentially adds a second layer to that loop. After a user defines a goal, Claude continues to work turn by turn, but an evaluator model steps in after every step to review the work and decide whether the goal has been achieved.

The two-model split

Orchestration platforms from all three vendors have run into the same roadblock, but they approach it differently. OpenAI leaves the loop alone and lets the model decide when it’s done, though it does let users attach their own evaluators. In LangGraph and Google’s Agent Development Kit, independent evaluation is possible, but developers have to define the critic node, write the termination logic and configure observability themselves.

Claude Code /goals makes the independent evaluator the default, however long or short the run turns out to be. The developer states the goal completion condition in a prompt, for example: /goal all tests in test/auth pass, and the lint step is clean. Claude Code then runs, and every time the agent attempts to end its work, the evaluation model, which is Haiku by default, checks the current state against that condition. If the condition is not met, the agent keeps running. If it is met, the evaluator logs the achieved condition to the agent conversation transcript and clears the goal. The evaluator makes only one binary decision, done or not done, which is why the smaller Haiku model works well.
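In outline, the pattern looks something like the sketch below. This is an illustrative Python loop, not Anthropic’s implementation: the chat client, the model names and the helper functions are assumptions made up to show how a separate evaluator, rather than the working model, gets the final say on when the run is over.

```python
# Illustrative worker/evaluator loop. The client object and model names are
# hypothetical placeholders; this is not Claude Code's actual code.

def run_agent_step(client, transcript):
    """Let the worker model take one turn: read files, run commands, edit code."""
    reply = client.chat(model="worker-model", messages=transcript)
    transcript.append({"role": "assistant", "content": reply})
    return reply

def goal_met(client, goal, transcript):
    """Ask a smaller evaluator model for a single binary decision: done or not."""
    verdict = client.chat(
        model="evaluator-model",  # a cheap model suffices for a yes/no check
        messages=[{
            "role": "user",
            "content": f"Goal: {goal}\n\nTranscript: {transcript}\n\n"
                       "Reply DONE only if the goal condition is verifiably met.",
        }],
    )
    return verdict.strip().upper().startswith("DONE")

def run_with_goal(client, goal, max_turns=50):
    transcript = [{"role": "user", "content": goal}]
    for _ in range(max_turns):
        reply = run_agent_step(client, transcript)
        # The worker may claim it is finished, but it does not get the final say.
        if "task complete" in reply.lower():
            if goal_met(client, goal, transcript):
                # Log the achieved condition and clear the goal.
                transcript.append({"role": "system", "content": f"Goal met: {goal}"})
                return transcript
            transcript.append({"role": "system",
                               "content": "Evaluator: goal not met, keep working."})
    raise RuntimeError("Turn budget exhausted before the goal was met.")
```

The design point is the one Anthropic describes: the stop decision is made by a model that never did the work, checked against a condition the developer wrote down in advance.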

Claude Code makes this possible by separating the model that attempts to complete a task from the evaluator model that ensures the task is actually completed. This prevents the agent from mixing up what it’s already accomplished with what still needs to be done. With this method, Anthropic noted there’s no need for a third-party observability platform — though enterprises are free to continue using one alongside Claude Code — no need for a custom log, and less reliance on post-mortem reconstruction.

Competitors such as Google’s ADK support similar evaluation patterns; ADK provides a LoopAgent, but developers have to architect that logic themselves.

In its documentation, Anthropic said the most successful conditions usually have: 

  • One measurable end state: a test result, a build exit code, a file count, an empty queue

  • A stated check: how Claude should prove it, such as “npm test exits 0” or “git status is clean” (see the sketch after this list).

  • Constraints that matter: anything that must not change on the way there, such as “no other test file is modified”
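As an illustration of what a “stated check” can reduce to in practice, here is a minimal sketch that expresses the two example proofs above as commands a tool call could run and verify. The helper name and structure are hypothetical, not part of Claude Code.

```python
import subprocess

def condition_met() -> bool:
    """True only when every stated check from the goal passes."""
    tests = subprocess.run(["npm", "test"])                       # "npm test exits 0"
    status = subprocess.run(["git", "status", "--porcelain"],
                            capture_output=True, text=True)       # "git status is clean"
    return tests.returncode == 0 and status.stdout.strip() == ""
```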

Reliability in the loop

For enterprises already managing sprawling tool stacks, the appeal is a native evaluator that doesn’t add another system to maintain.

This is part of a broader trend in the agentic space, especially as the possibility of stateful, long-running and self-learning agents becomes more of a reality. Evaluator models, verification systems and other independent adjudication systems are starting to show up in reasoning systems and, in some cases, in coding agents like Devin or SWE-agent. 

Sean Brownell, solutions director at Sprinklr, told VentureBeat in an email that there is interest in this kind of loop, where the task and judge are separate, but he feels there is nothing unique about Anthropic’s approach.

“Yes, the loop works. Separating the builder from the judge is sound design because, fundamentally, you can’t trust a model to judge its own homework. The model doing the work is the worst judge of whether it’s done,” Brownell said. “That being said, Anthropic isn’t first to market. The most interesting story here is that two of the world’s biggest AI labs shipped the same command just days apart, but each of them reached entirely different conclusions about who gets to declare ‘done.’”

Brownell said the loop works best “for deterministic work with a verifiable end-state like migrations, fixing broken test suites, clearing a backlog,” but for more nuanced tasks or those needing design judgment, a human making that decision is far more important.

Bringing that evaluator/task split to the agent-loop level shows that companies like Anthropic are pushing agents and orchestration further toward a more auditable, observable system.

Your next free Google account might only come with 5GB of storage

Google has quietly altered one of the most reliable promises in consumer tech: 15GB of free cloud storage. For years, signing up for a Google account meant getting 15GB of free storage, shared across Gmail, Drive, and Photos. However, that’s changed. 

New accounts now default to 5GB (the same as iCloud), with the full 15GB available only if you enter your phone number during setup. The prompt users are seeing reads: “Your account includes 5GB of storage. Now get even more storage space with your phone number.”

What exactly changed?

The policy change took effect sometime around March 18, 2026 (via 9To5Google). That’s when the company updated its support page language from definitive to conditional. Initially, the support page read “Your Google account comes with 15GB of cloud storage at no charge.”

Now, it has been updated to say “up to 15GB of cloud storage at no charge.” Google didn’t announce the change via a tweet or a blog post, as it typically does for consumer-facing updates.

During account setup, users now see two explicit choices: link a phone number to get 15GB of storage, or keep 5GB.

Why is Google doing this?

Google wants to make sure the 15GB allotment is offered to each person only once, not every time they create a new account. Tying the free storage to a phone number is, I’d say, a smart move, as it’s much harder to get a new number than to create a new Google account.

So the company is positioning the change as an anti-duplication measure rather than anything else. A Google spokesperson has also confirmed to Engadget that this is a regional test, which is why some users can still get the full 15GB of free storage without verifying their phone number.

At the same time, I’d also draw your attention to the timing of this change. Google only recently expanded the available storage for AI Pro subscribers from 1TB to 5TB, and now it’s tightening the allotment for free users. Ultimately, we should all prepare for slimmer free storage margins.

What the jury will actually decide in the case of Elon Musk vs. Sam Altman

Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab.

While the trial exploring Elon Musk’s case against OpenAI’s other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman’s firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions.

  • Breach of charitable trust — essentially, did OpenAI and cofounders Sam Altman and Greg Brockman violate a specific agreement with Musk to use his donations to OpenAI for a specific, charitable purpose and not general use by the non-profit?
  • Unjust enrichment — did the defendants use Musk’s donations to enrich themselves through OpenAI’s for-profit arm, instead of for charitable purposes?
  • Aiding and abetting breach of charitable trust — did Microsoft, through its interactions with OpenAI, know that Musk had specific conditions on his donations, and did it play a significant role in causing harm to Musk?

OpenAI has also made three arguments in its defense that the jury will weigh:

  • Statute of limitations — a legal deadline by which a lawsuit must be filed. Here, if OpenAI can prove that any harms to Musk happened before August 5, 2021 for the first count, August 5, 2022 for the second count, and November 14, 2021 for the third count, then those claims are time-barred.
  • Unreasonable delay — Musk, by filing his lawsuit in 2024, delayed his claim in a way that made his request for damages unreasonable.
  • Unclean hands — a legal doctrine under which a plaintiff’s own misconduct can defeat his claims. Here, OpenAI argues that Musk’s conduct related to his claims was unconscionable and renders them invalid.

If Musk wins out, it could mean the end of OpenAI as a for-profit company, but it’s not entirely clear what would result. Next week, the judge will begin a set of new hearings where lawyers from both sides will debate what the consequences of a verdict in favor of the plaintiffs might be. That process would be moot, however, if the jury rules against Musk.

Breach of charitable trust

Musk’s attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI to the world, and prevent it from being controlled by any one organization. In particular, they say a $10 billion investment from Microsoft in 2023 into OpenAI’s for-profit affiliate—the first to happen after the statute of limitations—was the event that turned Musk’s concern into conviction.

That deal, Musk’s lawyers say, was different from previous investments and led to OpenAI’s investors being enriched by the company’s commercial products, at the expense of the charitable mission of AI safety that Musk promoted.

OpenAI’s attorneys have asked every witness to describe specific restrictions put on Musk’s donations, and none have, including his financial adviser Jared Birchall, his chief of staff Sam Teller, and his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required for OpenAI to achieve its goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit he would personally control, and later to merge OpenAI into his company Tesla. They also note that the organization’s other donors haven’t said their charitable trust was violated.

Importantly, a forensic accountant hired by OpenAI testified that all of Musk’s donations had been used by OpenAI well before the key date of August 5, 2021. That is evidence that Musk’s donations were already used for their purpose well before he brought his lawsuit, invalidating any charitable trust that may have existed.

Mainly, they insist that the for-profit affiliate that conducts most of OpenAI’s actual activity continues to fulfill the organization’s mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.

Unjust enrichment

The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Brockman and Ilya Sutskever, as well as Microsoft itself, as a sign that Musk’s donations were ultimately used for personal benefit, as opposed to supporting the mission of the charity. They argue that the work at OpenAI’s for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees, and, ultimately, not even in control of the for-profit.

OpenAI says all of Musk’s contributions were used by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even beforehand, evidence shows the key players agreed that being able to compensate researchers with stock was key to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit’s work meaningfully advanced the foundation’s mission, including safety activities. They say the non-profit board continues to control the for-profit, and instituted new governance controls following “the blip,” when Altman was fired by OpenAI’s non-profit board in 2023 for lack of candor and then rehired just days later.

Aiding and abetting

Musk’s lawyers focused on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI’s tech, was personally involved in helping to bring Altman back and in creating a new board to govern OpenAI. They note that Microsoft executives wondered whether their commercial agreement might conflict with the non-profit’s goals, and suggest that Microsoft’s commercial priorities led OpenAI away from its mission. They’ve focused attention on a clause in Microsoft’s agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI.

Microsoft’s witnesses have insisted that the company’s executives didn’t know of any specific conditions on Musk’s donations despite extensive due diligence, and never vetoed any decision by OpenAI. They note that the company’s investments and compute power allowed OpenAI to achieve its biggest triumphs.

Statute of limitations

Musk has suggested that his skepticism of his cofounders grew over time, and that he finally decided they had betrayed him in the fall of 2022, when he found out about Microsoft’s plans for the new $10 billion investment that closed in 2023. He wouldn’t file his lawsuit until mid-2024.

OpenAI’s attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, but which Musk said he didn’t read in detail. They also point to numerous blog posts and other communications over the years showing that Musk could have known what OpenAI was doing well before he brought the company to court, including tweets in which Musk criticized it years before the suit. Zilis, Musk’s adviser, even voted to approve these transactions as a member of the OpenAI board.

Ultimately, the OpenAI attorneys emphasize that Musk’s formal role in the organization ended in 2018 and his last donations took place in 2020.

Unreasonable delay

OpenAI’s attorneys say the real reason that Musk filed his suit was he realized that he was wrong about OpenAI, after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.

Unclean hands

There is evidence that Musk was planning his own competing AI efforts while he was still the chair of OpenAI, and hired OpenAI employees to work on AI at Tesla. OpenAI’s attorneys argue that these efforts undermined OpenAI at a time when it was using Musk’s donations to pursue its mission. They noted that Zilis, the mother of three of Musk’s children, didn’t disclose her personal relationship to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, “Mr. Musk abandoned OpenAI for dead in 2018,” Bill Savitt, OpenAI’s lead attorney, told the jury.

5 Useful Google Maps Features You Must Try

Google Maps is now one of the most commonly used apps for daily travel. It provides directions, displays real-time traffic information, and helps you find destinations such as restaurants, petrol pumps, and hotels. Still, most users stick to the basic functions and overlook the rest. From organizing your saved places to improving location accuracy, the features below can make traveling and planning far more convenient.

1. Use Emojis to Organize Your Saved Places

By default, saved places in Google Maps all look alike, which makes it hard to pick out the one you need when the map is crowded with markers. Google Maps lets you personalize your saved lists with emojis, so each category gets a distinctive picture instead of a generic icon and is far quicker to spot.

  1. Open Google Maps.
  2. Tap on the “You” tab at the bottom.
  3. Open an existing list or create a new one.
  4. Tap Edit (for existing lists).
  5. Select the Choose icon.
  6. Pick an emoji that matches your category and tap Save.

2. Avoid Stairs While Navigating

A useful Google Maps feature is the ability to avoid stairs along the way. With the accessibility option enabled, the app automatically adjusts the route and finds an option that skips stairs.

  1. Open the Google Maps app.
  2. Enter your destination and tap Directions.
  3. Select Walking or Transit mode.
  4. Tap the filter/settings icon.
  5. Turn on Wheelchair accessible.
  6. Choose the recommended route.

3. Turn Screenshots Into Saved Locations

These days, people often discover new places through social media. But when you actually need those places, they’re hard to find in your gallery. To make this easier, Google Maps includes a feature that converts screenshots into saved locations. It uses AI to scan text in the image and match it with real places. This keeps all your saved spots in one place, making travel planning more organized and efficient.

  1. Open Google Maps on your Apple device.
  2. Tap on the You tab.
  3. Open the Screenshots list.
  4. Allow access to your photos when prompted.
  5. Tap Choose screenshots and select the screenshots you want to scan.
  6. Tap Add and wait for processing.
  7. Review and save the detected locations.

4. Set Reminders for When to Leave

Most people calculate their departure time manually. They consider their destination and traffic, then use other applications to remind them of the exact departure time. However, Google Maps can do all of this in a single application. It allows users to schedule trips and be reminded of their departure time.

  1. Open Google Maps.
  2. Search your destination.
  3. Tap Directions.
  4. Tap the three-dot icon.
  5. Select Set a reminder to leave.
  6. Choose Leave at or Arrive by.
  7. Enter your time and save your reminder.

5. Fix Incorrect Location on Google Maps

One problem that can arise when using maps is an inaccurate location reading, which makes it hard to find your way through crowded places. Google Maps can fix this with its camera-based calibration: the phone scans nearby buildings and landmarks and uses them to pin down your exact position.

  • Open the Google Maps app.
  • Tap your current location (blue dot).
  • Select Calibrate location.
  • Tap Start.
  • Use your camera to scan nearby landmarks.
  • Wait for the accuracy confirmation.

New 3D memory architecture revives old camera technology to smash through AI memory wall – NAND + DRAM hybrid promises to make memory cheaper, faster and with ‘unlimited endurance’

  • Researchers have created a NAND-DRAM hybrid, inspired by legacy camera tech
  • Indium Gallium Zinc Oxide also promises benefits over silicon
  • For now, this is just a prototype that needs further work

Belgian semiconductor research hub imec has unveiled what it claims is the first 3D implementation of charge-coupled device (CCD) memory architecture, reviving technology previously used in digital cameras and camcorders for a totally different purpose.

With the 3D CCD architecture, the researchers aim to break one of the biggest bottlenecks in AI computing today – the memory wall – where GPUs and accelerators spend more time waiting for data than processing it as a result of poor memory bandwidth and power efficiency.

Ontario auditors find doctors’ AI note takers routinely blow basic facts

60% of evaluated AI Scribe systems mixed up prescribed drugs in patient notes, auditors say

The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems.

The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report about the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector.

As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy.

What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations. 

Nine out of 20 AI systems reportedly "fabricated information and made suggestions to patients’ treatment plans" that weren’t discussed in the recordings. According to the report, evaluators spotted potentially devastating errors in the sample notes, such as statements that no masses were found or that patients were anxious, even though these things were never mentioned in the recordings.

Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report. 

OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems. 

Bad evaluations don’t help, either

AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation. A good portion of the report blames how the systems were evaluated.

According to the report, the weighting given to the various categories of AI Scribe performance was badly skewed. While 30 percent of a platform’s evaluation score depended solely on whether the vendor had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score.

Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed an additional 4 percentage points.
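A quick back-of-the-envelope tally of just the percentages cited in the report makes the imbalance concrete. This is an illustration of the arithmetic, not part of the audit itself; the remaining weight went to criteria the report doesn’t itemize here.

```python
# Evaluation weights cited in the auditor's report, in percentage points.
weights = {
    "domestic presence in Ontario": 30,
    "accuracy of medical notes": 4,
    "bias controls": 2,
    "threat, risk and privacy assessments": 2,
    "SOC 2 Type 2 compliance": 4,
}

quality_and_safety = [
    "accuracy of medical notes",
    "bias controls",
    "threat, risk and privacy assessments",
    "SOC 2 Type 2 compliance",
]

print(sum(weights[k] for k in quality_and_safety))  # 12 points, combined
print(weights["domestic presence in Ontario"])      # 30 points, on its own
```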

In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems.

“Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime.

The Register reached out to the Ontario Health Ministry for its take on the report, and whether it plans to act on the report’s recommendations for the AI Scribe program, but we didn’t immediately hear back. A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program and that there have been no known reports of patient harm associated with the technology. ®

The MicroFold Concept Turns Solo Urban Rides Into Space-Saving Magic

[Image: the MicroFold concept commuter EV. Photo credit: Amaan Mukadam]
Crowded streets across Europe pack in far more people than parking spots can handle. Freelance designer Amaan Mukadam from the UK looked at that daily scramble and built the MicroFold, a four-wheeled electric vehicle meant for exactly one rider at a time.


You spot one of these units waiting by a curb in a busy part of town and unlock the door with your phone through the app. As you enter, the roof swings up in a fluid motion, much like a gullwing door, forming the windshield and side windows all at once. The single seat gives you a clear view of everything around you. Simply select your destination and you’re off, without having to touch a steering wheel.

As you ride toward your destination, the rear section extends just enough to keep the vehicle steady on the road. Even so, the entire vehicle takes up only about a third of the space of a standard car. The body panels arrange themselves into a comfortable, peaceful cabin around you. It’s like traveling in a private shuttle, with automatic turns and stops.

Arrival is where things get really creative: you step out, pay via the app, and the MicroFold begins to transform. The rear wheels roll along small internal tracks, bringing the back end closer to the front. The rear panels glide in as the seat folds flat inside. Before you know it, the vehicle has shrunk to a footprint small enough to slip into a space that would never fit a typical car. Mukadam adapted the folding sequence from the crisp lines of origami, and it works brilliantly; the movement is controlled every time, so you know it’ll be safe. When it’s all packed away, the MicroFold simply rolls to a charging station and waits silently until the next customer needs a lift.

The concept of self-parking and charging is a game changer for densely populated cities. You won’t have to worry about a car sitting there for hours, taking up valuable space. A whole row of folded MicroFolds may fit in the area required for two or three standard automobiles. That makes this a gem in locations like Europe where streets are narrow and parking is difficult, because you can simply line them all up in the space that one large car would take up. Sure, in the States, where driving distances are longer, you may desire a larger automobile for longer excursions, but the MicroFold demonstrates how simple it can be to use personal electric transportation in congested urban areas.

We Now Know How Many People the CDC Is Monitoring for Hantavirus

The US Centers for Disease Control and Prevention is monitoring 41 people in the US for the Andes hantavirus after a cruise ship was hit with a rare outbreak, but the risk to the public remains low, according to health officials.

This includes a group of 18 passengers from the cruise ship who are now in quarantine facilities in Nebraska and Georgia. The agency is also monitoring passengers who returned home before the outbreak was identified and others who were exposed during travel, specifically on flights where a symptomatic case was present.

“Most people under monitoring are considered high-risk exposures, and CDC recommends that everyone under monitoring stay at home and avoid being around people during their 42-day monitoring period,” David Fitter, incident manager for the CDC’s hantavirus response, told reporters during a media briefing on Thursday. “We emphasize not to travel across all these groups.”

The Andes virus is a strain of hantavirus found in South America that can be transmitted from person to person. Typically, hantavirus is passed to humans when they come into contact with rodent droppings or urine. The disease attacks the respiratory system, can cause severe difficulty breathing, and carries a fatality rate of around 35 percent. As of Thursday, the World Health Organization has confirmed 11 cases of the Andes virus among passengers of the MV Hondius cruise ship, including three deaths.

A Department of Health and Human Services official confirmed to WIRED that all Americans who were on board the Hondius at any point during its journey are now back in the US.

The CDC has legal authority to issue federal quarantine and isolation orders to prevent the spread of certain communicable diseases into or within the US. Fitter said on Thursday that the CDC is not using that authority to manage all 41 of the individuals who were potentially exposed to the hantavirus.

“Our approach is based on risk and evidence,” he said. “We are working closely with passengers and public health partners to ensure monitoring and rapid access to care if symptoms develop. Our goal is to work with them and alongside them, building plans based on their specific situations to protect the health and safety of passengers and American communities.”

Individuals will be monitored for 42 days, which is the amount of time it can take for hantavirus symptoms to appear after exposure. Symptoms begin as flu-like, with fever, muscle aches, and fatigue, then rapidly progress to severe respiratory distress.

Utah just approved a data center twice the size of Manhattan that will consume more electricity than the entire state

  • Utah will be home to a new 40,000-acre datacenter
  • The datacenter will consume more power than the entire state
  • Power will come from natural gas-burning turbine generators

The Box Elder County commission in Utah has approved an enormous new data center that, upon completion, will be twice the size of Manhattan and consume more electricity than the entire state currently does.

The Stratos artificial intelligence datacenter will occupy more than 40,000 acres (62 sq miles) in north-western Utah and consume 9GW of power.

OpenAI’s KOSA Endorsement Is Regulatory Capture With A Smiley Face

from the not-the-flex-dc-thinks-it-is dept

Earlier this week, OpenAI became the latest tech company to publicly endorse KOSA, the Kids Online Safety Act. The company, conveniently, tries to frame this as being about its support of child safety. It’s not. It’s about political horse trading, desperation for good publicity, and building a regulatory moat. Here is how OpenAI’s announcement frames it:

KOSA would help create stronger online protections for young social media users through safer default settings, expanded parental controls, and greater accountability for online harms.

The path forward on kids safety, however, also requires AI-specific rules. And we believe KOSA is complementary to the work we’re doing at the federal and state level. Young people should be able to benefit from AI in ways that are safe, age-appropriate, and grounded in real-world support, including referrals to crisis resources and parental notifications in serious safety situations. That means building safeguards from the start, giving families better tools, and taking responsibility for reducing risks before they become harms.

The broader point is an important one: AI companies still have the opportunity to build protections early, before these technologies become fully embedded in everyday life. As OpenAI Chief Global Affairs Officer Chris Lehane has put it, “We can’t repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren’t put in place until the platforms were already deeply embedded in young people’s lives.”

All of this is, of course, nonsense. As we’ve explained repeatedly, the underlying mechanisms of KOSA are deeply problematic and will do real damage. It will, inherently, make the internet worse for everyone. At its heart, KOSA is a surveillance and censorship bill, and it’s the last thing that we need for the internet today.

While it’s positioned as being about something no one can be against (“kid safety!”), that is all too often the facade with which terrible rights-killing laws are passed. And KOSA is no exception.

But a bunch of tech companies have endorsed it anyway. Why? Because they know it makes life way more difficult for smaller upstart competitors. The additional compliance costs KOSA imposes will be ruinous for smaller, less well-resourced companies. For big companies with big bank accounts, however, it gives them a leg up.

OpenAI, perhaps more than most others in the space, needs that kind of government-backed protection against growing competition.

Almost exactly three years ago, I wrote a piece about Sam Altman going to Congress and asking for the federal government to regulate the AI space, calling it Sam Altman Wants The Government To Build Him A Moat. As I pointed out at the time, AI researchers were coming to the conclusion that no frontier AI model could hold a real competitive advantage for any extended period of time. That situation has only gotten worse since then. The constant jockeying between the leading AI models has meant that they’re all effectively comparable, and more and more builders are realizing that, since you can separate the context, the compute, and the agentic tools from the underlying LLM, the model itself is quickly turning into a commodity where any one will do (a situation that becomes even more tenuous as open-weight/local models get better and better).

While OpenAI has a huge number of users (one of the fastest growing tech companies in history), it’s unclear if those users are particularly loyal. Indeed, there are a few indications that when OpenAI does something stupid, a large segment of users will quickly leave.

Given that, all of the large AI companies keep looking for ways to create some sort of lock-in for users. Most of them haven’t gone down the fully siloed path, knowing that at this stage it would probably drive away their most valuable users. For the most part, the focus among the likes of OpenAI, Anthropic, Google and others is on building in more features so that it’s more convenient to stay than to swap out the underlying LLM. That, plus the continued leapfrogging, is combined with various experiments over how much usage they’re willing to subsidize through their subscription plans.

But having the government wipe out competitors, or create “mandatory” tools that create lock-in, might be another path towards such a result. And that’s exactly what KOSA would lead to. It certainly wouldn’t protect kids. Indeed, all evidence suggests it would put plenty of marginalized kids at much greater risk.

However, it would create something of a regulatory moat for those larger companies.

On top of that, is there any company more desperate for a headline talking about how it’s “helping” protect children than OpenAI? The company has been accused of being “responsible” for suicide and other harmful behavior. And, even if those claims and lawsuits are misleading (they are!), culturally that message has been sticking. I’ve heard multiple people refer to ChatGPT as a suicide machine.

So, if you need a good headline to claim that you’re “protecting children” and doing so in a way where the law will have little direct impact on your business, but will damage some of your competitors in the space (not to mention the wider open internet), why not? It’s hard not to be cynical about OpenAI’s reasoning here.

Separately, it’s likely that the AI companies see this as a bit of political horse trading. While KOSA would have some impact on AI tools, it’s much more directed at social media platforms than AI. And it’s likely that the bet being made by OpenAI here is “hey, we’ll back KOSA for you, and you get rid of the AI-specific bills.” OpenAI’s Chris Lehane, who announced the endorsement and is featured in every press release about it, is infamous as a political trickster. He’s a political operator, not a tech or policy expert. You roll him out to cut a deal, not to advance a principled position on child safety. And that’s exactly what’s happening here.

You can see the KOSA authors gleefully using the OpenAI endorsement to falsely claim that only Mark Zuckerberg now opposes the law.

Yeah, that’s Senator Richard Blumenthal choosing to spend time on X, a site run by a guy who has made it clear he thinks Blumenthal’s political party is evil and needs to be wiped out, using that platform to lie and claim that the only people opposed to KOSA are “Mark Zuckerberg & his lobbyists.” That ignores the long list of civil society and public interest groups who have made it clear just how dangerous the law would be.

Marsha Blackburn (who has been vocal about how she wants KOSA to silence LGBTQ voices) put out a silly press release about this endorsement, saying:

“Lip service won’t save lives – Congress must take action to establish guardrails in the virtual space. I look forward to chairing a hearing on why the verdicts in California and New Mexico should spur Congress to hold Big Tech accountable for exploiting children to turn a profit.”

What? As bad as the rulings in California and New Mexico are, they seem to suggest that the courts already think they have the authority to order companies to do the impossible and magically stop anything bad from ever happening to kids who also (incidentally) use the internet.

All of this is for show. No one is being honest. Blackburn wants to censor LGBTQ speech she considers “dangerous to kids” because it terrifies her. Blumenthal wants to end encryption and the ability of tech companies to keep information, because he’s always been a cop and wants the ability to spy on your kids. And OpenAI wants Congress to direct their bad policies at social media companies rather than AI companies.

And all of us internet users are simply collateral damage for the mad power dreams of those in charge.

Filed Under: censorship, child safety, chris lehane, kosa, marsha blackburn, regulatory capture, richard blumenthal, surveillance

Companies: openai

Netflix has its own AI studio now, and AI-generated content is coming for your feed whether you like it or not

Netflix has spent years using AI to make sure you never leave the couch. Making AI-based content is the next step, I guess.

The streaming giant is staffing up a new internal studio called INKubator to produce animated short films and specials using generative AI (via The Verge).

The project never got an official announcement from Netflix. Instead, it surfaced through a series of recently published job listings seeking producers and CGI artists. These listings paint a pretty clear picture of where the company is headed.

What exactly is INKubator, and who is running it?

Based on LinkedIn profiles, INKubator quietly launched in March 2026 and is led by Serrena Iyer, who previously held strategy and operations roles at DreamWorks Animation, MRC Studios, and A24 Films. That is not a lineup you put together for a throwaway experiment.

The job listings describe the studio as a next-generation, creativity-first operation built entirely around generative AI. The studio’s long-term technology strategy covers generative AI workflows, artist tooling, and scalable multi-show environments.

Interestingly, INKubator is not Netflix’s first venture into AI-focused production. Earlier this year, the streaming giant acquired InterPositive, an AI startup founded by actor Ben Affleck that is centred on AI usage in post-production.

Could AI-generated shows end up in your Netflix feed?

For now, INKubator seems to be focused strictly on shorts and experimental animated specials, rather than full-length features. That said, the job listings hint at longer-form ambitions down the line.

Netflix recently added a TikTok-style vertical video feed called Clips in its mobile app, which is currently used for trailers and promotional content. AI-generated shorts could slot naturally into that space in the future.

Netflix has also been making a push into kids’ programming, positioning itself as a family-friendly YouTube alternative. It also launched a standalone app for kids called Netflix Playground. Generative AI could surely help it scale that kind of content much faster.

Whether you are ready for AI-made Netflix shows or not, INKubator suggests the streamer has already made up its mind.
