The TWS earbuds category, at least in India, is a tricky business. There are too many players, each with its own appeal. Noise is one such homegrown brand, one that has always appealed to budget-conscious buyers. I was an early adopter back when they first started making smart bands. Their earphones have been solid overall but have struggled to stand out against more established players like OPPO and OnePlus. On the flip side, Bose is the most recognized name in premium headphones, having essentially invented noise-cancelling headphones back in the late '80s.
So, what if you combine the value proposition of Noise with the premium feel and sound of Bose? You get the Noise Master Buds. The first generation of these collaboration earbuds was a hit, with reviewers praising the sound quality and the ANC. Fast forward to today, and the second-generation Master Buds are out, bringing some meaningful updates in the sound department. I asked Noise for a review unit, and it's been a month since that call; I even took them with me to Thailand. Spoiler alert: these earbuds are a hit. Here's why.
Noise Master Buds 2
Hisan Kidwai
Summary
With the Master Buds 2, the design alone is a conversation starter. The Bose-tuned sound signature focuses on balance rather than overwhelming bass, making everything from vocals to instruments sound clean and detailed. ANC performance is strong enough for everyday commutes and flights, the gesture controls are surprisingly useful, and the companion app is genuinely well-designed rather than feeling like an afterthought.
Design & Comfort
When I first got the Master Buds 2, I was genuinely in awe of the design. I haven't seen a case this pretty before, yet still functional. The concentric-circle pattern on the front is a really nice touch that adds character to the stale world of TWS earbuds. The semicircle shape is also unique, something I have yet to see from other makers. One more benefit of this shape is that you can stand the case vertically, almost resembling the clocks of the past, or maybe I'm just too old at this point.
Nevertheless, friends and family constantly asked me what earbuds I was using, and all of them adored the design, so you know it's not just me yapping. I'd recommend sticking to the silver color, as it's the best of the lot and doesn't pick up random scratches, though the black also looks pretty decent. A diagonal white LED strip indicates when the earbuds are on, and there's a dedicated pairing button, so no hand gymnastics needed. Still, the best part of the design for me is the opening/closing mechanism. There's a good amount of weight behind it, which produces that heavy thud I love.
I do, however, have a problem with the size. I know these earbuds pack a lot of hardware, but the case size is just massive. I’ve been daily driving the Enco Buds 3 Pro+ for months, and compared to them, the Master Buds feel huge.
Moving to the earbuds themselves, comfort is highly subjective, as everyone's ears are different. That said, my first day with the Master Buds wasn't the best. My ears are small, and the medium tips that come pre-installed were just too big for my liking. Thankfully, once I switched to the small eartips, the experience was much better. You'll still feel them sticking out, but the rubber fins keep them from falling out during my everyday gym struggles with weights. It's been a couple of hours since I started writing this review, and the Master Buds 2 are still sitting comfortably in my ears.
Sound Quality & Battery Life
Often, budget earbuds lean heavily towards bass and compromise everything else. That's not the case with the Master Buds 2. The 10 mm drivers tuned by Bose prioritize balance over everything else, and I love that. The mids, where most vocals live, are crisp, letting you hear the small nuances in a singer's voice. The highs are decent without the sharpness that pinches through, and the separation between elements is excellent. Everything is placed precisely, which is where many earbuds fail. Bass isn't the selling point of the Master Buds, but the lows are still there, working hard in the background. Just don't expect a rumble.
If you're not happy with the signature Bose tuning, which would be surprising, Noise bundles a few listening modes like Jazz, Club, and Rock, each with its own style. For audiophiles, there's also a custom equalizer. Beyond the basics, Noise has added spatial audio with head tracking. I tried some songs on Apple Music, and it works great, with instruments placed around you that shift as you move your head. Call quality was excellent, with the other person hearing me loud and clear. Speaking of loud, there's a new feature called Sidetone that lets you hear yourself on calls so you can tell if you're talking too loudly. Honestly, I can think of a lot of people who need that feature.
The Master Buds 2 use their six microphones to deliver up to 51dB of active noise cancellation, and I put that number to the test. With ANC at Max, the buds do a stellar job of silencing the everyday hum of an AC or fan. With music at half volume, nearby voices were muffled enough that I couldn't make out someone talking next to me. For commuters, the Master Buds 2 will be enough to block out the chatter of a metro, but sudden loud noises, like a horn, will still find their way through. Last but not least, battery life is very decent: I got about 4-5 hours on a single charge, the case can recharge the buds at least three times, and once you do run out, fast charging comes to the rescue.
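To put that 51dB figure in perspective, decibels are logarithmic: every 10dB of attenuation cuts sound power by a factor of ten. A quick illustrative sketch (the conversion formula is standard; the comparison figure of 30dB is just a hypothetical baseline, not a measured spec):

```python
# Attenuation in dB maps to a sound-power ratio of 10^(dB/10).
# So "51dB of ANC" means the cancelled noise carries roughly
# 1/126,000th of its original power at the affected frequencies.
def power_ratio(db: float) -> float:
    """Convert an attenuation figure in dB to a sound-power ratio."""
    return 10 ** (db / 10)

print(f"{power_ratio(51):,.0f}x")  # ~125,893x reduction in sound power
print(f"{power_ratio(30):,.0f}x")  # a hypothetical 30dB baseline: ~1,000x
```

Real-world ANC figures are frequency-dependent, so peak attenuation like this applies mainly to steady low-frequency noise such as engine hum, which matches what I heard in practice.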
Companion App
It's no secret that earbud controls are easy to mess up. Fortunately, that's not the case with the Noise Master Buds; in fact, there's a lot going on. Everything is managed via the Noise Audio app, which is genuinely well designed: sophisticated, with everything laid out neatly. And there are plenty of ways to control your music.
The obvious choice is the touch controls on the earbuds themselves. You can configure a single tap, a double tap, a triple tap, a quadruple tap, and even a tap-and-hold. But that's not all. Noise has also incorporated motion controls: nod your head twice to play/pause the music, or shake your head twice to the right to skip to the next track (and to the left for the previous one). You can also nod to accept calls. It's a niche feature that I expected to be a gimmick at best, but surprisingly, it works quite well, apart from the occasional miss.
No review of a 2026 product would be complete without mentioning AI, and the same is true here. Noise has bundled something called Noise AI, which, in theory, is meant to answer your questions. I gave it a go with one of the recommended prompts: suggest a good cafe near me. The answer was that it doesn't have access to live restaurant listings and that I should search Google instead. If I have to search Google anyway, what's even the point?
Verdict
At ₹8,999, the Noise Master Buds 2 sit in a very competitive segment, but they do enough differently to stand out. The design alone is a conversation starter, and unlike many flashy earbuds, these actually back it up with substance. The Bose-tuned sound signature focuses on balance rather than overwhelming bass, making everything from vocals to instruments sound clean and detailed. ANC performance is strong enough for everyday commutes and flights, the gesture controls are surprisingly useful, and the companion app is genuinely well-designed rather than feeling like an afterthought.
Of course, they aren’t perfect. The case is noticeably larger than most competitors, and bass lovers might find the tuning a little too restrained. Noise AI also feels half-baked right now and adds very little to the experience. Still, if you’re shopping in this segment, the Master Buds 2 are a must-consider.
Nine California jurors are now deliberating over the future of OpenAI, the world-leading artificial intelligence lab.
While the trial exploring Elon Musk’s case against OpenAI’s other cofounders and Microsoft has covered territory ranging from the breakup of the founders in 2018 to Altman’s firing and rehiring in 2023, the jurors will be considering a set of fairly narrow questions.
Breach of charitable trust — essentially, did OpenAI and cofounders Sam Altman and Greg Brockman violate a specific agreement with Musk to use his donations to OpenAI for a specific charitable purpose, rather than for general use by the non-profit?
Unjust enrichment — did the defendants use Musk’s donations to enrich themselves through OpenAI’s for-profit arm, instead of for charitable purposes?
Aiding and abetting breach of charitable trust — did Microsoft, through its interactions with OpenAI, know that Musk had placed specific conditions on his donations, and did it play a significant role in causing harm to Musk?
OpenAI has also made three arguments in its defense that the jury will weigh:
Statute of limitations — a legal deadline by which a lawsuit must be filed. Here, if OpenAI can prove that any harms to Musk happened before August 5, 2021 for the first count; August 5, 2022 for the second count; and November 14, 2021 for the third count, then those claims will be time-barred.
Unreasonable delay — Musk, by filing his lawsuit in 2024, delayed his claim in a way that made his request for damages unreasonable.
Unclean hands — a legal doctrine holding that Musk’s conduct related to his claims against OpenAI was unconscionable and renders them invalid.
If Musk wins out, it could mean the end of OpenAI as a for-profit company, but it’s not entirely clear what will result. Next week, the judge will begin a set of new hearings where lawyers from both sides will debate what the consequences of a verdict in favor of the plaintiffs might be. That process could be rendered moot by a negative verdict, however.
Breach of charitable trust
Musk's attorneys say the defendants clearly understood that Musk wanted to support a non-profit that would ensure the benefits of AI reached the world and prevent the technology from being controlled by any one organization. In particular, they say a $10 billion investment from Microsoft into OpenAI's for-profit affiliate in 2023—the first to fall inside the statute-of-limitations window—was the event that turned Musk's concern into conviction.
That deal, Musk’s lawyers say, was different from previous investments and led to OpenAI’s investors being enriched by the company’s commercial products, at the expense of the charitable mission of AI safety that Musk promoted.
OpenAI’s attorneys have asked every witness to describe specific restrictions put on Musk’s donations, and none have, including his financial adviser Jared Birchall, his chief of staff Sam Teller, or his special adviser Shivon Zilis. They say everyone involved agreed that private fundraising would be required to achieve its goals, and note that Musk himself attempted to launch an OpenAI-affiliated for-profit he would personally control, and later to merge OpenAI into his company Tesla. They also note the organization’s other donors haven’t said their charitable trust was violated.
Importantly, a forensic accountant hired by OpenAI testified that all of Musk's donations had been used by OpenAI well before the key date of August 5, 2021. They argue this shows Musk's donations were already spent for their stated purpose well before he brought his lawsuit, extinguishing any charitable trust that may have existed.
Mainly, they insist that the for-profit affiliate that conducts most of OpenAI’s actual activity continues to fulfill the organization’s mission, and has generated nearly $200 billion in equity value to support the non-profit foundation. Notably, Sam Altman argued that providing ChatGPT for free helps fulfill the mission of sharing the benefits of AI with the world.
Unjust enrichment
The plaintiffs point to the multibillion-dollar valuations of stakes held by OpenAI founders like Brockman and Ilya Sutskever, as well as Microsoft itself, as a sign that Musk’s donations were ultimately used for personal benefit, as opposed to supporting the mission of the charity. They argue that the work at OpenAI’s for-profit was commercially focused, while the foundation itself was left essentially dormant, without full-time employees, and, ultimately, not even in control of the for-profit.
OpenAI says all of Musk’s contributions were used by the foundation by 2020, and that equity distributions came well after he left the organization in 2018. Even beforehand, evidence shows the key players agreed that being able to compensate researchers with stock was key to developing AGI, the hypothetical form of AI capable of performing any intellectual task a human can. OpenAI executives maintain that the for-profit’s work meaningfully advanced the foundation’s mission, including safety activities. They say the non-profit board continues to control the for-profit, and instituted new governance controls following “the blip,” when Altman was fired by OpenAI’s non-profit board in 2023 for lack of candor and then rehired just days later.
Aiding and abetting
Musk’s case focused on the events of the blip, when Microsoft CEO Satya Nadella, whose company depended on OpenAI’s tech, was personally involved with helping to bring Altman back and creating a new board to govern OpenAI. They note that Microsoft executives wondered if their commercial agreement might conflict with the non-profit’s goals, and suggest that Microsoft’s commercial priorities led OpenAI away from its mission. They’ve focused attention on a clause in Microsoft’s agreement with OpenAI that gave Microsoft veto rights over major corporate decisions at OpenAI.
Microsoft’s witnesses have insisted that the company’s executives didn’t know of any specific conditions on Musk’s donations despite extensive due diligence, and never vetoed any decision by OpenAI. They note that the company’s investments and compute power allowed OpenAI to achieve its biggest triumphs.
Statute of Limitations
Musk has suggested that his skepticism of his cofounders grew over time, until the fall of 2022, when he learned of Microsoft's plans for a new $10 billion investment (completed in 2023) and finally decided they had betrayed him. He wouldn't file his lawsuit until mid-2024.
OpenAI's attorneys argue that the terms of that deal were spelled out in a term sheet for a previous fundraising round in 2018, which Musk received and his advisers reviewed, though Musk said he didn't read it in detail. They also note numerous blog posts and other communications over the years showing Musk could have known what OpenAI was doing well before he brought the company to court, including tweets in which Musk criticized it years before the suit. Zilis, Musk's adviser, even voted to approve these transactions as a member of the OpenAI board.
Ultimately, the OpenAI attorneys emphasize that Musk’s formal role in the organization ended in 2018 and his last donations took place in 2020.
Unreasonable delay
OpenAI’s attorneys say the real reason that Musk filed his suit was he realized that he was wrong about OpenAI, after its launch of ChatGPT revolutionized the business of artificial intelligence. They argue that OpenAI has operated under its current structure since its first Microsoft investment in 2018, and that forcing the organization to restructure eight years later is unreasonable.
Unclean hands
There is evidence that Musk was planning his own competing AI efforts while still chair of OpenAI, and that he hired OpenAI employees to work on AI at Tesla. OpenAI's attorneys argue that these efforts undermined OpenAI at a time when it was using Musk's donations to pursue its mission. They note that Zilis, the mother of three of Musk's children, didn't disclose her personal relationship to other OpenAI board members for years. And they argue that Musk withheld his donations in 2017 in an effort to win control of a planned for-profit affiliate of OpenAI. Finally, "Mr. Musk abandoned OpenAI for dead in 2018," Bill Savitt, OpenAI's lead attorney, told the jury.
Google Maps is now one of the most commonly used apps for daily travel. It provides directions, displays real-time traffic, and makes it easy to search for destinations such as restaurants, petrol pumps, and hotels. Yet most users stick to the primary functions and overlook the rest. From organizing your saved places to improving navigation accuracy, these lesser-known tools can make traveling and planning far more convenient.
1. Use Emojis to Organize Your Saved Places
The first problem with saved places in Google Maps is that they all look the same on the map. When there are too many identical markers, you can't quickly find the one you need. Google Maps lets you personalize your lists with emojis, so you see recognizable pictures instead of generic icons, which makes scanning the map much faster.
Pick an emoji that matches your category and tap Save.
2. Avoid Stairs While Navigating
A useful feature of Google Maps is the ability to avoid stairs along the way. Enable the accessibility option, and the app will automatically adjust the route and find a better option without any additional steps.
3. Turn Screenshots Into Saved Places
These days, people often discover new places through social media. But when you actually need those places, they're hard to find in your gallery. To make this easier, Google Maps includes a feature that converts screenshots into saved locations. It uses AI to scan text in the image and match it with real places. This keeps all your saved spots in one place, making travel planning more organized and efficient.
Tap Choose screenshots and select the screenshots you want to scan.
Tap Add and wait for processing.
Review and save the detected locations.
4. Set Reminders for When to Leave
Most people calculate their departure time manually. They consider their destination and traffic, then use other applications to remind them of the exact departure time. However, Google Maps can do all of this in a single application. It allows users to schedule trips and be reminded of their departure time.
5. Calibrate Your Location With the Camera
One problem that can arise when using maps is inaccurate location tagging, which makes it hard to find your way through crowded places. Google Maps can solve this with camera-based calibration: point the camera at the surrounding buildings, and the app uses what it sees to determine your position far more accurately.
New 3D memory architecture revives old camera technology to smash through AI memory wall – NAND + DRAM hybrid promises to make memory cheaper, faster and with ‘unlimited endurance’
Researchers have created a NAND-DRAM hybrid, inspired by legacy camera tech
Indium Gallium Zinc Oxide also promises benefits over silicon
For now, this is just a prototype that needs further work
Belgian semiconductor research hub imec has unveiled what it claims to be the first 3D implementation of charge-coupled device (CCD) memory architecture, which revives technology we’ve already seen used before in digital cameras and camcorders, but for a totally different purpose.
With 3D CCD architecture, the researchers were able to break one of the biggest bottlenecks in AI computing today – the memory wall – where GPUs and accelerators spend more time waiting for data than processing it as a result of poor memory bandwidth and power efficiency.
The new design combines the speed and rewritability of DRAM with the density and efficiency of NAND to form a type of hybrid.
Old camera tech could actually lead to future generations of memory
CCD technology is nothing new – charge-coupled devices have long been used in digital cameras, broadcast video equipment, scientific imaging and even astronomy sensors, but CCDs have since been replaced with CMOS image sensors.
Traditionally, CCDs work by physically moving electrical charges between semiconductor gates, and imec's research applies the same principle to enable highly efficient movement of data within memory.
Instead of arranging memory cells side-by-side on a flat plane like conventional DRAM, the design stacks them vertically, much like 3D NAND. That matters because DRAM is running into limits: charge leakage, rising manufacturing costs, and slowing density improvements.
The chips also replace silicon with IGZO (Indium Gallium Zinc Oxide), which promises lower leakage, longer data retention, easier low-temperature processing and strong compatibility with dense 3D stacking.
With this hybrid architecture, imec has already demonstrated successful charge transfer at speeds above 4MHz, but this is still very early-stage technology, and the prototype uses only a small number of stacked layers. In theory, it should scale much like NAND, where commercial chips now surpass 200 layers.
CCD architecture also promises reduced wear mechanisms and endurance that could even exceed NAND's, making it well suited to intensive applications such as AI training clusters and inference servers.
"Unlike byte-addressable DRAM, our 3D CCD device is designed to provide block-level data access, which is better suited to modern AI workloads," added Maarten Rosmeulen, imec's Program Director for Storage Memory.
“The potential of this CCD device to be used as a buffer memory lies in its ability to be integrated in a 3D NAND Flash string architecture – the most cost-effective way to achieve a scalable, high bit density estimated to go far beyond the DRAM limit.”
The research also details future plans for the promising architecture, positioning it as a CXL Type-3 device, or one that complies with industry standards to connect GPUs, CPUs and accelerators. This is an important consideration, with hyperscalers now turning to CXL as AI models become too big for local GPUs alone.
As a prototype and research project, there are still plenty of hurdles to overcome, including thermal behavior, layer-count scaling and, of course, real-world integration. If it succeeds, however, the new hybrid architecture could seriously help reduce one of the biggest costs in AI infrastructure: DRAM.
Looking ahead, imec proposes that the next phase may involve a totally new class of memory architecture rather than simply evolving existing designs further.
60% of evaluated AI Scribe systems mixed up prescribed drugs in patient notes, auditors say
The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems.
The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report on the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector.
As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy.
What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations.
Nine of the 20 AI systems reportedly "fabricated information and made suggestions to patients' treatment plans" that weren't discussed in the recordings. According to the report, evaluators spotted potentially devastating errors in the sample notes, such as statements that no masses were found or that patients were anxious, even though neither was ever discussed in the recordings.
Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report.
OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems.
Bad evaluations don’t help, either
AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation. A good portion of the report blames how the systems were evaluated.
According to the report, the weight given to various categories of AI Scribe performance was wonky. While 30 percent of a platform's evaluation score depended solely on whether the vendor had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score.
Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed another 4 percent.
In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems.
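To see concretely why those weightings matter, here is a small illustrative sketch. The per-criterion weights are the ones cited in the report; the two vendor profiles and the single "everything else" bucket covering the remaining 58 percent are hypothetical:

```python
# Weights cited in the report; "other_criteria" lumps the remaining
# 58% of the score into one hypothetical bucket for simplicity.
weights = {
    "ontario_presence":    0.30,
    "note_accuracy":       0.04,
    "bias_controls":       0.02,
    "threat_risk_privacy": 0.02,
    "soc2_type2":          0.04,
    "other_criteria":      0.58,
}

def composite(scores: dict) -> float:
    """Weighted sum of per-criterion scores (each in 0..1)."""
    return sum(weights[k] * scores.get(k, 0.0) for k in weights)

# Hypothetical vendor A: local presence, poor note accuracy.
vendor_a = {"ontario_presence": 1.0, "note_accuracy": 0.2,
            "bias_controls": 0.5, "threat_risk_privacy": 0.5,
            "soc2_type2": 1.0, "other_criteria": 0.7}
# Hypothetical vendor B: no local presence, near-perfect accuracy.
vendor_b = {"ontario_presence": 0.0, "note_accuracy": 1.0,
            "bias_controls": 1.0, "threat_risk_privacy": 1.0,
            "soc2_type2": 1.0, "other_criteria": 0.7}

print(round(composite(vendor_a), 3))  # 0.774
print(round(composite(vendor_b), 3))  # 0.526
```

Under this scheme, the less accurate vendor wins comfortably: its 30-point presence bonus dwarfs the 4 points available for accuracy, which is exactly the failure mode the auditors flag.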
“Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime.
The Register reached out to the Ontario Health Ministry for its take on the report, and whether it was going to conform to its recommendations for the AI Scribe program, but we didn’t immediately hear back. A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program and there have been no known reports of patient harms associated with the technology. ®
Photo credit: Amaan Mukadam
Crowded streets across Europe pack in far more people than parking spots can handle. Freelance designer Amaan Mukadam from the UK looked at that daily scramble and built the MicroFold, a four-wheeled electric vehicle meant for exactly one rider at a time.
Riders spot one of these units waiting by a curb in a busy part of town and unlock the door with their phone via the app. As you enter, the roof swings up in one fluid motion, much like a gullwing door, forming the windshield and side windows all at once. The single seat gives you a clear view of everything around you, so getting in is a no-brainer. Simply select your destination and you're off, without ever touching a steering wheel.
As you drive toward your destination, the rear section extends just enough to keep the vehicle steady on the road. Amazingly, even extended, the entire vehicle takes up only about a third of the space of a standard car. The body panels arrange themselves into a comfortable, quiet cabin. It's like traveling in a private shuttle, with automatic turns and stops.
Arrival is where things get really creative: you get out, pay via the app, and the MicroFold begins to transform. The rear wheels roll along small internal tracks, bringing the back end closer to the front. The back panels glide in as the seat folds flat inside. Before you know it, the vehicle has shrunk to a size small enough to slip into a tight space that would never fit a typical car. Mukadam adapted the folding sequence from the crisp lines of origami, and it works brilliantly: a controlled movement every time, so you know it'll be safe. When it's all packed away, it rolls itself to a charging station and waits silently for the next customer.
The concept of self-parking and self-charging is a game changer for densely populated cities. You won't have to worry about a car sitting idle for hours, taking up valuable space. A whole row of folded MicroFolds could fit in the area required for two or three standard cars. That makes it a gem in places like Europe, where streets are narrow and parking is scarce, because several can line up in the space a single large car would occupy. Sure, in the States, where driving distances are longer, you might want a bigger vehicle for longer trips, but the MicroFold shows how practical personal electric transport can be in congested urban areas. [Source]
The US Centers for Disease Control and Prevention is monitoring 41 people in the US for the Andes hantavirus after a cruise ship was hit with a rare outbreak, but the risk to the public remains low, according to health officials.
This includes a group of 18 passengers from the cruise ship who are now in quarantine facilities in Nebraska and Georgia. The agency is also monitoring passengers who returned home before the outbreak was identified and others who were exposed during travel, specifically on flights where a symptomatic case was present.
“Most people under monitoring are considered high-risk exposures, and CDC recommends that everyone under monitoring stay at home and avoid being around people during their 42-day monitoring period,” David Fitter, incident manager for the CDC’s hantavirus response, told reporters during a media briefing on Thursday. “We emphasize not to travel across all these groups.”
The Andes virus is a strain of hantavirus found in South America that can be transmitted from person to person. Typically, hantavirus passes to humans through contact with rodent droppings or urine. The disease attacks the respiratory system, can cause severe difficulty breathing, and carries a fatality rate of around 35 percent. As of Thursday, the World Health Organization has confirmed 11 cases of the Andes virus among passengers of the MV Hondius cruise ship, including three deaths.
A Department of Health and Human Services official confirmed to WIRED that all Americans who were on board the Hondius at any point during its journey are now back in the US.
The CDC has legal authority to issue federal quarantine and isolation orders to prevent the spread of certain communicable diseases into or within the US. Fitter said on Thursday that the CDC is not using that authority to manage all 41 of the individuals who were potentially exposed to the hantavirus.
“Our approach is based on risk and evidence,” he said. “We are working closely with passengers and public health partners to ensure monitoring and rapid access to care if symptoms develop. Our goal is to work with them and alongside them, building plans based on their specific situations to protect the health and safety of passengers and American communities.”
Individuals will be monitored for 42 days, which is the amount of time it can take for hantavirus symptoms to appear after exposure. Symptoms begin as flu-like, with fever, muscle aches, and fatigue, then rapidly progress to severe respiratory distress.
Utah will be the home of a new 40,000-acre datacenter
The datacenter will consume more power than the entire state
The power will come from natural gas-burning turbine generators
The Box Elder County commission in Utah has approved an enormous new datacenter that, upon completion, will be twice the size of Manhattan and consume more electricity than the entire state currently does.
The Stratos artificial intelligence datacenter will occupy more than 40,000 acres (62 sq miles) in north-western Utah and consume 9GW of power.
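To put 9GW in perspective, here is a quick back-of-the-envelope check on the "more than the entire state" claim. It is an illustrative sketch, not a figure from the approval documents: it assumes the 9GW is continuous draw, and it assumes Utah's current annual retail electricity consumption is roughly 33 TWh.

```python
# Sanity check: a 9 GW continuous load versus assumed statewide consumption.
# Both the continuous-draw assumption and the 33 TWh figure are illustrative,
# not taken from the county approval or the developer's filings.
HOURS_PER_YEAR = 8_766  # average year length, including leap years

datacenter_gw = 9
datacenter_twh_per_year = datacenter_gw * HOURS_PER_YEAR / 1_000  # GWh -> TWh

utah_twh_per_year = 33  # assumed ballpark for Utah's annual consumption

ratio = datacenter_twh_per_year / utah_twh_per_year
print(f"Datacenter: ~{datacenter_twh_per_year:.0f} TWh/yr, "
      f"roughly {ratio:.1f}x the assumed statewide total")
```

Even with generous uncertainty in the assumed statewide figure, a 9GW continuous load works out to several times what the state uses today, which is why the claim is plausible on its face.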
Nearly 4,000 local residents and environmentalists have objected strongly to the proposed datacenter, pointing out that it will draw water from, and raise temperatures in, an already drought-hit region.
Datacenter raises ecological concerns
Kevin O’Leary, the venture capitalist and Shark Tank star, is backing the project, and has made several statements in an attempt to quell concerns about the development.
Speaking to Fox News, O’Leary said, “I don’t think there’s a bigger site in the world than this. It shows the Chinese and the rest of the world we are not messing around, we are going to get this done, move it forward and provide the compute power to our AI companies that defend the country.”
The Chinese likely aren’t the greatest concern for those opposing the project. Many are concerned about the Great Salt Lake ecosystem, which is already under threat from a recurrent drought and agricultural water diversions. The datacenter would likely divert more water from the lake, unless the developers are planning to source cooling water from outside the county.
“We’re not gonna drain the Great Salt Lake. That’s ridiculous. We are gonna create incremental jobs,” O’Leary said in a post on X. Evidence from other projects suggests that local job growth from data centers is short term, and almost entirely construction-based.
For those worried about the energy consumption of the site, O’Leary said, “We are building power from scratch, from the pipeline. We are going to burn it with turbines, clean.” For the uninitiated, natural gas is a fossil fuel, and burning it produces pollutants that have contributed to man-made climate change.
Gas turbines also present a second, less well-known problem. Each turbine operates much like a commercial jet engine, except that it produces electricity instead of thrust. Since the datacenter will draw 9GW when completed, the campus will likely be as loud as a large airport. In several instances, the infrasound produced by datacenters has made local residents sick.
O’Leary has also claimed in a video on X that those opposed to the data center are “professional protestors” being paid to object to the project.
A group called the Box Elder Accountability Referendum has applied for a referendum on the decision to build the datacenter. If 5,422 registered voters in the county sign the petition within 45 days, another vote will be held in November.
“Instead of speaking with us, Kevin O’Leary went on social media saying we were out-of-state, paid protesters, and we don’t want people from out-of-state making decisions for us,” said Brenna Williams, lead sponsor of the referendum push. “The only thing he’s right about is that we don’t want him, an out-of-state billionaire, making decisions for us.”
Earlier this week, OpenAI became the latest tech company to publicly endorse KOSA, the Kids Online Safety Act. The company, conveniently, tries to frame this as being about its support of child safety. It’s not. It’s about political horse trading, desperation for good publicity, and building a regulatory moat.
KOSA would help create stronger online protections for young social media users through safer default settings, expanded parental controls, and greater accountability for online harms.
The path forward on kids safety, however, also requires AI-specific rules. And we believe KOSA is complementary to the work we’re doing at the federal and state level. Young people should be able to benefit from AI in ways that are safe, age-appropriate, and grounded in real-world support, including referrals to crisis resources and parental notifications in serious safety situations. That means building safeguards from the start, giving families better tools, and taking responsibility for reducing risks before they become harms.
The broader point is an important one: AI companies still have the opportunity to build protections early, before these technologies become fully embedded in everyday life. As OpenAI Chief Global Affairs Officer Chris Lehane has put it, “We can’t repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren’t put in place until the platforms were already deeply embedded in young people’s lives.”
All of this is, of course, nonsense. As we’ve explained repeatedly, the underlying mechanisms of KOSA are deeply problematic and will do real damage. It will, inherently, make the internet worse for everyone. At its heart, KOSA is a surveillance and censorship bill, and it’s the last thing that we need for the internet today.
While it’s positioned as being about something no one can be against (“kid safety!”), that is all too often the facade with which terrible rights-killing laws are passed. And KOSA is no exception.
But a bunch of tech companies have endorsed it anyway. Why? Because they know it makes life way more difficult for smaller upstart competitors. The additional compliance costs it will add for companies will be ruinous to smaller, less well-resourced companies. For big companies with big bank accounts, however, it gives them a leg up.
OpenAI, perhaps more than most others in the space, needs that kind of government-backed protection against growing competition.
Almost exactly three years ago, I wrote a piece about Sam Altman going to Congress and asking for the federal government to regulate the AI space, calling it Sam Altman Wants The Government To Build Him A Moat. As I pointed out at the time, AI researchers were coming to the conclusion that there was little to no real competitive advantage that any frontier AI model could hold for any extended period of time. That situation has only gotten worse since then. The jockeying between the various leading AI models has meant that they’re all effectively comparable, and more and more builders are realizing that, since you can separate the context, the compute, and the agentic tools from the underlying LLM, the technology is quickly turning into a commodity where any one will do (a situation that becomes even more tenuous as open-weight/local models get better and better).
While OpenAI has a huge number of users (one of the fastest growing tech companies in history), it’s unclear if those users are particularly loyal. Indeed, there are a few indications that when OpenAI does something stupid, a large segment of users will quickly leave.
Given that, all of the large AI companies keep looking for ways to create some sort of lock-in for users. Most of them haven’t gone down the fully siloed path (knowing at this stage that would probably drive away their most valuable users). For the most part, the focus among the likes of OpenAI, Anthropic, Google and others is to build in more features so that staying is more convenient than swapping out the underlying LLM. That approach sits alongside the continued leapfrogging, plus various experiments in how much usage they’re willing to subsidize through their subscription plans.
But having the government wipe out competitors, or create “mandatory” tools that create lock-in, might be another path towards such a result. And that’s exactly what KOSA would lead to. It certainly wouldn’t protect kids. Indeed, all evidence suggests it would put plenty of marginalized kids at much greater risk.
However, it would create something of a regulatory moat for those larger companies.
On top of that, is there any company more desperate for a headline talking about how it’s “helping” protect children than OpenAI? The company has been accused of being “responsible” for suicide and other harmful behavior. And, even if those claims and lawsuits are misleading (they are!), culturally that message has been sticking. I’ve heard multiple people refer to ChatGPT as a suicide machine.
So, if you need a good headline to claim that you’re “protecting children” and doing so in a way where the law will have little direct impact on your business, but will damage some of your competitors in the space (not to mention the wider open internet), why not? It’s hard not to be cynical about OpenAI’s reasoning here.
Separately, it’s likely that the AI companies see this as a bit of political horse trading. While KOSA would have some impact on AI tools, it’s much more directed at social media platforms than AI. And it’s likely that the bet being made by OpenAI here is “hey, we’ll back KOSA for you, and you get rid of the AI-specific bills.” OpenAI’s Chris Lehane, who announced the endorsement and is featured in every press release about it, is infamous as a political trickster. He’s a political operator, not a tech or policy expert. You roll him out to cut a deal, not to advance a principled position on child safety. And that’s exactly what’s happening here.
You can see the KOSA authors gleefully using the OpenAI endorsement to falsely claim that only Mark Zuckerberg now opposes the law:
Yeah, that’s Senator Richard Blumenthal choosing to spend time on X, a site run by a guy who has made it clear he thinks Blumenthal’s political party is evil and needs to be wiped out, using that platform to lie and claim that the only people opposed to KOSA are “Mark Zuckerberg & his lobbyists.” That ignores the long list of civil society and public interest groups who have made it clear just how dangerous the law would be.
Marsha Blackburn (who has been vocal about how she wants KOSA to silence LGBTQ voices) put out a silly press release about this endorsement, saying:
“Lip service won’t save lives – Congress must take action to establish guardrails in the virtual space. I look forward to chairing a hearing on why the verdicts in California and New Mexico should spur Congress to hold Big Tech accountable for exploiting children to turn a profit.”
What? As bad as the rulings in California and New Mexico are, they seem to suggest that the courts already think they have the authority to order companies to do the impossible and magically stop anything bad from ever happening to kids who also (incidentally) use the internet.
All of this is for show. No one is being honest. Blackburn wants to censor LGBTQ speech she considers “dangerous to kids” because it terrifies her. Blumenthal wants to end encryption and the ability of tech companies to keep information, because he’s always been a cop and wants the ability to spy on your kids. And OpenAI wants Congress to direct their bad policies at social media companies rather than AI companies.
And all of us internet users are simply collateral damage for the mad power dreams of those in charge.
Netflix has spent years using AI to make sure you never leave the couch. Making AI-based content is the next step, I guess.
The streaming giant is staffing up a new internal studio called INKubator to produce animated short films and specials using generative AI (via The Verge).
The project never got an official announcement from Netflix. Instead, it surfaced through a series of recently published job listings seeking producers and CGI artists. These listings paint a pretty clear picture of where the company is headed.
What exactly is INKubator, and who is running it?
Based on LinkedIn profiles, INKubator quietly launched in March 2026 and is led by Serrena Iyer, who previously held strategy and operations roles at DreamWorks Animation, MRC Studios, and A24 Films. That is not a lineup you put together for a throwaway experiment.
The job listings describe the studio as a next-generation, creativity-first operation built entirely around generative AI. The studio’s long-term technology strategy covers generative AI workflows, artist tooling, and scalable multi-show environments.
Interestingly, INKubator is not Netflix’s first AI studio. Earlier this year, the streaming giant acquired InterPositive, an AI startup founded by actor Ben Affleck that focuses on AI usage in post-production.
Could AI-generated shows end up in your Netflix feed?
For now, INKubator seems to be focused strictly on shorts and experimental animated specials, rather than full-length features. That said, the job listings hint at longer-form ambitions down the line.
Netflix has also been making a push into kids’ programming, positioning itself as a family-friendly YouTube alternative. It also launched a standalone app for kids called Netflix Playground. Generative AI could surely help it scale that kind of content much faster.
Whether you are ready for AI-made Netflix shows or not, INKubator suggests the streamer has already made up its mind.
Polestar CEO Michael Lohscheller told CNBC that “pump anxiety” has replaced range anxiety as the dominant consumer concern, with rising fuel prices from the Iran war and Strait of Hormuz closure driving a measurable shift toward EV demand. EU EV registrations jumped 51% in March. Polestar reported a widening Q1 net loss of $383 million on flat revenue of $633 million despite record deliveries of 13,126 vehicles, with gross margins swinging negative due to pricing pressure, tariffs, and currency effects.
For years, the electric vehicle industry’s biggest problem had a name: range anxiety. Now, according to the chief executive of Polestar, the anxiety has moved to the other side of the forecourt. “People are concerned, ‘how much do I pay at the gas station?’” Michael Lohscheller told CNBC’s Squawk Box Europe on Wednesday, coining the phrase “pump anxiety” to capture a shift the entire automotive industry is struggling to absorb.
The context is not subtle. Since the United States and Israel launched strikes against Iran on 28 February, the Strait of Hormuz, the narrow waterway that carries roughly a fifth of the world’s oil supply, has been effectively closed to commercial shipping. Brent crude has surged past $100 a barrel. In the United Kingdom, average petrol prices have risen by more than 25 pence per litre since early March, with diesel tracking nearly 45 pence higher, according to RAC Fuel Watch. Across the European Union, petrol has breached €2 per litre in several markets. The result is a measurable, continent-wide recalculation of what it costs to drive.
The economics have flipped
Lohscheller’s argument is that the cost equation has inverted. “In the past, people considered EVs for idealistic reasons, and now the decision is all about money,” he said. The claim is supported by the numbers. EU electric vehicle registrations jumped 51% in March compared with the same month a year earlier, with Italy recording a 65.7% increase in battery-electric registrations in the first quarter, France following at 50.4%, and Germany at 41.3%, according to industry data compiled by the European Automobile Manufacturers’ Association. In the United Kingdom, Polestar’s home market in operational terms, Chinese EV manufacturers and European incumbents alike are reporting surges in online inquiries and test-drive bookings.
The shift is not uniform. In the United States, where petrol prices have topped $4 per gallon for the first time in four years, the effect has been more muted. Used EV sales rose 12% year on year in the first quarter, and 17% over the previous quarter, but new EV sales have not yet shown the same spike. Lohscheller cited disappearing federal tax incentives and broader consumer uncertainty as factors dampening American demand.
A company under pressure
The pump-anxiety thesis arrives at a moment when Polestar could use some good news. The Geely-controlled, Sweden-headquartered company reported a widening net loss of $383 million in the first quarter, more than double the $166 million loss in the same period last year. Revenue was flat at $633 million despite a 7% increase in deliveries to a record 13,126 vehicles. Gross margins swung to negative 3.2%, down from positive 10.3% a year earlier, a deterioration the company attributed to pricing pressure, EU and US tariffs, lower carbon-credit sales, and unfavourable foreign-exchange movements driven by the weakening Chinese yuan.
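The margin swing can be translated into rough dollar terms. This is an illustrative calculation from the figures above only, treating revenue as flat year on year rather than using Polestar's actual prior-year revenue line:

```python
# Rough gross-profit swing implied by the reported margin figures.
# Assumption (not from the filing): prior-year revenue treated as the
# same $633M, since the article describes revenue as flat.
revenue_m = 633       # Q1 revenue, in $M
gm_now = -0.032       # gross margin this quarter (-3.2%)
gm_prior = 0.103      # gross margin a year earlier (+10.3%)

gross_profit_now = revenue_m * gm_now      # a gross loss of about $20M
gross_profit_prior = revenue_m * gm_prior  # a gross profit of about $65M
swing = gross_profit_prior - gross_profit_now

print(f"Implied gross-profit swing: ~${swing:.0f}M on flat revenue")
```

In other words, the 13.5-point margin deterioration means roughly $85M of gross profit evaporated in a single quarter even as deliveries hit a record.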
The financial picture reveals the paradox at the heart of the EV industry in 2026. Demand is rising, but so is the cost of competing. Polestar manufactures primarily in China, which makes its vehicles subject to both American and European tariff regimes designed to counter the competitive advantage of Chinese production. The company described China’s domestic market as “hyper competitive” and suggested Europe needed to “speed up” its own response.
Range anxiety is over. What comes next is harder.
Lohscheller, who previously ran Opel and Vauxhall, was blunt about the range question. “Range anxiety, I think this has gone,” he said at the Financial Times Future of the Car conference, also held on Wednesday. The cheapest Polestar 2 now offers 344 miles of official range on the WLTP test cycle. The dual-motor Polestar 3 SUV manages 402 miles. An 82-kilowatt-hour Polestar 2 can be fully charged overnight on a domestic EV tariff in the UK for roughly £15, a fraction of what a comparable petrol car costs to refuel at current prices.
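The £15 overnight-charge figure is easy to sanity-check. A sketch, in which the off-peak tariff rate, the petrol price, and the petrol car's fuel economy are all assumptions for illustration rather than figures from the article:

```python
# EV side: the article's 82 kWh battery, at an assumed overnight
# EV-tariff rate of ~£0.18/kWh (assumption, not a quoted tariff).
battery_kwh = 82
tariff_gbp_per_kwh = 0.18
ev_full_charge_gbp = battery_kwh * tariff_gbp_per_kwh  # close to the quoted £15

# Petrol side: assumed comparable car doing 45 mpg (imperial) at an
# assumed £1.70/litre, driven the Polestar 2's 344-mile WLTP range.
petrol_gbp_per_litre = 1.70
mpg_imperial = 45
litres_per_imperial_gallon = 4.546
range_miles = 344

gallons_needed = range_miles / mpg_imperial
petrol_cost_gbp = gallons_needed * litres_per_imperial_gallon * petrol_gbp_per_litre

print(f"EV full charge: ~£{ev_full_charge_gbp:.0f}; "
      f"petrol for the same {range_miles} miles: ~£{petrol_cost_gbp:.0f}")
```

Under those assumptions the petrol car costs roughly four times as much to cover the same distance, which is the arithmetic behind the "fraction of what a comparable petrol car costs" claim; the exact multiple shifts with the tariff and pump price you plug in.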
But lower running costs have not yet translated into lower purchase anxiety. Lohscheller acknowledged that EV residual values remain a pain point, lagging behind equivalent combustion cars. New-car pricing pressure, driven by manufacturers scrambling to meet the UK’s zero-emission vehicle mandate quotas or face fines, has led to aggressive discounting that erodes used-EV values further. “I’m asking for stability,” he said of the regulatory environment. “Every three months to have a new debate about these rules changing is not helping anybody.”
The bigger picture
The fuel-price shock is reshaping automotive markets well beyond Polestar’s niche. BYD exported more than 120,000 electric and hybrid vehicles in March alone, a 65% increase year on year. Renault has described the Middle East conflict as having triggered a “seismic shift” in EV adoption. The International Energy Agency’s chief, Fatih Birol, has said countries are likely to pivot to renewables as a way to mitigate geopolitical risk, calling the Hormuz disruption the largest supply disruption in the history of the global oil market.
The irony is that the crisis most likely to accelerate the transition to electric vehicles is also the crisis most likely to punish the companies trying to lead it. Higher energy costs raise manufacturing expenses. Tariff walls make cross-border competition more expensive. Currency volatility erodes margins on vehicles built in one country and sold in another. Polestar’s first-quarter results are a case study in all three dynamics operating simultaneously.
Lohscheller’s framing, that the conversation has moved from range to price, from ideology to arithmetic, is probably correct. The question is whether Polestar, a company losing money on every percentage point of margin while navigating tariffs, competition, and a war-driven energy shock, is positioned to benefit from the shift it is describing. Pump anxiety may be good for EVs in the aggregate. Whether it is good for Polestar depends on whether the company can turn rising demand into something its balance sheet has not yet demonstrated: a sustainable business.