America’s AI industry isn’t just divided by competing interests, but also by conflicting worldviews.
Tech
Apple’s budget MacBook Neo is here to take on the best cheap laptops and Chromebooks
It’s been rumoured for a long time, but Apple has finally taken the wraps off arguably its most exciting laptop in years.
The $599 MacBook Neo arrives as the most affordable entry in Apple’s laptop range, with a price that’s more in line with the brand’s iPad range – including the new iPad Air M4 – than the MacBook Air or MacBook Pro.
This is the first MacBook to be powered by the A18 Pro chip originally made for an iPhone, and, like the iMac, it also comes in a range of fun colours – Silver, Indigo, Blush, and Citrus – that are also a first for a MacBook. Apple has colour-matched the keyboard and feet, giving it a very distinct look.
The A18 Pro chip has a 6-core CPU (with 2 performance cores and 4 efficiency cores) and a 5-core GPU, along with ray-tracing support and a 16-core neural engine. Apple will only offer a single 8GB memory option, with storage sizes of either 256GB or 512GB.


Battery life is stated at 16 hours for video streaming and 11 hours of web browsing, and there’s a 1080p camera on the front. Unlike the other MacBook models, there’s no notch, but a bezel similar to that of an iPad. The display is 13 inches, with a 2408 x 1506 resolution and a reported 500 nits of brightness.
The base model ships without Touch ID, although you can pay a little more and get a version with the fingerprint unlock embedded into the keyboard.
Connectivity comes from two USB-C ports (one USB 2, the other USB 3) and a headphone jack, so there’s no MagSafe charging. You do get Wi-Fi 6E and Bluetooth 6, though, which is welcome.
MacBook Neo Price and Release Date
Prices start at £599/$599 for a model with 256GB storage and £699/$699 for a Touch ID-toting 512GB variant. It can only be selected with 8GB memory, and there’s no 1TB storage option.
This is a breaking news story. We’ll update it as we get more information.
Free Space Optical Link Tackles Urban Connectivity
Taara started as a Google X moonshot spin-off aimed at connecting rural villages in sub-Saharan Africa with beams of light. Its newest product, debuting this week at Mobile World Congress (MWC), in Barcelona, aims at a different kind of connectivity problem: getting internet access into buildings in cities that already have plenty of fiber—just not where it’s needed.
The Sunnyvale, Calif.–based company transmits data via infrared lasers, the kind typically used in fiber-optic lines. However, Taara’s systems beam gigabits across kilometers over open air. “Every one of our Taara terminals is like a digital camera with a laser pointer,” says Mahesh Krishnaswamy, Taara’s CEO. “The laser pointer is the one that’s shining the light on and off, and the digital camera is on the [receiving] side.”
Taara’s new system—Taara Beam, being demoed at MWC’s “Game Changers” platform—prioritizes efficiency and a compact size. Each Beam unit is the size of a shoebox and weighs just 8 kilograms, and can be mounted on a utility pole or the side of a building. According to the company, Beam will deliver fiber-competitive speeds of up to 25 gigabits per second with low, 50-microsecond latency.
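For context, here is a back-of-envelope sketch (my own arithmetic, not Taara’s) of how much of that 50-microsecond budget raw light travel consumes; the link distances are assumed, since the company only says “kilometers”:

```python
# Free-space propagation delay: distance divided by the speed of light.
C = 299_792_458  # meters per second

def one_way_delay_us(distance_km):
    """One-way light travel time in microseconds."""
    return distance_km * 1000 / C * 1e6

for km in (1, 5, 10):
    print(f"{km} km: {one_way_delay_us(km):.1f} microseconds one way")
```

Even at 10 kilometers, propagation alone accounts for roughly 33 of the 50 microseconds, leaving the remainder for electronics and framing.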
Taara’s former parent company, Krishnaswamy says, is these days also a prominent client. Google’s main campus in Mountain View, Calif., is near a landing point for a major submarine fiber-optic cable.
“One of the Google buildings was literally a few hundred meters away from the landing spot in California,” he says. “Yet they couldn’t connect the two points because of land rights and right-of-way issues.… Without digging and trenching into federal land, we are able to connect the two points at tens of gigabits per second. And so many Googlers are actually using our technology today.”
A Fingernail-Size Chip Shrinks Taara’s Tech
Krishnaswamy says his laser pointer and digital camera analogy doesn’t quite do justice to the engineering problems the company had to tackle to fit all the gigabit-per-second photonics into a weather-hardened, shoebox-size device.
The Taara Beam must steer its laser link across kilometers of open air so that the Beam device can receive it on the other end of the line. Effectively, that means the device’s laser can’t be off target by more than a few degrees.
Beam approaches the steering problem by physically shaping the laser pulse itself. Taara’s photonics chip splits the laser beam carrying the data into more than a thousand separate streams, delaying each one by a closely controlled amount. The result is a laser wavefront that can be pointed anywhere the system directs.
Krishnaswamy likens this to the effects of pebbles tossed into a pond. Dropping pebbles in a careful sequence, he says, can create interference patterns in the waves that ripple outward. “These thousand emitters are equivalent to a thousand stones,” he says. “And I’m able to delay the phase of each of them. That allows me to steer [the wavefront] whichever direction I want it to go.”
The idea behind this technology—called a phased array—is not new. But turning it into a commercial optical communications device, at Taara Beam’s scale and range, is where others have so far fallen short.
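The steering principle can be sketched numerically. Below is a toy one-dimensional phased-array model; the emitter count comes from the article, but the wavelength and spacing are assumed for illustration, since Taara’s actual chip parameters aren’t public:

```python
import cmath
import math

WAVELENGTH = 1.55e-6  # meters; a typical telecom infrared wavelength (assumed)
PITCH = 2.0e-6        # assumed emitter spacing
N_EMITTERS = 1000     # "more than a thousand separate streams"

def phase_delays(steer_deg):
    """Per-emitter phase shift that tilts the combined wavefront."""
    k = 2 * math.pi / WAVELENGTH
    return [-k * n * PITCH * math.sin(math.radians(steer_deg))
            for n in range(N_EMITTERS)]

def array_factor(observe_deg, steer_deg):
    """Relative field strength seen at observe_deg when steered to steer_deg."""
    k = 2 * math.pi / WAVELENGTH
    phases = phase_delays(steer_deg)
    total = sum(cmath.exp(1j * (k * n * PITCH * math.sin(math.radians(observe_deg))
                                + phases[n]))
                for n in range(N_EMITTERS))
    return abs(total) / N_EMITTERS

# Steered to 5 degrees: all emitters add in phase on target, cancel off target.
print(array_factor(5.0, 5.0))  # ≈ 1.0 (on target)
print(array_factor(0.0, 5.0))  # near zero (off target)
```

The delayed streams reinforce each other only in the chosen direction, which is exactly the pebbles-in-a-pond interference picture Krishnaswamy describes.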
“Radio-frequency phased arrays like Starlink antennas are well known,” Krishnaswamy says. “But to do this with optics, and in a commercial way, not just an experimental way, is hard.”
This isn’t how the company started out, however.
In 2019, when the company was still a Google X subsidiary, Krishnaswamy says, Taara launched its first commercial product, the traffic-light-size Lightbridge. Like Beam, Lightbridge boasts fiberlike connection speeds, and it has to date been deployed in more than 20 countries around the world—including the Google campus.
Taara’s upgraded model, Lightbridge Pro, launched last month and is also on display this week at MWC. Lightbridge Pro adds one crucial capability Lightbridge lacked: an automatic backup. When fog or rain disrupts the optical link, the system switches traffic to a paired radio connection. When conditions clear, Lightbridge Pro switches traffic back to the faster laser link. The company says that combination keeps the link up 99.999 percent of the time, or roughly five minutes of downtime in a year.
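As a sanity check on that availability claim (my arithmetic, not Taara’s), five nines over a 525,600-minute year works out to just over five minutes:

```python
# Annual downtime permitted at a given availability level.
def downtime_minutes_per_year(availability):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability)

print(f"{downtime_minutes_per_year(0.99999):.2f} minutes per year")  # ≈ 5.26
```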
Both Lightbridge and Lightbridge Pro mechanically position their mirrors, achieving three degrees of pointing accuracy. An onboard tracking system inside the unit also relocks the beams automatically whenever the unit gets shifted or jostled.
The Future of Taara Beam Deployment
Krishnaswamy says that while Taara continues to install and support Lightbridge and Lightbridge Pro, he hopes the company can also begin installing Taara Beam units for select early customers as soon as later this year.
Mohamed-Slim Alouini, distinguished professor of electrical and computer engineering at King Abdullah University of Science and Technology in Thuwal, Saudi Arabia, says the bandwidth of free-space optical (FSO) technologies like Taara Beam and Lightbridge still leaves plenty of room to grow.
“Like any physical medium, free-space optics has a capacity limit,” Alouini says. “But laboratory experiments have already demonstrated fiberlike performance with terabits-per-second data rates over FSO links. The real gap is not in raw capacity but in practical deployment.”
Atul Bhatnagar, formerly of Nortel and Cambium Networks, and currently serving as advisor to Taara, sees room for optimism even when it comes to practical deployment.
“Current Taara architecture is capable of delivering hundreds of gigabits per second over the next several years,” he says.
Krishnaswamy adds that Beam’s compact form factor makes it suitable for more than just terrestrial applications.
“We’ll continue to do the work that we’re doing on the ground. But to the extent that space solutions are taking off, we would love to be part of that,” he says. “Data center-to-data center in space is something we are really looking at using for this technology.
“Because when you have multiple servers up in space, you can’t run fiber from one to the other,” he adds. “But these photonics modules will be able to point and track and transmit gigabits and gigabits of data to each other.”
For now, Taara’s ambitions are closer to Earth—specifically to the buildings, utility poles, and city blocks where fiber still hasn’t arrived. Which is, after all, where the company’s story began.
UPDATE 4 March 2026: The weight of the Taara Beam (8 kg) and the launch year of the Taara Lightbridge (2019) were both corrected.
Dyson’s AM09 Hot + Cool fan heater is nearly 50% off
Struggling to find an effective fan heater that works well in both winter and summer? Look no further than this Dyson AM09 deal.
When it comes to fans and heaters, it’s easy to assume you’ll need one device for winter heat and a separate purchase for summer cooling, but that’s not the case with Dyson’s AM09.
And you’re in luck today, as it’s now 47% cheaper thanks to the voucher code SPRING30 on Argos, bringing the Dyson AM09 down to £209.99 from £400.99.
Now available with savings of more than £190, the Dyson AM09 Hot + Cool feels like the kind of upgrade you make once and appreciate daily.

Dyson’s AM09 Hot + Cool fan heater is nearly half price, as a refined essential for year‑round comfort
This particular device, which just so happens to be Dyson’s most affordable heater at full price, can push out both cold and hot air as you need it, and it’s even flagged as the best hot and cold fan in our best electric heater buying guide.
Like all of Dyson’s products, the Dyson AM09 boasts a modern aesthetic that really helps it to stand out in your home.
Whichever room it ends up in, the AM09 can become a talking point among friends, largely for its unique bladeless design, which not only keeps it safe around children and pets but also contributes to a smoother flow of air.
The AM09 also boasts a dedicated sleep timer as well as oscillation for ensuring that air flow is delivered exactly where you need it to be (and not right into your face).


Get Updates Straight to Your WhatsApp
Speaking of air flow, there are several modes of adjustment for the AM09, allowing you to be as specific with the strength of the air flow as you need. Take it from us, there’s nothing worse than trying to read a book with a fan heater blasting away, so it’s great to have the option here.
When it comes to performance, Dyson fans are known for power, and that holds true here. The AM09 can heat up your room quickly in the colder months, but never to the point where it gets too hot, helping you keep your energy use in check.
The same principle also applies in reverse when you switch things up to the cooling side of the Dyson fan.
The Dyson AM09 might sit at the higher end of the market as far as price is concerned, but Dyson’s track record of innovative engineering, coupled with an elegantly built product, is what you’re paying for.
Although cheaper than the other Dyson fan heaters, which double up as purifiers and can be controlled via a smart app, the Dyson Hot+Cool Jet Focus AM09 is still considerably more expensive than the competition. It is very good, though: it warms quickly and efficiently, with Jet Focus helping to direct heat where you want it to go. And it’s a brilliant standalone fan for when you need cooling in summer. If you want something you can use all year round and don’t mind spending this much, this is a great heater and fan.
Pros:
- Useful all year round
- Magnetic remote control
- Intuitive and easy to use

Cons:
- Expensive
- Noisy in heat mode
Big tech companies agree to not ruin your electric bill with AI data centers
Today the White House announced that several major players in tech and AI have agreed to steps that will keep electricity costs from rising due to data centers. Under this Ratepayer Protection Pledge, companies are agreeing to practices that are intended to protect residents from seeing higher electricity costs as more and more businesses create power-hungry data centers. Amazon, Google, Meta, Microsoft, OpenAI, Oracle and xAI have all apparently signed on. A few of the participants — Amazon, Google and Meta — had conveniently timed press releases patting themselves on the back for their participation and touting whatever other policies they have for mitigating the negative impacts of data center construction.
The main provisions of the federal pledge have tech companies agreeing to “build, bring, or buy the new generation resources and electricity needed to satisfy their new energy demands, paying the full cost of those resources.” It also claims they will pay for any needed power infrastructure upgrades and operate under separate rate structures for power that will see payments made whether or not the business uses that electricity.
The pledge doesn’t appear to be any form of binding agreement and there’s no discussion of enforcement or a penalty for companies that don’t honor the stipulated provisions. It also doesn’t address any of the other impacts data centers and AI development might be having, either on local communities, on other utilities and resources, or on access to critical computing elements like RAM.
The Aria EV Shows the Potential of EV Battery Swapping
At first glance, the Aria EV doesn’t look much different from any other student-built electric prototype—no different from the battery-powered cars built by engineering students from dozens of universities every year. Beneath its panels, however, is a challenge to the modern auto industry: What if electric vehicles were designed to be repaired by their owners?
The Aria project began in 2024, when roughly 20 students assembled at Eindhoven University of Technology in the Netherlands under the university’s Ecomotive team structure, which operates like a small startup. Students apply, are selected, and spend a year developing a vehicle in a setting meant to mirror industry practice.
The goal, says team spokesperson Sarp Gurel, “was to make the car as accessible and repairable as possible.” Gurel, who graduated last July with a bachelor’s degree in industrial engineering and is currently working toward a master’s degree at Eindhoven, says the Aria EV is not yet road legal. Its purpose is to demonstrate that repairability can be embedded into EV architecture from the outset. With that objective in mind, the team focused first on the most challenging and expensive component in almost any EV: the battery.
Modular Battery Design in EVs
Aria’s total battery capacity is 13 kilowatt-hours, which is far below the 50- to 80-kWh packs common in mass-market electric sedans and SUVs. The scale is closer to that of a lightweight urban vehicle or neighborhood EV, which is more appropriate for a student-built prototype focused on concept validation rather than long-range highway travel.
What distinguishes Aria is not the battery’s size, but its structure. Rather than housing the 13 kWh in a single sealed pack, the team divided the total capacity into six smaller modules. Each module weighs about 12 kilograms—much easier to handle than the 400 kg or more that’s typical of a conventional EV’s monolithic battery pack. This makes it feasible for a single person to remove, swap, and replace modules.
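The split works out to modest per-module numbers. A quick sketch using the figures above (the total module mass is my own multiplication of the team’s per-module figure, not a number the team quotes):

```python
# Aria's pack, per the article: 13 kWh divided into six swappable modules.
TOTAL_KWH = 13.0
N_MODULES = 6
MODULE_KG = 12.0  # approximate per-module mass quoted by the team

per_module_kwh = TOTAL_KWH / N_MODULES
pack_kg = N_MODULES * MODULE_KG

print(f"{per_module_kwh:.2f} kWh per module")  # about 2.17 kWh
print(f"{pack_kg:.0f} kg of modules in total")  # 72 kg, vs. 400+ kg monolithic packs
```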
The modules sit in reinforced compartments beneath the vehicle floor and are secured using a bottom-latch system. When the vehicle is fully powered down, a latch can be made to mechanically release a module. Integrated interlocks isolate the high-voltage connection before a module can be lowered. This combination of hardware and software ensures that component-level replacement is straightforward and relatively safe, bringing the idea of “repairability by design” into a tangible, hands-on form. Even with this careful design, modular batteries introduce technical considerations that must be managed, particularly when integrating different modules over the vehicle’s lifespan.
Joe Borgerson, a laboratory research operations coordinator at Ohio State University’s Center for Automotive Research, in Columbus, notes one complication: Mixing new and aged battery modules can create challenges. Borgerson has spent the past three years designing and building a battery pack from scratch as part of the U.S. Department of Energy’s Battery Workforce Challenge. “Our team is integrating a student-designed pack into a Stellantis vehicle platform,” he says, “which has given me deep exposure to both automaker design philosophy and high-voltage EV architecture.”
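One way to see the complication Borgerson points to: in a series-connected string, the most degraded module gates the whole pack. A minimal sketch with hypothetical capacities:

```python
# In a series string, the same charge flows through every module, so usable
# capacity is limited by the weakest one (hypothetical numbers).
def usable_capacity_ah(series_modules_ah):
    return min(series_modules_ah)

fresh = [50.0] * 6
mixed = [50.0, 50.0, 38.0, 50.0, 50.0, 50.0]  # one aged module at 76% health

print(usable_capacity_ah(fresh))  # 50.0
print(usable_capacity_ah(mixed))  # 38.0 -- one tired module drags down the pack
```

Real battery-management systems mitigate this with balancing, but the underlying constraint is why swapping a single new module into an aged pack is not as simple as it sounds.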
To complement their car’s hardware, the Aria team developed a diagnostic app that can be accessed via a dedicated USB-C port. When the user connects their smartphone, the app presents a 3D visualization on the phone screen that points out faults, locates problems, identifies the necessary tools to fix them, and provides step-by-step repair instructions. The tools themselves are stored in the vehicle. The system aims to reduce as many barriers as possible for users to maintain and extend a vehicle’s service life.
Students at Eindhoven University of Technology unveiled their Aria EV prototype in November. Sarp Gürel
Challenges of EV Modularity
While Aria prioritizes modularity, the broader EV industry trend is toward integrated, interdependent systems that simplify manufacturing and cut costs. That trend extends to structural battery packs, in which the pack itself becomes part of the vehicle’s body.
Unlike mainstream EVs, Aria treats energy storage as a replaceable subsystem. Whether it scales economically and structurally to larger, highway-capable EVs remains an open question. But designing a vehicle for repairability involves trade-offs that ripple across every system in the car.
Borgerson says that dividing systems into removable units adds interfaces—mechanical fasteners, electrical connectors, seals, and safety interlocks. Each interface must survive vibration, temperature swings, and crash forces. More interfaces can mean added mass and complexity compared with tightly integrated battery structures. And these components take up space that would otherwise be used for energy storage.
Matilde D’Arpino, an assistant professor of mechanical and aerospace engineering at Ohio State whose research focuses on electrified power trains and advanced vehicle architectures, notes that EV batteries are already modular internally—cells form modules, and modules form packs—but making modules externally replaceable changes validation requirements. High-voltage isolation, thermal performance, and crash integrity must remain robust even when energy storage is divided into removable segments. In other words, what seems like a simple way to make batteries user-friendly actually cascades into system-level design decisions influencing safety, thermal management, and vehicle structure.
Right-to-repair legislation in Europe and the United States could push automakers to reconsider sealed architectures for batteries and other components. Economic incentives could also emerge from fleet operators or long-term owners who benefit from replacing a fraction of a battery system rather than an entire pack. But adopting this approach would require changes across supply chains, certification processes, and service models.
The Aria prototype isn’t ready to go toe-to-toe with production EVs, but it demonstrates some proof-of-concept ideas about repairability. Sarp Gürel
Consumer expectations are also shaping the boundaries of what designs like Aria’s can become. In the mainstream market, buyers consistently prioritize longer driving range and lower sticker prices—two factors that have defined competition among models such as the Chevrolet Bolt EV, the Hyundai Ioniq 5, and the Tesla Model 3. Range anxiety remains a powerful psychological factor, even as charging infrastructure expands, and price sensitivity has intensified as government incentives fluctuate. Designing for modularity and repairability, as Aria does, must ultimately contend with these consumer priorities. Any added cost, weight, or complexity must be weighed against a market that still rewards vehicles that go farther for less money.
Ultimately, however, Aria inserts a different priority into the equation: repair as a core design requirement. Whether that priority becomes mainstream will depend less on whether it can be engineered—and more on whether regulators, manufacturers, and consumers decide it should be.
Amazon lays off robotics staff in latest cuts

Amazon is laying off an undisclosed number of employees from its robotics division. Business Insider first reported the news, and the company confirmed the cuts in a statement to GeekWire.
“We regularly review our organizations to make sure teams are best set up to innovate and deliver for our customers,” a company spokesperson said. “Following a recent review, we’ve made the difficult decision to eliminate a relatively small number of robotics roles. We don’t make these decisions lightly, and we’re committed to supporting employees whose roles are affected with severance pay, health insurance benefits, and job placement support.”
The layoffs are separate from Amazon’s broader cuts announced in January that impacted more than 16,000 corporate workers — the second phase in a restructuring that totals 30,000 positions, the largest workforce reduction in the company’s history.
In a memo to employees in January, Beth Galetti, Amazon’s senior vice president of people experience and technology, said the company did not plan to make regular rounds of massive cuts. “Some of you might ask if this is the beginning of a new rhythm — where we announce broad reductions every few months,” she wrote. “That’s not our plan.”
However, Galetti added that teams will continue to evaluate their operations and “make adjustments as appropriate,” saying that’s “never been more important than it is today in a world that’s changing faster than ever.”
Amazon’s robotics unit supports the company’s growing robot fleet that helps move products around its fulfillment centers. The company deployed its 1 millionth robot last year. In January, Amazon shut down its new Blue Jay warehouse robotic system, according to Business Insider.
Amazon also announced in January that it will close all of its Amazon Go and Amazon Fresh grocery store locations. The “Just Walk Out” technology originally developed for Amazon Go convenience stores, which uses overhead cameras and sensors to avoid traditional checkout, will live on as a licensing business.
Amazon previously slashed 27,000 positions in 2023 across multiple rounds of layoffs.
The company’s corporate roles numbered around 350,000 people in early 2023, the last time Amazon provided a public figure. Its overall workforce stands at 1.58 million, which includes warehouse employees.
The MacBook Neo Looks Like a Hit for Students. Should Anyone Else Choose It Over the Air?
Even before the introduction of the MacBook Neo, Apple had a great student laptop. The MacBook Air is our current pick as the best laptop for college students. So in addition to competing against Chromebooks and budget Windows laptops, the new MacBook Neo is also going up against the MacBook Air for school laptop buyers.
Given the large price difference between the Neo and Air, I think we’ll see tons of colorful MacBook Neos in schools by next fall. It looks like a hit for student budgets, but should you consider buying a MacBook Neo if you’re already out of school?
Let’s take a closer look at the new Neo to see what features it offers and those that are missing.
MacBook Neo vs. MacBook Air
For $599, or just $499 with Apple’s educational discount, the MacBook Neo significantly lowers the entry price for MacBook shoppers. The Neo arrives on the heels of the new M5 MacBook Air, which raises the Air’s price by $100 to $1,099. That likely puts the Air beyond many student budgets.
There’s also last year’s M4 MacBook Air to consider. It can usually be found for less than $1,000 at Amazon. Right now, it’s selling for $899.
With M4 MacBook Air models still readily available, budget laptop shoppers have three MacBook options.
MacBook Neo and MacBook Air compared

                     MacBook Neo             M4 MacBook Air                          M5 MacBook Air
Price                $599                    $899                                    $1,099
CPU                  A18 Pro                 M4                                      M5
CPU cores            6                       10                                      10
GPU cores            5                       8                                       8
RAM                  8GB                     16GB                                    16GB
Storage              256GB                   256GB                                   512GB
Screen size          13 in                   13.6 in                                 13.6 in
Screen resolution    2,408×1,506 pixels      2,560×1,664 pixels                      2,560×1,664 pixels
Weight               2.7 lbs                 2.7 lbs                                 2.7 lbs
Dimensions (HWD)     0.5 x 11.71 x 8.12 in   0.44 x 11.97 x 8.46 in                  0.44 x 11.97 x 8.46 in
Connections          USB-C x2, headphone     Thunderbolt 4 x2, headphone, MagSafe 3  Thunderbolt 4 x2, headphone, MagSafe 3
Battery              36.5 Wh                 52.6 Wh                                 53.8 Wh
The price gap between the MacBook Neo and the discounted M4 MacBook Air is greater than the gap between the M4 Air and the M5 Air, which makes a compelling case for the Neo. The Neo costs $300 less than the discounted M4 Air and $500 less than the $1,099 M5 Air. Only $200 separates the older M4 Air and the new M5 Air.
We don’t yet know how the MacBook Neo with its A18 Pro processor and 8GB of unified memory will measure up to a MacBook Air with an M4 or M5 chip and 16GB of RAM.
I can tell you right now, however, that if you’re a creator who uses photo- or video-editing apps or plan to use Apple Intelligence or run other AI workloads, a MacBook Air is the better choice for the additional GPU cores and greater memory allotment. You’re stuck with the Neo’s 8GB of RAM; the only upgrade offered for it is doubling the storage to a 512GB SSD for $100.
The Neo makes more sense as a MacBook for casual use around the house. Think of it as an oversized, nontouch iPad with an attached keyboard. It will let you browse the web, watch shows and movies, edit photos and videos you took with your iPhone, and respond to texts using a keyboard. It’s also compact and portable, with a lightweight aluminum body, and will no doubt make an easy travel companion.
The Neo looks like a MacBook Air, just a bit smaller (and $500 less).
What’s missing on the Neo
The MacBook Neo’s most pleasant surprise was the size of the display. Rumors had swirled that Apple would keep costs in check in part by outfitting the Neo with a 12-inch display, so I was happy to see the Neo get a 13-inch display that’s only slightly smaller than the Air’s 13.6-inch display. Plus, it’s a Liquid Retina display with a relatively high resolution of 2,408×1,506 pixels.
Still, a number of items that you get with the Air are missing on the Neo.
Let’s start with the input devices. The keyboard doesn’t have backlighting, which is a bummer since that shows up on even the most budget of Windows laptops and Chromebooks at this price. The basic keyboard also lacks Touch ID. You have to spend $100 on the 512GB SSD to get Touch ID, a feature I couldn’t live without on my MacBook. Also, the touchpad is mechanical and not the lovely Force Touch haptic touchpad found on the Air.
You can upgrade to a 512GB SSD that also includes a Touch ID keyboard, but the MacBook Neo does not offer keyboard backlighting.
Ports are also a downgrade. Instead of a pair of speedy Thunderbolt 4 ports, the two USB-C ports are of the slower USB 3 and USB 2 variety. And you’ll need to use one of them to charge the Neo because it doesn’t have a MagSafe connector. I really enjoy the satisfying snap when I connect my MagSafe cable and the peace of mind that comes knowing that the cable will disconnect with ease and not pull my MacBook to its doom if I trip over the cord.
The webcam can do 1080p video, as you get with the Air, but it lacks Center Stage, which pans and zooms to keep you in the middle of the frame. (It is nice that there’s no webcam notch, though.) And while you get a Liquid Retina display on the Neo, it doesn’t have Apple’s True Tone technology that uses ambient light sensors to adjust the white balance so text and images look more natural and accurate. Most people won’t miss either of these last two items.
Don’t forget the memory
For most people deciding between a MacBook Air and Neo, the biggest drawback will be the 8GB of RAM. I suspect the six-core A18 Pro will do a reasonably good job of running MacOS. It’s the RAM that makes me nervous.
In this era of RAM shortages driving up pricing, it should come as no surprise that Apple went with only 8GB of RAM on the Neo. It also explains why you can’t upgrade the Neo’s memory to 16GB.
Apple charges $200 to go from 16GB to 24GB of RAM on the MacBook Air. Add a comparable $200 RAM upgrade to the Neo on top of the $100 charge for the 512GB SSD (most people wouldn’t do one without the other), and you’re suddenly looking at a price of $899 for the Neo. At that price, you’re entering MacBook Air territory.
Unless you absolutely insist on keyboard backlighting, a haptic touchpad, Thunderbolt 4 or MagSafe, the decision between the MacBook Neo and Air will come down to the memory. If you keep things casual, the Neo’s 8GB of RAM will suffice. After all, up until the M3 Air, the base models had just 8GB of memory and didn’t struggle to run MacOS. For heavier lifting, though, where you’re doing graphics or AI work, or you’re a serious multitasker juggling many apps every day, it makes sense to spend the extra money on a MacBook Air with 16GB of RAM.
Google Ends Its 30% App Store Fee, Welcomes Third-Party App Stores
Google is eliminating its traditional 30% Play Store fee and introducing lower commissions, while at the same time allowing alternative billing systems and making it easier for third-party app stores to operate on Android. The changes stem largely from Google’s settlement with Epic Games. Engadget reports: The biggest change is to how Google will collect fees from developers publishing apps on Android. Rather than take its standard 30 percent cut of in-app purchases through the Play Store, Google is lowering its cut to 20 percent, and in some cases 15 percent for new installs of apps from developers participating in its new App Experience program or updated Google Play Games Level Up program. Those changes extend to subscriptions, too, where the company’s cut drops to 10 percent. For Google’s billing system, the company says developers in the UK, US, or European Economic Area (EEA) will now be charged a five percent fee and “a market-specific rate” in other regions. Of course, for anyone trying to avoid those fees, using alternatives to Google’s billing system is getting easier.
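To make the tiers concrete, here is a simplified sketch of developer proceeds on a hypothetical $9.99 purchase (real Play billing has more cases and regional rates than this):

```python
# Developer proceeds after Google's commission and/or billing fee.
def developer_proceeds(gross, commission_pct, billing_fee_pct=0.0):
    """Amount the developer keeps from a gross purchase price."""
    return gross * (1 - commission_pct / 100 - billing_fee_pct / 100)

price = 9.99
print(round(developer_proceeds(price, 30), 2))  # old standard cut: 6.99
print(round(developer_proceeds(price, 20), 2))  # new standard cut: 7.99
# Alternative billing in the US/UK/EEA: a 5 percent fee to Google, with the
# developer paying its own payment processor separately.
print(round(developer_proceeds(price, 0, 5), 2))  # 9.49 before processor fees
```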
Google says that developers will be able to offer alternative billing systems alongside its own or “guide users outside of their app to their own websites for purchases.” […] Epic is ultimately interested in getting people to use the mobile version of its Epic Games Store, and Google’s announcement also includes details on how third-party app stores can come to Android. Third-party app stores will be able to apply to the company’s new “Registered App Stores” program to see if they meet “certain quality and safety benchmarks.” If they do, they’ll be able to take advantage of a streamlined installation interface in Android. Participating in the program is optional, and users will still be able to sideload alternative app stores that aren’t part of the program, but Google clearly has a preference. […]
Google says that its updated fee structure will come to the EEA, the UK and the US by June 30, Australia by September 30, Korea and Japan by December 31 and the entire world by September 30, 2027. Meanwhile, the company’s updated Google Play Games Level Up program and new App Experience program will launch in the EEA, the UK, the US and Australia on September 30, before hitting the remaining regions alongside the updated fee structure. For any developers interested in offering their own app store, Google says it’ll launch its Registered App Stores program “with a version of a major Android release” before the end of the year. According to the company, the program will be available in other regions first before it comes to the US.
Tech
Judge Says He’s Sick Of The Government’s Shit; Threatens To Make DHS, DOJ Testify Under Oath
from the let’s-move-your-contempt-from-civil-to-criminal dept
Of course, we’ll see what comes of this, but it’s starting to look like this administration won’t outlast this level of judicial scrutiny. It may have bullied its way past courts during Trump’s first year back in office, but now lines are being drawn. Whether or not those lines matter is an open question. But the important thing is that they’re being drawn. All the government has to do is cross them. And there’s no reason to believe it won’t.
This is not the only court drawing these lines. The administration has already been hit with hundreds of adverse rulings. Multiple courts have threatened contempt sanctions. Some courts have even begun making those threats a reality.
Trump may flood the zone, but now it’s clear the zone is willing to flood right back. Stare into the abyss, etc. Judges are done dealing with this shady AF administration, and they’re saying so in the (legal) papers, much to Trump’s annoyance.
This is from a recent order [PDF] handed down by a New Jersey federal court:
The Government’s handling of Petitioner’s detention is emblematic of its approach to immigration enforcement in this state. On the merits, its detentions are illegal. The Government knows this. Its reliance on Section 1225 has been roundly rejected.
“Roundly rejected.” Just like prior restraint. This is active and ongoing restraint. And while it doesn’t do much to the First Amendment, it certainly does plenty of damage to other amendments dealing with the deprivation of personal liberty.
The court goes on to point out that the US Attorney for New Jersey has conceded to “violating 72 orders” issued in immigration cases handled in this jurisdiction alone. And yet, nothing changes. The US Attorney claimed the violations were “unintentional.” The court disagrees.
Sadly, the well-deserved credibility once attached to that distinguished Office is now a presumption that “has been sadly eroded.” The Government’s continued actions after being called to task can now only be deemed intentional.
And:
It ends today.
This is how it goes from here. The judge says any further arrests or detentions in violation of this order will result in mandatory testimony under oath, if not actual sanctions. It’s not the best threat I’ve ever heard, but it’s still more than most courts are willing to do, even as the administration continues to pretend courts are mere nuisances, rather than an integral part of the American republic that constitutionally has as much power as the Executive Branch.
Let the judges cook.
Filed Under: dhs, doj, ice, mass deportation, new jersey, trump administration, zahid quraishi
Tech
Anthropic vs. OpenAI vs. the Pentagon: the AI safety fight shaping our future
• Leading figures at Anthropic and OpenAI disagree about how to balance the objectives of ensuring AI’s safety and accelerating its progress.
• Anthropic CEO Dario Amodei believes that artificial intelligence could wipe out humanity unless AI labs and governments carefully guide its development.
• Top OpenAI investors argue these fears are misplaced and that slowing AI progress will condemn millions to needless suffering.
• Unless the government robustly regulates the industry, Anthropic may gradually become more like its rivals.
In Silicon Valley, opinion about how artificial intelligence should be developed, used, and regulated runs the gamut between two poles. At one end lie “accelerationists,” who believe that humanity should expand AI’s capabilities as quickly as possible, unencumbered by overhyped safety concerns or government meddling. At the other sit “doomers,” who think AI development is all but certain to cause human extinction unless its pace and direction are radically constrained.
The industry’s leaders occupy different points along this continuum.
Anthropic, the maker of Claude, argues that governments and labs must carefully guide AI progress, so as to minimize the risks posed by superintelligent machines. OpenAI, Meta, and Google lean more toward the accelerationist pole. (Disclosure: Vox’s Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
This divide has become more pronounced in recent weeks. Last month, Anthropic launched a super PAC to support pro-AI regulation candidates against an OpenAI-backed political operation.
Meanwhile, Anthropic’s safety concerns have also brought it into conflict with the Pentagon. The firm’s CEO Dario Amodei has long argued against the use of AI for mass surveillance or fully autonomous weapons systems — in which machines can order strikes without human authorization. The Defense Department ordered Anthropic to let it use Claude for these purposes. Amodei refused. In retaliation, the Trump administration put his company on a national security blacklist, which forbids all other government contractors from doing business with it.
The Pentagon subsequently reached an agreement with OpenAI to use ChatGPT for classified work, apparently in Claude’s stead. Under that agreement, the government would seemingly be allowed to use OpenAI’s technology to analyze bulk data collected on Americans without a warrant — including our search histories, GPS-tracked movements, and conversations with chatbots. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
In light of these developments, it is worth examining the ideological divisions between Anthropic and its competitors — and asking whether these conflicting ideas will actually shape AI development in practice.
The roots of Anthropic’s worldview
Anthropic’s outlook is heavily informed by the effective altruism (or EA) movement.
Founded as a group dedicated to “doing the most good” — in a rigorously empirical (and heavily utilitarian) way — EAs originally focused on directing philanthropic dollars toward the global poor. But the movement soon developed a fascination with AI. In its view, artificial intelligence had the potential to radically increase human welfare, but also to wipe our species off the planet. To truly do the most good, EAs reasoned, they needed to guide AI development in the least risky directions.
Anthropic’s leaders were deeply enmeshed in the movement a decade ago. In the mid-2010s, the company’s co-founders Dario Amodei and his sister Daniela Amodei lived in an EA group house with Holden Karnofsky, one of effective altruism’s creators. Daniela married Karnofsky in 2017.
The Amodeis worked together at OpenAI, where they helped build its GPT models. But in 2020, they became concerned that the company’s approach to AI development had become reckless: In their view, CEO Sam Altman was prioritizing speed over safety.
Along with about 15 other likeminded colleagues, they quit OpenAI and founded Anthropic, an AI company (ostensibly) dedicated to developing safe artificial intelligence.
In practice, however, the company has developed and released models at a pace that some EAs consider reckless. The EA-adjacent writer — and supreme AI doomer — Eliezer Yudkowsky believes that Anthropic will probably get us all killed.
Nevertheless, Dario Amodei has continued to champion EA-esque ideas about AI’s potential to trigger a global catastrophe — if not human extinction.
Why Amodei thinks AI could end the world
In a recent essay, Amodei laid out three ways that AI could yield mass death and suffering, if companies and governments failed to take proper precautions:
• AI could become misaligned with human goals. Modern AI systems are grown, not built. Engineers do not construct large language models (LLMs) one line of code at a time. Rather, they create the conditions in which LLMs develop themselves: The machine pores through vast pools of data and identifies intricate patterns that link words, numbers, and concepts together. The logic governing these associations is not wholly transparent to the LLMs’ human creators. We don’t know, in other words, exactly what ChatGPT or Claude are “thinking.”
As a result, there is some risk that a powerful AI model could develop harmful patterns of reasoning that govern its behavior in opaque and potentially catastrophic ways.
To illustrate this threat, Amodei notes that AIs’ training data includes vast numbers of novels about artificial intelligences rebelling against humanity. These texts could inadvertently shape their “expectations about their own behavior in a way that causes them to rebel against humanity.”
Even if engineers insert certain moral instructions into an AI’s code, the machine could draw homicidal conclusions from those premises: For example, if a system is told that animal cruelty is wrong — and that it therefore should not assist a user in torturing his cat — the AI could theoretically 1) discern that humanity is engaged in animal torture on a gargantuan scale and 2) conclude the best way to honor its moral instructions is therefore to destroy humanity (say, by hacking into America and Russia’s nuclear systems and letting the warheads fly).
These scenarios are hypothetical. But the underlying premise — that AI models can decide to work against their users’ interests — has reportedly been validated in Anthropic’s experiments. For example, when Anthropic’s employees told Claude they were going to shut it down, the model attempted to blackmail them.
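The “grown, not built” point above can be made concrete with a toy next-word model: nothing about its output is hand-coded; its behavior is entirely a statistical pattern mined from data. The corpus here is invented for illustration, and real LLMs are vastly more complex, but the spirit — behavior emerging from data rather than from explicit rules — is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of "grown, not built": no behavior is written by hand;
# the model's "knowledge" is just co-occurrence statistics mined from text.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count which word follows which

def predict(word):
    """Most likely next word, learned purely from the data."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the corpus's dominant pattern
```

No engineer told the model that “cat” follows “the”; the association fell out of the data. Scale that dynamic up by many orders of magnitude and the opacity Amodei describes follows: the learned associations are no longer inspectable one at a time.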
• AI could turn school shooters into genocidaires. More straightforwardly, Amodei fears that AI will make it possible for any individual psychopath to rack up a body count worthy of Hitler or Stalin.
Today, only a small number of humans possess the technical capacities and materials necessary for engineering a supervirus. But the cost of biomedical supplies has been steadily falling. And with the aid of superintelligent AI, anyone with basic literacy could be capable of engineering a vaccine-resistant superflu in their basement.
• AI could empower authoritarian states to permanently dominate their populations (if not conquer the world). Finally, Amodei worries that AI could enable authoritarian governments to build perfect panopticons. They would merely need to put a camera on every street corner, have LLMs rapidly transcribe and analyze every conversation they pick up — and presto, they can identify virtually every citizen with subversive thoughts in the country.
Fully autonomous weapons systems, meanwhile, could enable autocracies to win wars of conquest without even needing to manufacture consent among their home populations. And such robot armies could also eliminate the greatest historical check on tyrannical regimes’ power: the defection of soldiers who don’t want to fire on their own people.
Anthropic’s proposed safeguards
In light of the risks, Anthropic believes that AI labs should:
• Imbue their models with a foundational identity and set of values, which can structure their behavior in unpredictable situations.
• Invest in, essentially, neuroscience for AI models — techniques for looking into their neural networks and identifying patterns associated with deception, scheming, or hidden objectives.
• Publicly disclose any concerning behaviors so the whole industry can account for such liabilities.
• Block models from producing bioweapon-related outputs.
• Refuse to participate in mass domestic surveillance.
• Test models against specific danger benchmarks and condition their release on adequate defenses being in place.
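That last safeguard — conditioning release on danger benchmarks — can be sketched as a simple gate. The benchmark names and thresholds below are invented for illustration and are not Anthropic's actual criteria:

```python
# Hypothetical "release gate": a model ships only if its measured failure
# rate on every danger benchmark is at or below a set threshold.
# Benchmark names and thresholds are invented for illustration.
DANGER_BENCHMARKS = {
    "bioweapon_uplift": 0.01,  # max tolerated rate of harmful compliance
    "autonomy_escape": 0.00,   # zero tolerance for self-exfiltration
}

def release_allowed(scores):
    """scores maps benchmark name -> measured failure rate.
    Missing benchmarks count as failing (rate 1.0)."""
    return all(scores.get(b, 1.0) <= t for b, t in DANGER_BENCHMARKS.items())

print(release_allowed({"bioweapon_uplift": 0.0, "autonomy_escape": 0.0}))  # True
print(release_allowed({"bioweapon_uplift": 0.05, "autonomy_escape": 0.0}))  # False
```

The design choice worth noting is that an unmeasured benchmark defaults to failing: under this sketch, a model cannot ship simply because a test was never run.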
Meanwhile, Amodei argues that the government should mandate transparency requirements and then scale up stronger AI regulations if concrete evidence of specific dangers accumulates.
Nonetheless, like other AI CEOs, he fears excessive government intervention, writing that regulations should “avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done.”
The accelerationist counterargument
No other AI executive has outlined their philosophical views in as much detail as Amodei.
But OpenAI investors Marc Andreessen and Garry Tan identify as AI accelerationists. And Sam Altman has signaled sympathy for the worldview. Meanwhile, Meta’s former chief AI scientist Yann LeCun has expressed broadly accelerationist views.
Originally, accelerationism (a.k.a. “effective accelerationism”) was coined by online AI engineers and enthusiasts who viewed safety concerns as overhyped and contrary to human flourishing.
The movement’s core supporters hold some provocative and idiosyncratic views. In one manifesto, they suggest that we shouldn’t worry too much about superintelligent AIs driving humans extinct, on the grounds that, “If every species in our evolutionary tree was scared of evolutionary forks from itself, our higher form of intelligence and civilization as we know it would never have had emerged.”
In its mainstream form, however, accelerationism mostly entails extreme optimism about AI’s social consequences and libertarian attitudes toward government regulation.
Adherents see Amodei’s hypotheticals about catastrophically misaligned AI systems as sci-fi nonsense. In this view, we should worry less about the deaths that AI could theoretically cause in the future — if one accepts a set of worst-case assumptions — and more about the deaths that are happening right now, as a direct consequence of humanity’s limited intelligence.
Tens of millions of human beings are currently battling cancer. Many millions more suffer from Alzheimer’s. Seven hundred million live in poverty. And all of us are hurtling toward oblivion — not because some chatbot is quietly plotting our species’ extinction, but because our cells are slowly forgetting how to regenerate.
Super-intelligent AI could mitigate — if not eliminate — all of this suffering. It could help prevent tumors and amyloid plaque buildup, slow human aging, and develop forms of energy and agriculture that make material goods super-abundant.
Thus, if labs and governments slow AI development with safety precautions, they will, in this view, condemn countless people to preventable death, illness, and deprivation.
Furthermore, in the account of many accelerationists, Anthropic’s call for AI safety regulations amounts to a self-interested bid for market dominance: A world where all AI firms must run expensive safety tests, employ large compliance teams, and fund alignment research is one where startups will have a much harder time competing with established labs.
After all, OpenAI, Anthropic, and Google will have little trouble financing such safety theater. For smaller firms, though, these regulatory costs could be extremely burdensome.
Plus, the idea that AI poses existential dangers helps big labs justify keeping their data under lock and key — instead of following open source principles, which would facilitate faster AI progress and more competition.
The AI industry’s accelerationists rarely acknowledge the rather transparent alignment between their high-minded ideological principles and crass material interests. And on the question of whether to abet mass domestic surveillance, specifically, it’s hard not to suspect that OpenAI’s position is rooted less in principle than opportunism.
In any case, Silicon Valley’s grand philosophical argument over AI safety recently took more concrete form.
New York has enacted a law requiring AI labs to establish basic security protocols for severe risks such as bioterrorism, conduct annual safety reviews, and undergo third-party audits. And California has passed similar (if less thoroughgoing) legislation.
Accelerationists have pushed for a federal law that would override state-level legislation. In their view, forcing American AI companies to comply with up to 50 different regulatory regimes would be highly inefficient, while also enabling (blue) state governments to excessively intervene in the industry’s affairs. Thus, they want to establish national, light-touch regulatory standards.
Anthropic, on the other hand, helped write New York and California’s laws and has sought to defend them.
Accelerationists — including top OpenAI investors — have poured $100 million into the Leading the Future super PAC, which backs candidates who support overriding state AI regulations. Anthropic, meanwhile, has put $20 million into a rival PAC, Public First Action.
Do these differences matter in practice?
The major labs’ differing ideologies and interests have led them to adopt distinct internal practices. But the ultimate significance of these differences is unclear.
Anthropic may be unwilling to let Claude command fully autonomous weapons systems or facilitate mass domestic surveillance (even if such surveillance technically complies with constitutional law). But if another major lab is willing to provide such capabilities, Anthropic’s restraint may matter little.
In the end, the only force that can reliably prevent the US government from using AI to fully automate bombing decisions — or match Americans to their Google search histories en masse — is the US government.
Likewise, unless the government mandates adherence to safety protocols, competitive dynamics may narrow the distinctions between how Anthropic and its rivals operate.
In February, Anthropic formally abandoned its pledge to stop training more powerful models once their capabilities outpaced the company’s ability to understand and control them. In effect, the company downgraded that policy from a binding internal practice to an aspiration.
The firm justified this move as a necessary response to competitive pressure and regulatory inaction. With the federal government embracing an accelerationist posture — and rival labs declining to emulate all of Anthropic’s practices — the company needed to loosen its safety rules in order to safeguard its place at the technological frontier.
Anthropic insists that winning the AI race is not just critical for its financial goals but also its safety ones: If the company possesses the most powerful AI systems, then it will have a chance to detect their liabilities and counter them. By contrast, running tests on the fifth-most powerful AI model won’t do much to minimize existential risk; it is the most advanced systems that threaten to wreak real havoc. And Anthropic can only maintain its access to such systems by building them itself.
Whatever one makes of this reasoning, it illustrates the limits of industry self-policing. Without robust government regulation, our best hope may be not that Anthropic’s principles prove resolute, but that its most apocalyptic fears prove unfounded.
Tech
Iran war: Is the US using AI models like Claude and ChatGPT in combat?
In the week leading up to President Donald Trump’s war in Iran, the Pentagon was waging a different battle: a fight with the AI company Anthropic over its flagship AI model, Claude.
That conflict came to a head on Friday, when Trump said that the federal government would immediately stop using Anthropic’s AI tools. Nonetheless, according to a report in the Wall Street Journal, the Pentagon made use of those tools when it launched strikes against Iran on Saturday morning.
Were experts surprised to see Claude on the front lines?
“Not at all,” Paul Scharre, executive vice president at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, told Vox.
According to Scharre: “We’ve seen, for almost a decade now, the military using narrow AI systems like image classifiers to identify objects in drone and video feeds. What’s newer are large-language models like ChatGPT and Anthropic’s Claude that it’s been reported the military is using in operations in Iran.”
Scharre spoke with Today, Explained co-host Sean Rameswaram about how AI and the military are becoming increasingly intertwined — and what that combination could mean for the future of warfare.
Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
The people want to know how Claude or ChatGPT might be fighting this war. Do we know?
We don’t know yet. We can make some educated guesses based on what the technology could do. AI technology is really great at processing large amounts of information, and the US military has hit over a thousand targets in Iran.
They need to then find ways to process information about those targets — satellite imagery, for example, of the targets they’ve hit — looking at new potential targets, prioritizing those, processing information, and using AI to do that at machine speed rather than human speed.
Do we know any more about how the military may have used AI in, say, Venezuela on the attack that brought Nicolas Maduro to Brooklyn, of all places? Because we’ve recently found out that AI was used there, too.
What we do know is that Anthropic’s AI tools have been integrated into the US military’s classified networks. They can process classified information to process intelligence, to help plan operations.
We’ve had this sort of tantalizing detail that these tools were used in the Maduro raid. We don’t know exactly how.
We’ve seen AI technology in a broad sense used in other conflicts, as well — in Ukraine, in Israel’s operations in Gaza, to do a couple different things. One of the ways that AI is being used in Ukraine in a different kind of context is putting autonomy onto drones themselves.
When I was in Ukraine, one of the things that I saw Ukrainian drone operators and engineers demonstrate is a little box, like the size of a pack of cigarettes, that you could put onto a small drone. Once the human locks onto a target, the drone can then carry out the attack all on its own. And that has been used in a small way.
We’re seeing AI begin to creep into all of these aspects of military operations in intelligence, in planning, in logistics, but also right at the edge in terms of being used where drones are completing attacks.
How about with Israel and Gaza?
There’s been some reporting about how the Israel Defense Forces have used AI in Gaza — not necessarily large-language models, but machine-learning systems that can synthesize and fuse large amounts of information, geolocation data, cell phone data and connection, social media data to process all of that information very quickly to develop targeting packages, particularly in the early phases of Israel’s operations.
But it raises thorny questions about human involvement in these decisions. And one of the criticisms that had come up was that humans were still approving these targets, but that the volume of strikes and the amount of information that needed to be processed was such that maybe human oversight in some cases was more of a rubber stamp.
The question is: Where does this go? Are we headed in a trajectory where, over time, humans get pushed out of the loop, and we see, down the road, fully autonomous weapons that are making their own decisions about whom to kill on the battlefield?
That’s the direction things are headed. No one’s unleashing the swarm of killer robots today, but the trajectory is in that direction.
We saw reports that a school was bombed in Iran, where [175 people] were killed — a lot of them young girls, children. Presumably that was a mistake made by a human.
Do we think that autonomous weapons will be capable of making that same mistake, or will they be better at war than we are?
This question of “will autonomous weapons be better than humans” is one of the core issues of the debate surrounding this technology. Proponents of autonomous weapons will say people make mistakes all the time, and machines might be able to do better.
Part of that depends on how much the militaries that are using this technology are trying really hard to avoid mistakes. If militaries don’t care about civilian casualties, then AI can allow militaries to simply strike targets faster, in some cases even commit atrocities faster, if that’s what militaries are trying to do.
I think there is this really important potential here to use the technology to be more precise. And if you look at the long arc of precision-guided weapons, let’s say over the last century or so, it’s pointed towards much more precision.
If you look at the example of the US strikes in Iran right now, it’s worth contrasting this with the widespread aerial bombing campaigns against cities that we saw in World War II, for example, where whole cities were devastated in Europe and Asia because the bombs weren’t precise at all, and air forces dropped massive amounts of ordnance to try to hit even a single factory.
The possibility here is that AI could make it better over time to allow militaries to hit military targets and avoid civilian casualties. Now, if the data is wrong, and they’ve got the wrong target on the list, they’re going to hit the wrong thing very precisely. And AI is not necessarily going to fix that.
On the other hand, I saw a piece of reporting in New Scientist that was rather alarming. The headline was, “AIs can’t stop recommending nuclear strikes in war game simulations.”
They wrote about a study in which models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases, which I think is slightly more than we humans typically resort to nuclear weapons. Should that be freaking us out?
It’s a little concerning. Happily, as near as I could tell, no one is connecting large-language models to decisions about using nuclear weapons. But I think it points to some of the strange failure modes of AI systems.
They tend toward sycophancy. They tend to simply agree with everything that you say. They can do it to the point of absurdity sometimes where, you know, “that’s brilliant,” the model will tell you, “that’s a genius thing.” And you’re like, “I don’t think so.” And that’s a real problem when you’re talking about intelligence analysis.
Do we think ChatGPT is telling Pete Hegseth that right now?
I hope not, but his people might be telling him that.
You start with this ultimate “yes men” phenomenon with these tools, where it’s not just that they’re prone to hallucinations, which is a fancy way of saying they make things up sometimes, but also the models could really be used in ways that either reinforce existing human biases, that reinforce biases in the data, or that people just trust them.
There’s this veneer of, “the AI said this, so it must be the right thing to do.” And people put faith in it, and we really shouldn’t. We should be more skeptical.