Apple has acknowledged that users may be encountering issues with iCloud services, Photos sync, or an outright outage with Find My on Tuesday afternoon.
Another services outage has affected users
Everything you do on an iPhone touches some kind of service, and services can suffer outages from time to time. If a file just won’t sync, or you can’t see a friend’s location, it may be because of an ongoing issue. According to Apple’s System Status page, various iCloud services began facing issues around 2:02 p.m. ET, and Find My saw a full outage beginning at 3:04 p.m. ET. Users attempting to use those features may encounter errors or endless loading.
I’m going to trust that most of our audience has some idea of what McCarthyism was in the 1950s. To summarize very briefly, it was an anti-communist campaign that broadened into a general anti-leftist crusade across the country, with a specific focus on driving supposed communist influences out of major American media, such as radio and Hollywood. It produced a public hyper-vigilant in hunting for supposed communists everywhere, along with plenty of false accusations of communist activity purposefully foisted upon people for personal reasons. This rabid, frothy-mouthed era of suspicion became a major stain on 1950s America.
I’m watching a version of this begin to take form around artificial intelligence. I know, I know: there are very real dangers and negative outcomes that could arise from AI. That was true of communism and our Cold War enemy in the Soviet Union as well. My point is not that AI is great all the time and any pushback against it is invalid. Instead, my point is that we’re starting to see what I’ll call McPromptism, where some percentage of the public looks for AI everywhere it can and, if use is suspected, immediately decries it as terrible and demands that people not engage with the supposed user.
And just like McCarthyism, McPromptism gets its accusations wrong sometimes. You can see a version of that in the story of Aspyr’s remastering of old Tomb Raider games and the horrible outfits that were produced for the protagonist, Lara Croft.
Earlier this week we reported on fan reaction to the latest update to the Tomb Raider I-III Remastered collection, in which the game received a new Challenge Mode, while Lara received a suite of new outfits to wear as rewards. And oh wow, they were bad. Comically bad. So bad, in fact, that one of the remaster’s original artists posted on X to distance himself and his colleagues from the dross. Alongside all of this was the suspicion that genAI might have been involved in the fits’ creation, given just how dreadful they looked. Publisher Aspyr has now finally responded to the claims to insist no AI was used at all, instead stating they were created by “our team of artists.” Which raises more questions.
If you want to see a somewhat humorous look at the outfit textures that are the subject of public complaint, here you go.
On the one hand, for someone like me who is not into the anti-AI dogma out there, it is objectively funny for some people to point at bad video game textures and claim they’re so bad because they’re obviously created using generative AI… only to have the company that made them say, “Nuh uh! It was our human employees who made them!” It’s almost Monty-Python-esque, in a way.
But this default assumption among some in the gaming public that “this thing in gaming is bad, so it must have been made using AI!” is just one more kind of silly out there right now. Aspyr doesn’t exactly have a perfect reputation when it comes to remastering games, after all, and it built that reputation long before genAI came along.
It seems clear that this was a case of images being released to promote the remastered game that Aspyr didn’t live up to in the actual game itself. No AI, just human beings not hitting the mark. It happens all the time. Hell, there is even a chance that AI could have done a better job. Not a certainty by any stretch, but a possibility.
But the real takeaway from this otherwise minor episode for me was the McPromptism misfire. If you’re going to rage against the literal machine in the video gaming industry, which I think is the wrong stance to take anyway, at least let it be righteous rage.
NASA’s Space Launch System rocket stands on its launch pad in preparation for the Artemis 2 moon launch. (NASA Photo / Bill Ingalls)
After years of postponements and close to $100 billion in spending, NASA is finally counting down to its first attempt to send astronauts around the moon since Apollo 17 in 1972.
The 10-day Artemis 2 mission is set to begin today with the liftoff of NASA’s Space Launch System rocket from NASA’s historic Launch Complex 39B at Kennedy Space Center in Florida. The two-hour launch window opens at 6:24 p.m. ET (3:24 p.m. PT), and NASA is streaming live mission coverage of the countdown on two different YouTube channels.
NASA has fueled up the 322-foot-tall SLS rocket with liquid hydrogen and oxygen, and there’s an 80% chance of acceptable weather for launch. Rain showers are the main concern.
Artemis 2 is the first crewed test flight in a series leading up to a moon landing that’s currently scheduled for 2028. It follows Artemis 1, which sent a crewless Orion space capsule around the moon in 2022. This time, four astronauts will be riding inside Orion: NASA mission commander Reid Wiseman, NASA astronauts Christina Koch and Victor Glover, and Canadian astronaut Jeremy Hansen. Koch will be the first woman to go beyond Earth orbit, and Hansen will be the first non-American to do so.
Although the astronauts won’t be landing on the lunar surface, they’ll follow a figure-8 trajectory that will send them 4,700 miles beyond the far side of the moon and make them the farthest-flung travelers in human history.
Last week, NASA Administrator Jared Isaacman laid out a plan for establishing a permanent base on the moon and preparing for even farther trips into the solar system. On the eve of the launch, Isaacman played up the significance of Artemis 2 in that plan. “The next era of exploration begins,” he said in a post to X.
Senior test director Jeff Spaulding, a veteran of the space shuttle program, said he was looking forward to the mission. “I’m excited about going to the moon,” he told reporters. “I’m excited about establishing a presence there. It’s something that I have had a desire for, for a great many years — and then to get humans out to Mars as well.”
“They’re going to be able to see the whole moon as a lunar disk on the lunar far side,” Marie Henderson, lunar science deputy lead for the Artemis 2 mission, said in a NASA video. “So, that’s a brand-new, unique perspective that humans haven’t been able to look at before.”
At the end of the trip, the crew and their Orion capsule are due to splash down in the Pacific Ocean off the California coast. They’ll be brought to a recovery ship for medical checkouts and their return to shore, following a routine that became familiar during the Apollo era.
Artemis 2 is about the history of America’s space program as well as its future. The round-the-moon mission profile matches that of Apollo 8, which served as a unifying event for a nation riven by the social tumult of the time. That mission’s commander, Frank Borman, reported receiving a telegram reading, “Congratulations to the crew of Apollo 8. You saved 1968.” Notably, less than a third of Americans living today were around when Apollo 8 flew.
The main motivation for the Apollo program was America’s superpower competition with the Soviet Union, and today, the geopolitical stakes are similarly high. NASA and the White House are seeking to jump-start progress on Artemis in part because China is targeting a crewed moon landing by 2030.
Sen. Maria Cantwell, D-Wash., said this week during a visit to Seattle-area suppliers for the Artemis program that it’s important for America to get to the moon first. “We’re trying to get the best real estate on the moon,” she said. “So, to do that, you’ve got to get up there to claim it.”
The course of the Artemis program, which is named after the goddess of the moon and the twin sister of Apollo in Greek mythology, hasn’t always run smooth. When the program was given its name in 2019, the Artemis 2 mission was planned for 2022 or 2023, with the moon landing scheduled for 2024. The cost of the program has been estimated at $93 billion through 2025, with each Artemis launch costing $4.1 billion.
Artemis 2’s launch team ran into several challenges during this year’s preparations for launch. Liftoff was initially scheduled for February, but a liquid hydrogen leak forced NASA to reset the launch for March. The launch date was reset again when a helium pressurization problem required a rocket rollback for repairs. The SLS was brought back out to the pad on March 20, and preparations have gone smoothly since then.
Several companies with a presence in the Seattle area are banking on Artemis’ success. For example, a facility in Redmond operated by L3Harris (previously known as Aerojet Rocketdyne) builds thrusters for the Orion spacecraft and is already working ahead on the Artemis 8 mission.
Ken Pillonel, a Swiss engineer, has struck again. He’s well known for refurbishing outdated iPhones with creative add-on cases, which he even sells. This time, however, he turned the tables. On April 1st, he completed an entirely new prototype in just a few days: a slim protective cover that gives the iPhone 17 Pro a working Lightning port, right where Apple abandoned it.
If you’ve recently updated from an iPhone 14 or earlier, you understand the pain. All of those old cords, docks, and chargers you used to love are now rendered worthless unless you carry a separate adapter with you everywhere. Pillonel effectively solved the challenge by working in reverse. Instead of forcing the phone to use a newer plug, he designed a cover that allows Lightning cables to plug right in while the iPhone 17 Pro remains safely tucked inside its USB-C shell.
It all starts with careful work on the electronics side. He designed tiny custom circuit boards to shrink a standard USB-C to Lightning adapter down to almost nothing. These boards sit inside the bottom edge of the case and add only a few millimeters of thickness. Next came the case itself, printed in flexible TPU on a high-end 3D printer that is good at reducing waste. He also made a little jig to position the MagSafe magnets correctly, and when he snapped everything together, it fit like a glove, no tools required.
Fully assembled, the case feels like any other you’d buy in a store: soft to the touch and durable enough for daily use. Slide the iPhone 17 Pro inside and the internal cabling aligns neatly with the phone’s USB-C port. Plugging a Lightning cable into the new external port just works; power flows exactly as it would on an older model. Charging works well, as he demonstrated in his full build video; data transfer and other accessories are still to be tested.
Pillonel never meant to sell this one. He refers to the finished piece as one of the oddest things he has ever put together, a tongue-in-cheek reference to Lightning’s official departure from the roster years ago. Nonetheless, the project illustrates a wider point. With some work and the correct parts, compatibility gaps between old and new technology can be bridged in inventive ways that keep favorite accessories alive. [Source]
Some projects need no complicated use case to justify their development, and so it was with [Janne]’s BeamInk, which mashes a Wacom pen tablet with an xTool F1 laser engraver with the help of a little digital glue. For what purpose? So one can use a digital pen to draw with a laser in real time, of course!
Pen events from the drawing tablet get translated into a stream of G-code that controls laser state and power.
Here’s how it works: a Python script grabs events from a USB drawing tablet via evdev (the Linux kernel’s event device, which allows user programs to read raw device events), scales the tablet size to the laser’s working area, and turns pen events into a stream of laser power and movement G-code. The result? Draw on tablet, receive laser engraving.
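The translation step can be sketched in a few lines of Python. To keep the sketch self-contained, pen events here are plain tuples rather than evdev events (the real project reads them from the tablet via python-evdev's `read_loop()`); the tablet coordinate range, laser working area, and the `S200` power value are all illustrative assumptions, not values from the project.

```python
# Minimal sketch of the pen-to-G-code translation, with assumed dimensions.
TABLET_MAX = (21600, 13500)   # raw tablet coordinate range (assumed)
LASER_MAX = (100.0, 100.0)    # laser working area in mm (assumed)

def scale(x, y):
    """Map raw tablet coordinates onto the laser's working area."""
    return (x / TABLET_MAX[0] * LASER_MAX[0],
            y / TABLET_MAX[1] * LASER_MAX[1])

def gcode_stream(events):
    """Translate ('pen', down) and ('move', x, y) events into G-code lines."""
    pen_down = False
    for ev in events:
        if ev[0] == "pen":
            pen_down = ev[1]
            # M3/M5 switch the laser on and off (GRBL-style commands)
            yield "M3 S200" if pen_down else "M5"
        elif ev[0] == "move":
            lx, ly = scale(ev[1], ev[2])
            # G1 burns while the laser is on; G0 travels while it is off
            yield f"{'G1' if pen_down else 'G0'} X{lx:.2f} Y{ly:.2f}"

# Pen touches down, strokes to the center of the tablet, lifts off:
demo = [("pen", True), ("move", 10800, 6750), ("pen", False)]
print(list(gcode_stream(demo)))  # ['M3 S200', 'G1 X50.00 Y50.00', 'M5']
```

In the real pipeline these lines would be written to the engraver's serial port as they are generated, which is what makes the drawing appear in real time.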
It’s a playful project, but it also exists as a highly modular concept that can be adapted to different uses. If you’re looking at this and sensing a visit from the Good Ideas Fairy, check out the GitHub repository for more technical details plus tips for adapting it to other hardware.
In the early 1970s, the idea of an ordinary person owning a computer sounded absurd. Computers back then were more like aircraft carriers or nuclear power plants than household appliances – vast machines housed in data centres operated by teams of specialists, serving governments, universities and large corporations.
Apple, the Silicon Valley start-up founded on April 1 1976 by ‘college dropouts’ Steve Jobs and Steve Wozniak, did not invent computing. What it did was arguably more important: it helped turn computing into a personal technology.
Before Apple, computers were largely sold in kit form. Jobs saw that people wanted them pre-assembled and ready to run. The earliest Apple I units, featuring handmade koa wooden cases, now sell for hundreds of thousands of dollars.
As an early Apple adopter and app developer, here’s my selection of the company’s (and Jobs’s) most significant technological achievements over the last 50 years.
Apple II – beige yet distinctive
Early personal computers were more curiosities than practical tools. The Apple II, launched in June 1977, introduced something new: style. Even its colour – beige – was distinctive, contrasting with the black metal boxes common at that time.
The use of colour graphics was both new and exciting, and the keyboard felt satisfying to use. A simple speaker, with only a single-bit output, was ingeniously coaxed into producing tones and even speech-like sounds. The design revolution stretched as far as the packaging: Jerry Manock, Apple’s first in-house designer, placed the machine in a moulded plastic case which looked sleek and professional.
The mouse – a whole new way of interacting
By 1979, the 24-year-old Jobs – sensing that tech giant IBM was catching up with Apple – went looking for the next big thing. The photocopier company Xerox, wanting pre-IPO shares in Apple, offered a visit to its nearby research labs as an inducement. Jobs realised that researchers such as Alan Kay at Xerox’s Palo Alto research centre were creating the next generation of computing interfaces.
Central to this was a device invented by Kay’s mentor, Douglas Engelbart, at the Stanford Research Institute in the mid-1960s and nicknamed ‘the mouse’. Engelbart’s vision of computers as machines to augment the human mind inspired Kay and colleagues to create graphical displays in which users interacted with scrollbars, buttons, menus and windows.
Macintosh – dawn of the modern product launch
Jobs thought anyone should be able to use a computer. In January 1984, the first Apple Mac pushed this idea to new extremes. The traditional need for obscure computer commands (and manuals) vanished. Early adopters such as myself felt we just knew how to do everything.
But the Mac’s launch was not just another technological leap for Apple. It also inspired the now-familiar cultural moment of the modern product launch. Following a teasing Super Bowl advert directed by Ridley Scott, Jobs used a 1,500-seat theatre on January 24 to create a stage performance centred on a single charismatic presenter. Jobs let a small, square and still-beige computer (then known as Macintosh) out of its bag – and it began speaking for itself, to rapturous applause.
Pixar – Jobs’s side hustle
In its first decade, Apple grew at an exceptional rate – but it also came close to financial collapse on several occasions. This led to one of the most dramatic moments in Apple’s history when, in May 1985, the company forced Jobs out.
A year later and now in charge of the start-up NeXT Inc, Jobs bought a division of George Lucas’s film company which was soon rebranded as Pixar. Its RenderMan software generated images by distributing processing across multiple machines simultaneously.
Pixar, jokingly referred to as Jobs’s “side hustle”, would become one of the world’s most influential (and valuable) animation production companies, having released the first fully computer-animated feature film in Toy Story (1995).
iMac – a meeting of minds
After a failed attempt to develop a new operating system with IBM, Apple eventually bought Jobs’s company NeXT. In September 1997, he returned to Apple as interim CEO with the company “two months from bankruptcy”. The move, though welcomed by many Apple users, terrified some of its employees. Jobs quickly began firing staff and shutting down failed products.
During this restructuring, he visited Apple’s design studio and immediately hit it off with young British designer Jony Ive. Their meeting of minds led to the 1998 candy-coloured translucent iMac. Essentially a smaller, cheaper NeXT machine, the iMac (the ‘i’ stood for internet) also kicked off another Apple habit: abandoning ageing technology. The floppy disk drive was ditched in favour of a CD drive, a move heavily criticised at the time but later widely copied.
iPod – 1,000 songs in your pocket
For Apple, computing was always about more than, well, computing. In 2001, the company began focusing on processing sound and video, not just text and pictures. By November that year, it had released the iPod – a personal music player capable of storing “1,000 songs in your pocket”, compared with a maximum of 20-30 on each cassette tape in a Sony Walkman.
The iPod used an elegant ‘click wheel’ to operate the screen. Music was synced through a new application called iTunes. By 2005, people were using iTunes to subscribe to audio delivered automatically over the internet via RSS feeds. This in turn put the pod in podcasting.
iPhone – a computer in everyone’s hands
By 2007, many mobile phone companies had approached Apple about merging the iPod with their phones. Instead, on January 9, Jobs unveiled Apple’s most ambitious product yet: a combined phone, music player and Mac computer, all in a handset with no physical keyboard and a huge screen.
Most media ‘experts’, from TechCrunch to the Guardian, predicted the iPhone would bomb. Steve Ballmer, then CEO of Microsoft, mocked the US$500 price tag, saying nobody would buy it. In fact, 1.4m iPhones were sold by the end of the year – and over 3bn more since then. This truly put a computer into everyone’s hands – and opened the door to social media as we know it today.
App Store’s software revolution
By mid-2008, the iPhone gave third-party developers the chance to create a dizzying range of new applications. At the same time, the App Store – launched on July 10 2008 – addressed one of the most complex problems: how to distribute and commercialise these ‘apps’. Historically, software was often copied and distributed freely. The App Store changed this, using strong encryption to tie each purchased copy to a specific user, sharply curbing software piracy.
By establishing the first (eponymous) App Store, Apple changed the way people discover and purchase software. This led to an explosion of apps and a simple but powerful idea: whatever you wanted to do, someone, somewhere, had already built it. Apple captured this shift in a slogan that became part of everyday language: “There’s an app for that”.
Time and again, this extraordinary company has anticipated the value of opening up computing to everyone. Happy birthday, Apple.
Nick Dalton is an associate professor in the School of Computer Science at Northumbria University in Newcastle. His background is as a computer scientist crossing between architecture and computation. His principal area of expertise is in the design, development and evaluation of human computer interfaces with a specialism in the design of ubiquitous computing technology.
There’s a great debate these days about what the current crop of AI chatbots should and shouldn’t do for you. We aren’t wise enough to know the answer, but we were interested in hearing what is, apparently, Microsoft’s take on it. Looking at their terms of service for Copilot, we read in the original bold:
Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.
While that’s good advice, we are pretty sure we’ve seen people use LLMs, including Copilot, for decidedly non-entertaining tasks. But, at least for now, if you are using Copilot for non-entertainment purposes, you are violating the terms of service.
Legal
While we know how it is when lawyers get involved in anything, we can’t help but think this is simply a hedge so that when Copilot gives you the wrong directions or a recipe for cake that uses bleach, they can say, “We told you not to use this for anything.”
It reminds us of the Prohibition-era product called a grape block. It featured a stern warning on the label that said: “Warning. Do not place product in one quart of water in a cool, dark place for more than two weeks, or else an illegal alcoholic beverage will result.” That doesn’t fool anyone.
We get it. They are just covering their… bases. When you do something stupid based on output from Copilot, they can say, “Oh, yeah, that was just for entertainment.” But they know what you are doing, and they even encourage it. Heck, they’re doing it themselves. Would it stand up in court? We don’t know.
Others
Now it is true that probably everyone will give you a similar warning. OpenAI, for example, has this to say:
Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
Notice that it doesn’t pretend you are only using it for a chuckle. Anthropic has even more wording, but still stops short of pretending to be a party game. Copilot, on the other hand, is for fun.
Your Turn
How about you? Do you use any of the LLMs for anything other than “entertainment”? If you do, how do you validate the responses you get?
When things do go wrong, who should be liable? There have been court cases where LLM companies have been sued for everything, ranging from users committing suicide to defaming people. Are the companies behind these tools responsible? Should they be?
Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, “Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions … that developers had shared on programming platform GitHub.” From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic’s tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories — a process it calls dreaming. Another appears to instruct Claude Code in some cases to go “undercover” and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared pointed at future product releases. The code even included a Tamagotchi-style pet called “Buddy” that users could interact with.
After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.
Right wing broadcasters are having a very good time under Brendan Carr, who has looked to destroy all remaining media consolidation limits to let them merge. Such companies, like Sinclair, Nexstar, and Tegna, don’t do journalism so much as they do soggy, right wing propaganda and infotainment, usually with endless fear mongering about drugs, homelessness, and crime rates.
They’re just one part of the right wing’s effort to remake the entirety of media into a massive safe space for dim autocrats.
Carr’s latest effort: he rubber stamped Nexstar Media Group’s $6.2 billion purchase of Tegna behind closed doors. Carr let the merged companies ignore our remaining media consolidation limits, which prevent one company from being the primary broadcast news voice for more than 39 percent of households (the new combined company reaches 54.5 percent).
Nexstar (a very Republican friendly company that also owns The Hill), not that long ago fired a journalist whose reporting angered Trump. Combined with Tegna, the two companies will own 221 Big Four broadcast stations, or more than half of the U.S. stations affiliated with FOX, NBC, ABC, or CBS.
Carr’s been on a campaign to ensure these right-wing loyal companies have more power in their dealings with their national counterparts (remember how they helped Carr censor Jimmy Kimmel?). The efforts come as more and more Americans live in “local news deserts” where quality local journalism simply no longer exists.
Anna Gomez, the lone Democrat left at the FCC (Republicans refuse to fill the other seat), didn’t have nice things to say about Carr’s decision to ignore the public interest protections without a transparent, public vote (indicating Carr clearly knew the move would be unpopular).
As always, Carr’s order approving the merger leverages all manner of pseudo-legalistic sounding bullshit to justify ignoring Congress and the law. And he parrots a bunch of completely empty promises by Nexstar that they’ll ramp up the production of more “local news”:
“We note that Nexstar has made significant commitments in the agency’s record as well, further ensuring that this transaction promotes the public interest. To further serve its local communities, Nexstar commits to expanding its investment in local news and programming, including increasing the amount of local news it provides in acquired markets.”
Except again, by “news” we mean right wing propaganda. And Brendan Carr never meaningfully holds corporate power accountable for anything, unless it involves a comedian making fun of the president or companies not being suitably racist enough for the president’s liking.
Eight states have already filed a lawsuit challenging the legality of the decision. The lawsuits understandably focus heavily on the competition impacts, and the likely higher cable TV prices that will result for most of you:
“By consolidating with a major competitor, Nexstar would likely acquire the power to charge MVPDs higher retransmission consent fees for Big 4 station content. In turn, those MVPDs would likely pass on the increased retransmission consent fees, in large measure, to their subscribers in the form of substantially higher cable and satellite bills.”
California regulators attempted to slow the process down by proposing a standard timing agreement with Nexstar, where the company would suspend its acquisition of Tegna until the state completed its investigation.
But something of particular note: on pages 16-17 of the states’ amended complaint, it becomes clear that Nexstar completely ignored the State AGs for 8 days, then ignored their lawsuit for another 18 hours, and then told the state AGs “The relief sought in your Complaint is no longer available.”
In other words, what passes for some of the only real antitrust enforcement we have (a scattered coalition of states) have to fight both consolidated corporate power and the authoritarian, corrupt government simultaneously to make any inroads in the public interest.
“This is completely unprecedented,” Free Press (the consumer group, not the Bari Weiss troll farm) Research Director S. Derek Turner told me via email. “Nexstar and the Trump DOJ and FCC seem to have acted in concert to deprive the citizens of these 8 states of their rights to have our AG enforce the antitrust laws on our behalf.”
If Carr succeeds here, I suspect it won’t be long before you see Sinclair and this new combined company merge. Carr is also fielding requests by the big four national broadcasters to eliminate restrictions preventing them from merging as well (one of many reasons they’ve been so feckless). After that, you’ll likely see more consolidation across telecom, tech, and media.
It is, just in case we’ve forgotten, the complete opposite of the “antitrust reform populism” Trump, and a long line of useful idiots, promised last election season.
While this is certainly an act of some desperation (less than 20% of all U.S. TV viewing is now broadcast), claiming this doesn’t matter because it’s “just local broadcasting” and “the future is the internet” (something I see often) is a violent misread of the dire stakes of the situation. This aggressive, Trump-loyal consolidation hasn’t been, and won’t be, confined to broadcast television (see: Twitter, TikTok).
This is, to be clear, a coordinated and illegal authoritarian/corporatist effort to ignore the public interest and the law to expand right wing propaganda’s power over an already clearly befuddled and broadly misinformed electorate. Right wingers will continue to engage in this quest to dominate the entirety of U.S. media (following in the steps of Victor Orban in Hungary) until they run into something other than the political and policy equivalent of soft pudding.
Isaiah Taylor was sixteen when he decided the nuclear industry had a size problem. Not that reactors were too dangerous or too expensive, though they are both, but that they were simply too big. The multi-gigawatt monuments to Cold War-era engineering that still dot the American landscape were designed for a grid that moved power in one direction: from a distant plant to a distant city. They were never meant to sit behind a hyperscaler’s fence line, feeding a cluster of GPU racks whose appetite doubles every eighteen months.
Taylor, now 27, founded Valar Atomics in 2023 to build something different. On Tuesday, the El Segundo, California-based startup announced it has raised $450 million at a $2 billion valuation, according to Bloomberg. The round comprises $340 million in equity and $110 million in debt, and it lands barely five months after a $130 million Series A that valued the company at a fraction of its current price.
The backers read like a roster of the American defence-tech establishment that has lately been writing enormous cheques. Palmer Luckey, the Anduril Industries founder whose company was recently reported to be pursuing a $4 billion raise at a $60 billion valuation, is an investor. So is Shyam Sankar, the chief technology officer of Palantir Technologies. The earlier Series A was led by Snowpoint Ventures, the firm co-founded by Doug Philippone, Palantir’s former head of global defence, alongside Day One Ventures and Dream Ventures. Lockheed Martin board member and former AT&T chief executive John Donovan also participated.
Valar’s pitch is built around what it calls “gigasites”, sprawling industrial campuses that would host hundreds or even thousands of small, high-temperature gas-cooled reactors operating in concert. Each unit uses helium as a coolant and TRISO fuel encased in graphite, a combination that allows the reactors to run at significantly higher temperatures than conventional light-water designs. The company says these clusters can deliver dense, steady, carbon-free power tailored to the load profiles of AI data centres, industrial manufacturers, and grid-constrained regions.
It is an audacious answer to an increasingly urgent question: where will the electricity come from? The International Energy Agency projects that data-centre power consumption will double by 2026. Goldman Sachs estimates that 85 to 90 gigawatts of new nuclear capacity will eventually be needed to help fill the gap. Microsoft, Amazon, and Google have all signed nuclear power agreements in recent months, but the reactors those deals depend on do not yet exist at commercial scale.
Valar claims a meaningful head start. In November 2025, the company announced that its NOVA Core achieved zero-power criticality at Los Alamos National Laboratory’s National Criticality Experiments Research Centre, making it what the Breakthrough Institute described as the first company to reach that milestone under the US Department of Energy’s Nuclear Reactor Pilot Programme. Zero-power criticality — a self-sustaining chain reaction of uranium-235 without reaching full operating temperatures — is a necessary validation step, not a working power plant, but it is further than most of Valar’s competitors have publicly demonstrated.
The company is now preparing its Ward250 reactor, a 100-kilowatt thermal high-temperature gas-cooled unit, for power operations at the Utah San Rafael Energy Research Centre. In February 2026, the reactor was airlifted from California to Utah aboard three C-17 Globemaster military cargo aircraft in a joint operation between the Departments of Defence and Energy — a logistical stunt that doubled as a proof of concept for rapid reactor deployment. Valar is targeting operational status before 4 July 2026, the deadline the DOE set for three reactors in its pilot programme to achieve criticality.
Taylor’s trajectory has been unconventional even by deep-tech standards. A self-taught coder who launched his first venture as a teenager, he comes from a family with nuclear roots: his great-grandfather, Ward Schaap, was a physicist on the Manhattan Project. The Ward250 reactor carries Schaap’s name. Taylor has assembled a leadership team that includes Mark Mitchell, the former president of Ultra Safe Nuclear Corporation, and Muhammad Shahzad, the former president and chief financial officer of Relativity Space.
The competitive field is crowded and well-funded. TerraPower, backed by Bill Gates, broke ground on a sodium-cooled reactor in Wyoming last year. Kairos Power is building a molten-salt demonstration plant in Tennessee. X-energy has a partnership with Dow Chemical for an industrial HTGR. Oklo, which went public via a SPAC in 2024, is developing a fast-neutron microreactor. None has yet delivered commercial power from an advanced design.
Valar has also taken a combative approach to regulation that few young companies would risk. In April 2025, the startup sued the Nuclear Regulatory Commission, arguing that the agency’s licensing framework unlawfully restricts small-scale reactor innovation by requiring the same approval process for low-power test reactors as for full-scale commercial plants. The lawsuit, filed alongside the states of Texas, Utah, Louisiana, Florida, and Arizona, as well as fellow reactor startups Last Energy and Deep Fission, seeks to shift regulatory authority for small reactors to individual states. The case has since been paused amid the Trump administration’s broader executive order to overhaul the NRC.
The $2 billion valuation places Valar among the most richly valued nuclear startups in the United States, a distinction that would have seemed absurd five years ago. Whether the premium reflects genuine confidence in the technology or the gravitational pull of AI-adjacent capital is a question the next eighteen months should begin to answer. If the Ward250 reaches power operations in Utah this summer, Valar will have done something no advanced-reactor startup has managed: moved from incorporation to criticality to grid-connected electricity in roughly three years. If it does not, $2 billion will buy a very expensive physics experiment in the desert.
A delivery of medical supplies by Project C.U.R.E. (Project C.U.R.E. Photo)
[Editor’s Note: Agents of Transformation is an independent GeekWire series, underwritten by Accenture, exploring the adoption and impact of AI and agents. See coverage of our related event.]
Project C.U.R.E. had the answers. Decades of repair manuals for X-ray machines, anesthesia equipment and other medical devices — plus inventory data for the 250 semi-truck containers of supplies it ships to clinics worldwide every year. The problem was access: the archives had grown too large for any one person to navigate.
Now the nonprofit is turning to AI to start unlocking those resources, using the technology to predict future supply needs and search its manuals database for specific fixes.
“We’ve got almost 40 years of manuals,” said Doug Jackson, CEO of Project C.U.R.E., a Denver nonprofit providing medical aid. “There’s no way that any one person can sit down in a room and read through all those manuals. But AI can.”
Project C.U.R.E. was among 1,500 organizations in Bellevue, Wash., last week for Microsoft’s Global Nonprofit Leadership Summit, which centered on a high-stakes paradox for the social sector. The event’s focus was accelerated AI adoption and agentic tools, but the move toward automation keeps running into the gap between the technology’s potential and the real costs, skills and time required to deploy it.
Children International, a Kansas City-based organization serving impoverished youth, found a way to bridge that divide. Its employees are using AI agents for tasks including bulk translation of the letters sent from donors to children receiving their support.
“We had to do something different,” said Tim Batcha, vice president of Global Information Technology at Children International, speaking at the summit. He explained that too much effort was going toward day-to-day operations instead of advancing the nonprofit’s core mission.
To help others eager to deploy AI, the tech giant last Wednesday unveiled Microsoft Elevate for Changemakers, which expands the company’s Elevate program launched last July.
The initiative has three components:
“AI for Nonprofits” credential: The professional certificate created with LinkedIn and NetHope develops skills applicable for this specific sector.
AI skills training: Live and on-demand instruction modules are focused on nonprofit needs and target areas such as Microsoft Copilot’s agentic tools, change management and responsible AI governance.
Changemaker Fellowship: The program creates a global cohort supporting fellows deploying AI in their operations and is funded by Microsoft, EY, Caribou and others.
Inclusion and anxiety
Justin Spelhaug, president of Microsoft Elevate, left, and Tim Batcha, vice president of Global Information Technology at Children International, speaking at a Microsoft summit on March 25. (GeekWire Photo / Lisa Stiffler)
Changemakers aims to address challenges Microsoft’s own leaders repeatedly acknowledged at the summit — that while AI is likely to be one of the most influential technologies of this era, it is also creating widespread concerns around job loss and other community impacts, and threatens to further widen tech inequities worldwide.
“This defining moment of our time can either be more inclusive or it can be less inclusive based on the decisions that we make in rooms like this all around the world,” said Justin Spelhaug, president of Microsoft Elevate, addressing attendees.
Microsoft President Brad Smith said that one of the best ways to overcome fears and build support for AI is to get people using the technology at home and in their work.
“Anxiety, especially in the United States, has reached people before AI has,” Smith said.
The company has committed to providing more than $5 billion in support for nonprofits over the next year alone through discounts and donations of its technology, as well as grants.
In an interview with GeekWire, Spelhaug pointed to two key operations where AI is likely to have the greatest impact for nonprofits:
Fielding calls from the people the organizations serve, answering basic questions and addressing straightforward needs, replacing automated phone systems built around a “press 1, press 2” menu.
Improving fundraising by tracking donor information, providing personalized communications and supporting lead follow ups.
“There’s no shortage of problems in the world to solve,” Spelhaug said. “Let’s get people solving those problems and AI taking care of the work that it can take care of.”
AI ambitions and experiments
Seattle-based Evergreen Goodwill is testing AI as a tool for managing the millions of pounds of donated apparel and household goods it sells or tries to recycle every year.
The century-old nonprofit was selected last year as an AI for Good Lab grant recipient and is using the funds to pilot the use of AI in pricing some of the roughly 26 million items it processes annually. It’s testing computer vision tech at one site that scans items and suggests prices — currently requiring staff to display individual items, but eventually aimed at an automated system.
Manually sorting and pricing “is very high stress,” said Brent Deim, Goodwill’s vice president of technology. The tech should help employees work faster and build AI skills, and its language capabilities could open up roles for people with limited English.
The AI-enabled tech should also result in more consistent pricing, prevent undervaluing of items, and ultimately increase proceeds that fund its free education and job training programs.
Evergreen Goodwill needs to price 26 million donated items each year to sell in its stores. (Evergreen Goodwill Photo)
And those initiatives are another opportunity for integrating AI, said Huan Do, Goodwill’s VP of mission advancement. Do is eager to apply the Changemakers’ AI credential to the programs to “enable our students to be the best employees available for a 21st century workforce.”
Rapid pace of change
Jackson of Project C.U.R.E. has his own ambitious ideas for AI. One is to create videos with avatars that guide healthcare workers in remote communities in repairing broken medical devices themselves — avatars that reflect the people being assisted, speaking their language and dialect.
But he also recognizes the hurdles to making AI initiatives a reality. For his 35-person team — even with 35,000 volunteer supporters — budget and staffing constraints loom large.
So do the challenges of digitizing historic paper records, persuading clinics to enter current operational data, and navigating privacy and data-use concerns. Keeping pace with rapidly evolving technologies, and ensuring those technologies can talk to one another, adds further pressure.
“I’m just sitting here thinking, ‘Oh man, we are so far behind already,’” Jackson said after a tech demo at the summit. “We’ll try to get there.”