Isaiah Taylor was sixteen when he decided the nuclear industry had a size problem. Not that reactors were too dangerous or too expensive, though they are both, but that they were simply too big. The multi-gigawatt monuments to Cold War-era engineering that still dot the American landscape were designed for a grid that moved power in one direction: from a distant plant to a distant city. They were never meant to sit behind a hyperscaler’s fence line, feeding a cluster of GPU racks whose appetite doubles every eighteen months.
Taylor, now 27, founded Valar Atomics in 2023 to build something different. On Tuesday, the El Segundo, California-based startup announced it has raised $450 million at a $2 billion valuation, according to Bloomberg. The round comprises $340 million in equity and $110 million in debt, and it lands barely five months after a $130 million Series A that valued the company at a fraction of its current price.
The backers read like a roster of the American defence-tech establishment that has lately been writing enormous cheques. Palmer Luckey, the Anduril Industries founder whose company was recently reported to be pursuing a $4 billion raise at a $60 billion valuation, is an investor. So is Shyam Sankar, the chief technology officer of Palantir Technologies. The earlier Series A was led by Snowpoint Ventures, the firm co-founded by Doug Philippone, Palantir’s former head of global defence, alongside Day One Ventures and Dream Ventures. Lockheed Martin board member and former AT&T chief executive John Donovan also participated.
Valar’s pitch is built around what it calls “gigasites”, sprawling industrial campuses that would host hundreds or even thousands of small, high-temperature gas-cooled reactors operating in concert. Each unit uses helium as a coolant and TRISO fuel encased in graphite, a combination that allows the reactors to run at significantly higher temperatures than conventional light-water designs. The company says these clusters can deliver dense, steady, carbon-free power tailored to the load profiles of AI data centres, industrial manufacturers, and grid-constrained regions.
It is an audacious answer to an increasingly urgent question: where will the electricity come from? The International Energy Agency projects that data-centre power consumption will double by 2026. Goldman Sachs estimates that 85 to 90 gigawatts of new nuclear capacity will eventually be needed to help fill the gap. Microsoft, Amazon, and Google have all signed nuclear power agreements in recent months, but the reactors those deals depend on do not yet exist at commercial scale.
Valar claims a meaningful head start. In November 2025, the company announced that its NOVA Core achieved zero-power criticality at Los Alamos National Laboratory’s National Criticality Experiments Research Centre, making it what the Breakthrough Institute described as the first company to reach that milestone under the US Department of Energy’s Nuclear Reactor Pilot Programme. Zero-power criticality — a self-sustaining chain reaction of uranium-235 without reaching full operating temperatures — is a necessary validation step, not a working power plant, but it is further than most of Valar’s competitors have publicly demonstrated.
The company is now preparing its Ward250 reactor, a 100-kilowatt thermal high-temperature gas-cooled unit, for power operations at the Utah San Rafael Energy Research Centre. In February 2026, the reactor was airlifted from California to Utah aboard three C-17 Globemaster military cargo aircraft in a joint operation between the Departments of Defence and Energy — a logistical stunt that doubled as a proof of concept for rapid reactor deployment. Valar is targeting operational status before 4 July 2026, the deadline the DOE set for three reactors in its pilot programme to achieve criticality.
Taylor’s trajectory has been unconventional even by deep-tech standards. A self-taught coder who launched his first venture as a teenager, he comes from a family with nuclear roots: his great-grandfather, Ward Schaap, was a physicist on the Manhattan Project. The Ward250 reactor carries Schaap’s name. Taylor has assembled a leadership team that includes Mark Mitchell, the former president of Ultra Safe Nuclear Corporation, and Muhammad Shahzad, the former president and chief financial officer of Relativity Space.
The competitive field is crowded and well-funded. TerraPower, backed by Bill Gates, broke ground on a sodium-cooled reactor in Wyoming last year. Kairos Power is building a molten-salt demonstration plant in Tennessee. X-energy has a partnership with Dow Chemical for an industrial HTGR. Oklo, which went public via a SPAC in 2024, is developing a fast-neutron microreactor. None has yet delivered commercial power from an advanced design.
Valar has also taken a combative approach to regulation that few young companies would risk. In April 2025, the startup sued the Nuclear Regulatory Commission, arguing that the agency’s licensing framework unlawfully restricts small-scale reactor innovation by requiring the same approval process for low-power test reactors as for full-scale commercial plants. The lawsuit, filed alongside the states of Texas, Utah, Louisiana, Florida, and Arizona, as well as fellow reactor startups Last Energy and Deep Fission, seeks to shift regulatory authority for small reactors to individual states. The case has since been paused amid the Trump administration’s broader executive order to overhaul the NRC.
The $2 billion valuation places Valar among the most richly valued nuclear startups in the United States, a distinction that would have seemed absurd five years ago. Whether the premium reflects genuine confidence in the technology or the gravitational pull of AI-adjacent capital is a question the next eighteen months should begin to answer. If the Ward250 reaches power operations in Utah this summer, Valar will have done something no advanced-reactor startup has managed: moved from incorporation to criticality to grid-connected electricity in roughly three years. If it does not, $2 billion will buy a very expensive physics experiment in the desert.
Traffic lights can be tricky, depending on where you go. The response you have to a red light at an intersection in one state may not be the same response you need at an intersection in another state. Turning right on red can even get you a ticket in some U.S. cities. In Florida, though, a right turn traffic light may still allow a right turn after stopping, but there’s a bit more to it than that.
First off, you must come to a complete stop at the red light. If you keep rolling through the turn instead, you could get a ticket. Next, if there are no posted warning signs at the light, Florida law says you can go ahead and turn right once it’s clear to do so. But if you have a sign warning you that there’s no turn on red, then you’re stuck. Stay where you are until you get the green light.
Similarly, if you have a red right arrow, you of course must fully stop then as well. But don’t let the arrow fool you, as it’s not an automatic signal that you can just turn once the way is clear. If there are no signs posted that say otherwise (such as a “No turn on red” sign), you may proceed after determining that it is safe to do so. This is the case whether you’re at an intersection or a crosswalk.
Crosswalks and malfunctioning traffic lights
If you come to a right turn traffic light at a crosswalk in Florida, keep in mind that you are expected to yield to any pedestrians who are crossing. Even if you’ve come to a complete stop and are otherwise allowed to turn, you must wait. If your light turns green and someone is still in the process of crossing, you should wait then as well. Additionally, if you’re at an intersection with sidewalks but no clearly marked crosswalk present, you still have to yield.
However, there could be times you arrive at a right turn traffic light that’s malfunctioning. Maybe it’s blinking, stuck, or completely dead. If this happens, Florida law states that you must treat it as a four-way stop sign. That means you must come to a complete stop and yield right of way to traffic coming from all directions. Of course, you must also yield to any pedestrians crossing in front of you. Once the way clears and you have an open right turn, you’re free to go. Always be cautious when arriving at a light that’s out of order and make sure the intersection is fully clear before you continue.
Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from the mouse movements and keystrokes of its own staff in its pursuit to build more capable and efficient artificial intelligence.
The story, which was first reported by Reuters, shows the lengths to which tech companies are going to find new sources of training data — the lifeblood of AI models that helps the programs learn how to more effectively carry out tasks and respond to user queries.
When reached for comment by TechCrunch, a Meta spokesperson provided the following statement: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”
This trend reveals a troublesome privacy dimension of the AI industry. Last week it was reported that old startups are being scavenged for their corporate communications (like Slack archives and Jira tickets), and converted into AI training data.
The Game Pass front page on Microsoft’s website now shows revised pricing for the service’s two most expensive plans. Although delaying the addition of new Call of Duty titles marks a reversal of the company’s earlier strategy, the expanded library introduced during last year’s major price increase remains intact.
Cash App, the banking and payments app run by Block, has added support for parent-managed kids accounts. The new accounts include key benefits from the service’s normal account, with an eye towards teaching financial literacy to younger users ages 6 to 12. Cash App first allowed teenage users on its platform in 2021.
As part of the “expanded Cash App Families experience,” eligible legal guardians and parents can create managed accounts that offer “a dedicated place on the platform to send allowances, set aside savings, and track spending for their child, kickstarting their path to financial independence,” Cash App says. Adults managing these accounts will be able to set up recurring transfers, see how their child is spending and do things like lock their child’s account to prevent transactions. Kids will get a custom debit card and the ability to receive payments from up to five trusted accounts, though notably they won’t be able to access Cash App itself.
Cash App says managed accounts are designed for kids 6 through 12. Once those kids turn 13, Cash App says parents will be able to choose to convert their account to a “sponsored account” to unlock more features, like the ability to send and receive payments, invest in stocks or trade crypto. Those sponsored accounts are technically still monitored and controlled by a parent or legal guardian, but they do give 13-year-olds more control over how they use their money.
A parent-managed account for kids is not a new idea in the fintech space, though Cash App is trying to reach a younger audience than some of its competitors. Venmo rolled out access to its payment platform to teens between the ages of 13 and 17 in 2023. Separately, Apple and Google offer their own kids accounts through Apple Cash Family and Google Wallet.
Florida’s attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is “not responsible for this terrible crime” and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner’s chat logs. “My prosecutors have looked at this and they’ve told me, if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising people on how to kill others.”
Uthmeier’s office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. “We are going to look at who knew what, designed what, or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”
[…] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU’s Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.
Anthropic’s buzzy announcement about using AI to improve cybersecurity earlier this month was met with plenty of skepticism. However, Mozilla has now shared details that support the use of the company’s special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla’s team find and patch 271 vulnerabilities in the latest release of the Firefox browser. “So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” the foundation said.
The blog post from Mozilla feels like a positive sign for Anthropic’s Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there’s something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI wasn’t able to turn up any bugs that a human wouldn’t have been able to find, given enough time and resources, which indicates that AI isn’t presently able to do more to crack cybersecurity protections than a person can.
An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for those Firefox users who aren’t personally interested in applying any generative AI in their browsing, Mozilla has offered the option to turn it all off for the past several months.
Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product’s debut, launching two new agents — Deep Research and Deep Research Max — that for the first time allow developers to fuse open web data with proprietary enterprise information through a single API call, produce native charts and infographics inside research reports, and connect to arbitrary third-party data sources through the Model Context Protocol (MCP).
The release, built on Google’s Gemini 3.1 Pro model, marks an inflection point in the rapidly intensifying race to build AI systems that can autonomously conduct the kind of exhaustive, multi-source research that has traditionally consumed hours or days of human analyst time. It also represents Google’s clearest bid yet to position its AI infrastructure as the backbone for enterprise research workflows in finance, life sciences, and market intelligence — industries where the stakes of getting information wrong are extraordinarily high.
“We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation,” Google CEO Sundar Pichai wrote on X. “Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering & synthesis using extended test-time compute — achieving 93.3% on DeepSearchQA and 54.6% on HLE.”
Both agents are available starting today in public preview via paid tiers of the Gemini API, accessible through the Interactions API that Google first introduced in December 2025.
Advertisement
Why Google built two research agents instead of one
The launch introduces a tiered architecture that reflects a fundamental tension in AI agent design: the tradeoff between speed and thoroughness.
Deep Research, the standard tier, replaces the preview agent Google released in December and is optimized for low-latency, interactive use cases. It delivers what Google describes as significantly reduced latency and cost at higher quality levels compared to its predecessor. The company positions it as ideal for applications where a developer wants to embed research capabilities directly into a user-facing interface — think a financial dashboard that can answer complex analytical questions in near-real time.
Deep Research Max occupies the opposite end of the spectrum. It leverages extended test-time compute — a technique where the model spends more computational cycles iteratively reasoning, searching, and refining its output before delivering a final report. Google designed it for asynchronous, background workflows: the kind of task where an analyst team kicks off a batch of due diligence reports before leaving the office and expects exhaustive, fully sourced analyses waiting for them the next morning.
The Google DeepMind team framed the distinction on X: “Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses. Deep Research Max: It uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.”
“Deep Research was our first hosted agent in the API and has gained a ton of traction over the last 3 months, very excited for folks to test out the new agents and all the improvements, this is just the start of our agents journey,” Logan Kilpatrick, who leads developer relations for Google’s AI efforts, wrote on X.
MCP support lets the agents tap into private enterprise data for the first time
Perhaps the most consequential feature in today’s release is the addition of Model Context Protocol support, which transforms Deep Research from a sophisticated web research tool into something more closely resembling a universal data analyst.
MCP, an emerging open standard for connecting AI models to external data sources, allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services — all without requiring sensitive information to leave its source environment. In practical terms, this means a hedge fund could point Deep Research at its internal deal-flow database and a financial data terminal simultaneously, then ask the agent to synthesize insights from both alongside publicly available information from the web.
Google disclosed that it is actively collaborating with FactSet, S&P, and PitchBook on their MCP server designs, a signal that the company is pursuing deep integration with the data providers that Wall Street and the broader financial services industry already rely on daily. The goal, according to the blog post authored by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, is to “let shared customers integrate financial data offerings into workflows powered by Deep Research, and to enable them to realize a leap in productivity by gathering context using their exhaustive data universes at lightning speed.”
This addresses one of the most persistent pain points in enterprise AI adoption: the gap between what a model can find on the open internet and what an organization actually needs to make decisions. Until now, bridging that gap required significant custom engineering. MCP support, combined with Deep Research’s autonomous browsing and reasoning capabilities, collapses much of that complexity into a configuration step. Developers can now run Deep Research with Google Search, remote MCP servers, URL Context, Code Execution, and File Search simultaneously — or turn off web access entirely to search exclusively over custom data. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.
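To make the shape of this concrete, a request could combine open-web search, a private MCP server, and code execution in a single call along roughly the following lines. This is a minimal sketch only: the endpoint, field names, and agent identifiers below are assumptions inferred from the announcement, not Google’s documented Interactions API, and the MCP server URL is hypothetical.

```python
# Hypothetical sketch of a Deep Research run mixing Google Search with a
# private MCP data source. The endpoint, field names, and agent identifiers
# are illustrative assumptions, not the documented Interactions API.
import os
import requests

API_URL = "https://example.invalid/v1/interactions"  # placeholder endpoint

payload = {
    "agent": "deep-research-max",        # or "deep-research" for low latency
    "input": "Assess ACME Corp's Q3 revenue drivers against sector peers.",
    "tools": [
        {"type": "google_search"},                      # open-web grounding
        {
            "type": "mcp_server",                       # private data source
            "url": "https://mcp.internal.example.com",  # hypothetical server
            "auth_token": os.environ["MCP_TOKEN"],
        },
        {"type": "code_execution"},                     # enables chart generation
    ],
    "output": {"format": "markdown", "inline_charts": True},
}

# Max runs use extended test-time compute, so a production client would poll
# or stream intermediate steps rather than block on one long request.
response = requests.post(API_URL, json=payload, timeout=600)
print(response.json()["report"])
```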
Native charts and infographics turn AI reports into stakeholder-ready deliverables
The second headline feature — native chart and infographic generation — may sound incremental, but it addresses a practical limitation that has constrained the usefulness of AI-generated research outputs in professional settings.
Previous versions of Deep Research produced text-only reports. Users who needed visualizations had to export the data and build charts themselves, a friction point that undermined the promise of end-to-end automation. The new agents generate high-quality charts and infographics inline within their reports, rendered in HTML or Google’s Nano Banana format, dynamically visualizing complex datasets as part of the analytical narrative.
“The agent generates HTML charts and infographics inline with the report. Not screenshots. Not suggestions to ‘visualize this data.’ Actual rendered charts inside the markdown output,” noted AI commentator Shruti Mishra on X, capturing the practical significance of the change.
For enterprise users — particularly those in finance and consulting who need to produce stakeholder-ready deliverables — this transforms Deep Research from a tool that accelerates the research phase into one that can potentially produce near-final analytical products. Combined with a new collaborative planning feature that lets users review, guide, and refine the agent’s research plan before execution, and real-time streaming of intermediate reasoning steps, the system gives developers granular control over the investigation’s scope while maintaining the transparency that regulated industries demand.
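For developers consuming those reports, the inline HTML also makes post-processing straightforward. A minimal sketch, assuming the charts arrive as HTML fragments embedded in the markdown output (an inference from the announcement, not a documented schema):

```python
# Minimal sketch: pull inline HTML chart fragments out of a markdown research
# report so they can be dropped into a slide deck or web page. Assumes charts
# appear as <div>/<svg> blocks in the markdown, which is an inference from the
# announcement rather than a documented output format.
import re
from pathlib import Path

def extract_charts(report_markdown: str) -> list[str]:
    """Return HTML chart fragments found in the report text."""
    pattern = re.compile(r"<(div|svg)\b.*?</\1>", re.DOTALL)
    return [match.group(0) for match in pattern.finditer(report_markdown)]

report = Path("research_report.md").read_text()
for i, chart in enumerate(extract_charts(report)):
    Path(f"chart_{i}.html").write_text(chart)
    print(f"Saved chart_{i}.html ({len(chart)} bytes)")
```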
How Deep Research evolved from a consumer chatbot feature to enterprise platform infrastructure
Today’s release crystallizes a strategic narrative Google has been building for months: Deep Research is not merely a consumer feature but a piece of infrastructure that powers multiple Google products and is now being offered to external developers as a platform.
The blog post explicitly notes that when developers build with the Deep Research agent, they tap into “the same autonomous research infrastructure that powers research capabilities within some of Google’s most popular products like Gemini App, NotebookLM, Google Search and Google Finance.” This suggests that the agent available through the API is not a stripped-down version of what Google uses internally but the same system, offered at platform scale.
The journey to this point has been remarkably rapid. Google first introduced Deep Research as a consumer feature in the Gemini app in December 2024, initially powered by Gemini 1.5 Pro. At the time, the company described it as a personal AI research assistant that could save users hours by synthesizing web information in minutes. By March 2025, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental and made it available for anyone to try. Then came the upgrade to Gemini 2.5 Pro Experimental, where Google reported that raters preferred its reports over competing deep research providers by more than a 2-to-1 margin. The December 2025 release was the pivot to developer access, when Google launched the Interactions API and made Deep Research available programmatically for the first time, powered by Gemini 3 Pro and accompanied by the open-source DeepSearchQA benchmark.
The underlying model driving today’s improvements is Gemini 3.1 Pro, which Google released on February 19, 2026. That model represented a significant leap in core reasoning: on ARC-AGI-2, a benchmark evaluating a model’s ability to solve novel logic patterns, 3.1 Pro scored 77.1% — more than double the performance of Gemini 3 Pro. Deep Research Max inherits that reasoning foundation and layers autonomous research behaviors on top of it, achieving 93.3% on DeepSearchQA (up from 66.1% in December) and 54.6% on Humanity’s Last Exam (up from 46.4%).
Google’s new Deep Research Max agent outperformed its December predecessor across nearly all qualitative dimensions in internal expert evaluations — but the older version held an edge in internal consistency and faithfulness. (Source: Google DeepMind)
Google faces a crowded field of competitors building autonomous research agents
Google is not operating in a vacuum. The launch arrives amid intensifying competition in the autonomous research agent space. OpenAI has been developing its own agent capabilities within ChatGPT under the codename Hermes, which includes an agent builder, templates, scheduling, and Slack integration, according to reports circulating on social media. Perplexity has built its business around AI-powered research. And a growing ecosystem of startups is attacking various slices of the automated research workflow.
What distinguishes Google’s approach is the combination of its search infrastructure — which gives Deep Research access to the broadest and most current index of web information available — with the MCP-based connectivity to enterprise data sources. No other company currently offers a research agent that can simultaneously query the open web at Google Search’s scale and navigate proprietary data repositories through a standardized protocol. The pricing structure also signals Google’s intent to drive adoption: according to Sim.ai, which tracks model pricing, the Deep Research agent in the December preview was priced at $2 per million input tokens and $2 per million output tokens with a 1 million token context window — positioning it as cost-competitive for the volume of research output it generates.
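At those preview rates, per-run economics are easy to sanity-check. A back-of-the-envelope sketch, with token counts invented for illustration:

```python
# Back-of-the-envelope cost for one research run at the December preview
# pricing: $2 per million input tokens and $2 per million output tokens.
# The token counts below are invented for illustration.
PRICE_PER_MILLION = 2.00   # USD, same rate for input and output

input_tokens = 800_000     # gathered web pages, MCP results, grounding files
output_tokens = 50_000     # the final report with inline charts

cost = (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_MILLION
print(f"Estimated cost per run: ${cost:.2f}")  # -> $1.70
```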
Not everyone greeted the announcement with unalloyed enthusiasm, however. Several users on X noted that the new agents are available only through the API, not in the Gemini consumer app. “Not on Gemini app,” observed TestingCatalog News, while another user wrote, “Google keeps punishing Gemini App Pro subscribers for some reason.” Others raised concerns about the presentation of benchmark results, with one user arguing that Google’s charts could be “misleading” in how they represent percentage improvements. These complaints point to a broader tension in Google’s AI strategy: the company is increasingly directing its most advanced capabilities toward developers and enterprise customers who access them through APIs, while consumer-facing products sometimes lag behind.
Deep Research Max led all competitors on DeepSearchQA and BrowseComp, but GPT 5.4 edged ahead on Humanity’s Last Exam, a benchmark measuring reasoning and knowledge. All results were evaluated by Google DeepMind using publicly available model APIs. (Source: Google DeepMind)
What Deep Research Max means for finance, biotech, and the future of knowledge work
The practical implications of today’s launch are most immediately felt in industries that depend on exhaustive, multi-source research as a core business function. In financial services, where analysts routinely spend hours assembling due diligence reports from scattered sources — SEC filings, earnings transcripts, market data terminals, internal deal memos — Deep Research Max offers the possibility of automating the initial research phase entirely. The FactSet, S&P, and PitchBook partnerships suggest Google is serious about making this work with the data infrastructure that financial professionals already use.
In life sciences, the blog post notes that Google has collaborated with Axiom Bio, which builds AI systems to predict drug toxicity, and found that Deep Research unlocked new levels of initial research depth across biomedical literature. In market research and consulting, the ability to produce stakeholder-ready reports with embedded visualizations and granular citations could compress project timelines from days to hours.
The key question is whether the quality and reliability of these automated outputs will meet the standards that professionals in these fields demand. Google’s benchmark numbers are impressive, but benchmarks measure performance on standardized tasks — real-world research is messier, more ambiguous, and often requires the kind of judgment that remains difficult to automate. Deep Research and Deep Research Max are available now in public preview via paid tiers of the Gemini API, with availability on Google Cloud for startups and enterprises coming soon.
Eighteen months ago, Deep Research was a feature that helped grad students avoid drowning in browser tabs. Today, Google is betting it can replace the first shift at an investment bank. The distance between those two ambitions — and whether the technology can actually close it — will define whether autonomous research agents become a transformative category of enterprise software or just another AI demo that dazzles on benchmarks and disappoints in the conference room.
SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the AI company for $60 billion later this year. “SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI,” SpaceX wrote in a post on X.
According to SpaceX, the deal allows it to either invest $10 billion into the company known for its AI coding tool, or acquire it entirely “later this year” for $60 billion. If an acquisition were to happen, it’s not clear at what point Cursor could officially join the fold of Elon Musk’s rapidly expanding and increasingly enmeshed web of companies. SpaceX bought xAI, the billionaire’s AI company that also controls X, earlier this year. SpaceX is currently getting ready to go public this summer in what will likely be the biggest initial public offering (IPO) in history.
Cursor, which has reportedly been in talks to raise its own $2 billion round of funding, is known for its AI coding tool of the same name that’s become the vibe coding platform of choice for many developers. It allows people to use either its own models or those from other leading AI companies, including OpenAI, Google, Anthropic and xAI.
In a statement, Cursor said its partnership with SpaceX will “accelerate our model training efforts” while addressing infrastructure-related issues that have slowed it down in the past. “We’ve wanted to push our training efforts much further, but we’ve been bottlenecked by compute,” the company said. “With this partnership, our team will leverage xAI’s Colossus infrastructure to dramatically scale up the intelligence of our models for coding and beyond.”
The Angle Computer of the B-52, opened. (Credit: Ken Shirriff)
In the age before convenient global positioning satellites could be queried for one’s current location, military aircraft required dedicated navigators in order not to get lost. This changed with increasing automation, including the arrival of ever more sophisticated electromechanical computers, such as the angle computer in the B-52 bomber’s star tracker that [Ken Shirriff] recently had a poke at.
We have covered star trackers before, with these devices enabling the automation of celestial navigation. In effect, as long as you have a map of the visible stars and an accurate time source, you will never get lost on Earth, or a few kilometers above its surface as the case may be.
The B-52’s Angle Computer is part of the Astro Compass, which is the star tracker device that locks onto a star and outputs a heading that’s accurate to a tenth of a degree, while also allowing for position to be calculated from it. Inside the device a lot of calculations are being performed as explained in the article, though the full equations are quite complex.
Not burdening the navigator of a B-52 with having to ogle stars through an instrument and scribble down calculations on paper is a good idea, of course. Instead, the Angle Computer solves the navigational triangle mechanically, essentially by modelling the celestial sphere with a metal half-sphere. The solving is thus done using this physical representation, involving numerous gears and other parts that are detailed in the article.
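For a sense of what the gears are grinding through, here is a minimal Python sketch of the standard navigational-triangle equations: given an assumed latitude, the star’s declination, and its local hour angle, they yield the computed altitude and true azimuth that the tracker works with. The example inputs are invented for illustration.

```python
# A sketch of the spherical trigonometry the Angle Computer solves
# mechanically: the navigational-triangle equations relating observer
# latitude, star declination, and local hour angle (LHA) to the star's
# computed altitude (Hc) and true azimuth (Zn).
import math

def solve_navigational_triangle(lat_deg: float, dec_deg: float, lha_deg: float):
    """Return (altitude, true azimuth) in degrees for an assumed position."""
    lat, dec, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))

    # Altitude: sin(Hc) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(LHA)
    sin_hc = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(lha))
    hc = math.asin(sin_hc)

    # Azimuth angle: cos(Z) = (sin(dec) - sin(lat)*sin(Hc)) / (cos(lat)*cos(Hc))
    cos_z = (math.sin(dec) - math.sin(lat) * sin_hc) / (math.cos(lat) * math.cos(hc))
    z = math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

    # Convert azimuth angle to true azimuth measured clockwise from north.
    zn = 360.0 - z if math.sin(lha) > 0 else z
    return math.degrees(hc), zn

# Hypothetical observer at 35° N sighting a star at declination 20° N with a
# local hour angle of 40°: altitude ~51.8°, true azimuth ~257.6°.
print(solve_navigational_triangle(35.0, 20.0, 40.0))
```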
In addition to the mechanical components there are, of course, the motors driving it, feedback mechanisms, and ways to interface with the instruments. For the 1950s this was definitely the way to design a computer like this, but as semiconductor transistors swept the computing landscape, this marvel of engineering would before long find itself, too, replaced by a fully digital version.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Tuesday’s puzzle instead then click here: NYT Strands hints and answers for Tuesday, April 21 (game #779).
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc’s Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
NYT Strands today (game #780) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… Earth Day
NYT Strands today (game #780) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
RINSE
SORE
CREATOR
LATER
DOVE
REPAID
NYT Strands today (game #780) – hint #3 – spangram letters
How many letters are in today’s spangram?
• Spangram has 12 letters
NYT Strands today (game #780) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First side: left, 4th row
Last side: top, 4th column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #780) – the answers
The answers to today’s Strands, game #780, are…
REPAIR
RECYCLE
DONATE
REFILL
REUSE
REDUCE
SPANGRAM: CONSERVATION
My rating: Easy
My score: Perfect
Today is, of course, Earth Day, an annual event where we all, as a human race, can show our support for the environment by behaving sustainably and thinking about how we can protect the planet for the future. Not blowing it up would be a good start, perhaps.
Despite the theme being obvious I was quite slow to get going today and saw plenty of words that I should have clocked as game words.
In fact, I saw “repaid” before I saw REPAIR, “conserve” before the spangram CONSERVATION and “cycle” before RECYCLE.
Unlike previous games this one was decidedly lacking in ridiculously long words — perhaps the game makers were choosing not to be wasteful with their letters.
Yesterday’s NYT Strands answers (Tuesday, April 21, game #779)
BOLD
INTREPID
GUTSY
COURAGEOUS
ADVENTUROUS
SPANGRAM: DAREDEVILS
What is NYT Strands?
Strands is the NYT’s not-so-new-any-more word game, following Wordle and Connections. It’s now a fully fledged member of the NYT’s games stable; it has been running for a year and can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.