
Tech

Western Electric Revives U.S. Vacuum Tube Manufacturing, Unveils New Amplifier Designs at AXPONA 2026


Western Electric didn’t just show up at AXPONA 2026 with new amplifier designs. It tapped into something a lot of us have been thinking about for years.

I’ve had a thing for tubes since I was a kid, learning the basics with my grandfather before I was ten. Decades later, that interest hasn’t faded. It has just gotten more expensive and harder to justify shelf space. I’ve owned just about every type of tube amplifier you can name and built a few along the way. My wife would argue the collection peaked years ago, but I’m still not walking past a good 6922 or KT88 without at least thinking about it.

The bigger issue hasn’t been the gear. It has been the tubes themselves. Options have thinned out to the point where “choice” often feels like theater. I remember standing in a large musician supply shop staring at five different brands of 12AX7. Different logos, different boxes, different prices. Same factory in Russia. Same tube.

That’s a long way from where things used to be. There was a time when American manufacturing alone offered a deep bench. RCA, GE, Sylvania, Tung-Sol. All building serious product, alongside a strong European presence from Mullard, Telefunken, Philips, and others. Today, new production is concentrated in a handful of places: Slovakia, China, Russia, and yes, Rossville, GA, USA. Which is why what Western Electric is doing right now actually matters.


The operation in Rossville, GA, USA belongs to Western Electric, one of the most storied names in American tube manufacturing. For a long stretch, that name carried more history than output. Western Electric was not producing tubes at all. Even now, the lineup coming out of that factory is focused and limited. Two tubes. The 300B and the 308B.


The 300B remains the centerpiece. It powers Western Electric’s Type 91E integrated amplifier and continues to define what the brand does best. The 308B is a different story. Production is being ramped back up to support the new 100E monoblock amplifiers, which signals a broader push beyond a single legacy tube.

Both amplifiers were on display in Western Electric’s room at AXPONA 2026. And yes, they deliver exactly what you think they will. If you have even a passing interest in tubes, this is the kind of gear that stops you mid sentence and makes you reconsider your financial priorities.

The 300B Reality

The 300B has been around since the 1930s and, during the golden age of tubes, it powered everything from PA systems to theater installations and clubs around the world. It wasn’t boutique back then. It was the workhorse.

Today, it sits on the other end of the spectrum, among the most coveted tubes on the new old stock (NOS) market, with prices that can start just under a thousand dollars and climb into several thousand for early examples. Add the premium for matched pairs or quads and the cost of keeping a 300B amplifier running gets uncomfortable fast. And that’s before you factor in the risk. Most NOS tubes come with little to no warranty. You’re buying history and hoping it holds up.


That’s where Western Electric shifts the conversation. Their current production 300B comes in at $699 each or $1,499 for a matched pair. Still not cheap, but grounded in reality compared to NOS pricing. More importantly, they back it with a five year warranty. That alone changes the math for anyone serious about running a 300B based system long term.


The 308B: Big Glass, Big Power, and Still a Work in Progress

The 308B is not subtle. It stands roughly 14 inches tall and close to 4 inches in diameter. This is the kind of tube that makes everything around it look like it needs to hit the gym.


In Western Electric’s 100E monoblock, a single 308B is rated to deliver 160 watts. That’s more output from one tube than many push pull designs manage with a quad of KT88s. It’s an ambitious play and one that suggests Western Electric is not content to stay in the 300B comfort zone.

Details are still catching up to the product. Pricing and availability have not been finalized, and even the web page listed in the company’s show materials was still under construction the week after AXPONA 2026. That tells you where this sits. Early, promising, and not quite ready for prime time.


91E and 100E: How Western Electric Is Actually Using These Tubes

Western Electric wasn’t just putting tubes on pedestals at AXPONA 2026. They showed how they’re being used across two very different amplifier designs.

The 300B plays two roles here. In the 91E integrated amplifier, it’s the output tube. In the 100E monoblock, it shifts upstream and handles the mid stage. The spotlight in the 100E belongs to the 308B, which drives the final output stage and does the heavy lifting.


The 91E integrated amplifier, priced at $8,000, uses a pair of 300B tubes to deliver roughly 20 watts per channel. That number will not impress anyone chasing big power, but that’s not the point. Western Electric built flexibility into the design with interchangeable output modules for 4, 8, and 16 ohm loads. That opens the door to a wide range of loudspeakers, although higher sensitivity designs will make the most sense here.
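For a rough sense of what 20 watts buys you, amplifier power and speaker sensitivity trade off logarithmically: every doubling of power adds about 3 dB. A quick back-of-the-envelope sketch (the 96 dB and 86 dB sensitivity figures below are illustrative assumptions, not speakers Western Electric quoted):

```python
import math

def spl_at_power(sensitivity_db: float, watts: float) -> float:
    """Estimated SPL at 1 meter for a given input power.

    sensitivity_db: speaker output in dB SPL at 1 W / 1 m.
    SPL scales as 10 * log10(power), so doubling power adds ~3 dB.
    """
    return sensitivity_db + 10 * math.log10(watts)

# The 91E's ~20 W into a high-sensitivity 96 dB horn speaker:
print(round(spl_at_power(96, 20), 1))  # ≈ 109.0 dB SPL at 1 m
# The same 20 W into an 86 dB bookshelf speaker:
print(round(spl_at_power(86, 20), 1))  # ≈ 99.0 dB SPL at 1 m
```

The 10 dB gap between those two results is the whole reason low power tube amplifiers pair so naturally with high sensitivity speakers.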

Connectivity is more modern than you might expect. There are moving coil and moving magnet phono stages built in, along with RCA inputs for a tuner, CD player, and additional analog sources. On the digital side, the 91E includes Bluetooth, USB, and Ethernet, with an ESS DAC handling up to 16-bit/96 kHz for incoming Bluetooth and USB signals.


Outputs include line out and pre out for system integration, plus dual sets of binding posts. It’s a tube integrated that leans into flexibility without pretending to be something it isn’t. No apps, no ecosystem pitch, and definitely not a Class D network amplifier. It just makes music and throws off enough heat to remind you that winter is coming.

100E Monoblocks and A2 Loudspeakers: Open Window Listening

The rest of what Western Electric brought to AXPONA 2026 leans newer, and in some cases, still short on published detail. The 100E monoblocks were impossible to miss. Physically and visually, they owned the room.


Each chassis is built around that 14 inch tall 308B, and yes, it glows in a way that will stop you in your tracks. Subtle is not part of the brief here. Rated at 160 watts per amplifier, the 100E is doing something few tube designs attempt, delivering serious output from a single ended architecture that looks more like industrial art than consumer audio.

Size is part of the story. At roughly 32 inches deep and close to 22 inches wide, these amplifiers are going to dictate the layout of most rooms. Weight is estimated around 160 pounds each, so once they are in place, they are staying there. This is not gear you casually move around on a Saturday afternoon.


The topology is just as unconventional. A 12AT7 handles the input stage, a 300B is used in the mid stage, and the 308B takes over as the output tube. Seeing a 300B in that middle role tells you everything about the scale of this design. Nothing about it is typical.

Heat is not an afterthought either. With plate voltage around 1500 volts and plate dissipation exceeding 220 watts, these amplifiers are going to generate serious thermal output. Ventilation is not optional, especially in smaller rooms.
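To put that heat in household terms, electrical waste heat converts directly to BTU per hour. A rough sketch (the 400 watts of waste heat per chassis is an illustrative assumption extrapolated from the published plate dissipation, not a Western Electric spec):

```python
def watts_to_btu_per_hour(watts: float) -> float:
    """Convert waste heat in watts to BTU/hr (1 W = 3.412 BTU/hr)."""
    return watts * 3.412

# Illustrative assumption: each 100E sheds roughly 400 W as heat
# (the 308B's 220+ W of plate dissipation plus the rest of the circuit).
per_chassis_w = 400
pair_btu = watts_to_btu_per_hour(2 * per_chassis_w)
print(round(pair_btu))  # ≈ 2730 BTU/hr for the pair
```

That lands in the territory of a small space heater running continuously, which is exactly why ventilation is not optional.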


The 100E is impedance matched to the 91E, so building a complete Western Electric system is straightforward if you are willing to commit. At $75,000 each, the monoblocks sit in a very different bracket than the 91E, and the new A2 loudspeakers at $70,000 per pair make it clear this is a full system play, not just a statement amplifier. Your accountant would like a word.


Big System Energy, Small Room Reality

The A2 loudspeakers from Western Electric are a hybrid design built around air motion transformer tweeters and midrange drivers, paired with dual dynamic bass drivers. The goal is broad, even coverage with a 180 degree dispersion pattern. This is meant to fill a room, not lock one listener into a single chair.

That ambition ran into a familiar problem at AXPONA 2026. The hotel room was simply too small. The A2 sounded like it wanted more space, more air, more distance to breathe. Instead, it was confined to a setup that forced it to hold back. This is the kind of speaker that needs a larger ballroom or dedicated listening space to make sense.

Feeding the system was the new WE 203C CD player, priced at $12,000. It served as the primary source for a system that, all in, lands around $310,000 before you even start thinking about cables or adding a turntable.

The Bottom Line

What stuck with me most from Western Electric at AXPONA 2026 wasn’t the big glass. It was the small signal tubes quietly doing their job up front. The 12AT7 in the first stage of the 100E may not draw a crowd, but it matters more than it looks.


Western Electric is ramping up production of the 12AX7 and aiming to expand into other small signal tubes as well. If that includes something like a 6SN7, a lot of people are going to pay attention. This is not a niche development. It’s a structural shift. For the first time in decades, American amplifier manufacturers could have a domestic source for one of the most widely used tubes in both hi fi and instrument amplifiers.

For years, “made in the USA” has come with an asterisk. Chassis, transformers, assembly. Sure. Tubes? Usually sourced from Russia, Slovakia, or China. Bringing small signal tube production back to the U.S. changes that conversation in a real way.

With the factory in Rossville, GA, USA only a few hours from me, there’s a good chance I’ll get to see this firsthand. And if that happens, it’s worth documenting. People should see how this is being done, not just read about it.

Honestly, if Western Electric had shown nothing but that 12AX7 effort, it still would have been one of the most important rooms at the show.


For more information: westernelectric.com


Tech

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting


Florida’s attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is “not responsible for this terrible crime” and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner’s chat logs. “My prosecutors have looked at this and they’ve told me, if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising people on how to kill others.”

Uthmeier’s office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. “We are going to look at who knew what, designed what, or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”

[…] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU’s Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.


Tech

Mozilla says it patched 271 Firefox vulnerabilities thanks to Anthropic’s Claude Mythos


Anthropic’s buzzy announcement about using AI to improve cybersecurity earlier this month was met with plenty of skepticism. However, Mozilla shared some details that support the use of the company’s special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla’s team find and patch 271 vulnerabilities in the latest release of the Firefox browser. “So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” the foundation said.

The blog post from Mozilla feels like a positive sign for Anthropic’s Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there’s something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI wasn’t able to turn up any bugs that a human wouldn’t have been able to find, given enough time and resources, which indicates that AI isn’t presently able to do more to crack cybersecurity protections than a person can.

An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for those Firefox users who aren’t personally interested in applying any generative AI in their browsing, Mozilla has given the option to turn it all off for the past several months.


Tech

Google’s new Deep Research and Deep Research Max agents can search the web and your private data


Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product’s debut, launching two new agents — Deep Research and Deep Research Max — that for the first time allow developers to fuse open web data with proprietary enterprise information through a single API call, produce native charts and infographics inside research reports, and connect to arbitrary third-party data sources through the Model Context Protocol (MCP).

The release, built on Google’s Gemini 3.1 Pro model, marks an inflection point in the rapidly intensifying race to build AI systems that can autonomously conduct the kind of exhaustive, multi-source research that has traditionally consumed hours or days of human analyst time. It also represents Google’s clearest bid yet to position its AI infrastructure as the backbone for enterprise research workflows in finance, life sciences, and market intelligence — industries where the stakes of getting information wrong are extraordinarily high.

“We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation,” Google CEO Sundar Pichai wrote on X. “Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering & synthesis using extended test-time compute — achieving 93.3% on DeepSearchQA and 54.6% on HLE.”

Both agents are available starting today in public preview via paid tiers of the Gemini API, accessible through the Interactions API that Google first introduced in December 2025.


Why Google built two research agents instead of one

The launch introduces a tiered architecture that reflects a fundamental tension in AI agent design: the tradeoff between speed and thoroughness.

Deep Research, the standard tier, replaces the preview agent Google released in December and is optimized for low-latency, interactive use cases. It delivers what Google describes as significantly reduced latency and cost at higher quality levels compared to its predecessor. The company positions it as ideal for applications where a developer wants to embed research capabilities directly into a user-facing interface — think a financial dashboard that can answer complex analytical questions in near-real time.

Deep Research Max occupies the opposite end of the spectrum. It leverages extended test-time compute — a technique where the model spends more computational cycles iteratively reasoning, searching, and refining its output before delivering a final report. Google designed it for asynchronous, background workflows: the kind of task where an analyst team kicks off a batch of due diligence reports before leaving the office and expects exhaustive, fully sourced analyses waiting for them the next morning.

The Google DeepMind team framed the distinction on X: “Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses. Deep Research Max: It uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.”


“Deep Research was our first hosted agent in the API and has gained a ton of traction over the last 3 months, very excited for folks to test out the new agents and all the improvements, this is just the start of our agents journey,” Logan Kilpatrick, who leads developer relations for Google’s AI efforts, wrote on X.

MCP support lets the agents tap into private enterprise data for the first time

Perhaps the most consequential feature in today’s release is the addition of Model Context Protocol support, which transforms Deep Research from a sophisticated web research tool into something more closely resembling a universal data analyst.

MCP, an emerging open standard for connecting AI models to external data sources, allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services — all without requiring sensitive information to leave its source environment. In practical terms, this means a hedge fund could point Deep Research at its internal deal-flow database and a financial data terminal simultaneously, then ask the agent to synthesize insights from both alongside publicly available information from the web.

Google disclosed that it is actively collaborating with FactSet, S&P, and PitchBook on their MCP server designs, a signal that the company is pursuing deep integration with the data providers that Wall Street and the broader financial services industry already rely on daily. The goal, according to the blog post authored by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, is to “let shared customers integrate financial data offerings into workflows powered by Deep Research, and to enable them to realize a leap in productivity by gathering context using their exhaustive data universes at lightning speed.”


This addresses one of the most persistent pain points in enterprise AI adoption: the gap between what a model can find on the open internet and what an organization actually needs to make decisions. Until now, bridging that gap required significant custom engineering. MCP support, combined with Deep Research’s autonomous browsing and reasoning capabilities, collapses much of that complexity into a configuration step. Developers can now run Deep Research with Google Search, remote MCP servers, URL Context, Code Execution, and File Search simultaneously — or turn off web access entirely to search exclusively over custom data. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.

Native charts and infographics turn AI reports into stakeholder-ready deliverables

The second headline feature — native chart and infographic generation — may sound incremental, but it addresses a practical limitation that has constrained the usefulness of AI-generated research outputs in professional settings.

Previous versions of Deep Research produced text-only reports. Users who needed visualizations had to export the data and build charts themselves, a friction point that undermined the promise of end-to-end automation. The new agents generate high-quality charts and infographics inline within their reports, rendered in HTML or Google’s Nano Banana format, dynamically visualizing complex datasets as part of the analytical narrative.

“The agent generates HTML charts and infographics inline with the report. Not screenshots. Not suggestions to ‘visualize this data.’ Actual rendered charts inside the markdown output,” noted AI commentator Shruti Mishra on X, capturing the practical significance of the change.


For enterprise users — particularly those in finance and consulting who need to produce stakeholder-ready deliverables — this transforms Deep Research from a tool that accelerates the research phase into one that can potentially produce near-final analytical products. Combined with a new collaborative planning feature that lets users review, guide, and refine the agent’s research plan before execution, and real-time streaming of intermediate reasoning steps, the system gives developers granular control over the investigation’s scope while maintaining the transparency that regulated industries demand.

How Deep Research evolved from a consumer chatbot feature to enterprise platform infrastructure

Today’s release crystallizes a strategic narrative Google has been building for months: Deep Research is not merely a consumer feature but a piece of infrastructure that powers multiple Google products and is now being offered to external developers as a platform.

The blog post explicitly notes that when developers build with the Deep Research agent, they tap into “the same autonomous research infrastructure that powers research capabilities within some of Google’s most popular products like Gemini App, NotebookLM, Google Search and Google Finance.” This suggests that the agent available through the API is not a stripped-down version of what Google uses internally but the same system, offered at platform scale.

The journey to this point has been remarkably rapid. Google first introduced Deep Research as a consumer feature in the Gemini app in December 2024, initially powered by Gemini 1.5 Pro. At the time, the company described it as a personal AI research assistant that could save users hours by synthesizing web information in minutes. By March 2025, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental and made it available for anyone to try. Then came the upgrade to Gemini 2.5 Pro Experimental, where Google reported that raters preferred its reports over competing deep research providers by more than a 2-to-1 margin. The December 2025 release was the pivot to developer access, when Google launched the Interactions API and made Deep Research available programmatically for the first time, powered by Gemini 3 Pro and accompanied by the open-source DeepSearchQA benchmark.


The underlying model driving today’s improvements is Gemini 3.1 Pro, which Google released on February 19, 2026. That model represented a significant leap in core reasoning: on ARC-AGI-2, a benchmark evaluating a model’s ability to solve novel logic patterns, 3.1 Pro scored 77.1% — more than double the performance of Gemini 3 Pro. Deep Research Max inherits that reasoning foundation and layers autonomous research behaviors on top of it, achieving 93.3% on DeepSearchQA (up from 66.1% in December) and 54.6% on Humanity’s Last Exam (up from 46.4%).


Google’s new Deep Research Max agent outperformed its December predecessor across nearly all qualitative dimensions in internal expert evaluations — but the older version held an edge in internal consistency and faithfulness. (Source: Google DeepMind)

Google faces a crowded field of competitors building autonomous research agents

Google is not operating in a vacuum. The launch arrives amid intensifying competition in the autonomous research agent space. OpenAI has been developing its own agent capabilities within ChatGPT under the codename Hermes, which includes an agent builder, templates, scheduling, and Slack integration, according to reports circulating on social media. Perplexity has built its business around AI-powered research. And a growing ecosystem of startups is attacking various slices of the automated research workflow.

What distinguishes Google’s approach is the combination of its search infrastructure — which gives Deep Research access to the broadest and most current index of web information available — with the MCP-based connectivity to enterprise data sources. No other company currently offers a research agent that can simultaneously query the open web at Google Search’s scale and navigate proprietary data repositories through a standardized protocol. The pricing structure also signals Google’s intent to drive adoption: according to Sim.ai, which tracks model pricing, the Deep Research agent in the December preview was priced at $2 per million input tokens and $2 per million output tokens with a 1 million token context window — positioning it as cost-competitive for the volume of research output it generates.


Not everyone greeted the announcement with unalloyed enthusiasm, however. Several users on X noted that the new agents are available only through the API, not in the Gemini consumer app. “Not on Gemini app,” observed TestingCatalog News, while another user wrote, “Google keeps punishing Gemini App Pro subscribers for some reason.” Others raised concerns about the presentation of benchmark results, with one user arguing that Google’s charts could be “misleading” in how they represent percentage improvements. These complaints point to a broader tension in Google’s AI strategy: the company is increasingly directing its most advanced capabilities toward developers and enterprise customers who access them through APIs, while consumer-facing products sometimes lag behind.


Deep Research Max led all competitors on DeepSearchQA and BrowseComp, but GPT 5.4 edged ahead on Humanity’s Last Exam, a benchmark measuring reasoning and knowledge. All results were evaluated by Google DeepMind using publicly available model APIs. (Source: Google DeepMind)

What Deep Research Max means for finance, biotech, and the future of knowledge work

The practical implications of today’s launch are most immediately felt in industries that depend on exhaustive, multi-source research as a core business function. In financial services, where analysts routinely spend hours assembling due diligence reports from scattered sources — SEC filings, earnings transcripts, market data terminals, internal deal memos — Deep Research Max offers the possibility of automating the initial research phase entirely. The FactSet, S&P, and PitchBook partnerships suggest Google is serious about making this work with the data infrastructure that financial professionals already use.

In life sciences, the blog post notes that Google has collaborated with Axiom Bio, which builds AI systems to predict drug toxicity, and found that Deep Research unlocked new levels of initial research depth across biomedical literature. In market research and consulting, the ability to produce stakeholder-ready reports with embedded visualizations and granular citations could compress project timelines from days to hours.


The key question is whether the quality and reliability of these automated outputs will meet the standards that professionals in these fields demand. Google’s benchmark numbers are impressive, but benchmarks measure performance on standardized tasks — real-world research is messier, more ambiguous, and often requires the kind of judgment that remains difficult to automate. Deep Research and Deep Research Max are available now in public preview via paid tiers of the Gemini API, with availability on Google Cloud for startups and enterprises coming soon.

Eighteen months ago, Deep Research was a feature that helped grad students avoid drowning in browser tabs. Today, Google is betting it can replace the first shift at an investment bank. The distance between those two ambitions — and whether the technology can actually close it — will define whether autonomous research agents become a transformative category of enterprise software or just another AI demo that dazzles on benchmarks and disappoints in the conference room.


Tech

SpaceX and Cursor strike partnership that might end in a $60 billion acquisition


SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the AI company for $60 billion later this year. “SpaceXAI and  @cursor_ai  are now working closely together to create the world’s best coding and knowledge work AI,” SpaceX wrote in a post on X.

According to SpaceX, the deal allows for it to either invest $10 billion into the company known for its AI coding tool, or acquire it entirely “later this year” for $60 billion. If an acquisition were to happen, it’s not clear at what point Cursor could officially join the fold of Elon Musk’s rapidly expanding and increasingly enmeshed web of companies. SpaceX bought xAI, the billionaire’s AI company that also controls X, earlier this year. SpaceX is currently getting ready to go public this summer in what will likely be the biggest initial public offering (IPO) in history.

Cursor, which has reportedly been in talks to raise its own $2 billion round of funding, is known for its AI coding tool of the same name that’s become the vibe coding platform of choice for many developers. It allows people to use either its own models or those from other leading AI companies, including OpenAI, Google, Anthropic and xAI.

In a statement, Cursor said its partnership with SpaceX will “accelerate our model training efforts” while addressing infrastructure-related issues that have slowed it down in the past. “We’ve wanted to push our training efforts much further, but we’ve been bottlenecked by compute,” the company said. “With this partnership, our team will leverage xAI’s Colossus infrastructure to dramatically scale up the intelligence of our models for coding and beyond.”


Tech

The Electromechanical Computer Of The B-52’s Star Tracker


The Angle Computer of the B-52, opened. (Credit: Ken Shirriff)

In the age before convenient global positioning satellites could be queried for one’s current location, military aircraft required dedicated navigators in order to not get lost. This changed with increasing automation, including the arrival of increasingly sophisticated electromechanical computers, such as the angle computer in the B-52 bomber’s star tracker that [Ken Shirriff] recently had a poke at.

We covered star trackers before, with these devices enabling the automation of celestial navigation. In effect, as long as you have a map of the visible stars and an accurate time source you will never get lost on Earth, or a few kilometers above its surface as the case may be.

The B-52’s Angle Computer is part of the Astro Compass, which is the star tracker device that locks onto a star and outputs a heading that’s accurate to a tenth of a degree, while also allowing for position to be calculated from it. Inside the device a lot of calculations are being performed as explained in the article, though the full equations are quite complex.

Not burdening the navigator of a B-52 with sighting stars through an instrument and scribbling calculations on paper is a good idea, of course. Instead, the Angle Computer solves the navigational triangle mechanically, essentially by modelling the celestial sphere with a metal half-sphere. The solving is thus done on this physical representation, using numerous gears and other parts that are detailed in the article.
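To give a sense of what the gears are grinding through, the navigational triangle's standard altitude equation can be sketched numerically. This is a minimal illustration of the underlying spherical trigonometry, not the device's actual mechanization; the function name and sample angles are my own.

```python
import math

def star_altitude(lat_deg: float, dec_deg: float, lha_deg: float) -> float:
    """Altitude of a star above the horizon, in degrees, from the
    navigational triangle:
        sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(LHA)
    where lat is the observer's latitude, dec the star's declination,
    and LHA the local hour angle (all in degrees)."""
    lat, dec, lha = (math.radians(v) for v in (lat_deg, dec_deg, lha_deg))
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(lha))
    return math.degrees(math.asin(sin_alt))

# Hypothetical sight: observer at 45° N, star at declination 20°,
# local hour angle 30°.
print(round(star_altitude(45.0, 20.0, 30.0), 2))
```

A navigator works this in reverse: comparing the predicted altitude against the one actually observed yields a line of position, and the Angle Computer performs the equivalent computation continuously with cams and gears rather than trigonometric tables.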

In addition to the mechanical components there are, of course, the motors driving them, feedback mechanisms, and ways to interface with the instruments. For the 1950s this was definitely the way to design such a computer, but as semiconductor transistors swept the computing landscape, this marvel of engineering would before long itself be replaced by a fully digital version.


NYT Strands hints and answers for Wednesday, April 22 (game #780)


Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Tuesday’s puzzle instead then click here: NYT Strands hints and answers for Tuesday, April 21 (game #779).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.


SpaceX Strikes Deal With Coding Startup Cursor For $60 Billion


An anonymous reader quotes a report from the New York Times: SpaceX, Elon Musk’s rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence start-up Cursor that could result in its acquiring the young company for $60 billion. SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. In a social media post, SpaceX said the combination with Cursor, which makes code-writing software, would “allow us to build the world’s most useful” A.I. models.

SpaceX added that the agreement gave it the option “to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.” It is unclear if the companies plan to consummate the deal before or after SpaceX’s I.P.O., which could happen as early as June. […] Cursor, which has raised more than $3 billion in funding, was founded in 2022 and made waves as a fast-growing A.I. start-up. It was under pressure in recent months after OpenAI and Anthropic announced competing code-writing products that were embraced by tech companies. Cursor had been in talks to raise funding in recent weeks.


Today’s NYT Mini Crossword Answers for April 22


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword features some Earth Day-specific clues. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for April 22, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: It’s clearly recyclable!
Answer: GLASS

6A clue: ___ Day (April 22nd observance)
Answer: EARTH

7A clue: Thick, underground part of a plant stem
Answer: TUBER

8A clue: Small cluster of trees
Answer: GROVE

9A clue: Rowed, as a boat
Answer: OARED

Mini down clues and answers

1D clue: From the ___ (right at the beginning)
Answer: GETGO

2D clue: Author Ingalls Wilder who wrote “Little House on the Prairie”
Answer: LAURA

3D clue: ___ Day, observance on the last Friday of April
Answer: ARBOR

4D clue: Actor Buscemi
Answer: STEVE

5D clue: Rip into bits, as paper
Answer: SHRED


Framework Has a Better, More Take-Apartable Laptop


Framework, the company that makes laptops designed for optimal repairability, announced a new version of its main product, a 13-inch-screen laptop. It’s called the Framework Laptop 13 Pro, and it has far better battery life, a touchscreen, and a haptic touchpad, and is fitted with Intel processors.

At an event in San Francisco today, Framework CEO Nirav Patel showed off the company’s new tech, opening with a joke about making Framework AI—something the company is very much not doing. Framework’s whole thing, after all, is aiming to give users control over the physical tech they use.

“That industry is fighting for you to own nothing, and they own everything,” Patel said about the AI industry. “We’re fighting for a future where you can own everything and be free.”

Framework used the event to detail other updates coming to its 16-inch laptop. It also showed off previews of an official developer kit and a wireless keyboard for controlling your rig from the couch.

Framework 13 Pro

The Framework Laptop 13 Pro.

Courtesy of Framework

As the name implies, the 13 Pro is a step up from the company’s last version, the Framework 13. It’s also pricier, starting at $1,199 for a DIY Edition that requires assembling the computer yourself. Prebuilt units start at $1,499 but can be upgraded with more features. Framework says it will start shipping the 13 Pro in June.

Framework’s signature move for its products is the ability to take the thing apart. The 13 Pro is made with that ethos in mind, so its parts can be easily swapped out, upgraded, or replaced. Four Thunderbolt 4 interfaces let you pick which ports (USB-C, HDMI, etc.) you want and then choose where to place them. Framework says it planned the laptop with cross-generation compatibility in mind, so current Framework Laptop 13 owners will be able to take new 13 Pro parts like the mainboard, display, and battery and put them into their existing machine.

The big changes in the guts of the 13 Pro come from Framework’s shift away from an AMD processor to Intel’s Core Ultra Series 3 processors, which Framework described in its press release as “just insanely efficient.” That efficiency, along with a bigger battery, translates to more than 20 hours of battery life while streaming 4K Netflix video, at least according to Framework’s claim. That’s almost 12 hours longer than the Framework 13.


Report: New Apple CEO's biggest challenge will be retiring leadership & regular churn


Industry-high employee retention levels and executives holding their posts for decades are apparently going to be significant hurdles for incoming Apple CEO John Ternus.

John Ternus can’t invent a time machine fast enough, so he’s going to have to pick new Apple leadership, eventually. Image source: Apple

There’s been a trend in tech reporting of treating every employment change, from the top down, as a calamitous occasion. Whether it’s a dozen engineers out of thousands leaving or executives being poached with insane pay packages, every departure is treated as a serious problem.
I’m still not entirely sure why.


Copyright © 2025