OpenAI’s big investment from AWS comes with something else: new ‘stateful’ architecture for enterprise agents


The landscape of enterprise artificial intelligence shifted fundamentally today as OpenAI announced $110 billion in new funding from three of tech’s largest firms: $30 billion from SoftBank, $30 billion from Nvidia, and $50 billion from Amazon.

But while SoftBank and Nvidia are simply providing capital, OpenAI is going further with Amazon, committing to build a fully “Stateful Runtime Environment” on Amazon Web Services (AWS), the world’s most widely used cloud platform.

This signals OpenAI’s and Amazon’s shared vision for the next phase of the AI economy — moving from chatbots to autonomous “AI coworkers” known as agents — and their conviction that this evolution requires a different architectural foundation from the one that built GPT-4.

For enterprise decision-makers, this announcement isn’t just a headline about massive capital; it is a technical roadmap for where the next generation of agentic intelligence will live and breathe.


For enterprises already on AWS in particular, it is welcome news: a new OpenAI runtime environment is on the way, though the companies have yet to announce a precise timeline for its arrival.

The great divide between ‘stateless’ and ‘stateful’

At the heart of the new OpenAI-Amazon partnership is a technical distinction that will define developer workflows for the next decade: the difference between “stateless” and “stateful” environments.

To date, most developers have interacted with OpenAI through stateless APIs. In a stateless model, every request is an isolated event; the model has no “memory” of previous interactions unless the developer manually feeds the entire conversation history back into the prompt. OpenAI’s prior cloud partner and major investor, Microsoft Azure, remains the exclusive third-party cloud provider for these stateless APIs.
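The stateless pattern described above can be sketched in a few lines. This is an illustrative sketch only, with a stub standing in for a real model call; the point is that the client, not the API, must carry and resend the full history on every request:

```python
# Illustrative sketch of the "stateless" pattern: the caller owns the
# conversation history and must resend it with every request.

def stub_model(messages):
    # Stand-in for a real stateless chat API; a production call would
    # send `messages` over the network and return the model's reply.
    return f"echo: {messages[-1]['content']} (saw {len(messages)} messages)"

class StatelessChat:
    def __init__(self):
        self.history = []  # the *client* keeps state, not the API

    def send(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        reply = stub_model(self.history)  # full history resent each call
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = StatelessChat()
chat.send("hello")
second = chat.send("and again")
# The second request had to carry the whole prior conversation.
```

Every turn grows the payload, which is exactly the "manual stitching" that stateful runtimes promise to eliminate.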

The newly announced Stateful Runtime Environment, by contrast, will be hosted on Amazon Bedrock — a paradigm shift.


This environment allows models to maintain persistent context, memory, and identity. Rather than a series of disconnected calls, the stateful environment enables “AI coworkers” to handle ongoing projects, remember prior work, and move seamlessly across different software tools and data sources.

As OpenAI notes on its website: “Now, instead of manually stitching together disconnected requests to make things work, your agents automatically execute complex steps with ‘working context’ that carries forward memory/history, tool and workflow state, environment use, and identity/permission boundaries.”

For builders of complex agents, this reduces the “plumbing” required to maintain context, as the infrastructure itself now handles the persistent state of the agent.
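By contrast, a stateful runtime can be sketched as a session store the infrastructure owns. OpenAI has not published this API, so every name here is hypothetical; the sketch only illustrates the "working context" idea from the quote above:

```python
# Hypothetical sketch of the "stateful" pattern: the runtime, not the
# caller, owns working context across requests. All names are
# illustrative; this is not OpenAI's actual API.

class StatefulRuntime:
    def __init__(self):
        self._sessions = {}  # session_id -> working context

    def execute(self, session_id, instruction):
        ctx = self._sessions.setdefault(session_id, {
            "memory": [],            # carried-forward history
            "tool_state": {},        # e.g. open files, workflow state
            "identity": session_id,  # permission boundary per agent
        })
        ctx["memory"].append(instruction)
        # A real runtime would plan and act here; we just report state.
        return f"step {len(ctx['memory'])} for {ctx['identity']}"

rt = StatefulRuntime()
rt.execute("agent-42", "open the Q3 report")
result = rt.execute("agent-42", "summarize it")  # no history resent
```

The second call carries only the new instruction; the runtime supplies the rest, which is the plumbing reduction the article describes.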

OpenAI Frontier and the AWS Integration

The vehicle for this stateful intelligence is OpenAI Frontier, an end-to-end platform launched in early February 2026 and designed to help enterprises build, deploy, and manage teams of AI agents.


Frontier is positioned as a solution to the “AI opportunity gap”—the disconnect between model capabilities and the ability of a business to actually put them into production.

Key features of the Frontier platform include:

  • Shared Business Context: Connecting siloed data from CRMs, ticketing tools, and internal databases into a single semantic layer.

  • Agent Execution Environment: A dependable space where agents can run code, use computer tools, and solve real-world problems.

  • Built-in Governance: Every AI agent has a unique identity with explicit permissions and boundaries, allowing for use in regulated environments.

While the Frontier application itself will continue to be hosted on Microsoft Azure, AWS has been named the exclusive third-party cloud distribution provider for the platform.

This means that while the “engine” may sit on Azure, AWS customers will be able to access and manage these agentic workloads directly through Amazon Bedrock, integrated with AWS’s existing infrastructure services.


OpenAI opens the door to enterprises: how to register your interest in the upcoming Stateful Runtime Environment on AWS

For now, OpenAI has launched a dedicated Enterprise Interest Portal on its website. This serves as the primary intake point for organizations looking to move past isolated pilots and into production-grade agentic workflows.

The portal is a structured “request for access” form where decision-makers provide:

  • Firmographic Data: Basic details including company size (ranging from startups of 1–50 to large-scale enterprises with 20,000+ employees) and contact information.

  • Business Needs Assessment: A dedicated field for leadership to outline specific business challenges and requirements for “AI coworkers”.

By submitting this form, enterprises signal their readiness to work directly with OpenAI and AWS teams to implement solutions like multi-system customer support, sales operations, and finance audits that require high-reliability state management.

Community and leadership reactions

The scale of the announcement was mirrored in the public statements from the key players on social media.


Sam Altman, CEO of OpenAI, expressed excitement about the Amazon partnership, specifically highlighting the “stateful runtime environment” and the use of Amazon’s custom Trainium chips.

However, Altman was quick to clarify the boundaries of the deal: “Our stateless API will remain exclusive to Azure, and we will build out much more capacity with them”.

Amazon CEO Andy Jassy emphasized the demand from his own customer base, stating, “We have lots of developers and companies eager to run services powered by OpenAI models on AWS”. He noted that the collaboration would “change what’s possible for customers building AI apps and agents”.

Early adopters have already begun to weigh in on the utility of the Frontier approach. Joe Park, EVP at State Farm, noted that the platform is helping the company accelerate its AI capabilities to “help millions plan ahead, protect what matters most, and recover faster”.


The enterprise decision: where to spend your dollars?

For CTOs and enterprise decision-makers, the OpenAI-Amazon-Microsoft triangle creates a new set of strategic choices. The decision of where to allocate budget now depends heavily on the specific use case:

  1. For High-Volume, Standard Tasks: If your organization relies on standard API calls for content generation, summarization, or simple chat, Microsoft Azure remains the primary destination. These “stateless” calls are exclusive to Azure, even if they originate from an Amazon-linked collaboration.

  2. For Complex, Long-Running Agents: If your goal is to build “AI coworkers” that require deep integration with AWS-hosted data and persistent memory across weeks of work, the AWS Stateful Runtime Environment is the clear choice.

  3. For Custom Infrastructure: OpenAI has committed to consuming 2 gigawatts of AWS Trainium capacity to power Frontier and other advanced workloads. This suggests that enterprises looking for the most cost-efficient way to run OpenAI models at massive scale may find an advantage in the AWS-Trainium ecosystem.

Licensing, revenue and the Microsoft ‘safety net’

Despite the massive infusion of Amazon capital, the legal and financial ties between Microsoft and OpenAI remain remarkably rigid. A joint statement released by both companies clarified that their “commercial and revenue share relationship remains unchanged”.

Crucially, Microsoft continues to maintain its “exclusive license and access to intellectual property across OpenAI models and products”. Furthermore, Microsoft will receive a share of the revenue generated by the OpenAI-Amazon partnership.

This ensures that while OpenAI is diversifying its infrastructure, Microsoft remains the ultimate beneficiary of OpenAI’s commercial success, regardless of which cloud the compute actually runs on.


The definition of Artificial General Intelligence (AGI) also remains a protected term in the Microsoft agreement. The contractual processes for determining when AGI has been reached—and the subsequent impact on commercial licensing—have not been altered by the Amazon deal.

Ultimately, OpenAI is positioning itself as more than a model or tool provider; it is an infrastructure player attempting to straddle the two largest clouds on Earth.

For the user, this means more choice and more specialized environments. For the enterprise, it means that the era of “one-size-fits-all” AI procurement is over.

The choice between Azure and AWS for OpenAI services is now a technical decision about the nature of the work itself: whether your AI needs to simply “think” (stateless) or to “remember and act” (stateful).




This payday deal gets you the Google Pixel 10a plus free Pixel Buds


If you’re in the market for a new smartphone upgrade, then this Pixel 10a deal is for you.

Right now, you can get the recently unveiled Google Pixel 10a 128GB bundled with a pair of Pixel Buds 2a for only $499, saving 21% (if bought together).

Considering that the Buds 2a alone cost $129, you’re saving a fair bit on what you would have paid otherwise, making this a great pick for anyone whose existing phone and earbuds are due for a more premium upgrade.


Even though Google’s ‘a’ series is technically its more affordable line of phones, there’s still a lot to love about the Pixel 10a that separates it from rivals at this price point, and the bundle goes further still thanks to the value of the Pixel Buds.

On the phone itself, you’re getting a slick design that feels even more premium than its price tag suggests, with seven years of updates, a 30+ hour battery life, a great camera and a vibrant 6.3-inch display.


That’s alongside a huge 128 GB of storage to keep all your favourite apps, photos and videos stored locally, without constantly having to delete things to make space.

Where Google’s hardware really shines, though (and where the Pixel 10a excels) is in the ability to showcase its software in the best light possible.

Google’s software integrations and feature sets really need to be seen in person to be appreciated, but needless to say, they offer far more to explore than the traditional Android experience you might have grown accustomed to.

For the Buds 2a, you’re getting a simple, albeit effective set of wireless earbuds that can deliver quite the punch when listening to your favourite tunes.


TL;DR – if your existing phone and earbuds are a little worse for wear, you deserve an upgrade, and it just so happens that you can now get Google’s latest hardware for a steal. It might even be a worthy entry in our best budget phone buying guide.



Google’s Opal just quietly showed enterprise teams the new blueprint for building AI agents


For the past year, the enterprise AI community has been locked in a debate about how much freedom to give AI agents. Too little, and you get expensive workflow automation that barely justifies the “agent” label. Too much, and you get the kind of data-wiping disasters that plagued early adopters of tools like OpenClaw. This week, Google Labs released an update to Opal, its no-code visual agent builder, that quietly lands on an answer — and it carries lessons that every IT leader planning an agent strategy should study carefully.

The update introduces what Google calls an “agent step” that transforms Opal’s previously static, drag-and-drop workflows into dynamic, interactive experiences. Instead of manually specifying which model or tool to call and in what order, builders can now define a goal and let the agent determine the best path to reach it — selecting tools, triggering models like Gemini 3 Flash or Veo for video generation, and even initiating conversations with users when it needs more information.

It sounds like a modest product update. It is not. What Google has shipped is a working reference architecture for the three capabilities that will define enterprise agents in 2026:

  1. Adaptive routing

  2. Persistent memory

  3. Human-in-the-loop orchestration

…and it’s all made possible by the rapidly improving reasoning abilities of frontier models like the Gemini 3 series.

The ‘off the rails’ inflection point: Why better models change everything about agent design

To understand why the Opal update matters, you need to understand a shift that has been building across the agent ecosystem for months.

The first wave of enterprise agent frameworks — tools like the early versions of CrewAI and the initial releases of LangGraph — were defined by a tension between autonomy and control. Early models simply were not reliable enough to be trusted with open-ended decision-making. The result was what practitioners began calling “agents on rails”: tightly constrained workflows where every decision point, every tool call, and every branching path had to be pre-defined by a human developer.

This approach worked, but it was limited. Building an agent on rails meant anticipating every possible state the system might encounter — a combinatorial nightmare for anything beyond simple, linear tasks. Worse, it meant that agents could not adapt to novel situations, the very capability that makes agentic AI valuable in the first place.


The Gemini 3 series, along with recent releases like Anthropic’s Claude Opus 4.6 and Sonnet 4.6, represents a threshold where models have become reliable enough at planning, reasoning, and self-correction that the rails can start coming off. Google’s own Opal update is an acknowledgment of this shift. The new agent step does not require builders to pre-define every path through a workflow. Instead, it trusts the underlying model to evaluate the user’s goal, assess available tools, and determine the optimal sequence of actions dynamically.

This is the same pattern that made Claude Code’s agentic workflows and tool calling viable: the models are good enough to decide the agent’s next step and often even to self-correct without a human manually re-prompting every error. The difference compared to Claude Code is that Google is now packaging this capability into a consumer-grade, no-code product — a strong signal that the underlying technology has matured past the experimental phase.

For enterprise teams, the implication is direct: if you are still designing agent architectures that require pre-defined paths for every contingency, you are likely over-engineering. The new generation of models supports a design pattern where you define goals and constraints, provide tools, and let the model handle routing — a shift from programming agents to managing them.
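The "define goals and constraints, provide tools, let the model route" pattern can be sketched generically. The planner below is a deterministic stub standing in for a frontier model, and all names are illustrative; the shape of the loop, not the stub, is the point:

```python
# Minimal goal-directed agent loop: a planner (here a stub in place of
# a frontier model) picks the next tool; no paths are pre-defined.

def search_web(query):
    return f"web results for '{query}'"

def summarize(text):
    return f"summary of: {text}"

TOOLS = {"search_web": search_web, "summarize": summarize}

def stub_planner(goal, scratchpad):
    # A real model would reason over the goal and prior results here.
    if not scratchpad:
        return ("search_web", goal)
    if len(scratchpad) == 1:
        return ("summarize", scratchpad[-1])
    return ("done", scratchpad[-1])

def run_agent(goal, max_steps=5):
    scratchpad = []
    for _ in range(max_steps):  # constraint: bounded step budget
        tool, arg = stub_planner(goal, scratchpad)
        if tool == "done":
            return arg
        scratchpad.append(TOOLS[tool](arg))
    return scratchpad[-1]

result = run_agent("acme corp earnings")
```

The builder supplies goals, tools, and a step budget; the routing between tools is left entirely to the planner, which is the "rails off" design the article describes.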

Memory across sessions: The feature that separates demos from production agents

The second major addition in the Opal update is persistent memory. Google now allows Opals to remember information across sessions — user preferences, prior interactions, accumulated context — making agents that improve with use rather than starting from zero each time.


Google has not disclosed the technical implementation behind Opal’s memory system. But the pattern itself is well-established in the agent-building community. Tools like OpenClaw handle memory primarily through markdown and JSON files, a simple approach that works well for single-user systems. Enterprise deployments face a harder problem: maintaining memory across multiple users, sessions, and security boundaries without leaking sensitive context between them.

This single-user versus multi-user memory divide is one of the most under-discussed challenges in enterprise agent deployment. A personal coding assistant that remembers your project structure is fundamentally different from a customer-facing agent that must maintain separate memory states for thousands of concurrent users while complying with data retention policies.

What the Opal update signals is that Google considers memory a core feature of agent architecture, not an optional add-on. For IT decision-makers evaluating agent platforms, this should inform procurement criteria. An agent framework without a clear memory strategy is a framework that will produce impressive demos but struggle in production, where the value of an agent compounds over repeated interactions with the same users and datasets.
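The single-user versus multi-user memory divide comes down to partitioning. A minimal sketch of the multi-tenant shape (all names illustrative; Google has not disclosed Opal's implementation) looks like this:

```python
# Sketch of per-user persistent memory with isolation between users,
# the multi-tenant problem highlighted above. Names are illustrative.

class MemoryStore:
    def __init__(self):
        self._store = {}  # (tenant, user) -> list of remembered facts

    def remember(self, tenant, user, fact):
        self._store.setdefault((tenant, user), []).append(fact)

    def recall(self, tenant, user):
        # A user only ever sees their own partition; there is no
        # cross-user read path, which is the security boundary.
        return list(self._store.get((tenant, user), []))

mem = MemoryStore()
mem.remember("acme", "alice", "prefers weekly summaries")
mem.remember("acme", "bob", "works in finance")

alice_view = mem.recall("acme", "alice")  # only Alice's facts
```

A production system would add encryption, retention policies, and audit logging on top, but the keying discipline is what prevents context leaking between users.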

Human-in-the-loop is not a fallback — it is a design pattern

The third pillar of the Opal update is what Google calls “interactive chat” — the ability for an agent to pause execution, ask the user a follow-up question, gather missing information, or present choices before proceeding. In agent architecture terminology, this is human-in-the-loop orchestration, and its inclusion in a consumer product is telling.


The most effective agents in production today are not fully autonomous. They are systems that know when they have reached the limits of their confidence and can gracefully hand control back to a human. This is the pattern that separates reliable enterprise agents from the kind of runaway autonomous systems that have generated cautionary tales across the industry.

In frameworks like LangGraph, human-in-the-loop has traditionally been implemented as an explicit node in the graph — a hard-coded checkpoint where execution pauses for human review. Opal’s approach is more fluid: the agent itself decides when it needs human input based on the quality and completeness of the information it has. This is a more natural interaction pattern and one that scales better, because it does not require the builder to predict in advance exactly where human intervention will be needed.

For enterprise architects, the lesson is that human-in-the-loop should not just be treated as a safety net bolted on after the agent is built. It should be a first-class capability of the agent framework itself — one that the model can invoke dynamically based on its own assessment of uncertainty.
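That dynamic pattern can be sketched as an agent that pauses itself when its own completeness check fails, rather than at a fixed checkpoint. Everything below is illustrative, not any framework's actual API:

```python
# Sketch of human-in-the-loop as a capability the agent invokes itself:
# execution pauses when the agent judges its information incomplete,
# not at a hard-coded checkpoint. All names are illustrative.

def run_step(task, answers):
    # Stand-in for a model assessing whether it has enough information.
    missing = [f for f in task["required"] if f not in answers]
    if missing:
        # Agent decides it needs the human; execution pauses here.
        return {"status": "needs_input",
                "question": f"Please provide: {missing[0]}"}
    return {"status": "done",
            "result": f"booked travel for {answers['date']}"}

task = {"required": ["date", "budget"]}
answers = {}

outcome = run_step(task, answers)   # pauses and asks for "date"
answers["date"] = "2026-03-01"
answers["budget"] = "1000"
outcome = run_step(task, answers)   # now completes
```

The checkpoint moves from the workflow graph into the agent's own judgment, which is why the builder no longer has to predict where intervention will be needed.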

Dynamic routing: Letting the model decide the path

The final significant feature is dynamic routing, where builders can define multiple paths through a workflow and let the agent select the appropriate one based on custom criteria. Google’s example is an executive briefing agent that takes different paths depending on whether the user is meeting with a new or existing client — searching the web for background information in one case, reviewing internal meeting notes in the other.


This is conceptually similar to the conditional branching that LangGraph and similar frameworks have supported for some time. But Opal’s implementation lowers the barrier dramatically by allowing builders to describe routing criteria in natural language rather than code. The model interprets the criteria and makes the routing decision, rather than requiring a developer to write explicit conditional logic.

The enterprise implication is significant. Dynamic routing powered by natural language criteria means that business analysts and domain experts — not just developers — can define complex agent behaviors. This shifts agent development from a purely engineering discipline to one where domain knowledge becomes the primary bottleneck, a change that could dramatically accelerate adoption across non-technical business units.
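The natural-language routing idea can be sketched as follows. The router here is a stub lookup standing in for a model that would actually interpret the criterion; routes, names, and the criterion string are all illustrative:

```python
# Sketch of dynamic routing: a model (stubbed here) interprets a
# natural-language criterion and picks a branch; the builder writes
# no per-route conditional logic. Names and routes are illustrative.

ROUTES = {
    "new client":      lambda name: f"web search: background on {name}",
    "existing client": lambda name: f"internal notes: history with {name}",
}

CRITERION = "Is this a new client or an existing client?"

def stub_router(criterion, context):
    # A real model would read `criterion` and `context` and answer in
    # natural language; we fake the decision with a flag lookup.
    return "existing client" if context["in_crm"] else "new client"

def briefing(client, in_crm):
    route = stub_router(CRITERION, {"client": client, "in_crm": in_crm})
    return ROUTES[route](client)

a = briefing("Globex", in_crm=False)   # routes to web search
b = briefing("Initech", in_crm=True)   # routes to internal notes
```

Because the criterion is plain English, a domain expert can change routing behavior by editing the sentence rather than the code, which is the accessibility shift described above.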

What Google is really building: An agent intelligence layer

Stepping back from individual features, the broader pattern in the Opal update is that Google is building an intelligence layer that sits between the user’s intent and the execution of complex, multi-step tasks. Building on lessons from an internal agent SDK called “Breadboard”, the agent step is not just another node in a workflow — it is an orchestration layer that can recruit models, invoke tools, manage memory, route dynamically, and interact with humans, all driven by the ever improving reasoning capabilities of the underlying Gemini models.

This is the same architectural pattern emerging across the industry. Anthropic’s Claude Code, with its ability to autonomously manage coding tasks overnight, relies on similar principles: a capable model, access to tools, persistent context, and feedback loops that allow self-correction. The Ralph Wiggum plugin formalized the insight that models can be pressed through their own failures to arrive at correct solutions — a brute-force version of the self-correction that Opal now packages into a polished consumer experience.


For enterprise teams, the takeaway is that agent architecture is converging on a common set of primitives: goal-directed planning, tool use, persistent memory, dynamic routing, and human-in-the-loop orchestration. The differentiator will not be which primitives you implement, but how well you integrate them — and how effectively you leverage the improving capabilities of frontier models to reduce the amount of manual configuration required.

The practical playbook for enterprise agent builders

Google shipping these capabilities in a free, consumer-facing product sends a clear message: the foundational patterns for building effective AI agents are no longer cutting-edge research. They are productized. Enterprise teams that have been waiting for the technology to mature now have a reference implementation they can study, test, and learn from — at zero cost.

The practical steps are straightforward. First, evaluate whether your current agent architectures are over-constrained. If every decision point requires hard-coded logic, you are likely not leveraging the planning capabilities of current frontier models. Second, prioritize memory as a core architectural component, not an afterthought. Third, design human-in-the-loop as a dynamic capability the agent can invoke, rather than a fixed checkpoint in a workflow. And fourth, explore natural language routing as a way to bring domain experts into the agent design process.

Opal itself probably won’t become the platform enterprises adopt. But the design patterns it embodies — adaptive, memory-rich, human-aware agents powered by frontier models — are the patterns that will define the next generation of enterprise AI. Google has shown its hand. The question for IT leaders is whether they are paying attention.



OpenAI raises $110bn in round double the size of previous raise


Alongside the investment, OpenAI and Amazon have also reached a deal in which OpenAI will utilise 2GW of computing capacity powered by Amazon’s in-house Trainium chips.

US artificial intelligence platform OpenAI has today (27 February) announced a $110bn funding round, reportedly a record for a private technology company, with a figure double that of its previous funding round.

Amazon invested $50bn, Nvidia invested $30bn and SoftBank invested $30bn. The investment brings OpenAI from a $500bn valuation to a $730bn pre-money valuation and OpenAI has stated that the organisation expects additional investors to join as the round progresses. 

Commenting on the news, Sam Altman, the CEO of OpenAI, told CNBC: “We’re super excited about this deal. AI is going to happen everywhere. It’s transforming the whole economy and the world needs a lot of collective computing power to meet the demand.”


OpenAI also confirmed that the funding announcement will not impact the terms of a current partnership it holds with tech giant Microsoft, which was established in 2019. In a joint statement, both organisations agreed that the deal is “strong and central” to operations. CNBC also reported that Microsoft has the option to participate in OpenAI’s funding round. 

Amazon’s $50bn investment in OpenAI will begin with an initial commitment of $15bn, followed by another $35bn over the course of the next few months when “certain conditions” are met. OpenAI also has an additional deal with Amazon in which the organisation will utilise 2GW of computing capacity powered by Amazon’s in-house Trainium chips.

The announcement comes at a time when many of the globe’s most influential technology companies are vying for dominance in the AI space. Earlier this week, Intrinsic, an Alphabet-owned software and AI company, announced that it was joining Google, a move intended to help Google push further into the physical AI space.

Japan’s SoftBank was also recently announced as the first to deploy SambaNova’s new SN50 chips within its data centres in Japan, while Intel is also partnering with SambaNova to roll out an Intel-powered AI cloud.





5 Of The Best Soldering Irons For Electronics


We may receive a commission on purchases made from links.

Electronics pose such a unique and interesting problem for DIY’ers itching to fix things. Most of your household electronics equipment won’t experience major faults or damage regularly, but every once in a while, the need to repair one of these surprisingly sophisticated elements comes into focus. You might have a chewed lamp wire that needs replacing, or a child’s toy may have been thrown one too many times, jarring loose an integral component. Breaking the lid off these electronic products and seeing what the guts inside look like is the dream of many tinkerers and home improvers. These kinds of operations become even more routine for people who like to customize their equipment after the fact or DIY their entire PC build. Guitarists represent yet another subset of the gearhead world where integrated electronics and the occasional soldering need meet an entirely separate hobby area.

Getting your first soldering iron can seem like a daunting task because of the number of options in size and scale to consider. Experts generally suggest veering toward a unit that produces at least 30 watts to ensure consistent and even heating, but many have had great experiences with smaller, weaker soldering systems, too. Your ideal soldering tool will depend heavily on the tasks you anticipate facing. I’ve frequently dabbled in the process myself and used both station setups and basic plug-in irons.


These five soldering tools run the gamut of what’s out there, and represent some of the best options in their categories.


X-Tronic 3020-XTS 75W Soldering Station

The X-Tronic 3020-XTS 75-Watt Soldering Station is a full-service station-style option. The tool goes far beyond the basic layout you might expect from a simple soldering unit, but it’s available at Amazon for $55, making it a tool that remains firmly in the camp of cost-effectiveness. The unit comes with some required soldering accessories and two grabber arms to hold a workpiece in place while you solder. This means buyers can get started on their repair projects with the tool right away, without requiring any additional purchases for basic soldering tasks. The unit has an average rating of 4.5 stars from 4,464 buyers.

The X-Tronic features a 75-watt total output with 60 watts of power directed to the soldering iron itself. It offers a wide temperature range from 194 degrees Fahrenheit to 896 degrees Fahrenheit and can climb from around 400 degrees to its maximum temperature in under 30 seconds, making it quick to ramp up output when needed. It includes an LED digital display with a central control panel that makes dialing in settings easy. The station also features a 55-inch power cord and a 40-inch tether for the iron. Solderers who plan to remain in place while working on electronic repair tasks can gain significant value from a solution like this.


Weller 60W Soldering Iron

Soldering iron users who demand mobility should consider a tool like the Weller 60-Watt Soldering Iron, which comes without the added heft and stationary nature of a soldering station. You simply plug the tool in and wait for it to reach operating temperature. This makes it incredibly mobile and allows users to achieve their goals in a wide range of conditions. There’s no need to maintain a dedicated soldering space with a unit like this; instead, you’ll bring your soldering iron to wherever the job demands.

This is a straightforward implementation, but it features some important touches that go beyond the basics. It doesn’t offer an adjustable temperature range; instead, it simply heats to 880 degrees Fahrenheit when engaged. It comes with a range of soldering tips and a quick tip-change capability that makes swapping them out simple. The tool also includes an integrated safety rest and an LED halo ring around the front end of its ergonomic body.

The tool is available from Amazon for $33, and it has received a 4.2-star average rating from 422 buyers. It’s also listed at Home Depot for $44, and here buyers have given it a similar rating, with a 4.1-star average across 559 reviews. You can also find the tool on Weller’s website, listed for $41.


Ryobi ONE+ 18V Hybrid Power Soldering Station

Ryobi is a brand with plenty of value for users of all backgrounds. Ryobi makes a USB Lithium soldering pen that stands among its small-scale cordless tools rivaling the full-sized 18-volt catalog. The Ryobi ONE+ 18-Volt Hybrid Power Soldering Station, however, offers something rarely found in a station layout: cordless operation. This sets it apart as a dynamic solution that can serve numerous functions. Its hybrid power model allows it to be plugged into the wall or run for over four hours of continuous cordless use on a ONE+ 18-volt battery pack. The tool features integrated temperature control ranging from 300 to 900 degrees Fahrenheit, and the iron comes with a three-foot cord for maximum mobility and includes a range of tips and other accessories.


The tool is available at Ryobi’s website for $63, and here it has received 166 reviews with a 4.8-star average rating. It’s also available at Home Depot ($63) and Amazon ($57), and at these outlets, it has garnered 4.7 stars from 455 reviewers and 4.6 stars from 427, respectively. The tool delivers 45 watts of power and utilizes a neatly arranged onboard storage design for sponges, tips, and other key accessories and materials.


Hakko FX-888D 70W Soldering Station

The Hakko FX-888D 70-Watt Soldering Station takes the footprint of a soldering station and breaks it into separate components. This enables mobility improvements and additional functionality. The tool comes with a rotary encoder and an improved interface over its previous model. The unit is available on Amazon for $121, and it has an average rating of 4.8 stars from 238 buyers. It delivers a 70-watt power rating and utilizes an ergonomic soldering pen with a detachable cord to make storage simpler when the tool is not in use.

Unlike most soldering stations, the iron holder and accessory elements are separated from the main power source and control unit. This allows you to organize your workspace more flexibly. The tool features a temperature range between 120 degrees and 899 degrees Fahrenheit. It can also achieve a 660-degree output in just 26 seconds. The tool also offers five preset temperature modes, enabling faster switching between the temperatures you frequently use. The tool also maintains an idle temperature within 1.8 degrees Fahrenheit of its setting and is compatible with more than 30 unique tip shapes for versatile soldering across project types.

Milwaukee M12 Cordless Soldering Iron

The Milwaukee M12 Cordless Soldering Iron isn’t the only cordless soldering tool on the market, but it is the only one from a primary tool brand that runs on the same battery platform as its main line. That makes it a key integration in Milwaukee’s M12 lineup, whereas the typical cordless soldering pen often has a gimmicky quality. The tool weighs less than half a pound and features a maximum temperature of 750 degrees Fahrenheit, underpinned by a 90-watt heater within the tool body. It features a three-stop pivoting head that allows for greater access in tight spaces. The tool also delivers a heat-up time of just 18 seconds, making the transition from preparation to fully functional use lightning fast.

Milwaukee’s soldering iron is available from Amazon as a bare tool for $109. It’s listed as an “Amazon’s Choice” product with over 400 bought in the past month. It has received a 4.7-star rating from 1,189 buyers. The tool is also available from Home Depot in a kit variety for $319. Here, it has received a 4.4-star average rating from 612 buyers. The kit includes the same pair of soldering tips that come with the bare tool, along with two batteries and a charger.

Methodology

We selected these soldering irons based on user reviews. Each one has been reviewed by at least 200 buyers, with many racking up over 1,000. The lowest-rated product has a 4.1-star average. These five soldering tools also offer different experiences for the user. They range from small-scale tools and cordless options to a full-sized soldering station. Therefore, these solutions include something that can deliver value to just about any kind of user need.
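The selection criteria above (at least 200 buyer reviews, with a 4.1-star floor) can be sketched as a simple filter. The product names and numbers below are hypothetical placeholders for illustration, not the article's actual dataset.

```python
# Sketch of the review-based selection used in this roundup.
# Data is hypothetical; thresholds mirror the stated criteria.
products = [
    {"name": "Iron A", "reviews": 422,  "rating": 4.2},
    {"name": "Iron B", "reviews": 1048, "rating": 4.7},
    {"name": "Iron C", "reviews": 95,   "rating": 4.9},  # too few reviews
    {"name": "Iron D", "reviews": 560,  "rating": 3.9},  # rating too low
]

MIN_REVIEWS = 200  # every pick has at least 200 buyer reviews
MIN_RATING = 4.1   # the lowest-rated pick averages 4.1 stars

qualified = [
    p for p in products
    if p["reviews"] >= MIN_REVIEWS and p["rating"] >= MIN_RATING
]

# List the qualifying tools, highest-rated first
qualified.sort(key=lambda p: p["rating"], reverse=True)
for p in qualified:
    print(f"{p['name']}: {p['rating']} stars ({p['reviews']} reviews)")
```

With this sample data, only the first two entries survive both thresholds.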

Tech

Ultrahuman Ring Pro Brings Better Battery Life, More Action and Analysis

Sick of your smart ring’s battery not holding up? Ultrahuman’s new $479 Ring Pro smart ring, unveiled on Friday, offers up to 15 days of battery life on a single charge. The Ring Pro joins the company’s $349 Ring Air and boosts health tracking thanks to longer battery life, increased data storage, improved speed and accuracy, and a new heart-rate sensing architecture. The ring works in conjunction with the latest Pro charging case.

Ultrahuman also launched its Jade AI, which can act as an agent based on analysis of current and historical health data. Jade can synthesize data from across the company’s products and is compatible with its Rings.

“With industry-leading hardware paired with Jade biointelligence AI, users can now take real-time actionable interventions towards their health than ever before,” said Mohit Kumar, CEO of Ultrahuman.

No US sales

That hardware isn’t available in the US, though, thanks to the ongoing ban on Ultrahuman’s Rings sales here, stemming from a patent dispute with its competitor, Oura Ring. It’s available for preorder now everywhere else and is slated to ship in March. Jade’s available globally.

Ultrahuman says the Ring Pro boosts battery life to about 15 days in Chill mode — up to 12 days in Turbo — compared to a maximum of six days for the Air. The Pro charger’s battery stores enough for another 45 days, which you top off with Qi-compatible wireless charging. In addition, the case incorporates locator technology via the app and a speaker, as well as usability features such as haptic notifications and a power LED.

The ring can also retain up to 250 days of data versus less than a week for the cheaper model. Ultrahuman redesigned the heart-rate sensor for better signal quality. An upgraded processor improves the accuracy of the local machine learning and overall speed. 

It’s offered in gold, silver, black and titanium finishes, with available sizes ranging from 5 to 14.

Jade’s Deep Research Mode is the cross-ecosystem analysis feature, which aggregates data from Ring and Blood Vision and the company’s subscription services, Home and M1 CGM, to provide historical trends, offer current recommendations and flag potential issues, as well as trigger activities such as A-fib detection. Ultrahuman plans to expand its capabilities to include health-adjacent activities, such as ordering food.

Some new apps are also available for the company’s PowerPlug add-on platform, including capabilities such as tracking GLP-1 effects, snoring and respiratory analysis and migraine management tools.

Tech

Jack Dorsey says AI is driving Block’s massive layoffs as 4,000+ roles are cut

Block, which Dorsey founded in 2009, is the US market leader in point-of-sale systems. It operates Square, Cash App, and Tidal, boasting over 60 million users.

Tech

Modder builds a CPU cooler powered by “infinite” ice from a hacked ice maker

The experiment sits halfway between absurdist entertainment and an intriguing case study in thermal engineering. The concept is deceptively simple: use melting ice to absorb the CPU’s heat, then recycle the meltwater to make new ice, forming a closed-loop cooling system.
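For a rough sense of the thermal budget involved: melting ice absorbs about 334 joules per gram, so the CPU's sustained power draw sets how quickly the ice bank is consumed. A quick back-of-envelope sketch, where the CPU wattage is an assumed figure for illustration rather than a number from the build:

```python
# Back-of-envelope estimate of how fast a CPU melts ice.
# All inputs are assumptions for illustration, not from the build itself.
LATENT_HEAT_FUSION = 334.0  # J per gram to melt ice at 0 °C
cpu_power_w = 150.0         # assumed sustained CPU heat output, in watts (J/s)

# Grams of ice melted per second if all CPU heat goes into the ice
melt_rate_g_per_s = cpu_power_w / LATENT_HEAT_FUSION

# Ice consumed over an hour of full load
melt_per_hour_kg = melt_rate_g_per_s * 3600 / 1000
print(f"{melt_per_hour_kg:.2f} kg of ice per hour")  # ≈ 1.62 kg
```

Under these assumptions, the closed loop only stays "infinite" if the hacked ice maker can refreeze meltwater at least that fast, which is why its freezing capacity, not the ice reservoir, is the real bottleneck.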

Tech

Apple’s code hints at new Studio Display models with two key upgrades

Apple’s rumored Studio Display refresh is back in the spotlight. While earlier reports suggested the company had two new models in the pipeline, fresh details (via Macworld) now hint at what could actually change. Newly uncovered code and leaks point toward upgrades to ports and speakers, offering the clearest picture yet of how Apple might evolve its pro-focused monitor lineup.

Just to refresh your memory, Apple introduced the original Studio Display in 2022 as a 27-inch 5K monitor designed to pair with Macs, featuring a built-in camera, speakers, and Thunderbolt connectivity. While the display has remained a popular choice for Mac users, it has seen few hardware changes since launch. That makes the signs of a refresh particularly noteworthy.

A refresh focused on ports and sound

First, the new models are expected to bring upgraded ports. As Macs continue adopting faster Thunderbolt and USB standards, improved connectivity would help the Studio Display better match modern workflows. Faster ports could support higher bandwidth for accessories, external storage, and multi-display setups, making the monitor more capable as a desktop hub.

The second rumored upgrade centers on improved speakers. The current Studio Display already includes a six-speaker sound system, but Apple is reportedly testing enhanced audio hardware for the new models. Better speakers could make the display even more suitable for video editing, music playback, and video calls without requiring external speakers.

As for how the two models might differ, nothing has been confirmed yet. Still, it would be unusual for Apple to release two nearly identical 27-inch displays with only minor changes to ports or speakers, which has led to speculation that one could be a larger 32-inch variant. For now, Apple has not officially acknowledged these displays, so it is best to treat the rumors with caution. That said, if the reports do prove to be accurate, Mac users who have been waiting for a Studio Display refresh may not have to wait much longer.

Tech

US arrests OnlyFake operator accused of selling over 10,000 AI-generated digital fake IDs

According to the US Attorney’s Office for the Southern District of New York, 27-year-old Ukrainian national Yurii Nazarenko (also known by several aliases, including “John Wick”) was charged and pleaded guilty to conspiracy to commit fraud involving identification documents and authentication features.

Tech

Why Not Ask Why: Neuroscientist Urges Educators to Reconsider Technology’s Reach

Several years ago, Jared Cooney Horvath’s interest in teaching took a scientific turn.

He entered teaching during a period he calls “the decade of the brain” — when much of the buzz around education and learning covered new theories about brain activity and information processing. Horvath believed that if he learned more about the brain, he’d become a better teacher.

Jared Cooney Horvath

But the education ideas that captured the popular imagination in the early 2000s had to do with catering to so-called learning styles — right- versus left-brain thinkers or visual versus word learners — and notions about how to hasten cognitive development through certain outside stimuli. Remember those moms-to-be with headphones on their bellies for their babies to experience the “Mozart Effect” in utero?

The gains from these methods proved to be short-lived or difficult to measure accurately.

Yet the science of learning persists. And what Horvath — today a neuroscientist and education consultant — now knows about human cognitive development has spurred him to join a cohort of researchers who are questioning the proliferation of technology and education software in schools.

His new book “The Digital Delusion” feels like a logical progression from Jonathan Haidt’s 2024 bestseller “The Anxious Generation,” which looked at how hours spent in front of screens, especially on social media, with its rapid-fire videos and toxic commentary, have damaged children’s overall mental health and learning.

In “The Digital Delusion,” Horvath outlines research showing how digital devices and screen time, at the expense of playtime, interfere with children’s cognitive development. He then argues that the ubiquitous use in schools of laptops and edtech, at the expense of traditional skills like handwriting and note-taking, alters, for the worse, how kids learn.

Horvath’s book arrives at a pivotal moment, with digital systems facing a cultural reckoning: Social media companies defend themselves in court against accusations that their platforms harm mental health, and lawmakers propose legislation that would severely restrict screen time for kids under 13. Meanwhile, school districts across the United States impose bell-to-bell cellphone bans, and parents push to opt their children out of using digital devices for school.

Horvath takes a pragmatic approach on that score, suggesting arguments parents can use with administrators and at school board meetings. He has chapters that include examples of letters and other tools parents can customize to mobilize action at state and federal levels.

Some educators maintain that schools should emphasize responsible use of technology, including AI, to prepare students for a technology-driven workforce. Horvath isn’t convinced. First, he argues, workforce preparation should not be education’s priority, particularly in younger grades. Second, it’s inefficient: “Teach someone to use a tool and they’ll be able to use that tool,” he writes. “Teach someone how to think and they’ll be able to use any tool.”

Even so, Horvath insists he isn’t anti-tech: “This isn’t a book about resisting devices,” he writes. “It’s a book about reclaiming education as a deeply human endeavor.”

EdSurge spoke with Horvath about “The Digital Delusion” and his work with schools around the globe, including in Australia, which at the end of last year banned social media for anyone under 16.

This interview has been edited for length and clarity.

EdSurge: You make the point that whenever a new technology is introduced to a culture, early adopters are the enthusiasts. But for any given technology to have broad acceptance, it must pass muster with skeptics. Yet that didn’t really happen with digital technology in schools, did it?

Horvath: If I invented something, I had to convince you. This [product] will get rid of that stain on your shirt. This will keep your iceberg lettuce crisp in the fridge. If you promised something you had to live up to it, because for the few people who adopted it to begin with, if you didn’t clean their stains, they’re not coming back.

Digital technology never made a claim to anything. It just kind of appeared and people just started using it. When AI came out, the developers flat-out said, we don’t know what this does. Why don’t you guys tell us what it does? And for some reason we shoved it into schools and said, instead of me telling you what it does, why don’t I let my kids tell you what it does?

Something very weird happened where they made no claims to efficacy and then we jumped in and started using it. Our job now is to start to pull some of those weeds rather than protect before planting. And unfortunately that means there’s been a lot of victims along the way.

A lot of kids have suffered due to our rush to just put things in their hands, unfortunately.

I think we have this love affair with digital technology. I don’t know if it’s because of sci-fi or “Star Trek” or what. We intuitively think this is going to be helpful.

And now we’re just scrambling back.

You explain that children need to play for optimum cognitive development, but ordinary childhood play and behavior has been disrupted by screens. Is there evidence that if we take the technology away from children whose brains are still forming that they can bounce back?

Yes, absolutely. The good thing about human biology is it is wickedly malleable.

There are two aspects to keep in mind. One, biology is also wickedly conservative. It changes all the time, but it never forgets anything. So if you have had a habit at one point and you drop that habit, you can move your biology a different way, but if you come back to that habit even once, your biology will have held onto that entire circuit. It’s a survival mechanism. Our genes, our brain, hold everything.

So when it comes to these tech habits, if you’ve already formed them as a kid, they will always kind of be there. If you think, I’m over this, and you pick up your phone, you will move much faster back into that habit than you did before.

The other thing to recognize here is everything we know about learning, and most of what we know about biology, basically starts after the age of 5. That’s when what we call human biological learning mechanisms really kick in.

From birth to about 5, you’re in a totally different world. The brain is basically in input mode. Gimme, gimme, gimme. And I’m going to hold onto everything. This is why if a kid grows up in a house with two languages, they will easily learn two languages because the brain just says gimme, gimme, gimme.

So that’s where I think the super danger zone comes in. If you develop habits or problems before the age of 5, when you hit 5, the brain locks itself down. You won’t be able to consciously remember what happened before the age of 5, but all of that [input] forms the foundation upon which further learning is going to occur.

My fear is if you form a habit before the age of 5 and then your brain locks down, are you now stuck in a spot where it will be very hard to get that out? If you’ve already addicted your kid before age 5, be careful. I don’t know what that’s going to mean when they get older.

There is data that says around 40 percent of 2-year-olds have tablets.

Why? My question is just why? There are a lot of states right now putting forward bills to limit screen time in primary years: K through [grade] 2, 90 minutes; [grades] 2 through 5, two hours a day. To which I always reply, why any hours?

I could easily make a case they don’t need any of this at any moment. It makes no sense for learning and development why [technology] needs to interface with anything they’re doing.

But by banning, aren’t we setting up a mystique around technology — causing a different kind of distraction around the yearning to use it?

That’s what you want. By banning and building a mystique, you give kids aspirations. I think back to my generation, when we turned 16, you couldn’t stop us from driving. Why? Because with our parents, that was the hold: you want to go to your friend’s house? You got a bike, you got feet, I’m not driving you. You want to get to school? There’s a bus, you got feet, I’m not driving you. So by the time we knew we could drive, that’s the first thing we did.

If by banning tech, that makes kids say when I’m 18, I’m using tech — then, good, that means I have 18 years to train you to be ready to use that machine.

Can schools realistically go back to paper? Textbooks, for instance, are expensive and take longer to update than websites, which are dynamic.

It’s funny, this is where you get the clash between different masters. A good rule of thumb is that you can only serve one master at a time. So we’ve got issues of: I want my kids to learn, but I have monetary constraints, and I have administrative bureaucracy that I’ve got to wend my way through.

When you’ve got multiple masters, eventually you’ve got to settle on one because if you try and serve many, no one’s going to be happy. And I would hope that in education we choose learning as our ultimate master. If that means, look, we have to devote more of our budget to textbooks and that means we won’t be able to do X this year, then so be it.

If that means, look, we’re going to only use the website for the last two years of history, but we’re going to have the book for the rest because it’s better for learning, then so be it.

I don’t know how much more research we need on this. People learn more from hard copy text than they do from digital text. It’s done. That battle is over. So if learning is our outcome, why not go back to what we know works best for that?

Can you explain the findings around taking notes by hand?

Most students think note-taking is something they do while they learn. So [they think] if AI does it for me — cool! But they miss the point. Note-taking is the learning, not something that’s happening in parallel to learning. That is the learning. Because that’s where you’re doing your transformation: Your teacher said it. I now have to analyze it, think about it, organize it, get it out.

That requires friction. Your brain is going much faster. So the handwriting is constraining the speed with which you can think, which in turn is forcing you to focus on ideas, which in turn is transforming those ideas as you’re going along.

That is the definition of learning.

The act of handwriting is arguably the most complex thing we do. When it comes to motor skills, there might be nothing more complex than that.

We talk about the difference between gross- and fine-motor movements. Name one skill we do that is so minutely fine as handwriting and so varied as handwriting. If you’re using a pen versus a pencil versus a crayon versus a marker, you’re doing very subtly different movements.

Those develop so much more awareness and understanding of the body in a way that then translates into other fields in ways we’ve never seen from any other skill before.

If you know how to write, you will become better at reading. If you know how to write, you will become better at recognizing faces. Why? We don’t know. But everything seems to be correlated back to that skill.

So when people debate [whether] handwriting is still worth teaching? Of course. Is cursive still worth teaching? Of course. No one’s going to use cursive as an adult. That’s not why we’re teaching it, baby. It has nothing to do with what you’re going to do as an adult.

You were just in Australia. What is the feedback from the social media ban?

The response is overwhelmingly positive. Basically every school I worked at, the kids are fine with it. Teachers are fine with it. All of a sudden, behaviors are getting so much better in school. They said the biggest problem is with parents, oddly enough, who basically have to hang out with their kids and they don’t know what to do. If that’s our biggest problem, we’ll solve that. Hang out with your kid.

Any time you remove something from your kid’s heart, you’re going to have to fill it with something else. You’re going to have to fill it with yourself, which means you’re going to have to take some of your own tech out of your own life to devote more of your time to your kid.
