Jeff Bezos framed this copy of a 2006 BusinessWeek cover, reflecting Wall Street’s skepticism about AWS at the time. (Jeff Bezos via X, May 2022)
In the early days of Amazon Web Services, technical evangelist Jeff Barr was putting in long hours on the road, pitching a novel concept: rent computing power for 10 cents an hour, and storage for 15 cents a gigabyte per month — no servers to buy, no data centers to build.
Barr remembers calling his wife to check in at the end of the day. Get a nice dinner, she told him, you deserve it. But later, at the restaurant, looking at the menu and doing the math in his head, he couldn’t help but ask himself if the pennies were adding up.
“Did enough people start using these servers to buy me a decent steak?” he wondered.
He probably should have ordered the filet.
Two decades later, AWS generates nearly $129 billion a year in revenue. That’s enough to rank in the top 40 of the Fortune 500 if it were a standalone company, ahead of the likes of Comcast, AT&T, Tesla, Disney, and PepsiCo. Companies such as Netflix, Airbnb, Slack, Stripe and thousands more have built massive businesses on its platform.
When AWS goes down, it ripples across the web, taking down apps, websites, and services that most users never knew were on a common infrastructure.
But the business that defined cloud computing — bankrolling Amazon’s expansion into everything from streaming to same-day delivery — is now grappling with the most significant challenge since it launched. The rise of AI has upended the industry, empowering Microsoft, Google and others, and creating competitive dynamics that seem to change every month.
For the first time, AWS faces questions about its long-term ability to lead the market it created.
With Amazon marking the 20th anniversary of AWS this month, GeekWire spoke with early builders, current AWS insiders, and longtime observers of the company to tell the story of how the business got started, how it won the cloud, and what it’s up against now.
Scalable, reliable, and low-latency
Officially, Amazon pegs the public launch of AWS to March 14, 2006. That’s when it announced “a simple storage service” that offered software developers “a highly scalable, reliable, and low-latency data storage infrastructure at very low costs.”
Dubbed S3, it was Amazon’s first metered cloud service: the first time developers could pay for exactly what they used, billed in tiny increments, with no upfront commitment.
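The metered model described above is simple arithmetic. A minimal sketch, using the original list prices mentioned earlier (10 cents per instance-hour, 15 cents per gigabyte-month) as assumed flat rates; real AWS bills had many more dimensions:

```python
# Illustrative only: the 2006 list prices described in this article,
# treated as flat rates. Actual AWS billing was more granular.
EC2_PER_INSTANCE_HOUR = 0.10  # dollars per instance-hour
S3_PER_GB_MONTH = 0.15        # dollars per gigabyte-month

def monthly_bill(instance_hours: float, stored_gb: float) -> float:
    """Pay only for what you use: no servers to buy, no upfront commitment."""
    return instance_hours * EC2_PER_INSTANCE_HOUR + stored_gb * S3_PER_GB_MONTH

# One server running around the clock for a 30-day month, plus 20 GB stored:
print(round(monthly_bill(30 * 24, 20), 2))  # 720 * $0.10 + 20 * $0.15 = $75.0
```

At those rates, Barr's steak dinner was a plausible day's revenue from a handful of early customers.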
“We think it can be a meaningful, financially attractive business.” A Bloomberg News story quotes Jeff Bezos about AWS in November 2006. S3 launched earlier in the year.
All of this might seem mundane in a modern world where the cloud and internet services are almost like electricity and water, seemingly always there when you need them.
But remember the context of that moment: Facebook was available only on college campuses. Netflix arrived on DVDs in the mail. The iPhone was still a year away from being unveiled. And over at Microsoft in Redmond, they were finally getting ready to ship Windows Vista.
The asterisk in the headline
The history of Amazon Web Services is more complicated than it might seem, and it’s actually a subject of some disagreement behind the scenes. There are multiple origin stories, including one offered by Amazon itself, and others by former employees who say the company has tidied up the narrative over the years to shape the lore around its current leaders.
Journalist Brad Stone, author of the canonical Amazon book, “The Everything Store,” discovered this when Andy Jassy — the longtime AWS CEO who would go on to succeed Jeff Bezos as Amazon CEO — disputed aspects of his telling of the AWS story in a one-star review.
One point of contention: the origins of EC2, the AWS service built by a small team in South Africa, and the degree to which it sprang from the process Jassy led or was born independently.
Part of the challenge: Amazon, despite operating the storehouse of the internet, isn’t great at preserving its own history. The company, which cooperated with this piece, wasn’t able to unearth key documents such as Jassy’s original AWS six-pager from September 2003.
Some former Amazon leaders take things further back, to a set of e-commerce APIs that Amazon released in July 2002, allowing outside developers to access its product catalog and build applications on top of it. By that accounting, AWS is closer to 24 years old.
Overcoming internal opposition
The effort was led by business leader Colin Bryar, who ran Amazon’s affiliates program, and technical leader Robert Frederick. Frederick’s Amazon Anywhere team, focused on making Amazon’s site and features available on mobile devices, had been working since 1999 on internal web services that became the foundation for the external APIs.
Amazon in those days was on Seattle’s Beacon Hill, in the landmark art deco Pacific Medical Center tower overlooking downtown. Jeff Bezos was directly involved from the early days, as a believer in the vision that Amazon’s infrastructure capabilities could become a big business.
In 2002, when Bryar initially pitched a roomful of senior leaders on the idea of opening up Amazon’s product catalog and features as web services to outside developers, nearly all of them said no, as Frederick recalled in a recent interview.
The objections piled up: it would cannibalize existing business, it would educate competitors. Then, as Frederick remembers it, Bezos looked around the table and let out one of his trademark piercing laughs. Amazon’s founder wanted to see what developers would do.
“Let’s do it,” Frederick recalls Bezos saying, “and let’s have them surprise us.”
Later, in a July 2002 press release announcing “Amazon.com Web Services,” Bezos used nearly identical language: “We can’t wait to see how they’re going to surprise us.”
Big developer response
Within months, tens of thousands of developers had signed up. Increasingly, they were asking for things like storage, hosting, and compute, recalled Frederick, who worked at Amazon through mid-2006. He went on to found IoT platform Sirqul in 2013 and remains its CEO.
Another veteran of those early days agreed that the developer response to those initial e-commerce APIs may have opened the minds of Amazon’s leaders to the larger possibilities.
“Maybe that’s where Andy’s brain lit up. … Maybe that’s where Jeff’s brain lit up,” said Dave Schappell, referring to Jassy and Bezos. Schappell arrived at Amazon in 1998 as Jassy’s MBA intern, dropped out of Wharton to stay, and spent the next seven years working with him.
Schappell ran the associates program after Bryar, became an early head of product for AWS, and hired the original product managers. Those product managers included Jeff Lawson, who went on to found Twilio. Schappell himself became a well-known Seattle entrepreneur before returning to AWS for four years after Amazon acquired his startup TeachStreet.
The ‘crystal-clear movie moment’
Jeff Barr was one of the developers who noticed.
Now an Amazon VP and longtime AWS chief evangelist, Barr was working as an outside consultant in the web services field when he logged into his Amazon Associates account one day in 2002 and noticed a new message.
AWS Chief Evangelist Jeff Barr joined in the early days of the business. (Amazon Photo)
Amazon now had XML, it said, referring to the data-formatting standard that allowed software systems to communicate over the internet. Amazon was making its product catalog available as a web service and connecting it to the affiliate program, a surprising move at the time.
“I clicked through, I signed up for the beta. I downloaded it right away,” Barr recalled.
He sent feedback to the email address in the documentation. They actually replied.
Before long, he was invited to a small developer conference at Amazon’s headquarters — maybe four or five attendees at the Pacific Medical Center tower, in a semicircular open space with a view of the city. The developers sat in the middle, with Amazon employees around them.
At some point, one of the Amazon presenters announced that they were so impressed at how developers had found the APIs and started publishing apps within 48 hours that they were going to look around the rest of the company for more services to open up.
“That was that crystal-clear movie moment,” Barr said. He turned to an Amazon employee nearby and told her: “I have to be a part of this.”
Creating the cloud
But what Frederick and team had built was essentially a way for outside developers to access Amazon’s product data. It was not yet the cloud as we know it today.
That move started in mid-2003, as Jassy told the story in a 2013 talk at Harvard Business School. Jassy, then serving as Bezos’s technical advisor, was tasked with figuring out why software projects across Amazon were taking so long. It turned out that engineers were spending months building storage, database, and compute solutions from scratch.
In a meeting of six or seven people that summer, someone made the observation that would change the company’s trajectory. Jassy recalled the thinking during his HBS talk: “We’re pretty good at this. And if we’re having so many problems, and we don’t have anything we can use externally, I imagine lots of other companies probably have the same problem.”
Around the same time, Amazon recruited Werner Vogels, a Cornell distributed systems researcher, as its chief technology officer. He almost didn’t take the call. “It’s an online bookstore,” he recalled in a LinkedIn post last week. “How hard could their scaling be?”
But the company was wrestling with every problem he and his colleagues had been theorizing about — fault tolerance, consistency, availability at scale — live in production, every day.
Fundamental building blocks
Schappell remembers those early days as a non-stop cycle of six-page memos and meetings with Jassy and Bezos, all focused on trying to figure out what to build.
The concept that would define AWS — breaking every capability down to its most basic building block, or “primitive” — didn’t arrive fully formed. “I don’t think he said that on day one,” Schappell said of Bezos. “I think he said it after he read 47 of our six-pagers.”
Each primitive would stand on its own, and customers would pay only for what they used, billed in tiny increments. It was a direct rebuke to the licensing models of companies such as database giant Oracle, where customers paid for everything whether they used it or not.
Rahul Singh, who joined AWS in January 2004 as one of its first engineers, recalled the early technical plans going through just one layer of review before reaching Bezos and Jassy. (It’s the kind of streamlined decision-making that Jassy is now trying to restore across the company.)
Fault tolerant by design
In one early meeting, Bezos told the engineers he wanted a server touched exactly twice: once when installed in the data center, and once years later when it was pulled out. In between, nothing. The software had to be built to tolerate failures, leaving dead machines behind and moving on. It was a philosophy that would define the architecture of the cloud.
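The "touch the server twice" philosophy can be sketched in a few lines. This is an illustrative toy, not Amazon's code: the point is that software simply abandons a dead machine and routes work to the survivors, with no operator intervention in between.

```python
import random

class Fleet:
    """Toy model of fault-tolerant routing: dead machines are left behind."""

    def __init__(self, hosts):
        self.healthy = set(hosts)

    def mark_dead(self, host):
        # A failed machine is abandoned in place; hardware is reclaimed
        # much later, the second and final "touch."
        self.healthy.discard(host)

    def pick_host(self):
        if not self.healthy:
            raise RuntimeError("no healthy capacity")
        return random.choice(sorted(self.healthy))

fleet = Fleet(["web-1", "web-2", "web-3"])
fleet.mark_dead("web-2")  # the service moves on without pausing
print(fleet.pick_host() in {"web-1", "web-3"})  # True
```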
On Singh’s first day, his manager Peter Cohen sat him down in the lunch area and handed him a planning document (a “PR/FAQ” in Amazon lingo) that had just been approved by Bezos.
“We’re calling this S4,” Cohen said. Singh looked at the name of the product, Simple Server-Side Storage Service, and pointed out that it should be called S5. Singh recalls Cohen’s response: “Yeah, you’re really smart, aren’t you? Let’s see if you can actually build this.”
It was eventually shortened to Simple Storage Service, or S3.
The queuing service SQS had launched in beta in 2004 (adding further to the debate over the origin story and what counts as the launch), but S3 was the first AWS service made generally available.
A billion-dollar business?
Jassy, then the VP in charge of AWS, would hold all-hands meetings in a conference room for four or five engineers, most of them straight out of college and grad school, as Singh recalled in an interview. Jassy ran them with the discipline of a much larger organization, repeating over and over that AWS could be a billion-dollar business, at a time when it had no revenue at all.
Singh remembers being highly skeptical.
“I was young and naive, and I remember thinking: a billion, that’s a really big number,” Singh said. Years later, he would joke with Jassy that the prediction had been completely wrong: it turned out to be a multi-billion dollar business, many times over.
In a LinkedIn post marking the March 14 anniversary, current AWS CEO Matt Garman — who joined the company as a summer intern in 2005, before the launch of S3 — recalled how early customers like FilmmakerLive and CastingWords took a bet on the fledgling platform.
“That shift changed the economics of building technology almost overnight,” he wrote.
Meanwhile, in Cape Town …
While one team was building S3 in Seattle, the compute side of the equation was taking shape 10,000 miles away. Chris Pinkham, an Amazon VP who wanted to move back to his native South Africa, was given permission to set up a development office in Cape Town.
His small team built EC2 — the Elastic Compute Cloud — largely independent of the Seattle operation. The local tech community was a bit bewildered by what Amazon was doing.
“We knew this bookstore had arrived in town,” recalled Dave Brown, who was working at a local payments startup at the time. He asked his friends who had joined what they were doing.
Dave Brown, Vice President, AWS Compute & ML Services, at AWS re:Invent 2025. (Amazon Photo)
“It’s kind of like, you know, you can rent a computer on the internet,” they told him.
Brown asked about the revenue. “Tens of dollars every single day,” they said.
He remembers wondering why they were wasting their time on that.
The answer became clear when EC2 launched in August 2006, five months after S3, adding compute to storage as another fundamental building block of AWS and the cloud.
Early customers showed EC2’s range: a Spider‑Man movie used it for rendering, and Facebook apps like FarmVille and Animoto spun up instances on demand, as Brown recalled.
A New York Times engineer, after being told by the company that the job would be cost-prohibitive using traditional approaches, used a personal credit card to run optical character recognition on the paper’s scanned archives over a weekend, making the entire archive searchable. The grand total was a couple hundred bucks, even after an initial mistake meant running the job twice.
Typing ahead of the characters
Brown joined in August 2007, the 14th person on the EC2 team. They worked out of a tiny office in Constantia, the winelands part of Cape Town, across the highway from vineyards.
They occupied part of one floor of an office building. There was one conference room, and two offices. The rest was open plan. The team was 14 engineers, one product manager, and Peter DeSantis, the leader who came from Seattle to help build the service.
The internet connection was a four-megabit DSL line shared by the entire office, with 300 milliseconds of latency to the data centers in the U.S. When engineers typed on their screens, each character had to make the round trip across the ocean and back before it appeared.
“You get really good at typing ahead of where the actual characters are appearing,” Brown said.
Every morning, someone had to find the VPN token to get the office online. It lasted about 10 hours before it automatically reset. “Everybody would be shouting, where’s the VPN token?”
Scrambling to keep up
One day, they were running low on computing capacity. DeSantis came out of his office and told the engineers to shut down the machines they were using for testing. That freed up enough capacity to keep the service going for a few days until the next racks of hardware came online.
Marc Brooker, now an AWS VP and distinguished engineer working on agentic AI, joined the EC2 team in Cape Town in 2008. He could see the entire team from his desk. When Brown was away one day, Brooker and the team covered every surface of his office in sticky notes — the kind of prank that only works in a small office where everyone knows everyone else.
Brooker was drawn in by something he heard about in his job interview: the team had built a way to make a distributed system look like a physical hard drive to the operating system.
“Wow, that is so cool,” he recalled thinking. “Here’s 20 other things I can think of that we could do with that kind of technology.”
That instinct, that the building blocks of the cloud could be combined and recombined in ways no one at Amazon had imagined, was at the core of what made AWS catch on.
AWS VP Mai-Lan Tomsen Bukovec, who oversees AWS’s core data and analytics services, in front of a whiteboard on which she mapped the evolution from the early days of S3 to the AI era at Amazon’s re:Invent building in Seattle. (GeekWire Photo / Todd Bishop)
“The world would be in a very different place if you didn’t have the freedom to experiment, to pilot, to try something, to move on to some other idea, that AWS first introduced,” said Mai-Lan Tomsen Bukovec, an AWS VP who has led S3 for 13 of its 20 years.
Prasad Kalyanaraman, now the AWS vice president who oversees global infrastructure, previously spent years building supply-chain forecasting systems for Amazon’s retail operation. Around 2011, Charlie Bell, then a senior AWS leader, asked him to help with a problem: the team was forecasting its compute demand using spreadsheets.
He adapted the supply-chain forecasting tools for AWS, but the cloud business kept outrunning every model he built.
“The funny thing about forecasts is that forecasts are always wrong,” he said. “It’s very hard to actually predict exponential growth.”
How AWS grew
It began with startups. The companies that would define the next era of technology, including Airbnb, Instagram, and Pinterest, all got their start on AWS.
John Rossman, a former Amazon exec and author of books including “The Amazon Way” and “Big Bet Leadership,” remembers Jassy pulling him aside for coffee at PacMed around 2008. Rossman had left Amazon and was working as a consultant to large businesses. Jassy wanted to know: did he think big companies would ever be interested in on-demand computing?
Maybe, maybe not, Rossman said. He was working with Blue Shield of California at the time, and tried to imagine them running on AWS. It was hard to picture. At the time, the typical AWS customer was a startup developer with little budget for infrastructure. The idea that a big insurance company would run on AWS seemed like a stretch.
“I was a little bit of a pessimist on it,” Rossman said.
But soon things started to change.
Netflix moved its streaming infrastructure to AWS starting in 2009, a decision that carried particular weight because it competed with Amazon in video. In 2013, the CIA awarded AWS a contract over IBM, signaling that the platform was trusted at the highest levels of security.
Microsoft tips its hat
AWS’s pricing model, in which customers paid only for what they used, was a direct threat to the licensing businesses of tech’s old guard. Whether burying their heads in the sand or just preoccupied, the companies that would become the biggest AWS rivals were slow to respond.
Microsoft didn’t unveil its cloud platform — code-named “Red Dog,” and initially launched as “Windows Azure” — until October 2008, more than two years after S3 debuted. Bill Gates had left his day-to-day role at Microsoft a few months earlier. The company was still recovering from the aftermath of the Vista flop.
“I’d like to tip my hat to Jeff Bezos and Amazon,” said Ray Ozzie, then Microsoft’s chief software architect, at the launch event — a rare public acknowledgment of a competitor’s lead.
Azure didn’t reach general availability until 2010, and its early approach was more of a platform for applications, not the raw infrastructure that made AWS so popular with developers. It took years to build out comparable offerings.
Google launched App Engine, a platform for running applications, in 2008, but didn’t offer raw computing infrastructure to rival EC2 until Compute Engine arrived in 2012.
‘The AWS IPO’
For years, AWS grew in something close to silence. Amazon said little about the overall growth, and didn’t break out the financial results for the business in its quarterly earnings reports.
Then, in April 2015, Amazon reported its first-quarter earnings with AWS broken out in detail for the first time, and it stunned the industry. The business had a $6 billion annual revenue run rate and was growing 50% a year.
The modest expo hall at the first AWS re:Invent, under construction in 2012, left. Last year’s conference, right, drew 60,000 people to Las Vegas. (2012 Photo Courtesy Jeff Barr; 2025 Photo by Todd Bishop)
AWS generated more than $250 million in profit that quarter alone, with operating margins around 17%. This was a stark contrast with the rest of Amazon, scraping by on traditional retail margins of 2% to 3%. AWS was making significantly more profit on every dollar of revenue.
The hosts of the Acquired podcast, in their extensive 2022 history of the rise of Amazon Web Services, would later call this moment, in effect, “the AWS IPO.”
Amazon stock jumped 15% on the news.
“I was blown away,” said Schappell, the early AWS product leader who left in 2004 and later listened to the first AWS earnings breakout while training for a marathon. For years, he had assumed Amazon was losing billions on AWS. The reality was the opposite: AWS had become so profitable that it was effectively bankrolling Amazon’s future.
The margins kept climbing, reaching 35% by early 2022.
Then the pandemic cloud boom faded. Inflation spiked amid broader economic uncertainty. Customers scrutinized their cloud bills and pulled back spending. AWS revenue growth fell from 37% to 12% over the course of the year, the slowest in its history. Margins fell to 24%.
The ChatGPT moment
Then everything changed, for Amazon and everyone else.
On November 30, 2022, OpenAI released ChatGPT with little fanfare. The consumer AI chatbot quickly became the fastest-growing application in history, reaching 100 million users in two months and sending the technology world into a frenzy.
For AWS, the stakes were huge. Every major wave of technology over the previous 15 years, from mobile to social to streaming to e-commerce, had been built on its platform.
If AI was the next wave, AWS needed to lead the way again.
Amazon was far from absent in AI. AWS had launched SageMaker in 2017, giving developers tools to build and deploy machine learning models. It had released custom AI chips for inference and training. Alexa, the voice assistant, had been processing natural language queries since 2014. Amazon had spent many years and billions of dollars on machine learning.
But none of it looked or worked like ChatGPT. The new model could write code, draft essays, answer complex questions, and hold a conversation. It was not a feature. It was a product people wanted to use. And it was built by an AI lab running on Microsoft Azure.
‘AWS sneaked in there’
The irony: OpenAI didn’t start on Microsoft’s cloud. It launched on AWS.
When the AI lab debuted in December 2015, AWS was listed as a donor. OpenAI was running its early research on Amazon’s infrastructure under a deal worth $50 million in cloud credits.
Microsoft CEO Satya Nadella learned about it after the fact. “Did we get called to participate?” he wrote to his team that day, in an email that surfaced only recently in a court filing from Elon Musk’s suit against Microsoft and OpenAI. “AWS seems to have sneaked in there.”
Microsoft moved fast. Within months, Nadella was courting OpenAI. The AWS contract was up for renewal in September 2016. “Amazon started really dicking us around on the [terms and conditions], especially on marketing commits,” Sam Altman wrote to Musk, who was then OpenAI’s co-chair. “And their offering wasn’t that good technically anyway.”
By that November, Microsoft had won the business.
Six years later, with the launch of ChatGPT, that bet paid off in ways no one could have predicted. Microsoft stock surged. Amazon, like many others in the industry, was scrambling to figure it all out — suddenly trying to keep up with the future of a market it had long defined.
Pivoting to generative AI
The AWS CEO at the time was Adam Selipsky, who had helped build the business from its earliest days before leaving in 2016 to run Tableau, the data visualization company. He returned in May 2021 to lead AWS after Jassy was promoted to succeed Bezos as Amazon CEO.
In a May 2024 interview with Selipsky, on one of his last days in the role, GeekWire asked him directly if Amazon had been caught flat-footed by the rise of generative AI.
After a member of his team interjected to say the question seemed to be informed by reading too many Microsoft press releases, Selipsky dismissed the idea that AWS was behind.
While that narrative might have “more sizzle” and generate clicks, Selipsky said, the reality was different, as evidenced by Amazon’s years of work in AI and machine learning.
AWS had announced Inferentia, a chip for deep learning, in 2018, building on its 2015 acquisition of Annapurna Labs, the Israeli chip startup. It began work on CodeWhisperer, an AI coding assistant, in 2020 — before GitHub Copilot existed, the company notes. In 2021, it launched Trainium, a chip designed to train models with 100 billion or more parameters.
Dario Amodei, CEO of Anthropic, right, speaks with Adam Selipsky, then CEO of Amazon Web Services, at AWS re:Invent on Nov. 28, 2023. (GeekWire File Photo / Todd Bishop)
At the same time, Selipsky acknowledged that AWS had “pivoted many thousands of people from other interesting, important projects to work on generative AI” — a scale of reallocation signaling something other than business as usual inside the company.
Tomsen Bukovec, who now oversees AWS’s core data services including S3, analytics, and streaming, said her team’s response was less a pivot than a process of learning.
They educated themselves on what the technology meant for their services, she said, and thought deeply about what it would look like for AI to both create and consume data at scale.
The question her team started asking in late 2022: what does the world look like when 70 to 80 percent of the usage of your services comes through AI?
“AI is going to use it at 10 times to 100 times the rate of a human, and it’s going to do it all day long, all the time, 24 hours,” she said. “AI never goes to sleep.”
Scrambling to meet the moment
The pressure to catch up in generative AI was felt across the company. In a lawsuit filed in Los Angeles Superior Court, an AI researcher who worked on Amazon’s Alexa team alleged that a director instructed her to ignore internal copyright policies because “everyone else is doing it.”
The complaint described ChatGPT’s launch in late November 2022 as causing “panic within the organization.” Amazon has denied the allegations, and the case is still pending.
On Amazon’s earnings call in early February 2023 — two months after ChatGPT’s launch — Amazon CEO Andy Jassy did not discuss generative AI or large language models.
Matt Garman, AWS CEO, speaks at AWS re:Invent 2025. (GeekWire File Photo / Todd Bishop)
By the next quarter’s call, in late April 2023, he spoke about it for nearly ten minutes, describing it as “a remarkable opportunity to transform virtually every customer experience that exists.”
In September 2023, the company announced an investment of up to $4 billion in Claude maker Anthropic, the AI startup founded by former OpenAI researchers. The investment would eventually grow to $8 billion — which seemed like a lot at the time.
Selipsky left AWS in mid-2024. Garman, whom Selipsky had hired as a product manager in 2006, succeeded him as CEO, charged with leading the cloud business into the new era.
From CodeWhisperer to Bedrock
The roots of Amazon’s response actually predated ChatGPT by more than two years, although it faced initial skepticism internally. In 2020, Atul Deo, an AWS product director, wrote a six-page memo proposing a generative AI service that could write code from plain English prompts.
Jassy, who was still leading AWS at the time, wasn’t sold. His reaction, as Deo later told Yahoo Finance, was that it seemed like a pipe dream. The project launched in 2023 as CodeWhisperer, an AI coding assistant.
But by then, ChatGPT had redrawn the landscape, and the team realized they could offer something broader: a platform giving customers access to a range of foundation models through a single service. AWS called it Bedrock. The name reflected an ambition to do for AI models what the company had done years earlier with its Relational Database Service, which wrapped MySQL, Oracle, and other database engines in a common management layer.
Bedrock would do the same for large language models.
The decision to offer multiple models rather than push a single in-house option was deliberate, and rooted in a pattern AWS had followed for years. It brought multiple CPUs to the cloud: AMD, Intel, and its own Graviton. It offered Nvidia GPUs alongside its own Trainium chips.
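The common-layer idea described above can be sketched as a thin routing abstraction: one interface, many interchangeable backends. All names here are invented for illustration; this is not the Bedrock API, just the shape of the design choice.

```python
from typing import Callable, Dict

class ModelRouter:
    """Hypothetical sketch of a common management layer over many models."""

    def __init__(self):
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, model_id: str, backend: Callable[[str], str]) -> None:
        # Each vendor's model plugs in behind the same interface.
        self._backends[model_id] = backend

    def invoke(self, model_id: str, prompt: str) -> str:
        # Callers pick a model per request; switching vendors is one string.
        return self._backends[model_id](prompt)

router = ModelRouter()
router.register("vendor-a", lambda p: f"[A] {p}")
router.register("vendor-b", lambda p: f"[B] {p}")
print(router.invoke("vendor-b", "hello"))  # [B] hello
```

Because no one backend is privileged, the layer itself becomes the product, the same bet RDS made with database engines.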
Fastest-growing AWS service
Amazon’s view is that choice drives competition, which drives down prices for customers.
“We knew there was never going to be one model to rule everybody,” said Dave Brown, the AWS vice president who oversees EC2, networking, and custom silicon. “And even the best model was not going to be the best model all the time.”
Bedrock launched in preview in April 2023 and reached general availability that September, with models from Anthropic, Meta, and others alongside Amazon’s own. Two years later, it had become the fastest-growing service AWS had ever offered, with more than 100,000 customers.
On Amazon’s most recent earnings call, Jassy described it as a multi-billion-dollar business, with customer spending growing 60% from one quarter to the next.
At the end of 2024, Amazon added its own entry to the model race. The company introduced a family of foundation models called Nova, positioned as a lower-cost, lower-latency alternative to the third-party models on the Bedrock platform.
Amazon CEO Andy Jassy unveils the Nova models at AWS re:Invent in December 2024. (GeekWire Photo / Todd Bishop)
As Fortune’s Jason Del Rey observed, it was a page from the e-commerce playbook: build the marketplace first, then stock it with a house brand. Just as Amazon sells goods from thousands of merchants alongside its own private-label products, Bedrock offered models from Anthropic, Meta, and others, and now Amazon’s own models to go along with them.
At re:Invent in late 2025, AWS pushed further, unveiling what it called “frontier agents” — autonomous AI systems designed to work for hours or days without human involvement.
One, built into Amazon’s Kiro coding platform, can navigate multiple code repositories to fix bugs while a developer sleeps. Last month, the Financial Times reported that Amazon’s own AI coding tools caused at least one AWS service disruption. Amazon acknowledged the incident but publicly disputed aspects of the reporting, citing a misconfigured role, not the AI itself.
The $200 billion bet
Like its rivals, AWS is also building the physical infrastructure to back its AI ambitions. In 2025, less than a year after it was announced, AWS opened Project Rainier, one of the world’s largest AI compute clusters, centered in Indiana and powered by more than 500,000 of Amazon’s Trainium2 chips.
Named after the mountain visible from Seattle, Rainier was built to train and run Anthropic’s next generation of Claude models on Amazon’s own silicon rather than Nvidia GPUs.
Kalyanaraman, the AWS vice president who oversees global infrastructure, said the project forced AWS to rethink its supply chain from the ground up. The goal was to minimize the time between a chip leaving its fabrication facility and serving a customer workload.
Rainier was built at a faster pace than anything AWS had ever done, Kalyanaraman said, with more than 100,000 Trainium chips available to Anthropic in under a year. But it wasn’t a one-off. He called it the new template for how AWS would build AI infrastructure going forward.
Then, late last month, came the deal that brought the story full circle.
OpenAI — the company that launched on AWS in 2015 and left for Microsoft Azure the following year — announced a partnership with Amazon that included up to $50 billion in investment and a cloud agreement worth more than $100 billion over eight years.
OpenAI committed to run workloads on Amazon’s custom Trainium chips, making it the second major AI lab after Anthropic to do so. The two companies had been talking since at least May 2023, according to SEC filings, but Microsoft’s right of first refusal on OpenAI’s compute had blocked a deal until those restrictions were loosened in the latest renegotiation.
By late 2025, AWS revenue was growing at its fastest pace in more than three years, up 24% to $35.6 billion a quarter. The company disclosed that its Trainium and Graviton chips had reached a combined annual revenue run rate of more than $10 billion. Bedrock had surpassed 100,000 customers and was generating revenue in the billions.
The competitive picture was also coming into sharper focus.
In mid-2025, Microsoft disclosed standalone Azure revenue for the first time: $75 billion a year, up 34%. Google Cloud had crossed a $50 billion annual run rate. AWS, at more than $116 billion a year at the time, was still larger — but no longer running away with the market.
All of this helps to explain Amazon’s record capital spending. On the company’s latest earnings call, Jassy defended plans to spend $200 billion this year, most of it on AI infrastructure.
The figure is so large it would consume nearly all of Amazon’s operating cash flow. Facing a Wall Street backlash, Jassy called artificial intelligence “an extraordinarily unusual opportunity to forever change the size of AWS and Amazon as a whole.”
What’s next: Bear and bull cases
Longtime observers are divided on the company’s AI bet.
Corey Quinn, a cloud economist who works with AWS customers through his Duckbill consultancy, sees little real-world traction for Amazon’s Nova models. “You know someone is an Amazon employee when they talk about Nova, because no one else is,” he said.
Some businesses bypass Amazon’s Bedrock platform entirely because of capacity constraints and slower speeds, he said, going to third-party providers like Anthropic rather than inserting Bedrock as a “middleman” — unless they’re trying to retire their committed AWS spend.
Looking forward, Quinn pointed to a historical parallel. Twenty years ago, Cisco was the most valuable company in the world, the backbone of the internet. Today it is a profitable but largely invisible utility. AWS, he said, could be headed for the same fate.
“It’s very clear that there will be a 40th anniversary for AWS, because that inertia does not go away,” Quinn said. “But will it be at the center of tech policy and giant companies, or is it going to be a lot more like the Cisco of today?”
Om Malik, the veteran tech writer, cast a critical eye on Amazon’s OpenAI investment.
By his math, Amazon is paying roughly 16 times more per percentage point of OpenAI than Microsoft did, with none of the exclusive IP rights, revenue share, or primary API access that Microsoft locked up years ago. The cost of being late, Malik wrote, is measured in billions.
The lobby at AWS headquarters, the re:Invent building in Seattle. (GeekWire Photo / Todd Bishop)
Rossman, the former Amazon executive who was once skeptical about AWS demand from big business, sees a different picture. He agrees that AWS is strong in infrastructure, the picks and shovels. But where Quinn sees that as a ceiling, Rossman sees it as a moat.
The models are the commodity, Rossman contends. They leapfrog each other constantly. What matters is everything the models run on and through: the chips, the servers, the data centers, the power. AWS is building more of that stack than most competitors.
“That’s where the value is,” he said.
Rossman said he could envision AWS operating nuclear power plants someday. The long-term winners, he said, will be the companies that deliver the best AI at the lowest cost per token. That’s where AWS’s vertical integration — from Trainium chips to Bedrock to the data center itself — gives it an advantage competitors can’t easily replicate.
As for the risk of spending too much, Rossman put it simply: you have to decide which side of history you’d prefer to fail on, overbuilding or underbuilding. Amazon, clearly, has chosen to overbuild.
In an internal all-hands meeting last week, Jassy said AI could help AWS reach $600 billion in annual revenue, double his own prior estimate, Reuters reported. He had been thinking for years that AWS could be a $300 billion business in a decade. AI, he said, changed the math.
Marshals, a new Yellowstone spinoff starring Luke Grimes as Kayce Dutton, is airing on CBS right now. You can also tune in with Paramount Plus. The Yellowstone sequel series sees Grimes’ former Navy SEAL join an elite unit of US Marshals to bring range justice to Montana, according to a synopsis from CBS.
The show includes Yellowstone actors Gil Birmingham as Thomas Rainwater, Mo Brings Plenty as Mo and Brecken Merrill as Tate. Spencer Hudnut is the showrunner of Marshals — formerly known as Y: Marshals — and Taylor Sheridan is an executive producer.
When to watch new Marshals episodes on Paramount Plus
Episode 10 of Marshals airs on CBS on Sunday, May 3. Viewing options for Paramount Plus customers vary by subscription tier. You can watch the episode live if you have Paramount Plus Premium, which includes your local CBS station. If you subscribe to Paramount Plus Essential, you can watch the installment on demand the following Monday, but not live on Sunday.
Here’s a release schedule for the next four episodes of Marshals.
Episode 10, Playing with Fire: Premieres on CBS/Paramount Plus Premium on May 3 at 8 p.m. ET/8 p.m. PT/7 p.m. CT. Streams on Paramount Plus Essential on May 4.
Episode 11, On Thin Ice: Premieres on CBS/Paramount Plus Premium on May 10 at 8 p.m. ET/8 p.m. PT/7 p.m. CT. Streams on Paramount Plus Essential on May 11.
Episode 12, The Devil at Home: Premieres on CBS/Paramount Plus Premium on May 17 at 8 p.m. ET/8 p.m. PT/7 p.m. CT. Streams on Paramount Plus Essential on May 18.
Episode 13, Wolves at the Door: Premieres on CBS/Paramount Plus Premium on May 24 at 8 p.m. ET/8 p.m. PT/7 p.m. CT. Streams on Paramount Plus Essential on May 25.
You can also watch CBS and the tenth episode of Marshals without cable with a live TV streaming service such as YouTube TV, Hulu Plus Live TV or the DirecTV MyNews skinny bundle. In addition to offering a lower-cost option, Paramount Plus lets you watch the other two Yellowstone spinoffs: the prequels 1883 and 1923.
After a price increase in early 2026, the ad-supported Essential version runs $9 per month or $90 per year. The ad-free Premium version runs $14 per month or $140 per year. Paying more for Premium gives you downloads, the ability to watch more Showtime programming than Essential and access to your live, local CBS station.
If you’re into vibe coding, OpenAI just made it a lot more adorable. The company has rolled out Codex Pets, a brand-new feature for its Codex desktop app that adds animated companions to your screen while you work. Codex is OpenAI’s agentic coding tool that handles tasks on your behalf. It runs in the background and gets things done, and now it has a tiny mascot to go with it.
So, what exactly is a Codex Pet?
A Codex Pet is an optional animated companion that floats as an overlay on top of your screen, even when the Codex app itself is minimized. It shows you what Codex is currently working on through small message bubbles and alerts you when a task wraps up or when it needs your input.
If your pet pops up mid-task, you can click on it to send a reply directly to the agent. It is a passive status indicator that doubles as a lightweight two-way channel. Eight built-in pets are available right out of the box, all designed in a cute pixel-art style.
How do you get a Codex Pet?
Screenshot: OpenAI
Getting a Codex Pet is simple. Just open the Codex app and type “/pet” to summon or dismiss your companion. If you want something more personal, use the “/hatch” command. Hatch is a bundled tool that takes any image you upload and turns it into a fully animated pet, saved locally in your Codex home folder so you can share it with others.
The community has already taken to it, with fan-made sharing sites appearing online within hours of the launch. OpenAI is even running a limited-time contest in which the creators of its 10 favorite custom pets win 30 days of ChatGPT Pro.
Beyond the pets, the same update also introduced config auto-import, which allows Codex to detect and pull in settings from other coding agents, such as Claude Code. There is also a new dictation dictionary in Settings, where you can save abbreviations and phrases so voice input stops getting them wrong.
Cordless power tools are a huge part of anyone’s toolkit, whether for DIYers working on home projects or professionals on the job site. They require less effort than their hand tool counterparts, letting you get work done more quickly. The good news is that power tools are incredibly popular and sold at every hardware store. In fact, there are so many brands that it’ll make your head spin. So, where do you start?
For most people, deciding on a brand is the first step. Every major brand uses systems designed around their own battery types, allowing you to buy three, four, or more tools from the same brand while needing comparatively fewer batteries. However, actually deciding between brands can still get a little difficult.
We’re here today to help answer that question, so if you’re curious about the ins and outs of cordless power tools, you’ve come to the right place.
Snap-on
Snap-on’s hand tools are legendary for their American-made quality, lifetime warranties, and generally excellent reputations among professional workers. The brand also sells power tools, and by all accounts, they’re decent. However, they may not be the best choice for most people. For starters, the product line is relatively small, giving you fewer options than almost every other brand we looked at. Additionally, the power tools themselves are also exceptionally expensive.
For the price, you’re not getting much you can’t get elsewhere. For example, the brand’s impact wrench has 1,550 lb-ft of breakaway torque and the bare tool costs around $630, while a Milwaukee M18 Impact Wrench costs more than $200 less and delivers 1,600 lb-ft. The warranty is also only up to two years, less than most competitors. Toss in the fact that these aren’t typically available in stores, and it becomes difficult to recommend Snap-on for cordless power tools. Other brands offer cheaper tools with longer warranties and wider availability.
Festool
It’s understandable if you haven’t heard of Festool before. The brand is much better known in its home country of Germany, where it has a reputation for making some good power tools. Like Snap-on and some other brands, its cordless power tools are rather expensive, with an impact driver and drill combo set going for around $650. For the price, you get a competently built tool, three years of warranty, and a guarantee that spare parts will be available for at least a decade.
So, why is it so low on the list? Well, for DIYers and hobbyists, the price is a pretty big pitfall. Festool products are also difficult to find in stores, which limits many buyers to ordering online from retailers like Amazon. The selection for U.S. shoppers is also smaller than some competitors’, which means the batteries won’t go quite as far if you plan on stocking up on many cordless power tools. It’s an excellent brand, but maybe not the best value for most people.
Black and Decker
Black and Decker’s reputation has gone through its ups and downs over the years, but all told, it’s not half bad, at least when it comes to cordless power tools. The brand has a few dozen power tools and kits available for sale using the brand’s 20V Max PowerConnect battery system, and it’s much like other power tool brands in this part of the list. You’ll find basic stuff like drills, circular saws, and other common power tools. Its selection is smaller than most competitors, but it hits the high marks and can be found in retail stores like Home Depot.
There are three main reasons Black and Decker isn’t higher on the list. First, other brands offer larger selections than what’s offered by Black and Decker. Second, the two-year power tool warranty is on the lower end of the spectrum. Finally, Stanley Black and Decker also owns DeWalt, which is the superior sub-brand for cordless power tools.
Bauer
Bauer is Harbor Freight’s budget brand for power tools, and one of two Harbor Freight brands on this list. Its selection is pretty decent, boasting dozens of cordless power tools — mostly the usual stuff, like cordless drills and angle grinders. The brand has more than enough to cover most basic DIY work and, somewhat oddly, it’s also becoming popular with professionals looking for inexpensive tools that they don’t use very often.
The prices are about as low as it gets for cordless tools. A good example is the Bauer 20V Cordless Drill, which costs $55 and includes a battery and charger. Reviews tend to be pretty positive for Bauer tools, and the brand has its fans. The biggest detriment is Bauer’s warranty, which is a scant 90 days. That’s the shortest warranty of any cordless power tool company we saw, and it might be worth spending a little extra elsewhere for more coverage.
Worx
Worx is a brand you may not know, possibly because you can only get it from online retailers like Amazon, but it’s a major player in the power tool market. In any case, the brand has a decent overall selection, roughly on par with brands like Bauer and Black and Decker. You can find the basics at least, along with some outdoor tools like hedge trimmers and leaf blowers. The prices are about average for the industry.
The general sentiment from shoppers is that Worx is better for DIY stuff than pro use, which puts it in the same neighborhood as Bauer, Ryobi, and some other brands. It has a three-year warranty, which is better than Bauer and Black and Decker. You can also get free string trimmer spools for life when you buy one, which is neat. Overall, Worx isn’t necessarily great, but it’s also not bad.
Hercules
Among Harbor Freight brands, Hercules is a step up from Bauer in terms of overall quality. Its tools are widely available at Harbor Freight locations and include a selection of several dozen products. Like every other brand on the list so far, you get basics like a reciprocating saw or a cordless drill along with some specialty items, but the lineup pales in comparison to the big dogs when it comes to variety. The general sentiment for Hercules tools is positive, but you can find the occasional complaint if you look around.
Hercules has an odd warranty policy. The brand sells both brushless and non-brushless tools. The brushless tools have a five-year warranty, which is among the best for cordless power tools, while the rest have a 90-day warranty, tying Bauer for the worst. Brushless cordless power tools from Hercules are a better value than anything else on the list so far thanks to their longer warranties and good availability.
Metabo
Metabo is a German toolmaker with a limited presence in the U.S., which means a lot of folks may never have heard of it. It’s more popular in Europe, where the selection is much larger, but the brand still has a good reputation in America. The U.S. lineup is a bit smaller, and you’ll find more 18V tools than 12V, so you may want to skip the 12V tools if you’re looking to build out a collection. Metabo’s tools are backed by a three-year warranty, and you can usually find them on Amazon.
Overall, Metabo’s tool prices are in line with industry averages, and availability on Amazon makes them easy to find. These are definitely good tools, with many reviewers saying Metabo HPT tools are as good as the Hitachi tools they replaced after Hitachi bought out Metabo. The only thing holding this brand back is its weaker-than-average selection.
Kobalt
Kobalt is Lowe’s in-house brand, and as such, you’ll find the blue cordless power tools all over the store if you walk around. Kobalt’s selection is the biggest of any brand so far, offering over 100 cordless power tools across the brand’s 24V, 40V, and 80V battery systems, which include everyday items like drills and even cordless electric lawnmowers. Its wide coverage is a big step up from some other brands and makes it easier to justify getting into the ecosystem.
The general sentiment around Kobalt is that it’s great for DIYers, and the occasional pro has been known to pick up a Kobalt tool when there’s a big sale. The warranty on power tools is also quite good, with five years for tools and three years for batteries. Lowe’s is also incredibly transparent with recalls and safety notices, so really, the brand covers all the bases. It may not be the best for pro work, but there’s little reason not to trust it for DIY stuff.
Craftsman
Craftsman has been around for nearly 100 years and is one of the most well-known power tool brands in the world. The brand has well over 100 cordless power tools, ranging from your standard drill to a variety of outdoor tools as well. It’s not quite as popular in the pro segment as it once was, but the occasional professional still uses Craftsman, and it is still quite popular in the DIY, homeowner, and hobbyist segment, where the brand’s cordless power tools get reasonably good reviews.
Craftsman has a good variety, and most of its metrics are about average. Power tools have a three-year limited warranty, and recalls on defective tools aren’t terribly common. In short, Craftsman doesn’t excel in any one metric in particular, but has a good all-around showing, with warranties, selection, and availability on par with many competitors and better than some cheaper brands like Bauer or Black and Decker. Other brands do better overall, though, so Craftsman is about average.
Bosch
Users trust Bosch tools more than almost any other brand on this list. The brand’s tool selection is above average, with dozens of tools and kits to purchase. The only problem is finding them. Some Bosch tools are in stock at Home Depot, although in-store availability tends to be a little random. You can, of course, order them online directly from Bosch or from Amazon, but it’d be nice to see a wider selection in stores.
Bosch’s products are backed by a one-year warranty, which is on the shorter side, but the brand’s recall list is quite short, so it appears as though warranty replacements aren’t needed too often. Even so, the shorter warranty, spottier availability, and average tool selection make it hard to gush about Bosch too much, even if the tools it does sell have stellar reviews online. It’s an above-average brand, for sure, as long as it has what you need.
Husqvarna, Ego Power, and Stihl
Ego Power, Husqvarna, and Stihl all have one thing in common: they sell cordless power tools exclusively for outdoor use. This is notable because most of the brands we’ve discussed so far don’t sell outdoor cordless power tools at all. All three sell push mowers, chainsaws, string trimmers, leaf blowers, hedge trimmers, and more, all powered by batteries. All three have good reputations and are often compared to larger tool brands like Milwaukee.
All three brands are available in stores as well as online, where they have very good customer reviews. Ego Power has the best warranty at a flat five years, Husqvarna comes in second with up to five years depending on the product, and Stihl has an average two-year warranty. Their selections are a little small, but they compete in a segment most cordless power tool brands avoid. Pick the one you like most; they’re all competent.
Ryobi
Ryobi’s placement on this list was difficult to decide. On the one hand, it is a darling in the DIY department, with customers praising the brand’s price, availability, and selection. In fact, Ryobi has so many tools that we make lists of just the ones you may not have heard about. On the other hand, the brand isn’t terribly popular with pros, although some do use Ryobi products occasionally. Ryobi’s greatest strength is its selection, which includes hundreds of tools across its 18V and 40V battery systems that include everything from cordless drills to battery-powered lawn mowers. It competes for the biggest selection of any brand on the list.
Ryobi’s warranty is also pretty decent, offering between three and five years depending on the tool, which is better than average. Availability is also no problem because you can’t walk through a Home Depot without seeing Ryobi everywhere. It’s very popular there, and there are thousands of positive reviews on some tools. It’s hard to argue that Ryobi isn’t good.
Ridgid
Ridgid is sold exclusively at Home Depot, but isn’t owned by the retailer like Husky is. Its selection is pretty average, offering roughly 100 cordless power tools, which put up good numbers in terms of popularity and reviews, but not at the level of Ryobi, DeWalt, and other big brands. Pros do use Ridgid tools, albeit not as often as DeWalt or Milwaukee. So, you may be wondering why Ridgid is so high on the list.
The reason is that the brand has the single best warranty of any toolmaker on the list. It gives you three years out of the box, which is pretty standard. However, if you register the product online and apply for the lifetime service agreement, Ridgid adds a lifetime warranty that includes free replacement batteries, free service, and free replacement parts for the tool’s original owner. There are limitations to this, but that is a ridiculous warranty for a cordless power tool, and it immediately makes Ridgid worthy of consideration, even with its smaller tool lineup.
Makita
Makita is a huge purveyor of cordless power tools, and easily among the best on the market. It’s considered a pro-level brand, and DIYers should have no shame in picking these up as well. The brand’s selection is larger than most, with nearly 600 power tools across three battery systems. The biggest is the LXT system, with 350 products all by itself, more than every other brand on this list so far except Ryobi. The brand’s warranty is also decent, offering an average three-year warranty on its power tools and batteries, which is better than some and worse than others.
Makita tools strike a good balance between affordability and competence. The brand only has a few recalls in its history, and its tools are readily available in stores and online. Users seem to like them, with some tools garnering thousands of reviews, most of which are positive. There really isn’t much to complain about. This is a pro-level brand with hundreds of tools, good availability, and a decent warranty.
DeWalt
If you’ve gotten the sense that a lot of folks use DeWalt tools, it’s because they do. This is one of the most popular tool brands in the U.S. for both DIYers and pros, and it’s easy to see why. The brand’s selection is quite large, housing hundreds of tools across the brand’s 12V, 20V, and 60V battery systems, more than most tool brands on the market. There isn’t much you won’t find in the collection, including outdoor tools like lawn mowers.
DeWalt’s tools are backed by a three-year warranty, which is average for the industry. The brand is also quite transparent with recalls, with a history dating back over 25 years, which is much longer than most tool companies’ recall lists. There are definitely some DeWalt tools beginners should avoid, but otherwise, there’s not much to criticize here. DeWalt is a big dog, and this is one of its best segments.
Milwaukee
There aren’t many areas where Milwaukee doesn’t excel. It has an outstanding number of power tools across its M12, M18, M24, and M4 battery systems, with more coming every year. The only brands with more are DeWalt, Ryobi, and Makita. It’s often placed side by side with DeWalt, Makita, and Bosch as the de facto choice for pros, and you could probably rank Makita, DeWalt, and Milwaukee evenly on this list since they’re so close to equal. Milwaukee tools are also popular with DIYers, putting up excellent numbers in terms of customer reviews and reputation.
The only reason we placed Milwaukee above DeWalt is because Milwaukee has a longer warranty. It covers up to five years for cordless power tools and three years for batteries, beating out DeWalt by two years. Its recall notice list is also quite short, with only four tools recalled in the last decade. DeWalt and Makita do have larger tool selections, though, so again, it’s mostly a wash.
How we ranked these cordless power tool brands
Judging a whole list of cordless tool brands is no easy task. Between all the brands, there are nearly 2,000 total tools to look at. We created this list using a variety of metrics. In terms of performance, most competing tools offer very similar features. For example, Hercules’ impact wrench does 1,500 lb-ft of breakaway torque, while Milwaukee’s does 1,600 lb-ft and Snap-on’s does 1,550 lb-ft. All three did a few hundred pound-feet better than Bauer or Ryobi, though, so performance helped a little bit.
After that, we looked at selection size, in-store and online availability, warranty length, value, and general sentiment. For example, Ryobi is mostly known as a DIY-friendly brand, whereas DeWalt, Milwaukee, and Makita are pretty popular with DIYers and pros. Bauer’s 90-day warranty dropped its placement, while Milwaukee’s five-year warranty was the tie breaker between it, Makita, and DeWalt. Once all of these factors were taken into consideration, the list appeared as it does above.
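To make that kind of multi-metric weighing concrete, here is a hypothetical scoring sketch in Python. The metric values below are illustrative placeholders loosely drawn from figures mentioned in this article (warranty in years, selection as a rough tool count, torque in lb-ft), and the weights and normalization are invented for the example; this is not the actual ranking formula.

```python
# Hypothetical weighted scoring of three brands. Values are placeholders
# for illustration, not verified specs for every metric.
brands = {
    "Milwaukee": {"warranty": 5,    "selection": 400, "torque": 1600},
    "DeWalt":    {"warranty": 3,    "selection": 400, "torque": 1500},
    "Bauer":     {"warranty": 0.25, "selection": 60,  "torque": 1200},
}

# Invented weights: warranty mattered most as the tie-breaker above.
weights = {"warranty": 0.4, "selection": 0.3, "torque": 0.3}

def normalize(metric):
    """Scale one metric to [0, 1] relative to the best brand."""
    top = max(b[metric] for b in brands.values())
    return {name: b[metric] / top for name, b in brands.items()}

def rank():
    """Return brand names sorted by weighted score, best first."""
    norms = {m: normalize(m) for m in weights}
    scores = {
        name: sum(weights[m] * norms[m][name] for m in weights)
        for name in brands
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank())  # Milwaukee first under this invented weighting
```

Under this made-up weighting, a long warranty and a big catalog pull a brand up even when torque figures are nearly identical, which mirrors how the tie-breakers described above played out.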
A 1999 press release bragged “Jeeves” answered 92.3 million questions in just three months. “In the digital wilds of Y2K, we came to him with our most probing questions,” remembers the New York Times — whether it was Britney Spears or tamagotchis:
We asked, and he answered: Jeeves, the digital butler of information, the online valet who led us into the depths of cyberspace. Now, like so many other relics of yesterday’s internet, Jeeves — and his home, Ask.com — are no more. After almost 30 years, the question-and-answer service and former search engine shuttered on Friday. “To you — the millions of users who turned to us for answers in a rapidly changing world — thank you for your endless curiosity, your loyalty, and your trust,” the company said in a notice posted on its now-defunct website…
Created in Berkeley, Calif., in the days of the dot-com gold rush, Ask Jeeves first appeared on computer screens in 1996…. Their mascot, Jeeves, was modeled on the clever English butler character from the famed P.G. Wodehouse book series. Its search function was simple — type in a question, get an answer. But the quality of its responses was uneven, and the website was quickly eclipsed by Google and Yahoo as the world’s go-to search engines.
The site was bought by InterActive Corp. for more than $1 billion in 2005, and was given an injection of cash to help it compete as a search engine. It rebranded as Ask.com and as part of the reimagining, the site also ditched the character of Jeeves in 2006. Scrappy but inventive, the site was one of the first to introduce hyperlocal map overlays to its searches and incorporate thumbnails of webpages. “They are doing a lot of clever and interesting things,” a Google executive noted of Ask.com at the time. Still, Ask.com struggled to compete and returned in 2010 to its bread and butter: question-and-answer style prompts.
Even then, it faltered against newer, crowdsourced iterations like Quora and Google’s unyielding march to the internet fore — the platform now dominates search traffic, and the world’s general experience of the internet.

A statement at Ask.com ends “by thanking its millions of users, and saying, ‘Jeeves’ spirit endures’,” notes this article from Engadget:

As sad as it is to see a relic of the early Internet days fade into obscurity, we still have Ask Jeeves to thank for why some users still punch in full questions when querying Google. On top of that, Jeeves was built to provide detailed answers in natural language, which could have arguably acted as a precursor to today’s AI chatbots like ChatGPT.

“Now, Ask.com joins the Internet graveyard that includes competitors like AltaVista, which shut down in 2013,” the article points out. “With Ask.com gone, alongside AIM and AOL dial-up services also sunsetting, we’re truly coming to an end of a specific era of the Internet.”
And the New York Times argues the memory of Jeeves now rests somewhere between Limewire and Beanie Babies…
Lithium deposits identified across Appalachia could supply hundreds of years of imports
Domestic discoveries across multiple states point to expanding lithium exploration efforts
Extraction capacity remains the biggest challenge despite large confirmed resource estimates
Lithium buried beneath parts of the Appalachian region could supply the United States with hundreds of years of material essential for batteries, electronics, and large-scale energy storage systems.
New estimates from the United States Geological Survey point to roughly 2.3 million metric tons of lithium oxide located in pegmatite formations spread across areas of the eastern United States.
Much of the material is believed to sit beneath sections of the Carolinas, while additional deposits are estimated to lie under parts of western Maine and New Hampshire.
Reporting on the news, Fortune says the scale is large enough to replace about 328 years of US lithium imports based on recent demand levels, a number that shows just how dependent the country has become on foreign sources for key battery materials.
The deposits could support production of about 500 billion cellphones, along with billions of laptops and tablets, or enough batteries to power roughly 130 million electric vehicles if the material can be recovered at commercial scale.
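A back-of-envelope check in Python shows how these headline figures relate to one another. The per-year and per-device numbers below are simply implied by the article’s own totals and counts; they are not independently sourced estimates.

```python
# Relating the USGS resource estimate to the article's derived figures.
# All inputs come from the article itself; division is the only step.
LITHIUM_OXIDE_TONNES = 2_300_000   # USGS estimate for Appalachian pegmatites
YEARS_OF_IMPORTS = 328             # Fortune's "years of imports" figure
PHONES = 500_000_000_000           # "about 500 billion cellphones"
EVS = 130_000_000                  # "roughly 130 million electric vehicles"

# Implied annual U.S. lithium-oxide imports, in metric tons
implied_imports_per_year = LITHIUM_OXIDE_TONNES / YEARS_OF_IMPORTS

# Implied lithium oxide per device (tonnes -> grams, tonnes -> kilograms)
grams_per_phone = LITHIUM_OXIDE_TONNES * 1_000_000 / PHONES
kg_per_ev = LITHIUM_OXIDE_TONNES * 1_000 / EVS

print(f"Implied imports: ~{implied_imports_per_year:,.0f} t/year")
print(f"Implied lithium oxide per phone: ~{grams_per_phone:.1f} g")
print(f"Implied lithium oxide per EV: ~{kg_per_ev:.1f} kg")
```

The implied figures (roughly 7,000 metric tons of imports per year, about 4.6 g of lithium oxide per phone, and under 18 kg per EV battery) are plausible orders of magnitude, suggesting the article’s headline numbers are at least internally consistent.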
Much of the discussion around lithium now centers on supply chains, especially since China dominates the production of finished lithium-ion batteries used in devices ranging from smartphones to electric vehicles and backup systems in data centers.
Demand continues rising as manufacturers move away from older battery types, while lithium-ion technology remains widely used in systems that require fast charging and long operating life.
USGS says lithium resources in Appalachia are concentrated inside pegmatites, large-grained granite-like rock formations that can trap valuable elements during cooling and crystallization deep underground.
Accessing the material remains the biggest hurdle, since the United States currently produces only a small share of global lithium output despite rising domestic demand.
The country produced about 610 metric tons of lithium recently, accounting for roughly 0.3% of worldwide production, while most refining and large-scale battery manufacturing continues to take place overseas.
In December 2025 we reported how researchers identified lithium-rich clay deposits inside the McDermitt Caldera along the Nevada–Oregon border, where geological surveys suggested between 20 million and 40 million metric tons of lithium-bearing material could exist.
Geological analysis showed that layers of volcanic ash and long-running hydrothermal activity enriched soft sediments with lithium, creating clay bands that in some cases sit close enough to the surface to allow open-pit extraction.
Industry planners continue to point out that discovery alone does not guarantee production, since refining capacity, environmental permits, and infrastructure determine how quickly resources reach the market.
Government-backed funding and private investment projects are already underway in places such as Arkansas, where chemical extraction methods are being tested to increase domestic production capacity.
If you bought a digital game on the PlayStation Store between April 2019 and December 2023, you may soon receive some store credit in your account. A federal judge in San Francisco granted preliminary approval of a proposed $7.85 million settlement for a class action lawsuit that accused Sony of eliminating competition and monopolizing the market for its digital games through the PlayStation Store.
The lawsuit was first filed in May 2021 and claims that Sony’s alleged anticompetitive conduct caused gamers to “pay more than they otherwise would have paid for certain digital games.” The legal action comes after Sony eliminated “game-specific vouchers” sold by third-party companies in April 2019, which the lawsuit argued could have resulted in lower prices on the PlayStation Store if customers had alternative options through other retailers like Best Buy, GameStop and others.
The law firm representing affected users posted a list of eligible games, which includes The Last of Us, NBA 2K18 and Need for Speed Rivals, and said there are more than 4.4 million eligible PlayStation Network accounts. For anyone who qualifies as part of the class action settlement, you’ll see your PSN account credited once the final approvals are in. The court will hold a Fairness Hearing on October 15, where it will issue a final judgment and approve the plan for allocating the millions of dollars to eligible accounts.
Notably, this lawsuit is separate from a similar class action filed in the UK, which accuses Sony of “unfairly charging its UK customers too much for digital games and in-game content purchased through the PlayStation Store.” Unlike this recent settlement, the UK case could see Sony pay up to $2.7 billion to UK residents over the alleged antitrust conduct.
Apple has big plans for its F1 streaming service. Image source: Apple
Apple is keen for there to be a sequel to “F1: The Movie,” SVP Eddy Cue said, as the company hopes to increase its involvement with the motorsport in the future.
Apple has multiple connections to Formula 1: it is the official broadcaster of the sport in the United States, and it’s behind the Brad Pitt vehicle “F1: The Movie,” which is also Apple TV’s most-watched movie.
With those successes in hand, Apple is planning for there to be more to both sides of the story.
Speaking to the press at the Miami Grand Prix on May 1, Apple SVP of Services Eddy Cue talked about both the real-life and fictional versions of Formula One.
On the movie, Cue said “I hope and expect there will be one,” reports Reuters.
Cue’s hope is well-founded: the film earned over $600 million at the box office against an estimated production cost of around $200 million. In February, producer Jerry Bruckheimer said that work is being carried out on a sequel.
The CEO of Formula 1, Stefano Domenicali, told reporters in February that a sequel wouldn’t happen in 2026, but strongly hinted at it being on the horizon.
Even so, there has yet to be any official confirmation that one will actually be produced.
Growing F1
Apple’s existing coverage of F1 in the United States has been well received, with Cue very happy at how it’s gone so far. However, he says Apple wants to do more to grow its presence.
He acknowledges that F1 doesn’t get licensed on a global basis, but that isn’t hurting Apple’s intentions. Cue says he hopes Apple can grow into other areas and markets with its streaming coverage.
Starting in the United States is a “huge market” for Apple, and building within it is “definitely the right way” to progress, says Cue. “And then of course, it would be great to expand it.”
Earlier in April, Cue said that 30% of people watching F1 are using the multiview function.
Apple hardware engineering chief John Ternus drives a Porsche and is an amateur racer, Cue explained, adding, “He would actually be here this weekend but he’s at Laguna Seca.” Cue believes that Ternus will end up attending more races than CEO Tim Cook, and that he’s a “huge, huge fan of F1.”
Amazon once tried to pressure Nintendo to break the law, says former Nintendo of America President Reggie Fils-Aimé. At a recent NYU lecture, he describes a conversation with an Amazon executive, Kotaku reports:
“Amazon was looking to get bigger into the video game space,” said Fils-Aimé. “Amazon’s mentality back then is they wanted to have the lowest price out in the marketplace, even lower than Walmart… Essentially what Amazon wanted (was an) obscene amount of support, financial support, so they could have the lowest price and beat Walmart. I literally said to the executive, ‘You know that’s illegal, right? I can’t do that’….”
At the time, the Wii and DS were Nintendo’s best-selling hardware in its history. Amazon originally sold books, but in the 2000s it rapidly expanded with steep discounts to become a one-stop shop for almost everything. Everything except Nintendo, that is…. “Literally we stopped selling to Amazon,” Fils-Aimé continued, “and it’s because I wasn’t going to do something illegal. I wasn’t going to do something that would put at risk the relationship we have with other retailers.” “The two sides have since made amends,” notes the Verge, “and you can buy a Switch 2 through Amazon. But for a long time, Nintendo consoles had been largely unavailable on the site.”
Microsoft Defender is detecting legitimate DigiCert root certificates as Trojan:Win32/Cerdigent.A!dha, resulting in widespread false-positive alerts, and in some cases, removing certificates from Windows.
According to cybersecurity expert Florian Roth, the issue first appeared after Microsoft added the detections to a Defender signature update on April 30th.
Today, administrators worldwide began reporting that DigiCert root certificate entries were flagged as malware and, on affected systems, removed from the Windows trust store.
According to a Reddit post about the false positives, the detected certificates are:
0563B8630D62D75ABBC8AB1E4BDFB5A899B24D43
DDFB16CD4931C973A2037D3FC83A4D7D775D05E4
On impacted systems, these certificates were removed from the AuthRoot certificate store in the Windows Registry.
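Since a certificate’s thumbprint is simply the SHA-1 hash of its DER encoding, administrators can check whether a local certificate file matches one of the flagged entries without relying on Defender. A minimal sketch, with the thumbprints taken from the Reddit thread cited above (the file path in the usage comment is hypothetical):

```python
import hashlib

# SHA-1 thumbprints reported as flagged by the Defender signature.
FLAGGED_THUMBPRINTS = {
    "0563B8630D62D75ABBC8AB1E4BDFB5A899B24D43",
    "DDFB16CD4931C973A2037D3FC83A4D7D775D05E4",
}

def thumbprint(der_bytes: bytes) -> str:
    """Return a certificate's thumbprint: the SHA-1 hash of its DER encoding."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

def is_flagged(der_bytes: bytes) -> bool:
    """True if the certificate matches one of the flagged thumbprints."""
    return thumbprint(der_bytes) in FLAGGED_THUMBPRINTS

# Usage (path is hypothetical):
# with open("cert.der", "rb") as f:
#     print(is_flagged(f.read()))
```

Note that this only identifies whether a certificate matches the flagged entries; it says nothing about whether the certificate is actually malicious, which is the whole point of the false-positive reports.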
These false positives have led to concern among Windows users, with some thinking their devices were infected and reinstalling the operating system to be safe.
Microsoft Defender “Trojan:Win32/Cerdigent.A!dha” false positive (Source: Reddit)
Microsoft has reportedly fixed the detections in Security Intelligence update version 1.449.430.0, and the most recent update is now 1.449.431.0.
Other reports on Reddit indicate that the fix also restores previously removed certificates on affected systems.
The new Microsoft Defender updates will automatically install, and Windows users can manually force an update by going into Windows Security > Virus and threat protection > Protection updates and clicking on Check for Updates.
Possibly linked to a recent DigiCert breach
The false positives come shortly after a disclosed DigiCert security incident that enabled threat actors to obtain valid code-signing certificates used to sign malware.
“A malware incident targeted a customer support team member. Upon detection, the threat vector was contained,” explains the DigiCert incident report.
“Our subsequent investigation found that the threat actor was able to procure initialization codes for a limited number of code signing certificates, few of which were then used to sign malware.”
“The identified certificates were revoked within 24 hours of discovery and the revocation date set to their date of issuance. As a precautionary measure, pending orders within the window of interest were cancelled. Additional details will be provided in our full incident report.”
According to DigiCert’s incident report, attackers targeted the company’s support staff in early April by creating support messages containing a malicious ZIP file disguised as a screenshot.
After multiple blocked attempts, one support analyst’s device was eventually compromised, followed by a second system that went undetected for a time due to an endpoint protection “sensor gap.”
Using access to the breached support environment, the hacker used a feature in DigiCert’s internal support portal that allowed support staff to view customer accounts from the customer’s perspective.
While limited in scope, this access exposed “initialization codes” to previously approved, but undelivered, EV code-signing certificate orders.
“Possession of an initialization code, combined with an approved order, is sufficient to obtain the resulting certificate (see Contributing Factors discussion below),” explained DigiCert.
“Since the threat actor was able to obtain these two pieces of information for a finite set of approved orders, they were able to obtain EV Code Signing certificates across a set of customer accounts and CAs.”
DigiCert says it revoked 60 code-signing certificates, including 27 linked to a “Zhong Stealer” malware campaign.
“11 were identified in certificate problem reports provided to DigiCert by community members linking the certificates to malware, and 16 were identified during our own investigation,” explained DigiCert.
Zhong Stealer malware campaign
This aligns with earlier reports from security researchers who had observed newly issued DigiCert EV certificates used in malware campaigns and reported them to DigiCert.
Researchers, including Squiblydoo, MalwareHunterTeam, and g0njxa, reported that certificates issued to well-known companies such as Lenovo, Kingston, Shuttle Inc, and Palit Microsystems were being used to sign malware.
“What do Lenovo, Kingston, Shuttle Inc, and Palit Microsystems have in common?” posted Squiblydoo on X.
“EV Certificates from these companies were issued and used by a Chinese crime group, #GoldenEyeDog (#APT-Q-27)!”
The malware in this campaign is named “Zhong Stealer,” though analysis indicates it may be more like a remote access trojan (RAT) than an infostealer.
The researcher says the malware was distributed through the following attacks:
Phishing emails deliver a fake image or screenshot
A first-stage executable that displays a decoy image
Retrieval of a second-stage payload from cloud storage such as AWS
Use of signed binaries and loaders, including components tied to legitimate vendors
After DigiCert disclosed the breach, the researchers said its incident report explains how the certificates used in these malware campaigns were obtained.
While Microsoft has not confirmed that the Defender detections are a result of the DigiCert incident, the timing and focus on DigiCert-related certificates suggest a possible connection.
However, it should be noted that the certificates flagged by Microsoft Defender are root certificates in the Windows trust store and do not match the revoked DigiCert code-signing certificates used to sign malware.
BleepingComputer contacted Microsoft with questions about the campaign, including whether it was tied to DigiCert’s breach.
Flip-phones are not only a fun way to get a hit of nostalgia, but they’re quickly becoming seriously useful everyday smartphones too.
Motorola has reinvented its iconic Razr flip-phone and recently introduced the Razr 70 series, which is headlined by the premium Razr 70 Ultra. But how does it measure up to Samsung’s own Galaxy Z Flip 7?
While we haven’t reviewed the Razr 70 Ultra just yet, we’ve compared its specs to the Z Flip 7 and highlighted the key differences between the clamshell flip phones below. Keep reading to decide which handset is likely to suit you best. Alternatively, we’ve also compared the Motorola Razr 70 Ultra vs 70 Plus vs 70, so you can see the entire collection side-by-side.
The Galaxy Z Flip 7 is readily available to buy now and has an official RRP of £1049/$1099.99. However, as the phone is nearly a year old, it is possible to find the handset with a decent price cut.
Motorola Razr 70 Ultra runs on Snapdragon 8 Elite
Motorola has opted to fit the Razr 70 Ultra with Qualcomm’s 2025 Snapdragon 8 Elite rather than the newer Snapdragon 8 Elite Gen 5. That’s somewhat understandable: the Razr 70 Ultra isn’t positioned as a productivity handset, so it doesn’t need the extra oomph of the newer processor.
It’s a similar situation with the Galaxy Z Flip 7, with Samsung kitting the foldable out with its own Exynos 2500 chip rather than the Snapdragon 8 Elite for Galaxy found in the Z Fold 7. Even so, we don’t think you’re likely to notice much of a difference in real-world use, as the Z Flip 7 feels fast and responsive for most use cases, without any noticeable slowdown or overheating.
Image Credit (Motorola)
Sure, the Exynos 2500 doesn’t achieve the same high benchmark scores as phones running on the Snapdragon 8 Elite, but it’s still a solid processor that performs well.
Otherwise, although we haven’t reviewed the phone just yet, Motorola promises that the Razr 70 Ultra is the “most powerful Razr” ever. It actually uses the same chip as its predecessor, the Razr 60 Ultra, which we found offered solid performance across everything from everyday tasks to casual gaming.
We’ll have to wait until we review the Razr 70 Ultra to see how it really performs in everyday use.
Samsung Galaxy Z Flip 7 will see Android updates until 2032
One of the most appealing features of the Galaxy Z Flip 7 is that Samsung promises it will see Android and security updates right up to July 2032 – taking the handset to Android 23. Considering the Z Flip 7 costs upwards of £/$1000, this makes the cost seem like more of an investment, as you won’t necessarily need to buy a new phone in the next seven years.
Unfortunately, the Razr 70 Ultra doesn’t quite boast the same promise. While the Razr 70 Ultra will see five years of security updates, it’s only promised three years of Android OS updates. That will take the phone up to Android 19.
Samsung Galaxy Z Flip 7. Image Credit (Trusted Reviews)
Motorola Razr 70 Ultra has a larger battery
With a 5000mAh cell, the Razr 70 Ultra boasts a considerably larger battery capacity than the Z Flip 7. In fact, Motorola states that this is the largest battery found among flip phones. With this in mind, we expect the handset to offer a pretty generous all-day battery life, but we’ll have to wait until we review the 70 Ultra to confirm this.
Although at 4300mAh the Z Flip 7’s battery is considerably smaller, we should note that we never struggled with its efficiency. During our testing, we found the phone comfortably saw us through a day’s worth of use before needing to be topped up.
Samsung Galaxy Z Flip 7. Image Credit (Trusted Reviews)
Speaking of topping up, the Razr 70 Ultra also benefits from faster charging speeds than the Z Flip 7, with support for 68W wired and 30W wireless speeds. In comparison, the Z Flip 7 supports a pretty measly 25W wired and 15W wireless.
Samsung Galaxy Z Flip 7 has a larger cover display
At 4.1-inches, the Z Flip 7 has a slightly larger cover display than the Razr 70 Ultra’s four-inch alternative. However, there are a few caveats to keep in mind.
Firstly, we found the Z Flip 7’s cover display more cumbersome to use and much less optimised than Motorola’s efforts. Only a few pre-selected apps can launch on the outer screen, and to enable others you’ll need to download Multistar or use other workarounds, which isn’t particularly ideal. Plus, its keyboard isn’t as easy to use as Gboard either.
Image Credit (Motorola)
Instead, more apps are optimised by default on the Razr 70 Ultra’s cover display, and the keyboard is much easier to type on when you don’t want to open up the handset. It’s also worth pointing out that the 70 Ultra’s cover screen sports many of the same specs as the 60 Ultra, and you can learn more about the differences between the two in our dedicated Razr 70 Ultra vs 60 Ultra guide.
Motorola Razr 70 Ultra has three 50MP cameras
Although both handsets have a total of three cameras, including two rear and one internal lens, they differ in their exact resolutions. Like its predecessor, the Galaxy Z Flip 7 is fitted with a 50MP main and a 12MP ultrawide at its rear, while its internal camera is 10MP. Generally, we found the hardware was able to take great shots in most lighting conditions, though our list of the best camera phones has options better suited to keen photographers.
In comparison, the Razr 70 Ultra is equipped with three 50MP lenses, including a main and ultrawide/macro combination at the rear and one internal. Motorola has also introduced new shooting modes to the entire Razr 70 series, including Camcorder Rotate to Zoom, which uses AI to automatically identify and zoom in on a subject. This mode leans into Motorola’s nostalgia, as you have to hold the phone like a camcorder.
Early Verdict
If you’re keen to try out a flip phone, then the Motorola Razr 70 Ultra and Samsung Galaxy Z Flip 7 are two great options. If you’re looking for a more usable cover display, plenty of nostalgic features and a mighty battery, then the Razr 70 Ultra seems like a brilliant option. However, if you want a phone that’ll see Android updates for many years, the Galaxy Z Flip 7 is hard to beat.
We’ll be sure to update this versus once we review the Motorola Razr 70 Ultra.