Jeff Bezos framed this copy of a 2006 BusinessWeek cover, reflecting Wall Street’s skepticism about AWS at the time. (Jeff Bezos via X, May 2022)
In the early days of Amazon Web Services, technical evangelist Jeff Barr was putting in long hours on the road, pitching a novel concept: rent computing power for 10 cents an hour, and storage for 15 cents a gigabyte per month — no servers to buy, no data centers to build.
Barr remembers calling his wife to check in at the end of the day. Get a nice dinner, she told him, you deserve it. But later, at the restaurant, looking at the menu and doing the math in his head, he couldn’t help but ask himself if the pennies were adding up.
“Did enough people start using these servers to buy me a decent steak?” he wondered.
He probably should have ordered the filet.
Two decades later, AWS generates nearly $129 billion a year in revenue. That’s enough to rank in the top 40 of the Fortune 500 if it were a standalone company, ahead of the likes of Comcast, AT&T, Tesla, Disney, and PepsiCo. Companies such as Netflix, Airbnb, Slack, Stripe and thousands more have built massive businesses on its platform.
When AWS goes down, it ripples across the web, taking down apps, websites, and services that most users never knew were on a common infrastructure.
But the business that defined cloud computing — bankrolling Amazon’s expansion into everything from streaming to same-day delivery — is now grappling with the most significant challenge since it launched. The rise of AI has upended the industry, empowering Microsoft, Google and others, and creating competitive dynamics that seem to change every month.
For the first time, AWS faces questions about its long-term ability to lead the market it created.
With Amazon marking the 20th anniversary of AWS this month, GeekWire spoke with early builders, current AWS insiders, and longtime observers of the company to tell the story of how the business got started, how it won the cloud, and what it’s up against now.
Scalable, reliable, and low-latency
Officially, Amazon pegs the public launch of AWS to March 14, 2006. That’s when it announced “a simple storage service” that offered software developers “a highly scalable, reliable, and low-latency data storage infrastructure at very low costs.”
Dubbed S3, it was Amazon’s first metered cloud service: the first time developers could pay for exactly what they used, billed in tiny increments, with no upfront commitment.
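For a sense of scale, here is a back-of-the-envelope sketch of that metered arithmetic, using the prices quoted above (10 cents per server-hour and 15 cents per gigabyte-month); it assumes flat linear pricing and ignores tiers, request fees, and data transfer.

```python
# Back-of-the-envelope math on the original metered prices quoted above.
# Assumption: flat linear pricing, ignoring tiers, request fees, and data transfer.
STORAGE_PER_GB_MONTH = 0.15   # dollars per gigabyte of storage per month
COMPUTE_PER_HOUR = 0.10       # dollars per server-hour of compute

def monthly_bill(storage_gb: float, compute_hours: float) -> float:
    """Pay only for what you use, billed in small increments."""
    return storage_gb * STORAGE_PER_GB_MONTH + compute_hours * COMPUTE_PER_HOUR

# A developer keeping 50 GB in storage with one server running all month (~730 hours):
print(f"${monthly_bill(50, 730):.2f}")  # $80.50
```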
“We think it can be a meaningful, financially attractive business.” A Bloomberg News story quotes Jeff Bezos about AWS in November 2006. S3 launched earlier in the year.
All of this might seem mundane in a modern world where the cloud and internet services are almost like electricity and water, seemingly always there when you need them.
But remember the context of that moment: Facebook was available only on college campuses. Netflix arrived on DVDs in the mail. The iPhone was still a year away from being unveiled. And over at Microsoft in Redmond, they were finally getting ready to ship Windows Vista.
The asterisk in the headline
The history of Amazon Web Services is more complicated than it might seem, and it’s actually a subject of some disagreement behind the scenes. There are multiple origin stories, including one offered by Amazon itself, and others by former employees who say the company has tidied up the narrative over the years to shape the lore around its current leaders.
Journalist Brad Stone, author of the canonical Amazon book, “The Everything Store,” discovered this when Andy Jassy — the longtime AWS CEO who would go on to succeed Jeff Bezos as Amazon CEO — disputed aspects of his telling of the AWS story in a one-star review.
One point of contention: the origins of EC2, the AWS service built by a small team in South Africa, and the degree to which it sprang from the process Jassy led or was born independently.
Part of the challenge: Amazon, despite operating the storehouse of the internet, isn’t great at preserving its own history. The company, which cooperated with this piece, wasn’t able to unearth key documents such as Jassy’s original AWS six-pager from September 2003.
Some former Amazon leaders take things further back, to a set of e-commerce APIs that Amazon released in July 2002, allowing outside developers to access its product catalog and build applications on top of it. By that accounting, AWS is closer to 24 years old.
Overcoming internal opposition
The effort was led by business leader Colin Bryar, who ran Amazon’s affiliates program, along with technical leader Robert Frederick, whose Amazon Anywhere team (focusing on making Amazon’s site and features available on mobile devices) had been working since 1999 on internal web services that became the foundation for the external APIs.
Amazon in those days was on Seattle’s Beacon Hill, in the landmark art deco Pacific Medical Center tower overlooking downtown. Jeff Bezos was directly involved from the early days, as a believer in the vision that Amazon’s infrastructure capabilities could become a big business.
In 2002, when Bryar initially pitched a roomful of senior leaders on the idea of opening up Amazon’s product catalog and features as web services to outside developers, nearly all of them said no, as Frederick recalled in a recent interview.
The objections piled up: it would cannibalize existing business, it would educate competitors. Then, as Frederick remembers it, Bezos looked around the table and let out one of his trademark piercing laughs. Amazon’s founder wanted to see what developers would do.
“Let’s do it,” Frederick recalls Bezos saying, “and let’s have them surprise us.”
Later, in a July 2002 press release announcing “Amazon.com Web Services,” Bezos used nearly identical language: “We can’t wait to see how they’re going to surprise us.”
Big developer response
Within months, tens of thousands of developers had signed up. Increasingly, they were asking for things like storage, hosting, and compute, recalled Frederick, who worked at Amazon through mid-2006. He went on to found IoT platform Sirqul in 2013 and remains its CEO.
Another veteran of those early days agreed that the developer response to those initial e-commerce APIs may have opened the minds of Amazon’s leaders to the larger possibilities.
“Maybe that’s where Andy’s brain lit up. … Maybe that’s where Jeff’s brain lit up,” said Dave Schappell, referring to Jassy and Bezos. Schappell arrived at Amazon in 1998 as Jassy’s MBA intern, dropped out of Wharton to stay, and spent the next seven years working with him.
Schappell ran the associates program after Bryar, became an early head of product for AWS, and hired the original product managers. Those product managers included Jeff Lawson, who went on to found Twilio. Schappell himself became a well-known Seattle entrepreneur before returning to AWS for four years after Amazon acquired his startup TeachStreet.
The ‘crystal-clear movie moment’
Jeff Barr was one of the developers who noticed.
Now an Amazon VP and longtime AWS chief evangelist, Barr was working as an outside consultant in the web services field when he logged into his Amazon Associates account one day in 2002 and noticed a new message.
AWS Chief Evangelist Jeff Barr joined in the early days of the business. (Amazon Photo)
Amazon now had XML, it said, referring to the data-formatting standard that allowed software systems to communicate over the internet. Amazon was making its product catalog available as a web service and connecting it to the affiliate program, a surprising move at the time.
“I clicked through, I signed up for the beta. I downloaded it right away,” Barr recalled.
He sent feedback to the email address in the documentation. They actually replied.
Before long, he was invited to a small developer conference at Amazon’s headquarters — maybe four or five attendees at the Pacific Medical Center tower, in a semicircular open space with a view of the city. The developers sat in the middle, with Amazon employees around them.
At some point, one of the Amazon presenters announced that they were so impressed at how developers had found the APIs and started publishing apps within 48 hours that they were going to look around the rest of the company for more services to open up.
“That was that crystal-clear movie moment,” Barr said. He turned to an Amazon employee nearby and told her: “I have to be a part of this.”
Creating the cloud
But what Frederick and team had built was essentially a way for outside developers to access Amazon’s product data. It was not yet the cloud as we know it today.
That move started in mid-2003, as Jassy told the story in a 2013 talk at Harvard Business School. Jassy, then serving as Bezos’s technical advisor, was tasked with figuring out why software projects across Amazon were taking so long. It turned out that engineers were spending months building storage, database, and compute solutions from scratch.
In a meeting of six or seven people that summer, someone made the observation that would change the company’s trajectory. Jassy recalled the thinking during his HBS talk: “We’re pretty good at this. And if we’re having so many problems, and we don’t have anything we can use externally, I imagine lots of other companies probably have the same problem.”
Around the same time, Amazon recruited Werner Vogels, a Cornell distributed systems researcher, as its chief technology officer. He almost didn’t take the call. “It’s an online bookstore,” he recalled in a LinkedIn post last week. “How hard could their scaling be?”
But the company was wrestling with every problem he and his colleagues had been theorizing about — fault tolerance, consistency, availability at scale — live in production, every day.
Fundamental building blocks
Schappell remembers those early days as a non-stop cycle of six-page memos and meetings with Jassy and Bezos, all focused on trying to figure out what to build.
The concept that would define AWS — breaking every capability down to its most basic building block, or “primitive” — didn’t arrive fully formed. “I don’t think he said that on day one,” Schappell said of Bezos. “I think he said it after he read 47 of our six-pagers.”
Each primitive would stand on its own, and customers would pay only for what they used, billed in tiny increments. It was a direct rebuke to the licensing models of companies such as database giant Oracle, where customers paid for everything whether they used it or not.
Rahul Singh, who joined AWS in January 2004 as one of its first engineers, recalled the early technical plans going through just one layer of review before reaching Bezos and Jassy. (It’s the kind of streamlined decision-making that Jassy is now trying to restore across the company.)
Fault tolerant by design
In one early meeting, Bezos told the engineers he wanted a server touched exactly twice: once when installed in the data center, and once years later when it was pulled out. In between, nothing. The software had to be built to tolerate failures, leaving dead machines behind and moving on. It was a philosophy that would define the architecture of the cloud.
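A minimal sketch of that idea, illustrative only and not Amazon's actual implementation: the client stops trusting a machine the moment it fails and routes around it rather than waiting for a repair.

```python
import random

# Illustrative sketch of the "touch the server twice" philosophy described above:
# route around dead machines instead of waiting for someone to fix them.
class FaultTolerantPool:
    def __init__(self, hosts):
        self.healthy = set(hosts)  # machines still trusted to serve traffic
        self.dead = set()          # machines abandoned in place

    def call(self, do_request, attempts=3):
        for _ in range(attempts):
            if not self.healthy:
                raise RuntimeError("no healthy hosts remain")
            host = random.choice(tuple(self.healthy))
            try:
                return do_request(host)
            except ConnectionError:
                # Leave the dead machine behind and move on.
                self.healthy.discard(host)
                self.dead.add(host)
        raise RuntimeError("request failed on every attempt")
```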
On Singh’s first day, his manager Peter Cohen sat him down in the lunch area and handed him a planning document (a “PR/FAQ” in Amazon lingo) that had just been approved by Bezos.
“We’re calling this S4,” Cohen said. Singh looked at the name of the product, Simple Server-Side Storage Service, and pointed out that it should be called S5. Singh recalls Cohen’s response: “Yeah, you’re really smart, aren’t you? Let’s see if you can actually build this.”
It was eventually shortened to Simple Storage Service, or S3.
The queuing service called SQS had launched in beta in 2004 (adding further to the debate over the origin story and what counts as the launch), but S3 was the first AWS service to reach general availability.
A billion-dollar business?
Jassy, then the VP in charge of AWS, would hold all-hands meetings in a conference room for four or five engineers, most of them straight out of college and grad school, as Singh recalled in an interview. Jassy ran them with the discipline of a much larger organization, repeating over and over that AWS could be a billion-dollar business, at a time when it had no revenue at all.
Singh remembers being highly skeptical.
“I was young and naive, and I remember thinking: a billion, that’s a really big number,” Singh said. Years later, he would joke with Jassy that the prediction had been completely wrong: it turned out to be a multi-billion dollar business, many times over.
In a LinkedIn post marking the March 14 anniversary, current AWS CEO Matt Garman — who joined the company as a summer intern in 2005, before the launch of S3 — recalled how early customers like FilmmakerLive and CastingWords took a bet on the fledgling platform.
“That shift changed the economics of building technology almost overnight,” he wrote.
Meanwhile, in Cape Town …
While one team was building S3 in Seattle, the compute side of the equation was taking shape 10,000 miles away. Chris Pinkham, an Amazon VP who wanted to move back to his native South Africa, was given permission to set up a development office in Cape Town.
His small team built EC2 — the Elastic Compute Cloud — largely independent of the Seattle operation. The local tech community was a bit bewildered by what Amazon was doing.
“We knew this bookstore had arrived in town,” recalled Dave Brown, who was working at a local payments startup at the time. He asked his friends who had joined what they were doing.
Dave Brown, Vice President, AWS Compute & ML Services, at AWS re:Invent 2025. (Amazon Photo)
“It’s kind of like, you know, you can rent a computer on the internet,” they told him.
Brown asked about the revenue. “Tens of dollars every single day,” they said.
He remembers wondering why they were wasting their time on that.
The answer became clear when EC2 launched in August 2006, five months after S3, adding compute to storage as another fundamental building block of AWS and the cloud.
Early customers showed EC2’s range: a Spider‑Man movie used it for rendering, and Facebook apps like FarmVille and Animoto spun up instances on demand, as Brown recalled.
A New York Times engineer used a personal credit card to run optical character recognition on the paper’s scanned archives over a weekend, making the entire archive searchable. He had been told internally that the job would be cost-prohibitive using traditional approaches. On EC2, it cost a grand total of a couple hundred bucks, even after he initially screwed up and had to run it again.
Typing ahead of the characters
Brown joined in August 2007, the 14th person on the EC2 team. They worked out of a tiny office in Constantia, the winelands part of Cape Town, across the highway from vineyards.
They occupied part of one floor of an office building. There was one conference room, and two offices. The rest was open plan. The team was 14 engineers, one product manager, and Peter DeSantis, the leader who came from Seattle to help build the service.
The internet connection was a four-megabit DSL line shared by the entire office, with 300 milliseconds of latency to the data centers in the U.S. When engineers typed on their screens, each character had to make the round trip across the ocean and back before it appeared.
“You get really good at typing ahead of where the actual characters are appearing,” Brown said.
Every morning, someone had to find the VPN token to get the office online. It lasted about 10 hours before it automatically reset. “Everybody would be shouting, where’s the VPN token?”
Scrambling to keep up
One day, they were running low on computing capacity. DeSantis came out of his office and told the engineers to shut down the machines they were using for testing. That freed up enough capacity to keep the service going for a few days until the next racks of hardware came online.
Marc Brooker, now an AWS VP and distinguished engineer working on agentic AI, joined the EC2 team in Cape Town in 2008. He could see the entire team from his desk. When Brown was away one day, Brooker and the team covered every surface of his office in sticky notes — the kind of prank that only works in a small office where everyone knows everyone else.
Brooker was drawn in by something he heard about in his job interview: the team had built a way to make a distributed system look like a physical hard drive to the operating system.
“Wow, that is so cool,” he recalled thinking. “Here’s 20 other things I can think of that we could do with that kind of technology.”
That instinct, that the building blocks of the cloud could be combined and recombined in ways no one at Amazon had imagined, was at the core of what made AWS catch on.
AWS VP Mai-Lan Tomsen Bukovec, who oversees AWS’s core data and analytics services, in front of a whiteboard on which she mapped the evolution from the early days of S3 to the AI era at Amazon’s re:Invent building in Seattle. (GeekWire Photo / Todd Bishop)
“The world would be in a very different place if you didn’t have the freedom to experiment, to pilot, to try something, to move on to some other idea, that AWS first introduced,” said Mai-Lan Tomsen Bukovec, an AWS VP who has led S3 for 13 of its 20 years.
Prasad Kalyanaraman, now the AWS vice president who oversees global infrastructure, previously spent years building supply-chain forecasting systems for Amazon’s retail operation. Around 2011, Charlie Bell, then a senior AWS leader, asked him to help with a problem: the team was forecasting its compute demand using spreadsheets.
He adapted the supply-chain forecasting tools for AWS, but the cloud business kept outrunning every model he built.
“The funny thing about forecasts is that forecasts are always wrong,” he said. “It’s very hard to actually predict exponential growth.”
How AWS grew
It began with startups. The companies that would define the next era of technology were building on AWS. Airbnb, Instagram, and Pinterest all got their start on AWS.
John Rossman, a former Amazon exec and author of books including “The Amazon Way” and “Big Bet Leadership,” remembers Jassy pulling him aside for coffee at PacMed around 2008. Rossman had left Amazon and was working as a consultant to large businesses. Jassy wanted to know: did he think big companies would ever be interested in on-demand computing?
Maybe, maybe not, Rossman said. He was working with Blue Shield of California at the time, and tried to imagine them running on AWS. It was hard to picture. At the time, the typical AWS customer was a startup developer with little budget for infrastructure. The idea that a big insurance company would run on AWS seemed like a stretch.
“I was a little bit of a pessimist on it,” Rossman said.
But soon things started to change.
Netflix moved its streaming infrastructure to AWS starting in 2009, a decision that carried particular weight because it competed with Amazon in video. In 2013, the CIA awarded AWS a contract over IBM, signaling that the platform was trusted at the highest levels of security.
Microsoft tips its hat
AWS’s pricing model, in which customers paid only for what they used, was a direct threat to the licensing businesses of tech’s old guard. Whether burying their heads in the sand or just preoccupied, the companies that would become the biggest AWS rivals were slow to respond.
Microsoft didn’t unveil its cloud platform — code-named “Red Dog,” and initially launched as “Windows Azure” — until October 2008, more than two years after S3 debuted. Bill Gates had left his day-to-day role at Microsoft a few months earlier. The company was still recovering from the aftermath of the Vista flop.
“I’d like to tip my hat to Jeff Bezos and Amazon,” said Ray Ozzie, then Microsoft’s chief software architect, at the launch event — a rare public acknowledgment of a competitor’s lead.
Azure didn’t reach general availability until 2010, and its early approach was more of a platform for applications, not the raw infrastructure that made AWS so popular with developers. It took years to build out comparable offerings.
Google launched App Engine, a platform for running applications, in 2008, but didn’t offer raw computing infrastructure to rival EC2 until Compute Engine arrived in 2012.
‘The AWS IPO’
For years, AWS grew in something close to silence. Amazon said little about the overall growth, and didn’t break out the financial results for the business in its quarterly earnings reports.
Then, in April 2015, Amazon reported its first-quarter earnings with AWS broken out in detail for the first time, and it stunned the industry. The business had a $6 billion annual revenue run rate and was growing 50% a year.
The modest expo hall at the first AWS re:Invent, under construction in 2012, left. Last year’s conference, right, drew 60,000 people to Las Vegas. (2012 Photo Courtesy Jeff Barr; 2025 Photo by Todd Bishop)
AWS generated more than $250 million in profit that quarter alone, with operating margins around 17%. This was a stark contrast with the rest of Amazon, scraping by on traditional retail margins of 2% to 3%. AWS was making significantly more profit on every dollar of revenue.
The hosts of the Acquired podcast, in their extensive 2022 history of the rise of Amazon Web Services, later described this moment as, in effect, “the AWS IPO.”
Amazon stock jumped 15% on the news.
“I was blown away,” said Schappell, the early AWS product leader who left in 2004 and later listened to the first AWS earnings breakout while training for a marathon. For years, he had assumed Amazon was losing billions on AWS. The reality was the opposite: AWS had become so profitable that it was effectively bankrolling Amazon’s future.
The margins kept climbing, reaching 35% by early 2022.
Then the pandemic cloud boom faded. Inflation spiked amid broader economic uncertainty. Customers scrutinized their cloud bills and pulled back spending. AWS revenue growth fell from 37% to 12% over the course of the year, the slowest in its history. Margins fell to 24%.
The ChatGPT moment
Then everything changed, for Amazon and everyone else.
On November 30, 2022, OpenAI released ChatGPT, with little fanfare at first. The consumer AI chatbot quickly became the fastest-growing application in history, reaching 100 million users within two months and sending the technology world into a frenzy.
For AWS, the stakes were huge. Every major wave of technology over the previous 15 years, from mobile to social to streaming to e-commerce, had been built on its platform.
If AI was the next wave, AWS needed to lead the way again.
Amazon was far from absent in AI. AWS had launched SageMaker in 2017, giving developers tools to build and deploy machine learning models. It had released custom AI chips for inference and training. Alexa, the voice assistant, had been processing natural language queries since 2014. Amazon had spent many years and billions of dollars on machine learning.
But none of it looked or worked like ChatGPT. The new model could write code, draft essays, answer complex questions, and hold a conversation. It was not a feature. It was a product people wanted to use. And it was built by an AI lab running on Microsoft Azure.
‘AWS sneaked in there’
The irony: OpenAI didn’t start on Microsoft’s cloud. It launched on AWS.
When the AI lab debuted in December 2015, AWS was listed as a donor. OpenAI was running its early research on Amazon’s infrastructure under a deal worth $50 million in cloud credits.
Microsoft CEO Satya Nadella learned about it after the fact. “Did we get called to participate?” he wrote to his team that day, in an email that surfaced only recently in a court filing from Elon Musk’s suit against Microsoft and OpenAI. “AWS seems to have sneaked in there.”
Microsoft moved fast. Within months, Nadella was courting OpenAI. The AWS contract was up for renewal in September 2016. “Amazon started really dicking us around on the [terms and conditions], especially on marketing commits,” Sam Altman wrote to Musk, who was then OpenAI’s co-chair. “And their offering wasn’t that good technically anyway.”
By that November, Microsoft had won the business.
Six years later, with the launch of ChatGPT, that bet paid off in ways no one could have predicted. Microsoft stock surged. Amazon, like many others in the industry, was scrambling to figure it all out — suddenly trying to keep up with the future of a market it had long defined.
Pivoting to generative AI
The AWS CEO at the time was Adam Selipsky, who had helped build the business from its earliest days before leaving in 2016 to run Tableau, the data visualization company. He returned in May 2021 to lead AWS after Jassy was promoted to succeed Bezos as Amazon CEO.
In a May 2024 interview with Selipsky, on one of his last days in the role, GeekWire asked him directly if Amazon had been caught flat-footed by the rise of generative AI.
After a member of his team interjected to say the question seemed to be informed by reading too many Microsoft press releases, Selipsky dismissed the idea that AWS was behind.
While that narrative might have “more sizzle” and generate clicks, Selipsky said, the reality was different, as evidenced by Amazon’s years of work in AI and machine learning.
AWS had announced Inferentia, a chip for deep learning, in 2018, building on its 2015 acquisition of Annapurna Labs, the Israeli chip startup. It began work on CodeWhisperer, an AI coding assistant, in 2020 — before GitHub Copilot existed, the company notes. In 2021, it launched Trainium, a chip designed to train models with 100 billion or more parameters.
Dario Amodei, CEO of Anthropic, right, speaks with Adam Selipsky, then CEO of Amazon Web Services, at AWS re:Invent on Nov. 28, 2023. (GeekWire File Photo / Todd Bishop)
At the same time, Selipsky acknowledged that AWS had “pivoted many thousands of people from other interesting, important projects to work on generative AI” — a scale of reallocation signaling something other than business as usual inside the company.
Tomsen Bukovec, who now oversees AWS’s core data services including S3, analytics, and streaming, said her team’s response was less a pivot than a process of learning.
They educated themselves on what the technology meant for their services, she said, and thought deeply about what it would look like for AI to both create and consume data at scale.
The question her team started asking in late 2022: what does the world look like when 70 to 80 percent of the usage of your services comes through AI?
“AI is going to use it at 10 times to 100 times the rate of a human, and it’s going to do it all day long, all the time, 24 hours,” she said. “AI never goes to sleep.”
Scrambling to meet the moment
The pressure to catch up in generative AI was felt across the company. In a lawsuit filed in Los Angeles Superior Court, an AI researcher who worked on Amazon’s Alexa team alleged that a director instructed her to ignore internal copyright policies because “everyone else is doing it.”
The complaint described ChatGPT’s launch in late November 2022 as causing “panic within the organization.” Amazon has denied the allegations, and the case is still pending.
On Amazon’s earnings call in early February 2023 — two months after ChatGPT’s launch — Amazon CEO Andy Jassy did not discuss generative AI or large language models.
Matt Garman, AWS CEO, speaks at AWS re:Invent 2025. (GeekWire File Photo / Todd Bishop)
By the next quarter’s call, in late April 2023, he spoke about it for nearly ten minutes, describing it as “a remarkable opportunity to transform virtually every customer experience that exists.”
In September 2023, the company announced an investment of up to $4 billion in Claude maker Anthropic, the AI startup founded by former OpenAI researchers. The investment would eventually grow to $8 billion — which seemed like a lot at the time.
Selipsky left AWS in mid-2024. Garman, whom Selipsky had hired as a product manager in 2006, succeeded him as CEO, charged with leading the cloud business into the new era.
From CodeWhisperer to Bedrock
The roots of Amazon’s response actually predated ChatGPT by more than two years, although it faced initial skepticism internally. In 2020, Atul Deo, an AWS product director, wrote a six-page memo proposing a generative AI service that could write code from plain English prompts.
Jassy, who was still leading AWS at the time, wasn’t sold. His reaction, as Deo later told Yahoo Finance, was that it seemed like a pipe dream. The project launched in 2023 as CodeWhisperer, an AI coding assistant.
But by then, ChatGPT had redrawn the landscape, and the team realized they could offer something broader: a platform giving customers access to a range of foundation models through a single service. AWS called it Bedrock. The name reflected an ambition to do for AI models what the company had done years earlier with its Relational Database Service, which wrapped MySQL, Oracle, and other database engines in a common management layer.
Bedrock would do the same for large language models.
The decision to offer multiple models rather than push a single in-house option was deliberate, and rooted in a pattern AWS had followed for years. It brought multiple CPUs to the cloud: AMD, Intel, and its own Graviton. It offered Nvidia GPUs alongside its own Trainium chips.
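In practice, that choice shows up as a single API with interchangeable model identifiers. A rough sketch using the boto3 SDK's Bedrock runtime client follows; the model IDs shown are illustrative and should be checked against the current catalog before use.

```python
import boto3

# One client, many interchangeable models: the same call shape works whether the
# model comes from Anthropic, Meta, or Amazon's own Nova family.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Swap models by changing only the identifier (IDs shown are illustrative).
for model_id in ["anthropic.claude-3-5-sonnet-20240620-v1:0",
                 "meta.llama3-70b-instruct-v1:0",
                 "amazon.nova-lite-v1:0"]:
    print(model_id, "->", ask(model_id, "Summarize what a cloud primitive is."))
```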
Fastest-growing AWS service
Amazon’s view is that choice drives competition, which drives down prices for customers.
“We knew there was never going to be one model to rule everybody,” said Dave Brown, the AWS vice president who oversees EC2, networking, and custom silicon. “And even the best model was not going to be the best model all the time.”
Bedrock launched in preview in April 2023 and reached general availability that September, with models from Anthropic, Meta, and others alongside Amazon’s own. Two years later, it had become the fastest-growing service AWS had ever offered, with more than 100,000 customers.
On Amazon’s most recent earnings call, Jassy described it as a multi-billion-dollar business, with customer spending growing 60% from one quarter to the next.
At the end of 2024, Amazon added its own entry to the model race. The company introduced a family of foundation models called Nova, positioned as a lower-cost, lower-latency alternative to the third-party models on the Bedrock platform.
Amazon CEO Andy Jassy unveils the Nova models at AWS re:Invent in December 2024. (GeekWire Photo / Todd Bishop)
As Fortune’s Jason Del Rey observed, it was a page from the e-commerce playbook: build the marketplace first, then stock it with a house brand. Just as Amazon sells goods from thousands of merchants alongside its own private-label products, Bedrock offered models from Anthropic, Meta, and others, and now Amazon’s own models to go along with them.
At re:Invent in late 2025, AWS pushed further, unveiling what it called “frontier agents” — autonomous AI systems designed to work for hours or days without human involvement.
One, built into Amazon’s Kiro coding platform, can navigate multiple code repositories to fix bugs while a developer sleeps. Last month, the Financial Times reported that Amazon’s own AI coding tools caused at least one AWS service disruption. Amazon acknowledged the incident but publicly disputed aspects of the reporting, citing a misconfigured role, not the AI itself.
The $200 billion bet
Like its rivals, AWS is also building the physical infrastructure to back it up. In 2025, less than a year after it was announced, AWS opened Project Rainier, one of the world’s largest AI compute clusters, centered in Indiana, powered by more than 500,000 of Amazon’s Trainium2 chips.
Named after the mountain visible from Seattle, Rainier was built to train and run Anthropic’s next generation of Claude models, using Amazon’s own Trainium chips rather than Nvidia GPUs.
Kalyanaraman, the AWS vice president who oversees global infrastructure, said the project forced AWS to rethink its supply chain from the ground up. The goal was to minimize the time between a chip leaving its fabrication facility and serving a customer workload.
Rainier was built at a faster pace than anything AWS had ever done, Kalyanaraman said, with more than 100,000 Trainium chips available to Anthropic in under a year. But it wasn’t a one-off. He called it the new template for how AWS would build AI infrastructure going forward.
Then, late last month, came the deal that brought the story full circle.
OpenAI — the company that launched on AWS in 2015 and left for Microsoft Azure the following year — announced a partnership with Amazon that included up to $50 billion in investment and a cloud agreement worth more than $100 billion over eight years.
OpenAI committed to run workloads on Amazon’s custom Trainium chips, making it the second major AI lab after Anthropic to do so. The two companies had been talking since at least May 2023, according to SEC filings, but Microsoft’s right of first refusal on OpenAI’s compute had blocked a deal until those restrictions were loosened in the latest renegotiation.
By late 2025, AWS revenue was growing at its fastest pace in more than three years, up 24% to $35.6 billion a quarter. The company disclosed that its Trainium and Graviton chips had reached a combined annual revenue run rate of more than $10 billion. Bedrock had surpassed 100,000 customers and was generating revenue in the billions.
The competitive picture was also coming into sharper focus.
In mid-2025, Microsoft disclosed standalone Azure revenue for the first time: $75 billion a year, up 34%. Google Cloud had crossed a $50 billion annual run rate. AWS, at more than $116 billion a year at the time, was still larger — but no longer running away with the market.
All of this helps to explain Amazon’s record capital spending. On the company’s latest earnings call, Jassy defended plans to spend $200 billion this year, most of it on AI infrastructure.
The figure is so large it would consume nearly all of Amazon’s operating cash flow. Facing a Wall Street backlash, Jassy called artificial intelligence “an extraordinarily unusual opportunity to forever change the size of AWS and Amazon as a whole.”
What’s next: Bear and bull cases
Longtime observers are divided on the company’s AI bet.
Corey Quinn, a cloud economist who works with AWS customers through his Duckbill consultancy, sees little real‑world traction for Amazon’s Nova models. “You know someone is an Amazon employee when they talk about Nova, because no one else is,” he said.
Some businesses bypass Amazon’s Bedrock platform entirely because of capacity constraints and slower speeds, he said, going to third-party providers like Anthropic rather than inserting Bedrock as a “middleman” — unless they’re trying to retire their committed AWS spend.
Looking forward, Quinn pointed to a historical parallel. Twenty years ago, Cisco was the most valuable company in the world, the backbone of the internet. Today it is a profitable but largely invisible utility. AWS, he said, could be headed for the same fate.
“It’s very clear that there will be a 40th anniversary for AWS, because that inertia does not go away,” Quinn said. “But will it be at the center of tech policy and giant companies, or is it going to be a lot more like the Cisco of today?”
Om Malik, the veteran tech writer, cast a critical eye on Amazon’s OpenAI investment.
By his math, Amazon is paying roughly 16 times more per percentage point of OpenAI than Microsoft did, with none of the exclusive IP rights, revenue share, or primary API access that Microsoft locked up years ago. The cost of being late, Malik wrote, is measured in billions.
The lobby at AWS headquarters, the re:Invent building in Seattle. (GeekWire Photo / Todd Bishop)
Rossman, the former Amazon executive who was once skeptical about AWS demand from big business, sees a different picture. He agrees that AWS is strong in infrastructure, the picks and shovels. But where Quinn sees that as a ceiling, Rossman sees it as a moat.
The models are the commodity, Rossman contends. They leapfrog each other constantly. What matters is everything the models run on and through: the chips, the servers, the data centers, the power. AWS is building more of that stack than most competitors.
“That’s where the value is,” he said.
Rossman said he could envision AWS operating nuclear power plants someday. The long-term winners, he said, will be the companies that deliver the best AI at the lowest cost per token. That’s where AWS’s vertical integration — from Trainium chips to Bedrock to the data center itself — gives it an advantage competitors can’t easily replicate.
As for the risk of spending too much, Rossman put it simply: you have to decide which side of history you’d prefer to fail on — overbuilding or underbuilding. Amazon isn’t taking chances.
In an internal all-hands meeting last week, Jassy said AI could help AWS reach $600 billion in annual revenue, double his own prior estimate, Reuters reported. He had been thinking for years that AWS could be a $300 billion business in a decade. AI, he said, changed the math.
Specialized’s proprietary, 700-watt motor feels natural—sometimes to an annoying extent, as the bike is designed for you to pedal and you won’t get faster than 10 mph just by using the throttle. Also, there’s no option for a dual battery. Still, the battery well exceeded Specialized’s estimated 60-mile range. Granted, I am a small person, but I was usually hauling at least one other person on the bike with me at all times, so I still found this remarkable.
It’s easily adjustable—both my 5’10” husband and my 5’2″ self were able to switch off riding, which is important if this is your family’s all-purpose hauler. The display is intuitive, and the buttons are well-spaced apart so you don’t get confused or end up button-mashing. Also, Specialized’s accessories go a long way toward making this bike so much more useful. Yes, you could jerry-rig some Home Depot buckets to the front of your bike and drill holes in the bottoms for them to drain, but the Coolcave panniers ($90) are so much more attractive, easy to use, and helpful for carting everything from kid dioramas to a dozen tiny soccer balls.
Best Value
The vast majority of people I know who buy a cargo ebike with their own money choose the Lectric XPedition2. There is just no better value for a dual-battery long-tail cargo ebike. Out of the box, Lectric has also gone above and beyond to make its bikes and accessories easy to assemble and use. You even pop the pedals in, instead of using regular screw-on pedals.
This bike’s specs are also wild for the price. It has a 1,310-watt rear hub motor, twice as powerful as the already-powerful Globe Haul. (It has a throttle and is a Class 2 ebike out of the box, though you can use the display to unlock its Class 3 capabilities and assist up to 28 mph.) It has hydraulic disc brakes, front suspension, an incredibly large and bright LCD color display, integrated lights, and fenders.
The Figure breach exposed 967,200 email records without a single exploit. Understanding what that enables — and why your MFA cannot contain it — is an architectural problem, not a user education problem.
In February 2026, TechRepublic reported that Figure, a financial services company, exposed 967,200 email records in a newly disclosed data breach. No vulnerability was chained. No zero-day was burned. The records were accessible, and now they are in adversary hands.
Coverage of breaches like this tends to stop at the count. That is the wrong place to stop. The number of exposed records is not the event — it is the starting inventory for the event that follows.
To understand the actual risk, you have to follow the attack chain that a credential exposure like this enables, step by step, and ask honestly whether the authentication controls in your environment can interrupt it at any point.
Most cannot. Here is why.
What Adversaries Do With 967,000 Email Records
Exposed email addresses are not static data. They are operational inputs. Within hours of a record set like this becoming available, adversaries are running it through several parallel workflows simultaneously.
The first is credential stuffing. Figure customers and employees almost certainly reused passwords across services. Adversaries combine the exposed addresses with breach databases from prior incidents — LinkedIn, Dropbox, RockYou2024 — and test the resulting pairs against enterprise portals, VPN gateways, Microsoft 365, Okta, and identity providers at scale. Automation handles the volume.
Success rates on credential stuffing campaigns against fresh email lists routinely run at two to three percent. On 967,000 records, that is 19,000 to 29,000 valid credential pairs.
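As a rough check on that arithmetic, assuming the flat two to three percent hit rate stated above (an assumed figure, not a measured one):

```python
# Rough check on the stated success-rate range against the exposed record count.
exposed_records = 967_200
for rate in (0.02, 0.03):
    print(f"{rate:.0%}: ~{int(exposed_records * rate):,} valid credential pairs")
# 2%: ~19,344    3%: ~29,016
```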
The second workflow is targeted phishing. AI-assisted tooling can now generate personalized phishing campaigns from an email list in minutes. The messages reference the organization by name, impersonate internal communications, and are visually indistinguishable from legitimate correspondence.
Recipient-specific targeting — using job title, department, or public LinkedIn data to tailor the lure — is standard practice, not a capability reserved for nation-state actors.
The third is help desk social engineering. Armed with a valid email address and basic OSINT, adversaries impersonate employees in calls to IT support teams, requesting password resets, MFA device resets, or account unlocks.
This attack vector bypasses authentication technology entirely — it targets the human process that exists to handle authentication failures.
In each of these workflows, no technical vulnerability is required. The adversary’s goal is not to break in. It is to log in as a valid user. The breach does not create access. It creates the conditions under which access becomes achievable through the authentication system itself.
This is the part of the analysis that most incident post-mortems underweight. Organizations read about a credential exposure and conclude that their MFA deployment protects them. For the attack chain described above, that conclusion is structurally incorrect.
Modern adversary tooling executes what security researchers call a real-time phishing relay, sometimes referred to as an adversary-in-the-middle (AiTM) attack. The mechanics are precise.
An adversary builds a reverse proxy that sits between the victim and the legitimate service. When the victim enters credentials on the spoofed page, the proxy forwards those credentials to the real site in real time.
The real site responds with an MFA challenge. The proxy forwards that challenge to the victim. The victim responds — because the page looks legitimate and the MFA prompt is real. The proxy forwards the response. The adversary receives an authenticated session.
Push notification MFA, SMS one-time codes, and TOTP authenticator apps are all vulnerable to this relay. They authenticate the exchange of a code. They do not verify that the individual completing the exchange is the authorized account holder. They cannot distinguish a direct session from a proxied one.
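A minimal sketch of why that is, using the open-source pyotp library as a stand-in for a typical server-side check (the enrolled secret here is hypothetical): the verification proves only that someone currently holds a valid code, and nothing in the code identifies the page it was typed into, so a real-time proxy can forward it unchanged.

```python
import pyotp  # pip install pyotp; a standard TOTP implementation

# Hypothetical enrolled secret, for illustration only.
TOTP_SECRET = "JBSWY3DPEHPK3PXP"

def verify_second_factor(otp_code: str) -> bool:
    # This proves only that *someone* currently holds a valid six-digit code.
    # The code carries no information about which page it was typed into or
    # which session delivered it, so a real-time relay can forward it
    # unchanged and inherit the authenticated session.
    return pyotp.TOTP(TOTP_SECRET).verify(otp_code, valid_window=1)
```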
Toolkits that automate this attack — Evilginx, Modlishka, Muraena, and their derivatives — are publicly available, actively maintained, and require no advanced tradecraft to operate. The capability is not exotic. It is the baseline.
MFA fatigue compounds this. Adversaries who obtain valid credentials but cannot relay the session in real time will instead trigger repeated push notifications until a user approves one out of frustration or confusion. This attack has been used successfully against organizations with mature security programs, including in incidents that received significant public coverage.
The common thread across all of these techniques: legacy MFA places a human being at the final decision point of the authentication chain, then relies on that human to make the correct call under conditions specifically engineered to defeat it.
The Structural Problem Legacy MFA Cannot Solve
The security industry’s standard response to authentication failures is user education. Train people to recognize phishing. Teach them to verify unexpected MFA prompts. Remind them not to approve requests they did not initiate.
This response is not wrong. It is insufficient, and the insufficiency is architectural, not motivational.
A relay attack does not require a user to recognize a phishing page. The MFA prompt they receive is real, issued by the legitimate service, delivered through the same app they use every day. There is nothing anomalous for the user to detect. The attack is designed to be invisible to the human in the loop — and it is.
The deeper problem is that the authentication architecture most organizations have deployed was not designed to answer the question that actually matters in a post-breach environment: was the authorized individual physically present and biometrically verified at the moment of authentication?
Push notifications do not answer this question. SMS codes do not answer this question. TOTP does not answer this question. USB hardware tokens answer a related but different question — they prove the registered device was present, not the authorized person.
Auditors, regulators, and cyber insurers are increasingly drawing this distinction explicitly. The question “can you prove the authorized individual was there?” is appearing in CMMC assessments, NYDFS examinations, and underwriter questionnaires. Device presence is no longer accepted as a proxy for human presence in high-stakes access contexts.
What Phishing-Resistant Authentication Actually Requires
FIDO2/WebAuthn gets cited frequently in this conversation, and it is a meaningful step forward — but it is not sufficient on its own. Standard passkey implementations bind the credential to a device or cloud account.
Cloud-synced passkeys inherit the vulnerabilities of the cloud account: SIM swap attacks against the recovery phone number, account takeover via credential phishing, recovery flow exploitation. Device-bound passkeys prove device possession. They do not prove human presence.
Phishing-resistant authentication that closes the relay attack vector requires three properties simultaneously:
Cryptographic origin binding: the authentication credential is mathematically tied to the exact origin domain. A spoofed site cannot produce a valid signature because the domain does not match. The attack fails before any credential is transmitted.
Hardware-bound private keys that never leave secure hardware: the signing key cannot be exported, copied, or exfiltrated. Compromise of the endpoint does not compromise the credential.
Live biometric verification of the authorized individual: not a stored biometric template that can be replayed, but a real-time match that confirms the authorized person is physically present at the moment of authentication.
When all three properties are present, a relay attack has no viable path. The adversary cannot produce a valid cryptographic signature from a spoofed site. They cannot relay a session because the cryptographic binding fails the moment the origin changes.
They cannot use a stolen device because the biometric verification fails without the authorized individual. They cannot social-engineer an approval because there is no approval prompt — the authentication either completes with a live biometric match at the registered hardware, or it does not complete.
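A simplified sketch of the first property, server-side verification of a WebAuthn-style assertion (omitting rpIdHash, flag, and counter checks for brevity): because the signed payload covers the client data, which includes the origin, a relay operating from a look-alike domain cannot produce an assertion the server will accept. The relying-party origin below is hypothetical.

```python
import hashlib
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

EXPECTED_ORIGIN = "https://portal.example.com"  # hypothetical relying-party origin

def verify_assertion(public_key: ec.EllipticCurvePublicKey,
                     client_data_json: bytes,
                     authenticator_data: bytes,
                     signature: bytes,
                     expected_challenge: str) -> bool:
    client_data = json.loads(client_data_json)
    # Origin binding: a spoofed or proxied page yields a different origin here,
    # so the assertion is rejected before any signature is even checked.
    if client_data.get("origin") != EXPECTED_ORIGIN:
        return False
    if client_data.get("challenge") != expected_challenge:
        return False  # stale or relayed challenge
    # The authenticator signs authenticatorData || SHA-256(clientDataJSON), so the
    # origin and challenge above are covered by the signature and cannot be swapped.
    signed_payload = authenticator_data + hashlib.sha256(client_data_json).digest()
    try:
        public_key.verify(signature, signed_payload, ec.ECDSA(hashes.SHA256()))
        return True
    except Exception:
        return False
```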
Token: Cryptographic Identity That Verifies the Human, Not the Device
TokenCore was built on a single, uncompromising principle: verify the human, not the device, credential, or session.
Most authentication products add factors to a weak foundation. Token replaces the foundation. The platform combines enforced biometrics, hardware-bound cryptographic authentication, and physical proximity verification — three properties that must all be satisfied simultaneously for access to be granted.
There is no fallback. There is no bypass code a user can enter in the field. The authorized individual is either present and verified, or access does not occur.
This matters precisely because of the attack chain described above. Token’s Biometric Assured Identity platform eliminates each link:
No Phishing. Every authentication is cryptographically bound to the exact origin domain. A spoofed login page produces no valid signature — Token simply refuses to authenticate.
No Replay. The private signing key never leaves the hardware. A relayed session cannot be reconstructed because the cryptographic material it would need to replicate is physically inaccessible.
No Delegation. A live fingerprint match is required for every authentication event. A colleague, an adversary with a stolen device, or a social engineering target cannot complete authentication on behalf of the authorized individual.
No Exceptions. There is no code, no recovery flow, and no help-desk override that can substitute for biometric presence. The control is absolute because the risk is absolute.
The form factor matters too. Token is wireless — Bluetooth proximity, no USB port required. Authentication takes one to three seconds: the user initiates a session, taps their fingerprint on the Token device, Bluetooth proximity confirms physical presence within three feet, and access is granted.
For on-call administrators, trading floor operators, and defense contractors working across multiple workstations, this eliminates the friction that drives the shadow IT and workaround behavior legacy hardware tokens create.
Unlike USB-based alternatives, Token is field-upgradeable over the air. As adversaries evolve their tooling, Token’s cryptographic controls can be updated remotely and immediately — without replacing hardware or reissuing devices. The investment does not expire when the threat landscape changes.
Token verifies the human. Not the session. Not the device. Not the code. The human.
The Honest Assessment
The Figure breach will produce downstream authentication attacks. So will the next breach, and the one after that. The adversary infrastructure that runs credential stuffing, AI-generated phishing, and real-time relay attacks operates continuously against exposed email records.
The question is not whether these attacks will be attempted against your environment. They will be.
The relevant question is whether your authentication architecture requires human judgment to succeed — or whether it is designed so that human judgment is not the failure point.
Legacy MFA, in all of its common forms, requires human judgment. A user must recognize the anomaly, question the prompt, and make the correct decision under adversarial pressure. That is a brittle dependency at a critical control point, and adversaries have built an entire toolchain to exploit it.
Token removes that dependency. The device signs for the legitimate domain with a confirmed biometric match — or it does nothing. There is no prompt to manipulate. There is no decision to engineer. There are no exceptions.
That is not a feature. It is the architectural requirement for authentication that holds under the conditions this breach, and every breach like it, creates.
See How Token Closes the Gap
Token’s Biometric Assured Identity platform is built for organizations where authentication failure is not an acceptable outcome — defense contractors, financial institutions, critical infrastructure, and enterprise environments with high-privilege access requirements.
Cryptographic. Biometric. Wireless. No phishing. No replay. No delegation. No exceptions.
from the fuck-em-for-being-human-beings,-I-guess dept
I’m not here to cut the Trump administration any slack or engage in both-sides bullshit, but this is something that has always been true: we treat anyone imprisoned or detained as less than human. The dehumanization begins with something we call “processing” — a word that separates a human from their humanity by making them sound like nothing more than paperwork.
The horrors seen in jails and prisons are often compounded at immigrant detention facilities. While some duty of less-than-minimal care might be extended to imprisoned US citizens, it’s far more often ignored when federal officers believe (mistakenly) that migrants aren’t protected by the Constitution.
The litany of violations stretches back forever. Techdirt doesn’t stretch back quite that far, but let’s take a stroll down memory lane.
From 2022, back when Biden was still in office and people like me were thinking no one would ever elect Trump to office again:
That’s taken from a report demanding (“Management Alert”) the immediate removal of all detainees from this New Mexico detention center due to numerous violations, including a shortage of 112 employees and no less than 83 cells with “inoperable” sinks and toilets.
Going back further to Trump’s first administration:
In this Inspector General’s report, we learned that only 28 of 106 contractors were provided with the tools needed to meet minimum “performance standards.” We also learned that the $3.9 billion being thrown to private contractors was shored up by absolutely no level of accountability. ICE approved 96% of waivers requested by contractors who failed to meet minimum housing standards for detainees.
While it’s been a persistent problem, things are significantly worse now. The Trump administration is detaining more migrants than ever before. It’s also far more willing to pawn these duties off on private prison contractors who prioritize making money over taking care of the people thrust into their care by Trump’s top bigots.
On top of that, the administration is fighting wars on several litigation fronts in hopes of preventing any form of oversight from slowing its roll towards total migrant annihilation. Everything that was bad before is getting so much worse.
Thanks to the White House Merchant of Death, RFK Jr., measles outbreaks are being reported at detention facilities. Thanks to absolutely every-fucking-body else in the administration, reports of inhumane conditions are somehow still on the rise, even after years of regularly reported inhumane conditions at ICE facilities.
Here’s even more. At a facility where guards were caught setting up suicide “death pools” for inmates, more evidence of deliberate cruelty and inhumane treatment has surfaced. The host of ongoing atrocities is none other than Camp East Montana, comfortably nestled in the heartland of the “who gives a fuck about immigrants” Fifth Circuit: El Paso, Texas.
An inspection in February of Camp East Montana in Texas, one of the country’s largest immigration detention centers, found dozens of violations of national standards, including instances that may have exposed detainees to illnesses and uses of force that were not documented, a new report found.
[…]
The inspection, which was carried out by the agency over three days in February and included interviews with 49 detainees, found that there were at least 49 overall “deficiencies” from national standards at the camp. Of all the deficiencies, 22 involved use of force and restraints, and five involved issues related to medical care.
ICE actually released this inspection report. However, it did make sure names were redacted, protecting the guilty rather than the innocent. While it’s uncharacteristically protective of the inspectors, it also makes sure we may never know which “Creative Corrections” employees helped make this detention center the hell hole it is.
Other censorship by the administration deliberately denies Americans access to the facts. What possible purpose is served here, other than allowing the government to pretend its rights violations were somehow excused by the [redacted] passage of time?
The government not only censored the number of detainee files reviewed, but also the ratio of files in noncompliance. What escapes ICE’s black-boxed attempts to redeem itself is this, which is plenty damning on its own:
[I]nitial classification process and initial housing assignments were not completed within 12 hours of detainees’ admission […]; rather they were completed 14 hours to 25 days after [admission]…
Everything that might show how often (or how infrequently) violations occurred has been removed. It’s a deliberate muddying of the statistical waters. Who knows what’s behind the black box? It could mean rights were violated 10% of the time. Or it could mean rights were violated almost every time. But we the people — you know, the ones expected to foot the bill for this bullshit — aren’t allowed to know the actual details of what’s being done in our names.
If the government wants to play it that way, fine. We’ll just assume the worst and dare it to provide evidence to the contrary. And we know it never will. If or when the government decides to unredact this report, it will undoubtedly show us what we’ve always assumed: The administration and its contractors routinely abused detainees and violated their rights because the people in charge made it clear they don’t consider migrants to be humans.
So far this year, 14 people have died in U.S. Immigration and Customs Enforcement custody, including a Mexican man who was found unresponsive last week at a facility outside Los Angeles, according to data from the Department of Homeland Security.
If that seems like a low (or worse, an acceptable) number of deaths, think again:
In 2025, ICE reported 33 total in-custody deaths and in 2024 there were 11.
Deaths in ICE custody tripled during Trump’s first year back in office. If this pace continues, we’ll be looking at 56 in-custody deaths this year: nearly double the 33 deaths of 2025, a total that was itself triple the 2024 figure.
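For what it’s worth, the 56 figure looks like simple annualization. Here is a minimal sketch of that arithmetic, assuming (the article doesn’t say) that the 14 deaths reported so far span roughly the first three months of the year:

# Back-of-the-envelope annualization of year-to-date ICE in-custody deaths.
# ASSUMPTION: the 14 deaths span roughly the first 3 months of the year;
# the article doesn't state the exact period they cover.
deaths_so_far = 14
months_elapsed = 3
projected_annual = deaths_so_far / months_elapsed * 12
print(projected_annual)        # 56.0 -- the figure cited above
print(projected_annual / 33)   # ~1.7, i.e. nearly double 2025's 33 deaths
print(33 / 11)                 # 3.0 -- 2025 was triple the 2024 total of 11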
This will only get worse. The administration is still trying to buy up any warehouses it can to repurpose as detention centers. The workload is being stretched even thinner, leaving private contractors who are even more poorly trained than current ICE officers in charge of the lives and well-being of thousands of detainees. The misery and death will continue. Unfortunately for us, this administration not only welcomes blood on its hands, but revels in it.
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Saturday’s puzzle instead then click here: NYT Connections hints and answers for Saturday, April 11 (game #1035).
Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you’ve finished? Why, play some more word games of course. I’ve also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc’s Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don’t read on if you don’t want to know the answers.
NYT Connections today (game #1036) – today’s words
Today’s NYT Connections words are…
FLY
PAPER
TAKE
CAST
TROLL
PROJECT
POCKET
ANGLE
SHED
POSITION
RUSSIAN
CUFF
RAG
RADIATE
STANCE
BELT LOOP
NYT Connections today (game #1036) – hint #1 – group hints
What are some clues for today’s NYT Connections groups?
YELLOW: Trouser elements
GREEN: How one sees it
BLUE: Let it shine
PURPLE: Add the word for a humanoid toy
Need more clues?
We’re firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today’s NYT Connections puzzles…
NYT Connections today (game #1036) – hint #2 – group answers
What are the answers for today’s NYT Connections groups?
YELLOW: PANTS FEATURES
GREEN: PERSPECTIVE
BLUE: EMIT
PURPLE: _____DOLL
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Connections today (game #1036) – the answers
The answers to today’s Connections, game #1036, are…
YELLOW: PANTS FEATURES – BELT LOOP, CUFF, FLY, POCKET
GREEN: PERSPECTIVE – ANGLE, POSITION, STANCE, TAKE
BLUE: EMIT – CAST, PROJECT, RADIATE, SHED
PURPLE: _____DOLL – PAPER, RAG, RUSSIAN, TROLL
My rating: Easy
My score: Perfect
I managed to make zero mistakes in this game, but I came close to putting together a couple of groups that didn’t make the final four.
The first was linking FLY, CAST, ANGLE, and TROLL as terms connected with fishing; the other was a group about being mean to someone, which would have included RAG, TROLL and CUFF. Thankfully I didn’t have enough confidence to follow through on either idea.
Instead, PANTS FEATURES seemed an obvious selection, although I did waver over CUFF, and from there the other groups fell neatly into place — as they often do when you complete this game in difficulty order.
Yesterday’s NYT Connections answers (Saturday, April 11, game #1035)
PURPLE: ENDING IN BODIES OF WATER – BOMBAY, CHELSEA, SCREWDRIVER, SNOWFLAKE
What is NYT Connections?
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: yellow is easy, green a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don’t technically need to solve the final group, as you’ll be able to answer it by a process of elimination. What’s more, you can make up to four mistakes, which gives you a little bit of breathing room.
It’s a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.
It’s playable for free via the NYT Games site on desktop or mobile.
Google’s next flagship phones could arrive with a notable display advantage.
According to a new report from ETnews, the Pixel 11 series is set to use Samsung’s latest M16 OLED panels, which could make it the first smartphone line to feature the upgraded screen technology.
The panels are expected to bring improvements in brightness, colour accuracy and power efficiency, building on Samsung’s current M14 OLED displays used in today’s premium devices. That includes phones like the Pixel 10 Pro and even recent iPhone models. Therefore, the jump to M16 could represent a modest but meaningful upgrade.
Interestingly, timing may be everything here. Google has settled into an August launch window for its Pixel flagships. This could give the Pixel 11 a head start over Apple’s expected September iPhone release. If that schedule holds, the Pixel 11 could beat the iPhone 18 Pro and Pro Max to market with the same display tech.
There’s another twist. Samsung itself may not be first to use its own latest panels. Reports suggest its future Galaxy S27 lineup won’t arrive until 2027. This means rival brands could showcase the company’s newest display innovation before Samsung’s own flagship devices do.
That said, expectations should be kept in check. Modern OLED panels are already highly refined, and the real-world differences between M14 and M16 may be subtle for most users. The Pixel 10 series already offers excellent screens. As a result, any gains here are likely to focus on efficiency and peak performance rather than dramatic visual changes.
Still, if the report proves accurate, the Pixel 11 could quietly gain an edge in one of the most important areas of a smartphone. It could underline Google’s growing confidence in taking on bigger rivals with cutting-edge hardware.
Arizona Attorney General Kris Mayes’ case against prediction market Kalshi appears to have hit a snag.
The Commodity Futures Trading Commission announced Friday that it has won a temporary restraining order preventing the state from pursuing its criminal case against Kalshi (whose CEO Tarek Mansour is pictured above).
“Arizona’s decision to weaponize state criminal law against companies that comply with federal law sets a dangerous precedent, and the court’s order today sends a clear message that intimidation is not an acceptable tactic to circumvent federal law,” said CFTC Chairman Michael S. Selig in a statement.
While the CFTC normally has five commissioners, Selig is currently the only one on the commission, following his confirmation in December and the departure of previous acting chairman Caroline Pham (who left to join crypto company MoonPay).
Arizona has filed charges against Kalshi, accusing the company of operating an illegal gambling business in the state without a license. The announcement of the restraining order comes just a couple of days after a federal judge allowed Arizona’s case to move forward, according to Bloomberg.
The CFTC also filed suits seeking to stop similar cases from moving forward in Connecticut and Illinois.
Lipovsky and Stein, who helped relaunch the Final Destination franchise with last year’s entry that made $317 million worldwide on a $50 million budget, have signed a first-look deal with Sony that goes beyond Metal Gear.
Five hundred bucks. That’s the price difference between the MacBook Neo and the MacBook Air. Having spent a lot of time testing and using both laptops in the MacBook lineup, I can say that there’s a clear demographic for both of these devices.
As a longtime laptop tester, my goal here is twofold. I want to make sure that you buy the right MacBook, and I also want to make sure you don’t overpay or underbuy. Deciding isn’t actually as difficult as you might think. Don’t think you want a MacBook after all? Don’t forget to check out our guides to the Best Windows Laptops, the Best Chromebooks, or the Best Linux Laptops.
The Easy Way to Decide
Photograph: Luke Larsen
There’s one easy question to answer if you’re stuck between the Neo and the Air: is this a laptop you’ll be using full-time for work? Because if you’re sitting in front of this laptop for eight hours a day, don’t bother considering the MacBook Neo. You’ll likely be tempted by the price, but its compromises are just too many. Trust me.
On the other hand, if you answered “No” to that question, you can likely save some cash by buying the MacBook Neo without being bothered by some of its deficiencies. For example, a lot of people have a work PC or laptop at the office, but then need something for weeknights, weekends, or to travel with. It also works perfectly for a student, whether in high school or college.
I know that’s an oversimplified way of thinking about it, but it’s a good place to start.
Design, Size, and Aesthetics
There’s a small difference in size, but it isn’t as significant as you might assume. The MacBook Neo’s screen is 13 inches, measured diagonally, which is over half an inch smaller than the 13.6-inch MacBook Air. As someone who frequently works on a MacBook Air, I found it pretty easy to switch to the slightly smaller Neo. You can also upgrade to the 15-inch MacBook Air, which gives you a significantly bigger canvas to work on. But that also costs an extra $200. In terms of portability, the MacBook Air is 0.44 inches thick versus the Neo’s 0.50 inches. Again, not a huge difference—especially since they’re identical in weight.
The MacBook Neo does depart from the MacBook design formula in a few key ways. It’s a bit more playful than other MacBooks, using rounder edges, white keycaps, and some brighter color options. They’re nowhere near as daring as the iMac colors, but you get to choose from Silver, Blush, Citrus, and Indigo. Silver and Blush are more subtle, while Citrus and Indigo are the bolder options. My favorite aspect of the MacBook Neo is the lack of a notch, though. Don’t get me wrong: I want thin bezels on my laptop like everyone else, but I’ve always found the notch to be an ugly solution.
Leeanne Patterson discusses her role in the HR space and how organisations can develop a healthy and happy company culture.
“My interest in HR was piqued during my studies in college,” Leeanne Patterson, the head of human resources at TCS Letterkenny Global Delivery Centre, told SiliconRepublic.com.
After completing her degree in business studies, she decided to delve deeper into the world of HR, completing a postgraduate diploma at the National College of Ireland.
“I have always had a genuine interest in people and how organisations can create cultures where individuals and teams thrive.
“I began my career building strong foundational experience across core HR disciplines, including business partnering, talent acquisition, performance management, employee engagement, compensation and benefits, reward and recognition, and working closely with leaders and employees to support growth, change and development.”
How does it feel to have TCS named as a Top Employer in Ireland by the Top Employers Institute?
It’s fantastic and something that we are very proud of. Recognition like this reflects and validates the consistent effort our teams put into creating a supportive, inclusive and engaging workplace. Importantly, it is an external assessment of our practices, not just our intentions, and it includes feedback from our own employees in the north-west region, Dublin and throughout the country.
Being named a Top Employer in Ireland reinforces our commitment to continuous improvement and sets a benchmark we hold ourselves accountable to every year.
How can organisations ensure that they are creating a positive and productive atmosphere for their employees?
A positive workplace culture starts with respect, trust and clear communication, ensuring that employees feel comfortable sharing ideas and voicing concerns. In Ireland, where community and connection are so important, it’s essential that organisations take the time to understand what matters to their people, both professionally and personally. Putting people first and supporting flexibility, work‑life balance and wellbeing is also critical.
I am particularly passionate about making health and wellness core to a company’s workplace culture. Prioritising physical and mental health with wellness programmes reduces burnout and increases productivity. Good health is good business.
Diversity and inclusion enhance creativity, improve decision-making and drive innovation by leveraging varied perspectives. Inclusive workplaces boost employee engagement, trust and retention while attracting top talent, as many candidates prioritise diverse environments.
Does TCS have any initiatives or programmes aimed at creating a strong culture?
Yes, culture is at the heart of everything we do at TCS Ireland. We actively promote inclusion, collaboration and belonging through a range of initiatives, from employee engagement, employee resource groups, CSR initiatives and wellbeing programmes to upskilling in key capabilities, leadership development and mentoring.
At TCS, employee wellbeing is embedded into the fabric of the organisation. I am particularly proud of the multiple programmes we have in place to support healthier lifestyles and work-life balance, along with online counselling sessions for better mental health. Our culture is built around shared values, but it’s lived locally, shaped by the communities in which our people work and live. We don’t just promote these programmes; we participate in and encourage them. It’s not just a ‘nice to have’, it’s a necessity.
How is training utilised as a means of building a responsive and responsible culture?
Learning and development are central to our approach in Ireland. We view training not just as a way to build skills, but as a way to empower our people and reinforce our values. Through continuous learning opportunities, employees are supported to adapt to change, grow their careers, and contribute responsibly to our clients and communities.
Training also plays a key role in ensuring consistency, accountability and high standards across all our Irish teams. Continuous learning is a way of life in TCS and employees are encouraged to make use of the extensive learning and certification opportunities.
What kind of talent does TCS typically look to bring onboard?
Individuals with high emotional intelligence, proactive individuals who are solution-driven and candidates with an enthusiasm for learning.
In Ireland, we look for people who are curious, collaborative and eager to learn. While technical capability is important, we place equal value on attitude and mindset. We seek individuals who are open to working with global teams, but who also understand the importance of local context – people who want to build long‑term careers while contributing positively to their communities, including regions like the north-west.
Have you any advice for a new recruit looking to join TCS on how to present themselves as an attractive candidate?
My advice would be to be yourself and show genuine interest in who we are as a company. Research TCS Ireland, understand our values and think about how your own experiences align with them. Illustrate how you are motivated by making a difference and driving tangible results. Highlight your adaptability, your willingness to learn and any examples where you’ve worked collaboratively or made a positive impact, whether through work, study or community involvement. We’re proud to attract talent from across Ireland, and we’re always interested in potential, not just past experience.