Microsoft Copilot shakeup, Amazon phone ambitions, and pushing Claude to the limits of LinkedIn

This week on the GeekWire Podcast: Amazon is working on a new smartphone, code-named “Transformer,” more than a decade after the Fire Phone debacle, according to Reuters.

We dig into the connection to a past GeekWire scoop: former Microsoft Xbox leader J Allard joined Amazon’s devices team in 2024, and he’s now leading a group called ZeroOne with a mandate to create “breakthrough” gadgets. Is this an AI-native device? A companion to your iPhone? J Allard’s shot at redemption? Maybe all of the above.

There’s more great Fire Phone background in this Vergecast “Version History” podcast.

Then: Microsoft shakes up its Copilot team, shifting Mustafa Suleyman to a narrower role and unifying consumer and enterprise AI under a new leader. Todd has strong feelings about Microsoft’s history of cutesy consumer tech, from Clippy to Mico.

Plus: Todd’s adventure using Claude CoWork to browse LinkedIn (and the stern warning he got in response). We also discuss King County Metro’s slick new tap-to-pay feature, which catches the transit system up with the modern world, and the upcoming opening of cross-lake light rail, before rounding things out with an Amazon Treasure Truck trivia question.


Subscribe to GeekWire on Apple Podcasts, Spotify, or wherever you listen.

Audio editing by Curt Milton.


AWS at 20*: Inside the rise of Amazon’s cloud empire, and what’s at stake in the AI era


Jeff Bezos framed this copy of a 2006 BusinessWeek cover, reflecting Wall Street’s skepticism about AWS at the time. (Jeff Bezos via X, May 2022)

In the early days of Amazon Web Services, technical evangelist Jeff Barr was putting in long hours on the road, pitching a novel concept: rent computing power for 10 cents an hour, and storage for 15 cents a gigabyte per month — no servers to buy, no data centers to build.
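Barr’s pitch boils down to simple metering arithmetic. Here is a back-of-the-envelope sketch using only the 2006 list prices quoted above; the function and scenario are illustrative, not an AWS billing API:

```python
# Illustrative only: metered-billing math at the 2006 rates quoted above
# ($0.10 per instance-hour of compute, $0.15 per GB-month of storage).
COMPUTE_RATE_PER_HOUR = 0.10       # USD per instance-hour
STORAGE_RATE_PER_GB_MONTH = 0.15   # USD per GB-month

def monthly_bill(instance_hours: float, storage_gb: float) -> float:
    """Pay only for what you use: no servers to buy, no upfront commitment."""
    return (instance_hours * COMPUTE_RATE_PER_HOUR
            + storage_gb * STORAGE_RATE_PER_GB_MONTH)

# One server running around the clock for a 30-day month, plus 50 GB stored:
print(f"${monthly_bill(30 * 24, 50):.2f}")  # $79.50
```

At those prices, a month of always-on compute plus storage cost less than a single mid-range server, which was the whole sales pitch.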

Barr remembers calling his wife to check in at the end of the day. Get a nice dinner, she told him, you deserve it. But later, at the restaurant, looking at the menu and doing the math in his head, he couldn’t help but ask himself if the pennies were adding up.

“Did enough people start using these servers to buy me a decent steak?” he wondered.

He probably should have ordered the filet.

Two decades later, AWS generates nearly $129 billion a year in revenue. That’s enough to rank in the top 40 of the Fortune 500 if it were a standalone company, ahead of the likes of Comcast, AT&T, Tesla, Disney, and PepsiCo. Companies such as Netflix, Airbnb, Slack, Stripe, and thousands more have built massive businesses on its platform.


When AWS goes down, it ripples across the web, taking down apps, websites, and services that most users never knew were on a common infrastructure. 

But the business that defined cloud computing — bankrolling Amazon’s expansion into everything from streaming to same-day delivery — is now grappling with the most significant challenge since it launched. The rise of AI has upended the industry, empowering Microsoft, Google and others, and creating competitive dynamics that seem to change every month.

For the first time, AWS faces questions about its long-term ability to lead the market it created.

With Amazon marking the 20th anniversary of AWS this month, GeekWire spoke with early builders, current AWS insiders, and longtime observers of the company to tell the story of how the business got started, how it won the cloud, and what it’s up against now.

Scalable, reliable, and low-latency

Officially, Amazon pegs the public launch of AWS to March 14, 2006. That’s when it announced “a simple storage service” that offered software developers “a highly scalable, reliable, and low-latency data storage infrastructure at very low costs.”

Dubbed S3, it was Amazon’s first metered cloud service: the first time developers could pay for exactly what they used, billed in tiny increments, with no upfront commitment.

“We think it can be a meaningful, financially attractive business.” A Bloomberg News story quotes Jeff Bezos about AWS in November 2006. S3 launched earlier in the year.

All of this might seem mundane in a modern world where the cloud and internet services are almost like electricity and water, seemingly always there when you need them. 

But remember the context of that moment: Facebook was available only on college campuses. Netflix arrived on DVDs in the mail. The iPhone was still a year away from being unveiled. And over at Microsoft in Redmond, they were finally getting ready to ship Windows Vista.

The asterisk in the headline

The history of Amazon Web Services is more complicated than it might seem, and it’s actually a subject of some disagreement behind the scenes. There are multiple origin stories, including one offered by Amazon itself, and others by former employees who say the company has tidied up the narrative over the years to shape the lore around its current leaders.


Journalist Brad Stone, author of the canonical Amazon book, “The Everything Store,” discovered this when Andy Jassy — the longtime AWS CEO who would go on to succeed Jeff Bezos as Amazon CEO — disputed aspects of his telling of the AWS story in a one-star review.

One point of contention: the origins of EC2, the AWS service built by a small team in South Africa, and the degree to which it sprang from the process Jassy led or was born independently.

Part of the challenge: Amazon, despite operating the storehouse of the internet, isn’t great at preserving its own history. The company, which cooperated with this piece, wasn’t able to unearth key documents such as Jassy’s original AWS six-pager from September 2003.

Some former Amazon leaders take things further back, to a set of e-commerce APIs that Amazon released in July 2002, allowing outside developers to access its product catalog and build applications on top of it. By that accounting, AWS is closer to 24 years old.

Overcoming internal opposition

The effort was led by business leader Colin Bryar, who ran Amazon’s affiliates program, along with technical leader Robert Frederick, whose Amazon Anywhere team (focusing on making Amazon’s site and features available on mobile devices) had been working since 1999 on internal web services that became the foundation for the external APIs.

Amazon in those days was on Seattle’s Beacon Hill, in the landmark art deco Pacific Medical Center tower overlooking downtown. Jeff Bezos was directly involved from the early days, as a believer in the vision that Amazon’s infrastructure capabilities could become a big business.

In 2002, when Bryar initially pitched a roomful of senior leaders on the idea of opening up Amazon’s product catalog and features as web services to outside developers, nearly all of them said no, as Frederick recalled in a recent interview.

The objections piled up: it would cannibalize existing business, it would educate competitors. Then, as Frederick remembers it, Bezos looked around the table and let out one of his trademark piercing laughs. Amazon’s founder wanted to see what developers would do.


“Let’s do it,” Frederick recalls Bezos saying, “and let’s have them surprise us.”

Later, in a July 2002 press release announcing “Amazon.com Web Services,” Bezos used nearly identical language: “We can’t wait to see how they’re going to surprise us.”

Big developer response

Within months, tens of thousands of developers had signed up. Increasingly, they were asking for things like storage, hosting, and compute, recalled Frederick, who worked at Amazon through mid-2006. He went on to found IoT platform Sirqul in 2013 and remains its CEO.

Another veteran of those early days agreed that the developer response to those initial e-commerce APIs may have opened the minds of Amazon’s leaders to the larger possibilities. 


“Maybe that’s where Andy’s brain lit up. … Maybe that’s where Jeff’s brain lit up,” said Dave Schappell, referring to Jassy and Bezos. Schappell arrived at Amazon in 1998 as Jassy’s MBA intern, dropped out of Wharton to stay, and spent the next seven years working with him.

Schappell ran the associates program after Bryar, became an early head of product for AWS, and hired the original product managers. Those product managers included Jeff Lawson, who went on to found Twilio. Schappell himself became a well-known Seattle entrepreneur before returning to AWS for four years after Amazon acquired his startup TeachStreet.

The ‘crystal-clear movie moment’

Jeff Barr was one of the developers who noticed. 

Now an Amazon VP and longtime AWS chief evangelist, Barr was working as an outside consultant in the web services field when he logged into his Amazon Associates account one day in 2002 and noticed a new message. 

AWS Chief Evangelist Jeff Barr joined in the early days of the business. (Amazon Photo)

Amazon now had XML, it said, referring to the data-formatting standard that allowed software systems to communicate over the internet. Amazon was making its product catalog available as a web service and connecting it to the affiliate program, a surprising move at the time.

“I clicked through, I signed up for the beta. I downloaded it right away,” Barr recalled. 

He sent feedback to the email address in the documentation. They actually replied. 

Before long, he was invited to a small developer conference at Amazon’s headquarters — maybe four or five attendees at the Pacific Medical Center tower, in a semicircular open space with a view of the city. The developers sat in the middle, with Amazon employees around them.

At some point, one of the Amazon presenters announced that they were so impressed at how developers had found the APIs and started publishing apps within 48 hours that they were going to look around the rest of the company for more services to open up.


“That was that crystal-clear movie moment,” Barr said. He turned to an Amazon employee nearby and told her: “I have to be a part of this.”

Creating the cloud 

But what Frederick and team had built was essentially a way for outside developers to access Amazon’s product data. It was not yet the cloud as we know it today. 

That shift began in mid-2003, as Jassy told the story in a 2013 talk at Harvard Business School. Jassy, then serving as Bezos’s technical advisor, was tasked with figuring out why software projects across Amazon were taking so long. It turned out that engineers were spending months building storage, database, and compute solutions from scratch.

In a meeting of six or seven people that summer, someone made the observation that would change the company’s trajectory. Jassy recalled the thinking during his HBS talk: “We’re pretty good at this. And if we’re having so many problems, and we don’t have anything we can use externally, I imagine lots of other companies probably have the same problem.”


Around the same time, Amazon recruited Werner Vogels, a Cornell distributed systems researcher, as its chief technology officer. He almost didn’t take the call. “It’s an online bookstore,” he recalled in a LinkedIn post last week. “How hard could their scaling be?” 

But the company was wrestling with every problem he and his colleagues had been theorizing about — fault tolerance, consistency, availability at scale — live in production, every day. 

Fundamental building blocks

Schappell remembers those early days as a non-stop cycle of six-page memos and meetings with Jassy and Bezos, all focused on trying to figure out what to build. 

The concept that would define AWS — breaking every capability down to its most basic building block, or “primitive” — didn’t arrive fully formed. “I don’t think he said that on day one,” Schappell said of Bezos. “I think he said it after he read 47 of our six-pagers.”


Each primitive would stand on its own, and customers would pay only for what they used, billed in tiny increments. It was a direct rebuke to the licensing models of companies such as database giant Oracle, where customers paid for everything whether they used it or not.

Rahul Singh, who joined AWS in January 2004 as one of its first engineers, recalled the early technical plans going through just one layer of review before reaching Bezos and Jassy. (It’s the kind of streamlined decision-making that Jassy is now trying to restore across the company.) 

Fault tolerant by design

In one early meeting, Bezos told the engineers he wanted a server touched exactly twice: once when installed in the data center, and once years later when it was pulled out. In between, nothing. The software had to be built to tolerate failures, leaving dead machines behind and moving on. It was a philosophy that would define the architecture of the cloud.
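The “touch a server twice” philosophy implies software that routes around dead machines rather than waiting for repairs. A minimal sketch of that idea, with hypothetical names and a toy stand-in for the network call (real AWS systems are vastly more involved):

```python
import random

DEAD_HOSTS = {"host-2"}  # a failed machine nobody rushes to fix

class HostDown(Exception):
    """Raised when a request lands on a dead machine."""

def fetch_from(host: str, key: str) -> str:
    # Stand-in for a network call; hosts marked dead always fail.
    if host in DEAD_HOSTS:
        raise HostDown(host)
    return f"{key}@{host}"

def fetch(replicas: list[str], key: str) -> str:
    """Try replicas in random order; a dead host is routine, not an outage."""
    for host in random.sample(replicas, k=len(replicas)):
        try:
            return fetch_from(host, key)
        except HostDown:
            continue  # leave the dead machine behind and move on
    raise RuntimeError("all replicas down")

print(fetch(["host-1", "host-2", "host-3"], "object.bin"))
```

Because every object lives on multiple replicas, the system keeps serving while dead hardware simply waits for its second and final touch.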

On Singh’s first day, his manager Peter Cohen sat him down in the lunch area and handed him a planning document (a “PR/FAQ” in Amazon lingo) that had just been approved by Bezos.  


“We’re calling this S4,” Cohen said. Singh looked at the name of the product, Simple Server-Side Storage Service, and pointed out that it should be called S5. Singh recalls Cohen’s response: “Yeah, you’re really smart, aren’t you? Let’s see if you can actually build this.” 

It was eventually shortened to Simple Storage Service, or S3.

The queuing service called SQS had launched in beta in 2004 (adding further to the debate over the origin story and what counts as the launch), but S3 was the first service made generally available.

A billion-dollar business?

Jassy, then the VP in charge of AWS, would hold all-hands meetings in a conference room for four or five engineers, most of them straight out of college and grad school, as Singh recalled in an interview. Jassy ran them with the discipline of a much larger organization, repeating over and over that AWS could be a billion-dollar business, at a time when it had no revenue at all. 


Singh remembers being highly skeptical.

“I was young and naive, and I remember thinking: a billion, that’s a really big number,” Singh said. Years later, he would joke with Jassy that the prediction had been completely wrong: it turned out to be a multi-billion dollar business, many times over.

In a LinkedIn post marking the March 14 anniversary, current AWS CEO Matt Garman — who joined the company as a summer intern in 2005, before the launch of S3 — recalled how early customers like FilmmakerLive and CastingWords took a bet on the fledgling platform. 

“That shift changed the economics of building technology almost overnight,” he wrote.

Meanwhile, in Cape Town … 

While one team was building S3 in Seattle, the compute side of the equation was taking shape 10,000 miles away. Chris Pinkham, an Amazon VP who wanted to move back to his native South Africa, was given permission to set up a development office in Cape Town. 

His small team built EC2 — the Elastic Compute Cloud — largely independent of the Seattle operation. The local tech community was a bit bewildered by what Amazon was doing.

“We knew this bookstore had arrived in town,” recalled Dave Brown, who was working at a local payments startup at the time. He asked his friends who had joined what they were doing. 

Dave Brown, Vice President, AWS Compute & ML Services, at AWS re:Invent 2025. (Amazon Photo)

“It’s kind of like, you know, you can rent a computer on the internet,” they told him. 

Brown asked about the revenue. “Tens of dollars every single day,” they said.


He remembers wondering why they were wasting their time on that. 

The answer became clear when EC2 launched in August 2006, five months after S3, adding compute to storage as another fundamental building block of AWS and the cloud.

Early customers showed EC2’s range: a Spider‑Man movie used it for rendering, and Facebook apps like FarmVille and Animoto spun up instances on demand, as Brown recalled. 

A New York Times engineer used a personal credit card to run optical character recognition on the paper’s scanned archives over a weekend, making the entire archive searchable, after being told by the company that it would be cost-prohibitive using traditional approaches. The job cost a grand total of a couple hundred bucks, even after he initially botched the run and had to do it over again.

Typing ahead of the characters

Brown joined in August 2007, the 14th person on the EC2 team. They worked out of a tiny office in Constantia, the winelands part of Cape Town, across the highway from vineyards. 

They occupied part of one floor of an office building. There was one conference room, and two offices. The rest was open plan. The team was 14 engineers, one product manager, and Peter DeSantis, the leader who came from Seattle to help build the service.

The internet connection was a four-megabit DSL line shared by the entire office, with 300 milliseconds of latency to the data centers in the U.S. When engineers typed on their screens, each character had to make the round trip across the ocean and back before it appeared. 

“You get really good at typing ahead of where the actual characters are appearing,” Brown said.


Every morning, someone had to find the VPN token to get the office online. It lasted about 10 hours before it automatically reset. “Everybody would be shouting, where’s the VPN token?”

Scrambling to keep up

One day, they were running low on computing capacity. DeSantis came out of his office and told the engineers to shut down the machines they were using for testing. That freed up enough capacity to keep the service going for a few days until the next racks of hardware came online.

Marc Brooker, now an AWS VP and distinguished engineer working on agentic AI, joined the EC2 team in Cape Town in 2008. He could see the entire team from his desk. When Brown was away one day, Brooker and the team covered every surface of his office in sticky notes — the kind of prank that only works in a small office where everyone knows everyone else.

Brooker was drawn in by something he heard about in his job interview: the team had built a way to make a distributed system look like a physical hard drive to the operating system.


“Wow, that is so cool,” he recalled thinking. “Here’s 20 other things I can think of that we could do with that kind of technology.”

That instinct, that the building blocks of the cloud could be combined and recombined in ways no one at Amazon had imagined, was at the core of what made AWS catch on.

“The world would be in a very different place if you didn’t have the freedom to experiment, to pilot, to try something, to move on to some other idea, that AWS first introduced,” said Mai-Lan Tomsen Bukovec, an AWS VP who has led S3 for 13 of its 20 years.

Prasad Kalyanaraman, now the AWS vice president who oversees global infrastructure, previously spent years building supply-chain forecasting systems for Amazon’s retail operation. Around 2011, Charlie Bell, then a senior AWS leader, asked him to help with a problem: the team was forecasting its compute demand using spreadsheets.


He adapted the supply-chain forecasting tools for AWS, but the cloud business kept outrunning every model he built. 

“The funny thing about forecasts is that forecasts are always wrong,” he said. “It’s very hard to actually predict exponential growth.”

How AWS grew

It began with startups. The companies that would define the next era of technology were building on AWS. Airbnb, Instagram, and Pinterest all got their start on AWS. 

John Rossman, a former Amazon exec and author of books including “The Amazon Way” and “Big Bet Leadership,” remembers Jassy pulling him aside for coffee at PacMed around 2008. Rossman had left Amazon and was working as a consultant to large businesses. Jassy wanted to know: did he think big companies would ever be interested in on-demand computing?


Maybe, maybe not, Rossman said. He was working with Blue Shield of California at the time, and tried to imagine them running on AWS. It was hard to picture. At the time, the typical AWS customer was a startup developer with little budget for infrastructure. The idea that a big insurance company would run on AWS seemed like a stretch.

“I was a little bit of a pessimist on it,” Rossman said. 

But soon things started to change. 

Netflix moved its streaming infrastructure to AWS starting in 2009, a decision that carried particular weight because it competed with Amazon in video. In 2013, the CIA awarded AWS a contract over IBM, signaling that the platform was trusted at the highest levels of security.

Microsoft tips its hat

AWS’s pricing model, in which customers paid only for what they used, was a direct threat to the licensing businesses of tech’s old guard. Whether burying their heads in the sand or just preoccupied, the companies that would become the biggest AWS rivals were slow to respond.  

Microsoft didn’t unveil its cloud platform — code-named “Red Dog,” and initially launched as “Windows Azure” — until October 2008, more than two years after S3 debuted. Bill Gates had left his day-to-day role at Microsoft a few months earlier. The company was still recovering from the aftermath of the Vista flop.

“I’d like to tip my hat to Jeff Bezos and Amazon,” said Ray Ozzie, then Microsoft’s chief software architect, at the launch event — a rare public acknowledgment of a competitor’s lead.

Azure didn’t reach general availability until 2010, and its early approach was more of a platform for applications, not the raw infrastructure that made AWS so popular with developers. It took years to build out comparable offerings. 


Google launched App Engine, a platform for running applications, in 2008, but didn’t offer raw computing infrastructure to rival EC2 until Compute Engine arrived in 2012.

‘The AWS IPO’

For years, AWS grew in something close to silence. Amazon said little about the overall growth, and didn’t break out the financial results for the business in its quarterly earnings reports.

Then, in April 2015, Amazon reported its first-quarter earnings with AWS broken out in detail for the first time, and it stunned the industry. The business had a $6 billion annual revenue run rate and was growing 50% a year. 

The modest expo hall at the first AWS re:Invent, under construction in 2012, left. Last year’s conference, right, drew 60,000 people to Las Vegas. (2012 Photo Courtesy Jeff Barr; 2025 Photo by Todd Bishop)

AWS generated more than $250 million in profit that quarter alone, with operating margins around 17%. This was a stark contrast with the rest of Amazon, scraping by on traditional retail margins of 2% to 3%. AWS was making significantly more profit on every dollar of revenue.

The hosts of the Acquired podcast, in their extensive 2022 history of the rise of Amazon Web Services, would later call this moment, in effect, “the AWS IPO.”


Amazon stock jumped 15% on the news.

“I was blown away,” said Schappell, the early AWS product leader who left in 2004 and later listened to the first AWS earnings breakout while training for a marathon. For years, he had assumed Amazon was losing billions on AWS. The reality was the opposite: AWS had become so profitable that it was effectively bankrolling Amazon’s future.

The margins kept climbing, reaching 35% by early 2022. 

Then the pandemic cloud boom faded. Inflation spiked amid broader economic uncertainty. Customers scrutinized their cloud bills and pulled back spending. AWS revenue growth fell from 37% to 12% over the course of the year, the slowest in its history. Margins fell to 24%.

The ChatGPT moment

Then everything changed, for Amazon and everyone else. 

On November 30, 2022, OpenAI released ChatGPT, with little fanfare at first. The consumer AI chatbot quickly became the fastest-growing application in history, reaching 100 million users in two months, and sending the technology world into a frenzy in the ensuing months.

For AWS, the stakes were huge. Every major wave of technology over the previous 15 years, from mobile to social to streaming to e-commerce, had been built on its platform. 

If AI was the next wave, AWS needed to lead the way again.


Amazon was far from absent in AI. AWS had launched SageMaker in 2017, giving developers tools to build and deploy machine learning models. It had released custom AI chips for inference and training. Alexa, the voice assistant, had been processing natural language queries since 2014. Amazon had spent many years and billions of dollars on machine learning.

But none of it looked or worked like ChatGPT. The new model could write code, draft essays, answer complex questions, and hold a conversation. It was not a feature. It was a product people wanted to use. And it was built by an AI lab running on Microsoft Azure.

‘AWS sneaked in there’

The irony: OpenAI didn’t start on Microsoft’s cloud. It launched on AWS.

When the AI lab debuted in December 2015, AWS was listed as a donor. OpenAI was running its early research on Amazon’s infrastructure under a deal worth $50 million in cloud credits. 


Microsoft CEO Satya Nadella learned about it after the fact. “Did we get called to participate?” he wrote to his team that day, in an email that surfaced only recently in a court filing from Elon Musk’s suit against Microsoft and OpenAI. “AWS seems to have sneaked in there.”

Microsoft moved fast. Within months, Nadella was courting OpenAI. The AWS contract was up for renewal in September 2016. “Amazon started really dicking us around on the [terms and conditions], especially on marketing commits,” Sam Altman wrote to Musk, who was then OpenAI’s co-chair. “And their offering wasn’t that good technically anyway.” 

By that November, Microsoft had won the business. 

Six years later, with the launch of ChatGPT, that bet paid off in ways no one could have predicted. Microsoft stock surged. Amazon, like many others in the industry, was scrambling to figure it all out — suddenly trying to keep up with the future of a market it had long defined.

Pivoting to generative AI

The AWS CEO at the time was Adam Selipsky, who had helped build the business from its earliest days before leaving in 2016 to run Tableau, the data visualization company. He returned in May 2021 to lead AWS after Jassy was promoted to succeed Bezos as Amazon CEO. 

In a May 2024 interview with Selipsky, on one of his last days in the role, GeekWire asked him directly if Amazon had been caught flat-footed by the rise of generative AI. 

After a member of his team interjected to say the question seemed to be informed by reading too many Microsoft press releases, Selipsky dismissed the idea that AWS was behind. 

While that narrative might have “more sizzle” and generate clicks, Selipsky said, the reality was different, as evidenced by Amazon’s years of work in AI and machine learning. 


AWS had announced Inferentia, a chip for deep learning, in 2018, building on its 2015 acquisition of Annapurna Labs, the Israeli chip startup. It began work on CodeWhisperer, an AI coding assistant, in 2020 — before GitHub Copilot existed, the company notes. In 2021, it launched Trainium, a chip designed to train models with 100 billion or more parameters. 

Dario Amodei, CEO of Anthropic, right, speaks with Adam Selipsky, then CEO of Amazon Web Services, at AWS re:Invent on Nov. 28, 2023. (GeekWire File Photo / Todd Bishop)

At the same time, Selipsky acknowledged that AWS had “pivoted many thousands of people from other interesting, important projects to work on generative AI” — a scale of reallocation signaling something other than business as usual inside the company. 

Tomsen Bukovec, who now oversees AWS’s core data services including S3, analytics, and streaming, said her team’s response was less a pivot than a process of learning. 

They educated themselves on what the technology meant for their services, she said, and thought deeply about what it would look like for AI to both create and consume data at scale. 

The question her team started asking in late 2022: what does the world look like when 70 to 80 percent of the usage of your services comes through AI?


“AI is going to use it at 10 times to 100 times the rate of a human, and it’s going to do it all day long, all the time, 24 hours,” she said. “AI never goes to sleep.”

Scrambling to meet the moment

The pressure to catch up in generative AI was felt across the company. In a lawsuit filed in Los Angeles Superior Court, an AI researcher who worked on Amazon’s Alexa team alleged that a director instructed her to ignore internal copyright policies because “everyone else is doing it.”

The complaint described ChatGPT’s launch in late November 2022 as causing “panic within the organization.” Amazon has denied the allegations, and the case is still pending.

On Amazon’s earnings call in early February 2023 — two months after ChatGPT’s launch — Amazon CEO Andy Jassy did not discuss generative AI or large language models.

Matt Garman, AWS CEO, speaks at AWS re:Invent 2025. (GeekWire File Photo / Todd Bishop)

By the next quarter’s call, in late April 2023, he spoke about it for nearly ten minutes, describing it as “a remarkable opportunity to transform virtually every customer experience that exists.”

In September 2023, the company announced an investment of up to $4 billion in Claude maker Anthropic, the AI startup founded by former OpenAI researchers. The investment would eventually grow to $8 billion — which seemed like a lot at the time.

Selipsky left AWS in mid-2024. Garman, whom Selipsky had hired as a product manager in 2006, succeeded him as CEO, charged with leading the cloud business into the new era.

From CodeWhisperer to Bedrock

The roots of Amazon’s response actually predated ChatGPT by more than two years, although it faced initial skepticism internally. In 2020, Atul Deo, an AWS product director, wrote a six-page memo proposing a generative AI service that could write code from plain English prompts.

Jassy, who was still leading AWS at the time, wasn’t sold. His reaction, as Deo later told Yahoo Finance, was that it seemed like a pipe dream. The project launched in 2023 as CodeWhisperer, an AI coding assistant. 


But by then, ChatGPT had redrawn the landscape, and the team realized they could offer something broader: a platform giving customers access to a range of foundation models through a single service. AWS called it Bedrock. The name reflected an ambition to do for AI models what the company had done years earlier with its Relational Database Service, which wrapped MySQL, Oracle, and other database engines in a common management layer. 

Bedrock would do the same for large language models.
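The pattern described here, one management layer wrapping interchangeable engines, can be sketched as a simple registry-and-dispatch adapter. All names below are hypothetical, not the actual Bedrock or RDS API:

```python
from typing import Callable

# Each "engine" exposes the same call shape behind a common layer,
# so swapping backends becomes a configuration change, not a rewrite.
def engine_a(prompt: str) -> str:
    return f"[engine-a] {prompt}"

def engine_b(prompt: str) -> str:
    return f"[engine-b] {prompt}"

REGISTRY: dict[str, Callable[[str], str]] = {
    "model-a": engine_a,
    "model-b": engine_b,
}

def invoke(model_id: str, prompt: str) -> str:
    """One entry point, many interchangeable backends."""
    try:
        return REGISTRY[model_id](prompt)
    except KeyError:
        raise ValueError(f"unknown model: {model_id}") from None

print(invoke("model-a", "hello"))  # [engine-a] hello
```

The common layer is what lets customers switch models by changing an identifier, the same way RDS customers could choose among database engines without rebuilding their management tooling.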

The decision to offer multiple models rather than push a single in-house option was deliberate, and rooted in a pattern AWS had followed for years. It brought multiple CPUs to the cloud: AMD, Intel, and its own Graviton. It offered Nvidia GPUs alongside its own Trainium chips. 

Fastest-growing AWS service

Amazon’s view is that choice drives competition, which drives down prices for customers.


“We knew there was never going to be one model to rule everybody,” said Dave Brown, the AWS vice president who oversees EC2, networking, and custom silicon. “And even the best model was not going to be the best model all the time.”

Bedrock launched in preview in April 2023 and reached general availability that September, with models from Anthropic, Meta, and others alongside Amazon’s own. Two years later, it had become the fastest-growing service AWS had ever offered, with more than 100,000 customers. 

On Amazon’s most recent earnings call, Jassy described it as a multi-billion-dollar business, with customer spending growing 60% from one quarter to the next.

At the end of 2024, Amazon added its own entry to the model race. The company introduced a family of foundation models called Nova, positioned as a lower-cost, lower-latency alternative to the third-party models on the Bedrock platform. 

Amazon CEO Andy Jassy unveils the Nova models at AWS re:Invent in December 2024. (GeekWire Photo / Todd Bishop)

As Fortune’s Jason Del Rey observed, it was a page from the e-commerce playbook: build the marketplace first, then stock it with a house brand. Just as Amazon sells goods from thousands of merchants alongside its own private-label products, Bedrock offered models from Anthropic, Meta, and others, and now Amazon’s own models alongside them.

At re:Invent in late 2025, AWS pushed further, unveiling what it called “frontier agents” — autonomous AI systems designed to work for hours or days without human involvement. 

One, built into Amazon’s Kiro coding platform, can navigate multiple code repositories to fix bugs while a developer sleeps. Last month, the Financial Times reported that Amazon’s own AI coding tools caused at least one AWS service disruption. Amazon acknowledged the incident but publicly disputed aspects of the reporting, citing a misconfigured role, not the AI itself.

The $200 billion bet

Like its rivals, AWS is also building the physical infrastructure to back it up. In 2025, less than a year after it was announced, AWS opened Project Rainier, one of the world’s largest AI compute clusters, centered in Indiana, powered by more than 500,000 of Amazon’s Trainium2 chips. 

Named after the mountain visible from Seattle, Rainier was built to train and run Anthropic’s next generation of Claude models, using Amazon’s own Trainium chips rather than Nvidia GPUs.


Kalyanaraman, the AWS vice president who oversees global infrastructure, said the project forced AWS to rethink its supply chain from the ground up. The goal was to minimize the time between a chip leaving its fabrication facility and serving a customer workload. 

Rainier was built at a faster pace than anything AWS had ever done, Kalyanaraman said, with more than 100,000 Trainium chips available to Anthropic in under a year. But it wasn’t a one-off. He called it the new template for how AWS would build AI infrastructure going forward.

Then, late last month, came the deal that brought the story full circle. 

OpenAI — the company that launched on AWS in 2015 and left for Microsoft Azure the following year — announced a partnership with Amazon that included up to $50 billion in investment and a cloud agreement worth more than $100 billion over eight years.


OpenAI committed to run workloads on Amazon’s custom Trainium chips, making it the second major AI lab after Anthropic to do so. The two companies had been talking since at least May 2023, according to SEC filings, but Microsoft’s right of first refusal on OpenAI’s compute had blocked a deal until those restrictions were loosened in the latest renegotiation.

By late 2025, AWS revenue was growing at its fastest pace in more than three years, up 24% to $35.6 billion a quarter. The company disclosed that its Trainium and Graviton chips had reached a combined annual revenue run rate of more than $10 billion. Bedrock had surpassed 100,000 customers and was generating revenue in the billions.

The competitive picture was also coming into sharper focus. 

In mid-2025, Microsoft disclosed standalone Azure revenue for the first time: $75 billion a year, up 34%. Google Cloud had crossed a $50 billion annual run rate. AWS, at more than $116 billion a year at the time, was still larger — but no longer running away with the market.

All of this helps to explain Amazon’s record capital spending. On the company’s latest earnings call, Jassy defended plans to spend $200 billion this year, most of it on AI infrastructure.


The figure is so large it would consume nearly all of Amazon’s operating cash flow. Facing a Wall Street backlash, Jassy called artificial intelligence “an extraordinarily unusual opportunity to forever change the size of AWS and Amazon as a whole.”

What’s next: Bear and bull cases

Longtime observers are divided on the company’s AI bet.

Corey Quinn, a cloud economist who works with AWS customers through his Duckbill consultancy, sees little real-world traction for Amazon’s Nova models. “You know someone is an Amazon employee when they talk about Nova, because no one else is,” he said.

Some businesses bypass Amazon’s Bedrock platform entirely because of capacity constraints and slower speeds, he said, going to third-party providers like Anthropic rather than inserting Bedrock as a “middleman” — unless they’re trying to retire their committed AWS spend.


Looking forward, Quinn pointed to a historical parallel. A quarter century ago, Cisco was the most valuable company in the world, the backbone of the internet. Today it is a profitable but largely invisible utility. AWS, he said, could be headed for the same fate.

“It’s very clear that there will be a 40th anniversary for AWS, because that inertia does not go away,” Quinn said. “But will it be at the center of tech policy and giant companies, or is it going to be a lot more like the Cisco of today?”

Om Malik, the veteran tech writer, cast a critical eye on Amazon’s OpenAI investment.

By his math, Amazon is paying roughly 16 times more per percentage point of OpenAI than Microsoft did, with none of the exclusive IP rights, revenue share, or primary API access that Microsoft locked up years ago. The cost of being late, Malik wrote, is measured in billions.

The lobby at AWS headquarters, the re:Invent building in Seattle. (GeekWire Photo / Todd Bishop)

Rossman, the former Amazon executive who was once skeptical about AWS demand from big business, sees a different picture. He agrees that AWS is strong in infrastructure, the picks and shovels. But where Quinn sees that as a ceiling, Rossman sees it as a moat.

The models are the commodity, Rossman contends. They leapfrog each other constantly. What matters is everything the models run on and through: the chips, the servers, the data centers, the power. AWS is building more of that stack than most competitors. 

“That’s where the value is,” he said.

Rossman said he could envision AWS operating nuclear power plants someday. The long-term winners, he said, will be the companies that deliver the best AI at the lowest cost per token. That’s where AWS’s vertical integration — from Trainium chips to Bedrock to the data center itself — gives it an advantage competitors can’t easily replicate.
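Rossman's "cost per token" framing can be made concrete with a toy calculation. All numbers below are hypothetical, not actual AWS or competitor prices; the point is only how an hourly compute price and serving throughput combine into a unit cost, which is the metric he argues vertical integration drives down.

```python
# Toy illustration of "cost per token" economics. All numbers are hypothetical,
# not real AWS or competitor prices; only the unit-cost arithmetic matters here.

def cost_per_million_tokens(instance_cost_per_hour: float, tokens_per_second: float) -> float:
    """Convert an hourly compute price and serving throughput into $ per 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return instance_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical scenario: vertically integrated silicon costs less per hour
# at similar throughput, so it wins on cost per token.
custom_chip = cost_per_million_tokens(instance_cost_per_hour=20.0, tokens_per_second=5000)
merchant_gpu = cost_per_million_tokens(instance_cost_per_hour=32.0, tokens_per_second=5000)
```

At equal throughput, the cheaper hourly rate translates directly into a proportionally lower cost per million tokens, which is why chip cost and data-center efficiency, not model quality alone, set the floor.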

As for the risk of spending too much, Rossman put it simply: you have to decide which side of history you’d prefer to fail on — overbuilding or underbuilding. Amazon isn’t taking chances.


In an internal all-hands meeting last week, Jassy said AI could help AWS reach $600 billion in annual revenue, double his own prior estimate, Reuters reported. He had been thinking for years that AWS could be a $300 billion business in a decade. AI, he said, changed the math. 

Any way you add it up, it’s a lot of steaks.


Apple raises external storage prices as AI consumes everything


Apple has raised the price of external hard drives in its stores, as its retail efforts feel the pinch of the increased cost of storage.

External drives are now more expensive to buy from Apple.

The tech industry is dealing with a crisis of supply and demand, with the needs of AI infrastructure buildouts consuming masses of memory and storage. While the main discussion has been about how Apple is faring on the supply chain side of things, it seems retail is being affected at a much faster rate.
Writing in Sunday’s “Power On” newsletter for Bloomberg, Mark Gurman reported that Apple had updated the prices for a number of its external drives, both on its website and in retail outlets.

How to Back Up Your Android Phone (2026)

Published

on

There are some premium apps for macOS that offer more of an iTunes-like experience, but none that I can vouch for.

Backing Up to Your Chromebook

Here is how to back up files from your Android phone on a Chromebook:

  1. Plug your phone into a USB port on your Chromebook.
  2. Drag down the notification shade and look for a notification from Android System that says something like Charging this device via USB, Tap for more options and tap it.
  3. Look for an option that says File transfer and select it.
  4. The Files app will open on your Chromebook, and you can drag any files you want to copy over.

Backing Up to Another Cloud Service

Maybe you have run out of Google storage, or you prefer another cloud service. There are Android apps for Dropbox, Microsoft’s OneDrive, MEGA, Box, and others. Most of them offer some cloud storage for free, but what you can back up and how you do it differs from app to app.


We looked at how to back up mobile photos on a few of these before, and you can usually set up the process to be automatic, though other files often have to be backed up manually. If you want to automatically sync photos and other files across devices using one of these services, then check out the Autosync app. There are specific versions for Dropbox, OneDrive, MEGA, and Box.

Whatever service you choose, make sure to keep your cloud storage safe and secure.

Backing Up Locally

Maybe you’d prefer not to use the cloud or Google’s services for your backup. You can always use the methods listed above for Windows or Mac to download files, then manually move them onto a portable hard drive or USB flash drive, but that’s quite a lot of work.
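That manual workflow can be partly scripted. Here is a minimal sketch that automates the "copy to a portable drive" half, assuming the phone's files have already been downloaded to a local folder; the paths in the usage comment are placeholders, not real mount points.

```python
# Sketch of scripting the "download, then copy to a portable drive" step above.
# Assumes the phone's files have already been downloaded to a local folder;
# paths are placeholders -- point them at your own folders before running.
import shutil
from datetime import date
from pathlib import Path

def backup_folder(source: Path, drive_root: Path) -> Path:
    """Copy source into a dated folder on the backup drive and return the new path."""
    dest = drive_root / f"phone-backup-{date.today():%Y-%m-%d}"
    shutil.copytree(source, dest, dirs_exist_ok=True)  # re-running the same day merges
    return dest

# Example with hypothetical paths:
# backup_folder(Path.home() / "Downloads" / "phone", Path("/media/usb-drive"))
```

Dating each backup folder keeps older snapshots around instead of overwriting them, which is the main advantage over dragging files by hand.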


If you have network-attached storage (NAS), there’s likely an app that can automatically back up some of your files when you are connected to home Wi-Fi. You might also consider Syncthing (though it’s best for syncing rather than backing up) or something like Swift Backup, though you may need to pay and/or root your phone to get the best from them.

Backing Up Within Apps

Messaging apps, and a handful of other apps, have their own backup systems built in. I’ll give you a couple of examples here, but check up on your favorites.

(Image: WhatsApp via Simon Hill)


Elon Musk unveils chip manufacturing plans for SpaceX and Tesla


Elon Musk recently outlined ambitious plans for a chip-building collaboration between his companies Tesla and SpaceX.

Bloomberg reports that Musk shared his plans on Saturday night at an event in downtown Austin, Texas, with a photo suggesting that what Musk is calling the “Terafab” facility will be built near Tesla’s Austin headquarters and “gigafactory.”

Musk said he’s pursuing this project because semiconductor manufacturers aren’t making chips quickly enough for his companies’ artificial intelligence and robotics needs: “We either build the Terafab or we don’t have the chips, and we need the chips, so we build the Terafab.”

The goal is to manufacture chips that can support 100 to 200 gigawatts of computing power per year on Earth, along with a terawatt in space, Musk said. He did not offer a timeline for these plans.


As Bloomberg noted, Musk does not have a background in semiconductor manufacturing, but he does have a history of overpromising on goals and timelines.


Nosh Robotics Launched a $1,500 Cooking Robot. Here’s What It Does (and Doesn’t Do)


Overly autonomous cooking tools and kitchen appliances have largely whiffed in the US market. While culinary robots like the Thermomix have made inroads in Europe and elsewhere, adoption in the US has been slow. Super smart ovens, including the June, Suvie and Brava, have likewise struggled to connect with consumers here.

Nosh Robotics, a smart home robotics company based in Bengaluru, India, is giving it a go with the launch of the Nosh One, a $1,499 AI-powered cooking robot seven years in the making. The company says it “can handle the entire cooking process autonomously: ingredient selection, sautéing, plating and self-cleaning.”

The June Oven was the most promising smart oven we tested. It quietly stopped production in 2023. (Photo: June)


The Nosh does a few things that a slow cooker or Instant Pot doesn’t, namely, add the right amount of ingredients, cooking oils and spices from small chambers. But you still have to load the right ingredients for a given recipe into cartridges every time you cook. 

The Nosh One has launched on Kickstarter for a cool $1,499. (Photo: Nosh One)

The cooking functionality is also limited. While the Nosh can portion, chop (roughly; no mincing or dicing), cook and stir food in its built-in pot using highly programmed recipes so you can walk away while the recipe completes, it can’t bake, roast, boil, sear or steam, which narrows what it can effectively make.

I saw it in a non-demo preview at CES earlier this year and spoke with reps about the Nosh One. CEO Mira Patel calls it “the first consumer robot that truly cooks for you,” though I was less certain of its potential and remain skeptical. Up close, and even with a deep explanation from the on-site reps, the pricey machine doesn’t seem worth the cost or the space it takes up on your counter, at least for most home cooks. 

The Nosh One is similar to a Thermomix. The Thermomix offers more cooking modes and functions, but it can’t automatically deliver precise ingredient amounts to the chamber like the Nosh. (Photo: Vorwerk)

If your dinner menu consists mostly of stews, soups, stir-fries and curries, the Nosh should be able to shoulder a good deal of cooking. Most other foods will have to be cooked the old-fashioned way.

It’s also big and bulky. Weighing 57 pounds with a 21-by-17-inch frame, it’ll command a good deal of counter space, much more than an Instant Pot or a slow cooker, both of which execute the same basic cooking tasks, albeit with far fewer automated functions.

How it works

The Nosh One precisely portions ingredients according to programmed recipes, then heats and stirs them to completion. (Photo: Nosh Robotics)

At the core of the device is NoshOS, a proprietary culinary AI trained on thousands of cooking techniques and cuisines from around the world. Multiple sensors monitor texture, moisture, aroma compounds and browning levels in real time, dynamically adjusting heat, timing and seasoning as a dish cooks. Built-in machine vision identifies produce, proteins and pantry items, allowing the system to suggest meals based on ingredients already on hand.

Ingredient cartridges, which are reusable and dishwasher-safe, store fresh items and dispense them with “millimeter-level precision.” After each meal, a closed-loop wash cycle automatically cleans the cooking chamber, utensils and internal surfaces.

Pricing and availability

The Nosh One is available to preorder on Kickstarter until March 25, starting at $1,499, with shipments expected in early summer 2026. Early backers receive a complimentary set of ingredient cartridges and access to the Nosh Founders Recipe Library, featuring dishes from award-winning chefs. According to the company, additional attachments, specialty cooking modules and premium recipe packs are planned for later in 2026.


As always, before contributing to any campaign, read the crowdfunding site’s policies — in this case, Kickstarter — to find out your rights (and refund policies, or the lack thereof) before and after a campaign ends.


Black Man Shot By Cops Dies After White Cop Suffering An ‘Anxiety Attack’ Snags Ambulance


from the no-service,-no-protection dept

Black Lives Matter. All Cops Are Bastards.

These are not temporary catchphrases. These are universal and forever.

And leave it to a cop to ensure we never forget either of these concepts. A foot pursuit that ended in the shooting of Connecticut resident Dyshan Best would otherwise just be a footnote in cop history if some cops hadn’t decided to be the bastards they wanted to see in the world and make it extremely clear they felt a Black life didn’t matter.

The internal investigation of the shooting of a Black man by Bridgeport PD officers delivered unsurprising results:


Dyshan Best, 39, was shot in the back last year as he fled from officers in Bridgeport, Connecticut. A report released Tuesday by the state’s inspector general found that the shooting was justified because Best had a gun in his hand and the officer pursuing him had reasons to fear for his own safety.

All the stuff we expect to see in these reports is here, from the assumption that a gun is a threat even if it’s not pointed at officers to the de rigueur “fear for my safety” justification for shooting a fleeing person.

What’s somewhat expected — but still somehow surprising — is what happened after the apparently justified shooting:

The first ambulance called to take Best to the hospital arrived at the scene at 6:02 p.m., about 14 minutes after the shooting. However, at the urging of other officers, that ambulance was used to take away a white police officer, Erin Perrotta, who had been involved in the foot chase, the report said.

Paramedics reported that Perrotta declined treatment in the ambulance.

“I am fine, I just needed to get out of here,” she said, according to the report. Another officer described Perrotta at the time as “visibly hysterical (crying and breathing rapidly) and had blood all over her uniform,” the report said.


That’s right. The ambulance sent to pick up the person police officers had just shot was instead handed over to Officer Erin Perrotta, who — as the Inspector General’s report notes — was enduring the relative hardship of a “mild anxiety attack.”

The second ambulance didn’t show up for another ten minutes. The person with actual bullet holes in him didn’t hit the ER until 14 minutes after Officer “Anxiety Attack” Perrotta arrived at the hospital. The officer who was never in any danger of dying got nearly a 15-minute head start on her medical treatment.

The person they’d shot didn’t make it.

Best died at 7:41 p.m. as he was undergoing treatment for the gunshot wound, which damaged his liver and right kidney.

Meanwhile, Officer Perrotta’s employer only seems interested in outlasting this news cycle:


A spokesperson for Bridgeport police, Shawnna White, declined to comment Wednesday when asked about Perrotta taking the first ambulance. She said in an email that the police department’s Internal Affairs Division would conduct its own investigation.

Sometimes the lack of direct response says more than a direct response would. Perrotta is apparently currently on administrative leave “due to an unrelated matter.” That either means Perrotta does bad stuff often enough she’s already given the department another reason to sideline her or that the department has found other stuff to add to this headline-generating “#mefirst” effort by the officer to grease the wheels for the inevitable firing.

Whatever happens now won’t budge the needle for US law enforcement agencies. But for the rest of us not standing on the inside of the Thin Blue Line, this incident says the quiet part loud: Black lives don’t matter… not when it’s a cop claiming they can’t breathe.

Filed Under: black lives matter, bridgeport police, connecticut, dyshan best, police oversight, police shooting



VoidStealer malware steals Chrome master key via debugger trick


An information stealer called VoidStealer uses a new approach to bypass Chrome’s Application-Bound Encryption (ABE) and extract the master key for decrypting sensitive data stored in the browser.

The novel method is stealthier and relies on hardware breakpoints to extract the v20_master_key, used for both encryption and decryption, directly from the browser’s memory, without requiring privilege escalation or code injection.

A report from Gen Digital, the parent company behind the Norton, Avast, AVG, and Avira brands, notes that this is the first case of an infostealer observed in the wild to use such a mechanism.

Google introduced ABE in Chrome 127, released in July 2024, as a new protection mechanism for cookies and other sensitive browser data. It ensures that the master key remains encrypted on disk and cannot be recovered through normal user-level access.


Decrypting the key requires the Google Chrome Elevation Service, which runs as SYSTEM, to validate the requesting process.

Overview of how ABE blocks out malware. (Source: Gen Digital)

However, this system has been bypassed by multiple infostealer malware families and has even been demonstrated in open-source tools. Although Google implemented fixes and improvements to block these bypasses, new malware versions reportedly continued to succeed using other methods.

“VoidStealer is the first infostealer observed in the wild adopting a novel debugger-based Application-Bound Encryption (ABE) bypass technique that leverages hardware breakpoints to extract the v20_master_key directly from browser memory,” says Vojtěch Krejsa, threat researcher at Gen Digital.

VoidStealer is a malware-as-a-service (MaaS) platform advertised on dark web forums since at least mid-December 2025. The malware introduced the new ABE bypass mechanism in version 2.0.

Cybercriminals advertising ABE bypass in VoidStealer version 2.0. (Source: Gen Digital)

Stealing the master key

VoidStealer extracts the master key by targeting a short window when Chrome’s v20_master_key is present in memory in plaintext during decryption operations.

Specifically, VoidStealer starts a suspended and hidden browser process, attaches it as a debugger, and waits for the target browser DLL (chrome.dll or msedge.dll) to load.


When loaded, it scans the DLL for a specific string and the LEA instruction that references it, using that instruction’s address as the hardware breakpoint target.

VoidStealer’s target string. (Source: Gen Digital)

Next, it sets that breakpoint across existing and newly created browser threads, waits for it to trigger during startup while the browser is decrypting protected data, then reads the register holding a pointer to the plaintext v20_master_key and extracts it with ‘ReadProcessMemory.’

Gen Digital explains that the ideal time for the malware to do this is during browser startup, when the application loads ABE-protected cookies early, forcing the decryption of the master key.
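The hardware breakpoints described here live in the x86 debug registers: the target address goes in DR0 through DR3, and the DR7 control register encodes which slot is armed, on what condition, and over what length. The sketch below is a simplified model of that documented bit layout, not VoidStealer's actual code; on Windows, a debugger would write these values into a thread via GetThreadContext and SetThreadContext.

```python
# Simplified model of the x86 debug-register mechanics behind hardware
# breakpoints. A debugger puts the target address in DR0-DR3 and composes
# the DR7 control register to arm a slot; this models only the bit layout.

EXECUTE, WRITE, READWRITE = 0b00, 0b01, 0b11  # R/W condition codes

def dr7_value(slot: int, condition: int, length_code: int = 0b00) -> int:
    """Compose a DR7 value enabling a local hardware breakpoint in the given slot.

    Execute breakpoints must use length code 0b00 (one byte).
    """
    assert 0 <= slot <= 3
    value = 1 << (slot * 2)                  # Ln: local-enable bit for this slot
    value |= condition << (16 + slot * 4)    # R/Wn: break on execute/write/access
    value |= length_code << (18 + slot * 4)  # LENn: size of the monitored region
    return value

# An execute breakpoint in slot 0 sets only the L0 bit; condition and length
# fields are both zero, so the whole register value is just 0x1.
```

Because execute breakpoints watch a single instruction address, a debugger (or, here, the malware acting as one) can trigger on exactly the LEA instruction it located, then read the key pointer out of the stopped thread's registers.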

The researchers explained that VoidStealer likely did not invent this technique but rather adopted it from the open-source project ‘ElevationKatz,’ part of the ChromeKatz cookie-dumping toolset that demonstrates weaknesses in Chrome.

Although there are some differences in the code, the implementation appears to be based on ElevationKatz, which has been available for more than a year.


BleepingComputer has contacted Google with a request for a comment on this bypass method being used by threat actors, but a reply was not available by publishing time.



Security Flaw Lets Tinkerer Accidentally Take Over An Army Of Robot Vacuums






The chilling story of a robot uprising has been told in countless books, movies, and other media through the years. But of all the machines that readers imagine could be the ones to rise up, robot vacuum cleaners, which are designed to clean your home and not rule it, might be at the bottom of the list. Don’t take a baseball bat to yours, however, as the real threat isn’t the machine itself but vulnerabilities in the systems that control it.

These flaws affected DJI Romo robot vacuums and were discovered in February 2026 by Sammy Azdoufal, an independent engineer working with AI tools. Azdoufal was trying to build a custom remote app using a PS5 controller when he accidentally stumbled upon a way to get floor plans, live feeds, and full remote capability, giving him access to and control of 6,700 vacuum cleaners around the world. Technically, this wasn’t a system breach: the way in existed through improper server-side access controls and data handling.

Fortunately, instead of leading the robot army to world domination, Azdoufal contacted DJI. According to comments from a company spokesperson to The Verge, DJI had already been working on a fix before the issue was made public. That fix came in the form of system updates released to address the problem. However, some security concerns remained at the time, including the ability to access video feeds without a security PIN.


DJI faces ongoing U.S. security concerns

The discovery of flaws in the DJI Romo robot vacuum cleaner system has apparently led to the company paying Sammy Azdoufal a $30,000 reward. According to The Verge, via Tom’s Hardware, Azdoufal received word about the reward through his email. However, DJI wasn’t clear about which specific discovery qualified for the payment. Additionally, DJI confirmed that a reward was indeed paid to a researcher but didn’t elaborate on Azdoufal or his findings.

DJI is a Chinese company specializing in drone manufacturing and didn’t begin selling vacuums until the fall of 2025. But before its robot floor cleaners made headlines, DJI faced pushback from the U.S. government dating back to 2017. At the time, the U.S. Army ordered service members to stop using the company’s drones due to cybersecurity concerns, and went further by ordering all related applications and storage media to be removed as well, citing potential vulnerabilities discovered during the Army’s internal research.

Advertisement

In the following years, DJI was added to a Pentagon watch list as U.S. officials continued to raise national security concerns about the company. The fear was that DJI’s drones posed a risk to sensitive government information and facilities. Those concerns eventually led to restrictions from the Federal Communications Commission (FCC), which banned the import of new DJI models and drone components. In response, DJI filed a lawsuit in February of 2026, arguing that the FCC’s action placed unfair limits on its U.S. operations.





Amazon is trying smartphones again after the Fire Phone flop with Alexa-first "Transformer"



According to people familiar with the project who spoke to Reuters, Transformer is being built inside Amazon’s devices and services division as a “personalization device” that could tie together the company’s consumer services and its revamped Alexa assistant.


NYT Strands hints and answers for Monday, March 23 (game #750)


Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: NYT Strands hints and answers for Sunday, March 22 (game #749).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.


