
Tech

Microsoft is threatening to sue Amazon and OpenAI over a $50 billion cloud hosting deal


According to an unnamed Microsoft insider quoted by the Financial Times, the company is prepared to sue OpenAI and Amazon if they move forward with the deal. “We know our contract, and we’ll sue them if they breach it,” the person reportedly told the publication, arguing that OpenAI cannot offer Frontier…

Amazon’s chip business could be worth $50 billion, Jassy says, and he hints it may sell them externally

In short: Andy Jassy’s annual letter to shareholders, published on 9 April 2026, reveals that Amazon’s custom chip business, covering Graviton, Trainium, and Nitro, generates more than $20 billion in annualised revenue, growing at triple-digit rates year-on-year. If its chips were sold on the open market, as Nvidia’s are, Jassy says, the business would be generating roughly $50 billion in annual revenue. He also signals that Amazon may begin selling those chips directly to third parties, and defends the company’s $200 billion capital expenditure plan for 2026 as grounded in committed customer demand rather than speculation.

“Not on a hunch”: the $200 billion bet

Jassy opened the letter’s financial argument with a direct rebuttal of the scepticism that has surrounded Amazon’s capital commitments. “We’re not investing approximately $200 billion in capex in 2026 on a hunch,” he wrote. “We’re not going to be conservative in how we play this. We’re investing to be the meaningful leader, and our future business, operating income, and free cash flow will be much larger because of it.” The context for that claim is a company that saw its free cash flow fall from $38 billion to $11 billion last year, driven by a $50.7 billion increase in capital spending, the bulk of it committed to AI infrastructure.

The defence rests on customer commitments already in place. Of the CapEx expected to be deployed in 2026, Jassy said a substantial portion already has customer backing, citing as one example OpenAI’s commitment of more than $100 billion to AWS. That commitment, which expanded an existing $38 billion seven-year partnership struck in November 2025, also includes OpenAI consuming approximately two gigawatts of Trainium capacity through AWS infrastructure. SoftBank, which holds a majority stake in OpenAI and has been financing its infrastructure build through mechanisms including a $40 billion bridge loan, is in effect underwriting part of the demand that Jassy is now pointing to as validation for his CapEx stance.

A $50 billion chip business hiding in plain sight

Amazon’s custom silicon programme spans three product lines. Graviton is a custom Arm-based CPU that Jassy says delivers more than 40% better price-performance than comparable x86 processors, the market that Intel and AMD dominate. It is now used by 98% of the top 1,000 EC2 customers, a figure that reflects a shift in the economics of cloud compute that has been underway for several years. Demand is sufficiently intense that two large AWS customers asked whether they could purchase all available Graviton capacity for 2026. Amazon declined.


Trainium is the AI training and inference accelerator that represents Amazon’s most direct response to Nvidia. Trainium2, which Jassy says offers roughly 30% better price-performance than comparable GPU alternatives, has largely sold out. Trainium3, which began shipping in early 2026 and offers a further 30 to 40% improvement in price-performance over Trainium2, is nearly fully subscribed, with Uber among the companies that have moved workloads onto it. Trainium4, still approximately 18 months from broad availability and featuring interoperability with Nvidia’s NVLink Fusion interconnect technology, has already been significantly reserved. Nitro, the custom network and security chip that underpins AWS’s virtualisation layer, completes the three-chip portfolio. Together, Jassy says the three lines produce more than $20 billion in annualised revenue, growing at triple-digit percentage rates year-on-year. “If we were a standalone chip company,” he writes, “our chips would be generating over $50 billion in annual revenue.” The business currently exists entirely within AWS; customers access Trainium and Graviton through EC2 instances rather than buying chips directly.


At scale, Jassy argues, Trainium will “save us tens of billions of capex dollars per year, and provide several hundred basis points of operating margin advantage versus relying on others’ chips for inference.” That claim is central to the investment thesis underpinning the $200 billion CapEx programme: custom silicon is not only a competitive differentiator but a structural cost advantage that compounds over time as the ratio of inference to training in AI workloads continues to rise.
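
The “several hundred basis points” claim can be sanity-checked with simple arithmetic. A minimal sketch: the 30% saving is the Trainium2 price-performance figure quoted above, but the 15% share of cloud cost of revenue attributed to inference silicon is a hypothetical illustration, not an Amazon figure.

```python
# Illustrative sketch of the operating-margin claim, not Amazon's internal math.
# Assumption: inference silicon accounts for some fraction of cloud cost of
# revenue (15% below is hypothetical); Trainium cuts that cost by ~30%.

def margin_gain_bps(inference_share_of_costs: float, cost_reduction: float) -> float:
    """Operating-margin advantage, in basis points, from cheaper inference silicon."""
    return inference_share_of_costs * cost_reduction * 10_000

gain = margin_gain_bps(0.15, 0.30)  # ~450 bps under these assumptions
```

At a 15% cost share, a 30% hardware saving works out to roughly 450 basis points, consistent with the order of magnitude Jassy describes.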

The Nvidia relationship, and the “new shift”

Jassy is careful in how he frames the competitive dynamic with Nvidia. “We have a strong partnership with NVIDIA, will always have customers who choose to run NVIDIA,” he writes, while also asserting that “virtually all AI thus far has been done on NVIDIA chips, but a new shift has started.” Customers, he says, “want better price-performance.” Nvidia, which reported revenue of $68.1 billion in the fourth quarter of 2025, a 73% year-on-year increase, entered 2026 from a position of market dominance that Amazon’s custom silicon is chipping away at from within the AWS customer base rather than in any broader merchant market. Trainium4’s incorporation of NVLink Fusion means Amazon is also building in a bridge rather than a wall: customers can combine Trainium accelerators with Nvidia GPUs within the same system, preserving optionality for enterprises that have invested heavily in Nvidia’s software stack.

The letter’s most consequential signal on chips, however, may be a single sentence about the future: “There’s so much demand for our chips that it’s quite possible we’ll sell racks of them to third parties in the future.” Amazon currently monetises its custom silicon exclusively through EC2 compute services. Selling chips directly would represent a structural shift in its competitive posture, placing it in the merchant silicon market alongside Nvidia and AMD, and allowing the economics of the chip business to be assessed independently of the cloud revenue it currently underpins.

Bedrock, Amazon Leo, and the broader picture

The shareholder letter situates the chip business within a wider AI infrastructure thesis. Amazon Bedrock, the managed service through which AWS customers access foundation models including Amazon’s own Nova family, processed more tokens in Q1 2026 than in all prior periods combined, with inference volumes “nearly doubling month-over-month” in March. AWS’s AI revenue run rate crossed $15 billion in Q1 2026, a figure Jassy contextualises by noting it represents growth roughly 260 times faster than AWS experienced at a comparable stage of its development.


Jassy also uses the letter to frame Amazon’s satellite internet service, Amazon Leo, as a competitive counterpart to SpaceX’s Starlink, having already secured contracts with Delta Air Lines, JetBlue, AT&T, Vodafone, and NASA. The satellite and chip disclosures share an underlying argument: that Amazon is building infrastructure at a scale and across categories that most observers have not fully priced in. The legal scrutiny that has begun to attach itself to Amazon’s AI products, including a proposed class action over the training data used for Nova Reel, represents one category of risk that the letter does not address. The year 2025 established AI infrastructure as the central capital allocation question for the technology industry, and Jassy’s letter is, in part, an argument that Amazon arrived at the right answer earlier and more decisively than the market has yet recognised.


OpenAI pauses Stargate UK over energy costs

Stargate UK will move forward when ‘right conditions’ enable ‘long-term infrastructure investment’, OpenAI said.

OpenAI is pausing its Stargate initiative in the UK, citing energy costs and regulatory burdens.

In a statement to major news publications, the company said that it is continuing to explore Stargate UK and will move forward when the “right conditions such as regulation and the cost of energy” enable it to make “long-term infrastructure investment”.

OpenAI first announced the project last September in collaboration with Nvidia and UK AI infrastructure provider Nscale. The initiative was seen as a step forward in cross-national technology partnership, with its announcement coinciding with US president Donald Trump’s visit to the UK.


For UK prime minister Keir Starmer, Stargate represented a major endorsement from Big Tech of the country’s push to become a leader in AI. The OpenAI project was meant to support the UK’s ‘AI Growth Zone’, expected to create 5,000 new jobs and bring in £30bn in private investment.

Other companies, including Microsoft and Nvidia, have also made multibillion-dollar investment commitments in the UK. A government spokesperson told Bloomberg that the UK’s AI sector has attracted more than £100bn since Starmer came to power in 2024.

Launched early last year, Stargate is a $500bn private sector investment project into OpenAI’s infrastructure. The project’s initial equity funders include OpenAI, Oracle, MGX and SoftBank, with Microsoft, Nvidia and Arm among the key technology partners.

A year since launching, Stargate’s Texas facility is already training AI systems, while a number of projects are underway in the US, as well as in the UAE and Norway. The company also announced a tie-up with India’s Tata Consultancy Services as part of Stargate.


OpenAI has been shelving consumer projects to refocus on enterprise tools as it prepares for an initial public offering later this year. Late last month, it put plans for an erotic ChatGPT on hold “indefinitely”, just days after it shut down its controversial AI video generator Sora.

It recently announced a $122bn funding round, placing the AI giant at a post-money valuation of $852bn.


Flush with cash: Washington startup lands up to $500M to deploy facilities treating sewage, dairy waste

Dairy cows at the Puyallup Fair, now called the Washington State Fair. (GeekWire Photo / Kurt Schlosser)

Wastewater treatment startup Sedron Technologies — a Washington company that once served Bill Gates a glass of water purified from sewage — announced it is being acquired by Ara Partners. The global private equity firm is investing up to $500 million in Sedron to fund the deployment of its sewage and manure treatment technologies, an investment that gives it a controlling stake in the business.

“The Ara investment is largely designed to provide us with the equity on our own balance sheet to scale up production of additional projects and plants across the country,” said Geoff Trukenbrod, interim CEO of Sedron.

The startup is deploying facilities that efficiently and sustainably treat sewage biosolids and dairy waste. Sedron’s business model is to finance, design, build, own, operate and maintain the sites, which cost about $100 million to $200 million to build.

The company generates revenue from the municipalities and farms that use its services, as well as from the sale of organic fertilizer and clean energy produced at the sites.

“Imagine having a bakery, and you get paid to get flour, and you get paid for your cookies,” said Stanley Janicki, Sedron’s chief commercial officer. “It’s a phenomenal business model, not that biosolids are cookies.”

Sedron’s dairy waste management facility in Fair Oaks, Ind., which handles manure from 20,000 cows. (Sedron Photo)

Sedron launched in 2014 as a spinoff from Janicki Industries, a longtime aerospace engineering and manufacturing company. Both are based in Sedro Woolley, a city north of Seattle in a largely agricultural stretch of Western Washington.

In 2011, Janicki received funding from what is now the Gates Foundation to develop a wastewater purification system, leading to Sedron’s launch and a video that went viral showing Bill Gates drinking a glass of water produced from sewage. The foundation supported the technology as a means for treating waste in developing countries where untreated sewage could otherwise spread pathogens.

The company is breaking ground this month on a regional waste treatment facility that will serve multiple municipalities that are home to 2 million people in South Florida. Operations are expected to begin in 2028.

Sedron’s system takes municipal biosolids — the residual product from a wastewater treatment plant — and dries the material in an energy-efficient thermal dryer. The biosolids are about 85% water, most of which is evaporated and disposed of, and the remaining material is fed into a biomass boiler to produce clean electricity. The energy generated helps run the dryer, and the excess electricity is sold. Another benefit is that the process destroys the PFAS “forever chemicals” contaminating wastewater.
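
The scale of that drying step can be sketched with a simple mass-and-energy balance. The 85% moisture figure comes from the article; the latent heat of vaporisation is the standard textbook value, and dryer efficiency and heat recovery are ignored, so this is a rough upper bound on evaporation duty, not Sedron’s actual plant design.

```python
# Rough mass-and-energy balance for drying municipal biosolids.
# 85% moisture is the article's figure; 2.26 MJ/kg is the textbook
# latent heat of vaporisation of water; heat recovery is ignored.

LATENT_HEAT_MJ_PER_KG = 2.26  # energy to evaporate 1 kg of water

def drying_balance(wet_tonnes: float, moisture_fraction: float = 0.85):
    """Return (dry solids t, water evaporated t, ideal evaporation duty GJ)."""
    water_t = wet_tonnes * moisture_fraction        # evaporated and disposed of
    solids_t = wet_tonnes - water_t                 # fed to the biomass boiler
    energy_gj = water_t * 1000 * LATENT_HEAT_MJ_PER_KG / 1000
    return solids_t, water_t, energy_gj

solids, water, energy = drying_balance(100)  # 100 tonnes of wet biosolids
# -> 15 t of dry solids, 85 t of water, ~192 GJ of evaporation duty
```

The arithmetic makes clear why an efficient dryer, fuelled in part by the dried solids themselves, is central to the economics.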

The startup’s second line of business is managing manure from livestock operations — which is one of the biggest costs for a dairy farmer. Sedron takes the waste, removes the water for use in irrigation, and produces two high-value organic fertilizers: a solid material and a concentrated liquid nitrogen fertilizer. The fertilizers are sold nationwide for use on crops such as apples, berries and spinach.


Sedron says its treatment process is more affordable than conventional handling and replaces the manure lagoons used to store waste until it can be applied to fields as a liquid. The lagoons produce planet-warming methane and pose environmental threats if they leak nutrients that can stoke algal blooms in nearby waterways or contaminate drinking water.

The company has deployed its manure technology at two dairy farms in Indiana, including a 20,000-cow dairy, and expects to start operations at a Wisconsin farm this summer.

“Our focus is on positioning Sedron as the leader in circular waste management — converting waste into carbon negative commodities faster, more cost effectively, and with greater energy efficiency than any other solution available,” said Cory Steffek, a partner at Ara Partners, in a statement.

Sedron previously raised approximately $100 million in corporate debt and equity and about $200 million in project financing, some of which was institutional. All of the legacy shareholders rolled their equity forward, Janicki said.


The 275-employee company has offices in Washington state and Chicago, and operational facilities in Indiana, Wisconsin and Florida.

The startup is focused on U.S. deployments of its facilities, aiming to launch at least two new sites each year for the next five years, then potentially scaling up from there. Janicki said they’d still like to operate in developing countries to address that initial use case.

Sedron’s leadership emphasized the importance of delivering a service that resonates with investors and business partners, doesn’t require government support to succeed and also benefits the planet.

“As the world today is retreating somewhat from climate efforts,” Janicki said, “it’s exciting to be in a business that is positioned for exceptional growth and solving environmental problems while creating valuable products.”


Meet the ultra-compact NucBox K17 Mini PC delivering triple-digit AI performance and blazing-fast memory in a pocket-sized frame

  • NucBox K17 combines CPU, GPU, and NPU for full AI performance
  • The Intel Core Ultra 5 226V processor delivers efficient, high-speed computing
  • Integrated Arc 130V GPU offers 53 TOPS AI throughput using INT8 precision

GMKTec has introduced the NucBox K17 Mini PC with a focus on compact AI performance, combining a high-efficiency processor with integrated graphics and a dedicated neural unit.

The NucBox K17 is built around the Intel Core Ultra 5 226V processor, which features 8 cores and 8 threads manufactured on the TSMC N3B process.
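
For context on what an INT8 TOPS figure like the 53 TOPS above means: each multiply-accumulate (MAC) counts as two operations, so peak TOPS is simply MACs per cycle, times two, times clock rate. A quick sketch; the MAC count and clock below are hypothetical round numbers chosen to land on 53 TOPS, not Intel’s published Arc 130V configuration.

```python
# Peak INT8 throughput arithmetic: one MAC = 2 ops (multiply + add).
# The MAC count and clock below are hypothetical, not Arc 130V specs.

def int8_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak tera-operations per second for an INT8 accelerator."""
    return macs_per_cycle * 2 * clock_ghz * 1e9 / 1e12

peak = int8_tops(13_250, 2.0)  # 53.0 TOPS with these illustrative numbers
```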


Pixelmator Pro & Logic get big upgrades, rest of iWork gets minor ones

Apple rolled out updates across its creative and productivity apps. Logic Pro and Pixelmator Pro gained new features, while the rest of the iWork lineup received bug fixes and stability improvements.

Apple Creator Studio apps

Apple’s Creator Studio bundle includes pro tools like Final Cut Pro, Logic Pro, Motion, Compressor, and Pixelmator Pro, along with productivity apps like Pages, Keynote, and Numbers. It’s a unified platform for creating and publishing across different workflows.
Apple delivered updates through the App Store, with most apps receiving maintenance-focused changes for reliability and platform stability.

Google Cloud deepens AI infrastructure partnership with Intel across Xeon and custom chips

In short: Google Cloud and Intel have announced a deepened multi-year AI infrastructure partnership covering both CPU deployment and custom chip co-development. Google Cloud will continue adopting Intel’s Xeon 6 processors across its global infrastructure for C4 and N4 instances, while the two companies are expanding their joint development of custom Infrastructure Processing Units designed to offload networking, storage, and security from host CPUs in hyperscale AI environments. The announcement arrives as Intel’s stock surged approximately 33% on the week and two days after the company signed on as the foundry partner for Tesla’s Terafab megaproject.

“Balanced systems”: the case Intel and Google are making together

The central argument of the partnership, as framed by both companies, is that GPU accelerators alone are not sufficient to handle the demands of modern AI infrastructure. In a statement accompanying the announcement, Lip-Bu Tan, Intel’s chief executive, said: “AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators — it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.” The language is deliberate. Intel has spent much of the past two years repositioning from the general-purpose computing market it once dominated toward a more specific thesis: that the CPU and custom infrastructure silicon have a structural role in AI deployments that GPU-centric narratives have consistently underestimated.

Amin Vahdat, Google’s senior vice president and chief technologist for AI infrastructure, made the case from the demand side. “CPUs and infrastructure acceleration remain a cornerstone of AI systems — from training orchestration to inference and deployment,” he said. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.” The framing of the partnership as a multi-generational CPU roadmap commitment, rather than a one-cycle procurement agreement, is significant: it implies Google has made decisions about its infrastructure architecture several years out on the basis of Intel’s product trajectory, and that trajectory includes both the Xeon line and the custom IPU co-development effort.

Xeon 6 in Google Cloud

The CPU component of the partnership centres on Intel’s Xeon 6 processor family, which Google Cloud has deployed across its workload-optimised C4 and N4 instance types. Google says the C4 instances deliver more than 2.0 times the total cost of ownership benefit compared with predecessor configurations, a figure that captures the combination of performance uplift and power efficiency that Intel has positioned as Xeon 6’s core competitive claim. The agreement extends beyond the current generation: Google has committed to multi-generational alignment with Intel’s Xeon roadmap, meaning its infrastructure planning incorporates Intel’s future CPU releases as a known variable rather than a contingent one. Google has simultaneously been deepening its custom silicon commitments on the accelerator side, supplying Anthropic with approximately one gigawatt of TPU capacity through Broadcom in a deal that anchors Anthropic’s AI infrastructure through 2027 and beyond — a parallel track that reflects how Google is building out its infrastructure portfolio across both standard and custom silicon simultaneously.


The CPU architecture context matters for understanding why this commitment is being made public now. As AI workloads shift from the training phase, which is GPU-intensive and relatively concentrated among a small number of hyperscalers, toward inference at scale, which is distributed, latency-sensitive, and runs continuously across large server fleets, the cost structure of AI infrastructure changes. Inference places sustained demands on CPU resources for orchestration, data pre-processing, and system management that training pipelines do not. Google’s bet on Xeon 6 for its C4 and N4 instances is, in part, a bet that inference economics will make CPU efficiency a first-order concern in the years ahead.

The custom IPU programme

The more strategically significant element of the partnership is the expanded co-development of Infrastructure Processing Units. IPUs are custom ASIC-based programmable accelerators designed to take over the networking, storage, and security functions that would otherwise run on host CPUs, freeing those CPUs to focus entirely on application and AI workload processing. In hyperscale environments, where these infrastructure tasks consume a substantial and growing fraction of available compute, offloading them to a dedicated accelerator can significantly improve utilisation rates, energy efficiency, and the consistency of workload performance. Intel and Google have been collaborating on IPU development, and the announcement signals that this work is expanding in scope rather than narrowing. The specific technical details of the expanded programme — die design, process node, performance targets, and deployment timeline — have not been disclosed publicly.
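
The utilisation argument can be made concrete with a back-of-the-envelope model: if a fraction of host-CPU cycles currently goes to networking, storage, and security, moving that work to an IPU multiplies the cycles available to applications by 1 / (1 − overhead). The 30% overhead figure below is a hypothetical illustration, not a number Intel or Google has disclosed.

```python
# Back-of-the-envelope model of IPU offload, under a hypothetical
# infrastructure-overhead fraction (30% is illustrative, not disclosed).

def capacity_gain(infra_overhead: float) -> float:
    """Relative increase in application-usable CPU capacity after offload."""
    if not 0 <= infra_overhead < 1:
        raise ValueError("overhead must be in [0, 1)")
    return 1 / (1 - infra_overhead)

gain = capacity_gain(0.30)  # ~1.43x usable capacity per host
```

The model also shows why the value of offload grows with overhead: at 50% overhead the same hardware would double usable capacity.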


Nvidia, whose fourth-quarter 2025 revenue reached $68.1 billion on 73% year-on-year growth and which used its GTC 2026 conference in March to position its full-stack platform as the default environment for AI infrastructure, is the implicit competitive reference point for both components of the Intel-Google partnership. Intel is not attempting to displace Nvidia’s GPU accelerators in training workloads; it is arguing that the system around those accelerators — the CPUs managing orchestration, the IPUs managing network and storage overhead, and the interconnects tying everything together — is where efficiency gains are increasingly available. That argument has a natural ally in Google, which has both the infrastructure scale to validate it empirically and commercial incentives to diversify away from a single-vendor accelerator dependency.

Intel’s strategic moment

The Google partnership arrives at a moment when Intel’s industrial position is changing rapidly. Two days before the Google announcement, Intel signed on as the primary foundry partner for Terafab, the $25 billion joint venture between Tesla, SpaceX, and xAI targeting one terawatt of AI compute per year, committing its 18A process node — the company’s most advanced logic manufacturing technology — to the project. The two announcements taken together suggest Intel is pursuing a two-track strategy: deepening its hyperscale cloud partnerships for CPU and IPU deployment while simultaneously building out its foundry business for the custom AI silicon market that Nvidia, AMD, and the hyperscalers’ in-house chip programmes have driven into existence. The stock market responded to the week’s announcements with a roughly 33% gain in Intel’s share price, the sharpest weekly move the company has recorded in years.

Whether the strategic repositioning is durable depends on execution. Intel’s 18A process node is the same technology that underpins its foundry credibility with customers like Tesla, and its delay history has been a persistent source of investor concern. The Xeon 6 deployment in Google Cloud and the IPU co-development programme are both contingent on Intel shipping what its roadmap promises on the timelines Vahdat’s statement implies Google has factored into its own planning. The AI infrastructure market that Intel is trying to enter has become one of the most heavily capitalised segments in technology, with deals such as Meta’s $27 billion agreement with Nebius in March 2026 illustrating the scale of commitments being made across the industry. The year 2025 shifted the centre of gravity in AI from model development to infrastructure deployment, establishing capital expenditure scale and infrastructure access as the primary competitive variables — and Intel, for the first time in several years, is making a credible case that it belongs in that competition on multiple fronts simultaneously.


I don’t see a sane reason to pick another budget phone over the TCL NXTPAPER 70 Pro

The era of truly good budget phones is over, and you can blame AI for that. With chip costs rising, even flagship phones are feeling the pinch. That’s why, when TCL finally brought the NXTPAPER 70 Pro to the US, it came as a big surprise to me. The phone costs just $199, nearly half the price you’d pay in other markets.

Yes, the phone is exclusive to T-Mobile, but at $199, the NXTPAPER 70 Pro feels like something else entirely: a 6.9-inch 120Hz display, IP68 water resistance, a 5,200mAh battery, a 50MP camera, and TCL’s NXTPAPER 4.0 display technology, which is genuinely unlike anything else at this price.

So I went looking for alternatives at a similar price and found three worth comparing: the Samsung Galaxy A17 5G, the Motorola Moto G Power 2026, and the Google Pixel 10a. None of them beats the TCL on price, performance, or features, and I concluded there’s no reason to pick any of them over the NXTPAPER 70 Pro right now. Let me show you what I mean.

But first, a quick specs comparison

TCL NXTPAPER 70 Pro
  • Display: 6.9-inch IPS LCD, 120Hz, 1080 x 2340 pixels
  • Processor: MediaTek Dimensity 7300 (4nm)
  • Cameras: 50MP main (f/1.9, 24mm); 8MP ultrawide (120˚); 32MP selfie (f/2.0, 28mm)
  • Battery: 5,200mAh
  • Price: $199 (T-Mobile)

Samsung Galaxy A17 5G
  • Display: 6.7-inch Super AMOLED, 90Hz, 1080 x 2340 pixels
  • Processor: Samsung Exynos 1330 (5nm)
  • Cameras: 50MP main (f/1.8, 24mm); 5MP ultrawide; 2MP macro; 13MP selfie (f/2.0)
  • Battery: 5,000mAh
  • Price: $189 (T-Mobile), $199 (unlocked)

Moto G Power 2026
  • Display: 6.8-inch IPS LCD, 120Hz, 1080 x 2388 pixels
  • Processor: MediaTek Dimensity 6300 (6nm)
  • Cameras: 50MP main (f/1.8); 8MP ultrawide (119˚); 32MP selfie (f/2.2)
  • Battery: 5,200mAh
  • Price: $189 (T-Mobile), $299 (unlocked)

Google Pixel 10a
  • Display: 6.3-inch P-OLED, 120Hz, 1080 x 2424 pixels
  • Processor: Google Tensor G4 (4nm)
  • Cameras: 48MP main (f/1.7, 25mm); 13MP ultrawide (f/2.2, 120˚); 13MP selfie (f/2.2, 20mm)
  • Battery: 5,100mAh
  • Price: $499 (unlocked)

Is there any competition at this price?

The Samsung Galaxy A17 5G is the obvious first comparison. It is Samsung’s best-selling budget phone, and for good reason. You get a solid 6.7-inch Super AMOLED display, a triple camera system, and an impressive six years of software updates. 


It is a reliable, no-frills phone that does the basics well. But it runs on the Exynos 1330, a chip that has been repeatedly called out for poor performance. Compared with the MediaTek Dimensity 7300 powering the TCL NXTPAPER 70 Pro, the Exynos 1330 falls behind on CPU, GPU, and battery efficiency alike.

It also has an IP54 rating, which means it is splash-resistant but not submersible. The NXTPAPER 70 Pro, by comparison, has a better chip, a better display, IP68 water resistance, and a more interesting feature set. The A17 sells for around $175 to $199. Simply put:

Same price. No contest.

The Moto G Power 2026 offers a similar 6.8-inch LCD display and the same 5,200mAh battery, but the MediaTek Dimensity 6300 inside is a step down from the NXTPAPER 70 Pro’s Dimensity 7300, which uses a newer 4nm fabrication process (versus the Dimensity 6300’s 6nm) and delivers up to 67% better performance.
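
That chip gap can be folded into a single price-performance number. A quick sketch using the article’s own figures: the up-to-67% performance advantage, and the two phones at their T-Mobile contract prices of $199 and $189.

```python
# Relative price-performance, normalising the Moto G Power 2026 to 1.0.
# The 1.67x performance ratio and both prices come from the comparison above.

def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

tcl = perf_per_dollar(1.67, 199)   # NXTPAPER 70 Pro, T-Mobile price
moto = perf_per_dollar(1.00, 189)  # Moto G Power 2026, T-Mobile price
advantage = tcl / moto             # ~1.59x the performance per dollar
```

Even at a $10 higher sticker price the TCL comes out ahead, and against the Moto’s $299 unlocked price the gap widens to roughly 2.5x.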

A few factors work in favor of the Moto G Power 2026: tougher Gorilla Glass 7i protection and an IP68/IP69 dust and water resistance rating. But that’s about it; on every other front, the NXTPAPER 70 Pro offers equal or better features. The Moto G Power 2026 costs $189 on a similar T-Mobile contract and $299 on Amazon without one, so there’s no price advantage either.

As you can see, the TCL NXTPAPER 70 Pro beats the Samsung Galaxy A17 and Moto G Power 2026 on most fronts at a similar price. 

What about the Pixel 10a?

This is where it gets interesting. At $499, the Google Pixel 10a is not a phone I should consider for this comparison. But it is a genuinely great phone, a gold standard for mid-range Android, and I am not going to pretend otherwise.

It features a 6.3-inch OLED display, a 48MP camera, seven years of updates, a more powerful Tensor G4 chipset, and Google’s AI features baked deep into the software. 

But the Pixel 10a has a smaller battery and no expandable storage. And at $199, the TCL NXTPAPER 70 Pro leaves a $300 gap that does a lot of heavy lifting. Besides, we haven’t even touched on the TCL NXTPAPER 70 Pro’s standout feature: the NXTPAPER 4.0 display.

That display is what makes this phone genuinely special. TCL’s NXTPAPER 4.0 is not a software night mode or a cheap filter. It uses hardware-level changes, including circularly polarized light, DC dimming that eliminates screen flicker, and a filter that reduces harmful blue light.

The phone is certified by TÜV and SGS, independent bodies that test these things rather than take a company’s word for it. A dedicated NXTPAPER key on the side instantly switches between full-color mode, Ink Paper Mode, and Max Ink Mode, letting you use it as a normal phone or as an e-reader. In Max Ink Mode, the battery lasts up to seven days.

None of the other phones on this list offers these display innovations. That feature alone makes the NXTPAPER 70 Pro worth buying. But even if you disregard it, the NXTPAPER 70 Pro still matches or beats every other phone in its segment at a comparable price.

If you spend long hours staring at your phone for work, school, or reading, no phone at this price comes close to what TCL is offering. At $199, the TCL NXTPAPER 70 Pro is not a budget phone that asks you to make compromises. It is a genuinely good phone with one feature that no one else has figured out yet. That makes it a very easy recommendation.


OpenAI faces investigation over ChatGPT concerns


Just when it seemed like OpenAI was gearing up for its next big leap, possibly even an IPO, it’s now facing some serious scrutiny. And this time, it’s not just critics online. It’s a full-blown government investigation. And yeah, things are getting a little intense.

OpenAI is now under investigation, and it’s not a small one

Florida Attorney General James Uthmeier has launched a probe into OpenAI and its chatbot, ChatGPT. The concerns being raised go beyond the usual AI debates, as this one touches on national security, data handling, and real-world harm.

Today, we launched an investigation into OpenAI and ChatGPT.

AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.

Wrongdoers must be held accountable. pic.twitter.com/vRVCqIYKnB


— Attorney General James Uthmeier (@AGJamesUthmeier) April 9, 2026

As reported by Reuters, the investigation is looking into whether OpenAI’s technology or data could potentially fall into the wrong hands, including foreign adversaries. There are also claims linking ChatGPT to harmful use cases, ranging from misuse in criminal activity to concerns around self-harm and unsafe content.

Subpoenas are reportedly on the way, which means this isn’t just talk but a formal escalation. And all of this is happening right as OpenAI is being seen as a potential IPO candidate, with valuations being thrown around in the trillion-dollar range. That timing could complicate things further, as increased regulatory scrutiny may impact investor confidence and how aggressively the company can move forward with its public listing plans.

This could get messy, fast

Let’s be real, AI companies have been skating on thin ice when it comes to regulation. Rapid growth, massive user bases, and real-world impact were always going to attract attention eventually. But the timing here is what makes it spicy. OpenAI is scaling aggressively, pushing products like ChatGPT deeper into everyday life, and potentially preparing for a public offering. Getting hit with a government probe right now is not ideal.

At the same time, this might just be the beginning. Because once governments start asking questions about how AI is being used, and misused, it’s not just about one company anymore. It’s about the entire industry getting put under the microscope.


GeekWire Awards: AI Innovation of the Year finalists transform HR, retail, biotech and more


The 2026 GeekWire Awards AI Innovation of the Year finalists, clockwise from top left: Avante CEO Rohan D’Souza; ConverzAI CEO Ashwarya Poddar; Envive AI CEO Aniket Deosthali; Synthesize Bio co-founders Jeff Leek (left) and Robert Bradley; and Spangle AI co-founders Maju Kuruvilla (left) and Fei Wang.

The finalists for AI Innovation of the Year at the 2026 GeekWire Awards represent the cutting edge of the generative era, deploying sophisticated agents and foundation models to transform everything from healthcare benefits and recruitment to e-commerce personalization and life-saving drug discovery.

The finalists are: Avante, ConverzAI, Envive AI, Spangle, and Synthesize Bio.

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

The 2025 GeekWire Award winner for AI Innovation of the Year was Overland AI, the Seattle-based startup that develops autonomous driving technology for rugged terrain for military applications and elsewhere.

Continue reading for information on the 2026 AI Innovation of the Year finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner: Cast your ballot here or in the embedded form at the bottom. Voting runs through April 16.

Avante is an AI-native benefits intelligence platform designed to help companies decrease HR administration workload and reduce overall benefits program costs. It relies on two AI agents working together: Ava gives HR teams strategic intelligence from employee questions, claims data, and vendor contracts. Carly gives employees personalized guidance.


The startup, which raised a $10 million seed round, is led by CEO Rohan D’Souza, former chief product officer for health care automation company Olive AI; and epidemiologist Carly Eckert, MD, Ph.D., Avante’s head of innovation and impact, who was executive vice president at Olive AI. Kabir Shahani, a serial entrepreneur who was CEO of Seattle-based marketing tech startup Amperity, is Avante’s executive chairman.

ConverzAI helps automate recruiting processes with its virtual recruiters that help companies with staffing needs. The software can parse through applications, conduct interviews, and onboard new employees.

The 6-year-old startup, which raised $16 million in Series A funding, is led by former Microsoft product manager Ashwarya Poddar.

Envive AI builds AI agents for online retailers to help boost conversion, retention, and discoverability. Brands such as Spanx, Coterie, Supergoop! and more use Envive’s AI-powered software to engage with customers as they shop on websites and apps. Envive also helps companies improve their visibility in generative AI search results.


The company, which raised $15 million in Series A funding, is led by CEO Aniket Deosthali, who previously helped Walmart build its generative AI-powered shopping assistant. Other co-founders include: CTO Sameer Singh, chief scientist Iz Beltagy, and chief architect Matthew Peters.

Spangle AI helps online retailers build customized shopping experiences in real-time by generating a tailored storefront for individual customers based on how traffic flows in from social platforms, AI search tools, and even autonomous shopping agents. Spangle’s system focuses on intent and context — whether a shopper is browsing, comparison-shopping, or ready to buy — and adapts product selection, layout, and content accordingly.

The startup, which raised $15 million in a Series A round, is led by CEO Maju Kuruvilla, a former vice president at Amazon, where he worked on Prime logistics and fulfillment. Spangle CTO Fei Wang was CTO at Saks OFF 5TH, a subsidiary of Saks Fifth Avenue. Wang also spent nearly 12 years at Amazon as an engineer.

Synthesize Bio aims to make new drug discovery faster and cheaper by using AI to simulate the results from hypothetical lab experiments. Its generative genomics foundation model (GEM-1) predicts gene expression, providing insights into how a novel drug is expected to impact cell behavior.


The startup, which raised $10 million last fall, was co-founded by leaders from Fred Hutchinson Cancer Center — Fred Hutch Chief Data Officer Jeff Leek and Robert Bradley, director of the Translational Data Science Integrated Research Center at the organization.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.



How AI is transforming hospitality operations while preserving human experience


Hospitality has long been defined by human interaction, but the systems that support those interactions have undergone continuous change. Arran Campolucci-Bordi, owner of Casa Italia, established 50 years ago in Liverpool, UK, frames this evolution through lived experience, tracing a path from handwritten reservation books to digital booking systems and now toward AI-driven operations. In his view, each transition reflects a broader shift in how restaurants manage time, communication, and customer expectations.

He points out that earlier generations relied entirely on manual processes. Reservations were written down, availability was checked by hand, and customer inquiries were handled individually. As digital tools emerged, many of these processes moved online, creating greater structure and consistency. According to Arran, the current phase introduces a new layer, where systems are capable of responding dynamically to customer needs without requiring human input. 

From his perspective, AI within hospitality is best understood as an operational support system rather than a replacement for people. He explains that Ayra, the AI platform he has developed, functions much like a trained staff member in specific contexts, particularly when it comes to handling information. Once it has been given details such as menus, booking systems, and policies, it can respond to customer inquiries conversationally: checking availability, managing reservations, and answering common questions in real time. In practice, he suggests, this lets businesses handle external interactions consistently while freeing staff to focus on in-person service, where their attention matters most.

Ayra (Credit: Ayra)

That operational shift is increasingly visible across different industries. According to a report, 58% of employees surveyed say they are already saving time at work through AI tools, with users reporting an average of 52 minutes saved per day, or nearly five hours per week. In a sector like hospitality, where a large share of time is spent responding to enquiries and managing bookings, these time savings can accumulate quickly and shift where teams focus their efforts.
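The report's weekly figure can be sanity-checked with simple arithmetic; a minimal sketch, assuming a standard five-day work week (the report's basis is not stated):

```python
# Back-of-envelope check of the reported AI time savings,
# assuming a five-day work week (an assumption; the report does not say).
MINUTES_SAVED_PER_DAY = 52
WORK_DAYS_PER_WEEK = 5

weekly_minutes = MINUTES_SAVED_PER_DAY * WORK_DAYS_PER_WEEK  # 260 minutes
weekly_hours = weekly_minutes / 60                           # about 4.3 hours

print(f"{weekly_minutes} min/week ≈ {weekly_hours:.1f} hours")
```

Fifty-two minutes a day over five days works out to roughly 4.3 hours, which the report rounds to "nearly five hours per week."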

Arran emphasizes that this type of system is designed to operate alongside existing teams. He notes that many roles within hospitality involve repetitive administrative tasks that take time away from direct customer engagement. 



Building on this, he explains that redistributing that time can reshape how service is delivered inside the restaurant itself. “By shifting those tasks to an AI-driven interface, businesses can allow staff to focus on delivering service within the physical environment of the restaurant,” he says. “It is a way of aligning people with the aspects of their roles that require attention, awareness, and interpersonal interaction.”

The practical implications of this shift are closely tied to how restaurants allocate their time and resources. According to Arran, a significant portion of operational inefficiency comes from fragmented communication, particularly when customers reach out with similar questions or booking requests. “Each individual interaction may be brief, but collectively they represent a substantial time commitment,” he notes. Ayra, he explains, can handle these interactions 24/7, in turn increasing time spent with customers and capturing potential missed opportunities.  


This perspective also reflects broader changes in customer behavior. “As digital communication has become more immediate, expectations around response times have shifted accordingly,” Arran notes. “Customers increasingly expect quick and accurate answers, whether they are making a reservation or asking about menu options. Systems that can respond instantly help meet those expectations while maintaining clarity and consistency in communication.”

A common misconception is that hospitality is slow to adopt new technology because of the human-centric nature of the business. According to Arran, the immediate and drastic consequences that can follow from adopting supposedly “robust” technology stem instead from the industry’s failure to adequately vet what it adopts.

He also highlights the importance of simplicity in adoption. From his experience, one of the main barriers for restaurant owners is not necessarily resistance to technology itself, but uncertainty about how it works in practice. As a result, the platform he has developed is designed to be robust, accurate, and straightforward to implement, only requiring businesses to provide a small amount of information to train their AI agent. Once that information is in place, the system can begin operating autonomously.

This approach reflects a broader shift in how technology is being integrated into traditional industries. Rather than requiring businesses to fundamentally change their operations, tools are being developed to fit within established structures. Arran suggests that this compatibility is essential for long-term adoption, particularly in sectors where consistency is key.


Looking ahead, he sees AI as part of an ongoing progression rather than a final destination. The transition from manual processes to digital systems has already reshaped hospitality operations, and the introduction of AI represents another stage in that evolution. Each phase, he notes, has introduced new efficiencies while maintaining the core objective of serving customers effectively.

“People come into a restaurant for the experience, and that will always be the case,” Arran says. “If technology can take care of everything around that, it allows the staff to focus on what they do best, giving customers the best possible experience.”
