
Meet the ultra-compact NucBox K17 Mini PC delivering triple-digit AI performance and blazing-fast memory in a pocket-sized frame


  • NucBox K17 combines CPU, GPU, and NPU for full AI performance
  • The Intel Core Ultra 5 226V processor delivers efficient, high-speed computing
  • Integrated Arc 130V GPU offers 53 TOPS AI throughput using INT8 precision

GMKTec has introduced the NucBox K17 Mini PC with a focus on compact AI performance, combining a high-efficiency processor with integrated graphics and a dedicated neural unit.

The NucBox K17 is built around the Intel Core Ultra 5 226V processor, which features 8 cores and 8 threads manufactured on the TSMC N3B process.



Casino Payment Methods in 2026: Which Is Fastest, Cheapest, and Safest?


Your payment method does more than move money. It determines how fast you can play after depositing, how quickly you can access your winnings, whether you qualify for a welcome bonus, and how much the transaction costs you. Players who choose without thinking often run into blocked methods, delayed withdrawals, or quietly voided bonuses. Choosing the right method from the start eliminates all of that.

This guide covers the nine most widely available payment methods at online casinos in 2026 — debit cards, e-wallets, open banking, prepaid cards, mobile wallets, and cryptocurrency — with honest data on speed, fees, limitations, and who each method actually suits. For players specifically looking for a guide on Klarna Pay Now (the successor to Sofort), see our dedicated Klarna casino payment guide.

⚡ Quick Take: Best Method by Use Case

  • Fastest deposit + withdrawal overall: Trustly (Open Banking)
  • Best for familiarity and broad availability: Visa/Mastercard Debit
  • Best e-wallet for withdrawal speed: Skrill or PayPal
  • Best for privacy-conscious players: Paysafecard (deposits only)
  • Fastest withdrawals overall (where available): Cryptocurrency
  • Best mobile wallet option: Apple Pay
  • Not recommended for UK players: Any credit card — banned for gambling under UKGC rules

Casino Payment Methods Compared

Method | Deposit Speed | Withdrawal Speed | Fees | UK Available
Visa / Mastercard Debit | Instant | 1–5 business days | Free | ✓ Yes
PayPal | Instant | 24 hours | Free at most casinos | ✓ Yes
Trustly (Open Banking) | Under 6 seconds | Same day / instant | Free | ✓ Yes
Skrill | Instant | A few hours | Small fees may apply | ✓ Yes
Neteller | Instant | 24 hours | Free at most casinos | ✓ Yes
Apple Pay / Google Pay | Instant | 24 hours | Free | ✓ Yes
Klarna Pay Now | Instant | 1–3 business days | Free | ✓ Yes
Paysafecard | Instant | Deposits only (mostly) | Free | ✓ Yes
Cryptocurrency | Minutes | Under 1 hour (crypto casinos) | Network fees apply | Varies by casino

Speed and fee data sourced from OLBG’s casino payment methods guide. Withdrawal times reflect casino-side processing after approval; actual timelines may vary by operator.

1. Visa and Mastercard Debit Cards

Debit cards remain the most universally accepted deposit method at licensed online casinos. Visa and Mastercard both use EMV chip protection and tokenisation, meaning your actual card number is not transmitted during online transactions — only a single-use token passes to the payment processor. Deposits are instant across the board. The main drawback is withdrawal speed: card payouts typically take 1–5 business days, as the return transfer is routed back through the card network’s settlement system rather than a direct push to your account.
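To make the tokenisation idea above concrete, here is a minimal illustrative sketch (not any card network’s actual implementation): the real card number stays in a vault held by the payment network, and the merchant only ever sees a random, single-use token.

```python
import secrets

# Illustrative sketch of payment tokenisation: the real card number (PAN)
# never leaves the vault; the merchant sees only a single-use token.
class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> PAN, held by the payment network

    def issue_token(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random, unguessable, single-use
        self._vault[token] = pan
        return token

    def redeem(self, token: str) -> str:
        # The network resolves the token exactly once during settlement;
        # pop() removes it, so a replayed token fails.
        return self._vault.pop(token)

vault = TokenVault()
token = vault.issue_token("4111111111111111")  # test PAN, not a real card
pan = vault.redeem(token)
```

The single-use property (the token is deleted on redemption) is why a stolen token from a merchant breach is worthless, unlike a stolen card number.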


One important distinction for UK players: debit cards are permitted under UKGC rules, but credit cards are not. The UK Gambling Commission’s ban on credit card gambling came into effect in April 2020 and applies to all UKGC-licensed operators. If you attempt to deposit with a credit card at a UK-licensed casino, the transaction will be declined at the merchant’s end — not because of a card issue, but by regulatory enforcement. Debit cards from Visa and Mastercard are unaffected. For more context on how UK regulations shape your payment choices, see our responsible gambling and regulatory guide.

2. PayPal

PayPal is the most widely recognised e-wallet in the world and a strong casino payment option where it is supported. Deposits are instant, and withdrawals typically clear within 24 hours — significantly faster than debit card returns. The platform does not share your bank or card details with the casino directly; PayPal acts as the intermediary, meaning your underlying financial details stay within the PayPal ecosystem. Most PayPal casino transactions carry no fees for the player, though PayPal may apply conversion charges for cross-currency deposits.

The main limitation is that PayPal is not universally accepted across all online casinos — availability depends on the operator’s payment processor relationships and regional licensing. Some casinos also exclude PayPal deposits from welcome bonus eligibility, or impose a separate wagering structure for PayPal users. Always check the bonus terms before depositing via PayPal if a welcome offer is a factor in your decision. It is also worth noting that PayPal’s own terms of service prohibit its use for unlicensed gambling operations — it works exclusively at regulated, licensed casino sites, which is actually a useful indirect signal of a casino’s legitimacy.

3. Trustly (Open Banking)

Trustly is the most technically advanced payment method widely available at licensed casinos in 2026. It operates as an open banking intermediary, connecting directly to your bank account through regulated bank APIs rather than routing through a card network or e-wallet balance. According to iGaming Payment Solutions’ 2026 Trustly review, the service processed $87 billion in transactions in 2024 and is connected to over 6,300 European banks — a scale that reflects its adoption as the default open banking rail for the iGaming industry.


Deposits complete in under six seconds according to Trustly’s Pay N Play documentation — and the Pay N Play feature, used by a growing number of European casinos, combines the deposit and KYC registration into a single bank authentication step, eliminating the need to fill in separate sign-up forms. Withdrawals push back to the same bank account, often same-day. The service works without creating a Trustly account — you simply select it at the casino cashier and authenticate through your own online banking. In the UK, 14 major banks support Trustly, including Barclays, Lloyds, HSBC, NatWest, Nationwide, and Santander. The only meaningful limitation: if your bank is not on the supported list, Trustly will not function for you — in which case a debit card or PayPal is the practical fallback.
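The reason Pay N Play eliminates sign-up forms can be sketched in a few lines. This is an illustrative model only, not Trustly’s real API: the field names and flow are assumptions, but the principle is accurate — one bank authentication yields both the verified identity (KYC) and the deposit, so the casino can build the account from it.

```python
# Hypothetical sketch of the Pay N Play principle: one bank authentication
# returns verified identity data AND the payment, so no separate sign-up
# form is needed. Field names here are illustrative assumptions.
def pay_n_play_deposit(bank_auth: dict, amount: float) -> dict:
    return {
        "player_name": bank_auth["account_holder"],  # doubles as KYC identity
        "payout_iban": bank_auth["iban"],            # withdrawals return here
        "balance": amount,                           # deposit credited at once
    }

acct = pay_n_play_deposit(
    {"account_holder": "A. Player", "iban": "GB00TEST00000000000000"},
    50.0,
)
```

Because withdrawals are pushed back to the same authenticated account, this design also closes off a common money-laundering route, which is part of why regulators tolerate the streamlined onboarding.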

4. Skrill

Skrill is a dedicated iGaming e-wallet that has been a staple of the online casino industry for over two decades. It is part of the Paysafe Group alongside Neteller and Paysafecard, giving it broad merchant relationships across the casino sector. Deposits are instant; withdrawal speeds are among the fastest in the non-crypto category, typically processing within a few hours once the casino approves the request. Skrill also supports cryptocurrency funding, meaning players can top up their Skrill balance using crypto and then use Skrill as the deposit method at casinos that don’t directly support crypto — a useful workaround.

The primary caveat: many casinos exclude Skrill (and Neteller) deposits from welcome bonus eligibility. This is disclosed in the terms of most major operators, but it catches new players off guard. If you plan to claim a sign-up bonus, verify the terms before depositing. Skrill also applies fees for certain transaction types, including currency conversion and inactivity charges on dormant accounts. For a direct comparison of Skrill against other e-wallet options, see our Skrill casino payment guide.

5. Neteller

Neteller occupies a similar market position to Skrill — same parent company (Paysafe), same broad casino acceptance, same instant deposit speed, and same 24-hour withdrawal window. Players often choose between the two based on which offers better rates through their VIP tier programmes, since both run loyalty structures that reduce fees and improve limits at higher tiers. If you are registered with both, it is worth comparing your current tier benefit on each before selecting your deposit method.


Like Skrill, Neteller deposits are excluded from welcome bonuses at most major casinos. New players in particular should prioritise a debit card or Trustly for their first deposit to capture any available sign-up offer, then switch to Neteller for ongoing play if its withdrawal speed and convenience suit their habits. Neteller is also excluded from BNPL products and cannot be funded using credit instruments in most jurisdictions — an intentional design choice aligned with responsible gambling standards.

6. Apple Pay and Google Pay

Mobile wallet payments have grown significantly at online casinos over the past two years. Apple Pay and Google Pay both function as tokenised card proxies — when you pay with Apple Pay, the casino never sees your actual debit card number; only a one-time device-generated token passes through.

For players who primarily use casinos on a smartphone, this is the lowest-friction deposit option available: Face ID or fingerprint confirmation replaces manual card entry, and deposits settle instantly. Withdrawal availability via Apple Pay and Google Pay is more limited than deposits — many casinos support them only for deposits and route payouts back to the underlying linked card, meaning withdrawal timelines revert to the card’s 1–5 day window.

7. Klarna Pay Now

Klarna Pay Now is a bank transfer payment method available at a growing number of licensed casinos, particularly in Germany, Austria, the Netherlands, and the UK. It replaced Sofort (deprecated October 2024) as Klarna’s instant bank transfer product. Deposits are instant and require no card details to be shared with the casino — authentication happens through your bank’s login interface within Klarna’s encrypted checkout flow.


Withdrawal support varies by operator; where available, payouts take 1–3 business days. Klarna’s credit-based products (Pay in 30 Days, Pay in 4) are not permitted for gambling transactions under regulated market rules. For a full breakdown of how Klarna works at online casinos, including the deposit and withdrawal process step-by-step, see our Klarna casino payment guide.

8. Paysafecard

Paysafecard is a prepaid voucher system: you purchase a physical or digital card loaded with a fixed denomination (£10, £25, £50, £100) from a newsagent, petrol station, or online retailer, then enter the 16-digit PIN at the casino cashier. No bank account, card details, or personal financial information is required. This makes it the most privacy-preserving deposit method available at regulated casinos. The significant limitation is withdrawals — Paysafecard functions almost exclusively as a deposit vehicle.

Most casinos cannot pay winnings back to a Paysafecard, which means players need a separate linked withdrawal method. It also cannot be used to deposit more than the loaded denomination, so high-volume players find it inconvenient. For casual, lower-stakes players who prioritise anonymity and spending control, it remains a practical choice.

9. Cryptocurrency

Cryptocurrency offers the fastest withdrawal speeds of any payment method category where it is supported. Bitcoin Lightning transactions clear in under 15 minutes at compatible crypto casinos; Ethereum, Litecoin, and stablecoin (USDT) withdrawals typically complete within one hour. The appeal is significant for players who dislike the multi-day waiting period associated with bank-route withdrawals. Deposits are similarly fast — typically confirmed within a few minutes depending on network congestion and the coin used.


The trade-offs are real and should not be minimised. Cryptocurrency values fluctuate, which means the value of your casino balance can change between deposit and withdrawal if you are holding crypto rather than stablecoins. Network fees — the “gas” cost per transaction — vary by coin and network congestion. In the UK, crypto gambling exists in a transitional regulatory environment: UK government legislation announced in December 2025 will bring cryptocurrency firms under full FCA regulation from 2027, but the framework is not yet finalised.

As TecPinion’s analysis of Bitcoin in gambling for 2025–26 notes, regulatory direction in the UK, EU, and parts of Asia is tightening — players using crypto at casinos should monitor whether their specific operator’s licensing covers crypto transactions in their jurisdiction. Stablecoins like USDT reduce the volatility risk while retaining the speed benefit, making them a more predictable crypto deposit option for players who want blockchain-speed payouts without exposure to price swings.

How to Choose the Right Method: A Decision Framework

Match your circumstances to the right method:

You want the fastest possible deposits and withdrawals and your bank is Trustly-compatible


→ Use Trustly. It is the fastest end-to-end method available at licensed European casinos.

You are depositing for the first time and want to qualify for a welcome bonus

→ Use a Visa or Mastercard debit card. Most casinos include debit card deposits in bonus eligibility. Avoid Skrill, Neteller, and PayPal for your first deposit if a bonus is a priority — check bonus terms first.

You want faster withdrawals than card networks provide but without crypto risk


→ Use Skrill or PayPal. Both offer same-day or near-same-day payouts at most major casinos once approved.

You primarily deposit on a smartphone and want the least friction

→ Use Apple Pay (iPhone) or Google Pay (Android). One biometric confirmation, instant deposit.

You want to control your spending without linking any bank account or card


→ Use Paysafecard. Fixed denomination, no financial data shared. Set up a separate withdrawal method before playing.

You use a crypto-primary casino and want the fastest payouts

→ Use Bitcoin Lightning or USDT. Sub-hour withdrawals at crypto-native casinos. Confirm your casino’s crypto licensing status if you are in the UK.

Your preferred method has been rejected or is unavailable


→ Check whether the casino’s block is fee-related, geographic, or bonus-related. Switch to a debit card as a reliable universal fallback — they are accepted at virtually every licensed online casino globally.
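The framework above reduces to a simple lookup. The sketch below just encodes this guide’s own recommendations; the use-case keys are ad hoc labels for illustration, not industry terminology.

```python
# Encodes the decision framework above as a lookup table.
RECOMMENDATIONS = {
    "fastest_end_to_end": "Trustly (Open Banking)",
    "first_deposit_with_bonus": "Visa/Mastercard debit",
    "fast_payout_no_crypto": "Skrill or PayPal",
    "mobile_low_friction": "Apple Pay or Google Pay",
    "no_bank_or_card_shared": "Paysafecard",
    "crypto_native_casino": "Bitcoin Lightning or USDT",
}

def recommend(use_case: str) -> str:
    # Debit cards are the universal fallback for any unmatched case.
    return RECOMMENDATIONS.get(use_case, "Visa/Mastercard debit")

print(recommend("fastest_end_to_end"))  # → Trustly (Open Banking)
```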

UK Regulatory Context: What Players Need to Know

UK players operate under the strictest consumer protection framework in the online gambling world. The UK Gambling Commission prohibits the use of credit cards for gambling deposits — this applies to all UKGC-licensed operators and was introduced to prevent players from funding gambling with borrowed money. Any deposit attempt via a credit card at a UKGC-licensed site will be declined. Debit cards, e-wallets, open banking methods, and prepaid vouchers are all permissible. Klarna’s Pay in 30 and Pay in 4 products are also not available for gambling transactions under this framework for the same reason.

For cryptocurrency, UK-facing casinos that accept crypto are in a transitional regulatory window. The UK government’s December 2025 announcement confirmed that firm crypto regulation will come into force from 2027, giving operators and players a clearer compliance timeline. Until that framework is fully in force, verify that any casino accepting crypto in the UK holds a valid UKGC licence — the licensing status governs player protection regardless of payment method. For a broader overview of your rights and protections as a player, see our responsible gambling regulatory guide.

Frequently Asked Questions

Which casino payment method has the fastest withdrawal?

Cryptocurrency offers the fastest withdrawals at casinos that support it — Bitcoin Lightning and Ethereum withdrawals can clear in under one hour. Among fiat methods, Skrill is typically fastest, processing within a few hours after casino approval. PayPal and Trustly usually complete withdrawals within 24 hours. Debit cards (Visa/Mastercard) are the slowest, with payouts taking 1–5 business days through card network settlement.


Are any casino payment methods banned in the UK?

Yes. Credit cards are banned for gambling deposits at all UKGC-licensed online casinos, as enforced by the UK Gambling Commission since April 2020. Klarna’s buy-now-pay-later credit products (Pay in 30, Pay in 4) are also not permitted for gambling transactions in the UK. Debit cards, e-wallets, open banking services, and prepaid cards are all permitted under current rules.

Will using PayPal or Skrill affect my welcome bonus?

Possibly yes. Many online casinos exclude e-wallet deposits — including PayPal, Skrill, and Neteller — from welcome bonus eligibility or apply different wagering requirements to e-wallet players. This is disclosed in the casino’s bonus terms and conditions. If claiming a welcome offer is a priority, deposit by debit card or Trustly for your first transaction, and switch to your preferred e-wallet thereafter. Always read bonus terms before depositing.

Is open banking (Trustly) safe to use at online casinos?

Yes. Trustly is a regulated payment institution authorised under the EU Payment Services Directive and connected to over 6,300 European banks through official bank APIs. Your bank credentials are entered directly into your bank’s authenticated interface — the casino never sees them. The payment layer between you and the casino is isolated within the bank authentication flow. The main practical safety check is ensuring the casino itself is licensed: Trustly’s own merchant agreements require operators to hold valid gambling licences.

Can I use cryptocurrency for online casino deposits in the UK?

At some casinos, yes — but the regulatory picture is evolving. UK government legislation announced in December 2025 will bring crypto firms under firm FCA regulation from 2027. Until that framework is in force, crypto at UK-facing casinos exists in a grey compliance zone. If you choose to use crypto, verify that the casino holds an active UKGC licence, as that licensing status governs your player protection regardless of payment method. Unregulated crypto casinos offer no recourse if disputes arise.


What is the best payment method if I don’t want to share bank or card details with the casino?

Paysafecard requires no bank account, card, or personal financial information — you purchase a prepaid voucher and enter only the 16-digit PIN at checkout. For players who prefer a bank-connected method without card exposure, Trustly and open banking services authenticate entirely through your bank’s own interface; the casino only receives confirmation of payment, not your credentials. E-wallets (PayPal, Skrill) similarly act as a data buffer between your bank and the merchant.

What should I do if my preferred payment method is rejected?

First, identify the reason for the rejection — it is usually one of three things: your bank has blocked the merchant category code for gambling (common with certain current accounts), the casino does not support your method in your country, or a bonus-related restriction is preventing the deposit from being processed.

The most reliable universal fallback is a Visa or Mastercard debit card — they carry the broadest merchant acceptance of any method. If your bank blocks gambling merchant categories, open a separate account with a bank that does not impose this restriction, or use Trustly as a bank-linked alternative that may route differently through your bank’s payment infrastructure. For more guidance on navigating payment issues at specific operators, our casino payment troubleshooting guide covers the most common scenarios.



Meta commits another $21 billion to CoreWeave, bringing total AI cloud spend to $35 billion


In short: Meta has committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity running from 2027 through December 2032, bringing the total value of the two companies’ infrastructure relationship to approximately $35 billion. The new contract will deliver early deployments of Nvidia’s Vera Rubin platform across multiple sites, and is designed specifically for inference workloads rather than training. Alongside the announcement, CoreWeave disclosed plans to raise $4.25 billion in new debt ($3 billion in convertible notes and $1.25 billion in junk bonds) to fund continued expansion. CoreWeave shares rose around 5% on the news; Meta shares gained roughly 3%.

From Ethereum mining to a $35 billion Meta relationship

CoreWeave was founded in 2017 in New Jersey as Atlantic Crypto, a side project of commodity traders mining Ethereum using graphics processing units. When the 2018 cryptocurrency crash made mining uneconomical and Ethereum’s eventual move to proof-of-stake threatened to render GPU mining obsolete entirely, the founders (Michael Intrator, Brian Venturo, and Brannin McBee) recognised that the GPU inventory they had accumulated was also exactly what machine learning researchers needed and could not easily access through conventional cloud providers. The company was renamed CoreWeave in 2019 and pivoted to GPU cloud infrastructure. It went public on March 28, 2025, at $40 per share, valuing it at $23 billion. Its 2025 revenue reached $5.13 billion, up 168% year on year, and its contracted backlog is estimated at more than $66 billion. The first Meta agreement, worth $14.2 billion and announced in September 2025, was the deal that established CoreWeave as a serious counterpart to the hyperscale cloud providers. The April 9, 2026 expansion, an additional $21 billion, makes Meta the most significant commercial relationship in CoreWeave’s history, with a combined commitment that will sustain the company’s revenue base through the end of the decade.

What Meta is actually buying

The contract is specifically structured around inference rather than training. Meta’s Llama model family is open-weight and freely downloadable, which means the capital-intensive training phase is largely complete before any cloud contract is signed; the ongoing cost is serving those models to billions of users in real time. Inference at Meta’s scale (hundreds of millions of daily active users across Facebook, Instagram, WhatsApp, and Meta AI) requires sustained, low-latency compute across distributed infrastructure in a way that Meta’s own data centres cannot always absorb at peak capacity. CoreWeave will deploy that capacity across multiple locations and will include some of the first commercial deployments of Nvidia’s Vera Rubin platform, which the chipmaker unveiled at GTC 2026 in March as the next generation of its AI infrastructure hardware.

The new deal supplements rather than replaces Meta’s internal build-out. Meta has guided for $115 billion to $135 billion in capital expenditure in 2026, with AI infrastructure identified as the primary driver, and the company has been explicit that it is building both owned data centres and sourcing external capacity simultaneously. The CoreWeave expansion follows a $27 billion infrastructure deal Meta signed with Nebius in March 2026, under which the Dutch neocloud operator will supply dedicated compute starting in early 2027, also featuring early Vera Rubin deployments. The two deals together illustrate that Meta is not simply procuring cloud capacity but building a diversified multi-vendor infrastructure position designed to give it flexibility and redundancy at hyperscale.

The customer diversification play

For CoreWeave, the Meta expansion solves a problem that has shadowed the company since its IPO: excessive revenue concentration. Microsoft represented 62% of CoreWeave’s 2024 revenue, a figure that made institutional investors uncomfortable and that the company has been working to reduce. With the new Meta commitment in place, CoreWeave CEO Michael Intrator said no single customer would represent more than 35% of total sales. That is still a significant concentration, but it is a materially different risk profile from a position where a single hyperscale customer controls the majority of your revenue. Nvidia, which made a $2 billion strategic investment in Nebius in March 2026 and has deepened its commercial relationships with every major AI cloud provider, sits at the centre of CoreWeave’s business model: CoreWeave’s entire infrastructure is built around Nvidia GPUs, and the Vera Rubin deployments in the Meta contract will extend that dependency into the next hardware generation. CoreWeave also recently expanded its agreement with OpenAI by up to $6.5 billion, further broadening its customer base beyond Microsoft. The company’s stock reached an all-time high of $187 in mid-2025 before pulling back to around $65 in late 2025 amid broader concerns about AI investment returns; following the Meta expansion announcement it was trading in the $88 to $95 range.


The debt that funds it all


AI cloud infrastructure is expensive to build before contracts start generating revenue, and CoreWeave has funded its growth primarily through debt. Alongside the Meta deal announcement, the company disclosed plans to raise $4.25 billion in new financing: $3 billion in convertible senior notes due 2032, carrying a coupon of between 1.5% and 2%, with an option for investors to convert into equity; and $1.25 billion in senior unsecured notes due 2031 at approximately 10%, effectively junk-bond pricing. CoreWeave’s total debt load sits at around $30 billion, roughly triple what it was a year earlier. The company’s argument for the debt structure is that its contracted revenue base (more than $66 billion in backlog) provides sufficient visibility to service the obligations. Intrator has described CoreWeave as an “AI factory” whose capital costs are underwritten by long-term customer commitments before infrastructure is built.

The broader AI infrastructure financing environment has been characterised by similarly large-scale debt structures: SoftBank secured a $40 billion bridge loan to fund its $30 billion follow-on OpenAI investment as part of the Stargate project, illustrating that the capital requirements of AI at scale are now large enough to require financing instruments that did not exist in this form even two years ago. The year 2025 cemented AI infrastructure as the primary competitive variable in the technology industry, and CoreWeave, a company that began as a closet of Ethereum mining rigs, has positioned itself as a load-bearing pillar of that infrastructure, one $21 billion commitment at a time.
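As a rough sanity check on the coupon figures quoted above, the implied annual interest bill can be computed directly (a back-of-the-envelope sketch using only the quoted rates; actual interest expense depends on the final issue terms):

```python
# Annual coupon cost implied by the financing described above:
# $3B convertible notes at 1.5%-2%, plus $1.25B senior notes at ~10%.
convertible_principal = 3_000_000_000
senior_principal = 1_250_000_000
senior_rate = 0.10

low = convertible_principal * 0.015 + senior_principal * senior_rate
high = convertible_principal * 0.02 + senior_principal * senior_rate
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M per year")
```

The striking feature is that the $1.25 billion junk-bond tranche costs more in annual interest than the $3 billion convertible tranche, which is why issuers accept the dilution risk of convertibles.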



The diverse responsibilities of a principal software engineer


Liberty IT’s Sarah Whelan discusses the skills she uses daily and her reaction to her nomination as part of Liberty IT’s Culture Stars initiative.

“I’m a principal software engineer in the data space at Liberty IT, leading data pipeline enablement and experimentation to help product and analytics teams deliver reliable data and run faster experiments,” said Sarah Whelan. 

A working day might involve designing reusable patterns, templates and tooling while working across functions to improve observability, testing and delivery practices, according to Whelan, who is also involved in a company group designed for women in STEM. 

She told SiliconRepublic.com, “Alongside my day job, I co‑chair the Women in Tech employee group and mentor junior engineers, providing career guidance and technical coaching. 


“That work focuses on removing barriers through skills workshops, resources for career growth and forums where diverse voices can share experiences. The group runs mentoring circles, interview practice sessions and visibility events that create concrete opportunities and help normalise diverse career paths in engineering.”

If there is such a thing, can you describe a typical day at work?

My day balances technical tasks and collaboration. I’ll scan pipelines and deployment health first, address urgent alerts, then focus on code reviews. For me, reviews are an opportunity to mentor, surface better approaches and make our work more maintainable. I set aside time for architecture discussions and documenting decisions so future work is clearer.

I spend time working with our product teams to shape the roadmap, meet stakeholders to understand their problems and identify solutions, and coordinate with other teams to resolve dependencies. I also plan and run mentoring sessions and Women in Tech events, organising speakers, agendas and logistics.

What types of projects do you work on?

My work delivers dependable data platforms for analytics and machine learning. I build production-grade data pipelines that give teams reliable, well-instrumented datasets. To make delivery repeatable, I design experimentation frameworks, templates and patterns that reduce manual effort.


I focus on observability, testing and scaling so pipelines stay performant and lead enablement sessions that teach people how to use the tools and run experiments without heavy engineering support.

What skills do you use on a daily basis?

I use core data engineering skills every day: Python for transformations and orchestration, SQL for modelling and validation, and testing and monitoring to keep systems dependable. I pair that with careful, experimental thinking, small trials, metric tracking and incremental rollouts, so changes are low-risk and measurable.

On the people side of things, clear communication, active listening and regular collaboration help turn technical work into useful outcomes. I focus on creating easy pathways for success by mentoring colleagues, running pairing sessions for practical learning and producing simple playbooks that let teams self‑serve.
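The kind of testing described above can be illustrated with a small sketch. This is not Liberty IT’s actual code, just an example of the lightweight batch validation that keeps a pipeline dependable: check rows against simple expectations before publishing them downstream.

```python
# Illustrative data-quality gate for a pipeline batch: returns a list of
# human-readable errors; an empty list means the batch is safe to publish.
def validate_batch(rows: list[dict]) -> list[str]:
    errors = []
    if not rows:
        errors.append("empty batch")
    for i, row in enumerate(rows):
        if row.get("user_id") is None:
            errors.append(f"row {i}: missing user_id")
        if not (0 <= row.get("amount", -1)):
            errors.append(f"row {i}: negative or missing amount")
    return errors

batch = [{"user_id": 1, "amount": 9.99}, {"user_id": None, "amount": 5.0}]
print(validate_batch(batch))  # → ['row 1: missing user_id']
```

Running checks like this at the pipeline boundary, and alerting on the error list rather than failing silently, is what turns "testing and monitoring" from a slogan into an operational practice.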

What is the hardest part of your working day?

The hardest part is switching gears – going from fixing urgent production issues to design workshops or running hands‑on pairing sessions can really break your flow. I try to make it easier by agreeing priorities with the team, protecting blocks for focused work and keeping documentation up to date so I can pick up where I left off. Quick handovers and regular check‑ins also keep longer‑term work visible.

Do you have any productivity tips that help you through the working day?

I use a to-do list to track outstanding tasks and review it each morning to plan and prioritise my day. I block focused time in my calendar for heads‑down work, which helps me avoid context switching. I document everything in a central, easily accessible location so the team never has to ‘figure something out’ twice. I also make mentoring a recurring calendar item, so coaching happens regularly.

When you first started this job, what were you most surprised to learn was important in the role?

I was surprised by how much context and communication matter; technical solutions alone rarely succeed without stakeholder buy‑in and agreed processes. I also didn’t expect observability and experiment rigour to be so central. Good monitoring, testing and repeatable experiment practices are what make pipelines reliable in production.

Finally, the value of documentation and small, consistent practices (like decision logs and runbooks) became obvious fast – they save time and prevent firefighting.

How has your role changed as the sector has grown and evolved?

The arrival of generative AI has raised the bar; it requires high‑quality, well‑labelled data, feature management, stronger data contracts and privacy controls, plus new inference and embedding pipelines and model observability, which makes the role more strategic and cross‑functional. At the same time, there’s a steady stream of new tools and platforms, so a crucial skill is distinguishing genuinely useful technology from marketing hype and choosing tools that solve real problems.

What do you enjoy most about the job?

I enjoy making things better for the people I work with. Most of my role is about simplifying data delivery so users get reliable, timely datasets and can make decisions faster. Each day, I try to keep the team unblocked, staying on top of potential issues so colleagues can get on with their day‑to‑day work with minimal friction.

What I like most about the job is knowing my work makes other people’s lives easier, whether that’s a data user getting answers faster or a teammate having one fewer thing to worry about. I also enjoy helping others build skills and confidence, and access opportunities. Practically, that looks like one‑to‑one coaching, structured pairing sessions and setting up repeatable playbooks so people can succeed without constantly relying on one person.

I often run knowledge‑sharing sessions or demos to share what I’ve learned and get feedback. It’s great to see patterns I’ve created adopted by other teams. When I notice incremental improvements or hear someone say a change saved them time, it reminds me why this work matters.

You received a nomination as part of Liberty IT’s Culture Stars initiative – tell us more about what this nomination meant to you?

The nomination in the ‘Be Brilliant’ category recognised mentorship, teamwork and pragmatic technical leadership. Seeing my mentee secure a promotion was the proudest, most concrete outcome; it showed the real, human impact of focused coaching and regular feedback.


The nomination also acknowledged the everyday teamwork and practical improvements I champion to make our pipelines more reliable. Being recognised was validation that consistent, sometimes unglamorous work – supporting others, documenting decisions and removing roadblocks – does make a difference. 

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.




Amazon’s chip business could be worth $50 billion, Jassy says, and he hints it may sell them externally


In short: Andy Jassy’s annual letter to shareholders, published on 9 April 2026, reveals that Amazon’s custom chip business, covering Graviton, Trainium, and Nitro, generates more than $20 billion in annualised revenue, growing at triple-digit rates year-on-year. If Amazon sold its chips on the open market as Nvidia does, Jassy says, the business would generate roughly $50 billion in annual revenue. He also signals that Amazon may begin selling those chips directly to third parties, and defends the company’s $200 billion capital expenditure plan for 2026 as grounded in committed customer demand rather than speculation.

“Not on a hunch”: the $200 billion bet

Jassy opened the letter’s financial argument with a direct rebuttal of the scepticism that has surrounded Amazon’s capital commitments. “We’re not investing approximately $200 billion in capex in 2026 on a hunch,” he wrote. “We’re not going to be conservative in how we play this. We’re investing to be the meaningful leader, and our future business, operating income, and free cash flow will be much larger because of it.” The context for that claim is a company that saw its free cash flow fall from $38 billion to $11 billion last year, driven by a $50.7 billion increase in capital spending, the bulk of it committed to AI infrastructure.

The defence rests on customer commitments already in place. Of the CapEx expected to be deployed in 2026, Jassy said a substantial portion already has customer backing, citing as one example OpenAI’s commitment of more than $100 billion to AWS. That commitment, which expanded an existing $38 billion seven-year partnership struck in November 2025, also includes OpenAI consuming approximately two gigawatts of Trainium capacity through AWS infrastructure. SoftBank, which holds a majority stake in OpenAI and has been financing its infrastructure build through mechanisms including a $40 billion bridge loan, is in effect underwriting part of the demand that Jassy is now pointing to as validation for his CapEx stance.

A $50 billion chip business hiding in plain sight

Amazon’s custom silicon programme spans three product lines. Graviton is a custom CPU that Jassy says delivers more than 40% better price-performance than comparable x86 processors, the market that Intel and AMD dominate. It is now used by 98% of the top 1,000 EC2 customers, a figure that reflects a shift in the economics of cloud compute that has been underway for several years. Demand is sufficiently intense that two large AWS customers asked whether they could purchase all available Graviton capacity for 2026. Amazon declined.


Trainium is the AI training and inference accelerator that represents Amazon’s most direct response to Nvidia. Trainium2, which Jassy says offers roughly 30% better price-performance than comparable GPU alternatives, has largely sold out. Trainium3, which began shipping in early 2026 and offers a further 30 to 40% improvement in price-performance over Trainium2, is nearly fully subscribed, with Uber among the companies that have moved workloads onto it. Trainium4, still approximately 18 months from broad availability and featuring interoperability with Nvidia’s NVLink Fusion interconnect technology, has already been significantly reserved. Nitro, the custom network and security chip that underpins AWS’s virtualisation layer, completes the three-chip portfolio. Together, Jassy says the three lines produce more than $20 billion in annualised revenue, growing at triple-digit percentage rates year-on-year. “If we were a standalone chip company,” he writes, “our chips would be generating over $50 billion in annual revenue.” The business currently exists entirely within AWS; customers access Trainium and Graviton through EC2 instances rather than buying chips directly.


At scale, Jassy argues, Trainium will “save us tens of billions of capex dollars per year, and provide several hundred basis points of operating margin advantage versus relying on others’ chips for inference.” That claim is central to the investment thesis underpinning the $200 billion CapEx programme: custom silicon is not only a competitive differentiator but a structural cost advantage that compounds over time as the ratio of inference to training in AI workloads continues to rise.

The Nvidia relationship, and the “new shift”

Jassy is careful in how he frames the competitive dynamic with Nvidia. “We have a strong partnership with NVIDIA, will always have customers who choose to run NVIDIA,” he writes, while also asserting that “virtually all AI thus far has been done on NVIDIA chips, but a new shift has started.” Customers, he says, “want better price-performance.” Nvidia, which reported revenue of $68.1 billion in the fourth quarter of 2025, a 73% year-on-year increase, entered 2026 from a position of market dominance that Amazon’s custom silicon is chipping away at from within the AWS customer base rather than in any broader merchant market. Trainium4’s incorporation of NVLink Fusion means Amazon is also building in a bridge rather than a wall: customers can combine Trainium accelerators with Nvidia GPUs within the same system, preserving optionality for enterprises that have invested heavily in Nvidia’s software stack.

The letter’s most consequential signal on chips, however, may be a single sentence about the future: “There’s so much demand for our chips that it’s quite possible we’ll sell racks of them to third parties in the future.” Amazon currently monetises its custom silicon exclusively through EC2 compute services. Selling chips directly would represent a structural shift in its competitive posture, placing it in the merchant silicon market alongside Nvidia and AMD, and allowing the economics of the chip business to be assessed independently of the cloud revenue it currently underpins.

Bedrock, Amazon Leo, and the broader picture

The shareholder letter situates the chip business within a wider AI infrastructure thesis. Amazon Bedrock, the managed service through which AWS customers access foundation models including Amazon’s own Nova family, processed more tokens in Q1 2026 than in all prior periods combined, with inference volumes “nearly doubling month-over-month” in March. AWS’s AI revenue run rate crossed $15 billion in Q1 2026, a figure Jassy contextualises by noting it represents growth roughly 260 times faster than AWS experienced at a comparable stage of its development.
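Token volumes “nearly doubling month-over-month” compound quickly, which is how a single recent period can exceed all prior periods combined. The sketch below uses an arbitrary starting volume purely to show the arithmetic; Amazon discloses no absolute Bedrock token figures.

```python
# Illustrative compounding of a ~2x month-over-month growth rate.
# The starting volume is an arbitrary placeholder, not a disclosed figure.

start = 1.0           # token volume in the first month (arbitrary units)
monthly_growth = 2.0  # "nearly doubling month-over-month"

# Volume for four consecutive months at that rate.
volumes = [start * monthly_growth**m for m in range(4)]
print(volumes)                          # [1.0, 2.0, 4.0, 8.0]
print(volumes[-1] > sum(volumes[:-1]))  # latest month exceeds all prior combined
```

At a sustained 2x monthly rate, each new month is larger than the entire history before it, which is the shape of the claim Jassy is making about Q1 2026.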


Jassy also uses the letter to frame Amazon’s satellite internet service, Amazon Leo, as a competitive counterpart to SpaceX’s Starlink, having already secured contracts with Delta Air Lines, JetBlue, AT&T, Vodafone, and NASA. The satellite and chip disclosures share an underlying argument: that Amazon is building infrastructure at a scale and across categories that most observers have not fully priced in. The legal scrutiny that has begun to attach itself to Amazon’s AI products, including a proposed class action over the training data used for Nova Reel, represents one category of risk that the letter does not address. The year 2025 established AI infrastructure as the central capital allocation question for the technology industry, and Jassy’s letter is, in part, an argument that Amazon arrived at the right answer earlier and more decisively than the market has yet recognised.




OpenAI pauses Stargate UK over energy costs


Stargate UK will move forward when ‘right conditions’ enable ‘long-term infrastructure investment’, OpenAI said.

OpenAI is pausing its Stargate initiative in the UK, citing energy costs and regulatory burdens.

In a statement to major news publications, the company said that it is continuing to explore Stargate UK and will move forward when the “right conditions such as regulation and the cost of energy” enable it to make “long-term infrastructure investment”.

OpenAI first announced the project last September in collaboration with Nvidia and UK AI infrastructure provider Nscale. The initiative was seen as a step forward in cross-national technology partnership, with its announcement coinciding with US president Donald Trump’s visit to the UK.


For UK prime minister Keir Starmer, Stargate represented a major nod from Big Tech firms supporting the country’s push to become a leader in the space. The OpenAI project was meant to support the UK’s ‘AI Growth Zone’, expected to create 5,000 new jobs and bring in £30bn in private investment.

Other companies, including Microsoft and Nvidia, have also made multibillion-dollar investment commitments in the UK. A government spokesperson told Bloomberg that the UK’s AI sector has attracted more than £100bn since Starmer came into power in 2024.

Launched early last year, Stargate is a $500bn private sector investment project into OpenAI’s infrastructure. The project’s initial equity funders include OpenAI, Oracle, MGX and SoftBank, with Microsoft, Nvidia and Arm among the key technology partners.

A year after launch, Stargate’s Texas facility is already training AI systems, and further projects are underway in the US, as well as in the UAE and Norway. OpenAI has also announced a tie-up with India’s Tata Consultancy Services as part of Stargate.


OpenAI has been shuttering products as it refocuses on enterprise tools ahead of a planned initial public offering later this year. Late last month, it put plans for an erotic ChatGPT on hold “indefinitely”, just days after it shut down its controversial AI video generator Sora.

It recently announced a $122bn funding round, placing the AI giant at a post-money valuation of $852bn.






Flush with cash: Washington startup lands up to $500M to deploy facilities treating sewage, dairy waste


Dairy cows at the Puyallup Fair, now called the Washington State Fair. (GeekWire Photo / Kurt Schlosser)

Wastewater treatment startup Sedron Technologies — a Washington company that once served Bill Gates a glass of water purified from sewage — announced it’s being acquired by Ara Partners. The global equity firm is investing up to $500 million in Sedron to facilitate the deployment of its sewage and manure cleaning technologies, which gives it a controlling stake in the business.

“The Ara investment is largely designed to provide us with the equity on our own balance sheet to scale up production of additional projects and plants across the country,” said Geoff Trukenbrod, interim CEO of Sedron.

The startup is deploying facilities that efficiently and sustainably treat sewage biosolids and dairy waste. Sedron’s business model is to finance, design, build, own, operate and maintain the sites, which cost about $100 million to $200 million to build.

The company generates revenue from the municipalities and farms that use its services, as well as from the sale of organic fertilizer and clean energy produced at the sites.

“Imagine having a bakery, and you get paid to get flour, and you get paid for your cookies,” said Stanley Janicki, Sedron’s chief commercial officer. “It’s a phenomenal business model, not that biosolids are cookies.”

Sedron’s dairy waste management facility in Fair Oaks, Ind., which handles manure from 20,000 cows. (Sedron Photo)

Sedron launched in 2014 as a spinoff from Janicki Industries, a longtime aerospace engineering and manufacturing company. Both are based in Sedro Woolley, a city north of Seattle in a largely agricultural stretch of Western Washington.

In 2011, Janicki received funding from what is now the Gates Foundation to develop a wastewater purification system, leading to Sedron’s launch and a video that went viral showing Bill Gates drinking a glass of water produced from sewage. The foundation supported the technology as a means for treating waste in developing countries where untreated sewage could otherwise spread pathogens.

The company is breaking ground this month on a regional waste treatment facility that will serve multiple municipalities that are home to 2 million people in South Florida. Operations are expected to begin in 2028.

Sedron’s system takes municipal biosolids — the residual product from a wastewater treatment plant — and dries the material in an energy-efficient thermal dryer. The biosolids are about 85% water, which is largely evaporated and disposed of, and the remaining material is fed into a biomass boiler to produce clean electricity. The energy generated helps run the dryer, and the excess electricity is sold. Another benefit is that the process destroys the PFAS “forever chemicals” contaminating wastewater.
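A rough mass and energy balance makes the 85% water figure concrete. This is an illustrative sketch, not Sedron’s actual process data: the moisture fraction comes from the article, while the latent heat of vaporization of water is a textbook constant, and real dryer energy use depends on efficiency and heat recovery, which are not modeled here.

```python
# Illustrative mass/energy balance for drying biosolids that are ~85% water.
# Not Sedron process data: real dryers have losses and heat recovery.

def drying_balance(biosolids_kg: float, moisture_fraction: float = 0.85):
    water_kg = biosolids_kg * moisture_fraction       # water to evaporate
    dry_solids_kg = biosolids_kg - water_kg           # material left for the boiler
    latent_heat_mj_per_kg = 2.26                      # MJ to evaporate 1 kg of water
    min_energy_mj = water_kg * latent_heat_mj_per_kg  # thermodynamic minimum, no losses
    return water_kg, dry_solids_kg, min_energy_mj

water, solids, energy = drying_balance(1000.0)  # one tonne of biosolids
print(f"{water:.0f} kg water, {solids:.0f} kg dry solids, >= {energy:.0f} MJ to evaporate")
```

The takeaway is that only about 150 kg of each tonne is combustible solids, which is why an energy-efficient dryer and the boiler feedback loop are central to the system’s economics.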

The startup’s second line of business is managing manure from livestock operations — which is one of the biggest costs for a dairy farmer. Sedron takes the waste, removes the water for use in irrigation, and produces two high-value organic fertilizers: a solid material and a concentrated liquid nitrogen fertilizer. The fertilizers are sold nationwide for use on crops such as apples, berries and spinach.


Sedron’s treatment process is more affordable and replaces the use of manure lagoons to store the waste until it can be applied to fields as a liquid. The lagoons produce planet-warming methane and pose environmental threats if they leak nutrients that can stoke algal blooms in nearby waterways or contaminate drinking water.

The company has deployed its manure technology at two dairy farms in Indiana, including a 20,000-cow dairy, and expects to start operations at a Wisconsin farm this summer.

“Our focus is on positioning Sedron as the leader in circular waste management — converting waste into carbon negative commodities faster, more cost effectively, and with greater energy efficiency than any other solution available,” said Cory Steffek, a partner at Ara Partners, in a statement.

Sedron previously raised approximately $100 million in corporate debt and equity and about $200 million in project financing, some of which was institutional. All of the legacy shareholders rolled their equity forward, Janicki said.


The 275-employee company has offices in Washington state and Chicago, and operational facilities in Indiana, Wisconsin and Florida.

The startup is focused on U.S. deployments of its facilities, aiming to launch at least two new sites each year for the next five years, then potentially scaling up from there. Janicki said they’d still like to operate in developing countries to address that initial use case.

Sedron’s leadership emphasized the importance of delivering a service that resonates with investors and business partners, doesn’t require government support to succeed and also benefits the planet.

“As the world today is retreating somewhat from climate efforts,” Janicki said, “it’s exciting to be in a business that is positioned for exceptional growth and solving environmental problems while creating valuable products.”





Pixelmator Pro & Logic get big upgrades, rest of iWork gets minor ones


Apple rolled out updates across its creative and productivity apps. Logic Pro and Pixelmator Pro gained new features, while the rest of the lineup received bug fixes and stability improvements.

Apple Creator Studio apps

Apple’s Creator Studio bundle includes pro tools like Final Cut Pro, Logic Pro, Motion, Compressor, and Pixelmator Pro, along with productivity apps like Pages, Keynote, and Numbers. It’s a unified platform for creating and publishing across different workflows.
Apple delivered updates through the App Store, with most apps receiving maintenance-focused changes for reliability and platform stability.
Continue Reading on AppleInsider | Discuss on our Forums




Google Cloud deepens AI infrastructure partnership with Intel across Xeon and custom chips


In short: Google Cloud and Intel have announced a deepened multi-year AI infrastructure partnership covering both CPU deployment and custom chip co-development. Google Cloud will continue adopting Intel’s Xeon 6 processors across its global infrastructure for C4 and N4 instances, while the two companies are expanding their joint development of custom Infrastructure Processing Units designed to offload networking, storage, and security from host CPUs in hyperscale AI environments. The announcement arrives as Intel’s stock surged approximately 33% on the week and two days after the company signed on as the foundry partner for Tesla’s Terafab megaproject.

“Balanced systems”: the case Intel and Google are making together

The central argument of the partnership, as framed by both companies, is that GPU accelerators alone are not sufficient to handle the demands of modern AI infrastructure. In a statement accompanying the announcement, Lip-Bu Tan, Intel’s chief executive, said: “AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators — it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.” The language is deliberate. Intel has spent much of the past two years repositioning from the general-purpose computing market it once dominated toward a more specific thesis: that the CPU and custom infrastructure silicon have a structural role in AI deployments that GPU-centric narratives have consistently underestimated.

Amin Vahdat, Google’s senior vice president and chief technologist for AI infrastructure, made the case from the demand side. “CPUs and infrastructure acceleration remain a cornerstone of AI systems — from training orchestration to inference and deployment,” he said. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.” The framing of the partnership as a multi-generational CPU roadmap commitment, rather than a one-cycle procurement agreement, is significant: it implies Google has made decisions about its infrastructure architecture several years out on the basis of Intel’s product trajectory, and that trajectory includes both the Xeon line and the custom IPU co-development effort.

Xeon 6 in Google Cloud

The CPU component of the partnership centres on Intel’s Xeon 6 processor family, which Google Cloud has deployed across its workload-optimised C4 and N4 instance types. Google says the C4 instances deliver more than 2.0 times the total cost of ownership benefit compared with predecessor configurations, a figure that captures the combination of performance uplift and power efficiency that Intel has positioned as Xeon 6’s core competitive claim. The agreement extends beyond the current generation: Google has committed to multi-generational alignment with Intel’s Xeon roadmap, meaning its infrastructure planning incorporates Intel’s future CPU releases as a known variable rather than a contingent one. Google has simultaneously been deepening its custom silicon commitments on the accelerator side, supplying Anthropic with approximately one gigawatt of TPU capacity through Broadcom in a deal that anchors Anthropic’s AI infrastructure through 2027 and beyond — a parallel track that reflects how Google is building out its infrastructure portfolio across both standard and custom silicon simultaneously.



The CPU architecture context matters for understanding why this commitment is being made public now. As AI workloads shift from the training phase, which is GPU-intensive and relatively concentrated among a small number of hyperscalers, toward inference at scale, which is distributed, latency-sensitive, and runs continuously across large server fleets, the cost structure of AI infrastructure changes. Inference places sustained demands on CPU resources for orchestration, data pre-processing, and system management that training pipelines do not. Google’s bet on Xeon 6 for its C4 and N4 instances is, in part, a bet that inference economics will make CPU efficiency a first-order concern in the years ahead.

The custom IPU programme

The more strategically significant element of the partnership is the expanded co-development of Infrastructure Processing Units. IPUs are custom ASIC-based programmable accelerators designed to take over the networking, storage, and security functions that would otherwise run on host CPUs, freeing those CPUs to focus entirely on application and AI workload processing. In hyperscale environments, where these infrastructure tasks consume a substantial and growing fraction of available compute, offloading them to a dedicated accelerator can significantly improve utilisation rates, energy efficiency, and the consistency of workload performance. Intel and Google have been collaborating on IPU development, and the announcement signals that this work is expanding in scope rather than narrowing. The specific technical details of the expanded programme — die design, process node, performance targets, and deployment timeline — have not been disclosed publicly.
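The utilisation argument behind IPUs reduces to simple arithmetic. The sketch below uses a hypothetical overhead share, not a figure from Intel or Google: if infrastructure housekeeping consumes a fixed fraction of host CPU cycles, offloading it to a dedicated accelerator raises the share of each server available for application work.

```python
# Hypothetical model of the IPU offload argument. The overhead share is
# illustrative, not a vendor figure: it is the fraction of host CPU cycles
# spent on networking, storage, and security housekeeping.

def app_capacity_gain(overhead: float) -> float:
    """Relative gain in application-usable CPU capacity when the overhead
    share is offloaded to an IPU, with fleet size held constant."""
    without_ipu = 1.0 - overhead  # CPU share left for applications today
    with_ipu = 1.0                # entire CPU freed for applications
    return with_ipu / without_ipu - 1.0

# If housekeeping eats 30% of host cycles, offloading it yields roughly
# 43% more application capacity from the same servers.
print(f"{app_capacity_gain(0.30):.0%}")
```

The gain is nonlinear in the overhead share, which is why the argument strengthens as AI workloads push network and storage traffic, and hence housekeeping load, upward.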


Nvidia, whose fourth-quarter 2025 revenue reached $68.1 billion on 73% year-on-year growth and which used its GTC 2026 conference in March to position its full-stack platform as the default environment for AI infrastructure, is the implicit competitive reference point for both components of the Intel-Google partnership. Intel is not attempting to displace Nvidia’s GPU accelerators in training workloads; it is arguing that the system around those accelerators — the CPUs managing orchestration, the IPUs managing network and storage overhead, and the interconnects tying everything together — is where efficiency gains are increasingly available. That argument has a natural ally in Google, which has both the infrastructure scale to validate it empirically and commercial incentives to diversify away from a single-vendor accelerator dependency.

Intel’s strategic moment

The Google partnership arrives at a moment when Intel’s industrial position is changing rapidly. Two days before the Google announcement, Intel signed on as the primary foundry partner for Terafab, the $25 billion joint venture between Tesla, SpaceX, and xAI targeting one terawatt of AI compute per year, committing its 18A process node — the company’s most advanced logic manufacturing technology — to the project. The two announcements taken together suggest Intel is pursuing a two-track strategy: deepening its hyperscale cloud partnerships for CPU and IPU deployment while simultaneously building out its foundry business for the custom AI silicon market that Nvidia, AMD, and the hyperscalers’ in-house chip programmes have driven into existence. The stock market responded to the week’s announcements with a roughly 33% gain in Intel’s share price, the sharpest weekly move the company has recorded in years.

Whether the strategic repositioning is durable depends on execution. Intel’s 18A process node is the same technology that underpins its foundry credibility with customers like Tesla, and its delay history has been a persistent source of investor concern. The Xeon 6 deployment in Google Cloud and the IPU co-development programme are both contingent on Intel shipping what its roadmap promises on the timelines Vahdat’s statement implies Google has factored into its own planning. The AI infrastructure market that Intel is trying to enter has become one of the most heavily capitalised segments in technology, with deals such as Meta’s $27 billion agreement with Nebius in March 2026 illustrating the scale of commitments being made across the industry. The year 2025 shifted the centre of gravity in AI from model development to infrastructure deployment, establishing capital expenditure scale and infrastructure access as the primary competitive variables — and Intel, for the first time in several years, is making a credible case that it belongs in that competition on multiple fronts simultaneously.





I don’t see a sane reason to pick another budget phone over the TCL NXTPAPER 70 Pro


The era of truly good budget phones is over, and you can blame AI for that. Due to rising chip costs, even flagship phones are feeling the pinch. That’s why, when TCL finally brought the NXTPAPER 70 Pro to the US, it came as a big surprise to me. The phone costs just $199, nearly half the price you’d pay in other markets.

Yes, the phone is exclusive to T-Mobile, but at $199, the NXTPAPER 70 Pro felt like something else entirely. A 6.9-inch 120Hz display, IP68 water resistance, a 5,200mAh battery, a 50MP camera, and TCL’s NXTPAPER 4.0 display technology, which is genuinely unlike anything else at this price. Naturally, I wanted to compare it to phones in a similar price range to see whether I could find a better deal.

So, I went looking for alternatives at a similar price and found three worth comparing: the Samsung Galaxy A17 5G, the Motorola Moto G Power 2026, and the Pixel 10a.  None of them can beat the TCL in price, performance, or features, and I concluded that there’s no reason to choose any other phone over the NXTPAPER 70 Pro right now. Let me show you what I mean.

But first, a quick specs comparison

| Specification | TCL NXTPAPER 70 Pro | Galaxy A17 5G | Moto G Power 2026 | Google Pixel 10a |
| --- | --- | --- | --- | --- |
| Display | 6.9-inch IPS LCD, 120Hz (1080 x 2340) | 6.7-inch Super AMOLED, 90Hz (1080 x 2340) | 6.8-inch IPS LCD, 120Hz (1080 x 2388) | 6.3-inch P-OLED, 120Hz (1080 x 2424) |
| Processor | MediaTek Dimensity 7300 (4nm) | Exynos 1330 (5nm) | MediaTek Dimensity 6300 (6nm) | Google Tensor G4 (4nm) |
| Main camera | 50MP, f/1.9, 24mm | 50MP, f/1.8, 24mm | 50MP, f/1.8 | 48MP, f/1.7, 25mm |
| Ultrawide | 8MP (120˚) | 5MP | 8MP (119˚) | 13MP, f/2.2 (120˚) |
| Macro | N/A | 2MP | N/A | N/A |
| Selfie | 32MP, f/2.0, 28mm | 13MP, f/2.0 | 32MP, f/2.2 | 13MP, f/2.2, 20mm |
| Battery | 5,200mAh | 5,000mAh | 5,200mAh | 5,100mAh |
| Price | $199 (T-Mobile) | $189 (T-Mobile), $199 (unlocked) | $189 (T-Mobile), $299 (unlocked) | $499 (unlocked) |

Is there any competition at this price?

The Samsung Galaxy A17 5G is the obvious first comparison. It is Samsung’s best-selling budget phone, and for good reason. You get a solid 6.7-inch Super AMOLED display, a triple camera system, and an impressive six years of software updates. 


It is a reliable, no-frills phone that does the basics well. But it runs on the Exynos 1330, a chip that has been specifically called out for poor performance. Compared to the MediaTek Dimensity 7300 powering the TCL NXTPAPER 70 Pro, the Exynos 1330 is slower across CPU, GPU, and battery performance. Take a look at the comparisons below:

It also has an IP54 rating, which means it is splash-resistant but not submersible. The NXTPAPER 70 Pro, by comparison, has a better chip, a better display, IP68 water resistance, and a more interesting feature set. The A17 sells for around $175 to $199. Simply put:

Same price. No contest.

The Moto G Power 2026 offers a similar 6.8-inch LCD display and the same 5,200mAh battery, but the MediaTek Dimensity 6300 inside is a step down from the NXTPAPER 70 Pro’s Dimensity 7300. The Dimensity 7300 uses a newer 4nm fabrication process (compared to the Dimensity 6300’s 6nm) and delivers up to 67% better performance. Have a look at the performance figures:

A few factors work in favor of the Moto G Power (2026): it features tougher Gorilla Glass 7i protection and IP68/IP69 dust and water resistance, but that’s about it. On all other fronts, the NXTPAPER 70 Pro offers equal or better features. The Moto G Power 2026 costs $189 on a similar T-Mobile contract and $299 on Amazon without one, so there’s no price advantage either.

As you can see, the TCL NXTPAPER 70 Pro beats the Samsung Galaxy A17 and Moto G Power 2026 on most fronts at a similar price. 

What about the Pixel 10a?

This is where it gets interesting. At $499, the Google Pixel 10a is not a phone I should consider for this comparison. But it is a genuinely great phone, a gold standard for mid-range Android, and I am not going to pretend otherwise.

It features a 6.3-inch OLED display, a 48MP camera, seven years of updates, a more powerful Tensor G4 chipset, and Google’s AI features baked deep into the software. 

But the Pixel 10a has a smaller battery and does not support expandable storage. Also, the TCL NXTPAPER 70 Pro costs $199, and that $300 gap is doing a lot of heavy lifting. And throughout these comparisons, we haven’t even touched on the TCL NXTPAPER 70 Pro’s standout feature: the NXTPAPER 4.0 display.

That display is what makes this phone genuinely special. TCL’s NXTPAPER 4.0 is not a software night mode or a cheap filter. It uses hardware-level changes, including circular polarized light, DC dimming that eliminates screen flicker, and a filter that reduces harmful blue light. 

The phone is certified by TÜV and SGS, independent bodies that test these claims rather than take a company’s word for it. A dedicated NXTPAPER key on the side instantly switches between full-color mode, Ink Paper Mode, and Max Ink Mode, letting you use it as a normal phone or as an e-reader. In Max Ink Mode, the battery lasts up to seven days.

None of the other phones on this list offer these incredible display innovations. This feature alone makes the NXTPAPER 70 Pro worth buying. But even if you disregard it, you have seen that the NXTPAPER 70 Pro offers better features at comparable prices to all other phones in its price segment. 

If you spend long hours staring at your phone for work, school, or reading, no phone at this price comes close to what TCL is offering. At $199, the TCL NXTPAPER 70 Pro is not a budget phone that asks you to make compromises. It is a genuinely good phone with one feature that no one else has figured out yet. That makes it a very easy recommendation.

OpenAI faces investigation over ChatGPT concerns

Just when it seemed like OpenAI was gearing up for its next big leap, possibly even an IPO, it’s now facing some serious scrutiny. And this time, it’s not just critics online; it’s a full-blown government investigation. And yeah, things are getting intense.

OpenAI is now under investigation, and it’s not a small one

Florida Attorney General James Uthmeier has launched a probe into OpenAI and its chatbot, ChatGPT. The concerns being raised go beyond the usual AI debates, as this one touches on national security, data handling, and real-world harm.

Today, we launched an investigation into OpenAI and ChatGPT.

AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.

Wrongdoers must be held accountable. pic.twitter.com/vRVCqIYKnB

— Attorney General James Uthmeier (@AGJamesUthmeier) April 9, 2026

As reported by Reuters, the investigation is looking into whether OpenAI’s technology or data could potentially fall into the wrong hands, including foreign adversaries. There are also claims linking ChatGPT to harmful use cases, ranging from misuse in criminal activity to concerns around self-harm and unsafe content.

Subpoenas are reportedly on the way, which means this isn’t just talk; it’s a formal escalation. And all of this is happening right as OpenAI is being seen as a potential IPO candidate, with valuations being thrown around in the trillion-dollar range. The timing could complicate things further, as increased regulatory scrutiny may dent investor confidence and slow how aggressively the company can move toward a public listing.

This could get messy, fast

Let’s be real, AI companies have been skating on thin ice when it comes to regulation. Rapid growth, massive user bases, and real-world impact were always going to attract attention eventually. But the timing here is what makes it spicy. OpenAI is scaling aggressively, pushing products like ChatGPT deeper into everyday life, and potentially preparing for a public offering. Getting hit with a government probe right now is not ideal.

At the same time, this might just be the beginning. Because once governments start asking questions about how AI is being used, and misused, it’s not just about one company anymore. It’s about the entire industry getting put under the microscope.
