
Fitbit’s Personal Health Coach Will Soon Understand Your Medical Records

Monitoring your health has never been easier thanks to wrist- and finger-worn fitness trackers. But analyzing the collected data has largely been left to the user. Until recent years, that is, when some of the tech companies that make these wearables launched their own AI health coaches.

In October 2025, Google debuted its version, called Coach and powered by Gemini AI, for US Fitbit Premium subscribers on Android. However, the October launch was just a preview, with the company requesting feedback from early adopters. This February, Google expanded the public Coach preview to include iOS users and Fitbit Premium members in Canada, the UK, Australia, New Zealand and Singapore. Google announced Tuesday at The Check Up, its annual health event, that it's adding new features to its all-in-one fitness trainer, sleep coach and health advisor.

Improved sleep insights and scoring

For sleep tracking, the company’s most significant update yet delivers a 15% increase in sleep stage accuracy, based on comparisons between its latest and previous algorithms across compatible Pixel and Fitbit devices.

The updated model is also better able to differentiate between when you're trying to sleep and when you're actually asleep. It can detect when you're napping, when your sleep has been interrupted and when you're transitioning between sleep stages.

In a few weeks, these enhancements will all contribute to a revamped Sleep Score that won't just focus on how much sleep you got, but also on how long it took you to fall asleep. Because it has more sleep data to work with, Coach will be able to provide more informed insights and recommendations for better sleep.

Fitbit's upcoming personal health coach updates center around sleep, medical records and continuous glucose monitor data. (Google)

Medical record availability 

In April, US subscribers will be able to link their medical records, such as medications, lab results and doctor visit history, in the Fitbit app. 

This feature was created in collaboration with b.well Connected Health, an AI-powered digital health platform that aggregates health data from different providers, and Clear, the identity verification platform known from airport security.

In the Fitbit app, you can search for your doctor and then link to their member portal. Or if you use Clear to verify your identity with a selfie and a valid ID, it will search for medical records on your behalf. Availability will depend on your provider. 

Once you verify your identity, Fitbit's personal health coach can access your medical records. (Google)

Fitbit’s Coach can then use your medical history to create more personalized guidance that combines your lab results, data collected by your Fitbit and any other relevant information it collects from your records. In several months, users will be able to share these records and summaries with their provider or family members using a QR code or Smart Health Link URL.

What it will look like once you're able to share your Fitbit health summary with a doctor or relative. (Google)

Privacy is always a concern

Privacy experts caution people to think twice before uploading medical information into an AI tool. 

Fitbit says it securely stores your medical records and that you control how your data is used, whether it’s shared and whether it’s deleted. The company also says your medical records won’t be used for ads. 

AI health coaches are not a replacement for a doctor, as they can’t diagnose or treat medical conditions. You shouldn’t make any changes to your lifestyle or health routine without consulting your own doctor. 

The future of Fitbit’s personal health coach

Google also announced that it's investing in health research on topics such as predicting insulin resistance from wearable data, hypertension, and how AI performs in virtual care settings. These study topics help us get a sense of what Google may have in store for future Fitbit updates.

In April, Fitbit members in the public preview will also be able to connect a continuous glucose monitor to the Fitbit app via Health Connect, which lets you see all your health data from compatible apps in one place. According to a Google representative, any CGM that supports a Health Connect integration will work, including Dexcom sensors and Abbott's Lingo. With this connection, Fitbit members can ask their Coach how their workouts or meal choices affect their glucose levels.

Meta commits another $21 billion to CoreWeave, bringing total AI cloud spend to $35 billion

In short: Meta has committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity running from 2027 through December 2032, bringing the total value of the two companies' infrastructure relationship to approximately $35 billion. The new contract will deliver early deployments of Nvidia's Vera Rubin platform across multiple sites, and is designed specifically for inference workloads rather than training. Alongside the announcement, CoreWeave disclosed plans to raise $4.25 billion in new debt ($3 billion in convertible notes and $1.25 billion in junk bonds) to fund continued expansion. CoreWeave shares rose around 5% on the news; Meta shares gained roughly 3%.

From Ethereum mining to a $35 billion Meta relationship

CoreWeave was founded in 2017 in New Jersey as Atlantic Crypto, a commodity traders' side project mining Ethereum using graphics processing units. When the 2018 cryptocurrency crash made mining uneconomical and Ethereum's eventual move to proof-of-stake threatened to render GPU mining obsolete entirely, the founders (Michael Intrator, Brian Venturo and Brannin McBee) recognised that the GPU inventory they had accumulated was exactly what machine learning researchers needed and could not easily access through conventional cloud providers. The company was renamed CoreWeave in 2019 and pivoted to GPU cloud infrastructure. It went public on March 28, 2025, at $40 per share, valuing it at $23 billion. Its 2025 revenue reached $5.13 billion, up 168% year on year, and its contracted backlog is estimated at more than $66 billion. The first Meta agreement, worth $14.2 billion and announced in September 2025, established CoreWeave as a serious counterpart to the hyperscale cloud providers. The April 9, 2026 expansion of an additional $21 billion makes Meta the most significant commercial relationship in CoreWeave's history, with a combined commitment that will sustain the company's revenue base through the end of the decade.

What Meta is actually buying

The contract is specifically structured around inference rather than training. Meta's Llama model family is open-weight and freely downloadable, which means the capital-intensive training phase is largely complete before any cloud contract is signed; the ongoing cost is serving those models to billions of users in real time. Inference at Meta's scale (hundreds of millions of daily active users across Facebook, Instagram, WhatsApp and Meta AI) requires sustained, low-latency compute across distributed infrastructure in a way that Meta's own data centres cannot always absorb at peak capacity. CoreWeave will deploy that capacity across multiple locations and will include some of the first commercial deployments of Nvidia's Vera Rubin platform, which the chipmaker unveiled at GTC 2026 in March as the next generation of its AI infrastructure hardware. The new deal supplements rather than replaces Meta's internal build-out. Meta has guided for $115 billion to $135 billion in capital expenditure in 2026, with AI infrastructure identified as the primary driver, and the company has been explicit that it is building both owned data centres and sourcing external capacity simultaneously. The CoreWeave expansion follows a $27 billion infrastructure deal Meta signed with Nebius in March 2026, under which the Dutch neocloud operator will supply dedicated compute starting in early 2027, also featuring early Vera Rubin deployments. The two deals together illustrate that Meta is not simply procuring cloud capacity but building a diversified multi-vendor infrastructure position designed to give it flexibility and redundancy at hyperscale.

The customer diversification play

For CoreWeave, the Meta expansion solves a problem that has shadowed the company since its IPO: excessive revenue concentration. Microsoft represented 62% of CoreWeave’s 2024 revenue, a figure that made institutional investors uncomfortable and that the company has been working to reduce. With the new Meta commitment in place, CoreWeave CEO Michael Intrator said no single customer would represent more than 35% of total sales. That is still a significant concentration, but it is a materially different risk profile from a position where a single hyperscale customer controls the majority of your revenue. Nvidia, which made a $2 billion strategic investment in Nebius in March 2026 and has deepened its commercial relationships with every major AI cloud provider, sits at the centre of CoreWeave’s business model: CoreWeave’s entire infrastructure is built around Nvidia GPUs, and the Vera Rubin deployments in the Meta contract will extend that dependency into the next hardware generation. CoreWeave also recently expanded its agreement with OpenAI by up to $6.5 billion, further broadening its customer base beyond Microsoft. The company’s stock reached an all-time high of $187 in mid-2025 before pulling back to around $65 in late 2025 amid broader concerns about AI investment returns; following the Meta expansion announcement it was trading in the $88 to $95 range.

The debt that funds it all

AI cloud infrastructure is expensive to build before contracts start generating revenue, and CoreWeave has funded its growth primarily through debt. Alongside the Meta deal announcement, the company disclosed plans to raise $4.25 billion in new financing: $3 billion in convertible senior notes due 2032, carrying a coupon of between 1.5% and 2%, with an option for investors to convert into equity; and $1.25 billion in senior unsecured notes due 2031 at approximately 10%, effectively junk-bond pricing. CoreWeave's total debt load sits at around $30 billion, roughly triple what it was a year earlier. The company's argument for the debt structure is that its contracted revenue base (more than $66 billion in backlog) provides sufficient visibility to service the obligations. Intrator has described CoreWeave as an "AI factory" whose capital costs are underwritten by long-term customer commitments before infrastructure is built. The broader AI infrastructure financing environment has been characterised by similarly large-scale debt structures: SoftBank secured a $40 billion bridge loan to fund its $30 billion follow-on OpenAI investment as part of the Stargate project, illustrating that the capital requirements of AI at scale are now large enough to require financing instruments that did not exist in this form even two years ago. The year 2025 cemented AI infrastructure as the primary competitive variable in the technology industry, and CoreWeave, a company that began as a closet of Ethereum mining rigs, has positioned itself as a load-bearing pillar of that infrastructure, one $21 billion commitment at a time.

The diverse responsibilities of a principal software engineer

Liberty IT’s Sarah Whelan discusses the skills she uses daily and her reaction to her nomination as part of Liberty IT’s Culture Stars initiative.

“I’m a principal software engineer in the data space at Liberty IT, leading data pipeline enablement and experimentation to help product and analytics teams deliver reliable data and run faster experiments,” said Sarah Whelan. 

A working day might involve designing reusable patterns, templates and tooling while working across functions to improve observability, testing and delivery practices, according to Whelan, who is also involved in a company group designed for women in STEM. 

She told SiliconRepublic.com, “Alongside my day job, I co‑chair the Women in Tech employee group and mentor junior engineers, providing career guidance and technical coaching. 

“That work focuses on removing barriers through skills workshops, resources for career growth and forums where diverse voices can share experiences. The group runs mentoring circles, interview practice sessions and visibility events that create concrete opportunities and help normalise diverse career paths in engineering.”

If there is such a thing, can you describe a typical day at work?

My day balances technical tasks and collaboration. I’ll scan pipelines and deployment health first, address urgent alerts, then focus on code reviews. For me, reviews are an opportunity to mentor, surface better approaches and make our work more maintainable. I set aside time for architecture discussions and documenting decisions so future work is clearer.

I spend time working with our product teams to shape the roadmap, meet stakeholders to understand their problems and identify solutions, and coordinate with other teams to resolve dependencies. I also plan and run mentoring sessions and Women in Tech events, organising speakers, agendas and logistics.

What types of projects do you work on?

My work delivers dependable data platforms for analytics and machine learning. I build production-grade data pipelines that give teams reliable, well-instrumented datasets. To make delivery repeatable, I design experimentation frameworks, templates and patterns that reduce manual effort.

I focus on observability, testing and scaling so pipelines stay performant and lead enablement sessions that teach people how to use the tools and run experiments without heavy engineering support.

What skills do you use on a daily basis?

I use core data engineering skills every day: Python for transformations and orchestration, SQL for modelling and validation, and testing and monitoring to keep systems dependable. I pair that with careful, experimental thinking, small trials, metric tracking and incremental rollouts, so changes are low-risk and measurable.

On the people side of things, clear communication, active listening and regular collaboration help turn technical work into useful outcomes. I focus on creating easy pathways for success by mentoring colleagues, running pairing sessions for practical learning and producing simple playbooks that let teams self‑serve.

What is the hardest part of your working day?

The hardest part is switching gears – going from fixing urgent production issues to design workshops or running hands‑on pairing sessions can really break your flow. I try to make it easier by agreeing priorities with the team, protecting blocks for focused work and keeping documentation up to date so I can pick up where I left off. Quick handovers and regular check‑ins also keep longer‑term work visible.

Do you have any productivity tips that help you through the working day?

I use a to-do list to track outstanding tasks and review it each morning to plan and prioritise my day. I block focused time in my calendar for heads‑down work, which helps me avoid context switching. I document everything in a central, easily accessible location so the team never has to ‘figure something out’ twice. I also make mentoring a recurring calendar item, so coaching happens regularly.

When you first started this job, what were you most surprised to learn was important in the role?

I was surprised by how much context and communication matter; technical solutions alone rarely succeed without stakeholder buy‑in and agreed processes. I also didn’t expect observability and experiment rigour to be so central. Good monitoring, testing and repeatable experiment practices are what make pipelines reliable in production.

Finally, the value of documentation and small, consistent practices (like decision logs and runbooks) became obvious fast – they save time and prevent firefighting.

How has your role changed as the sector has grown and evolved?

The arrival of generative AI has raised the bar; it requires high‑quality, well‑labelled data, feature management, stronger data contracts and privacy controls, plus new inference and embedding pipelines and model observability, which makes the role more strategic and cross‑functional. At the same time, there’s a steady stream of new tools and platforms, so a crucial skill is distinguishing genuinely useful technology from marketing hype and choosing tools that solve real problems.

What do you enjoy most about the job?

I enjoy making things better for the people I work with. Most of my role is about simplifying data delivery so users get reliable, timely datasets and can make decisions faster. Each day, I try to keep the team unblocked, staying on top of potential issues so colleagues can get on with their day‑to‑day work with minimal friction.

What I like most about the job is knowing my work makes other people’s lives easier, whether that’s a data user getting answers faster or a teammate having one fewer thing to worry about. I also enjoy helping others build skills and confidence, and access opportunities. Practically, that looks like one‑to‑one coaching, structured pairing sessions and setting up repeatable playbooks so people can succeed without constantly relying on one person.

I often run knowledge‑sharing sessions or demos to share what I’ve learned and get feedback. It’s great to see patterns I’ve created adopted by other teams. When I notice incremental improvements or hear someone say a change saved them time, it reminds me why this work matters.

You received a nomination as part of Liberty IT’s Culture Stars initiative – tell us more about what this nomination meant to you?

The nomination in the ‘Be Brilliant’ category recognised mentorship, teamwork and pragmatic technical leadership. Seeing my mentee secure a promotion was the proudest, most concrete outcome; it showed the real, human impact of focused coaching and regular feedback.

The nomination also acknowledged the everyday teamwork and practical improvements I champion to make our pipelines more reliable. Being recognised was validation that consistent, sometimes unglamorous work – supporting others, documenting decisions and removing roadblocks – does make a difference. 

Amazon’s chip business could be worth $50 billion, Jassy says, and he hints it may sell them externally

In short: Andy Jassy's annual letter to shareholders, published on 9 April 2026, reveals that Amazon's custom chip business, covering Graviton, Trainium and Nitro, generates more than $20 billion in annualised revenue, growing at triple-digit rates year on year. Priced the way a standalone chipmaker like Nvidia prices its silicon, Jassy says, the business would generate roughly $50 billion in annual revenue. He also signals that Amazon may begin selling those chips directly to third parties, and defends the company's $200 billion capital expenditure plan for 2026 as grounded in committed customer demand rather than speculation.

“Not on a hunch”: the $200 billion bet

Jassy opened the letter’s financial argument with a direct rebuttal of the scepticism that has surrounded Amazon’s capital commitments. “We’re not investing approximately $200 billion in capex in 2026 on a hunch,” he wrote. “We’re not going to be conservative in how we play this. We’re investing to be the meaningful leader, and our future business, operating income, and free cash flow will be much larger because of it.” The context for that claim is a company that saw its free cash flow fall from $38 billion to $11 billion last year, driven by a $50.7 billion increase in capital spending, the bulk of it committed to AI infrastructure.

The defence rests on customer commitments already in place. Of the CapEx expected to be deployed in 2026, Jassy said a substantial portion already has customer backing, citing as one example OpenAI’s commitment of more than $100 billion to AWS. That commitment, which expanded an existing $38 billion seven-year partnership struck in November 2025, also includes OpenAI consuming approximately two gigawatts of Trainium capacity through AWS infrastructure. SoftBank, which holds a majority stake in OpenAI and has been financing its infrastructure build through mechanisms including a $40 billion bridge loan, is in effect underwriting part of the demand that Jassy is now pointing to as validation for his CapEx stance.

A $50 billion chip business hiding in plain sight

Amazon’s custom silicon programme spans three product lines. Graviton is a custom CPU that Jassy says delivers more than 40% better price-performance than comparable x86 processors, the market that Intel and AMD dominate. It is now used by 98% of the top 1,000 EC2 customers, a figure that reflects a shift in the economics of cloud compute that has been underway for several years. Demand is sufficiently intense that two large AWS customers asked whether they could purchase all available Graviton capacity for 2026. Amazon declined.

Trainium is the AI training and inference accelerator that represents Amazon’s most direct response to Nvidia. Trainium2, which Jassy says offers roughly 30% better price-performance than comparable GPU alternatives, has largely sold out. Trainium3, which began shipping in early 2026 and offers a further 30 to 40% improvement in price-performance over Trainium2, is nearly fully subscribed, with Uber among the companies that have moved workloads onto it. Trainium4, still approximately 18 months from broad availability and featuring interoperability with Nvidia’s NVLink Fusion interconnect technology, has already been significantly reserved. Nitro, the custom network and security chip that underpins AWS’s virtualisation layer, completes the three-chip portfolio. Together, Jassy says the three lines produce more than $20 billion in annualised revenue, growing at triple-digit percentage rates year-on-year. “If we were a standalone chip company,” he writes, “our chips would be generating over $50 billion in annual revenue.” The business currently exists entirely within AWS; customers access Trainium and Graviton through EC2 instances rather than buying chips directly.

At scale, Jassy argues, Trainium will “save us tens of billions of capex dollars per year, and provide several hundred basis points of operating margin advantage versus relying on others’ chips for inference.” That claim is central to the investment thesis underpinning the $200 billion CapEx programme: custom silicon is not only a competitive differentiator but a structural cost advantage that compounds over time as the ratio of inference to training in AI workloads continues to rise.

The Nvidia relationship, and the “new shift”

Jassy is careful in how he frames the competitive dynamic with Nvidia. “We have a strong partnership with NVIDIA, will always have customers who choose to run NVIDIA,” he writes, while also asserting that “virtually all AI thus far has been done on NVIDIA chips, but a new shift has started.” Customers, he says, “want better price-performance.” Nvidia, which reported revenue of $68.1 billion in the fourth quarter of 2025, a 73% year-on-year increase, entered 2026 from a position of market dominance that Amazon’s custom silicon is chipping away at from within the AWS customer base rather than in any broader merchant market. Trainium4’s incorporation of NVLink Fusion means Amazon is also building in a bridge rather than a wall: customers can combine Trainium accelerators with Nvidia GPUs within the same system, preserving optionality for enterprises that have invested heavily in Nvidia’s software stack.

The letter’s most consequential signal on chips, however, may be a single sentence about the future: “There’s so much demand for our chips that it’s quite possible we’ll sell racks of them to third parties in the future.” Amazon currently monetises its custom silicon exclusively through EC2 compute services. Selling chips directly would represent a structural shift in its competitive posture, placing it in the merchant silicon market alongside Nvidia and AMD, and allowing the economics of the chip business to be assessed independently of the cloud revenue it currently underpins.

Bedrock, Amazon Leo, and the broader picture

The shareholder letter situates the chip business within a wider AI infrastructure thesis. Amazon Bedrock, the managed service through which AWS customers access foundation models including Amazon’s own Nova family, processed more tokens in Q1 2026 than in all prior periods combined, with inference volumes “nearly doubling month-over-month” in March. AWS’s AI revenue run rate crossed $15 billion in Q1 2026, a figure Jassy contextualises by noting it represents growth roughly 260 times faster than AWS experienced at a comparable stage of its development.

Jassy also uses the letter to frame Amazon’s satellite internet service, Amazon Leo, as a competitive counterpart to SpaceX’s Starlink, having already secured contracts with Delta Air Lines, JetBlue, AT&T, Vodafone, and NASA. The satellite and chip disclosures share an underlying argument: that Amazon is building infrastructure at a scale and across categories that most observers have not fully priced in. The legal scrutiny that has begun to attach itself to Amazon’s AI products, including a proposed class action over the training data used for Nova Reel, represents one category of risk that the letter does not address. The year 2025 established AI infrastructure as the central capital allocation question for the technology industry, and Jassy’s letter is, in part, an argument that Amazon arrived at the right answer earlier and more decisively than the market has yet recognised.

OpenAI pauses Stargate UK over energy costs

Stargate UK will move forward when ‘right conditions’ enable ‘long-term infrastructure investment’, OpenAI said.

OpenAI is pausing its Stargate initiative in the UK, citing energy costs and regulatory burdens.

In a statement to major news publications, the company said that it is continuing to explore Stargate UK and will move forward when the “right conditions such as regulation and the cost of energy” enable it to make “long-term infrastructure investment”.

OpenAI first announced the project last September in collaboration with Nvidia and UK AI infrastructure provider Nscale. The initiative was seen as a step forward in cross-national technology partnership, with its announcement coinciding with US president Donald Trump’s visit to the UK.

For UK prime minister Keir Starmer, Stargate represented a major nod from Big Tech firms supporting the country’s push to become a leader in the space. The OpenAI project was meant to support the UK’s ‘AI Growth Zone’, expected to create 5,000 new jobs and bring in £30bn in private investment.

Other companies, including Microsoft and Nvidia, have also made multibillion-dollar investment commitments in the UK. A government spokesperson told Bloomberg that the UK's AI sector has attracted more than £100bn since Starmer came into power in 2024.

Launched early last year, Stargate is a $500bn private-sector investment in OpenAI's infrastructure. The project's initial equity funders include OpenAI, Oracle, MGX and SoftBank, with Microsoft, Nvidia and Arm among the key technology partners.

A year since launching, Stargate’s Texas facility is already training AI systems, while a number of projects are underway in the US, as well as in the UAE and Norway. The company also announced a tie-up with India’s Tata Consultancy Services as part of Stargate.

OpenAI has been shuttering products as it refocuses on enterprise tools and plans for an initial public offering later this year. Late last month, it put plans for an erotic ChatGPT on hold "indefinitely", just days after it shut down its controversial AI video generator Sora.

It recently announced a $122bn funding round, placing the AI giant at a post-money valuation of $852bn.

Flush with cash: Washington startup lands up to $500M to deploy facilities treating sewage, dairy waste

Dairy cows at the Puyallup Fair, now called the Washington State Fair. (GeekWire Photo / Kurt Schlosser)

Wastewater treatment startup Sedron Technologies — a Washington company that once served Bill Gates a glass of water purified from sewage — announced it's being acquired by Ara Partners. The global private equity firm is taking a controlling stake in the business and investing up to $500 million to fund the deployment of Sedron's sewage and manure treatment technologies.

“The Ara investment is largely designed to provide us with the equity on our own balance sheet to scale up production of additional projects and plants across the country,” said Geoff Trukenbrod, interim CEO of Sedron.

The startup is deploying facilities that efficiently and sustainably treat sewage biosolids and dairy waste. Sedron’s business model is to finance, design, build, own, operate and maintain the sites, which cost about $100 million to $200 million to build.

The company generates revenue from the municipalities and farms that use its services, as well as from the sale of organic fertilizer and clean energy produced at the sites.

“Imagine having a bakery, and you get paid to get flour, and you get paid for your cookies,” said Stanley Janicki, Sedron’s chief commercial officer. “It’s a phenomenal business model, not that biosolids are cookies.”

Sedron’s dairy waste management facility in Fair Oaks, Ind., which handles manure from 20,000 cows. (Sedron Photo)

Sedron launched in 2014 as a spinoff from Janicki Industries, a longtime aerospace engineering and manufacturing company. Both are based in Sedro Woolley, a city north of Seattle in a largely agricultural stretch of Western Washington.

In 2011, Janicki received funding from what is now the Gates Foundation to develop a wastewater purification system, leading to Sedron’s launch and a video that went viral showing Bill Gates drinking a glass of water produced from sewage. The foundation supported the technology as a means for treating waste in developing countries where untreated sewage could otherwise spread pathogens.

The company is breaking ground this month on a regional waste treatment facility that will serve multiple municipalities that are home to 2 million people in South Florida. Operations are expected to begin in 2028.

Sedron's system takes municipal biosolids — the residual product from a wastewater treatment plant — and dries the material in an energy-efficient thermal dryer. The biosolids are about 85% water, most of which is evaporated and disposed of; the remaining material is fed into a biomass boiler to produce clean electricity. The energy generated helps run the dryer, and the excess electricity is sold. As an added benefit, the process destroys the PFAS "forever chemicals" contaminating wastewater.
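As a rough illustration of the drying step described above, here is a back-of-the-envelope mass balance in Python. Only the roughly 85% moisture figure comes from the article; the 100-tonne batch size is a hypothetical number chosen purely for illustration.

```python
# Back-of-the-envelope mass balance for a biosolids thermal dryer.
# The ~85% moisture figure is from the article; the batch size is hypothetical.

def dryer_mass_balance(wet_tonnes: float, moisture_fraction: float = 0.85):
    """Split wet biosolids into water to evaporate and dry solids for the boiler."""
    water = wet_tonnes * moisture_fraction   # largely evaporated in the dryer
    solids = wet_tonnes - water              # feedstock for the biomass boiler
    return water, solids

water, solids = dryer_mass_balance(100.0)
print(f"{water:.0f} t of water evaporated, {solids:.0f} t of solids to the boiler")
```

Even in this toy version, the proportions make the design pressure clear: the overwhelming majority of the incoming mass is water rather than fuel, which is why the dryer's energy efficiency matters so much to the process.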

The startup’s second line of business is managing manure from livestock operations — which is one of the biggest costs for a dairy farmer. Sedron takes the waste, removes the water for use in irrigation, and produces two high-value organic fertilizers: a solid material and a concentrated liquid nitrogen fertilizer. The fertilizers are sold nationwide for use on crops such as apples, berries and spinach.


Sedron's treatment process is more affordable than conventional lagoon storage, and it replaces the manure lagoons used to hold waste until it can be applied to fields as a liquid. The lagoons produce planet-warming methane and pose environmental threats: if they leak, escaped nutrients can stoke algal blooms in nearby waterways or contaminate drinking water.

The company has deployed its manure technology at two dairy farms in Indiana, including a 20,000-cow dairy, and expects to start operations at a Wisconsin farm this summer.

“Our focus is on positioning Sedron as the leader in circular waste management — converting waste into carbon negative commodities faster, more cost effectively, and with greater energy efficiency than any other solution available,” said Cory Steffek, a partner at Ara Partners, in a statement.

Sedron previously raised approximately $100 million in corporate debt and equity and about $200 million in project financing, some of which was institutional. All of the legacy shareholders rolled their equity forward, Janicki said.


The 275-employee company has offices in Washington state and Chicago, and operational facilities in Indiana, Wisconsin and Florida.

The startup is focused on U.S. deployments of its facilities, aiming to launch at least two new sites each year for the next five years, then potentially scaling up from there. Janicki said they’d still like to operate in developing countries to address that initial use case.

Sedron’s leadership emphasized the importance of delivering a service that resonates with investors and business partners, doesn’t require government support to succeed and also benefits the planet.

“As the world today is retreating somewhat from climate efforts,” Janicki said, “it’s exciting to be in a business that is positioned for exceptional growth and solving environmental problems while creating valuable products.”



Meet the ultra-compact NucBox K17 Mini PC delivering triple-digit AI performance and blazing-fast memory in a pocket-sized frame



  • NucBox K17 combines CPU, GPU, and NPU for full AI performance
  • The Intel Core Ultra 5 226V processor delivers efficient, high-speed computing
  • Integrated Arc 130V GPU offers 53 TOPS AI throughput using INT8 precision

GMKTec has introduced the NucBox K17 Mini PC with a focus on compact AI performance, combining a high-efficiency processor with integrated graphics and a dedicated neural unit.

The NucBox K17 is built around the Intel Core Ultra 5 226V processor, which features 8 cores and 8 threads manufactured on the TSMC N3B process.
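To put the Arc 130V's 53-TOPS INT8 rating in context, one can estimate a theoretical throughput ceiling by dividing the available operations per second by a model's per-inference cost. The sketch below is illustrative only: the model cost is a hypothetical number, and real-world utilization always falls well short of 100%.

```python
# Theoretical inference-throughput ceiling from an INT8 TOPS rating.
# Assumptions: 100% utilization (never reached in practice) and a
# hypothetical model costing 5 billion INT8 operations per inference.
TOPS = 53                   # Arc 130V INT8 rating from the spec sheet
ops_per_inference = 5e9     # hypothetical model cost, not a benchmark

ceiling = (TOPS * 1e12) / ops_per_inference
print(f"theoretical ceiling: {ceiling:,.0f} inferences per second")
```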


Pixelmator Pro & Logic get big upgrades, rest of iWork gets minor ones


Apple rolled out updates across its creative and productivity apps. Logic Pro and Pixelmator Pro gained new features, while the rest of the iWork lineup received bug fixes and stability improvements.

Apple Creator Studio apps

Apple’s Creator Studio bundle includes pro tools like Final Cut Pro, Logic Pro, Motion, Compressor, and Pixelmator Pro, along with productivity apps like Pages, Keynote, and Numbers. It’s a unified platform for creating and publishing across different workflows.
Apple delivered updates through the App Store, with most apps receiving maintenance-focused changes for reliability and platform stability.


Google Cloud deepens AI infrastructure partnership with Intel across Xeon and custom chips


In short: Google Cloud and Intel have announced a deepened multi-year AI infrastructure partnership covering both CPU deployment and custom chip co-development. Google Cloud will continue adopting Intel’s Xeon 6 processors across its global infrastructure for C4 and N4 instances, while the two companies are expanding their joint development of custom Infrastructure Processing Units designed to offload networking, storage, and security from host CPUs in hyperscale AI environments. The announcement arrives as Intel’s stock surged approximately 33% on the week and two days after the company signed on as the foundry partner for Tesla’s Terafab megaproject.

“Balanced systems”: the case Intel and Google are making together

The central argument of the partnership, as framed by both companies, is that GPU accelerators alone are not sufficient to handle the demands of modern AI infrastructure. In a statement accompanying the announcement, Lip-Bu Tan, Intel’s chief executive, said: “AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators — it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.” The language is deliberate. Intel has spent much of the past two years repositioning from the general-purpose computing market it once dominated toward a more specific thesis: that the CPU and custom infrastructure silicon have a structural role in AI deployments that GPU-centric narratives have consistently underestimated.

Amin Vahdat, Google’s senior vice president and chief technologist for AI infrastructure, made the case from the demand side. “CPUs and infrastructure acceleration remain a cornerstone of AI systems — from training orchestration to inference and deployment,” he said. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.” The framing of the partnership as a multi-generational CPU roadmap commitment, rather than a one-cycle procurement agreement, is significant: it implies Google has made decisions about its infrastructure architecture several years out on the basis of Intel’s product trajectory, and that trajectory includes both the Xeon line and the custom IPU co-development effort.

Xeon 6 in Google Cloud

The CPU component of the partnership centres on Intel’s Xeon 6 processor family, which Google Cloud has deployed across its workload-optimised C4 and N4 instance types. Google says the C4 instances deliver more than 2.0 times the total cost of ownership benefit compared with predecessor configurations, a figure that captures the combination of performance uplift and power efficiency that Intel has positioned as Xeon 6’s core competitive claim. The agreement extends beyond the current generation: Google has committed to multi-generational alignment with Intel’s Xeon roadmap, meaning its infrastructure planning incorporates Intel’s future CPU releases as a known variable rather than a contingent one. Google has simultaneously been deepening its custom silicon commitments on the accelerator side, supplying Anthropic with approximately one gigawatt of TPU capacity through Broadcom in a deal that anchors Anthropic’s AI infrastructure through 2027 and beyond — a parallel track that reflects how Google is building out its infrastructure portfolio across both standard and custom silicon simultaneously.
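Google's claim that C4 instances deliver "more than 2.0 times the total cost of ownership benefit" folds performance uplift and power efficiency into a single ratio. The sketch below shows how such a figure is typically computed; every input is a hypothetical placeholder, not Google's data.

```python
# Simplified TCO-per-unit-of-work comparison (all inputs hypothetical).
# "TCO benefit" = cost per unit of work on the old platform divided by
# cost per unit of work on the new one.

def cost_per_work(capex: float, power_cost: float, perf: float) -> float:
    return (capex + power_cost) / perf

old = cost_per_work(capex=100.0, power_cost=60.0, perf=1.0)  # predecessor instance
new = cost_per_work(capex=110.0, power_cost=50.0, perf=2.0)  # faster, more efficient

print(f"TCO benefit: {old / new:.2f}x")
```

The shape of the arithmetic is the point: a modest capex increase can still produce a 2x TCO benefit when the performance uplift and power savings are large enough, which is the combined claim Intel and Google are making for Xeon 6.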


The CPU architecture context matters for understanding why this commitment is being made public now. As AI workloads shift from the training phase, which is GPU-intensive and relatively concentrated among a small number of hyperscalers, toward inference at scale, which is distributed, latency-sensitive, and runs continuously across large server fleets, the cost structure of AI infrastructure changes. Inference places sustained demands on CPU resources for orchestration, data pre-processing, and system management that training pipelines do not. Google’s bet on Xeon 6 for its C4 and N4 instances is, in part, a bet that inference economics will make CPU efficiency a first-order concern in the years ahead.
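The CPU-side burden of inference serving can be sketched in miniature. The code below is an illustrative stand-in, not Google's stack: the host CPU pre-processes and batches incoming requests, and only prepared batches ever reach the (stubbed-out) accelerator.

```python
# Minimal sketch of CPU-side orchestration in an inference service.
# All names and the accelerator stub are illustrative assumptions.

BATCH_SIZE = 4

def preprocess(raw: str) -> list[float]:
    # CPU-bound input preparation (real services tokenize, resize, normalize).
    return [ord(c) / 255.0 for c in raw]

def accelerator_stub(batch: list[list[float]]) -> list[int]:
    # Placeholder for the GPU/TPU call; returns a dummy result per item.
    return [len(item) for item in batch]

def serve(requests: list[str]) -> list[int]:
    prepared = [preprocess(r) for r in requests]      # CPU: pre-processing
    results: list[int] = []
    for i in range(0, len(prepared), BATCH_SIZE):     # CPU: batching/orchestration
        results.extend(accelerator_stub(prepared[i:i + BATCH_SIZE]))
    return results

print(serve(["hi", "hello", "hey"]))
```

In a real fleet this loop runs continuously across millions of requests, which is why inference places sustained load on the CPU in a way that batch-oriented training pipelines do not.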

The custom IPU programme

The more strategically significant element of the partnership is the expanded co-development of Infrastructure Processing Units. IPUs are custom ASIC-based programmable accelerators designed to take over the networking, storage, and security functions that would otherwise run on host CPUs, freeing those CPUs to focus entirely on application and AI workload processing. In hyperscale environments, where these infrastructure tasks consume a substantial and growing fraction of available compute, offloading them to a dedicated accelerator can significantly improve utilisation rates, energy efficiency, and the consistency of workload performance. Intel and Google have been collaborating on IPU development, and the announcement signals that this work is expanding in scope rather than narrowing. The specific technical details of the expanded programme — die design, process node, performance targets, and deployment timeline — have not been disclosed publicly.
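The utilization case for IPUs can be made concrete with one hedged calculation. The 30% overhead figure below is an assumption for illustration, not a disclosed measurement; the point is only that offloading infrastructure work directly enlarges the share of CPU cycles left for applications.

```python
# Illustrative effect of offloading infrastructure tasks to an IPU.
# Assumption: networking, storage, and security consume a hypothetical
# 30% of host-CPU cycles when they run on the host itself.
infra_share = 0.30

app_share_without_ipu = 1.0 - infra_share   # 70% of the CPU left for applications
app_share_with_ipu = 1.0                    # offload frees the full CPU
gain = app_share_with_ipu / app_share_without_ipu - 1.0

print(f"application capacity gained by offloading: {gain:.0%}")
```

Under this assumed overhead the host gains roughly 43% more application capacity, the kind of arithmetic that makes dedicated offload silicon attractive at hyperscale.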


Nvidia, whose fourth-quarter 2025 revenue reached $68.1 billion on 73% year-on-year growth and which used its GTC 2026 conference in March to position its full-stack platform as the default environment for AI infrastructure, is the implicit competitive reference point for both components of the Intel-Google partnership. Intel is not attempting to displace Nvidia’s GPU accelerators in training workloads; it is arguing that the system around those accelerators — the CPUs managing orchestration, the IPUs managing network and storage overhead, and the interconnects tying everything together — is where efficiency gains are increasingly available. That argument has a natural ally in Google, which has both the infrastructure scale to validate it empirically and commercial incentives to diversify away from a single-vendor accelerator dependency.

Intel’s strategic moment

The Google partnership arrives at a moment when Intel’s industrial position is changing rapidly. Two days before the Google announcement, Intel signed on as the primary foundry partner for Terafab, the $25 billion joint venture between Tesla, SpaceX, and xAI targeting one terawatt of AI compute per year, committing its 18A process node — the company’s most advanced logic manufacturing technology — to the project. The two announcements taken together suggest Intel is pursuing a two-track strategy: deepening its hyperscale cloud partnerships for CPU and IPU deployment while simultaneously building out its foundry business for the custom AI silicon market that Nvidia, AMD, and the hyperscalers’ in-house chip programmes have driven into existence. The stock market responded to the week’s announcements with a roughly 33% gain in Intel’s share price, the sharpest weekly move the company has recorded in years.

Whether the strategic repositioning is durable depends on execution. Intel’s 18A process node is the same technology that underpins its foundry credibility with customers like Tesla, and its delay history has been a persistent source of investor concern. The Xeon 6 deployment in Google Cloud and the IPU co-development programme are both contingent on Intel shipping what its roadmap promises on the timelines Vahdat’s statement implies Google has factored into its own planning. The AI infrastructure market that Intel is trying to enter has become one of the most heavily capitalised segments in technology, with deals such as Meta’s $27 billion agreement with Nebius in March 2026 illustrating the scale of commitments being made across the industry. The year 2025 shifted the centre of gravity in AI from model development to infrastructure deployment, establishing capital expenditure scale and infrastructure access as the primary competitive variables — and Intel, for the first time in several years, is making a credible case that it belongs in that competition on multiple fronts simultaneously.


I don’t see a sane reason to pick another budget phone over the TCL NXTPAPER 70 Pro


The era of truly good budget phones is over, and you can blame AI for that. With chip costs rising, even flagship phones are feeling the pinch. That's why, when TCL finally brought the NXTPAPER 70 Pro to the US, it came as a big surprise to me. The phone costs just $199, nearly half the price you'd pay in other markets.

Yes, the phone is exclusive to T-Mobile, but at $199, the NXTPAPER 70 Pro felt like something else entirely: a 6.9-inch 120Hz display, IP68 water resistance, a 5,200mAh battery, a 50MP camera, and TCL's NXTPAPER 4.0 display technology, which is genuinely unlike anything else at this price. Naturally, I wanted to compare it to phones in a similar price range to see whether I could find a better deal.

So, I went looking for alternatives at a similar price and found three worth comparing: the Samsung Galaxy A17 5G, the Motorola Moto G Power 2026, and the Google Pixel 10a. None of them beats the TCL on price, performance, or features, and I've concluded that there's no reason to choose any other phone over the NXTPAPER 70 Pro right now. Let me show you what I mean.

But first, a quick specs comparison

| Specification | TCL NXTPAPER 70 Pro | Galaxy A17 5G | Moto G Power 2026 | Google Pixel 10a |
| --- | --- | --- | --- | --- |
| Display | 6.9-inch IPS LCD, 120Hz (1080 x 2340) | 6.7-inch Super AMOLED, 90Hz (1080 x 2340) | 6.8-inch IPS LCD, 120Hz (1080 x 2388) | 6.3-inch P-OLED, 120Hz (1080 x 2424) |
| Processor | MediaTek Dimensity 7300 (4nm) | Exynos 1330 (5nm) | MediaTek Dimensity 6300 (6nm) | Google Tensor G4 (4nm) |
| Cameras | 50MP f/1.9 24mm main; 8MP ultrawide (120˚); 32MP f/2.0 28mm selfie | 50MP f/1.8 24mm main; 5MP ultrawide; 2MP macro; 13MP f/2.0 selfie | 50MP f/1.8 main; 8MP ultrawide (119˚); 32MP f/2.2 selfie | 48MP f/1.7 25mm main; 13MP f/2.2 ultrawide (120˚); 13MP f/2.2 20mm selfie |
| Battery | 5,200mAh | 5,000mAh | 5,200mAh | 5,100mAh |
| Price | $199 (T-Mobile) | $189 (T-Mobile), $199 (unlocked) | $189 (T-Mobile), $299 (unlocked) | $499 (unlocked) |

Is there any competition at this price?

The Samsung Galaxy A17 5G is the obvious first comparison. It is Samsung’s best-selling budget phone, and for good reason. You get a solid 6.7-inch Super AMOLED display, a triple camera system, and an impressive six years of software updates. 

Advertisement

It is a reliable, no-frills phone that does the basics well. But it runs on the Exynos 1330, a chip that has been specifically called out for poor performance. Compared to the MediaTek Dimensity 7300 powering the TCL NXTPAPER 70 Pro, the Exynos 1330 is slower on CPU, GPU, and battery benchmarks.

It also has an IP54 rating, which means it is splash-resistant but not submersible. The NXTPAPER 70 Pro, by comparison, has a better chip, a better display, IP68 water resistance, and a more interesting feature set. The A17 sells for around $175 to $199. Simply put:

Same price. No contest.

The Moto G Power 2026 offers a similar 6.8-inch LCD display and the same 5,200mAh battery, but the MediaTek Dimensity 6300 inside is a step down from the NXTPAPER 70 Pro's Dimensity 7300. The Dimensity 7300 uses a newer 4nm fabrication process (versus the Dimensity 6300's 6nm) and delivers up to 67% better performance.

A few factors do work in the Moto G Power (2026)'s favor: it features tougher Gorilla Glass 7i protection and IP68/IP69 dust and water resistance. But that's about it; on every other front, the NXTPAPER 70 Pro offers equal or better features. The Moto G Power 2026 costs $189 on a similar T-Mobile contract and $299 on Amazon without one, so there's no price advantage either.

As you can see, the TCL NXTPAPER 70 Pro beats the Samsung Galaxy A17 and Moto G Power 2026 on most fronts at a similar price. 

What about the Pixel 10a?

This is where it gets interesting. At $499, the Google Pixel 10a shouldn't really belong in this comparison. But it is a genuinely great phone, the gold standard for mid-range Android, and I am not going to pretend otherwise.

It features a 6.3-inch OLED display, a 48MP camera, seven years of updates, a more powerful Tensor G4 chipset, and Google’s AI features baked deep into the software. 

But the Pixel 10a has a slightly smaller battery and no expandable storage. And the TCL NXTPAPER 70 Pro costs $199, so that $300 gap is doing a lot of heavy lifting. We also haven't yet touched on the NXTPAPER 70 Pro's standout feature: the NXTPAPER 4.0 display.

That display is what makes this phone genuinely special. TCL's NXTPAPER 4.0 is not a software night mode or a cheap filter. It uses hardware-level changes, including circularly polarized light, DC dimming that eliminates screen flicker, and a filter that reduces harmful blue light.

The phone is certified by TÜV and SGS, independent bodies that test these things rather than take a company’s word for it. A dedicated NXTPAPER key on the side instantly switches between full-color mode, Ink Paper Mode, and Max Ink Mode, allowing you to use it as a normal phone or as an e-reader experience. In Max Ink mode, the battery lasts up to seven days.

None of the other phones on this list offer these incredible display innovations. This feature alone makes the NXTPAPER 70 Pro worth buying. But even if you disregard it, you have seen that the NXTPAPER 70 Pro offers better features at comparable prices to all other phones in its price segment. 

If you spend long hours staring at your phone for work, school, or reading, no phone at this price comes close to what TCL is offering. At $199, the TCL NXTPAPER 70 Pro is not a budget phone that asks you to make compromises. It is a genuinely good phone with one feature that no one else has figured out yet. That makes it a very easy recommendation.


OpenAI faces investigation over ChatGPT concerns


Just when it seemed like OpenAI was gearing up for its next big leap, possibly even an IPO, it’s now facing some serious scrutiny. And this time, it’s not just critics online. It’s a full-blown government investigation. And yeah, things are getting a little intense.

OpenAI is now under investigation, and it’s not a small one

Florida Attorney General James Uthmeier has launched a probe into OpenAI and its chatbot, ChatGPT. The concerns being raised go beyond the usual AI debates, as this one touches on national security, data handling, and real-world harm.

Today, we launched an investigation into OpenAI and ChatGPT.

AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.

Wrongdoers must be held accountable. pic.twitter.com/vRVCqIYKnB


— Attorney General James Uthmeier (@AGJamesUthmeier) April 9, 2026

As reported by Reuters, the investigation is looking into whether OpenAI’s technology or data could potentially fall into the wrong hands, including foreign adversaries. There are also claims linking ChatGPT to harmful use cases, ranging from misuse in criminal activity to concerns around self-harm and unsafe content.

Subpoenas are reportedly on the way, which means this isn’t just talk but a formal escalation. And all of this is happening right as OpenAI is being seen as a potential IPO candidate, with valuations being thrown around in the trillion-dollar range. That timing could complicate things further, as increased regulatory scrutiny may impact investor confidence and how aggressively the company can move forward with its public listing plans.

This could get messy, fast

Let’s be real, AI companies have been skating on thin ice when it comes to regulation. Rapid growth, massive user bases, and real-world impact were always going to attract attention eventually. But the timing here is what makes it spicy. OpenAI is scaling aggressively, pushing products like ChatGPT deeper into everyday life, and potentially preparing for a public offering. Getting hit with a government probe right now is not ideal.

At the same time, this might just be the beginning. Because once governments start asking questions about how AI is being used, and misused, it’s not just about one company anymore. It’s about the entire industry getting put under the microscope.


Copyright © 2025