
Tech

Electric Candela P-12 Business Ferry Glides Across Water in Near Silence

Passengers board a ferry that feels more like a luxury executive lounge than any other boat on the water. The Candela P-12 Business delivers on that impression: the ride is so smooth and silent that conversations flow easily, you can hold a full cup of coffee without spilling it, and the journey itself becomes the highlight of the day. Candela devised a set of computer-controlled underwater wings, or hydrofoils, for this 12-meter electric ferry; once the boat reaches speed, they lift the hull clear of the water. Drag plummets, and waves simply pass beneath the hull rather than pounding against it.



Two electric motors drive the ferry, each rated at 110 kilowatts of continuous power with peaks of up to 160 kilowatts. They draw from a 378-kilowatt-hour battery pack, of which roughly 336 kilowatt-hours are usable. At a steady cruise of 25 knots, the ferry can cover up to 40 nautical miles on a single charge, which comfortably suits typical daily routes between islands, coastal towns, and city harbors. During testing, the top speed exceeded 30 knots. Recharging uses the same high-power DC stations built for heavy trucks and fast-charging electric vehicles.
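As a quick sanity check, those published figures hang together. The back-of-the-envelope sketch below is my own arithmetic, not Candela's published methodology:

```python
# Back-of-the-envelope check of the P-12's stated figures.
usable_kwh = 336   # usable battery energy, kWh
range_nmi = 40     # claimed range at cruise, nautical miles
cruise_kn = 25     # cruise speed, knots

energy_per_nmi = usable_kwh / range_nmi   # kWh per nautical mile
trip_hours = range_nmi / cruise_kn        # hours to run the pack down
avg_power_kw = usable_kwh / trip_hours    # implied average power draw

print(f"{energy_per_nmi:.1f} kWh/nmi, {avg_power_kw:.0f} kW average")
# 210 kW sits just under the 2 x 110 kW continuous rating, so the
# motors would cruise near, but within, their continuous limit.
```

The implied 210 kW average draw lining up with the combined 220 kW continuous motor rating suggests the range and cruise-speed claims are mutually consistent.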


At cruising speed, noise inside the cabin is roughly that of a normal conversation in a quiet room (63 to 64 decibels). Traditional speedboats, by contrast, often reach 85 to 95 decibels, and even modern diesel ships run at 65 to 75 decibels. For perspective, a 10-decibel drop sounds about half as loud to human hearing. The only sound is a mild hum from the motors; there are no roaring engines and no rhythmic slap of water against the hull. Candela has also added extra sound insulation and thick carpeting to keep the cabin as quiet as possible.
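That rule of thumb, each 10 dB increase roughly doubling perceived loudness, can be applied directly to the figures above. A minimal sketch; using the midpoints of the quoted ranges is my own assumption:

```python
def loudness_ratio(db_a: float, db_b: float) -> float:
    """Approximate how many times louder db_a sounds than db_b,
    using the rule of thumb that +10 dB doubles perceived loudness."""
    return 2 ** ((db_a - db_b) / 10)

cabin = 63.5      # P-12 cabin at cruise (midpoint of 63-64 dB)
speedboat = 90.0  # typical speedboat (midpoint of 85-95 dB)
diesel = 70.0     # modern diesel ship (midpoint of 65-75 dB)

print(f"Speedboat vs P-12 cabin: ~{loudness_ratio(speedboat, cabin):.1f}x as loud")
print(f"Diesel ship vs P-12 cabin: ~{loudness_ratio(diesel, cabin):.1f}x as loud")
```

By this estimate a typical speedboat sounds more than six times as loud as the P-12 cabin, and even a modern diesel ship sounds noticeably louder.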

Inside, the layout prioritizes comfort for up to 20 passengers, though 16 is the typical configuration. The seats are comfortable with plenty of legroom, and each has a built-in USB-C port to keep devices charged. A coffee bar keeps everyone refreshed, air conditioning keeps the temperature steady, and there is storage space in the back for passengers' gear. At night, a star-ceiling lighting system casts a calm, shifting glow overhead. Large panoramic windows let you take in the views in every direction, which is a huge plus. At the boarding point, a wide, sturdy ramp is a real help for anyone who needs extra access, whether with a stroller or a wheelchair.

Operators gain significantly from the P-12 Business's design. When the ferry is on its foils, energy consumption drops by up to 80% compared with a conventional vessel of the same size, which translates into lower running costs, since electricity is far cheaper than marine diesel. Because the hull rides above the surface, the boat also throws almost no wake, causing far less disturbance to shorelines and other vessels, and it produces much less underwater noise, a real benefit for marine life. All of this makes the P-12 Business a strong fit for routes where emissions and noise rules are tightening.
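To make the 80% figure concrete, it can be combined with the range numbers quoted earlier: 336 kWh usable over 40 nautical miles works out to about 8.4 kWh per nautical mile on foils. The conventional-hull baseline below is derived from that stated reduction, not a published Candela number, and the daily route length is illustrative:

```python
foiling_kwh_per_nmi = 336 / 40   # P-12 on foils: ~8.4 kWh/nmi
reduction = 0.80                 # stated energy saving when foiling

# If foiling uses only 20% of the baseline, the baseline is 5x higher.
conventional_kwh_per_nmi = foiling_kwh_per_nmi / (1 - reduction)

daily_route_nmi = 80             # e.g. two 40 nmi round trips per day
foiling_daily = foiling_kwh_per_nmi * daily_route_nmi
conventional_daily = conventional_kwh_per_nmi * daily_route_nmi
print(f"Foiling:      {foiling_daily:.0f} kWh/day")
print(f"Conventional: {conventional_daily:.0f} kWh/day")
```

On these assumptions, a conventional hull would burn through five times the energy for the same daily route, which is where the fuel-bill argument comes from.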

Tech

The diverse responsibilities of a principal software engineer


Liberty IT’s Sarah Whelan discusses the skills she uses daily and her reaction to her nomination as part of Liberty IT’s Culture Stars initiative.

“I’m a principal software engineer in the data space at Liberty IT, leading data pipeline enablement and experimentation to help product and analytics teams deliver reliable data and run faster experiments,” said Sarah Whelan. 

A working day might involve designing reusable patterns, templates and tooling while working across functions to improve observability, testing and delivery practices, according to Whelan, who is also involved in a company group designed for women in STEM. 

She told SiliconRepublic.com, “Alongside my day job, I co‑chair the Women in Tech employee group and mentor junior engineers, providing career guidance and technical coaching. 


“That work focuses on removing barriers through skills workshops, resources for career growth and forums where diverse voices can share experiences. The group runs mentoring circles, interview practice sessions and visibility events that create concrete opportunities and help normalise diverse career paths in engineering.”

If there is such a thing, can you describe a typical day at work?

My day balances technical tasks and collaboration. I’ll scan pipelines and deployment health first, address urgent alerts, then focus on code reviews. For me, reviews are an opportunity to mentor, surface better approaches and make our work more maintainable. I set aside time for architecture discussions and documenting decisions so future work is clearer.

I spend time working with our product teams to shape the roadmap, meet stakeholders to understand their problems and identify solutions, and coordinate with other teams to resolve dependencies. I also plan and run mentoring sessions and Women in Tech events, organising speakers, agendas and logistics.

What types of projects do you work on?

My work delivers dependable data platforms for analytics and machine learning. I build production-grade data pipelines that give teams reliable, well-instrumented datasets. To make delivery repeatable, I design experimentation frameworks, templates and patterns that reduce manual effort.


I focus on observability, testing and scaling so pipelines stay performant and lead enablement sessions that teach people how to use the tools and run experiments without heavy engineering support.

What skills do you use on a daily basis?

I use core data engineering skills every day: Python for transformations and orchestration, SQL for modelling and validation, and testing and monitoring to keep systems dependable. I pair that with careful, experimental thinking, small trials, metric tracking and incremental rollouts, so changes are low-risk and measurable.

On the people side of things, clear communication, active listening and regular collaboration help turn technical work into useful outcomes. I focus on creating easy pathways for success by mentoring colleagues, running pairing sessions for practical learning and producing simple playbooks that let teams self‑serve.

What is the hardest part of your working day?

The hardest part is switching gears – going from fixing urgent production issues to design workshops or running hands‑on pairing sessions can really break your flow. I try to make it easier by agreeing priorities with the team, protecting blocks for focused work and keeping documentation up to date so I can pick up where I left off. Quick handovers and regular check‑ins also keep longer‑term work visible.

Do you have any productivity tips that help you through the working day?

I use a to-do list to track outstanding tasks and review it each morning to plan and prioritise my day. I block focused time in my calendar for heads‑down work, which helps me avoid context switching. I document everything in a central, easily accessible location so the team never has to ‘figure something out’ twice. I also make mentoring a recurring calendar item, so coaching happens regularly.

When you first started this job, what were you most surprised to learn was important in the role?

I was surprised by how much context and communication matter; technical solutions alone rarely succeed without stakeholder buy‑in and agreed processes. I also didn’t expect observability and experiment rigour to be so central. Good monitoring, testing and repeatable experiment practices are what make pipelines reliable in production.

Finally, the value of documentation and small, consistent practices (like decision logs and runbooks) became obvious fast – they save time and prevent firefighting.

How has your role changed as the sector has grown and evolved?

The arrival of generative AI has raised the bar; it requires high‑quality, well‑labelled data, feature management, stronger data contracts and privacy controls, plus new inference and embedding pipelines and model observability, which makes the role more strategic and cross‑functional. At the same time, there’s a steady stream of new tools and platforms, so a crucial skill is distinguishing genuinely useful technology from marketing hype and choosing tools that solve real problems.

What do you enjoy most about the job?

I enjoy making things better for the people I work with. Most of my role is about simplifying data delivery so users get reliable, timely datasets and can make decisions faster. Each day, I try to keep the team unblocked, staying on top of potential issues so colleagues can get on with their day‑to‑day work with minimal friction.

What I like most about the job is knowing my work makes other people’s lives easier, whether that’s a data user getting answers faster or a teammate having one fewer thing to worry about. I also enjoy helping others build skills and confidence, and access opportunities. Practically, that looks like one‑to‑one coaching, structured pairing sessions and setting up repeatable playbooks so people can succeed without constantly relying on one person.

I often run knowledge‑sharing sessions or demos to share what I’ve learned and get feedback. It’s great to see patterns I’ve created adopted by other teams. When I notice incremental improvements or hear someone say a change saved them time, it reminds me why this work matters.

You received a nomination as part of Liberty IT’s Culture Stars initiative – what did this nomination mean to you?

The nomination in the ‘Be Brilliant’ category recognised mentorship, teamwork and pragmatic technical leadership. Seeing my mentee secure a promotion was the proudest, most concrete outcome; it showed the real, human impact of focused coaching and regular feedback.


The nomination also acknowledged the everyday teamwork and practical improvements I champion to make our pipelines more reliable. Being recognised was validation that consistent, sometimes unglamorous work – supporting others, documenting decisions and removing roadblocks – does make a difference. 


Tech

Amazon’s chip business could be worth $50 billion, Jassy says, and he hints it may sell them externally


In short: Andy Jassy’s annual letter to shareholders, published on 9 April 2026, reveals that Amazon’s custom chip business, covering Graviton, Trainium, and Nitro, generates more than $20 billion in annualised revenue, growing at triple-digit rates year-on-year. If its chips were sold on the open market the way Nvidia’s are, Jassy says, the business would be generating roughly $50 billion in annual revenue. He also signals that Amazon may begin selling those chips directly to third parties, and defends the company’s $200 billion capital expenditure plan for 2026 as grounded in committed customer demand rather than speculation.

“Not on a hunch”: the $200 billion bet

Jassy opened the letter’s financial argument with a direct rebuttal of the scepticism that has surrounded Amazon’s capital commitments. “We’re not investing approximately $200 billion in capex in 2026 on a hunch,” he wrote. “We’re not going to be conservative in how we play this. We’re investing to be the meaningful leader, and our future business, operating income, and free cash flow will be much larger because of it.” The context for that claim is a company that saw its free cash flow fall from $38 billion to $11 billion last year, driven by a $50.7 billion increase in capital spending, the bulk of it committed to AI infrastructure.

The defence rests on customer commitments already in place. Of the CapEx expected to be deployed in 2026, Jassy said a substantial portion already has customer backing, citing as one example OpenAI’s commitment of more than $100 billion to AWS. That commitment, which expanded an existing $38 billion seven-year partnership struck in November 2025, also includes OpenAI consuming approximately two gigawatts of Trainium capacity through AWS infrastructure. SoftBank, which holds a majority stake in OpenAI and has been financing its infrastructure build through mechanisms including a $40 billion bridge loan, is in effect underwriting part of the demand that Jassy is now pointing to as validation for his CapEx stance.

A $50 billion chip business hiding in plain sight

Amazon’s custom silicon programme spans three product lines. Graviton is a custom CPU that Jassy says delivers more than 40% better price-performance than comparable x86 processors, the market that Intel and AMD dominate. It is now used by 98% of the top 1,000 EC2 customers, a figure that reflects a shift in the economics of cloud compute that has been underway for several years. Demand is sufficiently intense that two large AWS customers asked whether they could purchase all available Graviton capacity for 2026. Amazon declined.
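It is worth noting that "40% better price-performance" is a ratio claim, not a 40% cost cut: more work per dollar translates into a smaller percentage reduction in spend for the same workload. A minimal sketch with normalized, illustrative numbers (not AWS pricing):

```python
# "X% better price-performance" means more work per dollar, so the
# cost reduction for a fixed workload is smaller than X%.
baseline_cost_per_unit_work = 1.00   # normalized x86 baseline
price_perf_gain = 0.40               # "more than 40% better price-performance"

graviton_cost_per_unit_work = baseline_cost_per_unit_work / (1 + price_perf_gain)
savings = 1 - graviton_cost_per_unit_work
print(f"Relative cost: {graviton_cost_per_unit_work:.2f} (~{savings:.0%} cheaper)")
```

On these numbers, the same workload costs about 29% less, which is still a large enough gap to explain why nearly all of the biggest EC2 customers have adopted the chip.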


Trainium is the AI training and inference accelerator that represents Amazon’s most direct response to Nvidia. Trainium2, which Jassy says offers roughly 30% better price-performance than comparable GPU alternatives, has largely sold out. Trainium3, which began shipping in early 2026 and offers a further 30 to 40% improvement in price-performance over Trainium2, is nearly fully subscribed, with Uber among the companies that have moved workloads onto it. Trainium4, still approximately 18 months from broad availability and featuring interoperability with Nvidia’s NVLink Fusion interconnect technology, has already been significantly reserved.

Nitro, the custom network and security chip that underpins AWS’s virtualisation layer, completes the three-chip portfolio. Together, Jassy says the three lines produce more than $20 billion in annualised revenue, growing at triple-digit percentage rates year-on-year. “If we were a standalone chip company,” he writes, “our chips would be generating over $50 billion in annual revenue.” The business currently exists entirely within AWS; customers access Trainium and Graviton through EC2 instances rather than buying chips directly.


At scale, Jassy argues, Trainium will “save us tens of billions of capex dollars per year, and provide several hundred basis points of operating margin advantage versus relying on others’ chips for inference.” That claim is central to the investment thesis underpinning the $200 billion CapEx programme: custom silicon is not only a competitive differentiator but a structural cost advantage that compounds over time as the ratio of inference to training in AI workloads continues to rise.

The Nvidia relationship, and the “new shift”

Jassy is careful in how he frames the competitive dynamic with Nvidia. “We have a strong partnership with NVIDIA, will always have customers who choose to run NVIDIA,” he writes, while also asserting that “virtually all AI thus far has been done on NVIDIA chips, but a new shift has started.” Customers, he says, “want better price-performance.” Nvidia, which reported revenue of $68.1 billion in the fourth quarter of 2025, a 73% year-on-year increase, entered 2026 from a position of market dominance that Amazon’s custom silicon is chipping away at from within the AWS customer base rather than in any broader merchant market. Trainium4’s incorporation of NVLink Fusion means Amazon is also building in a bridge rather than a wall: customers can combine Trainium accelerators with Nvidia GPUs within the same system, preserving optionality for enterprises that have invested heavily in Nvidia’s software stack.

The letter’s most consequential signal on chips, however, may be a single sentence about the future: “There’s so much demand for our chips that it’s quite possible we’ll sell racks of them to third parties in the future.” Amazon currently monetises its custom silicon exclusively through EC2 compute services. Selling chips directly would represent a structural shift in its competitive posture, placing it in the merchant silicon market alongside Nvidia and AMD, and allowing the economics of the chip business to be assessed independently of the cloud revenue it currently underpins.

Bedrock, Amazon Leo, and the broader picture

The shareholder letter situates the chip business within a wider AI infrastructure thesis. Amazon Bedrock, the managed service through which AWS customers access foundation models including Amazon’s own Nova family, processed more tokens in Q1 2026 than in all prior periods combined, with inference volumes “nearly doubling month-over-month” in March. AWS’s AI revenue run rate crossed $15 billion in Q1 2026, a figure Jassy contextualises by noting it represents growth roughly 260 times faster than AWS experienced at a comparable stage of its development.


Jassy also uses the letter to frame Amazon’s satellite internet service, Amazon Leo, as a competitive counterpart to SpaceX’s Starlink, having already secured contracts with Delta Air Lines, JetBlue, AT&T, Vodafone, and NASA. The satellite and chip disclosures share an underlying argument: that Amazon is building infrastructure at a scale and across categories that most observers have not fully priced in. The legal scrutiny that has begun to attach itself to Amazon’s AI products, including a proposed class action over the training data used for Nova Reel, represents one category of risk that the letter does not address. The year 2025 established AI infrastructure as the central capital allocation question for the technology industry, and Jassy’s letter is, in part, an argument that Amazon arrived at the right answer earlier and more decisively than the market has yet recognised.


Tech

OpenAI pauses Stargate UK over energy costs


Stargate UK will move forward when ‘right conditions’ enable ‘long-term infrastructure investment’, OpenAI said.

OpenAI is pausing its Stargate initiative in the UK, citing energy costs and regulatory burdens.

In a statement to major news publications, the company said that it is continuing to explore Stargate UK and will move forward when the “right conditions such as regulation and the cost of energy” enable it to make “long-term infrastructure investment”.

OpenAI first announced the project last September in collaboration with Nvidia and UK AI infrastructure provider Nscale. The initiative was seen as a step forward in cross-national technology partnership, with its announcement coinciding with US president Donald Trump’s visit to the UK.


For UK prime minister Keir Starmer, Stargate represented a major nod from Big Tech firms supporting the country’s push to become a leader in the space. The OpenAI project was meant to support the UK’s ‘AI Growth Zone’, expected to create 5,000 new jobs and bring in £30bn in private investment.

Other companies, including Microsoft and Nvidia, have also made multibillion-dollar investment commitments in the UK. A government spokesperson told Bloomberg that the UK’s AI sector has attracted more than £100bn since Starmer came into power in 2024.

Launched early last year, Stargate is a $500bn private sector investment project into OpenAI’s infrastructure. The project’s initial equity funders include OpenAI, Oracle, MGX and SoftBank, with Microsoft, Nvidia and Arm among the key technology partners.

A year since launching, Stargate’s Texas facility is already training AI systems, while a number of projects are underway in the US, as well as in the UAE and Norway. The company also announced a tie-up with India’s Tata Consultancy Services as part of Stargate.


OpenAI has been shuttering side projects as it refocuses on enterprise tools ahead of a planned initial public offering later this year. Late last month, it put plans for an erotic ChatGPT on hold “indefinitely”, just days after it shut down its controversial AI video generator Sora.

It recently announced a $122bn funding round, placing the AI giant at a post-money valuation of $852bn.


Tech

Flush with cash: Washington startup lands up to $500M to deploy facilities treating sewage, dairy waste


Dairy cows at the Puyallup Fair, now called the Washington State Fair. (GeekWire Photo / Kurt Schlosser)

Wastewater treatment startup Sedron Technologies — a Washington company that once served Bill Gates a glass of water purified from sewage — announced it is being acquired by Ara Partners. The global private equity firm is investing up to $500 million in Sedron to fund the deployment of its sewage- and manure-treatment technologies, an investment that gives Ara a controlling stake in the business.

“The Ara investment is largely designed to provide us with the equity on our own balance sheet to scale up production of additional projects and plants across the country,” said Geoff Trukenbrod, interim CEO of Sedron.

The startup is deploying facilities that efficiently and sustainably treat sewage biosolids and dairy waste. Sedron’s business model is to finance, design, build, own, operate and maintain the sites, which cost about $100 million to $200 million to build.

The company generates revenue from the municipalities and farms that use its services, as well as from the sale of organic fertilizer and clean energy produced at the sites.

“Imagine having a bakery, and you get paid to get flour, and you get paid for your cookies,” said Stanley Janicki, Sedron’s chief commercial officer. “It’s a phenomenal business model, not that biosolids are cookies.”

Sedron’s dairy waste management facility in Fair Oaks, Ind., which handles manure from 20,000 cows. (Sedron Photo)

Sedron launched in 2014 as a spinoff from Janicki Industries, a longtime aerospace engineering and manufacturing company. Both are based in Sedro Woolley, a city north of Seattle in a largely agricultural stretch of Western Washington.

In 2011, Janicki received funding from what is now the Gates Foundation to develop a wastewater purification system, leading to Sedron’s launch and a video that went viral showing Bill Gates drinking a glass of water produced from sewage. The foundation supported the technology as a means for treating waste in developing countries where untreated sewage could otherwise spread pathogens.

The company is breaking ground this month on a regional waste treatment facility that will serve multiple municipalities that are home to 2 million people in South Florida. Operations are expected to begin in 2028.

Sedron’s system takes municipal biosolids — the residual product from a wastewater treatment plant — and dries the material in an energy-efficient thermal dryer. The biosolids are about 85% water, most of which is evaporated off, and the remaining material is fed into a biomass boiler to produce clean electricity. The energy generated helps run the dryer, and the excess electricity is sold. As an added benefit, the process destroys the PFAS “forever chemicals” contaminating the wastewater.

The startup’s second line of business is managing manure from livestock operations — which is one of the biggest costs for a dairy farmer. Sedron takes the waste, removes the water for use in irrigation, and produces two high-value organic fertilizers: a solid material and a concentrated liquid nitrogen fertilizer. The fertilizers are sold nationwide for use on crops such as apples, berries and spinach.


Sedron’s treatment process is more affordable and replaces the use of manure lagoons to store the waste until it can be applied to fields as a liquid. The lagoons produce planet-warming methane and pose environmental threats if they leak nutrients that can stoke algal blooms in nearby waterways or contaminate drinking water.

The company has deployed its manure technology at two dairy farms in Indiana, including a 20,000-cow dairy, and expects to start operations at a Wisconsin farm this summer.

“Our focus is on positioning Sedron as the leader in circular waste management — converting waste into carbon negative commodities faster, more cost effectively, and with greater energy efficiency than any other solution available,” said Cory Steffek, a partner at Ara Partners, in a statement.

Sedron previously raised approximately $100 million in corporate debt and equity and about $200 million in project financing, some of which was institutional. All of the legacy shareholders rolled their equity forward, Janicki said.


The 275-employee company has offices in Washington state and Chicago, and operational facilities in Indiana, Wisconsin and Florida.

The startup is focused on U.S. deployments of its facilities, aiming to launch at least two new sites each year for the next five years, then potentially scaling up from there. Janicki said they’d still like to operate in developing countries to address that initial use case.

Sedron’s leadership emphasized the importance of delivering a service that resonates with investors and business partners, doesn’t require government support to succeed and also benefits the planet.

“As the world today is retreating somewhat from climate efforts,” Janicki said, “it’s exciting to be in a business that is positioned for exceptional growth and solving environmental problems while creating valuable products.”


Tech

Meet the ultra-compact NucBox K17 Mini PC delivering triple-digit AI performance and blazing-fast memory in a pocket-sized frame



  • NucBox K17 combines CPU, GPU, and NPU for full AI performance
  • The Intel Core Ultra 5 226V processor delivers efficient, high-speed computing
  • Integrated Arc 130V GPU offers 53 TOPS AI throughput using INT8 precision

GMKTec has introduced the NucBox K17 Mini PC with a focus on compact AI performance, combining a high-efficiency processor with integrated graphics and a dedicated neural unit.

The NucBox K17 is built around the Intel Core Ultra 5 226V processor, which features 8 cores and 8 threads manufactured on the TSMC N3B process.


Tech

Pixelmator Pro & Logic get big upgrades, rest of iWork gets minor ones


Apple rolled out updates across its creative and productivity apps. Logic Pro and Pixelmator Pro gained new features, while the rest of the iWork lineup got bug fixes and stability improvements.

Apple Creator Studio apps

Apple’s Creator Studio bundle includes pro tools like Final Cut Pro, Logic Pro, Motion, Compressor, and Pixelmator Pro, along with productivity apps like Pages, Keynote, and Numbers. It’s a unified platform for creating and publishing across different workflows.
Apple delivered updates through the App Store, with most apps receiving maintenance-focused changes for reliability and platform stability.


Tech

Google Cloud deepens AI infrastructure partnership with Intel across Xeon and custom chips


In short: Google Cloud and Intel have announced a deepened multi-year AI infrastructure partnership covering both CPU deployment and custom chip co-development. Google Cloud will continue adopting Intel’s Xeon 6 processors across its global infrastructure for C4 and N4 instances, while the two companies are expanding their joint development of custom Infrastructure Processing Units designed to offload networking, storage, and security from host CPUs in hyperscale AI environments. The announcement arrives as Intel’s stock surged approximately 33% on the week and two days after the company signed on as the foundry partner for Tesla’s Terafab megaproject.

“Balanced systems”: the case Intel and Google are making together

The central argument of the partnership, as framed by both companies, is that GPU accelerators alone are not sufficient to handle the demands of modern AI infrastructure. In a statement accompanying the announcement, Lip-Bu Tan, Intel’s chief executive, said: “AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators — it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.” The language is deliberate. Intel has spent much of the past two years repositioning from the general-purpose computing market it once dominated toward a more specific thesis: that the CPU and custom infrastructure silicon have a structural role in AI deployments that GPU-centric narratives have consistently underestimated.

Amin Vahdat, Google’s senior vice president and chief technologist for AI infrastructure, made the case from the demand side. “CPUs and infrastructure acceleration remain a cornerstone of AI systems — from training orchestration to inference and deployment,” he said. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.” The framing of the partnership as a multi-generational CPU roadmap commitment, rather than a one-cycle procurement agreement, is significant: it implies Google has made decisions about its infrastructure architecture several years out on the basis of Intel’s product trajectory, and that trajectory includes both the Xeon line and the custom IPU co-development effort.

Xeon 6 in Google Cloud

The CPU component of the partnership centres on Intel’s Xeon 6 processor family, which Google Cloud has deployed across its workload-optimised C4 and N4 instance types. Google says the C4 instances deliver more than 2.0 times the total cost of ownership benefit compared with predecessor configurations, a figure that captures the combination of performance uplift and power efficiency that Intel has positioned as Xeon 6’s core competitive claim. The agreement extends beyond the current generation: Google has committed to multi-generational alignment with Intel’s Xeon roadmap, meaning its infrastructure planning incorporates Intel’s future CPU releases as a known variable rather than a contingent one. Google has simultaneously been deepening its custom silicon commitments on the accelerator side, supplying Anthropic with approximately one gigawatt of TPU capacity through Broadcom in a deal that anchors Anthropic’s AI infrastructure through 2027 and beyond — a parallel track that reflects how Google is building out its infrastructure portfolio across both standard and custom silicon simultaneously.

The CPU architecture context matters for understanding why this commitment is being made public now. As AI workloads shift from the training phase, which is GPU-intensive and relatively concentrated among a small number of hyperscalers, toward inference at scale, which is distributed, latency-sensitive, and runs continuously across large server fleets, the cost structure of AI infrastructure changes. Inference places sustained demands on CPU resources for orchestration, data pre-processing, and system management that training pipelines do not. Google’s bet on Xeon 6 for its C4 and N4 instances is, in part, a bet that inference economics will make CPU efficiency a first-order concern in the years ahead.
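The economics described above can be sketched with a toy model. All of the numbers and function names below are illustrative assumptions, not measurements from Intel, Google, or any benchmark; the point is only that short, continuous inference requests give fixed CPU-side costs a much larger share of total time than long training steps do.

```python
# Toy model of where compute time goes in training vs inference serving.
# Every figure here is an illustrative assumption, not a benchmark.

def cpu_share(cpu_ms: float, accelerator_ms: float) -> float:
    """Fraction of a request's wall-clock time spent on host-CPU work
    (orchestration, data pre-processing, response handling)."""
    return cpu_ms / (cpu_ms + accelerator_ms)

# A training step is dominated by long accelerator kernels.
training = cpu_share(cpu_ms=5, accelerator_ms=95)

# An inference request is short, so fixed CPU-side costs loom larger.
inference = cpu_share(cpu_ms=4, accelerator_ms=6)

print(f"training CPU share:  {training:.0%}")   # 5%
print(f"inference CPU share: {inference:.0%}")  # 40%
```

Under these assumed numbers, moving from a training-dominated to an inference-dominated fleet multiplies the CPU's share of the cost structure several times over, which is the premise behind making CPU efficiency a first-order concern.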

The custom IPU programme

The more strategically significant element of the partnership is the expanded co-development of Infrastructure Processing Units. IPUs are custom ASIC-based programmable accelerators designed to take over the networking, storage, and security functions that would otherwise run on host CPUs, freeing those CPUs to focus entirely on application and AI workload processing. In hyperscale environments, where these infrastructure tasks consume a substantial and growing fraction of available compute, offloading them to a dedicated accelerator can significantly improve utilisation rates, energy efficiency, and the consistency of workload performance. Intel and Google have been collaborating on IPU development, and the announcement signals that this work is expanding in scope rather than narrowing. The specific technical details of the expanded programme — die design, process node, performance targets, and deployment timeline — have not been disclosed publicly.
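The utilisation argument for offload can also be put in toy-model form. The 30% overhead figure below is an assumption chosen for illustration (published estimates for hyperscale infrastructure overhead vary); nothing here reflects disclosed Intel or Google numbers.

```python
# Toy model of host-CPU capacity with and without an IPU.
# The 30% infrastructure overhead is an illustrative assumption.

def app_capacity(infra_overhead: float, offloaded: bool) -> float:
    """Fraction of host-CPU cycles left for application/AI work,
    given the share consumed by networking, storage, and security
    when those tasks run on the host."""
    return 1.0 if offloaded else 1.0 - infra_overhead

# Infrastructure tasks assumed to consume 30% of host cycles on-CPU.
without_ipu = app_capacity(0.30, offloaded=False)  # 0.70
with_ipu = app_capacity(0.30, offloaded=True)      # 1.00

reclaimed = with_ipu / without_ipu - 1
print(f"effective capacity gain per server: {reclaimed:.0%}")
```

Under that assumption, offloading is equivalent to adding roughly four free servers for every ten deployed, which is why hyperscalers treat dedicated infrastructure silicon as a fleet-level efficiency lever rather than a per-box optimisation.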

Nvidia, whose fourth-quarter 2025 revenue reached $68.1 billion on 73% year-on-year growth and which used its GTC 2026 conference in March to position its full-stack platform as the default environment for AI infrastructure, is the implicit competitive reference point for both components of the Intel-Google partnership. Intel is not attempting to displace Nvidia’s GPU accelerators in training workloads; it is arguing that the system around those accelerators — the CPUs managing orchestration, the IPUs managing network and storage overhead, and the interconnects tying everything together — is where efficiency gains are increasingly available. That argument has a natural ally in Google, which has both the infrastructure scale to validate it empirically and commercial incentives to diversify away from a single-vendor accelerator dependency.

Intel’s strategic moment

The Google partnership arrives at a moment when Intel’s industrial position is changing rapidly. Two days before the Google announcement, Intel signed on as the primary foundry partner for Terafab, the $25 billion joint venture between Tesla, SpaceX, and xAI targeting one terawatt of AI compute per year, committing its 18A process node — the company’s most advanced logic manufacturing technology — to the project. The two announcements taken together suggest Intel is pursuing a two-track strategy: deepening its hyperscale cloud partnerships for CPU and IPU deployment while simultaneously building out its foundry business for the custom AI silicon market that Nvidia, AMD, and the hyperscalers’ in-house chip programmes have driven into existence. The stock market responded to the week’s announcements with a roughly 33% gain in Intel’s share price, the sharpest weekly move the company has recorded in years.

Whether the strategic repositioning is durable depends on execution. Intel’s 18A process node is the same technology that underpins its foundry credibility with customers like Tesla, and its delay history has been a persistent source of investor concern. The Xeon 6 deployment in Google Cloud and the IPU co-development programme are both contingent on Intel shipping what its roadmap promises on the timelines Vahdat’s statement implies Google has factored into its own planning. The AI infrastructure market that Intel is trying to enter has become one of the most heavily capitalised segments in technology, with deals such as Meta’s $27 billion agreement with Nebius in March 2026 illustrating the scale of commitments being made across the industry. The year 2025 shifted the centre of gravity in AI from model development to infrastructure deployment, establishing capital expenditure scale and infrastructure access as the primary competitive variables — and Intel, for the first time in several years, is making a credible case that it belongs in that competition on multiple fronts simultaneously.

I don’t see a sane reason to pick another budget phone over the TCL NXTPAPER 70 Pro

The era of truly good budget phones is over, and you can blame AI for that. With chip costs rising, even flagship phones are feeling the pinch. That’s why, when TCL finally brought the NXTPAPER 70 Pro to the US, it came as a big surprise to me. The phone costs just $199, nearly half the price you’d pay in other markets.

Yes, the phone is exclusive to T-Mobile, but at $199, the NXTPAPER 70 Pro feels like something else entirely: a 6.9-inch 120Hz display, IP68 water resistance, a 5,200mAh battery, a 50MP camera, and TCL’s NXTPAPER 4.0 display technology, which is genuinely unlike anything else at this price. Naturally, I wanted to compare it to phones in a similar price range to see whether I could find a better deal.

So, I went looking for alternatives at a similar price and found three worth comparing: the Samsung Galaxy A17 5G, the Motorola Moto G Power 2026, and the Pixel 10a. None of them beats the TCL on price, performance, or features, and I concluded that there’s no reason to choose any other phone over the NXTPAPER 70 Pro right now. Let me show you what I mean.

But first, a quick specs comparison

| Specification | TCL NXTPAPER 70 Pro | Samsung Galaxy A17 5G | Moto G Power 2026 | Google Pixel 10a |
|---|---|---|---|---|
| Display | 6.9-inch IPS LCD, 120Hz (1080 x 2340) | 6.7-inch Super AMOLED, 90Hz (1080 x 2340) | 6.8-inch IPS LCD, 120Hz (1080 x 2388) | 6.3-inch P-OLED, 120Hz (1080 x 2424) |
| Processor | MediaTek Dimensity 7300 (4nm) | Exynos 1330 (5nm) | MediaTek Dimensity 6300 (6nm) | Google Tensor G4 (4nm) |
| Main camera | 50MP, f/1.9, 24mm | 50MP, f/1.8, 24mm | 50MP, f/1.8 | 48MP, f/1.7, 25mm |
| Ultrawide | 8MP (120˚) | 5MP | 8MP (119˚) | 13MP, f/2.2 (120˚) |
| Macro | — | 2MP | — | — |
| Selfie | 32MP, f/2.0, 28mm | 13MP, f/2.0 | 32MP, f/2.2 | 13MP, f/2.2, 20mm |
| Battery | 5,200mAh | 5,000mAh | 5,200mAh | 5,100mAh |
| Price | $199 (T-Mobile) | $189 (T-Mobile), $199 (unlocked) | $189 (T-Mobile), $299 (unlocked) | $499 (unlocked) |

Is there any competition at this price?

The Samsung Galaxy A17 5G is the obvious first comparison. It is Samsung’s best-selling budget phone, and for good reason. You get a solid 6.7-inch Super AMOLED display, a triple camera system, and an impressive six years of software updates. 

It is a reliable, no-frills phone that does the basics well. But it runs on the Exynos 1330, a chip that has been specifically called out for poor performance. Compared to the MediaTek Dimensity 7300 powering the TCL NXTPAPER 70 Pro, the Exynos 1330 is slower across CPU, GPU, and battery performance. Take a look at the comparisons below:

It also has an IP54 rating, which means it is splash-resistant but not submersible. The NXTPAPER 70 Pro, by comparison, has a better chip, a better display, IP68 water resistance, and a more interesting feature set. The A17 sells for around $175 to $199. Simply put:

Same price. No contest.

The Moto G Power 2026 offers a similar 6.8-inch LCD display and the same 5,200mAh battery, but the MediaTek Dimensity 6300 inside is a step down from the NXTPAPER 70 Pro’s Dimensity 7300. The Dimensity 7300 uses a newer 4nm fabrication process (compared to the Dimensity 6300’s 6nm) and delivers up to 67% better performance. Have a look at the performance figures:

There are a few things working in the Moto G Power (2026)’s favor: tougher Gorilla Glass 7i protection and an IP68/IP69 dust and water rating. But that’s about it. On every other front, the NXTPAPER 70 Pro offers equal or better features. The Moto G Power 2026 costs $189 on a similar T-Mobile contract and $299 on Amazon without one, so there’s no price advantage either.

As you can see, the TCL NXTPAPER 70 Pro beats the Samsung Galaxy A17 and Moto G Power 2026 on most fronts at a similar price. 

What about the Pixel 10a?

This is where it gets interesting. At $499, the Google Pixel 10a is not a phone I should consider for this comparison. But it is a genuinely great phone, a gold standard for mid-range Android, and I am not going to pretend otherwise.

It features a 6.3-inch OLED display, a 48MP camera, seven years of updates, a more powerful Tensor G4 chipset, and Google’s AI features baked deep into the software. 

But the Pixel 10a has a smaller battery and no expandable storage. And the TCL NXTPAPER 70 Pro costs $199, so that $300 gap is doing a lot of heavy lifting. Throughout these comparisons, we haven’t even touched on the NXTPAPER 70 Pro’s standout feature: the NXTPAPER 4.0 display.

That display is what makes this phone genuinely special. TCL’s NXTPAPER 4.0 is not a software night mode or a cheap filter. It uses hardware-level changes, including circularly polarized light, DC dimming that eliminates screen flicker, and a filter that reduces harmful blue light.

The phone is certified by TÜV and SGS, independent bodies that test these claims rather than take a company’s word for them. A dedicated NXTPAPER key on the side instantly switches between full-color mode, Ink Paper Mode, and Max Ink Mode, letting you use it as a normal phone or as an e-reader. In Max Ink mode, the battery lasts up to seven days.

None of the other phones on this list offers anything like these display innovations. This feature alone makes the NXTPAPER 70 Pro worth buying. But even if you disregard it, the NXTPAPER 70 Pro still offers better features than the other phones in its segment at a comparable price.

If you spend long hours staring at your phone for work, school, or reading, no phone at this price comes close to what TCL is offering. At $199, the TCL NXTPAPER 70 Pro is not a budget phone that asks you to make compromises. It is a genuinely good phone with one feature that no one else has figured out yet. That makes it a very easy recommendation.

OpenAI faces investigation over ChatGPT concerns

Just when it seemed like OpenAI was gearing up for its next big leap, possibly even an IPO, it’s now facing some serious scrutiny. And this time, it’s not just critics online. It’s a full-blown government investigation. And yeah, things are getting a little intense.

OpenAI is now under investigation, and it’s not a small one

Florida Attorney General James Uthmeier has launched a probe into OpenAI and its chatbot, ChatGPT. The concerns being raised go beyond the usual AI debates, as this one touches on national security, data handling, and real-world harm.

Today, we launched an investigation into OpenAI and ChatGPT.

AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.

Wrongdoers must be held accountable. pic.twitter.com/vRVCqIYKnB

— Attorney General James Uthmeier (@AGJamesUthmeier) April 9, 2026

As reported by Reuters, the investigation is looking into whether OpenAI’s technology or data could potentially fall into the wrong hands, including foreign adversaries. There are also claims linking ChatGPT to harmful use cases, ranging from misuse in criminal activity to concerns around self-harm and unsafe content.

Subpoenas are reportedly on the way, which means this isn’t just talk but a formal escalation. And all of this is happening right as OpenAI is being seen as a potential IPO candidate, with valuations being thrown around in the trillion-dollar range. That timing could complicate things further, as increased regulatory scrutiny may impact investor confidence and how aggressively the company can move forward with its public listing plans.

This could get messy, fast

Let’s be real, AI companies have been skating on thin ice when it comes to regulation. Rapid growth, massive user bases, and real-world impact were always going to attract attention eventually. But the timing here is what makes it spicy. OpenAI is scaling aggressively, pushing products like ChatGPT deeper into everyday life, and potentially preparing for a public offering. Getting hit with a government probe right now is not ideal.

At the same time, this might just be the beginning. Because once governments start asking questions about how AI is being used, and misused, it’s not just about one company anymore. It’s about the entire industry getting put under the microscope.

GeekWire Awards: AI Innovation of the Year finalists transform HR, retail, biotech and more

The 2026 GeekWire Awards AI Innovation of the Year finalists, clockwise from top left: Avante CEO Rohan D’Souza; ConverzAI CEO Ashwarya Poddar; Envive AI CEO Aniket Deosthali; Synthesize Bio co-founders Jeff Leek (left) and Robert Bradley; and Spangle AI co-founders Maju Kuruvilla (left) and Fei Wang.

The finalists for AI Innovation of the Year at the 2026 GeekWire Awards represent the cutting edge of the generative era, deploying sophisticated agents and foundation models to transform everything from healthcare benefits and recruitment to e-commerce personalization and life-saving drug discovery.

The finalists are: Avante, ConverzAI, Envive AI, Spangle, and Synthesize Bio.

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

The 2025 GeekWire Award winner for AI Innovation of the Year was Overland AI, the Seattle-based startup that develops autonomous driving technology for rugged terrain for military applications and elsewhere.

Continue reading for information on the 2026 AI Innovation of the Year finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner: Cast your ballot here or in the embedded form at the bottom. Voting runs through April 16.

Avante is an AI-native benefits intelligence platform designed to help companies decrease HR administration workload and reduce overall benefits program costs. It relies on two AI agents working together: Ava gives HR teams strategic intelligence from employee questions, claims data, and vendor contracts. Carly gives employees personalized guidance.

The startup, which raised a $10 million seed round, is led by CEO Rohan D’Souza, former chief product officer for health care automation company Olive AI; and epidemiologist Carly Eckert, MD, Ph.D., Avante’s head of innovation and impact, who was executive vice president at Olive AI. Kabir Shahani, a serial entrepreneur who was CEO of Seattle-based marketing tech startup Amperity, is Avante’s executive chairman.

ConverzAI helps automate recruiting processes with its virtual recruiters that help companies with staffing needs. The software can parse through applications, conduct interviews, and onboard new employees.

The 6-year-old startup, which raised $16 million in Series A funding, is led by former Microsoft product manager Ashwarya Poddar.

Envive AI builds AI agents for online retailers to help boost conversion, retention, and discoverability. Brands such as Spanx, Coterie, Supergoop! and more use Envive’s AI-powered software to engage with customers as they shop on websites and apps. Envive also helps companies improve their visibility in generative AI search results.

The company, which raised $15 million in Series A funding, is led by CEO Aniket Deosthali, who previously helped Walmart build its generative AI-powered shopping assistant. Other co-founders include: CTO Sameer Singh, chief scientist Iz Beltagy, and chief architect Matthew Peters.

Spangle AI helps online retailers build customized shopping experiences in real-time by generating a tailored storefront for individual customers based on how traffic flows in from social platforms, AI search tools, and even autonomous shopping agents. Spangle’s system focuses on intent and context — whether a shopper is browsing, comparison-shopping, or ready to buy — and adapts product selection, layout, and content accordingly.

The startup, which raised $15 million in a Series A round, is led by CEO Maju Kuruvilla, a former vice president at Amazon, where he worked on Prime logistics and fulfillment. Spangle CTO Fei Wang was CTO at Saks OFF 5TH, a subsidiary of Saks 5th Avenue. Wang also spent nearly 12 years at Amazon as an engineer.

Synthesize Bio aims to make new drug discovery faster and cheaper by using AI to simulate the results from hypothetical lab experiments. Its generative genomics foundation model (GEM-1) predicts gene expression, providing insights into how a novel drug is expected to impact cell behavior.

The startup, which raised $10 million last fall, was co-founded by leaders from Fred Hutchinson Cancer Center — Fred Hutch Chief Data Officer Jeff Leek and Robert Bradley, director of the Translational Data Science Integrated Research Center at the organization.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.
