Xflow, an Indian fintech startup, has secured backing from both Stripe and PayPal Ventures in a $16.6 million funding round. The investment comes as the company works to carve out a position in cross-border B2B payments, a market still dominated by banks and manual processes.
The Series A round was led by General Catalyst, with participation from existing investors Square Peg, Stripe, Lightspeed, and Moore Capital, while PayPal Ventures joined as a new backer. The all-equity round values the Bengaluru-based startup at $85 million post-investment and brings its total funding to more than $32 million to date.
Despite rapid digitization in domestic payments, cross-border B2B transfers for Indian exporters remain heavily reliant on banks, often with limited visibility into fees, settlement timelines, and the final amount received in rupees. The friction is particularly acute for larger exporters moving millions of dollars into India to fund salaries and local operations, creating an opening for fintech infrastructure players such as Xflow that promise greater transparency and speed in international money movement.
Founded in 2021, Xflow provides cross-border payment infrastructure for businesses ranging from exporters and SaaS firms to platforms and freelancers, enabling them to collect international payments, manage foreign exchange, and settle funds in India.
“Cross-border B2B payments were stuck in a different age compared to UPI,” co-founder Anand Balaji (pictured above, center) said in an interview, referring to India’s widely used instant domestic payments network, the Unified Payments Interface.
Balaji, who previously helped build out Stripe’s India business, founded Xflow with former Stripe colleagues Ashwin Bhatnagar (pictured above, right) and Abhijit Chandrasekaran (pictured above, left).
Last year, Xflow said it enabled Indian businesses to collect payments from more than 100 countries in over 25 currencies. It processed close to $1 billion in annualized cross-border payment volume, roughly 10-fold growth from the same period in 2024, Balaji told TechCrunch.
According to the company, its customer base has expanded to about 15,000 businesses spanning SaaS firms, global capability centers (which are offshore units that multinationals operate in India), IT services exporters, freelancers, and fintech platforms.
Transaction sizes vary widely by segment, with global capability centers averaging about $1 million to $2 million per transaction, goods exporters around $30,000 to $40,000, and freelancers roughly $3,000, according to Balaji.
Xflow is positioning itself as a payments infrastructure provider rather than a direct payments application, offering APIs that allow platforms and exporters to embed cross-border money movement into their own products.
“We didn’t want to build the next Wise — we want to power the next thousand Wises,” Balaji said.
The startup has also introduced an AI-based foreign exchange tool to help finance teams optimize the timing of currency conversions. Xflow says the feature has generated incremental gains for some customers through data-driven foreign exchange decisions.
The tool allows businesses to set target conversion rates rather than accepting prevailing bank quotes. Balaji likened the feature to limit orders in trading — instructions to buy or sell only at a specified price.
“What we’ve added is the prediction layer and the ability to actually set a limit order,” he said. The model currently provides a three-day forecast with about 92% confidence, Balaji said, though TechCrunch could not independently verify that figure.
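The limit-order mechanic itself is simple to sketch. The snippet below is an illustrative Python toy under assumed names (`FxLimitOrder`, `should_convert`), not Xflow's actual API: a conversion is held until a quoted rate meets the customer's target, just as a limit order in trading executes only at a specified price.

```python
from dataclasses import dataclass

@dataclass
class FxLimitOrder:
    """Illustrative limit order: convert USD to INR only at or above a target rate."""
    amount_usd: float
    target_inr_per_usd: float

def should_convert(order: FxLimitOrder, quoted_rate: float) -> bool:
    # Execute the conversion only if the quote meets or beats
    # the rate the business asked for, rather than taking the
    # prevailing bank quote as given.
    return quoted_rate >= order.target_inr_per_usd

order = FxLimitOrder(amount_usd=250_000, target_inr_per_usd=84.50)
print(should_convert(order, 84.20))  # False -- quote below target, keep waiting
print(should_convert(order, 84.75))  # True  -- target met, convert now
```

In Xflow's version, a prediction layer sits on top of this rule, forecasting whether the target is likely to be hit within the forecast window.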
Xflow faces competition from banks that still dominate large cross-border B2B transfers, as well as fintech players such as Wise, Payoneer, and Skydo at the lower end of the market. But Balaji said the startup’s focus on high-value transactions and API-led infrastructure differentiates it from many rivals.
The startup plans to deploy the new capital toward building additional products on top of its core payments infrastructure and securing regulatory licenses in new markets, Balaji said. Xflow is preparing to roll out import capabilities in the coming months and is pursuing licenses in markets including Singapore, while already holding a payments license in Canada, even as it remains focused on India as its primary market.
Xflow said it has also received final authorization from the Reserve Bank of India for a Payment Aggregator–Cross Border (PA-CB) license covering both exports and imports. The startup has signed platform partnerships with Easebuzz and Drip Capital to embed its cross-border capabilities into their offerings.
Backing from Stripe and PayPal Ventures, Balaji said, has helped strengthen the startup’s credibility with banking and regulatory partners, even as it continues to work with multiple payment providers commercially.
The startup currently has about 65 employees as it scales its cross-border infrastructure business.
A total investment of €260m will boost clean electricity generation, reduce reliance on imported energy and support the delivery of 2030 climate targets, said the Government.
The European Investment Bank (EIB) will support the construction and operation of four new utility-scale solar photovoltaic projects across Ireland via a €100m project finance loan to Dolmen Solar Ltd, a holding company of Power Capital Renewable Energy.
The overall investment, worth €260m in total, will see four new solar power operations developed in Clare, Wicklow, Wexford and Tipperary, generating around 367GWh of clean electricity per annum – equivalent to the annual consumption of roughly 79,900 households. The funding and development are also expected to create new jobs in construction, civil works, grid connections and maintenance services.
The scheme is among the largest single solar investments financed in Ireland to date and could contribute significantly to Ireland’s target of 80pc renewable electricity by 2030, as well as to the national ambition for roughly 8GW of installed solar capacity under the Renewable Electricity Support Scheme.
Ballinaclough, Co Wicklow will host a 15.5MWp solar farm, with construction expected to start this month. Tullabeg, Co Wexford will be home to the largest scheme in the portfolio – a 181.6MWp plant – and construction is planned from April.
In Tipperary, Barnaleen-Cauteen will be the site of a 98MWp farm, and construction is expected to begin this month. Lastly, in Clare, Manusmore near Ennis is earmarked for a 99.5MWp plant, with construction also expected to commence in March.
Work on some of the projects will run into 2028.
Commenting on the investment, the Minister for Climate, Energy and the Environment Darragh O’Brien, TD said: “Ireland is sometimes seen as an unlikely home for solar power, but projects like this show how quickly that perception is changing and how strong the investor appetite now is for Irish renewables.
“This is a very welcome €260m investment, spread across Clare, Tipperary, Wicklow and Wexford, which will boost clean electricity generation right across the country, reduce our reliance on imported energy and support delivery of our 2030 climate targets. The European Investment Bank is playing a key role as a long‑term partner for Ireland’s energy transition”.
The EIB vice-president Ioannis Tsakiris added: “By backing Ireland’s first solar project financed on a pure project finance basis, the EIB is helping to unlock almost 400MW of new renewable capacity that will strengthen Ireland’s energy security and cut greenhouse gas emissions.”
In February of this year, SunArc, a renewable energy company based in Carlow, announced plans to create up to 50 new jobs as a result of a €20m investment into the organisation. The company offers a ‘solar-as-a-service’ model which it said is a significant step towards accelerating Ireland’s transition to clean energy.
SunArc has stated that the solar-as-a-service model will enable businesses to access solar power and energy independence with no upfront costs, removing what it believes to be one of the biggest barriers to solar power adoption.
The Raspberry Pi line of single-board computers can be hooked up with a wide range of compatible cameras. There are a number of first party options, but you don’t have to stick with those—there are other sensors out there with interesting capabilities, too. [Collimated Beard] has been exploring the use of the IMX585 camera sensor, exploiting its abilities to capture HDR content on the Raspberry Pi.
The IMX585 sensor from Sony is a neat part, capable of shooting at up to 3840 x 2160 resolution (4K) in high-dynamic range if so desired. Camera boards with this sensor that suit the Raspberry Pi aren’t that easy to find, but there are designs out there that you can look up if you really want one. There are also some tricks you’ll have to do to get this part working on the platform. As [Collimated Beard] explains, in the HDR modes, a lot of the standard white balance and image control algorithms don’t work, and image preview can be unusable at times due to the vagaries of the IMX585’s data format. You’ll also need to jump some hurdles with the Video4Linux2 tools to enable the full functionality of these modes.
Do all that, recompile the kernel with some tweaks and the right drivers, though, and you’ll finally be able to capture in 16-bit HDR modes. Oh, and don’t forget—you’ll need to find a way to deal with the weird RAW video files this setup generates. It’s a lot of work, but that’s the price of entry to work with this sensor right now. If it helps convince you, the sample shots shared by [Collimated Beard] are pretty good.
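The RAW wrangling is at least approachable in software. As a rough illustration only, assuming a simple packed little-endian 16-bit layout (a guess on our part; the actual IMX585 frame format depends on the mode you configured), a frame can be sliced into rows of sample values like this:

```python
import struct

def decode_raw16(frame_bytes: bytes, width: int, height: int) -> list[list[int]]:
    """Unpack a packed little-endian 16-bit RAW frame into rows of samples."""
    expected = width * height
    # struct raises if the byte count doesn't match the geometry
    samples = struct.unpack(f"<{expected}H", frame_bytes)
    return [list(samples[r * width:(r + 1) * width]) for r in range(height)]

# Synthetic 8x4 ramp frame standing in for real sensor output
width, height = 8, 4
raw = struct.pack(f"<{width * height}H", *range(width * height))
rows = decode_raw16(raw, width, height)
print(rows[0])                    # [0, 1, 2, 3, 4, 5, 6, 7]
print(max(max(r) for r in rows))  # 31 -- real 16-bit HDR frames span up to 65535
```

Real captures would also carry a Bayer mosaic and possibly per-mode padding, so treat this as a starting point for inspecting the files, not a full decoder.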
Humanoid robotics is advancing rapidly, yet engineers continue to face formidable barriers in locomotion stability, real-time perception, safe human interaction, and power-constrained hardware design. As the industry approaches a projected shift from small-scale prototyping to mass commercialisation in the late 2020s, understanding the component-level decisions that affect system reliability, cost, and performance is becoming critical. This guide examines the technical landscape across sensing, motion, control, and battery subsystems — outlining the design trade-offs, modular architecture trends, and supply chain considerations that will shape the next generation of deployable humanoid platforms.
A North Carolina man was found guilty of extorting a D.C.-based technology company while still being employed as a data analyst contractor.
While a Justice Department press release published on Thursday doesn’t name the victim, court documents reveal that he targeted Brightly Software, a Software-as-a-Service (SaaS) company previously known as SchoolDude, which Siemens acquired in August 2022.
Brightly has been in business for more than 20 years, employs over 700 people, and provides intelligent asset management and maintenance software to over 12,000 clients worldwide, mainly in the United States, Canada, the United Kingdom, and Australia.
As revealed in the indictment, 27-year-old Cameron Curry (also known as “Loot”) took advantage of his access to Brightly’s payroll information and corporate data to steal sensitive documents, which he used as leverage in an extortion scheme after learning that his six-month contract wouldn’t be extended.
One day after his contract ended on December 10, Curry began sending over 60 extortion emails to Brightly employees from the Microsoft Outlook address lootsoftware@outlook.com under the Loot alias, threatening to leak sensitive information stolen between August and December 2023 unless he was paid a $2.5 million ransom.
With the extortion messages, Curry also attached screenshots of spreadsheets listing the personal identification information (PII) of Brightly employees, including names, dates of birth, home addresses, and compensation information. He also threatened to report the company to the U.S. Securities and Exchange Commission (SEC) for failing to disclose the breach as required by law.
“We will commence the process of disseminating salary information starting January 1,2024 in phases to all employees and will report you to the SEC after for not reporting the breach,” Curry threatened in one of the extortion emails.
“If you wish to reclaim your data, we recommend doing so promptly at 2.5 million USD in order to save your company and stocks, as each subsequent month will incur a $100,000 USD increase. Discrepancies in your books are currently over 16 million USD, posing a potential risk for retention issues, a hostile work environment, resentment, and more.”
Extortion email sample (Justice Department)
Following Curry’s numerous extortion emails, Brightly paid $7,540 in Bitcoin, which was transferred to a cryptocurrency wallet controlled by Curry.
The FBI searched Curry’s residence on January 24 after the company reported the incident and seized various electronic devices containing evidence of his extortion scheme.
Curry was released on bond in January 2024 and now faces up to 12 years in prison for six counts of transmitting or willfully causing interstate communications with the intent to extort a victim company.
Brightly also notified customers of a data breach unrelated to this case in May 2023 after attackers gained access to the database of its SchoolDude online platform and stole credentials and personal data (including names, email addresses, account passwords, phone numbers).
Information filed with the Office of the Maine Attorney General revealed that the intrusion was discovered 8 days after the attackers breached Brightly’s systems on April 20, and that the data breach affected nearly 3 million SchoolDude customers and users.
Wicked: For Good, the second and final movie of the Wicked franchise, is an adaptation of the 2003 Broadway musical. Starring Ariana Grande and Cynthia Erivo, the film was released in mid-November 2025 and was a huge hit, with an impressive opening of $147M, going on to gross $532M.
Now that it has finally left theaters, Wicked: For Good is set to stream on Peacock from March 20, 2026 – and we’ve found a sneaky way to watch it for just $1.
The musical drama’s story picks up where the first part left off. Elphaba is now in exile and known as the “Wicked Witch of the West,” while Glinda has become a public figure in Oz, working with the Wizard’s regime.
Although most of the songs are adapted from Act II of the stage production, the film features two new songs: No Place Like Home (Cynthia Erivo) and The Girl in the Bubble (Ariana Grande).
Watch Wicked: For Good for $1
U.S. viewers are in luck — there’s a savvy way to get Peacock for just $1 (usually $10.99).
Right now, Walmart+ is offering a 30-day trial for $1, which includes your choice of a subscription to either Paramount+ or Peacock. If you’re tuning in for Wicked: For Good, Peacock is the one you’ll want.
Visiting another country from America? NordVPN can help unlock your $1 trial — more on that below.
How to watch Wicked: For Good from anywhere
If you’re traveling abroad when Wicked: For Good starts streaming, you’ll be unable to watch the movie like you normally would due to regional restrictions. Luckily, there’s an easy solution.
Downloading a VPN will allow you to stream online, no matter where you are. It’s a simple bit of software that changes your IP address, meaning that you can access on-demand content or live TV just as if you were at home.
How to watch Wicked: For Good online in UK, Canada, Australia and worldwide
Along with the US, Wicked: For Good is also available to buy or rent via premium video-on-demand (PVOD) services in the UK, Canada, and Australia, on platforms such as Apple TV, Prime Video, and Sky Store.
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
The future of AI isn’t just agentic; it’s deep personalization.
Rather than simple recommender systems that correlate user behavior to identify patterns and apply those to individual workflows, large language models (LLMs) and AI agents can analyze users directly to create deeply personalized experiences.
It’s this kind of aggressive customization that users are increasingly demanding — and the savviest enterprises that provide it (and soon) will win.
The goal is: “Don’t try to randomize, or guess who I am. I tell you, this is what I care about,” Lijuan Qin, head of product at Zoom AI, explains in a new Beyond the Pilot podcast.
How Zoom is incorporating personalization
Zoom is one company that has adapted to this trend: Its generative assistant, AI Companion, goes beyond basic summarization, smart recordings, and after-meeting action items to track opinion divergence and user alignment.
Users can customize meeting summaries based on their specific interests, and create targeted templates for follow-up emails to different personas (whether it be a salesperson or account executive). The AI assistant can then automatically populate these documents post-call. Meanwhile, a custom dictionary in Zoom AI Studio can process unique enterprise terminology and vocabulary for more relevant AI outputs, and a deep research mode can quickly deliver comprehensive analyses based on “internal expertise and external insights.”
Control is key here; the human can be “very specific [and] nail down” agent permissioning, Qin explained. They have “very clear controls” on follow-up actions, such as: Can the agent automatically send emails to specific recipients? Or will it trigger a verification step when it recognizes transcripts contain sensitive information (as dictated by the user)?
Knowing that AI can go off the rails at times, human users can track agent behavior in Zoom, enable and disable features, and control data access. This can help prevent outputs that are inaccurate or off-target.
“The most important thing is we do not assume AI is smart enough to get everything right,” Qin emphasized.
Getting context right
In this new agentic AI age, there is essentially a “land grab for context,” Sam Witteveen, co-founder of Red Dragon AI and Beyond the Pilot host, explains in the podcast.
“Definitely knowing your users is the big thing, right? Knowing what apps they are living in, what day-to-day tasks are they constantly doing?” he said. “Companies realize the more they have about you, the better the [AI] memory can get, the better they can customize.”
Claude Cowork is one app that is “really shining” at this, Witteveen says; OpenClaw is another. Models are good enough that they can begin to make decisions for users and respond to directions like: “You know a bunch of things about me. You’ve got all this context. Go and generate the skills that are going to help me do a better job.”
“With something like OpenClaw, you can customize it in any way you want, right? You can chat with it, you can tell it, ‘Hey, at 4 o’clock I want you to do this,’” Witteveen said.
However, token usage and security must always be taken into account, he advised. OpenClaw has been plagued by security issues since its launch. This has prompted many enterprises to uninstall the autonomous agent or outright ban its use; however, these uninstalls must be done correctly so that IT leaders don’t inadvertently delete their entire enterprise stack.
Meanwhile, in terms of token budget, personalization can run up costs. “You need to think about the metrics you are tracking,” Witteveen said. “This is very different from product to product, but metrics around these things are gonna be key.”
Watch the podcast to hear more about:
Why the companies that don’t experiment with AI skills right now “may be toast”
How Zoom built an AI companion that tracks opinion divergence — not just action items — in your meetings
Why the build vs. buy question just got a lot more urgent for enterprise software
Why “skills” may matter more than MCP for the future of enterprise AI
An anonymous reader quotes a report from The Register: A lobbying trade body for smaller cloud providers is asking the European Commission to impose interim measures blocking Broadcom from terminating the VMware Cloud Service Provider program, calling the decision a death sentence for some tech suppliers and an illegal squeeze on customer choice. As The Reg revealed in January, Broadcom shuttered the scheme, a move sources claimed affects hundreds of CSPs across Europe and curtails options for enterprises buying VMware software and services. The Cloud Infrastructure Services Providers in Europe (CISPE) trade group, representing nearly 50 tech suppliers, filed the complaint today with the EC Directorates-General, accusing Broadcom of bully-boy tactics, and calling for authorities to halt what it terms as “ongoing abuse.”
Francisco Mingorance, CISPE secretary general, said of the complaint: “Businesses — both cloud providers and their customers — are being irreparably damaged by Broadcom’s unfair actions, which we believe are illegal. After imposing outrageous and unjustified price hikes immediately following the acquisition of VMware, Broadcom is now applying the ‘coup de grace’. We need urgent intervention to force them to change. The only way to stop bullies is to stand up to them.” CISPE claims that, since Broadcom completed its $69 billion takeover of VMware in October 2023, prices have risen tenfold, payment is demanded upfront, products are bundled regardless of customer need, and minimum commitments are based on potential rather than actual consumption.
The VMware Cloud Service Provider (VCSP) program officially closed in January and all transactions must be complete by March 31. After that date, only a select group of suppliers will be able to sell VMware subscriptions — either standalone or as part of a broader service. Across Europe, we’re told this equates to hundreds of businesses losing their authorization. For some, the loss of VCSP status effectively destroys their market. Those whose operations were built around VMware must now hand customers to another authorized supplier or begin the costly migration to an alternative platform. Broadcom said in a statement responding to the complaint: “Broadcom strongly disagrees with the allegations by CISPE, an organization funded by hyperscalers, which misrepresent the realities of the market. We continue to be committed to investing significantly in our European VMware Cloud Service Provider partners… helping them offer alternatives to the hyperscalers and meet the evolving needs of European businesses and organizations.”
A US appeals court has found in favor of Masimo in its fight against Apple over pulse oximetry patents, but in the court that matters, a ruling makes it clear that there won’t be another ban on the Apple Watch.
The dispute concerns the blood pulse oximeter in the Apple Watch
In the now six-year-long legal battle between medical technology firm Masimo and Apple, this particular appeal concerns a ruling by the International Trade Commission (ITC). The ITC ruled that Apple had stolen trade secrets and violated patents with its blood pulse oximeter in the Apple Watch. Masimo wanted a ban on the Apple Watch, and in October 2023 the ITC issued an order barring Apple from importing the Apple Watch into the US; in December it denied the company’s appeal against that order.
Alphabet’s life sciences business Verily is restructuring and raising money as a new corporate entity. Verily announced that with its $300 million investment round, it will change from an LLC to a corporation and rename itself Verily Health Inc. As a result, Alphabet now has a minority stake rather than a controlling one in the business.
Similar to every other tech business, this chapter for Verily will be focused on AI. “From research to care, our customers need solutions that bring the best of clinical and scientific rigor together with AI to deliver the next generation of healthcare – one that is as precise as it is personal,” Chairman and CEO Stephen Gillett said.
One morning in May 2019, a cardiac surgeon stepped into the operating room at Boston Children’s Hospital more prepared than ever before to perform a high-risk procedure to rebuild a child’s heart. The surgeon was experienced, but he had an additional advantage: He had already performed the procedure on this child dozens of times—virtually. He knew exactly what to do before the first cut was made. Even more important, he knew which strategies would provide the best possible outcome for the child whose life was in his hands.
How was this possible? Over the prior weeks, the hospital’s surgical and cardio-engineering teams had come together to build a fully functioning model of the child’s heart and surrounding vascular system from MRI and CT scans. They began by carefully converting the medical imaging into a 3D model, then used physics to bring the 3D heart to life, creating a dynamic digital replica of the patient’s physiology. The mock-up reproduced this particular heart’s unique behavior, including details of blood flow, pressure differentials, and muscle-tissue stresses.
This type of model, known as a virtual twin, can do more than identify medical problems—it can provide detailed diagnostic insights. In Boston, the team used the model to predict how the child’s heart would respond to any cut or stitch, allowing the surgeon to test many strategies to find the best one for this patient’s exact anatomy.
That day, the stakes were high. With the patient’s unique condition—a heart defect in which large holes between the atria and ventricles were causing blood to flow between all four chambers—there was no manual or textbook to fully guide the doctors. The condition strains the lungs, so the doctors planned an open-heart surgery to reroute deoxygenated blood from the lower body directly to the lungs, bypassing the heart. Typically with this kind of surgery, decisions would be made on the fly, under demanding conditions, and with high uncertainty. But in this case, the plan had been tested in advance, and the entire team had rehearsed it before the first incision. The surgery was a complete success.
Such procedures have become routine at the Boston hospital. Since that first patient, nearly 2,000 procedures have been guided by virtual-twin modeling. This is the power of the technology behind the Living Heart Project, which I launched in 2014, five years before that first procedure. The project started as an exploratory initiative to see if modeling the human heart was possible. Now with more than 150 member organizations across 28 countries, the project includes dozens of multidisciplinary teams that regularly use multiscale virtual twins of the heart and other vital organs.
This technology is reshaping how we understand and treat the human body. To reach this transformative moment, we had to solve a fundamental challenge: building a digital heart accurate enough—and trustworthy enough—to guide real clinical decisions.
A father’s concern
Now entering its second decade, the Living Heart Project was born in part from a personal conviction. For many years, I had watched helplessly as my daughter Jesse faced endless diagnostic uncertainty due to a rare congenital heart condition in which the position of the ventricles is reversed, threatening her life as she grew. As an engineer, I understood that the heart was an array of pumping chambers, controlled by an electrical signal and its blood flow carefully regulated by valves. Yet I struggled to grasp the unique structure and behavior of my daughter’s heart well enough to contribute meaningfully to her care. Her specialists knew the bleak forecast children like her faced if left untreated, but because every heart with her condition is anatomically unique, they had little more than their best guesses to guide their decisions about what to do and when to do it. With each specialist, a new guess.
Then my engineering curiosity sparked a question that has guided my career ever since: Why can’t we simulate the human body the way we simulate a car or a plane?
At a visualization center in Boston, VR imagery helps the mother of a young girl with a complex heart defect understand the inner workings of her child’s heart. Dassault Systèmes
I had spent my career developing powerful computational tools to help engineers build digital models of complex mechanical systems, using models that ranged from the interactions of individual atoms to the components of entire vehicles. What most of these models had in common was the use of physics to predict behavior and optimize performance. But in medicine today, those same physics-based approaches rarely inform decision-making. In most clinical settings, treatment decisions still hinge on judgments drawn from static 2D images, statistical guidelines, and retrospective studies.
This was not always the case. Historically, physics was central to medicine. The word “physician” itself traces back to the Latin physica, which translates to “natural science.” Early doctors were, in a sense, applied physicists. They understood the heart as a pump, the lungs as bellows, and the body as a dynamic system. To be a physician meant you were a master of physics as it applied to the human body.
As medicine matured, biology and chemistry grew to dominate the field, and the knowledge of physics got left behind. But for patients like my daughter, that child in Boston, and millions like them, outcomes are governed by mechanics. No pill or ointment—no chemistry-based solution—would help, only physics. While I did not realize it at the time, virtual twins can reunite modern physicians with their roots, using engineering principles, simulation science, and artificial intelligence.
A decade of progress
The LHP concept was simple: Could we combine what hundreds of experts across many specialties knew about the human heart to build a digital twin accurate enough to be trusted, flexible enough to personalize, and predictive enough to guide clinical care?
We invited researchers, clinicians, device and drug companies, and government regulators to share their data, tools, and knowledge toward a common goal that would lift the entire field of medicine. The Living Heart Project launched with a dozen or so institutions on board. Within a year, we had created the first fully functional virtual twin of the human heart.
The Living Heart was not an anatomical rendering, tuned to simply replicate what we observed. It was a first-principles model, coupling the network of fibers in the heart’s electrical system, the biological battery that keeps us alive, with the heart’s mechanical response, the muscle contractions that we know as the heartbeat.
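The coupling idea can be sketched in miniature. The toy below is nothing like the project's multiscale model: it is a single FitzHugh–Nagumo-style cell, a standard textbook excitation model, paired with an invented "active tension" variable that lags the electrical upstroke, purely to show how an electrical pulse can drive a mechanical response.

```python
def simulate(steps=4000, dt=0.01):
    """One excitable cell: a stimulus triggers an electrical spike (v),
    and a slow 'active tension' variable follows the excitation,
    standing in for muscle contraction. Parameters are the usual
    FitzHugh-Nagumo textbook values, not physiological ones."""
    v, w, tension = -1.2, -0.625, 0.0   # start at the resting state
    peak_tension = 0.0
    for step in range(steps):
        i_stim = 0.5 if step < 300 else 0.0        # brief pacing pulse
        dv = v - v ** 3 / 3 - w + i_stim           # fast excitation
        dw = 0.08 * (v + 0.7 - 0.8 * w)            # slow recovery
        dtension = 0.1 * (max(v, 0.0) - tension)   # tension tracks excitation
        v += dt * dv
        w += dt * dw
        tension += dt * dtension
        peak_tension = max(peak_tension, tension)
    return peak_tension

peak = simulate()
print(f"peak active tension: {peak:.2f}")  # nonzero: the pulse triggered a beat
```

The real model replaces this single cell with a fiber network carrying the conduction wave and a full continuum-mechanics description of the contracting muscle, but the one-way street shown here, electrical signal in, mechanical response out, is the same coupling in embryo.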
The Living Heart virtual twin simulates how the heart beats, offering different views to help scientists and doctors better predict how it will respond to disease or treatment. The center view shows the fine engineering mesh, the detailed framework that allows computers to model the heart’s motion. The image on the right uses colors to show the electrical wave that drives the heartbeat as it conducts through the muscle, and the image on the left shows how much strain is on the tissue as it stretches and squeezes. Dassault Systèmes
Academic researchers had long explored computational models of the heart, but those projects were typically limited by the technology they had access to. Our version was built on industrial-grade simulation software from Dassault Systèmes, a company best known for modeling tools used in aerospace and automotive engineering. At the time, I was working there to develop the engineering simulation division. This platform gave teams the tools to personalize an individual heart model using the patient’s MRI and CT data, blood-pressure readings, and echocardiogram measurements, directly linking scans to simulations.
Surgeons then began using the Living Heart to model procedures. Device makers used it to design and test implants. Pharmaceutical companies used it to evaluate drug effects such as toxicity. Hundreds of publications have emerged from the project, and because they all share the same foundation, the findings can be reproduced, reused, and built upon. With each application, the research community’s understanding of the heart snowballed.
Early on, we also addressed an essential requirement for these innovations to make it to patients: regulatory acceptance. Within the project’s first year, the U.S. Food and Drug Administration agreed to join the project as an observer. Over the next several years, methods for using virtual-heart models as scientific evidence began to take shape within regulatory research programs. In 2019, we formalized a second five-year collaboration with the FDA’s Center for Devices and Radiological Health with a specific goal.
That goal was to use the heart model to create a virtual patient population and re-create a pivotal trial of a previously approved device for repairing the heart’s mitral valve. This helped our team learn how to create such a population, and let the FDA experiment with evaluating virtual evidence as a replacement for evidence from flesh-and-blood patients. In August 2024, we published the results, creating the first FDA-led guidelines for in silico clinical trials and establishing a new paradigm for streamlining and reducing risk in the entire clinical-trial process.
In 10 years, we went from a concept that many people doubted could be achieved to regulatory reality. But building the heart was only the beginning. Following the template set by the heart team, we’ve expanded the project to develop virtual twins of other organs, including the lungs, liver, brain, eyes, and gut. Each corresponds to a different medical domain, which has its own community, data types, and clinical use cases. Working independently, these teams are progressing toward a breakthrough in our understanding of the human body: a multiscale, modular twin platform where each organ twin could plug into a unified virtual human.
How a digital twin of the heart is constructed
A cardiac digital twin starts with medical imaging, typically MRI, CT, or both. The slices are reconstructed into the 3D geometry of the heart and connected vessels. The geometry of the whole organ must then be segmented into its constituent parts, so each substructure—atria, ventricles, valves, and so on—can be assigned their unique properties.
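The property-assignment step described above can be sketched as a simple lookup: each labeled substructure of the segmented geometry gets its own material parameters. The region names and values below are hypothetical placeholders, not measured tissue constants.

```python
# A minimal sketch of the segmentation-to-properties step: each
# labeled substructure is assigned its own material parameters.
# All names and values here are illustrative assumptions.
PROPERTIES = {
    # substructure: (passive stiffness in kPa, conduction velocity in m/s)
    "left_atrium":     (10.0, 0.9),
    "right_atrium":    (10.0, 0.9),
    "left_ventricle":  (20.0, 0.6),
    "right_ventricle": (15.0, 0.6),
    "mitral_valve":    (50.0, 0.0),   # valves are passive: no conduction
}

def assign(mesh_labels):
    """Attach material properties to each labeled region of the mesh."""
    return {label: PROPERTIES[label] for label in mesh_labels}

regions = assign(["left_ventricle", "mitral_valve"])
print(regions["left_ventricle"])
```

In a real pipeline, each region would carry full constitutive laws and fiber orientations rather than two scalars, but the bookkeeping pattern is the same.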
At this point, the object is converted to a functional, computational model that can represent how the various cardiac tissues deform under load—the mechanics. The complete digital twin model becomes “living” when we integrate the electrical fiber network that drives mechanical contractions in the muscle tissue.
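The coupling of electrical excitation to mechanical contraction can be illustrated with a toy single-cell model: a FitzHugh-Nagumo excitation variable fires once in response to a brief stimulus, and depolarization drives a normalized active tension. This is a sketch of the coupling idea only; the parameter values are standard textbook choices, not the Living Heart’s actual material laws.

```python
# Minimal sketch of electromechanical coupling: a FitzHugh-Nagumo
# cell model drives active tension in a lumped muscle element.
# All parameters are illustrative; time is dimensionless.

def simulate(t_end=400.0, dt=0.01, stim_end=1.0):
    v, w = -1.2, -0.6          # excitation and recovery variables (near rest)
    tension = 0.0              # normalized active tension
    peak_tension = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        i_stim = 0.8 if t < stim_end else 0.0   # brief pacing stimulus
        # FitzHugh-Nagumo kinetics (forward Euler)
        dv = v - v ** 3 / 3.0 - w + i_stim
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        # Tension builds while the cell is depolarized (v > 0), decays otherwise.
        target = 1.0 if v > 0.0 else 0.0
        tension += dt * 0.05 * (target - tension)
        peak_tension = max(peak_tension, tension)
    return peak_tension

print(f"peak normalized tension: {simulate():.2f}")
```

With the stimulus, the cell fires a single action potential and tension rises and relaxes; without it (`stim_end=0.0`), the cell stays at rest and no tension develops.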
Each part of the heart, such as the left ventricle [left], is superimposed with a detailed digital mesh to re-create its physiology. These pieces come together to form an anatomically accurate rendering of the whole organ [right]. Dassault Systèmes
To simulate circulation, the twin adds computational models of hemodynamics, the physics of blood flow and pressure. The model is constrained by boundary conditions of blood flow, valve behavior, and vascular resistance set to closely match human physiology. This lets the model predict blood flow patterns, pressure differentials, and tissue stresses.
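The simplest version of such a boundary condition is the classic two-element Windkessel model, in which arterial pressure P obeys C·dP/dt = Q_in(t) − P/R for a peripheral resistance R and arterial compliance C. The sketch below integrates it against an assumed pulsatile inflow; the waveform and parameter values are illustrative, not patient data.

```python
# A minimal two-element Windkessel sketch of a hemodynamic
# boundary condition: C * dP/dt = Q_in(t) - P / R.
# Inflow shape and parameter values are illustrative assumptions.
import math

R = 1.0    # peripheral resistance (mmHg·s/mL)
C = 1.5    # arterial compliance (mL/mmHg)
T = 0.8    # cardiac period (s); systole occupies the first 0.3 s

def q_in(t):
    """Pulsatile inflow: half-sine ejection during systole, else zero."""
    phase = t % T
    return 400.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

def simulate(n_beats=20, dt=1e-4):
    p = 80.0                     # initial pressure (mmHg)
    p_min = p_max = None
    for i in range(int(n_beats * T / dt)):
        t = i * dt
        p += dt * (q_in(t) - p / R) / C      # forward Euler step
        if t >= (n_beats - 1) * T:           # record the final beat only
            p_min = p if p_min is None else min(p_min, p)
            p_max = p if p_max is None else max(p_max, p)
    return p_min, p_max

lo, hi = simulate()
print(f"diastolic ≈ {lo:.0f} mmHg, systolic ≈ {hi:.0f} mmHg")
```

After the transient dies out, the model settles into a physiologically plausible pressure swing; in a full twin, such lumped models terminate the 3D flow domain at each outlet vessel.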
Finally, the model is personalized and calibrated using available patient data, such as how much the volume of the heart chambers changes during the cardiac cycle, pressure measurements, and the timing of electrical pulses. This means the twin reflects not only the patient’s anatomy but how their specific heart functions.
When the FDA in silico clinical trial initiative launched in 2019, the project’s focus shifted from these handcrafted virtual twins of specific patients to cohorts large enough to stand in for entire trial populations. That scale is feasible today only because virtual twins have converged with generative AI. Modeling thousands of patients’ responses to a treatment or projecting years of disease progression is prohibitively slow with conventional digital-twin simulations. Generative AI removes that bottleneck.
AI boosts the capability of virtual twins in two complementary ways. First, machine learning algorithms are unrivaled at integrating the patchwork of imaging, sensor, and clinical records needed to build a high-fidelity twin. The algorithms rapidly search thousands of model permutations, benchmark each against patient data, and converge on the most accurate representation. Workflows that once required months of manual tuning can now be completed in days, making it realistic to spin up population-scale cohorts or to personalize a single twin on the fly in the clinic.
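The search-and-benchmark loop can be illustrated in miniature: generate many candidate parameter sets, score each against patient measurements, and keep the best fit. The toy "model" here is an exponential diastolic pressure decay P(t) = P0·exp(−t/τ), and the patient samples and parameter ranges are synthetic assumptions for illustration.

```python
# A minimal sketch of calibration by search: score many candidate
# models against measurements and keep the best. The model, data,
# and ranges below are synthetic assumptions.
import math
import random

def model(p0, tau, t):
    """Toy model: exponential diastolic pressure decay."""
    return p0 * math.exp(-t / tau)

# Synthetic "patient" measurements generated from p0=115, tau=1.4.
times = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
measured = [model(115.0, 1.4, t) for t in times]

def misfit(p0, tau):
    """Sum of squared errors between a candidate and the data."""
    return sum((model(p0, tau, t) - m) ** 2 for t, m in zip(times, measured))

def calibrate(n_candidates=20000, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        p0 = rng.uniform(80.0, 160.0)    # candidate peak pressure (mmHg)
        tau = rng.uniform(0.5, 3.0)      # candidate time constant (s)
        err = misfit(p0, tau)
        if best is None or err < best[0]:
            best = (err, p0, tau)
    return best

err, p0, tau = calibrate()
print(f"recovered p0 ≈ {p0:.1f} mmHg, tau ≈ {tau:.2f} s")
```

Real calibration pipelines replace this brute-force search with gradient-based or learned optimizers and fit hundreds of parameters at once, but the benchmark-and-converge structure is the same.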
Second, enriching AI models’ training sets with data from validated virtual patients grounds the AI simulations in physics. By contrast, many conventional AI predictions for patient trajectories rely on statistical modeling trained on retrospective datasets. Such models can drift beyond physiological reality, but virtual twins anchor predictions in the laws of hemodynamics, electrophysiology, and tissue mechanics. This added rigor is indispensable for both research and clinical care—especially in areas where real-world data are scarce, whether because a disease is rare or because certain patient populations, such as children, are underrepresented in existing datasets.
Enabling in silico clinical trials
On the research side, the FDA-sponsored In Silico Clinical Trial Project that we completed in 2024 opened a new world for medical innovations. A conventional clinical trial may take a decade, and 90 percent of new drug treatments fail in the process. Virtual twins, combined with AI methods, allow researchers to design and test treatments quickly in a simulated human environment. With a small library of virtual twins, AI models can rapidly create expansive virtual patient cohorts to cover any subset of the general population. As clinical data becomes available, it can be added into the training set to increase reliability and enable better predictions.
The Living Heart Project has expanded beyond the heart, modeling organs throughout the body. The 3D brain reconstruction [top] shows major pathways in the brain’s white matter connecting color-coded regions of the brain. The lung virtual twin [middle] combines the organ’s geometry with a physics-based simulation of air flowing down the trachea and into the bronchi. And the cross section of a patient’s foot [bottom] shows points of strain in the soft tissue when bearing weight. Dassault Systèmes
Virtual twin cohorts can represent a realistic population by building individual “virtual patients” that vary by age, gender, race, weight, disease state, comorbidities, and lifestyle factors. These twins can be used as a rich training set for the AI model, which can expand the cohort from dozens to hundreds of thousands. Next the virtual cohort can be filtered to identify patients likely to respond to a treatment, increasing the chances of a successful trial for the target population.
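The cohort-building and filtering steps above can be sketched as follows: sample virtual patients with varying attributes, then select the subset matching an inclusion rule. The attributes, distributions, and "likely responder" criterion are illustrative assumptions, not the project's actual trial criteria.

```python
# A minimal sketch of assembling and filtering a virtual cohort.
# Attributes, distributions, and the responder rule are
# illustrative assumptions.
import random

def make_cohort(n, seed=0):
    rng = random.Random(seed)
    cohort = []
    for i in range(n):
        cohort.append({
            "id": i,
            "age": rng.randint(18, 90),
            "sex": rng.choice(["F", "M"]),
            "weight_kg": round(rng.gauss(78, 15), 1),
            "ejection_fraction": round(rng.uniform(0.20, 0.70), 2),
            "diabetic": rng.random() < 0.12,
        })
    return cohort

def likely_responder(p):
    """Hypothetical inclusion rule: reduced ejection fraction, not elderly."""
    return p["ejection_fraction"] < 0.40 and p["age"] < 80

cohort = make_cohort(100_000)
trial_arm = [p for p in cohort if likely_responder(p)]
print(f"{len(trial_arm)} of {len(cohort)} virtual patients selected")
```

In practice each record would be a full personalized twin whose treatment response is simulated, not a flat dictionary, but the sample-then-filter workflow mirrors how a virtual trial arm is assembled.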
The trial design can also include a sampling of patient types less likely to respond or with elevated risk factors, thus allowing regulators and clinicians to understand the risks to the broader population without jeopardizing overall trial success. This methodology enhances precision and efficiency in clinical research, providing population-level insights previously available only after many years of real-world evidence.
Of course, though today’s heart digital twins are powerful, they’re not perfect replicas. Their accuracy is bounded by three main factors: what we can measure (for example, image resolution or the uncertainty of how tissue behaves in real life), what we must assume about the physiology, and what we can validate against real outcomes. Many inputs, like scarring, microvascular function, or drug effects, are difficult to capture clinically, so models often rely on population data or indirect estimation. That means predictions can be highly reliable for certain questions but remain less certain for others. Additionally, today’s digital twins lack validation for predicting long-term outcomes years in the future, because the technology has been in use for only a few years.
Over time, each of these limitations will steadily shrink. Richer, more standardized data will tighten personalization of the models. AI tools will help automate labor-intensive steps. And the collection of longitudinal data will improve the model’s ability to reliably predict how the body will evolve over time.
Throughout modern medicine, new technologies have sharpened our ability to diagnose, providing ever-clearer images, lab data, and analytics that tell physicians what is presently happening inside a patient’s body. Virtual twins shift that paradigm, giving clinicians a predictive tool.
This “Living Lung” virtual-twin simulation shows strain patterns during breathing. Mona Eskandari/UC Riverside
Early demonstrations are already appearing in many areas of medicine, including cardiology, orthopedics, and oncology. Soon, doctors will also be able to collaborate across specialties, using a patient-specific virtual twin as the common ground for discussing potential interactions or side effects they couldn’t predict independently.
Although these applications will take some time to become the standard in clinical care, more changes are on the horizon. Real-time data from wearables, for example, could continuously update a patient’s personalized virtual twin. This approach could empower patients to understand and engage more deeply in their care, as they could see the direct effects of medical and lifestyle changes. In parallel, their doctors could get comprehensive data feeds, using virtual twins to monitor progress.
Imagine a digital companion that shows how your particular heart will react to different amounts of salt intake, stress, or sleep deprivation. Or a visual explanation of how your upcoming surgery will affect your circulation or breathing. Virtual twins could demystify the body for patients, fostering trust and encouraging proactive health decisions.
A new era of healing
With the Living Heart Project, we’re bringing physics back to physicians. Modern physicians won’t need to be physicists, any more than they need to be chemists to use pharmacology. However, to benefit from the new technology, they will need to adapt their approach to care.
This means no longer seeing the body as a collection of discrete organs and considering only symptoms, but instead viewing it as a dynamic system that can be understood, and in most cases, guided toward health. It means no longer guessing what might work but knowing—because the simulation has already shown the result. By better integrating engineering principles into medicine, we can redefine it as a field of precision, rooted in the unchanging laws of nature. The modern physician will be a true physicist of the body and an engineer of health.