I interviewed two Assassin’s Creed Shadows devs and asked what they’re excited for fans to experience
One dev focused on the recruitment in-game
Another highlighted the new way that Shadows is telling its story
Excitement for Assassin’s Creed Shadows is growing as we race toward the game’s March 20 release date.
I recently visited Ubisoft Quebec to go hands-on with the game and get a behind-the-scenes look at development. During my visit, I spoke to key developers and quizzed them on the elements of the game that haven't been discussed much so far but that they're most excited for fans to see.
I first talked to associate game director Simon Lemay-Comtois, who pointed to the people you'll meet out in the world in Assassin's Creed Shadows.
“I think the allies that you can recruit, like their individual stories and personalities, and the actors who play some of them is the coolest bit that we have not yet revealed fully.”
This relates to the party of extra characters you can collect in Shadows who will take up residence in the Hideout area of the game after you meet them.
Game director Charles Benoit looked to the narrative when I asked him the same question, saying the most exciting thing we've yet to see more of is "the way the stories unfold."
"I think players will really like the story structure," he said, referring to the game's use of flashbacks to explore both characters' backstories, as well as the dual perspective on the world and narrative more generally.
"So you have a mix of exploring and doing the open world and then leaving [and seeing] past events of their life. And I think the way it's structured is pretty cool and pretty different from other Assassin's Creed games, so I can't wait to see what people will think about that."
The story is something I’m really keen to learn more about myself. Even after several hands-on hours with the game, and experiencing plenty of intrigue and interesting hooks, I’m still in the dark about what’s going on in the game.
The FBI’s missive follows three previous ones in as many years
Statement is aimed at educating businesses and warding off domestic collaborators
Suggested remedies include employing endpoint protection on computer systems and checking applications for “typos and unusual nomenclature”
The FBI has claimed North Korean IT workers are extorting US companies that have hired them by leveraging their access to steal source code.
In a statement, the agency warned domestic and international firms that employees turned threat actors "facilitate cyber-criminal activities and conduct revenue-generating activity" using stolen data "on behalf of the regime."
It recommended endpoint protection, and monitoring network logs to identify where data has been compromised across “easily accessible means” like shared internal drives and cloud storage drives.
FBI guidance on remote hiring processes
The FBI also recommended a litany of actions that all amount to taking care to know who you’re hiring, which sounds like good practice even if you’re not especially worried about unwittingly hiring a threat actor.
It recommended stringent identity verification throughout the recruitment process, and cross-checking applicants' details against those of other candidates in the pile and across different HR systems.
It also claimed these applicants are using AI tools to obfuscate their identities but, if true, offered little advice to counter them beyond conducting recruitment processes in person, which isn't always possible.
The agency also suggested recruiters ask applicants “soft questions” about their whereabouts and identity, but we’d suggest that this is good practice all round too.
North Korean IT workers have been a target of the FBI for some time; the agency released separate guidance in 2022, 2023, and 2024. In the latter, it expressed concern that US-based individuals were, knowingly or unknowingly, helping facilitate state-sponsored threat actors by setting up US-based infrastructure such as front addresses and businesses.
When the US Food and Drug Administration opened the door for hearing aids to be sold over the counter in 2022, I was all in. Prescription hearing aids are criminally expensive, and several OTC models have proven that you don’t need to visit a hearing aid shop in a mall to get a product that gets the job done. I’ve tested 38 hearing aids to date, and 29 have been available over the counter. All of my favorite hearing aid products have been OTC models. Until now.
Starkey is a major name in the hearing aid business, and it’s not some white-label company that slaps a logo on someone else’s product (an epidemic in this industry). Starkey has been around since 1967, and while it no longer designs or manufactures its own digital signal processing chips, it is intimately involved with hearing aid development—and famously brags that it has outfitted everyone from Ronald Reagan to Mother Teresa with its hearing aids.
Now, with its new Edge AI RIC RT hearing aids, Starkey takes a position at the very top of the heap in product quality and performance thanks in large part to a new audio processor that includes an integrated neural processing unit—just like our laptops and phones. Starkey says this is the only NPU-powered hearing aid line on the market.
Receiver in Canal
There’s nothing particularly inventive about the way the Edge AI RIC RT (which stands for “receiver in canal, rechargeable with telecoil”) looks, built on the classic, teardrop-shaped behind-the-ear design, though it is available in your choice of seven colors. Each aid weighs 2.62 grams, which is competitive for a behind-the-ear hearing aid. (To compare, the Jabra Enhance Select 500 weighs 2.56 grams.) A single button on the back of each aid controls volume: down on the left aid, up on the right aid.
As these are prescription aids, you’ll need an audiologist to fit and tune them. Rather than sending me to a local doctor, Starkey took the unusual step of flying its chief hearing health officer, Dave Fabry, to my home to complete this task. Fabry brought a suitcase full of equipment to re-create what the doctor’s office experience would normally be like, only at my dining table. Afterward, he gave me a training session on the aids and walked me through the My Starkey app, just like a standard audiologist.
Fabry also outfitted me with custom eartips molded to fit the exact shape of my ear canals. (This type of service would be at the discretion of your audiologist.) This is a simple process that involves jamming putty into your ears and waiting for it to harden. This putty can then be used to create a bespoke eartip that fits perfectly—although the usual collection of open and closed eartips in various sizes are also included in the box.
I was in middle school the last time I took a Spanish class. I remember enough for toddler talk — phrases like "¿Dónde está el baño?" and "mi gato es muy gordo" — but having a meaningful conversation in Spanish without a translator is out of the question. So I was genuinely surprised the other day when, thanks to the Ray-Ban Meta smart glasses, I could have a mostly intelligible conversation with a Spanish speaker about K-pop.
Live translations were added as part of a feature drop last month, alongside live AI and Shazam. It's exactly what it sounds like. When you turn the feature on, you can have a conversation with a Spanish, French, or Italian speaker, and the glasses will translate what's being said directly into your ears in real time. You can also view a transcript of the conversation on your phone. Whatever you say in English will also be translated into the other language.
Missing is the bit where we both start singing “APT APT APT!” Screenshot: Meta
Full disclosure, my conversation was part of a Meta-facilitated demo. That’s not truly the same thing as plopping these glasses on, hopping down to Barcelona, and trying it in the wild. That said, I’m a translation tech skeptic and intended to find all the cracks where this tech could fail.
The glasses were adept at translating a basic conversation about K-pop bands. After my conversation partner was done speaking, the translation would kick in soon after. This worked well if we talked in measured, medium-speed speech, with only a few sentences at a time. But that’s not how people actually speak. In real life, we launch into long-winded tirades, lose our train of thought, and talk much faster when angry or excited.
To Meta's credit, it anticipated some of these situations. I had my conversation partner speak faster and for longer stretches. It handled the speed decently well, though there was understandably some lag in the real-time transcript. With longer speech, the glasses started translating midway through, before my partner was done talking. That was a bit jarring and awkward, as you, the listener, have to recognize you're a bit behind. The experience is similar to how live interpreters work on international news broadcasts.
I was most impressed that the glasses could handle a bit of Spanglish. Multilingual speakers rarely stick to just one language, especially in mixed-language company. In my family, we call it Konglish (Korean-English), and people slip in and out of each language, mixing and matching grammar in a way that's chaotic yet functional. For example, my aunt will often speak several sentences in Korean, throw in two sentences in English, do another that's a mix of Korean and English, and then revert to Korean. I had my conversation partner try something similar in Spanish and… the results were mixed.
You can see the transcript start to struggle with slang while trying to rapidly switch between Spanish and English. Screenshot: Meta
On the one hand, the glasses could handle short switches between languages. However, longer forays into English led to the AI repeating the English in my ear. Sometimes, it’d also repeat what I’d said, because it started getting confused. That got so distracting I couldn’t focus on what was being said.
The glasses struggled with slang. Every language has its dialects, and each dialect can have its unique spin on colloquialisms. You need look no further than how American teens have subjected us all to phrases like skibidi and rizz. In this case, the glasses couldn’t accurately translate “no manches.” That translates to “no stain,” but in Mexican Spanish, it also means “no way” or “you’re kidding me!” The glasses chose the literal translation. In that vein, translation is an art. In some instances, the glasses got the correct gist across but failed to capture some nuances of what was being said to me. This is the burden of all translators — AI and human alike.
You can’t use these to watch foreign-language movies or TV shows without subtitles. I watched a few clips of Emilia Pérez, and while it could accurately translate scenes where everyone was speaking loudly and clearly, it quit during a scene where characters were rapidly whispering to each other in hushed tones. Forget about the movie’s musical numbers entirely.
You wouldn’t necessarily have these issues if you stuck to what Meta intended with this feature. It’s clear these glasses were mostly designed to help people have basic interactions while visiting other countries — things like asking for directions, ordering food at a restaurant, going to a museum, or completing a transaction. In those instances, you’re more likely to encounter people who speak slower with the understanding that you are not a native speaker.
It's a good start, but I still dream of the babel fish from Douglas Adams' Hitchhiker's Guide to the Galaxy — a little creature that, when plopped in your ear, can instantly and accurately translate any language into your own. For now, that's still the realm of science fiction.
Meta CEO Mark Zuckerberg said that the company plans to significantly up its capital expenditures this year as it aims to keep pace with rivals in the cutthroat AI space.
In a Facebook post Friday, Zuckerberg said that Meta expects to spend $60 billion-$80 billion on CapEx in 2025, primarily on data centers and growing the company’s AI development teams. That projected range is around double the $35 billion-$40 billion Meta spent on CapEx last year.
Zuckerberg also wrote that Meta plans to bring around one gigawatt of compute online this year, roughly the amount of power consumed by 750,000 average homes, and expects the company’s data centers to pack over 1.3 million GPUs by year-end.
Meta’s investments come as AI rivals pour billions into their own infrastructure projects. Microsoft plans to spend $80 billion on AI data centers in 2025, while OpenAI is contributing to a joint venture, Stargate, that could yield it hundreds of billions of dollars’ worth of data center resources.
Now that Netflix has unveiled which titles will be leaving in February 2025, take this as your sign to get a head start on catching the best Netflix movies before it's too late. While Netflix doesn't tend to remove a huge number of titles each month (which is a good thing), there are always one or two gems thrown into the mix that I know I'll miss dearly. This month, those movies are Pearl (2022) and Stand By Me (1986), but there are plenty of new additions on the way to make up for it, judging by everything coming to Netflix in February.
When it comes to Netflix series, I find that these are less prone to being axed from the library. This month, only two TV shows (Brooklyn Nine-Nine and The Mindy Project) are being removed, compared to over 30 movies, so you can't say that Netflix isn't one of the best streaming services when it comes to TV content.
Everything leaving Netflix in February 2025
Leaving on February 1
Cocaine Cowboys 2 (movie)
Plus One (movie)
Run All Night (movie)
Leaving on February 11
The Fast and the Furious (movie)
2 Fast 2 Furious (movie)
The Fast and the Furious: Tokyo Drift (movie)
Fast Five (movie)
Fast & Furious 6 (movie)
The Pope’s Exorcist (movie)
Leaving on February 14
The Catcher Was a Spy (movie)
White Boy (movie)
Leaving on February 15
47 Meters Down: Uncaged (movie)
Blackhat (movie)
Pearl (movie)
Leaving on February 20
Book Club (movie)
Southpaw (movie)
Leaving on February 21
All Good Things (movie)
Leaving on February 24
U Turn (movie)
Leaving on February 25
Brooklyn Nine-Nine seasons 1-2 (TV show)
Leaving on February 28
21 Bridges (movie)
A Haunted House (movie)
A Haunted House 2 (movie)
Aloha (movie)
The Angry Birds Movie (movie)
Blended (movie)
Cinderella Man (movie)
Due Date (movie)
Green Lantern (movie)
Inception (movie)
Legends of the Fall (movie)
Little (movie)
The Mindy Project seasons 1-6 (TV show)
Oblivion (movie)
The Other Guys (movie)
Scooby-Doo (movie)
Scooby-Doo 2: Monsters Unleashed (movie)
Sixteen Candles (movie)
Stand by Me (movie)
Without a Paddle (movie)
Under the executive order, DOGE teams, which “will typically include one DOGE Team Lead, one engineer, one human resources specialist, and one attorney” will be dispatched to various agencies. They will be granted “access to all unclassified agency records, software systems, and IT systems,” ostensibly with the goal of streamlining data sharing across federal agencies.
A former USDS employee who spoke to WIRED on condition of anonymity called the repurposing of the Digital Service an “A+ bureaucratic jiu-jitsu move.” But, they say, they’re concerned that DOGE’s access to sensitive information could be used to do more than just streamline government operations.
“Is this technical talent going to be pointed toward using data from the federal government to track down opponents?” they ask. “To track down particular populations of interest to this administration for the purposes of either targeting them or singling them out or whatever it might end up being?”
“DOGE teams have a lawyer, an HR director, and an engineer. If you were looking to identify functions to cut, people to cut, having an HR director there and having a lawyer say, ‘Here’s what we’re allowed to do or not do,’ would be one way that you would facilitate that,” says Don Moynihan, a professor of public policy at the University of Michigan, noting that DOGE’s potential access to federal employee data could put “them in some sort of crosshairs to be fired.”
Who exactly is going to be part of DOGE is a particularly thorny issue because there are technically two DOGEs. One is the permanent organization, the revamped USDS—now the US DOGE Service. The other is a temporary organization, with a termination date of July 4, 2026. Creating this organization means the temporary DOGE can operate under a special set of rules. It can sequester employees from other parts of the government, and can also accept people who want to work for the government as volunteers. Temporary organizations can also hire what are known as special government employees—experts in a given field who can bypass the rigors of the regular federal hiring processes. They’re also not subject to the same transparency requirements as other government employees.
In the best case scenario, this would allow DOGE to move quickly to address issues and fast track necessary talent, as well as build systems that make government services more seamless by facilitating the flow of information and data. But in the worst case, this could mean less transparency around the interests of people working on important government projects, while enabling possible surveillance.
Sony is officially ending production of recordable Blu-rays. In an announcement from Japan spotted by Tom’s Hardware, Sony Storage Media Solutions said it will stop manufacturing the discs in February, alongside recordable MiniDiscs, MD-Data, and MiniDV cassettes, adding, “there will be no successor models.”
This discontinuation doesn’t impact the Blu-rays you can buy with films or TV shows on them; it just affects the blank ones consumers use to record things themselves with PCs or DVRs. Sony hinted at the discontinuation last year, telling the Japanese outlet AVWatch that it would “gradually end development and production of ‘recordable optical disc media.’”
Sony has helped lead the production of Blu-ray since the very beginning. In 2000, the company showed off the first Blu-ray prototypes and later revealed its Blu-ray disc recorders in 2006. Like Sony, LG, Samsung, and Oppo have also started backing away from the format by ending the production of Blu-ray players.
In addition to Blu-ray, Sony’s announcement also affects the MiniDisc — the compact disc format Sony made in 1992 as an alternative to more fragile cassettes and unwieldy CDs. This might make it harder for MiniDisc diehards to get their hands on blank discs, which you can apparently still load music onto using your smartphone.
Flip is a social commerce app that lets shoppers become creators. They can share honest reviews and earn cash based on engagement on the platform. As Flip competes with TikTok Shop and other platforms in a highly competitive market, it has introduced a new creator fund that it hopes will help it stand out.
The creator fund quietly rolled out earlier this week and provides up to $100 million worth of equity to participating creators over the next five or so years. Grants range from $6,000 to $100,000, depending on a creator’s level of engagement. Notably, Flip aims to distribute up to $1 million per day for the first 30 days of the program.
The new grant system sets Flip apart from competitors because it offers users equity in the company. Equity funds typically have the potential for substantial long-term returns, so creators who remain with the program could see significant benefits, depending on Flip’s success in the near future. (However, it’s important to note these returns aren’t guaranteed and vary based on market conditions.)
To be eligible, a Flip creator must have over 4,000 followers and at least 10 videos posted in the last 30 days, each garnering around 3,000 views. Flip also accepts creators with at least 20,000 followers on other platforms like Instagram, TikTok, and YouTube. Flip reviews applications and awards grants within 48 to 72 hours, the company notes.
Payments will occur in five years or sooner in the event of a company sale. In either case, payments will be made in cash. Regarding a potential acquisition, however, Flip President Eddie Vivas shared with TechCrunch, “We don’t currently see an acquisition as an interesting outcome for our company. Our goal is to take the company public one day.”
Within 72 hours of launching the program, nearly 10,000 influencers had applied, according to Vivas. He said that approximately 22% of the applications came from large influencers. Users can currently view a live leaderboard of all the grants on the creator fund page. As of this writing, the highest grant awarded is $67,000 to Tyler K (@cheftyler).
As TikTok’s future in the U.S. remains unclear, Flip is likely counting on its new creator fund to incentivize more users to actively participate on the platform.
Amid the current TikTok drama, Flip has seen significant growth, garnering 580,000 new downloads in January alone, according to estimates from app store intelligence provider Appfigures. On Monday, Flip made it to the No. 10 spot in the Overall Top Charts in Apple’s U.S. App Store.
Additionally, Flip is currently gaining about 250,000 new users daily, with people spending an average of 35 minutes in the app every day, according to the company.
To maintain traction and stay competitive, Vivas said that Flip plans to introduce additional social features in the coming months. These include polls, group chats, and “elegant ways to repost,” he said.
“We are focused on becoming a more full-featured, dynamic social commerce platform building on the base we have today, which is now clearly working,” Vivas said.
Flip, which launched in 2021, has raised $236 million to date. The company is valued at $1.1 billion.
In today’s rapidly advancing technological landscape, digital transformation is a key driver of innovation across many industries. One of the most impactful technologies leading this revolution is the digital twin. This is a real-time virtual model that replicates a physical object, system, or process. It continuously receives data from its physical twin, creating a dynamic, up-to-date digital replica. This allows users to monitor, simulate, and enhance the object or system without directly interacting with the real-world version.
While the concept of a digital twin isn’t entirely new, its application has exploded due to progress in Internet of Things (IoT) devices, artificial intelligence (AI), machine learning, and big data. Across industries such as manufacturing, healthcare, smart cities, and aerospace, digital twins are helping businesses operate more efficiently, reduce costs, and improve decision-making.
This comprehensive guide explains what digital twins are, how they are used in different industries, and how they are built, while also examining their potential impact on future technology.
Frank Scheufens
Product Manager for Professional Visualization at PNY Technologies EMEA.
What is a Digital Twin?
A digital twin is a digital replica of a physical object or system that is kept in sync with its real-world counterpart. This virtual model collects real-time data through sensors, cameras, and IoT devices, providing an accurate representation of the object’s current state. A digital twin is more than a 3D model—it is a dynamic, data-driven simulation that evolves as the physical object changes over time.
One of the key features of a digital twin is its ability to simulate future scenarios. By using historical and real-time data, digital twins can model various conditions and outcomes, allowing businesses to foresee challenges, predict system failures, and optimize operations. These simulations are incredibly useful for making informed decisions without physically testing every possible outcome, saving both time and money.
The origins and evolution of Digital Twins
The idea of creating digital replicas of physical objects goes back to the early days of space exploration. NASA engineers used physical models and simulators to monitor and diagnose issues with spacecraft that were too far away to inspect directly. These early models laid the groundwork for the development of the digital twin concept.
But it wasn’t until the convergence of IoT, big data, and AI technologies that digital twins became a practical tool for mainstream use. Today, digital twins are more sophisticated than ever before. They can process vast amounts of data in real time, enabling detailed simulations and advanced analytics. As digital twins continue to evolve, they are playing a pivotal role in the Fourth Industrial Revolution (Industry 4.0), transforming sectors from manufacturing to urban planning.
What are Digital Twins used for?
Digital twins are incredibly versatile and have found applications across many industries. Below are some of the primary use cases for these tools and how they are transforming different sectors.
Manufacturing
Manufacturing is one of the largest users of digital twin technology. In this sector, digital twins are used to optimize production lines, monitor machinery, and improve product designs. By creating digital replicas of factory equipment and processes, manufacturers can simulate different production scenarios, spot inefficiencies, and predict potential breakdowns. For example, a car manufacturer might create a digital twin of an assembly line to test how adding a new robotic arm will affect workflow. By running simulations, the manufacturer can fine-tune the process to ensure the robotic arm integrates seamlessly, leading to more efficient operations.
Additionally, digital twins are invaluable for predictive maintenance. By collecting data on the condition of machines—such as temperature, vibration, or pressure—they can predict when a machine is likely to break down. This allows manufacturers to schedule maintenance at the right time, cutting unplanned downtime and extending the lifespan of equipment.
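The predictive-maintenance idea above can be sketched with a rolling baseline: compare each new sensor reading against recent history and flag readings that deviate sharply. This is a minimal, hypothetical illustration; real systems use trained models and richer feature sets, and the `flag_anomalies` helper and sample vibration data here are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_thresh=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline."""
    flags = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 5:  # not enough baseline data yet
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        # A reading far outside the recent distribution may signal wear.
        flags.append(sigma > 0 and abs(value - mu) / sigma > z_thresh)
    return flags

# Vibration levels (mm/s): stable readings, then a spike worth inspecting
vibration = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 2.0, 2.2, 9.8]
print(flag_anomalies(vibration))  # only the final spike is flagged
```

In a real plant, a flagged reading would trigger a maintenance ticket rather than a print statement.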
Healthcare
Digital twin technology is making strides in healthcare, where it is used to model individual patients, medical devices, and biological systems. Personalized healthcare, in particular, benefits from digital twins. By creating a virtual model of a patient’s body or organ, doctors can simulate different treatment options to determine the best course of action.
For example, heart surgeons may use a digital twin of a patient’s heart to plan and simulate a procedure before performing it. This allows them to visualize the surgery and plan for potential complications, improving the chances of successful surgery.
Similarly, pharmaceutical companies use digital twins to simulate how drugs interact with the human body. This helps them develop new treatments more quickly and efficiently by testing drug reactions virtually before conducting human trials.
Medical device manufacturers also leverage digital twins to design and test products like pacemakers, joint replacements, or diagnostic machines. By using digital twins, they can ensure that devices perform optimally within the body before they are ever implanted or used.
Smart cities and urban planning
Digital twins are now playing an increasingly important role in the development of smart cities. City planners and local authorities are using digital twins to create virtual models of urban infrastructure and services, such as transport systems, energy grids, and waste management. With real-time data collected from sensors placed throughout an urban area, digital twins can help cities monitor traffic flow, energy usage, and pollution. This data allows town planners to test different strategies for improving transport networks, reducing congestion, and lowering energy consumption.
For example, a local government could use a digital twin of its public transport system to simulate the impact of rerouting buses or adding new train lines. By running these models, planners can identify the best ways to cut travel times and improve service without disrupting the real-world network.
Digital twins are also instrumental in disaster response planning. By modelling how a city would be affected by natural disasters such as floods, earthquakes, or fires, emergency workers can develop better contingency plans and improve their ability to manage crises in real time.
Aerospace
In the aerospace industry, digital twins are widely used to improve aircraft design, production, and maintenance. By creating digital replicas of airplanes, engines, and other components, aerospace engineers can simulate how different factors—such as extreme weather, air pressure, or mechanical stress—will affect an aircraft over time.
One of the key aspects of digital twins in aerospace is their ability to inform predictive maintenance. For example, digital twins of jet engines collect data on performance metrics like temperature, pressure, and vibration. Using this data, engineers can predict when parts are likely to wear out or malfunction, allowing airlines to perform maintenance before a problem occurs. This reduces the risk of in-flight issues and lowers operational costs.

Airlines are also using digital twins to simulate flight conditions and boost fuel efficiency. By modelling various flight paths, weather conditions, and aircraft configurations, digital twins help pilots and airlines reduce fuel consumption, leading to both cost savings and environmental benefits.
Energy
Digital twins have found significant applications in the energy sector, where they are used to monitor and enhance the performance of power plants, wind farms, and solar energy systems. By creating digital replicas of these systems, operators can simulate different conditions—such as changes in weather or energy demand—and fine-tune operations accordingly.

For example, wind farm operators use digital twins to track the performance of individual turbines. The digital twin collects data on wind speed, turbine rotation, and power output, providing insights into each turbine’s efficiency. This data helps operators identify underperforming turbines and make adjustments to maximize energy production.
In the case of power plants, digital twins can monitor critical elements, such as generators, cooling systems, and pipelines. By predicting when components are likely to fail, digital twins enable operators to carry out preventive maintenance, cutting downtime and improving the reliability of the energy grid.
Digital twins also help energy companies manage grid stability. With real-time data on energy consumption and generation, they allow operators to balance supply and demand more efficiently, preventing blackouts and reducing energy waste.
How to build a Digital Twin
Creating a digital twin involves several steps, from data collection to simulation and analysis. Below is a detailed explanation of each phase in the development process.
Data collection
The first step in building a digital twin is collecting data from the physical object or system. This is typically gathered using sensors, IoT devices, and control systems that measure key parameters such as temperature, pressure, speed, and vibration. In some cases, historical data may also be used to model how the object has performed over time.
For example, if you are creating a digital twin of a factory production line, you would install sensors on the machines to track their performance, energy consumption, and maintenance needs. The more data collected, the more accurate and detailed the digital twin will be.
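In practice, the data-collection step amounts to capturing timestamped sensor records that later feed the model. A sketch of what such a record might look like (the field names and machine IDs are illustrative assumptions):

```python
# Sketch: each sensor reading on a production line is stored as a
# timestamped record. Field names and machine IDs are made up for
# illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SensorReading:
    machine_id: str
    metric: str          # e.g. "temperature", "vibration"
    value: float
    unit: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Readings accumulate into the dataset the digital model is built from.
history = []
history.append(SensorReading("press-01", "temperature", 72.4, "C"))
history.append(SensorReading("press-01", "vibration", 0.31, "mm/s"))

print(len(history), history[0].metric)  # 2 temperature
```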
Create the digital model
Once the data is collected, the next step is to create a digital model of the object or system. This is often built using 3D modelling software or computer-aided design (CAD) tools. The complexity of the model will depend on the nature of the object being reproduced. For some applications, a simple 3D model might suffice, while for others, a highly detailed, physics-based simulation may be needed.
For instance, a digital twin of a wind turbine would not only include a 3D model of the turbine blades but also a simulation of how the blades interact with different wind speeds and environmental conditions.
Real-time data connection
To keep the digital twin updated, it must be connected to its physical twin through real-time data transmission. This connection ensures that the digital version evolves as the physical object changes or moves through different operating conditions.
For example, in smart cities, sensors placed throughout an urban area feed data back to the digital twin, which continuously updates itself in line with real-time conditions like traffic flow, air quality, or energy consumption.
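The update mechanism can be reduced to a simple pattern: the twin holds a current state that is refreshed whenever a telemetry message arrives. In the sketch below a plain list stands in for a real IoT stream (MQTT, Kafka, or similar), and the asset ID and field names are invented for illustration:

```python
# Sketch: a twin's state is refreshed as telemetry messages arrive.
# A plain list stands in for a real IoT message stream.

class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}

    def ingest(self, message):
        """Merge one telemetry message into the twin's current state."""
        self.state.update(message)

twin = DigitalTwin("intersection-42")
stream = [
    {"traffic_flow": 120, "air_quality_index": 41},
    {"traffic_flow": 180},   # only traffic changed in this update
]
for msg in stream:
    twin.ingest(msg)

print(twin.state)  # {'traffic_flow': 180, 'air_quality_index': 41}
```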
In many cases, real-time data is transmitted via IoT platforms and processed in the cloud. Advanced AI and machine learning algorithms are often used to analyze this data, providing insights into how the physical object is performing and predicting future outcomes.
Simulation and optimization
Once the digital twin is live and connected to its physical equivalent, it can be used for simulations and optimizations. By testing different scenarios and variables on the digital model, users can identify areas that can be fine-tuned without affecting the real-world object. For instance, a digital twin of a factory machine could help simulate how different workloads or production speeds impact overall efficiency. Based on the findings of these models, factory managers can adjust operations to reduce bottlenecks, save energy, or cut downtime.
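The optimization step often boils down to sweeping candidate settings through the twin's model and picking the best result. Here is a toy version: the throughput model (output rises with speed, but defects grow faster past a point) is an invented assumption, not a real factory model.

```python
# Sketch: sweep candidate production speeds through a toy model and
# pick the speed with the best effective output. The model is an
# illustrative assumption.

def effective_output(speed):
    """Units/hour after subtracting defects; defects grow with speed^2."""
    produced = 100 * speed
    defects = 4 * speed ** 2
    return produced - defects

candidate_speeds = range(5, 21)
best = max(candidate_speeds, key=effective_output)
print(best)  # 12
```

Real twins run far richer physics or ML models, but the loop, simulate each scenario, score it, choose the best, is the same shape.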
Digital twins can also be used for scenario planning, helping organisations to model how changes—such as new laws or market conditions—will impact their operations. By running these simulations, businesses can prepare for potential challenges and make more informed strategic decisions.
Continuous updates and maintenance
To remain accurate, digital twins must be constantly updated with real-time data and information about the physical object’s condition. This includes tracking wear and tear, repairs, and upgrades. Regular updates ensure that the digital twin remains a reliable tool for monitoring and simulation.
In the aerospace industry, digital twins of aircraft engines are regularly updated to reflect the engine’s current condition and usage history. These updates allow engineers to make accurate predictions about future maintenance needs and performance.
Understanding Digital Twins vs. traditional simulations
At its core, a digital twin is a digital replica of a physical object, system, or process, continuously updated with real-time data. Unlike traditional simulations, which run under set parameters and aren’t connected to the real world, digital twins are dynamic. They reflect the current state of their physical counterpart, using data gathered from sensors and other sources. This connection allows digital twins to provide accurate insights, predict future behaviors, and make real-time decisions.
For example, a simulation might help design a new product, testing it virtually under various conditions. However, once the simulation ends, it’s static. A digital twin, on the other hand, remains active, continuously mirroring the physical product’s lifecycle. This ongoing connection enables businesses to manage assets, troubleshoot issues, and optimize operations more effectively than a traditional simulation could.
Building a Digital Twin: timeframes and considerations
The timeline to create a digital twin depends heavily on the complexity of the object or system being modeled. For a simple asset, like a single machine or piece of equipment, a digital twin could be developed in a few weeks to a few months. The process involves setting up data collection through sensors, building the digital model, and validating its accuracy. For more complex systems—like an entire manufacturing plant or a smart city—development can take from 6 months to a year, or even longer. These large-scale digital twins require extensive data integration, advanced simulations, and rigorous testing to ensure they mirror their physical counterparts accurately.
Creating a digital twin isn’t just about initial setup; it’s an ongoing effort. A digital twin must be updated and maintained to keep it in sync with the physical world. This involves continuous data collection, periodic calibration, and refining the model to reflect any changes in the real-world object.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Now, after Trump’s comment and actions on the first day of his presidency, the group’s crisis helpline is once again receiving a torrent of calls. Sixty-two percent of incoming calls this week, the group tells WIRED, are from trans and gender nonconforming adolescents between the ages of 14 and 17.
The callers are expressing varying degrees of emotional and mental distress, often expressing feelings of hopelessness and fear. One of the most common sentiments shared is “my country does not want me to exist.”
While the Trump administration’s actions are causing huge distress for the trans community and their families, a stark increase in attacks, both online and offline, is already coming from Trump supporters who feel emboldened.
“We have already seen an uptick in the hate against us,” Fisher says. “We had someone who came to our home just last Tuesday and put a note in our mailbox that said: ‘He’s your daddy now, he’s your president. You people won’t exist anymore.’ So yes, they’re definitely emboldened.”
A trans pride flag they had hanging on their porch has been stolen twice in the space of a week. At her local Piggly Wiggly, a supermarket, she overheard people at an adjacent table talking about how glad they were that Trump had “gotten rid of” trans people.
“He didn’t get rid of them, they’re always going to exist—but he damn so put a target on them, especially my teenage son,” Fisher said.
And the attacks are also targeting the groups who are trying to help the LGBTQ+ community.
“We have seen a lot more hate,” Lance Preston, executive director of the Rainbow Youth Project, tells WIRED. “We’ve been receiving a lot of messages, crazy shit, like ‘Trump is your president, now all of you are gonna have to go away. We don’t want you here.’ We get those in contact submission forms every day and since the election it has just grown exponentially. It’s really sad.”
Some activists are also concerned that those who have always stood with the LGBTQ+ community could be too scared to speak up under Trump’s new administration.
“Every time something like this happens, we notice supporters backing down and just getting quiet,” Chris Sederburg, who helps trans and gender nonconforming people through the Rainbow Youth Project, tells WIRED. “Not all of them, but a lot of them do because they’re scared of what’s happening. They’re scared of what might happen to them or they might catch hate for it.”
Sederburg, a trans man who works as a trucker, communicates with young trans people on social media and says that the response this week from the community has been one of “intense, immediate fear.”
For Jamie Anderson, a 40-year-old teacher living in Texas, her biggest fear is that Trump’s administration forces her 15-year-old daughter Dawn, who came out as trans last year, to make a traumatic decision.
“My biggest worry is that she’s going to have to go back to living a lie, like not being who she is meant to be,” says Jamie. “She’s happy now, she’s a lot happier than she was right before she came out. She was super-depressed. We had no idea what was going on. And finally she comes out and she’s this whole brand new, amazing, loving child.”