For instance, Mastercard’s network processes roughly 160 billion transactions a year, with surges of 70,000 transactions a second during peak periods (like the December holiday rush). Finding the fraudulent purchases among those — without chasing false alarms — is a formidable task, which is why fraudsters have long been able to game the system.
But now, sophisticated AI models can scrutinize individual transactions and pinpoint suspicious ones in milliseconds. This is the heart of Mastercard’s flagship fraud platform, Decision Intelligence Pro (DI Pro).
“DI Pro is specifically looking at each transaction and the risk associated with it,” Johan Gerber, Mastercard’s EVP of security solutions, said in a recent VB Beyond the Pilot podcast. “The fundamental problem we’re trying to solve here is assessing in real time.”
How DI Pro works
Mastercard’s DI Pro was built for speed and low latency. From the moment a consumer taps a card or clicks “buy,” that transaction flows through Mastercard’s orchestration layer, back onto the network, and then on to the issuing bank. Typically, the whole round trip takes less than 300 milliseconds.
Ultimately, the bank makes the approve-or-decline decision, but the quality of that decision depends on Mastercard’s ability to deliver a precise, contextualized risk score for the transaction. Complicating matters, the system isn’t looking for anomalies, per se; it’s looking for fraudulent transactions that are deliberately crafted to resemble normal consumer behavior.
At the core of DI Pro is a recurrent neural network (RNN) that Mastercard refers to as an “inverse recommender” architecture. This treats fraud detection as a recommendation problem; the RNN performs a pattern completion exercise to identify how merchants relate to one another.
As Gerber explained: “Here’s where they’ve been before, here’s where they are right now. Does this make sense for them? Would we have recommended this merchant to them?”
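Conceptually, an inverse recommender turns the usual question around: instead of suggesting merchants a cardholder would like, it scores how plausible the merchant in front of it is, given where the cardholder has been. Mastercard hasn’t published DI Pro’s internals, so the model shape, names, and numbers in this minimal PyTorch sketch are illustrative assumptions only:

```python
import torch
import torch.nn as nn

class InverseRecommender(nn.Module):
    """Hypothetical sketch: an RNN scores how plausible the next merchant
    is, given a cardholder's recent merchant sequence. A LOW plausibility
    ("would we have recommended this?") suggests elevated fraud risk."""
    def __init__(self, num_merchants: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_merchants, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_merchants)  # logits over merchants

    def risk_score(self, history: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len) merchant IDs; candidate: (batch,) merchant ID
        _, h = self.rnn(self.embed(history))               # final hidden state
        probs = torch.softmax(self.head(h[-1]), dim=-1)    # "recommendation" distribution
        p_next = probs.gather(1, candidate.unsqueeze(1)).squeeze(1)
        return 1.0 - p_next  # implausible next merchant => higher risk

model = InverseRecommender(num_merchants=10_000)
history = torch.randint(0, 10_000, (1, 20))   # last 20 merchants visited
candidate = torch.randint(0, 10_000, (1,))    # the merchant in this transaction
print(model.risk_score(history, candidate))
```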
Chris Merz, SVP of data science at Mastercard, explained that the fraud problem can be broken down into two subcomponents: a user’s behavioral patterns and a fraudster’s behavioral patterns. “And we’re trying to tease those two things out,” he said.
Another “neat technique,” he said, is how Mastercard approaches data sovereignty, the principle that data is subject to the laws and governance structures of the region where it is collected, processed, or stored. To keep data “on soil,” the company’s fraud team relies on aggregated, “completely anonymized” data that is not sensitive to any privacy concerns and thus can be shared with models globally.
“So you still can have the global patterns influencing every local decision,” said Gerber. “We take a year’s worth of knowledge and squeeze it into a single transaction in 50 milliseconds to say yes or no, this is good or this is bad.”
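To picture how “on soil” data can still shape global decisions, consider the toy sketch below: raw transactions never leave their region; only anonymized aggregates do. The regions, feature names, and numbers here are invented for illustration; Mastercard’s actual pipeline is not public.

```python
# Toy illustration of "on soil" aggregation: raw transactions stay in
# their region, and only anonymized aggregates are shared globally.
# All names and numbers are invented for illustration.

regional_transactions = {
    "EU":   [12.50, 80.00, 33.10],   # raw amounts, never leave the region
    "APAC": [5.00, 210.00],
}

def anonymized_aggregates(amounts):
    """Return only counts and means: no card IDs, no individual rows."""
    n = len(amounts)
    return {"count": n, "mean_amount": round(sum(amounts) / n, 2)}

# Only the aggregates cross borders; global patterns can still
# inform every local scoring decision.
global_features = {
    region: anonymized_aggregates(tx)
    for region, tx in regional_transactions.items()
}
print(global_features)
```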
Scamming the scammers
While AI is helping financial companies like Mastercard, it’s helping fraudsters too, letting them rapidly develop new techniques and identify new avenues to exploit.
Mastercard is fighting back by engaging cybercriminals on their own turf. One way it does so is with “honeypots,” artificial environments meant to essentially “trap” cybercriminals. When threat actors think they’ve found a legitimate mark, AI agents engage with them in the hopes of accessing the mule accounts used to funnel money. That becomes “extremely powerful,” Gerber said, because defenders can apply graph techniques to determine how and where mule accounts are connected to legitimate accounts.
In the end, to get their payout, scammers need a legitimate account somewhere, linked to their mule accounts, even if it’s cloaked 10 layers down. When defenders can identify those accounts, they can map global fraud networks.
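The graph step Gerber describes can be illustrated with a toy money-flow graph. The sketch below uses the networkx library; the accounts and layering are invented for illustration and are not Mastercard’s actual method:

```python
import networkx as nx

# Hypothetical transfer graph: edges point in the direction money flows.
G = nx.DiGraph()
G.add_edges_from([
    ("mule_1", "mule_3"), ("mule_2", "mule_3"),    # layer 1 -> layer 2
    ("mule_3", "mule_4"), ("mule_4", "shell_co"),  # deeper layering
    ("shell_co", "legit_acct_987"),                # final cash-out account
])

legit_accounts = {"legit_acct_987"}  # known-legitimate endpoints

# For each known mule, follow the money: any reachable legitimate
# account is a candidate payout point for the fraud network.
for mule in ("mule_1", "mule_2"):
    reachable = nx.descendants(G, mule)
    for acct in reachable & legit_accounts:
        path = nx.shortest_path(G, mule, acct)
        print(f"{mule} cashes out via: {' -> '.join(path)}")
```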
“It’s a wonderful thing when we take the fight to them, because they cause us enough pain as it is,” Gerber said.
Listen to the podcast to learn more about:
How Mastercard created a “malware sandbox” with Recorded Future;
Why a data science engineering requirements document (DSERD) was essential to align four separate engineering teams;
The importance of “relentless prioritization” and tough decision-making to move beyond “a thousand flowers blooming” to projects that actually have a strong business impact;
Why successful AI deployment should incorporate three phases: ideation, activation, and implementation — but many enterprises skip the second step.
Moscow-based neurotech startup Neiry has made a big move into the area of animal-machine hybrids, claiming to have already made progress on remotely operated pigeons by inserting electrodes into their brains.
In late 2025, there were reports of early flight tests in the city, in which the modified birds flew controlled routes before returning to base. The project is known as PJN-1, and while the company promotes it as a valuable tool for civilian applications, this type of technology is gaining traction due to its potential for surveillance.
Surgeons use a specialized frame to delicately place these microscopic electrodes into specific areas of the pigeon’s brain. The electrodes are then linked to a mini-stimulator on the bird’s head. All of the bird’s electronics and navigation hardware are housed inside a lightweight backpack powered by solar panels, and a tiny camera mounted on the bird’s chest captures video. Operators direct the bird to fly left or right by delivering electrical signals to its brain, while GPS tracks the bird’s location and guides its trajectory, much as with a standard drone. According to Neiry, the bird requires no training and can be operated immediately after the procedure, and the company reports a 100% survival rate for the surgery.
Pigeons have some significant advantages over conventional drones in certain scenarios. For starters, they can fly hundreds of miles, perhaps 300 or more in a single day, without battery swaps or frequent landings. They can easily navigate difficult environments, handle whatever the weather throws at them, and even fit into small spaces. Neiry believes pigeons could inspect pipelines or power lines, survey industrial areas, or assist with search-and-rescue efforts in hard-to-reach places. According to Neiry’s Alexander Panov, the same technology could be applied to other birds, such as ravens for coastal monitoring, seagulls, or even albatrosses for operations over the ocean, as long as they can carry the payload and fly the required distance.
A team of researchers led by Nvidia has released DreamDojo, a new AI system designed to teach robots how to interact with the physical world by watching tens of thousands of hours of human video — a development that could significantly reduce the time and cost required to train the next generation of humanoid machines.
The research, published this month and involving collaborators from UC Berkeley, Stanford, the University of Texas at Austin, and several other institutions, introduces what the team calls “the first robot world model of its kind that demonstrates strong generalization to diverse objects and environments after post-training.”
At the core of DreamDojo is what the researchers describe as “a large-scale video dataset” comprising “44k hours of diverse human egocentric videos, the largest dataset to date for world model pretraining.” The dataset, called DreamDojo-HV, is a dramatic leap in scale — “15x longer duration, 96x more skills, and 2,000x more scenes than the previously largest dataset for world model training,” according to the project documentation.
A simulated robot places a cup into a cardboard box in a workshop setting, one of thousands of scenarios DreamDojo can model after training on 44,000 hours of human video. (Credit: Nvidia)
Inside the two-phase training system that teaches robots to see like humans
The system operates in two distinct phases. First, DreamDojo “acquires comprehensive physical knowledge from large-scale human datasets by pre-training with latent actions.” Then it undergoes “post-training on the target embodiment with continuous robot actions” — essentially learning general physics from watching humans, then fine-tuning that knowledge for specific robot hardware.
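In schematic terms, the two phases might look like the sketch below. This is a toy illustration, not DreamDojo’s actual architecture: the real model is a video world model, and the module shapes, losses, and names here are assumptions made for brevity. The key idea is that phase one infers a latent “action” from pairs of frames (human video carries no robot action labels), while phase two conditions on the robot’s real continuous actions.

```python
import torch
import torch.nn as nn

# Schematic of latent-action pretraining vs. action-conditioned
# post-training. Shapes and objectives are illustrative assumptions.

class TinyWorldModel(nn.Module):
    def __init__(self, obs_dim=32, act_dim=4):
        super().__init__()
        # Inverse dynamics: infer a latent "action" from a frame pair.
        self.infer_action = nn.Linear(2 * obs_dim, act_dim)
        # Forward dynamics: predict the next frame from frame + action.
        self.predict_next = nn.Linear(obs_dim + act_dim, obs_dim)

    def forward(self, obs, nxt=None, action=None):
        if action is None:  # Phase 1: no action labels, infer a latent one
            action = self.infer_action(torch.cat([obs, nxt], dim=-1))
        return self.predict_next(torch.cat([obs, action], dim=-1))

model = TinyWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Phase 1: human video as (frame_t, frame_t+1) pairs, no robot actions.
obs, nxt = torch.randn(8, 32), torch.randn(8, 32)
loss = nn.functional.mse_loss(model(obs, nxt=nxt), nxt)

# Phase 2: robot data, where real continuous actions replace latent ones.
act = torch.randn(8, 4)
loss = loss + nn.functional.mse_loss(model(obs, action=act), nxt)
loss.backward(); opt.step()
```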
For enterprises considering humanoid robots, this approach addresses a stubborn bottleneck. Teaching a robot to manipulate objects in unstructured environments traditionally requires massive amounts of robot-specific demonstration data — expensive and time-consuming to collect. DreamDojo sidesteps this problem by leveraging existing human video, allowing robots to learn from observation before ever touching a physical object.
One of the technical breakthroughs is speed. Through a distillation process, the researchers achieved “real-time interactions at 10 FPS for over 1 minute” — a capability that enables practical applications like live teleoperation and on-the-fly planning. The team demonstrated the system working across multiple robot platforms, including the GR-1, G1, AgiBot, and YAM humanoid robots, showing what they call “realistic action-conditioned rollouts” across “a wide range of environments and object interactions.”
Why Nvidia is betting big on robotics as AI infrastructure spending soars
The release comes at a pivotal moment for Nvidia’s robotics ambitions — and for the broader AI industry. At the World Economic Forum in Davos last month, CEO Jensen Huang declared that AI robotics represents a “once-in-a-generation” opportunity, particularly for regions with strong manufacturing bases. According to Digitimes, Huang has also stated that the next decade will be “a critical period of accelerated development for robotics technology.”
The financial stakes are enormous. Huang told CNBC’s “Halftime Report” on February 6 that the tech industry’s capital expenditures — potentially reaching $660 billion this year from major hyperscalers — are “justified, appropriate and sustainable.” He characterized the current moment as “the largest infrastructure buildout in human history,” with companies like Meta, Amazon, Google, and Microsoft dramatically increasing their AI spending.
That infrastructure push is already reshaping the robotics landscape. Robotics startups raised a record $26.5 billion in 2025, according to data from Dealroom. European industrial giants including Siemens, Mercedes-Benz, and Volvo have announced robotics partnerships in the past year, while Tesla CEO Elon Musk has claimed that 80 percent of his company’s future value will come from its Optimus humanoid robots.
How DreamDojo could transform enterprise robot deployment and testing
For technical decision-makers evaluating humanoid robots, DreamDojo’s most immediate value may lie in its simulation capabilities. The researchers highlight downstream applications including “reliable policy evaluation without real-world deployment and model-based planning for test-time improvement” — capabilities that could let companies simulate robot behavior extensively before committing to costly physical trials.
This matters because the gap between laboratory demonstrations and factory floors remains significant. A robot that performs flawlessly in controlled conditions often struggles with the unpredictable variations of real-world environments — different lighting, unfamiliar objects, unexpected obstacles. By training on 44,000 hours of diverse human video spanning thousands of scenes and nearly 100 distinct skills, DreamDojo aims to build the kind of general physical intuition that makes robots adaptable rather than brittle.
The research team, led by Linxi “Jim” Fan, Joel Jang, and Yuke Zhu, with Shenyuan Gao and William Liang as co-first authors, has indicated that code will be released publicly, though a timeline was not specified.
The bigger picture: Nvidia’s transformation from gaming giant to robotics powerhouse
Whether DreamDojo translates into commercial robotics products remains to be seen. But the research signals where Nvidia’s ambitions are heading as the company increasingly positions itself beyond its gaming roots. As Kyle Barr observed at Gizmodo earlier this month, Nvidia now views “anything related to gaming and the ‘personal computer’” as “outliers on Nvidia’s quarterly spreadsheets.”
The shift reflects a calculated bet: that the future of computing is physical, not just digital. Nvidia has already invested $10 billion in Anthropic and signaled plans to invest heavily in OpenAI’s next funding round. DreamDojo suggests the company sees humanoid robots as the next frontier where its AI expertise and chip dominance can converge.
For now, the 44,000 hours of human video at the heart of DreamDojo represent something more fundamental than a technical benchmark. They represent a theory — that robots can learn to navigate our world by watching us live in it. The machines, it turns out, have been taking notes.
Waymo has pulled the human safety driver from its autonomous test vehicles in Nashville, as the Alphabet-owned company moves closer to launching a robotaxi service in the city.
Waymo, which has been testing in Nashville for months, is slated to launch a robotaxi service there this year in partnership with Lyft. Riders will initially hail rides directly through the Waymo app. Once the service expands, Waymo will also make its self-driving vehicles available through the Lyft app. Lyft has said it will handle fleet services, such as vehicle readiness and maintenance, charging infrastructure, and depot operations, through its wholly owned subsidiary Flexdrive.
Waymo has accelerated its robotaxi expansion and today operates commercial services in Atlanta, Austin, Los Angeles, Miami, the San Francisco Bay Area, and Phoenix. It also has driverless test fleets in Dallas, Houston, San Antonio, and Orlando.
The company tends to follow the same rollout strategy in every new market, starting with a small fleet of vehicles that are manually driven to map the city. The autonomous vehicles are then tested with a human safety operator in the driver’s seat. Eventually, the company conducts driverless testing, often allowing employees to hail rides, before launching a robotaxi service.
An anonymous reader writes: Linus Torvalds has confirmed the next major kernel series as Linux 7.0, reports Linux news website 9to5Linux.com: “So there you have it, the Linux 6.x era has ended with today’s Linux 6.19 kernel release, and a new one will begin with Linux 7.0, which is expected in mid-April 2026. The merge window for Linux 7.0 will open tomorrow, February 9th, and the first Release Candidate (RC) milestone is expected on February 22nd, 2026.”
This cutaway view shows the interior of the Starlab space station’s laboratory. (Starlab Illustration)
How do you design a living space where there’s no up or down? That’s one of the challenges facing Teague, a Seattle-based design and innovation firm that advises space companies such as Blue Origin, Axiom Space and Voyager Technologies on how to lay out their orbital outposts.
Mike Mahoney, Teague’s senior director of space and defense programs, says the zero-gravity environment is the most interesting element to consider in space station design.
“You can’t put things on surfaces, right? You’re not going to have tables, necessarily, unless you can attach things to them, and they could be on any surface,” he told GeekWire. “So, directionality is a big factor. And knowing that opens up new opportunities. … You could have, let’s say, two scientists working in different orientations in the same area.”
Mike Mahoney is senior director of space and defense programs at Teague. (Photo via LinkedIn)
Over the next few years, NASA and its partners are expected to make the transition from the aging International Space Station to an array of commercial space stations — and Teague is helping space station builders get ready for the shift.
In the 1980s, Teague helped Boeing and NASA with their plans for Space Station Freedom, an orbital project that never got off the ground but eventually evolved into the International Space Station. Teague also partnered with NASA on a 3D-printed mockup for a Mars habitat, known as the Crew Health and Performance Exploration Analog.
Nowadays, Teague is focusing on interior designs for commercial spacecraft, a business opportunity that capitalizes on the company’s traditional expertise in airplane design.
Mahoney said Teague has been working with Jeff Bezos’ Blue Origin space venture on a variety of projects for more than a decade. The first project was the New Shepard suborbital rocket ship, which made its debut in 2015.
“We partnered with their engineering team to design for the astronaut experience within the New Shepard space capsule,” Mahoney said. “It’s all the interior components that you see that come together, from the linings to the lighting. We created a user experience vision for the displays as well.”
GeekWire’s Alan Boyle sits in one of the padded seats inside a mockup of the crew capsule for Blue Origin’s suborbital spaceship, on display at a space conference in 2017. The door of the capsule’s hatch is just to the right of Boyle’s head. (GeekWire File Photo / Kevin Lisota)
Teague also worked with Blue Origin on design elements for the Orbital Reef space station and the Blue Moon lunar lander. “We were involved in initial concepting for the look and feel of the vehicles,” Mahoney said. “In other cases, we designed and built mockups that were used for astronaut operations and testing. How do we navigate around the lunar lander legs? How do we optimize toolboxes on the surface of the moon?”
Other space station ventures that have benefited from Teague’s input include Axiom Space (which also brought in Philippe Starck as a big-name designer) and Starlab Space, a joint venture founded by Voyager Technologies and Airbus.
Starlab recently unveiled a three-story mockup of its space station at NASA’s Johnson Space Center in Texas. The mockup is built so that it can be reconfigured to reflect tweaks that designers want to make in the space station’s layout, before launch or even years after launch.
“One of the things that’s been very helpful along this development path has been working with Teague, because you have to have a really good idea on how you lay out this very large volume so that you can optimize the efficiency of the crew,” said Tim Kopra, a former NASA astronaut who now serves as chief human exploration officer at Voyager Technologies.
Kopra compared the Starlab station to a three-story condo. “The first floor is essentially like the basement of a large building that has the infrastructure,” he said. “It has our life support systems, avionics and software, the toilets, the hygiene station — which encompasses both the toilet and a cleaning station — and the workout equipment.”
The second floor serves as a laboratory and workspace, with a glovebox, freezer, centrifuge, microscope and plenty of racks and lockers for storage. “We are very focused on four different industries: semiconductors, life sciences, pharmaceuticals and materials science,” Kopra said.
He said the third floor will be a “place that people will enjoy … because Deck 3 has our crew quarters, our galley table, our windows and a little bit more experiment capacity.”
In a video tour shared on social media, Starlab offered a first-ever look inside the mockup: the space station is 17 meters tall and 7.7 meters wide, essentially a three-story building, and can host four astronauts continuously and eight briefly. The company hopes to launch Starlab by 2029 to replace the ISS.
The galley table is a prime example of how zero-gravity interior design differs from the earthly variety. “No chairs,” Kopra said. “Just like on the ISS, all you need is a place to hook your feet. There are little design features, like where do you put a handrail, and how tall is the table?” (He said the designers haven’t yet decided whether the table should be round or square.)
Kopra said one of his top design priorities is to use the station’s volume, and the astronauts’ time, as efficiently as possible. “Time is extremely valuable on the ISS. They calculate that crew time is worth about $135,000 per hour,” he said. “Ours will be a fraction of that, but it really illustrates how important it is to be efficient with the time on board.”
Starlab is laid out to maximize efficiency. “We have a really cool design where the middle has a hatchway that goes all the way through the three stories,” he said. “So, imagine if it were a fire station, you’d have a pole that went from floor to floor. We don’t need a fire pole. We can just translate through that area.”
Mahoney said human-centered design will be more important for commercial space stations than it was for the ISS.
“In the past, space stations have been primarily designed for professionally trained, military background astronauts,” he said. “Now we’ll have different folks in there. … How do we think about how researchers and scientists will be using these spaces? How do we think about non-professional private astronauts? As the International Space Station gets retired, how do these companies step in to fill the void, serving NASA but also a lot of these new customers?”
A three-story mockup of the Starlab space station has been installed inside a building at NASA’s Johnson Space Center in Texas. (Starlab Photo)
An interior view of the Starlab mockup highlights the large observation windows and emergency equipment. (Starlab Photo)
The Internal Payload Laboratory inside the Starlab mockup features a glovebox, cold stowage, an optical bench and a workbench. (Starlab Photo)
Level 3 of the Starlab mockup includes the crew quarters, galley and Earth viewing areas. Starlab Space’s partners include Hilton and Journey, the team behind the Sphere in Las Vegas. (Starlab Photo)
When will commercial space stations step in? The answer to that question is up in the air.
But NASA has been slow to follow through on its revised plan for the station transition, sparking concern in Congress. Late last month, the space agency said it was still working to “align acquisition timelines with national space policy and broader operational objectives.” Now some lawmakers are calling on NASA to reconsider its plan to deorbit the ISS in the 2030-2031 time frame.
The timetable for the space station transition may be in flux, but Mahoney and other space station designers are staying the course — and taking the long view.
“We may not know right now how the space station is going to be used 20 years from now,” Mahoney said. “How do we start to future-proof and create a system within that’s modular and flexible, so that we can add technologies and add systems, or we can configure in different ways? … Those are the kinds of things that we’re thinking about designing for.”
House Judiciary Committee member Jamie Raskin (D-MD) has asked the US Department of Justice to turn over all its communications with Apple and Google regarding the companies’ decisions to remove apps that shared information about sightings of US Immigration and Customs Enforcement officers. Several apps that allowed people to share where they had seen ICE agents were removed from both Apple’s App Store and Google’s Play Store in October. Politico reported that Raskin has contacted Attorney General Pam Bondi on the issue and also questioned the agency’s use of force against protesters as it executes the immigration policy set by President Donald Trump.
“The coercion and censorship campaign, which ultimately targets the users of ICE-monitoring applications, is a clear effort to silence this Administration’s critics and suppress any evidence that would expose the Administration’s lies, including its Orwellian attempts to cover up the murders of Renee and Alex,” Raskin wrote to Bondi. He was referring to Minneapolis residents Renee Good and Alex Pretti, who were both fatally shot by ICE agents. In the two separate incidents, claims made by federal officials about the victims and the circumstances of their deaths were contradicted by eyewitnesses or camera footage, echoing the violent encounters, and the false official accounts of them, that occurred while ICE conducted raids in Chicago several months earlier.
Another day, another wave of gaming layoffs. Today it’s Riot Games, with the announcement that it’s cutting jobs on its tag-team fighting game 2XKO. For context, a Riot representative confirmed to Game Developer that about 80 people are being cut, roughly half of 2XKO’s global development team.
“As we expanded from PC to console, we saw consistent trends in how players were engaging with 2XKO,” according to the blog post from executive producer Tom Cannon. “The game has resonated with a passionate core audience, but overall momentum hasn’t reached the level needed to support a team of this size long term.”
The console launch for 2XKO happened last month. Cannon said the company’s plans for its 2026 competitive season have not changed with the layoffs, and added that Riot will attempt to place affected employees in new positions within the company where possible.
Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.
Today’s Connections: Sports Edition is all about the Winter Olympics. If you’re struggling with today’s puzzle but still want to solve it, read on for hints and the answers.
Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.
Hints for today’s Connections: Sports Edition groups
Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.
Yellow group hint: Elegant ice sport.
Green group hint: Sports cinema.
Blue group hint: Five for fighting.
Purple group hint: Unusual Olympic sport.
Answers for today’s Connections: Sports Edition groups
Installing an RPi Pico board like it’s a modchip. (Credit: Tucker Osman, YouTube)
Although iPads generally tend to keep their resale value, there are a few exceptions, such as when you find yourself burdened with iCloud-locked devices. Instead of tossing these out as e-waste, you can still give them a new, arguably better purpose in life: an external display, with touchscreen functionality if you’re persistent enough. Persistent like [Tucker Osman], who spent the past few months making the touchscreen functionality play nice with Windows and Linux.
While newer iPads are easy enough to upcycle as external displays since they use eDP (embedded DisplayPort), the touch controller relies on a number of chips that are normally initialized and controlled by the iPad’s CPU. Most of the effort thus went into reverse-engineering this process; rather than fully reverse-engineering it, however, [Tucker] simply recorded the initialization data stream and played it back.
This approach does require that the iPad can still boot into iOS, but as demonstrated in the video, it’s good enough to turn iCloud-locked e-waste into a multi-touch display. The SPI data stream that would normally go to the iPad’s SoC is instead intercepted by a Raspberry Pi Pico board, which pretends to be a USB HID peripheral to the PC.
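For a rough idea of what record-and-replay looks like on the Pico side, here’s a hypothetical MicroPython sketch. [Tucker]’s actual firmware differs (the bus roles, pin choices, capture format, and the multi-touch USB HID descriptor are all far more involved), so treat everything below as an illustrative assumption:

```python
# Hypothetical MicroPython sketch of the record-and-replay approach.
# Pin numbers and the capture file are illustrative assumptions.
from machine import Pin, SPI

spi = SPI(0, baudrate=8_000_000, sck=Pin(2), mosi=Pin(3), miso=Pin(4))
cs = Pin(5, Pin.OUT, value=1)

# Replay the initialization stream previously sniffed while the iPad
# booted iOS; the touch controller can't tell the difference.
with open("touch_init.bin", "rb") as f:
    init_stream = f.read()

cs.value(0)
spi.write(init_stream)   # controller now believes the SoC woke it up
cs.value(1)

# From here, poll the controller for touch reports and forward them
# to the PC as USB HID packets (omitted: HID descriptor and parsing).
while True:
    report = bytearray(64)
    cs.value(0)
    spi.readinto(report)
    cs.value(1)
    # ...decode touch coordinates and send over USB HID...
```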
Ahead of the official launch on February 12, 2026, a French publication has leaked the key features of Sony’s WF-1000XM6 TWS earbuds. Per Dealabs, the device will feature a new QN3e chipset that is three times faster than the one on the XM5s.
Along with eight adaptive microphones (beamforming + bone conduction), up from six on the outgoing model, the chipset should enable smarter, more responsive, and more effective active noise cancellation (ANC).
A faster brain for better silence
Further, the model could get an “Adaptive NC Optimizer” feature that adjusts the noise-cancellation intensity based on the user’s environment, optimizing the performance in real time.
Like the XM5s, the XM6 will offer three core listening modes: Noise Cancelling, Ambient Sound Mode, and regular (with no cancellation or passthrough). Moreover, expect the earbuds to offer noticeably better noise cancellation, perhaps enough to be on par with the AirPods Pro 3.
The report also mentions a “new speaker” or audio driver that should deliver richer audio than the 8.4mm Dynamic Driver X on the XM5s. Sony could also equip the device with an upgraded Digital-to-Analog Converter (DAC) and an enhanced amplifier.
Sound upgrades beyond ANC
On the software side, the flagship earbuds could get DSEE Ultimate support, a more advanced upscaling technology than DSEE Extreme on the XM5s. It can improve both the sample rate frequency and the bit depth of compressed audio files.
Additional features may include adaptive sound control, Quick Attention mode, background music effect, and Speak to Chat. The device should also support LE Audio and Auracast.
You could also get Spatial Audio with Head Tracking, new audio modes, Bluetooth v5.3 (with multi-point connectivity), support for Hi-Res Wireless (LDAC) codec, and a 10-band equalizer within the Headphones Connect app to customize the sound.
The Sony WF-1000XM5 launched at $300, a price I expect Sony will stick with. (Simon Cohen / Digital Trends)
Sony WF-1000XM6 to cost $80 more than the AirPods Pro 3
Regarding battery life, the Sony WF-1000XM6 should last up to eight hours on a single charge (likely with ANC) and up to 24 hours with the charging case (which supports fast and wireless charging).
After their launch on February 12, 2026, the Sony WF-1000XM6 are expected to be available in the United States for $329.99 in Black and Silver finishes, commanding a more premium price tag than the AirPods Pro 3.