Today’s NYT Connections: Sports Edition Hints, Answers for Feb. 10 #505

Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.


Today’s Connections: Sports Edition is all about the Winter Olympics. If you’re struggling with today’s puzzle but still want to solve it, read on for hints and the answers.

Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.


Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta

Hints for today’s Connections: Sports Edition groups

Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Elegant ice sport.


Green group hint: Sports cinema.

Blue group hint: Five for fighting.

Purple group hint: Unusual Olympic sport.

Answers for today’s Connections: Sports Edition groups

Yellow group: Figure skating disciplines.


Green group: Winter Olympic movies.

Blue group: Hockey penalties.

Purple group: Biathlon equipment.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words


What are today’s Connections: Sports Edition answers?


The completed NYT Connections: Sports Edition puzzle for Feb. 10, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is figure skating disciplines. The four answers are ice dance, pairs, singles and team event.

The green words in today’s Connections

The theme is Winter Olympic movies. The four answers are Cool Runnings; I, Tonya; Miracle and The Cutting Edge.


The blue words in today’s Connections

The theme is hockey penalties. The four answers are boarding, hooking, icing and offsides.

The purple words in today’s Connections

The theme is biathlon equipment. The four answers are poles, rifle, skis and targets.

Google Pixel 10A Leaks: New Colors and Price Info Revealed

The Google Pixel 10A is set to make its debut on Feb. 18.

Google/CNET

Google’s Pixel 10A will make its debut next week, on Feb. 18, and while a recent YouTube teaser video provided the first official look at the new Android phone, many more details have already emerged via leaks and rumors from around the internet.

We’re rounding up the top highlights here in this live blog, and we’ll be on the lookout for when Google officially reveals the Pixel 9A’s successor later this month.


Pixel 10A release date

Google’s Pixel 10A will get its official reveal next Wednesday, Feb. 18, as confirmed by a Feb. 4 Google teaser video on its Made by Google YouTube channel. This date will mark one of the earliest debuts for a Pixel A series phone, as prior affordable Google phones have typically been released in the spring or summer. 

This earlier release date likely means the phone will arrive just ahead of Samsung’s Galaxy S26 line, and just as Apple’s rumored to be debuting its own lower-cost iPhone 17E that could replace last year’s iPhone 16E.

Pixel 10A price

While there aren’t many rumors about the price of the Pixel 10A, the phone appears to be very similar to last year’s Pixel 9A. Rumors point to the Pixel 10A offering 128GB and 256GB storage options, the same as the 9A, with a $499 starting price. However, that price is far from certain. The ongoing global RAM shortage is expected to eventually push phone-makers to raise the prices of their devices to offset the higher cost of memory, and it’s entirely possible that this could be the case with the Pixel line.

Pixel 10A specs

The Pixel 10A is expected to have similar specs to last year’s Pixel 9A. While we know from Google’s teaser that the Pixel 10A will have a flat camera bump, presumably with a wide and ultrawide camera inside, we’re waiting for Feb. 18 to get an official look at what’s inside. The specs suggested by multiple rumors include:

  • 6.3-inch display
  • A “boosted” edition of the Tensor G4 processor rather than the Tensor G5 seen in the rest of the Pixel 10 line
  • Four colors: obsidian, berry, fog and lavender. The lavender version appears to be the one shown in Google’s teaser video.
  • 48-megapixel main camera and a 13-megapixel ultrawide camera
  • 5,100mAh battery
  • 13-megapixel selfie camera

We’re rounding up the rumors as they come. We’ll continue to update this live blog as we learn more about the device.

Toyota just introduced its own robotaxi to tackle Tesla and Waymo

Toyota, along with the Chinese autonomous-driving firm Pony.ai, has announced that its first mass-produced robotaxi, based on the Toyota bZ4X, has rolled off the production line.

The project is a joint venture between Toyota Motor China, GAC Toyota, and Pony.ai, with the manufacturing handled by the latter two. Unlike prototypes that live out their lives on press stages, these robotaxis are headed for real-world service.

From prototype to production

The companies, as mentioned in the official press release, plan to build more than 1,000 Toyota bZ4X robotaxis this year, with gradual commercial deployment across China’s tier-one cities.

At the heart of the robotaxis is Pony.ai’s latest, seventh-generation autonomous driving system, which cuts the bill of materials for the self-driving kit by 70% (compared to the previous generation), while using 100% automotive-grade hardware. With this, the firm expects its total robotaxi fleet to exceed 3,000 vehicles by the end of 2026.


Inside, the Toyota bZ4X-based Pony.ai robotaxis offer features that should feel familiar to anyone who has used a modern ride-hailing app: Bluetooth-based automatic unlocking, voice interaction, online music, pre-trip climate control, and smoother braking and acceleration for enhanced comfort and reduced motion sickness.

Scaling fast, the Toyota way

The robotaxis are built using the Toyota Production System and adhere to Toyota’s “QDR” principles, which stand for “Quality, Durability, and Reliability.” While Tesla and Waymo often grab attention with cutting-edge software, Toyota is leaning into scale, cost control, and manufacturing discipline.

“Together, these efforts demonstrate a clear pathway for autonomous driving technology to progress from limited-scale validation to large-scale mass production.”

Right now, the U.S. robotaxi landscape looks like a race where some runners already have a head start. Waymo, for instance, is doing extremely well in commercial deployment, operating around 2,500 fully autonomous robotaxis in multiple U.S. cities. Tesla is scaling its service out of Austin with a controlled, well-equipped fleet.


However, given that Toyota’s robotaxis are currently available only in China, they’re competing with local players like Baidu’s Apollo Go, WeRide, and AutoX.


The missing layer between agent connectivity and true collaboration

Today’s AI challenge is about agent coordination, context, and collaboration: how do you enable agents to truly think together, with all the contextual understanding, negotiation, and shared purpose that entails? It’s a critical next step toward a new kind of distributed intelligence that keeps humans firmly in the loop.

At the latest stop on VentureBeat’s AI Impact Series, Vijoy Pandey, SVP and GM of Outshift by Cisco, and Noah Goodman, Stanford professor and co-founder of Humans&, sat down to talk about how to move beyond agents that just connect to agents that are steeped in collective intelligence.

The need for collective intelligence, not coordinated actions

The core challenge, Pandey said, is that “agents today can connect together, but they can’t really think together.”

While protocols like MCP and A2A have solved basic connectivity, and AGNTCY tackles problems ranging from discovery and identity management to inter-agent communication and observability, they’ve only addressed the equivalent of making a phone call between two people who don’t speak the same language. But Pandey’s team has identified something deeper than technical plumbing: the need for agents to achieve collective intelligence, not just coordinated actions.


How shared intent and shared knowledge enable collective innovation

To understand where multi-agent AI needs to go, both speakers pointed to the history of human intelligence. While humans became individually intelligent roughly 300,000 years ago, true collective intelligence didn’t emerge until around 70,000 years ago with the advent of sophisticated language.

This breakthrough enabled three critical capabilities: shared intent, shared knowledge, and collective innovation.

“Once you have a shared intent, a shared goal, you have a body of knowledge that you can modify, evolve, build upon, you can then go towards collective innovation,” Pandey said.

Goodman, whose work bridges computer science and psychology, explained that language is far more than just encoding and decoding information.


“Language is this kind of encoding that requires understanding the context, the intention of the speaker, the world, how that affects what people will say in order to figure out what people mean,” he said.

This sophisticated understanding is what scaffolds human collaboration and cumulative cultural evolution, and it’s what is currently missing from agent-to-agent interaction.

Addressing the gaps with the Internet of Cognition

“We have to mimic human evolution,” Pandey explained. “In addition to agents getting smarter and smarter, just like individual humans, we need to build infrastructure that enables collective innovation, which implies sharing intent, coordination, and then sharing knowledge or context and evolving that context.”

Pandey calls it the Internet of Cognition: a three-layer architecture designed to enable collective thinking among heterogeneous agents:


Protocol layer: Beyond basic connectivity, these protocols enable understanding, handling intent sharing, coordination, negotiation, and discovery between agents from different vendors and organizations.

Fabric layer: A shared memory system that allows agents to build and evolve collective context, with emergent properties arising from their interactions.

Cognition engine layer: Accelerators and guardrails that help agents think faster while operating within necessary constraints around compliance, security, and cost.
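As a very rough structural sketch of how those three layers might compose, consider the following Python outline. Every class, method, and variable name here is hypothetical and invented purely for illustration; nothing is drawn from Cisco’s AGNTCY project or any shipping product.

```python
# Hypothetical, illustration-only sketch of the three layers described above.
# None of these names come from AGNTCY or any real Cisco/Outshift API.

class ProtocolLayer:
    """Handles discovery, identity, intent sharing and negotiation between agents."""

    def negotiate_shared_intent(self, agents, goal):
        # A real system would involve discovery, identity checks and multi-round
        # negotiation; here we simply record who agreed to which goal.
        return {"goal": goal, "participants": [agent["name"] for agent in agents]}


class FabricLayer:
    """A shared memory that agents from different parties build up and evolve."""

    def __init__(self):
        self.context = []

    def contribute(self, agent_name, observation):
        self.context.append({"agent": agent_name, "observation": observation})


class CognitionEngine:
    """Accelerators plus guardrails (compliance, security, cost) around agent steps."""

    def __init__(self, max_steps=100):
        self.max_steps = max_steps

    def within_guardrails(self, steps_taken):
        return steps_taken < self.max_steps


# A toy interaction: two agents agree on a goal, share context, and stay within limits.
protocol, fabric, engine = ProtocolLayer(), FabricLayer(), CognitionEngine(max_steps=10)
intent = protocol.negotiate_shared_intent(
    [{"name": "vendor_a_agent"}, {"name": "vendor_b_agent"}], goal="resolve ticket 42"
)
fabric.contribute("vendor_a_agent", "customer reports login failure")
print(intent, fabric.context, engine.within_guardrails(steps_taken=3))
```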

The difficulty is that organizations need to build collective intelligence across organizational boundaries.


“Think about shared memory in a heterogeneous way,” Pandey said. “We have agents from different parties coming together. So how do you evolve that memory and have emergent properties?”

New foundation training protocols to advance agent connection

At Humans&, rather than relying solely on additional protocols, Goodman’s team is fundamentally changing how foundation models are trained, centering that training on interactions not only between a human and an agent, but between a human and multiple agents, and especially between an agent and multiple humans.

“By changing the training that we give to the foundation models and centering the training over extremely long horizon interactions, they’ll come to understand how interactions should proceed in order to achieve the right long-term outcomes,” he said.

And, he adds, it’s a deliberate divergence from the longer-autonomy path pursued by many large labs.


“Our goal is not longer and longer autonomy. It’s better and better collaboration,” he said. “Humans& is building agents with deep social understanding: entities that know who knows what, can foster collaboration, and put the right specialists in touch at the right time.”

Establishing guardrails that support cognition

Guardrails remain a central challenge in deploying multi-functional agents that touch every part of an organization’s systems. The question is how to enforce boundaries without stifling innovation. Organizations need strict, rule-like guardrails, but humans don’t actually work that way. Instead, people operate on a principle of minimal harm: thinking ahead about consequences and making contextual judgments.

“How do we provide the guardrails in a way which is rule-like, but also supports the outcome-based cognition when the models get smart enough for that?” Goodman asked.

Pandey extended this thinking to the reality of innovation teams that need to apply the rules with judgment, not just follow them mechanically. Figuring out what’s open to interpretation is a “very collaborative task,” he said. “And you don’t figure that out through a set of predicates. You don’t figure that out through a document. You figure that out through common understanding and grounding and discovery and negotiation.”


Distributed intelligence: the path to superintelligence

True superintelligence won’t come from increasingly powerful individual models, but from distributed systems.

“While we build better and better models, and better and better agents, eventually we feel that true super intelligence will happen through distributed systems,” Pandey said.

Intelligence will scale along two axes, both vertical, or better individual agents, and horizontal, or more collaborative networks, in a manner very similar to traditional distributed computing.

However, said Goodman, “We can’t move towards a future where the AIs go off and work by themselves. We have to move towards a future where there’s an integrated ecosystem, a distributed ecosystem that seamlessly merges humans and AI together.”



MrBeast’s company buys Gen Z-focused fintech app Step

YouTube megastar MrBeast announced on Monday that his company, Beast Industries, is buying Step, a teen-focused banking app.

Step, which has raised half a billion dollars in funding and has grown to over 7 million users, offers financial services geared toward Gen Z to help them build credit, save money, and invest. The company has attracted celebrity investors like Charli D’Amelio, Will Smith, The Chainsmokers, and Stephen Curry, in addition to venture firms like General Catalyst, Coatue, and the payments company Stripe.

If the company wants to continue getting its fintech product in front of young eyes, then partnering with Gen Z phenom MrBeast is wise. MrBeast, whose real name is Jimmy Donaldson, is the most-subscribed creator on YouTube, with over 466 million subscribers, but his ambitions stretch beyond his over-the-top videos.

“Nobody taught me about investing, building credit, or managing money when I was growing up,” the 27-year-old said. “I want to give millions of young people the financial foundation I never had.”


This acquisition makes sense, considering that a leaked pitch document from last year showed this was an area of interest for Beast Industries. The company is also reportedly interested in launching a mobile virtual network operator (MVNO), a lower-cost cell phone plan similar to Ryan Reynolds’ Mint Mobile.

In line with other top creators, Beast Industries’ business is much more than YouTube ad revenue. (In fact, the company reinvests much of that money back into the content.) The company’s cash cow is the chocolate brand Feastables, which is more profitable than both the MrBeast YouTube channel and the Prime Video show “Beast Games,” according to leaked documents reported on by Bloomberg. Some of his other ventures, like Lunchly and MrBeast Burger, have struggled.


“We’re excited about how this acquisition is going to amplify our platform and bring more groundbreaking products to Step customers,” Step founder and CEO CJ MacDonald said in a statement.


IEEE Honors Innovators Shaping AI and Education

Meet the recipients of the 2026 IEEE Medals—the organization’s highest-level honors. Presented on behalf of the IEEE Board of Directors, these medals recognize innovators whose work has shaped modern technology across disciplines including AI, education, and semiconductors.

The medals will be presented at the IEEE Honors Ceremony in April in New York City. View the full list of 2026 recipients on the IEEE Awards website, and follow IEEE Awards on LinkedIn for news and updates.

Sponsor: IEEE

Jensen Huang


Nvidia

Santa Clara, Calif.

“For leadership in the development of graphics processing units and their application to scientific computing and artificial intelligence.”

Sponsor: IBM


Luis von Ahn

Duolingo

Pittsburgh

“For contributions to the advancement of societal improvement and education through innovative technology.”


Sponsor: Nokia Bell Labs

Scott J. Shenker

University of California, Berkeley

“For contributions to Internet architecture, network resource allocation, and software-defined networking.”


Sponsor: Mani L. Bhaumik

Co-recipients: Erik Dahlman, Stefan Parkvall, and Johan Sköld

Ericsson

Stockholm


“For contributions to and leadership in the research, development, and standardization of cellular wireless communications.”

Sponsor: Google

Karen Ann Panetta

Tufts University


Medford, Mass.

“For contributions to computer vision and simulation algorithms, and for leadership in developing programs to promote STEM careers.”

Sponsor: The Edison Medal Fund

Eric Swanson


PIXCEL Inc.

MIT

“For pioneering contributions to biomedical imaging, terrestrial optical communications and networking, and inter-satellite optical links.”

Sponsor: Toyota Motor Corp.


Wei-Jen Lee

University of Texas at Arlington

“For contributions to advancing electrical safety in the workplace, integrating renewable energy and grid modernization for climate change mitigation.”

Sponsor: IEEE Foundation


Marian Rogers Croak

Google

Reston, Va.

“For leadership in communication networks, including acceleration of digital equity, responsible Artificial Intelligence, and the promotion of diversity and inclusion.”


Sponsor: Qualcomm, Inc.

Muriel Médard

MIT

“For contributions to coding for reliable communications and networking.”


Sponsor: Friends of Nick Holonyak, Jr.

Steven P. DenBaars

University of California, Santa Barbara

“For seminal contributions to compound semiconductor optoelectronics, including high-efficiency visible light-emitting diodes, lasers, and LED displays.”


Sponsor: IEEE Engineering Medicine and Biology Society

Rosalind W. Picard

MIT

“For pioneering contributions to wearable affective computing for health and wellbeing.”


Sponsor: Apple

Biing-Hwang “Fred” Juang

Georgia Tech

“For contributions to signal modeling, coding, and recognition for speech communication.”


Sponsor: ARM, Ltd.

Paul B. Corkum

University of Ottawa

“For the development of the recollision model for strong field light–matter interactions leading to the field of attosecond science.”


Sponsor: IEEE Life Members Fund and MathWorks

James H. McClellan

Georgia Tech

“For fundamental contributions to electrical and computer engineering education through innovative digital signal processing curriculum development.”

Sponsor: IEEE Jun-ichi Nishizawa Medal Fund


Eric R. Fossum

Dartmouth College

Hanover, N.H.

“For the invention, development, and commercialization of the CMOS image sensor.”


Sponsor: Intel Corp.

Chris Malachowsky

Nvidia

Santa Clara, Calif.


“For pioneering parallel computing architectures and leadership in semiconductor design that transformed artificial intelligence, scientific research, and accelerated computing.”

Sponsor: RTX

Yoshio Yamaguchi

Niigata University


Japan

“For contributions to polarimetric synthetic aperture radar imaging and its utilization.”

Sponsors: IEEE Industry Applications, Industrial Electronics, Power Electronics, and Power & Energy societies

Fang Zheng Peng


University of Pittsburgh

“For contributions to Z-Source and modular multi-level converters for distribution and transmission networks.”

Sponsor: Northrop Grumman Corp.

Michael D. Griffin


LogiQ, Inc.

Arlington, Va.

“For leadership in national security, civil, and commercial systems engineering and development of elegant design principles.”

Sponsor: IBM


Donald D. Chamberlin

IBM

San Jose, Calif.

“For contributions to database query languages, particularly Structured Query Language, which powers most of the world’s data management and analysis systems.”



Is Disney+ Losing Dolby Vision Dynamic HDR Streaming Due to a Patent Dispute?

According to a report in FlatPanelsHD, Disney+ users throughout Europe are reporting the loss of Dolby Vision and HDR10+ dynamic HDR formats on 4K video content streamed on the service. Titles that previously streamed in 4K resolution with either Dolby Vision or HDR10+ dynamic HDR are now streaming in basic, static HDR10 instead. Disney+ users in Germany first noticed the missing Dolby Vision support as early as December 2025, and reports have since spread to other European countries.

As of February 2026, Disney+ users in the United States are still able to watch select titles on the service in Dolby Vision HDR, but it remains unclear if this will continue to be true or if Disney will remove the technology from all markets. Some European customers have installed VPNs (Virtual Private Networks) to spoof their geographic location, which fools the streaming service into thinking they are streaming from the United States in order to work around the issue. To be clear, we are not suggesting that affected customers take this action; we are simply reporting what some users have stated online.

The loss of Dolby Vision comes on the heels of a German court’s ruling in a patent dispute filed by InterDigital, Inc. against Disney+. InterDigital claims that Disney+ is in violation of a patent it owns related to the streaming of video content using high dynamic range (HDR) technology. The German court agreed with the validity of the claim and granted an injunction against Disney+ in November of last year, ordering it to stop using the allegedly infringing technology to deliver HDR content. The timing of this injunction and the first reports of disappearing Dolby Vision appear to be more than coincidental.


Why It Matters

While having four times as many pixels in 4K (UHD) content compared to 1080p HD does result in a sharper picture, it’s the wider color gamut and high dynamic range capabilities of 4K/UHD content that make video look more lifelike and vivid. But most consumer displays, including TVs and projectors, are unable to reproduce the full dynamic range and peak brightness that studios use to master movies and TV shows. A film may be mastered for peaks of 4,000 nits, while a high-quality OLED display may only be able to reproduce 1,000 or possibly 2,000-nit peaks. Attempting to display content mastered for 4,000 nits on a consumer display with lower peak brightness can result in a loss of specular (bright) highlights, a loss of shadow detail, or both.

Dynamic HDR formats like Dolby Vision and HDR10+ allow the content’s dynamic range to be adjusted at playback time on a scene-by-scene basis to fit the capabilities of the display. This lets consumers see a better representation of the content creators’ “artistic intent,” regardless of whether they’re watching on a DLP projector, an OLED TV or a MiniLED/LCD TV. By eliminating the dynamic HDR options, Disney+ could be creating an inferior viewing experience for its customers, even though those customers are paying for a “premium” streaming experience.
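To make the difference concrete, here is a minimal Python sketch of the idea. It is not any actual Dolby Vision or HDR10+ algorithm (whose curves and metadata are far more sophisticated, and whose names are not used here); it simply shows why mapping every scene against a single 4,000-nit mastering peak dims highlights that a per-scene peak would leave untouched.

```python
def tone_map_nits(pixel_nits, scene_peak, display_peak):
    """Illustrative roll-off curve mapping mastered luminance onto a display's peak.

    Values below a knee point pass through unchanged; brighter values are
    compressed toward the display peak instead of clipping, which would
    otherwise crush specular highlights.
    """
    knee = 0.75 * display_peak
    if pixel_nits <= knee:
        return pixel_nits
    excess = (pixel_nits - knee) / max(scene_peak - knee, 1e-6)
    return knee + (display_peak - knee) * min(excess, 1.0)


# A 900-nit highlight in a dim scene, shown on a 1,000-nit display:
# static metadata forces mapping against the film's 4,000-nit mastering peak,
# while per-scene (dynamic) metadata knows this scene only reaches 1,000 nits.
static_result = tone_map_nits(900, scene_peak=4000, display_peak=1000)   # ~762 nits
dynamic_result = tone_map_nits(900, scene_peak=1000, display_peak=1000)  # 900 nits
print(f"static: {static_result:.0f} nits, dynamic: {dynamic_result:.0f} nits")
```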


When contacted for comment, a source at the company had this to add:

“Dolby Vision support for Disney+ content in several European countries is currently unavailable due to technical issues. We are actively exploring ways to restore it and will provide updates as soon as we can. 4K UHD and HDR support remain available on supported devices. HDR10+ has not ever been available in market in Europe to date, we expect it to be available soon.”

The Bottom Line

From Disney’s response, it appears that the company is aware of the issue and is working on a solution for its customers. In the meantime, affected users can still access this content in 4K resolution with standard HDR10. And on TVs with good HDR tone-mapping processing, results with HDR10 can be quite good. Still, there are many fans of Dolby Vision, and many TVs that support Dolby Vision but neither support HDR10+ nor have their own dynamic HDR tone mapping. We’re sure these customers would like to see Dolby Vision support restored so they can get the highest possible visual quality from their streamed content.



Meta Goes to Trial in a New Mexico Child Safety Case. Here’s What’s at Stake

Today, Meta went to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation on its apps, including Facebook and Instagram. The state claims that Meta violated New Mexico’s Unfair Practices Act by implementing design features and algorithms that created dangerous conditions for users. Now, more than two years after the case was filed, opening arguments have begun in Santa Fe.

It’s a big week for Meta in court: A landmark social media trial kicks off in California today as well, the nation’s first legal test of social media addiction. That case is part of a “JCCP,” or judicial council coordination proceeding, which brings together many civil suits that focus on similar issues.

The plaintiffs in that case allege that social media companies designed their products in a negligent manner and caused various harms to minors using their apps. Snap, TikTok, and Google were named as defendants alongside Meta; Snap and TikTok have already settled. The fact that Meta has not means that some of the company’s top executives may be called to the witness stand in the coming weeks.

Meta executives, including Mark Zuckerberg, are not likely to testify live in the New Mexico trial. But the proceedings may still be noteworthy for a few reasons. It’s the first standalone, state-led case against Meta that has actually gone to trial in the US. It’s also a highly charged case alleging child sexual exploitation that will ultimately lean on very technical arguments, including what it means to “mislead” the public, how algorithmic amplification works on social media, and what protections Meta and other social media platforms have through Section 230.


And, while Meta’s top brass might not be required to appear in person, executive depositions and testimonies from other witnesses could still offer an interesting look at the inner workings of the company as it established policies around underage users and responded to complaints that claim it wasn’t doing enough to protect them.

Meta has so far given no indication that it plans to settle. The company has denied the allegations, and Meta spokesperson Aaron Simpson told WIRED previously, “While New Mexico makes sensationalist, irrelevant and distracting arguments, we’re focused on demonstrating our longstanding commitment to supporting young people…We’re proud of the progress we’ve made, and we’re always working to do better.”

Sacha Haworth, executive director of The Tech Oversight Project, a tech industry watchdog, said in an emailed statement that these two trials represent “the split screen of Mark Zuckerberg’s nightmares: a landmark trial in Los Angeles over addicting children to Facebook and Instagram, and a trial in New Mexico exposing how Meta enabled predators to use social media to exploit and abuse kids.”

“These are the trials of a generation,” Haworth added. “Just as the world watched courtrooms hold Big Tobacco and Big Pharma accountable, we will, for the first time, see Big Tech CEOs like Zuckerberg take the stand.”


The Cost of Doing Business

New Mexico Attorney General Raúl Torrez filed his complaint against Meta in December 2023. In it, he alleged that Meta proactively served underage users explicit content, enabled adults to exploit children on the platform, allowed Facebook and Instagram users to easily find child pornography, and allowed an investigator on the case, purporting to be a mother, to offer her underage daughter to sex traffickers.

The trial is expected to take place over seven weeks. Last week jurors were selected, a panel of 10 women and eight men (12 jurors and six alternates). New Mexico First Judicial District Judge Bryan Biedscheid is presiding over the case.


ByteDance’s new Seedance 2.0 supposedly ‘surpasses Sora 2’

ByteDance says Seedance 2.0 generates ‘cinematic content’ with ‘seamless video extension and natural language control’.

TikTok parent company ByteDance launched the pre-release version of its new AI video model, called Seedance 2.0, over the weekend, sparking a rally in the shares of Chinese AI firms.

ByteDance markets Seedance 2.0 as a “true” multi-modal AI creator, allowing users to combine images, videos, audio and text to generate “cinematic content” with “precise reference capabilities, seamless video extension and natural language control”. The model is currently available to select users of Jimeng AI, ByteDance’s AI video platform.

The new Seedance model allows exporting in 2K, with generation 30pc faster than the previous version, 1.5, according to the company’s website.


Swiss-based consultancy CTOL called it the “most advanced AI video generation model available”, “surpassing OpenAI’s Sora 2 and Google’s Veo 3.1 in practical testing”.

Positive response to the Seedance 2.0 launch drove up shares in Chinese AI firms.

Data compiled by Bloomberg earlier today (9 February) showed publishing company COL Group Co hit its 20pc daily price ceiling, while Shanghai Film Co and gaming and entertainment firm Perfect World Co rose by 10pc. Meanwhile, the Shanghai Shenzhen CSI 300 Index is up by 1.63pc at the time of publishing.

Consumer AI video generators have made huge advances in a short period of time. The usual tells in AI videos – blurry fingers, overly smooth, unrealistic skin and inexplicable changes from frame to frame – are all becoming extremely hard to notice.


While rival AI video generator Sora 2, from OpenAI, produces results with watermarks (although there’s no shortage of tutorials on how to remove them), and Google’s Veo 3.1 comes with a metadata watermark called SynthID, Seedance boasts that its results are “completely watermark-free”.

The prevalence of advanced AI tools coupled with ease of access has opened the gates to a new wave of AI deepfakes, with the likes of xAI’s Grok at the centre of the issue.

Last month, the EU launched a new investigation into X to probe whether the Elon Musk-owned social media site properly assessed and mitigated risks stemming from its in-platform AI chatbot Grok after it was outfitted with the ability to edit images.

Users on the social media site quickly prompted the tool to undress people – generally women and children – in images and videos. Millions of such pieces of content were generated on X, The New York Times found.




Startup Uses Neural Chips to Turn Live Pigeons Into Cyborg Drones

A Moscow-based startup has made a big move into the area of animal-machine hybrids. Neiry, a neurotech company, claims to have already made progress on remotely operated pigeons by inserting electrodes into their brains.



In late 2025, there were reports of early flight tests in the city, in which the modified birds flew controlled routes before returning to base. The project is known as PJN-1, and while the company promotes it as a valuable tool for civilian applications, this type of technology is also gaining traction due to its potential for surveillance.



Surgeons use a specialized frame to delicately place microscopic electrodes into specific areas of the pigeon’s brain. The electrodes are then linked to a mini-stimulator on the bird’s head. All of the bird’s electronics and navigation hardware are housed inside a lightweight backpack and powered by solar panels, and a tiny camera is mounted on the bird’s chest to capture video. Operators can direct the bird to fly left or right by sending electrical signals to its brain, while GPS tracks the bird’s location and guides its trajectory, much as it would for a standard drone. According to Neiry, the bird does not require training and can be operated immediately after the procedure, and the surgery has reportedly been 100% successful in terms of survival rate.

Pigeons have some significant advantages over regular drones in certain scenarios. For starters, they can fly hundreds of miles, perhaps 300 or more in a single day, without having to replace batteries or land frequently. They can easily navigate difficult environments, handle anything the weather throws at them, and even fit into small spaces. Neiry believes pigeons could be used to inspect pipelines or electrical lines, survey industrial regions, or assist with search and rescue efforts in difficult-to-reach areas. According to the company’s founder, Alexander Panov, the same technology may be used with other birds, such as ravens for coastal monitoring, seagulls, or even albatrosses for operations over the ocean, as long as they can carry the payload and fly the distance required.

Nvidia releases DreamDojo, a robot ‘world model’ trained on 44,000 hours of human video

A team of researchers led by Nvidia has released DreamDojo, a new AI system designed to teach robots how to interact with the physical world by watching tens of thousands of hours of human video — a development that could significantly reduce the time and cost required to train the next generation of humanoid machines.

The research, published this month and involving collaborators from UC Berkeley, Stanford, the University of Texas at Austin, and several other institutions, introduces what the team calls “the first robot world model of its kind that demonstrates strong generalization to diverse objects and environments after post-training.”

At the core of DreamDojo is what the researchers describe as “a large-scale video dataset” comprising “44k hours of diverse human egocentric videos, the largest dataset to date for world model pretraining.” The dataset, called DreamDojo-HV, is a dramatic leap in scale — “15x longer duration, 96x more skills, and 2,000x more scenes than the previously largest dataset for world model training,” according to the project documentation.


A simulated robot places a cup into a cardboard box in a workshop setting, one of thousands of scenarios DreamDojo can model after training on 44,000 hours of human video. (Credit: Nvidia)


Inside the two-phase training system that teaches robots to see like humans

The system operates in two distinct phases. First, DreamDojo “acquires comprehensive physical knowledge from large-scale human datasets by pre-training with latent actions.” Then it undergoes “post-training on the target embodiment with continuous robot actions” — essentially learning general physics from watching humans, then fine-tuning that knowledge for specific robot hardware.

For enterprises considering humanoid robots, this approach addresses a stubborn bottleneck. Teaching a robot to manipulate objects in unstructured environments traditionally requires massive amounts of robot-specific demonstration data — expensive and time-consuming to collect. DreamDojo sidesteps this problem by leveraging existing human video, allowing robots to learn from observation before ever touching a physical object.
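The two-phase recipe described above can be sketched roughly as follows. This is an illustrative outline only, not Nvidia’s actual DreamDojo code; every name here (the model methods, the data variables) is a hypothetical placeholder.

```python
# Illustrative outline of the two-phase training described above. All names
# (infer_latent_action, prediction_loss, etc.) are hypothetical placeholders,
# not Nvidia's actual DreamDojo API.

def pretrain_on_human_video(model, human_video_clips):
    """Phase 1: learn general physical dynamics from human egocentric video.

    Human video carries no robot action labels, so the model infers a compact
    'latent action' that explains the change between consecutive frames and
    learns to predict the next frame conditioned on it.
    """
    for clip in human_video_clips:
        for frame, next_frame in zip(clip, clip[1:]):
            latent_action = model.infer_latent_action(frame, next_frame)
            loss = model.prediction_loss(frame, latent_action, next_frame)
            model.update(loss)


def posttrain_on_robot_data(model, robot_trajectories):
    """Phase 2: fine-tune on the target embodiment using real, continuous robot actions."""
    for trajectory in robot_trajectories:
        for frame, action, next_frame in trajectory:
            loss = model.prediction_loss(frame, action, next_frame)
            model.update(loss)
```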

One of the technical breakthroughs is speed. Through a distillation process, the researchers achieved “real-time interactions at 10 FPS for over 1 minute” — a capability that enables practical applications like live teleoperation and on-the-fly planning. The team demonstrated the system working across multiple robot platforms, including the GR-1, G1, AgiBot, and YAM humanoid robots, showing what they call “realistic action-conditioned rollouts” across “a wide range of environments and object interactions.”

Why Nvidia is betting big on robotics as AI infrastructure spending soars

The release comes at a pivotal moment for Nvidia’s robotics ambitions — and for the broader AI industry. At the World Economic Forum in Davos last month, CEO Jensen Huang declared that AI robotics represents a “once-in-a-generation” opportunity, particularly for regions with strong manufacturing bases. According to Digitimes, Huang has also stated that the next decade will be “a critical period of accelerated development for robotics technology.”


The financial stakes are enormous. Huang told CNBC’s “Halftime Report” on February 6 that the tech industry’s capital expenditures — potentially reaching $660 billion this year from major hyperscalers — are “justified, appropriate and sustainable.” He characterized the current moment as “the largest infrastructure buildout in human history,” with companies like Meta, Amazon, Google, and Microsoft dramatically increasing their AI spending.

That infrastructure push is already reshaping the robotics landscape. Robotics startups raised a record $26.5 billion in 2025, according to data from Dealroom. European industrial giants including Siemens, Mercedes-Benz, and Volvo have announced robotics partnerships in the past year, while Tesla CEO Elon Musk has claimed that 80 percent of his company’s future value will come from its Optimus humanoid robots.

How DreamDojo could transform enterprise robot deployment and testing

For technical decision-makers evaluating humanoid robots, DreamDojo’s most immediate value may lie in its simulation capabilities. The researchers highlight downstream applications including “reliable policy evaluation without real-world deployment and model-based planning for test-time improvement” — capabilities that could let companies simulate robot behavior extensively before committing to costly physical trials.


This matters because the gap between laboratory demonstrations and factory floors remains significant. A robot that performs flawlessly in controlled conditions often struggles with the unpredictable variations of real-world environments — different lighting, unfamiliar objects, unexpected obstacles. By training on 44,000 hours of diverse human video spanning thousands of scenes and nearly 100 distinct skills, DreamDojo aims to build the kind of general physical intuition that makes robots adaptable rather than brittle.

The research team, led by Linxi “Jim” Fan, Joel Jang, and Yuke Zhu, with Shenyuan Gao and William Liang as co-first authors, has indicated that code will be released publicly, though a timeline was not specified.

The bigger picture: Nvidia’s transformation from gaming giant to robotics powerhouse

Whether DreamDojo translates into commercial robotics products remains to be seen. But the research signals where Nvidia’s ambitions are heading as the company increasingly positions itself beyond its gaming roots. As Kyle Barr observed at Gizmodo earlier this month, Nvidia now views “anything related to gaming and the ‘personal computer’” as “outliers on Nvidia’s quarterly spreadsheets.”

The shift reflects a calculated bet: that the future of computing is physical, not just digital. Nvidia has already invested $10 billion in Anthropic and signaled plans to invest heavily in OpenAI’s next funding round. DreamDojo suggests the company sees humanoid robots as the next frontier where its AI expertise and chip dominance can converge.


For now, the 44,000 hours of human video at the heart of DreamDojo represent something more fundamental than a technical benchmark. They represent a theory — that robots can learn to navigate our world by watching us live in it. The machines, it turns out, have been taking notes.
