
DOJ may face investigation for pressuring Apple, Google to remove apps for tracking ICE agents


House Judiciary Committee member Jamie Raskin (D-MD) has asked the US Department of Justice to turn over all of its communications with Apple and Google regarding the companies’ decisions to remove apps that shared information about sightings of US Immigration and Customs Enforcement officers. Several apps that let people report where they had seen ICE agents were removed from both Apple’s App Store and Google’s Play Store in October. Politico reported that Raskin has contacted Attorney General Pam Bondi on the issue and also questioned the agency’s use of force against protesters as it executes the immigration policy set by President Donald Trump.

“The coercion and censorship campaign, which ultimately targets the users of ICE-monitoring applications, is a clear effort to silence this Administration’s critics and suppress any evidence that would expose the Administration’s lies, including its Orwellian attempts to cover up the murders of Renee and Alex,” Raskin wrote to Bondi. He was referring to Minneapolis residents Renee Good and Alex Pretti, who were both fatally shot by ICE agents. In the two separate incidents, claims made by federal officials about the victims and the circumstances of their deaths were contradicted by eyewitnesses or camera footage, echoing the violent encounters, and the false official accounts of them, that occurred during ICE raids in Chicago several months earlier.


Is Disney+ Losing Dolby Vision Dynamic HDR Streaming Due to a Patent Dispute?


According to a report from FlatPanelsHD, Disney+ users across Europe are reporting the loss of the Dolby Vision and HDR10+ dynamic HDR formats on 4K video content streamed on the service. Titles that previously streamed in 4K resolution with either Dolby Vision or HDR10+ dynamic HDR are now streaming in basic static HDR10 instead. Disney+ users in Germany first noticed the missing Dolby Vision as early as December 2025, and reports have since spread to other European countries.

As of February 2026, Disney+ users in the United States can still watch select titles on the service in Dolby Vision HDR, but it remains unclear whether this will continue or whether Disney will remove the technology from all markets. Some European customers have installed VPNs (virtual private networks) to spoof their geographic location and work around the issue, fooling the streaming service into thinking they are streaming from the United States. To be clear, we are not suggesting that affected customers take this action; we are simply reporting what some users have stated online.

The loss of Dolby Vision comes on the heels of a German court’s ruling in a patent dispute filed by InterDigital, Inc. against Disney+. InterDigital claims that Disney+ is violating a patent it owns related to streaming video content using high dynamic range (HDR) technology. The German court agreed that the claim was valid and, in November of last year, granted an injunction ordering Disney+ to stop using the allegedly infringing technology to deliver HDR content. The timing of the injunction and the first reports of disappearing Dolby Vision appears to be more than coincidental.


Why It Matters

While having four times as many pixels in 4K (UHD) content compared to 1080p HD does result in a sharper picture, it’s the wider color gamut and high dynamic range of 4K/UHD content that make video look more lifelike and vivid. But most consumer displays, including TVs and projectors, cannot reproduce the full dynamic range and peak brightness that studios use when mastering movies and TV shows. A film may be mastered for peaks of 4,000 nits, while a high-quality OLED display may only reach peaks of 1,000 or perhaps 2,000 nits. Displaying content mastered for 4,000 nits on a consumer display with lower peak brightness can crush specular (bright) highlights, lose shadow detail, or both.
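To make the mismatch concrete, here is a minimal Python sketch using the hypothetical figures above (a 4,000-nit master, a 1,000-nit display). It contrasts naive clipping, which collapses every highlight above the display’s peak into one value, with a simple Reinhard-style roll-off. This illustrates the general idea only; it is not any particular TV’s tone-mapping algorithm.

```python
# Minimal static tone-mapping sketch. The peak values are hypothetical,
# taken from the example above; real tone mappers are far more elaborate.

MASTER_PEAK = 4000.0   # nits the content was graded for
DISPLAY_PEAK = 1000.0  # nits the display can actually produce

def hard_clip(nits: float) -> float:
    """Naive approach: everything above the display peak collapses to one
    value, so bright specular highlights lose all internal detail."""
    return min(nits, DISPLAY_PEAK)

def reinhard_tone_map(nits: float) -> float:
    """Extended Reinhard curve: maps MASTER_PEAK exactly to DISPLAY_PEAK and
    compresses, rather than clips, the highlights in between. (Real tone
    mappers usually add a linear segment to protect midtone brightness.)"""
    L = nits / DISPLAY_PEAK
    Lw = MASTER_PEAK / DISPLAY_PEAK  # luminance that should map to peak white
    return DISPLAY_PEAK * L * (1 + L / (Lw * Lw)) / (1 + L)

for nits in (100.0, 800.0, 1500.0, 3000.0, 4000.0):
    print(f"{nits:6.0f} nits -> clipped {hard_clip(nits):7.1f}, "
          f"tone-mapped {reinhard_tone_map(nits):7.1f}")
```

Note how the clipped column is identical for every input above 1,000 nits, while the tone-mapped column keeps the highlights distinct at the cost of overall brightness.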

Dynamic HDR formats like Dolby Vision and HDR10+ allow the content’s dynamic range to be adjusted at playback time, on a scene-by-scene basis, to fit the capabilities of the display. This lets consumers see a better representation of the content creators’ “artistic intent” whether they’re watching on a DLP projector, an OLED TV or a MiniLED/LCD TV. By eliminating the dynamic HDR options, Disney+ could be creating an inferior viewing experience for its customers, even though those customers are paying for a “premium” streaming experience.
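Under the same assumptions as the sketch above, the per-scene difference fits in a few lines: with dynamic metadata, the roll-off is driven by each scene’s own peak rather than by the title’s overall mastering peak, so a scene that already fits the display passes through untouched.

```python
def tone_map_dynamic(nits: float, scene_peak: float,
                     display_peak: float = 1000.0) -> float:
    """Per-scene tone mapping driven by dynamic metadata (the scene's own
    peak luminance) instead of one global curve for the whole title."""
    if scene_peak <= display_peak:
        return nits  # the scene already fits the display; leave it alone
    L = nits / display_peak
    Lw = scene_peak / display_peak
    return display_peak * L * (1 + L / (Lw * Lw)) / (1 + L)

# A dim 500-nit interior scene passes through unchanged, while the same
# 400-nit pixel in a 4,000-nit sunlit scene is dimmed to leave headroom
# for that scene's much brighter highlights.
print(tone_map_dynamic(400.0, scene_peak=500.0))    # -> 400.0
print(tone_map_dynamic(400.0, scene_peak=4000.0))   # -> ~292.9
```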


When contacted for comment, a source at the company said:

“Dolby Vision support for Disney+ content in several European countries is currently unavailable due to technical issues. We are actively exploring ways to restore it and will provide updates as soon as we can. 4K UHD and HDR support remain available on supported devices. HDR10+ has not ever been available in market in Europe to date, we expect it to be available soon.”

The Bottom Line

From Disney’s response, it appears that the company is aware of the issue and is working on a fix for its customers. In the meantime, affected users can still stream this content in 4K resolution with standard HDR10. On TVs with good HDR tone-mapping processing, HDR10 results can be quite good. Still, Dolby Vision has many fans, and many TVs support Dolby Vision but not HDR10+ and lack dynamic HDR tone mapping of their own. We’re sure these customers would like to see Dolby Vision support restored so they can get the highest possible visual quality from their streamed content.


Meta Goes to Trial in a New Mexico Child Safety Case. Here’s What’s at Stake


Today, Meta went to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation on its apps, including Facebook and Instagram. The state claims that Meta violated New Mexico’s Unfair Practices Act by implementing design features and algorithms that created dangerous conditions for users. Now, more than two years after the case was filed, opening arguments have begun in Santa Fe.

It’s a big week for Meta in court: a landmark social media trial kicks off in California today as well, the nation’s first legal test of social media addiction. That case is part of a “JCCP,” or judicial council coordinated proceeding, which consolidates many civil suits focused on similar issues.

The plaintiffs in that case allege that social media companies designed their products in a negligent manner and caused various harms to minors using their apps. Snap, TikTok, and Google were named as defendants alongside Meta; Snap and TikTok have already settled. The fact that Meta has not means that some of the company’s top executives may be called to the witness stand in the coming weeks.

Meta executives, including Mark Zuckerberg, are not likely to testify live in the New Mexico trial. But the proceedings may still be noteworthy for a few reasons. It’s the first standalone, state-led case against Meta that has actually gone to trial in the US. It’s also a highly charged case alleging child sexual exploitation that will ultimately lean on very technical arguments, including what it means to “mislead” the public, how algorithmic amplification works on social media, and what protections Meta and other social media platforms have through Section 230.


And while Meta’s top brass might not be required to appear in person, executive depositions and testimony from other witnesses could still offer an interesting look at the inner workings of the company as it set policies around underage users and responded to complaints that it wasn’t doing enough to protect them.

Meta has so far given no indication that it plans to settle. The company has denied the allegations, and Meta spokesperson Aaron Simpson told WIRED previously, “While New Mexico makes sensationalist, irrelevant and distracting arguments, we’re focused on demonstrating our longstanding commitment to supporting young people…We’re proud of the progress we’ve made, and we’re always working to do better.”

Sacha Haworth, executive director of The Tech Oversight Project, a tech industry watchdog, said in an emailed statement that these two trials represent “the split screen of Mark Zuckerberg’s nightmares: a landmark trial in Los Angeles over addicting children to Facebook and Instagram, and a trial in New Mexico exposing how Meta enabled predators to use social media to exploit and abuse kids.”

“These are the trials of a generation,” Haworth added. “Just as the world watched courtrooms hold Big Tobacco and Big Pharma accountable, we will, for the first time, see Big Tech CEOs like Zuckerberg take the stand.”


The Cost of Doing Business

New Mexico Attorney General Raúl Torrez filed his complaint against Meta in December 2023. In it, he alleged that Meta proactively served underage users explicit content, enabled adults to exploit children on the platform, allowed Facebook and Instagram users to easily find child pornography, and allowed an investigator on the case, purporting to be a mother, to offer her underage daughter to sex traffickers.

The trial is expected to run for seven weeks. Jurors were selected last week: a panel of 10 women and eight men (12 jurors and six alternates). New Mexico First Judicial District Judge Bryan Biedscheid is presiding over the case.


ByteDance’s new Seedance 2.0 supposedly ‘surpasses Sora 2’


ByteDance says Seedance 2.0 generates ‘cinematic content’ with ‘seamless video extension and natural language control’.

TikTok parent company ByteDance launched the pre-release version of its new AI video model, Seedance 2.0, over the weekend, sending shares of Chinese AI firms higher.

ByteDance markets Seedance 2.0 as a “true” multi-modal AI creator, allowing users to combine images, videos, audio and text to generate “cinematic content” with “precise reference capabilities, seamless video extension and natural language control”. The model is currently available to select users of Jimeng AI, ByteDance’s AI video platform.

The new Seedance model can export at 2K resolution and generates video 30pc faster than the previous version, 1.5, according to the company’s website.


Swiss-based consultancy CTOL called it the “most advanced AI video generation model available”, “surpassing OpenAI’s Sora 2 and Google’s Veo 3.1 in practical testing”.

Positive response to the Seedance 2.0 launch drove up shares in Chinese AI firms.

Data compiled by Bloomberg earlier today (9 February) showed publishing company COL Group Co hit its 20pc daily price ceiling, while Shanghai Film Co and gaming and entertainment firm Perfect World Co rose by 10pc. Meanwhile, the Shanghai Shenzhen CSI 300 Index is up by 1.63pc at the time of publishing.

Consumer AI video generators have advanced by leaps and bounds in a short period of time. The usual tells in AI videos – blurry fingers, overly smooth and unrealistic skin, and inexplicable changes from frame to frame – are all becoming extremely hard to notice.


While rival AI video generator Sora 2, from OpenAI, produces results with watermarks (although there’s no shortage of tutorials on how to remove them), and Google’s Veo 3.1 comes with a metadata watermark called SynthID, Seedance boasts that its results are “completely watermark-free”.

The prevalence of advanced AI tools coupled with ease of access has opened the gates to a new wave of AI deepfakes, with the likes of xAI’s Grok at the centre of the issue.

Last month, the EU launched a new investigation into X to probe whether the Elon Musk-owned social media site properly assessed and mitigated risks stemming from its in-platform AI chatbot Grok after it was outfitted with the ability to edit images.

Users on the social media site quickly prompted the tool to undress people – generally women and children – in images and videos. Millions of such pieces of content were generated on X, The New York Times found.



Startup Uses Neural Chips to Turn Live Pigeons Into Cyborg Drones

A Moscow-based startup has made a bold move into the field of animal-machine hybrids. Neiry, a neurotech company, claims to have already made progress on remotely operated pigeons by implanting electrodes in their brains.



In late 2025, there were reports of early flight tests in the city, in which the modified birds flew controlled routes before returning to base. The project is known as PJN-1, and while the company promotes it as a valuable tool for civilian applications, this type of technology is gaining traction because of its potential for surveillance.



Surgeons use a specialized frame to delicately place microscopic electrodes into specific areas of the pigeon’s brain, then link the electrodes to a mini-stimulator on the bird’s head. The bird’s electronics and navigation system are housed in a lightweight backpack and powered by solar panels, and a tiny camera mounted on its chest captures video. Operators direct the bird to fly left or right by sending electrical signals to its brain, while GPS tracks the bird’s location and guides its trajectory, just as with a standard drone. According to Neiry, the bird requires no training and can be operated immediately after the procedure, and the company reports a 100% survival rate for the surgery.
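As a rough illustration of the control loop described above (GPS position in, left/right stimulation out), here is a hypothetical Python sketch. The function names, the deadband threshold and the command vocabulary are our own assumptions for illustration, not Neiry’s actual software.

```python
# Hypothetical GPS-to-steering sketch of the loop described above.
# Everything here (names, thresholds, commands) is illustrative only.
import math

def bearing(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def steering_command(position: tuple, heading: float,
                     waypoint: tuple, deadband: float = 10.0) -> str:
    """Turn the heading error toward the waypoint into a stimulation command."""
    error = (bearing(*position, *waypoint) - heading + 180.0) % 360.0 - 180.0
    if abs(error) < deadband:
        return "none"                       # roughly on course: no stimulation
    return "right" if error > 0 else "left"

# Example: bird over central Moscow heading due north, waypoint to the
# northeast, so the loop would issue a "right" stimulation.
print(steering_command((55.7539, 37.6208), 0.0, (55.80, 37.70)))
```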

Pigeons have some significant advantages over conventional drones in certain scenarios. For starters, they can fly hundreds of miles, perhaps 300 or more in a single day, without needing battery swaps or frequent landings. They can easily navigate difficult environments, handle whatever the weather throws at them, and even fit into small spaces. Neiry believes its pigeons could inspect pipelines and power lines, survey industrial areas, or assist search and rescue efforts in hard-to-reach places. According to the company’s founder, Alexander Panov, the same technology could be used with other birds, such as ravens for coastal monitoring, seagulls, or even albatrosses for operations over the ocean, as long as the birds can carry the payload and fly the required distance.

Nvidia releases DreamDojo, a robot ‘world model’ trained on 44,000 hours of human video


A team of researchers led by Nvidia has released DreamDojo, a new AI system designed to teach robots how to interact with the physical world by watching tens of thousands of hours of human video — a development that could significantly reduce the time and cost required to train the next generation of humanoid machines.

The research, published this month and involving collaborators from UC Berkeley, Stanford, the University of Texas at Austin, and several other institutions, introduces what the team calls “the first robot world model of its kind that demonstrates strong generalization to diverse objects and environments after post-training.”

At the core of DreamDojo is what the researchers describe as “a large-scale video dataset” comprising “44k hours of diverse human egocentric videos, the largest dataset to date for world model pretraining.” The dataset, called DreamDojo-HV, is a dramatic leap in scale — “15x longer duration, 96x more skills, and 2,000x more scenes than the previously largest dataset for world model training,” according to the project documentation.


A simulated robot places a cup into a cardboard box in a workshop setting, one of thousands of scenarios DreamDojo can model after training on 44,000 hours of human video. (Credit: Nvidia)


Inside the two-phase training system that teaches robots to see like humans

The system operates in two distinct phases. First, DreamDojo “acquires comprehensive physical knowledge from large-scale human datasets by pre-training with latent actions.” Then it undergoes “post-training on the target embodiment with continuous robot actions” — essentially learning general physics from watching humans, then fine-tuning that knowledge for specific robot hardware.
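In broad strokes, that two-phase recipe can be sketched in PyTorch as below. All module names, tensor shapes and the latent-action design are illustrative assumptions based on the description above, not DreamDojo’s actual architecture (Nvidia has indicated the real code will be released later).

```python
# Illustrative two-phase world-model training sketch (hypothetical shapes
# and module names; not DreamDojo's actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F

FRAME_DIM, LATENT_ACT_DIM, ROBOT_ACT_DIM = 256, 16, 7

class WorldModel(nn.Module):
    """Predicts the next frame embedding from the current one plus an action."""
    def __init__(self, act_dim: int):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(FRAME_DIM + act_dim, 512), nn.ReLU(),
            nn.Linear(512, FRAME_DIM))

    def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.dynamics(torch.cat([frame, action], dim=-1))

# Phase 1: pretrain on human video. The footage has no action labels, so a
# small encoder infers a *latent* action from each (frame, next frame) pair.
# (Real systems bottleneck this latent, e.g. by quantization, so it cannot
# simply copy the next frame; omitted here for brevity.)
latent_encoder = nn.Linear(2 * FRAME_DIM, LATENT_ACT_DIM)
model = WorldModel(LATENT_ACT_DIM)
opt = torch.optim.Adam([*model.parameters(), *latent_encoder.parameters()], lr=1e-4)
for _ in range(100):                          # stand-in for 44k hours of video
    frames = torch.randn(32, FRAME_DIM)       # dummy human-video embeddings
    next_frames = torch.randn(32, FRAME_DIM)
    z = latent_encoder(torch.cat([frames, next_frames], dim=-1))
    loss = F.mse_loss(model(frames, z), next_frames)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: post-train on the target robot, whose continuous joint commands
# replace the inferred latent actions through a small adapter layer.
adapter = nn.Linear(ROBOT_ACT_DIM, LATENT_ACT_DIM)
opt = torch.optim.Adam([*model.parameters(), *adapter.parameters()], lr=1e-5)
for _ in range(100):                          # small robot-specific dataset
    frames = torch.randn(32, FRAME_DIM)
    actions = torch.randn(32, ROBOT_ACT_DIM)  # dummy continuous robot actions
    next_frames = torch.randn(32, FRAME_DIM)
    loss = F.mse_loss(model(frames, adapter(actions)), next_frames)
    opt.zero_grad(); loss.backward(); opt.step()
```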

For enterprises considering humanoid robots, this approach addresses a stubborn bottleneck. Teaching a robot to manipulate objects in unstructured environments traditionally requires massive amounts of robot-specific demonstration data — expensive and time-consuming to collect. DreamDojo sidesteps this problem by leveraging existing human video, allowing robots to learn from observation before ever touching a physical object.

One of the technical breakthroughs is speed. Through a distillation process, the researchers achieved “real-time interactions at 10 FPS for over 1 minute” — a capability that enables practical applications like live teleoperation and on-the-fly planning. The team demonstrated the system working across multiple robot platforms, including the GR-1, G1, AgiBot, and YAM humanoid robots, showing what they call “realistic action-conditioned rollouts” across “a wide range of environments and object interactions.”

Why Nvidia is betting big on robotics as AI infrastructure spending soars

The release comes at a pivotal moment for Nvidia’s robotics ambitions — and for the broader AI industry. At the World Economic Forum in Davos last month, CEO Jensen Huang declared that AI robotics represents a “once-in-a-generation” opportunity, particularly for regions with strong manufacturing bases. According to Digitimes, Huang has also stated that the next decade will be “a critical period of accelerated development for robotics technology.”


The financial stakes are enormous. Huang told CNBC’s “Halftime Report” on February 6 that the tech industry’s capital expenditures — potentially reaching $660 billion this year from major hyperscalers — are “justified, appropriate and sustainable.” He characterized the current moment as “the largest infrastructure buildout in human history,” with companies like Meta, Amazon, Google, and Microsoft dramatically increasing their AI spending.

That infrastructure push is already reshaping the robotics landscape. Robotics startups raised a record $26.5 billion in 2025, according to data from Dealroom. European industrial giants including Siemens, Mercedes-Benz, and Volvo have announced robotics partnerships in the past year, while Tesla CEO Elon Musk has claimed that 80 percent of his company’s future value will come from its Optimus humanoid robots.

How DreamDojo could transform enterprise robot deployment and testing

For technical decision-makers evaluating humanoid robots, DreamDojo’s most immediate value may lie in its simulation capabilities. The researchers highlight downstream applications including “reliable policy evaluation without real-world deployment and model-based planning for test-time improvement” — capabilities that could let companies simulate robot behavior extensively before committing to costly physical trials.


This matters because the gap between laboratory demonstrations and factory floors remains significant. A robot that performs flawlessly in controlled conditions often struggles with the unpredictable variations of real-world environments — different lighting, unfamiliar objects, unexpected obstacles. By training on 44,000 hours of diverse human video spanning thousands of scenes and nearly 100 distinct skills, DreamDojo aims to build the kind of general physical intuition that makes robots adaptable rather than brittle.

The research team, led by Linxi “Jim” Fan, Joel Jang, and Yuke Zhu, with Shenyuan Gao and William Liang as co-first authors, has indicated that code will be released publicly, though a timeline was not specified.

The bigger picture: Nvidia’s transformation from gaming giant to robotics powerhouse

Whether DreamDojo translates into commercial robotics products remains to be seen. But the research signals where Nvidia’s ambitions are heading as the company increasingly positions itself beyond its gaming roots. As Kyle Barr observed at Gizmodo earlier this month, Nvidia now views “anything related to gaming and the ‘personal computer’” as “outliers on Nvidia’s quarterly spreadsheets.”

The shift reflects a calculated bet: that the future of computing is physical, not just digital. Nvidia has already invested $10 billion in Anthropic and signaled plans to invest heavily in OpenAI’s next funding round. DreamDojo suggests the company sees humanoid robots as the next frontier where its AI expertise and chip dominance can converge.


For now, the 44,000 hours of human video at the heart of DreamDojo represent something more fundamental than a technical benchmark. They represent a theory — that robots can learn to navigate our world by watching us live in it. The machines, it turns out, have been taking notes.


Waymo is testing driverless robotaxis in Nashville


Waymo has pulled the human safety driver from its autonomous test vehicles in Nashville, as the Alphabet-owned company moves closer to launching a robotaxi service in the city.

Waymo, which has been testing in Nashville for months, is slated to launch a robotaxi service there this year in partnership with Lyft. Riders will initially hail rides directly through the Waymo app. Once the service expands, Waymo will also make its self-driving vehicles available through the Lyft app. Lyft has said it will handle fleet services, such as vehicle readiness and maintenance, charging infrastructure, and depot operations, through its wholly owned subsidiary Flexdrive.

Waymo has accelerated its robotaxi expansion and today operates commercial services in Atlanta, Austin, Los Angeles, Miami, the San Francisco Bay Area, and Phoenix. It also has driverless test fleets in Dallas, Houston, San Antonio, and Orlando.

The company tends to follow the same rollout strategy in every new market, starting with a small fleet of vehicles that are manually driven to map the city. The autonomous vehicles are then tested with a human safety operator in the driver’s seat. Eventually, the company conducts driverless testing, often allowing employees to hail rides, before launching a robotaxi service.



Linux 7.0 Kernel Confirmed By Linus Torvalds, Expected In Mid-April 2026


An anonymous reader writes: Linus Torvalds has confirmed the next major kernel series as Linux 7.0, reports Linux news website 9to5Linux.com: “So there you have it, the Linux 6.x era has ended with today’s Linux 6.19 kernel release, and a new one will begin with Linux 7.0, which is expected in mid-April 2026. The merge window for Linux 7.0 will open tomorrow, February 9th, and the first Release Candidate (RC) milestone is expected on February 22nd, 2026.”


How to design a space station: Meet the Seattle company that’s helping define the look of the final frontier


This cutaway view shows the interior of the Starlab space station’s laboratory. (Starlab Illustration)

How do you design a living space where there’s no up or down? That’s one of the challenges facing Teague, a Seattle-based design and innovation firm that advises space companies such as Blue Origin, Axiom Space and Voyager Technologies on how to lay out their orbital outposts.

Mike Mahoney, Teague’s senior director of space and defense programs, says the zero-gravity environment is the most interesting element to consider in space station design.

“You can’t put things on surfaces, right? You’re not going to have tables, necessarily, unless you can attach things to them, and they could be on any surface,” he told GeekWire. “So, directionality is a big factor. And knowing that opens up new opportunities. … You could have, let’s say, two scientists working in different orientations in the same area.”


Over the next few years, NASA and its partners are expected to make the transition from the aging International Space Station to an array of commercial space stations — and Teague is helping space station builders get ready for the shift.

Space is one of the newer frontiers for a company that’s celebrating the 100th anniversary of its founding this year. Teague is best known for helping to design the interiors of Boeing airplanes, as well as the first Polaroid camera and Microsoft’s first Xbox gaming console.

In the 1980s, Teague helped Boeing and NASA with their plans for Space Station Freedom, an orbital project that never got off the ground but eventually evolved into the International Space Station. Teague also partnered with NASA on a 3D-printed mockup for a Mars habitat, known as the Crew Health and Performance Exploration Analog.


Nowadays, Teague is focusing on interior designs for commercial spacecraft, a business opportunity that capitalizes on the company’s traditional expertise in airplane design.

Mahoney said Teague has been working with Jeff Bezos’ Blue Origin space venture on a variety of projects for more than a decade. The first project was the New Shepard suborbital rocket ship, which made its debut in 2015.

“We partnered with their engineering team to design for the astronaut experience within the New Shepard space capsule,” Mahoney said. “It’s all the interior components that you see that come together, from the linings to the lighting. We created a user experience vision for the displays as well.”

GeekWire’s Alan Boyle sits in one of the padded seats inside a mockup of the crew capsule for Blue Origin’s suborbital spaceship, on display at a space conference in 2017. The door of the capsule’s hatch is just to the right of Boyle’s head. (GeekWire File Photo / Kevin Lisota)

Teague also worked with Blue Origin on design elements for the Orbital Reef space station and the Blue Moon lunar lander. “We were involved in initial concepting for the look and feel of the vehicles,” Mahoney said. “In other cases, we designed and built mockups that were used for astronaut operations and testing. How do we navigate around the lunar lander legs? How do we optimize toolboxes on the surface of the moon?”

Other space station ventures that have benefited from Teague’s input include Axiom Space (which also brought in Philippe Starck as a big-name designer) and Starlab Space, a joint venture founded by Voyager Technologies and Airbus.


Starlab recently unveiled a three-story mockup of its space station at NASA’s Johnson Space Center in Texas. The mockup is built so that it can be reconfigured to reflect tweaks that designers want to make in the space station’s layout, before launch or even years after launch.

“One of the things that’s been very helpful along this development path has been working with Teague, because you have to have a really good idea on how you lay out this very large volume so that you can optimize the efficiency of the crew,” said Tim Kopra, a former NASA astronaut who now serves as chief human exploration officer at Voyager Technologies.

Kopra compared the Starlab station to a three-story condo. “The first floor is essentially like the basement of a large building that has the infrastructure,” he said. “It has our life support systems, avionics and software, the toilets, the hygiene station — which encompasses both the toilet and a cleaning station — and the workout equipment.”

The second floor serves as a laboratory and workspace, with a glovebox, freezer, centrifuge, microscope and plenty of racks and lockers for storage. “We are very focused on four different industries: semiconductors, life sciences, pharmaceuticals and materials science,” Kopra said.


He said the third floor will be a “place that people will enjoy … because Deck 3 has our crew quarters, our galley table, our windows and a little bit more experiment capacity.”

The galley table is a prime example of how zero-gravity interior design differs from the earthly variety. “No chairs,” Kopra said. “Just like on the ISS, all you need is a place to hook your feet. There are little design features, like where do you put a handrail, and how tall is the table?” (He said the designers haven’t yet decided whether the table should be round or square.)

Kopra said one of his top design priorities is to use the station’s volume, and the astronauts’ time, as efficiently as possible. “Time is extremely valuable on the ISS. They calculate that crew time is worth about $135,000 per hour,” he said. “Ours will be a fraction of that, but it really illustrates how important it is to be efficient with the time on board.”


Starlab is laid out to maximize efficiency. “We have a really cool design where the middle has a hatchway that goes all the way through the three stories,” he said. “So, imagine if it were a fire station, you’d have a pole that went from floor to floor. We don’t need a fire pole. We can just translate through that area.”

Mahoney said human-centered design will be more important for commercial space stations than it was for the ISS.

“In the past, space stations have been primarily designed for professionally trained, military background astronauts,” he said. “Now we’ll have different folks in there. … How do we think about how researchers and scientists will be using these spaces? How do we think about non-professional private astronauts? As the International Space Station gets retired, how do these companies step in to fill the void, serving NASA but also a lot of these new customers?”


When will commercial space stations step in? The answer to that question is up in the air.


Last year, NASA reworked its process for awarding further funding for the development of commercial space stations. The revised plan was aimed at giving commercial partners a better chance of putting their orbital outposts in operation by 2030, the date set for the International Space Station’s retirement.

But NASA has been slow to follow through on the revised plan, sparking concern in Congress. Late last month, the space agency said it was still working to “align acquisition timelines with national space policy and broader operational objectives.” Now some lawmakers are calling on NASA to reconsider its plan to deorbit the ISS in the 2030-2031 time frame.

The timetable for the space station transition may be in flux, but Mahoney and other space station designers are staying the course — and taking the long view.

“We may not know right now how the space station is going to be used 20 years from now,” Mahoney said. “How do we start to future-proof and create a system within that’s modular and flexible, so that we can add technologies and add systems, or we can configure in different ways? … Those are the kinds of things that we’re thinking about designing for.”


Riot Games is laying off half of the 2XKO development team


Another day, another wave of gaming layoffs. Today it’s Riot Games with the announcement that it’s cutting jobs on its pair-based fighting game 2XKO. For context, a representative from Riot confirmed to Game Developer that about 80 people are being cut, or roughly half of 2XKO’s global development team.

“As we expanded from PC to console, we saw consistent trends in how players were engaging with 2XKO,” according to the blog post from executive producer Tom Cannon. “The game has resonated with a passionate core audience, but overall momentum hasn’t reached the level needed to support a team of this size long term.”

The console launch for 2XKO happened last month. Cannon said the company’s plans for its 2026 competitive season have not changed with the layoffs. He added that Riot will try to place affected employees in new positions within the company where possible.


Today’s NYT Connections: Sports Edition Hints, Answers for Feb. 10 #505


Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.


Today’s Connections: Sports Edition is all about the Winter Olympics. If you’re struggling with today’s puzzle but still want to solve it, read on for hints and the answers.

Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.


Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta

Hints for today’s Connections: Sports Edition groups

Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Elegant ice sport.


Green group hint: Sports cinema.

Blue group hint: Five for fighting.

Purple group hint: Unusual Olympic sport.

Answers for today’s Connections: Sports Edition groups

Yellow group: Figure skating disciplines.


Green group: Winter Olympic movies.

Blue group: Hockey penalties.

Purple group: Biathlon equipment.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words


What are today’s Connections: Sports Edition answers?


The completed NYT Connections: Sports Edition puzzle for Feb. 10, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is figure skating disciplines. The four answers are ice dance, pairs, singles and team event.

The green words in today’s Connections

The theme is Winter Olympic movies. The four answers are Cool Runnings; I, Tonya; Miracle and The Cutting Edge.


The blue words in today’s Connections

The theme is hockey penalties. The four answers are boarding, hooking, icing and offsides.

The purple words in today’s Connections

The theme is biathlon equipment. The four answers are poles, rifle, skis and targets.
