Tech
The Requirements Of AI | Hackaday

The media is full of breathless reports that AI can now code and human programmers are going to be put out to pasture. We aren’t convinced. In fact, we think the “AI revolution” is just a natural evolution that we’ve seen before. Consider, for example, radios. Early on, if you wanted to have a radio, you had to build it. You may have even had to fabricate some or all of the parts. Even today, winding custom coils for a radio isn’t that unusual.

But radios became more common. You can buy the parts you need. You can even buy entire radios on an IC. You can go to the store and buy a radio that is probably better than anything you’d cobble together yourself. Even with store-bought equipment, tuning a ham radio used to be a technically challenging task. Now, you punch a few numbers in on a keypad.

The Human Element

What this misses, though, is that there’s still a human somewhere in the process. Just not as many. Someone has to design that IC. Someone has to conceive of it to start with. We doubt, say, the ENIAC or EDSAC was hand-wired by its designers. They figured out what they wanted, and an army of technicians probably did the work. Few, if any, of those technicians could have envisioned the machine, but they could build it.

Does that make the designers less? No. If you write your code with a C compiler, should assembly programmers look down on you as inferior? Of course, they probably do, but should they?

If you have ever done any programming for most parts of the government and certain large companies, you probably know that system engineering is extremely important in those environments. An architect or system engineer collects requirements that have very formal meanings. Those requirements are decomposed through several levels. At the end, any competent programmer should be able to write code to meet the requirements. The requirements also provide a good way to test the end product.

Anatomy of a Requirement

System Design Process (public domain – from MIT Open Course).

A good requirement will look like this: “The system shall…” That means that it must comply with the rest of the sentence. For example, “The system shall process at least 50 records per minute.” This is testable.

Bad requirements might be something like “The system shall process many records per minute.” Or, “The system shall not present numeric errors.” A classic bad example is “The system shall use aesthetically pleasing cabinets.”

The first bad example is too hazy. One person might think “many” is at least 1,000. Someone else might be happy with 50. The second is negative, and requirements shouldn’t be negative, since it is difficult to prove a negative. You could rewrite it as “The system shall present errors in a human-readable form that explains the error cause in English.” The last one, of course, is completely subjective.

You usually want each requirement to handle one thing, to simplify testing. So a requirement like “The system shall present errors in human-readable form that explain the error cause in English and keep a log of all errors for at least three days” should really be two requirements, or at least have two parts that can be tested separately.
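
As an illustration of why quantitative wording matters, a requirement like “at least 50 records per minute” maps directly onto an automated test. This is a minimal sketch with invented names (`process_records`, `REQ-001` are hypothetical, not from any real spec):

```python
import time

# Hypothetical system under test: pretend each record takes a few
# milliseconds to process.
def process_records(records):
    for _ in records:
        time.sleep(0.001)

def test_req_001_throughput():
    """REQ-001: The system shall process at least 50 records per minute."""
    start = time.monotonic()
    process_records(range(50))
    elapsed = time.monotonic() - start
    # "At least 50 per minute" means 50 records must finish within 60 seconds.
    assert elapsed < 60.0, f"50 records took {elapsed:.2f}s; limit is 60s"

test_req_001_throughput()
```

A vague requirement like “many records per minute” gives you no number to put in that assertion, which is exactly why it can’t be verified.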

In general, requirements shouldn’t tell you how to do something. “The system shall use a bubble sort,” is probably a poor requirement. However, it should also be feasible. “The system shall detect lifeforms” doesn’t tell you how to make that work, but it is suspicious because it isn’t clear how that could work. “The system shall operate forever with no external power” is calling for a perpetual motion machine, so even if that’s what you wish for, it is still a bad requirement.

A portion of a typical NASA SRS requirements document

You sometimes see sentences with “should” instead of “shall.” These mark goals, which are important but not held to the same standard of rigor. For example, you might have “The system should work for as long as possible in the absence of external power.” That communicates the desire to work without external power to the extent that it is practical. If you actually want it to work for at least a certain period of time, then you are back to a solid, testable requirement, assuming such a time period is feasible.

You can find many NASA requirements documents, like this SRS (software requirements specification), for example. Note the table provides a unique ID for each requirement, a rationale, and notes about testing the requirement.

Requirement Decomposition

High-level requirements trace down to lower-level requirements and vice versa. For example, your top-level requirement might be: “The system shall allow underwater research at location X, which is 600 feet underwater.” This might decompose to: “The system shall support 8 researchers,” and “The system shall sustain the crew for up to three months without resupply.”

The next level might levy requirements based on what structure is needed to operate at 600 feet, how much oxygen, fresh water, food, power, and living space are required. Then an even lower level might break that down to even more detail.

Of course, a lower-level document for structures will be different from a lower-level requirement for, say, water management. In general, there will be more lower-level requirements than upper-level ones. But you get the idea. There may be many requirement documents at each level and, in general, the lower you go, the more specific the requirements.
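
The upward and downward tracing between levels can be sketched as a tiny data structure. The IDs and wording below are invented for illustration; real programs use dedicated requirements-management tools, not twenty lines of Python:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Requirement:
    req_id: str                   # unique ID, as in an SRS table
    text: str                     # the "shall" statement
    parent: Optional[str] = None  # upward trace to the levying requirement
    children: list = field(default_factory=list)  # downward traces

reqs = {}

def add(req_id, text, parent=None):
    reqs[req_id] = Requirement(req_id, text, parent)
    if parent:
        reqs[parent].children.append(req_id)  # record the downward trace

add("SYS-1", "The system shall allow underwater research at 600 feet.")
add("SYS-1.1", "The system shall support 8 researchers.", parent="SYS-1")
add("SYS-1.2", "The system shall sustain the crew for up to three months "
               "without resupply.", parent="SYS-1")

# Trace check: every lower-level requirement must point at a real parent.
orphans = [r.req_id for r in reqs.values()
           if r.parent is not None and r.parent not in reqs]
assert not orphans, f"untraced requirements: {orphans}"
print(reqs["SYS-1"].children)  # ['SYS-1.1', 'SYS-1.2']
```

The point of keeping both directions of the trace is that you can mechanically answer “what does this top-level requirement decompose into?” and “why does this low-level requirement exist?”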

And AI?

We suspect that if you could leap ahead a decade, a programmer’s life might be more like today’s system architect. Your value isn’t understanding printf or Python decorators. It is in visualizing useful solutions that can actually be done by a computer.

Then you generate requirements. Sure, AI might help improve your requirements, trace them, and catalog them. Eventually, AI can take the requirements and actually write code, or do mechanical design, or whatever. It could even help produce test plans.

The real question is, when can you stop and let the machine take over? If you can simply say “Design an underwater base,” then you would really have something. But the truth is, a human is probably more likely to understand exactly what all the unspoken assumptions are. Of course, an AI, or even a human expert, may ask clarifying questions: “How many people?” or “What’s the maximum depth?” But, in general, we think humans will retain an edge in both making assumptions and making creative design choices for the foreseeable future.

The End Result

There is more to teaching practical mathematics than drilling multiplication tables into students. You want them to learn how to attack complex problems and develop intuition from the underlying math. Perhaps programming isn’t about writing for loops any more than mathematics is about how to take a square root without a calculator. Sure, you should probably know how things work, but it is secondary to the real tools: creativity, reasoning, intuition, and the ability to pick from a bewildering number of alternatives to get a workable solution.

Our experience is that normal people are terrible about unambiguously expressing what they want a computer to do. In fact, many people don’t even understand what they want the computer to do beyond some fuzzy handwaving goal. It seems unlikely that the CEO of the future will simply tell an AI what it wants and a fully developed system will pop out.

Requirements are just one part of the systems engineering picture, but an important one. MITRE has a good introduction, especially the section on requirements engineering.

What do you think? Is AI coding a fad? The new normal? Or is it just a stepping stone to making human programmers obsolete? Let us know in the comments. Although they have improved, we still think the current crop of AI is around the level of a bad summer intern.


Tech

Google says its AI systems helped deter Play Store malware in 2025

Fewer bad actors are targeting Google Play with malicious apps, the company says, a shift the tech giant credits to its increased investments in proactive security systems and AI technology.

In its latest Android app ecosystem safety report released on Thursday, Google said it prevented 1.75 million policy-violating apps from being published on Google Play in 2025, down from 2.36 million in 2024 and 2.28 million in 2023.

The annual report offers a look at how Google is keeping Android users safe by reviewing and monitoring apps to protect against malware, financial fraud, privacy invasions, sneaky subscriptions, and other threats.

For instance, Google says it banned more than 80,000 developer accounts in 2025 that had tried to publish these types of bad apps. That figure is also down year-over-year from 158,000 in 2024, and 333,000 in 2023.

A graphic from Google’s report, including the 1.75 million policy-violating apps prevented from being published to Google Play.

Google touted how its investments in AI and other real-time defenses have helped fight these sorts of threats, but also how they served as a deterrent.

“Initiatives like developer verification, mandatory pre-review checks, and testing requirements have raised the bar for the Google Play ecosystem, significantly reducing the paths for bad actors to enter,” the company’s blog post explained, adding that its “AI-powered, multi-layer protections” have been “discouraging bad actors from publishing malicious apps.”

Google noted it now runs over 10,000 safety checks on every app it publishes and continues to recheck apps after publication. The company has also integrated its latest generative AI models into the app review process, which has helped human reviewers find more complex malicious patterns faster. Google said it plans to increase its AI investments in 2026 to stay ahead of emerging threats.

In addition, Google said it prevented more than 255,000 apps from gaining excessive access to sensitive user data, a figure that’s down from 1.3 million in 2024. The company also blocked 160 million spam ratings and reviews last year, and prevented an average 0.5-star rating drop for apps targeted by review bombing.

Meanwhile, Android’s defense system, known as Google Play Protect, identified more than 27 million new malicious apps, and warned users or blocked the app from running. That’s an increase from the 13 million non-Play Store apps identified in 2024 and five million seen in 2023. These increases seem to suggest that bad actors are now more often avoiding the Play Store when targeting users with their malicious apps.


Tech

Donald Trump Jr.’s Private DC Club Has Mysterious Ties to an Ex-Cop With a Controversial Past

When the Executive Branch soft-launched in Washington, DC, last spring, the private club’s initial buzz centered on its starry roster of backers and founding members. The president’s eldest son, Donald Trump Jr., is one of the club’s several co-owners, according to previous reporting. Founding members reportedly include Trump administration AI czar David Sacks and his All-In podcast cohost Chamath Palihapitiya, as well as crypto bigwigs Tyler and Cameron Winklevoss.

“We wanted to create something new, hipper, and Trump-aligned,” Sacks said at the time. Proximity to Trumpworld didn’t come cheap; though the club headquarters is located in a basement space behind a shopping complex, fees to join are reportedly as high as $500,000.

The initial wave of press for the MAGA hot spot identified Trump Jr. and his business associates Omeed Malik, Chris Buskirk, and Zach and Alex Witkoff as the club’s co-owners. A Mother Jones report later revealed the involvement of David Sacks’ frequent business associate Glenn Gilmore, a San Francisco Bay Area real estate developer who is given a variety of titles on official documents, including co-owner, managing member, director, and president.

But according to corporate filings reviewed by WIRED, there’s another key figure whose involvement has not been previously reported and whose connection to its more famous founders remains unclear: Sean LoJacono, a former Metropolitan Police Department cop in Washington, DC, who gained local notoriety for his role in a stop and frisk that resulted in a lawsuit.

According to the legal complaint, in 2017, after questioning a man named M.B. Cottingham for a suspected open-container-law violation, LoJacono conducted a body search. A recording of the incident went viral on YouTube, sparking intense debate over aggressive policing tactics. “He stuck his finger in my crack,” Cottingham says in the video. “Stop fingering me, though, bro.” The next year, the American Civil Liberties Union of the District of Columbia sued LoJacono on behalf of Cottingham, alleging that LoJacono had “jammed his fingers between Mr. Cottingham’s buttocks and grabbed his genitals.” Cottingham agreed to settle his lawsuit with LoJacono and was paid an undisclosed amount by the District of Columbia (which admitted no wrongdoing) in 2018.

The MPD announced its intention to dismiss LoJacono following an internal affairs investigation, which concluded that the Cottingham search was not a fireable offense but that another search he had conducted the same day was. By early 2019, LoJacono had appealed his dismissal, arguing in well-publicized hearings that he had conducted searches according to how he had been taught by fellow officers in the field. Initially, the dismissal was upheld. However, the police union’s collective bargaining agreement enabled LoJacono to further appeal to a third-party arbitrator, which in November 2023 ruled in LoJacono’s favor.

Instead of returning to the police force, though, LoJacono has gone down a different path. A LinkedIn account featuring LoJacono’s name, likeness, and employment history lists his profession as “Director of Security and Facilities Management” at an unnamed private club in Washington, DC, from June 2025 to the present. Official incorporation paperwork for the Executive Branch Limited Liability Company filed to the Government of the District of Columbia’s corporations division in March 2025, shortly before the club launched, lists LoJacono as the “beneficial owner” of the business. The address listed on the paperwork matches the Executive Branch’s location. Donald Trump Jr. and other reported owners are not listed on the paperwork; Gilmore is listed on this document as the company’s “organizer.”

The paperwork indicates that LoJacono is considered a beneficial owner of a legal entity associated with the Executive Branch. But what does that mean, exactly?


Tech

This New Home Depot Deal Gets You $250 In Tools For Just $50

For those watching the calendar closely, every passing day is bringing us closer to the arrival of spring. For many, that means it is almost time to glove up and get to work whipping their gardens and green spaces back into shape after they’ve spent the past few months battling snow, ice, and freezing temps. Folks in that category might want to know that The Home Depot is looking to outfit lawn care DIYers with some necessary gear via the “Let It Spring” sales event.

For the record, this is not the sort of holiday sale where certain items are discounted for a day or a long weekend. But it is sort of a holiday-styled event, with Home Depot’s marketing team essentially taking an advent calendar approach to its spring sales celebration via a 20-day countdown package. According to the company, the “Let It Spring” deal will provide serious savings to participants, essentially promising them $250 worth of tools and other helpful items.

All that gear will arrive in the form of a physical calendar-styled box which participants must purchase for $49.99. It is not, however, clear which specific items will be included inside, or if budget-friendly tool kits might feature in the mix. Rather, the Spring Countdown Calendar will feature “curated product SKUs across key spring categories,” covering everything from lawn care to grilling and outdoor entertainment. But according to one HD representative’s comments, each kit will include “practical tools and seasonal favorites” that “you’ll actually use.” 

Here’s how to ring in Spring with The Home Depot deal

Given the general setup of Home Depot’s Let It Spring Countdown Calendar approach, there’s no guarantee that participants will find the exact items they are looking for to accomplish their springtime yard tune-up. It also seems unlikely that power tools from Home Depot’s exclusive brand Ryobi will be included in the mix. Moreover, we should note the event’s press release states that not every Home Depot shopper will be able to take advantage of the Let It Spring deal, as the calendars will be available only “while supplies last.”

Whatever the case, the sale seems pretty fun in its overall design. Instead of solely providing online sales specials, it essentially works like an advent calendar, with participants opening a “door” in their calendar box every day to reveal a new item. They’ll do so over the final 20 days leading up to the first day of spring, which is officially March 20th in 2026. But you can, presumably, also just open them all the moment the box arrives if you like. It’s also not clear how many of the items are physically in the box vs. coupons or other discounts.

Along with tools, cleaning gear, and garden-friendly items, every door opened also promises to provide inspiration for spring projects and tips on how best to accomplish them. Per the press release, The Home Depot will be offering its Let It Spring Countdown Calendar to consumers in two separate drops, the first coming on February 20, 2026, and the second coming 5 days later on February 25. If you miss out, you can reportedly still follow the fun and potentially even score a deal on Home Depot’s website. 


Tech

David Silver is chasing superhuman intelligence with a $1bn seed

David Silver, a British AI researcher known for his role at Google’s DeepMind lab, has helped build some of the most influential AI systems and is now leading his own ambitious start-up. He is in the middle of raising a $1 billion seed round for his new London-based venture, Ineffable Intelligence.

If completed, the fundraising would be the largest seed round ever seen in Europe.

The round is being led by Sequoia Capital, with titans such as Nvidia, Google and Microsoft reportedly in talks to participate, according to industry sources familiar with the deal.

The proposed investment would place Ineffable Intelligence’s pre-money valuation at around $4 billion, an eye-watering figure for a company that hasn’t yet shipped a product.


David Silver’s name is synonymous with some of the most memorable milestones in AI. During more than a decade at Google DeepMind, he helped develop AlphaGo, the first AI to defeat a world champion at the ancient game of Go, and later AlphaStar, which bested professional players in StarCraft II.

His work has shaped how modern AI systems learn and make decisions.

In late 2025, David Silver left DeepMind, where he also contributed to the development of the Gemini family of large language models, to launch Ineffable Intelligence.

Billion-dollar seed rounds are no longer unheard of these days, but they remain exceptionally rare, especially in Europe.

If Sequoia and its partners follow through on their commitments, the round would serve as both a vote of confidence in David Silver’s vision and a broader signal that venture capitalists are increasingly willing to fund ambitious, research-driven AI ventures well before traditional product milestones.

This could be a new chapter for European AI

The sheer size of the potential round also reshapes the narrative around Europe’s role in the global AI race.

Historically, most of the industry’s capital and rich technical talent have gravitated to Silicon Valley and other U.S. hubs. A £737 million-plus round led by a top U.S. investor shows that world-class AI ideas can still attract world-class capital in Europe.

David Silver’s dual role, as an entrepreneur and as a professor at University College London, hints at a blend of academic ambition and commercial pressure that may define the next era of AI innovation.

His thesis suggests that the path to genuinely powerful AI may not be simply scaling up models, but teaching systems to learn and improve through real-world experience, a direction he believes could outpace approaches grounded solely in data accumulated from human interactions.

Whether Ineffable Intelligence ultimately delivers on its lofty vision remains to be seen.

But the enthusiasm around its fundraising reflects a broader trend: AI is entering a phase where conviction in founders and ideas can translate directly into unprecedented capital flows.

For Europe, a successful $1 billion cap table would be a milestone moment, not just financially but symbolically, positioning the continent as a contender for the future of AI research and deployment.


Tech

Google’s new music tool, Lyria 3, is here

Google’s announcement that its Gemini app now writes music for you isn’t just one of those “blowing my mind” product updates. It feels like a symbolic surrender to a long-standing refrain from Big Tech: creative work is now just another checkbox for a machine. 

If you don’t know what I am talking about: yesterday Google launched a new feature, Lyria 3, in the Gemini app that lets us cook up 30-second tracks, complete with lyrics and cover art, from a text prompt or a photo (the art generated, of course, by Nano Banana). Basically: no instruments, no experience, no pesky tactile skill required.

It’s essentially a LEGO set for “songs” that last about as long as a TikTok loop. They say it’s designed for YouTube creators, and I tend to agree, because you can’t make too much with 30 seconds.

Still, the underlying problem is a different one: I am seeing more and more projects and songs made with AI, including AI artists. And that is what I want to highlight in this piece.

“Behind every beautiful thing, there’s some kind of pain,” said Bob Dylan, and I could not agree more.

If we take a look at history (art, music, literature, poetry, and so on), the main fuel for creation was indeed pain.

Now, how should I put this? Probably the only pain that Lyria can feel is more like a faint server-overload alert than heartbreak.

Real songwriters know that soul isn’t born in a 30-second prompt, it’s extracted through years of mistakes, late nights, losses, and tiny revelations.

Call it a toy if you like. Google will, too. 

They even watermark the outputs with a SynthID tag so the 30-second ditties are officially AI-generated, not “inspired.” That’s a nod to copyright concerns, but it also reads like an admission: these aren’t really art, they’re chemical by-products of pattern statistics. 

What’s striking isn’t the novelty. Much of this has been possible in labs and APIs for years, and creators have been experimenting with generative music tools as collaborators. 

What Lyria 3 does, and what makes this moment worth watching, is normalising the idea that anyone can “write” a song with a chatbot and a mood descriptor. That’s not empowerment; it’s a devaluation of craft.

Just because you pay a subscription to Suno (another, more complex AI music generator), that doesn’t make you an artist or a singer. Just because you learn how to write a prompt that makes an LLM generate pages for you, that doesn’t make you a writer.

Imagine a world where every blog has AI-generated copy (not that we aren’t almost there already) and every company can churn out half-baked music for its ads or social posts.

In that economy, a professional songwriter’s unique skill becomes as optional as knowing how to use a metronome.

You could ask Gemini for an “emotional indie ballad about a lost sock,” and voilà, you have something. Whether it has actual coherence or soul is left to the listener to decide. It is fun to use with your friends, for shorts, or to impress your date.

Video: Gemini Lyria Music Generation Feature – Socks, uploaded by Google on YouTube

Still, Lyria 3’s music is capped at 30 seconds, and that’s no accident. It sidesteps deeper legal and ethical quarrels about training data and mimicry of existing works by keeping outputs short and legally fuzzy. That’s a thumbs up from me. 

But even within that limit, it’s now possible for someone with no craft or cultural context to generate riffs, lyrics, and chord progressions that sound, to the casual ear, adequately musical. In an attention economy obsessed with shareability, “adequate” quickly becomes plenty. 

This matters because real songs, the ones that endure, that carry human experience, aren’t just collections of musical atoms. They’re shaped by story, risk, cultural memory, and sometimes contradiction. 

One of my favorite artists, Tom Waits, said, “I don’t have a formal background. I learned from listening to records, from talking to people, from hanging around record stores, and hanging around musicians and saying, ‘Hey, how did you do that? Do that again. Let me see how you did that.’”

That was the research and the prompting of an earlier era, and creation is not only about reducing time, getting things faster, or “having more time for you.”

It’s about the entire process, the contact with other artists, humans, and IDEAS. 

Those are qualities machines can mimic but not originate. When the machines own the first pass at creation, and the commercial ecosystem embraces that output because it is cheap and fast, the incentives shift. Not gradually. Suddenly.

The record industry is already grappling with AI. Streaming services, publishers, and even labels have begun experimenting with algorithmic playlists and automated composition.

What Gemini’s Lyria 3 does is extend that experiment to public perception. A whole generation may come to think that “making music” means typing a description and choosing a style. Songwriting becomes a UX problem, not a craft one.

That raises a serious question: in a world where AI can conjure up a half-decent hook on demand, what will distinguish professional artists?

If the answer is only brand story or marketing muscle, we aren’t celebrating creativity; we are monetising it out of existence.

Tech companies like Google will frame this as liberation. And in a literal sense, anyone who’s ever wanted to hear a short tune about a sock’s existential crisis now can. But liberation without value for the creator is just consumerism by another name.

Lyria 3 might be good for GIF soundtracks and social clips, and TikTok viral reels, but it doesn’t make professional musicians obsolete; it makes their work less necessary to the platforms that reward hyper-consumable content.

That’s a different threat from outright replacement: it’s obsolescence by trivialisation.

If AI is going to be part of musical creation, then let it be as an assistant to the composer, something that improves ideas rather than replacing them. What we’re seeing with Gemini is not collaboration but outsourcing.

And the lesson for artists isn’t to fear the algorithm. It’s to insist on clarity about where AI replaces labour and where it augments human sensibility.

Because once the marketplace equates the two, the humans who do the work will be the ones left asking for royalties in a language no one else wants to speak.

And, as a personal recommendation (not sponsored): there are streaming platforms like Deezer that have built AI detection tools to flag and label AI-generated tracks, excluding them from recommendations and royalties, so that human songwriters aren’t buried under synthetic spam and consumers can tell the difference between AI and human work.

If you care about preserving real artistry in a world of text-to-tune generative models, start paying attention to how platforms handle AI-tagging and choose services that give you transparency about what you’re actually listening to.

Yet, I’m not here to throw shade at Lyria 3; if anything, the idea of letting people turn a photo or a mood into a short track sounds like fun for casual use and creative experimentation. It is what Google says it’s meant for. 

Yet the reality is that as these models proliferate, we risk confusing novelty with art. And here, the big tech companies are not the ones to blame, but us. 


Tech

YouTube’s latest experiment brings its conversational AI tool to TVs

The race to advance conversational AI in the living room is heating up, with YouTube being the latest to expand its tool to smart TVs, gaming consoles, and streaming devices. 

This experimental feature, previously limited to mobile devices and the web, now brings conversational AI directly to the largest screen in the home, allowing users to ask questions about content without leaving the video they’re watching. 

According to YouTube’s support page, eligible users can click the “Ask” button on their TV screen to summon the AI assistant. The feature offers suggested questions based on the video, or users can use their remote’s microphone button to ask anything related to the video. For instance, they might ask about recipe ingredients or the background of a song’s lyrics, and receive instant answers without pausing or leaving the app. 

Currently, this feature is available to a select group of users over 18 and supports English, Hindi, Spanish, Portuguese, and Korean.

YouTube first launched this conversational AI tool in 2024 to help viewers explore content in greater depth. The expansion to TVs comes as more Americans now access YouTube through their television than ever before. A Nielsen report from April 2025 found that YouTube accounted for 12.4% of total television audience time, surpassing major platforms like Disney and Netflix.

Other companies are also making significant strides with their conversational AI technologies. Amazon rolled out Alexa+ on Fire TV devices, enabling users to engage in natural conversations and ask Alexa+ for tailored content recommendations, hunt for specific scenes in movies, or even ask questions about actors and filming locations.

Meanwhile, Roku has enhanced its AI voice assistant to handle open-ended questions about movies and shows, such as “What’s this movie about?” or “How scary is it?” Netflix is also testing its AI search experience. 

Another way YouTube has tried to improve its TV experience with AI is the recent launch of a feature that automatically enhances videos uploaded at lower resolutions to full HD.

Additionally, the company continues to launch other AI features, like a comments summarizer that helps viewers catch up on video discussions and an AI-driven search results carousel. In January, the company announced that creators will soon be able to make Shorts using AI-generated versions of their own likeness. 

Last week, YouTube launched a dedicated app for the Apple Vision Pro, too, letting users watch their favorite content on a theater-sized virtual screen in an immersive environment.


The 10 Best Shows to Stream Right Now (February 2026)

No matter how well your favorite streaming service’s algorithm knows you, come February, sometimes even the smartest technology can be swayed by the power of Valentine’s Day. Hence all those romance-heavy promos at the top of your screen, from Ryan Murphy’s Love Story to Netflix’s ever-proliferating Love Is Blind.

But love—romantic or otherwise—can be found in the oddest of places, including the radioactive wasteland of postapocalyptic Los Angeles, Westeros in the rare midst of relative peace, or behind the scenes of the latest MCU blockbuster.

Whether you’re in the mood for a reliable sci-fi gem or an enlightening new docuseries courtesy of Marty Supreme director Josh Safdie, February’s streaming lineup offers plenty of options to swoon over. Here are the 10 shows we’re falling for right now.

Star Trek: Starfleet Academy

Picking up from roughly where Bryan Fuller and Alex Kurtzman’s Star Trek: Discovery left off when it ended in 2024, Starfleet Academy might be best described as Star Trek for the TikTok age … or simply the franchise’s horniest iteration. Set in the 32nd century, it follows the first group of cadets to train at the Academy in more than a century, giving it license to play out more like a college soap opera à la Hulu’s Tell Me Lies. Fortunately, Hollywood heavyweights Holly Hunter and Paul Giamatti are there to refocus these wannabe Starfleet officers on boldly going where no one has gone before.

While franchise diehards may find it all a bit too lightweight for their tastes, it’s impossible not to appreciate the show’s obvious nostalgia for the iconic series that started it all and Gene Roddenberry’s fierce commitment to diversity, which has unfortunately become an ugly word to the country’s powers that be—and an aspect of the original show that appears to have gone over self-described Trekkie Stephen Miller’s head completely.

Wonder Man

What do you do if you’re Disney and realize the Marvel movies that once regularly crossed the $1 billion box office threshold while barely lifting a Nano Gauntlet are now facing a seriously dwindling ROI problem? You poke fun at the problem with a playfully self-aware buddy comedy that smartly doesn’t require any real knowledge of the MCU on the audience’s part.


This is the first time we’ve seen a fridge have a nugget ice machine built right in

Whirlpool has unveiled what it claims is an industry first: a refrigerator with a built-in nugget ice maker.

Announced ahead of KBIS 2026 in Florida, the new 36-inch Wide True Counter Depth French Door model integrates soft, chewable nugget ice directly into the door. This means no countertop machine is required.

For years, nugget ice has been the preserve of standalone makers cluttering up kitchen counters. However, Whirlpool’s approach folds it into a full-size fridge, complete with dual ice makers that dispense both traditional cubes and nugget ice from the same appliance.

It’s a small upgrade on paper, but for households that love the softer, chewable style popularised by cafés and fast-food chains, it removes the need for an extra gadget.

The refrigerator comes in both three-door and four-door French Door configurations and is designed to sit flush thanks to its true counter-depth build. Whirlpool says the goal is to combine convenience with a cleaner kitchen layout, effectively replacing a niche appliance with something already central to the home.

The nugget ice feature headlines Whirlpool’s wider showcase at KBIS 2026, where the company is leaning into “industry-first” innovations across multiple categories. Alongside the fridge, Whirlpool also introduced a Front Load Laundry Tower with UV Clean technology that uses ultraviolet light during the wash cycle to help reduce bacteria without relying on higher temperatures that can fade fabrics.

In the kitchen, a new 24-inch stainless steel dishwasher is also debuting, featuring what Whirlpool describes as the first 360-degree spinning lower rack, designed to make loading and unloading easier. It pairs this with an AI-powered, sensor-controlled wash cycle that automatically adjusts water temperature based on detected soil levels.

But it’s the nugget ice fridge that stands out. While high-end refrigeration has focused heavily on smart screens and app integration in recent years, Whirlpool is betting that practical upgrades — like better ice — might be what families actually notice day to day.

Pricing and availability haven’t yet been detailed. However, if you’ve been eyeing a countertop nugget ice maker, you may soon be able to reclaim that space entirely.


DJI Osmo Pocket 4 Camera with LED Light Allegedly Leaked in New Hands-On Video

DJI Osmo Pocket 4 Camera Leak
Photo credit: The New Camera
A new hands-on video has emerged, purportedly showing the DJI Osmo Pocket 4 in action, and it comes directly from a Malaysian store. The clip, provided by a local DJI outlet named DronesKaki in the Kuala Lumpur area, shows a customer messing with what appears to be a production unit.



The device retains the Pocket series’ signature compact dimensions. Its three-axis gimbal does an excellent job of keeping footage smooth, with no visible shake even when moving about the store. The flip-out screen on the bottom measures a few inches across and is noticeably brighter than before. There is also a joystick and a few buttons on the front, plus a couple more concealed beneath the screen whose functions are unclear.


One visible new feature is the camera’s tiny built-in LED light, which appears to be mounted on an adjustable arm. That will come in handy when shooting in low light, without the need to carry extra equipment. The light can also be rotated to face a different direction, giving you more angle options.

The Pocket 4’s rumored specs sound promising, with a beefed-up 1-inch CMOS sensor that should help with low-light work and the ability to record 4K video at 120fps. According to speculation, it may also offer 4K slow motion at 240fps or 6K at 30fps. Autofocus is said to be faster now, and AI-assisted tracking helps keep your subject in frame.

The battery reportedly lasts up to 200 minutes, which is significantly longer than the Pocket 3’s 166 minutes. The device still weighs 179 grams and now includes Wi-Fi 6, which should make file transfers much faster.

According to the video, there is a SuperPhoto mode that uses artificial intelligence to analyze your environment and apply smart processing. That should yield some decent stills, at up to 33MP. It’s a feature you may have seen on other action cameras, but it’s a welcome addition to the DJI ecosystem.

Alongside the leak, prior FCC records suggest a March 2026 arrival. Some anticipate a late February or early March release, and while there are rumors of a Pro version with additional features such as dual cameras, the video we viewed focuses entirely on the standard model.
[Source]


Fixing A Destroyed Xbox 360 Development Kit

As common as the Xbox 360 was, the development kits (XDKs) for these consoles are significantly less so. This makes it all the more tragic when someone performs a botched surgery on one of these rare machines, leaving it in dire straits. Fortunately [Josh Davidson] was able to repair the XDK in question for a customer, although it entailed replacing the GPU and CPU and repairing many traces.

The Xbox 360 Development Kit is effectively a special version of the consumer console — with extra RAM and features that make debugging software on the unit much easier, such as direct access to RAM contents. XDKs come in a variety of hardware configurations that evolved along with the console during its lifecycle, and this particular unit had been upgraded to a Super Devkit with fewer hardware restrictions.

Replacing the dead GPU was a new old stock Kronos 1 chip. Fortunately, the pads underneath the old GPU were intact, making the swap straightforward. Various ripped-off pads and traces were then discovered underneath the PCB, all of which had to be painstakingly repaired. The CPU had also suffered apparent heat damage and was replaced with a better one, putting this XDK back into service.
