
Tech

How Long Can A Quadcopter Drone Fly On Just Solar?


The final second prototype flying. (Credit: Luke Maximo Bell, YouTube)

The dream of fully powering everything from aircraft to cars with just the energy generated by solar panels attached to the machine remains a tempting one, but it always seems to require some serious engineering, including putting the machine on a crash diet. The quadcopter that [Luke Maximo Bell] tried to fly on just solar power is a good case in point: the first attempt crashed after three minutes and wrecked its solar panels. Now he's back with a second attempt that ought to stay airborne for as long as the sun is shining.

Among the flaws of the first prototype was poor support for the very thin and fragile PV panels, which needed much better bracing on the drone's carbon fiber frame. To carry the very large solar array, the first drone's arms were made very long, but this interfered with maneuvering, so on the second version they were trimmed down and the array was raised above the frame. Shortening the tubes saved 70 grams, which could then be spent on the new panel supports.

After an initial test flight ended in a crash when the PV output dropped, the need for a small battery buffer was clear, so one was added, along with a reduction of the array to 4×7 panels to match the battery's 20 V. The array also had to be reinforced, as the thin panels were very wobbly and made it impossible to fly in any significant wind.
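The voltage-matching and thin-margin situation described above can be sketched as a back-of-envelope power budget. All numbers below (volts per panel, hover draw, PV output) are illustrative assumptions, not figures from the build:

```python
# Back-of-envelope power budget for a solar-only quadcopter.
# Assumed numbers only -- not measurements from the actual drone.

def array_voltage(panels_in_series: int, volts_per_panel: float = 5.0) -> float:
    """Operating voltage of a series string of PV panels."""
    return panels_in_series * volts_per_panel

def hover_margin(pv_watts: float, hover_watts: float) -> float:
    """Surplus (positive) or deficit (negative) while hovering, in watts."""
    return pv_watts - hover_watts

# Assuming ~5 V per panel, a 4-panel series string roughly matches a 20 V
# battery, which is consistent with trimming the array to 4x7 panels.
print(array_voltage(4))            # 20.0

# With, say, 250 W of PV output against a 230 W hover draw, the margin is
# razor thin -- a passing cloud flips it negative and drains the buffer.
print(hover_margin(250.0, 230.0))  # 20.0
```

The point of the sketch is that a solar-only multirotor operates with almost no headroom, which is why even mild cloud cover forced the battery to step in.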

The power circuit as implemented on the second prototype. (Credit: Luke Maximo Bell, YouTube)

During the subsequent five-hour test flight it was clear that the resulting PV-powered drone was at the limit of its performance, with even mild cloud cover forcing the battery to provide backup power.

For the test, a tree-sheltered site far away from windy Cape Town was selected to give the best possible shot, as keeping position with this drone was very hard. With its low weight and the big surface area of the solar array catching every little bit of wind, GPS-based position holding was essential. Unfortunately, a few hours into the test this feature failed.

Manual position keeping is definitely possible, but [Luke] had to constantly counteract the drone's tendency to drift off. Ultimately the test flight ended, while it was still very much a sunny South African summer's day, because the current provided by the array could no longer keep up with the power demands of the motors.


What this perhaps demonstrates best is that if you want to use PV solar power for your flying drone – especially with a significant payload – it’s probably best to use it for recharging while idle, or to extend the battery life by an appreciable amount. That said, props to [Luke] for persevering and making it work in the end.




There Aren’t a Lot of Reasons to Get Excited About a New Amazon Smartphone


“This is not a consumer device company that takes privacy very seriously,” Gamero-Garrido says. Since people use smartphones far more than Alexa or a Kindle, he says an Amazon smartphone today would “significantly increase the scale of the potential privacy harms.”

Gamero-Garrido thinks Amazon could use Transformer as a data-gathering tool to glean how people use its devices, build its advertising network, and compete with the likes of Alphabet and Meta, which are facing regulatory scrutiny in the European Union and California.

One way it could do this is through the Fire TV approach. This is Amazon’s TV streaming platform integrated into a third-party TV (or via a dongle); while you may not have bought a Fire TV-powered TV from Amazon, the data collected by the operating system is still owned by the company.

“Whether they end up succeeding with this phone supplement device, or whether they eventually use a similar model where they install their operating system on other phones or ‘light’ phones that are built by third parties, it has the same effect,” he says. “Ultimately, what Amazon is doing is centralizing all the network traffic through its own infrastructure so it can improve its advertising business.”


If Amazon can detect when a person is sick from the sound of their voice, then it can recommend that you buy specific cold medicine from Amazon Health (that's a real patent Amazon owns). If this now runs on a device you carry everywhere, Gamero-Garrido says it can listen to more of your conversations and serve you better ads.

Even with its past missteps, customers have shown a general acceptance of Amazon's hardware, says Kassem Fawaz, an associate professor at the University of Wisconsin–Madison, who researches security and privacy in consumer devices.

“I think when it comes to products, unfortunately, consumers value utility and price over privacy,” Fawaz wrote in an email to WIRED.

The accelerant here could be Amazon’s Devices & Services lead, Panos Panay, who joined the company in 2023. Panay famously helped turn Microsoft’s Surface line of computers into an aspirational hardware brand through his “pumped” and emotionally charged keynotes.


Panay has already brought that kind of energy to a few Amazon hardware announcements, like the Kindle Scribe Colorsoft, though he has not matched the success of Surface. If Amazon is truly making a smartphone, it will need to generate a lot of passion to entice customers.

“If someone can do it, it’s going to be Panos,” Jeronimo says. “For that, I have total confidence. He is the right person for these kinds of initiatives.”



Trump Outlines New AI Regulation Plan: What’s in It and What’s Missing


The White House’s new policy framework for regulating generative artificial intelligence, released Friday, covers many areas, but one thing is clear: President Donald Trump wants the federal government to set the rules. And those rules appear to fall far short of what consumer and privacy advocates argue is necessary. 

The generative AI revolution has been underway for years, and US legislation has been slow to catch up. This is despite growing awareness of AI's harms and challenges: chatbots' dangerous impacts on mental health and child development, widespread legal wrangling over copyright protections, and the spread of deepfakes and AI-powered scams, to name a few.

Sen. Marsha Blackburn introduced the new policy package, called The Trump America AI Act, in Congress on Thursday. The Tennessee Republican’s bill is an attempt to codify a vision based on Trump’s 2025 AI Action Plan, while delving into more legal specifics and providing guidance on implementing new laws (or changing existing ones). 


Trump has maintained that the federal government should be responsible for regulating the AI industry — and that requiring AI companies to comply with 50 different sets of state laws would prevent the US from “winning” the global AI race. However, a proposal to temporarily ban states from regulating AI failed back in July, when it was removed at the last minute from the massive budget bill, known as the “One Big Beautiful Bill Act.” 

Now, the White House is doubling down on its claim to be in charge, with a few exceptions. The plan addresses some of the biggest concerns people have about AI: job loss, copyright chaos for creators, rapidly expanding infrastructure such as data centers, and the protection of vulnerable groups like children. But critics say it doesn't go far enough to regulate the fast-growing AI industry.

“It is light on protection and heavy on promotion of dangerous AI systems,” Alan Butler, president and executive director of the Electronic Privacy Information Center, said in a statement. “The American people deserve better, and Congress should do better than this.”

The White House’s new proposed AI laws

The White House’s 2026 AI proposal says Congress should not create a new governing body to oversee AI rules, but should let existing agencies and subject-matter experts regulate as they see fit. 


Protecting children: This is one area where the federal government won’t prevent states from creating laws. And many state governments are already leading the charge, especially in regulating romantic or companion chatbots. 

The plan highlights protecting kids from AI-powered deepfakes, a huge issue given AI's role in creating child sexual abuse material. Shielding young people from the ill effects of AI is an ongoing battle, with several high-profile cases of teenagers using AI for self-harm and suicide.

Blackburn’s policy plan includes general language related to kids’ online safety. Existing bills like the Kids Online Safety Act and the Children’s Online Privacy Protection Rule are, theoretically, designed to protect kids, but advocates and tech experts say they could create a chilling effect on free speech and lead to censorship.

Though Trump’s AI framework addresses censorship, it’s limited to preventing AI companies from including ideological or partisan bias in their products. Trump has previously railed against what he calls “woke” AI, a term the president and his allies have used to attack concepts like diversity, equity and inclusion.


Job loss: It’s not just translators and data entry workers who are worried about losing their jobs to AI — legacy tech workers like coders and engineers are, too. There have been plenty of concerns about AI disrupting the workforce, with retail giants like Amazon laying off thousands of employees in the name of AI efficiency. The White House says the government should use “nonregulatory” methods to focus on youth development and AI workforce training.

Infrastructure: In line with Trump’s previous AI Action Plan, the framework calls for states and local governments to streamline data center construction and operation. These facilities are increasingly controversial, with nearby residents reporting environmental damage and strain on their existing electrical grids, creating higher electric bills. 

Several big tech companies recently agreed to foot the bill for any higher electricity costs, but there’s no way to enforce the voluntary pledge.

Copyright: Whether the use of copyrighted materials in AI training is fair use or copyright infringement is one of the biggest legal issues of the AI age. The plan reiterates the administration’s position that AI companies are covered by fair use — meaning they wouldn’t have to obtain permission or pay for copyrighted content when creating their models. 


But, given the ever-growing number of lawsuits asking the judiciary the same question, the federal government should allow those cases to play out. So far, limited cases with Anthropic and Meta have carved out narrow victories for tech companies, not authors.

The framework document hints that the federal government could become a future licensing partner for AI companies, stating that it should “provide resources to make federal datasets accessible to industry and academia in AI-ready formats for use in training AI models and systems.”

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) 

Does the White House plan do enough?

Tech industry groups praised the administration’s proposals, while consumer advocacy groups offered skepticism at best. 


In a statement backing the plan, the Consumer Technology Association supported a single set of rules for the entire country. 

“AI can and will make us better, and we agree that children need special protection, First Amendment rights are paramount, harmful deep fakes should be regulated, and Congress should not act to restrict AI platforms from relying on fair use protection,” the tech industry trade group said.

But according to Samir Jain, vice president of policy at the Center for Democracy and Technology, the government’s playbook is rife with internal contradictions. While it calls for the federal government to preempt state rules and laws on AI development, it also says the federal government shouldn’t undermine state authority. 

“The White House’s high-level AI framework contains some sound statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids’ online safety,” Jain said in a statement.


Ben Winters, director of AI and data privacy at the Consumer Federation of America, said the proposal prioritizes Big Tech over consumers.

“It’s encouraging to see some stated desires to protect people from AI-generated scams and data abuse of minors, but it’s not enough,” Winters said in a statement. “We need to see money where their mouth is on the protections — more money for consumer protection agencies at both the federal and state levels. So far, they’ve done nothing but cut and hamstring them.”



Stranger Things Complete Series 4K UHD Blu-ray Box Sets Arrive in July With Four Collector Editions


This is BIG, and we’re not just talking about the box it all arrives in.

Boutique label Arrow Video has announced a Complete Series set collecting all five seasons/42 episodes of streaming giant Netflix’s nostalgic horror show, Stranger Things. It will be available in four versions (Special or Deluxe, 4K or HD Blu-ray), all arriving on July 28, just 13 days after the tenth anniversary of the premiere.

A phenomenal worldwide hit, Duffer brothers Matt and Ross’ Stranger Things broke viewership records across all seasons, beloved for its quirky characters, heart, humor, dark frights, and perhaps most of all its painstaking recreation of ‘80s middle America. The Bros. are indisputably old-school, stating, “We always dreamed that Stranger Things could be owned in its entirety, not just as a collector’s set, but as a way to preserve the show for decades to come.”


This is huge news for a couple of reasons. Only Seasons One and Two ever had a physical media release, in 2017 and 2018 respectively, as Target exclusives. Both fetched hundreds of dollars on eBay, even though the first season notoriously disappointed many with its lack of high dynamic range and only Dolby Digital 5.1 audio. Season Two did a bit better, with DTS-HD Master Audio and HDR10, but that was the last disc offering… until now.

Moreover, this drop suggests a continued willingness by the streamer to offer some of its most coveted properties on disc. It comes on the heels of the revelation that Criterion Collection editions of KPop Demon Hunters and Guillermo del Toro’s Frankenstein are on the way, recent Oscar winners both. Theoretically, this strategy would take a nibble out of The Big Red N’s core business, but perhaps this is a major-league endorsement that digital delivery and physical media can happily co-exist?


The 25 discs themselves appear to be identical between the Special and Deluxe editions. The production values of the show were always cinema-quality, and the 4K will be presented in Dolby Vision at its proper 2:1 aspect ratio with the original DTS-HD Master Audio 5.1 surround and two-channel stereo for all episodes plus an upgrade to Dolby Atmos for the final two seasons. The on-disc bonus content includes:

  • Interviews with the cast and crew 
  • Behind-the-scenes featurettes 
  • Set tours 
  • Bloopers

The Special Edition arrives with a booklet, whereas the Deluxe Edition exclusively packs quite a bit more, including several in-universe souvenirs:

  • Palace Arcade alloy-zinc token-style coin
  • Self-adhesive Hellfire Club patch
  • Hellfire Club 20-sided die
  • Enhanced packaging including wraparound box artwork by Juan Ramos
  • Twenty-five artcards representing all five seasons
  • Five double-sided posters featuring original artwork by Kyle Lambert
  • Reversible sleeve inserts featuring new artwork by Juan Ramos and original artwork by Kyle Lambert
  • Double-sided foldout map of fictional Hawkins, Indiana
  • A 148-page perfect-bound artbook with design sketches, concept art, storyboards and new writing on the making of the series from the Duffer Brothers, Shawn Levy, Andrew Stanton, Kyle Dixon and more

Price & Availability

All editions are available for pre-order now at Amazon and Arrow Video, but won’t start shipping until July 28, 2026.




Tech Moves: Carbon Robotics’ new CFO; Microsoft gaming GM goes to Netflix; Nordstrom gets VP of AI


Kevan Krysler, CFO of Carbon Robotics. (Carbon Robotics Photo)

Agtech company Carbon Robotics appointed Kevan Krysler as chief financial officer. The Seattle startup, known for zapping weeds with lasers, reports it has surpassed $100 million in annual revenue. Carbon has also been name-checked twice by U.S. Health and Human Services Secretary Robert F. Kennedy Jr. for its pesticide-free approach to weed control.

Krysler joins Carbon from Silicon Valley-based Pure Storage, where he also served as CFO.

Carbon CEO Paul Mikesell said in a statement that Krysler “really gels with our culture and brings public company financial and executive experience to round out our team. This is indicative of Carbon Robotics pushing forward and evolving our leadership to match our rapidly increasing maturity in the market.”

Founded in 2018, Carbon has raised $177 million to date and employs about 260 people. The company operates a manufacturing facility in Richland, Wash., and ranks No. 10 on the GeekWire 200, our list of the top privately held startups in the Pacific Northwest.

T-Mobile has promoted Allan Samson to chief marketing officer after nearly a decade with the telecom giant. In recent years he has led its broadband business, scaling its 5G Home Internet nationally, and worked to advance its fiber strategy and joint ventures.

“As CMO, Allan will bring the full power of our marketing organization into one connected performance marketing engine, aligning media, pricing, portfolio, product marketing, innovation and digital experience,” said Mike Katz, T-Mobile’s chief business and product officer, on LinkedIn.

Keith Dolliver. (LinkedIn Photo)

— Attorney Keith Dolliver has retired after more than three decades at Microsoft, where he worked on initiatives involving LinkedIn, GitHub, Activision, Mojang (Minecraft) and others. He departs as vice president, deputy general counsel and corporate secretary.

Dolliver thanked executive leadership, partners and legal colleagues, and the corporate legal group, which he had led.


“I will miss all of you and will be cheering you on as you continue to take this consequential company forward,” he said on LinkedIn. He also credited his family, “who made home a place of positive energy, treated me with patience and grace, and were always in my corner.”

Haiyan Zhang. (GeekWire Photo)

Haiyan Zhang is leaving Microsoft for Netflix, where she’ll take on a role in gaming. Zhang spent more than 13 years at Microsoft, holding positions across Microsoft Gaming, Microsoft Research and Xbox Studios, most recently as general manager and partner for Gaming AI.

“Reflecting back, I still remember stepping through the doors at 30 Great Pulteney Street on March 27, 2013, into a newly formed Xbox game studio in London,” Zhang said on LinkedIn. “I felt at once excitement, trepidation, and optimism. As I step into this next chapter, I find many of those same emotions returning as I look ahead.”

Zhang is also founder and CEO of Thriven Foundation Labs, a nonprofit promoting AI for social good. Her wide-ranging career includes roles at BBC and IDEO in the United Kingdom.

Graham Sheldon. (LinkedIn Photo)

Graham Sheldon has resigned as chief product officer of AI automation giant UiPath after more than three years with the company. UiPath, which is based in New York City, has an office in Bellevue, Wash.

Sheldon was previously with Microsoft for more than 20 years, leaving the role of corporate VP of product for Microsoft Teams. Early in his tenure, Sheldon served as technical advisor to Satya Nadella when the now-CEO was a senior vice president. Sheldon also led engineers working on Bing, ads, MSN, Cortana and other initiatives.


Sheldon didn’t disclose his next move on LinkedIn, but said he’d be tackling bucket list items including getting his commercial pilot license, running a marathon, cheering on his daughter’s select soccer team and building with OpenClaw.

— Seattle’s Redfin has promoted Ariel Dos Santos to chief product and design officer. Dos Santos has been with the real estate platform for nearly four years. His career has also included roles at Amazon, where he helped lead the launch of Just Walk Out Technology, and at Microsoft, where he oversaw social marketing.

Vinit Tople. (LinkedIn Photo)

Vinit Tople is now vice president of AI and developer platforms at Seattle’s Nordstrom. He previously spent more than 12 years at Amazon, most recently as head of product for Alexa, and later worked at JPMorgan Chase, helping lead adoption of AI agents.

“Nordstrom, often called a ‘century-old startup,’ has reinvented itself time and again over 125 years — evolving ahead of each new era of retail — and now it’s making a bold move to put AI at the center of its next chapter,” Tople said on LinkedIn.

Sanjay Parmar. (LinkedIn Photo)

Chronus named Sanjay Parmar as chief AI officer for the Seattle-based mentoring software platform. He joins from Degreed, where he was CTO of the San Francisco Bay Area company.

Chronus CEO Ankur Ahlowalia, who took the helm in January, praised Parmar’s background in enterprise SaaS and AI-powered workforce solutions, saying in a statement it would help the company “make life-changing mentorship accessible to everyone.”


— Law firm Dorsey & Whitney appointed Cyrus Ansari as a technology commerce partner at its Seattle office. He was previously with two other Seattle firms: Perkins Coie and Davis Wright Tremaine.

“For several years now, my work has centered on commercial deals for cloud, AI, gaming and other technology businesses,” Ansari said on LinkedIn. “That focus continues at Dorsey.”

— Seattle’s Richard Moulds — a self-described car restorer, advisor, mentor and investor — joined the supervisory board of QuiX Quantum, a Netherlands-based developer of photonic quantum computing systems. Moulds left his role as general manager with AWS last year and now serves as a strategic advisor for quantum startups QEDMA and Nu Quantum.



Three ways AI is learning to understand the physical world


Large language models are running into limits in domains that require an understanding of the physical world — from robotics to autonomous driving to manufacturing. That constraint is pushing investors toward world models, with AMI Labs raising a $1.03 billion seed round shortly after World Labs secured $1 billion.

Large language models (LLMs) excel at processing abstract knowledge through next-token prediction, but they fundamentally lack grounding in physical causality. They cannot reliably predict the physical consequences of real-world actions. 

AI researchers and thought leaders are increasingly vocal about these limitations as the industry tries to push AI out of web browsers and into physical spaces. In an interview with podcaster Dwarkesh Patel, Turing Award recipient Richard Sutton warned that LLMs just mimic what people say instead of modeling the world, which limits their capacity to learn from experience and adjust themselves to changes in the world.

This is why models based on LLMs, including vision-language models (VLMs), can show brittle behavior and break with very small changes to their inputs. 


Google DeepMind CEO Demis Hassabis echoed this sentiment in another interview, pointing out that today’s AI models suffer from “jagged intelligence.” They can solve complex math olympiads but fail at basic physics because they are missing critical capabilities regarding real-world dynamics. 

To solve this problem, researchers are shifting focus to building world models that act as internal simulators, allowing AI systems to safely test hypotheses before taking physical action. However, “world models” is an umbrella term that encompasses several distinct architectural approaches. 

Three of them stand out, each with different tradeoffs.

JEPA: built for real-time

The first main approach focuses on learning latent representations instead of trying to predict the dynamics of the world at the pixel level. Endorsed by AMI Labs, this method is heavily based on the Joint Embedding Predictive Architecture (JEPA). 


JEPA models try to mimic how humans understand the world. When we observe the world, we do not memorize every single pixel or irrelevant detail in a scene. For example, if you watch a car driving down a street, you track its trajectory and speed; you do not calculate the exact reflection of light on every single leaf of the trees in the background. 


V-JEPA architecture (source: Meta FAIR)

JEPA models reproduce this human cognitive shortcut. Instead of forcing the neural network to predict exactly what the next frame of a video will look like, the model learns a smaller set of abstract, or “latent,” features. It discards the irrelevant details and focuses entirely on the core rules of how elements in the scene interact. This makes the model robust against background noise and small changes that break other models.
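The key idea, prediction in latent space rather than pixel space, can be sketched in a few lines. This toy uses a single linear encoder and an identity predictor purely to show where the loss lives; real JEPA models use deep networks and an EMA target encoder, and nothing here reflects Meta's actual implementation:

```python
# Toy sketch of JEPA-style latent prediction (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_LATENT = 64, 8  # raw observation size vs. small abstract latent size
W_enc = rng.normal(size=(D_IN, D_LATENT)) * 0.1  # shared linear encoder
W_pred = np.eye(D_LATENT)                        # predictor in latent space

def encode(x):
    """Map a raw observation to a compact latent, discarding detail."""
    return x @ W_enc

def jepa_loss(context, target):
    """Predict the *latent* of the target from the latent of the context.
    There is no pixel reconstruction: the loss lives entirely in latent space."""
    pred = encode(context) @ W_pred
    return float(np.mean((pred - encode(target)) ** 2))

frame_t = rng.normal(size=D_IN)
frame_t1 = frame_t + 0.01 * rng.normal(size=D_IN)  # next frame, tiny change

# Nearby frames land close together in latent space, so the loss stays small
# even though every one of the 64 raw values differs slightly.
print(jepa_loss(frame_t, frame_t1))
```

Because the objective never asks for exact pixels, small background perturbations that would dominate a reconstruction loss barely move the latent target, which is the robustness property described above.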

This architecture is highly compute- and memory-efficient. By ignoring irrelevant details, it requires far fewer training examples and runs with significantly lower latency. These characteristics make it suitable for applications where efficiency and real-time inference are non-negotiable, such as robotics, self-driving cars, and high-stakes enterprise workflows.


For example, AMI is partnering with healthcare company Nabla to use this architecture to simulate operational complexity and reduce cognitive load in fast-paced healthcare settings. 

Yann LeCun, a pioneer of the JEPA architecture and co-founder of AMI, explained in an interview with Newsweek that world models based on JEPA are designed to be “controllable in the sense that you can give them goals, and by construction, the only thing they can do is accomplish those goals.”

Gaussian splats: built for space

A second approach leans on generative models to build complete spatial environments from scratch. Adopted by companies like World Labs, this method takes an initial prompt (it could be an image or a textual description) and uses a generative model to create a 3D Gaussian splat. A Gaussian splat is a technique for representing 3D scenes using millions of tiny, mathematical particles that define geometry and lighting. Unlike flat video generation, these 3D representations can be imported directly into standard physics and 3D engines, such as Unreal Engine, where users and other AI agents can freely navigate and interact with them from any angle.
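As a rough illustration of what one such particle carries, here is a toy splat record and density evaluation. It is axis-aligned for simplicity (real renderers use the rotation to build a full anisotropic covariance), and it mirrors no vendor's actual file format:

```python
# Toy model of a 3D Gaussian "splat" -- an assumption-level illustration.
from dataclasses import dataclass
import numpy as np

@dataclass
class Splat:
    position: np.ndarray  # (3,) center of the Gaussian in world space
    scale: np.ndarray     # (3,) per-axis extent of the blob
    rotation: np.ndarray  # (4,) orientation quaternion
    color: np.ndarray     # (3,) RGB
    opacity: float        # alpha used when blending splats front-to-back

def density_at(splat: Splat, point: np.ndarray) -> float:
    """Evaluate the (axis-aligned, for simplicity) Gaussian at a point."""
    d = (point - splat.position) / splat.scale
    return splat.opacity * float(np.exp(-0.5 * np.dot(d, d)))

s = Splat(position=np.zeros(3), scale=np.ones(3),
          rotation=np.array([1.0, 0.0, 0.0, 0.0]),
          color=np.array([0.8, 0.2, 0.2]), opacity=1.0)

# Density peaks at the center and falls off smoothly with distance;
# millions of these blobs together define geometry and lighting.
print(density_at(s, np.zeros(3)))  # 1.0 at the center
```

Because each splat is just a small bundle of numbers with a closed-form density, a scene of millions of them can be exported and rendered by standard 3D engines, which is what makes the format portable.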

The primary benefit here is a drastic reduction in the time and one-time generation cost required to create complex interactive 3D environments. It addresses the exact problem outlined by World Labs founder Fei-Fei Li, who noted that LLMs are ultimately like “wordsmiths in the dark,” possessing flowery language but lacking spatial intelligence and physical experience. World Labs’ Marble model gives AI that missing spatial awareness. 


While this approach is not designed for split-second, real-time execution, it has massive potential for spatial computing, interactive entertainment, industrial design, and building static training environments for robotics. The enterprise value is evident in Autodesk’s heavy backing of World Labs to integrate these models into their industrial design applications.

End-to-end generation: built for scale

The third approach uses an end-to-end generative model to process prompts and user actions, continuously generating the scene, physical dynamics, and reactions on the fly. Rather than exporting a static 3D file to an external physics engine, the model itself acts as the engine. It ingests an initial prompt alongside a continuous stream of user actions, and it generates the subsequent frames of the environment in real-time, calculating physics, lighting, and object reactions natively. 
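The "model as engine" loop can be sketched with a stand-in stub. The `WorldModel` below fakes physics by scrolling the scene; a real system such as Genie 3 or Cosmos would be a large video generator, but the control flow is the same:

```python
# Sketch of the end-to-end loop: the model consumes the current frame plus a
# user action and emits the next frame directly. `WorldModel` is a stub.
import numpy as np

class WorldModel:
    """Stand-in for a large generative video model."""
    def next_frame(self, frame: np.ndarray, action: str) -> np.ndarray:
        shift = {"left": -1, "right": 1, "stay": 0}[action]
        return np.roll(frame, shift, axis=1)  # fake "physics": scroll the scene

model = WorldModel()
frame = np.arange(12, dtype=float).reshape(3, 4)  # initial prompt/scene

# The interactive loop: no external physics engine and no exported 3D asset --
# every new frame is generated from the previous frame and the user's input.
for action in ["right", "right", "left"]:
    frame = model.next_frame(frame, action)
```

The cost of this design is that the model must re-render physics and pixels on every step, which is exactly the compute burden discussed below.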

DeepMind’s Genie 3 and Nvidia’s Cosmos fall into this category. These models provide a simple interface for generating infinite interactive experiences and massive volumes of synthetic data. DeepMind demonstrated this with Genie 3, showcasing how the model maintains strict object permanence and consistent physics at 24 frames per second without relying on a separate memory module.


This approach translates directly into heavy-duty synthetic data factories. Nvidia Cosmos uses this architecture to scale synthetic data and physical AI reasoning, allowing autonomous vehicle and robotics developers to synthesize rare, dangerous edge-case conditions without the cost or risk of physical testing. Waymo (a fellow Alphabet subsidiary) built its world model on top of Genie 3, adapting it for training its self-driving cars.

The downside to this end-to-end generative method is the great compute cost required to continuously render physics and pixels simultaneously. Still, the investment is necessary to achieve the vision laid out by Hassabis, who argues that a deep, internal understanding of physical causality is required because current AI is missing critical capabilities to operate safely in the real world.

What comes next: hybrid architectures

LLMs will continue to serve as the reasoning and communication interface, but world models are positioning themselves as foundational infrastructure for physical and spatial data pipelines. As the underlying models mature, we are seeing the emergence of hybrid architectures that draw on the strengths of each approach. 

For example, cybersecurity startup DeepTempo recently developed LogLM, a model that integrates elements from LLMs and JEPA to detect anomalies and cyber threats from security and network logs. 




Microsoft rolls back some of its Copilot AI bloat on Windows


Microsoft announced on Friday a series of changes focused on improving the quality of its Windows 11 operating system, which notably includes dialing back the number of entry points to its AI assistant, Copilot.

The company said it will reduce Copilot AI integrations in some apps, starting with Photos, Widgets, Notepad, and its Snipping Tool.

Under the heading of “integrating AI where it’s most meaningful,” Pavan Davuluri, EVP of Windows and Devices, wrote on the company’s blog that Microsoft is becoming more intentional about “how and where Copilot integrates across Windows.” Its goal, he explained, is to focus on AI experiences that are “genuinely useful.”

This “less-is-more” approach to integrating AI into existing platforms may reflect the growing consumer pushback against AI bloat. While many people today understand AI to be a useful tool, there are also concerns around trust and safety. For instance, a Pew Research study published this month noted that half of U.S. adults are now more concerned than excited about AI as of June 2025, up from 37% in 2021.

This is not the first time Microsoft has rethought its Copilot integrations. Earlier this month, the news site Windows Central said the company’s plan to ship Copilot-branded AI features across Windows 11 had been quietly shelved. This, the site said, included some system-level integrations within the Settings app, File Explorer, and elsewhere.

Before this, Microsoft had delayed the launch of its AI-powered memory feature, Windows Recall for Copilot + PCs, for over a year as it tried to address users’ privacy concerns. The Recall feature launched last April, but security vulnerabilities are still being discovered.

It’s clear that user feedback is influencing Microsoft’s moves around AI on Windows. Davuluri wrote that he and his team have spent the past several months listening to the community about how they’d like to see Windows improved.

The Copilot rollback is just one of the changes being made.

The company said it’s also introducing the ability to move the taskbar to the top or sides of the screen, giving users more control over system updates, speeding up File Explorer, improving the Widgets experience, updating the Feedback Hub, and making it easier to navigate its Windows Insider Program — a community that offers feedback about Windows’ future.

Tech

Mac gaming is better than ever, and it still sucks

The state of Mac gaming does Apple's incredible chips and stunning displays a disservice. As has always been the case, there's little to suggest Apple knows how, or even wants, to fix it.

The M5 Max MacBook Pro should be the ultimate gaming laptop, but it isn’t

The new M5 Max MacBook Pro might be Apple's fastest-ever Mac. It's still a terrible buy for anyone serious about gaming.

In truth, the 16-inch MacBook Pro should be a beast of a gaming laptop. It has a glorious, huge, bright, and colorful display and a massive battery.

Tech

What is DLSS 5? Nvidia’s controversial AI update explained

Nvidia had a busy few days at its global GTC event in San Jose.

Alongside the launch of NemoClaw, Nvidia unveiled DLSS 5, which the company has hailed as its “most significant breakthrough in computer graphics” since real-time ray tracing. Despite the fanfare, DLSS 5 has been met with significant backlash and criticism.

We explain everything you need to know about Nvidia DLSS 5, including what it really is and how it will affect video games. We’ll also explain why there’s been such backlash and why gamers and developers alike are not happy with DLSS 5.

What is Nvidia DLSS 5?

Before we dive into the specifics of DLSS 5, we’ll start by reminding you what DLSS actually is. Released back in 2018, DLSS (short for Deep Learning Super Sampling) is an AI-powered technology that upscales resolution and generates new frames in video games to boost overall performance. According to Nvidia, it has been integrated into over 750 games and has become the “gold standard for the industry”.

DLSS has seen various improvements and upgrades over the years, with DLSS 5 being the latest update. 

Nvidia explains that DLSS 5 uses an AI model that is trained end-to-end to understand aspects of complex scenes, including characters, hair and fabric alongside environmental lighting conditions, all by analysing a single frame. DLSS 5 will then generate visually precise images while retaining the structure and semantics of the original scene – all in real time at up to 4K resolution, so you shouldn’t notice any interruptions during gameplay.
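DLSS itself is proprietary and runs a neural network on dedicated hardware, but the basic super-sampling contract described above (low-resolution frame in, higher-resolution frame out) can be illustrated with plain bilinear interpolation. This hypothetical sketch shows the naive baseline that neural upscalers improve upon, not Nvidia's method:

```python
def bilinear_upscale(frame, scale=2):
    """Naive bilinear upscaling of a 2-D grayscale frame (list of lists).
    DLSS replaces this kind of interpolation with a neural network that
    also consumes motion vectors, but the input/output contract is the
    same: a low-resolution frame in, a higher-resolution frame out."""
    h, w = len(frame), len(frame[0])
    out_h, out_w = h * scale, w * scale
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into source coordinates.
            sy = min(y / scale, h - 1)
            sx = min(x / scale, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = frame[y0][x0] * (1 - fx) + frame[y0][x1] * fx
            bot = frame[y1][x0] * (1 - fx) + frame[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```

Interpolation like this blurs fine detail, which is exactly the shortcoming that motivates learned upscalers: they hallucinate plausible detail instead of averaging neighboring pixels.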

But what does this mean for game developers? Nvidia says that DLSS 5 provides developers with “detailed controls” so they can determine where and how enhancements are applied to maintain each game’s aesthetic. At the time of writing, DLSS 5 will be supported by the likes of Bethesda, Ubisoft, Warner Bros Games and more. 

When is DLSS 5 coming out?

At the time of writing, Nvidia has only stated that DLSS 5 will arrive “this fall” and is yet to provide a concrete release date.

We do know a handful of titles that will support DLSS 5, including AION 2, Assassin’s Creed Shadows, Black State, CINDER CITY, Delta Force, Hogwarts Legacy, Justice, NARAKA: BLADEPOINT, NTE: Neverness to Everness, Phantom Blade Zero, Resident Evil Requiem, Sea of Remnants, Starfield, The Elder Scrolls IV: Oblivion Remastered, Where Winds Meet and more.

Nvidia also provided several examples of DLSS 5 in the likes of Resident Evil Requiem, EA SPORTS FC, Starfield and Hogwarts Legacy throughout the GTC.

What is the DLSS 5 controversy?

You’ll likely have seen some of the online backlash surrounding Nvidia’s DLSS 5 announcement, with disgruntled gamers taking to the likes of X and Reddit to dismiss the technology as nothing more than “AI slop”. However, there is more to their frustrations than that.

Game developers and artists have voiced concerns that DLSS 5 not only seems to make games look worse, but that they’ll lose artistic control over a game’s design. In addition, gamers have said they prefer deliberate, nuanced art direction over visuals that look smooth but less considered.

Many also took to the comments section on Nvidia’s YouTube announcement to voice their unhappiness at DLSS 5. Nvidia has attempted to put minds at ease, which can be seen through the pinned comment on the video. In this comment, Nvidia explains that “game developers have full, detailed artistic control over DLSS 5’s effects to ensure they maintain their game’s unique aesthetic.”

The comment continues by explaining DLSS 5 is “not a filter” and instead “inputs the game’s color and motion vectors for each frame into the model, anchoring the output in the source 3D content.”
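Nvidia has not published how DLSS 5 consumes those inputs, but the role of motion vectors in anchoring generated output to source content can be illustrated with a toy example: warping the previous frame by per-pixel motion predicts where each pixel moves, so generated frames stay tied to the game's own geometry rather than being free-floating. All names below are hypothetical:

```python
def warp_frame(frame, motion):
    """Warp a 2-D grayscale frame by per-pixel motion vectors (dy, dx).
    Frame generation in the DLSS family is far more sophisticated, but
    motion vectors serve this same purpose: predicting where pixels move
    between frames so generated output stays anchored to the source."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y][x]
            # Sample the source pixel this output pixel came from,
            # clamped to the frame boundary.
            sy = min(max(y - dy, 0), h - 1)
            sx = min(max(x - dx, 0), w - 1)
            out[y][x] = frame[sy][sx]
    return out
```

Because every output pixel is sampled from real rendered content, a warp like this cannot invent scenery wholesale; that constraint is the spirit of Nvidia's "not a filter" claim.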

Not only that, but in a recent interview with Tom’s Hardware, Jensen Huang stated that those voicing anger are simply “completely wrong”. Huang concludes that DLSS 5 is “very different than generative AI; it’s content-control generative AI” and that developers will still retain control of the game and can “fine-tune the generative AI” to make it match the style of the game. 

In fact, some video game powerhouses have voiced their praise for DLSS 5. For example, Todd Howard, Studio Head and Executive Producer at Bethesda Game Studios stated in Nvidia’s press release that “with DLSS 5, the artistic style and detail shine through without being held back by the traditional limits of real-time rendering.”

In addition, Charlie Guillemot, co-CEO of Vantage Studios said the way DLSS 5 “renders lighting, materials and characters changes what we can promise to players. On Assassin’s Creed Shadows, it’s letting us build the kind of worlds we’ve always wanted to.”


Tech

‘A Rigged and Dangerous Product’: The Wildest Week for Prediction Markets Yet

Kalshi CEO Tarek Mansour posted a video on Wednesday of six men decked out in business casual doing push-ups on the sidewalk. “This is how Kalshi Q1 board meeting ended,” he wrote on X. The board members are laughing and smiling in the video after their impromptu cardio session, and the mood is jubilant. The next day, it became clear that the team had ample reason to celebrate: Kalshi had just raised $1 billion at a $22 billion valuation, making the company worth on paper roughly double what it was only a few months ago.

The funding round represented a bright spot during one of the most turbulent weeks for the prediction market industry yet. In just the past five days, Nevada banned Kalshi via a temporary restraining order and Arizona filed criminal charges accusing it of running an illegal gambling business; an Israeli reporter said that he received an avalanche of threats from Polymarket traders furious about how a story he wrote impacted their wagers; Polymarket scored a major deal with Major League Baseball, further entrenching itself in the world of professional sports; and US Senators introduced legislation to ban specific types of markets offered by the industry, including any involving “government actions, terrorism, war, assassination, and events where an individual knows or controls the outcome.” It is the latest in a series of bills intended to place guardrails around the prediction industry.

Senator Chris Murphy, a cosponsor of the bill and one of the industry’s most outspoken critics, said in an interview with WIRED that prediction markets are “a rigged and dangerous product,” and represent “a brand-new source of mind-bending corruption.”

“Kalshi already bans insider trading and markets directly tied to death and war,” says Kalshi spokesperson Elisabeth Diana. “As a US-based exchange, we support regulators and policymakers from both sides of the aisle in their efforts to keep these markets safe and responsible in America.” Polymarket did not return requests for comment.

Existing law gives the Commodity Futures Trading Commission, the agency that oversees prediction markets, the authority to ban offerings related to assassination, war, terrorism, and other subjects deemed contrary to the public interest. Some prediction markets already stay away from these categories. But not all of their users understand where exactly the lines are drawn, which created a messy situation when some assumed that a market on the fate of Iran’s supreme leader would result in a payout if he “left office” by getting killed.

Meanwhile, Polymarket, which largely operates outside of the United States, offers plenty of war markets—but legislation is unlikely to impact these offerings. The platform is currently offering a market on whether Israeli Prime Minister Benjamin Netanyahu will be “out” by certain dates; someone recently wagered $177,000 that he would be out by March 31. Polymarket would likely resolve the market to “yes” and allow its bettors to profit if Netanyahu dies, just as it did when Khamenei was killed.

One of the reasons Senator Murphy is so passionate about prediction markets is because he sees them as vectors for insider trading. The Israeli government, for example, has charged two of its citizens with leaking classified information by placing Polymarket bets tied to the war in Iran. The Connecticut lawmaker suspects that other trades related to the conflict may have been carried out by members of Trump’s inner circle who have advanced knowledge about military operations. “It’s bone chilling to think that there are staffers inside the situation room that are pushing the United States into war, not because it’s good for our security, but because they’re going to make $100,000 off it,” he says.


Tech

‘There are many pathways to becoming an excellent engineer’

Kevin O’Riordan explains what potential applicants need to know in advance of a career at Acuity.

In September of last year, US industrial technology company Acuity announced plans to create 100 jobs out of its Cork-based facility, which is its new Global Digital Centre of Excellence, located in the city centre. 

As March is when Silicon Republic often focuses on subjects aligned with all things engineering, now seemed like an ideal opportunity to check back in with Acuity and explore what prospective candidates should know if they have plans of developing a career in this field. 

Kevin O’Riordan, the vice-president of technology and site leader at Acuity, explained that the organisation has already employed more than 30 engineers and leaders across cloud-native development, embedded systems, UI engineering and DevOps. However, there are still plenty of opportunities open to jobseekers looking for a new role in Cork.

“We are actively hiring for a wide range of roles, including software engineers (at all levels), software QA and automation, DevOps engineers, architects and technical leads, data and AI engineers and product managers,” he said. “New roles are being added regularly as we scale toward more than 100 software engineering R&D positions over the next three years.”

So how can a potential candidate stand out, as they look towards a new professional opportunity?

Skills and thrills

O’Riordan said Acuity is looking for people who are passionate about technology, curious by nature and keen to solve real-world problems. Strong collaboration skills are critical, and candidates who can demonstrate initiative and a genuine interest in shaping the technology landscape stand out.

He added: “Because our product ecosystem spans multiple technology layers, we hire for a broad range of skills. We’re seeking engineers for a variety of projects with experience in any of embedded, cloud-native or application and UI. Desirable skills, depending on the role, include Python, AI, Azure, embedded C/C++, C#, modern JavaScript and React.

“For early career applicants, qualifications matter, but demonstrated skills, practical experience and personal projects carry significant weight. There are many pathways to becoming an excellent engineer, and we value all of them.”

The might of Munster

With many of Ireland’s key organisations and working hubs located in the capital, O’Riordan noted that the decision to establish Acuity in Cork was a “deliberate, strategic choice”, as Cork offers a “thriving tech hub with the talent, community and momentum to support a highly networked culture”. 

He said: “Our investment reinforces Cork’s growing reputation as a hub for innovation, talent and digital advancement. From day one, we’ve been intentional about integrating into the community, partnering with local organisations, supporting charity initiatives and sourcing materials and suppliers locally where possible.

“We are also contributing to the region’s long‑term talent pipeline through internships, graduate hiring and deep partnerships with local universities. By bringing advanced R&D work to Cork, we’re strengthening the broader tech ecosystem and supporting the city’s continued economic momentum.”

He also noted, perhaps with students attending university in the region in mind, that university partnerships are crucial to shaping the next generation of engineers in Ireland. “Cork’s institutions offer world‑class education, research capabilities and a diverse, motivated talent pipeline.”

For anyone interested in engaging with the organisation via their educational institution, O’Riordan said: “Our work with UCC, MTU and partners like Tyndall National Institute includes internships, course engagement, guest sessions that give students real-world insight, and early‑stage research exploration. These partnerships strengthen both Acuity and the broader ecosystem while ensuring students gain meaningful exposure to industry.”
