
Tech

Saturn’s Rings and Storms Stand Out in Combined Webb and Hubble Telescope Views

Webb Hubble Space Telescopes Saturn New View
Astronomers have just released what may be the sharpest views of Saturn ever captured, courtesy of the Hubble and James Webb space telescopes working in tandem. One image was taken in visible light and is breathtaking on its own, while the other, captured in infrared, pulls back the curtain on an entirely different layer of detail across the planet’s clouds, rings, and poles.



Hubble captured its image on August 22nd during a routine weather monitoring sweep of the outer planets. Bands of clouds wrap around the globe with subtle shifts in tone where sunlight catches the upper atmosphere, and the rings cast long shadows across the planet’s face at that particular angle. Three of Saturn’s smaller moons, Janus, Mimas, and Epimetheus, sit quietly at the edges of the frame, adding a sense of scale to an already striking image.


The James Webb Space Telescope turned to the same target a few months later, on November 29th, this time with its near-infrared camera. The rings respond brilliantly to infrared light, the water ice within them practically glowing in the exposure. The narrow outer F ring shows up with crisp definition alongside the broader B ring, which carries subtle spoke-like structures that are easy to miss at first glance. The wider field of view also reveals six of Saturn’s larger moons, including Titan off to one side and Dione and Enceladus sitting remarkably close together.


The two images were taken 14 weeks apart, during a period when Saturn was slowly approaching its 2025 equinox. The northern hemisphere is easing out of summer while the south is just beginning its transition into spring, and that gradual seasonal shift gives astronomers a rare window to track how the planet’s clouds, rings, and atmospheric features evolve over the coming decade.

Hubble’s visible-light image captures the cloud tops and formations that scientists have been studying for decades, but Webb’s infrared view goes considerably deeper, revealing cloud structures and atmospheric compounds at multiple levels, from the dense lower layers all the way up to the thin air at the top. Together the two images give researchers something far more powerful than either could provide alone, allowing them to study the atmosphere in layers rather than as a single flat snapshot.

The Webb image reveals a wavy jet stream cutting across the northern mid-latitudes, bent by atmospheric waves churning beneath it. Further south, a handful of small storms dot the lower hemisphere, one of which appears to be the final remnant of the enormous storm system that raged for years after it first appeared in 2010. Over in the Hubble image, the famous north-pole hexagon is faintly visible: the six-sided wind pattern that has persisted since the 1980s and shows no signs of fading yet, though it will eventually disappear as Saturn’s north pole descends into a 15-year winter by the 2040s.

The poles in the infrared image take on a grey-green tint that scientists believe could be caused by high-altitude aerosols or charged particles connected to auroral activity around Saturn’s magnetic field, details that are simply invisible in visible light. The rings tell their own story across both images as well. Visible light shows their structure and the shadows they cast across the planet’s face, while infrared highlights just how reflective the ice particles within them are, making the entire ring system pop against the darkness of space. Subtle differences between the two images also reflect the different viewing angles and wavelengths each telescope works with, adding another layer of information for researchers to work through.


Japan's bullet train to debut high-tech private cabins, for an added fee


A recently introduced Shinkansen high-speed train is set to add several exclusive private cabins over the next few months. According to a local report, these “private rooms” will include high-tech services designed to improve remote working conditions and internet performance. Travelers visiting Japan may also find the option appealing, although…

Ember Artline vs Samsung Frame: Comparing the arty TVs

Although Amazon first revealed its Samsung Frame competitor TV back at CES, it’s now finally available to pre-order.

So how does Amazon’s new lifestyle TV, dubbed the Ember Artline, compare to the Samsung Frame? Ahead of our review, we’ve compared its initial specs to the four-star Samsung Frame and noted the key differences between the two below.

Once you’re done here, make sure you visit our round-up of the best TVs, best cheap TVs and best 4K TVs too, to find your next investment.

Price and Availability

At the time of writing, Amazon’s Ember Artline is available for pre-order and will launch officially on April 22nd in the US and Canada, and May 7th in the UK. Germany is slated to see the TV later in May, although an exact date hasn’t been announced just yet.


The Ember Artline has a starting RRP of $899.99/£949.99 for the 55-inch model.


In comparison, the Samsung Frame is available to buy now and has a starting price of £799/$899 for the smallest 43-inch model. While the Ember Artline is only available in two sizes (55- and 65-inches), the Samsung Frame comes as a 43-, 50-, 55- or 65-inch screen.


Ember Artline supports Alexa+

Naturally, as it’s an Amazon TV, the Ember Artline is fitted with Alexa – specifically the recently launched Alexa+. However, it’s worth noting that Alexa+ is only free for Prime members; non-Prime subscribers will have to pay £19.99 to access the voice assistant.

Alexa+ is essentially a smarter, more conversational and personalised upgrade over the original Alexa. While we’re yet to provide our full review on the voice assistant, our Home Technology Editor Dave Ludlow has given his early thoughts on Alexa+ and noted where it excels and still struggles.

Alexa+ on Echo Show. Image Credit (Trusted Reviews)


Otherwise, Alexa+ provides hands-free control on the TV, and allows you to search for shows, receive personalised recommendations and have natural conversations too.


Fire TV vs Tizen

One of the key differences between the Ember Artline and Samsung Frame is with their respective operating systems. While the Ember Artline runs on Amazon’s Fire TV, the Samsung Frame is powered by, unsurprisingly, Samsung’s Tizen OS instead.

Image Credit (Amazon)

Both are smart TV systems that offer access to streaming apps such as Netflix, BBC iPlayer, Disney Plus and more, and have their respective pros and cons. For example, while Tizen isn’t the easiest to navigate, it does offer recommendations and there’s now the option to create multiple profiles for your household. In comparison, although Fire TV is intuitive, we found that it has a tendency to promote Amazon Prime content – which is somewhat understandable. 

Ember Artline includes artwork at no additional cost

The key selling point of the two TVs here is that they can display artwork on their screens when not in use. The Samsung Frame has a dedicated Art Mode that presents a gallery of artwork and even your own photos on screen. Plus, with Pantone-validated colour and the promise of no screen burn, images don’t only look vibrant and authentic but you can keep the screen on without worry.

Image Credit (Samsung)


However, although the Samsung Frame does offer a selection of complimentary pieces to display, you will need to pay in order to access the complete library of over 3500 works of art.

In comparison, at least at the time of writing, the Ember Artline offers its collection of 2000 art pieces without any additional cost. Much like the Samsung Frame, you can also choose to display your own photos on the Ember Artline, via the Amazon Photos app.


Samsung Frame has more ports

You can never have too many ports, and the Samsung Frame offers a pretty generous selection overall. Alongside its four HDMIs, there are three USBs (two Type-A and one Type-C), an Ethernet port and an optical port too.

In comparison, the Ember Artline has slightly fewer, with three HDMI 2.0 ports, one HDMI port with eARC, one USB port and an optical audio port.

However, the Ember Artline does benefit from Wi-Fi 6 support whereas the Samsung Frame sports the older Wi-Fi 5.

Ember Artline. Image Credit (Amazon)


Both are 4K QLED displays

Both the Ember Artline and Samsung Frame are 4K QLED displays packed with plenty of premium screen technologies, including HDR. In addition, both displays have an anti-glare finish that reduces reflections. In our review of the 2022 Samsung Frame, we found the screen did an excellent job of keeping reflections at bay, so we expect the latest model to do the same.


Otherwise, both the Ember Artline and Samsung Frame have a motion sensor that can either wake or turn off the screen accordingly. 

Finally, it’s worth noting that both TVs here also have customisable frames, or bezels, which are sold separately.

Early Verdict

Both the Amazon Ember Artline and Samsung Frame are impressive lifestyle TVs. As we’re yet to review the Ember Artline, we’ll hold off from giving a conclusive review for now. However, if you already own some of the best Amazon Echo devices, enjoy using Alexa for hands-free controls and don’t want to pay extra for artwork, then the Ember Artline seems like a great choice.

On the other hand, if you require more ports, don’t mind TizenOS and want a wider choice of screen sizes, then the Samsung Frame will likely suit you better.


We’ll update this versus once we review the Ember Artline.


Is Linux Mint In Trouble?

BrianFagioli writes: The developers behind Linux Mint say the project is rethinking its release strategy and moving toward a longer development cycle, with the next version now expected around Christmas 2026. In a monthly update, project lead Clement Lefebvre said the team reached a “crossroads” and needs more flexibility to fix bugs, improve the desktop, and adapt to rapid changes across the Linux ecosystem. The upcoming development build, temporarily called Mint 23 “Alfa,” is currently based on Ubuntu 26.04 LTS and includes Linux kernel 7.0, an unstable build of Cinnamon 6.7, and early Wayland-related work.

Mint is also replacing the long-used Ubiquity installer with “live-installer,” the same tool used by Linux Mint Debian Edition, allowing the project to unify installation infrastructure across its Ubuntu-based and Debian-based variants. While the team frames the changes as an opportunity to improve quality and reduce maintenance overhead, the shift has raised questions about the project’s long-term direction and whether Linux Mint may eventually lean more heavily on its Debian roots rather than its traditional Ubuntu base.


Last chance to vote! Help pick the 2026 GeekWire Awards winners across 10 categories

Who will take home the coveted robot trophies at the 2026 GeekWire Awards? (GeekWire Photo)

Voting closes today for the 2026 GeekWire Awards, so it’s your final chance to help us select the top innovators and entrepreneurs in Pacific Northwest tech.

Cast your ballot here or in the embedded form at the bottom. 

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

With 50 finalists across 10 categories, we’ve previewed every potential winner — from Startup of the Year to Next Tech Titan — in stories over the past several weeks. Catch up here:

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.



No, Anthropic’s New Claude Opus 4.7 Model Is Not Mythos Preview

Anthropic on Thursday released a new AI model, and no, it’s not Claude Mythos Preview. Claude Opus 4.7 is now generally available, meant to help developers and vibe coders with their hardest coding tasks.

Opus 4.7, like a well-trained dog, is supposedly better at following instructions. Anthropic wrote in its blog post that Opus 4.7 takes instructions “literally,” where previous models skipped or loosely interpreted prompts. It has improvements to its file-based memory system, so it should be able to recall information from previous sessions and documents. And it can handle larger image files and analyze data from charts more easily. 

Anthropic also said the model is more “tasteful and creative” when creating interfaces, documents and slide decks. There are no details on exactly what Anthropic considers bad versus good taste.


Anthropic made waves earlier this month when it revealed it had created Claude Mythos Preview, its next-generation model. The model proved so good at finding security gaps that the company said it would be sharing it with tech and internet infrastructure companies — like Cisco, CrowdStrike and Amazon Web Services — so they could address the issues Mythos found.

The idea is that if tech companies can improve their systems with the help of AI, they will be more resilient to cyberattacks by bad actors who can use publicly available AI models like everyone else.

While Opus 4.7 isn’t the same as Mythos, Anthropic is testing some of its new cybersecurity protections in Opus 4.7. These safeguards, which “automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” are the watered-down version of what will be in “Mythos-class” models, the company’s blog post said. But they’re still important as cybersecurity becomes increasingly saturated with AI, both for defense and for attack.


Are we getting what we paid for? How to turn AI momentum into measurable value

Enterprise AI is entering a new phase — one where the central question is no longer what can be built, but how to make the most of our AI investment.

At VentureBeat’s latest AI Impact Tour session, Brian Gracely, director of portfolio strategy at Red Hat, described the operational reality inside large organizations: AI sprawl, rising inference costs, and limited visibility into what those investments are actually returning.

It’s the “Day 2” moment — when pilots give way to production, and cost, governance, and sustainability become harder than building the system in the first place.

“We’ve seen customers who say, ‘I have 50,000 licenses of Copilot. I don’t really know what people are getting out of that. But I do know that I’m paying for the most expensive computing in the world, because it’s GPUs,’” Gracely said. “‘How am I going to get that under control?’”


Why enterprise AI costs are now a board-level problem

For much of the past two years, cost was not the primary concern for organizations evaluating generative AI. The experimental phase gave teams cover to spend freely, and the promise of productivity gains justified aggressive investment, but that dynamic is shifting as enterprises enter their second and third budget cycles with AI. The focus has moved from “can we build something?” to “are we getting what we paid for?”

Enterprises that made large, early bets on managed AI services are conducting hard reviews of whether those investments are delivering measurable value. The issue isn’t just that GPU computing is expensive. It is that many organizations lack the instrumentation to connect spending to outcomes, making it nearly impossible to justify renewals or scale responsibly.

The strategic shift from token consumer to token producer

The dominant AI procurement model of the past few years has been straightforward: pay a vendor per token, per seat, or per API call, and let someone else manage the infrastructure. That model made sense as a starting point but is increasingly being questioned by organizations with enough experience to compare alternatives.

Enterprises that have been through one AI cycle are starting to rethink that model.


“Instead of being purely a token consumer, how can I start being a token generator?” Gracely said. “Are there use cases and workloads that make sense for me to own more? It may mean operating GPUs. It may mean renting GPUs. And then asking, ‘Does that workload need the greatest state-of-the-art model? Are there more capable open models or smaller models that fit?’”

The decision is not binary. The right answer depends on the workload, the organization, and the risk tolerance involved, but the math is getting more complicated as the number of capable open models, from DeepSeek to models now available through cloud marketplaces, grows. Now enterprises actually have real alternatives to the handful of providers that dominated the landscape two years ago.

Falling AI costs and rising usage create a paradox for enterprise budgets

Some enterprise leaders argue that locking into infrastructure investments now could mean significantly overpaying in the long run, pointing to the statement from Anthropic CEO Dario Amodei that AI inference costs are declining roughly 60% per year.

The emergence of open-source models such as DeepSeek and others over the last three years has meaningfully expanded the strategic options available to enterprises willing to invest in the underlying infrastructure.


But while costs per token are falling, usage is accelerating at a pace that more than offsets efficiency gains. It’s a version of Jevons Paradox, the economic principle that improvements in resource efficiency tend to increase total consumption rather than reduce it, as lower cost enables broader adoption.

For enterprise budget planners, this means declining unit costs do not translate into declining total bills. An organization that triples its AI usage while costs fall by half still ends up spending more than it did before. The consideration becomes which workloads genuinely require the most capable and most expensive models, and which can be handled just fine by smaller, cheaper alternatives.
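That arithmetic can be sketched directly. A minimal projection, using illustrative numbers rather than figures from the session, shows why falling unit costs and rising usage can still produce a larger bill:

```python
def total_spend(unit_cost: float, usage: float,
                annual_cost_decline: float, annual_usage_growth: float,
                years: int) -> float:
    """Project annual AI spend as unit cost falls and usage grows."""
    cost = unit_cost * (1 - annual_cost_decline) ** years
    volume = usage * annual_usage_growth ** years
    return cost * volume

baseline = total_spend(1.0, 100, 0.60, 3.0, years=0)  # 100.0
# Unit cost falls 60% (the cited decline rate) while usage triples:
year_one = total_spend(1.0, 100, 0.60, 3.0, years=1)  # ~120: up ~20%
# The article's example: usage triples while unit cost merely halves.
halved = total_spend(1.0, 100, 0.50, 3.0, years=1)    # 150.0: up 50%
```

The decline rate is per unit; the budget question is the product of unit cost and volume, which is why efficiency gains alone don't shrink the bill.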

The business case for investing in AI infrastructure flexibility

The prescription isn’t to slow down AI investment, but to build with flexibility being top of mind. The organizations that will win aren’t necessarily the ones that move fastest or spend the most; they’re the ones building infrastructure and operating models capable of absorbing the next unexpected development.

“The more you can build some abstractions and give yourself some flexibility, the more you can experiment without running up costs, but also without jeopardizing your business. Those are as important as asking whether you’re doing everything best practice right now,” Gracely explained.


But despite how entrenched AI discussions have become in enterprise planning cycles, the practical experience most organizations have is still measured in years, not decades.

“It feels like we’ve been doing this forever. We’ve been doing this for three years,” Gracely added. “It’s early and it’s moving really fast. You don’t know what’s coming next. But the characteristics of what’s coming next — you should have some sense of what that looks like.”

For enterprise leaders still calibrating their AI investment strategies, that may be the most actionable takeaway: the goal is not to optimize for today’s cost structure, but to build the organizational and technical flexibility to adapt when, not if, it changes again.


Meta Raises Prices on Quest 3 and Quest 3S Due to RAM Shortage

Published

on

Meta’s latest virtual reality headset, the Meta Quest 3 (512 GB), will cost $100 more starting Sunday. You can blame the ongoing RAM shortage. 

Meta released the pricing update on Wednesday in a blog post calling out price increases for the Meta Quest 3 and 3S models. “The cost of building high-performance VR hardware has risen significantly,” Meta said in the post explaining the increase. 

High demand from AI data centers is straining memory chip supplies, causing supply constraints and price increases in consumer tech. Many experts aren’t expecting the RAM shortage to end until 2028. 


Counterpoint Research released findings in February showing that RAM costs increased by 80% to 90% in the first quarter of this year. Tech companies continue to hike prices, with Microsoft being the latest to increase the cost of the Microsoft Surface and Samsung doing the same for some Galaxy devices.


Here’s the original pricing as of Thursday, along with what you can expect to pay starting April 19. 

Price changes for Meta Quest 3 models


Model and storage | Original price | New price
Meta Quest 3S (128 GB) | $300 | $350
Meta Quest 3S (256 GB) | $400 | $450
Meta Quest 3 (512 GB) | $500 | $600
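In relative terms, the changes in the table above work out to increases of roughly 12% to 20%. A quick calculation using the article's prices:

```python
# Old and new US prices from the article's table.
prices = {
    "Meta Quest 3S (128 GB)": (300, 350),
    "Meta Quest 3S (256 GB)": (400, 450),
    "Meta Quest 3 (512 GB)": (500, 600),
}

for model, (old, new) in prices.items():
    pct = (new - old) / old * 100
    print(f"{model}: +${new - old} ({pct:.1f}%)")
# Prints 16.7%, 12.5% and 20.0% respectively: the 512 GB Quest 3 sees
# the largest bump in both absolute and relative terms.
```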

Expect price bumps for refurbished Meta Quest headsets. Prices for Quest accessories will remain the same for now, though we’re unsure whether this applies to games in the Meta store, or whether there’ll be a change in the future. 

Meta did not immediately respond to a request for comment. 


The Meta Quest 3 and 3S are Meta’s latest virtual reality headsets. The Quest 3S is the budget-friendly version, while the Quest 3 is the “pro” model. CNET’s Scott Stein rated both models high for their mixed reality, with better color cameras and improvements from the Quest 2.


This AI lets self-driving cars “remember” past drives to plan safer routes

Published

on

One of the biggest problems with self-driving systems is that they can see the road perfectly well and still make shaky short-term decisions in messy city traffic. Even advanced systems struggle to keep up with complex, fluctuating road situations. But a new study argues that these cars don’t need better vision so much as a better memory.

In the peer-reviewed paper KEPT (Knowledge-Enhanced Prediction of Trajectories from Consecutive Driving Frames with Vision-Language Models), researchers from Tongji University and collaborators developed a system that helps autonomous vehicles “remember” past driving scenes before choosing what to do next.

How does this new self-driving tech work?

The method, called KEPT, uses front-view camera video, compares it with a large library of earlier real-world driving clips, and then predicts a safer short-term trajectory based on both the current scene and retrieved examples from the past. The core idea is pretty intuitive. Instead of asking an AI model to react to every situation as if it has never seen anything like it before, KEPT lets it recall similar moments from previous drives.

Those examples are then fed into a vision-language model as part of a structured reasoning process. This matters since researchers say large vision-language models can otherwise hallucinate, ignore physical constraints, or suggest motion that looks plausible on paper but is not great for an actual car. So KEPT basically acts like guardrails to keep the model grounded in what similar traffic situations looked like in the real world.
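As a rough illustration of that retrieve-then-reason loop, nearest-neighbour retrieval over scene embeddings might look like the sketch below. The embeddings, library, and prompt step are hypothetical stand-ins, not KEPT's actual pipeline:

```python
import numpy as np

def retrieve_similar_scenes(query_emb: np.ndarray,
                            library_embs: np.ndarray,
                            k: int = 3) -> np.ndarray:
    """Return indices of the k past driving scenes most similar to the
    current one, ranked by cosine similarity of their embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    lib = library_embs / np.linalg.norm(library_embs, axis=1, keepdims=True)
    sims = lib @ q                    # cosine similarity per library scene
    return np.argsort(-sims)[:k]     # indices of the k best matches

# Toy library of past-scene embeddings; the query nearly matches scene 2.
rng = np.random.default_rng(0)
library = rng.normal(size=(4, 8))
query = library[2] + 0.01 * rng.normal(size=8)

top = retrieve_similar_scenes(query, library, k=2)
# The retrieved scenes (and how they played out) would then be placed in
# the vision-language model's prompt as grounding context before it
# proposes a short-term trajectory.
```

The real system operates on driving video rather than random vectors, but the principle is the same: rank stored experience by similarity to the current scene and condition the planner on the closest matches.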

Is it better than conventional autonomous systems?

The researchers tested KEPT on the widely used nuScenes benchmark and said it outperformed both conventional end-to-end planning systems and newer vision-language-based planners on open-loop metrics. It also reduced prediction error and lowered potential collision indicators, while keeping retrieval fast enough to remain practical for real-time driving.

This may make it seem like an obvious choice for next-generation self-driving cars, but it’s not road-ready yet. Still, the broader idea is compelling. If autonomous cars can combine real-time perception with a meaningful memory of how similar situations unfolded before, they may end up making decisions that feel less brittle and more human-like.



Bogus crypto wallet on App Store steals $9.5M

Published

on

Multiple cryptocurrency users have lost approximately $9.5 million after a fake Ledger Live app on the macOS App Store drained their funds.

A fake version of the Ledger Live macOS app has stolen $9.5M in cryptocurrency.

The world of cryptocurrency has always carried significant risks, and even iPhone and iPad users aren’t immune to its dangers. Now and then, malicious actors find ways to steal money, be it via outright hacking or through scams designed to drain crypto wallets.
In April 2026, Mac users were hit with the latter after downloading a fake version of the Ledger Live app from the macOS App Store. The fake app was submitted by the publisher “Leva Heal,” which has nothing to do with Ledger SAS, the owner and developer of the real Ledger Live app.


Perplexity brings its Personal Computer AI assistant to Mac

Published

on

Perplexity has just released Personal Computer. The software, which is available starting today for Mac, builds on the multi-model orchestration capabilities the company debuted with Perplexity Computer at the end of February. Like Claude Cowork (and, as of today, OpenAI Codex too), it’s a suite of computer use agents that can work with your files, apps, connectors and the web to complete complex and “even continuous workflows.”

Perplexity suggests a few different use cases for Personal Computer, starting with the obvious. “You can ask Personal Computer to read your to-do list,” the company states. “In fact, you can ask it to DO your to-do list.” It explains you can open the Notes app on your Mac, ask Personal Computer for help and the system will reason how to best assist you. In the process of tackling that task, it can work across all your files, as well as apps like Apple Messages. When needed, it will also employ multiple agents to complete a request. Like Anthropic did with Claude Cowork, Perplexity says you can also use its software to organize messy folders so files feature sensible names and there’s an easy-to-understand structure to everything.

You can prompt Personal Computer with your voice, and you can even initiate and manage tasks from your phone. Perplexity says the app creates files in a secure sandbox, and any actions it takes are auditable and reversible. “A system that acts on your behalf needs to be useful and legible. It should feel like a team you manage, not a rogue employee with keys to your most important data,” the company said.

Personal Computer for Mac is available starting today, beginning with Max subscribers. Perplexity said it would bring the app to its other users soon, prioritizing those who joined the waitlist for the experience.



Copyright © 2025