
Tech

Android malware uses blank icons and fake screens to steal financial credentials


  • Four Android banking trojan campaigns target hundreds of finance and social apps
  • Malware hides icons, blocks removal, and overlays fake banking login screens
  • Live screen streaming lets attackers monitor activity and capture authentication steps

Security researchers have tracked four Android banking trojan campaigns that rely on deception, stealth, and disappearing app icons to stay hidden after installation.

Researchers at Zimperium say the campaigns, named RecruitRat, SaferRat, Astrinox, and Massiv, collectively targeted more than 800 banking, cryptocurrency, and social media apps.



Insurance startup Corgi hits $1.3B valuation 4 months after its Series A


Business insurance startup Corgi announced on Wednesday a $160 million Series B, led by TCV, valuing the company at $1.3 billion, co-founder Nico Laqua said on LinkedIn.

This comes just four months after the company announced a $108 million Series A. The company has now raised $268 million in funding to date, Laqua said, and has become Y Combinator’s latest unicorn.

Laqua started the company with Emily Yuan in 2024, and it was part of YC’s Spring 2024 batch. Corgi, which names Deel and Artisan as customers, offers coverage for general liability, cyber liability, and tech and AI liability. Other investors in the round include Kindred Ventures, Leblon Capital, and First Order Fund.

“We’re excited about the raise and incredibly grateful to our investors for believing in what we’re building. But the job is not done,” Laqua told TechCrunch. “Our mission is bigger: we want to use the fresh capital to expand into more lines of insurance and build a generational company.”



Five architects of the AI economy explain where the wheels are coming off


Earlier this week, five people who touch every layer of the AI supply chain sat down at the Milken Global Conference in Beverly Hills, where they talked with this editor about everything from chip shortages to orbital data centers to the possibility that the whole architecture that undergirds the tech is wrong.

On stage with TechCrunch: Christophe Fouquet, CEO of ASML, the Dutch company that holds a monopoly on the extreme ultraviolet lithography machines without which modern chips would not exist; Francis deSouza, COO of Google Cloud, who is overseeing one of the biggest infrastructure bets in corporate history; Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company that started in simulation and has since moved into defense; Dimitry Shevelenko, the chief business officer of Perplexity, the AI-native search-to-agents company; and Eve Bodnia, a quantum physicist who left academia to challenge the foundational architecture most of the AI industry takes for granted at her startup, Logical Intelligence. (Meta’s former chief AI scientist, Yann LeCun, signed on as founding chair of its technical research board earlier this year.)

Here’s what the five had to say:

The bottlenecks are real


The AI boom is running into hard physical limits, and the constraints begin further down the stack than many may realize. Fouquet was the first to say it, describing a “huge acceleration of chips manufacturing,” while expressing his “strong belief” that despite all that effort, “for the next two, three, maybe five years, the market will be supply limited,” meaning the hyperscalers — Google, Microsoft, Amazon, Meta — aren’t going to get all the chips they’re paying for, full stop.

DeSouza highlighted how big — and how fast growing — an issue this is, reminding the audience that Google Cloud’s revenue crossed $20 billion last quarter, growing 63%, while its backlog — the committed but not yet delivered revenue — nearly doubled in a single quarter, from $250 billion to $460 billion. “The demand is real,” he said with impressive calm.

For Younis, the constraint comes primarily from elsewhere. Applied Intuition builds autonomy systems for cars, trucks, drones, mining equipment and defense vehicles, and his bottleneck isn’t silicon — it’s the data that one can only gather by sending machines into the real world and watching what happens. “You have to find it from the real world,” he said, and no amount of synthetic simulation fully closes that gap. “There will be a long time before you can fully train models that run on the physical world synthetically.”


The energy problem is also real


If chips are the first bottleneck, energy is the one looming behind it. DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “You get access to more abundant energy,” he noted. Of course, even in orbit, it isn’t simple. DeSouza observed that space is a vacuum, which eliminates convection and leaves radiation as the only way to shed heat into the surrounding environment (a much slower and harder-to-engineer process than the air and liquid cooling systems that data centers rely on today). But the company is still treating it as a legitimate path.
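As a rough illustration of why radiation-only cooling is hard, the Stefan-Boltzmann law gives the radiator area needed to reject a given heat load. Every number here (radiator temperature, emissivity, the 1 MW pod) is an illustrative assumption for the sketch, not a figure from Google:

```python
# Back-of-the-envelope sketch of orbital heat rejection. With no air,
# heat leaves only by thermal radiation: P = eps * sigma * A * T^4.
# All constants below are illustrative assumptions.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2 * K^4)
EMISSIVITY = 0.9       # typical for a high-emissivity radiator coating
RADIATOR_TEMP_K = 300  # ~27 C radiator surface temperature

def radiator_area_m2(heat_watts: float,
                     temp_k: float = RADIATOR_TEMP_K,
                     emissivity: float = EMISSIVITY) -> float:
    """Radiator area needed to reject `heat_watts` purely by radiation,
    ignoring absorbed sunlight and the radiator's view of Earth."""
    return heat_watts / (emissivity * SIGMA * temp_k ** 4)

# A 1 MW compute pod (small by data-center standards):
area = radiator_area_m2(1_000_000)
print(f"{area:,.0f} m^2 of radiator per MW")  # roughly 2,400 m^2
```

Roughly 2,400 square meters of radiator per megawatt, before accounting for absorbed sunlight, is a hint of why orbital cooling is a harder engineering problem than terrestrial air or liquid loops.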

The deeper argument deSouza made, somewhat unsurprisingly, was about efficiency through integration. Google’s strategy of co-engineering its full AI stack — from custom TPU chips through to models and agents — pays dividends in watts per flop that a company buying off-the-shelf components simply can’t replicate, he suggested. “Running Gemini on TPUs is much more energy efficient than any other configuration,” because chip designers know what’s coming in the model before it ships, he said. In a world where energy availability is becoming a massive constraint on how far this tech can go, that kind of vertical integration is a major competitive advantage.

Fouquet echoed the point later in the discussion. “Nothing can be priceless,” he said. The industry is in a strange moment right now, investing extraordinary amounts of capital, driven by strategic necessity. But more compute means more energy, and more energy has a price.

A different kind of intelligence


While the rest of the industry debates scale, architecture, and inference efficiency within the large language model paradigm, Bodnia is building something very different.

Her company, Logical Intelligence, is built on so-called energy-based models (EBMs), a class of AI that doesn’t predict the next token in a sequence but instead attempts to understand the rules underlying data, in a way she argues is closer to how the human brain actually works. “Language is a user interface between my brain and yours,” she said. “The reasoning itself is not attached to any language.”

Her largest model runs to 200 million parameters — compared to the hundreds of billions in leading LLMs — and she claims it runs thousands of times faster. More importantly, it’s designed to update its knowledge as data changes, rather than requiring retraining from scratch.

For chip design, robotics and other domains where a system needs to grasp physical rules rather than linguistic patterns, she argues EBMs are the more natural fit. “When you drive a car, you’re not searching for patterns in any language. You look around you, understand the rules about the world around you, and make a decision.” It’s an interesting argument and one that’s likely to attract more attention in the coming months, given the AI field is beginning to ask whether scale alone is sufficient.


Agents, guardrails, and trust

Shevelenko spent much of the conversation explaining how Perplexity has evolved from a search product into something it now calls a “digital worker.” Perplexity Computer, its newest offering, is designed not as a tool a knowledge worker uses, but as a staff that a knowledge worker directs. “Every day you wake up and you have a hundred staff on your team,” he said of the opportunity. “What are you going to do to make the most of it?”

It’s a compelling pitch; it also raises obvious questions about control, so I asked them. His answer was granularity. Enterprise administrators can specify not just which connectors and tools an agent can access, but whether those permissions are read-only or read-write — a distinction that matters enormously when agents are acting inside corporate systems. When Comet, Perplexity’s computer-use agent, takes actions on a user’s behalf, it presents a plan and asks for approval first. Some users find the friction annoying, Shevelenko said, but he considers it essential, particularly after joining the board of Lazard, where he said he has found himself unexpectedly sympathetic to the conservative instincts of a CISO protecting a 180-year-old brand built entirely on client trust. “Granularity is the bedrock of good security hygiene,” he said.

Sovereignty, not just safety


Younis offered what may have been the panel’s most geopolitically charged observation, which is that physical AI and national sovereignty are entangled in ways that purely digital AI never was.

The internet initially spread as American technology and faced pushback only at the application layer — the Ubers and DoorDashes — when offline consequences became visible. Physical AI is different. Autonomous vehicles, defense drones, mining equipment, agricultural machines — these manifest in the real world in ways governments can’t ignore, raising questions about safety, data collection, and who ultimately controls systems that operate inside a nation’s borders. “Almost consistently, every country is saying: we don’t want this intelligence in a physical form in our borders, controlled by another country.” Fewer nations, he told the crowd, can currently field a robotaxi than possess nuclear weapons.

Fouquet framed it a little differently. China’s AI progress is real — DeepSeek’s release earlier this year sent something close to a panic through parts of the industry — but that progress is constrained below the model layer. Without access to EUV lithography, Chinese chipmakers cannot manufacture the most advanced semiconductors, and models built on older hardware operate at a compounding disadvantage no matter how good the software gets. “Today, in the United States, you have the data, you have the computing access, you have the chips, you have the talent. China does a very good job on the top of the stack, but is lacking some elements below,” Fouquet said.

The generation question


Near the end of our panel, someone in the audience asked the obvious uncomfortable question: is all of this going to impact the next generation’s capacity for critical thinking?

The answers were, perhaps unsurprisingly, optimistic, though not naively so. DeSouza pointed to the scale of problems that more powerful tools might finally let humanity address. Think neurological diseases whose biological mechanisms we don’t yet understand, greenhouse gas removal, and grid infrastructure that has been deferred for decades. “This should unleash us to the next level of creativity,” he said.

Shevelenko made a more pragmatic point: the entry-level job may be disappearing, but the ability to launch something independently has never been more accessible. “[For] anybody who has Perplexity Computer . . . the constraint is your own curiosity and agency.”

Younis drew the sharpest distinction between knowledge work and physical labor. He pointed to the fact that the average American farmer is 58 years old and that labor shortages in mining, long-haul trucking, and agriculture are chronic and growing — not because wages are too low, but because people don’t want those jobs. In those domains, physical AI isn’t displacing willing workers. It’s filling a void that already exists and looks only to deepen from here.


When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.



Google updates AI Overviews with Further Exploration links, subscription labels as 58% publisher click decline triggers antitrust suits


TL;DR

Google announced five updates to AI Overviews and AI Mode designed to send more traffic to publishers, including a Further Exploration links section, subscription labels, and inline link context. The changes arrive as AI Overviews face a 58 per cent click-through rate decline, antitrust lawsuits from Penske Media, and EU investigations into whether Google is cannibalising the web content its business depends on.

Google has a publisher problem. AI Overviews, the AI-generated summaries that now appear at the top of search results for a growing share of queries, have been correlated with a 58 per cent reduction in click-through rates to the websites whose content those summaries are built on. Penske Media has filed an antitrust lawsuit. The European Publishers Council has filed a formal complaint with the European Commission. A third of publishers surveyed say they will block AI Overviews once the tools to do so become available. And Google’s search advertising business, which generated more than 50 billion dollars in the first quarter of 2026 alone, depends on the continued existence of the web content that AI Overviews are systematically disincentivising publishers from producing.

On Tuesday, Google announced five updates to AI Mode and AI Overviews designed to send more traffic back to the websites it has been accused of cannibalising. The updates are Google’s most direct acknowledgement yet that AI search and the open web have a relationship problem, and its most concrete attempt to argue that the relationship can be repaired.


The features

The most significant addition is Further Exploration, a new section that appears at the end of AI Overviews with curated links to specific articles, case studies, and reports related to the query. The section is designed to transform the AI summary from a destination into a departure point, giving users who want to go deeper a structured path to the source material rather than leaving them with an answer that renders the original content unnecessary. Google is also introducing inline link context on desktop: hovering over a link embedded in an AI Overview will now display the name of the website or page title, addressing what the company describes as user hesitancy to click links when they are unsure where they lead.

Three additional changes target specific use cases. AI Mode and AI Overviews will begin labelling links from a user’s active news subscriptions so they stand out in results, a feature Google says early testing showed made users “significantly more likely” to click. AI responses will also surface previews of perspectives from public forums such as Reddit, social media, and other firsthand sources, with context including the creator’s handle or community name. And Google is expanding the display of product review cards and comparison features within AI Overviews for shopping queries, adding more direct links to retailer and review sites. Taken together, the five updates represent a concerted effort to make AI Overviews more porous: more links, more context around those links, and more reasons for users to click through to the websites that generated the information the AI is summarising.

The problem


The updates arrive in the context of an existential confrontation between Google and the publishers whose content powers its search engine. An Ahrefs study published in February 2026 found that AI Overviews correlate with a 58 per cent reduction in click-through rates for top-ranking pages, nearly double the 34.5 per cent decline documented in April 2025. The Pew Research Center found that only eight per cent of users click on traditional search results when an AI Overview is present, compared to 15 per cent when no overview appears. Digital Content Next, which represents major digital publishers, reported that most of its members experienced traffic losses between one and 25 per cent, with some reporting declines exceeding 75 per cent. Chartbeat data tracking more than 2,500 news sites globally showed that Google search referrals declined by 33 per cent in 2025.

The European Commission has told Google what it must do to share search data with rivals under the Digital Markets Act, proposing six specific areas of obligation including how Google must provide third-party search engines and AI chatbots with access to search index data. The EU has also launched a separate antitrust investigation into whether Google’s AI Overviews and AI Mode violate competition rules by using publisher content without appropriate compensation and without allowing publishers to refuse without losing access to Google Search. In the United States, the Department of Justice won its antitrust case against Google, with a federal judge prohibiting exclusive contracts relating to the distribution of Google Search and ordering behavioural remedies, though the DOJ is considering whether to appeal for additional structural relief.

The tension

Sundar Pichai’s vision for Google is to transform Search from a retrieval engine into an agent manager, a platform that does not merely find information but acts on it. The plan, articulated at Google Cloud Next 2026, positions AI agents as the next interface layer between users and the web, with Google’s models interpreting queries, synthesising answers, and executing tasks across services. The strategic direction is clear: Google wants users to interact with AI, not with websites. But the business model depends on those websites continuing to exist, continuing to produce content, and continuing to attract enough traffic that advertisers will pay to appear alongside their pages. The five updates announced on Tuesday are an attempt to square this circle, to keep AI Overviews as the primary interface while creating enough clickthrough to sustain the web ecosystem that feeds them.

Google’s repositioning of Chrome as an agentic AI workplace tool underscores the direction of travel. The browser that once existed to connect users to websites is being rebuilt as an autonomous agent that completes tasks without requiring users to visit individual sites at all. The trajectory from AI Overviews to agentic browsing to fully autonomous agents suggests that the five publisher-friendly updates are a tactical concession within a strategic movement that is structurally reducing the value of the open web to Google’s users. Publishers are aware of this tension. The European Publishers Council’s complaint specifically argues that Google’s approach amounts to a forced choice: accept unlicensed use of content for AI training and AI-generated answers, or risk losing the search traffic that sustains digital publishing.


The calculation

The economics of AI search are fundamentally different from the economics of link-based search. A user who receives a complete answer from an AI Overview has no incentive to click through to a publisher’s website. A publisher whose content is summarised in an AI Overview receives no compensation for the content used and no traffic from the summary generated. The advertising model that sustained both Google and publishers for two decades depended on imperfect information: users searched, found promising links, clicked through, consumed content, and encountered ads. AI Overviews collapse this chain by providing the answer directly, eliminating the click, and stranding the advertising that was attached to the destination page. Google is simultaneously investing billions in custom AI inference chips to reduce the cost of generating those overviews at scale, which means the economic incentive to expand AI answers to more queries will only intensify.

Google’s five updates attempt to rebuild some of the click incentive that AI Overviews have destroyed. Further Exploration sections add links. Subscription labels add familiarity. Inline context adds transparency. Forum perspectives add social proof. Product cards add commercial intent. Whether these additions are sufficient to reverse a 58 per cent decline in clickthrough rates, or whether they are window dressing on a structural shift that has already occurred, will be determined not by Google’s announcements but by the traffic data that publishers track in the months that follow. Google’s broader strategy is to make AI the interface for everything, from search to workspace to enterprise to commerce. The open web is the content layer that trains and feeds that interface. The question Google has not answered, and that Tuesday’s updates do not resolve, is what happens to the content layer when the interface no longer sends it traffic. The updates are a gesture. The trajectory is unchanged.

Source link



What’s The Average Lifespan Of A Lawn Mower Engine?






The arrival of spring means that many homeowners are heading back outside to take care of their yards. But unless your lawn mower is brand new, you might be wondering how much life it has left before you’ll need to replace the engine or the entire mower. The answer varies based on the type of mower you’re using, and there is no single standardized lifespan across manufacturers.

Some sources expect a gas-powered residential push mower engine to last anywhere from 450 to 1,500 hours, depending on the brand; this can equate to 10 years or more. Gas-powered riding lawn mowers can last anywhere from 5 to 9 years, or 500 to 1,500 hours — again, depending on the model. Electric lawn mowers can last anywhere from 5 to 7 years, but the lithium-ion batteries they use can last longer, potentially 7 to 10 years.

Commercial gas-powered lawn mower engines are different because those mowers are built to be more durable than their residential counterparts. That’s because they typically cover larger areas and have to handle more challenging conditions. So in terms of hours, these mowers can last from 1,200 up to 2,500 hours, or more, depending on the brand. That can work out to anywhere from 8 to 12 years.
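The hour ratings above translate into years only once you assume an annual usage figure. As a sketch, the division below uses illustrative assumptions (about 45 hours of mowing per year for a homeowner, about 200 for a commercial operator), which roughly reproduce the quoted year ranges:

```python
# Convert an engine's hour rating into expected years of service.
# Annual-usage figures are illustrative assumptions, not manufacturer data.

def lifespan_years(rated_hours: float, hours_per_year: float) -> float:
    """Years of service for a given engine-hour rating at a given usage rate."""
    return rated_hours / hours_per_year

# Residential push mower: a 450-hour engine at ~45 hours of mowing per year
print(lifespan_years(450, 45))    # 10.0 years

# Commercial mower: a 2,400-hour engine at ~200 hours per year
print(lifespan_years(2400, 200))  # 12.0 years
```

The same division explains why commercial ratings of 1,200 to 2,500 hours, used far more heavily each season, still land in the quoted 8-to-12-year range.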


How to get the most life out of your lawn mower engine

There are ways to ensure your mower engine lasts as long as possible, and it begins with regular maintenance. This includes keeping the blades sharp, so the engine isn’t forced to work harder than necessary. Keeping the deck clean also plays a part, because grass build-up can restrict airflow and put more strain on your engine. For gas-powered mowers, routine oil changes are important, as is regular maintenance of the engine and fuel system.

Electric lawn mowers require proper care as well, though they are a bit different from gas mowers. Cords, connections, and especially the battery system, are all important components and must be regularly maintained to ensure maximum life. This means cleaning the battery, charging it as needed, and keeping it in a cool, dry place when not in use.


Proper storage is equally important, and that means keeping the mower in a dry location that protects it from rain, snow, humidity, and direct sunlight. A garage, shed, or other well-ventilated and enclosed space is ideal. If you don’t have such a space and can only keep your mower outside, protect it with a waterproof tarp, though that should only be a temporary solution until you can store it somewhere better.






Is xAI a neocloud now?


On Wednesday, xAI and Anthropic announced a surprise partnership that has the Claude-maker buying out “all of the compute capacity at [xAI’s] Colossus 1 data center,” roughly 300MW that allowed Anthropic to immediately raise its usage limits. It’s a huge deal for xAI, likely worth billions of dollars. More importantly, it immediately monetized one of the company’s most impressive accomplishments, turning xAI from a consumer to a provider of compute. 

It’s tempting to see the arrangement as a shot at OpenAI amid the ongoing lawsuit. But Musk’s explanation on X was that xAI had already moved training to a newer data center, Colossus 2, and xAI simply didn’t need them both. 

In the short term, there’s an obvious logic at work. xAI’s existing products are mostly focused on Grok, which has seen plummeting usage since the image generation debacles earlier this year. If xAI’s data center buildout is much more than what Grok needs to operate, partnering with Anthropic adds a lot of green to the balance sheet. This is especially useful as the company, now combined with SpaceX, speeds toward an IPO. More broadly, having Anthropic lined up as a customer makes it easier to believe that SpaceX’s orbital data center play might actually work.

But beyond the short-term benefit, the Anthropic partnership sends an unusual message about where Elon Musk’s priorities really lie. It suggests the company’s real business may be more about building data centers than training AI models. 


It’s rare to see a major tech company treat compute resources this way when companies like Google and Meta, which are also training models, are building more data centers. It’s an easy point to miss, because so many of these companies are working as enterprise AI vendors, online services, and cloud providers all at once. But when forced to make a choice between selling more available compute to customers and preserving some to build their own tools, they reliably choose door No. 2. 

Just last month, Sundar Pichai admitted on a call that Google Cloud revenue was lower than it could have been because the company was “capacity constrained” — and when given the choice of renting out their GPUs or using them to develop AI products, Google chose the AI products. 

Facebook has faced a more extreme version of the same constraint, spinning up an entirely new cloud apparatus just to ensure they would have enough GPU power to chase Mark Zuckerberg’s AI ambition. As he put it when announcing Meta Compute in January, “How we engineer, invest, and partner to build this infrastructure will become a strategic advantage.” 


The key word there is “strategic.” Both Zuckerberg and Pichai are looking toward a future where AI is powering the most popular and lucrative systems in the world. Computing power isn’t just a way to satisfy today’s inference demand, but to build tomorrow’s products — and running short on compute means missing out on that chance.  


By focusing on data centers (earthbound and otherwise), xAI is positioning itself more like a neocloud business: buying GPUs from Nvidia and renting them out to model developers like Anthropic. It’s a far more difficult business, squeezed by both chip suppliers and the shifting cycles of demand. The valuations for most active neoclouds reflect that reality: xAI was valued at $230 billion in its January funding round; CoreWeave, which oversees a comparable quantity of computing power, is worth less than a third of that.

Musk’s version of a neocloud is more ambitious, as you might expect. Some of the data centers might be in space — at least by 2035, if things go according to plan. xAI will be making its own chips at the Terafab, which will take away some but not all of Nvidia’s pricing power. But none of it changes the basic economics of the neocloud business. 

As recently as the February all-hands, xAI had real ambitions in software. That was the presentation that unveiled the orbital data center project, but it also teased significant ambitions in coding (since bolstered by the Cursor partnership) and interesting ideas like leveraging computer use into full-scale digital twins (in the unfortunately named Macrohard project). These are the kind of long-horizon projects that need committed computing resources to succeed. As long as xAI is selling large quantities of compute to its competitors, it’s hard to think such new ambitions have much of a future.



Google’s AI Search Results Will Now Turn To Reddit For Expert Advice


Google is updating AI Overviews and AI Mode, the AI-generated portions of its search engine, to highlight sources in new ways, and interestingly, more prominently feature first-hand accounts from social media, expert blogs and forums like Reddit.

Via a new section that can appear in AI responses, Google will display “a preview of perspectives from public online discussions, social media and other firsthand sources.” In the sample screenshot the company provided, the section was called “Expert Advice” and included quotes from forums, WordPress blogs and Reddit. These were arranged above links to their respective sources. Google plans to add more context to these links, too, showing “a creator’s name, handle or community name,” so you can judge what you might want to click through and read from a glance.

Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account.

Given the rapid progress of AI in general, AI Overviews and AI Mode have been pretty consistently iterated on since Google launched them in 2024 and 2025, respectively. Pulling from Reddit and other online social platforms isn’t exactly a new strategy for the company, either — at least one early AI Overview hallucination was caused by information from Reddit. It is perhaps telling that Google plans to cite the platform more prominently now, though, because Reddit is considered by some to be a more useful source of information than Google. Even before this update, the search engine had been prominently featuring Reddit links in standard search results.


Whether adding more links and recommending long-form reporting makes a meaningful difference for the dwindling number of publications Google pulls from is another story, however. As of 2025, Google claimed that its AI search tools were leading to more searches and more “high-quality clicks” on the websites it cites. Regardless of how much the company tinkers with its AI responses, though, one outcome of AI Overviews and AI Mode is the creation of scenarios where you don’t have to click away to another website at all, because Google answered your question for you.

Update, May 6, 5:30PM ET: This story was updated after publication to include information from Google that the title of the section is dynamic, rather than always being called Expert Advice.



This new smart ring lasts twice as long as Samsung’s Galaxy Ring


RingConn has launched its latest smart ring, and it’s going straight after Samsung with one standout claim: battery life that’s double that of the Galaxy Ring.

The new RingConn Gen 3 lasts up to 14 days on a single charge (with the caveat that haptics are turned off) compared to the seven-day estimate on Samsung’s Galaxy Ring. That alone makes it one of the longer-lasting smart rings currently on the market, and appeals to users who don’t want another device to charge every few days.

Battery aside, RingConn is also leaning further into health tracking. The Gen 3 focuses on vascular health insights, alongside features like sleep tracking and sleep apnea monitoring. There’s also a planned blood pressure tracking feature, but it’s not available at launch and will instead arrive in a future update. As expected, it’s positioned as a trend-tracking tool rather than a medical-grade replacement.

One of the more unusual additions is a built-in vibration module, something you don’t typically see in smart rings. It can deliver haptic alerts for things like health reminders or low-battery warnings, but it notably doesn’t support notifications for messages or apps, which limits its usefulness somewhat.

In terms of design, the Gen 3 remains lightweight, at 2.5-3.5g, depending on size, putting it roughly in line with Samsung’s alternative. It’s available in sizes 6 through 15 and comes in five colour options, with support for both iOS and Android via the companion app. There’s also no subscription fee for accessing health data, which could be a draw for users put off by ongoing costs.

Pricing starts at $349. However, a pre-order discount drops it to $314 (or $332 for premium finishes) until May 28, 2026.

On paper, the RingConn Gen 3 looks like a serious alternative to Samsung’s Galaxy Ring, particularly if battery life is high on your priority list.

Source link

Continue Reading

Tech

YG Acoustics Titan Loudspeakers Are Really $1 Million Per Pair

Published

on

The ultra high-end loudspeaker category has been rather busy in 2026, which is either a sign that the market still has serious money to spend or that nobody in this business knows how to tap the brakes. Wilson Audio has already made a very large statement with the $788,000 per pair Autobiography, while Børresen’s M8 Gold Signature pushes the conversation into seven figure territory. Now YG Acoustics has entered the same rarefied air with Titan, the first model in its new flagship Ultimate Range and arguably the most talked about new loudspeaker at AXPONA 2026.

This is not a speaker aimed at the casually curious. Titan is YG's attempt to plant a flag at the top of the mountain, where the air is thin, the rooms are large, and your bank account had better not flinch when the invoice is opened.

Attack of the Titans?

Five years of research and three years of product development later, YG Acoustics has arrived at Titan, the most ambitious loudspeaker the company has released to date. It is the first model in YG’s new Ultimate Range and is designed to showcase the brand’s latest engineering work while pushing further into the ultra high end loudspeaker category.

Titan is not trying to disappear visually. It stands approximately 7 feet tall, or 84.5 inches, and weighs about 1,000 pounds. That is half a ton before the crates, the room treatment, and the quiet conversation with your financial advisor. Inside the cabinet, YG uses a seven driver symmetrical array supported by a custom designed sub bass driver.


YG Acoustics Titan Versions & Pricing

YG Acoustics offers the Titan in three configurations, giving ultra high-end buyers some room to choose between passive operation, active sub bass support, and a fully active system. “Some room” being relative when the starting point is $880,000 per pair.

Titan Passive: $880,000 per pair

The Titan Passive is a fully passive five-way loudspeaker that uses an external crossover with YG Acoustics’ Ultracoherent circuits. Crossover points are specified at 35 Hz, 90 Hz, 360 Hz, and 1.85 kHz.

Titan with Active Sub: $910,000 per pair

The Titan with Active Sub uses a passive four-way system with an external crossover and Ultracoherent circuits at 90 Hz, 360 Hz, and 1.85 kHz. Sub-bass duties are handled by a dedicated external 1,000-watt amplifier with DSP tuned crossover control for the 12.5-inch (32 cm) driver, including room correction support. The active sub-bass section is time aligned to integrate with the passive drivers.

YG Titan sub-bass driver and 1 kW amplifier (Active Sub option)

Titan Live: $1,000,000 per pair

The Titan Live is the fully integrated active version of the YG Acoustics Titan. It includes dedicated amplification and DSP for each channel, covering the high, midrange, low, and sub bass sections. Each tower is partnered with an external amplifier delivering 8 x 700 watts through an optimized DSP crossover.

The system connects to the Live Controller using glass fiber optic cables, with support for Roon Ready streaming, analog and digital inputs, and a high quality phono stage. In other words, this is the version for buyers who want the full YG ecosystem, not just a pair of loudspeakers and a nervous conversation with their amplifier dealer.

The Drivers

High frequencies are handled by YG's lattice tweeter, set inside a unique oval waveguide and flanked by proprietary YG aluminum cone drivers arranged symmetrically above and below it: a pair of 15 cm (6-inch) midrange drivers, a pair of 18.5 cm (7.25-inch) mid-bass drivers, and a pair of 26 cm (10.25-inch) bass drivers.

The tweeter, midrange, mid-bass, and bass array is supported by a custom 32 cm (12.5 inch) YG aluminum cone sub bass driver with an ultra high field magnet structure. YG says each driver is phase aligned across a wide frequency range to support a point source presentation.

The goal is improved clarity, more precise imaging, and a wider listening area. That matters with a loudspeaker this large, because nobody spending this kind of money wants to sit with their head locked in one exact spot like they are being scanned for replicant behavior.

More details are included in the specifications chart later in the article.

Driver array includes:

  • One 26 mm (1”) tweeter: Proprietary YG Lattice hybrid construction.
  • Dual 15 cm (6 inch) midrange drivers: Proprietary YG aluminum cone drivers with neodymium magnets and close pair matching.
  • Dual 18.5 cm (7.25 inch) mid-bass drivers: Proprietary YG aluminum cone drivers with neodymium magnets and close pair matching.
  • Dual 26 cm (10.25 inch) bass drivers: Proprietary YG aluminum cone drivers with ultra-high-field magnets and close pair matching.
  • One 32 cm (12.5 inch) sub-bass driver: Proprietary YG aluminum cone driver with an ultra-high-field magnet and close pair matching.

Cabinet Construction

The Titan cabinet uses a five layer construction, another example of YG Acoustics’ focus on structural rigidity, resonance control, and fit and finish.

The side panels use three layers of aerospace grade aluminum alloy, with precision engineered damping materials between them. That creates the five layer structure and gives the cabinet additional stiffness without relying on brute mass alone.

The front faceplates are machined from solid aerospace grade aluminum measuring 75 mm, or 3 inches, thick. YG says the aluminum is heat treated to optimize its crystalline structure before being milled to extremely tight tolerances.

Inside the cabinet, Titan uses advanced bracing, composite resin damping, and computationally optimized diffuser and absorber structures to reduce internal resonances and help the drivers operate with fewer cabinet related distortions.

In addition, the subwoofer section at the bottom maximizes cabinet volume by tapping into a channel that runs up the entire back of the Titan.

The Passive Crossover

The passive crossover is where much of the engineering effort shows. It is built on bespoke multi-layer PCB material with advanced dielectric properties and internal resonance damping.


Each crossover circuit is precisely milled in-house from extra-thick, high-purity copper, with traces optimized through simulation to eliminate interference and distortion. The crossover is housed externally in an enclosure crafted entirely from a specially selected polymer material. The external placement eliminates any field interactions with the crossover signal.

YG Acoustics Titan Specifications

YG Acoustics Model: Titan
Speaker Design: 5-way (available in Passive, Active Sub, and Live versions)
Price per pair: $880,000 (Passive); $910,000 (with Active Sub); $1,000,000 (Live, fully active)
Cabinet Construction: Side panels are three aerospace aluminum layers with two damping layers in between, for a total of five. Front and back are 3-inch-thick monolithic slabs of aerospace aluminum with specially engineered damping chambers.
Speaker Type: Floorstanding loudspeaker
Tweeter: One 26 mm (1”) proprietary YG Lattice hybrid tweeter
Midrange: Dual 15 cm (6”) proprietary YG aluminum cone midrange drivers with neodymium magnets and exceptional matching
Mid-Bass: Dual 18.5 cm (7.25”) proprietary YG aluminum cone mid-bass drivers with neodymium magnets and exceptional matching
Woofer: Dual 26 cm (10.25”) proprietary YG aluminum cone bass drivers with ultra-high-field magnets and exceptional matching
Sub-Bass: One 32 cm (12.5”) proprietary YG aluminum cone sub-bass driver with ultra-high-field magnets and exceptional matching
Crossover (Passive): External crossover with Ultracoherent circuits at 35 Hz, 90 Hz, 360 Hz, and 1.85 kHz
Frequency Response: Usable output extends from below 20 Hz to 40 kHz
Sensitivity: 88 dB
Impedance: 4 ohms average, 2.2 ohms minimum
Dimensions (H x W x D): 215 x 54 x 108 cm (84.5 x 21.5 x 42.5 inches)
Weight: 455 kg (1,000 lbs) per tower, unpacked
YG Acoustics Titan loudspeaker with external crossover behind the speaker.

The Bottom Line 

The YG Acoustics Titan is not just the first model in the company’s new Ultimate Range. It is YG’s statement that it wants a seat at the same very expensive table as the Wilson Audio Autobiography, Børresen M8 Gold Signature, and Sonus faber Suprema. That table does not come with a kids’ menu.

What makes Titan interesting is not one single trick feature but the complete system approach: a massive seven driver symmetrical array, a dedicated 12.5-inch sub bass driver, five layer cabinet construction, phase aligned driver integration, and three available configurations (passive, active sub bass, and fully active Titan Live). That flexibility is unusual for a loudspeaker at this level, especially when buyers can choose between traditional external amplification, active low frequency support, or a complete YG controlled ecosystem with amplification, DSP, streaming, digital and analog inputs, and a phono stage.

Prospective buyers need to think beyond the loudspeaker price. The passive and active sub versions still require serious amplification, sources, cabling, setup expertise, and a room large enough to let a 7 foot tall, 1,000 pound loudspeaker breathe. With output extending below 20 Hz and up to 40 kHz, a 4 ohm average impedance, and a 2.2 ohm minimum load, Titan is not something you casually drop into a living room next to the sectional and hope for the best. It needs space, power, proper setup, and a system built around it.

Titan is aimed at listeners who already understand the cost of entry in the ultra high-end category and want YG’s most ambitious expression of scale, control, and engineering. Everyone else can admire it from a safe distance, preferably outside the freight elevator.

YG Acoustics Titan in special gold finish that could double the price.

Price & Availability

YG Acoustics Titan (aluminum version) is available now through Authorized Dealers in three configurations:

  • Titan Passive – $880,000 per pair
  • Titan w/Active Sub – $910,000 per pair
  • Titan Live – $1,000,000 per pair

A gold version is also available for roughly $2 million per pair, depending on market pricing and thickness of gold plating.

Sound Clip from AXPONA 2026

For more information: yg-acoustics.com

Source link

Continue Reading

Tech

Google’s AI Search Results Will Now Turn To Reddit For ‘Expert Advice’

Published

on

Google is updating AI Overviews and AI Mode to more prominently surface “Expert Advice” from public discussions, social platforms, forums, blogs, and Reddit. Engadget reports: Via a new “Expert Advice” section that can appear in AI responses, Google will display “a preview of perspectives from public online discussions, social media and other firsthand sources.” In the sample screenshot the company provided, quotes from forums, WordPress blogs and Reddit were arranged above links to their respective sources. Google plans to add more context to these links, too, showing “a creator’s name, handle or community name,” so you can judge at a glance what you might want to click through and read.

Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account.

Source link

Continue Reading

Tech

Inside the 15,500 malicious domains secretly using ad trackers to push AI investment scams across the web

Published

on


  • 15,500 domains were actively used to deliver cloaked AI investment scams
  • Cloaking ensures harmful content is shown only to targeted victims
  • Commercial tracking software allows cybercriminals to scale operations without building infrastructure

Cloaking has shifted from a supporting tactic into a central layer of cybercriminal infrastructure, and commercial tools are now widely embedded in cybercrime operations at scale.

A four-month analysis of malicious activity by Infoblox and Confiant identified roughly 15,500 domains linked to malicious tracker deployments.

Source link

Continue Reading


Copyright © 2025