
Tech

Flood prediction at local level key, say StopFloods4.ie researchers


The National Challenge Fund-supported project StopFloods4.ie is developing a smart flood forecasting system.

January hit Ireland hard this year as Storm Chandra swept the country towards the end of the month, causing heavy flooding, damaging dozens of homes and disrupting lives – all in a matter of days.

While the total cost of damages and repairs hasn’t been made public yet, Fianna Fáil MEP Barry Andrews expects it to be “significant”. Following the storm, the Department of Social Protection announced emergency response payments for those affected.

Emergency teams were “caught by surprise”, according to Keith Leonard, the national director of the National Directorate for Fire and Emergency Management, who spoke to RTÉ’s Morning Ireland. He said that they “just weren’t expecting those levels of rainfall”.


And sure enough, a study released following the events found that it was rainfall accumulation over the week preceding Storm Chandra that made the flooding so devastating.

The rapid study, conducted by climate scientists at the ICARUS Climate Research Centre in Maynooth University and at Met Éireann, also found that Ireland is now a staggering three times more likely to experience a similar amount of rainfall in a week as a result of the climate crisis. Human-induced warming is a major factor in the problem.

Of course, there are lessons to be learnt, whether it’s the Government’s reported dependency on external consultants to assist with storm planning, or, according to University of Galway scientists Dr Indiana Olbert and Dr Thomas McDermott, the need for better support and data at a local level.

Speaking on RTÉ’s Drivetime show, Maynooth University climatologist Prof John Sweeney seconded the Galway scientists’ recommendation. He said: “What we need to do is change the way in which the public are alerted at a smaller scale, to what might be happening in their own catchment area.”


AI-powered forecasting

Olbert and McDermott are leading a unique – and aptly named – project called StopFloods4.ie, which recently secured €1.3m in funding as part of the Digital for Resilience Challenge, part of the National Challenge Fund.

StopFloods4.ie is developing an AI-powered flood forecasting and decision-support system which integrates meteorological, tidal and river flow data.

By transforming fragmented data into actionable insights, the collaborative project – supported by the flood forecasting centre at Met Éireann, Cork City Council and local authorities – aims to equip emergency managers and communities with the means to anticipate, prepare for and respond to flood threats more effectively.

“What’s needed for better preparedness and response to flooding is primarily time,” say Olbert and McDermott, in a joint response to SiliconRepublic.com.


“That means people at a local level need to know in good time what the risk of flooding is and where is most likely to be affected. Currently that information is not available at the local scale.

“Warnings or alerts are issued at the county scale – meaning that many people who receive them are unlikely to be affected. While for those who are at risk, there is still doubt about whether their own location will be affected,” they added.

StopFloods4.ie’s technology predicts flooding on a street-by-street basis, providing timely, consistent and local information to decision makers, they explained.
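The project’s own models aren’t public, but the kind of data fusion described – blending rainfall, tide and river-gauge readings into a street-level risk figure – can be sketched as a toy scoring function. Every feature, weight, threshold and street reading below is invented purely for illustration:

```python
# Toy sketch of street-level flood-risk scoring from fused data sources.
# Features, weights and readings are invented; StopFloods4.ie's actual
# models are not public.
from dataclasses import dataclass
import math

@dataclass
class StreetReading:
    name: str
    rain_72h_mm: float        # accumulated rainfall over 72 hours
    tide_anomaly_m: float     # tide height above astronomical prediction
    river_pct_bankfull: float # river gauge as a fraction of bankfull

def flood_risk(s: StreetReading) -> float:
    """Logistic combination of the inputs (weights are placeholders)."""
    z = (0.01 * s.rain_72h_mm
         + 2.0 * s.tide_anomaly_m
         + 3.0 * (s.river_pct_bankfull - 0.8))
    return 1 / (1 + math.exp(-z))

streets = [
    StreetReading("Low-lying quay street", 110, 0.6, 0.95),
    StreetReading("Hillside street", 110, 0.0, 0.40),
]
for s in streets:
    print(s.name, round(flood_risk(s), 2))
```

The point of the sketch is only that the same citywide inputs can yield very different per-street scores once local factors (elevation, proximity to river and tide) are weighted in.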

The project’s pilot test site is currently situated in Cork city, given its historic vulnerability to flooding and population concentration. According to the project leads, they plan to fully roll out their system for use by Cork authorities in two years’ time.


They have also begun to pilot their technology in other areas, starting with other major urban areas with high flood exposure such as Galway, and eventually plan to expand it to the rest of the country.

Last week, Met Éireann reported that this past January was Ireland’s wettest since 2018, with 123pc of the long-term average rainfall overall.

Climate models show a “strong trend towards wetter winters, and more extreme [and] intense downpours”, StopFloods4.ie leads say.

“These extremes are becoming more frequent and as a result we can expect more frequent and more extensive flooding.”


They add: “Places that are currently at risk will see more frequent [and] intense floods, while new risks will also be created.”

The team hopes that their AI-powered solution will help by providing timely information to allow those at risk to prepare, which would ultimately reduce the costs and impacts of flooding for affected communities.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.


Tech

Bluesky finally adds drafts | TechCrunch


Social network Bluesky is finally rolling out one of users’ most-requested features: drafts. Bluesky’s competitors, X and Threads, have long supported the ability to write drafts, which is seen as a baseline feature for services like this.

Users can access drafts on Bluesky the same way they do on these other platforms, which is by opening the new post flow and selecting the Drafts button in the top-right corner.

The rollout of drafts comes as Bluesky recently teased its roadmap for the year ahead. The company said it plans to focus on improving the app’s algorithmic Discover feed, offering better recommendations on who to follow, and making the app feel more real-time, among other updates. At the same time, the company acknowledged that it still needs to get the basics right.

Although Bluesky has gained a loyal user base, it still lags behind rivals when it comes to basic features, like private accounts and support for longer videos.


Launched to the public in early 2024, Bluesky has since scaled to over 42 million users, according to data sourced directly from the Bluesky API for developers.


Tech

World-First Supercomputer Discovered This Invisible Flaw In All Jet Engines






Jet engine technology is among the most advanced means of propulsion in the skies today. From commercial airliners to military fighter planes, these massive engines can be heard roaring overhead in countries around the world. But while jet engines are powerful, and just keep getting bigger, they all share one common problem: small imperfections that negatively affect performance. It’s a major flaw that wasn’t discovered until late January 2026.

Frontier, the world’s first exascale supercomputer, located at Oak Ridge National Laboratory, is responsible for catching the flaw, which became visible during high-resolution simulations. The simulations revealed surface roughness on jet engine turbine blades, which are found in both turbojet and turbofan engines. That roughness can lead to a loss of fuel efficiency and more heat being generated. Over time, this can shorten the life of the blades and require more maintenance to keep the engine’s components from malfunctioning. These imperfections aren’t manufacturing defects, and spotting them sooner would not have been possible, given the tremendous computing power it took Frontier to find them.

But identifying the problem is just the first step, as the Frontier’s findings are now being used to inform future jet engine design and construction. While it might be impossible to fully remove all surface imperfections, turbines can be engineered to compensate and overcome the flaws. Plus, thanks to the data Frontier gathered, cooling the jet engine’s turbine blades will now be more of a focus moving forward.


Frontier’s capability beyond jet engines

The Frontier supercomputer’s findings regarding jet engine flaws are the result of the US Department of Energy’s (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. But jet engine research isn’t the only work being done through INCITE on Frontier: 81 other projects were selected by the DOE in 2025. Those projects involve research across a variety of fields, including cosmic ray transport and drug discovery using quantum-AI.


The Frontier is in high demand due to its ability to perform one quintillion calculations per second. This is an astronomical amount of computational power and makes the Frontier more capable than any of the supercomputers that came before it. It can process so much complex data at one time that it’s opening doors in physics, machine learning, and more. But as the Frontier is helping researchers take some tremendous strides forward, it’s using a lot of energy in the process.

For all its advanced design, Frontier consumes anywhere from 8 megawatts to 30 megawatts of electricity – enough to power several thousand residential homes. That much energy produces an enormous amount of heat, which is managed by a complex cooling system that pumps roughly 2,378 to just under 6,000 gallons of water per minute in a closed loop to keep everything running smoothly. However, the waste heat isn’t easily redirected, so in the end quite a bit of it cannot be reused.
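As a rough sanity check on the “several thousand homes” figure, here is the arithmetic, assuming an average continuous household draw of about 1.2kW – a round placeholder figure, not one from the article:

```python
# Back-of-the-envelope check: how many homes could Frontier's power
# draw supply? The 1.2 kW average household figure is an assumption.
POWER_MW_LOW, POWER_MW_HIGH = 8, 30
AVG_HOME_KW = 1.2  # assumed average continuous household draw

homes_low = int(POWER_MW_LOW * 1000 / AVG_HOME_KW)
homes_high = int(POWER_MW_HIGH * 1000 / AVG_HOME_KW)
print(f"{homes_low:,} to {homes_high:,} homes")
```

At the low end of the range the claim holds comfortably; at the 30MW peak the figure is closer to tens of thousands of homes.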


Tech

ChatGPT could be a killer feature for CarPlay


Apple may soon expand the range of apps allowed on CarPlay by opening the dashboard to third-party AI chatbots, a change that would mark a notable shift in how the in-car system handles voice interaction and information access.

A new report from Mark Gurman of Bloomberg outlines plans for Apple to permit AI services such as ChatGPT and Google Gemini to appear within the CarPlay interface, potentially arriving within the next few months as part of upcoming software updates.

CarPlay has historically limited third-party apps to tightly controlled categories like navigation, audio, and messaging, reflecting Apple’s cautious approach to distraction, safety, and consistency within the vehicle environment.

Allowing conversational AI tools onto the dashboard would expand CarPlay’s functionality beyond command-based voice controls, enabling more flexible interactions such as contextual questions, summaries, and multi-step requests while driving.


According to the report, Siri would remain the default voice assistant on CarPlay, but users could actively launch alternative AI assistants from the dashboard rather than relying solely on Apple’s built-in system.


Apple CarPlay next gen

This approach mirrors Apple’s recent software strategy, which increasingly supports external AI services alongside its own tools rather than insisting on a single, vertically integrated assistant experience.

CarPlay and Apple’s broader AI direction

The reported CarPlay changes follow Apple’s recent integration of ChatGPT into Siri on iOS 18, which introduced large language model capabilities without fully replacing Apple’s existing voice assistant framework.

Apple routes certain Siri requests to ChatGPT when a query requires more advanced generative responses, while the system retains control over device permissions and core system actions.


Extending similar access to CarPlay would bring generative AI into a context where hands-free interaction is especially valuable, potentially improving navigation queries, travel planning, and general information requests during longer drives.

The timing also reflects ongoing challenges in Apple’s internal AI roadmap, as the company has delayed a major next-generation Siri overhaul while competitors continue to roll out increasingly capable conversational systems.


Earlier announcements confirmed Apple’s plans to incorporate Google’s AI models into future versions of Siri, though that partnership focuses on backend processing rather than user-visible chatbot interfaces.


The Bloomberg report does not specify which iOS version will introduce expanded AI support for CarPlay, though additional iOS 26 updates are expected before Apple’s next Worldwide Developers Conference in June.

Apple has not publicly confirmed support for third-party chatbots on CarPlay, and details around regional availability, app approval requirements, and developer access remain unclear at this stage.

If implemented, the change would represent one of the most significant functional expansions to CarPlay in recent years, with rollout timing likely tied to broader iOS updates rather than a standalone release.


Tech

India makes Aadhaar more ubiquitous, but critics say security and privacy concerns remain


India is pushing Aadhaar, the world’s largest digital identity system, deeper into everyday private life through a new app and offline verification support, a move that raises new questions about security, consent, and the broader use of the massive database.

Announced in late January by the Indian government-backed Unique Identification Authority of India (UIDAI), the changes introduce a new Aadhaar app alongside an offline verification framework that allows individuals to prove their identity without real-time checks against the central Aadhaar database. 

The app allows users to share a limited amount of information – such as confirming that they are over a certain age rather than revealing their full date of birth – with a range of services, from hotels and housing societies to workplaces, platforms and payment devices, while the existing mAadhaar app continues to operate in parallel for now.

Alongside the new app, UIDAI is also expanding Aadhaar’s footprint in mobile wallets, with upcoming integration with Google Wallet and discussions underway to enable similar functionality in Apple Wallet, in addition to existing support on Samsung Wallet. 

The new Aadhaar app with selective data sharing. Image credits: Google Play

The Indian authority is also promoting the app’s use in policing and hospitality. The Ahmedabad City Crime Branch has become the first police unit in India to integrate Aadhaar-based offline verification with PATHIK, a guest-monitoring platform launched by the police department, aimed at hotels and guest accommodations to record visitors’ information.

UIDAI has also positioned the new Aadhaar app as a digital visiting card for meetings and networking, allowing users to share selected personal details via a QR code.

Officials at the launch in New Delhi said these latest efforts are part of a broader effort to replace photocopies and manual ID checks with consent-based, offline verification. The approach, they argued, is meant to give users more control over which specific identity information they want to share, while enabling verification at scale without having to query Aadhaar’s central database.
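The article doesn’t detail UIDAI’s cryptography, but the consent-based offline flow officials describe – share a derived claim rather than the underlying document, verifiable without a database lookup – can be sketched in miniature. The HMAC below is only a stand-in for whatever digital-signature scheme the real system uses, and all keys and claims are invented:

```python
# Illustrative selective-disclosure sketch: the holder shares a derived
# claim ("over 18"), not the full date of birth, and the verifier checks
# it offline. An HMAC stands in for the issuer's real signature scheme.
import hashlib
import hmac
import json
from datetime import date

ISSUER_KEY = b"issuer-demo-key"  # hypothetical issuer secret

def issue_claim(dob: date, today: date) -> dict:
    """Issuer derives a minimal claim from the birth date and signs it."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    claim = {"age_over_18": age >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_offline(token: dict) -> bool:
    """Verifier checks the signature without querying any central database."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue_claim(date(1990, 5, 1), date(2026, 2, 1))
print(token["claim"], verify_offline(token))
```

The design point is that the hotel or housing society never sees the date of birth itself, only the signed yes/no claim – which is the “consent-based, offline verification” the officials describe.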



Early uptake on top of massive scale

While UIDAI formally launched the new Aadhaar app last month, it had been in testing since earlier in 2025. Estimates from Appfigures show that the app, which appeared in app stores toward the end of 2025, quickly overtook the older mAadhaar app in monthly downloads. 

Combined monthly installs of Aadhaar-related apps rose from close to 2 million in October to nearly 9 million in December.


The new app is being layered onto an identity system that already operates at enormous scale considering India’s population. Figures published on UIDAI’s public dashboard show that Aadhaar has issued more than 1.4 billion identity numbers and handles roughly 2.5 billion authentication transactions each month, alongside tens of billions of electronic “know your customer” checks since its launch. 

The shift toward offline verification does not replace this infrastructure so much as extend it, moving Aadhaar from a largely backend verification tool into a more visible, everyday interface.

At the app’s launch, UIDAI officials said the move toward offline verification was intended to address long-standing risks associated with physical photocopies and screenshots of Aadhaar documents, which have often been collected, stored, and circulated with little oversight.

The expansion comes at a time of regulatory changes, easing restrictions and a new framework, with UIDAI now allowing some public and private organizations to verify Aadhaar credentials without querying the central database.


Civil liberties and digital rights groups say those legal changes do not resolve Aadhaar’s deeper structural risks. 

Raman Jit Singh Chima, senior international counsel and Asia Pacific policy director at Access Now, said the expansion of Aadhaar into offline and private-sector settings introduces new threats, particularly at a time when India’s data protection framework is still being put in place.

Chima questioned the timing of the rollout, arguing that the federal government should have waited for India’s Data Protection Board to be established first, and allow for independent review and wider consultation with affected communities.

“The fact that this has gone ahead at this point of time seems to indicate a preference to continue the expansion of the use of Aadhaar, even if it is unclear in terms of the further risks that it might pose to the system, as well as to the data of Indians,” Chima told TechCrunch.


Indian legal advocacy groups also point to unresolved implementation failures. 

Prasanth Sugathan, legal director at New Delhi-based digital rights group SFLC.in, said that while UIDAI has framed the app as a tool for citizen empowerment, it does little to address persistent problems – such as inaccuracies in the Aadhaar database, security lapses and poor mechanisms for redress – which have disproportionately affected vulnerable populations.

He also cited a 2022 report by India’s Comptroller and Auditor General, which found UIDAI had failed to meet certain compliance standards.

“Such issues can often result in disenfranchisement of people, especially those who were meant to be benefited by such systems,” Sugathan told TechCrunch, adding that it remains unclear how data shared through the new app would be protected against breaches or leaks.


Campaigners associated with Rethink Aadhaar, a civil society campaign focused on Aadhaar-related rights and accountability, argue that the offline verification system risks reintroducing private-sector use of Aadhaar in ways the Supreme Court has already explicitly barred. 

Shruti Narayan and John Simte of the group said enabling private entities to routinely rely on Aadhaar for verification amounts to “Aadhaar creep”, normalizing its use across social and economic life despite a 2018 judgment that struck down provisions allowing private actors to use Aadhaar to verify people’s information. They warned that consent in such contexts is often illusory, particularly in situations involving hotels, housing societies, or delivery workers, while India’s data protection law remains largely untested.

Together, the new app, regulatory changes, and expanding ecosystem are shifting Aadhaar from a background identity utility into a visible layer of daily life that is increasingly hard to avoid. As India doubles down on Aadhaar, governments and tech companies are watching closely, attracted by the promise of population-scale identity checks.

The Indian IT ministry and UIDAI CEO did not respond to requests for comment.


Tech

Why Haven’t Quantum Computers Factored 21 Yet?


If you are to believe the glossy marketing campaigns about ‘quantum computing’, we are on the cusp of a computing revolution, yet back in the real world things look a lot less dire. At least if you’re worried about quantum computers (QCs) breaking every conventional encryption algorithm in use today, because at this point they cannot even factor 21 without cheating.

In the article, [Craig Gidney] explains the basic problem, which comes down to simple exponentials. Specifically, the number of quantum gates required to perform factoring increases exponentially: factoring 15 in 2001 took a total of 21 two-qubit entangling gates, while extrapolating from the circuit used, factoring 21 would require 2,405 gates, or 115 times more.

Explained in the article is that this is due to how Shor’s algorithm works, along with the overhead of quantum error correction. Obviously this puts a bit of a damper on the concept of an imminent post-quantum cryptography world, with a recent paper by [Dennis Willsch] et al. laying out the issues that both analog QCs (e.g. D-Wave) and digital QCs will have to solve before they can effectively perform factorization, such as a digital QC needing several million physical qubits to factor 2048-bit RSA integers.
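The only step Shor’s algorithm performs quantumly is order finding; everything else is classical. Done classically (and slowly, by brute force), the whole factoring recipe fits in a few lines, which makes the gate-count extrapolation above easier to appreciate:

```python
# Classical illustration of Shor's algorithm: find the period r of
# a^x mod N (the quantum part, brute-forced here), then derive factors.
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1; assumes gcd(a, n) == 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple[int, int]:
    r = find_order(a, n)
    assert r % 2 == 0, "need an even order; pick another base a"
    f = gcd(pow(a, r // 2) - 1, n)
    return f, n // f
```

For N = 21 with base a = 2, the order is 6, and gcd(2³ − 1, 21) = 7 recovers the factors 7 and 3. A quantum computer earns its keep only by finding that period r for numbers far too large to brute-force.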


Tech

Tem raises $75M to remake electricity markets using AI


As AI data centers drive up electricity prices, London-based startup Tem thinks AI might be able to help fix that, too.

Tem has built an energy transaction engine that relies on AI to cut prices relative to other energy traders. The company has signed up more than 2,600 business customers throughout the U.K. on the promise that buying energy from its utility division can save them up to 30% on their energy bills.

The startup recently closed an oversubscribed $75 million Series B led by Lightspeed Venture Partners with participation from AlbionVC, Allianz, Atomico, Hitachi Ventures, Revent, Schroders Capital, and Voyager Ventures, TechCrunch has exclusively learned. 

The round values Tem at more than $300 million, a source familiar with the deal told TechCrunch. The startup plans to use the funding to help expand to Australia and the U.S., starting with Texas.


“We’re in a nice position where we kind of have control over our own profitability. So I could have chosen not to raise at all and had a lovely, nice bootstrap business in some ways,” Joe McDonald, co-founder and CEO of Tem, told TechCrunch. “Well, we’re not that kind of business. We know what we want to achieve as someone who wants to go public over the years.”

Tem is a classic marketplace play, matching electricity generators with consumers. The company intentionally started by focusing almost exclusively on renewable energy generators and small businesses to fill both sides of the ledger. “The more decentralized and the more distributed, the better it is for the algorithms,” McDonald said. “But this works all the way up to enterprise.”

The company’s customers include fast-fashion retailer Boohoo Group, soft drink company Fever-Tree, and Newcastle United FC. 


Currently, Tem is running what amounts to two different businesses. One, called Rosso, is the transaction engine that matches suppliers with buyers. Here, machine learning algorithms and LLMs help predict supply and demand. 


The goal with Rosso, McDonald said, is to cut costs by eliminating several layers that are present in current energy markets. “In each of them, you’ve got different teams doing different jobs, taking different levels of profit from back office to trading, trading desks to other trading desks, and probably five to six intermediaries in total that enable the flow of money to move from one side to the other,” he said.

With AI, he said, “you now have an opportunity to replace the humans, the labor costs, and the disparate systems into one single transaction infrastructure.” The goal is to make the price that customers pay for electricity closer to the wholesale cost.
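Tem’s engine isn’t public, but the underlying marketplace step – clearing generator offers against demand so buyers pay close to wholesale – can be illustrated with a minimal merit-order sketch. The offers, volumes and uniform-price rule here are invented for illustration, not a description of Rosso:

```python
# Toy merit-order clearing: match generator offers to buyer demand in
# price order, cheapest first, and set a uniform clearing price from
# the last accepted offer. All numbers are invented.
def clear_market(offers, demand_mwh):
    """offers: list of (price_per_mwh, volume_mwh) tuples."""
    matched, remaining = [], demand_mwh
    for price, volume in sorted(offers):  # cheapest offers win first
        if remaining <= 0:
            break
        take = min(volume, remaining)
        matched.append((price, take))
        remaining -= take
    # Uniform price: every accepted offer is paid the marginal price.
    clearing_price = matched[-1][0] if matched else None
    return matched, clearing_price

offers = [(55, 40), (30, 50), (42, 30)]  # (price in £/MWh, volume in MWh)
matched, price = clear_market(offers, demand_mwh=70)
print(matched, price)
```

In this sketch, 70MWh of demand is filled by the £30 and £42 offers and clears at £42/MWh; the point is that the matching itself is cheap to automate once intermediaries are out of the loop.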

The other part of Tem, called RED, is a “neo-utility” built to prove the value of Rosso.

“When we first started, we tried to sell our infrastructure to the energy companies, and we got nowhere,” he said. RED is currently the only utility using Rosso, and McDonald said its growth has pushed the company to prioritize it over opening Rosso to others.


At some point, though, Tem plans to allow other utilities in.

“In reality, it doesn’t matter how good [RED] is; it’s not going to get above a 40% market share. And it shouldn’t, because that becomes a monopoly in itself. So, me, I’d much rather go to get access to all the transaction flow,” McDonald said.

“Long term, we really don’t mind who owns the customer, who owns the generation as long as our infrastructure is being used,” he said. “This is just an infrastructure play in the same way AWS was, or Stripe was.”


Tech

OpenAI starts testing ads in ChatGPT


Users on ChatGPT’s free and Go plans in the US may now start to see ads, as OpenAI has begun testing them in the chatbot. The company previously announced plans to bring ads to ChatGPT; at the time, it said it would display sponsored products and services relevant to logged-in users’ current conversations, though users can disable personalization and “clear the data used for ads” whenever they wish.

“Our goal is for ads to support broader access to more powerful ChatGPT features while maintaining the trust people place in ChatGPT for important and personal tasks,” OpenAI wrote in a blog post. “We’re starting with a test to learn, listen and make sure we get the experience right.”

These ads will appear at the bottom of chats. They’re labeled and separated from ChatGPT’s answers, and they won’t have an impact on ChatGPT’s responses.

Ads won’t appear when users are conversing with ChatGPT about regulated or sensitive topics such as health, mental wellbeing or politics. Users aged under 18 won’t see ads in ChatGPT during the tests either. Moreover, OpenAI says it won’t share or sell users’ conversations or data to advertisers.


A source close to the company told CNBC that OpenAI expects ads to account for less than half of its revenue in the long run. Currently the company also takes a cut of items bought through its chatbot via the shopping integration feature. Also according to CNBC, OpenAI CEO Sam Altman told staff on Friday that the company will deploy “an updated Chat model” this week.

The tests come on the heels of Anthropic running Super Bowl ads that poked fun at OpenAI for introducing advertising. Anthropic’s spot asserted that while “ads are coming to AI,” they won’t appear in its own chatbot, Claude.


Tech

Why meaning and purpose is vital to modern work


Patrick Williams discusses the tech trends he expects to see over the next 12 months and offers his advice to professionals navigating change.

Reflecting on the technology landscape, particularly over the past year, Patrick Williams – a 40-year veteran of the tech ecosystem – has come to the firm conclusion that “we are on the cusp of redefining survival”.

He elaborated: “In work and as humans, we need meaning more than ever to be able to cope with the speed of change we are facing. In the year gone by, I have seen a worrying increase in apathy. 

“I believe a powerful emergent counter-trend is the pursuit of Ikigai, that vital nexus of: what we love to do, what we are good at, what pays and what is seeking to change the world.”


In contrast to the “default facts before feelings mindset” he believes is popular among those in leadership roles, Williams – who worked at Google for 21 years as a software engineer – said another emerging trend is the realisation that this old-fashioned model leads to a dead end of burnout and a lack of stability.

“The only way to optimise for efficiency is to work on the whole agile ecosystem. This means that we need more than just to be technically excellent; we need to be ‘self-aware and self-regulating’,” he said.

This self-awareness has powered Williams’ own professional evolution over the course of the last 12 months, as he has moved away from the urge to constantly change his skin to fit the mould. Instead, he noted the importance of entering professional spaces with “a clear sense of purpose, proactively leading change”.

He said: “I believe this kind of sense of meaning and purpose is missing in the world right now, in fact, in three areas in particular.”


The first is “meaning as a survival skill”: if you don’t have a strong sense of purpose in your work, you may well be an “island of technical capability, but you are not going to be resilient enough to make a long-term difference”.

Secondly, he highlighted the need for new types of leadership. He said there is a shift away from top-down and “poor at best” communications and management, moving towards the idea of open communications and high-trust partnerships where the understanding of the self and wider teams is a priority.

Thirdly, he discussed the use of AI as an ‘EQ agent’, which he said is possibly the most exciting new trend. “It is being used not just as an automation tool, but as a mirror to help people reflect on and remember what gives their lives meaning.”

And this is all pulled together by recognising the “self as an instrument”. Williams is of the opinion that among the most important developments of this year will be those that can bridge the disconnect between technically complex problem-solving and human meaning. 


“As we struggle to grapple with a world where ‘doing it right’ is harder and harder to do and means more than just ‘speed of execution’ one thing is clear, it has to be work that can help the next generation be confident, resilient and emotionally intelligent in a world that is often designed to produce polar opposites,” he said.

A year of innovation

“The paradox for innovation as the 21st century unfolds,” explained Williams, “is a conflict between the short-term perspectives of an increasingly traditional corporate model driven by quarterly ‘Wall Street presence’, with its reactive ‘chameleon’-like tendencies for survival mode, versus the longer-term essential humanity-focus needed to drive sustainable change, particularly as innovation becomes more decentralised.”

To break free of the cycle, he is of the opinion that those powering the ecosystem must commit to double loop learning – the modification of goals or decision-making rules in the light of experience – that incorporates a more comprehensive review of an organisation’s challenges, goals and outcomes. He believes professionals should be encouraged to explore the fundamental need for meaning in their roles, and should also have access to strategic consortia.

He said: “This complexity can be managed, tackled, only when it is attacked by a consortium of partnerships. Such ‘families’ give the collective and stable base necessary to innovate within the increasing chaos.


“This (empowered by digital transformation success) will bring into focus the most valuable innovators, the ‘change agents’ who are not merely technically correct, but are giving licence for a more empowered generation of people who value confidence, resilience and EQ, not just the technical IQ.”


Tech

Zillow at 20: Real estate giant leans on AI to make homebuying hurt less

Zillow Group wants to not only help people search for homes, but also facilitate other parts of the homebuying process such as mortgages. (Zillow Image)

When Jeremy Wacksman joined Zillow Group in 2009, his first job was getting the upstart real estate site onto the iPhone. Now, amid a generative AI boom, the CEO says the next platform shift is even bigger for Zillow than mobile — and so are the company’s ambitions.

“It’s going to enable all of our services to just be a lot smarter and a lot more intelligent and a lot more personalized,” Wacksman told GeekWire. “And I think it will help us solve the problem we’ve been after forever: how do we digitize the transaction, and how do we actually integrate and remove all the busy work and the redundant paperwork and the errors and the pain of the transaction?”

Zillow built its brand by letting people window‑shop for homes and by generating advertising revenue from real estate agents. More than 200 million people visit Zillow’s apps and sites on a monthly basis. But now, as Zillow marks its 20th anniversary on Monday, its leaders are pushing toward something bigger: a “remote control” that keeps buyers, agents and lenders inside Zillow for the entire home-buying experience.

Zillow CEO Jeremy Wacksman. (Zillow Photo)

It’s part of a “housing super app” strategy the company first laid out several years ago, following the failed attempt to build Zillow Offers, its “iBuying” home-flipping business. Zillow remains focused on finding ways to streamline how people buy homes beyond search and alleviate what can be a stressful process.

“More than half of buyers report that they cry during the transaction process,” Wacksman noted.

While Zillow’s traditional advertising business still makes up a majority of its revenue, it has made a bigger push into mortgages — which grew 36% year-over-year in the third quarter of 2025 — as well as rentals, which grew 41%. Zillow, which reports fourth quarter results this week, is also piloting closing services.

The shift marks a deliberate move away from a model where Zillow made money when a shopper raised a hand, toward one where it participates in — and tries to simplify — the entire transaction.

Executives see AI as central to the super app play. Zillow CTO David Beitel, who has led technology efforts at the company since 2005, said the new capabilities of large language models feel “pretty monumental.”

He said AI models have improved so much and so quickly that there is no single part of the business where Zillow isn’t exploring how to harness them.

“It’s really starting to change the way we think about presenting information and change the way that we interact with our customers,” Beitel said.

Zillow has used AI in some form since its early days, long before it launched an app within ChatGPT. It applied machine learning to create the “Zestimate” home value tool and later built out computer vision tools to enhance listings.

Now the company is using AI to boost CRM tools for real estate agents — summarizing calls, drafting follow‑up messages, prepping next‑step checklists, and reducing repetitive data entry. Zillow says agents have sent millions of AI‑assisted messages, and that those tools are improving conversion.
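The pipeline described above – a call summary in, a drafted follow-up out – can be sketched in a few lines. This is a purely illustrative toy, not Zillow's implementation: the names `CallSummary` and `draft_followup` are invented, and in practice the drafting step would be handled by an LLM rather than a template.

```python
from dataclasses import dataclass


@dataclass
class CallSummary:
    """Structured output an LLM might extract from a recorded call (hypothetical)."""
    client: str
    topics: list[str]
    next_step: str


def draft_followup(summary: CallSummary) -> str:
    """Turn a structured call summary into a short follow-up message."""
    topics = ", ".join(summary.topics)
    return (
        f"Hi {summary.client}, thanks for the call today. "
        f"We covered: {topics}. Next step: {summary.next_step}."
    )


msg = draft_followup(CallSummary(
    client="Dana",
    topics=["pre-approval", "3-bed listings"],
    next_step="send updated listings by Friday",
))
print(msg)
```

The design point is the split: extraction into a structured record is where the model does the heavy lifting, while the final message stays reviewable by the agent before sending.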

Inside Zillow’s own walls, the shift may be even more dramatic.

Beitel said software teams are shipping more code with the same headcount thanks to AI‑assisted development — in some cases, up to a 15% improvement in productivity. The company also uses internal copilots that sit on top of documents, Slack conversations and email, letting employees ask natural‑language questions against Zillow’s own data. Recruiters are using AI to help schedule interviews and coordinate with candidates.

Zillow CTO David Beitel. (Zillow Photo)

Just in the past two years, Beitel said, the company has “much higher expectations of our team about embracing these tools and using them in their daily jobs.” Zillow encourages experimentation but stops short of mandating specific tools across every team, letting managers decide how to adapt LLMs to their own workflows.

Both executives stressed that, for all the automation, they don’t see AI replacing real estate professionals. Instead, they framed the technology as the next step in a long evolution that started when agents were gatekeepers of listing books and became guides in a world where buyers already know what’s on the market.

“It’s going to pull away all the busy work, all the back office work, all the coordination, all the data collection — all the stuff that a machine can do — to let the human do a great job of actually being your guide,” said Wacksman, who was named CEO in 2024, taking over for co-founder Rich Barton.

All of this is unfolding against a housing market that Wacksman describes as “bouncing along the bottom.” Existing home sales remain well below pre‑pandemic norms; affordability is still strained in many markets; and even optimistic forecasts call for only modest improvement this year. That puts pressure on Zillow’s bet that it can keep growing revenue at a double‑digit clip by capturing a bigger slice of every transaction, even if there aren’t many more transactions to go around.

At the same time, the company is facing louder questions from regulators and rivals about how much control one platform should have over the digital plumbing of the housing market. Zillow is a defendant in a high-profile antitrust lawsuit from the Federal Trade Commission and multiple states over its multifamily rental listings syndication deal with Redfin — a case that alleges the arrangement stifles competition in the rental advertising market. The company is also defending a lawsuit from brokerage Compass challenging Zillow’s private‑listing policies and a separate copyright infringement case from rival CoStar over the use of listing photos.

Wacksman said the legal pressure hasn’t changed the core roadmap – or Zillow’s room to grow. He said the company still touches a single-digit share of U.S. transactions. “We can grow our business regardless of what happens in [the] macro, and regardless of the clouds from external forces,” he said.

The Blue Yeti is still the easiest “sound better instantly” upgrade, and it’s back under $100

If you’ve ever watched a stream or a YouTube video and thought, “The visuals are fine, but the audio is rough,” you already know the truth: sound quality makes or breaks content. The Logitech Creators Blue Yeti USB microphone is $94.99, down from $139.99 – a 32% discount. For anyone starting a podcast, upgrading a WFH setup, or trying to make game chat and voiceovers clearer, this is one of those practical buys that pays off the first time you hit record.
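
The advertised percentage is easy to sanity-check from the two prices: a quick bit of arithmetic confirms the 32% figure.

```python
# Verify the advertised discount from list and sale price.
list_price = 139.99
sale_price = 94.99

# Percentage off, rounded to the nearest whole percent.
pct_off = round((list_price - sale_price) / list_price * 100)
print(pct_off)  # 32
```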

What you’re getting

The Blue Yeti is a USB microphone built for easy plug-and-play use on a PC. You don’t need an audio interface or a complicated setup to get studio-ish voice quality compared to a laptop mic or a basic headset.

It’s popular for a reason: it works across a ton of use cases, including streaming, podcasting, voiceovers, Discord, and general video calls. It’s the kind of mic that can follow you from “I’m just trying this” to “okay, I’m doing this consistently” without forcing an immediate upgrade.

Why it’s worth it

This deal is worth attention because sub-$100 is a great price point for a microphone that can become your default for years. The real win is clarity. Your voice sounds fuller, background noise becomes less of the main character, and listeners do not have to strain to understand you.

It’s also an underrated upgrade for non-creators. If you’re on calls all day, a solid mic makes you sound more professional with basically zero effort. Pair it with a decent set of headphones, and your whole communication setup feels cleaner.

The bottom line

At $94.99, the Logitech Blue Yeti is a strong value if you want an easy, reliable way to improve voice quality for streaming, podcasting, YouTube, gaming chat, or work calls. If you want something ultra-compact or you prefer an XLR studio workflow, there are other options. But for a straightforward plug-in-and-sound-better upgrade, this 32% discount is a great time to buy.

