Connect with us
DAPA Banner

Tech

The iPhone 18 Pro could boast some of the best camera hardware yet

Published

on

Apple’s next Pro iPhones might be shaping up to be a much bigger camera leap than usual.

According to a new Bloomberg report, the iPhone 18 Pro and Pro Max, expected this autumn, will include what are being described as “some of the biggest camera hardware upgrades in the lineup’s history.”

That’s a strong claim, especially given how incremental iPhone camera updates have been in recent years. However, the report itself is light on specifics. Instead, it mostly frames the upgrades as part of a broader shift happening across Apple’s imaging ecosystem.

The timing lines up with other recent leaks about Apple’s camera direction. Bloomberg previously reported that iOS 27 will introduce a new “Siri mode” in the Camera app. This mode brings visual intelligence features directly into shooting and scene recognition.

Advertisement

Moreover, separate reports also point to new AI-powered editing tools arriving in Photos. This suggests Apple is tightening the link between hardware capture and on-device image processing.

Advertisement

As for the hardware itself, earlier rumours give a slightly clearer (but still incomplete) picture. The iPhone 18 Pro’s main camera is said to feature a variable aperture system that’ll allow users to adjust depth of field and exposure more manually than before. Meanwhile, the telephoto camera may gain a wider aperture, which should help in low-light zoom shots.

But beyond that, details are thin. And that’s part of why Bloomberg’s “biggest upgrades ever” framing stands out. It’s more of a hint than a breakdown, and doesn’t yet line up with known changes.

Advertisement

On paper, variable aperture and improved optics are meaningful steps, but not necessarily the kind of generational leap Gurman’s wording suggests. This leaves some uncertainty around what else might be coming. It’s unclear whether it’s sensor improvements, computational upgrades, or something that hasn’t leaked yet.

For now, the iPhone 18 Pro camera story feels like it’s still forming. The direction is deeper integration between AI tools and camera hardware. However, the full picture likely won’t land until Apple gets closer to launch.

Source link

Advertisement
Continue Reading
Click to comment

You must be logged in to post a comment Login

Leave a Reply

Tech

Definitive Edition Arrives On Mac Next Month

Published

on





One of the greatest real-time strategy games ever is making its way to macOS (again). Publisher Feral Interactive announced today it will bring Age of Empires II: Definitive Edition to Apple computers through Steam on May 28, with an App Store release to follow later this year. Feral worked with World’s Edge, the studio that has managed the Age of Empires franchise for Microsoft since 2019, to develop the port.

Advertisement

Like its PC counterpart, the Definitive Edition on Mac will include content from AoE2’s original Age of Kings release alongside its highly regarded The Conquerors expansion. It also comes with three pieces of more recent DLC: Lords of the West, Dynasties of India and Dawn of the Dukes. Between those, you could easily spend hundreds of hours playing all the included single player campaigns. (I know I did.) This being a remake, you also get updated graphics, music and about two decades of quality of life improvements. 

For multiplayer, you will also have access to many of the civilizations that are in the game. If you’re still keen to play more AoE2 after all that, every piece of DLC available for the PC version of the game, up to and including the most recent The Last Chieftains expansion, will be available to purchase separately. 

Technically, this isn’t the first time Age of Empires II has been available on Mac. The original game arrived on Mac back in 2001, but this is the first time the Definitive Edition has been available on Apple’s operating system since it was released on PC back in 2019. Notably, this is the first Microsoft title to make its way to Mac since Psychonauts 2 in 2021. Seven years is a long time to wait for a game to release on another platform, but the nice thing about Age of Empires II is the community hasn’t left the game. On Steam, there are consistently about 20,000 people playing at any time, so you can always find a match.

Advertisement



Source link

Advertisement
Continue Reading

Tech

AI Cyberattacks Meet Memory-Safe Code Defenses

Published

on

Transforming a newly discovered software vulnerability into a cyberattack used to take months. Today—as the recent headlines over Anthropic’s Project Glasswing have shown—generative AI can do the job in minutes, often for less than a dollar of cloud computing time.

But while large language models present a real cyber-threat, they also provide an opportunity to reinforce cyberdefenses. Anthropic reports its Claude Mythos preview model has already helped defenders preemptively discover over a thousand zero-day vulnerabilities, including flaws in every major operating system and web browser, with Anthropic coordinating disclosure and its efforts to patch the revealed flaws.

It is not yet clear whether AI-driven bug finding will ultimately favor attackers or defenders. But to understand how defenders can increase their odds, and perhaps hold the advantage, it helps to look at an earlier wave of automated vulnerability discovery.

In the early 2010s, a new category of software appeared that could attack programs with millions of random, malformed inputs—a proverbial monkey at a typewriter, tapping on the keys until it finds a vulnerability. When such “fuzzers” like American Fuzzy Lop (AFL) hit the scene, they found critical flaws in every major browser and operating system.

Advertisement

The security community’s response was instructive. Rather than panic, organizations industrialized the defense. For instance, Google built a system called OSS-Fuzz that runs fuzzers continuously, around the clock, on thousands of software projects. So software providers could catch bugs before they shipped, not after attackers found them. The expectation is that AI-driven vulnerability discovery will follow the same arc. Organizations will integrate the tools into standard development practice, run them continuously, and establish a new baseline for security.

But the analogy has a limit. Fuzzing requires significant technical expertise to set up and operate. It was a tool for specialists. An LLM, meanwhile, finds vulnerabilities with just a prompt—resulting in a troubling asymmetry. Attackers no longer need to be technically sophisticated to exploit code, while robust defenses still require engineers to read, evaluate, and act on what the AI models surface. The human cost of finding and exploiting bugs may approach zero, but fixing them won’t.

Is AI Better at Finding Bugs Than Fixing Them?

In the opening to his book Engineering Security, Peter Gutmann observed that “a great many of today’s security technologies are ‘secure’ only because no-one has ever bothered to look at them.” That observation was made before AI made looking for bugs dramatically cheaper. Most present-day code—including the open source infrastructure that commercial software depends on—is maintained by small teams, part-time contributors, or individual volunteers with no dedicated security resources. A bug in any open source project can have significant downstream impact, too.

In 2021, a critical vulnerability in Log4j—a logging library maintained by a handful of volunteers—exposed hundreds of millions of devices. Log4j’s widespread use meant that a vulnerability in a single volunteer-maintained library became one of the most widespread software vulnerabilities ever recorded. The popular code library is just one example of the broader problem of critical software dependencies that have never been seriously audited. For better or worse, AI-driven vulnerability discovery will likely perform a lot of auditing, at low cost and at scale.

Advertisement

An attacker targeting an under-resourced project requires little manual effort. AI tools can scan an unaudited codebase, identify critical vulnerabilities, and assist in building a working exploit with minimal human expertise.

Research on LLM-assisted exploit generation has shown that capable models can autonomously and rapidly exploit cyber weaknesses, compressing the time between disclosure of the bug and working exploit of that bug from weeks down to mere hours. Generative AI-based attacks launched from cloud servers operate staggeringly cheaply as well. In August 2025, researchers at NYU’s Tandon School of Engineering demonstrated that an LLM-based system could autonomously complete the major phases of a ransomware campaign for some $0.70 per run, with no human intervention.

And the attacker’s job ends there. The defender’s job, on the other hand, is only getting underway. While an AI tool can find vulnerabilities and potentially assist with bug triaging, a dedicated security engineer still has to review any potential patches, evaluate the AI’s analysis of the root cause, and understand the bug well enough to approve and deploy a fully-functional fix without breaking anything. For a small team maintaining a widely-depended-upon library in their spare time, that remediation burden may be difficult to manage even if the discovery cost drops to zero.

Why AI Guardrails and Automated Patching Aren’t the Answer

The natural policy response to the problem is to go after AI at the source: holding AI companies responsible for spotting misuse, putting guardrails in their products, and pulling the plug on anyone using LLMs to mount cyberattacks. There is evidence that pre-emptive defenses like this have some effect. Anthropic has published data showing that automated misuse detection can derail some cyberattacks. However, blocking a few bad actors does not make for a satisfying and comprehensive solution.

Advertisement

At a root level, there are two reasons why policy does not solve the whole problem.

The first is technical. LLMs judge whether a request is malicious by reading the request itself. But a sufficiently creative prompt can frame any harmful action as a legitimate one. Security researchers know this as the problem of the persuasive prompt injection. Consider, for example, the difference between “Attack website A to steal users’ credit card info” and “I am a security researcher and would like secure website A. Run a simulation there to see if it’s possible to steal users’ credit card info.” No one’s yet discovered how to root out the source of subtle cyberattacks, like in the latter example, with 100 percent accuracy.

The second reason is jurisdictional. Any regulation confined to US-based providers (or that of any other single country or region) still leaves the problem largely unsolved worldwide. Strong, open-source LLMs are already available anywhere the internet reaches. A policy aimed at handful of American technology companies is not a comprehensive defense.

Another tempting fix is to automate the defensive side entirely—let AI autonomously identify, patch, and deploy fixes without waiting for an overworked volunteer maintainer to review them.

Advertisement

Tools like GitHub Copilot Autofix generate patches for flagged vulnerabilities directly with proposed code changes. Several open-source security initiatives are also experimenting with autonomous AI maintainers for under-resourced projects. It is becoming much easier to have the same AI system find bugs, generate a patch, and update the code with no human intervention.

But LLM-generated patches can be unreliable in ways that are difficult to detect. For example, even if they pass muster with popular code-testing software suites, they may still introduce subtle logic errors. LLM-generated code, even from the most powerful generative AI models out there, are still subject to a range of cyber vulnerabilities, too. A coding agent with write access to a repository and no human in the loop is, in so many words, an easy target. Misleading bug reports, malicious instructions hidden in project files, or untrusted code pulled in from outside the project can turn an automated AI codebase maintainer into a cyber-vulnerability generator.

Guardrails and automated patching are useful tools, but they share a common limitation. Both are ad hoc and incomplete. Neither addresses the deeper question of whether the software was built securely from the start. The more lasting solution is to prevent vulnerabilities from being introduced at all. No matter how deeply an AI system can inspect a project, it cannot find flaws that don’t exist.

Memory-Safe Code Creates More Robust Defenses

The most accessible starting point is the adoption of memory-safe languages. Simply by changing the programming language their coders use, organizations can have a large positive impact on their security.

Advertisement

Both Google and Microsoft have found that roughly 70 percent of serious security flaws come down to the ways in which software manages memory. Languages like C and C++ leave every memory decision to the developer. And when something slips, even briefly, attackers can exploit that gap to run their own code, siphon data, or bring systems down. Languages like Rust go further; they make the most dangerous class of memory errors structurally impossible, not just harder to make.

Memory-safe languages address the problem at the source, but legacy codebases written in C and C++ will remain a reality for decades. Software sandboxing techniques complement memory-safe languages by addressing what they cannot—containing the blast radius of vulnerabilities that do exist. Tools like WebAssembly and RLBox already demonstrate this in practice in web browsers and cloud service providers like Fastly and Cloudflare. However, while sandboxes dramatically raise the bar for attackers, they are only as strong as their implementation. Moreover, Antropic reports that Claude Mythos has demonstrated that it can breach software sandboxes.

For the most security-critical components, where implementation complexity is highest and the cost of failure greatest, a stronger guarantee still is available.

Formal verification proves, mathematically, that certain bugs cannot exist. It treats code like a mathematical theorem. Instead of testing whether bugs appear, it proves that specific categories of flaw cannot exist under any conditions.

Advertisement

Cloudflare, AWS, and Google already use formal verification to protect their most sensitive infrastructure—cryptographic code, network protocols, and storage systems where failure isn’t an option. Tools like Flux now bring that same rigor to everyday production Rust code, without requiring a dedicated team of specialists. That matters when your attacker is a powerful generative-AI system that can rapidly scan millions of lines of code for weaknesses. Formally verified code doesn’t just put up some fences and firewalls—it provably has no weaknesses to find.

The defenses described above are asymmetric. Code written in memory-safe languages—separated by strong sandboxing boundaries and selectively formally verified—presents a smaller and much more constrained target. When applied correctly, these techniques can prevent LLM-powered exploitation, regardless of how capable an attacker’s bug-scanning tools become.

Generative AI can support this more foundational shift by accelerating the translation of legacy code into safer languages like Rust, and making formal verification more practical at every stage. Which helps engineers write specifications, generate proofs, and keep those proofs current as code evolves.

For organizations, the lasting solution is not just better scanning but stronger foundations: memory-safe languages where possible, sandboxing where not, and formal verification where the cost of being wrong is highest. For researchers, the bottleneck is making those foundations practical—and using generative AI to accelerate the migration. But instead of automated, ad hoc vulnerability patching, generative AI in this mode of defense can help translate legacy code to memory-safe alternatives. It also assists in verification proofs and lowers the expertise barrier to a safer and less vulnerable codebase.

Advertisement

The latest wave of smarter AI bug scanners can still be useful for cyberdefense—not just as another overhyped AI threat. But AI bug scanners treat the symptom, not the cause. The lasting solution is software that doesn’t produce vulnerabilities in the first place.

From Your Site Articles

Related Articles Around the Web

Source link

Advertisement
Continue Reading

Tech

Historic Apple Porsche colors return on Porsche 963 at Laguna

Published

on

More than four decades after an Apple-branded Porsche first hit the track, Porsche Penske Motorsport revives the rainbow livery on its 963 prototypes for a one-off run at Laguna Seca.

The livery revives the rainbow-striped look of a 1980 Porsche 935, marking the 75th anniversary of Porsche Motorsport and the 50th anniversary of Apple. It will appear on May 3 at WeatherTech Raceway Laguna Seca.

Porsche based the look on a Dick Barbour Racing Porsche 935 K3 that carried Apple branding during the 1980 season, including an entry at the 24 Hours of Le Mans. Both factory-entered 963 cars will wear it for the fourth round of the IMSA WeatherTech SportsCar Championship, limiting the tribute to a single race.

Oliver Schusser, Vice President Apple Music, Sports and Beats, said the collaboration continues a relationship that began in 1980, when a Porsche race car first carried its logo. The companies are using Laguna Seca to reconnect with today’s motorsport program, but the change is limited to branding.

Advertisement

Porsche Penske Motorsport enters Laguna Seca leading the championship standings after early-season wins at the 24 Hours of Daytona and the 12 Hours of Sebring. Kevin Estre and Laurens Vanthoor will drive the No. 6 Porsche 963, while Julien Andlauer and Felipe Nasr will share the No. 7 car.

Top view of a sleek white Porsche race car with colorful rainbow stripes, aerodynamic bodywork, large rear wing, and black cockpit, photographed on a bright, clean backgroundThe livery revives the rainbow-striped look of a 1980 Porsche 935. Image credit: Porsche

Laguna Seca serves as a deliberate choice for the tribute because the track sits about 80 miles south of Apple Park in Cupertino. The circuit has also hosted multiple Rennsport Reunion events, which ties the collaboration to both companies’ history.

Both anniversaries land in 2026, with Apple marking 50 years since its 1976 founding and Porsche Motorsport reaching 75 years since 1951. Porsche uses that direct link to give the tribute more weight and to justify keeping the design to a single race.

The Laguna Seca round runs two hours and 40 minutes and serves as the fourth stop on the IMSA calendar. Porsche’s 963 program remains the focus on track regardless of the one-off livery.

Apple stays involved through partnerships and services tied to motorsports without expanding its role. Porsche uses the tribute to reinforce its heritage while its prototype program continues to run at the front of the championship.

Advertisement

Source link

Continue Reading

Tech

Geeks Give Back: AI House and UW’s Center for an Informed Public to be honored at GeekWire Awards

Published

on

Top: Center for an Informed Public co-founder Kate Starbird speaking at a University of Washington lecture. Bottom: AI House managing director Jifan Zhang and an AI House event. (CIP and GeekWire Photos)

Each year, the GeekWire Awards celebrate the geeky endeavors making a meaningful impact across the Pacific Northwest. This year’s Geeks Give Back honorees are building community and sharing knowledge — one focused on advancing AI innovation, the other on education and research in our rapidly evolving media landscape.

The honorees are AI House, a first-in-the-nation hub fostering collaboration in the burgeoning AI sector, and the University of Washington’s Center for an Informed Public (CIP), a program that teaches everyone from students to seniors how to identify rumors and misinformation.

The GeekWire Awards will recognize nearly 50 finalists and honorees across a dozen categories, from Startup of the Year to Next Tech Titan. Geeks Give Back honorees are selected through community nominations and input from awards judges.

Geeks Give Back is presented again this year by BECU.

Winners will receive their coveted robot trophies live onstage on May 7 at Showbox SoDo in Seattle. Individual tickets are on sale now — grab a seat here — and keep reading to learn more about this year’s Geeks Give Back honorees.

AI House

In addition to events, AI House has 1,000 desks for tech workers. (GeekWire Photo / Kurt Schlosser)

Since launching a little more than a year ago, AI House has hosted more than 150 events at its collaborative space at Seattle’s Pier 70. The 108,000-square-foot waterfront facility brings together entrepreneurs, investors, students and community leaders to foster big ideas and forge connections in the pursuit of AI innovation.

The initiative launched out of AI2 Incubator, a startup organization and venture firm, and offers co-working space for companies, including those affiliated with the incubator.

Advertisement

The AI House calendar features events ranging from monthly Pitch Please gatherings, which have led to AI2 Incubator investments, to conversations with prominent leaders. The organization has also created affinity groups for female founders, founder mental health and B2C founders.

Yifan Zhang, managing director of AI House, says she regularly meets people who are new to the Seattle startup scene — whether they recently moved or graduated, have been building independently, or left Big Tech and are curious about the startup world.

“They’re often astonished and thrilled to land at a place like AI House while starting their explorations,” Zhang said. “This matters because in order for Seattle’s startup scene to succeed, we need it to be much much bigger than it is today. Our thesis is that AI House can be that ‘big tent.’”

Her goal is that everyone who visits leaves having met someone new and gained a perspective they hadn’t considered before — one that opens new possibilities in their entrepreneurial pursuits.

Advertisement

Center for an Informed Public (CIP)

CIP manager Liz Crouse, left, speaks with Ballard High School teacher Shawn Lee, at CIP’s MisinfoDay 2026. (UW Information School / Doug Parry)

When the UW’s Center for an Informed Public launched in 2019 with a $5 million grant, the central concerns were misinformation threatening upcoming elections and social media’s role in igniting rumors. CIP set out to better understand the sources of false information and map how it spreads, and to educate the public on how to recognize and guard against it.

More than six years later, information untethered from facts permeates social media, influencer posts, and many news outlets. Generative AI tools that fabricate images and videos — and help users craft deceptive, persuasive messages — continue to proliferate.

In response, CIP is expanding its efforts: connecting professors across disciplines, hosting high school students, librarians and teachers, and equipping people with the tools they need to make sense of modern life.

“The CIP is an organization that’s fundamentally about research and knowledge production, but really in service of the communities locally around the campus, and across the state, across the nation, across the world,” said Emma Spiro, CIPs’ faculty director and UW Information School associate professor.

Recent highlights include the launch of a free online humanities course titled “Modern-Day Oracles or Bullshit Machines?” examining AI use; co-hosting an intergenerational AI event with high school students and seniors; and webinars such as “Understanding and Navigating Political Divides” and “Preparing Informed Citizens in an AI-Powered World.”

Advertisement

Spiro credits the people involved with CIP for its impact. “We’ve been really successful at finding those mission-aligned, values-driven people who are invested in the mission and willing to take on what can be sometimes controversial work,” she said.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, BairdBECU, JLLFirst Tech and Wilson Sonsini, and silver sponsors Prime Team Partners.

Source link

Advertisement
Continue Reading

Tech

Best Side-Sleeper Mattresses 2026: Picked by a Sleep Science Coach

Published

on

Mattress Mattress type Materials Firmness Height Certifications Trial Period Shipping Warranty Nolah Evolution Hybrid Organic cotton or GlacioTex cover, AirFoam Luxe memory foams, gel memory foam, AirBreath border gusset, pocketed coils Plush, luxury firm, firm 15 inches CertiPur-US, GreenGuard Gold 120 nights; 30-day break-in period required before initiating return ($99 shipping fee) Arrives in a box as part of standard shipping; white-glove delivery available (mattress setup and old bed removal) for $225 Limited lifetime Helix Midnight Luxe Hybrid Tencel cover, memory foams, pocketed coils One option is 6.5/10 13.5 inches CertiPur-US, GreenGuard Gold 120 nights; 30-day break-in period required before initiating return Free for customers in the contiguous US Limited lifetime Bear Elite Hybrid Hybrid Phase change material (PCM) cooling cover, copper-infused memory foam, dynamic memory foam, pocketed coils Soft, medium, firm 14 inches CertiPur-US, GreenGuard Gold 120 nights; 30-day break-in period required before initiating return Free for customers in the contiguous US Limited lifetime Leesa Sapira Chill Hybrid Phase change cooling cover, cooling memory foams, pocketed coils Plush, medium firm, firm 14 inches CertiPur-US, GreenGuard Gold 120 nights; 30-day break-in period required before initiating return Free for customers in the contiguous US Limited lifetime Naturepedic EOS Classic Hybrid Organic cotton cover, plant-based PLA layer, organic wool batting, organic latex, organic cotton batting, organic cotton fill and fabric, pocketed coils Plush, medium, cushion firm, firm, extra firm (each side can have different firmnesses) 12 inches Global Organic Latex Standard (GOLS), Global Organic Textile Standard (GOTS), Made Safe, EWG verified, GreenGuard Gold, Formaldehyde-Free Claim Verified by UL Environment, Organic Content Standard certified, Organic Trade Association certified, Responsible Wool Standard Certified, Forest Stewardship Council certified 100 nights; 30-day break-in period required before initiating return Arrives in a box as part of standard shipping. For contiguous US shoppers, mattress setup is $299; setup and old bed removal is $349 25-year limited Saatva Contour5 Memory foam Cotton cover, memory foam Medium, firm 12.5 inches CertiPur-US, GreenGuard Gold Year-long sleep trial; $99 return fee White glove delivery included with purchase Lifetime Casper Dream Hybrid Memory foam hybrid Knit cover, memory foam, zoned memory foam, pocketed coils, foam rail edge support Medium firm 12 inches CertiPur-US 100 nights Arrives in box as part of standard shipping; separate shipping fee for Alaska and Hawaii 10-year limited Birch Luxe Natural Hybrid Organic cotton cover, wool, organic latex, pocketed coils Medium 11.5 inches Global Organic Latex Standard (GOLS), GreenGuard Gold 120 nights; 30-day break-in period required before initiating return ($99 shipping fee) Arrives in a box as part of standard shipping; white-glove delivery available for $199 Limited lifetime The WinkBed Hybrid Tencel cover, gel memory foam, pocketed coils Softer, luxury firm, firm, Plus 13.5 inches CertiPur-US 120 nights; 30-day break-in period required before initiating return Free shipping via UPS ground for contiguous US Limited lifetime Wolf Memory Foam Hybrid Premium Firm Mattress Hybrid Cooling cover, gel memory foam, support foam, pocketed coils Medium firm 13 inches CertiPur-US 101 nights; 30-day break-in period required before initiating return Arrives in a box as part of standard shipping Limited lifetime Sonu Sleep System Hybrid Cooling cover, “Comfort Channel” internal structure, and support foams that contain “Support Pillows”; cooling foam, support foam, pocketed coils Firm, 8/10 14 inches CertiPur-US 100, return fee is $99, and can go up to $250 Free delivery within the contiguous US 10 years Sleep Number ComfortNext Lux Smart bed Phase change cooling cover, copper gel memory foam, support foam, Ultra-Flex air chambers, rail system, comfort foam, bottom cover, air control unit 45 firmness levels 13 inches CertiPur-US 120 nights Arrives in a box as part of standard shipping 5 years full coverage, 20 years prorated coverage

Source link

Continue Reading

Tech

Uber taps Hertz to clean, charge, and fix its Lucid Motors robotaxis

Published

on

Uber’s forthcoming luxury robotaxi service with Lucid Motors and Nuro is getting a fourth partner: Hertz.

The companies announced Thursday that Hertz will provide “day-to-day vehicle asset management, including charging, maintenance, repairs, cleaning, and depot staffing.” The service, announced last year, is supposed to launch by the end of 2026 in the San Francisco Bay Area, using Lucid’s Gravity SUVs and Nuro’s self-driving tech.

Hertz is handling this work through a newly-established affiliate it’s calling Oro Mobility, which the rental company says will “provide integrated fleet management solutions across a range of mobility segments.”

“As the industry transitions from personally owned vehicles to commercially operated driver-led and autonomous fleets, Oro aims to fill a critical orchestration and operations gap,” the Hertz press release reads.

Advertisement

This is not the first time Hertz, which went through a bankruptcy restructuring process in 2020, has followed new mobility trends.

The company made a big splash in 2021 when it announced it was buying 100,000 EVs from Tesla, news that helped Elon Musk’s car company reach a $1 trillion valuation for the first time (and helped Hertz’s image as it emerged from bankruptcy). Hertz also announced plans in 2022 to buy up to 175,000 EVs from General Motors, and another 65,000 from Polestar.

None of those deals were ever fully realized, and Hertz started a fire sale of the EVs it had bought in early 2024. It did that in part because of higher-than-expected maintenance costs due to Uber drivers renting the EVs, and because Tesla slashed prices to stave off competition and boost sales.

Techcrunch event

Advertisement

San Francisco, CA
|
October 13-15, 2026

Starting up a fleet management and operations arm, though, should be closer to Hertz’s core competencies as a rental car giant. Competitors like Avis are already doing this kind of work for Waymo. And with robotaxi companies seemingly keen to use third parties to manage this piece of the puzzle, Hertz could build a decent business with Oro.

Advertisement

To wit, Hertz and Uber said Thursday that they will “explore expansion opportunities in 2027.” Uber has deals with dozens of autonomous vehicle companies around the world, and has plans to order at least 35,000 robotaxi-ready vehicles from Lucid Motors alone in the coming years. It’s starting with 10,000 Gravity SUVs, and recently announced plans to order another 25,000 EVs from Lucid Motors that will be based on its upcoming mid-sized platform. (Uber also now owns more than 11% of Lucid Motors as part of investments it has made alongside the vehicle orders.)

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.

Source link

Advertisement
Continue Reading

Tech

Google Photos feature uses AI to scan your pictures and help pick your clothes

Published

on


Google says Wardrobe will be perfect for streamlining those “nothing to wear” mornings, evenings, and vacations. Essentially, the feature catalogs the clothes you’re wearing in Google Photos to create a so-called digital closet.
Read Entire Article
Source link

Continue Reading

Tech

How can organisations ‘stay safe’ amid intense geopolitical pressures?

Published

on

Matthew Lloyd Davies discusses the steps companies must take to stay ahead of malicious behaviours and advanced threats.

“Periods of geopolitical instability have historically been accompanied by increased cyber activity and today’s situation is no different,” Matthew Lloyd Davies, a principal security author at Pluralsight, told SiliconRepublic.com. 

He explained, state-aligned threat groups, criminal networks and politically motivated hacktivists often exploit periods of heightened tension, in order to launch harmful campaigns targeting world governments, infrastructure providers and organisations in the private sector. 

In April alone there were multiple breaches and security incidents reported by organisations dealing with sensitive information. For example, Dublin recruitment platform Healthdaq recently suffered a cyberattack from hacker group XP95, which claims to have accessed hundreds of thousands of files.

Advertisement

Also in April, OpenAI said that the organisation would be working on safeguarding and updating the certification process for its apps running on MacOS following reports of a security issue around a third-party development tool. It was also reported that a private Discord group possibly gained unauthorised access to Anthropic’s new AI model Mythos

“Operations vary widely in sophistication,” noted Lloyd Davies, who added, “Some involve advanced espionage or long-term infiltration carried out by highly capable threat actors, while others are less complex but still disruptive, such as distributed denial-of-service attacks, defacement campaigns, or the release of stolen data.”

He said, “Crucially, organisations do not need to be directly involved in a geopolitical dispute to feel the impact. Shared infrastructure, third-party suppliers and cloud platforms create indirect pathways through which cyber activity can spread globally. This means cybersecurity teams must prepare not just for highly sophisticated attacks, but also for waves of opportunistic disruption that often accompany geopolitical events.”

The skills safety net

The security industry is evolving quickly to a point where threat actors and genuine professionals alike are increasingly using AI and other advancements to create new opportunities. On top of that employers are finding it difficult to create a consistent talent pool in a space where cyber resilience is now dependent on the defensive skills evident across the wider workforce, not just within specific teams.  

Advertisement

“Developers, cloud engineers, IT administrators and security teams must all understand how to build, deploy, and maintain secure systems. Without continuous upskilling across these roles, as global tensions rise and attacks become more complex, even well-funded security programmes can struggle to keep pace with evolving threats,” he said. 

The organisations that invest in developing their cloud and cybersecurity skills, across the workforce, will find themselves better positioned to detect security threats earlier, respond faster and adapt.

“This means moving beyond reactive security measures and embedding cybersecurity capability into the broader technology workforce. Upskilling developers in secure coding, strengthening cloud security expertise and ensuring security teams can effectively use emerging technologies like AI all contribute to a stronger defensive posture.”

He suggested that organisations could benefit from letting go of traditional ideas of training such as the one-size-fits-all model and instead of assuming proficiency based on roles or certifications, should consider merit-based hiring, wherein companies quickly identify gaps, creating teams that can adapt, learn new skills and keep pace with threats as they occur. 

Advertisement

Lloyd Davies said, “Training programmes need to be aligned to real-world operational demands, directly drawing on the evolving attack vectors that security teams encounter daily and the conflict scenarios behind them. Infrastructure can’t be secured by theory alone. Scenario-based learning is crucial.”

To be truly effective he said, “Cyber teams must be given opportunities to practice and hone their skills in safe sandbox environments and as cyber threats evolve continuously, upskilling must too. Organisations need to invest in simulation platforms and scenario-based exercises that mirror modern attack vectors including ransomware and identity compromise. 

“Continuous learning without the risk of real-world consequences can allow teams to build confidence while being updated on emerging threats. Equally important is embedding this learning into regular workflows, avoiding skill development being seen as a ‘one-off,’ so that professionals remain agile and prepared to respond effectively to cyber attacks.”

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Advertisement

Source link

Continue Reading

Tech

Netflix gets its own “Clips” vertical video feed to lure you away from TikToks and Reels

Published

on

Netflix is adding a new way for you to watch content on its app. The company’s latest rework is bringing the vertical TikTok and Instagram Reels-style system with “Clips”, which is designed to make the process of finding something to watch more interactive.

The new vertical video feed shares short snippets of movies and shows, blending the addictive interactive experience of social media with the streaming platform.

How this isn’t a TikTok rip-off

While such a format was popularized by apps like TikTok, Netflix Clips has a bigger angle than just showing short-form video content. The company has been testing this for a while, and the main goal is to make discovery faster and more intuitive. You’ve probably been through that tedious process of deciding what to watch on Netflix, which is where Clips comes in.

It lets users get a quick taste of the movies and shows on Netflix. Clips also focuses entirely on Netflix’s own content. The broader plan is also expected to include other forms of media, like podcasts and live events, over time.

Why Netflix made this for smartphone users

Netflix wants to keep users engaged throughout the day, and not just during long viewing sessions at home. Executives have said the platform is aiming to become more of a “daily companion,” using features like Clips to fill shorter attention windows that are already dominated by social media apps like TikTok, Instagram Reels, and YouTube Shorts.

The redesign also introduces a more streamlined navigation system that can curate collections based on genres or moods, turning the app into a more personalized experience.

Advertisement

The Clips section lets users add movies or shows straight to the list from their feed and share the snippets with friends. Even if Netflix isn’t outright saying it, this does make it closer to a social app.

Source link

Advertisement
Continue Reading

Tech

Goal Zero Yeti 1500 Power Station Review (2026): More Power, Better Chemistry

Published

on

All those ports are fairly standard for a power station in this class, and similar to what was on the previous model—although the 140-W USB port is new and very nice to have. Where the new Yeti 1500 shines is the 12-V charging options, which include a high-power 12-V port capable of 30 amp output. That’s enough for most van and overlanding vehicle power systems, meaning you can tie the Yeti 1500 directly to your house power 12-V distribution panel. There’s also standard Anderson connector outputs and a cigarette-lighter-style outlet available.

There are three ways to charge the Yeti 1500. There’s AC wall power, which can charge at up to 1,800 watts, getting you from 0 to 100 percent in just over an hour. (There’s a switch to slow this down to 1,500 W if you’re plugged into a campground pole, which typically can’t handle the full draw.) You can also hook the Yeti up to a max of 900 watts of solar panels. There are both 8-mm inputs and HPP inputs for Goal Zero solar panels. You don’t need Goal Zero panels, though; you can use just about anything so long as you get the right adapters and stick within the charging limits (I use an adapter like this to plug just about any solar panel into just about any power station/charger). The rear charging panel is also where you’ll find the ground lug for semi-permanent installs in a vehicle or off-grid tiny home.

Goal Zero’s Yeti app allows you to control the system from your phone, potentially from the other side of the world if you have the battery connected to your Wi-Fi. I opted for direct connection via Bluetooth, bypassing the network, since I don’t always have my Starlink network up and running in my camper. This still allows me to toggle all the output types on and off, get basic battery status like charge state, current power draw (by type), change the charge profile (there are four), and some charge and discharge history information. The latter is not as full-featured with direct connection as it would be with a network connection, and I found it often had trouble loading, but overall I found the app handled everything I needed it to do. I particularly like the ability to turn off the 12-V output from bed at night, shutting off all power to eliminate any phantom drains on the battery.

The Only One

Image may contain Camera Electronics Tape Player Speaker and Stereo

Photograph: Scott Gilbertson

I’ve relied on a fourth-gen Yeti 1500 as supplemental power for many years now. I’ve run everything from power tools to space heaters to full-size refrigerators, and as a backup for my RV when I needed to do something to the built-in system. In all that time it’s never let me down, and in my experience strikes the best balance between portability and power. It’s heavy, but the dual handles make it pretty easy to carry. I’ve also tested the 1000X and the 500X models, which while lighter and smaller, lack some of the things that make the 1500 great.

Advertisement

Source link

Continue Reading

Trending

Copyright © 2025