This is a follow-up to my recent piece “AI Coach or AI Ghostwriter? The Choice Is Yours,” which argued that AI can either sharpen your thinking or replace it. That piece was about writing. This one is about the other side of the coin: reading. The practical question is: how do you use AI to become a more productive reader rather than a lazier one?
Back in 2006, my UW students and I coined the term “machine reading” to describe the autonomous understanding of text by computers (Etzioni, Banko, & Cafarella, AAAI 2006). Two decades later, large language models (LLMs) can digest, summarize, and answer questions about text with startling competence. The irony is that the biggest consumers of this capability are people, using AI to do our reading for us.
AI-assisted reading has become so pervasive that we are approaching the absurdity captured in Tom Fishburne’s famous Marketoonist cartoon: “AI Written, AI Read.” One AI writes the memo, another AI summarizes it, with minimal human involvement.
The simplest use of AI for reading is summarization, and it certainly has its merits. Drop a 50-page PDF into your favorite LLM, ask for a summary, and you’ll get one in seconds.
But that summary is merely a skeleton. It strips away the voice, the best lines, the telling details, and the nuances that can make or break your understanding. If you are reading a legal contract, the details are the whole point. If you are reading a competitor’s product announcement, the spin they put on the numbers matters more than the numbers themselves. A skeleton doesn’t have a pulse!
AI-assisted reading punishes passivity. A recent Wharton study of over 10,000 participants found that people who relied on AI-generated summaries showed shallower knowledge and offered fewer concrete facts afterward compared to those who engaged with original sources. Advice written after AI use was shorter, less factual, and more homogeneous across users. In other words, AI summaries do not just compress text. They flatten it.
Speed reading via AI can be a bit like speed dating: you cover a lot of ground, but you do not actually know anyone when you leave.
The fundamental question here is not productivity. It is about the impact of AI reading on you as the reader: What happens to your retention, your understanding, your ability to synthesize across sources? Are you winning by doing this, or are you atrophying the cognitive muscle that makes you good at your job? Outsourcing your thinking to AI is not productivity gain; it’s a competence leak.
My practical advice is: treat the summary as a triage tool, not a destination. Use it to decide whether a document deserves your time. That is genuinely valuable. The world produces more text than any human can process, and AI can help you sort the wheat from the chaff in minutes instead of hours. But once you decide that something matters, put down the summary and engage with the source.
The real power of AI reading lies not in one-shot summarization but in dialog. Think of it as an interrogation of the document, focused on what interests you. Upload the contract, the research paper, or the earnings call transcript, and then start asking questions. What are the three riskiest clauses? How does this methodology compare to the Chen et al. paper from last year? Where does the CFO’s commentary contradict the numbers in Table 4?
This is not a command you fire off and forget. It is a back-and-forth conversation between you and the AI about the text, one that surfaces specific quotes, draws connections to related materials, and drills into exactly what you need. The quality of the conversation depends entirely on the quality of your questions. AI-assisted reading rewards curiosity.
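To make the shape of that workflow concrete, here is a toy sketch in Python. The keyword matcher below is a crude stand-in for the actual LLM call (a real session would send the document and your question to a model), but the loop it models — ask a pointed question, get back the relevant passages, ask again — is the point:

```python
import re

def stem_match(a: str, b: str, n: int = 6) -> bool:
    # Crude prefix "stemming": treats terminate ~ termination as a match.
    return len(a) >= n and len(b) >= n and a[:n] == b[:n]

def interrogate(document: str, question: str) -> list[str]:
    """Return the sentences of `document` that touch on the question.
    A toy keyword matcher standing in for an LLM; a real assistant
    would also connect the quotes to related material."""
    sentences = re.split(r"(?<=[.!?])\s+", document)
    keywords = [w.lower() for w in re.findall(r"\w+", question)]
    return [
        s for s in sentences
        if any(stem_match(w.lower(), k) for w in re.findall(r"\w+", s)
               for k in keywords)
    ]

doc = ("The contract runs for three years. Either party may terminate "
       "with 90 days notice. Liability is capped at the fees paid.")
print(interrogate(doc, "What are the termination terms?"))
# → ['Either party may terminate with 90 days notice.']
```

Each answer surfaces quotes you can verify against the source, which is exactly the habit the dialog approach is meant to build.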
A word of caution that I can’t repeat often enough: always verify anything important yourself. AI models hallucinate. They fabricate quotes, invent statistics, and present fiction with the serene confidence of a tenured professor. The verification step is essential. If you skip it, you are not reading with AI. You are gambling with AI.
You also want to adopt different reading strategies for different tasks, just as you would without AI. Summarization is fine for getting the gist of a piece, for sorting your inbox, for deciding what to read next. It will not serve you well if you need to retain the content, defend it in a meeting, or build on it in your own work. For those tasks, you need the interrogation approach, and you need to supplement it with old-fashioned human reading of the passages the AI points you to.
Used well, AI can make you a better, faster, more thorough reader by helping you navigate more material, ask sharper questions, and spot connections you would have missed. Used badly, it turns you into a consumer of predigested pablum, the intellectual equivalent of living on protein shakes when there is a farmers’ market across the street.
The machines are happy to read for you, but they won’t understand for you. The choice, as always, is yours.
Editor’s note: GeekWire publishes guest opinions to foster informed discussion and highlight a diversity of perspectives on issues shaping the tech and startup community. If you’re interested in submitting a guest column, email us at tips@geekwire.com. Submissions are reviewed by our editorial team for relevance and editorial standards.
Back in 2017, Hackaday featured an audio-reactive LED strip project from [Scott Lawson] that has, over the years, become an extremely popular choice for the party animals among us. We’re fascinated to read his retrospective analysis of the project, in which he looks at how it works in detail and explains why, for all its success, he’s still not satisfied with it.
Sound-to-light systems have been a staple of electronics for many decades, and have progressed from simple volume-based flashers and sequencers to complex DSP-driven affairs like his project. It’s particularly interesting to be reminded that the problem faced by the designer of such a system involves interfacing with human perception rather than simply making a pretty light show, and in that context it becomes more important to understand how humans perceive sound and light than to simply dump a visualization to the LEDs. We receive an introduction to some of the techniques used in speech recognition, since our brains are optimized to pick out activity in the speech frequency range, as well as to how humans register light intensity.
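That last point, the nonlinearity of human brightness perception, is worth a concrete sketch. Perceived brightness follows roughly a power law, so LED drivers commonly pre-distort the duty cycle with a gamma of about 2.2. The exponent here is a conventional assumption, not a figure from [Scott Lawson]’s project:

```python
def pwm_for_perceived(level: float, gamma: float = 2.2,
                      max_duty: int = 255) -> int:
    """Map a perceived-brightness level in [0, 1] to an 8-bit PWM duty.
    Human brightness perception is roughly a power law, so a linear
    duty ramp looks uneven; pre-distorting with gamma ~2.2 makes a
    fade appear smooth to the eye."""
    if not 0.0 <= level <= 1.0:
        raise ValueError("level must be in [0, 1]")
    return round((level ** gamma) * max_duty)

# Half of *perceived* brightness needs far less than half the duty cycle:
print(pwm_for_perceived(0.5))  # → 55
```

The same perceptual framing applies on the audio side, which is why the project leans on speech-band analysis rather than raw spectrum dumps.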
For all this sophistication and the impressive results it produces, though, he’s not ready to call it complete. Making it work well with all musical genres is a challenge, as is that elusive human foot-tapping factor. He talks about using a neural network trained on accelerometer data from people listening to music, which can only be described as an exciting prospect. We genuinely look forward to seeing future versions of this project. Meanwhile, if you’re curious, you can head back to 2017 and see our original coverage.
“When it rains, it pours.” That phrase defined retail cybersecurity in 2025. What began as isolated incidents quickly became prolonged, intense disruptions, exposing just how interconnected — and fragile — modern retail operations really are.
Nadir Izrael
CTO and Co-Founder at Armis.
Over the year, high-profile retailers around the world were hit. Luxury global brands like Gucci and Balenciaga suffered data breaches, and Victoria’s Secret was forced to temporarily shut down parts of its digital operations. In the UK, Marks & Spencer, Co-op and Harrods all faced incidents, with the disruption at M&S lasting 15 weeks.
Different triggers, same outcome: major disruption and financial loss.
But when disruption spreads this quickly and lingers this long, it stops being about individual attacks and starts raising a more uncomfortable question: why was retail such fertile ground for them in the first place?
Why disruption spread so easily
While the volume of retailers hit in 2025 might have felt anomalous, it makes sense when viewed through this lens: retail is one of the most effective sectors for causing maximum disruption at scale. The cyberattack on United Natural Foods, a key supplier to tens of thousands of grocery stores across North America, showed how a single compromise can ripple outward – emptying shelves, disrupting lives, and triggering wider economic impact.
But it wasn’t simply a lack of security investment that caught out countless retailers last year; it was the sheer scale of cyber exposure retailers are now dealing with. The most disruptive incidents of the year weren’t driven by sophisticated zero-day exploits, but by attackers exploiting complexity and the lack of contextual understanding of how systems, assets and users interact.
Retailers operate sprawling digital ecosystems that combine ecommerce platforms, cloud infrastructure, in-store operational technology, identity systems, and third-party services. Each connection improves efficiency and scale — but also introduces new exposure and risk. A weakness in one area, whether a supplier, a trusted integration or an unmanaged asset, can quickly cascade into widespread disruption.
Attackers are increasingly adept at exploiting these conditions, too. Rather than targeting a single critical vulnerability, they chain together lower-risk weaknesses, move laterally across environments or providers and take advantage of fragmented visibility between IT, cloud storage and operational systems. The Adidas breach is a clear example: attackers gained access via a third-party supplier, stole customer data and demonstrated how interconnected environments can amplify impact.
And every incident that occurred last year was enabled by the realities of modern retail operations. New systems are deployed quickly, integrations are prioritized over security hygiene, and legacy infrastructure often sits alongside modern cloud services.
This creates blind spots that attackers can exploit long before an incident becomes visible. Security teams are left defending environments that are constantly changing, often without the visibility or intelligence needed to anticipate where risk is building. Many are under-resourced, fighting the growing threat of generative AI, all while trying to embed a culture of collaborative risk management.
After a tumultuous year, one thing is clear: this wasn’t a brief surge in activity or a single bad quarter. It was a sustained pattern of exposure playing out across the retail ecosystem. And as long as that exposure remains fragmented and poorly understood, disruption will continue to outpace response.
Cyber exposure becomes the foundation for resilience
What the past year made clear is that resilience in retail can no longer be built by reacting faster to incidents after they occur. With AI, as well as other emerging technologies becoming more mainstream, the problem is only going to get worse. The scale and persistence of disruption showed that retailers need to rethink how they understand risk in the first place.
That starts with recognizing that many of the most damaging weaknesses don’t sit in a single system or vulnerability, but in the relationships between software assets, platforms, and partners that underpin modern retail operations. This is where cyber exposure management becomes key. Rather than treating risk as a series of isolated alerts or vulnerabilities to be patched, exposure management focuses on understanding how risk originates and accumulates across an organization’s entire digital footprint.
For retailers, that footprint is uniquely complex: ecommerce platforms connect directly to inventory systems, in-store operational technology links back to central networks, identity management systems span employees, and third-party suppliers or contractors are embedded into day-to-day operations. Without a clear understanding of how these elements interact, it becomes impossible to anticipate how a seemingly minor weakness can escalate into widespread disruption.
Cyber exposure management offers a strategic approach to identifying, assessing, prioritizing and reducing cyber risk across an organization’s entire digital footprint. It’s about developing a living, contextual understanding of what assets exist, what role they play within retail operations, how critical they are during peak trading periods, and what other systems or partners they depend on – whether assets are managed or unmanaged, IT or OT, cloud-based or on-premises. This context is what separates manageable risk from systemic failure.
With attackers consistently exploiting gaps, exposure management allows organizations to assess risk in terms of real-world impact – not just technical severity – helping retailers prioritize the exposures most likely to affect operations, customer trust and revenue continuity.
This shift is ultimately about resilience, not just security maturity. By grounding risk decisions in how retail operations actually function, exposure-led approaches help teams anticipate where disruption is most likely to emerge, rather than responding after it has already taken hold. The result is more informed decision-making across IT, security and the wider business, with risk reduction aligned to operational continuity, customer experience and revenue protection.
Resilience starts before the next incident
There’s little room left for complacency. Retailers have learned the hard way that disruption doesn’t arrive in isolation, but through complex, interconnected environments – and once it begins, the impact can escalate quickly and spread far beyond the initial point of failure.
Last year was a wake-up call for the entire retail sector, not just for those that made the headlines. The challenge now is to ask harder questions about how environments are designed, how risk accumulates across systems, and whether businesses truly understand where their most critical points of exposure lie.
Because after all, when it rains, it pours. And the cost of inaction could now very well mean the difference between profit and sustained financial damage.
On Mar 20, the Ministry of Manpower (MOM) released its latest quarterly Labour Market Report, revealing updated figures on retrenchments and broader employment trends.
The data showed that the incidence of retrenchment rose to 6.3 per 1,000 employees, up from 5.9 per 1,000 the year before.
And within this broader trend, white-collar workers have experienced disproportionate pressure.
PMETs are increasingly on the chopping block
Professional, managerial, executive, and technician (PMET) retrenchments have shown a steeper incline compared to the overall workforce.
In 2025, the incidence of retrenchment for this group rose to 10.1 per 1,000 resident PMETs—above the pre-recessionary average—from 8.6 per 1,000 in 2024.
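The per-1,000 incidence figures are a simple rate: retrenchments divided by the employee base, scaled to 1,000. A quick sketch of the arithmetic (the raw counts below are made up purely for illustration; MOM publishes the rate, not these counts):

```python
def incidence_per_1000(retrenched: int, employees: int) -> float:
    """Retrenchment incidence as MOM reports it: cases per 1,000 employees."""
    return round(retrenched / employees * 1000, 1)

# Hypothetical counts chosen to reproduce the reported PMET rate of 10.1:
print(incidence_per_1000(3_030, 300_000))  # → 10.1
```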
The layoffs have been largely concentrated in three sectors:
Financial Services: Banking and insurance firms have cut headcount as market conditions tighten
Information and Communications: Tech and telecom companies are restructuring in response to changing demands
Professional Services: Consulting, legal, and accounting firms have undergone notable workforce adjustments
For this specific labour market report, MOM examined trends in PMET roles to assess concerns around AI-driven job disruptions.
While the evidence does not point conclusively to broad-based displacement, there are signs of restructuring that warrant continued monitoring.
Total employment continued to grow
If you’re working in a PMET role, these trends may naturally raise concerns. However, the broader data suggest that this is not necessarily a contraction in demand for these jobs.
The same sectors that saw the highest PMET layoffs also had relatively high PMET job vacancies in Dec 2025, with a combined total of 14,600, up from 13,900 in the year-ago period.
Data on the number of job vacancies are rounded to the nearest 100.
According to MOM, the overlap between higher retrenchments and higher PMET vacancies in these sectors suggests ongoing restructuring and skills transition, where some jobs are being displaced as firms restructure, while hiring continues for others.
For the full year of 2025, total employment grew by 55,500, up from 44,500 in 2024. Of this, resident employment grew by 11,600, driven largely by financial services as well as health and social services.
In 2026, resident employment is expected to grow at a similar or slightly slower pace, said MOM.
Read more articles we’ve written on Singapore’s job trends here.
A critical vulnerability in the wolfSSL SSL/TLS library can weaken security via improper verification of the hash algorithm or its size when checking Elliptic Curve Digital Signature Algorithm (ECDSA) signatures.
Researchers warn that an attacker could exploit the issue to force a target device or application to accept forged certificates for malicious servers or connections.
wolfSSL is a lightweight TLS/SSL implementation written in C, designed for embedded systems, IoT devices, industrial control systems, routers, appliances, sensors, automotive systems, and even aerospace or military equipment.
The vulnerability, discovered by Nicholas Carlini of Anthropic and tracked as CVE-2026-5194, is a cryptographic validation flaw that affects multiple signature algorithms in wolfSSL, allowing improperly weak digests to be accepted during certificate verification.
The issue impacts multiple algorithms, including ECDSA/ECC, DSA, ML-DSA, Ed25519, and Ed448. For builds that have both ECC and EdDSA or ML-DSA active, it is recommended to upgrade to the latest wolfSSL release.
“Missing hash/digest size and OID checks allow digests smaller than allowed when verifying ECDSA certificates, or smaller than is appropriate for the relevant key type, to be accepted by signature verification functions,” reads the security advisory.
“This could lead to reduced security of ECDSA certificate-based authentication if the public CA [certificate authority] key used is also known.”
According to Lukasz Olejnik, independent security researcher and consultant, exploiting CVE-2026-5194 could trick applications or devices using a vulnerable wolfSSL version to “accept a forged digital identity as genuine, trusting a malicious server, file, or connection it should have rejected.”
An attacker can exploit this weakness by supplying a forged certificate with a smaller digest than cryptographically appropriate, so the system accepts a signature that is easier to falsify or reproduce.
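To make the class of bug concrete, here is a minimal sketch of the kind of pre-check the advisory describes as missing: confirm the digest length matches the declared hash algorithm, and that the hash is strong enough for the key, before running signature verification at all. The digest lengths are the standard SHA-2 output sizes; the per-curve policy is an illustrative assumption, and none of this is wolfSSL’s actual code:

```python
# Expected digest lengths in bytes for common hash algorithms (standard values).
DIGEST_LEN = {"sha256": 32, "sha384": 48, "sha512": 64}

# Weakest hash (in bits) accepted per curve size -- an illustrative policy,
# not wolfSSL's: P-521 conventionally pairs with SHA-512.
MIN_HASH_BITS = {256: 256, 384: 384, 521: 512}

def digest_ok(hash_name: str, digest: bytes, curve_bits: int) -> bool:
    """Reject a digest whose length disagrees with the declared hash,
    or which is too weak for the curve, *before* ECDSA verification."""
    expected = DIGEST_LEN.get(hash_name)
    if expected is None or len(digest) != expected:
        return False  # unknown algorithm, or truncated/padded digest
    return len(digest) * 8 >= MIN_HASH_BITS.get(curve_bits, 1 << 30)

print(digest_ok("sha256", b"\x00" * 16, 256))  # → False (undersized digest)
```

Skipping checks like these is what lets a forged signature over a smaller, easier-to-falsify digest slip through.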
While the vulnerability impacts the core signature verification routine, prerequisites and deployment-specific conditions may limit exploitation in practice.
System administrators managing environments that do not use upstream wolfSSL releases but instead rely on Linux distribution packages, vendor firmware, and embedded SDKs should seek downstream vendor advisories for better clarity.
For example, Red Hat’s advisory, which assigns the flaw a maximum severity rating, states that MariaDB is not affected because it uses OpenSSL rather than wolfSSL for cryptographic operations.
Organizations using wolfSSL are advised to review their deployments and apply the security updates promptly to ensure certificate validation remains secure.
Microsoft is shutting down Outlook Lite on May 26, the company confirmed to TechCrunch on Monday. Launched in 2022, Outlook Lite is a lightweight version of the regular Outlook app, designed for Android phones with limited storage and regions with slower internet connections.
The app had already been scheduled for retirement — Microsoft announced last year that the app would be removed from the Google Play Store in October 2025. Now the company has confirmed that the app will lose functionality for existing users next month.
“To continue enjoying a secure and feature-rich email experience, we recommend switching to Outlook Mobile,” Microsoft says on an Outlook Lite support page.
Outlook Lite users will be able to access their existing email, calendar items, and attachments by signing into Outlook Mobile. Users will also be directed to the Google Play Store to download the standard Outlook app.
If you talk to sports car fans and enthusiasts, you’ll probably hear differing opinions about which is better: front-engined or mid-engined sports cars. Both have their own pros and cons and unique driving characteristics, and the layouts will also impart a certain look, most evident in the radical exterior change the Chevrolet Corvette underwent after its switch from a front-engine to a mid-engine platform.
With their engines mounted behind the cabin, mid-engined sports cars have a distinct profile that often brings to mind high-end, exotic European supercars. It’s a look that survives even when scaled down to less expensive, more mainstream-oriented cars, resulting in some beautiful vehicles. So with that in mind, we’ve rounded up five of what we think are the best-looking, most handsomely designed mid-engined sports cars of the modern era.
Now, there are countless beautiful (and incredibly expensive) mid-engined exotics that we could include on a list like this, but we’ve left some of the obvious choices out to keep things interesting. Thus, no exotic Ferraris and Lamborghinis here. Even so, we have a diverse mix of machinery that includes mid-engined offerings from Japan, the United States, and Europe, with engines ranging from modest, low-power four-cylinders to fire-breathing V8s.
Toyota MR2 (second generation)
Along with the front-engined Mazda MX-5 Miata, the Toyota MR2 is one of the most popular lightweight sports cars to come out of Japan. The MR2 debuted in the early 1980s and was built over three distinct generations before being discontinued in the mid-2000s. Each generation of the MR2 has its own personality and following, but from a design and performance standpoint, it’s the second generation that represents the MR2 at its peak.
The second-generation MR2 debuted in Japan in 1989 and was on sale around the world shortly after. With its wider profile, its flip-up headlights, and distinct side vents, the second-generation car had a more aggressive look that, to some eyes, looks a lot like a scaled-down version of the Ferrari 348. The second-gen MR2 also had the performance to back up its look. Thanks to the 3S-GTE engine under the hood, Car and Driver got the MR2 Turbo to 60 mph in just under six seconds — very impressive by early ’90s standards.
To do all this at a relatively affordable price — $20,000 or so for the Turbo in 1990 — shows just how powerful Toyota was during this time. Today, along with the Supra it shared showrooms with, the second-gen MR2 is considered one of the most desirable Toyotas of its time, and especially in turbocharged form, one of the most desirable Japanese sports cars of the ’90s.
2004-2006 Ford GT
When Ford designers started working on the automaker’s mid-2000s Ford GT revival, they had a pretty big head start in creating a beautiful car. That’s because the design of the Ford GT was heavily inspired by the attractive and legendary Ford GT40 race car of the 1960s. Still, retro design isn’t always as easy as it looks, and it doesn’t take much for retro cars to veer into the tacky, but the GT’s designers absolutely aced their mission.
The modern road-going Ford GT is a much larger car than the GT40 it’s based on, but the lines are so good that you don’t realize that until you actually see the two cars side by side. The GT’s attractiveness carries over to the interior as well, with a wonderfully executed modern interpretation of 1960s design. Of course, it also doesn’t hurt that it’s got a mid-mounted supercharged 5.4-liter V8 mated to a manual transmission.
Because the initial design was executed so well, the 2000s Ford GT has never felt dated in the way other cars from its era might. Design-wise, it almost feels like a remastered car from the ’60s rather than a product of the 2000s. All of these are reasons why, despite only being a little over 20 years old, the value of the mid-2000s Ford GT has climbed tremendously, with the car now becoming a highly desirable modern classic in its own right.
Lotus Elise
A sports car’s appealing design need not be tied to its physical size or amount of horsepower. Case in point: the Lotus Elise. The Elise is considered one of the purest sports cars of the modern era, with a platform and design that stretch back to the mid-1990s. While some could argue that the Elise isn’t a traditionally beautiful sports car, much of its beauty comes from its focus on simplicity. The Elise evolved significantly between its mid-’90s debut and the end of its production run in 2021, but the car never strayed from its mission of delivering lightness and response over all else.
The later variants of the Elise sold in North America use modestly powered Toyota four-cylinder engines, with the Elise’s light weight meaning it didn’t need massive amounts of horsepower to offer a fast and highly enjoyable sports car experience — part of why drivers love this car. Design-wise, the Elise is all about compact minimalism, and its svelte body lines and distinct round tail lights helped give the Elise its signature look.
Its attractive looks and go-kart-like handling are just a couple of the reasons why both the Elise and its closely-related counterpart, the Lotus Exige, have emerged as genuine modern classics. With its focus only on the essentials, the Elise is the antidote to the high-horsepower, overweight, and often overstyled modern performance car.
Alpine A110
Like the retro-styled Ford GT, the modern Alpine A110 is a mid-engined sports car that might technically be cheating with its good looks. That’s because, like the Ford, the A110 is a modern reinterpretation of an iconic 1960s design, and one that happens to be done very well.
The modern Alpine A110 (which is built by Renault) debuted in the late 2010s to wide acclaim as a rival to the Porsche Cayman. Boasting a mid-mounted turbocharged four-cylinder engine and a low curb weight, the A110 took its design inspiration from the original, rear-engined Alpine A110 of the ’60s and ’70s. Among the styling traits that carried over to the new A110 are the original’s quad front headlights and wrap-around rear window.
To this point, the biggest problem with the A110 is that, like other French models, it’s not offered in North America. In fact, it might just be the coolest modern performance car that’s not currently sold here. There have been rumors and serious speculation that the A110 will eventually make its way to the United States, although we don’t yet know whether it will be as a gasoline model or as a next-generation electric Alpine sports car.
Honda/Acura NSX
Sometimes a sports car is a hit from the moment it debuts; other times, it ages nicely and becomes a favorite for a new generation of enthusiasts. In the case of the highly unique Honda (or Acura) NSX, it’s both. When the NSX first debuted in 1989, the car was a game-changer. It wasn’t just an impressive Japanese sports car; instead, it was a bona fide, homegrown Japanese exotic laced with Honda’s racing DNA.
Thanks to design choices like an all-aluminum construction and a mid-mounted, naturally aspirated VTEC V6 engine, the NSX had the performance and feel of a Ferrari — but in a more affordable and more reliable package that could be serviced at your local Honda or Acura dealer. In comparison tests, it edged out its more established performance car competitors. Design-wise, the original NSX was somewhat restrained, but its clean lines have aged extremely well, making it a favorite even among those born too late to experience its original run.
When new, the NSX had a relatively affordable price tag for what it delivered, but values have climbed substantially in recent years, with certain examples crossing the $300,000 mark at auction. While many subsequent Japanese sports cars have eclipsed the original NSX’s performance benchmarks, its aura is still unmatched.
The photorealistic digital character is trained on Zuckerberg’s mannerisms, tone, and his own thinking on company strategy. He is personally involved in testing it. The effort, described by four people familiar with the matter, is separate from a ‘CEO agent’ that handles tasks for Zuckerberg directly.
Meta is building a photorealistic, AI-powered version of Mark Zuckerberg that can interact with employees in his place, the Financial Times reported on Monday, citing four people familiar with the matter.
The character is being developed by Meta’s Superintelligence Labs and is trained on Zuckerberg’s mannerisms, tone, and publicly available statements, as well as his own thinking on company strategy, so that employees, in the words of one person familiar with the project, ‘might feel more connected to the founder through interactions with it.’
Zuckerberg is personally involved in training and testing the animated version of himself.
The effort is at an early stage and is separate from a different project, first reported by the Wall Street Journal, in which Meta is building a ‘CEO agent’ designed to help Zuckerberg himself retrieve information faster, a tool that assists him rather than stands in for him.
The AI character project is part of a broader push within Meta’s Superintelligence Labs to develop lifelike, AI-driven digital figures capable of real-time conversation. The technical challenge is substantial: achieving realism and preventing perceptible delays in conversation requires enormous computing power.
The project reflects a significant escalation of Zuckerberg’s own involvement in Meta’s AI work. According to people familiar with the matter, he has been spending five to ten hours a week writing code on various AI projects and attending technical engineering review sessions, an unusual level of hands-on engagement for a CEO running a $1.6 trillion company.
He has committed publicly to developing what he calls ‘personal superintelligence’ as Meta works to close the gap with OpenAI and Google. On a January earnings call, he said Meta was ‘elevating individual contributors and flattening teams’ through AI-native tooling.
Meta has a history with AI characters. In September 2023 it launched a range of celebrity-based chatbots, among them personas modelled on Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka, all of whom licensed their likenesses, but these were discontinued in the summer of 2024 after failing to gain meaningful traction.
Meta then opened an AI Studio allowing users and creators to build their own AI characters, but ran into controversy when users began generating sexually explicit personas. Since January, Meta has restricted teenager access to AI characters. Zuckerberg’s interest in the format was reportedly sharpened by the success of AI companion startup Character.AI, particularly with younger users.
Meta is not the only company exploring AI versions of its leadership. Uber CEO Dara Khosrowshahi said during a podcast interview earlier this year that his employees had built an AI clone of him.
But the Zuckerberg project has a different scale and institutional purpose: it is being designed as a mechanism for a $1.6 trillion company’s 79,000 employees to feel a sense of connection to a founder who is, by any measure, difficult to reach.
A multifaceted decahedral black ceramic bezel and sandwich-style three-piece case—a reworking of Bremont’s signature Trip-Tick construction—house a chronometer-rated automatic chronograph movement made by Sellita, with a 62-hour power reserve.
The watch will be a passenger aboard the FLIP rover, due to launch as part of Astrobotic’s Griffin Mission One (Griffin-1), expected to land at the lunar south pole at some point in the second half of this year.
It’s a one-way mission: The rover will remain permanently on the lunar surface, with the watch ticking away as it roams the landscape. FLIP’s objectives include reaching elevated positions on the lunar terrain, gathering data on lunar dust accumulation, testing dust-mitigation coatings, and surviving a two-week lunar night in hibernation (which would be a first for a US rover).
For Bremont, the mission is frankly symbolic rather than a source of serious timekeeping data. The watch will be positioned vertically in a specially designed housing within the FLIP's chassis, between its front wheels. Only the watch head, weighing 107 grams, is included, glued in place using a specialist composite, its face visible to FLIP's HD cameras. But the hibernation periods will mean the watch (whose mechanical movement is normally driven by the motion of the wearer's arm) will stop running once its 62-hour power reserve runs down.
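The arithmetic behind that stoppage is straightforward. A back-of-the-envelope sketch, using Bremont's stated 62-hour reserve and the standard figure of roughly 14 Earth days for a lunar night:

```python
# Why the watch will stop during lunar night: the mainspring's power
# reserve is far shorter than the period of darkness.
power_reserve_hours = 62        # Bremont movement's stated reserve
lunar_night_hours = 14 * 24     # a lunar night lasts roughly 14 Earth days

shortfall = lunar_night_hours - power_reserve_hours
print(f"Lunar night: {lunar_night_hours} h, reserve: {power_reserve_hours} h")
print(f"The watch sits stopped for roughly {shortfall} h each night")
```

Even if the watch entered hibernation fully wound, it would sit idle for most of each two-week night.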
When the FLIP is on the move again, its motion should, in theory, jolt the mechanism into action once more. Even under gravity just a sixth of Earth's, the accelerations, pitches, and tilts of the rover should swing the winding rotor, though with less torque and efficiency than on Earth.
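The intuition can be sketched numerically. Gravitational torque on a winding rotor scales linearly with g, so it falls to about a sixth on the Moon; the rotor mass and center-of-mass radius below are illustrative assumptions, not Bremont specifications:

```python
# Peak gravitational torque on an automatic winding rotor with its arm
# horizontal: tau = m * g * r. The mass and radius are assumed values.
G_EARTH = 9.81   # m/s^2
G_MOON = 1.62    # m/s^2

rotor_mass_kg = 0.004    # assumed ~4 g rotor
com_radius_m = 0.008     # assumed 8 mm center-of-mass radius

def rotor_torque(g: float) -> float:
    """Gravitational torque (N*m) acting on the rotor at a given g."""
    return rotor_mass_kg * g * com_radius_m

ratio = rotor_torque(G_MOON) / rotor_torque(G_EARTH)
print(f"Lunar torque is {ratio:.2f}x the terrestrial value")
```

Whatever the rotor's actual geometry, the ratio depends only on the two gravity values, which is why the rover's own jolts have to do most of the winding.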
“My guess is that the watch will function from time to time, but for short periods,” Cerrato says. “We will learn along the way. But that’s what is exciting—it projects us into a thinking process that is absolutely out of the box. Just the fact of having it there is inspiring.” However, there is little doubt that Bremont will, just like other brands with any ties to the cosmos, mine its new space connection for all it is worth.
FLIP itself, which weighs just 1,058 pounds and carries a mix of commercial and government payloads, four HD cameras, and a deployable solar array, is fundamentally a technology demonstrator for Flexible Logistics and Exploration (FLEX), Astrolab's much larger SUV-sized rover destined to support NASA's Artemis program. Astrolab developed the FLIP from scratch after VIPER, the NASA rover that the Griffin-1 mission was originally contracted to carry, was put on pause in 2024, leaving Astrobotic seeking a stand-in in short order. Astrolab signed the contract within a month of hearing about the opportunity in the fall of 2024 and took the FLIP from blank sheet to finished rover in roughly a year.
Its standout feature is its hyper-deformable wheels, minutely structured from silicone, composite, and stainless steel, which create a soft, enlarged contact surface with the terrain. “It's like if you're off-roading in a Jeep or Land Rover where you let some air out of the tires to go softer and spread the load over a larger area,” explains Astrolab's founder, Jaret Matthews. The moon's nighttime temperatures of around -200 degrees Celsius (around -328 Fahrenheit) would cause conventional rubber tires to become glass-like and shatter; Astrolab's wheels are designed both to withstand that cold and to keep the rover from sinking into the unconsolidated lunar dust, or regolith, that covers the surface.
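The load-spreading logic is simple statics: ground pressure is weight divided by total contact area, so a bigger patch means less sinking. A rough sketch using the rover's stated mass; the wheel count and per-wheel contact patch are assumptions for illustration:

```python
# Ground pressure under the rover: p = weight / total contact area.
# Doubling the contact patch halves the pressure on the regolith.
LB_TO_KG = 0.4536
G_MOON = 1.62                     # m/s^2

rover_mass_kg = 1058 * LB_TO_KG   # FLIP's stated 1,058 lb
n_wheels = 4                      # assumed four-wheel layout
patch_area_m2 = 0.03              # assumed contact patch per deformable wheel

weight_n = rover_mass_kg * G_MOON
pressure_pa = weight_n / (n_wheels * patch_area_m2)
print(f"Lunar weight: {weight_n:.0f} N, ground pressure: {pressure_pa/1000:.1f} kPa")
```

The same calculation explains the airing-down analogy: softer tires enlarge `patch_area_m2`, and the pressure drops in direct proportion.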
The case is white zirconium oxide ceramic with a Ceratanium bezel and back, rated to handle temperature swings from 100 to -100 degrees Celsius (212 to -148 Fahrenheit). Indeed, the whole piece has been shaken to 10 g's at Vast's Long Beach facility, exceeding forces astronauts experience during ascent, and came out the other side running just fine. Price is still up in the air.
TAG Heuer Monaco Evergraph (From $25,000)
Watch brands love finding ever more recherché areas to reinvent, and the precise “snick” of a chronograph’s stop/start/reset buttons is the latest micro-battlefield in which R&D teams are duking it out. Last year, Audemars Piguet took the feel of an iPhone button as the inspiration for its Royal Oak RD#5; now TAG Heuer has its own take on push-button ergonomics.
Normally, chronograph buttons involve a cluster of levers, springs, and cams that click into place with varying degrees of precision. TAG Heuer has thrown most of that out with the Calibre TH80-00, five years in development between its TAG Heuer LAB innovation department and movement maker Vaucher Manufacture Fleurier. It replaces the traditional architecture with two flexible bistable components—essentially shape-shifting parts that snap between positions—produced via high-precision LIGA fabrication, a micro-manufacturing technique that includes lithography, electroforming, and molding.
The result? Crisper actuation that, crucially, doesn’t degrade. According to TAG, the 10,000th press feels identical to the first. Paired with TAG’s incredibly high-tech TH-Carbonspring oscillator (magnetism-resistant, 5-Hz, 70-hour reserve, COSC-certified), it’s housed in a reworked 40-mm titanium Monaco with the crown back on the left where Steve McQueen’s 1969 original had it. You get two versions: brushed titanium with blue accents or black Diamond-Like Carbon (DLC) with red. The dial is transparent acrylic, so you can watch the compliant mechanism do its thing.
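“Bistable” can be pictured as a double-well potential: two stable rest positions separated by an energy barrier, so the part snaps decisively from one state to the other rather than drifting between them. A toy model of the concept, not TAG Heuer's actual geometry:

```python
# Double-well potential U(x) = a*x^4 - b*x^2: a classic model of a
# bistable mechanism. Stable equilibria sit where dU/dx = 0 and U curves up.
import math

a, b = 1.0, 2.0   # illustrative stiffness coefficients

def potential(x: float) -> float:
    return a * x**4 - b * x**2

# dU/dx = 4a*x^3 - 2b*x = 0  ->  x = 0 (unstable) or x = +/- sqrt(b / (2a))
x_stable = math.sqrt(b / (2 * a))
barrier = potential(0.0) - potential(x_stable)  # energy needed to snap over
print(f"Stable positions at x = +/-{x_stable:.2f}, barrier height {barrier:.2f}")
```

Because each rest position sits at the bottom of its own well, every press ends in exactly the same place, which is why the feel does not degrade with wear the way sliding lever-and-spring clusters can.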
Vacheron Constantin Overseas Dual Time Cardinal Points (Price on Request)
Vacheron Constantin’s Overseas line, among the most celebrated examples of Switzerland’s dominant “sports-luxe” genre, leans heavily into the sports side with a full-titanium GMT treatment across four references. Each dial is color-mapped to a compass point: white for north, brown for south, green for west, blue for east, contrasting with a bright orange, Rolex-style GMT hand for the home time zone.
The lineage traces to a 2019 prototype built for explorer Cory Richards to wear up Everest—probably the most luxurious timepiece that has been to such places. The 41-mm case, integrated bracelet, and folding clasp are all in titanium with a matte anthracite finish on the bezel and crown. Inside is the in-house Calibre 5110 DT/3, a self-winding GMT with home-time am/pm indicator, local-time date pusher, and 60-hour reserve. Classic sports watch attributes, but here certified with the Geneva Hallmark, the highest official benchmark of fine watchmaking and hand-finishing.
American Airlines has now become the latest company to take advantage of the revamped boarding pass system in the iOS 26 update.
At WWDC 2025, Apple revealed that upgraded boarding passes, with support for Live Activities and real-time flight information, would make their way to the Apple Wallet app with iOS 26. Improved support for tracking luggage with AirTags and Find My, along with maps data for airports, was touted as well. Since then, United Airlines and Southwest Airlines have rolled out support for the iOS 26 boarding pass system, and now American Airlines has followed suit. The American Airlines iOS app was recently updated, and its release notes detail the upgraded boarding pass experience.