Jenny Zhang left New York for Shenzhen last year with a clear plan. She wanted to build a camera that fit right into daily routines without forcing anyone to hold a device or wear something on their face. The result sits in her hair like an ordinary barrette, chunky and white, ready to record whatever passes in front of it.
Zhang is the founder of Computer Angel, a small startup where she spent months in Shenzhen workshops turning her idea into a working prototype. The clip snaps easily into place and stays secure even as you move around, the kind of accessory you can leave on all day. With the camera positioned directly over the top of your head, a press or a tap of the button is all it takes to start snapping away.
The resulting footage is fairly low-resolution, comparable to what an old flip phone might capture. The colors are warm and the edges fuzzy, giving each clip a personality far more appealing than super-sharp, clinical footage. You get a hands-free view of your daily life from an angle your phone simply cannot reach, as if a personal cameraman were following you around at all times.
Zhang made a point of keeping the design lighthearted, making the clip read as a piece of jewelry first and a piece of technology second. That choice turns out to matter: people are far more inclined to wear something that looks good on them. The smart glasses larger companies are producing are all about packing in mics, speakers, and assistants that can identify things in real time or answer queries on the fly. Computer Angel’s camera does none of that. It has one job: save what you see, exactly as you see it.
Zhang has yet to announce exact pricing or release dates. She’s keeping the details under wraps while she refines the build, but she’s always glad to share progress on social media, posting test videos and behind-the-scenes looks at the process of transitioning from a sketch to actual hardware. [Source]
Amazon’s decision to cut support for older Kindles has pushed some longtime owners toward jailbreaking, a route many never expected to consider.
From May 20, 2026, Kindle devices released in 2012 or earlier will no longer be able to buy, borrow, or download new books directly from Amazon. Books already downloaded will still work, but the store experience is basically being switched off for these devices. Reports now suggest that some users are looking at jailbreaks as a way to keep older Kindles useful instead of replacing hardware that still works.
Why are Kindle owners turning to jailbreaks?
The frustration is not just about losing store access. On Reddit, many users are treating this as another “buying isn’t owning” moment. Several owners say their old Kindles still work perfectly for reading, which makes the shutdown feel unnecessary. Many users see this as a right-to-repair and ownership issue. If an old Kindle still turns on, has a working screen, battery, and buttons, they argue it should not be pushed toward retirement because Amazon has ended software support.
A Kindle jailbreak means removing some of Amazon’s software restrictions so users can install community-made tools and manage the device more freely. In this case, owners are mainly interested in keeping older Kindles useful for reading, sideloading books, and avoiding forced updates that could close those workarounds.
What are the risks of jailbreaking a Kindle?
Jailbreaking is not a clean fix for everyone. The process can fail if users install the wrong files, follow bad instructions, or use a method that does not match their Kindle model or firmware version. In the worst case, the device can become unstable or stop working properly.
In many places, modifying a device for personal use is not automatically illegal. But using a jailbreak to break DRM, remove copy protection, or sell modified Kindles can create legal trouble.
Even if Amazon’s decision makes sense from a support and maintenance perspective, it has landed badly with many users. People are tired of electronics being treated as disposable once official support ends. For some older Kindle owners, jailbreaking is one way to keep those devices out of the e-waste pile.
United Nations’ Under-Secretary and Special Envoy for Digital and Emerging Technologies, Amandeep Singh Gill, appears via Zoom to deliver the opening keynote at Seattle University’s 2026 Ethics and Tech conference on May 15, 2026. (Photo: Ken Yeung)
Big tech companies are deploying compute clusters with millions of GPUs to train and run AI models. But across the entire continent of Africa — encompassing 54 countries and more than 1.5 billion people — fewer than 1,000 GPUs are available for researchers and developers to train models on local-language datasets.
That disparity illustrates what the United Nations’ top digital envoy calls an “immense concentration of tech power and wealth” in a few zip codes — not just countries or regions, but confined areas, primarily in the U.S., where the companies shaping AI are based.
He didn’t name names, but the point hit close to home for the Seattle audience: 98109 for Amazon, 98052 for Microsoft.
Delivering the keynote address via Zoom at Seattle University’s 2026 Ethics and Tech conference on Friday, Under-Secretary Amandeep Singh Gill called 2026 “especially seminal” for AI governance, as the technology shifts from model capabilities and infrastructure investment to systems that perform real-world tasks autonomously.
Gill pointed to the global response to Anthropic’s Mythos AI model — which the company restricted from broad public release over cybersecurity concerns — as an example of why AI governance requires a comprehensive, international approach.
Here are more of the key messages from his talk.
AI could become a “systemic risk.” Gill said the technology is a “relatively minor risk” now but warned it could soon bypass cybersecurity defenses, accelerate armed conflict, and erode public trust through deepfakes and misinformation. “When we cannot tell the difference between what is true and untrue, what is reality or imaginary, then we lose this shared sense of an understanding of facts,” he said.
Armed conflict could worsen. Gill warned that AI risks “lowering the threshold of conflict, confusing accountability under international humanitarian law, and setting us off on escalation ladders that we cannot control.”
AI’s energy demands are threatening climate goals. The energy required for large language models, agentic systems, and inference is already threatening national net-zero targets, Gill warned. Data center emissions, water consumption for cooling, hardware turnover, and mineral extraction costs are compounding — and falling disproportionately on low-income countries.
AI is both “a potential solution and a stressor” for the environment. It could optimize renewable energy grids and accelerate progress in fusion and batteries, but the short-term costs are mounting. Gill said the UN is examining how to ensure equity and just transitions “over these time horizons.”
The UN is building a scientific panel for AI modeled after the Intergovernmental Panel on Climate Change. Chaired by journalist and Nobel Peace Prize winner Maria Ressa and Turing Award-winning AI researcher Yoshua Bengio, the 40-member panel is deliberately composed of only two members each from China and the U.S., with the remaining 36 from other countries, including seven from Africa, to ensure more countries are heard. Its first report is expected in July 2026.
The UN is putting AI governance conversations under one roof. Conversations about AI previously happened in separate bodies with narrow mandates. Now they’re being brought onto what Gill called a “horizontal platform” where policymakers from all 193 countries can learn from each other and develop common approaches.
Gill called AI governance a “sovereign decision.” The UN won’t tell countries how to regulate AI, but governance frameworks mean little if nations lack the capacity to participate. Gill called for support of community-driven AI projects that invest in local research and innovation ecosystems, allowing people to use these tools to solve their own problems.
He acknowledged the UN is working with limited resources against an enormous challenge, but said the alternative is leaving AI’s trajectory to market forces and geopolitical competition.
The goal, he said, is a world where AI empowers democracies and societies, and creates opportunities not just for “a few billionaires and trillionaires” but for everyone.
Retrieval-augmented generation (RAG) has become the de facto standard for grounding large language models (LLMs) in private data. The standard architecture — chunking documents, embedding them into a vector database, and retrieving top-k results via cosine similarity — is effective for unstructured semantic search.
However, for enterprise domains characterized by highly interconnected data (supply chain, financial compliance, fraud detection), vector-only RAG often fails. It captures similarity but misses structure. It struggles with multi-hop reasoning questions like, “How will the delay in Component X impact our Q3 deliverable for Client Y?” because the vector store doesn’t “know” that Component X is part of Client Y’s deliverable.
This article explores the graph-enhanced RAG pattern. Drawing on my experience building high-throughput logging systems at Meta and private data infrastructure at Cognee, we will walk through a reference architecture that combines the semantic flexibility of vector search with the structural determinism of graph databases.
The problem: When vector search loses context
Vector databases excel at capturing meaning but discard topology. When a document is chunked and embedded, explicit relationships (hierarchy, dependency, ownership) are often flattened or lost entirely.
Consider a supply chain risk scenario. While this is a hypothetical example, it represents the exact class of structural problems we see constantly in enterprise data architectures:
Structured data: A SQL database defining that Supplier A provides Component X to Factory Y.
Unstructured data: A news report stating, “Flooding in Thailand has halted production at Supplier A’s facility.”
A standard vector search for “production risks” will retrieve the news report. However, it likely lacks the context to link that report to Factory Y’s output. The LLM receives the news but cannot answer the critical business question: “Which downstream factories are at risk?”
In production, this manifests as hallucination. The LLM attempts to bridge the gap between the news report and the factory but lacks the explicit link, leading it to either guess relationships or return an “I don’t know” response despite the data being present in the system.
The pattern: Hybrid retrieval
To solve this, we move from a “Flat RAG” to a “Graph RAG” architecture. This involves a three-layer stack:
Ingestion (The “Meta” Lesson): At Meta, working on the Shops logging infrastructure, we learned that structure must be enforced at ingestion. You cannot guarantee reliable analytics if you try to reconstruct structure from messy logs later. Similarly, in RAG, we must extract entities (nodes) and relationships (edges) during ingestion. We can use an LLM or named entity recognition (NER) model to extract entities from text chunks and link them to existing records in the graph.
Storage: We use a graph database (like Neo4j) to store the structural graph. Vector embeddings are stored as properties on specific nodes (e.g., a RiskEvent node).
Retrieval: We execute a hybrid query: a vector search locates the most relevant entry node, then a graph traversal collects the structurally connected context around it.
Reference implementation
Let’s build a simplified implementation of this supply chain risk analyzer using Python, Neo4j, and OpenAI.
1. Modeling the graph
We need a schema that connects our unstructured “risk events” to our structured “supply chain” entities.
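As a sketch, the schema can be expressed as Neo4j setup statements. The node labels, relationship types, and property names here are assumptions for illustration, not the article's exact schema; the vector index syntax follows Neo4j 5.x, sized for OpenAI's 1536-dimensional embeddings.

```python
# Hypothetical graph schema for the supply-chain example.
# Structural nodes (Supplier, Component, Factory) come from the ERP/SQL side;
# RiskEvent nodes carry the unstructured text plus its embedding.
SCHEMA_STATEMENTS = [
    "CREATE CONSTRAINT supplier_name IF NOT EXISTS "
    "FOR (s:Supplier) REQUIRE s.name IS UNIQUE",
    "CREATE CONSTRAINT factory_name IF NOT EXISTS "
    "FOR (f:Factory) REQUIRE f.name IS UNIQUE",
    # Vector index over RiskEvent embeddings (Neo4j 5.x syntax)
    "CREATE VECTOR INDEX risk_embedding IF NOT EXISTS "
    "FOR (e:RiskEvent) ON e.embedding "
    "OPTIONS {indexConfig: {`vector.dimensions`: 1536, "
    "`vector.similarity_function`: 'cosine'}}",
]

# Relationships we expect to traverse:
#   (Supplier)-[:SUPPLIES]->(Component)-[:USED_BY]->(Factory)
#   (RiskEvent)-[:AFFECTS]->(Supplier)
```

Each statement would be run once against the database via the Neo4j driver before ingestion begins.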
2. Ingestion: Linking structure and semantics
In this step, we assume the structural graph (suppliers -> factories) already exists. We ingest a new unstructured “risk event” and link it to the graph.
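A minimal sketch of that linking step, with illustrative names throughout. Substring matching stands in for the NER model or LLM extraction call a production pipeline would use, and the Cypher would be executed with the neo4j driver alongside an embedding vector:

```python
# Cypher to attach a new RiskEvent to suppliers already in the graph
# (query shape and relationship names are illustrative).
INGEST_QUERY = """
MERGE (e:RiskEvent {text: $text})
SET e.embedding = $embedding
WITH e
UNWIND $links AS supplier_name
MATCH (s:Supplier {name: supplier_name})
MERGE (e)-[:AFFECTS]->(s)
"""

def link_risk_event(text: str, known_suppliers: list[str]) -> dict:
    """Find which known graph entities a news snippet mentions.
    Naive substring matching stands in for NER / LLM extraction."""
    links = [s for s in known_suppliers if s.lower() in text.lower()]
    return {"text": text, "links": links}

params = link_risk_event(
    "Flooding in Thailand has halted production at TechChip Inc's facility.",
    known_suppliers=["TechChip Inc", "GlobalFab Ltd"],
)
# params["links"] now holds ["TechChip Inc"], ready to pass as
# parameters to INGEST_QUERY together with the computed embedding
```

The key point is that the link to the structural graph is created at write time, not reconstructed at query time.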
3. The hybrid retrieval query
This is the core differentiator. Instead of just returning the top-k chunks, we use Cypher to perform a vector search to find the event, and then traverse to find the downstream impact.
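A sketch of what that Cypher might look like, using Neo4j 5.x's `db.index.vector.queryNodes` procedure; the index name and relationship types are illustrative assumptions carried over from the schema above:

```python
# Step 1: vector search finds the most relevant RiskEvent nodes.
# Step 2: a graph traversal follows AFFECTS / SUPPLIES / USED_BY
# edges out to the downstream factories.
HYBRID_QUERY = """
CALL db.index.vector.queryNodes('risk_embedding', 3, $query_embedding)
YIELD node AS event, score
MATCH (event)-[:AFFECTS]->(s:Supplier)
      -[:SUPPLIES]->(c:Component)
      -[:USED_BY]->(f:Factory)
RETURN event.text AS risk,
       s.name AS supplier,
       f.name AS factory,
       score
ORDER BY score DESC
"""
```

The traversal is deterministic: once the vector search pins down the event, the downstream factories are found by following edges, not by similarity.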
The output: Instead of a generic text chunk, the LLM receives a structured payload:
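Such a payload might look something like this; the field names and the similarity score are illustrative, not the article's exact output:

```python
# Illustrative shape of the structured context handed to the LLM
payload = {
    "risk": "Flooding in Thailand has halted production "
            "at TechChip Inc's facility.",
    "supplier": "TechChip Inc",
    "factory": "Assembly Plant Alpha",
    "score": 0.91,  # vector similarity of the matched RiskEvent
}
```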
This allows the LLM to generate a precise answer: “The flooding at TechChip Inc puts Assembly Plant Alpha at risk.”
Production lessons: Latency and consistency
Moving this architecture from a notebook to production requires handling trade-offs.
1. The latency tax
Graph traversals are more expensive than simple vector lookups. In my work on product image experimentation at Meta, we dealt with strict latency budgets where every millisecond impacted user experience. While the domain was different, the architectural lesson applies directly to Graph RAG: You cannot afford to compute everything on the fly.
Mitigation: We use semantic caching. If a user asks a question similar (cosine similarity > 0.85) to a previous query, we serve the cached graph result. This reduces the “graph tax” for common queries.
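A semantic cache of this kind can be sketched in a few lines. The 0.85 threshold comes from the text above; the implementation details (in-memory list, linear scan) are assumptions for illustration, since a production cache would use an index and eviction policy:

```python
import math

class SemanticCache:
    """Cache graph-query results keyed by query embedding. A new query
    reuses a cached result when its cosine similarity to a previous
    query's embedding exceeds the threshold."""

    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold
        self.entries: list[tuple[list[float], object]] = []

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def get(self, embedding):
        for cached_emb, result in self.entries:
            if self._cosine(embedding, cached_emb) > self.threshold:
                return result  # cache hit: skip the graph traversal
        return None  # cache miss: fall through to the graph

    def put(self, embedding, result):
        self.entries.append((embedding, result))

cache = SemanticCache()
cache.put([1.0, 0.0, 0.2], {"factory": "Assembly Plant Alpha"})
hit = cache.get([0.9, 0.05, 0.2])   # nearly identical direction: a hit
miss = cache.get([0.0, 1.0, 0.0])   # orthogonal query: goes to the graph
```

Because the key is an embedding rather than the raw string, paraphrases of the same question ("which factories are exposed to the Thailand flooding?") hit the same cached traversal.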
2. The “stale edge” problem
In vector databases, data is independent. In a graph, data is dependent. If Supplier A stops supplying Factory Y, but the edge remains in the graph, the RAG system will confidently hallucinate a relationship that no longer exists.
Mitigation: Graph relationships must have Time-To-Live (TTL) or be synced via Change Data Capture (CDC) pipelines from the source of truth (the ERP system).
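One way to sketch the TTL approach: stamp each relationship with a validity window at sync time and filter expired edges at query time. The `valid_until` property name and query shape are assumptions; a CDC pipeline from the ERP would refresh the timestamp on every sync:

```python
# Traversals ignore edges whose validity window has lapsed, so a
# supplier relationship that stops being synced silently ages out.
TTL_AWARE_TRAVERSAL = """
MATCH (event:RiskEvent)-[:AFFECTS]->(s:Supplier)
      -[r:SUPPLIES]->(c:Component)-[u:USED_BY]->(f:Factory)
WHERE r.valid_until > timestamp()
  AND u.valid_until > timestamp()
RETURN s.name AS supplier, f.name AS factory
"""
```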
Infrastructure decision framework
Should you adopt Graph RAG? Here is the framework we use at Cognee:
Use vector-only RAG if:
The corpus is flat (e.g., a chaotic Wiki or Slack dump).
Questions are broad (“How do I reset my VPN?”).
Latency < 200ms is a hard requirement.
Use graph-enhanced RAG if:
The domain is regulated (finance, healthcare).
“Explainability” is required (you need to show the traversal path).
The answer depends on multi-hop relationships (“Which indirect subsidiaries are affected?”).
Conclusion
Graph-enhanced RAG is not a replacement for vector search, but a necessary evolution for complex domains. By treating your infrastructure as a knowledge graph, you provide the LLM with the one thing it cannot hallucinate: the structural truth of your business.
Daulet Amirkhanov is a software engineer at UseBead.
Welcome to the VentureBeat community!
Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.
Cerebras Systems, the AI chipmaker aiming to rival Nvidia, is set to raise more than $5.5bn after pricing its US initial public offering (IPO) at $185 per share.
The pricing of the 30m class A common stock shares – set to begin trading today (14 May) as ‘CBRS’ on the Nasdaq Global Select Market – is significantly higher than was expected.
In early May, a $3.5bn raise through the sale of 28m shares at between $115 and $125 each was mooted. Last week, that estimate had grown to proceeds of up to $4.8bn at a range of $150-160 per share.
Reported media valuations for the company after the IPO sit at around $50bn.
Morgan Stanley, Citigroup, Barclays and UBS Investment Bank are acting as lead book-running managers for the offering, according to Cerebras. Mizuho and TD Cowen are acting as bookrunners.
Needham & Company, Craig-Hallum, Wedbush Securities, Rosenblatt, Academy Securities, Credit Agricole CIB, MUFG and First Citizens Capital Securities are acting as co-managers.
Cerebras claims that it builds the “fastest AI infrastructure in the world”, and CEO Andrew Feldman has gone on record to say that his hardware runs AI models multiple times faster than Nvidia’s.
Cerebras is behind WSE-3, touted by the company as the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than Nvidia’s B200.
Cerebras initially filed for IPO in September 2024, but drew criticism for a perceived heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42. The following October, it withdrew from a planned IPO without providing an official reason.
According to Bloomberg, the Cerebras IPO is the largest of 2026 so far, and drew orders for more than 20 times the number of shares available. Cerebras said it had granted the IPO underwriters a 30-day option to purchase up to an additional 4,500,000 shares.
Last month, Elon Musk’s SpaceX was reported to have confidentially filed for a US IPO, with estimates of how much this could raise put at between $50bn and $75bn, while the company’s valuation could be up to $1.75trn.
“We may have accidentally detected dark matter back in 2019,” writes ScienceAlert.
“What if instead of trying to see dark matter, scientists attempted to hear it instead?” asks Space.com: New research suggests dark matter could leave a tiny but discernible imprint in the cacophony of ripples in spacetime called “gravitational waves” that ring through the cosmos when two black holes slam together and merge… Fortunately, when it comes to detecting gravitational waves from colliding black holes, humanity’s instruments, such as LIGO (Laser Interferometer Gravitational-Wave Observatory), are getting more and more sensitive all the time…
Vicente and colleagues searched through data gathered by LIGO and its fellow gravitational wave detectors, KAGRA (Kamioka Gravitational Wave Detector) and Virgo, focusing on 28 of the clearest signals from merging black holes. Of these, 27 appeared to have come from mergers that occurred in the relative vacuum of space. One signal, however, GW190728, first heard on July 28, 2019, and the result of merging binary black holes with a combined mass of 20 times that of the sun and located an estimated 8 billion light-years away, seemed to carry the telltale trace of this merger occurring in a region of dense, “buttery” dark matter.
The team behind this research is quick to point out that this can’t be considered a positive detection of dark matter, but does say it gives us a hint at what to look for and thus where to direct follow-up investigations… “We know that dark matter is around us. It just has to be dense enough for us to see its effects,” said team leader Josu Aurrekoetxea, of the Massachusetts Institute of Technology (MIT) Department of Physics. “Black holes provide a mechanism to enhance this density, which we can now search for by analyzing the gravitational waves emitted when they merge.” They published their results this week in the journal Physical Review Letters.
A lot of humans are feeling very down on humanity these days. Maybe you’ve met them. Or maybe you’re one of them.
I’m talking about those who look around and say: Humans are destroying the planet — causing climate change, making other species go extinct. Soon enough we’ll be mucking up the cosmos, too — polluting it with still more space junk, colonizing the moon, even exporting data centers into the heavens. The world would be better off if we ourselves just go extinct!
One reader recently exemplified this rising anti-humanism by writing in to my philosophical advice column, Your Mileage May Vary, and telling me bluntly: “I’m disgusted to be a human.” I responded by reminding them that hating on humanity is neither a new nor an enlightened position. It lets us off the hook too easily, because it expects nothing of us.
But I’m also aware that this distaste for humanity isn’t only motivating old-school misanthropy these days.
It’s also motivating transhumanism, the movement that says we should use tech to proactively evolve our species into Homo sapiens 2.0. Transhumanists — who run the gamut from Silicon Valley tech bros to academic philosophers — do want to keep some version of humanity going, but definitely not running on the current hardware. They imagine us with chips in our brains, or with AI telling us how to make moral decisions more objectively, or with digitally uploaded minds that live forever in the cloud. All of this will someday, they assert, usher us into a utopian future where we transcend suffering and become as perfect and immortal as gods.
To better understand why a distaste for humanity is driving some people into the arms of transhumanism these days, I reached out to Shannon Vallor, a philosopher of technology at the University of Edinburgh and author of The AI Mirror. Vallor is a devoted humanist — but not a naive one. To her, being pro-human doesn’t mean being anti-technology. We talked about how classical humanism has failed to offer a compelling vision for the 21st century and beyond — and how we can still do better. Our conversation, edited for length and clarity, is below.
What’s driving transhumanism to become more popular these days?
We’re living in a world that digital technologies and social media have made more fragmented and alienating. We are busier, more tired, more lonely, more uncertain than ever about the future and what it holds. So we’re at a low point in our ability to place faith in our fellow humans. And instead of looking at the deeper causes of that — the breakdown of the social fabric and of institutions and of local networks of care — there is an attempt to normalize and naturalize anti-humanism.
It’s an attempt to treat it not as a symptom of some disease or malaise in society — which is how I see it — but rather to treat it as a new, more enlightened frame of mind. To say: If you’re a humanist, you’re somehow stuck in the past, you have this overly romantic attachment to humans, you’re committing a fallacy of exceptionalism.
And there is a history of humanism being inappropriately exceptionalist — for example, imagining that other living things can’t have feelings or intelligence or moral standing. So as we’ve surpassed those errors, it’s very easy to think: Oh, you just go one step further and decide that humans don’t really need to be part of the story, or they don’t need to be writing the story. And if you quiver or flinch at the notion of machines writing the story of the future, that’s just your parochial attachment.
Right, this is the accusation of “speciesism” that we hear a lot these days.
Exactly. At a very superficial intellectual level, this is all very plausible and appealing and seems very enlightened, right? But it’s rooted in a deep misconception of what it is to be human.
The reason why it’s mistaken for humans to place themselves at the center of all value and to see other living beings as mere tools has nothing to do with humans somehow being unimportant, or humans somehow being insignificant in the broad story. It’s rather a failure to understand that to be human is to be dependent upon this much bigger living system, and our value is inseparable and intertwined with the value of other living things. It’s not that humans are something to be cast aside.
Do you think the classical humanism that we’ve inherited from the Renaissance and the Enlightenment era is enough to meet the current moment? Or do we need a new humanism?
No. I do think we need a new humanism. And one of the reasons, of course, is because classical humanism, in addition to suffering from the flaws of speciesism, had a vision of the human that was itself heavily gendered and racialized. It was very much an ideal that is both unattainable and undesirable in its naive form: the idea of the individual, rational agent that is entirely self-determining and surpassing the more basic networks of care and concern that hold communities together. This Enlightenment version of humanism, which carried with it many of the flaws of European Enlightenment thinking more broadly — that’s not the kind of humanism that’s going to carry us into a sustainable future.
The most common pro-human response to AI that I see nowadays is this style of humanism that tries to say there are certain fixed traits that make humans unique, and that tries to locate value only in humans as they currently exist. It says: Let’s use tech to alleviate problems like disease but not try to augment the species.
To me, that feels insufficient as a guide. Because we’re all already transhuman in some sense, right? “Human” has never been a static category. Homo sapiens has always been evolving and augmenting itself, with everything from meditation and fasting to eyeglasses and antidepressants. A humanism that refuses to recognize that feels like it doesn’t offer a compelling vision for the future.
That’s the naive version of humanism. It’s the idea that there’s this blueprint for what a human is and that somehow technology, or any things that change us, take us away from that blueprint, when in fact we’ve been changing ourselves with language, with tools, with architecture, with culture, from the moment we climbed down from the trees.
I wrote about this in The AI Mirror, where I talked about the existentialist Jose Ortega y Gasset’s notion of “autofabrication” [literally, self-making]. From the beginning, humans have had to invent and reinvent themselves anew again and again. If there is anything unique about the human, it’s that as far as we know there’s no other creature that has to get up in the morning and decide if it’s going to live differently than it did the day before, or if it’s going to maintain the commitments and promises it’s made to itself or others.
This kind of identity construction is something that our cognitive makeup has given us, both as a blessing and a bit of a curse. It’s the responsibility to choose — and to not fall back on this idea that there’s a blueprint for what a human is supposed to be and that we’re just supposed to follow that blueprint.
I think people really crave a positive vision for the future that they can get behind. To you, what is the positive, humanist-but-not-naive-humanist vision?
Sometimes I think about this demand for a positive vision and I think about how unfair and unreasonable that demand is when the mere homeostasis of life on this planet, and of human life, is fragile. For a being whose future is threatened, survival is a positive future! Maintaining the strength and resilience of our form of life is a victory. And in a way, I think there’s a danger in the desire to immediately leap past that.
We have to look at the fundamental structural causes of the scarcity we face, and see the positive, exciting, mobilizing, motivating work as addressing those deficiencies. We should be able to be excited about doing that work.
I have two simultaneous reactions to this. The first is: Yes, we should be able to get excited about that. And I think if we had a cultural narrative that taught us that just the dynamism of being alive is itself the gift, we’d be better placed to think of sustainability as the thing to treasure.
My second reaction is: But people have this persistent hunger for a story about how we can overcome suffering and make things better than ever before — a transcendence narrative!
And that’s okay. What I want to say is, if you meet people’s basic needs, both as individuals and in community, they will naturally generate the instruments of transcendence.
When you give people the ability to be free from fear and free from imminent threat, and you get them out of this feeling that they’re in a lifeboat situation — that’s when people’s creative energy really kicks in.
I’m someone who loves animals — I’m a big birder, I’m obsessed with snorkeling, I just love exploring different kinds of minds. So I could feel excited about a future where we have a multitude of diverse intelligences — animals, conscious AIs, augmented humans, etc. Do you think part of a positive vision for the future could be an expanded space of different kinds of minds? Does that excite you at all?
Yeah! Look, I’m a giant sci-fi nerd. I spent my whole childhood living in imaginary worlds with other kinds of minds: talking animals, various hybrid human-animal creations, robots, artificial intelligences. There is nothing about my humanism that blocks a future where humans share the planet with many more kinds of minds than we have today.
What I resent is the exploitation of that excitement by tech companies to sell and impose harmful, unsafe technologies that pretend to be minds, that are disguised as minds. Claude is not [a mind]. Claude is a language model built to roleplay that.
I have no assurance that it’s possible to create a machine mind. But I also have no principled reason to think it’s impossible. And the vision that you described sounds wonderful. The problem is that it’s very easy for the AI industry to say: Ah, but that’s what we’re already giving you!
You said in a talk last year that you think maybe we should take a break from a certain kind of philosophizing about humanity’s future. But looking around at the political landscape, that feels like a luxury we can’t afford. The tech broligarchs have links to the authoritarian right. Some of them want to escape the control of democratic governments, so they’re trying to create their own sovereign colonies — whether that’s space colonies or “network states.” Given their influence, taking a break from trying to steer the future feels like capitulation at a time when capitulation is very dangerous.
I hear you. It does seem very dangerous to say that there shouldn’t be some kind of counter-philosophical-movement opposing that. But when I was saying that maybe we need to pause, what I was speaking of is the kinds of philosophical preoccupations that jump ahead of the obvious needs of the moment and serve as a perpetual distraction from those needs.
There is a certain kind of philosophy that I think we need to perhaps put on hold: It’s the philosophy of forget the present, forget the problems of the moment, think bigger, think about the universal point of view.
What I’m suggesting is that we need to ground ourselves in an ethos of sustainability, of care, of solidarity and mutual aid and repair of the systems that we need in order to have a future. That can be its own philosophy.
But it’s not a utopian kind of move. Utopia is very often used as an instrument of authoritarianism and it’s used as a way to rip people away from their present commitments and needs, and to distract them with a dream that relieves the pressure to address our current circumstances. I think that’s the opposite of what we need right now.
Yeah, this is the classic point made about Christendom — how it tells us: Just focus on getting to a good afterlife, don’t expect anything good from your life on Earth. Malcolm X called it “pie in the sky and heaven in the hereafter.” It’s one of the ways I often feel like transhumanism is weirdly doing Christendom’s bidding.
Oh absolutely, 100 percent. It’s strangely regressive, right? It’s bringing us back precisely to that worldview: Don’t worry about the feudal circumstances that you are presently in, because that’s going to be a distant memory soon, when the world of infinite abundance is delivered unto you. That story was effective for millennia. But it was one that we ultimately managed to break ourselves free from.
Right, and that was one of the genuinely great innovations of humanism: Let’s not just put all our faith in the beautiful hereafter, but let’s actually care about human lives here on Earth, now.
Apps for budgeting and personal finance do a good job of tracking your money as you earn and spend it. Some also have excellent debt calculators that help you figure out how to pay off your debts.
Each debt calculator is a little different. Some suggest a specific method for paying down debt, while others are simulators that let you see how your total amount paid will decrease if you increase your monthly payment.
Here are a few useful calculators and some guidance about what makes them different.
A Straightforward Plan: Bankrate
Bankrate’s free debt payoff calculator gives you a timetable for paying off each of your debts. You enter as many debts as you want to include, their interest rates, total loan amounts, and other details. You also enter any new income you expect to receive, such as an annual salary increase or windfall, and the amount that you can put toward your debts. The calculator then generates one payment table for each debt showing how much to pay each month until the debt is cleared.
Bankrate prioritizes paying off the debt with the highest interest rate first. Once your first debt is paid off, the money you would have put toward it is diverted to your other monthly payments. In other words, as you eliminate debts, the monthly payments on your other debts increase until they, too, are paid off.
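Bankrate doesn’t publish its code, but the rollover logic described above — pay minimums on everything, send extra money to the highest-rate debt, and redirect each freed-up payment to the next debt — can be sketched in a few lines. The debts and budget below are hypothetical examples, not Bankrate’s figures.

```python
# Sketch of the highest-interest-first ("avalanche") rollover logic described
# above. Not Bankrate's actual implementation; numbers are hypothetical.

def avalanche_months(debts, budget):
    """debts: list of (balance, annual_rate, min_payment) tuples.
    budget: total amount available for debt payments each month.
    Returns the number of months until every debt is cleared."""
    debts = [list(d) for d in debts]
    months = 0
    while any(bal > 0 for bal, _, _ in debts):
        months += 1
        # Accrue one month of interest on every open balance.
        for d in debts:
            if d[0] > 0:
                d[0] += d[0] * d[1] / 12
        remaining = budget
        # Cover minimum payments first.
        for d in debts:
            if d[0] > 0:
                pay = min(d[2], d[0], remaining)
                d[0] -= pay
                remaining -= pay
        # Whatever is left goes to the open debt with the highest rate;
        # once that debt is gone, the money rolls to the next-highest rate.
        for d in sorted(debts, key=lambda d: -d[1]):
            if d[0] > 0 and remaining > 0:
                pay = min(remaining, d[0])
                d[0] -= pay
                remaining -= pay
    return months

# Example: a $3,000 card at 22% APR and a $5,000 loan at 7%, $400/month total.
print(avalanche_months([(3000, 0.22, 50), (5000, 0.07, 100)], 400))
```

The same loop with the sort key flipped to lowest balance first would give the “snowball” method instead.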
Who should use it? Bankrate’s calculator works for people who have multiple debts and whose total monthly minimum payments are within financial reach. If that’s you, then you’ll get a crystal-clear plan, with a timeline, for getting rid of all your debts.
Where it comes up short. This calculator assumes that paying off your debts by clearing the one with the highest interest rate first is in your best interest. That’s not true for everyone. You might have other options, such as consolidating credit card debt to a new card with a 0 percent introductory rate or filing for bankruptcy. Bankrate also doesn’t take into account other personal finance concerns, like other uses of monthly funds that free up once you pay off your first debt—Bankrate tells you to put that money toward your next-highest-interest debt. You might be better off putting it toward retirement savings or an emergency fund.
Big-Picture Guidance: NerdWallet
NerdWallet’s free debt load calculator determines your debt load as a percentage of your income. The resulting debt load is classified as smaller (36 percent or less), larger (37 to 42 percent), or overwhelming (43 percent or more). Based on the outcome, NerdWallet suggests a method for eliminating your debt, which you read about in an educational article below the results.
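The classification itself is a simple ratio check. The sketch below uses the percentage bands described above (treating everything from 36 up to 43 percent as the “larger” band to close the gap between 36 and 37); the income and payment figures are hypothetical.

```python
# Debt-load classification as described above: monthly debt payments as a
# percentage of gross monthly income, bucketed into three bands.
# Not NerdWallet's code; thresholds are taken from the article's description.

def debt_load(monthly_debt_payments, gross_monthly_income):
    """Return (ratio_percent, category) for the given monthly figures."""
    ratio = monthly_debt_payments / gross_monthly_income * 100
    if ratio < 36:
        category = "smaller"
    elif ratio < 43:
        category = "larger"
    else:
        category = "overwhelming"
    return round(ratio, 1), category

# Example: $1,500 in monthly debt payments on $5,000 gross monthly income.
print(debt_load(1500, 5000))  # (30.0, 'smaller')
```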
Who should use it? This calculator helps you get a big-picture sense of your debt. If you have a lot of debt, it’s useful for ruling out (or ruling in) the option of declaring bankruptcy.
Where it comes up short. It’s not great at analyzing the finer details of your debt. For example, in the setup, there’s no line item for student loans or a mortgage, much less the exact interest rate you pay on loans. The results are a rough guide rather than a personalized strategy.
Automated Inputs: WalletHub
When you sign up for WalletHub (free) and connect your financial accounts, the app pulls real information about how much money you owe and your payment history. Its debt payoff plan is a calculator that lets you play with the numbers to see what would happen if you increased your monthly payment. How much faster can you clear the debt? How much will you save in interest? You can quickly see the difference between increasing your monthly payment by, say, $50 versus $150.
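The what-if comparison WalletHub performs is standard amortization arithmetic: simulate the balance month by month at a given payment and count the months and interest. The sketch below is a generic version of that calculation, not WalletHub’s implementation, and the balance, rate, and payments are hypothetical.

```python
# Generic payoff what-if simulation, in the spirit of the calculator described
# above: how do payoff time and total interest change as the payment rises?

def payoff(balance, annual_rate, monthly_payment):
    """Simulate month-by-month payoff; return (months, total_interest)."""
    r = annual_rate / 12
    if monthly_payment <= balance * r:
        raise ValueError("payment never outpaces interest; debt won't shrink")
    months, interest_paid = 0, 0.0
    while balance > 0:
        months += 1
        interest = balance * r
        interest_paid += interest
        # The final month's payment is capped at whatever is still owed.
        balance = balance + interest - min(monthly_payment, balance + interest)
    return months, round(interest_paid, 2)

# $6,000 at 20% APR: compare a $200/month plan against $350/month.
for payment in (200, 350):
    months, interest = payoff(6000, 0.20, payment)
    print(f"${payment}/mo -> {months} months, ${interest} in interest")
```

Running both scenarios side by side makes the tradeoff concrete: the larger payment clears the debt in roughly half the time and pays far less interest overall.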
Who should use it? This calculator is for WalletHub users who have connected their financial accounts. It’s most useful for people who can afford to pay more than the monthly minimum on their debts.
Lexus has spent more than three decades earning the reliability that most luxury brands would love to borrow. From the original LS 400 that humbled German sedans, to early RX and ES models, the brand has conditioned buyers to trust any Lexus engine almost by default, and most of the time that trust is warranted.
But no automaker bats a thousand. Hidden in Lexus’ 35-year engine catalog are a few designs that don’t quite live up to the badge. The five engines ahead span nearly every era of the brand and together power hundreds of thousands of vehicles still on the road. They include a twin-turbo V6 that can stall when stray machining debris wipes out its bearings, another V6 that became known for turning its oil into sludge, the hybrid four-cylinder that powered the company’s first hybrid-only car and burned through oil at an alarming rate, a compact direct-injection V6 that misfires when carbon clogs its intake valves, and an otherwise reliable Lexus V8 with a fire-risk recall.
Have all of them been fixed by recalls, updated parts, or warranty programs? In most cases, yes. Does that mean every example you’ll find on a used car lot will be bad? Not really. But if you’re shopping for a used LX 600, IS 250, ES 300, RX 300, HS 250h, GX 460, or LS 460, the engine under the hood deserves more attention than the badge on the grille.
1. 1MZ-FE 3.0L V6
When Toyota introduced the all-aluminum 1MZ-FE in the mid-1990s, it looked like the perfect luxury V6. Aluminum saved weight over the iron 3VZ it replaced, twin overhead cams kept it smooth to 5,800 rpm, and its broad torque curve gave the ES 300 and first-gen RX 300 the effortless feel buyers expected from a Lexus. Later updates even added variable valve timing, helping the engine meet low-emissions targets without giving up power. The problem is that the 1MZ-FE also became one of the main engines tied to Toyota and Lexus’s oil-sludge controversy.
It started with reports of thick, oily sludge building up under the valve covers, and it quickly became one of Toyota’s most notorious reliability issues. Engine oil is supposed to stay thin enough to move quickly through narrow passages, carry heat away from hot spots, and keep bearings and cam surfaces from grinding against each other.
In the 1MZ-FE, however, degraded oil could thicken into sticky deposits instead of flowing cleanly through the engine, and it showed up as warning lights, blue smoke at startup, burning oil, valve knock, sudden stalling, and no-start conditions. In the worst cases, the engine sludge problem led to complete engine failure, with quotes for thousands of dollars in major internal work involving the short block, heads, valve covers, and cams.
The problem was widespread enough to pull in the 1MZ-FE-powered Lexus ES 300 and RX 300, and Toyota addressed it through a Special Policy adjustment rather than a formal recall; a later class-action settlement ultimately covered about 3.5 million 1997-2002 Toyota and Lexus vehicles.
2. 4GR-FSE 2.5L direct-injection V6
Toyota’s GR family makes some of the most respected V6s in modern motoring, but the 4GR-FSE is the odd child. Lexus dropped it into the second-generation IS 250 (2006-2010 sedan, 2010 IS 250C) as a downsized alternative to the 3.5-liter IS 350. Technically, it looked smart: a modern, high-compression GR-family V6 with dual VVT-i and, critically, D-4 direct fuel injection. Lexus claims the direct-injection system helped cool the cylinders, allowing the 4GR-FSE to run at higher compression and extract more efficiency from a small luxury-sedan V6.
The problem is that gasoline direct injection engines also remove one useful side effect of port injection. In a port-injected engine, fuel is sprayed upstream of the intake valve, which helps “wash” the backs of the valves as the engine runs and makes it harder for oily vapors and deposits to stick. In the 4GR-FSE, fuel is injected directly into the cylinder, so the intake valves don’t get that natural cleaning effect. Without it, carbon deposits are more likely to build up on the intake side over time. Once carbon deposits built up, the 4GR-FSE could show check-engine and VSC lights, rough cold starts, shaky idle, random cylinder misfires, sputtering at stops, sudden loss of power, and occasional stalling when rpm dropped. Some cases involved repeat top-engine cleanings, piston/ring work, or complete engine replacement.
Because Lexus treated it as a drivability/emissions issue — not a safety defect — it was handled with service bulletins and a Customer Support Program instead of a recall. That coverage ran for nine years, but it’s expired now, which means today’s used-IS buyers pay out of pocket for cleanings and related repairs or sidestep the 4GR altogether and buy the port-and-direct-injected IS 350 instead.
3. 1UR-FE/1UR-FSE 4.6L V8
When Lexus replaced its long-running 4.3L LS V8 and 4.7L GX V8 engines, the 4.6L 1UR looked like the perfect upgrade. The 1UR-FSE arrived in the LS 460 as a newly developed 4.6-liter V8, while the 1UR-FE followed in the 2010 GX 460 as a stronger, more efficient replacement for the old 4.7-liter V8. Early 1UR-era cars, however, had a number of problems, and the one that drew the most attention was a valve-spring defect.
Toyota found that some valve springs in certain 2007-2008 LS 460/LS 460L and 2008 GS 460 V8 engines could develop small cracks and eventually break. Once a valve spring fails, the engine can act like it’s starving for fuel: sluggish throttle response, sudden power loss, heavy shaking and misfires, and in the worst cases, a stall with no restart.
Another issue involved the fuel system. On some 1UR-powered Lexus models, the gasket sealing the fuel-pressure sensor to the fuel delivery pipe could lose its seal over time, letting fuel leak into the engine bay, sometimes with little warning beyond a fuel smell, which obviously raises the risk of a fire. On the SUV side, some GX 460s had a secondary-air-injection fault that could trigger the check-engine light and put the truck into reduced-power/limp mode until the pump or valves were replaced.
Toyota addressed the broken springs with a safety recall, replaced the fuel-sensor gasket under a separate recall, and later issued a 10-year GX 460 Warranty Enhancement covering air-injection pump failures and switching valves.
4. 2AZ-FXE 2.4L hybrid four-cylinder
The 2AZ-FXE was the mechanical heart of the Lexus HS 250h, which arrived for 2010 as the world’s first hybrid-only luxury vehicle and the first Lexus to pair a four-cylinder gas engine with Lexus Hybrid Drive. The engine came from Toyota’s ubiquitous 2AZ family, whose conventional 2AZ-FE and hybrid 2AZ-FXE variants powered countless Camrys, RAV4s, and Scion tCs before doing duty in the HS 250h’s 2010-2012 run. It was a very different kind of Lexus engine from the brand’s well-known V6s: a 2.4-liter tuned to prioritize fuel economy above everything else. Unfortunately, fuel economy wasn’t the only thing it became known for; oil consumption became the real problem.
In a healthy engine, piston rings are supposed to do two jobs at once: keep combustion pressure above the piston where it belongs and scrape excess oil off the cylinder walls so it doesn’t get pulled into the combustion chamber. When the oil control side of that job starts failing, the engine can begin consuming oil so gradually that a driver may not notice until the level has fallen much farther than it should. Once oil levels drop too far, bearings, cylinder walls, and the valvetrain are all working with less protection than they were designed to have.
There was no recall for the HS 250h; Lexus addressed excessive oil consumption with a Warranty Enhancement Program for certain 2010-2012 HS 250h vehicles, which called for updated piston assemblies. The HS 250h itself was a short-lived Lexus experiment, effectively discontinued in North America after 2012 and credited with only about 67,000 sales globally by 2016. Even Toyota moved on with the 2012 Camry, switching to a new 2.5-liter hybrid engine in place of the 2.4.
5. V35A-FTS 3.4L twin-turbo V6
The V35A-FTS was Lexus’s and Toyota’s clean break from the V8s that powered their old-school trucks and body-on-frame flagship SUVs. Instead of relying on displacement, the 3.4-liter twin turbo V6 uses boost to do the heavy lifting, which is why the LX 600 can make 409 horsepower and 479 lb-ft of torque from two fewer cylinders than the LX 570 before it. The tradeoff is that such boosted engines deliver their strongest shoves early, right in the low-mid rpm range where heavy SUVs and pickups spend most of their time. That also puts repeated stress through the crankshaft, which makes the bottom end especially important.
That starts with the crankshaft main bearings, which are not glamorous parts but keep the rotating assembly alive. Every time combustion pushes a piston down, that force travels through the connecting rod into the crankshaft. And the crank only survives because it rides on main bearings with a thin, pressurized oil layer acting as a lubricant between the metal surfaces.
In the V35A-FTS’s case, machining debris was left inside some engines during manufacturing. Those tiny metal particles can circulate with the oil, reach the crankshaft main bearings, and get trapped right where the crank is supposed to be riding on a clean, pressurized film. If the debris sticks and the engine keeps seeing high loads over time, the bearings can fail, showing up as knocking, rough running, a no-start, or even a stall. Once it gets that far, the result is complete engine failure.
The V35A-FTS is used in the 2022-present Toyota Tundra, 2022-present Lexus LX 600, and 2024-present Lexus GX 550. The machining-debris issue was covered by a recall for certain 2022-2024 Tundra/LX and 2024 GX vehicles (126,691 in the US).
How we chose these engines
Lexus is one of the most reliable luxury brands in the world, which is why this list needed a careful filter, as reliability should not be treated like a free pass. We didn’t choose engines just because they had a few angry owner complaints, high repair bills, or one-off horror stories. A Lexus engine only made the cut if the problem had a larger paper trail behind it, such as a recall, service bulletin, warranty extension, or other official action.
That doesn’t mean every vehicle with one of these engines is doomed. In fact, the opposite is true. Plenty of owners continue to report long, uneventful runs with some of the powertrains on this list, and many affected examples have run perfectly fine for years after being repaired.
You’ve probably seen an ad for the rock-tuned Heavys H1H on the internet, and true to their word, they are in fact tuned for guitar-heavy rock. Here’s how they compare to Apple’s AirPods Max.
AirPods Max 2 [left], Heavys H1H [right]
As a major mainstream tech company, Apple has an incentive to design for as many people as possible. Its products may not have everything a particular person wants in an item, but they will still be close enough to be acceptable for the majority of customers.
JLab JBuds Open Wireless: Two-minute review
JLab is well-known for its affordable headphones and earbuds, but this time the brand is branching out into something more unusual.
The JBuds Open Wireless are over-ear headphones designed to allow you to hear the world around you. Yes, everyone is doing that right now, just take a look at our best open earbuds guide — but while most open-ear options are earbuds, JLab has made an over-ear version. It promises to deliver the same open benefits but from a bigger — and for some people, more comfortable — form factor.
Now, open-back headphones are nothing new. They’re actually a firm favorite among audiophiles. That’s because venting the back of the driver housing stops sound from bouncing back onto the driver itself, which gives you a cleaner and more accurate sound with a wider, more natural soundstage.
However, the JBuds Open Wireless aren’t that. Sure, they look similar, but the “open” part here means something different. The earcups don’t create a strong seal against your head, and the cups can be covered with grilles or left completely open, so ambient sound flows freely in alongside your music.
So it’s not open-back as an audio engineering choice, but more open-ear as a lifestyle one, where the goal isn’t better sound quality but a mix of sound and awareness of what’s happening around you.
Interestingly, this design might seem new, but it’s been done before several times. One of my favorite examples is from the late ’90s, when Sony released the MDR-F1: not identical, but a similar open-air headphone that people referred to as “earspeakers” at the time. A few other brands have done similar things since, such as the ONE Wireless Open-Ear Headphones from nwm.
But they’re still unusual right now, and I can’t work out if they’re uncommon because they’re about to appeal to everyone and we’ll see more soon, or because the use case is so specific that plenty of people will love the idea but find it falls apart in practice. Unfortunately, I’m in the second camp.
Don’t get me wrong, there’s a lot to like here. The design is genuinely cool, with removable grilles and a comfortable all-day fit thanks to some memory foam padding in the cups and band. The sound also delivers more bass than I’d expect from an open design. And the ambient awareness really works. In quiet environments, it’s really nice to listen with them.
But add just a bit of background chatter or noise and the openness becomes the problem. There are just too many competing sounds and the experience collapses. I know what you’re thinking. Isn’t that the whole point of open-ear designs? Sure, but if the music you’ve bought them to listen to becomes unlistenable, then we’ve got a problem.
At under $100/£100, the risk still feels low. But I think the use case is narrow, and most people will know within a day whether these are for them.
JLab JBuds Open Wireless review: Price and release date
Released in late 2025
Priced at $99.99 / £99.99 / AU$199.99
After being unveiled at IFA in September 2025, the JLab JBuds Open Wireless headphones launched in some markets in late 2025, with the rest following in early 2026.
You can buy the JBuds Open for $99.99 / £99.99 / AU$199.99. That price means they sit somewhere between the higher end of budget and mid-range.
Now this is where I’d usually give you context of how they compare to similar products, but it’s tricky to compare these headphones directly to anything else right now. They give you the benefits of open-ear styles, but those are mostly buds, and these still look and feel like over-ears.
In that case, let’s look at the open-ear buds you can get right now. Like the Shokz OpenFit 2+, our current top pick, which are $179.95 / £169. Though you can get much more affordable open buds that still sound good, like the Earfun Clip 2 with a clip-on design that’ll cost you $79.99 / £69.99 (about AU$120).
In terms of over-ears, one of our favorite budget picks is the OneOdio Focus A6 over-ears at $55 / £55 / AU$112, which we think sound fantastic for the price. Though at that higher end of the budget range you’ve got plenty of choice, like the very highly rated 1More Sonoflow Pro HQ51 at $89 / £99 / AU$130.
Although there’s nothing to strictly compare them to, the price reflects what you’re getting. Which is over-ear comfort and build with open-ear awareness in a form factor that doesn’t really exist elsewhere. For under $100 / £100, that does seem like a fair ask. But whether it’s worth it comes down entirely to your preferences, which we’ll get into.
JLab JBuds Open Wireless review: Specs
Drivers: 35mm and 12mm coaxial dynamic drivers
Active noise cancellation: No
Battery life: Up to 24 hours
Weight: 245g
Connectivity: Bluetooth 6.0, USB-C
Frequency response: 20Hz-20kHz
Waterproofing: None
JLab JBuds Open Wireless review: Features
Simple app with essentials
Multipoint connectivity
24 hours of battery life (well, nearly)
The JLab JBuds Open aren’t overflowing with features, but you have everything you need for the price here.
The app is basic, but that’s not a criticism. I found it easy to use and it covers the essentials well. You can customize the manual buttons on the right earcup, check battery life, set an interval timer, toggle spatial audio on/off, and switch between music and movie modes.
There’s also a 10-band custom EQ alongside three presets, which I enjoyed playing with to try to address some of the issues with the sound (more on that later).
The headphones have dual coaxial drivers onboard, a 35mm and a 12mm unit, and Bluetooth 6.0 connectivity with support for the SBC and AAC codecs. There are no wireless hi-res audio options, but a USB-C cable is included if you want a wired connection.
Multipoint connectivity to two devices worked seamlessly during my testing, switching cleanly between my laptop and phone while I was working.
Battery life is rated at 24 hours, though in some of JLab’s specs it says to expect 18 hours. In my testing I got around 20 hours, with a full recharge taking roughly 2.5 to 3 hours.
That’s not bad, but it does lag behind other over-ear headphones. The Sony WH-1000XM6 manages 30 hours, and the cheaper 1More Sonoflow Pro HQ51 headphones deliver an extraordinary 65 hours with ANC on. But, to be fair, it’s much harder to fit batteries in when you’ve removed all the physical space from your headphones…
Measured against open-ear buds, this figure is impressive: the Shokz OpenFit 2+ manages only 11 hours, though that’s expected given the size difference.
JLab JBuds Open Wireless review: Sound quality
Better bass than most open options but sub-bass is lacking
Wide soundstage suits big, orchestral tracks
Sound leakage is an issue
With the JLab JBuds Open headphones, you can obviously hear your surroundings — that’s the whole point. But you’re going to want to bear that in mind, because these sound really open. Like, really open.
On a long quiet walk along the canal, it was lovely. I had music playing, I could hear bike bells and birds and I felt very happy. But walking through the city was a different experience entirely.
What I was hearing from the headphones was competing for my attention with a fire alarm, other music, and general chatter. There’s open-ear, which I’ve tried many times now from different brands, and then there’s this.
And some people might genuinely want this. If ambient awareness always trumps music for you, and competing sounds don’t overwhelm you, these could be ideal. That’s subjective and worth acknowledging, but it wasn’t my experience.
The reason it’s so pronounced is physical: the drivers sit further from your ear than in other open options, outside the ear rather than in the concha, where open buds usually rest. Sealed over-ears obviously don’t have the problem at all.
Here it’s essentially like holding a speaker close to your ear. I recommend testing with the grilles in and out, because they do reduce how much sound leaks in, and they’re very easy to remove.
With dual coaxial 35mm and 12mm drivers, they’re working with bigger hardware than most open-ear buds, and you can really tell when you listen. There’s genuine presence in the low end, with far more bass and substance than you’d typically expect from a pair of open-ear buds.
Vocals come through clearly, and the wide soundstage is a real strength here. I spent a lot of time listening to Jóhann Jóhannsson’s Arrival score and instrument separation was impressive. Big, cinematic or orchestral tracks have a sense of space that genuinely suits the open design.
Moving on to the Rolling Stones’ Sympathy for the Devil, the track’s swagger and drive translated well. It felt wide and punchy, and instruments were given plenty of room.
But there are weaknesses. Sub-bass is mostly absent. Hi-hats and cymbals also had a tendency to tip into shrill territory, and kick drums have a sharp, thin quality rather than a satisfying thud.
The overall character skews mid-heavy, and you’ll find yourself pushing the volume higher than expected to get a sense of immersion.
At times it felt a bit like hearing your phone playing in front of you: present and clear enough, but thin and lacking warmth. The bass boost EQ setting helps on the right tracks and is worth trialling, but it can’t resolve the main limitation here, which is that there’s no seal to trap and focus the sound.
Calls were fine. With open ears, conversations feel more natural to me, and the noise-cancelling mic picked up my voice well. It lacked some clarity at times, but was fine for most purposes.
Sound leakage from the headphones is also worth flagging. I recorded audio on my phone while wearing them and could make out the track even at a moderate volume with the grilles on. If you remove them, it gets noticeably worse.
Push the volume up, which you will find yourself doing, and it gets worse still. So there’s a sort of irony here, which is that the open design means you need more volume to feel the music, but more volume means more leakage.
Ambient noise outside will mask the leakage, so you’ll get away with it way more in public than you might expect. But a quiet office or commute is going to be a different story.
JLab JBuds Open Wireless review: Design
A bold design that may divide people
Genuinely comfortable for long wear
Removable grilles change the look and the sound
The JLab JBuds Open headphones have a very unusual design and I think they’ll divide people. Some will find them incredibly cool and a bit sci-fi looking, whereas others just won’t get on with them.
They’re over-ear headphones with a build that feels substantial, though a little cheaper and more plasticky than something like the Bose QuietComfort Headphones, my all-time favorite over-ears. That’s to be expected at this lower price.
Both the earcups and headband are padded with memory foam and I found it genuinely comfortable for long sessions. The clamping force was occasionally a little much when I was working indoors, but on runs outside it actually helped and kept them feeling secure.
At 245g, they’re light, and you can shave a couple of grams off by removing the metal grilles. The earcups have a sort of wheel-spoke pattern with a grille sitting over it. And if you twist the cup, the grille pops out cleanly, opening things up even more, both in how these headphones look and how they sound.
I noticed it really changes the look of them, and noticeably affects how much ambient sound comes through. It’s a small but genuinely fun customization option.
That said, they’re bulky. They stick out from your head considerably more than most modern over-ears, and while the cups pivot flat, they don’t fold inward, which makes them less practical to carry and store than many rivals.
The included carry case is a nice touch. It’s a similar concept to the AirPods Max case but it’s more practical with more coverage of the headphones. The matte, brushed finish picks up marks easily though.
You control the JBuds Open with physical buttons on the side of the right earcup. I personally prefer physical buttons over touch controls, and found these easy to locate and use on the move, and they’re also customizable via the app.
The headphones come in black, which is the pair I tested here, or Cloud, which is a light gray with gold accents that’s a nice option if you’re sick of all black tech.
There’s no IP rating here, which on paper suggests avoiding sweaty workouts when you’re wearing them. But given their open design means far more airflow than a sealed pair, I’d argue they’re pretty workout-friendly as long as you’re mindful about sweat and splashes.
I tested them on several runs without any problems and actually really enjoyed the ambient awareness and added airflow as I got warmer and more tired. But I maybe wouldn’t risk them in the rain.
JLab JBuds Open Wireless review: Value
Good value compared to open-ear buds
But whether it’s worth it depends on your feelings about ambient sound
These are good value compared to other over-ear headphones and even some open-ear options. You can pick up open-ear buds for well under $100 / £100 these days, but top performers like the Shokz OpenFit 2+ cost nearly double at $179 / £169. So if you specifically want open-ear audio on a budget, they’re worth considering.
But really, whether these are worth it has less to do with price and more to do with your lifestyle. Under $100 / £100 feels fair for what’s here. But if you’re going to struggle to hear your music in most environments or find the bulk doesn’t suit you, the price won’t save them.
For the right person, though, someone who prioritizes awareness, loves the over-ear form factor, and isn’t chasing audiophile sound, these feel essentially made to order.
Should I buy the JLab JBuds Open Wireless?
JLab JBuds Open Wireless score card

Features: The app is easy to use, and it’s nice to get multipoint connectivity and a USB-C option. 3.5 / 5
Sound quality: Good for an open design, especially the bass, but it’s hard to hear your music anywhere other than a quiet environment, and sound leaks out, too. 3.5 / 5
Design: Comfortable enough for all-day listening thanks to the memory foam. The look is chunky and divisive, but I like that you can switch the grilles in and out. 4 / 5
Value: Good sound, features, and design for the money, but whether they’re good value for you is an entirely subjective call. 3.5 / 5
JLab JBuds Open Wireless review: Also consider
JLab JBuds Open Wireless: 35mm and 12mm coaxial dynamic drivers; no ANC; up to 24 hours of battery; 245g; Bluetooth 6.0 and USB-C; no waterproofing.
1More Sonoflow Pro HQ51: 40mm dynamic driver; ANC; 60 hours of battery with ANC on, 100 hours with it off; 246g; Bluetooth 5.4 and 3.5mm; no waterproofing.
Earfun Clip 2: 12mm dual-magnetic titanium composite driver; no ANC; 11 hours of battery; 5.5g; Bluetooth 6.0; IP55.
How I tested the JLab JBuds Open Wireless
Tested over 10 days
Used with my iPhone 16 Pro
Listened to music, podcasts and some movies
I tested the JLab JBuds Open Wireless headphones for 10 days, which gave me plenty of time to trial them in different environments, wear them in a few different weather conditions, and run a battery test.
I took them with me on daily long walks and two runs along the canalside, as well as one bigger hike in the countryside. They also came with me often when I was walking through a big city, in a busy market, to plenty of coffee shops while I was working remotely, on several bus rides and just out and about getting on with my day more generally.
I used my iPhone 16 Pro to test them and mostly listened to music and podcasts. I also used them when watching a couple of movies to test the movie preset and the spatial audio. I tested the different modes and EQ settings and used them with and without their grilles.
I actually became really fascinated by the subtle sound differences the grilles make, so you can trust that my experience in this review comes from a lot of careful listening.
I’ve been writing about and testing tech for more than 15 years now, focusing mainly on wearables, smart home devices, and a lot of audio tech. Over the past few years I’ve been testing a lot of open-ear buds, so I know what I’m looking (and listening) out for.
I’m always keen to think about the real world use cases and everyday practicality of tech so you get your money’s worth and pick the best device for you.