Tech

5 Lexus Engines You Should Steer Clear Of

Lexus has spent more than three decades earning a reputation for reliability that most luxury brands would love to borrow. From the original LS 400 that humbled German sedans to the early RX and ES models, the brand has conditioned buyers to trust any Lexus engine almost by default, and most of the time that trust is warranted.

But no automaker bats a thousand. Hidden in Lexus’ 35-year engine catalog are a few designs that don’t quite live up to the badge. The five engines ahead span nearly every era of the brand and together power hundreds of thousands of vehicles still on the road. These include a twin-turbo V6 that can stall when stray machining debris wipes out its bearings, another V6 that became known for turning its oil into sludge, the hybrid four-cylinder that powered the company’s first hybrid car and burned oil faster than fuel, a compact direct injection V6 that misfires when carbon clogs its intake valves, and an otherwise reliable Lexus V8 engine with a fire-risk related recall.


Have all of them been fixed by recalls, updated parts, or warranty programs? In most cases, yes. Does that mean every example you’ll find on a used car lot will be bad? Not really. But if you’re shopping for a used LX 600, IS 250, ES 300, RX 300, HS 250h, GX 460, or LS 460, the engine under the hood deserves more attention than the badge on the grille.


1. 1MZ-FE 3.0L V6

When Toyota introduced the all-aluminum 1MZ-FE in the mid-1990s, it looked like the perfect luxury V6. Aluminum saved weight over the iron 3VZ it replaced, twin overhead cams kept it smooth to 5,800 rpm, and its broad torque curve gave the ES 300 and first-gen RX 300 the effortless feel buyers expected from a Lexus. Later updates even added variable valve timing, helping the engine meet low-emissions targets without giving up power. The problem is that the 1MZ-FE also became one of the main engines tied to Toyota and Lexus’s oil-sludge controversy.

It started with reports of thick, oily sludge building up under the valve covers, and it quickly became one of Toyota’s most notorious reliability issues. Engine oil is supposed to stay thin enough to move quickly through narrow passages, carry heat away from hot spots, and keep bearings and cam surfaces from grinding against each other.

In the 1MZ-FE, however, degraded oil could thicken into sticky deposits instead of flowing cleanly through the engine, and it showed up as warning lights, blue smoke at startup, burning oil, valve knock, sudden stalling, and no-start conditions. In the worst cases, the engine sludge problem led to complete engine failure, with quotes for thousands of dollars in major internal work involving the short block, heads, valve covers, and cams.

The problem was widespread enough to pull in the 1MZ-FE-powered Lexus ES 300 and RX 300, and Toyota addressed it through a Special Policy adjustment rather than a formal recall; a later class-action settlement ultimately covered about 3.5 million 1997-2002 Toyota and Lexus vehicles.


2. 4GR-FSE 2.5L direct-injection V6

Toyota’s GR family makes some of the most respected V6s in modern motoring, but the 4GR-FSE is the odd one out. Lexus dropped it into the second-generation IS 250 (2006-2010 sedan, 2010 IS 250C) as a downsized alternative to the 3.5-liter IS 350. Technically, it looked smart: a modern, high-compression GR-family V6 with dual VVT-i and, critically, D-4 direct fuel injection. Lexus claims the direct-injection system helped cool the cylinders, allowing the 4GR-FSE to run at higher compression and extract more efficiency from a small luxury-sedan V6.


The problem is that gasoline direct injection engines also remove one useful side effect of port injection. In a port-injected engine, fuel is sprayed upstream of the intake valve, which helps “wash” the backs of the valves as the engine runs and makes it harder for oily vapors and deposits to stick. In the 4GR-FSE, fuel is injected directly into the cylinder, so the intake valves don’t get that natural cleaning effect. Without it, carbon deposits are more likely to build up on the intake side over time. Once carbon deposits built up, the 4GR-FSE could show check-engine and VSC lights, rough cold starts, shaky idle, random cylinder misfires, sputtering at stops, sudden loss of power, and occasional stalling when rpm dropped. Some cases involved repeat top-engine cleanings, piston/ring work, or complete engine replacement.

Because Lexus treated it as a drivability/emissions issue — not a safety defect — it was handled with service bulletins and a Customer Support Program instead of a recall. That coverage ran for nine years, but it’s expired now, which means today’s used-IS buyers pay out of pocket for cleanings and related repairs or sidestep the 4GR altogether and buy the port-and-direct-injected IS 350 instead.


3. 1UR-FE/1UR-FSE 4.6L V8

When Lexus replaced its long-running 4.3L LS V8 and 4.7L GX V8 engines, the 4.6L 1UR looked like the perfect upgrade. The 1UR-FSE arrived in the LS 460 as a newly developed 4.6-liter V8, while the 1UR-FE followed in the 2010 GX 460 as a stronger, more efficient replacement for the old 4.7-liter V8. Early 1UR-era cars, however, had a number of problems, and the one that drew the most attention was a valve-spring defect.

Toyota found that some valve springs in certain 2007-2008 LS 460/LS 460L and 2008 GS 460 V8 engines could develop small cracks and eventually break. Once a valve spring fails, the engine can act like it’s starving for fuel: sluggish throttle response, sudden power loss, heavy shaking and misfires, and, in the worst cases, stalling without restarting.

Another issue involved the fuel system. On some 1UR-powered Lexus models, the gasket sealing the fuel-pressure sensor to the fuel delivery pipe could lose its seal over time, allowing fuel to leak into the engine bay, sometimes with little warning beyond a fuel smell, an obvious fire risk. On the SUV side, some GX 460s had a secondary-air injection fault that could trigger the check-engine light and put the truck into reduced-power limp mode until the pump or valves were replaced.


Toyota addressed the broken springs with a safety recall, replaced the fuel-sensor gasket under a separate recall, and later issued a GX 460 Warranty Enhancement covering air-injection pump and switching-valve failures for 10 years.


4. 2AZ-FXE 2.4L hybrid four-cylinder

The 2AZ-FXE was the mechanical heart of the Lexus HS 250h, which arrived for 2010 as the world’s first hybrid-only luxury vehicle and the first Lexus to pair a four-cylinder gas engine with Lexus Hybrid Drive. It came from Toyota’s ubiquitous 2AZ engine family, whose conventional 2AZ-FE and hybrid 2AZ-FXE variants powered countless Camrys, RAV4s, and Scion tCs before doing duty in the HS 250h’s 2010-2012 run. It was a very different kind of Lexus engine from the brand’s well-known V6s: a 2.4-liter tuned to prioritize fuel economy above everything else. Unfortunately, fuel economy wasn’t the only thing it became known for; oil consumption became the real problem.

In a healthy engine, piston rings are supposed to do two jobs at once: keep combustion pressure above the piston where it belongs and scrape excess oil off the cylinder walls so it doesn’t get pulled into the combustion chamber. When the oil control side of that job starts failing, the engine can begin consuming oil so gradually that a driver may not notice until the level has fallen much farther than it should. Once oil levels drop too far, bearings, cylinder walls, and the valvetrain are all working with less protection than they were designed to have.

There was no recall for the HS 250h; Lexus addressed excessive oil consumption with a Warranty Enhancement Program for certain 2010-2012 HS 250h vehicles, which called for updated piston assemblies. The HS 250h itself was a short-lived Lexus experiment, effectively discontinued in North America after 2012 and credited with only about 67,000 sales globally by 2016. Even Toyota moved on with the 2012 Camry, switching to a new 2.5-liter hybrid engine in place of the 2.4.


5. V35A-FTS 3.4L twin-turbo V6

The V35A-FTS was Lexus’s and Toyota’s clean break from the V8s that powered their old-school trucks and body-on-frame flagship SUVs. Instead of relying on displacement, the 3.4-liter twin turbo V6 uses boost to do the heavy lifting, which is why the LX 600 can make 409 horsepower and 479 lb-ft of torque from two fewer cylinders than the LX 570 before it. The tradeoff is that such boosted engines deliver their strongest shoves early, right in the low-mid rpm range where heavy SUVs and pickups spend most of their time. That also puts repeated stress through the crankshaft, which makes the bottom end especially important.

That starts with the crankshaft main bearings, which are not glamorous parts but keep the rotating assembly alive. Every time combustion pushes a piston down, that force travels through the connecting rod into the crankshaft. And the crank only survives because it rides on main bearings with a thin, pressurized oil layer acting as a lubricant between the metal surfaces.


In the V35A-FTS’s case, machining debris was left inside some engines during manufacturing. Those tiny metal particles can circulate with the oil, reach the crankshaft main bearings, and get trapped right where the crank is supposed to be riding on a clean, pressurized film. If the debris sticks and the engine keeps seeing high loads over time, the bearings can fail, showing up as knocking, rough running, a no-start, or even a stall. Once it gets that far, the result is complete engine failure.

The V35A-FTS is used in the 2022-present Toyota Tundra, 2022-present Lexus LX 600, and 2024-present Lexus GX 550. The machining debris was covered by a recall for certain 2022-2024 Tundra/LX and 2024 GX vehicles (126,691 in the US).


How we chose these engines

Lexus is one of the most reliable luxury brands in the world, which is why this list needed a careful filter, as reliability should not be treated like a free pass. We didn’t choose engines just because they had a few angry owner complaints, high repair bills, or one-off horror stories. A Lexus engine only made the cut if the problem had a larger paper trail behind it, such as a recall, service bulletin, warranty extension, or other official action.

That doesn’t mean every vehicle with one of these engines is doomed. In fact, the opposite is true. Plenty of owners continue to report long, uneventful runs with some of the powertrains on this list, and many affected examples have run perfectly fine for years after being repaired.





Tech

Week in Review: Most popular stories on GeekWire for the week of May 10, 2026

Get caught up on the latest technology and startup news from the past week. Here are the most popular stories on GeekWire for the week of May 10, 2026.

Sign up to receive these updates every Sunday in your inbox by subscribing to our GeekWire Weekly email newsletter.

Most popular stories on GeekWire


Tech

Today’s NYT Connections: Sports Edition Hints, Answers for May 18 #602

Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.


Today’s Connections: Sports Edition is a tough one. The purple category requires you to mentally add some letters to four words to turn them into sports-related words. If you’re struggling with the puzzle but still want to solve it, read on for hints and the answers.

Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.


Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta

Hints for today’s Connections: Sports Edition groups

Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Not going up.


Green group hint: Gridiron plan.

Blue group hint: Pacific Northwest teams.

Purple group hint: Add two letters to form a baseball team’s name.

Answers for today’s Connections: Sports Edition groups

Yellow group: Slide.


Green group: Football running plays.

Blue group: An Oregon athlete.

Purple group: MLB teams, minus the last two letters.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words


What are today’s Connections: Sports Edition answers?

The completed NYT Connections: Sports Edition puzzle for May 18, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is slide. The four answers are decline, dip, downturn and slump.

The green words in today’s Connections

The theme is football running plays. The four answers are counter, dive, draw and sweep.


The blue words in today’s Connections

The theme is an Oregon athlete. The four answers are Beaver, Duck, Thorn and Timber.

The purple words in today’s Connections

The theme is MLB teams, minus the last two letters. The four answers are ange (Angels), dodge (Dodgers), marine (Mariners) and range (Rangers).


Tech

If You’re a Serious Bowler, You Need to Know About Bowling Lane Oil

Lately, Kegel has been steadily improving its automation, to the point where today’s machines do the entire job without any human intervention.

The lanes you and I bowl on as amateurs are oiled very differently from the ones pros use.

At your local bowling center, public lanes are oiled in what’s referred to as a “high” ratio: The level of oil present in the middle of a lane is eight to 10 times higher than what’s on the outside. At the far left and right of the lane, many public bowling alleys have no oil at all.

“On a normal pattern at your normal bowling center, there is some autocorrect,” Tackett says. Because the edges of the lane have very little oil, shots that drift to either side will slow down; if the ball has been thrown with the proper spin to guide it back toward the middle of the lane, it will curl more effectively on the drier surface. “It makes it easier to hit the pocket.”


(By “the pocket,” Tackett means that sweet spot at the front corner of the standard 10-pin configuration. For right-handed bowlers that’s the space between the first and third pins, slightly right of center; for lefties, it’s on the left side.)

In the pros, though, the patterns are far tougher. Instead of 8:1 or even 10:1 ratios of oil in the middle of the lane to the outside, the PBA uses ratios of 3:1 and under—even as low as nearly 1:1 in some cases. Learning how each board is oiled at the start of a match allows the pros to map their ideal shots. “You have to be a lot more precise, not only with where you’re placing the ball on the lane, but with your speed that you’re throwing it and the revolutions that you’re applying to the ball,” Tackett says.

Oil patterns also vary in terms of their length up the 60-foot lane. Many common patterns run for the first 40 feet before the oil tapers off near the pins, but several variations exist.

As lane oil technology has improved, understanding and adjusting to lane oil patterns and ratios has become an outsize tactical element for professional bowlers. Tackett likens it in some ways to golf.


“An oil pattern basically adds water and trees and bunkers,” he says. “It’s adding obstacles to the lane.”

The PBA, the sport’s governing body, likes those comparisons. Rather than using the latest advances in lane oil tech to standardize lanes across every PBA competition, the organization takes the opposite approach, intentionally using varying conditions across different events to challenge top bowlers.

“It forces players to think, adapt, and create, which is how we test greatness,” says Tom Clark, PBA commissioner, via email. “It’s what makes the sport more exciting, interesting, and entertaining every single week.”

The PBA has a library of 20 lane oil patterns for the 2026 season from Kegel, which use varying ratios, lengths, and even specific oil formulations, each of which has its own character. A different pattern is used at virtually every event through the season. For instance, the PBA Tournament of Champions on the week of April 20 used the “Don Johnson 40” pattern, named for famed bowler Don Johnson, with the “40” signifying the length of the pattern in feet.



Tech

Tiny underwater cables keep entire island nations online as sabotage fears and accidental damage push global internet stability dangerously closer to collapse


  • Report finds five island nations depend entirely on one vulnerable underwater internet cable
  • Accidental ship anchoring causes most global undersea cable failures every year
  • Smaller island nations remain dangerously exposed to complete nationwide internet blackouts

A new report has highlighted how all 48 island nations worldwide, including major economies such as the United Kingdom, Japan, and Indonesia, rely on just 126 undersea cables for their internet connectivity.

These cables are often no thicker than a garden hose, making them surprisingly vulnerable to accidental damage or deliberate sabotage.


Tech

Old Kindle owners are revolting against Amazon’s support shutdown with jailbreaking

Amazon’s decision to cut support for older Kindles has pushed some longtime owners toward jailbreaking, a route many never expected to consider.

From May 20, 2026, Kindle devices released in 2012 or earlier will no longer be able to buy, borrow, or download new books directly from Amazon. Books already downloaded will still work, but the store experience is basically being switched off for these devices. Reports now suggest that some users are looking at jailbreaks as a way to keep older Kindles useful instead of replacing hardware that still works.

Why are Kindle owners turning to jailbreaks?

The frustration is not just about losing store access. On Reddit, many users are treating this as another “buying isn’t owning” moment. Several owners say their old Kindles still work perfectly for reading, which makes the shutdown feel unnecessary. Many users see this as a right-to-repair and ownership issue. If an old Kindle still turns on, has a working screen, battery, and buttons, they argue it should not be pushed toward retirement because Amazon has ended software support.

A Kindle jailbreak means removing some of Amazon’s software restrictions so users can install community-made tools and manage the device more freely. In this case, owners are mainly interested in keeping older Kindles useful for reading, sideloading books, and avoiding forced updates that could close those workarounds.


What are the risks of jailbreaking a Kindle?

Jailbreaking is not a clean fix for everyone. The process can fail if users install the wrong files, follow bad instructions, or use a method that does not match their Kindle model or firmware version. In the worst case, the device can become unstable or stop working properly.

In many places, modifying a device for personal use may not automatically be treated as illegal. But using it to break DRM, remove copy protection, or sell modified Kindles can create legal trouble.

Even if Amazon’s decision makes sense from a support and maintenance perspective, it has landed badly with many users. People are tired of electronics being treated as disposable once official support ends. For some older Kindle owners, jailbreaking is one way to keep those devices out of the e-waste pile.


Tech

UN digital envoy warns AI influence is concentrated in a ‘few zip codes,’ calls for global action

United Nations’ Under-Secretary and Special Envoy for Digital and Emerging Technologies, Amandeep Singh Gill, appears via Zoom to deliver the opening keynote at Seattle University’s 2026 Ethics and Tech conference on May 15, 2026. (Photo: Ken Yeung)

Big tech companies are deploying compute clusters with millions of GPUs to train and run AI models. But across the entire continent of Africa — encompassing 54 countries and more than 1.5 billion people — fewer than 1,000 GPUs are available for researchers and developers to train models on local-language datasets.  

That disparity illustrates what the United Nations’ top digital envoy calls an “immense concentration of tech power and wealth” in a few zip codes — not just countries or regions, but confined areas, primarily in the U.S., where the companies shaping AI are based.

He didn’t name names, but the point hit close to home for the Seattle audience: 98109 for Amazon, 98052 for Microsoft.

Delivering the keynote address via Zoom at Seattle University’s 2026 Ethics and Tech conference on Friday, Under-Secretary Amandeep Singh Gill called 2026 “especially seminal” for AI governance, as the technology shifts from model capabilities and infrastructure investment to systems that perform real-world tasks autonomously.

Gill pointed to the global response to Anthropic’s Mythos AI model — which the company restricted from broad public release over cybersecurity concerns — as an example of why AI governance requires a comprehensive, international approach.


Here are more of the key messages from his talk.

AI could become a “systemic risk.” Gill said the technology is a “relatively minor risk” now but warned it could soon bypass cybersecurity defenses, accelerate armed conflict, and erode public trust through deepfakes and misinformation. “When we cannot tell the difference between what is true and untrue, what is reality or imaginary, then we lose this shared sense of an understanding of facts,” he said. 

Armed conflict could worsen. Gill warned that AI risks “lowering the threshold of conflict, confusing accountability under international humanitarian law, and setting us off on escalation ladders that we cannot control.” 

AI’s energy demands are threatening climate goals. The energy required for large language models, agentic systems, and inference is already threatening national net-zero targets, Gill warned. Data center emissions, water consumption for cooling, hardware turnover, and mineral extraction costs are compounding — and falling disproportionately on low-income countries. 

Advertisement

AI is both “a potential solution and a stressor” for the environment. It could optimize renewable energy grids and accelerate progress in fusion and batteries, but the short-term costs are mounting. Gill said the UN is examining how to ensure equity and just transitions “over these time horizons.” 

The UN is building a scientific panel for AI modeled after the Intergovernmental Panel on Climate Change. Chaired by journalist and Nobel Peace Prize winner Maria Ressa and Turing Award-winning AI researcher Yoshua Bengio, the 40-member panel is deliberately composed of only two members each from China and the U.S., with the remaining 36 from other countries, including seven from Africa, to ensure more countries are heard. Its first report is expected in July 2026. 

The UN is putting AI governance conversations under one roof. Conversations about AI previously happened in separate bodies with narrow mandates. Now they’re being brought onto what Gill called a “horizontal platform” where policymakers from all 193 countries can learn from each other and develop common approaches.

Gill called AI governance a “sovereign decision.” The UN won’t tell countries how to regulate AI, but governance frameworks mean little if nations lack the capacity to participate. Gill called for support of community-driven AI projects that invest in local research and innovation ecosystems, allowing people to use these tools to solve their own problems.


He acknowledged the UN is working with limited resources against an enormous challenge, but said the alternative is leaving AI’s trajectory to market forces and geopolitical competition.

The goal, he said, is a world where AI empowers democracies and societies, and creates opportunities not just for “a few billionaires and trillionaires” but for everyone.


Tech

Architectural patterns for graph-enhanced RAG: Moving beyond vector search in production

Retrieval-augmented generation (RAG) has become the de facto standard for grounding large language models (LLMs) in private data. The standard architecture — chunking documents, embedding them into a vector database, and retrieving top-k results via cosine similarity — is effective for unstructured semantic search.

However, for enterprise domains characterized by highly interconnected data (supply chain, financial compliance, fraud detection), vector-only RAG often fails. It captures similarity but misses structure. It struggles with multi-hop reasoning questions like, “How will the delay in Component X impact our Q3 deliverable for Client Y?” because the vector store doesn’t “know” that Component X is part of Client Y’s deliverable.

This article explores the graph-enhanced RAG pattern. Drawing on my experience building high-throughput logging systems at Meta and private data infrastructure at Cognee, we will walk through a reference architecture that combines the semantic flexibility of vector search with the structural determinism of graph databases.

The problem: When vector search loses context

Vector databases excel at capturing meaning but discard topology. When a document is chunked and embedded, explicit relationships (hierarchy, dependency, ownership) are often flattened or lost entirely.


Consider a supply chain risk scenario. While this is a hypothetical example, it represents the exact class of structural problems we see constantly in enterprise data architectures:

  • Structured data: A SQL database defining that Supplier A provides Component X to Factory Y.

  • Unstructured data: A news report stating, “Flooding in Thailand has halted production at Supplier A’s facility.”

A standard vector search for “production risks” will retrieve the news report. However, it likely lacks the context to link that report to Factory Y’s output. The LLM receives the news but cannot answer the critical business question: “Which downstream factories are at risk?”

In production, this manifests as hallucination. The LLM attempts to bridge the gap between the news report and the factory but lacks the explicit link, leading it to either guess relationships or return an “I don’t know” response despite the data being present in the system.

The pattern: Hybrid retrieval

To solve this, we move from a “Flat RAG” to a “Graph RAG” architecture. This involves a three-layer stack:

  1. Ingestion (The “Meta” Lesson): At Meta, working on the Shops logging infrastructure, we learned that structure must be enforced at ingestion. You cannot guarantee reliable analytics if you try to reconstruct structure from messy logs later. Similarly, in RAG, we must extract entities (nodes) and relationships (edges) during ingestion. We can use an LLM or named entity recognition (NER) model to extract entities from text chunks and link them to existing records in the graph.

  2. Storage: We use a graph database (like Neo4j) to store the structural graph. Vector embeddings are stored as properties on specific nodes (e.g., a RiskEvent node).

  3. Retrieval: We execute a hybrid query: a vector search first locates the most relevant node, then a graph traversal collects the entities connected to it.

Reference implementation

Let’s build a simplified implementation of this supply chain risk analyzer using Python, Neo4j, and OpenAI.

1. Modeling the graph

We need a schema that connects our unstructured “risk events” to our structured “supply chain” entities.

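The schema code appeared only as screenshots in the original post, so what follows is a plausible reconstruction rather than the author's exact code: the Supplier, Factory, and RiskEvent node labels and the 1,536-dimension cosine vector index are assumptions inferred from the surrounding text.

```python
# Plausible reconstruction of the schema setup. The labels (Supplier, Factory,
# RiskEvent) and the 1536-dim embedding size are assumptions, not the author's
# exact code; the Cypher targets Neo4j 5's vector-index syntax.
SCHEMA_STATEMENTS = [
    # Structured supply-chain entities loaded from the SQL source of truth
    "CREATE CONSTRAINT supplier_name IF NOT EXISTS "
    "FOR (s:Supplier) REQUIRE s.name IS UNIQUE",
    "CREATE CONSTRAINT factory_name IF NOT EXISTS "
    "FOR (f:Factory) REQUIRE f.name IS UNIQUE",
    # Unstructured risk events store both the text chunk and its embedding
    "CREATE VECTOR INDEX risk_event_embedding IF NOT EXISTS "
    "FOR (r:RiskEvent) ON (r.embedding) "
    "OPTIONS {indexConfig: {`vector.dimensions`: 1536, "
    "`vector.similarity_function`: 'cosine'}}",
]

def apply_schema(session):
    """Apply each DDL statement through a neo4j Python driver session."""
    for stmt in SCHEMA_STATEMENTS:
        session.run(stmt)
```

The structural (s:Supplier)-[:SUPPLIES]->(f:Factory) edges would then be bulk-loaded from the SQL system in a separate batch step.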

2. Ingestion: Linking structure and semantics

In this step, we assume the structural graph (suppliers -> factories) already exists. We ingest a new unstructured “risk event” and link it to the graph.

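The ingestion code was likewise shown only as screenshots; the sketch below approximates the step under assumed RiskEvent and Supplier labels and an IMPACTS relationship, with a toy substring matcher standing in for the LLM/NER entity extractor and the embedding supplied by the caller (for example, from an embeddings API).

```python
# Sketch of the ingestion step. The labels, the IMPACTS relationship, and the
# helper names are illustrative assumptions; a real pipeline would use an LLM
# or NER model for entity extraction instead of substring matching.
INGEST_QUERY = """
MERGE (r:RiskEvent {id: $event_id})
SET r.text = $text, r.embedding = $embedding
WITH r
MATCH (s:Supplier) WHERE s.name IN $supplier_names
MERGE (r)-[:IMPACTS]->(s)
"""

def extract_suppliers(text, known_suppliers):
    """Toy entity linking: match known supplier names appearing in the text."""
    return [name for name in known_suppliers if name in text]

def build_ingest_params(event_id, text, embedding, known_suppliers):
    """Assemble the parameter map for session.run(INGEST_QUERY, **params)."""
    return {
        "event_id": event_id,
        "text": text,
        "embedding": embedding,
        "supplier_names": extract_suppliers(text, known_suppliers),
    }
```

The key design point is that the new RiskEvent node is linked to existing Supplier records at write time, so the structure is enforced at ingestion rather than reconstructed at query time.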

3. The hybrid retrieval query

This is the core differentiator. Instead of just returning the top-k chunks, we use Cypher to perform a vector search that finds the event, then traverse the graph to find the downstream impact.

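The query itself was also a screenshot; a plausible version, assuming a vector index named risk_event_embedding and IMPACTS/SUPPLIES relationships, pairs Neo4j's db.index.vector.queryNodes procedure with a traversal:

```python
# Hypothetical hybrid query: the vector search finds the RiskEvent node, then
# a graph traversal follows IMPACTS and SUPPLIES edges to downstream factories.
# Index and relationship names are assumptions, not the author's exact code.
HYBRID_QUERY = """
CALL db.index.vector.queryNodes('risk_event_embedding', 3, $query_embedding)
YIELD node AS event, score
MATCH (event)-[:IMPACTS]->(s:Supplier)-[:SUPPLIES]->(f:Factory)
RETURN event.text AS issue,
       s.name AS impacted_supplier,
       f.name AS risk_to_factory
ORDER BY score DESC
"""

def downstream_risk(session, query_embedding):
    """Run the hybrid query and return plain dicts for the LLM prompt."""
    result = session.run(HYBRID_QUERY, query_embedding=query_embedding)
    return [dict(record) for record in result]
```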

The output: Instead of a generic text chunk, the LLM receives a structured payload:

[{'issue': 'Severe flooding…', 'impacted_supplier': 'TechChip Inc', 'risk_to_factory': 'Assembly Plant Alpha'}]

This allows the LLM to generate a precise answer: “The flooding at TechChip Inc puts Assembly Plant Alpha at risk.”


Production lessons: Latency and consistency

Moving this architecture from a notebook to production requires handling trade-offs.

1. The latency tax

Graph traversals are more expensive than simple vector lookups. In my work on product image experimentation at Meta, we dealt with strict latency budgets where every millisecond impacted user experience. While the domain was different, the architectural lesson applies directly to Graph RAG: You cannot afford to compute everything on the fly.

Mitigation: We use semantic caching. If a user asks a question similar (cosine similarity > 0.85) to a previous query, we serve the cached graph result. This reduces the “graph tax” for common queries.
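A minimal sketch of that cache, using the 0.85 threshold from above; the in-memory list and linear scan are illustrative, and a production system would back this with a vector store:

```python
# Toy semantic cache: serve a cached graph result when a new query embedding
# is close enough (cosine similarity > threshold) to a previously seen one.
import math

class SemanticCache:
    def __init__(self, threshold=0.85):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_result) pairs

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def get(self, embedding):
        """Return the closest cached result above the threshold, else None."""
        best, best_sim = None, 0.0
        for emb, result in self.entries:
            sim = self._cosine(emb, embedding)
            if sim > best_sim:
                best, best_sim = result, sim
        return best if best_sim > self.threshold else None

    def put(self, embedding, result):
        self.entries.append((embedding, result))
```

On a cache hit the expensive graph traversal is skipped entirely; on a miss, the fresh graph result is stored alongside the query embedding for future lookups.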

2. The “stale edge” problem

In vector databases, data is independent. In a graph, data is dependent. If Supplier A stops supplying Factory Y, but the edge remains in the graph, the RAG system will confidently hallucinate a relationship that no longer exists.


Mitigation: Graph relationships must have Time-To-Live (TTL) or be synced via Change Data Capture (CDC) pipelines from the source of truth (the ERP system).
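The TTL side can be sketched as a periodic expiry job, assuming each SUPPLIES edge carries a last_confirmed timestamp that the CDC pipeline refreshes (the property name and one-week window are illustrative):

```python
# Illustrative sketch: drop SUPPLIES edges not re-confirmed within the TTL.
# The last_confirmed property and 7-day window are assumptions; in practice
# the CDC pipeline from the ERP would refresh or delete edges directly.
import time

TTL_SECONDS = 7 * 24 * 3600  # one week

EXPIRE_QUERY = """
MATCH (:Supplier)-[rel:SUPPLIES]->(:Factory)
WHERE rel.last_confirmed < $cutoff
DELETE rel
"""

def expire_params(now=None):
    """Compute the cutoff timestamp for the expiry query."""
    now = time.time() if now is None else now
    return {"cutoff": now - TTL_SECONDS}
```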

Infrastructure decision framework

Should you adopt Graph RAG? Here is the framework we use at Cognee:

  1. Use vector-only RAG if:

    • The corpus is flat (e.g., a chaotic Wiki or Slack dump).

    • Questions are broad (“How do I reset my VPN?”).

    • Latency < 200ms is a hard requirement.

  2. Use graph-enhanced RAG if:

    • The domain is regulated (finance, healthcare).

    • “Explainability” is required (you need to show the traversal path).

    • The answer depends on multi-hop relationships (“Which indirect subsidiaries are affected?”).

Conclusion

Graph-enhanced RAG is not a replacement for vector search, but a necessary evolution for complex domains. By treating your infrastructure as a knowledge graph, you provide the LLM with the one thing it cannot hallucinate: The structural truth of your business.

Daulet Amirkhanov is a software engineer at UseBead.


Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


Cerebras Systems IPO set to raise more than $5.5bn

Cerebras Systems, the AI chipmaker aiming to rival Nvidia, is set to raise more than $5.5bn after pricing its US initial public offering (IPO) at $185 per share.

The pricing of the 30m class A common stock shares – set to begin trading today (14 May) as ‘CBRS’ on the Nasdaq Global Select Market – is significantly higher than was expected.

In early May, a $3.5bn raise through the sale of 28m shares at between $115 and $125 each was mooted. Last week, that estimate had grown to proceeds of up to $4.8bn at a range of $150-160 per share.

Reported media valuations for the company after the IPO sit at around $50bn.

Morgan Stanley, Citigroup, Barclays and UBS Investment Bank are acting as lead book-running managers for the offering, according to Cerebras. Mizuho and TD Cowen are acting as bookrunners.

Needham & Company, Craig-Hallum, Wedbush Securities, Rosenblatt, Academy Securities, Credit Agricole CIB, MUFG and First Citizens Capital Securities are acting as co-managers.

In February, the company was valued at around $23bn after a $1bn Series H raise.

Cerebras claims that it builds the “fastest AI infrastructure in the world”, and CEO Andrew Feldman has gone on record to say that his hardware runs AI models multiple times faster than Nvidia’s.

Cerebras is behind the WSE-3, touted by the company as the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than Nvidia’s B200.

Cerebras initially filed for an IPO in September 2024, but drew criticism for a perceived heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42. The following October, it withdrew its planned IPO without providing an official reason.

According to Bloomberg, the Cerebras IPO is the largest of 2026 so far, and drew orders for more than 20 times the number of shares available. Cerebras said it had granted the IPO underwriters a 30-day option to purchase up to an additional 4.5m shares.

Last month, Elon Musk’s SpaceX was reported to have confidentially filed for a US IPO, with estimates of how much this could raise put at between $50bn and $75bn, while the company’s valuation could be up to $1.75trn.


‘We Still Can’t See Dark Matter. But What If We Can Hear It?’

“We may have accidentally detected dark matter back in 2019,” writes ScienceAlert.

“What if instead of trying to see dark matter, scientists attempted to hear it instead?” asks Space.com:
New research suggests dark matter could leave a tiny but discernible imprint in the cacophony of ripples in spacetime called “gravitational waves” that ring through the cosmos when two black holes slam together and merge… Fortunately, when it comes to detecting gravitational waves from colliding black holes, humanity’s instruments, such as LIGO (Laser Interferometer Gravitational-Wave Observatory), are getting more and more sensitive all the time…

Vicente and colleagues searched through data gathered by LIGO and its fellow gravitational wave detectors, KAGRA (Kamioka Gravitational Wave Detector) and Virgo, focusing on 28 of the clearest signals from merging black holes. Of these, 27 appeared to have come from mergers that occurred in the relative vacuum of space. One signal, however, GW190728, first heard on July 28, 2019, and the result of merging binary black holes with a combined mass of 20 times that of the sun and located an estimated 8 billion light-years away, seemed to carry the telltale trace of this merger occurring in a region of dense, “buttery” dark matter.

The team behind this research is quick to point out that this can’t be considered a positive detection of dark matter, but does say it gives us a hint at what to look for and thus where to direct follow-up investigations… “We know that dark matter is around us. It just has to be dense enough for us to see its effects,” said team leader Josu Aurrekoetxea, of the Massachusetts Institute of Technology (MIT) Department of Physics. “Black holes provide a mechanism to enhance this density, which we can now search for by analyzing the gravitational waves emitted when they merge.”
They published their results this week in the journal Physical Review Letters.


How to fall in love with humanity in the age of AI

A lot of humans are feeling very down on humanity these days. Maybe you’ve met them. Or maybe you’re one of them.

I’m talking about those who look around and say: Humans are destroying the planet — causing climate change, making other species go extinct. Soon enough we’ll be mucking up the cosmos, too — polluting it with still more space junk, colonizing the moon, even exporting data centers into the heavens. The world would be better off if we ourselves just went extinct!

One reader recently exemplified this rising anti-humanism by writing in to my philosophical advice column, Your Mileage May Vary, and telling me bluntly: “I’m disgusted to be a human.” I responded by reminding them that hating on humanity is neither a new nor an enlightened position. It lets us off the hook too easily, because it expects nothing of us.

But I’m also aware that this distaste for humanity isn’t only motivating old-school misanthropy these days.

It’s also motivating transhumanism, the movement that says we should use tech to proactively evolve our species into Homo sapiens 2.0. Transhumanists — who span the gamut from Silicon Valley tech bros to academic philosophers — do want to keep some version of humanity going, but definitely not running on the current hardware. They imagine us with chips in our brains, or with AI telling us how to make moral decisions more objectively, or with digitally uploaded minds that live forever in the cloud. All of this will someday, they assert, usher us into a utopian future where we transcend suffering and become as perfect and immortal as gods.

To better understand why a distaste for humanity is driving some people into the arms of transhumanism these days, I reached out to Shannon Vallor, a philosopher of technology at the University of Edinburgh and author of The AI Mirror. Vallor is a devoted humanist — but not a naive one. To her, being pro-human doesn’t mean being anti-technology. We talked about how classical humanism has failed to offer a compelling vision for the 21st century and beyond — and how we can still do better. Our conversation, edited for length and clarity, is below.

What’s driving transhumanism to become more popular these days?

We’re living in a world that digital technologies and social media have made more fragmented and alienating. We are busier, more tired, more lonely, more uncertain than ever about the future and what it holds. So we’re at a low point in our ability to place faith in our fellow humans. And instead of looking at the deeper causes of that — the breakdown of the social fabric and of institutions and of local networks of care — there is an attempt to normalize and naturalize anti-humanism.

It’s an attempt to treat it not as a symptom of some disease or malaise in society — which is how I see it — but rather to treat it as a new, more enlightened frame of mind. To say: If you’re a humanist, you’re somehow stuck in the past, you have this overly romantic attachment to humans, you’re committing a fallacy of exceptionalism.

And there is a history of humanism being inappropriately exceptionalist — for example, imagining that other living things can’t have feelings or intelligence or moral standing. So as we’ve surpassed those errors, it’s very easy to think: Oh, you just go one step further and decide that humans don’t really need to be part of the story, or they don’t need to be writing the story. And if you quiver or flinch at the notion of machines writing the story of the future, that’s just your parochial attachment.

Right, this is the accusation of “speciesism” that we hear a lot these days.

Exactly. At a very superficial intellectual level, this is all very plausible and appealing and seems very enlightened, right? But it’s rooted in a deep misconception of what it is to be human.

The reason why it’s mistaken for humans to place themselves at the center of all value and to see other living beings as mere tools has nothing to do with humans somehow being unimportant, or humans somehow being insignificant in the broad story. It’s rather a failure to understand that to be human is to be dependent upon this much bigger living system, and our value is inseparable and intertwined with the value of other living things. It’s not that humans are something to be cast aside.

Do you think the classical humanism that we’ve inherited from the Renaissance and the Enlightenment era is enough to meet the current moment? Or do we need a new humanism?

No. I do think we need a new humanism. And one of the reasons, of course, is because classical humanism, in addition to suffering from the flaws of speciesism, had a vision of the human that was itself heavily gendered and racialized. It was very much an ideal that is both unattainable and undesirable in its naive form: the idea of the individual, rational agent that is entirely self-determining and surpassing the more basic networks of care and concern that hold communities together. This Enlightenment version of humanism, which carried with it many of the flaws of European Enlightenment thinking more broadly — that’s not the kind of humanism that’s going to carry us into a sustainable future.

The most common pro-human response to AI that I see nowadays is this style of humanism that tries to say there are certain fixed traits that make humans unique, and that tries to locate value only in humans as they currently exist. It says: Let’s use tech to alleviate problems like disease but not try to augment the species.

To me, that feels insufficient as a guide. Because we’re all already transhuman in some sense, right? “Human” has never been a static category. Homo sapiens has always been evolving and augmenting itself, with everything from meditation and fasting to eyeglasses and antidepressants. A humanism that refuses to recognize that feels like it doesn’t offer a compelling vision for the future.

That’s the naive version of humanism. It’s the idea that there’s this blueprint for what a human is and that somehow technology, or any things that change us, take us away from that blueprint, when in fact we’ve been changing ourselves with language, with tools, with architecture, with culture, from the moment we climbed down from the trees.

“We need to ground ourselves in an ethos of sustainability, of care, of solidarity and mutual aid and repair of the systems that we need in order to have a future. That can be its own philosophy.”

I wrote about this in The AI Mirror, where I talked about the existentialist José Ortega y Gasset’s notion of “autofabrication” [literally, self-making]. From the beginning, humans have had to invent and reinvent themselves again and again. If there is anything unique about the human, it’s that as far as we know there’s no other creature that has to get up in the morning and decide if it’s going to live differently than it did the day before, or if it’s going to maintain the commitments and promises it’s made to itself or others.

This kind of identity construction is something that our cognitive makeup has given us, both as a blessing and a bit of a curse. It’s the responsibility to choose — and to not fall back on this idea that there’s a blueprint for what a human is supposed to be and that we’re just supposed to follow that blueprint.

I think people really crave a positive vision for the future that they can get behind. To you, what is the positive, humanist-but-not-naive-humanist vision?

Sometimes I think about this demand for a positive vision and I think about how unfair and unreasonable that demand is when the mere homeostasis of life on this planet, and of human life, is fragile. For a being whose future is threatened, survival is a positive future! Maintaining the strength and resilience of our form of life is a victory. And in a way, I think there’s a danger in the desire to immediately leap past that.

We have to look at the fundamental structural causes of the scarcity we face, and see the positive, exciting, mobilizing, motivating work as addressing those deficiencies. We should be able to be excited about doing that work.

I have two simultaneous reactions to this. The first is: Yes, we should be able to get excited about that. And I think if we had a cultural narrative that taught us that just the dynamism of being alive is itself the gift, we’d be better placed to think of sustainability as the thing to treasure.

My second reaction is: But people have this persistent hunger for a story about how we can overcome suffering and make things better than ever before — a transcendence narrative!

And that’s okay. What I want to say is, if you meet people’s basic needs, both as individuals and in community, they will naturally generate the instruments of transcendence.

When you give people the ability to be free from fear and free from imminent threat, and you get them out of this feeling that they’re in a lifeboat situation — that’s when people’s creative energy really kicks in.

I’m someone who loves animals — I’m a big birder, I’m obsessed with snorkeling, I just love exploring different kinds of minds. So I could feel excited about a future where we have a multitude of diverse intelligences — animals, conscious AIs, augmented humans, etc. Do you think part of a positive vision for the future could be an expanded space of different kinds of minds? Does that excite you at all?

Yeah! Look, I’m a giant sci-fi nerd. I spent my whole childhood living in imaginary worlds with other kinds of minds: talking animals, various hybrid human-animal creations, robots, artificial intelligences. There is nothing about my humanism that blocks a future where humans share the planet with many more kinds of minds than we have today.

What I resent is the exploitation of that excitement by tech companies to sell and impose harmful, unsafe technologies that pretend to be minds, that are disguised as minds. Claude is not [a mind]. Claude is a language model built to roleplay that.

I have no assurance that it’s possible to create a machine mind. But I also have no principled reason to think it’s impossible. And the vision that you described sounds wonderful. The problem is that it’s very easy for the AI industry to say: Ah, but that’s what we’re already giving you!

You said in a talk last year that you think maybe we should take a break from a certain kind of philosophizing about humanity’s future. But looking around at the political landscape, that feels like a luxury we can’t afford. The tech broligarchs have links to the authoritarian right. Some of them want to escape the control of democratic governments, so they’re trying to create their own sovereign colonies — whether that’s space colonies or “network states.” Given their influence, taking a break from trying to steer the future feels like capitulation at a time when capitulation is very dangerous.

I hear you. It does seem very dangerous to say that there shouldn’t be some kind of counter-philosophical-movement opposing that. But when I was saying that maybe we need to pause, what I was speaking of is the kinds of philosophical preoccupations that jump ahead of the obvious needs of the moment and serve as a perpetual distraction from those needs.

There is a certain kind of philosophy that I think we need to perhaps put on hold: It’s the philosophy of forget the present, forget the problems of the moment, think bigger, think about the universal point of view.

What I’m suggesting is that we need to ground ourselves in an ethos of sustainability, of care, of solidarity and mutual aid and repair of the systems that we need in order to have a future. That can be its own philosophy.

But it’s not a utopian kind of move. Utopia is very often used as an instrument of authoritarianism and it’s used as a way to rip people away from their present commitments and needs, and to distract them with a dream that relieves the pressure to address our current circumstances. I think that’s the opposite of what we need right now.

Yeah, this is the classic point made about Christendom — how it tells us: Just focus on getting to a good afterlife, don’t expect anything good from your life on Earth. Malcolm X called it “pie in the sky and heaven in the hereafter.” It’s one of the ways I often feel like transhumanism is weirdly doing Christendom’s bidding.

Oh absolutely, 100 percent. It’s strangely regressive, right? It’s bringing us back precisely to that worldview: Don’t worry about the feudal circumstances that you are presently in, because that’s going to be a distant memory soon, when the world of infinite abundance is delivered unto you. That story was effective for millennia. But it was one that we ultimately managed to break ourselves free from.

Right, and that was one of the genuinely great innovations of humanism: Let’s not just put all our faith in the beautiful hereafter, but let’s actually care about human lives here on Earth, now.

