
Tech

‘We Still Can’t See Dark Matter. But What If We Can Hear It?’


“We may have accidentally detected dark matter back in 2019,” writes ScienceAlert.

“What if instead of trying to see dark matter, scientists attempted to hear it instead?” asks Space.com:
New research suggests dark matter could leave a tiny but discernible imprint in the cacophony of ripples in spacetime called “gravitational waves” that ring through the cosmos when two black holes slam together and merge… Fortunately, when it comes to detecting gravitational waves from colliding black holes, humanity’s instruments, such as LIGO (Laser Interferometer Gravitational-Wave Observatory), are getting more and more sensitive all the time…

Vicente and colleagues searched through data gathered by LIGO and its fellow gravitational wave detectors, KAGRA (Kamioka Gravitational Wave Detector) and Virgo, focusing on 28 of the clearest signals from merging black holes. Of these, 27 appeared to have come from mergers that occurred in the relative vacuum of space. One signal, however, GW190728, first heard on July 28, 2019, and the result of merging binary black holes with a combined mass of 20 times that of the sun, located an estimated 8 billion light-years away, seemed to carry the telltale trace of this merger occurring in a region of dense, “buttery” dark matter.

The team behind this research is quick to point out that this can’t be considered a positive detection of dark matter, but does say it gives us a hint at what to look for and thus where to direct follow-up investigations… “We know that dark matter is around us. It just has to be dense enough for us to see its effects,” said team leader Josu Aurrekoetxea, of the Massachusetts Institute of Technology (MIT) Department of Physics. “Black holes provide a mechanism to enhance this density, which we can now search for by analyzing the gravitational waves emitted when they merge.”
They published their results this week in the journal Physical Review Letters.


Tech

Startup building portable AI data centers for remote operations grows Seattle-area hub to 120 people


An Armada Galleon portable data center in transit. The company has grown its Bellevue, Wash., engineering team to about 120 people as demand increases for AI infrastructure that can operate in remote environments. (Armada Photo)

Armada, a heavily funded San Francisco-based startup, is quietly building a major engineering presence in Bellevue, Wash., as demand surges for AI infrastructure that can operate beyond the walls of traditional data centers.

The 4-year-old company, which builds rugged computing systems that enable satellite connectivity in remote settings like oil fields, mines and military sites, now employs about 120 people at Bellevue’s Sunset Corporate Campus along the I-90 corridor. 

Armada raised $131 million in funding last summer, including backing from Microsoft, Founders Fund, Lux Capital and others. Total funding stands at more than $200 million. 

How its technology is used: Armada is trying to solve a growing problem in the AI era: bringing powerful computing to places where internet connectivity is unreliable, nonexistent or too sensitive to rely on outside networks. 

That includes the U.S. Navy’s littoral combat ship USS Cooperstown, the Alaska Department of Transportation and Public Facilities — and, closer to home, the vast evergreen forests of Washington state. 


The Washington State Department of Natural Resources, which coordinates wildfire response across 5.6 million acres, is using Armada’s Atlas platform to centralize Starlink internet systems and give emergency crews more reliable connectivity in remote areas where traditional broadband is limited. This has become critical as wildfire operations increasingly rely on drones, satellite imagery and real-time data.

In addition, Armada builds portable, modular data centers, which it calls Galleons, that bring connectivity to the edge of the network. Instead of sending data back and forth to centralized data centers, it lets customers process and analyze information locally, in real time.

That matters because AI systems increasingly require large amounts of computing power and near-instant responses. In environments with poor connectivity, relying on distant cloud infrastructure can introduce delays, security concerns or operational risks.

Armada’s Seattle-area operations: The Bellevue office serves as the hub of the company’s hardware and software engineering teams. 


Armada chose the Seattle area three years ago for the engineering center, due to the concentration of experienced engineers from companies like Microsoft and Amazon who know how to “build and operate at massive scale,” said Justin O’Kelly, head of communications at Armada, via email. 

“Practically speaking, this region has something you don’t always find in a tech hub: engineers who have shipped real products at scale, not just written code,” O’Kelly said.

Because Armada’s systems are deployed in settings like mines and military sites, they have to work without fail — there’s no IT department to call in the field. The platform is designed to let organizations run AI-powered operations anywhere, even without existing internet connectivity. 

The Bellevue office is led by Kenny Hsu, who serves as chief business officer, and Prag Mishra, chief AI officer. Mishra previously spent more than a decade at Amazon, working on Prime Air, Amazon Health and Amazon Logistics, and before that was the head of research for the Bing Geospatial program at Microsoft. Hsu previously ran revenue operations at AuditBoard, which sold to Hg for $3 billion in 2024. 


The 400-person company currently has more than 20 open positions in AI engineering, infrastructure, security, and product management in Bellevue. 

Microsoft partnership: The startup has also expanded its relationship with Microsoft, whose venture arm, M12, invested in the company’s early rounds.

More recently, Armada signed an agreement to combine Microsoft’s Azure Local and Foundry Local with its modular infrastructure, aimed at running AI systems in edge environments where data can’t leave the site.

The partnership reflects a broader shift, where companies are racing to deploy AI systems outside traditional cloud environments, closer to where data is actually generated.

Advertisement

That trend has become especially important in defense technology, where connectivity can’t always be guaranteed and sensitive data cannot be compromised.

Big picture: Armada’s expansion in the region speaks to a growing trend around defense tech.

RELATED: See GeekWire’s list of Seattle-area engineering outposts.


Tech

Gaza Is Rebuilding With Lego-Like Bricks Made From Rubble


Inside a makeshift workshop in Gaza, rebuilt after it was damaged by Israeli air strikes, Suleiman Abu Hassanin stands among piles of broken concrete, trying to give them a new form. His voice over the phone sounds tired, carrying the weight of what he is trying to do: rebuild in a place where building materials are no longer available.

Gaza’s construction crisis did not begin with the latest war. For years, the Israeli blockade restricted the entry of cement, steel, and other building materials, slowing reconstruction efforts across the enclave. But after nearly two years of intensified bombardment, the scale of destruction has pushed the system far beyond collapse.

According to UN estimates, Gaza now contains more than 60 million tons of rubble, while hundreds of thousands of displaced people continue to live in tents with little protection from heat or winter chill and no clear prospect for reconstruction.

In that environment, rubble is no longer just debris. It is becoming one of the only construction resources left.


One local response is Green Rock, a project led by Abu Hassanin that aims to recycle the remains of destroyed buildings into usable Lego-like bricks. Similar interlocking brick systems have been used elsewhere, including in parts of Europe and in post-conflict settings such as Sudan and Iraq. But in Gaza, the project emerges under very different conditions: not as an architectural experiment, but as a response to the near disappearance of conventional reconstruction materials.

Abu Hassanin says the idea was born out of necessity rather than innovation. “We were facing a simple equation: destruction without solutions,” he says. “So we tried to turn it into a resource.”

The process involves crushing and sorting rubble, then mixing it with local soil and alternative binding materials developed inside Gaza before compressing it into blocks using a machine built by hand. The resulting interlocking bricks can be assembled without traditional mortar, reducing reliance on cement, which remains scarce.

Lego-like interlocking bricks made from recycled rubble inside the Green Rock workshop in Gaza.


Photograph: Hassan Herzallah

Under normal conditions, this type of brick would require some cement, around 7 to 12 percent. But because access to it remains heavily restricted, the team says it developed a version using locally available replacement materials instead. Engineer Wajdi Jouda helped define the brick’s size and structure to meet engineering standards and connected the team with technical expertise from outside Gaza.


Tech

Week in Review: Most popular stories on GeekWire for the week of May 10, 2026


Get caught up on the latest technology and startup news from the past week. Here are the most popular stories on GeekWire for the week of May 10, 2026.

Sign up to receive these updates every Sunday in your inbox by subscribing to our GeekWire Weekly email newsletter.

Most popular stories on GeekWire


Tech

Today’s NYT Connections: Sports Edition Hints, Answers for May 18 #602


Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.


Today’s Connections: Sports Edition is a tough one. The purple category requires you to mentally add some letters to four words to turn them into sports-related words. If you’re struggling with the puzzle but still want to solve it, read on for hints and the answers.

Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does appear in The Athletic’s own app. Or you can play it for free online.


Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta

Hints for today’s Connections: Sports Edition groups

Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Not going up.


Green group hint: Gridiron plan.

Blue group hint: Pacific Northwest teams.

Purple group hint: Add two letters to form a baseball team’s name.

Answers for today’s Connections: Sports Edition groups

Yellow group: Slide.


Green group: Football running plays.

Blue group: An Oregon athlete.

Purple group: MLB teams, minus the last two letters.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words


What are today’s Connections: Sports Edition answers?

The completed NYT Connections: Sports Edition puzzle for May 18, 2026.

NYT/Screenshot by CNET

The yellow words in today’s Connections

The theme is slide. The four answers are decline, dip, downturn and slump.

The green words in today’s Connections

The theme is football running plays. The four answers are counter, dive, draw and sweep.


The blue words in today’s Connections

The theme is an Oregon athlete. The four answers are Beaver, Duck, Thorn and Timber.

The purple words in today’s Connections

The theme is MLB teams, minus the last two letters. The four answers are ange (Angels), dodge (Dodgers), marine (Mariners) and range (Rangers).


Tech

If You’re a Serious Bowler, You Need to Know About Bowling Lane Oil


Lately, Kegel has been steadily improving its automation, to the point where today’s machines do the entire job without any human intervention.

The lanes you and I bowl on as amateurs are oiled very differently from the ones pros use.

At your local bowling center, public lanes are oiled in what’s referred to as a “high” ratio: The level of oil present in the middle of a lane is eight to 10 times higher than what’s on the outside. At the far left and right of the lane, many public bowling alleys have no oil at all.

“On a normal pattern at your normal bowling center, there is some autocorrect,” Tackett says. Because the edges of the lane have very little oil, shots that drift to either side will slow down; if the ball has been thrown with the proper spin to guide it back toward the middle of the lane, it will curl more effectively on the drier surface. “It makes it easier to hit the pocket.”


(By “the pocket,” Tackett means that sweet spot at the front corner of the standard 10-pin configuration. For right-handed bowlers that’s the space between the first and third pins, slightly right of center; for lefties, it’s on the left side.)

In the pros, though, the patterns are far tougher. Instead of 8:1 or even 10:1 ratios of oil in the middle of the lane to the outside, the PBA uses ratios of 3:1 and under—even as low as nearly 1:1 in some cases. Learning how each board is oiled at the start of a match allows the pros to map their ideal shots. “You have to be a lot more precise, not only with where you’re placing the ball on the lane, but with your speed that you’re throwing it and the revolutions that you’re applying to the ball,” Tackett says.

Oil patterns also vary in terms of their length up the 60-foot lane. Many common patterns run for the first 40 feet before the oil tapers off near the pins, but several variations exist.

As lane oil technology has improved, understanding and adjusting to lane oil patterns and ratios has become an outsize tactical element for professional bowlers. Tackett likens it in some ways to golf.


“An oil pattern basically adds water and trees and bunkers,” he says. “It’s adding obstacles to the lane.”

The PBA, the sport’s governing body, likes those comparisons. Rather than using the latest advances in lane oil tech to standardize lanes across every PBA competition, the organization takes the opposite approach, intentionally using varying conditions across different events to challenge top bowlers.

“It forces players to think, adapt, and create, which is how we test greatness,” says Tom Clark, PBA commissioner, via email. “It’s what makes the sport more exciting, interesting, and entertaining every single week.”

The PBA has a library of 20 lane oil patterns for the 2026 season from Kegel, which use varying ratios, lengths, and even specific oil formulations, each of which has its own character. A different pattern is used at virtually every event through the season. For instance, the PBA Tournament of Champions on the week of April 20 used the “Don Johnson 40” pattern, named for famed bowler Don Johnson, with the “40” signifying the length of the pattern in feet.


Tech

Tiny underwater cables keep entire island nations online as sabotage fears and accidental damage push global internet stability dangerously closer to collapse



  • Report finds five island nations depend entirely on one vulnerable underwater internet cable
  • Accidental ship anchoring causes most global undersea cable failures every year
  • Smaller island nations remain dangerously exposed to complete nationwide internet blackouts

A new report has highlighted how all 48 island nations worldwide, including major economies such as the United Kingdom, Japan, and Indonesia, rely on just 126 undersea cables for their internet connectivity.

These cables are often no thicker than a garden hose, making them surprisingly vulnerable to accidental damage or deliberate sabotage.


Tech

Old Kindle owners are revolting against Amazon’s support shutdown with jailbreaking


Amazon’s decision to cut support for older Kindles has pushed some longtime owners toward jailbreaking, a route many never expected to consider.

From May 20, 2026, Kindle devices released in 2012 or earlier will no longer be able to buy, borrow, or download new books directly from Amazon. Books already downloaded will still work, but the store experience is basically being switched off for these devices. Reports now suggest that some users are looking at jailbreaks as a way to keep older Kindles useful instead of replacing hardware that still works.

Why are Kindle owners turning to jailbreaks?

The frustration is not just about losing store access. On Reddit, many users are treating this as another “buying isn’t owning” moment. Several owners say their old Kindles still work perfectly for reading, which makes the shutdown feel unnecessary, and many frame it as a right-to-repair and ownership issue: if an old Kindle still turns on and has a working screen, battery, and buttons, they argue, it should not be pushed toward retirement just because Amazon has ended software support.

A Kindle jailbreak means removing some of Amazon’s software restrictions so users can install community-made tools and manage the device more freely. In this case, owners are mainly interested in keeping older Kindles useful for reading, sideloading books, and avoiding forced updates that could close those workarounds.


What are the risks of jailbreaking a Kindle?

Jailbreaking is not a clean fix for everyone. The process can fail if users install the wrong files, follow bad instructions, or use a method that does not match their Kindle model or firmware version. In the worst case, the device can become unstable or stop working properly.

In many places, modifying a device for personal use may not automatically be treated as illegal. But using it to break DRM, remove copy protection, or sell modified Kindles can create legal trouble.

Even if Amazon’s decision makes sense from a support and maintenance perspective, it has landed badly with many users. People are tired of electronics being treated as disposable once official support ends. For some older Kindle owners, jailbreaking is one way to keep those devices out of the e-waste pile.


Tech

UN digital envoy warns AI influence is concentrated in a ‘few zip codes,’ calls for global action


United Nations’ Under-Secretary and Special Envoy for Digital and Emerging Technologies, Amandeep Singh Gill, appears via Zoom to deliver the opening keynote at Seattle University’s 2026 Ethics and Tech conference on May 15, 2026. (Photo: Ken Yeung)

Big tech companies are deploying compute clusters with millions of GPUs to train and run AI models. But across the entire continent of Africa — encompassing 54 countries and more than 1.5 billion people — fewer than 1,000 GPUs are available for researchers and developers to train models on local-language datasets.  

That disparity illustrates what the United Nations’ top digital envoy calls an “immense concentration of tech power and wealth” in a few zip codes — not just countries or regions, but confined areas, primarily in the U.S., where the companies shaping AI are based.

He didn’t name names, but the point hit close to home for the Seattle audience: 98109 for Amazon, 98052 for Microsoft.

Delivering the keynote address via Zoom at Seattle University’s 2026 Ethics and Tech conference on Friday, Under-Secretary Amandeep Singh Gill called 2026 “especially seminal” for AI governance, as the technology shifts from model capabilities and infrastructure investment to systems that perform real-world tasks autonomously.

Gill pointed to the global response to Anthropic’s Mythos AI model — which the company restricted from broad public release over cybersecurity concerns — as an example of why AI governance requires a comprehensive, international approach.


Here are more of the key messages from his talk.

AI could become a “systemic risk.” Gill said the technology is a “relatively minor risk” now but warned it could soon bypass cybersecurity defenses, accelerate armed conflict, and erode public trust through deepfakes and misinformation. “When we cannot tell the difference between what is true and untrue, what is reality or imaginary, then we lose this shared sense of an understanding of facts,” he said. 

Armed conflict could worsen. Gill warned that AI risks “lowering the threshold of conflict, confusing accountability under international humanitarian law, and setting us off on escalation ladders that we cannot control.” 

AI’s energy demands are threatening climate goals. The energy required for large language models, agentic systems, and inference is already threatening national net-zero targets, Gill warned. Data center emissions, water consumption for cooling, hardware turnover, and mineral extraction costs are compounding — and falling disproportionately on low-income countries. 


AI is both “a potential solution and a stressor” for the environment. It could optimize renewable energy grids and accelerate progress in fusion and batteries, but the short-term costs are mounting. Gill said the UN is examining how to ensure equity and just transitions “over these time horizons.” 

The UN is building a scientific panel for AI modeled after the Intergovernmental Panel on Climate Change. Chaired by journalist and Nobel Peace Prize winner Maria Ressa and Turing Award-winning AI researcher Yoshua Bengio, the 40-member panel is deliberately composed of only two members each from China and the U.S., with the remaining 36 from other countries, including seven from Africa, to ensure more countries are heard. Its first report is expected in July 2026. 

The UN is putting AI governance conversations under one roof. Conversations about AI previously happened in separate bodies with narrow mandates. Now they’re being brought onto what Gill called a “horizontal platform” where policymakers from all 193 countries can learn from each other and develop common approaches.

Gill called AI governance a “sovereign decision.” The UN won’t tell countries how to regulate AI, but governance frameworks mean little if nations lack the capacity to participate. Gill called for support of community-driven AI projects that invest in local research and innovation ecosystems, allowing people to use these tools to solve their own problems.


He acknowledged the UN is working with limited resources against an enormous challenge, but said the alternative is leaving AI’s trajectory to market forces and geopolitical competition.

The goal, he said, is a world where AI empowers democracies and societies, and creates opportunities not just for “a few billionaires and trillionaires” but for everyone.


Tech

Architectural patterns for graph-enhanced RAG: Moving beyond vector search in production


Retrieval-augmented generation (RAG) has become the de facto standard for grounding large language models (LLMs) in private data. The standard architecture — chunking documents, embedding them into a vector database, and retrieving top-k results via cosine similarity — is effective for unstructured semantic search.

However, for enterprise domains characterized by highly interconnected data (supply chain, financial compliance, fraud detection), vector-only RAG often fails. It captures similarity but misses structure. It struggles with multi-hop reasoning questions like, “How will the delay in Component X impact our Q3 deliverable for Client Y?” because the vector store doesn’t “know” that Component X is part of Client Y’s deliverable.

This article explores the graph-enhanced RAG pattern. Drawing on my experience building high-throughput logging systems at Meta and private data infrastructure at Cognee, I will walk through a reference architecture that combines the semantic flexibility of vector search with the structural determinism of graph databases.

The problem: When vector search loses context

Vector databases excel at capturing meaning but discard topology. When a document is chunked and embedded, explicit relationships (hierarchy, dependency, ownership) are often flattened or lost entirely.


Consider a supply chain risk scenario. While this is a hypothetical example, it represents the exact class of structural problems we see constantly in enterprise data architectures:

  • Structured data: A SQL database defining that Supplier A provides Component X to Factory Y.

  • Unstructured data: A news report stating, “Flooding in Thailand has halted production at Supplier A’s facility.”

A standard vector search for “production risks” will retrieve the news report. However, it likely lacks the context to link that report to Factory Y’s output. The LLM receives the news but cannot answer the critical business question: “Which downstream factories are at risk?”

In production, this manifests as hallucination. The LLM attempts to bridge the gap between the news report and the factory but lacks the explicit link, leading it to either guess relationships or return an “I don’t know” response despite the data being present in the system.

The pattern: Hybrid retrieval

To solve this, we move from a “Flat RAG” to a “Graph RAG” architecture. This involves a three-layer stack:

  1. Ingestion (The “Meta” Lesson): At Meta, working on the Shops logging infrastructure, we learned that structure must be enforced at ingestion. You cannot guarantee reliable analytics if you try to reconstruct structure from messy logs later. Similarly, in RAG, we must extract entities (nodes) and relationships (edges) during ingestion. We can use an LLM or named entity recognition (NER) model to extract entities from text chunks and link them to existing records in the graph.

  2. Storage: We use a graph database (like Neo4j) to store the structural graph. Vector embeddings are stored as properties on specific nodes (e.g., a RiskEvent node).

  3. Retrieval: We execute a hybrid query: a vector search first locates the most relevant event node, then a graph traversal expands from it to the connected structural entities.

Reference implementation

Let’s build a simplified implementation of this supply chain risk analyzer using Python, Neo4j, and OpenAI.

1. Modeling the graph

We need a schema that connects our unstructured “risk events” to our structured “supply chain” entities.

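The article’s schema code appeared as screenshots that are not reproduced in this text. As a rough sketch of what such a schema could look like (assuming Neo4j 5.x Cypher syntax; the constraint names, property names, and embedding dimension are invented for illustration), structural entities get uniqueness constraints while RiskEvent nodes get a vector index over their embedding property:

```python
# Hypothetical Cypher schema statements (a sketch, not the article's original
# code). Node labels Supplier, Factory, and RiskEvent come from the article;
# everything else is an illustrative assumption.
SCHEMA_STATEMENTS = [
    # Structural entities synced from the SQL / ERP source of truth.
    "CREATE CONSTRAINT supplier_name IF NOT EXISTS "
    "FOR (s:Supplier) REQUIRE s.name IS UNIQUE",
    "CREATE CONSTRAINT factory_name IF NOT EXISTS "
    "FOR (f:Factory) REQUIRE f.name IS UNIQUE",
    # RiskEvent nodes carry the raw text chunk plus its embedding as
    # properties, indexed for cosine-similarity vector search.
    "CREATE VECTOR INDEX risk_event_embedding IF NOT EXISTS "
    "FOR (e:RiskEvent) ON (e.embedding) "
    "OPTIONS {indexConfig: {`vector.dimensions`: 1536, "
    "`vector.similarity_function`: 'cosine'}}",
]

def apply_schema(session):
    """Run each schema statement against an open neo4j driver session."""
    for stmt in SCHEMA_STATEMENTS:
        session.run(stmt)
```

Storing the embedding as a node property is what lets a single Cypher query later mix similarity search with traversal.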

2. Ingestion: Linking structure and semantics

In this step, we assume the structural graph (suppliers -> factories) already exists. We ingest a new unstructured “risk event” and link it to the graph.

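The ingestion code was likewise shown as screenshots. A minimal sketch of the step described above, with the NER/LLM entity extractor and the embedding call injected as plain callables (both are stand-ins, as are the Cypher property names):

```python
# Sketch: store one unstructured risk event and link it to the Supplier
# nodes it mentions. MERGE keeps re-ingestion idempotent.
LINK_EVENT_CYPHER = """
MERGE (e:RiskEvent {id: $event_id})
SET e.text = $text, e.embedding = $embedding
WITH e
UNWIND $suppliers AS supplier_name
MATCH (s:Supplier {name: supplier_name})
MERGE (e)-[:MENTIONS]->(s)
"""

def ingest_risk_event(event_id, text, extract_entities, embed):
    """Build the Cypher call and parameters for one risk event.
    `extract_entities` stands in for an LLM/NER call returning supplier
    names; `embed` stands in for an embedding API returning a vector."""
    params = {
        "event_id": event_id,
        "text": text,
        "embedding": embed(text),
        "suppliers": extract_entities(text),
    }
    return LINK_EVENT_CYPHER, params
```

In a real pipeline, `session.run(cypher, **params)` would execute this against Neo4j, and the extractor might wrap an OpenAI call or a spaCy NER model.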

3. The hybrid retrieval query

This is the core differentiator. Instead of just returning the top-k chunks, we use Cypher to perform a vector search to find the event, and then traverse to find the downstream impact.

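The query itself also appeared as a screenshot. A sketch of what this hybrid Cypher could look like (the index name, relationship types, and top-k of 3 are assumptions), combining a vector index lookup with a two-hop traversal to downstream factories:

```python
# Sketch of the hybrid retrieval: step 1 is a vector search over RiskEvent
# embeddings, step 2 traverses MENTIONS and SUPPLIES edges to reach the
# factories that depend on the affected supplier.
HYBRID_QUERY = """
CALL db.index.vector.queryNodes('risk_event_embedding', 3, $query_embedding)
YIELD node AS event, score
MATCH (event)-[:MENTIONS]->(s:Supplier)-[:SUPPLIES]->(f:Factory)
RETURN event.text AS issue,
       s.name     AS impacted_supplier,
       f.name     AS risk_to_factory
ORDER BY score DESC
"""

def downstream_risks(session, query_embedding):
    """Return structured risk rows suitable for the LLM prompt."""
    result = session.run(HYBRID_QUERY, query_embedding=query_embedding)
    return [dict(record) for record in result]
```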

The output: Instead of a generic text chunk, the LLM receives a structured payload:

[{'issue': 'Severe flooding…', 'impacted_supplier': 'TechChip Inc', 'risk_to_factory': 'Assembly Plant Alpha'}]

This allows the LLM to generate a precise answer: “The flooding at TechChip Inc puts Assembly Plant Alpha at risk.”


Production lessons: Latency and consistency

Moving this architecture from a notebook to production requires handling trade-offs.

1. The latency tax

Graph traversals are more expensive than simple vector lookups. In my work on product image experimentation at Meta, we dealt with strict latency budgets where every millisecond impacted user experience. While the domain was different, the architectural lesson applies directly to Graph RAG: You cannot afford to compute everything on the fly.

Mitigation: We use semantic caching. If a user asks a question similar (cosine similarity > 0.85) to a previous query, we serve the cached graph result. This reduces the “graph tax” for common queries.
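As a minimal illustration of that mitigation (the 0.85 threshold comes from the text; the linear scan is illustrative, since a production cache would use an approximate-nearest-neighbor index for the lookup), a semantic cache might look like:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class SemanticCache:
    """Serve a previously computed graph result when a new query embedding
    is close enough (cosine similarity above the threshold) to a cached one."""
    def __init__(self, threshold=0.85):
        self.threshold = threshold
        self.entries = []  # list of (embedding, result) pairs

    def get(self, embedding):
        for cached_embedding, result in self.entries:
            if cosine(embedding, cached_embedding) > self.threshold:
                return result  # cache hit: skip the graph traversal
        return None  # cache miss: caller runs the full hybrid query

    def put(self, embedding, result):
        self.entries.append((embedding, result))
```

A near-duplicate query embedding returns the cached rows without touching the graph; anything sufficiently different falls through to the full hybrid query.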

2. The “stale edge” problem

In vector databases, data is independent. In a graph, data is dependent. If Supplier A stops supplying Factory Y, but the edge remains in the graph, the RAG system will confidently hallucinate a relationship that no longer exists.


Mitigation: Graph relationships must have Time-To-Live (TTL) or be synced via Change Data Capture (CDC) pipelines from the source of truth (the ERP system).
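A sketch of the TTL variant (the `valid_until` property and relationship names are assumptions; in practice the CDC pipeline from the ERP would refresh these timestamps): every traversal filters out edges whose freshness window has expired, so a lapsed supply relationship simply stops matching.

```python
import time

# Hypothetical traversal that honors edge TTLs: SUPPLIES edges carry a
# `valid_until` epoch timestamp, and expired edges are excluded.
FRESH_TRAVERSAL = """
MATCH (s:Supplier {name: $supplier})-[r:SUPPLIES]->(f:Factory)
WHERE r.valid_until > $now
RETURN f.name AS factory
"""

def fresh_factories(session, supplier):
    """Return only factories reachable via non-expired SUPPLIES edges."""
    result = session.run(FRESH_TRAVERSAL, supplier=supplier, now=time.time())
    return [record["factory"] for record in result]
```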

Infrastructure decision framework

Should you adopt Graph RAG? Here is the framework we use at Cognee:

  1. Use vector-only RAG if:

    • The corpus is flat (e.g., a chaotic Wiki or Slack dump).

    • Questions are broad (“How do I reset my VPN?”).

    • Latency < 200ms is a hard requirement.

  2. Use graph-enhanced RAG if:

    • The domain is regulated (finance, healthcare).

    • “Explainability” is required (you need to show the traversal path).

    • The answer depends on multi-hop relationships (“Which indirect subsidiaries are affected?”).

Conclusion

Graph-enhanced RAG is not a replacement for vector search, but a necessary evolution for complex domains. By treating your infrastructure as a knowledge graph, you provide the LLM with the one thing it cannot hallucinate: the structural truth of your business.

Daulet Amirkhanov is a software engineer at UseBead.


Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


Tech

Cerebras Systems IPO set to raise more than $5.5bn


The pricing of the 30m class A common stock shares is significantly higher than was expected.

Cerebras Systems, the AI chipmaker aiming to rival Nvidia, is set to raise more than $5.5bn after pricing its US initial public offering (IPO) at $185 per share.

The pricing of the 30m class A common stock shares – set to begin trading today (14 May) as ‘CBRS’ on the Nasdaq Global Select Market – is significantly higher than was expected.

In early May, a $3.5bn raise through the sale of 28m shares at between $115 and $125 each was mooted. Last week, that estimate had grown to proceeds of up to $4.8bn at a range of $150-160 per share.

Advertisement

Reported media valuations for the company after the IPO sit at around $50bn.

Morgan Stanley, Citigroup, Barclays and UBS Investment Bank are acting as lead book-running managers for the offering, according to Cerebras. Mizuho and TD Cowen are acting as bookrunners.

Needham & Company, Craig-Hallum, Wedbush Securities, Rosenblatt, Academy Securities, Credit Agricole CIB, MUFG and First Citizens Capital Securities are acting as co-managers.

In February, the company was valued at around $23bn after a $1bn Series H raise.

Advertisement

Cerebras claims that it builds the “fastest AI infrastructure in the world,” and CEO Andrew Feldman has gone on record saying that his hardware runs AI models multiple times faster than Nvidia’s.

Cerebras is behind WSE-3, touted by the company as the “largest” AI chip ever built, with 19 times more transistors and 28 times more compute than Nvidia’s B200.

Cerebras initially filed for an IPO in September 2024, but drew criticism for a perceived heavy reliance on a single United Arab Emirates-based customer, the Microsoft-backed G42. That October, it withdrew the planned IPO without providing an official reason.

According to Bloomberg, the Cerebras IPO is the largest of 2026 so far, and drew orders for more than 20 times the number of shares available. Cerebras said it had granted the IPO underwriters a 30-day option to purchase up to an additional 4,500,000 shares.

Advertisement

Last month, Elon Musk’s SpaceX was reported to have confidentially filed for a US IPO, with estimates of how much this could raise put at between $50bn and $75bn, while the company’s valuation could be up to $1.75trn.




Copyright © 2025