
Tech

Amazon Leo aims to double its pace as it gets set to roll out its satellite broadband network

Chris Weber, vice president of consumer and enterprise business for Amazon Leo, sports a T-shirt bearing Amazon Leo’s logo in the project’s signature krypton shade of purple during countdown coverage for an April satellite launch. (Credit: United Launch Alliance)

REDMOND, Wash. — Chris Weber isn’t ready to say yet exactly when Amazon Leo will start letting individual customers sign up for satellite broadband service, but when it happens, he’ll have the right wardrobe for the debut.

During a recent interview at Amazon Leo’s Mission Operations Center in Redmond, Weber sported running shoes in a shade of purple with the Leo brand emblazoned on the back.

“It’s not purple, it’s krypton,” Weber, who came over from GitLab in 2024 to become Amazon Leo’s vice president of consumer and enterprise business, told GeekWire. “Krypton is the color when our thrusters fire in space, so we picked that. It was obviously available in the Amazon palette. … There’s a lot of meaning and thought that went into our brands, and we’re quite excited about that.”

It’s been a year since Amazon Leo, formerly known as Project Kuiper, began its multibillion-dollar campaign to send up thousands of satellites to provide broadband internet access across the globe. So far, 304 satellites have been deployed over the course of 11 launches — and Weber said the Amazon Leo team will be running twice as hard in the year ahead.

“The theme moving forward is acceleration,” he said. “What we’ve said is that over the next 12 months, we’ll double the number of launches, satellites, et cetera, so everything is about accelerating that.”

Amazon Leo has already been making its service available to a select group of enterprise customers on a preview basis, and Weber signaled that the official launch of commercial service isn’t all that far away. But Amazon Leo won’t be available everywhere all at once.

“What we’ve said publicly is that in the coming months — so it’s not years away — we’ll launch, and that’ll be in the northern and southern hemisphere, because you need enough satellites to have coverage where your customer terminal is seeing a satellite,” he said. “And so we’ll launch that in the next couple of months, our fixed service. And then as we get more and more satellites up, that coverage will expand inward geographically.”

There’s a lot of catching up to do: Even if Amazon Leo doubles its pace over the next year, it’ll still be far behind SpaceX’s Starlink network, which currently has more than 10,000 satellites in orbit and more than 12 million subscribers.

Closing the gap with Starlink isn’t the only factor motivating Amazon Leo’s speedup: Under the terms of its license from the Federal Communications Commission, Amazon was supposed to deploy half of its planned 3,232 first-generation satellites by the end of July. The company is seeking a two-year extension; last month, FCC Chairman Brendan Carr said the agency was still “reviewing the paperwork” for Amazon’s request.

Even assuming the FCC grants the extension and Leo’s pace doubles by mid-2027, Amazon would have to increase its pace further to get to 1,616 satellites by mid-2028, and then speed up even more to get all 3,232 satellites in low Earth orbit by mid-2029.
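As a rough check of why each milestone demands a further speedup, here is a back-of-the-envelope calculation using the figures in the article; the constant-rate assumption and year boundaries are our simplification, not official targets.

```python
# Back-of-the-envelope pacing check using the article's figures,
# assuming simple linear (constant-rate) deployment within each year.
deployed_year_one = 304              # satellites up after 11 launches
doubled_rate = 2 * deployed_year_one # "double ... launches, satellites"

# If the doubled pace holds through mid-2027:
in_orbit_mid_2027 = deployed_year_one + doubled_rate    # 912

# Half-constellation milestone (mid-2028, assuming the extension):
half_constellation = 3232 // 2                          # 1616
needed_2027_28 = half_constellation - in_orbit_mid_2027 # 704 in one year

# Full constellation by mid-2029:
needed_2028_29 = 3232 - half_constellation              # 1616 in one year

# Each year's required count exceeds the previous year's rate:
print(doubled_rate, needed_2027_28, needed_2028_29)     # 608 704 1616
```

Each step (roughly 608, then 704, then 1,616 satellites per year) is larger than the last, which is why even a doubled pace would need to accelerate twice more.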

Waiting for rockets

Amazon Leo’s brand adorns the fairing of a United Launch Alliance Atlas 5 rocket for a satellite launch in December 2025. (Amazon Photo)

In its filings with the FCC, Amazon said it had to slow down its deployment schedule due to the limited availability of launch vehicles. It doesn’t help that Amazon founder Jeff Bezos’ Blue Origin space venture — one of the launch providers for Amazon Leo — had to ground its heavy-lift New Glenn rocket temporarily due to an unrelated launch failure last month.

The rocket shortage forced Amazon to throttle back from its target production rate of five satellites a day at its Kirkland manufacturing facility. Weber said hundreds of satellites are in storage at Amazon’s processing facility in Florida, waiting for liftoff.

“The last I heard, we have like the next six [batches] stacked in the dispensers, ready to go for the launch providers to pick up,” he said.

Weber voiced confidence that heavy-lift rockets from Blue Origin, United Launch Alliance and Arianespace will support a higher launch rate in the year ahead. Amazon is even buying launches from SpaceX to accelerate satellite deployment.

“We’ve contracted for 100 rocket launches, the largest in space history,” he said. “And so, obviously, the commitment is there. We continue to look for ways to acquire additional launches and move launches up.”

Back in 2020, Amazon said it planned to spend more than $10 billion to get Amazon Leo off the ground. Since then, some industry observers have estimated the cost could amount to as much as $20 billion. But the projected costs would be more than matched by the expected payoff.

Just this week, a market study commissioned by Amazon and conducted by Oxford Economics estimated that broadband services provided by satellites in low Earth orbit could add between $32 billion and $863 billion to global GDP by 2035, and support between 800,000 and 21 million jobs worldwide. By 2035, somewhere between 78 million and 421 million people could be using satellite broadband, depending on which of the scenarios analyzed by the British-based advisory firm actually plays out.

Inside Mission Control

Controllers are on duty at Amazon Leo’s Mission Operations Center in Redmond, Wash., for the first launch of production-grade satellites on April 28, 2025. (Amazon Photo)

Amazon has been careful about protecting the “secret sauce” of its satellite operation — which means you’d be hard-pressed to find full-frontal photos of its fully deployed satellites, or pictures showing the display systems inside its Mission Operations Center in Redmond.

Suffice it to say that the MOC is laid out much like NASA’s Mission Control in Houston, but on a smaller scale. Most of the time, satellite operations are monitored by a handful of controllers, but that number can swell to about 20 team members for a launch.

The current center is larger than the facility that Amazon used for putting a couple of prototype satellites through their paces starting in 2023. It opened for business not long before the first launch of operational satellites. A corporate-style snack bar is around the corner from the rows of computer consoles, and a porthole installed on the center’s back wall lets visitors peek in from the lounge outside the doors.

Amazon has also been careful when it comes to discussing pricing for satellite broadband. In last month’s annual letter to shareholders, Amazon CEO Andy Jassy promised that Leo’s services would come “at a lower cost than alternatives.”

The company has described three tiers of service:

  • Nano: 7-by-7-inch portable antenna for download speeds up to 100 megabits per second.
  • Pro: 11-by-11-inch antenna supporting 400 Mbps downloads.
  • Ultra: 20-by-30-inch antenna, delivering up to 1 gigabit per second for downloads and 400 Mbps for uploads.

“We showed a downlink video of 1.3 gigabits and above the 400 on the uplink, which is quite stunning,” Weber said. “So we feel really good on the design. The stability of it, the quality is job one for us as we’re putting that up.”

Even though Amazon isn’t quite ready to reveal its pricing, either for the terminals or for the subscriptions, Weber said his team has a good handle on what the price should be.

“That’s a lot of work we’ve been doing over the years that looks at lots of different external metrics and internal metrics,” he said. “The good news is, particularly on the government and business side, you get demand signals every day, and we’ve been talking to customers every day. … We get incredible signals in order to be able to forecast our demand by not only customer terminal, but what’s the service plan they would need, the speeds they would need on the downlink and uplink.”

Satellite synergies

Amazon Leo satellites are folded up in their dispenser, ready for deployment in low Earth orbit. (Amazon Photo)

Amazon is also fine-tuning its strategies for taking advantage of synergies between Leo and its other business lines, starting with Amazon Web Services.

“We’ve announced our private networking option via AWS, where if you’re a business or a government customer, you can go from your customer terminal to the antenna into your AWS data estate or computing estate or your own private data center without ever touching the internet,” Weber said. “That’s incredible value. And boy, does that resonate significantly with business and government customers.”

Regular consumers will see synergies as well, potentially involving Prime Video, Fire TV, Ring, Zoox and even Amazon delivery services. “Without announcing anything, I would say we’re very excited about bringing differentiated new value to our customers across the Amazon set of products and services,” Weber said.

Like SpaceX, Amazon Leo is nailing down deals for in-flight connectivity with the likes of Delta and JetBlue — and exploring the latest frontier in connectivity: direct-to-device satellite service.

“We just announced the acquisition of Globalstar and our partnership with Apple on direct-to-device,” Weber said. “That’s been part of our strategy from the beginning, but it really starts to expand the use cases.”

Amazon is expected to follow through on Globalstar’s expansion plans and take them to the next level, but it won’t fold its direct-to-device service into Amazon Leo’s broadband offerings. The way Weber sees it, the direct-to-device market is different from the satellite broadband market, at least in the short to medium term.

“What direct-to-device does is open up brand-new scenarios where people simply don’t have connectivity today, and now you’re taking these billions of mobile handsets and making those connected so you can do voice messaging, those types of things,” he said. “The way I think about it is that they’re pieces of a puzzle and expanded use cases, with broadband and direct-to-device versus one replacing the other.”

Some connectivity customers may want both. “You could foresee something in the automobile where they want broadband coverage, but also the ability to have direct-to-device, which is lower speed but gives you broader connectivity,” Weber said.

What else does Weber see in his crystal ball? What will Amazon Leo look like a year from now?

“Well, I will tell you, we’ll be in service, and we’ll have a lot more satellites up, and so we’ll have broader geographic coverage,” he said. “The thing that I talk to our team about all the time, and it’s the thing we’re focused on, is building a service that customers love. That is job number one, two and three for us — because if we get that right, then as we expand, everything else can happen.”

As Weber said, Amazon Leo is likely to be available initially to customers in mid-northern and mid-southern latitudes. Internet users can plug their postal code and email address into an online form at Leo.Amazon.com to get updates on the project’s progress and availability in their area.


OpenAI is reportedly preparing legal action against Apple; it wouldn’t be the first partner to feel burned

OpenAI is so frustrated with Apple over a ChatGPT integration that failed to deliver the subscribers and prominence it expected that the company is now actively exploring legal action against the iPhone maker, Bloomberg News reported Thursday, citing people familiar with the matter.

According to Bloomberg, OpenAI has enlisted an outside law firm to work through its options, which could include sending Apple a formal breach-of-contract notice without necessarily escalating to a full lawsuit (at least not immediately). Any legal move would likely wait until after the conclusion of OpenAI’s ongoing trial with Elon Musk.

Still, it’s a reminder of what a difficult partner Apple can be for major software companies. The iPhone is an enormously attractive platform for growth, but it’s fully under Apple’s control — and companies that build there are only guests. From Google to Adobe, there’s a long history of Apple showing guests the door when they seem as if they’re getting too comfortable.

TechCrunch has reached out to both OpenAI and Apple for comment.

The OpenAI partnership, announced at Apple’s Worldwide Developers Conference in June 2024, wove ChatGPT into Apple’s operating systems as an option within Siri and as part of the iPhone’s Visual Intelligence feature (allowing users to use their camera to analyze their surroundings and send photos to ChatGPT with related questions).

OpenAI, along with industry watchers, expected the deal might eventually funnel billions of dollars in new subscriptions its way and give the company prime real estate across one of the world’s most-used mobile ecosystems. Instead, Bloomberg reports, OpenAI has grown increasingly aggravated, complaining that the integration has been buried, its features hard to find, and that revenue from the tie-up is nowhere close to projections. “They basically said, ‘OpenAI needs to take a leap of faith and trust us,’” one OpenAI executive told Bloomberg. “It didn’t work out well.”

Apple, for its part, has its own grievances, including concerns about OpenAI’s privacy standards and, according to Bloomberg, irritation over OpenAI’s push into hardware, an effort led by former Apple executives including ex-design chief Jony Ive.

Either way, OpenAI is hardly the first partner of Apple to regret hitching its wagon to the company. Apple has a long history of embracing partners and then alienating them. The most famous case is Google Maps, which was a flagship feature of the original iPhone. It was so central to the device’s appeal that its removal in 2012 — replaced by Apple’s markedly inferior Apple Maps product — became one of the biggest tech fiascos of the decade, prompting a rare public apology from CEO Tim Cook. The friction between the two companies had been building for years at that point, thanks to the rollout of Google’s Android phone a year after the iPhone’s 2007 debut; after Google’s then-CEO Eric Schmidt stepped down from Apple’s board in 2009, that rivalry only intensified.

Adobe has some scar tissue, too. Steve Jobs refused to support Flash on the iPhone and iPad, publishing a famous open letter in 2010 explaining why and effectively dooming the technology. Flash never recovered its footing on mobile.

Then there’s Spotify, which spent years arguing that Apple leveraged its control over the App Store to disadvantage rival music streaming services after launching Apple Music in 2015. The European Commission agreed, fining Apple nearly €1.8 billion in March 2024.

Sometimes these rifts can be overcome in the name of commercial interests. Google is now Apple’s AI infrastructure partner, having struck a multiyear deal in January to power the next generation of Apple Intelligence with Gemini models. Apple is paying Google roughly $1 billion a year.

In the meantime, OpenAI has had its own share of strained relationships lately. Elon Musk’s lawsuit against the company — which accuses OpenAI of abandoning its nonprofit founding mission and operating in bad faith — is currently at trial.

The company has also reportedly navigated tensions with Microsoft, its biggest backer and infrastructure partner, as it pushes for greater independence ahead of its own IPO ambitions.


Noise Master Buds 2 Review: Premium Sound Without The Premium Price

The TWS earbuds category, at least in India, is a tricky business. There are too many players, each with their specific appeal. Noise is one such homegrown brand that’s always appealed to budget-conscious buyers. I was one of the early adopters when they first started doing smart bands. Their earphones have been solid overall, but have struggled to stand out among more established players like OPPO and OnePlus. On the flip side, Bose is the most recognized player in the premium headphone space, as they essentially invented ANC back in the 90s.

So, what if you combine the value proposition of Noise with the premiumness and sound of Bose? You get the Noise Master Buds. The first generation of these unique collaboration earbuds was a hit, with experts praising the sound quality and the ANC capabilities. Fast forward to today, and the second generation of the Master Buds are out, bringing some meaningful updates in the sound department. And I called Noise to get these for a review. It’s been a month since that call, and I even took them with me to Thailand. Spoiler alert: these earbuds are a hit. Here’s why.

Noise Master Buds 2

Hisan Kidwai


Design & Comfort

Front design of the Master Buds

When I first got the Master Buds 2, I was genuinely in awe of the design. I haven’t seen a case this pretty, yet still this functional, before. The concentric circle pattern on the front is a really nice touch that adds character to the stale world of TWS earbuds. The semicircle shape is also unique and something I have yet to see from other makers. One more benefit of this shape is that the case can stand vertically, almost resembling the clocks of the past, or maybe I’m just too old at this point.

Nevertheless, I was constantly asked by friends and family what earbuds I was using, and all of them adored the design, so you know it’s not just me yapping. I’d recommend sticking to the silver color, as it looks best and doesn’t pick up random scratches, but the black also looks pretty decent. There’s a diagonal white LED strip that indicates when the earbuds are on. You also get a dedicated pairing button, so no hand gymnastics needed. Still, my favorite part of the design is the opening/closing mechanism: there’s a good amount of weight behind it, which produces that heavy thud I love.

Side by side of the Noise Master Buds 2 with regular earphones

I do, however, have a problem with the size. I know these earbuds pack a lot of hardware, but the case size is just massive. I’ve been daily driving the Enco Buds 3 Pro+ for months, and compared to them, the Master Buds feel huge.

Moving to the earbuds themselves, comfort is highly subjective, as everyone’s ears are different. That said, my first day with the Master Buds wasn’t the best. My ears are small, and the medium-sized tips that come pre-installed were just too big for my liking. Thankfully, once I switched to the small eartips, the experience was much better. You’ll still feel them sticking out, but the rubber fins did well to keep them from falling out during my everyday gym struggles with weights. It’s been a couple of hours since I started writing this review, and the Master Buds 2 are still sitting comfortably in my ears.

Sound Quality & Battery Life

A person holding the individual earbuds

Often, budget earbuds lean heavily into bass and compromise everything else. That’s not the case with the Master Buds 2. The 10 mm drivers tuned by Bose prioritize balance over everything else, and I love that. The mids, where most vocals live, are crisp, letting you hear the small nuances of a singer’s voice. The highs are decent, without the sharpness that pinches through, and I love the separation between different elements. Everything is placed precisely, which is what many earbuds fail to do. Bass isn’t the selling point of the Master Buds, but the lows are still there, working hard in the background. Just don’t expect a rumble.

If you’re not happy with the signature Bose tuning, which would be surprising, Noise does bundle a few different listening modes like Jazz, Club, and Rock, each with its unique style. For audiophiles, there’s also a custom equalizer. Beyond the basics, Noise has bundled spatial audio with head tracking. I tried some songs on Apple Music, and it works great with instruments placed around you that change as you move your head. Call quality was excellent, with the other person hearing me loud and clear. Speaking of loud, though, there’s a new feature called Sidetone that lets you hear yourself on calls to check if you’re talking too loudly. Honestly, I can think of a lot of people needing that feature.

The Master Buds 2 use their 6 microphones for up to 51dB of active noise cancellation, and I put that number to the test. Using the ANC at Max, the buds do a stellar job of quietening the everyday hum of an AC or fan without a hitch. Voices were muffled enough that I couldn’t hear someone talking near me at half volume. For commuters, the Master Buds 2 will be enough to block out the chatter of a metro, but sudden loud noises, like a horn, will still find their way through. Last but not least, the battery on the buds is very decent. I got about 4-5 hours of use on a single charge. The case can recharge the buds at least 3 times, and once you do run out, fast charging should come to the rescue.

Companion App

A person using the Noise app

It’s no secret that controls are super easy to mess up. Fortunately, that’s not the case with the Noise Master Buds. In fact, there’s a lot going on. Everything is controlled via the Noise Audio app, and it looks really well designed. It’s sophisticated, and everything is laid out neatly. Still, there are plenty of ways you can control music.

The obvious choice is the touch controls on the earbuds themselves. You can configure a single touch, a double touch, a triple touch, a quadruple touch, and even a tap-and-hold. But that’s not all. Noise has incorporated motion controls: nod your head twice and the music plays or pauses. Beyond that, shaking your head twice to the right skips to the next track, and vice versa. This is a pretty niche feature that I expected to be a gimmick at best. Surprisingly, it works quite well, apart from the occasional miss. You can also nod to accept calls.

No review of a 2026 product will be complete without mentioning AI, and the same is true here. Noise has bundled something called Noise AI, which, in theory, is meant to answer your questions. I gave it a go with one of the recommended questions: suggesting a good cafe near me. The answer was that it doesn’t have access to live restaurant listings and that I should search Google instead. If I have to search Google, then what’s even the point?

Verdict

A person holding the Master Buds 2

At ₹8,999, the Noise Master Buds 2 sit in a very competitive segment, but they do enough differently to stand out. The design alone is a conversation starter, and unlike many flashy earbuds, these actually back it up with substance. The Bose-tuned sound signature focuses on balance rather than overwhelming bass, making everything from vocals to instruments sound clean and detailed. ANC performance is strong enough for everyday commutes and flights, the gesture controls are surprisingly useful, and the companion app is genuinely well-designed rather than feeling like an afterthought.

Of course, they aren’t perfect. The case is noticeably larger than most competitors, and bass lovers might find the tuning a little too restrained. Noise AI also feels half-baked right now and adds very little to the experience. Still, if you’re shopping in this segment, the Master Buds 2 are a must-consider.


Apple 14-inch M5 MacBook Pro 1TB drops to $1,499 at Amazon

The $200 discount applies to the M5 model with 1TB of storage, matching the lowest price on record. Plus, grab an even larger markdown on an upgraded spec.

The $1,499.99 special is available on the 14-inch MacBook Pro with a 10-core CPU and 10-core GPU. It also has 16GB of memory and 1TB of storage, with the $200 discount available in your choice of Space Black or Silver.

Buy M5/16GB/1TB MacBook Pro for $1,499.99

If you’d prefer additional RAM, B&H is running a $300 discount on the M5 model with 24GB of RAM. Note that this configuration comes with 512GB of storage, rather than 1TB, but it rings in at the same $1,499 price as the above deal when ordered in Silver.

Buy M5/24GB/512GB MacBook Pro for $1,499

Both of these deals can be found in our M5 MacBook Pro Price Guide, with markdowns on higher-end M5 Pro and M5 Max models available in our M5 Pro 14-inch MacBook Pro Price Guide and M5 Pro 16-inch MacBook Pro Price Guide.


AI Promised the Audemars Piguet x Swatch Wristwatch. China Will Deliver It

Laden with iconic Royal Oak design cues, most notably the octagonal case, eight-screw bezel, and Petite Tapisserie-patterned dial, the strapless design heavily references 1979’s Royal Oak Pocket Watch reference 5691. Inside is an entirely new hand-wound version of Swatch’s Sistem51 caliber, a movement that is completely machine assembled. Swatch has 15 active patents on this new iteration and has also squeezed in an impressive 90-hour power reserve. There’s even an antimagnetic Nivachron balance spring that was, incidentally, codeveloped with Audemars Piguet.

Swatch’s 1986 POP line, whose watch heads could be physically ejected from their frames and clipped elsewhere, has been plundered here to create a design that allows the Royal Pops to ping out of their bioceramic holder clips, too.

Why There’s No Wristwatch

The simple logic of the pocket watch design authorized by Audemars Piguet, which, unlike Omega, is not part of the Swatch Group, is that it doesn’t upset its existing high-net-worth customer base. Royal Oak owners will no doubt be breathing sighs of relief now that it’s confirmed a version of their coveted pieces won’t be coming to market for a mere few hundred bucks.

However, this doesn’t mean that AP would have taken a financial hit had it delivered what the public so clearly wanted. Omega, which was also concerned about its sales when shown the original MoonSwatch internal prototypes, enjoyed a sizable 50 percent bump in sales following the release of its budget cousin.

The Royal Pop pocket watch, cleverly, is a sidestep designed to generate as much hype as possible yet be as safe as can be for AP’s brand. The Royal Oak design language is unmistakable, but the wrist is off-limits. With Swatch, Audemars built something real for its aspirational fans; it just didn’t build them what they wanted.

What does Swatch get out of this? Valuable PR as well, but far more importantly, the potential of a much-needed sales hit. In 2025, the group posted a 6.75 percent drop in sales and a staggering 55.6 percent decline in operating profit, primarily attributed to a sharp drop in demand for its watches in China, Hong Kong, and Macau. Swatch Group shareholders are not happy.

How China Will Come to the Rescue

Here is where the story gets interesting, for reasons neither Swatch nor AP planned. Because Swatch resurrected its POP design and the Royal Pop can be removed from its housing, third-party strap brands seized on the prospect within hours of the announcement, looking to quickly fashion adaptations that convert the timepiece from pocket watch to wristwatch. Since Royal Pops were designed to snap in and out of lanyards and desk stands, they should just as easily clip into bracelets and straps made specifically to receive them.

The market recognized in real time that the pocket watch from Swatch and AP tantalizingly contained all that was structurally needed to deliver the very wristwatch that the AI concepts had promised. All that was required now was something to connect the case to a wrist.


Graphon AI raises $8.3M seed to build a pre-model intelligence layer for enterprise AI

TL;DR

  • Graphon AI emerged from stealth with $8.3 million in seed funding to build a “pre-model intelligence layer” that discovers relationships across multimodal enterprise data before it reaches a foundation model.
  • The round was led by Novera Ventures, with participation from Perplexity Fund, Samsung Next, GS Futures, Hitachi Ventures, and others.
  • The company is named after a mathematical concept co-formalised by its technical advisors, UC Berkeley professors Jennifer Chayes and Christian Borgs.
  • Founded by Arbaaz Khan (CEO), Deepak Mishra (COO), and Clark Zhang (CTO), with team members from Amazon, Meta, Google, Apple, NVIDIA, and NASA.
  • Early customer GS Group, a South Korean conglomerate, has deployed Graphon for convenience-store analytics and construction-site safety.

The name is the tell. Graphon AI, which emerged from stealth on Wednesday with $8.3 million in seed funding, is named after a mathematical object that most people in AI have never heard of and that its two most prominent advisors helped invent. A graphon is the limit of a sequence of dense graphs: a continuous function that captures the structure of relationships as networks grow infinitely large. It is the kind of concept that exists at the boundary between pure mathematics and theoretical computer science, and it is now the foundation of a startup that claims to have built the missing layer between enterprise data and the models that are supposed to make sense of it.
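For readers curious about the underlying object, the standard definition from graph limit theory (this is textbook mathematics, not a description of the company’s product) can be stated compactly:

```latex
% A graphon is a symmetric measurable function on the unit square:
\[
W : [0,1]^2 \to [0,1], \qquad W(x,y) = W(y,x).
\]
% It generates random graphs: draw labels u_1, \dots, u_n uniformly
% and independently from [0,1], then connect vertices i and j with
\[
\Pr[\, i \sim j \,] = W(u_i, u_j).
\]
% Sequences of dense graphs that look alike at large scale converge
% (in the cut metric) to such a limiting function W.
```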

The company’s thesis is straightforward, even if the mathematics behind it are not. Today’s large language models can process roughly one million tokens at a time. Enterprises hold trillions of tokens across documents, video, audio, images, logs, and databases. Retrieval-augmented generation, the current standard approach, can surface relevant content from that mass, but it cannot discover relationships between pieces of data that were never stored together. An LLM using RAG can answer a question about a specific document. It cannot reason about how that document connects to a surveillance video, a compliance log, and a customer database, at least not without someone having already mapped those connections.
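To make the distinction concrete, here is a minimal, hypothetical sketch; every record, entity, and function name is invented for illustration, and this is not a description of Graphon’s actual system. It contrasts plain retrieval (finding one relevant chunk) with a relation structure built before any model sees the data (discovering chains across records).

```python
# Hypothetical illustration: retrieval answers single-document questions,
# but cross-record reasoning needs relations materialized up front.

def retrieve(query_terms, chunks, k=1):
    """Naive keyword-overlap retrieval: score each chunk by how many
    query terms it contains and return the top k."""
    scored = sorted(chunks, key=lambda c: -len(query_terms & set(c.split())))
    return scored[:k]

# Three records that were never stored together:
chunks = [
    "incident report: forklift near-miss at dock 4",
    "camera log: dock 4 feed offline during night shift",
    "training db: night shift operator certification expired",
]

# Retrieval can answer "what happened at dock 4?" from a single chunk:
print(retrieve({"dock", "4"}, chunks))  # top match is the incident report

# But linking records requires each one to be tagged with the entities
# it mentions, forming an explicit relation structure:
edges = {
    "incident": {"dock 4"},
    "camera log": {"dock 4", "night shift"},
    "training db": {"night shift"},
}

def connected(a, b, edges):
    """Records are linked if they share an entity, directly or through
    a chain of other records (a simple transitive closure)."""
    seen, frontier = {a}, [a]
    while frontier:
        node = frontier.pop()
        for other, ents in edges.items():
            if other not in seen and edges[node] & ents:
                seen.add(other)
                frontier.append(other)
    return b in seen

# incident -> camera log (shared "dock 4") -> training db ("night shift")
print(connected("incident", "training db", edges))  # True
```

The point of the sketch: the chain from incident to lapsed certification is answerable only because the entity links were computed before query time, which is the role the “pre-model layer” claims to play.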

Graphon’s product sits before the model, not inside it. Using graphon functions, a mathematical framework that extends the academic concept into a software layer, the system ingests multimodal data and automatically discovers relational structure across it, producing what the company calls persistent relational memory. The result, in theory, is a representation of an organisation’s data that any foundation model or agent framework can query without being constrained by its context window.

The people behind the mathematics

The founding team comprises Arbaaz Khan as chief executive, Deepak Mishra as chief operating officer, and Clark Zhang as chief technology officer. The company says its broader team includes former researchers and engineers from Amazon, Meta, Google, Apple, NVIDIA, Samsung AI Center, MIT, Rivian, and NASA.

More notable, perhaps, are the technical advisors. Jennifer Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley, and Christian Borgs, a UC Berkeley computer science professor, are both listed as advisors. Borgs was among the group of researchers, alongside Chayes, László Lovász, Vera Sós, and Katalin Vesztergombi, who formalised the graphon as a mathematical concept in 2008. The company is, in effect, commercialising a framework that its advisors co-invented.

Chayes and Borgs described the approach in a joint statement as one that treats relational structure as a first-class element of the AI stack rather than something to be inferred after the fact. The distinction matters because most current AI systems treat data as collections of individual items to be retrieved, not as networks of relationships to be preserved.

An unusual investor table

The seed round was led by Arvind Gupta of Novera Ventures, who made Graphon his fund’s first investment from its flagship vehicle. Gupta is better known as the founder of IndieBio, the life-sciences accelerator, and his pivot toward an AI infrastructure company suggests he sees structural overlap between the problems Graphon addresses and the complex, multimodal data challenges that define scientific computing.

The rest of the cap table reads like a deliberate exercise in strategic diversity. Perplexity Fund, the $50 million venture arm of the AI search company, participated alongside Samsung Next, Hitachi Ventures, GS Futures (the venture arm of South Korean conglomerate GS Group), Gaia Ventures, B37 Ventures, and Aurum Partners, the investment fund affiliated with the ownership group of the San Francisco 49ers.

The mix is telling. A search-AI company, a consumer electronics giant, a Japanese industrial conglomerate, and a Korean chaebol all investing in the same pre-model data layer suggests that the context-window problem Graphon claims to solve is felt across industries that otherwise have little in common. GS Group, which ranks among South Korea’s largest conglomerates with interests spanning energy, retail, and construction, is also an early customer. Ally Kim, a vice president at GS, said the company’s multimodal AI solutions have been applied to analysing customer movement in convenience stores and enhancing safety through CCTV analysis at construction sites.

The technical bet

Graphon’s positioning reflects a broader shift in the AI infrastructure market. The past three years have been dominated by a race to build larger models with longer context windows. But even the most capable models still hit a ceiling: they can process more tokens, but they cannot maintain relational awareness across the volumes of data that large organisations generate. The question Graphon is betting on is whether the solution lies not in extending the context window further, but in structuring data before it enters the window at all.

The company says it has already deployed its platform for enterprise content management, industrial intelligence, agentic workflows, and on-device applications across phones, cameras, wearables, and smart glasses. The breadth of claimed use cases is ambitious for a company at the seed stage, and the absence of independent benchmarks or detailed customer case studies beyond GS Group makes it difficult to assess how far the technology has progressed from concept to production.

What is clear is that the problem Graphon describes is real. The gap between what LLMs can theoretically do and what they can actually do with enterprise data remains one of the most significant constraints on AI deployment. Retrieval-augmented generation has been the industry’s primary answer, and its limitations are well documented: flat retrieval that misses cross-dataset relationships, and context windows that force artificial boundaries on what the model can see. Whether graphon functions offer a fundamentally better approach or merely a more theoretically elegant version of graph-based data structuring is the question the company will need to answer as it moves from stealth-mode mathematics to production-grade infrastructure.

The $8.3 million gives it runway to try. The advisors who co-invented the underlying mathematics give it credibility. But in an AI market that has seen no shortage of startups claiming to have found the missing layer, Graphon’s challenge will be proving that the mathematics it is named after translates into a measurable improvement in how foundation models handle the messy, multimodal reality of enterprise data, not just in theory, but at the scale where theory stops being sufficient.

Tech

Best Early Memorial Day Mattress Deals: Helix, Saatva (2026)

Memorial Day brings discounts to the mattress models we test all year long, and the sales have already started. As a seasoned deal hunter, I know that mattresses go on sale pretty often, but whenever someone asks me the best time to buy, I tell them to wait until Memorial Day or Black Friday and Cyber Monday. If you’ve been in the market for a new mattress, now’s the time to act.

The WIRED Reviews team thoroughly tests the best mattresses long-term. We don’t conduct “nap tests” or base recommendations on first impressions. Our top picks are tried-and-true, and they’re on sale right now. We’ll also include some deals on bedding, pillows, mattress toppers, and other sleep accessories as we update this story through Memorial Day on May 26. Prices shown are for queen sizes.

Feel free to check out our many other sleep recommendations, including the best pillows for neck pain, the best body pillows, and the best sunrise alarm clocks. You might also want to read our guide on how to choose a mattress.

WIRED Featured Deals:

Helix Sleep Midnight Luxe Hybrid for $1,824 ($675 off)—Use Code WIRED27

Helix Sleep

Midnight Luxe Hybrid Mattress (14-Inch)

Use our exclusive coupon code WIRED27 to get 27 percent off our very favorite mattress for most people. We’ve seen it sell for about $100 less before, and they’ve thrown in more freebies, but this is still a great deal. Just be aware that the price might drop a little later in the month. In any case, the Midnight Luxe Hybrid is springy and medium-firm and should be well-suited to any style of sleeping. The individually wrapped springs are zoned so that you have more support where you need it to prevent back pain. It also doesn’t get too warm, though it’s thick enough that you’ll want deep-pocketed sheets. It’s been our favorite mattress for over eight years.


Tech

Can AI replicate an army of associates? These lawyers are betting their new firm on it

Matt Souza, left, and Sam Shaddox, founders of Talairis Law Group. (Talairis Photo)

Sam Shaddox and Matt Souza have spent years on the inside of big-time legal work, as attorneys at a major Seattle firm and later as general counsel at tech companies. They’ve watched law firms charge startup clients a fortune for work they believed AI could do faster and cheaper.

Talairis Law Group is their answer. The Seattle-based firm, launching this week, is built around the idea that AI can handle much of the work that associates at big law firms have traditionally done — and that startups shouldn’t have to pay big law prices for it.

“It’s a startup for this AI moment,” Shaddox said, “and it’s the startup that we all need and the Seattle startup scene needs. We’ve been on the other side of the aisle, and now it’s time for us to make a mark.”

The idea isn’t unique. Venture capital has poured into AI-native law firms over the past couple of years, with players like Crosby, Manifest OS, Eudia and Lawhive raising hundreds of millions of dollars combined. But Shaddox and Souza say those firms have each picked a single practice area — contract review, immigration, M&A diligence — leaving a gap nobody has filled.

“They’re all picking one lane,” Souza said, “and there’s not an AI-powered law firm that you can rely on to help you with your day-to-day as things come up, helping to pilot your ship.”

The founders: Shaddox and Souza were both partner-track attorneys at Perkins Coie, the prominent Seattle-based law firm, before moving in-house at Seattle-area tech companies.

Shaddox went on to legal roles at Big Fish Games and OfferUp before serving as general counsel at SeekOut, the AI-powered talent intelligence company. Souza was senior counsel at Zillow before becoming general counsel at Wrapbook, the entertainment payroll and financial platform.

It was that in-house experience, they say, that made the problem impossible to ignore.

“We were getting billed out the ears for work that — as we were adopting AI in-house — we saw law firms were not doing, or not doing it very well,” Souza said. “The whole economic model of law firms is broken. And so that’s where we started.”

How it works: Talairis is built around what the founders call a four-layer architecture:

  • At the base is a large language model — the AI engine.
  • On top of that sits what they call an agentic layer, with more than 100 purpose-built AI agents covering the range of legal tasks a startup might need.
  • Above that is what they call the “client genome” — a stored profile of each client’s business, risk tolerance, contracts and operating history, so advice is never generic.
  • And at the top are Shaddox and Souza themselves, reviewing and signing off on every deliverable.
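
The four layers above can be sketched as a pipeline. This is a hypothetical illustration of the described flow, not Talairis code; every class and function name here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class ClientGenome:
    """Layer 3: stored per-client context so advice is never generic."""
    company: str
    risk_tolerance: str
    history: list = field(default_factory=list)

def llm(prompt: str) -> str:
    """Layer 1: stand-in for the base model call."""
    return f"draft answer to: {prompt}"

def contract_review_agent(task: str, genome: ClientGenome) -> str:
    """Layer 2: one of many purpose-built agents, scoped by the genome."""
    prompt = f"[{genome.company}, risk={genome.risk_tolerance}] {task}"
    return llm(prompt)

def attorney_sign_off(draft: str) -> str:
    """Layer 4: a human attorney reviews every deliverable before it ships."""
    return f"REVIEWED: {draft}"

genome = ClientGenome(company="AcmeCo", risk_tolerance="low")
deliverable = attorney_sign_off(contract_review_agent("review NDA", genome))
print(deliverable)
```

The key design point is the ordering: the model never answers directly; its output is scoped by client context on the way in and gated by attorney review on the way out.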

“You’re not getting one-off advice that doesn’t know what your company is or does or how it thinks and operates,” Shaddox said. “You’re getting bespoke outcomes.”

In practice: As an example, Shaddox and Souza point to SAFEs: simple agreements for future equity, a common bridge financing tool for startups. First-time founders often try to handle them on their own, or bring in outside counsel at $1,500 an hour. Either way, manually working through the notes, side letters and cap table implications is painful and error-prone.

Talairis has built an agent specifically for it. Send them a SAFE, they say, and you get back more than a legal opinion.

“They don’t just get back, ‘Hey, here’s our thoughts on this convertible note’ — anybody can do that,” Shaddox said. “Instead, they get back a fully built-out cap table that incorporates the latest note, incorporates the side letter terms, and shows how that’s going to flow through their next financing.”
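
The cap table arithmetic behind a basic SAFE conversion is simple enough to sketch. This is the standard post-money valuation-cap mechanics (YC-style), heavily simplified; it ignores discounts, side letters, option pools, and multiple stacked SAFEs, which are exactly the parts the founders describe as error-prone, and it is not Talairis's agent.

```python
# Simplified post-money SAFE conversion (valuation cap only): the SAFE
# holder's ownership is investment / post_money_cap, and enough new
# shares are issued to dilute existing holders down to that fraction.

def safe_conversion(investment, post_money_cap, existing_shares):
    ownership = investment / post_money_cap
    new_shares = existing_shares * ownership / (1 - ownership)
    return ownership, round(new_shares)

# $1M SAFE at a $10M post-money cap, founders holding 9,000,000 shares:
ownership, shares = safe_conversion(1_000_000, 10_000_000, 9_000_000)
print(f"{ownership:.0%} ownership, {shares:,} new shares")
```

With 1,000,000 new shares issued on top of 9,000,000 existing, the SAFE holder ends up with exactly 10% of the post-conversion company, matching the cap.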

The pitch to startups: The firm is launching with paying customers, though Shaddox and Souza aren’t naming them yet. Talairis is bootstrapped and it’s just the two lawyers for now.

  • Pricing: Shaddox says their hourly rate runs roughly half that of a typical big law attorney, and that the AI multiplies output enough that the effective cost to clients is a fraction of what they’d pay elsewhere.
  • Privacy: On the question of whether client data is being used to train AI models — a real concern for startups sharing sensitive legal documents — Shaddox is direct: “The answer is no. Your data is never used to train a model.” Talairis has built confidentiality and attorney-client privilege protections into its architecture from the ground up.

The launch comes the same week Anthropic released Claude for Legal, a suite of more than 20 new connectors and 12 practice-area plugins aimed at bringing AI tools to law firms and in-house legal teams. Shaddox sees the timing as validation.

“Claude for Legal and any other LLM is a base layer,” he said. “Our unique approach is what sits on top: a law firm with elite attorneys, significant proprietary enhancements, per-client scoping, privilege protections, and the agentic architecture a generic plugin lacks. That’s what turns an out-of-the-box LLM into the best possible legal counsel for startups.”


Tech

Anthropic’s Mythos AI outsmarted Apple’s Mac security systems

Security researchers have revealed that Anthropic’s Mythos AI model was able to hack macOS, bypassing Apple’s security systems in a way never previously achieved.

Mythos is an early version of a new, more powerful Claude AI model software that is yet to be made public. Anthropic’s engineers have warned that it is too good at finding security exploits to allow it into the wild.

Now, proof of its abilities has come in the form of an escalation exploit. If used correctly, the exploit could potentially allow a hacker to gain control of a Mac despite Apple’s security measures.

Detailing the news, The Wall Street Journal says that the security researchers were “excited about their discovery.” In fact, they were so impressed with what Mythos had done that they drove to Apple’s Cupertino HQ to share their findings.

Chained attacks

The researchers, from a Palo Alto-based research outfit, say that Mythos didn’t use a single attack vector in its hack. Instead, it chained two macOS bugs together in an attempt to corrupt the Mac’s memory.

The macOS operating system has been hacked in a new way

Once the macOS memory had been compromised, Mythos was then able to “gain access to parts of the device that should be inaccessible.” It’s also possible that, should the hacks then be used alongside others, the Mac as a whole could become compromised.

For its part, an Apple spokesperson told the WSJ that the company is reviewing and validating the security team’s findings.

“Security is our top priority, and we take reports of potential vulnerabilities very seriously,” Apple reportedly said. However, Apple hasn’t yet said whether it has patched the bugs Mythos used for its hack.

In fact, it isn’t yet clear exactly what Mythos did and didn’t do. That shouldn’t be all that surprising, with details likely to remain fuzzy until Apple has addressed the security flaws that were leveraged.

However, the report also notes that the attack couldn’t be achieved by Mythos alone. Without the skills of the hackers working alongside the AI, it is believed the hack wouldn’t have been possible.

As for Mythos, Anthropic intends for it to be used for good. Project Glasswing was launched to allow Mythos to be used as a way to identify security flaws so they can be addressed.


Tech

Recursive Superintelligence raises $650m at $4.65bn valuation to build self-improving AI

TL;DR

Recursive Superintelligence, a startup founded by former leaders from Meta AI, Google DeepMind, OpenAI, and Salesforce AI, has emerged from stealth with $650 million in funding at a $4.65 billion valuation. Led by Richard Socher and co-founded by ex-Meta FAIR director Yuandong Tian, the company is pursuing recursive self-improvement: AI systems that autonomously improve themselves in an accelerating loop. GV, Greycroft, Nvidia, and AMD backed the round. The startup has fewer than 30 employees and no released product.

The idea that an AI system could improve itself, then use those improvements to improve itself again, faster, in an accelerating loop that eventually outpaces every human researcher on earth, has been a fixture of computer science folklore since at least the 1960s. For most of that time, it remained comfortably theoretical. Now someone has raised $650 million to build it.

Recursive Superintelligence, a startup founded by former leaders from Meta AI, Google DeepMind, OpenAI, Salesforce AI, and Uber AI, emerged from stealth on 13 May with a $4.65 billion valuation and a thesis that would have sounded like science fiction two years ago but now sits squarely within the Overton window of Silicon Valley ambition. The company’s stated mission: build AI systems that can autonomously discover knowledge, continuously optimise themselves, and evolve in an open-ended loop, much like biological evolution, but without the inconvenience of waiting millions of years.

The team behind the loop

The round was led by GV, Alphabet’s venture capital arm, and Greycroft, with participation from Nvidia and AMD, the two chipmakers whose hardware underpins virtually all frontier AI training. The involvement of both companies is notable: strategic investment from the firms that sell the picks and shovels suggests they see recursive self-improvement not as a theoretical curiosity but as a near-term compute customer.

The founding team is built to signal credibility. Richard Socher, the former chief scientist at Salesforce and founder of the AI search engine You.com, leads the company alongside seven co-founders: Yuandong Tian, formerly a research scientist director at Meta’s Fundamental AI Research lab (FAIR), where he led work on reinforcement learning, LLM reasoning, and AI-guided optimisation; Tim Rocktaschel, a professor of AI at University College London and former principal scientist at Google DeepMind; Alexey Dosovitskiy, one of the authors of the Vision Transformer (ViT), the 2020 paper that reshaped computer vision research; Josh Tobin, formerly of OpenAI; Caiming Xiong; Tim Shi; and Jeff Clune. Peter Norvig, co-author of Artificial Intelligence: A Modern Approach, the standard university textbook in the field, serves as an adviser.

Tian’s involvement is particularly striking. A graduate of Shanghai Jiao Tong University who went on to earn a PhD in robotics from Carnegie Mellon, Tian spent over a decade at Meta FAIR, where his work spanned some of the most consequential problems in modern AI research. He led the DarkForest Go project, a CNN-based Go AI developed before DeepMind’s AlphaGo captured global attention, and later became lead scientist on ELF OpenGo. His departure from Meta and immediate entry into a startup pursuing the most ambitious goal in the field is itself a signal: the talent that built the current generation of AI systems is now betting that the next generation can build itself.

What recursive self-improvement actually means

The concept is deceptively simple. Instead of human researchers designing each new generation of AI, an AI system would automate parts of its own research and development process, generating improvements that in turn make it better at generating improvements. A company that achieves this first would, in theory, be able to extend its lead over competitors exponentially, because its development velocity would be compounding rather than linear.
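
The compounding-versus-linear distinction is easy to see in a toy model. The numbers below are purely illustrative, not anyone's forecast: in the linear regime each cycle adds a fixed gain, while in the recursive regime each cycle also improves the improvement rate itself.

```python
def linear_progress(cycles, gain=0.05):
    """Human-driven R&D as a caricature: a fixed gain per cycle."""
    capability = 1.0
    for _ in range(cycles):
        capability += gain
    return capability

def recursive_progress(cycles, gain=0.05, meta_gain=0.2):
    """Recursive self-improvement as a caricature: the rate itself grows."""
    capability, rate = 1.0, gain
    for _ in range(cycles):
        capability *= 1 + rate   # the system improves itself...
        rate *= 1 + meta_gain    # ...and improves how fast it improves
    return capability

print(linear_progress(20), round(recursive_progress(20), 1))
```

Set `meta_gain` to a negative value and the same loop instead converges on diminishing returns, which is precisely the open question about whether the runaway scenario or the plateau is what such a system actually delivers.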

Recursive Superintelligence has outlined a staged roadmap. The first step, according to company materials, is to train a system with the capabilities of “50,000 doctors” to automate AI scientific research itself. From there, the company plans to run what it calls a “Level 1” autonomous training system, with a public launch targeted for mid-2026. The funding will be used in part to secure the large-scale compute infrastructure required to run these experiments.

The company currently operates from offices in San Francisco and London, with a team that has expanded beyond 25 researchers and engineers. The round was described as heavily oversubscribed.

The race is already on

Recursive Superintelligence is not pursuing this thesis in isolation. The largest AI laboratories are already using their own models to accelerate research. Anthropic has said that the majority of its code is now written by Claude. OpenAI has reported that GPT-5.5 developed a parallelisation method that boosted token generation speeds by more than 20%. Google DeepMind has built AlphaEvolve, a coding agent designed for scientific and algorithmic discovery. Google co-founder Sergey Brin has reportedly described coding gains as a path to “AI takeoff” internally.

What distinguishes Recursive Superintelligence from these efforts is that none of the major laboratories has organised an entire company around recursive self-improvement as its core commercial thesis. OpenAI, Anthropic, and Google DeepMind all use AI to assist their research workflows, but their businesses are built around selling models and API access. Recursive is betting that the self-improvement loop itself is the product.

Whether that bet pays off depends on a question that remains genuinely open: whether recursive self-improvement produces the kind of runaway acceleration its proponents describe, or whether it converges on diminishing returns as each cycle of improvement yields smaller gains. Anthropic co-founder Jack Clark has estimated a roughly 60% probability that a system capable of training a more powerful successor on its own, without human involvement, will exist by the end of 2028, and a 30% chance by 2027.

For now, what is certain is the price the market has placed on the possibility. Recursive Superintelligence is four months old, has fewer than 30 employees, and has not released a product. It is valued at $4.65 billion. In the current AI investment climate, the promise of a machine that can improve itself is apparently worth more than many companies that have already built one.

Tech

70% of Americans don't want AI data centers near their homes. That's more opposition than nuclear plants get


Given the huge number of negative stories about AI data centers, it’s little wonder that people are against any being built near them.

Copyright © 2025