The Versatile And Easy Way To Organize Your Garage

We may receive a commission on purchases made from links.

Since everyone uses their garage differently, there’s no one-size-fits-all solution for organizing it. For some people, pegboards might be a good option for hanging tools and knickknacks on the walls where you can see them. Others might find that a tote rack works well for decluttering and organizing containers, though it may not suit every garage, since it requires quite a bit of floor space. To get some of your belongings off the floor and out of the way, you might also try installing some slat walls.

Slat walls have horizontal grooves cut into them and are frequently used for decoration. In the grooves between panels, you can hang hooks, shelves, or bins. This storage method can be particularly useful for awkwardly shaped items like bikes, ladders, or yard equipment that won’t fit inside a storage tote. As with pegboards, you can also mount frequently used tools or items on slat walls. Compared to pegboards, though, slat walls can generally carry more weight and give a cleaner look that is better suited to professional spaces.

What to know when setting up your garage slat wall

Before you commit to a slat wall system, it’s important to take an inventory of everything you want to hang. For lighter loads up to roughly 25 pounds, you might be able to get away with MDF-based slat walls. However, if you’re hoping to hang heavy items, slat walls made with commercial-grade PVC or with metal reinforcements might be a better option. Consider also investing in panels with other features, like waterproofing or fire resistance, depending on the kind of work you plan to do in your garage.

Alternatively, many tool manufacturers have built their modular wall storage systems, like the Ryobi LINK or Milwaukee Packout, with functions similar to a slat wall. Each system has its own pros and cons, especially when it comes to the range of compatible accessories. If you want to start with a system that’s easy to expand, we’ve mentioned before that Costco’s $129.99 Trinity Modular Slatwall might make a great spring addition to your garage.

Accessorizing your slat wall

One of the best things about slat walls is that you can reconfigure your setup just by rearranging the accessories. A good place to start is with utility hooks, which can hold everything from extension cords, hand tools, and cleaning tools to large sports equipment. You might also consider installing shelves for paint cans, boxes, or other more oddly shaped equipment. Similarly, baskets are good for keeping related items organized. For example, you might get wire baskets for improved airflow to hold sports equipment, or opaque ones to hide eyesores like cleaning supplies.

If you deal with a lot of small parts, you might also want to buy extra Milwaukee Packout bins, which come in all shapes and sizes. These small bins can hold everything from pens and hand tools to screws, bolts, and nails. Lastly, you can go the extra mile by purchasing, making, or even 3D-printing tool holders that fit your power tools, batteries, and chargers precisely.


Are we on a Road to Nowhere? Seattle’s growth masks deeper anxieties about its future

(GeekWire File Photo / Kurt Schlosser)

I can’t get “Road to Nowhere” out of my head.

The 1985 Talking Heads anthem is built on contradiction — upbeat and anxious at the same time. Songwriter David Byrne once described it as “a resigned, even joyful look at doom.”

That paradox felt especially relevant this week as two headlines collided.

Gene Balk reported in The Seattle Times that Seattle ranked fourth among large U.S. cities for population growth. At nearly the same moment, KUOW’s Monica Nickelsburg reported that Washington ranked second nationally in tech layoffs.

So, what is it? Are we growing, or dying? 

Are cracks beginning to form beneath one of the country’s most successful innovation economies?

Maybe a little of both.

For three decades, Seattle’s tech industry has been an extraordinary economic engine, transforming the region into a global center for cloud computing, e-commerce and artificial intelligence. The construction cranes that once dominated the skyline became symbols of seemingly unstoppable momentum.

But momentum and durability are not the same thing.

And the Seattle psyche — especially in the innovation community we closely follow — is ruptured. 

The office towers are still here. So are Amazon, Microsoft and a deep pool of engineering talent. But something less tangible — confidence — has shifted.

In nearly 30 years covering the tech industry, I’ve never sensed this level of uncertainty among founders, investors and business executives about Seattle’s long-term trajectory. Former business leaders, once proud to call Seattle home, now write op-ed pieces in The Wall Street Journal about how the city lost its way. 

It’s a bad look. 

At the GeekWire Awards last week, a longtime entrepreneur-turned-venture capitalist told me Washington state is “squandering its edge.” Over the past year, we’ve heard versions of that concern repeatedly from startup founders, investors, and technology leaders questioning whether Seattle still wants to compete as aggressively as other innovation hubs.

That doesn’t mean Seattle is collapsing. Far from it.

The region still possesses enormous advantages: world-class research institutions, elite technical talent, major AI leadership and one of the strongest concentrations of cloud and AI expertise anywhere in the world.

But successful cities often make the same mistake successful companies do: They assume the conditions that created prosperity will naturally continue.

History suggests otherwise. 

And in this period of change, our political leaders wave goodbye to entrepreneurs and job creators — smugly taking for granted our past success and essentially fumbling the ball on the 1-yard line. 

And speaking of fumbles on the 1-yard line — sorry, Browns fans, too soon? — that brings me to Cleveland.

Earlier this year on the GeekWire Podcast, Cleveland Mayor Justin Bibb reflected on what happened when one of America’s great industrial cities of the 1950s and 1960s failed to adapt as the economy changed.

“We didn’t pivot fast enough, and the world left us behind,” Bibb told GeekWire. “Now we’re a comeback story built on reinvention and resilience.”

Seattle is not Cleveland. The economic dynamics are different, the industries are different, and the scale of innovation here remains immense.

But the warning isn’t about collapse. It’s about complacency.

Artificial intelligence is already reshaping the industry that built modern Seattle. Venture capitalists are funding leaner startups with fewer employees. Large tech companies are reassessing hiring needs and organizational structures. Entire categories of work are being reevaluated in real time.

At the same time, Seattle faces growing questions around affordability, public safety, regulation, permitting, and whether political leaders fully appreciate how fragile innovation leadership can become once momentum shifts.

Other cities are competing aggressively for talent and investment.

San Francisco Mayor Daniel Lurie has been relentlessly promoting a simple message: “We are a city on the rise.” Miami, Austin, New York and emerging startup hubs across the country and planet are doing the same.

No one talks like that in Seattle. 

We feel oddly uncertain about the industry that helped build Seattle’s modern identity.

That uncertainty matters.

Because the danger facing Seattle is not sudden decline. It’s the slower erosion that happens when a region begins to take its advantages for granted while competitors grow hungrier.

Population growth alone is not proof of long-term economic strength. Neither are cranes, soaring valuations or the presence of a few corporate giants.

The real question is whether Seattle still has the ambition — and civic alignment — to remain one of the world’s leading innovation capitals as the AI era reshapes everything around it.

Cities rarely see the inflection point in the windshield. 

Usually, they only recognize the road has changed once the exit is in the rearview mirror.

Well, we know where we’re going
But we don’t know where we’ve been
And we know what we’re knowing
But we can’t say what we’ve seen

[Editor’s note: Tech veteran and angel investor Charles Fitzgerald — who wrote the guest commentary earlier this year, “A warning to Seattle: Don’t become the next Cleveland” — and GeekWire co-founder John Cook will spend time next month in Cleveland examining what happened there and what lessons Seattle might draw from it. Contact john@geekwire.com to share perspectives or lessons from the Rust Belt that may apply to Seattle’s future.]


OpenAI expected an iPhone gold mine, Apple didn’t

OpenAI is exploring possible legal action against Apple after users engaged with the ChatGPT integrations however they wanted and didn’t sign up for as many paid accounts as CEO Sam Altman expected.

A May 14 Bloomberg report says OpenAI has enlisted outside legal counsel and discussed options that could include sending Apple a breach-of-contract notice. OpenAI reportedly expected deeper ChatGPT integration across iOS, iPadOS, and macOS to drive large subscription growth through Apple’s ecosystem.

Executives at OpenAI now believe the partnership has been financially disappointing and far more limited than anticipated. Apple and OpenAI entered the agreement with distinct priorities.

Apple needed a recognizable AI partner while major Siri upgrades and Apple Intelligence features were still under development. OpenAI sought access to hundreds of millions of Apple users, believing the iPhone could become a significant source of recurring ChatGPT subscriptions worth billions annually.

Internally, the companies reportedly viewed the agreement as a potential counterpart to Apple’s enormously profitable Google Search deal in Safari. The ChatGPT partnership never came close to generating that level of revenue or strategic value.

Neither company structured the partnership around large direct payments. Instead, both sides expected strategic benefits. OpenAI expected customer growth and subscriptions, while Apple gained an interim AI solution as it continued building its own generative AI systems.

Apple kept ChatGPT available, but tightly controlled

Despite Apple’s high-profile WWDC 2024 announcement, ChatGPT inside Apple’s software exposes fewer features than OpenAI’s standalone app and operates within much tighter limits.

Users often need to explicitly invoke “ChatGPT” inside Siri prompts before requests route to OpenAI’s systems. Responses also appear inside smaller interface windows with less information than the standalone app typically provides.

OpenAI’s internal studies found users overwhelmingly preferred the standalone ChatGPT app over Apple’s built-in integrations.

Apple’s version resembles a tightly managed Siri extension rather than a deeply integrated AI layer across the operating system. The standalone app offers features unavailable in Apple’s implementation, such as persistent memory, wider model access, advanced voice tools, custom GPTs, and direct subscription management.

OpenAI executives reportedly expected broader integration across Apple apps and more prominent placement inside Siri. Apple instead kept the implementation relatively narrow while continuing development of its own Apple Intelligence systems.

Apple reportedly worried internally about OpenAI’s privacy standards while negotiating the partnership. The company built Apple Intelligence around on-device processing and Private Cloud Compute to keep more user data inside Apple-controlled infrastructure.

Apple Intelligence and Siri setup screens on two iPhones

That tension over data control goes to the heart of why the two companies were never fully aligned. OpenAI’s cloud-focused systems operate very differently, leaving Apple with less direct control over how outside AI models process user information.

OpenAI increasingly looks like a future Apple competitor

The relationship between the companies has also become more complicated outside software. OpenAI acquired Jony Ive’s AI hardware startup and has aggressively recruited Apple engineers for its growing device ambitions.

OpenAI offered some Apple recruits compensation packages worth millions more than Apple provided. Those moves increasingly position OpenAI as a competitor rather than a partner.

OpenAI is also building new hardware efforts tied to Jony Ive’s AI startup, while Apple prepares for a future where outside AI providers become interchangeable services inside iOS.

Apple plans to introduce an “Extensions” system in iOS 27 that will let users choose among multiple outside AI models. ChatGPT, Anthropic’s Claude, and Google Gemini are all expected to be part of the system.

Outside AI models are becoming interchangeable services inside iOS rather than central platform features. ChatGPT appears set to operate alongside Siri and Apple Intelligence instead of becoming a dominant layer across Apple’s ecosystem.

Any legal case could still prove difficult. Large platform agreements often give companies like Apple broad control over implementation details, placement, and product design decisions. And users will do what they want, regardless of Silicon Valley’s expectations.

OpenAI still hopes to resolve the disagreement privately and may wait until after its legal fight with Elon Musk concludes before taking formal action against Apple. A lawsuit isn’t guaranteed at this stage.


Claude Code’s ‘/goals’ separates the agent that works from the one that decides it’s done

A code migration agent finishes its run, and the pipeline looks green. But several pieces were never compiled — and it took days to catch. That’s not a model failure; that’s an agent deciding it was done before it actually was.

Many enterprises are now seeing that production AI agent pipelines fail not because of the models’ abilities but because the model behind the agent decides to stop. Several methods to prevent premature task exits are now available from LangChain, Google and OpenAI, though these often rely on separate evaluation systems. The newest method comes from Anthropic: /goals on Claude Code, which formally separates task execution and task evaluation.

Coding agents work in a loop: they read files, run commands, edit code and then check whether the task is done. 

Claude Code /goals essentially adds a second layer to that loop. After a user defines a goal, Claude continues turn by turn, but an evaluator model steps in after every step to review the work and decide whether the goal has been achieved. 

The two-model split

Orchestration platforms from all three vendors have identified the same roadblock, but they approach it differently. OpenAI leaves the loop alone and lets the model decide when it’s done, though it does let users attach their own evaluators. LangGraph and Google’s Agent Development Kit make independent evaluation possible, but developers must define the critic node, write the termination logic, and configure observability themselves. 

Claude Code /goals makes the independent evaluator the default, however long or short the run. The developer sets the goal-completion condition via a prompt, for example: /goal all tests in test/auth pass, and the lint step is clean. Claude Code then runs, and every time the agent attempts to end its work, the evaluator model (Haiku by default) checks the agent’s state against that condition. If the condition is not met, the agent keeps running. If it is met, the evaluator logs the achieved condition to the agent conversation transcript and clears the goal. The evaluator makes only one binary decision, done or not done, which is why the smaller Haiku model works well. 

Claude Code makes this possible by separating the model that attempts to complete a task from the evaluator model that ensures the task is actually completed. This prevents the agent from mixing up what it’s already accomplished with what still needs to be done. With this method, Anthropic noted there’s no need for a third-party observability platform — though enterprises are free to continue using one alongside Claude Code — no need for a custom log, and less reliance on post-mortem reconstruction.
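
The executor/evaluator split described above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic’s implementation: the names (agent_loop, evaluator_check), the dict-based state, and the simulated worker turns are all invented, and the evaluator here is a trivial stand-in for the Haiku model.

```python
def evaluator_check(condition, state):
    # Stand-in for the separate, smaller evaluator model (Haiku by default).
    # It returns only a binary verdict: "done" or "not done".
    return "done" if state.get("tests_passing") else "not done"

def agent_loop(condition, worker_turns, max_steps=10):
    """The worker model acts; only the evaluator decides when the goal is met."""
    state, transcript = {}, []
    turn_iter = iter(worker_turns)
    for _ in range(max_steps):
        turn = next(turn_iter)                  # worker model takes one turn
        state.update(turn.get("effects", {}))   # apply that turn's side effects
        transcript.append(turn["note"])
        # The evaluator reviews after every turn; the worker never self-certifies.
        if evaluator_check(condition, state) == "done":
            transcript.append(f"goal achieved: {condition}")  # logged, goal cleared
            return transcript
    return transcript

# Simulated run: the goal condition is only satisfied after the second turn,
# so the loop cannot exit early even if the worker "felt" done after turn one.
turns = [
    {"note": "edit src/auth.py", "effects": {}},
    {"note": "run test suite", "effects": {"tests_passing": True}},
]
result = agent_loop("all tests in test/auth pass", turns)
```

The key property is that the worker never sets its own done flag; the loop ends only on the evaluator’s binary verdict.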

Competitors like Google ADK support similar evaluation patterns. Google ADK deploys a LoopAgent, but developers have to architect that logic.

In its documentation, Anthropic said the most successful conditions usually have: 

  • One measurable end state: a test result, a build exit code, a file count, an empty queue

  • A stated check: how Claude should prove it, such as “npm test exits 0” or “git status is clean.”

  • Constraints that matter: anything that must not change on the way there, such as “no other test file is modified”
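
Putting those three properties together, a well-formed goal condition might look like the following (a hypothetical example, not drawn from Anthropic’s documentation):

```
/goal npm test exits 0 and git status is clean; no test file outside test/auth is modified
```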

Reliability in the loop

For enterprises already managing sprawling tool stacks, the appeal is a native evaluator that doesn’t add another system to maintain.

This is part of a broader trend in the agentic space, especially as the possibility of stateful, long-running and self-learning agents becomes more of a reality. Evaluator models, verification systems and other independent adjudication systems are starting to show up in reasoning systems and, in some cases, in coding agents like Devin or SWE-agent. 

Sean Brownell, solutions director at Sprinklr, told VentureBeat in an email that there is interest in this kind of loop, where the task and judge are separate, but he feels there is nothing unique about Anthropic’s approach.

“Yes, the loop works. Separating the builder from the judge is sound design because, fundamentally, you can’t trust a model to judge its own homework. The model doing the work is the worst judge of whether it’s done,” Brownell said. “That being said, Anthropic isn’t first to market. The most interesting story here is that two of the world’s biggest AI labs shipped the same command just days apart, but each of them reached entirely different conclusions about who gets to declare ‘done.’”

Brownell said the loop works best “for deterministic work with a verifiable end-state like migrations, fixing broken test suites, clearing a backlog,” but for more nuanced tasks or those needing design judgment, a human making that decision is far more important.

Bringing that evaluator/task split to the agent-loop level shows that companies like Anthropic are pushing agents and orchestration further toward a more auditable, observable system.


OpenAI is reportedly preparing legal action against Apple; it wouldn’t be the first partner to feel burned

OpenAI is so frustrated with Apple over a ChatGPT integration that failed to deliver the subscribers and prominence it expected that the company is now actively exploring legal action against the iPhone maker, Bloomberg News reported Thursday, citing people familiar with the matter.

According to Bloomberg, OpenAI has enlisted an outside law firm to work through its options, which could include sending Apple a formal breach-of-contract notice without necessarily escalating to a full lawsuit (at least not immediately). Any legal move would likely wait until after the conclusion of OpenAI’s ongoing trial with Elon Musk.

Still, it’s a reminder of what a difficult partner Apple can be for major software companies. The iPhone is an enormously attractive platform for growth, but it’s fully under Apple’s control — and companies that build there are only guests. From Google to Adobe, there’s a long history of Apple showing guests the door when they seem as if they’re getting too comfortable.

TechCrunch has reached out to both OpenAI and Apple for comment.

The OpenAI partnership, announced at Apple’s Worldwide Developers Conference in June 2024, wove ChatGPT into Apple’s operating systems as an option within Siri and as part of the iPhone’s Visual Intelligence feature (allowing users to use their camera to analyze their surroundings and send photos to ChatGPT with related questions).

OpenAI, along with industry watchers, expected the deal might eventually funnel billions of dollars in new subscriptions its way and give the company prime real estate across one of the world’s most-used mobile ecosystems. Instead, Bloomberg reports, OpenAI has grown increasingly aggravated, complaining that the integration has been buried, its features hard to find, and that revenue from the tie-up is nowhere close to projections. “They basically said, ‘OpenAI needs to take a leap of faith and trust us,’” one OpenAI executive told Bloomberg. “It didn’t work out well.”

Apple, for its part, has its own grievances, including concerns about OpenAI’s privacy standards and, according to Bloomberg, irritation over OpenAI’s push into hardware, an effort led by former Apple executives including ex-design chief Jony Ive.

Either way, OpenAI is hardly the first partner of Apple to regret hitching its wagon to the company. Apple has a long history of embracing partners and then alienating them. The most famous case is Google Maps, which was a flagship feature of the original iPhone. It was so central to the device’s appeal that its removal in 2012 — replaced by Apple’s markedly inferior Apple Maps product — became one of the biggest tech fiascos of the decade, prompting a rare public apology from CEO Tim Cook. The friction between the two companies had been building for years at that point, thanks to the rollout of Google’s Android phone a year after the iPhone’s 2007 debut; after Google’s then-CEO Eric Schmidt stepped down from Apple’s board in 2009, that rivalry only intensified.

Adobe has some scar tissue, too. Steve Jobs refused to support Flash on the iPhone and iPad, publishing a famous open letter in 2010 explaining why and effectively dooming the technology. Flash never recovered its footing on mobile.

Then there’s Spotify, which spent years arguing that Apple leveraged its control over the App Store to disadvantage rival music streaming services after launching Apple Music in 2015. The European Commission agreed, fining Apple nearly €1.8 billion in March 2024.

Sometimes these rifts can be overcome in the name of commercial interests. Google is now Apple’s AI infrastructure partner, having struck a multiyear deal in January to power the next generation of Apple Intelligence with Gemini models. Apple is paying Google roughly $1 billion a year.

In the meantime, OpenAI has had its own share of strained relationships lately. Elon Musk’s lawsuit against the company — which accuses OpenAI of abandoning its nonprofit founding mission and operating in bad faith — is currently at trial.

The company has also reportedly navigated tensions with Microsoft, its biggest backer and infrastructure partner, as it pushes for greater independence ahead of its own IPO ambitions.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.


Noise Master Buds 2 Review: Premium Sound Without The Premium Price

The TWS earbuds category, at least in India, is a tricky business. There are too many players, each with their specific appeal. Noise is one such homegrown brand that’s always appealed to budget-conscious buyers. I was one of the early adopters when they first started doing smart bands. Their earphones have been solid overall, but have struggled to stand out among more established players like OPPO and OnePlus. On the flip side, Bose is the most recognized player in the premium headphone space, as they essentially invented ANC back in the 90s.

So, what if you combine the value proposition of Noise with the premiumness and sound of Bose? You get the Noise Master Buds. The first generation of these unique collaboration earbuds was a hit, with experts praising the sound quality and the ANC capabilities. Fast forward to today, and the second generation of the Master Buds are out, bringing some meaningful updates in the sound department. And I called Noise to get these for a review. It’s been a month since that call, and I even took them with me to Thailand. Spoiler alert: these earbuds are a hit. Here’s why.

Noise Master Buds 2

Hisan Kidwai

Summary

With the Master Buds 2, the design alone is a conversation starter. The Bose-tuned sound signature focuses on balance rather than overwhelming bass, making everything from vocals to instruments sound clean and detailed. ANC performance is strong enough for everyday commutes and flights, the gesture controls are surprisingly useful, and the companion app is genuinely well-designed rather than feeling like an afterthought.

Design & Comfort

Front design of the Master Buds

When I first got the Master Buds 2, I was genuinely in awe of the design. I haven’t seen a case this pretty, yet still functional, before. The concentric circle pattern on the front is a really nice touch that adds character to the stale world of TWS earbuds. The semicircle shape is also unique and something I have yet to see from other makers. One more benefit of this shape is that you can keep the case standing vertically, almost resembling the clocks of the past, or maybe I’m just too old at this point.

Nevertheless, I was constantly asked by my friends and family what earbuds I was using, all of whom adored the design, so you know it’s not just me yapping. I’d recommend sticking to the silver color, as it’s the best and doesn’t pick up random scratches, but the black also looks pretty decent. There’s a diagonal white LED strip that’s meant to indicate when the earbuds are on. You also get a dedicated pairing button, so no hand gymnastics is needed. Still, the best part of the design for me is the opening/closing mechanism. There’s a good amount of weight behind it, which makes that heavy thud that I love.

Side by side of the Noise Master Buds 2 with regular earphones

I do, however, have a problem with the size. I know these earbuds pack a lot of hardware, but the case size is just massive. I’ve been daily driving the Enco Buds 3 Pro+ for months, and compared to them, the Master Buds feel huge.

Moving to the earbuds themselves, comfort is highly subjective, as everyone’s ears are different. That said, my first day using the Master Buds wasn’t the best. My ears are small, and the medium-sized tips that come pre-installed were just too big for my liking. Thankfully, once I put on the small ear tips, the experience was much better. You’ll still feel them sticking out, but the rubber fins work well to keep them from falling out during my everyday gym struggles with weights. It’s been a couple of hours since I started writing this review, and the Master Buds 2 are still sitting comfortably in my ears.

Sound Quality & Battery Life

A person holding the individual earbuds

Often, budget earbuds lean heavily towards bass and compromise everything else. That’s not the case with the Master Buds 2. The 10 mm drivers tuned by Bose prioritize balance over everything else, and I love that. The mids, where most vocals live, are crisp, letting you hear the small nuances of a singer’s voice. The highs are decent without any sharpness that pinches through, and I love the separation between the different elements. Everything is placed perfectly, which is what many earbuds fail to do. Bass isn’t the selling point of the Master Buds, but the lows are still there, working hard in the background. Just don’t expect a rumble.

If you’re not happy with the signature Bose tuning, which would be surprising, Noise does bundle a few different listening modes like Jazz, Club, and Rock, each with its unique style. For audiophiles, there’s also a custom equalizer. Beyond the basics, Noise has bundled spatial audio with head tracking. I tried some songs on Apple Music, and it works great with instruments placed around you that change as you move your head. Call quality was excellent, with the other person hearing me loud and clear. Speaking of loud, though, there’s a new feature called Sidetone that lets you hear yourself on calls to check if you’re talking too loudly. Honestly, I can think of a lot of people needing that feature.

The Master Buds 2 use their 6 microphones for up to 51dB of active noise cancellation, and I put that number to the test. Using the ANC at Max, the buds do a stellar job of quietening the everyday hum of an AC or fan without a hitch. Voices were muffled enough that I couldn’t hear someone talking near me at half volume. For commuters, the Master Buds 2 will be enough to block out the chatter of a metro, but sudden loud noises, like a horn, will still find their way through. Last but not least, the battery on the buds is very decent. I got about 4-5 hours of use on a single charge. The case can recharge the buds at least 3 times, and once you do run out, fast charging should come to the rescue.

Companion App

A person using the Noise app

It’s no secret that earbud controls are super easy to mess up. Fortunately, that’s not the case with the Noise Master Buds. In fact, there’s a lot going on. Everything is controlled via the Noise Audio app, which looks really well designed: it’s sophisticated, and everything is laid out neatly. And there are plenty of ways to control your music.

The obvious choice is the touch controls on the earbuds themselves. You can configure a single touch, a double touch, a triple touch, a quadruple touch, and even a tap-and-hold. But that’s not all. Noise has incorporated motion controls: nod your head twice to play or pause the music. Beyond that, shaking your head twice to the right skips to the next track, and vice versa. This is a pretty niche feature that I thought would be a gimmick at best. Surprisingly, it works quite well, aside from the occasional miss. You can also nod to accept calls.

No review of a 2026 product will be complete without mentioning AI, and the same is true here. Noise has bundled something called Noise AI, which, in theory, is meant to answer your questions. I gave it a go with one of the recommended questions: suggesting a good cafe near me. The answer was that it doesn’t have access to live restaurant listings and that I should search Google instead. If I have to search Google, then what’s even the point?

Verdict

A person holding the Master Buds 2

At ₹8,999, the Noise Master Buds 2 sit in a very competitive segment, but they do enough differently to stand out. The design alone is a conversation starter, and unlike many flashy earbuds, these actually back it up with substance. The Bose-tuned sound signature focuses on balance rather than overwhelming bass, making everything from vocals to instruments sound clean and detailed. ANC performance is strong enough for everyday commutes and flights, the gesture controls are surprisingly useful, and the companion app is genuinely well-designed rather than feeling like an afterthought.

Of course, they aren’t perfect. The case is noticeably larger than most competitors, and bass lovers might find the tuning a little too restrained. Noise AI also feels half-baked right now and adds very little to the experience. Still, if you’re shopping in this segment, the Master Buds 2 are a must-consider.

Apple 14-inch M5 MacBook Pro 1TB drops to $1,499 at Amazon

The $200 discount applies to the M5 model with 1TB of storage, matching the lowest price on record. Plus, grab an even larger markdown on an upgraded spec.

The $1,499.99 special is available on the 14-inch MacBook Pro with a 10-core CPU and 10-core GPU. It also has 16GB of memory and 1TB of storage, with the $200 discount available in your choice of Space Black or Silver.

Buy M5/16GB/1TB MacBook Pro for $1,499.99

If you’d prefer additional RAM, B&H is running a $300 discount on the M5 model with 24GB of RAM. Note that this configuration comes with 512GB of storage, rather than 1TB, but it rings in at the same $1,499 price as the above deal when ordered in Silver.

Buy M5/24GB/512GB MacBook Pro for $1,499

Both of these deals can be found in our M5 MacBook Pro Price Guide, with markdowns on higher-end M5 Pro and M5 Max models available in our M5 Pro 14-inch MacBook Pro Price Guide and M5 Pro 16-inch MacBook Pro Price Guide.

AI Promised the Audemars Piguet x Swatch Wristwatch. China Will Deliver It

Laden with iconic Royal Oak design cues, most notably the octagonal case, eight-screw bezel, and Petite Tapisserie-patterned dial, the strapless design heavily references 1979’s Royal Oak Pocket Watch reference 5691. Inside is an entirely new hand-wound version of Swatch’s Sistem51 caliber, a movement that is completely machine assembled. Swatch has 15 active patents on this new iteration and has also squeezed in an impressive 90-hour power reserve. There’s even an antimagnetic Nivachron balance spring that was, incidentally, codeveloped with Audemars Piguet.

Swatch’s 1986 POP line, whose watch heads could be physically ejected from their frames and clipped elsewhere, has been plundered here to create a design that allows the Royal Pops to ping out of their bioceramic holder clips, too.

Why There’s No Wristwatch

The simple logic of the pocket watch design authorized by Audemars Piguet (which, unlike Omega, is not part of the Swatch Group) is that it doesn’t upset AP’s existing high-net-worth customer base. Royal Oak owners will no doubt be breathing sighs of relief now that it’s confirmed a version of their coveted pieces won’t be coming to market for a mere few hundred bucks.

However, it’s not clear AP would have taken a financial hit had it delivered what the public so clearly wanted. Omega, which was similarly concerned for its sales when shown the original MoonSwatch prototypes, enjoyed a sizable 50 percent bump in sales following the release of its budget cousin.

The Royal Pop pocket watch, cleverly, is a sidestep designed to generate as much hype as possible yet be as safe as can be for AP’s brand. The Royal Oak design language is unmistakable, but the wrist is off-limits. With Swatch, Audemars built something real for its aspirational fans; it just didn’t build them what they wanted.

What does Swatch get out of this? Valuable PR as well, but far more importantly, the potential of a much-needed sales hit. In 2025, the group posted a 6.75 percent drop in sales and a staggering 55.6 percent decline in operating profit, primarily attributed to a sharp drop in demand for its watches in China, Hong Kong, and Macau. Swatch Group shareholders are not happy.

How China Will Come to the Rescue

Here is where the story gets interesting, for reasons neither Swatch nor AP planned. Because Swatch resurrected its POP design, the Royal Pop can be removed from its housing, and within hours of the announcement, third-party strap brands seized on this prospect, looking to quickly fashion adapters that convert the timepiece from pocket watch to wristwatch. Since Royal Pops were designed to snap in and out of lanyards and desk stands, they should just as easily clip into bracelets and straps made specifically to receive them.

The market recognized in real time that the pocket watch from Swatch and AP tantalizingly contained all that was structurally needed to deliver the very wristwatch that the AI concepts had promised. All that was required now was something to connect the case to a wrist.

Graphon AI raises $8.3M seed to build a pre-model intelligence layer for enterprise AI

TL;DR

Graphon AI emerged from stealth with $8.3 million in seed funding to build a “pre-model intelligence layer” that discovers relationships across multimodal enterprise data before it reaches a foundation model. The round was led by Novera Ventures, with participation from Perplexity Fund, Samsung Next, GS Futures, Hitachi Ventures, and others. The company is named after a mathematical concept co-formalised by its technical advisors, UC Berkeley professors Jennifer Chayes and Christian Borgs. Founded by Arbaaz Khan (CEO), Deepak Mishra (COO), and Clark Zhang (CTO), with team members from Amazon, Meta, Google, Apple, NVIDIA, and NASA. Early customer GS Group (South Korean conglomerate) has deployed Graphon for convenience-store analytics and construction-site safety.

The name is the tell. Graphon AI, which emerged from stealth on Wednesday with $8.3 million in seed funding, is named after a mathematical object that most people in AI have never heard of and that its two most prominent advisors helped invent. A graphon is the limit of a sequence of dense graphs: a continuous function that captures the structure of relationships as networks grow infinitely large. It is the kind of concept that exists at the boundary between pure mathematics and theoretical computer science, and it is now the foundation of a startup that claims to have built the missing layer between enterprise data and the models that are supposed to make sense of it.
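To make the concept concrete: a graphon is a symmetric function W mapping [0,1]² to [0,1], and sampling a finite random graph from one takes only a few lines. The sketch below is purely illustrative of the mathematical object, not of anything in Graphon AI’s software:

```python
import random

def sample_graph(W, n, seed=0):
    """Sample an n-node random graph from a graphon W.

    Each node i gets a latent coordinate u[i] drawn uniformly from
    [0, 1]; edge (i, j) is included with probability W(u[i], u[j]).
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(u[i], u[j]):
                edges.add((i, j))
    return edges

# W(x, y) = x * y: nodes with large latent coordinates connect
# densely, while nodes with small ones stay sparsely connected.
g = sample_graph(lambda x, y: x * y, 50)
```

As n grows, graphs sampled this way converge (in the dense-graph sense) to the graphon that generated them, which is what makes the object a useful limit.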

The company’s thesis is straightforward, even if the mathematics behind it are not. Today’s large language models can process roughly one million tokens at a time. Enterprises hold trillions of tokens across documents, video, audio, images, logs, and databases. Retrieval-augmented generation, the current standard approach, can surface relevant content from that mass, but it cannot discover relationships between pieces of data that were never stored together. An LLM using RAG can answer a question about a specific document. It cannot reason about how that document connects to a surveillance video, a compliance log, and a customer database, at least not without someone having already mapped those connections.

Graphon’s product sits before the model, not inside it. Using graphon functions, a mathematical framework that extends the academic concept into a software layer, the system ingests multimodal data and automatically discovers relational structure across it, producing what the company calls persistent relational memory. The result, in theory, is a representation of an organisation’s data that any foundation model or agent framework can query without being constrained by its context window.
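As a rough illustration of the difference between flat retrieval and a relational layer, consider linking records from different sources through a shared attribute. This toy sketch is only an assumption about the general idea, not a description of Graphon’s implementation; the record fields and the `related` helper are invented for the example:

```python
from collections import defaultdict

# A toy "relational memory": records from different modalities are
# nodes, and a shared attribute value (here, an order ID) links them.
records = [
    {"id": "doc-1", "source": "document", "order": "A17", "text": "refund approved"},
    {"id": "log-9", "source": "audit_log", "order": "A17", "event": "refund_issued"},
    {"id": "cam-3", "source": "video", "order": "B02", "event": "package_scanned"},
]

# Index: attribute value -> records that mention it.
index = defaultdict(list)
for r in records:
    index[r["order"]].append(r)

def related(record):
    """All records linked to `record` through a shared order ID."""
    return [r for r in index[record["order"]] if r["id"] != record["id"]]

# Flat retrieval of "refund approved" would surface doc-1 alone; the
# relational layer also returns the audit-log entry for the same order.
linked = related(records[0])
```

The point of the toy is the contrast: the link between the document and the log entry was never stored in either record, yet it is exactly what a cross-dataset question needs.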

The people behind the mathematics

The founding team comprises Arbaaz Khan as chief executive, Deepak Mishra as chief operating officer, and Clark Zhang as chief technology officer. The company says its broader team includes former researchers and engineers from Amazon, Meta, Google, Apple, NVIDIA, Samsung AI Center, MIT, Rivian, and NASA.

More notable, perhaps, are the technical advisors. Jennifer Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley, and Christian Borgs, a UC Berkeley computer science professor, are both listed as advisors. Borgs was among the group of researchers, alongside Chayes, László Lovász, Vera Sós, and Katalin Vesztergombi, who formalised the graphon as a mathematical concept in 2008. The company is, in effect, commercialising a framework that its advisors co-invented.

Chayes and Borgs described the approach in a joint statement as one that treats relational structure as a first-class element of the AI stack rather than something to be inferred after the fact. The distinction matters because most current AI systems treat data as collections of individual items to be retrieved, not as networks of relationships to be preserved.

An unusual investor table

The seed round was led by Arvind Gupta of Novera Ventures, who made Graphon his fund’s first investment from its flagship vehicle. Gupta is better known as the founder of IndieBio, the life-sciences accelerator, and his pivot toward an AI infrastructure company suggests he sees structural overlap between the problems Graphon addresses and the complex, multimodal data challenges that define scientific computing.

The rest of the cap table reads like a deliberate exercise in strategic diversity. Perplexity Fund, the $50 million venture arm of the AI search company, participated alongside Samsung Next, Hitachi Ventures, GS Futures (the venture arm of South Korean conglomerate GS Group), Gaia Ventures, B37 Ventures, and Aurum Partners, the investment fund affiliated with the ownership group of the San Francisco 49ers.

The mix is telling. A search-AI company, a consumer electronics giant, a Japanese industrial conglomerate, and a Korean chaebol all investing in the same pre-model data layer suggests that the context-window problem Graphon claims to solve is felt across industries that otherwise have little in common. GS Group, which ranks among South Korea’s largest conglomerates with interests spanning energy, retail, and construction, is also an early customer. Ally Kim, a vice president at GS, said the company’s multimodal AI solutions have been applied to analysing customer movement in convenience stores and enhancing safety through CCTV analysis at construction sites.

The technical bet

Graphon’s positioning reflects a broader shift in the AI infrastructure market. The past three years have been dominated by a race to build larger models with longer context windows. But even the most capable models still hit a ceiling: they can process more tokens, but they cannot maintain relational awareness across the volumes of data that large organisations generate. The question Graphon is betting on is whether the solution lies not in extending the context window further, but in structuring data before it enters the window at all.

The company says it has already deployed its platform for enterprise content management, industrial intelligence, agentic workflows, and on-device applications across phones, cameras, wearables, and smart glasses. The breadth of claimed use cases is ambitious for a company at the seed stage, and the absence of independent benchmarks or detailed customer case studies beyond GS Group makes it difficult to assess how far the technology has progressed from concept to production.

What is clear is that the problem Graphon describes is real. The gap between what LLMs can theoretically do and what they can actually do with enterprise data remains one of the most significant constraints on AI deployment. Retrieval-augmented generation has been the industry’s primary answer, and its limitations are well documented: flat retrieval that misses cross-dataset relationships, and context windows that force artificial boundaries on what the model can see. Whether graphon functions offer a fundamentally better approach or merely a more theoretically elegant version of graph-based data structuring is the question the company will need to answer as it moves from stealth-mode mathematics to production-grade infrastructure.

The $8.3 million gives it runway to try. The advisors who co-invented the underlying mathematics give it credibility. But in an AI market that has seen no shortage of startups claiming to have found the missing layer, Graphon’s challenge will be proving that the mathematics it is named after translates into a measurable improvement in how foundation models handle the messy, multimodal reality of enterprise data, not just in theory, but at the scale where theory stops being sufficient.

Best Early Memorial Day Mattress Deals: Helix, Saatva (2026)

Memorial Day brings discounts to the mattress models we test all year long, and the sales have already started. As a seasoned deal hunter, I know that mattresses go on sale pretty often, but whenever someone asks me the best time to buy, I tell them to wait until Memorial Day or Black Friday and Cyber Monday. If you’ve been in the market for a new mattress, now’s the time to act.

The WIRED Reviews team thoroughly tests the best mattresses long-term. We don’t conduct “nap tests” or base recommendations on first impressions. Our top picks are tried-and-true, and they’re on sale right now. We’ll also include some deals on bedding, pillows, mattress toppers, and other sleep accessories as we update this story through Memorial Day on May 26. Prices shown are for queen sizes.

Feel free to check out our many other sleep recommendations, including the best pillows for neck pain, the best body pillows, and the best sunrise alarm clocks. You might also want to read our guide on how to choose a mattress.

WIRED Featured Deals:

Helix Sleep Midnight Luxe Hybrid for $1,824 ($675 off)—Use Code WIRED27

Helix Sleep

Midnight Luxe Hybrid Mattress (14-Inch)

Use our exclusive coupon code WIRED27 to get 27 percent off our very favorite mattress for most people. We’ve seen it sell for about $100 less before, and they’ve thrown in more freebies, but this is still a great deal. Just be aware that the price might drop a little later in the month. In any case, the Midnight Luxe Hybrid is springy and medium-firm and should be well-suited to any style of sleeping. The individually wrapped springs are zoned so that you have more support where you need it to prevent back pain. It also doesn’t get too warm, though it’s thick enough that you’ll want deep-pocketed sheets. It’s been our favorite mattress for over eight years.

Can AI replicate an army of associates? These lawyers are betting their new firm on it

Matt Souza, left, and Sam Shaddox, founders of Talairis Law Group. (Talairis Photo)

Sam Shaddox and Matt Souza have spent years on the inside of big-time legal work, as attorneys at a major Seattle firm and later as general counsels at tech companies. They’ve watched as law firms charge startup clients a fortune for work they believed AI could do faster and cheaper.

Talairis Law Group is their answer. The Seattle-based firm, launching this week, is built around the idea that AI can handle much of the work that associates at big law firms have traditionally done — and that startups shouldn’t have to pay big law prices for it.

“It’s a startup for this AI moment,” Shaddox said, “and it’s the startup that we all need and the Seattle startup scene needs. We’ve been on the other side of the aisle, and now it’s time for us to make a mark.”

The idea isn’t unique. Venture capital has poured into AI-native law firms over the past couple of years, with players like Crosby, Manifest OS, Eudia and Lawhive raising hundreds of millions of dollars combined. But Shaddox and Souza say those firms have each picked a single practice area — contract review, immigration, M&A diligence — leaving a gap nobody has filled.

“They’re all picking one lane,” Souza said, “and there’s not an AI-powered law firm that you can rely on to help you with your day-to-day as things come up, helping to pilot your ship.”

The founders: Shaddox and Souza were both partner-track attorneys at Perkins Coie, the prominent Seattle-based law firm, before moving in-house at Seattle-area tech companies.

Shaddox went on to legal roles at Big Fish Games and OfferUp before serving as general counsel at SeekOut, the AI-powered talent intelligence company. Souza was senior counsel at Zillow before becoming general counsel at Wrapbook, the entertainment payroll and financial platform.

It was that in-house experience, they say, that made the problem impossible to ignore.

“We were getting billed out the ears for work that — as we were adopting AI in-house — we saw law firms were not doing, or not doing it very well,” Souza said. “The whole economic model of law firms is broken. And so that’s where we started.”

How it works: Talairis is built around what the founders call a four-layer architecture:

  • At the base is a large language model — the AI engine.
  • On top of that sits what they call an agentic layer, with more than 100 purpose-built AI agents covering the range of legal tasks a startup might need.
  • Above that is what they call the “client genome” — a stored profile of each client’s business, risk tolerance, contracts and operating history, so advice is never generic.
  • And at the top are Shaddox and Souza themselves, reviewing and signing off on every deliverable.
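A hypothetical sketch of how those four layers could compose, with the layer names taken from the description above and everything else (the function names, the genome fields, the stand-in LLM) invented for illustration:

```python
def llm(prompt):
    """Base layer: a stand-in for a large language model call."""
    return f"draft answer to: {prompt}"

def run_agent(task, genome):
    """Agentic layer: a purpose-built agent composes its prompt
    from the client genome so the advice is client-specific."""
    prompt = f"[{genome['risk_tolerance']} risk] {task}"
    return llm(prompt)

def attorney_review(draft):
    """Top layer: a human attorney signs off on every deliverable."""
    return {"deliverable": draft, "reviewed_by": "attorney"}

# Client genome layer: a stored profile of the client's business.
genome = {"company": "ExampleCo", "risk_tolerance": "low"}
result = attorney_review(run_agent("review this SAFE", genome))
```

The design point the founders emphasize is the top and third layers: model output is always conditioned on the client profile and always gated by a human signature.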

“You’re not getting one-off advice that doesn’t know what your company is or does or how it thinks and operates,” Shaddox said. “You’re getting bespoke outcomes.”

In practice: As an example, Shaddox and Souza point to SAFEs: simple agreements for future equity, a common bridge financing tool for startups. First-time founders often try to handle them on their own, or bring in outside counsel at $1,500 an hour. Either way, manually working through the notes, side letters and cap table implications is painful and error-prone.

Talairis has built an agent specifically for it. Send them a SAFE, they say, and you get back more than a legal opinion.

“They don’t just get back, ‘Hey, here’s our thoughts on this convertible note’ — anybody can do that,” Shaddox said. “Instead, they get back a fully built-out cap table that incorporates the latest note, incorporates the side letter terms, and shows how that’s going to flow through their next financing.”
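The cap-table arithmetic behind a SAFE conversion is mechanical once the terms are fixed. As a simplified, hypothetical example (a post-money valuation cap only, ignoring discounts, option pools, and side letters):

```python
def safe_conversion(investment, cap, pre_round_shares):
    """Convert a post-money SAFE at its valuation cap.

    The SAFE holder's ownership fraction is investment / cap; enough
    new shares are issued so that fraction holds after conversion.
    """
    ownership = investment / cap
    new_shares = ownership * pre_round_shares / (1 - ownership)
    return ownership, new_shares

# A $500k SAFE on a $5M post-money cap buys 10% of the company:
# with 9M existing shares, 1M new shares are issued to the holder.
ownership, shares = safe_conversion(500_000, 5_000_000, 9_000_000)
```

Stacked notes, side letters, and pro-rata rights each perturb this calculation, which is why working through a full cap table by hand is as error-prone as the founders describe.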

The pitch to startups: The firm is launching with paying customers, though Shaddox and Souza aren’t naming them yet. Talairis is bootstrapped and it’s just the two lawyers for now.

  • Pricing: Shaddox says their hourly rate runs roughly half that of a typical big law attorney, and that the AI multiplies output enough that the effective cost to clients is a fraction of what they’d pay elsewhere.
  • Privacy: On the question of whether client data is being used to train AI models — a real concern for startups sharing sensitive legal documents — Shaddox is direct: “The answer is no. Your data is never used to train a model.” Talairis has built confidentiality and attorney-client privilege protections into its architecture from the ground up.

The launch comes the same week Anthropic released Claude for Legal, a suite of more than 20 new connectors and 12 practice-area plugins aimed at bringing AI tools to law firms and in-house legal teams. Shaddox sees the timing as validation.

“Claude for Legal and any other LLM is a base layer,” he said. “Our unique approach is what sits on top: a law firm with elite attorneys, significant proprietary enhancements, per-client scoping, privilege protections, and the agentic architecture a generic plugin lacks. That’s what turns an out-of-the-box LLM into the best possible legal counsel for startups.”

Copyright © 2025