Ebike Charges At Car Charging Stations

Electric vehicles are everywhere these days, and with them comes a whole slew of charging infrastructure. The fastest of these are high-power machines that can deliver enough energy to charge a car in well under an hour, but there are plenty of slower chargers available that take much longer. These don’t tend to require any specialized equipment, which makes them easier to install in homes and other places where there isn’t as much power available. In fact, these chargers generally amount to fancy extension cords, and [Matt Gray] realized he could use them to do other things, like charge his electric bicycle.

To begin the build, [Matt] started with an electric car charging socket and designed a housing for it with CAD software. The housing also holds the actual battery charger for his VanMoof bicycle, wired internally to the car charging socket. These lower-powered chargers don’t require any communication from the vehicle, which simplifies the process considerably. They do still need to be turned on via a smartphone app so the energy can be metered and billed, but with all that out of the way [Matt] was able to take his test rig out to a lamppost charger and boil a kettle of water.
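
For context on the “no communication” point: slow AC charge points follow the IEC 61851 control-pilot scheme, in which the vehicle side announces its state with nothing more than a resistor (plus a diode) on the pilot line. Below is a minimal sketch of that resistor divider using the standard values from the specification; it is an illustration of the signalling involved, not anything from [Matt]’s build.

```python
# Sketch of IEC 61851 control-pilot resistor signalling, the scheme a
# simple (non-communicating) AC charge point uses to detect a vehicle.
# Standard values; illustrative only.

EVSE_SOURCE_V = 12.0      # EVSE pilot high level
EVSE_SERIES_R = 1000.0    # 1 kOhm series resistor inside the charge point

def pilot_voltage(vehicle_r):
    """CP voltage the charge point sees for a given vehicle-side resistance."""
    if vehicle_r is None:          # nothing plugged in
        return EVSE_SOURCE_V
    return EVSE_SOURCE_V * vehicle_r / (vehicle_r + EVSE_SERIES_R)

def pilot_state(v):
    if v > 10.5:  return "A (no vehicle)"
    if v > 7.5:   return "B (vehicle connected)"
    if v > 4.5:   return "C (charging requested)"
    return "error"

for r, label in [(None, "unplugged"), (2740.0, "2.74k"), (882.0, "882R")]:
    v = pilot_voltage(r)
    print(f"{label:>9}: {v:5.2f} V -> state {pilot_state(v)}")
```

The 2.74 kΩ resistor pulls the pilot to roughly 9 V (vehicle present), and switching in a parallel resistor for a combined 882 Ω pulls it to roughly 6 V (ready to charge), so no digital handshake is needed.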

After the kettle experiment, he worked on miniaturizing his project so it fits more conveniently inside the 3D-printed enclosure on the rear rack of his bicycle. The only real inconvenience of this project, though, is that since these chargers are meant for passenger vehicles they’re a bit bulky for smaller vehicles like e-bikes. But this will greatly expand [Matt]’s ability to use his ebike for longer trips, and car charging infrastructure like this has started being used in all kinds of other novel ways as well.


Tech Moves: Amperity and Siteimprove name CMOs; AWS director departs; Gong’s new exec

Bridget Perry. (LinkedIn Photo)

Amperity, a Seattle-based startup that helps companies collect and manage customer data, named Bridget Perry as chief marketing officer.

Earlier in her career, Perry was a marketing director for Microsoft for nearly nine years and worked for more than eight years at Adobe, leaving the role of CMO of Europe, Middle East and Africa. She was most recently interim CMO for Later, an influencer marketing company, and has held strategic advisor roles.

“Bridget has led marketing teams through real platform shifts, not incremental change. She knows what it takes to build credibility in a market and scale it globally,” said Tony Owens, CEO of Amperity, in a statement. The company is ranked No. 39 on GeekWire 200, our list of top Pacific Northwest startups.

Simon Frey. (LinkedIn Photo)

— Seattle-based Simon Frey was promoted to chief customer officer of Gong. He was previously senior vice president of customer outcomes for the San Francisco startup that builds agentic AI technology to optimize revenue performance and automate workflows.

“Simon has spent years partnering closely with our customers, helping them unlock meaningful growth across their revenue organizations,” said Shane Evans, Gong’s chief revenue architect, in a statement.

Frey joined Gong in 2024 after leaving TaxBit, where he was VP of revenue. Other past employers include Qualtrics and McKinsey. He also served as an advisor to Jargon, which was acquired by Remitly.

Elizabeth Scallon, Find Ventures co-founder and board chair. (LinkedIn Photo)

Elizabeth Scallon is now director of healthcare AI startups at Nvidia where she will oversee its global Healthcare and Life Sciences Inception program. Scallon, a longtime leader in Seattle’s startup ecosystem, joins Nvidia from HP where she worked for nearly four years as director of technical and business incubation and strategy.

Scallon is also an affiliate instructor at the University of Washington and has held leadership roles at Amazon and WeWork. She was director of the UW’s CoMotion Labs for five years and co-founded Find Ventures.

“With this role, I’m returning to my roots in biotech and genetics and bringing the skills, experience, and connections I’ve built along the way to do my life’s work,” Scallon said on LinkedIn.

Jenny Brinkley. (LinkedIn Photo)

— After nearly a decade at Amazon Web Services, Jenny Brinkley is resigning as director of security readiness.

“I start a new role next week in a rapidly growing space, and I am excited to be part of something transformative once again. To my AWS colleagues, thank you for the kind words and support,” Brinkley said on LinkedIn.

Brinkley, who is based in Portland, Ore., earlier co-founded an AI startup and ran a consultancy.


— Siteimprove announced Jen Jones as its chief marketing officer. The company, which helps businesses improve their website functionality, is based in Denmark and has an office in Bellevue, Wash., where much of its executive leadership team is based. Jones was previously at commercetools.

Padmashree Koneti has departed her role as chief product officer of Yoodli after roughly five months. The Seattle startup has not yet named a replacement. Yoodli, which is using generative AI to analyze speech and offer tips for improving communication skills, also just hired Alexandra Breymeier as customer success lead. She previously worked at employee referral company ERIN.

Vandana Shah. (LinkedIn Photo)

Vandana Shah is now vice president of product for Scowtt, a Kirkland, Wash.-based startup that wants to reshape how advertisers optimize paid campaigns. The company in December announced a $12 million Series A funding round.

Shah joins Scowtt from Ladder. She was previously Google’s director of product management for the advertising platform, working at the Bay Area company for more than 16 years.

“Having spent years leading complex platform initiatives at Google Ads, I have seen the power of building resilient, customer-first foundations at scale. I am thrilled to bring that experience to Scowtt,” she said on LinkedIn.

Dinesh Govindasamy. (LinkedIn Photo)

— Dinesh Govindasamy was promoted to director of engineering at Meta, supporting teams across Tupperware, Public Cloud and Meta Kubernetes Service. Govindasamy joined Meta in October 2023.

“This milestone is thanks to the mentors, collaborators, and teams who believed in me and pushed me to grow. You know who you are — thank you,” he said on LinkedIn.

Govindasamy, based in the Seattle area, was previously at Microsoft for more than 15 years, leaving the role of group engineering manager in which he led teams working on Azure Kubernetes Service Hybrid and other initiatives.

Beto Yarce. (City of Seattle Photo)

Beto Yarce has started his tenure as director of the City of Seattle’s Office of Economic Development. Yarce joins the city from the U.S. Small Business Administration where he was regional administrator for the Pacific Northwest.

“I am incredibly honored by Mayor Wilson’s trust in me to lead OED and to help shape the economic ecosystems that make Seattle not only a great place to live, work, and play, but also the best place in the country to open, run, and grow a business,” Yarce said in a statement.

He earlier served as executive director of the Seattle nonprofit Ventures for more than eight years. The organization supports underserved entrepreneurs including women, people of color, immigrants and low-income individuals.


Rob Lloyd, Seattle’s chief technology officer, will become executive director of the Center for Digital Government at the end of this month. The organization describes itself as “a national research and advisory institute on information technology policies and best practices in state and local government.”

“Looking forward to working with peers and leaders across the nation on solving the biggest challenges facing our communities, in smarter ways,” he said on LinkedIn.

Lloyd served as CTO for less than two years. Read more about his departure in earlier GeekWire coverage.

Dan Rodgers is now chief financial officer for CTL, a Beaverton, Ore., company that manufactures Chromebooks, desktop PCs, servers and Google Meet video conferencing tools. Rodgers’ past roles include leadership at companies including PwC, McCormick and Schmick’s, Nike and New Seasons Market.


“CTL’s commitment to innovation and its dedication to sustainability present a unique opportunity to pair financial discipline with a mission-driven strategy,” Rodgers said in a statement.

Scott Roberts, a longtime executive at LinkedIn where he is currently an AI product initiative advisor, has joined the board of directors for the San Francisco company Voices.


Sydney Opera House to be lit up by art created on iPad


Apple and the Sydney Opera House are collaborating on a series of creativity projects for young people, including the chance to have iPad-created art projected on the famous building.

How the new artwork will look when projected onto the Sydney Opera House — image credit: Apple

Just as it did for Christmas with its UK headquarters, Apple is inviting people to submit artwork on the iPad to the Sydney Opera House. It’s part of a 12-month collaboration that will see Apple supporting arts programming, including a new international children’s festival later in 2026.

“For 50 years, Apple has been at the forefront of empowering creativity, providing tools that allow people to imagine, design, and share their unique visions with the world,” said Greg Joswiak, Apple’s senior vice president of Worldwide Marketing, in a statement. “We are thrilled to be working with such an iconic Australian cultural landmark to help inspire the next generation of creatives.”


Databricks built a RAG agent it says can handle every kind of enterprise search


Most enterprise RAG pipelines are optimized for one search behavior. They fail silently on the others. A model trained to synthesize cross-document reports handles constraint-driven entity search poorly. A model tuned for simple lookup tasks falls apart on multi-step reasoning over internal notes. Most teams find out when something breaks.

Databricks set out to fix that with KARL, short for Knowledge Agents via Reinforcement Learning. The company trained an agent across six distinct enterprise search behaviors simultaneously using a new reinforcement learning algorithm. The result, the company claims, is a model that matches Claude Opus 4.6 on a purpose-built benchmark at 33% lower cost per query and 47% lower latency, trained entirely on synthetic data the agent generated itself with no human labeling required. That comparison is based on KARLBench, which Databricks built to evaluate enterprise search behaviors.

“A lot of the big reinforcement learning wins that we’ve seen in the community in the past year have been on verifiable tasks where there is a right and a wrong answer,” Jonathan Frankle, Chief AI Scientist at Databricks, told VentureBeat in an exclusive interview. “The tasks that we’re working on for KARL, and that are just normal for most enterprises, are not strictly verifiable in that same way.”

Those tasks include synthesizing intelligence across product manager meeting notes, reconstructing competitive deal outcomes from fragmented customer records, answering questions about account history where no single document has the full answer and generating battle cards from unstructured internal data. None of those has a single correct answer that a system can check automatically.


“Doing reinforcement learning in a world where you don’t have a strict right and wrong answer, and figuring out how to guide the process and make sure reward hacking doesn’t happen — that’s really non-trivial,” Frankle said. “Very little of what companies do day to day on knowledge tasks are verifiable.”

The generalization trap in enterprise RAG

Standard RAG breaks down on ambiguous, multi-step queries drawing on fragmented internal data that was never designed to be queried.

To evaluate KARL, Databricks built the KARLBench benchmark to measure performance across six enterprise search behaviors: constraint-driven entity search, cross-document report synthesis, long-document traversal with tabular numerical reasoning, exhaustive entity retrieval, procedural reasoning over technical documentation and fact aggregation over internal company notes. That last task is PMBench, built from Databricks’ own product manager meeting notes — fragmented, ambiguous and unstructured in ways that frontier models handle poorly.

Training on any single task and testing on the others produces poor results. The KARL paper shows that multi-task RL generalizes in ways single-task training does not. The team trained KARL on synthetic data for two of the six tasks and found it performed well on all four it had never seen.


To build a competitive battle card for a financial services customer, for example, the agent has to identify relevant accounts, filter for recency, reconstruct past competitive deals and infer outcomes — none of which is labeled anywhere in the data.

Frankle calls what KARL does “grounded reasoning”: running a difficult reasoning chain while anchoring every step in retrieved facts. “You can think of this as RAG,” he said, “but like RAG plus plus plus plus plus plus, all the way up to 200 vector database calls.”

The RL engine: why OAPL matters

KARL’s training is powered by OAPL, short for Optimal Advantage-based Policy Optimization with Lagged Inference policy. It’s a new approach, developed jointly by researchers from Cornell, Databricks and Harvard and published in a separate paper the week before KARL.

Standard LLM reinforcement learning uses on-policy algorithms like GRPO (Group Relative Policy Optimization), which assume the model generating training data and the model being updated are in sync. In distributed training, they never are. Prior approaches corrected for this with importance sampling, introducing variance and instability. OAPL embraces the off-policy nature of distributed training instead, using a regression objective that stays stable with policy lags of more than 400 gradient steps, 100 times more off-policy than prior approaches handled. In code generation experiments, it matched a GRPO-trained model using roughly a third as many training samples.
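
For reference, the group-relative advantage that gives GRPO its name can be sketched in a few lines. This is the on-policy baseline the OAPL work moves away from, not code from either paper: for each prompt, several responses are sampled and each response’s reward is standardized against its group.

```python
# Minimal sketch of GRPO's group-relative advantage: each sampled
# response's reward, standardized against the rewards of the other
# responses drawn for the same prompt.

def grpo_advantages(rewards, eps=1e-8):
    """Standardize each reward against its sampling group."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts for one prompt: two failed, one mediocre, one good.
advs = grpo_advantages([0.0, 0.0, 0.5, 1.0])
print([round(a, 3) for a in advs])
```

Because the advantages are computed against rollouts from the *current* policy, any lag between the sampling model and the updated model violates the method’s assumption, which is the mismatch the article says importance sampling patched over and OAPL sidesteps.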


OAPL’s sample efficiency is what keeps the training budget accessible. Reusing previously collected rollouts rather than requiring fresh on-policy data for every update meant the full KARL training run stayed within a few thousand GPU hours. That is the difference between a research project and something an enterprise team can realistically attempt.

Agents, memory and the context stack

There has been a lot of discussion in the industry in recent months about how RAG can be replaced with contextual memory, also sometimes referred to as agentic memory.

For Frankle, it’s not an either/or discussion, rather he sees it as a layered stack. A vector database with millions of entries sits at the base, which is too large for context. The LLM context window sits at the top. Between them, compression and caching layers are emerging that determine how much of what an agent has already learned it can carry forward.

For KARL, this is not abstract. Some KARLBench tasks required 200 sequential vector database queries, with the agent refining searches, verifying details and cross-referencing documents before committing to an answer, exhausting the context window many times over. Rather than training a separate summarization model, the team let KARL learn compression end-to-end through RL: when context grows too large, the agent compresses it and continues, with the only training signal being the reward at the end of the task. Removing that learned compression dropped accuracy on one benchmark from 57% to 39%.
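
The compress-and-continue loop described above can be sketched as follows. Here `search`, `summarize`, and `answer` are hypothetical stand-ins for the model and tool calls, and the context budget is a toy value; none of these names come from the KARL paper.

```python
# Hypothetical sketch of a retrieval agent that compresses its own
# context: keep querying the vector store, and when accumulated context
# exceeds the budget, replace it with a summary and continue.

CONTEXT_BUDGET = 8      # max items held before compressing (toy value)

def run_agent(question, search, summarize, answer, max_steps=200):
    context = []
    for _ in range(max_steps):
        hit = search(question, context)     # one vector-DB query
        if hit is None:                     # nothing new to retrieve
            break
        context.append(hit)
        if len(context) > CONTEXT_BUDGET:   # context window exhausted:
            context = [summarize(context)]  # learned compression step
    return answer(question, context)

# Toy run: the "search" yields ten facts, then stops.
facts = iter(f"fact-{i}" for i in range(10))
result = run_agent(
    "q",
    search=lambda q, c: next(facts, None),
    summarize=lambda c: f"summary({len(c)} items)",
    answer=lambda q, c: c,
)
print(result)
```

In KARL itself the compression policy is learned end-to-end, with the task reward as the only signal; the point of the sketch is just the control flow, showing how an agent can run far more retrieval steps than its context window could ever hold verbatim.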


“We just let the model figure out how to compress its own context,” Frankle said. “And this worked phenomenally well.”

Where KARL falls short

Frankle was candid about the failure modes. KARL struggles most on questions with significant ambiguity, where multiple valid answers exist and the model can’t determine whether the question is genuinely open-ended or just hard to answer. That judgment call is still an unsolved problem.

The model also exhibits what Frankle described as giving up early on some queries — stopping before producing a final answer. He pushed back on framing this as a failure, noting that the most expensive queries are typically the ones the model gets wrong anyway. Stopping is often the right call.

KARL was also trained and evaluated exclusively on vector search. Tasks requiring SQL queries, file search, or Python-based calculation are not yet in scope. Frankle said those capabilities are next on the roadmap, but they are not in the current system.


What this means for enterprise data teams

KARL surfaces three decisions worth revisiting for teams evaluating their retrieval infrastructure.

The first is pipeline architecture. If your RAG agent is optimized for one search behavior, the KARL results suggest it is failing on others. Multi-task training across diverse retrieval behaviors produces models that generalize. Narrow pipelines do not.

The second is why RL matters here — and it’s not just a training detail. Databricks tested the alternative: distilling from expert models via supervised fine-tuning. That approach improved in-distribution performance but produced negligible gains on tasks the model had never seen. RL developed general search behaviors that transferred. For enterprise teams facing heterogeneous data and unpredictable query types, that distinction is the whole game.

The third is what RL efficiency actually means in practice. A model trained to search better completes tasks in fewer steps, stops earlier on queries it cannot answer, diversifies its search rather than repeating failed queries, and compresses its own context rather than running out of room. The argument for training purpose-built search agents rather than routing everything through general-purpose frontier APIs is not primarily about cost. It is about building a model that knows how to do the job.


Is the MacBook Neo the one?


It’s been a wild week for Apple. After announcing a slew of new hardware, the company capped things off with its cheapest laptop ever: the $599 MacBook Neo. It’s low on specs, but high on character and value. In this episode, Devindra and Engadget Deputy Editor Nathan Ingraham dive into the MacBook Neo, as well as the refreshed MacBook Air M5, MacBook Pro M5 Pro/Max, iPad Air M4 and iPhone 17e.

Also, Devindra chats with Spencer Ackerman, author of Forever Wars and recent Iron Man comics, about the ongoing battle between Anthropic and the Department of Defense. It turns out the DOD still used Claude for attacks on Iran, after banning Anthropic’s AI last week. And really, what do these AI companies expect to happen when they jump at military contracts?


Topics

  • Apple announces the MacBook Neo priced at $599 and it’s shockingly great – 0:53

  • MacBook Air got the M5, MacBook Pro got the M5 Pro and M5 Max, and who needs the new iPad Air now? – 22:31

  • Anthropic vs. DoD with Spencer Ackerman, author of The Forever Wars – 30:34

  • Gemini encouraged a man to end his own life to be with his ‘AI wife’ – 58:53

  • Polymarket nixes bets on nuclear detonation after public outcry – 1:01:55

  • No Yōtei on PC: Sony closes down first party titles outside of PS5 – 1:03:56

  • Wildlight Studios’ Highguard shuts down after 46 days live – 1:08:23

  • Working on: Dell’s XPS 14 will be great when the keyboard fix comes through – 1:15:09

  • Pop culture picks – 1:15:58

Credits

Hosts: Devindra Hardawar and Nathan Ingraham
Guest: Spencer Ackerman
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien


Building A Heading Sensor Resistant To Magnetic Disturbances


Light aircraft often use a heading indicator as a way to know where they’re going. Retired instrumentation engineer [Don Welch] recreated a heading indicator of his own, using cheap off-the-shelf hardware to get the job done.

The heart of the build is a Teensy 4.0 microcontroller. It’s paired with a BNO085 inertial measurement unit (IMU), which combines a 3-axis gyro, 3-axis accelerometer, and 3-axis magnetometer into a single package. [Don] wanted to build a heading indicator that was immune to magnetic disturbances, so he ignored the magnetometer readings entirely, using the rest of the IMU data instead.

Upon startup, the Teensy 4.0 initializes a small round TFT display, and draws the usual compass rose with North at the top of the display. Any motion after this will update the heading display accordingly, with [Don] noting the IMU has a fast update rate of 200 Hz for excellent motion tracking. The device does not self-calibrate to magnetic North; instead, an encoder can be used to calibrate the device to match a magnetic compass you have on hand. Or, you can just ensure it’s already facing North when you turn it on.
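
Stripped of the BNO085’s on-chip sensor fusion, the core idea is to integrate yaw rate from a user-calibrated reference heading. A simplified sketch follows; the real build reads the IMU’s fused orientation, and raw gyro integration like this drifts over time.

```python
# Simplified magnetometer-free heading indicator: integrate the gyro's
# yaw rate from a reference heading set by the user (e.g. against a
# magnetic compass). Illustrative only; not [Don]'s firmware.

class HeadingIndicator:
    def __init__(self, rate_hz=200.0):
        self.dt = 1.0 / rate_hz     # 200 Hz update rate, as in the article
        self.heading = 0.0          # degrees; 0 = the calibrated reference

    def calibrate(self, known_heading_deg):
        """Encoder step: align with a compass reading you trust."""
        self.heading = known_heading_deg % 360.0

    def update(self, yaw_rate_dps):
        """One IMU sample: yaw rate in degrees per second."""
        self.heading = (self.heading + yaw_rate_dps * self.dt) % 360.0
        return self.heading

hi = HeadingIndicator()
hi.calibrate(90.0)                  # start out facing East
for _ in range(200):                # one second of a 45 deg/s right turn
    h = hi.update(45.0)
print(round(h, 1))
```

This also makes the calibration trade-off in the article concrete: with no magnetometer, the indicator only ever knows heading relative to wherever it was pointed (or manually set) at startup.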


Thanks to the power of the Teensy 4.0 and the rapid updates of the BNO085, the display updates are nicely smooth and responsive. However, [Don] notes that it’s probably not quite an aircraft-spec build. We’ve featured some interesting investigations of just how much you can expect out of MEMS-based sensors like these before, too.


Xbox surprise: Microsoft reveals ‘Project Helix’ as the codename of its next console


(Xbox Image)

In the days leading up to one of the games industry’s bigger trade conferences, Microsoft has quietly unveiled the code name for its next-generation Xbox console: Project Helix.

The name appeared without initial fanfare in a post on X on Thursday morning.

Xbox CEO Asha Sharma, who just replaced longtime leader Phil Spencer, followed up in a post on her own account, in which she briefly discussed her team’s “commitment to the return of Xbox.” Sharma also noted that Project Helix will “lead in performance” and “play your Xbox and PC games.”

Next week marks the annual Game Developers Conference in San Francisco, which has gained some prominence for news and announcements in recent years. It’s possible that some new information about this next-gen Xbox will come out of this year’s GDC, which is Sharma’s first time at the show and her first as the head of Xbox. Sharma reportedly has plans to meet with both partners and studios while at GDC.

That marks the end of the information about Project Helix that’s currently publicly available. The most remarkable fact about it for now may simply be that it exists, in the face of persistent rumors that Microsoft’s executives would like to sunset Xbox entirely and an ongoing memory shortage caused by the rise of AI data centers.


Despite industry expectations, it looks like Microsoft’s games division plans to stick it out for at least one more console generation. The start of that generation may be pushed off a couple of years from its initially rumored late-2027 starting point, as RAM is currently getting scarcer on the market, but whenever it begins, it looks like Xbox will still be there.


Linux Hotplug Events Explained


There was a time when Linux was much simpler. You’d load a driver, it would find your device at boot up, or it wouldn’t. That was it. Now, though, people plug and unplug USB devices all the time and expect the system to react appropriately. [Arcanenibble] explains all “the gory details” about what really happens when you plug or unplug a device.

You might think, “Oh, libusb handles that.” But, of course, it doesn’t do the actual work. In fact, there are two possible backends: netlink or udev. However, the libusb developers strongly recommend udev. Turns out, udev also depends on netlink underneath, so if you use udev, you are sort of using netlink anyway.

If netlink sounds familiar, it is a generic BSD-socket-like API the kernel can use to send notifications to userspace. The post shows example code for listening to kernel event messages via netlink, just like udev does.
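
In the same spirit as the post’s example, here is a minimal Linux-only sketch of a kernel uevent listener and parser. The sample message at the end is synthetic (built to look like a USB add event), and receiving live uevents may require elevated privileges; this is not the post’s code.

```python
# Listening for raw kernel uevents the way udev does: a
# NETLINK_KOBJECT_UEVENT socket bound to multicast group 1.

import socket

NETLINK_KOBJECT_UEVENT = 15
KERNEL_GROUP = 1            # group 1 = raw kernel events, 2 = udev's own

def open_uevent_socket():
    s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW,
                      NETLINK_KOBJECT_UEVENT)
    s.bind((0, KERNEL_GROUP))   # pid 0 lets the kernel assign an id
    return s                    # then recv() uevent datagrams from it

def parse_uevent(data: bytes) -> dict:
    """Kernel uevents are NUL-separated: 'add@/devpath' then KEY=VALUE pairs."""
    fields = data.split(b"\0")
    header = fields[0].decode()             # e.g. "add@/devices/..."
    env = dict(f.decode().split("=", 1) for f in fields[1:] if b"=" in f)
    env["_header"] = header
    return env

# Parse a captured-style sample (no socket needed for the demo):
sample = b"add@/devices/pci0/usb1/1-1\0ACTION=add\0SUBSYSTEM=usb\0DEVTYPE=usb_device\0"
ev = parse_uevent(sample)
print(ev["ACTION"], ev["SUBSYSTEM"])
```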

When udev sees a device add message from netlink, it resends a related udev message using… netlink! Turns out, netlink can send messages between two userspace programs, not just between the kernel and userspace. That means that the code to read udev events isn’t much different from the netlink example.


The next hoop is the udev event format. It uses a version number, but it seems stable at version 0xfeedcafe. Part of the structure contains a hash code that allows a bloom filter to quickly weed out uninteresting events, at least most of the time.
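
The monitor header described above can be sketched as a small parser. The field layout here follows systemd’s `udev_monitor_netlink_header` (an 8-byte `"libudev\0"` prefix, the `0xfeedcafe` magic in network byte order, then native-endian sizes, offsets, and filter hashes), so double-check it against your udev version; the message at the end is hand-built to exercise the parser rather than captured from a real system.

```python
# Hedged sketch of parsing a udev monitor message: fixed header, then
# the usual NUL-separated KEY=VALUE properties at properties_off.

import struct

UDEV_MONITOR_MAGIC = 0xFEEDCAFE

def parse_udev_monitor(data: bytes) -> dict:
    if data[:8] != b"libudev\0":
        raise ValueError("not a udev monitor message")
    # magic is sent in network byte order; the remaining header fields
    # (sizes, offsets, filter hashes) are native-endian in systemd.
    (magic,) = struct.unpack_from("!I", data, 8)
    if magic != UDEV_MONITOR_MAGIC:
        raise ValueError(f"bad magic {magic:#x}")
    (properties_off,) = struct.unpack_from("=I", data, 16)
    props = data[properties_off:].split(b"\0")
    return dict(p.decode().split("=", 1) for p in props if b"=" in p)

# Build a minimal well-formed message to exercise the parser:
payload = b"ACTION=add\0SUBSYSTEM=usb\0"
header = b"libudev\0" + struct.pack("!I", UDEV_MONITOR_MAGIC)
#         header_size, properties_off, properties_len, then the four
#         filter hash / bloom fields (zeroed here = no filtering)
header += struct.pack("=7I", 40, 40, len(payload), 0, 0, 0, 0)
msg = header + payload
print(parse_udev_monitor(msg))
```

The zeroed filter fields are where the subsystem/devtype hashes and the tag bloom filter the post mentions would go, letting a listener discard uninteresting events before parsing the properties.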

The post documents much of the obscure inner workings of USB hotplug events. However, there are some security nuances that aren’t clear. If you can explain them, we bet [Arcanenibble] would like to hear from you.

If you like digging into the Linux kernel and its friends, you might want to try creating kernel modules. If you get overwhelmed trying to read the kernel source, maybe go back a few versions.


Silicon Valley tech vet: ‘No better time to start companies than now’


Pablo Casilimas (left), founding partner at OneSixOne Ventures, with Sudheesh Nair, co-founder and CEO of TinyFish. (GeekWire Photo / Taylor Soper)

The AI moment is not just another tech cycle — it’s one of the best openings founders have seen in years.

That was the message from Sudheesh Nair, a longtime Bay Area tech leader and co-founder of enterprise web agent startup TinyFish, speaking Thursday at a Seattle Enterprise AI Summit event hosted by OneSixOne.

“There is no better time to start companies than now,” he said. “It’s just magical.”

He believes the AI boom could produce the same kind of lasting infrastructure and category-defining companies that came out of earlier economic and technology shifts. Nair said this wave may be as significant as the internet, and possibly even bigger, because “for the first time, reasoning can be on tap.”

He added: “The way I think of it is, completely be constrained by your imagination — but nothing else.”


Nair previously helped scale Nutanix and ThoughtSpot. In 2024 he launched TinyFish, which raised $47 million last year to build infrastructure for AI agents to operate across the web. “I couldn’t stand on the sidelines,” he said.

He likened today’s moment to a gold rush, noting that most of the enduring outcomes from 1849 were second‑order products and infrastructure: durable jeans, safer elevators, modern banking systems. He said these were built not for the gold rush, but because of the gold rush.

Nair pushed back on the instinct to wait for clarity in a fast‑moving market where even frontier AI labs are still figuring out how their models behave. “No one knows what the heck is happening,” he said.

But Nair was also careful not to romanticize startups. He said company-building is not for everyone, and noted that some people are better suited to join startups or build inside larger organizations. His broader point was that the tools, the pace of change, and the raw opportunity around AI have created a rare moment for people willing to make the startup leap.


“If you just happen to have a pickaxe and shovel, the best thing might be to just jump in,” Nair said.


Fewer weddings, falling sales force The Chinese Wedding Shop to adapt


Fewer couples are getting married, and it has impacted The Chinese Wedding Shop’s sales

Marriage has long been seen as an important union between two families across cultures. But in Singapore, fewer couples are choosing to tie the knot.

Recently released figures show that marriages in Singapore fell by about 6.2%, from 26,328 in 2024 to 24,687 in 2025. This decline follows a broader drop after the country hit a record peak of 29,389 marriages in 2022.

After a 30% increase from 2020 to 2022, there has been an almost 16% drop in the total number of marriages in Singapore since 2022./ Data from the Singapore Department of Statistics

But this trend doesn’t just reflect shifting social priorities in the city-state—it’s forcing Singapore’s wedding industry, from banquet services to bridal studios, to rethink its strategies. And niche retailers like The Chinese Wedding Shop need to find a way to balance tradition with staying relevant in a market where fewer people are saying “I do.”

Vulcan Post speaks to co-founder Michelle Neo on how The Chinese Wedding Shop, a specialist in traditional Chinese wedding products, is navigating a wedding recession.

The Chinese Wedding Shop has been around for almost 20 years

Michelle first established The Chinese Wedding Shop with her husband in 2009, investing S$400,000 of their savings to open their first store at Ang Mo Kio. From the start, they positioned the shop as a one-stop destination for couples seeking items for traditional Chinese wedding customs.


One example is the Guo Da Li (过大礼), a ceremony where the groom’s family presents wedding gifts to the bride’s family as a sign of respect and sincerity.

The Chinese Wedding Shop’s Guo Da Li package./ Image Credit: The Chinese Wedding Shop

Back then, the co-founder shared that there was strong demand for such products.

“At that time, many of our friends who were getting married were extremely stressed trying to source traditional Guo Da Li items,” said Michelle. “They had to run from shop to shop, often with little guidance, and were worried about ‘doing it wrong’ in front of the elders.”

Beyond retail, the business also guides couples through traditional wedding customs. Each visit starts with a conversation to understand the couple’s background, which includes details like:

  1. Dialect group
  2. Family expectations
  3. Wedding timeline
  4. How traditional or modern they wish the ceremony to be

After consolidating this information, the team guides customers step-by-step through the customary sequence, explaining the essentials and optional items, and how certain practices can be simplified or adapted.

“Our focus is to ensure couples feel confident and reassured, rather than overwhelmed,” emphasised Michelle. These consultations helped the business build credibility and eventually expand to five locations across Singapore.


Adapting to a shrinking market

The Chinese Wedding Shop’s store at Ang Mo Kio./ Image Credit: Rong Yi Lim, Amy Yanling Charles via Google Images

But shifting wedding trends over the past few years have forced the business to adapt.

Aside from the declining number of marriages, it has also become more expensive to hold weddings in Singapore. Banquet prices, for instance, increased as much as 10% in 2022 amid inflation, prompting many couples to opt for smaller, more intimate ceremonies.

While she did not disclose figures, Michelle shared that these trends have gradually reduced overall sales volumes. Customers have also become more intentional with their spending, carefully weighing what’s essential and what’s not.

“Previously, couples were more worried about following traditions strictly. Today, they are more focused on why certain customs exist and how they can adapt them meaningfully without it being unnecessarily complex,” said Michelle.

Image Credit: The Chinese Wedding Shop/ Junhong Khang via Google Images

To combat the decline, the shop has gradually introduced new strategies: diversifying its curated traditional wedding sets, offering rentals of individual items like wedding baskets, and creating more flexible packages that let couples personalise dowry sets and other ceremonial essentials.

As more consumers shifted their shopping online and came to value convenience, particularly after the COVID-19 pandemic, the business also began selling its products online in 2020, both through its own website and through e-commerce platforms like Shopee and Lazada.

“These new streams helped offset the drop in traditional transactions,” said Michelle, adding that the shift has pushed the business to “innovate faster” and “serve couples better” rather than relying on tradition alone.

Beyond these initiatives, the shop has embraced a one-stop wedding approach, aiming to position itself as a go-to destination where couples can source more than just traditional items.

For instance, the company also facilitates wedding cake and pastry orders by partnering with local bakeries such as Baker’s Brew, Tong Heng, and Thye Moh Chan.

Another way the shop is positioning itself as a one-stop wedding destination is by expanding beyond retail into an advisory and educational platform. Social media has become a key channel for the business to educate younger couples about traditional wedding customs.

In this video, Michelle breaks down what is needed for a Teochew family to prepare for their Guo Da Li ceremony.

“The goal is to reduce stress for couples while keeping traditions meaningful, not burdensome,” she added.

Diversification is key to survival, but weddings remain their bread and butter

Michelle speaking at a wedding fair as a vendor./ Image Credit: The Chinese Wedding Shop

Michelle shared that the offerings renewed post-pandemic have been well received by couples and parents alike, though she did not elaborate further.

Even so, the shrinking number of marriages means the overall market is likely to continue contracting, raising the question of whether the company should diversify beyond weddings.

Michelle and her team have explored this potential, considering expansions into other Chinese traditions, such as selling festive banners and red packets for Chinese New Year. However, plans aren’t concrete yet, and any move into new areas would need the same level of cultural sensitivity, knowledge, and relevance.

Weddings continue to be the business’s bread and butter for now, as the credibility gained over the years has allowed it to establish a niche in Singapore’s crowded wedding scene.

“For now, our priority is to deepen our wedding-related offerings, such as rental sets for specific uses and modernised solutions, before extending into other areas.”

  • Learn more about The Chinese Wedding Shop here.
  • Read more stories we’ve written on Singaporean businesses here.

Featured Image Credit: The Chinese Wedding Shop

Source link


HiFiMAN Arya & HE1000 WiFi Debut at CanJam NYC 2026 Bringing Planar Magnetic Headphones Into the Wireless Era

The market for high end wireless headphones has expanded rapidly over the past few years. What was once dominated by mainstream Bluetooth models has evolved into a category that now includes serious audiophile contenders from brands such as Focal, Bowers & Wilkins, DALI, Mark Levinson, and Sennheiser. These companies have demonstrated that wireless headphones can deliver a level of performance that appeals to listeners who once insisted on wired designs and dedicated headphone amplifiers.

HiFiMAN is now pushing deeper into that space with the introduction of the HE1000 WiFi and Arya WiFi, two open back planar magnetic headphones that rely on Wi-Fi rather than Bluetooth as their primary wireless connection. The company has experimented with wireless concepts before, but these new models represent a more ambitious attempt to bring high bandwidth wireless audio to planar magnetic designs.

HiFiMAN HE1000 WiFi

HiFiMAN has not announced pricing yet, but the company has indicated that both models will sit closer to the HE1000 Unveiled and Arya Unveiled in its lineup than to its flagship tier. Both headphones are scheduled to begin shipping next month and will be demonstrated publicly at CanJam NYC 2026 this weekend, where we will have an opportunity to spend time listening to them.

According to HiFiMAN, the key difference between these headphones and typical wireless designs is the use of Wi-Fi for audio transmission. Bluetooth’s limited bandwidth has long constrained wireless audio quality, while the Wi-Fi connection used here is designed to support full resolution lossless audio streams without compression.
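A quick back-of-the-envelope calculation makes the bandwidth gap concrete. The figures below are generic PCM arithmetic, not HiFiMAN specifications: the raw bit rate of an uncompressed stereo stream is simply sample rate times bit depth times channel count, and even CD-quality audio approaches the 1 to 2 Mbps that Bluetooth audio links typically sustain in practice.

```python
def pcm_bitrate_mbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    """Raw bit rate of an uncompressed PCM stream in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1e6

# CD quality (44.1 kHz / 16-bit stereo) is already ~1.4 Mbps:
print(pcm_bitrate_mbps(44_100, 16))   # 1.4112

# The maximum format these headphones advertise (768 kHz / 32-bit) is ~49 Mbps,
# far beyond any Bluetooth codec but comfortably within Wi-Fi's throughput:
print(pcm_bitrate_mbps(768_000, 32))  # 49.152
```

This is why lossless transmission at these sample rates effectively requires a Wi-Fi class link rather than Bluetooth.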

Both headphones incorporate HiFiMAN’s proprietary Hymalaya R2R DAC, integrated amplification inside the earcups, and planar magnetic drivers based on the company’s established technologies.

HiFiMAN Arya WiFi

Hymalaya What?

The Hymalaya DAC is HiFiMAN’s proprietary digital to analog converter built around a classic R2R ladder architecture, a design approach favored in many high end audio systems for its natural timing and accurate signal conversion. Unlike the delta sigma DAC chips used in most modern headphones and wireless devices, an R2R DAC converts digital audio using a network of precision resistors that translate binary data directly into analog voltage. This approach can deliver excellent transient response and tonal accuracy, but it is traditionally more complex and power hungry than conventional DAC designs.
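The ladder principle can be illustrated with the ideal transfer function of an N-bit resistor-ladder DAC. This is a generic textbook sketch, not a model of HiFiMAN's Hymalaya implementation: each binary code maps linearly onto a fraction of the reference voltage.

```python
def r2r_output(code: int, bits: int = 16, vref: float = 3.3) -> float:
    """Ideal output voltage of an N-bit R2R ladder DAC for a given binary code.

    The ladder's resistor network produces Vout = Vref * code / 2^bits,
    stepping linearly from 0 V up to just under Vref at full scale.
    """
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for the given bit depth")
    return vref * code / (2 ** bits)

# A half-scale 16-bit sample maps to half the reference voltage:
print(r2r_output(2 ** 15, bits=16, vref=3.3))  # 1.65
```

In a real ladder, the linearity of this mapping depends entirely on how precisely the resistor values are matched, which is why R2R designs are traditionally more complex to manufacture than delta sigma converters.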

HiFiMAN developed the Hymalaya DAC to overcome those limitations by combining the R2R ladder with an FPGA controlled architecture and extremely low power consumption. The result is a compact DAC capable of supporting high resolution audio formats, including PCM up to 768 kHz and native DSD, while drawing far less power than traditional ladder DACs. That efficiency allows HiFiMAN to integrate the technology into portable gear and wireless headphones.

In products such as the HE1000 WiFi and Arya WiFi, the Hymalaya DAC works alongside a built in amplifier to convert digital audio directly inside the headphone. This self contained signal chain allows the headphones to operate more like a complete playback system, handling the digital conversion and amplification internally rather than relying on the DAC and amplifier inside a phone or computer.

HiFiMAN HE1000 WiFi

The HE1000 WiFi is an open back planar magnetic headphone that combines HiFiMAN’s familiar driver architecture with onboard digital processing and wireless connectivity. The design incorporates the company’s Nano Diaphragm driver paired with its Stealth Magnet system, a magnet structure intended to reduce wave diffraction and maintain a more consistent sound wave path.

Inside the earcups is a custom Class A/B balanced amplifier working alongside HiFiMAN’s Hymalaya R2R DAC, which converts incoming digital audio streams directly within the headphone. This approach allows the headphone to function more like a self contained playback system rather than relying on the amplification stage of an external device.

The HE1000 WiFi connects through a dedicated Wi-Fi network created by the headphone itself. Users simply select it from the Wi-Fi menu on a smartphone or tablet and stream audio directly. This wireless link supports full resolution audio transmission and can handle high bandwidth formats including PCM up to 768 kHz and native DSD up to DSD512.

HiFiMAN lists a frequency response of 8 Hz to 65 kHz with THD + N rated at 0.009 percent at 32 ohms when the DAC and amplifier are operating together. Channel separation is specified at 105 dB at 1 kHz, and the headphone weighs 452 grams. The HE1000 WiFi can operate in Wi-Fi, USB audio, or Bluetooth modes and charges via USB Type-C with a typical charging time of three to four hours. Standby time is rated at more than 30 days.

HiFiMAN Arya WiFi

The Arya WiFi uses a similar architecture but incorporates HiFiMAN’s Super Nano diaphragm driver, a thinner variation of the company’s planar driver design intended to improve transient response and overall efficiency. As with the HE1000 WiFi, the Arya WiFi also uses Stealth Magnet technology, integrated amplification, and the Hymalaya R2R DAC.

Like its sibling, the Arya WiFi connects directly to smartphones and tablets through a Wi-Fi network created by the headphone. Once connected, audio streams are transmitted at full resolution without relying on Bluetooth compression. Support for high resolution audio formats is extensive, with PCM playback up to 768 kHz and native DSD up to DSD512.

HiFiMAN specifies a frequency response of 8 Hz to 55 kHz with THD + N rated at 0.009 percent at 32 ohms when the DAC and amplifier are operating together. Channel separation is listed at 105 dB at 1 kHz and the headphone weighs 452 grams.

The Arya WiFi supports Wi-Fi streaming, Bluetooth 5.1 connectivity with SBC, AAC, aptX, aptX HD, and LDAC codecs, and USB-C wired playback. Battery life is rated at approximately 6.5 to 7.5 hours when operating in Wi-Fi mode and up to 23 hours when using Bluetooth. Charging takes roughly three to four hours, and standby time is listed at more than 30 days.

Comparison

| | Arya Unveiled | Arya WiFi | HE1000 Unveiled | HE1000 WiFi |
|---|---|---|---|---|
| Product Type | Headphones | Headphones | Headphones | Headphones |
| Price | $1,449 | ? | $2,299 | ? |
| Design | Open-back | Open-back | Open-back | Open-back |
| Driver Type | Planar Magnetic with Stealth Magnets | Planar Magnetic with Stealth Magnets | Planar Magnetic with Stealth Magnets | Planar Magnetic with Stealth Magnets |
| Frequency Response | 8Hz-65kHz | 8Hz-55kHz | 8Hz-65kHz | 8Hz-65kHz |
| Impedance | 27 Ohms | Not indicated | 28 Ohms | Not indicated |
| Sensitivity | 94dB | Not indicated | 95dB | Not indicated |
| Diaphragm | Nanometer thickness | Nanometer thickness | Nanometer thickness | Nanometer thickness |
| THD+N | N/A | DAC: 0.0055% @ -9dB, 1kHz; DAC + Amp: 0.009% @ 32 ohms, 1kHz | N/A | DAC: 0.0055% @ -9dB, 1kHz; DAC + Amp: 0.009% @ 32 ohms, 1kHz |
| Channel Separation | N/A | 105dB @ 1kHz | N/A | 105dB @ 1kHz |
| Connection Modes | Wired only | WiFi, USB Audio, Bluetooth | Wired only | WiFi, USB Audio, Bluetooth |
| Battery Life (WiFi) | N/A | 6.5-7.5 hours | N/A | 6.5-7.5 hours |
| Battery Life (BT) | N/A | 23 hours | N/A | 23 hours |
| Charging Time | N/A | 3-4 hours | N/A | 3-4 hours |
| Standby Time | N/A | 30+ days | N/A | 30+ days |
| Audio Formats | N/A | PCM 44.1kHz-768kHz, 32/24/16-bit; DSD native 64-512 | N/A | PCM 44.1kHz-768kHz, 32/24/16-bit; DSD native 64-512 |
| Bluetooth Codecs | N/A | SBC, AAC, aptX, aptX HD, LDAC | N/A | SBC, AAC, aptX, aptX HD, LDAC |
| Weight | 413g | 452g | 450g | 452g |

The Bottom Line

The HE1000 WiFi and Arya WiFi take a different approach to wireless headphone design by focusing on bandwidth and signal integrity rather than convenience features. By combining planar magnetic drivers with a built-in Class A/B amplifier, HiFiMAN’s Hymalaya R2R DAC, and a Wi-Fi based audio connection, these headphones are designed to handle high resolution audio without relying solely on Bluetooth compression.

That said, these are clearly not lifestyle wireless headphones. Both models use open-back earcups, which means they leak sound and provide no isolation from the outside world. They are better suited for listening at home, where wireless freedom can be useful without the compromises that normally come with portable wireless designs.

HiFiMAN has not revealed pricing yet, but both models appear positioned for listeners who want something closer to a traditional audiophile headphone system that happens to operate wirelessly. We expect to learn more once we have the opportunity to spend time with both models at CanJam NYC 2026 this weekend.

Source link
