Tech

Is AI signalling the end of ‘learning on the job’ for young people?


Dr Vivek Soundararajan of the University of Bath discusses how learning and training are changing for future employees in the wake of AI advancement.

For a long time, the deal for a wide range of careers has been simple enough. Entry-level workers carried out routine tasks in return for mentorship, skill development and a clear path towards expertise.

The arrangement meant that employers had affordable labour, while employees received training and a clear career path. Both sides benefitted.

But now that bargain is breaking down. AI is automating the grunt work – the repetitive, boring but essential tasks that juniors used to do and learn from.

And the consequences are hitting both ends of the workforce. Young workers cannot get a foothold. Older workers are watching the talent pipeline run dry.

For example, one study suggests that between late 2022 and July 2025, entry-level employment in the US in AI-exposed fields like software development and customer service declined by roughly 20pc. Employment for older workers in the same sectors grew.

And that pattern makes sense. AI currently excels at administrative tasks – things like data entry or filing. But it struggles with nuance, judgement and plenty of other skills which are hard to codify.

So experience and the accumulation of those skills become a buffer against AI displacement. Yet if entry-level workers never get the chance to build that experience, the buffer never forms.

This matters for organisations too. Researchers using a huge amount of data about work in the US described how professional skills develop over time by likening career paths to the structure of a tree.

General skills (communication, critical thinking, problem-solving) form the trunk, and then specialised skills branch out from there.

Their key finding was that wage premiums for specialised skills depend almost entirely on having those strong general foundational skills underneath. Communication and critical thinking capabilities are not optional extras – they are what make advanced skills valuable.

The researchers also found that workers who lack access to foundational skills can become trapped in career paths with limited upward mobility: what they call “skill entrapment”. This structure has become more pronounced over the past two decades, creating what the researchers described as “barriers to upward job mobility”.

But if AI is eliminating the entry-level positions where those foundations were built, who develops the next generation of experts? If AI can do the junior work better than the actual juniors, senior workers may stop delegating altogether.

Researchers call this a “training deficit”. The junior never learns, and the pipeline breaks down.

Uneven disruption

But the disruption will not hit everyone equally. It has been claimed, for example, that women face nearly three times the risk of their jobs being replaced by AI compared to men.

This is because women are generally more likely to be in clerical and administrative roles, which are among the most exposed to AI-driven transformation. And if AI closes off traditional routes into skilled work, the effects are unlikely to be evenly distributed.

So what can be done? Well, just because the old pathway deal between junior and senior human workers is broken, does not mean that a new one cannot be built.

Young workers now need to learn what AI cannot replace in terms of knowledge, judgement and relationships. They need to seek (and be provided with) roles which involve human interaction, rather than just screen-based tasks. And if traditional entry-level jobs are disappearing, they need to look for structured programmes that still offer genuine skill development.

Older workers, meanwhile, can learn a lot from younger workers about AI and technology. The idea of mentorship can be flipped, with juniors teaching seniors about new tools, while seniors provide guidance on nuance and judgement.

And employers need to resist the urge to cut out junior staff. They should keep delegating to those staff – even when AI can do the job more quickly. Entry-level roles can be redesigned rather than eliminated. For ultimately, if juniors are not getting trained, there will be no one to hand over to.

Protecting the pipeline of skilled and valuable employees is in everyone’s interest. Yes, some forms of expertise will matter less in the age of AI, which is disorienting for people who may have invested years in developing them.

But expertise is not necessarily about storing information. It is also about refined judgement being applied to complex situations. And that remains valuable.

The Conversation
By Dr Vivek Soundararajan

Dr Vivek Soundararajan is a professor of work and equality at the University of Bath. He conducts research on the governance of labour rights in supply chains, inequalities in and around organisations and the future of work. He leads a research initiative called Embed-Dignity and acts as a deputy director of the Centre for Business, Organisations and Society (CBOS) at the University of Bath.


Apple says F1 streaming already exceeds everyone’s expectations


Apple’s exclusive deal for US broadcast rights of Formula 1 was a big shift to streaming from ESPN’s cable coverage of the past, but after the first race (the Australian Grand Prix), it seems to be going well. “The 2026 Formula 1 season on Apple TV is off to a strong start, with fans responding positively and viewership up year over year for the first weekend, exceeding both F1 and Apple expectations,” Apple VP Eddy Cue told The Hollywood Reporter.

Apple didn’t give any ratings or other details, but we can glean some clues from previous data. Last year, ESPN said the Australian GP averaged 1.1 million viewers, way up from the previous record of 659,000 in 2019. If Cue’s comments are accurate, Apple TV’s audience topped that figure, which would be impressive considering that it’s a streaming-only service.

When Apple’s Formula 1 streaming deal was first announced, F1 CEO Stefano Domenicali was bullish. “It will allow us to enter in the houses of other people in a different way, in great quality that is very important for us,” he told Racer. Indeed, Apple is pouring resources into it in a way that ESPN never did. That includes advanced tech that offers multiple ways for fans to watch, including Multiview, Podium Viewer, driver cams and 4K Dolby Vision coverage, Cue noted.

Apple has jumped into Formula 1 racing in other ways as well, taking advantage of a surge in the sport’s popularity aided by Netflix’s series Formula 1: Drive to Survive. Apple’s own F1 movie starring Brad Pitt did huge box office numbers and is likely to see a sequel. Apple also struck a deal with Netflix on the aforementioned Drive to Survive series to share streaming of the current season eight (which details the F1 2025 championship). That agreement will also allow Netflix to stream the F1 Canadian Grand Prix live, along with Apple TV.


Channel Surfer lets you watch YouTube like it’s old-school cable TV


There’s a fun new way to watch YouTube: by channel surfing like a boomer with cable TV. This creative idea comes from London-based developer Steven Irby, who has just launched a web app called Channel Surfer, which presents interesting YouTube videos in an interface resembling a retro-looking TV guide.

In the app, you can browse through different, topic-focused channels and click to tune in as if you were watching live TV.

At launch, there are 40 of these custom-built “channels” to choose from, including those focused on general topics like news, politics, sports, and lifestyle content, as well as a selection of music channels and others with a more tech focus.

The latter group includes channels like “AI & ML,” “Code & Dev,” “Space,” “Retro Tech,” “Tech & Gadgets,” and “Gaming.”


As you move between channels, you join the video being played mid-stream. Meanwhile, the guide informs you of the upcoming content on all the channels and what time of day it will play. You can also scroll ahead to look at programming planned for the next 24 hours.
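Joining a video mid-stream with no back end suggests the schedule is computed client-side: if every viewer derives the “now playing” position from wall-clock time against a fixed playlist, everyone lands on the same video at the same offset without a server coordinating playback. Here is a minimal sketch of that idea in TypeScript; the function and field names are our own illustration, not Channel Surfer’s actual code:

```typescript
interface Video {
  id: string;        // YouTube video ID to embed
  durationSec: number;
}

// Given a channel's fixed playlist and a shared epoch, return which video
// is "on air" right now and how far into it to seek. Assumes nowMs >= epochMs.
function nowPlaying(playlist: Video[], epochMs: number, nowMs: number) {
  const totalSec = playlist.reduce((sum, v) => sum + v.durationSec, 0);
  // Seconds elapsed into the current loop of the playlist
  let offset = Math.floor((nowMs - epochMs) / 1000) % totalSec;
  for (const video of playlist) {
    if (offset < video.durationSec) {
      return { id: video.id, seekSec: offset };
    }
    offset -= video.durationSec;
  }
  throw new Error("unreachable: offset always falls within totalSec");
}
```

Because the result depends only on the shared epoch and the clock, two viewers tuning in at the same moment see the same video at the same offset, and the same arithmetic can project forward to fill in the 24-hour guide.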

This makes watching YouTube feel a lot like watching old-school live television — an experience that’s proven popular on free streaming services like Plex, Pluto TV, Tubi, and others, which offer lineups of live channels playing TV shows and movies. YouTube itself, meanwhile, dominates TV streaming in the U.S.

Plus, a small counter at the bottom of the screen tracks how many other people are currently watching YouTube with you.


Irby says he came up with the idea because finding something to watch can still be a struggle: he wanted to build an experience similar to those streaming services, but for YouTube videos.

“I built Channel Surfer because I’m tired of the algorithms and indecision fatigue,” Irby told TechCrunch. “I miss channel surfing and not having to decide what to watch. I want to just sit and tune into what’s on and not think about what to watch next.”

“My boomer Mom watches cable TV. I want the same, but with my YouTube channels instead. Also, it’s weirdly comforting to know I’m watching with other people,” he said.

The project is one of many new experiments from Irby, a 40-year-old tech industry veteran who has spent the past decade-plus traveling the world.

“I have so much creativity from my long, weird journey. I can’t bear the thought of being a Jira ticket monkey anymore,” he said.

The app seems to be a hit, with Irby noting that Channel Surfer’s brand-new website saw more than 10,000 views on its first day.

Under the hood, Channel Surfer is, for now, a static Next.js site that uses PartyKit and is hosted on Cloudflare. The channels and music it offers are from Irby’s own hand-picked list. GitHub Actions is used to run a script that refreshes the data daily. There’s no back end yet.

And while Claude assisted in the coding process, the site is not “vibe-coded,” Irby says.

The channels themselves are essentially playing YouTube embeds, including YouTube’s ads, so the app should not be violating policy. Eventually, Irby says he’d love to bring the app to TV platforms, like Fire TV, Google TV, and others. (It also runs on mobile devices and tablets, but needs more work.)

At launch, Channel Surfer is a free service offering access to 175 YouTube channels and 25 music playlists. But if you subscribe to Irby’s newsletter, you’re given the option to import your own YouTube subscriptions into the app.

It’s a quick-and-dirty process to do so: You drag a “Channel Surfer” bookmarklet to your bookmarks bar, then open your YouTube subscriptions, and click the bookmarklet. The process begins, directing you back to the app where you paste the copied JSON text into a box and click an “import” button. This adds your own channels to Channel Surfer’s existing lineup, potentially giving you hundreds more channels to watch in this format.
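The import step boils down to scraping channel links from the subscriptions page and serializing them to JSON for pasting. As a rough illustration only (the link shapes, field names and JSON schema here are our guesses, not Channel Surfer’s actual format), the bookmarklet’s core transform might look like:

```typescript
interface Subscription {
  handle: string; // e.g. "veritasium" from a "/@veritasium" link
  title: string;
}

// Turn scraped anchor data from the subscriptions page into the JSON
// payload the user pastes into the import box.
function toImportJson(links: { href: string; text: string }[]): string {
  const subs: Subscription[] = links
    .filter((l) => l.href.startsWith("/@")) // keep channel-handle links only
    .map((l) => ({ handle: l.href.slice(2), title: l.text.trim() }));
  return JSON.stringify(subs);
}
```

In a real bookmarklet, the `links` array would come from `document.querySelectorAll` on the open YouTube page, and the result would be copied to the clipboard before redirecting back to the app.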

The site’s existence harkens back to the web’s earlier days, filled with fun experiments and creativity. For Irby, that’s the point.

“I’m obsessed with showing the world that the old web is still alive and well,” he says. “It’s just buried under a mountain of slop.”


Borderlands 4’s first story pack is the game’s ‘biggest DLC yet’ and launches later this month

Published

on


  • Borderlands 4 Story Pack 1: Mad Ellie and the Vault of the Damned arrives on March 26
  • The paid DLC adds a new Vault Hunter named C4SH and a new zone filled with extra missions
  • An update also brings the highly requested shared progression across characters

Gearbox Software has revealed Borderlands 4 Story Pack 1: Mad Ellie and the Vault of the Damned, which will introduce all-new content when it arrives later this month.

The first paid story pack, which the studio is calling its “biggest DLC yet”, will launch on March 26 and features the all-new Vault Hunter, C4SH the Rogue.


When a Box Is No Longer a Castle: Restoring Wonder in a Screen-Filled World


Recently I placed an empty cardboard box in the center of my preschool classroom of 4-year-olds. No label. No instructions. No purpose given. A few years ago, that simple box would have instantly transformed into something magical — a castle, a race car, a pirate ship, a cozy home for tiny animals. Instead, my students stood around it, waiting. One finally asked, “What is it supposed to be?”

In that moment, I realized something deeper than a simple change in play had occurred. When a box is no longer a castle, it isn’t just imagination that is missing; it is wonder. And in a world filled with screens, schedules and endless stimulation, wonder no longer appears on its own. It must now be intentionally restored.

Hema Khatri

Children today are just as bright, curious and capable as ever. What has changed is the way they engage with the world. Many of my students now hesitate to begin open-ended play without direct instruction. They wait for something to be defined for them instead of defining it themselves.

I often see children repeating lines from television shows or mimicking characters from online videos instead of creating their own stories. The pause before pretend play is longer. The ideas come slower. The confidence to imagine feels weaker.

This is not a sign of laziness or lack of intelligence. It is simply a reflection of the environment they are growing up in: one that is fast-paced, highly structured and heavily influenced by screens. When children spend more time consuming content than creating it, the part of the brain responsible for imagination gets less opportunity to grow. Like any skill, imagination weakens when it is not practiced regularly.

Ready-Made Creations

Technology is not the enemy. Screens can teach, connect, entertain and inform. Many children learn letters, numbers, languages and songs through digital tools. But when screens begin to replace play instead of supporting it, something essential begins to disappear.

Screens provide ready-made worlds: characters, voices, sounds, colors and stories are already created. There is nothing left for the child to imagine. They move from being creators to being viewers.

In the past, boredom often led to creativity. A child with “nothing to do” would invent something. A stick became a wand. A blanket became a cape. A cardboard box became a castle. Today, even a few seconds of boredom is quickly filled with a device.

The silence that once gave birth to imagination is replaced by noise, movement and constant stimulation. Over time, children become more comfortable being entertained than entertaining themselves. Wonder does not disappear; it simply falls asleep.

Why Wonder Matters

Imagination is not just child’s play. It is essential to development. When children pretend, they practice:

  • communication and language
  • emotional expression
  • empathy and understanding
  • planning and problem-solving
  • cooperation and negotiation
  • confidence and independence

Wonder teaches children how to think, not just what to think. In a world that demands creativity, adaptability and emotional intelligence, imagination is not optional. It is foundational.

Restoring Wonder — Together

The responsibility to protect imagination does not belong to teachers alone. Nor does it belong only to parents. It lives in the space between them.

Restoring wonder in children requires partnership. When home and school move with the same intention, magic begins to return. Children feel safe enough to imagine freely again. Imagination does not return because we demand it. It returns when the adults in a child’s life agree to protect the space for it together. Here are simple yet powerful ways families and educators can work together:

  • Make space for unstructured play. Children need time with no agenda, no instructions, and no screen. Even thirty minutes a day can make a difference.
  • Offer open-ended materials. Boxes, fabric, paper, paint, blocks, tape, water, and natural items invite imagination far more than expensive, pre-designed toys.
  • Let boredom exist. When a child says “I’m bored,” it is not a problem to fix. It is an invitation to imagine. Instead of offering a screen, try asking: “What could you do?”
  • Ask open-ended questions. Instead of correcting, wonder with them: “What is this becoming?” “Who lives here?” “What happens next in your story?”
  • Create screen-free moments. Choose a time each day when screens are put away. Protect it as imagination time.
  • Communicate across home and school. A simple conversation with the teacher helps: “What is my child interested in lately?” “What do you see them creating in class?” “How can we support that at home?”

A Quiet Call Back to Wonder

The world is louder now. Faster. More digital than ever before. But a box is still a box. A child is still a child. And inside every child, a castle is still waiting to be built.

Wonder is not gone. It is waiting.

Waiting for silence. Waiting for time. Waiting for trust. Waiting for space.

Perhaps the real question is not what children have lost, but what we, as adults, are willing to return to them. And maybe the moment we choose to slow down, to listen, and to leave a box unlabeled, we will begin to see castles rising again.


Manna’s new Dublin drone trial to test deliveries between hospitals


Manna already has a number of commercial tie-ups in Ireland with the likes of Uber, JustEat and Deliveroo.

Manna’s drones are simulating deliveries from Dublin’s Rotunda Hospital to the Connolly Hospital, in a trial that hopes to encourage drone adoption in healthcare.

The trio of partners want to demonstrate the potential in transporting blood and other life-saving medical supplies speedily using small aerial vehicles.

Manna has previously trialled transporting medical supplies with the UK’s National Health Service. According to the business, that trial showed transportation times between Guy’s Hospital and St Thomas’ Hospital in London were reduced by 28 minutes when drones were used.

Last year, in a joint project with the HSE, the National Ambulance Service and Community First Responders, Manna demonstrated how defibrillators could be delivered to homes much faster by drone than by ambulance. A previous study in Sweden, meanwhile, found that drones beat ambulances to the patient 70pc of the time.

“Today’s simulation is a glimpse of that future,” commented Rotunda Hospital’s laboratory manager John O’Loughlin. “The ability to move blood, samples and other critical supplies between hospitals at speed could transform how we support emergency and planned care in Ireland.”

Manna’s CEO Bobby Healy added: “We’ve proven this technology works at scale. What we’re showing now is how it can be applied in healthcare where minutes matter. Ireland is well-placed to lead the way, and this simulation is about building trust and momentum toward full integration.”

Manna already has a number of commercial tie-ups in Ireland with the likes of Uber, JustEat and Deliveroo to deliver food and other small goods to suburban communities.

Last year, it expanded its focus from Dublin and announced an entry into Cork’s airspace. Overall, it claims to have made more than 250,000 successful deliveries to date.

The 2019-founded company announced a $30m raise last year. Its backers include Tapestry VC, Molten Ventures, Coca-Cola and Dynamo Ventures.


Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans


When the user asks “What enemy military unit is in the region?” the AIP Assistant guesses that it’s “likely an armor attack battalion based on the pattern of the equipment.” This prompts the analyst to request a MQ-9 Reaper drone to survey the scene. They then ask the AIP Assistant to “generate 3 courses of action to target this enemy equipment,” and within moments, the assistant suggests attacking the unit with either an “air asset,” a “long range artillery,” or a “tactical team.” The user tells the assistant to send these options to a fictional commander, who ultimately chooses the tactical team.

The final steps play out quickly: The analyst asks the AIP Assistant to “analyze the battlefield,” then “generate a route” for troops to reach the enemy, and finally “assign jammers” to sabotage their communications equipment. Within seconds, the analyst gives the battle plan a final review and orders the troops to mobilize.

In this scenario, Claude would be the “voice” of the AIP Assistant, and the “reasoning” it uses to generate responses. Other AIP demos show users interacting with large language models in much the same way. In a blog published last week, for example, Palantir detailed how NATO, a Maven Smart System customer, could use an AIP Agent within the tool.

In one graphic, Palantir shows how a third-party defense contractor can select from several of Palantir’s built-in AI models, including different versions of OpenAI’s ChatGPT and Meta’s Llama. The user selects OpenAI’s GPT-4.1, but this is seemingly where a soldier could also have the option to pick Claude instead.

An analyst then views a digital map showing the locations of troops and weapons. In a panel labeled “COA” (courses of action), they click a button that prompts a tool powered by GPT-4.1 to generate five possible military strategies, including one called “Support-by-Fire-then-Penetration-Shock-and-Destruction.”

Another example shows how the system could help interpret satellite imagery: The analyst selects three tanker truck detections on a map, loads them into the AIP Agent’s chat interface, and asks it to “interpret” the imagery and suggest options for what to do next.

Claude may also be used by the military to create intelligence assessments that may inform strike planning later down the line. In June 2025, WIRED viewed a demonstration given by Kunaal Sharma, a public sector lead at Anthropic, showing how the enterprise version of Claude could be used to generate “advanced” reports about a real Ukrainian drone strike dubbed “Operation Spider’s Web.” In the demo, Sharma explained, Claude was relying only on publicly available information. But by partnering with Palantir, he said, the federal government can also pull from internal datasets.

“This is typically something that I might sit for like five hours with a cup of coffee, and read Google, and go into think tanks, and start writing reports and writing a citation, et cetera, et cetera,” Sharma said. “But I don’t have that kind of time.”

In the demo, Sharma asked Claude to create an “interactive dashboard” with information about Operation Spider’s Web, and then translate it into “object types” that could be analyzed in Foundry, one of Palantir’s off-the-shelf software products. He also asked Claude to write a detailed analysis of recent developments in Russia’s border provinces, as well as a 200-word synopsis of the operation’s “military and political effects.”

“Frankly, I’ve been reading these types of things for twenty years—I used to write them, I used to be an academic myself,” Sharma said. “This is actually pretty good.”


London Man Wore Smart Glasses For High Court ‘Coaching’


A witness in a London High Court case was caught using smart glasses connected to his phone to receive real-time coaching while giving evidence during cross-examination. “In my judgement, from what occurred in court, it is clear that call was made, connected to his smart glasses, and continued during his evidence until his mobile phone was removed from him,” said Judge Raquel Agnello KC. “Not only have I held that Jakstys was untruthful in denying his use of the smart glasses and his calls to abra kadabra, but the effect of this is that his evidence is unreliable and untruthful.”

The BBC reports: The claim arose during a ruling by Judge Raquel Agnello KC in a case brought by Laimonas Jakstys over the directorship of a property development company that owns a flat in south-east London and land in Tonbridge. Jakstys was told to remove the glasses after the court noticed he “seemed to pause quite a bit” before answering questions, and that “interference” was heard coming from around the witness. The judge later found that he had been “assisted or coached in his replies to questions put to him during cross examination” during the January trial.

Once the glasses were taken off, an interpreter was still translating a question when Jakstys’ mobile phone began broadcasting a voice — which he later blamed on ChatGPT. Agnello said: “There was clearly someone on the mobile phone talking to Jakstys. He then removed his mobile phone from his inner jacket pocket.” He denied using the smart glasses to receive answers, and denied they were connected to his phone. But the judge said multiple calls had been made from his phone to a contact named “abra kadabra,” whom he claimed was a taxi driver.


Bumble’s AI assistant Bee will learn what you want from a relationship


The dating app unveiled Bee at its Q4 earnings alongside a broader ‘Bumble 2.0’ overhaul, as the company attempts to reverse years of declining user numbers by replacing gamified swiping with AI-driven compatibility.


There is a version of the future, Bumble’s version, at least, where you never swipe on a dating app again. Instead, you have a private conversation with an AI that learns what you actually want from a relationship, sits quietly in the background, and surfaces one carefully chosen match with a note explaining exactly why the two of you belong together.

That is the pitch behind Bee, the AI dating assistant Bumble revealed at its fourth-quarter earnings on 11 March 2026.

The product is still in internal testing, with a public beta coming “soon” according to founder and chief executive Whitney Wolfe Herd, who described it to investors as a personal matchmaker that learns users’ values, relationship goals, communication style, lifestyle, and dating intentions through private chats.

Once Bee identifies two people it considers compatible, both are notified in the app with a summary of why they make a strong match. From there, the conversation and the date are up to the humans.

The launch vehicle for Bee is a new in-app experience called Dates. Users opt in, complete an onboarding conversation with Bee via text or voice, and then wait for a curated introduction rather than scrolling through an open pool of profiles.

Bumble says future applications of Bee will extend to date-night suggestions based on shared interests, and an optional anonymous feedback loop from previous matches to help the system and the user understand what went wrong.

Bee is the headline act in a broader product reset that Bumble is calling Bumble 2.0. Alongside the AI assistant, the company is experimenting with removing the traditional left-right swipe mechanic entirely in select markets, replacing it with what it calls chapter-based profiles: structured layouts that let users showcase different dimensions of their lives, from work and hobbies to values and plans.

Wolfe Herd told investors that this richer format will feed the app’s AI with better signals and give members more texture to respond to than a single profile photo.

“We will be introducing more dynamic ways for somebody to express interest in your story, rather than just your profile, and this is going to drive more dynamic engagement, spark better conversation, and ultimately drive better KPIs across the board.”

The urgency behind Bumble 2.0 is not hard to read. Full-year 2025 revenue fell 10% to $966 million, paid users declined 11.5%, and Q4 alone saw revenue drop nearly 15% year on year to $224.2 million.

The company laid off 30% of its workforce in mid-2025, and Wolfe Herd, who returned as CEO in early 2025 after a period away, told investors she had cut performance marketing spend by 80% as part of a deliberate pivot away from volume-driven acquisition and towards what she described as “higher-intent, organically driven growth.”

That painful restructuring appears to be what investors chose to focus on. Bumble’s stock rose between 25% and 35% in the days following the earnings report, depending on the point of measurement, a rally that looks like a bet on the turnaround rather than a reaction to the numbers themselves.

JPMorgan upgraded BMBL from Underweight to Neutral, citing stabilising leading indicators and the Bumble 2.0 launch, targeted for Q2 2026, as a potential catalyst. Wells Fargo kept an Equal Weight rating but pointed to Q1 EBITDA guidance of $80 million, roughly 42% above analyst estimates, as a sign that margin discipline is taking hold.

Bumble is not alone in the AI pivot. Tinder’s Chemistry feature uses personal questions and camera roll access to sharpen match recommendations. Grindr’s Edge subscription tier offers AI summaries of past chats and compatibility statistics.

What Bee represents, if it works, is a more ambitious commitment: not augmenting the swipe, but replacing it as the primary discovery mechanism.

That is a significant trust ask. Users interacting with Bee will be sharing detailed, intimate information about what they want from relationships with an AI system, and implicitly allowing that system to make consequential social decisions on their behalf.

Wolfe Herd said that privacy and member control of data are “central” to how Bee has been designed, though the company has not yet published detailed documentation on what data Bee retains, how it is used to train models, or what opt-out mechanisms will look like in practice. 

The chapter-based profiles and potential no-swipe markets are scheduled for the second half of 2026, according to the product timeline Wolfe Herd shared with investors.

Bee’s beta will arrive before that. For a company that has spent most of the past two years shrinking, both in ambition and headcount, it is a notably bold sequence of bets all riding on the idea that what daters actually want is less choice, not more.


Perfecting The Shape-Changing Fruit Bowl

Published

on

Fruit bowls have an unavoidable annoyance – not flies and rotten fruit; those would be avoidable if your diet was better. No, it’s that the bowl is never the right size. Either your fruit is sad and lonely in a too-large bowl, or it’s falling out. It’s the kind of existential nightmare that can only be properly illustrated by a late-night infomercial. [Simone Giertz] has a solution to the problem: a shape-changing fruit bowl.

See, it was one thing to make a bowl that could change shape. That was easy, [Simone] had multiple working prototypes. There are probably many ways to do it, but we like [Simone]’s use of an iris mechanism in a flat base to allow radial expansion of the walls. The problem was that [Simone] has that whole designer thing going on, and needs the bowl to be not only functional, but aesthetically pleasing. Oh, and it would be nice if expanding the bowl didn’t create escape routes for smaller fruits, but that got solved many prototypes before it got pretty.

It’s neat to see her design process. Using 3D printing and CNC machining for prototyping is very familiar to Hackaday, but let’s be honest — for our own projects, it’s pretty common to stop at “functional”. Watching [Simone] struggle to balance aesthetics with design-for-manufacturing makes for an interesting 15 minutes, if nothing else. Plus she gives us our inspirational quote of the day: “As much as I feel like I’m walking in circles, I know that product development is a spiral”. Something to keep in mind next time it seems like you’re going around the drain in your own projects. Just be warned, she does have a bit of a potty mouth.

We’ve featured [Simone]’s design decisions here, if you’re interested in seeing how she goes the rest of the way from project to product. We’re pretty sure her face-slapping-alarm clock never made it into the SkyMall catalog, though.


Sales automation startup Rox AI hits $1.2B valuation, sources say

Published

on

Rox, a startup developing autonomous AI agents to boost sales productivity, has raised a new funding round valuing the company at $1.2 billion, according to multiple sources.

The funding included a lead investment from returning backer General Catalyst, two of the people said. Rox and General Catalyst did not respond to TechCrunch’s request for comment.

At the time of the fundraise, which closed last year, Rox was projected to close 2025 with $8 million in annual recurring revenue (ARR), according to two people familiar with the deal.

In November 2024, Rox announced it had raised a total of $50 million, including a seed round led by Sequoia and a Series A round led by General Catalyst, with participation from GV.

Rox was founded in 2024 by the former chief growth officer of New Relic, Ishan Mukherjee. Mukherjee joined New Relic following its 2020 acquisition of Pixie, a software monitoring startup he co-founded.

The startup positions itself as an intelligent revenue operating system that plugs into a company’s current software setup — from Salesforce to Zendesk — and deploys hundreds of AI agents. These agents monitor existing accounts, research prospects, and update CRM software. By consolidating these functions, Rox aims to replace and streamline numerous fragmented software solutions currently used by sales teams.

“Rox’s unique system of AI agents levels up the CRM experience,” GV investor Dave Munichiello wrote in a 2024 blog post when announcing the Series A round. “These agents work constantly behind the scenes to monitor customer activity, identify potential risks and opportunities, and even suggest the best course of action.”

Rox’s competition spans several categories, including established revenue intelligence providers like Gong and Clari, as well as AI sales development platforms such as 11x and Artisan. There is also a steady stream of new AI-native, all-in-one CRM competitors joining the field, such as Monaco — a startup founded by Sam Blond, the former president of corporate spending platform Brex — which launched out of stealth last month.

According to Rox’s website, the company’s customers include Ramp, MongoDB, and New Relic.  
