Tech

These S’poreans built a bus navigation app for the visually impaired

[This is a sponsored article with the Singapore Government Partnerships Office.]

For most of us, catching a bus is second nature. A flash of bright green comes into view, we glance at the service number, take one step forward, and that’s it—we’re on our way.

But for someone visually impaired, that everyday task can feel completely different. At a crowded bus stop, they rely on the distant hiss of brakes or snippets of conversation to guess whether the next bus is theirs. 

One wrong step could send them to an unfamiliar neighbourhood and derail their whole day.

This constant tension became painfully real for Lee Kiah Hong, who watched his uncle struggle with his daily commute after losing his sight. “Something as simple as a trip to the market became a source of real anxiety,” he recalled.

Determined to find a solution, he and his three friends—Ryan Yeo, Chia Wee Leong, and Sriram “Ram” Jeyakumar—founded Oculis, a mobile app created to return autonomy, assurance, and dignity to visually impaired commuters. 

They had big ambitions, but reality hit 

(L-R): Ryan Yeo, Chia Wee Leong, Lee Kiah Hong and Sriram “Ram” Jeyakumar; founders of Oculis./ Image Credit: Oculis

Kiah Hong, Ryan, Wee Leong and Ram met while pursuing their Diplomas in Applied AI at Singapore Polytechnic, and bonded over a shared interest in using technology to create meaningful, real-world solutions. The quartet often participates in hackathons and competitions together, and Oculis is one of their many projects.

“What really brought us together was discovering that we complemented each other incredibly well,” said Ram. “We’d often find ourselves debating not just how to build something but whether it was worth building in the first place.” 

So when they learnt about the struggles Kiah Hong’s uncle faced after being diagnosed with glaucoma, they saw how visual impairment can hinder independence—and were inspired to develop a solution.

And it was an ambitious one at that. The founders initially aimed to tackle the broad challenge of navigation, designing a tool that could help with everything from finding your way through shopping malls to reading street signs and identifying landmarks. But the scope proved too broad, and the quartet had to shift gears quickly.

“Tasks that seem simple to us turned out to be far more challenging to replicate through technology, especially at the speed needed for real-world use,” explained Ryan. “We needed our solution to work instantly and reliably, and achieving that across all navigation scenarios felt impossible.”

To gain more firsthand insights and refine their app, the quartet connected with organisations like the Singapore Association of the Visually Handicapped (SAVH) and Purple Symphony. Through them, they met members of the community—not just to test their app, but to understand what daily life was really like.

“Before meeting them, we thought we understood the challenges like general navigation and getting around large spaces,” shared Wee Leong.

“But our understanding was quite surface-level and shaped more by assumptions than actual experiences. Working with members of the community showed us firsthand how they required a lot of reliance on other people when navigating more unfamiliar environments.”

Through their conversations, they discovered one pressing pain point: bus navigation. This insight ultimately led to the creation of Oculis. 

How the app works

Users first start by selecting the bus service they are waiting for, which they can save as their favourite if they wish. They will then wait for the audio signal and lift up their phone to scan for buses, which will audibly announce the bus number and arrival./ Image Credit: Oculis

Navigating the mobile app is simple—all it takes is three Ss: Select, Signal, Scan.

  1. Select: Users select the bus service they are currently waiting for at a specific bus stop. 
  2. Signal: Users wait for the audio signal, which informs them when any of their selected bus services is arriving, using arrival data provided by the Land Transport Authority (LTA). 
  3. Scan: Users lift up their phone to scan for buses, which will audibly announce the bus number and arrival. 
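Conceptually, the three steps above behave like a tiny state machine that loops for every trip. As a loose sketch only (the stage and event names here are invented for illustration, not taken from Oculis’s code):

```python
from enum import Enum, auto

class Stage(Enum):
    SELECT = auto()   # user picks the bus service(s) to watch
    SIGNAL = auto()   # app waits on arrival data, then plays an audio cue
    SCAN = auto()     # user lifts the phone; camera identifies the bus

def next_stage(stage: Stage, event: str) -> Stage:
    """Advance the Select -> Signal -> Scan flow on app events."""
    transitions = {
        (Stage.SELECT, "service_chosen"): Stage.SIGNAL,
        (Stage.SIGNAL, "bus_arriving"): Stage.SCAN,    # arrival feed says a watched bus is near
        (Stage.SCAN, "bus_identified"): Stage.SELECT,  # announce, then reset for the next trip
    }
    return transitions.get((stage, event), stage)  # ignore events that don't apply

stage = Stage.SELECT
for event in ["service_chosen", "bus_arriving", "bus_identified"]:
    stage = next_stage(stage, event)
print(stage)  # Stage.SELECT — back to the start, ready for the next journey
```

Modelling the flow this way keeps each step audible and unambiguous for a non-sighted user: only one kind of action is expected at any moment.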

The process sounds simple to execute, but it took over 200 navigation sessions at more than 100 bus stops with 30 visually impaired users to get it right. Kiah Hong recounted an instance when one of their testers told them he found it difficult to determine where to point the camera—a simple comment that made them realise what the app was missing. 

“We were building an app to help visually impaired people identify buses, yet we’d overlooked the fact that they might not be pointing in the right direction because we had been so focused on making the AI recognition accurate that we hadn’t fully considered the user experience from their perspective,” he explained. 

Oculis conducting their pilot tests with the visually impaired./ Image Credit: Oculis

That comment pushed the team to further develop the app’s haptic feedback functions, which use vibrations to guide users in aiming their cameras in the right direction. “It was a reminder that accessible technology isn’t just about what the app does but about how people will actually use it in real life,” Kiah Hong reflected.
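To illustrate the idea behind such guidance (a hypothetical sketch, not Oculis’s implementation), a vibration cue can be derived from how far a detected bus sits from the centre of the camera frame:

```python
def haptic_cue(bus_center_x: float, frame_width: float, deadband: float = 0.1):
    """Map where a detected bus sits in the camera frame to a steering cue.

    Returns (direction, strength): direction is "left", "right", or "hold",
    and strength in [0, 1] could drive vibration intensity.
    """
    # Normalised offset from frame centre: -0.5 (far left) .. +0.5 (far right)
    offset = bus_center_x / frame_width - 0.5
    if abs(offset) <= deadband:
        return ("hold", 0.0)                    # bus roughly centred: steady, no buzz
    direction = "right" if offset > 0 else "left"
    strength = min(abs(offset) / 0.5, 1.0)      # stronger buzz the further off-centre
    return (direction, strength)

print(haptic_cue(bus_center_x=160, frame_width=640))  # ('left', 0.5) — bus well to the left
```

The deadband matters: without it, the phone would buzz constantly from small hand tremors even when the camera is aimed correctly.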

With these improvements in place, the team has received positive feedback from testers. Some even shared that Oculis was easier to use than existing navigation apps—a sign to the team that they are on the right track.

That said, there is still room for improvement. Ryan shared that the app sometimes struggles with older LED displays on buses and that the team is working to fix this so that Oculis works reliably across Singapore’s entire bus fleet, regardless of how old the buses are. 

Filling the gaps through partnerships

While their tech backgrounds meant that they had the technical side down, that was only part of the equation. The team joined the Build For Good Accelerator, an initiative by Open Government Products (OGP), where they picked up skills beyond tech, such as operations, marketing, and business strategy. 

The financial support from the accelerator also allowed them to focus their efforts on building the best possible solution without worrying about costs. “We didn’t have to cut corners on testing or compromise on features due to budget constraints.”

Beyond the resources and funding, what the team really needed was something they could not build themselves: connections and trust.

We had the technical skills and the energy to build quickly, but we had no relationships with the community. We were just sending cold emails, hoping someone would reply. The Build for Good team opened doors for us, connected us with organisations like Purple Symphony, and gave us credibility. Without that, we were just four students with an app idea.

Lee Kiah Hong, Ryan Yeo, Chia Wee Leong, and Sriram “Ram” Jeyakumar, founders of Oculis

Oculis founders at 2025’s National Day engagement event (left) and Innofest (right)./ Image Credit: Oculis

Since completing their pilot testing in 2025, Oculis has partnered with more organisations that have helped grow its reach and impact. MINDEF Nexus has provided the team with opportunities to present the app at events such as the National Day Parade Stakeholder Engagement, while Purple Symphony connected them with testers who used Oculis in their daily routines.

“These partnerships helped us meet people we wouldn’t have reached otherwise,” the team shared. 

Keeping focused on creating solutions for those in need

The response has been encouraging so far, but it is only just the beginning for the quartet. Currently, Oculis is available via TestFlight for beta testing. Ryan explained that iPhones are the preferred choice for many visually impaired users due to their built-in accessibility features, which is why the team decided to focus on iOS first.

The app has already made its way through several accessibility group chats within the visually impaired community—reaching even Kiah Hong’s uncle, who has finally tried it for himself.

It wasn’t a dramatic, movie-worthy moment, but seeing something we built to address the exact challenge that started the journey felt deeply meaningful. It started with a real person we cared about.

Lee Kiah Hong, founder of Oculis

And it’s not just that one moment that keeps them going. “Every time someone tells us Oculis made their commute less stressful, or we watch someone use it successfully on their own for the first time—those small wins remind us why we’re doing this.”

Looking forward, Wee Leong revealed that the team will focus on enhancing the overall user experience, refining the interface and smoothing out friction points. They are also working on a version that runs natively on Android devices, before eventually expanding to wearables such as smart glasses.

“Success for us isn’t just about the number of downloads or navigation sessions completed, but more about the meaningful impact. In the next one to two years, we want Oculis to become a trusted, everyday tool for the visually impaired community in Singapore,” the quartet shared. 

“Ultimately, we want Oculis to be so seamless and reliable that it fades into the background, just a tool that works, allowing people to focus on where they’re going, not how they’ll get there.” 

Their advice to others with ideas? Find people who can help. The team had the technical skills, the community had the lived experience, and the partnerships gave them access and credibility. None of them could have done it alone.

Through the Build For Good programme, the quartet was able to collaborate with like-minded individuals who remained focused on solving real-world issues for those in need, beyond profits and transactions.

“This alignment in values made all the difference and allowed us to build something that truly serves the community rather than chasing commercial goals,” shared the founders. 

Learn more about Oculis here, or discover other initiatives through the Build For Good programme.

Inspired to launch your own community project? On top of project guidance, the Singapore Government Partnerships Office has launched a new SG Partnerships Fund to support citizen-led initiatives at different stages of development. Applications for the Seed and Sprout tiers of the fund start from 1 Apr 2026. Visit www.sgpo.gov.sg/sgpf to learn more. 

Featured Image Credit: Oculis


Playing DVDs On The Sega Dreamcast

Although the Sega Dreamcast had many good qualities that made it beloved by the millions of people who bought the console, one glaring omission was the lack of DVD video playback. Despite its optical drive being theoretically capable of such a feat, Sega opted for the GD-ROM disc format to avoid coughing up DVD licensing fees, while the PlayStation 2 could play DVD movies out of the box. Fortunately it’s possible to hack DVD capability into the Dreamcast if you aren’t too fussy about the details, as [Throaty Mumbo] recently demonstrated.

For the TL;DW folk among us, there’s a GitHub repository that contains the basic summary and all needed files. Suffice it to say that it is a bit of a kludge, but on the bright side it does not require modifying the Dreamcast itself. Instead it uses a Pico 2 board that emulates a Sega DreamEye camera on the Dreamcast’s Maple bus via the controller port. The Dreamcast then requests image data as if from said camera.

On the DVD side of things there’s a Raspberry Pi 5 that connects to an external USB DVD drive and which encodes the video for transmission via USB to the Pico 2 board. Although somewhat sketchy, it totally serves to get DVDs playing on the Dreamcast. If only Sega had not skimped on those license fees, perhaps.
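To get a feel for what the camera-emulation side of such a pipeline has to do, here is a loose Python sketch of splitting one encoded video frame into sequence-numbered packets that a receiver can reassemble. The packet payload size and header layout below are invented purely for illustration; they are not taken from [Throaty Mumbo]’s project or the actual Maple bus protocol.

```python
import struct

PACKET_PAYLOAD = 512  # hypothetical per-transfer payload size, NOT a real Maple bus limit

def packetize(frame_id: int, frame: bytes, payload_size: int = PACKET_PAYLOAD):
    """Split one encoded frame into sequence-numbered packets.

    Each packet is: frame_id (u16) | seq (u16) | total (u16) | payload.
    The receiver can reassemble the frame once all `total` packets arrive.
    """
    chunks = [frame[i:i + payload_size] for i in range(0, len(frame), payload_size)] or [b""]
    total = len(chunks)
    return [struct.pack("<HHH", frame_id, seq, total) + chunk
            for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Order packets by sequence number and concatenate their payloads."""
    ordered = sorted(packets, key=lambda p: struct.unpack("<HHH", p[:6])[1])
    return b"".join(p[6:] for p in ordered)

frame = bytes(1300)             # stand-in for one encoded video frame
packets = packetize(7, frame)
print(len(packets))             # 3 packets: 512 + 512 + 276 bytes of payload
```

The sequence header is what makes the scheme tolerant of the Dreamcast polling for data at its own pace rather than receiving a continuous stream.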


5 Niche Craftsman Tools You Probably Shouldn’t Waste Your Money On

We may receive a commission on purchases made from links.

The workshop has become a place with specialized gadgets for just about every task you can imagine. However, all this niche inventory often makes your workspace more complicated. It leaves you with a cluttered toolbox packed with pricey, single-purpose items that rarely get used. For many hobbyists and pros, that high-tech solution or a really specific manual tool can be tough to pass up when you’re browsing the hardware store aisles.

If you take a closer look at how useful these items actually are, you’ll see that the classic, versatile tools that have helped tradespeople for generations are often superior to modern, specialized versions. Many of these niche items aren’t good investments because they lack the adaptability of standard equipment.

By taking a close look at these pricey novelties, you can better appreciate the value of a streamlined, multipurpose tool kit. Tools like speed squares, bungee cords, and extraction sockets can handle a wide range of problems across different projects and have many uses, unlike tools designed for a single use. Even with professional marketing and shiny finishes, you’re probably better off leaving these on the shelf.

Digital Angle Gauge

The Craftsman Digital Angle Gauge is impressive, but it’s a lot more than you probably need. It’s built as a four-function tool, so it works as an angle finder, a compound cut calculator, a protractor, and a standard level. It can measure angles from 0 to 220 degrees and stays accurate to the nearest 0.1 degree. It’s made from durable aluminum, but is still pretty heavy at 2.7 pounds.

This is the kind of tool you might stumble across at Home Depot without ever realizing it existed. Digital gauges are great if you need decimal-point precision, but you don’t really need it for framing walls or building furniture. A standard speed square or a sliding T-bevel will give you plenty of accuracy for almost any project. Bringing a device with two delicate LCD screens onto a dusty, rough job site is just asking for problems.

One dropped board or a misplaced hammer swing can shatter those screens, turning your expensive tool into useless aluminum. You’re also going to get tired of dealing with batteries and electronic quirks. Even though the tool is built to be tough, an analog version will never run out of power in the middle of a measurement.

Universal Nut Cracker

The Craftsman Auto Universal Nut Cracker is meant to save you when a nut is stuck and just won’t budge. It uses a hardened steel cutter to split the hardware, working on sizes from 5/16-inch to 7/8-inch across the flats. It’s designed to break rusted or frozen nuts without messing up the threads on the bolt underneath. While that sounds pretty good, it’s often tough to use in real-world situations, like in a cramped engine bay where the frame just won’t fit.

Even though it looks small, it measures 8.35 inches long, 3.35 inches wide, and 1.34 inches high. The maker says you can’t use power tools with it, so you’re stuck using your hands in tight spots where you probably can’t get much leverage anyway. A good set of extraction sockets is usually a better pick for rounded or stuck nuts, since those work on many sizes and aren’t hard to find. Instead of fighting with this tricky gadget, you could just grab a hacksaw or a torch to get that hardware off.

Even the few people who bought it from Craftsman have rated it an average of 1 out of 5 stars. Store reviews, like these poor ones from Ace Hardware, often offer valuable insight from buyers.

Auto Caliper Hanger Set

The Craftsman Auto Caliper Hanger Set is a classic example of a tool you just don’t need to pick up. This universal kit works for cars with disc brakes, and it’s supposed to hold the calipers securely while you’re doing brake work. It’s designed to keep the heavy caliper from hanging on your rubber brake lines, which could really damage them. It’s basically a heavy-duty S-hook with a tough coating, so you can reuse it.

Even with all that in mind, it’s really just a single-purpose item that’ll mostly just clutter up your toolbox, which shouldn’t have tools you never use anyway. You can get the same result with things you probably already have in your garage. A basic bungee cord from Tractor Supply, or even a piece of scrap wire from an old coat hanger works just as well. You just bend the wire into an S-shape, and you’re good to go.

This is basically just a simple piece of bent metal made in China. The set does come with a limited lifetime warranty, and the company says it’ll replace it for any reason, even without a receipt. Still, there’s really no reason to spend your money on a dedicated hanger when alternatives you probably have will work similarly.

Auto LED Inspection Mirror

The Craftsman Auto LED Inspection Mirror might seem like a smart way to check dark engine corners or behind walls, but it’s mostly a gimmick. It comes with a telescoping wand that has a rubber handle, a 2-inch mirror, and a swivel joint to help you get into tricky spots. The shaft begins at 6-1/4 inches and can stretch out to 37-1/2 inches.

The big selling point is its built-in LED light, which is meant to help you spot leaks or dropped bolts. However, that light is actually its main problem. Since it has an LED, the mirror needs a CR2032 battery to operate. These batteries last a while in a key fob, but drain relatively quickly with larger devices.

For daily work, a standard telescoping mirror along with a basic headlamp or flashlight is plenty. When you separate the light from the mirror, you actually get better lighting angles. You can bounce the light off the glass to see what you’re checking out without the glare from the built-in LED messing up the reflection. You could even just put a separate light source in the engine bay to light up the whole area instead of counting on one tiny light on a stick.

3-Jaw Oil Filter Wrench

The Craftsman 3-jaw Oil Filter Wrench is another niche item that most people can live without. It’s marketed as a universal way to handle oil changes on different vehicles, promising to make the job simpler for anyone, regardless of their skill level. The tool uses metal jaws made from heat-treated steel. It’s designed to handle filters from 2 inches to 4-1/2 inches in diameter. It’s a low-profile item that’s 1.61 inches high and about 6.85 inches long, weighing in at 0.82 pounds.

Even with those specs and a lifetime warranty, this gadget isn’t a necessary purchase. It uses a gear mechanism to grip the filter while you turn it with a 3/8-inch or 1/2-inch drive ratchet. While it technically works, it’s not as versatile as some options. You likely already have many of the basic oil change tools from a store like Harbor Freight. A pair of filter pliers can handle the same job and will fit a much wider range of filter sizes.

This wrench is a heavy chunk of metal that takes up space. Sticking to a reliable strap wrench or standard pliers will save you money and keep your collection uncomplicated. Those tools also work for basic plumbing repairs, whereas this wrench does only one thing.

Why these were picked

The hardware aisle is filled with specialized gadgets, like those in the Craftsman catalog, that solve singular problems rather than being multi-function tools. While these get marketed as revolutionary solutions to common mechanical hurdles, they can be a poor investment. These niche items tend to prioritize flashy, single-purpose engineering over the rugged adaptability that has defined the trades for generations.

Standard equipment like speed squares, extraction sockets, bungee cords, and basic strap wrenches gives you a level of durability and broad utility that specialized gear can’t match. These classic alternatives aren’t just way more affordable; they also do the same job without electronic glitches or taking up too much space. Being smart in the workshop is often about being clever, not about buying the fanciest gadgets.


If Samsung launches a Galaxy S27 Pro, the name alone won’t save it

New rumors have reignited chatter about a new Galaxy Pro flagship phone, and it immediately makes sense to me—but not for flattering reasons. Samsung may be adding a fourth Galaxy S27 model next year, with a “Pro” variant expected to sit right below the top-of-the-line S27 Ultra.

This model essentially bridges the gap between the standard and Ultra Galaxy phones with high-end features, minus the S Pen. Some of these premium features could include the S26 Ultra’s new Privacy Display feature.

All of this sounds smart on paper, but it also sounds like acceptance.

After spending time with the Galaxy S26, I have a recurring thought. This compact phone has a solid software experience, reliable cameras, and is generally easy to recommend as a base flagship. But “reliable” is no longer enough when these devices carry flagship pricing.

The regular Galaxy S phones are where the problem is

Samsung’s own S26 comparison page shows the base S26 stuck at 25W charging, while the S26+ goes to 45W, and the Ultra got upgraded to 60W. The camera story lands the same way. Samsung’s Galaxy S26 and Galaxy S26+ share the same 12MP ultrawide, 50MP wide, and 10MP telephoto setup, while the Ultra gets the far more ambitious 50MP ultrawide, 200MP main, and 50MP + 10MP telephoto mix.

So apart from the Ultra, the other two models feel like an afterthought, albeit an expensive four-digit flagship one. This is why a Galaxy S27 Pro could make the S27 lineup feel less lethargic. Just as Google distinguishes the base Pixel 10 from the Pixel 10 Pro, and Apple the base iPhone 17 from the iPhone 17 Pro, there could be a clear distinction in the intermediate model. Right now, the base and Plus models do just enough. The Ultra does everything.

The Galaxy S27 Pro needs to be a course correction, not a rebrand

But a Pro model only works if Samsung uses it to create a truly convincing middle. One with faster charging, stronger camera hardware, and a better reason to exist below the Ultra.

I think Samsung is definitely in need of this change. But the name alone won’t be enough. If Samsung wants the Pro phone to matter, it has to make this non-ultra Galaxy S phone feel like more than just a safe default and start making it feel worth the premium money again. Otherwise, the S27 Pro will just be another label slapped onto a lineup where all the excitement only lives at the top.


Google quietly launched an AI dictation app that works offline

Google on Monday quietly released an offline-first dictation app called “Google AI Edge Eloquent” on iOS to take on the likes of Wispr Flow, SuperWhisper, Willow, and others.

The app is free to download, and once its Gemma-based automatic speech recognition (ASR) models are downloaded, you can start dictating on your phone. In the app, you can see the live transcription, and when you hit pause, the app automatically filters out filler words like “um” and “ah” and polishes the text.
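The filler-word cleanup step described above can be illustrated with a naive sketch. Note that the fixed word list here is just a stand-in for what the app’s Gemma-based models actually do, and the function name is invented:

```python
import re

FILLERS = {"um", "uh", "ah", "er", "hmm"}  # hypothetical list; the real app uses AI models

def polish(transcript: str) -> str:
    """Drop standalone filler words and tidy the remaining whitespace/punctuation."""
    words = transcript.split()
    # Compare each word with punctuation stripped, so "um," still matches "um"
    kept = [w for w in words if re.sub(r"[^\w]", "", w).lower() not in FILLERS]
    text = " ".join(kept)
    return re.sub(r"\s+([,.!?])", r"\1", text)  # no stray space before punctuation

print(polish("So, um, I think, uh, we should ship it, ah, tomorrow."))
# → "So, I think, we should ship it, tomorrow."
```

A fixed list like this breaks down quickly in practice (it cannot handle mid-sentence self-corrections, for instance), which is presumably why Eloquent relies on a model that captures intended meaning rather than matching words.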

Below the transcript are options like “Key points”, “Formal”, “Short”, and “Long” to transform the text.

Image Credit: Screenshot by TechCrunch

You can also turn off cloud mode to use local-only processing. (When cloud mode is on, the app uses cloud-based Gemini models for text cleanup.) Google AI Edge Eloquent can import certain keywords, names, and jargon from your Gmail account, if desired. Plus, you can add your own custom words to the list.

The app keeps a history of your transcription sessions and lets you search through all of them. It can show you the words dictated in the last session, your words-per-minute speed, and the total number of words spoken.

“Google AI Edge Eloquent is an advanced dictation app engineered to bridge the gap between natural speech and professional, ready-to-use text. Unlike standard dictation software that transcribes stumbles and filler words verbatim, Eloquent utilizes AI to capture your intended meaning. It automatically edits out ‘ums,’ ‘uhs,’ and mid-sentence self-corrections, outputting clean, accurate prose,” the company’s App Store description reads.

I was saying “Transcription”. Still early days for this app./ Image Credit: Screenshot by TechCrunch

While the app is currently only available on iOS, the App Store description references an Android version. (We have reached out to Google for more information, and will update the story if we hear back.)

According to the description, Eloquent offers “seamless Android integration,” where it can be set as users’ default keyboard for system-wide access across any text field. Plus, the app will be able to use the floating button feature, similar to the one Wispr Flow uses on Android, for easy access to transcription from anywhere.

AI-powered transcription apps are gaining popularity among users as speech-to-text models get better. With this experimental app, Google is joining the trend. If this test is successful, we could see improved transcription features across Android, too.


How MassMutual and Mass General Brigham turned AI pilot sprawl into production results

Enterprise AI programs rarely fail because of bad ideas. More often, they get stuck in ungoverned pilot mode and never reach production. At a recent VentureBeat event, technology leaders from MassMutual and Mass General Brigham explained how they avoided that trap — and what the results look like when discipline replaces sprawl.

At MassMutual, the results are concrete: 30% developer productivity gains, IT help desk resolution times reduced from 11 minutes to one, and customer service calls cut from 15 minutes to just one or two.

“We’re always starting with why do we care about this problem?” Sears Merritt, MassMutual’s head of enterprise technology and experience, said at the event. “If we solve the problem, how are we gonna know we solved it? And, how much value is associated with doing that?”

Defining metrics, establishing strong feedback loops

MassMutual, a 175-year-old company serving millions of policy owners and customers, has pushed AI into production across the business — customer support, IT, customer acquisition, underwriting, servicing, claims, and other areas.

Merritt said his team follows the scientific method, beginning with a hypothesis and testing whether it has an outcome that will tangibly drive the business forward. Some ideas are great, but they may be “intractable in the business” due to factors like lack of data or access, or regulatory constraints.

“We won’t go any further with an idea until we get crystal clear on how we’re going to measure, and how we’re going to define success.”

Ultimately, it’s up to different departments and leaders to define what quality means: choosing a metric and defining the minimum level of quality before a tool is placed into the hands of teams and partners.

That starting point creates a quick feedback loop. “The things that we find slow us down is where there isn’t shared clarity on what outcome we’re trying to achieve,” which can lead to confusion and constant re-adjusting, said Merritt. “We don’t go to production until there is a business partner that says, ‘Yes, that works.’”

His team is strategic about evaluating emerging tools, and “extremely rigorous” when testing and measuring what “good” means. For instance, they perform trust scoring to lower hallucination rates, establish thresholds and evaluation criteria, and monitor for feature and output drift.
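In general terms, that kind of threshold-gated evaluation can be sketched as a release check. The metric names, threshold values, and function below are illustrative, not MassMutual’s actual tooling:

```python
def release_gate(eval_scores, min_trust=0.90, max_drift=0.05, baseline_mean=None):
    """Decide whether a model version may ship, per a trust threshold and a drift check.

    eval_scores: per-example trust scores in [0, 1] from an offline eval set.
    Returns (ok, reasons) so any failing criterion is visible to the team.
    """
    reasons = []
    mean_score = sum(eval_scores) / len(eval_scores)
    if mean_score < min_trust:
        reasons.append(f"mean trust {mean_score:.3f} below threshold {min_trust}")
    # Drift check: has quality moved too far from the last accepted baseline?
    if baseline_mean is not None and abs(mean_score - baseline_mean) > max_drift:
        reasons.append(f"drift {abs(mean_score - baseline_mean):.3f} exceeds {max_drift}")
    return (not reasons, reasons)

ok, why = release_gate([0.97, 0.94, 0.91, 0.95], baseline_mean=0.93)
print(ok, why)  # True [] — mean 0.9425 clears the threshold and sits within drift bounds
```

Returning the reasons alongside the verdict mirrors the feedback-loop point above: a gate that only says “no” slows teams down, while one that names the failing metric keeps the loop quick.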

Merritt also operates with a no-commitment policy — meaning the company doesn’t lock itself into using a particular model. It has what he calls an “incredibly heterogeneous” technology environment combining best of breed models alongside mainframes running on COBOL. That flexibility isn’t accidental. His team built common service layers, microservices and APIs that sit between the AI layer and everything underneath — so when a better model comes along, swapping it in doesn’t mean starting over.

Because, Merritt explained, “the best of breed today might be the worst of breed tomorrow, and we don’t want to set ourselves up to fall behind.”

AI Impact Series event./ Image Credit: Brian Malloy Photo

Weeding instead of letting a thousand flowers bloom

Mass General Brigham (MGB), for its part, took more of a spray and pray approach — at first.

Around 15,000 researchers in the not-for-profit health system have been using AI, ML, and deep learning for the last 10 to 15 years, CTO Nallan “Sri” Sriraman said at the same VB event.

But last year, he made a bold choice: His team shut down a sprawl of non-governed AI pilots. Initially, “we did follow the thousand flowers bloom [methodology], but we didn’t have a thousand flowers, we had probably a few tens of flowers trying to bloom,” he said.

Like Merritt’s team at MassMutual, MGB pivoted to a more holistic view, examining why they were developing certain tools for specific departments or workflows. They questioned what capabilities they wanted and needed and what investment those required.

Sriraman’s team also spoke with their primary platform providers — Epic, Workday, ServiceNow, Microsoft — about their roadmaps. This was a “pivotal moment,” he noted, as they realized they were building in-house tools that vendors were already providing (or were planning to roll out).

As Sriraman put it: “Why are we building it ourselves? We are already on the platform. It is going to be in the workflow. Leverage it.”

That said, the marketplace is still nascent, which can make for difficult decisions. “The analogy I will give is when you ask six blind men to touch an elephant and say, what does this elephant look like?” Sriraman said. “You’re gonna get six different answers.”

There’s nothing wrong with that, he noted; it’s just that everybody is discovering and experimenting as the landscape keeps shifting.

Instead of a Wild West environment, Sriraman’s team distributes Microsoft Copilot to users across the business, and uses a “small landing zone” where they can safely test more sophisticated products and control token use.

They also began “consciously embedding AI champions” across business groups. “This is kind of a reverse of letting a thousand flowers bloom, carefully planting and nourishing,” Sriraman said.

Observability is another big consideration; he describes real-time dashboards that manage model drift and safety and allow IT teams to govern AI “a little more pragmatically.” Health monitoring is critical with AI systems, he noted, and his team has established principles and policies around AI use, not to mention least access privileges.

In clinical settings, the guardrails are absolute: AI systems never issue the final decision. “There’s always going to be a doctor or a physician assistant in the loop to close the decision,” Sriraman said. He cited radiology report generation as one area where AI is used heavily, but where a radiologist always signs off.

Sriraman was clear: “Thou shall not do this: Don’t show PHI [protected health information] in Perplexity. As simple as that, right?”

And, importantly, there must be safety mechanisms in place. “We need a big red button, kill it,” Sriraman emphasized. “We don’t put anything in the operational setting without that.”

Ultimately, while agentic AI is a transformative technology, the enterprise approach to it doesn’t have to be dramatically different. “There is nothing new about this,” Sriraman said. “You can replace the word BPM [business process management] from the ’90s and 2000s with AI. The same concepts apply.”


Tech

Three YouTubers accuse Apple of illegal scraping to train its AI models


Three YouTube channels have banded together and filed a class action lawsuit against Apple, as first spotted by MacRumors. According to the lawsuit, the creators behind h3h3 Productions, MrShortGameGolf and Golfholics have accused Apple of violating the Digital Millennium Copyright Act by scraping copyrighted videos on YouTube to train its AI models.

While the YouTubers’ videos are available to watch on the platform, the lawsuit alleged that Apple illegally circumvented the “controlled streaming architecture” that regular users are limited to. The creators claimed that Apple’s video scraping was used to train its generative AI products, adding that the tech giant’s “massive financial success would not have been possible without the video content created” by the YouTubers. MacRumors noted that these YouTube channels have also filed similar lawsuits against other tech companies, including Meta, Nvidia, ByteDance and Snap.

It’s not the first time a company’s alleged AI training methods have landed it in legal trouble. OpenAI and Microsoft were both accused of using copyrighted articles from The New York Times to train their AI chatbots. Similarly, Perplexity was recently sued by Reddit and Encyclopedia Britannica for alleged copyright and trademark infringement. Last year, Apple was also named in a separate class action lawsuit from two neuroscience professors who claimed their copyrighted works were used without permission. We reached out to Apple for comment and will update the story when we hear back.


Tech

Trump Celebrates Easter By Dropping An F-Bomb, Threatening More War Crimes


from the what-would-jesus-bomb dept

Before we get into this, let’s set the scene a little:

The latest Pew Research Center survey, conducted Jan. 20-26, 2026, finds that most White evangelicals (69%) approve of the way Trump is handling his job as president. And a majority (58%) say they support all or most of his plans and policies.

Let that sink in for a bit. The operative term here is probably “white,” but Trump has been embraced by the evangelical community, despite his being about as far removed from the ideals of Christianity as their arch-nemesis, ~~trans people~~ the Devil. (And let’s not forget I’m talking about the ideals, which are often preached but rarely practiced.)

Here’s how Trump handled Easter morning, one of the holiest (no pun intended) holidays observed by the people most likely to support him no matter what:



In Trump’s own words, at 5:03 am on Easter Sunday:

Tuesday will be Power Plant Day, and Bridge Day, all wrapped up in one, in Iran. There will be nothing like it!!! Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell – JUST WATCH! Praise be to Allah. President DONALD J. TRUMP

Now, I have to admit that when I first read this, I thought Trump was announcing some new celebration of US infrastructure before derailing his own train of thought. But it’s definitely not that.

It’s the other thing… which turns out to be Trump announcing planned war crimes. Again.

Both sides have threatened and hit civilian targets like oil fields and desalination plants critical for drinking water. Iran’s U.N. mission on social media called Trump’s threat “clear evidence of intent to commit war crime.”

Iran’s military joint command warned of stepped-up retaliatory attacks on regional oil and civilian infrastructure if the U.S. and Israel attack such targets there, according to state television.


The laws of armed conflict allow attacks on civilian infrastructure only if the military advantage outweighs the civilian harm, legal scholars say. It’s considered a high bar to clear, and causing excessive suffering to civilians can constitute a war crime.

While it looks like both sides in this war are willing to strike civilian infrastructure, the United States should be trying to take the high road (the one without war crimes). And if it can’t be bothered to do that, the administration should — at the very least — try to keep the president from publicly saying we’re going to commit war crimes.

But, alas, there’s no one willing to stop him. Pete Hegseth is definitely relishing his unearned role as the Secretary of Defense (“Back to the Stone Age.”) And he appears to be firing anyone who disagrees with things like drone-killing people in international waters and, you know, engaging in war crimes.

Both Trump and Hegseth have publicly claimed they’re doing God’s work by going to war with Iran, something that has been echoed by the same demographic detailed in the Pew Research survey.


Shamefully, they won’t see a drop in support despite Trump threatening war crimes, dropping an F-bomb, and promising to send people halfway around the world to hell, as if he were a god himself. And that’s a damning indictment of an entire segment of Americans who choose to treat their religion as a weapon and want the world to be remade in their own image — something they often accuse Muslims of doing. The irony is lost on them, along with the man they’ve chosen to treat as God’s appointed leader.

We’ve had a lot of low points as a nation, but usually we’ve at least tried to improve. That’s no longer the case. We’re under the rule of people who debase and abuse the nation they claim to love. Happy Fuckin’ Easter, you crazy bastards. Welcome to Hell.

Filed Under: evil, iran war, pete hegseth, trump administration, war crimes

Companies: truth social


Tech

iOS 26.5 Public Beta: Is End-to-End Encrypted RCS Messaging Finally Coming to iPhone?


Apple released the first public beta of iOS 26.5 on Monday, about two weeks after the company released the massive iOS 26.4 update, which included new emoji, video podcasts and more. The iOS 26.5 beta brings a few smaller — but significant — changes to the iPhones of developers and beta testers, including one feature that will be familiar to anyone who has kept up with past iOS betas.

The download page for iOS 26.5 beta 1.

Apple/Screenshot by CNET

Because this is a beta, I recommend installing it only on something other than your primary device. This isn’t the final version of iOS 26.5, so the update might be buggy and battery life may suffer; it’s best to keep those troubles on a secondary device.

Also, since this isn’t the final version, more features could land on your iPhone when iOS 26.5 is released.


Here are some features developers and beta testers can try now, and what could land on your iPhone when Apple releases iOS 26.5.

End-to-end encrypted RCS messaging returns

The iOS 26.5 beta brings back an option to enable end-to-end encrypted RCS messaging on your device. When Apple brought RCS messaging to iPhones with iOS 18, one feature the messaging protocol was missing was end-to-end encryption, and iOS 26.5 could finally bring this privacy protection to your iPhone.

To find this setting, go to Settings > Apps > Messages > RCS Messaging and tap the slider next to End-to-End Encryption (Beta).

A screenshot showing the end-to-end encryption option in Messages on the iOS 26.5 beta.

Apple/Screenshot by CNET

Apple writes in the feature’s description that it’s still in beta and it works only on certain carriers and devices. Apple also writes that these encrypted messages will be labeled as such, so you should know when your messages do and don’t have this level of protection.

Apple included end-to-end encrypted RCS messaging in beta versions of iOS 26.4, but the tech giant didn’t include the feature in the final release.

Suggested Places in Maps

The iOS 26.5 beta also brings a new section called Suggested Places to your Maps app. Once in the app, tap your Search bar like you’re going to look up a nearby cafe or restaurant, and the section Suggested Places will appear below Recents.

The new Suggested Places section in Apple Maps.

Apple/Screenshot by CNET

Those are a few of the new features developers and public beta testers can try now with the first public beta of iOS 26.5. There will likely be more betas before the OS is released to the public, so there’s plenty of time for Apple to change these features and add others. Apple has not said when it will release iOS 26.5 to the general public.

For more iOS news, here’s everything you should know about iOS 26.4 and iOS 26.3. You can also check out our iOS 26 cheat sheet.




Tech

Who Owns SRM Concrete & What Does The ‘SRM’ Stand For?

SRM stands for Smyrna Ready Mix. SRM Concrete, which claims to be the “largest privately-owned ready-mix concrete manufacturer in the country,” is owned by the Hollingshead family of Smyrna, Tennessee. The company’s founders, Mike and Melissa Hollingshead, got into the ready-mix concrete business as a way to improve the supply of concrete to Hollingshead Concrete, a concrete finishing business Mike had started early in his career. That business stands as the Hollingsheads’ first company, although its recent iterations are known as Hollingshead Cement.

In 1999, frustrated with the poor customer service he received from local concrete suppliers, Mike and Melissa bought their own ready-mix concrete plant, assembled it in their backyard, and acquired five used concrete trucks at an auction to start SRM Concrete. Even that first backyard operation likely exceeded the capacity of mixing multiple bags of concrete in a Harbor Freight cement mixer.


The Hollingsheads launched SRM Concrete on a tight budget and immediately had obstacles to overcome. While assembling SRM’s first ready-mix plant in their backyard was a sizable commitment to the project, Mike had little knowledge of operating a ready-mix plant or of the formula for making a quality mix. To make matters worse, two of the five used concrete trucks bought at auction to deliver SRM’s product suffered engine failure before making it back to the SRM Concrete plant.


Where is SRM Concrete today?

What started in Mike and Melissa Hollingshead’s backyard in 1999 has expanded dramatically over the past quarter-century. It took six months for word to spread that SRM Concrete was open for business. What started as a way for Mike to get the concrete he needed for his concrete finishing business quickly expanded to serving other concrete finishers in the area and across Middle Tennessee.

Today, SRM Concrete and Hollingshead Cement operate in 24 states across the U.S. with 563 concrete plants, 33 quarries, and 12 cement terminals. The company’s rapid growth is the result of a mixture of expansion and acquisition. SRM Concrete boasts the opening of 21 new facilities in 2025 alone, with three more announced in the first quarter of 2026. 

Like many family-owned businesses in the building trade, Mike and Melissa’s sons have grown up with the business and become part of the leadership team at SRM. Jeff took on the role of Chief Executive Officer in 2014, and Ryan is the President of the company’s materials division. Mike Hollingshead is still involved in the business. He’s currently serving as the company Chairman while still making deals with suppliers, overseeing the Smyrna quarry, and driving the occasional concrete truck.


Tech

Boston Dynamics Spot’s Interaction With the Public


Building the next generation of robots for successful integration into our homes, offices, and factories is more than just solving the hardware and software problems – we also need to understand how they will be perceived and how they can work effectively with people in those spaces.

This post originally appeared on the Robotics and AI Institute website.

In summer 2025, RAI Institute set up a free popup robot experience in the CambridgeSide mall, designed to let people experience state-of-the-art robotics firsthand. While news stories about robots and AI are common, with some being overly critical and some overly optimistic, most people have not encountered robots in the flesh (or metal), as it were. With no direct experience, their opinions are largely shaped by pop culture and social media, both of which focus more on sensational stories than on accurate information about how robots might be used effectively and where the technology still falls short. Our goal with the popup was twofold: first, to give people an opportunity to see robots that they would otherwise not have a chance to experience, and second, to better understand how the public feels about interacting with these robots.

Designing a Robot Experience for the General Public

Some earlier versions of legged robots, built by the RAI Institute’s Executive Director, Marc Raibert. RAI Institute

The ANYmal by ANYbotics (left) and a previous model of the RAI Institute’s UMV (right). RAI Institute

The pop-up space had two areas: a museum area where people could see historical and modern robots, including some RAI Institute builds like the UMV, and an interactive experience called “Drive-a-Spot.” The latter was a driving arena where anyone who came by could take the controls of a Spot quadruped, one of the more recognizable commercially available robots today.

The guest robot drivers used a custom controller built on an adaptive video game controller, designed so that anyone of any age could use it. It featured basic controls: move forward, back, left, right, adjust height, sit, stand, and tilt. The buttons were large enough for tiny or elderly hands, and the people who drove Spot ranged in age from two to over 90.


The custom controller, built on an adaptive video game controller with large programmable buttons so that anyone of any age could use it. RAI Institute

The demo area was designed to be a bit challenging for the Spot robot to maneuver in: it contained tight passages, low obstacles to step over, a barrier to crouch under, and taller objects the robot had to avoid. Much to the surprise of many of our guests, Spot is able to autonomously adjust itself to traverse or avoid those obstacles while being supervised through the joystick.


The driving arena’s theme rotated every few weeks across four scenarios: a factory, a home, a hospital, and an outdoor/disaster environment. These were chosen to contrast settings where robots are broadly accepted (industrial, emergency response) with settings where public ambivalence is well-documented (domestic, healthcare).

The visitors who chose to drive the Spot robot could also participate in a short survey before and after their driving experience. The survey focused on two core dimensions:

  • Comfort: how comfortable would you feel if you encountered a robot in a factory, home, hospital, office, or outdoor/disaster scenario?
  • Suitability: how well would this robot work in each of those contexts?

The survey also recorded emotional reactions immediately after driving, likelihood to recommend the experience, and open-ended responses about what participants found memorable or surprising. The researchers were careful to separate the environment participants drove through from the scenarios they were asked to evaluate in the survey. This distinction is important for interpreting the results below.

Did Interacting with the Robot Change People’s Feelings about Robots?

Of the approximately 10,000 guests who visited the Robot Lab, about 10 percent drove the Spot and opted in to our surveys. Of those surveyed, more than 65% had seen images or videos of Spot robots online, but most had never seen one in person.

Increased Comfort Through Experience

Across all five contexts presented in the survey (factory, home, hospital, office, and outdoor/disaster scenarios), comfort scores increased significantly after the driving session. The effects were small to moderate in magnitude, but they were consistent and statistically robust after correction for multiple comparisons, across participants ranging from children to older adults.
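The study's exact statistical procedure isn't spelled out here, but pre/post comparisons of this kind are commonly analyzed with a paired test per context plus a multiple-comparison correction. A hedged, stdlib-only sketch (the data is synthetic, and a large-sample normal approximation stands in for the t distribution):

```python
import math

def paired_test(pre, post):
    """Paired t statistic with a two-sided normal-approximation
    p-value (reasonable for large samples)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

def bonferroni(p, m):
    """Correct a p-value for m comparisons (e.g. m=5 contexts)."""
    return min(p * m, 1.0)

# Synthetic example: 100 drivers whose comfort rises by ~0.4 on a 1-5 scale
pre = [3.0] * 100
post = [3.5 if i % 2 == 0 else 3.3 for i in range(100)]
t, p = paired_test(pre, post)
print(f"t={t:.1f}, corrected p={bonferroni(p, 5):.4f}")
```

Running this once per context and correcting across the five contexts is what "statistically robust after correction for multiple comparisons" amounts to in practice.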

The largest gain appeared in the outdoor/disaster context, which started with low comfort despite high perceived suitability. People already thought Spot would be useful in search-and-rescue scenarios; they just weren’t comfortable with it operating in that scenario. This discomfort may stem from media portrayals of quadruped robots in military contexts. A few minutes of hands-on control appears to partially dissolve that apprehension.

Participants who drove through the factory-themed arena showed no significant increase in comfort, but that context already had the highest baseline rating of any in the survey, leaving little room for improvement.


No matter their previous experience, most people were neutral about having a Spot robot in their home before their driving experience. However, after the experience of controlling the Spot robot, people had a statistically significant increase in their comfort at having a Spot in their home and also felt that a Spot robot was more suitable for work in any environment, not just the one they had driven it in.

Better Understanding of Where Robots Can Fit into Daily Life

Perceived suitability for Spot to operate in each context also increased. However, the pattern in the data is different. The largest gains weren’t in the high-baseline industrial and outdoor contexts. They were in home, office, and hospital – the very environments where people started out most skeptical.

Participants who drove the Spot robot in a home-themed environment didn’t just consider homes more suitable for robots; they also rated hospitals and offices as more suitable. This result suggests that hands-on control alters something more fundamental than just context-specific familiarity. It may change a person’s underlying understanding of a robot’s capabilities and, consequently, where they believe robots are appropriate.

Results by Demographic

The hands-on experience seems to be similarly effective across genders, although it does not completely eliminate existing disparities. For example, men reported higher baseline comfort than women across all five contexts. However, all genders improved at similar rates after interaction. The gap didn’t significantly widen or close in most contexts, though it did narrow in factory and office settings.


Age effects were more context dependent. Children (aged 8–17) rated factory environments as less comfortable and less suitable before the study. However, this could be because most children do not have experience with factory settings or industrial environments. After interaction, this gap largely persisted. By contrast, children showed stronger gains in office comfort than older adults and entered the study rating home contexts more favorably than adults did.

Survey participants by age group and gender; participants ranged from age 8 to over age 75. RAI Institute

Participants who had previously driven Spot (mainly robotics professionals) began with higher comfort across the board. But after the hands-on session, people with no prior exposure caught up to experienced drivers. This level of familiarity would be difficult to replicate with images and videos alone.

Post-Interaction Results

Post-interaction emotional data was overwhelmingly positive. “Excitement” was reported by 74% of participants, “happiness” by 50%, and only 12% reported “nervousness.” Over 55% rated the experience as “brilliant” and 62% said they were very likely to recommend it to a friend.

The open-ended responses added a lot more color. The most commonly mentioned moments were locomotion and terrain adaptation (22%), including the way Spot navigated steps, tight spaces, and uneven ground, and expressive tilt movements (22%), which people found surprisingly dog-like or dance-like. A smaller set of responses (3%) described anthropomorphic reactions: worrying about “hurting” the robot or finding its behavior “silly” in a way that prompted a genuine emotional response.


When asked what tasks they’d want a robot to perform, responses shifted meaningfully. Before driving, answers clustered around domestic assistance and heavy or hazardous labor. After driving, domestic help remained prominent, but entertainment and play jumped from 7.5% to 19.4%. Companionship also appeared at 5%. References to hazardous or industrial tasks declined as people who had operated the robot began imagining it as a companion and playmate, not just a labor-replacement tool.

Key Takeaways from The Robot Lab

In the not-so-distant future, robots will become more common in public and private spaces. But whether that integration into daily life will be accepted by the general public remains to be seen. The standard approach to building acceptance has been passive exposure such as videos, exhibits, and articles. This study suggests giving people agency and letting them actually operate a robot is a qualitatively different intervention.

Short, well-designed, hands-on encounters can raise comfort in precisely the social domains where ambivalence is highest and where future robotics deployments will likely take place. Such hands-on experiences shouldn’t be limited to tech conferences and museums; they may prove more valuable than mere entertainment.

Children take the controls of a robot car at the booth. Fun for all ages! RAI Institute

We consider the popup a success, but as with all experiments, we learned a lot along the way. Beyond the increased comfort with robots, we found that guests to our space really enjoyed talking to the robotics experts who staffed the location. For many people, the opportunity to talk to a roboticist was as unique as the opportunity to drive a robot, and in the future we are excited to continue sharing our technical work, and the experiences of our humans in addition to our humanoids.


Does building a space where folks can experience robots firsthand have the potential to create meaningful, long-term attitude shifts? That remains an open question. But the effect’s direction and consistency across different situations, ages, and genders are hard to ignore.

Pop-Up Encounters with Spot: Shaping Public Perceptions of Robots Through Hands-On Experience, by Hae Won Park, Georgia Van de Zande, Xiajie Zhang, Dawn Wendell, and Jessica Hodgins from the RAI Institute and the MIT Media Lab, was presented last month at the 2026 ACM/IEEE International Conference on Human-Robot Interaction in Edinburgh, Scotland.

