Haptic’s touch-based navigation helps blind and sighted alike get around without looking

Smartphones and navigation apps have become second nature these days. But for people who are blind or have low vision, they’re not quite so convenient. Haptic has been building a non-visual, non-verbal way of telling people where to go, and they’ve decided it’s time to scale up and take it global.

Haptic presented onstage today as part of the Startup Battlefield at TechCrunch Disrupt 2024, showing their progress from concept to prototype to platform. The company got started in 2017 when, after a friend lost their sight in an accident, a group of colleagues began looking into ways someone could navigate without using visual or auditory cues.

Though there are plenty of screen readers and apps with spoken directions, those aren’t always convenient or practical. But as co-founder and head of business Enzo Caruso pointed out, there are other interfaces we could be using. Touch, for instance.

“Why not receive info in a more robust, intuitive, and accessible way? Everyone can understand the sensation of touch. It’s global, it’s worldwide, it’s universal,” he said.


The advance Haptic has made — and patented, Caruso noted — is a way of using vibration and other tactile sensations to communicate the simple, intuitive idea that the user is going in the right direction. Your device sends a steady pulse when you’re on track, then quickens or intensifies it if you veer off course; they call it a “haptic corridor.” Though it’s hard to imagine, they say it’s intuitive enough to pick up after just a few seconds.
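Haptic hasn’t published the details of its feedback loop, but the core idea is simple enough to sketch: compare the user’s current heading to the bearing of the next waypoint and map the deviation to pulse timing. Below is a minimal illustration in Python; the corridor width, intervals, and function names are all assumptions made for the sake of the example, not Haptic’s actual implementation.

```python
def bearing_error_deg(heading_deg: float, bearing_to_waypoint_deg: float) -> float:
    """Signed angular difference between heading and target bearing, in (-180, 180]."""
    return (bearing_to_waypoint_deg - heading_deg + 180) % 360 - 180

def pulse_interval_ms(heading_deg: float, bearing_to_waypoint_deg: float,
                      corridor_half_width_deg: float = 15.0) -> int:
    """Map deviation from a hypothetical 'haptic corridor' to a pulse interval.

    Inside the corridor the pulse stays slow and steady; outside it, pulses
    quicken in proportion to how far off course the user has drifted.
    """
    error = abs(bearing_error_deg(heading_deg, bearing_to_waypoint_deg))
    if error <= corridor_half_width_deg:
        return 1000  # steady one-second pulse: on track
    # Shrink the interval as the error grows, down to a floor of 150 ms.
    overshoot = min((error - corridor_half_width_deg)
                    / (180 - corridor_half_width_deg), 1.0)
    return int(1000 - overshoot * 850)

# 10 degrees off the bearing -> steady 1000 ms pulse; 60 degrees off -> ~768 ms;
# facing the opposite direction -> rapid 150 ms pulses.
print(pulse_interval_ms(80, 90), pulse_interval_ms(30, 90), pulse_interval_ms(270, 90))
```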


The advantages of the approach are plentiful: It works in any language, requires no special hardware, and can be used to direct someone down a crowded city sidewalk, an open landscape, or even inside a building (though that part is still in development).

Originally, the haptic corridor was delivered through a wearable of the company’s own design, but since then Haptic has embraced the progress made elsewhere in the market.

“Technology advances while you’re advancing — and smartwatches got better. So, do you want to be in competition with the Googles and Apples out there… or do you want to have them as allies? You can take your SDK from thousands of users to billions of users,” said Caruso.

CEO and co-founder Kevin Yoo explained that this year marked the company’s change in focus from proving out the product to putting it in as many hands as possible. A partnership with the likes of Google or Uber would certainly go a long way toward doing that.


Imagine, he said, not having to even take your phone out of your pocket to walk straight to your Uber at the airport, or finding your way through a crowded venue by the pulse of your smartwatch. Anyone might find that useful, in addition to people with vision impairments for whom it may be an everyday navigation tool.

One demo video shows a user, James, getting around his neighborhood with the help of the app.

“Google and Apple, telecoms, Uber, governments… all of this is coming together into a common ground,” said Yoo. With the capabilities of today’s smartwatches and phones, combined with a new software focus at Haptic on “hyper-accurate location,” they hope to introduce indoor navigation and integration with other services.

Haptic currently partners with Waymap, Cooley, WID, and Infinite Access, and is in talks with many more. It just landed a million-dollar contract with Aira, an app that lets people with vision impairments get live assistance from a sighted helper via their phone. Haptic navigation would reduce the need for that assistant to give step-by-step directions; they could instead just drop pins on a map or provide other services.


This, and not monetizing their own app, is how they intend to make money, Yoo emphasized: “We have a free app available to the world, live in 31 countries right now… and we have the licensing and integration model — that’s the business.”

The company is mid-raise and hoping to close a funding round that will let them pursue bigger partners (the Ubers and T-Mobiles of the world) in earnest.

The Simpsons will join Monday Night Football on ESPN+ and Disney+

The town of Springfield will host a National Football League game in December at Atoms Stadium, but neither the Springfield Atoms nor the Shelbyville Sharks will take the field.

Instead, the Bengals-Cowboys game on December 9 will be transformed into the world of TV’s longest-running sitcom, The Simpsons, for a special Funday Football edition of Monday Night Football. The special Simpsons-ized broadcast will air on the ESPN+ and Disney+ streaming services and the NFL+ mobile app. The game will broadcast in its regular form on ESPN, ESPN+, ABC, ESPN2 and ESPN Deportes.

The broadcast will use tracking technology to turn the players on the field and ESPN commentators Mina Kimes, Dan Orlovsky and Drew Carter into Simpsons characters. Kimes, Orlovsky and Carter will wear Meta Quest Pro headsets to see their virtual environments. The quarterbacks will be transformed into Bart for the Cincinnati Bengals and Homer for the Dallas Cowboys using Sony’s AI data analyzer and sports tracking and broadcast technology.

The game will also feature more characters and pre-animated scenes voiced by the show’s original cast, including Hank Azaria, Nancy Cartwright, Dan Castellaneta, Julie Kavner and Yeardley Smith, along with some surprise sports cameos. Characters like Lisa, Krusty the Clown, Carl, Lenny, Moe and Milhouse will be on the sidelines rooting for their respective teams. The announcement doesn’t mention Harry Shearer, so don’t expect Mr. Burns or Smithers to be at the game.


This isn’t the first time that ESPN has turned a regular-season NFL game into an animated spectacle. Last year, Disney, ESPN and the NFL teamed up to turn an October game between the Atlanta Falcons and the Jacksonville Jaguars into a Toy Story-themed broadcast that transformed London’s Wembley Stadium into Andy’s room. The kids’ cable network Nickelodeon has also aired a few NFL games as NFL Slimetime broadcasts, featuring live commentary from animated characters like SpongeBob (voiced by Tom Kenny) and Patrick Star (voiced by Bill Fagerbakke), plus computerized slime spewing in the end zones after touchdowns.

Study finds LLMs can identify their own mistakes

A well-known problem of large language models (LLMs) is their tendency to generate incorrect or nonsensical outputs, often called “hallucinations.” While much research has focused on analyzing these errors from a user’s perspective, a new study by researchers at Technion, Google Research and Apple investigates the inner workings of LLMs, revealing that these models possess a much deeper understanding of truthfulness than previously thought.

The term hallucination lacks a universally accepted definition and encompasses a wide range of LLM errors. For their study, the researchers adopted a broad interpretation, considering hallucinations to encompass all errors produced by an LLM, including factual inaccuracies, biases, common-sense reasoning failures, and other real-world errors.

Most previous research on hallucinations has focused on analyzing the external behavior of LLMs and examining how users perceive these errors. However, these methods offer limited insight into how errors are encoded and processed within the models themselves.


Some researchers have explored the internal representations of LLMs, suggesting they encode signals of truthfulness. However, previous efforts were mostly focused on examining the last token generated by the model or the last token in the prompt. Since LLMs typically generate long-form responses, this practice can miss crucial details.

The new study takes a different approach. Instead of just looking at the final output, the researchers analyze “exact answer tokens,” the response tokens that, if modified, would change the correctness of the answer.
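The paper’s own code isn’t reproduced here, but the idea of reading activations at exact answer tokens can be sketched with standard Hugging Face tooling: locate the answer string’s character span in the generated text, map it to token positions via the tokenizer’s offset mapping, and pull the hidden states at those positions. The model choice, layer index, and span-finding heuristic below are illustrative assumptions, not the authors’ implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; the study used variants of Mistral 7B and Llama 2.
NAME = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME, output_hidden_states=True)
model.eval()

def exact_answer_activations(full_text: str, answer: str, layer: int = 16) -> torch.Tensor:
    """Hidden states at the tokens that spell out the answer itself.

    Rather than probing only the final token, find the answer's character
    span, map it to token indices via the offset mapping, and return the
    chosen layer's activations at exactly those positions.
    """
    start = full_text.rindex(answer)  # last occurrence = the answer span
    enc = tok(full_text, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    idx = [i for i, (s, e) in enumerate(offsets)
           if s < start + len(answer) and e > start]  # tokens overlapping the span
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]  # (seq_len, d_model)
    return hidden[idx]

acts = exact_answer_activations(
    "Q: What is the capital of Australia?\nA: The capital is Canberra.", "Canberra")
```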

The researchers conducted their experiments on four variants of Mistral 7B and Llama 2 models across 10 datasets spanning various tasks, including question answering, natural language inference, math problem-solving, and sentiment analysis. They allowed the models to generate unrestricted responses to simulate real-world usage. Their findings show that truthfulness information is concentrated in the exact answer tokens. 

“These patterns are consistent across nearly all datasets and models, suggesting a general mechanism by which LLMs encode and process truthfulness during text generation,” the researchers write.


To predict hallucinations, they trained classifier models, which they call “probing classifiers,” to predict features related to the truthfulness of generated outputs based on the internal activations of the LLMs. The researchers found that training classifiers on exact answer tokens significantly improves error detection.
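The paper describes these probes at a high level; as a rough picture, a probing classifier can be as simple as logistic regression over activation vectors collected at exact answer tokens, labeled by whether each generated answer was correct. The sketch below uses random placeholder arrays where real activations and labels would go, and the layer and pooling choices are assumptions rather than the paper’s exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholders: in practice, activations[i] would be the hidden state at the
# exact answer token of example i, and labels[i] would mark whether that
# generated answer was correct.
rng = np.random.default_rng(0)
activations = rng.normal(size=(2000, 4096)).astype(np.float32)
labels = rng.integers(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

# With real data, held-out accuracy above chance would indicate a linearly
# decodable truthfulness signal at those token positions. (Random placeholder
# data, as here, should score near 50%.)
print("held-out accuracy:", probe.score(X_test, y_test))
```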

“Our demonstration that a trained probing classifier can predict errors suggests that LLMs encode information related to their own truthfulness,” the researchers write.

Generalizability and skill-specific truthfulness

The researchers also investigated whether a probing classifier trained on one dataset could detect errors in others. They found that probing classifiers do not generalize across different tasks. Instead, they exhibit “skill-specific” truthfulness, meaning they can generalize within tasks that require similar skills, such as factual retrieval or common-sense reasoning, but not across tasks that require different skills, such as sentiment analysis.

“Overall, our findings indicate that models have a multifaceted representation of truthfulness,” the researchers write. “They do not encode truthfulness through a single unified mechanism but rather through multiple mechanisms, each corresponding to different notions of truth.”


Further experiments showed that these probing classifiers could predict not only the presence of errors but also the types of errors the model is likely to make. This suggests that LLM representations contain information about the specific ways in which they might fail, which can be useful for developing targeted mitigation strategies.

Finally, the researchers investigated how the internal truthfulness signals encoded in LLM activations align with their external behavior. They found a surprising discrepancy in some cases: The model’s internal activations might correctly identify the right answer, yet it consistently generates an incorrect response.

This finding suggests that current evaluation methods, which solely rely on the final output of LLMs, may not accurately reflect their true capabilities. It raises the possibility that by better understanding and leveraging the internal knowledge of LLMs, we might be able to unlock hidden potential and significantly reduce errors.

Future implications

The study’s findings can help design better hallucination mitigation systems. However, the techniques it uses require access to internal LLM representations, which is mainly feasible with open-source models.


The findings, however, have broader implications for the field. The insights gained from analyzing internal activations can help develop more effective error detection and mitigation techniques. This work is part of a broader field of studies that aims to better understand what is happening inside LLMs and the billions of activations that happen at each inference step. Leading AI labs such as OpenAI, Anthropic and Google DeepMind have been working on various techniques to interpret the inner workings of language models. Together, these studies can help build more robust and reliable systems.

“Our findings suggest that LLMs’ internal representations provide useful insights into their errors, highlight the complex link between the internal processes of models and their external outputs, and hopefully pave the way for further improvements in error detection and mitigation,” the researchers write.


Here are the 5 Startup Battlefield finalists at TechCrunch Disrupt 2024


The time has finally come to announce the five finalists of the Startup Battlefield. It all started earlier this year when the TechCrunch editorial team selected 200 companies from the thousands that applied. From there, the team chose the 20 companies that pitched this week onstage at TechCrunch Disrupt 2024 to investor judges and packed crowds.

This year’s finalists follow in the footsteps of Startup Battlefield legends like Dropbox, Discord, Cloudflare and Mint on the Disrupt Stage. More than 1,500 companies have participated in the program, and Startup Battlefield alumni have collectively raised over $29 billion in funding, with more than 200 successful exits.

The five finalists will pitch again on the Disrupt Stage on Wednesday, October 30 at 11:30 a.m. PT to Navin Chaddha (Mayfield), Chris Farmer (SignalFire), Dayna Grayson (Construct Capital), Ann Miura-Ko (Floodgate), and Hans Tung (Notable Capital).

Now, without further ado, here are the five TechCrunch Startup Battlefield 2024 finalists: 


It looks fake, or at least like a good illusion: There’s Gecko Materials founder Capella Kerst dangling a full wine bottle from her pinky finger, with only the super-strong dry adhesive her startup has brought to market keeping it from smashing to pieces. But it’s no trick. It’s the result of years of academic research that Kerst built on by inventing a method to mass-manufacture the adhesive. Inspired by the way real-life geckos’ feet grip surfaces, the adhesive is like a new Velcro — except it only needs one side, leaves no residue, and can detach as quickly as it attaches. It can do this at least 120,000 times and, as Kerst noted in a recent interview with TechCrunch, can stay attached for seconds, minutes, or even years.

Luna is a health and well-being app for teen girls that is designed to help them navigate teenhood. The app lets teens ask questions about their health and wellness and get responses from experts. It also lets them track their periods, moods, and skin. The London-based startup presented today on the Startup Battlefield stage at TechCrunch Disrupt 2024 to detail its mission to educate and support teen girls. Luna is the brainchild of best friend duo Jas Schembri-Stothart and Jo Goodall, who came up with the idea for the startup as part of an assignment during their MBA program at Oxford. 

For anyone who parties or goes out dancing, the risk of accidentally taking adulterated drugs is real. MabLab has created a testing strip that detects the five most common and dangerous additives in minutes. Co-founders Vienna Sparks and Skye Lam met in high school, and during college the pair lost a friend to an overdose. It’s a story that, sadly, many people (including myself) can identify with. Thankfully, testing strips are a common sight now at venues and health centers, with hundreds of millions shipping yearly.

Six years ago, while researching for a college entrepreneurship competition, Valentina Agudelo identified a troubling gap in breast cancer survival rates between Latin America and the developed world, with women in her native Colombia and the rest of the continent dying at higher rates due to late detection. She realized that breast cancer is highly treatable when diagnosed early, yet many Latin American countries have large rural populations lacking access to mammograms and other diagnostic tools. So Agudelo and her two best friends decided to create a theoretical portable device that would detect breast cancer early.


In the summer of 2020, a fire broke out onboard a naval ship docked in San Diego Bay. For more than four days, the USS Bonhomme Richard burned as helicopters dropped buckets of water from above, boats spewed water from below, and firefighters rushed onboard to control the blaze. Before the embers had even cooled, lidar (light detection and ranging) scans were taken to assess how bad the damage was and to figure out how the fire even started. But the investigation was stalled, partially because of how hard it is to send lidar scans.

AMD confirms its next-gen RDNA 4 GPUs will launch in early 2025


AMD’s Q3 2024 earnings call today wasn’t bullish on gaming revenue overall, but it did confirm a hot new rumor on GPUs — specifically, the launch of AMD’s next-gen RDNA 4 parts early next year. “We are on track to launch the first RDNA4 GPUs in early 2025,” said AMD CEO Lisa Su, and the company confirmed to PCWorld that it’s the first time it’s shared those plans publicly.

“In addition to a strong increase in gaming performance, RDNA 4 delivers significantly higher ray tracing performance and adds new AI capabilities,” Su said on the call.

AMD expects its gaming revenue to continue to decline this quarter, due in no small part to the aging PlayStation 5 and Xbox Series consoles, and it’s not exactly the company’s primary focus these days anyhow. On today’s call, Su pointed out that gaming accounts for only two percent of the company’s revenue, while data center now makes up well over half of its business. She said that after spending 10 years turning AMD around, her next task is to “make AMD the end-to-end AI leader.”

Apple’s keyboards, mice, and trackpads are finally improving – now it’s time for more peripherals


Apple has been dropping tons of new releases for its most popular product lines, like the M4 iMac and M4 Mac mini, this week, but one of the biggest surprises was the tech giant relaunching three of its most well-known peripherals — the Magic Mouse, Magic Keyboard, and Magic Trackpad — now equipped with USB-C connectivity.

However, when looking at the list of Apple-branded accessories currently available, it all feels a bit…lacking.

Follow Mars rover’s 18-mile trip in NASA’s animated route map


NASA has shared a fascinating animation showing the route taken by the Perseverance rover on Mars since its arrival there in February 2021.

Perseverance is NASA’s most advanced Mars rover to date, and while its general routes are decided by a team at NASA’s Jet Propulsion Laboratory in Southern California, the rover actually moves forward autonomously, checking for hazards and moving around any problematic objects as it goes.

The animation covers the entire 18.7 miles (about 30 kilometers) traveled by Perseverance over the last 44 months, and includes the locations where it’s been collecting samples of Mars rock and soil.

Those samples will be returned to Earth in the coming years so that scientists can study them in laboratory conditions to try to determine whether microbial life ever existed on the red planet.


Most of Perseverance’s travels have taken place inside Jezero Crater, a place once filled with water and which scientists believe has the best chance of containing evidence of ancient life.

In recent months, however, Perseverance has embarked on a challenging climb up the side of the crater and is now tackling its steepest inclines to date.

Because much of the material it’s currently driving over comprises loosely packed dust and sand with a thin, brittle crust, Perseverance has recently been slipping a lot and has covered only about 50% of the distance that it would have managed on a more stable surface. On one occasion, it managed only 20% of the planned route.

“Mars rovers have driven over steeper terrain, and they’ve driven over more slippery terrain, but this is the first time one had to handle both, and on this scale,” said JPL’s Camden Miller, who is a rover planner, or “driver,” on the Perseverance mission. “For every two steps forward Perseverance takes, we were taking at least one step back. The rover planners saw this was trending toward a long, hard slog, so we got together to think up some options.”


The team used a replica rover on Earth to test out some new maneuvers aimed at reducing slippage, and also considered alternative routes featuring different terrain. Assessing the data, the planners settled on altering the route, and Perseverance is continuing on its way at a steady pace.

“That’s the plan right now, but we may have to change things up the road,” Miller said. “No Mars rover mission has tried to climb up a mountain this big, this fast. The science team wants to get to the top of the crater rim as soon as possible because of the scientific opportunities up there. It’s up to us rover planners to figure out a way to get them there.”

Those opportunities include access to rocks from the most ancient crust of Mars, formed by a wealth of different processes. Rocks there have never been analyzed up close before, and they could potentially preserve evidence of once-habitable environments.





