What Exoskeleton Technology Learned From One User

It’s easy to assume that Robert Woo was defined by the accident that took away his ability to walk.

Certainly, the day of his accident—14 December 2007—was a turning point. Woo, an architect working on the new Goldman Sachs headquarters in New York City, hadn’t attended his company’s holiday party the night before, and that morning he was the only one in the trailer that served as the construction-site office. He was bent over his laptop when, 30 floors above, a crane’s nylon sling gave way, sending about 6 tonnes of steel plummeting toward the trailer. The roof collapsed, folding Woo in half and driving his face into his laptop, which was smashed through his desk.

“I was conscious throughout the whole ordeal,” Woo remembers. “It was an out-of-body experience. I could hear myself screaming in pain. I could hear the voices of the rescue workers. I heard one firefighter say, ‘Don’t worry, we’re getting to you.’” The rescue workers hauled him out of the rubble and got him to the emergency room in 18 minutes flat; with one lung crushed and the other punctured, he wouldn’t have lasted much longer. In those frantic early moments, a doctor told him that he might be paralyzed from the neck down for the rest of his life. He remembers asking the doctors to let him die.

Woo simply couldn’t imagine how a paralyzed version of himself could continue living his life. Then 39 years old, he worked long hours and jetted around the world to supervise the construction of skyscrapers. More important, he had two young boys, ages 6 months and 2 years. “I couldn’t see having a life while being paralyzed from the neck down, not being able to teach my boys how to play ball,” he recalls. “What kind of life would that be?”

Robert Woo walks inside the Wandercraft facility in New York City using the company’s latest self-balancing exoskeleton. Nicole Millman

But in a Manhattan showroom last May, Woo showed that he’s not defined by that accident, which left him paralyzed from the chest down but with the use of his arms. Instead, he has defined himself by how he has responded to his injury, and by the new life he built after it.

In the showroom, Woo transferred himself from his wheelchair to an 80-kilogram (176-pound) exoskeleton suit. After strapping himself in, he manipulated a joystick in his left hand to rise from a chair and then walked across the room on robotic legs. Woo’s steps were short but smooth, and he clanked as he walked.

This exoskeleton, from the French company Wandercraft, is one of the first to let the user walk without arm braces or crutches, which most other models require to stabilize the user’s upper body. The battery-powered exoskeleton took care of both propulsion and balance; Woo just had to steer. The bulky apparatus had a backplate that extended above Woo’s head, a large padded collar, armrests, motorized legs, and footplates. Walking across the room, he appeared to be half man, half machine. On the other side of the showroom’s plate-glass window, on Park Avenue, a kid walking by with his family came to a dead halt on the sidewalk, staring with awe at the cyborg inside.

Robert Woo prepares to walk in a Wandercraft exoskeleton; the device’s controller enables him to stand up, initiate walk mode, and choose a direction. Bryan Anselm/Redux

The amazement on the boy’s face was reminiscent of Woo’s young sons’ reaction when they saw a photo of Woo trying out an early exoskeleton, back in 2011. “Their first comment was, ‘Oh, Daddy’s in an Iron Man suit,’” he remembers. Then they asked, “When are you going to start flying?” To which Woo replied, “Well, I’ve got to learn how to walk first.”

The title of exoskeleton superhero suits Woo. He’s as soft-spoken and mild-mannered as Clark Kent, with a smile that lights up his face. Yet the strength underneath is undeniable; he has built a new life out of sheer determination.

For 15 years, he’s been a test pilot, early adopter, and clinical-study subject for the most prominent exoskeletons under development around the world. He placed the first order for an exoskeleton that was approved for home use, and he learned what it was like to be Iron Man around the house. Throughout it all, he has given the companies detailed feedback drawn from both his architectural design skills and his user experience. He has shaped the technology from the inside.

Saikat Pal, a researcher at the New Jersey Institute of Technology, in Newark, met Woo during clinical trials for Wandercraft’s first model. Like so many others in the field, Pal quickly recognized that Woo brought a lot to the table. “He’s a super-mega user of exoskeletons: very enthusiastic, very athletic,” Pal says. “He’s the perfect subject.”

By pushing the technology forward, Woo has paved the way for thousands of people with spinal cord injuries as well as other forms of paralysis, who are now benefiting from exoskeletons in rehab clinics and in their homes. “Our bionics program at Mount Sinai started with Robert Woo,” says Angela Riccobono, the director of rehabilitation neuropsychology at Mount Sinai Hospital, in New York City, where Woo became an outpatient after his accident. “We have a plaque that dedicates our bionics program to him.”

Robert Woo walks down a sidewalk in New York City in 2015 using a ReWalk exoskeleton, one of the first exoskeletons designed for use outside the rehab clinic. Eliza Strickland

It’s a fitting tribute. Woo’s post-accident life has been marked by victories, frustrations, deep love, and one devastating loss, and yet he has continued to devote himself to bionics. And while his vision for exoskeletons hasn’t changed, experience has reshaped what he expects from them in his lifetime.

Long before Woo ever stood up in a robotic suit, he had developed the habits of mind that would later make him an unusually perceptive test pilot.

Woo has always been a builder, a tinkerer, a fixer. Growing up in the suburbs of Toronto, he put together model kits of battleships and airplanes without looking at the instructions. “I just put things together the way I thought it would work out,” he says. He trained as an architect and in 2000 joined the Toronto-based firm Adamson Associates Architects, a job that soon had him traveling to Europe and Asia to work on corporate high-rises.

Adamson specializes in taking the stunning designs of visionary architects and turning them into practical buildings with elevators and bathrooms. “Most of the design architects don’t really have a clue about how to build buildings,” Woo says. He liked solving those problems; he liked reconciling beautiful designs with the stubborn reality of construction. That talent for understanding a structure from the inside and spotting the flaws would prove essential later.

After his accident, Woo had two major surgeries to stabilize his crushed spine, which required surgeons to cut through muscles and nerves that connected to his arms. For two months, he couldn’t feel or move his arms; there was a chance he never would again. Only when sensation began creeping back into his fingertips did he allow himself to imagine a different future. If he wasn’t paralyzed from the neck down, he thought, maybe more of his body could be brought back online. “My focus was to walk again,” he says.

Woo was discharged in March 2008 and went back to his New York City apartment. He was still bedridden and required around-the-clock care. He doesn’t much like to talk about this next part: By May, his then-wife had moved back to Canada and filed for divorce, asking for full custody of their two children. Woo remembers her saying, “I can’t look after three babies, and one of them for life.”

It was a dark time. Riccobono of Mount Sinai, who met Woo shortly after he became an outpatient there in 2008, recalls the despondent look on his face the first time they talked. “I wasn’t sure that he wasn’t going to take his life, to be honest,” she says. “He felt like he had nothing to live for.”

Angela Riccobono of Mount Sinai Hospital (left) credits Woo with jump-starting the hospital’s bionics program; a plaque in the department of rehabilitation medicine recognizes his role.

Yet Woo harbors no animosity toward his ex-wife. “If we hadn’t separated and gone through the custody hearing, I don’t think I would have gotten this far,” he says. To win partial custody of his children, Woo had to become independent. He had to get off narcotic pain medications, regain strength, and learn how to navigate life in a wheelchair. He had to show that he no longer needed constant nursing, and that he could take care of both himself and his boys.

There were milestones: learning how to get back into his wheelchair after a fall, learning to drive a car with hand controls, learning to manage his body as it was, not as it had been. The biggest change came when he reconnected with his high school sweetheart, a vivacious woman named Vivian Springer. She was then dividing her time between Toronto and New York City, and she had a son who was almost the same age as Woo’s two boys. Springer had worked in a nursing home and knew how to change the sheets without getting him out of bed; she was by then working in human resources and knew how to deal with insurance companies. “You wouldn’t believe how much stress it lifted off of me,” Woo says. Over time, they became a family.

Robert Woo’s wife, Vivian, was trained in how to operate the device he used at home. His sons, Tristan (left) and Adrien, grew up watching their dad test exoskeletons. Left: Lifeward; Right: Robert Woo

Once Woo had that foundation in place, Riccobono witnessed a profound change. “He went from focusing on ‘what I can’t do anymore’ to ‘What’s still possible? What can I do with what I have?’” At Mount Sinai, Woo remembers asking his doctor Kristjan Ragnarsson, who was then chairman of the department of rehabilitation medicine, if he would ever walk again. “His response was, ‘Yes, you can walk again,’” Woo remembers, “‘but not the way you used to walk.’”

First Steps in an Exoskeleton

As soon as he regained use of his hands, Woo started googling, looking for anything that could get him back on his feet. He tried rehab equipment like the Lokomat, which used a harness suspended above a treadmill to enable users to walk. But at the time, it required three physical therapists: one to move each leg and one to control the machine. It was a far cry from the independent strides he dreamed of.

Several years in, he learned about two companies that had built something radically different: exoskeleton suits for people with spinal cord injuries. These prototypes had motors at the knees and the hips to move the legs, with the user stabilizing their upper body with arm braces. Woo desperately wanted to try one, although the technology was still experimental and far from regulatory approval. So he took the idea to Ragnarsson, asking if Mount Sinai could bring an exoskeleton into its rehab clinic for a test drive. Ragnarsson, who’s now retired, remembers the request well. “He certainly gave us the kick in the behind to get going with the technology,” he says.

Robert Woo tries out an early exoskeleton from Ekso Bionics at Mount Sinai Hospital, where he first began testing the technology. Mario Tama/Getty Images

Ragnarsson had seen decades of failed attempts to get paraplegics upright, including “inflatable garments made of the same material the astronauts used when they went to the moon,” he says. All those devices had proved too tiring for the user; in contrast, the battery-powered exoskeletons promised to do most of the work. And he knew one of the founders of Ekso Bionics, a Berkeley, Calif.–based company that had built exoskeletons for the military. In 2011, Ekso brought its new clinical prototype to Mount Sinai.

The day came for Woo’s first walk. “I was excited, and I was also scared, because I hadn’t stood up for almost five years,” he remembers. “Standing up for the first time was like floating, because I couldn’t feel my feet.” In that first Ekso model, Woo didn’t control when he stepped forward; instead, he shifted his weight in preparation, and then a physical therapist used a remote control to trigger the step. Woo walked slowly across the room, using a walker to stabilize his upper body, his steps a symphony of clunks and creaks and whirs. He found it mentally and physically exhausting, but the effort felt like progress.

Robert Woo stands using an exoskeleton and embraces his wife, Vivian. Woo says that exoskeleton use has both physical and psychological benefits. Mt. Sinai

Riccobono was there for those first steps, with tears running down her face. “I remembered how he looked the day I first met him, so defeated,” she says. “To see him rise from the chair, to see him rise to a standing position, to see how tall he was, to see him take those first steps—it was beautiful.” Ragnarsson saw clear benefits to the technology. “Any type of walking is good physiologically,” he says. “And it’s a tremendous boost psychologically to stand up and look someone in the eye.” Woo remembers hugging his partner, Springer, and for the first time not worrying about running over her toes with his wheelchair. I first met Woo a few days later, during his third session with the Ekso at Mount Sinai.

Ann Spungen (left), a researcher at a Veterans Affairs hospital, led early clinical trials of exoskeletons. Her research focused on the medical benefits of exoskeleton use. Robert Woo

Later that same year, at a Department of Veterans Affairs (VA) hospital in the Bronx, Woo got to try a prototype of the world’s other leading exoskeleton: the ReWalk, from the Israeli company of the same name (since renamed Lifeward). VA researchers, led by Ann Spungen, were keen to determine if exoskeleton use had real medical value for veterans with spinal cord injuries. Woo was part of that clinical trial, for which he had more than 70 walking sessions, and he’s since been in many others. But he remembers the first VA trial with the most gratitude. “Dr. Spungen’s first exoskeleton clinical trial really turned things around for me,” he says.

Over the course of the trial’s nine intense months, Woo says he saw noticeable improvements to many facets of his health. “By the end of the trial, I eliminated about three-quarters of my medication intake,” he says, including narcotic pain pills and medication for muscle spasms. He grew fitter, with less body fat, more muscle mass, and lower cholesterol. His circulation improved, he says, causing scrapes and cuts to heal more quickly, and his digestion improved too. The results Woo experienced have generally been borne out in research studies at the VA and elsewhere—exoskeletons aren’t just good for the mind, they’re good for the body.

Improving Exoskeletons From the Inside

During the VA trial, Woo began to think of exoskeletons not as miraculous machines, but as works in progress.

Pierre Asselin (right), a biomedical engineer, worked with Robert Woo during clinical trials of exoskeletons. He says Woo was always pushing the limits of the technology. Robert Woo

Pierre Asselin, the biomedical engineer coordinating the VA’s study, watched participants respond very differently to the equipment. “These devices are not the equivalent of walking—you’re tired after walking a mile,” he says. He notes that later models of both the Ekso and ReWalk enabled users to initiate each step through software that recognized when they shifted their weight. Asselin adds that the cognitive load is “like learning to drive a manual transmission car, where at first you’re really struggling to coordinate the clutch and the brake.” Woo picked it up immediately, he remembers.

Robert Woo uses an exoskeleton to reach items in a kitchen cabinet during a test of the device’s utility for everyday tasks. Eliza Strickland

Woo became an invaluable partner, Asselin says. “When we first started with the devices, there was no training manual. We developed all of that through collaboration with Robert and other participants.” Woo pushed the limits of the technology, Asselin says, whether it was seeing how many steps he could take on one battery charge or simulating a failure mode. “He’d say, ‘What happens if I was to fall? What would be the approach to getting up?’”

Woo approached the ReWalk the way he had approached buildings in his previous life: He looked inside the structure and found the weak points. An early model left some users with leg abrasions where the straps rubbed—a small injury for most people, but a serious risk for someone who can’t feel a wound forming. Woo suggested better padding and stronger abdominal supports to redistribute the load. He also hated the heavy backpack that carried the battery and computer, so one afternoon he grabbed an old pack, cut off the straps, and rebuilt it into a compact hip-mounted pouch. Then he snapped photos and sent them to the company. The next model arrived with a fanny pack.

Robert Woo sent detailed design sketches as part of his feedback to exoskeleton engineers. Robert Woo

Sometimes his fixes were more ambitious. One Ekso unit that he used at Mount Sinai kept shutting down after 30 minutes. Woo felt the hip motors and found them hot to the touch. “I said, ‘Can I remove these? I’m going to make a really quick fix, okay? Give me a drill and I’ll put a couple of holes in it,’” he recalls telling the therapists, proposing to create a DIY heat sink. He wasn’t allowed to modify the prototype, but a year later the company introduced improved cooling around the hip motors. “There is a Robert Woo design on this device,” one therapist told him.

Eythor Bender, who was then the CEO of Ekso, called Woo to thank him for his feedback and invite him to spend a week at Ekso’s headquarters. “There was no lack of engineering power in that building,” says Bender. “But sometimes when you work with engineers, they overlook important things.” Bender says Woo brought both design skills and lived experience to his weeklong residency. “He told the engineers, ‘Guys, this has to be something that people actually like to wear.’”

Ekso Bionics CEO Eythor Bender and Mount Sinai physician Kristjan Ragnarsson were both on hand for Woo’s early trials of the Ekso device. Ragnarsson says he saw physical and psychological benefits of exoskeleton use. Robert Woo

The longer Woo tested, the further ahead he started thinking. With motors only at the hips and knees, every exoskeleton still required crutches. Add powered ankles, he told the Ekso and ReWalk teams, and the suits could balance themselves, freeing the user’s hands. But Woo was ahead of his time. “They said they weren’t going to do that. They weren’t going to change their whole platform,” he remembers. Years later, though, hands-free exoskeletons like those from Wandercraft would emerge built around exactly that principle.

When the Exoskeleton Came Home

By the mid-2010s, Woo had pushed the technology as far as he could in clinics. What he wanted now was to use an exoskeleton at home.

That milestone came after ReWalk’s exoskeleton became the first to win FDA approval for home use in 2014. ReWalk engineers still remember Woo’s help on the final tests for that personal-use model. It was the end of May in 2015, recalls David Hexner, the company’s vice president of research and development. “He said, ‘Guys, this is great. I’m going to buy it.’”

Woo was the first customer to buy an exoskeleton to bring home, paying US $80,000 out of pocket. His insurance wouldn’t cover the cost, but he was able to make the purchase in part because of a legal settlement after his accident. The home-use model came with a requirement that the user have at least one companion who was fully trained in operating the device. In Woo’s case, that meant that Springer learned to suit him up, realign his balance, and help him if he fell.

On delivery day, two SUVs drove up to a hotel down the street from Woo’s condo in the Toronto area. The technicians hauled two huge boxes into a hotel room and assembled his personal exoskeleton. They took Woo’s measurements, made adjustments, checked the software. This latest version could be controlled by either weight shifting or tapping commands on a smartwatch, and Woo had the app ready. He tested out everything in the hotel room, signed off, and then the technicians drove his robot legs to his home.

That was the start of his golden period with the ReWalk—similar to the excitement many people experience with a new piece of exercise equipment. “I used it every day for a few hours, and then I started logging how many steps I’d done,” Woo says. “My last count was probably just slightly over a million steps,” he says, with half of those steps taken in his home unit and half in training programs and clinical trials.

The ReWalk was the first exoskeleton available for use outside the clinic. Robert Woo’s ReWalk arrived in two large boxes. ReWalk engineers assembled it in a hotel room, and Woo tried it out in the hallway before taking it home. Robert Woo

Tristan, Woo’s eldest son, remembers doing laps with his dad in the condo’s underground parking garage while his dad was training for a 5-kilometer race in New York City. Tristan admits that he had previously been embarrassed about his dad, but training for the race shifted something for him. “I was so used to not wanting to tell people that my dad was in a wheelchair, but then I shared his passion for the training,” he says. “When people would come up to us, I’d tell them about it.”

The ReWalk could turn ordinary moments into small engineering projects. On weekends, Woo would take his boys to the golf course behind their condo and bring a baseball. He had rigged two holsters to the sides of the suit so he could stash a crutch and stand on three points (two legs and one arm) while he pitched or caught. Throw, switch crutches, catch. On the day of his accident, he never thought such a scene would be possible. But with the exoskeleton, it became just another design problem to solve. “It’s a little more work. It’s not perfect,” he says. “But in the end, you still get to do what you want to do—which is play ball with your sons.”

Tristan, now a college student, says he didn’t realize at the time how hard his dad worked to make those mundane activities possible. “Reflecting on it now,” he says, “he has shaped almost every element of my life, and he definitely is my hero.”

But even during that golden stretch, the ReWalk had a way of asserting its limits. Every so often it would freeze mid-stride and require a reboot—a small technical hiccup in theory, but a serious problem when there’s a person strapped inside. Once, when he was walking on his own in the parking garage (without his mandated companion), the suit glitched and went into “graceful collapse” mode, lowering him to a seated position on the ground. Woo had to ask security to bring his wheelchair and a dolly.

He had imagined the exoskeleton would be most useful in the kitchen. Woo loves to cook, and he had pictured himself standing at the stove, looking down into pots, and moving easily between counter and sink. The reality, he found out, was more complicated. “It’s actually very time-consuming and troublesome” to cook in an exoskeleton, he says.

Preparing a meal meant first rolling through the kitchen in his wheelchair to gather every ingredient and utensil, then transferring himself into the ReWalk and moving himself into position at the counter, stopping at just the right moment. “That’s when I fell once,” Woo says. “I collided with the counter and then lost my balance and fell backward.” If all went well, he’d lean either on one crutch or the counter to keep his balance while he worked. But if he’d forgotten to grab the vinegar from the cabinet, he’d have to go into walk mode, crutch over to it, and figure out how to carry the bottle back to his workstation.

Sitting unused in Robert Woo’s home, his ReWalk exoskeleton reflects both the promise and the limits of early devices. Robert Woo

Gradually, he stopped trying. The suit, which he’d once worn every day, spent more time sitting idle in the hallway; like so many abandoned treadmills and stationary bikes, it gathered dust. Part of the reason was the exoskeleton’s practical limitations, but part of it was a shocking development: In 2024, Vivian was diagnosed with an aggressive form of breast cancer. She died in November of that year, at the age of 54.

Woo was scheduled to begin a new round of clinical trials for the Wandercraft home-use exoskeleton that month. In the aftermath of Vivian’s death, he postponed his sessions and questioned whether he would ever go back. “At the time, I thought, ‘What’s the point?’” he remembers.

He did go back, though. “He just rolled up, right into my office,” says Mount Sinai’s Riccobono. “He still had Vivian’s box of ashes on his lap. That’s how fresh it was.” Woo brought the box into a meeting of spinal cord injury patients and shared the story of losing the love of his life. And he told them that he heard his wife’s voice in his head every day, telling him to get back to work. Once again, he was figuring out how to move forward with what he had.

How Close Are We to Everyday Exoskeletons?

In the Wandercraft showroom last May, Woo steered toward the door to the street, technicians flanking him like spotters. The slope down to the sidewalk was barely an inch high, but everyone tensed. He shifted his weight and took a step forward. The suit halted automatically. He tried again—step, stop; step, stop—as the suit kept detecting the slight decline and a safety feature kicked in. The Wandercraft isn’t yet rated for slopes of more than 2 percent, and even the gentle pitch of Park Avenue was enough to trigger its safeguards. When he finally reached the sidewalk, Woo broke into a grin. A man in the back seat of a stopped Uber leaned out his window, filming.

During testing of the Wandercraft exoskeleton, straps caused an abrasion on Robert Woo’s leg, which he documented as part of his feedback to the company. Robert Woo

Woo had recently completed seven sessions with the Wandercraft at the VA hospital and had been impressed overall. But at the showroom, he rolled up his pants leg to reveal an abrasion on his shin, the result of a strap that had worn away a patch of skin during a long walking session. He would later send Wandercraft a nine-page assessment with photos and a technology wish list, asking the company to work on things like padding, variable walking speeds, and deeper squats.

Wandercraft’s engineers relish that kind of user feedback, says CEO Matthieu Masselin. Exoskeletons are a far more difficult engineering problem than humanoid robots, he explains. “You basically have two systems of equal importance. You know about the robot—it’s fully quantified and measured. But you don’t know what the person is doing, and how the person is moving within the device.”

Since Woo began testing exoskeletons 15 years ago, both the technology and the market have made strides. ReWalk and Ekso won FDA clearance for clinical use in the 2010s, and both now sell home-use versions. The companies have sold thousands of exoskeletons to rehab clinics and personal users, and they see room for growth; in the United States alone, about 300,000 people live with spinal cord injuries, and millions more have mobility impairments from stroke, multiple sclerosis, or other conditions. The VA began supplying devices to eligible veterans in 2015, and Medicare recently established a system for reimbursement, a move that private insurers are beginning to follow. What was once experimental is slowly becoming established.

Researchers who test the devices say the technology still has significant limits. Pal, of the New Jersey Institute of Technology, mentions battery life, dexterity, and reliability as ongoing challenges. But, he says with a laugh, “Our bodies have evolved over many millions of years—these machines will need a bit more time.” Pal hopes the companies will keep pushing the technological frontier. “My lifetime goal is to see the day when someone like Robert Woo can wake up in the morning, put this device on, and then live an ordinary life.”

For Woo, the real question about the self-balancing Wandercraft was: Could he cook with it? In the VA hospital’s home mockup, he tried it out in the kitchen, stepping sideways to retrieve items from cabinets and squatting to grab something from the fridge’s lower shelf. For the first time in years, he could work at a counter without leaning on crutches. “The self-standing exoskeleton changes everything,” he says. He imagines a user placing a Thanksgiving turkey on a tray attached to the suit and walking it into the dining room.

Back in the showroom, Woo finishes the demo and brings the suit to a seated position before transferring back to his wheelchair. After so many years of testing prototypes, he’s now realistic about the technology’s timeline. A truly all-day exoskeleton—the kind you live in, the kind that replaces a wheelchair—may be a decade or more away. “It may not be for me,” he says. But that’s no longer the point. He’s thinking about young people who are newly injured, who are lying in hospital beds and trying to imagine how their lives can continue. “This will give them hope.”

Tech

Does A Right Turn Traffic Light Mean ‘No Turn On Red’ In Florida?

Published

on





Traffic lights can be tricky, depending on where you go. The response you have to a red light at an intersection in one state may not be the same response you need at an intersection in another state. Turning right on red can even get you a ticket in some U.S. cites. But in Florida, a right turn traffic light may still allow a right turn after stopping. But there’s also a bit more to it than that.

First off, you must come to a complete stop at the red light. If you keep rolling through the turn instead, you could get a ticket. Next, if there are no posted warning signs at the light, Florida law says you can go ahead and turn right once it’s clear to do so. But if you have a sign warning you that there’s no turn on red, then you’re stuck. Stay where you are until you get the green light.

Similarly, if you have a red right arrow, you must fully stop there as well. But don't let the arrow fool you, as it's not an automatic signal that you may turn once the way is clear. If there are no posted signs that say otherwise (such as a "No turn on red" sign), you may proceed after determining that it is safe to do so. This is the case whether you're at an intersection or a crosswalk.

Crosswalks and malfunctioning traffic lights

If you come to a right turn traffic light at a crosswalk in Florida, keep in mind that you are expected to yield to any pedestrians who are crossing. Even if you’ve come to a complete stop and are otherwise allowed to turn, you must wait. If your light turns green and someone is still in the process of crossing, you should wait then as well. Additionally, if you’re at an intersection with sidewalks but no clearly marked crosswalk present, you still have to yield.

However, there could be times you arrive at a right turn traffic light that's malfunctioning. Maybe it's blinking, stuck, or completely dead. If this happens, Florida law says to treat the intersection as a four-way stop. That means you must come to a complete stop and yield the right of way to traffic coming from all directions, as well as to any pedestrians crossing in front of you. Once the way clears and you have an open right turn, you're free to go. Always be cautious when arriving at a light that's out of order, and make sure the intersection is fully clear before you continue.


Tech

Meta will record employees’ keystrokes and use them to train its AI models

Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from staff mouse movements and keystrokes as it pursues more capable and efficient artificial intelligence.

The story, which was first reported by Reuters, shows the lengths to which tech companies are going to find new sources of training data — the lifeblood of AI models that helps the programs learn how to more effectively carry out tasks and respond to user queries.

When reached for comment by TechCrunch, a Meta spokesperson provided the following statement: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”

This trend reveals a troublesome privacy dimension of the AI industry. Last week it was reported that defunct startups are being scavenged for their corporate communications (like Slack archives and Jira tickets), which are then converted into AI training data.


Tech

Microsoft lowers Game Pass Ultimate and PC prices, won't include next Call of Duty

The Game Pass front page on Microsoft’s website now shows revised pricing for the service’s two most expensive plans. Although delaying the addition of new Call of Duty titles marks a reversal of the company’s earlier strategy, the expanded library introduced during last year’s major price increase remains intact.

Tech

Cash App now supports accounts for kids 6-12

Cash App, the banking and payments app run by Block, has added support for parent-managed kids accounts. The new accounts include key benefits from the service’s normal account, with an eye towards teaching financial literacy to younger users ages 6 to 12. Cash App first allowed teenage users on its platform in 2021.

As part of the “expanded Cash App Families experience,” eligible legal guardians and parents can create managed accounts that offer “a dedicated place on the platform to send allowances, set aside savings, and track spending for their child, kickstarting their path to financial independence,” Cash App says. Adults managing these accounts will be able to set up recurring transfers, see how their child is spending and do things like lock their child’s account to prevent transactions. Kids will get a custom debit card and the ability to receive payments from up to five trusted accounts, though notably they won’t be able to access Cash App itself.

Cash App says managed accounts are designed for kids 6 through 12. Once those kids turn 13, Cash App says parents will be able to choose to convert their account to a “sponsored account” to unlock more features, like the ability to send and receive payments, invest in stocks or trade crypto. Those sponsored accounts are technically still monitored and controlled by a parent or legal guardian, but they do give 13-year-olds more control over how they use their money.

A parent-managed account for kids is not a new idea in the fintech space, though Cash App is trying to reach a younger audience than some of its competitors. Venmo rolled out access to its payment platform to teens between the ages of 13 and 17 in 2023. Separately, Apple and Google offer their own kids accounts in Apple Cash Family and Google Wallet, respectively.


Tech

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting

Florida’s attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is “not responsible for this terrible crime” and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner’s chat logs. “My prosecutors have looked at this and they’ve told me, if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising people on how to kill others.”

Uthmeier’s office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged that the investigation is entering uncharted territory and that it remains uncertain whether OpenAI has criminal liability. “We are going to look at who knew what, designed what, or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”

[…] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU’s Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.


Tech

Mozilla says it patched 271 Firefox vulnerabilities thanks to Anthropic’s Claude Mythos

Anthropic’s buzzy announcement earlier this month about using AI to improve cybersecurity was met with plenty of skepticism. However, Mozilla has shared details that support the use of the company’s special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla’s team find and patch 271 vulnerabilities in the latest release of the Firefox browser. “So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” the foundation said.

The blog post from Mozilla feels like a positive sign for Anthropic’s Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there’s something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI wasn’t able to turn up any bugs that a human wouldn’t have been able to find, given enough time and resources, which indicates that AI isn’t presently able to do more to crack cybersecurity protections than a person can.

An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for those Firefox users who aren’t personally interested in applying generative AI to their browsing, Mozilla has offered the option to turn it all off for the past several months.


Tech

Google’s new Deep Research and Deep Research Max agents can search the web and your private data

Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product’s debut, launching two new agents — Deep Research and Deep Research Max — that for the first time allow developers to fuse open web data with proprietary enterprise information through a single API call, produce native charts and infographics inside research reports, and connect to arbitrary third-party data sources through the Model Context Protocol (MCP).

The release, built on Google’s Gemini 3.1 Pro model, marks an inflection point in the rapidly intensifying race to build AI systems that can autonomously conduct the kind of exhaustive, multi-source research that has traditionally consumed hours or days of human analyst time. It also represents Google’s clearest bid yet to position its AI infrastructure as the backbone for enterprise research workflows in finance, life sciences, and market intelligence — industries where the stakes of getting information wrong are extraordinarily high.

“We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation,” Google CEO Sundar Pichai wrote on X. “Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering & synthesis using extended test-time compute — achieving 93.3% on DeepSearchQA and 54.6% on HLE.”

Both agents are available starting today in public preview via paid tiers of the Gemini API, accessible through the Interactions API that Google first introduced in December 2025.

Why Google built two research agents instead of one

The launch introduces a tiered architecture that reflects a fundamental tension in AI agent design: the tradeoff between speed and thoroughness.

Deep Research, the standard tier, replaces the preview agent Google released in December and is optimized for low-latency, interactive use cases. It delivers what Google describes as significantly reduced latency and cost at higher quality levels compared to its predecessor. The company positions it as ideal for applications where a developer wants to embed research capabilities directly into a user-facing interface — think a financial dashboard that can answer complex analytical questions in near-real time.

Deep Research Max occupies the opposite end of the spectrum. It leverages extended test-time compute — a technique where the model spends more computational cycles iteratively reasoning, searching, and refining its output before delivering a final report. Google designed it for asynchronous, background workflows: the kind of task where an analyst team kicks off a batch of due diligence reports before leaving the office and expects exhaustive, fully sourced analyses waiting for them the next morning.

The Google DeepMind team framed the distinction on X: “Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses. Deep Research Max: It uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.”

“Deep Research was our first hosted agent in the API and has gained a ton of traction over the last 3 months, very excited for folks to test out the new agents and all the improvements, this is just the start of our agents journey,” Logan Kilpatrick, who leads developer relations for Google’s AI efforts, wrote on X.

MCP support lets the agents tap into private enterprise data for the first time

Perhaps the most consequential feature in today’s release is the addition of Model Context Protocol support, which transforms Deep Research from a sophisticated web research tool into something more closely resembling a universal data analyst.

MCP, an emerging open standard for connecting AI models to external data sources, allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services — all without requiring sensitive information to leave its source environment. In practical terms, this means a hedge fund could point Deep Research at its internal deal-flow database and a financial data terminal simultaneously, then ask the agent to synthesize insights from both alongside publicly available information from the web.

Google disclosed that it is actively collaborating with FactSet, S&P, and PitchBook on their MCP server designs, a signal that the company is pursuing deep integration with the data providers that Wall Street and the broader financial services industry already rely on daily. The goal, according to the blog post authored by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, is to “let shared customers integrate financial data offerings into workflows powered by Deep Research, and to enable them to realize a leap in productivity by gathering context using their exhaustive data universes at lightning speed.”

This addresses one of the most persistent pain points in enterprise AI adoption: the gap between what a model can find on the open internet and what an organization actually needs to make decisions. Until now, bridging that gap required significant custom engineering. MCP support, combined with Deep Research’s autonomous browsing and reasoning capabilities, collapses much of that complexity into a configuration step. Developers can now run Deep Research with Google Search, remote MCP servers, URL Context, Code Execution, and File Search simultaneously — or turn off web access entirely to search exclusively over custom data. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.
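The mix-and-match tool configuration described above can be sketched in a few lines. To be clear, the payload shape below is purely illustrative: every field name and value in it is an assumption made for this example, not the actual schema of the Gemini Interactions API.

```python
# Hypothetical request builder for a Deep Research run that combines
# open-web search with a private MCP data source. All field names here
# are illustrative assumptions, not the real API schema.
def build_research_request(query, mcp_server_url=None, web_access=True):
    """Assemble a research-agent request mixing web and private sources."""
    tools = []
    if web_access:
        tools.append({"type": "google_search"})       # open-web grounding
    if mcp_server_url:
        tools.append({"type": "mcp_server",           # private data over MCP
                      "url": mcp_server_url})
    tools.append({"type": "url_context"})             # fetch specific URLs
    tools.append({"type": "code_execution"})          # run analysis code

    return {
        # "deep_research_max" would select the slower, exhaustive tier
        "agent": "deep_research",
        "input": query,
        "tools": tools,
    }

request = build_research_request(
    "Summarize Q3 deal flow against public market comparables",
    mcp_server_url="https://mcp.example.internal/dealflow",  # hypothetical
)
print(len(request["tools"]))  # 4 tool entries in this configuration
```

Setting `web_access=False` while keeping the MCP entry corresponds to the article's "search exclusively over custom data" mode.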

Native charts and infographics turn AI reports into stakeholder-ready deliverables

The second headline feature — native chart and infographic generation — may sound incremental, but it addresses a practical limitation that has constrained the usefulness of AI-generated research outputs in professional settings.

Previous versions of Deep Research produced text-only reports. Users who needed visualizations had to export the data and build charts themselves, a friction point that undermined the promise of end-to-end automation. The new agents generate high-quality charts and infographics inline within their reports, rendered in HTML or Google’s Nano Banana format, dynamically visualizing complex datasets as part of the analytical narrative.

“The agent generates HTML charts and infographics inline with the report. Not screenshots. Not suggestions to ‘visualize this data.’ Actual rendered charts inside the markdown output,” noted AI commentator Shruti Mishra on X, capturing the practical significance of the change.

For enterprise users — particularly those in finance and consulting who need to produce stakeholder-ready deliverables — this transforms Deep Research from a tool that accelerates the research phase into one that can potentially produce near-final analytical products. Combined with a new collaborative planning feature that lets users review, guide, and refine the agent’s research plan before execution, and real-time streaming of intermediate reasoning steps, the system gives developers granular control over the investigation’s scope while maintaining the transparency that regulated industries demand.

How Deep Research evolved from a consumer chatbot feature to enterprise platform infrastructure

Today’s release crystallizes a strategic narrative Google has been building for months: Deep Research is not merely a consumer feature but a piece of infrastructure that powers multiple Google products and is now being offered to external developers as a platform.

The blog post explicitly notes that when developers build with the Deep Research agent, they tap into “the same autonomous research infrastructure that powers research capabilities within some of Google’s most popular products like Gemini App, NotebookLM, Google Search and Google Finance.” This suggests that the agent available through the API is not a stripped-down version of what Google uses internally but the same system, offered at platform scale.

The journey to this point has been remarkably rapid. Google first introduced Deep Research as a consumer feature in the Gemini app in December 2024, initially powered by Gemini 1.5 Pro. At the time, the company described it as a personal AI research assistant that could save users hours by synthesizing web information in minutes. By March 2025, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental and made it available for anyone to try. Then came the upgrade to Gemini 2.5 Pro Experimental, where Google reported that raters preferred its reports over competing deep research providers by more than a 2-to-1 margin. The December 2025 release was the pivot to developer access, when Google launched the Interactions API and made Deep Research available programmatically for the first time, powered by Gemini 3 Pro and accompanied by the open-source DeepSearchQA benchmark.

The underlying model driving today’s improvements is Gemini 3.1 Pro, which Google released on February 19, 2026. That model represented a significant leap in core reasoning: on ARC-AGI-2, a benchmark evaluating a model’s ability to solve novel logic patterns, 3.1 Pro scored 77.1% — more than double the performance of Gemini 3 Pro. Deep Research Max inherits that reasoning foundation and layers autonomous research behaviors on top of it, achieving 93.3% on DeepSearchQA (up from 66.1% in December) and 54.6% on Humanity’s Last Exam (up from 46.4%).


Google’s new Deep Research Max agent outperformed its December predecessor across nearly all qualitative dimensions in internal expert evaluations — but the older version held an edge in internal consistency and faithfulness. (Source: Google DeepMind)

Google faces a crowded field of competitors building autonomous research agents

Google is not operating in a vacuum. The launch arrives amid intensifying competition in the autonomous research agent space. OpenAI has been developing its own agent capabilities within ChatGPT under the codename Hermes, which includes an agent builder, templates, scheduling, and Slack integration, according to reports circulating on social media. Perplexity has built its business around AI-powered research. And a growing ecosystem of startups is attacking various slices of the automated research workflow.

What distinguishes Google’s approach is the combination of its search infrastructure — which gives Deep Research access to the broadest and most current index of web information available — with the MCP-based connectivity to enterprise data sources. No other company currently offers a research agent that can simultaneously query the open web at Google Search’s scale and navigate proprietary data repositories through a standardized protocol. The pricing structure also signals Google’s intent to drive adoption: according to Sim.ai, which tracks model pricing, the Deep Research agent in the December preview was priced at $2 per million input tokens and $2 per million output tokens with a 1 million token context window — positioning it as cost-competitive for the volume of research output it generates.

Not everyone greeted the announcement with unalloyed enthusiasm, however. Several users on X noted that the new agents are available only through the API, not in the Gemini consumer app. “Not on Gemini app,” observed TestingCatalog News, while another user wrote, “Google keeps punishing Gemini App Pro subscribers for some reason.” Others raised concerns about the presentation of benchmark results, with one user arguing that Google’s charts could be “misleading” in how they represent percentage improvements. These complaints point to a broader tension in Google’s AI strategy: the company is increasingly directing its most advanced capabilities toward developers and enterprise customers who access them through APIs, while consumer-facing products sometimes lag behind.


Deep Research Max led all competitors on DeepSearchQA and BrowseComp, but GPT 5.4 edged ahead on Humanity’s Last Exam, a benchmark measuring reasoning and knowledge. All results were evaluated by Google DeepMind using publicly available model APIs. (Source: Google DeepMind)

What Deep Research Max means for finance, biotech, and the future of knowledge work

The practical implications of today’s launch are most immediately felt in industries that depend on exhaustive, multi-source research as a core business function. In financial services, where analysts routinely spend hours assembling due diligence reports from scattered sources — SEC filings, earnings transcripts, market data terminals, internal deal memos — Deep Research Max offers the possibility of automating the initial research phase entirely. The FactSet, S&P, and PitchBook partnerships suggest Google is serious about making this work with the data infrastructure that financial professionals already use.

In life sciences, the blog post notes that Google has collaborated with Axiom Bio, which builds AI systems to predict drug toxicity, and found that Deep Research unlocked new levels of initial research depth across biomedical literature. In market research and consulting, the ability to produce stakeholder-ready reports with embedded visualizations and granular citations could compress project timelines from days to hours.

The key question is whether the quality and reliability of these automated outputs will meet the standards that professionals in these fields demand. Google’s benchmark numbers are impressive, but benchmarks measure performance on standardized tasks — real-world research is messier, more ambiguous, and often requires the kind of judgment that remains difficult to automate. Deep Research and Deep Research Max are available now in public preview via paid tiers of the Gemini API, with availability on Google Cloud for startups and enterprises coming soon.

Eighteen months ago, Deep Research was a feature that helped grad students avoid drowning in browser tabs. Today, Google is betting it can replace the first shift at an investment bank. The distance between those two ambitions — and whether the technology can actually close it — will define whether autonomous research agents become a transformative category of enterprise software or just another AI demo that dazzles on benchmarks and disappoints in the conference room.


Tech

SpaceX and Cursor strike partnership that might end in a $60 billion acquisition

SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the startup for $60 billion later this year. “SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI,” SpaceX wrote in a post on X.

According to SpaceX, the deal allows it either to invest $10 billion in the company known for its AI coding tool, or to acquire it entirely “later this year” for $60 billion. If an acquisition were to happen, it’s not clear at what point Cursor would officially join Elon Musk’s rapidly expanding and increasingly enmeshed web of companies. SpaceX bought xAI, the billionaire’s AI company that also controls X, earlier this year. SpaceX is currently preparing to go public this summer in what will likely be the biggest initial public offering (IPO) in history.

Cursor, which has reportedly been in talks to raise its own $2 billion round of funding, is known for its AI coding tool of the same name that’s become the vibe coding platform of choice for many developers. It allows people to use either its own models or those from other leading AI companies, including OpenAI, Google, Anthropic and xAI.

In a statement, Cursor said its partnership with SpaceX will “accelerate our model training efforts” while addressing infrastructure-related issues that have slowed it down in the past. “We’ve wanted to push our training efforts much further, but we’ve been bottlenecked by compute,” the company said. “With this partnership, our team will leverage xAI’s Colossus infrastructure to dramatically scale up the intelligence of our models for coding and beyond.”


Tech

The Electromechanical Computer Of The B-52’s Star Tracker

The Angle Computer of the B-52, opened. (Credit: Ken Shirriff)

In the age before convenient global positioning satellites could be queried for one’s current location, military aircraft required dedicated navigators in order not to get lost. This changed with increasing automation, including the arrival of ever more sophisticated electromechanical computers, such as the angle computer in the B-52 bomber’s star tracker that [Ken Shirriff] recently had a poke at.

We have covered star trackers before; these devices enable the automation of celestial navigation. In effect, as long as you have a map of the visible stars and an accurate time source, you will never get lost on Earth, or a few kilometers above its surface as the case may be.

The B-52’s Angle Computer is part of the Astro Compass, which is the star tracker device that locks onto a star and outputs a heading that’s accurate to a tenth of a degree, while also allowing for position to be calculated from it. Inside the device a lot of calculations are being performed as explained in the article, though the full equations are quite complex.

Not burdening the navigator of a B-52 with ogling stars through an instrument and scribbling down calculations on paper is a good idea, of course. Instead, the Angle Computer solves the navigational triangle mechanically, essentially by modelling the celestial sphere with a metal half-sphere. The solving is thus done using this physical representation, involving numerous gears and other parts that are detailed in the article.
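The navigational triangle the gears are grinding through is standard spherical trigonometry. As a rough illustration (plain Python rather than cams and gears, and not the B-52's exact mechanization), the textbook sight-reduction formulas give a star's predicted altitude and azimuth from the observer's latitude, the star's declination, and its local hour angle:

```python
import math

def solve_navigational_triangle(lat_deg, dec_deg, lha_deg):
    """Return a star's predicted (altitude, azimuth) in degrees.

    lat_deg: observer latitude, dec_deg: star declination,
    lha_deg: local hour angle, measured westward from the observer's meridian.
    """
    lat, dec, lha = (math.radians(v) for v in (lat_deg, dec_deg, lha_deg))

    # Altitude: sin(Hc) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(LHA)
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(lha))
    alt = math.asin(sin_alt)

    # Azimuth angle: cos(Z) = (sin(dec) - sin(lat)*sin(Hc)) / (cos(lat)*cos(Hc))
    cos_z = ((math.sin(dec) - math.sin(lat) * sin_alt)
             / (math.cos(lat) * math.cos(alt)))
    z = math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))  # clamp rounding

    # Convert azimuth angle Z to a true bearing from north: the star is
    # east of the meridian before transit, west of it afterward.
    az = z if (lha_deg % 360) >= 180 else 360 - z
    return math.degrees(alt), az

# Observer at 45 deg N, star on the celestial equator, on the meridian
# (LHA = 0): the star sits due south, halfway up the sky.
alt, az = solve_navigational_triangle(45, 0, 0)
print(round(alt, 3), round(az, 3))  # ≈ 45.0 180.0
```

Evaluating these trigonometric products continuously, with only 1950s hardware, is exactly what the half-sphere-and-gears arrangement accomplishes.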

In addition to the mechanical components there are of course the motors driving it, the feedback mechanisms, and the interfaces to the aircraft's instruments. For the 1950s this was definitely the way to design a computer like this, but as semiconductor transistors swept the computing landscape, this marvel of engineering would before long itself be replaced with a fully digital version.


Tech

NYT Strands hints and answers for Wednesday, April 22 (game #780)

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Tuesday’s puzzle instead then click here: NYT Strands hints and answers for Tuesday, April 21 (game #779).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.

