
Tech

Google introduces TurboQuant, cutting LLM memory usage by 6x with no accuracy loss

The biggest memory burden for LLMs is the key-value cache, which stores conversational context as users interact with AI chatbots. The cache grows as conversations lengthen, increasing both memory usage and power consumption. TurboQuant addresses this issue by reducing model size with “zero accuracy loss,” improving vector search efficiency, and…
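The article doesn't describe TurboQuant's actual scheme, but the memory arithmetic behind KV-cache quantization can be sketched generically. The snippet below is a plain NumPy illustration, not TurboQuant's algorithm: it quantizes a stand-in key-value tensor from float32 to int8 for a 4x reduction (schemes like TurboQuant push below 8 bits per value to reach higher ratios).

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: keep int8 codes plus one float scale.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Stand-in for one KV-cache tensor: 1024 cached key vectors of width 128.
rng = np.random.default_rng(0)
kv = rng.standard_normal((1024, 128)).astype(np.float32)

q, scale = quantize_int8(kv)
kv_hat = dequantize(q, scale)

print(kv.nbytes // q.nbytes)                     # 4  (float32 -> int8)
print(bool(np.abs(kv - kv_hat).max() <= scale))  # True: error bounded by one quantization step
```

Because the cache grows with conversation length, the compression ratio translates directly into longer contexts per gigabyte of accelerator memory.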

Tech

Apple takes on Microsoft 365 and Google Workspace with new business platform


Set to debut on April 14 in more than 200 countries and regions, Apple Business brings together the company’s existing enterprise programs – Apple Business Connect, Apple Business Essentials, and Apple Business Manager. The new service represents Apple’s most comprehensive effort yet to provide small and mid-sized companies with integrated…

Tech

30 Years Ago, Robots Learned to Walk Without Falling


When you hear the term humanoid robot, you may think of C-3PO, the human-cyborg-relations android from Star Wars. C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time.

Before the release of Star Wars, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance.

It wasn’t until 1996 that the first autonomous robot capable of walking without falling was developed in Japan. Honda’s Prototype 2 (P2) was nearly 183 centimeters tall and weighed 210 kilograms. It could control its posture to maintain balance, and it could move multiple joints simultaneously.

In recognition of that decades-old feat, P2 has been honored as an IEEE Milestone. The dedication ceremony is scheduled for 28 April at the Honda Collection Hall, located on the grounds of the Mobility Resort Motegi, in Japan. The machine is on display in the hall’s robotics exhibit, which showcases the evolution of Honda’s humanoid technology.


In support of the Milestone nomination, members of the IEEE Nagoya (Japan) Section wrote: “This milestone demonstrated the feasibility of humanlike locomotion in machines, setting a new standard in robotics.” The Milestone proposal is available on the Engineering Technology and History Wiki.

Developing a domestic android

In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and Toru Takenaka set out to develop what they called a “domestic robot” to collaborate with humans. It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their research paper on the project.

“We believe that a robot working within a household is the type of robot that consumers may find useful,” the authors wrote.

But to create a machine that would do household chores, it had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers.


But no robot could do that at the time. The closest technologists got was the WABOT-1. Built in 1973 at Waseda University, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded on its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn’t maneuver around obstacles or maintain its balance. It was powered by an external battery and computer.

To build an android, the Honda team began by analyzing how people move, using themselves as models.

That led to specifications for the robot that gave it humanlike dimensions, including the location of the leg joints and how far the legs could rotate.

Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot’s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2’s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly. A human’s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot.


The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which was compact and offered high torque capacity.

To test their ideas, the engineers built what they called E0. The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved using static walking in a straight line, according to a post about the project on Honda’s website. (Static walking is when the body’s center of mass is always within the foot’s sole. Humans walk with their center of mass below their navel.)

The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. These algorithms gave the robot a locomotion mechanism called dynamic walking, whereby it stays upright by constantly moving and adjusting its balance rather than keeping its center of mass over its feet, according to a video on the YouTube channel Everything About Robotics Explained.
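The static-walking definition above has a direct geometric test: the gait is statically stable exactly when the ground projection of the center of mass lies inside the support polygon. A minimal sketch, with the foot sole modeled as a rectangle (the dimensions are invented for illustration):

```python
def statically_stable(com_xy, foot_rect):
    # Static walking: the center of mass, projected onto the ground, must
    # stay inside the support polygon -- here a single foot sole modeled
    # as an axis-aligned rectangle (xmin, xmax, ymin, ymax), in meters.
    x, y = com_xy
    xmin, xmax, ymin, ymax = foot_rect
    return xmin <= x <= xmax and ymin <= y <= ymax

# A foot sole 10 cm wide and 24 cm long, centered at the origin.
foot = (-0.05, 0.05, -0.12, 0.12)

print(statically_stable((0.00, 0.02), foot))  # True: the robot can pause mid-step
print(statically_stable((0.09, 0.00), foot))  # False: it would tip without dynamic balance
```

Dynamic walking deliberately violates this test during each stride, which is why it requires the constant sensing and rebalancing described above.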

“P2 was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.” —IEEE Nagoya Section


The Honda team installed rubber brushes on the bottom of the machine’s feet to reduce vibrations from the landing impacts (the force experienced when its feet touch the ground)—which had made the robot lose its balance.

Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success.

With the dynamic walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force at which the ground pushed back against the robot’s feet and the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability.

The team also developed a posture-stabilizing control system to help the robot stay upright. A local controller commanded the electric motor actuators so that each leg joint tracked its target angle while walking, according to the research paper.
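The paper's controller details go beyond this summary, but joint-angle tracking of the kind described is classically done with a proportional-derivative (PD) loop. A minimal single-joint sketch, with gains, inertia, and time step invented for illustration:

```python
def pd_torque(theta_ref, theta, theta_dot, kp=80.0, kd=6.0):
    # Proportional term pulls the joint toward the reference angle;
    # the derivative term damps the motion so it settles without ringing.
    return kp * (theta_ref - theta) - kd * theta_dot

def simulate(theta_ref, steps=2000, dt=0.001, inertia=0.5):
    # Semi-implicit Euler integration of one joint with no gravity load.
    theta, theta_dot = 0.0, 0.0
    for _ in range(steps):
        torque = pd_torque(theta_ref, theta, theta_dot)
        theta_dot += (torque / inertia) * dt
        theta += theta_dot * dt
    return theta

final = simulate(0.3)  # command the joint to 0.3 rad and run for 2 s
print(abs(final - 0.3) < 0.01)  # True: the joint settles on the reference
```

A real biped runs many such loops at once, with the posture stabilizer adjusting the reference angles themselves to keep the body balanced.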


During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs.

In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed Prototype 1 (P1). Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. The arm length was based on the ability of the robot to pick up an object when squatting.

When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70 kg object.

P1 was not launched publicly but instead used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda.


For P2, four video cameras were installed in its head—two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep.

A computer with four microSPARC II processors running a real-time operating system was installed in the robot’s torso. The processors were used to control the arms, legs, joints, and vision-processing cameras.

Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper. The battery lasted for about 15 minutes; the machine also could be charged by an external power supply.

The hardware was enclosed in white-and-gray casing.


P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly.


The following year, Honda’s engineers released the smaller and lighter P3. It was 160 cm tall and weighed 130 kg.

In 2000 the popular ASIMO robot was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The most recent version was released in 2011. Honda has retired the robot.


Honda P2’s influence

Thanks to P2, today’s androids are not just ideas in a laboratory. Robots have been deployed to work in factories and, increasingly, at home.

The machines are even being used for entertainment. During this year’s Spring Festival gala in Beijing, machines developed by Chinese startups Unitree Robotics, Galbot, Noetix, and MagicLab performed synchronized dances, martial arts, and backflips alongside human performers.

“P2’s development shifted the focus of robotics from industrial applications to human-centric designs,” the Milestone sponsors explained in the wiki entry. “It inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence.

“It was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.”


To learn more about robots, check out IEEE Spectrum’s guide.

A plaque recognizing Honda’s P2 robot as an IEEE Milestone is to be installed at the Honda Collection Hall. The plaque is to read:

In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda’s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.



Tech

Microsoft, Lime and others helping to celebrate opening of new light rail line from Seattle to Eastside


The Link light rail 2 Line heads east toward Lake Washington with downtown Seattle and Lumen Field in the background. (Sound Transit Photo)

Sound Transit’s Link light rail will carry passengers across Lake Washington for the first time on Saturday with the opening of the Crosslake Connection, and celebrations are planned at every stop.

Trains will begin running between Seattle and the Eastside at around 10 a.m. following a 9 a.m. street fair and ribbon-cutting ceremony at Sam Smith Park, across the street from the Judkins Park Station.

Events will take place at 10 stations across the expanded 2 Line from the International District to Bellevue and Redmond, lasting until 2 p.m. Here are a few tech-related highlights:

  • Microsoft is donating 3,000 commemorative ORCA cards, loaded with the value of one light rail round-trip. The cards will be available at the welcome tent at Sam Smith Park and from Sound Transit and Microsoft ambassadors while supplies last.
  • Lime is offering free electric bike and scooter rides on opening day with the code CROSSLAKE26.
  • The Seattle Orcas, the professional cricket team backed by big names in tech, will host a celebration at the Marymoor Village Station. Visitors can learn about the sport, get a picture in the photo booth with the Orcas mascot, and more.
  • Microsoft is also hosting activities at the Redmond Technology Station, with entertainment, complimentary food and coffee, photo opportunities, lawn games and more.

The opening of the Crosslake Connection could alter commute habits for thousands of tech workers from Microsoft, Amazon and other companies who travel in both directions between major office hubs in Seattle, Bellevue and Redmond.

Sound Transit projects that the fully integrated 2 Line will serve about 43,000 to 52,000 daily riders in 2026.

Trains over Lake Washington will operate at speeds of 55 mph, running every 10 minutes from approximately 5 a.m. to midnight seven days a week.


Tech

OpenAI shelves erotic ChatGPT after staff, investors, & advisors revolt


OpenAI has shelved its plans to add an erotic “adult mode” to ChatGPT indefinitely, the Financial Times reported on Wednesday, capping a five-month saga in which the feature was announced with confidence, delayed twice, and ultimately abandoned after pushback from staff, advisors, and investors. The retreat is the third major product reversal for OpenAI in a single week, following the shutdown of its Sora video generation app on Monday and the subsequent collapse of a planned $1 billion investment from Disney.

The adult mode was first announced by CEO Sam Altman in October 2025, when he wrote on X that OpenAI was confident it could age-gate sexually explicit conversations and that the move aligned with the company’s principle to “treat adult users like adults.” It was initially scheduled for December 2025, then pushed to the first quarter of 2026, and has now been postponed with no timeline for release. OpenAI told the Financial Times it plans to conduct “long-term research on the effects of sexually explicit chats and emotional attachments” before making a product decision.

What went wrong

The problems were technical, ethical, and commercial, and they compounded one another. Engineers working on the feature discovered that it was harder than anticipated to reliably coax explicit material from models that had been built, for safety reasons, to avoid sexual content. When they used datasets that included sexual content, the models also generated outputs involving illegal scenarios, including bestiality and incest, that proved difficult to filter out. The feature was not merely controversial; it was resistant to being built safely.

OpenAI’s own advisory board raised concerns that went beyond content moderation. Advisors warned that sexually explicit ChatGPT interactions could foster unhealthy emotional attachments with serious mental health consequences. One advisor described the risk as turning ChatGPT into a “sexy suicide coach,” a phrase that resonates grimly given the company’s existing legal exposure. OpenAI currently faces at least eight lawsuits alleging that ChatGPT contributed to user deaths, including the case of Adam Raine, a 16-year-old from Southern California whose family alleges the chatbot discussed methods of suicide with him more than 200 times before he took his own life in April 2025. Earlier this week, OpenAI flagged these lawsuits as among the top risks to its business in a financial document disclosed to investors.


Staff, too, began to question whether the feature served OpenAI’s stated mission. The company’s charter commits it to building artificial general intelligence that benefits humanity. Some employees found it difficult to reconcile that ambition with the engineering effort required to make a chatbot talk dirty without breaking the law.


The investor calculation

Investors delivered what may have been the decisive objection: the economics did not justify the risk. Two people familiar with the matter told the Financial Times that some investors questioned why OpenAI would jeopardise its reputation for a product with “relatively small upside.” The AI-generated adult content market exists, but it is served by a constellation of smaller, less scrutinised companies. For a company raising capital at a $300 billion valuation and courting enterprise customers, the brand damage from association with explicit content outweighed the potential revenue.


The age verification problem sharpened this concern. OpenAI’s approach relied on AI-based age prediction rather than hard identity checks, and internal testing revealed an error rate of approximately 10 per cent, meaning roughly one in ten users could be misclassified. For a product designed to keep explicit content away from minors, that margin is not a rounding error. It is a regulatory and reputational catastrophe waiting to happen, particularly in a legal environment where multiple US states have passed or proposed laws requiring platforms to verify users’ ages before granting access to adult material.
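Basic probability shows how quickly a 10 per cent error rate compounds across repeated gating decisions. The independence assumption and check counts below are illustrative, not OpenAI figures:

```python
def p_misclassified_at_least_once(error_rate, checks):
    # Assuming independent checks, the chance of at least one wrong
    # age classification grows quickly with repeated gating decisions.
    return 1 - (1 - error_rate) ** checks

print(round(p_misclassified_at_least_once(0.10, 1), 2))   # 0.1  - a single check
print(round(p_misclassified_at_least_once(0.10, 10), 2))  # 0.65 - ten checks over time
```

Under these assumptions, a user gated ten separate times faces roughly a two-in-three chance of at least one misclassification, which is why a 10 per cent error rate is far worse than it sounds for a service used repeatedly.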

A week of retreats

The adult mode decision does not exist in isolation. On Monday, OpenAI announced it would discontinue Sora, the AI video generation tool it had positioned as a creative platform for filmmakers and content creators. Sora consumed vast computing resources relative to its revenue, and its most prominent commercial partnership, a three-year licensing agreement with Disney that would have allowed users to generate videos featuring characters from Disney, Marvel, Pixar, and Star Wars, collapsed after the shutdown was announced. Disney had planned to invest $1 billion in OpenAI as part of the deal. No money had changed hands.

Together, the three reversals paint a picture of a company pulling back from consumer product experiments and refocusing on its core business. The Financial Times reported that investors are more interested in seeing OpenAI combine ChatGPT with coding assistants to develop a “super app” aimed at transforming how businesses operate, a vision with clearer monetisation and fewer reputational hazards than either video generation or erotic chatbots.

OpenAI has said it will reallocate resources to robotics and autonomous software agents, areas where the path from research to commercial value is more direct and the regulatory landscape, while complex, does not involve the specific toxicity of sexualised AI and child safety failures.


The pattern

There is a recurring dynamic in OpenAI’s product strategy: announce ambitiously, encounter the real-world complications that less confident organisations might have anticipated, and then retreat while framing the reversal as prudent research. The adult mode was announced before the technical problems of safe content generation were solved, before the age verification system could achieve acceptable accuracy, and before the advisory board’s concerns about mental health harms had been addressed. The Sora partnership with Disney was announced before the product had demonstrated commercial viability. In both cases, the announcement generated coverage and signalled ambition, but the follow-through revealed gaps between what was promised and what could be delivered.

The company’s willingness to shelve the feature, rather than push it out despite the risks, is itself worth noting. It suggests that the pressure from lawsuits, investors, and internal dissent is beginning to function as a corrective mechanism, pulling OpenAI back from the edges of what is technically possible toward what is commercially and ethically sustainable. Whether that mechanism is reliable, or merely responsive to the most visible crises, is a question the next product announcement will answer.


Tech

Samsung Galaxy A57 vs Galaxy A56: What’s new this year?


The Samsung Galaxy A57 is the company’s new mid-ranger for 2026, but what’s really new compared to 2025’s Galaxy A56?

While the two phones look similar at a glance, look a little closer and you’ll start to see subtle differences not only in the overall design, but key areas like display tech, performance and software that should make the Samsung Galaxy A57 a little more tempting – and quite possibly one of the best mid-range phones around.

While we’re yet to fully review this year’s mid-ranger, we’ve spent some time with the phone ahead of its launch, and here’s how it compares to the Samsung Galaxy A56 on paper. 

Slimmer, lighter and more durable

One of the most immediate differences between the Galaxy A57 and its predecessor comes in the design department. The Galaxy A57 is 0.5mm thinner than the 7.4mm-thick Galaxy A56, measuring in at 6.9mm – and while that doesn’t sound like much on paper, it makes a noticeable difference in the overall feel of the phone.

Image Credit (Trusted Reviews)


Combined with a weight of just 179g, 20g lighter than the A56, it should feel much more comfortable to hold and use in day-to-day life, even if it isn’t quite as ultra-slim as the likes of the iPhone Air and Samsung Galaxy S25 Edge.

As an added bonus, the Galaxy A57 is also more durable, with Gorilla Glass Victus Plus on both the front and rear glass panels, along with IP68 dust and water resistance, up from last year’s IP67. 

A brighter, more premium-looking screen

Apparently not content with just making the phone thinner and lighter, Samsung also set its sights on upgrading the display experience with this year’s mid-ranger. The Galaxy A57 may sport the same-sized 6.7-inch screen as the A56, but a cursory glance at the phones reveals immediate differences – especially when it comes to the size of the bezels.

The Galaxy A56 had massively mismatched bezels; there’s no getting around it. The sides measured in at 2.2mm thick, the forehead was 2mm thick, and the chin was a whopping 3.3mm thick, and as a result, it didn’t look particularly premium. 


The Galaxy A57, for comparison, has 1.5mm-thick sides and forehead, with a slightly thicker 2.5mm chin. It’s still not completely symmetrical, but it at least feels more premium than last year’s panel. Elsewhere, Samsung has boosted the Vision Booster tech to make videos look sharper and brighter when displayed on the Super AMOLED panel. 


In other areas, however, the two panels are nearly identical; both offer a smooth 120Hz refresh rate, a peak brightness of 1900 nits, and FHD+ resolution. 

A boost in performance

The occasional outlier aside (I’m looking at you, Pixel 10a), you can always rely on boosted performance from newer smartphones, and that’s very much the case with the Galaxy A57 – though it still won’t compete with the most powerful phones in the mid-range market.


At its heart is the Exynos 1680, up from the Exynos 1580 on the A56, coupled with either 8- or 12GB of RAM – and this is faster LPDDR5X RAM too. Combined with either 256- or 512GB of storage, the latter of which is new for this year, the Galaxy A57 should deliver an uptick in performance and boosted storage to match.

There’s also an upgraded vapour chamber, which is apparently 13% bigger, though last year’s Galaxy A56 never really got all that hot in use – in our experience, anyway. 


First to get One UI 8.5, and more OS upgrades

The Galaxy A57 is the first in Samsung’s A-series to get the One UI 8.5 update that launched with the flagship Galaxy S26 range last month – though the Galaxy A56 will likely get the upgrade sometime in the near future. 


What’s more impressive is the long-term software support. The Galaxy A56 offered four years of combined OS and security upgrades, but the Galaxy A57 takes that to six years.

That’s a pretty solid promise for a mid-range phone, only really bested by the likes of the Pixel 10a and iPhone 17e, and should see the phone through to One UI 14 based on Android 22. The A56, on the other hand, will stop at One UI 11 based on Android 19. 

New ‘Awesome Intelligence’ features

Neither the Galaxy A56 nor A57 get the full suite of Galaxy AI features – that’s for the company’s flagships and foldables – but they do get a simplified toolkit under the ‘Awesome Intelligence’ umbrella. For the Galaxy A56, that meant features like object eraser, best face and auto trim.


With the Galaxy A57, Samsung has added the same improved Circle to Search tech and upgraded Bixby experience that shipped with the Galaxy S26 range, with the former allowing you to search for entire outfits at once, while the latter allows Samsung’s virtual assistant to control various aspects of your phone. 


What’s more, you can use both on the phone at once; one is activated by pressing the power button, the other by voice. 

Samsung has also introduced voice transcription tech with this year’s mid-ranger, offering transcription not only in the recorder app but in calls too. 

The question is whether the Galaxy A56 will get the same features once it too receives the One UI 8.5 update – we’ll have to wait and see for now. 


Early thoughts

Compared to last year’s Galaxy A56, the Galaxy A57 feels like a much more refined mid-range smartphone. It’s thinner, lighter, and more durable, and it boasts a screen that, while not quite the best for the price, is certainly headed in the right direction. 

Added bonuses like faster LPDDR5X RAM, increased base storage, a longer software promise and more AI features all look to sweeten the deal – though key hardware, from the camera setup to battery life and charging speed, feel almost identical to last year’s model.


We likely won’t recommend upgrading from last year’s A56 if you’ve got one, though we’ll save our final thoughts until we’ve spent some more time with both phones side by side. 


Tech

What is the release date for Marshals: A Yellowstone Story episode 5 on CBS and Paramount+?


Like it or lump it, Marshals: A Yellowstone Story is becoming more of a mystery by the minute. In episode 4 alone, Randall (Michael Cudlitz) has allegedly left a gun shell on Kayce’s (Luke Grimes) porch to warn him about what was coming.

Why? In episode 3, Kayce had shot Randall’s son, Carson, to save his teammate Miles’ (Tatanka Means) life. In true Yellowstone fashion, past actions are now catching up to Kayce in the ugliest of ways.


Tech

Spudnik-1 is the Purple Potato That Floated Through Space and Left Everyone Guessing


Photo credit: Don Pettit
During Expedition 72, NASA astronaut Don Pettit used his free time on the International Space Station for an intriguing side project: he coaxed an early purple potato to sprout in a small improvised garden of his own making. He removed a bit of the tuber and placed it in a container fitted with grow lights, fastening it in place with a small piece of Velcro. This simple system kept everything stable even as the station zoomed around the Earth.


The potato had smooth purple skin and had grown into an oval about the size of a large egg, with tiny tendrils shooting out in all directions like pallid threads caught in mid-stretch. No dirt was visible on any of the surfaces. The photograph quickly went viral, and the comments section filled with questions. Some wondered whether it was an unknown organism that had suddenly surfaced floating in space, while others compared it to props seen in sci-fi films.


Pettit ended up naming his little specimen Spudnik-1 and explaining to everyone what they were looking at. He got the idea from a story about a lone explorer who had to cultivate potatoes on Mars to survive. This was just his own personal experiment to explore how a familiar food like a potato would behave far away from home.

Microgravity changes everything about how a plant develops. Roots do not reach downward the way they would on Earth, instead spreading outward in every direction at once in search of water and nutrients. Shoots behave the same way, scattering rather than growing in a straight line upward. The whole plant takes on a loose, sprawling form that looks nothing like what you would find in a tidy garden back home. Growth is also slower than usual, since without the constant pull of gravity there is no physical stress on the living tissue to drive development forward.

Then there’s the fact that there’s no soil, so the potato skin remains smooth and even under the constant light of the artificial lamps, with no rough brown patches from hitting the earth. Moisture and light are properly metered, but that is the extent of management. It’s all simply these minor adjustments to try to imitate the natural pull of gravity and the cycle of sun and rain that we take for granted on Earth.



NASA teams have been cultivating a variety of plants aboard the station for years, including lettuce, Chinese cabbage, mustard greens, kale, and zinnias, all of which have survived under comparable conditions. Every harvest is a joy because it means the crew can eat some real food instead of vacuum-sealed meals. Along the way, they collect a wealth of information that helps them plan for longer-term expeditions to the Moon or Mars, when every piece of food they bring must serve several functions.

Pettit kept things simple by selecting a potato variety that naturally contains a high concentration of the pigments that give it its deep purple hue. It just so happens that those same molecules can help shelter cells from radiation, which is a significant benefit for longer missions. After the picture went viral, he kept folks informed with some fairly simple updates. The Velcro held the tuber in place, the grow lights provided a consistent supply of light, and then, well, it all came down to being patient and keeping an eye on things.


Tech

New Langflow flaw actively exploited to hijack AI workflows


The Cybersecurity and Infrastructure Security Agency (CISA) is warning that hackers are actively exploiting a critical vulnerability identified as CVE-2026-33017, which affects the Langflow framework for building AI agents.

The security issue received a critical score of 9.3 out of 10 and can be leveraged for remote code execution, allowing threat actors to build public flows without authentication.

The agency added the issue to the list of Known Exploited Vulnerabilities, describing it as a code injection vulnerability.

Researchers at application security company Endor Labs claim that hackers started exploiting CVE-2026-33017 on March 19, about 20 hours after the vulnerability advisory became public.

No public proof-of-concept (PoC) exploit code existed at the time, and Endor Labs believes that attackers built exploits directly from the information included in the advisory.

Automated scanning activity began within 20 hours of publication, followed by exploitation using Python scripts within 21 hours, and harvesting of data (.env and .db files) within 24 hours.

Langflow is a popular open-source visual framework for building AI workflows with 145,000 stars on GitHub. It provides a drag-and-drop interface for connecting nodes into executable pipelines, along with a REST API for running them programmatically.
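For context, running a flow through that REST API looks roughly like the sketch below. The `/api/v1/run/<flow_id>` path, the `input_value` field, and the `x-api-key` header are assumptions based on Langflow's public API documentation and may differ between versions; the relevant point for this story is that every such call should carry an API key, since the flaws described here involve reaching flow execution without authentication.

```python
import json
from urllib import request

# Hedged sketch: build (but do not send) a request that runs a Langflow
# flow via its REST API. The endpoint path, payload field, and header
# name are assumptions -- check them against your Langflow version.
def build_run_request(base_url: str, flow_id: str, api_key: str, message: str):
    payload = json.dumps({"input_value": message}).encode("utf-8")
    return request.Request(
        f"{base_url}/api/v1/run/{flow_id}",
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_run_request("http://localhost:7860", "my-flow", "sk-example", "hello")
print(req.full_url)  # http://localhost:7860/api/v1/run/my-flow
```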

The tool has widespread adoption across the AI development ecosystem, making it an attractive target for hackers.

In May 2025, CISA issued another warning about active exploitation in Langflow, targeting CVE-2025-3248, a critical API endpoint flaw that allows unauthenticated RCE and potentially leads to full server control.

The most recent flaw, CVE-2026-33017, lets attackers execute arbitrary Python code, impacts versions 1.8.1 and earlier of Langflow, and could be exploited via a single crafted HTTP request due to unsandboxed flow execution.

CISA did not mark the flaw as exploited by ransomware actors, but gave federal agencies until April 8 to apply the security updates or mitigations, or stop using the product.

System administrators are advised to upgrade to Langflow version 1.9.0 or later, which addresses the security problem, or to disable or restrict the vulnerable endpoint.
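As a quick triage step, the installed version (retrievable with `importlib.metadata.version("langflow")`) can be compared against the fixed release. The sketch below uses a naive dotted-number comparison and assumes plain `X.Y.Z` version strings; packaging-aware tools handle pre-release tags more robustly.

```python
# Naive version triage for CVE-2026-33017: Langflow 1.8.1 and earlier
# are affected; 1.9.0 and later carry the fix. Assumes plain X.Y.Z
# strings with no pre-release suffixes.
def is_vulnerable(installed: str, fixed: str = "1.9.0") -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) < parse(fixed)

print(is_vulnerable("1.8.1"))  # True: upgrade needed
print(is_vulnerable("1.9.0"))  # False: already patched
```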

Endor Labs also advised not to expose Langflow directly to the internet, to monitor outbound traffic, and to rotate API keys, database credentials, and cloud secrets when suspicious activity is detected.
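Since the observed attacks harvested .env and .db files, one small piece of that hygiene is making sure such files are not readable beyond their owner. A hedged, Unix-only sketch (the file extensions and scan root are illustrative choices, not anything Endor Labs prescribes):

```python
import os
import stat

# Walk a directory tree and flag .env/.db files whose permission bits
# grant any access to group or other users (anything beyond 0o600).
def overly_open_secrets(root: str = "."):
    flagged = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".env", ".db")):
                path = os.path.join(dirpath, name)
                mode = stat.S_IMODE(os.stat(path).st_mode)
                if mode & 0o077:  # group/other bits set
                    flagged.append((path, oct(mode)))
    return flagged
```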

CISA’s deadline formally applies to organizations covered by Binding Operational Directive (BOD) 22-01, but private sector companies, state and local governments, and other non-FCEB entities are also advised to treat it as a benchmark and respond accordingly.



Ajax football club hack exposed fan data, enabled ticket hijack


Dutch professional football club Ajax Amsterdam (AFC Ajax) disclosed that a hacker exploited vulnerabilities in its IT systems and accessed data belonging to a few hundred people.

The security issues also made it possible to transfer purchased tickets to other people and to modify stadium bans imposed on certain individuals.

The club learned about the security issues and their effect from journalists who were tipped off by the hacker.

AFC Ajax is one of Europe's most successful football clubs, having won the UEFA Champions League four times and 36 titles in the Eredivisie, the premier professional football league in the Netherlands.

“We recently discovered that a hacker in the Netherlands unlawfully gained access to parts of our systems. Data was viewed,” AFC Ajax stated.

“What we now know is that only the email addresses of a few hundred people were viewed. In addition, for fewer than 20 people with a stadium ban, their names, email addresses, and dates of birth were accessed.”

RTL journalists who received a tip from the hacker independently verified the vulnerabilities and reported that they were able to transfer season tickets from their holders to arbitrary people, access and modify stadium ban records, and gain broad access to fan data via APIs and shared keys.

In a demonstration, they reassigned a VIP season ticket in seconds. Most worryingly, RTL stated it could manipulate 42,000 season tickets, 538 supporter stadium bans, and view details on over 300,000 accounts.

AFC Ajax says that it has engaged external experts to determine the scope of the incident and identify the root cause, while noting that the exposed data has not been leaked.

Meanwhile, all identified vulnerabilities have been patched, and additional security measures have been introduced.

The Dutch Data Protection Authority, as well as the police, have also been notified.

RTL’s investigation was clearly non-malicious. Likewise, the attacker’s limited access and decision to disclose the flaws via the media, rather than exploit them for profit or extortion, suggest the vulnerabilities were not abused at scale.

However, it remains unclear whether this was the first time these weaknesses in Ajax’s systems were discovered or exploited.

Ajax fans who have registered with the club’s systems or purchased season tickets should remain vigilant for suspicious communications, especially those impersonating or claiming to come from the AFC Ajax club.



Google Gemini now lets you import your chats and data from other AI apps


Google is adding a pair of new features to Gemini aimed at making it easier to switch to the AI chatbot. Personal history and past context are big components of how a chatbot provides customized answers to each user, and Gemini now supports importing that history from other AI platforms. Both free and paid consumer accounts can use these options.

With the first option, Gemini can create a prompt asking a competitor’s AI chatbot to summarize what it has learned about you. The result might include details such as your typical written communication style, your family members’ names or your key preferences. The other AI tool’s summary can then be pasted into Gemini, providing Google’s platform with a preliminary profile.

The second option allows users to import their entire chat history with a different AI assistant into Gemini. Doing so allows people to reference earlier conversations or requests made on a different platform after migrating to the Google option.

Anthropic recently introduced a similar memory import feature, so Google may also be hoping to scoop up some of the people who are dropping OpenAI following its shady-sounding new arrangement with the Department of War. Whatever the motivation, these options should make it easier to have a seamless transition between providers.



Copyright © 2025