When I was finally able to experiment with Auto Browse (for real this time), I took Google’s suggestions of digital chores as my starting point and picked online tasks that could be helpful in my own life.
Whenever interacting with generative AI tools, a healthy sense of skepticism—and caution—is critical. Google even includes a disclaimer baked into its Gemini chatbot reminding users that it makes mistakes. The Auto Browse tool goes a step further. “Use Gemini carefully and take control if needed,” reads persistent text that shows in the chatbot sidebar every time Auto Browse is running. “You are responsible for Gemini’s actions during tasks.”
Before you try it out, you also need to think about the security risks associated with this kind of automation. Generative AI tools are vulnerable to being compromised through prompt injection attacks on malicious websites. These attacks attempt to divert the bot from its task. The potential vulnerabilities in Google’s Auto Browse have not been fully examined by outside researchers, but the risks may be similar to other AI tools that take control of your computer.
In addition, take extra caution if you’re using Auto Browse to make purchases. Google has safeguards in place that flag certain actions, like buying stuff or posting on social media, as sensitive and in need of user approval to continue. Still, I was unsure how the bot would behave and anxious about the havoc it could potentially wreak with my credit card, to say nothing of handing over financial info to it in the first place.
Here’s the first prompt I sent it, card in hand:
I want to book two tickets to the SF symphony tonight. I don’t want to pay for orchestra seating, but the tickets don’t need to be the cheapest ones available. Please pick the two seats next to an aisle.
It’s a bit bizarre to watch Google’s AI agent click around in the tab. First, I saw it use Gemini 3, Google’s latest model, to strategize and define goals, like getting two aisle seats at the symphony, in the sidebar text box for a few seconds. This process looks similar to a chatbot using a “reasoning” model, talking through the steps it might take before moving forward. Then, the clicking starts. Each step the bot takes as part of a task is logged for users.
Auto Browse’s ability to perform multistep tasks without getting sidetracked was noticeably better than similar agent tools that I tested last year. It navigated to the correct website, chose the right performance, and clicked on multiple seat sections to gauge availability. Everything listed in the log appeared to be what it actually executed.
After a couple of minutes of working on the symphony tickets, the bot stopped clicking. I received a notification to take over and press the Order Now button. At a glance, the AI tool had seemingly delivered what I’d asked for, and rather quickly.
But if I had unquestioningly ordered the two seats Auto Browse chose for a date at the symphony, the night would most likely have ended with my boyfriend making me sleep on the couch.
David Sacks’s days as Donald Trump’s AI and crypto czar are over.
Speaking with Bloomberg on Thursday, the longtime entrepreneur, investor, and podcaster confirmed that his non-consecutive 130-day stint as a special government employee is over and that he’s moving on to co-chair the President’s Council of Advisors on Science and Technology (PCAST) alongside senior White House technology adviser Michael Kratsios.
“I think moving forward as co-chair of PCAST, I can now make recommendations on not just AI but an expanded range of technology topics,” he told Bloomberg via a video interview. “So yes, this is how I’ll be involved moving forward.”
What that means in practice is Sacks will be much further from the center of power in Washington than at any point since the outset of this second Trump administration. As AI czar, Sacks had a direct line to Trump and a hand in shaping policy. PCAST is a federal advisory body, so while it studies issues, produces reports, and sends recommendations up the chain, it doesn’t make policy.
The council has existed in some form since FDR, though Sacks made a point to Bloomberg of noting that this particular iteration has “the most star power of any group like this” ever assembled, and it’s hard to argue he’s wrong. The initial 15 members include Nvidia’s Jensen Huang, Meta’s Mark Zuckerberg, Oracle’s Larry Ellison, Google co-founder Sergey Brin, Marc Andreessen, AMD’s Lisa Su, and Michael Dell, among others. (That’s a lot of billionaires.)
Sacks told Bloomberg the council will take up AI, advanced semiconductors, quantum computing, and nuclear power, and that near-term attention will go toward pushing Trump’s national AI framework, released just last week. The framework is aimed at replacing what Sacks described to Bloomberg as a mess of conflicting state-level rules. “You’ve got 50 different states regulating this in 50 different ways,” he said, “and it’s creating a patchwork of regulation that’s difficult for our innovators to comply with.”
What Sacks didn’t address head-on was why the transition is happening now and whether his recent comments were a factor. Earlier this month, on the popular “All In” podcast that he co-hosts, Sacks publicly urged the administration to find an exit from the U.S.-backed war with Iran, walking through a set of worsening scenarios — attacks on oil infrastructure in neighboring countries, the destruction of desalination plants, the possibility of nuclear use by Israel — and calling for a polite way out. Trump responded by telling reporters that Sacks hadn’t spoken to him about the war. (The U.S.-Israel war on Iran has now been going on for approximately 27 days.)
Asked about the podcast episode on Thursday by Bloomberg, Sacks figuratively threw his hands in the air: “I’m not on the foreign policy team or the national security team,” he said, adding that his podcast comments represented his personal view, not an official one.
For all the marquee names Sacks is bringing to PCAST, it’s worth reflecting on what the council has historically been, which is an advisory body with some influence in some administrations and almost none in others.
President Obama’s version was seemingly the most productive on record, churning out 36 reports over eight years — two of which led to concrete policy changes, including an FDA rule that opened the market for over-the-counter hearing aids.
President Trump’s first-term council, by contrast, took nearly three years just to name its first members, produced a handful of reports, and made no particular mark, while President Biden’s council skewed heavily academic — Nobel laureates, MacArthur fellows, National Academy members — and issued a modest number of reports before the administration ended.
The current PCAST is a completely different animal, built almost entirely from the executive suites of the companies shaping the technology it will advise on.
Now, Sacks is again one of those unencumbered executives, free to resume his life as an investor and entrepreneur. A spokesperson for Craft Ventures, the firm Sacks co-founded and where he remains a partner, has not yet responded to related questions about next steps; TechCrunch reported last year on the ethics waivers Sacks obtained to maintain financial stakes in AI and crypto companies while shaping federal policy in both areas — an arrangement that drew sharp criticism from ethics experts and lawmakers.
An anonymous reader quotes a report from 404 Media: Apple provided the FBI with the real iCloud email address hidden behind Apple’s ‘Hide My Email’ feature, which lets paying iCloud+ users generate anonymous email addresses, according to a recently filed court record. The move isn’t surprising but still provides uncommon insight into what data is available to authorities regarding the Apple feature. The data was turned over during an investigation into a man who allegedly sent a threatening email to Alexis Wilkins, the girlfriend of FBI director Kash Patel.
“On or about February 28, 2026, Person 1 received an email from the email address peaty_terms_1o@icloud.com,” the affidavit reads. Earlier on, the document explicitly says that Person 1 is Alexis Wilkins. […] The affidavit says Apple then provided records that indicated the peaty_terms_1o@icloud.com email address was associated with an Apple account in the name of Alden Ruml. The records showed that account generated 134 anonymized email addresses, according to the affidavit.
Law enforcement agents later interviewed Ruml and he confirmed he had sent the email, the affidavit says. Ruml said he sent the email after reading a February 28 article about how the FBI was using its own resources to provide security to Wilkins. The specific article is not named or linked in the affidavit, but a New York Times article published that same day described how Patel ordered a team to ferry his girlfriend on errands and to events.
Set to debut on April 14 in more than 200 countries and regions, Apple Business brings together the company’s existing enterprise programs – Apple Business Connect, Apple Business Essentials, and Apple Business Manager. The new service represents Apple’s most comprehensive effort yet to provide small and mid-sized companies with integrated…
When you hear the term humanoid robot, you may think of C-3PO, the human-cyborg-relations android from Star Wars. C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time.
Before the release of Star Wars, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance.
It wasn’t until 1996 that the first autonomous robot capable of walking without falling was developed in Japan. Honda’s Prototype 2 (P2) was nearly 183 centimeters tall and weighed 210 kilograms. It could control its posture to maintain balance, and it could move multiple joints simultaneously.
In recognition of that decades-old feat, P2 has been honored as an IEEE Milestone. The dedication ceremony is scheduled for 28 April at the Honda Collection Hall, located on the grounds of the Mobility Resort Motegi, in Japan. The machine is on display in the hall’s robotics exhibit, which showcases the evolution of Honda’s humanoid technology.
In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and Toru Takenaka set out to develop what they called a “domestic robot” to collaborate with humans. It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their research paper on the project.
“We believe that a robot working within a household is the type of robot that consumers may find useful,” the authors wrote.
But to create a machine that would do household chores, it had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers.
But no robot could do that at the time. The closest technologists got was the WABOT-1. Built in 1973 at Waseda University, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded on its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn’t maneuver around obstacles or maintain its balance. It was powered by an external battery and computer.
To build an android, the Honda team began by analyzing how people move, using themselves as models.
That led to specifications for the robot that gave it humanlike dimensions, including the location of the leg joints and how far the legs could rotate.
Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot’s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2’s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly. A human’s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot.
The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which is compact and offers high torque capacity.
To test their ideas, the engineers built what they called E0. The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved in a straight line using static walking, according to a post about the project on Honda’s website. (Static walking means the body’s center of mass always stays over the sole of the supporting foot. Humans carry their center of mass just below the navel.)
The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. These allowed the robot to use a locomotion mechanism called dynamic walking, whereby the robot stays upright by constantly moving and adjusting its balance rather than keeping its center of mass over its feet, according to a video on the YouTube channel Everything About Robotics Explained.
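The distinction between the two gaits can be sketched in a few lines. The check below is a toy illustration, not Honda’s actual controller: static walking requires the center of mass’s ground projection to stay inside the support area at every instant, while dynamic walking tolerates moments when it falls outside, recovering with the next footstep. (All names and the bounding-box simplification are my own.)

```python
def com_over_support(com_xy, support_xs, support_ys):
    """Return True if the center-of-mass ground projection lies within
    the axis-aligned bounding box of the support region (a simplification
    of the true support polygon)."""
    x, y = com_xy
    return (min(support_xs) <= x <= max(support_xs)
            and min(support_ys) <= y <= max(support_ys))

# Static walking: this check must hold at every instant of the gait.
# Dynamic walking: the check may fail mid-stride; the controller
# recovers balance by placing the next footstep under the falling body.
balanced = com_over_support((0.0, 0.0), [-0.10, 0.10], [-0.05, 0.05])
tipping = com_over_support((0.30, 0.0), [-0.10, 0.10], [-0.05, 0.05])
```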
“P2 was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.” —IEEE Nagoya Section
The Honda team installed rubber brushes on the bottom of the machine’s feet to reduce vibrations from landing impacts (the force experienced when its feet touch the ground), which had made the robot lose its balance.
Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success.
With the dynamic walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force at which the ground pushed back against the robot’s feet and the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability.
The team also developed a posture-stabilizing control system to help the robot stay upright. A local controller directed how the electric motor actuators needed to move so the robot could follow the target leg-joint angles when walking, according to the research paper.
During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs.
In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed Prototype 1 (P1). Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. The arm length was based on the ability of the robot to pick up an object when squatting.
When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70 kg object.
P1 was not launched publicly but instead used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda.
For P2, four video cameras were installed in its head—two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep.
A computer with four microSPARC II processors running a real-time operating system was installed in the robot’s torso. The processors controlled the arms, legs, joints, and vision-processing cameras.
Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper. The battery lasted for about 15 minutes; the machine also could be charged by an external power supply.
The hardware was enclosed in white-and-gray casing.
P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly.
The following year, Honda’s engineers released the smaller and lighter P3. It was 160 cm tall and weighed 130 kg.
In 2000 the popular ASIMO robot was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The most recent version was released in 2011. Honda has retired the robot.
Honda P2’s influence
Thanks to P2, today’s androids are not just ideas in a laboratory. Robots have been deployed to work in factories and, increasingly, at home.
“P2’s development shifted the focus of robotics from industrial applications to human-centric designs,” the Milestone sponsors explained in the wiki entry. “It inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence.
“It was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.”
A plaque recognizing Honda’s P2 robot as an IEEE Milestone is to be installed at the Honda Collection Hall. The plaque is to read:
In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda’s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors.
Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.
The Link light rail 2 Line heads east toward Lake Washington with downtown Seattle and Lumen Field in the background. (Sound Transit Photo)
Sound Transit’s Link light rail will carry passengers across Lake Washington for the first time on Saturday with the opening of the Crosslake Connection, and celebrations are planned at every stop.
Trains will begin running between Seattle and the Eastside at around 10 a.m. following a 9 a.m. street fair and ribbon-cutting ceremony at Sam Smith Park, across the street from the Judkins Park Station.
Events will take place at 10 stations across the expanded 2 Line from the International District to Bellevue and Redmond, lasting until 2 p.m. Here are a few tech-related highlights:
Microsoft is donating 3,000 commemorative ORCA cards, loaded with the value of one light rail round-trip. The cards will be available at the welcome tent at Sam Smith Park and from Sound Transit and Microsoft ambassadors while supplies last.
Lime is offering free electric bike and scooter rides on opening day with the code CROSSLAKE26.
The Seattle Orcas, the professional cricket team backed by big names in tech, will host a celebration at the Marymoor Village Station. Visitors can learn about the sport, get a picture in the photo booth with the Orcas mascot, and more.
Microsoft is also hosting activities at the Redmond Technology Station, with entertainment, complimentary food and coffee, photo opportunities, lawn games and more.
The opening of the Crosslake Connection could alter commute habits for thousands of tech workers from Microsoft, Amazon and other companies who travel in both directions between major office hubs in Seattle, Bellevue and Redmond.
Sound Transit projects that the fully integrated 2 Line will serve about 43,000 to 52,000 daily riders in 2026.
Trains over Lake Washington will operate at speeds of 55 mph, running every 10 minutes from approximately 5 a.m. to midnight seven days a week.
OpenAI has shelved its plans to add an erotic “adult mode” to ChatGPT indefinitely, the Financial Times reported on Wednesday, capping a five-month saga in which the feature was announced with confidence, delayed twice, and ultimately abandoned after pushback from staff, advisors, and investors. The retreat is the third major product reversal for OpenAI in a single week, following the shutdown of its Sora video generation app on Monday and the subsequent collapse of a planned $1 billion investment from Disney.
The adult mode was first announced by CEO Sam Altman in October 2025, when he wrote on X that OpenAI was confident it could age-gate sexually explicit conversations and that the move aligned with the company’s principle to “treat adult users like adults.” It was initially scheduled for December 2025, then pushed to the first quarter of 2026, and has now been postponed with no timeline for release. OpenAI told the Financial Times it plans to conduct “long-term research on the effects of sexually explicit chats and emotional attachments” before making a product decision.
What went wrong
The problems were technical, ethical, and commercial, and they compounded one another. Engineers working on the feature discovered that getting models to produce explicit material reliably was harder than anticipated, because those models had been trained to avoid sexual content for safety reasons. When they used datasets that included sexual content, the models also generated outputs involving illegal scenarios, including bestiality and incest, that proved difficult to filter out. The feature was not merely controversial; it was resistant to being built safely.
OpenAI’s own advisory board raised concerns that went beyond content moderation. Advisors warned that sexually explicit ChatGPT interactions could foster unhealthy emotional attachments with serious mental health consequences. One advisor described the risk as turning ChatGPT into a “sexy suicide coach,” a phrase that resonates grimly given the company’s existing legal exposure. OpenAI currently faces at least eight lawsuits alleging that ChatGPT contributed to user deaths, including the case of Adam Raine, a 16-year-old from Southern California whose family alleges the chatbot discussed methods of suicide with him more than 200 times before he took his own life in April 2025. Earlier this week, OpenAI flagged these lawsuits as among the top risks to its business in a financial document disclosed to investors.
Staff, too, began to question whether the feature served OpenAI’s stated mission. The company’s charter commits it to building artificial general intelligence that benefits humanity. Some employees found it difficult to reconcile that ambition with the engineering effort required to make a chatbot talk dirty without breaking the law.
The investor calculation
Investors delivered what may have been the decisive objection: the economics did not justify the risk. Two people familiar with the matter told the Financial Times that some investors questioned why OpenAI would jeopardise its reputation for a product with “relatively small upside.” The AI-generated adult content market exists, but it is served by a constellation of smaller, less scrutinised companies. For a company raising capital at a $300 billion valuation and courting enterprise customers, the brand damage from association with explicit content outweighed the potential revenue.
The age verification problem sharpened this concern. OpenAI’s approach relied on AI-based age prediction rather than hard identity checks, and internal testing revealed an error rate of approximately 10 per cent, meaning roughly one in ten users could be misclassified. For a product designed to keep explicit content away from minors, that margin is not a rounding error. It is a regulatory and reputational catastrophe waiting to happen, particularly in a legal environment where multiple US states have passed or proposed laws requiring platforms to verify users’ ages before granting access to adult material.
A week of retreats
The adult mode decision does not exist in isolation. On Monday, OpenAI announced it would discontinue Sora, the AI video generation tool it had positioned as a creative platform for filmmakers and content creators. Sora consumed vast computing resources relative to its revenue, and its most prominent commercial partnership, a three-year licensing agreement with Disney that would have allowed users to generate videos featuring characters from Disney, Marvel, Pixar, and Star Wars, collapsed after the shutdown was announced. Disney had planned to invest $1 billion in OpenAI as part of the deal. No money had changed hands.
Together, the three reversals paint a picture of a company pulling back from consumer product experiments and refocusing on its core business. The Financial Times reported that investors are more interested in seeing OpenAI combine ChatGPT with coding assistants to develop a “super app” aimed at transforming how businesses operate, a vision with clearer monetisation and fewer reputational hazards than either video generation or erotic chatbots.
OpenAI has said it will reallocate resources to robotics and autonomous software agents, areas where the path from research to commercial value is more direct and the regulatory landscape, while complex, does not involve the specific toxicity of sexualised AI and child safety failures.
The pattern
There is a recurring dynamic in OpenAI’s product strategy: announce ambitiously, encounter the real-world complications that less confident organisations might have anticipated, and then retreat while framing the reversal as prudent research. The adult mode was announced before the technical problems of safe content generation were solved, before the age verification system could achieve acceptable accuracy, and before the advisory board’s concerns about mental health harms had been addressed. The Sora partnership with Disney was announced before the product had demonstrated commercial viability. In both cases, the announcement generated coverage and signalled ambition, but the follow-through revealed gaps between what was promised and what could be delivered.
The company’s willingness to shelve the feature, rather than push it out despite the risks, is itself worth noting. It suggests that the pressure from lawsuits, investors, and internal dissent is beginning to function as a corrective mechanism, pulling OpenAI back from the edges of what is technically possible toward what is commercially and ethically sustainable. Whether that mechanism is reliable, or merely responsive to the most visible crises, is a question the next product announcement will answer.
The Samsung Galaxy A57 is the company’s new mid-ranger for 2026, but what’s really new compared to 2025’s Galaxy A56?
While the two phones look similar at a glance, look a little closer and you’ll start to see subtle differences not only in the overall design, but key areas like display tech, performance and software that should make the Samsung Galaxy A57 a little more tempting – and quite possibly one of the best mid-range phones around.
While we’re yet to fully review this year’s mid-ranger, we’ve spent some time with the phone ahead of its launch, and here’s how it compares to the Samsung Galaxy A56 on paper.
Slimmer, lighter and more durable
One of the most immediate differences between the Galaxy A57 and its predecessor comes in the design department. The Galaxy A57 is 0.5mm thinner than the 7.4mm-thick Galaxy A56, measuring in at 6.9mm – and while that doesn’t sound like much on paper, it makes a noticeable difference in the overall feel of the phone.
Combined with a weight of just 179g, 20g lighter than the A56, it should feel much more comfortable to hold and use in day-to-day life, even if it isn’t quite as ultra-slim as the likes of the iPhone Air and Samsung Galaxy S25 Edge.
As an added bonus, the Galaxy A57 is also more durable, with Gorilla Glass Victus Plus on both the front and rear glass panels, along with IP68 dust and water resistance, up from last year’s IP67.
A brighter, more premium-looking screen
Apparently unhappy just making the phone thinner and lighter, Samsung also focused its sights on upgrading the display experience with this year’s mid-ranger. The Galaxy A57 may sport the same-sized 6.7-inch screen as the A56, but a cursory glance at the phones and the differences are immediate – especially when it comes to the size of the bezels.
The Galaxy A56 had massively mismatched bezels; there’s no getting around it. The sides measured in at 2.2mm, the forehead was 2mm thick, and the chin was a whopping 3.3mm thick. As a result, it didn’t look particularly premium.
The Galaxy A57, for comparison, has 1.5mm-thick sides and forehead, with a slightly thicker 2.5mm chin. It’s still not completely symmetrical, but it at least feels more premium than last year’s panel. Elsewhere, Samsung has boosted the Vision Booster tech to make videos look sharper and brighter when displayed on the Super AMOLED panel.
In other areas, however, the two panels are nearly identical; both offer a smooth 120Hz refresh rate, a peak brightness of 1900 nits, and FHD+ resolution.
A boost in performance
The occasional outlier aside (I’m looking at you, Pixel 10a), you can always rely on boosted performance from newer smartphones, and that’s very much the case with the Galaxy A57 – though it still won’t compete with the most powerful phones in the mid-range market.
At its heart is the Exynos 1680, up from the Exynos 1580 on the A56, coupled with either 8GB or 12GB of RAM – and this is faster LPDDR5X RAM too. Combined with either 256GB or 512GB of storage, the latter of which is new for this year, the Galaxy A57 should deliver an uptick in performance and boosted storage to match.
There’s also an upgraded vapour chamber, which is apparently 13% bigger, though last year’s Galaxy A56 never really got all that hot in use – in our experience, anyway.
First to get One UI 8.5, and more OS upgrades
The Galaxy A57 is the first in Samsung’s A-series to get the One UI 8.5 update that launched with the flagship Galaxy S26 range last month – though the Galaxy A56 will likely get the upgrade sometime in the near future.
What’s more impressive is the long-term software support. The Galaxy A56 offered a fine combination of four years of combined OS and security upgrades, but the Galaxy A57 takes that to six years.
That’s a pretty solid promise for a mid-range phone, only really bested by the likes of the Pixel 10a and iPhone 17e, and should see the phone through to One UI 14 based on Android 22. The A56, on the other hand, will stop at One UI 11 based on Android 19.
New ‘Awesome Intelligence’ features
Neither the Galaxy A56 nor A57 get the full suite of Galaxy AI features – that’s for the company’s flagships and foldables – but they do get a simplified toolkit under the ‘Awesome Intelligence’ umbrella. For the Galaxy A56, that meant features like object eraser, best face and auto trim.
With the Galaxy A57, Samsung has added the same improved Circle to Search tech and upgraded Bixby experience that shipped with the Galaxy S26 range. The former allows you to search for entire outfits at once, while the latter lets Samsung’s virtual assistant control various aspects of your phone.
Image Credit (Trusted Reviews)
What’s more, you can use both on the phone at once; one is activated by pressing the power button, the other by voice.
Samsung has also introduced voice transcription tech with this year’s mid-ranger, offering transcription not only in the recorder app but in calls too.
The question is whether the Galaxy A56 will get the same features once it too receives the One UI 8.5 update – we’ll have to wait and see for now.
Early thoughts
Compared to last year’s Galaxy A56, the Galaxy A57 feels like a much more refined mid-range smartphone. It’s thinner, lighter, and more durable, and it boasts a screen that, while not quite the best for the price, is certainly headed in the right direction.
Added bonuses like faster LPDDR5X RAM, a new 512GB storage option, a longer software promise and more AI features all look to sweeten the deal – though key hardware, from the camera setup to battery life and charging speed, feels almost identical to last year’s model.
We likely won’t recommend upgrading from last year’s A56 if you’ve got one, though we’ll save our final thoughts until we’ve spent some more time with both phones side by side.
The biggest memory burden for LLMs is the key-value cache, which stores conversational context as users interact with AI chatbots. The cache grows as conversations lengthen, increasing both memory usage and power consumption. TurboQuant addresses this issue by reducing model size with “zero accuracy loss”, improving vector search efficiency, and…
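To see why the KV cache dominates memory as conversations grow, here is a back-of-the-envelope sketch – not TurboQuant’s actual method, and the model dimensions are illustrative, not taken from the article. The cache stores one key and one value vector per token, per layer, per KV head:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: float) -> int:
    """Estimate KV-cache size: 2 (key + value) vectors of head_dim
    values per token, per layer, per KV head."""
    return int(2 * layers * kv_heads * head_dim * seq_len * bytes_per_value)

# Illustrative 8B-class model: 32 layers, 8 KV heads, head_dim 128,
# with a 4096-token conversation in the cache.
fp16 = kv_cache_bytes(32, 8, 128, 4096, 2)    # 16-bit cache values
int4 = kv_cache_bytes(32, 8, 128, 4096, 0.5)  # 4-bit quantised values
print(fp16 // 2**20, int4 // 2**20)  # 512 128  (MiB)
```

Quantising the cached values from 16 bits to 4 bits cuts the cache by 4x at the same context length, which is the general pressure point techniques like TurboQuant aim at.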
Like it or lump it, Marshals: A Yellowstone Story is becoming more of a mystery by the minute. In episode 4 alone, Randall (Michael Cudlitz) has allegedly left a shell casing on Kayce’s (Luke Grimes) porch to warn him about what was coming.
Why? In episode 3, Kayce had shot Randall’s son, Carson, to save his teammate Miles’ (Tatanka Means) life. In true Yellowstone fashion, past actions are now catching up to Kayce in the ugliest of ways.
But I’ve got a feeling we’re still in the calm before the storm phase. So when does Marshals: A Yellowstone Story episode 5 arrive on CBS and Paramount+?
What time can I watch Marshals: A Yellowstone Story episode 5 on Paramount+?
Photo credit: Don Pettit
During Expedition 72, NASA astronaut Don Pettit used his free time on the International Space Station for an interesting side project: coaxing an early purple potato to sprout in a small improvised garden he had built himself. He removed a piece of the tuber and placed it in a container fitted with grow lights, fastening it in place with a small strip of Velcro. This simple setup kept everything stable even as the station zoomed around the Earth.
The potato had smooth purple skin and had grown into an oval about the size of a large egg, with tiny tendrils shooting out in all directions like pale threads frozen mid-stretch. No dirt was visible on any of the surfaces. The photograph quickly went viral, and the comments section filled with questions: some wondered whether it was an unknown organism floating in space, while others compared it to props from sci-fi films.
Pettit ended up naming his little specimen Spudnik-1 and explaining to everyone what they were looking at. He got the idea from a story about a lone explorer who had to cultivate potatoes on Mars to survive. This was just his own personal experiment to explore how a familiar food like a potato would behave far away from home.
Microgravity changes everything about how a plant develops. Roots do not reach downward the way they would on Earth, instead spreading outward in every direction at once in search of water and nutrients. Shoots behave the same way, scattering rather than growing in a straight line upward. The whole plant takes on a loose, sprawling form that looks nothing like what you would find in a tidy garden back home. Growth is also slower than usual, since without the constant pull of gravity there is no physical stress on the living tissue to drive development forward.
Then there’s the fact that there’s no soil, so the potato skin remains smooth and even under the constant light of the artificial lamps, with no rough brown patches from contact with earth. Moisture and light are carefully metered, but that is the extent of the management: minor adjustments that try to imitate the pull of gravity and the cycle of sun and rain we take for granted on Earth.
NASA teams have been cultivating a variety of plants aboard the station for years, including lettuce, Chinese cabbage, mustard greens, kale, and zinnias, all under broadly similar conditions. Every harvest is a joy because it means real food instead of vacuum-sealed meals, and each one also yields a wealth of data for planning longer expeditions to the Moon or Mars, where every piece of food brought along must serve several functions.
Pettit kept things simple by selecting a potato variety naturally high in the pigments – anthocyanins – that give it its deep purple hue. It just so happens that those same molecules can help shield cells from radiation, a significant benefit for longer missions. After the picture went viral, he kept folks informed with fairly simple updates: the Velcro held the tuber in place, the grow lights provided a consistent supply of light, and beyond that it all came down to being patient and keeping an eye on things.