Engineer Slips Lightning Back Into the iPhone 17 Pro With One Inventive Case

Ken Pillonel, the Swiss engineer well known for retrofitting outdated iPhones with creative add-on cases (which he even sells), has struck again. This time, however, he turned the tables: on April 1st he completed a totally new prototype in just a few days, a slim protective cover that gives the iPhone 17 Pro a working Lightning port right where Apple moved on from it.



If you’ve recently updated from an iPhone 14 or earlier, you understand the pain. All of those old cords, docks, and chargers you used to love are now rendered worthless unless you carry a separate adapter with you everywhere. Pillonel effectively solved the challenge by working in reverse. Instead of forcing the phone to use a newer plug, he designed a cover that allows Lightning cables to plug right in while the iPhone 17 Pro remains safely tucked inside its USB-C shell.


It all starts with some careful work on the electronics side. He designed tiny custom circuit boards that shrink a standard USB-C to Lightning adapter down to almost nothing. These boards sit inside the bottom edge of the case and add only a few millimeters of thickness. Next came the case itself, printed in flexible TPU on a high-end 3D printer that is good at reducing waste. He also made a little jig to get the MagSafe magnets into the right positions, and when he snapped everything together it fit like a charm, no tools required.

When it’s all put together, the case feels exactly like any other you’d get in a store: soft to the touch and durable enough for daily use. When you slide the iPhone 17 Pro inside, the internal wiring aligns neatly with the phone’s USB-C port. Plugging a Lightning cable into the new port on the outside just works; power flows exactly as it would on an older model. Charging works well, as he demonstrated in his full build video; now he just needs to test data transfer and other accessories.

Pillonel never meant to sell this one. He refers to the finished piece as one of the oddest things he has ever put together, a tongue-in-cheek reference to Lightning’s official departure from the roster years ago. Nonetheless, the project illustrates a wider point. With some work and the correct parts, compatibility gaps between old and new technology can be bridged in inventive ways that keep favorite accessories alive.

Does A Right Turn Traffic Light Mean ‘No Turn On Red’ In Florida?


Traffic lights can be tricky, depending on where you go. The way you must respond to a red light at an intersection in one state may not be the same in another. Turning right on red can even get you a ticket in some U.S. cities. In Florida, though, a right turn traffic light may still allow a right turn after stopping. But there’s a bit more to it than that.

First off, you must come to a complete stop at the red light. If you keep rolling through the turn instead, you could get a ticket. Next, if there are no posted warning signs at the light, Florida law says you can go ahead and turn right once it’s clear to do so. But if you have a sign warning you that there’s no turn on red, then you’re stuck. Stay where you are until you get the green light.

Similarly, if you face a red right arrow, you must of course come to a full stop as well. But don’t let the arrow fool you: it’s not an automatic signal that you can simply turn once the way is clear. If there are no signs posted that say otherwise (such as a “No turn on red” sign), you may proceed after determining that it is safe to do so. This is the case whether you’re at an intersection or a crosswalk.
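The rules above amount to a small decision procedure. As a sketch only (not legal advice, and the condition names are illustrative), the Florida right-on-red logic might be encoded like this:

```python
def may_turn_right_on_red(full_stop: bool,
                          no_turn_sign: bool,
                          pedestrian_crossing: bool,
                          way_clear: bool) -> bool:
    """Sketch of the Florida right-on-red rules described above.

    Applies to both a steady red light and a red right arrow.
    Illustrative only; not legal advice.
    """
    if not full_stop:
        return False        # rolling through the turn can earn a ticket
    if no_turn_sign:
        return False        # posted "No turn on red": wait for green
    if pedestrian_crossing:
        return False        # always yield to pedestrians first
    return way_clear        # otherwise, turn once the way is clear

# A complete stop, no sign, no pedestrians, clear road: the turn is allowed.
print(may_turn_right_on_red(True, False, False, True))   # True
```

Note that every check short-circuits to "wait": the turn is only permitted when all four conditions line up, which matches how the statute layers its requirements.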

Crosswalks and malfunctioning traffic lights

If you come to a right turn traffic light at a crosswalk in Florida, keep in mind that you are expected to yield to any pedestrians who are crossing. Even if you’ve come to a complete stop and are otherwise allowed to turn, you must wait. If your light turns green and someone is still in the process of crossing, you should wait then as well. Additionally, if you’re at an intersection with sidewalks but no clearly marked crosswalk present, you still have to yield.

However, there could be times you arrive at a right turn traffic light that’s malfunctioning. Maybe it’s blinking, stuck, or completely dead. If this happens, Florida law states that you must treat it as a four-way stop sign. That means you must come to a complete stop and yield right of way to traffic coming from all directions. Of course, you must also yield to any pedestrians crossing in front of you. Once the way clears and you have an open right turn, you’re free to go. Always be cautious when arriving at a light that’s out of order and make sure the intersection is fully clear before you continue.


Meta will record employees’ keystrokes and use the data to train its AI models

Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from the mouse movements and keystrokes of its own staff in its pursuit to build more capable and efficient artificial intelligence.

The story, which was first reported by Reuters, shows the lengths to which tech companies are going to find new sources of training data — the lifeblood of AI models that helps the programs learn how to more effectively carry out tasks and respond to user queries.

When reached for comment by TechCrunch, a Meta spokesperson provided the following statement: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”

This trend reveals a troublesome privacy dimension of the AI industry. Last week it was reported that defunct startups are being scavenged for their corporate communications (like Slack archives and Jira tickets) and converted into AI training data.


Microsoft lowers Game Pass Ultimate and PC prices, won't include next Call of Duty


The Game Pass front page on Microsoft’s website now shows revised pricing for the service’s two most expensive plans. Although delaying the addition of new Call of Duty titles marks a reversal of the company’s earlier strategy, the expanded library introduced during last year’s major price increase remains intact.

Cash App now supports accounts for kids 6-12


Cash App, the banking and payments app run by Block, has added support for parent-managed kids accounts. The new accounts include key benefits from the service’s normal account, with an eye towards teaching financial literacy to younger users ages 6 to 12. Cash App first allowed teenage users on its platform in 2021.

As part of the “expanded Cash App Families experience,” eligible legal guardians and parents can create managed accounts that offer “a dedicated place on the platform to send allowances, set aside savings, and track spending for their child, kickstarting their path to financial independence,” Cash App says. Adults managing these accounts will be able to set up recurring transfers, see how their child is spending and do things like lock their child’s account to prevent transactions. Kids will get a custom debit card and the ability to receive payments from up to five trusted accounts, though notably they won’t be able to access Cash App itself.

Cash App says managed accounts are designed for kids 6 through 12. Once those kids turn 13, Cash App says parents will be able to choose to convert their account to a “sponsored account” to unlock more features, like the ability to send and receive payments, invest in stocks or trade crypto. Those sponsored accounts are technically still monitored and controlled by a parent or legal guardian, but they do give 13-year-olds more control over how they use their money.

A parent-managed account for kids is not a new idea in the fintech space, though Cash App is trying to reach a younger audience than some of its competitors. Venmo rolled out access to its payment platform to teens between the ages of 13 and 17 in 2023. Separately, both Apple and Google offer their own kids accounts, in Apple Cash Family and Google Wallet respectively.


Florida Launches Criminal Investigation Into ChatGPT Over School Shooting


Florida’s attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is “not responsible for this terrible crime” and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner’s chat logs. “My prosecutors have looked at this and they’ve told me, if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. “We cannot have AI bots that are advising people on how to kill others.”

Uthmeier’s office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. “We are going to look at who knew what, designed what, or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable.”

[…] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU’s Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.


Mozilla says it patched 271 Firefox vulnerabilities thanks to Anthropic’s Claude Mythos


Anthropic’s buzzy announcement about using AI to improve cybersecurity earlier this month was met with plenty of skepticism. However, Mozilla has shared some details that support the use of the company’s special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla’s team find and patch 271 vulnerabilities in the latest release of the Firefox browser. “So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” the foundation said.

The blog post from Mozilla feels like a positive sign for Anthropic’s Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there’s something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI wasn’t able to turn up any bugs that a human wouldn’t have been able to find, given enough time and resources, which indicates that AI isn’t presently able to do more to crack cybersecurity protections than a person can.

An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for those Firefox users who aren’t personally interested in applying any generative AI to their browsing, Mozilla has offered the option to turn it all off for the past several months.


Google’s new Deep Research and Deep Research Max agents can search the web and your private data


Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product’s debut, launching two new agents — Deep Research and Deep Research Max — that for the first time allow developers to fuse open web data with proprietary enterprise information through a single API call, produce native charts and infographics inside research reports, and connect to arbitrary third-party data sources through the Model Context Protocol (MCP).

The release, built on Google’s Gemini 3.1 Pro model, marks an inflection point in the rapidly intensifying race to build AI systems that can autonomously conduct the kind of exhaustive, multi-source research that has traditionally consumed hours or days of human analyst time. It also represents Google’s clearest bid yet to position its AI infrastructure as the backbone for enterprise research workflows in finance, life sciences, and market intelligence — industries where the stakes of getting information wrong are extraordinarily high.

“We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation,” Google CEO Sundar Pichai wrote on X. “Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering & synthesis using extended test-time compute — achieving 93.3% on DeepSearchQA and 54.6% on HLE.”

Both agents are available starting today in public preview via paid tiers of the Gemini API, accessible through the Interactions API that Google first introduced in December 2025.

Why Google built two research agents instead of one

The launch introduces a tiered architecture that reflects a fundamental tension in AI agent design: the tradeoff between speed and thoroughness.

Deep Research, the standard tier, replaces the preview agent Google released in December and is optimized for low-latency, interactive use cases. It delivers what Google describes as significantly reduced latency and cost at higher quality levels compared to its predecessor. The company positions it as ideal for applications where a developer wants to embed research capabilities directly into a user-facing interface — think a financial dashboard that can answer complex analytical questions in near-real time.

Deep Research Max occupies the opposite end of the spectrum. It leverages extended test-time compute — a technique where the model spends more computational cycles iteratively reasoning, searching, and refining its output before delivering a final report. Google designed it for asynchronous, background workflows: the kind of task where an analyst team kicks off a batch of due diligence reports before leaving the office and expects exhaustive, fully sourced analyses waiting for them the next morning.

The Google DeepMind team framed the distinction on X: “Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses. Deep Research Max: It uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.”

“Deep Research was our first hosted agent in the API and has gained a ton of traction over the last 3 months, very excited for folks to test out the new agents and all the improvements, this is just the start of our agents journey,” Logan Kilpatrick, who leads developer relations for Google’s AI efforts, wrote on X.

MCP support lets the agents tap into private enterprise data for the first time

Perhaps the most consequential feature in today’s release is the addition of Model Context Protocol support, which transforms Deep Research from a sophisticated web research tool into something more closely resembling a universal data analyst.

MCP, an emerging open standard for connecting AI models to external data sources, allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services — all without requiring sensitive information to leave its source environment. In practical terms, this means a hedge fund could point Deep Research at its internal deal-flow database and a financial data terminal simultaneously, then ask the agent to synthesize insights from both alongside publicly available information from the web.

Google disclosed that it is actively collaborating with FactSet, S&P, and PitchBook on their MCP server designs, a signal that the company is pursuing deep integration with the data providers that Wall Street and the broader financial services industry already rely on daily. The goal, according to the blog post authored by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, is to “let shared customers integrate financial data offerings into workflows powered by Deep Research, and to enable them to realize a leap in productivity by gathering context using their exhaustive data universes at lightning speed.”

This addresses one of the most persistent pain points in enterprise AI adoption: the gap between what a model can find on the open internet and what an organization actually needs to make decisions. Until now, bridging that gap required significant custom engineering. MCP support, combined with Deep Research’s autonomous browsing and reasoning capabilities, collapses much of that complexity into a configuration step. Developers can now run Deep Research with Google Search, remote MCP servers, URL Context, Code Execution, and File Search simultaneously — or turn off web access entirely to search exclusively over custom data. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.
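To make the "configuration step" concrete, here is a purely illustrative sketch of what mixing open-web search with a private MCP source might look like. Every key, value, and the endpoint URL below are assumptions for the sake of the example, not the documented Gemini API schema:

```python
# Hypothetical request shape: all field names here are illustrative,
# not the documented Gemini Interactions API schema.
research_request = {
    "agent": "deep_research_max",
    "query": "Summarize recent acquisition activity in our sector",
    "tools": [
        {"type": "google_search"},                     # open-web grounding
        {"type": "mcp_server",                         # private data, queried in place
         "url": "https://mcp.example-fund.internal"},  # hypothetical endpoint
        {"type": "file_search"},                       # uploaded PDFs, CSVs, etc.
    ],
}

# To search exclusively over custom data, drop the web tool entirely:
offline_request = dict(research_request,
                       tools=[t for t in research_request["tools"]
                              if t["type"] != "google_search"])
print(len(offline_request["tools"]))   # 2
```

The point of the sketch is the shape of the tradeoff: web access is just one tool among several, so restricting the agent to proprietary data is a matter of omitting one entry rather than custom engineering.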

Native charts and infographics turn AI reports into stakeholder-ready deliverables

The second headline feature — native chart and infographic generation — may sound incremental, but it addresses a practical limitation that has constrained the usefulness of AI-generated research outputs in professional settings.

Previous versions of Deep Research produced text-only reports. Users who needed visualizations had to export the data and build charts themselves, a friction point that undermined the promise of end-to-end automation. The new agents generate high-quality charts and infographics inline within their reports, rendered in HTML or Google’s Nano Banana format, dynamically visualizing complex datasets as part of the analytical narrative.

“The agent generates HTML charts and infographics inline with the report. Not screenshots. Not suggestions to ‘visualize this data.’ Actual rendered charts inside the markdown output,” noted AI commentator Shruti Mishra on X, capturing the practical significance of the change.

For enterprise users — particularly those in finance and consulting who need to produce stakeholder-ready deliverables — this transforms Deep Research from a tool that accelerates the research phase into one that can potentially produce near-final analytical products. Combined with a new collaborative planning feature that lets users review, guide, and refine the agent’s research plan before execution, and real-time streaming of intermediate reasoning steps, the system gives developers granular control over the investigation’s scope while maintaining the transparency that regulated industries demand.

How Deep Research evolved from a consumer chatbot feature to enterprise platform infrastructure

Today’s release crystallizes a strategic narrative Google has been building for months: Deep Research is not merely a consumer feature but a piece of infrastructure that powers multiple Google products and is now being offered to external developers as a platform.

The blog post explicitly notes that when developers build with the Deep Research agent, they tap into “the same autonomous research infrastructure that powers research capabilities within some of Google’s most popular products like Gemini App, NotebookLM, Google Search and Google Finance.” This suggests that the agent available through the API is not a stripped-down version of what Google uses internally but the same system, offered at platform scale.

The journey to this point has been remarkably rapid. Google first introduced Deep Research as a consumer feature in the Gemini app in December 2024, initially powered by Gemini 1.5 Pro. At the time, the company described it as a personal AI research assistant that could save users hours by synthesizing web information in minutes. By March 2025, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental and made it available for anyone to try. Then came the upgrade to Gemini 2.5 Pro Experimental, where Google reported that raters preferred its reports over competing deep research providers by more than a 2-to-1 margin. The December 2025 release was the pivot to developer access, when Google launched the Interactions API and made Deep Research available programmatically for the first time, powered by Gemini 3 Pro and accompanied by the open-source DeepSearchQA benchmark.

The underlying model driving today’s improvements is Gemini 3.1 Pro, which Google released on February 19, 2026. That model represented a significant leap in core reasoning: on ARC-AGI-2, a benchmark evaluating a model’s ability to solve novel logic patterns, 3.1 Pro scored 77.1% — more than double the performance of Gemini 3 Pro. Deep Research Max inherits that reasoning foundation and layers autonomous research behaviors on top of it, achieving 93.3% on DeepSearchQA (up from 66.1% in December) and 54.6% on Humanity’s Last Exam (up from 46.4%).


Google’s new Deep Research Max agent outperformed its December predecessor across nearly all qualitative dimensions in internal expert evaluations — but the older version held an edge in internal consistency and faithfulness. (Source: Google DeepMind)

Google faces a crowded field of competitors building autonomous research agents

Google is not operating in a vacuum. The launch arrives amid intensifying competition in the autonomous research agent space. OpenAI has been developing its own agent capabilities within ChatGPT under the codename Hermes, which includes an agent builder, templates, scheduling, and Slack integration, according to reports circulating on social media. Perplexity has built its business around AI-powered research. And a growing ecosystem of startups is attacking various slices of the automated research workflow.

What distinguishes Google’s approach is the combination of its search infrastructure — which gives Deep Research access to the broadest and most current index of web information available — with the MCP-based connectivity to enterprise data sources. No other company currently offers a research agent that can simultaneously query the open web at Google Search’s scale and navigate proprietary data repositories through a standardized protocol. The pricing structure also signals Google’s intent to drive adoption: according to Sim.ai, which tracks model pricing, the Deep Research agent in the December preview was priced at $2 per million input tokens and $2 per million output tokens with a 1 million token context window — positioning it as cost-competitive for the volume of research output it generates.

Not everyone greeted the announcement with unalloyed enthusiasm, however. Several users on X noted that the new agents are available only through the API, not in the Gemini consumer app. “Not on Gemini app,” observed TestingCatalog News, while another user wrote, “Google keeps punishing Gemini App Pro subscribers for some reason.” Others raised concerns about the presentation of benchmark results, with one user arguing that Google’s charts could be “misleading” in how they represent percentage improvements. These complaints point to a broader tension in Google’s AI strategy: the company is increasingly directing its most advanced capabilities toward developers and enterprise customers who access them through APIs, while consumer-facing products sometimes lag behind.


Deep Research Max led all competitors on DeepSearchQA and BrowseComp, but GPT 5.4 edged ahead on Humanity’s Last Exam, a benchmark measuring reasoning and knowledge. All results were evaluated by Google DeepMind using publicly available model APIs. (Source: Google DeepMind)

What Deep Research Max means for finance, biotech, and the future of knowledge work

The practical implications of today’s launch are most immediately felt in industries that depend on exhaustive, multi-source research as a core business function. In financial services, where analysts routinely spend hours assembling due diligence reports from scattered sources — SEC filings, earnings transcripts, market data terminals, internal deal memos — Deep Research Max offers the possibility of automating the initial research phase entirely. The FactSet, S&P, and PitchBook partnerships suggest Google is serious about making this work with the data infrastructure that financial professionals already use.

In life sciences, the blog post notes that Google has collaborated with Axiom Bio, which builds AI systems to predict drug toxicity, and found that Deep Research unlocked new levels of initial research depth across biomedical literature. In market research and consulting, the ability to produce stakeholder-ready reports with embedded visualizations and granular citations could compress project timelines from days to hours.


The key question is whether the quality and reliability of these automated outputs will meet the standards that professionals in these fields demand. Google’s benchmark numbers are impressive, but benchmarks measure performance on standardized tasks — real-world research is messier, more ambiguous, and often requires the kind of judgment that remains difficult to automate. Deep Research and Deep Research Max are available now in public preview via paid tiers of the Gemini API, with availability on Google Cloud for startups and enterprises coming soon.

Eighteen months ago, Deep Research was a feature that helped grad students avoid drowning in browser tabs. Today, Google is betting it can replace the first shift at an investment bank. The distance between those two ambitions — and whether the technology can actually close it — will define whether autonomous research agents become a transformative category of enterprise software or just another AI demo that dazzles on benchmarks and disappoints in the conference room.


SpaceX and Cursor strike partnership that might end in a $60 billion acquisition


SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the AI company for $60 billion later this year. “SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI,” SpaceX wrote in a post on X.

According to SpaceX, the deal allows for it to either invest $10 billion into the company known for its AI coding tool, or acquire it entirely “later this year” for $60 billion. If an acquisition were to happen, it’s not clear at what point Cursor could officially join the fold of Elon Musk’s rapidly expanding and increasingly enmeshed web of companies. SpaceX bought xAI, the billionaire’s AI company that also controls X, earlier this year. SpaceX is currently getting ready to go public this summer in what will likely be the biggest initial public offering (IPO) in history.

Cursor, which has reportedly been in talks to raise its own $2 billion round of funding, is known for its AI coding tool of the same name that’s become the vibe coding platform of choice for many developers. It allows people to use either its own models or those from other leading AI companies, including OpenAI, Google, Anthropic and xAI.

In a statement, Cursor said its partnership with SpaceX will “accelerate our model training efforts” while addressing infrastructure-related issues that have slowed it down in the past. “We’ve wanted to push our training efforts much further, but we’ve been bottlenecked by compute,” the company said. “With this partnership, our team will leverage xAI’s Colossus infrastructure to dramatically scale up the intelligence of our models for coding and beyond.”


The Electromechanical Computer Of The B-52’s Star Tracker


The Angle Computer of the B-52, opened. (Credit: Ken Shirriff)

In the ages before convenient global positioning satellites could be queried for one’s current location, military aircraft required dedicated navigators in order not to get lost. This changed with increasing automation, including the arrival of ever more sophisticated electromechanical computers, such as the angle computer in the B-52 bomber’s star tracker that [Ken Shirriff] recently had a poke at.

We have covered star trackers before, with these devices enabling the automation of celestial navigation. In effect, as long as you have a map of the visible stars and an accurate time source, you will never get lost on Earth, or a few kilometers above its surface, as the case may be.

The B-52’s Angle Computer is part of the Astro Compass, the star tracker that locks onto a star and outputs a heading accurate to a tenth of a degree, while also allowing position to be calculated from it. A great deal of calculation is performed inside the device, as explained in the article, though the full equations are quite complex.

Not burdening the navigator of a B-52 with sighting stars through an instrument and scribbling down calculations on paper is a good idea, of course. Instead the Angle Computer solves the navigational triangle mechanically, essentially by modelling the celestial sphere with a metal half-sphere. The solving is thus done using this physical representation, involving numerous gears and other parts that are detailed in the article.
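To give a sense of what those gears are grinding through: the standard solution of the navigational triangle relates a star’s altitude and azimuth to the observer’s latitude, the star’s declination, and its local hour angle via spherical trigonometry. A small numerical sketch (these are the textbook celestial-navigation formulas, not Shirriff’s write-up verbatim):

```python
import math

def star_altitude_azimuth(lat_deg, dec_deg, lha_deg):
    """Solve the navigational triangle: given observer latitude, star
    declination, and local hour angle (all in degrees), return the star's
    altitude and azimuth in degrees. This is the spherical trigonometry
    the Angle Computer solves mechanically with its metal half-sphere."""
    lat, dec, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))
    # Altitude from the spherical law of cosines.
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(lha))
    alt = math.asin(sin_alt)
    # Azimuth (measured from north) from the same triangle.
    cos_az = ((math.sin(dec) - math.sin(lat) * sin_alt)
              / (math.cos(lat) * math.cos(alt)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))   # clamp for rounding
    if math.sin(lha) > 0:                          # star west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# Observer at 45°N, star on the celestial equator crossing the meridian:
alt, az = star_altitude_azimuth(45.0, 0.0, 0.0)
print(round(alt), round(az))   # 45 180 (star due south, halfway up the sky)
```

Run in reverse against an accurate clock and star catalog, the same triangle yields the observer’s position, which is why an accuracy of a tenth of a degree in heading is worth all that machinery.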

In addition to the mechanical components there are of course the motors driving it, the feedback mechanisms, and the ways to interface with the instruments. For the 1950s this was definitely the way to design a computer like this, but as semiconductor transistors swept the computing landscape, this marvel of engineering would before long itself be replaced by a fully digital version.


NYT Strands hints and answers for Wednesday, April 22 (game #780)


Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Tuesday’s puzzle instead then click here: NYT Strands hints and answers for Tuesday, April 21 (game #779).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.


Copyright © 2025