Connect with us

Tech

Today’s NYT Mini Crossword Answers for Feb. 8

Published

on

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


It’s Super Bowl Sunday! Fittingly, today’s Mini Crossword includes some related clues. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Advertisement

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

completed-nyt-mini-crossword-puzzle-for-feb-8-2026.png

The completed NYT Mini Crossword puzzle for Feb. 8, 2026.

Advertisement

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: The Eagles have the only N.F.L. logo that faces this way
Answer: LEFT

5A clue: Statement that’s self-evidently true
Answer: AXIOM

7A clue: Wash vigorously
Answer: SCRUB

Advertisement

8A clue: Classic opera set in Rome
Answer: TOSCA

9A clue: To the ___ degree
Answer: NTH

Mini down clues and answers

1D clue: Fourth place in an N.F.L. division, for example
Answer: LAST

2D clue: Former inmate, informally
Answer: EXCON

Advertisement

3D clue: Successful gain of ten yards, when combined with this answer’s direction?
Answer: FIRST

4D clue: Trip to the end zone, when combined with this answer’s direction?
Answer: TOUCH

6D clue: Deg. held by many a C.E.O.
Answer: MBA

Advertisement

Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech

Apple is planning to 3D print the chassis for future iPhones and Apple Watches

Published

on

Apple’s manufacturing design team is developing a process to 3D print aluminium chassis components for future iPhone and Apple Watch models, according to Bloomberg’s Mark Gurman, extending a technique the company has already quietly deployed across several recent products.

The move follows Apple’s use of 3D printing for the titanium shell of the Apple Watch Ultra 3, the Apple Watch Series 11, and the USB-C port on the iPhone Air, establishing a manufacturing foundation the company now appears confident enough to scale toward its highest-volume product lines.

Aluminium presents a more complex challenge than titanium, given the material’s different structural and thermal properties, but a successful transition would give Apple the same core advantages it has already realised with titanium, including reduced raw material waste, lower production costs, and a path toward using recycled source material across more of its hardware.

The environmental dimension carries weight beyond cost reduction, with 3D printing generating significantly less material waste than traditional forging and machining processes that remove large amounts of metal to arrive at a finished chassis shape.

Advertisement

Advertisement

The Apple Watch Ultra 3 demonstrated that 3D printing unlocks manufacturing possibilities beyond cost savings alone, with the process allowing textures to be printed in locations previously inaccessible through forging, which improved the bonding between plastic and metal in the antenna housing to enhance water resistance in cellular models.

The iPhone Air’s thinner USB-C port similarly depended on 3D printing to achieve its dimensions, with conventional manufacturing reportedly unable to produce the component at the thickness the device required, suggesting the process has already influenced form factor decisions rather than simply reducing the cost of existing designs.

Apple’s recently launched MacBook Neo also adopts a cost-reduced aluminium manufacturing approach that uses 50% less metal than traditional processes, though that method stops short of 3D printing, pointing to a broader internal push across multiple product lines to reduce material consumption without resorting to plastic chassis construction.

Advertisement

Apple’s existing 3D printing process for Apple Watch already saves an estimated 400 metric tons of raw titanium annually, giving the company a proven environmental and cost case for expanding the technique to aluminium and higher-volume products.

However, no timeline has been confirmed for 3D-printed aluminium across iPhone or Apple Watch, with Apple’s manufacturing design and operations teams still in active development on the process, according to Gurman’s reporting.

Advertisement

Source link

Advertisement
Continue Reading

Tech

European Consortium Wants Open-Source Alternative To Google Play Integrity

Published

on

An anonymous reader quotes a report from Heise: Pay securely with an Android smartphone, completely without Google services: This is the plan being developed by the newly founded industry consortium led by the German Volla Systeme GmbH. It is an open-source alternative to Google Play Integrity. This proprietary interface decides on Android smartphones with Google Play services whether banking, government, or wallet apps are allowed to run on a smartphone.

Obstacles and tips for paying with an Android smartphone without official Google services have been highlighted by c’t in a comprehensive article. The European industry consortium now wants to address some problems mentioned. To this end, the group, which includes Murena, which develops the hardened custom ROM /e/OS, Iode from France, and Apostrophy (Dot) from Switzerland, in addition to Volla, is developing a so-called “UnifiedAttestation” for Google-free mobile operating systems, primarily based on the Android Open-Source Project (AOSP).

According to Volla, a European manufacturer and a leading manufacturer from Asia, as well as European foundations such as the German UBports Foundation, have also expressed interest in supporting it. Furthermore, developers and publishers of government apps from Scandinavia are examining the use of the new procedure as “first movers.” In its announcement, Volla explains that Google provides app developers with an interface called Play Integrity, which checks whether an app is running on a device with specific security requirements. This primarily affects applications from “sensitive areas such as identity verification, banking, or digital wallets — including apps from governments and public administrations”.

The company criticizes that the certification is exclusively offered for Google’s own proprietary “Stock Android” but not for Android versions without Google services, such as /e/OS or similar custom ROMs. “Since this is closely intertwined with Google services and Google data centers, a structural dependency arises — and for alternative operating systems, a de facto exclusion criterion,” the company states. From the consortium’s perspective, this also leads to a “security paradox,” because “the check of trustworthiness is carried out by precisely that entity whose ecosystem is to be avoided at the same time”. The UnifiedAttestation system is built around three main components: an “operating system service” that apps can call to check whether the device’s OS meets required security standards, a decentralized validation service that verifies the OS certificate on a device without relying on a single central authority, and an open test suite used to evaluate and certify that a particular operating system works securely on a specific device model.

Advertisement

“We don’t want to centralize trust, but organize it transparently and publicly verifiable. When companies check competitors’ products, we can strengthen that trust,” says Dr. Jorg Wurzer, CEO of Volla Systeme GmbH and initiator of the consortium. The goal is to increase digital sovereignty and break free from the control of any one, single U.S. company, he says.

Source link

Continue Reading

Tech

Bluesky’s CEO is stepping down after nearly 5 years

Published

on

Bluesky CEO Jay Graber, who has led the upstart social platform since 2021, is stepping down from her role as its top executive. Toni Schneider, who has been an advisor and investor in Bluesky, will take over the job temporarily while Graber stays on as Chief Innovation Officer.

“As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things,” Graber wrote in a blog post. Schneider, who was previously CEO at WordPress parent Automattic, will be that “experienced operator and leader” while Blueksy’s board searches for a permanent CEO, she said.

Graber’s history with Bluesky dates back to its early days as a side project at Jack Dorsey’s Twitter. She was officially brought on as CEO in 2021 as Bluesky spun off into an independent company (it officially ended its association with Twitter in 2022 and Dorsey cut ties with Bluesky in 2024). She led the company through its launch and early viral success as it grew from an invitation-only platform to the 43 million-user service it is today. During that time, she’s become known as an advocate for decentralized social media and for trolling Mark Zuckerberg’s t-shirt choices.

Nearly three years since it launched publicly, Bluesky has carved out a small but influential niche in the post-Twitter social landscape. The platform is less than a third of the size of Meta’s competitor, Threads, which has also copied some of Bluesky’s signature features. Bluesky also has yet to roll out any meaningful monetization features, though it has teased a premium subscription service in the past.

Advertisement

As Chief Innovation Officer, Graber will presumably still be an influential voice at the company going forward. And, as Wired points out, she still has a seat on Bluesky’s board so she will get some say in who steps into the role permanently. Until then, Schneider, who is also a partner at VC firm Tre Ventures, will lead the company. “I deeply believe in what this team has built and the open social web they’re fighting for,” he wrote in a post on Bluesky.

Source link

Continue Reading

Tech

Real Consequences: Trump’s Bullshit Claim About Tylenol Is Seeing Real World Results

Published

on

from the fever-dream dept

There’s this insane subset of people who, when they talk about Donald Trump, I’ll never understand. It’s the ones who claim that taking what Donald Trump says seriously is a mistake that most people are unlikely to make. It’s also expressed by the crowd that claims something to the effect of: you shouldn’t take Trump literally, but you should take him seriously.

That this is said about the most powerful single individual on the planet is bonkers. This is typically how I’ve talked about my own kids when they were toddlers. Inevitably, one of my kids would be trying to say something entirely innocuous, only to have what came out of their mouth be some horrible word or swear or something. And I would hand-wave that away. C’mon, I’d tell people, you know that’s not what he meant to say.

Donald Trump is, unfortunately, the President of the United States of America. When he speaks, people listen. And a percentage of those listening will take him both literally and seriously. And when Donald Trump told American women last year to not take Tylenol, or give it to their young children, because it would give their kids autism, well, they listened.

Researchers found that in the wake of that batshit crazy announcement, use of Tylenol and its generic equivalents dropped significantly in use in emergency rooms and prescriptions written for children.

Advertisement

For nearly three months after that, new research found, Tylenol orders for pregnant women showing up in emergency rooms dropped and prescriptions of the generic drug for children rose. This happened despite sharp criticism of the president’s message from doctor groups saying that the drug, leucovorin, shouldn’t be broadly used for autism and Tylenol is safe during pregnancy.

“It just shows that in our country right now, health care has been politicized in a way that political messages are driving and impacting care — and not always for good,” said Dr. Susan Sirota, a pediatrician in Highland Park, Illinois, who wasn’t involved with the research.

The research suggested something like a 10% drop in measurable use of acetaminophen or paracetamol in the wake of Trump’s announcement. That doesn’t tell the whole story, of course, since so much of the use of Tylenol occurs through over the counter purchases at drug stores and the like. Based on market research, however, Tylenol specifically saw a nearly identical 11% or so drop in OTC sales as well back in November.

But that isn’t all. With all of this attention on a common drug supposedly giving children autism, parental anxiety about the condition has shot up as well. And, as a result, parents are turning toward experimental drugs for that that defy expert recommendations. That’s where leucovorin comes in.

Leucovorin is a derivative of folic acid used for, among other things, reducing the toxic side effects of certain chemotherapy drugs and treating a rare blood disorder. It has also been studied for a neurological condition known as cerebral folate deficiency and for a subset of autistic children, according to the American Academy of Pediatrics.

The pediatrics group doesn’t recommend routine use of the drug for autistic children. Early, small-scale studies have explored its use, “and some findings suggest potential benefit in carefully selected cases,” the group said.

Advertisement

Still, after the federal announcement about the drug, Sirota said some families in her practice asked about getting it for their autistic children. She educated them about the evidence, told them about the potential for side effects and didn’t prescribe it. Potential side effects include irritability, nausea and vomiting and skin issues like dermatitis.

This may sound melodramatic, but there is real psychological harm being done to those just starting families in this country. For most parents, their children become their entire world. Their raison d’etre. And if you scare the shit out of them about Tylenol giving their kids a disorder, they’re going to stop taking the common drug and turn to any hair-brained lifeline they can find to try to keep their children from that disorder.

Does leucovorin do anything at all for anyone with autism? I don’t have the slightest clue. And neither does the Trump administration. I’m quite confident that there is no current reason to see Tylenol as a danger to the general populace, however, and that didn’t stop Trump from going on television and playing doctor.

“It feels like a pattern with our government, right? They keep building on these houses of cards that just fall down,” she said. “This politicizing of medicine just in general, and moving away from science, has been so challenging.”

The consequences of this sort of thing are going to span decades. Let that sink in.

Advertisement

Filed Under: donald trump, health, rfk jr., tylenol

Companies: kenvue

Source link

Advertisement
Continue Reading

Tech

Satya Nadella denies Xbox death rumors, insists Microsoft is "long on gaming"

Published

on


Speaking with newly appointed Microsoft Gaming head Asha Sharma, Nadella dismissed speculation that the company might abandon gaming in favor of Windows, Azure, and AI. He described gaming as one of Microsoft’s “main identities” over the past couple of decades and said it will remain a core part of the…
Read Entire Article
Source link

Continue Reading

Tech

OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government

Published

on

More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal fight against the US government.

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.

The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order to continue its work with military partners as the lawsuit progresses. This brief specifically supports this motion.

Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don’t represent the views of their companies, according to the brief.

Advertisement

OpenAI and Google did not immediately respond to WIRED’s request for comment.

The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.

The brief also says that the red lines Anthropic claims it requested, including that its AI wouldn’t be used for mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns and require sufficient guardrails. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief says.

Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a decision some people criticized as opportunistic.

Advertisement

Source link

Continue Reading

Tech

Worried about Australian age verification measures? I’ve found the cheapest way to secure your personal data for years to come

Published

on

Australia’s new age verification legislation has left Australians raising their eyebrows.

Similar to enforcements in the UK and US, citizens will be required to verify that they are over 18 to access adult content. But, with this comes concerns that people’s most sensitive data will be put at risk by communications with third parties.

Advertisement

Source link

Continue Reading

Tech

New York lawmakers move to block AI chatbots from giving legal or medical advice

Published

on


  • A proposed New York bill would ban AI chatbots from providing legal or medical advice
  • The legislation would allow users to sue companies if their chatbots impersonate licensed professionals
  • Lawmakers say the measure is meant to protect the public as AI tools become more widely used

AI chatbots have spent the past few years answering nearly every kind of question imaginable, but New York lawmakers are preparing to draw a firm line around at least a couple of categories of conversation. A bill advancing through the state legislature would prohibit AI chatbots from providing legal or medical advice and would allow users to sue the companies behind those systems if they cross that boundary.

The proposal, Senate Bill S7263, would apply to AI chatbots that mimic or impersonate licensed professionals such as lawyers or physicians. The heart of the bill applies the same principle about how individuals cannot practice law or medicine without the appropriate licenses to AI. That rule is meant to ensure that people receive guidance from trained professionals who can be held accountable for their advice.

Source link

Advertisement
Continue Reading

Tech

The open-source AI red-teaming tool used by Fortune 500 companies is now part of OpenAI

Published

on

The acquisition of Promptfoo, which counts more than 125,000 developers and 30-plus Fortune 500 companies among its users, is OpenAI’s most direct move yet into AI application security. Its technology will go into Frontier, the company’s enterprise agent platform launched just a month ago.

When Ian Webster was leading the LLM engineering team at Discord, shipping AI products to 200 million users, he noticed something the security industry had not yet caught up with: the tools his team relied on to keep those products safe were built for a different era. Traditional vulnerability scanners could not reason about prompt injection. Static analysis had nothing to say about a model that promised a user something it had no authority to deliver. The testing infrastructure for AI applications, he concluded, simply did not exist.

So he built it himself, nights and weekends, as an open-source project. That project became Promptfoo. On Monday, OpenAI announced it is acquiring the company.

The deal, terms of which were not disclosed, will see Promptfoo’s technology integrated into OpenAI Frontier, the enterprise agent management platform that OpenAI launched in early February. In a post on X, OpenAI said the acquisition would “strengthen agentic security testing and evaluation capabilities” within Frontier, and pledged that Promptfoo would remain open source under its current licence, with continued support for existing customers.

Advertisement

The 💜 of EU tech

The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!

Promptfoo, which Webster co-founded with Michael D’Angelo – a former VP of engineering and head of AI at identity verification firm Smile Identity – launched commercially in 2024 with $5 million in seed funding from Andreessen Horowitz. The seed round attracted backing from a notable roster of angels, including Shopify CEO Tobi Lütke, Discord CTO Stanislav Vishnevskiy, and Okta co-founder Frederic Kerrest. By July 2025, the company had raised an $18.4 million Series A led by Insight Partners, with a16z again participating. Total funding ahead of the acquisition was approximately $23.4 million.

At the time of the Series A, Promptfoo said it had more than 125,000 developers using its open-source framework and over 30 Fortune 500 companies running its enterprise platform in production. Customers span retail, telecoms, financial services, and media, sectors with acute exposure to the regulatory and reputational risks of AI failures.

Advertisement

The product works by acting as an automated adversary. Rather than relying on manual penetration testing, Promptfoo’s platform talks directly to a customer’s AI application, through its chat interface or APIs, using specialised models and agents that behave like users, or specifically like attackers. When an attack succeeds, the platform records it, analyses why it worked, and iterates through an agentic reasoning loop to refine the test and expose deeper vulnerabilities. Risks the platform targets include prompt injection, data leakage, jailbreaks, and what Webster has called “application-level” failures: AI systems that promise users things they cannot deliver, or that reveal database contents to a customer service query, or that stray into political opinion in a homework tutor.

It is precisely those application-level risks that make Promptfoo’s acquisition a strategic fit for OpenAI’s current direction. Frontier, which OpenAI has described as an attempt to create “AI coworkers” for the enterprise, is designed to give AI agents access to production systems, CRM platforms, data warehouses, internal ticketing tools, and to execute workflows with real-world consequences. Agents operating at that level of access create a correspondingly enlarged attack surface. Early customers named by OpenAI for Frontier include Uber, State Farm, Intuit, and Thermo Fisher Scientific: organisations for whom a misbehaving agent is not an inconvenience but a liability.

OpenAI has been building out Frontier at speed. Since launching the platform on 5 February, the company has announced Frontier Alliances with Accenture, Boston Consulting Group, Capgemini, and McKinsey, enlisting the consulting firms to drive enterprise deployment. Separately, the company has been rolling out Codex Security, an AI-powered application security agent for software repositories, formerly known internally as Aardvark, which entered wider availability on the same day as the Promptfoo acquisition announcement.

Promptfoo is not the only AI security product entering broader availability this month. Anthropic launched Claude Code Security in February, targeting similar vulnerability scanning use cases. The convergence suggests that as AI agents move into production at scale, the question of who secures them, and how,  is fast becoming one of the defining commercial battlegrounds in enterprise AI.

Advertisement

For Promptfoo’s open-source community, OpenAI’s commitment to keeping the project open source under its current licence will be the line to watch. The project has over 248 contributors, and its adoption by developers at companies across the AI industry – including, according to Promptfoo’s own website, teams at Anthropic and Google – was built on the premise that the tool belonged to the developer community rather than to any one vendor. That promise now sits alongside a commercial integration into one of the most powerful enterprise AI platforms in the market.

Source link

Advertisement
Continue Reading

Tech

Microsoft Teams phishing targets employees with A0Backdoor malware

Published

on

Microsoft Teams phishing targets employees with backdoors

Hackers contacted employees at financial and healthcare organizations over Microsoft Teams to trick them into granting remote access through Quick Assist and deploy a new piece of malware called A0Backdoor.

The attacker relies on social engineering to gain the employee’s trust by first flooding their inbox with spam and then contacting them over Teams, pretending to be the company’s IT staff, offering assistance with the unwanted messages.

To obtain access to the target machine, the threat actor instructs the user to start a Quick Assist remote session, which is used to deploy a malicious toolset that includes digitally signed MSI installers hosted in a personal Microsoft cloud storage account.

According to researchers at cybersecurity company BlueVoyant, the malicious MSI files masquerade as Microsoft Teams components and the CrossDeviceService, a legitimate Windows tool used by the Phone Link app.

Advertisement
Commandline argument for CrossDeviceService.exe
Command line argument to install the malicious CrossDeviceService.exe
Source: BlueVoyant

Using the DLL sideloading technique with legitimate Microsoft binaries, the attacker deploys a malicious library (hostfxr.dll) that contains compressed or encrypted data. Once loaded in memory, the library decrypts the data into shellcode and transfers execution to it.

The researchers say that the malicious library also uses the CreateThread function to prevent analysis. BlueVoyant explains that the excessive thread creation could cause a debugger to crash, but it does not have a significant impact under normal execution.

The shellcode performs sandbox detection and then generates a SHA-256-derived key, which it uses to extract the A0Backdoor, which is encrypted using the AES algorithm.

Encrypted payload in the shellcode
Encrypted payload in the shellcode
Source: BlueVoyant

The malware relocates itself into a new memory region, decrypts its core routines, and relies on Windows API calls (e.g., DeviceIoControl, GetUserNameExW, and GetComputerNameW) to collect information about the host and fingerprint it.

Communication with the command-and-control (C2) is hidden in DNS traffic, with the malware sending DNS MX queries with encoded metadata in high-entropy subdomains to public recursive resolvers. The DNS servers respond with MX records containing encoded command data.

Captured DNS communication
Captured DNS communication
Source: BlueVoyant

“The malware extracts and decodes the leftmost label to recover command/configuration data, then proceeds accordingly,” explains BlueVoyant.

“Using DNS MX records helps the traffic blend in and can evade controls tuned to detect TXT-based DNS tunneling, which may be more commonly monitored.”

Advertisement

BlueVoyant states that two of the targets of this campaign are a financial institution in Canada and a global healthcare organization.

The researchers assess with moderate-to-high confidence that the campaign is an evolution of tactics, techniques and procedures associated with the BlackBasta ransomware gang, which has dissolved after the internal chat logs of the operation were leaked.

While there are plenty of overlaps, BlueVoyant notes that the use of signed MSIs and malicious DLLs, the A0Backdoor payload, and using DNS MX-based C2 communication are new elements.

Malware is getting smarter. The Red Report 2026 reveals how new threats use math to detect sandboxes and hide in plain sight.

Download our analysis of 1.1 million malicious samples to uncover the top 10 techniques and see if your security stack is blinded.

Advertisement

Source link

Continue Reading

Trending

Copyright © 2025