
Tech

NYT Strands hints and answers for Sunday, March 15 (game #742)

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Saturday’s puzzle instead then click here: NYT Strands hints and answers for Saturday, March 14 (game #741).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.

Tech

NYT Strands hints and answers for Saturday, April 4 (game #762)

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Thursday’s puzzle instead then click here: NYT Strands hints and answers for Thursday, April 2 (game #761).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.


Tech

Karpathy shares ‘LLM Knowledge Base’ architecture that bypasses RAG with an evolving markdown library maintained by AI

AI vibe coders have yet another reason to thank Andrej Karpathy, who coined the term.

The former Director of AI at Tesla and co-founder of OpenAI, now running his own independent AI project, recently posted on X describing an “LLM Knowledge Bases” approach he’s using to manage various topics of research interest.

By building a persistent, LLM-maintained record of his projects, Karpathy is solving the core frustration of “stateless” AI development: the dreaded context-limit reset.

As anyone who has vibe coded can attest, hitting a usage limit or ending a session often feels like a lobotomy for your project. You’re forced to spend valuable tokens (and time) reconstructing context for the AI, hoping it “remembers” the architectural nuances you just established.

Karpathy proposes something simpler than the typical enterprise solution of a vector database and RAG pipeline, and more elegant for all its looseness and mess.

Instead, he outlines a system where the LLM itself acts as a full-time “research librarian”—actively compiling, linting, and interlinking Markdown (.md) files, the most LLM-friendly and compact data format.

By diverting a significant portion of his “token throughput” into the manipulation of structured knowledge rather than boilerplate code, Karpathy has surfaced a blueprint for the next phase of the “Second Brain”—one that is self-healing, auditable, and entirely human-readable.

Beyond RAG

For the past three years, the dominant paradigm for giving LLMs access to proprietary data has been Retrieval-Augmented Generation (RAG).

In a standard RAG setup, documents are chopped into arbitrary “chunks,” converted into mathematical vectors (embeddings), and stored in a specialized database.

When a user asks a question, the system performs a “similarity search” to find the most relevant chunks and feeds them into the LLM.

Karpathy’s approach, which he calls LLM Knowledge Bases, rejects the complexity of vector databases for mid-sized datasets.

Instead, it relies on the LLM’s increasing ability to reason over structured text.

The system architecture, as visualized by X user @himanshu amid the wider reaction to Karpathy’s post, functions in three distinct stages:

  1. Data Ingest: Raw materials—research papers, GitHub repositories, datasets, and web articles—are dumped into a raw/ directory. Karpathy utilizes the Obsidian Web Clipper to convert web content into Markdown (.md) files, ensuring even images are stored locally so the LLM can reference them via vision capabilities.

  2. The Compilation Step: This is the core innovation. Instead of just indexing the files, the LLM “compiles” them. It reads the raw data and writes a structured wiki. This includes generating summaries, identifying key concepts, authoring encyclopedia-style articles, and—crucially—creating backlinks between related ideas.

  3. Active Maintenance (Linting): The system isn’t static. Karpathy describes running “health checks” or “linting” passes where the LLM scans the wiki for inconsistencies, missing data, or new connections. As community member Charly Wargnier observed, “It acts as a living AI knowledge base that actually heals itself.”
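Karpathy hasn’t published his scripts, but the compile-and-lint loop the three stages describe can be sketched in a few lines of Python. Here `summarize` is a hypothetical stand-in for the actual LLM call, and the `[[backlink]]` syntax follows the Obsidian convention the post relies on:

```python
import re
from pathlib import Path

def compile_wiki(raw_dir: Path, wiki_dir: Path, summarize) -> list[Path]:
    """Compile every raw capture into a structured wiki article.

    `summarize` stands in for the LLM call: it takes raw markdown and
    returns an encyclopedia-style article with [[backlinks]].
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for raw_file in sorted(raw_dir.glob("*.md")):
        article = summarize(raw_file.read_text(encoding="utf-8"))
        out = wiki_dir / raw_file.name
        out.write_text(article, encoding="utf-8")
        written.append(out)
    return written

def lint_wiki(wiki_dir: Path) -> list[str]:
    """Health check: flag [[backlinks]] that point to missing articles."""
    names = {p.stem for p in wiki_dir.glob("*.md")}
    problems = []
    for page in sorted(wiki_dir.glob("*.md")):
        text = page.read_text(encoding="utf-8")
        for link in re.findall(r"\[\[([^\]]+)\]\]", text):
            if link not in names:
                problems.append(f"{page.name}: dangling link to {link}")
    return problems
```

Running `lint_wiki` on a schedule, and handing its findings back to the LLM to repair, is what makes the wiki “self-healing” rather than a one-shot export.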

By treating Markdown files as the “source of truth,” Karpathy avoids the “black box” problem of vector embeddings. Every claim made by the AI can be traced back to a specific .md file that a human can read, edit, or delete.

Implications for the enterprise

While Karpathy’s setup is currently described as a “hacky collection of scripts,” the implications for the enterprise are immediate.

As entrepreneur Vamshi Reddy (@tammireddy) noted in response to the announcement: “Every business has a raw/ directory. Nobody’s ever compiled it. That’s the product.”

Karpathy agreed, suggesting that this methodology represents an “incredible new product” category.

Most companies currently “drown” in unstructured data—Slack logs, internal wikis, and PDF reports that no one has the time to synthesize.

A “Karpathy-style” enterprise layer wouldn’t just search these documents; it would actively author a “Company Bible” that updates in real-time.

As AI educator and newsletter author Ole Lehmann put it on X: “i think whoever packages this for normal people is sitting on something massive. one app that syncs with the tools you already use, your bookmarks, your read-later app, your podcast app, your saved threads.”

Eugen Alpeza, co-founder and CEO of AI enterprise agent builder and orchestration startup Edra, noted in an X post that: “The jump from personal research wiki to enterprise operations is where it gets brutal. Thousands of employees, millions of records, tribal knowledge that contradicts itself across teams. Indeed, there is room for a new product and we’re building it in the enterprise.”

As the community explores the “Karpathy Pattern,” the focus is already shifting from personal research to multi-agent orchestration.

A recent architectural breakdown by @jumperz, founder of AI agent creation platform Secondmate, illustrates this evolution through a “Swarm Knowledge Base” that scales the wiki workflow to a 10-agent system managed via OpenClaw.

The core challenge of a multi-agent swarm—where one hallucination can compound and “infect” the collective memory—is addressed here by a dedicated “Quality Gate.”

Using the Hermes model (trained by Nous Research for structured evaluation) as an independent supervisor, every draft article is scored and validated before being promoted to the “live” wiki.

This system creates a “Compound Loop”: agents dump raw outputs, the compiler organizes them, Hermes validates the truth, and verified briefings are fed back to agents at the start of each session. This ensures that the swarm never “wakes up blank,” but instead begins every task with a filtered, high-integrity briefing of everything the collective has learned.
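The Secondmate pipeline itself isn’t public, but the gate-then-brief pattern it describes might look roughly like this, with `score_fn` a hypothetical stand-in for the independent supervisor model (Hermes, in the setup described):

```python
def quality_gate(drafts, score_fn, threshold=0.7):
    """Promote only drafts the supervisor scores above threshold.

    `score_fn` maps a draft string to a 0..1 confidence score; anything
    below threshold is rejected before it can infect the live wiki.
    """
    live, rejected = [], []
    for draft in drafts:
        (live if score_fn(draft) >= threshold else rejected).append(draft)
    return live, rejected

def session_briefing(live_articles, max_chars=4000):
    """Join verified articles into the briefing each agent receives at
    session start, so the swarm never wakes up blank."""
    return "\n\n".join(live_articles)[:max_chars]
```

The key design choice is that only gated content ever re-enters an agent’s context, which is how one hallucination is prevented from compounding across the swarm.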

Scaling and performance

A common critique of non-vector approaches is scalability. However, Karpathy notes that at a scale of ~100 articles and ~400,000 words, the LLM’s ability to navigate via summaries and index files is more than sufficient.

For a departmental wiki or a personal research project, the “fancy RAG” infrastructure often introduces more latency and “retrieval noise” than it solves.
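At that scale, navigation can be as simple as regenerating one small index file the LLM reads first. A minimal sketch, assuming the first line of each article serves as its summary:

```python
from pathlib import Path

def build_index_page(wiki_dir: Path) -> str:
    """Regenerate index.md: one line per article, using each article's
    first line as a summary, so an LLM can navigate the whole wiki by
    reading a single small file instead of querying a vector store."""
    entries = []
    for page in sorted(wiki_dir.glob("*.md")):
        if page.name == "index.md":
            continue
        lines = page.read_text(encoding="utf-8").lstrip().splitlines()
        first = lines[0].lstrip("# ").strip() if lines else ""
        entries.append(f"- [[{page.stem}]]: {first}")
    index = "# Index\n\n" + "\n".join(entries) + "\n"
    (wiki_dir / "index.md").write_text(index, encoding="utf-8")
    return index
```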

Tech podcaster Lex Fridman (@lexfridman) confirmed he uses a similar setup, adding a layer of dynamic visualization:

“I often have it generate dynamic html (with js) that allows me to sort/filter data and to tinker with visualizations interactively. Another useful thing is I have the system generate a temporary focused mini-knowledge-base… that I then load into an LLM for voice-mode interaction on a long 7-10 mile run.”

This “ephemeral wiki” concept suggests a future where users don’t just “chat” with an AI; they spawn a team of agents to build a custom research environment for a specific task, which then dissolves once the report is written.

Licensing and the ‘file-over-app’ philosophy

Technically, Karpathy’s methodology is built on an open standard (Markdown) but viewed through a proprietary-but-extensible lens (the note-taking and file organization app Obsidian).

  • Markdown (.md): By choosing Markdown, Karpathy ensures his knowledge base is not locked into a specific vendor. It is future-proof; if Obsidian disappears, the files remain readable by any text editor.

  • Obsidian: While Obsidian is a proprietary application, its “local-first” philosophy and EULA (which allows for free personal use and requires a license for commercial use) align with the developer’s desire for data sovereignty.

  • The “Vibe-Coded” Tools: The search engines and CLI tools Karpathy mentions are custom scripts—likely Python-based—that bridge the gap between the LLM and the local file system.

This “file-over-app” philosophy is a direct challenge to SaaS-heavy models like Notion or Google Docs. In the Karpathy model, the user owns the data, and the AI is merely a highly sophisticated editor that “visits” the files to perform work.
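The bridge scripts Karpathy alludes to don’t need to be sophisticated. A sketch of one such tool, assuming a plain search function the LLM can invoke over the vault:

```python
from pathlib import Path

def search_vault(vault: Path, needle: str) -> list[tuple[str, int, str]]:
    """Case-insensitive substring search across every .md file in the
    vault, returning (filename, line number, line) hits. Deliberately
    dumb: the LLM supplies the cleverness, the script supplies
    file-system access."""
    hits = []
    for path in sorted(vault.rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if needle.lower() in line.lower():
                hits.append((path.name, lineno, line.strip()))
    return hits
```

Because the files stay on disk as plain Markdown, the same vault remains fully usable even if every one of these helper scripts is thrown away.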

Librarian vs. search engine

The AI community has reacted with a mix of technical validation and “vibe-coding” enthusiasm. The debate centers on whether the industry has over-indexed on Vector DBs for problems that are fundamentally about structure, not just similarity.

Jason Paul Michaels (@SpaceWelder314), a welder using Claude, echoed the sentiment that simpler tools are often more robust:

“No vector database. No embeddings… Just markdown, FTS5, and grep… Every bug fix… gets indexed. The knowledge compounds.”
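The stack Michaels describes is small enough to reproduce with Python’s standard library alone, since SQLite ships with the FTS5 full-text extension in most builds. A minimal sketch (function names are illustrative):

```python
import sqlite3

def build_fts_index(docs):
    """Index (path, body) pairs in an in-memory SQLite FTS5 table:
    full-text search with no embeddings and no external service."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
    con.executemany("INSERT INTO notes VALUES (?, ?)", docs)
    con.commit()
    return con

def fts_search(con, term):
    """Full-text query, best-ranked matches first."""
    rows = con.execute(
        "SELECT path FROM notes WHERE notes MATCH ? ORDER BY rank", (term,))
    return [path for (path,) in rows]
```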

However, the most significant praise came from Steph Ango (@Kepano), co-creator of Obsidian, who highlighted a concept called “Contamination Mitigation.”

He suggested that users should keep their personal “vault” clean and let the agents play in a “messy vault,” only bringing over the useful artifacts once the agent-facing workflow has distilled them.

Which solution is right for your enterprise vibe coding projects?

| Feature | Vector DB / RAG | Karpathy’s Markdown Wiki |
| --- | --- | --- |
| Data Format | Opaque Vectors (Math) | Human-Readable Markdown |
| Logic | Semantic Similarity (Nearest Neighbor) | Explicit Connections (Backlinks/Indices) |
| Auditability | Low (Black Box) | High (Direct Traceability) |
| Compounding | Static (Requires re-indexing) | Active (Self-healing through linting) |
| Ideal Scale | Millions of Documents | 100 – 10,000 High-Signal Documents |

The “Vector DB” approach is like a massive, unorganized warehouse with a very fast forklift driver. You can find anything, but you don’t know why it’s there or how it relates to the pallet next to it. Karpathy’s “Markdown Wiki” is like a curated library with a head librarian who is constantly writing new books to explain the old ones.

The next phase

Karpathy’s final exploration points toward the ultimate destination of this data: Synthetic Data Generation and Fine-Tuning.

As the wiki grows and the data becomes more “pure” through continuous LLM linting, it becomes the perfect training set.

Instead of the LLM just reading the wiki in its “context window,” the user can eventually fine-tune a smaller, more efficient model on the wiki itself. This would allow the LLM to “know” the researcher’s personal knowledge base in its own weights, essentially turning a personal research project into a custom, private intelligence.
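Karpathy doesn’t share a fine-tuning recipe, but the first step would be flattening the wiki into training records. A naive sketch, assuming a simple prompt/completion pairing per article:

```python
import json
from pathlib import Path

def wiki_to_jsonl(wiki_dir: Path, out_path: Path) -> int:
    """Pair each article's title (as a prompt) with its body (as the
    completion), one JSON record per line, ready for a standard
    fine-tuning pipeline. Returns the number of records written."""
    count = 0
    with out_path.open("w", encoding="utf-8") as f:
        for page in sorted(wiki_dir.glob("*.md")):
            record = {"prompt": f"Explain: {page.stem}",
                      "completion": page.read_text(encoding="utf-8")}
            f.write(json.dumps(record) + "\n")
            count += 1
    return count
```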

Bottom line: Karpathy hasn’t just shared a script; he’s shared a philosophy. By treating the LLM as an active agent that maintains its own memory, he has bypassed the limitations of “one-shot” AI interactions.

For the individual researcher, it means the end of the “forgotten bookmark.”

For the enterprise, it means the transition from a “raw/ data lake” to a “compiled knowledge asset.” As Karpathy himself summarized: “You rarely ever write or edit the wiki manually; it’s the domain of the LLM.” We are entering the era of the autonomous archive.


Tech

Edward ‘Big Balls’ Coristine Is Helping Out on Viral Fraud Videos Now

Nick Shirley—the right-wing creator whose YouTube investigation sparked the Trump administration’s immigration crackdown in Minnesota—claims that his most recent video about alleged fraud in California was bolstered by data provided by none other than Edward Coristine, one of the first members of the so-called Department of Government Efficiency (DOGE) known online as “Big Balls.”

Coristine, who joined DOGE at 19 years old with no prior government experience, was staffed across several agencies including the Social Security Administration (SSA) and the Small Business Administration (SBA). Before joining DOGE, Coristine worked at Elon Musk’s Neuralink for several months and founded a startup known for hiring black hat hackers.

In an interview with Coristine published on Shirley’s YouTube channel on Thursday, Shirley claims that Coristine personally pulled data on Medicaid spending for businesses based in California as potential targets. Coristine nodded along, telling Shirley that the government must create more opportunities to crowdsource fraud investigations.

The information Coristine allegedly pulled for Shirley was from a dataset published by the DOGE team at the Department of Health and Human Services (HHS) in February. In a post to X at the time, the HHS DOGE team referred to it as “the largest Medicaid dataset in department history.” The post also claimed that the dataset could be used to “detect” large-scale fraud.

“After that, I went to California based off that dataset you had helped me extract, and these fraudsters also weren’t even trying to hide it,” Shirley told Coristine in Thursday’s interview.

Coristine said that by open-sourcing data on government spending, vigilante investigators like Shirley who are “more well-positioned” could uncover fraudulent payments. “You are someone who actually went to the places where we were spending all this money and confronted the people and got to know the truth. I think we just have to create more opportunities for that to happen. We have to continue to open source data,” Coristine said.

The intersection of the right’s favorite fraud influencer and one of the most notorious DOGE engineers exemplifies the next evolution of DOGE and the Trump administration’s fight against “waste, fraud, and abuse.”

Shirley’s videos have become key pieces of evidence for the Trump administration’s fraud and immigration crackdowns. When Shirley released his December video claiming to have uncovered more than $100 million in Somali-run childcare fraud in Minnesota, figures like Vice President JD Vance shared it. A surge of immigration agents was then sent to Minnesota, resulting in mass arrests, detainments, and the deaths of two protesters, Renee Good and Alex Pretti.

Early in their YouTube video, Shirley and Coristine directly tie fraud to immigrant communities and foreigners. “A lot of the money is being stolen and siphoned out of the country,” Coristine says, without providing evidence. “Once that money is in a suitcase to Somalia, that’s never coming back,” Shirley replies.

Later in the video, Shirley and Coristine cite specific examples of “waste and fraud” identified by DOGE, including funding for a “Sesame Street style children’s TV program in Iraq” and “tax policy consulting in Liberia.” Both programs were supported by the US Agency for International Development (USAID), which DOGE effectively shut down in the early months of 2025. Coristine also alleged that the SBA “did a terrible job,” particularly with loans during the height of COVID, and that there were “no checks at all on who’s receiving money, not even the most basic checks of like, if [a Social Security number] is real.”


Tech

From kelp pots to kilns: UW’s CoMotion Labs reveals 8 startups joining its newest climate cohort

Emily Power, CEO of Ocean Made, shows the difference in the root structure of tomato plants grown in the startup’s kelp-based pots, on the left, versus plastic pots. (Ocean Made Photo)

The University of Washington’s CoMotion Labs has selected the second cohort of startups for its Climate Tech Incubator. The founders are tackling wide-ranging sustainability challenges including boosting EV adoption, reducing plastic use, supporting local food and beverage production, and developing smart climate strategies for cities.

The six-month program is located at the Seattle Climate Innovation Hub, a public-private partnership in the city’s downtown. The venue supports climate entrepreneurship beyond the incubator and hosts regular public events.

Eight early-stage startups participating in the program receive support in building teams, developing their business plans, forging strategic partnerships and preparing to make their pitch to investors. The cohort will share their progress at a demo day in September.

Jared Silvia, partner at Gliding Ant Ventures and former CEO of BlueDot Photonics, is a CoMotion mentor.

“If our region is serious about being a leader in climate tech, we need to find more ways to support more founders,” Silvia said. “The Climate Tech Incubator is a fantastic addition to the support ecosystem.”

Here are the participants:

Astraeus Ocean Systems is a maritime ag-tech startup, offering water-quality monitoring and crop modeling for shellfish and seaweed growing operations. The Bellingham, Wash.-based company’s founding team includes two Ph.D.-holding research scientists and a leader in business development.

Benchmark Star helps facilities managers comply with clean building regulations by automating regulation tracking and streamlining utility data reporting. The effort launched out of a Seattle Climate Innovation Hub hackathon last year and is led by Renee Gastineau, who has worked in clean energy for more than a decade.

Climate Solutions International is the brainchild of Jan Whittington, a UW urban planning professor. Whittington developed strategies for helping cities take action on climate change while making their infrastructure more resilient in a warming world. The World Bank funded her to apply the approach across 300 cities in 30 countries, and her startup is turning that expertise into a business.

EVQ is a one-stop, AI-powered platform helping drivers find, buy and operate electric vehicles, demystifying battery charging and other hurdles to EV ownership. The Seattle startup spun out of Coltura, a nonprofit promoting EV policies and research founded by EVQ CEO Matthew Metz.

FlameWise produces portable kilns for individuals and communities to turn unwanted woody debris into biochar that sequesters carbon and provides soil benefits. The kilns are a low-smoke alternative to burn piles. Seattle’s Korina Stark launched the effort following challenges to manage wood waste on her own 20-acre forested property.

OceanMade offers seaweed-based pots for nurseries, landscapers, gardeners and small farms who want to avoid plastic waste. The kelp containers also support root development and naturally degrade in the soil after planting. CEO Emily Power previously worked at Microsoft for nearly eight years before founding the Seattle startup in 2021.

REearthable is manufacturing biodegradable plastics from waste limestone recovered from mining operations. The material from the Seattle-area startup is suitable for cosmetics, food packaging and other applications. CEO Charlotte Wintermann is a serial entrepreneur with a background in sales, marketing and business strategy.

Seeking Ferments produces fermented beverages and kombucha that are brewed in Seattle from locally sourced ingredients. Co-founders Jeanette Macias and Lyz Macias launched their startup in 2019 and now sell their beverages online and at farmers markets and their “filling station.”

Related: UW’s CoMotion Labs names six startups for inaugural climate and green tech incubator

Source link

Advertisement
Continue Reading

Tech

OpenAI’s Fidji Simo Is Taking Medical Leave Amid an Executive Shake-Up

OpenAI announced a major reorganization on Friday as the company’s CEO of AGI deployment, Fidji Simo, takes medical leave to focus on her health. OpenAI president Greg Brockman will handle the product teams in Simo’s absence. Simo’s previous title was CEO of applications.

Brad Lightcap, the chief operating officer and one of CEO Sam Altman’s top deputies, is transitioning to a “special projects” role. Kate Rouch, the chief marketing officer, is taking a leave of absence to focus on her health. Rouch has been undergoing treatment for breast cancer. When she returns, it will be in “a different, more narrowly scoped role,” according to a note Simo shared with OpenAI staff which was viewed by WIRED.

“As I shared when I joined, I had a relapse of my neuroimmune condition a few weeks before starting the job,” Simo said in the note which was sent in OpenAI’s “core” Slack channel. “It’s been a bit of a rollercoaster since, and the last month has been particularly rough health-wise. For my entire time here, I’ve postponed medical tests and new therapies to stay completely focused on the job and not miss a single day of work. I took time off for the first time two weeks before the break for some medical tests, and it’s now clear that I’ve pushed a little too far and I really need to try new interventions to stabilize my health.”

Simo is expected to take “several weeks” of leave according to her internal post.

In his new role, Lightcap will be in charge of the company’s forward-deployed engineers, which embed within enterprise organizations and help integrate OpenAI’s technology, among other duties.

OpenAI will begin searching for a new CMO, Simo said. The company is also looking for a chief communications officer to replace Hannah Wong, who left her position in January. Chris Lehane has taken over as the leader of the communications team in the interim.

“We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases,” said an OpenAI spokesperson in a statement. “We’re well-positioned to keep executing with continuity and momentum.”

Simo joined OpenAI in August 2025, where she took over many of the company’s consumer-facing products, including ChatGPT, Codex, and the social-video app Sora. She recently shuttered the Sora app and told staff that the company needed to cut side projects and refocus around its core products.

The decision comes as OpenAI eyes an IPO as soon as this year. The company recently raised $122 billion in the largest funding round the tech industry has ever seen, which valued the company at $852 billion.


Tech

Google's Gemma 4 AI can run on smartphones, no Internet required

The two largest Gemma 4 models – 26B Mixture of Experts and 31B Dense – require an 80GB Nvidia H100 GPU to run unquantized in bfloat16 format. Google claims these models deliver “frontier intelligence on personal computers” for students, researchers, and developers, providing advanced reasoning capabilities for IDEs, coding assistants, and agentic workflows.

Tech

After Cutting Down on ‘Side Quests,’ OpenAI Bought a Talk Show

OpenAI has spent the last few weeks seemingly trying to refocus on using AI for business instead of what execs dubbed “side quests,” dumping its AI video generator and its plans for an adult-themed chatbot. So this week, of course, the company announced it’s jumping into the media business.

OpenAI said it was acquiring Technology Business Programming Network, better known as TBPN, which runs a 3-hour show streamed on weekdays that delves into the biggest topics — and brings in the biggest names — in tech business.

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

OpenAI said it added TBPN to “help create a space for a real, constructive conversation about the changes AI creates,” Fidji Simo, CEO of AGI deployment at OpenAI, wrote in a message to employees shared by OpenAI. Simo said the company also wanted to take advantage of TBPN’s marketing prowess. “They have a strong pulse on where the industry is going, their comms and marketing ideas have really impressed me,” Simo said.

TBPN launched in October 2024 and has been compared to ESPN in how it covers tech — two guys at a big desk with news, analysis, commentary and banter about topics such as AI, crypto, startups and the defense industry. The show’s two hosts and co-founders, Jordi Hays and John Coogan, have had some of tech’s biggest names in studio — OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, entrepreneur Mark Cuban and Salesforce’s Marc Benioff, to name some.

The show is streamed live from 11 a.m. to 2 p.m. PT Monday through Friday on YouTube and X from the Ultradome, a studio on a Hollywood film lot. The show has 70,000 viewers daily and looks set to make more than $30 million in revenue this year, according to the Wall Street Journal.

TBPN co-host Hays acknowledged in a statement that the show has been “critical” of the AI industry.


“After getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” Hays said. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”

In an era of fast-moving media consolidation, it’s a fair question — can TBPN keep saying what they really think, even if that ruffles OpenAI’s feathers? In her statement, Simo said OpenAI wants the show to maintain its “editorial independence.”

“TBPN will continue to run their programming, choose their guests, and make their own editorial decisions,” she said. “That’s foundational to their credibility, and it’s something we’re explicitly protecting as part of this agreement.”

Altman, OpenAI’s co-founder, echoed that sentiment in a post on X, also calling TBPN his “favorite tech show.”

“We want them to keep that going and for them to do what they do so well,” Altman posted. “I don’t expect them to go any easier on us, am sure I’ll do my part to help enable that with occasional stupid decisions.”

The acquisition prompted some criticism and concern on social media as people wondered whether TBPN could really maintain editorial independence.

“Reporters doing accountability journalism are getting mowed down by mass layoffs & are now almost extinct — while the targets of their accountability reporting are giving hundreds of millions of dollars to pundits,” David Sirota, a longtime columnist and founder of the investigative news outlet The Lever, posted on X. “What stage of the media dystopia is this?”

TBPN will be under the supervision of OpenAI’s Chief Global Affairs Officer Chris Lehane, who joined the company in October 2024 and is the company’s main strategist in working with government officials. Decades ago, he worked in the White House of President Bill Clinton — helping to handle the Whitewater and Monica Lewinsky investigations — and as press secretary to Vice President Al Gore. Lehane also set up a pro-crypto super PAC called Fairshake that helped defeat anti-crypto candidates during the 2024 elections and helped Airbnb battle housing regulations.


Tech

AI animation studio Toonstar will turn books into digital shows for HarperCollins

HarperCollins is tapping into AI to bring some of its book franchises to life. Specifically, the publisher is teaming up with Toonstar, an AI animation studio, to turn them into digital shows. The first project will be an adaptation of Lisa Greenwald’s “Friendship List” series, which will also be joined by a graphic novel.

You’d be forgiven for being unaware of Toonstar, a studio that generated some buzz early on for simplifying typically complex animation pipelines with AI but has mostly remained under the radar. Its biggest claim to fame is producing the StEvEn and Parker YouTube series, which has amassed 3.38 million subscribers and sometimes has episodes reaching around a million views. It’s not something I’ve heard animation fans speaking about, though. And honestly, it was tough to sit through a few minutes of its sub-South Park animation.

“By leaning into the [AI] technology, we can make full episodes 80 percent faster and 90 percent cheaper than industry norms,” Toonstar co-founder John Attanasio, told The New York Times last year. In that same interview, the company revealed that it uses AI across its production, including having it dub dialog for international audiences, as well as working on storylines.

Toonstar initially pitched itself as an animation studio leaning into Web3 and NFTs, but those technologies seem virtually absent from the company’s presence today. Space Junk, one of its early series, was “put on hold for a variety of reasons,” a representative told Engadget. “It’s possible we’ll resurrect the concept in the future,” they added. Its original domain now points to a crypto gambling site.

“We’re honored to bring Friendship List to life as an animated series,” Attanasio said in a press release. “Our artist-centered approach ensures these beloved characters and stories stay true to the author’s vision, while our Ink & Pixel production technology enables fast, high-quality production at scale which unlocks the ability to meet audiences where and when they enjoy content today.”

Toonstar has certainly proved it can make “content” for YouTube. Can it actually produce an enjoyable animated show? That’s another question entirely.


Tech

Iran Strikes Leave Amazon Availability Zones ‘Hard Down’ In Bahrain and Dubai

Iranian strikes have reportedly knocked out key AWS availability zones in Bahrain and Dubai, leaving parts of both regions effectively offline for an extended period and forcing Amazon to urge teams and customers to shift workloads elsewhere. “These two regions continue to be impaired, and services should not expect to be operating with normal levels of redundancy and resiliency,” an internal Amazon communication memo reads. “We are actively working to free and reserve as much capacity as possible in the region for customers, and services should be scaled to the minimal footprint required to support customer migration.” Big Technology reports: With the war now nearing its sixth week, Iran has made Amazon infrastructure in the Gulf an economic target and is now eyeing its peers. Amazon’s Bahrain facilities have been hit multiple times, including a Wednesday strike that caused a fire. And its facilities in the UAE also sustained multiple hits. The IRGC is threatening multiple other U.S. tech giants, including Microsoft, Google, and Apple.

Amazon’s infrastructure in Bahrain and Dubai each has three “availability zones,” or clusters of compute. Both Bahrain and Dubai have zones that are “hard down” and “impaired but functioning,” per the internal communication. “We do not have a timeline for when DXB and BAH will return to normal operations,” the internal post said.


Tech

Microsoft's LinkedIn is scanning installed browser extensions without user permission

Researchers have determined that Microsoft’s LinkedIn is scanning browser plug-ins and other information without permission, building user profiles from data it was never authorized to collect.


A European advocacy group claims LinkedIn is probing browser extensions through its website code. Fairlinked e.V. published its “BrowserGate” report alleging LinkedIn detects installed browser extensions by probing for known identifiers through JavaScript. The group says the technique reveals personally identifiable information.

Safari users are less likely to be affected by this specific mechanism, based on how extension detection typically works across browsers. Apple’s browser model limits fingerprinting surfaces, which reduces how much information sites can infer from installed extensions.