

Former NSA director Keith Alexander stepping down from Amazon’s board


Retired Gen. Keith Alexander. (Amazon Photo)

Keith Alexander, a retired four-star Army general and former director of the National Security Agency, is leaving Amazon’s board of directors after more than five years.

Alexander, 74, informed the company April 7 that he wouldn’t stand for re-election at its annual meeting next month, according to the company’s proxy statement.

“We’re grateful to General Alexander for his service on our Board since 2020 and for the many contributions he’s made to our company, and we wish him every success in the future,” a spokesperson said in a statement, responding to GeekWire’s inquiry.

No reason was given for his departure. Amazon’s board, which has fluctuated by one or two directors over time, will consist of 11 people after his departure.

Alexander joined Amazon’s board in September 2020, when Jeff Bezos was still CEO and the company was navigating a massive surge in demand during the early days of the pandemic. He previously chaired the board’s Security Committee, which oversees Amazon’s cybersecurity policies, data protection compliance, and response to significant cyber incidents.


Alexander served as commander of U.S. Cyber Command and led the NSA from 2005 to 2014, a tenure that included the surveillance disclosures of former NSA contractor Edward Snowden.

After retiring from the military, Alexander founded IronNet, a cybersecurity company, serving as CEO and president from 2014 to July 2023 and as board chair until February 2024. 

With his departure, 11 members of the board are up for re-election:

  • Jeff Bezos, founder and executive chair
  • Andy Jassy, president and CEO
  • Edith W. Cooper, co-founder of Medley Living and former EVP of Goldman Sachs
  • Jamie S. Gorelick, lead independent director; senior counsel at WilmerHale
  • Daniel P. Huttenlocher, dean of MIT Schwarzman College of Computing
  • Andrew Y. Ng, managing general partner of AI Fund; founder of DeepLearning.AI
  • Indra K. Nooyi, former chair and CEO of PepsiCo
  • Jonathan J. Rubinstein, former co-CEO of Bridgewater Associates
  • Brad D. Smith, president of Marshall University; former CEO of Intuit
  • Patricia Q. Stonesifer, former president and CEO of Martha’s Table
  • Wendell P. Weeks, chairman, CEO, and president of Corning

Amazon’s annual shareholder meeting will be held virtually May 20.



Harold Perrineau Teases ‘Despicable’ Town and What’s Next in Season 4 of ‘From’


Warning: This article contains spoilers for season 3 of From. You can catch up on MGM Plus.

Three mysterious and terrifying seasons in, MGM’s popular series From continues to put its unwitting residents through unimaginable horrors. The town, known to fans as Fromville, has a sick way of teasing hope and taking it away, with characters seeming to escape or kill one of its monsters only to end up right back where they were before.

Last season, town sheriff and decision-maker Boyd Stevens, played by Harold Perrineau, watched as one of Fromville’s seemingly dead creatures returned with its signature creepy grin intact.


“I think it splintered his brain,” Perrineau told CNET. “I think when we start at the beginning of season 4, that’s where Boyd is. Like, I think his mind is splintered into pieces and he’s either got to pick up those pieces or just lay down and give up because it’s just unfathomable.”

Perrineau’s Boyd is a source of hope and resilience for the people in Fromville, but monsters torture him continuously, and he revealed in season 3 that he’s dealing with worsening Parkinson’s while stuck in the nightmarish town. According to Perrineau, Boyd and the other characters we root for in From are in for a “really hard” fourth season.

“In season 4, the town becomes more present as a character, if that makes sense,” Perrineau said. “You actually recognize, ‘Oh, this town is pushing back … and it is so mean and so ruthless, and it’s doing things that are, I mean, just despicable.’”

From as a character study

From is the most-viewed series in MGM Plus’ (formerly Epix) history, according to the network. Perrineau said while From is billed as a horror series, “I think at the end of the day it’s more of a character study.”


“It doesn’t surprise me that people will … find characters that they identify with and they think, ‘Oh my God, what would I do if I were in that position?’ If I were Donna, what, would I just go and drink?” Perrineau said. “I think that’s the thing that we want from our entertainment, from art, and all those kind of things.”

Earlier this week, fans learned that From will end with season 5, which is expected to debut in 2027, making season 4 the penultimate season.

In a joint statement, executive producers John Griffin, Jeff Pinkner and Jack Bender said they “will get the chance to see our story to its conclusion. Which means questions will be answered. Answers will be questioned. And there will surely be a cascade of tears and terrors in between.”

If you want to tune in as it airs on MGM Plus, stream it, or need a refresher on previous petrifying events in From, keep reading.


What happened in the season 3 finale of From?

From’s season 3 finale was a doozy. I’m hesitant to try to recap the episode because it still feels like there’s little we can say with certainty about this show. But characters like Fatima, Jade and Tabitha appeared to piece together some pivotal parts of the endlessly dark puzzle.

In the final episode, we saw the Fromville monster Smiley again (more specifically — and gruesomely — Fatima gave birth to the creature). Sara did some serious damage to Elgin to get him to give up information. Also, Jade and Tabitha seemed to realize they had set foot in Fromville before as other people, including Fromville denizens Miranda and Christopher. They both were in Fromville at the beginning, when they tried to save their daughter and failed. According to Fatima, the town’s monsters sacrificed their children because they were promised they would live forever. 

Am I misunderstanding or leaving out key details? Probably. For example, Julie is a “story walker,” which will probably play a part in the events to come. But the fun is in at least trying to decipher the From mystery.

I asked Perrineau what he could tease about the Man in Yellow, who appeared at the end of season 3 and shockingly ripped out the throat of a major character, Jim. He cryptically said the Man in Yellow is “the most unexpected character you’ll see all year.”


From season 4 release schedule on MGM Plus

From season 4 premieres on the MGM Plus linear channel on April 19 at 9 p.m. ET and 9 p.m. PT. You can also access new episodes in the MGM Plus streaming app or with the Prime Video add-on for MGM Plus. In general, you can watch one new episode of From each Sunday night through June 28. 

  • Episode 1: April 19
  • Episode 2: April 26
  • Episode 3: May 3
  • Episode 4: May 10
  • Episode 5: May 17
  • Episode 6: May 31
  • Episode 7: June 7
  • Episode 8: June 14
  • Episode 9: June 21
  • Episode 10: June 28


If you don’t have cable, you can sign up to watch From season 4 on MGM Plus directly from its website for $8 per month or $62 per year. Your MGM Plus subscription includes ad-free streaming and the ability to download titles to watch offline.



Anthropic’s relationship with the Trump administration seems to be thawing


Despite recently being designated a supply-chain risk by the Pentagon, Anthropic is still talking to high-level members of the Trump administration.

There were earlier signs of a thawing relationship — or a sense that not every part of the administration wanted to cut off Anthropic — with reports saying that Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell were encouraging the heads of major banks to test out Anthropic’s new Mythos model.

Anthropic co-founder Jack Clark seemed to confirm this, claiming that the ongoing fight over the supply-chain risk designation is a “narrow contracting dispute” that would not interfere with the company’s willingness to brief the government about its latest models.

Then on Friday, Axios reported that Bessent and White House Chief of Staff Susie Wiles had met with Anthropic CEO Dario Amodei. In a statement, the White House described this as an “introductory meeting” that was “productive and constructive.”


“We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology,” the White House said.

Similarly, Anthropic issued a statement confirming that Amodei had met with “senior administration officials for a productive discussion on how Anthropic and the U.S. government can work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety.”

The company added that it’s “looking forward to continuing these discussions.”


The dispute between Anthropic and the Pentagon seemingly began after failed negotiations over the military’s use of Anthropic’s models; the AI company sought to maintain safeguards around the use of its technology for fully autonomous weapons and mass domestic surveillance. (OpenAI quickly announced a military deal of its own, leading to some consumer backlash.)


The Pentagon subsequently declared Anthropic a supply-chain risk — a label that’s generally reserved for foreign adversaries and could severely limit the use of Anthropic’s models by the government. The company is challenging that designation in court.

But it sounds like the rest of the Trump administration doesn’t share the Pentagon’s hostility, with an administration source telling Axios that “every agency” except the Department of Defense wants to use the company’s technology.



US Government Now Wants Anthropic’s ‘Mythos’, Preparing for AI Cybersecurity Threats


On Friday, Anthropic’s CEO met with top U.S. officials and “discussed opportunities for collaboration,” according to a White House spokesperson cited by Politico, “as well as shared approaches and protocols to address the challenges associated with scaling this technology.”

CNN notes the meeting happens at the same time Anthropic “battles the Trump administration in court for blacklisting its Claude AI model…”

The meeting took place as the US government is trying to balance its hardline approach to Anthropic with the national security implications of turning its back on the company’s breakthrough technology — including its Mythos tool that can identify cybersecurity threats but also present a roadmap for hackers to attack companies or the government… The Office of Management and Budget has already told agencies it is preparing to give them access to Mythos to prepare, Bloomberg reported. Axios reported the White House is also in discussion to gain access to Mythos.

The Trump administration “recognizes the power” of Mythos, reports Axios, “and its highly sophisticated — and potentially dangerous — ability to breach cybersecurity defenses.”

“It would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents,” a source close to negotiations told us. “It would be a gift to China”… Some parts of the U.S. intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of Homeland Security), are testing Mythos. Treasury and others want it.

The White House added they plan to invite other AI companies for similar discussions, Politico reports. But Mythos “is also alarming regulators in Europe, who have told POLITICO they have not been able to gain access…”


U.S. government agency tech leaders sought access to the model after Anthropic earlier this year began testing the model and granted limited access to a select group of companies, including JPMorgan, Amazon and Apple… after finding it had hacking capabilities far outstripping those of previous AI models. This includes the ability to autonomously identify and exploit complex software vulnerabilities, such as so-called zero-day flaws, which even some of the sharpest human minds are unable to patch. The AI startup also wrote that the model could carry out end-to-end cyberattacks autonomously, including by navigating enterprise IT systems and chaining together exploits. It could also act as a force-multiplier for research needed to build chemical and biological weapons, and in certain instances, made efforts to cover its tracks when attacking systems, according to Anthropic’s report on the model’s capabilities and its safety assessments.

Those findings and others have inspired fears that the model could be co-opted to launch powerful cyberattacks with relative ease if it fell into the wrong hands. Logan Graham, a senior security researcher at Anthropic, previously told POLITICO that researchers and tech firms had been given early access to Mythos so they could find flaws in their critical code before state-backed hackers or cybercriminals could exploit them. “Within six, 12 or 24 months, these kinds of capabilities could be just broadly available to everybody in the world,” Graham said.



The tough new realities for startups, Amazon’s next big strategic bets, and Allbirds’ crazy AI pivot


This week on the GeekWire Podcast, a week of Seattle-area startup news shows how the AI era is reshaping the regional tech scene. Q1 venture numbers reveal bigger checks going to fewer companies, with Seattle slipping behind the likes of Austin and Miami on deal volume.

And yet the distributed nature of modern startups is complicating what it even means to be a regional tech hub. (Does a mailbox in Pioneer Square really count as a Seattle headquarters?)

Founders and CEOs are navigating this in different ways. Those with enough cash are eyeing strategic acquisitions, including opportunities to absorb startups caught up in the AI shakeout.

Many are also rethinking how they hire and expand. More than a third of the GeekWire 200, our ranking of top Pacific Northwest startups, saw year-over-year employment declines, as agents boost individual productivity and reshape the workforce.

Plus: Andy Jassy’s shareholder letter signals Amazon is making bets again, in areas including chips and robotics. Driving home the point, the tech giant’s ambitious Globalstar acquisition effectively means it’s inheriting Apple’s satellite roadmap.


Of course, we have to talk about Allbirds. The sustainable shoe brand, which once challenged Amazon over knock-off sneakers, pivoted to AI infrastructure and saw its stock soar.

And in our final segment, a trivia challenge on the No. 1 companies in GeekWire 200 history.

With GeekWire co-founders John Cook and Todd Bishop. Edited by Curt Milton. 

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.



Are we ready to place lab experiments in non-human hands?


Stephen D Turner of the University of Virginia explores the importance of governance and oversight around AI in the design and execution of lab experiments.

Artificial intelligence is rapidly learning to autonomously design and run biological experiments, but the systems intended to govern those capabilities are struggling to keep pace.

AI company OpenAI and biotech company Ginkgo Bioworks announced in February 2026 that OpenAI’s flagship model GPT-5 had autonomously designed and run 36,000 biological experiments. It did this through a robotic cloud laboratory, a facility where automated equipment controlled remotely by computers carries out experiments. The AI model proposed study designs, and robots carried them out and fed the data back to the model for the next round. Humans set the goal, and the machines did much of the work in the lab, cutting the cost of producing a desired protein by 40pc.

This is programmable biology: designing biological components on a computer and building them in the physical world, with AI closing the loop.


For decades, biology mostly moved from observation toward understanding. Scientists sequenced the genomes of organisms to catalogue all of their DNA, learning how genes encode the proteins that carry out life’s functions. The invention of tools like CRISPR then allowed scientists to edit that DNA for specific purposes, such as disabling a gene linked to disease. AI is now accelerating a third phase, where computers can both design biological systems and rapidly test them.

The process looks less like traditional benchwork in a lab and more like engineering: design, build, test, learn and repeat. Where a traditional experiment might test a single hypothesis, AI-driven programmable biology explores thousands of design variations in parallel, iterating the way an engineer refines a prototype.
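The design-build-test-learn cycle described above can be sketched as a simple closed loop. The snippet below is a toy illustration only: the ten-residue sequences, the target motif, and the `score` function are invented stand-ins for what would really be an AI design model and a robotic lab measurement.

```python
import random

random.seed(0)  # deterministic for illustration

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score(sequence: str) -> float:
    """Stand-in for a lab measurement or model prediction.
    Here: fraction of residues matching an arbitrary target motif."""
    target = "MKTAYIAKQR"  # hypothetical "desired protein"
    return sum(a == b for a, b in zip(sequence, target)) / len(target)

def propose_variants(parent: str, n: int) -> list[str]:
    """'Design' step: generate n single-point mutants of the parent."""
    variants = []
    for _ in range(n):
        pos = random.randrange(len(parent))
        variants.append(parent[:pos] + random.choice(AMINO_ACIDS) + parent[pos + 1:])
    return variants

def design_build_test_learn(start: str, rounds: int = 20, batch: int = 50) -> str:
    """Each round: design a batch of variants, 'test' them all,
    and keep the best one found so far ('learn')."""
    best = start
    for _ in range(rounds):
        candidates = propose_variants(best, batch)   # design + build
        measured = max(candidates, key=score)        # test the batch
        if score(measured) > score(best):            # learn: keep improvements
            best = measured
    return best

result = design_build_test_learn("AAAAAAAAAA")
print(result, round(score(result), 2))
```

Where a traditional experiment would test one hypothesis, this loop evaluates a thousand variants; in the real systems the article describes, the scoring is done by robots in a cloud lab and the proposal step by an AI model rather than random mutation.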

As a data scientist who studies genomics and biosecurity, I research how AI is reshaping biological research and what safeguards that demands. Current safety measures and regulations have not kept pace with these capabilities, and the gap between what AI can do in biology and what governance systems are prepared to handle is growing.

What AI makes possible

The clearest example of how researchers are using AI to automate research is AI-accelerated protein design.


Proteins are the molecular machines that carry out most functions in living cells. Designing new ones has traditionally required years of trial and error because even small changes to a protein’s sequence can alter its shape and function in unpredictable ways.

Protein language models, which are AI systems trained on millions of natural protein sequences, can quickly predict how mutations will change a protein’s behavior or design new proteins. These AI models are designing potential new drugs and speeding vaccine development.

Paired with automated labs, these models create tight loops of experimentation and revision, testing thousands of variations in days rather than the months or years a human team would need.

Faster protein engineering could mean faster responses to emerging infections and cheaper drugs.


The dual-use problem

Researchers have raised concerns that these same AI tools could be misused, a challenge known as the dual-use problem: technologies developed for beneficial purposes can also be repurposed to cause harm.

For example, researchers have found that AI models integrated with automated labs can optimise how well a virus spreads, even without specialised training. Scientists have developed a risk-scoring tool to evaluate how AI could modify a virus’s capabilities, such as altering which species it infects or helping it evade the immune system.

Current AI models are able to walk users through the technical steps of recovering live viruses from synthetic DNA. Researchers have determined that AI could lower barriers at multiple stages in the process of developing a bioweapon, and that current oversight does not adequately address this risk.

Risk from bio AI

Experienced scientists are already using AI to plan and design biological experiments. The question of whether AI can help people with limited biology training carry out dangerous lab work is the subject of active research.


Two recent studies have reached different conclusions.

A study by AI company Scale AI and biosecurity nonprofit SecureBio found that when people with limited biology experience were given access to large language models, which is the type of AI behind tools like ChatGPT, they were able to complete biosecurity-related tasks such as troubleshooting complex virology lab protocols with four times greater accuracy. In some areas, these novices outperformed trained experts. Around 90pc of these novices reported little difficulty getting the models to provide risky biological information, such as detailed instructions on working with dangerous pathogens, despite built-in safety filters meant to block such outputs.

In contrast, a study led by Active Site, a research nonprofit that studies the use of AI in synthetic biology, found that AI help did not lead to significant differences in the ability of novices to complete the complex workflow to produce a virus in a biosafety laboratory. However, the AI-assisted group succeeded more often on most tasks and finished some steps faster, most notably on growing cells in the lab.

Hands-on work in the lab has traditionally been a bottleneck to translating designs into results. Even a brilliant study plan still depends on skilled human hands to carry it out. That may not last, as cloud laboratories and robotic automation become cheaper and more accessible, allowing researchers to send AI-generated experimental designs to remote facilities for execution.


Responding to AI-driven biological risks

AI systems are now able to run experiments autonomously and at scale, but existing regulations were not designed for this. Rules governing biological research do not account for AI-driven automation, and rules governing AI do not specifically address its use in biology.

In the US, the Biden administration had issued a 2023 executive order on AI security that included biosecurity provisions, but the Trump administration revoked it. Screening the synthetic DNA that commercial providers make to ensure it cannot be misused to make pathogens or toxins remains mostly voluntary. A bipartisan bill introduced in 2026 to mandate DNA screening does not yet address AI-designed sequences that evade current detection methods.

The 1975 Biological Weapons Convention, an international treaty prohibiting the production and use of bioweapons, contains no provisions for AI. The UK AI Security Institute and the US National Security Commission on Emerging Biotechnology have both called for coordinated government action.

The safety evaluations that AI labs run before releasing new models are often opaque and unsuited to capture real-world risk. Researchers have estimated that even modest improvements in an AI model’s ability to help plan pathogen-related experiments could translate to thousands of additional deaths from bioterrorism per year. Timelines for when these capabilities cross critical thresholds remain unclear.


The Nuclear Threat Initiative has proposed a managed access framework for biological AI tools, matching who can use a given tool to the risk level of the model rather than blanket restrictions. The RAND Center on AI, Security and Technology outlined a set of actions researchers could take to improve biosecurity, including improved DNA synthesis screening and model evaluations before release. Researchers have also argued that biological data itself needs governance, especially genomic data that could train models with dangerous capabilities.

Some AI companies have started voluntarily imposing their own safety measures. Anthropic activated its highest safety tier when it released its most advanced model in mid-2025. At the same moment, OpenAI updated its Preparedness Framework, revising the thresholds for how much biological risk a model can pose before additional safeguards are required. But these are voluntary, company-specific steps. Anthropic’s CEO, Dario Amodei, wrote that the pace of AI development may soon outrun any single company’s ability to assess the risk of a given model.

When used in a well-controlled setting, AI can help scientists quickly reach their research goals. What happens when the same capabilities operate outside those controls is a question that policy has not yet answered. Overreact, and talent and investment may move elsewhere while the technology continues advancing anyway. Underreact, and the risks of that technology could be exploited to cause real harm.

The Conversation

Stephen D Turner


Stephen D Turner is an associate professor of data science and an assistant dean for research at the University of Virginia School of Data Science. He has worked on biosecurity applications in national security and writes about AI, biosecurity and other topics.




How to Clean Your Vinyl Records (2026): Vacuum, Ultrasonic, Solution, Brush


With the ultrasonic cleaning machine, you don’t need to vacuum out the grime for each record you clean, because the machine shakes all the gunk off for you. It collects at the bottom of the basin, so you just need to make sure it all gets dumped out when you empty the liquid from the machine between uses. Once your records have taken their bath in the diluted cleaning solution mixture, place them on the drying rack.

If a record (or, more realistically, stack of records) is especially dirty, I clean them two times with either method in progressively cleaner fluid. In my ultrasonic machine, I do all my records once, then change the fluid and do them again. Be sure to have a clean microfiber towel ($5) handy so that the record is fully dry before returning it to its packaging.

Some people prefer to also rinse the clean records in distilled water at the end of the cleaning cycle to remove any remaining solution. If you do that, just dry them the same way before putting them away.

Scratches or Warps?


These cleaning methods can’t repair scratches or effectively fix warped records. The only way to prevent those problems from afflicting your collection is to store your records properly: upright, in a clean environment. Records stacked on top of one another or stored leaning diagonally can warp under their own weight. Don’t store your records somewhere especially hot or cold, or anywhere the temperature varies a lot, as it can affect the vinyl’s longevity.

When buying used records at a store, it’s important to know the difference between a dirty disc and a scratched or warped one. I recommend using a bright handheld flashlight or the light on your smartphone to inspect any used records you’re interested in buying for scratches. Also look at them from different angles to make sure they’re nice and flat. If a used record is sealed inside a polyvinyl bag with tape, a store clerk will almost always cut the tape so you can inspect the disc.

How Often Should I Clean?

Whenever your records are dirty! For most people, a single thorough cleaning of all their records followed by cleaning every 20 or 30 plays is a good start. I clean mine once a year. I make a pile of LPs that have been played a lot, plus newer records that I’ve never cleaned. (New records can have oils used to separate them from the press still on the surface, and thus get gunky faster than previously cleaned records.) From there, it’s Netflix and clean.


I’m not such a clean freak that I wear white gloves when I handle my vinyl, but you should always touch the record’s playing surface as little as possible. Grip the disc from the edges or from the edge and the label rather than touching the grooves.

Before playing a record, clean the needle (I like gel cleaners like this $16 option), and make sure you’ve brushed your record so the needle isn’t grinding dust into the surface (the source of many pops when listening). Properly maintained, your records should last many decades.



Irish-founded Ulysses raises $46m in rounds featuring A16Z


The San Francisco-based start-up is building networked autonomous vehicles that operate above and below the surface of the ocean, ‘Earth’s last frontier’.

Ulysses, founded in Dublin in 2023 by Akhil Voorakkara, Will O’Brien, Jamie Wedderburn and Colm O’Brien – who say they are united by a shared belief “that the ocean is the planet’s most strategic and underserved domain” – will use newly acquired funding to build “the Ocean company”.

A $38m Series A round was led by Andreessen Horowitz (A16Z), while the San Francisco-based Ulysses also announced an $8m seed round led by Pebblebed, bringing total new funding to $46m. Other investors included Booz Allen Hamilton, Harpoon and Genius Ventures, while existing investors Lowercarbon Capital, ReGen Ventures and Superorganism have also followed with further investment.

“The founders, Akhil, Will, Colm and Jamie, came to this country and created something we had been struggling to produce: a small, autonomous underwater vehicle that aims to outperform the primes at a fraction of the cost,” a statement from A16Z said. “We’re excited to partner with the Ulysses team for their Series A.”


Will O’Brien, in a LinkedIn post, said: “We are building The Ocean Company. The ocean is 71pc of the planet. But it is less explored than Mars, and full of secrets, waiting to be told. It is the backbone of global defence. Home to the critical infrastructure that powers our world. And the key to the health of our planet. This frontier needs technology to protect and steward it. We are building it.”

Ulysses describes its mission as “building the operating system for the ocean: massive, networked fleets of low-cost, autonomous vehicles that operate above and below the surface”, using hardware “trusted to function in the harshest maritime environments – whether restoring seagrass meadows, securing critical infrastructure or conducting persistent [intelligence, surveillance and reconnaissance] in contested waters”.

Players like the US Navy have recognised the potential and come calling. Ulysses is now actively recruiting engineers and scientists at its San Francisco base.




The App Store is booming again, and AI may be why


Everyone said AI would kill apps. Instead, new app launches are soaring.

According to a new analysis from market intelligence provider Appfigures, worldwide app releases in the first quarter of 2026 were up 60% year-over-year across both Apple’s App Store and Google Play. That percentage was an even higher 80% when looking at the iOS App Store alone. In April 2026 so far, the total number of app releases is up 104% across both stores compared to the same time last year, and up 89% on iOS.

As Apple’s Senior Vice President of Worldwide Marketing, Greg “Joz” Joswiak, quipped in a recent interview: rumors of the App Store’s death in the AI age “may have been greatly exaggerated.”

Image Credits: Appfigures

These findings come amid concerns that the rise of AI chatbots and agents would ultimately see users turning away from apps — a theory that’s already being floated by those in the industry, like Nothing CEO Carl Pei, who is focused on building a smartphone for the AI era. The New York Times also reported last year on the potential for new computing platforms to eclipse the smartphone, like smart glasses, ambient computing devices, or reimagined smartwatches with AI features.

OpenAI is even working on an AI hardware device with famed Apple designer Jony Ive.


But there’s another possibility, too: AI will make it easier for anyone to create apps, driving a rebirth of the App Store. The new app gold rush could be led by creators who have ideas but not the technical skills to design mobile software.

Appfigures’ data indicates that certain categories of apps are seeing more new releases than others.

Mobile games still account for most new app releases worldwide as of Q1 2026, as they have in prior years. But “productivity” apps have moved into the top five this year, the “utilities” category has climbed to the No. 2 slot, “lifestyle” apps rose from No. 5 last year to No. 3, and “health and fitness” apps rounded out the top five categories.

The working hypothesis here is that AI-powered tools, like Claude Code or Replit, could be behind the surge of new launches. It also seems possible that we’re hitting some sort of tipping point in terms of AI usability, where it’s easy enough for people to leverage these tools to build their own desired mobile apps more quickly — or even build their first apps ever.

The explosion of new apps for Apple to review could also be behind some of the tech giant’s recent missteps. This week, Apple pulled the rewards app Freecash from the App Store for rules violations, after letting the app climb the store’s Top Charts and sit in the top five for months. Apple was also caught off guard by a malicious cryptocurrency app, a clone of Ledger Live, that drained $9.5 million in crypto from victims’ accounts.

While high-profile problems like these generate bad PR for the App Store, the company still does a lot of heavy lifting in blocking and rejecting dangerous or spammy apps. Apple’s most recent analysis, from 2024, said the company had removed or rejected more than 17,000 apps for bait-and-switch violations that year; rejected more than 320,000 app submissions found to be spam, copies of other apps, or misleading; and taken action to prevent more than 37,000 potentially fraudulent apps from reaching users on the App Store.

Still, Apple pundits like John Gruber have long argued that the App Store needs a “bunco squad” of sorts to watch for scammy or fraudulent apps that are climbing the charts or grossing highly.

If AI-assisted vibe coding turns out to be behind the recent surge of app releases, that need will only grow as more new apps flood the marketplace, not all of which will be benign.

Shuttered Startups Are Selling Old Slack Chats, Emails To AI Companies

Some failed startups are reportedly selling old Slack messages, emails, and other internal records to AI companies as training data, creating a new way to cash out after shutting down. Fast Company reports: Shanna Johnson, the CEO of now-defunct software company Cielo24, told the publication that she was able to sell every Slack message, internal email, and Jira ticket as training data for “hundreds of thousands of dollars.”

This isn’t a one-off scenario. SimpleClosure, a startup that helps companies like Cielo24 shut down, told Forbes that there’s been major interest from AI companies trying to get their hands on workplace data. Because of this, SimpleClosure launched a new tool that allows companies to sell their wealth of internal communications — from Slack archives to email chains — to AI labs. The company said it’s processed 100 such deals in the past year. Payouts ranged from $10,000 to $100,000. “I think the privacy issues here are quite substantial,” Marc Rotenberg, founder of the Center for AI and Digital Policy, told Forbes. “Employee privacy remains a key concern, particularly because people have become so dependent on these new internal messaging tools like Slack. … It’s not generic data. It’s identifiable people.”


Amazon issues $589 MacBook Neo deal, lowest price on new release

Avoid backorder delays and grab the lowest price ever with Amazon’s MacBook Neo deal that drops the standard model to $589.99.

Save on every new MacBook Neo, including this popular Citrus option.

A popular option for families and bargain hunters, Apple’s MacBook Neo is on sale at Amazon today, with the standard 256GB model marked down to $589.99 after a $10 discount. At press time, all four colorways are eligible for the savings, with units shipping now or in 1-2 days, depending on the color.
