Global AI deal volume reached 1,245 in Q3 2024, a level not seen since Q1 2022, reflecting how confident and resilient investors remain about AI.
CB Insights says that “while AI deals in Q3’24 included massive $1B+ rounds to defense tech provider Anduril and AI lab Safe Superintelligence, global AI funding actually dropped by 29% QoQ.” A 77% QoQ decline in funding from $1B+ AI rounds contributed to that overall 29% drop.
The average AI deal size increased 28% this year, climbing from $18.4M in 2023 to $23.5M. The gains are largely attributable to five $1B+ rounds: xAI’s $6B Series B at a $24B valuation, Anthropic’s $2.8B Series D at an $18.4B valuation, Anduril’s $1.5B Series F at a $14B valuation, G42’s $1.5B investment from Microsoft and CoreWeave’s $1.1B Series C at a $19B valuation. CB Insights notes that these megadeals aren’t solely responsible for the higher average: the median AI deal size is up 9% so far in 2024.
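For readers who want to sanity-check the headline percentage, a quick calculation using the figures reported above:

```python
# Quick check of the average deal size change cited above (figures in $M, from CB Insights).
avg_2023, avg_2024 = 18.4, 23.5
print(f"Change: {(avg_2024 / avg_2023 - 1) * 100:.1f}%")  # ~27.7%, reported as 28%
```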
U.S.-based AI startups attracted $11.4B across 566 deals in Q3 2024, accounting for over two-thirds of global AI funding and 45% of global AI deals. European AI startups attracted $2.8B across 279 deals, and Asian AI startups raised $2.1B across 316 deals.
Generative AI and industry-specific AI lead investments
The anticipated productivity gains and potential cost reductions that generative AI and industry-specific AI are delivering are core to investors’ confidence and driving more AI deals.
Enterprises have already learned to prioritize gen AI and broader AI investments that deliver measurable value at scale, and that discipline is one of the primary factors continuing to fuel venture investment over other opportunities. Gartner’s 2024 Generative AI Planning Survey reflects how impatient senior management is for results, echoing CB Insights’ findings.
One of the key findings from the Gartner survey is that senior executives are expecting, and driving, gen AI projects to boost productivity by 22.6%, outpacing expected revenue growth of 15.8% and cost savings of 15.2%. While cost efficiency and revenue gains matter, Gartner predicts the most immediate and substantial impact will be on operational efficiency, and that enterprises that prioritize gen AI integration will see significant gains in both workflow optimization and financial performance.
CB Insights provides a comprehensive analysis of the deals completed in Q3, reflecting the growing dominance of gen AI and industry-specific AI investments. The following deals support this finding:
Gen AI investments in Q3:
Safe Superintelligence raised a massive $1 billion Series A round, indicating continued strong interest in large language models (LLMs) and general-purpose AI systems.
Baichuan AI, a Chinese generative AI company, secured $688 million in Series A funding.
Moonshot AI, another gen AI startup, raised $300 million in a Series B round.
Codeium, a code generation AI company, became a unicorn with a $150 million Series C round.
Industry-specific AI investments in Q3:
Anduril, an AI-powered defense technology company, raised $1.5 billion in a Series F round, highlighting interest in AI for national security applications.
ArsenalBio secured $325 million for AI in biotechnology and drug discovery.
Helsing raised $488 million for AI applications in defense and security.
Altana AI received $200 million for AI in supply chain management and logistics.
Flo Health raised $200 million for AI-powered women’s health applications.
New AI unicorns more than doubled in Q3
Gen AI continues to be one of the primary catalysts driving the formation and growth of unicorns (private companies valued at $1B+). CB Insights found that the number of new AI unicorns more than doubled QoQ, reaching 13 in Q3 2024 and accounting for 54% of all new unicorns in the quarter.
More than half of the AI unicorns minted last quarter are gen AI startups. They target a broad spectrum of areas, including AI for 3D environments (World Labs), code generation (Codeium) and legal workflow automation (Harvey). Among the new gen AI unicorns in Q3’24, Safe Superintelligence, co-founded by OpenAI co-founder Ilya Sutskever, received the largest valuation: the AI lab was valued at $5B after raising a $1B Series A round in September 2024.
Gen AI’s enterprise challenges are just beginning
The potential of gen AI and industry-specific AI to improve productivity, help drive new revenue streams and reduce costs keeps investors resilient and focused on results.
For the many organizations raising additional late-stage funding, as well as younger startups and new unicorns, the challenge will be gaining adoption at a scale solid enough to sustain recurring revenue while keeping costs down.
With CIOs and CISOs looking to reduce the tool and app sprawl they already have, the most successful startups will have to find new ways to embed and integrate gen AI into existing apps and workflows. That will be challenging, as every enterprise has its own data management challenges, siloed legacy systems and a need to update its data accuracy, quality and security strategies.
Startups and unicorns that can take on all these challenges and improve their customers’ operations at the data level first are most likely to deliver the results investors expect.
Hugging Face today released SmolLM2, a new family of compact language models that achieve impressive performance while requiring far fewer computational resources than their larger counterparts.
The new models, released under the Apache 2.0 license, come in three sizes — 135M, 360M and 1.7B parameters — making them suitable for deployment on smartphones and other edge devices where processing power and memory are limited. Most notably, the 1.7B parameter version outperforms Meta’s Llama 1B model on several key benchmarks.
Small models pack a powerful punch in AI performance tests
“SmolLM2 demonstrates significant advances over its predecessor, particularly in instruction following, knowledge, reasoning and mathematics,” according to Hugging Face’s model documentation. The largest variant was trained on 11 trillion tokens using a diverse dataset combination including FineWeb-Edu and specialized mathematics and coding datasets.
This development comes at a crucial time when the AI industry is grappling with the computational demands of running large language models (LLMs). While companies like OpenAI and Anthropic push the boundaries with increasingly massive models, there’s growing recognition of the need for efficient, lightweight AI that can run locally on devices.
The push for bigger AI models has left many potential users behind. Running these models requires expensive cloud computing services, which come with their own problems: slow response times, data privacy risks and high costs that small companies and independent developers simply can’t afford. SmolLM2 offers a different approach by bringing powerful AI capabilities directly to personal devices, pointing toward a future where advanced AI tools are within reach of more users and companies, not just tech giants with massive data centers.
Edge computing gets a boost as AI moves to mobile devices
SmolLM2’s performance is particularly noteworthy given its size. On the MT-Bench evaluation, which measures chat capabilities, the 1.7B model achieves a score of 6.13, competitive with much larger models. It also shows strong performance on mathematical reasoning tasks, scoring 48.2 on the GSM8K benchmark. These results challenge the conventional wisdom that bigger models are always better, suggesting that careful architecture design and training data curation may be more important than raw parameter count.
The models support a range of applications including text rewriting, summarization and function calling. Their compact size enables deployment in scenarios where privacy, latency or connectivity constraints make cloud-based AI solutions impractical. This could prove particularly valuable in healthcare, financial services and other industries where data privacy is non-negotiable.
Industry experts see this as part of a broader trend toward more efficient AI models. The ability to run sophisticated language models locally on devices could enable new applications in areas like mobile app development, IoT devices, and enterprise solutions where data privacy is paramount.
The race for efficient AI: Smaller models challenge industry giants
However, these smaller models still have limitations. According to Hugging Face’s documentation, they “primarily understand and generate content in English” and may not always produce factually accurate or logically consistent output.
The release of SmolLM2 suggests that the future of AI may not solely belong to increasingly large models, but rather to more efficient architectures that can deliver strong performance with fewer resources. This could have significant implications for democratizing AI access and reducing the environmental impact of AI deployment.
The models are available immediately through Hugging Face’s model hub, with both base and instruction-tuned versions offered for each size variant.
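Getting started should be straightforward for anyone already using the transformers library. Below is a minimal sketch of loading the instruction-tuned 1.7B variant; the model ID shown follows Hugging Face's usual naming for this release and should be verified against the model hub before use.

```python
# Minimal sketch: running a SmolLM2 instruction-tuned checkpoint with transformers.
# The model ID below is assumed from Hugging Face's naming conventions for this release;
# check the model hub for the exact identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt and generate a short response.
messages = [{"role": "user", "content": "Summarize: SmolLM2 is a family of compact language models."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the 135M and 360M variants, which are small enough to experiment with on modest hardware.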
On Friday evening, Okta posted an odd update to its list of security advisories. The latest entry reveals that under specific circumstances, someone could’ve logged in by entering anything for a password, but only if the account’s username had over 52 characters.
According to the note people reported receiving, other requirements to exploit the vulnerability included Okta checking the cache from a previous successful login, and an organization’s authentication policy not adding extra conditions such as multi-factor authentication (MFA).
Here are the details that are currently available:
On October 30, 2024, a vulnerability was internally identified in generating the cache key for AD/LDAP DelAuth. The Bcrypt algorithm was…
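The mention of Bcrypt points at a well-known property of that algorithm: it only hashes the first 72 bytes of its input. If a cache key is built by concatenating a user ID, username and password, a long enough username can push the password entirely past that cutoff. The Python sketch below illustrates the failure mode; the exact key layout Okta used is not public, so the concatenation shown is a hypothetical reconstruction for illustration only.

```python
# Illustration of bcrypt's 72-byte input truncation, the likely mechanism behind the issue.
# The cache-key layout (user_id + username + password) is a hypothetical reconstruction,
# not Okta's actual format. The Python bcrypt library documents that input beyond
# 72 bytes is ignored.
import bcrypt

def cache_key(user_id: str, username: str, password: str) -> bytes:
    # Everything after the first 72 bytes of this concatenation is silently dropped.
    return bcrypt.hashpw((user_id + username + password).encode(), bcrypt.gensalt())

user_id = "00u1a2b3c4d5e6f7g8h9"   # 20 characters (hypothetical)
long_username = "a" * 52            # a 52-character username
stored = cache_key(user_id, long_username, "correct-password")

# With 20 + 52 = 72 bytes consumed by the user ID and username, the password never
# reaches bcrypt, so any password "matches" the cached key.
attempt = (user_id + long_username + "totally-wrong-password").encode()
print(bcrypt.checkpw(attempt, stored))  # True -> the password is effectively ignored
```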
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
Your Strands expert
Marc McLaren
NYT Strands today (game #244) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… Good on paper
NYT Strands today (game #244) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
LATE
LAST
STALE
STARE
PUFF
CLIP
NYT Strands today (game #244) – hint #3 – spangram
What is a hint for today’s spangram?
• Stationery cupboard
NYT Strands today (game #244) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First: left, 4th row
Last: right, 4th row
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #244) – the answers
The answers to today’s Strands, game #244, are…
PRINTER
SCISSORS
PENCILS
STAPLER
RULER
SPANGRAM: OFFICESUPPLIES
My rating: Easy
My score: Perfect
As the father of teenage daughters I am well aware of all of the OFFICESUPPLIES in today’s Strands. Not because they work in an office, obviously, but because they are at school and seem to get through about 20 RULERs and 50 PENCILS a year, constantly need me to help them use the PRINTER and still seem a little clueless about how to use SCISSORS or a STAPLER. Kids today, eh? Too much time spent in front of a screen, clearly.
My own parental issues aside, this was an easy Strands puzzle to solve. The theme clue provided a good push in the right direction, and when I found PRINTER by accident my course was duly charted. None of the words were hard to think of, and only the rather long and complex spangram provided any real challenge.
Yesterday’s NYT Strands answers (Friday, 1 November, game #243)
QUEEN
KING
ROOK
TIMER
BISHOP
PAWN
KNIGHT
BOARD
SPANGRAM: CHECKMATE
What is NYT Strands?
Strands is the NYT’s new word game, following Wordle and Connections. It’s now out of beta so is a fully fledged member of the NYT’s games stable and can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.
Due to its unique model that includes only original content, Apple TV+ tends to have a very slim new release slate. However, just about every Apple TV+ release features A-list talent, and it has set a high bar for quality. Just look at Best Picture winner CODA and Emmy-winning drama Severance (returning in January).
This month is no exception, as there are only four new additions to the library in November. We’ve highlighted the two most anticipated, but don’t overlook Season 2 of the critically acclaimed comedy Bad Sisters or the Malala Yousafzai and Jennifer Lawrence documentary Bread & Roses.
There are only a few new arrivals on Apple TV+ each month, but they’re usually all worth at least a glance. Read on for everything coming to Apple TV+ in November 2024.
Google may upgrade the “Now Playing” feature by adding the much-needed album art to the history page. Now Playing has been able to identify songs with a high degree of accuracy, but the list only included the name of the song and the artist.
Now Playing is constantly operating in the background, but only for music
Introduced way back in 2017 along with the Pixel 2, the Now Playing feature has remained exclusive to the Google Pixel phones. It essentially identifies songs that are playing nearby and works well even on the latest Pixel 9 devices.
Apps like Shazam have been recognizing music for quite some time, but Now Playing has a few tricks of its own on Pixel phones: it works entirely in the background, so Pixel users don’t even need to pull out their phones.
While working in the background, Now Playing relies on the low-power efficiency cores to continuously analyze audio through the microphone. If it picks up audio that seems like music or a song, Now Playing requests the performance cores to record a few seconds of the audio.
Now Playing then matches the recorded audio against a database containing tens of thousands of fingerprints of the most popular songs in a particular region. After processing and matching, Now Playing displays the name and artist of the song on the lock screen as well as in a notification.
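Google hasn’t published the matching algorithm, but the flow described above resembles classic audio fingerprinting: hash short, quantized snapshots of the recording and look them up in a local database. Here’s a deliberately simplified Python sketch of that idea; the hashing scheme and song data are illustrative inventions, not Google’s implementation.

```python
# Simplified sketch of on-device audio fingerprint matching, loosely modeled on how
# Now Playing is described to work. The hashing scheme and database are illustrative
# inventions, not Google's actual implementation.
import hashlib

def fingerprint(samples: list[float], chunk: int = 4) -> set[str]:
    """Hash coarse, quantized chunks of the audio into a set of short fingerprint tokens."""
    tokens = set()
    for i in range(0, len(samples) - chunk + 1, chunk):
        # Quantize each chunk so small noise differences map to the same token.
        quantized = tuple(round(s, 1) for s in samples[i:i + chunk])
        tokens.add(hashlib.sha1(str(quantized).encode()).hexdigest()[:8])
    return tokens

# Tiny "regional database" of precomputed song fingerprints (illustrative only).
database = {
    "Song A - Artist 1": fingerprint([0.11, 0.52, 0.33, 0.71, 0.64, 0.22, 0.91, 0.15]),
    "Song B - Artist 2": fingerprint([0.81, 0.12, 0.45, 0.09, 0.33, 0.77, 0.58, 0.26]),
}

# A few seconds of captured audio (here: a slightly noisy copy of Song A's samples).
captured = fingerprint([0.12, 0.53, 0.31, 0.69, 0.66, 0.21, 0.93, 0.14])

# Report the best-overlapping entry, mirroring the name-and-artist result on the lock screen.
best = max(database, key=lambda song: len(database[song] & captured))
print(best, len(database[best] & captured), "matching tokens")
```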
Needless to say, Now Playing is fairly accurate. However, the list of songs it recognizes contains only the name of the song, the artist, and a timestamp.
Google’s Now Playing feature for Pixel devices may get album art
The songs that Now Playing recognized are visible under Settings > Sound & vibration > Now Playing. The page lists the history of identified songs in reverse chronological order.
Although there’s an icon next to each song, Google has so far not attached album art to the songs Now Playing recognizes. According to Android Authority, this might change in the future.
The hidden system app that downloads the Now Playing database may soon also grab album art. The code change is titled “#AlbumArt Add Now Playing album art downloads to the network usage log”.
Google has yet to assign a dedicated online repository from where Now Playing will download album art for the songs it recognizes. However, Ambient Music Mod, an open-source port of Now Playing by developer Kieron Quinn, already has the feature. The reverse-engineered version essentially replaces the generic music note icon with album art.
Disney is adding another layer to its AI and extended reality strategies. As first reported by Reuters, the company recently formed a dedicated emerging technologies unit. Dubbed the Office of Technology Enablement, the group will coordinate the company’s exploration, adoption and use of artificial intelligence, AR and VR tech.
It has tapped Jamie Voris, previously the CTO of its Studios Technology division, to oversee the effort. Before joining Disney in 2010, Voris was the chief technology officer at the National Football League. More recently, he led the development of the company’s Apple Vision Pro app. Voris will report to Alan Bergman, the co-chairman of Disney Entertainment. Reuters reports the company eventually plans to grow the group to about 100 employees.
“The pace and scope of advances in AI and XR are profound and will continue to impact consumer experiences, creative endeavors, and our business for years to come — making it critical that Disney explore the exciting opportunities and navigate the potential risks,” Bergman wrote in an email Disney shared with Engadget. “The creation of this new group underscores our dedication to doing that and to being a positive force in shaping responsible use and best practices.”
A Disney spokesperson told Engadget the Office of Technology Enablement won’t take over any existing AI and XR projects at the company. Instead, it will support Disney’s other teams, many of which are already working on products that involve those technologies, to ensure their work fits into the company’s broader strategic goals.
“It is about bringing added focus, alignment, and velocity to those efforts, and about reinforcing our commitment to being a positive force in shaping responsible use and best practices,” the spokesperson said.
It’s safe to say Disney has probably navigated the last two decades of technological change better than most of Hollywood. For instance, the company’s use of the Unreal Engine in conjunction with a digital set known as The Volume has streamlined the production of VFX-heavy shows like The Mandalorian. With extended reality and AI in particular promising tidal changes to how humans work and play, it makes sense to add some additional oversight to how those technologies are used at the company.