
Tech

Microsoft launches 3 new AI models in direct shot at OpenAI and Google


Microsoft on Wednesday launched three new foundational AI models it built entirely in-house — a state-of-the-art speech transcription system, a voice generation engine, and an upgraded image creator — marking the most concrete evidence yet that the $3 trillion software giant intends to compete directly with OpenAI, Google, and other frontier labs on model development, not just distribution.

The trio of models — MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 — are available immediately through Microsoft Foundry and a new MAI Playground. They span three of the most commercially valuable modalities in enterprise AI: converting speech to text, generating realistic human voice, and creating images. Together, they represent the opening salvo from Microsoft’s superintelligence team, which Microsoft AI CEO Mustafa Suleyman formed just six months ago to pursue what he calls “AI self-sufficiency.”

“I’m very excited that we’ve now got the first models out, which are the very best in the world for transcription,” Suleyman told VentureBeat in an exclusive interview ahead of the launch. “Not only that, we’re able to deliver the model with half the GPUs of the state-of-the-art competition.”

The announcement lands at a precarious moment for Microsoft. The company’s stock just closed its worst quarter since the 2008 financial crisis, as investors increasingly demand proof that hundreds of billions of dollars in AI infrastructure spending will translate into revenue. These models — priced aggressively and positioned to reduce Microsoft’s own cost of goods sold — are Suleyman’s first answer to that pressure.

Microsoft’s new transcription model claims best-in-class accuracy across 25 languages

MAI-Transcribe-1 is the headline release. The speech-to-text model achieves the lowest average Word Error Rate (WER) on the FLEURS benchmark — the industry-standard multilingual test — across the top 25 languages by Microsoft product usage, averaging 3.8%. According to Microsoft’s benchmarks, it beats OpenAI’s Whisper-large-v3 on all 25 languages, Google’s Gemini 3.1 Flash on 22 of 25, and ElevenLabs’ Scribe v2 and OpenAI’s GPT-Transcribe on 15 of 25 each.
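Word Error Rate, the metric behind these claims, is the minimum number of word-level substitutions, insertions, and deletions needed to turn the model’s transcript into the reference, divided by the reference length. A minimal sketch of the computation (not Microsoft’s or FLEURS’ actual evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Two missed words out of a six-word reference: WER = 2/6
print(wer("the cat sat on the mat", "the cat sat mat"))
```

A 3.8% average WER means roughly one word in 26 is wrong across the benchmarked languages.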

The model uses a transformer-based text decoder with a bi-directional audio encoder. It accepts MP3, WAV, and FLAC files up to 200MB, and Microsoft says its batch transcription speed is 2.5 times faster than the existing Microsoft Azure Fast offering. Diarization, contextual biasing, and streaming are listed as “coming soon.” Microsoft is already testing MAI-Transcribe-1 inside Copilot’s Voice mode and Microsoft Teams for conversation transcription — a detail that underscores how quickly the company intends to replace third-party or older internal models with its own.

Alongside it, MAI-Voice-1 is Microsoft’s text-to-speech model, capable of generating 60 seconds of natural-sounding audio in a single second. The model preserves speaker identity across long-form content and now supports custom voice creation from just a few seconds of audio through Microsoft Foundry. Microsoft is pricing it at $22 per 1 million characters.

MAI-Image-2, meanwhile, debuted as a top-three model family on the Arena.ai leaderboard and now delivers at least 2x faster generation times on Foundry and Copilot compared to its predecessor. Microsoft is rolling it out across Bing and PowerPoint, pricing it at $5 per 1 million tokens for text input and $33 per 1 million tokens for image output. WPP, one of the world’s largest advertising holding companies, is among the first enterprise partners building with MAI-Image-2 at scale.
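Using the per-unit prices quoted above ($22 per million characters for MAI-Voice-1; $5 per million input tokens and $33 per million output tokens for MAI-Image-2), back-of-envelope job costs are easy to estimate. A rough sketch, not an official billing calculator:

```python
# Prices as stated at launch (USD); subject to change.
VOICE_PER_M_CHARS = 22.00  # MAI-Voice-1, per 1M characters of input text
IMAGE_IN_PER_M = 5.00      # MAI-Image-2, per 1M text input tokens
IMAGE_OUT_PER_M = 33.00    # MAI-Image-2, per 1M image output tokens

def voice_cost(characters: int) -> float:
    """Estimated MAI-Voice-1 cost for a script of the given length."""
    return characters / 1_000_000 * VOICE_PER_M_CHARS

def image_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated MAI-Image-2 cost for one generation job."""
    return (input_tokens / 1_000_000 * IMAGE_IN_PER_M
            + output_tokens / 1_000_000 * IMAGE_OUT_PER_M)

# Narrating a 5,000-character script costs about 11 cents:
print(round(voice_cost(5_000), 4))
```

Token counts per generated image are not disclosed in the announcement, so `image_cost` takes them as inputs rather than assuming a figure.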

The contract renegotiation with OpenAI that made Microsoft’s model ambitions possible

To understand why these models matter, you have to understand the contractual tectonic shift that made them possible. Until October 2025, Microsoft was contractually prohibited from independently pursuing artificial general intelligence. The original deal with OpenAI, signed in 2019, gave Microsoft a license to OpenAI’s models in exchange for building the cloud infrastructure OpenAI needed. But when OpenAI sought to expand its compute footprint beyond Microsoft — striking deals with SoftBank and others — Microsoft renegotiated. As Suleyman explained in a December 2025 interview with Bloomberg, the revised agreement meant that “up until a few weeks ago, Microsoft was not allowed — by contract — to pursue artificial general intelligence or superintelligence independently.” The new terms freed Microsoft to build its own frontier models while retaining license rights to everything OpenAI builds through 2032.

Suleyman described the dynamic to VentureBeat in characteristically blunt terms. “Back in September of last year, we renegotiated the contract with OpenAI, and that enabled us to independently pursue our own superintelligence,” he said. “Since then, we’ve been convening the compute and the team and buying up the data that we need.”

He was quick to emphasize that the OpenAI partnership remains intact. “Nothing’s changing with the OpenAI partnership. We will be in partnership with them at least until 2032 and hopefully a lot longer,” Suleyman said. “They have been a phenomenal partner to us.” He also highlighted that Microsoft provides access to Anthropic’s Claude through its Foundry API, framing the company as “a platform of platforms.” But the subtext is unmistakable: Microsoft is building the capability to stand on its own. In March, as Business Insider first reported, Suleyman wrote in an internal memo that his goal is to “focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years.” CNBC reported that the structural shift freed Suleyman from day-to-day Copilot product responsibilities, with former Snap executive Jacob Andreou taking over as EVP of the combined consumer and commercial Copilot experience.

How teams of fewer than 10 engineers built models that rival Big Tech’s best

Perhaps the most striking detail Suleyman shared with VentureBeat is how small the teams behind these models actually are. “The audio model was built by 10 people, and the vast majority of the speed, efficiency and accuracy gains come from the model architecture and the data that we have used,” Suleyman said. “My philosophy has always been that we need fewer people who are more empowered. So we operate an extremely flat structure.” He added: “Our image team, equally, is less than 10 people. So this is all about model and data innovation, which has delivered state of the art performance.”

This matters for two reasons. First, it challenges the prevailing industry narrative that frontier AI development requires thousands of researchers and billions in headcount costs. Meta, by contrast, has pursued what Suleyman described in his Bloomberg interview as a strategy of “hiring a lot of individuals, rather than maybe creating a team” — including reported compensation packages of $100 million to $200 million for top researchers. Second, small teams producing state-of-the-art results dramatically improve the economics. If Microsoft can build best-in-class transcription with 10 engineers and half the GPUs of competitors, the margin structure of its AI business looks fundamentally different from companies burning through cash to achieve similar benchmarks.

The lean-team philosophy also echoes Suleyman’s broader views on how AI is already reshaping the work of building AI itself. When asked by VentureBeat how his own team works, Suleyman described an environment that resembles a startup trading floor more than a traditional Microsoft engineering org. “There are groups of people around round tables, circular tables, not traditional desks, on laptops instead of big screens,” he said. “They’re basically vibe coding, side by side all day, morning till night, in rooms of 50 or 60 people.”

Why Suleyman’s “humanist AI” pitch is aimed squarely at enterprise buyers

Suleyman has been steadily building a philosophical brand around Microsoft’s AI efforts that he calls “humanist AI” — a term that appeared prominently in the blog post he authored for the launch and that he elaborated on in our interview. “I think that the motivation of a humanist super intelligence is to create something that is truly in service of humanity,” he told VentureBeat. “Humans will remain in control at the top of the food chain, and they will be always aligned to human interests.”

The framing serves multiple purposes. It differentiates Microsoft from the more acceleration-oriented rhetoric coming from OpenAI and Meta. It resonates with enterprise buyers who need governance, compliance, and safety assurances before deploying AI in regulated industries. And it provides a narrative hedge: if something goes wrong in the broader AI ecosystem, Microsoft can point to its stated commitment to human control. In his December Bloomberg interview, Suleyman went further, describing containment and alignment as “red lines” and arguing that no one should release a superintelligence tool until they are “confident it can be controlled.”

Suleyman also stressed data provenance as a competitive advantage, describing a conversation with CEO Satya Nadella about developing “a clean lineage of models where the data is extremely clean.” He drew an implicit contrast with open-source alternatives, noting that “many of the open-source models have been trained on data in, let’s say, inappropriate ways. And there are potentially security issues with that.” For enterprise customers evaluating AI vendors amid a thicket of copyright lawsuits across the industry, that is a meaningful commercial argument — if Microsoft can credibly claim that its training data was acquired through properly licensed channels, it reduces the legal and reputational risk of deploying these models in production.

Microsoft’s aggressive pricing puts pressure on Amazon, Google, and the AI startup ecosystem

Today’s launch positions Microsoft on three competitive fronts simultaneously. MAI-Transcribe-1 directly targets the transcription workloads that OpenAI’s Whisper models have dominated in the open-source community, with Microsoft claiming superior accuracy on all 25 benchmarked languages. The FLEURS results also show it winning against Google’s Gemini 3.1 Flash Lite on 22 of 25 languages — a direct challenge as Google aggressively pushes Gemini across its own product suite. And MAI-Voice-1’s ability to clone voices from seconds of audio and generate speech at 60x real-time puts it in competition with ElevenLabs, Resemble AI, and the growing ecosystem of voice AI startups. Microsoft’s distribution advantage — any Foundry developer can now access these capabilities through the same API they use for GPT-4 and Claude — acts as a powerful moat.

Suleyman framed the competitive position confidently: “We’re now a top three lab just under OpenAI and Gemini,” he told VentureBeat. The pricing strategy — MAI-Voice-1 at $22 per million characters, MAI-Image-2 at $5 per million input tokens — reflects a deliberate decision to compete on cost. “We’re pricing them to be the very best of any hyperscaler. So there will be the cheapest of any of the hyperscalers out there, Amazon. And obviously Google,” Suleyman said. “And that’s a very conscious decision.”

This makes strategic sense for Microsoft, which can amortize model development costs across its enormous installed base of enterprise customers. But it also speaks to the question investors have been asking with increasing urgency: when does AI spending start generating returns? Microsoft’s stock has fallen roughly 17% year-to-date, according to CNBC, part of a broader selloff in software stocks. By building models that run on half the GPUs of competitors, Microsoft reduces its own infrastructure costs for internal products — Teams, Copilot, Bing, PowerPoint — while offering developers pricing designed to undercut the rest of the market. In his March memo, Suleyman wrote that his models would “enable us to deliver the COGS efficiencies necessary to be able to serve AI workloads at the immense scale required in the coming years.” These three models are the first tangible delivery on that promise.

Suleyman says a frontier large language model is coming — and Microsoft plans to be “completely independent”

Suleyman made clear that transcription, voice, and image generation are just the beginning. When asked whether Microsoft would build a large language model to compete directly with GPT at the frontier level, he was unequivocal. “We absolutely are going to be delivering state of the art models across all modalities,” he said. “Our mission is to make sure that if Microsoft ever needs it, we will be able to provide state of the art at the best efficiency, the cheapest price, and be completely independent.”

He described a multi-year roadmap to “set up the GPU clusters at the appropriate scale,” noting that the superintelligence team was formally stood up only in October 2025. Suleyman spoke to VentureBeat from Miami, where the full team was convening for one of its regular week-long in-person sessions. He described Nadella flying in for the gathering to lay out “the roadmap of everything that we need to achieve for our AI self-sufficiency mission over the next 2, 3, 4 years, and all the compute roadmap that that would involve.”

Building a competitive frontier LLM, of course, is a different order of magnitude in complexity, data requirements, and compute cost from what Microsoft demonstrated Wednesday. The models launched today are specialized — they handle audio and images, not the general reasoning and text generation that underpin products like ChatGPT or Copilot’s core intelligence. Suleyman has the organizational mandate, Nadella’s public backing, and the contractual freedom. What he doesn’t yet have is a track record at Microsoft of delivering on the hardest problem in AI.

But consider what he does have: three models that are best-in-class or near it in their respective domains, built by teams smaller than most seed-stage startups, running on half the industry-standard GPU footprint, and priced below every major cloud competitor. Two years ago, Suleyman proposed in MIT Technology Review what he called the “Modern Turing Test” — not whether AI could fool a human in conversation, but whether it could go out into the world and accomplish real economic tasks with minimal oversight. On Wednesday, his own models took a step toward that vision. The question now is whether Microsoft’s superintelligence team can repeat the trick at the scale that actually matters — and whether they can do it before the market’s patience runs out.


Tech

Amazon names AWS exec Prasad Kalyanaraman to S-team, promotes Dave Brown to SVP


Prasad Kalyanaraman, VP of AWS Infrastructure Services, has been named to Amazon’s senior leadership team. (Amazon Photo)

Amazon added a new member to its senior leadership team Wednesday, naming AWS infrastructure chief Prasad Kalyanaraman to the group known as the S-team or “steam,” while also promoting cloud computing and AI services leader Dave Brown to senior vice president.

CEO Andy Jassy announced the changes internally, according to a memo viewed by GeekWire, and the company updated its public list of S-team members to reflect the changes.

Kalyanaraman oversees AWS infrastructure, including data centers, networking, and supply chain. He has been with the company for more than 20 years, starting in Amazon’s fulfillment and supply chain operations before moving to the cloud division in 2012.

Jassy’s memo praised his “customer obsession, high standards, ability to be right often, delivery, and missionary approach (always focusing on what’s best for customers — and the company as a whole vs. just his own area),” alluding in part to Amazon’s leadership principles.

Dave Brown, newly promoted to senior vice president at Amazon, leads AWS EC2 and AI services including Bedrock and SageMaker. (Amazon Photo)

Brown leads AWS compute services (EC2) along with fast-growing AI services including Bedrock and SageMaker. He has been on the S-team since 2023, previously as a vice president.

“There are several reasons for his promotion, but chief among them are his outstanding delivery, propensity to look around corners and deliver services customers want, being right a lot, obsessing about customers, and continuing to develop strong teams,” Jassy wrote.

The addition of Kalyanaraman brings the S-team back up to 28 members. That’s still down from more than 30 when the last big round of additions was made in September 2023. 

In the meantime, the group has seen departures including Adam Selipsky as AWS CEO (replaced by Matt Garman); longtime devices chief Dave Limp (succeeded by former Microsoft executive Panos Panay); artificial intelligence leader Rohit Prasad; grocery head Tony Hoggett; and device software leader Rob Williams.

Here’s the full list as it stands now.


Tech

‘Simply by doing their daily work’: Meta tracks staff activity to teach AI how to replace them



  • Meta is recording employee clicks, keystrokes, and screen activity to train AI agents on real work behavior
  • The program is part of a broader push to build AI systems that can perform everyday tasks with minimal human input
  • The move comes just ahead of reports of layoffs at the company

Meta has begun collecting everything its employees do as they go about their normal work to train its AI models, as first reported by Reuters. The Model Capability Initiative records mouse movements and clicks, keyboard keystrokes, and even occasional screenshots from computers used by Meta employees in the U.S. The company wants to observe how people actually use software, then feed that behavior into AI models so they can learn to do the same things.

Meta essentially wants to make its systems more reliable for the small actions that define a workday. That means everything from navigating a menu and moving between windows to parsing different website formats. These aren’t easily solved with text data alone.


Tech

How Gut Bacteria May Affect The Outcome Of Cancer Immunotherapy


In the ongoing development of cancer immunotherapy, as well as our still-developing understanding of the human immune system, there’s always been a bit of a massive elephant in the room. The thing about human bodies is that they’re not just human cells, but also consist of trillions of bacteria that mostly live in the intestines. What effect these bacteria have on the immune system’s functioning, and from there on immunotherapies, was recently investigated by [Tariq A. Najar] et al., with an article published in Nature.

The relevant topic here is that of antigenic mimicry, involving microbial antigens that resemble self-antigens. Since these self-antigens are a crucial aspect of both autoimmune diseases and cancer immunotherapy, there is considerable room for interaction with their microbial mimics. Correspondingly, these mimics can have considerable negative as well as positive implications, ranging from potentially triggering an autoimmune condition to hindering or boosting cancer immunotherapy.

In this study, mice were used to investigate the effect of such microbial interference, focusing in particular on immune checkpoint blockade (ICB), which refers to negative feedback responses within the immune system that some cancers use to protect themselves. In some immunotherapy patients, ICB inhibition using, for example, anti-programmed cell death protein 1 (anti-PD-1) treatment fails to provoke a response for reasons that remain unclear.

For the study, mice had tumors implanted, and the effect of a particular gut microbe, segmented filamentous bacteria (SFB), was studied. The presence of SFB markedly improved the response to anti-PD-1 treatment, due to antigens expressed by the bacteria, despite the large gut-skin distance. Whether similar mechanisms play a similarly strong role in humans remains to be investigated, but it offers renewed hope that cancer immunotherapies like CAR T-cell immunotherapy will one day make cancer an easily curable condition.


Tech

Ecco the Dolphin: Complete will combine remasters and a sequel into one package


Last year, Ecco the Dolphin creator Ed Annunziata teased plans to remaster the first two games in the series and create an entirely new sequel. Ecco the Dolphin: Complete, announced by Annunziata’s studio A&R Atelier, appears to be the result of that work. The game doesn’t have a release date yet, but A&R Atelier says it combines the planned remasters and third title into “the complete, definitive Ecco the Dolphin experience, created by the people who made the originals.”

Complete includes “all versions of Ecco the Dolphin and Ecco: The Tides of Time,” according to the developer, alongside “a brand-new contemporary Ecco game.” Besides graphical improvements, A&R Atelier says the game will introduce “built-in speedrunning support, achievements and leaderboards,” and things like the ability to create custom courses from existing levels. And while A&R Atelier’s announcement doesn’t include footage of the new game or the platforms it’ll release on, the official Ecco the Dolphin website has a countdown clock that could point to when more information will be released.

Annunziata sued Sega to try to win the rights to the Ecco the Dolphin IP in 2013, the same year he failed to get The Big Blue, a spiritual sequel to Ecco the Dolphin, fully funded on Kickstarter. Sega and Annunziata ultimately settled their lawsuit in 2016, which may have laid the groundwork for Ecco the Dolphin: Complete to happen.


Tech

Ping-Pong Robot Makes History By Beating Top-Level Human Players


Sony AI’s autonomous table-tennis robot Ace has become the first robot to compete against top-level human players. Reuters reports: Ace, created by the Japanese company Sony’s AI research division, is the first robot to attain expert-level performance in a competitive physical sport, one that requires rapid decisions and precision execution, the project’s leader said. Ace did so by employing high-speed perception, AI-based control and a state-of-the-art robotic system. There have been various ping-pong-playing robots since 1983, but until now they were unable to rival highly skilled human competitors. Ace changed that with its performances against human elite-level and professional players in matches following the rules of the International Table Tennis Federation, the sport’s governing body, and officiated by licensed umpires.

The project’s goal was not only to compete at table tennis but to develop insights into how robots can perceive, plan and act with human-like speed and precision in dynamic environments. In matches detailed in the study, Ace in April 2025 won three out of five versus elite players and lost two matches against professional players, the top skill level in the sport. Sony AI said that since then Ace beat professional players in December 2025 and last month. “The success of Ace, with its perception system and learning-based control algorithm, suggests that similar techniques could be applied to other areas requiring fast, real-time control and human interaction — such as manufacturing and service robotics, as well as applications across sports, entertainment and safety-critical physical domains,” said Peter Durr, director of Sony AI Zurich and leader for Sony AI’s project Ace.

The findings have been published in the journal Nature.


Tech

Ultrahuman Launched the First Smart Ring Integration for Expert-Led Workouts


Health tech company Ultrahuman, makers of the Ultrahuman Ring Air and Ring Pro, launched a partnership with group workout brand Les Mills on Wednesday. Together, the companies created the Les Mills PowerPlug in the Ultrahuman app, which recommends workouts based on data collected by its smart rings, like sleep, recovery and cycle phase. 

Traditionally, when your smartwatch or ring tells you that your body is fatigued and that you should take it easy during your workout, it doesn’t provide the workout. With this new integration, the Les Mills PowerPlug offers expert-led, on-demand workout videos that take your current health status into account and help prevent overtraining.

“With Les Mills, we’re closing the loop — your ring doesn’t just tell you how recovered you are, it tells you what to do about it. The right workout, at the right intensity, every day. That’s what training smarter actually looks like,” Mohit Kumar, CEO of Ultrahuman, said in a press release.

How the PowerPlug works

Upon downloading the Les Mills PowerPlug, Ultrahuman Ring users will be asked to choose their ideal training days, session length and a fitness goal from the following: cardio, strength, flexibility or general fitness. Going forward, the app’s home screen will then recommend two to three daily workouts based on your health data, along with a quick workout shortcut. 

You’ll also have access to Les Mills’s entire workout catalog, which you can sort by goal, program or duration. Yoga, strength, HIIT and stretching are just a few examples of the type of exercises you can perform.

Phone screens over a white background showing Les Mills workouts in the Ultrahuman app.

If you have accumulated sleep debt and your body is showing signs of fatigue, the Les Mills PowerPlug will likely suggest a recovery-forward yoga session.

Ultrahuman x Les Mills

To select your workout recommendation, Ultrahuman uses its Dynamic Recovery score, a value from zero to 100 that indicates how prepared your body is to take on the day. It takes into account your sleep, temperature, stress rhythm, resting heart rate and heart rate variability, and can change throughout the day with movement, naps and non-sleep deep rest like breathwork.
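Ultrahuman hasn’t published the formula, but readiness scores of this kind are typically a weighted blend of normalized signals. A purely illustrative sketch — the weights, input names, and normalization here are all assumptions, not Ultrahuman’s actual model:

```python
def recovery_score(sleep: float, resting_hr: float, hrv: float,
                   temperature: float, stress: float) -> int:
    """Illustrative readiness score. Each input is pre-normalized to
    0.0-1.0, where 1.0 means fully recovered on that signal.
    Weights are invented for illustration only."""
    weights = {
        "sleep": 0.35, "resting_hr": 0.20, "hrv": 0.20,
        "temperature": 0.10, "stress": 0.15,
    }
    inputs = {"sleep": sleep, "resting_hr": resting_hr, "hrv": hrv,
              "temperature": temperature, "stress": stress}
    # Clamp each signal into [0, 1], then take the weighted sum.
    score = sum(w * max(0.0, min(1.0, inputs[k])) for k, w in weights.items())
    return round(score * 100)

# Good sleep but elevated stress pulls the score below 100:
print(recovery_score(sleep=0.9, resting_hr=0.8, hrv=0.7,
                     temperature=1.0, stress=0.6))
```

The real score also shifts intraday with movement and naps, which a static blend like this doesn’t capture.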

The Les Mills PowerPlug will also adapt its selections based on a user’s menstrual cycle. If they’re in a phase with more energy, such as the follicular or ovulatory phases, they’ll be advised to try a more intense workout. Low-energy luteal and menstrual phases will correlate with workouts that prioritize recovery, like yoga. During menstruation, high-impact workouts that are tough on the pelvic floor will be avoided. 

Once you complete your workout, you can then view your workout stats (duration, heart rate zones and calories), movement score, muscle group radar chart, daily goal progress and a post-workout recovery prediction that estimates your readiness for the next day.

The Les Mills PowerPlug price

Global Ultrahuman Ring Air and Ring Pro users can now purchase the Les Mills PowerPlug for $12 per month or $100 per year. 

Due to a patent lawsuit with Oura, makers of the Oura Ring, the Ultrahuman Ring Air was previously banned in the US. However, in March, Ultrahuman launched its Ring Pro, which the US Customs and Border Protection approved for sale in the US. It is currently available for preorder and will start shipping on May 15. With a charging case, it costs $479.


Tech

Opera Callas Diva Special Edition Loudspeakers at AXPONA 2026: Understated Italian Design That Doesn’t Care If You Notice


Italian loudspeakers tend to follow their own playbook, and the Opera Callas Diva Special Edition, distributed in the U.S. by Fidelity Imports, leans into that identity without apology. Priced at $13,999, this is a reflex-loaded, floor-standing design with a rear-firing dipole radiation system, built around the kind of materials and construction choices that set Italian brands apart: hand-crafted wood cabinetry, leather-clad baffles, and tank-like assembly that feels more atelier than assembly line.

Whether the leather actually changes the sound is still a matter of debate, but as with most things Italian, it’s as much about feel and intent as measurable outcome.

There’s also a clear voicing philosophy here. Like most offerings from Sonus faber and Opera, the goal isn’t clinical neutrality; it’s a more romantic, expressive presentation that leans into tone and texture. That doesn’t mean these speakers lack drama; if anything, they just deliver it with better timing and less shouting over Sunday gravy at Nonna’s house. Think Sophia Loren, not a reality TV meltdown—controlled, confident, and fully aware of the effect… the kind of presence that makes a room go quiet when she crosses her legs, looks your way, and lets you wonder if you’re worth the match.

Fidelity Imports is pushing Opera hard in the U.S. right now, and it’s not difficult to understand why. Paired with electronics from Unison Research, the system synergy is obvious—cohesive, deliberate, and unmistakably Italian. Bellissima, but not in a way that begs for attention. It just assumes you’re paying attention already.

Italian Engineering in a Tailored Suit, Not a Tracksuit

The Opera Callas Diva Special Edition is a reflex-loaded, floor-standing loudspeaker that combines a traditional forward-firing driver array with a rear-firing dipole tweeter system. It’s a hybrid approach that aims to balance direct sound with controlled rear radiation, adding spatial cues without turning the room into an echo chamber.

Up front, the speaker uses a single 8-inch long-throw woofer paired with a 7-inch midrange driver featuring a re-cooked polypropylene cone and phase plug. High frequencies are handled by a 1-inch Scan-Speak 9700 tweeter, notably run without ferrofluid and incorporating a double decompression chamber — choices that typically favor openness and low mechanical damping over sheer robustness.

Around back, Opera adds two 1-inch tweeters in what it describes as a “natural dipole” configuration. This rear array expands the soundstage by introducing ambient high-frequency energy, effectively making the system a 3-way plus rear-dipole design rather than a conventional forward-only speaker.

The crossover network is relatively straightforward, using 12 dB-per-octave slopes across all drivers (woofer, midrange, front tweeter, and rear tweeters), with crossover points at approximately 200 Hz and 2,000 Hz. This suggests a focus on phase coherence and smoother driver integration rather than aggressive filtering.

Frequency response is rated at 30 Hz to 25 kHz, covering full-range playback without immediate reliance on a subwoofer. Sensitivity is specified at 90 dB (2.83V at 1 meter), making the speaker reasonably amplifier-friendly, though the 4-ohm nominal impedance with a minimum above 3.2 ohms means it will benefit from stable current delivery.
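Those figures can be sanity-checked with a little math: 2.83 V into the 4-ohm nominal load dissipates about 2 watts (P = V²/R), so the 90 dB rating corresponds to roughly 87 dB at 1 watt, and output grows by 10·log10 of the power ratio. A quick idealized sketch, ignoring power compression and room effects:

```python
import math

SENSITIVITY_DB = 90.0  # dB SPL at 2.83 V, 1 m, per the spec sheet
NOMINAL_OHMS = 4.0

# 2.83 V into 4 ohms dissipates about 2 W (P = V^2 / R).
ref_watts = 2.83 ** 2 / NOMINAL_OHMS

def spl_at(watts: float) -> float:
    """Idealized on-axis SPL at 1 m for a given amplifier power."""
    return SENSITIVITY_DB + 10 * math.log10(watts / ref_watts)

print(round(ref_watts, 2))    # reference power behind the 90 dB rating, ~2 W
print(round(spl_at(240), 1))  # theoretical SPL at the rated 240 W
```

The takeaway: the rated 240 watts buys roughly 21 dB of headroom over the 2-watt reference level, before any real-world losses.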

Power handling is listed at 240 watts without clipping, and placement guidelines recommend at least 10 cm (about 4 inches) from the rear wall, which is modest considering the inclusion of rear-firing drivers.

Physically, the Callas Diva Special Edition is substantial: 116 x 37 x 53.5 cm (H x W x D), approximately 45.7 x 14.6 x 21.1 inches, and each speaker weighs 65 kg, about 143 pounds, including its metal base. This is not a lightweight cabinet, so think carefully about which relative still has the energy to help you move it after sausage and peppers. And don’t forget the cannoli. Marone!

Italian Soul, British Precision, No Passport Required

Fidelity Imports had a lot of rooms at AXPONA. Enough that you start making choices. I only had time for a few. This one and the Ruark Audio room were the ones that actually made me stop, close my eyes and listen, and silently wish that I didn’t have 30 more rooms to cover on the next two floors.

Part of it was the system; Opera speakers, Unison Research electronics, and the new Michell Gyro Turntable spinning records like it knew that a certain American competitor was MIA and that this was its moment to make everyone take notice.


But it was also the reaction. People didn’t just walk in and walk out. They slowed down. Took a step closer. Leaned in to look at the front baffle, then drifted over to the turntable like it might tell them something if they got close enough. Weird that. Especially because it happened more than a few times.

Nobody rushed. Nobody talked too loud. That’s usually a sign. People stood along the back of the room and listened.

I wasn’t the only one who noticed. And in a show full of rooms fighting for attention, this one didn’t have to. Steve Jain needs to make this set-up a permanent hi-fi show experience.

Michell Gyro Turntable with Unison Research Unico PRE V2 and Unico DM V2 power amplifier at AXPONA 2026

The room was driven by the Unison Research Unico PRE V2 and Unico DM V2 power amplifier. Together, they retail for $18,498 USD. That’s not inexpensive, but in the context of AXPONA, it sits well below many of the larger systems on display.


The Unico DM V2 is a high power, dual mono hybrid design using Unison Research’s A.S.H.A. Class A-AB output stage. The emphasis is on current delivery and stability into more demanding loudspeaker loads rather than chasing extreme specifications.

The Unico PRE V2 is a fully balanced preamplifier with a tube-based input stage. It includes a well-equipped MM/MC phono stage with selectable gain and loading, making it a viable option for vinyl playback without requiring an external phono stage.

There is no built in streaming platform or Bluetooth support. That appears to be a deliberate choice, leaving digital source selection to external components.

The PRE V2 does include an internal DAC based on the Sabre ES9018K2M converter. It uses a balanced output stage designed to integrate with the tube input section, with the goal of maintaining consistent tonal balance between digital and analog inputs.


Digital connectivity includes USB-B, two S/PDIF, and two optical inputs. USB supports PCM up to 384 kHz, native DSD up to DSD256, and DoP up to DSD128. The S/PDIF and optical inputs support resolutions up to 192 kHz.

The Unico DM V2 is rated at 220 watts into 8 ohms and 340 watts into 4 ohms in stereo operation, with stability down to 2 ohms. In bridged mono configuration, it delivers 650 watts into both 8 ohm and 4 ohm loads.
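Those ratings illustrate why "stability down to 2 ohms" and current delivery matter: at a given power, halving the load impedance roughly doubles the current demand. A quick sketch of the RMS voltage and current implied by each rating, treating the loads as idealized resistors:

```python
import math

def drive_requirements(watts: float, ohms: float):
    """RMS output voltage and current implied by a power rating
    into an idealized resistive load (real speakers are reactive)."""
    volts = math.sqrt(watts * ohms)
    amps = math.sqrt(watts / ohms)
    return volts, amps

# The DM V2's published stereo ratings
for w, r in [(220, 8), (340, 4)]:
    v, i = drive_requirements(w, r)
    print(f"{w} W into {r} ohms -> {v:.1f} V RMS, {i:.1f} A RMS")
```

The voltage barely changes between the two loads, but the current demand nearly doubles, which is exactly the behavior a 4-ohm speaker like the Callas Diva asks of an amplifier.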


My biggest takeaway from this room? Synergy matters. A lot.


Having spent time with and reviewed some of Unison Research’s tube amplifiers, the new pairing has a lot more palle, but it doesn’t trade away the qualities that made those designs stand out. The tonal balance, clarity, and sense of flow are still intact. It just brings more control and authority when the music asks for it.

Unison deserves your attention. So do these Opera loudspeakers. They’re expressive without being aggressive. They don’t grab your Members Only jacket and threaten you with brute force. They take a different approach and pull you in, keep you there, and let the music do the work.

There’s something to that. Not everything needs to hit you over the head to make its point.

More info at: operaloudspeakers.com | unisonresearch.com | michellaudio.com


Tech

Kioxia says its new QLC SSDs can match TLC performance at lower cost

Kioxia has announced the EG7 series of solid-state drives, the company’s first SSD line built around its quadruple-level cell (QLC) technology, branded as BiCS FLASH. Despite using QLC NAND, the new SSDs are said to deliver performance comparable to TLC-based drives – at least according to Kioxia. However, the EG7…

Tech

OpenAI taps Airbnb exec as first EMEA managing director


Marill previously led the financial services and automotive industries for Facebook and Instagram.

OpenAI has hired Airbnb’s former European, Middle East and Africa (EMEA) lead Emmanuel Marill as its first EMEA managing director, as the company continues to expand globally with an initial public offering in sight.

Marill, who led Airbnb’s EMEA, Australia and New Zealand operations for a number of years, also previously worked at Meta as its financial services and automotive industry lead for Facebook and Instagram.

Marill’s appointment reflects “strong momentum in EMEA”, OpenAI said. He will be based in Paris and report to chief strategy officer Jason Kwon. He will also collaborate with OpenAI’s EU headquarters in Dublin, which currently has around 80 employees.


“As demand for ChatGPT and Codex continues to grow rapidly all over the world, we are investing significantly in our international leadership and operations”, Kwon said.

Marill added: “There’s real momentum across EMEA, with many countries here leading globally in adoption of AI.”

OpenAI has had a tougher time in Europe than in the US. The company faces increasing regulatory pressures from EU officials, as well as resistance from businesses over digital sovereignty.

However, the company said that weekly active ChatGPT users in the EMEA region have grown by 70pc since last year, with Germany, France, the UK and Spain among its top markets for ChatGPT and Codex.


On its home turf in the US, OpenAI faces an even bigger challenge from the likes of Anthropic, which is fast encroaching on the company’s clientele.

Anthropic appointed long-time technology executive Pip White as the head of its UK, Ireland, Northern Europe and Israeli operations last November.

OpenAI reportedly plans to double its headcount by the end of the year. Late last year, Bloomberg reported that OpenAI and its rival Anthropic were both looking to expand their office footprints in their Dublin headquarters.

Meanwhile, OpenAI also announced its first permanent London office, opening next year with capacity for more than 500 people. The announcement came despite an indefinite hold on the company’s plans for a Stargate UK over energy costs and regulatory burden.


Tech

This AI Robot Flips a Thousand-Euro Coin Fifty Thousand Times in a Row


AI Robot Flip Coin
Makers are constantly coming up with innovative methods to transform everyday objects into automated displays, and Terence Grover has just added another to the list. His robot flips a rare €1000 Monaco commemorative coin repeatedly, with a live audience cheering and calling the shots the entire time. People watching his YouTube livestream give commands to choose when to flip the coin next, and the robot just does its job without troubling its inventor.



Grover began with a relatively bare-bones setup, using only a bit of cardboard, a basic solenoid, and a 9V battery to get the ball rolling. The rudimentary prototype showed promise, but the coin kept landing off-center and rarely survived more than a flip or two. He then upgraded to a proper design, 3D-printing a sturdy tray and adding a set of blades arranged like a camera’s aperture. After each landing, a new servo motor closes those blades, nudging the coin back to the same spot above the solenoid so the next flip can begin right away.


The power comes from a 12V source, which delivers a fast burst to the solenoid, propelling the coin skyward in a crisp arc. Once it returns to the tray, a small security camera overhead captures a clear 2,000-pixel image. Grover chose the cheap Amazon camera after the original Raspberry Pi camera proved too weak for long sessions under changing lighting conditions.


Inside the Raspberry Pi, a Python script sits waiting for that picture to appear. OpenCV converts the image to grayscale to find the coin’s contour, then a machine learning model, trained on over 400 photographs of the coin that Grover personally labeled, determines which side is visible: heads or tails. The model runs locally via TensorFlow Lite, so the verdict comes back in a flash and is immediately visible to everyone on the livestream.
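The detection step boils down to: threshold the grayscale frame, locate the coin's blob, and crop it for the classifier. Grover hasn't published his code, so here is a simplified NumPy stand-in for just the localization step (the real pipeline uses OpenCV's contour finding and a TensorFlow Lite model):

```python
import numpy as np

def locate_coin(gray: np.ndarray, thresh: int = 128):
    """Bounding box (x, y, w, h) of the bright blob in a grayscale frame --
    a toy stand-in for the OpenCV threshold + contour step.
    Returns None if no coin is visible (e.g. it rolled off the tray)."""
    mask = gray > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# Synthetic test frame: dark background with a bright "coin" region
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:60, 30:50] = 200
print(locate_coin(frame))  # (30, 40, 20, 20)
```

The cropped region would then be resized to the classifier's input shape and fed through the TensorFlow Lite interpreter, whose two output scores map to heads and tails.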

Grover set up a simple web server on the same Raspberry Pi so viewers could trigger flips through a clean interface or simply by typing commands in YouTube chat. A relay board switches the solenoid’s 12V circuit on command, keeping the higher-power side isolated from the low-power electronics. Grover adjusted every timing variable, down to fractions of a second, until everything felt smooth.
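With two trigger sources (the web interface and YouTube chat) feeding one solenoid, a natural structure is to funnel every request through a queue that a single worker drains one flip at a time, so the relay never fires while a coin is still in the air. This is a hypothetical sketch, not Grover's actual code, and `pulse_relay` is a stub for the real GPIO relay call:

```python
import queue
import threading

# One shared queue: the web handler and the chat poller both feed it
flip_requests: "queue.Queue[str]" = queue.Queue()

def request_flip(source: str) -> None:
    """Called from the web interface or the YouTube chat listener."""
    flip_requests.put(source)

def flip_worker(pulse_relay, stop: threading.Event, log: list) -> None:
    """Serialize flips: one relay pulse per queued request, in order."""
    while not stop.is_set():
        try:
            source = flip_requests.get(timeout=0.1)
        except queue.Empty:
            continue
        pulse_relay()       # energize the relay channel driving the solenoid
        log.append(source)  # record who triggered this flip
        flip_requests.task_done()
```

To use it, start `flip_worker` in a daemon thread with the real relay function, then call `request_flip` from each input handler; `flip_requests.join()` blocks until all pending flips have fired.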

Grover was most concerned with long-term reliability, since he’d experienced a couple of jams early on when the coin landed on its edge or slid too far to one side. Every tiny adjustment to the tray and blade tension reduced the number of times it froze up, eventually allowing it to run for hours without human intervention. During a particularly long livestream session in July 2024, the machine flipped the coin 50,000 times in a row without any assistance.
