Intercom is taking an unusual gamble for a legacy software company: building its own AI model.
The 15-year-old, Dublin-based customer service platform announced Fin Apex 1.0 on Thursday, a small, purpose-built AI model that the company claims outperforms leading frontier models from OpenAI and Anthropic on the metrics that matter most for customer support.
According to benchmarks shared with VentureBeat, Fin Apex 1.0 achieves a 73.1% resolution rate—the percentage of customer issues fully resolved without human intervention—compared to 71.1% for both GPT-5.4 and Claude Opus 4.5, and 69.6% for Claude Sonnet 4.6. That roughly 2 percentage point margin may sound modest, but it’s wider than the typical gap between successive generations of frontier models.
Fin Apex 1.0 select benchmarks comparison chart. Credit: Intercom
“If you’re running large service operations at scale and you’ve got 10 million customers or a billion dollars in revenue, a delta of 2% or 3% is a really large amount of customers and interactions and revenue,” Intercom CEO Eoghan McCabe told VentureBeat in a video call interview earlier this week.
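The arithmetic behind McCabe's point is easy to check. A quick back-of-envelope sketch in Python, where the interaction volume is an illustrative assumption (not an Intercom figure) and the resolution rates are the benchmark numbers quoted above:

```python
# What a 2-point resolution-rate delta means at scale.
# The volume is a hypothetical assumption; the rates are from the benchmarks above.
monthly_interactions = 10_000_000      # illustrative support volume
rate_frontier = 0.711                  # GPT-5.4 / Claude Opus 4.5
rate_apex = 0.731                      # Fin Apex 1.0

# Each extra point of resolution is an issue closed without a human agent.
extra_resolved = round(monthly_interactions * (rate_apex - rate_frontier))
print(extra_resolved)  # 200000 additional issues resolved per period
```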
The model also shows significant improvements in speed and accuracy. Fin Apex delivers responses in 3.7 seconds—0.6 seconds faster than the next-fastest competitor—and demonstrates a 65% reduction in hallucinations compared to Claude Sonnet 4.6.
Perhaps most striking for enterprise buyers: it runs at roughly one-fifth the cost of using frontier models directly, and is included in Intercom’s existing per-outcome pricing structure at no extra cost for current customer plans.
What’s the base model? Does it even matter?
But there’s a catch. When asked to specify which base model Apex was built on—and its parameter size—Intercom declined.
“We’re not sharing the base model we used for Apex 1.0—for competitive reasons and also because we plan to switch base models over time,” a company spokesperson told VentureBeat. The company would only confirm that the model is “in the size of hundreds of millions of parameters.”
That’s a notably small model. For comparison, Meta’s Llama 3.1 ranges from 8 billion to 405 billion parameters; even efficient open-weights models like Mistral 7B dwarf the sub-billion scale Intercom describes.
Whether Apex’s performance claims hold up against that context—or whether the benchmarks reflect optimizations possible only in narrow, domain-specific applications—remains an open question.
Intercom says it learned from the backlash AI coding startup Cursor faced when critics accused the coding assistant of burying the fact that its Composer 2 model was built on fine-tuned open-weights models rather than proprietary technology. But the lesson Intercom drew may not satisfy skeptics: the company is transparent that it used an open-weights base, just not which one.
“We are very transparent that we have” used an open-weights model, the spokesperson said. Yet declining to name the model while claiming transparency is a contradiction that will likely draw scrutiny—particularly as more companies tout “proprietary” AI that amounts to post-trained open-source foundations.
Post-training as the new frontier
Intercom’s argument is that the base model simply doesn’t matter much anymore.
“Pre-training is kind of a commodity now,” McCabe said. “The frontier, if you will, is actually in post-training. Post-training is the hard part. You need proprietary data. You need proprietary sources of truth.”
The company post-trained its chosen foundation using years of proprietary customer service data accumulated through Fin, which now resolves 2 million customer queries per week. That process involved more than just feeding transcripts into a model. Intercom built reinforcement learning systems grounded in real resolution outcomes, teaching the model what successful customer service actually looks like—the appropriate tone, judgment calls, conversational structure, and critically, how to recognize when an issue is truly resolved versus when a customer is still frustrated.
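Intercom has not published its training setup, so the following is only a hypothetical sketch of what "rewards grounded in resolution outcomes" could look like; all field names and weights below are invented for illustration, not Intercom's.

```python
def resolution_reward(conversation: dict) -> float:
    """Assign a scalar reward to a finished support conversation using
    outcome signals of the kind described above. Field names and weights
    are illustrative assumptions, not Intercom's actual scheme."""
    reward = 0.0
    if conversation["resolved"]:
        reward += 1.0   # issue fully closed without human intervention
    if conversation["escalated_to_human"]:
        reward -= 0.5   # handoff counts against the policy
    if conversation["customer_still_frustrated"]:
        reward -= 0.7   # "resolved" in name only: penalize it

    return reward

# A clean resolution earns the full reward.
print(resolution_reward({"resolved": True,
                         "escalated_to_human": False,
                         "customer_still_frustrated": False}))  # 1.0
```

Rewards like these would then drive a policy-optimization update (in the style of RLHF), pushing the model toward behaviour that actually closes issues rather than merely sounding helpful.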
“The generic models are trained on generic data on the internet. The specific models are trained on hyper-specific domain data,” McCabe explained. “It stands to reason therefore that the intelligence of the generic models is generic, and the intelligence of the specific models is domain-specific and therefore operates in a far superior way for that use case.”
If McCabe is right that the magic is entirely in post-training, the reluctance to name the base becomes harder to justify. If the foundation is truly interchangeable, what competitive advantage does secrecy protect?
A $100 million bet paying off
The announcement comes as Intercom’s AI-first pivot appears to be working. Fin is approaching $100 million in annual recurring revenue and growing at 3.5x, making it the fastest-growing segment of the company’s $400 million ARR business. Fin is projected to represent half of Intercom’s total revenue early next year.
That trajectory represents a remarkable turnaround. When Fin launched, its resolution rate was just 23%. Today it averages 67% across customers, with some large enterprise deployments seeing rates as high as 75%.
To make this happen, Intercom grew its AI team from roughly 6 researchers to 60 over the past three years—a significant investment for a company that McCabe admits was “in a really bad place” before its AI pivot. The average growth rate for public software companies sits around 11%; Intercom expects to hit 37% growth this year.
“We’re by far the first in the category to train our own model,” McCabe said. “There’s no one else that’s going to have this for a year or more.”
The speciation and specialization of AI
McCabe’s thesis aligns with a broader trend that Andrej Karpathy, former AI leader at Tesla and OpenAI, recently described as the “speciation” of AI models—a proliferation of specialized systems optimized for narrow tasks rather than general intelligence.
Customer service, McCabe argues, is uniquely suited for this approach. It’s one of only two or three enterprise AI use cases that have found genuine economic traction so far, alongside coding assistants and potentially legal AI. That’s attracted over a billion dollars in venture funding to competitors like Decagon and Sierra—and made the space, in McCabe’s words, “ruthlessly competitive.”
The question is whether domain-specific models represent a durable advantage or a temporary arbitrage that frontier labs will eventually close. McCabe believes the labs face structural limitations.
“Maybe the future is that Anthropic has a big offering of many different specialized models. Maybe that’s what it looks like,” he said. “But the reality is that I don’t think the generic models are going to be able to keep up with the domain-specific models right now.”
Beyond efficiency to experience
Early enterprise AI adoption focused heavily on cost reduction—replacing expensive human agents with cheaper automated ones. But McCabe sees the conversation shifting toward experience quality.
“Originally it was like, ‘Holy shit, we can actually do this for so much cheaper.’ And now they’re thinking, ‘Wait, no, we can give customers a far better experience,’” he said.
The vision extends beyond simple query resolution. McCabe imagines AI agents that function as consultants—a shoe retailer’s bot that doesn’t just answer shipping questions but offers styling advice and shows customers how different options might look on them.
“Customer service has always been pretty shit,” McCabe said bluntly. “Even the very best brands, you’re left waiting on a call, you’re bounced around different departments. There’s an opportunity now to provide truly perfect customer experience.”
Pricing and availability
For existing Fin customers, the upgrade to Apex comes at no additional cost. Intercom confirmed that customer pricing remains unchanged—users continue to pay per outcome as before, at $0.99 per resolved interaction, and automatically benefit from the new model.
Apex is not available as a standalone model or through an external API. It is accessible only through Fin, meaning businesses cannot license the model independently or integrate it into their own products. That constraint may limit Intercom’s ability to monetize the model beyond its existing customer base—but it also keeps the technology proprietary in a practical sense, regardless of what the underlying base model turns out to be.
What’s next
Intercom plans to expand Fin beyond customer service into sales and marketing—positioning it as a direct competitor to Salesforce’s Agentforce vision, which aims to provide AI agents across the customer lifecycle.
For the broader SaaS industry, Intercom’s move raises uncomfortable questions. If a 15-year-old customer service company can build a model that outperforms OpenAI and Anthropic in its domain, what does that mean for vendors still relying on generic API calls? And if “post-training is the new frontier,” as McCabe insists, will companies claiming breakthroughs face pressure to show their work—or continue hiding behind competitive secrecy while touting transparency?
McCabe’s answer to the first question, laid out in a recent LinkedIn post, is stark: “If you can’t become an agent company, your CRUD app business has a diminishing future.”
The breakthrough addresses a critical structural failure known as delamination, where layers in fiber-reinforced polymer (FRP) materials begin to separate over time. The new composite looks similar to traditional FRPs but is designed to be tougher, making it less prone to cracking or breaking.
In short: Roblox is upgrading its built-in AI assistant with agentic capabilities including a planning mode that analyses game code before proposing action plans, procedural 3D model generation, mesh generation, and self-correcting loops that test and refine outputs. The update also adds MCP client integration with third-party tools like Claude and Cursor, with a roadmap toward multi-agent parallel workflows in the cloud.
Roblox is upgrading its built-in AI assistant with agentic capabilities that let it plan, build, and test games rather than just answer questions about how to make them. The update adds a planning mode that analyses a game’s code and data model before proposing action plans, procedural model generation that creates editable 3D objects through prompts, and a self-correcting loop that lets the assistant test its own work and incorporate the results into future iterations.
The changes turn Roblox Assistant from a code-suggestion tool into something closer to a junior development partner: one that can examine an existing project, ask clarifying questions, propose an approach, execute it, test the results, and refine its work based on what it finds. For a platform whose 380 million monthly active users include a vast number of creators with limited programming experience, the implications are significant.
What the new tools do
Planning Mode transforms the assistant into a collaborative planner. Rather than responding to individual prompts with code snippets, it analyses a game’s existing codebase and data model, asks the developer clarifying questions about what they want to achieve, and translates the conversation into an editable action plan. The developer can review, modify, and approve the plan before the assistant begins implementation. It is the difference between asking an AI to write a function and asking it to design an approach to a problem.
Procedural Models, coming soon, will let developers create 3D objects that are defined by code rather than static meshes. A developer can prompt the assistant to generate a bookcase and then adjust its attributes (the number of shelves, the height, the material) through parameters rather than manual modelling. The objects understand physical relationships: a staircase knows how its steps relate to its height, and a table knows that its legs support its surface. This is not generative art; it is parametric design driven by natural language.
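Roblox has not published the API behind Procedural Models, but the parametric idea itself is easy to illustrate. A hypothetical sketch in Python (the class, its fields, and the even-spacing rule are all invented for this example, not Roblox code):

```python
from dataclasses import dataclass

@dataclass
class Bookcase:
    """A parametric object: geometry is derived from parameters rather
    than stored as a static mesh. All names here are illustrative."""
    height: float = 2.0       # metres
    width: float = 0.9
    shelf_count: int = 4
    material: str = "oak"

    def shelf_positions(self) -> list[float]:
        # Shelves are evenly spaced; change shelf_count or height and
        # the geometry recomputes itself. The physical relationship is
        # encoded in code, not baked into a mesh.
        gap = self.height / (self.shelf_count + 1)
        return [round(gap * i, 3) for i in range(1, self.shelf_count + 1)]

b = Bookcase(height=2.0, shelf_count=4)
print(b.shelf_positions())  # [0.4, 0.8, 1.2, 1.6]
```

The natural-language layer the article describes would sit on top of something like this: the prompt "make it taller with six shelves" becomes a change to two parameters, and the derived geometry follows.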
Mesh Generation adds the ability to place fully textured 3D objects directly into a game world through prompts, building on Roblox’s Cube foundation model. The company introduced 4D generation in February 2026, powered by Cube, which adds an interactivity dimension to generated objects so they behave correctly in-game rather than sitting as static props. More than 160,000 objects were generated during early access, and Roblox says players using 4D generation showed a 64% increase in play time on average.
The agentic loop
The most consequential change is the self-correcting system. The assistant can now test different aspects of a game, identify problems, surface suggested solutions, and feed those results back into its planning process. This creates what Roblox describes as agentic loops: cycles of planning, execution, testing, and refinement that the AI performs with decreasing human intervention over time.
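Roblox has not detailed the internals, but the plan-execute-test-refine cycle it describes can be sketched generically. Every function below is an injected placeholder standing in for the assistant's real capabilities, not a Roblox API:

```python
def agentic_loop(goal, plan, execute, test, refine, max_iterations=5):
    """Generic plan/execute/test/refine cycle of the kind described above.
    plan, execute, test, and refine are caller-supplied callables."""
    current_plan = plan(goal)
    result = None
    for _ in range(max_iterations):
        result = execute(current_plan)
        problems = test(result)              # e.g. failing play-tests
        if not problems:
            return result                    # converged: nothing surfaced
        current_plan = refine(current_plan, problems)
    return result                            # best effort after budget spent

# Toy usage: "refine" nudges a value toward a target until the test passes.
out = agentic_loop(
    goal=10,
    plan=lambda g: {"target": g, "value": 0},
    execute=lambda p: dict(p),
    test=lambda r: [] if r["value"] == r["target"] else ["off target"],
    refine=lambda p, _: {**p, "value": p["value"] + 5},
)
print(out["value"])  # 10
```

The "decreasing human intervention over time" the company describes corresponds to raising the iteration budget and widening what `test` can check without a person in the loop.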
The roadmap extends this further. Roblox is working on enabling multiple AI agents to work together in parallel, running long and complex workflows in the cloud rather than within the constraints of a local Studio session. The company is also building integration with third-party tools, including Claude, Cursor, and Codex, and has added a built-in MCP client to Roblox Studio’s assistant, letting it connect to external AI services through the Model Context Protocol standard.
The long-term vision, which Roblox has been articulating since it open-sourced the Cube foundation model in March 2025, is that a developer should be able to describe a game in natural language and have AI generate the assets, environments, code, animations, and interactive behaviour to make it real. The agentic tools announced today are incremental steps toward that goal, but they represent a meaningful shift from AI as autocomplete to AI as collaborator.
The vibe-coding parallel
Roblox’s update arrives in the middle of a broader shift in how software is made. Vibe coding, the practice of describing what you want in natural language and letting AI generate the code, drove an 84% jump in App Store submissions earlier this year and prompted Apple to crack down on low-quality AI-generated apps. The same dynamic is playing out in game creation, where the barrier to building something playable is dropping rapidly.
For Roblox, this is both an opportunity and a quality problem. More creators making more games drives engagement on the platform, but only if those games are worth playing. The planning mode and self-correcting loops are, in part, a response to this tension: they are designed to produce better outputs than a single-shot prompt, guiding creators through a structured process rather than letting them generate and publish whatever the AI produces on the first try.
Third-party AI tools for Roblox game creation have already emerged, including Lemonade, SuperbulletAI, and BloxBot. By building agentic capabilities directly into Roblox Studio, the company is trying to ensure that the primary creation experience remains on its own platform rather than fragmenting across external tools that it does not control.
The business context
Roblox’s investment in AI creation tools is backed by strong commercial momentum. The company’s daily active users reached 144 million in Q4 2025, up from 85 million a year earlier. Monthly active users grew from 280 million to 380 million through the year. Full-year 2025 revenue was $4.9 billion, a 36% increase, with 2026 guidance projecting $6 to $6.2 billion. Total Robux purchases reached $6.79 billion in 2025.
These numbers matter because they determine how much Roblox can invest in AI infrastructure and how large the creator ecosystem is that benefits from better tools. A platform with 380 million monthly users and nearly $5 billion in revenue can afford to build foundation models, train agentic systems, and absorb the compute costs of running AI-assisted game creation at scale. Smaller platforms cannot, which means AI creation tools become a competitive moat rather than just a feature.
The Roblox Developers Conference, scheduled for September in San Jose, will likely showcase the next stage of this roadmap. For now, the agentic assistant update positions Roblox as one of the first major platforms to move beyond AI-assisted coding into AI-assisted product development, where the AI does not just write code but plans, builds, tests, and improves what it creates. Whether that produces better games or just more of them is the question that the next year of Roblox development will answer.
Elizabeta Gjorgievska Joshevski’s career spans multiple continents and leadership roles, yet her current focus reflects a consistent theme: understanding how technology translates into business outcomes. As founder and CEO of EverCognitive, she brings that perspective into an AI landscape where many organizations are still defining their AI transformation and how to apply it in practical ways.
Originally from North Macedonia, Elizabeta began her career in programming and technology education, where she developed both technical expertise and early leadership experience. According to her, those early roles were shaped by building relationships and expanding capabilities, which eventually connected her to global technology ecosystems. That period, she explains, formed her understanding of how organizations adopt technology beyond theory and into real operations.
Her move into Cisco marked a turning point, opening the door to a global career spanning 19 years that would take her from the Balkans to Vienna and later to Dubai to lead multicultural EMEA teams. From her perspective, progression came through exposure to different markets and business challenges rather than a predefined trajectory. She notes that working across regions required constant adaptation, particularly in aligning customer needs with evolving technology strategies.
In Vienna, she led telecom operations across Eastern Europe, managing teams that combined sales leadership with technical execution. She frames this phase as one that demanded both operational discipline and strategic clarity. Elizabeta credits servant-leadership principles with helping her teams connect effectively with C-suite executives, an approach she says guided her global teams to deliver exceptional results.
Her relocation to Dubai reflected a deliberate choice. Elizabeta explains that Dubai offered a setting where opportunity is closely tied to capability, creating space to continue building her leadership profile. She notes that the region’s pace of development and openness to innovation shaped how she approached large-scale initiatives and organizational transformation.
Over time, her roles expanded to include overseeing enterprise-level operations across EMEA, where she was responsible for a business portfolio. She explains that this level of responsibility requires coordination across global teams and alignment with product and strategy functions. According to Elizabeta, the experience provided her the opportunity to shape the decisions made on a corporate level to accelerate growth across the regions while managing execution complexity at scale.
A gradual shift in perspective began during a leadership program at Harvard between 2013 and 2014. Elizabeta explains this period as a moment of reflection, when she evaluated her career and experiences both professionally and personally, which planted the seed of her next chapter. “I knew my next challenge would be to take all of my experience and the immense knowledge I gained from my time at Harvard to build my own company,” she says.
That idea remained in the background and began to grow as artificial intelligence became front and center in every technology conversation. She explains that this was the catalyst for her stepping away from the corporate world. The transition allowed her to focus on understanding generative AI more deeply, including completing a program at MIT in 2024 and studying how organizations could apply AI to their ongoing digital transformations. According to Elizabeta, she recognized a gap between the rapid development of AI tools and the market’s readiness to implement them, which is where she saw her opportunity.
She states, “The starting point should always be the business outcome, to understand the client and how their AI ambition can convert into operational reality and growth.”
EverCognitive was built around that principle. The company operates as an AI transformation firm that works with organizations to assess organizational health, perform AI readiness audits, and select and build AI solutions mapped to the client’s business outcomes. Elizabeta explains that this includes executive advisory, organizational assessments, and frameworks designed to implement architecture that operators can execute.
Her approach reflects her experience working within large organizations. She notes that decisions around technology are often shaped by leadership alignment, organizational structure, and operational priorities. From her perspective, this is where many companies require guidance when approaching AI.
“I have spent years working with the companies that are now trying to leverage AI,” Elizabeta explains. “My deep understanding of the digital transformation journeys different vertical markets went through gives me the leverage to be able to accelerate their AI transformation with confidence and tangible business outcomes.”
Today, EverCognitive is engaging with organizations on AI leadership and strategy, focusing on translating technological potential into measurable outcomes. For Elizabeta, the emphasis remains on applying experience to a space that is still developing.
Elizabeta maintains that while AI will continue to evolve, the ability to guide its application will remain essential. Technology will continue to advance, but the way it is applied within organizations will determine its real impact over time.
She states, “The question is not if but when. And in the age of the AI revolution, those who adopt quickly and wisely won’t just survive; they will win.”
The 2026 GeekWire Awards Startup of the Year finalists, clockwise from top left: Grin Lord, CEO of mpathic; Edward Wu, Dropzone AI CEO; Loopr CEO Priyansha Bagaria; Dopl Technologies co-founders Wayne Monsky, Ryan James and Steve Seslar; and ElastixAI co-founders Saman Naderiparizi, Mohammad Rastegari, and Mahyar Najibi.
From making AI safer for kids in crisis to guiding robotic arms through remote ultrasounds, from sniffing out factory defects to slashing the cost of running large language models — the 2026 GeekWire Awards Startup of the Year finalists are building across a variety of frontiers in tech.
The finalists are: mpathic, ElastixAI, Dropzone AI, Dopl Technologies, and Loopr AI.
Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.
Last year’s winner was Auger, the startup that makes supply chain software that unifies data, targets inefficiencies and provides real-time insights and automation.
Continue reading for information on the Startup of the Year finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner: Cast your ballot here or in the embedded form at the bottom. Voting runs through today.
Mpathic is a Seattle startup building safety infrastructure for AI models that interact with vulnerable users, including children and people in mental health crises. The company helps foundational model developers and LLM-powered app teams stress-test model behavior, evaluate responses, and monitor live interactions with safeguards that can flag or intervene when AI-generated advice veers into dangerous territory.
Mpathic was co-founded in 2021 by CEO Grin Lord, a board-certified psychologist and NLP researcher, in a bid to bring more empathy to corporate communication. The company raised $15 million in 2025 and says its global network of thousands of licensed clinical experts is growing by hundreds weekly to keep up with demand. Mpathic is No. 188 on the GeekWire 200, a ranked index of the Pacific Northwest’s top startups.
ElastixAI is a Seattle startup building an AI inference platform designed to make running large language models faster, cheaper, and more flexible across edge devices and cloud deployments. The platform lets customers configure their inference infrastructure for specific use cases, and the company says it could serve everyone from hyperscalers to enterprises weaving AI into daily operations.
Dropzone AI is a Seattle startup building AI security agents that work alongside human analysts in security operations centers, handling repetitive tasks and investigating alerts. The company’s pre-trained agents use large language models to mimic the thought process of expert security analysts, helping teams keep pace with a growing volume of cybersecurity threats.
Dropzone AI was founded by CEO Edward Wu, who previously spent eight years at Seattle-based security company ExtraHop. The company raised $16.8 million in Series A funding a year ago.
Dopl Technologies is a Seattle startup using telerobotics to bring diagnostic exams and interventional procedures to underserved communities, particularly rural patients who would otherwise travel long distances to reach specialists. Its robotic ultrasound system can be controlled remotely by a sonographer in a different location, with advanced haptics and visual tools designed to give the operator a sense of touch — and AI assistance to optimize workflows.
Dopl was co-founded by CEO Ryan James, COO Steve Seslar, and chief medical officer Wayne Monsky, who began researching novel care delivery methods together at the University of Washington in 2017. The company, ranked No. 193 on the GeekWire 200, raised $1.5 million in a pre-seed round last year.
Loopr is a Seattle startup selling AI-powered computer vision software that helps manufacturers detect defects and quality issues in real time. Unlike legacy vision systems that require fixed cameras and custom installs, Loopr’s software is hardware-agnostic and can run on tablets, making it accessible across aerospace, automotive, and chemical manufacturing — where it is already working with 10 Fortune 1000 companies.
Loopr was founded in 2021 by CEO Priyansha Bagaria, who drew inspiration from building defect-detection software for her family’s manufacturing business in India. The company raised $5.4 million in a funding round last August.
The event will feature a VIP reception, sit-down dinner and fun entertainment mixed in. Tickets go fast. A limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.
Bluesky is once again having a wobble. The platform said some of its systems are down and that it’s “investigating an incident with service in one of our reginos” (that’s Bluesky’s typo, not mine). The issue appears to have started at 1:42AM ET and was still persisting as of 11AM when this story was originally published. Since then, the site has been experiencing intermittent interruptions, including at times to its status page, where users should be able to monitor outages.
At 7:47PM ET, the platform explained that it’s been attempting to mitigate “a sophisticated Distributed Denial-of-Service (DDoS) attack, which intensified throughout the day.” It said the attack had caused interruptions to users’ feeds, notifications, threads and search, all of which the Engadget team experienced first-hand at various points through the day. While DDoS attacks are frequently used as virtual smokescreens for hacks, Bluesky says it has “not seen any evidence of unauthorized access to private user data.” The social media service had another brief outage earlier this month.
The outage is ongoing, but due to its intermittent nature it’s more of a rolling blackout than a power outage. Bluesky says it will provide another update on the situation by 1PM ET on April 17.
Update, April 16, 8PM ET: This story was updated after publication with an explanation of the outage from Bluesky.
One of the genuine bright spots in my pre-Record Store Day inbox this year was news of a 1LP retrospective spotlighting Brian Wilson’s late-1990s comeback and the transformational musical run that carried him deep into the 2000s.
The good folks at Oglio Records kindly sent me a preview copy of the album, titled Brian Wilson On Tour 1999-2007. The single-LP collection offers a tasty overview of Brian’s live work from this period, including choice late-’60s Beach Boys nuggets, primo solo cuts and special cover tunes. Given the quality of Brian’s tremendous backing group at that time, there is a remarkable consistency of performance and sound quality on these recordings across the years.
The album opens with a rousing version of “This Could Be The Night,” a particularly special tune originally written by Harry Nilsson in tribute to Brian and eventually recorded by Wilson himself on the 1995 Nilsson tribute album For The Love Of Harry: Everybody Sings Nilsson. Do look up the fascinating back story about this song on the wiki.
Brian Wilson On Tour 1999-2007 offers a mini medley of opening songs from Wilson’s legendary 1966/2004 SMiLE album, which leads into a terrific version of “Heroes & Villains.” Three fan-favorite Beach Boys LP cuts from 1968’s Friends, including the title track, are also featured. If you saw any of the tours around this time, you know that when Brian performed “Marcella” — from the underappreciated 1972 Beach Boys LP Carl & The Passions — he took full ownership of the tune, turning it into a brilliant rocker only hinted at in the original.
The Beach Boys deep album cut “Drive In” from 1964’s All Summer Long is a special kick to hear performed live, with its decidedly humorous and slyly racy lyrics — apparently this song was one of Brian’s transformational early productions where the band’s sound first came together as he’d envisioned.
Heartstring-tugger “Melt Away” is one of my all-time faves from Wilson’s 1988 solo debut — such an incredible song, performed gorgeously. Brian delivers a genuinely rocking cover of the Chuck Berry classic “Johnny B. Goode” without sounding tired or clichéd. The album ends with a curiously upbeat pop arrangement of “She’s Leaving Home” from The Beatles’ Sgt. Pepper.
Sonics-wise, Brian Wilson On Tour 1999–2007 happily sounds really good start to finish despite its likely digital sourcing (hey, these are modern live concert recordings, folks). I was pleasantly surprised that the opaque marble vinyl is pretty nice: well centered and quiet. I did not notice any surface noise issues, which isn’t always the case with highly patterned color vinyl.
Fourteen great songs performed live by music legend Brian Wilson at the peak of his late-period renaissance adds up to a must-get album for Record Store Day. Only 2,000 copies of Brian Wilson On Tour 1999–2007 are being made, so get to your favorite vinyl shop early to grab your copy!
Mark Smotroff is a deep music enthusiast / collector who has also worked in entertainment-oriented marketing communications for decades, supporting the likes of DTS, Sega and many others. He reviews vinyl for Analog Planet and has written for Audiophile Review, Sound+Vision, Mix, EQ, etc. You can learn more about him at LinkedIn.
The Nintendo Switch 2 is not a console that needs a hard sell, but a bundle that includes Mario Kart World and shaves money off the combined price is the kind of offer worth paying attention to.
The Switch 2 itself centres on a 7.9-inch 1080p touchscreen with HDR10 support and Variable Refresh Rate up to 120fps, which is a meaningfully sharper and smoother handheld experience than the original Switch ever delivered.
Dock it to your television and output jumps to up to 4K resolution, so the same device that fits in a bag on your commute becomes a proper living room gaming setup the moment you get home.
The redesigned Joy-Con 2 controllers also attach magnetically rather than sliding into place, which makes them noticeably easier to grab and go, and each one can double as a mouse in compatible games, opening up some genuinely different ways to play.
GameChat lets you press a single button to start a voice or video call with friends and share your screen mid-session, connecting via the built-in camera or any compatible USB-C camera, which brings a degree of social play that the original never had built in.
Storage lands at 256GB, eight times what the original Switch shipped with, and the console is backwards compatible with the majority of original Switch games, whether on physical cartridges or as digital titles re-downloaded via the Nintendo eShop.
Mario Kart World is the headline inclusion, an open-world racing game with over 40 playable characters and support for up to 24 players across modes including Grand Prix, Knockout Tour, and Free Roam.
The Nintendo Switch 2 bundle at this price is a strong entry point for anyone coming from the original console or buying in fresh, with a launch title included that gives you something to play immediately rather than an empty game library on day one.
SiliconRepublic.com has asked Anthropic whether Irish financial institutions will take part in Project Glasswing.
Anthropic will release Mythos to UK financial institutions within the next week, said Pip White, the company’s head of UK, Ireland and Northern Europe.
White, in an interview with Bloomberg, said that Project Glasswing is coming to the UK “in the next week”. “The engagement I have had from UK CEOs in the last week has been significant,” she said. White was appointed to the role last November.
Anthropic’s newest model Mythos vastly outperforms other AI models in vulnerability detection and exploitation. The model was launched as part of a limited release earlier this month, with access granted to big businesses and financial organisations to bolster their security.
The company’s approach of launching Mythos in a controlled fashion has been called “responsible” by the Irish National Cyber Security Centre (NCSC).
Involved parties include Amazon Web Services, Apple, Google, Microsoft, Nvidia, JP Morgan Chase, Goldman Sachs and Morgan Stanley among others. SiliconRepublic.com has reached out to Anthropic, AIB and the Bank of Ireland to query a potential Mythos deployment within financial institutions in Ireland.
Meanwhile, an Oireachtas Joint Committee on AI earlier this week heard about the dangers that Mythos poses for the future of cybersecurity. “In five months – six months – it’ll be in the hands of an active state [actor],” Richard Browne, the director of the NCSC, said. “Governance is great, very important, but it doesn’t stop criminal actors.”
“The issue is not that Anthropic has created this. The issue is that Anthropic has demonstrated that this is possible,” he said. The Claude-maker will be creating 200 new jobs in Dublin by 2027 as its premises in the city expands.
Following Mythos, OpenAI said this week that it will only allow select verified users access to its latest AI model for cybersecurity operations. The cyber-specific version of GPT-5.4 lowers the refusal boundary for “legitimate” cybersecurity work, the company said.
Netflix co-founder and current chairman Reed Hastings is leaving the streaming company’s board in June to focus on “his philanthropy and other pursuits,” according to a shareholder letter released alongside Netflix’s Q1 earnings. Hastings has served as chairman of Netflix’s board since 2023, a role he assumed after stepping down as co-CEO and promoting Greg Peters in his place.
“Netflix changed my life in so many ways, and my all‑time favorite memory was January 2016, when we enabled nearly the entire planet to enjoy our service,” Hastings said in a statement. “My real contribution at Netflix wasn’t a single decision; it was a focus on member joy, building a culture that others could inherit and improve, and building a company that could be both beloved by members and wildly successful for generations to come. A special thanks to Greg and Ted, whose commitment to Netflix’s greatness is so strong that I can now focus on new things.”
Hastings founded Netflix in 1997 as a DVD-by-mail rental service with his co-founder and the company’s first CEO Marc Randolph. In 1999, Hastings became CEO, and eventually led the company through its transformation into a streaming service in 2007. Netflix started producing its own television series and movies in 2013, and in 2020, the company’s board named Ted Sarandos as Hastings’ co-CEO, in part to oversee its growing production business. Hastings stepped down as co-CEO in 2023 to become Netflix’s executive chairman, as then COO Greg Peters was promoted to co-CEO. Among his other contributions, Hastings is also the architect of Netflix’s infamous “culture memo,” which codified the company’s high-performance culture.
While he’ll no longer be on Netflix’s board, Hastings still holds seats on the boards of AI startup Anthropic and media and financial software company Bloomberg. Netflix, for its part, continues to expand outside of the television and film business Hastings helped build, offering a selection of curated party games, a growing library of video podcasts and live sports.
Xshift created a modular racing dash box for simulators that simply clicks together like a set of puzzle pieces, each held in place by a magnet. Each element has its own set of controls and readouts, and they all connect to a central unit for stability and data collection. The end result is a fully equipped control panel that is just as detailed as a real rally vehicle cockpit.
Xshift began with initial Photoshop sketches to nail down the look, feel, and details, then used 3DS Max to model accurate replicas of every button, dial, and screen, taking real-world measurements with their trusty calipers to ensure that every last detail was spot on. The models were then 3D printed, and the printed parts were reinforced to withstand the subsequent sanding and painting. Meanwhile, the acrylic sheets were laser cut and finished with a carbon fiber wrap for a truly polished look.
The ESP32-S3 circuit board is at the heart of the system, handling all of the inputs and outputs without the need for any additional components. To keep the wiring orderly, the buttons and switches are arranged in a scan matrix, yielding twelve inputs from only seven pins, while the rotary encoders have their own dedicated wires for clean signals. There are also optocouplers to keep the 12-volt LED buttons isolated from the rest of the board and prevent electrical noise from reaching it. Xshift even created a unique PCB from scratch, using Fusion 360 to ensure it has a firm ground plane and all of the necessary manual traces to keep everything functioning properly.
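The arithmetic behind the pin savings is simple: wiring buttons in a scan matrix of R rows and C columns reads R×C inputs from only R+C pins, so three row pins plus four column pins cover twelve buttons with seven wires. Here is a minimal sketch of that scan logic in plain Python (the real firmware would be C/C++ on the ESP32, and the pin arrangement here is purely illustrative, not Xshift's actual layout):

```python
# Simulated keypad matrix scan: R row pins x C column pins read R*C buttons.
# 3 rows + 4 columns = 7 pins for 12 buttons, matching the numbers above.

ROWS, COLS = 3, 4

def scan_matrix(pressed):
    """Return the list of active button indices.

    `pressed` stands in for the physical state: a set of (row, col)
    pairs whose switches are currently closed. Real firmware would
    drive each row pin low in turn and read which column pins follow.
    """
    active = []
    for row in range(ROWS):          # energize one row at a time
        for col in range(COLS):      # sample every column
            if (row, col) in pressed:
                active.append(row * COLS + col)  # unique button id
    return active

# 7 pins, 12 addressable buttons:
assert ROWS + COLS == 7 and ROWS * COLS == 12
print(scan_matrix({(0, 0), (2, 3)}))  # → [0, 11]
```

The encoders sit outside this matrix because quadrature signals need to be sampled continuously, which is why they get dedicated pins in the build described above.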
The beauty of it is that you can simply remove a module and replace it with another when necessary. One module features a large LCD screen that displays your current gear selection and lap times in real time from the simulator software. If you want more information, you can add supplementary LCD screens or even a strip of LEDs as an RPM gauge (or leave it off completely if you’re driving an electric vehicle). The dials and switches control everything from radio settings to pit stops, responding instantly to a single button press.
On the software side, Xshift connected all of this hardware to multiple sim racing titles using SimHub, and even went to the bother of designing a bespoke dashboard interface in Photoshop that refreshes in real time with all of the game’s statistics. They employed some complex JavaScript expressions to connect each static graphic element to the live data feeds, ensuring that the screens always reflect exactly what’s happening on the track, and they wrote the microcontroller firmware to handle button presses, encoder spins, and LED patterns with no lag, finishing up with matrix scanning and input tests.
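The binding pattern described here — a static widget paired with a small expression that is re-evaluated against each fresh telemetry snapshot — can be sketched independently of SimHub. The following is a hypothetical Python analogue, not SimHub's actual API; the widget and telemetry field names are invented for illustration:

```python
# Sketch of binding static dashboard widgets to live telemetry,
# in the spirit of SimHub's per-element expressions.
# All field and widget names below are illustrative, not SimHub's.

telemetry = {"gear": 3, "rpm": 6450, "lap_time_s": 92.417}

# Each widget maps to a small expression over the telemetry snapshot.
bindings = {
    "gear_display": lambda t: "N" if t["gear"] == 0 else str(t["gear"]),
    "rpm_bar":      lambda t: min(t["rpm"] / 8000, 1.0),  # 0..1 fill level
    "lap_label":    lambda t: f"{t['lap_time_s'] // 60:.0f}:"
                              f"{t['lap_time_s'] % 60:06.3f}",
}

def render(snapshot):
    """Re-evaluate every binding against the latest snapshot."""
    return {name: expr(snapshot) for name, expr in bindings.items()}

print(render(telemetry))
# gear_display → "3", rpm_bar → 0.80625, lap_label → "1:32.417"
```

On every game tick the renderer simply re-runs all the expressions, which is why the graphics stay in lockstep with the simulator without any per-widget update code.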
When you put it all together, you have a really neat unit that fits nicely on your sim rig. The magnets hold everything in place, but you can still remove a module when you need to swap it out for something else. If you’re feeling daring, you can even download all of the files from the Xshift Patreon page and build your own at home, complete with every 3D model, laser-cut template, PCB layout, and code snippet you’ll require. The end result is a cockpit that looks like it just came out of the factory, yet with plenty of room for you to customize and future-proof your setup. [Source]