
OpenAI launches a Codex desktop app for macOS to run multiple AI coding agents in parallel


OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.

The Codex app for macOS functions as what OpenAI executives describe as a “command center for agents,” allowing developers to delegate multiple coding tasks simultaneously, automate repetitive work, and supervise AI systems that can run for up to 30 minutes independently before returning completed code.

“This is the most loved internal product we’ve ever had,” Sam Altman, OpenAI’s chief executive, told VentureBeat in a press briefing ahead of Monday’s launch. “It’s been totally an amazing thing for us to be using recently at OpenAI.”

The release arrives at a pivotal moment for the enterprise AI market. According to a survey of 100 Global 2000 companies published last week by venture capital firm Andreessen Horowitz, 78% of enterprise CIOs now use OpenAI models in production, though competitors Anthropic and Google are gaining ground rapidly. Anthropic posted the largest share increase of any frontier lab since May 2025, growing 25% in enterprise penetration, with 44% of enterprises now using Anthropic in production.


The timing of OpenAI’s Codex app launch — with its focus on professional software engineering workflows — appears designed to defend the company’s position in what has become the most contested segment of the AI market: coding tools.

Why developers are abandoning their IDEs for AI agent management

The Codex app introduces a fundamentally different approach to AI-assisted coding. While previous tools like GitHub Copilot focused on autocompleting lines of code in real-time, the new application enables developers to “effortlessly manage multiple agents at once, run work in parallel, and collaborate with agents over long-running tasks.”

Alexander Embiricos, the product lead for Codex, explained the evolution during the press briefing by tracing the product’s lineage back to 2021, when OpenAI first introduced a model called Codex that powered GitHub Copilot.


“Back then, people were using AI to write small chunks of code in their IDEs,” Embiricos said. “GPT-5 in August last year was a big jump, and then 5.2 in December was another massive jump, where people started doing longer and longer tasks, asking models to do work end to end. So what we saw is that developers, instead of working closely with the model, pair coding, they started delegating entire features.”

The shift has been so profound that Altman said he recently completed a substantial coding project without ever opening a traditional integrated development environment.

“I was astonished by this…I did this fairly big project in a few days earlier this week and over the weekend. I did not open an IDE during the process. Not a single time,” Altman said. “I did look at some code, but I was not doing it the old-fashioned way, and I did not think that was going to be happening by now.”

How skills and automations extend AI coding beyond simple code generation

The Codex app introduces several new capabilities designed to extend AI coding beyond writing lines of code. Chief among these are “Skills,” which bundle instructions, resources, and scripts so that Codex can “reliably connect to tools, run workflows, and complete tasks according to your team’s preferences.”


The app includes a dedicated interface for creating and managing skills, and users can explicitly invoke specific skills or allow the system to automatically select them based on the task at hand. OpenAI has published a library of skills for common workflows, including tools to fetch design context from Figma, manage projects in Linear, deploy web applications to cloud hosts like Cloudflare and Vercel, generate images using GPT Image, and create professional documents in PDF, spreadsheet, and Word formats.

To demonstrate the system’s capabilities, OpenAI asked Codex to build a racing game from a single prompt. Using an image generation skill and a web game development skill, Codex built the game by working independently using more than 7 million tokens with just one initial user prompt, taking on “the roles of designer, game developer, and QA tester to validate its work by actually playing the game.”

The company has also introduced “Automations,” which let developers schedule Codex to run recurring work in the background. “When an Automation finishes, the results land in a review queue so you can jump back in and continue working if needed.”

Thibault Sottiaux, who leads the Codex team at OpenAI, described how the company uses these automations internally: “We’ve been using Automations to handle the repetitive but important tasks, like daily issue triage, finding and summarizing CI failures, generating daily release briefs, checking for bugs, and more.”


The app also includes built-in support for “worktrees,” allowing multiple agents to work on the same repository without conflicts. “Each agent works on an isolated copy of your code, allowing you to explore different paths without needing to track how they impact your codebase.”
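Under the hood, Git’s worktree feature is the standard way to give several branches of a single repository their own working directories at the same time. The sketch below is purely illustrative rather than OpenAI’s actual implementation, and the repository path and task names are hypothetical; it only shows how each parallel task could be handed an isolated checkout:

# Illustrative sketch, not OpenAI's implementation: give each parallel task
# its own isolated working copy of a repository using Git worktrees.
# The repository path and task names are hypothetical.
import subprocess
from pathlib import Path

def create_task_worktree(repo: Path, task: str) -> Path:
    """Create a new branch and an isolated worktree directory for one task."""
    worktree_dir = repo.parent / f"{repo.name}-{task}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", f"agent/{task}", str(worktree_dir)],
        check=True,
    )
    return worktree_dir

if __name__ == "__main__":
    repo = Path("my-project")  # hypothetical local repository
    for task in ["fix-login-bug", "add-dark-mode", "refactor-tests"]:
        print("created", create_task_worktree(repo, task))

Because each worktree is its own checkout on its own branch, changes made for one task never collide with another until someone deliberately merges them.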

OpenAI battles Anthropic and Google for control of enterprise AI spending

The launch comes as enterprise spending on AI coding tools accelerates dramatically. According to the Andreessen Horowitz survey, average enterprise AI spend on large language models has risen from approximately $4.5 million to $7 million over the last two years, with enterprises expecting growth of another 65% this year to approximately $11.6 million.

Leadership in the enterprise AI market varies significantly by use case. OpenAI dominates “early, horizontal use cases like general purpose chatbots, enterprise knowledge management and customer support,” while Anthropic leads in “software development and data analysis, where CIOs consistently cite rapid capability gains since the second half of 2024.”

When asked during the press briefing how Codex differentiates from Anthropic’s Claude Code, which has been described as having its “ChatGPT moment,” Sottiaux emphasized OpenAI’s focus on model capability for long-running tasks.


“One of the things that our models are extremely good at—they really sit at the frontier of intelligence and doing reliable work for long periods of time,” Sottiaux said. “This is also what we’re optimizing this new surface to be very good at, so that you can start many parallel agents and coordinate them over long periods of time and not get lost.”

Altman added that while many tools can handle “vibe coding front ends,” OpenAI’s 5.2 model remains “the strongest model by far” for sophisticated work on complex systems.

“Taking that level of model capability and putting it in an interface where you can do what Thibault was saying, we think is going to matter quite a bit,” Altman said. “That’s probably, at least listening to users and sort of looking at the chatter on social, the single biggest differentiator.”

The surprising bottleneck on AI progress: how fast humans can type

The philosophical underpinning of the Codex app reflects a view that OpenAI executives have been articulating for months: that human limitations — not AI capabilities — now constitute the primary constraint on productivity.


In a December appearance on Lenny’s Podcast, Embiricos described human typing speed as “the current underappreciated limiting factor” to achieving artificial general intelligence. The logic: if AI can perform complex coding tasks but humans can’t write prompts or review outputs fast enough, progress stalls.

The Codex app attempts to address this by enabling what the team calls an “abundance mindset” — running multiple tasks in parallel rather than perfecting single requests. During the briefing, Embiricos described how power users at OpenAI work with the tool.

“Last night, I was working on the app, and I was making a few changes, and all of these changes are able to run in parallel together. And I was just sort of going between them, managing them,” Embiricos said. “Behind the scenes, all these tasks are running on something called Git worktrees, which means that the agents are running independently, and you don’t have to manage them.”

In the Sequoia Capital podcast “Training Data,” Embiricos elaborated on this mindset shift: “The mindset that works really well for Codex is, like, kind of like this abundance mindset and, like, hey, let’s try anything. Let’s try anything even multiple times and see what works.” He noted that when users run 20 or more tasks in a day or an hour, “they’ve probably understood basically how to use the tool.”


Building trust through sandboxes: how OpenAI secures autonomous coding agents

OpenAI has built security measures into the Codex architecture from the ground up. The app uses “native, open-source and configurable system-level sandboxing,” and by default, “Codex agents are limited to editing files in the folder or branch where they’re working and using cached web search, then asking for permission to run commands that require elevated permissions like network access.”
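That write restriction is enforced at the operating-system level rather than in application code, but the idea behind it is easy to picture. The sketch below is only a conceptual illustration of a path-confinement check, not OpenAI’s mechanism, and the folder names are hypothetical:

# Conceptual illustration only: the kind of path-confinement rule a sandbox
# enforces. Codex applies this at the OS level, not in application code like
# this, and the folder names below are hypothetical.
from pathlib import Path

def is_write_allowed(allowed_root: Path, target: Path) -> bool:
    """Return True only if target resolves to a location inside allowed_root."""
    try:
        target.resolve().relative_to(allowed_root.resolve())
        return True
    except ValueError:
        return False

print(is_write_allowed(Path("/work/my-repo"), Path("/work/my-repo/src/app.py")))  # True
print(is_write_allowed(Path("/work/my-repo"), Path("/etc/passwd")))               # False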

Embiricos elaborated on the security approach during the briefing, noting that OpenAI has open-sourced its sandbox technology.

“Codex has this sandbox that we’re actually incredibly proud of, and it’s open source, so you can go check it out,” Embiricos said. The sandbox “basically ensures that when the agent is working on your computer, it can only make writes in a specific folder that you want it to make writes into, and it doesn’t access the network without permission.”

The system also includes a granular permission model that allows users to configure persistent approvals for specific actions, avoiding the need to repeatedly authorize routine operations. “If the agent wants to do something and you find yourself annoyed that you’re constantly having to approve it, instead of just saying, ‘All right, you can do everything,’ you can just say, ‘Hey, remember this one thing — I’m actually okay with you doing this going forward,’” Embiricos explained.


Altman emphasized that the permission architecture signals a broader philosophy about AI safety in agentic systems.

“I think this is going to be really important. I mean, it’s been so clear to us using this, how much you want it to have control of your computer, and how much you need it,” Altman said. “And the way the team built Codex such that you can sensibly limit what’s happening and also pick the level of control you’re comfortable with is important.”

He also acknowledged the dual-use nature of the technology. “We do expect to get to our internal cybersecurity high moment of our models very soon. We’ve been preparing for this. We’ve talked about our mitigation plan,” Altman said. “A real thing for the world to contend with is going to be defending against a lot of capable cybersecurity threats using these models very quickly.”

The same capabilities that make Codex valuable for fixing bugs and refactoring code could, in the wrong hands, be used to discover vulnerabilities or write malicious software—a tension that will only intensify as AI coding agents become more capable.


From Android apps to research breakthroughs: how Codex transformed OpenAI’s own operations

Perhaps the most compelling evidence for Codex’s capabilities comes from OpenAI’s own use of the tool. Sottiaux described how the system has accelerated internal development.

“A Sora Android app is an example of that where four engineers shipped in only 18 days internally, and then within the month we give access to the world,” Sottiaux said. “I had never noticed such speed at this scale before.”

Beyond product development, Sottiaux described how Codex has become integral to OpenAI’s research operations.

“Codex is really involved in all parts of the research — making new data sets, investigating its own screening runs,” he said. “When I sit in meetings with researchers, they all send Codex off to do an investigation while we’re having a chat, and then it will come back with useful information, and we’re able to debug much faster.”


The tool has also begun contributing to its own development. “Codex also is starting to build itself,” Sottiaux noted. “There’s no screen within the Codex engineering team that doesn’t have Codex running on multiple, six, eight, ten, tasks at a time.”

When asked whether this constitutes evidence of “recursive self-improvement” — a concept that has long concerned AI safety researchers — Sottiaux was measured in his response.

“There is a human in the loop at all times,” he said. “I wouldn’t necessarily call it recursive self-improvement, a glimpse into the future there.”

Altman offered a more expansive view of the research implications.


“There’s two parts of what people talk about when they talk about automating research to a degree where you can imagine that happening,” Altman said. “One is, can you write software, extremely complex infrastructure, software to run training jobs across hundreds of thousands of GPUs and babysit them. And the second is, can you come up with the new scientific ideas that make algorithms more efficient.”

He noted that OpenAI is “seeing early but promising signs on both of those.”

The end of technical debt? AI agents take on the work engineers hate most

One of the more unexpected applications of Codex has been addressing technical debt — the accumulated maintenance burden that plagues most software projects.

Altman described how AI coding agents excel at the unglamorous work that human engineers typically avoid.


“The kind of work that human engineers hate to do — go refactor this, clean up this code base, rewrite this, write this test — this is where the model doesn’t care. The model will do anything, whether it’s fun or not,” Altman said.

He reported that some infrastructure teams at OpenAI that “had sort of like, given up hope that you were ever really going to long term win the war against tech debt, are now like, we’re going to win this, because the model is going to constantly be working behind us, making sure we have great test coverage, making sure that we refactor when we’re supposed to.”

The observation speaks to a broader theme that emerged repeatedly during the briefing: AI coding agents don’t experience the motivational fluctuations that affect human programmers. As Altman noted, a team member recently observed that “the hardest mental adjustment to make about working with these sort of like AI coding teammates, unlike a human, is the models just don’t run out of dopamine. They keep trying. They don’t run out of motivation. They don’t get, you know, they don’t lose energy when something’s not working. They just keep going and, you know, they figure out how to get it done.”

What the Codex app costs and who can use it starting today

The Codex app launches today on macOS and is available to anyone with a ChatGPT Plus, Pro, Business, Enterprise, or Edu subscription. Usage is included in ChatGPT subscriptions, with the option to purchase additional credits if needed.


In a promotional push, OpenAI is temporarily making Codex available to ChatGPT Free and Go users “to help more people try agentic workflows.” The company is also doubling rate limits for existing Codex users across all paid plans during this promotional period.

The pricing strategy reflects OpenAI’s determination to establish Codex as the default tool for AI-assisted development before competitors can gain further traction. More than a million developers have used Codex in the past month, and usage has nearly doubled since the launch of GPT-5.2-Codex in mid-December, building on more than 20x usage growth since August 2025.

Customers using Codex include large enterprises like Cisco, Ramp, Virgin Atlantic, Vanta, Duolingo, and Gap, as well as startups like Harvey, Sierra, and Wonderful. Individual developers have also embraced the tool: Peter Steinberger, creator of OpenClaw, built the project entirely with Codex and reports that since fully switching to the tool, his productivity has roughly doubled across more than 82,000 GitHub contributions.

OpenAI’s ambitious roadmap: Windows support, cloud triggers, and continuous background agents

OpenAI outlined an aggressive development roadmap for Codex. The company plans to make the app available on Windows, continue pushing “the frontier of model capabilities,” and roll out faster inference.


Within the app, OpenAI will “keep refining multi-agent workflows based on real-world feedback” and is “building out Automations with support for cloud-based triggers, so Codex can run continuously in the background—not just when your computer is open.”

The company also announced a new “plan mode” feature that allows Codex to think through complex changes in read-only mode, then discuss them with the user before executing. “This means that it lets you build a lot of confidence before, again, sending it to do a lot of work by itself, independently, in parallel to you,” Embiricos explained.

Additionally, OpenAI is introducing customizable personalities for Codex. “The default personality for Codex has been quite terse. A lot of people love it, but some people want something more engaging,” Embiricos said. Users can access the new personalities using the /personality command.

Altman also hinted at future integration with ChatGPT’s broader ecosystem.


“There will be all kinds of cool things we can do over time to connect people’s ChatGPT accounts and leverage sort of all the history they’ve built up there,” Altman said.

Microsoft still dominates enterprise AI, but the window for disruption is open

The Codex app launch occurs as most enterprises have moved beyond single-vendor strategies. According to the Andreessen Horowitz survey, “81% now use three or more model families in testing or production, up from 68% less than a year ago.”

Despite the proliferation of AI coding tools, Microsoft continues to dominate enterprise adoption through its existing relationships. “Microsoft 365 Copilot leads enterprise chat though ChatGPT has closed the gap meaningfully,” and “GitHub Copilot is still the coding leader for enterprises.” The survey found that “65% of enterprises noted they preferred to go with incumbent solutions when available,” citing trust, integration, and procurement simplicity.

However, the survey also suggests significant opportunity for challengers: “Enterprises consistently say they value faster innovation, deeper AI focus, and greater flexibility paired with cutting edge capabilities that AI native startups bring.”


OpenAI appears to be positioning Codex as a bridge between these worlds. “Codex is built on a simple premise: everything is controlled by code,” the company stated. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of technical and knowledge work.”

The company’s ambition extends beyond coding. “We’ve focused on making Codex the best coding agent, which has also laid the foundation for it to become a strong agent for a broad range of knowledge work tasks that extend beyond writing code.”

When asked whether AI coding tools could eventually move beyond early adopters to become mainstream, Altman suggested the transition may be closer than many expect.

“Can it go from vibe coding to serious software engineering? That’s what this is about,” Altman said. “I think we are over the bar on that. I think this will be the way that most serious coders do their job — and very rapidly from now.”


He then pivoted to an even bolder prediction: that code itself could become the universal interface for all computer-based work.

“Code is a universal language to get computers to do what you want. And it’s gotten so good that I think, very quickly, we can go not just from vibe coding silly apps but to doing all the non-coding knowledge work,” Altman said.

At the close of the briefing, Altman urged journalists to try the product themselves: “Please try the app. There’s no way to get this across just by talking about it. It’s a crazy amount of power.”

For developers who have spent careers learning to write code, the message was clear: the future belongs to those who learn to manage the machines that write it for them.


8849 TANK X Smartphone Boasts a Built-in DLP Projector, Night Vision Camera


Most smartphones are preoccupied with being as slim and shiny as possible, but the 8849 TANK X doesn’t care. At 1.26 inches thick and 750 grams, it’s a hefty, heavy beast designed for places where your precious little smartphone would give up and die: dust storms, getting rained on, being dropped from chest level, -28°C cold or 56°C heat, you name it. It has IP68 and IP69K ratings, as well as military-grade ruggedness that would make even the most ardent outdoor enthusiast happy.



One of the Tank X’s most notable features is its built-in DLP projector, which will either convert you to the Church of Portable Movie Nights or make you laugh at the expense of some unfortunate soul who thought it sounded like a half-baked idea. The resolution is full 1080p (up to 1920×1080), and the brightness is 220 lumens. Plus, with laser focusing, you can expect a razor-sharp image from about half a meter to 3-4 meters away, and keystone correction ensures that the image remains level even if the phone is held at an angle. The projection area is around 10 feet square, making it ideal for movie evenings under the stars or displaying a map on a wall to confuse all of your lost buddies. You can get 5 hours of use out of it at maximum brightness in high mode or 6 in night mode. The previous Tank models were stuck at 720p, so this is a significant advance.


The battery capacity is a whopping 17,600mAh, split between two cells to keep it going for ages, and by “ages,” I mean several days of average use or 25 hours of movie playback. Or, if you’re having a lengthy phone session, you could easily talk for dozens, if not hundreds, of hours. Now, I get what you’re thinking: “But what about when the projector turns on?” Well, the power management is fairly conscientious, so the projector doesn’t drain the battery as quickly as you might fear.



So, what makes this thing tick? It’s powered by a MediaTek Dimensity 8200, a very strong 4nm octa-core processor paired with 16GB of LPDDR5 RAM (expandable by another 16GB, since who doesn’t like that?) and 512GB of UFS 3.1 storage. It all runs on Android 15, which, even on a beast like this, manages to keep things running smoothly whether you’re running multiple apps, playing a few games, or simply goofing around.

Connectivity is excellent, including 5G bands, Wi-Fi 6, Bluetooth 5.4, and GPS accuracy to within a few feet. There’s also a 3.5mm jack, an IR blaster for controlling your fancy TVs and appliances, and an FM radio for when you’re off the grid.

The cameras are more than good enough for casual shooting, beginning with the 50MP primary sensor, which employs Sony’s IMX766 to capture solid daylight shots with full-pixel focusing. Then there’s the 8MP telephoto, which offers 3x zoom and should come in handy, but the true star of the show is the 64MP night vision camera, which is equipped with four infrared LEDs and autofocus, allowing you to see as clearly as day in almost complete darkness. A 50MP front camera for selfies and video calls completes the self-portrait package. With a dual-tone flash and a pair of extra IR lights to help you in low-light settings, you should look sharp.

On the back, there’s a 1,200-lumen RGB camping light that functions as a little spotlight; you can vary between modes such as white light, some great color options, SOS patterns, or even just a strobe or sound alert. It’s useful for emergencies or simply navigating a trail in the dark.

The 6.78-inch LCD display has 2460×1080 resolution and a 120Hz refresh rate, with a maximum brightness of 750 nits. It’s nice to see they eliminated the PWM flicker that causes eye strain after long viewing periods. Furthermore, outdoor visibility is acceptable, and the panel works well with the projector.

The Tank X is priced at $1,049.99 (ugh), but an early-bird price of $549.99 makes it considerably more affordable. Pre-orders open February 1, 2026, and units will ship from warehouses in the United States, Canada, the United Kingdom, Australia, and other locations beginning March 1.

Today’s NYT Mini Crossword Answers for Feb. 4


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I don’t know my Greek letters, so whenever there’s a clue like today’s 7-Across, I just have to hope the other answers fill it in for me. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.


Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.


The completed NYT Mini Crossword puzzle for Feb. 4, 2026.


NYT/Screenshot by CNET

Mini across clues and answers

1A clue: “The Rachel Maddow Show” channel, after a 2025 rebranding
Answer: MSNOW

6A clue: Childhood
Answer: YOUTH

7A clue: Greek letter after zeta and eta
Answer: THETA


8A clue: What helicopter parents do
Answer: HOVER

9A clue: Sound at the dog park
Answer: ARF

Mini down clues and answers

1D clue: “Cats always land on their feet,” e.g.
Answer: MYTH

2D clue: Neighborhood in both London and Manhattan
Answer: SOHO


3D clue: ___ York, Spanish name for New York
Answer: NUEVA

4D clue: Furry mammal that eats crustaceans
Answer: OTTER

5D clue: Docking area
Answer: WHARF



Tech Moves: Tableau CEO steps down; Microsoft taps new executive VPs; Avanade’s new CEO


Former Tableau CEO Ryan Aytay, pictured during a visit to Seattle in 2024. (GeekWire File Photo / Todd Bishop)

Ryan Aytay, a longtime Salesforce exec who has led Tableau as CEO since 2023, is departing.

Aytay revealed the news on LinkedIn on Tuesday. He called his 19-year tenure “a front-row seat to innovation, a masterclass in leadership, and a community that has shaped who I am professionally and personally.” Aytay said he’ll share more about a “new challenge” later.

Aytay joined Salesforce in 2007 and became chief business officer in 2020 before taking the president role at Tableau in February 2022. A year later he replaced Mark Nelson as CEO.

The appointment came four years after Salesforce paid $15.7 billion to acquire Seattle-based Tableau, a leader in the data visualization sector.

Tableau reported 4% revenue growth in Salesforce’s most recent quarter — down from 15% growth in the previous quarter.


In his post, Aytay praised Tableau’s “DataFam” community and said “the future of Tableau and Salesforce is incredibly bright.”

His departure follows the recent exit of Denise Dresser, who led Slack, another Salesforce division. Dresser is now chief revenue officer at OpenAI. Salesforce’s cybersecurity leader announced Monday that he left the company last week.

Salesforce stock is down more than 14% over the past week amid investor fears over AI disrupting traditional software providers. The company maintains an office in Seattle’s Fremont neighborhood and another in Bellevue.

— Microsoft is naming four new executive vice presidents, according to a memo viewed by CNBC.


Deb Cupp, Nick Parker, Ralph Haupter, and Mala Anand will get the new titles. They will continue reporting to Judson Althoff, who took on the newly created position of CEO of Microsoft’s commercial business in October. Althoff is overseeing a reformulated commercial team that includes engineering, sales, marketing, operations, and finance leaders representing more than 75% of Microsoft’s revenue.

Microsoft reported better-than-expected earnings last week but its shares fell as much as 12% in trading the day after the earnings report — erasing $357 billion from its market value. Several factors may be contributing to market skepticism, including the company’s massive AI spending bets and concern about dependence on OpenAI.

— Avanade named Chris Howarth as its new CEO. Howarth previously spent nearly three decades at Accenture, where he was a senior managing director leading the Accenture Microsoft Business Group, which spans Accenture, Microsoft, and Avanade.

Howarth replaces Rodrigo Caserta, who is joining Microsoft as a corporate vice president. Caserta spent more than a decade at Avanade and was named its CEO in 2024.


“Rodrigo’s leadership has positioned Avanade for sustained momentum, and his move to Microsoft further strengthens our partnership,” Howarth said in a statement. “I’m excited to work with our people, clients, and partners at this pivotal moment, delivering on the huge potential of AI to drive transformation and accelerate value.”

Avanade was formed in 2000 by Accenture and Microsoft and provides various digital, cloud, and AI-related services across the Microsoft ecosystem.

— Kelsey Peterson, a former vice president at Weber Shandwick and senior director at Rubrik, joined Microsoft as a senior communications manager for the company’s security business.

Read about other Tech Moves from earlier today here.


Robot Videos: DARPA Triage Challenge, Extreme Cold Test


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

One of my favorite parts of robotics is watching research collide with non-roboticists in the real (or real-ish) world.


[ DARPA ]

Spot will put out fires for you. Eventually. If it feels like it.

[ Mechatronic and Robotic Systems Laboratory ]


All those robots rising out of their crates is not sinister at all.

[ LimX ]

The Lynx M20 quadruped robot recently completed an extreme cold-weather field test in Yakeshi, Hulunbuir, operating reliably in temperatures as low as –30°C.


[ DEEP Robotics ]

This is a teaser video for KIMLAB’s new teleoperation robot. For now, we invite you to enjoy the calm atmosphere, with students walking, gathering, and chatting across the UIUC Main Quad—along with its scenery and ambient sounds, without any technical details. More details will be shared soon. Enjoy the moment.

The most incredible part of this video is that they have publicly available power in the middle of their quad.

[ KIMLAB ]


For the eleventy-billionth time: Just because you can do a task with a humanoid robot doesn’t mean you should do a task with a humanoid robot.

[ UBTECH ]


[ KAIST ]

Okay, so figuring out where Spot’s face is just got a lot more complicated.

[ Boston Dynamics ]


An undergraduate team at HKU’s Tam Wing Fan Innovation Wing developed CLIO, an embodied tour-guide robot, in just months. Built on LimX Dynamics’ TRON 1, it uses LLMs for tour planning, computer vision for visitor recognition, and a laser pointer/expressive display for engaging tours.

[ CLIO ]

The future of work is doing work so that robots can then do the same work, except less well.


[ AgileX ]


Apple integrates Anthropic’s Claude and OpenAI’s Codex into Xcode 26.3 in push for ‘agentic coding’


Apple on Tuesday announced a major update to its flagship developer tool that gives artificial intelligence agents unprecedented control over the app-building process, a move that signals the iPhone maker’s aggressive push into an emerging and controversial practice known as “agentic coding.”

Xcode 26.3, available immediately as a release candidate, integrates Anthropic’s Claude Agent and OpenAI’s Codex directly into Apple’s development environment, allowing the AI systems to autonomously write code, build projects, run tests, and visually verify their own work — all with minimal human oversight.

The update is Apple’s most significant embrace of AI-assisted software development since introducing intelligence features in Xcode 26 last year, and arrives as “vibe coding” — the practice of delegating software creation to large language models — has become one of the most debated topics in technology.

Apple says that while integrating intelligence into the Xcode developer workflow is powerful, the model itself still has a somewhat limited aperture. It answers questions based on what the developer provides, but it doesn’t have access to the full context of the project, and it’s not able to take action on its own. That changes with this update, the company said during a press conference Tuesday morning.


How Apple’s new AI coding features let developers build apps faster than ever

The key innovation in Xcode 26.3 is the depth of integration between AI agents and Apple’s development tools. Unlike previous iterations that offered code suggestions and autocomplete features, the new system grants AI agents access to nearly every aspect of the development process.

During a live demonstration, an Apple engineer showed how the Claude agent could receive a simple prompt — “add a new feature to show the weather at a landmark” — and then independently analyze the project’s file structure, consult Apple’s documentation, write the necessary code, build the project, and take screenshots of the running application to verify its work matched the requested design.

According to Apple, the agent is able to use tools like build and screenshot previews to verify its work, visually analyze the image, and confirm that everything has been built accordingly. Before, when interacting with a model, it would provide an answer and just stop there.

The system creates automatic checkpoints as developers interact with the AI, allowing them to roll back changes if results prove unsatisfactory — a safeguard that acknowledges the unpredictable nature of AI-generated code.


Apple says it worked directly with Anthropic and OpenAI to optimize the experience with particular attention paid to reducing token usage — the computational units that determine costs when using cloud-based AI models — and improving the efficiency of tool calling.

According to the company, developers can download new agents with a single click, and they update automatically.

Why Apple’s adoption of the Model Context Protocol could reshape the AI development landscape

Underlying the integration is the Model Context Protocol, or MCP, an open standard that Anthropic developed for connecting AI agents with external tools. Apple’s adoption of MCP means that any compatible agent — not just Claude or Codex — can now interact with Xcode’s capabilities.

Apple says this also works for agents that are running outside of Xcode. Any agent that is compatible with MCP can now work with Xcode to do all the same things — project discovery and change management, building and testing apps, working with previews and code snippets, and accessing the latest documentation.
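For context, MCP messages are ordinary JSON-RPC 2.0 requests, which is part of what makes the protocol straightforward for any agent to speak. The snippet below is a minimal illustration of that message shape only; the tool name and arguments are hypothetical placeholders, not Apple’s actual Xcode tool identifiers:

# Minimal illustration of the shape of MCP (JSON-RPC 2.0) messages.
# "build_project" and its arguments are hypothetical placeholders, not
# Apple's real Xcode tool names.
import json

# A client first asks the MCP server which tools it exposes...
list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...and then invokes one of them by name with structured arguments.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "build_project",
        "arguments": {"scheme": "MyApp", "configuration": "Debug"},
    },
}

print(json.dumps(call_tool_request, indent=2))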


The decision to embrace an open protocol, rather than building a proprietary system, represents a notable departure for Apple, which has historically favored closed ecosystems. It also positions Xcode as a potential hub for a growing universe of AI development tools.

Xcode’s troubled history with AI tools — and why Apple says this time is different

The announcement comes against a backdrop of mixed experiences with AI-assisted coding in Apple’s tools. During the press conference, one developer described previous attempts to use AI agents with Xcode as “horrible,” citing constant crashes and an inability to complete basic tasks.

Apple acknowledged the concerns while arguing that the new integration addresses fundamental limitations of earlier approaches.

The company says the big shift is that Claude and Codex have so much more visibility into the breadth of the project. If they hallucinate and write code that doesn’t work, they can now build, see the compile errors, and iterate in real time to fix those issues — in some cases before presenting it as a finished work.


Apple argues that the power of IDE integration extends beyond error correction. Agents can now automatically add entitlements to projects when needed to access protected APIs — a task the company says would otherwise be very difficult for an AI operating outside the development environment, where it would have to deal with binary files in formats it may not understand.

From Andrej Karpathy’s tweet to LinkedIn certifications: The unstoppable rise of vibe coding

Apple’s announcement arrives at a crucial moment in the evolution of AI-assisted development. The term “vibe coding,” coined by AI researcher Andrej Karpathy in early 2025, has transformed from a curiosity into a genuine cultural phenomenon that is reshaping how software gets built.

LinkedIn announced last week that it will begin offering official certifications in AI coding skills, drawing on usage data from platforms like Lovable and Replit. Job postings requiring AI proficiency doubled in the past year, according to edX research, with Indeed’s Hiring Lab reporting that 4.2% of U.S. job listings now mention AI-related keywords.

The enthusiasm is driven by genuine productivity gains. Casey Newton, the technology journalist, recently described building a complete personal website using Claude Code in about an hour — a task that previously required expensive Squarespace subscriptions and years of frustrated attempts with various website builders.


More dramatically, Jaana Dogan, a principal engineer at Google, posted that she gave Claude Code “a description of the problem” and “it generated what we built last year in an hour.” Her post, which accumulated more than 8 million views, began with the disclaimer: “I’m not joking and this isn’t funny.”

Security experts warn that AI-generated code could lead to ‘catastrophic explosions’

But the rapid adoption of agentic coding has also sparked significant concerns among security researchers and software engineers.

David Mytton, founder and CEO of developer security provider Arcjet, warned last month that the proliferation of vibe-coded applications “into production will lead to catastrophic problems for organizations that don’t properly review AI-developed software.”

“In 2026, I expect more and more vibe-coded applications hitting production in a big way,” Mytton wrote. “That’s going to be great for velocity… but you’ve still got to pay attention. There’s going to be some big explosions coming!”


Simon Willison, co-creator of the Django web framework, drew an even starker comparison. “I think we’re due a Challenger disaster with respect to coding agent security,” he said, referring to the 1986 space shuttle explosion that killed all seven crew members. “So many people, myself included, are running these coding agents practically as root. We’re letting them do all of this stuff.”

A pre-print paper from researchers this week warned that vibe coding could pose existential risks to the open-source software ecosystem. The study found that AI-assisted development pulls user interaction away from community projects, reduces visits to documentation websites and forums, and makes launching new open-source initiatives significantly harder.

Stack Overflow usage has plummeted as developers increasingly turn to AI chatbots for answers—a shift that could ultimately starve the very knowledge bases that trained the AI models in the first place.

Previous research painted an even more troubling picture: a 2024 report found that vibe coding using tools like GitHub Copilot “offered no real benefits unless adding 41% more bugs is a measure of success.”


The hidden mental health cost of letting AI write your code

Even enthusiastic adopters have begun acknowledging the darker aspects of AI-assisted development.

Peter Steinberger, creator of the viral AI agent originally known as Clawdbot (now OpenClaw), recently revealed that he had to step back from vibe coding after it consumed his life.

“I was out with my friends and instead of joining the conversation in the restaurant, I was just like, vibe coding on my phone,” Steinberger said in a recent podcast interview. “I decided, OK, I have to stop this more for my mental health than for anything else.”

Steinberger warned that the constant building of increasingly powerful AI tools creates the “illusion of making you more productive” without necessarily advancing real goals. “If you don’t have a vision of what you’re going to build, it’s still going to be slop,” he added.


Google CEO Sundar Pichai has expressed similar reservations, saying he won’t vibe code on “large codebases where you really have to get it right.”

“The security has to be there,” Pichai said in a November podcast interview.

Boris Cherny, the Anthropic engineer who created Claude Code, acknowledged that vibe coding works best for “prototypes or throwaway code, not software that sits at the core of a business.”

“You want maintainable code sometimes. You want to be very thoughtful about every line sometimes,” Cherny said.


Apple is gambling that deep IDE integration can make AI coding safe for production

Apple appears to be betting that the benefits of deep IDE integration can mitigate many of these concerns. By giving AI agents access to build systems, test suites, and visual verification tools, the company is essentially arguing that Xcode can serve as a quality control mechanism for AI-generated code.

Susan Prescott, Apple’s vice president of Worldwide Developer Relations, framed the update as part of Apple’s broader mission.

In a statement, Apple said its goal is to make tools that put industry-leading technologies directly in developers’ hands so they can build the very best apps. The company says agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation.

But the question remains whether the safeguards will prove sufficient as AI agents grow more autonomous. Asked about debugging capabilities, Apple noted that while Xcode has a powerful debugger built in, there is no direct MCP tool for debugging.


Developers can run the debugger and manually relay information to the agent, but the AI cannot yet independently investigate runtime issues — a limitation that could prove significant as the complexity of AI-generated code increases.

The update also does not currently support running multiple agents simultaneously on the same project, though Apple noted that developers can open projects in multiple Xcode windows using Git worktrees as a workaround.

The future of software development hangs in the balance — and Apple just raised the stakes

Xcode 26.3 is available immediately as a release candidate for members of the Apple Developer Program, with a general release expected soon on the App Store. The release candidate designation — Apple’s final beta before production — means developers who download today will automatically receive the finished version when it ships.

The integration supports both API keys and direct account credentials from OpenAI and Anthropic, offering developers flexibility in managing their AI subscriptions. But those conveniences belie the magnitude of what Apple is attempting: nothing less than a fundamental reimagining of how software comes into existence.


For the world’s most valuable company, the calculus is straightforward. Apple’s ability to attract and retain developers has always underpinned its platform dominance. If agentic coding delivers on its promise of radical productivity gains, early and deep integration could cement Apple’s position for another generation. If it doesn’t — if the security disasters and “catastrophic explosions” that critics predict come to pass — Cupertino could find itself at the epicenter of a very different kind of transformation.

The technology industry has spent decades building systems to catch human errors before they reach users. Now it must answer a more unsettling question: What happens when the errors aren’t human at all?

As Apple conceded during Tuesday’s press conference, with what may prove to be unintentional understatement: “Large language models, as agents sometimes do, sometimes hallucinate.”

Millions of lines of code are about to find out how often.



Optical Combs Help Radio Telescopes Work Together


Very-long-baseline interferometry (VLBI) is a technique in radio astronomy whereby multiple radio telescopes cooperate to combine their received data and in effect create a much larger single radio telescope. For this to work, however, it is essential to have exact timing and other relevant information to accurately match the signals from each individual radio telescope. As VLBI is pushed to ever higher frequency ranges and bandwidths, synchronizing the signals becomes much harder, but an optical frequency comb technique may offer a solution here.
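The payoff of combining dishes comes from a simple relationship: an interferometer’s angular resolution is roughly θ ≈ λ / B, where λ is the observing wavelength and B is the baseline, the separation between telescopes, rather than the diameter of any single dish. Intercontinental baselines of thousands of kilometres therefore give resolutions far beyond what one antenna could achieve, provided the signals from each site can be timestamped precisely enough to be combined.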

In the paper by [Minji Hyun] et al. it’s detailed how they built the system and used it with the Korean VLBI Network (KVN) Yonsei radio telescope in Seoul as a proof of concept. This still uses the same hydrogen maser atomic clock as the timing source, but with optical transmission of the pulses a higher accuracy can be achieved, limited only by the photodiode on the receiving end.

In the demonstration, frequencies of up to 50 GHz were possible, but commercial 100 GHz photodiodes are available. It’s also possible to send additional signals via the fiber on different wavelengths for further functionality, all with the ultimate goal of better timing and adjustment for e.g. atmospheric fluctuations that can affect radio observations.


AMD suggests the next-gen Xbox will arrive in 2027


Microsoft could launch the next-generation Xbox console sometime in 2027, AMD CEO Lisa Su has revealed during the semiconductor company’s latest earnings call. Valve is on track to start shipping its AMD-powered Steam Machine early this year, she said, while Microsoft’s development of an Xbox with a semi-custom SOC from AMD is “progressing well to support a launch in 2027.” While it doesn’t necessarily mean Microsoft is releasing a new Xbox console next year, that seems to be the company’s current goal.

Xbox president Sarah Bond announced Microsoft’s multi-year partnership with AMD for its consoles in mid-2025. Based on Bond’s statement back then, Microsoft is embracing the use of artificial intelligence and machine learning in future Xbox games. She also said that the companies are going to “co-engineer silicon” across devices, “in your living room and in your hands,” implying the development of future handheld consoles.

Leaked documents from the FTC vs. Microsoft court battle revealed in the past that Microsoft was planning to make the next Xbox a “hybrid game platform,” which combines local hardware and cloud computing. The documents also said that Microsoft was planning to release the next Xbox in 2028. Whether the company has chosen to launch the new Xbox earlier remains to be seen, but it is plausible: the Xbox Series X and S were released back in 2020, and they haven’t sold as well as the Xbox One.


Washington’s ‘millionaires tax’ targets top earners as tech leaders warn of startup fallout


Washington state’s Legislative Building, which houses the Legislature. (GeekWire Photo / Brent Roraback)

Washington state Democratic leaders on Tuesday at last unveiled their so-called “millionaires tax” — a proposed 9.9% tax applied to taxable, personal annual income that exceeds $1 million.

For the first time in decades, the lawmakers are advancing a personal income tax aimed at high‑income residents that would go into effect in two years, and pairing it with small business and low‑income tax breaks.

The action comes as the state is struggling to plug a more than $2 billion budget hole with spending cuts and a slate of potential tax changes, while at the same time some of Washington’s largest employers are cutting thousands of jobs from their payrolls.

The combined pressures — set against a backdrop of ongoing uncertainty around federal policies and funding — have leaders in the business community concerned about additional financial burdens in an increasingly shaky economy.

“Proposing a personal income tax is a major economic move for our state — one that will have consequences — and it’s not something that we, or anyone in Washington, is taking lightly,” said Rachel Smith, president of the Washington Roundtable, a nonprofit representing business executives, in a statement.


Others were more blunt.

“This tax is just another brick in the wall of anti-entrepreneurialism from state and local legislators. The average Amazon employee probably won’t mind, but this stuff is devastating to company creation,” Kirby Winfield, founding general partner at Seattle venture capital firm Ascend, said via email.

The message, said Winfield, is that “Washington does not value job creation or wealth creation for risk-taking founders and startup employees.”

In a state that has historically relied heavily on property, sales and business taxes to balance its books, Gov. Bob Ferguson has repeatedly expressed support in recent months for an income tax on the state’s highest earners.


In December, he said that a tax similar to what has been proposed would apply to fewer than 0.5% of Washington residents and would raise more than $3 billion each year. An official fiscal note on the bill has not been released.

But the governor on Tuesday said the draft legislation fell short in supporting small businesses and lower-income residents in the state. The bill is “a good start, but we still have a long way to go,” he said in a press conference.

“We are listening and hearing the voices of many, many Washingtonians who are struggling right now and having a lack of affordability in our state,” Ferguson said. “And we need to address that head on.”

Gov. Bob Ferguson holds a press conference in Olympia on Tuesday regarding a proposed income tax in Washington state. (Screenshot via TVW stream)

Tax increases and new deductions

The proposed tax, which is being introduced as Senate Bill 6346 and House Bill 2724, includes multiple provisions:

  • A 9.9% tax on Washington taxable income above a $1 million standard deduction per individual, built off of federal adjusted gross income (see the illustrative calculation below).
  • It allows up to $50,000 a year in charitable deductions per filer (or per couple), plus nonrefundable credits to avoid double‑taxing income already hit by Washington’s B&O or capital‑gains taxes, and other specific exemptions.
  • There are multiple definitions of residents subject to the tax, including someone who lives here more than 183 days per year.
  • It would apply to income earned beginning Jan. 1, 2028, with the first payments due in April 2029.
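As a simple illustration of how the 9.9% bracket would work, using hypothetical figures and ignoring the charitable deductions and credits listed above:

# Illustrative only: hypothetical figures, ignoring the charitable deductions
# and nonrefundable credits described in the bill.
STANDARD_DEDUCTION = 1_000_000  # per individual
RATE = 0.099                    # 9.9% on taxable income above the deduction

def proposed_tax(taxable_income: float) -> float:
    """Tax owed on the portion of Washington taxable income above the deduction."""
    return max(0.0, taxable_income - STANDARD_DEDUCTION) * RATE

# A filer with $1.5 million in taxable income would owe 9.9% of the
# $500,000 above the deduction, i.e. $49,500.
print(proposed_tax(1_500_000))  # 49500.0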

Supporters of the tax say it brings more fairness to the state’s tax structure. Washington is one of nine states that lack an income tax, and has prohibited the taxation of personal wages.

“Washington’s antiquated tax code is the second-most regressive in the country, which means that working people pay more, while the gap between rich and poor continues to widen,” Invest in Washington Now, a Seattle nonprofit supporting progressive tax policy, said in a statement.


The measure includes targeted tax breaks:

  • The small business B&O tax credit doubles, so businesses with annual gross receipts of less than $250,000 would no longer pay that tax.
  • The temporary B&O surcharge on high-grossing companies would end one year early, in 2028.
  • The Working Families Tax Credit removes the age limit for participation.
  • A new sales tax exemption for grooming and hygiene products would take effect Jan. 1, 2029.

At his press conference Tuesday, Ferguson called for bigger benefits for small businesses and families. The governor said he wants to devote $1 billion of tax relief to small business owners, while the proposed bill provides a little more than $100 million. Ferguson also called for expanded eligibility for the family tax credit and to provide larger amounts to recipients, plus more extensive sales tax relief.

Now comes negotiations on a tight timeline. This year’s 60-day legislative session is scheduled to end March 12.

“So it’s a challenge for something this big and this complex” to find a solution, Ferguson said, but added that he sees potential for “a lot of collaboration.”

If approved by lawmakers, the governor said the proposed tax was certain to go before voters for approval and would face legal challenges as well.


Nixing Washington’s ‘tax advantage’

While the new income tax has worried some on the business community, it’s not the only controversial tax being considered in Olympia this year.

Tech industry leaders have been up in arms over a separate proposal that would broaden the state’s capital gains tax to apply to profits from the sale of qualified small business stock (QSBS) even when gains are exempt under federal law. The change, codified in SB 6229 and HB 2292, would impact startup company founders, early employees and investors.

Aviel Ginzburg, a Seattle-based venture capitalist at Founders’ Co-op and leader of the startup community Foundations, recently posted a satirical video to highlight his opposition to the QSBS and millionaires tax.

“People are happy to pay more taxes. I am too, especially when the …. money is spent well,” Ginzburg said, asserting that’s not the case here. “We’re about to kill the golden goose.”


Another piece of legislation that’s modeled on Seattle’s payroll tax, which targets Amazon and other big companies, was floated unsuccessfully last year and is not gaining traction this session.

Other states are likewise struggling with affordability issues and looking to raise income taxes on the highest earners, with Colorado moving toward a ballot measure and Michigan considering a similar move. California, meanwhile, is exploring a one-time, 5% tax on residents with a net worth exceeding $1 billion — which has caused at least six billionaires to flee the state.

Winfield of Ascend dismisses comparisons between Washington’s and California’s tax burdens, pointing to the outsized strengths of the state to the south.

“Given the choice between paying absurd taxes here or California, founders will just move to the Bay Area,” he said. The Bay Area’s billions of dollars in venture capital, massive pool of tech talent and tolerance for risk are beyond comparison.

“Seattle is great but it doesn’t come close,” Winfield said. “And when you remove the tax advantage you lose your biggest draw.”

Source link

Continue Reading

Tech

Project G Stereo: A 60s Design Icon

Published

on

Dizzy Gillespie was a fan. Frank Sinatra bought one for himself and gave them to his Rat Pack friends. Hugh Hefner acquired one for the Playboy Mansion. Clairtone Sound Corp.’s Project G high-fidelity stereo system, which debuted in 1964 at the National Furniture Show in Chicago, was squarely aimed at trendsetters. The intent was to make the sleek, modern stereo an object of desire.

By the time the Project G was introduced, the Toronto-based Clairtone was already well respected for its beautiful, high-end stereos. “Everyone knew about Clairtone,” Peter Munk, president and cofounder of the company, boasted to a newspaper columnist. “The prime minister had one, and if the local truck driver didn’t have one, he wanted one.” Alas, with a price tag of CA $1,850—about the price of a small car—it’s unlikely that the local truck driver would have actually bought a Project G. But he could still dream.

The design of the Project G seemed to come from a dream.

“I want you to imagine that you are visitors from Mars and that you have never seen a Canadian living room, let alone a hi-fi set,” is how designer Hugh Spencer challenged Clairtone’s engineers when they first started working on the Project G. “What are the features that, regardless of design considerations, you would like to see incorporated in a new hi-fi set?”

The film “I’ll Take Sweden” featured a Project G, shown here with co-star Tuesday Weld. Photo: Nina Munk/The Peter Munk Estate

The result was a stereo system like no other. Instead of speakers, the Project G had sound globes. Instead of the heavy cabinetry typical of 1960s entertainment consoles, it had sleek, angled rosewood panels balanced on an aluminum stand. At over 2 meters long, it was too big for the average living room but perfect for Hollywood movies—Dean Martin had one in his swinging Malibu bachelor pad in the 1965 film Marriage on the Rocks. According to the 1964 press release announcing the Project G, it was nothing less than “a new sculptured representation of modern sound.”

The first-generation Project G had a high-end Elac Miracord 10H turntable, while later models used a Garrard Lab Series turntable. The transistorized chassis and control panel provided AM, FM, and FM-stereo reception. There was space for storing LPs or for an optional Ampex 1250 reel-to-reel tape recorder.

The “G” in Project G stood for “globe.” The hermetically sealed 46-centimeter-diameter sound globes were made of spun aluminum and mounted at the ends of the cantilevered base; inside were Wharfedale speakers. The sound globes rotated 340 degrees to project a cone of sound and could be tuned to re-create the environment in which the music was originally recorded—a concert hall, cathedral, nightclub, or opera house.

Between 1965 and 1967, Clairtone sponsored the Miss Canada beauty pageant. Diane Landry, Miss Canada 1963, poses here with a Project G2 at Clairtone’s factory showroom in Rexdale, Ontario. Photo: Nina Munk/The Peter Munk Estate

Initially, Clairtone intended to produce only a handful of the stereos. As one writer later put it, it was more like a concept car “intended to give Clairtone an aura of futuristic cool.” Eventually fewer than 500 were made. But the Project G still became an icon of mod ’60s Canadian design, winning a silver medal at the 13th Milan Triennale, the international design exhibition.

And then it was over; the dream had ended. Eleven years after its founding, Clairtone collapsed, and Munk and cofounder David Gilmour lost control of the company.

The birth of Clairtone Sound Corp.

Clairtone’s Peter Munk lived a colorful life, with a nightmarish start and many fantastic and dreamlike parts too. He was born in 1927 in Budapest to a prosperous Jewish family. In the spring of 1944, Munk and 13 members of his family boarded a train with more than 1,600 Jews bound for the Bergen-Belsen concentration camp. They arrived, but after some weeks the train moved on, eventually reaching neutral Switzerland. It later emerged that the Nazis had extorted large sums of cash and valuables from the occupants in exchange for letting the train proceed.

As a teenager in Switzerland, Munk was a self-described party animal. He enjoyed dancing and dating and going on long ski trips with friends. Schoolwork was not a top priority, and he didn’t have the grades to attend a Swiss university. His mother, an Auschwitz survivor, encouraged him to study in Canada, where he had an uncle.

Before he could enroll, though, Munk blew his tuition money entertaining a young woman during a trip to New York. He then found work picking tobacco, earned enough for tuition, and graduated from the University of Toronto in 1952 with a degree in electrical engineering.

Clairtone cofounders Peter Munk [left] and David Gilmour envisioned the company as a luxury brand. Photo: Nina Munk/The Peter Munk Estate

At the age of 30, Munk was making custom hi-fi sets for wealthy clients when he and David Gilmour, who owned a small business importing Scandinavian goods, decided to join forces. Their idea was to create high-fidelity equipment with a contemporary Scandinavian design. Munk’s father-in-law, William Jay Gutterson, invested $3,000. Gilmour mortgaged his house. In 1958, Clairtone Sound Corp. was born.

From the beginning, Munk and Gilmour sought a high-end clientele. They positioned Clairtone as a luxury brand, part of an elegant lifestyle. If you were the type of woman who listened to music while wearing pearls and a strapless gown and lounging on a shag rug, your music would be playing on a Clairtone. If you were a man who dressed smartly and owned an Arne Jacobsen Egg chair, you would also be listening on a Clairtone. That was the modern lifestyle captured in the company’s advertisements.

In 1958, Clairtone produced its first prototype: the monophonic 100-M, which had a long, low cabinet made from oiled teak, with a Dual 1004 turntable, a Granco tube chassis, and a pair of Coral speakers. It never went into production, but the next model, the stereophonic 100-S, won a Design Award from Canada’s National Industrial Design Council in 1959. By 1963, Clairtone was selling 25,000 units a year.

Peter Munk visits the Project G assembly line in 1965. Photo: Nina Munk/The Peter Munk Estate

Design was always front and center at Clairtone, not just for the products but also for the typography, advertisements, and even the annual reports. Yet nothing in the early designs signaled the dramatic turn the company would take with the Project G. That came about because of Hugh Spencer.

Spencer was not an engineer, nor did he have experience designing consumer electronics. His day job was designing sets for the Canadian Broadcasting Corp. He consulted regularly with Clairtone on the company’s graphics and signage. The only stereo he ever designed for Clairtone was the Project G, which he first modeled as a wooden box with tennis balls stuck to the sides.

From both design and quality perspectives, Clairtone was successful. But the company was almost always hemorrhaging cash. In 1966, with great fanfare and large government incentives, the company opened a state-of-the-art production facility in Nova Scotia. It was a mismatch. The local workforce didn’t have the necessary skills, and the surrounding infrastructure couldn’t handle the production. On 27 August 1967, Munk and Gilmour were forced out of Clairtone, which became the property of the government of Nova Scotia.

Despite the demise of their first company (and the government inquiry that followed), Munk and Gilmour remained friends and went on to become serial entrepreneurs. Their next venture? A resort in Fiji, which became part of a large hotel chain in that country, Australia, and New Zealand. (Gilmour later founded Fiji Water.) Then Munk and Gilmour bought a gold mine and cofounded Barrick Gold (now Barrick Mining Corp., one of the largest gold mining operations in the world). Their businesses all had ups and downs, but both men became extremely wealthy and noted philanthropists.

Preserving Canadian design

As an example of iconic design, the Project G seems like an ideal specimen for museum collections. And in 1991, Frank Davies, one of the designers who worked for Clairtone, donated a Project G to the recently launched Design Exchange in Toronto. It would be the first object in the DX’s permanent collection, which sought to preserve examples of Canadian design. The museum quickly became Canada’s center for the promotion of design, hosting more than 50 programs each year to teach people about how design influences every aspect of our lives.

In 2008, the museum opened The Art of Clairtone: The Making of a Design Icon, 1958–1971, an exhibition showcasing the company’s distinctive graphic design, industrial design, engineering, and photography.

David Gilmour’s wife, Anna Gilmour, was the company’s first in-house model. Photo: Nina Munk/The Peter Munk Estate

But what happened to the DX itself is a reminder that any museum, however worthy, shouldn’t be taken for granted. In 2019, the DX abruptly closed its permanent collection, and curators were charged with deaccessioning its objects. Fortunately, the Royal Ontario Museum, Carleton and York Universities, and the Archives of Ontario, among others, were able to accept the artifacts and companion archives. (The Project G pictured at top is now at the Royal Ontario Museum.)

Researchers at York and Carleton have been working to digitize and virtually reconstitute the DX collection, through the xDX Project. They’re using the Linked Infrastructure for Networked Cultural Scholarship (LINCS) to turn interlinked and contextualized data about the collection into a searchable database. It’s a worthy goal, even if it’s not quite the same as having all of the artifacts and supporting papers physically together in one place. I admit to feeling both pleased about this virtual workaround, and also a little sad that a unified collection that once spoke to the historical significance of Canadian design no longer exists.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the February 2026 print issue as “The Project G Stereo Defined 1960s Cool.”

Source link

Continue Reading

Tech

Epstein-linked longevity guru Peter Attia leaves David Protein, and his own startup ‘won’t comment’

Published

on

The founder of David Protein, maker of popular high-protein nutrition bars, announced on X on Monday that longevity guru Dr. Peter Attia “has stepped down from his role as Chief Science Officer at David.”

The announcement comes after Attia’s name appeared in more than 1,700 documents, including email correspondence, released on Friday as part of a massive file dump related to convicted sex offender Jeffrey Epstein, according to The New York Times. Attia served on the executive team of the food startup and was also an early investor.

For those unfamiliar, Attia is a Canadian American physician who has become one of the most prominent voices in longevity and preventive health. He’s best known for his bestselling book “Outlive: The Science and Art of Longevity” and his now seven-year-old podcast, wherein he explores optimization strategies. He was also hired just last month as a contributor to CBS.

Three-year-old, New York-based David Protein raised a $75 million Series A funding round in May of last year led by Greenoaks, with participation from Valor Equity Partners. The company has experienced significant growth since launching its flagship protein bar in September 2024, a product it describes as having 28 grams of protein, zero sugar, and 150 calories.

In a lengthy post on X, Attia wrote that he was “ashamed” of some of the crude content in his emails with Epstein, but he also said he was not involved in criminal activity and never visited Epstein’s island or traveled on his plane. Attia also discussed at length how he came to know Epstein and why he stayed involved with him even after Epstein’s 2008 conviction.

The fallout appears to extend beyond David Protein. Biograph, the healthcare testing and longevity startup that Attia co-founded with entrepreneur John Hering, also seems to be distancing itself from the physician. The company declined to comment on Attia’s ongoing participation with the startup or on the pages of its website that used to mention him but now omit his name or return a “file not found” error.

Biograph came out of stealth a year ago, TechCrunch previously reported, with backing from investors that include Vy Capital, Human Capital, Alpha Wave, and WndrCo, along with angel investors, including Balaji Srinivasan. Like a growing number of concierge medical service companies, Biograph offers premium preventive health services to subscribers who pay between $7,500 and $15,000 per year. Attia was previously named on the company’s press release and site as a co-founder.

Source link

Continue Reading

Trending

Copyright © 2025