To help you acquire the skills you need to distinguish yourself from other cybersecurity job candidates, the IEEE Computer Society offers a “What Makes a Great Cybersecurity Consultant” guide. The 23-page PDF includes hard and soft skills you need, a list of certifications to pursue, and key IEEE cybersecurity conferences for staying updated on developments in the field.
“Technology, remote work, and a shortage of skilled workers make this the ideal time to consider becoming a cybersecurity consultant,” Johnson says in the guide. “Consulting can give you the flexibility, variety, and control over where you want your career to go.”
Hard and soft skills
At a minimum, cybersecurity professionals should have a general understanding of IT including operating systems, communication protocols, network architecture, and programming languages such as C++, Java, and Python. They also should be well-versed in security auditing, firewall management, penetration testing, and encryption technologies.
The principles of ethical hacking and coding would be handy as well.
“To be able to defend a system well, you first have to know how to attack it,” Rodriguez says.
Advertisement
The guide explains that there are now more technologies available to help cybersecurity consultants monitor threats and protect systems. They include security orchestration, automation, and response (SOAR) platforms, which automate workflows to collect security data, streamline incident response, and automate repetitive tasks.
Rodriguez points to advances in domain name system security extensions (DNSSEC), which uses digital signatures based on public-key cryptography to strengthen the authentication of the domain name system. By validating data authenticity, DNSSEC safeguards against attacks such as DNS spoofing and guarantees that users connect to the correct IP address.
Although hard skills are important, soft skills are just as crucial, according to the guide. Critical thinking, project management, flexibility, teamwork, and organizational and presentation skills are essential.
Advertisement
It’s not enough to be good at analyzing security vulnerabilities; you also need to clearly describe the situation and explain possible solutions.
“Soft skills are important to achieve good team cohesion,” Rodriguez says, “because consultants often lead diverse teams from within their client’s organization.”
“It’s essential,” Johnson adds, “that you demonstrate to clients you’re a team player and a capable communicator, and that you meet your commitments.”
Security certifications
Possessing security-specific credentials is a valuable way to demonstrate your expertise to potential clients, according to the guide. Because hundreds of certifications are available, Johnson says, pinpointing the most relevant ones can be challenging. Some people focus on theoretical knowledge, while others want to cover practical applications of technology.
Advertisement
“Survey the industry and compare it to your skills,” Johnson recommends. “Decide what you want to do, and identify where you have gaps in your skills and experience.”
Here are four of the nine certifications listed in the guide that are frequently cited as being important. All the providers are cybersecurity organizations.
Additional industry-specific certifications might be required for organizations in finance, government, health care, or manufacturing.
Sound general knowledge—backed by experience, training, and certification—is an essential foundation for being a specialist, Johnson says.
Advertisement
Conferences and networking opportunities
Events sponsored by the IEEE Computer Society can help you learn about the latest research and advancements in cybersecurity:
Conferences can give you insight into the field and let you do some networking, but it’s important to network elsewhere as well, experts say. Consider joining the IEEE Technical Community on Security and Privacy, which connects experts and professionals advancing research in areas such as encryption, operating system security, and data privacy.
Learning and meeting people keeps your knowledge sharp and can lead to mentorship opportunities with established cybersecurity consultants, Johnson says.
As AI model providers increasingly move downstream, launching products and agents for specific enterprise applications and sectors like finance, one big question still remains: how will said AI agents be equipped with the proper context surrounding a task — who assigned it, which other stakeholders are involved, what data or discussions have taken place about it and how it should be done?
This practice of “context engineering” remains one of the great unsolved problems of the AI era. But SageOx, a Seattle-based startup founded by the veterans who built the original AWS EC2 and EBS infrastructure, believes it has the answer: a new systems layer it calls “agentic context infrastructure.”
Using a combination of small hardware recording devices and the existing applications enterprises already rely on — Slack, email, documents, files — and applying new, open-source frameworks and instructions atop it all, SageOX has developed a system by which enterprises can keep agents as “in-the-loop” and updated on the enterprise’s tasks as their human employees are, and prevent them from “drifting” off their assigned tasks and the firm’s larger goals.
“We are capturing all of this context where it happens,” said Ajit Banerjee, founder and CEO of SageOX and a former Hugging Face, Meta, Amazon and Apple engineer said in a recent video call interview with VentureBeat. “Product development is a team sport, and the context doesn’t just come from people typing on a keyboard. It happens in conversations.”
Advertisement
SageOX founding team. Credit: SageOX
By capturing the “why” behind the “what”—the intent that lives in Slack threads, whiteboarding sessions, and water-cooler conversations—SageOx aims to provide a “hivemind” that ensures agents don’t drift and humans stay in flow.
“The way people have to work is not old-school coordination, where I write down an issue and then it goes through a sequence. It has to be almost like playing jazz,” Banerjee added.
Today, the company emerged from stealth to announce its $15 million seed round led by Canaan and participation
from A.Capital, Pioneer Square Labs, and Founders’ Co-op.
Today’s AI agents operate in isolated sessions, lacking a shared memory of prior decisions or architectural intent.
Every task effectively starts from scratch, forcing developers to manually recap context—a process that undermines the very speed agents are meant to provide. SageOx addresses this through a multi-surface product suite designed to capture context wherever it naturally occurs.
At the center of this ecosystem is the Ox Dot. A customized hardware device designed for the shared office, the Dot captures meetings, standups, and design reviews with a single touch.
Advertisement
Its most distinctive feature is “Auto Rewind”—a fail-safe for the spontaneous brilliance of a team. If a breakthrough happens during an unrecorded conversation, Auto Rewind allows the team to “go back” and capture the discussion after the fact. This audio is transcribed, speaker-identified, and distilled into team memory, where it becomes accessible to both humans and agents.
For the developer, the open-source, MIT-licensedOx CLI provides the bridge. Commands like ox agent prime allow coding assistants—including Claude Code and Codex—to consult the team’s shared history before writing code. This ensures that if a team decided in a meeting to use a specific authentication pattern, the agent knows it without being explicitly told in a prompt.
As Dr. Rupak Majumdar, Scientific Director, Max Planck Institute for Software Systems, noted after seeing the team’s development speed, they are effectively “treating code like assembler.”
Agentic engineering: moving Beyond “clean” code
The shift to an agent-first workflow has forced the SageOx team to reconsider nearly every principle of modern software management.
In the agentic era, 10,000-line PRs spread across the codebase make it impossible for an agent to reason about intent.
Instead, SageOx advocates for smaller, high-volume, and highly focused commits. This “agent-readable” history allows the machine to look back and understand exactly why a specific change was made. The team is even re-evaluating repo structures; while they currently utilize a monorepo for their 750,000 lines of code, they are exploring a future where agents manage a constellation of micro-repos, as agents can “get lost” when a codebase grows too large for their context window.
This philosophy of “speed-over-stasis” allowed the team to build their own firmware for the Ox Dot in less than two weeks, despite having no recent hardware experience.
Advertisement
By feeding technical PDFs and documentation into AI models, they bypassed months of traditional research. CEO Ajit Banerjee calls this the “unlearning” of old habits—realizing that the “undifferentiated heavy lifting” of knowledge work can now be offloaded to a system that remembers everything the team knows.
Radical transparency: beyond open source to an “open work” model
Perhaps as significant as the technology is SageOx’s commitment to “Open Work.” Moving beyond traditional open-source software, the company is practicing a form of radical transparency in an effort to foster the acceleration of development across the entire open source community and any enterprises who wish to learn from the way they work.
SageOx’s team openly shares their internal prompts, their planning sessions, and even their unfiltered internal debates with the public. Users can sign in to the SageOx console and watch the team build SageOx in real-time.
This “open kimono” approach was an intentional decision to lead by example. Banerjee argues that since they are asking teams to change how they work, they must be willing to show the “WTF” moments and the course corrections as they happen.
Advertisement
“The revolution is not going to be televised,” Banerjee says. “It’s going to be SageOxed.”
This transparency is intended to prove that a small, lean team—”yoking up lean”—can outpace massive organizations by leveraging a shared context layer.
As for how SageOx plans to monetize and become profitable, Banerjee said the revenue path is modeled on the AWS EC2 playbook: start with early adopters, especially small AI-native startups, then expand toward enterprises as the need becomes obvious.
The pedigree of infrastructure
The technical foundation of SageOx is rooted in the early days of cloud infrastructure.
Advertisement
Banerjee was an original member of the AWS EC2 team, and Snodgrass was one of Amazon’s first engineers, leading the transition from monolithic architectures to microservices.
This background is reflected in the company’s name: the “Ox” represents the “Yeoman work” they aim to do—a dependable animal that handles the heavy lifting of data and context so the team can move forward.
The SageOx vision is one where humans are no longer the manual assemblers of context.
Instead, they act as the directors of a “parallel processing” engine.
Advertisement
In a recent demonstration, a feature request moved from a verbal discussion to a completed implementation in under seven minutes. By priming coding agents with the recorded context of the original discussion, the team bypassed the need for formal specs or Jira tickets.
The new way of work
SageOx is currently focusing its efforts on “AI-native” startups—teams that operate primarily through prompts and rely heavily on agentic coworkers.
Their suite of tools, from the open-source Ox CLI to the hardware-enabled Ox Dot, is designed to solve the immediate problem of alignment drift.
As AI moves from being a tool to a teammate, the most valuable asset a company possesses is no longer its raw source code, but its shared context.
Advertisement
SageOx suggests that the way forward is not to hoard information behind “private fences,” but to create a communal ground where intent is visible to every teammate—human or machine. In this new epoch, the teams that win will be the ones that can remember as fast as they can execute.
Communication with satellites often involves the use of high-gain directional antennas coupled with careful positioning to find and track the target. With a geostationary satellite the mount is either fixed or a single-axis polar mount, but when the craft is moving in a different orbit it becomes more of a challenge to stay locked on. An azimuth-elevation mount is needed to cover the whole sky, and [Ham Radio Passion] has one as a work in progress. It’s 3D printed and looks straightforward, making it a project to watch.
An az-el mount has two parts, the first being a turntable to set the azimuth, and the second being a horizontal rotating axis to set the elevation. He’s mounting the antenna to a piece of aluminium extrusion and driving it through a set of 3D printed gears driven from a 360 degree servo with a worm drive. He explains why the servo makes more sense to him here.
The result is not yet a finished project, but it shows enough promise to make it worth keeping an eye on. It’s by no means big enough for a huge antenna array, but we can imagine antennas for higher frequencies would be well within its capabilities. Meanwhile it’s certainly not the first az-el mount we’ve seen.
We all should know by now that Nintendo is incredibly protective of its IP. When it comes to anything having to do with Pokémon specifically, all the more so. While they would tell you that they’re just protecting their IP, the end result is that some of the biggest Pokémon fans out there that just want to do some fun things that represent no harm to Nintendo get shut down by threats, lawyers, or copyright strikes.
Take the YouTube series called PokeNational Geographic, for instance. While this YouTube series has been pushing out faux nature documentary videos about Pokémon for several years, the channel behind it just got hit with a bunch of copyright strikes from Nintendo.
In a video posted to an alternate channel, Elious says that Nintendo of America suddenly issued numerous strikes on large batches of his videos, all in the space of 12 hours. At the time he posted the video, a total of 20 videos had been caught up in four separate copyright strikes which encompass the entirety of the videos. With YouTube’s three-strikes policy, this means his channel is now pending deletion by YouTube and will disappear in seven days.
Elious says the strikes claim his channel is inappropriately using “content used in Pokémon video games including audiovisual works, characters, and imagery.” Elious’ videos consist of original 3D animation of various Pokémon in the “wild,” with a David Attenborough–style narration sharing various facts about Pokémon like Magikarp, Squirtle, Magnemite, Snom, Mew, Charizard, and more. He has been producing these videos on this channel since as far back as 2023 without issue, and claims in his video that the only actual content he took directly from the games was “tiny sprite roars” that last less than three seconds, adding that numerous other Pokémon creators on YouTube, as well as AI-produced channels mimicking his own, use images or footage directly from the games with no issue.
So, why now? There’s no way to know for sure, but Elious did recently launch a Patreon account so that fans could compensate them for the series. The general speculation is that once Elious attempted to make any kind of money from his video series, that spurred Nintendo to send the copyright strikes. And for many people, that will make complete sense.
Advertisement
I don’t understand that point of view. Regardless of any money changing hands, this still doesn’t represent any threat or harm to Nintendo or the Pokémon franchise. If anything, fun little fan videos like this only propel interest in the product. They represent free engagement lures for fans of Pokémon. Why in the world is copyright striking this channel to hell a better option than working out a free or cheap licensing arrangement with Elious so that they can keep producing the series and Nintendo can reap some of the benefit?
Or, hell, Nintendo could have tried to have a conversation with Elious, at least.
Elious continues by saying that he isn’t opposed to just deleting all the Pokémon videos if Nintendo of America asks, but he wishes he could keep his nearly 100,000 subscribers so he can keep making videos of other things, as he has on the channel in the past.
“I can’t really fight this,” Elious says. “It all seems legitimate, it does seem to come from the actual, real Nintendo of America. I can’t fight this. I don’t…I don’t know what to do about it because it’ll remove everything. I’m downloading stuff, of course, I have like, all the videos myself. But I’ll never be able to post them again, and I’ll never be able to use this channel again. Almost 100,000 subscribers over three years of making these animations and it’s all going to be gone in seven days.”
It’s simply too bad that Nintendo would rather worship at the altar of intellectual property than get creative with how it can support its fans. Thanks to IP maximalist thought, here is just a little more fun that Nintendo has flushed down the toilet.
China’s AI chatbot labs are attracting big investors, as DeepSeek is also reported to be raising at a $45bn valuation.
In a week where rival DeepSeek is reported to be raising at a $45bn valuation, the makers of the pupular Kimi AI models has beaten it to the headlines with its own $2bn raise.
This $2bn round was led by Meituan Dragonball, with participation from Tsinghua Capital, China Mobile, and CPE Yuanfeng, among others, according to a statement from Huafeng Capital, the financial advisor to some of the investors in the transaction.
Advertisement
The news comes as the Financial Times cites sources saying its biggest rival DeepSeek could be valued at around $45bn as it looks set to raise some $4bn to $5bn in coming days.
DeepSeek took the world by storm in January 2025 when it released its powerful large language model R1, sending Silicon Valley leaders into a flurry, especially as the start-up claimed that its model was leagues cheaper than its US competitors – taking only $5.6m to train – while performing on par with models from industry heavyweights like OpenAI and Anthropic.
Moonshot’s Kimi models were the first from a major Chinese competitor to take DeepSeek on, and today its K2.6 model ranks in OpenRouter’s top three in the world for token usage.
Tsinghua University graduate Yang Zhilin founded Moonshot in 2023 and it has influential backers including Chinese e-commerce giant Alibaba. It had strong initial success with Kimi, thanks in part to its AI search functions and long text analysis. DeepSeek’s release saw it lose ground at the time, but various iterations of Kimi 2 since have seen it grow in popularity among developers.
Advertisement
In March of this year, San Fracisco-based AI darling Cursor had to come clean and admit its latest model was based on Kimi 2.5, after it was spotted by an eagle-eyed user and posted on X. Cursor has since inked a deal with SpaceX that allows Elon Musk’s company to acquire Cursor for $60bn.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Roles span eGates, passports, visas, asylum applications, and enterprise services – yours for up to £105K
The Home Office’s digital division
is recruiting three chief technology officers (CTOs) for
migration and borders and enterprise services, each paid
£81,000 to £105,000 a year.
It is looking for two CTOs for
Migration and Borders Digital, which runs passport control eGates and
electronic travel authorizations, which people notice when they go down or start working differently. The unit’s other high-profile systems include those
supporting passenger data services, digital identity, visas, asylum
applications, and immigration status.
Advertisement
“Applying for a passport is now a
seamless, self-service experience where renewals are printed and
dispatched in just 48 hours,” writes Mike McCarthy, the
department’s director general for digital and innovation, in
material published with the job ad. “Our airport eGates support 76
million UK border crossings each year, with digitally assisted
electronic travel authorisation decisions made in just 45 seconds.”
“These aren’t just technical
achievements. They are real, measurable changes to improve millions
of people’s lives, and we’re extremely proud of the difference
we’ve made so far,” he adds of Home Office Digital, the name the department has adopted for its IT function.
McCarthy is himself a recent recruit,
having joined the Home Office in January after working for consultancy McKinsey and spending eight years in the British Army’s Corps of Royal Engineers. According to the job ad from last September, he is paid £160,000 and
oversees 4,000 people with a budget of £1.8 billion.
Home Office Digital is also looking
for a CTO for its enterprise services unit, which designs, builds, and
operates core services including networks, end user services, and
operational support for more than 35,000 users. McCarthy writes that
the department has “moved most of our technology services into the
cloud, saving money while boosting efficiency.”
Advertisement
The department expects successful
applicants to agree to serve for at least three years, although this
is not a contractual requirement, and undertake the Security Check
level of national security clearance. They can be based in Cardiff,
Croydon, Glasgow, Manchester, or Sheffield. Applications close at 11:55pm BST on Sunday, May 24, with interviews expected to take place in early July. ®
The explosion of AI usage since 2023 is unprecedented. In terms of adoption, AI is moving faster than cloud, faster than mobile, and certainly faster than the internet did. Research group Gartner predicts that 80% of enterprises will deploy AI tools this year.
Donnchadh Casey
VP for AI Security at F5.
Advertisement
When we classify a company’s journey through AI adoption, we see maturity falling into four categories:
Category 1 is general purpose AI and productivity – think employees using ChatGPT, Gemini, CoPilot, etc
Category 2 is when organizations have internal use cases, building custom chatbots for HR or IT, for example
Category 3 includes external use cases like building public-facing GenAI applications, like customer service chatbots
Category 4 is agentic workflows which are made up of complex systems that take actions autonomously on behalf of users
These categories often run in parallel rather than in sequence, but it is in the last three categories that security becomes critical. That’s because organizations are building complex software on top of non-deterministic AI models, creating vulnerabilities that traditional firewalls simply cannot see.
Article continues below
Security is always a priority for business but, with AI, the concern is different – it’s a blind spot.
Security leaders have spent 20 years deploying and configuring firewalls and web application firewalls (WAFs) to protect the network, but those tools look at network traffic and usage, whereas AI attacks use natural language – and you can’t firewall a conversation.
That’s why 75% of CISOs are reporting AI security incidents, because their existing shields simply aren’t designed to catch these threats; why 91% have already detected attempted attacks on their AI infrastructure; and that is exactly why a whopping 94% are now prioritizing testing of their AI systems.
Advertisement
New categories of cognitive attacks
There are plenty of real-world examples of how AI is changing the threat model. A breach at Asana last summer stemmed from a tenant-isolation logic flaw in the MCP server that allowed cross-organization data exposure.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
That’s a classic multi-tenant bug but it’s more dangerous in LLM systems because leaked data appears as fluent language, which makes it much more difficult to detect.
Meanwhile, an incident at Lenovo reflected a different failure: broken trust boundaries. Prompt injection redefined a Lenovo chatbot’s role and the back-end systems trusted its tool requests without enforcing server-side authorization. The issue wasn’t the AI model ignoring rules but authorization being delegated to it.
Advertisement
These are just two examples that map to a much broader emerging risk landscape. Organizations aren’t just dealing with code vulnerabilities any more, they are facing entirely new categories of cognitive attacks, including:
Prompt injection, both direct and indirect
Data poisoning during the training phase
Sophisticated jailbreak techniques like symbolic language attacks
Token compression, where attackers hide malicious instruction in formats that the AI model(s) can read but humans can’t
While traditional security guardrails handle deterministic input, prompt injection and other natural language attacks are semantic problems, not pattern-matching ones. These aren’t isolated bugs; they are systemic business risks introduced by new AI-driven architectures.
The industry is racing to categorize these AI vulnerabilities. There are frameworks emerging like the OWASP Top 10 for GenAI and Agentic Applications, Mitre Atlas and the NIST AI Risk Management Framework but we don’t have a definitive database or unified standard for what secure actually looks like.
Advertisement
The old approach can’t keep up
The pressure on industry right now to ship AI is existential. Developers are using AI to write code ten times faster than ever before; organizations are literally shipping new features, and even products, overnight.
At the same time, regulation is accelerating matters on the compliance side.
The EU AI Act, for example, explicitly calls for adversarial testing for high-risk and general-purpose AI systems. In practice, that means that purpose-built red-teaming – testing AI systems with simulated adversarial attacks – must now be considered a core component of the AI security stack, and in a way that addresses the real-world challenges these systems face.
So, CISOs and security teams are expected to secure changes that are happening at machine speed. How? By manually typing prompts into a chat box? It feels like trying to stop a tsunami with a bucket. The math doesn’t work. The speed doesn’t work. The AI attack surface is fundamentally different and the old approach can’t keep up.
Advertisement
It’s clear that traditional red-teaming is ineffective and AI red-teaming is needed to resolve the tension point of speed versus control. From speaking to customers, helping them to secure their AI systems, there are four key areas we need to consider:
Threat evolution: AI attacks evolve faster than static test suites. As soon as checks are automated, the AI model or the attack changes, and security teams end up maintaining tests instead of reducing risk.
Agent complexity: because AI agents aren’t deterministic systems, once you add retrieval, tools, memory, there are almost infinite permutations. You are no longer testing code, you’re testing a conversation that changes based on context.
Automation and scale: manual red-teaming does not scale for these systems. One chatbot may be manageable. Hundreds or thousands of chatbots are not. You can’t rely on humans to replay thousands of adversarial conversations every time the model or the system prompt is updated
Actionable reporting: findings must be reproduceable and actionable. ‘The bot behaved badly’ is not actionable. Engineers need the conversation parameters and trigger conditions, otherwise the fixes, the remediations, will stall.
Ensuring AI systems behave as intended, even under attack
These are the real-world gaps that security teams are trying to close right now, and the reasons why AI red-teaming is coming to the forefront. For example, one of our customers is a global bank, operating in a highly regulated environment.
When we first engaged with them, they had over 50 AI use cases across HR, procurement and cyber but they couldn’t ship any of them because they couldn’t prove safety to their internal auditors.
AI red-teaming gave the bank the evidence it needed to understand how its AI systems actually behaved – where data could leak, how prompts could be abused, and where controls broke down in their environment.
Advertisement
This customer is taking the findings from red-teaming to improve its defensive posture with custom security controls. This combination allows the bank to scale AI across the business with confidence in their security posture and governance program.
In the public sector, meanwhile, the imperative shifts from voluntary testing to mandatory – guided by agencies including NIST and CISA – such as conducting adversarial stress tests to identify mission-critical risks like the weaponization of biological data.
Here, AI red-teaming isn’t just about reducing risk, it’s about maintaining authority to operate and mission continuity.
In other words, whether you’re protecting customer data or public services, the requirement is the same – continuous, evidence-backed assurance that AI systems behave as intended, even when someone is trying to break them.
Advertisement
Deploying enterprise AI with confidence
It’s clear that enterprises deploying AI need automated testing against known vulnerabilities just to establish a baseline. Context is the new attack surface; static defenses fail against agentic attacks so they must test workloads, not just models.
Finally, compliance is a competitive advantage. With the right reporting, security stops being a blocker and becomes the enabler that gets an enterprise’s AI to market faster. In that world, the 80% of enterprises that plan to deploy AI this year can do so with confidence rather than fear, whatever phase of their journey they’re on.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
The lawsuit alleges that a series of updates pushed to certain Roku-powered TVs introduced recurring issues that, in some cases, rendered the devices unusable. The models named include Roku Select Series and Roku Plus Series sets, along with TCL’s 3-, 4-, 5-, and 6-series TVs running Roku OS. Read Entire Article Source link
Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. “The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts,” reports Digital Foundry. From the report: The Valve release includes files for the external shell (“surface topology”) of the Controller and Puck, with a .STP, .STL and engineering diagram of each device, with the latter showing areas that must remain uncovered to let the device maintain its signal strength and otherwise function as designed. Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected.
The release is under a fairly restrictive Creative Commons license which allows for non-commercial use and requires attribution and sharing of designs back to the community. However, the license also suggests that commercial entities interested in making accessories for the Steam Controller or its Puck can contact Valve directly to discuss terms. You can find the files here.
Sam Altman’s management style came under scrutiny on the seventh day of Elon Musk’s high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman’s brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI’s former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI’s interim CEO after the board briefly ousted Sam Altman. Murati’s testimony focused on her concerns about Altman’s “difficult and chaotic” management style. She said Altman had trouble “making decisions on big controversial things.” He also had a habit of telling people what they wanted to hear.
“My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with,” said Murati. Murati said that her issue with Altman was not about safety, “it is about Sam creating chaos.” She said she supported Altman’s return to OpenAI because the company “was at catastrophic risk of falling apart” at the time of his ousting. “I was concerned about the company completely blowing up.”
Zilis said she was upset that Altman rolled out ChatGPT without involving the board. “It wasn’t just me but the entire board raised concern about that whole thing happening without any board communication,” she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It “felt super out of left field,” she said. “How is it the case that we want to place a major bet on a speculative technology?”
In a video deposition, Helen Toner, a former member of OpenAI’s board who resigned in 2023, said she first became aware of ChatGPT’s release when an OpenAI employee asked another board member whether the board was aware of the development. […] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. “There were a number of things — the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two os his inner management team raised to the board about his management practices, his manipulation of board processes,” said Toner. Recap:
Get the lowest price ever on AirPods Max 2 over-ear headphones.
Apple’s new AirPods Max 2 have dropped to the lowest price ever, making now a great time to pick up the over-ear headphones as a gift for Mom this Mother’s Day.
AirPods Max 2 are now $40 off at Amazon and Walmart, as both retailers compete for your business this week.
With Mother’s Day on May 10, there’s still time to pick up a pair for Mom and have them delivered by Sunday (check the ETA for your individual shipping address, though, to confirm).
Advertisement
Apple AirPods Max 2 features
AirPods Max 2, which were announced in March 2026, are equipped with Apple’s H2 chip. The chip offers enhanced sound quality and better Active Noise Cancellation (ANC) compared to the first-generation AirPods Max.
About AirPods Max 2
Powered by Apple’s H2 chip
Up to 1.5x more Active Noise Cancellation than first-gen AirPods Max
Transparency mode
Adaptive EQ
Lossless Audio and ultra-low latency audio via a wired USB-C connection (requires a supported service)
If you’re open to buying the first-gen AirPods Max, closeout deals are in effect on remaining inventory, with Amazon running a $100 discount on the purple colorway, bringing the price down to $449.
You must be logged in to post a comment Login