In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials.
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling — which included a kind of “buyer beware” notice to any federal agency considering GCC High — helped Microsoft expand a government business empire worth billions of dollars.
“BOOM SHAKA LAKA,” Richard Wakeman, one of the company’s chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in “The Wolf of Wall Street.” Wakeman did not respond to requests for comment.
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets. But ProPublica’s investigation — drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors — found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company’s products and practices were central to two of the most damaging cyberattacks ever carried out against the government.
FedRAMP first raised questions about GCC High’s security in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information in fits and starts, program officials did not reject Microsoft’s application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft’s product was already being used across Washington.
Today, key parts of the federal government, including the Justice and Energy departments, and the defense sector rely on this technology to protect highly sensitive information that, if leaked, “could be expected to have a severe or catastrophic adverse effect” on operations, assets and individuals, the government has said.
“This is not a happy story in terms of the security of the U.S.,” said Tony Sager, who spent more than three decades as a computer scientist at the National Security Agency and now is an executive at the nonprofit Center for Internet Security.
For years, the FedRAMP process has been equated with actual security, Sager said. ProPublica’s findings, he said, shatter that facade.
“This is not security,” he said. “This is security theater.”
ProPublica is exposing the government’s reservations about this popular product for the first time. We are also revealing Microsoft’s yearslong inability to provide the encryption documentation and evidence the federal reviewers sought.
The revelations come as the Justice Department ramps up scrutiny of the government’s technology contractors. In December, the department announced the indictment of a former employee of Accenture who allegedly misled federal agencies about the security of the company’s cloud platform and its compliance with FedRAMP’s standards. She has pleaded not guilty. Accenture, which was not charged with wrongdoing, has said that it “proactively brought this matter to the government’s attention” and that it is “dedicated to operating with the highest ethical standards.”
Microsoft has also faced questions about its disclosures to the government. As ProPublica reported last year, the company failed to inform the Defense Department about its use of China-based engineers to maintain the government’s cloud systems, despite Pentagon rules stipulating that “No Foreign persons may have” access to its most sensitive data. The department is investigating the practice, which officials say could have compromised national security.
Microsoft has defended its program as “tightly monitored and supplemented by layers of security mitigations,” but after ProPublica’s story published last July, the company announced that it would stop using China-based engineers for Defense Department work.
In response to written questions for this story and in an interview, Microsoft acknowledged the yearslong confrontation with FedRAMP but also said it provided “comprehensive documentation” throughout the review process and “remediated findings where possible.”
“We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary,” a spokesperson said in a statement, adding that the company would “continue to work with FedRAMP to continuously review and evaluate our services for continued compliance.”
The program was an early target of the Trump administration’s Department of Government Efficiency, which slashed its staff and budget. Even FedRAMP acknowledges it is operating “with an absolute minimum of support staff” and “limited customer service.” The roughly two dozen employees who remain are “entirely focused on” delivering authorizations at a record pace, FedRAMP’s director has said. Today, its annual budget is just $10 million, its lowest in a decade, even as it has boasted record numbers of new authorizations for cloud products.
The consequence of all this, people who have worked for FedRAMP told ProPublica, is that the program now is little more than a rubber stamp for industry. The implications of such a downsizing for federal cybersecurity are far-reaching, especially as the administration encourages agencies to adopt cloud-based artificial intelligence tools, which draw upon reams of sensitive information.
The General Services Administration, which houses FedRAMP, defended the program, saying it has undergone “significant reforms to strengthen governance” since GCC High arrived in 2020. “FedRAMP’s role is to assess if cloud services have provided sufficient information and materials to be adequate for agency use, and the program today operates with strengthened oversight and accountability mechanisms to do exactly that,” a GSA spokesperson said in an emailed statement.
The agency did not respond to written questions regarding GCC High.
A “Cloud First” World
About two decades ago, federal officials predicted that the cloud revolution, providing on-demand access to shared computing via the internet, would usher in an era of cheaper, more secure and more efficient information technology.
Moving to the cloud meant shifting away from on-premises servers owned and operated by the government to those in massive data centers maintained by tech companies. Some agency leaders were reluctant to relinquish control, while others couldn’t wait to.
In an effort to accelerate the transition, the Obama administration issued its “Cloud First” policy in 2011, requiring all agencies to implement cloud-based tools “whenever a secure, reliable, cost-effective” option existed. To facilitate adoption, the administration created FedRAMP, whose job was to ensure the security of those tools.
FedRAMP’s “do once, use many times” system was intended to streamline and strengthen the government procurement process. Previously, each agency using a cloud service vetted it separately, sometimes applying different interpretations of federal security requirements. Under the new program, agencies would be able to skip redundant security reviews because FedRAMP authorization indicated that the product had already met standardized requirements. Authorized products would be listed on a government website known as the FedRAMP Marketplace.
On paper, the program was an exercise in efficiency. But in practice, the small FedRAMP team could not keep up with the flood of demand from tech companies that wanted their products authorized.
The slow approval process frustrated both the tech industry, eager for a share of the billions of federal dollars up for grabs, and government agencies that were under pressure to migrate to the cloud. These dynamics sometimes aligned the cloud industry and agency officials against FedRAMP. The backlog also prompted many agencies to take an alternative path: performing their own reviews of the products they wanted to adopt, using FedRAMP's standards.
It was through this “agency path” that GCC High entered the federal bloodstream, with the Justice Department paving the way. Initially, some Justice officials were nervous about the cloud and who might have access to its information, which includes highly sensitive court and law enforcement records, a Justice Department official involved in the decision told ProPublica. The department’s cybersecurity program required it to ensure that only U.S. citizens “access or assist in the development, operation, management, or maintenance” of its IT systems, unless a waiver was granted. Justice’s IT specialists recommended pursuing GCC High, believing it could meet the elevated security needs, according to the official, who spoke on condition of anonymity because they were not authorized to discuss internal matters.
Pursuant to FedRAMP’s rules, Microsoft had GCC High evaluated by a so-called third-party assessment organization, which is supposed to provide an independent review of whether the product has met federal standards. The Justice Department then performed its own evaluation of GCC High using those standards and ruled the offering acceptable.
By early 2020, Melinda Rogers, Justice’s deputy chief information officer, made the decision official and soon deployed GCC High across the department.
It was a milestone for all involved. Rogers had ushered the Justice Department into the cloud, and Microsoft had gained a significant foothold in the cutthroat market for the federal government’s cloud computing business.
Moreover, Rogers’ decision placed GCC High on the FedRAMP Marketplace, the government’s influential online clearinghouse of all the cloud providers that are under review or already authorized. Its mere mention as “in process” was a boon for Microsoft, amounting to free advertising on a website used by organizations seeking to purchase cloud services bearing what is widely seen as the government’s cybersecurity seal of approval.
That April, GCC High landed at FedRAMP’s office for review, the final stop on its bureaucratic journey to full authorization.
Microsoft’s Missing Information
In theory, there shouldn’t have been much for FedRAMP’s team to do after the third-party assessor and Justice reviewed GCC High, because all parties were supposed to be following the same requirements.
But it was around this time that the Government Accountability Office, which investigates federal programs, discovered breakdowns in the process, finding that agency reviews sometimes were lacking in quality. Despite missing details, FedRAMP went on to authorize many of these packages. Acknowledging these shortcomings, FedRAMP began to take a harder look at new packages, a former reviewer said.
This was the environment in which Microsoft’s GCC High application entered the pipeline. The name GCC High was an umbrella covering many services and features within Office 365 that all needed to be reviewed. FedRAMP reviewers quickly noticed key material was missing.
The team homed in on what it viewed as a fundamental document called a “data flow diagram,” former members told ProPublica. The illustration is supposed to show how data travels from Point A to Point B — and, more importantly, how it’s protected as it hops from server to server. FedRAMP requires data to be encrypted while in transit to ensure that sensitive materials are protected even if they’re intercepted by hackers.
But when the FedRAMP team asked Microsoft to produce the diagrams showing how such encryption would happen for each service in GCC High, the company balked, saying the request was too challenging. So the reviewers suggested starting with just Exchange Online, the popular email platform.
“This was our litmus test to say, ‘This isn’t the only thing that’s required, but if you’re not doing this, we are not even close yet,’” said one reviewer who spoke on condition of anonymity because they were not authorized to discuss internal matters. Once they reached the appropriate level of detail, they would move from Exchange to other services within GCC High.
It was the kind of detail that other major cloud providers such as Amazon and Google routinely provided, members of the FedRAMP team told ProPublica. Yet Microsoft took months to respond. When it did, the former reviewer said, it submitted a white paper that discussed GCC High’s encryption strategy but left out the details of where on the journey data actually becomes encrypted and decrypted — so FedRAMP couldn’t assess that it was being done properly.
A Microsoft spokesperson acknowledged that the company had “articulated a challenge related to illustrating the volume of information being requested in diagram form” but “found alternate ways to share that information.”
Rogers, who was hired by Microsoft in 2025, declined to be interviewed. In response to emailed questions, the company provided a statement saying that she “stands by the rigorous evaluation that contributed to” her authorization of GCC High. A spokesperson said there was “absolutely no connection” between her hiring and the decisions in the GCC High process, and that she and the company complied with “all rules, regulations, and ethical standards.”
The Justice Department declined to respond to written questions from ProPublica.
A Fight Over “Spaghetti Pies”
As 2020 came to a close, a national security crisis hit Washington that underscored the consequences of cyber weakness. Russian state-sponsored hackers had been quietly working their way through federal computer systems for much of the year and vacuuming up sensitive data and emails from U.S. agencies — including the Justice Department.
At the time, most of the blame fell on a Texas-based company called SolarWinds, whose software provided hackers their initial opening and whose name became synonymous with the attack. But, as ProPublica has reported, the Russians leveraged that opening to exploit a long-standing weakness in a Microsoft product — one that the company had refused to fix for years, despite repeated warnings from one of its engineers. Microsoft has defended its decision not to address the flaw, saying that it received “multiple reviews” and that the company weighs a variety of factors when making security decisions.
In the aftermath, the Biden administration took steps to bolster the nation’s cybersecurity. Among them, the Justice Department announced a cyber-fraud initiative in 2021 to crack down on companies and individuals that “put U.S. information or systems at risk by knowingly providing deficient cybersecurity products or services, knowingly misrepresenting their cybersecurity practices or protocols, or knowingly violating obligations to monitor and report cybersecurity incidents and breaches.”
Deputy Attorney General Lisa Monaco said the department would use the False Claims Act to pursue government contractors “when they fail to follow required cybersecurity standards — because we know that puts all of us at risk.”
But if Microsoft felt any pressure from the SolarWinds attack or from the Justice Department’s announcement, it didn’t manifest in the FedRAMP talks, according to former members of the FedRAMP team.
The discourse between FedRAMP and Microsoft fell into a pattern. The parties would meet. Months would go by. Microsoft would return with a response that FedRAMP deemed incomplete or irrelevant. To bolster the chances of getting the information it wanted, the FedRAMP team provided Microsoft with a template, describing the level of detail it expected. But the diagrams Microsoft returned never met those expectations.
“We never got past Exchange,” one former reviewer said. “We never got that level of detail. We had no visibility inside.”
In an interview with ProPublica, John Bergin, the Microsoft official who became the government’s main contact, acknowledged the prolonged back-and-forth but blamed FedRAMP, equating its requests for diagrams to a “rock fetching exercise.”
“We were maybe incompetent in how we drew drawings because there was no standard to draw them to,” he said. “Did we not do it exactly how they wanted? Absolutely. There was always something missing because there was no standard.”
A Microsoft spokesperson said without such a standard, “cloud providers were left to interpret the level of abstraction and representation on their own,” creating “inconsistency and confusion, not an unwillingness to be transparent.”
But even Microsoft’s own engineers had struggled over the years to map the architecture of its products, according to two people involved in building cloud services used by federal customers. At issue, according to people familiar with Microsoft’s technology, was the decades-old code of its legacy software, which the company used in building its cloud services.
One FedRAMP reviewer compared it to a “pile of spaghetti pies.” The data’s path from Point A to Point B, the person said, was like traveling from Washington to New York with detours by bus, ferry and airplane rather than just taking a quick ride on Amtrak. And each one of those detours represents an opportunity for a hijacking if the data isn’t properly encrypted.
Other major cloud providers such as Amazon and Google built their systems from the ground up, said Sager, the former NSA computer scientist, who worked with all three companies during his time in government.
Microsoft’s system is “not designed for this kind of isolation of ‘secure’ from ‘not secure,’” Sager said.
A Microsoft spokesperson acknowledged the company faces a unique challenge but maintained that its cloud products meet federal security requirements.
“Unlike providers that started later with a narrower product scope, Microsoft operates one of the broadest enterprise and government platforms in the world, supporting continuity for millions of customers while simultaneously modernizing at scale,” the spokesperson said in emailed responses. “That complexity is not ‘spaghetti,’ but it does mean the work of disentangling, isolating, and hardening systems is continuous.”
The spokesperson said that since 2023, Microsoft has made “security‑first architectural redesign, legacy risk reduction, and stronger isolation guarantees a top, company‑wide priority.”
Assessors Back-Channel Cyber Concerns
The FedRAMP team was not the only party with reservations about GCC High. Microsoft’s third-party assessment organizations also expressed concerns.
The firms are supposed to be independent but are hired and paid by the company being assessed. Acknowledging the potential for conflicts of interest, FedRAMP has encouraged the assessment firms to confidentially back-channel to its reviewers any negative feedback that they were unwilling to bring directly to their clients or reflect in official reports.
In 2020, two third-party assessors hired by Microsoft, Coalfire and Kratos, did just that. They told FedRAMP that they were unable to get the full picture of GCC High, a former FedRAMP reviewer told ProPublica.
“Coalfire and Kratos both readily admitted that it was difficult to impossible to get the information required out of Microsoft to properly do a sufficient assessment,” the reviewer told ProPublica.
The back channel helped surface cybersecurity issues that otherwise might never have been known to the government, people who have worked with and for FedRAMP told ProPublica. At the same time, they acknowledged its existence undermined the very spirit and intent of having independent assessors.
A spokesperson for Coalfire, the firm that initially handled the GCC High assessment, requested written questions from ProPublica, then declined to respond.
A spokesperson for Kratos, which replaced Coalfire as the GCC High assessor, declined an interview request. In an emailed response to written questions, the spokesperson said the company stands by its official assessment and recommendation of GCC High and “absolutely refutes” that it “ever would sign off on a product we were unable to fully vet.” The company “has open and frank conversations” with all customers, including Microsoft, which “submitted all requisite diagrams to meet FedRAMP-defined requirements,” the spokesperson said.
Kratos said it “spent extensive time working collaboratively with FedRAMP in their review” and does not consider such discussions to be “backchanneling.”
FedRAMP, however, was dissatisfied with Kratos’ ongoing work and believed the firm “should be pushing back” on Microsoft more, the former reviewer said. It placed Kratos on a “corrective action plan,” which could eventually result in loss of accreditation. The company said it did not agree with FedRAMP’s action but provided “additional trainings for some internal assessors” in response to it.
The Microsoft spokesperson told ProPublica the company has “always been responsive to requests” from Kratos and FedRAMP. “We are not aware of any backchanneling, nor do we believe that backchanneling would have been necessary given our transparency and cooperation with auditor requests,” the spokesperson said.
In response to questions from ProPublica about the process, the GSA said in an email that FedRAMP’s system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.”
GSA did not respond to questions about back-channeling but said the “correct process” is for a third-party assessor to “state these problems formally in a finding during the security assessment so that the cloud service provider has an opportunity to fix the issue.”
FedRAMP Ends Talks
The back-and-forth between the FedRAMP reviewers and Microsoft’s team went on for years with little progress. Then, in the summer of 2023, the program’s interim director, Brian Conrad, got a call from the White House that would alter the course of the review.
Chinese state-sponsored hackers had infiltrated GCC, the lower-cost version of Microsoft’s government cloud, and stolen data and emails from the commerce secretary, the U.S. ambassador to China and other high-ranking government officials. In the aftermath, Chris DeRusha, the White House’s chief information security officer, wanted a briefing from FedRAMP, which had authorized GCC.
The decision predated Conrad’s tenure, but he told ProPublica that he left the conversation with several takeaways. First, FedRAMP must hold all cloud providers — including Microsoft — to the same standards. Second, he had the backing of the White House in standing firm. Finally, FedRAMP would feel the political heat if any cloud service with a FedRAMP authorization were hacked.
DeRusha confirmed Conrad’s account of the phone call but declined to comment further.
Within months, Conrad informed Microsoft that FedRAMP was ending the engagement on GCC High.
“After three years of collaboration with the Microsoft team, we still lack visibility into the security gaps because there are unknowns that Microsoft has failed to address,” Conrad wrote in an October 2023 email. This, he added, was not for FedRAMP’s lack of trying. Staffers had spent 480 hours of review time, had conducted 18 “technical deep dive” sessions and had numerous email exchanges with the company over the years. Yet they still lacked the data flow diagrams, crucial information “since visibility into the encryption status of all data flows and stores is so important,” he wrote.
If Microsoft still wanted FedRAMP authorization, Conrad wrote, it would need to start over.
A FedRAMP reviewer, explaining the decision to the Justice Department, said the team was “not asking for anything above and beyond what we’ve asked from every other” cloud service provider, according to meeting minutes reviewed by ProPublica. But the request was particularly justified in Microsoft’s case, the reviewer told the Justice officials, because “each time we’ve actually been able to get visibility into a black box, we’ve uncovered an issue.”
“We can’t even quantify the unknowns, which makes us very uncomfortable,” the reviewer said, according to the minutes.
Microsoft and the Justice Department Push Back
Microsoft was furious. Failing to obtain authorization and starting the process over would signal to the market that something was wrong with GCC High. Customers were already confused and concerned about the drawn-out review, which had become a hot topic in an online forum used by government and technology insiders. There, Wakeman, the Microsoft cybersecurity architect, deflected blame, saying the government had been “dragging their feet on it for years now.”
Meanwhile, to build support for Microsoft’s case, Bergin, the company’s point person for FedRAMP and a former Army official, reached out to government leaders, including one from the Justice Department.
The Justice official, who spoke on condition of anonymity because they were not authorized to discuss the matter, said Bergin complained that the delay was hampering Microsoft’s ability “to get this out into the market full sail.” Bergin then pushed the Justice Department to “throw around our weight” to help secure FedRAMP authorization, the official said.
That December, as the parties gathered to hash things out at GSA’s Washington headquarters, Justice did just that. Rogers, who by then had been promoted to the department’s chief information officer, sat beside Bergin — on the opposite side of the table from Conrad, the FedRAMP director.
Rogers and her Justice colleagues had a stake in the outcome. Since authorizing and deploying GCC High, she had received accolades for her work modernizing the department’s IT and cybersecurity. But without FedRAMP’s stamp of approval, she would be the government official left holding the bag if GCC High were involved in a serious hack. At the same time, the Justice Department couldn’t easily back out of using GCC High because once a technology is widely deployed, pulling the plug can be costly and technically challenging. And from its perspective, the cloud was an improvement over the old government-run data centers.
Shortly after the meeting kicked off, Bergin interrupted a FedRAMP reviewer who had been presenting PowerPoint slides. He said the Justice Department and third-party assessor had already reviewed GCC High, according to meeting minutes. FedRAMP “should essentially just accept” their findings, he said.
Then, in a shock to the FedRAMP team, Rogers backed him up and went on to criticize FedRAMP’s work, according to two attendees.
In its statement, Microsoft said Rogers maintains that FedRAMP’s approach “was misguided and improperly dismissed the extensive evaluations performed by DOJ personnel.”
Bergin did not dispute the account, telling ProPublica that he had been trying to argue that it is the purview of third-party assessors such as Kratos — not FedRAMP — to evaluate the security of cloud products. And because FedRAMP must approve the third-party assessment firms, the program should have taken its issues up with Kratos.
“When you are the regulatory agency who determines who the auditors are and you refuse to accept your auditors’ answers, that’s not a ‘me’ problem,” Bergin told ProPublica.
The GSA did not respond to questions about the meeting. The Justice Department declined to comment.
Pressure Mounts on FedRAMP
If there was any doubt about the role of FedRAMP, the White House issued a memorandum in the summer of 2024 that outlined its views. FedRAMP, it said, “must be capable of conducting rigorous reviews” and requiring cloud providers to “rapidly mitigate weaknesses in their security architecture.” The office should “consistently assess and validate cloud providers’ complex architectures and encryption schemes.”
But by that point, GCC High had spread to other federal agencies, with the Justice Department’s authorization serving as a signal that the technology met federal standards.
It also spread to the defense sector, since the Pentagon required that cloud products used by its contractors meet FedRAMP standards. While it did not have FedRAMP authorization, Microsoft marketed GCC High as meeting the requirements, selling it to companies such as Boeing that research, develop and maintain military weapons systems.
But with the FedRAMP authorization up in the air, some contractors began to worry that by using GCC High, they were out of compliance. That could threaten their contracts, which, in turn, could impact Defense Department operations. Pentagon officials called FedRAMP to inquire about the authorization stalemate.
The Defense Department acknowledged but did not respond to written questions from ProPublica.
Rogers also kept pressing FedRAMP to “get this thing over the line,” former employees of the GSA and FedRAMP said. It was the “opinion of the staff and the contractors that she simply was not willing to put heat to Microsoft on this” and that the Justice Department “was too sympathetic to Microsoft’s claims,” Eric Mill, then GSA’s executive director for cloud strategy, told ProPublica.
Authorization Despite a “Damning” Assessment
In the summer of 2024, FedRAMP hired a new permanent director, government technology insider Pete Waterman. Within about a month of taking the job, he restarted the office’s review of GCC High with a new team, which put aside the debate over data flow diagrams and instead attempted to examine evidence from Microsoft. But these reviewers soon arrived at the same conclusion, with the team’s leader complaining about “getting stiff-armed” by Microsoft.
“He came back and said, ‘Yeah, this thing sucks,’” Mill recalled.
While the team was able to work through only two of the many services included in GCC High, Exchange Online and Teams, that was enough for it to identify “issues that are fundamental” to risk management, including “timely remediation of vulnerabilities and vulnerability scanning,” according to a summary of the team’s findings reviewed by ProPublica.
Those issues, as well as a lack of “proper detailed security documentation” from Microsoft, limit “visibility and understanding of the system” and “impair the ability to make informed risk decisions.”
The team concluded, “There is a lack of confidence in assessing the system’s overall security posture.”
A Microsoft spokesperson said in a statement that the company “never received this feedback in any of its communications with FedRAMP.”
When ProPublica read the findings to Bergin, the Microsoft liaison, he said he was surprised.
“That’s pretty damning,” Bergin said, adding that it sounded like language that “would’ve generally been associated with a finding of ‘not worthy.’ If an assessor wrote that, I would be nervous.”
Despite the findings, to the FedRAMP team, turning Microsoft down didn’t seem like an option. “Not issuing an authorization would impact multiple agencies that are already using GCC-H,” the summary document said. The team determined that it was a “better value” to issue an authorization with conditions for continued government oversight.
While authorizations with oversight conditions weren’t unusual, arriving at one under these circumstances was. GCC High reviewers saw problems everywhere, both in what they were able to evaluate and what they weren’t. To them, most of the package remained a vast wilderness of untold risk.
Nevertheless, FedRAMP and Microsoft reached an agreement, and the day after Christmas 2024, GCC High received its FedRAMP authorization. FedRAMP appended a cover report to the package laying out its deficiencies and noting it carried unknown risks, according to people familiar with the report.
It emphasized that agencies should carefully review the package and engage directly with Microsoft on any questions.
“Unknown Unknowns” Persist
Microsoft told ProPublica that it has met the conditions of the agreement and has “stayed within the performance metrics required by FedRAMP” to ensure that “risks are identified, tracked, remediated, and transparently communicated.”
But under the Trump administration, there aren’t many people left at FedRAMP to check.
While the Biden-era guidance said FedRAMP “must be an expert program that can analyze and validate the security claims” of cloud providers, the GSA told ProPublica that the program’s role is “not to determine if a cloud service is secure enough.” Rather, it is “to ensure agencies have sufficient information to make these risk decisions.”
The problem is that agencies often lack the staff and resources to do thorough reviews, which means the whole system is leaning on the claims of the cloud companies and the assessments of the third-party firms they pay to evaluate them. Under the current vision, critics say, FedRAMP has lost the plot.
“FedRAMP’s job is to watch the American people’s back when it comes to sharing their data with cloud companies,” said Mill, the former GSA official, who also co-authored the 2024 White House memo. “When there’s a security issue, the public doesn’t expect FedRAMP to say they’re just a paper-pusher.”
Meanwhile, at the Justice Department, officials are finding out what FedRAMP meant by the “unknown unknowns” in GCC High. Last year, for example, they discovered that Microsoft relied on China-based engineers to service their sensitive cloud systems despite the department’s prohibition against non-U.S. citizens assisting with IT maintenance.
Officials learned about this arrangement — which was also used in GCC High — not from FedRAMP or from Microsoft but from a ProPublica investigation into the practice, according to the Justice employee who spoke with us.
A Microsoft spokesperson acknowledged that the written security plan for GCC High that the company submitted to the Justice Department did not mention foreign engineers, though he said Microsoft did communicate that information to Justice officials before 2020. Nevertheless, Microsoft has since ended its use of China-based engineers in government systems.
Former and current government officials worry about what other risks may be lurking in GCC High and beyond.
The GSA told ProPublica that, in general, “if there is credible evidence that a cloud service provider has made materially false representations, that matter is then appropriately referred to investigative authorities.”
Ironically, the ultimate arbiter of whether cloud providers or their third-party assessors are living up to their claims is the Justice Department itself. The recent indictment of the former Accenture employee suggests it is willing to use this power. In a court document, the Justice Department alleges that the ex-employee made “false and misleading representations” about the cloud platform’s security to help the company “obtain and maintain lucrative federal contracts.” She is also accused of trying to “influence and obstruct” Accenture’s third-party assessors by hiding the product’s deficiencies and telling others to conceal the “true state of the system” during demonstrations, the department said. She has pleaded not guilty.
There is no public indication that such a case has been brought against Microsoft or anyone involved in the GCC High authorization. The Justice Department declined to comment. Monaco, the deputy attorney general who launched the department’s initiative to pursue cybersecurity fraud cases, did not respond to requests for comment.
She left her government position in January 2025. Microsoft hired her to become its president of global affairs.
A company spokesperson said Monaco’s hiring complied with “all rules, regulations, and ethical standards” and that she “does not work on any federal government contracts or have oversight over or involvement with any of our dealings with the federal government.”
Spring has a way of revealing everything your garage doesn’t have. All the clutter you accumulated last winter suddenly needs a place to go so you can get the lawn mower out. The half-finished projects from last year are calling your name again, but the tools you swore you had somehow aren’t where you left them. And all that dust and leaves piled up in the corners can only be ignored for so long.
It’s why, for many homeowners, spring is about more than just cleaning. It’s a time for upgrading, as well. Lucky for you, Costco’s got plenty of the good stuff to get your garage ready for the season. We’ve put together a mix of must-haves ranging from storage systems to power tools to lawn equipment and beyond. No matter what projects await you in the coming months, these five Costco finds should be good enough to get you through.
Trinity Modular Slatwall
No functional garage is complete without proper organization and storage. That’s easier said than done, though, especially if your car or cars take up all the extra room in there. That’s what’s nice about the Trinity Modular Slatwall: it lets you store tons of stuff right there on the wall, no floor space required. The kit comes with four 48-inch by 12-inch PVC panels, covering a total of 16 square feet. Plus, you get 13 hooks in multiple sizes. It’s modular, too, so you can adapt it to whatever kind of wall space you have. The panels can go either vertically in a 4-foot by 4-foot square or horizontally in an 8-foot by 2-foot layout.
The panels can support up to 75 pounds per square foot, which comes out to 1,200 pounds total. It mounts flush with the wall, too, so no worries about it protruding into the workspace. Costco members say it’s sturdy and easy to install, and that it works even better when you order more hooks than the 13 it comes with. It’s an online exclusive priced at $129.99, and you can get it in gray or white.
Ingersoll Rand Combination Wrench Set
If mechanical work is on your spring to-do list, a reliable wrench set is a must-have. To get you taken care of, Costco sells this Ingersoll Rand 16-piece combination wrench set as an online exclusive. It covers sizes from 6mm to 22mm, which should be suitable for everything from basic household fixes to more demanding automotive jobs. Each wrench is engineered to exceed ANSI standards for torque, length, and hardness. They also have a non-slip grip design to help minimize the chances of you stripping your fasteners.
Another big plus: the long handles. They increase your leverage, which means more torque with less effort. Costco members say they’re well-made and feel comfortable enough for all your wrenching needs. It’s going for $99.99 on Costco’s site, which means you’ll be paying about $6.25 a wrench. Sure, there are cheaper mechanics’ tool sets out there, but this one comes with Costco’s excellent customer service to protect your purchase.
DeWalt Wet Dry Vacuum
Everybody likes getting a project done, but very few like the cleanup that comes after. This DeWalt wet-dry vacuum does make it a little easier to manage, though. It’s powered by a 4 peak horsepower motor, so you get strong suction and high airflow for everything from fine dust to heavier debris like nails and wood fragments. And even with all that horsepower, it’s still built with Stealthsonic technology that keeps noise levels below 65 decibels. That makes it about 50% quieter than standard wet/dry vacuums.
The vacuum’s stainless steel tank is pretty resilient against wear and tear, and the crush-resistant hose is built to last just the same. It comes with a 15-foot power cord to get you into every corner of the garage, plus a wrap handle for convenient storage. Costco members call it nice and quiet and perfectly adequate for inside and outside cleanup. Yeah, there are more powerful shop vacs out there, but this should be more than enough for the average Joe. It’s $99.99 in stores and online.
Fanttik Mini Chainsaw
Not every yard work task requires a full-size chainsaw. Sometimes, you just need to do a little cutting and trimming. That’s where the Fanttik Mini Chainsaw comes in handy. It’s a much more compact alternative to the full-size thing that you can even use with one hand. Even with its smaller size, the chainsaw can still handle over 135 cuts on a single charge of its 2500mAh battery. When it comes time to recharge, its fast USB-C charging means very little downtime. You also have a built-in LED display to give you real-time information on battery life and speed settings during operation.
The tool gives you three adjustable speed levels and an integrated LED light for visibility in low-light conditions. Don’t take that as your sign to go chainsawing in the dark, though. Be careful. While it’s not intended for heavy-duty logging, the mini chainsaw is plenty for quick, efficient yard work. Costco members agree, saying it works like a charm and can get the job done with power to spare. It’s $79.99 and is available in-store and online. And if you want to take your garage upgrade a step further, there are several other Costco Finds that can help you do that.
Greenworks Drill & Impact Driver Kit
If your power tools need an upgrade for spring, Costco has a Greenworks 24V drill and impact driver kit that comes with both tools, three batteries, a fast charger, and a range of other bits and accessories. It’s $299.99, and it’s an online exclusive. The drill has a 1/2-inch keyless chuck, a two-speed gearbox reaching up to 2,000 RPM, and an 18-position clutch. The impact driver has a 1/4-inch quick-release hex collet and up to 1,950 inch-pounds of torque. Together, you’ll have just what you need to start knocking out all those projects haunting your to-do list.
The 24V lithium-ion batteries have enough power and runtime to help you get through bigger projects. They also use USB-C fast charging, and they double as an input and output. That means, in a pinch, the batteries can serve as portable power banks for your phone or laptop. Costco members like the sturdy build, the robust torque, and the overall value of the kit itself. It’s definitely one of the most underrated tool brands at Costco.
Methodology
Each item included here was available to purchase from Costco warehouses or Costco.com as of the time of this writing. Items were chosen based on the highest-rated items from member reviews, sorted by newest arrivals on Costco’s website. Particular attention was also given to tools that solve common seasonal challenges during springtime (such as organization, cleanup, and DIY projects around the house or the yard). Each tool represents a different category of need (storage, fastening, cleaning, cutting, and drilling) to provide a well-rounded list of upgrades for your garage this spring.
New renders of the Pixel 11 Pro XL have surfaced, giving us one of the clearest looks yet at Google’s next flagship.
Fresh CAD-based images suggest Google is reworking its signature camera bar, swapping the familiar two-tone look for a more unified, monochromatic design that stretches cleanly across the rear. This is a subtle shift on paper. However, it could give the Pixel 11 lineup a noticeably sharper, more modern feel.
The renders, first shared by Android Headlines, follow earlier leaks of the standard Pixel 11 and Pixel 11 Pro. They complete the picture of Google’s 2026 non-foldable range. While CAD renders aren’t official, they’re typically based on manufacturing dimensions. This makes them a fairly reliable preview of overall shape and layout.
Credit: Android Headlines/OnLeaks
Alongside the new camera bar, there are hints that Google could be dropping the infrared thermometer seen on previous Pro models. That’s one detail worth treating with caution, as smaller features don’t always show up accurately in CAD leaks.
In terms of size, the Pixel 11 Pro XL is expected to come in at 162.7 x 76.5 x 8.5mm, making it marginally smaller than its predecessor. However, the display is tipped to remain unchanged. Google is likely sticking to a 6.8-inch AMOLED panel.
Under the hood, there aren’t many surprises yet. A next-gen Tensor G6 chip is widely expected, but beyond that, details around RAM and storage are still unclear. There’s also no strong indication of a major hardware shake-up this time around.
If Google follows its usual schedule, the Pixel 11 series is still a few months away from launch. An August reveal looks likely.
Senior defense officials told The Wall Street Journal that the autonomous attack drones have been used in strikes against Iranian military and IRGC targets, including weapons facilities, manufacturing sites, and air-defense nodes. They said this contributed to an 83% decline in Iranian drone attacks during the early days of the conflict.
Mikko Hyppönen is pacing back and forth on the stage, with his trademark dark blonde ponytail resting on an impeccable teal suit. A seasoned speaker, he is trying to make an important point to a room full of fellow hackers and security researchers at one of the industry’s global annual meet-ups.
“I often call this ‘cybersecurity Tetris’,” he tells the audience with a serious face, reeling off the rules of the classic video game. When you complete a whole line of bricks, the row vanishes, leaving the rest of the bricks to fall into a new line.
“So your successes disappear, while your failures pile up,” he tells the audience during his keynote at Black Hat in Las Vegas in 2025. “The challenge we face as cybersecurity people is that our work is invisible… when you do your job perfectly, the end result is that nothing happens.”
Hyppönen’s work, however, has certainly not been invisible. As one of the industry’s longest serving cybersecurity figures, he has spent more than 35 years fighting malware. When he started in the late 1980s, the term “malware” was still far from everyday parlance; the terms instead were computer “virus” or “trojans.” The internet was still something few people had access to, and some viruses relied on infecting computers with floppy disks.
Since then, Hyppönen estimated he has analyzed thousands of different kinds of malware. And thanks to his frequent talks at conferences all over the world, he has become one of the most recognizable faces and respected voices of the cybersecurity community.
While Hyppönen has spent much of his life trying to keep malware out of places it is not supposed to be, he is now doing much the same work, albeit with a slightly different tack: His new challenge is to protect people against drones.
Hyppönen, who is Finnish, told me during a recent interview that he lives about two hours away from Finland’s border with Russia. An increasingly hostile Russia and its 2022 full-scale invasion of Ukraine, where the majority of deaths have reportedly come from unmanned aerial attacks, have made Hyppönen believe he can have renewed impact by fighting drones.
For Hyppönen, it is also a matter of recognizing that while there are still long-standing problems to solve in the world of cybersecurity — malware is not going anywhere and there are plenty of new problems on the horizon — the industry has made huge strides over the last two decades. An iPhone, to take an example Hyppönen offered, is an extremely secure device. The cybersecurity aspects of drone warfare, on the other hand, remain almost uncharted territory.
Image Credits: courtesy of Mikko Hyppönen
From viruses and worms to malware and spyware…
Hyppönen started early in cybersecurity by hacking video games during the 1980s. His love for cybersecurity came from reverse engineering software to figure out a way to remove anti-piracy protections from games on the Commodore 64 home computer. He learned to code by developing adventure games, and sharpened his reverse engineering skills by analyzing malware at his first job at Finnish company Data Fellows, which later became the well-known antivirus maker F-Secure.
Since then, Hyppönen has been on the front lines of the fight against malware, witnessing how it evolved.
In the early years, virus writers developed their malicious code often exclusively out of passion and curiosity to see what was possible with code alone. While some cyberespionage existed, hackers had yet to discover ways to monetize hacking by today’s standards, like ransomware attacks. There was no cryptocurrency to facilitate extortion, nor a criminal marketplace for stolen data.
Form.A, for example, was one of the most common viruses of the early 1990s, spreading to computers via infected floppy disks. One version of the virus did no real damage, sometimes just displaying a message on the person's screen, and that was it. But the virus travelled around the world, even reaching the research stations at the South Pole, Hyppönen told me.
Hyppönen recounted the infamous ILOVEYOU virus, which he and his colleagues were the first to discover in 2000. ILOVEYOU was wormable, meaning it spread automatically from computer to computer. It arrived via email as an attachment disguised as a text file, purportedly a love letter. If the target opened it, it would overwrite and corrupt some files on the person’s computer, and then send itself to all their contacts.
The virus infected over 10 million Windows computers worldwide.
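A toy simulation makes the wormable mechanic concrete: model each user's address book as a graph and spread infection outward from the first victim. This is purely illustrative; the contact graph and names are invented, and nothing here resembles ILOVEYOU's actual VBScript payload.

```python
from collections import deque

def simulate_worm(contacts, patient_zero):
    """Breadth-first spread over an email contact graph.

    contacts: dict mapping each user to the list of users in their
    address book. Returns the set of users the worm reaches.
    """
    infected = {patient_zero}
    queue = deque([patient_zero])
    while queue:
        sender = queue.popleft()
        # The worm mails itself to everyone in the victim's address book.
        for recipient in contacts.get(sender, []):
            if recipient not in infected:
                infected.add(recipient)
                queue.append(recipient)
    return infected

# A tiny invented contact graph: each new infection seeds further ones.
contacts = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": ["alice"],
}
print(sorted(simulate_worm(contacts, "alice")))
```

Even on this five-node graph every user is reached in a few hops, which is why a worm with a convincing lure could hit 10 million machines in days.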
Malware has changed dramatically since then. Virtually no one develops malware as a hobby anymore, and creating malicious software that self-replicates is practically a guarantee that it will be caught by cybersecurity defenders, who can neutralize it quickly and potentially identify its author.
No one does it for the love of the game anymore, according to Hyppönen. “The age of viruses is firmly behind us,” he said.
Seldom do we now see self-spreading worms, with rare exceptions such as the destructive WannaCry ransomware attack by North Korea in 2017 and the NotPetya mass-hacking campaign launched by Russia later that year, which crippled much of Ukraine's government and business infrastructure. Now, malware is almost exclusively used by cybercriminals, spies, and mercenary spyware makers who develop exploits for government-backed hacking and espionage. Those groups typically stay in the shadows and want to keep their tools hidden, both to continue their activities and to evade cybersecurity defenders and law enforcement.
Another difference today is the sheer scale of the business: the cybersecurity industry is now estimated to be worth $250 billion. The industry has professionalized, in part out of necessity, to fight the increase in malware attacks. Defenders went from giving away their software for free to turning it into a paid service or product, said Hyppönen.
Computers, and newer inventions like smartphones that began to take off during the early 2000s, have become much harder to hack. If the tools to hack an iPhone or the Chrome browser cost six figures or even a few million dollars, Hyppönen argued, exploits become so expensive that only the highly resourced, like governments, can use them, rather than financially motivated cybercriminals. That's a huge win for consumers, and for the cybersecurity industry it's a job well done.
Image Credits: courtesy of Mikko Hyppönen
From fighting spies and criminals… to countering drones
In mid-2025, Hyppönen pivoted from cybersecurity to a different kind of defensive work. He became the chief research officer at Sensofusion, a Helsinki-based company that develops an anti-drone system for law enforcement agencies and the military.
Hyppönen told me he was motivated to get into this developing new industry because of what he saw happening in Ukraine, a war defined by drones. As a Finnish citizen who serves in the military reserves (“I can’t tell you what I do, but I can tell you that they don’t give me a rifle because I’m much more destructive with a keyboard,” he tells me), and with two grandfathers who fought the Russians, Hyppönen is acutely aware of the presence of an enemy just over his country’s border.
“The situation is very, very important to me,” he tells me. “It’s more meaningful to work fighting against drones, not just the drones that we see today, but also the drones of tomorrow,” he said. “We’re on the side of humans against machines, which sounds a little bit like science fiction, but that’s very concretely what we do.”
The cybersecurity and drone industries may seem leagues apart, but there are clear parallels between fighting malware and fighting drones, according to Hyppönen. To fight malware, cybersecurity companies came up with mechanisms, known as signatures, to identify what is malware and what is not, and then detect and block it. In the case of drones, Hyppönen explained, defenses involve building systems that can locate and jam drones by recognizing the radio frequencies being used to control them.
Hyppönen explained that it’s possible to identify and detect drones by recording their radio frequencies, known as their IQ samples.
“We detect the protocol from there and build up signatures for detecting unknown drones,” he said.
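As a rough sketch of what building signatures from IQ samples might look like, the toy below takes a block of complex IQ samples, finds the dominant tone with an FFT, and matches it against a table of known protocol frequency offsets. The signature table, tolerance, and protocol names are all invented for illustration; Sensofusion's actual pipeline is not public and is certainly far more sophisticated.

```python
import numpy as np

# Hypothetical signature table: protocol name -> dominant tone offset (Hz)
# relative to the receiver's center frequency.
SIGNATURES = {"proto_a": 25_000.0, "proto_b": -40_000.0}

def classify_iq(iq, sample_rate, tolerance=2_000.0):
    """Find the strongest tone in a block of complex IQ samples and
    match it against known signatures. Returns a protocol name or None."""
    spectrum = np.fft.fftshift(np.fft.fft(iq))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / sample_rate))
    peak = freqs[np.argmax(np.abs(spectrum))]
    for name, tone in SIGNATURES.items():
        if abs(peak - tone) < tolerance:
            return name
    return None

# Synthesize a test burst: a complex tone at +25 kHz, as "proto_a" would emit.
fs = 1_000_000
t = np.arange(4096) / fs
burst = np.exp(2j * np.pi * 25_000.0 * t)
print(classify_iq(burst, fs))
```

Real systems would look at modulation, hopping patterns, and packet timing rather than a single tone, but the signature-matching shape of the problem is the same one antivirus engines solved decades ago.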
He also explained that if you detect the protocol and frequencies used to control the drone, you can also try to conduct cyberattacks against it. You can cause the drone’s system to malfunction, and crash the drone into the ground. “So in many ways, these protocol level attacks are much, much easier in the drone world because the first step is the last step,” Hyppönen said. “If you find a vulnerability, you’re done.”
The shared strategy is not the only thing that has carried over into his new fight. The cat-and-mouse game of learning how to stop a threat, then watching the enemy learn from that and devise new ways around defenses, and on and on, is the same in the world of drones. And then there's the identity of the enemy.
“I spent a big part of my career fighting against Russian malware attacks,” he said. “Now I’m fighting Russian drone attacks.”
Starting this week, Microsoft has begun force-upgrading unmanaged devices running Windows 11 24H2 Home and Pro editions to Windows 11 25H2.
According to the company’s Lifecycle Policy site, Windows 11 24H2 will reach end of support in roughly six months, on October 13, 2026.
Also known as the Windows 11 2025 Update, Windows 11 25H2 began rolling out in September to eligible Windows 11 devices as a minor update installed through an enablement package less than 200 KB in size.
“The machine learning-based intelligent rollout has expanded to all devices running Home and Pro editions of Windows 11, version 24H2 that are not managed by IT departments,” Microsoft said in a Monday update to the Windows release health dashboard.
“Devices running these editions will no longer receive fixes for known issues, time zone updates, technical support, or monthly security and preview updates containing protections from the latest security threats,” it added.
“These devices will automatically receive the update to Windows 11, version 25H2 when they’re ready. No action is required, and you can choose when to restart your device or postpone the update.”
Those who don’t want to wait for the automatic upgrade can manually check whether the update is available in Settings > Windows Update and click the link to download and install Windows 11 25H2.
If you’re not ready to upgrade, you can also pause updates from Settings > Windows Update by selecting the amount of time you’d like to pause them. However, you must install the latest updates after the time limit has passed.
Microsoft also provides a support document and a step-by-step guide to help users resolve problems encountered during the Windows 11 25H2 upgrade process.
Since the March 2026 Patch Tuesday updates were released, Microsoft has issued several emergency updates, including one that addresses a known issue breaking sign-ins with Microsoft accounts across multiple Microsoft apps, such as Teams and OneDrive.
Automated pentesting proves the path exists. BAS proves whether your controls stop it. Most teams run one without the other.
This whitepaper maps six validation surfaces, shows where coverage ends, and provides practitioners with three diagnostic questions for any tool evaluation.
[Washington, DC – April 2, 2026] – IREX, a global pioneer in ethical AI and intelligent video analytics deployed across 10+ countries and over 300,000 cameras, announced a major update to its FireTrack smoke and fire detection module. The update doesn’t require any additional hardware and broadens FireTrack’s applicability to critical infrastructure such as energy facilities and transportation hubs, public institutions including schools and hospitals, residential and commercial buildings, and parks, national parks, and forests.
Built on IREX’s ethical AI platform, the new module processes visual data in just 75–105 milliseconds (about 0.1 second), identifying danger almost instantly. This advancement, combined with improved model accuracy and resilience in poor lighting or weather, empowers early intervention by first responders, reducing the risk of catastrophic loss.
The updated model analyzes how fire and smoke evolve over time, distinguishing genuine hazards from harmless visuals like fog, headlights, or glare. This dramatically cuts down false alarms, allowing safety teams to focus on incidents that truly require attention.
To boost accuracy, IREX changed how the system “sees” fire and smoke. Instead of traditional bounding boxes around objects, the updated module uses segmentation, applying a color mask over the exact areas where fire or smoke appears: green for fire and red for smoke, thus better reflecting their irregular shapes. This approach improves the system’s ability to localize hazards precisely within the scene.
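A minimal sketch of the segmentation-overlay idea, assuming simple RGB frames and boolean per-pixel masks. The function names, blend factor, and data shapes are invented for illustration; IREX has not published implementation details. The colors follow the release's scheme of green for fire and red for smoke.

```python
import numpy as np

# Colors per the article's scheme (RGB): green for fire, red for smoke.
FIRE_COLOR = np.array([0, 255, 0], dtype=np.float32)
SMOKE_COLOR = np.array([255, 0, 0], dtype=np.float32)

def overlay_masks(frame, fire_mask, smoke_mask, alpha=0.5):
    """Blend per-pixel segmentation masks onto an RGB frame.

    frame: (H, W, 3) uint8 image; fire_mask / smoke_mask: (H, W) bool.
    Unlike a bounding box, only the exact masked pixels are tinted,
    so the overlay follows fire and smoke's irregular shapes.
    """
    out = frame.astype(np.float32)
    out[fire_mask] = (1 - alpha) * out[fire_mask] + alpha * FIRE_COLOR
    out[smoke_mask] = (1 - alpha) * out[smoke_mask] + alpha * SMOKE_COLOR
    return out.astype(np.uint8)

# Demo: a gray frame with one "fire" pixel and one "smoke" pixel.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
fire = np.zeros((4, 4), dtype=bool); fire[0, 0] = True
smoke = np.zeros((4, 4), dtype=bool); smoke[3, 3] = True
result = overlay_masks(frame, fire, smoke)
print(result[0, 0], result[3, 3])
```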
Credit: IREX
The updated FireTrack delivers early warning that is significantly faster than traditional optical or heat-based detectors by analyzing live video feeds for the visual signatures of smoke and fire in real time.
“Because the IREX AI platform seamlessly operates on existing camera networks, cities and organizations can strengthen fire safety without installing specialized sensor hardware – simply by connecting their CCTV systems to IREX,” said Serge Smirnoff, Head of PR at IREX. “Each detection event comes with a video snapshot for instant visual verification, enabling operators and first responders to quickly assess the situation and respond effectively.”
By leveraging the surveillance infrastructure already in place, the new FireTrack model offers a cost-effective path to comprehensive fire safety across both built environments and natural landscapes.
“The pride I feel for the IREX team today is immense. This FireTrack launch is a monumental achievement that reflects our core mission, to deploy ethical, intelligent AI to solve the world’s most critical problems,” said Calvin Yadav, CEO of IREX. “We are strengthening the resilience of entire communities globally, proving that every hour of hard work put into responsibly designed artificial intelligence is actively saving lives long before a single alarm sounds.”
The SDIC 8-bit MCU. (Credit: electronupdate, YouTube)
In this wonderful world of MEMS technology, sensor technology has been downsized and reduced in cost to the point where you can buy a car tire pressure sensor for less than $3 USD on a site like AliExpress. Recently [electronupdate] got his mittens on one of these items to take a look inside, and compare it against his trusty old mechanical tire pressure gauge.
Perhaps unsurprisingly, there isn’t a whole lot inside these devices once you pop them open to reveal the PCB. The MEMS sensor is a tiny device at the top, which has the pressurized air from the tire guided to it. The small hole in the metal can leads to the internals: a thin diaphragm with four piezoresistors that enable measurements of the diaphragm's deflection, from which pressure can be determined.
Handling these measurements and displaying results on the small zebra connector-connected LCD is an 8-bit MCU manufactured by Chinese company SDIC. Although the part number on the die doesn’t lead to any specific part on the SDIC site, similar SDIC parts have about 256 bytes of SRAM and a few kB of one-time programmable ROM.
This MCU also integrates the clock oscillator, thus requiring virtually no external parts to work. Finally, its sigma-delta ADC interacts with the MEMS device, rounding out a very simple device that’s nevertheless more than accurate enough for a spot check as well as quite portable.
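Piezoresistive sensors like this are typically read out as a Wheatstone bridge whose output voltage varies roughly linearly with diaphragm strain, so turning the ADC reading into a pressure comes down to inverting a linear calibration. A toy sketch with invented calibration constants (the real sensitivity and offset would come from the part's datasheet or factory trim):

```python
def bridge_output(p_kpa, sensitivity=0.02, offset_mv=1.5):
    """Toy model of the bridge: millivolts out per kPa, plus a zero offset."""
    return sensitivity * p_kpa + offset_mv

def pressure_from_bridge(v_mv, sensitivity=0.02, offset_mv=1.5):
    """Invert the linear calibration to recover pressure in kPa."""
    return (v_mv - offset_mv) / sensitivity

# Round trip at a typical car tire pressure (~220 kPa, about 32 psi).
v = bridge_output(220.0)
print(round(pressure_from_bridge(v), 1))
```

In the real gauge this inversion (plus temperature compensation) is what the little 8-bit MCU does with the sigma-delta ADC's samples before driving the LCD.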
For decades, modern navigation has relied heavily on GPS, but another, less visible system plays an equally critical role in helping aircraft, ships, smartphones, and military platforms determine their position.
Earth’s magnetic field, constantly shifting and evolving, underpins the World Magnetic Model (WMM), a global reference that supports navigation systems used by billions of people every day.
Maintaining the accuracy of that model depends on reliable measurements of the magnetic field, yet much of the satellite infrastructure used to gather this data is aging, while the field itself is changing at an accelerating rate.
Quantum diamond magnetometers
These pressures have driven a search for new technologies capable of monitoring the magnetic field with greater precision and frequency.
In response, the US National Geospatial-Intelligence Agency (NGA) launched the MagQuest Challenge in 2019, a seven-year, multi-million-dollar competition designed to identify next-generation sensing technologies.
The goal is to develop compact, highly accurate systems that can provide continuous magnetic data, reducing reliance on periodic measurements and helping ensure the long-term reliability of global navigation systems.
One of the companies emerging from this effort is SBQuantum, a Canadian firm specializing in quantum sensing technology. Its approach centers on quantum diamond magnetometers, compact devices that use the principles of quantum physics to measure magnetic fields with exceptional sensitivity.
Recently, the company reached a major milestone when its sensor was launched into orbit as part of the final phase of the MagQuest program. The deployment represents a step toward continuous, space-based monitoring of Earth’s magnetic field and highlights the growing role of quantum technologies in navigation, defense, and public safety.
To better understand the development of this technology, the challenges involved in bringing it to space, and the potential applications beyond navigation, I spoke with David Roy-Guay, Founder of SBQuantum.
Before we start, can you give us a brief overview of what the WMM is and why it is so important for us?
The World Magnetic Model (WMM) is what powers every electronic compass, including the one in your watch and cellphone. It is essential to keep up to date as the Magnetic North Pole is moving. It was in the Canadian north and is now shifting toward Siberia. This has a real impact on the precision of every analog and digital compass.
Every day, we use the WMM: just think of the blue arrow in your favorite navigation app telling you to head left or right as you exit a subway station or a hotel. This directional information complements GPS, which provides your location but doesn’t tell you which way you are facing.
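The correction a navigation app applies here is simple to sketch. The WMM supplies the local declination, the angle between magnetic north and true north, and the app adds it to the compass reading. The declination values below are illustrative, not real WMM lookups:

```python
# Sketch: correcting a raw compass reading with a WMM declination value.
# A magnetometer yields a magnetic heading; adding the local declination
# (positive for easterly declination) gives the true heading that gets
# drawn as the blue arrow. Declination values here are made up.

def true_heading(magnetic_heading_deg: float, declination_deg: float) -> float:
    """Correct a compass reading to true (geographic) heading, in degrees."""
    return (magnetic_heading_deg + declination_deg) % 360.0

# A compass reading of 350 degrees with +14 degrees of easterly declination:
print(true_heading(350.0, 14.0))  # → 4.0
```

A stale WMM means a stale declination value, and every heading computed this way inherits that error, which is exactly why the model must be kept current.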
You mentioned that the satellites feeding it with data are reaching their end of life. What happens next?
Typically, the WMM is updated every five years, when a new official version is released. Recently, however, an update was released after only four years because the movement of the field had accelerated.
Once the mission of the current ESA Swarm constellation of satellites comes to an end, the existing magnetic field maps will be of little value within two to three years. The navigation systems on board aircraft and drones would then be off significantly, possibly by dozens of degrees in the northernmost areas. One recent example comes from Alaska, where a landing strip had to have its runway numbers changed because, according to the WMM, it was no longer facing the same magnetic direction.
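The runway example follows directly from how designators are assigned: a runway number is the magnetic bearing rounded to the nearest ten degrees, divided by ten, so a drifting declination can renumber a fixed strip of pavement. A minimal sketch, with illustrative values rather than the actual Alaskan airport’s figures:

```python
# Sketch: how a shifting declination renumbers a runway.
# Magnetic bearing = true bearing minus declination (easterly positive);
# the painted number is that bearing rounded to the nearest 10 degrees,
# divided by 10, with 0 rendered as 36. All bearings below are made up.

def runway_number(true_bearing_deg: float, declination_deg: float) -> int:
    magnetic = (true_bearing_deg - declination_deg) % 360.0
    n = round(magnetic / 10.0) % 36
    return 36 if n == 0 else n

# Same pavement, two epochs of declination:
print(runway_number(264.0, 14.0))  # → 25
print(runway_number(264.0, 20.0))  # → 24
```

Nothing about the runway moved; only the model’s declination did, and the designator had to follow.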
In comparison, our ‘Diamond Polaris – 1’ platform will allow continuous production of magnetic data for the WMM. This approach is far more cost-effective, gathers and assembles data faster, and yields data well suited to accurate positioning.
How does the data from the WMM project convert into something that can be an alternative to the ubiquitous GPS?
Data collected over a year in orbit is processed and curated by the US NOAA and the US NGA to inform future versions of the WMM. Although the data is coarse, it is applicable to compass applications. Higher-resolution versions can be produced by deploying multiple satellites and drones to gather data at different altitudes.
These high-resolution maps will act as a calibration reference for inertial navigation systems (INS) and could provide positioning without GPS to within roughly 100 m.
Our spring 2026 space launch came after years of testing and retesting with NASA and other organizations, after which SBQuantum’s sensor was deemed fit for use in space. This first space deployment is the next step on the road to making magnetic navigation widely available as an alternative to GPS, one that cannot be jammed or distorted.
Your company built something called a diamond quantum magnetometer. Why diamond and why quantum?
Being solid state, diamond is exceptionally stable and provides the right environment to preserve quantum coherence for extended periods, even at room temperature. This enables highly sensitive and very accurate magnetic field measurements over extended, global-scale satellite missions.
Furthermore, the atomic structure of diamond is well suited to measuring magnetic fields along three axes. For navigation, it is essential to capture all three axes in order to derive directional information.
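To see why all three axes matter: direction comes from the relative sizes of the field components, not the total magnitude, so a scalar reading alone cannot give a heading. A minimal sketch, assuming a level sensor with its x axis pointing forward and y axis pointing right (the axis convention here is my assumption, not SBQuantum’s):

```python
# Sketch: deriving a magnetic heading from vector (multi-axis) field data.
# With the sensor level, the Earth's horizontal field projects onto the
# forward (bx) and right (by) axes; the heading is just an arctangent.
# A scalar magnetometer measuring only |B| cannot recover this angle.
import math

def magnetic_heading_deg(bx: float, by: float) -> float:
    """Heading in degrees clockwise from magnetic north, level sensor assumed."""
    return math.degrees(math.atan2(-by, bx)) % 360.0

print(magnetic_heading_deg(20.0, 0.0))   # facing magnetic north → 0.0
print(magnetic_heading_deg(0.0, -20.0))  # facing magnetic east  → 90.0
```

A real implementation would add the third (vertical) axis for tilt compensation, which is why full three-axis data is what the navigation use case demands.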
You mentioned the size of the device (roughly a quart of milk: about 1 L, or a 10 cm cube). Does your roadmap contain smaller products? How would something “better” differ in terms of features?
We are still in the early stages of this diamond technology. One of its advantages is that it can eventually be shrunk further, to about the size of a matchbox, without degrading its performance.
This is not the case for classical directional magnetometer technologies. We expect to reach that point in about three years, once we scale production to standard semiconductor-industry wafers.
How does the data captured by a quantum sensor allow for “advanced interpretation algorithms” that conventional sensors simply cannot support? What other applications could these sensors have?
By building an array of directional diamond magnetometers, we can enable real-time magnetic signal interpretation in a way that was otherwise not possible. For instance, we can locate metallic objects underwater in real time.
This is also true for metallic objects on the other side of a wall or underground. We are therefore also looking to employ the technology to support security and defense applications.
For instance, this could be used to track submarines from a drone, or to enhance security at sporting events, schools, and corporate gatherings.
We miss the old Heathkit. You could build equipment that rivaled or even surpassed commercial devices. The cost was usually reasonable and, even if you could get by with less, the satisfaction of using gear you built yourself was worth a lot. Not to mention the knowledge you’d gain and your confidence in troubleshooting should the need arise. So we were jealous of [RCD66] when he found a Heathkit AJ-43C stereo tuner in the recycle bin.
As you can see in the video below, it needed a lot of love to get back to its former self. The device dates from around 1965, when the kit cost $130. In 1965, that was a lot of money. Back then, that would have bought you about four ounces of gold and would have been a great down payment on a $1,500 VW bug.
Things were a bit of a mess, so he removed all the parts and replaced most of them. Unsurprisingly, the electrolytic capacitors all tested bad. The transistors were all germanium, but if they tested good, his plan was to reuse them. There were several PCBs inside, and he made some changes, such as replacing the zener diode power supply with something more modern.
How did it sound? Watch the video and see for yourself. We usually like troubleshooting specific problems on gear like this, but in this case, it was probably smart to just do a total rework.
Season 1 hasn’t even aired yet, and Star Wars: Maul – Shadow Lord is already coming back for more. Chief Creative Officer Dave Filoni has announced that Season 2 is officially in the works at Lucasfilm Animation.
Star Wars: Maul – Shadow Lord Season 1 kicks off on Disney+ with a two-episode premiere on April 6, dropping two episodes weekly after that. No release date for Season 2 has been shared yet, but the early renewal signals serious confidence in the show.
This 10-episode animated series picks up after The Clone Wars, with Maul trying to rebuild his criminal syndicate on a planet the Empire hasn’t touched. Along the way, he encounters a disillusioned young Jedi Padawan, who might become the apprentice he needs.
With Season 2 locked in before Season 1 even premieres, Maul’s story is clearly just getting started.
The stellar cast includes Golden Globe winner and Oscar nominee Wagner Moura as Brander Lawson, Richard Ayoade as Two-Boots, Dennis Haysbert as Master Eeko-Dio Daki, Gideon Adlon as Devon Izara, and several others.
When are the new episodes of Star Wars: Maul – Shadow Lord season 1 coming?
Star Wars: Maul – Shadow Lord follows a two-episode-per-week format, with new episodes rolling out every Sunday. Here’s the full breakdown:
April 6 – Episodes 1 and 2: “The Dark Revenge” and “Sinister Schemes”
April 13 – Episodes 3 and 4: “Whispers in the Unknown” and “Pride and Vengeance”
April 20 – Episodes 5 and 6: “Inquisition” and “Night of the Hunted”
April 27 – Episodes 7 and 8: “Call to the Oblivion” and “The Creeping Fear”
May 4 – Episodes 9 and 10: “Strange Allies” and the as-yet-untitled Season 1 finale