This article is part of a VB Special Issue called “Fit for Purpose: Tailoring AI Infrastructure.” Catch all the other stories here.
Unlocking AI’s potential to deliver greater efficiency, cost savings and deeper customer insights requires a consistent balance between cybersecurity and governance.
AI infrastructure must be designed to adapt and flex as a business's direction changes. Cybersecurity must protect revenue, and governance must stay in sync with compliance requirements both internally and across a company's footprint.
Any business looking to scale AI safely must continually look for new ways to strengthen the core infrastructure components. Just as importantly, cybersecurity, governance and compliance must share a common data platform that enables real-time insights.
“AI governance defines a structured approach to managing, monitoring and controlling the effective operation of a domain and human-centric use and development of AI systems,” Venky Yerrapotu, founder and CEO of 4CRisk, told VentureBeat. “Packaged or integrated AI tools do come with risks, including biases in the AI models, data privacy issues and the potential for misuse.”
A robust AI infrastructure makes audits easier to automate, helps AI teams find roadblocks and identifies the most significant gaps in cybersecurity, governance and compliance.
“With little to no current industry-approved governance or compliance frameworks to follow, organizations must implement the proper guardrails to innovate safely with AI,” Anand Oswal, SVP and GM of network security at Palo Alto Networks, told VentureBeat. “The alternative is too costly, as adversaries are actively looking to exploit the newest path of least resistance: AI.”
Defending against threats to AI infrastructure
While malicious attackers’ goals vary from financial gain to disrupting or destroying conflicting nations’ AI infrastructure, all seek to improve their tradecraft. Malicious attackers, cybercrime gangs and nation-state actors are all moving faster than even the most advanced enterprise or cybersecurity vendor.
“Regulations and AI are like a race between a mule and a Porsche,” Etay Maor, chief security strategist at Cato Networks, told VentureBeat. “There’s no competition. Regulators always play catch-up with technology, but in the case of AI, that’s particularly true. But here’s the thing: Threat actors don’t play nice. They’re not confined by regulations and are actively finding ways to jailbreak the restrictions on new AI tech.”
Chinese-, North Korean- and Russia-based cybercriminal and state-sponsored groups are actively targeting both physical and AI infrastructure, using AI-generated malware to exploit vulnerabilities more efficiently and in ways that often evade traditional cybersecurity defenses.
Security teams are still at risk of losing the AI war as well-funded cybercriminal organizations and nation-states target AI infrastructures of countries and companies alike.
One effective security measure is model watermarking, which embeds a unique identifier into AI models to detect unauthorized use or tampering. Additionally, AI-driven anomaly detection tools are indispensable for real-time threat monitoring.
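As a toy illustration of the anomaly-detection idea, the sketch below flags unusual spikes in hourly API traffic with a simple z-score. The function name and traffic numbers are hypothetical; production tools draw on far richer signals than request counts.

```python
# Illustrative only: a z-score detector over hourly API request counts.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Return indices of counts more than `threshold` standard deviations
    above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Mostly steady traffic with one burst that could indicate automated probing.
hourly = [102, 98, 110, 95, 101, 99, 104, 2500, 97, 103]
print(flag_anomalies(hourly))  # [7]
```

The same shape of check can run over model-inference logs, prompt lengths or per-user token consumption rather than raw request counts.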
All of the companies VentureBeat spoke with on the condition of anonymity are actively using red teaming techniques. Anthropic, for one, proved the value of human-in-the-middle design to close security gaps in model testing.
“I think human-in-the-middle design is with us for the foreseeable future to provide contextual intelligence, human intuition to fine-tune an LLM [large language model] and to reduce the incidence of hallucinations,” Itamar Sher, CEO of Seal Security, told VentureBeat.
Models are the high-risk threat surfaces of an AI infrastructure
Every model released into production is a new threat surface an organization needs to protect. Gartner’s annual AI adoption survey found that 73% of enterprises have deployed hundreds or thousands of models.
Malicious attackers exploit weaknesses in models using a broad base of tradecraft techniques. NIST’s Artificial Intelligence Risk Management Framework is an indispensable document for anyone building AI infrastructure and provides insights into the most prevalent types of attacks, including data poisoning, evasion and model stealing.
AI Security writes, “AI models are often targeted through API queries to reverse-engineer their functionality.”
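Because extraction attacks of this kind typically require very large numbers of queries, one common mitigation is a per-client query budget. The sketch below is an illustrative, in-memory version; the `QueryBudget` class is an assumption for this example, not an API from any particular product.

```python
# Hypothetical in-memory query budget; real deployments would back this
# with shared storage, authentication and time-windowed counters.
from collections import defaultdict

class QueryBudget:
    def __init__(self, daily_limit=10_000):
        self.daily_limit = daily_limit
        self.counts = defaultdict(int)  # client_id -> queries so far today

    def allow(self, client_id):
        """Record one query; return False once the client exceeds its budget."""
        self.counts[client_id] += 1
        return self.counts[client_id] <= self.daily_limit

budget = QueryBudget(daily_limit=3)
print([budget.allow("scraper") for _ in range(5)])  # [True, True, True, False, False]
```

Budgets alone won't stop a patient attacker rotating identities, which is why they are usually paired with the anomaly detection and watermarking measures described above.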
Getting AI infrastructure right is also a moving target, CISOs warn. “Even if you’re not using AI in explicitly security-centric ways, you’re using AI in ways that matter for your ability to know and secure your environment,” Merritt Baer, CISO at Reco, told VentureBeat.
Put design-for-trust at the center of AI infrastructure
Just as an operating system has specific design goals that strive to deliver accountability, explainability, fairness, robustness and transparency, so too does AI infrastructure.
Implicit throughout the NIST framework is a design-for-trust roadmap, which offers a practical, pragmatic definition to guide infrastructure architects. NIST emphasizes that validity and reliability are must-have design goals, especially in AI infrastructure, to deliver trustworthy, reliable results and performance.
The critical role of governance in AI infrastructure
AI systems and models must be developed, deployed and maintained ethically, securely and responsibly. Governance must be designed to deliver workflows, visibility and real-time updates on algorithmic transparency, fairness, accountability and privacy. Strong governance starts with models that are continuously monitored, audited and aligned with societal values.
Governance frameworks should be integrated into AI infrastructure from the first phases of development. “Governance by design” embeds these principles into the process.
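One way to make governance by design concrete is a deployment gate that refuses to promote a model whose registry record lacks required governance metadata. The field names below are assumptions for illustration, not an industry standard.

```python
# Hypothetical governance gate: the required fields are illustrative only.
REQUIRED_FIELDS = {"owner", "intended_use", "training_data_source",
                   "bias_audit_date", "approved_by"}

def governance_gaps(model_record):
    """Return the required governance fields missing from a registry record."""
    return REQUIRED_FIELDS - model_record.keys()

record = {"owner": "risk-team", "intended_use": "credit scoring",
          "training_data_source": "loans_2023.parquet"}
print(sorted(governance_gaps(record)))  # ['approved_by', 'bias_audit_date']
```

A CI/CD pipeline can call a check like this before deployment, so governance is enforced mechanically rather than by after-the-fact review.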
“Implementing an ethical AI framework requires focus on security, bias and data privacy aspects not only during the designing process of the solution but also throughout the testing and validation of all the guardrails before deploying the solutions to end users,” WinWire CTO Vineet Arora told VentureBeat.
Designing AI infrastructures to reduce bias
Identifying and reducing bias in AI models is critical to delivering accurate, ethically sound results. Organizations need to take accountability for how their AI infrastructures monitor, control and improve models to reduce and, where possible, eliminate bias.
Organizations that take accountability for their AI infrastructures rely on adversarial debiasing, which trains models to minimize the relationship between protected attributes (such as race or gender) and outcomes, reducing the risk of discrimination. Another approach is resampling training data to ensure balanced representation across the groups relevant to a given industry.
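The resampling idea can be sketched with a simple reweighting that gives each protected group equal total weight in training. The group labels and helper function below are illustrative only, not a production debiasing pipeline.

```python
# Illustrative reweighting: every group ends up with equal total weight.
from collections import Counter

def balanced_weights(groups):
    """Weight each example inversely to its group's frequency."""
    counts = Counter(groups)
    n_groups, total = len(counts), len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]          # group A is over-represented 3:1
weights = balanced_weights(groups)
# Each A row gets ~0.67 and the single B row gets 2.0, so both groups
# contribute a total weight of 2.0 to training.
print(weights)
```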
“Embedding transparency and explainability into the design of AI systems enables organizations to understand better how decisions are being made, allowing for more effective detection and correction of biased outputs,” says NIST. Providing transparent insights into how AI models make decisions allows organizations to better detect, correct and learn from biases.
How IBM is managing AI governance
IBM’s AI Ethics Board oversees the company’s AI infrastructure and AI projects, ensuring each stays ethically compliant with industry and internal standards. IBM initially established a governance framework to include what they’re calling “focal points,” or mid-level executives with AI expertise, who review projects in development to ensure compliance with IBM’s Principles of Trust and Transparency.
IBM says this framework helps reduce and control risks at the project level, alleviating risks to AI infrastructures.
Christina Montgomery, IBM’s chief privacy and trust officer, says, “Our AI ethics board plays a critical role in overseeing our internal AI governance process, creating reasonable internal guardrails to ensure we introduce technology into the world responsibly and safely.”
AI infrastructure must deliver explainable AI
Efforts to close the gaps between cybersecurity, compliance and governance are accelerating across AI infrastructure use cases. Two trends emerged from VentureBeat research: agentic AI and explainable AI. Organizations with AI infrastructure are looking to flex and adapt their platforms to make the most of each.
Of the two, explainable AI is nascent in providing insights to improve model transparency and troubleshoot biases. “Just as we expect transparency and rationale in business decisions, AI systems should be able to provide clear explanations of how they reach their conclusions,” Joe Burton, CEO of Reputation, told VentureBeat. “This fosters trust and ensures accountability and continuous improvement.”
Burton added: “By focusing on these governance pillars — data rights, regulatory compliance, access control and transparency — we can leverage AI’s capabilities to drive innovation and success while upholding the highest standards of integrity and responsibility.”
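One widely used explainability technique that fits Burton's description is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The toy model and data below are assumptions for illustration, not any vendor's implementation.

```python
# Toy permutation importance: the model and data here are illustrative.
import random

def model(x):
    return 3.0 * x[0] + 0.1 * x[1]  # stand-in model that leans on feature 0

def mse(xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, feature, seed=0):
    """Error increase after shuffling one feature column = its importance."""
    rng = random.Random(seed)
    column = [x[feature] for x in xs]
    rng.shuffle(column)
    xs_perm = [list(x) for x in xs]
    for row, val in zip(xs_perm, column):
        row[feature] = val
    return mse(xs_perm, ys) - mse(xs, ys)

xs = [[i, 10 - i] for i in range(10)]
ys = [model(x) for x in xs]            # labels the toy model fits exactly
imp0 = permutation_importance(xs, ys, 0)
imp1 = permutation_importance(xs, ys, 1)
print(imp0 > imp1)  # True: feature 0 drives the predictions
```

Reporting these scores alongside predictions gives stakeholders the kind of clear rationale Burton describes, and makes skewed reliance on a sensitive feature visible early.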
Apple is preparing to take a fresh run at the smart home that starts with a rumored smart display that it may release next year. That’s according to Bloomberg’s Mark Gurman, who writes in his Power On newsletter today that the display will use a new operating system, called homeOS, that’s based on the Apple TV’s tvOS (much like the software that drives HomePods now).
Gurman reports that the display will run Apple apps like Calendar, Notes, and Home, and that Apple has tested prototypes with magnets for wall-mounting. And it will support Apple Intelligence — something Apple’s HomePods don’t currently do.
Another recent rumor suggested that a “HomeAccessory” device coming soon would be square-shaped, and that users might be able to use hand gestures from afar to control it, as 9to5Mac wrote earlier this week. And MacRumors has reported on apparent code references to the device and homeOS.
A display like this sounds more down-to-earth than Apple’s robotic screen idea. It could also be less fiddly and hopefully less expensive than trying to use an iPad as a dedicated smart home controller (I’ve tried; it’s not a great experience). We’ll find out if and when it launches, which doesn’t sound terribly far off.
There are some major updates to Google Maps, Street View, and Google Earth to know about – and the new and upgraded features should prove helpful in all kinds of ways for users of Google’s mapping tools.
The updates are outlined in a blog post by Google, and first up we’ve got the addition of historical imagery on Google Earth, going back as far as 80 years in some places. Some of this imagery has previously been available only in the paid-for Pro version of the software, but it’s now going to be accessible for all users across the web and mobile.
“Maybe you want to travel back in time and see what your neighborhood looked like decades ago,” explains Stafford Marquardt, a senior product manager at Google Maps. “Or you want to understand how forests have been affected by human activity and the changing climate.”
Certain cities, including London, Berlin, Warsaw, and Paris, now offer satellite imagery stretching back to the 1930s – making it possible to get a detailed look at how these places have evolved and adapted over the decades.
More Street View imagery
In Google Maps, there’s going to be expanded Street View imagery across 80 countries – with some of those countries getting Street View pictures for the first time. What’s more, Google says even more places will be getting Street View in the future.
The example photos included in Google’s blog post cover Iceland, New Zealand, Brazil, Mexico, Tasmania, Japan, Denmark, and France. According to Google, it’s one of the most significant updates to Street View in its history (it launched back in 2007), and the total number of images now exceeds 280 billion.
And finally in this round of updates, Google says it’s “sharpening” satellite imagery across Google Earth and Google Maps. With a little help from AI, new cloud removal tools have been applied to reveal more of the globe than ever before, giving you “a refreshed global mosaic that gives you a clearer, more accurate look at Earth”.
If you’re not already seeing the updates to Google Earth and Google Maps on your device, they should show up soon. We saw another update earlier this week, when Google Maps improved its lane navigation interface on Android Auto.
These solitary structures were once a key pillar of aviation navigation, but, due to their remote locations, today they are little known. Photographer Ignacio Evangelista’s starkly beautiful shots shine a light on the beacons, dubbed VORs (very high-frequency omnidirectional range stations), and their role in carving out routes in the sky for aircraft.
Essentially giant antennas, VORs beam out radio signals from secluded spots to allow planes to fix their location and stay on course by flying from VOR to VOR. The signals can be thought of as “breadcrumbs”, says Evangelista. The isolation is necessary to avoid interference in broadcasts between VORs.
Stations like the ones pictured here are a dying breed, as they are increasingly being decommissioned in favour of satellite-based GPS. But although GPS may be a more accurate means of navigation, VORs offer a back-up during events like solar storms or GPS interference, without which there could be a great deal of chaos, says Evangelista.
Because their locations are publicly available, anyone can seek out a remaining VOR simply by using GPS – a “curious technological pirouette”, as Evangelista puts it. This set him on course to document some of the more photo-worthy stations before they disappear for good.
Pictured from the top, the first two stations are in Spain – VOR NVS is on the edge of the village of Navas del Rey, 50 kilometres from Madrid, while VOR CMA is 1.5 km from the village of Calamocha. The last, VOR BRY, is on the edge of French village Bray-sur-Seine.