Understand Microsoft Copilot security concerns

Microsoft Copilot can improve end-user productivity, but it also has the potential to create security and data privacy issues.

Copilot streamlines workflows in Microsoft 365 applications. By accessing company data, it can automate repetitive tasks, generate new content and ideas, summarize reports and improve communication.

Copilot's productivity benefits depend on the data it can access, but security and data privacy issues can arise if Copilot uses data that it shouldn't have access to. Understanding and mitigating these concerns requires a high-level understanding of how Copilot for Microsoft 365 works.

How Copilot accesses company data

As with other AI chatbots, such as ChatGPT, users interact with Copilot via prompts. The prompt appears within Microsoft Office applications, such as Word or Excel, or within the Microsoft 365 web portal.

When a user enters a request into the prompt, Copilot uses a technique called grounding to improve the quality of the response it generates. The grounding process expands the user's prompt (though this expansion is not visible to the end user) based on Microsoft Graph and the Microsoft Semantic Index. These components rewrite the user's prompt to include keywords and data references that are most likely to generate the best results.
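
The flow is easier to see as a short sketch. The Python below is a conceptual illustration only; the helper functions (retrieve_graph_context, query_semantic_index) are hypothetical stand-ins and do not correspond to Microsoft's actual internal APIs or to anything exposed to developers.

```python
# Conceptual sketch of grounding, for illustration only: the retrieval helpers
# below are hypothetical stand-ins, not Microsoft's actual internal APIs.

def retrieve_graph_context(prompt: str, user_id: str) -> str:
    """Stand-in for a Microsoft Graph lookup scoped to what this user can access."""
    return "(relevant emails, meetings and files the user is permitted to see)"

def query_semantic_index(prompt: str, user_id: str, top_k: int = 5) -> str:
    """Stand-in for a Semantic Index query returning best-matching passages."""
    return "(top document passages related to the prompt)"

def ground_prompt(prompt: str, user_id: str) -> str:
    # The expansion adds keywords and data references likely to improve the
    # response; the end user never sees this expanded prompt.
    return (
        f"{prompt}\n\n"
        f"Organizational context:\n{retrieve_graph_context(prompt, user_id)}\n"
        f"Related documents:\n{query_semantic_index(prompt, user_id)}"
    )

if __name__ == "__main__":
    print(ground_prompt("Summarize last week's project status updates", "user@contoso.com"))
```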

After modifying the prompt, Copilot sends it to a large language model. LLMs use natural language processing to interpret the modified prompt and enable Copilot to converse in written natural language with the user.

Screenshot: Users can open Microsoft Copilot in other Microsoft applications, such as Word, and interact with it via prompts.

The LLM formulates a response to the end user’s prompt based on the available data. Data can include internet data, if organization policies allow Copilot to use it. The response usually pulls from Microsoft 365 data. For example, a user can ask Copilot to summarize the document they currently have open. The LLM can formulate a response based on that document. If the user asks a more complex question that is not specific to one document, Copilot will likely pull data from multiple documents.

The LLM respects any data access controls the organization currently has in place. If a user does not have access to a particular document, Copilot should not reference that document when formulating a response.

Before the LLM sends a response to the user, Copilot performs post-processing checks to review security, privacy and compliance. Depending on the outcome, the LLM either displays the response to the user or regenerates it. The response is only displayed when it adheres to security, privacy and compliance requirements.

How Copilot threatens data privacy and security

Copilot can create data security or privacy concerns despite current safeguards.

The first potential issue is users having access to data that they shouldn’t. The problem tends to be more common in larger organizations. As a user gets promoted or switches departments, they might retain previous access permissions that they no longer need.

It’s possible that a user might not even realize they still have access to the data associated with their former role, but Copilot will. Copilot uses any data that is available to it, even if it’s a resource that the user should not have access to.

A second concern is Copilot referencing data that a user can legitimately access but that it still shouldn't use. For example, an organization might not want Copilot to formulate responses based on documents containing confidential information, such as plans for mergers or acquisitions that have not been made public or data pertaining to future product launches.

An organization’s data stays within its own Microsoft 365 tenant. Microsoft does not use an organization’s data for the purpose of training Copilot. Even so, it’s best to prevent Copilot from accessing the most sensitive data.

Even if a user has legitimate access to sensitive data, letting that user reach it through Copilot can still be harmful. Some users who create and share Copilot-generated documents might not take the time to review them and could accidentally leak sensitive data.

Mitigate the security risks

Before adopting Copilot, organizations should conduct a thorough access control review to determine who has access to what data. Security best practices stipulate that organizations should follow the principle of least user access (LUA). Normally, LUA is adopted in response to compliance requirements or as a way of limiting the damage of a potential ransomware infection, since ransomware cannot encrypt anything that the user who triggered the infection does not have access to. In the case of a Copilot deployment, adopting the principles of LUA is the best option to ensure Copilot does not expose end users to any data that they should not have access to.
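
In practice, an access review can start with something as simple as cross-referencing an exported permissions report against each user's current role. The sketch below is a hypothetical illustration that assumes a CSV export with user, current_role, resource, role_granting_access and last_accessed columns; it does not use any real Microsoft 365 API, and the 180-day staleness threshold is arbitrary.

```python
# Hypothetical access-review sketch: flag permissions that do not match a
# user's current role or have not been exercised recently. The CSV schema
# (user, current_role, resource, role_granting_access, last_accessed) is
# illustrative, not a real Microsoft 365 export format.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=180)  # arbitrary threshold; tune to policy

def find_stale_permissions(report_path: str) -> list[dict]:
    flagged = []
    now = datetime.now()
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            role_mismatch = row["role_granting_access"] != row["current_role"]
            last_used = datetime.fromisoformat(row["last_accessed"])
            unused = now - last_used > STALE_AFTER
            if role_mismatch or unused:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for entry in find_stale_permissions("permissions_export.csv"):
        print(f"Review: {entry['user']} -> {entry['resource']} "
              f"(granted via {entry['role_granting_access']}, last used {entry['last_accessed']})")
```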

Restricting Copilot from accessing sensitive data can be a tricky process. Microsoft recommends applying sensitivity labels through Microsoft Purview. Configure the sensitivity labels to encrypt sensitive data and ensure users do not receive the Copy and Extract Content (EXTRACT) usage right. Withholding EXTRACT prevents users from copying sensitive documents and blocks Copilot from referencing them.

Brien Posey is a 22-time Microsoft MVP and a commercial astronaut candidate. In his more than 30 years in IT, he has served as a lead network engineer for the U.S. Department of Defense and a network administrator for some of the largest insurance companies in America.

Top Strategies to Secure Machine Learning Models

Adversarial attacks on machine learning (ML) models are growing in intensity, frequency and sophistication, with more enterprises admitting they have experienced an AI-related security incident.

AI’s pervasive adoption is leading to a rapidly expanding threat surface that all enterprises struggle to keep up with. A recent Gartner survey on AI adoption shows that 73% of enterprises have hundreds or thousands of AI models deployed.

HiddenLayer's earlier study found that 77% of companies identified AI-related breaches, while the remaining companies were uncertain whether their AI models had been attacked. Two in five organizations had experienced an AI privacy breach or security incident, and one in four of those incidents were malicious attacks.

A growing threat of adversarial attacks

With AI’s growing influence across industries, malicious attackers continue to sharpen their tradecraft to exploit ML models’ growing base of vulnerabilities as the variety and volume of threat surfaces expand.

Adversarial attacks on ML models seek to exploit gaps by intentionally misleading the model with crafted inputs, corrupted data, jailbreak prompts and malicious commands hidden in images loaded back into a model for analysis. Attackers fine-tune these attacks so that models deliver false predictions and classifications, producing the wrong output.

VentureBeat contributor Ben Dickson explains how adversarial attacks work, the many forms they take and the history of research in this area.

Gartner also found that 41% of organizations reported experiencing some form of AI security incident, including adversarial attacks targeting ML models. Of those reported incidents, 60% were data compromises by an internal party, while 27% were malicious attacks on the organization's AI infrastructure. Gartner predicts that 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft or adversarial samples to attack AI-powered systems.

Adversarial ML attacks on network security are growing  

Disrupting entire networks with adversarial ML attacks is a stealth strategy nation-states are betting on to undermine their adversaries' infrastructure, with cascading effects across supply chains. The 2024 Annual Threat Assessment of the U.S. Intelligence Community provides a sobering look at how important it is to protect networks from adversarial ML model attacks and why businesses need to consider better securing their private networks against them.

A recent study highlighted how the growing complexity of network environments demands more sophisticated ML techniques, creating new vulnerabilities for attackers to exploit. Researchers are seeing that the threat of adversarial attacks on ML in network security is reaching epidemic levels.

The quickly accelerating number of connected devices and the proliferation of data put enterprises into an arms race with malicious attackers, many financed by nation-states seeking to control global networks for political and financial gain. It’s no longer a question of if an organization will face an adversarial attack but when. The battle against adversarial attacks is ongoing, but organizations can gain the upper hand with the right strategies and tools.

Cisco, Cradlepoint (a subsidiary of Ericsson), Darktrace, Fortinet, Palo Alto Networks and other leading cybersecurity vendors have deep expertise in AI and ML to detect network threats and protect network infrastructure, and each is taking a unique approach to solving this challenge. VentureBeat's analysis of Cisco's and Cradlepoint's latest developments indicates how quickly vendors are addressing this and other network and model security threats. Cisco's recent acquisition of Robust Intelligence accentuates how important protecting ML models is to the network giant.

Understanding adversarial attacks

Adversarial attacks exploit weaknesses in the data’s integrity and the ML model’s robustness. According to NIST’s Artificial Intelligence Risk Management Framework, these attacks introduce vulnerabilities, exposing systems to adversarial exploitation.

There are several types of adversarial attacks:

Data Poisoning: Attackers introduce malicious data into a model’s training set to degrade performance or control predictions. According to a Gartner report from 2023, nearly 30% of AI-enabled organizations, particularly those in finance and healthcare, have experienced such attacks. Backdoor attacks embed specific triggers in training data, causing models to behave incorrectly when these triggers appear in real-world inputs. A 2023 MIT study highlights the growing risk of such attacks as AI adoption grows, making defense strategies such as adversarial training increasingly important.

Evasion Attacks: These attacks alter input data to cause mispredictions. Slight image distortions can confuse models into misclassifying objects. A popular evasion method, the Fast Gradient Sign Method (FGSM), uses adversarial noise to trick models; a minimal FGSM sketch appears after this list. Evasion attacks in the autonomous vehicle industry have caused safety concerns, with altered stop signs misinterpreted as yield signs. A 2019 study found that a small sticker on a stop sign misled a self-driving car into thinking it was a speed limit sign. Tencent's Keen Security Lab used road stickers to trick a Tesla Model S's autopilot system. These stickers steered the car into the wrong lane, showing how small, carefully crafted input changes can be dangerous. Adversarial attacks on critical systems like autonomous vehicles are real-world threats.

Model Inversion: Allows adversaries to infer sensitive data from a model's outputs, posing significant risks when the model is trained on confidential data such as health or financial records. Hackers query the model and use the responses to reverse-engineer training data. In 2023, Gartner warned, "The misuse of model inversion can lead to significant privacy violations, especially in healthcare and financial sectors, where adversaries can extract patient or customer information from AI systems."

Model Stealing: Repeated API queries are used to replicate model functionality. These queries help the attacker create a surrogate model that behaves like the original. AI Security states, “AI models are often targeted through API queries to reverse-engineer their functionality, posing significant risks to proprietary systems, especially in sectors like finance, healthcare, and autonomous vehicles.” These attacks are increasing as AI is used more, raising concerns about IP and trade secrets in AI models.
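
To make the evasion example concrete, here is a minimal FGSM sketch in PyTorch. It assumes a trained classifier, a batch of correctly labeled inputs scaled to [0, 1] and an illustrative perturbation budget; it is a sketch of the general technique, not any specific study's code.

```python
# Minimal FGSM sketch (PyTorch): perturb an input in the direction of the
# loss gradient so a trained classifier is more likely to mispredict.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial version of input batch x with true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each value by epsilon in the sign of its gradient, then clamp to
    # the valid input range (assumed here to be [0, 1]).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage sketch (model, images and labels are assumed to exist):
# adv_images = fgsm_attack(model, images, labels)
# print((model(adv_images).argmax(dim=1) != labels).float().mean())  # attack success rate
```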

Recognizing the weak points in your AI systems

Securing ML models against adversarial attacks requires understanding the vulnerabilities in AI systems. Key areas of focus include:

Data Poisoning and Bias Attacks: Attackers target AI systems by injecting biased or malicious data, compromising model integrity. Healthcare, finance, manufacturing and autonomous vehicle industries have all experienced these attacks recently. The 2024 NIST report warns that weak data governance amplifies these risks. Gartner notes that adversarial training and robust data controls can boost AI resilience by up to 30%. Implementing secure data pipelines and constant validation is essential to protecting critical models.

Model Integrity and Adversarial Training: Machine learning models that lack adversarial training are easier to manipulate. Adversarial training uses adversarial examples and significantly strengthens a model's defenses. Researchers say adversarial training improves robustness but requires longer training times and may trade accuracy for resilience. Although imperfect, it is an essential defense against adversarial attacks. Researchers have also found that poor machine identity management in hybrid cloud environments increases the risk of adversarial attacks on machine learning models.

API Vulnerabilities: Public-facing APIs are essential for delivering AI model outputs, which makes them prime targets for model-stealing and other adversarial attacks. Many businesses are susceptible to exploitation because they lack strong API security, as was noted at BlackHat 2022. Vendors, including Checkmarx and Traceable AI, are automating API discovery and blocking malicious bots to mitigate these risks. API security must be strengthened to preserve the integrity of AI models and safeguard sensitive data.
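
One inexpensive mitigation against model-stealing through a public endpoint is monitoring and throttling per-client query volume. The sliding-window limiter below is a generic sketch, not any vendor's product; the window size, request budget and client_id scheme are assumptions to be tuned to real traffic.

```python
# Generic sliding-window rate limiter for a model-serving API, as one layer of
# defense against high-volume extraction queries. Thresholds are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120   # tune to normal client behavior

_request_log: dict[str, deque] = defaultdict(deque)

def allow_request(client_id: str) -> bool:
    """Return True if this client is within its per-minute query budget."""
    now = time.monotonic()
    window = _request_log[client_id]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False          # throttle; also a good point to log and alert
    window.append(now)
    return True
```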

Best practices for securing ML models

Implementing the following best practices can significantly reduce the risks posed by adversarial attacks:

Robust Data Management and Model Management: NIST recommends strict data sanitization and filtering to prevent data poisoning in machine learning models. Avoiding malicious data integration requires regular governance reviews of third-party data sources. ML models must also be secured by tracking model versions, monitoring production performance and implementing automated, secured updates. BlackHat 2022 researchers stressed the need for continuous monitoring and updates to secure software supply chains by protecting machine learning models. Organizations can improve AI system security and reliability through robust data and model management.

Adversarial Training: ML models are strengthened by training on adversarial examples created using the Fast Gradient Sign Method (FGSM). FGSM adjusts input data by small amounts to increase model errors, helping models learn to recognize and resist attacks. According to researchers, this method can increase model resilience by 30%. Researchers write that "adversarial training is one of the most effective methods for improving model robustness against sophisticated threats." A minimal training-loop sketch appears after this list.

Homomorphic Encryption and Secure Access: When safeguarding data in machine learning, particularly in sensitive fields like healthcare and finance, homomorphic encryption provides robust protection by enabling computations on encrypted data without exposure. EY states, "Homomorphic encryption is a game-changer for sectors that require high levels of privacy, as it allows secure data processing without compromising confidentiality." Combining this with remote browser isolation further reduces attack surfaces, ensuring that managed and unmanaged devices are protected through secure access protocols.

API Security: Public-facing APIs must be secured to prevent model-stealing and protect sensitive data. BlackHat 2022 noted that cybercriminals increasingly use API vulnerabilities to breach enterprise tech stacks and software supply chains. AI-driven insights like network traffic anomaly analysis help detect vulnerabilities in real time and strengthen defenses. API security can reduce an organization’s attack surface and protect AI models from adversaries.

Regular Model Audits: Periodic audits are crucial for detecting vulnerabilities and addressing data drift in machine learning models. Regular testing for adversarial examples ensures models remain robust against evolving threats. Researchers note that “audits improve security and resilience in dynamic environments.” Gartner’s recent report on securing AI emphasizes that consistent governance reviews and monitoring data pipelines are essential for maintaining model integrity and preventing adversarial manipulation. These practices safeguard long-term security and adaptability.
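
The adversarial training practice described in the list above typically mixes clean and perturbed batches during training. The loop below is a minimal PyTorch sketch that reuses the fgsm_attack helper from the earlier FGSM example; model, train_loader, optimizer and the 50/50 loss weighting are assumptions, not a prescribed recipe.

```python
# Minimal adversarial training sketch (PyTorch): train on a mix of clean and
# FGSM-perturbed batches. Reuses fgsm_attack() from the earlier sketch;
# model, train_loader and optimizer are assumed to be defined.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in train_loader:
        # Generate adversarial versions of this batch first, then clear any
        # gradients that the attack left on the model parameters.
        adv_images = fgsm_attack(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Weight clean and adversarial loss equally; the mix is a tunable choice.
        loss = 0.5 * F.cross_entropy(model(images), labels) \
             + 0.5 * F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```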

Technology solutions to secure ML models

Several technologies and techniques are proving effective in defending against adversarial attacks targeting machine learning models:

Differential privacy: This technique protects sensitive data by introducing noise into model outputs without appreciably lowering accuracy. The strategy is particularly crucial for sectors like healthcare that value privacy. Differential privacy is used by Microsoft and IBM, among other companies, to protect sensitive data in their AI systems. A minimal noise-injection sketch appears after this list.

AI-Powered Secure Access Service Edge (SASE): As enterprises increasingly consolidate networking and security, SASE solutions are gaining widespread adoption. Major vendors competing in this space include Cisco, Ericsson, Fortinet, Palo Alto Networks, VMware and Zscaler. These companies offer a range of capabilities to address the growing need for secure access in distributed and hybrid environments. With Gartner predicting that 80% of organizations will adopt SASE by 2025, this market is set to expand rapidly.

Ericsson distinguishes itself by integrating 5G-optimized SD-WAN and Zero Trust security, enhanced by acquiring Ericom. This combination enables Ericsson to deliver a cloud-based SASE solution tailored for hybrid workforces and IoT deployments. Its Ericsson NetCloud SASE platform has proven valuable in providing AI-powered analytics and real-time threat detection to the network edge. The platform integrates Zero Trust Network Access (ZTNA), identity-based access control and encrypted traffic inspection. Ericsson's cellular intelligence and telemetry data train AI models that aim to improve troubleshooting assistance. Its AIOps can automatically detect latency, isolate it to a cellular interface, determine the root cause as a problem with the cellular signal and then recommend remediation.

Federated Learning with Homomorphic Encryption: Federated learning allows decentralized ML training without sharing raw data, protecting privacy. Computing on encrypted data with homomorphic encryption ensures security throughout the process. Google, IBM, Microsoft and Intel are developing these technologies, especially in healthcare and finance. Google and IBM use these methods to protect data during collaborative AI model training, while Intel uses hardware-accelerated encryption to secure federated learning environments. These innovations protect data privacy while enabling secure, decentralized AI.
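
As an illustration of the noise-injection idea behind differential privacy mentioned in the list above, the sketch below adds Laplace noise calibrated to a count query's sensitivity. The epsilon value is illustrative; production systems rely on audited differential-privacy libraries rather than hand-rolled noise.

```python
# Minimal differential-privacy sketch: add Laplace noise calibrated to a
# query's sensitivity so individual records cannot be inferred from the output.
# Epsilon and the example data are illustrative only.
import numpy as np

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count of True values (a count query has sensitivity 1)."""
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Usage sketch: report how many patients have a condition without exposing any
# single patient's record to inference.
records = [True, False, True, True, False]
print(f"Noisy count: {dp_count(records, epsilon=0.5):.1f}")
```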

Defending against attacks

Adversarial attacks such as data poisoning, model inversion and evasion can be severe, and healthcare and finance are especially vulnerable because these industries are favorite targets for attackers. By employing techniques including adversarial training, robust data management and secure API practices, organizations can significantly reduce the risks posed by adversarial attacks. AI-powered SASE, built with cellular-first optimization and AI-driven intelligence, has proven effective in defending against attacks on networks.


Adam Neumann’s startup Flow opens co-living community in Saudi Arabia

Flow, Adam Neumann’s co-living startup, opened a compound with 238 apartments in Saudi Arabia’s capital, Riyadh, and Forbes has some details. The opening included an Aztec-themed hot chocolate ceremony and tote bags with the words “holy s— I’m alive” on them. The rent for the furnished units starts at $3,500 a month and includes hotel-style services such as laundry and housekeeping and amenities like pools, co-ed gyms (unusual in Saudi Arabia), and bowling alleys. Flow is building three other properties with nearly 1,000 apartments in Riyadh.

The company's first, less luxurious properties opened in Fort Lauderdale and Miami in April.

Flow raised $350 million from Andreessen Horowitz in 2022. The funding raised eyebrows given the problematic history of Neumann’s previous startup, WeWork. Once valued at $47 billion, WeWork filed for bankruptcy protection last year and was ultimately acquired by Yardi, a real estate group, for $450 million.

Alexis Ohanian is premiering his women’s soccer show on X

In a late Friday email, X CEO Linda Yaccarino announced the launch of a new “video tab” feature (resembling a TikTok-style endless scroll, according to a source at X) and an X-exclusive reality series, called The Offseason, starring soccer star Midge Purce, and produced by investor Alexis Ohanian.

This announcement comes shortly after a gathering of X partners and clients at the New York office on Tuesday, while Yaccarino works to retain advertisers and content creators — both vital to the platform but steadily fleeing due to the behavior of its owner, Elon Musk.

Yaccarino added that Purce and Ohanian came to the office to share more about the upcoming premiere of The Offseason, which is set to go live October 18. X has been securing content deals with creators like MrBeast and celebrities like Don Lemon (who is now suing Musk after his show was canceled), aiming to strengthen its pivot to video and challenge YouTube as a video-hosting platform.

The Offseason is produced in partnership with reality TV producer Alex Baskin (who produced Vanderpump Rules) and Box to Box Films (Drive to Survive), alongside Ohanian, according to Variety. The show focuses on 11 National Women's Soccer League players during their off-season, living together for two weeks in Miami, offering "uncensored access to their personal stories, interpersonal relationships and on-field journey."

Ohanian and Yaccarino also promoted Ohanian's women's track event, Athlos, in other posts, saying it will be streamed live on X from New York City on Thursday night.

Quordle today – hints and answers for Saturday, September 21 (game #971)

Quordle was one of the original Wordle alternatives and is still going strong nearly 1,000 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.

Enjoy playing word games? You can also check out my Wordle today, NYT Connections today and NYT Strands today pages for hints and answers for those puzzles.

The Edge of Intelligent Photography

Octobers excite us at Halide HQ. Apple releases new iPhones, and they’re certain to upgrade the cameras. As the makers of a camera app, we tend to take a longer look at these upgrades. Where other reviews might come out immediately and offer a quick impression, we spend a lot of time testing it before coming to our verdict.

This takes weeks (or this year, months) after initial reviews, because I believe in taking time to understand all the quirks and features. In the age of smart cameras, there are more quirks than ever. This year’s deep dive into Apple’s latest and greatest — the iPhone 13 Pro — took extra time. I had to research a particular set of quirks.

“Quirk”? This might be a bit of a startling thing to read, coming from many reviews. Most smartphone reviews and technology websites list the new iPhone 13 Pro’s camera system as being up there with the best on the market right now.

I don’t disagree.  

The deepfakes of Trump and Biden that you are most likely to fall for

Photo: This is a real photo of Joe Biden giving a speech. (Shawn Thew/EPA-EFE/Shutterstock)

People can generally spot when videos of famous politicians giving speeches are actually AI-generated deepfakes. But we have more trouble discerning counterfeits from reality when listening to audio or reading supposed text transcripts.

“Audio deepfakes are, in my opinion, a little more dangerous in the current time because visual deepfakes are still harder to create,” says Aruna Sankaranarayanan at the Massachusetts Institute of Technology.

Sankaranarayanan and her colleagues collected text transcripts, audio and video of political speeches…
