Ensuring generative AI across the technology stack

Research shows that in 2026, more than 80% of businesses use generative AI models, APIs, or applications, down from less than 5% today.

This rapid adoption raises new considerations regarding cybersecurity, ethics, privacy, and risk management. Among the companies using generative AI today, Only 38% mitigate cybersecurity risksand only 32% work to address model inaccuracy.

My conversations with security practitioners and entrepreneurs have centered around three key factors:

  1. Enterprise generative AI adoption brings more complexity to security challenges, such as overly privileged access. For example, while conventional data loss prevention tools effectively monitor and control the data flowing into AI applications, they often fail with unstructured data and many nuances. factors such as behavioral norms or bias content within the prompts.
  2. Market demand for GenAI’s various security products is closely tied to the trade-off between ROI potential and inherent security vulnerabilities in the underlying use cases in which the applications are deployed. This balance between opportunity and risk continues to evolve based on the ongoing development of AI infrastructure standards and the regulatory landscape.
  3. Like traditional software, generative AI must be secured at all architectural levels, especially the core interface, application, and data layers. Below is a snapshot of the various security product categories within the technology stack, highlighting areas where security leaders perceive significant ROI and risk potential.
Table showing data for securing the GenAI tech stack

Image Credits: Forgepoint Capital

Widespread adoption of GenAI chatbots will primarily be the ability to accurately and quickly intercept, review, and validate inputs and corresponding outputs at scale without diminishing the user experience.

Interface layer: Balancing usability with security

Businesses see great potential in using customer-facing chatbots, especially customized models trained on industry and company-specific data. The user interface is susceptible to trigger injections, a variant of injection attacks aimed at manipulating the model’s response or behavior.

In addition, chief information security officers (CISOs) and security leaders are increasingly under pressure to implement GenAI applications within their organizations. While enterprise consumption is an ongoing trend, the rapid and widespread adoption of technologies like ChatGPT has sparked an unprecedented, employee-driven drive for their use in the workplace.

Widespread adoption of GenAI chatbots will primarily be the ability to accurately and quickly intercept, review, and validate inputs and corresponding outputs at scale without diminishing the user experience. Existing data security tools often rely on preset rules, resulting in false positives. Tools like Protect AI’s refusing and Harmonic Security using AI models to dynamically determine whether or not data passing through a GenAI application is sensitive.

Leave a comment