Why Adopting Generative AI Doesn't Have to Be Risky
As generative AI transforms IT operations and business processes, organizations must address data security, compliance, and governance to harness its potential safely and effectively.
January 7, 2025
By Jonathan Rende, PagerDuty
The race to adopt generative AI (genAI) is in full swing: IDC's Worldwide AI and Generative AI Spending Guide projects that genAI spending will reach $202 billion, representing 32% of overall AI spending.
This isn't surprising given that genAI can be applied to almost any business process across industries, with popular use cases including supply chain management, customer service automation, and predictive maintenance. GenAI is also a great fit in IT operations (ITOps), helping teams manage unplanned urgent work, augment IT staff, and save time on manual ITOps tasks.
However, in the rush to adopt genAI, organizations must ensure they are asking the right questions about data security, and where or how humans are kept in the loop. Many executives are concerned about genAI copyright and legal exposure, while others fear sensitive information disclosure and data privacy violations.
Leveraging genAI doesn't need to put organizations at risk, and it can deliver big gains for ITOps when deployed and used correctly.
GenAI Concerns Persist
Despite the hype and predicted spending, many organizations are still proceeding cautiously. According to one study, businesses cited threats to an organization's legal and intellectual property rights (69%) and the risk of disclosing information to the public or competitors (68%) as key concerns. Worryingly, 48% admitted to entering non-public company information into genAI tools. As a result, senior leaders may direct teams to pause genAI initiatives while the organization establishes guidelines and processes.
What we're talking about is applicable to both genAI and agentic AI.
For the "normal" AI and machine learning tools organizations have deployed over the last decade, there isn't much concern about running data through open source or tailored, vendor-created algorithms. GenAI and agentic AI are an entirely new ball game that requires special attention.
How Can Organizations Secure Their Use of GenAI?
To secure genAI use, organizations must take proactive measures to keep their data protected within company walls.
First and foremost, it's essential to review and implement robust terms and conditions that align with corporate data governance policies. This ensures that any data processed by genAI remains confined to authorized internal systems, preventing leakage to external entities.
Next, organizations should ensure genAI tools restrict the processing and storage of information strictly to company data sources. GenAI should draw on reputable internal data sources, and organizations must decide whether to allow LLMs to look externally as well. That is where external SaaS integrations become necessary, and these should be limited to connectors that are thoroughly vetted for security and compliance. Trusted connectors, typically offered by major cloud providers, ensure that data exchange occurs over secure channels, safeguarding against unauthorized access or accidental data exposure.
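As a minimal sketch of this vetting step (the connector names and policy check are illustrative assumptions, not any vendor's actual list), an allowlist of approved connectors might look like:

```python
# Hypothetical allowlist of connectors already vetted for security and
# compliance; a real deployment would source this from governance policy.
APPROVED_CONNECTORS = {"aws-bedrock", "azure-openai", "gcp-vertex"}

def check_connector(name: str, external_allowed: bool) -> bool:
    """Permit an external integration only if the organization has opted
    in to external LLM access AND the connector passed security review."""
    return external_allowed and name in APPROVED_CONNECTORS
```

The point of the sketch is that the decision has two independent gates: an organization-wide policy switch and a per-connector review, so an unvetted SaaS tool is rejected even when external access is otherwise enabled.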
Additionally, organizations should regularly audit genAI usage and access logs, ensuring that data handling complies with company policy and regulatory requirements. Access controls, encryption, and monitoring tools are vital in securing data flows. Having humans in the loop is also vital for some tasks, ensuring genAI's output is accurate. By setting clear governance practices and only using trusted secure channels, organizations can harness genAI's power without compromising data security, reinforcing trust among stakeholders while supporting safe and innovative growth.
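To illustrate how access controls and auditable usage logs can work together (the redaction patterns and log fields below are simplified assumptions; a production system would rely on a vetted DLP or data-classification service rather than regexes), a sketch in Python:

```python
import re

# Assumed patterns for likely non-public data; illustrative only.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def redact(prompt: str) -> str:
    """Mask likely non-public data before a prompt leaves company walls."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

def audit_entry(user: str, prompt: str) -> dict:
    """Build a usage-log record so genAI activity can be reviewed later
    against company policy and regulatory requirements."""
    clean = redact(prompt)
    return {
        "user": user,
        "prompt": clean,
        "redactions": clean.count("[REDACTED"),
    }
```

Logging the redacted prompt, rather than the raw one, means the audit trail itself never becomes a second copy of sensitive data.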
Safely Realizing GenAI's Potential
Safe use of genAI can produce big rewards, particularly in IT operations for managing unplanned work and interruptions. With safe adoption, organizations can unleash the full potential of their digital operations teams.
GenAI can help responders with:
1. Faster Identification of Critical Context During an Outage – GenAI's capability to summarize data in real time provides essential context on what has changed and how an outage may have started. This helps teams rapidly pinpoint contributing factors, saving valuable time during high-stakes, revenue-impacting issues where quick decision-making is vital.
2. Summarization and Collaboration – In a war room scenario, genAI can automatically generate summaries of incidents and the actions being taken, ready to share in internal channels. This keeps everyone up to date, including those in customer-facing roles, and eases the workload on IT and customer service teams, allowing them to focus on issue resolution instead of managing constant status updates. These updates also maintain transparency with external stakeholders such as customers. Many customers in technical roles have been in war room situations themselves and know things can go wrong; while incidents are frustrating, the simple act of keeping stakeholders informed builds empathy and trust.
3. Automating and Enriching Learning From Major Events – After an incident has occurred, genAI can collect and collate all the information about it and suggest a narrative of what happened and why. It can go one step further by suggesting steps to prevent a recurrence, such as accelerating the development and deployment of automation by proposing runbook automation based on those learnings. These suggestions could be driven by manually engineered prompts or by pre-engineered prompts as a starting point.
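As a hedged sketch of the third use case (the event fields, prompt wording, and function name are assumptions for illustration, not an actual product API), collating an incident timeline into a pre-engineered post-incident prompt might look like:

```python
def build_postmortem_prompt(events: list[dict]) -> str:
    """Collate incident timeline events into a pre-engineered prompt that
    asks genAI for a narrative and runbook-automation suggestions."""
    timeline = "\n".join(
        f"- {e['time']} [{e['source']}] {e['message']}" for e in events
    )
    return (
        "You are assisting with a post-incident review.\n"
        "Timeline of events:\n"
        f"{timeline}\n"
        "1. Summarize what happened and why.\n"
        "2. Suggest runbook automation steps to prevent recurrence."
    )

# Illustrative timeline data a responder might export after an incident.
events = [
    {"time": "14:02", "source": "monitoring",
     "message": "Latency alert on checkout service"},
    {"time": "14:05", "source": "deploys",
     "message": "Rollback of release 2.4.1 initiated"},
]
prompt = build_postmortem_prompt(events)
```

Keeping the prompt template pre-engineered and the timeline machine-collected means the genAI output still gets reviewed by a human, in line with the human-in-the-loop practice discussed above.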
By implementing genAI securely within their operational workflows, organizations can accelerate response times, improve transparency, and strengthen resilience. This controlled deployment of genAI empowers teams to tackle challenges efficiently while building trust with both internal and external stakeholders.
About the author:
As the Senior Vice President of Products, Jonathan Rende leads PagerDuty's emerging products representing key growth markets across the PagerDuty Operations Cloud including AIOps, customer service Ops, and automation products including workflow automation and runbook automation. Additionally, Jonathan is responsible for AI and genAI product investments across the PagerDuty Operations Cloud. Jonathan has decades of experience in the software industry, including various product, marketing, and engineering executive roles at companies such as Mercury, HP Software, Appcelerator, and Keynote Systems. Prior to joining PagerDuty, he was the Chief Product and Engineering Officer at Castlight Health, Inc., where his focus was on delivering and launching their next-generation predictive machine learning analytics platform for enterprises. Jonathan holds a bachelor's degree in engineering from the University of California, Davis, and an MBA from Santa Clara University.