Threats In SaaS Are Closer Than They Appear

Organizations know that securing SaaS is vital. But security around SaaS apps is falling short and efforts to improve that security are complicated now that security teams act more as SaaS supervisors than app owners. How can we reduce the glaring gaps in our SaaS defenses?

This week’s episode is hosted by me, David Spark (@dspark), producer of CISO Series and Andy Ellis (@csoandy), operating partner, YL Ventures.  Joining us is our sponsored guest, Rohan Sathe, co-founder and CTO, Nightfall AI.

Got feedback? Join the conversation on LinkedIn.

Huge thanks to our sponsor, Nightfall AI

Nightfall
Nightfall is the leader in cloud data leak prevention. Integrate in minutes with cloud apps such as Slack and Jira to instantly protect data (PII, PHI, Secrets and Keys, PCI) and prevent breaches. Stay compliant with frameworks such as ISO 27001 and more — all powered by Nightfall’s industry-leading ML detection.

Full Transcript

[Voiceover] Best advice for a CISO. Go!

[Rohan Sathe] Everyone knows why they shouldn’t block generative AI technology just given all of the massive productivity benefits they have. And so really the upside here is you may already have some existing technology in your security stack like a DLP tool that can really solve some of the visibility challenges with tools like ChatGPT.

[Voiceover] It’s time to begin the CISO Series Podcast.

[David Spark] Welcome to the CISO Series Podcast. My name is David Spark, I am the producer of said CISO Series and my co-host for this very episode, you’ve heard him before, it’s Andy Ellis, he’s the operating partner over at YL Ventures, and he’s also the author of 1% Leadership. Have you sold a few books, Andy?

[Andy Ellis] We’ve sold a couple.

[David Spark] Excellent. Awesome. That’s good to hear. We’re available at CISOseries.com where you can check out all our programs. If you want to buy Andy’s book though, I would just look up 1% Leadership or go to CSOAndy.com. Correct, it’s there?

[Andy Ellis] Yes, it is.

[David Spark] All right. You can get it there. Our sponsor for today’s episode is Nightfall AI – prevent sensitive data leaks to SaaS apps and GenAI. In fact, our guest talks about that in just a moment. But before we get to introducing our guest, both of us are doing lots of events coming up so I do want to make kind of a plug.

This episode is dropping on the 10th of October. If you happen to live in the Miami area, I will be doing a live podcast recording on the 11th at the Claroty Conference called Nexus 23. And also on the 16th, I will be in Las Vegas doing the DigiTrust Summit. And also on the 17th, I’ll be doing a live podcast recording in Mountain View at Microsoft’s campus.

That’s part of the ISSA San Francisco and Silicon Valley Chapter. All of this is available if you go to the Events page on CISOseries.com. Andy, where will you be?

[Andy Ellis] I’ll be at InfoSeCon in Raleigh on October 20th, and then the following week I’ll be at SIM Boston on the 25th, and you can always see my availability in my newsletter which is at duhaone.substack.com or you can find that on CSOAndy.com, there’s a link to it right near the top.

[David Spark] You know what our audience says? It says can you list more dates on your show because our audience loves it when we list dates.

[Andy Ellis] I bet they do. We should just do like the next three years of events.

[David Spark] Bad ideas on creating a podcast, brought to you by CISO Series. All right, let’s introduce our guest. It is our sponsored guest. So thrilled. They’ve actually been a great sponsor of the CISO Series. They’ve actually sponsored one of our live events, one we did in New Orleans which was a ton of fun.

It is the co-founder and CTO of Nightfall.ai, Rohan Sathe. Rohan, thank you so much for joining us.

[Rohan Sathe] Yeah. Thanks for having me, David. Appreciate it.

How can we secure new technology without creating new risks?

3:07.215

[David Spark] Now the rise of generative AI has been so fast, like in the last 12 months it’s been astonishing, but we’re lagging in hard data on how businesses are responding to it. Reuters and Ipsos recently published a poll, finding that many respondents were already using these tools regularly at work, even without explicit authorization; 10% said their workplaces outright banned the tools, while 25% didn’t know their company policy.

My guess is those 25% didn’t even have a company policy. But Andy, given the glacial way new tech can come to the enterprise, are you surprised to see only 10% with an outright ban? Because historically, anything new comes, just ban it. Or does it make sense since companies have never been able to stop new tech proliferating, or are they clueless about what’s going on within the four walls of their organization?

Or is it a mix of all of them?

[Andy Ellis] So, it’s a mix of all of them. And of course, first I have to pull out my soapbox about data science and polling and this is an awful poll because it polled individuals, and so we don’t know if that 10% represents all people in the same organization with a ban or if they were evenly distributed.

So, we really can’t posit what organizations are doing, it’s just what the people polled were experiencing in their own workplaces.

[David Spark] Mm-hmm.

[Andy Ellis] And so if 50 people were in the same workplace, they’ll show up a lot. What I actually found interesting about this one, because I dug in on the poll, is that 22% are explicitly authorized to use it and 28% are using it. That’s a really small delta.

[David Spark] Right.

[Andy Ellis] That says only 6% of respondents were using it without explicit permission. That’s tiny when I look at the adoption of shadow SaaS.

[David Spark] By the way, yeah, I think there’s a lot missing in that too because I think that delta’s way higher.

[Andy Ellis] I absolutely do. And I think a lot of companies really haven’t thought through what GenAI really is from an outsourcing perspective. Let’s ignore the technology. Just say look, we outsource a lot of things to small businesses and offshoring or wherever it is. I have slides that are done by a small company located in a different country.

And the difference is, like when I have a human on the other side of it, there’s a limit to how much I can give them and I’ve got to contract with that company. And all that GenAI did was say, “Well, if you need some writing done, you don’t really need a contract and you didn’t carefully look at the terms of service, but you can basically outsource as much as you want as fast as you can.” And I think that’s where companies are getting into challenges is it’s like, uh-oh, you can basically have 1 content manager outputting the work of 50 people.

[David Spark] Mm-hmm. Rohan, I’m throwing this to you. What insight do you have into AI usage? Because it’s kind of all over the map right now. What’s the insight you have?

[Rohan Sathe] Yeah. So, we basically start by working with our customers to understand how they philosophically think about embracing or not embracing the technology. And so really that stems from do they actually see the massive productivity benefits to the organization? Does that provide value for them?

And if so, do they actually want visibility on the technology or are they okay with just letting their employees use the technology without any visibility?

[David Spark] That’s an interesting point. Okay, so let’s get to what you were talking about with DLP at the beginning of the show. What is the visibility DLP will provide into ChatGPT usage?

[Rohan Sathe] Yeah. So, the primary concern with this usage is that, A, it really centers around the data that is being submitted to the prompts of these language models, right? And so the primary objectives are – are we even allowed to share that sensitive data, be it something related to a particular compliance regulation that we have to follow like HIPAA or PCI?

Or just general security hygiene where we don’t want to share sensitive data that might make its way into the training datasets of these language models, right, and in turn be things that a bad actor could use some sort of training data exfiltration attacks against the models to extract out? So, those are the things that we see most commonly our customers are concerned with and therefore a DLP tool naturally extends to solving some of those visibility challenges.

Surprising research just in!

7:34.877

[David Spark] Over the last two years, more than half of all organizations saw a SaaS security incident and most organizations say their current SaaS security only covers a minority of their apps. Because departments are launching these apps with little to no supervision, cyber teams are seeing a shift away from being the one in control of app security.

They’re more supervisors without ownership. Now this was the finding of AJ Yawn of Armanino, referencing the findings of Cloud Security Alliance’s annual SaaS security survey. I’m going to start with you, Rohan. If security no longer owns SaaS security, so it goes to the business units, then how can they go about closing these gaps?

Like what is the best way? I guess working with the business units, managing since they’re not owning, supervising, whatever it is, what is the relationship and how should they engage?

[Rohan Sathe] Yeah, I mean, I think I generally agree with AJ that cloud security is a pretty critical blind spot for most security teams, just given sort of the ownership of who’s allowed to use these applications, and so really employees and security teams alike are both still adjusting to that world view, right?

And at Nightfall we’ve found that one in five employees exposes sensitive data via some of these SaaS applications at least once a month.

[David Spark] Let me pause you just for a second. Give me an idea of what that means – exposing stuff once a month, one in five. What exactly are they doing?

[Rohan Sathe] Yeah. So, in that case, they’re sharing something sensitive that should not exist in these applications, right? So, if you’re a health tech company, let’s say, and you are a customer support agent and you basically want to share something with another employee of yours, well now you may submit a customer’s Social Security number, let’s say, on Slack, for example.

And so that’s something that, A, violates HIPAA compliance that these healthcare health tech companies need to go through, but also just poor security hygiene, right? Like if somebody were able to get access into Slack, then now suddenly you’ve exposed this trove of information that they can do further damage with.
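To make that exposure concrete, here is a minimal sketch in Python of the kind of pattern check a DLP scan performs on a chat message. The regex and alert logic are illustrative assumptions only; production DLP products like Nightfall rely on ML-based detectors rather than bare regexes.

```python
import re

# Illustrative SSN pattern (XXX-XX-XXXX). A real detector would also
# validate area/group numbers and use ML to cut false positives.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssns(message: str) -> list[str]:
    """Return any SSN-like tokens found in a chat message."""
    return SSN_PATTERN.findall(message)

# Example: a support agent pastes a customer's SSN into Slack.
msg = "Customer 123-45-6789 is asking about her claim status."
hits = find_ssns(msg)
if hits:
    print(f"ALERT: {len(hits)} possible SSN(s) exposed in message")
```

A tool wired into Slack’s events API could run a check like this on each message and flag or quarantine the hit, which is the visibility the discussion above is describing.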

[David Spark] All right. Let me throw this to you because, Andy, I know you’ve got a lot of opinions on this. What is the supervising role of security versus the outright I guess management of security when it comes to SaaS? Again, you sort of see where I’m going with the split here? It’s like I no longer can put my hands on it, I just more can direct, I guess.

And again, not my department; other departments.

[Andy Ellis] Yeah. So, in a sense in most organizations, security has always been chasing the business and it’s been aided and abetted by slow IT organizations. So, if you said, “Oh, we want to deploy a new app,” by the time you could get it through all the IT provisioning process, security had a chance to show up and be like, “Oh, here’s all the things you have to do right.” And the advent of SaaS basically took IT out of the loop which took security’s sort of unwitting partner out of the loop.

So, security’s ability to get its job done, which has always been an influence job of, “We’re going to tell you what we want you to do and we’re going to try to help it be easier for you.” Well, the challenge is what could be easier than I need to write copy for our upcoming investor relations call and I’ve got a bunch of stats.

And so I just go to OpenAI.com, I put in my stats, and I say, “Write me an investor call with these stats in it.” There’s nothing simpler than that. And so now security’s trying to show up and be like, “Yeah, those stats will be all great and public in five days,” but right now that’s a material leak from an SEC perspective.

[David Spark] Andy, and I understand the average user does not have insight into exactly that. My feeling is that this requires not just security’s direction but also legal’s direction as well. Rohan, you’re nodding your head.

[Rohan Sathe] Yeah, exactly. I think the legal team has the ability to kind of influence what the policies and what compliance regulations a lot of these companies need to follow. And in turn, really help to educate the employee base around some of the risk, right? Because a lot of times, as you said, employees just don’t even know that this is a problem and that they shouldn’t be doing these things, and so it’s really about controlling some of the hygiene issues I think more or less.

[Andy Ellis] Yeah. I think you’re dead right on whose responsibility is this to set the policy. Security has the responsibility to make sure there is a policy and that it’s one that makes sense and works with the business. But at the end of the day, if your company wants to take a risk with using GenAI for new product launches, then great, that’s a risk your company can choose to take.

You just need to make sure that they’re informed, they understand that, oh, OpenAI might be training itself on our new product launch material. But if you’re like, “Yeah, I don’t really care because 30 days later it’s all public anyway and they’re going to train it on our website,” great, more power to you.

Sponsor – Nightfall

12:38.425

[David Spark] Who’s our sponsor this week? Hey! It’s Nightfall! It’s awesome! Let me tell you a little bit about them before I go on any further. Okay, are you and your team using SaaS apps like Slack, Jira, or Salesforce, or any number of GenAI tools to increase your team’s efficiency? We’ve actually been talking about this a lot already. So, if so, then your data could be at risk.

Ah. We’ve been already bringing it up. So, here’s what Nightfall AI does. They understand that building a trusted brand begins with safeguarding your company’s and your customers’ data. That’s why they’ve developed a state-of-the-art data leak prevention platform that ensures your company’s valuable information remains safe whether it’s in a SaaS app or a GenAI tool.

So, imagine this – your team is working diligently on a critical product using GenAI tools or sharing sensitive company or customer information over Slack. With Nightfall AI watching your back, you can be confident that your data is shielded from any prying eyes or accidental slips. Nightfall’s AI-enabled product discovers, classifies, and protects your data in cloud apps and GenAI in real time, putting a stop to potential leaks before they even happen.

Whether it’s customer data, proprietary information, or financial records, Nightfall AI has you covered. So, don’t let your hard-earned reputation be tarnished by data exposure. Join hundreds of leading companies like Snyk, Splunk, and Chime that trust Nightfall AI to keep their data protected from bad actors or human error.

Visit Nightfall.ai/CISOseries to learn more and get 25% off your first year’s subscription cost. Remember – that’s Nightfall.ai/CISOseries, you know how to spell that.

It’s time to play “What’s Worse?”

14:43.053

[David Spark] Rohan, you know how “What’s Worse?” is played, yes?

[Rohan Sathe] Yes. I think I understand.

[David Spark] You can handle it. It’s a risk management exercise.

[Rohan Sathe] All right.

[David Spark] Both options stink. Actually this one, it’s not that both options stink, they’re two positives that have a little bit of a tweak to them, if you will. So, they’re not actually two bad ones but they’re two things that you’re like, “Oh, what’s going to get me the better thing out of it?” All right.

This comes from Paul Lanzi of Remediant, actually one of our earliest sponsors, great. And I throw this to Andy first so Rohan, you get to agree or disagree with Andy here and give your explanation as to why.

So, your InfoSec department budget has to be voluntarily transferred by the line of business leaders out of their own budgets. Now, I know you like this, Andy, but here’s the rub: you have to convince each one of them of the value of their investment. Or you get a CEO-assigned budget that you only have to justify to the CEO, but it’s only half the money you feel you need.

All right? Which one is worse?

[Andy Ellis] So, I think implicit in that is I’m going to end up with more money.

[David Spark] But you got a lot more people to convince.

[Andy Ellis] But I got a lot more people. I would rather have the first one, so the second one is worse.

[David Spark] Okay.

[Andy Ellis] Because at the end of the day, you’re just a tax on the business, you’re not actually being successful so everybody’s going to look over at you and be like, “Why are we paying for you? You’re not doing anything.” Whereas on the other side, you’re actually doing your job, which is not to reduce risk.

Everybody thinks the job of security and InfoSec is to reduce risk. It’s not. The job is to make sure the business is making wiser risk choices. And you might be the one that’s helping reduce risk but if you’re reducing risk in a closet and nobody knows you’re doing it, that’s not helping the company.

Your job is to make sure that those product managers understand the risks of what they’re rolling out, and if there’s places where you can help them and you’re basically an insource sort of outsourcing firm, great.

I used to go to teams and say, “Look. Here’s what it will cost you to implement good security. I’ll explain the risk and here’s how you can mitigate it, and if you want to give me some money, I already have a center of excellence here and it’s cheaper for me to do it than for you to do it yourself but it’s up to you.” And people would often happily hand you money.

They’re like, “Great. You take this problem off my hands and I can write down that we’re doing something about it.”

[David Spark] All right. Good argument. I think I should have adjusted the numbers because the half is kind of severe. I should have made it like 75, 80%.

[Andy Ellis] Well, I assume I’m not getting everything I need if I have to go justify to everybody, so I was already thinking I’ll get 75 to 80% of what I need so worrying that same…

[David Spark] Right. Because even if you fail, you’ll get 50%, right?

[Andy Ellis] Right. If you just said I get the same amount of money, I would still take going and justifying what I do to the lines of business.

[David Spark] That is also a good point. Okay. Rohan, same question and let me know if your answer would change if instead of half the money it was like 75, 80% of the money from the CEO. Let me know.

[Rohan Sathe] I don’t think it would change, yeah. I mean, I think Andy has an interesting viewpoint. Perhaps this is just from me being a founder but I think the CEO has the ability to really control kind of the philosophy of the company and how they think about security. And so trying to get every department to justify a security initiative when in many startups or many organizations they’re already just slammed with all of the other things that they have to deal with, I think in my mind is quite difficult.

I think if you’re convincing enough you can try and get your CEO to really push for the budget whether to their finance team or really to their board. And so I think it comes down to the sales ability of the security professional at the organization.

[David Spark] Hold it, wait, wait. So, I’m trying to see where you’re landing here.

[Andy Ellis] He’s disagreeing with me on this. And there’s a great point here which is…

[David Spark] Yeah. Are you disagreeing with Andy? Because I do like your argument because I think your line of the CEO kind of sets the tone, and if you don’t have one person setting the tone from high up, then it becomes disjointed. So, are you disagreeing with Andy here?

[Rohan Sathe] I’m disagreeing. Yes, exactly.

[Andy Ellis] The real answer is you want to be right in the middle on this, which is there are things that should be a centralized budget and that should come, that should just be part of normal corporate budgeting, but there are things like the compliance activities to bring a new product to market are part of the cost of the product.

That should not come out of InfoSec’s core budget. You should say, “Oh, we want to build this new product and we’re going into healthcare and one of the costs is making it compliant,” and that should absolutely come from the line of business.

[Rohan Sathe] Yeah, that makes sense. I think I agree with that.

[David Spark] Hold it, wait. You coming back to agreeing with Andy now?

[Andy Ellis] No, he’s agreeing with me that it’s a compromise in the middle.

[David Spark] It’s a compromise.

[Rohan Sathe] I’m agreeing with the compromise, exactly.

[David Spark] But you’re still disagreeing with Andy at the beginning and up front?

[Andy Ellis] He’s disagreeing at the beginning. If you only have to have one of these bad things.

[David Spark] Yeah. That’s the way, you have to stay that way.

[Andy Ellis] [Laughter] Very rarely does the submitter win and get somebody to disagree with me, so…

[David Spark] Oh, not true at all! Not true. Plenty of people disagree with you, Andy.

[Andy Ellis] Go count on “What’s Worse?” I think I’m like four to one.

[David Spark] All right. Actually, I’ll ask the audience. Go ahead and count the number of times people have disagreed with Andy.

[Andy Ellis] Yeah. And count how many have agreed with me. I think it’s probably 80% agreement rate.

[David Spark] Eighty percent agreement rate?

[Andy Ellis] I’d say 80% just off the top of my head.

[David Spark] Yeah. This is a fact that you pulled out of wherever.

[Andy Ellis] I pulled it out of my nether so if somebody in the audience wants to go count…

[David Spark] Go ahead and count.

[Andy Ellis] Go count.

[David Spark] I’d love to know.

Please, enough. No more!

20:35.590

[David Spark] So, today’s topic is generative AI and we’ve been talking about this a little bit already. But Andy, and I just also want to point out that we’re at the end of August right now and this episode’s going to drop in early October. So, whatever you’re frustrated with about AI could conceivably change in the next month, okay?

[Laughter]

[Andy Ellis] I doubt it.

[David Spark] Well, let’s see. So, what have you heard enough about with generative AI and what would you like to hear a lot more about? From product sales and from security, from both angles.

[Andy Ellis] So, what I’d actually like to hear about is how does generative AI fit into a larger ecosystem because too many people are thinking about how do I just use GenAI to do something, and I equate it to human intelligence because it’s always fun to model expert systems on humans. And if you think about you as a human, right, you have a general-purpose AI, right, your metacognition and you have expert systems.

Yeah, you tie your shoes without thinking about it, very similar to computer expert systems. And you have a language model, like when you decide to yell at your kids or say something, you don’t actually have to think about the words when you’re speaking in your native tongue. The words just come out; they pop up.

That’s generative AI.

[David Spark] By the way, I like how you use the analogy of yelling at your children.

[Andy Ellis] Yeah. Something you do quickly with emotion and not a lot of thought behind it. Sorry, I have two teenagers who just headed off to school. But there’s a thing that humans do which is we process the words that we say and reflect and improve and make sure they make sense. And I want to hear more about how do you use GenAI to create language but then how are you using things like natural language processing and semantic analysis and other expert systems to ensure that what you are passing is actually meaningful.

Because ChatGPT does not write meaning, it writes syntax and things that sound good, and I see too many people who are using it and don’t have a system to say, “Hey, is what it just said useful and meaningful?”

[David Spark] That’s a good point. The way we’ve heard it described is that ChatGPT is really good at sounding confident when it’s wrong.

[Andy Ellis] Yes. It’s the bull [Beep] at a cocktail party that just shows up and can opine on anything and unless you have your phone on Wikipedia checking to see if they’re wrong, you think they’re smart.

[David Spark] Rohan, I’m going to ask you the same question and you also can respond to Andy’s thoughts. But let’s just start where you’re at here with what have you heard enough about with regards to generative AI and what would you like to hear a lot more about?

[Rohan Sathe] Yeah. I think we continue to hear about the capabilities of some of these AI-based agents, right, things like Auto-GPT really taking over most of the workflows that an individual would have day to day, and I haven’t really seen that just yet. I think a lot of it just feels fairly hypothetical.

I think there are some pretty clear strong use cases that have been talked about quite a bit, but I think we haven’t really seen it go to the next level and that’s really what I’m excited to see over the next, let’s call it, 6 to 12 months. And then some of these sort of theoretical attacks that security teams have started to think about, like prompt injection for example, I’ll be curious to see if any of those things emerge from some of these known sort of bad actors and they take advantage of some of those things.

[David Spark] So, let’s actually use this as an opportunity to discuss what you’re doing over at Nightfall with generative AI. First of all, I’m assuming given the work you’ve been doing that this has just become a pretty hot topic, yes?

[Rohan Sathe] Yes. Exactly.

[David Spark] So, walk me through it.

[Rohan Sathe] As we’d kind of alluded to earlier, we’re just seeing a lot of organizations either embracing the technology by force from their employees or that they want to embrace the technology but they just really don’t have an idea of how they should think about the security and the compliance implications.

And so for us, we’ve gotten sort of an influx of questions about how can we use the Nightfall technology to provide some of the same security and visibility controls that you provide us for the other SaaS applications that we’re concerned with, right?

And specifically the risks with generative AI are, again, around the data itself that is being transmitted to the language models, and so this could be employees using tools like ChatGPT directly in their browser or it could be employees using SaaS applications that have generative AI features built into their product and so indirectly these companies have added OpenAI as a subprocessor.

And so when you use one of these generative AI capabilities in their product, that data’s making its way to the subprocessor, right?

Or you as a product or engineering organization want to build some features yourself that are powered by generative AI technology, and so you’re either rolling your own homegrown sort of models or you’re using a third party. And so your customers are probably not cool with you sending their information to a third party, or your customers don’t want their information to be inadvertently exposed to another customer by virtue of sort of this language model technology, right?

So, those are the risks that we’re commonly hearing and sort of thinking about what are the solutions that we at Nightfall have already today that can help organizations tackle this or what can we sort of continue to iterate on to help them out.
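One common mitigation for the third-party exposure described above is to strip detected identifiers from content before it ever leaves your infrastructure for a subprocessor. Here is a minimal sketch in Python; the patterns and placeholder format are hypothetical illustrations, not any vendor’s actual API.

```python
import re

# Hypothetical redaction step run before forwarding customer content
# to a third-party LLM. Patterns are simplified for illustration;
# real detectors cover many more data types with fewer false positives.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace detected sensitive tokens with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize: jane@example.com reported SSN 123-45-6789 was leaked."
safe_prompt = redact(prompt)  # send this to the model instead
print(safe_prompt)
```

Because the placeholders are typed, the model can still reason about the redacted text (“a customer email address reported an SSN leak”) without the raw identifiers ever reaching the subprocessor.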

[David Spark] Let’s back up a little bit here. I’m going to guess this started to appear at low levels a year ago. Am I sort of right on that assumption?

[Rohan Sathe] Yeah, that’s right.

[David Spark] Okay. So, think where you were a year ago and today. What has been most surprising to you about behavior in AI and what you’ve had to adapt as a result of it?

[Rohan Sathe] I think the space is just, I mean, ChatGPT, OpenAI got a hundred million users really quickly, right, and so the space has just exploded because of the usage. And so that’s just attracted all types of interest, mostly from the VC community, the tech community, thinking about how they can sort of embrace the technology and find ways to embed it in their own products, right?

But the security implications just haven’t been really well thought through, and you’ll see some companies try to build their own internal sort of variants of proxies to get some visibility. But I think security companies should really catch up to some of these things and find ways to enable the technology.

[David Spark] Okay. Let me ask a closing question. Where do you think security companies are falling behind with regards to keeping up with how generative AI is behaving?

[Rohan Sathe] I think there’s a lot of technology that’s just designed in a manner that doesn’t provide the best end user experience, so if you’re trying to bolt on certain types of architectures and try and solve the visibility challenge, it can be quite complex and quite frustrating for the end users and they just end up finding workarounds.

So, really it’s like an architectural change that some companies are just unable to meet because of the way their existing product is designed, and so I think that’s been the biggest challenge, and really I think that’s where startups can innovate the most.

There’s got to be a better way to handle this.

28:07.902

[David Spark] Every company puts locks on the doors. It’s table stakes for physical security. Everybody knows if you didn’t, you’d simply be out of business. Yet there’s a digital equivalent of locks on the door that many companies simply aren’t implementing. This isn’t just a best practices question or a question of staying above the security poverty line.

It’s what basic cyber protection you need to just exist, to survive. Adrian Sanabria of Valence Security has been maintaining a spreadsheet of companies destroyed by cybersecurity incidents. In fact, it’s essentially cyber incident happens, company stops existing. He’s up to 24 so far. Andy, do we have an equivalent of locks on the door in cyber to maintain the basics of survival?

But even if you do have locks on the door, it doesn’t necessarily mean you’ll survive; locks can be and are broken. What are your thoughts here?

[Andy Ellis] So, I have two answers here, one humorous, one less so. So, my humorous answer is I asked ChatGPT to answer this as if it were a cybersecurity expert. That’s the text I just pasted into our rundown doc.

[David Spark] Okay.

[Andy Ellis] And I’m actually fascinated because I was reading it, and it said, “Oh, do patch management, strong access control, network segmentation, regular backups, employee training, intrusion detection and prevention systems, and an incident response plan.” And I’m like those are great sounding words but honestly those are not locks on the door.

[David Spark] No.

[Andy Ellis] Those are some hygiene you should do but we just talked about things that don’t fall into this. But here’s my real thing is that I’ve looked at Adrian’s list, it’s fantastic, I love the work he’s done here. Twenty-four companies in 22 years that we can attribute their failure to a cyber incident.

Of those 24 companies…

[David Spark] What, so I’m sorry, 22 years? It wasn’t that long, was it?

[Andy Ellis] Twenty-two years since he’s been maintaining this spreadsheet.

[David Spark] Wow, I didn’t realize that.

[Andy Ellis] Yeah, I know. So, this is one a year, and a third of them were businesses under 10 employees. So, what he’s really doing is pointing out the fact that it’s really hard to attribute business failure to a cyber incident. Bad things happen, but at the end of the day, when we doom and gloom and go inside our companies and say, “Oh, my God, we’ll be put out of business,” we kind of don’t have credibility, because the number of companies that have gone out of business because of a cyber incident is pretty small.

[David Spark] Yeah. Well, one a year, people go out of business for lots of other reasons than just that.

[Andy Ellis] Yeah. And if you look through some of them, you’re like, “I think you were blaming a cyber incident and you were probably a business that was failing already.” And it was like, “Okay, we had no margin for error and we’re headed downhill and we had a cyber incident. Let’s just close up shop.”

[David Spark] All right. Rohan, I’m going to throw this to you. You’ll have the closing thought on this. So, is there a way we can just basically talk about locks on the door, or is this not a problem to concern ourselves with, since it’s really just one business a year that goes down because of this? Andy thinks this isn’t a locks-on-the-door issue; it’s just a company that was probably failing to begin with.

[Rohan Sathe] Yeah. I think it’s probably a little bit extreme, so I definitely agree with that. I think on the cyber side we don’t want to make it seem like there’s some hyperbole and a company’s going to die if we don’t solve X, Y, and Z challenge, right? I think really the goal of security is to articulate the risk.

Ideally you can quantify that risk financially, and then you can even extrapolate and think about the reputational brand damage that would accompany a security incident, right? That’s how I think about it. The best way to think about what locks you should even have on the door is, once you articulate those things, to identify the most common sources of issues that would result in financial or reputational damage, and then justify your security spend on the basis of those risks.

That’s how I think about it and I think there’s a pretty easy argument to be made then if you can quantify those things and then it’s not like, “Oh, the company’s going to die or something or implode.” It’s more just about let’s be pretty measured about how we think about security spend and make sure that there’s high ROI.

[David Spark] That’s a very good point. And by the way, that’s kind of the philosophy of most of the people on this very show as well.

Closing

32:31.809

[David Spark] Well, that brings us to the tail end of this episode. Thank you so much, Rohan, for your insight on this. I’m going to let you have the very last word on this. But first, I want to thank your company, Nightfall.ai. Let me mention the web address, so it’s nightfall.ai. Throw the CISO Series in there, Nightfall.ai/CISOseries, where you can learn how to prevent sensitive data leaks to SaaS apps and GenAI as well.

Andy, any last thoughts on our discussion? We’ve actually talked a lot about GenAI today.

[Andy Ellis] Yeah. I think GenAI, everybody’s interested in it and I think the big focus is how do we effectively use GenAI for our business, deal with the security issues, but really I think the biggest risk people have is the reputational risk of bad GenAI usage. And so really keep an eye on that rather than just focusing on the data leaks.

Pay attention to the data leaks but also pay attention to the data you’re using.

[David Spark] Everything’s a balance, right?

[Andy Ellis] Yep.

[Rohan Sathe] Yep.

[David Spark] Rohan, as we said, you guys are offering a discount, 25% off for the first year for our audience. Remember, go to Nightfall.ai/CISOseries. Anything else you’d like to say in closing? Any offers or anything else you’d like to say to our audience? And by the way, are you hiring? We always ask that question.

[Rohan Sathe] Yes. Definitely hiring across the board. I’d be remiss if I didn’t mention our latest sort of product offering, right? I talked about sort of the three data planes of generative AI that are namely employees using browser-based technologies like ChatGPT, employees using SaaS applications that contain generative AI features, and then engineers or product teams building their own sort of language-model-based technology into their products.

And so the product suite that we have offers coverage across those three different planes. We have a browser extension that gives organizations visibility on the browser, we have SaaS-based integrations that give organizations visibility and remediation capabilities for SaaS applications, and then we have a developer API that allows product engineering and data science teams to think about data hygiene as it relates to any LLM or generative AI technologies that they’re building into their products.

[David Spark] Thank you very much, Rohan. Thank you very much, Andy. And thank you to your company Nightfall.ai for being a wonderful sponsor of the CISO series. We greatly appreciate it. And thank you to our audience, we greatly appreciate your contributions. Send me more “What’s Worse?” scenarios.

If anyone can tell me what Andy’s, quote, “success rate” of getting people to agree with him is, I would greatly appreciate that. It’s actually pretty easy to find. If you just go to the “What’s Worse?” section on all of Andy’s episodes, you can actually look through the transcript and figure it all out.

[Andy Ellis] Yeah.

[David Spark] So, I will be greatly appreciative if someone can figure that out for us. I believe though that you are definitely well over 50% in terms of agreement.

[Andy Ellis] Oh, I know I’m well over 50%. And if you want to find all my episodes, I actually now link to all of them from my website. So, if you go to CSOAndy.com, click on Podcasts, there’s a dropdown for CISO Series that will show you every episode I’ve co-hosted.

[David Spark] Oh, there you go. Or you can just type in Andy Ellis on our site.

[Andy Ellis] Oh, does it sort that way?

[David Spark] Yeah. If you type in “Andy Ellis,” you’ll only see the episodes you’ve actually appeared on.

[Andy Ellis] No. Actually I’m sometimes quoted on other episodes.

[David Spark] Ah, you’re quoted on articles and episodes. That’s true as well.

[Andy Ellis] Yeah.

[David Spark] That’s true. All right. So, then just go to CSOAndy.com and you’ll find it all. Thank you, everybody, audience. We greatly appreciate your contributions and listening to the CISO Series Podcast.

[Voiceover] That wraps up another episode. If you haven’t subscribed to the podcast, please do. We have lots more shows on our website, CISOseries.com. Please join us on Fridays for our live shows – Super Cyber Friday, our virtual meetup, and Cybersecurity Headlines Week in Review. This show thrives on your input.

Go to the Participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly at David@CISOseries.com. Thank you for listening to the CISO Series Podcast.