Large language models and generative AI are today’s disruptive technology. This is not the first time companies just want to ban a new technology that everyone loves. Yet, we’re doing it all over again. Whether it’s ChatGPT or BYOD, people are going to use desirable new tech. So if our job isn’t to stop it, how do we secure it?
Check out this post for the discussion that is the basis of our conversation on this week’s episode co-hosted by me, David Spark (@dspark), the producer of CISO Series, and Geoff Belknap (@geoffbelknap), CISO, LinkedIn. Joining us is our special guest, Carla Sweeney, SVP, InfoSec, Red Ventures.
Got feedback? Join the conversation on LinkedIn.
Huge thanks to our sponsor, Censys

Full Transcript
[David Spark] Large language models and generative AI are today’s disruptive technology. This is not the first time companies just want to ban a new technology that everyone loves. Yet, we’re doing it all over again. Whether it’s ChatGPT or BYOD, people are going to use desirable new tech. So, if our job isn’t to stop it, how do we secure it?
[Voiceover] You’re listening to Defense in Depth.
[David Spark] Welcome to Defense in Depth. My name is David Spark. I’m the producer of the CISO Series and joining me for this wonderful episode of Defense in Depth is Geoff Belknap. He’s the CISO over at LinkedIn. Say hello to the nice audience, Geoff.
[Geoff Belknap] Hey, everybody! It is wonderful to be here with you.
[David Spark] We like being here with you. And you know what we also like? We like our sponsor being here with us as well. It’s Censys – the leading internet intelligence platform for threat hunting and exposure management, something we’re all dealing with. And guess what? We’re going to talk about that a little bit later in the show, I promise you! But first, dealing with new technology is part of the job for any security professional.
Each new innovation adds different challenges, and none is more unique than what we’re seeing right now with advances in generative AI. Matt Sullivan of Instacart noted that it wasn’t too long ago some security professionals complained about challenges of the cloud, DevOps, or BYOD. So, Geoff, are tools like ChatGPT so different from what we’ve seen before that we can’t apply lessons already learned?
[Geoff Belknap] No. Security’s a journey and the path is constantly changing. As much as we’d like it to be easy, the paved road often runs out and we have to kind of cut our own path and the cloud, BYOD, just as you said, and now AI are really no exceptions.
[David Spark] Good point. And by the way, I just love the way people responded to this and thrilled to have our guest on. We have had her on two other shows but not on Defense in Depth, so we have to get her on Defense in Depth. It is the SVP of InfoSec for Red Ventures, Carla Sweeney. Carla, thank you so much for joining us.
[Carla Sweeney] Thanks for having me, happy to be here.
We’ve seen this one before.
2:14.745
[David Spark] Jason Popp of GEICO said, “For an employee posting proprietary data, ChatGPT is one destination of millions. It’s notable but the exfiltration risk exists for any destination.” And Xander W. said, “While there’s a lot of unprecedented capability within the world of ML and AI, it’s just another thing we need to know better than the attacker.
This means new attack vectors, new defense in depth considerations, just like in every other tech invention.” So, Geoff, I want to talk more about the fact that this story keeps repeating itself, like, “New tech, we need to ban it.” What Jason and Xander said, and what you said earlier, is that it’s just the course of how technology and security work.
Why do you think we keep seeing this story repeating itself or is it just in our nature to be scared of something new?
[Geoff Belknap] I think in a complicated space like security, like I said at the beginning, it’s a journey and as much as we’d all like to take a rest or we’d like it to just stay the same the whole time because we figure that out, it just doesn’t. I think this situation right now is like every kind of new technology we’ve had to adapt to in the last 15 years.
It might be a paradigm shift for our business, for technology in general, but for security people it’s all about bringing it back to fundamentals. In this case, we already have some pretty strong patterns we can follow to protect data, to assess third-party risk and services. The new part is really learning to protect our own AI models.
I think we’re going to figure this all out, but here we are again. And mark my words, we’ll be here again in five years when the next thing is here.
[David Spark] Carla, have you had to work with a company that said, “Just ban it,” for any technology, by the way, not specifically this?
[Carla Sweeney] I think I’ve seen that more and experienced it at banks or companies that are historically a lot more conservative, and I’ve seen articles with larger companies saying, “We’re banning specifically ChatGPT,” or “We’re limiting ChatGPT,” and they’re worried about what data’s going in there, and they should be.
But I think to Geoff’s point, we can lean on our fundamentals and remember that this is about vendor risk and education and guidance and thinking about how we can enable our business to move forward and leverage this new technology. We learned a long time ago that being the Office of No isn’t the way and we know our way out of it.
We’ve done this before and we’ll do it again.
[David Spark] Then let me follow up on that. Then when a new technology presents itself, and ChatGPT is moving at a pretty rapid clip and as are all these AI models, how do you kind of communicate to your security team like, “This is how we need to try to get our arms around it”? What’s sort of the methodology you think at that point?
[Carla Sweeney] I think about what does the business want out of this or what is the goal, what is the problem that we’re trying to solve, and then how are we going to support that. Whether it’s learning ourselves, whether it’s talking with our friends in Legal about various risks to understand what the risks are and then what mitigation strategies we can employ, and what the business appetite is for that risk.
[David Spark] Let me throw this to you, Geoff. I mean, I see these things, especially AI, as innovation opportunities. In some businesses, if you double down on it and you’re taking advantage of it, you could start surpassing your competition. To not do it, to ban it, could be business detrimental. Yes?
[Geoff Belknap] Absolutely. Look, I get it. Depending on the kind of business you’re in, banning it upfront might be the only thing you can do. Maybe you’re not in a position where you can really deploy new controls or figure this out, but it cannot be the only thing you do. And frankly, to Carla’s point, if you do this you’re just making an enemy of your stakeholders that are really trying to leapfrog whatever business objective they’re trying to get to.
You have to face the newness of this and really figure out how to make it work for your stakeholders.
How do I start?
6:25.830
[David Spark] Timothy Shea of PlayStation said, “Some tech has lost over the decades, most recently NFTs and blockchain. Our job is to rationally validate tech on its merit and not blindly accept every fad that comes along. Ask yourselves what risk am I solving for with it and go from there.” Anshuman Mishra of Netsurion said, “Tech evolves over time and we need to evolve as well to be the same frequency to make the best of it.
With AI, if the delta is too high, it may sometimes cause more harm than good. The only thing that is worse than bad implementation is unprecedented implementation.”
And Val Akkapeddi of Collectors said, “Questioning and exploring a new thing from all angles before mainstreaming it is just wisdom, not obstructionism. Security of any kind is by definition friction.” So, Carla, this last comment from Val sort of speaks to what you were talking about of you got a question, you got to explore it.
We can take generative AI or any other technology, like when you did that exploration, are there things you discover like, “Oh, we should be aware of this,” kind of a thing, as opposed to the people like, “Let’s just run and use it,” kind of a thing?
[Carla Sweeney] In security, we frequently walk around with a grey cloud over our heads thinking, “How could this go wrong? How could this come back and bite us? How is this dangerous?”
[David Spark] Lawyers think the same way, by the way, [Laughter] Carla.
[Carla Sweeney] Yes. Just anyone in a control function, “How could this go wrong?” And so I think that’s a benefit to us to understand that and explore that. Think about the worst-case scenario and how likely is that and what controls can we use or guidelines, how can we educate people to reduce the worst of it.
So, I think that’s important and maybe Val’s quote was thinking about friction like added layers, like various locks on a door, but we don’t necessarily have to create friction to slow things down. I think we’re part of the solution and part of that where we’re a key player in here, understanding how can we do this safely, how can we do this and minimize risk to the business.
[David Spark] And I did acknowledge Val’s comment about that and there is I know an ongoing struggle for security to not be seen as friction because being seen as friction is often being seen as the Department of No. And often people want to avoid friction, and security doesn’t want to be avoided so we kind of want to get rid of that branding.
Am I right, Geoff?
[Geoff Belknap] 100%. And I think Carla put it really well. Security does naturally add a little bit of friction, but it’s our job to go beyond that and make sure that we’re adding the least amount of friction possible so that the business can do its job and to enable the business to do its job. The business needs to experiment.
We didn’t know, when NFTs and blockchain came out, that they wouldn’t become the next generation of business for everybody. So, you have to enable the business to figure that out for itself. And to be clear, it is not security’s job to determine whether a certain technology is okay for the business to use or not.
The business decides, security follows that and ensures it can move forward without risk or with as little risk as possible. But the key is we have to break those problems into small bite-sized problems, figure them out, and then if we have to invent new technology for the little problems that are left over we don’t have solutions for, then so be it.
[David Spark] Can you think of, and again, it can be on any of these sort of new technology fronts – BYOD and generative AI now – but just something in your exploration that you discovered that you’re like, “Hey, business. Here’s a big risk that we were not aware of.” And I think possibly with generative AI, a lot of people initially were not aware of, “Oh.
If you put private data in, others can take private data out.”
[Geoff Belknap] I think AI is the best example of that today where at first glance you look at it, you’re like, “Hey, this is like using a Word doc or a Google doc or something like that where you’re collaborating with something on a website.” But the reality is because you’re training a model on the back end and we don’t really know all the sharp edges that are to be known about AI, I can be working with it one way, Carla can be working with it another way, and then maybe Carla gets a hint of my private information that I’m working with.
That’s kind of a new thing. We haven’t really approached that before, and we’re figuring it out.
Sponsor – Censys
11:03.263
[David Spark] All right, before I go on any further, you do know I want to tell you about Censys, our sponsor. I’m pretty excited about this. So, protecting your company from a cyber-attack is a pretty monumental task. We wouldn’t have a network if it weren’t a monumental task. That’s all we talk about here.
So, not only do you have to stay a step ahead of threat actors who, let’s face it, are getting increasingly good at what they do, you have to secure a technology landscape that’s becoming more vast, complex, and fragmented. Oh, my God. That’s exactly what we’re talking about today.
So, think about all of your company’s interconnected tech. We’re talking about assets living in the cloud, your software and web properties, remote devices, not to mention all the shadow IT you don’t even know about. As your digital footprint grows, it becomes more challenging to identify, monitor, and defend all that you own, and just one unknown or under-managed asset can be an attacker’s point of entry to your network.
I’m speaking your language, right, people? All right. That’s why continuous visibility into your entire attack surface and larger threat landscape is critical. To prevent an attack, you need visibility that’s informed by a comprehensive, highly contextualized set of internet intelligence for both proactive and reactive security analysis at scale.
You need visibility into all of the exposures an attacker could exploit.
And this is exactly the kind of visibility Censys can provide to you. With the Censys Internet Intelligence Platform, your security team can access the most comprehensive, accurate, and up-to-date internet data available, so that you can take down threats in as close to real-time as possible, with no deployment or configuration required.
Government, enterprises, and researchers around the world use Censys to defend their attack surfaces and hunt for threats, including the US government and over half of the Fortune 500. You can learn more about Censys on their website. It’s not spelled like the US Census, it’s spelled censys.com. Go there.
Would this work?
13:16.560
[David Spark] Steven Smith of Freshworks said, “I sent out a company-wide email warning people of the potential pitfalls.” I’m assuming of AI. “I gave suggestions on how to use it safely. Speak in generalities – say ‘the company’ rather than ‘you.’ Partner with your people, tell them what they can do, don’t focus on what they cannot.
Turn the negative into a positive.” I thought that was healthy. Ben Kingshott of LMNTRIX said, “Standard rules apply, don’t disclose sensitive information to random sites on the interwebs. We need to ensure that the idea of AI websites are added to our user training pieces.” So, Carla, this is some pretty logical advice of just add it to what you’re doing and stay smart in the ways that you normally stay smart.
I think, like what Geoff was mentioning earlier, ChatGPT and these other programs seem like magic boxes, but I don’t think people realize that these magic boxes have deeper value that can be used by others. Is that the case, Carla?
[Carla Sweeney] Yeah. I really like what Geoff said, and I liked a former Defense in Depth episode on ChatGPT specifically, and that’s the difference here as opposed to any other SaaS app. It’s just easier to pull that data out. But “standard rules apply” is so true. So, with any SaaS application, stick with tools where you have an enterprise agreement in place.
Risks of public ChatGPT where your prompts are training the model are different than if they aren’t or if you have some sort of agreement protecting that. We can work with our friends in Legal to get the right contractual terms in there. And AI companies seem to be committing to those more and more as they want more and more adoption.
And then education. So, it’s kind of, Geoff said at the beginning, back to basics, back to fundamentals, standard rules apply here.
[David Spark] And like what both Ben and Steven said here too as well.
[Carla Sweeney] Right, exactly. I really liked what Steven said about turning the negative into a positive. If we come in here being arm wavers like, “Don’t do it! Don’t do it! Don’t touch it!” we’re going to be discredited and quickly ignored because people are going to do it anyway. So, here’s how to do it safely, here’s what we as a company, I like, “We as a company, we are leaning into this technology, here’s what we hope to accomplish and here’s how we’re doing it.”
[David Spark] Good point. Geoff, how would you add the sort of positive way of training people of taking advantage of something new and cool that could be of great benefit to the company?
[Geoff Belknap] Well, I think first of all, plus one everything Carla just said. So, I think anytime you’re dealing with something new – Steven’s got a great point – awareness is a fantastic strategy. We do eventually have to get around to solving the parts that are new, but in this case telling people about it, making them aware of it, and then most importantly, the quickest you can get to, “Hey, here’s your approved, paved path for this solution.
If you need to use AI or your business unit is experimenting with it, insert link here, here’s how to do that. Now it might have constraints and restrictions but here is the path to do it.” And as soon as you eliminate the mystery of “how I’m going to figure this out on my own,” you really are engaging your stakeholders, you’re providing people the opportunity and they know that you’re going to work on this for them.
That is a wonderful strategy.
This isn’t just a security issue.
16:38.936
[David Spark] Todd Luther of Solü Technology Partners said, “It is up to us to partner with the stakeholders using the tools to guide them to the appropriate path to meet a secure development. There is always a middle road, and it is for us all to partner to find that middle road.” Oh, my God. This is like the theme of the episode we’ve been discussing.
Gregory Smith of Smartsheet said, “They don’t call it the cutting edge or the bleeding edge for nothing. You need to make the business aware of potential risks and let leadership weigh it against the possible gains.” Again, this is our theme. Mysti Williams of Express Employment International said, “We may not get replaced with AI itself but we will definitely get replaced with someone who took the time to learn how to use it!” That quote I like right there.
All right, Geoff. I’m going to have you double down on that, and that has to do with if you don’t learn how to secure it, they’ll [Laughter] find somebody else that will.
[Geoff Belknap] Yeah. I think the shorter version of this is if your strategy is to be the Department of No, there will be a new department in short order.
[David Spark] And your department will be shut down. [Laughter]
[Geoff Belknap] Yeah. They’re going to find somebody else to lead this that says something that’s not, “No.” And I think there’s some really good points here. Especially what Todd said is something that’s very close to I think the way both Carla and I think about this which is it’s always good to take your partners along and it’s important to remind your partners that they have some shared responsibility here.
When we have to protect the company or in this case, if we have to develop an AI strategy, security can’t do it all alone, right? We need our partners to meet us partway and to participate in this together, but we have to, again, lean on the “together.” We have to bring our security and privacy partners along, understand what the business wants, and figure out how to get it to them.
Otherwise they’re going to find somebody that can.
[David Spark] Carla, maybe you’ll admit to a mistake that you’ve made in the past with any of these technologies, where you misstepped trying to either embrace it or not embrace it and then realized, “Oh, wait. This is a better way to do it.” Just interesting, have you misstepped in the past?
[Carla Sweeney] I am personally, I mean, just generally a risk-averse person which is probably how I ended up in security worrying about everything all the time. So, I think it is my…
[David Spark] And you’re also a mom, right?
[Carla Sweeney] And I am a mother, right, exactly. Your whole life is living outside of you and vulnerable to everything. So I think sometimes my gut reaction is a little scary. And Geoff said at the beginning, everything is changing whether we want it to or not. Everything’s always – the risks, the threats – everything is evolving whether we want it to or not.
So, I think I have sometimes jumped to like, “How could this go wrong?” or a little arm wavy. I think I have made that misstep in the past. It has never helped. It has never really worked out for me. It maybe helps people understand the risk but eventually everyone wants to adopt it. I think AI is incredibly exciting to a lot of people and businesses.
It’s going to unlock a lot of potential. Think about how it can pass AWS certification exams, how fast it can code. I think all businesses have a risk appetite whether it’s defined and documented and discussed or not, and some are going to be more willing to take on more risk in this experimentation stage while this is evolving than others.
[David Spark] And so yeah, that’s why it’s not a one-size-fits-all by any stretch. It’s like you said, the way the bank handles it versus the way the social media site handles it are going to be two very different things.
Closing
20:17.518
[David Spark] Well, that brings us to the very tail end of this show where I’m going to ask both of you – of you first, Carla – which quote was your favorite and why?
[Carla Sweeney] I think Robin Austin from the Colliers Group said, “If you don’t support AI or intelligent automation and work to secure data and embrace the innovation, you will be left behind and stand in the way of market share for your company.” I thought that was great. And then Matthew Sullivan, who is the author, jumped in later in the comments and said, “We have two options – grow or die.”
[David Spark] Good point. All right. Geoff, your favorite quote and why.
[Geoff Belknap] My very favorite quote was where we just agreed that, hey, if you have an anxiety disorder, security business is for you. You will get along here just fine.
[Laughter]
[David Spark] We embrace you. Welcome.
[Geoff Belknap] Yes. Trust me. As someone who suffers myself, it’s weird that I’m so good at this. It tingles all the same tingles. I feel like I really like a quote that is very similar to the one Carla picked and it’s not specific to AI but it’s just a great thing to keep in mind if you’re an up-and-coming security leader.
Mysti Williams from Express Employment International said, “We may not get replaced with AI itself but we will definitely get replaced with someone who took the time to learn how to use it!” If we don’t take the time to learn how to enable the business with new technologies, somebody else will be doing it for us.
[David Spark] And by the way, not the first time we’ve seen that too. If you’re not willing to grow… And that’s why, by the way, a lot of security professionals are trying to always upskill because they realize if I do not upskill, I’ll be left in the dust. Maybe it’s just the nature of technology in general.
[Geoff Belknap] And it’s just the nature of leadership. If you’re here to help your organization grow, this is what that means. We can never get truly comfortable.
[David Spark] Excellent. Well, thank you so much, Geoff. Thank you very much, Carla. That was awesome, Carla. I’m going to actually let you have the last word here. I do want to thank our sponsor Censys, remember censys.com – the leading internet intelligence platform for threat hunting and exposure management.
Check them out. We greatly appreciate Censys sponsoring us. Geoff, I know that when people want to go look for jobs, they can just go to your site called LinkedIn, right?
[Geoff Belknap] It’s a great website. There are jobs, there’s training, there’s services and products and people.
[David Spark] Yes. You could upskill and I bet you there’s some AI training on there too.
[Geoff Belknap] There absolutely is, yeah.
[David Spark] All right, Carla. Any last words on today’s topic for our audience?
[Carla Sweeney] I think we are in violent agreement here. We have to grow or die.
[David Spark] Violent agreement. We have to get people who disagree with you, Geoff, is what we’ll do.
[Geoff Belknap] Yeah, yeah. Carla’s too much on my wavelength. We’ve got to figure this out.
[Carla Sweeney] Next time.
[David Spark] All right. Thank you very much, Geoff. Thank you very much, Carla. And thank you, audience. We greatly appreciate your contributions and listening to Defense in Depth.
[Voiceover] We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cybersecurity. This show thrives on your contributions. Please write a review, leave a comment on LinkedIn or on our site CISOseries.com where you’ll also see plenty of ways to participate, including recording a question or a comment for the show.
If you’re interested in sponsoring the podcast, contact David Spark directly at David@CISOseries.com. Thank you for listening to Defense in Depth.