Responsibly Embracing Generative AI

Businesses are walking a tightrope with generative AI. On the one hand, it’s a potentially disruptive technology, and no one wants to be the last one to adopt it. On the other hand, we’re only just starting to understand the risks it presents to an organization. So how can organizations implement generative AI responsibly?

Check out this post for the discussion that is the basis of our conversation on this week’s episode co-hosted by me, David Spark (@dspark), the producer of CISO Series, and Steve Zalewski. Joining us is our sponsored guest, Russell Spitler, CEO and co-founder, Nudge Security.

Got feedback? Join the conversation on LinkedIn.

Huge thanks to our sponsor, Nudge Security

Nudge Security provides complete visibility of every SaaS and cloud account ever created by anyone in your org, in minutes. No agents, browser plug-ins or network proxies required. With this visibility, you can discover shadow IT, manage your SaaS attack surface, secure SaaS access, and respond effectively to SaaS breaches.

Full Transcript

[David Spark] Businesses are walking a tightrope with generative AI. It’s a potentially disruptive technology, and no one wants to be the last one to adopt it. But at the same time, we’re only just starting to understand the risks it presents to an organization. So, how can organizations implement generative AI responsibly?

[Voiceover] You’re listening to Defense in Depth.

[David Spark] Welcome to Defense in Depth. My name is David Spark. I am the producer of the CISO Series. And guess what? I’ve got Steve Zalewski here with me. He also cohosts this show. Steve, say hello to the audience.

[Steve Zalewski] Hello, audience.

[David Spark] He’s so friendly that way. By the way, if you’re not familiar with us, there’s more to find out about us on ciso-dev.davidspark.dcgws.com. We have many other programs as well. Our sponsor for today’s episode is Nudge Security, SaaS security for modern work. But we’re talking about AI today. And guess what?

They have awesome solutions for discovering who is using AI in your environment. In fact, that’s what we’re going to be talking about on today’s episode. More about that coming up very quickly. So, Steve, help me set this up. No one wants to be caught flat-footed with generative AI.

There are great opportunities and potentially great risks. Both are seriously unknown. By the way, I’ve been to a lot of conferences lately. This is literally the topic that comes up again, and again, and again. It’s like we don’t want to block it because the business needs it. Then again, we don’t really know what the risk is in providing it.

So, we’re all trying to hit that baby bear, just-right sweet spot. We’ve all seen new tech come into the business before. What can we learn from our previous transitions of dealing with new tech that can help us responsibly bring generative AI into the workplace milieu?

[Steve Zalewski] So, when I was at Black Hat, and that wasn’t too long ago, I posted on LinkedIn that this was the number one conversation amongst all the security practitioners.

[David Spark] Even at RSA before that, too.

[Steve Zalewski] And at RSA. But at Black Hat it was the topic. It overwhelmed everything else. So, I at that point was simply saying, “All right. Well, if that’s the hot topic from a security perspective, what are you doing?” Not, “What are you talking about?” But how are you enabling the business to sell more jeans?

How are you actually executing on this? And it’s a dark tunnel at that point. There is silence.

[David Spark] Here’s my answer to this – no one seems to know at all. [Laughs]

[Steve Zalewski] Right? And so here we are talking about things like being responsible for leveraging it or that it has all this great capability, or we have thousands of ways that it can be insecure. And yet I simply said, “Look, it’s here. It’s being deployed. It’s being way over hyped. Every product has to have it in some capacity.

So, let’s move beyond that, and let’s just talk about where we are in the maturation of being able to responsibly let a company use it.” And that’s what today’s episode is really about: let’s have that conversation not about why it’s a great idea or all of the potential problems, but let’s net it out.

Let’s talk about what the state of the art actually is from the perspective of enabling companies, and what are we, as security practitioners, practically able to do.

[David Spark] All right, let’s jump into this. And we have the perfect guest for this topic because their product answers a significant portion of this question about AI – essentially, who the heck is using it. Anyways, our sponsored guest today is from Nudge Security, the CEO and cofounder, Russ Spitler.

Russ, thank you so much for joining us.

[Russell Spitler] It’s great to be here with you guys today.

What do most people think it is, and what’s the reality?

3:52.008

[David Spark] Jason Fruge of Risksilience said, “AI is going to be part of your business going forward.” I think we all agree on that. He goes on, saying, “Banning it is like banning the use of the internet. We have to figure out how to use it safely while avoiding disruption to our business.” George Kamide of Cinder said, “Trying to ban it will only put you in the crosshairs of growth and revenue leadership.

There are several enterprise-level systems, and there are a myriad of ways to build in-house using platforms or training proprietary LLMs on internal data. There are security solutions built to protect company LLMs.” So, Steve, I’m going to start with you. I think the security community as a whole knows that banning it is not the solution, and I would say from the conferences I’ve been to, five to ten percent are working at companies that are banning it.

And I think it’s just because they’re such massive global companies, they just don’t know what to do at the moment. And they’re like, “This is what we have to do now.” But I think people are all on board. Yes?

[Steve Zalewski] I agree. There are a few that are banning it, and they have a corporate edict to do it, so it’s not the security people simply saying, “I don’t know what to do about it.” The larger corporations believe there’s more risk in enabling it than not.

[David Spark] And it’s because of who they are. They’re like in finance or a big… Like, “We don’t have a choice now. We don’t understand it, so we’ve got to ban it for now.”

[Steve Zalewski] Right. Or like healthcare, where your medical record data… They just don’t want to take the chance of letting it out because it’s Pandora’s box. You can’t put it back. And so until they’re really confident they understand how to secure it, they’re just not going to take the risk. But here’s the rub – for all the rest that are simply saying, “We have to do it,” most of them are writing policies.

Well, a policy is awesome. It’s just how do you either track it or enforce it. And that’s where the real rub is.

[David Spark] I think this is where you jump in here, Russ, is policy is wonderful, but it doesn’t actually create action or understanding, does it?

[Russell Spitler] I think that’s a really great way to say it. And even as you guys discussed that, I always replace “ban it” in my head with “drive it underground,” because that’s the reality in most of these organizations. People are reaching for these technologies because they’re increasing productivity.

They’re enabling some new business process or workflow that wasn’t available before. And the idea that it’s just not going to be available is sort of like putting the cookie jar on the top shelf. The kids just want it more when it’s out of reach.

[David Spark] Yeah, and all it takes is a step ladder to get there.

[Russell Spitler] Exactly.

[David Spark] Yeah. Like when you ban it from the corporate email, it goes, “Well, I do have my phone here. I do have my personal laptop. It’s not that hard.”

[Russell Spitler] Absolutely. And that’s kind of an interesting piece. We actually did some research on blocking technologies – essentially secure web gateways – on what happens when an employee hits a big red wall of death saying, “Hey, you can’t go to openai.com.” And 67% of employees said they’d work around that, which is in line with probably my personal action.

[Crosstalk 00:07:09]

[David Spark] I’m actually surprised it’s that low. I would think more like 80 to 90%.

[Russell Spitler] I think you’re probably right.

[David Spark] I think they were maybe lying, trying to be conscientious as well. But I bet you have customers that come to you that are working in an environment of blocking, and they’re like, “We need to understand this so we can start to unblock it…” Or what is it that customers come to you with… what stage are they in?

[Russell Spitler] It’s always about that awareness stage, which is… Right now, I’ve had so many conversations with CISOs around the world who say, “I don’t even have a report I can give to my board about how widespread the usage is, who’s using it.” And the real challenge these days is, regardless of the sort of bellwethers out there like OpenAI, there are 50 or 100 other variations or skins on top of that that have popped up over the last few months as well.

And so all of a sudden we have this long tail of services we need to track, and people don’t even have an understanding of who’s using it, how they’re using it, and where they’re using it. And so really what we focus on is how do we put that policy in place, how do we get that traction of enabling people to see who’s using it, and get them aware of the policy… the acceptable use that the organization wants to have.

What’s everybody talking about?

8:30.099

[David Spark] Caleb Sima, who actually used to be the CSO over at Robinhood, said, “One, data leakage in LLMs is overhyped and not the most critical risk. Build a usage policy for AI for employees. Overall, enterprises have far more likely areas of vulnerability than this.” I think it’s always a general unknown, but it’s like this is not where you’re going to see data leakage.

People go straight to the source, the data, not trying to get into the LLM. But anyways, Caleb goes on to say, “If you’re using other data stores, then ensure your LLM is…” By the way, large language model. “…is doing identity pass-through, and the LLM service itself does not have elevated privileges.

This mitigates prompt injection effects.” And lastly, he says, “You need to invest in good data pipelines. Cleaning, organizing, and managing data is painful but has to be done.” Caleb has sort of promoted the fact that he’s been educating himself the past few months on artificial intelligence, so he’s sort of becoming a quick sage on this topic.

What say you on his advice, and have you learned the same as well, Russ?

[Russell Spitler] So, that first point that he makes I think is probably the most applicable across the board, which is you need to have an acceptable use policy. You need to educate your employees. You need to make sure that they’re aware of that when they take the action of signing up for these services.

Now, when you get to his other two points, I really feel Caleb here is probably a few steps ahead of most of the people out there in the market.

[David Spark] Oh, yeah, he is.

[Russell Spitler] [Laughs] Which is if we’re talking about building LLMs in-house, if we’re worried about prompt injection, or we’re worried about privilege escalation from in-house models, those are concerns that I think most people have on next quarter’s roadmap, not necessarily the urgent issues that they’re trying to deal with today.

But I think his advice is certainly good. And as we kind of look to the future and as we look at more of this proprietary technology in house, these are other concerns that are going to start to pop up. But right now, the pressing issue is who’s using it, and are they aware of the particular implications and risks of using an LLM or a service that wraps an LLM out there on the internet.
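To picture Caleb’s “identity pass-through” advice in practice, here is a minimal sketch. Every name in it – the endpoint, the token plumbing, the call_llm() stub – is a hypothetical illustration rather than any vendor’s real API. The idea is simply that the retrieval layer forwards the end user’s own token to the data store, so even a fully prompt-injected request can only surface documents that user was already entitled to read.

```python
# A minimal sketch of "identity pass-through" for an LLM retrieval layer.
# Every name here (endpoint, token, call_llm) is a hypothetical stand-in,
# not any vendor's real API. The key idea: the data store is queried with
# the END USER's credentials, so the LLM service holds no elevated
# privileges for a prompt injection to abuse.

import requests

DATA_STORE_URL = "https://datastore.internal/search"  # hypothetical endpoint


def call_llm(prompt: str) -> str:
    """Stub for whatever model you host or rent; swap in a real client."""
    return f"[model response to a {len(prompt)}-character prompt]"


def retrieve_context(query: str, user_token: str) -> list[str]:
    """Fetch documents for the prompt using the caller's own identity."""
    resp = requests.get(
        DATA_STORE_URL,
        params={"q": query},
        # Pass the user's token through unchanged; the data store applies
        # its existing ACLs, so the user (and any injected prompt) can only
        # ever see documents the user was already entitled to read.
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [doc["text"] for doc in resp.json()["results"]]


def answer(question: str, user_token: str) -> str:
    context = "\n".join(retrieve_context(question, user_token))
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```

The design choice is the whole point: because the LLM service itself holds no standing privileges, a prompt injection is scoped to one user’s existing access rather than becoming a privilege-escalation problem.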

[David Spark] I’m just going to echo what Russ just said. That is the story, who’s using it, and do they understand the implications. I don’t think we can… Like you said, because Caleb’s other two points… He’s ahead of the curve. I don’t think anyone is near that point because this first point is such a big deal, Steve.

Yes?

[Steve Zalewski] We’re still at the stage of thinking about vulnerabilities and all the ways we could be vulnerable. What we haven’t done is really start to think through where we are exploitable – which of those vulnerabilities are actionable, which ones we actually care about for our particular company – to then look at where the controls can be.

Caleb does a good job of kind of starting to net that out for people. Is it a data problem? Is it an access problem? Is it that you have a product that’s integrating AI into your product, so you have to build trust for somebody to buy your product?

Or is it that your company is using all of these open source AI tools and you as the CISO are simply trying to put some form of visibility around that to put some protection around the business, not to try to protect the individual LLM instances? These are all examples of use cases, and we’re not being prescriptive yet.

And so when I look at it that way and when I look at what Caleb is saying, I go, he is starting to give people some hard use cases to start to hold themselves accountable. Then I also say, quite honestly, we’re at a visibility stage.

For AI, for where this is, whether you believe it’s a data issue or not, most of the technologies are still at visibility – can I just see where the problem is? What’s moving, and where? What open-source AI tools are being deployed? So I am just trying to understand the problem. But we’ve been burned in the past, because visibility without actionability is me taking on more responsibility with no ability to do anything about it.

And so I also think we’re getting smart as CISOs and saying, “We don’t want just visibility. We want actionability. And if you don’t bring that, don’t bring a solution yet.” Which I think also resonates in the market and explains the slow adoption of technical solutions.

Sponsor – Nudge Security

13:23.271

[David Spark] All right, before I go on any further, I do want to talk a little bit more about our sponsor, Nudge Security. And, well, the big question we’ve been asking: do you know who’s experimenting with AI tools in your organization? You can find out with Nudge Security. Their patented approach to SaaS discovery gives you a full inventory of the AI apps ever introduced by anyone in your organization, in minutes, including generative AI.

So, the best part, you don’t even have to know what apps you’re looking for. After a quick one-time setup with your email provider, Nudge Security discovers and categorizes every SaaS and cloud account ever created by anyone in your organization. No agents, browser plugins, or network proxies required.

Now, how do they do it?

Their patented discovery method takes advantage of a simple yet consistent design pattern of every modern SaaS and AI provider – the use of email to confirm account creation and many other security-relevant activities – making email the perfect event log for SaaS security. Now, after starting a free 14-day trial, you’ll have a full list of all generative AI apps and users in minutes.

For each AI provider discovered, Nudge Security provides insights on the security posture, compliance certifications, breach history, and more so you can quickly vet new AI tools and guide users towards more secure alternatives. And when new AI accounts are created, you can immediately nudge the user to review and acknowledge your AI acceptable use policy or ask for clarification.

Empower your organization to embrace generative AI while mitigating risk with Nudge Security. This is the awesome part – start a free 14-day trial today at nudgesecurity.com/ai. Go check it out now.
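As a side note, the “email as event log” design pattern described above is easy to picture with a toy sketch. This is not Nudge Security’s actual implementation – just a rough, standard-library illustration of how sign-up confirmation emails can reveal which SaaS and AI providers people have signed up for. The hostnames, credentials, and subject-line phrases are all placeholder assumptions.

```python
# A toy illustration of the "email as event log" pattern described above --
# NOT Nudge Security's actual implementation. Sign-up confirmation emails
# form a rough event log of SaaS/AI account creation, keyed by sender domain.

import email
import imaplib
from collections import Counter
from email.utils import parseaddr

SIGNUP_PHRASES = ("confirm your account", "verify your email", "welcome to")


def discover_saas_signups(host: str, user: str, password: str) -> Counter:
    """Count apparent sign-up confirmations per sender domain in one inbox."""
    providers: Counter = Counter()
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)  # read-only: discovery, not cleanup
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            subject = (msg.get("Subject") or "").lower()
            if any(phrase in subject for phrase in SIGNUP_PHRASES):
                sender = parseaddr(msg.get("From", ""))[1]
                providers[sender.rsplit("@", 1)[-1]] += 1
    return providers


# Usage (all credentials are placeholders):
# print(discover_saas_signups("imap.example.com", "me@example.com", "app-password"))
```

A production tool has to handle encoded headers, localization, OAuth grants, and many more signals than subject lines, but even this toy shows why a mailbox makes a serviceable event log for account creation.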

What are the elements that make a great solution?

15:32.385

[David Spark] Mauricio Ortiz of Merck said, “What companies must do is implement a few initial metrics and monitoring controls to assess how people embrace and adopt AI policies and protect the enterprise from the key AI risks. Analyzing the metric results will help to drive improvements or decisions.” John Scrimsher of Kontoor Brands said, “As with any new tech, the business will need to explore it and develop use cases.

Understanding how your business desires to use the tech can only help you identify the risk factors and home in on mitigation strategies, rather than being an inhibitor to business by attempting to play whack-a-mole by banning it and everyone looking for ways around the ban.”

So, I’ll start with you, Steve, here. I’ll tell you, what we’re doing here at CISO Series is we are actually just educating ourselves every day on this. And given that there’s a lot of ignorance at all levels, from the use to the securing, I think just sort of ongoing education is what’s necessary.

Literally me and my team, we’re providing tips to each other like, “Hey, this is what I learned. This is what I learned.” And we’re slowly all learning at the same time. Is there any other way we can do this?

[Steve Zalewski] I differentiate between learning what large language models are and why, all of a sudden in the last year, this blew up as the next major crisis. And I think a lot of us are going deep into the weeds to understand LLMs – what they are, how they’re done. What we should be doing is simply stepping back and saying, “Let’s look at the use cases for either improving the efficiency and the effectiveness of my security organization to weaponize it,” or, “How am I allowing the business to embrace it and managing the risk for them to do that?” Not try to remove the risk but just manage the risk.

And so stop worrying about making it a technical problem or making it urgent, and be a good CISO and simply say your job is to manage the risks for the business where you can. And simply identify them, be clear about areas you can’t do much about yet, and let the business make a decision.

[Russell Spitler] I really like what you said there, Steve, especially as you kind of bring in words like embracing. Because I think that is always the difference in the security programs that I’m able to observe, obviously at arm’s length. But when an organization takes a viewpoint of how do I enable productivity, how do I embrace business change, and starts to do things like what Mauricio mentioned here – “Let me figure out the metrics that matter.

Let me start tracking those and start to drive the behavioral change in the organization to align with where my organizational security policy needs to go.” That’s when you see not only positive business outcomes but a positive business experience for those employees as well. And so it’s a really strong path to think about: how do I ensure that I am not only enabling this, but enabling it in a way that’s risk-informed and helping the organization mitigate as much of that risk as possible while still taking advantage of new technologies.
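The “metrics that matter” idea Mauricio raises, and Russ expands on here, can be sketched in a few lines. The record fields and sample inventory below are hypothetical; the point is that once you can enumerate AI accounts, a scoreboard of drivable numbers falls out almost for free.

```python
# A small sketch of the "metrics scoreboard" idea. The fields and sample
# records are hypothetical; the point is that once you have an inventory of
# AI accounts, a few trackable, drivable metrics fall out naturally.

from dataclasses import dataclass


@dataclass
class AIAccount:
    user: str
    app: str
    acknowledged_policy: bool  # accepted the AI acceptable-use policy?
    sanctioned_app: bool       # is the app on the approved list?


def scoreboard(accounts: list[AIAccount]) -> dict[str, float]:
    total = len(accounts) or 1  # avoid division by zero on an empty inventory
    return {
        "accounts_total": len(accounts),
        "pct_policy_acknowledged": 100 * sum(a.acknowledged_policy for a in accounts) / total,
        "unsanctioned_accounts": sum(not a.sanctioned_app for a in accounts),
    }


# Hypothetical inventory; in practice this comes from your discovery tooling.
inventory = [
    AIAccount("ana", "ChatGPT", acknowledged_policy=True, sanctioned_app=True),
    AIAccount("bo", "summarize-anything.example", acknowledged_policy=False, sanctioned_app=False),
]
print(scoreboard(inventory))
```

Running the sketch prints {'accounts_total': 2, 'pct_policy_acknowledged': 50.0, 'unsanctioned_accounts': 1} – exactly the kind of numbers a CISO can put in front of a board and drive in the right direction quarter over quarter.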

[Steve Zalewski] So, we’re going to go back and forth. And I agree. Russell and I… you’re seeing where we’re resonating, which is one of the things I talk about: is your job as a CISO to embrace the risk or to reduce the risk? So, embrace the sprawl or control the sprawl. And this is a sprawl problem right now, whether you believe it’s a data-centric one or not.

It’s that the business is letting this sprawl. And so what do we, as security practitioners, do? Which of the two sides of the house do you fall on? Am I going to try to just prevent it? So, I’m going to do risk reduction by simply saying no, especially if the business isn’t there. 90% aren’t. Or do I embrace it?

What I say is, “Look, I’m going to understand that the business is going to do what the business does.” But as I become aware of it with my tools for visibility, that comes with an opportunity for me to hold the business accountable for the risk that they’re accepting. And let’s have a conversation around what insurance policies I can bring to the conversation.

It might be as simple as a policy. It could be as complex as what Caleb was saying, where we can actually implement a set of security controls that aren’t net new. It’s just a repositioning of what we already have to embrace this additional business case.

[Russell Spitler] And when you think about that, especially in that context, bringing in some real-world examples – me taking a marketing email and checking the grammar in Grammarly as opposed to me taking the quarterly results we’re going to release next week and reformatting them in ChatGPT… Again, nothing against either of those services.

But there’s a substantially different potential impact for sharing those two data sets. That’s really where I think a lot of the awareness and sort of just in time education for those users really comes into play. Because exactly as you sort of remarked, there’s a much different risk profile for those actions.

And most organizations probably accept the first and certainly shy away from the second in terms of those use cases. That’s really a last mile problem, a sprawl problem in terms of use. But also a last mile in terms of making sure those employees are educated in their risks and are aware of the potential impacts that they may have.

[Steve Zalewski] I’m going to go back to you, too. I know we’re coming to the end of this segment, but this is the key part – what makes a great solution. The answer is people need to understand generative AI is not decision making. It’s an augmentation of storytelling. And so, same thing – risk to the company, which is if you have people that are using gen AI to be more efficient at their jobs, what they have to understand is that they can trust the results, but they have to verify that they’re accurate.

Okay? This is an augmentation technology. It can augment what you do.

It cannot replace what you do. And I think, to your point, if you make it clear to your business that that ultimately is what it is, then you can say, “My job is to make sure that the story it tells isn’t poisoned by bad data.” But I also have to make sure that the story that it tells doesn’t deviate substantially from the story we’re expecting it to tell.

And now all of a sudden everybody is realizing what the negative implications are and what the value of security is to be able to sell more product.

What else are we missing?

22:49.016

[David Spark] Charles Stewart of Validin said, “I foresee a market for self-hosted and on-prem AI systems getting hot, possibly on the back of open source implementations. But enterprises always fight the on-prem versus ‘no one wants to run a server anymore’ fight. Alternatively, vendor compliance certifications like SOC 2 would help, but most certs would predate the new tech and are likely to lack the language addressing generative AI.” So, Charles throws out some ideas of where this could go, where we could be pushing this.

I’m going to start with you, Russ. What is your most advanced customer when it comes to AI? Like maybe we could learn from them as to what the heck they’re doing to push, “All right, I know…” They’ve gotten to the point like, “I know who’s using what in my environment. Now I can sort of take control, and we can do things.” Do you have a customer like that yet?

[Russell Spitler] Yeah, we certainly do. I think what’s really interesting about Charles’ comment here is it really is a rehash of conversations we’ve had over the last five to ten years – this absolute control versus a balance of shared responsibility with a provider. And where I see this inevitably going is, as we get through this initial educational phase and adoption of LLMs… Excuse me, I didn’t mean to give the whole market to OpenAI.

As we start to go through this initial phase of adoption, we’ll start to get a lot more comfortable with what the data risks are of putting data into these models, and what data risks are mitigated by potentially running some of these models on our premises as opposed to sharing that. And I think ultimately we’ll probably end up back where we started, which is we have more controls and more validations provided by these providers in terms of how they’re isolating our data from other models, how they’re ensuring that there’s no cross-model pollution and our data is not getting shared with other customers of theirs.

And we’ll get to a much better balance over time. But that tension is going to be real in the short-term. What I’m seeing in our customer base right now is there are a large number of people who are running either custom models or customized versions of models provided by vendors, and that’s something that’s done under a microscope.

Where we’re seeing our most advanced customers today is they’re using these models under a microscope. There are certainly people who are using their own models on-prem, or at least in their own cloud environment as a modern version of on-prem. And there are some who are using customized versions or wrappers around third-party models.

And I think as we look to the future, we’re going to increasingly see more transparency from the vendors, more assurances from the vendors, and a lot more comfort, until we ultimately end up where we started: a lot more off-the-shelf usage of these models going forward.

[Steve Zalewski] Everything Russ said is right on. But here’s what I come back to as a practitioner. I’m like take a step back. Look for the fire. Don’t ingest all the smoke. And most of what’s out here is smoke, and it’s of this type. Everybody is talking about all the ways it could be compromised, but how many real cases of compromise have we seen?

How are the bad guys actually succeeding? We don’t have much of that. So, most of what we’re doing is creating a lot of smoke by ourselves.

[David Spark] I agree with that. But let me quote something I overheard, or read, and this had to do with the Black Hat conference: you know the thing you were worried about before you went to Black Hat? Guess what? That’s going to be the same thing you’re going to be worried about when you leave Black Hat.

[Steve Zalewski] Okay.

[David Spark] Because Black Hat is all about new stuff that you’re supposed to be worried about.

[Steve Zalewski] Worried about now. And then I’m going to want Russ to respond to this. So, the actuality… the reality is it’s going to be the mistakes that hurt us in the short-term, not the malicious attack. Okay? It’s going to be people trying to do the right thing but just not understanding. And so therefore, what are we doing with policies, procedures, and technology?

Not to try to stop the malicious attack but just to try to limit the mistakes that are made as everybody is embracing the technology. And that’s the “what are we missing” – worry about the day-to-day job here. Use this kind of technology to just reduce the likelihood that somebody makes a mistake with data sprawling out, or misjudges the value of generative AI.

So, it’s not just education, but it’s addressing the likelihood of the mistakes. And that’s going to take us a long way. Russ, what do you think?

[Russell Spitler] I think that’s a really good perspective to bring. And just to be slightly contrarian, we have had one notable case of a large model at least allegedly sharing back proprietary source code and answers to other people’s queries. And certainly when I look at that, I sort of look at that, as you say, as probably an early-stage product mistake.

Where I actually get a little bit more concerned is all of the sort of use-case-specific subservices that have come up that are largely new companies, emerging companies, trying to take advantage of this disruptive technology.

And those are the companies that I get a little bit more concerned about in the short-term, of, “Hey, two guys and I had a great idea, and then we drank a bunch of coffee. And on Monday, we have a new product that we launched out to the world.” And the reality that comes into play there is that the long tail of those providers is where we need to do the most education and awareness for our employees and end users.

And as you sort of remarked earlier, that just in time education, those wise interventions, that opportunity to give practical advice to that user as they’re about to type on the keyboard or upload the file to the newest service that they found out there, that’s the time you can have the biggest impact in terms of your organizational risk.

And of course that’s where we do a lot of our focus and work with our customers.

Closing

29:15.614

[David Spark] Awesome. Well, that brings us to the portion of the show where I ask both of you which quote was your favorite, and why. And I will start with you, Russ. Which quote was your favorite, and why?

[Russell Spitler] I love what Mauricio had to say. I’d actually say that is probably pretty generic advice for any security problem that you have: figure out what you’re worried about, figure out how you can track it, create a scoreboard, and help people drive that metric down. And I think that’s basic advice that applies everywhere but is especially applicable here, because until you actually understand the scope of the problem and the risk you’re trying to address, how do we start to change the behavior, and how do we start to drive things down?

[David Spark] Yeah, free-floating worries without metrics are not helping anybody, just creating more anxiety. Steve, what’s your favorite quote, and why?

[Steve Zalewski] I’m going to go with Caleb Sima for the same reason, which is the actionable ability to prevent mistakes rather than limit malicious attacks. What I liked about Caleb was that he gave some very practical ways to look at your existing security controls and at what the true risk is of deploying these language models, either as a technology within your product or as public technology the business is using. That’s a way for you to have an intelligent conversation with your leadership and everybody else about the near-term, practical role that a technology like Nudge can bring to the practical problem for the next [Inaudible 00:30:51]

[David Spark] Very good. Well, that brings us to the end of our show here. I have to thank you, Steve. I have to thank you, Russ. And I have to thank your company, Nudge Security, for sponsoring this very episode. Now, Russ, this is where I give you the opportunity to tell people more about Nudge Security.

And also, I believe you have a 14-day trial, which I suggested – like you want the product to speak for itself. People can see for themselves. What more can you tell them?

[Russell Spitler] I’ve been building security products for years, and this is by far my favorite product that I’ve ever built. And the simplest thing and the simplest…

[David Spark] You’re telling everyone that your previous children you don’t love as much as this child, correct?

[Russell Spitler] I will say that without any hesitation. [Laughs] So, one of the things that I love is you can deploy it in five minutes, and you can get the first results within a few minutes. And unlike any other product I’ve ever made, it’ll give you historical results from before it was deployed. That makes it especially applicable to this conversation.

Anybody who wants to know who’s using generative AI tools, or the 150 stepchildren of OpenAI that are out there, we can give you that answer by the end of the day. I would encourage everybody to try the product for themselves, and you can get started without talking to anybody. I look forward to the chance to talk to each and every one of you once you do that.

[David Spark] But why wouldn’t they want to talk to you?

[Russell Spitler] I’ll let them talk to the product first, and then they can have a conversation with me.

[David Spark] Talk to the product first and then chat with Russell. Okay. The web address you want to go to is nudgesecurity.com/ai. Remember, nudgesecurity.com/ai. All right. Thank you, everybody, for listening to the show. Thank you to Nudge. Thank you to Russ. Thank you to Steve. We greatly appreciate your contributions and for listening to Defense in Depth.

[Voiceover] We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cyber security. This show thrives on your contributions. Please write a review, leave a comment on LinkedIn or on our site, ciso-dev.davidspark.dcgws.com, where you’ll also see plenty of ways to participate including recording a question or a comment for the show.

If you’re interested in sponsoring the podcast, contact David Spark directly at david@ciso-dev.davidspark.dcgws.com. Thank you for listening to Defense in Depth.

David Spark
David Spark is the founder of CISO Series, where he produces and co-hosts many of the shows. Spark is a veteran tech journalist, having appeared in dozens of media outlets for almost three decades.