
Emerging AI Technologies and Their Impact on Cybersecurity

Published on: September 11, 2024

Today, I’m joined by Sekhar Sarukkai, technologist, entrepreneur, educator, and advisor to Skyhigh Security, to explore how companies can leverage artificial intelligence to drive productivity and boost their security measures against evolving cyber threats. We’ll discuss practical strategies and the evolving landscape of AI that can help organizations stay ahead in this rapidly changing world.

You can find Sekhar Sarukkai on his LinkedIn page here.

———–

CloudCast is hosted by Skyhigh Security’s very own Digital Experience Manager, Scott Schlee. Scott’s engaging demeanor and wit, backed by over 20 years in digital media production and web development, has led to successful collaborations with top-tier brands. His experience includes hosting and producing a wide range of podcasts and videos. Scott has been recognized for his outstanding work, including an award-winning digital short and a Webby Awards nomination for Viral Marketing (Branded). Beyond his professional achievements, Scott’s personal journey as a decade-long pancreatic cancer survivor has led him to share his story with the U.S. Congress and other organizations as an advocate for increased cancer research funding.

Scott Schlee:
It’s September 11th, 2024. I’m your host, Scott Schlee, and you’re listening to Skyhigh Security Cloudcast. Today, I’m joined by Sekhar Sarukkai, technologist, entrepreneur, educator, and advisor to Skyhigh Security, to explore how companies can leverage artificial intelligence to drive productivity and boost their security measures against evolving cyber threats.

We’ll discuss practical strategies and the evolving landscape of AI that can help organizations stay ahead in this rapidly changing world. Sekhar, thank you for joining us today. How are you?

Sekhar Sarukkai:
Oh, I’m great. Thank you so much, Scott, for having me here.

Scott Schlee:
Sekhar, can you tell me your position within Skyhigh Security?

Sekhar Sarukkai:
Yeah, it’s an interesting one. As you may know, I was one of the co-founders of Skyhigh Networks. After it got acquired by McAfee, I was with McAfee for a couple of years, and then I left to do other things, like teach. I teach cybersecurity at UC Berkeley, and I’ve worked with a lot of VCs and such.

But recently, when Vishal, the new CEO, came aboard, he reached out to see if there was some consulting I could help with, more like helping with some of the trends and newer issues in the industry. And that’s how I’m engaged with Skyhigh now, purely on a consultative basis, as an advisor, essentially.

Scott Schlee:
I know we’re very happy to have you as a part of the team, and even more, I’m happy to have you joining us today to talk about AI, future trends, and what Skyhigh Security is doing with AI. So I was hoping we could spend a few minutes talking about, first: what are some emerging AI technologies that are likely to shape the future of cybersecurity?

Sekhar Sarukkai:
That’s such a loaded question that it could take up our whole conversation. But all of us, the whole world, the IT world at least, were transformed a couple of years ago when ChatGPT just took off, right? There’s something there where people felt in their gut that it was different, and the many different uses of it have sprouted an incredible amount of innovation in the space.

And I think that will continue to grow. There are some key areas which I think are stable, where people know how things are going to evolve, like the foundation models. What OpenAI and a bunch of others are doing there, I think, is on a steady path. And there’s a lot of interesting work around multimodal, which is not just text, where it all started, but audio, video, and images.

It’s fascinating how much it’s changed. Today you can give a prompt, and out comes a video clip of an imagined scene, right? Which looks pretty realistic. So there’s a lot of work in that area, I think, which will evolve. And deepfakes are one of the bad uses of that technology, where I can make it look like someone is saying something they never did, and it’s going to be very challenging to discern that from reality. So that whole multimodal AI has a lot of cool things you can do, but also, you know, security issues which come with it.

The other big area, I would say, which has been getting attention recently, in the last couple of quarters, is agentic AI. If you go to ChatGPT and ask it to summarize a document or a movie, maybe it does a great job of it. But if you ask it to go book a ticket for you to go to a movie, or coordinate calendars with your friends to go watch it,

it’s not going to be able to do that. So there’s this new evolution of extending LLMs and AI to actually take action. These are large action models. And there’s a lot of enterprise use, or use cases, of that, which feed into where I think the big value of all these AI technologies may evolve.

And this is what some people call the next generation of SaaS, right? Today, if anybody talks about SaaS, it’s Office 365 or Salesforce or the like. It’s basically software as a service: anything which you previously had to download or deploy on your own servers, you now consume as a service.

What the next generation of that will be, which agentic systems and others will evolve to help create, is what can be thought of as service as software. If you look at what a salesperson does today, they use Salesforce as a way to be more productive in going and closing a deal or running a campaign or whatever it may be. What the next generation would do is take the next step, which is not

just to be almost like a copilot, but to make it an autopilot, where you can actually get a virtual sales agent, right? So rather than helping some human complete their task better, this is going to basically be able to do what the human does, right? And that’s the automation, that’s the task analysis.

And there’s obviously good and bad to it. I don’t know where it’s going to end up in terms of society and jobs and people. But this motion towards saying, OK, I want to accomplish something, giving it a goal, having it decompose that into the tasks which need to be completed, and actually going and executing

and completing those tasks, is something which has evolved quite rapidly. There are some fascinating demos that you can see online. There’s one of a developer agent, if you will. You can search YouTube for a video of Devin. Devin is the persona of a developer, and it’s fascinating to see it work in real time, where it generates code.

And then if there is some exception or an error, it actually goes and searches all the watering holes for developers, Stack Overflow or wherever else it may go, finds it, fixes the problem, reruns it, and deploys, all of that. So that whole notion of being able to get to that automation is something which is real and can have a profound impact for enterprises.
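[Editor's note: the goal-to-tasks loop described above, decompose a goal, execute each task, and consult outside knowledge on errors, can be sketched in a few lines of Python. This is a toy illustration only; none of the function names come from Devin or any real agent framework, and where this hardcodes a plan, a real agent would call an LLM.]

```python
# Toy agent loop: decompose a goal into tasks, execute each, and consult a
# knowledge source when a task hits an error. All names are illustrative.

def decompose(goal):
    # A real agent would ask an LLM to plan; here we split a comma list.
    return [f"step {i}: {part}" for i, part in enumerate(goal.split(", "), 1)]

def execute(task, knowledge_base):
    # Simulate execution; tasks mentioning "error" trigger a lookup-and-retry.
    if "error" in task:
        fix = knowledge_base.get(task, "no fix found")
        return f"retried with fix: {fix}"
    return f"done: {task}"

def run_agent(goal, knowledge_base=None):
    """Run every task produced from the goal and collect the outcomes."""
    knowledge_base = knowledge_base or {}
    return [execute(task, knowledge_base) for task in decompose(goal)]

print(run_agent("write code, run tests"))
```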

Scott Schlee:
And I guess a lot of people, developers such as myself, will look at that and take it one of two ways: either be afraid that this is somehow going to take my job away, or look at it as an assistant that will help me be more efficient in my work.

Sekhar Sarukkai:
Yeah, and that’s the million dollar question, right?

I don’t know where it’s going to end. You see some demos and you say, you know what, that’s not real, that can’t really do what an expert programmer would do. And then you see the next demo where it’s, wow, it’s doing it. Or you don’t need 20 people on a team; maybe there’s a one-person developer

who can build a startup from scratch and develop the code as well with some of this technology.

Scott Schlee:
I know the saying is that today’s AI is the dumbest it’s ever going to be; it’s only going to get smarter and better. Just in the past three years, every six months to a year, we’ve all seen AI progressing dramatically, leaps and bounds from where it was in its infancy.

Sekhar Sarukkai:
Absolutely, Scott. And I think that’s also depending on which side of the bed you get up, I think it’s either a huge opportunity or a big concern.

Scott Schlee:
So, these advancements in gen AI, we’re talking about models like OpenAI’s that are out there so anybody can get their hands on them and use them for good or nefarious purposes.

How are those advancements going to impact cybersecurity in terms of threats and defenses?

Sekhar Sarukkai:
Yeah, this is, again, so rapidly evolving that you’ve already seen issues you would not have thought were real issues just two years ago. And I think, as cybersecurity practitioners, the job is just going to get harder.

I think it’s because of a lot of things. Today, if you ask people about security in the gen AI context, there are a lot of startups, and even established companies like Skyhigh, looking into what it means to secure the LLM itself, right? There’s this notion of jailbreaking: how can you actually coax the model to give out an answer which is not appropriate, things like that.

There are also concerns around data leakage. There’s a classic example with a Fortune 100 company where the salary of the CEO was exposed by a chatbot which had been created on top of a gen AI platform. Things like that are very important, and something a lot of people are working on, and it will continue to be either augmented into platforms like Skyhigh SSE or addressed through partnerships with emerging startups to help fill that natural gap.

But there are other issues which come from the offensive side, right? This is a classic case of the bad actors being faster in adapting to some of these changes. So I’ll give you a classic example. We talked about Devin, right? Let me use that example. When Devin finds an issue, it can go to one of these watering holes,

Stack Overflow, to see, OK, why did I get this error, and what is the fix for it? It’ll look at comments by various developers, choose the right one, and implement that fix. Now, what turned out, and this was actually published a couple of months ago, was that there’s a bad actor,

I think a state-sponsored actor, doing something very interesting. They were actually publishing backdoored, malicious Python libraries, not just publishing them on GitHub, but also talking them up on Stack Overflow. And

Scott Schlee:
As a solution for a problem.

Sekhar Sarukkai:
To that exception, or whatever.

And so imagine if you are a human and it’s a copilot, like GitHub Copilot, which tells me to go fix this. I would go to Stack Overflow, find it, take that, and use my judgment to say, you know what, this sounds like something I may not want to use, right? And it’s up to me to decide whether to update with that new Python library.

But if it is agentic, like Devin, what are the controls? Because it’s all about speed and efficiency and how quickly, and in how many cool ways, it can solve the problem. And I don’t even know if there is a quantitative way to judge whether those comments on Stack Overflow should be ignored or not.
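[Editor's note: one control for exactly this scenario, an agent trusting install suggestions it found in a forum post, is to gate dependency installs behind a vetted allowlist. The sketch below is an editorial illustration, not a Skyhigh feature; the package names, versions, and helpers are all invented.]

```python
# Toy dependency gate: before an agent installs a package suggested in a
# forum post, require the exact name/version pair to be on a vetted
# allowlist. Package names and versions here are invented examples.

VETTED_PACKAGES = {
    "requests": {"2.31.0", "2.32.3"},
    "numpy": {"1.26.4"},
}

def is_approved(name, version):
    """True only if this exact package/version pair has been vetted."""
    return version in VETTED_PACKAGES.get(name, set())

def filter_suggestions(suggestions):
    """Split agent-proposed installs into approved and blocked lists."""
    approved = [(n, v) for n, v in suggestions if is_approved(n, v)]
    blocked = [(n, v) for n, v in suggestions if not is_approved(n, v)]
    return approved, blocked
```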

So the risk of essentially a data supply chain attack, in this case, is very real for AI. And in this example I’m talking about inference time, the time when you’re actually using the model. There are a lot of these kinds of issues around the data supply chain, which also covers what data was actually used to train the model itself.

And there are definitely questions around malware and malicious data, but there are also fundamental questions around data ownership, copyright issues, things like that. These may become increasingly challenging for enterprises, who may be held liable for breaking a copyright whether or not the content was generated by gen AI for them, right?

If they use it in something which is of value to them, or it’s public, somebody can sue them for it. So there are definitely those kinds of issues, which I think will need to be thought out further. And finally, I would say, fundamentally, because there is this strong move towards an agentic world, this automation of things, replacing people and the choices we make with AI, a bad actor’s campaign may not even touch a human, because it happens at the speed of AI.

So there is a bad actor who would launch an attack based purely on AI, because the AI finds a vulnerability in your environment and launches the attack. Just like Devin did for good reasons, to create code, a bad actor can use it to penetrate a network, right? To exfiltrate data. And if that happens, reacting at the speed of humans is not going to help, because humans will have to do what they do in today’s world: it goes to the SOC, you create a ticket, somebody needs to figure out who owns it and find the issue.

That’s not going to work.

Scott Schlee:
Hours to days before it gets addressed.

Sekhar Sarukkai:
And so you need, it’s almost like those comics which used to be around when I was growing up, I’m dating myself: Spy vs. Spy. It’s the same kind of thing, AI versus AI, right? That’s where I think the real cybersecurity action will be over time.

Scott Schlee:
That’s an interesting concept. I know a lot of people have theorized how that would work out, AI versus AI. And in more of a Wild West scenario, you just really have to prepare for the worst.

Sekhar Sarukkai:
Yeah, actually, the way I think about it is, Skyhigh is one of the pioneers in CASB, back when Skyhigh was Skyhigh Networks, before it morphed into Skyhigh Security.

And the domain itself, in Gartner’s lingo, has morphed from CASB to SSE, Security Service Edge, which has CASB, SWG, RBI, and a bunch of other features. But where I think Skyhigh is going, and I think it’s a natural extension of the SSE platform, is to address AI holistically. Because if you think about enterprises, Skyhigh has looked at enterprise data, and I believe they’re going to publish a report on some very fascinating findings, which I’m sure you’ll do another podcast on.

Not to reveal too much of the data, but some of the interesting findings are that a lot of enterprises really are embracing gen AI, yet a lot of the big enterprises are actually blocking ChatGPT. It’s almost a contradictory perspective, but really what they’re saying is: this is so important,

and there are so many unknowns, that I want to deploy it in my own private instance rather than use ChatGPT as a SaaS, right? So they don’t allow their employees to make queries to the public ChatGPT, but they’ll have their own enterprise instance of it, where employees can go and ask those queries. They’re worried that data could be leaked into the SaaS-based ChatGPT environment, but they’re OK to get the value if it’s internal.

Scott Schlee:
As long as they’ve set up the guardrails that they’ve approved.

Sekhar Sarukkai:
Yeah, not only that. In almost all of these cases they use something like Azure AI. It’s a PaaS deployment on public infrastructure, but within their VPC, so essentially only their employees are using that instance of GPT,

either through the API or through the UI, which is like ChatGPT. And similarly with AWS; all of these public cloud environments have grown some pretty healthy business around that. So the first step is really to get visibility into what gen AI is being used. This is almost the shadow IT of 10 years ago, when cloud was being adopted by all businesses.

Now a lot of CISOs want to know: what AI is being used in my enterprise, by which lines of business, and is it risky or not? There are some LLMs which are OK, and some which are suspect, because maybe they’re more amenable to malware attacks and jailbreaking, which could impact employees or customers’ perception of the enterprise.

So that level of visibility, with attributes which are specific to gen AI and LLMs, is something Skyhigh has been one of the first, if not the first, SSE vendors to bring to market, right? Which is to be able to not just discover and report on what AI is being used by an enterprise, but also have a risk-based view into which ones are risky, which ones are not, and so on.
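[Editor's note: the discovery step outlined above, spotting gen AI usage in outbound traffic and attaching a risk attribute, can be reduced to a toy example. The domains and risk labels below are invented for illustration and have nothing to do with Skyhigh's actual service registry.]

```python
# Toy shadow-AI discovery: bucket observed outbound domains by a risk
# attribute from a small registry. Domains and labels are invented.

AI_SERVICES = {
    "api.openai.com": "sanctioned",
    "chat.example-llm.ai": "high-risk",
}

def classify_traffic(domains):
    """Group each observed domain under its risk label ('not-ai' if unknown)."""
    report = {}
    for domain in domains:
        label = AI_SERVICES.get(domain, "not-ai")
        report.setdefault(label, []).append(domain)
    return report
```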

But then it goes beyond that, I think, because the Skyhigh platform, as we all know, is actually very strong in data security; its CASB roots are in data security as well. A lot of the concern customers have is around data leakage, right? And that’s why they even block ChatGPT but enable the private use and so on.

And there’s a big use case for SSEs around Office 365. The other interesting finding, which you will see in the Skyhigh report as well, is that there’s one LLM, or gen AI service, which really stands out in enterprise usage compared to anything else, by orders of magnitude, like three orders of magnitude.

And that is Office 365 Copilot, the Microsoft Copilot. Copilots are built on top of gen AI foundation models: OpenAI’s GPT-4o, or maybe a Llama model, or Anthropic, whatever it may be. Basically, Microsoft uses OpenAI, and they created this Microsoft Office 365 Copilot, which is an add-on to Office 365.

And what it does is not only allow users of Office 365 to have a free-form interaction like you do with ChatGPT, but also contextualize it with all of the enterprise data which Office 365 has visibility into. And if you think about a large enterprise, they’re all standardized on Office 365, right?

They all have SharePoint, OneDrive, Microsoft Teams, all of the email. Everything which a business and its employees use today to interact, be productive, and create value is stored somewhere in the Microsoft platform. And Microsoft Copilot actually indexes all of that.

They have some controls to say which things you don’t want indexed and so on. But because it’s bundled in, or an add-on to Office 365, the adoption of Microsoft Copilot is really head and shoulders above anything else. And that may pull in the usage of OpenAI and such. But because employees are using Office 365, because they interact with the Copilot, and because a lot of sensitive data is in the Microsoft environment, I think natural extensions of Skyhigh’s data protection

into the Copilot world can be humongously valuable for customers. There are a lot of issues around that. I know that Microsoft has some baseline controls, but an SSE is needed for some of these larger enterprises, and I think with Copilots it becomes even more important.

There are classic examples with Microsoft Copilot. For example, there are applications which can actually connect to Copilot through APIs, and so that becomes a potential vector for exfiltration of data. And there were some very interesting attacks presented against Copilot at the recent Black Hat conference. If somebody’s interested, they should search for those; they’ll see attack vectors which try, as an example, to craft a spear-phishing attack using the intelligence which is in the Microsoft Copilot platform.

You can actually coax the Copilot to respond with who was in a meeting, or the email address of the person you communicated with the most, and what you last talked about with that person, and then extract enough information from this interaction with Copilot to craft a message asking this person to do something.

Scott Schlee:
Sent to members of a group, from your CEO.

Sekhar Sarukkai:
Exactly.

Scott Schlee:
Sekhar, what AI solutions does Skyhigh Security have, and how are we preparing for the future?

Sekhar Sarukkai:
I think customers are asking for a better way to manage Copilot deployments today. So it’s not so much about the future as much as it’s a need right now. And Skyhigh, I know, is positioned to take its strength in data protection for Office 365 environments and apply it to Copilot, basically by doing the same kinds of DLP checks, right?

For example, Skyhigh already has capabilities to say, if I’m collaborating with you on a document, and let’s say I’m a contractor, or, in this case, let’s say I’m an advisor: Scott, you’re sharing something with me and you’re about to share something very sensitive, which is not amenable for non-employees or non-executives or whatever.

Skyhigh already has the ability to say, OK, stop collaboration when something in the content has changed and it’s more sensitive than should be allowed to be shared, right? So it’s the same kind of thing with Copilot: as you’re generating content, if there’s something at a sensitivity level higher than I have clearance for, they should be able to block it.
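[Editor's note: the block-on-sensitivity behavior described here boils down to a label comparison. The sketch below is an editorial toy model assuming a simple ordered set of sensitivity labels; it is not Skyhigh's actual DLP policy engine, and all names are illustrative.]

```python
# Toy sensitivity gate: redact generated chunks whose label outranks the
# requesting user's clearance. Labels and ordering are invented for the
# example; this is not an actual DLP policy model.

LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_view(user_clearance, content_label):
    """Allow output only when the user's clearance meets the content label."""
    return LEVELS[user_clearance] >= LEVELS[content_label]

def filter_response(chunks, user_clearance):
    """Replace any (text, label) chunk above the user's clearance."""
    return [
        text if can_view(user_clearance, label) else "[REDACTED]"
        for text, label in chunks
    ]
```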

So Skyhigh can potentially help with that. Skyhigh is actually working with a specific AI startup which is focused on creating guardrails. This is a startup called Encrypt, which we integrated for risk scoring from the red-teaming perspective, but we’re also integrating them for guardrails, right?

So the same guardrails you have for ChatGPT, to say you shouldn’t be able to jailbreak it, are important for Office 365 as well, right? You don’t want to have Office 365 spit out something inappropriate, so being able to address that is something Skyhigh can help customers with.
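[Editor's note: a guardrail of the kind mentioned can be as simple as screening prompts before they reach the model. The pattern list below is a deliberately naive illustration; production guardrail products rely on trained classifiers rather than regex matching, and none of these names come from Encrypt or Skyhigh.]

```python
# Toy prompt guardrail: reject prompts matching known jailbreak phrasings
# before they reach the model. Real guardrails use trained classifiers;
# this regex list is a deliberately naive illustration.

import re

JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"pretend you (are|have) no (rules|restrictions)",
    r"act as .* with no filters",
]

def screen_prompt(prompt):
    """Return (allowed, reason); block prompts matching a known pattern."""
    for pattern in JAILBREAK_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return False, f"matched: {pattern}"
    return True, "ok"
```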

I think being able to understand the sensitivity of a document, being able to protect it in terms of collaboration and in terms of who can see it, and being able to remediate in near real time, either through the API integrations we already have with Office 365 or through our proxy, forward or reverse, those are the approaches which are being considered and worked on right now.

Scott Schlee:
I really appreciate you coming on and clarifying quite a lot of this, because, like I said before, it’s the Wild West right now with AI, and there’s a lot of conflicting information out there. Some people just don’t fully grasp where AI is, where it’s going, and how to protect their sensitive data.

I really hope we can talk again very soon because I’m sure within three months, everything’s going to be completely different. So I would love to have you on again for an update.

Sekhar Sarukkai:
Absolutely, Scott. My pleasure, and I’d be glad to jump on. There are a lot of interesting topics around this that we’re covering.

And I think there’s a need for enterprises to have some visibility into how the industry is thinking about this evolution of the space. Some of the questions you asked, around the relevance of gen AI to security and the security of gen AI, both are super important, and this is going to be a longer conversation for sure.

Scott Schlee:
That is true. Sekhar, how can people reach out to you if they want to find you online?

Sekhar Sarukkai:
I think the best way is through LinkedIn; search for Sekhar Sarukkai and let’s get connected. I’d love to get everybody’s perspective on this emerging area, which is of utmost importance.

Scott Schlee:
Very good.

And his information will be in the show notes. I will have a quick link for you there, and feel free to reach out to him, to me, or to Skyhigh Security. We’d love to talk to you about the future of AI and how we can help out.

Sekhar Sarukkai:
Thank you.

Scott Schlee:
Thank you again for listening to Skyhigh Cloudcast. If you’ve enjoyed this episode, be sure to subscribe on your favorite platform so you never miss an update.

If you liked the show, please leave us a review. It helps others find the podcast. For more information about Skyhigh Security or Cloudcast, please visit SkyhighSecurity.com.


Please Note: All transcripts are generated using speech recognition software and human transcription, and may contain errors. Please check the corresponding audio before quoting in print.
