AI Solutions

Dave Gerry And Casey Ellis On Tackling AI Bias

Dave Gerry, CEO at Bugcrowd, and Casey Ellis, Founder and Chief Strategy Officer at Bugcrowd, discuss how AI bias assessments can help improve enterprise trust in generative AI tools and LLMs.

The 2024 RSAC Conference at the Moscone Center in San Fransisco

GenAI and LLM applications have quickly become a part of everyday life for consumers around the world. But many enterprises are still hesitant about enabling the use of these tools in the workplace due to a number of security vulnerabilities and privacy concerns.

“We know these models are trained by humans, so there’s going to be bias that exists inside of them,” says Dave Gerry, CEO at Bugcrowd. “So, how do you make sure that you’re aware of the bias that exists in those models, and how do you then mitigate against it and remove that bias from the models?”

“What we’ve done is find people within our existing community that have deep knowledge of AI and machine learning—the kind of technologies that we’re talking about here,” says Casey Ellis, Bugcrowd’s Founder and Chief Strategy Officer. “And we’ve structured a contest format, where these people are looking for bias that might exist within an LLM that’s not been declared.”

Bugcrowd is a crowdsourced security platform that offers penetration testing, bug bounty hunting, attack surface management, and vulnerability disclosure. The platform also recently launched an AI bias assessment, which enables developers to identify and remediate unknown or undisclosed biases in their LLMs.

In an exclusive interview with Expert Insights at the 2024 RSA Conference in San Fransisco, Gerry and Ellis discuss the security and privacy concerns associated with GenAI and LLMs, how AI bias assessments can help improve enterprise trust in these tools, and Bugcrowd’s new AI bias assessment. The pair also talk about the need for transparency in the cybersecurity industry, and share their top tips for CISOs that may be overwhelmed by all the noise in the market.

Note: This interview has been edited for clarity.

Could you please introduce yourselves and tell us a bit about your security background, and your current roles at Bugcrowd?

Ellis: My name is Casey Ellis, I’m the founder and Chief Strategy Officer for Bugcrowd. In terms of my security background, I grew up as a hacker. I never broke the law, but I always wanted to turn things upside down to see what fell out, from that computer and technology standpoint. I got into pentesting, moved across to sales, did a stint as a CISO, and then ‘broke bad’ and decided I wanted to become an entrepreneur. The origin of Bugcrowd was, coming from the hacker community that I grew up in, I could see all the potential and the ability to solve cybersecurity problems that existed in that community. And I wanted to build a solution that plugs in to all the problems that we’ve got as defenders.

Gerry: I’m Dave Gerry. I’m the CEO here at Bugcrowd; I joined close to two years ago now. I’ve spent most of my career in the security space. I was recruited right out of business school into a very small startup called Veracode. I spent some time there, then spent most of my career in high growth, SaaS startups. I actually met Casey and the Bugcrowd team almost eight years ago, when I was at a company called WhiteHat Security, where I was responsible for both the revenue side and the delivery side. I was Bugcrowd’s first million-dollar customer. We did a lot of work together on the pentesting side and got to know each other in an advisory capacity, and then I jumped in full-time here in August of ‘22.

That’s a great origin story, and it’s really cool that you’d worked with Bugcrowd, Dave, before coming on as CEO.

Gerry: Yeah. Casey is close with the founder of WhiteHat, so it was this culmination of really cool people and we all get to work together. And we’ve now added a lot of folks from Veracode, WhiteHat, and Rapid7; we’ve all come together and we’re all at Bugcrowd now.

We’ve spoken a bit about how the platform came about, and today, Bugcrowd is a crowdsourced security platform that encompasses pentesting, bug bounty, attack surface management, and vulnerability disclosure. One vulnerability that organizations are particularly concerned about at the moment is the use of generative AI tools and LLM applications. Before we dive into those concerns, why do you think these technologies have become so popular in the workplace over the past 18 months?

Ellis: It’s a transformational shift, in terms of the technology that’s available. That’s both in its power and what you can do with it—it’s incredibly powerful, useful, and flexible—but also in its accessibility, which means that a broader range of people have access to this tooling when, historically, that kind of capability would be in the realm of people that are really highly skilled. If you ‘drop the bar’ in that way, you bring in a wider range of folks that can make use of this stuff.

On top of that, businesses are recognizing the way that it’s captured the imagination of the consumer and their employees, and they’re trying to figure out how to make it relevant to their products and how to capitalize on it.

Gerry: It brought scalability to the masses. It normalized what we’ve all talked about in tech around automation, and how you efficiently scale and improve productivity across your teams. And now, suddenly, your grandmother’s talking about it and using it, too! That’s part of what’s led into some of the safety and security concerns, because now it’s this thing that’s front and centre for the consumer. That’s where a lot of the noise is generated, from a safety and privacy concern standpoint.

While they clearly offer lots of benefits, these technologies also introduce new risks. What are some of the main vulnerabilities surrounding gen AI and LLM apps that security leaders should be aware of?

Ellis: We’ve been working in machine learning and AI since before the chatbots dropped; since before it was cool, in a sense. We have a partnership with OpenAI that kicked off about three months before GPT dropped, as well as Anthropic, Google, and others in that space. We’ve been very deeply involved in partnering with these foundational providers, not just to secure themselves but also to tell that story of safety and security out to market.

The biggest thing that people should be concerned about is the speed with which we’re trying to integrate GenAI and LLMs, and also the ambiguity created by all the hype around it. We’ve broken out a way to talk about AI in three ways. There’s AI as a tool; a thing that can help attackers and defenders get to a point of success more quickly. There’s AI as a target; talking about AI bias and some of the concerns on that side of things. And then there’s AI as a threat, which is when you’re integrating AI with what you’ve already got, and thinking about the unintended consequences of that. Personally, I think the third one is the one that people should be most concerned about because of the speed with which we’re trying to implement this technology.

Gerry: From an enterprise standpoint, I would agree; the integration points are probably the point of greatest vulnerability as it exists today, because organizations are trying to disrupt what they’ve done for X amount of years, and suddenly modernize their tech stack very quickly with an LLM. That’s going to break a lot of other things and introduce potential risks across their entire enterprise state.

From a consumer standpoint, the privacy, safety, and bias side of things is really where we’re seeing traction. That’s one of the reasons that we’ve seen a strong push—both from the public sectors and government, as well as from the commercial sector—for bias assessments; the ability to have somebody come in and actually say whether an inherent bias exists. We know these models are trained by humans, so there’s going to be bias that exists inside of them. So, how do you make sure that you’re aware of the bias that exists in those models, and how do you then mitigate against it and remove that bias from the models?

I’ll give you an example. We had our customer advisory board meeting yesterday, and one of the customers was talking about an HR system that they were using, and they were leveraging AI in their recruiting model. We want to make sure that there isn’t a bias that means you’re eliminating candidates, or putting people at the top of a stack in a way that’s unfair to other candidates, or that you have salary discrepancies because of a candidate’s name or background. So, making sure that there is an inherent fairness that exists within the AI models.

You’ve recently added an AI Bias Assessment to the Bugcrowd platform. How does this assessment work, and what was some of the thinking behind the launch?

Ellis: The power and the capability we’ve got is to bring together a whole bunch of people that think like hackers. They’re looking at the way a system’s been built, and their initial instinct is to tip it upside down and see what it shouldn’t do. That’s what we deliver as a service across all sorts of different technology domains, and AI is no different.

So, what we’ve done is find people within our existing community that have deep knowledge of AI and machine learning—the kind of technologies that we’re talking about here. And we’ve structured a contest format, where these people are looking for bias that might exist within an LLM that’s not been declared.

There’s a policy where, if you’ve got an LLM that’s integrated into a public facing system, you have to have a ‘model card’ that declares known bias within that particular model. So, we’re looking for things that have been missed. We’re looking for anything that’s going to happen within this LLM that’s not known or pre-empted, and can’t be mitigated because it’s effectively almost the same as a zero day vulnerability in that dataset.

We partnered with CDO, which is basically the innovation arm of the Department of Defence, to do our first bias assessment. When the chatbots dropped, that was the pilot project that brought the thinking together, and it was successful. We’ve been talking to customers about this ever since; we’ve released the product and we’re doing it again.

We’ve spoken a bit about the platform and some of the new features you’ve launched, and we’d like to change course a little now to get your thoughts on a security incident that’s been fairly high-profile in the last few months. The SCRB recently released a review of the 2023 Microsoft Exchange Online intrusion. What effect might the intrusion itself and Microsoft’s response to it have on the industry?

Ellis: The announcement Satya Nadella last week was almost a carbon copy of Bill Gates in 2003 with the Trustworthy Computing memo. I think the CSRB report, and a lot of the APT [advanced persistent threat] targeting of Microsoft over the past couple of years, is prompting that in the same way that it did back then. The Summer of Worms was what triggered Bill Gates to write the TWC; APT activity against exchange has been part of why Satya wrote the wrote the memo that he did.

In terms of the impact, one of the things I find really interesting about what he’s done is to basically tie executive compensation to security outcomes within Microsoft. That’s where this is going, so to see Microsoft actually push things in that direction—good on them for doing that. That signals to other similar organizations that this is the right way to go. And I suspect that that’s going to cascade out as a response to this type of thing.

There’s an element of transparency to that as a part of the solution. It goes back to Kerckhoff’s principle in cryptography—if something’s secret, you can’t rely on the fact it’s going to stay secret forever. If that’s the only way you’re keeping things secure, that’s a fragile assumption. I think that’s a universal truth in how we approach security. The antidote is transparency, wherever you can find it. I do think that’s the response that Microsoft’s taken to this, and hopefully that will catch on.

Gerry: It’s also no longer a technology problem; it’s a business problem. Whether it’s the outcome of what happened with Microsoft, whether it’s the SEC guidelines that have come out, whether it’s any other regulatory action that’s pending across the globe—this is no longer a technology problem. In the same way that we would look at macroeconomic indicators, cyber is just going to be another indicator for the health of a business and the long-term sustainability and investability of that business. So, I think we’re going to see that the markets will start responding to this, and we’ll hopefully see that the folks that take security seriously and the companies that invest heavily in security are rewarded for that.

We’re still not seeing that totally today. The UnitedHealth Group is a great example; they had a massive security incident, and their stock price actually went up. There’s almost this reverse psychology in the investment market where, if you’re big enough to be attacked and breached on a scale of that magnitude, you’re important enough to invest in.

If you could give one last piece of advice to the CISOs and security leaders attending the conference this week, what would it be?

Gerry: My advice would be, get back to security fundamentals and foundations; don’t get distracted by the hype. If you walk around the expo floor, every single one of these booths is going say AI in some way. The people at the booth have been trained, so the first thing they’re going to lead with is AI and how they’re implementing AI in their security stack, and how that tool is going to revolutionize what CISOs are doing today. And yes, AI is going to revolutionize the way we do a lot of things but, ultimately, the weakest link in your business is still the human.

If your users aren’t secure, if they’re not trained, if they’re not practising the right processes and you don’t have the right policies in place, then the rest of it really doesn’t matter. So, focus on the fundamentals, focus on the security foundations, and then build from there.

Ellis: I think the antidote to that is to go in understanding what the security priorities are for you as an individual CISO leading your unique organization, and just be diligent about getting an answer to those questions. And for anyone who’s struggling with that or trying to figure out what those things are, there’s an incredible community around RSAC, so spend as much time as possible with peers, figuring out what priorities they’re thinking about, and what you can learn from that.

Finally, what are you most excited to see in the cybersecurity space as we continue into 2024, then beyond into 2025? 

Ellis: Figuring out what AI actually is. That’s partly tongue in cheek, but also not in some ways. AI in 2024 is a lot like having a website in 1998; there’s this incredibly steep spike in the hype cycle that we’re just coming off of now, and when that disappears, you’ll be able to see what we’re actually going to have to deal with and work with going forward. I’m looking forward to that.

The other thing is increasing adoption of hacker feedback, and the normalization of hacking as being a powerful skill set that can be used for good or bad. It’s not an inherently bad thing, and there’s a growing awareness and understanding of that. It’s obviously a big part of Bugrowd’s mission, so we’re biased and we want to see that happen. But as someone who grew up in that community, it’s pretty exciting.

Gerry: I’m excited to see the continued development of the community side; we’re going to see that CISOs are continuing to learn from each other and invest heavily in understanding what their peers are doing. There’s less competition. Even in two competing organizations, the security teams may still meet and talk, and share feedback and advice, and what we’re realizing as an industry is that it’s a team sport.

Everyone is facing very similar attacks; they may be different based on industry, vertical, segment, or where they sit in the market but, fundamentally, they’re seeing many of the same attacks and we’re starting to see a need for sharing of information. We’re hearing it from our customers, who want the ability to interact and share vulnerability data back and forth. And I think we’re going to see that across the entire industry; it’s not just a Bugcrowd specific thing.


Thank you to Dave Gerry and Casey Ellis for taking part in this interview. You can find out more about Bugcrowd’s crowdsourced threat hunting solutions via their website.

Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions with confidence.

For more interviews with industry experts, visit our podcast page here.