Interview: How Crowdsourcing Intelligence Can Improve Your Proactive Security Posture

Casey Ellis, Chairman, Founder, and CTO at Bugcrowd, discusses the most common security shortfalls and how businesses can mitigate them, the benefits of taking a proactive approach to security, and why businesses must accept that to err is human.

Casey Ellis is the Chairman, Founder, and CTO of Bugcrowd. Ellis has been in the information security space for almost 25 years, working with both startups and global enterprises as a pentester, security consultant, and solutions architect. Ellis pioneered the Crowdsourced Security-as-a-Service model, and co-founded Bugcrowd as the first platform to connect organizations with the skills and knowledge of the crowdsourced security community.

At RSAC 2022, we spoke to Ellis to discuss the most common security shortfalls that Bugcrowd identify and how businesses can mitigate them, the benefits of taking a proactive approach to security, and why businesses must accept that to err is human. 

Could you give us an introduction to Bugcrowd, your key use cases, and what differentiates you from other pen-testing and vulnerability disclosure providers?

Most definitely. So, Bugcrowd is a crowdsourced security platform—I’ve lately taken to referring to it as “multi-sourced”, because there are all sorts of different ways to access and consume the creativity that exists out in the white hat community.

A differentiator for us, and something that we’re quite proud of, is that we actually created this category; we were the first ones to plant the shovel in the dirt, so to speak. We didn’t invent vulnerability disclosure, or bug bounty programmes, or even the researcher community—all that already existed. But this idea of building a platform to connect the latent potential that exists within the wider community with all the unmet demand in the cybersecurity industry—we were the first mover there, which is pretty cool, because it has definitely since caught on as an idea. 

And it continues to grow. Not only Bugcrowd as a company, but there are others that have come into the space as well, which I take as validation of the overall idea.

How does Bugcrowd help organizations identify security vulnerabilities and fill gaps in their attack surface?

The fundamental premise that we work with as an organization is that cybersecurity is an inherently human problem—the technology just makes it go faster. I was actually working as a security practitioner prior to starting Bugcrowd, and a lot of the cybersecurity technology solutions out there do their bit, but they don’t fundamentally solve this question of outsmarting all the adversaries that the average defender is trying to compete against.

So, if you think about the problems that you’ve got and all of the different potential baddies that you’ve got to look out for, they’ve got lots of different skills, lots of different motivations, and lots of different reasons for coming after your stuff as a defender. 

On the defensive side, you’ve got automated tools, which can never be as smart as a human, and you’ve got as many people as you can afford to hire full-time, who might be really good at their job, but are fundamentally outmatched in terms of their ability to get ahead. 

So, this idea that it takes an army of allies to outsmart an army of adversaries was a big part of what we were looking to solve there. 

On top of that, growing up in the hacker community myself, I know that there are all sorts of folk like myself in that space, who enjoy thinking like a criminal but have absolutely no desire to be one; we’re actually here to help organizations answer these security questions, and to reduce their risk before it’s exploited.

So that’s a conceptual answer. A bit more specifically, what we’ve done is build a platform that connects all parts of our community with customers that have problems to solve. That problem could be running a vulnerability disclosure programme, where they’re just wanting to get feedback from the internet, from anyone that might be wanting to help. It could also be that they need very targeted, crowdsourced or even multi-sourced security assessments, where there are high levels of trust involved with giving out source code to do reviews and all sorts of things that you wouldn’t necessarily expect from a crowdsourced setup. 

So, effectively, the platform manages the different kinds of projects that our customers want to run.

The last piece is this graph that we have on the crowd. It looks almost like a dating website, in a lot of ways. We collect traits; we actually have the researchers and the hunters on the platform give us information to help us understand what they know, what they’re good at, and what their preferences are. It’s essentially a matching service. When a customer comes in and says, “I’ve got this set of things in my environment that makes me an ideal fit for this set of researchers,” we take that data and use it to create as good a match as we possibly can, to make sure that we’re getting the right eyes on target.
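As a rough illustration of the trait-based matching Ellis describes, a minimal sketch might score each researcher by how many of a program’s needs their declared skills cover. The field names, skills, and scoring here are purely hypothetical, not Bugcrowd’s actual data model or algorithm:

```python
# Hypothetical sketch of trait-based researcher matching -- an illustration
# of the "dating website" idea, not Bugcrowd's actual algorithm.

def match_score(researcher_skills, program_needs):
    """Count how many of the program's needs the researcher covers."""
    return len(set(researcher_skills) & set(program_needs))

def rank_researchers(researchers, program_needs):
    """Return researchers ordered from best to worst fit."""
    return sorted(researchers,
                  key=lambda r: match_score(r["skills"], program_needs),
                  reverse=True)

researchers = [
    {"name": "alice", "skills": ["web", "xss", "api"]},
    {"name": "bob",   "skills": ["mobile", "reversing"]},
    {"name": "carol", "skills": ["api", "cloud", "web"]},
]

best = rank_researchers(researchers, program_needs=["web", "api"])
print([r["name"] for r in best])  # → ['alice', 'carol', 'bob']
```

A real matching service would weigh far more signals (preferences, track record, trust level), but the core idea is the same: turn declared traits into a ranking so the right eyes land on the right target.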

What are some of the most common vulnerabilities that Bugcrowd discovers? Are there any that crop up so often that you could tell people to go and fix them before you’ve even begun a test?

I like how you ended that question there, because the answer of “just go out and fix it” is never quite that simple. If there’s something that’s systemic and keeps on cropping up, nine times out of 10, that’s because it’s actually quite hard to avoid or fix. 

Cross-site scripting (XSS) is a really good example of this. It’s a vulnerability that I think a lot of people thumbed their nose at, because it’s not necessarily as impactful as vulnerabilities like command injection. But the thing with XSS is that it’s everywhere, because, as a developer, it’s actually quite difficult to fully avoid. So, you end up in this position where you’ve got lots and lots of different instances of this particular issue popping up, mostly because of the nature of how code works to begin with. So XSS is one that we see a tonne.
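To make the point concrete, here is a minimal sketch of the kind of mistake that makes XSS so easy to introduce: interpolating user input straight into HTML without escaping it. The function names are illustrative; only the standard library’s `html.escape` is real:

```python
import html

def greeting_unsafe(name):
    # Vulnerable: user input is interpolated directly into the markup,
    # so an attacker-supplied <script> tag reaches the browser intact.
    return f"<p>Hello, {name}!</p>"

def greeting_safe(name):
    # Escaping special characters (&, <, >, quotes) neutralises the payload.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = '<script>alert(1)</script>'
print(greeting_unsafe(payload))  # the <script> tag survives intact
print(greeting_safe(payload))    # rendered as inert text, not executed
```

The hard part in practice is that every output context (HTML body, attributes, JavaScript, URLs) needs its own encoding rules, which is why the bug keeps reappearing even in careful codebases.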

Over the past two or three years, we’ve also seen a lot more by way of poor access and authentication setups and configuration issues. I put that down to the pandemic forcing everyone to move quite quickly. Speed is the natural enemy of security. And when COVID hit, there was this really rapid work-from-home transformation that had to happen pretty much globally. But then, shortly after that, all these different organizations started to adjust their business, to cater for the fact that people weren’t leaving their homes in the same way they used to.

All of that resulted in a whole bunch of code being written, and a whole bunch of changes happening on the internet. And things get left out by mistake as a result. So, we’ve definitely seen that pick up over the past two or three years.

The last one would probably be unpatched remote access systems, the kind of stuff that we’ve been hearing about being exploited by nation state actors and cybercriminal groups. I’ve been doing this for coming up on 25 years, and it’s almost a throwback to the late 90s, early 2000s, in terms of what that looks like on the internet. But what is old seems to be new again, in terms of the number of vulnerabilities of that nature that are out there to exploit. And now we’ve got this new dynamic of the adversary actually taking advantage of that.

Once you’ve helped organizations to identify some of these risks, what are the first steps they should be taking to remediate them? 

Our category is commonly thought of as a bunch of bug bounty platforms, which I think in the minds of a lot of people implies this idea of just going out to the open internet and triggering a conversation with everyone all at the same time. Most companies aren’t actually ready for that.

That’s why about 80% of what we do on the platform is actually in the form of private programmes, where organizations have the opportunity to crawl, then walk, then run, as well as deploy the crowd into private use cases instead of just public ones.

I say that because I think step zero for organizations is to actually understand what their vulnerability management and remediation process is going to look like. Some companies have that to a state where it’s really mature. For companies that have a cloud-native environment and an SDLC established, this is a process that’s already in place, and inserting security into that is less difficult for them. 

But for companies that don’t have these engineering features to begin with, they’ve basically got to build that part first. So, this whole idea of, “If we get some kind of trash fire issue come in through the front door that we’ve got to go off and fix immediately because of the risk it presents, how are we going to create time for that? Who’s going to look after that? How are we going to make sure that it’s done properly?” Nine times out of 10, that’s actually a process issue, versus a pure technology one.

So, I think that part gets missed, because it’s not just about making sure your coders understand that security is important. That is the tactical answer. There are a bunch of things that happen around that to make security awareness actionable and effective, which can get overlooked quite easily.

Step one, then, is really understanding the priority. This is another thing that we’ve definitely seen out of the pandemic, and we observed this quite clearly when Log4j happened. The security industry itself has a limited capacity to get around to all the things it’s being asked to do. There are so many vulnerabilities, there are so many adversaries, and there are so many potential security events going on, that it’s just not realistic to expect to fix them all. So, as a practitioner, you’ve got to be able to prioritize. You’ve got to understand which things are most likely, which things are most impactful, and which things have the greatest risk to your organization. 

Then you need to work out how you’re going to prioritize based on that information, which is a lot of what Bugcrowd helps people do, through the fact that we’ve got human creativity generating this information in the first place, as well as all of the work we’ve done on the platform to decorate vulnerabilities individually and say, “This is the comparative risk of this issue compared to some of the others that you might have.” 

Why is it important for organizations to proactively carry out tests to discover vulnerabilities, rather than taking a reactive approach to security?

Security is ultimately everyone’s responsibility. I enjoy the unique nature of the cybersecurity industry within IT. Ideally, as a kind of a purist around the whole thing, I would love to see a time in the future where it actually merges with engineering, and with how companies do business. You see leading indicators of that, with the SEC talking about boards reporting their cybersecurity experience in public companies. That, to me, is like the SEC starting to view cyber risk as just a part of normal business risk. 

The ideal state is where security isn’t this oddball thing that hangs off to the side; instead, it’s actually a part of core corporate culture, and it becomes almost less special in the process. If we get to a point where we’re boring, I think we’ve probably done our job properly.

Security should just be the thing that you do along the way; it’s not something to slap on at the end or treat as some sort of special side project. It’s a fundamental of everything your business does, especially as we get more reliant on technology to do business and, indeed, to do life in general.

To me, what that nets out to is a better preventative posture. It’s not any one specific thing, it’s more the fact that people know that they should lock their door when they leave the house in the morning because they live in a bad neighbourhood. But you’ve got to know that and acknowledge it, before you’ll actually do that, and half the time people don’t. 

Practice paranoia. You don’t want to have people get freaked out by all this to the point where they freeze up, but people need to believe in the boogeyman before they’ll actually undergo the expense and inconvenience of doing things securely. 

In light of that, do you tend to work with organizations on a more continuous basis, or do they come to you for a one-time service?

It’s definitely continuous in terms of how they engage. As I mentioned before, we’ve got private and public programmes and variations of how we can deliver them. The private stuff usually gets delivered into a pen test use case, but we also deliver private testing on either a continuous or a point-in-time basis per programme. Think of it like Uber for hackers: you push a button and get a bunch of smart computer people to come in, safely break your stuff, then tell you what’s most likely to get hacked for real, so it can be fixed.

Across the board, we usually see a fully mature customer of Bugcrowd doing continuous, point-in-time, private, and public programmes all side by side. They’ve got continuous programmes running, but they might also have a pre-production system that they’re wanting to run through its paces before they integrate it into their main production suite. So that’ll be a project, because ultimately, it’ll get integrated into something that already has continuous testing in place.

The short answer is that the use of the Bugcrowd platform is continuous, but then the different things that they might do underneath that can be continuous, point-in-time, or a combination of both.

Finally, what is your advice to organizations looking to use a platform such as Bugcrowd to assess their vulnerability risk?

I think I’ve already dropped a few things that would go into that in terms of security being a team sport. My biggest belief when it comes to this question is that to err is human. As you’re writing this, you’ll have spellcheck pop up and say, “Hey, you goofed that word!” And then at some point, when you go to edit it, you might see that none of these things is major, or there might be something that materially changes the meaning of a sentence. It’s not because you’re a bad writer, it’s because you’re human.

That’s an analogy I use often to explain the presence of vulnerabilities in code. Oftentimes, I get asked by investigative folk, “Shouldn’t they not have had this vulnerability in the first place?” In an ideal world, yes, that’s true, but it’s not an ideal world. The trade-off for human creativity is that we’re able to come up with stuff that computers never could. That’s the upside; the downside is that we’re not bound by math in the way computers are, which allows for these mistakes to be made.

Practically and organizationally, this means that getting over the hump of viewing vulnerabilities as a bad thing, and security risk as dirty laundry, is really important. We need to progress from that to the point where we know this is just a thing that happens because we have humans working for us. Let’s accept that, actually get in front of identifying where it’s putting our customers at risk, and then try to learn from it so we can do it less in the future.

That, to me, is one of the most powerful mindset shifts an organization can make. And for folks that are in that position, this whole conversation between people with a breaker mindset in the crowd and people with a builder mindset gets really productive, because they learn from each other. If we can get to that point, then things get really fun.


Thank you to Casey Ellis for taking part in this interview. You can find out more about Bugcrowd’s penetration testing and attack surface management platform via their website.

Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions with confidence.