Application Security

“Protecting The Application Is An Extension Of Protecting The Data”

Expert Insights speaks to Terry Ray of Imperva to discover how organizations can secure their data and all paths leading to it.

Expert Insights Interview With Terry Ray Of Imperva

There are various layers to data security. Not only do you have to secure the data itself, but also the servers on which it’s stored and used, while monitoring who can access it, how they access it, and what they can do with it. This task presents quite a challenge for many security teams, particularly those charged with securing data across multiple on-prem and cloud servers, and often hundreds of different application components. It’s like a game of “Capture the Flag”, except that you actually have 50 flags and, if someone steals one of them, you could face an average loss of $3.86 million.

So how can organizations protect all the paths leading to their data?

We spoke to Terry Ray, SVP and Technology Fellow at Imperva, to find out. Imperva was founded in 2002 to address the security gaps left by the network- and endpoint-centric security tools that were prevalent at the time. The company recognized a common issue among many organizations: they had an inherent trust in the security of their applications, giving little thought to how data was processed in the backend or who was accessing it. This problem caused countless security incidents, in which large enterprises, many of them recognizable household names, lost millions to billions of private data records. Imperva identified that neither network firewalls nor endpoint solutions were adequately securing application servers; they completely failed to protect data at the database level.

Ray has spent the majority of his career working in the application security and data security space. For the last 18 years, since joining Imperva, he’s worked with customers to identify their problems and help them to strengthen their security architectures.

Securing Data And All Paths To It

Databases are traditionally protected with encryption and limited native logging, as well as authentication settings for database administrators and other human and application users. Database admins often have extensive access privileges within their organization’s environment, which makes them a lucrative target for cybercriminals seeking to gain access to corporate data. Because of this, it is fundamentally important for security teams to monitor the access and activities of database administrators.
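As a simplified illustration of what that monitoring can look like in practice, the hypothetical sketch below scans a database’s native audit log and flags privileged accounts touching sensitive tables. The log format, table names and account names are invented for the example; this is not Imperva’s product.

```python
# Hypothetical sketch: flag privileged (DBA) activity against sensitive tables
# by scanning native audit-log entries. The log schema and names are assumptions.
import csv
from io import StringIO

SENSITIVE_TABLES = {"customers", "payment_cards", "patient_records"}
PRIVILEGED_ACCOUNTS = {"dba_admin", "svc_backup"}

SAMPLE_AUDIT_LOG = """timestamp,user,action,table
2021-06-01T09:14:02,app_orders,SELECT,orders
2021-06-01T09:15:47,dba_admin,SELECT,payment_cards
2021-06-01T23:41:10,svc_backup,EXPORT,customers
"""

def flag_privileged_access(log_text: str):
    """Return audit rows where a privileged account touched a sensitive table."""
    reader = csv.DictReader(StringIO(log_text))
    return [row for row in reader
            if row["user"] in PRIVILEGED_ACCOUNTS and row["table"] in SENSITIVE_TABLES]

for event in flag_privileged_access(SAMPLE_AUDIT_LOG):
    print(f"review: {event['user']} ran {event['action']} on {event['table']} at {event['timestamp']}")
```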

However, securing the immediate database perimeter itself is not enough to completely secure the data within it. Organizations need to consider all paths leading to and from that data, any of which can be left exposed by traditional network security controls.

Let’s take a look at an example. If your company has a website, you want as many people as possible to visit it. So, your network firewall doesn’t restrict access to your website, but this leaves the application vulnerable to breaches, both those caused by external threat actors and those originating from compromised internal application components such as “trusted” APIs and serverless code.

“Protecting the data requires that you look not only at direct human database users, but also look at your applications and APIs, because they are the primary users of that data,” Ray says. “And, from our perspective, you have to monitor the applications and the data as a connected thread of activity.”

This unified approach to database and application security gives organizations the intelligence to determine who is using an application and whether they’re using it in a way they’re supposed to—all the way back to the data they access.

“Let’s say I’ve been monitoring an application, and I know this application isn’t supposed to access credit cards,” explains Ray, “but, all of a sudden, I see credit card information flowing across the application from the database. It’s an application, which certainly indicates a fully authorized connection, but I also know through behavior analysis of the application, that accessing credit cards is abnormal for this app. The combination of unusual application behavior on highly sensitive data types raises the incident to a level that bears examination and possibly mitigation.”

A traditional, segregated approach to database and application security would fail to provide this type of insight. Organizations can see who is accessing an application, but not how they’re transacting with the database from within that application.

“This is one of Imperva’s unique differentiators, giving that complete edge to data visibility. We have competitors in the database space, and we have competitors in the application space, but there isn’t anyone else who takes our stance that protecting the application is an extension of protecting the data.”
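To make the idea concrete, here is a minimal, hypothetical sketch of correlating application identity with data sensitivity, in the spirit of the credit-card example above. The application names, data categories and baselines are invented for illustration; this is not how Imperva’s platform is implemented.

```python
# Hypothetical sketch: correlate which application made a query with how sensitive
# the data it touched is, and flag combinations outside that app's learned baseline.

# Baseline of data categories each application normally reads (e.g. learned from history).
APP_BASELINE = {
    "order-service": {"orders", "inventory"},
    "support-portal": {"tickets", "customers"},
}

SENSITIVE_CATEGORIES = {"credit_cards", "ssn"}

def assess_query(app_name: str, data_categories: set) -> str:
    """Classify a query by combining application identity and data sensitivity."""
    baseline = APP_BASELINE.get(app_name, set())
    unusual = data_categories - baseline              # categories this app doesn't normally touch
    unusual_sensitive = unusual & SENSITIVE_CATEGORIES
    if unusual_sensitive:
        return f"ALERT: {app_name} unexpectedly accessed {sorted(unusual_sensitive)}"
    if unusual:
        return f"review: {app_name} accessed unfamiliar categories {sorted(unusual)}"
    return "ok: within baseline"

# The scenario from the quote: an app that never touches cards suddenly reads credit cards.
print(assess_query("order-service", {"orders", "credit_cards"}))
```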

The Challenges Of Managing An Evolving Environment

In recent years, more organizations have been going through processes of digital transformation. One part of that is the migration of their data and workloads to the cloud—a shift which almost all businesses around the world had to embrace during the last 18 months to enable their employees to work more efficiently from home. And operating in the cloud presents a number of challenges.

“As companies moved to the cloud, we saw teams moving to a mindset where they didn’t want the responsibility of managing the infrastructure or the software itself,” Ray explains. “So, they started moving to managed database services from providers like AWS and Azure. In those environments, you just log into your database, and the provider manages everything except your fundamental database functionality.”

This led to many organizations managing their data across a number of different cloud-native data services, making it much more difficult to maintain a clear, unified overview of where data was being managed and who was accessing it via which applications. The prevailing view among most teams is that these challenges are especially acute when using a “niche” cloud database, such as Snowflake, Aurora or Dynamo.

“Now, our customers are saying, ‘I need you to secure my two on-prem databases for me, but also some number of different cloud databases, including data lakes and data warehouses (some provider-managed and some not), attached to 50 different applications that have microservices and all the other components that are in modern applications. I need you to monitor the back end, and support all of that.’”

“That’s another of Imperva’s differentiators: our ability to quickly cover a new database or data source of any kind. We made a key acquisition last year of jSonar, now Imperva Sonar, which gives us a platform to cover cloud and on-prem natively as if they were one ecosystem. It’s given us the rapid capability to support any data source any customer could ever want to look at.”

Shifting Security To The Left: The DevSecOps Concept

For many modern organizations going through the process of digital transformation, embracing the DevSecOps concept of “shifting security to the left” has become a priority. This concept refers to the process of implementing security checks during the development phase to ensure that an application’s code is secure from the get-go, rather than having to be fixed at the end of the development cycle.

But why do organizations need to integrate security into their DevOps cycles?

First, applications are made up of lots of different components, many of them third-party rather than internally developed, that all communicate with one another. Without the integration of security, that communication isn’t monitored. This allows room for human error, but also exposes the code to external and internal threats.

Second, building security into the development cycle reduces the risk of applications being released with vulnerabilities or errors in their code.

“The communication piece is only one part of DevSecOps,” Ray says. “The other part is the CI/CD (continuous integration/continuous delivery) process of building and testing applications. If you have a problem with your application, do you stop and fix it, or do you move ahead with the development and accept the risk of the business pushing it out?”
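As a rough illustration of what a “shift-left” gate can look like in a CI/CD pipeline, the sketch below reads a security scanner’s findings and fails the build when they exceed an agreed policy. The report format, severity thresholds and file name are assumptions for the example, not any specific scanner’s output.

```python
# Hypothetical CI/CD security gate: fail the pipeline stage when scan findings
# exceed the agreed policy. Report format and thresholds are illustrative only.
import json
import sys

MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}   # example policy

def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)                          # e.g. [{"id": "...", "severity": "high"}, ...]
    counts = {}
    for finding in findings:
        sev = finding.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1
    violations = {sev: n for sev, n in counts.items()
                  if n > MAX_ALLOWED.get(sev, float("inf"))}   # only enforce listed severities
    if violations:
        print(f"Build blocked: {violations} exceed policy {MAX_ALLOWED}")
        return 1                                         # non-zero exit fails the pipeline stage
    print("Security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```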

To help organizations embrace a DevSecOps strategy, Imperva offers runtime application self-protection (RASP). RASP technology is designed to detect attacks during application runtime and, by monitoring the code for unwanted activity, mitigate in real time the exploitation of vulnerabilities introduced by human error.

“Our customers want to do two things. Number one, they want full visibility into all application activity, not just ‘North/South’ traffic, but also internal, or ‘East/West’ traffic. Our RASP technology lets us embed security into an application or application component, so we can see anything happening regardless of the source, destination or direction of the traffic.

“But more importantly, it lets us embed visibility and security right into that CI/CD process, allowing teams to monitor everything that users are doing and how the code behaves. This allows problems in the code to be closed before it reaches production. Finding a vulnerability during a security review doesn’t mean you have to go back, rush a fix and delay the production roll-out. Instead, when it comes to testing, RASP informs security and DevOps where the problems are. Knowing you’ve built a hardened, secure application with security built in means you can move to production with RASP actively mitigating exploits of your known vulnerabilities, while DevOps fixes those vulnerabilities in the next iteration of the code without a rush process or slowing the business. RASP shortens the development process so you don’t have to spend months developing and testing iteratively while delaying a business plan.”
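The following is a deliberately simplified, hypothetical sketch of the RASP concept: a guard embedded in the application that inspects calls at runtime and blocks suspicious input, whatever its source or direction. It is a teaching example, not Imperva’s implementation; the function names and the injection pattern are assumptions.

```python
# Hypothetical RASP-style guard: wrap a data-access function and reject
# suspicious string arguments at runtime, before the query ever runs.
import re
from functools import wraps

INJECTION_PATTERN = re.compile(r"('|--|;|\bunion\b|\bor\b\s+1=1)", re.IGNORECASE)

def rasp_guard(func):
    """Inspect every call's arguments and block ones that look like injection attempts."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and INJECTION_PATTERN.search(value):
                raise PermissionError(f"RASP: blocked suspicious input {value!r}")
        return func(*args, **kwargs)
    return wrapper

@rasp_guard
def lookup_customer(customer_id: str):
    # In a real app this would run a parameterized query; here we just echo the SQL.
    return f"SELECT * FROM customers WHERE id = '{customer_id}'"

print(lookup_customer("12345"))                      # allowed
try:
    lookup_customer("12345' OR 1=1 --")              # blocked at runtime
except PermissionError as err:
    print(err)
```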

The DevSecOps approach works well for large organizations that want to shorten their development cycles, Ray tells me, but its benefits can also be reaped by SMBs whose supply chains integrate security into their development and operations.

“It doesn’t matter whether you’re doing the development yourself, or whether you’ve hired someone to do it for you. Most applications are 60-70% third-party code; do you trust that third-party coder to have done their due diligence and for the code to be secure? If not, that’s where RASP comes into play. RASP secures third-party code you shouldn’t trust just as well as it secures your own internally developed code.”

Step One, Visibility; Step Two, Quality…

When it comes to data and application security, Ray advises organizations first and foremost not to wait for a breach; everyone has to start somewhere, and starting pre-breach is obviously the best timing, though unfortunately that isn’t the case for many organizations. And the best place to start, he tells me, is with visibility.

“You need to know where all of your classified, sensitive data is, and whether it could be anywhere else in your environment without you knowing about it. Most organizations know where their sensitive data is supposed to be, but not where it all really is. It’s kind of like my house keys—I know where they’re supposed to be, but half the time they’re not there.”

“The other piece is knowing who’s accessing your data, when they’re accessing it, how they’re accessing it, and how much they accessed. These are the questions that are going to be asked if you have a breach. And if you don’t know, that’s a big problem.”
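As an illustration of that first visibility step, a discovery job might sample values from each data store and flag columns that look like sensitive data. This is a hypothetical sketch; the column names, sample values and card-number heuristic are assumptions for the example.

```python
# Hypothetical data-discovery sketch: sample column values and flag ones that look
# like payment card numbers (length pattern plus a Luhn checksum).
import re

CARD_PATTERN = re.compile(r"^\d{13,19}$")

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum, used to distinguish card-like numbers from noise."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def looks_like_cards(samples) -> bool:
    """Return True if a majority of the sampled values resemble card numbers."""
    hits = sum(1 for v in samples if CARD_PATTERN.match(v) and luhn_ok(v))
    return hits >= max(1, len(samples) // 2)

inventory = {
    "crm.contacts.phone": ["5551234567", "5559876543"],
    "billing.payments.pan": ["4111111111111111", "5555555555554444"],
}
for column, samples in inventory.items():
    if looks_like_cards(samples):
        print(f"sensitive data discovered: review who can access {column}")
```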

Not having this visibility presents a huge problem when it comes to recovering from a data breach, particularly in terms of fines from compliance bodies. A lot of the breaches that make headlines today feature big, round numbers. This is because the organization didn’t know exactly how many records were compromised, says Ray. If a company can’t say with confidence that exactly 10 or 500 records were compromised, for example, it has to assume that all 200 million of its records were compromised.

“There’s a big difference on the GDPR or PCI fine between five records and 200 million records.”

“When you have complete visibility in that environment—when you look at the data, watch what people are doing, and see what they’re doing wrong—your challenges really start to melt away when it comes to data and application security.”

Then, once an organization has achieved that visibility and comes to actually implementing a security solution, quality must come first, says Ray.

“The overwhelming majority of organizations already have anti-phishing, anti-malware and a network firewall. However, when you revisit your security landscape and requirements, ask yourself, ‘Is it good enough?’ If you are breached, did you have something in place that was really effective at preventing it, or did you have something that just ticked the box saying you had a web application firewall or data security?”

“You’ve got to look at the quality; they’re not all made the same.”

Achieving the proper level of visibility into data use across your organization and implementing a high-quality solution to tackle the threats you identify will help lead you to step three: security.


Thank you to Terry Ray for taking part in this interview. You can find out more about Imperva and their edge, application and data security platform at their website and via their LinkedIn profile.