Interview: Virtru SVP On How To Build A Stronger Security Foundation
Expert Insights interviews Rob McDonald, Senior Vice President of Strategy and Field CPO at Virtru, to discuss their data security and encryption platform.
All organizations today, from the smallest start-ups to the largest global corporate behemoths, share a single common challenge: securing data. Data protection provider Virtru recently launched OpenTDF, a new open-source initiative designed to become a universal standard for data control, enabling organizations to incorporate the concept of “zero trust” into their data processes.
Expert Insights recently spoke to Rob McDonald, the Senior Vice President of Strategy and Field CPO at Virtru, to discuss the Virtru platform and the OpenTDF standard, as well as wider industry trends. McDonald is a long-time privacy advocate who has worked in various engineering and cybersecurity CTO and CIO roles across multiple industries before joining the Virtru team when the company first launched in 2011.
Our interview covered the scale of the data challenge today, the importance of Zero Trust when it comes to data security, and McDonald’s advice for how organizations can better protect their data environment. This interview has been edited for clarity and length.
Can you give an overview of the Virtru platform and your typical customers?
Virtru was born out of necessity: A crossroads between the need to mobilize mission-critical data, and the need to protect that data, particularly when it’s highly sensitive. And those crossroads are almost always fighting each other. Our original concept was: “Can we execute on a data-centric protection solution, while creating an economy of ease of use so that adoption is impactful?” Because without adoption, it’s useless.
We chose a pervasive medium in the beginning, which was email. If you’ve been in security long enough, you’ll think of S/MIME and PGP as being complicated, and not easy to use. And that’s really what we were trying to overcome in the beginning. Our early portfolio was centered on end-to-end email protection, and over time that has evolved to support file collaboration, general data collaboration, and us bringing the underlying platform to the market so that developers and organizations can bake those end-to-end encryption capabilities into their own workflows and automations.
In addition to expanding the portfolio to include different and more diverse data types, we’ve also added a more diverse set of activities you can perform on that data. So, instead of just an email, where I can read, consume, and reply, now we’re doing things like secure analytics for some of the structured data.
That really has been the journey: Assembling these capabilities that make end-to-end encryption possible within a fast-paced, fast-moving, data-rich environment so that organizations can get the value out of that data, even while they are protecting it end-to-end.
Given the nature of what we do, we have a lot of customers in regulated spaces. These span broad regulations, like HIPAA in healthcare, which is very foundational, up to and including very specific and high-bar regulations, like ITAR and CMMC. Our typical customers usually have some of these regulatory pressures, or they have very sensitive intellectual property or company-sensitive data that is very valuable yet needs to be mobilized.
And because of that very horizontal need, we have customers ranging from SMBs up to organizations with 200,000 plus employees, including multi-national enterprises. Today, we are continuing our march up-market, so we are seeing more and more of these much larger customers.
How have the challenges in the data security landscape evolved since you first launched in the email space, and how have the regulatory pressures you mentioned affected the challenge of securing data without impeding its mobilization?
Email was our initial entry vector, but we protect data of any kind. That’s really the key. When you look at it from that lens and view our platform as data agnostic in terms of where that data is traveling through, you’re also going to get a look at the problems and challenges that most organizations are facing.
I’ll try and give you a broad view of the challenges we see right now. Most organizations are facing a handful of questions. One is, “Where is my data?” There is still a huge discoverability and risk assignment issue. It’s complicated. Most organizations are behind the pace of the application of data, because it’s really difficult.
The second question is, “What is my risk quantity?” At the end of the day, the question of where your data is can only be answered at one point in time. Because that data moves. As that data moves and transforms and takes on additional recipients, organizations are really struggling to translate that into a risk quantity that justifies remediation or investment, with enough granularity and timeliness to be impactful. That whole discovery, mapping, and risk assessment calculus is still one of the biggest challenges.
The third question is, “What is within my reach of control?” Amidst a landscape of data mobility, movement, access, and collaboration, one of the biggest aspects of risk quantity is discovering how much of this is just out of your control. This question is being asked a lot more now than it used to be.
How has the real explosion we’ve seen in the use of cloud services and decentralized SaaS applications impacted these challenges?
That’s a big part of it. We tell teams in the department: “Use all the tools you’ve got to get the mission done.” And then we slap their hands when they use a tool we don’t have control over!
Businesses have a North Star. For some organizations, it’s just revenue. For other organizations, it’s a more altruistic goal. But fundamentally, that goal is why the business is there. So, you have consumerization with some of these great tools and employees say: “I can be more productive with this!” That explosion is a big part of it, and it took the industry by storm over the last ten years.
And then you compound that with the increasing privacy regulation in the landscape. And that is interesting because we’re still in this place where we are looking to these legal entities to try and protect us. We trust frameworks like GDPR and CCPA, which are great, but we have to realize that they impose a lot of complexity on organizations. Organizations now grapple with: “Oh jeez, can I even attribute ownership to where all of this data is, because I’ve got to claw that back or manage it differently?”
What really is the long tail here is the awakening of the data owner, the consumer, the individual, realizing their loss of control and sovereignty over their data. That awakening is really what is going to change the industry long term. We’re in this current state where regulations complicate things: organizations now not only have to know where their data is, they also have to attribute ownership and lineage to that data. But we’re also seeing data owners awaken to their loss of control, and that is going to be interesting to watch unfold.
Moving on to discuss the Virtru platform, what sets your solution apart in the encryption and data security space, and how are you helping businesses to address these challenges?
Some of this is philosophical, and some of this is manifested in Virtru’s product. We really have a “start now, where you work” mentality. Some of these ecosystems, frameworks, and environments are very complex. And what ultimately happens is that organizations never get enough adoption to be impactful.
We have taken a user-first, experience-oriented approach. Whether you’re on your journey to Zero Trust or to some other kind of improved security posture, you’re asking: “Where is my data? Where are my employees working?” We likely integrate there. And when we integrate there, the end user does not carry that adoption burden.
Having that empathy for the individuals who are actually carrying out the work, and not imposing additional work on them, has resulted in a significant amount of success for us, because a lot of cyber organizations are still very punitive.
Architecturally, we are built for this age. We’re a data owner-first architecture. This means when data is minted, from the beginning, we can apply policies to it so that the data owner stays in ownership of that data throughout its journey. That’s unique to this industry overall.
Just recently, we created an open source initiative: OpenTDF. The underlying core kernel capabilities are key management, attribute management, and authorization. Those key kernels are now open. So, there’s less concern about adopting that technology. And you’ll have more community collaboration on this concept, which is unique.
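To make the kernel capabilities McDonald describes more concrete, the sketch below illustrates the general TDF-style idea of attaching an attribute-based policy to encrypted data and cryptographically binding the two together. This is a simplified, hypothetical illustration in Python, not OpenTDF’s actual API; the manifest field names and attribute URI are illustrative assumptions.

```python
import base64
import hashlib
import hmac
import json
import os

def build_manifest(policy_attributes, data_key):
    """Attach an attribute policy to data and bind it to the data key."""
    # The policy travels with the ciphertext; its attributes drive the
    # authorization decision made by a key access service.
    policy = {
        "uuid": "example-policy",  # illustrative identifier
        "body": {"dataAttributes": policy_attributes},
    }
    policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
    # Bind the policy to the payload key so the policy cannot be swapped
    # out without detection.
    binding = hmac.new(data_key, policy_b64.encode(),
                       hashlib.sha256).hexdigest()
    return {"policy": policy_b64, "policyBinding": binding}

# Symmetric key that would encrypt the payload itself.
data_key = os.urandom(32)
manifest = build_manifest(
    [{"attribute": "https://example.com/attr/classification/value/secret"}],
    data_key,
)

# Before releasing the key, a key access service re-computes the binding
# and refuses access if the policy has been tampered with.
expected = hmac.new(data_key, manifest["policy"].encode(),
                    hashlib.sha256).hexdigest()
assert hmac.compare_digest(expected, manifest["policyBinding"])
```

The point of the sketch is the separation of concerns McDonald names: key management (the data key), attribute management (the policy attributes), and authorization (verifying the binding and evaluating attributes before releasing the key).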
One last piece that I think is worth mentioning here is, if you think about this technical convergence with the real world, it’s sometimes been a square peg in a round hole. We exist in this complex state where trust, intent, and consent of multiple parties should be enforceable by technology, but as of today it is not. You’ve got to look to a legal authority – say GDPR, ITAR, or CMMC – and hope they’re going to do the right thing for you.
We’ve built a multi-party policy framework which means that we can take the intent and consent from all these parties, bake that into a policy, and apply it to the data level so that regardless of where that data travels, you have visibility, control, and enforcement.
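A multi-party policy of the kind described above can be thought of as a conjunction: every stakeholder’s conditions must hold for access to be granted, and any one party withdrawing consent blocks access. The following is a minimal, hypothetical sketch of that decision logic; the class and attribute names are illustrative, not Virtru’s implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartyPolicy:
    """One stakeholder's conditions on access to a piece of data."""
    party: str
    required_attributes: frozenset  # attributes the requester must hold

def access_allowed(requester_attributes, policies):
    # Every party's conditions must be satisfied simultaneously;
    # a single unmet requirement (or revoked consent) denies access.
    attrs = set(requester_attributes)
    return all(p.required_attributes <= attrs for p in policies)

policies = [
    PartyPolicy("data-owner", frozenset({"dept:engineering"})),
    PartyPolicy("regulator", frozenset({"citizenship:us"})),  # e.g. ITAR-style
]

print(access_allowed({"dept:engineering", "citizenship:us"}, policies))  # True
print(access_allowed({"dept:engineering"}, policies))                    # False
```

Because the policy travels with the data rather than living in a perimeter control, the same decision can be enforced wherever the data ends up, which is the enforcement property the interview emphasizes.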
This is a unique combination of technology, in my opinion, and it’s the differentiator for us. We believe this is what the future should be: Technical enforceability, as opposed to relying on legal proxies alone and hoping they do the best for you.
Virtru describes its platform as “Zero Trust Data Control”. We see a lot of interest in the cybersecurity industry around the Zero Trust framework, so how would Virtru describe the concept of Zero Trust, and where does your platform sit in that process?
Shout out to my cybersecurity brothers and sisters out there, because my immediate response to Zero Trust is always: “What do you mean?” So first off, let me start by saying that Zero Trust is a decisioning framework for an organization. It’s not a tool; it’s a means to view your operational surface area through a lens of ensuring authorization at the right time, with just enough access. Zero Trust is about choosing solutions that allow you to implement that lens. That sets the groundwork so that we’re not coming off as a vendor that claims, “We solve all your Zero Trust problems.” We’re a component of your overall Zero Trust program.
I think that where we really elevate the Zero Trust impact is by moving the identity and policy intent to the data itself. This is because data is the ultimate common denominator. There are various Zero Trust solutions focusing on the application container, the network container. But what is the one thing that actually moves? It’s the data.
We’re trying to take that identity and policy intent and move it to the data itself. So as that data moves through its enrichment journey—which could be into a CRM, out of a CRM, to the board, whatever—you’re getting value out of it while also achieving a level of visibility and control. It is that element of control that organizations simply do not have when they focus solely on the application container or network endpoint where the data is being accessed.
That’s what’s key. That data is going to move; it has to move. And when it moves beyond the boundary that you have insight into, if you haven’t applied the data pillar of the Zero Trust philosophy, you will lose visibility, and you will lose control. That’s really what we mean by Zero Trust, and we’re demonstrating it in our platform with secure collaboration, file transfer, and email.
What final piece of advice would you give to organizations when it comes to better managing and securing their data?
The last three or four years, I hope, have been a moment where everybody has been able to stop, remove all the assumptions they were making, and say: “Where can I create the most impact?” I hope every organization is able to do that both on the operational side and on the cyber side.
Start from a foundation and build on the reality that data will have to move. From that vantage point, it becomes obvious that risk evolves as that data moves. And because of this, response time alone is simply insufficient. So, the earlier in the data lifetime that you can bind your intent—the who, and under what conditions, that data can be accessed—the more control you’re going to have as that data travels. If data has to move, the earlier you bind it to access controls, the more insurance you have against unintended sharing. It also gives you visibility beyond the boundaries of your organization.
So, I think start there. Start at the data layer, because that is ultimately the common denominator as you move up the cybersecurity stack. That is what is going to give you these economies of scale, and I believe give you a stronger foundation for the overall program you’re trying to build for current and future threats to your organization.
Learn more about Virtru here: https://www.virtru.com