Interview: Re-Engineering Cybersecurity for the AI-Driven Enterprise

We spoke to OpenText’s EVP of Cybersecurity Engineering, Muhi Majzoub, at the 2025 OpenText World conference in Nashville.

Published on Dec 4, 2025
Written by Mirren McDade

As organizations accelerate development with AI, their attack surface is expanding at an unprecedented rate. This creates a landscape where vulnerabilities can emerge long before security teams have the visibility, context, or tooling to keep pace.

At OpenText World 2025 in Nashville, Expert Insights sat down with OpenText’s EVP of Cybersecurity Engineering, Muhi Majzoub, who leads the company’s enterprise, SMB/consumer, and services security divisions. In this wide-ranging conversation we discussed how organizations can defend themselves in a rapidly evolving threat landscape.

Muhi explains the innovation gaps that enterprises must close in identity, application security, data protection, and SOC operations, and how AI is reshaping everything from secure code development to connector-building for modern detection and response. He also explains how agentic AI is becoming central to both offense and defense, and the areas that organizations should prioritize as they plan their security strategies for 2026.

I’d like to start off by asking you a bit about your background – how you got started in cybersecurity, and how you came to your current role at OpenText?

I’ve been an engineer and a software developer for 30-plus years: 17 years at Oracle, six and a half at CA Technologies, two years in the UK at NorthgateArinso, and then 13 and a half years at OpenText.

In January of this year, I took on a new role. I was the Chief Product Officer previously, and I’m now building the Cyber Security Business Unit. I manage the three divisions in cyber security that do enterprise R&D, SMB and consumer, and then the services division.

As the person defining OpenText’s security product strategy, what areas of cybersecurity do you believe are most in need of innovation over the next few years?

That’s a great question, and I’ll break my answer down into a few different areas. 

In the enterprise space, identity management is one of the most important – followed by application security, and then by data security, to secure the data that AI agents are producing. Next is SIEM operations – powering the Security Operations Center, protecting against threats, and enabling threat detection and response – and finally forensic intelligence and forensic analytics.

The cybersecurity space is very fast-paced. When they need to keep up with evolving threats, how can product development and engineering teams balance the need to innovate quickly with the need for governance and security testing?

We use AI internally, but we use AI with a human in control. AI is now making our engineers and our managers more efficient by automating tasks that used to take three or four hours and can now be accomplished in 15 minutes.

In the end, a human being needs to review and validate and confirm that this code is ready. The second level involves using our own platform called OpenText Application Security to test our application for vulnerabilities, to validate data accuracy, or to validate the authenticity of libraries. 

For example, say an engineer is building a Python or Java codebase; they could go on the internet and grab 50 libraries. Each of those libraries could be thousands of lines of code, and once they plug them in, they can start using them. Our platform comes in and crawls those libraries. It crawls the code behind these libraries and identifies security vulnerabilities, open-source vulnerabilities, and patch-level risks.
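The dependency check Muhi describes can be sketched in miniature: compare each imported library and version against a feed of known advisories. This is an illustrative assumption, not OpenText’s actual implementation – the `KNOWN_VULNERABLE` table below is a stand-in for a real vulnerability database.

```python
# Minimal sketch of dependency vulnerability scanning, similar in spirit
# to what an application-security platform does when it crawls libraries.
# KNOWN_VULNERABLE is hypothetical illustration data, not a live feed;
# the two entries reference real, well-known CVEs.

KNOWN_VULNERABLE = {
    ("requests", "2.19.0"): "CVE-2018-18074 (credential leak on redirect)",
    ("log4j-core", "2.14.1"): "CVE-2021-44228 (Log4Shell RCE)",
}

def scan_dependencies(deps):
    """Return (name, version, advisory) tuples for risky dependencies."""
    findings = []
    for name, version in deps:
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

project_deps = [("requests", "2.19.0"), ("numpy", "1.26.4")]
for name, version, advisory in scan_dependencies(project_deps):
    print(f"{name} {version}: {advisory}")
```

A production scanner would instead crawl the transitive dependency tree and query a maintained advisory source rather than a static table.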

What are the main challenges your team faces when building products for modern SOC teams?

How many connectors we can develop, and how fast we can develop them, is a real challenge. We have a great product that runs in the public cloud, and we partner with Microsoft to help support two of their platforms, Entra ID and Defender, and we have been asked to also support their Sentinel platform.

We would like to develop hundreds of connectors into our own OpenText platform. We would like connectors with the ability to connect to SAP, to Salesforce, etc., and to identify vulnerabilities or anomalies when there are security risks. So, the biggest challenge is essentially finding enough hours in the day to develop these connectors to different systems, which all behave and operate differently.
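The effort Muhi describes comes from the fact that every target system exposes events differently, so each connector needs its own normalization logic behind a shared interface. The sketch below is a hypothetical illustration of that pattern – the class names, fields, and sample events are assumptions, not OpenText or vendor APIs.

```python
# Hypothetical sketch of why connectors are labor-intensive: each source
# system returns events in its own shape, so every connector re-implements
# the mapping into one shared schema that the detection platform consumes.

from abc import ABC, abstractmethod


class Connector(ABC):
    """Common interface the detection platform consumes."""

    @abstractmethod
    def fetch_events(self) -> list[dict]:
        """Return events normalized to a shared schema."""


class SalesforceConnector(Connector):
    def fetch_events(self):
        # Real code would page through Salesforce's event-log REST API;
        # here we use a canned record in Salesforce's field style.
        raw = [{"EventType": "Login", "UserId": "u1"}]
        return [{"source": "salesforce",
                 "type": e["EventType"].lower(),
                 "user": e["UserId"]} for e in raw]


class SapConnector(Connector):
    def fetch_events(self):
        # SAP exposes audit data through entirely different protocols,
        # so none of the mapping logic can be shared with Salesforce.
        raw = [("LOGON", "jdoe")]
        return [{"source": "sap", "type": t.lower(), "user": u}
                for t, u in raw]


for connector in (SalesforceConnector(), SapConnector()):
    print(connector.fetch_events())
```

Multiplying this per-system mapping work across hundreds of enterprise applications is what makes "hours in the day" the binding constraint.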

How do you overcome those challenges?

AI helps us become more efficient by speeding up work, and we leverage our own application security platform and the other solutions in our DevSecOps stack. Our core software discovery and delivery tools also help us speed up the software development life cycle.

How important is user feedback when it comes to delivering new products? How do you access that feedback and translate it into a new product or feature?

We take pride in collecting feedback and bringing it back to our development center, because depending on the industry you work in, customers tend to know their business better than we do. So, we collect that data. We brief them on our vision. Sometimes we ask them to vote on new features. When they get very involved in the process, customers feel heard, and we can act on their feedback.

A big topic at OpenText World this year is AI. What role does agentic AI play in shaping the future of cybersecurity products?

AI plays a big role in cybersecurity, and it’s a double-edged sword because it works for both offense and defense. It helps attackers act smarter, but it also helps us, the defenders, respond smarter.

We hope that we can always stay one step ahead of our attackers. Over the last 20 years, we have used AI and machine learning from their earliest stages. Our Webroot antivirus and endpoint protection product has had machine learning and AI embedded in the engine for 17 years, so we know what we put in and we don’t allow any external influence.

What are some of the risks or limitations of AI that you’re most mindful of within your engineering teams?

The big risk is making the attackers stronger. That’s the biggest risk. 

Prompt injection will be the next big attack vector, I believe. Hackers will take advantage of being able to manipulate prompts to take malicious action, and I suspect we will start seeing more phishing attacks that involve AI as well, since attackers can now mimic voices and faces, which will make it harder to detect phishing attacks.

I encourage people to have a secret code that only their family knows. So, if you receive a text asking for urgent help from a family member, you can ask: what’s the code?

Finally, taking a step back, what should organizations’ top cybersecurity planning priorities be for 2026?

I think the most important thing is to protect the identities of their employees and of any external entities that work with them and have access to systems. Protecting identities is at the top of every CIO’s mind.

Second, ensure you have a documented process that every engineer and every technologist in your company follows, dictating how they develop and roll out applications. Ask: what are your standards? How do you develop code? How do you secure the code? How do you validate that the security is good?

Third, protect the data. Data is critical. It’s the currency of a digital world. Protecting data is critical in ensuring it doesn’t fall into the wrong hands.