Red Hat, the open source juggernaut known for its enterprise-grade Linux distribution and, more recently, its OpenShift container application platform, undertook a leadership change in July 2022 when it appointed Matt Hicks as president and CEO.
Hicks, who previously served as Red Hat’s executive vice-president of products and technologies, took over the top job from Paul Cormier, who now serves as chairman of the company.
In a wide-ranging interview with Computer Weekly in Asia-Pacific (APAC), the newly minted CEO said he hopes to continue building on Red Hat’s core open source model and tap new opportunities in edge computing with OpenShift as the underlying technology platform.
Having taken over as Red Hat CEO recently, could you tell us more about how you’d like to take the company forward?
Hicks: I’ve been at Red Hat for a long time and what drew me to Red Hat was its core open source model, which is unique and empowering. I distil it down to two fundamental things. One, we genuinely want to innovate and evolve on the shoulders of giants because there are thousands of creative minds across the world who are building and contributing to the products that we refine.
The second piece is that customers also have access to the code, and they understand what we’re doing. They can see our roadmaps, and our ability to innovate and co-create with them is unique. Those two things go back a long time and make us special. For me, that’s the core mentality we want to hold on to at Red Hat because that’s what differentiates us in the industry.
In terms of where we want to go with that open source model, we’ve talked about the open hybrid cloud for quite a while because we think customers are going to get the best in terms of being able to run what they have today, as well as where they want to be tomorrow. We want to help customers be productive in cloud and on-premise, and use the best that those environments offer, whether it’s from regional providers or hyperscalers, as well as specialised hardware. We see hybrid cloud as a trillion-dollar opportunity, with just 25% of workloads having moved to the cloud today.
Potentially, there are more exciting opportunities with the extension to edge. We’re seeing this accelerate with technologies like 5G, where you still need to have computing reach and move workloads closer to users while pushing technologies like AI [artificial intelligence] at the point of interaction with users.
So, it’s going from the on-premise excellence we have today, extending that reach into public cloud and eventually into edge use cases. That’s Red Hat’s three to five year challenge and opportunity which we are addressing with the same strategy of open source based innovation that we’ve had in the past.
Against the backdrop of what you’ve just described, what is your outlook for APAC, given that the region is very diverse with varying maturities in adopting cloud and open-source technologies?
Hicks: If we look at APAC as a market, I think the fundamentals of using software to drive digital transformation and innovation are key, and that could be for a lot of reasons. It could be controlling costs due to inflation. It could be tighter labour markets, where we need to drive automation. It could be adjusting to the Covid-19 situation where you might not be able to access workers. And I think for all of these reasons, we’ve seen the drive to software innovation in APAC, similar to the other markets.
DBS Bank is a good example in Singapore. They pride themselves on driving innovation and by using OpenShift and adopting open source and cloud technologies, they were able to cut operating costs by about 80%. But they are not just trying to cut costs, they also want to push innovation and I think that’s very similar to other customers we have across the globe.
Kasikorn Business Technology Group in Thailand has a very similar approach, where they’re using technologies like OpenShift to cut development times from a month to two weeks while increasing scale. Another example is Tsingtao Alana, which is using Ansible to drive network automation and improve efficiencies.
Like other regions, the core theme of using software innovation and getting more comfortable with open source and leveraging cloud technologies is similar in APAC. But one area where we might see an acceleration in APAC – more so than in the US – is the push to edge technologies driven by the innovation from telcos.
You spoke a lot about OpenShift, which has been a priority for Red Hat for a number of years. Moving forward, what’s the balance in priorities between OpenShift and Red Hat Enterprise Linux (RHEL), which Red Hat is best known for among many companies in APAC?
Hicks: It’s a great question and here’s how I tend to explain that to customers that are new to the balance between OpenShift and RHEL.
The core innovation capability that RHEL provides on a single server is still the foundation that we build on. It’s done really well for decades, for being able to provide that link to open source innovation in the operating system space. I call it the Rosetta Stone between development and hardware – and being able to get the most out of that is what we aspire to do with RHEL.
That said, if you look at what modern applications need – and I’ve been in this space for over 20 years – they far exceed the resources of a single computer today. And in many cases, they far exceed the resources of a dozen, a hundred or a thousand computers. OpenShift is like going from a single bee to a swarm of bees, which gives you all the innovation in RHEL and lets you operate hundreds or thousands of those machines as a single unit so you can build a new class of applications.
So, RHEL is part and parcel of OpenShift, but it’s not a single-server model anymore. It’s that distributed computing model. For me, that’s exciting because I started my open source journey with Linux and then with RHEL when I was in consulting. Since then, the power of RHEL has expanded across datacentres and helps you drive some incredible innovation. That’s why the pull to OpenShift doesn’t really change our investment footprint as RHEL offers a great model to leverage all of those servers more efficiently.
Could you dive deeper into the product roadmap for OpenShift? Over the years, OpenShift has been building up more capabilities, including SaaS [software-as-a-service] based services for data science, for example. Are we expecting more SaaS applications in the future?
Hicks: When we think about OpenShift, or platforms in general, we try to focus on the types of workloads that customers are using with them and how we can help make that work easier.
One of the popular trends is AI-based workloads, and that comes down to the training aspects of it, which require GPU rather than CPU acceleration. Being able to take trained models and incorporate them into traditional development are things that companies struggle with. So, getting your Nvidia GPUs to work with your stack, and then getting your data scientists and developers working together, is our goal with OpenShift Data Science.
We know hardware enablement, we have a great platform to leverage both training and deployment, and we know developers and data scientists, so that MLOps space is a very natural fit. What you will see more of from us across the portfolio is a shift in what we call the operating model: for decades, the prevalent model in the industry was customers running their own software, supplied and supported by us.
The public cloud has changed some of the expectations around that. While there’s still going to be a ton of software run by customers, they are also increasingly leveraging managed platforms and cloud services. So, once we know the workloads that we need to get to, we will try to offer that in multiple models where customers can run the software themselves if they have a unique use case.
But at the same time, we want to improve our ability to run that software for them. One area where you’ll see a lot of innovation is managed services, in addition to the software and edge components.
If you look at telcos, for example, they run big datacentres with lots of layers in between where the technology stack gets smaller and smaller. They also have embedded devices, which may have RHEL on them even if they are running containers. In the middle, we’re seeing a pull for OpenShift to get smaller and smaller. You can think of it as the telephone pole use case for 5G or maybe it’s closer to the metropolitan base station that runs MicroShift, a flavour of OpenShift optimised for the device edge.
That ability to run OpenShift on lightweight hardware is key as edge devices don’t have the same power and compute capabilities of a datacentre. So, those areas, coupled with specific use cases like AI or distributed networking based applications, is where you’ll see a lot of the innovation around OpenShift.
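As a rough illustration of the device-edge model Hicks describes, MicroShift is packaged to install directly on a RHEL edge device and then exposes a standard Kubernetes API. The commands below are a hedged sketch, not exact instructions; package availability, repository names and the kubeconfig path vary by RHEL and OpenShift version, so consult Red Hat's MicroShift documentation for your release.

```shell
# Sketch: installing MicroShift on a RHEL edge device (repository setup
# for your RHEL/OpenShift version is assumed to be done already).
sudo dnf install -y microshift
sudo systemctl enable --now microshift

# MicroShift speaks the standard Kubernetes API, so familiar tooling works.
# The kubeconfig path below is an assumption based on default installs.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc get pods -A
```

The point of the sketch is the footprint: the same container workflow used in a datacentre OpenShift cluster applies on a single small device, which is what makes the "telephone pole" and base-station use cases plausible.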
Red Hat has done some security work in OpenShift to support DevSecOps processes. I understand that currently there aren’t any software bill of materials (SBOM) capabilities embedded within OpenShift. What are your thoughts around that?
Hicks: If we picked one of the most important security trends that we try to cater to, it is understanding your supply chain and being confident in the security of it. Arguably, this is what we do – we take open source, where you might not have that understanding of its provenance or the expertise to understand it, and add a layer of provenance so you know where it’s coming from.
I would argue that for the last 20 years, whether it was the driving decision or not, you are subscribing to security in your supply chain if you are a Red Hat customer. And we’re excited about efforts around how you build that bill of materials when you’re not only running Red Hat software but also combining Red Hat software with other things.
There are a few different approaches, and this is always Red Hat’s challenge: when we make a bet, we have to stick with it for a while. We’re involved in practically every SBOM effort at this point, but when we make that final choice, we want to make sure it’s the most applicable choice at the time.
So, while we haven’t pulled the trigger on a single approach or said what we will support, the core foundation behind SBOM is absolutely critical and we invest a lot there. We’re excited about this and honestly, before the SolarWinds incident, the risk of consuming software you don’t understand was largely overlooked.
With open source continuing to drive innovation, I think it’s critical for customers to understand where they’re getting that open source code from, whether it’s tied to vendors or whether they’re responsible for understanding it themselves. But we haven’t made that final call on the SBOM format to support right now. I fully expect, in the next year or so, that we start to converge as an industry on a couple of approaches.
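To make the SBOM idea concrete: an SBOM is essentially a machine-readable inventory of the components inside a piece of software, so consumers can trace provenance. The sketch below parses a minimal, hand-written SPDX-style JSON fragment using only Python's standard library; the document content is illustrative and not output from any real Red Hat tooling.

```python
import json

# Illustrative SPDX-style SBOM fragment. Real SBOMs (e.g. produced by
# scanners such as Syft) carry far more metadata: licences, checksums,
# relationships between packages, and so on.
sbom_json = """
{
  "spdxVersion": "SPDX-2.3",
  "name": "example-container-image",
  "packages": [
    {"name": "openssl", "versionInfo": "3.0.7"},
    {"name": "glibc", "versionInfo": "2.34"}
  ]
}
"""

def list_components(doc: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every package in an SPDX-style SBOM."""
    sbom = json.loads(doc)
    return [(p["name"], p.get("versionInfo", "unknown")) for p in sbom["packages"]]

for name, version in list_components(sbom_json):
    print(f"{name} {version}")
```

An inventory like this is what lets an operator answer "which of our images ship openssl 3.0.7?" quickly after a vulnerability disclosure, which is the supply-chain confidence Hicks is describing.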
What are your thoughts on the competitive landscape, particularly around VMware with its Tanzu Application Platform?
Hicks: It’s really about choosing the right technology architecture to get the most out of hybrid cloud. About a year ago, most customers were drawn to a single public cloud and that trend was certainly strong, at least in the US and Europe, for a variety of reasons.
I think enterprises have realised that they might still have that desire, but it’s not practical for them. They’re going to end up in multiple public clouds, maybe through acquisition or geopolitical challenges. And your on-premise environments, whether it’s mainframe technology or others, are not going away quickly. So, the need for hybrid has become much more recognised today than it was even a year or two ago.
The second piece on that is, what is the technology platform that enterprises are going to leverage to build and structure their application footprint for hybrid? VMware certainly has their traditional investment in virtualisation and the topology around that.
We at Red Hat, along with IBM, have put our bet on containers. VMware, I think, was something of a late entrant to that party with Tanzu. For us, our core is innovation in Linux, and containers are an extension of that. We’re pretty comfortable with that and we see a lot of traction because all the hyperscalers have adopted that model.
Personally, I think we have a great position on a technology that lets customers leverage public clouds natively and get the most out of their on-premise environments. I don’t know if virtualisation will have that same reach and flexibility of being able to run on the International Space Station, as well as power DBS Bank’s financial transactions as containers do.
VMware, I think, will be more drawn to their core strength in virtualisation, but 75% of workloads have yet to move, so we’ll see how that really shakes out. But I’m pretty comfortable with the containers and OpenShift bet on our side.
Red Hat has a strategic partnership with Nutanix to deliver open hybrid cloud solutions. In light of the uncertainty around Broadcom’s acquisition of VMware, are you seeing more interest from VMware customers?
Hicks: Acquisitions are tricky and it’s hard to predict the outcome of an acquisition like that. What I would say is we partner pretty deeply with VMware today as virtualisation still provides a good operating model for containers. I would expect us to partner with VMware as part of Broadcom.
That said, there’s a bit of uncertainty in an area like this and it does create a decision point around architecture. We’re neutral to that because for us, if customers choose to stay on that core vSphere base, we will continue to serve them, even if containers are their technology going forward.
We also partner closely with companies like Nutanix which will compete at that core layer. For us, we really run on the infrastructure tier, and we want to let customers run applications whether they are on Nutanix, vSphere or Amazon EC2.
So, we don’t really care too much where that substrate lies. We want to make sure we serve customers at that decision point, and I think we have a lot of options to deliver to customers regardless of how the acquisition ends or how the landscape changes with other partners.