
(lively music)

>> Announcer: From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2020, Virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners.

>> Hi, and welcome back to theCUBE's coverage of KubeCon and CloudNativeCon Europe 2020, the virtual event. I'm Stu Miniman, and I'm happy to welcome back to the program two of our CUBE alumni. We're going to be talking about storage in this Kubernetes and container world. First of all, we have Sam Werner, Vice President of Storage Offering Management at IBM, and joining him is Brent Compton, Senior Director of Storage and Data Architecture at Red Hat. Sam and Brent, thank you for joining us; we get to really dig in to the combined IBM and Red Hat activity in this space. Of course, both companies were very active in this space ahead of the acquisition, so we're excited to hear about what's going forward. Sam, maybe we could start with you as the tee-up. Both Red Hat and IBM have had their conferences this year, and we've heard quite a bit about how Red Hat, the solutions they've offered, and the open source activity are really a foundational layer for much of what IBM is doing. When it comes to storage, what does that mean today?

>> First of all, I'm really excited to be virtually at KubeCon this year, and I'm also really excited to be with my colleague Brent from Red Hat. I think this is the first time that IBM Storage and Red Hat Storage have been able to get together and really articulate what we're doing to help our customers in the context of Kubernetes, and also with OpenShift and the things we're doing there. So I think you'll find, as we talk today, that there's a lot of work we're doing to bring together the core capabilities of IBM Storage, which have been helping enterprises with their core applications for years, alongside the incredible open source capabilities being developed by Red Hat, and how we can bring those together to help customers continue moving forward with their initiatives around Kubernetes and rebuilding their applications to be develop-once, deploy-anywhere, which runs into quite a few challenges for storage. So Brent and I are excited to talk about all the great things we're doing and to share it with everybody else at KubeCon.

>> Yeah, so of course, containers when they first came out were for stateless environments, and we've seen this before, those of us who lived through the wave of virtualization. You have a first-generation solution and questions about which applications and environments it can be used for, but as we've seen the huge explosion of containers and Kubernetes, there's going to be a maturation of the stack, and storage is a critical component of that. So Brent, if you could bring us up to speed. You're steeped in this and have a long history in this space: what challenges are you hearing from customers, and where are we today in 2020?

>> Thanks, Stu. The most basic apps out there, I think, are just traditional databases: apps that have databases like Postgres, long-standing apps out there that have databases like DB2. Traditional apps that are moving towards a more agile environment, that's where we've seen our collaboration with IBM, in particular the DB2 team, as they've gone to a microservices, container-based architecture. We've seen a pull from the marketplace saying, you know, in addition to inventing new cloud-native apps, we want our tried, true, and tested apps, such as DB2, such as MQ, to have the benefits of a Red Hat OpenShift agile environment, and that's where the collaboration between our group and Sam's group comes in: providing the storage and data services for those stateful apps.

>> Great. Sam, IBM has been working with the storage administrator for a long time. What challenges are they facing when they go to the new architectures? Is it still the same people? Might there be a different part of the organization where you need to start in delivering these solutions?

>> It's a really good question, and it's interesting, because I do spend a lot of time with storage administrators and the people who are operating the IT infrastructure. What you'll find is that the decision maker isn't the IT operations or storage operations people.

These decisions about implementing Kubernetes and moving applications to these new environments are actually being driven by the business lines, which I guess is not so different from any other major technology shift. The storage administrators are now struggling to keep up. The business line would like to accelerate development; they want to move to a develop-once, deploy-anywhere model, so they start moving down the path to Kubernetes, and in order to do that, they start leveraging middleware and components that are containerized and easy to deploy. Then they turn to the IT infrastructure teams and ask them to support it, and when you talk to the storage administrators, they're trying to figure out how to do some of the basic things that are absolutely core to what they do: protecting the data in the event of a disaster or some kind of cyber attack, being able to recover the data, being able to keep the data safe, ensuring governance and privacy of the data. These things are difficult in any environment, but now you're moving to a completely new world, and the storage administrators have a tough challenge ahead of them. I think that's where IBM and Red Hat can really come together, with all of our experience and our very broad portfolio of enterprise-hardened storage capabilities, to help them move from their more traditional infrastructure to a Kubernetes environment.

>> All right, Brent, maybe you could bring us up to date. When we look back at OpenStack, Red Hat had a few projects from an open-first standpoint (audio distortion) to help bolster the open source storage world. In the container world, we saw some of those get ported over, there are some new projects, and there's been a little bit of argument as to the various ways to do storage. Of course, we know storage has never been a single solution; there are lots of different ways to do things. But where are we with the options out there? What's the recommendation from Red Hat and IBM as to how we should look at that?

>> I want to bridge your question to Sam's earlier comments about the challenges facing the storage admin. If we start with the word agility, what does agility mean in a data world? We're conscious of agility from an application development standpoint, and of course we've been used to the term DevOps, but if we use the term DataOps, what does that mean? In the past, for decades, when a developer or someone deploying in production wanted to create new storage or data resources, they typically filed a ticket and waited. In the agile world of OpenShift and Kubernetes, everything is self-service and on demand. But what kind of constraints and demands does that place on the storage and data infrastructure?
So now I'll come back to your question, Stu. Yes, at the time that Red Hat was very heavily into OpenStack, Red Hat acquired Ceph, well, acquired Inktank and a majority of the Ceph developers who were most active in the community, and Ceph became the de facto software-defined storage for OpenStack. But since the last time we spoke at KubeCon, the Rook project has become very popular in the CNCF as a way, effectively, to make software-defined storage systems like Ceph simple: the power of Ceph made simple by Rook inside of the OpenShift operator framework. People want the power that Ceph brings, but they want the simplicity of self-service on demand, and that's the fusion, the coming together of traditional software-defined storage with agility in a Kubernetes world: Rook, Ceph, OpenShift Container Storage.
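The self-service pattern Brent describes boils down to a developer claiming storage directly instead of filing a ticket. A minimal sketch with the Kubernetes Python client, assuming a Rook-Ceph-backed storage class named ocs-storagecluster-ceph-rbd; the class name, namespace, and size are illustrative assumptions, not details quoted by the speakers:

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# Claim 50 GiB of RWO block storage from the Rook-Ceph-backed class.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db2-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-storagecluster-ceph-rbd",  # illustrative class name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
print("PVC created; the Rook-Ceph provisioner binds it on demand")
```

The point of the pattern is that the provisioning happens on demand in the background, with no ticket and no wait for a storage administrator.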

>> Wonderful, and I wonder if we could take that a little bit further. A lot of the discussion these days, and I hear it every time I talk to IBM and Red Hat, is, "Our customers are using hybrid clouds." Obviously that has an impact on storage; moving data is not easy, there's a little bit of nuance there. So how do we go from what you were just talking about into a hybrid environment?

>> I guess I'll take that one to start, and Brent, please feel free to chime in on it. First of all, from an IBM perspective, you really have to start at a little bit higher level, at the middleware layer. IBM is bringing together all of our capabilities, everything from analytics and AI to application development and all of our middleware, and packaging them up in something we call Cloud Paks, which are pre-built catalogs of containerized capabilities that can be easily deployed in any OpenShift environment. That allows customers to build applications that can be deployed both on-premises and within public cloud, so in a hybrid, multi-cloud environment. Of course, when you build that sort of environment you need a storage and data layer that allows you to move those applications around freely, and that's where the IBM Storage Suite for Cloud Paks comes in. We've taken the core capabilities of the IBM software-defined storage portfolio, which give you everything you need for high-performance block storage, scale-out file storage, and object storage, and we've combined that with the capabilities we were just discussing from Red Hat, including OCS and Ceph, which allow a customer to create a common, agile, and automated storage environment both on-premises and in the cloud, giving consistent deployment and the ability to orchestrate the data to where it's needed.

>> I'll just add on to that. As Sam noted, and as probably most of you are aware, hybrid cloud is at the heart of the IBM acquisition of Red Hat. The stated intent of Red Hat OpenShift is to become the default operating environment for the hybrid cloud, so effectively, bring your own cloud wherever you run. That is at the very heart of the synergy between our companies, made manifest by the very large portfolios of software, much of which has been moved to run in containers and embodied inside of IBM Cloud Paks. So: IBM Cloud Paks, backed by Red Hat OpenShift, wherever you're running, on-premises or in a public cloud, and now with this Storage Suite for Cloud Paks that Sam referred to, also having a deterministic experience. That's one of the things we saw as we worked deeply with the IBM DB2 team, for instance. It was critical for them that their customers not have a completely different experience when they run on AWS than when they run on-premises, say on VMware or on bare metal; it was critical to the DB2 team to give their customers deterministic behavior wherever they ran.

>> Right. So Sam, I think any of our audience that have followed this space have heard Red Hat's story about OpenShift and how it lives across multiple cloud environments. I'm not sure everybody is familiar with how much of IBM's storage solutions today are really software-driven. If I think about IBM, it's like, "Okay, I can buy storage," or "yes, it can live in the IBM cloud," but from what I'm hearing from Brent and you, and what I know from previous discussions, this is independent, can live in multiple clouds leveraging this underlying technology, and can leverage the capabilities from those public cloud offerings. Is that right, Sam?
>> Yeah, that's right, and we have the most comprehensive portfolio of software-defined storage in the industry. Maybe to some it's a well-kept secret, but those that use it know the breadth of the portfolio. We have everything from the highest-performing scale-out file system to an object store that can scale into the exabytes. We have our block storage as well, which runs within the public clouds and can extend back to your private cloud environment. When we talk to customers about deploying storage for hybrid multi-cloud in a container environment, we give them a lot of paths to get there. We give them the ability to leverage their existing SAN infrastructure through CSI drivers, the Container Storage Interface; our whole physical on-prem infrastructure supports CSI today. And all the software that runs on our arrays also supports running on top of the public clouds, giving customers the ability to extend that existing SAN infrastructure into a cloud environment. And now, with Storage Suite for Cloud Paks, as Brent described earlier, we give you the ability to build a really agile infrastructure, leveraging the capabilities from Red Hat, that gives you a fully extensible environment and a common way of managing and deploying both on-prem and in the cloud. So we give you a journey with our portfolio to get from your existing infrastructure today, you don't have to throw it out, get started with that, and build out an environment that runs both on-prem and in the cloud.
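The CSI piece Sam mentions works by pointing a Kubernetes StorageClass at a CSI provisioner that fronts the existing array; after that, the same PersistentVolumeClaim workflow applies whether volumes land on an on-prem SAN or in a public cloud. A minimal sketch, in which the provisioner name and parameters are hypothetical placeholders rather than an actual IBM driver configuration:

```python
from kubernetes import client, config

config.load_kube_config()
storage_api = client.StorageV1Api()

# A StorageClass that delegates provisioning to a CSI driver fronting an
# existing array; the driver name and parameters below are illustrative only.
gold = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="block-gold"),
    provisioner="block.csi.example.com",   # hypothetical CSI driver name
    parameters={"pool": "gold_pool"},      # passed through to the driver as-is
    reclaim_policy="Delete",
    volume_binding_mode="Immediate",
    allow_volume_expansion=True,
)

storage_api.create_storage_class(body=gold)
```

With a class like this defined by the storage team, developers stay in the same self-service PVC flow shown earlier, regardless of what hardware or cloud sits underneath.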

>> Yeah, Brent, I'm glad that you started with databases, because it's not something that I think most people would associate with a Kubernetes environment. Do you have any customer examples you might be able to give, anonymized of course, of how those mission-critical applications can fit into the new modern architecture?

>> The big banks, I mean, full stop, the big banks. But what I'd add to that: that's frequently where they start, because applications based on structured data remain at the heart of a lot of enterprises, but I would say workload category number two is all things machine learning, analytics, and AI, and we're seeing an explosion of adoption within OpenShift, and of course Cloud Paks; IBM Cloud Pak for Data is a key market participant in that machine learning and analytics space, so an explosion of the usage of OpenShift for those types of workloads. I'll touch briefly on an example, going back to our data pipeline and how it started with databases, but it just explodes. For instance, data pipeline automation, where you have data coming into your apps that are Kubernetes-based, that are OpenShift-based, and that maybe ends up inside of Watson Studio inside of IBM Cloud Pak for Data. Along the way there are a variety of transformations that need to occur. Let's say you're at a big bank and, as the data comes in, you need to be able to run a CRC to attest that when you modify the data, for instance in a real-time processing pipeline, and pass it on to the next stage, there has been no tampering with the data. So that's an illustration of where it began: the basics, applications running with structured data, with databases. Where we're seeing the state of the industry today is tremendous use of these Kubernetes and OpenShift-based architectures for machine learning and analytics, made simpler by data pipeline automation through things like OpenShift Container Storage and OpenShift Serverless, where you have scalable functions and whatnot. So it began there, but boy, I tell you what, it's exploded since then.
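The attestation step Brent describes amounts to fingerprinting a record when it enters the pipeline and re-checking that fingerprint at every later stage. A minimal sketch, using a SHA-256 digest rather than a literal CRC and plain Python functions standing in for real pipeline stages on a Kafka bus; the field names are illustrative:

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Deterministic SHA-256 digest of a record's canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def ingest(record: dict) -> dict:
    # First stage: attach the digest before handing the record downstream
    # (for example, onto a Kafka topic).
    return {"payload": record, "digest": fingerprint(record)}

def verify(message: dict) -> dict:
    # Later stage: refuse to process a record whose payload no longer
    # matches the digest computed at ingest time.
    if fingerprint(message["payload"]) != message["digest"]:
        raise ValueError("integrity check failed: payload was modified in transit")
    return message["payload"]

if __name__ == "__main__":
    msg = ingest({"patient_id": 42, "study": "chest-xray-001"})
    print(verify(msg))  # passes untouched; tampering would raise ValueError
```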
>> Yeah, great to hear. Not only traditional applications but, as you said, so much interest and need for those new analytics use cases; that's absolutely where it's going. Sam, one other piece of the storage story, of course, is not just that we have stateful usage, but data protection. Talk about that, if you could, and how the things I think of traditionally, backup, restore, and the like, fit into the whole discussion we've been having.

>> When you talk to customers, it's honestly one of the biggest challenges they have in moving to containers: "How do I get the same level of data protection that I have today?" The environments are, in many cases, more complex from a data and storage perspective. You want to be able to take application-consistent copies of your data that can be recovered quickly and, in some cases, even reused; you can reuse the copies for dev tasks, for application migration, or for AI and analytics, there are lots of use cases for the data. But a lot of the tools and APIs are still very new in this space. IBM has made data protection for containers a top priority for our Spectrum Protect Suite, and we provide the capabilities to do application-aware snapshots of your storage environment, so that a Kubernetes developer can actually build in the resiliency they need as they build applications, and a storage administrator can get a single pane of glass with visibility into all of the data, ensure it's all being protected appropriately, and provide things like SLAs. I think it's about the fact that the early days of Kubernetes tended to be stateless. Now that people are moving some of their more mission-critical workloads, data protection becomes just as critical as anything else you do in the environment, so the tools have to catch up. That's a top priority of ours, and we provide a lot of those capabilities today; if you watch what we do with our Spectrum Protect Suite, we'll continue to provide the capabilities our customers need to move their mission-critical applications to a Kubernetes environment.
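The generic building block underneath the copies Sam describes is the CSI volume snapshot. A minimal sketch of requesting one with the Kubernetes Python client; the snapshot class and PVC names are assumptions for illustration, and the application-consistent quiescing that a backup product layers on top is not shown here:

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# VolumeSnapshot is a CSI-standard custom resource (snapshot.storage.k8s.io/v1).
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "db2-data-snap-001", "namespace": "demo"},
    "spec": {
        "volumeSnapshotClassName": "ceph-rbd-snapclass",  # illustrative class name
        "source": {"persistentVolumeClaimName": "db2-data"},
    },
}

custom.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="demo",
    plural="volumesnapshots",
    body=snapshot,
)
```

A resulting snapshot can be restored to a new PVC, which is what enables the reuse cases Sam mentions, such as cloning production data for dev tasks or analytics.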

>> All right. And Brent, one other question, looking forward a little bit. We've been talking for the last couple of years about how serverless can plug into this entire Kubernetes ecosystem, and the Knative project is one that IBM and Red Hat have been involved with. So for OpenShift and serverless, I'm sure leveraging Knative, what is the update today?

>> The update is effectively adoption, in a lot of cases at the big banks, but also at the largest companies in other industries as well. If you take the words "event-driven architecture," many of them are coming to us with that top of mind: the need to say, "I need to ensure that when data first hits my environment, I can't wait for a scheduled batch job to come along and process that data, and maybe run an inference." The classic case is you're ingesting a chest x-ray and you need to immediately run it against an inference model to determine if the patient has pneumonia or COVID-19, and then kick off another serverless function to anonymize the data and send it back in to retrain your model. You mentioned serverless, and of course people would say, "Well, I could handle that with really smart batch jobs." One of the other parts of serverless that people sometimes forget, but smart companies are aware of, is that serverless is inherently scalable, zero-to-N scalability. So as data is coming in, hitting your Kafka bus, hitting your object store, hitting your database, I don't know if you've picked up on the community project Debezium, where something hits your relational database and that can automatically trigger an event onto the Kafka bus, so that your entire architecture becomes event-driven.
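A scale-to-zero function in the architecture Brent describes is ultimately just an HTTP endpoint that a serverless platform such as Knative runs and an event source posts to. A minimal sketch of such a handler in stdlib Python; the event type names and the wiring to a Kafka- or Debezium-fed source are illustrative assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    """Minimal HTTP endpoint a scale-to-zero serverless revision could run.

    The platform scales instances from zero to N based on incoming traffic;
    an eventing source (for example, a Kafka topic fed by Debezium) would
    POST events here.
    """

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")

        # The 'ce-type' header follows the CloudEvents HTTP binding;
        # the event type below is an illustrative name.
        event_type = self.headers.get("ce-type", "unknown")
        if event_type == "org.example.image.ingested":
            result = {"action": "run-inference", "object": event.get("objectKey")}
        else:
            result = {"action": "ignored", "type": event_type}

        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), EventHandler).serve_forever()
```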
>> All right. Well, Sam, let me let you have the final word on IBM in this space and what you want people to take away from KubeCon 2020 Europe.

>> I'm actually going to talk to the storage administrators, if that's okay, because if you're not involved right now in the Kubernetes projects that are happening within your enterprise, they are happening, and there will be new challenges. You've made a lot of investments in your existing storage infrastructure. We at IBM and Red Hat can help you take advantage of the value of that existing infrastructure, the capabilities, the resiliency, the security you've built into it over the years, and we can help you move forward into a hybrid, multi-cloud environment built on containers. We've got the experience and the capabilities between Red Hat and IBM to help you be successful, because there are still a lot of challenges there, but our experience can help you implement it with the greatest success. I appreciate it.

>> All right, Sam and Brent, thank you so much for joining. It's been excellent to watch the maturation in this space over the last couple of years.

>> Thank you.

>> Thank you.

>> All right, we'll be back with lots more coverage from KubeCon CloudNativeCon Europe 2020, the virtual event. I'm Stu Miniman, and thank you for watching theCUBE.

(uplifting music)
