Container Management with CloudForms – Security & Compliance

This post is part 4 of our series on Container Management with CloudForms. It focuses on the security and compliance aspects of managing containerized environments.

In a container-based infrastructure, container software is often built directly by developers, usually via continuous integration and delivery (CI/CD). When it comes time to deploy this software in production, we need to make sure it is securely validated.
 
Another challenge is the source of those containers. Developers can use any base images for their builds, including insecure container images downloaded from the Internet. On the other hand, Enterprise IT needs to ensure all containers running in production are built based on trusted and approved sources.
 
And finally, it is also important to validate that all container images, as well as the containers instantiated from those images, are up to date with respect to security fixes.
 
CloudForms provides specific capabilities for managing security and compliance in container-based infrastructures.
 
It can enforce policies for container hosts and flag nodes that are not compliant (outdated versions, configuration issues, security risks, and so on). These policies take into account information about the container host itself, as well as about any resources connected to that host. If needed, CloudForms can trigger an action to start automatic remediation; for example, it could automatically trigger an update of a package when a new security fix is available.
 
CloudForms also provides reporting for container sources. For example, it can identify containers that come from untrusted registries.
 
Finally, it can scan the content of container images using OpenSCAP for standardized security checks. When an image is identified as non-compliant, all running containers instantiated from that image can be flagged automatically.
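To make the registry-trust policy concrete, here is a minimal sketch of the kind of check involved. The trusted-registry list, image names, and helper functions are hypothetical illustrations, not CloudForms internals:

```python
# Sketch of a registry-trust check, similar in spirit to the CloudForms
# policy described above. The trusted-registry set and image references
# below are made-up examples.

TRUSTED_REGISTRIES = {"registry.access.redhat.com", "registry.example.corp"}

def registry_of(image: str) -> str:
    """Return the registry portion of an image reference.

    An image with no '/' (e.g. 'nginx:latest') is an official Docker Hub
    image. Otherwise, the first path component is treated as a registry
    only if it looks like a hostname (contains '.' or ':') or is
    'localhost'; 'user/repo' forms still resolve to Docker Hub.
    """
    if "/" not in image:
        return "docker.io"
    first = image.split("/", 1)[0]
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"

def is_trusted(image: str) -> bool:
    """True if the image comes from an approved registry."""
    return registry_of(image) in TRUSTED_REGISTRIES

def flag_untrusted(images):
    """Return the subset of images that come from unknown registries."""
    return [img for img in images if not is_trusted(img)]
```

A policy engine would run a check like this against the inventoried images and mark the offenders, rather than blocking them outright.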
 
The following video demonstration highlights these capabilities in CloudForms:

Compliance with Enterprise Policies
Trusted Sources (flagging Unknown Registries)
Container Images Validation
Out-of-Compliance Containers

Quelle: CloudForms

Building your digital platform to capitalize in a multicloud environment

Recently I shared my thoughts on how to tackle digital transformation in a multicloud world. I talked about the business need for agility, reliability and a developer experience that is fast, simple and based on modern tools.  Today I want to share how you build the digital platform and how microservices play a key part in this transformation.
Why microservices and agile methods are core to the digital platform
Both are well suited to deal with projects that require a degree of uncertainty to be managed, along with a need to iterate quickly and be able to respond rapidly to changing requirements.  Agile methods help prioritize the work of teams, so they can quickly iterate and respond to change. Similarly, microservice architectures enable small teams to build and integrate new capabilities where each microservice operates on its own delivery schedule.  These two approaches help leaders build small responsive teams that can investigate, develop and deliver new capabilities independently, creating an environment that supports innovation.
Meeting developer expectations
My goal is to provide the next generation of technology that forms the foundation for enterprises to maintain their edge in the new digital economy. These technologies help development teams build cloud-native applications while using the skills they already have, such as their investment in Java development and WebSphere. By building on this established skill base, teams become productive faster: there is no need to retrain or hire new developers, or to spend time investigating and understanding new technologies, and new solutions can be built that integrate with and leverage the investment in existing applications.
I also recognize the value of open source projects, because developers can benefit from the collective experience and innovation that an open community provides.  This is why we have invested in the Eclipse MicroProfile project, alongside a host of other well-known open source industry contributors, to create an open community initiative to develop the capabilities required for building robust Java microservices and cloud-native applications.  The Eclipse MicroProfile project sets out to extend the existing broad range of Java EE capabilities for building microservices, and developers should be encouraged to participate in and define the future for Java microservices.
Delivering capabilities to help capitalize on a multicloud strategy
Meeting the expectations of developers and building capabilities as part of an open community are two principles I firmly believe in.  These two principles are at the heart of our decision to create the Open Liberty project – an open source server runtime built from the same Java EE and MicroProfile capabilities we provide in our commercial WebSphere Liberty portfolio, made available as open source on GitHub. I believe that the Open Liberty project provides an ideal foundation to build the next generation of cloud-native apps.
Having a strong foundation in Java microservices may not be enough to achieve the true value of a multicloud strategy. You also need a consistent approach to your software deployments irrespective of the target cloud.  This is why we also invested in creating Microservice Builder, an end-to-end delivery pipeline that helps you build and deploy containerized apps and microservices into the Kubernetes-based IBM Cloud and our new private cloud offering, IBM Cloud Private. With IBM Cloud Private, developers can run their next generation data and software with the inclusion of underlying containers, logging, auditing and encryption services. The architecture is compatible not only with IBM Cloud but also with most other public cloud platforms, supporting easy portability and integration across multicloud environments.
One of the best ways to learn and manage these cloud technologies and methods is to be hands-on with them, which is where the IBM Cloud Garage Method can help jump-start your learning process.  We provide the guidance and structure to help you learn how to approach new projects using Open Liberty and Microservice Builder or any other cloud technology you choose.  The Garage Method leads you from an initial evaluation to scalable and transformative solutions that will enable you to learn how to think about and build microservices using agile techniques.
I’ll leave you with this closing thought – the best way to be successful in today’s digital landscape is to be able to assemble the componentry that makes the most sense for your business.  Getting started to build Java microservices is easy using Open Liberty, and you can be confident that they will seamlessly integrate with your existing WebSphere apps, while putting you in charge of your own cloud journey.
The post Building your digital platform to capitalize in a multicloud environment appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Earth to IT…Integration is critical to your mobile strategy

(This post is part of a series. Read  part one and part two to learn more about the urgent need for Hybrid Integration.)
When I tweeted about my latest integration blog referencing Austin Powers International Man of Mystery, I encouraged people to take 7 minutes out of their days to read through it.  But let’s all be honest. You spent 7 minutes reading the blog post and then the next 35 minutes watching Austin Powers clips on YouTube.  You’re welcome.
One of my favorite parts of Zoolander, beyond the fact that he was not an ambiturner, was his cell phone.  Remember it?  It was about the size of a matchbox.  And beyond that, given that it was 2001, it was funny how attached he was to it.
Derek Zoolander: “Turn off my phone?”
Matilda: “Yeah.”
Derek Zoolander: “Earth to Matilda, this phone is as much a part of me as…”
Now it is 2017.  The question is, “How would you complete that sentence?”  Think about it.  How soon after you wake up in the morning do you check your phone?  How many times per day?  While it is the connectivity aspect rather than the phone itself, your phone is as much a part of you as…
That is why mobile has become such a critical aspect of not only marketing and sales, but of the entire customer experience.  And that means a mobile first strategy for any new system of engagement.  Whether it’s targeted ads for a free orange mocha frappuccino, a customer complaint that the new moisturizer isn’t delivering on the promise to keep them really, really, ridiculously good looking, or even the ability to collect donations for the center for kids who can’t read good and want to learn to do other stuff good too, mobile is the interactive media of choice.
From an IT perspective, making mobile the primary channel to engage with your customers is easy to make happen, right?  Is it as simple as connecting just another web application to your integration bus?  Uh…Earth to IT, all of those really cool mobile customer engagements the business is demanding are going to require you to re-evaluate how you do integration.  When we spoke to the experts at IDC they gave us some great pointers:

A security-first design is required for developers creating APIs between back-end services and mobile devices, as well as a build and test methodology to ensure corporate assets are being protected from unwanted access and use through mobile channels.
Cloud is the preferred location for back-end business logic and data services accessed by mobile devices, because it is “closer” to mobile devices and available whenever and wherever internet connectivity exists. Placing this logic in the cloud may require another layer of connectivity and replication to/from on-premises if a legacy application is part of the use case.
Production APIs must be managed to ensure the back-end services are performing adequately, and as systems are updated or the mobile app is modified, resulting changes are made to the APIs.
Session and state management is included in the API functionality, mobile applications and integration technology solutions.
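To make the first pointer, a security-first API design, a bit more tangible, here is a minimal sketch of validating an HMAC-signed request before it reaches back-end services. The secret, payload format, and function names are purely illustrative assumptions, not part of any IBM product:

```python
import hashlib
import hmac

# Illustrative only; a real deployment would load this from a key store.
SECRET = b"demo-shared-secret"

def sign(payload: str) -> str:
    """Create a hex HMAC-SHA256 signature for a request payload."""
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify(payload: str, signature: str) -> bool:
    """Constant-time check that a client's signature is valid.

    compare_digest avoids timing side channels that a naive '=='
    comparison would expose to mobile clients.
    """
    return hmac.compare_digest(sign(payload), signature)
```

In practice this kind of check lives in an API gateway or management layer, in front of the back-end services the mobile app calls.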

A mobile first approach requires more than just standard integration.  It requires a true understanding of and investment in hybrid integration.  How you connect, and how well you connect, on-premises applications to cloud applications to devices, is critical to your mobile strategy success.
If your organization is one that “can’t do mobile good and wants to learn how to do other stuff good too” then download the IDC Report The Urgent Need for Hybrid Integration or go to the IBM Integration website to learn more about IBM’s view on hybrid cloud integration.
Now I’m off for an orange mocha frappuccino.
The post Earth to IT…Integration is critical to your mobile strategy appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

IBM Cloud revenue jumps 20 percent in third quarter

Cloud now represents 20 percent of total IBM revenue, according to third-quarter earnings results released this week.
In the quarter, revenue from as-a-service offerings was up 25 percent year-over-year. Total cloud revenue jumped 20 percent over the third quarter of 2016, and over the trailing 12 months, was up by 25 percent, with a total revenue of $15.8 billion. That number includes $8.8 billion in as-a-service revenue and $7 billion for hardware, software and services for integrating cloud solutions across public, private and hybrid environments.
Clients who started new or stepped up existing cloud engagements with IBM over the quarter included Walgreens, Honeywell, the US Army and Atlanta’s Mercedes-Benz Stadium.
Learn more about IBM Cloud third-quarter earnings in the infographic below.

The post IBM Cloud revenue jumps 20 percent in third quarter appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

[Podcast] PodCTL #10 – Service Catalog all the Things

One of the most frequently discussed topics that we’ve had on the show is how Kubernetes has expanded the breadth of applications that can be containerized. This allows companies to run both existing applications and new (cloud-native, microservices) applications in containers, on Kubernetes. This week, we sat down with Paul Morie (@cheddarmint, Principal Software Engineer @RedHat, […]
Quelle: OpenShift

IBM expands partnership with Docker to drive apps to the public cloud

Today at DockerCon EU in Copenhagen, we’re sharing the news that IBM is expanding our relationship with Docker.
Together, IBM and Docker will be making it easier for clients to modernize their existing applications with Docker Enterprise Edition, combined with IBM Cloud, software and services.
Before we dive in, here’s a glimpse into the growth of our partnership, which is focused on three points:

Docker Enterprise Edition (EE) for IBM Cloud, which will allow customers to easily bring up a Docker environment to containerize their existing workloads and run them on IBM Cloud.
IBM’s participation in Docker’s Modernize Traditional Applications (MTA) program, so that we can help customers improve efficiency and agility by modernizing existing portfolios.
Certified IBM software will be available in the Docker Store, simplifying the containerization of existing software which uses IBM middleware.

For the past several years, we have collaborated with Docker to help customers realize the benefits of containers. Since then, we have expanded our work together to deliver the value of Docker and its container platform to customers in new, secure and powerful ways, whether they are running in the cloud or on IBM Systems, or a mixture of both.
Why is this important? As more companies look to migrate critical infrastructure and workloads to the cloud, they need to do so in a way that is efficient, secure, and cost effective. You may have already found that containers are the perfect solution, especially when moving from one computing environment, such as a physical machine in a data center, to a public cloud.
These new aspects of our partnership enable customers to use Docker containers to more quickly move their existing applications to the cloud. Then, they can easily extend them using IBM Cloud services to help them innovate faster, build more intelligent solutions and compete.
Docker EE for IBM Cloud
Our ongoing focus and ultimate mission is to help organizations more easily migrate their current data and workloads to the public cloud. Earlier this year, we announced Docker EE available for Linux on IBM z Systems, LinuxONE and Power Systems. Built for hybrid cloud environments, this offering pairs the agility of Docker containers with the speed and scale of our enterprise servers, which can support up to one million Docker containers on a single system.
Building on this success, IBM and Docker are now working side by side to deliver an edition of Docker EE, which is great for enterprises looking to shift their workloads to containers on the IBM public cloud.
Here’s one notable aspect of this new offering: once these existing workloads are transitioned to the IBM Cloud via Docker containers, enterprise teams will be able to rapidly connect and integrate them with the services that make the IBM public cloud so attractive to many. That means companies can take their monolithic applications and make them smarter with Watson, without having to change the original application.
Using Docker on IBM Cloud, a pet supply store can containerize its digital inventory and easily connect it into cloud services such as Watson Visual Recognition and Twilio. This would allow a user to snap an image of an item, such as pet food, from their mobile phone, and then quickly receive information from the store’s databases about the item details and cost.
Joining the Docker Modernize Traditional Apps program
There’s another layer to this announcement: IBM is also joining Docker’s Modernize Traditional Applications (MTA) Program. With IT operations teams in mind, the Docker MTA program partners with companies to design and embark on cloud and containers transformation projects, which fits well with our own priority to help more of our customers modernize their legacy systems with cloud.
We’ve worked with hundreds of companies across nearly every industry over the years through our Global Business and Technology Services teams, so we know that a vast majority need a way to easily make their existing legacy apps more secure, efficient and portable to both hybrid and public cloud platforms. Now that IBM is an official Docker MTA partner, we can make it easier for both our customers and Docker users to begin moving to a modern cloud architecture on IBM Cloud.
Driving IBM Certified Content in the Docker Store
It can be difficult for companies to take that first step of containerizing existing applications, which is why we are announcing that we are publishing official IBM software in the Docker Store, including WebSphere Application Server, WebSphere MQ, and IBM DB2 database.
This will enable customers to quickly access the software images needed for containerization, and gain confidence in those images through the promises of container certification.
Making the move to cloud
Our existing partnership with Docker has a lot of history behind it, and through our relationship, we have become one of their trusted support partners. Our deep collaboration has now allowed us to create a unified experience through single vendor support, a comprehensive cloud platform, and a managed services portfolio that can handle even the most complex customer needs.
But we’re not stopping there. This next chapter for us and Docker will focus on our enterprise-strong mission, one that truly aids the customer on their journey to public cloud.
Discuss your thoughts on this here.
The post IBM expands partnership with Docker to drive apps to the public cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Live Code Updates Using WebDAV

Take a look at how standard protocols can be used with OpenShift to handle file transfers and learn how to use the WebDAV protocol as an alternative for handling live code updates to a running application.
Quelle: OpenShift

Going bananas for the Hybrid Cloud Express shipping solution on IBM Bluemix Cloud

Agencia Marítima Turbaduana S.A.S., or Turbaduana for short, is the shipping agent for 100 percent of the bananas that leave Colombia.
Approximately 20 percent of these bananas arrive in the United States and the rest go to Europe. Turbaduana works with well-known brands, and chances are that if you’re American or European and have eaten a Colombian banana, it passed through Turbaduana. The company ships 15 vessels per week, and they arrive at their destinations around the clock.
Turbaduana needed a better way to effectively track and record the widely varying dates and deadlines for customs. If they didn’t, they risked paying a fine per infraction, which is levied as a percentage of the value of the cargo. The company wanted to streamline its processes to be better able to deal with the complexities of import and export transactions, including regulations and export codes.

IBM process management solution for shipping
Working with IBM partner ne Digital, Turbaduana built a process management solution to track and manage all of its shipping schedules and deadlines as well as the regulatory and compliance guidelines it must follow.
The solution is based on ne Digital’s Hybrid Cloud Express (HCE) for VMware in the IBM Cloud as the private cloud infrastructure foundation to run IBM business process manager, IBM enterprise content management products, IBM information management products and IBM Watson Analytics software.
HCE is an IBM Bluemix catalog-certified solution. With HCE, ne Digital offers the capabilities of a VMware hypervisor cluster on IBM Bluemix bare-metal machines at an approachable pricing model for a small- to medium-sized business. It’s possible to have a VMware cluster up and running in the IBM Cloud infrastructure within 10 days. This includes a development, testing and production environment and all the security and other best practices from both IBM and VMware, running virtualized networking with NSX.

Savings on IT costs and avoiding shipping fines
Turbaduana’s process management solution has reduced the time-to-invoice for services by 80 percent, from 15 to just three business days. The company has successfully avoided penalties and fines because business processes are more efficient and information flows through one central interface. The company estimates it will also save 33 percent on its IT infrastructure budget because there is less local infrastructure to set up and maintain.
The high reliability of the solution is also important for Turbaduana’s round-the-clock operations.
Read the case study to learn more.
The post Going bananas for the Hybrid Cloud Express shipping solution on IBM Bluemix Cloud appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

To containerize or not to containerize, that is the question, or Containers vs VMs: the eternal debate

Have you got containers on your mind?  Join us October 17 for Containers as a Service: It’s not just a buzzword anymore, a look at how to get the most out of containerized environments.
Here at Mirantis we do a lot of thinking about how to move traditional monolithic workloads to the cloud, and the first thing that we have to determine is not so much how to move a workload, but whether a workload should be moved at all. In this article, we’ll discuss some of the issues that you need to consider when making this decision about your own particular situation.
Although there are exceptions, moving your application to a cloud-based environment typically presents you with two basic options: virtual machines, or containers. Although in many cases the simplest solution seems to be to “lift and shift” the application into a VM and call it a day, that’s often not the best solution.
Let’s look at the different factors that can affect your decision.
The differences between containers and VMs
Before we talk about whether VMs or containers would ultimately be better for your project, it’s important to understand the differences between the two architectures.
Structure
A VM, or Virtual Machine, is exactly that: it’s an abstraction of an entire computer, from the operating system all the way down to memory and storage. The image from which a VM is built can represent just the operating system, on which applications can then be installed, or it can include all of the applications you need, such as a web server and database, and even your application itself.  Each VM is completely isolated from the host on which it runs, as well as any other VMs on that host.
Containers, on the other hand, are designed to occupy part of an existing machine, sharing the kernel of their host with any other containers running on the system and containing just enough of the operating system and any supporting libraries to run the required code. They’re built from images that include everything they need — and ideally, nothing else.
Resource requirements
Because of these differing structures, requirements for running VMs and containers can vary significantly. Because a VM is essentially an entire computer, it will naturally require more resources than a container, which involves just a minimal portion of the operating system. Therefore, in general, it’s less resource-intensive to scale containers, and you can “fit” more of them on a single server than VMs.
It’s important to note, however, that because multiple services can “share” the resources of a single VM, there may be edge cases were scaling the multiple containers necessary to replace a single VM could overshadow any resources savings. For example, if you were to decompose the functions of a single VM into, say, 50 different services, that’s 50 partial copies of the operating system versus one full copy.  So be sure to understand exactly what you’re getting into.
Security
The question of whether VMs or containers are more secure is a contentious one, and a complete discussion is well beyond the scope of this article, but let’s touch on some of the major themes.  (Interested in a whole blog post on the topic?  Let us know in the comments!)
While VMs are fairly strictly isolated from each other, containers share a kernel, so if one is compromised, others on that host may be in danger. What’s more, libcontainers, which is used by Docker to interact with Linux, touches five separate namespaces: Process, Network, Mount, Hostname, and Shared Memory. Each provides an opportunity for security issues.
In addition, former PTL of the OpenStack Magnum containers project Adrian Otto notes that “VMs have small attack surfaces, while in the Linux 3.19 kernel, there are no fewer than 397 system calls for containers.”
However, while VMs simply have a smaller attack surface than containers, you do need to consider the entire virtualization platform.  It’s not impossible to “break out” of a VM. Mirantis security expert Adam Heczko notes that “[The popular hypervisor] Qemu is affected by roughly 217 vulnerabilities so far, and there have been 3 VM escape attacks in the wild.  I’m not sure that VMs are more secure than containers; the threat model is just radically different, as the architecture is.”
Another aspect of security to consider is that while users typically create their own VM images that run the software they need, containers — specifically, Docker containers — are designed to build upon each other.
For example, let’s say you were creating a web-based search application. You might create a container image as follows:

Start with a minimal operating system, such as Alpine
Deploy a web server such as Nginx
Deploy a search application that runs on Nginx

The issue here is that while you can be fairly confident in the first two layers of this image — as long as you use the “official” references, that is — that last application is a mystery, unless you take the time to dig into it and find out what’s really there. Anybody can add an image to a repo and call it anything they want. So if your developers decide to grab an Nginx image with the search application they want already installed, unless they do some due diligence to make sure they’re getting what they think they’re getting, you could be inviting real problems into your datacenter.
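One practical form of that due diligence is pinning images to content digests rather than mutable tags, so you always get the exact bytes you vetted. Here is a minimal sketch of the idea; the function names are hypothetical, though the `sha256:` digest format matches how container images are actually addressed:

```python
import hashlib

def digest_of(content: bytes) -> str:
    """Compute the sha256 content digest, in the form used by image
    references (e.g. myregistry/app@sha256:...)."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

def verify_pinned(content: bytes, pinned_digest: str) -> bool:
    """Accept image content only if it matches the digest we pinned
    after auditing the image. Any tampering changes the digest."""
    return digest_of(content) == pinned_digest
```

Pulling by digest (`image@sha256:...`) instead of by tag gives the same guarantee at the registry level: a tag can be silently repointed, but a digest cannot.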
Now let’s look at the pros and cons of each architecture.
Pros and cons of VMs
While it’s fashionable to proclaim the death of VMs in favor of containers, the reality is that like most things in life, they have pluses and minuses.
Pros
The positive features of VMs include:

Complete abstraction of the system: Because all of the pieces of your application are running on the same “server” or servers, communication between them is straightforward, with no need for additional complicated networking.
No need to decompose the application: Because you’re running in an environment similar to a bare metal machine, there’s no need to alter the architecture of the application itself.
Run multiple applications at the same time: It’s common to run multiple applications on a single VM, simplifying management of the overall infrastructure.
Secure: Virtual machines have a long track record of use, and are considered to be fairly secure, providing isolation with a fairly small attack surface, though you should keep in mind the caveats in the “Security” section above (and below).
Diverse operating systems available: Within a hypervisor, you can use virtually any operating system, so you can run multiple operating systems on a single physical server.

Cons

They can be big: Because they include so much, VMs can be large, both in terms of images required to define them, and in terms of resources needed to run them.
They can be slow to start: Starting a VM is the same as starting a computer; it can take some time. If you’re just starting it once and letting it run for a few weeks or months or years, this may not be an issue.  But if you’re dealing with a process that must be constantly spun up, this latency can definitely be a problem.
They can be slow to run: due to the fact that it’s essentially emulating a computer within a computer, applications running on VMs are often not as performant as those running on bare metal.
They can’t be nested (easily): While it is possible to run a VM within another VM under some circumstances, it’s not always an option. What’s more, when it is an option, the performance penalty can be substantial.
They need careful security configuration: The platform that hosts your VMs needs to be carefully analyzed and configured to prevent potential security problems due to security domains bridging, or components that span multiple security domains, such as public and management, or management and data.

Pros and cons of containers
Just as the death of containers has been overstated, containers aren’t universally great.  Let’s look at the pros and cons here, as well.
Pros

Relatively small size: Because containers share the host’s kernel, include only the absolutely necessary operating system and library components, and (generally should) limit themselves to a single function, they tend to be very small.
Fast: Because they are small, they can start in a matter of seconds, or even less, making them useful for applications that need to be repeatedly spun up and down, such as so-called “serverless” applications.
CI/CD: Containers are made to start and restart frequently, so it’s easy to pick up changes.
Portable: Because they’re self-contained, containers can be moved between machines with relative ease, as long as the correct kernel is in place.
Lifecycle and delivery model: The structure of the containerized lifecycle makes it easier to incorporate advanced features such as vulnerability assessments and image registry signing.

Cons

They can require complicated networking: Because functions are (ideally) broken out into multiple containers, these containers need to communicate with each other to get anything done. But because containers are not a single unit, they have to communicate with each other. Some orchestration systems such as Kubernetes have higher level units such as multi-container pods that make this a little easier, but it’s still more complex than using VMs. That said, Adam Heczko adds, “Actually, the L3 networking model in Kubernetes is much simpler than the L2 model in OpenStack.”  So the amount of work you’re going to need to do on networking depends on whether you’re looking at communicating between functions or between VMs.
They can be less secure: As I mentioned above, containers are still young and they’re still not considered to be quite as secure as VMs, for a number of different reasons, but your mileage may vary here.
They can require more work upfront: If you’re using containers right, you will have decomposed your application into its various constituent services, which, while beneficial, isn’t necessary if you are using VMs.
They can be unreliable: While this sounds negative, containers are generally designed for cloud native computing, which assumes that any component can die at any time; you will need to ensure that your application is properly architected for this eventuality.
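Architecting for that eventuality usually means clients retry failed calls rather than assuming a peer stays up. Here is a minimal sketch of retrying with exponential backoff; the flaky service call is a stand-in assumption, not a real API:

```python
import time

def call_with_retry(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn(), retrying with exponential backoff if it raises.

    In a cloud-native application, fn would be a network call to
    another container that may have been rescheduled or restarted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * (2 ** attempt))
```

Orchestrators like Kubernetes restart failed containers for you, but the calling side still needs this kind of tolerance for the window while a peer is down.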

Making your decision
In the end, the decision about whether to use containers or VMs is the same as most other IT decisions: “it depends”.  
If you’re basically doing a “lift and shift” of your application, you may be better off simply moving it t a VM, where it will experience the least disruption. If you’re creating a new application from scratch, you’re probably better off starting with containers.
Fortunately, you don’t have to make a hard-and-fast decision; even if you’re starting with a 30 year old monolith, you can always move it to a VM to start with, and then gradually decompose and containerize its various components.
More on that next time.
The post To containerize or not to containerize, that is the question, or Containers vs VMs: the eternal debate appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis