How DevOps tools make applications go faster and farther

The term DevOps has been in existence for at least a decade now. In tech years, a decade is a long time.
It’s hard to measure how pervasive the DevOps model has become in that decade, but it’s safe to say that most, if not all, development and operations teams know about it, and many of those teams use at least some of its tenets.
In the spirit of continuous improvement, practitioners turn to DevOps tools to improve their work. These tools help teams overcome the challenges that often come with accelerated release cycles, while also helping them achieve greater speed, quality and control.
Challenges of DevOps
Some have said the essence of DevOps is “building cool things faster,” but that’s an oversimplification that overlooks the complexity of mastering the practice. Even teams that have followed DevOps principles for years occasionally falter in their execution.
Some of these missteps are all too common:

Component versioning and tracking. Software products are complex and contain multiple components, so manually tracking them is impractical. DevOps tools can map components, making continuous deployment simpler and reducing build errors.
Environment variability. Software and apps are tested in different environments — dev, test and production — and each of these can be configured differently. Tracking apps to environments is complex, and mapping environments to build versions is also complicated. Tools that track build versions and map them to environment parameters help alleviate this issue.
Continued reliance on manual processes and infrastructure deployment. Human intervention leads to human error. It also prevents repeatability, which is essential for continuous iteration. When there are hundreds of environments at play, virtualization offers the advantages of speed, cost and flexibility. Tools that automate processes and deployments play a core role in DevOps.
Lack of consideration for scalability. DevOps projects usually start small. Unfortunately, many teams proceed as if the project will stay small. In initial design and testing, consider how to make processes scale or even if they can scale. If your project can’t scale, you haven’t designed it correctly.

Tools that make DevOps execution easier
Automation solves some of the most common versioning and tracking issues. Continuous delivery provides an integrated set of tools that support app delivery, allowing developers to automate builds, tests and deployments. The open architecture of a continuous delivery tool such as the one from IBM allows integration of open source and third-party tools to make DevOps processes repeatable and easy to manage. These solutions also provide a delivery pipeline that sequences the stages of building, testing and deployment. They can also release updates into production.
Automated application deployment tools such as UrbanCode Deploy enable workflows that usher builds from one stage and environment to the next. By automating application release and deployment to distributed data centers, clouds and virtualized environments, UrbanCode Deploy reduces the chance of deployment failure and delivers higher-quality releases. It serves this function without sacrificing speed, so software delivery can occur at a faster clip. Those processes can apply across dev, test and production environments.
There are also analytics tools such as DevOps Insights that can help improve other DevOps processes and solutions. These tools offer greater deployment quality, enhanced delivery control and even faster speed to market. DevOps Insights contains deployment risk analysis, too, analyzing results from unit and functional tests to prevent the release of risky updates. The tool also analyzes team dynamics through social coding. It learns how a team collaborates, then reveals ways it can work better. Working with IBM Continuous Delivery, the solution also integrates with other continuous integration/continuous delivery (CI/CD) platforms, such as Jenkins and other third-party tools.
The purpose of DevOps is to make software and apps better, but part of that improvement means making the processes and teams better as well. DevOps may have its challenges, but the right tools provide a path to overcoming them.
Applications are the core of DevOps. Application performance management (APM) helps DevOps teams streamline the testing, deployment and management of their applications.
Register to download APM for Dummies to discover best APM practices.
The post How DevOps tools make applications go faster and farther appeared first on Cloud computing news.
Source: Thoughts on Cloud

Bringing AI and esports together with Watson on cloud

Whether players are physically on the field or controlling virtual avatars in digital settings, IBM Watson is helping sports fans catch the most breathtaking moments.
At the 2019 Game Developers Conference this week, IBM showcased how it’s using artificial intelligence (AI) on the IBM Cloud to make esports fan experiences better and improve player performance.
“The cloud is the backbone for scalability and better data portability,” according to an IBM statement to Variety. “To stay competitive, online gaming requires an ability to process complex graphic workloads while scaling without interruption. Game developers are increasingly turning to cloud technologies to enable them to build and deploy high-performance gaming platforms across the globe with the low latency players require.”
As it does for the US Open tennis tournament, Watson AI scans through hundreds of hours of esports video to create dynamic highlight reels in real time. Those clips are then fed to so-called “shoutcasters” — people who commentate on the games in live video feeds — through tablets so they don’t miss any of the best action.
As IBM stated, “feeding important information to live commentators and the IBM Cloud is increasing global scale for developers and publishers.”
Learn more about how IBM is bringing AI on the cloud to esports in the full article at Variety.
The post Bringing AI and esports together with Watson on cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

A Self-Hosted Global Load Balancer for OpenShift

Introduction: This is the fifth installment in a series of blog posts on deploying OpenShift in multi-cluster configurations. In the first two posts (part 1 and part 2), we explored how to create a network tunnel between multiple clusters. The third post demonstrated how to deploy Istio multicluster across multiple clusters […]
The post A Self-Hosted Global Load Balancer for OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

IBM is partnering with AWS, Azure and GCP to unleash the power of cloud

According to a recent study, 98 percent of enterprises surveyed plan to use multiple hybrid clouds within three years. These may include multiple cloud providers such as IBM Cloud, Amazon Web Services (AWS), Azure and Google Cloud Platform (GCP), as well as traditional on-premises environments.
The IBM approach is to meet you wherever you are on your cloud journey and support your choice of cloud.
While every enterprise has unique business and IT requirements, we often find customers in one of these two scenarios:
Scenario 1
An organization is using a public cloud and wants to expand workloads and applications to other clouds. However, leaders need to know that there are minimal risks of building and managing on those clouds.
Each cloud provider or Kubernetes solution comes with its own tools and operations, so managing a multicloud Kubernetes environment can be overwhelming. Business leaders, developers, site reliability engineers (SREs) and IT operations can use a single, integrated dashboard to manage applications and resources across traditional and multicloud environments in ways that deliver visibility, governance and automation.
Scenario 2
An organization wants a provider that helps it modernize and innovate faster.
Developing game-changing applications still requires being able to modernize and make use of existing investments. IBM supports an open ecosystem with consistent management of a full application lifecycle. Because the IBM Cloud management framework is open and supports AWS, Azure and GCP, enterprises can modernize and innovate faster and with ease.
For example, IBM middleware investments are further extended with an application modernization strategy that can span public clouds. Application workloads on public clouds may also benefit from IBM middleware that implements data analytics services.
Organizations can continue deploying these products in their environments by using IBM Cloud Private on AWS, Azure or GCP. This approach offers the best of both worlds. 
Demand choice and drive innovation
Even if you already have an established cloud strategy that includes the use of a public cloud provider such as AWS, Azure, GCP or IBM Cloud, IBM will meet you where you are and ensure that you have a choice of cloud models: public, dedicated, private and managed. It’s vital that you can run your workloads and develop amazing applications on the model that fits your needs.
Learn about the solutions and samples in each architecture that provide a roadmap to build, extend and deploy an application.

Schedule a no-charge consultation and meet with an IBM Cloud expert to discuss your team’s unique cloud journey.
The post IBM is partnering with AWS, Azure and GCP to unleash the power of cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

What is event streaming? The next step for business

As data proliferates across business types, event streaming is becoming ever more important. But what is event streaming? A simple way to think about it is that event streaming enables companies to analyze data that pertains to an event and respond to that event in real time.
Currently, in all markets, everything is being driven by events. For example, when a business transaction takes place, such as a customer placing an order or a deposit being added to a bank account, that is an event that drives a next step. With customers looking for responsive experiences when they interact with companies, being able to make real-time decisions based on an event becomes critical.
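The order-and-deposit examples above can be sketched as a tiny dispatcher that reacts to each event the moment it arrives, rather than batching data for later analysis. This is plain Python for illustration only; the event shapes and handler names are invented, not any particular product’s API:

```python
# Minimal event-driven processing: each incoming business event is
# dispatched to its handler as soon as it arrives. Event shapes and
# handler names are invented for illustration.

def handle_order_placed(event):
    return f"reserve inventory for order {event['order_id']}"

def handle_deposit_made(event):
    return f"update balance for account {event['account']}"

HANDLERS = {
    "order_placed": handle_order_placed,
    "deposit_made": handle_deposit_made,
}

def process(stream):
    """Dispatch each event to its handler in arrival order."""
    return [HANDLERS[event["type"]](event) for event in stream]

events = [
    {"type": "order_placed", "order_id": 42},
    {"type": "deposit_made", "account": "A-7"},
]
print(process(events))
```

In a real deployment, the stream would come from a platform such as Apache Kafka instead of an in-memory list, but the pattern of reacting per event is the same.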
Top 3 business uses of event streaming
The vast number of business events creates an incredible amount of data, which can make real-time decisions difficult. Companies must gain reliable insights that can lead to quick decisions and enhanced customer experiences. Event streaming can help.
Here are the top three reasons event streaming is important for businesses today:
1. Using unused data.
Businesses have massive amounts of data everywhere.
For example, manufacturing companies have data on machine failures, time to completion, capacity peaks and flows, consumption data and more. Airlines have information on customer wait times, plane delays, maintenance records and ticket purchasing patterns, along with many other sources of data. Right now, much of this data sits collecting dust. Organizations can put that data to work.
2. Taking advantage of real-time data insights.
One of the key tenets of event streaming is real-time insight and the ability to react to it. Say a customer is shopping online for a new TV, but it’s out of stock. It does the retailer no good to get insights on that data a week later. The customer has already gone somewhere else.
Companies should be able to take advantage of real-time insights. For example, if a customer shops at a specific store often, location data from cell phone traffic or public Wi-Fi can enable the store to send a targeted ad or coupon based on the customer’s location.
3. Creating better and more engaging customer experiences.
When a business puts the influx of data and real-time data insights together, there’s an opportunity to create better and more engaging experiences for customers.
With all the choices that customers have these days, winning hearts and ultimately business not only means having the greatest product, but also having the best and most engaging customer experience possible. By responding to situations as they are detected, companies can create new ways of engaging with their customers, increasing customer sentiment.
Consider the example of an airline. When flights are canceled or delayed, customer service agents and desk agents are flooded with an influx of unhappy flyers. With event streaming capabilities, employees can see the event, in this case a canceled flight, and react to it in real time by rebooking passengers with similar itineraries, therefore creating a better customer experience.
All of this and more can be done through event streaming.
Apache Kafka and event streaming tools
Right now, the most prevalent and popular tool for event streaming is Apache Kafka. This tool allows users to send, store and request data when and where they need it. That’s where IBM Event Streams becomes helpful. Event Streams works with Apache Kafka to make it repeatable, scalable and consistent, with a simple three-click deployment model. Since Kafka is an open source solution, it’s constantly evolving, and users want to stay on the latest version. However, enterprises cannot simply turn off their event streaming capabilities when they want to upgrade to the latest version of Apache Kafka. With IBM Event Streams, users can upgrade with zero downtime.
If you’re in the New York area on 2 April or the London area on 13 and 14 May, join us at Kafka Summit New York or Kafka Summit London to have a conversation with an event streaming expert.
You can also explore more about event streaming by visiting the IBM Event Streams website.
The post What is event streaming? The next step for business appeared first on Cloud computing news.
Source: Thoughts on Cloud

Introduction to Kustomize, Part 2: Overriding values with overlays

In part 1 of this tutorial, we looked at how to use Kustomize to combine multiple pieces into a single YAML file that can be deployed to Kubernetes. In doing that, we used the example of combining specs for WordPress and MySQL, automatically adding a common app label. Now we’re going to move on and look at what happens when we need to override some of the existing values that aren’t labels.
Curious about what else is new in Kubernetes 1.14 (besides integration of Kustomize)? Join us for a live webinar on March 21.
Changing parameters for a component using Kustomize overlays
Now, we’re almost ready, but we do have one more problem.  While we’re deploying our production system to a cloud provider that supports LoadBalancer, we’re developing on our laptop so we need our services to be of type: NodePort.  Fortunately we can solve this problem with overlays.
Overlays enable us to take the base YAML and selectively change pieces of it.  For example, we’re going to create an overlay that includes a patch to change the Services to NodePort type services.
It’s important that the overlay isn’t in the same directory as the base files, so we’ll create it in an adjacent directory, then add a dev subdirectory.
OVERLAY_HOME=$BASE/../overlays
mkdir $OVERLAY_HOME
DEV_HOME=$OVERLAY_HOME/dev
mkdir $DEV_HOME
cd $DEV_HOME
Next we want to create the patch file, $DEV_HOME/localserv.yaml:
apiVersion: v1
kind: Service
metadata:
 name: wordpress
spec:
 type: NodePort
---
apiVersion: v1
kind: Service
metadata:
 name: mysql
spec:
 type: NodePort
Notice that we’ve included the bare minimum of information here; just enough to identify each service we want to change, and then specify the change that we want to make — in this case, the type.
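That “bare minimum” behavior is the essence of a strategic merge patch: fields present in the patch override the matching fields in the base, and everything else is left untouched. Here is a rough sketch of the merge rule in Python (a simplification for illustration; real strategic merge also handles list merge keys and $patch directives):

```python
# Simplified strategic merge: dicts merge key by key, and values from
# the patch win. Real strategic merge also supports list merge keys and
# $patch directives; this sketch ignores those.

def strategic_merge(base, patch):
    if not isinstance(base, dict) or not isinstance(patch, dict):
        return patch  # scalars and lists in the patch replace the base value
    merged = dict(base)
    for key, value in patch.items():
        merged[key] = strategic_merge(base[key], value) if key in base else value
    return merged

base_service = {
    "kind": "Service",
    "metadata": {"name": "wordpress", "labels": {"app": "my-wordpress"}},
    "spec": {"ports": [{"port": 80}], "type": "LoadBalancer"},
}
patch = {
    "kind": "Service",
    "metadata": {"name": "wordpress"},
    "spec": {"type": "NodePort"},
}

result = strategic_merge(base_service, patch)
print(result["spec"]["type"])   # only the type is overridden
print(result["spec"]["ports"])  # ports survive untouched
```

This is why the patch file only needs enough fields to identify each object plus the fields being changed.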
Now we need to create the $DEV_HOME/kustomization.yaml file to tie all of this together:
bases:
- ../../base
patchesStrategicMerge:
- localserv.yaml
Notice that this is really very simple; we’re pointing at our original base directory, and specifying the patch(es) that we want to add.
Now we can go ahead and build the original, and see that it’s untouched:
kustomize build $BASE
You can see that we still have LoadBalancer services:

spec:
 ports:
 - port: 3306
 selector:
   app: my-wordpress

apiVersion: v1
kind: Service
metadata:
 labels:
   app: my-wordpress
 name: wordpress
spec:
 ports:
 - port: 80
 selector:
   app: my-wordpress
 type: LoadBalancer

apiVersion: apps/v1beta2
kind: Deployment
metadata:
 labels:

But if we build the overlay instead, we can see that we now have NodePort services:
$ kustomize build $DEV_HOME


 name: mysql-pass
type: Opaque

apiVersion: v1
kind: Service
metadata:
 labels:
   app: my-wordpress
 name: mysql
spec:
 ports:
 - port: 3306
 selector:
   app: my-wordpress
 type: NodePort

apiVersion: v1
kind: Service
metadata:
 labels:
   app: my-wordpress
 name: wordpress
spec:
 ports:
 - port: 80
 selector:
   app: my-wordpress
 type: NodePort

apiVersion: apps/v1beta2
kind: Deployment
metadata:

Notice that everything is unchanged by the patch except the type.  Now let’s look at making use of these objects in kubectl.
Using Kustomize with kubectl
Now, all of this is great, but saving it to a file then running the file seems like a little bit of overkill.  Fortunately there are two ways we can feed this in directly. One is to simply pipe it in, as you would do with any other Linux program:
kustomize build $DEV_HOME | kubectl apply -f -
Or if you’re using Kubernetes 1.14 or above, you can simply use the -k parameter:
kubectl apply -k $DEV_HOME
secret "mysql-pass" created
service "mysql" created
service "wordpress" created
deployment.apps "mysql" created
deployment.apps "wordpress" created
This may not seem like a big deal, but consider this example from the documentation, showing the old way of doing things:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret/myregistrykey created
Versus the new way, where we create a kustomization.yaml file:
secretGenerator:
- name: myregistrykey
  type: docker-registry
  literals:
  - docker-server=DOCKER_REGISTRY_SERVER
  - docker-username=DOCKER_USER
  - docker-password=DOCKER_PASSWORD
  - docker-email=DOCKER_EMAIL
Then simply reference it using the -k parameter:
$ kubectl apply -k .
secret/myregistrykey-66h7d4d986 created
Considering that kustomization.yaml files can be stored in repos and subject to version control, where they can be tracked and more easily managed, this provides a much cleaner way to manage your infrastructure as code.
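Notice the generated name myregistrykey-66h7d4d986: Kustomize appends a hash of the secret’s content, so changing a literal yields a new object name and rolls the workloads that reference it. A rough Python sketch of the idea (the hashing scheme here is illustrative only; Kustomize’s actual algorithm differs):

```python
import hashlib

# Illustrative content-hash suffix: the object name changes whenever the
# secret's content changes, so referencing workloads get rolled.
# Kustomize's real algorithm differs; this only demonstrates the idea.

def generated_name(name, literals):
    digest = hashlib.sha256("\n".join(sorted(literals)).encode()).hexdigest()
    return f"{name}-{digest[:10]}"

n1 = generated_name("myregistrykey", ["docker-username=alice", "docker-password=s3cret"])
n2 = generated_name("myregistrykey", ["docker-username=alice", "docker-password=changed"])
print(n1.startswith("myregistrykey-"))
print(n1 != n2)  # changed content produces a different name
```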
There are, of course, other things you can do with Kustomize, including adding name prefixes, generating ConfigMaps, and passing down environment variables, but we’ll leave that for another time.  (Let us know in the comments if you’d like to see that sooner rather than later.)
Meanwhile, if you’d like to see more of what’s new in Kubernetes 1.14, don’t forget to join us for that live webinar on March 21.  
The post Introduction to Kustomize, Part 2: Overriding values with overlays appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenShift 4 ISV Operators

In Red Hat OpenShift 4, the Operator Hub provides access to community and certified operators that facilitate the deployment and configuration of potentially complex applications. In this video, we take a look at creating and scaling a Couchbase cluster using the operator shipped with OpenShift 4.
The post OpenShift 4 ISV Operators appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Deploying HA PostgreSQL on OpenShift using Portworx

This is a guest post written by Gou Rao, CTO and Co-Founder of Portworx, leading the company’s technology, market, and solution execution strategy. Previously Gou was the CTO of Data Protection at Dell, in charge of the technical direction, strategy and architecture. Portworx is a cloud native storage platform to run persistent workloads deployed on […]
The post Deploying HA PostgreSQL on OpenShift using Portworx appeared first on Red Hat OpenShift Blog.
Source: OpenShift

7 considerations for choosing the right hybrid cloud provider

Hybrid cloud computing is an attractive option for businesses that want to combine the advantages of public and private clouds.
An organization’s hybrid cloud provider will be an important partner as it integrates on-premises systems with cloud-based ones. Before awarding this important contract, there are seven important aspects to consider.
1. Get to know your workloads.
Before even talking to a hybrid cloud provider, understand the workloads that you want to pull into the hybrid environment and where you will locate them. For example, data backup and disaster recovery require a different kind of hybrid cloud service than complex analytics applications.
At the same time, ensure your provider can grow with you as your cloud strategy matures. Look to providers that can offer the services you will need as your cloud environment evolves. Seek out solutions that can integrate well with other providers’ platforms if you need to allocate different hybrid cloud contracts in a multicloud environment.
2. Evaluate performance.
Your choice of workload informs the next question to ask a potential provider: for what kind of workload is its infrastructure optimized? As cloud services evolve, providers are beginning to specialize in the kinds of workloads that they support. For example, some might focus on supporting developers, while others might serve a particular kind of application such as systems, applications and products (SAP).
Another aspect of performance is latency. Latency requirements are strict, especially in hybrid cloud environments where on-premises workloads communicate with cloud infrastructure. In these instances, your organization might require a provider with a local edge data center or at least one that can support the appropriate direct connectivity options.
3. Match public and private infrastructure.
Your hybrid cloud provider must also be able to support the technology options that you already use in on-premises infrastructure. Look for easy mappings between the virtual machine choices you’ve made in house and the formats that the service provider supports, for example.
Aligning the two infrastructures will make it easier to migrate workloads between one environment and the other.
4. Look for easy onboarding.
Ask your potential cloud provider what assistance it offers with migrating data and workloads to its infrastructure. Migration can be a challenging task, especially when working with large data sets. How can the provider help make it simpler and cheaper?
Some may offer hardware appliances to help you ship large data sets manually. At the very least, it should provide migration tools to help you map data between your on-premises infrastructure and its own or provide a consulting service to walk you through the process.
5. Assess security.
The provider should also be able to help you as you secure your data in a hybrid environment.
Hybrid workloads often involve security controls such as tokens. These tokens protect sensitive information in cloud data centers by pointing to records kept on customer premises. Ensure that hybrid cloud providers can help you implement these security measures.
A cloud provider should also be able to answer questions about their compliance processes and risk management. For a list of questions to ask, look through this cloud security assessment list from the Object Management Group.
6. Ensure availability and redundancy.
Security is only one aspect of computing risk. Another is availability. Check your provider’s approach to making your data available.
Service-level agreements (SLAs) will be a key factor here. They should not only include availability guarantees, but also escalation and compensation procedures in case the service provider cannot meet them. Consider the provider’s ability to help you support multiple cloud service providers so that you can failover between each in the event of a problem.
7. Weigh out pricing.
Cost was one of the main initial drivers for cloud computing. While other considerations such as scalability have become increasingly prevalent as cloud computing strategies mature, budget is still a key factor.
“Cloud shock” is an issue in cloud computing contracts. It often happens when customers don’t keep track of the online resources they are using. Check operating fees with the hybrid cloud provider, including the cost of unplanned service expansions to cover spikes in demand.
Be mindful that ending a contract may come with a fee. Plan for any extraction costs to ensure you can migrate your data successfully at the close of the relationship.
Like any business partnership, a hybrid computing contract is something that customers should approach carefully and with an understanding of what they hope to achieve. This will help you choose the right hybrid cloud provider and craft a solid platform on which to build a long-term hybrid cloud strategy.
Learn more about the top 10 criteria for selecting a managed services provider that best matches your business’s IT needs.
The post 7 considerations for choosing the right hybrid cloud provider appeared first on Cloud computing news.
Source: Thoughts on Cloud

Introduction to Kustomize, Part 1: Creating a Kubernetes app out of multiple pieces

Kustomize is a tool that lets you create an entire Kubernetes application out of individual pieces — without touching the YAML for the individual components.  For example, you can combine pieces from different sources, keep your customizations — or kustomizations, as the case may be — in source control, and create overlays for specific situations. And it will be part of Kubernetes 1.14.
Kustomize enables you to do that by creating a file that ties everything together, or optionally includes “overrides” for individual parameters.
Let’s see how it works.
Curious about what else is new in Kubernetes 1.14? Join us for a live webinar on March 21.
 
Installing Kustomize
The first step, of course, is to install Kustomize.  It’s easy if you’re on MacOS:
brew install kustomize
If not, you can install from go:
go get sigs.k8s.io/kustomize
Or you can install the latest from source:
opsys=linux  # or darwin, or windows
curl -s https://api.github.com/repos/kubernetes-sigs/kustomize/releases/latest |
 grep browser_download |
 grep $opsys |
 cut -d '"' -f 4 |
 xargs curl -O -L
mv kustomize_*_${opsys}_amd64 kustomize
chmod u+x kustomize
To make sure it’s installed, go ahead and check the version:
$ kustomize version
Version: {KustomizeVersion:v2.0.2 GitCommit:b67179e951ebe11d00125bdf3c2670e88dca8817 BuildDate:2019-02-25T21:36:48+00:00 GoOs:darwin GoArch:amd64}
OK, now we’re ready to go!
Combining specs into a single app
One of the most common uses for Kustomize is to take multiple objects and combine them into a single application with common labels.  For example, let’s say you want to deploy WordPress, and you find two Kubernetes manifests on the web. Let’s start by creating a directory to serve as a base directory:
KUSTOM_HOME=$(mktemp -d)
BASE=$KUSTOM_HOME/base
mkdir $BASE
WORDPRESS_HOME=$BASE/wordpress
mkdir $WORDPRESS_HOME
cd $WORDPRESS_HOME
Now let’s look at the manifests.  One is for the deployment of WordPress itself. Let’s save that as $WORDPRESS_HOME/deployment.yaml.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
 name: wordpress
 labels:
   app: wordpress
spec:
 selector:
   matchLabels:
     app: wordpress
 strategy:
   type: Recreate
 template:
   metadata:
     labels:
       app: wordpress
   spec:
     containers:
      - image: wordpress:4.8-apache
       name: wordpress
       ports:
        - containerPort: 80
         name: wordpress
       volumeMounts:
        - name: wordpress-persistent-storage
         mountPath: /var/www/html
     volumes:
      - name: wordpress-persistent-storage
       emptyDir: {}
The second is a service to expose it. We’ll save it as $WORDPRESS_HOME/service.yaml.
apiVersion: v1
kind: Service
metadata:
 name: wordpress
 labels:
   app: wordpress
spec:
 ports:
    - port: 80
 selector:
   app: wordpress
 type: LoadBalancer
That all seems reasonable; with both files in the wordpress directory, we can run:
$ kubectl apply -f $WORDPRESS_HOME
deployment.apps "wordpress" created
service "wordpress" created
Before we move on, let’s go ahead and clean that up:
$ kubectl delete -f $WORDPRESS_HOME
Now if you look at the definitions, you’ll notice that both resources show an app label of wordpress.  If you wanted to deploy them with a label of, say, app:my-wordpress, you’d either have to add parameters on the command line or edit the files — which does away with the advantage of reusing the files.
Instead, we can use Kustomize to combine them into a single file — including the desired app label — without touching the originals.
We start by creating the file $WORDPRESS_HOME/kustomization.yaml and adding the following:
commonLabels:
 app: my-wordpress
resources:
- deployment.yaml
- service.yaml
This is a very simple file that just says that we want to add a common label — app: my-wordpress — to the resources defined in deployment.yaml and service.yaml.  Now we can use Kustomize to actually build the new YAML:
$ kustomize build $WORDPRESS_HOME
apiVersion: v1
kind: Service
metadata:
 labels:
   app: my-wordpress
 name: wordpress
spec:
 ports:
 - port: 80
 selector:
   app: my-wordpress
 type: LoadBalancer

apiVersion: apps/v1beta2
kind: Deployment
metadata:
 labels:
   app: my-wordpress
 name: wordpress
spec:
 selector:
   matchLabels:
     app: my-wordpress
 strategy:
   type: Recreate
 template:
   metadata:
     labels:
       app: my-wordpress
   spec:
     containers:
      - image: wordpress:4.8-apache
       name: wordpress
       ports:
        - containerPort: 80
         name: wordpress
       volumeMounts:
        - mountPath: /var/www/html
         name: wordpress-persistent-storage
     volumes:
      - emptyDir: {}
       name: wordpress-persistent-storage
The output is the concatenation of YAML documents for all of the resources we specified, with the common labels added. You can output that to a file, or pipe it directly into kubectl. (We’ll cover that in Using Kustomize with kubectl.)
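Conceptually, the commonLabels step is a pass over every resource that merges the extra labels into metadata.labels. Here is a bare-bones sketch in Python (metadata only, for illustration; real Kustomize also rewrites selectors and pod templates, as the output above shows):

```python
# Bare-bones version of the commonLabels pass: merge the extra labels
# into each resource's metadata.labels without mutating the originals.
# (Kustomize also propagates labels into spec.selector and pod
# templates; omitted here.)

def add_common_labels(resources, labels):
    out = []
    for res in resources:
        res = {**res, "metadata": dict(res.get("metadata", {}))}
        res["metadata"]["labels"] = {**res["metadata"].get("labels", {}), **labels}
        out.append(res)
    return out

resources = [
    {"kind": "Service", "metadata": {"name": "wordpress", "labels": {"app": "wordpress"}}},
    {"kind": "Deployment", "metadata": {"name": "wordpress"}},
]
labeled = add_common_labels(resources, {"app": "my-wordpress"})
print([r["metadata"]["labels"] for r in labeled])
```

The original resource definitions stay untouched, which is exactly the point: the label is applied at build time, not in the source files.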
Multiple directories
Now, that’s all great, but WordPress won’t run without a database.  Fortunately, we have a set of similar files for setting up MySQL. Unfortunately, they have the same names as the files we have for WordPress, and remember, we don’t want to have to alter any of the files, so we need a way to pull from multiple directories.  We can do that by creating multiple “bases”.
So we’ll start by creating a new directory:
MYSQL_HOME=$BASE/mysql
mkdir $MYSQL_HOME
cd $MYSQL_HOME
We’ll add three files to it. The first is a $MYSQL_HOME/deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
 name: mysql
 labels:
   app: mysql
spec:
 selector:
   matchLabels:
     app: mysql
 strategy:
   type: Recreate
 template:
   metadata:
     labels:
       app: mysql
   spec:
     containers:
      - image: mysql:5.6
       name: mysql
       env:
        - name: MYSQL_ROOT_PASSWORD
         valueFrom:
           secretKeyRef:
             name: mysql-pass
             key: password
       ports:
        - containerPort: 3306
         name: mysql
       volumeMounts:
        - name: mysql-persistent-storage
         mountPath: /var/lib/mysql
     volumes:
      - name: mysql-persistent-storage
       emptyDir: {}
The second is $MYSQL_HOME/service.yaml:
apiVersion: v1
kind: Service
metadata:
 name: mysql
 labels:
   app: mysql
spec:
 ports:
    - port: 3306
 selector:
   app: mysql
And finally $MYSQL_HOME/secret.yaml to hold the database username and password:
apiVersion: v1
kind: Secret
metadata:
 name: mysql-pass
type: Opaque
data:
 # Default password is "admin".
 password: YWRtaW4=
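Note that Secret data values are base64-encoded, not encrypted; YWRtaW4= is simply the word admin encoded. A couple of lines of Python will produce or verify the value:

```python
import base64

# Kubernetes Secret 'data' values are base64-encoded plain text.
encoded = base64.b64encode(b"admin").decode()
decoded = base64.b64decode("YWRtaW4=").decode()
print(encoded)  # YWRtaW4=
print(decoded)  # admin
```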
And we’ll add a Kustomization file, $MYSQL_HOME/kustomization.yaml, to tie them together:
resources:
- deployment.yaml
- service.yaml
- secret.yaml
Now we need to tie the two directories together.  First let’s clean up $WORDPRESS_HOME/kustomization.yaml to remove the labels so that it only references the resources:
resources:
- deployment.yaml
- service.yaml
Now we need to add a new kustomization file to the base directory at $BASE/kustomization.yaml:
commonLabels:
 app: my-wordpress
bases:
- ./wordpress
- ./mysql
So we’ve moved our labels declaration out to this main file and defined the two base directories we’re working with. Now if we run the build…
kustomize build $BASE
We can see that all of the files are gathered and the label is added to all of them:
apiVersion: v1
data:
 password: YWRtaW4=
kind: Secret
metadata:
 labels:
   app: my-wordpress
 name: mysql-pass
type: Opaque

apiVersion: v1
kind: Service
metadata:
 labels:
   app: my-wordpress
 name: mysql
spec:
 ports:
 - port: 3306
 selector:
   app: my-wordpress

apiVersion: v1
kind: Service
metadata:
 labels:
   app: my-wordpress
 name: wordpress
spec:
 ports:
 - port: 80
 selector:
   app: my-wordpress
 type: LoadBalancer

apiVersion: apps/v1beta2
kind: Deployment
metadata:
 labels:
   app: my-wordpress
 name: mysql
spec:
 selector:
   matchLabels:
     app: my-wordpress
 strategy:
   type: Recreate
 template:
   metadata:
     labels:
       app: my-wordpress
   spec:
     containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
         valueFrom:
           secretKeyRef:
             key: password
             name: mysql-pass
       image: mysql:5.6
       name: mysql
       ports:
        - containerPort: 3306
         name: mysql
       volumeMounts:
        - mountPath: /var/lib/mysql
         name: mysql-persistent-storage
     volumes:
      - emptyDir: {}
       name: mysql-persistent-storage

apiVersion: apps/v1beta2
kind: Deployment
metadata:
 labels:
   app: my-wordpress
 name: wordpress
spec:
 selector:
   matchLabels:
     app: my-wordpress
 strategy:
   type: Recreate
 template:
   metadata:
     labels:
       app: my-wordpress
   spec:
     containers:
      - image: wordpress:4.8-apache
       name: wordpress
       ports:
        - containerPort: 80
         name: wordpress
       volumeMounts:
        - mountPath: /var/www/html
         name: wordpress-persistent-storage
     volumes:
      - emptyDir: {}
       name: wordpress-persistent-storage
OK, so now we’ve gathered multiple components, but what happens if we need to change something?  Let’s look at that in part 2.
The post Introduction to Kustomize, Part 1: Creating a Kubernetes app out of multiple pieces appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis