New Docker Enterprise Phone a Friend: Call Solution Architects for Help Operationalizing Your Solution

After an organization adopts the Docker Enterprise platform from Mirantis, it usually has questions and needs help operationalizing the solution it purchased. The Mirantis Support team does a great job of working with customers to address critical blockers and issues that affect their business operations. But just as car insurance that covers damage from an accident doesn’t do you any good when you need new tires, the support process is not an efficient vehicle for non-critical questions, concerns, and general requests for help that fall outside the realm of support issues.
For example, you might want advice on how best to set up your CI/CD process, or your logging and monitoring. To solve that problem, we have introduced the Docker Enterprise Phone a Friend service — an easy-to-use channel for customers to connect with a domain expert to discuss roadmap items, enhancements, ideas, or tips to operationalize their end-to-end platform.
The Phone a Friend service enables you to block an hour of time with a domain expert once a week to discuss non-critical items related to your Docker Enterprise deployment. It is a subscription-based service and comes in two flavors: a 1-month and a 3-month subscription. The service gives you access to up to 4 one-hour sessions per month.
For example, the discussions can be around authorization, authentication, application containerization and modernization, troubleshooting, performance tuning, microservices architecture and integration, or other related topics.
Setting up Appointments
The process of setting up appointments with the Mirantis Consulting Services team is straightforward. Here is what the scheduling process looks like:

Go to the Mirantis CloudCare Portal and click on the Phone a Friend link, as shown below. (The link only appears for active Phone a Friend subscribers.)

The link will take you to a website where you will be able to see the availability of the Mirantis Consulting Services team and schedule a time.

On the next screen, enter some basic contact information and share anything that will help the architect prepare for the meeting.

And that’s it. We tried to make the Phone a Friend service as easy as possible, from the simple UI to the convenient subscription model that lets you readily get advice from Solution Architects without requiring a services engagement. For more information, or to get started with a Phone a Friend subscription, please contact your Account Manager.
Source: Mirantis

Celebrating Pride Month: Perspectives on Identity, Diversity, Communication, and Change

Throughout June, we’ve published a series of Q&As at WordPress Discover featuring members of the Automattic team. These conversations explore personal journeys; reflections on identity; and diversity and inclusion in tech, design, and the workplace. Here are highlights from these interviews.

“In a World That Wants You to Apologize or Minimize Who You Are, Don’t.”

Gina Gowins is an HR operations magician on the Human League, our global human resources team. In this interview, Gina examines identity and language; communication and trust-building in a distributed, mostly text-based environment; and how her life experiences have informed her work.

I am particularly attached to the term queer as a repurposing of a word that was once used to isolate and disempower people — it was used to call people out as problematically different and other. From my perspective, there is no normal and no other; instead, we are all individual and unique. Identifying as queer allows me to take pride in my own individuality.

Language changes over time, and how we use language shapes our values and thinking. In a culture that is aggressively governed by heteronormative values and where it can still be dangerous and lonely to be LGBTQIA+ — such as the United States, where I live — defining myself as queer is also my small act of defiance. It is a reminder of the consistent fight for acceptance, inclusion, and justice that so many people face, and our inherent value and validity as humans.

Read Gina’s interview

“Reflect What Is Given, and In So Doing Change It a Little”

Echo Gregor is a software engineer on Jetpack’s Voyager team, working on new features that “expand Jetpack’s frontiers.” In this conversation, Echo talks about gender identity, pronouns, and names; and how xer identity and experiences have impacted xer approach to development and work in general.

Earlier in my transition, I called myself “E” sort of as a placeholder while I pondered name things. One late night, on the way home from a party, I had a friend ask if they could call me Echo, as it was the callsign equivalent for “E.” I immediately fell in love with the name, and gradually started using it more and more, until I made it my legal name.

I like that it’s simple and doesn’t have many gendered connotations in the modern world. I also appreciate its mythological origin! In the myth, Echo was a mountain nymph cursed by the goddess Hera — to be unable to speak, and only repeat the last words said to her.

I think there’s a lot of parallels in our world to that idea. We’re part of systems that are so much bigger than us that it’s rare any one of us can be loud enough to bring meaningful change, to speak new words. But echoes don’t perfectly repeat things. They reflect what is given, and in so doing change it a little. I like to try and live up to that by bringing a bit of change to the world, not by being the loudest, but by reflecting things back in my own way.

Read Echo’s interview

“Living My Life Freely and Authentically”

Mel Choyce-Dwan is a product designer on the theme team. In this Q&A, Mel tells us how she got involved with the WordPress community through a previous WordCamp, about her observations of tech events as a queer designer, and about the importance of inclusive design.

Show a lot of different kinds of people in your writing and your imagery, and don’t make assumptions. Talk to people from the communities you’re representing if you can, or read about their own experiences from their perspectives. Don’t assume you know better than someone else’s lived experience. When in doubt, talk to people.

And don’t just talk to people about how your product should work, talk about how it shouldn’t work. Talk about how people think others could hurt them using your product. People of marginalized identities often have stories of being harassed, stalked, or abused on the web. We need to think about how our products can be used for harm before — not after — the harassment.

Read Mel’s interview

“Every Person and Voice Has the Opportunity to Be Heard”

Niesha Sweet, a people experience wrangler on the Human League, says she feels like she was destined to work at Automattic. In this final interview, Niesha reflects on her Pride Month traditions and what she finds most rewarding about her HR work.

I would say that we all have to apply an additional level of empathy, understanding, and openness when working together. Just with communication alone — English is not the first language for some Automatticians, and some cultures’ communication style is direct. Assuming positive intent and having an additional level of empathy for one another allows us to effectively communicate with each other, while also appreciating our differences. The reward that comes with our diverse workforce is that every person and voice has the opportunity to be heard. Impostor syndrome is real, so some Automatticians may not feel as though they can share their ideas with anyone at the company, but we truly can. Our level of diversity is truly outside of what the typical company is aiming to achieve. That’s not to say we’re not looking to hire more diverse Automatticians, or increase our workforce with non-US hires, but we’re not limited by age, sexual orientation, race, and gender identity. Diversity has a different meaning in a lot of the countries where we have Automatticians, and that alone is rewarding. 

Read Niesha’s interview

Learn more about diversity and inclusion at Automattic. We’re currently hiring — apply to work with us!
Source: RedHat Stack

Editing and Enhancing Images in the WordPress Apps

The WordPress app on your Android or iOS device is your companion wherever you go. Manage your site, write and publish, and even add images to your posts — from anywhere you are. Oftentimes, the most engaging posts include visuals, like the photos you take on the go: pictures from last week’s walk, snapshots of your afternoon picnic, or portraits of the family with your puppy.

Have you ever needed to edit your images on your phone? Maybe the lighting wasn’t quite right, or the framing and composition were off. You can now make small retouches right in the WordPress app, like cropping, rotating, and even adding a filter to change the mood of your photos.

Editing photos

You now have the option to edit an image. If your photo is already in the post, tap it, then tap the icon in the top right corner and select Edit. When you’re finished editing the image, tap Done and the previous image will be replaced with the new one.

If you’re adding a new image, you can edit it before inserting it into the post. For example, add a Gallery Block, tap Add Media, and select Choose from your device. Select one or multiple photos, then in the bottom left corner, tap Edit. Edit your image, tap Insert, and that’s it!

If you’re offline, you can still add, edit, and insert new images to a post. 

Making small adjustments

Need to adjust or enhance an image? You can now rotate a photo or crop the borders:

Adding a filter or drawing over an image

If you’re using the iOS app, you can apply a filter to your picture:

And if you have iOS 13 or later, you can also draw over an image, either with your finger or with your Apple Pencil:

We’re thrilled about these new updates to the Media Editor! Let us know what you’d like to see in upcoming versions. We’d love to hear your feedback.
Source: RedHat Stack

CloudForms blog is moving to ManageIQ

In the next few weeks, with the goal of better aligning this content with the upstream ManageIQ project, we will migrate all relevant content to the upstream blog before shutting down this blog platform.
You’ll be able to find all your favorite tips and tricks at manageiq.org/blog/.
Source: CloudForms

Expert Advice: Manage Your Site on the Go Using the WordPress Mobile Apps

For many people, the go-to tool for updating a website is a laptop or desktop computer. Did you know, though, that the computer you carry around in your pocket has as much power as the one on your desk? The WordPress mobile apps are packed with features that make it possible to manage your site no matter where you are.

Want to become a WordPress app pro? Register for our next webinar, “WordPress Mobile: Your site. Your inspiration. Anywhere.” We’ll be sharing bite-sized tips that will transform the way you manage your site and connect with your audience. Some of the topics we’ll cover include:

How to create a site from your phone.
Using stats on the mobile app for a deep dive into your site’s performance.
Leveraging the activity log to keep an eye on what’s going on around your site.
The recently introduced WordPress editor and the ways it has revolutionized mobile content creation.
Starter page templates and how they can jump-start your page designs.
How to use the WordPress.com Reader to find new content and expand your site’s audience.
Making the most of real-time notifications and alerts.

Date: Wednesday, June 24, 2020
Time: 10:00 a.m. PDT | 11:00 a.m. MDT | 12:00 p.m. CDT | 1:00 p.m. EDT | 17:00 UTC
Cost: Free
Registration link

Eli Budelli and I will be your hosts — we work on the WordPress mobile apps, so you’ll be learning and sharing with the people who are crafting your mobile experiences. No previous experience with our mobile apps is necessary, but we recommend a basic familiarity with WordPress.com and installing the WordPress app in advance so you can get the most out of the webinar. The session will cover both iOS and Android, last about 40 minutes, and conclude with a Q&A session (15-20 minutes), so start writing down any questions you may have, and bring them with you to the webinar.

Attendee slots are limited, so be sure to register early to save your seat! But if you can’t make it, we’ve got your back. A recording of the webinar will be uploaded to our YouTube channel a few days after the event.

See you then!
Source: RedHat Stack

Enjoy a Smoother Experience with the Updated Block Editor

Little details make a big difference. The latest block editor improvements incorporate some common feedback you’ve shared with us and make the editing experience even more intuitive than before.

We’ve also updated the categories we use to organize blocks, so you can find exactly what you need, fast. Read on to learn about recent changes you’ll notice next time you open the editor.

Move on quickly after citations and captions

Have you ever felt as if you were stuck inside a block after adding a citation? Now, when you hit Enter or Return at the end of the citation, you’ll be ready to start typing in a new text block.

Quotes were a bit sticky…

Much smoother now!

Quotes, images, embeds, and other blocks now offer this smoother experience. It’s a small change that will save you a little bit of time, but those seconds add up, and less frustration is priceless.

Streamlined heading selection

Another subtle-yet-helpful change we’ve introduced is simplified heading levels. Before, the block toolbar included a few limited options with additional ones in the sidebar. Now, you can find all available heading levels right in the block toolbar, and adjust the heading directly from the block you’re working on. (For even more simplicity, we’ve also removed the dropdown in the sidebar.)

Select a parent block with ease

Working with nested blocks to create advanced page layouts is now considerably smoother. Some users told us it was too difficult to select a parent block, so we’ve added an easier way to find it right from the toolbar. Now it’s a breeze to make picture-perfect layouts!

Filter your latest posts by author

Sites and blogs with multiple authors will love this update: you can now choose a specific author to feature in the Latest Posts block.

To highlight recent articles from a particular writer, just select their name in the block’s settings.

Renamed block categories

Finally, the next time you click the + symbol to add a new block, you’ll notice new, intuitive block categories that make it both easier and faster to find just the block you’re looking for.

What’s new:

Text
Media
Design

What’s gone:

Common
Formatting
Layout

You keep building, we’ll keep improving

Thank you for all your input on how the block editor can be better! We’re listening. If you have more ideas, leave a comment below.

Happy editing!
Source: RedHat Stack

Today I learned: How to make very small containers for golang binaries

TL;DR: Official Go Docker container images tend to be beefy. The standard image on Docker Hub is called golang (docker pull golang), and tossing in a Go program (such as for interactive execution) will bring the image up above 800MB. But building images from the empty scratch base image, using compiled binaries that don’t need complex, multi-layered OS and language environment support, keeps things much slimmer.
I’ve been building microservices applications for demos lately. A lot of the work has been in Node.js, because it’s easy. But this past weekend, I started learning Go (because ‘all the cool kids,’ obviously) and so, there I was, figuring out how to containerize Go programs.
It turns out this is easy, too. But I was surprised to discover how large the resulting container images were. Suppose we have a minimal program, hello.go, like:
package main

import (
    "fmt"
    "os"
)

func main() {
    fmt.Println(os.Args)
}
… which prints the array containing its arguments. Running this on the command line with:
$ go run hello.go hi there
[/tmp/go-build289681080/b001/exe/hello hi there]
… gets you the standard argument array, beginning with the executable path.
Now you can put this into a golang container using the following Dockerfile…
FROM golang

COPY . .

CMD ["go","run","hello.go","hi","there"]
… then build it …
$ docker build -f Dockerfile.golang --tag hello:1.1 .
Sending build context to Docker daemon 2.073MB
Step 1/3 : FROM golang
 ---> 5fbd6463d24b
Step 2/3 : COPY . .
 ---> f72803dfaac0
Step 3/3 : CMD ["go","run","hello.go","hi","there"]
 ---> Running in 39e7765bec67
Removing intermediate container 39e7765bec67
 ---> f96c3d2c0861
Successfully built f96c3d2c0861
Successfully tagged hello:1.1
… and run it, and get the expected result:
$ docker run hello:1.1
[/tmp/go-build847833630/b001/exe/hello hi there]
But when you check with docker images, you see that the container is relatively yuge.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello 1.1 f96c3d2c0861 3 minutes ago 812MB
‘Kayso, you can make a much, much smaller container by changing your Dockerfile to this:
FROM scratch

COPY ./hello /go/bin/hello

CMD ["/go/bin/hello","hi","there"]
… and using the so-called 'scratch' image, which is basically an empty image: no shell, no libraries, nothing but what you copy into it.
But the scratch image also contains no Go language environment, so you need to compile your application into an executable first:
$ go build hello.go
… which gets you a hello binary in your local directory that you can execute like this …
$ ./hello hi there
[./hello hi there]
… and get back the (drum roll) expected result.
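One general caveat when pairing Go with scratch: because scratch contains no C libraries, the binary you copy in has to be statically linked. A trivial program like this one builds statically by default, but if your code pulls in cgo-dependent packages, it's safer to be explicit, and the same command also handles cross-compiling a Linux binary from a Mac or Windows machine:
$ CGO_ENABLED=0 GOOS=linux go build -o hello hello.go
Disabling cgo forces a pure-Go build with no dynamic library dependencies, which is exactly what an empty base image needs.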
Looking back at the Dockerfile (called, in this case Dockerfile.scratch), you can see all it’s doing is copying the binary (hello) into a directory it creates in the container (/go/bin) and then running it from there. So build …
$ docker build -f Dockerfile.scratch --tag hello:1.2 .
Sending build context to Docker daemon 2.073MB
Step 1/3 : FROM scratch
 --->
Step 2/3 : COPY ./hello /go/bin/hello
 ---> Using cache
 ---> fbc88299067f
Step 3/3 : CMD ["/go/bin/hello","hi","there"]
 ---> Running in dd925ee8a3ab
Removing intermediate container dd925ee8a3ab
 ---> 4a049a401c79
Successfully built 4a049a401c79
Successfully tagged hello:1.2

… and run …
$ docker run hello:1.2
[/go/bin/hello hi there]
… and (by this time, I figure you’re not surprised) … the expected result. But look at the container image:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello 1.2 4a049a401c79 33 seconds ago 2.07MB
That’s a pretty substantial size reduction!
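If you'd rather not compile on the host at all, you can get the same result with a multi-stage build, which compiles inside a golang stage and copies only the binary into scratch. Here's a minimal sketch of that variant (the tag and stage names are arbitrary):
FROM golang AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /go/bin/hello hello.go

FROM scratch
COPY --from=build /go/bin/hello /go/bin/hello
CMD ["/go/bin/hello","hi","there"]
Build it with docker build --tag hello:1.3 . and the final image lands in the same 2MB ballpark, because only the last stage's filesystem ends up in the image.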
Flipping from shell-based execution to no-shell-in-sight execution naturally comes with many potential gotchas. In this case, for example, if you were to build the arguments passed by your CMD out of expressions that required evaluation/expansion by a shell, things wouldn’t work as planned in a no-shell container. You’ll need to construct code carefully to run in such a stripped-back environment.
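To make that concrete, here's a hypothetical Dockerfile fragment (the GREETING variable is invented for illustration) showing the two CMD forms:
# Shell form: Docker wraps this in /bin/sh -c to expand $GREETING -- it fails on scratch, which has no shell
CMD /go/bin/hello $GREETING

# Exec form: runs the binary directly, but "$GREETING" is passed as a literal string, never expanded
CMD ["/go/bin/hello","$GREETING"]
If you need environment values in a scratch-based image, read them inside the program itself (os.Getenv in Go) instead of relying on a shell.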
Now, be advised: I haven’t thought about side-effects, security, stability, gotchas that would no doubt be obvious to a seasoned Go dev, and I’d love to hear about them in the comments. But as we say in the artisanal code-creation atelier: “It works on my machine.” 
Today I Learned (TIL) is an intermittent journal of somewhat half-baked solutions to “speed bump” problems encountered by Mirantis Technical Marketing folks working well beyond our native spheres of expertise for research purposes. None of what we write about here has been checked by grown-ups, or is appropriate for production without further validation. Use at own risk. Comments, cautions, and suggestions for improvement are very welcome! Email jjainschigg@mirantis.com (Twitter: @jjainschigg).
Source: Mirantis

Journey into Cloud Native Learning with a New Education Track from Mirantis

Over the next few months, Mirantis Training will be launching a new Cloud Native Computing (CN) track. These courses utilize a new, pattern-driven methodology, and cover containerization with Kubernetes, Docker Enterprise, and adjacent technologies. This track represents the best of the Kube and Docker courses developed and delivered for years by the Mirantis and Docker training teams, combined into one curriculum offering for enterprise customers and individual IT professionals seeking to master developing, operating and securing applications built for the cloud.
One common thread running through the new and soon-to-be-released courses is that all of this new content is built around pattern-driven learning: in each course, we explore a collection of concrete tools in extensive hands-on labs, but we also dig into the higher-level patterns these tools enable, such as the powerful Kubernetes Operator pattern, GitOps management flows, developer-driven operations, and fully containerized continuous integration, to name only a few.

The first release for this track was CN320: Advanced Kubernetes Operations, which came out at the end of April and focused on helping operations staff build on the Kubernetes knowledge they gained from legacy courses (such as Accelerated Kubernetes & Docker Bootcamp (KD250) and Docker Kubernetes Service) to explore some of the tools and patterns needed to run a Kube cluster in production. CN320 was joined last week by four more courses: 

CN210: Docker Enterprise Operations
CN220: Kubernetes Operations
CN310: Advanced Docker Enterprise Troubleshooting
CN230: Kubernetes Native Application Development 

CN210 and CN310 are the popular and heavily battle-tested Docker for Enterprise Operations and Docker Troubleshooting and Support courses from Docker, respectively, now updated for the recently released Docker Enterprise 3.1.
The new CN220 course centers on the skills and knowledge needed for day-one Kubernetes operations and application management. CN230 is a brand-new, developer-focused offering that reimagines Docker’s old Enterprise Developers course for a vendor-agnostic, Kubernetes-first audience of developers and DevOps professionals.
Besides direct private training delivery by Mirantis, all courses in Mirantis’ CN track leverage the global network of Mirantis’ Authorized Training Partners for public class delivery, available in various locations and time zones. 
We hope these new offerings will provide a robust toolkit for both operators seeking to maintain and scale Kubernetes applications in production and developers seeking to build truly cloud- and Kubernetes-native applications. We also hope they’ll set the tone for the entire Cloud Native Computing track as it emerges and evolves over the coming months. Stay tuned for the introductory-level CN courses to complete the full learning journey in this space, anticipated this summer.
Source: Mirantis

Introduction to Istio Ingress: The easy way to manage incoming Kubernetes app traffic

Istio is an open-source, cloud-native service mesh that enables you to reduce the complexity of application deployments and ease the strain on your development teams by giving more visibility and control over how traffic is routed among distributed applications.
Istio Ingress is a subset of Istio that handles the incoming traffic for your cluster. The reason you need ingress is that exposing your entire cluster to the outside world is not secure. Instead, you want to expose just the part of it that handles the incoming traffic and routes that traffic to the applications inside. 
This is not a new concept for Kubernetes, and you may be familiar with the Kubernetes Ingress object. Istio Ingress takes this one step further and allows you to add additional routing rules based on routes, headers, IP addresses, and more. Routing gives you the opportunity to implement concepts such as A/B testing, Canary deployments, IP black/whitelisting, and so on.
Let’s take a look at how to use Istio Ingress. 
Overview of how Istio is integrated in UCP
You can install Istio on any compatible Kubernetes cluster, but to make things simple we’ll look at how to use it with Docker Enterprise Universal Control Plane (UCP).  (If you don’t have Docker Enterprise installed, you can get a free trial here.)
Docker Enterprise 3.1 includes UCP 3.3.0, which includes the ability to simply “turn on” Istio Ingress for your Kubernetes cluster. To do that, execute the following steps:

Log into the UCP user interface.
Click <username> -> Admin Settings -> Ingress. (If you’re following the Getting Started instructions, that will be admin -> Admin Settings -> Ingress.)
Under Kubernetes, click the slider to enable Ingress for Kubernetes.

Next you need to configure the proxy so Kubernetes knows what ports to use for Istio Ingress. (Note that for a production application, you would typically expose your services via a load balancer created by your cloud provider, but for now we’re just looking at how Ingress works.)
Set a specific port for incoming HTTP requests, add the external IP address of the UCP server (if necessary) and click Save to save your settings.

In a few seconds, the configurations will be applied and UCP will deploy the services that power Istio Ingress. From there, you can apply your configurations and make use of the features of Istio Ingress. Let’s look at how to do that.
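If you want to confirm from the command line that the Ingress components came up (this assumes Istio's usual istio-system namespace and a UCP client bundle configured for kubectl, as described later in this article), you can list them:
$ kubectl get pods -n istio-system
$ kubectl get svc -n istio-system
You should see the ingress gateway pod in a Running state and a service exposing the node port you configured above.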
Deployment of a sample application
The next step is to deploy a sample application to your Kubernetes cluster and expose it via Istio Ingress. As an example, we’ll use a simple httpbin app, which enables you to experiment with HTTP requests. You have two options for performing this step: via the UI and the CLI. Using one or the other depends on your preference; you can achieve the same things in both ways. We’ll cover both in this article, starting with the UI.
Installing the application via the UCP user interface
To install the application  using the UI, log into Docker Enterprise and follow these steps:

Go to Kubernetes -> Create.
Select the default namespace and paste the following YAML into the editor:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/asankov/httpbin:1.0
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80

Click Create.

This YAML tells Kubernetes to create a Deployment for one replica of httpbin, a Service to create a stable IP and domain name within the cluster, and a ServiceAccount under which to run the application.
Now we have our application running within the cluster, but we have to expose it to the outside world, and that’s where Istio Ingress comes into play.
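If you'd like to verify the deployment from the command line (using a client bundle, as described in the CLI section below), something like this should show the Service and one Running pod behind it:
$ kubectl get deployment,service httpbin
$ kubectl get pods -l app=httpbin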
Create the Istio Gateway
The first step is to create the Istio Gateway that will be the entry point for all the traffic coming into our Kubernetes cluster. To do that, follow these steps:

Go to Kubernetes -> Ingress.

Click on Gateways -> Create and create a gateway object named httpbin-gateway. The gateway name is arbitrary; you will use it later to connect the Virtual Services to the Gateway. 

Scroll down and click Add Server to add an HTTP server for port 80 and all hosts (*), and give the port a name such as gateway-port. 
Click Generate YML.
Select the default namespace.
Click Create.

Now that we have our gateway in place we need to deploy a Virtual Service.
Deploy a Virtual Service
The Virtual Service is another Istio construct that deals with the actual routing logic we want to put in place. To create it, follow these steps:

Go to Kubernetes -> Ingress.
Click on Virtual Services -> Create and create a new service called httpbin-vs that can take requests from all hosts (*) and links to the httpbin-gateway we created in the previous section.
Click Generate YML.
Select the default namespace.
Click Create. You will be redirected to the Virtual Services view.
Select the new service and click the gear icon to edit it and add the first route. (You can also do this before creating the service.)
You will see a YAML editor with the Virtual Service configuration. You’ll need to make a few tweaks to add routing information before being able to use the Virtual Service. In this case, we want to create a route that takes all requests (/) and sends them to the httpbin service, which is exposed on port 8000.

spec:
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000

In the end, your Virtual Service configuration should look like this: 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  generation: 1
  name: httpbin-vs
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000

Click Save to apply the changes to the service. Now you can go ahead and access it. To do that, open your browser to the proxy address you specified when you set up Istio Ingress. The address takes the form:

<PROTOCOL>://<IP_ADDRESS>:<NODE_PORT>

In this case, we set up the service with the HTTP protocol, so the URL would be:

http://34.219.89.235:33000
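Because httpbin is an HTTP testing tool, you can also exercise the route from the command line; its standard endpoints such as /get and /headers echo your request back as JSON. For example, substituting your own proxy address:
$ curl http://34.219.89.235:33000/headers
A JSON response confirms that the Gateway and Virtual Service are routing traffic to the application pod.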

Install the application via the CLI
In order to communicate with the UCP via the CLI, you first need to download a client bundle and run the environment script to set kubectl to point to the current cluster. (You can get instructions for how to do that in the Getting started tutorial.)
Now we are ready to start building.

Start by creating the prerequisites for the exercise – a deployment for our app, a service account and a service.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/asankov/httpbin:1.0
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
EOF

Next create the Gateway that will accept the incoming connections.
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
EOF

Now create the Virtual Service that is responsible for the routing of the ingress traffic to the application pods.
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  generation: 1
  name: httpbin-vs
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF

Now that you have all of this in place, you can see the application.
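As a quick sanity check (optional; adjust the address to your own proxy settings), confirm that the Istio objects exist and hit the same URL as before:
$ kubectl get gateway,virtualservice
$ curl http://34.219.89.235:33000/get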
Configuring blue/green deployments for an application via Istio Ingress
In the previous example, we saw how we can expose an application via Istio Ingress and do simple routing, but again, Istio Ingress is much more powerful than that. Let’s explore more of these capabilities. 
In this case, the scenario that we are going to explore is doing Canary (blue/green) deployments via Istio Ingress.
Canary deployments involve deploying two versions of your application side by side and serving the new one only to a part of your clients. This way, you can gather metrics about your new version without showing it to all of your end users. When you decide everything with the new version is fine, you roll it out to everyone. If not, you rollback.
To explore this scenario we first need to deploy the second version of our application.  For that, we have prepared a slightly modified version of httpbin. Everything is the same, except the header on the main page, which says httpbin.org V2 to indicate that this is indeed version 2.
To deploy the new application to your cluster, follow these steps:

Log into Docker Enterprise UCP.
Navigate to Kubernetes -> Create.
Select the default namespace
Paste the following YAML into the editor and click Create:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/asankov/httpbin:2.0
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80

Now we have V1 and V2 of httpbin running side by side in our cluster. By default, Kubernetes does round-robin load balancing, so approximately 50 percent of your users will see V1 and the other 50 percent will see V2. You can see this for yourself by refreshing the application page.

When doing canary deployments, however, we usually want the percentage of users seeing the new version to start out much smaller, and that’s where Istio Ingress comes into play.
To control the percentage of traffic that goes to each version, we need to create a Destination Rule. This is an Istio construct that we will use to make a distinction between V1 and V2. To do that, follow these steps:

Log into Docker Enterprise UCP.
Go to Kubernetes -> Create.
Select the default namespace
Paste the following YAML into the editor and click Create:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-destination-rule
spec:
  host: httpbin
  subsets:
  - name: 'v1'
    labels:
      version: 'v1'
  - name: 'v2'
    labels:
      version: 'v2'
As you can see, we’ve created two different subsets, each pointing to a different version of the application.
Next, we need to edit our existing Virtual Service to make use of the newly created Destination Rule. Go to Kubernetes -> Ingress and click Virtual Services.
Find your Virtual Service in the list (you should have only one at that point) and click the gear icon to Edit.
Edit the service to replace the content of the http property so the service looks like the following YAML and click Save:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-vs
spec:
  gateways:
  - httpbin-gateway
  hosts:
  - '*'
  http:
  - route:
    - destination:
        host: httpbin
        subset: 'v1'
      weight: 70
    - destination:
        host: httpbin
        subset: 'v2'
      weight: 30
Notice that the routing now adds specific weights to each of the subsets we created in the previous step. Now if you access the URL and start refreshing the page, you will see V1 approximately 7 out of 10 times and V2 the other 3 times.

So at this point we have successfully completed a canary deployment with a 70-30 ratio. The next step would be to gradually increase the ratio of users seeing the new version until the weight of V2 is 100 percent. At that point we can completely remove the old version.
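For example, once you are happy with V2, a final edit to the Virtual Service's http section (sketched below) sends all traffic to the new subset, after which the V1 Deployment can be deleted:
http:
- route:
  - destination:
      host: httpbin
      subset: 'v2'
    weight: 100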
Next steps
At this point you know how to use Istio Ingress to safely expose your applications, and to create routing rules that enable you to control traffic flow to create scenarios such as canary deployments. To implement more complex situations, you can use these same techniques to create custom routing rules just as you did in this case.
To see a live demo of Istio Ingress in action, check out this video.
Source: Mirantis

Getting Docker Enterprise Edition Running in 1 Hour

Last month, Stone Door Group teamed up with Mirantis to deliver a free, 4-part series of one-hour workshops on various DevOps topics to sharpen your Docker and Kubernetes skills.  Our first webinar on Thursday, April 16th 2020 showed administrators, either new to Docker or running a lab version of Docker CE, how to get Docker Enterprise Edition up and running within 1 hour. The topics included: a quick review of containers and orchestration, installation considerations, installing Docker EE or upgrading from Docker CE, implementing Docker Trusted Registry, and Universal Control Plane.
We still have two more coming up in May, and we’d love to see you! 
Register here: https://www.stonedoor.io/docker-webinar
Get Docker Enterprise Edition Running in One Hour Recording

SLIDE DECK

A curated list of questions from the Q&A during our first webinar that contain rich information about Docker Enterprise capabilities
What are the system requirements to run UCP? Where can I find documentation that describes the process?
Docker has a fantastic document repository available that answers almost any question that comes to mind.
You can find it here: https://docs.docker.com/ 
To take a closer look at what the requirements are to run UCP, here they are: https://docs.docker.com/ee/ucp/admin/install/system-requirements/
Do you have to install docker DE, UCP, and DTR in that particular order? Do you install Kubernetes last?
In our webinar, we show the steps for setting these up. We start with our CentOS boxes, which are running the Docker Enterprise engine with Swarm set up already. Next, we run UCP on top of our Swarm. After we have UCP up and running, we install the DTR.
The process is:

Start with any Linux distro

Then the Docker engine

Initialize Swarm

Install UCP*

Install DTR
*Kubernetes is automatically bootstrapped for you with the UCP installation.

https://docs.docker.com/ee/ucp/admin/install/

In the UCP, is there a way to designate someone an organization owner so that you don’t have to manually do that in DTR each time you add a user to a team?
The short answer here is no.
Organization owners are strictly a concept pertaining to Docker Trusted Registry, so it has to be done through the DTR.
Can I set quotas in UCP Swarm around deployments for end-users? For example: If I want users to share the same computing nodes (for resource sharing) to prevent container sprawl and optimize resource capacity sharing?
Yes, absolutely. As a best practice, you should. Whenever you create a containerized workload (in either Swarm or Kubernetes), you want to impose memory and CPU constraints. Kubernetes controller objects and Swarm services can specify the maximum amount of memory and CPU that a container spun up for that service is allowed to consume. You must do this for all your Swarm services, and your Kubernetes deployments and controller objects, especially in production.
If you don’t, nothing will stop your containers from consuming as much memory as they want. This can eventually lead to nodes crashing and leading to cascading cluster failures.
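To make that concrete, here are two illustrative snippets (the names and values are placeholders; size them for your own workloads). In a Kubernetes Deployment, requests and limits go on each container spec:
    containers:
    - name: web
      image: nginx:latest
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
For a Swarm service, the equivalent constraints are flags on docker service create:
$ docker service create --name web --limit-cpu 0.5 --limit-memory 256M nginx:latest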
How about alerting in the UCP? Do I still have to lookup Prometheus and Alertmanager? Does Docker Enterprise provide this?
One of the tools that Docker Enterprise bootstraps is Prometheus. This can be seen on the home page of your UCP, where we can see metrics that are collected by Prometheus.

Can we access this from an external API instead of connecting directly to our host machine to and hitting the Docker daemon?
We can hit this from an external API, and it is a great practice to do so. One of the features of Docker Enterprise is that it creates a set of certificates and public/private keys for every user that you can use to issue commands to UCP and Kubernetes remotely. It is best practice to have our users access their client bundle with UCP and have them hit the API remotely.
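As a rough sketch of what that looks like in practice (file names vary with your user name and platform), you download the bundle from your UCP profile page, unpack it, and source the included environment script; after that, both docker and kubectl talk to UCP remotely over its API using your certificates:
$ unzip ucp-bundle-admin.zip -d ucp-bundle
$ cd ucp-bundle
$ eval "$(<env.sh)"
$ docker node ls
$ kubectl get nodes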
What network capabilities are supported by UCP?
Docker uses embedded DNS to provide service discovery for containers running on a single Docker engine and for tasks running in a swarm. The Docker engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MacVLAN networks.

https://docs.docker.com/ee/ucp/admin/install/plan-installation/

https://docs.docker.com/datacenter/ucp/1.1/configuration/multi-host-networking/

How does Calico networking share the CIDR range across the cluster?
The Kubernetes network model requires that all pods in the cluster be able to address each other directly, regardless of their host node. The cluster's CNI plugin (Calico, in UCP's case) sets up the pod network on each node and gives each node its own dedicated CIDR block of pod IP addresses, which simplifies allocation and routing.

https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model

How do I scan a docker image for vulnerabilities?
You’ll see that in the Docker Trusted Registry, the DTR downloads a database of vulnerabilities. You can set each repository in the DTR to automatically scan images when they are pushed to the repository.
Does the DTR download a vulnerabilities database every time the container is spun up?
No, security scanning in DTR does not run any containers.
It is a static scan of the image that you uploaded to DTR. If you are running your DTR vulnerability updates in online mode, DTR downloads updates to its vulnerability database once every 24 hours. When it does that, it checks the list of components in all of your images against the new database to see if any new vulnerabilities have been discovered in your images.
Is there a trial version of Docker Enterprise so I can evaluate it?
Yes! There are a few options available to you to try our Docker Enterprise.
If you already have Docker engine up and running, you can use
docker container run --rm -it sdgdockerlabs/coffee
to access a free 5-day trial of Docker Enterprise. This is also available on our website at http://www.stonedoorgroup.com/docker-ce-to-ee
You can also go to hub.docker.com for an enterprise trial. You can download a license for ten nodes and it is good for one month.
Conclusion
If you found this article helpful, please feel free to join us for the remaining two webinars at: https://www.stonedoor.io/docker-webinar

Source: Mirantis