Windows Worker Nodes for Docker Enterprise Kubernetes: Easily add and scale Windows workload capacity

Docker Enterprise 3.1 with Kubernetes 1.17 lets you easily add Windows Kubernetes workers to a cluster (cluster master nodes must still run on Linux), mixing them optionally with Linux workers. Newly-joined Windows workers are immediately recognized by Docker Enterprise Kubernetes, and workloads can be scheduled on them reliably via the nodeSelector element in a deployment spec. 
The ability to orchestrate Windows-based container deployments lets organizations leverage the wide availability of components in Windows container formats, both for new application development and app modernization. It provides a relatively easy on-ramp for containerizing and operating mission-critical (even legacy) Windows applications in an environment that helps guarantee availability and facilitates scaling, while also enabling underlying infrastructure management via familiar Windows-oriented policies, tooling, and affordances. Of course, it also frees users to exploit Azure Stack and other cloud platforms offering Windows Server virtual and bare metal infrastructure.
Configure Windows Server Workers
Before you add a Windows Server worker to the cluster, you of course need the cluster itself, which must run on Linux. If you haven’t got a cluster, please set one up by following the instructions in our Getting Started Blog. A single-server cluster is sufficient.
 
The next step is to create the Windows Server node and add the Docker Enterprise 3.1 software to it so you can add it to the cluster.
 
The following instructions detail configuration of a Windows Server 2019 node for use as a Kubernetes worker with Docker Enterprise 3.1, using PowerShell as the Administrator. If using a cloud host, please select a Windows Server 2019 basic OS image, rather than an image preconfigured for containers.  
 
We start by enabling the Windows containers feature, then restart. Note that the backticks in the following commands are PowerShell line-continuation characters:
 
Enable-WindowsOptionalFeature `
  -All `
  -FeatureName containers `
  -Online;
 
Then we restart the computer, if required:
 
Restart-Computer;
 
Following restart, we set an execution policy to allow remotely-downloaded scripts to execute in the current session:
 
Set-ExecutionPolicy `
  -ExecutionPolicy RemoteSigned `
  -Force `
  -Scope Process;
 
Then we download the installation script:
 
Invoke-WebRequest `
  -OutFile 'install.ps1' `
  -Uri 'https://get.mirantis.com/install.ps1' `
  -UseBasicParsing;
 
And execute it directly:
 
.\install.ps1 -Channel 'test' -dockerVersion '19.03.8';
 
Following execution, we need to log out and back in, to update path variables:
 
logoff
 
Logging back in, we remove the installation script:
 
Remove-Item -Path 'install.ps1';
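 
At this point, we can optionally confirm that the engine is installed and responding (a quick sanity check, not part of the original walkthrough):
 
docker version;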
 
Following this initial configuration, all we need to do is download the UCP images and load them into the local Docker image store.
 
We can optionally suppress PowerShell’s progress bar, which speeds up the download:
 
$ProgressPreference = 'SilentlyContinue'
 
Then we download the image bundle:
 
Invoke-WebRequest `
  -OutFile 'ucp_images.tar.gz' `
  -Uri 'https://packages.docker.com/caas/ucp_images_win_2019_3.3.0.tar.gz' `
  -UseBasicParsing;
 
Once downloaded, we can load the bundle into the local image store:
 
docker load --input 'ucp_images.tar.gz';
 
We can then list the images: 
 
docker images;
 
And finally, clean up the downloaded bundle archive:
 
Remove-Item -Path 'ucp_images.tar.gz';
 
At this point, you can obtain from Docker Enterprise/UCP the join command required to add nodes to your cluster. You can get it either by running “docker swarm join-token worker” from the manager node console, or by navigating to the “nodes” page in the UCP web interface, where the join command is shown ready for copying. Paste it into the PowerShell command line of your new Windows worker node to join it to the cluster. The node will be recognized by Docker Enterprise and will appear in the node list (Shared Resources/Nodes), identified as a Windows node.
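 
The join command will look roughly like the following; the token and manager address here are placeholders, so paste the exact command printed for your cluster:
 
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377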
 
If you’ve configured Docker Enterprise to add new nodes as Kubernetes workers (Admin Settings/Orchestration), the new node will be started as a Kubernetes worker. Otherwise, by default, the node orchestrator is set to ‘swarm.’ To switch it to Kubernetes, run the following commands from the manager node console:
 
docker node update <nodename> --label-add com.docker.ucp.orchestrator.kubernetes=true
docker node update <nodename> --label-rm com.docker.ucp.orchestrator.swarm
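 
Once the orchestrator has been switched, you can optionally confirm from a machine with a client bundle loaded that the Windows node is registered with Kubernetes and reports the expected operating system (an extra check, not in the original steps):
 
kubectl get nodes -o wide
 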
Test Deployment
You can now easily run a test deployment. This example deploys a Windows webserver with two pods behind a load balancer. To deploy it, you’ll need kubectl installed on your machine and authenticated to your Docker Enterprise cluster using the env.sh script in the client bundle downloaded from the Docker Enterprise UI for your account. See the Getting Started Blog for more.
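 
If you haven’t loaded the client bundle yet, a typical sequence on a Linux or macOS workstation looks like the following sketch; the bundle file name is a placeholder, and on Windows you would use the bundle’s env.ps1 script instead:
 
unzip ucp-bundle-admin.zip -d ucp-bundle && cd ucp-bundle
eval "$(<env.sh)"
kubectl config current-context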
 
The following code is essentially the same as that presented in the Kubernetes documentation.
 
Start by creating a namespace for your deployment. Save the following as demo-namespace.yaml:
 
# demo-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
 
Then apply it:
 
kubectl create -f demo-namespace.yaml
 
Then create a new file called win-webserver.yaml, and place in it the following YAML. Note that the YAML includes a (long!) embedded command that configures the webserver and creates a homepage application that responds to requests (in this case, to the IP address of the service on port 80) by identifying the IP of the pod on which the responding webserver is running. This can be used later to demonstrate load-balancing:
 
# win-webserver.yaml
apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  namespace: demo
  labels:
    app: win-webserver
spec:
  ports:
    # the port that this service should serve on
    - port: 80
      targetPort: 80
  selector:
    app: win-webserver
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: win-webserver
  name: win-webserver
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
      name: win-webserver
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - win-webserver
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: windowswebserver
          image: mcr.microsoft.com/windows/servercore:ltsc2019
          command:
            - powershell.exe
            - -command
            - "<#code used from https://gist.github.com/wagnerandrade/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ;  ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$ip=(Get-NetAdapter | Get-NetIpAddress); $$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$ip[1].IPAddress,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus)  } ; "
      nodeSelector:
        beta.kubernetes.io/os: windows
 
Then create the Service and the Deployment:
 
kubectl create -f win-webserver.yaml
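 
Windows Server Core images are large, so the pods may take several minutes to pull and start; you can watch their progress with this optional check:
 
kubectl get pods --namespace demo -w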
 
Confirm that the service is running:
 
kubectl get service --namespace demo
 
An example response:
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
win-webserver   NodePort   10.96.29.12   <none>        80:35048/TCP   12m
 
You’ll see the cluster IP of the service listed (CLUSTER-IP in the output above). Visiting that address from a machine with access to the cluster network will show the application. Calling the IP with curl several times in succession …
 
curl 10.96.29.12
 
… should show that the application has been deployed to two pods, with two different IP addresses, and the incoming requests are load-balanced.
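 
You can also list the pod endpoints behind the service to confirm that two backends are serving the traffic (an optional check):
 
kubectl get endpoints win-webserver --namespace demo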
 
Finally, delete the service and its namespace:
kubectl delete service win-webserver --namespace demo
kubectl delete namespace demo
Source: Mirantis

Performance tuning best practices for Memorystore for Redis

Redis is one of the most popular open source in-memory data stores, used as a database, cache and message broker. There are several deployment scenarios for running Redis on Google Cloud, with Memorystore for Redis as our integrated option. Memorystore for Redis offers the benefits of Redis without the cost of managing it. It’s important to benchmark the system and tune it according to your particular workload characteristics before you expose it in production, even if that system depends on a managed service. Here, we’ll cover how you can measure the performance of Memorystore for Redis, as well as performance tuning best practices. Once you understand the factors that affect the performance of Memorystore for Redis and how to tune it properly, you can keep your applications stable.

Benchmarking Cloud Memorystore
First, let’s look at how to measure performance with a benchmark.

Choose a benchmark tool
There are a few tools available to conduct benchmark testing for Memorystore for Redis, for example:
Redis-benchmark
Memtier-benchmark
YCSB
PerfKit Benchmarker
In this blog post, we’ll use YCSB, because it can control traffic and field patterns flexibly and is well maintained by the community.

Analyze the traffic patterns of your application
Before configuring the benchmark tool, it’s important to understand what the traffic patterns look like in the real world. If you have already been running the application to be tested on Memorystore for Redis and have some metrics available, consider analyzing them first. If you are going to deploy a new application with Memorystore for Redis, you could conduct preliminary load testing against your application in a staging environment, with Cloud Monitoring enabled. To configure the benchmark tool, you’ll need this information:
The number of fields in each record
The number of records
Field length in each row
Query patterns such as SET and GET ratio
Throughput in normal and peak times

Configure the benchmark tool based on the actual traffic patterns
When conducting performance benchmarks for specific cases, it’s important to design the content of the benchmark by considering the table data patterns, query patterns, and traffic patterns of the actual system. Here, we’ll assume the following requirements:
The table has two fields per row
The maximum length of a field is 1,000,000
The maximum number of records is 100 million
Query pattern of GET:SET is 7:3
Usual traffic is 1k ops/sec and peak traffic is 20k ops/sec
YCSB can control the benchmark pattern with its configuration file; an example workload file based on these requirements is sketched a little further below. (Check out detailed information about each parameter.) The actual system contains various field lengths, but YCSB only supports a single fixed fieldlength, so configuring fieldlength=1,000,000 and recordcount=100,000,000 at the same time would make the benchmark data set far larger than the actual system’s. In that case, run the following two tests:
The test in which fieldlength is the same as in the actual system
The test in which recordcount is the same as in the actual system
We will use the latter condition as the example for this blog post.

Test patterns and architecture
After preparing the configuration file, consider the test conditions, including test patterns and architecture.

Test patterns
If you’d like to compare performance across instances under different conditions, you should define the target conditions.
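
As a reference for the configuration discussed above, here is a minimal sketch of such a workload file; the file name and parameter values are assumptions chosen to match the stated requirements, not the exact configuration behind the original benchmark.

# workload_memorystore (assumed file name)
workload=site.ycsb.workloads.CoreWorkload
# two fields per row; recordcount matches the actual system, so fieldlength is reduced
fieldcount=2
fieldlength=100
recordcount=100000000
# total operations per benchmark run
operationcount=3000
# GET:SET ratio of 7:3
readproportion=0.7
updateproportion=0.3
requestdistribution=zipfian

Loading the data set and running the benchmark against the instance could then look like this, with the host address and port as placeholders:

# Load the data set into the target Memorystore instance
bin/ycsb load redis -s -P workload_memorystore -p redis.host=10.0.0.3 -p redis.port=6379
# Run the benchmark, capping client throughput at 10 ops/sec with -target
bin/ycsb run redis -s -P workload_memorystore -p redis.host=10.0.0.3 -p redis.port=6379 -target 10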
In this blog post, we’ll test with three patterns of memory size according to capacity tier.

Architecture
You need to create VMs to run the benchmark scripts. Select a sufficient number of VMs, and suitable machine types, so that VM resources don’t become a bottleneck when benchmarking. In this case, we’d like to measure the performance of Memorystore itself, so the VMs should be in the same zone as the target Memorystore instance to minimize the effect of network latency.

Run the benchmark tool
With these decisions made, it’s time to run the benchmark tool.

Runtime options to control the throughput pattern
You can control the client throughput by using both the operationcount parameter in the configuration file and the -target <num> command line option. For example, with operationcount=3000 in the configuration file and -target 10 on the command line (as in the sketch above), YCSB sends 10 requests per second and the total number of requests is 3,000, so the run lasts about 300 seconds. You should run the benchmark with incrementally increasing client throughput, for example 10, 100, 1,000, 10,000, and 100,000 ops/sec, and a single benchmark run should be somewhat long in order to reduce the impact of outliers.

Load benchmark data
Before running the benchmark, you’ll need to load data into the Memorystore instance that you’re testing, using the YCSB load command (also shown in the sketch above).

Run benchmark
Now that you have your data loaded and your command chosen, you can run the benchmark test. Adjust the number of processes and instances executing YCSB according to the load amount. In order to identify performance bottlenecks, you need to look at multiple metrics. Here are the typical indicators to investigate:

Latency
YCSB outputs latency statistics such as average, min, max, 95th and 99th percentile for each operation, such as READ (GET) and UPDATE (SET). We recommend using the 95th or 99th percentile for the latency metrics, according to your customers’ service-level agreement (SLA).

Throughput
You can use the overall operation throughput, which YCSB outputs.

Resource usage metrics
You can check resource usage metrics such as CPU utilization, memory usage, network bytes in/out, and cache-hit ratio using Cloud Monitoring.

Performance tuning best practices for Memorystore
Now that you’ve run your benchmarks, you should tune your Memorystore instance using the benchmark results. Depending on your results, you may need to remove a bottleneck and improve the performance of your Memorystore instance. Since Memorystore is a fully managed service, various parameters are optimized in advance, but there are still items you can tune based on your particular use case. There are a few common areas of optimization:
Data storing optimizations
Memory management
Query optimizations
Monitoring Memorystore

Data storing optimizations
Optimizing the way you store data not only saves memory usage, but also reduces I/O and network bandwidth.

Compress data
Compressing data often results in significant savings in memory usage and network bandwidth. We recommend Snappy and LZO for latency-sensitive cases, and GZIP for maximum compression rate. Learn more details.

JSON to MessagePack
MessagePack and protocol buffers have schemas like JSON and are more compact than JSON. And Redis Lua scripting has support for MessagePack.

Use Hash data structure
The Hash data structure can reduce memory usage. For example, suppose you have data stored by the query SET "date:20200501" "hoge".
If you have a lot of data keyed by such consecutive dates, you may be able to reduce the memory usage that dictionary encoding requires by storing it as HSET "month:202005" "01" "hoge". But note that this can cause high CPU utilization when the value of hash-max-ziplist-entries is too high. See here for more details.

Keep instance size small enough
The memory size of a Memorystore instance can be up to 300GB. However, data larger than 100GB may be too much for a single instance to handle, and performance may degrade due to a CPU bottleneck. In such cases, we recommend creating multiple instances with smaller amounts of memory, distributing the data across them, and selecting the access point by key on the application side.

Memory management
Effective use of memory is important not only for performance tuning, but also to keep your Memorystore instance running stably, without errors such as out of memory (OOM). There are a few techniques you can use to manage memory:

Set eviction policies
Eviction policies are rules for evicting data when the Memorystore instance memory is full. You can increase the cache hit ratio by specifying these parameters appropriately. There are three groups of eviction policies:
noeviction: Returns an error if the memory limit has been reached when trying to insert more data
allkeys-XXX: Evicts chosen data out of all keys; XXX is the algorithm used to select the data to be evicted
volatile-XXX: Evicts chosen data out of all keys with an "expire" field set; XXX is the algorithm used to select the data to be evicted
volatile-lru is the default for Memorystore. Change the eviction algorithm and the TTL of your data as appropriate. See here for more details.

Memory defragmentation
Memory fragmentation happens when the operating system allocates memory pages that Redis cannot fully utilize after repeated write and delete operations. The accumulation of such pages can result in the system running out of memory and eventually causes the Redis server to crash. If your instances run Redis version 4.0 or higher, you can turn on the activedefrag parameter for your instance. Active Defrag 2, part of Redis version 5.0, has a smarter strategy. Note that this feature is a tradeoff with CPU usage. See here for more details.

Upgrade Redis version
As mentioned above, the activedefrag parameter is only available in Redis version 4.0 or later, and version 5.0 has a better strategy. In general, with a newer version of Redis you can reap the benefits of performance optimization in many ways, not just in memory management. If your Redis version is 3.2, consider upgrading to 4.0 or higher.

Query optimizations
Since query optimization can be performed on the client side and doesn’t involve any changes to the instance, it’s the easiest way to optimize an existing application that uses Memorystore. Note that the effect of query optimization cannot be checked with YCSB, so run your own queries in your environment and check the latency and throughput.

Use pipelining and mget/mset
When multiple queries are executed in succession, network traffic caused by round trips can become a latency bottleneck. In such cases, using pipelining or aggregated commands such as MSET/MGET is recommended (see the sketch at the end of this section).

Avoid heavy commands on many elements
You can monitor slow commands using the SLOWLOG command. SORT, LREM, and SUNION on keys with many elements can be computationally expensive. Check if these slow commands are causing problems, and if they are, consider reducing their use.
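
As a rough illustration of these last two points, here is a sketch using redis-cli against a placeholder instance address, with hypothetical key names:

# Three separate GETs: one network round trip per command
redis-cli -h 10.0.0.3 GET user:1:name
redis-cli -h 10.0.0.3 GET user:2:name
redis-cli -h 10.0.0.3 GET user:3:name
# The same reads as one aggregated command (a single round trip)
redis-cli -h 10.0.0.3 MGET user:1:name user:2:name user:3:name
# Pipelining: send several commands in one batch via --pipe
printf 'GET user:1:name\nGET user:2:name\nGET user:3:name\n' | redis-cli -h 10.0.0.3 --pipe
# Inspect the most recent slow commands recorded by the server
redis-cli -h 10.0.0.3 SLOWLOG GET 10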
Monitoring Memorystore using Cloud Monitoring
Finally, let’s discuss resource monitoring for predicting performance degradation of existing systems. You can monitor the resource status of Memorystore using Cloud Monitoring. Even if you benchmark Memorystore before deploying, its performance in production may still degrade due to various influences such as system growth and changes in usage trends. In order to catch such performance degradation at an early stage, you can create a system that alerts you, or scales automatically, when a resource metric exceeds a certain threshold.

If you would like to work with Google Cloud experts to tune your Memorystore performance, get in touch and learn more here.
Source: Google Cloud Platform

Optimize for internet traffic with Peering Service and the routing preference option

Last week at the Microsoft Build conference, we announced that Azure Peering Service is now generally available. We also introduced “routing preference,” a new option for our customers to further architect and optimize their traffic to and from Azure over the “public Internet.”

Networking is a critical enabler of the cloud. The experience when accessing your applications and data depends on the performance of your network connection and the global network powering your applications and services in the cloud.

For the best experience, data should travel the shortest path and enter and exit the Microsoft network as close as possible to you or your users. Microsoft runs the Microsoft global network, one of the world's largest wide area networks (WANs). Stretching across all continents through hundreds of thousands of miles of fiber and hundreds of network points of presence (PoP), it powers all the Microsoft cloud services such as Azure, Microsoft 365, LinkedIn, and their millions of users.

A growing number of our customers are adopting an "Internet-first" approach. Driven by accelerated cloud adoption and the current global situation, the need to quickly adjust and provide optimal access to users is a main priority. Cloud-centric architectures with virtual private networks (VPNs) and technologies such as SD-WAN are applied to optimize for cost, security, and performance.

Peering Service

Microsoft is always optimizing customer traffic within our network: ingesting it close to the user, carrying it as far as possible toward its destination while avoiding the public Internet, and returning it the same way. Peering Service extends this optimized path to your doorstep or, in industry terms, to the last mile.

Concept diagram of Peering Service.

We have partnered with internet service providers (ISPs), internet exchange providers (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and performant public connectivity.

When connecting through a partner provider, you can take advantage of business-class internet connectivity with high availability and low latency. Using the optimal path and the fewest network hops, Peering Service improves the user experience in Microsoft apps, such as Microsoft Teams, SharePoint, and Outlook. You will also have access to optional advanced performance telemetry and security features such as route hijacking monitoring and prevention.

Prefix events in the portal showing an origin autonomous system number (ASN) change for a Peering Service customer's prefix.

Routing preference

While optimal consumption of apps is critical, so is the ability to architect their delivery. I am excited to introduce the new routing preference option in Azure. This option introduces a second network service tier and enables customers to select how traffic routes between their Azure resources and the clients accessing them from the internet. The Microsoft global network is well provisioned, with multiple redundant fiber paths to ensure exceptionally high reliability and availability. We do traffic engineering using a unique software-defined WAN controller that provides optimal path selection and high performance for your traffic.

Default routing of traffic for best performance in Azure.

While Microsoft will always default to the best performing and most secure option of carrying the traffic across our backbone from source to destination, the new competitive egress tier adds a secondary option for solutions that do not require the premium predictability and performance of Microsoft's global network. Instead, it will allow the routing of traffic directly to the public Internet.

Traffic routed with the new network service tier in Azure.

You can select your preferred routing when creating a public IP address and associating it with resources such as virtual machines (VMs), internet-facing load balancers, and more. You can also add a secondary routing preference, "Internet routing," for storage accounts, which provides an additional endpoint for accessing services such as blobs, files, web, and Azure Data Lake over the public Internet.

Creation of an additional endpoint for internet routing option.
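
As a sketch of how routing preference can be set with the Azure CLI when creating a public IP address: the resource names and location below are placeholders, and the flags reflect the routing preference support available at the time of writing rather than an excerpt from this announcement.

az network public-ip create \
  --name MyRoutingPrefIP \
  --resource-group MyResourceGroup \
  --location eastus \
  --version IPv4 \
  --sku Standard \
  --allocation-method Static \
  --ip-tags 'RoutingPreference=Internet'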

Let us look at how the two options compare. We did a performance comparison using ThousandEyes monitoring across multiple global locations, accessing Azure Virtual Machines. The average round-trip latency was measured over a period of 30 days. As expected, routing via Microsoft's network provides the best latency, with the gap between the two further widening with cross-continent traffic. The choice of best scheme, price, and performance is ultimately yours.

Performance between the Microsoft network and the public Internet.

Learn more

We continue to be fully committed to helping you connect to Azure in the best possible way, protect your workloads, and deliver a great networking experience. We will continue to provide innovative networking services and guidance to help you take full advantage of the cloud and are always interested in learning more about your new scenarios enabled by our networking services. As always, we welcome your feedback.

Learn more about Peering Service.
Learn more about routing preference.

Source: Azure