DynamoDB global tables are now available in the Asia Pacific (Mumbai), Canada (Central), EU (Paris), and South America (São Paulo) Regions

Amazon DynamoDB global tables are now available in the Asia Pacific (Mumbai), Canada (Central), EU (Paris), and South America (São Paulo) Regions. With global tables, you can give massively scaled, global applications local access to a DynamoDB table for fast read and write performance. You can also use global tables to replicate DynamoDB table data to additional AWS Regions for higher availability and disaster recovery.
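For illustration, replication can also be driven from the AWS CLI. A minimal sketch follows; the table name Music is a placeholder, and it assumes the table already exists, with DynamoDB Streams enabled, in each Region listed in the replication group.

# Sketch: create a global table that replicates the placeholder table "Music"
# across the Mumbai (ap-south-1) and Canada Central (ca-central-1) Regions.
aws dynamodb create-global-table \
  --global-table-name Music \
  --replication-group RegionName=ap-south-1 RegionName=ca-central-1 \
  --region ap-south-1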
Source: aws.amazon.com

Hack Week: How Docker Drives Innovation from the Inside

Since its founding, Docker’s mission has been to help developers bring their ideas to life by conquering the complexity of app development. With millions of Docker developers worldwide, Docker is the de facto standard for building and sharing containerized apps. 

So what is one source of ideas we use to simplify the lives of developers? It starts with being a company of software developers who builds products for software developers. One of the more creative ways Docker has been driving innovation internally is through hackathons. These hackathons have proven to be a great platform for Docker employees to showcase their talent and provide unique opportunities for teams across Docker’s business functions to come together. Our employees get to have fun while creating solutions to problems that simplify the lives of Docker developers.

At Docker, our engineers are always looking for ways to improve their own workflows so as to ship quality code faster. Hack Week gives us a chance to explore the boundaries of what’s possible, and the winning ‘hacks’ make their way into our products to benefit our global developer community.

-Scott Johnston, Docker CEO

With that context, let’s break down how Docker runs employee hackathons. Docker is an open source company, and in the spirit of openness, I am sharing all the gory details of our hackathon here.

First of all, our hackathon is known as “Hack Week.” We conduct hackathons twice a year. Docker uses Slack channels to manage employee communications, Confluence for team workspaces and Zoom for video conferencing and recording of demos. For example, we have a Confluence Hack Week site with all the info an employee needs to participate: hackathon rules, team sign-ups, calendar and schedule, demo recordings and results.

Because we still need to perform our day jobs, we run Hack Week for a full work week where employees can manage their time but are granted 20% of that time to work on their hackathon project during work hours. Below is a screenshot of Docker’s internal site for Hack Week that provides simple guidance and voting criteria – every employee gets a vote!

Docker Hackathon Home Page

What makes this fun at Docker is the fact that we have employees participating from Paris, Cambridge (UK) and San Francisco. There are no constraints on how teams form. You can have members from all three locations form as one team. Signing up is simple – all we require is a team name, your team members, your region and a 1-3 sentence description of your “hack.” Below is the calendar from Docker’s last Hack Week which we ran back in December 2019. This should give you a good overview of how we execute Hack Week. This actually runs quite smoothly for Docker despite the 8-9 hour time difference between our teams in San Francisco and the teams in the UK and France. 

The winning team for December’s Hack Week was Team M&Ms (s/o to Mathieu Champion in Paris and Michael Parker in Cambridge) after garnering the most employee votes. The description of their hack was “run everything from Docker Desktop.” The hack enables auto-generation of Dockerfiles from Docker Desktop. (A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble a container image.)
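To make that parenthetical concrete, here is a minimal, purely illustrative Dockerfile written via a shell heredoc; the base image and application path are placeholders, not output of the hack itself.

cat << EOF > Dockerfile
# Start from a small base image (placeholder choice).
FROM alpine:3.12
# Copy the application binary into the image (placeholder path).
COPY ./app /usr/local/bin/app
# Define the command the container runs on start.
CMD ["/usr/local/bin/app"]
EOF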

I spoke with Michael Parker regarding his motivations for participation in Hack Week. “Hack Week is a great innovation platform – it lets employees show what can be easily implemented with our current systems and dream a bit bigger about what might be possible rather than focusing on the incremental feature tweaks and bug fixes.” 

Finally, I have shared the recorded video below from our Hack Week winning team. This will give you a good idea as to how we present, collaborate and vote in a virtual work environment with teams spread across two continents and an island. It’s a 6-minute video and will give you a great view of how passionate we are about making the lives of developers that much better by making their jobs that much easier and more productive.

Feel free to let any of this content we have shared inspire your organization’s employees to plan and conduct your own hackathons. I remember back in 2012 when I was participating in a public hackathon at Austin’s SXSW Interactive Conference seeing none other than Deepak Chopra kicking off the event and inspiring developers. He talked about hackathons as a form of “creative chaos” and how conflict and destruction of established patterns can often lead to creativity and innovation. I think this is a great description of a hackathon. Are you ready for some creative chaos inside your own organization?

The post Hack Week: How Docker Drives Innovation from the Inside appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Advanced Network customizations for OpenShift Install

Each organization has its own unique IT environment, and sometimes it will not fit within the network configuration which Red Hat OpenShift sets by default. Thus, it becomes essential to customize the installation for the target environment. In this blog we are going to showcase how to do the advanced network-related customization and configuration needed to accomplish this. 
Planning 
The first step in implementing or deploying any product is determining the specifications of the target environment.
How do you define your target environment?

What will be the MTU size on your physical network? (A quick way to check is shown in the sketch after this list.)
OpenShift’s MTU size should always be smaller than the transport network’s MTU by at least 50 bytes if using OVS.
Similarly, if you are using any third-party SDN plugin along with OpenShift’s OVN, which uses GENEVE and VXLAN encapsulation, make sure to account for the bytes added by those encapsulations.
Is your virtualization platform already using VXLAN? If so, what port is it utilizing?
How many networks are laid out for each of the OpenShift nodes? Is there any preference to use any specific network for SDN?
Do you plan to bond the different network adapters together for redundancy purposes?
What type of network mode are you planning to use, i.e. network policy or multi-tenant?
Do you plan to use a DHCP server to provide persistent IP and server names, or do you want to inject this information via ignition files?
Is your OpenShift cluster behind a proxy? Do you want to use enterprise NTP servers on your masters to avoid any unnecessary latency issues?
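As a quick check for the first question, you can read the current MTU of a candidate uplink on any prospective node; the interface name ens3 below is only an example.

# Print the configured MTU of the physical interface (name is an example).
ip -o link show ens3 | grep -o 'mtu [0-9]*'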

In this blog we are going to showcase how to do advanced network-related customizations and configurations during the OpenShift deployment for the above stated points. Note that this blog should only be used for customizing the primary interface, i.e. the OpenShift Container Platform (OCP) SDN. For customizing secondary interfaces, i.e. net1, net2, the SRIOV Machine Configuration Operator (MCO) should be used directly.
Multiple networks scenario:
There are scenarios where customers will not be able to open up external/internet access on the internal network due to security concerns. As long as you can attach both public/external networks and the internal network to the OpenShift nodes, you will be able to do the connected deployment. All you need to do is inject the proper default gateway into your ignition files. This example describes the steps to generate the custom ignition file for such scenarios.
In order to make any customizations to the ignition file, we will use a tool called filetranspiler.
You can download it with the following commands:
curl -o ./utils/filetranspile https://raw.githubusercontent.com/ashcrow/filetranspiler/master/filetranspile

chmod +x ./utils/filetranspile
Install the python3-pyyaml package on your RHEL server. This package is required by filetranspiler.
yum -y install python3-pyyaml
Create a “fakeroot” filesystem for each OpenShift node for which you want to make an ignition file.
mkdir -p bootstrap/etc/sysconfig/network-scripts/
Create a network configuration inside the “fakeroot” filesystem. For example, in this scenario we will make these files for each of the interfaces.
cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.1.77
DOMAIN=ocp.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens4
DEVICE=ens4
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.1.77
DOMAIN=ocp.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF
Note: Here the default gateway is that of the public/external network.
Using filetranspiler, create a new ignition file based on the one created by openshift-install. 
Please note that in this example, bootstrap.ign is my original ignition file created by openshift-install. Continuing with the example of my bootstrap server, it looks like this:
filetranspiler -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
The syntax is:
filetranspiler -i $ORIGINAL -f $FAKEROOTDIR -o $OUTPUT
NOTE: If you’re using the container version of filetranspiler, you need to be in the directory where these files/dirs are. In other words, absolute paths won’t work.
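If you do go the container route, a hypothetical invocation is sketched below; the image tag filetranspiler:latest is an assumption (build and tag the image as described in the project README), and the command is run from the directory that contains bootstrap.ign and the fakeroot.

# Hypothetical: run a locally built filetranspiler image against files in the
# current directory (mounted at /srv inside the container).
podman run --rm -ti -v "$PWD":/srv:z filetranspiler:latest \
  -i bootstrap.ign -f bootstrap -o bootstrap-static.ign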
In this way, you can create your customized ignition file for each of the node types, i.e. master or worker.
Multiple networks bonding scenario:
There are scenarios where customers will not be able to open up external/internet access on the internal network due to security concerns. As long as you can attach both the public/external network and the internal network to the OpenShift nodes, you will be able to do the connected deployment. All you need to do is inject the proper default gateway into your ignition files. This example describes the steps to generate the custom ignition file for such scenarios. In this scenario, we also showcase how to pass the bond information into the ignition files.

Create a “fakeroot” filesystem structure for each OpenShift node for which you want to make an ignition file.
mkdir -p bootstrap/etc/sysconfig/network-scripts/
Create a network configuration inside the “fakeroot” filesystem. For example in this scenario we will make these files for each of the interfaces.
cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.7.77
DOMAIN=ocp.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.70
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
BONDING_OPTS="mode=5 miimon=100"
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens4
TYPE=Ethernet
DEVICE=ens4
BOOTPROTO=none
ONBOOT=yes
HWADDR="08:00:27:69:60:c9"
MASTER=bond0
SLAVE=yes
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens5
TYPE=Ethernet
DEVICE=ens5
BOOTPROTO=none
ONBOOT=yes
HWADDR="08:00:27:69:60:c6"
MASTER=bond0
SLAVE=yes
EOF

cat << EOF > bootstrap/etc/resolv.conf
search ocp.example.com
nameserver 192.168.7.1
EOF
Note: Here the default gateway is that of the public/external network.
Specify the IP address, netmask, and bonding mode as per your requirements. In my example I am using ‘mode=5’, which provides fault tolerance and load balancing.
Using filetranspiler, create a new ignition file based on the one created by openshift-install. 
Please note that in this example, bootstrap.ign is my original ignition file created by the openshift-install utility. Continuing with the example of my bootstrap server, it looks like this:
filetranspiler -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
In this way, you can create your customized ignition file for each of the node types, i.e. master or worker.
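Once a node built from these files is up, a quick way to confirm the bond came up as intended is to read the kernel’s bonding status file for bond0 (the device name used above); it reports the bonding mode, MII status and slave interfaces.

# Inspect bonding mode, link status and slave NICs for the bond defined above.
cat /proc/net/bonding/bond0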
Setting up MTU size or VXLAN port number scenario:
If the target environment requires a particular MTU packet size, then you can follow the steps below.
Use the following command to create manifests:
$ ./openshift-install create manifests --dir=<installation_directory>
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ touch <installation_directory>/manifests/cluster-network-03-config.yml
cat << EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789
EOF
Note: The MTU of 1450 above accounts for the additional encapsulation overhead, i.e. always set it 50 bytes lower than what is configured on the physical transport side.
defaultNetwork: This section configures the network isolation mode for OpenShiftSDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.
mtu: The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value.
vxlanPort: The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.
Alternatively, you can also specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameters for the defaultNetwork parameter in the CNO CR.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    …
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 30s
 
kubeProxyConfig: The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Network Operator applies the displayed default parameter values.
iptablesSyncPeriod: The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.
iptables-min-sync-period: The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package.
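To confirm these values after installation, one option (assuming cluster-admin access) is to read the Network operator configuration back from the running cluster:

# Show the cluster-wide network operator configuration, including the MTU,
# vxlanPort and kubeProxyConfig values applied at install time.
oc get network.operator.openshift.io cluster -o yaml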
Injecting static IP and server name scenario:
For static IPs, you need to generate new ignition files based on the ones that the OpenShift installer generated. You can use the filetranspiler tool to make this process a little easier. This is an example from the bootstrap node.
When using filetranspiler you first need to create a “fakeroot” filesystem.
mkdir -p bootstrap/etc/sysconfig/network-scripts/
Create your network configuration inside this fakeroot.
cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.7.77
DNS2=8.8.8.8
DOMAIN=ocp4.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF
cat << EOF > bootstrap/etc/hostname
Bootstrap-hostname
EOF
NOTE: Your interface name WILL probably differ; be sure to determine the persistent name of the device(s) before creating the network configuration files.
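One way to discover those persistent names, for example from the RHCOS live environment or an existing host on the same hardware, is sketched below; the ifcfg-<name> files must match these device names exactly.

# List interface names so that the ifcfg-<name> files match the actual devices.
ip -o link show | awk -F': ' '{print $2}'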
Using filetranspiler, create a new ignition file based on the one created by openshift-install. Continuing with the example of my bootstrap server; it looks like this.
filetranspiler -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
Injecting enterprise NTP server information scenario:
If the OpenShift cluster is sitting behind a corporate proxy, it is a good idea to use an enterprise NTP server rather than one available on the internet, to avoid latency and firewall issues. One way is to inject the NTP information directly into the ignition files if you are unable to provide it via your DHCP server.
When using filetranspiler you first need to create a “fakeroot” filesystem.
mkdir -p master/etc/
Create your chrony configuration inside this fakeroot.
cat << EOF > master/etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.enterprise.pool.ntp.org iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
#local stratum 10
# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys
# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
EOF
Using filetranspiler, create a new ignition file based on the one created by openshift-install. Continuing with the example of my master server, it looks like this:
filetranspiler -i master.ign -f master -o master-static.ign
Conclusion:
As you can see from this post, with the help of the filetranspiler tool you can customize your ignition files at the deployment stage. This will help you deploy the OpenShift cluster in the target environment of your choice. There are other tools available for such customizations at the ignition level, e.g. jqplay. And lastly, for further customization of your OpenShift cluster, refer to this GitHub link.
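As a small illustration of the jq approach, the command below lists every file a generated ignition file will write, which is a quick way to confirm your customizations landed; bootstrap-static.ign is the output file from the examples above.

# Print the target path of each file embedded in the ignition file.
jq -r '.storage.files[].path' bootstrap-static.ign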
The post Advanced Network customizations for OpenShift Install appeared first on Red Hat OpenShift Blog.
Source: OpenShift