Coming Soon: Make Your Site Private Until You’re Ready to Launch

When you create a new site, you may want to personalize it before making it public. On WordPress.com, we give you a safe space where you can work on building and editing your site until you’re ready to share it with the world.

Until recently, this Coming Soon mode was limited to new sites without plugins or custom themes installed. We realize, however, that many users — regardless of how long they’ve had a website on WordPress.com — might want to make updates, change the design, or add new functionality without making these changes visible to the public until they’re complete. Now, all sites have this Coming Soon option, and you can toggle it on or off as you wish.

To set your site to Coming Soon, go to Manage in the sidebar, select Settings, scroll down to Privacy, and select the Coming Soon button. Be sure to click on Save settings for the change to take effect.

While in this mode, site visitors will see a landing page with your site’s title. You and any logged-in users you invite will see the full website.

[Screenshots: logged-out view and logged-in view]

To invite people to view your site while in Coming Soon mode, add them as new users via the Manage → People → Invite button. Users need at least the Contributor role to view the site in this mode.

To make your site public again, go to Manage in the sidebar, select Settings, scroll down to Privacy, and select the Public button. Again, be sure to click on Save settings.

Private sites for all

If, instead of sharing your site with everyone, you want to keep it private and available only to invited members — for instance, a private blog for your family, made up of photos and videos of your children or pets — you can use the Private option under Manage → Settings → Privacy.

In this mode, instead of the Coming Soon landing page, logged-out visitors will see a more discreet prompt to log in.

No matter what you want to do on your site — whether making a few tweaks, refreshing your site design, or building a fully fledged online store — use the Coming Soon feature to keep it private until you’re ready to unveil it to the world.

We hope you enjoy this new feature!
Source: RedHat Stack

Expert Advice: Business Fundamentals for Creative Professionals

Are you an artist, photographer, or freelance writer? How about a website designer, master metalsmith, or musician? If you’re in any creative profession and would like to learn more about how to market and sell your services and work online, we’ve created a free webinar just for you.

We’ve partnered with our friends at FreshBooks, the leading invoicing and accounting software for creative entrepreneurs, to offer tips on how to build your online store and automate your sales and accounting, leaving you with more time to focus on your craft.

Date: Wednesday, May 20, 2020
Time: 10:00 a.m. PDT | 11:00 a.m. MDT | 12:00 p.m. CDT | 1:00 p.m. EDT | 17:00 UTC
Cost: Free
Registration link
Who’s invited: Artists, writers, musicians, website and graphic designers, photographers, marketers, and anyone else interested in learning how to sell their creative services online.

Your hosts will be Jonathan Wold, Community Manager at WooCommerce, and Irene Elliott, Senior Community Manager at FreshBooks. Dustin Hartzler, a WooCommerce Happiness Engineer, will moderate questions. After the 45-minute presentation, we’ll open up the (virtual) floor for a 15-minute Q&A session.

Attendee slots are limited, so be sure to register early to save your seat! But if you can’t make it, we’ve got your back. A recording of the webinar will be uploaded to our YouTube channel a few days after the event.

See you then!
Source: RedHat Stack

Earth Day Turns 50 with a Massive Livestream Event

As the world fights to bring the COVID-19 pandemic under control, another crisis looms.

In late 2018, the UN Intergovernmental Panel on Climate Change (IPCC) warned that if we want to avoid the worst impacts of climate change, we need to cut global carbon emissions almost in half by 2030. This decade will be critical.

As we’ve stated in the past, the time to act is now — we simply cannot continue business as usual, and this proves resoundingly true this year. We are in a time of maximum uncertainty and urgency.

Earth Day Live: April 22-24

Earth Day Live is a three-day livestream and an epic community mobilization to show support for our planet, during which millions of people can tune in online alongside activists, celebrities, musicians, and more. The massive live event — which starts on April 22 and concludes on April 24 — is organized by climate, environmental, and Indigenous groups within the US Climate Strike Coalition and Stop The Money Pipeline Coalition.

Starting today, you can opt into displaying a banner that promotes Earth Day Live on your WordPress.com site, showing your commitment to this critical topic and spreading the word about the digital event and livestream. On April 22, sites with this feature enabled will automatically display a full-screen overlay message. Your site visitors will be able to dismiss the banner once viewed.

Promote this global movement on your site

To activate the banner, go to My Site → Manage → Settings. At the top of the Settings menu, you will see a toggle switch — flip it on to join this digital climate strike.

Self-hosted WordPress sites can also join the movement by installing the Earth Day Live WP plugin from the WordPress.org plugin repository. 
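For self-hosted sites managed from the command line, WP-CLI can handle the install as well. A minimal sketch; note that the plugin slug below is an assumption based on the plugin’s name, so verify it on the plugin’s WordPress.org page:

```
# Install and activate the Earth Day Live plugin via WP-CLI.
# The slug "earth-day-live-wp" is an assumption; confirm it on WordPress.org.
wp plugin install earth-day-live-wp --activate
```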

After the livestream ends, the banner will disappear on its own — no further action is required on your end. (If you’ve installed the plugin, it will automatically disable.)

Together we can make a difference. We hope you’ll join us in supporting this movement.

Visit Earth Day Live for event details, and explore more digital Earth Day initiatives and resources on WordPress so you can take action on April 22 — or any day.
Source: RedHat Stack

Community Blog Round Up 19 April 2020

Photo by Florian Krumm on Unsplash
Three incredible articles by Lars Kellogg-Stedman aka oddbit – mostly about adjustments made due to COVID-19. I hope you’re keeping safe at home, RDO Stackers! Wash your hands and enjoy these three fascinating articles about keyboards, Arduino, and machines that go ping…
Some thoughts on Mechanical Keyboards by oddbit
Since we’re all stuck in the house and working from home these days, I’ve had to make some changes to my home office. One change in particular was requested by my wife, who now shares our rather small home office space with me: after a week or so of calls with me clattering away on my old Das Keyboard 3 Professional in the background, she asked if I could get something that was maybe a little bit quieter.
Read more at https://blog.oddbit.com/post/2020-04-15-some-thoughts-on-mechanical-ke/
Grove Beginner Kit for Arduino (part 1) by oddbit
The folks at Seeed Studio have just released the Grove Beginner Kit for Arduino, and they asked if I would be willing to take a look at it in exchange for a free kit. At first glance it reminds me of the Radio Shack (remember when they were cool?) electronics kit I had when I was a kid – but somewhat more advanced. I’m excited to take a closer look, but given shipping these days, it’s probably at least a month away.
Read more at https://blog.oddbit.com/post/2020-04-15-grove-beginner-kit-for-arduino/
I see you have the machine that goes ping… by oddbit
We’re all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won’t go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn’t about simple solutions!
Read more at https://blog.oddbit.com/post/2020-03-20-i-see-you-have-the-machine-tha/
Source: RDO

Expert Advice: Get Started on Your New Website

Starting a new website can be a bit overwhelming, but we’re here to help! Beginning Monday, April 20th, WordPress.com will host free, 30-minute live webinars to cover the initial questions that come up as you start to build your website. Each day will cover a different topic, all designed to give actionable advice on how to create the type of website you want.

Date: Starts April 20, 2020 and repeats daily, Monday through Friday

Weekly Schedule:

Mondays – Getting Started: Website Building 101
Tuesdays – Quick Start: Payments (Simple and Recurring)
Wednesdays – Quick Start: Blogging
Thursdays – Quick Start: WooCommerce 101
Fridays – Empezando: Construcción de Sitios Web 101 (Getting Started: Website Building 101, in Spanish)

Time: 09:00 a.m. PDT | 11:00 a.m. CDT | 12:00 p.m. EDT | 16:00 UTC

Who’s Invited: New WordPress.com users and anyone interested in learning more about WordPress.com’s website capabilities.

Register Here: https://wordpress.com/webinars/

Our WordPress.com customer service team (we call them Happiness Engineers) is made up of experts in helping new users get up and running on their new websites. Across each week of webinars, we’ll cover questions about the basics of setting up your website, handling simple and recurring payments, blogging best practices, and adding eCommerce capabilities. Come with questions: you’ll be able to submit them beforehand, in the registration form, and during the live webinar.

Everyone is welcome, even if you already have a site set up. We know you’re busy, so if you can’t make the live event, you’ll be able to watch a recording of the webinar on our YouTube channel.

Live attendance is limited, so be sure to register early. We look forward to seeing you on the webinar!
Source: RedHat Stack

Securing Your Containers Isn’t Enough — Webinar Q&A

Last week we presented a webinar with our partner Zettaset about containerized data encryption and why it’s important. Here are the answers to your questions, provided by Bryan Langston and Uday Shetty of Mirantis and Tim Reilly and Maksim Yankovskiy of Zettaset.
View webinar slides and recording
Why is encryption so important if we wrap our existing security around all workloads?
It goes back to why encrypting a containerized environment is different from encrypting a legacy environment. The single-word answer would be multi-tenancy. Multi-tenant datacenters usually depend on segmentation of the hardware. Multi-tenant containerized environments are entirely in the software stack. So we cannot just take an existing, legacy encryption or security technology, apply it to the entire software stack of containers, and call it a day. We have to do something that’s specific to containers, something that integrates directly with containers and works seamlessly in the containerized environment.
What is the performance impact of encrypting data in containers? How do you ensure that encryption does not introduce latency when dealing with very large buckets? How can we reduce processing time?
With the Zettaset solution, we’re running in containers, but we’re working within the kernel at the block level, using AES-NI instructions for fast cryptographic performance. We measure about a 3% performance hit for reads and writes, using file system benchmark tests on the underlying encryption scheme. It performs just as well on large file systems as it does on small ones. You can reduce that performance hit by splitting your application across more containers on more CPUs.
Performance is critical because you don’t want your encryption solution to slow down your analytics system, bringing it to its knees. That has happened before, which is why people are rightfully concerned. Minimizing performance overhead is one of the fundamentals of our solution.
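Zettaset’s encryption stack itself is proprietary, but you can get a rough feel for block-level encryption overhead with stock dm-crypt as a stand-in. A minimal sketch using a throwaway loopback device (device paths, sizes, and the passphrase are illustrative):

```
# Create a throwaway loopback device so no real disk is touched.
truncate -s 1G /tmp/bench.img
sudo losetup /dev/loop0 /tmp/bench.img   # assumes /dev/loop0 is free

# Baseline: raw write throughput, bypassing the page cache.
sudo dd if=/dev/zero of=/dev/loop0 bs=1M count=512 oflag=direct

# Encrypt the device (AES-XTS, accelerated by AES-NI where available).
echo -n "benchmark-passphrase" | sudo cryptsetup -q luksFormat /dev/loop0 -
echo -n "benchmark-passphrase" | sudo cryptsetup open /dev/loop0 benchvol --key-file=-

# Encrypted write throughput; compare the MB/s figures to the baseline.
sudo dd if=/dev/zero of=/dev/mapper/benchvol bs=1M count=512 oflag=direct

# Clean up.
sudo cryptsetup close benchvol
sudo losetup -d /dev/loop0 && rm /tmp/bench.img
```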
What is the most common security incident reported for containers?
Bryan: Like Tim and Maksim have been saying: data breaches, storming the castle. Storming the castle is made possible by not locking down your environment in terms of network access policies and RBAC, and by a lack of least-privilege implementation. Think of a top-secret security clearance in the government: information is granted on a need-to-know basis. Explicitly defining that will avoid a lot of problems. The most common security incident is access to data by someone who shouldn’t have it.
Maksim: The most common security incidents stem from improperly configured containerized environments that allow attackers to install malicious software on a single container and then from that single container distribute that malicious software to all other containers within the infrastructure. Malicious software then takes over the entire container infrastructure and has unrestricted access to containers and data. While tools such as intrusion detection and container image integrity scanners would help alert the admins of the breach, these tools would not protect the data from compromise. This emphasizes the need for data at rest encryption.
How big a component of DevSecOps is encryption? If you take advantage of a solution like Zettaset, what else remains in order to say we have a “DevSecOps practice”?
Bryan: Good question, it’s kind of like, “How do I know if I’m doing DevSecOps right?” I would phrase that as: Encryption needs to be as big as needed to satisfy your company’s risk management policy. Every company has a different level of risk. Every company has different subsets of controls within a security framework, like PCI for example, to which they have to comply. Just because you sell stuff over the Internet doesn’t mean you’re handling payment, for example. Your subset of controls for PCI might look different from other players in the same online reselling space. So it’s as big as needed to satisfy your company’s risk management policy. What your DevSecOps team has to enforce will vary.
One other thing to keep in mind is that the implementation of DevSecOps is a combination of both industry best practices and the layer of security that pertains to your company’s business model, like how I was just talking about risk management. There are very well defined industry best practices for many components — I’m talking about CIS benchmarks, stuff you can easily download and run and give a quick assessment to how you are doing compared to those benchmarks — but then you also have to define the layer that pertains to you.
In order to say “I have a DevSecOps practice,” we’re talking about having a team that focuses on understanding the attack vectors, and then identifying the controls that are relevant to your workload and your business, and then having the means to implement those controls from a technology perspective, whether that’s encryption, like we’re talking about here, or RBAC, or network policies or some of the other things that we discussed earlier, or all of them.
Nick: Is it fair to say that you’re better off with too much security than not enough security?
Bryan: Yeah, you want to stay off the front page of the headlines. I’m sure Marriott, in light of the current environment we’re in with coronavirus, did not need to exacerbate their problem with a security breach. So definitely, err on the side of caution.
Maksim: By the way, I don’t think there’s such a thing as too much security. There are things that you may choose not to implement, but there’s no such thing as too much security.
My containers are running on SEDs (Self Encrypting Drives). Do I still need Zettaset Container Encryption?
SEDs offer a simple approach to encrypting data. SEDs, however, are not suitable for properly securing container environments.
SEDs do not offer key granularity to address the fluid topology of container environments – containers will share SED partitions and data from different containers will be encrypted with the same key. In the event of a compromise, one bad actor container compromises all other containers that share the same SED.
Also, SEDs do not offer enterprise-grade key management; some store keys on the drive, while others rely on the OS to store the master key. Neither approach is scalable or secure.
Zettaset Container Encryption ensures that:
Each container is allocated its own storage volume mapped to a unique storage volume group that is encrypted with unique encryption key.
The container volume is only available when a container mounts it; the volume is automatically unmounted when the container exits.
The KMIP-compatible enterprise key manager running natively in a container provides secure key management infrastructure.
I am curious: I should be able to encrypt the disks before I make the image, that is, before I run “docker build”. Shouldn’t I then be able to run the application when I run “docker run”?
The docker build command builds the container image, which includes unmodifiable layers of OS, software, and data. This is not the data that Zettaset encrypts. Zettaset encrypts the data that containers use and generate at runtime, the data that persists – essentially, production data.
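To make that distinction concrete, here is a minimal sketch contrasting data baked into an image layer at build time with runtime data written to a named volume (image and volume names are illustrative):

```
# Build-time data: becomes part of a read-only image layer.
cat > Dockerfile <<'EOF'
FROM alpine:3.11
RUN echo "baked in at build time" > /data.txt
EOF
docker build -t layer-demo .

# Runtime data: written to a named volume, persisting across container runs.
docker volume create app-data
docker run --rm -v app-data:/state layer-demo \
    sh -c 'echo "written at runtime" > /state/runtime.txt'

# The named volume holds the persistent production data -- the kind of
# data a runtime encryption solution targets, unlike the image layers.
docker run --rm -v app-data:/state alpine:3.11 cat /state/runtime.txt
```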
How do you handle encryption when containers move from one host to another host?
When containers move between hosts, the data doesn’t necessarily move between the hosts, because the storage should not be associated with the hosts. In a typical distributed environment, when you have processes running on different hosts, this is addressed by having shared storage, but shared storage is still managed by Docker daemons, which are on hosts. So we are able to handle the storage allocation to the container on the shared volumes just like we are able to handle storage allocation to the containers on the volumes that are hardwired to the hosts.
So, we can use storage that is tied to the host, or storage that is shared between the Docker hosts. In addition, with a centralized virtual key manager, we are able to provide access to container data regardless of which host the container runs on. This goes hand-in-hand with the shared storage approach.
Does Docker Enterprise integrate with any key manager/HSM?
Docker Enterprise doesn’t have any specific integrations with any HSM currently. UCP provides a certificate authority (for TLS, client bundles, node joining, API certs, etc.), and DTR provides the notary (for image content trust). The certificate authority can be a third party.
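As a point of reference for image content trust, signing can be enabled on the client side when pushing to a registry backed by a notary (as DTR is). A minimal sketch; the registry hostname and repository are illustrative:

```
# Enable Docker Content Trust: pushes are signed, pulls verify signatures.
export DOCKER_CONTENT_TRUST=1

# Tag and push a signed image (hostname and repo are placeholders).
docker tag myapp:1.0 dtr.example.com/dev/myapp:1.0
docker push dtr.example.com/dev/myapp:1.0
```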
Can you clarify the ephemeral nature of the key manager container, and how it securely accesses the keys (and how those are stored securely?) Can the system leverage key storage like KMS in AWS?
That question speaks to several important things. Yes, a key manager in a containerized environment should run in a container. There has been a lot of work on key managers over the last 20 years. We started with proprietary key managers, then we started moving to key managers supporting what is known as KMIP (Key Management Interoperability Protocol), which is essentially a common language that every key manager out there speaks. The key manager running in a container is part of our solution. It is fully KMIP-compatible, and integration has been tested with other encryption solutions, and of course ours.
Understanding that enterprise security requirements may be different, we provide you with a software-based key manager, but we also understand that you may already have investments in existing Gemalto/Safenet or Thales/Vormetric key management infrastructure. We allow you to connect very simply and easily to any existing key management infrastructure you may have. We have a KMIP-compatible key manager that stores the keys in a key database. The key database is secured with a hierarchy of master keys and hashed with appropriate hash keys. Those keys are stored in the software security module, which is essentially a software implementation of your typical Hardware Security Module (HSM). We’re also (as any solution should be) PKCS11-compliant, which means the solution can talk to HSMs if required; those are essentially hardware devices that store master keys securely.
AWS KMS does not support the industry-standard Key Management Interoperability Protocol (KMIP), and because KMS is owned by the cloud provider, Zettaset recommends using a third-party key manager that allows the customer to own their keys. AWS’s policy is that “security is a shared responsibility,” and this is the optimal way to implement it.
The reason why I’m saying all this and giving such a detailed and somewhat technical answer is that when you look at an encryption solution, you should make sure that it does a good job with housekeeping. If encryption is an afterthought, then believe me when I tell you, key management is not even on the map, but it should be.
How does your “driver” know what size of storage to create if the developer isn’t doing anything different in Docker? It seems that the developer would have to specify that size.
We talk about transparency, yet now we’re saying that when you specify a volume, we need to know the size of the volume you’re allocating. So it’s an additional burden on developers to ask for a certain amount of storage.
There are three options that our solution provides. The first option: storage volumes can be pre-allocated by administrators, and that’s very well integrated with the docker volume create command. So an administrator can specify a volume for, let’s say, a MySQL database and specify its size there.
The second option is that an administrator, at the time of installation of our software, can specify the default size of the volume that a container will get; a developer then simply runs the docker run command without specifying a volume size, and the volume gets allocated.
The third option is that nothing gets specified, and we just apply a default volume size, which is set during installation of the software.
So between the three, transparency should be addressed.
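In Docker CLI terms, the pre-allocation path (the first option) would look roughly like the following sketch; the driver name and the size option are assumptions for illustration, since volume options are driver-specific:

```
# Option 1: an administrator pre-allocates a fixed-size encrypted volume.
# "xcrypt" and the "size" option are illustrative; check vendor docs for
# the actual driver name and supported --opt keys.
docker volume create --driver xcrypt --opt size=10G mysql-data

# A developer then consumes the volume without any size bookkeeping.
docker run -d -v mysql-data:/var/lib/mysql mysql:5.7
```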
In a cloud-based Kubernetes offering such as AKS/EKS, most operational deployments provision separate block storage disks (EBS/Azure Disk) per container. Since there is a logical separation here, and containers aren’t sharing a disk, what does a product like Zettaset bring to the table?
While storage provisioning can provision separate AWS EBS devices per container, Zettaset allows for partitioning one (or small number) of storage devices for use by multiple containers, therefore dramatically reducing the number of EBS volumes required. In addition, Zettaset provides automated encryption and key management of those storage devices without the need to deploy encrypted EBS volumes.
How is RBAC here different than in K8s?
Zettaset XCrypt relies on Kubernetes RBAC for access and permission management. In addition, XCrypt provides the ability to securely decommission a compromised Kubernetes worker node completely. This is done with a single admin command, without the need to have access to the node.
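For reference, the native Kubernetes RBAC that XCrypt relies on is expressed as Role and RoleBinding objects; a minimal sketch granting one user read-only access to pods in a single namespace (user and namespace names are illustrative):

```
# Grant the user "jane" read access to pods in the "apps" namespace.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: apps
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: apps
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```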
Can we use Azure Key Vault with this to store encryption keys for containers, and if so, how will it communicate?
We don’t support Azure Key Vault at this time, only KMIP-compatible key managers.
As a Docker Enterprise Partner, can we offer Zettaset as an OEM product?
Mirantis values our partners and will consider what’s best for our customers and mutual business benefits. Please contact us with specific questions.
How is Zettaset integrated with Docker containers? Is it a layer above Docker Enterprise, or something integrated within every Docker container? Can it be used with Docker Enterprise orchestration tools?
Zettaset XCrypt integrates with Docker Enterprise by providing “XCrypt Volume Driver”, a Docker Volume Driver API-compatible driver that transparently integrates into the Docker storage management stack. It is fully usable with Docker Enterprise orchestration tools.
Source: Mirantis

Edge Computing Challenges

There is a lot of talk around edge computing. What is it? What will it mean to the telco industry? Who else will benefit from it? There’s also a large amount of speculation about identifying the killer application that will spark massive scale deployment of edge computing resources. 
In many ways, edge computing is just a logical extension of existing software defined datacenter models. The primary goal is to provide access to compute, storage and networking resources in a standardised way, whilst abstracting the complexity of managing those resources away from applications. The key factor that is missing in many of these discussions, however, is a clear view of how we will be expected to deploy, manage and gain a clear picture of these edge resources.
The key challenge here is that those resources need to be managed in a consistent and effective way in order to ensure that application developers and owners can rely on the infrastructure, and will be able to react to changes or issues in the infrastructure in a predictable way. 
The value of cloud infrastructure software such as OpenStack is the provision of standardised APIs that developers can utilise to get access to resources, regardless of what they are or how they need to be managed.
With the advent of technologies such as Kubernetes, the challenge of managing the infrastructure in no way lessens; we still need to be able to understand what resources we have available, control access to them, and lifecycle manage them.
In order to enable the future goal of providing distributed ubiquitous compute resources to all who need them, where they need them, and when they need them, we have to look deeper into what is required for an effective edge compute solution.
What is Edge Computing?
Finding a clear definition of edge computing can be challenging; there are many opinions on what constitutes the edge. Some definitions narrow it, claiming that edge only includes devices required to support low-latency workloads, or devices that are the last computation point before the consumer, whilst others include the consumer device or an IoT device, even if latency is not an issue.
As everyone appears to have a slightly different perspective on what edge computing entails, the following is the definition used for this discussion.
In this discussion we take a broad interpretation of edge computing: it includes all compute devices that provide computing resources outside core or regional data centers, bringing those resources closer to the end user or to data collection devices.
For example, consider this hierarchy:

There are a number of different levels, starting with core data centers, which generally consist of fewer locations, each containing a large number of nodes and workloads. These core data centers feed into (or are fed by, depending on the direction of traffic!) the regional data centers.
Regional data centers tend to be more numerous and more widely distributed than core data centers, but they are also smaller and consist of a smaller — though still significant — number of nodes and workloads.
From there we move down the line to edge compute locations; these locations are still clouds, consisting of a few to a few dozen nodes and hosting a few dozen workloads, and existing in potentially hundreds of thousands of locations, such as cell towers or branch offices. 
These clouds serve the far edge layer, also known as “customer premise equipment”. These are single servers or routers that can exist in hundreds of thousands, or even millions of locations, and serve a relatively small number of workloads. Those workloads are then accessed by individual consumer devices.
Finally, the consumer or deep edge layer is where the services provided by the other layers are consumed, and where data is collected and processed.
Edge Use Cases
There are a large number of potential use cases for edge computing, with more being identified all the time. Roughly speaking, edge use cases divide into third-party applications and telco operator applications.
Third-party applications are those more likely to be accessed by end users, such as wireless access points in a public stadium or connected cars, or, on the business end, connecting the enterprise to the RAN.
Operator applications, on the other hand, are more of an internal concern. They consist of applications such as geo-fencing of data, data reduction at the edge to enable more efficient analytics, or Mobile Core.
All of these applications, however, fall into the “low latency requirements” category. Other edge use cases that don’t involve latency might include a supermarket that hosts an edge cloud communicating with scanners customers use to check out their groceries as they shop, or an Industrial IoT scenario in which hundreds or thousands of sensors feed information from different locations in a manufacturing plant to the plant’s local edge cloud, which then aggregates the data and sends it to the regional cloud.
Edge Essential Requirements
The delivery of any compute service has a number of requirements that need to be met. With edge computing, the same delivery of a massively distributed compute service takes all those requirements and compounds them, not only because of the scale, but also because access (both physically and via the network) may be restricted due to device/cloud location. 
So taking this into account, what are the requirements for edge computing?

Security (isolation)

Effective isolation of workloads is critical to ensure not only that workloads do not interfere with each other’s resources, but also that they cannot access each other’s data in a multi-tenanted environment.
Clear access control and RBAC policies and systems are required to support appropriate separation of duties and to prevent unauthorised access by both good and bad actors.
Cryptographic Identification and authentication of edge compute resources are also required.

Resource management

The system must provide the ability to manage the physical and virtual resources needed to serve consumers in a consistent way, with minimal input from administrators.
Operators must be able to manage all resources remotely, with no need for local hands.

Telemetry Data

The system must provide a clear understanding of resource availability and consumption, in such a way that provides applications with the data necessary to make programmatic decisions about application distribution and scaling. This requires:

Providing applications with data on inbound demand 
Providing applications with geographic data that is relevant to application decisions

Operations

Low impact, zero application downtime infrastructure operations are critical.
Low or (preferably) zero touch infrastructure operations tooling must be available.
An efficient edge system requires a very high degree of automation and self-healing capabilities.
The system must consist of self-contained operations with minimal dependencies on remote systems that could be impacted by low network bandwidth, latency, or outages.

Open Standards 

A key feature of edge systems is the ability to rapidly deploy new and diverse workloads and integrate them with a number of different environments. Basing the solution on open standards allows for this flexibility and supports standardisation.
Open standards should be used in all areas that affect the deployment and management of workloads, enabling easy and rapid certification of workloads, such as: 

a common standard for the abstraction of APIs, which simplifies development and deployment
standardised virtualisation or container engines

Stability and Predictability

Edge compute platforms need to behave predictably in different scenarios to ensure a consistent usage experience.
The stability of edge compute solutions is critical; this encompasses graceful recovery from errors, as well as being able to handle harsh environmental conditions with potentially unpredictable utilities and other external services.

Performance

Predictable and clearly advertised performance of edge compute systems is critical for the effective and appropriate hosting of applications. For example, it should be clear whether the environment provides access to specialised hardware components such as SmartNICs and network accelerators.
The performance requirements for edge compute systems are driven by application needs. For example, a gaming application may need lots of CPU and GPU power and very low-latency network connections, while a data logger may be based on a low-power CPU and can trickle-feed the collected data over time.

Abstraction

Edge systems must provide a level of abstraction for infrastructure components in order to support effective application/workload portability across multiple platforms. Common standard APIs typically drive this portability.

Sound familiar?
If you’re thinking that this sounds a lot like the theory behind cloud computing, you’re right. In many ways, “edge” is simply cloud computing taken a bit further out of the datacenter. The distinction certainly imposes new requirements, but the good news is that your cloud skills can be brought to bear to get you started.
If this seems overwhelming, don’t worry, we’re here for you! Please don’t hesitate to contact us and see how Mirantis can help you plan and execute your edge computing architecture.
Source: Mirantis

Swarm to Kubernetes workload migration

For customers, making the choice to migrate from a production Swarm cluster to a production Kubernetes cluster is not an easy decision. Many factors go into such a decision, such as wanting to take advantage of advanced scheduling capabilities or the growing Kubernetes ecosystem. While we continue to support Swarm, for those customers who have made the decision to embrace Kubernetes, we have good news: Mirantis is committed to freedom of choice for container orchestrators and has developed a suite of processes, professional services, and automated tooling to make this migration as automated and painless as possible.
Because Kubernetes is very flexible (and hence complex), customers should be wary of going it alone. Mirantis has unique expertise to ensure our customers enjoy a fast and successful migration of production workloads from Swarm to Kubernetes. We have worked with customers for over 5 years on Swarm workloads and over 2 years on Kubernetes workloads, and we understand the real-world challenges and solutions of running at production scale. We will be announcing our Migration Package in April, but we wanted to take a moment to highlight some of the issues involved and the areas of concern you need to keep in mind.
Why moving from Swarm to Kubernetes is complicated
In some ways, Swarm and Kubernetes have a lot in common. They both orchestrate containers. They both provide ways to manage resources such as storage, networking, and ingress, all via YAML files.
Unfortunately, in many ways that’s where the similarities end. 
For example, Swarm is simple, making it very easy for a newcomer to successfully deploy containers, whereas Kubernetes can have a steep learning curve and may be overkill for certain scenarios. Kubernetes lends itself to very large, complex infrastructures where Swarm may not be a perfect fit. Swarm YAML (Docker Compose) is built around services for applications, whereas Kubernetes YAML is built around pods and other deployment artifacts.
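To make the difference concrete, here is roughly the same three-replica web service expressed both ways; a minimal, illustrative sketch (images and names are placeholders):

```
# Swarm: a Compose file organized around services.
cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  web:
    image: nginx:1.17
    deploy:
      replicas: 3
    ports:
      - "80:80"
EOF
docker stack deploy -c docker-compose.yml web-stack

# Kubernetes: the same intent, expressed as a Deployment of pods
# (a Service object would still be needed to expose it).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
EOF
```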
Some of these problems can be resolved through the use of automated tooling, and we are creating all of the processes and automated tooling needed to eliminate (or at least minimize) any production impact. We will also be taking this opportunity to automate the clean up of any Compose files to make sure the new YAML files are meeting best practice standards.
In order to perform a successful migration, we consider dozens of different aspects of your existing deployment. They fall into a number of different areas, such as: 

Architecture, configuration, and scale: Obviously, the larger and more complex your current infrastructure, the more complex the migration. Part of our process involves not just ensuring that we’ve duplicated the application and infrastructure environment, but that configuration is managed in a testable, auditable, and repeatable way using Infrastructure as Code. We also examine not just the application, but the infrastructure that hosts it. For example, if you’re using an external storage appliance, we want to ensure that it can be used with Kubernetes, and determine whether any integration code needs to be modified.

Governance: Another area you must consider is that of governance. For trial applications this doesn’t have the same importance as for production applications, but when your business depends on your cluster remaining up, stable, and secure, having a handle on governance issues is crucial. For example, we would want to examine the ways in which you are capturing logs for Swarm workloads. Are you redirecting logs to a 3rd party provider? Will those logs be available in your Kubernetes cluster? We will determine whether you need to re-architect monitoring policies and procedures.
Identity: One area where Swarm and Kubernetes differ is in their handling of identity.

UCP contains its own role-based access control (RBAC) system for ensuring that the proper user has the proper access to the proper Swarm resources, but this RBAC system is independent of the native Kubernetes RBAC also deployed to the cluster. As a result, transitioning from Swarm to Kubernetes requires mapping between these disparate RBAC systems. We spend time identifying both out-of-the-box and custom roles, then take care of re-creating and mapping those roles and assigning them to various individuals and service accounts. Other complications we’ve seen at clients include custom UCP roles, such as a restricted role specifically for CI/CD, which need to be redefined in Kubernetes.

Networking and storage: Another place Swarm and Kubernetes differ is in how they handle networks. Swarm has the concept of application isolation via network partitions, but Kubernetes comes with a flat network. If the application needs to be isolated at the network level, we will need to create additional Kubernetes network policies to implement that (see the sketch after this list). If you have deployed a Layer 7 ingress solution on the existing cluster, we will need to ensure that it will work with Kubernetes, and if not, we will help you rethink or replace it. Similarly, storage, being hardware-dependent, can be an issue. For example, which Docker volume driver(s) are you using to provide persistent storage? Do appropriate Kubernetes CSI drivers exist for the storage you’re using? These are all questions we’ve encountered during multiple engagements.
Customer-specific integration and applications: The final thing we think about when transitioning a customer from Swarm to Kubernetes is all of the aspects of their cloud that are specific to their situation and applications.  For example, what CI/CD tools are you using, and how will that pipeline be impacted by switching to Kubernetes? Are you using a CI/CD pipeline to build, scan, sign and deploy your applications? Which one? Can you integrate that pipeline into Kubernetes using a solution such as Docker Trusted Registry? To ensure a successful transition, we must take all of that into consideration.

These are just a few of the dozens of things we think about when doing the careful work of planning the migration of your production workloads.
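As a sketch of the network-isolation work mentioned in the list above, a per-namespace default-deny NetworkPolicy is a common starting point for approximating Swarm’s per-application network partitions (the namespace name is illustrative):

```
# Deny all ingress to pods in the "app-a" namespace by default;
# explicit allow rules are then layered on per application.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```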
It’s a lot to think about — but you’re not alone
Migrating production workloads from Swarm to Kubernetes can be complicated, and when we’re talking about the production applications on which your business depends, you need to tread carefully. Mirantis is the recognized leader in container orchestration, and we’re working to bring you a comprehensive Professional Services Swarm to Kubernetes Migration package that will ensure your transition is successful.
At Mirantis, we are committed to ensuring freedom of choice for the orchestrator that’s right for you, and whether you choose Swarm or Kubernetes, we want you to have the best possible experience.
If you’re thinking about moving orchestrators from Swarm to Kubernetes, please look for our upcoming announcements.
Source: Mirantis

Expert Advice: How to Start Selling on Your Website

Are you just taking your first steps selling a product or service online and don’t know where to begin? Be sure to register for our next 60-minute webinar, where our expert Happiness Engineers will walk you through the basics of eCommerce and show you how to set up your online store.

Date: Thursday, April 16, 2020
Time: 5 p.m. UTC | 1 p.m. EDT | 12 p.m. CDT | 10 a.m. PDT
Cost: Free
Who’s invited: Business owners, entrepreneurs, freelancers, service providers, store owners, and anyone else who wants to sell a product or service online.
Registration link

Hosts Steve Dixon and Maddie Flynn are both veteran Happiness Engineers with years of experience helping business owners and hobbyists build and launch their eCommerce stores. They will provide step-by-step instructions on setting up:

Simple Payments — perfect for selling a single product or service.
Recurring Payments — great for subscriptions and donations.
WooCommerce — ideal for entrepreneurs who want to build an online store and automate sales.

No previous eCommerce experience is necessary, but we recommend a basic familiarity with WordPress.com to ensure you can make the most of the webinar. The presentation will conclude with a 15-20 minute Q&A session, so note down any questions you have and bring them with you to the webinar.

Seats are limited, so register now to reserve your spot. See you then!
Source: RedHat Stack

Mirantis Partners with Kong for Destination: Decentralization Virtual Event

Mirantis will discuss the road to cloud-native and ways to secure decentralized applications
April 8, 2020, Campbell, CA — Mirantis, the open cloud company, today announced that it is partnering with Kong for the company’s Destination: Decentralization virtual event. Other partners for the event include Cloud Native Computing Foundation (CNCF), AWS, and DataDog. Mirantis will give two talks at the event about the road to cloud-native applications and ways platforms can help secure decentralized applications.
Destination: Decentralization, to be held on April 16th, is a free digital event about decentralizing software architectures in light of the rapid adoption of containers and microservices. The event will host virtual lectures and hands-on labs where attendees will learn how to adapt to this new technological landscape. Mirantis is also geared up to give two presentations at the event:
Two ways platforms can help decentralize applications (and cloud) while still controlling what matters most
Bryan Langston, Director of Architecture, will talk about trusted container registry best practices and Docker Enterprise’s experimental implementation of the NIST OSCAL security standard.
The long road to cloud-native applications: Inter-service communications, application architectures, and platform deployment patterns
Bruce Mathews, Sr. Solutions Architect, will cover the fundamentals of microservices architecture, inter-service communications from the Ops and Developer perspectives, and key design patterns for making service-mesh coordinated apps more operations-friendly.
Register now for Destination: Decentralization: https://konghq.com/events/destination-decentralization/#register 
Source: Mirantis