Extending automation across the organization: How we can create a new culture of automation from legacy IT siloes

In order for organizations to meet these challenges, they must break down the siloes that so frequently exist among teams to integrate best practices, tools and processes. They need to adopt a new approach to operations through autonomous automation. Tom Anderson discusses how enterprises can create a new culture of automation extending across the organization.
Source: CloudForms

Development Teams Need to Keep All Options on the Table

The ability to move code between platforms is an inalienable right that has also become a critical business imperative. Every developer knows that not every application workload runs equally well on all platforms. Developers in this day and age also need to be able to easily shift between platforms as platforms and business conditions evolve. … Continued
Source: Mirantis

Building and expanding network services for a smart and connected world

Organizations are increasingly adopting multicloud implementations and hybrid deployments as a part of their cloud strategy. Networking is at the foundation of this digital transformation. Google has built a massive planet-scale network infrastructure serving billions of users every day. Our global network continues to expand in footprint, with four new regions announced this year in Chile, Germany, Saudi Arabia and Israel. Regions in Delhi NCR, Melbourne, Warsaw and Toronto are now open. We also announced six new subsea cables which connect different parts of the world. Google Cloud offers a broad portfolio of networking services built on top of planet-scale infrastructure that leverages automation, advanced AI, and programmability, enabling enterprises to connect, scale, secure, modernize and optimize their infrastructure without worrying about the complexity of the underlying network. In the past year, we’ve made several advancements to our networking services stack, from layer 1 to layer 7, so you can easily and flexibly scale your business. And what better time to discuss this progress as we gear up for Next ’21!

Simplify connectivity for hybrid environments

Let’s start with connectivity. Networking can get complex, especially in hybrid and multicloud deployments. That’s why we introduced Network Connectivity Center in March as a single place to manage global connectivity. Network Connectivity Center provides deep visibility into the Google Cloud network, with tight integration with third-party solutions. In May, we integrated Network Connectivity Center with Cisco, Fortinet, Palo Alto Networks, Versa Networks and VMware so that customers can use their SD-WAN and firewall capabilities with Google Cloud, and Network Connectivity Center will be generally available for all customers in October.

Operate confidently with advanced security

The network security portfolio secures applications from fraudulent activity, malware, and attacks.
Cloud Armor, our DDoS protection and WAF service, has four new updates:

- Integration with Google Cloud reCAPTCHA Enterprise bot and fraud management (in Preview). Learn more in the blog here.
- Per-client rate limiting, including two rule actions, throttle and rate-based ban (also in Preview). Both bot management and rate limiting are available in the Standard and Managed Protection Plus tiers.
- Edge security policies, which allow you to configure filtering and access control policies for content that is stored in cache for Cloud CDN and Cloud Storage (also now in Preview).
- Adaptive Protection, our ML-based, application-layer DDoS detection and WAF protection mechanism, is now Generally Available.

Other updates in the area of network security include:

- Cloud IDS, developed with threat detection technologies from Palo Alto Networks, was announced in July and is currently in Preview.
- In Cloud firewalls, the Firewall Insights capability has expanded, and hierarchical rules became available earlier this year.
- Cloud NAT has released new scaling features in Preview: destination-based NAT rules and dynamic port allocation.
- The solution blueprint for Cloud network forensics and telemetry, along with a companion blog comparing methods for harvesting telemetry for network security analytics, are both available.

Consume services faster with service-centric networking

Private Service Connect is a service-centric approach to networking that simplifies consumption and delivery of applications in the cloud. We’re adding support for HTTP(S) Load Balancer, which gives you granular control over your policies and enables new capabilities such as vanity domain names and URL filtering. It provides tighter integration with services running on Google Kubernetes Engine (GKE) and more flexibility for the producers offering managed services.
You can connect to services like Bloomberg, Elastic and MongoDB via Private Service Connect, so you can develop apps faster and more securely on Google Cloud.

“MongoDB’s partnership with Google is an integral part of our strategy to support modern apps and mission-critical databases and to become a cloud data company. Private Service Connect allows our customers to connect to MongoDB Atlas on Google Cloud seamlessly and securely, and we’re excited for customers to have this additional and important capability,” said Andrew Davidson, VP of Cloud Product at MongoDB.

Lastly, managed services in Private Service Connect are now auto-registered with Service Directory in the consumer network, making service consumption even simpler.

Service-centric networking extends to GKE and Anthos, which provide a consistent development and operations experience for hybrid and multicloud environments. With Anthos network gateway in Preview, you get more service-centric networking control for your Anthos clusters, with features like Egress NAT Gateway and BGP-based load balancing. With Anthos network gateway, you can streamline costs by removing dependencies on third-party vendors. We’ve added Multi-NIC pod capabilities to our Anthos clusters, allowing customers and partners to offer services by using containerized network functions.

Finally, as your GKE clusters grow in size, scalability becomes a big concern. With discontiguous pod CIDR, IP addresses are now a mutable resource, allowing you to increase your cluster size dynamically. No more deleting and recreating a cluster just to increase its size.

Deliver applications to users anywhere

With Google Cloud’s extensive global network footprint, Cloud Load Balancing can help bring apps in single or multiple regions as close to your users as possible. Cloud Load Balancing now includes advanced traffic management for finer-grained control over your network traffic.
These also include Regional Load Balancers, which provide additional jurisdiction compliance for workloads that require it. Additionally, we support hybrid load balancing capabilities, where you can load balance on-prem and multicloud services. In order to serve apps to users quickly with the right level of redundancy and granularity, we are announcing DNS Routing Policies in Cloud DNS. Now in Preview, this feature lets you steer traffic using DNS, with support for geo-location and weighted round robin policies. We’re also excited to announce that Cloud Domains will be generally available in October. Cloud Domains makes it easy for our cloud customers to register new domains or transfer in existing ones. Cloud Domains is integrated with Cloud DNS to make it easy to create public DNS zones and enable DNSSEC.

Adopt proactive network operations

We have made some exciting strides in Network Intelligence Center, our network monitoring, verification and optimization platform designed to help you move from reactive to proactive network operations. We announced General Availability of Dynamic Reachability within the Connectivity Tests module, and General Availability of the Global Performance Dashboard. With Dynamic Reachability, you can get VM-level granularity for loss and latency measurements. Global Performance Dashboard shows real-time overall Google Cloud network performance metrics such as latency and packet loss, so you can correlate per-project performance metrics to the rest of Google Cloud.

Find out more at Next ’21

We have some great deep dive networking sessions at Next ’21 with our product managers and engineers. Please join us and hear from our product leaders, partners and customers on how to leverage the Google Cloud network for your next cloud initiative.
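As an illustration of the DNS Routing Policies mentioned above, a weighted round robin record can be created from the command line. This is a hedged sketch: the zone name and IP addresses are placeholders, and the routing-policy flags were in Preview (beta track) at the time of writing, so verify them against the current Cloud DNS reference before use.

```shell
# Create an A record that splits traffic 50/50 between two backends
# using a weighted round robin (WRR) routing policy.
# "my-zone" and the 203.0.113.x addresses are placeholders.
gcloud beta dns record-sets create www.example.com. \
    --zone=my-zone \
    --type=A \
    --ttl=300 \
    --routing-policy-type=WRR \
    --routing-policy-data="0.5=203.0.113.10;0.5=203.0.113.20"
```

A geo-location policy works the same way, with region names in place of weights in the policy data.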
Register for Next and build your custom session playlist today!

Sessions:
- INF105 – What’s new and what’s next with networking
- INF205 – Simplifying hybrid networking and services
- INF212 – Delivering 5G networks and ecosystems with distributed cloud
- INF304 – Next generation load balancing
- INF305 – Monitor and troubleshoot your network infrastructure with centralized monitoring
- PAR205 – Google Cloud IDS for Network-based Threat Detection with Palo Alto Networks
- SEC211 – Innovations in DDoS, WAF, firewall & network-based threat detection
- HOL115 – HTTP Load Balancer with Cloud Armor
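To make the Cloud Armor per-client rate limiting described earlier concrete, here is a sketch of a throttle rule attached to an existing security policy. The policy name is a placeholder and the flag names reflect the Preview-era gcloud surface, so confirm them in the current Cloud Armor documentation.

```shell
# Throttle each client IP to 100 requests per minute; requests over
# the threshold receive HTTP 429. "my-policy" is a placeholder for
# an existing Cloud Armor security policy.
gcloud compute security-policies rules create 1000 \
    --security-policy=my-policy \
    --expression="true" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP
```

The `rate-based-ban` action takes the same threshold flags but temporarily bans a client that exceeds them instead of throttling it.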
Source: Google Cloud Platform

Built-in transparency, automation, and interoperability for Cloud KMS

Cloud KMS helps customers implement scalable, centralized, fast cloud key management for their projects and apps on Google Cloud. As use of Cloud KMS has grown, many organizations have looked for ways to better understand crypto assets and to make more informed decisions around data management and compliance. In response, the Cloud KMS team is pleased to announce several new features to help customers meet goals around increased transparency, improved interoperability, and greater automation as they use Cloud KMS.

Transparency: Key Inventory Dashboard

One major request we’ve heard from our largest adopters of Cloud KMS is for improved transparency around their crypto inventory. The newly launched Key Inventory Dashboard helps customers more easily explore, search, and review the keys used in their organization, all from one place in the Google Cloud Console. Key Inventory Dashboard provides comprehensive information about your cryptographic keys, including details such as key name, region, creation date, latest/next rotation dates and rotation frequency, among many other attributes. These insights are presented in table form, which makes it easy to sort and filter keys by various attributes.

The Key Inventory Dashboard is just the first step: stay tuned for announcements in the coming months about additional ways we’re bringing increased transparency to customers’ key inventory.

Interoperability: PKCS#11

Today, customers need to use the Cloud KMS API to make use of Cloud KMS or Cloud HSM. But we know that many customers want (and sometimes need) to use the PKCS#11 standard to allow their applications to make use of Google Cloud cryptography.
We want to support these needs while also giving customers more options for easily integrating their applications and infrastructure with Google Cloud. Our Cloud KMS PKCS #11 Library, an open source project now in General Availability, allows you to access software keys in Cloud KMS or hardware keys in Cloud HSM and use them for encrypt and decrypt operations with the PKCS #11 v2.40 API. We welcome the community’s contributions for possible inclusion in subsequent versions. Our investment in the PKCS #11 library is one of several efforts to make it easier for customers to integrate their applications and infrastructure with Google Cloud. As we continue to plan new ways for customers to make use of Cloud KMS, we welcome additional customer feedback about what encryption features and methods will be most helpful in bringing more data and workloads to Google Cloud.

Automation: Variable Key Destruction and Fast Key Deletion

Through improved automation, customers can now decide how long after they schedule a key for destruction that destruction will occur, and get additional assurance about how quickly Google will fully purge destroyed key material. For newly created or imported software or hardware keys, customers may use our new Variable Key Destruction feature to specify a length of time between 0-120 days (for imported keys) or 1-120 days (for non-imported keys created within Google Cloud) that a key will remain in the “Scheduled for destruction” state after a customer requests the key to be destroyed. This increased control and automation means that customers can specify the destruction window that is right for them.
Customers who need to destroy keys very shortly after attempting to do so can rest assured that their keys will be destroyed more quickly; alternatively, those who want a longer window to prevent inadvertent key destruction may opt for one. In all cases, customers may specify a key destruction window with day, hour, or even minute-level granularity.

Once a customer key has been destroyed, our new Fast Key Deletion functionality, rolling out by late October, assures customers that all remnants of their destroyed key material will be fully purged from all parts of Google’s infrastructure. Fast Key Deletion reduces Google’s data deletion commitment on destroyed keys from 180 days to 45 days. This means that all traces of destroyed key material will now be completely removed from Google’s data centers no later than 45 days after the time of destruction.

While Google completely purges all key material that customers want to destroy, customers who import keys to Google Cloud now have new options to recover keys once they are destroyed. With the new Key Re-Import feature, imported keys previously listed in the “Destroyed” state can be restored to “Enabled” by re-uploading the original key material. Re-Import can be conducted via the command line interface as well as via the Cloud Console. This allows customers who purposefully or accidentally destroyed an imported key to later re-import that key.

What’s Next for Cloud KMS

We’re continuing our work to make encryption from Google Cloud the most complete, secure, and easy-to-use of any major cloud provider. Stay tuned for further updates on how we’re working to deliver additional transparency, interoperability, and automation. As always, we welcome your feedback.
To learn more about all the features of Cloud KMS, see our documentation. For a deeper look, the Google Cloud security team has also published a whitepaper titled “Cloud Key Management Service Deep Dive.”
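As a concrete sketch of the Variable Key Destruction feature described above, the destruction window can be set per key with gcloud. The key, key ring, and location names below are placeholders, and the `--destroy-scheduled-duration` flag should be checked against the current Cloud KMS reference before relying on it.

```shell
# Create a key that stays in "Scheduled for destruction" for 7 days
# after a destroy request. All resource names here are placeholders.
gcloud kms keys create my-key \
    --keyring=my-keyring \
    --location=us-east1 \
    --purpose=encryption \
    --destroy-scheduled-duration=7d

# Existing keys can be updated to a different window, e.g. 24 hours.
gcloud kms keys update my-key \
    --keyring=my-keyring \
    --location=us-east1 \
    --destroy-scheduled-duration=24h
```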
Source: Google Cloud Platform

Build and run a Discord bot on top of Google Cloud

Stuck at home these past–checks calendar–732 months, I’ve been spending a lot more time on Discord (an online voice, video and text communications service) than I ever thought I would. Chatting with far-flung friends, playing games, exploring, finding community as I am able, and generally learning about a platform I had not used all that much before the pandemic. And part of that fun was seeing all the cool bots people have made on Discord to liven things up: moderation, trivia, and games: many little, weird, random games. So I wondered: could I make one? Since bots are just code that interacts with Discord’s APIs, they have to have a computer to run on. Cool, I have one of those. But I don’t want my bot to disappear just because my computer is off, so I need something better: a computer that can always stay on. Reliable infrastructure. If only I could find that somewhere…

What would it take to actually run a bot on someone else’s (i.e., Google’s) computers? I’m assuming here that you’ve set up your Discord developer account, made a New Application (with a clever name, of course), got the bot token from the menu under Settings > Bot > Token (you’ll have to Copy or Reveal it), and have that stored safely on a sticky note by your desk.

Now, on to Google Cloud! First make sure you have an account set up and are able to create new resources. We’ll need a Virtual Machine (VM), part of Google Compute Engine. In Google Cloud, make a new project, head to Compute, and create a new instance. The smallest instance is going to be fine for this Hello, World example, so let’s use an f1-micro instance, because it’s free! To get that going I chose us-east1 as my region, Series N1, then Machine type f1-micro.

While I love me some serverless architecture, and it’s usually much lower overhead, in this case we want to have a persistent instance so there’s always a process ready to respond to a request via the Discord API.
We want to avoid any timeouts that might emerge if a serverless solution takes time to spin up listeners. At the same time, we need to avoid having more than one process running; otherwise we could have multiple responses to a Discord signal, which would be confusing.

In choosing our VM we need something running Python 3.5+, to support our Discord bot code. The default (as I write) is Debian GNU/Linux 10 (buster), which is running Python 3.7: good enough for us! Once the VM is live we can start getting our code set up on it. You’ve got a few options for connecting to Linux VMs. Once you’ve done so, it’s time to install and run some code!

To run our code, we first install and set up pip (the package installer for Python), then use pip to install the Discord library onto the server. We can drop our code directly into a new bot.py file; organizing it into more structured files can come after we move past the Hello, World! stage. For now you’ll hard-code the bot token into bot.py, even though that gives me the shivers. And now we’re ready to run our bot!

python3 bot.py

You can add the bot to your server by going back to the Discord Developer portal, selecting your App, and looking under Settings for OAuth2. Once you choose the right scope (we can stick with just bot for now), a new link will appear below the Scopes box. Loading that link in your browser is all it takes to add your new bot to a server you have permission to manage, and now you are all set! Test it out by sending the message `..hello` to your server.

You can read more about the Free Tier for Compute Engine and other services, and come back next time for a deeper exploration into operations for your Google Cloud-powered Discord bot.
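For reference, a minimal bot.py along the lines described above might look like the sketch below. Assumptions worth flagging: it targets the discord.py library (the v1.x era, installed with `pip3 install -U discord.py`), uses `..` as the command prefix so that `..hello` triggers the command, and reads the token from a DISCORD_TOKEN environment variable rather than hard-coding it, which is a small improvement over the Hello, World version in the post.

```python
import os

import discord
from discord.ext import commands

# ".." as the prefix means members trigger commands with messages
# like "..hello"; pick any prefix that won't collide with normal chat.
bot = commands.Bot(command_prefix="..")

@bot.command()
async def hello(ctx):
    # Reply in the same channel the command was sent from.
    await ctx.send("Hello, World!")

if __name__ == "__main__":
    # Fail fast with a clear message if the token isn't set.
    token = os.environ.get("DISCORD_TOKEN")
    if not token:
        raise SystemExit("Set DISCORD_TOKEN before starting the bot")
    bot.run(token)
```

Run it with `python3 bot.py` as in the post; the process has to stay alive for the bot to keep responding, which is exactly why the always-on VM matters here.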
Source: Google Cloud Platform

We Turned Off the Paid Requirement to Skip Update Reminders. Got More Feedback? We’re All Ears!

The August 31st announcement of updates to our product subscriptions has enabled us to make serious investments in building what you want. So we’re here to say: let us know what that is! 

On September 13th, Scott Johnston announced that we are moving forward with Docker Desktop for Linux. Desktop for Linux is currently the second most-liked item on our public roadmap, so it’s an easy call for us to invest in it. We want to know what else you’re interested in so we can spend our time building things that will make your life easier. There are two pretty easy ways to get involved:

- Upvote issues. Take a gander through the issues that are out there and upvote ideas you would enthusiastically use in your day-to-day work.
- Submit a new issue. Have a problem you can’t solve with Docker as it stands today? Want more functionality in parts of the experience? Let us know about it!

There’s a bunch of us here who get together every other week to review the roadmap and give updates to folks, so we promise we’ll see it. We’re also scanning the feedback we get from your ratings in Desktop and on Hub so that we can keep a pulse on what users want us to improve. 

Your feedback also helps us correct our path when we miss the mark, like in 3.3 when we introduced the “Skip this update” behavior. That’s why, in Docker Desktop 3.4 and above, we removed the requirement to be a Pro/Team subscriber in order to skip reminder prompts about Docker Desktop releases. Now, when a new update becomes available, the whale icon will change to indicate that there’s an update available, and you’ll be able to choose when to download and install it. Two weeks after an update first becomes available, a reminder notification like the one below will appear.

On versions 3.4 and above, if you click “Skip this update”, you won’t get any additional reminders for that particular update. If you click “Snooze” or dismiss the dialog, you’ll get a reminder to update the following day. We’re curious what additional changes you’d like to see in the update process; some initial ideas are already on our public roadmap, so let us know what you think!

Your feedback really is what makes us thrive. Share with us the good, the bad and the missing. We’re here to listen. 
Source: https://blog.docker.com/feed/

Looking for a Docker Alternative? Consider This.

Docker recently announced updates and extensions to our product subscriptions. Docker CEO Scott Johnston also posted a blog about the changes. Much of the discussion centered on what the licensing changes mean for users of Docker Desktop, which remains free for small businesses and several other user types, but now requires a paid subscription — starting at $5 per user per month — for professional use in medium to large businesses.

Earlier this month, Docker Captain Bret Fisher weighed in on the debate by posting a YouTube video to his DevOps and Docker Live Show (Episode 138). In the nearly 90-minute episode, Bret dives into what Docker Desktop does, why we need it, why we should care, whether users can replace it with a simple tool, and more.

In digging into the nitty-gritty, Bret makes a lot of great points that can help you understand how the new Docker subscription changes may affect you, if at all. Here are the top 5 takeaways from Bret’s video (not necessarily in the order in which he shared them):

1) Value for money

Bret reminded his audience of the many things — some of them complex and subtle — that Docker Desktop does that make it such a valuable developer tool.

But wait, there’s more! 

These are just some of the things you don’t get if you don’t use Docker Desktop — and this is not even a complete list. 

The question for those affected by the licensing changes: Is all this functionality worth the price of a cup of coffee each month? (Note: the paid subscription is to Docker, not to Docker Desktop.) 

2) What is still free?

In all the discussion about the new subscription charges, it’s important to remember there’s a broad range of exceptions where users can continue to enjoy free usage. Bret goes through these carefully.

Although using Docker Desktop in larger businesses will require a paid subscription (Pro, Team or Business), it’s still free for small businesses with fewer than 250 employees and less than $10 million in annual revenue. It’s also still free for personal use, education, and non-commercial open source projects.

As an example of personal use, Bret pointed to your kids running Minecraft in a Java container on your home server. The education exception covers, for example, students learning how to use Docker.

For non-commercial open source projects, Bret said most people are in one of two boats: If you’re working in open source as a volunteer in your spare hours, you won’t need to pay, but if you’re using open source in your job while on the clock for your employer, you’ll likely need to buy a license. 

Also free is the new Docker Personal subscription, which replaces the former Docker Free subscription and focuses on open source communities, individual developers, education and small businesses. (Check out our FAQ for more detail on all our subscription tiers.)

Worth noting: Docker is allowing a grace period for users to comply with the new license agreement. If you accept the update to the service agreement that Docker recently pushed out, you have until January 31, 2022 to pay for your subscription to use Docker Desktop. The subscription is $5 per person per month, with no limit on how many machines you can put the software on.

Bret said Docker has no plans to enforce payment, but rather will trust customers to comply.

3) No changes for Linux/open source users

Remember, if you’re on Linux, none of the licensing changes apply to you. As Bret explains, Docker Desktop is a mix of open source and closed source software, and it’s the closed source bits to which the Docker Desktop licensing changes apply. That means all the binaries (Docker Engine, Docker Daemon, Docker CLI, Docker Compose, BuildKit, libraries, etc) and anything open source continues to be free of charge.

However, if you’re on a Mac or Windows PC and you installed Docker Desktop to simplify the running of a Linux VM, then you could be affected by the changes. In short, the Docker Desktop licensing changes are focused on the Docker mega-tool. Everything open source stays open source.

4) The why

Docker has been thinking for years about how to create a business model that will allow it to grow sustainably. But timing is everything.

As Bret explains, when Docker Desktop launched in 2017, the product was a shadow of what it is today in terms of features and added value. So asking people to pay for a license at that time would have failed, and the product we know and love today likely wouldn’t even exist.

The company flirted with charging for Docker Desktop a few years ago, but then backed away from the idea when it decided not to go the enterprise software route. But now, finally, the time has come — and with good reason. Over the past year, Docker has added a slew of features, such as image scanning in the Docker CLI, Docker Desktop on Apple Silicon, Audit Logs in Docker Hub, GPU support in Docker Desktop, BuildKit Dockerfile mounts, new Docker Verified Publisher images and more. And a glance at our public roadmap tells you there’s more of the same coming down the pike in the year ahead.

In Bret’s words, the changes happening now are, “really all about Docker just trying to make a sustainable business model around gigantic companies trying to use [Docker’s] product to get their companies’ jobs done every day that aren’t paying Docker a dime for it.”

5) DIY Alternatives to Docker Desktop? Yes, but …

Are there alternatives to Docker Desktop that do the job as well? For those who don’t want to pay for a license or are forced off Docker Desktop by an employer who doesn’t want to pay, Bret explores a handful of contenders, from Podman and minikube to containerd and Lima.

But all of these alternatives involve multiple steps and tons of caveats for a fraction of the features, prompting this from Bret, “To my knowledge, there is no version of a comparable single product that provides all of this stuff anywhere close to what Docker does.”

So there you have it, from the mouth of a Docker Captain. Most of us have gotten used to free software from Docker, and now the game is changing. For what it’s worth, Bret shares that he started paying for Docker Hub years ago because he finds the software is critical to his workflow.

If you’re interested in more of Bret’s content, he has multiple Docker and Kubernetes courses, a weekly YouTube Live, a Docker-focused podcast, and a DevOps Discord chat server. You can also chat with him on Twitter at @BretFisher.
Source: https://blog.docker.com/feed/