Improving your security posture with centralized secrets management

Adopting centralized secrets management is an important step in improving your organization's security posture. Centralized solutions provide unified auditing, access controls, and policy management, but many organizations struggle to install, configure, and drive internal adoption of these solutions due to a lack of integrations, lack of experience, or organizational resistance.

One of the biggest advantages of a centralized secrets management solution is mitigating secret sprawl. Without a centralized solution, secrets such as API keys, certificates, and database passwords often end up committed to a source repository, saved on a corporate wiki page, or even written on a piece of paper. When secrets are sprawled like this, you lose the ability to easily audit and control access to their values, allowing an attacker to move undetected through a system, as has happened in several recent data breaches.

Secret Manager is a generally available (GA) centralized secrets management solution hosted on Google Cloud. With Secret Manager, you don't have to install custom software or manage any systems: you can easily store credentials and other sensitive data, manage permissions using Cloud IAM, and audit access using Cloud Audit Logs. And because Secret Manager is a fully managed service, running it doesn't create extra operational overhead.

Adopting a centralized secrets management solution is not without challenges, however. Many customers operate in heterogeneous environments and platforms, meaning secrets need to be accessed from varying software stacks, programming languages, operating systems, and third-party services. For example, what if a CI/CD system needs an API token, or an automation tool needs to store a TLS certificate? Without integrations between these systems, developers will most likely copy and paste their secrets into third-party systems, circumventing the overall value of centralization. A good secrets management solution, then, needs to provide extensibility and interoperability with first- and third-party systems.

While Secret Manager already integrates with Google Cloud products like Cloud IAM and VPC Service Controls, we recognize many of our customers use tools and technologies outside of our cloud ecosystem. This post looks at some popular third-party tools and services and shows how Secret Manager can help you create, manage, and access secrets in those systems.

HashiCorp Terraform

HashiCorp's Terraform is a popular third-party tool for provisioning and managing infrastructure as code. You can create, manage, and access Secret Manager secrets from within your existing infrastructure templates. Suppose your infrastructure setup includes generating a TLS certificate: you can generate that certificate with Terraform and store the corresponding private key in Secret Manager, or read an existing secret back into Terraform, as shown in the sketch below.
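
The original post includes Terraform snippets at this point; here is a minimal sketch of the idea, assuming the hashicorp/tls and hashicorp/google providers and hypothetical names such as my-tls-key and my-api-token. The replication block syntax varies across google provider versions.

```hcl
# Generate a TLS private key with the hashicorp/tls provider.
resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

# Create a Secret Manager secret to hold the key (hypothetical secret_id).
resource "google_secret_manager_secret" "tls_key" {
  secret_id = "my-tls-key"

  replication {
    automatic = true
  }
}

# Store the PEM-encoded private key as a new secret version.
resource "google_secret_manager_secret_version" "tls_key" {
  secret      = google_secret_manager_secret.tls_key.id
  secret_data = tls_private_key.example.private_key_pem
}

# Read the latest version of an existing secret back into Terraform.
data "google_secret_manager_secret_version" "api_token" {
  secret = "my-api-token"
}
```

The data source exposes the payload through its secret_data attribute, which you can reference elsewhere in your configuration; because the value is sensitive, protect your Terraform state accordingly.
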
In addition to creating and accessing secrets, you can also enable, disable, and destroy secret versions, as well as manage Cloud IAM permissions. For more information, check out the Terraform Google Secret Manager documentation.

GitHub Actions

GitHub Actions is a popular CI/CD solution integrated into GitHub. You can access Secret Manager secrets from your GitHub Actions build steps and export those secrets for use in subsequent build steps. While you could store secrets directly on GitHub, over time this leads to secret sprawl and reduced auditability, and decentralized secrets management makes it harder to audit and control access. For this reason, we developed a GitHub Action so you can use Secret Manager as your centralized secrets management solution.

Suppose your GitHub Action needs to publish a container image to a private registry, which requires an API token. You would create and store this API token in Secret Manager and then configure your GitHub Action to pull the value from Secret Manager as needed. This ensures that each execution of your GitHub Action generates an audit entry, and you can revoke access to the secret at any time using Cloud IAM.

Spring Boot

The Spring Cloud GCP project allows Spring Boot developers to adopt and integrate Google Cloud services in their microservices. Since Spring Cloud GCP 1.2.3, you can access Secret Manager secrets as Spring configuration values. This provides a convenient and familiar abstraction for Java developers and improves portability across environments like development, staging, and production. With Spring Cloud GCP, you can reference one or more secrets as properties in your application.properties file, or map secrets directly inline without storing them in an intermediate property value. The Spring Cloud GCP project handles all the heavy lifting; you can see the complete sample on GitHub.

Berglas

Berglas is a predecessor to Secret Manager, and it is now fully interoperable with Secret Manager. That means you can create, manage, and access Secret Manager secrets using the familiar Berglas APIs. Whereas previously you would use the berglas:// prefix to specify a secret, you can now use the sm:// prefix to refer to a Secret Manager secret. As an added bonus, existing Berglas users can migrate to Secret Manager using the berglas migrate command.

Integrations ahead

Adopting a tool like Secret Manager improves your security posture with centralized auditing and access controls, so you can easily manage, audit, and access secrets like API keys and credentials. With these new integrations, you can now accomplish these tasks from your favorite tools, frameworks, and services, all with minimal changes to your existing workflows.

At Google Cloud, we aim to make security as easy as possible. We look forward to sharing more first- and third-party integrations, better encryption controls, and expanded management functionality with you in the future.

Finally, we would love your feedback! Please connect with us via any of the Secret Manager forums and ask questions on our Stack Overflow tag.
Source: Google Cloud Platform

Azure + Red Hat: Expanding hybrid management and data services for easier innovation anywhere

For the past few years, Microsoft and Red Hat have co-developed hybrid solutions enabling customers to innovate both on-premises and in the cloud. In May 2019, we announced the general availability of Azure Red Hat OpenShift, allowing enterprises to run critical container-based production workloads via an OpenShift managed service on Azure, jointly operated by Microsoft and Red Hat.

Microsoft and Red Hat are now working together to further extend Azure services to hybrid environments across on-premises and multi-cloud with upcoming support of Azure Arc for OpenShift and Red Hat Enterprise Linux (RHEL), so our customers will be able to more effectively develop, deploy, and manage cloud-native applications anywhere. With Azure Arc, customers will have a more consistent management and operational experience across their Microsoft hybrid cloud including Red Hat OpenShift and RHEL.

What's new for Red Hat customers with Azure Arc

As part of the Azure Arc preview, we’re expanding Azure Arc’s Linux and Kubernetes management capabilities to add support specifically for Red Hat customers, enabling you to:

Organize, secure, and govern your Red Hat ecosystem across environments

Many of our customers have workloads sprawling across clouds, datacenters, and edge locations. Azure Arc enables customers to centrally manage, secure, and control RHEL servers and OpenShift clusters from Azure at scale. Wherever the workloads are running, customers can view inventory and search across it from the Azure Portal. They can apply policies and manage compliance for connected servers and clusters with Azure Policy, for one or many clusters at a time. Customers can also enhance their security posture through built-in Azure security policies and RBAC for the managed infrastructure, which work the same way wherever the workloads run. As Azure Arc progresses toward general availability, more policies will be enabled, such as reporting on expiring certificates, password complexity, managing SSH keys, and enforcing disk encryption.
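
As a rough illustration (not from the original post), onboarding existing resources to Azure Arc might look like the following Azure CLI sketch; the resource group, cluster name, and placeholder IDs are hypothetical.

```sh
# Onboard an existing OpenShift/Kubernetes cluster to Azure Arc using the
# current kubeconfig context (requires the connectedk8s CLI extension).
az extension add --name connectedk8s
az connectedk8s connect --name my-openshift-cluster --resource-group my-arc-rg

# Onboard a RHEL server by registering the Azure Connected Machine agent
# (run on the server after installing the azcmagent package).
azcmagent connect \
  --resource-group my-arc-rg \
  --tenant-id <tenant-id> \
  --location eastus \
  --subscription-id <subscription-id>
```

Once connected, the server and cluster appear as Azure resources, so Azure Policy assignments and RBAC apply to them like any other resource.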

In addition, SQL Server 2019 for RHEL 8 is now quicker to deploy via new images available in the Azure Marketplace, and we're expanding Azure Arc to manage SQL Server on RHEL, providing integrated database and server governance via unified Azure Policies.
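
For example, a sketch of locating and deploying those Marketplace images with the Azure CLI; the resource group and VM names are hypothetical, and the exact image URN should be taken from the list output.

```sh
# List SQL Server marketplace images published by Microsoft, then pick a RHEL 8 offer.
az vm image list --publisher MicrosoftSQLServer --all --output table

# Create a VM from the chosen image URN (placeholder below; substitute a URN from the list).
az vm create \
  --resource-group my-sql-rg \
  --name sql-rhel8-vm \
  --image MicrosoftSQLServer:<offer-from-list>:<sku>:latest \
  --admin-username azureuser \
  --generate-ssh-keys
```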

Finally, Azure Arc makes it easy to use Azure Management services such as Azure Monitor and Azure Security Center when dealing with workloads and infrastructure running outside of Azure.

Manage OpenShift clusters and applications at scale

Manage container-based applications running in the Azure Red Hat OpenShift service on Azure, as well as OpenShift clusters running on IaaS, virtual machines (VMs), or on-premises bare metal. Applications defined in GitHub repositories can be automatically deployed via Azure Policy and Azure Arc to any repo-linked OpenShift cluster, and policies can be used to keep them up to date. New application versions can be distributed globally to all Azure Arc-managed OpenShift clusters using GitHub pull requests, with full DevOps CI/CD pipeline integrations for logging and quality testing. Additionally, if an application is modified in an unauthorized way, the change is reverted, so your OpenShift environment remains stable and compliant.
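
As an illustration of that GitOps flow, here is a sketch using the k8s-configuration Azure CLI extension to link an Arc-connected OpenShift cluster to a Git repository; the names and repository URL are hypothetical, and the command surface has changed across extension versions.

```sh
# Attach a source-control (GitOps) configuration to an Arc-connected cluster so that
# manifests in the linked repository are continuously applied and kept in sync.
az extension add --name k8s-configuration
az k8s-configuration create \
  --name my-app-config \
  --cluster-name my-openshift-cluster \
  --resource-group my-arc-rg \
  --cluster-type connectedClusters \
  --repository-url https://github.com/contoso/arc-gitops-demo \
  --scope cluster \
  --operator-namespace cluster-config
```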

Run Azure Data Services on OpenShift and anywhere else

Azure Arc enables you to run Azure data services on OpenShift on-premises, at the edge, and in multi-cloud environments, whether on a self-deployed cluster or a managed container service like Azure Red Hat OpenShift. With Azure Arc support for Azure SQL Managed Instance on OpenShift, you'll know your container-based data infrastructure is always current and up to date. Microsoft SQL Big Data Clusters (BDC) support for OpenShift provides a new container-based deployment pattern for big data storage and analytics, allowing you to elastically scale your data with your dynamic OpenShift-based application anywhere it runs.

Managing multiple configurations for an on-premises OpenShift deployment from Azure Arc.

Azure SQL Managed Instances within Azure Arc.

If you’d like to learn more about how Azure is working with Red Hat to make innovation easier for customers in hybrid cloud environments, join us for a fireside chat between Scott Guthrie, EVP of Cloud and AI at Microsoft, and Paul Cormier, president and CEO of Red Hat, including a demo of Azure Arc for Red Hat today at the Red Hat Summit 2020 Virtual Experience.

Private hybrid clusters and OpenShift 4 added to Azure Red Hat OpenShift

Rounding out our hybrid offerings for Red Hat customers, today we’re announcing the general availability of Azure Red Hat OpenShift on OpenShift 4.

This release brings key innovations from Red Hat OpenShift 4 to Azure Red Hat OpenShift. Additionally, we're enabling features to support hybrid and enterprise customer scenarios, such as:

Private API and ingress endpoints: Customers can now choose between public and private cluster management (API) and ingress endpoints. With private endpoints and Azure ExpressRoute support, we're enabling private hybrid clusters, allowing our mutual customers to extend their on-premises solutions to Azure; see the sketch after this list.
 
Industry compliance certifications: To help customers meet their compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now PCI DSS, HITRUST, and FedRAMP certified. Azure maintains the largest compliance portfolio in the industry, both in the total number of offerings and in the number of customer-facing services in assessment scope.
 
Multi-Availability Zone clusters: To ensure the highest resiliency, cluster components are deployed across three Azure Availability Zones in supported Azure regions to maintain high availability for the most demanding mission-critical applications and data. Azure Red Hat OpenShift has a Service Level Agreement (SLA) of 99.9 percent.

Cluster-admin support: We’ve enabled the cluster-admin role on Azure Red Hat OpenShift clusters, enabling full cluster customization capabilities, such as running privileged containers and installing Custom Resource Definitions (CRDs).
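
To make the private endpoint option concrete, here is a rough sketch of creating a private Azure Red Hat OpenShift 4 cluster with the Azure CLI; the resource group, cluster, virtual network, and subnet names are hypothetical, and a virtual network with master and worker subnets is assumed to exist.

```sh
# Create an Azure Red Hat OpenShift 4 cluster whose API server and default
# ingress are reachable only from the virtual network (hypothetical names).
az aro create \
  --resource-group my-aro-rg \
  --name my-private-cluster \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --apiserver-visibility Private \
  --ingress-visibility Private
```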

Getting started with Azure Arc

To learn more about Azure Arc for RHEL environments, get started with the preview today. For anyone interested in Azure Arc-enabled OpenShift, a public preview is coming soon. Contact us here for more info.
Source: Azure

Announcing the general availability of Windows Server containers and private clusters for Azure Kubernetes Service

Today's application environments are often heterogeneous, composed of both Windows and Linux applications. Using Kubernetes to host containers on both Windows Server and Linux not only saves costs but also reduces operational complexity. Many Azure customers have demonstrated this with their usage of Windows Server containers. Their success in our preview makes me thrilled to announce the general availability of Windows Server container support on Azure Kubernetes Service (AKS).

AKS simplifies the deployment and management of Kubernetes clusters and provides a highly reliable and available environment for your applications. It integrates seamlessly with world-class development tools such as GitHub and Visual Studio Code and is built on years of Microsoft security expertise focusing on data protection and compliance. With the general availability of Windows Server containers, you can now lift and shift your Windows applications to run on managed Kubernetes service with Azure and get the full benefits of AKS for your production workloads using consistent tools and processes. For example, you can create, upgrade, and scale Windows node pools in AKS through the standard tools (portal/CLI) and Azure will help manage the health of the cluster automatically. Running both Windows and Linux applications side by side in a single AKS cluster, you can modernize your operations processes for a broader set of applications while increasing the density (and thus lowering the costs) of your application environment.
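
For example, a minimal sketch of adding and scaling a Windows Server node pool on an existing AKS cluster with the Azure CLI; the resource group, cluster, and node pool names are hypothetical (Windows node pool names are limited to six characters).

```sh
# Add a Windows Server node pool to an existing AKS cluster that uses the Azure CNI
# network plugin (required for Windows nodes); names here are placeholders.
az aks nodepool add \
  --resource-group my-aks-rg \
  --cluster-name my-aks-cluster \
  --name npwin \
  --os-type Windows \
  --node-count 2

# Scale the Windows node pool as demand changes; AKS manages node health automatically.
az aks nodepool scale \
  --resource-group my-aks-rg \
  --cluster-name my-aks-cluster \
  --name npwin \
  --node-count 4
```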

Today, we're also announcing the general availability of both private clusters and managed identities support in AKS. This further empowers our customers to achieve hardened security and meet compliance requirements with reduced effort. Private clusters ensure that customers can create and use managed Kubernetes clusters that exist only inside their private network and are never exposed on the internet. This network isolation provides security assurances that are especially important for regulated industries like finance and health care. In addition, Azure managed identities for AKS allow you to interact securely with other Azure services, including Azure Monitor for Containers, Azure Policy, and more. With the introduction of managed identities, you don't have to manage your own service principals or rotate credentials as often.
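
A sketch of enabling both options at cluster creation time with the Azure CLI (the resource group and cluster names are hypothetical):

```sh
# Create an AKS cluster whose API server has no public endpoint and which uses a
# managed identity instead of a service principal for its control-plane credentials.
az aks create \
  --resource-group my-aks-rg \
  --name my-private-aks \
  --enable-private-cluster \
  --enable-managed-identity \
  --network-plugin azure
```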

Applying best practices makes it easier to optimize your enterprise Kubernetes environment and applications. We continue to develop more integrations between AKS and Azure Advisor, bringing industry best practices right into the AKS experience. Whether you are new to Kubernetes or a seasoned user, you receive proactive and actionable recommendations to secure resources, maintain cluster hygiene, and increase operational efficiency. These recommendations are based on our learnings from thousands of customer engagements. Likewise, we have integrated developer advice into the VS Code extension for Kubernetes and security advice into Azure Security Center. We are also focused on providing learning resources, frameworks, and tools to ensure developers, operators, and architects in every enterprise can successfully use Kubernetes on Azure. Putting all this together gives you more confidence in your use of Kubernetes, even as you are learning the system.

We’re going through unprecedented challenges in the world today. I hope that these updates make it easier for you to secure and optimize your Kubernetes environment today, allowing you to focus your energy on your business critical projects. You can learn more about Kubernetes on Azure here.
Source: Azure

Accelerating Cybersecurity Maturity Model Certification (CMMC) compliance on Azure

As we deliver on our ongoing commitment to serving as the most secure and compliant cloud, we’re constantly adapting to the evolving landscape of cybersecurity to help our customers achieve compliance more rapidly. Our aim is to continue to provide our customers and partners with world-class cybersecurity technology, controls, and best practices, making compliance faster and easier with native capabilities in Azure and Azure Government, as well as Microsoft 365 and Dynamics 365.

In architecting solutions with customers, a foundational component of increasing importance is building more secure and reliable supply chains. For many customers, this is an area where new tools, automation, and process maturity can improve an organization’s security posture while reducing manual compliance work.

In preparing for the new Cybersecurity Maturity Model Certification (CMMC) from the Department of Defense (DoD), many of our customers and partners have asked for more information on how to prepare for audits slated to start as early as the summer of 2020. 

Designed to improve the security posture of the Defense Industrial Base (DIB), CMMC requires an evaluation of the contractor’s technical security controls, process maturity, documentation, policies, and the processes that are in place and continuously monitored. Importantly, CMMC also requires validation by an independent, certified third-party assessment organization (C3PAO) audit, in contrast to the historical precedent of self-attestation.

Expanding compliance coverage to meet CMMC requirements

Common questions we've heard from customers include: "When will Azure achieve CMMC accreditation?" and "Which Microsoft cloud environments will be certified?"

While the details are still being finalized by the DoD and CMMC Accreditation Body (CMMC AB), we expect some degree of reciprocity with FedRAMP, NIST 800-53, and NIST CSF, as many of the CMMC security controls map directly to controls under these existing cybersecurity frameworks. Ultimately, Microsoft is confident in its cybersecurity posture and is closely following guidance from DoD and the CMMC AB to demonstrate compliance to the C3PAOs. We will move quickly to be evaluated once C3PAOs are accredited and approved to begin conducting assessments. 

Microsoft’s goal is to continue to strengthen cybersecurity across the DIB through world-class cybersecurity technology, controls, and best practices, and to put its cloud customers in a position to inherit Microsoft’s security controls and eventual CMMC certifications. Our intent is to achieve certification for Microsoft cloud services utilized by DIB customers.

Note: While commercial environments are intended to be certified as they are for FedRAMP High, CMMC by itself should not be the deciding factor on choosing which environment is most appropriate. Most DIB companies are best aligned with Azure Government and Microsoft 365 GCC High for data handling of Controlled Unclassified Information (CUI).

New CMMC acceleration program for a faster path to certification

The Microsoft CMMC acceleration program is an end-to-end program designed to help customers and partners that serve as suppliers to the DoD improve their cybersecurity maturity, develop the cyber critical thinking skills essential to CMMC, and benefit from the compliance capabilities native to Azure and Azure Government. 

The program will help you close compliance gaps and mitigate risks, evolve your cybersecurity toward a more agile and resilient defense posture, and help facilitate CMMC certification. Within this program, you’ll have access to a portfolio of learning resources, architectural references, and automated implementation tools custom-tailored to the certification journey.

For more information on participating in the program, email cmmc@microsoft.com. 

Learn more about the CMMC framework

Read our in-depth article on CMMC on the Microsoft Tech Communities blog, and stay tuned to the Azure Government Dev Blog for ongoing guidance on implementing Azure to achieve compliance with CMMC requirements.

Disclaimer: Customers are wholly responsible for ensuring their own compliance with all applicable laws and regulations. Information provided in this post does not constitute legal advice, and customers should consult their legal advisors for any questions regarding legal or regulatory compliance.
Source: Azure

Microsoft Services is now a Kubernetes Certified Service Provider

Modern applications are increasingly built using containers, which are microservices packaged with their dependencies and configurations. For this reason, many companies are either containerizing their existing applications or creating new complex applications that are composed of multiple containers.

As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes, open-source software for deploying and managing containers at scale, provides an API that controls how and where those containers run.
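
As a small illustration of that API in action with the kubectl CLI (generic Kubernetes usage, not specific to any offering described here; the deployment and image names are arbitrary):

```sh
# Declare the desired state: three replicas of a containerized web server.
kubectl create deployment hello-web --image=nginx:1.25 --replicas=3

# Kubernetes schedules the containers; inspect where they actually run.
kubectl get pods -l app=hello-web -o wide

# Scale the application without managing individual servers.
kubectl scale deployment hello-web --replicas=5
```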

Kubernetes Certified Service Provider

Microsoft Services is now a Kubernetes Certified Service Provider (KCSP). The KCSP program is a pre-qualified tier of vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes. The KCSP partners offer Kubernetes support, consulting, professional services, and training for organizations embarking on their Kubernetes journey.

We have trained hundreds of consultants on Kubernetes, developed a comprehensive service offering around Kubernetes, and successfully delivered Kubernetes engagements to many customers in all industries, all over the world.

Using our global reach and ecosystem, we empower organizations to put innovation into practice to deliver strategic business outcomes, maximize the value of cloud technology, and drive success through continual support.

Microsoft Services is your partner in adopting container capabilities and frameworks such as Kubernetes, helping your organization embrace modern technologies and increase speed and agility while maintaining control and good governance.

The Azure Workloads for Containers offering

We recognize a need to help you address your secure infrastructure challenges and requirements. We see container infrastructure as more than just the container orchestration layer; it also includes networking, storage, secrets, and Infrastructure as Code (IaC).

Microsoft Services has a full Kubernetes offering, called Azure Workloads for Containers. This offering is composed of several workstreams that focus on the activities and outcomes that are most relevant to our customers. These workstreams provide full flexibility to our customers as each one of them can be selected independently and customized to meet the specific needs of a given project.

Below are the details of these workstreams.

Kubernetes foundation

Design and plan Azure Kubernetes Service (AKS) cluster and shared services.
Implement AKS cluster and shared services.
Deploy application on AKS.
Test application.
Roll out to production.

Containers migration

Assess, design, and plan migration.
Migrate the container-based application(s).
Test the migrated application(s).
Roll out to production.

Kubernetes security hardening

Refactor your security controls for AKS.
Secure your CI/CD pipeline (DevSecOps).
Harden your AKS environment to meet your compliance obligations.
Assist with third-party security product integration.

Kubernetes threat modeling

Build a threat model based on the AKS cluster and the apps running on it.
Identify threats and mitigations.
Produce clear actions to mitigate the threats.

Application containerization

Create container image(s) for one or multiple applications.
Test the application(s) running as containers.
Deploy the application to an AKS cluster in production.

The offering is aligned to Microsoft’s Cloud Adoption Framework for Azure and focuses primarily on the Adopt: Innovate principle of your cloud journey for Kubernetes.

Learn more

To learn more, have a look at the Azure Workloads for Containers datasheet.
Source: Azure