CIS hardening support in Container-Optimized OS from Google

At Google, we follow a security-first philosophy to make safeguarding our clients' and users' data easier and more scalable, with strong security principles built into multiple layers of Google Cloud. In line with this philosophy, we want to make sure that Container-Optimized OS adheres to industry-standard security best practices. To this end, we released a CIS benchmark for Container-Optimized OS that codifies the hardening recommendations and security measures we have been using. Container-Optimized OS 97 releases now support CIS Level 1 compliance, with an option to enable CIS Level 2 hardening.

CIS benchmarks define security recommendations for various software systems, including operating systems. Google previously developed a CIS benchmark for Kubernetes as part of its continued contributions to the container orchestration space. We decided to build a CIS benchmark for Container-Optimized OS because CIS benchmarks are well recognized across the industry, are created and reviewed in open source, and provide a good baseline for hardening your operating systems. Our benchmarks for Container-Optimized OS are based on the CIS benchmarks defined by the CIS security community for distribution-independent Linux OSes. In addition to applying some of the security recommendations for generic Linux distributions, such as making file permissions stricter, we included hardening measures specific to Container-Optimized OS, such as verifying that the OS can check filesystem integrity with dm-verity and that logs can be exported to Cloud Logging. We also removed some checks that don't apply to Container-Optimized OS because of its minimal OS footprint, which reduces the attack surface.
Compliance is not a one-time hardening effort, however. You need to ensure that deployed OS images stay compliant throughout their life. At Google, we continually run scans on our Google Cloud projects to help verify that our VMs and container images stay up to date with the latest CIS security guidelines. To scan a wide range of products with low resource-usage overhead, we developed Localtoast, our own open-source configuration scanner.

Localtoast is highly customizable and can detect insecure OS configurations on local and remote machines, VMs, and containers. Google uses Localtoast internally to help verify CIS compliance on a wide range of Container-Optimized OS installations and other OSes. Its configuration and scan results are stored in the same Grafeas format used by deploy-time security enforcement systems such as Kritis, which makes it easier to integrate with existing supply chain security and integrity tooling. See this video for a showcase of how you can use the Localtoast scanner on Container-Optimized OS.

The Localtoast repo includes a set of scan configuration files for Container-Optimized OS' CIS benchmarks. For other Linux OSes, we include a fallback config based on the distribution-independent Linux CIS benchmarks that aims to provide relevant security findings for a wide range of Linux distributions, with support for more OSes coming in the future. Apart from the configs for scanning live instances, we also released modified configs for scanning container images. Container-Optimized OS 97 and above comes with Localtoast and the Container-Optimized OS-specific CIS scanning config pre-installed.
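To make this concrete, here is a minimal sketch of driving a Localtoast scan from Python. The binary location, config path, and flag spellings below are assumptions based on the open-source repo, not an authoritative CLI reference; consult the Localtoast README for the current invocation.

```python
# Illustrative sketch: assemble a Localtoast scan invocation.
# The binary path and the --config/--result flag names are assumptions;
# check the Localtoast README for the authoritative CLI.
from typing import List

def localtoast_command(config: str, result: str,
                       binary: str = "./localtoast") -> List[str]:
    """Build the argument list for a local Localtoast scan."""
    return [binary, f"--config={config}", f"--result={result}"]

cmd = localtoast_command(
    config="configs/cos_97/instance_scanning.textproto",  # hypothetical path
    result="scan-result.textproto",
)
# The command could then be executed with subprocess.run(cmd, check=True).
print(" ".join(cmd))
```

The resulting `scan-result.textproto` is in the Grafeas format mentioned above, so it can feed the same tooling that consumes other supply chain metadata.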
We welcome you to try out our user guide, and we hope that the provided tools will help you get a step further in your journey toward keeping your cloud infrastructure secure. If you have any questions, don't hesitate to reach out to us.
Source: Google Cloud Platform

Optimize and scale your startup on Google Cloud: Introducing the Build Series

We understand that at each stage of the startup journey, you need different levels of support and resources to start, build, and grow. To help with your journey, we created the Google Cloud Technical Guides for Startups to support your organization across these different milestones.

Technical Guides for Startups to support your startup journey

The Google Cloud Technical Guides for Startups series includes videos and handbooks, consisting of three parts optimized for different stages of a startup's journey:

The Start Series: Begin by building, deploying, and managing new applications on Google Cloud from start to finish.
The Build Series: Optimize and scale existing deployments to reach your target audiences.
The Grow Series: Grow and attain scale with deployments on Google Cloud.

The Start Series is fully available on this playlist. In this series, we introduced topics to get you started on Google Cloud, including setting up your project, choosing the right compute option, configuring databases and networking, and understanding support and billing. Now that you have applications running on Google Cloud, it is time to take the next step and optimize and scale these deployments.

Kicking off the Build Series

With the Start Series complete, we are happy to announce the second program in the series: the Build Series! The Build Series focuses on optimizing deployments and scaling your business, enabling you to build a foundation that accelerates your startup's growth. We will dive into many exciting topics, ranging from startup programs to Google Cloud's data analytics and pipeline solutions, machine learning, API management, and more. You will learn to gain insights from your data and to better manage and secure your applications, accelerating scale and your understanding of your end users. Our first episode shares an overview of these topics and features our new website, which offers many useful startup resources and technical handbooks.
Watch our kick-off video to find out more.

Embark on the journey together

We hope that you will join us on this journey as we Start, Build, and Grow together. Get started by checking out our website and our full playlist on the Google Cloud Tech channel. Don't forget to subscribe to stay up to date. See you in the cloud!
Source: Google Cloud Platform

Advancing systems research with open-source Google workload traces

With the rapid expansion of the internet and cloud computing, warehouse-scale computing (WSC) workloads (search, email, video sharing, online maps, online shopping, etc.) have reached planetary scale and are driving the lion's share of growth in computing demand. WSC workloads also differ from others in their requirements for on-demand scalability, elasticity, and availability. Many studies (e.g., Profiling a warehouse-scale computer) and books (e.g., The Datacenter as a Computer: Designing Warehouse-Scale Machines) have pointed out that WSC workloads have fundamentally different characteristics than traditional benchmarks and require changes to modern computer architecture to achieve optimal efficiency. Google workloads have data and instruction footprints that go beyond the capacity of modern CPU caches, such that the CPU spends a significant portion of its time waiting for code and data. Simply increasing memory bandwidth would not solve the problem, as many accesses are on the critical path for application request processing; it is just as important to reduce memory access latency as it is to increase memory bandwidth.

Over the years, the computer architecture community has expressed the need for WSC workload traces to perform architecture research. Today, we are pleased to announce that we've published select Google workload traces. These traces will help systems designers better understand how WSC workloads perform as they interact with underlying components, and develop new solutions for front-end and data-access bottlenecks.

We captured these workload traces using DynamoRIO on servers running Google workloads — you can find more details at https://dynamorio.org/google_workload_traces.html. To protect user privacy, these traces contain only instruction and memory addresses. We have found them useful for understanding WSC workloads and for seeding internal research on processor front-ends, on-die interconnects, caches, memory subsystems, and other areas that greatly impact WSC workloads. For example, we used these traces to develop AsmDB. Likewise, we hope these traces will enable the computer architecture community to develop new ideas that improve the performance and efficiency of other WSC workloads.
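Traces containing only instruction and memory addresses are still enough to drive classic memory-system studies. As a minimal, self-contained illustration (a toy model, not the actual drmemtrace trace format), the sketch below replays a synthetic address stream through a direct-mapped cache and counts misses:

```python
# Toy direct-mapped cache model driven by a raw address trace.
# This illustrates the kind of study address-only traces enable;
# it is not the DynamoRIO trace format itself.

LINE_SIZE = 64   # bytes per cache line
NUM_SETS = 4     # deliberately tiny so the example produces conflicts

def miss_count(addresses):
    """Replay addresses through a direct-mapped cache; return miss total."""
    tags = [None] * NUM_SETS
    misses = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        idx = line % NUM_SETS
        tag = line // NUM_SETS
        if tags[idx] != tag:   # miss: fill the line
            tags[idx] = tag
            misses += 1
    return misses

# Synthetic trace: two accesses on one line, then a conflicting line,
# then a return to the (now evicted) first line.
trace = [0x0, 0x8, LINE_SIZE * NUM_SETS, 0x0]
print(miss_count(trace))  # → 3
```

Scaling a model like this up to real cache hierarchies, prefetchers, or front-end studies is exactly where the published traces come in.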
Source: Google Cloud Platform

Azure Health Data Services: Engineering product for partners

The healthcare industry has come a long way from putting pen to paper on a pharmacy script or clinical SOAP note to now being able to deliver primary care in the emerging hospital-at-home model. My career in the healthcare and life sciences (HLS) industry has spanned different roles, including military clinician, life science entrepreneur, clinical research application scientist, and business leader. Currently, I head the Partner Alliances team for Microsoft's global health and life sciences cloud and data engineering and product group. Today, I consider myself an HLS generalist, bridging the gap between engineering and its application in the wild. I look forward to continuing to listen to needs, implement solutions, and partner with others to bring forward meaningful change in healthcare.

Last month, we launched Azure Health Data Services, a platform as a service (PaaS) offering designed exclusively to support Protected Health Information (PHI) in the cloud, built on the global open standards Fast Healthcare Interoperability Resources (FHIR)® and Digital Imaging and Communications in Medicine (DICOM). Watching the team work to develop this product, I feel compelled to share how intentional our product team is about building healthcare technologies for an industry that is experiencing historically unprecedented transformation. We are deploying technology that can ingest, transform, and persist data, allowing our customers to use their data across workflows from discovery research to clinical endpoints.1 The underlying technology enables our customers to engage in activities ranging from novel biomarker identification to virtual clinical decision support. For example, today our customers can combine cellular assay data, pathology data, molecular imaging, genomics, and handwritten, voice, and text-derived notes. With so much data, the goal is to enable our customers to derive insights from a single system of record and optimize the user experience for patient, research, and clinical workflows: adherence to treatment increases, scientists gain faster contextual evidence to support their early discoveries, and clinicians can spend more time focused on delivering healthcare without experiencing burnout and information overload. The bottom line is that when you bring these data sets together in a meaningful way, you inherently increase your signal-to-noise ratio: you are no longer looking for a needle in a haystack; you are looking for a book in a library.

Five years ago, under the leadership of Peter Lee, Microsoft made a purposeful decision that enabled us to lead the way in cloud, data, AI, and innovation. In 2020, Microsoft won the Frost & Sullivan Best Practice Award for our commitment to global AI for healthcare IT growth and our innovation and leadership in the industry. The Microsoft executive health leadership team realized that we needed a common standards-based platform for healthcare and life sciences data and a secure, compliant environment for the industry to build on. To accomplish this, we would need to contribute to the interoperability momentum behind the FHIR® standard. We also knew we had to lead with partners that know the space better than we do. We are now focused on building the most trusted health data platform, designed with security and compliance in mind and ready to ingest a variety of data types and standards, with workflow accelerators and scenario-specific features. Our hope is that this will enable our ecosystem of partners to push the last mile of innovation for our shared customers in the provider, pharmaceutical, payor, and life sciences segments.
With our partners as the foundation of our business, we will maintain competitive velocity in such transformational times.

Our approach to building Azure Health Data Services has been to support our partners by building and managing the underlying cloud technology so they can remain focused on the front-line industry scenarios. We appreciate the intimate business propriety required to remain innovative and competitive. For this model to work, we must begin and end with the question “are we going to build, buy, or partner for this given product, feature, or capability?” These decisions are rigorous and informed by key industry opinion leaders, the partner ecosystem, and our leadership teams.

Taking inspiration from industry leaders

To support this thesis, we built the Health Data Services Partner Alliances team. Our charter is to listen to industry leaders like Tom Arneman at EPAM, BJ Moore at Providence, and the broader trusted advisors across the Microsoft health and life sciences partnership ecosystem. This industry-driven feedback challenged us to deliver interoperable, FHIR-enabled services and partner-led solutions. Partners like Redox, Onyx, 3Cloud, EPAM, SAS, Efferent, Teladoc, and ZS Services have been instrumental in providing direct user feedback.

These solutions are coming to life with our mutual customers across the provider, payer, and pharma industries. Together we are delivering diversified solutions across the HLS continuum, for users ranging from translational oncology clinical trial coordinators to care providers remotely accessing their patients. We have worked closely to evolve features with early movers that have deep expertise in multi-modal interoperability deployments, FHIR resource creation, MedTech eventing features for remote patient monitoring, and DICOM for imaging. Now we are scaling these managed services with global partners, their large enterprise HLS practices, and industry-leading ISV solutions. We are deploying a breadth motion and an application toolset that will make it simpler for our partners to build new transactional and analytic SMART on FHIR and other applications on top of Azure Health Data Services.

These partners are the cornerstone of building solutions for the greatest challenges we see today and foresee in the years to come. At Microsoft, we focus on aligning with them on a defined customer and business opportunity; we then commit resources and appropriate enablement to deliver timely and measurable business value. When we execute in this way, our likelihood of optimized collaboration, product-market fit, market adoption, and long-term partnership is much greater.

Azure Health Data Services is built with the goal of enabling our customers to do more with their health data. We want our partners to be able to provide them with solutions to do so: solutions optimized for Azure, Microsoft Cloud for Healthcare, and Azure Health Data Services that can help them transform the patient experience, discover new insights, and accelerate innovation.

Learn more

Learn more about Azure Health Data Services.
Read our recent blog, “Microsoft launches Azure Health Data Services to unify health data and power AI in the cloud.”
Learn more about Microsoft Cloud for Healthcare.
Learn more about how health companies are using Azure to drive better health outcomes.

1EPAM Debuts New Cloud-Powered Digital Clinical Trials Platform.

®FHIR is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with their permission.
Source: Azure

AWS Service Catalog now supports the AWS Cloud Development Kit (AWS CDK)

AWS Service Catalog constructs for the AWS Cloud Development Kit (AWS CDK) are now available. Service Catalog administrators can now define their catalog in code within a CDK application that is deployed through AWS CloudFormation. They can also define a Service Catalog product entirely in code in the CDK, without first having to upload CloudFormation templates to Amazon Simple Storage Service (Amazon S3) or AWS CodeCommit and reference them.
Source: aws.amazon.com

Clone AWS Launch Wizard inputs to simplify future SAP deployments

AWS Launch Wizard now lets you clone the inputs used when deploying an SAP system so that they can be reused in future deployments. In most cases, the majority of these parameters stay the same across deployments. With today's launch, you no longer need to re-enter each parameter manually for subsequent deployments, saving time and reducing errors by letting you focus on the few parameters that make each deployment unique.
Source: aws.amazon.com

Amazon RDS for PostgreSQL now supports M6i and R6i instances with new instance sizes of up to 128 vCPUs and 1,024 GiB of RAM

Amazon Relational Database Service (Amazon RDS) for PostgreSQL version 11 and higher now supports M6i and R6i instances. M6i instances are the sixth generation of Amazon EC2 x86-based general-purpose compute instances, designed to provide a balance of compute, memory, storage, and network resources. R6i instances are the sixth generation of memory-optimized Amazon EC2 instances, built for memory-intensive workloads. M6i and R6i instances are powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the host hardware's compute and memory resources to your instances.
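Moving an existing instance to one of the new classes is a single instance-class modification. A minimal sketch (the instance identifier is hypothetical; with boto3 installed, the dictionary below would be passed to the RDS `modify_db_instance` call):

```python
# Sketch: parameters for moving an RDS for PostgreSQL instance to the
# largest new R6i class. The identifier is hypothetical; with boto3 you
# would pass these as boto3.client("rds").modify_db_instance(**params).
params = {
    "DBInstanceIdentifier": "my-postgres-db",  # hypothetical instance name
    "DBInstanceClass": "db.r6i.32xlarge",      # 128 vCPUs, 1,024 GiB RAM
    "ApplyImmediately": False,                 # defer to the maintenance window
}
print(params["DBInstanceClass"])
```

Setting `ApplyImmediately` to `False` defers the change (and its restart) to the next maintenance window rather than applying it right away.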
Source: aws.amazon.com