What you need to know about compliance audits

Compliance audits are like doctor’s appointments. Nobody likes them, but virtually everybody needs them. Once your company gets beyond a certain size, it’s inevitable that you will be engaging in activities or collecting data that makes you subject to various regulations, whether it’s a hospital subject to HIPAA or a company collecting emails subject to the GDPR.
And eventually, you’re going to have to prove that you’re following those regulations successfully. That’s where the compliance audit comes in.
What is a compliance audit?
A compliance audit is quite literally an audit to see how closely you’re following the rules and regulations to which your company is subject, but it’s also more than that.  It’s about making sure you follow YOUR OWN rules.
Many companies think that all they need is to follow “best practices”, but that’s a fallacy, and for more than one reason.
First, where best practices DO exist, they are the absolute minimum that must be done to be effective. They’re essentially an excuse to stop thinking about how to solve a problem. And as the absolute minimum, there’s one group who absolutely LOVES them: hackers. They know what these “best practices” are, and they’ve had decades to learn how to get around them.
But there’s another important reason to go beyond this notion of “best practices”, and that’s that they simply do not exist. The technology world moves fast and any activity that’s been around long enough to be considered a “best practice” has been around long enough to be outdated.
In a world where data is money and the average data breach costs $3.6 million, a compliance audit is meant to ensure that you are following all of the security and legal controls necessary for your business, and not just blindly playing it by ear and hoping for the best.
How to do a compliance audit
Whether you hire a vendor or decide to do a compliance audit yourself, the process is essentially the same.
Step 1:  Determine what you’re trying to accomplish
The first thing you need to ask yourself is the simplest:  Why are you doing this? Do you have an audit due? Have you been compromised?
What keeps you up at night?
Ultimately you will be judged on your adherence to your particular regulatory scheme. In some cases you can choose a scheme against which you want to be measured, such as NIST or FedRAMP. In others, your line of business will dictate that for you, such as HIPAA for medical institutions, PCI for companies that accept credit cards, or GDPR for companies storing personally identifiable information.
When making your decision, make sure that you are being realistic. It may sound like a great idea to shoot for the ultra-secure FedRAMP High, but do you really want to spend a year and a million or so dollars to do that when you’re not actually providing a product to the United States Federal Government?
Step 2:  Decide what needs to be done
The next step is to determine the roadmap of your audit. How you proceed from here depends on whether you’re doing the audit yourself or hiring an outside vendor.
If you’re hiring an outside vendor, they will most likely provide you with a questionnaire that will enable them to get started without wasting time in your first meetings.  
If you’re performing the audit yourself (perhaps to ensure you’ll pass the third-party audit), you’ll likely download the information detailing what you’ll need to check. For example, NIST compliance requires you to satisfy 600-700 different security controls. FedRAMP Moderate consists of 325 controls in 16 categories and 8 major areas.
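If you’re tracking hundreds of controls yourself, even a minimal internal checklist with an evidence field beats a spreadsheet nobody updates. A Python sketch of that idea (the control IDs and descriptions below are illustrative placeholders, not an official control catalog):

```python
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str      # e.g. an identifier from your chosen scheme; these are illustrative
    description: str
    satisfied: bool = False
    evidence: str = ""   # where the proof lives (ticket, log excerpt, doc link)

def completion(controls):
    """Fraction of tracked controls currently marked satisfied."""
    if not controls:
        return 0.0
    return sum(c.satisfied for c in controls) / len(controls)

controls = [
    Control("AC-2", "Account management", satisfied=True, evidence="IAM review, March"),
    Control("AU-6", "Audit record review"),
]
print(f"{completion(controls):.0%} of tracked controls satisfied")  # 50% of tracked controls satisfied
```

The evidence field matters as much as the satisfied flag; it is what you will reach for when the auditor asks you to prove it.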
Step 3:  Establish appropriate permissions
The whole point of this exercise is making sure that your systems are secure, so presumably the auditors will need permission to access various areas of your infrastructure, such as the network, servers, and so on. Make sure to establish these permissions in such a way that they can be removed later, when the audit is over.
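One way to keep that cleanup from being forgotten is to attach an expiry to every grant at creation time, so stale auditor access is easy to find and revoke. A minimal sketch, assuming a hypothetical in-house access tracker rather than any particular IAM product:

```python
from datetime import datetime, timedelta

class TemporaryGrant:
    """Record an auditor's access with a built-in expiry so revocation isn't forgotten."""
    def __init__(self, user, scope, days):
        self.user = user
        self.scope = scope                                  # e.g. "read-only: network configs"
        self.expires = datetime.utcnow() + timedelta(days=days)

    def is_active(self, now=None):
        """True while the audit window is still open."""
        return (now or datetime.utcnow()) < self.expires

grant = TemporaryGrant("auditor-jane", "read-only: server inventory", days=30)
print(grant.is_active())  # True until the audit window closes
```

A periodic job that lists grants where `is_active()` is false gives you a concrete revocation checklist when the audit is over.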
Step 4:  Perform the actual assessment
This, of course, is the meat of the process, where auditors document information such as:

How many nodes do you have?
What is the networking situation?
What about antivirus protection? How is it kept up-to-date?

Auditors should also look at process, asking questions such as:

Do you have an incident response plan? Is it up-to-date?
Do you store event logs? Do you go through those stored logs?

After answering these questions, you’ll get hit with one of the most important:

Can you prove it?

Having a procedure in place to review event logs for anomalies is useless unless you can show that your team does actually review event logs for anomalies.
It’s in this “proof” step that companies most often fail a compliance audit.
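One lightweight way to generate that proof as a side effect of the review itself is to have the review tooling append a timestamped journal entry every time it runs. A Python sketch (the anomaly patterns and journal filename are illustrative, not a recommended detection ruleset):

```python
import json
from datetime import datetime, timezone

def review_logs(lines, reviewer):
    """Flag suspicious entries and record that the review actually happened."""
    anomalies = [line for line in lines if "FAILED LOGIN" in line]
    record = {
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "lines_scanned": len(lines),
        "anomalies": anomalies,
    }
    # The append-only journal is the artifact an auditor will ask to see.
    with open("log_review_journal.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

sample = ["OK: user alice logged in", "FAILED LOGIN for bob from 203.0.113.9"]
rec = review_logs(sample, reviewer="sec-team")
print(len(rec["anomalies"]))  # 1
```

Because the journal entry is written by the same code that does the review, the evidence trail cannot drift out of sync with the actual practice.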
Step 5:  Develop the gap analysis between what should be and what actually is
The whole point of doing a compliance audit is to identify places where you’re falling short and document them so you can correct the problem. At the end of this process, you should have a full gap analysis report, as well as one other crucial piece of information: the remediation plan.
A gap analysis that tells you that you have problems but doesn’t provide the means for correcting them is only half the story.
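In code terms, a gap analysis is essentially a set difference between required and implemented controls, with a remediation stub attached to each gap so the report leads straight into a plan. A minimal sketch using made-up control names:

```python
def gap_analysis(required, implemented):
    """Return the controls you're missing, each paired with a remediation stub."""
    gaps = sorted(set(required) - set(implemented))
    # A gap without an owner and a due date is only half the story.
    return [{"control": c, "remediation": "TODO: assign owner and due date"} for c in gaps]

# Illustrative control names, not a real catalog:
required = {"encrypt-at-rest", "mfa", "log-retention-90d"}
implemented = {"mfa"}

report = gap_analysis(required, implemented)
for gap in report:
    print(gap["control"], "->", gap["remediation"])
```

In practice the remediation stubs would be filled in with owners, due dates, and ticket links, turning the gap report into the remediation plan itself.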
What about hiring a compliance auditor?
While you certainly could perform your own compliance audit, it’s usually not in your best interests to do so, for several reasons:

Most companies don’t have compliance experts on staff
Staff members who take on this burden are operating from an “insider” perspective and are likely to just assume things are being done properly without digging deeper
The compliance auditor is always the most hated person in the room
While you don’t need to have a third party perform an audit, if you want the audit to be taken seriously — for example, if you’re trying to prove to your board that you need money for remediation — it’s better to have a third party audit.

If you’re hiring a vendor to perform your compliance audit, make sure that their goal is to understand what you’re doing. A good auditor will establish a relationship with you to help you meet your goals, not just take your money and tell you what you did wrong.
If you’d like to learn more about performing a compliance audit for your company, be sure to Contact Us for more information.
The post What you need to know about compliance audits appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Your Kubernetes Agenda at DockerCon

Kubernetes has seen a rapid rise over the last few years and is becoming one of the most sought after skills. DockerCon is a great opportunity to get hands-on training from industry experts and hear from real customers who have deployed Kubernetes in production.
You’ll also have a chance to learn how Docker is the easiest way to get started with Kubernetes and attend sessions that describe how the Docker platform manages and secures applications on Kubernetes in multi-Linux, multi-OS and multi-cloud customer environments.
Download your Kubernetes agenda and register now for DockerCon!
 
Expert-Led Workshops
Register soon as space is running out in these hands-on workshops!

Kubernetes 101: Getting up and running with Kubernetes – Led by Nigel Poulton, Docker Captain and Pluralsight author and writer of several popular Docker and Kubernetes books
Security Best Practices for Kubernetes – Led by Scott Coulton, Docker Captain and Principal Software Engineer at Microsoft

Customer Case Studies
Hear from Docker customers who are running Kubernetes in production.

McKesson: Serving Cloud Native Developer Teams in a Highly Regulated Environment
Visa: Kubernetes On-Premises Best Practices & Deploying Machine Learning Workloads Inside Visa

Technical Sessions
Learn about the inner workings of Kubernetes and the best practices around operating a Kubernetes environment.

Docker Enterprise Platform and Architecture – Mark Church, Docker
Crafty Requests: Deep Dive Into A Kubernetes CVE – Ian Coldwater, DevSecOps
How Docker Simplifies Kubernetes For The Masses – David Yu, Docker
Building Your Development Pipeline – Laura Tacho, CloudBees and Oliver Pomeroy, Docker
Zero Trust Networks Come to Docker Enterprise Kubernetes – Spike Curtis, Tigera and Brent Salisbury, Docker
Persisting State for Windows Workloads in Kubernetes – Deep Debroy and Anusha Ragunathan, Docker
Using Docker Content Trust (Notary) with Kubernetes Admission Controllers to Further Secure Your Runtime – Justin Cappos, NYU and Zachary Arnold, Ygrene Energy Fund

Open Source Summit
Kubernetes is a featured topic in this year’s Open Source Summit. Connect with different SIG members and leaders and learn how you can contribute to the project.
The Kubernetes session begins Thursday at 2:30pm and includes:

Kubernetes and Container Storage Interface Update –  Michelle Au, Google
Building stateful applications on Docker Enterprise with Rook – Roberto Hashioka, Docker
Windows support – Kubernetes 1.14 windows support graduating to GA; GMSA – Jean Rogue, Docker
Networking – CNI, Network Policy – Arko Dasgupta, Docker Inc.
Birds of a Feather

We look forward to seeing everyone in San Francisco April 29 – May 2.

Whether you’re a beginner or a #Kubernetes pro, #DockerCon is the best place to level up your Kubernetes skills. Register for @DockerCon now!

The post Your Kubernetes Agenda at DockerCon appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Announcing the Cloud Healthcare API beta: Improving data access and shareability across organizations

At Google Cloud, we are focused on providing healthcare and life sciences organizations with innovative technology needed to improve our healthcare system. Through our customers and partners, we are working to improve healthcare for patients, providers, payers, and the many organizations involved in the discovery, development, and delivery of healthcare products and services.
Today, we’re pleased to announce that our Cloud Healthcare API is now in beta. From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data—and better understand that data through the application of analytics and machine learning in real time, at scale.
Cloud Healthcare API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP), providing a critical bridge between existing care systems and applications hosted on Google Cloud. Using the API, customers can unlock significant new capabilities for data analysis, machine learning, and application development. These capabilities, in turn, enable the next generation of healthcare solutions.
While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers. This is why we are also excited to share how some of our newest customers and partners are leveraging Google Cloud to transform the healthcare industry.
Next week at Google Cloud Next, we’ll hear from many healthcare customers, including:

American Cancer Society will outline how it is using Cloud ML Engine on GCP to accurately and quickly identify novel patterns in digital pathology images.
Hunterdon Health will discuss how cloud-native endpoints like Chrome Enterprise can be deployed throughout your healthcare network to increase information access, reduce operational costs, and deliver a better patient experience.
Stratus Medicine will review a serverless architecture for generating real-time clinical predictions using Cloud Healthcare API to feed FHIR and DICOM data into Cloud Machine Learning Engine.
CareCloud will discuss mapping X12 EDI transactions to FHIR as part of a broader approach to building a comprehensive Clinical Data Warehouse.
Kaiser Permanente will talk about how it leverages Google’s CI-CD process, API best practices, and Apigee API management to power its API-first strategy.
LifeImage will demo how they are enabling point of care epidemiology and secure image sharing networks on GCP.
iDigital will present their architecture for a zero-footprint teleradiology solution on top of Cloud Healthcare HL7v2 and DICOM API.

We look forward to continuing to bring innovative products to the healthcare and life sciences space, and partnering with organizations to improve our healthcare system. Visit our website to learn more about Google Cloud’s solutions in healthcare and life sciences.
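For developers getting started, Cloud Healthcare API operations are REST calls against a resource hierarchy of projects, locations, datasets, and stores. A small sketch of building a FHIR resource URL under that hierarchy (the `v1beta1` prefix matches the beta announced here, but verify the current version and path shape against the official API reference before relying on it):

```python
def fhir_resource_url(project, location, dataset, fhir_store, resource_type,
                      base="https://healthcare.googleapis.com/v1beta1"):
    """Build the REST path for a FHIR resource type in a Cloud Healthcare API store.

    Follows the documented projects/locations/datasets/fhirStores hierarchy;
    the API version prefix is an assumption tied to the beta release.
    """
    return (f"{base}/projects/{project}/locations/{location}"
            f"/datasets/{dataset}/fhirStores/{fhir_store}/fhir/{resource_type}")

url = fhir_resource_url("my-project", "us-central1", "clinical", "ehr-store", "Patient")
print(url)
```

An authenticated GET against a URL like this would list `Patient` resources in the store; the project, dataset, and store names above are placeholders.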
Source: Google Cloud Platform

Spinnaker continuous delivery platform now with support for Azure

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It is being chosen by a growing number of enterprises as the open source continuous deployment platform used to modernize their application deployments. Most of these enterprises deploy applications to multiple clouds. One of Spinnaker’s features is its ability to allow users to deploy applications to different clouds using best practices and proven deployment strategies.

Until now, customers who had standardized on Spinnaker had to use custom tooling to deploy their applications to Azure.

With this blog post and the recent release of Spinnaker (1.13), we are excited to announce that Microsoft has worked with the core Spinnaker team to ensure Azure deployments are integrated into Spinnaker!

These integrations will strengthen our existing open source CI/CD pipeline toolchain and give customers who have taken a dependency on Spinnaker a supported path for deploying to Azure.

Initial release (1.13)

 

In our initial release we have enabled a core Spinnaker scenario for deploying immutable VM images – the Build, Bake, Deploy scenario.

As the scenario name suggests, there are three primary stages in the Spinnaker pipeline.

Build (labeled “Configuration” above): The build stage happens outside of Spinnaker and is used as a trigger for the following stages. It can be a Jenkins job, Travis job, or Webhook, and generates a package that will be used to create a VM image.
Bake: This stage uses the package from the previous step to create an Azure managed VM image.
Deploy: Finally, the deploy stage deploys one or more Virtual Machine Scale Sets using the managed VM image from the previous step. This can be done using one of the built-in strategies like Highlander or Red/Black.

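In Spinnaker, a pipeline like this is ultimately declarative JSON describing a trigger and an ordered list of stages. A rough Python sketch of that shape (field names are simplified for illustration and omit many keys a real Spinnaker pipeline definition requires, such as stage refIds and account names):

```python
# Simplified, illustrative pipeline definition for the Build, Bake, Deploy scenario.
pipeline = {
    "name": "build-bake-deploy-azure",
    # The "Build" stage lives outside Spinnaker and only acts as a trigger:
    "trigger": {"type": "jenkins", "job": "package-build"},
    "stages": [
        {"type": "bake", "cloudProviderType": "azure"},   # produce a managed VM image
        {"type": "deploy", "strategy": "redblack"},       # roll out a new scale set
    ],
}

stage_types = [stage["type"] for stage in pipeline["stages"]]
print(stage_types)  # ['bake', 'deploy']
```

The point of the sketch is the ordering: the bake stage consumes the trigger's package, and the deploy stage consumes the bake stage's image.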
Since Spinnaker is used to deploy to multiple clouds, it has created some abstractions for common infrastructure components. In this release these abstractions map to Azure infrastructure as follows:

Server Group: Maps to an Azure Virtual Machine Scale Set
Load balancer: Maps to an Azure Application Gateway
Firewall: Maps to an Azure Network Security Group

What’s next?

We are excited to be accepted as part of the Spinnaker open source community and will continue to invest in Spinnaker to enable other scenarios like container-based Azure Kubernetes Service (AKS) deployments, improve performance, and flexibility in infrastructure abstractions. We will publish our roadmap so keep an eye out and let us know what you think.

If you are interested in learning more about Spinnaker, or it’s already an important component in your DevOps toolchain and you would like to help us make the integration with Azure great, please reach out to us. You can connect directly with us in any of the following venues:

Join the conversation on the Azure channel in Spinnaker Slack.
Create issues and/or contribute on GitHub.

Source: Azure

Device template library in IoT Central

With the new addition of a device template library into our Device Templates page, we are making it easier than ever to onboard and model your devices. Now, when you get started with creating a new template, you can choose between building one from scratch or you can quickly select from a library of existing device templates. Today you’ll be able to choose from our MXChip, Raspberry Pi, or Windows 10 IoT Core templates. We will be working to improve this library by adding more device templates which provide customer value.

The addition of the device template library helps to streamline the device modeling workflow. It saves time as you can pre-populate a model with existing details. This now opens the door for more manufacturers to create standard definitions for their devices or smart products which we’ll continue to include in this growing template library.

To get started with selecting a device template, select the Device Templates tab and click the “+ New” button. This will bring you to our library page where you can choose which template you’d like to get quickly started with. You can also choose the Custom option if you would like to begin modeling your device template from scratch.

Once you select a template, simply give it a name and click “Create” to add this template into your application. We will automatically create a simulated device for you to view simulated data coming into this new template. Once your template has been created, you can visit the “Device Explorer” page to connect other real or simulated devices into this template.

We are excited to continue simplifying your device onboarding experience. If there are particular device templates you want to use or if you have any other suggestions, please leave us feedback with the links below.

Next steps

Have ideas or suggestions for new features? Post it on UserVoice.
To explore the full set of features and capabilities and start your free trial, visit the IoT Central website.
Check out our documentation including tutorials to connect your first device.
To give us feedback about your experience with Azure IoT Central, take this survey.
To learn more about the Azure IoT portfolio including the latest news, visit the Microsoft Azure IoT page.

Source: Azure

6 HPC must-sees at Next ‘19

High performance computing (HPC) is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations.
At Google Cloud Next ‘19, we have lots of sessions to help you understand how to use our scalable compute, networking and storage infrastructure. If you’re attending the event, here are six HPC sessions to mark on your calendar.
1. High Performance Computing on Google Cloud Platform (GCP): Deploy an HPC Cluster Now – Register here
In this session, we’ll discuss why GCP is a great platform to run HPC workloads. We’ll present best practices, architectural patterns, and how our professional services organization can help you on your journey. We’ll conclude by demoing the deployment of an autoscaling batch system in GCP.
2. HPC Partner Ecosystem – Register here
Come learn about how our HPC partners focusing on job scheduling, workload management, applications, and libraries can enable you to start running HPC workloads easily and quickly on Google Cloud.
3. Technical Deep Dive Into Storage for High Performance Computing – Register here
Large-scale computing in the cloud is maturing, but HPC storage in the cloud is still in its infancy. In this session, we will discuss HPC storage in the cloud with solutions for EDA, fintech, manufacturing, media, genomics, and many more. Then, our HPC storage partner DataDirect Networks will discuss its Lustre parallel file system and other future offerings on Google Cloud.
4. How We Broke the World Record for Computing Digits of Pi (31.4 trillion!) – Register here
We calculated 31.4 trillion digits of Pi on Google Cloud—the new world record. This session will discuss the nature of the calculation, the architecture, challenges and techniques, benefits of Google Cloud, and of course the brief history of Pi computation. Along the way, you’ll learn a ton about large-scale cloud computing.
5. Performance Benchmarking on Google Cloud Platform – Register here
How do you benchmark performance in the cloud, and in particular on Google Compute Engine? We’ll use PerfKitBenchmarker to take an early look at our new C2 instances and see how they stack up to our N1 series. We’ll also provide scripts so you can benchmark systems from the comfort of your own home!
6. University Students & Researchers Push the Bounds of What is Possible With GCP – Register here
Researchers, students and developers at universities around the world are asking what’s possible—and using GCP to find out. Come learn how Google Cloud is helping researchers make new discoveries and share their insights, be it mapping the cosmos, seeking solutions to the opioid crisis, building more accessible technology to help people communicate and more.
This is just a small sampling of the hundreds of breakout sessions we’ll be holding next week. To learn more about the event, and secure your spot, check out the Google Cloud Next ‘19 website. And if you can’t make it to the show, stay tuned, because we’ll be publishing full-length recordings of every session for your viewing pleasure.
Source: Google Cloud Platform

Bringing it all together: learn GCP solutions from the sources at Next ‘19

If you need details about using a particular Google Cloud Platform (GCP) service—whether Compute Engine or Cloud Storage or BigQuery—you typically turn to the respective service’s docs. But developing cloud-based apps involves combining multiple services into a complete solution. And there are as many solutions as there are businesses using the cloud.
We understand this, and that’s why we have a team of solutions architects (SAs) who are industry veterans and experts in app design and cloud architecture. These Google devs show you how to put services together to solve your particular business need, and ultimately, create your very own solution tailored to your specific requirements.
Learn about GCP solutions at Next ’19
If you’re attending Google Cloud Next ’19, our annual cloud conference, you’ll have the opportunity to meet these SAs in person. Stop by the whiteboard booth to put faces to the names and solutions, to share ideas for topics or to even get help with your designs and plans.
And while you’re at it, there are plenty of sessions with our SAs that you can attend.
Here are a few to mark on your calendars, if you haven’t already:

Automate your way to a consistent and repeatable world with CSP – Learn about a set of opinionated solutions that help speed up your creation of a solid cloud platform.
Journey to the cloud confidently with Citrix and Google Cloud – Get practical advice on how to move to GCP using Citrix and open source tools, with reference architectures and customer stories.
Revolutionizing media workflows with intelligent content – See how machine learning can help add intelligence for better media workflows.
Managing a render farm on GCP using OpenCue – Get a live demo of OpenCue, the open-source, high-performance render manager for animation industry companies.
BigQuery ML: What’s New, and an Exploration With Booking.com on Using It to Assess Data Quality – Learn about BigQuery ML’s latest models and features, plus hear how Booking.com uses these models to assess data quality.
Large-scale multiplayer gaming on Kubernetes – See how open-source projects built on Kubernetes can handle infrastructure for you, featuring the Open Match matchmaking and the Agones game-serving platforms.
Marketing and creative insights from unstructured data: Cloud ML APIs – Check out examples of how customers use the Cloud Natural Language and the Cloud Vision APIs to take advantage of unstructured data for marketing analytics.
GCP for Apache Kafka Users: Stream Ingestion and Processing – View two approaches to data integration and how to use Cloud Dataflow for your data streams.

Explore solutions docs for your requirements
Along with Next sessions, you can explore the Solutions Gallery, a portal featuring more than 600 solutions documents that explain how to use GCP components to create apps that address real-world requirements. Each of our solutions dives deep to describe how the SA implemented, well, a solution to a particular customer requirement. Some of the solutions are big-picture overviews. Some list best practices.
Many are tutorials that include code and the link to GitHub repositories where you can find all the components you need to run the solution yourself.
The topics are as wide-ranging as our customers and as their reasons for building cloud-based apps. Here are just a few examples:

Is your system prepared for the unexpected? A multi-part series by SA Grace Mollison describes how to design a cloud architecture to handle disaster recovery. Check out her excellent “run-flat” analogy for hot failover.
Want hands-on experience using machine learning for a real app? SA Lukman Ramsey created a multi-part tutorial series that explains how to use TensorFlow and GCP components to build a recommendation system.
You already know how to back up your systems when everything is on-premises, but what do you do when your infrastructure is in the cloud? A solution by SA David Cueva Tello explains how to set up automated backups using Cloud Composer.

We invite you to explore our solutions and learn about the many ways you can use GCP to address your development needs. And keep an eye out here for information about new
Source: Google Cloud Platform

Last month today: March on the Google Cloud blog

Whew—March was a busy month around here, with a new region opening, an actual Guinness World Record, and some interesting stories of using Google Cloud Platform (GCP) for gaming and basketball data analytics. Here are the top stories that caught your interest last month.
Bulking up infrastructure
GCP’s sixth European region, in Zurich, Switzerland, opened last month. It launched with our standard set of products across compute, databases, storage, security, big data and networking, including Compute Engine, Cloud Bigtable, BigQuery and more. The Zurich region comes with three availability zones and Cloud Interconnect, our private, software-defined network to speed cloud access and data movement. See which GCP region is closest to you at GCPing.com.
On the infrastructure front, we introduced a new Cloud Storage pricing plan. The Storage Growth Plan tackles data growth and volatility that many of our users experience. We want to keep cost unpredictability out of that equation, so this new plan lets you commit to 12-month periods of using Cloud Storage for a fixed dollar amount.
Using cloud for fun and π
We celebrated Pi Day, 3/14, here at Google Cloud with the excitement of winning a Guinness World Record for the most digits of π ever calculated, to 31.4 trillion decimal places. Google Cloud developer advocate Emma Haruka Iwao used Compute Engine VMs running y-cruncher to do the calculations. In addition to this being the most π digits ever calculated, it’s the first time this record was set using the cloud. Using cloud also brings the benefit of easy sharing: You can get your hands on the digits via pi.delivery.
Also last month: The annual March Madness tournament started, and we continued our partnership to explore the NCAA’s 80-plus years’ worth of historical basketball data using Google Cloud. This year, we’re bringing student developers into the fold and adding a new online course so you can learn how to use BigQuery to analyze NCAA data with SQL and make a machine learning model to make predictions based on the historical data. We also built a public Data Studio dashboard with plenty of new insights to help you survive the madness. We’ll have bootcamps at Next ‘19 to continue the fun.
And speaking of fun and games, Google Cloud made a big splash at this year’s Game Developers Conference, where we highlighted how Stadia and Google Cloud are better together, as well as how GCP is powering popular games like Apex Legends and Tom Clancy’s The Division 2. These multiplayer games run on GCP infrastructure around the world so players can constantly access matchmaking, statistics and high score data.
Putting tools and concepts together for an SRE service
The latest installment in our series about using Istio covers how you can bring application metrics into your reporting and in line with your site reliability engineering (SRE) practice. Since Istio integrates with Stackdriver, you can get about a dozen metrics right away without further configuration. This post covers how you might choose from those metrics as you’re setting service-level indicators (SLIs), and how you can use Stackdriver Logging and Trace to get into the details that are most relevant for your team and business.
That’s a wrap for March. We’ll see you next month—and at Next ‘19 in the meantime!
Source: Google Cloud Platform