Designing Your First Application in Kubernetes, Part 4: Configuration

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In part 3, I explained how to configure networking services in Kubernetes to allow pods to communicate reliably with each other. In this installment, I’ll explain how to identify and manage the environment-specific configurations expected by your application to ensure its portability between environments.

Factoring out Configuration
One of the core design principles of any containerized app must be portability. We absolutely do not want to reengineer our containers, or even the controllers that manage them, for every environment. One of the most common reasons an application works in one place but not another is a mismatch in the environment-specific configuration that app expects.
A well-designed application should treat configuration like an independent object, separate from the containers themselves, that’s provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to this new environment, leaving everything else untouched.
When we design applications, we need to identify what configurations we want to make pluggable in this way. Typically, these will be environment variables or config files that change from environment to environment, such as access tokens for different services used in staging versus production or different port configurations.
Decision #4: What application configurations will need to change from environment to environment?
From our web app example, a typical set of configs would include the access credentials for our database and API (of course, you’d never use the same ones for development and production environments), or a proxy config file if we chose to include a containerized proxy in front of our web frontend.
Once we’ve identified the configs in our application that should be pluggable, we can enable the behavior we want by using Kubernetes’ system of volumes and configMaps.
In Kubernetes, a volume can be thought of as a filesystem fragment. Volumes are provisioned to a pod and owned by that pod. The file contents of a volume can be mounted into any filesystem path we like in the pod’s containers.
I like to think of the volume declaration as the interface between the environment-specific config object and the portable, universal application definition. Your volume declaration will contain the instructions to map a set of external configs onto the appropriate places in your containers.
ConfigMaps contain the actual contents you’re going to use to populate a pod’s volumes or environment variables. They contain key-value pairs describing either files and file contents, or environment variables and their values. ConfigMaps typically differ from environment to environment. For example, you will probably have one configMap for your development environment and another for production, each with the correct variables and config files for that environment.
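To make this concrete, here is a minimal sketch of a configMap for our web app example. All names and values (webapp-config, API_ENDPOINT, proxy.conf) are hypothetical, chosen purely for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical name for the development environment's config
  name: webapp-config
data:
  # a simple key/value pair, usable as an environment variable
  API_ENDPOINT: "https://api.dev.example.com"
  # an entire config file, usable as a file in a mounted volume
  proxy.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://webapp:8080;
      }
    }
```

A production configMap would carry the same keys with production-appropriate values, so the rest of the application definition never has to know which environment it is running in.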

The configMap and Volume interact to provide configuration for containers.

Checkpoint #4: Create a configMap appropriate to each environment. 
Your development environment’s configMap objects should capture the environment-specific configuration you identified above, with values appropriate for your development environment. Be sure to include a volume in your pod definitions that uses that configMap to populate the appropriate config files in your containers as necessary. Once you have the above set up for your development environment, it’s simple to create a new configMap object for each downstream environment and swap it in, leaving the rest of your application unchanged.
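As a sketch of how the pieces fit together (again with hypothetical names), the pod definition below declares a volume backed by the webapp-config configMap from the earlier example, mounts the proxy config file into the container, and pulls one key into an environment variable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: example/webapp:1.0   # hypothetical image
    env:
    # inject a single configMap key as an environment variable
    - name: API_ENDPOINT
      valueFrom:
        configMapKeyRef:
          name: webapp-config
          key: API_ENDPOINT
    volumeMounts:
    # proxy.conf appears as /etc/nginx/conf.d/proxy.conf in the container
    - name: config-volume
      mountPath: /etc/nginx/conf.d
  volumes:
  # the volume declaration: the interface mapping external config into the pod
  - name: config-volume
    configMap:
      name: webapp-config
      items:
      - key: proxy.conf
        path: proxy.conf
```

To promote this pod to another environment, you would create a configMap there with the same name and keys but environment-appropriate values; the pod definition itself stays byte-for-byte identical.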
Advanced Topics
Basic configMaps are a powerful tool for modularizing configuration, but some situations require a slightly different approach.

Secrets in Kubernetes are like configMaps in that they package up a bunch of files or key/value pairs to be provisioned to a pod. However, secrets offer added security guarantees around encryption and data management. They are the more appropriate choice for any sensitive information, like passwords, access tokens, or other key-like objects.
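As an illustrative sketch, a secret is declared much like a configMap; the main visible difference is that values in the manifest are base64-encoded. The name and credentials below are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # hypothetical name for the database credentials from our web app example
  name: webapp-db-credentials
type: Opaque
data:
  # values are base64-encoded, e.g. echo -n 'admin' | base64
  DB_USER: YWRtaW4=      # "admin"
  DB_PASSWORD: czNjcjN0  # "s3cr3t"
```

Pods consume secrets exactly as they consume configMaps, either as mounted volumes or as environment variables; only the reference type in the pod spec changes (secretKeyRef instead of configMapKeyRef, or a secret volume instead of a configMap volume).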

To learn more about configuring Kubernetes and related topics: 

Check out Play with Kubernetes, powered by Docker.
Read the Kubernetes documentation on Volumes. 
Read the Kubernetes documentation on ConfigMaps.

We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here.


Source: https://blog.docker.com/feed/

Change Healthcare: Building an API marketplace for the healthcare industry

Today we hear from Gautam M. Shah, Vice President, API and Marketplaces at Change Healthcare, one of the largest independent healthcare technology companies in the United States. Change Healthcare provides data and analytics-driven solutions and services that address the three greatest needs in healthcare today: reducing costs, achieving better outcomes, and creating a more interconnected healthcare system.

Healthcare is a rapidly evolving industry. There is an urgent need to bridge gaps and connect multiple data sources, transactions, data owners, and data users to improve all parts of the healthcare system. At Change Healthcare, we are rethinking and transforming how we approach our products and how we use APIs to achieve this goal. Taking a user-centered, outside-in approach, we identify, develop, and productize “quanta of value” within our portfolio (“quanta,” the plural of “quantum,” refers to small but crucial pockets of value). We connect and integrate those quanta into our own and our partners’ products to create a broader set of more impactful solutions. This approach to creating productized APIs enables us to bridge workflows and remove data silos. We bundle productized APIs to power solutions that open new possibilities for exceptional patient experiences, enhanced patient outcomes, and optimized payer and provider workflows and efficiencies.

To support this goal, we needed a way to support a large population of API producers, engage several segments of API consumers, and rethink how we bring API products to market at scale. We aren’t just delivering code; we’re creating and managing a broad product portfolio throughout its lifecycle. We take our APIs from planning, design, and operation through evolution and on to retirement. Operating these products requires meeting the needs of many API producers, allowing for marketing and product enablement, supporting different distribution channels and pricing, and enabling rapid product and solution creation. We also have to do all of this while prioritizing security and requiring a minimum of added platform development or customization. In short, we need an enterprise marketplace enablement platform. We chose the Apigee API Management Platform because it allows us to do all this.

Why Apigee?

Change Healthcare is building a marketplace to advance API usage across the healthcare ecosystem. This marketplace, the API & Services Connection, is a destination where our internal users, customers, partners, and the healthcare ecosystem can readily discover, interact with, and consume our broad portfolio of clinical, financial, operational, and patient experience products and solutions in a secure, simple, and scalable manner.

Using Google Cloud’s enterprise-class Apigee API Management Platform to power our marketplace allows us to support our entire organization with a standard set of tools, patterns, and processes. Using these common, and in some cases pre-established, sets of security, performance, and operation standards frees our API producers from worrying about the mechanics of how to deploy their products, and allows them to focus on creating the best possible solutions. It also provides us with robust proxy development and management capabilities, allowing us to access and distribute existing APIs and assets, thereby eliminating the need for complex migrations.

We empower our diverse mix of API producers by leveraging the full range of Apigee capabilities to automate engagement, integrate with different development methods, support visibility of products and pricing models, and measure usage, engagement, and adoption. By taking a “self-service first” approach, we allow our API producers to operate in line with their business processes and the needs of the enterprise, while at the same time giving them the tools and metrics they need to create and optimize their products. We also use the Apigee bundling capabilities to allow our producers to easily create and productize API bundles, which are then used to develop solutions that incorporate leading-edge technologies to solve more complex problems.

Our customer-facing marketplace makes the most of how Apigee supports distribution of APIs to multiple marketplaces, including a fully customizable developer portal. This capability gives us the ability to build private API user communities, create experiences for multiple customer segments, and distribute our APIs across multiple storefronts. Apigee lets us do all this while maintaining a common enterprise platform from which to control availability, monetization, and monitoring. In this way we can distribute our API assets internally and also allow our API producers to target how they want to manage their API products externally. Producers also benefit from rich engagement and usage data to better segment and target product availability and pricing. Apigee also supports creating a more immersive and interactive experience for API consumers, enabling us to provide technical and marketing documentation, a sandbox, and connections to our product teams and other users.

Fulfilling a bold vision

At Change Healthcare, we believe APIs are the present and the future. Today, our APIs power our products and enable us to serve the needs of the entire healthcare ecosystem. Looking forward, our APIs will power growth by enabling internal users to take advantage of valuable capabilities we’ve created, as well as make those capabilities easily available to external users. Armed with these productized APIs, our developers, customers, and partners (ultimately all parts of the ecosystem) will be able to deliver new and innovative products that combine interoperable data, differentiated experiences, optimized workflows, and new technologies such as AI and blockchain.

We’re just getting started with APIs! We’ve launched the first version of the API & Services Connection developer portal, and now have a standard method of engagement with our API producers and a place to drive internal visibility and external discovery. Our partnership with Apigee works well for us because we can demonstrate that we share the same goals internally and externally, and ultimately use the same set of tools to drive transformation. As our vision becomes a reality, we look forward to engaging not only more of our internal teams, but our partners and customers as well. Together we will use APIs to break down silos in healthcare, and ultimately create a more interoperable healthcare system for patients, providers, and payers.

Learn more about API management on Google Cloud.
Source: Google Cloud Platform

Azure Sentinel general availability: A modern SIEM reimagined in the cloud

Earlier this week, we announced that Azure Sentinel is now generally available. This marks an important milestone in our journey to redefine Security Information and Event Management (SIEM) for the cloud era. With Azure Sentinel, enterprises worldwide can now keep pace with the exponential growth in security data, improve security outcomes without adding analyst resources, and reduce hardware and operational costs.

With the help of customers and partners, including feedback from over 12,000 trials during the preview, we have designed Azure Sentinel to bring together the power of Azure and AI to enable Security Operations Centers to achieve more. There are lots of new capabilities coming online this week. I’ll walk you through several of them here.

Collect and analyze nearly limitless volume of security data

With Azure Sentinel, we are on a mission to improve security for the whole enterprise. Many Microsoft and non-Microsoft data sources are built right in and can be enabled in a single click. New connectors for Microsoft services like Cloud App Security and Information Protection join a growing list of third-party connectors to make it easier than ever to ingest and analyze data from across your digital estate.

Workbooks offer rich visualization options for gaining insights into your data. Use or modify an existing workbook or create your own.

Apply analytics, including Machine Learning, to detect threats

You can now choose from more than 100 built-in alert rules or use the new alert wizard to create your own. Alerts can be triggered by a single event, by crossing a threshold, by correlating different datasets (e.g., events that match threat indicators), or by built-in machine learning algorithms.

We’re previewing two new Machine Learning approaches that offer customers the benefits of AI without the complexity. First, we apply proven off-the-shelf Machine Learning models for identifying suspicious logins across Microsoft identity services to discover malicious SSH accesses. By using transfer learning from existing Machine Learning models, Azure Sentinel can detect anomalies from a single dataset with accuracy. In addition, we use a Machine Learning technique called fusion to connect data from multiple sources, like Azure AD anomalous logins and suspicious Office 365 activities, to detect 35 different threats that span different points on the kill chain.

Expedite threat hunting, incident investigation, and response

Proactive threat hunting is a critical yet time-consuming task for Security Operations Centers. Azure Sentinel makes hunting easier with a rich hunting interface that features a growing collection of hunting queries, exploratory queries, and Python libraries for use in Jupyter Notebooks. Use these to identify events of interest and bookmark them for later reference.

Incidents (formerly cases) contain one or more alerts that require further investigation. Incidents now support tagging, comments, and assignments. A new rules wizard allows you to decide which Microsoft alerts trigger the creation of incidents.

Using the new investigation graph preview, you can visualize and traverse the connections between entities like users, assets, applications, or URLs and related activities like logins, data transfers, or application usage to rapidly understand the scope and impact of an incident.

New actions and playbooks simplify the process of incident automation and remediation using Azure Logic Apps. For example, you can send an email to validate a user action, enrich an incident with geolocation data, block a suspicious user, or isolate a Windows machine.

Build on the expertise of Microsoft and community members

The Azure Sentinel GitHub repository has grown to over 400 detection, exploratory, and hunting queries, plus Azure Notebooks samples and related Python libraries, playbooks samples, and parsers. The bulk of these were developed by our MSTIC security researchers based on their vast global security experience and threat intelligence.

Support managed Security Services Providers and complex customer instances

Azure Sentinel now works with Azure Lighthouse, empowering customers and managed security services providers (MSSPs) to view Azure Sentinel for multiple tenants without the need to navigate between tenants. We have worked closely with our partners to jointly develop a solution that addresses their requirements for a modern SIEM. 

DXC Technology, one of the largest global MSSPs, is a great example of this design partnership:

“Through our strategic partnership with Microsoft, and as a member of the Microsoft Security Partner Advisory Council, DXC will integrate and deploy Azure Sentinel into the cyber defense solutions and intelligent security operations we deliver to our clients,” said Mark Hughes, senior vice president and general manager, Security, DXC. “Our integrated solution leverages the cloud native capabilities and assets of Azure Sentinel to orchestrate and automate large volumes of security incidents, enabling our security experts to focus on the forensic investigation of high priority incidents and threats.”

Get started

It really is easy to get started. We have a lot of information available to help you, from great documentation to connecting with us via Yammer and e-mail.

Start a trial and kick the tires
Watch the overview video
Review the technical documentation

Please join us for a webinar on Thursday, September 26 at 10:00 AM Pacific Time to learn more about these innovations and see real-life examples of how Azure Sentinel helped detect previously undiscovered threats.

What’s next

Azure Sentinel is our SOC platform for the future, and we will continue to evolve it to better meet the security needs of the complex world we live in. Let’s stay in touch:

Keep up to date by following the TechCommunity blog
Join our TechCommunity
Send us an e-mail with feedback and suggestions
Become an Azure Sentinel Threat Hunter

Source: Azure

Nest: When the smart home becomes a house of horrors

A couple in the US was terrorized through their hacked smart home devices, even though they had bought the devices to feel safer. Vulgar music blared from the Nest Cam's speaker and the temperature kept climbing higher and higher, like in a horror movie, only real. (Nest, Google)
Source: Golem

Oneplus 7T review: Almost like a Pro model

The Oneplus 7T is the first of two expected new Oneplus smartphones, and it is surprisingly well equipped: the display runs at 90 Hertz, a Snapdragon 855 works inside, and a triple camera with a Sony sensor is built into the back. Who still needs a Pro model? A review by Tobias Költzsch (Oneplus, Smartphone)
Source: Golem