Enterprise Solution Offerings: Ensuring Success Across Your Entire Application Portfolio

This week at DockerCon 2019, we shared our strategy for helping companies realize the benefits of digital transformation through new enterprise solution offerings that address the most common application profiles in their portfolio. Our new enterprise solution offerings include the Docker platform and the new tooling and services needed to migrate your applications. Building on the success of and experience from the Modernize Traditional Applications (MTA) program and Docker Enterprise 3.0, we are excited to expand our solutions and play an even greater role in our customers’ innovation strategy by offering a complete and comprehensive path to application containerization.
Application Profiles
When you hear about different application profiles, you may think about different languages or frameworks, or even different application architectures like microservices and monoliths. But one of the benefits of containerization is that all application dependencies are abstracted away, and what you have is a container that can be deployed consistently across different infrastructure.
In our work with many enterprise organizations, we’ve validated that successful adoption of a container strategy is just as much about the people and processes as it is about the technology. There are three behavioral patterns that matter, depending on what you want to accomplish with containers:

Modernizing brownfield applications is a great way to begin your container journey because these applications tend to receive a lot of attention inside the organization. These are applications that are under active development but are based on existing code bases. In other words, there are both developers AND operators actively supporting these applications. Modernizing these applications typically involves containerizing the existing application and making modifications, like the addition of microservices, to make them more agile or integrated with other systems.
Accelerating greenfield applications of any architecture involves arming your developers with the tools and practices to build and ship production-ready applications. New apps can be cloud-native and microservices, but they can also be n-tier applications or even monoliths. The key is to accelerate developer productivity and remove any friction in the process.
Replatforming legacy applications is a practical way to extend the life of existing applications while making them more portable and easier to operate. Containerizing these legacy applications also delivers significant cost savings for organizations through increased CPU utilization and server consolidation.

Proven Methodology
Docker’s solution offerings are based on a proven, outcomes-based methodology that takes into account an organization’s unique characteristics. It’s a comprehensive approach based on four workstreams: Governance, Platform, Pipeline, and Applications. The aim of these workstreams is to establish and operationalize the Docker Enterprise platform for your applications in production, no matter which application profile.

Governance: Ensures that the team is organized, equipped and enabled to deliver and support Docker Enterprise for the on-boarding of applications to the platform
Platform: Activities in which the team defines, deploys, integrates and operates the Docker Enterprise platform
Pipeline: Activities in which the team defines, deploys, integrates, and operates the CI/CD delivery pipeline which builds and deploys applications to run on the Docker Enterprise platform
Applications: Activities in which the team defines, migrates, deploys, and operates workloads to run on the Docker Enterprise Platform; this workstream is repeated per application at scale in an on-boarding process.

Build a Path to Success
Every organization is faced with the challenge of digital transformation, but there are many different paths you can take, depending on your organization’s priorities – and sometimes what projects are getting funded. 
If you are not sure where to start, our experience is that starting with your brownfield applications will deliver the most immediate value back to your organization. From there, we can work with you to identify the next batch of applications to containerize. Here is one recommended path forward:

Summary
Docker Customer Success aims to accelerate our customers’ adoption of containerization. We are excited to share our new solution offerings based on Docker Enterprise 3.0 and look forward to working with your organization to deliver on your digital transformation objectives.


For more information, please check out these resources:

Watch the DockerCon keynote from Day 1
Learn more about Docker Enterprise 3.0 and sign up for the public beta

Source: https://blog.docker.com/feed/

Principles and best practices for data governance in the cloud

Today’s businesses both generate and consume data at unprecedented rates. Diversity of data types and sources means that organizations have to grapple with data access, security, governance, and, let’s not forget, regulatory compliance. These concerns give some customers pause when they consider moving their sensitive data to the cloud. That’s why we published a white paper that outlines best practices and guidelines to help organizations establish data governance in a cloud-first world. The white paper intentionally takes a platform-agnostic approach that you can use when building out your governance capabilities.

Data governance encompasses the ways that people, processes, and technology can work together to enable auditable compliance with defined and agreed-upon policies. Ultimately, organizations want their data to work for them, and governance is an essential part of making data work for your business. Every enterprise should think about the entire data governance lifecycle, including data intake and ingestion, cataloging, persistence, retention, storage management, sharing, archiving, backup, recovery, disposition, and removal and deletion.
Many organizations find these requirements overwhelming, so the white paper outlines best practices and guidelines for governance in the cloud, including:

Data discovery and assessment, so that you know what data assets you have
Profiling and classifying sensitive data, to understand which governance policies and procedures apply to your data
Maintaining a data catalog that contains structural metadata, data object metadata, and the assessment of levels of sensitivity in relation to your company’s governance directives
Documenting data quality expectations, techniques, and tools that support the data validation and monitoring process
Defining identities, groups, and roles, and assigning access rights to establish a level of managed access
Performing regular audits of the effectiveness of controls in order to quickly mitigate threats and evaluate overall security health
Instituting additional methods of data protection to ensure that exposed data cannot be read, including encryption at rest, encryption in transit, data masking, and permanent deletion

Using these best practices, enterprises can create an effective data governance strategy and operating model, giving organizations a path to establish control and maintain visibility into their data assets. Organizations will likely reap immense benefits as they promote a data-driven culture, including improved decision making, better risk management, and regulatory compliance. You can read or download the full white paper here, or you can find more information about how we secure and govern your data on Google Cloud Platform here.
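One of the protection methods listed above, data masking, can be sketched in a few lines. This is a minimal illustration, not guidance from the white paper: the field names, salt, and token length are all hypothetical choices.

```python
# Minimal field-level data masking sketch: replace sensitive values with
# salted hash tokens so data stays joinable (same input -> same token)
# but unreadable. Field names and salt are illustrative only.
import hashlib

SENSITIVE_FIELDS = {"email", "ssn"}  # assumed output of a classification step

def mask_record(record, salt="demo-salt"):
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            masked[key] = digest[:16]  # truncated deterministic token
        else:
            masked[key] = value
    return masked

row = {"user_id": 42, "email": "ada@example.com", "ssn": "123-45-6789"}
print(mask_record(row))
```

In practice a managed service such as Cloud Data Loss Prevention would handle both the classification and the transformation; the sketch only shows the underlying idea.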
Source: Google Cloud Platform

From the data warehouse: Urs Hölzle explains how data analytics and ML can transform your business

As businesses collect and analyze more and more data with every passing year, traditional infrastructure is challenged: it’s not just that there is more data; it’s coming from more sources, with different contexts and uses than the enterprise has seen in the past. Not only that, internal and external customers expect results at a faster pace, challenging both the tools and practices of traditional infrastructure.

The solution is to do well what technology has always aimed to do: automate the rote stuff, so you can get faster to more value-added work. There are a number of ways to do this, but increasingly the most valuable is to use artificial intelligence, in particular machine learning, either overtly or in the form of labor-saving tools and services that rely on ML.

Today, we’ll talk with one of Google’s early distinguished engineers, Urs Hölzle, who now plans, designs, and supports the infrastructure behind the growing user base for a number of Google products, as well as the infrastructure that serves all of our Google Cloud customers. Urs has played an essential role at Google from nearly the beginning, leading the development of the computing and data infrastructure that first revolutionized Internet search and eventually became a platform for maps, mobility, cloud computing, and artificial intelligence engines: systems that predict deadly illnesses and prevent Google’s own data centers from overheating.

Urs and I recently sat down to talk about how machine learning simplifies problem-solving for businesses.

Note: This interview has been abridged and edited.

Quentin: Urs, as you’ve expanded infrastructure and capacity to process information at a higher velocity, process data from multiple angles, and think of data as a much more dynamic asset, how do today’s larger quantities of data change the way people work?

Urs: The ecosystem really changed a lot, because previously you had to do a lot of planning: you had to carefully pick which insight you wanted to go after. Now, a data analyst with a simple SQL query can at least prototype this insight at their own pace, maybe in half a day or a week. And they don’t need a software team, they don’t need an analyst, and it’s not actually a software development project anymore, and that means that the number of questions you can answer from your data just explodes.

Quentin: So you can have far more projects, you can think in novel ways, you can test at a deeper level.

Urs: Often, you’re going after the right thing, but your initial understanding is actually incorrect. As you go through it iteratively, your understanding of the problem improves. At that point, you’re asking better questions than you asked on day one. And if you can do that every day, and ask a better question every day, then in a matter of two weeks you might actually fundamentally change how you think about a particular customer segment, because you have a much deeper understanding of how it behaves.

Quentin: One could see AI and machine learning as a kind of natural outgrowth of cloud computing, right? Because it’s a fundamentally better way to sort through the data, find patterns, and test things?

Urs: Yes, and in fact we’re starting to see [the worlds of machine learning and cloud infrastructure] merge. Traditionally, when you had data, you wrote the data processing, or maybe you had queries; that was the first step: “I’m just trying to find a data point again.” That was databases. Then came analytics: “Let me actually analyze the data, compute statistics on it.” But it was still relatively manual. Now, ML gives you a more powerful way to look at the data, one that also does well with unstructured data like images, sound, or other data types, where traditional analytics just doesn’t work at all.

[Modern data analytics tools] really make sense and make use of the data you already have. So on BigQuery today, our data warehouse, you actually have [built-in] ML functionality in your data analytics warehouse. It’s a very natural way to say, “Gee, I have this data here, can I actually make a prediction function for things where I don’t have the data?” And the answer is yes you can, and it’s actually very easy. You can do it in a SQL statement that is roughly 10 lines long, so you don’t even need to understand how machine learning works.

Quentin: What are some of the most interesting ML problems that customers are bringing to you these days?

Urs: I think the biggest problems that companies have are in two main areas.

First, they believe that ML is the biggest opportunity, but they need to be able to translate that into actual outcomes. So it’s essential that we offer tools in our stack that make it much easier for you to use ML without being an expert. BigQuery can actually do predictions with ML without you needing to know too much about the underlying techniques. For example, AutoML, our ML [training tools]: you can take your set of images in which you want to recognize objects, and we can automatically construct a machine learning system that recognizes them with very high accuracy. Only a year ago, you needed an expert to do that.

The second problem is really how to deal with the transition to the cloud. Every large user is going to run in a hybrid configuration for a while. Now you have two environments, and they have different rules, so you need to have two different teams and train them differently in order to figure out how these things work together.

Quentin: Doesn’t putting out a cloud management tool like Kubernetes help with coordination?

Urs: Yes, absolutely. That is one of the hardest problems, and our answer to that is Kubernetes and Google Kubernetes Engine (GKE). Now you can use Kubernetes to manage your workloads both on premises and in the cloud, with not just the same code but, of equal importance, the same configuration.

Integrated machine learning is core to Google’s products, helping businesses turn data into insights and make smarter decisions.
Learn more about BigQuery or read about our broader suite of data analytics solutions. If you already use BigQuery and you’re interested in generating ML-based insights, you can read about BigQuery ML.
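The roughly-10-line SQL statement Urs mentions corresponds to BigQuery ML’s CREATE MODEL and ML.PREDICT statements. Here is a hedged sketch with placeholder dataset, table, and column names; the Python wrapper only assembles the SQL, and actually running it requires a BigQuery client and credentials.

```python
# Sketch of the BigQuery ML workflow described above: train a model and
# run predictions entirely in SQL. All dataset/table/column names below
# are placeholders, not real resources.
train_sql = """
CREATE OR REPLACE MODEL `mydataset.churn_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT churned AS label, tenure_months, monthly_spend
FROM `mydataset.customers`
"""

predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL `mydataset.churn_model`,
                TABLE `mydataset.new_customers`)
"""

# To execute, you would use the BigQuery client library, e.g.:
#   from google.cloud import bigquery
#   bigquery.Client().query(train_sql).result()
print(train_sql.strip())
print(predict_sql.strip())
```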
Source: Google Cloud Platform

Making API development faster with new Apigee Extensions

As API programs gain traction, we know many companies want to empower developers to quickly build and deliver their API products. To aid them in this effort, we recently announced the availability of new capabilities in Apigee, the enterprise API management platform of Google Cloud Platform (GCP), to help enterprise IT teams speed up their API development. With faster API development within GCP, you can innovate faster and create connected customer experiences, plus increase developer productivity. You can also speed the time to market for API products and ensure security and scalability.

With this launch, our new Apigee Extensions will let developer teams building APIs access several GCP services: Google Cloud Functions, Cloud Authentication, Cloud Data Loss Prevention (templates support), Cloud Machine Learning Engine, and BigQuery, all from within Apigee.

When you’re building APIs, you often need to connect to various cloud services. Until now, connecting to those services securely required using a combination of ServiceCallout and other out-of-the-box Apigee policies to deal with credentials, manage tokens, and access the required cloud services. This process is error-prone and has to be repeated for every single API proxy that is built within Apigee.

By removing the repetitive and redundant work required to configure and apply policies to API proxies, Apigee Extensions simplify the process of securely accessing cloud services. An API developer can pick from the policy list and use the necessary services through a first-class policy interface, as shown here. Once configured, policies for cloud services can be reused across all API proxies.

This launch also adds support for Salesforce to the growing list of third-party services supported by Apigee Extensions.
The Apigee Salesforce extension lets API developers easily interact with data in their company’s Salesforce instance by reducing the complexity of accessing the Salesforce REST API.

How customers are using Apigee Extensions to build APIs

Since the announcement of Apigee Extensions last year, we’ve heard from many of you who want to help API developers be more productive. A great success story of this adoption is Global Payments Inc., which builds solutions to help businesses offer a customer-friendly payment experience. Previously, accessing a cloud service for API development was a tedious and laborious task. Gopika Patel, vice president of enterprise integrations and architecture at Global Payments, experienced this firsthand when her team was implementing logging policies for the company’s APIs.

Before Gopika’s team adopted Apigee Extensions, the typical process for implementing API logging policies required creating a service account; generating and downloading the keys; creating KVMs in their environment; assigning the project_id, log_id, jwt_issuers, and private_key to Apigee context variables; using those variables to generate the token; caching the token; composing the log message; and connecting to the service to post the composed log message asynchronously. Now, using Apigee’s Stackdriver extensions, Gopika and her team have considerably boosted productivity and accelerated API development by simplifying the log-policy enforcement experience.

“Previously, our developers had to perform repetitive and time-consuming work in order to ensure that the Global Payments APIs are compliant with our logging policies,” Patel said. “Apigee has simplified this process with contextual access to cloud services with Apigee Extensions.
Using Apigee and Apigee Extensions, we have been able to speed up API development, while complying with strict security and compliance requirements.”

Another Apigee customer, Designer Brands (formerly DSW, Inc.), one of North America’s largest designers, producers, and retailers of footwear and accessories and parent company of DSW Designer Shoe Warehouse, has also had great success using Apigee to cut down on development overhead and accelerate speed to market. Using Apigee has been a key part of developing DSW’s VIP loyalty program and customer-facing applications. Jon Herbst, director of data integration at Designer Brands, has been a key supporter of the company’s digital transformation into an API-first architecture, which helps the company adapt to changing consumer behavior and a rapidly evolving technology landscape. Jon and the rest of the DSW IT team use Google Cloud services, including Apigee, for managing all interactions with the company’s APIs.

Since Designer Brands implemented Apigee and Apigee Extensions and incorporated Google Cloud services such as Cloud Pub/Sub, Cloud Dataflow, and Cloud SQL, among others, the company has experienced a steady increase in digital customer engagement. Cyber Monday 2018 was a big transaction-volume day for DSW. The company was able to scale up its digital operations without any performance issues so customers could have a great experience buying their favorite shoes and accessories. Ultimately, this led to the company’s best year-over-year comparable sales performance on that day since 2011. DSW is also using Apigee to give sales associates tools to improve customer experience by quickly verifying the customer’s information and checking their order status through automated functions.

“Google has been at the heart of DSW’s API-first strategy as true partners who’ve enabled us to scale our API development efforts,” Herbst said.
“Since we implemented Apigee, we’ve been able to innovate and get to market faster, enabling our customers to engage with us on the platform of their choice. With Apigee Extensions, our team has been able to access Google Cloud services when developing APIs from within the Apigee interface, effectively boosting their productivity. This new approach has yielded higher customer engagement, improved customer satisfaction, and huge leaps in transaction volume. Our developers are happy, our customers are happy, and our internal stakeholders are happy. This is a win, win, win!”

To learn more about how to accelerate API development with Apigee Extensions and Google Cloud services, join our upcoming webcast with Apigee Extensions Product Manager Prithpal Bhogil.
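To make the before/after concrete, here is a sketch of the kind of Cloud Logging (Stackdriver) entries.write request body that, before Extensions, each proxy had to assemble by hand. The project and log names are placeholders, and service-account token generation and the HTTP call are deliberately omitted.

```python
# Sketch of a Cloud Logging v2 entries.write request body, the payload
# that previously had to be composed inside every API proxy by hand.
# "demo-project" and "api-proxy-log" are placeholder names.
import json

def build_log_payload(project_id, log_id, message):
    """Compose the JSON body for the Cloud Logging entries.write call."""
    return {
        "logName": f"projects/{project_id}/logs/{log_id}",
        "resource": {"type": "global"},
        "entries": [{"textPayload": message}],
    }

payload = build_log_payload("demo-project", "api-proxy-log", "request served")
# In the manual flow, this JSON would be POSTed to
# https://logging.googleapis.com/v2/entries:write with an OAuth bearer
# token generated and cached from the service-account key.
print(json.dumps(payload, indent=2))
```

The Stackdriver extension collapses all of this (token handling included) into a single configured policy.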
Source: Google Cloud Platform

Announcing the winners of the Confidential Computing Challenge

Confidential computing aims to protect the integrity and confidentiality of applications and data being processed in the public cloud. At Google, one approach to confidential computing is Asylo, an open-source framework that we released for creating enclaves (sometimes referred to as trusted execution environments, or TEEs) to help protect sensitive data and code with hardware-backed protections. This emerging technology is promising and sought after by customers that want to preserve the security and privacy of critical code and sensitive user data.

That’s what inspired us to collaborate with Intel on the Confidential Computing Challenge (C3), an online, global competition to accelerate the field of confidential computing. In February, we invited participants to explore the advantages confidential computing can bring, and they did not disappoint!

“As an industry, we’ve made a lot of progress towards our common goals of protection for data-in-use, and we’re only just getting started in terms of understanding the potential applications of trusted execution environments,” explained Simon Johnson, Sr. Principal Engineer & Intel® Software Guard Extensions (Intel® SGX) Architect, who was also one of the C3 judges. “This is one of the primary reasons we decided to co-sponsor the Confidential Computing Challenge along with Google Cloud—to invite the world’s most brilliant minds to collaborate with us and share their ideas so we can collectively grow this nascent space.”

We received entries from around the world that covered practical and creative use cases for confidential computing, including machine learning, data analytics, multi-party computation, and hardening existing security features like Transport Layer Security (TLS).
It was so inspiring and energizing to see the effort participants put into developing their C3 ideas that we decided to expand our original plan and award not just a first-place prize, but also a runner-up and two honorable mentions. With that, please join us in congratulating the winners of C3!

First place: TF Trusted – Confidential Machine Learning with TensorFlow and Asylo

TF Trusted is an open-source framework built on top of Asylo and TensorFlow Lite to compute a prediction without revealing the model or input vector to the host computer. This is achieved by performing computations inside of an Intel SGX device; the user can then perform private computation inside the enclave with any collection of operations supported by TensorFlow. This private computation can be performed in whole, as a TensorFlow Lite model. The enclave’s computation can be extended as a custom TensorFlow Operation for use in broader TensorFlow computation graphs and libraries like TF Encrypted.

“We believe that TF Trusted is an important step towards empowering enterprises, data scientists, and machine learning engineers to leverage confidential machine intelligence to realize the true potential of artificial intelligence,” said Gavin Uhma, CEO and co-founder of Dropout Labs, a distributed startup from France, Canada, and the USA focused on secure, privacy-preserving machine learning. “Solutions like this are especially applicable to industries such as finance, healthcare, and transportation, which are interested in moving to the public cloud but have concerns around data confidentiality. It is great that the Confidential Computing Challenge provided us with a platform with which to share these ideas more broadly.”

Runner up: PrivateLearn

Recommendation systems typically learn their models from user data.
PrivateLearn provides a potential solution to ensure that the learning process preserves the privacy of such sensitive data, backed by a strong security guarantee.

“There are two phases where leakage may happen on the server side — one is data leakage during the training phase and the other is data leakage from the learned model,” said Ruide Zhang, PhD candidate at Virginia Tech. “To encourage adoption of new IoT and AI applications, machine learning frameworks need to guarantee user privacy. PrivateLearn recognizes this need and aims to address it. PrivateLearn also shows that porting an existing application into the Asylo framework is practical.”

For more information, head on over to the PrivateLearn GitHub here.

Honorable mention: GeneCrypt – putting users in control of their genetic data

GeneCrypt helps protect genomic data while also allowing it to be used for the benefit of the individual. “Unlike many other contexts, in this use case, you have a massive amount of sensitive data, but you don’t need all the raw data for practical purposes—just a computationally derived value,” explained Martin Thiim, a software and security engineer based in Denmark. “This could, for instance, be a boolean value indicating the presence or absence of some genetic variant. Enclaves lend themselves well to be the filters that extract just the relevant information.”

This novel idea utilizes confidential computing principles, and particularly Asylo/Intel SGX enclaves, to realize its goals. You can read more about and try out GeneCrypt here.

Honorable mention: Intel SGX-based Certificate Transparency

This idea proposes to harden the security of a Certificate Transparency (CT) scheme using Intel SGX, by making query authentication much more lightweight and paving the way for an efficient, secure, and practical CT scheme.

“Our proposal aims at hardening the security and building trustworthy systems of CT log servers and monitors,” said Dr. Yuzhe Tang, assistant professor in the department of Electrical Engineering & Computer Science at Syracuse University. “Intel SGX-based CT systems will help significantly reduce operational costs for both domain owners and organizations, without sacrificing security. This will eventually increase the adoption rate of CT among organizations and individual users in mainstream and mobile environments.”

Intel SGX-based CT is being built on top of enclave Log-Structured Merge-tree (eLSM), a high-performance key-value store that leverages Intel SGX enclaves, developed earlier by Dr. Tang’s team. You can find the source code for eLSM here and the corresponding technical paper here. For more information, you can also check out the project website.

Stay in touch

Congratulations to the winners and a huge thank you to all our C3 participants! Thank you also to our judges for the time and energy they spent reviewing and providing feedback on the awesome C3 entries.

If reading this has inspired you to develop your own confidential computing idea, you can start by learning more about Asylo here and Intel SGX here. We can’t wait to hear from you and see what you build next!
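The “lightweight query authentication” this proposal targets builds on the Merkle inclusion proofs at the heart of Certificate Transparency: a log can prove that a certificate is included by returning a logarithmic number of sibling hashes instead of the whole log. A minimal, simplified sketch follows (RFC 6962 additionally domain-separates leaf and interior hashes, which is omitted here):

```python
# Minimal Merkle-tree inclusion proof: prove one leaf belongs to a tree
# of n leaves using only O(log n) sibling hashes. Simplified relative to
# RFC 6962 (no leaf/node domain separation).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Return all levels of the tree, leaf hashes first, root last."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:            # duplicate last node on odd-sized levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """Collect (is_right_child, sibling_hash) at each level for a leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((index % 2, level[index ^ 1]))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

leaves = [b"cert-a", b"cert-b", b"cert-c", b"cert-d", b"cert-e"]
levels = build_levels(leaves)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)
print(verify(b"cert-c", proof, root))  # three hashes authenticate one of five certs
```

Note that the proof for five certificates is only three hashes; the SGX-based design described above aims to make producing and monitoring such authenticated answers cheaper still.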
Source: Google Cloud Platform