Optimize your Google Kubernetes Engine workloads with Spotinst Elastigroup

Managing Kubernetes is about more than making sure you have enough capacity to run your deployments; it’s also about continuously optimizing all the moving pieces to make sure everything is running as cost-effectively as possible. But this “Tetris” game of mixing and matching workloads with available compute resources can be a full-time job.

Spotinst, a DevOps automation provider and Google Cloud Platform (GCP) partner, provides a proactive workload scaling service that anticipates interruptions in excess cloud capacity. By leveraging GCP’s Preemptible VMs, Spotinst helps customers eliminate the need to manage infrastructure scaling, reducing costs and operational overhead. In fact, a Spotinst Elastigroup lets you run production-grade container environments on preemptible VMs while saving up to 70% on your compute expenses.

We recently published a tutorial that demonstrates how to configure a Spotinst Elastigroup to manage Google Kubernetes Engine (GKE) cluster workloads, automatically maintaining the availability of the cluster while lowering costs.

The following diagram shows a GKE cluster integrated with an Elastigroup. Once a new deployment is processed, the Elastigroup uses predictive algorithms to ensure it has sufficient resources.

With Spotinst Elastigroup, you can focus on building applications with GKE and not the infrastructure they run on. Get started by visiting the solution page and see a Spotinst Elastigroup in action.
Source: Google Cloud Platform

5 ways financial services organizations will move faster in the cloud in 2019

Few industries grapple with the volume of information the financial services industry manages on a daily basis. Whether financial services organizations are analyzing market shifts, or protecting against fraud and money laundering, understanding their data and quickly finding the right insights are critical to their success.

Over the past year, we’ve spent a lot of time working with our financial services customers—like HSBC, Citi, UBS, Scotiabank, Two Sigma, and more. And what we found is whether they’re large or small, a startup or a global institution, there are universal themes shared by all. Here are five things financial services organizations plan to do in the cloud in 2019:

1. Tackle data silos to unlock the power of their data. Large, global institutions are using the cloud to overcome the incompatibilities, latencies and blind spots associated with traditional data silos, growing volumes of market data and alternative data sets. In 2019 we expect to see an increasing number using serverless, advanced data platforms, open APIs, and machine learning capabilities to make full use of their data for enterprise-wide decision-making with a minimal IT footprint. These decisions will be supported by market data, news and commentary, risk and regulatory data, company data and other specialized and alternative data.

2. Use ML-based early warning risk systems to stop threats before they happen. Financial services organizations continue to move their systems to the cloud to take advantage of predictive technology that can help them prevent fraud, money laundering, and cyber breaches before they ever cause harm. ML-based early warning systems are also helping monitor credit risk in real time. The cloud also helps these organizations streamline their lending processes, so they can approve credit faster and offer their products to clients in more cost-efficient and engaging ways.

3. Improve trading decisions with big data and machine learning. The ability to extract data and crunch numbers inside data warehouses or from multiple real-time feeds has changed how many organizations make trading decisions, requiring a new way of managing and analyzing data. As a result, many organizations are looking to the cloud for dynamic and scalable compute resources that allow traders and quants to model and test algorithms, and perform complex calculations that draw on vast amounts of data.

4. Move to the cloud for increased security. For financial services organizations, security is always top-of-mind, and a growing number are moving to the cloud to take advantage of the automation and scale it offers. Our aim is to give these financial services customers a broad range of tools they can use to better protect their customers and their data, from VPC Service Controls, which help prevent data exfiltration as a result of breaches or insider threats, to data encryption both at rest and in transit by default.

5. Harness the benefits of blockchain in the cloud. Blockchain, a distributed ledger technology (DLT), will continue to present exciting opportunities for the financial services industry. Providing a single source of truth and security without relying on intermediaries, DLT holds the promise of reducing the friction and costs associated with financial transactions. Blockchain in the cloud makes it easier to deploy and manage scalable, open source blockchain networks that support the full lifecycle of financial assets. With use cases ranging from trade finance, to cross-border payments, to clearing and settlement, the cloud offers a more efficient means of harnessing the power of blockchain.

We look forward to hearing how more financial services organizations take advantage of the cloud in 2019. In the meantime, if you’re interested in learning more about financial services on Google Cloud, visit our solutions page or contact us for a discovery session.
Source: Google Cloud Platform

Cloud Functions pro tips: Building idempotent functions

In a previous blog post we discussed how to use retries to make your serverless system resilient to transient failures. What we didn’t mention is that if you’re going to retry a function, it needs to be able to run more than once without producing unexpected results or side effects.

In computer science, this refers to the notion of idempotence, meaning that operation results remain unchanged when an operation is applied more than once. Likewise, a function is considered idempotent if an event results in the desired outcome even if the function is invoked multiple times for a given event. In other words, if you want your functions to behave correctly upon retries, you have to make them idempotent. In this post, we’ll show you how to do that.

Exploring idempotent functions

To better understand idempotency, let’s analyze a workflow. In this example, we have a function that processes incoming data, writes the results to one storage system, and then to another one.

Success scenario: a write sequence to two different datastores

The problem arises when, as you may expect, an upload to one of the storage systems fails. For example, imagine the second upload fails; this can result in data loss or inconsistency.

Error scenario: the write to the second datastore fails

We already know how to handle such a failure—apply retries. But is it always safe to apply a retry? In this example, executing the function a second time stores the output in the second storage system (if the upload succeeded) but also results in writing a duplicate record or object into the first storage system. This could be unexpected by other systems, and result in further problems. Let’s discuss how to prepare a function for retried executions to avoid this kind of data duplication.

Here, retrying your function may introduce a duplicate record.

First, let’s look at a non-idempotent background function. It performs two uploads—first, it adds a document to Cloud Firestore, our flexible, scalable NoSQL database, and then it uploads the document to another storage system off GCP. In a possible scenario where the upload to Cloud Firestore succeeds but the second upload fails, retrying the function results in a duplicate document, with the same contents, in the Cloud Firestore database. Of course, we don’t want duplicates, as they could cause confusion, accounting problems, and further inconsistencies.

Use your event IDs

One way to fix this is to use the event ID, a number that uniquely identifies an event that triggers a background function, and—this is important—remains unchanged across function retries for the same event.

Use event identifiers to avoid unwanted side effects such as duplication

To use an event ID to solve the duplicates problem, the first thing is to extract it from the event context that is accessed through function parameters. Then, we use the event ID as a document ID and write the document contents to Cloud Firestore. This way, a retried function execution doesn’t create a new document; it just overwrites the existing one with the same content. Similarly, some external APIs (e.g., Stripe) accept an idempotency key to prevent data or work duplication. If you depend on such an API, simply provide the event ID as your idempotency key.

There! Now that you’ve applied this event ID mechanism, you shouldn’t see any more duplicates—in Cloud Firestore, or in another system that accepts idempotency keys.

But what if the system you call does not support idempotency? Consider the following example. Here, we call SendGrid, the email delivery service, to send an email from the function. But the call isn’t idempotent, so retrying the function may result in duplicate emails. What can you do to avoid this problem?

The general solution is to note when a system has handled an event by recording its event ID. This way, you reduce the chance of unwanted retried calls to other services. In this example, we record the event ID in Cloud Firestore, but you can use another database or storage system as well. On each function execution, check whether the given event has already been recorded. If not, run the code and store the event ID in Cloud Firestore.

A new lease on retries

While this approach eliminates the vast majority of duplicated calls on function retries, there’s a small chance that two retried executions running in parallel could execute the critical section more than once. To all but eliminate this problem, you can use a lease mechanism, which lets you exclusively execute the non-idempotent section of the function for a specific amount of time. In this example, the first execution attempt gets the lease, but the second attempt is rejected because the lease is still held by the first attempt. Finally, a third attempt after the first one fails re-takes the lease and successfully processes the event.

Using a lease mechanism to handle non-idempotent code

To apply this approach to your code, simply run a Cloud Firestore transaction before you send your email, checking to see if the event has been handled, but also storing the time until which the current execution attempt has exclusive rights to send the email. Other concurrent execution attempts will be rejected until the lease expires, eliminating all duplicates for all intents and purposes.

By now, you can see that there are multiple ways to make a function idempotent, and doing so is an important part of handling failures and improving the reliability of your system. First, you can ensure that mutations can happen more than once without changing the outcome. You can also record event IDs that have been processed, query database state in a transaction before mutating the state, and supply an idempotency key if you’re calling APIs that support them.
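To make the event-ID and lease patterns concrete, here is a minimal sketch in Python. It is illustrative only: it uses a plain in-memory dict in place of Cloud Firestore (a real function would perform these checks inside a Firestore transaction so concurrent attempts are serialized), and the function and field names are invented for the example.

```python
import time

# In-memory stand-in for a Cloud Firestore collection keyed by event ID.
# In a real Cloud Function, these reads and writes would happen inside a
# Firestore transaction to make the check-and-set atomic.
processed_events = {}

LEASE_SECONDS = 60  # How long one attempt holds exclusive rights.


def should_process(event_id, now=None):
    """Return True if this attempt may run the non-idempotent section.

    Grants a short exclusive lease on the event so that concurrent
    retries don't perform the side effect (e.g., sending an email) twice.
    """
    now = now if now is not None else time.time()
    record = processed_events.get(event_id)
    if record is None:
        # First attempt for this event: record it and take the lease.
        processed_events[event_id] = {"done": False,
                                      "lease_until": now + LEASE_SECONDS}
        return True
    if record["done"]:
        return False  # Event already fully handled; skip the retry.
    if now < record["lease_until"]:
        return False  # Another in-flight attempt still holds the lease.
    # The previous attempt's lease expired (it likely failed): re-take it.
    record["lease_until"] = now + LEASE_SECONDS
    return True


def mark_done(event_id):
    """Record that the event's side effect completed successfully."""
    processed_events[event_id]["done"] = True


def send_email_once(event_id, send_fn, now=None):
    """Run the non-idempotent send_fn at most once per event ID."""
    if should_process(event_id, now=now):
        send_fn()
        mark_done(event_id)
```

A retried invocation with the same event ID after a successful send is a no-op, and a parallel attempt within the lease window is rejected; only an attempt arriving after the lease expires (implying the earlier one failed) gets to run the critical section again.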
To learn more, check out cloud.google.com/functions/; you can also find all the code we used in this blog post on GitHub. Stay tuned for the next post in the series, where we’ll demonstrate how to use retries and idempotency as part of a simple restaurant order-processing system.
Source: Google Cloud Platform

Growing our presence in Asia Pacific: New GCP regions in Hong Kong and Jakarta

The Asia Pacific market is important for Google Cloud, and we are making long-term investments to support our growing business there. In just the past 18 months, we have expanded the number of GCP regions in APAC from three to six. With each additional region, we deliver lower latency to our customers and bring our technical innovations even closer to where they do business. Today, we’re thrilled to announce more progress: our sixth APAC region opened today in Hong Kong, with another coming to Jakarta, Indonesia in the future.

A cloud made for Hong Kong

Our Hong Kong region is officially open for business. This new region—our eighteenth overall—will give both local and large multinational companies doing business in Hong Kong and Southeast Asia faster access to their data and applications.

In a recent whitepaper published by Google Hong Kong, 54% of respondents based in Hong Kong indicated that they are planning to launch a cloud computing initiative, an increase of 13% from 2017. The Hong Kong GCP region (asia-east2) is designed to support our flourishing customer base in the area. It has three availability zones, enabling customers to distribute their workloads and storage to run at higher availability. Hybrid cloud customers can seamlessly integrate new and existing deployments with help from our regional partner ecosystem, and via two dedicated interconnect points of presence.

Made for speed

The launch of the Hong Kong region brings speedier access to GCP products and services for organizations doing business in the area. Hosting applications in the new region can improve latency for end users in Hong Kong by up to 14ms. Customers in Vietnam and the Philippines will also benefit from a 25-30% improvement in latency. Visit www.gcping.com to see how quickly you can access the Hong Kong region from wherever you happen to be.

Services that are not yet available within the Hong Kong region can still be used via the Google network, and can be combined with other GCP services deployed around the world. For detailed information about our data storage capabilities, see our Service Specific Terms.

Next up in APAC: Jakarta

Last month, we announced a new region for Indonesia to our customers there. Today, we are officially announcing it to the world: a new GCP region will be available in Jakarta, Indonesia. The region will be designed for high availability, with three availability zones as well.

Jakarta will become the eighth GCP region in Asia Pacific, joining Hong Kong, Mumbai, Sydney, Singapore, Taiwan and Tokyo, plus the soon-to-be-launched region in Osaka. Visit our locations page for updates on the availability of additional services and regions.

Google Cloud: the world’s largest global private network

Our private, software-defined network provides a fast and reliable link between each region around the world. With more than 100 points of presence around the world, Google Cloud Platform ensures that your content is delivered quickly when every millisecond counts. Customers can quickly deploy and scale across multiple regions with products designed for organizations with a global footprint.

What our customers are saying

“We’ve made a decision on applications hosting to partner with Google on building some next-gen applications in the full Google stack, top to bottom using Google tools, which is a big departure from our previous architecture. It’s very exciting because I think we’ve got the confidence that the tools they have are going to be great for productivity with developers.”
— Darryl West, Group Chief Information Officer, HSBC

“We’re excited that GCP is launching in Hong Kong as it will help the ecosystem continue to grow. This launch is very important to us as Google has been a great partner to us as it will improve our service performance, and will contribute to the growth of our business in many different areas.”
— Tony Wong, Chief Executive Officer, Shopline

“Google Cloud Platform has a network of global data centers with deep connectivity that enables us to put our infrastructure close to our customers. In addition, the ability to horizontally scale our global database using tools such as Cloud Spanner eliminates any limits on our geographic expansion.”
— Teddy Chan, Chief Executive Officer and Chief Technology Officer, AfterShip

“Google Cloud remains to be a strong partner in Klook’s continuous growth. By using Google big data and analytics, we have been able to draw insights from our data so we can provide better services to travelers worldwide instead of spending time on infrastructure.”
— Bernie Xiong, Chief Technology Officer and Co-Founder, Klook

Getting started

For additional details, please visit our Hong Kong region page where you’ll get access to free resources, whitepapers, the “Cloud On-Air” on-demand video series and more. If you’re new to GCP, check out Best Practices for Compute Engine Region Selection. Our locations page provides updates on the availability of additional services and regions. As always, be sure to contact us to request early access to new regions and help us prioritize where we build next.
Source: Google Cloud Platform

The Google Cloud Adoption Framework: Helping you move to the cloud with confidence

Cloud computing is maturing at a scale and speed that can be hard to keep up with. In fact, it can seem as if every week a public cloud provider is announcing a new feature that will run your applications and store your data more scalably, reliably, or securely. And it’s not always easy to know where to start.

The Google Cloud Professional Services team was created to help our customers make their way successfully to the cloud. Along the way, we’ve seen the sorts of things that can trip organizations up, as well as patterns developing around what makes other organizations successful. Most recently, we’ve seen a shift in the outcomes our customers want to achieve with the cloud. For the first 10 years, getting to the cloud has been about tactical cost-cutting initiatives—building your “mess for less” in the cloud. Over recent years, our customers have begun to ask us much bigger, more strategic, even visionary questions: “How can I use machine learning to provide better customer service?” “How do I do predictive inventory planning?” Or “How do I enable dynamic pricing?”

These are the types of questions that excite us at Google—and we want to help you answer them. But getting to the point where your organization can really thrive in the cloud often requires deep, comprehensive transformation. That can be a tough pill to swallow. And if you’re the one leading that transformation, being able to communicate your plan in a simple, logical way can be critical to inspiring confidence in your vision.

All these reasons and more are why we’ve developed the Google Cloud Adoption Framework. Built on our experience working with enterprise customers, it can help you determine where you are on your cloud journey today, and where you’d like to be.

Although we share a broad range of insights in our framework, one of the most important is this: getting started in the cloud is all about striking the right balance.

We’ve seen two types of company cultures play out, time and again, in our customer base. For example, many of our cloud-native customers have a bias for action. They do many things well: self-sufficiency in pushing workloads into production, highly collaborative teams, and continuous learning and experimentation, just to name a few. But in their desire to move fast, we’ve seen some underestimate the value of putting guardrails in place early to contain the inevitable sprawl of data and compute resources. This omission not only adds cost to their monthly cloud hosting bill, but can also result in security and data privacy challenges in the long term. In this case, they’ve prioritized speed in the short term over long-term sustainability.

The opposite can be true for many enterprises new to the cloud. These enterprises often gravitate towards replicating their tried-and-true governance and operating model in the cloud, spending a lot of time designing processes and policies (which are important), but too little time moving actual workloads into the cloud. Without production workloads, they don’t develop the experience needed to manage increasingly complex and business-critical use cases. And without early successes, they can be reluctant to increase investment and ultimately lose momentum in their cloud strategy.

The ideal is balancing the pace of change across your people, process, and technology. That way, you can learn continuously, lead effectively, scale efficiently, and secure your environment comprehensively—the four capabilities we’ve observed that drive success in the cloud.

These are just some of the insights and best practices we can share to help you get started. To learn more, download the white paper.
Source: Google Cloud Platform

Watch and learn: App dev on GCP

Developing applications today comes with lots of choices and plenty to learn, whether you’re exploring serverless computing or managing a raft of APIs. In today’s post, we’re sharing some of our top videos on what’s new in application development on Google Cloud Platform (GCP), full of tips and tricks you can use.

1. One Platform for Your Functions, Applications, and Containers

This demo-packed session walks you through the use of Knative, our Kubernetes-based platform for building and deploying serverless apps, and shows how to get started with Knative so you can stay focused on writing code. You’ll see how it uses APIs that are familiar from GKE, and auto-scales and auto-builds to remove added tasks and overhead. The demos show how Knative spins up prebuilt containers, builds custom images, previews new versions of your apps, migrates traffic to those versions, and auto-scales to meet unpredictable usage patterns, among other steps in the build and deploy pipeline. You’ll see the cold start experience, along with preconfigured monitoring dashboards and how auto-termination works.

The takeaway: Get an up-close view into how a serverless platform like Knative works, and what it looks like to further abstract code from the underlying infrastructure.

2. Where Should I Run My Code? Serverless, Containers, VMs and More

You have a lot of key choices to make when deciding how and which technology to adopt to meet your application development needs. In this session, you’ll hear about various options for running code and the tradeoffs that may come with your decisions. Considerations include what your code is used for: Does it connect to the internet? Are there licensing considerations? Is it part of a push toward CI/CD? Is it language-dependent or kernel-limited? It’s also important to consider your team’s skills and interests as you decide where you want to focus, and where you want to run your code.

The takeaway: Understand the full spectrum of compute models (and related Google Cloud products) first, then consider the right tool for the job when choosing where to run your code.

3. Starting with Kubernetes Engine: Developer-friendly Deployment Strategies and Preparing for Growth

Kubernetes empowers developers by making hard tasks possible, rather than making simple tasks easier. Starting from that premise, this session introduces Kubernetes as a workload-level abstraction that lets you build your own deployment pipeline, then walks through how to deploy containers with Kubernetes and configure a deployment pipeline with Cloud Build. Deployment strategy advice includes using probes to check container integrity and connectedness, using configuration as code for a robust production deployment environment, setting up a CI/CD pipeline, and requesting that the scheduler provision the right resources for your container. It concludes with some tips on preparing for growth by configuring automated scaling using a requests-per-second (RPS) metric.

The takeaway: Kubernetes can help you automate deployment operations in a highly flexible and customizable way, but it needs to be configured correctly for maximum benefit. Help Kubernetes help you for best results.

4. Designing Quality APIs

There’s a lot of advice out there about APIs, so this session recommends focusing on your goals for each API you create. That could be updating or integrating software, among others. Choose a problem that’s important to solve with your API, and weigh your team and organization’s particular priorities when you’re creating that API. This session also points out some areas where common API mistakes happen, like version control or naming, and recommends using a uniform API structure. When in doubt, keep it simple and don’t fight how HTTP is actually used.

The takeaway: APIs have to do a lot of heavy lifting these days. Design the right API for the job and future-proof it as much as you can for the people and organizations who will use it down the road.

5. Life of a Serverless Event: Under the Hood of Serverless on Google Cloud Platform

This session takes a top-to-bottom look at how we define and run serverless here at Google. Serverless compute platforms make it easy to quickly build applications, but sometimes identifying and diagnosing issues can be difficult without a good understanding of how the underlying machinery works. In this session, you’ll learn how Google runs untrusted code at scale on a shared computing infrastructure, and what that means for you and your applications. You’ll learn how to build serverless applications that are optimized for high performance at scale, pick up the tips and pitfalls associated with this, and see a live demo of optimization on Cloud Functions.

The takeaway: When you’re running apps on a serverless platform, you’re focusing on managing those things that elevate your business. See how it actually works so you’re ready for this stage of cloud computing.

6. Serverless All the Way Down: Build Serverless Systems with Compute, Data, and ML

Here’s a look at what serverless is, and what it is specifically on GCP. The bottom line is that serverless brings invisible infrastructure that automatically scales, and where you’re only charged for what you use. Serverless tools from GCP are designed to spring to life when they’re needed, and scale very closely to usage needs. In this session, you’ll get a look at how the serverless pieces come together with machine learning in a few interesting use cases, including medical data transcription and building an e-commerce recommendation engine that works even when no historical data is available. Make sure to stay for the cool demo from the CEO of Smart Parking, who shows a real-time, industrial-grade IoT system that’s improving parking for cities and drivers—without a server to be found.

The takeaway: Serverless helps workloads beyond just compute: learn how, why, and when you might use it for your own apps.
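As a side note on the RPS-based scaling mentioned for session 3: the core of that advice reduces to a simple calculation. The sketch below is illustrative only; it is not the actual Kubernetes autoscaler implementation, and the function name and defaults are invented for the example.

```python
import math


def desired_replicas(current_rps, target_rps_per_replica,
                     min_replicas=1, max_replicas=100):
    """Compute a replica count from an observed request rate.

    Mirrors the shape of HPA-style scaling on a requests-per-second
    metric: scale so each replica handles roughly the target rate,
    clamped to configured bounds.
    """
    if target_rps_per_replica <= 0:
        raise ValueError("target_rps_per_replica must be positive")
    # Round up so no replica is asked to exceed its target rate.
    raw = math.ceil(current_rps / target_rps_per_replica)
    return max(min_replicas, min(max_replicas, raw))
```

For example, 250 RPS against a target of 100 RPS per replica yields three replicas; the min/max bounds keep the system from scaling to zero or sprawling without limit.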
Source: Google Cloud Platform

New Year’s resolutions for the Google Cloud crowd

You probably already set some goals for yourself for 2019, but what about for your cloud architecture? We’ve assembled some New Year’s resolutions to guide the way toward operating a faster, more efficient cloud infrastructure. Whether you tackle one or all of these this year, you’ll come away better for it.

1. Lose some data weight. This will be more fun than cutting out carbs, we promise. It’s data weight that lots of cloud and data center operators have to lose. For example, you might have unaccounted-for VMs or Compute Engine backups you don’t need anymore. Consider taking inventory and finding workload owners to reclaim capacity and save costs.

2. Exercise your data more. The cloud is opening up lots of new avenues to explore all the data you collect. BigQuery is our serverless alternative to on-premises data warehousing and analytics, and its interface will be familiar to anyone who knows SQL. Try it out with our publicly available datasets, then check out BigQuery solutions from our partners on the GCP Marketplace.

3. Save money to stretch your IT budget. The pricing model for cloud is a lot different from on-premises, and it can involve a learning curve when you’re getting started. Our GCP pricing calculator can help guide you through estimated costs for our range of products, so you can understand how pay-as-you-go works, whether it’s for Compute Engine or Cloud Functions—then start budgeting for the year ahead.

4. Learn a new technology skill. If you want to hone your cloud technology skills, there are some pretty intriguing areas to explore right now. See how AutoML works, experiment with AI, or get to know serverless with this quick-start guide to Cloud Functions.

5. Make new friends, online and in person. Lots of cloud tools entering the mainstream are largely based on the concept of openness and a community-driven mindset. Open-source tools mean your fellow developers are building the product in real time, and you can contribute code and improve products too. You can be part of a Google Cloud community, whether that means joining GitHub, becoming cloud certified with your peers, or attending Next ‘19.

6. Get more sleep while monitoring watches your cloud. With a cloud foundation in place, your next step is managing all these instances and applications. Stackdriver monitoring and logging lets you set up alerts and use collected data to make changes and improvements to your GCP systems. To keep making your infrastructure more reliable, learn more about what SRE is and how you can implement its principles.

Let us know how you’re doing sticking to your resolutions!
Source: Google Cloud Platform

App migration checklist: How to decide what to move to cloud first

[Editor’s note: This post originally appeared on the Velostrata blog. Velostrata has since come into the fold at Google Cloud, and we’re pleased to now bring you their seasoned perspective on managing your cloud migration journey. There’s more here on how Velostrata’s accelerated migration technology works.]

When you’re considering a cloud migration, you’re likely considering moving virtual machines (VMs) that may have been created over many years, by many teams, to support a huge range of applications. Moving these systems without breaking any team’s essential applications may seem daunting. It’ll require some knowledge of the applications in question to classify those apps before setting your migration plan.

In a recent blog post, we talked about the four tiers you can use to help organize how you migrate your applications to the public cloud. We had a number of requests from that post, asking us to go a bit deeper on two important considerations: the application’s status and the application’s integrations and dependencies. In this post, we’ve put together a few more app-related questions that IT should be asking, alongside some of the likely answers. Of course, every enterprise and cloud migration is different. But the highlighted answers (or notes) are likely to yield a stronger candidate for migration than others.

If you find yourself in a situation where you don’t know (and cannot obtain) the answers, that might be a sign that this app isn’t a good candidate for migration. Sometimes knowing what you don’t know is a helpful gauge when deciding on a next step.

What’s the application status?

Here, we’re looking at all the components that factor into an application’s status within your organization’s landscape. These are some of the most important questions to evaluate.

What is the criticality of this application? For example: How many users depend on it?
What is the downtime sensitivity?
- Tier 1 (highly important, 24×7 mission-critical)
- Tier 2 (moderately important)
- Tier 3 (low importance, dev/test)

What is the production level of this application?
- In production
- In staging
- In development
- In testing

What are the data considerations for this app?
- Stateful data
- Stateless data
- Other systems reliant on this data set

How was this application developed?
- Third-party purchase from a major vendor (still in business?)
- Third-party purchase from a minor vendor (still in business?)
- Written in-house (author still at company?)
- Written by a partner (still in business? still a partner?)

What are this application's operational standards? For example, what organizational, business, or technological considerations exist?
- Defined maintenance windows?
- Defined SLAs?
- Uptime-sensitive?
- Latency-sensitive?
- Accessed globally or regionally?
- Deployed manually or via automation?
Guidance: Avoiding sensitive apps is often most desirable for a first migration.

What are the specific compliance or regulatory requirements?
- ISO 27000?
- PCI/DSS?
- HIPAA?
- EU Personal Data Protection?
- GDPR?
Guidance: The fewer compliance or regulatory requirements, the better for a first migration.

What kind of documentation is readily available, and is it up to date?
- System diagram?
- Network diagram?
- Data flow diagram?
- Build/deploy docs?
- Ongoing maintenance docs?
Guidance: The more docs that exist, the better!

What are the migration implications?
- Easy to lift-and-shift as-is into the cloud
- May require some refactoring
- Needs to be modernized before migrating
- Can wait to be modernized after migrating
- Needs to be rewritten in the cloud from scratch

Any business considerations?
- Is this system used year-round or seasonally?
- Is there a supportive line-of-business owner?
- Does this app support an edge case or a general use case?
- Is this app managed by a central IT team or another team?
- Would a downtime window be acceptable for this app?
Guidance: Having more supportive owners and stakeholders is always crucial to the success of initial migrations.

What are the app integrations and dependencies?
Here, we're going one step deeper, looking at how this application ties into all your other applications and workloads. This is hugely important, since you might want to group applications into the same migration sprint if they're tightly coupled through integrations or dependencies.

What are the interdependent applications?
- SAP?
- Citrix?
- Custom or in-house apps?
Guidance: Fewer dependencies are ideal.

What are the interdependent workflows?
- Messaging?
- Monitoring?
- Maintenance/management?
- Analytics?
Guidance: Fewer dependencies are ideal.

Where are the database and storage located?
- Separate servers?
- Co-located servers?
- Is storage block- or file-level?

Any other services to analyze?
- Web services?
- RPC used either inbound or outbound?
- Backup services (and locations) in effect?
Guidance: None of these are more or less ideal; they're simply things to be aware of.

Other questions to ask:
- Unique dependencies?
- Manual processes required?
- Synchronized downtime/uptime (with other apps)?
Guidance: The goal for the first apps you migrate is to minimize complexity and labor.

Taking the time to truly understand your applications is a big part of success when migrating to the cloud. Picking the right applications to migrate first is key to building confidence in your cloud and migration strategy within your organization. Analyzing these details should help you and your IT team pick the right order for migrating your applications, which goes a long way toward achieving migration success.

Find out more here about how cloud migration works with Velostrata.
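To turn checklists like the ones above into an ordering, some teams assign rough weights to the low-risk answers and rank candidates by total score. The sketch below is a hypothetical illustration of that idea, not part of Velostrata or any GCP tooling; the field names and weights are assumptions you would tune to your own inventory.

```python
# Hypothetical first-migration scoring sketch: lower-risk apps score higher.
# The criteria and weights are illustrative assumptions, not an official rubric.

CRITERIA_WEIGHTS = {
    "tier3_low_importance": 3,   # dev/test apps are the safest first movers
    "stateless": 2,              # stateless data simplifies cutover
    "few_dependencies": 2,       # fewer integrations to untangle
    "no_compliance_scope": 2,    # no PCI/HIPAA/GDPR constraints
    "docs_up_to_date": 1,        # diagrams and runbooks exist
    "supportive_owner": 1,       # engaged line-of-business owner
    "downtime_window_ok": 1,     # a maintenance window is acceptable
}

def migration_score(app: dict) -> int:
    """Sum the weights of every criterion the app satisfies."""
    return sum(w for crit, w in CRITERIA_WEIGHTS.items() if app.get(crit))

def prioritize(apps: list[dict]) -> list[str]:
    """Order candidate apps from best to worst first-migration fit."""
    return [a["name"] for a in sorted(apps, key=migration_score, reverse=True)]

# Invented example inventory.
apps = [
    {"name": "billing-core", "stateless": False, "few_dependencies": False},
    {"name": "qa-dashboard", "tier3_low_importance": True, "stateless": True,
     "few_dependencies": True, "no_compliance_scope": True,
     "docs_up_to_date": True, "downtime_window_ok": True},
    {"name": "intranet-wiki", "stateless": False, "few_dependencies": True,
     "no_compliance_scope": True, "supportive_owner": True},
]

print(prioritize(apps))  # qa-dashboard ranks first as the lowest-risk candidate
```

The point is not the particular numbers but making the trade-offs explicit, so the team can argue about weights instead of individual apps.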
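Since tightly coupled apps often belong in the same migration sprint, the dependency answers above can also drive grouping automatically. As a sketch (the app names and edges are invented), treat recorded dependencies as an undirected graph and take its connected components as candidate sprint groups:

```python
from collections import defaultdict

def sprint_groups(dependencies: list[tuple[str, str]]) -> list[set[str]]:
    """Group apps that are transitively coupled into the same migration sprint.

    Dependencies are treated as undirected for grouping purposes: if A calls B,
    moving either one affects the other, so they should migrate together.
    """
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)

    seen: set[str] = set()
    groups: list[set[str]] = []
    for app in list(graph):
        if app in seen:
            continue
        # Walk outward from this app to collect one connected component.
        group, frontier = set(), [app]
        while frontier:
            node = frontier.pop()
            if node in group:
                continue
            group.add(node)
            frontier.extend(graph[node] - group)
        seen |= group
        groups.append(group)
    return groups

# Invented example inventory: two independent clusters emerge.
deps = [
    ("webstore", "orders-db"), ("orders-db", "reporting"),
    ("hr-portal", "payroll"),
]
print(sprint_groups(deps))
```

Each returned group can then be scheduled as one sprint, which keeps synchronized downtime windows and shared data sets inside a single cutover.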
Quelle: Google Cloud Platform

Security trends to pay attention to in 2019 and beyond

Software security requires good hygiene and constant diligence to protect your organization and users from known threats; it also requires working proactively to identify and address emerging risks.

Here at Google Cloud, we help you do both. We build products that make security easy, from automatic protections that keep you safe behind the scenes to tools and recommendations that help you tailor your security posture to your organization's specific needs (check out our "taking charge of your security" posts for some best practices). We're always hunting for, and thinking about, ways to protect against new and emerging threats, as demonstrated by the "Spectre" and "Meltdown" CPU vulnerabilities that our Project Zero team revealed earlier this year.

As we kick off 2019, here are some security trends to watch, from some of the people here at Google Cloud who think about security every day.

Attacks that skirt two-step verification will push high-value targets to adopt stronger 2SV methods.
Two-step verification (2SV), also known as two-factor authentication (2FA), goes a long way toward protecting user accounts and has become standard for most modern applications. However, not all 2SV methods are created equal: attackers are finding new ways to circumvent weaker 2SV methods, such as intercepting one-time passwords like SMS codes through phishing attacks and phone-number takeover. Typically, these are targeted attacks against high-value users such as executives, political figures, or cloud admins. With increased risks ahead, we predict more services will adopt stronger, phishing-resistant 2SV methods built on FIDO standards. This will let users authenticate with security keys, gaining stronger protection against phishing attacks and account takeovers.
— Christiaan Brand, Product Manager, Google Cloud

We'll see broader strides toward a true "passwordless" era, thanks to mainstream adoption of new standards.
Secure passwordless login experiences will start appearing in the mainstream in 2019. This will mark the start of a broader "passwordless" era, enabled by W3C and FIDO APIs that will appear in major browsers and OS platforms. To start, websites will begin to offer the ability to log back in by just presenting a biometric such as a fingerprint. Comprehensive adoption of passwordless first-time logins will take more time, but in the future we can expect simple and highly secure login experiences, such as logging in to a website on your computer simply by unlocking your phone nearby.
— Sam Srinivas, Director, Product Management

Zero-trust architectures move from idea stage to implementation stage.
"Zero trust" architectures, and how to implement them, have been an increasingly hot topic in security as organizations embrace more business-critical cloud services and face growing employee demands for anytime, anywhere, any-device access to business resources. Google's BeyondCorp model was the original enterprise implementation of this concept. Expect this to change in 2019 as more providers and vendors make packaged commercial solutions available and the concept is implemented at an architectural level in projects like Istio.
— Jennifer Lin, Director, Product Management

Self-managed cloud encryption gets more visibility.
Self-managed encryption keys can provide additional control over data access, add audit clarity, help meet policy or regulatory requirements, and offer a measure of control over provider access. However, in the coming years there will be cases of customer-managed key mishandling that lead to high-profile data loss. Cloud providers will continue to build out native key management capabilities, extending provider-managed key coverage across more services and offering customers more granular control options. These options can strike a good balance between customer control and robust durability and availability. So while self-managed key options will continue to evolve, customers will increasingly look to provider-managed key services to manage their keys.
— Scott Ellis, Product Manager, Google Cloud

Attackers will turn their attention to more sophisticated attacks on cloud-native environments like containers.
We've seen public attacks on container deployments, but most of them went after low-hanging fruit or mimicked attacks you would be just as likely to see on a VM: misconfigurations, credentials and secrets in public code, and so on. Think of them as doors that were accidentally left open. As container adoption increases, we'll start to see more advanced attacks that are specific to container architectures and container vulnerabilities. Many admins will look for strong cloud-managed services that offer best-practice container security by default.
— Maya Kaczorowski, Product Manager, Google Cloud

Vulnerabilities in open-source software will become increasingly common, requiring more rigorous testing.
Introducing vulnerabilities into open-source software via source-code repositories is an effective attack method, since many downstream users consume open-source software without inspecting or testing it themselves. A widespread compromise of this kind is not unlikely, and might be just the thing that drives more companies to use continuous vulnerability scanning tools.
— Matthew O'Connor, Product Manager, OCTO

As a result of GDPR, reported data incidents on legacy systems will more than double from the previous year.
The EU's GDPR requires organizations to report data incidents involving EU personally identifiable information (PII) to data protection authorities or risk large fines. In the UK alone, there were 30% more self-reported data breaches in the first half of 2018 than in the whole of 2017. The first fines under GDPR are likely to be issued in 2019, and the increased transparency, along with the resulting public and regulatory scrutiny, may highlight the fragility of legacy systems and ultimately drive adoption of the cloud, where enterprise privacy management tooling and processes have been developed specifically to support GDPR.
— James Snow, Customer Engineering Manager, Security & Compliance

Highly regulated enterprises will choose cloud providers that offer real-time monitoring of, and controls over, access to their workloads.
While one of the major benefits of being in the public cloud is having your infrastructure managed for you, customers have often had limited visibility and control over activity conducted by their cloud provider. In 2019, companies, especially those in highly regulated industries, will increasingly expect full visibility and control over what actions cloud administrators can take on their data. These customers will expect more assurance that they are in control of their data and workloads.
— Joseph Valente and Michee Smith, Product Managers, Google Cloud
Quelle: Google Cloud Platform

Last Month Today: December on GCP

2018 was a busy year for cloud technology, and December capped it off with news and announcements from around Google Cloud Platform (GCP). Here's what caught your attention last month.

Securing the cloud is a team effort
There's a new sheriff in town: the Cloud Security Command Center is now available for GCP users in beta. This central portal lets you track potential risks and vulnerabilities across your cloud asset inventory, get alerts on anomalies, see sensitive data in your cloud storage, and review access rights.

In addition, we announced a partnership with Palo Alto Networks that integrates their tools with Cloud Security Command Center. Palo Alto Networks will run both its Application Framework and GlobalProtect cloud service on GCP, bringing even more visibility and compliance readiness to users.

In December, we released managed base images for Google Kubernetes Engine (GKE). Historically, GKE users brought their own container images, but now, with managed base images, you can point your Dockerfile's FROM line to the Cloud Marketplace. Those images have received the latest patches from upstream and let you follow security best practices for base images.

The cloud of the future takes shape
Moving to the cloud is a leap for businesses, with lots of decisions to make. We laid out our open cloud vision last month and delved into some of the tools involved in building your apps and services on containers and microservices. Using Istio as a service mesh on top of GKE, along with its new networking features, is the foundation we're excited to share and continue building.

If machine learning (ML) is something you've integrated into your technology stack, check out this look at faster ML model training to get better cost and performance. As you develop real-world applications powered by ML, you'll need to scale to meet real-world challenges. We trained models using our Cloud TPU Pod accelerators and measured performance and cost.

Learn about GCP in person this year
Registration opened last month for our Next '19 conference, scheduled for April 9-11 in San Francisco, CA. You can expect lots of great sessions and presenters, with even more technical content and learning opportunities than before. Join us!

We wish you a productive, happy, and successful 2019!
Quelle: Google Cloud Platform