What’s the difference between Azure Monitor and Azure Service Health?

It’s a question we often hear. After all, they’re similar and related services. Azure Monitor helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on. Azure Service Health helps you stay informed and take action when Azure service issues like outages and planned maintenance affect you. So what’s the difference?

Azure Monitor and Azure Service Health are complementary services that you will often use together when troubleshooting issues. Let’s walk through a typical scenario: your app is having a problem and experiencing downtime. Your users are complaining and reporting the issue. What’s wrong? You start troubleshooting.

Step 1: Assess the health of Azure with Azure Service Health

As you start troubleshooting, you first want to answer the question: is it me or is it Azure? To make sure Azure as a platform isn’t having any problems, you’ll want to check Azure Service Health. Better yet, you might already know about any issues affecting you if you have Azure Service Health alerts set up. More on this later.

You visit Azure Service Health in the Azure portal, where you check to see if there are any active issues, outages, planned maintenance events, or other health advisories affecting you.

At this stage, you might be tempted to visit the Azure status page. Instead, we recommend checking Service Health, as outlined above. Why? The status page only reports on major, widespread outages and doesn’t include any information about planned maintenance or other health advisories. To understand everything on the Azure side that might affect your availability, you need to visit Service Health.

So you’ve checked Service Health and determined there aren’t any known issues at the Azure level, which means the issue is likely on your side. What next?

Step 2: Review the health of your apps with Azure Monitor

You’ll want to dive into Azure Monitor to see if you can identify any issues on your end. Azure Monitor gives you a way to collect, analyze, and act on all the telemetry from your cloud and on-premises environments. These insights can help you maximize the availability and performance of your applications.

Azure Monitor works by ingesting metrics and logs data from a wide variety of sources—application, OS, resources, and more—so you can visualize, analyze, and respond to what’s going on with your apps.

In our troubleshooting example, using Azure Monitor you might find there’s a lot of demand for your app during early-morning peak hours, and you’re running into capacity issues with your infrastructure (such as VMs or containers). Now that you’ve determined the problem, you fix it by scaling up.
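For instance, a quick way to confirm such a pattern from the command line is a Log Analytics query. This is only an illustrative sketch: the workspace ID, table name, and query below are placeholders, not from the original scenario, and the command may require the Azure CLI log-analytics extension.

# Count requests per hour to spot the early-morning spike (illustrative table and workspace)
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "requests | summarize count() by bin(timestamp, 1h) | order by timestamp asc"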

Well done, you’ve successfully used Service Health and Monitor to diagnose and solve the issue. But you’re not quite finished yet.

Step 3: Set up alerts for future events

To prevent this issue from happening again, you’ll want to use Monitor to set up log alerts and autoscaling to notify you and help you respond more quickly. At the same time, you should set up Service Health alerts so you’re aware of any Azure platform-level issues that might occur.
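As a minimal sketch with the Azure CLI (the resource group, scale set name, and thresholds are placeholders, not prescriptive values), autoscaling for a VM scale set might look like this:

# Create an autoscale setting for a VM scale set
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name my-autoscale \
  --min-count 2 --max-count 10 --count 2

# Add two instances whenever average CPU exceeds 70% over five minutes
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name my-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 2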

As you set up these alerts, you’ll find that one key similarity between Service Health and Azure Monitor is their alerting platform. They both use the same alert definition workflow and leverage the same action rules and groups. This means that you can set up an action group once and use it multiple times for different scenarios.
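As a hedged sketch with the Azure CLI (all names are placeholders), you could create one action group and attach it to a Service Health alert; the same group can then back your Azure Monitor alerts as well:

# Create a reusable action group that emails the on-call alias
az monitor action-group create \
  --resource-group my-rg \
  --name ops-actions \
  --short-name ops \
  --action email oncall ops@example.com

# Attach the group to an activity log alert scoped to Service Health events
az monitor activity-log alert create \
  --resource-group my-rg \
  --name service-health-alert \
  --condition category=ServiceHealth \
  --action-group ops-actions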

Learn more about Service Health alerts and recommended best practices in our blog “Three ways to get notified about Azure service issues.”

Recap: Is it Azure or is it me?

Azure Service Health and Azure Monitor answer different parts of the question “Is it Azure or is it me?” Service Health helps you assess the health of Azure, while Azure Monitor helps you determine if there are any issues on your end. Both services use the same alerting platform to keep you notified and informed of the availability and performance of your Azure workloads. Get started with Service Health and Azure Monitor today.
Source: Azure

Introducing Spinnaker for Google Cloud Platform—continuous delivery made easy

Development teams want to adopt continuous integration (CI) and continuous delivery (CD) to identify and correct problems early in the development process, and to make the release process safe, low-risk, and quick. However, with CI/CD, developers often spend more time setting up and maintaining end-to-end pipelines and crafting deployment scripts than writing code.

Spinnaker, developed jointly by Google and Netflix, is an open-source, multi-cloud continuous delivery platform. Companies such as Box, Cisco, and Samsung use Spinnaker to create fast, safe, repeatable deployments. Today, we are excited to introduce the Spinnaker for Google Cloud Platform solution, which lets you install Spinnaker in Google Cloud Platform (GCP) with a couple of clicks and start creating pipelines for continuous delivery.

Spinnaker for GCP comes with built-in deployment best practices that teams can leverage whether their resources (source code, artifacts, other build dependencies) are on-premises or in the cloud. Teams get the flexibility of building, testing, and deploying to Google-managed runtimes such as Google Kubernetes Engine (GKE), Google Compute Engine (GCE), or Google App Engine (GAE), as well as to other clouds or on-prem deployment targets for hybrid and multi-cloud CD.

Spinnaker for GCP integrates Spinnaker with other Google Cloud services, allowing you to extend your CI/CD pipeline and build security and compliance into the process. For instance, Cloud Build gives you the flexibility to create Docker containers or non-container artifacts. Likewise, integration with Container Registry vulnerability scanning helps to automatically scan images, and Binary Authorization ensures that you only deploy trusted container images. Then, for monitoring hybrid deployments, you can use Stackdriver to gain visibility into the performance, uptime, and overall health of your application, and of Spinnaker itself.

Google’s Chrome Ops Developer Experience team uses Spinnaker to deploy some of their services. “Getting a new Spinnaker instance up and running with Spinnaker for GCP was really simple,” says Ola Karlsson, SRE on the Chrome Ops Developer Experience team. “The solution takes care of the details of managing Spinnaker and still gives us the flexibility we need. We’re now using it to manage our production and test Spinnaker installations.”

Spinnaker for GCP lets you add sample pipelines and applications to Spinnaker that demonstrate best practices for deployments to Kubernetes, VMs, and more. DevOps teams can use these as starting points to provide “golden path” deployment pipelines tailored to their company’s requirements.

“We want to make sure that the solution is great both for developers and DevOps or SRE teams,” says Matt Duftler, Tech Lead for Google’s Spinnaker effort. “Developers want to get moving fast with the minimum of overhead. Platform teams can allow them to do that safely by encoding their recommended practice into Spinnaker, using Spinnaker for GCP to get up and running quickly and start onboarding development teams.”

The Spinnaker for GCP advantage

The availability of Spinnaker for GCP gives customers a fast and easy way to set up Spinnaker in a production-ready configuration, optimized for GCP.
Some other benefits include:

Secure installation: Spinnaker for GCP supports one-click HTTPS configuration with Cloud Identity-Aware Proxy (IAP), letting you control who can access the Spinnaker installation.
Automatic backups: The configuration of your Spinnaker installation is automatically backed up securely, for auditing and fast recovery.
Integrated auditing and monitoring: Spinnaker for GCP integrates Spinnaker with Stackdriver for simplified monitoring, troubleshooting, and auditing of changes and deployments.
Simplified maintenance: Spinnaker for GCP includes many helpers to simplify and automate maintenance of your Spinnaker installations, including configuring Spinnaker to deploy to new GKE clusters and to GCE or GAE in other GCP projects.

Existing Spinnaker users can migrate to Spinnaker for GCP today if they’re already using Spinnaker’s Halyard tool to manage their Spinnaker installations.
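Once your instance is running, pipelines themselves can be managed as code. As a minimal sketch using the open-source spin CLI (the application name, owner email, and pipeline file below are placeholders), you might keep a pipeline definition in source control and push it with:

# Create an application, then save a pipeline definition from a JSON file
spin application save --application-name my-app --owner-email dev@example.com --cloud-providers kubernetes
spin pipeline save --file pipeline.json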
Source: Google Cloud Platform

Digital transformation with legacy systems simplified

Intelligent insurance means improving operations, enabling revenue growth, and creating engaging experiences—the results of digital transformation. The cloud has arrived with an array of technical capabilities that can equip an existing business to move into the future. However, insurance carriers face a harder road when transforming business processes and IT infrastructures. Traditional policy and claim management solutions lack both cloud-era agility and the modularity required to react quickly to market forces. And legacy systems cannot be decommissioned until new systems are fully operational and tested, meaning some overlap between old and new is unavoidable.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

The need for efficient automation

The prevailing approach to upgrading enterprise software is to engage in large-scale IT projects that may take years and considerable expense to execute. Delaying only increases the costs, especially with the burden of continuing (and increasing) compliance. But more importantly, delay results in a significant opportunity cost. Due to competition, insurers are under pressure to pursue lower costs overall, especially in claim handling. New insurance technology also creates pressure to adopt new distribution models and to automate internal workflows and supply chains.

A platform built for transformation

Codafication’s solution is named Unity (not to be confused with the Unity game engine platform). Codafication calls Unity an ecosystem Platform-as-a-Service (ePaaS). It enables insurance carriers to accelerate their digital transformation through secure, bi-directional data integration with core and legacy systems. At the same time, the platform enables Codafication’s subscribers to use new cloud-native apps and services. The increase in connectivity means customers, staff, and supply chains can integrate more easily and with greater efficiency.

Unity seeks to address the changing expectations of insured customers without disrupting core policy and claim management functions within the enterprise. Codafication stresses a modular approach to implementing Unity. Their website provides a catalog of components such as project management, supply chain and resource planning, and financial control.

Potential inputs for the system span a wide variety of processes, from legacy core systems (expected) to robotic processes (a surprise). The output is equally versatile: dashboards and portals along with data lake and IoT workflow apps.

Insurers can take an iterative and modular approach to solving high value challenges rapidly. Unity provides all the tools required to accelerate digital transformation. Other noteworthy features include:

Custom extensions: use any programming language supported by Docker, in combination with Unity SDKs, to build custom frontend and backend solutions.
Off-the-shelf apps: plug in applications and services (from Codafication) designed for the insurance industry.
Scalability: cloud-native technology, underpinned by Kubernetes, can be hosted in the cloud or in a multi-cloud scenario, with a mix of Docker, serverless and on-premises options.
GraphQL API: leverage the power of a graph database to unlock data silos and find relationships between data stores from legacy systems. Integrate with cloud vendors, AI services, and best-of-breed services through a single, secure, scalable, and dynamic API (see the sketch just after this list).
Integrative technologies: create powerful custom IoT workflows with logic hooks, webhooks, and real-time data subscriptions.
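As an illustration of what the GraphQL approach enables (the endpoint, schema fields, and token below are hypothetical, not Unity’s actual API), a single query can traverse records that originate in separate legacy systems:

# Fetch a claim plus its related policy and suppliers in one round trip (hypothetical schema)
curl -X POST https://unity.example.com/graphql \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ claim(id: \"C-1001\") { status policy { holderName } suppliers { name } } }"}'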

Benefits

Through Unity, organizations can interconnect everything and relate data on the fly. Developers can leverage legacy core systems, middleware, and robotics using a microservice architecture driven by a powerful service mesh and extensible framework.
Teams can leverage this infrastructure to deliver (in parallel) solutions into the platform and into the hands of their users. Insurance carriers will find new use cases (such as data science and AI) and develop apps rapidly, delivering projects faster, at lower cost, and with less risk.
Projects can be secured and reused across the infrastructure. This accelerates digital transformation projects without disrupting existing architecture, and it is a key first step toward implementing modern cloud-native technologies such as AI and IoT.
The ‘modernize now, decommission later’ approach to core legacy systems lets an insurer stay competitive and relevant while providing a longer runway for decommissioning aging legacy systems.

Azure services

Unity leverages the power of Microsoft Azure to provide secure, private cloud capability across the globe, drawing on services such as:

Azure Kubernetes Service
Azure Application Insights
Azure Monitor
Azure Security Center
Azure Blob Storage
Azure Database for PostgreSQL

Next steps

To learn more about other industry solutions, go to the Azure for insurance page.

To find out more about this solution, go to Unity Cloud and click Contact me.
Source: Azure

A dozen reasons why Cloud Run complies with the Twelve-Factor App methodology

With the recent release of Cloud Run, it’s now even easier to deploy serverless applications on Google Cloud Platform (GCP) that are automatically provisioned, scaled up, and scaled down. But in a serverless world, being able to ensure your service meets the twelve factors is paramount. The Twelve-Factor App denotes a paradigm that, when followed, should make it frictionless for you to scale, port, and maintain web-based software as a service. The more factors your environment satisfies, the better.

So, on a scale of 1 to 12, just how twelve-factor compatible is Cloud Run? Let’s take the factors, one by one.

The Twelve Factors

I. CODEBASE
One codebase tracked in revision control, many deploys

Each service you intend to deploy on Cloud Run should live in its own repository, whatever your choice of source control software. When you want to deploy your service, you need to build the container image, then deploy it. For building your container image, you can use a third-party container registry, or Cloud Build, GCP’s own build system. You can even supercharge your deployment story by integrating Build Triggers, so any time you, say, merge to master, your service builds, pushes, and deploys to production. You can also deploy an existing container image as long as it listens on a PORT, or find one of the many images sporting a shiny Deploy on Cloud Run button.

II. DEPENDENCIES
Explicitly declare and isolate dependencies

Since Cloud Run is a bring-your-own-container environment, you can declare whatever you want in this container, and the container encapsulates the entire environment. Nothing escapes, so two containers won’t conflict with each other. When you need to declare dependencies, these can be captured using environment variables, keeping your service stateless. It is important to note that there are some limitations to what you can put into a Cloud Run container due to the environment sandboxing, and to which ports can be used (which we’ll cover later in Section VII).

III. CONFIG
Store config in the environment

Yes, Cloud Run supports stored configuration in the environment by default. And it’s mandatory: you must listen for requests on PORT, otherwise your service will fail to start. To be truly stateless, your code goes in your container, and configurations are decoupled by way of environment variables. These can be declared when you create the service, in the Optional Settings. Don’t worry if you miss this setting when you declare your service. You can always edit it again by clicking “+ Deploy New Revision” when viewing your service, or by using the --update-env-vars flag in gcloud beta run deploy. Each revision you deploy is immutable, which means revisions are reproducible, as the configuration is frozen. To make changes you must deploy a new revision. For bonus points, consider using berglas, which leverages Cloud KMS and Cloud Storage to secure your environment variables. It works out of the box with Cloud Run (and the repo even comes with multiple language examples).

IV. BACKING SERVICES
Treat backing services as attached resources

Much like you would connect to any external database in a containerized environment, you can connect to a plethora of different hosts in the GCP universe. And since your service cannot have any internal state, to have any state you must use a backing service.
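For example, here is a minimal sketch of wiring a backing database into a service through environment variables at deploy time (the service, image, and variable names are placeholders, not from the original post):

# Point the service at an external database via environment variables
gcloud beta run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image \
  --update-env-vars DB_HOST=10.0.0.5,DB_NAME=appdb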
V. BUILD, RELEASE, RUN
Strictly separate build and run stages

Having separate build and run stages is how you deploy in Cloud Run land! If you set up your continuous deployment back in Section I, then you’ve already automated that step. If you haven’t, building a new version of your Cloud Run service is as easy as building your container image with Cloud Build:

gcloud builds submit --tag gcr.io/YOUR_PROJECT/YOUR_IMAGE .

and then deploying the built container image:

gcloud beta run deploy YOUR_SERVICE --image gcr.io/YOUR_PROJECT/YOUR_IMAGE

Cloud Run creates a new revision of the service, ensures the container starts, and then re-routes traffic to this new revision for you. If for any reason your container image encounters an error, the service stays active on the old version, and no downtime occurs. You can also create continuous deployment by configuring Cloud Run automations using Cloud Build triggers, further streamlining your build, release, and run process.

VI. PROCESSES
Execute the app as one or more stateless processes

Each Cloud Run service runs its own container, and each container should have one process. If you need multiple concurrent processes, separate those out into different services, and use a stateful backing service (Section IV) to communicate between them.

VII. PORT BINDING
Export services via port binding

Cloud Run follows modern architecture best practices: each service must expose itself on a port number, specified by the PORT environment variable. This is the fundamental design of Cloud Run: any container you want, as long as it listens on port 8080. Cloud Run does support outbound gRPC and WebSockets, but does not currently work with these protocols inbound.

VIII. CONCURRENCY
Scale out via the process model

Concurrency is a first-class factor in Cloud Run. You declare the maximum number of concurrent requests your container can receive. If the incoming concurrent request count exceeds this number, Cloud Run will automatically scale by adding more container instances to handle all incoming requests.
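As a hedged sketch (the service and image names are placeholders), you can set this limit with the --concurrency flag at deploy time:

# Allow each container instance to serve up to 80 concurrent requests
gcloud beta run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image \
  --concurrency 80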
IX. DISPOSABILITY
Maximize robustness with fast startup and graceful shutdown

Since Cloud Run handles scaling for you, it’s in your best interest to ensure your services are as efficient as they can be. The faster they are to start up, the more seamless scaling can be. There are a number of tips around how to write effective services, so be sure to consider the size of your containers, the time they take to start up, and how gracefully they handle errors without terminating.

X. DEV/PROD PARITY
Keep development, staging, and production as similar as possible

A container-based development workflow means that your local machine can be the development environment, and Cloud Run can be your production environment. Even if you’re running on a non-Linux environment, a local Docker container should behave in the same way as the same container running elsewhere. It’s always a good idea to test your container locally when developing; testing locally helps you achieve a more efficient iterative development strategy. To ensure that you get the same port-binding behaviour as Cloud Run in production, make sure you run with a port flag:

PORT=8080 && docker run -p 8080:${PORT} -e PORT=${PORT} gcr.io/[PROJECT_ID]/[IMAGE]

When testing locally, consider whether you’re using any external GCP services, and ensure you point Docker to the authentication credentials. Once you’ve confirmed your service is sound, you can deploy the same container to a staging environment, and after confirming it’s working as intended there, to a production environment. A GCP project can host many services, so it’s recommended that your staging and production (or green and blue, or however you wish to call your isolated environments) are separate projects. This also ensures isolation between databases across environments.

XI. LOGS
Treat logs as event streams

Cloud Run uses Stackdriver Logging out of the box. The “Logs” tab on your Cloud Run service view will show you what’s going on under the covers, including log aggregation across all dynamically created instances. Stackdriver Logging automatically captures stdout and stderr, and there may also be a native client for Logging in your preferred programming language. In addition, since logs are captured in Stackdriver Logging, you can use the tools available for Stackdriver Logging to further work with your logs; for example, exporting to BigQuery.

XII. ADMIN PROCESSES
Run admin/management tasks as one-off processes

Administration tasks are outside the scope of Cloud Run. If you need to do any project configuration, database administration, or other management changes, you can perform these tasks using the GCP Console, the gcloud CLI, or Cloud Shell.

A near-perfect score, as a matter of fact(or)

With the exception of one factor being outside of scope, Cloud Run maps near perfectly to the Twelve-Factor methodology, which means it will map well to scalable, manageable infrastructure for your next serverless deployment. To learn more about Cloud Run, check out this quickstart.
Source: Google Cloud Platform

Cloud chapter two: How a hybrid cloud strategy can transform business

In the first chapter of cloud, we saw that enterprises were primarily focused on cost management and driving new workload innovation on the cloud. This included everything from building cloud-native applications to migrating less complex and more easily portable workloads to the public cloud. While adoption has grown rapidly, to date only about 20 percent of enterprise workloads have moved to the cloud, according to a study by McKinsey & Company.
We’re now beginning chapter two, which is focused on driving the remaining 80 percent of enterprise workloads to the cloud. This will help businesses unlock new insights and value from their data using next-generation tools like artificial intelligence (AI), analytics, blockchain, and more. These workloads are often mission-critical and run at the heart of the enterprise. Moving them will not be easy, and a one-size-fits-all model will not work.
Driving new business value with hybrid cloud solutions
While the possibilities are endless, the cloud journey can be daunting for enterprises that have unique regulatory and data requirements and are currently running anywhere from five to fifteen different clouds from multiple providers.
This is why businesses need to consider a hybrid cloud approach, which helps them build, deploy, and manage applications and data running on-premises, in private clouds, and in public clouds from multiple vendors. With a combination of innovative technology and industry expertise, underpinned by security and a focus on open, enterprise-grade solutions, IBM is already helping move some of the world’s largest enterprises into the next chapter of cloud.
For example, Harley-Davidson Motor Company, the iconic American motorcycle manufacturer, is using IBM Cloud, AI, and Internet of Things (IoT) technologies to reimagine the everyday experience of riding. Their LiveWire H-D Connect service, built on the IBM Cloud, provides cellular connectivity and links a LiveWire owner with their motorcycle through their smartphone using the latest version of the Harley-Davidson App. This platform is the foundation on which Harley-Davidson will provide its riders with new services and insights for its first-ever production electric vehicle.
Delivering enhanced global reach, scale and services
The need is clear across nearly every industry and geography, from clients like Harley-Davidson to other major brands like ExxonMobil, Vodafone Business, and Whirlpool. They want to infuse existing IT and private cloud environments with new public cloud capabilities like AI and analytics in a secured, globally consistent manner. Moreover, they need to be able to easily choose where to deploy their workloads across multiple environments (on-premises, private, and public cloud), which requires a commitment to open source and increased automation and management. This hybrid cloud approach is helping our clients launch new business services, completely transform user and employee experiences, and much more.
That’s why IBM continues to unveil new capabilities and services across our entire hybrid cloud portfolio, as well as expand the global reach, scale, and services of the IBM public cloud. Here are just a few public cloud features that IBM has introduced this year:

Our sixth IBM Cloud region, which is in Sydney, Australia, and features three availability zones for high availability and resiliency.
IBM Cloud Virtual Private Cloud (VPC) providing the logical isolation and security of a private cloud with the availability, cost-effectiveness, and scalability of the public cloud, simplifying the deployment of secure, available, and resilient workloads.
IBM Power Systems Virtual Server on IBM Cloud for AIX and IBM i workloads with use cases such as disaster recovery, dev/test environments, and partial IT infrastructure moves and more.
IBM Cloud for VMware Solutions integration of VMs and containers along with IBM Cloud security services and additional infrastructure options.
New IBM Cloud Hyper Protect Services to provide encryption key management with a dedicated cloud hardware security module (HSM) built on the only FIPS 140-2 level 4-based technology certification offered by a public cloud provider.
Managed Istio and Managed Knative on the IBM Cloud Kubernetes Service so that developers can quickly build and deploy enterprise-scale container-based and serverless applications across hybrid environments.

And with the close of the Red Hat acquisition, we are bringing together Red Hat’s open hybrid cloud technologies with the unmatched scale and depth of IBM innovation and industry expertise, and sales leadership in more than 175 countries. Together, IBM and Red Hat will accelerate innovation by offering a next-generation hybrid multicloud platform. Based on open source technologies, such as Linux and Kubernetes, the platform will allow businesses to securely deploy, run and manage data and applications on-premises and on private and multiple public clouds.  This consistency regardless of deployment (private, public or a third-party cloud) will be a game changer for the IBM cloud strategy.
This continued dedication to innovation and client-first transformation has helped IBM build a $19.5 billion cloud business, with clients relying on IBM Cloud to help them turn the page on the next chapter of their cloud journeys. It’s going to be an amazing journey. Hop on, strap in, and let’s go!
Learn more about the next chapter and what analysts are saying about the hybrid cloud.
Source: Thoughts on Cloud