Planet scale operational analytics and AI with Azure Cosmos DB

We’re excited to announce new Azure Cosmos DB capabilities at Microsoft Build 2019 that enable anyone to easily build intelligent globally distributed apps running at Cosmos scale:

Planet scale, operational analytics with built-in support for Apache Spark in Azure Cosmos DB
Built-in Jupyter notebooks support for all Azure Cosmos DB APIs

See the other Azure Cosmos DB announcements and hear voices of our customers.

Built-in support for Apache Spark in Azure Cosmos DB

Our customers love the fact that Azure Cosmos DB enables them to elastically scale throughput with guaranteed low latency worldwide, and they also want to run operational analytics directly against the petabytes of operational data stored in their Azure Cosmos databases. We are excited to announce the preview of native integration of Apache Spark within Azure Cosmos DB. You can now run globally distributed, low latency operational analytics and AI on transactional data stored within your Cosmos databases. This provides the following benefits:

Fast time-to-insights with globally distributed Spark. With the native Apache Spark support on your multi-mastered globally distributed Cosmos database, you can now get blazing fast time-to-insight all around the world. Since your Cosmos database is globally distributed, all the data is ingested and queries are served against the local database replica closest to both the producers and the consumers of data, all around the world.
Fully-managed experience and SLAs. Apache Spark jobs enjoy the industry-leading, comprehensive 99.999% SLAs offered by Azure Cosmos DB, without the hassle of managing separate Apache Spark clusters. Azure Cosmos DB automatically and elastically scales the compute required to execute your Apache Spark jobs across all Azure regions associated with your Cosmos database.
Efficient execution of Spark jobs on multi-model operational data. All your Spark jobs are executed directly on the indexed multi-model data stored inside the data partitions of your Cosmos containers without requiring any unnecessary data movement.
OSS APIs for transactional and analytical data processing. Along with using the familiar OSS client drivers for Cassandra, MongoDB, and Gremlin (along with the Core SQL API) for your operational workloads, you can now use Apache Spark for your analytics – all operating on the same underlying globally distributed data stored in your Cosmos database.
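As a rough sketch, reading a Cosmos container into a Spark DataFrame with the azure-cosmosdb-spark connector looks like the following. The endpoint, key, and database/container names are placeholders, and the exact option keys should be double-checked against the connector's documentation:

```python
# Sketch: configuration for reading a Cosmos container into Spark with the
# azure-cosmosdb-spark connector. All values below are placeholders.
read_config = {
    "Endpoint": "https://<your-account>.documents.azure.com:443/",
    "Masterkey": "<your-account-key>",
    "Database": "retail",
    "Collection": "orders",
    # Optional: push a filter down to the Cosmos container instead of
    # pulling every document into Spark first.
    "query_custom": "SELECT c.id, c.total FROM c WHERE c.total > 100",
}

# In a Spark session with the connector on the classpath:
# df = (spark.read
#           .format("com.microsoft.azure.cosmosdb.spark")
#           .options(**read_config)
#           .load())
# df.groupBy("id").count().show()
```

Because the connector reads directly from the container's data partitions, the same configuration works against any region replica of a globally distributed Cosmos database.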

We believe that the native integration of Apache Spark into Azure Cosmos DB bridges the divide between transactional and analytical processing that has been one of the major customer pain points when building cloud-native applications at global scale.

Several of Azure’s largest enterprise customers are running globally distributed operational analytics on their Cosmos databases with Apache Spark. Coca-Cola is one such customer; watch their story.

Coca-Cola using Azure Cosmos DB for globally-distributed operational analytics

“Being able to scale globally and have insights that are actually delivered within minutes at a global scale is very important for us. Putting our data in a service like Azure Cosmos DB allows us to draw insights across the world much faster, going from the hours that we used to take a couple years ago, down to minutes.”
–    Neeraj Tolmare, CIO, Global Head of Digital & Innovation at The Coca-Cola Company

Explore more of the Azure Cosmos DB API for Apache Spark.

Cosmic notebooks

We are also thrilled to announce the preview of Jupyter notebooks running inside Azure Cosmos DB, available for all APIs (including Cassandra, MongoDB, SQL, Gremlin, and Apache Spark) to further enhance the developer experience. With native notebook support for all Azure Cosmos DB APIs and data models, developers can now interactively run queries, execute ML models, and explore and analyze the data stored in their Cosmos databases. The notebooks also make it easy to explore stored data, build and train machine learning models, and perform inferencing on the data, all from the familiar Jupyter environment directly inside the Azure portal.

Learn more about Jupyter notebooks.

 
Built-in support for Jupyter notebooks in Azure Cosmos DB

We also announced a slew of new capabilities and improvements for developers, including a new API for etcd that offers native support for Azure Cosmos DB-backed etcd to power your self-managed Kubernetes clusters on Azure, support for OFFSET/LIMIT in our SQL API, and other SDK improvements.
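The new server-side skip support makes pagination straightforward. As an illustrative sketch, a small Python helper can build a paginated Cosmos SQL query (the OFFSET ... LIMIT clause is Cosmos SQL syntax; the helper itself is hypothetical):

```python
def paged_query(page: int, page_size: int = 20) -> str:
    """Build a Cosmos SQL query that skips whole pages server-side
    using OFFSET/LIMIT, instead of fetching and discarding items
    on the client."""
    if page < 0 or page_size <= 0:
        raise ValueError("page must be >= 0 and page_size > 0")
    offset = page * page_size
    return (
        "SELECT * FROM c ORDER BY c._ts "
        f"OFFSET {offset} LIMIT {page_size}"
    )

# Example: the third page of 20 items
print(paged_query(2))  # SELECT * FROM c ORDER BY c._ts OFFSET 40 LIMIT 20
```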

We are extremely grateful to our customers, who are building amazingly cool, globally distributed apps and trusting Azure Cosmos DB with their mission critical workloads at massive scale. Their stories inspire us.

Have questions? Email us at AskCosmosDB@microsoft.com any time.
Try out Azure Cosmos DB for free. (No credit card required)
For the latest Azure Cosmos DB news and features, stay up-to-date by following us on Twitter #CosmosDB, @AzureCosmosDB.

Source: Azure

OpenShift Commons Gathering at Red Hat Summit Boston 2019 Recap [with Slides]

It’s a wrap! The OpenShift Commons Gathering at Red Hat Summit showcased 15 OpenShift production case studies and technical updates from project engineers and architects. The Gathering featured speakers from NASA, Volkswagen, UPS, RBC, Microsoft Azure, VMware, and Red Hat’s CEO Jim Whitehurst. The OpenShift Commons Gathering at Red Hat Summit brought together […]
The post OpenShift Commons Gathering at Red Hat Summit Boston 2019 Recap [with Slides] appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Now generally available: Android phone’s built-in security key

Phishing—when an attacker tries to trick you into turning over your online credentials—is one of the most common causes of security breaches. At Google Cloud Next ‘19, we enabled you to help your users defend against phishing with a security key built into their Android phone, bringing the benefits of phishing-resistant two-factor authentication (2FA) to more than a billion users worldwide. This capability is now generally available.

While Google automatically blocks the overwhelming majority of malicious sign-in attempts (even if an attacker has a username or password), 2FA, also known as 2-Step Verification (2SV), considerably improves user security. At the same time, sophisticated attacks can skirt around some 2FA methods to compromise user accounts. We consider security keys based on FIDO standards, including Titan Security Key and Android phone’s built-in security key, to be the strongest, most phishing-resistant methods of 2FA. FIDO leverages public key cryptography to verify both a user’s identity and the URL of the login page, so that an attacker can’t access users’ accounts even if users are tricked into providing their username and password.

Security keys are now built into phones running Android 7.0+ (Nougat) at no additional cost. That way, your users can use their phones as their primary 2FA method for work (G Suite, Cloud Identity, and GCP) and personal Google Accounts to sign in on Bluetooth-enabled Chrome OS, macOS, or Windows 10 devices with a Chrome browser. This gives them the strongest 2FA method with the convenience of a phone that’s always in their pocket.

As the Google Cloud administrator, start by activating your Android phone’s built-in security key to protect your own work or personal Google Account with these simple steps:

1. Add your work or personal Google Account to your Android phone.
2. Make sure you’re enrolled in 2-Step Verification (2SV).
3. On your computer, visit the 2SV settings and click “Add security key”.
4. Choose your Android phone from the list of available devices—and you’re done!

When signing in, make sure Bluetooth is turned on on both your phone and the device you are signing in on. You can find more detailed instructions here.

To help ensure the highest levels of account protection, you can also require the use of security keys for your users in G Suite, Cloud Identity, and GCP, letting them choose between a physical security key, their Android phone, or both. We recommend that users register a backup security key to their account and keep it in a safe place, so that they can regain access to their account if they lose their phone. Hardware security keys are available from a number of vendors, including Google with our Titan Security Key.
Source: Google Cloud Platform

Advancing the developer experience for serverless apps with Azure Functions

Azure Functions constantly innovates so that you can achieve more with serverless applications, enabling developers to overcome common serverless challenges through a productive, event-driven programming model. Some releases we made in the last few weeks are good examples of this, including:

The Azure Functions premium plan enables a whole new range of low-latency and networking scenarios.
The preview of PowerShell support in Azure Functions provides a way to tackle cloud automation scenarios, a common challenge for IT pros and SREs all around the globe.

The new releases and improvements do not stop there, and today we are pleased to present several advancements intended to provide a better end-to-end experience when building serverless applications. Keep reading below to learn more about the following:

A new way to host Azure Functions in Kubernetes environments
Stateful entities with Durable Functions (in preview)
Less cluttered .NET applications with dependency injection
Streamlined deployment with Azure DevOps
Improved integration with Azure API Management (in preview)

Bring Azure Functions to Kubernetes with KEDA

There’s no better way to reap the advantages of serverless than using a fully managed service in the cloud like Azure Functions. But some applications might need to run in disconnected environments, or they require custom hardware and dependencies. Customers usually take a containerized approach for these scenarios, in which Kubernetes is the de facto industry standard. Managing application-aware, event-driven scale in these environments is non-trivial: Kubernetes scaling is typically based only on resource usage, such as CPU or memory, which is often insufficient for event-driven workloads.

Microsoft and Red Hat partnered to build Kubernetes-based event-driven auto scaling (KEDA). KEDA is an open source component for Kubernetes that provides event-driven scale for any container workload enabling containers to scale from zero to thousands of instances based on event metrics, such as the length of an Azure Queue or Kafka stream, and back to zero again when done processing.

Since Azure Functions can be containerized, you can now deploy a Function App to any Kubernetes cluster, keeping the same scaling behavior you would have on the Azure Functions service.
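Because KEDA scales on event metrics rather than CPU, the function itself stays a plain event handler. As an illustrative sketch (the azure.functions binding layer is assumed and omitted here), keeping the core logic as plain Python means the same code runs in the managed service or in a KEDA-scaled container:

```python
import json


def process_message(body: str) -> dict:
    """Core logic of a queue-triggered function.

    In Azure Functions this would be invoked from the generated entry
    point (e.g. main(msg: func.QueueMessage)). Keeping the logic as
    plain Python makes it unit-testable locally and portable to a
    containerized deployment that KEDA scales from zero based on
    queue length.
    """
    event = json.loads(body)
    event["processed"] = True
    return event


# Local smoke test, no queue required:
result = process_message('{"orderId": 42}')
assert result == {"orderId": 42, "processed": True}
```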

This is a significant milestone for the open source ecosystem around Kubernetes, so we’re sharing much more detail in a separate blog post titled “Announcing KEDA: bringing event-driven containers and functions to Kubernetes.” If you want to learn more, register today for the Azure webinar series scheduled for later in May, where we will go into more depth on this exciting topic.

Durable Functions stateful patterns

We have been thrilled with the excitement and energy from the community around Durable Functions, our extension to the Functions runtime that unlocks new stateful and workflow patterns for serverless applications. Today we are releasing some new capabilities in a preview package of Durable Functions.

For stateful functions that map to an entity like an IoT device or a gaming session, you can use the new stateful entity trigger for actor-like capabilities in Azure Functions. We are also making the state management of your stateful functions more flexible with preview support for Redis cache as the state provider for Durable Functions, enabling scenarios where applications may run in a disconnected or edge environment.
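The entity trigger gives each keyed entity its own durable state and serialized operation handling. The following is a plain-Python sketch of that actor-like pattern, not the Durable Functions API itself (which persists the state and dispatches operations for you):

```python
class CounterEntity:
    """Actor-like sketch: one instance per entity key (e.g. one IoT
    device or gaming session), state carried between operations, and
    operations applied one at a time. Durable Functions would persist
    this state durably; here it lives in memory for illustration."""

    def __init__(self) -> None:
        self.value = 0

    def handle(self, operation: str, amount: int = 1) -> int:
        if operation == "add":
            self.value += amount
        elif operation == "reset":
            self.value = 0
        elif operation != "get":
            raise ValueError(f"unknown operation: {operation}")
        return self.value


# One entity per device key; state survives across operations:
device = CounterEntity()
device.handle("add", 5)
device.handle("add", 2)
assert device.handle("get") == 7
```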

You can learn more about the new durable features in our documentation, “Durable Functions 2.0 preview (Azure Functions).”

Dependency injection for .NET applications

We are constantly striving to add new patterns and capabilities that make functions easier to code, test, and manage. .NET developers have been taking advantage of dependency injection (DI) to better architect their applications, and today we’re excited to support DI in Azure Functions written in .NET. This enables simplified management of connections plus dependent services, and unlocks easier testability for functions that you author.
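The feature itself targets .NET, but the pattern is language-neutral. A minimal constructor-injection sketch in Python shows why it aids testability: the function receives its dependencies instead of constructing them, so a test can substitute a fake (all names here are hypothetical):

```python
class GreetingService:
    """A dependency the function needs. In .NET Azure Functions this
    would be registered in the DI container and injected."""

    def greet(self, name: str) -> str:
        return f"Hello, {name}!"


class HttpFunction:
    """The function class: it receives its dependency rather than
    creating it internally, so connections and services can be
    managed (and mocked) outside the function."""

    def __init__(self, service: GreetingService) -> None:
        self._service = service

    def run(self, name: str) -> str:
        return self._service.greet(name)


# In tests, inject a stub instead of the real service:
class FakeService(GreetingService):
    def greet(self, name: str) -> str:
        return "stubbed"


assert HttpFunction(GreetingService()).run("Azure") == "Hello, Azure!"
assert HttpFunction(FakeService()).run("Azure") == "stubbed"
```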

Learn more about dependency injection in our documentation, “Use dependency injection in .NET Azure Functions.”

Streamlined Azure DevOps experience

With new build templates in Azure Pipelines, you will have the ability to quickly configure your Azure Pipeline with function-optimized tasks to build your .NET, Node.js, and Python applications. We are also announcing the general availability of the Azure Functions deployment task, which is optimized to work with the best deployment option for your function app.

Additionally, with the latest Azure CLI release we introduced a new command that can automatically create and configure an Azure DevOps pipeline for your function app. The DevOps definition now lives with your code, which allows you to fine-tune build and deployment tasks.

For more detailed information, please check our documentation, “Continuous delivery using Azure DevOps.”

Defining and managing your Functions APIs with serverless API Management

We have also simplified how you can expose and manage APIs built with Azure Functions through API Management. With this improved integration, the Function Apps blade in the Azure portal presents an option to expose your HTTP-triggered functions through a new or an existing API in API Management.

Once the Function App is linked with API Management, you can manage API operations, apply policies, edit and download OpenAPI specification files, or navigate to your API Management instance for a full-featured experience.

Learn more about how to expose your Function Apps with API Management in our documentation.

Sharing is caring

We have also included a set of improvements to the Azure Serverless Community Library, including an updated look, a streamlined sample submission process, and more detailed information about each sample. Check out the Serverless Community Library to gain inspiration for your next serverless project, and share something cool once you’ve built it.

Get started today

With Functions options expanding and quickly improving, we’d sincerely love to hear your feedback. You can reach the team on Twitter and on GitHub, and we also actively monitor StackOverflow and UserVoice. For the latest updates, please subscribe to our monthly live webcast.

Tell us what you love about Azure Functions, and start learning more about all the new capabilities we are presenting today:

Learn more about using KEDA to host your function apps in Kubernetes in its blog post, “Announcing KEDA: bringing event-driven containers and functions to Kubernetes” and register for the Azure webinar series to see it in action.
Take a look at how you can benefit from dependency injection in .NET, the session-enabled Service Bus trigger, and extension bundles for your function apps in our docs.
Understand how you can have stateful functions mapping to entities with the preview capabilities added in Durable Functions.
Simplify the deployment of your serverless applications with the streamlined Azure DevOps experience through new tasks and CLI commands.
Learn how to expose and manage serverless APIs by integrating Azure Functions with Azure API Management.
Sign up for an Azure free account if you don’t have one yet, and start building awesome serverless applications in the cloud today.

Source: Azure

Innovating for SAP customers with Google Cloud

The world’s largest businesses run SAP applications, and increasingly, they’re doing it on Google Cloud. Since announcing our partnership with SAP two years ago, we’ve been expanding our support for SAP customers, from providing global virtualized compute infrastructure with 6 and 12 TB instances for SAP HANA workloads to partnerships with solutions providers such as Accenture, Deloitte, Atos, and Hitachi/Oxya that deliver services for application management, migrations, and innovation. Today, we’re sharing the latest updates on our partnership and how we’re ensuring Google Cloud is the best place to run SAP workloads.

Empowering more SAP customers

More and more customers are running mission-critical SAP applications on Google Cloud, including ATB Financial, Carrefour, and MediaMarktSaturn Retail Group, and we are continuing to see growth with long-time customers including Tory Burch and The Home Depot, which is now running SAP CAR, EWM, and BW in production on Google Cloud. At the recent Google Cloud Next ‘19 in San Francisco, McKesson and Metro AG shared insights into why they chose Google Cloud for their SAP landscapes.

We also continue to expand our ecosystem in support of SAP customers’ journey to S/4HANA by working with partners such as Gekkobrain, who help automate custom code migration, and SNP Group, who help automate data migration. In addition, we have built joint centers of excellence with strategic partners like Accenture, Deloitte, Atos, and Hitachi/Oxya. Together, these partnerships simplify migrations and upgrades for Google Cloud customers.

Introducing SAP HANA Enterprise Cloud as a fully managed service on GCP

We’re also delighted that SAP will bring its HANA Enterprise Cloud (HEC) to GCP, allowing HEC customers to take advantage of Google Cloud’s secure, modern, and reliable global infrastructure and capabilities in AI, ML, and analytics. HEC will be offered as a fully managed service on GCP, delivering a flexible operating model to help customers reduce operational complexity.

For customers that require specific compliance certifications and a specialized set of security practices, we’ve also partnered with SAP NS2 Cloud to deliver Secure HANA Cloud on Google Cloud.

Google Cloud at SAPPHIRE NOW 2019

If you’re attending SAPPHIRE NOW 2019, May 7-9 in Orlando, Florida, we’d love to say hello. Stop by the Google Cloud booth #1152 and check out our demos covering innovations in analytics, hybrid cloud, retail, healthcare, and more. Come meet the SAP on Google Cloud experts in our booth to learn how the cloud can drive agility, risk avoidance, cost reduction, and innovation for your SAP environment. Or join us at our sessions to learn all about SAP on Google Cloud. Here are some highlights:

Why Google Cloud for SAP — a customer perspective
Tuesday, May 7th, 11:30 AM–12:10 PM
Google Cloud with ATB Financial
Session ID: 86665

SAP on Google Cloud — build a reliable, agile and innovative enterprise
Tuesday, May 7th, 2:30 PM–2:50 PM
Session ID: 86676

Big Data for SAP — capture, consolidate and create with Google Cloud
Tuesday, May 7th, 1:30 PM–1:50 PM
Session ID: 86671

Reinventing retail with SAP on Google Cloud
Wednesday, May 8th, 1:30 PM–1:50 PM
Google Cloud with Metro AG
Session ID: 86681

You can find a complete guide to our participation at SAP SAPPHIRE here, and to learn more about SAP on Google Cloud, visit our website.
Source: Google Cloud Platform

What’s new in Azure Monitor

At Ignite 2018, we shared the vision of bringing monitoring of infrastructure, applications, and the network into one unified offering, providing full-stack monitoring for your applications. Over the last few months, individual capabilities such as Application Insights and Azure Monitor logs have come together to provide a seamless and integrated Azure Monitor experience.

We’d like to share our three favorites:

End-to-end monitoring for Azure Kubernetes Service
Integrated access control for logs
Intelligent scalable alerts

End-to-end monitoring for Azure Kubernetes Service

Today, Azure Kubernetes Service (AKS) customers rely on Azure Monitor for containers to get out-of-the-box monitoring for their AKS clusters. Kubernetes event logs are now available in real time in addition to live container logs. You can now filter the charts and metrics for specific AKS node pools and see Node Storage Capacity metrics when you drill down into node details.

To monitor your applications running on AKS, you can instrument them with the Application Insights SDKs. But if you cannot instrument your workloads (for example, a legacy or third-party app), we now have an alternative that doesn’t require any instrumentation. Application Insights can leverage your existing service mesh investments (the preview currently supports Istio) to provide application monitoring for AKS without any modification to your app’s code. This enables you to immediately start taking advantage of out-of-the-box capabilities like Application Map, Live Metrics Stream, Application Dashboards, Workbooks, User Behavior Analytics, and more.

Through a combined view of application and infrastructure, Azure Monitor now provides a full-stack monitoring view of Kubernetes clusters.

Integrated access control for logs

Azure Monitor is the central platform for collecting logs across monitoring, management, security, and other log types in Azure. Customers love the powerful, embedded Azure Monitor Logs experience that allows you to run diagnostics, root-cause analysis, statistics, visualizations, and answer any other ad-hoc questions. One challenge customers were facing was configuring access control based on a resource: for example, how do you ensure that anyone who has access to a virtual machine (VM) also has access to the logs generated by that VM? In line with our vision to provide a seamless and native monitoring experience, we now provide granular role-based access control for logs that helps you cascade the permissions you have set at the resource level down to the operational logs.

Users can now also access logs scoped to their resource, allowing them to explore and query logs without needing to understand the entire workspace structure.

Intelligent and scalable alerts

Metric Alerts with Dynamic Thresholds, now generally available, enable Azure Monitor to determine the right thresholds for alert rules. Multi-resource alerts make it easy to create a single alert rule and apply it across multiple VMs.
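Azure Monitor learns dynamic thresholds with its own machine-learning models. As a rough conceptual illustration only (not Azure Monitor's actual algorithm), a data-driven threshold can be thought of as a band around recent behavior, e.g. the mean plus a sensitivity multiple of the standard deviation:

```python
from statistics import mean, stdev


def dynamic_threshold(history, sensitivity=2.0):
    """Toy illustration of a data-driven alert threshold: mean of
    recent samples plus `sensitivity` standard deviations. Azure
    Monitor's real models also account for seasonality, trends, and
    more, and learn the sensitivity per metric."""
    return mean(history) + sensitivity * stdev(history)


cpu_samples = [40.0, 42.0, 41.0, 39.0, 43.0]
threshold = dynamic_threshold(cpu_samples)
assert threshold > max(cpu_samples)  # only unusual spikes would alert
```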

The new Action Rules, available in preview, add more flexibility and finer controls to Action Groups. With Action Rules, scaling Action Groups to suppress alerts during a maintenance window takes just a couple of clicks.

We shared three examples of how we are making Azure Monitor integrated, intelligent, and scalable, but that’s only part of the story. Here is a list of other exciting announcements coming to you from Build.

Preview of Azure Monitor application change analysis, providing a centralized view and analysis of changes at different layers of a web app. The first iteration of this feature is now available in the App Services Diagnose and Solve Problems experience.
Improved visualizations in Application Map with better filtering to quickly scope to specific components, and the ability to group and expand common dependencies, including Azure Functions v2.
Improved codeless instrumentation experience for ASP.NET apps on IIS with the preview of Status Monitor v2. This enables clean redeployments, the latest SDKs, support for TLS 1.2, offline install support, and more!
The Application Insights SDK for Java workloads now fully supports W3C and provides monitoring support for async Java apps with a manual instrumentation API. There is also improved ILogger logs collection support for .NET Core apps, and support for Live Metrics Stream for Node.js apps.
Workbooks are now a first-class citizen of Azure Monitor and available in the main menu. Use the sample templates to customize interactive reports or troubleshooting guides with rich text, analytics queries, metrics, and various parameters across your apps and infrastructure resources! New templates are also available for Azure Monitor for VMs to monitor open ports and their connections.

Get monitoring

Azure Monitor is constantly evolving to discover new insights and reduce potential issues with applications. Find the latest updates for Azure Monitor in the Azure portal. We want to hear from you! Ask questions or provide feedback.

Ready to get started?

Monitor metrics quickstart
Subscription Audit and Alerts quickstart
Learn how to view or analyze data collected.
Learn how to find and diagnose run-time exceptions.

Source: Azure