Microsoft Connected Vehicle Platform: trends and investment areas

This post was co-authored by the extended Azure Mobility Team.

The past year has been eventful for a lot of reasons. At Microsoft, we’ve expanded our partnerships with Volkswagen, LG Electronics, Faurecia, TomTom, and others, and unveiled new thinking at CES, where we recently demonstrated our approach to in-vehicle compute and software architecture.

Looking ahead, areas that were once only nominally related now come into sharper focus as the supporting technologies are deployed and the various industry verticals mature. The start of a new year is a good time to pause and take in what is happening in our industry and in related ones, with the aim of developing a view on where it’s all heading.

In this blog, we will talk about the trends that we see in connected vehicles and smart cities and describe how we see ourselves fitting in and contributing.

Trends

Mobility as a Service (MaaS)

MaaS (sometimes referred to as Transportation as a Service, or TaaS) is about getting people to goods and services and getting those goods and services to people. Ride-hailing and ride-sharing come to mind, but so do many other forms of MaaS offerings, such as air taxis, autonomous drone fleets, and last-mile delivery services. We believe that completing a single trip—of a person or goods—will soon require a combination of passenger-owned vehicles, ride-sharing, ride-hailing, autonomous taxis, and bicycle- and scooter-sharing services transporting people on land, sea, and in the air (what we refer to as “multi-modal routing”). Service offerings that link these different modes of transportation will be key to making this natural for users.

With Ford, we are exploring how quantum algorithms can help reduce urban traffic congestion and support a more balanced routing system. We’ve also built strong partnerships with TomTom for traffic-based routing and with AccuWeather for current and forecast weather reports, increasing awareness of weather events that will occur along a route. In 2020, we will be integrating these routing methods and making them available as part of the Azure Maps service and API. Because mobility spans experiences throughout the day and across modes of transportation, finding pickup locations, planning trips from home and work, and running errands along the way, Azure Maps ties the mobility journey together with cloud APIs and iOS and Android SDKs that deliver in-app mobility and mapping experiences. Coupled with the connected vehicle architecture, with its federated user authentication, integration with the Microsoft Graph, and secure provisioning of vehicles, digital assistants can support mobility end-to-end. The same technologies can be used in moving goods and in retail delivery systems.
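For teams wanting to experiment with this today, the sketch below shows one way an application could call the Azure Maps Route Directions REST API from Python to get a traffic-aware ETA between two points. It is a minimal illustration: the endpoint, parameter names, and response fields reflect the public API as we understand it, and the key and coordinates are placeholders.

```python
import requests

# Placeholders: your Azure Maps key and "lat,lon" pairs for origin and destination.
SUBSCRIPTION_KEY = "<your-azure-maps-key>"
ORIGIN = "47.6397,-122.1289"
DESTINATION = "47.6205,-122.3493"

# Azure Maps Route Directions API; the query string is "origin:destination".
url = "https://atlas.microsoft.com/route/directions/json"
params = {
    "api-version": "1.0",
    "subscription-key": SUBSCRIPTION_KEY,
    "query": f"{ORIGIN}:{DESTINATION}",
    "travelMode": "car",   # other modes (e.g., bicycle) feed into multi-modal planning
    "traffic": "true",     # factor live traffic into the ETA
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()
summary = response.json()["routes"][0]["summary"]
print(f"Distance: {summary['lengthInMeters'] / 1000:.1f} km, "
      f"ETA with traffic: {summary['travelTimeInSeconds'] / 60:.0f} min")
```

The same request with a different travelMode is one building block of the multi-modal routing described above.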

The pressure to become profitable will force changes and consolidation among MaaS providers and will keep their focus on approaches to reducing costs, such as autonomous driving. Incumbent original equipment manufacturers (OEMs) are adding elements of car-sharing to their offerings to continue evolving their businesses as private car ownership is likely to decline over time.

Connecting vehicles to the cloud

We refer holistically to these various signals that can inform vehicle routing (traffic, weather, available modalities, municipal infrastructure, and more) as “navigation intelligence.” Taking advantage of this navigation intelligence will require connected vehicles to become more sophisticated than just logging telematics to the cloud.

The reporting of basic telematics (car-to-cloud) is barely table-stakes; over-the-air updates (OTA, or cloud-to-car) will become key to delivering a market-competitive vehicle, as will command-and-control (more cloud-to-car, via phone apps). Forward-thinking car manufacturers deserve a lot of credit here for showing what’s possible and for creating in consumers the expectation that the appearance of new features in the car after it is purchased isn’t just cool, but normal.

Future steps include the integration of in-vehicle infotainment (IVI) with voice assistants that blend the in- and out-of-vehicle experiences, updating AI models for in-market vehicles for automated driving levels one through five, and of course pre-processing the telemetry at the edge in order to better enable reinforcement learning in the cloud as well as just generally improving services.
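As a rough sketch of the car-to-cloud and cloud-to-car paths described above, the following Python snippet uses the Azure IoT Hub device SDK (azure-iot-device v2.x assumed) to send basic telematics and register a handler for cloud-to-device command-and-control messages. The connection string, payload fields, and cadence are illustrative placeholders, not a prescribed vehicle architecture.

```python
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "<device-connection-string>"  # placeholder, provisioned per vehicle

def handle_cloud_to_car(message):
    # Cloud-to-car path: e.g., an "unlock doors" or "precondition cabin" command.
    print("Command received:", message.data)

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
client.on_message_received = handle_cloud_to_car  # command-and-control channel

# Car-to-cloud path: periodic telematics (illustrative fields only).
for _ in range(3):
    telemetry = {"speed_kph": 57, "soc_percent": 81, "odometer_km": 42150}
    msg = Message(json.dumps(telemetry))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)
    time.sleep(10)

client.disconnect()
```

Edge pre-processing would sit in front of send_message, aggregating or filtering signals before they leave the vehicle.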

Delivering value from the cloud to vehicles and phones

As vehicles become more richly connected and deliver experiences that overlap with what we’ve come to expect from our phones, an emerging question is, what is the right way to make these work together? Projecting to the IVI system of the vehicle is one approach, but most agree that vehicles should have a great experience without a phone present.

Separately, phones are a great proxy for “a vehicle” in some contexts, such as bicycle sharing, providing speed, location, and various other probe data, as well as providing connectivity (as well as subsidizing the associated costs) for low-powered electronics on the vehicle.

This is probably a good time to mention 5G. The opportunity 5G brings will have a ripple effect across industries. It will be a critical foundation for the continued rise of smart devices, machines, and things, which can speak, listen, see, feel, and act using sensitive sensor technology as well as data analytics and machine learning algorithms, without requiring “always on” connectivity. This is what we call the intelligent edge. Our strategy is to enable 5G at the edge through cloud partnerships, with a focus on security and developer experience.

Optimizations through a system-of-systems approach

Connecting things to the cloud, getting data into the cloud, and then bringing the insights gained through cloud-enabled analytics back to the things is how optimizations in one area can be brought to bear in another area. This is the essence of digital transformation. Vehicles gathering high-resolution imagery for improving HD maps can also inform municipalities about maintenance issues. Accident information coupled with vehicle telemetry data can inform better PHYD (pay how you drive) insurance plans as well as the deployment of first responder infrastructure to reduce incident response time.

As the vehicle fleet electrifies, the demand for charging stations will grow. Today, in-car routing for an electric car is based only on knowledge of existing charging stations along the route—regardless of the current or predicted wait times at those stations. But what if that route could also be informed by historical use patterns and live use data for individual charging stations, to avoid arriving and finding three cars ahead of you? Suddenly, your 20-minute charge time is actually a 60-minute stop, and an alternate route would have made more sense, even if, on paper, it’s more miles driven.
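At its core, the charging-stop scenario is a small optimization problem: choose the route that minimizes total door-to-door time, including the predicted wait at the charger, rather than the route with the fewest miles. The sketch below is purely illustrative, with made-up numbers standing in for the live and historical charger-utilization data the text describes.

```python
from dataclasses import dataclass

@dataclass
class RouteOption:
    name: str
    drive_minutes: float           # time behind the wheel
    charge_minutes: float          # time actually charging
    predicted_wait_minutes: float  # from live + historical charger utilization (assumed feed)

def best_route(options):
    # "Selfish" routing would pick min(drive_minutes); navigation intelligence
    # picks the lowest door-to-door time, including queueing at the charger.
    return min(options, key=lambda o: o.drive_minutes + o.charge_minutes + o.predicted_wait_minutes)

options = [
    RouteOption("shortest route", drive_minutes=95, charge_minutes=20, predicted_wait_minutes=40),
    RouteOption("alternate route", drive_minutes=110, charge_minutes=20, predicted_wait_minutes=0),
]
print(best_route(options).name)  # alternate route, despite more miles driven
```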

Realizing these kinds of scenarios means tying together knowledge about the electrical grid, traffic patterns, vehicle types, and incident data. The opportunities here for brokering the relationships among these systems are immense, as are the challenges to do so in a way that encourages the interconnection and sharing while maintaining privacy, compliance, and security.

Laws, policies, and ethics

The past several years of data breaches and election interference are evidence of the continuously evolving nature of the security threats we face. That kind of environment requires platforms that continuously invest in security as a fundamental cost of doing business.

Laws, regulatory compliance, and ethics must figure into the design and implementation of our technologies to as great a degree as goals like performance and scalability do. Smart city initiatives, where having visibility into the movement of people, goods, and vehicles is key to doing the kinds of optimizations that increase the quality of life in these cities, will confront these issues head-on.

Routing today is informed by traffic conditions but is still fairly “selfish”: routing for “me” rather than for “we.” Cities would like a hand in shaping traffic, especially if they can factor in deeper insights such as the types of vehicles on the road (sending freight one way versus passenger traffic another way), whether or not there is an upcoming sporting event or road closure, weather, and so on.

Doing this in a way that is cognizant of local infrastructure and the environment is what smart cities initiatives are all about.

For these reasons, we have joined the Open Mobility Foundation. We are also involved with Stanford’s Digital Cities Program, the Smart Transportation Council, the Alliance to Save Energy by the 50×50 Transportation Initiative, and the World Business Council for Sustainable Development.

With the Microsoft Connected Vehicle Platform (MCVP) and an ecosystem of partners across the industry, Microsoft offers a consistent horizontal platform on top of which customer-facing solutions can be built. MCVP helps mobility companies accelerate the delivery of digital services across vehicle provisioning, two-way network connectivity, and continuous over-the-air updates of containerized functionality. MCVP provides support for command-and-control, hot/warm/cold paths for telematics, and extension hooks for customer and third-party differentiation. Because it is built on Azure, MCVP includes the hyperscale, global availability, and regulatory compliance that come as part of Azure. OEMs and fleet operators leverage MCVP as a way to “move up the stack” and focus on their customers rather than spend resources on non-differentiating infrastructure.

Innovation in the automotive industry

At Microsoft, and within the Azure IoT organization specifically, we have a front-row seat to the transformative work being done in many different industries, using sensors to gather data and develop insights that inform better decision-making. We are excited to see these industries converging on mutually beneficial paths. Our colleague Sanjay Ravi shares his thoughts from an automotive industry perspective in this great article.

Turning our attention to our customer and partner ecosystem, the traction we’ve gotten across the industry has been overwhelming:

The Volkswagen Automotive Cloud will be one of the largest dedicated clouds of its kind in the automotive industry and will provide all future digital services and mobility offerings across its entire fleet. More than 5 million new Volkswagen brand vehicles are to be fully connected on Microsoft’s Azure cloud and edge platform each year. The Automotive Cloud will subsequently be rolled out across all Group brands and models.

Cerence is working with us to integrate Cerence Drive products with MCVP. This new integration is part of Cerence’s ongoing commitment to delivering a superior user experience in the car through interoperability across voice-powered platforms and operating systems. Automakers developing their connected vehicle solutions on MCVP can now benefit from Cerence’s industry-leading conversational AI, in turn delivering a seamless, connected, voice-powered experience to their drivers.

Ericsson, whose Connected Vehicle Cloud connects more than 4 million vehicles across 180 countries, is integrating their Connected Vehicle Cloud with Microsoft’s Connected Vehicle Platform to accelerate the delivery of safe, comfortable, and personalized connected driving experiences with our cloud, AI, and IoT technologies.

LG Electronics is working with Microsoft on its automotive infotainment systems, building management systems, and other business-to-business collaborations. LG will leverage Microsoft Azure cloud and AI services to accelerate the digital transformation of LG’s B2B growth engines, as well as Automotive Intelligent Edge, the in-vehicle runtime environment provided as part of MCVP.

Global technology company ZF Friedrichshafen is transforming into a provider of software-driven mobility solutions, leveraging Azure cloud services and developer tools to promote faster development and validation of connected vehicle functions on a global scale.

Faurecia is collaborating with Microsoft to develop services that improve comfort, wellness, and infotainment as well as bring digital continuity from home or the office to the car. At CES, Faurecia demonstrated how its cockpit integration will enable Microsoft Teams video conferencing. Using Microsoft Connected Vehicle Platform, Faurecia also showcased its vision of playing games on the go, using Microsoft’s new Project xCloud streaming game preview.

Bell has revealed AerOS, a digital mobility platform that will give operators a 360° view into their aircraft fleet. By leveraging technologies like artificial intelligence and IoT, AerOS provides powerful capabilities like fleet master scheduling and real-time aircraft monitoring, enhancing Bell’s Mobility-as-a-Service (MaaS) experience. Bell chose Microsoft Azure as the technology platform to manage fleet information, observe aircraft health, and manage the throughput of goods, products, predictive data, and maintenance.

Luxoft is expanding its collaboration with Microsoft to accelerate the delivery of connected vehicle solutions and mobility experiences. By leveraging MCVP, Luxoft will enable and accelerate the delivery of vehicle-centric solutions and services that will allow automakers to deliver unique features such as advanced vehicle diagnostics, remote access and repair, and preventive maintenance. Collecting real usage data will also support vehicle engineering to improve manufacturing quality.

We are incredibly excited to be a part of the connected vehicle space. With MCVP, our ecosystem partners, and our partnerships with leading automotive players, both vehicle OEMs and automotive technology suppliers, we believe we have a uniquely capable offering that enables, at global scale, the next wave of innovation in the automotive industry as well as in related verticals such as smart cities, smart infrastructure, insurance, transportation, and beyond.

Advancing safe deployment practices

"What is the primary cause of service reliability issues that we see in Azure, other than small but common hardware failures? Change. One of the value propositions of the cloud is that it’s continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable. This requires a very different approach to ensuring quality and stability than the box product or traditional IT approaches — which is to test for long periods of time, and once something is deployed, to avoid changes. This post is the fifth in the series I kicked off in my July blog post that shares insights into what we're doing to ensure that Azure's reliability supports your most mission critical workloads. Today we'll describe our safe deployment practices, which is how we manage change automation so that all code and configuration updates go through well-defined stages to catch regressions and bugs before they reach customers, or if they do make it past the early stages, impact the smallest number possible. Cristina del Amo Casado from our Compute engineering team authored this posts, as she has been driving our safe deployment initiatives.” – Mark Russinovich, CTO, Azure

 

When running IT systems on-premises, you might try to ensure perfect availability by having gold-plated hardware, locking up the server room, and throwing away the key. Software-wise, IT would traditionally prevent as much change as possible — avoiding applying updates to the operating system or applications because they’re too critical, and pushing back on change requests from users. With everyone treading carefully around the system, this ‘nobody breathe!’ approach stifles continued system improvement, and sometimes even compromises security for systems that are deemed too crucial to patch regularly. As Mark mentioned above, this approach doesn't work for change and release management in a hyperscale public cloud like Azure. Change is both inevitable and beneficial, given the need to deploy service updates and improvements, and given our commitment to you to act quickly in the face of security vulnerabilities. As we can’t simply avoid change, Microsoft, our customers, and our partners need to acknowledge that change is expected, and we plan for it. Microsoft continues to work on making updates as transparent as possible and deploys changes safely as described below. Having said that, our customers and partners should also design for high availability and consume maintenance events sent by the platform to adapt as needed. Finally, in some cases, customers can take control of initiating platform updates at a time that suits their organization.

Changing safely

When considering how to deploy releases throughout our Azure datacenters, one of the key premises that shapes our processes is to assume that a change could introduce an unknown problem, to plan in a way that enables discovery of that problem with minimal impact, and to automate mitigation actions for when the problem surfaces. Even the smallest change to a system poses a risk to its stability, however innocuous a developer might judge it and however confidently they might guarantee it won't affect the service, so ‘changes’ here refers to all kinds of new releases and covers both code changes and configuration changes. In most cases a configuration change has a less dramatic impact on the behavior of a system but, just as for a code change, no configuration change is free of the risk of activating a latent code defect or a new code path.

Teams across Azure follow similar processes to prevent, or at least minimize, impact related to changes: first, by ensuring that changes meet the quality bar before deployment starts, through test and integration validations; then, after sign-off, by rolling out the change gradually while measuring health signals continuously, so that we can detect in relative isolation any unexpected impact associated with the change that did not surface during testing. We do not want a change causing problems to ever reach broad production, so we take steps to avoid that whenever possible. The gradual deployment gives us a good opportunity to detect issues at a smaller scale (a smaller ‘blast radius’) before they cause widespread impact.

Azure approaches change automation, aligned with the high-level process above, through a safe deployment practice (SDP) framework, which aims to ensure that all code and configuration changes go through a lifecycle of specific stages, with health metrics monitored along the way to trigger automatic actions and alerts if any degradation is detected. These stages (shown in the diagram that follows) reduce the risk that software changes will negatively affect your existing Azure workloads.

This shows a simplification of our deployment pipeline, starting on the left with developers modifying their code, testing it on their own systems, and pushing it to staging environments. Generally, this integration environment is dedicated to teams for a subset of Azure services that need to test the interactions of their particular components together. For example, core infrastructure teams such as compute, networking, and storage share an integration environment. Each team runs synthetic tests and stress tests on the software in that environment, iterates until stable, and then, once the quality results indicate that a given release, feature, or change is ready for production, deploys the changes into the canary regions.

Canary regions

Publicly we refer to canary regions as “Early Updates Access Program” regions, and they’re effectively full-blown Azure regions with the vast majority of Azure services. One of the canary regions is built with Availability Zones and the other without them, and the two form a region pair so that we can validate data geo-replication capabilities. These canary regions are used for full, production-level, end-to-end validations and scenario coverage at scale. They host some first-party services (for internal customers), several third-party services, and a small set of external customers that we invite into the program to help increase the richness and complexity of the scenarios covered, all to ensure that canary regions have usage patterns representative of our public Azure regions. Azure teams also run stress and synthetic tests in these environments, and periodically we execute fault injections or disaster recovery drills at the region or Availability Zone level to practice the detection and recovery workflows that would run if this occurred in real life. Separately and together, these exercises attempt to ensure that software is of the highest quality before the changes touch broad customer workloads in Azure.

Pilot phase

Once the results from canary indicate that there are no known issues, the progressive deployment to production can start, beginning with what we call our pilot phase. This phase enables us to try the changes, still at a relatively small scale, but with more diversity of hardware and configurations. This phase is especially important for software like core storage services and core compute infrastructure services that have hardware dependencies. For example, Azure offers servers with GPUs, large-memory servers, commodity servers, multiple generations and types of processors, InfiniBand, and more, so this enables flighting the changes and may surface issues that would not appear during smaller-scale testing. At each step along the way, thorough health monitoring and extended 'bake times' allow potential failure patterns to surface and increase our confidence in the changes while greatly reducing the overall risk to our customers.

Once we determine that the results from the pilot phase are good, the deployment systems proceed by allowing the change to progress to more and more regions incrementally. Throughout the deployment to the broader Azure regions, the deployment systems endeavor to respect Availability Zones (a change only goes to one Availability Zone within a region) and region pairing (every region is ‘paired up’ with a second region for georedundant storage) so a change deploys first to a region and then to its pair. In general, the changes deploy only as long as no negative signals surface.
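For intuition only, here is a toy model of that progression in Python. It is not Azure's actual tooling; it simply shows the shape of a gate-and-advance rollout in which each stage, and then each region pair and Availability Zone, is deployed only while health signals stay green.

```python
# Illustrative only: a simplified gate-and-advance loop, not Azure's deployment system.

STAGES = ["integration", "canary", "pilot"]
REGION_PAIRS = [("region-A", "region-A-pair"), ("region-B", "region-B-pair")]
ZONES = ["az1", "az2", "az3"]

def deploy(change, scope):
    print(f"deploying {change} to {scope}")

def health_is_green(scope):
    # Placeholder for real health signals (error rates, latency, customer impact)
    # observed over a mandatory 'bake time'.
    return True

def roll_out(change):
    # Pre-production stages first, each gated on health.
    for stage in STAGES:
        deploy(change, stage)
        if not health_is_green(stage):
            return f"halted at {stage}"
    # Broad production: region pair by region pair, one Availability Zone at a time.
    for primary, pair in REGION_PAIRS:
        for region in (primary, pair):
            for zone in ZONES:
                deploy(change, f"{region}/{zone}")
                if not health_is_green(f"{region}/{zone}"):
                    return f"halted at {region}/{zone}"
    return "completed"

print(roll_out("build-1234"))
```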

Safe deployment practices in action

Given the scale of Azure globally, the entire rollout process is completely automated and driven by policy. These declarative policies and processes (not the developers) determine how quickly software can be rolled out. Policies are defined centrally and include mandatory health signals for monitoring the quality of software as well as mandatory ‘bake times’ between the different stages outlined above. The reason to let software sit and bake for different periods of time in each phase is to make sure the change is exposed to the full spectrum of load on that service. For example, diverse organizational users might be coming online in the morning, gaming customers might be coming online in the evening, and new virtual machines (VMs) or resource creations from customers may occur over an extended period of time.

Global services, which cannot take the approach of progressively deploying to different clusters, regions, or service rings, also practice a version of progressive rollouts in alignment with SDP. These services follow the model of updating their service instances in multiple phases, progressively shifting traffic to the updated instances through Azure Traffic Manager. If the signals are positive, more traffic is shifted to the updated instances over time, increasing confidence and unblocking the deployment from being applied to more service instances.
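A simplified sketch of that traffic shift is below. In practice the weights are configured declaratively on Azure Traffic Manager endpoints rather than computed in code; the step sizes and health check here are illustrative.

```python
# Illustrative stepped traffic shift between existing and updated service instances.
STEPS = [1, 5, 10, 25, 50, 100]  # percent of traffic on the updated instances

def healthy(percent_on_new):
    # Placeholder for real health signals observed after each step's bake time.
    return True

for pct in STEPS:
    weights = {"updated-instances": pct, "existing-instances": 100 - pct}
    print("Traffic Manager weights:", weights)
    if not healthy(pct):
        print("Regression detected; shift traffic back and halt the rollout.")
        break
```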

Of course, the Azure platform also has the ability to deploy a change simultaneously to all of Azure, in case this is necessary to mitigate an extremely critical vulnerability. Although our safe deployment policy is mandatory, we can choose to accelerate it when certain emergency conditions are met: for example, to release a security update that requires us to move much more quickly than we normally would, or to ship a fix whose risk of regression is outweighed by its mitigation of a problem that’s already very impactful to customers. These exceptions are very rare; in general, our deployment tools and processes intentionally sacrifice velocity to maximize the chance for signals to build up and for scenarios and workflows to be exercised at scale, thus creating the opportunity to discover issues at the smallest possible scale of impact.

Continuing improvements

Our safe deployment practices and deployment tooling continue to evolve with learnings from previous outages and maintenance events, in line with our goal of detecting issues at a significantly smaller scale. For example, we have learned about the importance of continuing to enrich our health signals and about using machine learning to better correlate faults and detect anomalies. We also continue to improve the way in which we do pilots and flighting, so that we can cover more hardware diversity with smaller risk. We continue to improve our ability to roll back changes automatically if they show potential signs of problems. We also continue to invest in platform features that reduce or eliminate the impact of changes generally.

With over a thousand new capabilities released in the last year, we know that the pace of change in Azure can feel overwhelming. As Mark mentioned, the agility and continual improvement of cloud services is one of the key value propositions of the cloud – change is a feature, not a bug. To learn about the latest releases, we encourage customers and partners to stay in the know at Azure.com/Updates. We endeavor to keep this as the single place to learn about recent and upcoming Azure product updates, including the roadmap of innovations we have in development. To understand the regions in which these different services are available, or when they will be available, you can also use our tool at Azure.com/ProductsbyRegion.

Backup Explorer now available in preview

As organizations continue to expand their use of IT and the cloud, protecting critical enterprise data becomes extremely important. And if you are a backup admin on Microsoft Azure, being able to efficiently monitor backups on a daily basis is a key requirement to ensuring that your organization has no weaknesses in its last line of defense.

Up until now, you could use a Recovery Services vault to get a bird’s eye view of items being backed up under that vault, along with the associated jobs, policies, and alerts. But as your backup estate expands to span multiple vaults across subscriptions, regions, and tenants, monitoring this estate in real-time becomes a non-trivial task, requiring you to write your own customizations.

What if there was a simpler way to aggregate information across your entire backup estate into a single pane of glass, enabling you to quickly identify exactly where to focus your energy?

Today, we are pleased to share the preview of Backup Explorer. Backup Explorer is a built-in Azure Monitor Workbook enabling you to have a single pane of glass for performing real-time monitoring across your entire backup estate on Azure. It comes completely out-of-the-box, with no additional costs, via native integration with Azure Resource Graph and Azure Workbooks.
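Because Backup Explorer is built on Azure Resource Graph, the same underlying data can also be queried programmatically. The sketch below uses the Python Resource Graph SDK; the table and resource-type names reflect our understanding of where backup items surface in Resource Graph, and the subscription ID is a placeholder.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

SUBSCRIPTION_IDS = ["<subscription-id>"]  # placeholder; list every subscription you monitor

# Assumed table and type: backup items exposed under RecoveryServicesResources.
QUERY = """
RecoveryServicesResources
| where type == 'microsoft.recoveryservices/vaults/backupfabrics/protectioncontainers/protecteditems'
| summarize count() by subscriptionId, location
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(subscriptions=SUBSCRIPTION_IDS, query=QUERY))
print(result.total_records, "rows")
print(result.data)
```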

Key Benefits

1) At-scale views – With Backup Explorer, monitoring is no longer limited to a Recovery Services vault. You can get an aggregated view of your entire estate from a backup perspective. This includes not only information on your backup items, but also resources that are not configured for backup, ensuring that you don’t ever miss protecting critical data in your growing estate. And if you are an Azure Lighthouse user, you can view all of this information across multiple tenants, enabling truly boundary-less monitoring.

2) Deep drill-downs – You can quickly switch between aggregated views and highly granular data for any of your backup-related artifacts, be it backup items, jobs, alerts or policies.

3) Quick troubleshooting and actionability – The at-scale views and deep drill-downs are designed to aid you in getting to the root cause of a backup-related issue. Once you identify an issue, you can act on it by seamlessly navigating to the backup item or the Azure resource, right from Backup Explorer.

Backup Explorer is currently supported for Azure Virtual Machines. Support for other Azure workloads will be added soon.

Within Azure Backup, Backup Explorer is just one part of our overall goal of enabling a delightful, enterprise-ready, management-at-scale experience for all our customers.

Getting Started

To get started with using Backup Explorer, you can simply navigate to any Recovery Services vault and click on Backup Explorer in the quick links section.

You will be redirected to Backup Explorer which gives a view across all the vaults, subscriptions, and tenants that you have access to.

More information

Read the Backup Explorer documentation for detailed information on leveraging the various tabs to solve different use-cases.

Hyperledger Fabric on Azure Kubernetes Service Marketplace template

Customers exploring blockchain for their applications and solutions typically start with a prototype or proof-of-concept effort with a blockchain technology before they move to build, pilot, and production rollout. During the latter stages, beyond ease of deployment, customers expect flexibility in configuration, such as the number of blockchain members in the consortium and the size and number of nodes, as well as ease of management post-deployment.

We are sharing the release of a new Hyperledger Fabric on Azure Kubernetes Service marketplace template in preview. Any user with minimal knowledge of Azure or Hyperledger Fabric can now set up a blockchain consortium on Azure using this solution template by providing a few basic input parameters.

This template helps customers deploy a Hyperledger Fabric (HLF) network on Azure Kubernetes Service (AKS) clusters in a modular manner that provides the much-needed customization in the choice of Microsoft Azure Virtual Machine series, number of nodes, fault tolerance, and so on. Azure Kubernetes Service provides enterprise-grade security and governance, making the deployment and management of containerized applications easy. Customers can leverage native Kubernetes tools for management-plane operations of the infrastructure and call Hyperledger Fabric APIs or the Hyperledger Fabric client software development kit for data-plane workflows.

The template has various configurable parameters that make it suitable for production-grade deployment of Hyperledger Fabric network components.

Top features of the Hyperledger Fabric on Azure Kubernetes Service template:

Supports deployment of Hyperledger Fabric version 1.4.4 (LTS).
Supports deployment of orderer organizations and peer nodes, with the option to configure the number of nodes.
Supports Fabric Certificate Authority (CA) with self-signed certificates by default, and an option to upload organization-specific root certificates to initialize the Fabric CA.
Supports LevelDB and CouchDB as the world state database on peer nodes.
The ordering service runs the highly available Raft-based consensus algorithm, with an option to choose 3, 5, or 7 nodes.
Supports configuring the number and size of the nodes in the Azure Kubernetes Service clusters.
Exposes a public IP for each deployed AKS cluster for networking with other organizations.
Provides sample scripts to jump-start post-deployment steps such as creating consortiums and channels and adding peer nodes to a channel.
Includes a Node.js sample application for exercising a few native Hyperledger Fabric APIs, such as generating new user identities and running custom chaincode.

To know more about how to get started with deploying Hyperledger Fabric network components, refer to the documentation.

What's coming next

Visual Studio Code extension support for Azure Hyperledger Fabric instances

What more do we have for you? The template and consortium sample scripts are open-sourced in the GitHub repo, so the community can leverage them to build customized versions.

10 recommendations for cloud privacy and security with Ponemon research

Today we’re pleased to publish Data Protection and Privacy Compliance in the Cloud: Privacy Concerns Are Not Slowing the Adoption of Cloud Services, but Challenges Remain, original research sponsored by Microsoft and independently conducted by the Ponemon Institute. The report concludes with a list of 10 recommended steps that organizations can take to address cloud privacy and security concerns, and in this blog, we have provided information about Azure services such as Azure Active Directory and Azure Key Vault that help address all 10 recommendations.

The research was undertaken to better understand how organizations undergo digital transformation while wrestling with the organizational impact of complying with such significant privacy regulations as the European Union’s General Data Protection Regulation (GDPR). The research explored the reasons organizations are migrating to the cloud, the security and privacy challenges they encounter in the cloud, and the steps they have taken to protect sensitive data and achieve compliance.

The survey of over 1,000 IT professionals in the US and EU found that privacy concerns are not slowing cloud adoption and that most privacy-related activities are easier in the cloud, while at the same time most organizations don’t feel they have the control and visibility they need to manage online privacy. The report lists ten steps organizations can take to improve security and privacy.
 

Key takeaways from the research include:

Privacy concerns are not slowing the adoption of cloud services, as only one-third of US respondents and 38 percent of EU respondents say privacy issues have stopped or slowed their adoption of cloud services. The importance of the cloud in reducing costs and speeding time to market seem to override privacy concerns.
Most privacy-related activities are easier to deploy in the cloud. These include governance practices such as conducting privacy impact assessments, classifying or tagging personal data for sensitivity or confidentiality, and meeting legal obligations, such as those of the GDPR. However, other items such as managing incident response are considered easier to deploy on premises than in the cloud.
53 percent of US and 60 percent of EU respondents are not confident that their organization currently meets their privacy and data protection requirements. This lack of confidence may be because most organizations are not vetting cloud-based software for privacy and data security requirements prior to deployment.
Organizations are reactive and not proactive in protecting sensitive data in the cloud. Specifically, just 44 percent of respondents are vetting cloud-based software or platforms for privacy and data security risks, and only 39 percent are identifying information that is too sensitive to be stored in the cloud.
Just 29 percent of respondents say their organizations have the necessary 360-degree visibility into the sensitive or confidential data collected, processed, or stored in the cloud. Organizations also lack confidence that they know all the cloud applications and platforms that they have deployed.

The Ponemon report closes with a list of recommended steps that organizations can take to address cloud privacy and security concerns, annotated below with relevant Azure services that can help you implement each of the recommendations:

Improve visibility into the organization’s sensitive or confidential data collected, processed, or stored in the cloud environment. 
Azure service: Azure Information Protection helps discover, classify, and control sensitive data. Learn more.
Educate themselves about all the cloud applications and platforms already in use in the organization.
Azure service: Microsoft Cloud App Security helps discover and control the use of shadow IT by identifying cloud apps, infrastructure as a service (IaaS), and platform as a service (PaaS) services. Learn more.
Simplify the authentication of users in both on-premises and cloud environments.
Azure service: Azure Active Directory provides tools to manage and deploy single sign-on authentication for both cloud and on-prem services. Learn more.
Ensure the cloud provider offers event monitoring of suspicious and anomalous traffic in the cloud environment.
Azure service: Azure Monitor enables customers to collect, analyze, and act on telemetry data from both Azure and on-premises environments. Learn more.
Implement the capability to encrypt sensitive and confidential data in motion and at rest.
Azure service: Azure offers a variety of options for encrypting both data at rest and in transit. Learn more.
Make sure that the organization uses and manages its own encryption keys (BYOK).
Azure service: Azure Key Vault allows you to import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary; see the sketch after this list. Learn more.
Implement multifactor authentication before allowing access to the organization’s data and applications in the cloud environment.
Azure service: Azure Active Directory offers multiple options for deploying multifactor authentication for both cloud and on-prem services. Learn more.
Assign responsibility for ensuring compliance with privacy and data protection regulations and security safeguards in the cloud to those most knowledgeable: the compliance and IT security teams. Privacy and data protection teams should also be involved in evaluating any cloud applications or platforms under consideration.
Azure service: Role-based access control (RBAC) helps manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. Learn more.
Identify information that is too sensitive to be stored in the cloud and assess the impact that cloud services may have on the ability to protect and secure confidential or sensitive information.
Azure service: Azure Information Protection helps discover, classify, and control sensitive data. Learn more.
Thoroughly evaluate cloud-based software and platforms for privacy and security risks.
Azure service: Microsoft Cloud App Security assesses the risk levels and business readiness of over 16,000 apps. Learn more.
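As a concrete illustration of recommendations 5 and 6 above, the sketch below uses the Azure Key Vault SDK for Python (azure-identity and azure-keyvault-keys 4.x assumed) to create an RSA key-encryption key and wrap a locally generated data-encryption key with it, so the wrapping key stays inside Key Vault. The vault URL and key name are placeholders.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm

VAULT_URL = "https://<your-vault-name>.vault.azure.net"  # placeholder
credential = DefaultAzureCredential()

# Create (or import, for BYOK) an RSA key that never leaves Key Vault / the HSM boundary.
key_client = KeyClient(vault_url=VAULT_URL, credential=credential)
kek = key_client.create_rsa_key("demo-key-encryption-key", size=2048)

# Envelope encryption: generate a local data-encryption key and wrap it with the Key Vault key.
dek = os.urandom(32)  # data-encryption key used to encrypt data at rest
crypto = CryptographyClient(kek, credential)
wrapped = crypto.wrap_key(KeyWrapAlgorithm.rsa_oaep, dek)
print("Wrapped DEK length:", len(wrapped.encrypted_key))

# Later, unwrap the DEK when the data needs to be decrypted.
unwrapped = crypto.unwrap_key(KeyWrapAlgorithm.rsa_oaep, wrapped.encrypted_key)
assert unwrapped.key == dek
```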

Read the full report to learn more.

Azure Cost Management updates – January 2020

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Automate reporting for Microsoft Customer Agreement with scheduled exports
Raising awareness of disabled costs
What's new in Cost Management Labs
Custom RBAC role preview for management groups
New ways to save money with Azure
Recent changes to Azure usage data

Documentation updates

Let's dig into the details. 

Automate reporting for Microsoft Customer Agreement with scheduled exports

You already know you can dig into your cost and usage data from the Azure portal. You may even know you can get rich reporting from the Cost Management Query API or get the full details, in all its glory, from the UsageDetails API. These are both great for ad-hoc queries, but maybe you're looking for a simpler solution. This is where Azure Cost Management exports come in.

Azure Cost Management exports automatically publish your cost and usage data to a storage account on a daily, weekly, or monthly basis. Up to this month, you've been able to schedule exports for Enterprise Agreement (EA) and pay-as-you-go (PAYG) accounts. Now, you can also schedule exports across subscriptions for Microsoft Customer Agreement billing accounts, subscriptions, and resource groups.
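For those who prefer to script this, the sketch below creates an export definition by calling the Cost Management Exports REST API directly from Python. The scope, storage account ID, and property names follow the public API as we understand it at the 2019-11-01 version; verify them against the current reference before relying on them.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders: scope can be a billing account, subscription, or resource group.
SCOPE = "subscriptions/<subscription-id>"
EXPORT_NAME = "daily-cost-export"
STORAGE_ACCOUNT_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                      "Microsoft.Storage/storageAccounts/<account>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/{SCOPE}"
       f"/providers/Microsoft.CostManagement/exports/{EXPORT_NAME}?api-version=2019-11-01")

# Property names assumed from the Exports API shape; check the current API reference.
body = {
    "properties": {
        "schedule": {
            "status": "Active",
            "recurrence": "Daily",
            "recurrencePeriod": {"from": "2020-02-01T00:00:00Z", "to": "2020-12-31T00:00:00Z"},
        },
        "format": "Csv",
        "deliveryInfo": {
            "destination": {
                "resourceId": STORAGE_ACCOUNT_ID,
                "container": "cost-exports",
                "rootFolderPath": "finance",
            }
        },
        "definition": {"type": "Usage", "timeframe": "MonthToDate"},
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}, timeout=30)
resp.raise_for_status()
print(resp.json()["name"])
```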

Learn more about scheduled exports in Create and manage exported data. 

Raising awareness of disabled costs

Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) accounts both offer an option to hide prices and charges from subscription users. While this can be useful to obscure negotiated discounts (including from vendors), it also puts you at risk of overspending, since teams that deploy and manage resources don't have visibility and cannot effectively keep costs down. To avoid this, we recommend using custom Azure RBAC roles for anyone who shouldn't see costs, while allowing everyone else to fully manage and optimize costs.

Unfortunately, some organizations may not realize costs have been disabled. This can happen when you renew your EA enrollment or when you switch between EA partners, as an example. In an effort to help raise awareness of these settings, you will see new messaging when costs have been disabled for the organization. Someone who does not have access to see costs will see a message like the following in cost analysis:

EA billing account admins and MCA billing profile owners will also see a message in cost analysis to ensure they're aware that subscription users cannot see or optimize costs.

To enable access to Azure Cost Management, simply click the banner and turn on "Account owners can view charges" for EA accounts and "Azure charges" for MCA accounts. If you're not sure whether subscription users can see costs on your billing account, check today and unlock new cost reporting, control, and optimization capabilities for your teams. 

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Get started quicker with the cost analysis Home view
Azure Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
NEW: Try Preview gives you quick access to preview features—Now available in the public portal
You already know Cost Management Labs gives you early access to the latest changes. Now you can also opt in to individual preview features from the public portal using the Try preview command in cost analysis.

Of course, that's not all. Every change in Azure Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today. 

Custom RBAC role preview for management groups

Management groups now support defining custom RBAC roles to allow you to assign more specific permissions to users, groups, and apps within your organization. One example could be a role that allows someone to be able to create and manage the management group hierarchy as well as manage costs using Azure Cost Management + Billing APIs. Today, this requires both the Management Group Contributor and Cost Management Contributor roles, but these permissions could be combined into a single custom role to streamline role assignment.

If you're unfamiliar with RBAC, Azure role-based access control (RBAC) is the authorization system used to manage access to Azure resources. To grant access, you assign roles to users, groups, service principals, or managed identities at a particular scope, like a resource group, subscription, or in this case, a management group. Cost Management + Billing supports the following built-in Azure RBAC roles, from least to most privileged:

Cost Management Reader: Can view cost data, configuration (including budgets and exports), and recommendations.
Billing Reader: Lets you read billing data.
Reader: Lets you view everything, but not make any changes.
Cost Management Contributor: Can view costs, manage cost configuration (including budgets and exports), and view recommendations.
Contributor: Lets you manage everything except access to resources.
Owner: Lets you manage everything, including access to resources.

While most organizations will find the built-in roles to be sufficient, there are times when you need something more specific. This is where custom RBAC roles come in. Custom RBAC roles allow you to define your own set of unique permissions by specifying a set of wildcard "actions" that map to Azure Resource Manager API calls. You can mix and match actions as needed to meet your specific needs, whether that's to allow an action or deny one (using "not actions"). Below are a few examples of the most common actions:

Microsoft.Consumption/*/read – Read access to all cost and usage data, including prices, usage, purchases, reservations, and resource tags.
Microsoft.Consumption/budgets/* – Full access to manage budgets.
Microsoft.CostManagement/*/read – Read access to cost and usage data and alerts.
Microsoft.CostManagement/views/* – Full access to manage shared views used in cost analysis.
Microsoft.CostManagement/exports/* – Full access to manage scheduled exports that automatically push data to storage on a regular basis.
Microsoft.CostManagement/cloudConnectors/* – Full access to manage AWS cloud connectors that allow you to manage Azure and AWS costs together in the same management group.
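To make this concrete, here is a hypothetical custom role that combines cost visibility with shared-view and budget management, expressed as a Python dict mirroring the role-definition JSON you could pass to `az role definition create`. The role name, the actions chosen, and the management group ID are illustrative.

```python
import json

# Hypothetical custom role: read cost and usage data and manage shared cost views and budgets,
# assignable at a management group scope. Mirrors the Azure role-definition JSON format.
cost_views_role = {
    "Name": "Cost Views Manager (custom)",
    "Description": "Read cost and usage data and manage shared cost analysis views and budgets.",
    "Actions": [
        "Microsoft.CostManagement/*/read",
        "Microsoft.CostManagement/views/*",
        "Microsoft.Consumption/budgets/*",
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/providers/Microsoft.Management/managementGroups/<management-group-id>"  # placeholder
    ],
}

# Save to a file that could be passed to: az role definition create --role-definition @role.json
with open("role.json", "w") as f:
    json.dump(cost_views_role, f, indent=2)
```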

New ways to save money with Azure

Lots of cost optimization improvements over the past month! Here are a few you might be interested in:

Save up to 90 percent with Azure Spot VMs, now in preview—Spot will replace low priority VMs starting Feb 3, 2020.
Azure Dedicated Hosts now generally available, enabling you to save more compared to individually deployed VMs.
Check out new regions available in Norway that offer lower prices for some services. 

Recent changes to Azure usage data

Many organizations use the full Azure usage and charges dataset to understand what's being used, identify which charges should be internally billed to which teams, and/or look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit, just to name a few. If you're doing any analysis or have set up integration based on product details in the usage data, please update your logic for the following services.

All of the following changes were effective January 1:

Azure Data Box service renamed to "Azure Stack Edge"
Azure Data Share dataset movement meters renamed to "Snapshot Execution"
PostgreSQL, MySQL, and MariaDB General Purpose Large Scale Storage service tier and meter IDs changed
Azure Functions premium plan meter IDs changed

Also, remember the key-based Enterprise Agreement (EA) billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal. 

Documentation updates

There were lots of documentation updates. Here are a few you might be interested in:

Added Azure Database, Data Explorer, and Premium SSD reservations to list of supported reservation offers.
Minor updates and corrections to the scheduled exports tutorial and API reference.
Documented preview support for custom RBAC roles for management groups.
Corrected documentation about tags support by different resources—Azure NetApp Files and Managed database instances do not include tags in usage data.

Want to keep an eye on all of the documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What's next?

These are just a few of the big updates from last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

Azure IoT improves pharmaceutical sample management and medication adherence

For the recent IoT Signals report, commissioned by our Azure IoT team and conducted by Hypothesis Group, more than 3,000 decision makers at enterprise companies across the US, UK, Germany, France, China, and Japan who are currently involved in IoT participated in a 20-minute online survey. Healthcare was one of the industries included in the research. Of the healthcare executives surveyed, 82 percent said they have at least one IoT project in either the learning, proof of concept, purchase, or use phase, with many reporting they have one or more projects currently in ‘use.’ The top use cases cited by the healthcare executives included:

Tracking patients, staff, and inventory.
Remote device monitoring and service.
Remote health monitoring and assistance.
Safety, security, and compliance.
Facilities management.

Today we want to shed light on how two innovative companies are building upon this momentum and their own research to build IoT-enabled solutions with Azure IoT technologies that support medication management and adherence. These solutions address the safety, security, compliance, and inventory use cases highlighted in the report.

The Cost of Pharmaceutical Samples

According to a January 2019 article published by JAMA, Medical Marketing in the United States, 1997-2016, “Marketing to health care professionals by pharmaceutical companies accounted for [the] most promotional spending and increased from $15.6 billion to $20.3 billion, including $5.6 billion for prescriber detailing, $13.5 billion for free samples.”

Improving sample management

With billions of dollars on the line, one of our partners has developed an innovative way to ensure that pharmaceutical companies manage their samples in a cost-effective way. Using their own knowledge of the pharmaceutical industry and in-depth research, P360 (formerly Prescriber360), developed Swittons to bridge the gap between pharmaceutical companies and physicians. Designed as a “virtual pharmaceutical representative,” this IoT-enabled device offers real-time, secure communications between the physician and the pharmaceutical company. With this single device, physicians can order a sample, request a visit from a medical science liaison (MSL) or sales rep, or connect with the pharmaceutical company’s inside sales rep (as shown in the graphic below).

Designed to be branded with each pharmaceutical company’s product, the device is a physician engagement tool that enables pharmaceutical companies to customize and manage a sales channel that remains fully authentic to their brand experience. Furthermore, it provides an audit trail to manage samples more economically, enabling pharmaceutical companies to penetrate market whitespace and extend efficient sampling in areas that were previously unreachable.

Built on our Azure IoT platform, Swittons takes advantage of the latest in cloud, security, telecommunications, and analytics technology. “We strategically selected Azure IoT as the foundation for our Swittons ‘Virtual Rep.’ Microsoft’s vision, investments and the breadth of Azure cloud were the key criteria for selection. Having a reliable IoT platform along with world-class data and security infrastructure in Azure made the choice very easy,” commented Anupam Nandwana, CEO, P360, parent company of Swittons.

On the other end of the pharmaceutical supply chain is another scenario that dramatically affects the efficacy of pharmaceutical products—medication adherence.

Ensuring medication adherence

In the US today, 25 to 50 percent of all adults fail to take their prescribed medication on time, contributing to poor health outcomes, over-utilization of healthcare services and significant cost increases.

The causes of low levels of medication adherence are multi-faceted and include factors like carelessness, fear, supply, cost, and lack of understanding or information, with forgetfulness as the primary cause.

Furthermore, as cited in an editorial from BMJ Quality and Safety, “medication adherence thus constitutes one of the ‘big hairy problems’ or ‘big hairy audacious goals’ of healthcare. As well as affecting patients’ long-term outcomes, non-adherence can increase healthcare costs through consumption of medicines below the threshold of adherence required for clinical benefit, as well as contributing to healthcare resource use such as hospital admissions.”

In response to this, the global market for medication adherence (hardware-based automation and adherence systems and software-based applications) was worth nearly $1.7 billion in 2016. The market is expected to reach more than $3.9 billion by 2021, increasing at a CAGR of 18.0 percent from 2016 through 2021. This steep increase is fueled by burgeoning demand for advanced medication adherence systems and a growing number of people worldwide with chronic diseases.

Personal experience leads to action

Emanuele Musini knows all too well the implications of not taking medications properly. In fact, it was the pain of losing his father in 2005 from a chronic condition and a lack of adherence to the prescribed medication regimen that became the catalyst for Emanuele to start studying the issue in depth, searching for a solution. In 2015, Emanuele, along with his multidisciplinary team of doctors, entrepreneurs, engineers, and user-experience professionals, created Pillo Health, a health platform centered around a robot and digital assistant designed to prevent other families from enduring what Emanuele and his family experienced. Since their founding, they’ve partnered with leading manufacturers, such as Stanley Black & Decker, to bring in-home medication management solutions to market with solutions like Pria, a winner of the 2019 CES Innovation Awards.

The Pillo Health team built their medication adherence solution on Microsoft Azure Cloud Services using Azure Cognitive Services for voice technology and facial recognition, and services from the Azure IoT platform, including IoT Hub. The result is a voice-first, personalized, cloud-enabled, medication assistant that can help people maintain their medication regimen through social connectivity and delivery of important medical information at home. In a 4-week study conducted with AARP in 2018 for diabetic patients who were prescribed Metformin, Pillo delivered an average medication adherence rate of more than 87 percent—a meaningful 20 to 30 percent improvement from conventional reported standards.

Antonello Scalmato, Director of Cloud Services at Pillo Health noted, “We selected Microsoft Azure because it provided the best infrastructure for PaaS applications, allowed us to speed up the development of our complex product and avoided the overhead of machine and security management for traditional web API infrastructure. Moreover, IoT Hub provides a channel for secure communications and notifications to our users, and also enables simple device management that protects our product, from the factory into the users' homes.”

Learn More

To learn more about how Microsoft and our partners are transforming healthcare, visit our healthcare industry webpage. To get started building your IoT solutions, explore our portfolio for Azure IoT.
To learn more about Pillo Health, check out this video. You can also learn more about Pillo Health in their new white paper, “Improving the State of Medication Adherence.”

Source: Azure

Assess your servers with a CSV import into Azure Migrate

At Microsoft Ignite, we announced new Azure Migrate assessment capabilities that further simplify migration planning. In this post, we will demonstrate how to import servers into Azure Migrate Server Assessment through a CSV upload. Virtual servers of any hypervisor or cloud as well as physical servers can be assessed. You can get started with the CSV import feature by creating an Azure Migrate project or using your existing project.

Previously, Server Assessment required setting up an appliance on customer premises to discover VMware and Hyper-V virtual machines (VMs) and physical servers. We now also support importing and assessing servers without deploying an appliance. Import-based assessments support Server Assessment features such as Azure suitability analysis, migration cost planning, and performance-based rightsizing. Import-based assessment is helpful in the initial stages of migration planning, when organizational or security constraints may prevent you from deploying the appliance and sending data to Azure.

Importing your servers is easy. Simply upload the server inventory in a CSV file following the template provided by Azure Migrate. Only four data points are mandatory: server name, number of cores, size of memory, and operating system name. While you can run the assessment with this minimal information, we recommend that you also provide disk data to take advantage of disk sizing in assessments.
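To make this concrete, here is a minimal sketch of generating such an import file with Python. The column headers below are illustrative placeholders only; the authoritative headers are the ones in the CSV template you download from Azure Migrate.

```python
import csv

# Illustrative column names only; replace with the exact headers from the
# CSV template downloaded from your Azure Migrate project.
FIELDS = ["Server name", "Cores", "Memory (In MB)", "OS name", "Number of disks"]

servers = [
    {"Server name": "web-frontend-01", "Cores": 4, "Memory (In MB)": 16384,
     "OS name": "Windows Server 2016", "Number of disks": 2},
    {"Server name": "db-backend-01", "Cores": 8, "Memory (In MB)": 32768,
     "OS name": "Ubuntu 18.04", "Number of disks": 4},
]

# Write the inventory gathered from your CMDB, vCenter, or Hyper-V environment
# into a CSV file that can then be uploaded to Server Assessment.
with open("azure-migrate-import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(servers)
```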

Azure suitability analysis

The assessment determines whether a given server can be migrated as-is to Azure. Azure support is checked for each server discovered, and if a server is found not ready to be migrated, remediation guidance is automatically provided. You can customize an assessment by changing its properties and regenerating the assessment reports. You can also generate an assessment report by choosing a VM series of your choice and specifying the uptime of the workloads you will run in Azure.

Cost estimation and sizing

Assessment reports provide detailed cost estimates. You can optimize for cost using performance-based rightsizing assessments: the performance utilization value you specify for your on-premises servers is taken into account to recommend an appropriate Azure virtual machine and disk SKU. This helps you right-size on cost as you migrate servers that might be over-provisioned in your on-premises data center. You can also apply subscription offers and Reserved Instance pricing to the cost estimates.

Assess your imported servers in four simple steps

1. Create an Azure Migrate project and add the Server Assessment solution to the project. If you already have a project, you do not need to create a new one. Download the CSV template for importing servers.
2. Gather the inventory data from a configuration management database (CMDB), or from your vCenter server or Hyper-V environments, and convert the data into the format of the Azure Migrate CSV template.
3. Import the servers into Azure Migrate by uploading the server inventory in a CSV file that follows the template.
4. Once you have successfully imported the servers, create assessments and review the assessment reports.

When you are ready to deploy an appliance, you can leverage the performance history gathered by the appliance for more accurate sizing, as well as plan migration phases using dependency analysis.

Get started right away by creating an Azure Migrate project. Note that the inventory metadata uploaded is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Brazil, Canada, Europe, France, India, Japan, Korea, United Kingdom, and United States geographies.

In an upcoming blog post, we will talk about application discovery and agentless dependency analysis.

Resources to get started

Read this tutorial on how to import and assess servers using Azure Migrate Server Assessment.
Read these tutorials on how to assess Hyper-V, VMware, or any physical or virtual servers using the appliance in Server Assessment.

Source: Azure

Six things to consider when using Video Indexer at scale

Your ever-expanding archive of videos to index has led you to evaluate Microsoft Video Indexer, and you have decided to take your relationship with it to the next level by scaling up.
In general, scaling shouldn’t be difficult, but when you first face such a process you might not be sure of the best way to do it. Questions like “Are there any technological constraints I need to take into account?”, “Is there a smart and efficient way of doing it?”, and “Can I prevent spending excess money in the process?” may cross your mind. So, here are six best practices for using Video Indexer at scale.

1. When uploading videos, prefer URL over sending the file as a byte array

Video Indexer gives you the choice of uploading videos from a URL or directly by sending the file as a byte array, but remember that the latter comes with some constraints.

First, it has file size limitations: a byte-array upload is limited to 2 GB, compared to the 30 GB limit when uploading from a URL.

Second, and more importantly for your scaling, sending files as multi-part uploads makes you highly dependent on your network. Service reliability, connectivity, upload speed, and lost packets somewhere on the web are just some of the issues that can affect your performance and hence your ability to scale.

When you upload videos from a URL, you only need to give us a path to the location of the media file and we will take care of the rest (see the videoUrl field of the upload-video API).

To upload videos from a URL via the API, you can check this short code sample, or you can use AzCopy for a fast and reliable way to get your content into a storage account, from which you can submit it to Video Indexer using a SAS URL.
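As an illustration, here is a minimal sketch of an upload-by-URL call using Python and the requests library. The location, account ID, access token, and SAS URL are placeholders you supply yourself, and the parameter names should be confirmed against the upload-video API reference.

```python
import requests

# Placeholders; assumed values for illustration only.
LOCATION = "trial"                    # your Video Indexer account location
ACCOUNT_ID = "<your-account-id>"
ACCESS_TOKEN = "<your-access-token>"  # obtained from the authorization API

# A SAS URL (or any reachable URL) pointing at the media file in storage.
video_url = "https://<storage-account>.blob.core.windows.net/videos/demo.mp4?<sas-token>"

response = requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={
        "accessToken": ACCESS_TOKEN,
        "name": "demo-video",
        "videoUrl": video_url,  # upload by URL instead of sending the bytes
        "privacy": "Private",
    },
)
response.raise_for_status()
print(response.json()["id"])  # keep the video ID for later status checks
```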

2. Increase media reserved units if needed

In the proof-of-concept stage, when you are just starting out with Video Indexer, you usually don’t need a lot of computing power. When you scale up, however, you have a larger archive of videos to index and you want the process to run at a pace that fits your use case. You should therefore think about increasing the number of compute resources you use if the current amount of computing power is not enough.

In Azure Media Services, computing power and parallelization are expressed as media reserved units (RUs), the compute units that determine the parameters for your media processing tasks. The number of RUs affects how many media tasks can be processed concurrently in each account, their type determines the speed of processing, and one video might require more than one RU if its indexing is complex. When your RUs are busy, new tasks are held in a queue until a resource becomes available.

We know you want to operate efficiently and don’t want resources sitting idle part of the time. For that reason, we offer an auto-scale system that spins RUs down when less processing is needed and spins them up during your rush hours (up to the full number of RUs you have). You can easily enable this functionality by turning on autoscale in the account settings or by using the Update-Paid-Account-Azure-Media-Services API.

To minimize indexing duration and avoid low throughput, we recommend starting with 10 RUs of type S3. Later, if you scale up to support more content or higher concurrency and need more resources, you can contact us through the support system (paid accounts only) to request a larger RU allocation.

3. Respect throttling

Video Indexer is built to index at scale, and to get the most out of it you should be aware of the system’s capabilities and design your integration accordingly. You don’t want to send an upload request for a batch of videos only to discover that some of them didn’t upload and you received an HTTP 429 response code (too many requests). This happens when you send more requests per minute than the limit we support. Don’t worry: in the HTTP response we add a Retry-After header that specifies when you should attempt your next retry. Make sure you respect it before sending the next request.
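For example, here is a minimal sketch of a small upload helper that backs off according to the Retry-After header; the endpoint and parameters are assumed to be the same as in the earlier upload sketch.

```python
import time
import requests

def upload_with_retry(url, params, max_attempts=5):
    """POST an upload request, backing off whenever the service throttles us."""
    for attempt in range(max_attempts):
        response = requests.post(url, params=params)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Respect the Retry-After header (in seconds) before the next attempt.
        wait_seconds = int(response.headers.get("Retry-After", 30))
        time.sleep(wait_seconds)
    raise RuntimeError("Still throttled after retries; reduce the request rate.")
```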

4. Use callback URL

Have you ever called customer service and been told, “I’m processing your request now, it will take a few minutes. You can leave your phone number and we’ll get back to you when it is done”? Leaving your number and getting a call back the second your request is processed is exactly the concept behind a callback URL.

So, instead of constantly polling the status of your request from the second you send it, we recommend that you add a callback URL and wait for us to update you. As soon as there is any status change in your upload request, we will send a POST notification to the URL you provided.

You can add a callback URL as one of the parameters of the upload-video API (see the description in the API reference). If you are not sure how to do it, you can check the code samples in our GitHub repo. For the callback URL you can also use Azure Functions, a serverless, event-driven platform that can be triggered by HTTP and implement the follow-up flow, as in the sketch below.
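For illustration, here is a minimal sketch of an HTTP-triggered Azure Function (Python) that could receive the callback. The id and state query parameters shown are assumptions based on the commonly documented callback contract; confirm the exact contract in the upload-video API documentation.

```python
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Assumption: Video Indexer appends the video id and its new state as
    # query parameters when it POSTs to the callback URL.
    video_id = req.params.get("id")
    state = req.params.get("state")
    logging.info("Video %s changed state to %s", video_id, state)

    # Kick off your follow-up flow here, for example fetch the index once the
    # state indicates that processing has finished.
    return func.HttpResponse("Callback received", status_code=200)
```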

5. Use the right indexing parameters for you

Probably the first thing you need to do when using Video Indexer, and especially when trying to scale, is to think about how to get the most out of it with the right parameters for your needs. Think about your use case: by choosing the right parameters you can save money and make the indexing of your videos faster.

We give you the option to customize your usage of Video Indexer by choosing these indexing parameters. Don’t set the streaming preset if you don’t plan to watch the video, and don’t index video insights if you only need audio insights. It is that easy.

Before uploading and indexing your video, read this short documentation and check the indexingPreset and streamingPreset sections to get a better idea of what your options are.
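As a hedged example, if you only need audio insights and never plan to stream the file, an upload request along the following lines keeps the job cheaper and faster. Placeholders are the same as in the earlier sketch, and the preset values should be confirmed against the documentation.

```python
import requests

# Placeholders; assumed values for illustration only.
LOCATION = "trial"
ACCOUNT_ID = "<your-account-id>"
ACCESS_TOKEN = "<your-access-token>"
video_url = "https://<storage-account>.blob.core.windows.net/audio/episode42.mp4?<sas-token>"

response = requests.post(
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
    params={
        "accessToken": ACCESS_TOKEN,
        "name": "podcast-episode-42",
        "videoUrl": video_url,
        "indexingPreset": "AudioOnly",     # skip video insights entirely
        "streamingPreset": "NoStreaming",  # skip encoding for playback
    },
)
response.raise_for_status()
```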

6. Index in optimal resolution, not highest resolution

Not too long ago, HD video didn’t exist; now we have videos of varied quality, from HD to 8K. The question is: what video quality do you need for indexing? The higher the quality of the video you upload, the larger the file size, and the more computing power and time are needed to upload it.

Our experience shows that, in many cases, indexing performance differs very little between HD (720p) videos and 4K videos; eventually, you’ll get almost the same insights with the same confidence.

For example, for the face detection feature, a higher resolution can help with the scenario where there are many small but contextually important faces. However, this will come with a quadratic increase in runtime (and therefore higher COGS) and an increased risk of false positives.

Therefore, we recommend that you verify you get the right results for your use case and test it locally first: upload the same video in 720p and in 4K and compare the insights you get. Remember, there is no need to use a cannon to kill a fly.

Have questions or feedback? We would love to hear from you. Use our UserVoice page to help us prioritize features, leave a comment below, or email VISupport@Microsoft.com with any questions.

We want to hear about your use case, and we can help you scale.
Source: Azure

Fueling intelligent energy with IoT

At Microsoft, building a future that we can all thrive in is at the center of everything we do. On January 16, as part of the announcement that Microsoft will be carbon negative by 2030, we discussed how advances in human prosperity, as measured by GDP growth, are inextricably tied to the use of energy. Microsoft has committed to deploy $1 billion into a new climate innovation fund to accelerate the development of carbon reduction and removal technologies that will help us and the world become carbon negative. The Azure IoT team continues to invest in the platforms and tools that enable solution builders to deliver new energy solutions and customers to empower their workforce, optimize digital operations, and build smart, connected cities, vehicles, and buildings.

Earlier, Microsoft committed $50 million through Microsoft AI for Earth, which puts technology, resources, and expertise into the hands of those working to solve our most complex global environmental challenges, such as helping customers around the world meet their energy and sustainability commitments. Our partnership with Vattenfall illustrates how we will power new Swedish datacenter locations with renewable energy, and our partnership with E.ON, which manages low-voltage distribution grids, is challenging the limits of traditional grid technology through an in-house IoT platform based on Microsoft Azure IoT Hub.

Over the past few years, our engineers have had the pleasure to connect with and learn from a large ecosystem of energy solution builders and customers that are proactively shifting their consumption priorities. Transmission system operators (TSOs) are focused on transforming grid operations while distribution system operators (DSOs) and utilities are approaching their customers with new solutions, and all participants are requesting better, more accurate, more secure data.

As millions of new electric vehicles are entering our roads, new challenges arise around the transformation of the energy grid that moves us in our daily commutes. At the heart of these transformations are solutions that help energy providers get connected, stay connected, and transform their businesses through devices, insights, and actions.

In late 2019, we announced updates to Azure IoT Central that help solution builders move beyond proof of concept to building business-critical applications they can brand and sell directly or through Microsoft AppSource. Builders can brand, customize, and make apps their own with extensibility via APIs, data connectors to business applications, repeatability and manageability of their investment through multitenancy, and seamless device connectivity. Two IoT Central energy app templates, for solar panel and smart meter monitoring, already help energy solution builders accelerate development.

Azure IoT Central Energy App Templates.

DistribuTECH 2020

DistribuTECH International is the leading annual transmission and distribution event, addressing the technologies used to move electricity from the power plant through the transmission and distribution systems to the meter and inside the home. For this year’s event, held January 28 to January 30 in San Antonio, Texas, we invited eight leading energy solution builders to join us and demonstrate how they have leveraged Azure IoT to deliver amazing innovation. These partners will join Azure IoT experts who are available to discuss your business scenarios or get more specific about IoT devices, working with IoT data, and delivering a secure solution from the edge to the cloud.

Partners fueling intelligent energy

NXP EdgeVerse™ platform: intelligently manage grid load securely at the edge

The shift to vehicle electrification requires a completely different fueling infrastructure than gas-powered vehicles. Drivers of electric vehicles need to trust that they can charge for every occasion, everywhere, anytime, without getting stranded. Every electric utility vehicle in a managed fleet, for example, must be authorized to charge without overloading the grid during peak times.

To manage grid load intelligently, edge computing and security become vital. NXP and Microsoft have demonstrated “Demand Side Management” of a smart electric vehicle charging grid and infrastructure running on NXP’s EdgeVerse™ using Azure IoT Central. This solution helps reduce development risk and speed time to market. NXP EdgeVerse includes the NXP Layerscape LS1012 processor and i.MX RT 1060 series, integrated in Scalys TrustBox Edge, to provide best-in-class power efficiency and a highly secure, portable communication solution that connects to Azure IoT Central. As the fueling model shifts from petroleum to electric, intelligent management of grid load balancing is key.

OMNIO.net: Danish IoT connectivity startup onboarding devices and unifying data

OMNIO.net, a Danish Industrial IoT connectivity startup, is partnering with Microsoft Azure IoT to solve two of the biggest hurdles in Industrial IoT: onboarding of devices and unification of data.

OMNIO.net is helping companies of all sizes that have outfitted their campuses with solar panels. The OMNIO.net solution connects these panels to Azure IoT Hub to gather real-time data that will help optimize energy production and limit downtime. Companies look to OMNIO.net to overcome the challenges of connecting industrial devices and getting the most from their data. What may have taken months in the past now takes less than 24 hours: the combination of OMNIO.net’s energy expertise and Azure IoT gets partners’ devices connected so customers can focus on using their data to solve pressing business challenges rather than on IT.

iGen Technologies: a self-powered heating system for your home

iGen Technologies’ i2 is a self-powered heating system for residential homes. With its patented technology, i2 sets a new benchmark in home comfort and efficiency by generating, storing, and using its own electricity, keeping the heat on even during a grid outage. The system delivers resilience, lower operating costs, efficiency gains, and greenhouse gas emission reductions. The fully integrated solution offers a dispatchable resource with fuel-switching capability, giving utilities a valuable tool to manage peak load and surplus generation situations. iGen has partnered with Microsoft Azure IoT Central to develop a smart IoT interface for the i2 heat and power system. The integration of iGen’s distributed energy resource (DER) technology with Microsoft’s robust IoT app platform offers an ideal solution for utility demand response programs.

The i2 self-powered heating system. 

Agder Energi, NODES: scaling a sustainable and integrated energy marketplace

Distributed energy resources, digitalization, decarbonization, and new consumer behavior introduce challenges and opportunities for grid system operators to maintain reliable operation of the power system and create customer-centric services. The NODES marketplace relies on Azure to scale its flexible marketplace across 15 projects in 10 different European countries. The focus is on the use of flexibility from the distribution grid, transmission and distribution coordination, and integration with current balancing markets. Agder Energi is now piloting a flexible asset register and data hub with device management and analytics built on IoT Central. Rune Hogga, CEO of Agder Energi Flexibility, told us, "In order to have control of the data and be able to verify flexibility trades, Azure IoT Central provides us with a fast and efficient way to set up a system to collect data from a large number of distributed flexible assets."

L&T Technology Services: reducing carbon consumption and emissions

L&T Technology Services (LTTS) has developed low-carbon and EV charging grid solutions for global enterprises, buildings, and smart cities. The LTTS Smart City, Campus & Building solutions can reduce carbon emissions by up to 40 percent through its iBEMS on Azure solution, which connects an entire building’s infrastructure through a single unified interface. In collaboration with Microsoft Real Estate & Facilities, LTTS is building breakthrough EV charging solutions that give facility managers actionable insights on EV charger assets, including usage patterns, demand forecasting, and design and efficiency anomalies, while accurately tracking carbon credits. The LTTS solution also enables facility managers to optimize the EV charging grid based on energy sources (geothermal, solar, electric) and grid constraints such as energy capacity, and provides consumers with EV charging notifications based on drive-range preferences.

Telensa: utilities to support the business case for smart street lighting

Telensa makes wireless smart city applications, helping cities and utilities around the world save energy, work smarter, and deliver more cohesive services for their residents. Telensa is demonstrating how utilities can support the business case for smart street lighting, offering a platform to simply and seamlessly add other smart city applications like traffic monitoring, air quality and EV charging with AI-driven data insights. Telensa’s smart city solutions are increasingly built on Microsoft Azure IoT, leveraging the combination of data, devices, and connectivity, making IoT applications a practical proposition for any city.

Telensa is leading the Urban Data Project, with an initial deployment in Cambridge, UK. This new edge-AI technology generates valuable insights from streetlight-based imaging, creating a trusted infrastructure for urban data that enables cities to collect, protect, and use their data for the benefit of all residents. Telensa’s Urban IQ, which uses Microsoft Power BI for data visualization, is an open, low-cost platform for adding multiple sensor applications.

 

Telensa’s streetlight-based multi-sensor pods, which run on Azure IoT Edge and feature real-time AI and machine learning to extract insights.

eSmart Systems: improving powerline inspections and asset optimization by empowering human experts with Collaborative AI

eSmart Systems helps utilities gain insight into their assets by creating a virtuous cycle of collaboration and training between subject-matter experts, such as distribution or transmission engineers, and state-of-the-art deep learning artificial intelligence (AI).

A Microsoft finalist for AI energy partner of the year in 2019, eSmart Systems offers Connected Drone software that uses the Azure platform for accurate and self-improving power grid asset discovery and analysis. Grid inspectors continuously review results and correct them to feed more accurate results back into the system. Utilities can use this visual data to improve their asset registries, reduce maintenance costs, and improve reliability.

Kongsberg Digital: Grid Logic digital twin services for electrical grids

Increased electrification and the introduction of intermittent, distributed, and renewable energy production challenge today’s grid operations. A lack of sufficient data and insights leads to over-investment, capacity challenges, and power quality issues. With Grid Logic digital twin services running on Azure, grid operators get forecasting, insight into hotspots, and scenario simulation. With Azure IoT Hub, Grid Logic will make it possible to build a robust operating system for real-time grid operation, optimization, and automation.

Grid Logic capacity heatmap for a part of Norwegian DSO BKK Nett’s grid.

Let’s connect and collaborate to build your energy solutions  

Microsoft Azure IoT is empowering businesses and industries to shape the future with IoT. We’re ready to meet and support you wherever you are in your transformation journey. Pairing a strong portfolio of products with the right partners will help you accelerate building robust IoT solutions and achieve your goals. If you are attending DistribuTECH 2020, speak with Azure IoT experts or connect with one of the partners mentioned above.

Learn more about Microsoft Azure IoT and IoT for energy

Partner links:

Agder Energi
eSmart Systems
iGen Technologies
Kongsberg Digital
L&T Technology Services and L&T Power
NODES
NXP
OMNIO.net
Telensa

Source: Azure