IoT in Action: Enabling cloud transformation across industries

The intelligent cloud and intelligent edge go hand-in-hand, and together they are sparking massive transformation across industries. As computing gets more deeply embedded in the real world, powerful new opportunities arise to transform revenue, productivity, safety, customer experiences, and more. According to a white paper by Keystone Strategy, digital transformation leaders generate 8 percent more per year in operating income than other enterprises.

But what does cloud transformation look like within the context of the Internet of Things (IoT)?

Below I’ve laid out a typical cloud transformation journey and provided examples of how the cloud is transforming city government, industrial IoT, and the oil and gas industry. For a deep dive on this very topic, I hope you’ll join me, along with a host of cloud and IoT experts, Microsoft partners, and customers, at the upcoming IoT in Action event in Houston.

The typical cloud transformation journey

As mentioned, the cloud is a vital piece of IoT. Below I’ve outlined a typical cloud journey.

Embrace an innovation mindset: The first part of the cloud transformation journey—and this applies to digital transformation in general—is building a culture and mindset that is willing to innovate, and welcomes change and the potential it brings. This must start with leadership. If leadership doesn’t set the example of an innovation mindset, it will be difficult to achieve buy-in internally.
Clarify rationale for a cloud move: Typically, the reasons are plentiful, including cost savings, greater availability, and better performance. Understanding the rationale from a strategic standpoint and aligning it with your overall business goals can help you focus your efforts and find the right cloud fit.
Determine which applications to modernize and migrate: Prioritizing applications and determining which ones need to be migrated is also key. Migration is an opportunity for modernization of the IT ecosystem, which can ultimately save time and money. Making a prioritized plan and budgeting for modernization needs is critical.
Expect cloud usage (and costs) to rise: After the initial migration, cloud consumption typically increases. Because of easy access and relatively low cost, developers and administrators will consume more resources as they develop new applications and solutions.
But then it levels out: As an organization gets a clear understanding of its actual cloud consumption, it will be able to prioritize its workloads, bring some workloads back on premises, and negotiate pricing models. Implementing governance processes will help to control costs and ensure optimal performance.

Below I’ve included a few snapshots that show how the cloud transformation journey is paying off for city government, manufacturers, and the oil and gas industry.

Smart cities and the cloud journey

What do flood detection sensors, firefighting drones, transit Wi-Fi, and smart water meters have in common? They’re cloud connected.

Houston is on a mission to connect its citizens to the city and the city to its citizens. In the wake of the massive destruction caused by Hurricane Harvey, the city is doing more than just rebuilding: it is working to become safer, more resilient, and more connected.

To that end, the City of Houston is working with Microsoft and Microsoft partners to leverage cloud transformation and build repeatable IoT solutions that span transportation, public safety, disaster recovery and response, connected neighborhoods, smart buildings, and more. A shared vision and strong collaboration from city leaders have been crucial to the success of this massive undertaking.

Learn more about the Microsoft and Houston initiative for details around how Houston is embracing cloud transformation to take care of its citizens.

Industrial IoT and the cloud journey

Industrial organizations are also leveraging digital and cloud transformation. By combining the cloud with IoT, manufacturers can streamline operations, increase productivity, and predict issues before they happen. They can even offer new service lines.

Rolls-Royce is a fantastic example of a manufacturer that has embraced cloud transformation to create a valuable service that helps its customers minimize costly delays and maximize fuel efficiency. With more than 13,000 commercial aircraft engines in service worldwide, Rolls-Royce uses data from equipment sensors to help airlines predict and plan for maintenance needs and increase fuel economy.

The solution relies on the Microsoft Azure platform and Azure IoT solution accelerators to help filter, synthesize, and analyze massive volumes of data, delivering actionable insights to the right stakeholders at the right time. According to Michael Chester, Product Manager of Data Services at Rolls-Royce, “By looking at wider sets of operating data and using machine learning and analytics to spot subtle correlations, we can optimize our models and provide insight that might improve a flight schedule or a maintenance plan and help reduce disruption for our customers.”
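The post doesn’t reveal Rolls-Royce’s actual models. Purely as a hedged sketch of the general idea, one simple starting point for flagging unusual engine-sensor readings is to compare each new reading against a trailing baseline; the function name, window size, and threshold below are all illustrative, not part of any real product:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the trailing window.

    Illustrative only: production predictive-maintenance systems use
    far richer models than a rolling z-score.
    """
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # Guard against a flat window (zero variance).
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

A sudden spike in an otherwise stable stream is flagged, while normal fluctuation passes; in practice the output would feed a maintenance-planning workflow rather than a list of indices.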

Oil and gas IoT and the cloud journey

A shifting competitive landscape, price volatility, technology, and other factors are reshaping the oil and gas industry. Areas of transformation include field empowerment, operations, and industry innovation. Foundational to success is digital transformation.

XTO Energy, a subsidiary of ExxonMobil, knows firsthand the importance of digital and cloud transformation. One of the challenges it faced was that the existing infrastructure in the areas where it has major holdings didn’t lend itself to collecting data.

Recognizing the need to modernize and use data to drive better decisions, they deployed a series of intelligent cloud and intelligent edge solutions that have helped them keep tabs on well heads. Using the Microsoft Azure platform and Azure IoT technologies, they collect, store, and analyze data, giving XTO Energy new insights into well operations and future drilling possibilities.

According to Brian Khoury, IoT and Data Architecture Supervisor at XTO Energy, “We recognize the need to further digitize and to use data as an asset that drives insights and solves problems that we couldn’t solve when information is confined to physical paper or siloed across departments. Oil and gas tends to be behind in the use of digital tools compared to other industries, so we’re working hard to be more digitally enabled and connected. Embracing the cloud is an important part of that effort because it frees us up from having to manage hardware, storage, servers—all things that aren’t our core business—and we can scale and spin up resources as needed.”
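The article doesn’t describe XTO Energy’s data pipeline. Purely as an illustrative sketch of the pattern, an edge device might package each wellhead sample as JSON before sending it to a cloud IoT service; the helper and every field name here are hypothetical:

```python
import json
from datetime import datetime, timezone

def wellhead_reading(well_id, pressure_psi, temp_f):
    """Package one sensor sample as a JSON message for cloud upload.

    Hypothetical schema for illustration; the article does not
    describe XTO's actual message format.
    """
    return json.dumps({
        "wellId": well_id,
        "pressurePsi": pressure_psi,
        "tempF": temp_f,
        # UTC timestamp so readings from different sites line up.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
```

Once in the cloud, messages like this can be stored and analyzed to surface the kinds of insights into well operations the article describes.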

IoT in Action comes to Houston April 16, 2019

The intelligent cloud and intelligent edge present powerful opportunities across industries. Please join us for a one-day IoT in Action event in Houston. This event is a unique opportunity to explore innovative, scalable IoT solutions that enable cloud transformation across industries – from city government to industrial IoT solution providers and oil and gas innovators. It’s also a great way to connect with experts and network with other Microsoft partners and customers to explore opportunities around the intelligent edge and intelligent cloud.
Source: Azure

Introducing Compute- and Memory-Optimized VMs for Google Compute Engine

Whether you’re running compute-bound applications for HPC or large, in-memory database applications like SAP HANA, you need the right mix of compute resources for the job, while also keeping an eye on price-performance. The vast majority of enterprise workloads run successfully on Google Cloud Platform using our general-purpose VMs. However, as you port more workloads to the cloud, you may need VMs that are optimized for specific types of workloads.

Today we are pleased to announce the expansion of our Compute Engine virtual machine (VM) offerings to include new Compute-Optimized VMs and Memory-Optimized VMs. Both are based on 2nd Generation Intel Xeon Scalable Processors, which we delivered to customers last October, making us the first cloud provider to do so. These processors will also be coming to our general-purpose VMs. This means you’ll have access to a complete portfolio of machine types to run your workloads across a wide range of memory and compute requirements.

Compute-Optimized VMs

Compute-Optimized VMs (C2) are a new compute family on GCP, exposing the high per-thread performance and memory speeds that benefit the most compute-intensive workloads. Compute-Optimized VMs are great for HPC, electronic design automation (EDA), gaming, single-threaded applications, and more. The new Compute-Optimized VMs offer a greater than 40% performance improvement over current GCP VMs. They also leverage 2nd Generation Intel Xeon Scalable Processors and can run at a sustained clock speed of 3.8 GHz. Additionally, C2 VMs provide full transparency into the architecture of the underlying server platforms, enabling advanced performance tuning. You can choose Compute-Optimized VMs with up to 60 vCPUs, 240 GB of memory, and up to 3 TB of local storage. Compute-Optimized VMs are currently available in alpha.

Memory-Optimized VMs

Memory-Optimized VMs (M2) offer the highest memory configuration of any Compute Engine VM.
They are well suited for memory-intensive workloads such as large in-memory databases (e.g., SAP HANA) as well as in-memory data analytics. Last July, we announced memory-optimized VMs with up to 4 TB of memory. Today’s additions to the M2 family offer up to 12 TB of memory and 416 vCPUs, enabling you to run scale-up workloads on GCP. These VMs are also based on 2nd Generation Intel Xeon Scalable Processors and will come in multiple sizes. M2 machine types will be available to early-access customers this quarter.

Pricing

The new Compute-Optimized VMs will start at $0.209/hr for a c2-standard-4 instance and go up to $3.13/hr for a c2-standard-60 instance. C2 VMs are also available as Preemptible VMs starting at $0.0505/hr. Pricing for the newest M2 VMs will be announced at a later date.

If you’re ready to get started, you can sign up for early access. Once your account is approved for access, you can log in to the GCP Console, use the Google Cloud SDK, or use the Google Cloud APIs to launch the new VMs. Stay tuned for updates on beta and general availability.
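As a quick sanity check on the hourly rates quoted above, a rough monthly figure can be derived by multiplying the rate by the hours in a month. The ~730-hour month and the assumption of continuous, undiscounted on-demand usage are mine, not from the announcement:

```python
# Back-of-the-envelope monthly costs from the hourly rates quoted above.
# Assumes ~730 hours in a month and continuous on-demand usage with
# no sustained-use or committed-use discounts (my assumptions).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Rough monthly cost for an instance running continuously."""
    return round(hourly_rate * hours, 2)

for name, rate in [
    ("c2-standard-4 (on-demand)", 0.209),
    ("c2-standard-60 (on-demand)", 3.13),
    ("c2 preemptible (from)", 0.0505),
]:
    print(f"{name}: ${monthly_cost(rate):,.2f}/month")
```

The gap between the on-demand and preemptible rates illustrates why preemptible VMs are attractive for interruptible batch workloads.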
Source: Google Cloud Platform

Agilex: Intel’s 10 nm FPGAs use a chiplet design

Intel’s Altera subsidiary follows up: the Agilex generation of programmable logic devices supports PCIe Gen5, and the FPGAs integrate a wide variety of memory types and are cache-coherent with Intel’s Xeon processors. Above all, the design is flexibly extensible. (FPGA, Intel)
Source: Golem

Installing OpenShift 4 on AWS from Start to Finish

You have probably heard about all the great engineering work going on to get the next release of OpenShift 4 ready for prime time. OpenShift 4 marks an incredible advancement for enterprise Kubernetes, as it includes great new features such as over-the-air updates and integration with the OperatorHub. One of […]
The post Installing OpenShift 4 on AWS from Start to Finish appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Cloud Memorystore: Now with Redis version 4.0 support and manual failover API

After announcing the general availability of Cloud Memorystore for Redis last year, we have seen tremendous growth across various industries, especially in gaming and retail. Cloud Memorystore for Redis lets Google Cloud Platform (GCP) customers use a fully managed, in-memory data store service. Cloud Memorystore automates all the administrative tasks involved in managing your Redis instances, including provisioning, scaling, and monitoring, so you can focus on building apps with low latency and high availability.

We are excited to announce Cloud Memorystore support for Redis version 4.0 (in beta) and a new manual failover API here at RedisConf 2019.

What’s new with Redis 4.0

Key features added in Redis version 4.0 include:

Caching improvements: Redis introduces a least frequently used (LFU) eviction algorithm, which can provide a more accurate picture of cache usage than least recently used (LRU) eviction.
Active memory defragmentation: Redis can now defragment memory while online. This helps actively reclaim unused memory and prevents unnecessary crashes.

We’ve also added a manual failover API to Cloud Memorystore so you can test its failover behavior. Before deploying applications that use Cloud Memorystore in production, it’s important to test how the client and the application behave when a failover happens. With the new API, it’s easy to trigger a failover and observe application behavior so you can plan accordingly for backup and restore purposes.

We exposed Redis metrics to Stackdriver in the previous release so that you can easily debug Redis issues in your application. To make it easier to debug client-side issues, we’ve partnered with OpenCensus to automatically collect traces and metrics from your app. The traces and metrics are available in a variety of back-end monitoring tools, including Stackdriver, so you can get an even more detailed picture of Redis performance.
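The LFU policy mentioned above evicts by access frequency rather than recency. As a rough illustration of the idea only (Redis itself uses a probabilistic, approximated frequency counter rather than this exact scheme), a toy LFU cache might look like this:

```python
from collections import defaultdict

class LFUCache:
    """Toy least-frequently-used cache to illustrate LFU eviction.

    Sketch only: Redis 4.0's allkeys-lfu policy approximates
    frequencies probabilistically instead of tracking exact counts.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = defaultdict(int)  # access count per key

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the least frequently used key to make room.
            victim = min(self.data, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1
```

The contrast with LRU: a key read many times long ago survives here, whereas LRU would evict it in favor of whatever was touched most recently.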
You can learn more about Cloud Memorystore and OpenCensus in this video. Learn more about Cloud Memorystore for Redis here and see various deployment scenarios for running Cloud Memorystore on GCP here.
Source: Google Cloud Platform