AWS Identity and Access Management (IAM) adds a new access control mechanism for requests that AWS services make on your behalf

AWS Identity and Access Management (IAM) now lets you control access for requests that AWS services make on your behalf. For example, the new mechanism can grant your IAM principals permission to launch Amazon Elastic Compute Cloud (EC2) instances, but only through AWS CloudFormation and without direct access to EC2.
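As a rough illustration (not taken from the announcement itself), the sketch below uses boto3 to create an IAM policy that allows ec2:RunInstances only when the request is made on the caller's behalf by CloudFormation. It assumes the mechanism is exposed as the aws:CalledVia condition key; the policy name is hypothetical, and a real deployment would need additional permissions (for example, the CloudFormation actions themselves and any iam:PassRole the template requires).

```python
import json

import boto3

# Hypothetical policy: permit launching EC2 instances only when the request
# arrives via AWS CloudFormation acting on the principal's behalf.
# Using aws:CalledVia here is an assumption about how the new mechanism is exposed.
POLICY_DOCUMENT = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": ["cloudformation.amazonaws.com"]
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="LaunchEc2OnlyViaCloudFormation",  # hypothetical name
    PolicyDocument=json.dumps(POLICY_DOCUMENT),
)
```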
Quelle: aws.amazon.com

Mirantis will continue to support and develop Docker Swarm

Here at Mirantis, we’re excited to announce our continued support for Docker Swarm, while also investing in new features requested by customers.
Following our acquisition of Docker Enterprise in November 2019, we affirmed at least two years of continued Swarm support, pending discussions with customers. These conversations have led us to the conclusion that our customers want continued support of Swarm without an implied end date.
Mirantis’ goal is to simplify container usage at enterprise scale with freedom of choice for orchestrators. Swarm has a proven track record of running mission critical container workloads in demanding production environments, and our customers can rest assured that Mirantis will continue to support their Swarm investments.
To that end, Mirantis will be continuing to invest in active Swarm development. Recently Mirantis developed Swarm Jobs, a new service mode enabling run-and-done workloads on a Swarm cluster.
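For readers who want to experiment, here is a minimal sketch (not Mirantis' code) that launches a run-and-done Swarm job by shelling out to the Docker CLI from Python. It assumes a Docker Engine and CLI release that ships job-mode services; the service name, image, and command are placeholders.

```python
"""Minimal sketch: start a run-and-done Swarm job via the Docker CLI.

Assumes an engine/CLI release that includes job-mode services; all names
below are placeholders.
"""
import subprocess
from typing import List


def run_swarm_job(name: str, image: str, command: List[str]) -> None:
    """Create a replicated-job service that runs `command` to completion once."""
    subprocess.run(
        [
            "docker", "service", "create",
            "--name", name,
            "--mode", "replicated-job",  # run-and-done semantics
            image,
            *command,
        ],
        check=True,
    )


if __name__ == "__main__":
    run_swarm_job("nightly-report", "alpine:3", ["sh", "-c", "echo report done"])
```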
In addition, Mirantis is very excited to announce a commitment to the development of Cluster Volume Support with CSI Plugins. Originally discussed at DockerCon 2019, this development proposal received very positive feedback from the community. By leveraging the Container Storage Interface plugin architecture, Swarm will be able to use the growing CSI ecosystem to handle distributed persistent volumes, supporting a wider range of backend storage options and more flexible and intelligent scheduling.
Swarm Jobs and Swarm Volume Support are part of our 2020 Roadmap with dates announced at KubeCon EU.
Quelle: Mirantis

Fileless attack detection for Linux in preview

This blog post was co-authored by Aditya Joshi, Senior Software Engineer, Enterprise Protection and Detection.

Attackers are increasingly employing stealthier methods to avoid detection. Fileless attacks exploit software vulnerabilities, inject malicious payloads into benign system processes, and hide in memory. These techniques minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions.

To counter this threat, Azure Security Center released fileless attack detection for Windows in October 2018. Our blog post from 2018 explains how Security Center can detect shellcode, code injection, payload obfuscation techniques, and other fileless attack behaviors on Windows. Our research indicates the rise of fileless attacks on Linux workloads as well.

Today, Azure Security Center is happy to announce a preview for detecting fileless attacks on Linux.  In this post, we will describe a real-world fileless attack on Linux, introduce our fileless attack detection capabilities, and provide instructions for onboarding to the preview. 

Real-world fileless attack on Linux

One common pattern we see is attackers injecting payloads from packed malware on disk into memory and deleting the original malicious file from the disk. Here is a recent example:

An attacker infects a Hadoop cluster by identifying the service running on a well-known port (8088) and exploits Hadoop YARN's unauthenticated remote command execution to achieve runtime access on the machine. Note that the owner of the subscription could have mitigated this stage of the attack by configuring Security Center just-in-time (JIT) VM access.
The attacker copies a file containing packed malware into a temp directory and launches it.
The malicious process unpacks the file using shellcode to allocate a new dynamic executable region of memory in the process’s own memory space and injects an executable payload into the new memory region.
The malware then transfers execution to the injected ELF entry point.
The malicious process deletes the original packed malware from disk to cover its tracks. 
The injected ELF payload contains shellcode that listens for incoming TCP connections, over which the attacker transmits instructions.

This attack is difficult for scanners to detect. The payload is hidden behind layers of obfuscation and only present on disk for a short time.  With the fileless attack detection preview, Security Center can now identify these kinds of payloads in memory and inform users of the payload’s capabilities.

Fileless attack detection capabilities

Like fileless attack detection for Windows, this feature scans the memory of all processes for evidence of fileless toolkits, techniques and behaviors. Over the course of the preview, we will be enabling and refining our analytics to detect the following behaviors of userland malware:

Well-known toolkits and crypto mining software. 
Shellcode, injected ELF executables, and malicious code in executable regions of process memory (see the sketch after this list).
LD_PRELOAD-based rootkits that preload malicious libraries.
Elevation of privilege of a process from non-root to root.
Remote control of another process using ptrace.
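To make the "executable regions of process memory" item above more concrete, here is a small sketch (not Security Center's implementation) of one heuristic such a scanner could use on Linux: walking /proc/<pid>/maps and flagging executable mappings that have no backing file, which is where injected shellcode or in-memory ELF payloads typically live. It needs sufficient privileges to read other processes' maps, and a real scanner would also have to distinguish legitimate JIT-compiled code.

```python
"""Heuristic sketch (not Security Center's implementation): flag anonymous,
executable memory regions by reading /proc/<pid>/maps."""
import os
import re

# /proc/<pid>/maps line layout: start-end perms offset dev inode [pathname]
MAPS_LINE = re.compile(
    r"^(?P<start>[0-9a-f]+)-(?P<end>[0-9a-f]+)\s+(?P<perms>\S+)"
    r"\s+\S+\s+\S+\s+\S+\s*(?P<path>.*)$"
)


def suspicious_regions(pid: int):
    """Yield (start, end, perms) for executable regions with no backing file."""
    try:
        with open(f"/proc/{pid}/maps") as maps:
            for line in maps:
                m = MAPS_LINE.match(line.rstrip("\n"))
                if not m:
                    continue
                perms, path = m.group("perms"), m.group("path")
                # Executable mappings not backed by a file are a common home
                # for injected shellcode or in-memory ELF payloads.
                if "x" in perms and path == "":
                    yield int(m.group("start"), 16), int(m.group("end"), 16), perms
    except (PermissionError, FileNotFoundError):
        return  # process exited or we lack privileges


if __name__ == "__main__":
    for pid in (int(d) for d in os.listdir("/proc") if d.isdigit()):
        for start, end, perms in suspicious_regions(pid):
            print(f"pid={pid} region={start:#x}-{end:#x} perms={perms}")
```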

In the event of a detection, you receive an alert in the Security alerts page. Alerts contain supplemental information such as the kind of techniques used, process metadata, and network activity. This enables analysts to have a greater understanding of the nature of the malware, differentiate between different attacks, and make more informed decisions when choosing remediation steps.

The scan is non-invasive and does not affect the other processes on the system.  The vast majority of scans run in less than five seconds. The privacy of your data is protected throughout this procedure as all memory analysis is performed on the host itself. Scan results contain only security-relevant metadata and details of suspicious payloads.

Getting started

To sign up for this specific preview, or for our ongoing preview program, indicate your interest in the "Fileless attack detection preview."

Once you choose to onboard, this feature is automatically deployed to your Linux machines as an extension to the Log Analytics agent for Linux (also known as the OMS agent), which supports the Linux OS distributions described in this documentation. This solution supports Azure, cross-cloud, and on-premises environments. Participants must be enrolled in the Standard or Standard Trial pricing tier to benefit from this feature.

To learn more about Azure Security Center, visit the Azure Security Center page.
Quelle: Azure

Burst 4K encoding on Azure Kubernetes Service

Burst encoding in the cloud with Azure and Media Excel HERO platform.

Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K to the mix and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads. Azure Stack supports hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.

Finding a solution

Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises and pushing 4K into an Azure datacenter, the immediate problem is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure datacenter. In general, it's a best practice to ingest into multiple regions, which further increases the load on the network connection between the encoder and the Azure datacenter.

How do we ingest 4K content reliably into the public cloud?

Alternatively, we can encode the content in the cloud. If we can run 4K/UHD live encoding in Azure, its output can be ingested into Azure Media Services over the intra-Azure network backbone which provides sufficient bandwidth and reliability.

How can we reliably run and scale 4K/UHD live encoding on the Azure cloud as a containerized solution? Let's explore below. 

Azure Kubernetes Service

With Azure Kubernetes Service (AKS), Microsoft offers customers a managed Kubernetes platform. It is a hosted Kubernetes platform that removes much of the configuration burden of creating a cluster, such as networking, cluster masters, and OS patching of the cluster nodes. It also comes with pre-configured monitoring that integrates seamlessly with Azure Monitor and Log Analytics, while still offering the flexibility to integrate your own tools. Furthermore, it is still plain vanilla Kubernetes and as such is fully compatible with any existing tooling you might have running on any other standard Kubernetes platform.

Media Excel encoding

Media Excel is an encoding and transcoding vendor offering physical appliance and software-based encoding solutions. Media Excel has been partnering with Microsoft for many years and engaging in Azure media customer projects. It is also listed as a recommended and tested contribution encoder for Azure Media Services for fMP4. Media Excel and Microsoft have also worked together to integrate SCTE-35 timed metadata from the Media Excel encoder into an Azure Media Services origin, supporting Server-Side Ad Insertion (SSAI) workflows.

Networking challenge

With increasing picture quality like 4K and 8K, the burden on both compute and networking becomes a significant architectural challenge. In a recent engagement with a customer, we needed to architect a 4K live streaming platform with the challenge of limited bandwidth capacity from the customer premises to one of our Azure datacenters. We worked with Media Excel to build a scalable containerized encoding platform on AKS, utilizing cloud compute and minimizing network latency between the encoder and the Azure Media Services packager. Multiple bitrates of the same source, with a top bitrate of up to 4Kp60 at 20 Mbps, are generated in the cloud and ingested into the Azure Media Services platform for further processing, including dynamic encryption and packaging. This setup enables the following benefits:

Instant scale to multiple AKS nodes
Eliminate network constraints between customer and Azure Datacenter
Automated workflow for containers and easy separation of concern with container technology
Increased level of security of high-quality generated content to distribution
Highly redundant capability
Flexibility to provide various types of Node pools for optimized media workloads

In this particular test, we proved that the intra-Azure network is extremely capable of shipping high-bandwidth, latency-sensitive 4K packets from a containerized encoder instance running in West Europe to both the East US and Hong Kong datacenter regions. This allows the customer to place the origin closer to them for further content conditioning.

Workflow:

Azure Pipelines is triggered to deploy onto the AKS cluster. The deployment YAML file (which you can find on GitHub) references the Media Excel container in Azure Container Registry (a Python sketch of an equivalent deployment follows this list).
AKS starts the deployment and pulls the container image from Azure Container Registry.
During container start, a custom PHP script is loaded and the container is added to the HMS (Hero Management Service) and placed into the correct device pool and job.
The encoder loads the source and (in this case) pushes a 4K livestream into Azure Media Services.
Media Services packages the livestream into multiple formats and applies DRM (digital rights management).
Azure Content Delivery Network scales the livestream.
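For illustration, the sketch below (using the official Kubernetes Python client) creates a deployment comparable to the YAML referenced in the first workflow step: it pulls a hypothetical Media Excel HERO image from Azure Container Registry. The registry URL, image name, namespace, labels, and resource sizes are placeholders, not values from the proof of concept.

```python
"""Sketch: deploy a (hypothetical) Media Excel HERO encoder image from Azure
Container Registry onto AKS with the Kubernetes Python client."""
from kubernetes import client, config


def build_encoder_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="hero-encoder",
        # Placeholder image reference in Azure Container Registry.
        image="myregistry.azurecr.io/mediaexcel/hero:latest",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "8", "memory": "16Gi"},   # placeholder sizing
            limits={"cpu": "16", "memory": "32Gi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "hero-encoder"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="hero-encoder"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "hero-encoder"}),
            template=template,
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # use load_incluster_config() inside the cluster
    client.AppsV1Api().create_namespaced_deployment(
        namespace="encoding", body=build_encoder_deployment()
    )
```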

Scale through Azure Container Instances

With Azure Kubernetes Service you get the power of Azure Container Instances out of the box. Azure Container Instances are a way to instantly scale to pre-provisioned compute power at your disposal. When deploying Media Excel encoding instances to AKS, you can specify where these instances will be created. This offers you the flexibility to work with variables like increased density on cheaper nodes for low-cost, low-priority encoding jobs or more expensive nodes for high-throughput, high-priority jobs. With Azure Container Instances you can instantly move workloads to standby compute power without provisioning time. You only pay for the compute time, offering full flexibility for customer demand and future changes in platform needs. With Media Excel's flexible live/file-based encoding roles you can easily move workloads across the different compute power offered by AKS and Azure Container Instances.
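As a hedged illustration of specifying where instances are created, the fragment below extends the earlier pod spec so that encoder pods schedule onto AKS virtual nodes backed by Azure Container Instances. The node selector and tolerations follow what the AKS virtual nodes documentation commonly shows; verify them against your cluster version before relying on them.

```python
"""Sketch: pod spec targeting AKS virtual nodes (Azure Container Instances).

Selector and toleration values are taken from AKS virtual-nodes documentation
conventions and should be verified against your cluster.
"""
from kubernetes import client


def aci_pod_spec(container: client.V1Container) -> client.V1PodSpec:
    return client.V1PodSpec(
        containers=[container],
        node_selector={
            "kubernetes.io/role": "agent",
            "beta.kubernetes.io/os": "linux",
            "type": "virtual-kubelet",
        },
        tolerations=[
            client.V1Toleration(key="virtual-kubelet.io/provider", operator="Exists"),
            client.V1Toleration(key="azure.com/aci", effect="NoSchedule"),
        ],
    )
```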

Azure DevOps pipeline to bring it all together

All the general benefits that come with containerized workloads apply in the following case. For this particular proof of concept, we created an automated deployment pipeline in Azure DevOps for easy testing and deployment. With a deployment YAML and a pipeline YAML we can easily automate deployment, provisioning, and scaling of a Media Excel encoding container. Once DevOps pushes the deployment job onto AKS, a container image is pulled from Azure Container Registry. Although container images can be bulky, node-side caching of layers brings any additional container pull down to seconds. With the help of Media Excel, we created a YAML file containing pre- and post-container lifecycle logic that adds and removes a container from Media Excel's management portal. This offers easy single-pane-of-glass management of multiple instances across multiple node types, clusters, and regions.

This deployment pipeline offers full flexibility to provision certain multi-tenant customers or job priorities on specific node types. This unlocks the possibility of provisioning encoding jobs on GPU-enabled nodes for maximum throughput or using cheaper generic nodes for low-priority jobs.

Azure Media Services and Azure Content Delivery Network

Finally, we push the 4K stream into Azure Media Services. Azure Media Services is a cloud-based platform that enables you to build solutions that achieve broadcast-quality video streaming, enhance accessibility and distribution, analyze content, and much more. Whether you're an app developer, a call center, a government agency, or an entertainment company, Media Services helps you create apps that deliver media experiences of outstanding quality to large audiences on today’s most popular mobile devices and browsers.

Azure Media Services is seamlessly integrated with Azure Content Delivery Network. With Azure Content Delivery Network we offer a true multi-CDN, with choices of Azure Content Delivery Network from Microsoft, Azure Content Delivery Network from Verizon, and Azure Content Delivery Network from Akamai. All of this is available through a single Azure Content Delivery Network API for easy provisioning and management. As an added benefit, all CDN traffic between the Azure Media Services origin and the CDN edge is free of charge.

With this setup, we’ve demonstrated that Cloud encoding is ready to handle real-time 4K encoding across multiple clusters. Thanks to Azure services like AKS, Container Registry, Azure DevOps, Media Services, and Azure Content Delivery Network, we demonstrated how easy it is to create an architecture that is capable of meeting high throughput time-sensitive constraints.
Quelle: Azure

Charting a lifetime of learning and love for technology

Editor's note: In honor of Black History Month, we're talking to Cloud Googlers about the people and moments that inspire them, and sharing how they're shaping a more inclusive vision of technology. From his Senegalese childhood to his European education to his work running the global solutions engineering organization in Google Cloud, Hamidou Dia has always had a passion for education. We talked with him to learn more about his passion and how he applies it to work.

Tell us about what you do at Google.

I'm the solutions engineering lead here at Google Cloud. That means that my team and I focus on helping customers solve their most complex business problems using our technology. We have amazing technology here, and we want to make sure we're connecting that technology to the business challenges our customers are facing today. We work closely with our customers to understand their business issues, and then build solutions that help them. The issues we help customers solve for are phenomenal, in industries from retail and financial services to healthcare to the public sector, and more, so I'm continually learning new things. In my team, we conduct design thinking workshops with our customers to help uncover their business challenges and co-create solutions. This is what it means to be "Googley": being helpful and solving together! In any sales organization, you are constantly articulating your value proposition to customers. Customers constantly ask me why they should choose Google Cloud. One of our key differentiators is the Google culture. Customers are intrigued by our culture of innovation and collaboration and they want to be associated with that.

As we celebrate Black History Month, who inspires you?

Someone who I've always admired is Léopold Senghor, the first president of Senegal after the country gained its independence in 1960. It's an 80% Muslim country and he was Catholic, yet he was able to relate to many different people and bring them together. He laid out a vision for education and for building a peaceful, secular society. He's one of the greatest African leaders, and I talk about him often. He was a poet and teacher as well as a leader, and he really understood how important education is to a society.

What's your passion in life?

Education is my passion. I first learned that passion from my mom: even though she couldn't read or write, she knew how valuable education was. She told me that I was smart and I could succeed. I loved playing soccer, but she constantly told me I needed to balance schoolwork and soccer. I'm glad now that I spent more time on math and physics! I had a middle school teacher in Senegal who really believed I deserved to go to high school. There were only five high schools in the whole country. That teacher fought to get me into high school, and that acceptance opened so many doors for me. I had to leave home to go to high school since the nearest school was 140 kilometers away. After high school, I went to college in France on a scholarship. Very few students across the country each year went to college on an academic scholarship. I thought I would study business. But the very first time I interacted with a PC, it totally changed my path. I wrote my first program and knew I had to study engineering, then got a master's degree in computer science. I love technology and how it can be so helpful in everyday life, and I knew right away it was the field for me. I'm fortunate enough that I also get to live my passion working at Google. Every day I get to help educate our customers and find the best solution for their needs.

What does Black History Month mean to you?

I've lived in the U.S. for over 20 years. My five children were all educated here and they were often in the minority at school, which was very different from my childhood in Senegal. We've had many family discussions on race and identity. My advice to my children has always been to embrace their heritage and stay true to themselves. Don't let others tell you what you can and can't do. Carve your own path. As for my own experience, moving from Senegal to Europe and then the United States, I appreciated meeting people from so many different backgrounds and experiences. You can learn so much by talking to someone who is different from you. Black History Month is a wonderful opportunity to elevate those voices we don't hear from enough and learn from their experiences.

What advice do you give to students or those new to the workforce?

You always have to have a drive, a passion for learning and growing all the time, no matter where you are in your career. I always refer back to the principles I was raised with in West Africa. Number one is character. It's having integrity in everything you do. Second is that it's all about hard work. In the technology industry, finding your area of expertise, and always continuing to learn more, is how you can stay on top of your game. And finally, don't be afraid. Don't fear the unknown, or fear a challenge. The greatest challenges are often where the greatest opportunities lie.
Quelle: Google Cloud Platform

All together now: our operations products in one place

Our suite of operations products has come a long way since the acquisition of Stackdriver back in 2014. The suite has constantly evolved with significant new capabilities since then, and today we reach another important milestone with complete integration into the Google Cloud Console. We're now saying goodbye to the Stackdriver brand, and announcing an operations suite of products, which includes Cloud Logging, Cloud Monitoring, Cloud Trace, Cloud Debugger, and Cloud Profiler. The Stackdriver functionality you've come to depend on isn't changing.

Over the years, these operations products have seen strong growth in usage by not just application developers and DevOps teams, but also IT operators and security teams. Complete integration of the products into the Cloud Console, along with in-context presence within the key service pages themselves, like the integrations into the Compute Engine, Google Kubernetes Engine, Cloud SQL, and Dataflow management consoles, brings a great experience to all users. Putting operations tasks a quick click away, without users losing context of the activities they had been performing, shows how seamless an operations journey can be. In addition to this console integration, we're very happy to share some of the progress in our products, with lots of exciting features launching today.

Cloud Logging

Continuing with our goal to build easy-to-use products, we have completely overhauled the Logs Viewer and will be rolling this out to everyone over the next week. This makes it even easier for you to quickly identify and troubleshoot issues. We're also pleased to announce that the ability to customize how long logs are retained is now available in beta. With both the new Cloud Logging user interface and the new 10-year log retention feature, you can search logs quickly and identify trends and correlations.

We also understand that in some cases it is very useful to export logs from Cloud Logging to other locations like BigQuery, Cloud Storage, or even third-party log management systems. To make this easier, we are making the Logs Router generally available. Similarly, data from Cloud Trace can also be exported to BigQuery. The Logs Router's support for customer-managed encryption keys (CMEK) also makes this a good solution for environments that need to meet that security requirement for compliance and other purposes.

Cloud Monitoring

The biggest change of all that you'll see in the console is Cloud Monitoring, as this was the last Stackdriver product to migrate over to the Google Cloud Console. You'll now find a better designed, easy-to-navigate site, and important new features targeted to make your life easier. We are increasing our metrics retention to 24 months and writing metrics at up to 10-second granularity. The increased granularity is especially useful when making quick operational decisions like load balancing, scaling, etc. Like Cloud Logging, you can now access what you need more quickly, as well as prepare for future troubleshooting with longer retention.

An additional key launch is the Dashboards API, which lets you develop a dashboard once and share it multiple times in other workspaces and environments. Users might also notice better metrics recommendations, with the most popular metrics for a given resource type surfaced at the top of the selection list. This is a great example of understanding the metrics preferred by the millions of users on Google Cloud, and surfacing them quickly in other situations.

This release also makes it possible to route alerts to independent systems with Pub/Sub support, continuing the ability to connect a broad variety of operational tools with Cloud Monitoring. To keep up with the needs of some of our largest users, we are also expanding support to hundreds of projects within a Workspace, providing a single point of control and a management interface for multiple projects. Stay tuned for more details about all of these new capabilities in a series of blog posts over the next few weeks.

2020 will continue to see momentum for our operations suite of products, and we're looking forward to the road ahead as we continue to help developers and operators across the world manage and troubleshoot issues quickly and keep their systems up and running. Learn more about the operations suite here.
Quelle: Google Cloud Platform

A secure foundation for IoT, Azure Sphere now generally available

Today Microsoft Azure Sphere is generally available. Our mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. General availability is an important milestone for our team and for our customers, demonstrating that we are ready to fulfill our promise at scale. For Azure Sphere, this marks a few specific points in our development. First, our software and hardware have completed rigorous quality and security reviews. Second, our security service is ready to support organizations of any size. And third, our operations and security processes are in place and ready for scale. General availability means that we are ready to put the full power of Microsoft behind securing every Azure Sphere device.

The opportunity to release a brand-new product that addresses crucial and unmet needs is rare. Azure Sphere is truly unique: it brings a new technology category to the Microsoft family, to the IoT market, and to the security landscape.

IoT innovation requires security

The International Data Corporation (IDC) estimates that by 2025 there will be 41.6 billion connected IoT devices. Put in perspective, that’s more than five times the number of people on earth today. When we consider why IoT is growing so rapidly, the astounding pace is being driven by industries and companies that are investing in IoT to pursue long-term, real-world impact. They’re looking to harness the power of the intelligent edge to make daily life effortless, to transform businesses, to create safer working and living conditions, and to address some of the world’s most pressing challenges.

Innovation, no matter how valuable, is not durable without a foundation of security. If the devices and experiences that promise to reshape the world around us are not built on a foundation of security, they cannot last. But when innovation is built on a secure foundation, you can be confident in its ability to endure and deliver value long into the future. Durable innovation requires future-proofing IoT investments by planning and investing in security upfront.

IoT security is complex and the threat landscape is dynamic. You have to operate under the assumption that attacks will happen; it's not a matter of if but when. With this in mind, we built Azure Sphere with multiple layers of protection and with continually improving security so that it's possible to limit the reach of an attack and renew and enhance the security of a device over time. Azure Sphere delivers foundational security for durable innovation.

Security is complex, but it doesn’t have to be complicated

Many of the customers we talk to struggle to define the specific IoT security measures necessary for success. We’ve leveraged our deep Microsoft experience in security to develop a very clear view of what IoT security requires. We found that there are seven properties that every IoT device must have in order to be secured. These properties clearly outline the requirements for an IoT device with multiple layers of protection and continually improving security.

Any organization can use the seven properties as a roadmap for device security, but Azure Sphere is designed to give our customers a fast track to secured IoT deployments by having all seven properties built-in. It makes achieving layered, renewable security for connected devices an easy, affordable, no-compromise decision.

Azure Sphere is a fully realized security system that protects devices over time. It includes four components, three of which are powered by technology: the Azure Sphere-certified chips that go into every device, the Azure Sphere operating system (OS) that runs on the chips, and the cloud-based Azure Sphere Security Service.

Every Azure Sphere chip includes built-in Microsoft security technology to provide a dependable hardware root of trust and advanced security measures to guard against attacks. The Azure Sphere OS is designed to limit the potential reach of an attack and to make it possible to restore the health of the device if it’s ever compromised. We continually update our OS, proactively adding new and emerging protections. The Azure Sphere Security Service reaches out and guards every Azure Sphere device. It brokers trust for device-to-cloud and device-to-device communication, monitors the Azure Sphere ecosystem to detect emerging threats, and provides a pipe for delivering application and OS updates to each device. Altogether, these layers of security prevent any single point of failure that could leave a device vulnerable.

The fourth component may be the most important: our Azure Sphere team. These are some of the brightest minds in security and they’re dedicated to the security of every single Azure Sphere device. Our team is at work identifying emerging security threats, creating new countermeasures, and deploying them to every Azure Sphere device. We are fighting the security battle, so our customers don’t have to.

Security obsessed, customer-driven

The challenges of IoT device security that keep us up at night lead to the features and capabilities that give our customers peace of mind. It’s ambitious and demanding work. To realize the defense-in-depth approach we had to integrate multiple distinct technologies and their related engineering disciplines. Our team can’t think about any component in isolation. Instead, we work from a unified view of interoperability and dependencies that brings together our silicon, operating system, SDK, security services, and developer experience. Having a clear mission gives us a shared focus to strategize and collaborate across teams and technologies. By eliminating boundaries among technologies or engineering teams, we’ve been able to create a product far greater than the sum of its parts.

We also made a point to collaborate with our early customers. We’ve used public preview to learn and improve how we deliver security in a way that supports customer and partner needs. Working closely with a wide range of customers has helped shape our investments in hardware, features, capabilities, and services. To support customers across the breadth of their IoT journeys, we’ve built strong partnerships with leading silicon and hardware manufacturers. This gives customers more choice, more implementation options, and new offerings that can speed time to market. Right now, customers are using Azure Sphere to securely connect everything from espresso machines to datacenters. Between those examples, there’s a whole range of use cases for home and commercial appliances, industrial manufacturing equipment, smart energy solutions, and so much more.

Our customers across a wide array of industries are putting their trust in Azure Sphere as they connect and secure equipment, drive improvements, reduce costs, and mitigate the real risks that cyberattacks present.

The Azure Sphere commitment

"Culture eats strategy for breakfast." Only when we ground everything we do in our culture can we support what's necessary to execute a brilliant strategy. What we've set out to achieve with Azure Sphere is ambitious, and Microsoft is deeply invested in a culture that can support the most ambitious ideas. We apply a growth mindset to everything we do and always strive to learn more about our customers. We actively seek diversity and practice inclusion as we work together toward the ultimate pursuit of making a difference in the world. Guided by our belief that a strong culture is an essential foundation for bringing our vision to life, we've focused on culture from the beginning.

Bringing together the right technology and tactics as a single, end-to-end solution at scale is an amazing amount of work that requires true teamwork. We've built a team with a broad variety of backgrounds, experience, and expertise across multiple disciplines to work together on Azure Sphere. To support collaboration and creativity, we have nurtured the Microsoft cultural values by practicing fearlessness, trustworthiness, and kindness. Without a strong and positive culture, the work we do would be much harder and far less fun. Our culture gives us the confidence to tackle seemingly impossible challenges and the freedom to take bold steps forward.

Azure Sphere general availability is a culmination of the focus, commitment, and investment we make as a team to realize our shared vision. I’m incredibly proud of the Azure Sphere team and what we’ve built together. And I’m grateful to share this accomplishment with all of the teammates, partners, and customers who have been a part of our journey to general availability. We’re ready to be our customers’ trusted partner in device security so that they can focus on unleashing innovation in their products and in their businesses.

If you are interested in learning more about how Azure Sphere can help you securely fast track your next IoT innovation:

Visit the Azure Sphere website to learn more. 
See how customers like Starbucks are using Azure Sphere to drive efficiency and consistency in their retail operations.
Get started.

Quelle: Azure