Corsair: External touch display enables quick adjustments
Corsair's small add-on display, the iCue Nexus, can be used to control peripherals and activate macros, among other things. (Corsair, Computer)
Source: Golem
In the German state, up to 500,000 people are to be able to use a Matrix chat system in the future. This is part of its open-source strategy. (Matrix, Applications)
Source: Golem
Podcasting isn’t just for professional broadcasters or celebrities. If you have a passion for a topic — no matter how niche — and want to explore your options beyond blogging and tweeting, consider launching a podcast! All you need to get started is a decent microphone and headset, an internet connection — and our next free webinar to learn the basics.
Date: Thursday, July 23, 2020
Cost: FREE
Time: 8:00 am PDT | 9:00 am MDT | 10:00 am CDT | 11:00 am EDT | 15:00 UTC
Registration link: https://zoom.us/webinar/register/5115944218471/WN_DEIBungPRlSs4hIKhN6ezA
Who's invited: Bloggers, business owners, and anyone else interested in starting a podcast.
Your hosts, expert podcasters and Happiness Engineers Richard and Damianne, have years of experience in podcasting, radio journalism, and of course, helping our users get the most out of their WordPress.com sites. They’ll walk you through the basics of hosting your podcast on WordPress.com and adding it to the most popular podcast directories. They’ll also share some tips and best practices on crafting a successful podcast.
Please note that to host audio files on a WordPress.com site, your site must be on the Premium, Business, or eCommerce plan.
The one-hour webinar will include a 45-minute presentation and 15 minutes of live Q&A. Dustin, one of our veteran Happiness Engineers and another longtime podcaster, will also be on hand to answer questions over Zoom chat during the webinar.
Seats are limited, so register now to save your seat. We look forward to seeing you then!
Source: RedHat Stack
Our partners play an important role in all that we do, and we are always looking for ways to showcase and help them differentiate themselves in the market. Last year, we launched the Google Cloud Partner Advantage program to help them do exactly that. Since then, we've added new certifications, expanded our Expertise areas to cover new priority solutions, and added new specializations. According to our most recent Forrester study, certification, expertise, and specialization are three of the top areas partners talk about when it comes to growing their business, and it's why all three play a key role in our program.

As we look ahead to the second half of 2020, we wanted to share updates to the Partner Advantage program across several core areas.

Certifications

Recognition as a certified professional on Google Cloud or G Suite is the first major step an individual can take to demonstrate their level of skill and knowledge on Google Cloud. We offer several learning and training opportunities of which you can take advantage:

Use the certification interest form to get started on certifications.

Take proctored exams online or at a local testing center. Click here to register.

Consider learning opportunities such as Partner Certification Kickstart (PCK) programs, which allow you to take accelerated, self-paced courses and hands-on training in six weeks or less. PCK Sprint, in particular, enables technical partners on GCP products through a mix of virtual classes, SME sessions, and CloudHero games. Or try Google Courses powered by Qwiklabs, which provides a central location for on-demand training and hands-on labs.

Join upcoming webinars, such as the "Why Certify Now?" webcast on Wednesday, August 5. We're also launching the Professional Machine Learning Engineer certification in October, so you can register to learn more about it.

Customer success

Customer success is the driving force behind a partner's entire differentiation journey. Highlighting your customer wins and showcasing what you do in the market along the way builds credibility. Customer success stories also help you meet eligibility requirements for Expertise (one public story required) and Specialization (at least three required). To help you easily share your customer wins, we've introduced a new customer success story tool to accelerate and simplify highlighting your phenomenal stories; find it on the Partner Advantage portal. While you can find a wealth of partner customer showcases on the Google Cloud Partner Directory, here are a few of the many great examples of customer success from our partners:

Mondelēz and MightyHive: Personalizing CPG sales and marketing on a global scale

DxTerity and Pluto7: Using data-driven precision medicine to combat autoimmune disease

Mitsubishi Motors and Aeris Communications: Fueling customer engagement with the connected car

Expertise

Partner Expertise demonstrates your early customer success at a more granular level across products, priority solutions, and/or industry segments, based on a defined set of requirements, including customer evidence.
All partners are welcome to apply for Partner Expertise, no matter the business model.

New Expertise areas for which partners can apply:

Google Meet

Mainframe Modernization

Microsoft on Google Cloud

Migrate Oracle Workloads to Google Cloud

Our partners continue to showcase their commitment via Expertise in solutions and industries:

NORTHAM: Maven Wave, Pluto7 Consulting Inc, Softserve Inc., SpringML

EMEA: Ancoris, Cloudreach, Fourcast, PA Consulting Group

APAC: Cloud Comrade, CloudMile Limited, Pluto Seven Business Solutions Private Limited, Searce

LATAM: Colaborativa, Dedalus Prime, Qi Network, Safetec Informatica

JAPAN: Cloud Ace, Inc., Enisias

Specialization

Partner Specialization remains the highest level of achievement within the partner journey. It represents the strongest signal of proficiency and experience with Google Cloud, while helping you maintain a consistent practice that delights the customer. Congratulations to all of our partners who have achieved this milestone or renewed in 1H 2020:

Application Development: Infogain Corporation | PA CONSULTING GROUP | Qvik Ltd | SpringML

Cloud Migration: CLOUD COMRADE

Data Analytics: Atos

Education: CLOUDPOINT OY | Deploy Learning | Five-Star Technology Solutions | Foreducation EdTech | Gestion del conocimiento digital ieducando | NUVEM MESTRA | OPENNETWORKS | STREET SMART Inc.

Infrastructure: Appsbroker Limited | Cloudbakers | CLOUDPILOTS | Cognizant | Opticca Consulting

Machine Learning: iKala | NT Concepts

Marketing Analytics: Aliz Technologies | SingleView

Training: Jellyfish Training | LearnQuest

Work Transformation: Cloudypedia | eSource Capital | Huware Srl | Intelligence Partner | NGC (New Generation Cloud) | Softline | TS Cloud

Work Transformation Enterprise: Davinci Technologies | NextNovate | Nubalia | S&E Cloud Experts | Safetec Informática | Softline Vodafone | Wipro Limited

Announcing two new Specialization areas

We are also pleased today to announce two new Specialization areas: SAP on Google Cloud and Data Management. Congratulations to our launch partners who are blazing the trail in these new areas.

Data Management: Cognizant | Deloitte | DoIt | Pythian

SAP on Google Cloud: Accenture | Deloitte | HCL | ManageCore | Tech Mahindra

Take the journey with us! For our partners who want to accelerate their Partner Advantage Differentiation journey today, please fill out this form and we will contact you directly. Looking for a partner in your region who has achieved an Expertise and/or Specialization? Search our global Partner Directory. Not yet a Google Cloud partner? Visit Partner Advantage and learn how to become one today!
Source: Google Cloud Platform
Like many organizations, you employ a variety of risk management and risk mitigation strategies to keep your systems running, including your Google Kubernetes Engine (GKE) environment. These strategies ensure business continuity during both predictable and unpredictable outages, and they are especially important now, when you are working to limit the impact of the pandemic on your business.

In this first of two blog posts, we'll provide recommendations and best practices for setting up your GKE clusters for increased availability on so-called Day 0. Then, stay tuned for a second post, which describes high availability best practices for Day 2, once your clusters are up and running. When thinking about the high availability of GKE clusters, Day 0 is often overlooked because many people think of disruptions and maintenance as part of ongoing Day 2 operations. In fact, you need to carefully plan the topology and configuration of your GKE cluster before you deploy your workloads.

Choosing the right topology, scale, and health checks for your workloads

Before you create your GKE environment and deploy your workloads, you need to decide on some important design points.

Pick the right topology for your cluster

GKE offers two types of clusters: regional and zonal. In a zonal cluster topology, a cluster's control plane and nodes all run in a single compute zone that you specify when you create the cluster. In a regional cluster, the control plane and nodes are replicated across multiple zones within a single region.

Regional clusters run a quorum of three Kubernetes control planes, offering higher availability for the cluster's control plane API than a zonal cluster can provide. And although existing workloads running on the nodes are not impacted if a control plane is unavailable, some applications are highly dependent on the availability of the cluster API; for those workloads, you're better off using a regional cluster topology. Of course, selecting a regional cluster alone isn't enough to protect a GKE cluster either: scaling, scheduling, and replacing pods are the responsibilities of the control plane, and if the control plane is unavailable, your cluster's reliability is impacted until the control plane becomes available again.

You should also remember that regional clusters have redundant control planes as well as nodes. In a regional topology, nodes are redundant across different zones, which can cause costly cross-zone network traffic.

Finally, although regional cluster autoscaling makes a best effort to spread resources among the three zones, it does not rebalance them automatically unless a scale-up or scale-down action occurs.

To summarize: for higher availability of the Kubernetes API, and to minimize disruption to the cluster during maintenance on the control plane, we recommend that you set up a regional cluster with nodes deployed in three different availability zones, and that you pay attention to autoscaling.

Scale horizontally and vertically

Capacity planning is important, but you can't predict everything.
To ensure that your workloads operate properly at times of peak load, and to control costs at times of normal or low load, we recommend exploring the GKE autoscaling capabilities that best fit your needs:

Enable Cluster Autoscaler to automatically resize your node pools based on demand.

Use Horizontal Pod Autoscaling to automatically increase or decrease the number of pods based on utilization metrics.

Use Vertical Pod Autoscaling (VPA) in conjunction with Node Auto Provisioning (NAP, also known as node pool auto-provisioning) to allow GKE to efficiently scale your cluster both horizontally (pods) and vertically (nodes). VPA automatically sets values for your containers' CPU and memory requests and limits. NAP automatically manages node pools and removes the default constraint of starting new nodes only from the set of user-created node pools.

The above recommendations optimize for cost. NAP, for instance, reduces costs by taking down nodes during underutilized periods. But perhaps you care less about cost and more about latency and availability; in this case, you may want to create a large cluster from the get-go and use GCP reservations to guarantee your desired capacity. However, this is likely a more costly approach.

Review your default monitoring settings

Kubernetes is great at observing the behavior of your workloads and ensuring that load is evenly distributed out of the box. You can further optimize workload availability by exposing specific signals from your workload to Kubernetes. These readiness and liveness signals give Kubernetes additional information about your workload, helping it determine whether the workload is working properly and is ready to receive traffic. Let's examine the differences between readiness and liveness probes.

Every application behaves differently: some may take longer to initialize than others; some are batch processes that run for longer periods and may mistakenly seem unavailable. Readiness and liveness probes are designed exactly for this purpose: to let Kubernetes know a workload's acceptable behavior. For example, an application might take a long time to start, and during that time you don't want Kubernetes to send customer traffic to it, since it's not yet ready to serve. With a readiness probe, you can give Kubernetes an accurate signal for when an application has completed its initialization and is ready to serve your end users.

Make sure you set up readiness probes so Kubernetes knows when your workload is really ready to accept traffic. Likewise, setting up a liveness probe tells Kubernetes whether a workload is actually unresponsive or is just busy performing CPU-intensive work.

Finally, readiness and liveness probes are only as good as they are defined and coded, so make sure you test and validate any probes that you create.
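As a concrete illustration (our own minimal sketch, not part of the original post), here is what a Flask service with separate liveness and readiness endpoints might look like; the endpoint paths /healthz and /readyz, the port, and the warm_up step are illustrative assumptions, and the corresponding Kubernetes HTTP probes would simply point at these paths.

import time
from threading import Thread

from flask import Flask

app = Flask(__name__)
ready = False

def warm_up():
    # placeholder for slow initialization work (loading caches, running migrations, ...)
    global ready
    time.sleep(30)
    ready = True

@app.route("/healthz")
def liveness():
    # liveness: the process is up and able to answer requests
    return "ok", 200

@app.route("/readyz")
def readiness():
    # readiness: report ready only after initialization has finished,
    # so Kubernetes does not route traffic to this Pod prematurely
    return ("ready", 200) if ready else ("warming up", 503)

if __name__ == "__main__":
    Thread(target=warm_up, daemon=True).start()
    app.run(host="0.0.0.0", port=8080)

In this sketch, a readiness probe pointed at /readyz keeps the Pod out of the Service endpoints until initialization completes, while a liveness probe on /healthz only triggers a restart if the process stops responding altogether.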
Correctly set up your deployment

Each application has a different set of characteristics. Some are batch workloads, some are based on stateless microservices, and some on stateful databases. To ensure Kubernetes is aware of your application's constraints, you can use Kubernetes Deployments to manage your workloads. A Deployment describes the desired state, and works with the Kubernetes scheduler to change the actual state to meet the desired state.

Is your application stateful or not?

If your application needs to save its state between sessions, e.g., a database, then consider using a StatefulSet, a Kubernetes controller that manages and maintains one or more Pods in a way that properly handles the unique characteristics of stateful applications. It is similar to other Kubernetes controllers that manage Pods, such as ReplicaSets and Deployments. But unlike Deployments, a StatefulSet does not assume that Pods are interchangeable.

To maintain state, a StatefulSet also needs Persistent Volumes so that the hosted application can save and restore data across restarts. Kubernetes provides Storage Classes, Persistent Volumes, and Persistent Volume Claims as an abstraction layer above Cloud Storage.

Understanding Pod affinity

Do you want all replicas to be scheduled on the same node? What would happen if that node were to fail? Would it be OK to lose all replicas at once? You can control the placement of your Pods and any of their replicas using Kubernetes Pod affinity and anti-affinity rules.

To avoid a single point of failure, use Pod anti-affinity to instruct Kubernetes NOT to co-locate Pods on the same node. For a stateful application, this can be a crucial configuration, especially if it requires a minimum number of replicas (i.e., a quorum) to run properly.

For example, Apache ZooKeeper needs a quorum of servers to successfully commit mutations to data. For a three-server ensemble, two servers must be healthy for writes to succeed. Therefore, a resilient deployment must ensure that servers are deployed across failure domains. Thus, to avoid an outage due to the loss of a node, we recommend that you preclude co-locating multiple instances of an application on the same machine. You can do this by using Pod anti-affinity.

On the flip side, sometimes you want a group of Pods to be located on the same node, benefitting from their proximity and therefore from lower latency and better performance when communicating with one another. You can achieve this using Pod affinity.

For example, Redis, another stateful application, may provide an in-memory cache for your web application. In this deployment, you would want the web server to be co-located with the cache as much as possible to avoid latency and boost performance.

Anticipate disruptions

Once you've configured your GKE cluster and the applications running on it, it's time to think about how you will respond in the event of increased load or a disruption.

Going all digital requires better capacity planning

Running your Kubernetes clusters on GKE frees you from thinking about physical infrastructure and how to scale it. Nonetheless, performing capacity planning is highly recommended, especially if you expect increased load.

Consider using reserved instances to guarantee any anticipated burst in resource demand. GKE supports specific (machine type and specification) and non-specific reservations. Once the reservation is set, nodes automatically consume the reservations in the background from a pool of resources reserved uniquely for you.

Make sure you have a support plan

Google Cloud Support is a team of engineers around the globe working 24×7 to help you with any issues you may encounter. Now, before you're up and running and in production, is a great time to make sure that you've secured the right Cloud Support plan to help you in the event of a problem.
Review your support plan to make sure you have the right package for your business.

Review your support user configurations to make sure your team members can open support cases.

Make sure you have GKE Monitoring and Logging enabled on your cluster; your technical support engineer will need these logs and metrics to troubleshoot your system.

If you do not have GKE Monitoring and Logging enabled, consider enabling the new beta system-only logs feature to collect only the logs that are critical for troubleshooting.

Bringing it all together

Containerized applications are portable and easy to deploy and scale. GKE, with its wide range of cluster management capabilities, makes it even easier to run your workloads hassle-free. You know your application best, but by following these recommendations you can drastically improve the availability and resilience of your clusters. Have more ideas or recommendations? Let us know! And stay tuned for part two of this series, where we talk about how to respond to issues in production clusters.
Source: Google Cloud Platform
Some of the largest enterprises in the world are currently running their SAP solutions on Microsoft Azure. Since these SAP applications are mission critical, a delay or disruption of service for even a minute can have a significant financial and reputational impact on an organization.
To help our customers effectively monitor their SAP on Azure deployments, today we are announcing the preview of Azure Monitor for SAP Solutions. With this Azure-native monitoring solution, customers running their SAP landscapes on Azure now have access to simplified monitoring, efficient troubleshooting, and flexible customizations. Watch Introducing Azure Monitor for SAP Solutions on Azure Friday.
Before we announced the private preview of Azure Monitor for SAP Solutions in September 2019, we heard from customers that they relied on complex, unmanageable, and disparate tools and dashboards. Customers wanted to collect the required SAP telemetry in one location for an end-to-end view, so they could easily recognize patterns and correlate data between the various components within their SAP landscapes.
“Azure Monitor for SAP Solutions enables infrastructure teams to quickly identify the state of the enterprise critical SAP HANA DB without being an SAP HANA Expert. We had several occasions where functional teams pointed at infrastructure for system issue and with the use of the monitor we could quickly confirm or point at the real root cause for the issue. The tool speeds up the time it takes to identify who needs to be involved in solving whatever problem the customer faces…” —Thomas Kremer, Sr. Manager II Cloud and Service Delivery, Walgreens
Key features of Azure Monitor for SAP Solutions
Key features of Azure Monitor for SAP Solutions include:
Multi-instance/multi-provider: Customers can get telemetry data from multiple systems of the same source system type or from multiple systems of different source system types. For example, customers can deploy just one monitoring resource to monitor multiple SAP HANA instances and multiple Pacemaker clusters.
SAP HANA DB telemetry: Customers can collect and view HANA Backup and HSR telemetry, in addition to the infrastructure utilization data from various SAP HANA instances in one location with the Azure portal.
Microsoft SQL Server telemetry: Customers can get telemetry from Microsoft SQL Server, can visualize and correlate telemetry data—such as CPU and memory with top SQL statements—and can also get information about ‘Always On.’
High-availability (HA) cluster telemetry: Customers can get telemetry data from Pacemaker clusters and identify which clusters are healthy versus unhealthy and correlate this with the health of underlying node and resource health.
Benefits of Azure Monitor for SAP Solutions
Benefits of Azure Monitor for SAP Solutions include the ability to:
Easily collect and consolidate telemetry data from Azure infrastructure and databases in a central location, independent of the underlying infrastructure (Azure Virtual Machines, Azure Large Instances, or both). Customers can use this data to visually correlate telemetry between different components for faster troubleshooting.
Create Azure dashboards to see telemetry from both the SAP and non-SAP components running on Azure. This can be done with 'pinning' to combine telemetry from Azure Monitor for SAP Solutions (used to monitor SAP landscape components) with telemetry from Application Insights or Log Analytics (used to monitor non-SAP components).
Edit the visualizations to create customized charts and graphs. Customers can run custom Kusto queries on the raw data collected by Azure Monitor for SAP Solutions to identify patterns, configure alerts to get proactive notifications, and configure a custom data retention period to retain telemetry data for trend analysis.
Integrate with Azure Lighthouse. With this, partners can view telemetry across different tenants as per appropriate access policies. This enables partners to help their customers with monitoring and troubleshooting their SAP on Azure landscapes.
In addition, Azure Monitor for SAP Solutions is open source, so customers can see the inner workings of the product and offer feedback by visiting this GitHub repository.
Pricing and availability
Azure Monitor for SAP Solutions is available in West Europe, East US, East US 2, and West US 2.
There is no licensing fee for the product. Customers pay only for the underlying infrastructure, which is deployed as part of the product.
Learn more
To learn more about the product and pricing, check out the Azure Monitor for SAP Solutions documentation. To get started, watch this QuickStart video and head to Azure Marketplace to create your first resource.
Source: Azure
More than ever before, companies are relying on their big data and artificial intelligence (AI) systems to find new ways to reduce costs and accelerate decision-making. However, customers using on-premises systems struggle to realize these benefits due to administrative complexity, inability to scale their fixed infrastructure cost-effectively, and lack of a shared collaborative environment for data engineers, data scientists and developers.
To make it easier for customers to modernize their on-premises Spark and big data workloads to the cloud, we’re announcing a new migration offer with Azure Databricks. The offer includes:
Up to a 52 percent discount over the pay-as-you-go pricing when using the Azure Databricks Unit pre-purchase plans. This means that customers can free themselves from the complexities and constraints of their on-premises solutions and realize the benefits of the fully managed Azure Databricks service at a significant discount.
Free migration assessment for qualified customers.
Azure Databricks is a fast, easy, and collaborative Apache Spark-based service that simplifies building big data and AI solutions. Since its debut two years ago, Azure Databricks has experienced significant adoption from customers, such as Shell, Cerner, Advocate Aurora Health, and Bosch, which are using it to run mission-critical big data and AI workloads.
We’ve also seen several customers accelerating their migration of on-premises systems to Azure Databricks for the following reasons:
Reduced costs and enhanced security: Moving to the fully managed Azure Databricks environment enables customers to reduce administrative costs while also helping increase overall security and compliance of their solutions. Autoscaling and auto-termination of jobs help reduce operational costs. In addition, native integration with Azure Data Lake Storage Gen 2, which supports the Hadoop Distributed File System (HDFS) format, helps reduce migration costs.
Increased agility: On-premises systems are limited to a fixed amount of compute and storage. With Azure Databricks, customers can quickly scale up or down compute resources as needed to accelerate jobs and increase productivity.
Enhanced collaboration: Azure Databricks empowers data engineers, data scientists and developers to collaborate in an interactive workspace using the languages and frameworks of their choice. Integration with Azure Machine Learning, Synapse Analytics, and Cosmos DB provides users easy access to new technologies, thereby accelerating overall time to value.
This new offer is designed to help customers who are still using on-premises big data systems but are looking to move to the cloud and take advantage of Azure Databricks capabilities.
Offer details
The Azure Databricks Unit pre-purchase plan already enables customers to save up to 37 percent over pay-as-you-go pricing when they pre-pay for one- or three-year commitments. With the migration offer, we are adding an extra 25 percent discount for three-year pre-purchase plans larger than 150,000 DBCUs and a 15 percent discount for one-year pre-purchase plans larger than 100,000 DBCUs. The offer is valid until January 31, 2021. More information on the Azure Databricks Unit pre-purchase plan can be found on the pricing page.
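As a rough check on how these figures combine (our own illustration; it assumes the extra migration discount compounds with the existing pre-purchase discount rather than being added to it), the arithmetic lines up with the headline number:

# Illustrative only: assumes the migration discount applies on top of the
# existing pre-purchase discount (multiplicative), not summed with it.
prepurchase = 0.37      # three-year pre-purchase savings vs. pay-as-you-go
migration_extra = 0.25  # extra discount for three-year plans above 150,000 DBCUs
combined = 1 - (1 - prepurchase) * (1 - migration_extra)
print(f"{combined:.1%}")  # 52.7%, consistent with the "up to 52 percent" figure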
All Azure Databricks SKUs—Premium and Standard SKUs for Data Engineering Light, Data Engineering, and Data Analytics—are eligible for this migration offer. The Azure Databricks pre-purchase units can be used at any time and can be consumed across all Databricks workload types and tiers.
Qualified customers will also receive a free migration evaluation. This includes an assessment of current tools, systems, and processes, and a two-day workshop to identify value drivers, prioritize use cases, and define the future state architecture.
Get started today
Learn more about migration to Azure Databricks and the offer by watching this webinar. For more information on discount tiers, please visit the Azure Databricks pricing page and contact your sales team to take advantage of this offer.
Source: Azure
Azure Blob storage is a massively scalable object storage solution that serves from small amounts to hundreds of petabytes of data per customer across a diverse set of data types, including logging, documents, media, genomics, seismic processing, and more. Read the Introduction to Azure Blob storage to learn more about how it can be used in a wide variety of scenarios.
Increasing file size support for Blob storage
Customers with on-premises workloads today use files whose size is bounded only by the filesystem, with maximums reaching into the exabytes. Most usage does not approach the filesystem limit, but specific workloads that rely on large files do scale up to tens of terabytes. We recently announced the preview of our new maximum blob size of 200 TB (specifically 209.7 TB), a 40x increase over our current 5 TB limit. An object size of over 200 TB is much larger than other vendors' 5 TB maximum object size. This increase allows workloads that currently require multi-TB files to be moved to Azure without additional work to break up these large objects.
This increase in the object size limit will unblock workloads such as seismic analysis, backup, and media and entertainment (video rendering and processing), along with other scenarios where multi-TB objects are used. As an example, a media company trying to move from a private datacenter to Azure can now do so, since files up to 200 TB in size are supported. Increasing our object size also removes the need to carefully inventory existing file sizes as part of a plan to migrate a workload to Azure. Given that many on-premises solutions store files ranging from tens to hundreds of terabytes, closing this gap simplifies migration to Azure.
With large file size support, being able to break up an object into blocks to ease upload and download is critical. Every Azure block blob is made up of up to 50,000 blocks, which allows a multi-terabyte object to be broken down into manageable pieces for writing. The previous maximum of 5 TB (4.75 TiB) was based on a maximum block size of 100 MiB x 50,000 blocks. The preview increases the block size to 4,000 MiB and keeps 50,000 blocks per object, for a maximum object size of 4,000 MiB x 50,000 = 190.7 TiB. Conceptually, in your application (or within the utility or SDK), the large file is broken into blocks, each block is written to Azure Storage, and, after all blocks have successfully been uploaded, the entire file (object) is committed.
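To make the stage-and-commit flow concrete, here is a minimal sketch (our own example, not part of the announcement) using the azure-storage-blob v12 Python SDK; the connection string, container name, blob name, file path, and block size are placeholders:

import base64
import os
import uuid

from azure.storage.blob import BlobBlock, BlobClient

# placeholders: supply your own connection string, container, blob, and file
blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="mycontainer",
    blob_name="large-file.bin",
)

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MiB per block; the preview allows up to 4,000 MiB
block_list = []

with open("large-file.bin", "rb") as data:
    while True:
        chunk = data.read(BLOCK_SIZE)
        if not chunk:
            break
        # each block gets a unique, base64-encoded id and is staged individually
        block_id = base64.b64encode(uuid.uuid4().hex.encode()).decode()
        blob.stage_block(block_id=block_id, data=chunk)
        block_list.append(BlobBlock(block_id=block_id))

# committing the block list turns the staged blocks into the final blob (object)
blob.commit_block_list(block_list)

In practice, tools such as AzCopy and the SDK's upload helpers perform this chunking for you; the sketch only illustrates the break-into-blocks-then-commit model described above.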
As an example of the overall relationship within a storage account, the following diagram shows a storage account, Contososa, which contains one container with two blobs. The first is a large blob made up of 50,000 blocks. The second is a small blob made of a single block.
The 200 TB preview block blob size is supported in all regions and across tiers, including Premium, Hot, Cool, and Archive. There is no additional charge for this preview capability. We do not support upload of very large objects using the Azure portal. The various methods to transfer data into Azure will be updated to make use of this new blob size. To get started today with your language of choice:
.Net.
Java.
JavaScript.
Python.
REST.
Next steps
We look forward to hearing your feedback via email or post in the Azure Storage technet forum.
Learn more about Azure Blob storage.
Source: Azure
To help customers save on data warehouse migration costs and accelerate time-to-insight on critical SAP data, we are announcing two new analytics offers from Azure Synapse Analytics.
Business disruptions, tactical pivots, and remote work have all emphasized the critical role analytics plays for every organization. Uncharted situations demand charted performance insights, so businesses can quickly determine what is and is not working. In recent months, the urgency for these business-guiding insights has only been heightened—leading to a need for real-time analytics solutions. And equally important is the need to discover and share these insights in the most cost-effective manner.
Azure Synapse has you covered. It is the undisputed leader in price-performance: compared to other cloud providers, it is up to 14 times faster and costs 94 percent less. In fact, businesses using Azure Synapse today report an average ROI of 271 percent.
To help customers get started today, we are announcing the following new offers aimed at empowering businesses to act now wherever they are on their cloud analytics journey.
Save up to 76 percent when migrating to Azure Synapse
For customers that use an on-premises data warehouse, migrating to the cloud offers both significant cost savings and accelerated access to innovative features. Today, customers experience cost savings with our existing reserved capacity discount for cloud data warehousing with Azure Synapse. To boost these cost savings further, today we are announcing a new limited time offer that provides additional savings on top of the existing reserved capacity discount—enabling qualifying customers who currently use an on-premises data warehouse to save up to 76 percent when migrating to Azure Synapse.
To learn more about the terms and conditions and the qualification criteria of this offer, contact your Microsoft account representative. The migration offer is available until January 31, 2021.
Gain breathtaking insights of your ERP data with new offering from Azure, Power BI, and Qlik Data Integration
For companies worldwide, SAP data is at the core of their business applications—housing critical information on sales, manufacturing, and financial processes. However, due to the inherent complexity of SAP systems, many organizations struggle to integrate SAP data into modern analytics projects. To enable businesses to gain real-time insights from their SAP data, we are announcing a new joint offer with Qlik (formerly Attunity) that brings Azure Synapse, Power BI, and Qlik Data Integration together for end-to-end supply chain intelligence, finance analytics, and more.
With this new offer, customers can now work with Azure, Power BI, and Qlik Data Integration to easily understand how to enable real-time insights on SAP data through a robust proof of value. This joint proof-of-value offer provides customers a free solution architecture workshop, software subscriptions, and hands-on technical expertise from dedicated personnel and resources from both Microsoft and Qlik.
To learn more about this joint offer and how to apply, register for the upcoming webinar.
Get started today
Register for the webinar, Gain Real-Time SAP Data Insights with Azure Synapse Analytics, airing July 30, 2020 at 10:00 AM PT.
Try the new Azure Synapse features and create an Azure Synapse workspace in minutes.
Learn more about the new joint offer, Unleash your SAP data with Microsoft and Qlik.
Migration offer details
The Azure Synapse Analytics reserved capacity plan for data warehousing (formerly SQL Data Warehouse) already enables customers to save up to 65 percent over pay-as-you-go pricing when they pre-pay for a three-year commitment. With this new migration offer, we are adding an extra 33 percent discount for three-year pre-commitments that spend over $60,000/year in year 2 and year 3 on SQL Data Warehouse Compute Optimized Gen2. Terms and conditions apply and can be discussed in full with your Microsoft account representative. More information on the Azure Synapse Analytics (formerly SQL Data Warehouse) reserved capacity plan can be found on the pricing page.
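As a rough illustration of how the two discounts combine (our own example; it assumes the extra discount compounds with the reserved capacity discount rather than being added to it):

# Illustrative only: assumes the 33 percent migration discount applies on top of
# the 65 percent reserved capacity discount (multiplicative), not summed with it.
reserved = 0.65
migration_extra = 0.33
combined = 1 - (1 - reserved) * (1 - migration_extra)
print(f"{combined:.1%}")  # 76.5%, in line with the quoted "up to 76 percent"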
Source: Azure
Developing Python projects in local environments can get pretty challenging if more than one project is being developed at the same time. Bootstrapping a project may take time, as we need to manage versions, set up dependencies, and configure the project. We used to install all project requirements directly in our local environment and then focus on writing the code. But having several projects in progress in the same environment quickly becomes a problem, as we may run into configuration or dependency conflicts. Moreover, when sharing a project with teammates, we would also need to coordinate our environments. For this, we have to define our project environment in a way that makes it easily shareable.
A good way to do this is to create isolated development environments for each project. This can be easily done by using containers and Docker Compose to manage them. We cover this in a series of blog posts, each one with a specific focus.
This first part covers how to containerize a Python service/tool and the best practices for it.
Requirements
To easily exercise what we discuss in this blog post series, we need to install a minimal set of tools required to manage containerized environments locally:
Windows or macOS: Install Docker Desktop
Linux: Install Docker and then Docker Compose
Containerize a Python service
We show how to do this with a simple Flask service such that we can run it standalone without needing to set up other components.
server.py

from flask import Flask

server = Flask(__name__)

@server.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    # bind to all interfaces so the published container port is reachable from the host
    server.run(host="0.0.0.0")
In order to run this program, we need to make sure we have all the required dependencies installed first. One way to manage dependencies is by using a package installer such as pip. For this we need to create a requirements.txt file and write the dependencies in it. An example of such a file for our simple server.py is the following:
requirements.txt

Flask==1.1.1
We have now the following structure:
app
├─── requirements.txt
└─── src
     └─── server.py
We create a dedicated directory for the source code to isolate it from other configuration files. We will see later why we do this.
To execute our Python program, all that is left to do is install a Python interpreter and run it.
We could run this program locally. But this goes against the purpose of containerizing our development, which is to keep a clean, standard development environment that allows us to easily switch between projects with different, possibly conflicting requirements.
Let's look next at how we can easily containerize this Python service.
Dockerfile
The way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it. The steps are sketched below.
To generate a Docker image we need to create a Dockerfile which contains instructions needed to build the image. The Dockerfile is then processed by the Docker builder which generates the Docker image. Then, with a simple docker run command, we create and run a container with the Python service.
Analysis of a Dockerfile
An example of a Dockerfile containing instructions for assembling a Docker image for our hello world Python service is the following:
Dockerfile

# set base image (host OS)
FROM python:3.8

# set the working directory in the container
WORKDIR /code

# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .

# command to run on container start
CMD [ "python", "./server.py" ]
For each instruction or command from the Dockerfile, the Docker builder generates an image layer and stacks it upon the previous ones. Therefore, the Docker image resulting from the process is simply a read-only stack of different layers.
We can also observe in the output of the build command the Dockerfile instructions being executed as steps.
$ docker build -t myimage .
Sending build context to Docker daemon 6.144kB
Step 1/6 : FROM python:3.8
3.8.3-alpine: Pulling from library/python
…
Status: Downloaded newer image for python:3.8.3-alpine
—> 8ecf5a48c789
Step 2/6 : WORKDIR /code
—> Running in 9313cd5d834d
Removing intermediate container 9313cd5d834d
—> c852f099c2f9
Step 3/6 : COPY requirements.txt .
—> 2c375052ccd6
Step 4/6 : RUN pip install -r requirements.txt
—> Running in 3ee13f767d05
…
Removing intermediate container 3ee13f767d05
—> 8dd7f46dddf0
Step 5/6 : COPY ./src .
—> 6ab2d97e4aa1
Step 6/6 : CMD python server.py
—> Running in fbbbb21349be
Removing intermediate container fbbbb21349be
—> 27084556702b
Successfully built 70a92e92f3b5
Successfully tagged myimage:latest
Then, we can check the image is in the local image store:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myimage latest 70a92e92f3b5 8 seconds ago 991MB
During development, we may need to rebuild the image for our Python service multiple times and we want this to take as little time as possible. We analyze next some best practices that may help us with this.
Development Best Practices for Dockerfiles
We focus now on best practices for speeding up the development cycle. For production-focused best practices, this blog post and the docs cover them in more detail.
Base Image
The first instruction from the Dockerfile specifies the base image on which we add new layers for our application. The choice of the base image is pretty important as the features it ships may impact the quality of the layers built on top of it.
When possible, we should always use official images which are in general frequently updated and may have less security concerns.
The choice of a base image can impact the size of the final one. If we prefer size over other considerations, we can use one of the very small, low-overhead base images. These images are usually based on the Alpine distribution and are tagged accordingly. However, for Python applications, the slim variant of the official Docker Python image works well for most cases (e.g., python:3.8-slim).
Instruction order matters for leveraging build cache
When building an image frequently, we definitely want to use the builder cache mechanism to speed up subsequent builds. As mentioned previously, the Dockerfile instructions are executed in the order specified. For each instruction, the builder checks first its cache for an image to reuse. When a change in a layer is detected, that layer and all the ones coming after are being rebuilt.
To use the caching mechanism efficiently, we need to place the instructions for layers that change frequently after the ones that change less often.
Let’s check our Dockerfile example to understand how the instruction order impacts caching. The interesting lines are the ones below.
…
# copy the dependencies file to the working directory
COPY requirements.txt .

# install dependencies
RUN pip install -r requirements.txt

# copy the content of the local src directory to the working directory
COPY src/ .
…
During development, our application's dependencies change less frequently than the Python code. Because of this, we choose to install the dependencies in a layer preceding the code layer. Therefore, we copy the dependencies file and install the dependencies first, and only then copy the source code. This is the main reason why we isolated the source code in a dedicated directory in our project structure.
Multi-stage builds
Although this may not be really useful during development time, we cover it quickly as it is interesting for shipping the containerized Python application once development is done.
What we seek in using multi-stage builds is to strip the final application image of all unnecessary files and software packages and to deliver only the files needed to run our Python code. A quick example of a multi-stage Dockerfile for our previous example is the following:
# first stage
FROM python:3.8 AS builder
COPY requirements.txt .
# install dependencies to the local user directory (e.g., /root/.local)
RUN pip install --user -r requirements.txt
# second unnamed stage
FROM python:3.8-slim
WORKDIR /code
# copy only the dependencies installation from the 1st stage image
COPY --from=builder /root/.local /root/.local
COPY ./src .
# update PATH environment variable
ENV PATH=/root/.local/bin:$PATH
CMD [ "python", "./server.py" ]
Notice that we have a two-stage build in which we name only the first stage, as builder. We name a stage by adding AS <NAME> to the FROM instruction, and we use this name in the COPY instruction where we want to copy only the necessary files into the final image.
The result of this is a slimmer final image for our application:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myimage latest 70a92e92f3b5 2 hours ago 991MB
multistage latest e598271edefa 6 minutes ago 197MB
…
In this example we relied on pip's --user option to install dependencies into the local user directory and then copied that directory into the final image. There are, however, other solutions available, such as using virtualenv, or building packages as wheels and copying and installing them into the final image.
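As one possible variant (a sketch of our own, not from the original post), a multi-stage build based on a virtual environment could look roughly like this; the /opt/venv location is an arbitrary choice:

# build stage: install dependencies into a dedicated virtual environment
FROM python:3.8 AS builder
RUN python -m venv /opt/venv
# make the venv's pip and python the default for subsequent commands
ENV PATH=/opt/venv/bin:$PATH
COPY requirements.txt .
RUN pip install -r requirements.txt

# final stage: copy only the virtual environment and the application code
FROM python:3.8-slim
COPY --from=builder /opt/venv /opt/venv
ENV PATH=/opt/venv/bin:$PATH
WORKDIR /code
COPY src/ .
CMD [ "python", "./server.py" ]

The trade-off is similar to the --user approach: the build tools and caches stay in the first stage, and only the populated virtual environment travels into the slim final image.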
Run the container
After writing the Dockerfile and building the image from it, we can run the container with our Python service.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myimage latest 70a92e92f3b5 2 hours ago 991MB
…
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker run -d -p 5000:5000 myimage
befb1477c1c7fc31e8e8bb8459fe05bcbdee2df417ae1d7c1d37f371b6fbf77f
We have now containerized our hello world server, and we can query the port mapped to localhost.
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS …
befb1477c1c7 myimage "/bin/sh -c 'python …'" 0.0.0.0:5000->5000/tcp …
$ curl http://localhost:5000
"Hello World!"
What’s next?
This post showed how to containerize a Python service for a better development experience. Containerization not only provides deterministic results easily reproducible on other platforms but also avoids dependency conflicts and enables us to keep a clean standard development environment. A containerized development environment is easy to manage and share with other developers as it can be easily deployed without any change to their standard environment.
In the next post of this series, we will show how to set up a container-based multi-service project where the Python component is connected to other external ones and how to manage the lifecycle of all these project components with Docker Compose.
Resources
Best practices for writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Speed up your development flow with these Dockerfile best practices: https://www.docker.com/blog/speed-up-your-development-flow-with-these-dockerfile-best-practices/
Docker Desktop: https://docs.docker.com/desktop/
The post Containerized Python Development – Part 1 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/