GeForce RTX 2060: Founders Edition costs 370 euros

Nvidia's in-house reference design for the GeForce RTX 2060 is comparatively expensive because it uses a costly circuit board. Cheaper versions of the Turing graphics card will not arrive for another few weeks; in addition, a high-profile EA game is bundled with the card. (Nvidia Turing, graphics hardware)
Source: Golem

Teradata to Azure SQL Data Warehouse migration guide

With the increasing benefits of cloud-based data warehouses, there has been a surge in the number of customers migrating from their traditional on-premises data warehouses to the cloud. Microsoft Azure SQL Data Warehouse (SQL DW) offers the best price-to-performance ratio compared to its cloud-based data warehouse competitors. Teradata is a relational database management system and one of the legacy on-premises systems that customers are looking to migrate from.

A Teradata to SQL DW migration involves multiple steps, including analyzing the existing workload, generating the relevant schema models, and performing the ETL operations. The whitepaper discussed here provides guidance for these migrations, with emphasis on the migration workflow, the architecture, technical design considerations, and best practices.

Migration Phases

The Teradata migration should pivot on the following six areas. The proof of concept is optional, though recommended. With the benefit of Azure, you can quickly provision Azure SQL Data Warehouse instances so that your development team can start the business object migration before the data is migrated, speeding up the migration process.

Phase one – Fact finding

Through a question-and-answer session, you define the inputs and outputs for the migration project.

Phase two – Defining success criteria for proof of concept (POC)

Taking the answers from phase one, you identify a workload for a POC that validates the required outputs, and then run the following phases as that POC.

Phase three – Data layer mapping options

This phase is about mapping the data you have in Teradata to the data layout you will create in Azure SQL Data Warehouse. Some of the common scenarios are data type mapping, date and time format, and more.
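As a hypothetical illustration of this phase, the type mapping can be captured in a small lookup table. The mapping below is a sketch, not an official reference, and the `convert_type` helper is an assumed name:

```python
# Illustrative Teradata-to-SQL-DW type mapping for phase three.
# The entries are examples only; verify each mapping (value ranges,
# precision, time zone behavior) against the official documentation.

TERADATA_TO_SQLDW = {
    "BYTEINT": "TINYINT",       # note: TINYINT is unsigned; check value ranges
    "SMALLINT": "SMALLINT",
    "INTEGER": "INT",
    "BIGINT": "BIGINT",
    "DECIMAL": "DECIMAL",
    "FLOAT": "FLOAT",
    "DATE": "DATE",
    "TIMESTAMP": "DATETIME2",
    "VARCHAR": "VARCHAR",
    "CHAR": "CHAR",
    "CLOB": "VARCHAR(MAX)",
}

def convert_type(teradata_type: str) -> str:
    """Return the SQL DW type for a Teradata type, preserving length/precision."""
    base, sep, args = teradata_type.upper().partition("(")
    target = TERADATA_TO_SQLDW.get(base.strip())
    if target is None:
        raise ValueError(f"No mapping defined for {teradata_type!r}")
    # Re-attach precision/length arguments unless the target already fixes them.
    if sep and "(" not in target:
        target += "(" + args
    return target
```

Running the helper over the output of a Teradata catalog query is one way to draft the target DDL before hand-tuning it.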

Phase four – Data modeling

Once you’ve defined the data mappings, phase four concentrates on tuning Azure SQL Data Warehouse so that it provides the best performance for the data you will be landing in it.

Phase five – Identify migration paths

What is the path of least resistance? What is the quickest path given your cloud maturity? Phase five helps describe the options open to you and then for you to decide on the path you wish to take.

Phase six – Execution of migration

Migrating your Teradata data to SQL Data Warehouse involves a series of steps. These steps are executed in three logical stages: preparation, metadata migration, and data migration.

Migration solution

To ingest data, you need a basic cloud data warehouse setup for moving data from your on-premises solution to Azure SQL Data Warehouse, and for enabling the development team to build Azure Analysis Services cubes once the majority of the data is loaded.

An Azure Data Factory pipeline is used to ingest and move data through the store, prep, and train stages.
Extract and load files via PolyBase into the staging schema on Azure SQL DW.
Transform data through the staging, source (ODS), EDW, and semantic schemas on Azure SQL DW.
Azure Analysis Services is used as the semantic layer to serve thousands of end users and to scale out Azure SQL DW concurrency.
Build operational reports and analytical dashboards on top of Azure Analysis Services to serve thousands of end users via Power BI.
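The PolyBase extract-and-load step above follows a common pattern: define an external table over the exported files, then use CREATE TABLE AS SELECT (CTAS) to land the data in the staging schema. The Python sketch below only generates the T-SQL text; all object names (`ext`, `staging`, `AzureBlobStore`, `PipeDelimitedText`) are hypothetical placeholders:

```python
# Sketch of a generator for the PolyBase external-table + CTAS pattern.
# The generated T-SQL assumes the external data source and file format
# objects already exist on the Azure SQL DW instance.

def polybase_load_sql(table: str, columns: str,
                      data_source: str, file_format: str,
                      location: str) -> str:
    """Build the CREATE EXTERNAL TABLE and CTAS statements for one table."""
    return f"""
CREATE EXTERNAL TABLE ext.{table} ({columns})
WITH (LOCATION = '{location}',
      DATA_SOURCE = {data_source},
      FILE_FORMAT = {file_format});

CREATE TABLE staging.{table}
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP)
AS SELECT * FROM ext.{table};
""".strip()

sql = polybase_load_sql(
    table="Orders",
    columns="OrderId INT, OrderDate DATE, Amount DECIMAL(18,2)",
    data_source="AzureBlobStore",
    file_format="PipeDelimitedText",
    location="/teradata-export/orders/",
)
```

ROUND_ROBIN distribution and a heap are typical choices for a staging load; the table would usually be redistributed and indexed when moving into the EDW schema.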

For more insight into how to approach a Teradata to Azure SQL Data Warehouse migration, check the following whitepaper, “Migrating from Teradata to Azure SQL Data Warehouse.”

This whitepaper is broken into sections which detail the migration phases, the preparation required for data migration including schema migration, migration of the business logic, the actual data migration approach, and testing strategy.

The scripts that are useful for the migration are available on GitHub under Teradata to Azure SQL DW Scripts.
Source: Azure

5 ways financial services organizations will move faster in the cloud in 2019

Few industries grapple with the volume of information the financial services industry manages on a daily basis. Whether financial services organizations are analyzing market shifts or protecting against fraud and money laundering, understanding their data and quickly finding the right insights are critical to their success.

Over the past year, we’ve spent a lot of time working with our financial services customers, like HSBC, Citi, UBS, Scotiabank, Two Sigma, and more. What we found is that whether they’re large or small, a startup or a global institution, there are universal themes shared by all. Here are five things financial services organizations plan to do in the cloud in 2019:

1. Tackle data silos to unlock the power of their data

Large, global institutions are using the cloud to overcome the incompatibilities, latencies, and blind spots associated with traditional data silos, growing volumes of market data, and alternative data sets. In 2019 we expect to see an increasing number using serverless, advanced data platforms, open APIs, and machine learning capabilities to make full use of their data for enterprise-wide decision-making with a minimal IT footprint. These decisions will be supported by market data, news and commentary, risk and regulatory data, company data, and other specialized and alternative data.

2. Use ML-based early warning risk systems to stop threats before they happen

Financial services organizations continue to move their systems to the cloud to take advantage of predictive technology that can help them stop fraud, money laundering, and cyber breaches before they ever cause harm. ML-based early warning systems are also helping monitor credit risk in real time. The cloud also helps these organizations streamline their lending processes, so they can approve credit faster and offer their products to clients in more cost-efficient and engaging ways.

3. Improve trading decisions with big data and machine learning

The ability to extract data and crunch numbers inside data warehouses or from multiple real-time feeds has changed how many organizations make trading decisions, requiring a new way of managing and analyzing data. As a result, many organizations are looking to the cloud for dynamic and scalable compute resources that allow traders and quants to model and test algorithms and perform complex calculations that draw on vast amounts of data.

4. Move to the cloud for increased security

For financial services organizations, security is always top of mind, and a growing number are moving to the cloud to take advantage of the automation and scale it offers. Our aim is to give these financial services customers a broad range of tools they can use to better protect their customers and their data, from VPC Service Controls, which help prevent data exfiltration as a result of breaches or insider threats, to data encryption both at rest and in transit by default.

5. Harness the benefits of blockchain in the cloud

Blockchain, a distributed ledger technology (DLT), will continue to present exciting opportunities for the financial services industry. Providing a single source of truth and security without relying on intermediaries, DLT holds the promise of reducing the friction and costs associated with financial transactions. Blockchain in the cloud makes it easier to deploy and manage scalable, open source blockchain networks that support the full lifecycle of financial assets. With use cases ranging from trade finance, to cross-border payments, to clearing and settlement, the cloud offers a more efficient means of harnessing the power of blockchain.

We look forward to hearing how more financial services organizations take advantage of the cloud in 2019. In the meantime, if you’re interested in learning more about financial services on Google Cloud, visit our solutions page or contact us for a discovery session.
Source: Google Cloud Platform

To infinity and beyond: The definitive guide to scaling 10k VMs on Azure

Every platform has limits: workstations and physical servers have resource boundaries, APIs may be rate-limited, and even the perceived endlessness of the virtual public cloud enforces limitations that protect the platform from overuse or misuse. You can learn more about these limitations by visiting our documentation, “Azure subscription and service limits, quotas, and constraints.” When working on scenarios that take platforms to their extreme, those limits become real, and thought should therefore be put into overcoming them.

The following post includes essential notes taken from my work with Mike Kiernan, Mayur Dhondekar, and Idan Shahar. It covers several iterations in which we try to reach 10K virtual machines running on Microsoft Azure, and explores the pros and cons of the different implementations.

Load tests at cloud scale

Load and stress tests before moving a new version to production are critical on the one hand, but pose a real challenge for IT on the other, because they require a considerable amount of resources to be available for only a short time every release cycle. Infrastructure purchased outright doesn’t justify its cost over the long idle periods, making this a perfect use case for a public cloud platform where payment is billed only per usage.

This post is in fact based on a customer we’ve been working with, and discusses the challenges we met. However, the solution provided is general enough to be used for other cases that require large clusters of VMs in Azure, such as:

Scaling requirements beyond a single VMSS, where the cluster is static in size once provisioned (HPC clusters).
DDoS simulation – please note that in this case ethics must be practiced and the targeted endpoint must be one you own; otherwise you assume liability for damages.

The process

At a high level, to provision and initialize a cluster of x VMs that “do something” the following steps should be taken:

Start from a base image.
Provision x VMs from the base image.
Download and install required software and data to each VM.
Start the “do-something” process on each VM.
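Because a single scale set cannot hold 10K instances, the provisioning step above has to be spread across multiple VM scale sets. The Python sketch below illustrates that split by emitting one `az vmss create` command per set; the resource group, image, SKU, and the per-VMSS instance cap are assumed placeholder values, not figures from the post:

```python
# Sketch: divide the target capacity across several VM scale sets and
# emit the Azure CLI command that would provision each one.

MAX_PER_VMSS = 1000  # assumed per-scale-set instance cap; check current limits

def vmss_commands(total_vms: int, resource_group: str = "rg-loadtest",
                  image: str = "UbuntuLTS", sku: str = "Standard_D2s_v3"):
    """Yield one `az vmss create` command per scale set needed for total_vms."""
    sets = -(-total_vms // MAX_PER_VMSS)  # ceiling division
    for i in range(sets):
        count = min(MAX_PER_VMSS, total_vms - i * MAX_PER_VMSS)
        yield (f"az vmss create --resource-group {resource_group} "
               f"--name vmss-{i:02d} --image {image} --vm-sku {sku} "
               f"--instance-count {count}")

commands = list(vmss_commands(10_000))
```

Running the generated commands in parallel (rather than sequentially) is what keeps the time-to-full-scale down, which is exactly the trade-off the post explores.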

However, given the targeted hyper-scale there are a number of critical elements that must be taken into account. It quickly becomes clear that the concerns of implementing such scenarios are as much about management, cost optimization, and avoiding platform limits as they are about infrastructure and the provisioning process.

How do you manage 10K VMs? How do you even count them?
What is the origin of the data, and can it handle the load of 10K concurrent downloads?
How would you know when the process completes?
Can the cloud provide 10K VMs in a single region, and if so, which one?
How long would it take to provision the cluster and reach full scale?

The next section describes a load-test scenario implemented using different services and tackling the questions raised previously with the following goals:

Generate stress on a backend service located in some other datacenter using client machines (VMs) in Azure.
Trigger the process using HTTP POST.
Avoid manual steps, prerequisites, and custom images, which may become outdated over time.
Minimize the time needed to reach a full-scale cluster.

The solution outline

Read more about all the details of the solution in the blog post, “To Infinity and Beyond (or: The Definitive Guide to Scaling 10k VMs on Azure).” You can also see the solution code and deployment scripts on GitHub.
Source: Azure

Azure.Source – Volume 64

Updates

Azure Migrate is now available in Azure Government

The Azure Migrate service assesses on-premises workloads for migration to Azure. The service assesses the migration suitability of on-premises machines, performs performance-based sizing, and provides cost estimations for running on-premises machines in Azure. If you're contemplating lift-and-shift migrations, or are in the early assessment stages of migration, this service is for you. Azure Migrate now supports Azure Government as a migration project location. This means that you can store your discovered metadata in an Azure Government region (US Gov Virginia). In addition to Azure Government, Azure Migrate supports storing the metadata in United States and Europe geographies. Support for other Azure geographies is planned for the future.

Python 2.7 Now Available for App Service on Linux

Last month, built-in Python images for Azure App Service on Linux became available in public preview for Python 3.7 and 3.6. Python 2.7 is now also available in the public preview of Python on Azure App Service (Linux). When you use the official images for Python on App Service on Linux, the platform automatically installs the dependencies specified in the requirements.txt file.

If you’re interested in building with Python on Azure, be sure to check out the four-part Python on Azure series with Nina Zakharenko and Carlton Gibson to get an introduction to building and running Django apps with Visual Studio Code and Azure Web Apps, setting up CI/CD pipelines with Azure Pipelines, and running serverless Django apps with Azure Functions.

News

Microsoft Certified Azure Developer Associate

Microsoft Azure Developers design, build, test, and maintain cloud solutions, such as applications and services, partnering with cloud solution architects, cloud DBAs, cloud administrators, and clients to implement these solutions. Based on feedback received about the Azure Developer Associate certification beta exams, AZ-200: Microsoft Azure Developer Core Solutions and AZ-201: Microsoft Azure Developer Advanced Solutions, the decision was taken to simplify the path and transition to a single exam, AZ-203: Developing Solutions for Microsoft Azure. By the way, Exam AZ-900: Microsoft Azure Fundamentals is an optional first step in learning about cloud services and how those concepts are exemplified by Microsoft Azure. You can take AZ-900 as a precursor to AZ-203, but it is not a prerequisite for it.

Technical content

Introduction to Cloud Storage for Developers

This introductory-level post covers data storage options in a platform-agnostic way, with a focus on Azure Storage examples, to help developers understand that traditional NoSQL and SQL databases aren't the only option. Jeremy Likness shares when and why cloud storage is a better option, definitions for various storage terms and concepts, simple ways to get started, and resources to learn more.

KubeCon 2018: Tutorial – Deploying Windows Apps with Kubernetes, Draft, and Helm

Curious about deploying Windows apps to Kubernetes? Would you like to use Draft and Helm, just as you would if you were deploying Linux apps or containers? Check out this blog post from Jessica Deen, which includes her session from KubeCon 2018.

Apache Spark: Tips and Tricks for Better Performance

Building on her "Apache Spark Deep Dive" exploration, Adi Polak shares her top five tips for improving Spark performance and writing better queries — from why you should avoid custom user defined functions to understanding and optimizing your cloud configuration. In her next post, she’ll dive into how to use Apache Spark on Azure, including real life use cases.

Using Object Detection for Complex Image Classification Scenarios Part 1: The AI Computer Vision Revolution

AI and ML are theoretically as easy as consuming a few APIs, but how do you apply them to real business scenarios? In this series, you’ll walk through how a major Central Eastern European candy company uses computer vision, AI, and ML to solve a problem: automatically validating that store shelves are properly stocked, eliminating costly audits and manual processes. By the end of the series, you’ll understand how to compare and contrast different machine learning approaches and technologies, understand available services and tools, and build, train, and deploy your own custom models to the cloud and remote clusters.

Azure shows

Episode 260 – Azure Sphere | The Azure Podcast

In addition to the usual updates, Cale, Russell and Sujit break down the Azure Sphere offering from Microsoft and what it means for the future of IoT development.


Interning in Azure Engineering and the Visual Studio Code extension for ACR Build | Azure Friday

What is it like to intern at Microsoft? Scott Hanselman meets with three interns from the Microsoft Explorer Program (a cross-discipline internship designed for college freshmen and sophomores) to talk about their experience working on the Azure Container Registry and their contribution of ACR Build and Task capabilities to the Visual Studio Code Docker Extension.

Visual Azure Provisioning From a Whiteboard | The Xamarin Show

On this week's Xamarin Show, James is joined by good friend Christos Matskas who shows off a beautiful Xamarin application that is infused with AI to generate a full Azure backend just by drawing pictures on a white board. You don't want to miss this mind blowing demo and walkthrough of the code.

How the Azure DevOps teams plan with Aaron Bjork | DevOps Interviews

In this interview, Donovan Brown interviews Group Program Manager Aaron Bjork about Agile Planning.

IPFS in Azure | Block Talk

This episode introduces the use of IPFS (the InterPlanetary File System) in a consortium setting. It shows how this technology can help remove the centralization of storage that is not part of the blocks in the blockchain, along with a short demonstration of how the Azure Marketplace offering for IPFS makes creating these storage networks simple.

Live demo of BeSense, an application built by Winvision on Azure Digital Twins | Internet of Things Show

Winvision has leveraged the spatial intelligence capabilities of Azure Digital Twins to build BeSense, a smart building application that provides real-time data to optimize space utilization and occupant experience. Remco Ploeg, a Solution Architect at Winvision, demos the application.

How to add logic to your Testing in Production sites with PowerShell | Azure Tips and Tricks

Learn how to use PowerShell to add logic that automatically distributes the load between your production and deployment-slot sites with the Testing in Production feature.

Gopinath Chigakkagari on Key Optimizations for Azure Pipelines | The Azure DevOps Podcast

In this episode, Jeffrey Palermo is joined by his guest, Gopinath Chigakkagari. Gopinath hits on some fascinating points and topics about Azure Pipelines, including (but not limited to): what listeners should be looking forward to, some highlights of the new optimizations on the platform, key Azure-specific offerings, as well as his recommendations on what listeners should follow up on for more information!


Events

Microsoft Ignite | The Tour

Learn new ways to code, optimize your cloud infrastructure, and modernize your organization with deep technical training. Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Find a city near you and register today.

SolarWinds Lab #72: Two Geeks and a Goddess II: Azure the Easy Way

Wednesday, January 16 – 1:00-2:00 PM Central (UTC/GMT -6)

If there’s one takeaway from 2018, it’s that most organizations now run at least some production workloads in somebody else’s data center, especially Azure – and we're here to show you how to monitor those cloud resources with the tools you already have. Join us – Phoummala Schmitt (Microsoft Cloud Advocate), Thomas LaRock (Head Geek and 10-year Microsoft MVP), and Patrick Hubbard (Head Geek) – for a special hybrid IT/cloud operations episode. We'll have live chat and experts on hand, so come with your Azure operations questions. You'll learn how to break down remote monitoring barriers, get a telemetry plan in place before migrating your apps, manage cloud costs, and throttle dev sprawl. We'll also cover the new Azure and Office 365 Server & Application Monitor (SAM) templates and account activation.

Time Series Forecasting: Build and Deploy Your Machine Learning Models to Forecast the Future

Wednesday, January 23 – 8:00-11:00 AM Pacific (UTC/GMT -8)

In this O'Reilly three-hour live training course, Francesca Lazzeri walks you through the core steps for building, training, and deploying your time series forecasting models. First, you’ll learn common time series forecast methods, like simple exponential smoothing and recurrent neural networks (RNN), then get hands-on experience using machine learning tools like Keras, TensorFlow, and other open-source Python packages to apply the models to a real-world scenario.
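The simple exponential smoothing method mentioned above can be sketched in a few lines of plain Python (this is a generic illustration of the technique, not material from the course): each smoothed value is a weighted blend of the latest observation and the previous smoothed value.

```python
# Minimal simple exponential smoothing: alpha controls how strongly the
# forecast reacts to the most recent observation.

def simple_exp_smoothing(series, alpha=0.5):
    """Return the smoothed series; the last value is the one-step forecast."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    smoothed = [series[0]]  # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

forecast = simple_exp_smoothing([10, 12, 11, 13], alpha=0.5)
```

Libraries like statsmodels provide production-grade implementations with fitted smoothing parameters; the point here is only the underlying recurrence.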

A Cloud Guru | Azure This Week – 4 January 2019

This time on Azure This Week, Lars talks about 2019 predictions for Azure, changes and new certificates for Azure, and a new version of the Bot Framework SDK.

Source: Azure