Electric carmaker: Nio posts heavy losses

The Chinese electric car manufacturer Nio is in trouble: the company has reported a quarterly loss of 479 million US dollars on revenue of 220 million US dollars. And its problems run even deeper. (Electric cars, Technology)
Source: Golem

Atom bank accelerates transformation with Google Cloud

When building a radically different bank in an industry dominated by centuries-old institutions, you need to be both creative and nimble. For Atom bank, the UK’s first mobile-only bank, the secret to its success lies in its technology stack, which is now powered by Google Cloud.

Since its launch three years ago, Atom bank has aimed to empower people to own their financial futures, with the desire to use the best technology to deliver an outstanding customer experience. Cloud hosting of banking software wasn’t an option when Atom bank was authorised in 2015, so at launch its IT infrastructure was managed by a third party in a data center. Within a year, however, Atom bank was bumping up against the limits of on-premises technology from both an operational and a business perspective. Regulatory guidance started to emerge, and it was then, says Atom bank CTO Rana Bhattacharya, that the bank turned to Google Cloud.

With on-premises data centers, it can take nearly three months to spin up a new service for customers. Atom bank, however, wanted to inspire its digitally savvy customers with new apps and offerings at a frequent pace, so it made the decision to switch. Now, with Google Cloud, the bank can spin up as many new apps and services as it needs, with fewer lengthy delays and lower costs. And when the bank no longer needs them, they can be decommissioned in an instant. “As a challenger bank, every penny counts,” says Bhattacharya. “We need to do more with less. That’s one reason why embracing the cloud helps us so much. This whole journey is really around removing obstacles, keeping costs low, and having more control and velocity around creating the right products, propositions and experiences for our customers.”

Adopting Google Cloud offers more agility and scalability at a lower cost, says Bhattacharya. It allows Atom bank to be more responsive to the needs of customers, whether through new product and app features or entirely new products. More importantly, moving to Google Cloud has enabled Atom bank to accelerate its transformation initiatives and roll out a completely new consumer-facing app. “Atom bank has always had the ambition to be built in the cloud, along with the intent to scale,” says Bhattacharya. “The speed of our growth and regulatory guidance resulted in us turning to Google Cloud.”

“The types of products we offer today run very effectively on our current technologies but things have changed. To take advantage of the current innovation and build for future speed we’re replatforming the bank. By leveraging Google Cloud, it will allow us to be cloud native, building more SaaS and creating an architecture that is efficient and resilient.”

At the end of the day, Google was more than just a cloud provider to Atom bank. It was a true transformation ally, offering engineering support that helped the bank overcome technical hurdles, and providing training and other services to bank employees. “We picked Google Cloud because we really wanted a partner, not just a provider,” says Bhattacharya. “We knew the cloud provider we chose would be very important to us, so we wanted to be sure we were important to that cloud provider, too. Fortunately, we found that in Google Cloud.”
Source: Google Cloud Platform

12 TB VMs, Expanded SAP partnership on Blockchain, Azure Monitor for SAP Solutions

A few months back, at SAP’s SAPPHIRE NOW event, we announced the availability of Azure Mv2 Virtual Machines (VMs) with up to 6 TB of memory for SAP HANA. We also reiterated our commitment to making Microsoft Azure the best cloud for SAP HANA. I’m glad to share that Azure Mv2 VMs with 12 TB of memory will become generally available and production certified in the coming weeks, in US West 2, US East, US East 2, Europe North, Europe West and Southeast Asia regions. In addition, over the last few months, we have expanded regional availability for M-series VMs, offering up to 4 TB, in Brazil, France, Germany, South Africa and Switzerland. Today, SAP HANA certified VMs are available in 34 Azure regions, enabling customers to seamlessly address global growth, run SAP applications closer to their customers and meet local regulatory needs.

Learn how you can leverage Azure Mv2 VMs for SAP HANA by watching this video.

Running mission-critical SAP applications requires continuous monitoring to ensure system performance and availability. Today, we are launching a private preview of Azure Monitor for SAP Solutions, an Azure Marketplace offering that monitors SAP HANA infrastructure through the Azure Portal. Customers can combine monitoring data from Azure Monitor for SAP Solutions with existing Azure Monitor data to create a unified dashboard for all their Azure infrastructure telemetry. You can sign up by contacting your Microsoft account team.

We continue to co-innovate with SAP to help accelerate our customers’ digital transformation journey. At SAPPHIRE NOW, we announced several such co-innovations with SAP. First, we announced general availability of SAP Data Custodian, a governance, risk and compliance offering from SAP, which leverages Azure’s deep investments in security and compliance features such as Customer Lockbox.

Second, we announced general availability of Azure IoT integration with SAP Leonardo IoT, offering customers the ability to contextualize and enrich their IoT data with SAP business data to drive new business outcomes. Third, we shared that SAP’s Data Intelligence solution leverages Azure Cognitive Services Containers to offer intelligence services such as face, speech, and text recognition. Lastly, we announced a collaboration to integrate Azure Active Directory with SAP Cloud Platform Identity Authentication Service (SAP IAS) for a seamless single sign-on and user provisioning experience across SAP and non-SAP applications. Azure AD integration with SAP IAS for seamless SSO is generally available, and the user provisioning integration is now in public preview. Azure AD integration with SAP SuccessFactors for simplified user provisioning will become available soon.

Another area where I am excited to deepen our partnership is blockchain. SAP has long been an industry leader in solutions for supply chain, logistics, and life sciences. These industries are digitally transforming with the help of blockchain, which adds trust and transparency to their applications and enables large consortiums to transact in a trusted manner. Today, I am excited to announce that SAP’s blockchain-integrated application portfolio will be able to connect to Azure Blockchain Service. This will enable our joint customers to bring the trust and transparency of blockchain to important business processes like material traceability, fraud prevention, and collaboration in life sciences.

Together with SAP, we are offering a trusted path to digital transformation with our best-in-class SAP-certified infrastructure, business process and application innovation services, and a seamless set of offerings. As a result, we are helping SAP customers across the globe, such as Carlsberg and CONA Services, migrate their large-scale, mission-critical SAP applications to Azure. Here are a few additional customers benefiting from migrating their SAP applications to Azure:

Al Jomaih and Shell Lubricating Oil Company: JOSLOC, the joint venture between Al Jomaih Holding and Shell Lubricating Oil Company, migrated its mission-critical SAP ERP to Azure, gaining enhanced business continuity and reduced IT complexity and effort while saving costs. Migrating SAP to Azure has enabled the joint venture to prepare for its upgrade to SAP S/4HANA in 2020.

TraXall France: TraXall France provides vehicle fleet management services for upwards of 40,000 managed vehicles. TraXall chose Microsoft Azure to run its SAP S/4HANA system for the simplified infrastructure management and business agility it offers, and to meet compliance requirements such as GDPR.

Zuellig Pharma: Amid a five-year modernization initiative, Singapore-based Zuellig Pharma wanted to migrate its SAP solution from IBM DB2 to SAP HANA. Zuellig Pharma now runs its SAP ERP on HANA with 1 million daily transactions and 12 TB of production workloads, at a 40 percent savings compared to its previous hosting provider.

If you’re attending SAP TechEd in Las Vegas, stop by Microsoft booth #601 or attend one of the Microsoft Azure sessions to learn more about these announcements and to see these product offerings in action.

Tuesday September 24, 1:00pm–1:30pm: Bringing SAP Cloud Platform and Microsoft Azure Closer Together
Thursday September 26, 11:45am–12:45pm: Innovation, IT Agility, and Developer Productivity on Azure

To learn more about how migrating SAP to Azure can help you accelerate your digital transformation, visit our website at https://azure.com/sap.
Source: Azure

Designing Your First App in Kubernetes, Part 2: Setting up Processes

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series. In this post, I’ll explain how to use pods and controllers to create scalable processes for managing your applications.
Processes as Pods & Controllers in Kubernetes
The heart of any application is its running processes, and in Kubernetes we fundamentally create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point:
Decision #1: How should our processes be arranged into pods?
The original idea behind a pod was to emulate a logical host – not unlike a VM. The containers in a pod will always be scheduled on the same Kubernetes node, and they’ll be able to communicate with each other via localhost, making pods a good representation of clusters of processes that need to work together closely. 
A pod can contain one or more containers, but containers in the pod must scale together.
But there’s an important consideration: it’s not possible to scale individual containers in a pod separately from each other. If you need to scale your application up, you have to add more pods, each of which comes with a copy of every container it includes. Which application components will scale at similar rates, which will not, and which should reside on the same host are the considerations that inform how you arrange processes into pods.
Thinking about our web app, we might start by making a pod containing only the frontend container; we want to be able to scale this frontend independently from the rest of our application, so it should live in its own pod.
On the other hand, we might design another pod that has one container each for our database and API; this way, our API is guaranteed to be able to talk to our database on the same physical host, eliminating network latency between the API and database and maximizing performance. As noted, this comes at the expense of independent scaling; if we schedule our API and database containers in the same pod, every time we want a new instance of our database container, it’s going to come with a new instance of our API.
Case-specific arguments can be made for or against this choice: Is API-to-database latency really expected to be a major bottleneck? Could it be more important to scale your API and database separately? Final decisions may vary, but the same decision points can be applied generically to many applications.
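To make this concrete, here is a minimal sketch of what the API-plus-database pod could look like. The names, images, ports, and the db-secret Secret are hypothetical placeholders, and in practice this pod template would live inside a controller (discussed next) rather than being scheduled as a bare pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-with-db              # hypothetical name
spec:
  containers:
  - name: api
    image: example/api:1.0       # placeholder image
    ports:
    - containerPort: 8080
  - name: database
    image: postgres:11           # the API reaches it at localhost:5432
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret        # assumes this Secret already exists
          key: password
```

Because both containers share the pod’s network namespace, the API can reach the database on localhost with no service discovery or network hop in between.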
Now that we have our pods planned out (one for the frontend and one for the API-plus-database combo), we need to decide how to manage these pods. We virtually never want to schedule pods directly (called ‘bare pods’); instead, we want to take advantage of Kubernetes controllers, which automatically reschedule failed pods, give us influence over how and where our pods are scheduled, and provide functionality for updating and maintaining those pods. There are at least two main types of controllers we need to decide between:
Decision #2: What kind of controller should we use for each pod: a deployment or a daemonset?

Deployments are the most common kind of controller, typically the best choice for stateless pods which can be scheduled anywhere resources are available.
DaemonSets are appropriate for pods meant to run one per host; these are typically used for daemon-like processes, like log aggregators, filesystem managers, system monitors or other utilities that make sense to have exactly one of on every host in your Kubernetes cluster.

Most, but not all, pods are best scheduled by one of these two controllers, and of these, deployments make up the large majority. Since neither of our web app components makes sense as a cluster-wide daemon, we would schedule both of them as deployments. If we later wanted to deploy a logging or monitoring appliance, a DaemonSet would be the common pattern to ensure it runs on every node in the cluster.
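As a rough sketch, a deployment for the frontend pod might look like the following; the image name and replica count are placeholder assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                        # scale the frontend independently of the API/database pod
  selector:
    matchLabels:
      app: frontend
  template:                          # the pod template this controller manages
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example/frontend:1.0  # placeholder image
        ports:
        - containerPort: 80
```

The controller continuously reconciles reality against this spec: if a frontend pod dies, the deployment replaces it to keep three replicas running.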
Now that we’ve decided how to arrange our containers into pods and how to manage our pods using controllers, it’s time to write some Kubernetes yaml to capture these objects; many examples of how to do this are available in the Kubernetes documentation and Docker’s training material.
I strongly encourage you to define your applications using Kubernetes yaml definitions, and not imperative kubectl commands. As I mentioned in the first post, one of the most important aspects of orchestrating a containerized application is shareability, and it is much easier to share a yaml file you can check into version control and distribute, rather than a series of CLI commands that can quickly become hard to read and hard to keep track of.
Checkpoint #2: write Kubernetes yaml to describe your controllers and pods.
Once you have that yaml in hand, now’s a good time to create your deployments and make sure they all work as expected: individual containers in pods should run without crashing, and containers inside the same pod should be able to reach each other on localhost.
Advanced Topics
Once you’ve mastered the pods, deployments, and DaemonSets mentioned above, there are a few deeper topics you can approach to enhance your Kube applications even further:

StatefulSets are another kind of controller appropriate for managing stateful pods; note these require an understanding of Kube services and storage (discussed below).
Scheduling affinity rules allow you to influence and control where your pods are scheduled in a cluster, useful for sophisticated operations in larger clusters.
Healthchecks in the form of livenessProbes are an important maintenance tool for your pods and containers; they tell Kube how to automatically monitor the health of your containers and take action when they become unhealthy (see the sketch after this list).
PodSecurityPolicy definitions add a layer of security that lets cluster administrators control exactly who can schedule pods and how; they are commonly used to prevent the creation of pods with elevated or root privileges.
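To illustrate the healthcheck idea from the list above, here is a minimal livenessProbe sketch; the /healthz endpoint, port, and timings are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-with-probe
spec:
  containers:
  - name: frontend
    image: example/frontend:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz            # assumed health endpoint in the app
        port: 80
      initialDelaySeconds: 10     # give the container time to start up
      periodSeconds: 15           # probe every 15 seconds
```

If the probe fails repeatedly, Kubernetes restarts the container automatically.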

To learn more about Kubernetes pods and controllers, read the documentation:

Kubernetes pods
Kubernetes controllers

You can also check out Play with Kubernetes, powered by Docker.
We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here.
Source: https://blog.docker.com/feed/

New Azure blueprint enables SWIFT Connect

This morning at the SIBOS conference in London, we announced a new Azure Blueprint, introduced in conjunction with the recent efforts to enable SWIFT connectivity in the cloud. It supports our joint customers in compliance monitoring and auditing of SWIFT infrastructure for cloud-native payments, as described on the Official Microsoft Blog.

SWIFT is the world’s leading provider of secure financial messaging services, used and trusted by more than 11,000 financial institutions in more than 200 countries and territories. Today, enterprises and banks conduct financial transactions by sending payment messages over the highly secure SWIFT network, which relies on on-premises installations of SWIFT technology. SWIFT Cloud Connect creates a bank-like wire transfer experience with the added operational, security, and intelligence benefits the Microsoft Cloud offers.

Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. It allows customers to set up governed Azure environments that can scale to support production implementations for large-scale migrations, and it includes mappings for key compliance standards such as ISO 27001, NIST SP 800-53, PCI DSS, UK Official, IRS 1075, and UK NHS.

The new SWIFT blueprint maps Azure built-in policies to the security controls framework of SWIFT’s Customer Security Programme (CSP), enabling financial services organizations to create and monitor secure, compliant SWIFT infrastructure environments with agility.

The Azure blueprint includes mappings to:

Account management. Helps with the review of accounts that may not comply with an organization’s account management requirements.
Separation of duties. Helps in maintaining an appropriate number of Azure subscription owners.
Least privilege. Audits accounts that should be prioritized for review.
Remote access. Helps with monitoring and control of remote access.
Audit review, analysis, and reporting. Helps ensure that events are logged and enforces deployment of the Log Analytics agent on Azure virtual machines.
Least functionality. Helps monitor virtual machines where an application whitelist is recommended but has not yet been configured.
Identification and authentication. Helps restrict and control privileged access.
Vulnerability scanning. Helps with the management of information system vulnerabilities.
Denial of service protection. Audits if the Azure DDoS Protection standard tier is enabled.
Boundary protection. Helps with the management and control of the system boundary.
Transmission confidentiality and integrity. Helps protect the confidentiality and integrity of transmitted information.
Flaw remediation. Helps with the management of information system flaws.
Malicious code protection. Helps with the management of endpoint protection, including malicious code protection.
Information system monitoring. Helps with monitoring a system by auditing and enforcing logging across Azure resources.

We are committed to helping our customers leverage Azure in a secure and compliant manner. Over the next few months, we will release new built-in blueprints for HITRUST, FedRAMP, and Center for Internet Security (CIS) Benchmark. If you have suggestions for new or existing compliance blueprints, please share them via the Azure Governance Feedback Forum.

Learn more about the SWIFT CSP blueprint in our documentation.
Source: Azure