Amazon CloudFront Is Now Available in Mainland China

Amazon CloudFront announces the launch of CloudFront in China with three new edge locations (PoPs) in Beijing, Shanghai, and Zhongwei, operated by Ningxia Western Cloud Data Co. Ltd. (NWCD). Customers can now serve content to end viewers in Mainland China with improved latency, availability, and security.
Quelle: aws.amazon.com

Announcing the New AWS Certified Alexa Skill Builder – Specialty Exam

AWS Training and Certification is excited to announce the availability of the new AWS Certified Alexa Skill Builder – Specialty certification, the industry’s first and only certification that validates your ability to build, test, and publish Amazon Alexa skills. With the new AWS Certified Alexa Skill Builder – Specialty certification, Alexa developers can more confidently publish skills that have the potential to reach customers through over 100 million Alexa-enabled devices in use globally.
Quelle: aws.amazon.com

Louisville: Google Fiber tears out its rubber network

Google Fiber's plan to deploy FTTH in a city with extremely shallow trenching has ended with high restoration payments. The company cut trenches five centimeters deep and filled them with a rubber-like liquid. But the shallow trenching came back to the surface. (Google Fiber, Google)
Quelle: Golem

Improve enterprise IT procurement with Private Catalog, now in beta

With the sheer number of applications in today's enterprises, it can be hard for procurement departments and cloud administrators to maintain compliant and efficient procurement processes for their cloud development teams. Last week at Next we introduced you to Private Catalog, a new service from Google Cloud that lets you control the availability and distribution of IT solutions to maintain compliance and governance, simplify internal solution discovery, and ensure that only approved and compatible apps are available throughout your organization. Here's a bit more color.

Stay compliant, control access

Private Catalog helps you reduce complexity in regulated industries, or when handling sensitive data. Controlling which apps your developers use can help you avoid costly data loss, data leaks, or reliability issues from unverified code. You can ensure that only products that meet your compliance and governance rules are published to your catalog and available to your developer teams. For additional control, you can create hierarchies complete with access controls in the catalog, limiting who can deploy what within your organization.

Create a collaborative environment

Centralizing your apps is not only good for compliance, it's good for productivity. Distributed workforces often create technology silos, introducing redundancies across your teams. Private Catalog simplifies how users find sanctioned applications: they simply navigate to a single place to find all the approved internal apps available to them. And when central IT teams create a new solution, Private Catalog makes it easy to distribute it to the whole organization.

Fewer failures, more efficiency

Failing to control how you deploy internal apps leads to inefficient resource usage and more support tickets. With Private Catalog, you can control how you distribute your software according to parameters in Cloud Deployment Manager templates, including regions, RAM, CPUs, and almost any other value. When you control the parameters, you ensure that apps have the correct amount of resources, in approved configurations.

Management and reporting capabilities

Private Catalog includes robust management, integration, and reporting capabilities. With the APIs available today, you can delete catalogs, hierarchies, and individual solutions that are no longer relevant for your teams. You can also customize the user interface, and integrate your Private Catalog solutions with your other enterprise service catalogs. To report on identity and access management, simply query which solutions users have access to within each organization's hierarchy and catalog.

Internal apps don't have to be a source of compliance, support, and communication issues. With Private Catalog, you put controls in place that let your developers access the tools they need safely and efficiently. For more information, visit the Private Catalog homepage.
Quelle: Google Cloud Platform

Kohl's leverages Google Cloud Platform for omnichannel retail

I've spent a lot of time hearing from customers about how they are digitizing their businesses using Google Cloud. I want to share some of their stories and news about some exciting new products we are introducing, which is why I'll be posting here regularly. So, let's get started.

How Kohl's is leveraging Google Cloud Platform for omnichannel retail

Kohl's is an omnichannel retailer focused on driving traffic, operational efficiency, and delivering seamless omnichannel customer experiences. Ratnakar Lavu is Kohl's Senior Executive Vice President and Chief Technology Officer. He, and Kohl's, are at the forefront of retail technology innovation, focusing on a frictionless customer journey across digital, mobile and more than 1,150 stores. As part of this journey to more closely unify its online and offline experiences for customers, the company was looking for supporting cloud services that would continue to drive best-in-class data center infrastructure; the ability to manage data at a very large scale; and industry-leading analytics and machine learning tools to continually understand real-time data streams and help personalize experiences for their customers.

Kohl's recognized the opportunity to take on a cloud partner to help improve the speed and reliability of its operations while it focused on a number of innovations to deepen customer experiences. "At the time, I was looking for an open and scalable platform to partner with our Kohl's technology team as we transform our business by shifting to the cloud," Ratnakar told me. "Google has great engineering talent as well as demonstrated experience solving stability and scale in its own Ads and Search business. At Kohl's, we need to be bold and innovative in today's retail environment, and therefore need partners who deeply understand how to manage risk."

Kohl's leveraged several capabilities of Google Cloud. For example:

- They built applications to automate deployment, scaling, and operations.
- They used monitoring capabilities to monitor for things like response time.
- Scalable technology provided an infrastructure to elastically scale to site traffic.
- They ran their infrastructure across multiple regions for high availability.

In 2017 and 2018, record-setting numbers of customers visited Kohls.com during the Thanksgiving holiday weekend, and the digital platform experienced high double-digit growth both years. The capabilities provided by Google Cloud Platform (GCP) and Google's data center infrastructure supported Kohl's servers and systems during these key timeframes.

In addition, the Kohl's team partnered with Google's core engineering team and services organization to optimize applications and make them more reliable. Our Customer Reliability Engineers (CREs) worked with them in advance of their peak time frames to test the infrastructure for performance, scaling, and fault tolerance. "Google CRE and services teams collaborated with us as we ran drills and exercises during each phase of preparation for peak time frames," Ratnakar said. "We continued to understand better how to scale, monitor, and support our applications in GCP, and we are pleased that we worked with the CRE team as partners on monitoring services, alerting teams, and triaging work."

We are grateful for our partnership with Ratnakar and the Kohl's organization and are so happy to see their success using Google Cloud. Many other retailers, be they among the top 10 globally or bringing a new perspective to retail experiences, are also transforming their digital business models to capture new opportunities using Google Cloud. You can learn more here.
Quelle: Google Cloud Platform

Introducing GKE Advanced: enhanced reliability, simplicity and scale for enterprise workloads

Editor's note: This is the first of many posts on unique, differentiated capabilities in Google Kubernetes Engine. Stay tuned in the coming weeks as we discuss GKE's more advanced features.

Kubernetes has come a long way since Google open-sourced it in 2014. Since then, the community has developed a robust suite of installation, management, and configuration tooling for a variety of use cases. But many organizations are overwhelmed by having to run Kubernetes on their own, and instead adopt Google Kubernetes Engine (GKE), our managed service. Their concern isn't the underlying infrastructure; they just want a strong foundation that lets them focus on their business.

Today, we're introducing you to GKE Advanced, which adds enterprise-grade controls, automation and flexibility, building on what we've learned managing our robust worldwide infrastructure. Going forward, we'll refer to our existing GKE offering as GKE Standard. Here are the two GKE editions at a glance:

GKE Advanced delivers advanced infrastructure automation, integrated software supply chain tooling for enhanced security, a commitment to reliability with a financially backed SLA, and support for running serverless workloads. These new, advanced GKE features and tooling help you operate in fast-moving environments, simplify the management of workloads and clusters, and scale hands-free. You still benefit from Kubernetes' portability and third-party ecosystem, but with an enhanced feature set.

GKE Standard includes all the features and capabilities that are generally available today, providing a managed service for less complex projects. You can continue to take advantage of the rich ecosystem of first-party and third-party integrations in GCP, including those available in the GCP Marketplace.

Let's take a closer look at the features GKE Advanced will include:

Enhanced SLA

GKE Advanced is financially backed by an SLA that guarantees availability of 99.95% for regional clusters, providing peace of mind for mission-critical workloads.

Simplified automation

Manually scaling a Kubernetes cluster for availability and reliability can be complex and time consuming. GKE Advanced includes two new features to make it easier: Vertical Pod Autoscaler (VPA), which watches resource utilization of your deployments and adjusts requested CPU and RAM to stabilize the workloads, and Node Auto Provisioning, which optimizes cluster resources with an enhanced version of Cluster Autoscaling.

Additional layer of defense

DevOps teams and system administrators often need to run third-party software in their Kubernetes cluster but still want to make sure that it's isolated and secure. GKE Advanced includes GKE Sandbox, a lightweight container runtime based on gVisor that adds a second layer of defense at the pod layer, hardening your containerized applications without any code or config changes, and without requiring you to learn a new set of controls.

Software supply-chain security

Malicious or accidental changes during the software development lifecycle can lead to downtime or compromised data. With Binary Authorization, container images are signed by trusted authorities during the build and test process. By enforcing that only verified images are integrated into the build-and-release process, you gain tighter control over your container environment.

Serverless computing

You want to quickly develop and launch applications without having to worry about the underlying infrastructure on which your code runs. Cloud Run on GKE provides a consistent developer experience for deploying and running stateless services, with automatic scaling (even to zero instances), networking and routing, logging, and monitoring, all based on Knative.

Understand your infrastructure usage

When multiple tenants share a GKE cluster, it can be hard to estimate which tenant is consuming what portion of resources. GKE usage metering lets you see your cluster's resource usage broken down by Kubernetes namespaces and labels, and attribute it to meaningful entities such as customers, departments, and the like.

With the addition of advanced autoscaling and security, support for serverless workloads, and enhanced usage reporting, all financially backed by an SLA, GKE Advanced gives you the tools and confidence you need to build the most demanding production applications on top of our managed Kubernetes service. GKE Advanced will be released with a free trial later in Q2. Have questions about GKE Advanced? Contact your Google customer representative for more information, and sign up for our upcoming webcast, Your Kubernetes, Your Way Through GKE.
Quelle: Google Cloud Platform

Machine Learning powered detections with Kusto query language in Azure Sentinel

This post is co-authored by Tim Burrell, Principal Security Engineering Manager and Dotan Patrich, Principal Software Engineer.

As cyberattacks become more complex and harder to detect, the traditional correlation rules of a SIEM are not enough: they lack the full context of the attack and can only detect attacks that have been seen before. This can result in false negatives and gaps in coverage. In addition, correlation rules require significant maintenance and customization, since they may produce different results depending on the customer environment.

Advanced Machine Learning capabilities built into Azure Sentinel can detect indicative behaviors of a threat and help security analysts learn the expected behavior in their enterprise. In addition, Azure Sentinel provides out-of-the-box detection queries that leverage the Machine Learning capabilities of the Azure Monitor Logs query language to detect suspicious behaviors such as abnormal traffic in firewall data, suspicious authentication patterns, and resource creation anomalies. The queries can be found in the Azure Sentinel GitHub community.

Below are three examples of detections that leverage these built-in Machine Learning capabilities to protect your environment.

Time series analysis of user accounts authenticating from an unusually large number of locations

A typical organization may have many users and many applications using Azure Active Directory for authentication. Some applications (for example, Office 365 Exchange Online) may have many more authentications than others (say, Visual Studio) and thus dominate the data. Users may also have a different location profile depending on the application. For example, high location variability may be expected for email access, but less so for development activity associated with Visual Studio authentications. The ability to track location variability for every user/application combination, and then investigate just the most unusual cases, can be achieved with the built-in query operators make-series and series_fit_line.

SigninLogs
| where TimeGenerated >= ago(30d)
// Build a single country/state/city string per sign-in
| extend locationString = strcat(tostring(LocationDetails["countryOrRegion"]), "/", tostring(LocationDetails["state"]), "/", tostring(LocationDetails["city"]), ";")
| project TimeGenerated, AppDisplayName, UserPrincipalName, locationString
// Daily distinct-location count per user/app pair over the last 30 days
| make-series dLocationCount = dcount(locationString) on TimeGenerated in range(startofday(ago(30d)), now(), 1d) by UserPrincipalName, AppDisplayName
// Fit a line to each series and keep those trending sharply upward
| extend (RSquare, Slope, Variance, RVariance, Interception, LineFit) = series_fit_line(dLocationCount)
| where Slope > 0.3
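The statistic series_fit_line computes can be sketched outside Kusto. Below is a minimal Python illustration, assuming hypothetical user/app names and sample series: an ordinary least-squares line is fitted to each series of daily distinct-location counts, and series with a steep positive slope are flagged, mirroring the Slope > 0.3 filter above.

```python
def fit_line(series):
    """Return (slope, intercept) of an OLS line over index 0..n-1."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical daily distinct-location counts over a week for two user/app pairs.
series_by_user = {
    "alice/Exchange": [2, 2, 3, 2, 2, 3, 2],    # flat: normal behavior
    "bob/VisualStudio": [1, 1, 2, 3, 5, 7, 9],  # rising: worth investigating
}

# Flag series whose fitted slope exceeds the same 0.3 threshold.
flagged = [user for user, s in series_by_user.items()
           if fit_line(s)[0] > 0.3]
print(flagged)  # only the rising series is flagged
```

Kusto additionally returns the R-square and variance terms, which can be used to filter out noisy fits before alerting.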

Creation of an anomalous number of resources

Resource creation is a routine operation in an Azure environment. Operations and IT teams frequently spin up environments and resources based on organizational needs and requirements. However, the anomalous creation of resources by users who don't have permission, or who aren't expected to create those resources, is extremely interesting. Tracking anomalous resource creation or suspicious deployment activity in the Azure Activity Log can provide a lead for spotting an execution technique used by an attacker.

AzureActivity
| where TimeGenerated >= ago(30d)
| where OperationName == "Create or Update Virtual Machine" or OperationName == "Create Deployment"
| where ActivityStatus == "Succeeded"
// Daily count of distinct resources created per caller
| make-series num = dcount(ResourceId) default=0 on EventSubmissionTimestamp in range(ago(30d), now(), 1d) by Caller
// Score each day with a custom Tukey test over the 10th-90th percentile range
| extend outliers = series_outliers(num, "ctukey", 0, 10, 90)
| project-away num
| mv-expand outliers
| where outliers > 0.9
| summarize by Caller
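The "ctukey" scoring idea can be illustrated with a simplified Python sketch: values far outside the 10th-90th percentile range of a series get a non-zero score, and a day with a burst of resource creations stands out. The scoring formula and sample counts below are illustrative, not Kusto's exact implementation.

```python
def percentile(sorted_vals, p):
    """Nearest-rank percentile on a pre-sorted list."""
    idx = max(0, int(round(p / 100 * len(sorted_vals))) - 1)
    return sorted_vals[idx]

def outlier_scores(series, low_p=10, high_p=90):
    """Score each value by its distance outside the percentile fence."""
    s = sorted(series)
    q_low, q_high = percentile(s, low_p), percentile(s, high_p)
    spread = (q_high - q_low) or 1  # avoid division by zero on flat series
    scores = []
    for v in series:
        if v > q_high:
            scores.append((v - q_high) / (1.5 * spread))
        elif v < q_low:
            scores.append((v - q_low) / (1.5 * spread))
        else:
            scores.append(0.0)
    return scores

# Hypothetical daily distinct-resource counts for one caller: a quiet
# baseline, then a burst of 40 new resources on the last day.
daily_counts = [1, 0, 2, 1, 0, 1, 2, 1, 1, 40]
scores = outlier_scores(daily_counts)
print([round(x, 2) for x in scores])  # only the final day scores above 0.9
```

Only callers with at least one day scoring above the 0.9 cutoff would survive the query's final filter.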

Firewall traffic anomalies

Firewall traffic can be an additional indicator of a potential attack on the organization. Establishing a baseline that represents the usual firewall traffic behavior on a weekly or hourly basis can help surface an anomalous increase in traffic. Using the built-in capabilities of the Log Analytics query language, you can point directly at the traffic anomaly and investigate it.

CommonSecurityLog
// Hourly event counts form the baseline for spotting traffic anomalies
| summarize count() by bin(TimeGenerated, 1h)
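The query above only buckets firewall events by hour; the baselining the text describes can be sketched in Python as a per-hour-of-day comparison of today's counts against the historical mean and standard deviation. The sample data and three-sigma threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def hourly_anomalies(history, current_day, threshold=3.0):
    """history: list of 24-element daily count lists. Flag hours in
    current_day deviating more than `threshold` std devs from baseline."""
    anomalies = []
    for hour in range(24):
        baseline = [day[hour] for day in history]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(current_day[hour] - mu) / sigma > threshold:
            anomalies.append(hour)
    return anomalies

# Seven days of hourly firewall event counts: ~100 events/hour with jitter.
history = [[100 + (d + h) % 5 for h in range(24)] for d in range(7)]
today = list(history[0])
today[3] = 500  # sudden spike at 03:00
print(hourly_anomalies(history, today))  # → [3]
```

In production this would typically be done in the query language itself (for example with make-series over hourly bins), but the underlying comparison is the same.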

With Azure Sentinel, you can create advanced detection rules like the ones above to detect anomalies and suspicious activities in your environment, write your own detection rules, or leverage the rich GitHub library of detections written by Microsoft security researchers.
Quelle: Azure

Gain flexibility with microservices applications

Microservices development techniques have ushered in an unprecedented era of continuous delivery and deployment. It's important that organizations investing in IT evaluate their application modernization journey. As part of that journey, businesses can gain efficiencies and cost savings by unlocking the potential of microservices architectures. Yet careful consideration must be given to how best to deploy microservices in an environment where more demands are placed on developers and site reliability engineers (SREs) every day.
The complexity of microservices architectures requires developers and SREs to monitor applications and ensure their reliability and performance even after they go into production. Monitoring an application's resource consumption during development mitigates cost and sizing surprises before production; but because microservices architectures make it possible to rapidly enhance application capabilities while in production, SREs must continue monitoring resource consumption throughout an application's life cycle of changes. A resource monitoring solution that can be used throughout both the development and production life cycle is therefore ideal. Lightweight data collectors embedded into a Microclimate environment can offer incredible value.
What are lightweight data collectors?
But what are lightweight data collectors, and how do they help? Data collectors are simply application modules that collect performance metrics. Modern data collection agents are easy to install and have minimal footprints, making them "lightweight". Most lightweight data collectors are open source, which not only allows the community to contribute improvements but also enables customization specific to your applications. Supported in Node.js, Swift, and Java runtime environments, modern data collectors can be embedded into the application container image just like any other application library. They can also be instrumented simply by adding a single line of code to the runtime, instead of installing a heavyweight agent into each service. The ability to collect data at the service level helps developers work more efficiently.
Overcoming microservices development challenges with lightweight data collectors
During development in a microservices environment, tracking compute resources, application response times, throughput, and stack and method traces is very valuable. By measuring and monitoring these resources earlier in the development lifecycle, companies can prevent latency and other customer-affecting issues from occurring in production. But even this due diligence isn't enough in a microservices environment.
The versatility of microservices allows for rapid innovation even after deployment. Developers are often faced with troubleshooting applications while they are in production. With user impact looming, it can be a scramble to gain insight into container performance and measure the availability of compute resources.
As applications progress through the continuous delivery pipeline, developers need to collect metrics to ensure they understand resource consumption at each stage. Lightweight data collectors in a Microclimate environment provide real-time intelligence at the service level to help developers find problems during development or in production, so that they feel confident in delivering code to the pipeline and maintaining availability during improvement cycles. As applications are changed in production, developers can easily compare resource metrics from one version to another.
Lightweight data collectors offer huge benefits to developers, but they also improve the IT environment.
Overcoming microservices operational challenges with lightweight data collectors
In most enterprises, SREs are faced with monitoring applications deployed in both private and public clouds, and often in hybrid cloud environments that include traditional on-premises resources. Ensuring applications are performing well while being continuously updated in hybrid production environments can be difficult if data is not processed quickly.
Luckily, lightweight data collectors offer a common approach to collecting resource metrics. In these situations, and also for deployments spread across multiple public clouds, Microclimate topology and lightweight data collectors help gather performance metrics in a consistent manner. This is doubly important when migrating applications to microservices architectures. When a new service is introduced into the cluster, it's a huge value add for operations teams to have performance metrics at their fingertips comparing how a service was performing before and after the transition.
And, when modernizing application architectures, teams will need tools that not only collect data, but also provide insights into how to fix issues when they arise.
The solution: A centralized view of microservices application metrics
Microclimate environments enable rapid continuous delivery processes and allow teams to communicate through common metrics, but that communication is only as valuable as the data being pushed out. Siloed data can create big problems. Redirecting lightweight data collectors into a centralized view of microservices application metrics ensures your teams are all operating from the same view.
Using tools like IBM Cloud App Management and its data collectors, developers and site reliability engineers can share a centralized view of all of an application's microservices resource metrics. IBM Cloud App Management monitors at the service level, using the SRE golden signals (latency, errors, traffic, and saturation) as indicators, with lightweight data collectors giving deeper insight into service-impacting issues.
Learn more:

See how IBM Cloud App Management deploys lightweight data collectors, informed by SRE golden signal monitoring.
Learn how to deploy Microclimate for your company on IBM Cloud Private.
Read about the importance of the application modernization journey to organizations investing in IT.

The post Gain flexibility with microservices applications appeared first on Cloud computing news.
Quelle: Thoughts on Cloud