AWS and Red Hat – Digging a Little Deeper

Hopefully by now, you have either seen the Amazon Web Services (AWS) and Red Hat alliance keynote or at least read the press release. Some highlights in case you missed it:

* AWS cloud services integrated with Red Hat OpenShift Container Platform to enable hybrid deployments.
* Joint support path for applications using OpenShift with integrations to AWS.
* Collaboration on Kubernetes to make OpenShift run more efficiently on AWS.
* Enhanced Red Hat Enterprise Linux optimizations for AWS.
Source: OpenShift

Manage your business needs with new enhancements in Azure Autoscale

Automatically scaling out or scaling in applications to handle the demands of your business is an essential element of the cloud strategy. Azure’s Autoscale service empowers you to automatically scale your compute and App Service workloads based on user-defined rules regarding metric conditions, time/date schedules, or both. Azure Autoscale is available for Classic Cloud Services, Virtual Machine Scale Sets (VMSS) and App Services. Today we are excited to announce a host of improvements to Autoscale, including faster auto scaling, simplified configuration, the ability to scale by a custom metric using Application Insights, and more troubleshooting information available in the Activity Log.
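The user-defined rules described above can be sketched in a few lines of code. This is a minimal, illustrative model of threshold-based scaling — the `MetricRule` and `evaluate` names are hypothetical and are not part of any Azure SDK:

```python
from dataclasses import dataclass

@dataclass
class MetricRule:
    """A hypothetical metric-based rule: act when a metric crosses a threshold."""
    metric: str       # metric name, e.g. "cpu_percent"
    threshold: float  # boundary that triggers the action
    scale_out: bool   # True = add instances, False = remove instances
    change: int       # number of instances to add or remove

def evaluate(rules, metrics, current, minimum=1, maximum=10):
    """Return the new instance count after applying the first matching rule,
    clamped to the configured minimum and maximum."""
    for rule in rules:
        value = metrics.get(rule.metric)
        if value is None:
            continue
        if rule.scale_out and value > rule.threshold:
            return min(current + rule.change, maximum)
        if not rule.scale_out and value < rule.threshold:
            return max(current - rule.change, minimum)
    return current  # no rule matched; keep the current count

# A scale-out rule above 75% CPU and a scale-in rule below 25% CPU.
rules = [
    MetricRule("cpu_percent", 75.0, scale_out=True, change=2),
    MetricRule("cpu_percent", 25.0, scale_out=False, change=1),
]
print(evaluate(rules, {"cpu_percent": 90.0}, current=4))  # prints 6
```

Real Autoscale settings also combine such metric rules with time/date schedules; this sketch only shows the metric-condition half of the model.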

Faster Autoscale

Classic Cloud Services: The Classic Virtual Machine infrastructure that powers Classic Cloud Services now supports more reliable, host-level metrics via the Azure Monitor metric pipeline. As a result, an Autoscale setting can now use a time window as short as five minutes to activate (previously we recommended a time window of no less than 30 minutes), enabling faster and more reliable auto scaling. If you have a Classic Cloud Service Autoscale setting and wish to take advantage of the improved scaling, please update it with a shorter time window (as low as five minutes).
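The role of the time window can be illustrated with a toy model: per-minute samples are averaged over the window before any decision is made, so a single noisy reading cannot trigger a scale action. The class and method names below are hypothetical, not part of any Azure SDK:

```python
from collections import deque
from statistics import mean

class WindowedMetric:
    """Illustrative sketch of a time-window check: keep the last N per-minute
    samples and only act on the average over a full window."""
    def __init__(self, window_minutes=5):
        self.samples = deque(maxlen=window_minutes)

    def record(self, value):
        """Record one per-minute metric sample."""
        self.samples.append(value)

    def breaches(self, threshold):
        """True only when a full window's average exceeds the threshold."""
        return (len(self.samples) == self.samples.maxlen
                and mean(self.samples) > threshold)

w = WindowedMetric(window_minutes=5)
for sample in [90, 92, 91, 95]:
    w.record(sample)
print(w.breaches(75))  # False: only four of the five samples collected
w.record(93)
print(w.breaches(75))  # True: the five-sample average (92.2) exceeds 75
```

A shorter window reacts faster but is more sensitive to noise, which is why more reliable host-level metrics are what makes the five-minute window practical.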

VMSS and App Services: The Autoscale engine for VMSS and App Services can also now trigger scale actions faster. The new engine is tuned to check your metric-based rules every minute, enabling it to scale your instances as early as one minute after a metric value crosses the threshold set in an Autoscale setting. To take advantage of the faster Autoscale, please update your existing Autoscale setting. All new Autoscale settings created or updated on VMSS or App Services after May 10th will automatically use the new engine.

Simplified management experience in the portal

Based on your feedback, we made it easier to discover and manage Autoscale settings in the portal. Autoscale settings can now be accessed directly from within the Azure Monitor blade, use a completely re-vamped configuration blade, and enable you to easily see the full template in JSON or scale action history for that setting. Learn more about how to get started with Autoscale today.

Figure 1. The new tab within Azure Monitor for accessing and managing Autoscale settings.

Figure 2. The simplified Autoscale blade with options to view scale action history, view the JSON object, and edit notifications.

Autoscale using custom metrics

One of our top customer asks was the ability to Autoscale based on a custom, user-defined metric, and we’ve now enabled it using Application Insights. This new capability, now in public preview, enables you to scale Classic Cloud Service, VMSS, or App Service workloads by any Azure Monitor-based metric or by custom and application metrics collected by Application Insights, Azure’s application performance management service. Here is a sample of an Autoscale setting that allows you to scale your Web API app based on a custom metric ingested into Application Insights. Please try this out and share your feedback.
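A common pattern for a custom, user-defined metric is sizing a pool to a work backlog, such as a queue depth reported by the application itself. The sketch below is a hypothetical example of that pattern, not Azure's actual Autoscale logic:

```python
import math

def instances_for_queue(queue_length, per_instance_throughput,
                        minimum=1, maximum=20):
    """Hypothetical custom-metric rule: size the instance pool so the
    application-reported queue depth can be drained, clamped to limits.

    queue_length            -- the custom metric (items waiting)
    per_instance_throughput -- items one instance can handle per interval
    """
    needed = math.ceil(queue_length / per_instance_throughput)
    return max(minimum, min(needed, maximum))

print(instances_for_queue(950, 100))   # prints 10
print(instances_for_queue(0, 100))     # prints 1  (never below the minimum)
print(instances_for_queue(5000, 100))  # prints 20 (capped at the maximum)
```

With Application Insights as the metric source, a rule like this lets scaling track what the application actually experiences rather than only host-level counters like CPU.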

Figure 3. The ability to select Application Insights as a source of metrics and the ability to select a standard or user-defined metric by which scaling will occur.

Improved Autoscale troubleshooting

The Autoscale engine logs an event in the Activity Log every time it triggers a scale action; however, the target resource being scaled out or in can take time to complete the scale action. It is important to know when the scale action completes or fails so that you can take automated actions on the resource. To support this, the Autoscale engine now generates a scale action result event when the underlying target service completes the action or reports it as failed. This scale result event is also logged in the Activity Log and includes valuable information about why your Autoscale event failed. We’ve also introduced a new Autoscale Activity Log category so that you can easily filter to view only Autoscale-related events. You can leverage the new Activity Log Alerts to receive notifications or take automated actions via webhooks, Azure Automation, Logic Apps, or Functions. This feature is now enabled for Cloud Services, VMSS, and App Services.
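A webhook receiver for these result events might look like the sketch below. Note that the field names (`resource`, `status`, `reason`) are illustrative placeholders, not the actual Activity Log alert payload schema:

```python
import json

def handle_autoscale_event(payload: str) -> str:
    """Parse a hypothetical Autoscale result-event webhook payload and
    decide whether follow-up automation or alerting is needed."""
    event = json.loads(payload)
    status = event.get("status", "")
    resource = event.get("resource", "?")
    if status.lower() == "failed":
        # A failed scale action is where automated remediation belongs,
        # e.g. paging an operator or triggering a runbook.
        reason = event.get("reason", "unknown")
        return f"ALERT: scale action on {resource} failed: {reason}"
    return f"OK: scale action on {resource} {status}"

sample = '{"resource": "myVmss", "status": "Failed", "reason": "quota exceeded"}'
print(handle_autoscale_event(sample))
# prints: ALERT: scale action on myVmss failed: quota exceeded
```

In practice, a handler like this would sit behind an Activity Log Alert webhook, or the equivalent logic would live in a Logic App or Azure Function; consult the actual alert payload schema before parsing real events.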

Figure 4. A view of Activity Logs filtered by Autoscale events, listing the Autoscale trigger action and result event.

Wrapping Up

These new capabilities in Azure Autoscale enable you to efficiently leverage the compute power of Azure to scale your applications to best suit your growing business needs. We are eager to hear your feedback to inform our future work on Autoscale. Please try these new features and let us know what you think. Also, be sure you’re getting the most out of this feature by checking out our Autoscale best practices and most common Autoscale patterns.
Source: Azure

Apple Will Announce Amazon Prime Video Coming To Apple TV At WWDC

After a fraught few years, Apple and Amazon have reached something of an accord over their rival video efforts.

Sources in position to know tell BuzzFeed News that Amazon's Prime Video app — long absent from Apple TV — is indeed headed to Apple's diminutive set-top box. Apple plans to announce Amazon Prime Video's impending arrival in the Apple TV App Store during the keynote at its annual Worldwide Developers Conference (WWDC) on June 5 in San Jose, CA. A source familiar with the companies' thinking says the app is expected to go live this summer, but cautioned that the hard launch date might change. Amazon had previously declined to even submit a Prime Video app for inclusion in Apple's Apple TV App Store, despite Apple's “all are welcome” proclamations.

Recode earlier reported that Apple and Amazon were nearing an agreement that may finally bring the Prime Video app to Apple TV. It's now official.

As part of the arrangement between the two companies, Amazon — which stopped selling Apple TV devices two years ago, when it also banned Google’s Chromecast devices from its virtual shelves — will resume selling Apple's set-top box. In October 2015, Amazon forbade third-party electronics sellers from selling Apple TVs and Google Chromecasts through their Amazon storefronts, arguing that the devices inspired “customer confusion.”

“Over the last three years, Prime Video has become an important part of Prime,” Amazon told BuzzFeed News at the time. “It’s important that the streaming media players we sell interact well with Prime Video in order to avoid customer confusion. Roku, XBOX, PlayStation and Fire TV are excellent choices.”

A hard date for the Apple TV's return to Amazon and its storefronts couldn't be learned.

Apple declined comment on forthcoming Amazon Prime Video announcements. Amazon has not yet responded to a request for comment.

Source: BuzzFeed

Azure Government – The most secure & compliant cloud for defense with new compliance and service offerings

Broad support for regulatory compliance and ongoing innovation are at the core of Microsoft’s commitment to enabling U.S. government missions with a complete, trusted, and secure cloud platform. Today, we are announcing support for Defense Federal Acquisition Regulation Supplement (DFARS) requirements, expanding opportunities for defense contractors to take advantage of cloud computing in meeting the needs of the U.S. Department of Defense (DoD). Adding DFARS compliance extends Azure Government’s lead as the cloud platform with the broadest support for U.S. DoD workloads. In addition to this compliance milestone, we are also announcing enhanced technical capabilities with the expansion of our Cognitive Services preview, addition of Graphics Processing Unit (GPU) clusters, and the addition of new database and storage options in Azure Government. With these expanded compliance and service offerings, government customers will have new opportunities to use cloud computing to help meet their mission goals.

Supporting DFARS requirements

Azure Government’s support for DFARS requirements creates new options for DoD contractors as they partner with the defense department. DoD industry partners can now host Covered Defense Information (CDI) on the Microsoft cloud platform while maintaining compliance with DoD procurement requirements, giving them access to the same set of Azure Government capabilities as the DoD itself.

“As a mission partner of the DoD, the security of covered defense information is of utmost importance. Compliance with DFARS is not only required by regulation, but is also critical to the defense of our nation,” says Michael Hawman, General Atomics CIO, “As more DoD contractors consider the adoption of cloud computing to reduce costs and increase agility and capability, the transparency by which CSPs provide support will be critical to building and maintaining trust with cloud security in the defense contractor community. Commercial cloud service providers must familiarize themselves with, and be capable of accepting flow down DFARS requirements as soon as possible."

Cognitive Services available for all customers

Building on the successful preview of Cognitive Services in March, we are now making Cognitive Services available to all government and defense customers. With the preview open to more customers, U.S. government customers and partners can use Cognitive Services to feed real-time analysis that supports their mission objectives. By leveraging the artificial intelligence in Cognitive Services for tasks like facial recognition or text translation, customers can more easily build applications that help make informed decisions in critical scenarios such as public safety and justice. Azure Government's support for application innovation is part of why agencies are choosing Microsoft as their partner in digital transformation:

“Before beginning the search for specific technologies and digital platforms to meet DC’s digital needs, we identified our own list of standards for government cloud service providers. The first three criteria are compliance, reliability and the technical architecture and environment of the platform,” says Archana Vemulapalli, CTO of Washington D.C., “Microsoft offers a strong government cloud platform and services that help my staff and me perform our jobs effectively and create the city’s digital future.”

Announcing GPU clusters, Azure Cosmos DB and Cool Storage

Azure Government continues to add services at an accelerated pace to meet existing as well as unrealized needs of the U.S. government. By announcing GPU clusters today, Azure Government further enables the use of High Performance Computing (HPC) in the cloud for government. Whether using computational analysis to better research diseases and weather patterns or helping reduce backlogs of questions answered for citizens through predictive analysis, U.S. government customers and partners are sure to benefit.

Additionally, Azure Government now supports Azure Cosmos DB and Cool Blob Storage which enable government customers to deploy global-scale databases and choose from more options to control storage costs. Azure Cosmos DB is the next big leap in the evolution of DocumentDB and, as a part of this Azure Cosmos DB release, DocumentDB customers and their data automatically and seamlessly become Azure Cosmos DB customers. Additionally, we are making Cool Storage available so customers can store less frequently accessed data like backup data, media content, scientific data and active archival data at a reduced cost.

Powering innovation at the Department of Veterans Affairs

Agencies are choosing cloud computing and Azure Government to help speed innovation to those they serve. Last month, the U.S. Department of Veterans Affairs launched its Access to Care site on Azure Government. The site helps veterans and their caregivers decide where to go for healthcare services by providing data on patient satisfaction, appointment wait times, and other quality measures from surrounding clinics and VA facilities. Already, the VA has been able to meet the demand, while enhancing the website and continuously adding new functionality by leveraging the capabilities of Azure Government.

“The VA is focused on driving transparency and empowering the veteran,” said Jack Bates, Director VA OI&T Business Intelligence Service Line, “Working closely with Microsoft to deliver the Patient Wait Times App on Azure Government, we have enabled the Department to be fully transparent about performance, and to improve service to the veteran by providing meaningful data.”

By building and hosting Access to Care on Azure Government, which achieved a FedRAMP High ATO from the VA in March, the VA is continuing to embrace digital transformation and improve its services for veterans around the world.

Cloud computing for U.S. Government

From increased support for compliance requirements to application innovation, Azure Government continues to expand capabilities that make it easier for U.S. government customers and partners to take advantage of the cloud. And with six announced government regions in the U.S., Azure Government enables customers to run mission workloads closer to their users and provides geographic redundancy that is not possible with any other major cloud provider. To learn more about what Microsoft is doing in this area, check out the Azure Government blog and sign up for an Azure Government Trial.

– Tom
Source: Azure

Your Apple Watch Could Someday Detect This Risky Heart Condition

Siphotography / Getty Images

The Apple Watch’s heart rate sensors come in handy for knowing how hard your blood is pumping at the gym. But a new, if preliminary, study suggests that the smartwatch also has the potential to spot a much more serious medical condition: an irregular heart rhythm known as atrial fibrillation.

The study’s researchers first trained an algorithm to recognize instances of atrial fibrillation in heart rate measurements submitted by people all over the world. The algorithm then accurately detected when a small group of people was experiencing atrial fibrillation in real time, based on data flowing from the Apple Watch on their wrists.

These results are being presented Thursday at the Heart Rhythm Society’s annual conference in Chicago. They have not been published in a scientific journal and need to be validated in larger groups of patients, so don’t expect your Apple Watch to replace a heart check-up any time soon.

Still, cardiology experts say that if the concept is proven to work, the Apple Watch could be a useful tool in helping identify, track, and treat patients with a medical condition that affects an estimated 2.7 million Americans. Atrial fibrillation increases risk of blood clots, stroke, and heart failure — but because it sometimes doesn’t result in symptoms, it can go undetected and untreated, according to the American Heart Association. Catching it early, alerting a doctor, and treating it with blood-thinning medications could save lives.

“This is an important study which gives hope to the notion that someday, it may be possible on a widespread basis for patients or individuals to detect atrial fibrillation with smartwatch technology,” said Hugh Calkins, director of the Cardiac Arrhythmia Service at Johns Hopkins University, who was not involved with the study. He also stressed that it is “no more than an early proof of concept.”

“We were pretty surprised that a device you could go into Best Buy and purchase was capable of this level of accuracy.”

The study, led by researchers at UC San Francisco and the heart rate-analysis startup Cardiogram, illustrates how wearables could help make scientific research and health care more personalized, precise, and effective. Millions of devices sold by companies like Apple, Fitbit, Garmin, and Jawbone are capturing unprecedented quantities of biometric data, from steps to sleep to heart rate, that researchers have never had access to before.

There are already a couple of wireless, FDA-cleared devices that atrial fibrillation patients can use to track their heart rate, but because they aren’t meant to be worn all the time, they inevitably miss some data. The Zio Patch sticks to your chest for two weeks, which makes it most useful for monitoring patients right after they’re discharged from the hospital. The AliveCor, a set of electrodes that straps onto the back of your smartphone, produces a heart-rate readout when you press on it.

But people wear Apple Watches all the time. One analyst estimated that Apple sold 6 million units in the last quarter of 2016 alone — nearly 80% of the total smartwatch market. That popularity, along with its high-quality heart-rate sensor, makes it an attractive tool for researchers like Greg Marcus, an atrial fibrillation expert at UCSF and senior author of the study.

Instead of requiring people to buy new gadgets, “the idea here is that we can leverage what people are buying on their own and using anyway,” Marcus told BuzzFeed News.

Marcus is leading an ongoing research project, called the Health e-Heart Study, which aims to study heart disease and health in people scattered throughout the globe. For this study, his team drew from a pool of about 6,400 Apple Watch owners, including 166 people with atrial fibrillation and AliveCor devices. Together, they produced nearly 140 million heart rate measurements and 6,340 AliveCor recordings.

Justin Sullivan / Getty Images

Cardiogram, a startup that’s raised $2 million from Andreessen Horowitz and other investors, collected those data points through its iOS app. Then it used them to train an algorithm to distinguish atrial fibrillation patterns from normal heart rhythms.

To see if it worked, Marcus’ group waited for atrial fibrillation patients to come to UCSF for cardioversions, the procedure for restoring a normal heart rhythm. They gave the patients Apple Watches to wear before, during, and after, and also ran electrocardiograms for a definitive record of their heartbeats. When the algorithm was later applied to the collected heart rate data, it turned out to flag atrial fibrillation episodes with 97% accuracy.

“We were pretty surprised that a device you could go into Best Buy and purchase was capable of this level of accuracy,” said Brandon Ballinger, cofounder of Cardiogram.

It’s an early example of how machine learning can potentially help diagnose people and spot health problems before humans do. But it may be a while before physicians feel totally comfortable relying on an algorithm.

“The downside is there’s a bit of a black-box nature to it,” Marcus said. “By its nature, it’s figuring out the best way to do it and we as investigators may not have as much transparency into the exact algorithm it’s using. That’s going to take some getting used to.”

Other hurdles mean algorithms and wearables are a long way from becoming a mainstay in medical care. When the Apple Watch was being tested on patients in the study, it had to be in workout mode in order to continually capture data. People had to keep still, since the heart rate sensors are potentially less accurate when the wearer is moving around. The algorithm was also tested on a small group of about 50. “This is just a promissory note because they only have a limited number of people they’ve analyzed so far,” said Eric Topol, a cardiologist and geneticist at the Scripps Research Institute, who was not involved with the study.

And while many people have Apple Watches, not all of them are at risk for atrial fibrillation, since the condition is more common in people over age 60. As Calkins put it, “Is your grandmother going to be able to wear this smartwatch to figure out if she has atrial fibrillation or not?”

Still, Topol says that he can see a future where “the Apple Watch and other wearables will get to a point where people will get an alert on their phone or through their devices that says look like you may have atrial fibrillation.” He added, “That’s where we’re headed.”

Source: BuzzFeed

Top Liberals Are Unintentionally Building An Anti-Trump Conspiracy Media

Harvard Law School's Laurence Tribe accepts an award from the ACLU in 2011.

Alberto E. Rodriguez / Getty Images

Democrats and the mainstream media have spent the months since Donald Trump's election fixated on the flood of unconfirmed reports, half-truths, and outright propaganda that accompanied his rise.

But some of the country’s leading liberal lights — respected figures including elected officials, prominent legal scholars, members of the media and celebrities — are themselves sharing wild allegations about the Trump administration from unreliable sources.

Perhaps no one embodies this trend so well as Laurence Tribe. Tribe is one of the country’s foremost constitutional lawyers, the Carl M. Loeb University Professor at Harvard Law School. He has argued dozens of cases in front of the Supreme Court. He’s a major figure in American public life. In recent months Tribe has devoted much of his activity on Twitter to outraged extrapolation about the Trump administration. Often, these take the form of “big if true” tweets that cite unconfirmed reports about Trump’s possible misdeeds and are essentially conjecture.

On April 22, Tribe shared a story from a website called the Palmer Report — a site that has been criticized for spreading hyperbole and false claims — entitled “Report: Trump gave $10 million in Russian money to Jason Chaffetz when he leaked FBI letter,” a reference to the notorious pre-election letter sent by former FBI director James Comey to members of Congress that many have blamed for Hillary Clinton’s November loss.

The “report” the article points to is a since-deleted tweet by a Twitter user named LM Garner, who describes herself in her Twitter biography as “Just a VERY angry citizen on Twitter. Opinions are my own. Sometimes prone to crazy assertions. Not a fan of this nepotistic kleptocracy.” Garner, who has 257 followers, has tweeted more than 25,000 times from her protected account.

“I don't know whether this is true,” Tribe’s tweet reads, “But key details have been corroborated and none, to my knowledge, have been refuted. If true, it's huge.”

Reached by email, Tribe said that he was aware of the Palmer Report’s “generally liberal slant” and “that some people regard a number of its stories as unreliable.” Still, he added, “When I share any story on Twitter, typically with accompanying content of my own that says something like ‘If X is true, then Y,’ I do so because a particular story seems to be potentially interesting, not with the implication that I’ve independently checked its accuracy or that I vouch for everything it asserts.”

Asked whether he had considered his role in spreading unconfirmed information, given his stature in American society, Tribe responded that “I really don’t have anything to tell you about my thoughts regarding my personal role in sharing information over social media in this usually agnostic manner.”

Tribe is far from alone among prominent liberals in sharing unconfirmed, speculative, and sometimes wild information. But he is emblematic of an information echo chamber that has grown up since the election around sites like the Palmer Report and figures like the anti-Russian influence crusader Louise Mensch, in which anti-Trump public figures share unreliable information, which the sources of those reports then cite to bolster their own legitimacy. It therefore operates similarly — though it is smaller and far less powerful — to the vast new right-wing online media that launders dubious claims through increasingly mainstream outlets before, sometimes, reaching the highest levels of government.

The Palmer Report is the work of Bill Palmer, who describes himself on his website as a “political journalist who covered the 2016 election cycle from start to finish.” Before the Palmer Report, Palmer ran a site called Daily News Bin, which Snopes’ Brooke Binkowski described as “basically a pro-Hillary Clinton ‘news site’” that “was out there to counter misinformation.” Last November, Palmer introduced his new site as an “investigative reporting…side project” and has since written hundreds of articles that range from “evidence-free” assertions that Vladimir Putin personally ordered last month’s chemical attack in Syria to a story entitled “Brain specialist doctor believes Donald Trump’s frontal lobe is failing,” based on a single tweet by a doctor. Along the way Palmer has collected more than 63,000 Twitter followers and more than a few famous signal boosters.

Indeed, the site includes a “Thank Yous” section, a long list of liberal notables who have shared the site’s stories. It includes MSNBC host Joy-Ann Reid, Harvard Law School Professor Laurence Tribe, novelist Joyce Carol Oates, director Rob Reiner, Trump foil Rosie O’Donnell, and Mark Hamill — Luke Skywalker. The Democratic California Congressman Ted Lieu is specially thanked for sharing a Palmer Report story on his official website.

Lieu's office did not respond to a request for comment.

The site had its most significant exposure yet this week. As confusion swirled in Washington Wednesday following President Trump’s firing of FBI director James Comey, Democratic Massachusetts Senator Ed Markey went on CNN to make an explosive claim: A grand jury had been empaneled in New York to investigate Trump’s ties to Russia. (Another grand jury investigation, in Virginia, has been reported by CNN.)

Among the outlets that eagerly picked up the news were the Palmer Report and the Twitter feed of Louise Mensch, the anti-Trump crusader who has accused hundreds of people of being Russian agents, often with no evidence.

And what were Markey’s sources for this alarming claim? According to a Guardian reporter and the Daily Caller, none other than the Palmer Report and Mensch themselves. Hours after making the claim, Markey was forced to apologize for spreading unsubstantiated information, and through a spokesman, to reveal that he had no direct knowledge of any New York investigation.

Markey's office did not respond to a request for comment.

And despite Markey’s apology, as of Thursday afternoon, the Palmer Report headline read: “U.S. Senator confirms grand jury is now underway in Donald Trump case in New York State.”

Source: BuzzFeed

Azure DevTest Labs updates at Build 2017

Azure DevTest Labs is a commercial Azure service that enables IT admins to offer cost-controlled self-service so developers and testers can quickly create environments in Azure, while minimizing waste and optimizing cost. We announced the service's general availability last May, and we never stop exploring opportunities to build solutions that solve our customers’ real problems in various scenarios. Today, with Microsoft Build 2017 happening in Seattle, I would like to take a moment to look back at the key functionality we’ve shipped since the Connect() conference last November and explain how it can help you in various scenarios.
Source: Azure