Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack

Mirantis Cloud Platform 1.0 is a distribution of OpenStack and Kubernetes that can orchestrate VMs, Containers and Bare Metal

SUNNYVALE, CA – April 19, 2017 – Mirantis, the managed open cloud company, today announced availability of a commercially-supported distribution of OpenStack and Kubernetes, delivered in a single, integrated package, and with a unique build-operate-transfer delivery model.

“Today, infrastructure consumption patterns are defined by the public cloud, where everything is API driven, managed and continuously delivered. Mirantis OpenStack, which featured Fuel as an installer, was the easiest OpenStack distribution to deploy, but every new version required a forklift upgrade,” said Boris Renski, Mirantis co-founder and CMO. “Mirantis Cloud Platform departs from the traditional installer-centric architecture and towards an operations-centric architecture, continuously delivered by either Mirantis or the customers’ DevOps team with zero downtime. Updates no longer happen once every 6-12 months, but are introduced in minor increments on a weekly basis. In the next five to ten years, all vendors in the space will either find a way to adapt to this pattern or they will disappear.”

Along with launching Mirantis Cloud Platform (MCP) 1.0, Mirantis is also introducing a unique delivery model for the platform. Unlike traditional vendors that sell software subscriptions, Mirantis will onboard customers to MCP through a build-operate-transfer delivery model. The company will operate an open cloud platform for customers for a period of at least twelve months with up to a four-nines SLA before offboarding the operational burden to the customer's team, if desired. The delivery model ensures that not just the software, but also the customer's team and processes, are aligned with DevOps best practices.

Unlike any other solution in the industry, customers onboarded to MCP have an option to completely transfer the platform under their own management. Everything in MCP is based on popular open standards with no lock-in, making it possible for customers to break ties with Mirantis and run the platform independently should they choose to do so.

“We are happy to see a growing number of vendors embrace Kubernetes and launch a commercially supported offering based on the technology,” said Allan Naim from the Kubernetes and Container Engine Product Team.

“As the industry embraces composable, open infrastructure, the ‘LAMP stack of cloud’ is emerging, made up of OpenStack, Kubernetes, and other key open technologies,” said Mark Collier, chief operating officer, OpenStack Foundation. “Mirantis Cloud Platform presents a new vision for the OpenStack distribution, one that embraces diverse compute, storage and networking technologies continuously rather than via major upgrades on six-month cycles.”

Specifically, Mirantis Cloud Platform 1.0 is:

Open Cloud Software: providing a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis OpenStack to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN), specifically Mirantis OpenContrail for VMs and bare metal, and Calico for containers.
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain: Mirantis DriveTrain sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain enables:

Increased Day 1 flexibility to customize the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight: enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stack through a unified set of software services and dashboards.

StackLight avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
It includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

With the release of MCP, Mirantis is also announcing end-of-life for Mirantis OpenStack (MOS) and Fuel by September 2019. Mirantis will be working with all customers currently using MOS on a tailored transition plan from MOS to MCP.

To learn more about MCP, watch an overview video and sign up for the introductory webinar at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis

Let’s Meet At OpenStack Summit In Boston!


 
The citizens of Cloud City are suffering — Mirantis is here to help!
 
We're planning to have a super time at the Summit, and hope that you can join us in the fight against vendor lock-in. Come to booth C1 to power up on the latest technology and our revolutionary Mirantis Cloud Platform.

If you'd like to talk with our team at the summit, simply contact us and we'll schedule a meeting.

REQUEST A MEETING

 
Free Mirantis Training @ Summit
Take advantage of our special training offers to power up your skills while you're at the Summit! Mirantis Training will be offering an Accelerated Bootcamp session before the big event. Our courses will be conveniently held within walking distance of the Hynes Convention Center.

Additionally, we're offering a discounted Professional-level Certification exam and a free Kubernetes training, both held during the Summit.

 
Mirantis Presentations
Here's where you can find us during the summit…
 
MONDAY MAY 8

Monday, 12:05pm-12:15pm
Level: Intermediate
Turbo Charged VNFs at 40 Gbit/s. Approaches to deliver fast, low latency networking using OpenStack.
(Gregory Elkinbard, Mirantis; Nuage)

Monday, 3:40pm-4:20pm
Level: Intermediate
Project Update – Documentation
(Olga Gusarenko, Mirantis)

Monday, 4:40pm-5:20pm
Level: Intermediate
Cinder Stands Alone
(Ivan Kolodyazhny, Mirantis)

Monday, 5:30pm-6:10pm
Level: Intermediate
m1.Boaty.McBoatface: The joys of flavor planning by popular vote
(Craig Anderson, Mirantis)

 

TUESDAY MAY 9

Tuesday, 2:00pm-2:40pm
Level: Intermediate
Proactive support and Customer care
(Anton Tarasov, Mirantis)

Tuesday, 2:30pm-2:40pm
Level: Advanced
OpenStack, Kubernetes and SaltStack for complete deployment automation
(Aleš Komárek and Thomas Lichtenstein, Mirantis)

Tuesday, 2:50pm-3:30pm
Level: Intermediate
OpenStack Journey: from containers to functions
(Ihor Dvoretskyi, Mirantis; Iron.io, BlueBox)

Tuesday, 4:40pm-5:20pm
Level: Advanced
Point and Click -> CI/CD: Real world look at better OpenStack deployment, sustainability, upgrades!
(Bruce Mathews and Ryan Day, Mirantis; AT&T)

Tuesday, 5:05pm-5:45pm
Level: Intermediate
Workload Onboarding and Lifecycle Management with Heat
(Florin Stingaciu and Lance Haig, Mirantis)

 

WEDNESDAY MAY 10

Wednesday, 9:50am-10:30am
Level: Intermediate
Project Update – Neutron
(Kevin Benton, Mirantis)

Wednesday, 11:00am-11:40am
Level: Intermediate
Project Update – Nova
(Jay Pipes, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Kuryr-Kubernetes: The seamless path to adding Pods to your datacenter networking
(Ilya Chukhnakov, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
OpenStack: pushing to 5000 nodes and beyond
(Dina Belova and Georgy Okrokvertskhov, Mirantis)

Wednesday, 4:30pm-5:10pm
Level: Intermediate
Project Update – Rally
(Andrey Kurilin, Mirantis)

 

THURSDAY MAY 11

Thursday, 9:50am-10:30am
Level: Intermediate
OSprofiler: evaluating OpenStack
(Dina Belova, Mirantis; VMware)

Thursday, 11:00am-11:40am
Level: Intermediate
Scheduler Wars: A New Hope
(Jay Pipes, Mirantis)

Thursday, 11:30am-11:40am
Level: Beginner
Saving one cloud at a time with tenant care
(Bryan Langston, Mirantis; Comcast)

Thursday, 3:10pm-3:50pm
Level: Advanced
Behind the Scenes with Placement and Resource Tracking in Nova
(Jay Pipes, Mirantis)

Thursday, 5:00pm-5:40pm
Level: Intermediate
Terraforming OpenStack Landscape
(Mykyta Gubenko, Mirantis)

 

Notable Presentations By The Community
 
TUESDAY MAY 9

Tuesday, 11:15am-11:55am
Level: Intermediate
AT&T Container Strategy and OpenStack's role in it
(AT&T)

Tuesday, 11:45am-11:55am
Level: Intermediate
AT&T Cloud Evolution: Virtual to Container based (CI/CD)^2
(AT&T)

WEDNESDAY MAY 10

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Event Correlation & Life Cycle Management – How will they coexist in the NFV world?
(Cox Communications)

Wednesday, 5:20pm-6:00pm
Level: Intermediate
Nova Scheduler: Optimizing, Configuring and Deploying NFV VNFs on OpenStack
(Wind River)

THURSDAY MAY 11

Thursday, 9:00am-9:40am
Level: Intermediate
ChatOpsing Your Production OpenStack Cloud
(Adobe)

Thursday, 11:00am-11:10am
Level: Intermediate
OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
(Ericsson)

Thursday, 1:30pm-2:10pm
Level: Beginner
Participating in translation makes you an internationalized OpenStacker & developer
(Deutsche Telekom AG)

Thursday, 5:00pm-5:40pm
Level: Beginner
Future of Cloud Networking and Policy Automation
(Cox Communications)

Source: Mirantis

Webinar Q&A: Introducing Docker Enterprise Edition (EE)

A few weeks ago we announced Docker Enterprise Edition (EE), the trusted, certified and supported container platform. Docker EE enables IT teams to establish a Containers-as-a-Service (CaaS) environment to converge legacy, ISV and microservices apps into a single software supply chain that is flexible, secure and infrastructure-independent. With a built-in orchestration architecture (swarm mode), Docker EE allows app teams to compose and schedule simple to complex apps to drive their digital transformation initiatives.

On March 14th we hosted a live webinar to provide an overview and demonstration of Docker EE. View the recorded session below and read through some of the most popular questions.

Frequently Asked Questions
Q: How is Docker EE licensed?
A: Docker EE is licensed per node. A node is an instance running on a bare metal or virtual server. For more details visit www.docker.com/pricing
Q: Is Google Cloud also one of your certified infrastructure partners?
A: Docker EE is available today for both Azure and AWS. Google Cloud is currently offered as a private beta with Docker Community Edition. Learn more in this blog post and sign up at https://beta.docker.com 
Q: What technology is used for the security scanning and vulnerability features of Docker EE? Does security scanning have a separate license?
A: Docker Security Scanning is the technology that conducts binary-level scanning of Docker images and continuous vulnerability monitoring. This capability is included in the Docker EE Advanced subscription tier. A free 30-day trial is available for you to try security scanning.
Q: Will signing and scanning images for vulnerabilities work on any image that is internally developed, or only on images downloaded from the Docker Store?
A: Yes, signing and scanning works for any image that is pushed to the on-premises registry (DTR) that is part of Docker EE.
Q: Where can I see the key features included in each of the different Docker EE tiers?
A: There are three tiers: Basic, Standard and Advanced. A comparison table is available at www.docker.com/pricing. 

Q: Can we use the container management layer (UCP) with Docker Community Edition?
A: No. The container management (UCP) and image registry (DTR) components are tested, validated and supported on Docker EE certified infrastructure only.
Q: Can you run Docker Certified Containers over Docker CE Engine?
A: No. Certified Containers and Plugins are tested, validated and supported on Docker EE certified infrastructure only.
Q: How does the licensing work for Certified Containers and Plugins downloaded from Docker Store?
A: Similar to many other software marketplaces, Docker Store provides an interface for publishers to offer a pay-as-you-go or BYOL style of container through the Docker Store. Entitlements and upgrades are managed through the Docker Store by the publisher. The publisher determines the subscription price for the end user.
Q: What is the difference between Docker CE and Docker EE?
A: The Docker product page provides a comparison between Docker CE and EE: https://www.docker.com/get-docker. Docker CE is a free Docker platform available for a wide range of community infrastructure. Docker EE is an integrated container management and security platform with certification and capabilities like role-based access control, LDAP/AD integration, deployment policies and more.
Q: Can Docker EE be run within my enterprise or is it only run externally?
A: Docker EE can be deployed on-premises or in your VPC.
Q: Does EE Basic use normal swarm, since UCP comes with the other tiers?
A: Both Docker CE and EE have the built-in orchestration capabilities of swarm mode. Each node is a fully functioning building block that can be a manager or worker node in the cluster. With Docker Enterprise Edition, the integrated management UI builds on top of the built-in swarm mode orchestration and integrates with the private registry and role-based access controls to provide a robust platform for end-to-end container application management.
Q: What is the migration path from Docker Community Edition to Enterprise Edition?
A: Apps built on Docker CE will also run on Docker EE. However, there is no in-place migration of the cluster itself. To migrate apps to a Docker EE environment, a new cluster must be set up, and the same Compose files and images can then be deployed as services to the new Docker EE environment.
For More Information:

Learn more about Docker CE and EE
Try Docker EE for free
Register for DockerCon 2017 and Federal Summit

Source: https://blog.docker.com/feed/

How Watson helps H&R Block deliver engaging customer experiences

Just under three-fourths of US citizens get tax refunds every year, according to H&R Block CEO Bill Cobb. For H&R Block customers, the number is higher; it's closer to 85 percent.
Now that IBM Watson is helping H&R Block tax professionals guide customers through the filing process, the company is aiming to make that number rise even further.
Cobb joined IBM CEO Ginni Rometty on stage at IBM InterConnect Tuesday to explain just how H&R Block teamed up with IBM to get Watson working on taxes and how the whole process works.
“I think this is one of the best examples of two brands coming together where they worked seamlessly,” Cobb said after showing the ad that aired during this year's big game. Rometty added that H&R Block is “a wonderful exemplar of continuous transformation.”
Cobb shared that, after the 2016 tax season, H&R Block research found that customers were looking for more engaging experiences. So he called IBM on his landline phone and asked how Watson could make that happen while still keeping tax professionals at the center of customer relationships. In June 2016, teams from both companies were working on a solution. Just eight months later, ads for the service were running on TV.
“Anyone who says IBM doesn't work quickly, I'm here to tell you, IBM works fast,” Cobb said.
The cognitive interview
Here's how the process works: a customer walks into an H&R Block office and sits down in front of a screen, where previously they usually just watched a tax professional type away. A tax professional begins the usual interview, asking about life events, potential deductions and possible credits.
Throughout that process, Watson is listening in, referencing 600 million data points and the entire US tax code, creating a “knowledge graph” that outlines all the areas where there might be savings.
After the interview, Watson displays a massive chart of all the possible deductions and credits, and the tax professional goes through that chart with the customer, explaining all the different ways to increase the refund.
Positive response
Even before H&R Block with Watson was branded, when it was just a pilot program, customer satisfaction was ticking up, Cobb said. Now it's rising even more.
Tax professionals are responding positively, too, he said.
“This makes them feel like they're really on the cutting edge,” Cobb said.
Cobb said Watson is “a beautiful fit for the nature of our business” and is likely to expand into other areas of H&R Block's services, such as digital tax preparation.
Learn more about Watson on IBM Cloud.
Source: Thoughts on Cloud

AT&T and IBM partner for analytics with Watson

Today at IBM InterConnect, we learned that IBM is partnering with AT&T to support enterprise customers' Internet of Things (IoT) deployments with data insights.
This data is huge for business customers, but is only valuable with real-time, meaningful insights.
AT&T will be using a variety of IBM products including:

Watson IoT Platform: to build the next generation of connected industrial IoT devices and products that continuously learn from the physical world
IBM Watson Data Platform: which is the fastest data ingestion engine, combined with cognitive powered decision making to help uncover business insights and value from data, possibly from the weather, the road, social media, or customer sales data
IBM Machine Learning Service: used by AT&T to give their customers access to machine learning

Benefiting AT&T customers
Companies can use IoT data to predict their machine maintenance, but how does this impact AT&T customers?
For example, say an oil and gas company wants to detect unusual events in its wells. By using AT&T’s IoT network and the IBM Watson Data Platform, AT&T’s IoT analytics solutions will ingest data from hundreds of wells, creating the models necessary with appropriate machine learning libraries and open source technology to help predict potential failures or machine malfunctions. The company will be able to detect anomalies in less time and with greater accuracy.
“We have more than 30 million connections on our network today and that number continues to grow, primarily driven by enterprise adoption,” said Chris Penrose, president of IoT solutions at AT&T. “Integrating the IBM Watson Data Platform into our IoT capabilities will be huge for our enterprise customers.”
Bringing IoT innovations to market
The news today builds on existing collaborations between AT&T and IBM to deliver new IoT innovations to the market. The companies’ strategic alliance brings together leading wireless connectivity, advanced analytics and cognitive capabilities for AT&T’s enterprise customers to improve their business processes.
Stay tuned for further announcements live from IBM InterConnect.
Start your next IoT project.
A version of this article originally appeared on the IBM Internet of Things blog.
Source: Thoughts on Cloud

ASP.NET on OpenShift: Getting started in ASP.NET

In parts 1 & 2 of this tutorial, I'll be going over getting started quickly by using templates in Visual Studio Community 2015. This means that it'll be for Windows in this part. However, I'll go more in-depth with doing everything without templates in Visual Studio Code in a following tutorial, which will be applicable to Linux or Mac as well as Windows. If you're not using Windows, you can still follow along in parts 1 & 2 to get a general idea of how to create a REST endpoint in .NET Core.
Source: OpenShift

What’s new in OpenStack Ocata webinar — Q&A

On February 22, my colleagues Rajat Jain, Stacy Verroneau, and Michael Tillman and I held a webinar to discuss the new features in OpenStack's latest release, Ocata. Unfortunately, we ran out of time for questions and answers, so here they are.
Q: What are the benefits of using the cells capability?
Rajat: The cells concept was introduced in the Juno release, and as some of you may recall, it was meant to allow a large number of nova-compute instances to share OpenStack services.

Cells functionality therefore enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments.

When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services of a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker. These capabilities were provided by the nova-cells and nova-api services.
One of the key changes in Ocata is the upgrade to Cells v2, which now relies only on the nova-api service for all synchronization across cells.
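The split can be sketched as a nova.conf fragment for a node in one cell (the hostnames, database names and credentials below are placeholders, not part of any reference deployment): the API database is shared across cells, while each cell points at its own database and message queue.

```ini
# Hypothetical nova.conf fragment for a Cells v2 deployment.
[api_database]
# Shared API database, used by nova-api across all cells.
connection = mysql+pymysql://nova:secret@api-db-host/nova_api

[database]
# This cell's own database.
connection = mysql+pymysql://nova:secret@cell1-db-host/nova_cell1

[DEFAULT]
# This cell's own message queue broker.
transport_url = rabbit://nova:secret@cell1-mq-host:5672/
```

Cells are then registered and their hosts mapped with the `nova-manage cell_v2` commands (for example, `create_cell` and `discover_hosts`).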
Q: What is the placement service and how can I leverage it?
Rajat: The placement service, which was introduced in the Newton release, is now a key part of OpenStack and mandatory for determining the optimum placement of VMs. Basically, you set up pools of resources, provide an inventory of the compute nodes, and then set up allocations for resource providers. Then you can set up policies and models for optimum placement of VMs.
Q: What is the OS profiler, and why is it useful?
Rajat: OpenStack consists of multiple projects. Each project, in turn, is composed of multiple services. To process a request (for example, to boot a virtual machine), OpenStack uses multiple services from different projects. If something in this process runs slowly, it's extremely complicated to understand what exactly goes wrong and to locate the bottleneck.
To resolve this issue, a tiny but powerful library, osprofiler, was introduced. The osprofiler library will be used by all OpenStack projects and their Python clients. It provides functionality to generate one trace per request, flowing through all involved services. This trace can then be extracted and used to build a tree of calls, which can be quite handy for a variety of reasons (for example, in isolating cross-project performance issues).
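The idea can be illustrated with a stdlib-only sketch (this is not osprofiler's actual API, just the underlying concept): each service call emits a trace point carrying the request's trace ID and the ID of its parent call, and a collector rebuilds the call tree from those points.

```python
import uuid
from collections import defaultdict

def make_point(name, trace_id, parent_id=None):
    """One trace point per service call; parent_id links points into a tree."""
    return {"id": uuid.uuid4().hex, "name": name,
            "trace_id": trace_id, "parent_id": parent_id}

def build_tree(points):
    """Group points by parent so the full request path can be walked."""
    children = defaultdict(list)
    roots = []
    for p in points:
        if p["parent_id"] is None:
            roots.append(p)
        else:
            children[p["parent_id"]].append(p)

    def walk(p):
        return {"name": p["name"],
                "children": [walk(c) for c in children[p["id"]]]}

    return [walk(r) for r in roots]

# A hypothetical boot request flowing from nova-api to two downstream services.
trace_id = uuid.uuid4().hex
api = make_point("nova-api", trace_id)
points = [api,
          make_point("nova-scheduler", trace_id, api["id"]),
          make_point("glance-api", trace_id, api["id"])]
tree = build_tree(points)
print(tree[0]["name"])                           # nova-api
print([c["name"] for c in tree[0]["children"]])  # its two child calls
```

In the real library, the collected points also carry timestamps, so slow branches of the tree stand out immediately.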
Q: If I have Keystone connected to a backend Active Directory, will I benefit from the auto-provisioning of the federated identity?
Rajat: Yes. The federated identity mapping engine now supports the ability to automatically provision projects for federated users. A role assignment will automatically be created for the user on the specified project. Prior to this, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience. This is therefore a big usability enhancement for deployers leveraging the federated identity plugins.
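As a sketch, the auto-provisioning is driven by the mapping rules themselves. A hypothetical mapping might look like the following, where the remote attribute, project name and role name are all placeholders for your environment:

```json
{
  "rules": [
    {
      "local": [
        {"user": {"name": "{0}"}},
        {
          "projects": [
            {"name": "Production", "roles": [{"name": "member"}]}
          ]
        }
      ],
      "remote": [{"type": "REMOTE_USER"}]
    }
  ]
}
```

With a rule like this, a federated user authenticating for the first time would get the named project provisioned and a role assignment created on it, with no administrator involvement.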
Q: Is FWaaS really used out there?
Stacy: Yes it is, but its viability in production is debatable and going with a 3rd party with a Neutron plugin is still, IMHO, the way to go.
Q: When is Octavia GA planned to be released?
Stacy: Octavia is forecast to be GA in the Pike release.
Q: Are DragonFlow and Tricircle ready for Production?
Stacy: Those are young Big Tent projects, but we're pretty sure we will see a big evolution in Pike.
Q: What's the codename for the placement service, please?
Stacy: It's just called the Placement API. There's no fancy name.
Q: Does Ocata continue support for Fernet tokens?
Rajat: Yes.
Q: With a federated provider, can I integrate my OpenStack environment with my on-prem AD and allow domain users to use OpenStack?
Rajat: This was always supported, and is not new to Ocata. More details at https://docs.openstack.org/admin-guide/identity-integrate-with-ldap.html
What's new in this area is that the federated identity mapping engine now supports automatic provisioning of projects for federated users, with role assignments created automatically on the specified project. Previously, a federated user had to attempt to authenticate before an administrator could assign roles directly to their shadowed identity, resulting in a strange user experience.

Q: If I'm using my existing domain users from AD in OpenStack, how would I control their rights/roles to perform specific tasks in an OpenStack project?
Rajat: You would first set up authentication via LDAP, then provide connection settings for AD and set the identity driver to ldap in keystone.conf. Next, you will have to assign roles and projects to the AD users. Since Mitaka, the only option you can use is the SQL driver for the assignment in keystone.conf, but you will have to do the mapping. Most users prefer this approach anyway, as they want to keep the AD read-only from the OpenStack connection. You can find more details on how to configure Keystone with LDAP here.
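Such a setup might look like the following keystone.conf sketch, where the AD hostname, DNs, service account and attribute names are placeholders for your own directory:

```ini
# Hypothetical keystone.conf fragment: identities come from AD over LDAP,
# while role/project assignments stay in SQL (AD remains read-only).
[identity]
driver = ldap

[assignment]
driver = sql

[ldap]
url = ldap://ad.example.com
user = CN=svc-keystone,OU=Services,DC=example,DC=com
password = secret
suffix = DC=example,DC=com
user_tree_dn = OU=Users,DC=example,DC=com
user_objectclass = person
user_id_attribute = cn
user_name_attribute = sAMAccountName
```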
Q: What, if anything, was pushed out of the “big tent” and/or did not get robustly worked on?
Nick:  You can get a complete view of work done on every project at Stackalytics.
Q: So when is Tricircle being released for use in production?
Stacy: Not soon enough. Being a new Big Tent project, it needs some time to develop traction.
Q: Do we support creation of SR-IOV ports from Horizon during instance creation? If not, are there any plans there?
Nick: According to the Horizon team, you can pre-create the port and assign it to an instance.
Q: Way to go, warp speed Michael! Good job, Rajat and Stacy. Don't worry about getting behind, I blame Nick anyway. Then again, I always blame Nick.
Nick: Thanks Ben, I appreciate you, too.

H&R Block teams with IBM for cloud-based Watson tax services

When you're watching the big game this Sunday, keep an eye out for H&R Block's new ad that highlights an initiative to use cloud-based IBM Watson services to help customers make sense of the time-consuming and often confusing process of filing their taxes.
The first phase of the two companies' collaboration involves training Watson to knowledgeably answer the numerous questions that come up during the tax preparation process. To do so, H&R Block fed the IBM supercomputer all 74,000 pages of the US tax code. That's just the first step. NetworkWorld explained what comes next:
IBM said that Watson’s initial training was validated by H&R Block tax experts – who have filed some 720 million returns since 1955 – and the initial corpus will expand over time through each subsequent tax season. During the next phase, H&R Block tax professionals will work with IBM to continue teaching Watson all about tax and apply the technology to innovate in other areas of their business.
Applying Watson's machine learning, natural language and image recognition capabilities to the world of tax preparation adds another industry to its skillset. It has also worked in fields including healthcare and cybersecurity.

For more, check out NetworkWorld's full article.
Source: Thoughts on Cloud

Q&A: 15 Questions AWS Users Ask About DDC For AWS

Docker is deployed across all major cloud service providers, including AWS. So when we announced Docker Datacenter for AWS (which makes it even easier to deploy DDC on AWS) and showed live demos of the solution at AWS re:Invent 2016, it was no surprise that we received a ton of interest. Docker Datacenter for AWS, as you can guess from its name, is now the easiest way to install and stand up the Docker Datacenter (DDC) stack on an AWS EC2 cluster. If you are an AWS user looking for an enterprise container management platform, this blog will help answer questions you have about using DDC on AWS.
In last week’s webinar, Harish Jayakumar, Solutions Engineer at Docker, provided a solution overview and demo to showcase how the tool works and some of the cool features within it. You can watch the recording of the webinar below:

We also hosted a live Q&A session at the end where we opened up the floor to the audience and did our best to get through as many questions as we could. Below are fifteen of the questions we received from the audience. We selected these because we believe they do a great job of representing the overall set of inquiries we received during the presentation. Big shout out to Harish for tag-teaming the answers with me.
Q 1: How many VPCs are required to create a full cluster of UCP, DTR and the workers?
A: The DDC template creates one new VPC along with its subnets and security groups. More details here: https://ucp-2-1-dtr-2-2.netlify.com/datacenter/install/aws/
However, if you want to use DDC with an existing VPC, you can always deploy DDC directly without using the CloudFormation template.
Q 2: Is the $150/month cost per instance? Is this for an EC2 instance?
A: Yes, the $150/month cost is per EC2 instance. This is our monthly subscription model and is purchasable directly on Docker Store. We also offer annual subscriptions that are currently priced at $1,500 per node/per year or $3,000 per node/per year. You can view all pricing here.
Q 3: Would you be able to go over how to view logs for each container? And what's the type of log output that UCP shows in the UI?
A: Within the UCP UI, click on the “Resources” tab and then go to “Containers.” Once you have selected “Containers,” you can click on each individual container and see its logs within the UI.
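The same logs are also reachable from the command line. As a rough sketch, assuming your Docker CLI is pointed at the cluster (for example via a UCP client bundle) and a container named `web` exists (the name is hypothetical):

```shell
# Show the logs of a container named "web"
docker logs web

# Follow the log stream live and prefix each line with a timestamp
docker logs --follow --timestamps web

# Limit output to the most recent 100 lines
docker logs --tail 100 web
```

The UI view and `docker logs` both read the same stdout/stderr streams captured by the container's logging driver.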

Q 4: How does the resource allocation work? Can we over allocate CPU or RAM?
A: Yes. By default, each container’s access to the host machine’s CPU cycles is unlimited, but you can set various constraints to limit a given container’s access to the host machine’s CPU cycles. For RAM, Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory. Docker can also provide soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. You can find more details here: https://docs.docker.com/engine/admin/resource_constraints/
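As an illustrative sketch (the container names and limits are arbitrary), both kinds of constraint are set with flags on `docker run`:

```shell
# Hard limits: cap the container at 1.5 CPUs and 512 MB of memory
docker run -d --name capped --cpus="1.5" --memory="512m" nginx

# Soft limit: the container may exceed 256 MB until the host
# comes under memory pressure, at which point it is reclaimed first
docker run -d --name reserved --memory-reservation="256m" nginx
```

The same constraints can be applied to services scheduled through UCP.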
Q 5: Can access to the console via UCP be restricted via RBAC constraints?
A: Yes. Here is a blog explaining access controls in detail:

https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/

Q 6: Can we configure alerting from Docker Datacenter based on user-definable criteria (e.g. resource utilization of services)?
A: Yes, but with a little tweaking. Everything in Docker is event-driven, so you can configure alerts to trigger on each event and take the necessary action. Within the UI, you can see all of the resource usage listed, and you have the ability to set how you want to see the notifications associated with it.
Q 7: Is there a single endpoint in front of the three managers?
A: Within UCP, we suggest teams deploy three managers to ensure high availability of the cluster. As for a single endpoint, you can configure one if you would like. For example, you can configure an ELB in AWS to sit in front of those three managers, and clients can then reach that one load balancer instead of accessing an individual manager by its IP.
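A minimal sketch of pointing the Docker CLI at such a load balancer (the DNS name `ucp.example.com` is hypothetical; the TLS client certificates would normally come from a UCP client bundle):

```shell
# Point the CLI at the load balancer fronting the three managers
export DOCKER_HOST=tcp://ucp.example.com:443
export DOCKER_TLS_VERIFY=1

# Each request lands on whichever manager the ELB selects;
# any manager can answer, since they share cluster state
docker node ls
```

Because the managers replicate state among themselves, it does not matter which one the load balancer routes a given request to.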
Q 8: Do you have to use DTR or can you use alternative registries such as AWS ECR, Artifactory, etc.?
A: With the CloudFormation template, it is only DTR. Docker Datacenter is the end-to-end enterprise container management solution, and DTR/UCP are integrated. This means they share several components, and they have SSO enabled between them so the same LDAP/AD group can be used. The solution also ensures a secure software supply chain, including image signing and scanning; that chain is only possible when using the full solution. Images are signed and scanned by DTR, and because of the integration you can simply configure UCP to refuse to run containers based on images that haven’t been signed. We call this policy enforcement.
Q 9: So there is a single endpoint in front of the managers (like a load balancer) that I can point my Docker CLI to?
A: Yes, that is correct.
Q 10: How many resources on the VMs or physical machines are needed to run Docker Datacenter on-prem? Let's say for three UCP manager nodes and three worker nodes.
A: The CloudFormation template does it all for you. However, if you plan to install DDC outside of the CloudFormation template, here are the infrastructure requirements you should consider:

https://docs.docker.com/ucp/installation/system-requirements/

(installed on Commercially Supported Engine: https://docs.docker.com/cs-engine/install/)
Q 11: How does this demo of DDC for AWS compare to the AWS Quick Start at https://aws.amazon.com/quickstart/architecture/docker-ddc/ ?
A: It is the same. But stay tuned, as we will be providing an updated version in the coming weeks.
Q 12: If you don't use a routing mesh, would you need to route to each specific container? How do you know their individual IPs? Is it possible to have a single-tenant type of architecture where each user has his own container running?
A: The routing mesh is available as part of the engine. It’s turned on by default and it routes to containers cluster-wide. Before the routing mesh (prior to Docker 1.12), you had to route to a specific container and its port, though not necessarily by IP: you can route host names to specific services from within the UCP UI. We also introduced the concept of aliases, where you can address a container by name and the engine’s built-in DNS handles the routing for you. However, I would encourage looking at the routing mesh, which is available in Docker 1.12 and above.
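A quick sketch of the routing mesh in action (service name and port choices are arbitrary):

```shell
# Publish port 8080 for a service with 3 replicas; the routing mesh
# accepts connections on port 8080 of EVERY node in the swarm and
# forwards them to a healthy task, wherever it is scheduled
docker service create --name web --replicas 3 --publish 8080:80 nginx
```

Any node's IP on port 8080 now reaches the service, so an external load balancer only needs the node addresses, not individual container IPs.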
Q 13: Are you using Consul as a K/V store for the overlay network ?
A: No, we are not using Consul as the K/V store, nor does Docker require an external K/V store. The state is stored using a distributed database on the manager nodes called the Raft store. Manager nodes are part of a Raft consensus group, which enables them to share information and elect a leader. The leader is the central authority maintaining the state, which includes lists of nodes, services, and tasks across the swarm, in addition to making scheduling decisions.
Q 14: How do you work with node draining in the context of Auto Scaling Groups (ASG)?
A: Draining a node removes all workloads from it. It prevents the node from receiving new tasks from the manager; the manager also stops the tasks running on the node and launches replica tasks on nodes with ACTIVE availability. The node does remain in the ASG, however.
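As a sketch, assuming a worker named `worker-node-1` (hypothetical), the drain could be run as a lifecycle-hook step before the ASG terminates the instance:

```shell
# Stop scheduling new tasks on the node and reschedule its current
# tasks onto other ACTIVE nodes
docker node update --availability drain worker-node-1

# If the instance survives (e.g. after maintenance), make it
# schedulable again
docker node update --availability active worker-node-1
```

Running the drain before termination gives running tasks a chance to be rescheduled gracefully instead of being killed with the instance.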
Q 15: Is DDC for AWS dependent on AWS EBS?
A: We use EBS volumes for the instances, but we aren't using them for persistent storage; they act more as a local disk cache. Data there will go away if the instance goes away.
To get started with Docker Datacenter for AWS, sign up for a free 30-day trial at www.docker.com/trial.
Enjoy!
The post Q&A: 15 Questions AWS Users Ask About DDC For AWS appeared first on Docker Blog.
Source: https://blog.docker.com/feed/