Event Hubs .NET Standard client is now generally available

After several months of testing, both internally and by our users (thank you), we are releasing our newest Event Hubs clients to general availability. This means that these new libraries are production ready and fully supported by Microsoft.

What new libraries are available?

Consistent with our past design decisions, we are releasing two new NuGet packages:

Microsoft.Azure.EventHubs – This library comprises the Event Hubs-specific functionality that is currently found in the WindowsAzure.ServiceBus library. With it you can do things like send events to and receive events from an Event Hub.
Microsoft.Azure.EventHubs.Processor – Replaces functionality of the Microsoft.Azure.ServiceBus.EventProcessorHost library. This is the easiest way to receive events from an Event Hub, and keeps you from having to remember things such as offsets and partition information between receivers.
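
To pull the new packages into a project, you can use the NuGet Package Manager Console, for example (a minimal sketch; the package names are the ones above, and current version numbers will vary):

    # Visual Studio Package Manager Console (PowerShell); the .NET Core CLI equivalent is "dotnet add package"
    Install-Package Microsoft.Azure.EventHubs
    Install-Package Microsoft.Azure.EventHubs.Processor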

What does this mean for you?

Releasing these new libraries provides three major benefits:

Runtime portability – Using .NET Standard, we now have the ability to write a single code base that is portable across different .NET runtimes, including .NET Core, the .NET Framework, and the Universal Windows Platform. You can take this library and run it on Windows Server with .NET Framework 4.6, or on a Mac/Linux machine using .NET Core.
Open source – Yes! We are very excited that these new libraries are open source and available on GitHub. We love the interactions that we have with our customers, whether it be an issue or pull request.
Event Hubs now has its own library – while Event Hubs and Service Bus have seemingly been joined in the past, the use cases for these two products are often different. Previously, you needed to download a Service Bus library in order to use Event Hubs. These new libraries are specific to Event Hubs, so we hope they will make things clearer for our new users.

What's next?

For those of you currently using the WindowsAzure.ServiceBus library, we will continue to support Event Hubs workloads on this library for the foreseeable future. With that said, we currently have a .NET Standard Service Bus library in preview!

For more information on getting started with these new libraries, check out our updated getting started documentation.

So take the new libraries for a spin, and let us know what you think!
Quelle: Azure

Indian IT Minister Says Apple Plans To Make iPhones In Bengaluru

Priyank Kharge, Minister for IT, BT & Tourism, Govt of Karnataka / Via Twitter: @PriyankKharge

Apple's plans to pursue iPhone manufacturing operations in India seem to be moving along well. In a statement issued to Bengaluru's local press late on Thursday, the IT minister of the Indian state of Karnataka, Priyank Kharge, said he welcomed “Apple Inc.’s proposal to commence initial manufacturing operations in the state.”

He followed the statement with a tweet: “Glad to announce initial manufacturing operations of the world's most valued company: Apple, in Karnataka. Another validation for Karnataka.”

Kharge offered no other details on the proposal.

Apple has been in talks with the Indian government about manufacturing locally for months, and had asked India for major incentives, including a 15-year exemption on customs duty.


Apple declined to comment on the minister's statement, but instead pointed BuzzFeed News to a statement it issued a few weeks ago: “We've been working hard to develop our operations in India and are proud to deliver the best products and services in the world to our customers here. We appreciate the constructive and open dialogue we’ve had with government about further expanding our local operations.”

Quelle: BuzzFeed

Enhanced Automated Backup for SQL Server 2016 in Azure Virtual Machines

We are excited to announce some great enhancements to our Automated Backup feature, which greatly extends your control over backups when running SQL Server 2016 in Azure Virtual Machines. You can now control the schedule of your backups and back up system databases. You can easily enable this feature through the Azure Portal or PowerShell on Azure Virtual Machines running SQL Server 2016 Enterprise, Standard, or Developer.

For those of you not familiar with Automated Backup, this feature allows you to automatically back up all the databases in a SQL Server VM running in Azure to one of your storage accounts. Automated Backup ensures a consistent backup chain at all times, so you can always recover your databases to any point in time. Even better, it manages the desired retention for the backups, keeping them only for the time you specify. If you’re curious, Automated Backup is implemented on top of the SQL Server IaaS Agent Extension and the SQL Server Managed Backup feature.

New capabilities

Backup system databases

Automated Backup now gives you the option to schedule backups for System databases in addition to User databases. If you choose to enable this option, your System databases, and all their important instance-level objects, will be backed up on the same schedule as your User databases.

Scheduling backups

Automated Backup now allows you to schedule a time window and frequency, daily or weekly, for full backups so that these don’t impact performance during business hours. In addition, you can specify how often to take log backups.
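
As a rough sketch, these new options map to parameters on the New-AzureRmVMSqlServerAutoBackupConfig cmdlet in the AzureRM PowerShell module (resource group, storage account, and schedule values below are placeholders; check the feature documentation for the exact parameters in your module version):

    # Build an Automated Backup v2 settings object: system databases included,
    # weekly full backups in a fixed window, log backups every 30 minutes
    $rg = "my-resource-group"
    $storage = Get-AzureRmStorageAccount -ResourceGroupName $rg -Name "mybackupstorage"
    $backupCfg = New-AzureRmVMSqlServerAutoBackupConfig -Enable `
        -RetentionPeriodInDays 30 `
        -StorageContext $storage.Context `
        -ResourceGroupName $rg `
        -BackupSystemDbs `
        -BackupScheduleType Manual `
        -FullBackupFrequency Weekly `
        -FullBackupStartHour 20 `
        -FullBackupWindowInHours 2 `
        -LogBackupFrequencyInMinutes 30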

Remember that Azure Storage keeps three copies of every VM disk to guarantee no data loss; the purpose of these backups is therefore to recover from human errors (e.g., deleting a table).

For disaster recovery or compliance reasons, consider storing these backups in a geo-replicated storage account, preferably one with read access (RA-GRS). This also makes the backups available in a remote Azure region.

How to find Automated Backup v2

For new SQL Server VMs

When creating a new virtual machine running SQL Server 2016 in the Azure portal, you will be presented with several SQL Server configuration options. Here you can enable and configure Automated Backup according to your requirements.

For existing SQL Server VMs

If you have an existing virtual machine running SQL Server 2016, you can enable and configure Automated Backup v2 from the Azure Portal or PowerShell. Locate your SQL Server virtual machine in the Azure Portal and you will find Automated Backup under SQL Server configuration.
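
From PowerShell, applying a settings object like the one sketched earlier goes through the SQL Server IaaS Agent Extension (the VM and resource group names below are placeholders):

    # Push the Automated Backup settings ($backupCfg from the earlier sketch) to an existing SQL Server VM
    Set-AzureRmVMSqlServerExtension -AutoBackupSettings $backupCfg `
        -VMName "my-sql-vm" -ResourceGroupName "my-resource-group"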

To learn more, check out the documentation for this feature.

Try out this feature today in the Azure Portal!

If you do not have an Azure subscription, you can easily sign up for a free trial!
Quelle: Azure

Microsoft Urges Trump To Exempt Students And Workers From The Travel Ban

David Ramos / Getty Images

Microsoft is urging the Trump administration to create an exemption in its controversial travel ban to allow foreign-born students, workers, and people with family emergencies to leave and enter the United States.

In an executive order signed last week, President Donald Trump indefinitely suspended Syrian refugees from entering the US and has blocked people from Iraq, Iran, Sudan, Somalia, Libya, and Yemen from entering the country for 90 days.

In a letter to the head of Homeland Security and the Secretary of State, Microsoft President Brad Smith said the immigration order has impacted people with “pressing needs,” noting situations in which parents and children have been separated, individuals have been stranded, and travel for family medical emergencies has been blocked. Microsoft has 76 employees and 41 dependents who are impacted by the immigration order. But Smith added, “These situations almost certainly are not unique to our employees and their families.”

In the letter, Microsoft proposed an exemption to the travel ban. Individuals with valid travel documents, and who have committed no crimes, would be permitted to enter the US. And employees with work travel or family members with medical emergencies would be allowed to leave and enter the US, within a two-week window of time. Under Microsoft's proposal, travel to one of the seven Muslim majority countries for family-related emergencies would require approval on a case-by-case basis.

“We believe such an exception under the existing framework of the Executive Order would help address compelling personal needs without compromising the Executive Order’s security-related objectives,” wrote Smith.

To bolster Microsoft's case, Smith notes that the president's executive order explicitly grants Homeland Security and the State Department discretion to grant such exemptions.

“We therefore believe that the process we are proposing here is not only consistent with the Executive Order, but was contemplated by it.”

Based in Washington state, Microsoft has also lent its support to a lawsuit there challenging the president's immigration order. Washington-based businesses Amazon and Expedia filed sworn statements in support of the suit, which was led by Attorney General Bob Ferguson. Earlier this week, a spokesperson for the company told BuzzFeed News, “Microsoft has been supportive and has provided information to the Attorney General and is willing to provide further testimony if necessary.”

Quelle: BuzzFeed

Q&A: 15 Questions AWS Users Ask About DDC For AWS

Docker is deployed across all major cloud service providers, including AWS. So when we announced Docker Datacenter for AWS (which makes it even easier to deploy DDC on AWS) and showed live demos of the solution at AWS re:Invent 2016, it was no surprise that we received a ton of interest in the solution. Docker Datacenter for AWS, as you can guess from its name, is now the easiest way to install and stand up the Docker Datacenter (DDC) stack on an AWS EC2 cluster. If you are an AWS user looking for an enterprise container management platform, this blog will help answer questions you have about using DDC on AWS.
In last week’s webinar, Harish Jayakumar, Solutions Engineer at Docker, provided a solution overview and a demo showcasing how the tool works and some of the cool features within it. You can watch the recording of the webinar below:

We also hosted a live Q&A session at the end where we opened up the floor to the audience and did our best to get through as many questions as we could. Below are fifteen of the questions that we received from the audience. We selected these because we believe they do a great job of representing the overall set of inquiries we received during the presentation. Big shout out to Harish for tag teaming the answers with me.
Q 1: How many VPCs are required to create a full cluster of UCP, DTR, and the workers?
A: The DDC template creates one new VPC along with its subnets and security groups. More details here: https://ucp-2-1-dtr-2-2.netlify.com/datacenter/install/aws/
However, if you want to use DDC with your existing VPC, you can always deploy DDC directly without using the CloudFormation template.
Q 2: Is the $150/month cost per instance? Is this for an EC2 instance?
A: Yes, the $150/month cost is per EC2 instance. This is our monthly subscription model and it is purchasable directly on Docker Store. We also offer annual subscriptions, currently priced at $1,500 per node per year or $3,000 per node per year. You can view all pricing here.
Q 3: Would you be able to go over how to view logs for each container? And what's the type of log output that UCP shows in the UI?
A: Within the UCP UI, click on the “Resources” tab and then go to “Containers.” Once you have selected “Containers,” you can click on each individual container and see its logs within the UI.
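
Outside the UI, the same logs are available from the Docker CLI on the cluster (the container name below is a placeholder):

    # Show the last 100 log lines of a container, then keep following the stream
    docker logs --tail 100 -f my-container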

Q 4: How does the resource allocation work? Can we over-allocate CPU or RAM?
A: Yes. By default, each container’s access to the host machine’s CPU cycles is unlimited, but you can set various constraints to limit a given container’s access to the host machine’s CPU cycles. For RAM, Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. You can find more details here: https://docs.docker.com/engine/admin/resource_constraints/
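
As a quick sketch of what those constraints look like on docker run (the values are arbitrary examples):

    # Hard memory limit of 512 MB, soft limit (reservation) of 256 MB, relative CPU weight of 512
    docker run -d --name web -m 512m --memory-reservation 256m --cpu-shares 512 nginx
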
Q 5: Can access to the console via UCP be restricted via RBAC constraints?
A: Yes. Here is a blog explaining access controls in detail:

https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/

Q 6: Can we configure alerting from Docker Datacenter based on user-definable criteria (e.g. resource utilization of services)?
A: Yes, but with a little tweaking. Everything with Docker is event-driven, so you can configure alerts to trigger on each event and take the necessary action. Within the UI, you can see all of the resource usage listed. You have the ability to set how you want to see the notifications associated with it.
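
For example, the event stream such alerting would typically hook into can be watched from the CLI:

    # Stream container lifecycle events; a monitoring script could raise alerts on these
    docker events --filter 'type=container' --filter 'event=die'
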
Q 7: Is there a single endpoint in front of the three managers?
A: Within UCP, we suggest teams deploy three managers to ensure high availability of the cluster. As far as the single endpoint, you can configure one if you would like. For example, you can configure an ELB in AWS to sit in front of those three managers, and clients can then reach that one load balancer instead of accessing the individual managers by IP.
Q 8: Do you have to use DTR, or can you use alternative registries such as AWS ECR, Artifactory, etc.?
A: With the CloudFormation template, it is only DTR. Docker Datacenter is the end-to-end enterprise container management solution, and DTR and UCP are integrated. This means they share several components between them. They also have SSO enabled between the components, so the same LDAP/AD group can be used. Also, the solution ensures a secure software supply chain, including signing and scanning. The chain is only made possible when using the full solution. The images are signed and scanned by DTR, and because of the integration you can simply configure UCP to not run containers based on images that haven’t been signed. We call this policy enforcement.
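
As a generic illustration of the signing side, image signing on push is driven by Docker Content Trust on the client (the DTR host and repository names here are placeholders):

    # Sign images on push and verify signatures on pull for this session
    $env:DOCKER_CONTENT_TRUST = "1"
    docker push dtr.example.com/engineering/web:1.0
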
Q 9: So there is a single endpoint in front of the managers (like a load balancer) that I can point my Docker CLI to?
A: Yes, that is correct.
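
A rough sketch of what that looks like from a client machine (the host name and certificate path are placeholders; a UCP client bundle normally sets these variables for you):

    # Point the Docker CLI at the load balancer in front of the UCP managers
    $env:DOCKER_HOST       = "tcp://ucp-lb.example.com:443"
    $env:DOCKER_TLS_VERIFY = "1"
    $env:DOCKER_CERT_PATH  = "C:\ucp-client-bundle"
    docker node ls   # CLI commands now go through the load balancer
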
Q 10: How many resources on the VMs or physical machines are needed to run Docker Datacenter on-prem? Let's say for three UCP manager nodes and three worker nodes.
A: The CloudFormation template does it all for you. However, if you plan to install DDC outside of the CloudFormation template, here are the infrastructure requirements you should consider:

https://docs.docker.com/ucp/installation/system-requirements/

(installed on the Commercially Supported (CS) Engine: https://docs.docker.com/cs-engine/install/)
Q 11: How does this demo of DDC for AWS compare to https://aws.amazon.com/quickstart/architecture/docker-ddc/
A: It is the same. But stay tuned, as we will be providing an updated version in the coming weeks.
Q 12: If you don't use a routing mesh, would you need to route to each specific container? How do you know their individual IPs? Is it possible to have a single-tenant type of architecture where each user has his own container running?
A: The routing mesh is available as part of the engine. It’s turned on by default and it routes to containers cluster-wide. Before the routing mesh (prior to Docker 1.12), you would have to route to a specific container and its port. It does not have to be the IP specifically: you can route host names to specific services from within the UCP UI. We also introduced the concept of an alias, where you can address a container by its name and the engine's built-in DNS handles the routing for you. However, I would encourage looking at the routing mesh, which is available in Docker 1.12 and above.
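
For example, publishing a port on a swarm service uses the routing mesh, so any node in the cluster can answer for it:

    # Requests to port 8080 on any swarm node are routed to one of the three nginx tasks
    docker service create --name web --replicas 3 --publish 8080:80 nginx
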
Q 13: Are you using Consul as a K/V store for the overlay network?
A: No, we are not using Consul as the K/V store, nor does Docker require an external K/V store. The state is stored using a distributed database on the manager nodes called the Raft store. Manager nodes are part of a Raft consensus group. This enables them to share information and elect a leader. The leader is the central authority maintaining the state, which includes lists of nodes, services, and tasks across the swarm, in addition to making scheduling decisions.
Q 14: How do you work with node draining in the context of Auto Scaling Groups (ASG)?
A: Draining a node removes all the workloads from it. It prevents the node from receiving new tasks from the manager. It also means the manager stops tasks running on the node and launches replica tasks on nodes with ACTIVE availability. The node does remain in the ASG.
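
For reference, draining and re-activating a node from the CLI looks like this (the node name is a placeholder):

    docker node update --availability drain worker-1    # reschedule its tasks onto other ACTIVE nodes
    docker node update --availability active worker-1   # let the node receive tasks again
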
Q 15: Is DDC for AWS dependent on AWS EBS?
A: We use EBS volumes for the instances, but we aren't using them for persistent storage; they serve more as a local disk cache. Data there will go away if the instance goes away.
To get started with Docker Datacenter for AWS, sign up for a free 30-day trial at www.docker.com/trial.
Enjoy!
Quelle: https://blog.docker.com/feed/
