New protections for users, data, and apps in the cloud

At Google Cloud, we're always looking to make advanced security easier for enterprises so they can stay focused on their core business. Already this year, we've worked to strengthen user protection, make threat defense more effective, and streamline security administration through a constant stream of new product releases and enhancements. We continue to push our pace of security innovation, and today at Google Cloud Next '19 Tokyo, we're announcing four new capabilities to help customers protect their users, data, and applications in the cloud.

1. Bringing the Advanced Protection Program to the enterprise

Google's Advanced Protection Program helps safeguard the personal Google Accounts of anyone at risk of targeted online attacks. We are now introducing the Advanced Protection Program to G Suite, Google Cloud Platform (GCP), and Cloud Identity customers. Enterprise admins can allow their users most at risk of targeted attacks to enroll in the program. Examples of users who would benefit from the protections of the Advanced Protection Program include IT administrators, business executives, and employees in security-sensitive verticals such as finance and government.

With the Advanced Protection Program for the enterprise, we'll enforce a specific set of policies for the users you identify, including:

Enforcing the use of FIDO security keys, like Titan Security Keys or compatible hardware from other vendors, to secure accounts against phishing and account takeovers.
Automatically blocking access to third-party apps that your company has not explicitly marked as trusted.
Enabling enhanced scanning of incoming email for phishing attempts, viruses, and malicious attachments.

The beta of the Advanced Protection Program for the enterprise will be rolling out in the coming days. Learn more.

2. Making Titan Security Keys available in Japan, Canada, France, and the UK

FIDO security keys provide the strongest protection against phishing, targeted attacks, automated bots, and other techniques that seek to compromise user credentials. Last year, Google launched our own Titan Security Keys with availability in the United States. Starting today, Titan Security Keys are also available on the Google Store in Canada, France, Japan, and the United Kingdom (UK).

Titan Security Keys can be used anywhere FIDO security keys are supported, including Google's Advanced Protection Program. Learn more in our detailed blog post.

3. Using machine learning to detect anomalous activity in G Suite

Staying on top of activity that impacts the organization's security is top of mind for most admins. Starting today, G Suite Enterprise admins can automatically receive anomalous activity alerts in the G Suite alert center. Our machine learning models analyze security signals within Google Drive to detect potential security risks such as data exfiltration or policy violations related to unusual external file sharing and download behavior.

Anomaly detection is available in beta for G Suite Enterprise and G Suite Enterprise for Education customers. Learn more.

4. Enabling one-click access to thousands of additional apps

As organizations expand their use of SaaS apps, they need to reduce friction for users while maintaining security.
Cloud Identity and G Suite already enable single sign-on (SSO) for apps that use modern identity standards like SAML and OIDC, but just as important in meeting organizations where they are in their cloud journey is the ability to support legacy apps that still require a username and password to authenticate. We're pleased to announce that support for password-vaulted apps will be generally available for Cloud Identity in the coming days. The combination of standards-based and password-vaulted app support will deliver one of the largest app catalogs in the industry, providing seamless one-click access for users and a single point of management, visibility, and control for admins.

Creating environments that are secure, and keeping them that way, is critical for organizations that run in the cloud. These new features will help strengthen protection and securely enable cloud workloads and business processes. If you are at Next Tokyo, learn more by checking out our security sessions. You can also watch our most recent round of Google Cloud Security Talks here, and register for our next round of security talks here.
Quelle: Google Cloud Platform

5 Things to Try with Docker Desktop WSL 2 Tech Preview

We are pleased to announce the availability of our Technical Preview of Docker Desktop for WSL 2! 

As a refresher, this preview makes use of the new Windows Subsystem for Linux (WSL) version that Microsoft recently made available on the Windows Insider fast ring. It has allowed us to improve file system sharing and boot time, and to give Docker Desktop users access to some new features.

To do this, we have changed quite a bit about how we interact with the operating system compared to Docker Desktop on Windows today.

To learn more about the full feature set, have a look at our previous blog posts: Get Ready for Tech Preview of Docker Desktop for WSL 2 and Docker WSL 2 – The Future of Docker Desktop for Windows.

Want to give it a go?

Get set up on a Windows machine running the latest Windows Insider build:

Head over to Microsoft and get set up as a Windows Insider: https://insider.windows.com/en-gb/getting-started/
Install the latest release branch (at least build 18932) and enable the WSL 2 feature in Windows: https://docs.microsoft.com/en-us/windows/wsl/wsl2-install
Get Ubuntu 18.04 on your machine from the Microsoft Store.
Finally, download the Tech Preview: Docker Desktop for WSL 2 Technical Preview.

If you are having issues or want more detailed steps, have a look at our docs here.

Things to try:

Navigate between WSL 2 and traditional Docker

Use $ docker context ls to view the different contexts available.

The daemon running in WSL 2 runs side-by-side with the "classic" Docker Desktop daemon. This is done by using a separate Docker context. Run `docker context use wsl` to use the WSL 2 based daemon, and `docker context use default` to use the Docker Desktop classic daemon. The "default" context will target either the Moby Linux VM daemon or the Windows Docker daemon, depending on whether you are in Linux or Windows mode.

Access full system resources

Use $ docker info to inspect the system statistics. You should see all of your system resources (CPU & memory) available to you in the WSL 2 context. 

Linux workspaces

Source code and build scripts can live inside WSL 2 and access the same Docker Daemon as from Windows. Bind mounting files from WSL 2 is supported, and provides better I/O performance.

Visual Studio remote with WSL

You can work natively with Docker and Linux from Visual Studio Code on Windows. 

If you are a Visual Studio Code user, make sure you have installed the plugin from the marketplace. You can then connect to WSL 2 and access your source in Linux, which means you can use the terminal in VS Code to build your containers with any existing Linux build scripts, all from within the Windows UI.

For full instructions have a look through Microsoft’s documentation: https://code.visualstudio.com/docs/remote/wsl

File system improvements: 

If you are a PHP Symfony user let us know your thoughts! We found that page refreshes went from ~400ms to ~15ms when we were running from a Linux Workspace.

Want to Learn More?

Read more about the Docker Desktop for WSL 2 Technical Preview
Learn more about Docker Desktop and the new Docker Desktop Enterprise
Learn more about running Windows containers in this on-demand webinar: Docker for Windows Container Development

Quelle: https://blog.docker.com/feed/

Azure Cost Management updates – July 2019

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Azure Cost Management for partners
Marketplace usage for pay-as-you-go (PAYG) subscriptions
Cost Management Labs
Save and share customized views directly in cost analysis
Viewing costs in different currencies
Manage EA accounts from the Azure portal
Expanded availability of resource tags in cost reporting
Tag your resources with up to 50 tags
Documentation updates

Let's dig into the details.

 

Azure Cost Management for partners

Partners play a critical role in successful planning, implementation, and long-term cloud operations for organizations, big and small. Whether you're a partner who sells to or manages Azure on behalf of another organization or you're working with a partner to help keep you focused on your core mission instead of managing infrastructure, you need a way to understand, control, and optimize your cloud costs. This is where Azure Cost Management comes in!

In June, we announced new capabilities in the Cloud Solution Provider (CSP) program coming in October 2019. With this update, CSP partners can onboard customers using the same Microsoft Customer Agreement (MCA) platform used across Azure. CSP partners and customers will see product alignment, which includes common Azure Cost Management tools, available at the same time they're available for pay-as-you-go (PAYG) and enterprise customers.

Azure Cost Management capabilities optimized for partners and their customers will be released over time, starting with the ability to enable Azure Cost Management for MCA customers. You'll see periodic updates throughout Q4 2019 and 2020, including support for customers who do not transition to MCA. Once enabled, partners and customers will have the full benefits of Azure Cost Management.

If you're a managed service provider, be sure to check out Azure Lighthouse, which enables partners to more efficiently manage resources at scale across customers and directories. Help your customers manage their Azure and AWS costs in a single place with Azure Cost Management!

Stay tuned for more updates in October 2019. We're eager to bring much-anticipated Azure Cost Management capabilities to partners and their customers!

 

Marketplace usage for pay-as-you-go (PAYG) subscriptions

Last month, we talked about how effective cost management starts by getting all your costs into a single place with a single taxonomy. Now, with the addition of Azure Marketplace usage for pay-as-you-go (PAYG) subscriptions, you have a more complete picture of your costs.

Azure and Marketplace charges have different billing cycles. To investigate and reconcile billed charges, select the appropriate Azure or Marketplace invoice period in the date picker. To view all charges together, select calendar months and group by publisher type to see a breakdown of your Azure and Marketplace costs.

 

Cost Management Labs

Cost Management Labs is the way to get the latest cost management features and enhancements! It is the same great service you're used to, but with a few extra features we're testing and gathering feedback on before we release them to the world. This is your chance to drive the direction and impact the future of Azure Cost Management.

Participating in Cost Management Labs is as easy as opening the Azure preview portal and selecting Cost Management from Azure Home. On the Cost Management overview, you'll see the preview features available for testing and have links to share new ideas or report any bugs that may pop up. Reporting a bug is a direct line back to the Azure Cost Management engineering team, where we'll work with you to understand and resolve the issue.

Here's what you'll see in Cost Management Labs today:

Save and share customized views directly within cost analysis
Download your customized view in cost analysis as an image
Several small bug fixes and improvements, like minor design changes within cost analysis

Of course, that's not all! There's more coming and we're very eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today!

 

Save and share customized views in cost analysis

Customizing a view in cost analysis is easy. Just pick the date range you need, group the data to see a breakdown, choose the right visualization, and you're good to go! Pin your view to a dashboard for one-click access, then share the dashboard with your team so everyone can track cost from a single place.

You can also share a direct link to your customized view so others can copy and personalize it for themselves:

Both sharing options offer flexibility, but sometimes you need something more convenient: the ability to save customized views and share them with others directly from within cost analysis. Now you can!

People with Cost Management Contributor (or greater) access can create shared views. You can create up to 50 shared views per scope.

Anyone can save up to 50 private views, even if they only have read access. These views cannot be shared with others directly in cost analysis, but they can be pinned to a dashboard or shared via URL so others can save a copy.

All views are accessible from the view menu. You'll see your private views first, then those shared across the scope, and lastly the built-in views which are always available.

Need to share your view outside of the portal? Simply download the chart as an image and copy it into an email or presentation to share it with your team. You'll see a slightly redesigned Export menu, which now offers a PNG option when viewing charts. The table view cannot be downloaded as an image.

You'll also see a few small design changes to the filter bar in the preview:

The scope pill shows more of the scope name for added clarity
The view menu has been restyled based on its growing importance with saved views
The granularity and group by pickers are closer to the main chart to address confusion about what they apply to

This is just the first step. There's more to come. Try the preview today and let us know what you'd like to see next! We're excited to hear your ideas!

 

Viewing costs in different currencies

Every organization has its own unique setup and challenges. You may get a single Azure invoice or perhaps you need separate invoices per department. You may even be in a multi-national organization with multiple billing accounts in different currencies. Or perhaps you simply moved subscriptions between billing accounts in different currencies. Regardless of how you ended up with multiple currencies, you haven't had a way to view costs in the portal. Now you can!

When cost analysis detects multiple currencies, you'll have an option to switch between them, viewing costs in each currency individually. Today, this only shows charges for the selected currency – cost analysis is not converting currencies. For example, if you have two charges, one for $1 and another for £1, you can see either USD only ($1) or GBP only (£1). You cannot see $1+£1 in USD or GBP today. In the future, Azure Cost Management will convert costs into a single currency to show everything in USD (e.g. $2.27 in this case) and eventually in a currency you select (e.g. ¥243.43).

 

Manage EA departments and policies from the Azure portal

If you manage an Enterprise Agreement (EA), you're all too familiar with the Enterprise portal, which lets you keep an eye on your usage, monetary commitment credits, and additional charges each month. Did you know you can also do this in the Azure portal? With richer reporting in cost analysis and finer-grained control with budgets, the Azure portal delivers even more capabilities to understand and control your costs.

Now, you can also create and manage your departments and policy settings from the Azure portal. Departments allow you to organize subscriptions and delegate access to manage account owners, while policy settings allow you to enable or disable reservations, Azure Marketplace purchases, and Azure Cost Management for your organization. To ensure everyone in the organization can see and manage costs, make sure you enable account owners to view charges.

Enabling account owners to view charges also ensures subscription users with RBAC access have visibility into their costs throughout the lifetime of their resources, can control spending with budgets, and can optimize their spending with cost-saving recommendations. Enabling cost visibility is critical to driving accountability throughout your organization. Once enabled, you can manage finer-grained access with the Cost Management Reader and Cost Management Contributor roles on any resource group, subscription, or management group. We recommend Cost Management Contributor to ensure everyone can create and share Azure Cost Management views and budgets across the resources and costs they have visibility to.

If you're still using the enterprise portal on a regular basis, we encourage you to give the Azure portal a shot. Simply go to the portal and click Cost Management + Billing in the list of favorites on the left.

And don't forget to plan your move from the key-based EA APIs (such as consumption.azure.com) to the latest UsageDetails API (version 2019-04-01-preview or newer). The key-based APIs will not be supported after your next EA renewal into Microsoft Customer Agreement (MCA) and switching to the UsageDetails API now will streamline this transition and minimize future migration work.
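If you're exploring the newer API, the sketch below shows roughly what a call looks like. This is a rough illustration rather than an official sample: the billing account ID is made up, the bearer token is assumed to come from your own Azure AD flow, and you should check the UsageDetails API reference for the exact scope and parameters your account type needs.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "os"
)

func main() {
    // Illustrative EA billing account scope; substitute your own.
    scope := "providers/Microsoft.Billing/billingAccounts/1234567"
    url := fmt.Sprintf(
        "https://management.azure.com/%s/providers/Microsoft.Consumption/usageDetails?api-version=2019-04-01-preview",
        scope)

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        panic(err)
    }
    // Acquiring the Azure AD token is out of scope for this sketch;
    // here it is assumed to be provided via an environment variable.
    req.Header.Set("Authorization", "Bearer "+os.Getenv("AZURE_TOKEN"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}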

 

Expanded availability of resource tags in cost reporting

Tagging is the best way to organize and categorize your resources outside of the built-in management group, subscription, and resource group hierarchy. Add your own metadata and build custom reports using cost analysis. While most Azure resources support tags, some resource types do not. Here are the latest resource types which now support tags:

VPN gateways

Remember tags are a part of every usage record and are only available in Azure Cost Management reporting after the tag is applied. Historical costs are not tagged. Update your resources today for the best cost reporting.

 

Tag your resources with up to 50 tags

To effectively manage costs in a large organization, you need to map costs to reporting entities. Whether you're breaking down cost by organization, application, environment, or some other construct, resource tags are a great way to add that metadata and reuse it for cost, health, security, and compliance tracking and enforcement. But as your reporting needs change over time, you may have hit the 15 tag limit on resources. No more! You can now apply up to 50 tags to each resource!

To learn more about tag management and the benefits of tags, see "Use tags to organize your Azure resources".

 

Documentation updates

Lots of documentation updates! Here are a few you might be interested in:

Updated Marketplace usage status for PAYG in "Understand Cost Management data"
Updated PAYG usage terminology in "Understand the terms in your Azure usage and charges file"
Added forecast to "Explore and analyze costs with cost analysis"
Expanded details about viewing reservations in cost analysis in "Get Enterprise Agreement reservation costs and usage"
Added resource group scoping to multiple docs for reservations
Created new "How to buy" and "How the discount is applied" docs for Azure Databricks reservations
Added instance size flexibility to the "How to buy" and "How the discount is applied" virtual machine reservation docs
Added steps on how to rename your Azure subscriptions to "Change the profile information for your Azure account"
Lots of updates across multiple docs for Microsoft Customer Agreements

Want to keep an eye on all documentation updates? Check out the Azure Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select "Edit" at the top of the doc and submit a quick pull request.

 

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming!

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks! And, as always, share your ideas and vote up others in the Azure Cost Management feedback forum.
Quelle: Azure

Write Maintainable Integration Tests with Docker

Testcontainers is an open source community focused on making integration tests easier across many languages. Gianluca Arbezzano is a Docker Captain, SRE at InfluxData, and the maintainer of the Golang implementation of Testcontainers, which uses the Docker API to expose a test-friendly library that you can use in your test cases.

Photo by Markus Spiske on Unsplash.

The popularity of microservices and the use of third-party services for non-business-critical features have drastically increased the number of integrations that make up the modern application. These days, it is commonplace to use MySQL, Redis as a key value store, MongoDB, Postgres, and InfluxDB – and that is all just for the database layer – let alone the multiple services that make up other parts of the application.

All of these integration points require different layers of testing. Unit tests increase how fast you write code because you can mock all of your dependencies, set the expectation for your function and iterate until you get the desired transformation. But, we need more. We need to make sure that the integration with Redis, MongoDB or a microservice works as expected, not just that the mock works as we wrote it. Both are important but the difference is huge.

In this article, I will show you how to use Testcontainers to write integration tests in Go with very low overhead. So, just to be clear, I am not telling you to stop writing unit tests!

Back in the day, when I was interested in becoming a Java developer, I tried to write an integration between Zipkin, a popular open source tracer, and InfluxDB. I ultimately failed because I am not a Java developer, but I did understand how they wrote integration tests, and I became fascinated.

Getting Started: testcontainers-java

Zipkin provides a UI and an API to store and manipulate traces. It supports Cassandra, in-memory storage, Elasticsearch, MySQL, and many more platforms as storage backends. In order to validate that all the storage systems work, they use a library called testcontainers-java, a wrapper around the Docker API designed to be "test-friendly." Here is the Quick Start example:

public class RedisBackedCacheIntTestStep0 {
    private RedisBackedCache underTest;

    @Before
    public void setUp() {
        // Assume that we have Redis running locally?
        underTest = new RedisBackedCache("localhost", 6379);
    }

    @Test
    public void testSimplePutAndGet() {
        underTest.put("test", "example");

        String retrieved = underTest.get("test");
        assertEquals("example", retrieved);
    }
}

In setUp you can create a container (Redis in this case) and expose a port. From there, you can interact with a live Redis instance.

Every time you start a new container, there is a "sidecar" called Ryuk that keeps your Docker environment clean by removing containers, volumes, and networks after a certain amount of time. You can also remove them from inside the test. The example below comes from Zipkin. They are testing the Elasticsearch integration and, as the example shows, you can programmatically configure your dependencies from inside the test case.

public class ElasticsearchStorageRule extends ExternalResource {
    static final Logger LOGGER = LoggerFactory.getLogger(ElasticsearchStorageRule.class);
    static final int ELASTICSEARCH_PORT = 9200;
    final String image;
    final String index;
    GenericContainer container;
    Closer closer = Closer.create();

    public ElasticsearchStorageRule(String image, String index) {
        this.image = image;
        this.index = index;
    }

    @Override
    protected void before() {
        try {
            LOGGER.info("Starting docker image " + image);
            container =
                new GenericContainer(image)
                    .withExposedPorts(ELASTICSEARCH_PORT)
                    .waitingFor(new HttpWaitStrategy().forPath("/"));
            container.start();
            if (Boolean.valueOf(System.getenv("ES_DEBUG"))) {
                container.followOutput(new Slf4jLogConsumer(LoggerFactory.getLogger(image)));
            }
            System.out.println("Starting docker image " + image);
        } catch (RuntimeException e) {
            LOGGER.warn("Couldn't start docker image " + image + ": " + e.getMessage(), e);
        }
    }
}

That this happens programmatically is key because you do not need to rely on something external such as docker-compose to spin up your integration tests environment. By spinning it up from inside the test itself, you have a lot more control over the orchestration and provisioning, and the test is more stable. You can even check when a container is ready before you start a test.

Since I am not a Java developer, I ported the library to Golang (we are still working on all the features), and it now lives in the main testcontainers organization as testcontainers/testcontainers-go.

func TestNginxLatestReturn(t *testing.T) {
    ctx := context.Background()
    req := testcontainers.ContainerRequest{
        Image:        "nginx",
        ExposedPorts: []string{"80/tcp"},
    }
    nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Error(err)
    }
    defer nginxC.Terminate(ctx)
    ip, err := nginxC.Host(ctx)
    if err != nil {
        t.Error(err)
    }
    port, err := nginxC.MappedPort(ctx, "80")
    if err != nil {
        t.Error(err)
    }
    resp, err := http.Get(fmt.Sprintf("http://%s:%s", ip, port.Port()))
    if err != nil {
        t.Fatal(err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        t.Errorf("Expected status code %d. Got %d.", http.StatusOK, resp.StatusCode)
    }
}

Creating the Test

This is what it looks like:

ctx := context.Background()
req := testcontainers.ContainerRequest{
    Image:        "nginx",
    ExposedPorts: []string{"80/tcp"},
}
nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
    ContainerRequest: req,
    Started:          true,
})
if err != nil {
    t.Error(err)
}
defer nginxC.Terminate(ctx)

You create the nginx container, and with the defer nginxC.Terminate(ctx) call you clean up the container when the test is over. Remember Ryuk? Calling Terminate is not strictly mandatory, because testcontainers-go relies on Ryuk to remove the containers at some point anyway, but it keeps your environment tidy.
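To mirror the Java quick start from earlier against a live instance instead of a mock, the same pattern works with Redis. The sketch below is illustrative rather than canonical: the image tag, the wait strategy (which demonstrates the readiness check mentioned earlier), and the test name are assumptions, and it presumes the github.com/testcontainers/testcontainers-go/wait package is imported alongside context, fmt, and testing. Point your own Redis client, or the cache under test, at the address it produces.

func TestRedisBackedCacheInGo(t *testing.T) {
    ctx := context.Background()
    req := testcontainers.ContainerRequest{
        Image:        "redis:5-alpine",
        ExposedPorts: []string{"6379/tcp"},
        // Wait until the port is actually accepting connections before the test runs.
        WaitingFor: wait.ForListeningPort("6379/tcp"),
    }
    redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Fatal(err)
    }
    defer redisC.Terminate(ctx)

    host, err := redisC.Host(ctx)
    if err != nil {
        t.Fatal(err)
    }
    port, err := redisC.MappedPort(ctx, "6379")
    if err != nil {
        t.Fatal(err)
    }
    addr := fmt.Sprintf("%s:%s", host, port.Port())

    // Hand addr to the Redis client or cache implementation you are testing,
    // then exercise put/get against the live instance instead of a mock.
    _ = addr
}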

Modules

The Java library has a feature called modules, where you get pre-canned containers such as databases (MySQL, Postgres, Cassandra, etc.) or applications like nginx. The Go version is working on something similar, but it is still an open PR; a rough approximation of the idea follows the Java snippet below.

This is a great feature if you'd like to build a microservice your application relies on from its upstream source, or if you would like to test how your application behaves from inside a container (probably more similar to where it will run in production). This is how it works in Java:

@Rule
public GenericContainer dslContainer = new GenericContainer(
    new ImageFromDockerfile()
        .withFileFromString("folder/someFile.txt", "hello")
        .withFileFromClasspath("test.txt", "mappable-resource/test-resource.txt")
        .withFileFromClasspath("Dockerfile", "mappable-dockerfile/Dockerfile"));
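On the pre-canned database side of modules, and until the Go pull request lands, you can approximate the idea today with a small helper built on GenericContainer. The following is a rough sketch, not part of testcontainers-go: the function name, image tag, and credentials are invented for the example, and it assumes the same testcontainers and wait imports used above plus context and fmt.

// Hypothetical module-style helper: starts a throwaway Postgres container
// and returns it together with a ready-to-use connection string.
func startPostgres(ctx context.Context) (testcontainers.Container, string, error) {
    req := testcontainers.ContainerRequest{
        Image:        "postgres:11-alpine",
        ExposedPorts: []string{"5432/tcp"},
        Env: map[string]string{
            "POSTGRES_USER":     "test",
            "POSTGRES_PASSWORD": "test",
            "POSTGRES_DB":       "test",
        },
        WaitingFor: wait.ForListeningPort("5432/tcp"),
    }
    c, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        return nil, "", err
    }
    host, err := c.Host(ctx)
    if err != nil {
        return nil, "", err
    }
    port, err := c.MappedPort(ctx, "5432")
    if err != nil {
        return nil, "", err
    }
    dsn := fmt.Sprintf("postgres://test:test@%s:%s/test?sslmode=disable", host, port.Port())
    return c, dsn, nil
}

A test can then call startPostgres, defer Terminate on the returned container, and hand the DSN to whatever database driver it already uses.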

What I’m working on now

Something that I am currently working on is a new canned container that uses kind to spin up Kubernetes clusters inside a container. If your application uses the Kubernetes API, you can test against it in your integration tests:

ctx := context.Background()
k := &KubeKindContainer{}
err := k.Start(ctx)
if err != nil {
    t.Fatal(err.Error())
}
defer k.Terminate(ctx)
clientset, err := k.GetClientset()
if err != nil {
    t.Fatal(err.Error())
}
ns, err := clientset.CoreV1().Namespaces().Get("default", metav1.GetOptions{})
if err != nil {
    t.Fatal(err.Error())
}
if ns.GetName() != "default" {
    t.Fatalf("Expected default namespace got %s", ns.GetName())
}
This feature is still a work in progress as you can see from PR67.

Calling All Coders

The Java version of Testcontainers was the first one developed, and it has a lot of features not yet ported to the Go version or to the other language libraries such as JavaScript, Rust, and .NET.

My suggestion is to try the one written in your language and to contribute to it. 

In Go we don't yet have a way to programmatically build images. I am thinking of embedding BuildKit or img in order to get a daemonless builder that doesn't depend on Docker. The great part about working with the Go version is that all the container-related libraries are already written in Go, so you can integrate with them very cleanly.

This is a great chance to become part of this community! If you are passionate about testing frameworks, join us and send your pull requests, or come hang out on Slack.

Try It Out

I hope you are as excited as I am about the flavour and the power this library provides. Take a look at the testcontainers organization on GitHub to see if your language is covered and try it out! And, if your language is not covered, let's write it! If you are a Go developer and you'd like to contribute, feel free to reach out to me @gianarb, or go check it out and open an issue or pull request!

Quelle: https://blog.docker.com/feed/