AWS Marketplace Self-Service Listings Adds Support for SaaS Software Scenarios

Today, AWS Marketplace released new feature enhancements to the Seller Self-Service Listings feature, a web-based interface that lets AWS Marketplace software vendors manage their product listings through the AWS Marketplace Management Portal (AMMP). SaaS software vendors that have product listings in AWS Marketplace can now use Self-Service Listings to upload new SaaS Contract products, or make changes to their SaaS Subscriptions and SaaS Contracts product listings, including updating product metadata and making pricing changes. In addition, Self-Service Listings now enables sellers to sunset their AMI or SaaS listings.
Source: aws.amazon.com

Review and Future Directions of CloudForms State-Machines

This article seeks to explain the use of State Machines in Red Hat CloudForms for the use in the flow control of automation.
The topic of state machines is sometimes perceived as rocket science: often taught, rarely used. The first thing to dispel is the supposed complexity of state machines; then we can compare how a state machine differs from other process automation, such as workflows.
Finally, the article dispels two myths: that state machines are only about Ruby, and that if you use Ansible Automation Inside you do not need state machines. Neither statement is true.

Why State Machine?
Many automation flows turn out to be bigger than first envisaged once they are taken into an enterprise.
Example :
You may wish to provision a Database cluster, so the primary task is the installation and configuration of the database cluster.

Your compliance officer may instruct that the corporate CMDB must be updated with any artifacts that have been provisioned.
The IT department may wish to trace the provisioning activity in the corporate help desk system, opening a ticket at the start of the job and closing it when the job has completed.
Or you may have a new requirement to use IP addresses from a corporate IP Address Management (IPAM) system when provisioning your database cluster.

Enterprises bring with them corporate standards, regulatory compliance requirements and operational patterns to follow. State machines are a great way to combine varying automation requirements.
Handling each requirement as a separate state provides the following value:

You can decide how the next state will behave based on how the current state has exited.
You can have re-use of states in other state machines.

The last benefit is important: automation groups in an enterprise tend to create their automation in silos, and CloudForms provides a way to re-use the corporate states outside of the primary automation play.
Example:

The Amazon team create an automation play that deploys instances into Amazon EC2.
The VMware team create an automation play that deploys VMs into vSphere Clusters.

Both teams need to update the corporate CMDB with the asset details. With CloudForms you can write the automation play that updates the CMDB once and share it, allowing both the Amazon and VMware teams to leverage the same play. This saves time and also ensures adherence to corporate standards.
 
Why not Workflows?
“What is the difference between a State Machine and a Workflow?” is the common question.
Answer:
A state machine is a series of states with transitions between them, and it allows for loops, as opposed to a sequential workflow, which proceeds down different branches until done.
The key here is that a state machine has re-entrancy. A state can run and, by itself, decide whether it should run again. A state can jump ahead and even go back to a previous state.
Example:

State – Update the CMDB with asset detail.

This state should succeed, but it can also fail.
A hard fail would be the CMDB returning an authentication failure to the state.
A softer failure is simply that the CMDB service is unreachable. These two failures could be dealt with in different ways, such as:

Authentication Failure EQUALS Fail the State.
Unreachable CMDB EQUALS Retry the state (predefined retry count and interval).

The result is a state that retries updating the corporate CMDB when the service is unavailable or busy, fails the job if authentication fails, and continues to the next state if successful.
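The exit-code decision above can be sketched in a few lines of Ruby. This is a plain-function model, not actual CloudForms code: in a real in-line method the result would be written to $evm.root['ae_result'], and the response symbols here are illustrative.

```ruby
# Sketch of the exit-code decision for the "Update CMDB" state.
# In a CloudForms in-line method this result would be assigned to
# $evm.root['ae_result']; the response symbols are illustrative.
def cmdb_state_result(response)
  case response
  when :authentication_failure then 'error' # hard fail: stop the job
  when :unreachable            then 'retry' # soft fail: retry later
  else                              'ok'    # continue to the next state
  end
end
```

The state machine engine then acts on the returned value: 'retry' reschedules the state, 'error' fails the job, and 'ok' moves on.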
This is the state machine for our example:

[Screenshot: the example state machine]
So what’s the benefit again? To do this in a workflow, the author would have to write the logic that controls the gates using decision processes, and the re-entrancy would need to be coded into the workflow as well. It would look something like this:

[Screenshot: the equivalent workflow]
Another benefit of using states in a state machine is the ability to do pre- and post-processing of a state, along with supporting a dedicated failure state. Before entering a state we can run some logic, then execute the state itself, and finally run more logic on exit. This means that any one state can do the following:

Pre State – Is updating the CMDB enabled for this job?
State – Run the CMDB Update.
Post State – Did the CMDB update successfully?
Error State – Send an email to admin to say CMDB is misconfigured.

A flowchart diagram for a pre state with its state would look like this:

[Screenshot: pre-state flowchart]
Summary – State machines are a table of states. Each state has entry, exit and retry logic, so any state can succeed, fail or retry, giving it the ability to traverse the state machine in any order.
 
State Machine and Method Types
A state machine transitions through a series of states. The states call, or connect to, instances, and it is the job of the instance to define the state. What does this mean?

[Screenshot: a state resolving to an instance]
The instance could define some attributes like;

CMDB Server URL.
CMDB Username.
CMDB Password.

But these are not much use unless you feed them into something that can use them. This brings us to METHODS.
A method is something that runs. You define the method and call it from an instance, such as:

[Screenshot: an instance calling a method]
The implementation of state machines in CloudForms is limited by the method types it supports. Here are the supported method types:

Built-in – A number of built-in methods exist for placement, quota, email and other use cases.

[Screenshot: a built-in method]
In-line – Ruby scripting language support. Write a Ruby script and it will be executed.

[Screenshot: an in-line Ruby method]
URI – Point to a Ruby script at a URI resource. It will be executed from that location.

[Screenshot: a URI method]
As you can see, two of the three options use the native Ruby scripting language, but the first, built-in, demonstrates that Ruby is not the only method choice.
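As a sketch of how an instance's attributes (such as the CMDB connection details above) feed an in-line Ruby method: in CloudForms the method would read them via $evm.object; here they are modeled as a plain hash, and the attribute names are illustrative.

```ruby
require 'uri'

# Build connection details for the CMDB from instance attributes.
# In CloudForms these would be read as $evm.object['cmdb_server_url'],
# $evm.object['cmdb_username'], etc.; the names here are illustrative.
def cmdb_connection(attrs)
  uri = URI.parse(attrs['cmdb_server_url'])
  {
    host:     uri.host,
    user:     attrs['cmdb_username'],
    password: attrs['cmdb_password']
  }
end

conn = cmdb_connection(
  'cmdb_server_url' => 'https://cmdb.example.com/api',
  'cmdb_username'   => 'svc_cloudforms',
  'cmdb_password'   => 'secret'
)
```

The point is the separation: the instance supplies configuration, the method supplies behavior, so the same method can be re-used with different instances.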
 
State Machine using Built-In Method
Some time ago, I wrote a blog on running built-in methods in CloudForms.
It demonstrates how a state machine can call a built-in method, passing parameters to send an email. This requires no Ruby coding and uses simply an instance that calls a method.
 
State Machine using In-Line Method
This is the most common method type. Most, if not all, states in the out-of-the-box provisioning state machines use in-line Ruby methods.
 
State Machine using URI Method
This method type is not used out of the box today, but you can certainly use it in a custom route. The main issue with a URI-based location for a method is availability: you need to ensure that ALL CloudForms servers running the Automate role can access the resource. Still, the concept of using an external location for methods is very cool. You could point the method location at a Git URL and get versioning, branching and availability from Git; an interesting blog for the future.
 
State Machine and Ansible Automation Inside
The near-term direction for CloudForms is to add Ansible as a method type for state machines. This would allow states to use instances that execute Ansible methods, combining the simplicity of the Ansible language and the power of its module integration with the power of state machines to control process flow. A single-state example would look like this:

[Screenshot: a single Ansible state]
A more complete example would look like this:

[Screenshot: a multi-state Ansible state machine]

You can see in this example how the first state calls a playbook to open a ticket in the corporate help desk system.
The second state performs a quota check; in CloudForms this is a Ruby method that takes some input parameters and either fails the state if no quota is available or continues on.
The third state calls a built-in method for provisioning.
The last state is another playbook that closes the ticket, again taking parameters from the instance, such as the connection parameters and the ticket number to close.

 
State Control
You can control a state in a state machine using re-entrancy and exit codes. For example, you might set the exit code of your method as follows:

Ok – The placement of the VM was successful.
Error – The placement of the VM failed.
Retry – We want to run this placement logic again.

What determines how the next state will be processed is therefore simply the exit code of the previous state.

If the previous state exits with Retry, the state is retried for the number of retries the state machine is configured for; the duration between retries can also be controlled.
If the state exits OK, the next state is processed.
If the state exits Error, the next state is actually the error state of the same state, so it can clean up after the failure or back out what was partially done. A good example of this would be:

State 1 – Create VM.
State 2 – Configure Firewall.
State 3 – Install Apache.

If State 3 fails, then the error state for State 3 might undo the firewall config and remove the VM.
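The back-out idea can be sketched in a few lines of Ruby. The undo actions and state names below follow the example above and are purely illustrative.

```ruby
# Hypothetical undo action for each state; names follow the example above.
UNDO = {
  'Create VM'          => 'Remove VM',
  'Configure Firewall' => 'Undo Firewall Config'
}

# If State 3 ("Install Apache") fails, back out the completed states in
# reverse order: the firewall config is undone before the VM is removed.
completed = ['Create VM', 'Configure Firewall']
backout   = completed.reverse.map { |state| UNDO[state] }
# backout => ["Undo Firewall Config", "Remove VM"]
```

Running the undo actions in reverse order of completion is what keeps a partial provisioning job from leaving orphaned artifacts behind.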
 
Appendix A
Definition from Google/Wikipedia:

A finite-state machine (FSM) is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to inputs; the change from one state to another is called a transition.
Assertions, Relationships and Schema
Relationships
A state machine, when written, contains states, but it can also contain other entry types. For example, you may wish to connect one state machine to another, as follows:

State Machine “Create VM in VMware”.
State Machine “Install Apache”.

These two state machines may have many states doing various tasks. The advantage of separating them is that you can re-use either state machine with others, such as:

State Machine “Create VM in RHV”.
State Machine “Install Apache”.

This now uses a new state machine to create a VM in RHV, but the same state machine for installing Apache.
To allow state machines to connect to each other we have “Relationships”: bindings from one place in a state machine to another.
 
Assertions, and more on Relationships
Assertions are very cool and allow you to stop a state machine mid-flow. The first example to look at is where you do not wish to continue based on a condition. Using the following as an example:

State 1 – Create VM.
State 2 – Configure Firewall.
State 3 – Install Apache.

You can, in state 2, do the following:

Assertion = “Continue only if the VM was created”.
Method = Configure Firewall.

The assertion is resolved first; if it returns True, processing continues to the next line, the method that actually configures the firewall. If the condition returns False, the state ends processing there and does NOT continue to run the method.
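The assertion-then-method flow can be modeled as plain Ruby. This is a sketch of the control flow only, not the CloudForms assertion syntax: the assertion is a lambda, and the context keys and return symbols are illustrative.

```ruby
# The assertion resolves first; the method only runs when it returns true.
def run_state(assertion, method, context)
  return :skipped unless assertion.call(context)
  method.call(context)
end

# Illustrative condition and method for the "Configure Firewall" state.
vm_created   = ->(ctx) { ctx[:vm_created] }
configure_fw = ->(_ctx) { :firewall_configured }

result  = run_state(vm_created, configure_fw, { vm_created: true })
skipped = run_state(vm_created, configure_fw, { vm_created: false })
```

Either way, the decision is made before the method body executes, which is exactly what makes assertions cheap guards.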
You can mix assertions with relationships too. As an example, suppose you wish to install a set of packages onto a VM and have created a method for each package install. Based on what we have discussed so far, you could do the following:

State 1 – Create VM.
State 2 – Install Apache.
State 3 – Install PHP.
State 4 – Install CSS.
State 5 – Install WebSite.

Or more easily you can;

State 1 – Create VM.
State 2 – Install Web Components.

The Install Web Components would be a wild card connection from the state to the methods, for example;

State 1 – Create VM.
State 2 – Install Web Components.

Relationship – WebComponents*

The reason you may wish to use assertions here is to stop the resolution of the wildcard from picking up instances and methods that should be excluded. For example, if you wish to pick up only the Linux version of the web components because the VM template was Linux, you could configure, on each Linux instance heading the methods, an assertion that evaluates to true or false based on whether the template OS is Linux. For example:
You have many instances for web components, some Windows and some Linux:

Linux – Install Apache.
Linux – Install PHP.
Linux – Install CSS.

And

Windows – Install Apache.
Windows – Install PHP.
Windows – Install CSS.

And

Common – Install WebSite.

Therefore, when the state machine resolves the relationship, it takes the web components that match the condition, and always takes “Common – Install WebSite”.
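The wildcard resolution above can be sketched in Ruby: every instance matched by the wildcard is considered, and each instance's assertion decides whether it is kept for the template's OS. The instance names follow the example above; the lambdas standing in for assertions are illustrative.

```ruby
# Instances matched by the "WebComponents*" wildcard, each guarded by an
# assertion on the template OS; the "Common" instance always passes.
INSTANCES = {
  'Linux - Install Apache'   => ->(os) { os == 'Linux' },
  'Linux - Install PHP'      => ->(os) { os == 'Linux' },
  'Windows - Install Apache' => ->(os) { os == 'Windows' },
  'Common - Install WebSite' => ->(_os) { true }
}

# Keep only the instances whose assertion passes for this template OS.
def resolve(template_os)
  INSTANCES.select { |_name, assertion| assertion.call(template_os) }.keys
end

linux_set = resolve('Linux')
# linux_set contains both Linux instances plus the common one
```

A Linux template thus resolves to the Linux instances and the common instance, while the Windows instances are filtered out by their assertions.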
Source: CloudForms

UK Met Office’s High-Resolution Weather Forecast Data is Now on AWS

Archive data from the UK Met Office Global and Regional Ensemble Prediction System (MOGREPS) is now available on Amazon S3. Data from two models is available: MOGREPS-UK, a high-resolution weather forecast covering the United Kingdom, and MOGREPS-G, a global weather forecast. MOGREPS is primarily designed to aid the forecasting of rapid storm development, wind, rain, snow and fog. Accurate weather forecasts allow farmers to predict when to plant crops, let airlines know when it’s safe to fly, help governments plan for transportation hazards, and are useful in a number of other ways.
Source: aws.amazon.com

NOAA’s GOES-R Series Weather Satellite Imagery is Now on AWS

Data from NOAA’s GOES-R series satellite is available on Amazon S3. The National Oceanic and Atmospheric Administration (NOAA) operates a constellation of Geostationary Operational Environmental Satellites (GOES) to provide continuous weather imagery and monitoring of meteorological and space environment data for the protection of life and property across the United States. GOES satellites provide critical atmospheric, oceanic, climatic and space weather products supporting weather forecasting and warnings, climatologic analysis and prediction, ecosystems management, safe and efficient public and private transportation, and other national priorities.
Source: aws.amazon.com

Docker 101: Introduction to Docker webinar recap

Docker is standardizing the way to package applications, making it easier for developers to code and build apps on their laptop or workstation, and for IT to manage, secure and deploy them onto a variety of infrastructure platforms.
In last week’s webinar, Docker 101: An Introduction to Docker, we went from describing what a container is, all the way to what a production deployment of Docker looks like, including how large enterprise organizations and world-class universities are leveraging Docker Enterprise Edition (EE)  to modernize their legacy applications and accelerate public cloud adoption.
If you missed the webinar, you can watch the recording here:

We ran out of time to go through everyone’s questions, so here are some of the top questions from the webinar:
Q: How does Docker get access to platform resources, such as I/O, networking, etc.? Is it a type of hypervisor?
A: Docker EE is not a type of hypervisor. Hypervisors create virtual hardware: they make one server appear to be many servers but generally know little or nothing about the applications running inside them. Containers are the opposite: they make one OS or one application server appear to be many isolated instances. Containers explicitly must know the OS and application stack, but the hardware underneath is less important to the container. On Linux operating systems, the Docker Engine is a daemon installed directly on the host operating system that isolates and segregates the different processes of the different containers running on that operating system. The platform resources are accessed by the host operating system, and each container gets isolated access to these resources through segregated namespaces and control groups (cgroups). cgroups allow Docker to share available hardware resources with containers and optionally enforce limits and constraints. You can read more about this here.
Q: Are containers secure since they run on the same OS?
A: Yes. cgroups, namespaces, seccomp profiles and the “secure by default” approach of Docker all contribute to the security of containers. Separate namespaces protect processes running within a container, meaning a container cannot see, much less affect, processes running in another container or in the host system. Cgroups help ensure that each container gets its fair share of memory, CPU and disk I/O and, more importantly, that a single container cannot bring the system down by exhausting one of those resources. Docker is also designed to limit root access of containers by default, meaning that even if an intruder manages to escalate to root within a container, it will be much harder to do serious damage or to escalate to the host. These are just some of the many ways Docker is designed to be secure by default. Read more about Docker security and security features here.
Docker Enterprise Edition includes additional advanced security options including role-based access control (RBAC), image signing to validate image integrity, secrets management, and image scanning to protect images from known vulnerabilities. These advanced capabilities provide an additional layer of security across the entire software supply chain, from developer’s laptop to production.
Q: Can a Docker image created under one OS (e.g. Windows) be used to run on a different operating system (e.g. RedHat 7.x)?
A: Unlike VMs, Docker containers share the OS kernel of the underlying host so containers can go from one Linux OS to another but not from Windows to Linux. So you cannot run a .NET app natively on a Linux machine, but you can run a RHEL-based container on a SUSE-based host because they both leverage the same OS kernel.
Q: Is there another advantage other than DevOps for implementing Docker in enterprise IT infrastructure?
A: Yes! Docker addresses many different IT challenges and aligns well with major IT initiatives including hybrid/multi-cloud, data center and app modernization. Legacy applications are difficult and expensive to maintain. They can be fragile and insecure due to neglect over time while maintaining them consumes a large portion of the overall IT budget. By containerizing these traditional applications, IT organizations save time and money and make these applications more nimble. For example:

Cloud portability: By containerizing applications, they can be easily deployed across different certified platforms without requiring code changes.
Easier application deployment and maintenance: Containers are based on images which are defined in Dockerfiles. This simplifies the dependencies of an application, making them easier to move between dev, test, QA, and production environments and also easier to update and maintain when needed. 62% of customers with Docker EE see a reduction in their mean time to resolution (MTTR).
Cost savings: Moving to containers provides overall increased utilization of available resources, which means that customers often see up to 75% improved consolidation of virtual machines or CPU utilization. That frees up more budget to spend on innovation.

To learn more about how IT can benefit from modernizing traditional applications with Docker, check out www.docker.com/MTA.
Q: Can you explain more about how Docker EE can be used to convert apps to microservices?
A: Replacing an existing application with a microservices architecture is often a large undertaking that requires significant investment in application development. Sometimes it is impossible as it requires systems of record that cannot be replaced. What we see many companies do is containerize an entire traditional application as a starting point. They then peel away pieces of the application and convert those to microservices rather than taking on the whole application. This allows the organization to modernize components like the web interface without complete re-architecture, allowing the application to have a modern interface while still accessing legacy data.
Q: Are there any tools that will help us manage private/corporate images? Can we host our own image repository in-house instead of using the cloud?
A: Yes! Docker Trusted Registry (DTR) is a private registry included in Docker Enterprise Edition Standard and Advanced. In addition, DTR provides additional advanced capabilities around security (eg. image signing, image scanning) and access controls (eg. LDAP/AD integration, RBAC). It is intended to be a private registry for you to install either in your data center or in your virtual private cloud environment.
Q: Is there any way to access the host OS file system(s)? I want to put my security-scan software in a Docker container but scan the host file system.
A: The best way to do this is to mount the host directory as a volume in the container with “-v /:/root_fs” so that the file system and directory are shared and visible in both places. More information around storage volumes, mounting shared volumes, backup and more are here.


Next Steps:

If you’re an IT professional, join our multi-part learning series: IT Starts with Docker
If you’re a developer, check out the Docker Playground 
Learn more about Docker Enterprise Edition or try the new hosted demo environment
Explore and register for other upcoming webinars or join a local Meetup

The post Docker 101: Introduction to Docker webinar recap appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

August updates to the Azure Analysis Services web designer

Last month we released a preview of the Azure Analysis Services web designer. This new browser-based experience will allow developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make simple changes fast and easy. It is great for getting started on a new model or to do things such as adding a new measure to a development or production AAS model.

Today we are announcing the first set of updates which include a mix of fixes and new features. In the upcoming months, we will continue to evolve the web designer to allow for easier and more advanced model creation in the web. New functionality includes:

DAX syntax highlighting for measures

Adding measures is a bit simpler with the use of a multiline code editor which recognizes DAX formula syntax.

New mini map in JSON editor

The model JSON editor now includes a mini document map on the right hand side to make browsing the JSON document simpler.

Display folder and hierarchy support in the query designer

You can now use hierarchies and display folders when graphically designing queries.

Table relationship editor

Create new relationships or edit existing ones between tables with the new relationship editor dialog.

Copy server name

When needing to connect to your server from other tools such as SSMS or SSDT, you can now simply copy your full server name from the server blade.

You can try the Azure Analysis web designer today by linking to it from a server in the Azure portal.

Submit your own ideas for features on our feedback forum. Learn more about Azure Analysis Services and the Azure Analysis Services web designer.
Source: Azure

Introducing the #Azure #CosmosDB Change Feed Processor Library

Azure Cosmos DB is a fast and flexible globally-replicated database service that is used for storing high-volume transactional and operational data with predictable millisecond latency for reads and writes. To help you build powerful applications on top of Cosmos DB, we built change feed support, which provides a sorted list of documents within a collection in the order in which they were modified. Now, to address scalability while preserving simplicity of use, we introduce the Cosmos DB Change Feed Processor Library. In this blog, we look at when and how you should use Change Feed Processor Library.

Change feed: Event Sourcing with Cosmos DB

Storing your data is just the beginning of the adventure. With change feed support, you can integrate with many different services depending on what you need to do once changes appear.

Example #1: You are building an online shopping website and need to trigger an email notification once a customer completes a purchase. Whether you prefer to use Azure Functions, Azure Notification Hub, Azure App Services, or your custom-built micro services, change feed allows seamless integration by surfacing changes in the order that they occur.

Example #2: You are storing data from an autonomous vehicle and need to detect abnormalities in incoming sensor data. As new entries are stored in Cosmos DB, these changes that appear on the change feed can be directly processed by Azure stream analytics, Azure HDInsight, Apache Spark, or Apache Storm. With change feed support, you can apply intelligent processing in real-time while data is stored into Cosmos DB.

Example #3: Due to architecture changes, you need to change the partition key for your Cosmos DB collection. Change feed allows you to move your data to a new collection while processing incoming changes. The result is zero down time while you move data from anywhere to Cosmos DB.
 

What about working with larger data storage with multiple partitions?

As your data storage needs grow, it’s likely that you will use multiple partitions to store your data. Although it’s possible to manually read changes from each partition, the Change Feed Processor makes this easier by abstracting the change feed API: it facilitates reading across partitions and distributes change feed event processing across multiple consumers. The library provides a thread-safe, multi-process, safe runtime environment with checkpoint and partition lease management for change feed operations. The Change Feed Processor Library is available as a NuGet package for .NET development.

When to use Change Feed Processor Library:

Pulling updates from the change feed when data is stored across multiple partitions
Moving or replicating data from one collection to another
Parallel execution of actions triggered by updates to data and the change feed

Getting started with the Change Feed Processor Library is simple and lightweight. In the following example, we have a collection of documents containing news events associated with different cities. We use “city” as the partition key. In just a few steps, we can print out all changes made to any document from any partition.

To set this up, install the Change Feed Processor Library Nuget package and create a lease collection. The lease collection should be created through an account close to the write region. This collection will keep track of change feed reading progress per partition and host information.
 

To define the logic performed when new changes surface, edit the ProcessChangesAsync function. Here, we are simply printing out the document ID of the new or updated document. You can also modify this function to perform different tasks.

 

public Task ProcessChangesAsync(ChangeFeedObserverContext context, IReadOnlyList<Document> docs)
{
    Console.WriteLine("Change feed: total {0} doc(s)", Interlocked.Add(ref totalDocs, docs.Count));
    foreach (Document doc in docs)
    {
        Console.WriteLine(doc.Id.ToString());
    }

    return Task.CompletedTask;
}

 

Next, to begin the Change Feed Processor, instantiate ChangeFeedEventHost, providing the appropriate parameters for your Azure Cosmos DB collections. Then call RegisterObserverFactoryAsync to register your IChangeFeedObserver (DocumentFeedObserver in this example) implementation with the runtime. At this point, the host attempts to acquire a lease on every partition key range in the Azure Cosmos DB collection using a "greedy" algorithm. These leases last for a given timeframe and must then be renewed. As new nodes come online, in this case worker instances, they place lease reservations. Over time the load shifts between nodes as each host attempts to acquire more leases.

 

DocumentFeedObserverFactory docObserverFactory = new DocumentFeedObserverFactory();

ChangeFeedEventHost host = new ChangeFeedEventHost(hostName, documentCollectionLocation, leaseCollectionLocation, feedOptions, feedHostOptions);

await host.RegisterObserverFactoryAsync(docObserverFactory);

 

Next steps

Review the documentation: Working with the Change Feed support in Azure CosmosDB.
Try out sample code: An example to read and copy changes to new collection.
Download the NuGet Package to get started.

Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter @AzureCosmosDB and #CosmosDB, and reach out to us on the developer forums on Stack Overflow.
Source: Azure

"Faules Ei" App Aims to Identify Insecticide-Contaminated Eggs

An app called "Faules Ei" (German for "rotten egg") is intended to help identify eggs affected by the current insecticide scandal. Users simply enter the producer code of the egg in question; the free app for iOS and Android does the rest.

Source: Heise Tech News