rdopkg-0.44 ChangeBlog

I’m happy to announce that version 0.44.2 of rdopkg, the RPM packaging
automation tool, has been released.

While a changelog generated from git commits is available in the
original 0.44 release commit
message,
I think it’s also worth writing a human-readable summary of the work done by the rdopkg
community for this release. I’m not sure about the format yet, so I’ll start
with a blog post about the changes – a ChangeBlog ;)

41 commits from 7 contributors were merged over the course of 4
months since the last release, with an average time to land of 6 days.
More stats

For more information about each change, follow the link to inspect the related
commit on GitHub.

Software Factory migration
Migrate to softwarefactory-project.io

rdopkg now has a
new home under the Software Factory project
alongside DLRN.
The GitHub repository also
moved from its legacy
location to the softwarefactory-project namespace.
The issue tracker stays on GitHub.

Versioning
Adopt pbr for version and setup.py management
Include minor version 0.44 -> 0.44.0 as pbr dictates

Python 3 compatibility
Add some Python 3 compatibility fixes
More python 3 compatibility fixes

rdopkg now loads under python3; the next step is
running the tests under python3 using tox.

Testing
Add BDD feature tests using python-behave

Unit tests sucked for testing high-level behavior, so I tried an alternative.
I’m quite pleased with python-behave; see one of the first new-version scenarios
written in Gherkin:

Scenario: rdopkg new-version with upstream patches
    Given a distgit at Version 0.1 and Release 0.1
    Given a patches branch with 5 patches
    Given a new version 1.0.0 with 2 patches from patches branch
    When I run rdopkg new-version -lntU 1.0.0
    Then .spec file tag Version is 1.0.0
    Then .spec file tag Release is 1%{?dist}
    Then .spec file doesn’t contain patches_base
    Then .spec file has 3 patches defined
    Then .spec file contains new changelog entry with 1 lines
    Then new commit was created

It also looks reasonable on the python side.

Avoid test failure due to git hooks
tests: include pep8 in test-requirements.txt
tests: enable nice py.test diffs for common test code
tests: fix gerrit query related unit tests

New Features
pkgenv: display color coded hashes for branches

You can now easily tell the state of branches just by looking at the color.

distgit: new -H/--commit-header-file option
patch: new -B/--no-bump option to only sync patches
Add support for buildsys-tags in info-tags-diff
Add options to specify user and mail in changelog entry
allow patches remote and branch to be set in git config
new-version: handle RHCEPH and RHSCON products
guess: return RH osdist for eng- dist-git branches

Improvements
distgit: Use NVR for commit title for multiple changelog lines
Improve %changelog handling
Improve patches_ignore detection
Avoid prompt on non interactive console
Update yum references to dnf
Switch to pycodestyle (pep8 rename)

Fixes
Use absolute path for repo_path

This caused trouble when using rdopkg info -l.

Use always parse_info_file in get_info
Fix output of info-tags-diff for new packages

Refactoring
refactor: merge legacy rdopkg.utils.exception

There is only one place for exceptions now o/

core: refactor unreasonable default atomic=False
make git.config_get behave more like dict.get
specfile: fix improper naming: get_nvr, get_vr
fixed linting

Documentation
document new-version’s --bug argument
Update doc to reflect output change in info-tags-diff

Happy rdopkging!
Quelle: RDO

Announcing public preview of Azure Batch Rendering

This week SIGGRAPH 2017 is blasting away in Los Angeles and I can’t imagine a better place than the premier event for computer graphics to announce that Azure Batch Rendering will now move into public preview.

The complexities of cinematic productions, associated workflows, and infrastructure have always intrigued me, and they are honestly one of the very best examples of the hands-on value that Azure provides. Abstracting away infrastructure considerations, deployment, and management rarely made more sense, especially when it also lets you scale beyond the physical boundaries of your on-premises environments.

Enabling artists, engineers, and designers to submit rendering jobs seamlessly via client applications such as Autodesk Maya, 3ds Max, or via our SDK, Azure Batch Rendering accelerates large scale rendering jobs to deliver results to our customers faster.

Back in May during the Microsoft Build conference, we announced the first limited preview of Batch Rendering, a milestone in integrating the high-end graphics user experience with the power of Azure. Since then, hundreds of curious and excited customers have been putting Batch Rendering through its paces and have provided invaluable feedback to us on the product – thank you!

While Azure Batch Rendering with Autodesk is moving to public preview, we are also excited to announce a limited preview of V-Ray in partnership with Chaos Group. With V-Ray being supported for Maya and 3ds Max, this is another great step forward in supporting a rich and vibrant ecosystem on Azure.

Azure will continue to work with Autodesk, Chaos Group, and other partners to enable customers to run their day to day rendering workloads seamlessly on Azure. Batch Rendering will provide tools, such as client plugins, offering a rich integrated experience allowing customers to submit jobs from within the applications with easy scaling, monitoring, and asset management. Additionally, the SDK, available in various languages, allows custom integration with customer’s existing environments.

In addition to our Batch Rendering announcements, we are launching a preview of a cool new management application, Batch Labs! Batch Labs is a cross-platform desktop management tool which includes job submission capabilities as well as a rich management and monitoring experience, along with the ability to manage asset uploads and downloads. Batch Labs hosts a marketplace of supported applications which can be easily extended by customers for their own applications and custom workflows.

Lastly, I’d like to invite you to come and meet our team at SIGGRAPH 2017. We’re hosting sessions and will be at booth #923, showing off a bunch of cool demos with partners like Conductor, Avid, Vizua, JellyFish Pictures, and PipelineFX along with exciting Microsoft hardware like the HoloLens and new Surface Studio.

If you are in the Los Angeles area during the week, you’re more than welcome to use the promo code “MSFT2017” to register for a complimentary visitor pass to the expo floor of SIGGRAPH.

Thank you all for your support in hitting this important milestone for Azure Batch Rendering. We are looking forward to continue working with you on the further expansion of the product and welcome your continued feedback!

Get more information and documentation on using Azure Batch Rendering.
Quelle: Azure

What’s new in ZuulV3

Zuul is a program used to
gate a project’s source code repository so that changes are only
merged if they pass integration tests. This article presents some
of the new features in the next version:
ZuulV3

Distributed configuration

The configuration is distributed across projects’ repositories,
for example, here is what the new zuul main.yaml configuration will
look like:

- tenant:
    name: downstream
    source:
      gerrit:
        config-projects:
          - config
        untrusted-projects:
          - restfuzz
      openstack.org:
        untrusted-projects:
          - openstack-infra/zuul-jobs:
              include: job
              shadow: config

This configuration describes a downstream tenant with two sources. Gerrit
is a local gerrit instance and openstack.org is the review.openstack.org
service. For each source, there are two types of projects:

config-projects hold configuration information such as logserver access.
Jobs defined in config-projects run with elevated privileges.
untrusted-projects are projects being tested or deployed.

The openstack-infra/zuul-jobs project has special settings discussed below.

Default jobs with openstack-infra/zuul-jobs

The openstack-infra/zuul-jobs repository contains common job definitions and
Zuul only imports jobs that are not already defined (shadow) in the local
config.

This is great news for Third Party CIs that will easily be able to re-use
upstream jobs such as tox-docs or tox-py35 with their convenient
post-processing of unittest results.
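As a sketch, a third-party CI could build its tenant on those upstream jobs alone. The tenant name below is hypothetical; the project settings mirror the main.yaml shown earlier:

```yaml
# Hypothetical third-party CI tenant reusing upstream job definitions.
- tenant:
    name: thirdparty-ci
    source:
      openstack.org:
        untrusted-projects:
          # Import only job definitions; "shadow: config" lets a local
          # config project override jobs of the same name.
          - openstack-infra/zuul-jobs:
              include: job
              shadow: config
```

With this in place, a project’s pipelines can reference tox-py35 or tox-docs directly without defining them locally.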

In-repository configuration

The new distributed configuration enables a more streamlined workflow.
Indeed, pipelines and projects are now defined in the project’s repository
which allows changes to be tested before merging.

Traditionally, a project’s CI needed to be configured in two steps: first the jobs
were defined, then a test change was rechecked until the job worked.
This is no longer needed because the jobs and configuration are set directly in
the repository, and the CI change itself undergoes the CI workflow.

After being registered in the main.yaml file, a project author can submit a
.zuul.yaml file (along with any other changes needed to make the test succeed).
Here is a minimal zuul.yaml setting:

- project:
    name: restfuzz
    check:
      jobs:
        - tox-py35

Zuul will look for a zuul.yaml file or a zuul.d directory as well as hidden
versions prefixed by a ‘.’. The project can also define its own jobs.
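For instance, a project defining its own job alongside the project stanza could carry a .zuul.yaml like the following; the job name and playbook path are made up for illustration:

```yaml
# Sketch of an in-repository .zuul.yaml with a custom job.
# The job name and playbook path are hypothetical.
- job:
    name: restfuzz-functional
    parent: base
    run: playbooks/functional

- project:
    name: restfuzz
    check:
      jobs:
        - tox-py35
        - restfuzz-functional
```

Because this file lives in the repository, a change to the job definition is itself tested by the CI before it merges.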

Ansible job definition

Jobs are now created in Ansible, which brings many advantages over
the Jenkins Jobs Builder format:

A multi-node architecture where tasks are easily distributed,
the Ansible module ecosystem which simplifies complex tasks, and
manual execution of jobs.

Here is an example:

- job:
    name: restfuzz-rdo
    parent: base
    run: playbooks/rdo
    nodes:
      - name: cloud
        label: centos
      - name: fuzzer
        label: fedora

Then the playbook can be written like this:

- hosts: cloud
  tasks:
    - name: "Deploy rdo"
      command: packstack --allinone
      become: yes
      become_user: root

    - name: "Store openstackrc"
      command: "cat /root/keystonerc_admin"
      register: openstackrc
      become: yes
      become_user: root

- hosts: fuzzer
  tasks:
    - name: "Setup openstackrc"
      copy:
        content: "{{ hostvars['cloud']['openstackrc'].stdout }}"
        dest: "{{ zuul_work_dir }}/.openstackrc"

    - name: "Deploy restfuzz"
      command: python setup.py install
      args:
        chdir: "{{ zuul_work_dir }}"
      become: yes
      become_user: root

    - name: "Run restfuzz"
      command: "restfuzz --target {{ hostvars['cloud']['ansible_eth0']['ipv4']['address'] }}"

The base parent from the config project manages the pre phase to copy
the sources to the instances and the post phase to publish the job logs.
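As a sketch, such a base job is declared in the trusted config project with pre and post playbooks; the playbook paths here are assumptions for illustration:

```yaml
# Sketch of a base job in the trusted config project.
# Playbook paths are hypothetical.
- job:
    name: base
    pre-run: playbooks/base/pre    # copy the sources to the nodes
    post-run: playbooks/base/post  # collect and publish job logs
```

Every job that sets `parent: base` inherits these phases, so individual playbooks like playbooks/rdo only need to describe the test itself.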

Nodepool drivers

This is still a work in progress
but it’s worth noting that Nodepool is growing a driver based design to
support non-OpenStack providers. The primary goal is to support static node
assignments, and the interface can be used to implement new providers.
A driver needs to implement a Provider class to manage access to a new API,
and a RequestHandler to manage resource creation.

As a Proof Of Concept, I wrote an
OpenContainer driver that can spawn
thin instances using RunC:

providers:
  - name: container-host-01
    driver: oci
    hypervisor: fedora.example.com
    pools:
      - name: main
        max-servers: 42
        labels:
          - name: fedora-26-oci
            path: /
          - name: centos-6-oci
            path: /srv/centos6
          - name: centos-7-oci
            path: /srv/centos7
          - name: rhel-7.4-oci
            path: /srv/rhel7.4

This is good news for operators and users who don’t have access to an
OpenStack cloud, since Zuul/Nodepool may be able to use new providers
such as OpenShift.
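As an illustration, static node assignment could eventually be described along these lines; since the driver design is still in progress, the driver name and schema below are assumptions:

```yaml
# Hypothetical static-node provider configuration; the final
# schema may differ as the driver work is in progress.
providers:
  - name: static-rack
    driver: static
    pools:
      - name: main
        nodes:
          - name: node01.example.com
            labels: centos-7
```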

In conclusion, ZuulV3 brings a lot of cool new features to the table,
and this article only covers a few of them. Check the
documentation
for more information and stay tuned for the upcoming release.
Quelle: RDO

Google Cloud Platform at SIGGRAPH 2017

By Todd Prives, Product Manager

For decades, the SIGGRAPH conference has brought together pioneers in the field of computer graphics. This year at SIGGRAPH 2017, we’re excited to announce several updates and product releases that reinforce Google Cloud Platform (GCP)’s leadership in cloud-based media and entertainment solutions.

As part of our ongoing collaboration with Autodesk, our hosted ZYNC Render service now supports its 3ds Max 3D modeling, animation and rendering software, which is widely used in the media and entertainment, architecture and visualization industries. Artists using ZYNC Render can scale their rendering needs to tens of thousands of cores on-demand to meet the ever-increasing need for high resolution, large format imagery. Support for 3ds Max builds on our success with Autodesk; since we announced Autodesk Maya support in April 2016, users have logged nearly 27 million core hours on that platform, and we look forward to what 3ds Max users will create.

ZYNC Render for Autodesk 3ds Max

At launch of 3ds Max support, we’ll also offer support for leading renderers such as Arnold, an Autodesk product, and V-Ray from Chaos Group.

In addition, we’re showing a technology preview of V-Ray GPU for Autodesk Maya on ZYNC Render. Utilizing NVIDIA GPUs running on GCP, V-Ray GPU provides highly scalable, GPU-enhanced rendering performance.

We’re also previewing support for Foundry’s VR toolset CaraVR on ZYNC Render. Running on ZYNC Render, CaraVR can now leverage the massive scalability of Google Compute Engine to render large VR datasets.

We’re also presenting remote desktop workflows that leverage Google Cloud GPUs such as the new NVIDIA P100, which can perform both display and compute tasks. As a result, we’re taking full advantage of V-Ray 3.6 Hybrid Rendering technology, as well as NVIDIA’s NVLink to share data across multiple NVIDIA P100 cards. We’re also showing how to deploy and manage a “farm” of hundreds of GPUs in the cloud.

Google Cloud’s suite of media and entertainment offerings is expansive — from content ingestion and creation to graphics rendering to distribution. Combined with our online video platform Anvato, core infrastructure offerings around compute, GPU and storage, cutting-edge machine learning and Hollywood studio-specific security engagements, Google Cloud provides comprehensive and end-to-end solutions for creative professionals to build media solutions of their choosing.

To learn more about Google Cloud in the media and entertainment field, visit our Google Cloud Media Solutions page. And to experience the power of GCP for yourself, sign up for a free trial.
Quelle: Google Cloud Platform

Teradata Bolsters Analytics and Database capabilities for Azure

This post is co-authored with Rory Conway, Product Manager, Teradata on Azure.

The enterprise-class capabilities of Teradata Database have been enhanced for Microsoft Azure Marketplace. This is great news for organizations using, or wanting to try, Teradata Database as their engine for advanced analytics to deliver optimal business outcomes for areas such as customer experience, risk mitigation, asset optimization, finance transformation, product innovation, and operational excellence. Coupled with the recent launch of Teradata Demand Chain Management on Azure, these are substantial improvements that yield an impressive solution set worthy of your attention. Try it for yourself.

Teradata Primer

As you may already know, Teradata Corporation has long been regarded as the market’s leading data warehouse provider for analytics at scale. With more than $2B in revenue from over 1,400 customers and powered by 10,000 employees, Teradata has the deep roots and technical strength that large organizations seek when aligning with a strategic partner.

 

Teradata by the numbers:

35+ years of innovation and leadership
~1,400 customers in 77 countries
~10,000 employees in 43 countries
$2.8B in revenue in 2016

Teradata works with many leading firms:

Airlines – All top 6 airlines
Banking – 18 of top 20 global commercial and savings banks
Communications – 19 of top 20 telecommunications companies
Manufacturing – 13 of top 20 manufacturing companies
Retail – 15 of top 20 global retailers

Teradata’s reputation is based on analytic performance at scale. The company’s deployment strategy is centered on hybrid cloud and license portability – “Teradata Everywhere™” – which make it easier for customers to buy Teradata in smaller increments and grow consumption as needed, where needed.

Teradata’s market research predicts that more than 90 percent of companies will employ a mix of on-premises and cloud resources by 2020. As such, Teradata emphasizes software consistency across all its deployment options aided by a strong bench of services experts helping organizations evolve to hybrid architectures and derive the most value from their analytic investments.

Teradata Database on Azure

Now let’s turn our attention to the options. Teradata Database on Azure provides four tiers of software with different features at varying price points. You choose a bundle corresponding to what you need for the workload at hand. From low to high, the Teradata Database tiers are:

Developer – Free software for application development
Base – Positioned for low concurrency, entry-level data warehouses
Advanced – Supports high-concurrency, production mixed workloads
Enterprise – Top-of-the-line offer with sophisticated workload management

Azure deployment is offered on multiple Virtual Machine (VM) types available in nearly every region globally, including DS15_v2 and DS14_v2 (Azure premium storage) and G5 and D15_v2 (local storage). There are also other analytical ecosystem components available, such as Teradata QueryGrid which enables you to pull and combine insights from multiple data repositories.

The table below shows the currently available options. Please see Teradata’s website for the latest configurations and software pricing.

Getting going with Teradata on Azure is easy. An Azure Marketplace Solution Template leads you through an intuitive step-by-step provisioning process and you can be up and running with an entire analytical ecosystem in about an hour. Here’s a screenshot illustrating the comprehensive guided deployment process:

Teradata Demand Chain Management on Azure

It’s no secret that many companies, particularly in retail and consumer goods, have aligned themselves with Azure as their public cloud provider of choice. An additional software as a service (SaaS) option from Teradata for the retail and consumer goods segments is Teradata Demand Chain Management (DCM), an application suite that provides forecasting, fulfillment, and demand chain analytics. 

Teradata DCM employs consumer demand data to develop daily and/or weekly sales forecasts of each item in multiple store locations based on historical performance with seasonal and causal identification. The forecast is then combined with inventory and fulfillment strategies which pull inventory through your supply chain based on expected sales across each location. The result is a reversal in the traditional supply chain flow of information, allowing store and SKU-level demand to serve its proper role at the peak of the pyramid.

Get started with Teradata on Azure today

Teradata brings powerful analytic capabilities to the Azure community. For existing Teradata customers, consistent software in Azure means that you can leverage investments you’ve already made. For anyone else, trying Teradata on Azure is an easy, low-risk way to determine whether it’s right for you. Try it today by deploying Teradata from Azure Marketplace.

To be secure-by-default, the Teradata deployment does not automatically create public IP addresses for the VMs. After deployment, you can access the Teradata VMs from a jumpbox VM that you already have in the same Virtual Network, via VPN/ExpressRoute, or by manually associating public IP addresses and specific Network Security Groups with the VMs you need to access, such as Viewpoint, Data Mover, Data Stream Controller, Ecosystem Manager, or Server Management.

For additional information, see the Teradata on Azure Getting Started Guide.
Quelle: Azure