Showing posts with label performance.

Wednesday, 14 August 2013

Teradici ups virtual desktop performance with caching and GPUs


Teradici has updated the PCoIP Hardware Accelerator with caching and better GPU support to improve the performance of virtual desktop installations based on VMware's Horizon View platform.

The PCoIP protocol is used to run desktops in the data center. To do that, the protocol compresses, encrypts and encodes the desktop and transmits the necessary pixels across an IP network to a compatible client. Depending on the applications and the number of users involved, that can put a lot of stress on the underlying servers, and that's where Teradici's PCoIP Hardware Accelerator comes in.

By off-loading image encoding to a dedicated hardware card, the accelerator reduces peaks in CPU utilization, ensuring more consistent desktop performance for users, according to Teradici.

The next software update, the version 2.3 driver, will add support for caching, and the company has also fine-tuned its pixel processing to better take advantage of GPUs in VMware environments, according to Olivier Favre, director of product management at Teradici. The latter means that the higher frame rates generated by the GPU on the server can still be delivered to the user's screen, he said.

The addition of caching lets Teradici decrease bandwidth utilization. According to Teradici's measurements, bandwidth can be cut by up to 50 percent when users run graphics-intensive tasks, such as CAD/CAM or video content, in which a lot of pixels change on the display.
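
Teradici has not published the details of its caching scheme, but the general idea behind image caching in remoting protocols can be sketched roughly: the sender fingerprints each tile of a frame and sends full pixel data only for tiles the client has not seen before, substituting a short cache reference when the content already sits in the client's cache. The tile size, function names and cache layout below are illustrative assumptions, not Teradici's implementation.

```python
import hashlib

TILE = 64  # hypothetical tile size in pixels; the real scheme is not public


def frame_update(frame_tiles, client_cache):
    """Illustrative sketch: decide per tile whether to send pixels, a cache
    reference, or nothing at all.

    frame_tiles  maps (x, y) tile positions to raw pixel bytes.
    client_cache maps tile positions to the digest last acknowledged by the client.
    """
    updates = []
    for pos, pixels in frame_tiles.items():
        digest = hashlib.sha1(pixels).hexdigest()
        if client_cache.get(pos) == digest:
            continue                                    # tile unchanged: nothing sent
        if digest in client_cache.values():
            updates.append((pos, "cache-ref", digest))  # content already cached: tiny reference
        else:
            updates.append((pos, "pixels", pixels))     # new content: full payload
        client_cache[pos] = digest
    return updates
```

The "cache-ref" path is where the bandwidth savings come from: a 20-byte digest stands in for a full tile of pixel data whenever previously transmitted content, such as scrolled or repeated imagery, reappears on screen.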

Thanks to the performance improvements, interest is now growing in virtualizing entry-level workstations as well as machines used for higher-end work, Favre said.

The PCoIP Hardware Accelerator runs on a PCI card. Drivers are installed on VMware's ESXi hypervisor and on the virtual machines whose performance users want to accelerate. Off-loading of image encoding can then be enabled via the View administration tool.

The 2.3 driver release will be available on August 20 as a free download.

Amazon improves performance of CloudFormation management platform


Amazon Web Services (AWS) has added new features to its CloudFormation management platform that aim to improve performance and simplify updates.

As companies get more used to running applications in the cloud, they are putting together more complex systems. That in turn puts higher demands on management platforms, which have to allow users to take better advantage of the programmability and scalability of the cloud.

CloudFormation aims to give developers and systems administrators a way to create and manage a collection of related resources, provisioning and updating them in an orderly and predictable fashion.

The latest additions to the platform are parallel stack processing and nested stack updates.

The first feature allows CloudFormation to create, update, and delete resources in parallel in order to improve the performance of these operations. For example, provisioning a RAID 0 setup, which involves the creation of multiple Elastic Block Store volumes, is now faster because CloudFormation can provision the volumes in parallel, Amazon said in a blog post.

The platform automatically determines which resources in a template can be created in parallel. Templates are used as a blueprint when running CloudFormation, describing the stack of applications and resources needed.
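
As a rough illustration of the RAID 0 example above (not code from Amazon's post), the sketch below builds a minimal CloudFormation template in which two EBS volume resources have no dependency on each other, so CloudFormation is free to provision them in parallel. The sizes and availability zone are placeholder values.

```python
import json

# Illustrative template only: two independent EBS volumes for a RAID 0 set.
# Because neither volume references the other, CloudFormation can create
# them in parallel rather than one after another.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "RaidVolume1": {
            "Type": "AWS::EC2::Volume",
            "Properties": {"Size": "100", "AvailabilityZone": "us-east-1a"},
        },
        "RaidVolume2": {
            "Type": "AWS::EC2::Volume",
            "Properties": {"Size": "100", "AvailabilityZone": "us-east-1a"},
        },
    },
}

print(json.dumps(template, indent=2))
```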

The second new feature, called nested stack updates, deals with how resources are updated. Using CloudFormation, a three-tier application consisting of, for example, a web tier, app tier, and database tier can be created together and in the correct order. With the introduction of nested stack updates, users can also update all the parts in one go, instead of having to update each part individually.
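
Nested stacks are expressed as AWS::CloudFormation::Stack resources whose TemplateURL points at a child template. The parent template sketched below is a hedged illustration of the three-tier example, with placeholder bucket URLs; the DependsOn entries encode the creation order, and a single update of this parent stack now propagates to all three children.

```python
import json

# Illustrative parent template only (placeholder URLs): each tier of the
# three-tier application is a nested stack.  DependsOn enforces the creation
# order: database first, then the app tier, then the web tier.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DatabaseTier": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL": "https://s3.amazonaws.com/my-bucket/db-tier.template"},
        },
        "AppTier": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL": "https://s3.amazonaws.com/my-bucket/app-tier.template"},
            "DependsOn": "DatabaseTier",
        },
        "WebTier": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL": "https://s3.amazonaws.com/my-bucket/web-tier.template"},
            "DependsOn": "AppTier",
        },
    },
}

print(json.dumps(parent_template, indent=2))
```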

Thursday, 18 July 2013

Salesforce.com launches Sales Performance Accelerator

Salesforce.com is hoping customers will tap more pieces of its growing cloud software portfolio with a new product, Sales Performance Accelerator, that combines its CRM software with its Work.com performance management application as well as customer lead information from Data.com.

“We’re basically trying to make every sales rep an A-player” with the combined package of applications, said Mark Woollen, vice president of product marketing for the Sales Cloud.

Information from Salesforce.com’s Data.com service can help increase the amount of “pipeline,” or early-stage deals, salespeople have to work with, Woollen said.

Meanwhile, Work.com’s Facebook-like software environment gives managers a way to provide their sales teams with better coaching, leading to more consistent “win rates,” he said.

“One thing we find when we talk to sales organizations is that reps don’t know why they won or lost a particular deal,” said Nick Stein, senior director of marketing and communications.

While Salesforce.com customers were already able to purchase subscriptions for the three applications included in Sales Performance Accelerator, the points of integration have now been made much deeper, according to Stein and Woollen.

Salesforce.com is also offering a temporary break on pricing. For the next 90 days, Sales Performance Accelerator can be obtained for as little as $90 per user per month. Pricing will start at $110 per user per month after the promotional period.

Early users of Sales Performance Accelerator include Enterasys and CareerBuilder, according to Salesforce.com.

Beyond an attempt to generate more revenue through attractive bundling, Sales Performance Accelerator also represents a return of focus by Salesforce.com to its core sales force automation software business.
Of late, under CEO Marc Benioff, the company’s marketing efforts have focused on a theme of “customer companies,” with Salesforce.com positioned as a purveyor of tools that can help businesses make stronger connections with their partners and customers.


Salesforce.com has also been spending big on acquisitions in order to enter adjacent product areas, such as marketing. It recently paid $2.5 billion to acquire marketing software vendor ExactTarget, with Benioff saying that marketing could end up being a $1 billion annual business for Salesforce.com.  

Thursday, 11 July 2013

Move over, Linpack: Supercomputers get new performance test

The developer of the most widely used test for ranking the performance of supercomputers has said his metric is out of date and proposed a new test that will be introduced starting in November.

Jack Dongarra, distinguished professor of computer science at the University of Tennessee, said the Linpack test he developed in the 1970s, which has been the basis for the Top500 list of the world's fastest computers for the past 20 years, is no longer the most useful benchmark for how well a system can perform.

The new metric, he said, could change the way vendors design their supercomputers and will provide customers with a better measure of the performance they can expect for the types of real-world applications they'll be running.

The Top500 list is published twice a year, in June and November, and is closely watched as vendors and nations seek bragging rights for who has the fastest system. The current leader is the Tianhe-2, developed by China's National University of Defense Technology.

Linpack has been used to rank the systems since the first Top500 list was published in 1993, but it's no longer an indicator of real application performance, Dongarra said.

"Linpack measures the speed and efficiency of linear equation calculations," according to a statement Wednesday announcing the new benchmark, called the High Performance Conjugate Gradient (HPCG). "Over time, applications requiring more complex computations have become more common. These calculations require high bandwidth and low latency, and access data using irregular patterns. Linpack is unable to measure these more complex calculations."

HPCG is needed, Dongarra said in a telephone interview, in part because computer vendors optimize their systems to rank highly on the Top500 list. If that list is based on an out-of-date test, it encourages vendors to architect their systems in a way that's not optimal for today's applications.

"We don't want to build a machine that does well on this 'fake' problem. We want to build a machine that does well for a larger set of applications," said Dongarra, who developed the new test with a colleague, Michael Heroux, of Sandia National Laboratories in Albuquerque.

Because of the way the new test is being introduced, however, it could potentially spark disagreements over who really has the world's fastest supercomputer. That's because HPCG will be introduced gradually over time, and it could be years before it becomes the primary method for ranking the Top500.

"One of the nice things about Linpack is that there's one number, so it's very clear what we mean by the fastest computer. This will in fact generate two numbers," Dongarra said.

He plans to maintain the Linpack test alongside HPCG in part for the valuable trending information that Linpack provides, he said. But it will also continue to be used because it could take years before a significant number of supercomputers are tested against the new benchmark.

"I expect in November we'll just have a few entries based on this new benchmark. Populating the list with 500 entries is going to take some time, so I'd guess over the next five years we'd have a chance of seeing that list fully populated," he said.

Starting in November, "we're going to have a list of the Top500, and then we're going to have a second column, and that second column will be the new benchmark," Dongarra said.

"It may ultimately lead to a list that is based on this new benchmark, but certainly not right away," he said.

The dueling benchmarks could potentially lead to different supercomputing centers, which covet positions on the Top500, claiming leadership based on both the old and the new tests. That could make it hard to say definitively who has the fastest supercomputer, though it seems the Top500 will consider Linpack to be the primary ranking metric at least for now.

The new test could lead to some "big changes" in which systems show the greatest performance potential, Dongarra said. The HPCG benchmark stresses architectural features that systems tuned to do well on the Linpack test may find hard to optimize for, he said.

"I think individuals will have to then evaluate what number makes sense for their particular mix of problems. And over time I would hope that the new [benchmark] would carry more weight."

HPCG was developed partly at the behest of the U.S. Department of Energy, Dongarra said. "They're looking towards exascale now, and the concern is that if you build an exascale computer that will do this Linpack test well, it may not do well at other problems. So that's one of the issues here."

The University of Tennessee conducts joint projects with the DOE and Dongarra said he's familiar with their application requirements. But he said the new test will be a good indicator of how computers will run other types of applications as well, such as those used for oil and gas exploration or weather modeling.

"One of the problems with the Linpack is that is stresses only one component, that being the floating point potential of the computer," he said. It doesn't stress areas like system latency and memory hierarchy, and the new test will be able to expose weaknesses of systems as they relate to those areas.

Dongarra plans to distribute the software for the new test to computer vendors in the next few months, giving them a chance to begin optimizing their systems and to propose changes to HPCG before it's introduced formally at the SC13 supercomputing conference in Denver this November, where the next Top500 list will be announced.