Tuesday 28 July 2009

18 cracks found in Delhi Metro pillars

From The Economic Times

NEW DELHI: Structural audits on piers under construction for Delhi Metro Rail Corporation (DMRC) tracks have revealed 18 cracks on pillars. The company, however, said Monday that there was "no need for panic".

The DMRC has begun to re-check all the piers of the metro's 190 km-long phase II after the July 12 accident, caused by a crack, killed six people. Independent consultant Shirish Patel and Associates was appointed to conduct the structural audits.

DMRC Managing Director E. Sreedharan had asked metro engineers to inspect all the piers built on their respective lines for any cracks.

"They found hairline cracks on eight piers of the Central Secretariat to Gurgaon corridor, two piers of the Noida corridor and eight piers of the Central Secretariat to Badarpur corridor. All appear to be superficial in nature," DMRC spokesperson Anuj Dayal said.

Dayal also said that Sreedharan has asked the consultant to reassess the design at the 18 points in detail, in addition to the overall checking of phase-II structures.

"DMRC will carry out further corrective action if required and take necessary remedial measures after Shirish Patel and Associates have examined these locations," Dayal added.

On July 12, an elevated stretch of the metro rail under construction in south Delhi collapsed with tonnes of concrete and steel, killing six and injuring 15 others.

Workers had speculated that the cracks noticed on pillars there were the reason for the accident. However, investigations so far have pointed to a deficiency in the design or construction materials.

While residents in Noida have expressed fear that DMRC is in a hurry to conduct trial runs on the nearly finished stretch, the company said that surface cracks in concrete structures are not uncommon and trials are likely to continue along with the investigation.

"There is no need for panic in the matter. In fact, Indian standard codes for design of reinforced concrete structures allows and permits tension cracks within limits," a DMRC statement said.

To further the investigation, ultrasonic and rebound hammer testing will be carried out to check the integrity and quality of the concrete. In addition, "DMRC will get load testing done wherever considered necessary," Dayal said.

"Similar testing was done during phase I of the construction also as a precautionary measure whenever required."

Monday 27 July 2009

Randomizing Input Data for Visual Studio Load Tests

From Ajax World Magazine

While preparing for my presentation "Load and Performance Testing: How to do Transactional Root-Cause Analysis with Visual Studio Team System for Testers", which I gave at the Boston .NET User Group on May 13th, I came across certain load-testing topics. One was: how to randomize input data.

If you go with Visual Studio, you can code your web tests in any .NET language, which gives you the freedom to create random data using, for example, System.Random. If, however, you want to use the "nicer" UI-driven web test development (as I call it), your options are limited. As you add web requests, you can parameterize input values using context parameters with hard-coded values, reuse values extracted from a previous request, or use data from an external data source such as a database, CSV or XML file.

Basic random numbers for VSTS Web Tests

One thing that I missed was the ability to use basic random values, e.g. a random number from 1 to 5 to be used for a quantity field on a web form, or a random username with the pattern "testuserX" where X is in the range of my test user accounts.

To do that, there seems to be only one way: implementing a WebTest or WebTestRequest plugin that generates random data and makes it available as a context parameter to the test.
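
As a minimal sketch of that plugin approach (the class name, parameter names and account range below are illustrative, not taken from the original post), a WebTestPlugin can override PreWebTest to push random values into the test context, which the recorded requests can then reference through {{...}} bindings:

    using System;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Hypothetical plugin: seeds random context parameters before each
    // web test iteration runs.
    public class RandomDataPlugin : WebTestPlugin
    {
        // One Random per plugin instance; for heavy load tests a
        // thread-safe random source may be preferable.
        private readonly Random rng = new Random();

        public override void PreWebTest(object sender, PreWebTestEventArgs e)
        {
            // Random quantity from 1 to 5, e.g. for a quantity form field.
            e.WebTest.Context["RandomQuantity"] = rng.Next(1, 6).ToString();

            // Random username "testuserX", with X drawn from an assumed
            // pool of 50 test accounts.
            e.WebTest.Context["RandomUser"] = "testuser" + rng.Next(1, 51);
        }
    }

Once the plugin is attached to the web test, form post parameters can be bound to {{RandomQuantity}} and {{RandomUser}} just like any other context parameter.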

Friday 24 July 2009

National swine flu help service goes live

From The Guardian

Will the new freephone and website service buckle under the weight of growing panic about the pandemic? We check out its online advice and the public's reaction

3.03pm: The national swine flu help service finally goes live at 3pm today, with millions expected to call the telephone helpline - 0800 1513 100 - and website. Until half an hour ago the Department of Health press office said it was unable to state at what time the service would actually launch, amid reports of serious IT problems, which is unlikely to reassure the government's critics that it has the situation in hand.

3.04pm: Just minutes after the website is launched, it is apparently already overwhelmed with requests for information. Visitors clicking through to the section for patients in England get this message: "The service is currently very busy and cannot deal with your request at this time."

3.27pm: Nearly half an hour after its launch and there is still no access to the service. The website does prompt you for the following info: date of birth; current symptoms; medical history; NHS number or, if you are a foreign visitor, passport number or European Identity Card number; postcode. The Department of Health press release says if you are deemed to have swine flu symptoms you'll be given a unique access number and told where your nearest antiviral collection point is. Then you need to get your "flu friend" to go and pick up the drugs. They must show their own ID as well as yours.

3.29pm: The helpline offers callers two options - general information on swine flu, directing you to call 0800 1513513, and an assessment of your symptoms. There was only a short wait to get through to a member of staff.

3.44pm: Twitterers complain they can't access the website. djwhisky writes: "Pandemic Flu site overloaded already https://www.pandemicflu.direct.gov.uk/ - would've thought they'd do some load testing b4 launch #fail."

Thursday 23 July 2009

Industry View: Green Has to Mean Better

From SMT

It is easy to point a finger at certain substances and say they are bad and should be eliminated. Sometimes, however, even more difficult than eliminating a substance is finding a replacement for it. Care must be taken to avoid replacing a “bad” substance with another that causes even more problems than its predecessor.

We must ensure that changes are science-based. When there is technical and/or ecological evidence of how the industry can better protect the environment, we should take the proactive approach of carefully evaluating alternative technologies to determine trade-offs between product functionality, environmental impact, reliability, safety, and cost. Stakeholders must be involved in this process.

It is difficult to effect industry-wide elimination of specific materials without the deadlines of pending legislation. The infrastructure changes required are especially challenging, given that most companies are not vertically integrated. Widespread conversion requires consensus on solutions and requirements across the supply chain.

iNEMI has organized initiatives to help build such consensus. The first focuses on PVC alternatives, while the second addresses issues related to elimination of HFRs.

The PVC Alternatives Project is evaluating alternatives for PVC power cables to determine trade-offs between product functionality, environmental impact, reliability, safety, and cost. Participating members will conduct cradle-to-grave environmental lifecycle assessments (LCAs) comparing PVC and PVC-free compounds for U.S.-based detachable desktop power cord applications (cable, connectors, wire). They also will compare equivalent functional units that meet UL requirements and conduct performance testing to gain a better understanding of the electrical, mechanical, and safety aspects of PVC-free alternatives.

Many of iNEMI’s OEM members and their suppliers are assessing the feasibility of a broad conversion to HFR-free PCB materials. Significant technical questions remain. What electrical properties are needed to meet high-speed signaling requirements? With many HFR-free materials showing higher stiffness, what mechanical properties are needed to ensure system reliability isn’t degraded? Can design modifications reduce sensitivity to electrical and material properties?

Tuesday 21 July 2009

BreakingPoint First to Deliver Dual Stack IPv4/IPv6 Testing Capabilities and Support for Current IPv6 Standards

From Earth Times

AUSTIN, TX -- 07/21/09 -- The current blend of IPv4 and IPv6 network traffic can have serious repercussions on network device and application server performance and security. Yanick Pouffary, technology director for the North American IPv6 Task Force and an HP Distinguished Technologist, recently told CIO Magazine: "At least half of U.S. CIOs have IPv6 on their networks that they don't know about, but the hackers do. You can't ignore IPv6. You need to take the minimum steps to secure your perimeter. You need firewalls that understand IPv4 and IPv6. You need network management tools that understand IPv4 and IPv6."(1)

Only by testing IPv6-aware firewalls, intrusion detection systems and other network devices using both IPv4 and the most current IPv6 traffic can you certify device resiliency and meet mandates for IPv6 compliance.

News Overview

-- BreakingPoint today became the only testing tools provider with dual stack IPv4/IPv6 testing using the most current IPv6 standards and support for both application and security traffic as of July 2009.

-- BreakingPoint is now the exclusive provider of resiliency testing using current IPv6 traffic to simulate true global network traffic. BreakingPoint has the unique ability to generate blended stateful application traffic mixed with live security attacks at line-rate speeds and high session counts, delivered from the same address space.

-- BreakingPoint dual stack testing is the industry's most comprehensive and up-to-date IPv6-capable testing, allowing Network Equipment Manufacturers (NEMs), service providers and application server suppliers to:


-- Simulate IPv6 traffic through each BreakingPoint testing component, including Client Simulator for load testing IPv6-capable application servers.
-- Enhance Layer 4-7 performance with more than 75 blended IPv6 native application protocols.
-- Authenticate security with more than 4,200 IPv6-capable security strikes.
-- Ensure compliance with the latest IPv6 standards as of July 2009.
-- Validate real-world performance by blending IPv4 and IPv6 traffic.
-- Reduce time-to-test through importing of IPv6 traffic captures.

Wednesday 15 July 2009

Self-sufficient robot

From The Engineer

Robotic Technology of Potomac, Maryland, is developing an autonomous robot that is able to perform long-range, long-endurance military missions without the need for manual or conventional refuelling.

The patent-pending robot can do so because it can find, ingest and extract energy from biomass in the environment, as well as use conventional and alternative fuels such as petrol, diesel, propane and solar power when suitable.

The source of power for the so-called Energetically Autonomous Tactical Robot, or EATR, is a hybrid external combustion engine system developed by Cyclone Power Technologies.

Unlike internal combustion engines, the Cyclone engine uses an external combustion chamber to heat a separate working fluid (deionized water) which expands to create mechanical energy.

This is integrated with a biomass combustion chamber to provide heat energy for the engine that then provides electric power for a rechargeable battery pack, which powers sensors, processors and controls, and a robotic arm/end effector.

The data from the optical, ladar, infrared and acoustic sensors is processed by a control system to provide the situational awareness such that the robot is able to identify and locate suitable biomass.

The control system also controls the movement and operation of the robotic arm/end effector to manipulate the biomass and ingest it into the combustion chamber as well as control the operation of a hybrid external combustion engine to provide suitable power.

So far in the development cycle, engineers at Cyclone Power Technologies have coupled their proprietary steam generator with the compact biomass furnace and produced sufficient steam to power the robot's six-cylinder, 16HP Waste Heat Engine (WHE).

With this stage of development complete, Cyclone will now commence system performance testing with the goal of delivering a complete beta system to Robotic Technology in the next 90 days.

Tuesday 14 July 2009

Two sides of the ‘Big Blue lemon’ row

From Manila Standard Today

It’s easy to get worked up over the raging war between the Government Service Insurance System and the Philippine office of software giant International Business Machines.

You can take the side of the state pension fund and lament that the crash of the Integrated Loans, Membership, Acquired Assets and Accounts Management System has caused inordinate delays in the processing of loans, benefits and first-time pension claims of the lowly government employees who comprise its membership. You can play up emotions and say that while there are millions of pesos involved in the deal and in the lawsuits that have sprouted as a result of the row, the fund members only need a few thousand pesos to pay their children's tuition, settle hospital bills, buy some medicine, or fix the house.

On the other hand, you can imagine being on the side of IBM, a company that has built its reputation over decades, doing business in numerous countries all over the world. IBM has been operating in the Philippines for more than 70 years. Certainly, accusations of supplying "defective" database software to one of its longtime clients, a state agency at that, dent this prized reputation. This, too, should not be taken lightly.

...

In a press statement released earlier this month, IBM Philippines quoted Questronix, which said that while the system had been operating successfully since May 26 (after IBM had provided the so-called build to address the problem), the overall stability of the system "will continue to be in question until the GSIS takes steps to address the many other issues impacting the system...these include instituting backup and recovery procedures, conducting appropriate performance testing and tuning in accordance with industry practice and having certified personnel manage complex systems on a regular basis."

Friday 10 July 2009

N.J. computerizing driver testing

From The Philadelphia Inquirer

TRENTON - The New Jersey driver testing system is undergoing an overhaul. The Motor Vehicle Commission announced yesterday that Robbinsville-based New Jersey Business Systems has been awarded a contract valued at about $4 million to update the system.
The project is to begin this month and could be complete by late spring.

The written test will use networked personal computers, allowing for more efficient test management and scheduling. Security measures to prevent cheating will include random tests unique to each applicant.

Changes to the road test include GPS tracking to prevent fraud and the use of lightweight tablet computers by examiners for automated scoring.

Thursday 9 July 2009

What customers say about WAN optimization

From Network World

Network optimization means different things to different people. That’s one of the reasons I enjoy keeping tabs on the field, since the technologies, customer settings and implementation tactics seem endlessly varied.

This week I’m going to kick off a series of customer stories that reflect that variability. Over the last several weeks I’ve had the chance to talk to a handful of enterprise IT executives about their recent and ongoing WAN optimization projects. Beginning with the next newsletter, I’m going to share those stories.

For instance, there’s Merial, the animal healthcare giant behind consumer pet supply brands such as Heartgard and Frontline. Based in Atlanta, Merial operates in more than 150 countries. When the company decided to centralize its key business applications, it chose WAN optimization gear from Silver Peak Systems to boost application delivery worldwide and to speed data replication between data centers for business continuity and disaster recovery purposes.

Another company that shared its story is CGGVeritas, a geophysical company that works with major petroleum companies to help them find oil and gas reserves. CGGVeritas collects, crunches and distributes massive quantities of data, and it’s using Aspera’s file transport software to enable fast, predictable and reliable transfers of seismic data over the WAN.

Dollar Thrifty Automotive Group shared details about how it prepped for the peak summer car rental season, in part by load testing its two redesigned Web sites.

Wednesday 8 July 2009

Your Web App, Their Experience: Load Testing 2.0

From TechNewsWorld

Traditional load-testing methodologies can measure the strength of an enterprise's internal infrastructure. However, if external, third-party components aren't delivering snappy Web application performance, customers likely won't care whose fault it is -- they'll just go away. Load testing 2.0 is a way to assess your Web app's performance from the customer's point of view.

Imagine that it's "show time" for your company's annual peak period of e-commerce traffic. If you've ever been an e-commerce manager for a toy company on Black Friday, a floral company the day before Mother's Day, or a sporting events ticketing company a month before the Super Bowl, then you can surely relate.

Your online customer service representatives are trained and primed to help your customers. Your warehouse shelves are stocked, and your logistics providers are all lined up at the door. Your marketing promotions and campaigns are in full throttle, and your feature-rich Web applications including search, online catalogs, shopping carts, order status information, ratings and reviews, streaming video and more are all ready to roll.

Then -- boom! -- something in your most critical Web application goes awry the very day or days that exceptional performance is needed most, bringing your e-commerce operation to a screeching halt. Thousands of shopping carts and product searches go abandoned, and you have more disgruntled customers than you can imagine.

"Impossible," you might say, "we've conducted internal testing of our Web applications inside and out! All our tests have passed, and we are confident our internal infrastructure can handle our best-case traffic and then some." I'm here to tell you -- testing internal components is not enough to "get it right." With potentially huge revenue hits and brand image on the line, is anything less than exceptional performance really something you can live with?

You're Only as Strong as Your Web App's Weakest Link

Let's start with a look at today's Web applications, which have evolved from single-function tools to extended, interdependent, multi-tier delivery chains comprising numerous third-party applications and services. The performance of your Web application in its entirety hinges on the performance of each and every third-party application or service comprising it. Consider an online sales application that includes search, shopping cart and check-out functionalities. Together, these functionalities comprise a highly interdependent Web application delivery chain, and poor performance at any step can bring down the performance of the entire application.

Today's Web sites incorporate an average of six third-party applications and services delivering content and functionalities from beyond the firewall, all converging and assembling in your customers' browsers. Third-party applications and services are more prevalent than one might think and include such commonly used features as CDNs, Omniture and Google (Nasdaq: GOOG) Analytics. While these third-party applications and services are designed to enable a richer online experience, they also present a liability, since it's estimated they account for 50 percent or more of the time a user spends waiting for a Web site or application to load.

In a recent study, Aberdeen Group found that a 1-second increase in response time can reduce online sales conversions by 7 percent. In the event of poor application performance, customers simply don't care which of your third-party application providers is to blame. Instead, they will hold you responsible, and failure to guarantee performance anywhere in your Web application delivery chain can result in significant damage to your brand and revenue.

Tuesday 7 July 2009

Large Hadron Collider grid stress-tested

From CNet News

The grid that will process data from the Large Hadron Collider has undergone stress testing, as CERN and other groups try to gauge its limits.

The tests, called Scale Testing for the Experiment Program '09, threw huge amounts of data around the distributed computing project, which uses dedicated optical-fiber networks to distribute data from CERN (European Organization for Nuclear Research) to 11 main computer centers in Europe, Asia, and North America.

From these centers, data is dispatched to over 140 centers in 33 countries around the globe, where the LHC data is managed and processed. The recent grid tests, which lasted for two weeks, were completed before the beginning of July.

LHC computing-grid project leader Ian Bird said Friday that CERN had tried to break the grid but had not succeeded.

"People were trying to break the system by seeing how much data we could push through it, but we didn't (break it)," Bird told ZDNet UK. "The test was successful."

Data from all the experiments running at CERN--including analyses from the Atlas particle detector, which is attached to the LHC--were processed through the grid, according to Bird. While the amount of data expected from the LHC will be in the area of 1.3GB per second, the grid systems were bombarded with 4GB per second. "The data volume got to a much larger scale than is needed," Bird said.

Monday 6 July 2009

GSIS did not address database crash, says IBM

From Inquirer.net

MANILA, Philippines – IBM continues to assert that it is not at fault for the major database crashes experienced by the Government Service Insurance System (GSIS).

In a statement, the company warned that the GSIS has “made it impossible” for both parties to resolve the issue more constructively.

IBM was responding to several announcements made by the GSIS through paid print and TV advertisements, weeks after the government agency filed cases against IBM.

On June 3, GSIS charged IBM Philippines, its US parent company, and systems integrator Questronix with failure to fix the crashes of its Integrated Loans, Membership, Acquired Assets and Accounts Management System (ILMAAAMS).

GSIS alleged that IBM’s DB2 database software was the cause of the crashes, which started in April. IBM countersued with a libel case against GSIS.

IBM also warned that ILMAAAMS would continue to experience stability problems until the GSIS implements several suggestions made by Questronix in May.

These suggestions include instituting backup and recovery procedures, conducting system performance testing based on industry standards, and employing certified personnel to regularly manage its systems.

Friday 3 July 2009

CDC's "Lord of the Rings" to Test in July

From JLM Pacific Epoch

CDC Corporation's (NASDAQ:CHINA) online games subsidiary CDC Games plans to start technology load testing of its licensed MMORPG The Lord of the Rings Online in mid-July, reports Sina. The company announced June 30 that it expects to launch the game in the second half of 2009, whereas previous reports said it had slated a third quarter release.

Thursday 2 July 2009

Baselining VoIP service quality

From CED

Simple changes to the installation procedure can provide significant returns.

The network and infrastructure are architected, the product is developed, load testing of the infrastructure is complete, the marketing campaign is underway and customers are signing up at a blistering pace. You want your triple-play installation to go smoothly and don’t want to send a technician to the house for installation issues that can be detected automatically. The ideal scenario is a self-installation by the customer. No one has to be home, there is no truck roll and costs are kept to a minimum.

Unfortunately, the limited technical competence of customers and the number of unsuccessful installations have kept the percentage of self-installations relatively small. Most carriers still send technicians for at least one triple-play service installation, and some are sending a technician for every new service turn-up. Truck roll costs vary from carrier to carrier but range up to $500. If a second truck must be rolled just to correct the initial installation, 10 months' revenue from a $50 service is wasted.

Tuesday 30 June 2009

JustGiving CEO pledges refund over upgrade cockup

From The Register

JustGiving.com’s CEO has apologised to users of the company’s online charity donation service, following a clumsy upgrade that has plagued the system since going live over a week ago.

The firm is offering cheesed-off users, who struggled to access fundraising pages and log on to the system after the relaunch of the JustGiving website, a refund of the five per cent transaction fee for any donations made in the seven days since 20 June.

“We did carry out extensive testing before we launched it last weekend. However, what we now know is that we didn’t test it extensively enough, or try hard enough to break it,” admitted JustGiving’s boss Zarine Kharas in a miserable blog post late last week.

In the meantime the company is continuing to grapple with the upgrade cockup.

“We take full responsibility for our mistakes and our tech team is working 24 hours a day to fix everything, and the rest of the team are staffing the phones and email enquiries to try and help out as quickly as possible,” said Kharas.

“The bottom line is this: we know that the performance of the new site over the past week has been totally unacceptable. We take full responsibility for that and are committed not only to fixing it, but also to showing that we’re sorry.”

JustGiving’s CEO is hoping that refunding the transaction fee to those affected by the upgrade will help draw a line under the matter.

However, many are complaining it doesn’t go far enough to compensate those people who have faced major strife collecting and making donations via the site.

“To me, it just seems a bit dotcomish, a bit amateurish, which is fine if you’re launching a free service from your garage, not so fine if you’ve been going for 10 years and have turned over £20+ million,” commented angry JustGiving user SteveK on the firm’s blog.

On Saturday the firm posted an update to its blog, confirming it was still having problems but adding that most issues had now been fixed.

Unfortunately, some users are still struggling to view fundraising pages, while others have been unable to create new pages since the website relaunched.

Users continue to complain about the upgrade snafu.

"My event completed yesterday, everybody raising money for the charity I am supporting has had issues in the build-up to this event," wrote CharlieW in a comment below the company's latest blog post.

"How many donations are being lost because donators are put off by amateur implementation of new website?", asked "less than impressed" user Dave.

Last week the website’s CTO Dominic Lacey confessed to El Reg that deployment of the upgrade had been less than smooth.

"Load testing didn't accurately reflect the way it's being used in the live environment," explained Lacey last Tuesday, a message that was echoed yesterday by the company’s boss.

We’ve asked JustGiving to provide more details on when users can expect to see the website return to life fully in order that they can, you know, help to improve lives and that via their much-needed fundraising efforts. At time of writing the firm hadn’t got back to us with comment.

Monday 29 June 2009

News sites falter as traffic spikes after Jackson's death

From ComputerWorld

Michael Jackson's death on Thursday caused a spike in visits to news Web sites that affected the performance and availability of some of the biggest ones, according to Web monitoring company Keynote Systems.

Between 6 p.m. and 8 p.m. U.S. Eastern Time, the availability for the news sites from ABC, CBS and the LA Times dropped to almost 10 percent, meaning that about nine out of 10 visitors couldn't get the sites to load.

Starting at 5:30 p.m., the average download time for news sites tracked by Keynote went from less than four seconds to almost nine seconds, and their average availability dropped from almost 100 percent to 86 percent, the company said. News sites monitored by Keynote returned to normal performance and availability levels by 9:15 p.m.

Other news sites that experienced problems included AOL, MSNBC, NBC, the San Francisco Chronicle and Yahoo News, according to Keynote.

However, in a subsequent statement late Friday evening, Keynote noted that the slowdowns were caused primarily by external providers of interactive images and ads to the news sites. An example was the news site of ABC, which served up its internal content without delay but got dragged down by its external providers, Keynote said.

In these situations, depending on how a Web site is designed or how end users' browsers are configured, Web pages can immediately display their internal content, leaving blank sections for the delayed external content; or, at the other extreme, the pages will not be displayed until all components are ready to be rendered, according to Keynote.

"Ongoing end-to-end load testing and performance measurement benchmarking are essential to being prepared for unexpected news events. News sites should require third party content companies, such as ad networks, to certify the capacity of their networks, perform regular load tests from around the globe, and have strong Service Level Agreements in place," Keynote said in its statement Friday evening

Thursday 25 June 2009

Acutest Launch Load Cannon Testing Service

From NewsReel Network.com

London, UK June 25, 2009 — Acutest have launched the Load Cannon: a hosted performance testing service for web applications. A fast and cost-effective service, it is aimed at enabling organisations without the inhouse capabilities to enjoy the benefits of load testing.

Acutest (http://www.acutest.co.uk), the UK software testing company, have announced the launch of their new load testing service: the Load Cannon. This is a performance testing service for web-enabled applications and websites. Hosted in the UK, it is a fusion of performance testing tools, load generators, monitors, structured testing methods, risk-based testing techniques and experienced performance testing consultants.

Acutest found that many organisations wanted to check how their web applications performed under a heavy load of users or when processing a high volume of transactions. This was usually triggered by an upcoming website launch, enhancements to a web-based system, or changes to the infrastructure ahead of a significant event or seasonal peak. But they were deterred by the cost of performance testing tools, the expertise needed to conduct a performance test and analyse the results effectively, and the length of time it took to perform the testing. All of which led them to do little, if any, performance testing.

The current difficult economic conditions have restricted performance testing further. Cost-constrained, many organisations just live with the risks of performance failure (ironically, in a marketplace where the performance of web applications can be a key competitive differentiator and business lifeline). Sometimes their web systems work as they hope. Other times, they don't.

“We were approached by an organisation that had enhanced its website services ahead of a seasonal peak load. Costs had been reigned in and they did no performance testing. When the peak load arrived, the website collapsed with a significant adverse impact on the business,” said Ian Coe, the Head of Load Testing Services at Acutest. “The old adage that an ounce of prevention is worth a pound of cure is all very well, but if the ounce costs too much people will take the risk of needing a costly cure. We decided to come up with a cost-effective load testing version of that ounce: our Load Cannon Testing Service.”

The load testing service is designed to be robust, quick to operate, capable of both onsite and offsite deployment and is also scalable. The pricing has been designed for the current economic climate. There are no testing tool license costs or restrictions such as rental periods. There are no additional costs for simulating high volumes of users, or transactions, in a test scenario. And you only pay for what you use.

A risk assessment is carried out at the start of the assignment and the testing is prioritised by business impact and likelihood of failure so organisations can match their budget with the level of risk they want to mitigate. This enables them to choose the level of performance testing they want, ranging from a simple benchmark load test to a comprehensive set of performance tests for a complex web-enabled system. So now organisations no longer have to live with the untested risk of performance failure.

“It’s easy to cut testing costs by simply doing less but this increases risk and the potential for high recovery costs, not to mention loss of business and reputation. We set ourselves the challenge of creating new testing solutions with the constraints of the current recession in mind. Services which cut the costs of testing whilst managing the risks,” said Barry Varley, Managing Director of Acutest. “In May, we brought Software Planner, a cost effective SDLC test management tool, to the UK testing market. Now we’ve launched the Load Cannon, our cost effective web application performance testing service. We’re also piloting other recession testing services that that I hope to announce in the coming months.”

Tuesday 9 June 2009

Ixia's product suite tests virtualized data centers

From Test and Measurement World

Expanding into virtual network and virtualization testing, Ixia has launched the IxVM platform to enable data-center managers to assess virtual infrastructure performance and capacity. IxVM builds on the company's library of Layer 2–7 performance test tools that discover, manage, and automate testing in large virtualized environments.

With the suite of IxVM tools, it is possible to test Layer 2/3 virtual network resources and Layer 4–7 virtual applications. IxChariot VM, a component of IxVM, uses software endpoints—small software components that run on each virtual machine—that send and receive traffic while measuring performance. This makes it possible to source traffic from virtual servers in the same manner as the supported applications. IxExplorer VM uses software endpoints to generate Layer 2/3 traffic to test features such as VLAN and QoS.

IxVM allows performance testing across thousands of VMs (virtual machines) simultaneously with real-world application traffic. It also enables independent measurement and convergence testing of VM migration; tuning of virtual resources, such as servers and NICs; measurement of key performance indicators, like delay, jitter, or packet loss, through virtual switches; and testing of network performance variances when running applications over different operating systems.