Kiwibank charts seven-year search for ICT capacity

Bank has had two 'forklift' ICT upgrades

Kiwibank has now been with us for seven years and claims 800,000 customers, so it is sometimes easy to forget it is still a start-up business.

One of the few user sessions presented at the Auckland leg of this year’s IBM Forum came from Wayne Knowles, senior infrastructure architect at the bank, who charted the organisation’s search for cheap but flexible computing power.

Knowles, who spent six years with the bank as an employee of service provider Datacom before joining as a direct employee, says the bank’s back-end core computing is built around IBM’s P-Series technology.

Knowles says Kiwibank began life as an ambitious organisation, but had a low start-up budget. It had strong backers and opponents in the Labour/Alliance government of the time. It was, he says, a “bit of a laboratory experiment”.

He says that political opposition is now history, but the battle to deliver capacity to the bank’s large and growing customer base in a cost-effective manner continues.

The bank’s strategy to roll out services to all NZ Post branches was highly effective, he says. Behind that, Kiwibank also piggybacked on NZ Post’s ISP and existing ICT vendor relationships for any missing technology pieces.

At start-up the bank had two P-Series boxes for core banking and didn’t know how quickly the customer base, or the load on those boxes, would grow. It also had a handful of X-Series boxes running Microsoft applications, file and print services, some SQL Server and some Citrix.

Demand grew quickly, with the NZ Post network signing up 2000 customers a week.

“It was a good time to go to market as the other banks were shutting down their satellite branches.”

Knowles says the biggest problem in developing Kiwibank’s computing platform was that the bank’s customer growth was linear, but growth in computing capacity was not.

The bank used a building society package built by an Australian company, which “kind of goes against our marketing”, he says, referring to Kiwibank’s nationalistic advertising campaigns.

The database in use was based on 1970s Pick technology.

Within 18 months, the bank was the largest user of the software and had to refine it to meet growth demands. Knowles says you have to think proactively rather than reactively to deal with capacity development.

Knowing that “silicon is faster than disk”, Kiwibank kept a working set of data in memory for 24 hours to avoid I/O issues. That meant a heavy performance hit on any reboot, so the bank avoided rebooting around peak times. It was also important to understand which processes invoked other sub-processes.
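As a rough illustration of the pattern, rather than Kiwibank’s actual code, a read-through cache that serves hot records from memory and only touches disk on a miss might look like this in Python (the names WorkingSetCache and load_from_disk are invented):

```python
import time

class WorkingSetCache:
    """Keep a working set in memory so repeat reads avoid disk I/O."""

    def __init__(self, ttl_seconds=24 * 3600):
        self._store = {}          # key -> (value, loaded_at)
        self._ttl = ttl_seconds   # hold the working set for 24 hours

    def get(self, key, load_from_disk):
        entry = self._store.get(key)
        if entry is not None:
            value, loaded_at = entry
            if time.time() - loaded_at < self._ttl:
                return value                  # memory hit: no disk read
        value = load_from_disk(key)           # miss: pay the disk cost once
        self._store[key] = (value, time.time())
        return value

    def warm(self, keys, load_from_disk):
        # Pre-load the working set after a reboot so the first requests
        # of the day don't all hit disk at once.
        for key in keys:
            self.get(key, load_from_disk)
```

The warm method is the counterpart of the reboot problem Knowles describes: an empty cache after a restart means every early request pays the disk penalty.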

Knowles says it is essential to understand the application to avoid high cost operations such as disk I/O reads and process startups and forks. The best way to find bottlenecks, he says, is a full load test.
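In the same spirit, a full load test need not be elaborate to be useful. A toy version in Python, with do_transaction standing in for whatever the real system actually executes, could be as simple as:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def do_transaction():
    time.sleep(0.01)  # placeholder for real work (a query, a posting, etc.)

def timed_call(_):
    start = time.perf_counter()
    do_transaction()
    return time.perf_counter() - start

def load_test(workers=50, requests=1000):
    # Hammer the operation with concurrent workers and report latency.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_call, range(requests)))
    return {
        "median_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * len(latencies))] * 1000,
        "max_ms": latencies[-1] * 1000,
    }

if __name__ == "__main__":
    print(load_test())
```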

Around four years ago the business realised it was here for the long haul, he says, and needed to get better and smarter. It was also delivering internet banking and other services bolted on to the core set offered at launch.

Performance issues were emerging on the core system, which needed to scale well and efficiently. Benchmarking became more important and new management tools were progressively introduced, including Tivoli, HP LoadRunner and in-house tools to deal with the “semi-proprietary nature of the beast”.

Knowles advises users facing similar issues to “optimise for the common case”. Optimise for what runs often, he says: the common queries, not the ones that run once a day.

“That has really good payback.”
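As an illustration of the kind of change that pays back, consider memoising a lookup that runs constantly rather than tuning a report that runs once a day. The fee-schedule example below is invented, not drawn from Kiwibank’s systems:

```python
from functools import lru_cache

def fee_schedule_from_db(product_code: str) -> float:
    # Placeholder for an expensive database read.
    return {"EVERYDAY": 0.0, "NOTICE_SAVER": 2.5}.get(product_code, 5.0)

@lru_cache(maxsize=10_000)
def fee_for(product_code: str) -> float:
    return fee_schedule_from_db(product_code)  # DB hit only on a cache miss

print(fee_for("EVERYDAY"))  # first call reads the "database"
print(fee_for("EVERYDAY"))  # repeat calls are served from memory
```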

It was also necessary to understand storage growth. Once again, this wasn’t linear but exponential, which was scary, he says, especially in the case of unstructured data.

“Most organisations recognise that unstructured data growth is out of control.”

Once again, analysis is key. Users have to ask if they need, for instance, the history of every transaction held on a given system. Multiple copies of structured data, in test, disaster recovery and banking systems, can also balloon requirements, he says.
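The multiplication is easy to underestimate. A back-of-envelope illustration, with all figures invented:

```python
# The same structured dataset held in production, test and disaster
# recovery, each environment with its own backup copies.
base_tb = 2.0
environments = 3        # production, test, disaster recovery
copies_per_env = 3      # the live copy plus two backups

total_tb = base_tb * environments * copies_per_env
print(f"{base_tb} TB of data actually occupies {total_tb} TB")  # 18.0 TB
```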

Forecasting the upper limits of capacity and time until those limits were reached became key to scaling, he says. Issues then had to be taken to management to make a case for further investment.
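That kind of forecast can start as simple arithmetic. Assuming roughly exponential growth, the runway to a capacity ceiling falls out of one logarithm (figures below are invented):

```python
import math

def months_until_full(current_tb, limit_tb, monthly_growth):
    # Solve current * (1 + g)^n = limit for n.
    return math.log(limit_tb / current_tb) / math.log(1 + monthly_growth)

# e.g. 8 TB used of a 20 TB ceiling, growing 6 percent a month:
print(f"{months_until_full(8, 20, 0.06):.1f} months of headroom")
```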

When making such cases, Knowles advises, ICT people should be clear about what they are asking for and explain what happens when capacity runs out.

“Engineer up front to cater for future upgrades,” he says. Kiwibank has now done two “forklift” upgrades, he says, bringing in new platforms to boost capacity.

The most recent, carried out by Integral, was the result of forecasting in the middle of last year. The bank realised that at Christmas it would have to process five days’ worth of transactions in one day.

At that time, the bank took the opportunity to cater for future requirements, upgrading its AIX system and introducing LPAR virtualisation in its P-Series boxes. The project took eight weeks.

“We went through the Christmas peak, but now we are back where we were again in 12 months,” he says.

Knowles advises other users to focus on repeat problems and minimise touching the system through automated monitoring and reporting.

“More fingers equals more problems.”
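A minimal example of hands-off monitoring, again a sketch with invented thresholds rather than anything Kiwibank runs:

```python
import shutil

def check_disk(path="/", warn_at=0.80):
    # Report disk headroom automatically instead of logging in to look.
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    status = "WARN" if used_fraction >= warn_at else "OK"
    # In a real setup the WARN branch would page or raise a ticket.
    print(f"{status} {path}: {used_fraction:.0%} used")

check_disk()
```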

He says Kiwibank’s futureproofed infrastructure will consist of thin provisioning, virtualisation with VMware and LPAR, disk deduplication, clusters, “fine-grained” disaster recovery (treating each system as separate), redundancy to allow live fixes, and a goal of zero-outage upgrades and changes.

Adherence to industry standards, such as SOA and XML, also helps to create manageable applications and to bolt on new functionality, he says.
