Thursday, May 2, 2013

WebCenter Content - Is your system performance on target?

Quick question: how are your WebCenter Content response times in terms of keeping your business users happy? And if you do hear complaints - do you blame them on the number of records? Approaching the million-content-items mark per repository still makes application owners and system administrators anxious, especially when the business expects rapid growth, with more and more content coming in day after day. "How will my system perform when I get over a million items?" Will it slow down to a crawl? Will I start losing data?

First off, let me acknowledge these fears. Yes, a million records is a lot... when you store them in an Access database back in 1995. Now fast-forward to 2013 and put them in an enterprise-level database like Oracle - or one of the other databases supported by WebCenter Content 11g, such as SQL Server or DB2 - and a million records is not very much at all. WebCenter Content is designed to handle extremely high volumes of content - up to one hundred million content items per day!

I'm not kidding. A hundred million items coming in each and every day - when you invest in some high-end hardware. But even if you don't, WebCenter Content can still check in over 10 million items each and every day - on 'middle shelf' commodity hardware!

Just read this section. Especially if you're still thinking that a million records is a lot. It will blow your mind:

WebCenter Content 11g benchmarks on commodity hardware

The following benchmarks were conducted in the Oracle lab on a cheap 'commodity' server with dual 2.33 GHz Xeon CPUs and 16 GB of RAM, running a single-node Content Server.

Contribution

Multiple tests were conducted, checking in a variety of file sizes - from 4 KB to 250 KB - and various file types: text, MS Office, and PDF. The table below shows the results of a test run that took a full 24 hours to complete and produced some staggering results:

Anywhere from 11 to over 23 million content items checked in per day on commodity hardware


Table from the Oracle White Paper: "Oracle Enterprise Content Management Suite Extreme Performance, Extreme Scalability"

Consumption 

Site Studio for External Applications delivered 124 pages per second - over 446,400 pages an hour - at 89% CPU utilization.

The reason most clients don't see that kind of performance lies in all the other 'stuff' that sits outside of WebCenter Content itself. That is the kind of stuff we will be addressing in this whitepaper.

Oracle WebCenter Content is an I/O-bound application, which means overall performance is limited by the speed of your disks and the bandwidth of your network, not by the software itself.
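
If you want a quick sanity check on the storage side, here's a minimal sketch (Python) that times sequential writes to a directory and reports throughput in MB/s. The vault path, file size, and file count are assumptions - point it at whatever filesystem actually hosts your vault and weblayout directories, and run it off-hours so you don't compete with real traffic.

```python
# quick_io_check.py - rough sequential write throughput probe
import os
import time

TARGET_DIR = "/u01/oracle/ucm/cs/vault"   # assumption - adjust to your install
FILE_SIZE = 200 * 1024                    # ~200 KB, similar to the benchmark files
FILE_COUNT = 500

def write_throughput(target_dir, file_size, file_count):
    """Write file_count files of file_size bytes, fsync each, return MB/s."""
    payload = os.urandom(file_size)
    start = time.time()
    for i in range(file_count):
        path = os.path.join(target_dir, f"io_probe_{i}.bin")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())          # force the write to disk
    elapsed = time.time() - start
    # clean up the probe files
    for i in range(file_count):
        os.remove(os.path.join(target_dir, f"io_probe_{i}.bin"))
    return (file_size * file_count) / (1024 * 1024) / elapsed

if __name__ == "__main__":
    mbps = write_throughput(TARGET_DIR, FILE_SIZE, FILE_COUNT)
    print(f"Sequential write throughput: {mbps:.1f} MB/s")
```

If the number you see is dramatically lower than what your storage is supposed to deliver, you've likely found your bottleneck before ever touching a single Content Server setting.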

Adding Exadata


A single node of Oracle UCM 11g can ingest over 91 million files per day with an Oracle Exadata Database Machine Quarter Rack (which includes Oracle Database 11g Release 2 and Oracle Exadata Storage Server software), and almost 179 million files per day with an Oracle Exadata Database Machine Half Rack.

So how do you feel now? Convinced that WebCenter Content can handle it?



Seriously speaking, I agree: a lab environment may behave better than a real-life system where you have slow networks, other applications competing for resources, and third-party customizations to support. So let me give you some simple guidelines to see whether your system is pumping out the numbers it was designed to deliver, or whether the complaints from your business folks really have some ground and you should be looking at improving performance.

System performance targets

Here are a couple of numbers to use as a guideline. By no means must your system match these exactly - every configuration is different - so treat them as a ballpark to see where you stand.

Read-only requests - like page loads - should come in at about 20 requests per second per GHz of CPU, times the number of CPUs. A dual 2 GHz box should therefore handle about 80 pages per second.

For raw requests such as content information lookups, GET_FILE calls, or check-ins (for small files, say, around 200 KB), you should target about 4 requests per second per GHz of CPU, times the number of CPUs in the system.
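
To save you the mental math, here's a small sketch (Python) that computes both ballpark targets from your CPU count and clock speed, plus an optional helper to measure what your server actually delivers. The hostname, port, and service URL in the commented-out example are assumptions - substitute a lightweight, read-only request your own instance serves, and note the helper doesn't handle authentication or SSO.

```python
# perf_targets.py - ballpark throughput targets from the guidelines above
import time
import urllib.request

def target_rates(cpu_count, cpu_ghz):
    """Return (read_only_target, raw_request_target) in requests per second."""
    read_only = 20 * cpu_ghz * cpu_count   # page loads and other read-only hits
    raw = 4 * cpu_ghz * cpu_count          # content info, GET_FILE, small check-ins
    return read_only, raw

def measure_requests_per_second(url, iterations=50):
    """Issue sequential GETs against url and return the observed requests/sec."""
    start = time.time()
    for _ in range(iterations):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    return iterations / (time.time() - start)

if __name__ == "__main__":
    ro, raw = target_rates(cpu_count=2, cpu_ghz=2.0)   # the "dual 2 GHz box" example
    print(f"Read-only target  : {ro:.0f} requests/sec")
    print(f"Raw request target: {raw:.0f} requests/sec")
    # Example only - hostname, port, and service parameters are assumptions:
    # rate = measure_requests_per_second(
    #     "http://ucmhost:16200/cs/idcplg?IdcService=GET_DOC_INFO_BY_NAME"
    #     "&dDocName=EXAMPLE_DOC&IsJson=1")
    # print(f"Observed: {rate:.1f} requests/sec")
```

Put the measured number side by side with the computed target and you'll know right away whether you're in the ballpark or have some work to do.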

I hope you're landing right in the ballpark, and now you have some proven numbers to back up your system's performance. But even if you don't - there's no reason to panic.

Next Steps

Have you checked out my Quick Hits article from a couple of weeks ago? There are some solid tips to apply there. And if that didn't do it for you - we're here to help. Drop us a quick line at support@ECMSolutions.ca. We routinely run comprehensive scans to diagnose sub-optimal performance and provide detailed reports of our findings.

Stand by for more tips on performance tuning and diagnosis here on this blog.
