POWER4 has some rather effective algorithms that can keep branch prediction accuracy in the 90-95% range across most workloads (the highest in the industry among commercial RISC processors, I think). I believe this has been benchmarked with scientific/technical workloads as well as a number of commercial workloads, though I'll have to check to make sure I'm not fibbing here. I'm positive that sci/tech workloads have seen rates that high...
Other RISC technologies in the past were happy to see 60-75% most of the time across either kind of workload.
In a nutshell, branch prediction tries to "guess" which way a branch instruction will go when it's encountered in the pipeline. If the guess is correct, the processor has saved time and cycles; if not, everything behind the branch has to be flushed out of the pipeline and instruction execution starts over from the correct path (not good).
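Just to make the idea concrete, here's a toy sketch of a textbook two-bit saturating-counter predictor, which is the kind of thing intro architecture courses use to explain it. To be clear, this is purely illustrative; it is not what POWER4 or any Intel chip actually does, and the branch history in it is made up.

```c
#include <stdio.h>

/* Toy 2-bit saturating-counter predictor -- a textbook scheme, NOT the
 * actual POWER4 or Intel mechanism. Counter values 0-1 predict "not
 * taken", 2-3 predict "taken"; the counter nudges toward each actual
 * outcome, so one fluke doesn't immediately flip the prediction. */
int main(void) {
    int counter = 2;                              /* start as weakly "taken" */
    int outcomes[] = {1,1,1,0,1,1,1,1,0,1};       /* made-up history: 1 = taken */
    int n = sizeof outcomes / sizeof outcomes[0];
    int hits = 0;

    for (int i = 0; i < n; i++) {
        int predict_taken = (counter >= 2);
        if (predict_taken == outcomes[i])
            hits++;              /* correct guess: pipeline keeps flowing */
        /* a wrong guess is where the pipeline flush would happen */

        /* update the counter toward the actual outcome */
        if (outcomes[i]) { if (counter < 3) counter++; }
        else             { if (counter > 0) counter--; }
    }
    printf("predicted %d of %d branches correctly\n", hits, n);
    return 0;
}
```

Real predictors are far more elaborate (tables of counters indexed by address, global history, and so on), but the basic bet-and-pay-the-flush-penalty idea is the same.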
RISC and CISC branch prediction schemes do differ in how they are implemented; the theory is the same, but how it's handled on a POWER4 processor compared to some sort of Pentium or Itanium system is an unknown to me. I just know I've been told they are different and that the difference is somewhat considerable.
Intel has been rather murky about how their version of branch prediction works; come to think of it, IBM hasn't been much clearer in the sessions I've sat in on. The details usually get glossed over in favor of other "flashy" technologies going into these chip architectures...