Nvidia's CEO predicts that GPU performance will increase 570x over the next six years, compared to only 3x for CPUs. So if you are running time-intensive computations, it might be worth thinking about porting them to the graphics card.
Personally, I want my hidden Markov models to run on GPUs and I've just submitted a grant application to get a PhD student and a postdoc to work on that.
Speaking of GPUs, now that Snow Leopard is out, it is also time to look at OpenCL. I haven't gotten hold of Snow Leopard myself yet, but my student who works on our HMM framework has, so I hope we get around to that soon.
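For a first taste of the API, the canonical example is element-wise vector addition: each work-item computes one output element, and all of them run in parallel on the device. Here is a minimal sketch, assuming the OpenCL 1.0 C API that ships with Snow Leopard; error checking is omitted for brevity, and all the names (vadd and friends) are just illustrative:

    /* vadd.c -- minimal OpenCL vector addition sketch.
       Build on Mac OS X: cc vadd.c -framework OpenCL */
    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>
    #else
    #include <CL/cl.h>
    #endif

    #define N 1024

    /* The kernel: one work-item per output element. */
    static const char *src =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    int i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * (float)i; }

        /* Grab the first platform and the first GPU device on it. */
        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, NULL);
        cl_device_id device;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        /* Compile the kernel source at runtime. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        /* Copy the inputs to the device, allocate space for the output. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   N * sizeof(float), a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   N * sizeof(float), b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                   N * sizeof(float), NULL, NULL);

        clSetKernelArg(k, 0, sizeof(cl_mem), &da);
        clSetKernelArg(k, 1, sizeof(cl_mem), &db);
        clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

        /* Launch N work-items and read the result back (blocking). */
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, N * sizeof(float), c, 0, NULL, NULL);

        printf("c[10] = %f\n", c[10]); /* expect 30.0 */

        clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
        clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }

The striking thing is how little the kernel itself says about parallelism: you write the body for a single element, and the runtime fans it out across however many cores the device has. That style of per-element recurrence is exactly what makes things like HMM computations interesting candidates for the GPU.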
In the meantime I am going to look at these tutorials: