At Google they are trying to give Python a real boost with a project called Unladen Swallow. See Ars Technica's post on the project.
We want to make Python faster, but we also want to make it easy for large, well-established applications to switch to Unladen Swallow.
- Produce a version of Python at least 5x faster than CPython.
- Python application performance should be stable.
- Maintain source-level compatibility with CPython applications.
- Maintain source-level compatibility with CPython extension modules.
- We do not want to maintain a Python implementation forever; we view our work as a branch, not a fork.
The main approach is to use LLVM (an open source compiler infrastructure with JIT support) to compile Python to native code at runtime. This is probably a good idea. JIT compilation (plus dynamic runtime optimisation) has done wonders for Java, and it powers V8, the JavaScript virtual machine under the hood of Google's Chrome browser.
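As an illustration (my own toy function, not anything from the project), CPython's `dis` module shows the bytecode that the interpreter dispatches one instruction at a time; a JIT along the lines Unladen Swallow proposes would instead compile hot functions like this down to native code via LLVM:

```python
import dis

def add(a, b):
    # CPython runs this as interpreted bytecode; a JIT would
    # compile a hot function like this to native machine code.
    return a + b

# Print the bytecode the interpreter loop would otherwise dispatch.
dis.dis(add)
```

Every one of those bytecode instructions costs a trip through the interpreter's dispatch loop, which is the overhead a JIT aims to eliminate.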
More interesting, though, in my opinion is that they want to support multi-core machines by getting rid of the global interpreter lock (GIL). Because of global synchronisation issues, multi-threading in Python isn't quite as parallel as you might think. It is fine for I/O-bound work, since the GIL is released during blocking system calls, but not really for exploiting multiple cores with CPU-bound code. But see Multiprocessing with Python (I wanted to write a separate post on that, but probably won't have time, so now I'll just link to it here).
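A minimal sketch of the effect (the function and loop count are my own, purely for illustration): a pure-Python CPU-bound loop gains essentially nothing from running in two threads, because only one thread can hold the GIL at a time. This is precisely the limitation that removing the GIL would address:

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop; it holds the GIL the whole time.
    while n > 0:
        n -= 1

N = 2_000_000

# Run the work twice in sequence.
start = time.perf_counter()
count(N)
count(N)
serial = time.perf_counter() - start

# Run the same work in two threads "in parallel".
start = time.perf_counter()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# Under the GIL the threaded version is roughly as slow as the
# serial one (sometimes slower, due to lock contention).
print(f"serial:   {serial:.2f}s")
print(f"threaded: {threaded:.2f}s")
```

Swapping `threading.Thread` for `multiprocessing.Process` sidesteps the GIL by using separate interpreter processes, which is the workaround the multiprocessing link above is about.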
The free lunch of ever-increasing clock speeds is over: processors are not getting faster, they are just getting more cores. See Herb Sutter's The Free Lunch is Over. Multi-core software is going to be essential for high performance in the future, and by handling this in the VM for Python, rather than running separate processes, we might see runtime optimisation of parallel code. That would be really exciting!