major advances in architecture

Modern computers present a quandary to the everyday person. They hear about advances in CPU and computer technology, yet their computers seem slower now than the ones they had five or ten years ago. It isn't a perception problem: the average brand-new consumer computer DOES respond more slowly with a modern OS. It's also true that the computer itself is light-years faster than its predecessors, something I will explore at the end of this post. So what's the problem?

There are two main culprits. One is that the software we use asks more and more of the hardware. Microsoft and other major software vendors add capability all the time, most often with a full new version like Vista. The same thing happens with everyday software like Office, instant messengers, photo processing software, and so on. Most development is done with rapid development environments that are abstracted from regular source code by one, two, or more layers. Each layer adds tons of potential functionality and makes development faster, but also adds large amounts of complexity and code. If these programs were written at a lower level, they'd be a lot smaller and would probably run several times faster, but initial development would not be as rapid.

The other major culprit is a little feature introduced by Windows XP known as WinSxS, or Windows Side by Side. This is a repository of library software, primarily system DLLs. It's a compatibility mechanism, solving the enormous problem known as DLL Hell that plagued older Windows versions. In a nutshell, when you install just about anything, it keeps copies of the system libraries the program requires in version-specific subdirectories of the %SystemRoot%\WinSxS folder, where %SystemRoot% is normally C:\WINDOWS.

This does two things that cause problems. It chews up disk space when multiple versions of the same DLL file exist, and it chews up memory when different programs load different versions of the same DLL. There is some intelligence in this folder: for the most part, identical files that get copied in multiple times are actually hard links to the same file, not additional copies. It still chews up a ton of disk space, though. Although you can't rely on the disk space reported by Explorer for the folder, if you go exploring it you'll be amazed at how much is in there.
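The hard-link point is worth making concrete, and it also explains why Explorer's size figure for WinSxS can't be trusted: a naive traversal counts every directory entry, but many entries share the same underlying data. Here's a minimal sketch (on a Unix filesystem rather than NTFS, and with made-up file names, but the mechanism is the same):

```python
import os
import tempfile

def naive_size(paths):
    # What a simple "add up every file" traversal reports.
    return sum(os.path.getsize(p) for p in paths)

def real_size(paths):
    # Count each underlying file (device, inode pair) only once,
    # the way the filesystem actually allocates space.
    seen, total = set(), 0
    for p in paths:
        st = os.stat(p)
        key = (st.st_dev, st.st_ino)
        if key not in seen:
            seen.add(key)
            total += st.st_size
    return total

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        original = os.path.join(d, "somelib_v1.dll")  # hypothetical name
        with open(original, "wb") as f:
            f.write(b"\x00" * 4096)  # stand-in for a 4 KB library
        links = [os.path.join(d, f"entry_{i}.dll") for i in range(9)]
        for link in links:
            os.link(original, link)  # same data, new directory entry
        files = [original] + links
        print(naive_size(files))  # 10 entries x 4096 bytes = 40960
        print(real_size(files))   # one shared file        =  4096
```

Ten directory entries, one 4 KB file on disk: the naive sum is ten times the real usage, which is the same overcounting you see when a file manager totals up WinSxS.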

This nod to compatibility loses the performance benefits that led to the creation of dynamically loaded libraries in the first place. It can also lead to problems where a third-party program is still using an old version of a DLL even though you've installed a service pack or security update that should address a problem.

The WinSxS folder is the primary reason you can get such incredible speed gains by reinstalling Windows from scratch (from a CD that's had the latest service pack slipstreamed in) and applying all the updates before installing anything else. Unfortunately, I'm not aware of anything with enough intelligence to prune this folder without the risk of screwing something up badly, so reinstalling remains the only sure-fire cure.

I've seen the advances in system architecture first-hand. One of the easiest and most dramatic demonstrations of the difference is in compression and text processing, both of which are heavily used by the package manager in Debian. My faster server runs an older version of Debian, the slower one runs a newer version, so this is not an apples-to-apples comparison, but it's pretty close. I ran these two commands on both servers:

rm -f /var/lib/apt/lists/*
time apt-get update

The older version of Debian downloads 11.1 MB of data and decompresses it to 49.7 MB. The newer version is 13.7 MB and expands to 60.1 MB. This is a 23 percent increase in downloaded size and an almost 21 percent increase in the extracted size.
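For anyone who wants to check the arithmetic behind those percentages, it's just relative growth over the older release's figures:

```python
# Figures from the post, in MB
old_dl, new_dl = 11.1, 13.7     # downloaded package lists
old_ext, new_ext = 49.7, 60.1   # after decompression

dl_growth = (new_dl - old_dl) / old_dl * 100
ext_growth = (new_ext - old_ext) / old_ext * 100
print(f"download growth:  {dl_growth:.1f}%")   # ~23.4%
print(f"extracted growth: {ext_growth:.1f}%")  # ~20.9%
```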

The slower server (400 MHz PII) downloaded, uncompressed, and combined its package information in 71 seconds, using 41 seconds of CPU time. The newer server (1.7 GHz P4 Celeron) did its job (granted, with about 19 percent less data to download) in 26 seconds, using 18 seconds of CPU time. A system at work with a dual-core 3.4 GHz CPU and the same software as my newer server took 8 seconds with 5 seconds of CPU time, much of it visibly spent waiting to establish connections with the remote servers.
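Turning those timings into speedup ratios makes the architectural gains easier to see, with the caveat from above that the faster machines processed somewhat less data, so the ratios slightly flatter them:

```python
# (wall-clock seconds, CPU seconds) for `apt-get update` on each machine
runs = {
    "PII 400 MHz":        (71, 41),
    "P4 Celeron 1.7 GHz": (26, 18),
    "dual-core 3.4 GHz":  (8, 5),
}

base_wall, base_cpu = runs["PII 400 MHz"]
speedups = {name: (base_wall / wall, base_cpu / cpu)
            for name, (wall, cpu) in runs.items()}

for name, (w, c) in speedups.items():
    print(f"{name}: {w:.1f}x wall clock, {c:.1f}x CPU")
```

Even with clock speed up only 4.25x between the first two machines, and with network latency dominating on the fastest box, the wall-clock and CPU-time gaps grow to almost 9x against the old PII.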

EDIT: The second paragraph of this post never got finished. I think I got my original intent in there now.
