I was just starting to read a new Boxes and Arrows article, "MSWeb: An Enterprise Intranet #1," about the massive intranet at Microsoft and how some knowledge engineers (information architects, whatever the hot term du jour is) tackled it. I was brought up short by this sentence:
It’s nearly impossible to develop a successful information architecture against a backdrop of explosive content growth, content ROT, and the political twists and turns common in any organization.
"content ROT?" thought I. "What do they mean? Return On Time, maybe? But how is that bad?" So I went searching on Google (God bless Google) and found, in a slide presentation done by my Adaptive Path buddies no less, that ROT in this case stands for Redundant, Outdated, and Trivial, and is a mnemonic for weeding out useless content.
I will have to go back and read that article later, but it’s time to leave work and go enjoy one quiet solitary evening before computer maintenance, book release party, welcome home parties and my subsequent collapse.
How should I spend my time this evening? Cleaning the apartment, listening to a book on tape, or going to bed early.
How will I probably actually spend my time? Playing The Game Neverending.
A programmer friend just approached me with an odd problem which I couldn’t explain. Basically what he’s seeing is this:
On Server A, you can do a search and the results page will load and in the log you’ll see the requests coming in from the client for the navigation image files. Then you can hit next to go to the next page of results and you won’t see the image files re-requested since the browser already has them.
On Server B, the behavior is the same when you search, until you hit next. Then the next page loads and requests all the navigation image files as though it were the first time it had ever heard of them. And if you hit next to see the 3rd page of results, again it will request all the navigation images.
He's seeing this in the same browser, so it appears to be an issue on the server side, not the client side.
Anyone got any idea what’s going on?
One environmental note: Servers A and B are, theoretically, running exactly the same proprietary web server, which is integrated with the search software.
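One guess worth checking: if Server B isn't sending any cache-related headers with the images (no Last-Modified, ETag, Expires, or Cache-Control), the browser has nothing to cache against and will simply ask for them again on every page. Here's a rough sketch of that idea; the header values and the `is_cacheable` helper are my own hypothetical illustration, not anything from the actual servers.

```python
# Hypothetical sketch: why a browser might re-request images from one
# server but not another. The header dicts below are made-up examples.

CACHE_HEADERS = ("Expires", "Last-Modified", "ETag", "Cache-Control")

def is_cacheable(headers):
    """Rough heuristic: a browser can quietly reuse a cached copy only
    if the response gave it some freshness or validation info."""
    cache_control = headers.get("Cache-Control", "")
    if "no-store" in cache_control or "no-cache" in cache_control:
        # Explicitly told not to reuse without going back to the server.
        return False
    # With no cache headers at all, many browsers just re-request.
    return any(h in headers for h in CACHE_HEADERS)

# Imagined responses for the same navigation GIF:
server_a = {"Content-Type": "image/gif",
            "Last-Modified": "Mon, 01 Jul 2002 10:00:00 GMT"}
server_b = {"Content-Type": "image/gif"}  # no cache headers at all

print(is_cacheable(server_a))  # True  -> browser reuses its copy
print(is_cacheable(server_b))  # False -> browser requests it again
```

If that's what's happening, comparing the raw response headers from the two servers (with a telnet session or a proxy) should show the difference right away.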
One of the better ideas I came across while researching my master's thesis was William K. Horton's description of design as "a continual process of successive refinement" in his book Designing and Writing Online Documentation.
He describes this in a diagram thusly:
do it right → do it better → do it better → …
“Development of online documentation is iterative, cumulative, and empirical. It is iterative in that several cycles of development are required, cumulative in that you learn and improve through each cycle, and empirical in that improvements are based on testing and experience with working prototypes of the system.”
I think this concept can be readily extended to all sorts of online development and, based on my experience with my bookstore, to other projects as well. The key is to produce something you can then improve upon. Yes, you should think first, but don't feel you must solve every question in the first specification. "Ready, aim, aim, aim, aim…" won't get you anywhere.
The Interface Hall of Shame is always good for a bit of lunchtime browsing. And now back to work with a renewed sense of hopelessness!