Wednesday, December 17, 2008

Image Evolution

Ten days ago... No, wait. That's something else.

Eight days ago, I stumbled onto Genetic Programming: Evolution of Mona Lisa. I thought it was pretty awesome. Today, I stumbled upon this simulation of the same idea you can run in your browser. It doesn't work in IE, but I really hope anyone reading this isn't using IE (any version) as their default browser anyway. Right now, mine is at 1,077/17,000. That means out of 17,000 “mutations”, 1,077 have been an improvement over the previous best fit. I just hit 90% fit, which looks like 005874.jpg in the example shots from the original post. That leads me to believe the offline version is a bit more efficient than the online version, since the online one has taken about 3 times the number of mutations to get there. But that's to be expected.
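For anyone curious what “mutations” and “improvements” mean mechanically, the core loop is a simple hill climber. Here's a toy, self-contained sketch in JavaScript; the array-of-numbers stand-in for the image and all the names are mine, not taken from either program:

  // The "image" is just an array of 100 numbers; a mutation nudges one of them.
  var target = [];
  for (var i = 0; i < 100; i++) target.push(Math.random());

  function fitness(dna) {                // 1.0 = perfect match with the target
    var error = 0;
    for (var i = 0; i < dna.length; i++) error += Math.abs(dna[i] - target[i]);
    return 1 - error / dna.length;
  }

  function mutate(dna) {                 // copy the parent, change one "gene"
    var child = dna.slice();
    child[Math.floor(Math.random() * child.length)] = Math.random();
    return child;
  }

  var best = [];
  for (var i = 0; i < 100; i++) best.push(0.5);   // flat starting "canvas"
  var bestFit = fitness(best);
  var mutations = 0, improvements = 0;

  while (bestFit < 0.95) {               // run until 95% fit
    var child = mutate(best);
    mutations++;
    var childFit = fitness(child);
    if (childFit > bestFit) {            // keep the child only if it's better
      best = child;
      bestFit = childFit;
      improvements++;                    // my 1,077 out of 17,000
    }
  }

Swap the number array for a stack of semi-transparent polygons and the error sum for a pixel-by-pixel comparison against the Mona Lisa, and you've got the idea.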

I think the ability to watch it develop over time in your own browser hammers home the idea a bit better than the static examples provided in the first post from Roger Alsing. But I think there's still room for improvement. For one thing, Roger's program makes it look like it would take almost 1,000,000 generations to get a decent replication of the low-res detail of the Mona Lisa he used as his example. That's a necessary abstraction to get this kind of simulation to run on a computer. In real evolution, each generation produces several nodes, each with their own mutations. The node tree would fork out and grow huge very quickly, due to exponential growth.

But applying survival of the fittest to the tree structure would get us to the best fit much more quickly. The version that runs in a browser addresses this somewhat with the x/y display. I'm currently at 91.10% with 1238/23000. On average so far, it's taken about 18.5 mutations to find a better fit. If we could fork that into a node tree, even a simple one with 2 child nodes per parent node, we'd hit the first improvement in about 5 generations. Actually, we'd almost hit the first 2 improvements in 5 generations. After 5 more generations, we'd have tried over 2,000 mutations in total, and the number tried per generation doubles every time.
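Rough numbers, using the ~18.5 mutations per improvement above (my arithmetic, assuming a clean binary tree):

  generation:                 1    2    3    4    5   ...    10
  mutations that generation:  2    4    8   16   32   ...  1024
  cumulative mutations:       2    6   14   30   62   ...  2046

Generation 5 alone tries 32 mutations, nearly two improvements' worth at ~18.5 apiece, and by generation 10 the whole tree has tried over 2,000.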

Ok, I'm proving to not be a strong enough math geek to really get these thoughts out of my head in a way that makes sense. But the way these programs run is a bit like selection sort, whereas the node tree approach is more like heap sort. In another browser tab, I just hit ~94% fit after ~50,000 mutations (but only 1,900 improvements). Selection sort has an efficiency rating of n², so if it's taken ~50,000 steps to reach ~94%, that means n is about 224. Heap sort runs at n log n, so reaching ~94% by that method would only take about 527 steps rather than ~50,000. That's a closer fit to the number of true, natural generations it would take to get the same sort of results through evolution.
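Spelling out that arithmetic (my numbers; note they only line up if you take the log base 10):

  selection sort:  steps ≈ n²       →  n ≈ √50,000 ≈ 224
  heap sort:       steps ≈ n log n  →  224 × log₁₀(224) ≈ 224 × 2.35 ≈ 527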

A better approach to visualizing this sort of thing would allow for multiple source images and create a node tree instead of using the linear approach. The multiple source images would allow for a simulation of speciation. The node tree would be less of an abstraction from the natural processes (although still astronomically simplified in comparison) and give a more natural indication of the true “generations” required to reach a goal. Ideally, the number of descendant nodes would depend on past history of improvements. So “blood lines” that have produced a high number of improvements in the past would be more fruitful. Those with poor histories would get fewer chances, and eventually die out.

There would also need to be a threshold after which a given line is only compared to the source image it seems to be naturally drifting towards. This would simulate the migration into more specialized environments over time. It would also cut down on the processing power required to make it all run, but such a system would still experience exponential growth and would quickly overrun any processor or pile of RAM currently available. As abstract as such a program would still be, it would require hardware we won't see until quantum computing becomes a reality.
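Sketching that in code helps me think about it. Reusing mutate() and fitness() from the toy example above, a fitness-weighted node tree might look something like this (again, all names and numbers are mine, and this is a sketch rather than a real implementation):

  // Each line's child count grows with its history of improvements ("wins"),
  // and the weakest lines are culled each generation so RAM survives.
  function evolveTree(root, generations, cap) {
    var population = [root];            // root: { dna: ..., fit: ..., wins: 0 }
    for (var g = 0; g < generations; g++) {
      var next = [];
      for (var i = 0; i < population.length; i++) {
        var parent = population[i];
        var kids = 1 + parent.wins;     // fruitful blood lines branch more
        for (var k = 0; k < kids; k++) {
          var dna = mutate(parent.dna);
          var child = { dna: dna, fit: fitness(dna), wins: parent.wins };
          if (child.fit > parent.fit) child.wins++;  // improvement: more kids next time
          next.push(child);
        }
      }
      next.sort(function (a, b) { return b.fit - a.fit; });  // survival of the fittest
      population = next.slice(0, cap);  // poor histories eventually die out
    }
    return population[0];               // best individual found
  }

  // e.g., starting from the flat canvas of the earlier sketch:
  // var winner = evolveTree({ dna: best, fit: bestFit, wins: 0 }, 100, 64);

Even with the cap, it's easy to see where the processor and RAM go: drop the cull and the population doubles (or worse) every generation.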

Tuesday, December 16, 2008

The plan

I probably won't have time to put this plan into action until I'm off for the holidays. Or maybe even until I'm back from the holiday break. I'm sure it will undergo some quick evolution once it's made contact with the enemy, so to speak.

Tags

Languages
  • php
  • css
  • html
  • xml
  • js
  • jquery

Technically jQuery is not a language. But these days I find myself writing more jQuery than straight JavaScript. So in effect the “js” tag will simply mean “JavaScript that isn't jQuery”.

Tools/Software
  • dreamweaver
  • photoshop
  • browsers
  • firefox
  • safari
  • opera
  • chrome
  • ie
  • ie6
  • ie7
  • ie8

The first two are the biggies. Usually when I see something pertaining to browsers, it's a comparison among several. Sometimes it's more specific, a Firefox plug-in for example, or a bug in IE. That's the main reason for breaking IE into different versions. Each version has its own set of bugs. :(

Content type
  • tutorial
  • plug in
  • code
  • framework
  • tips
  • gallery
  • interview
  • list

I've put those in order of descending anticipated usefulness. A “tutorial” is an article or blog post that deals with a specific goal in depth. Photoshop tutorials are the most obvious example, but there are tutorials for PHP, jQuery, CSS, Dreamweaver, even specific plug-ins like Firebug (which may end up getting its own tag under Tools/Software depending on how much content I find).

Software can have a “plug in” but so can languages and frameworks. “Code” would apply to any specific technique within a language that is meant to be copied and pasted rather than published as a plug in. It's also distinguished from the more general philosophical stuff that would normally be tagged as “tips”.

The “framework” tag is something that will probably fall out of favor as I research more frameworks. If I end up adopting Cake PHP as my PHP framework of choice, then I'll probably start using a “cake” or “cake php” tag the same way I plan to use “jquery”. The framework will effectively be promoted to its own language-level tag.

“Tips” aren't plug ins and they aren't code, they are less concrete than either of those. Both the articles I linked to yesterday about writing style guides and creating maintainable CSS are good examples of tips.

A “gallery” is usually but not always visual. There are many galleries of CSS designs out there meant to inspire other designers. But there are also galleries of plug-ins and code snippets. The “interview” tag is self-explanatory.

Usually, the stuff worth tagging will be a few of the individual resources being linked to on a “list”. I'm thinking of all those “50 CSS Tricks You Can't Live Without!” and “Avogadro's Number Photoshop Tutorials for People Who Spend More Time Reading About Photoshop than Working in Photoshop” lists that have gotten so popular lately.

Content source
  • blog
  • article
  • video
  • screen cast
  • pod cast
  • wiki

I think most of what I find would technically be classified as “blog” posts. If I notice myself tagging several posts from the same blog, I should probably add it to my RSS reader.

Speaking of RSS feeds, that can cause some problems with the “article” tag. The articles I read online are often from sources that provide RSS feeds (A List Apart, Smashing Magazine, SitePoint, ThinkVitamin, etc.). But I'd also use that tag for anything from a peer-reviewed journal: for example, the 80+ articles I've dug up on web usability studies as part of my lit review for my final project. RSS feeds are much less common in that world.

Is that difference enough to justify coming up with a new way to categorize content such as Taming Lists vs. Testing web sites: five users is nowhere near enough? My heart tells me no. Jared Spool blogs. As important as peer review is, I think that sort of review process happens more quickly and more transparently online. So blogs, the good blogs at least (those that actually get read, possibly unlike this one), are not free from such a review process. At this stage, I think the academic model serves more as a means of exclusion than as any real control on quality of content. Portals like ACM may be on their way out in favor of Technorati.

Anyway, I was talking about these tags, but I think the remaining ones are self-descriptive enough to not explore in detail. :)

That should provide enough of an organizational skeleton for me to get started. I'm sure it will expand and evolve with use. The most important thing, I think, is to keep it detailed enough to describe resources in a useful way while staying small enough to be maintainable.

And someday, I'll need to figure out how to organize and tag stuff that isn't exactly work related. Luckily, the nature of tags will keep most of that content separate naturally. There may be the occasional overlap with a tag like “funny” and resources like You Suck at Photoshop.

Monday, December 15, 2008

Future KM entries

Lately I've been on a tagging/organizing kick with my knowledge management posts. But there are other areas I need to explore too. As my post from earlier today pointed out, the lack of organizational structure behind the existing site (or at least the lack of documentation of any such structure; the practical difference between a structure existing that I fail to understand and one that simply doesn't exist is effectively nothing) makes for a lot of unfun time. Once I eventually vacate this seat for whatever reason, I hope to leave my replacement in better shape than I currently find myself.

That means I need to work on documentation ranging from style guides to well documented, maintainable code to official policies and guidelines that better fit into the bureaucratic mindset of the campus than all that geeky stuff. (Both the resources I linked to there are written by women. I wonder if that is at all relevant or just a simple coincidence?)

In a time of economic crunch and budget cuts, many people would probably shy away from purposefully making themselves easier to replace. But if knowledge management is really about boosting the productivity of knowledge workers, isn't it possible that turning our backs on knowledge management now could very well extend the economic troubles? KM is about a lot more than just easing the transition for my eventual replacement. One thing I learned early on in my career about commenting my code: the person I'm communicating with in my comments may very well be my future self. Six months after I've written something, there's a good chance I won't be able to figure out what the hell was going on without at least a little guidance. It may be more important for someone who has never seen the code before, but that doesn't eliminate the very real (and almost immediate) benefits such documentation provides for me.

Bad tagging habits?

In my last KM post, I hinted that SU's swelling user base may be a contributing factor in the rampant miscategorization of content. Of course, I'm really doing a lot of projecting. The tags that work best for me might not work at all for the majority of users. The act of tagging takes its meaning from the idiolect of the person writing the tags. It doesn't get much more personal than that.

But different people have different goals in mind with their tagging. I'm trying to come up with a manageable set of tags that can efficiently describe the majority of the work-related online resources I make use of both now and in the future (and hopefully scale well for future growth). Essentially, I'm trying to do more with less when it comes to my tagging structure. Obviously, that's not everyone's goal. Even among my personal heroes, such as Jeffrey Zeldman, the tagging structures used by other people can be called “improper” for my purposes. Zeldman has fewer than 2,000 bookmarks with over 5,000 tags in his Ma.gnolia account. In fact, he's currently averaging 2.58 tags per entry. And that assumes that each tag is unique; obviously that's not the case. He's currently got 62 items tagged with “iphone” and 259 items tagged with “webdesign”. I also see some redundancy at work. He's got items tagged with “palin”, “sarah palin”, and “sarahpalin”. This entry is tagged with “september11th”, “911”, “9-11”, & “9/11”. Since that's all on one entry, I assume Zeldman's doing it on purpose and not just forgetting what his primary tag for that concept is from entry to entry. That's precisely the sort of redundancy I hope to avoid. But obviously it's working for him.

So in my previous entries I've said some things that could be taken as insulting my fellow users of social media. I should be more forgiving. If everyone used social media for the same ends, there wouldn't be much of a point, right? I guess just like any other social construct (democracy, economics, etc) simply by participating in it we take on the responsibility to be mindful of our own needs and tolerant of the needs of others.

I'm going to go find a drumming circle to join. More later.

Cleaning up the existing site

At first I thought we'd be launching the redesign before our SACS review. The powers that be have other plans. At first I was happy to have a few extra months to work and test and debug prior to launch. But then it hit me. I was gonna need to clean up the site content for the review. That means I've got to deal with the pretty much total lack of any real information architecture on the existing site.

As a first pass, I ran a site-wide link report. My first step was to get rid of the orphans. I quickly discovered 2 problems with this idea.

  1. The current site uses many JavaScript-driven pop-up windows using code generated by GoLive. Dreamweaver apparently doesn't know how to check these links. Therefore all such content shows up as orphaned.
  2. Ditto for the Flash stuff. This is more surprising since Flash is also a Macromedia product from back in the day. This means the thousands of photos we have in the various Flash-driven galleries all show up as orphans.

The initial report said that out of 24,000+ files, 13,000+ were orphans. More than half the files on the server showed up as not being linked to at all. Of those 24,000-something files, 10,626 were HTML/ASP files. The rest were images, PDFs, and stuff like that. We're now down to a total of 19,957 files, 9,413 of which are HTML/ASP. But 6,138 still show up as orphaned. I bet a few of those really are orphans. Probably no more than 200. And most of those would be images. I'm primarily worried about indexable content that could turn up in a Google search but present horribly outdated information. The trouble there is that not all of those files are orphaned. We're still linking to many of them. I guess the next step will be to search for obviously outdated files. Stuff with years in the file names, for example. Then I'll probably need to run another orphan check for freshly orphaned files once that content is cleaned up.

The good news is I've reduced the size of my local directory by 45%, from about 3.4 gigs to 1.9 gigs. The majority of that was the files we're still hosting from last year's CIT conference. But I never need to update those, so there's no need to store them locally. Some of those PowerPoint files got crazy big.

Of course, currently the beta site takes up a total of 339 megs. But it's not quite complete. Still, I'll be surprised if it grows to anywhere near 1.9 gigs before launch. Due to simple changes like getting rid of tables for layout and abandoning the <font> tag in favor of CSS we've shaved about 35k per page. We've also eliminated many pages. The beta site currently contains just 985 PHP files. That's about 10% of the files the current site contains, but we've migrated way more than 10% of the content. One of the big changes in that regard is that we now link to the online catalog for curriculum and course descriptions. There goes 2 pages per degree program plus at least a page per course offered. I think the current site has a lot of redundancy in course description pages among the various program directories. Most of the remaining content will be database driven.

Sunday, December 14, 2008

The Social Side of Social Bookmarking

There's a decent amount of discovery power available through Ma.gnolia as well. I still think Stumble Upon does it better. But it would be silly not to explore the 2nd best tool available for the job. Ironically, by doing so, I was quickly reminded of the problems with default tagging in SU. I pulled up a couple of recent bookmarks from Jeffrey Zeldman.

The first is a blog post from Simon Clayson on feeding IE6 a basic style sheet using the sort of techniques that were once common for targeting Netscape Navigator 4 with a set of specific, dumbed-down styles, thereby protecting NN4 users from that browser's botched implementation of the majority of CSS, which was safe to show to less craptastic browsers. Now NN4 is little but a ghost to haunt the nightmares of us old-school CSS scribes, and IE6 is the crappiest browser still in common usage. I'll probably spend the rest of December debugging the redesign in IE6. Had I found this idea a year ago, I probably would have served IE6 a very simple style sheet and skipped the debugging. In all honesty, even at this stage it may be less work to implement these ideas than to try to “fix” IE6.
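If you're wondering how the serving part works, the heart of the technique is just a pair of conditional comments in the head of every page. Something like this, with file names of my own invention:

  <!-- Everything except IE6 and older gets the full style sheet. The wrapper
       lines read as plain HTML comments to non-IE browsers, so they see the
       link too. -->
  <!--[if ! lte IE 6]><!-->
    <link rel="stylesheet" type="text/css" href="/css/full.css" media="screen" />
  <!--<![endif]-->

  <!-- IE6 and older get a simple, readable, mostly typographic sheet. -->
  <!--[if lte IE 6]>
    <link rel="stylesheet" type="text/css" href="/css/basic.css" media="screen" />
  <![endif]-->

No hacks polluting the main CSS, no JavaScript sniffing; IE6 users get fast, legible pages, and I could stop debugging float bugs.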

So anyway, I thought this was a potentially useful technique, so I thumbed it up. This didn't pull up the form for submitting new content to SU, so I knew someone else had already submitted this particular link. This gave me a great opportunity to see what the default tag would be. jackosborne says this page is primarily about “graphic-design”. It deals exclusively with serving specific CSS code targeted at a specific web browser. I can think of at least half a dozen tags more useful for this content than “graphic-design”. But the current SU system gives too much power to the person submitting the content. Jack's actually got more stumbles tagged “web-design” (54) than “graphic-design” (45), but apparently that's due to other people's default tags on the pages he is thumbing up. Looking at his discoveries, he's also submitted this article on Five CSS Design Browser Differences I Can Live With by Andy Clarke and Using jQuery for Background Image Animations from Snook.ca as “graphic-design”. Maybe that tagging scheme serves Jack well. It makes SU virtually worthless for me when it comes to organizing and retrieving the resources I discover through it.

The other page I discovered via Zeldman is Western Civ's guide to CSS browser support. Again, this page deals exclusively with CSS and web browsers, so for my purposes it would be pretty easy to tag. It was submitted by SU user kancerman uh...wow, 3 years ago. If I'm reading this right, I'm only the 10th person to thumb this up in those 3 years. That could be because it was submitted into the category “internet-tools”. For me, that category is better suited for things like online mortgage calculators or WriteBoard. But due to the way kancerman submitted this page, “internet-tools” is the default tag. Now I can look at his entry for this page and see that his 2nd tag is in fact “CSS”, but since that's the 2nd tag on his entry, it has no bearing on how the page is tagged by default when I thumb it up.

Maybe this problem in the design of SU is worse than I thought. Not only does the default tagging scheme make it harder for me to go back and look up stuff I have previously thumbed up without bothering to write a review and/or tag it myself, it also seems to have a negative impact on the effectiveness of SU as a discovery engine. How many times have I found a page via means other than SU, thumbed it up, didn't see the new content submission form pop up, assumed whoever beat me to the punch on submitting the content at least submitted it properly, and went on my way? How often does the average SU user do that? One thing I've noticed since I started paying attention to the default tagging scheme in SU is how often I see content that is submitted into the wrong category. If I have found this content via SU, then I can use the “report last stumble” feature. But that only works if I stumble into something in one of my defined interests that really should be tagged as some other of my defined interests. If someone submits a CSS gallery as “photography”, for example. But if I get to that page without being referred there by SU, there's no way for me to bring the miscategorization to the attention of whoever addresses such things. That is most likely to happen if someone submits content that should fall within one of my defined interests as pertaining to a topic of interest that isn't on my list.

Oh look, this jQuery plug-in has been submitted under “alternative-medicine”

There's no way I can do anything about that. All I can do is tag it properly within my own account. But since very few web designers are going to be stumbling through the alternative medicine category (then again, maybe I assume too much), and very few people looking for alternative medicine information will give a rat's ass about a jQuery plug-in, very few people who care about that content will ever stumble into it. I can't even resubmit it. Once a page is submitted, all I can do is tag and review it myself. In effect, such content is quarantined, cut off from its true target audience. I've got to think there are ways for SU to address this. If Mac OS X can have a pretty effective summarize tool built in, can't a similar algorithm be run against the content of new submissions to SU in an attempt to verify the categorization of that content? Couldn't meta tags, keywords, or the sort of tricks search engines use to categorize content be applied? I know these things aren't cheap, but they are possible, and SU has a larger user base than Delicious (which may actually be a big part of the problem).

Friday, December 12, 2008

Getting organized with tags and Ma.gnolia

I've been mulling over how to approach planning out my tags for using Ma.gnolia (M) to store and organize the work-related resources I find via Stumble Upon (SU). At first I thought about a matrix of some sort allowing me to drill down within a topic. But how would I share that here? An HTML table won't work without getting really nasty with the colspan attribute. Mind mapping software could work, but the image it would produce would be far too huge to post here. I even tried to think of it in terms of XML and custom Doctypes where I could use “web design” like a root element and have a bunch of descendant tags from there. That led me to two realizations:

  1. Only someone at least as geeky as me would have any chance of understanding such a system
  2. I was ultimately still thinking in hierarchies

The philosophy behind tagging is to address areas where hierarchies break down. For example, as I explored such a matrix, I found myself coming up with compound tags like “javascript-framework” and “php-framework”. Part of the reason for this line of thinking is the structure of SU, where such compound tags would be useful. But M allows you to search and sort based on a combination of tags. Resources on Cake PHP, Zend, Ruby on Rails, and jQuery could all take the tag “framework”. I could also tag such resources with their associated language: “php”, “php”, “ruby”, and “javascript” respectively.

Simply typing this out here has led to another discovery. Typing out “javascript” sucks. I don't plan on tagging stuff with “cascading style sheets” either, so I'll steal an idea I've been using in my directory structures for years and shorten “javascript” to “js”. That will be clear to me and should be clear to any other web professional who happens to browse through my links in Ma.gnolia. In theory, that should also apply to anyone taking over this position in the future.

I also realized that trying to use a root tag such as “web design” is short-sighted. Not everything I do is related to design. Some of the server administration or SEO stuff I do would stretch the common definition of design.

Leaving behind the idea of a hierarchy should also allow me more freedom for future growth. If I'm simply tagging resources with an associated language (“css”, “js”, “php”) it becomes much easier to just add a new tag for any new languages I start to use or learn (“perl”, “ruby”, “lolcode”). This industry evolves so quickly, that sort of scalability will likely pay off in ways I currently can't even predict. Five years from now my job may require as much understanding of psychology as it currently does programming and design. I'm pretty sure we're one high profile lawsuit away from moving a solid understanding of the legal implications of accessibility from the “recommended skills” to the “required skills” section of job descriptions such as mine.

I'm going to give some thought to classification of the tags I will need. Languages are an obvious example of what I'm talking about. Maybe I should add a class of tags for software: Dreamweaver, Photoshop, etc. I try to keep what I do tool-neutral, but I'd be lying if I said I never use Photoshop tutorials from the internet. If the Adobe Creative Suite were not already bought and paid for, I could get by using free text editors and open source programs like the GIMP. But the reality of my job is that I spend a lot of time working with Adobe software, so it's probably a good idea to reflect that in my tag structure. I'll also put some thought towards the need for consistency. Luckily, Ma.gnolia offers an auto-complete function based on previously used tags. SU offers no such feature. I just realized Blogger has the same sort of auto-complete function in the tag field for my blog entries. That should at least keep me from forking my tags due to a common misspelling or something silly like that. I'll continue to think things over and share my thoughts over the weekend.

Thursday, December 11, 2008

Does anyone actually read this?

I've been emailing some of my designer/developer heroes asking for opinions on CSS frameworks. Not sure if those efforts will bear any fruit, but in the past even the rock stars of the industry have been pretty approachable.

I thought I might as well try to get the conversation going here as well. (Assuming anyone is actually out there with whom to converse.)

My thoughts on what I've seen so far have not been good. Even Google, which usually awakens my inner fan boy, seems to miss the boat on this one. (Just got confirmation from John Resig [I told you even the rock stars are approachable] that Blueprint is hosted on Google Code but isn't itself a Google product. Apparently I'm not alone in my confusion. Still, it's the de facto king of CSS frameworks so I'll continue to pick on it anyway.) Blueprint strikes me as a tangled mess of presentational class names. Actually, forms.css isn't so bad. It's got .title, .text, .error, .notice, and .success. Those classes make semantic sense. I can see myself actually using the core ideas behind that style sheet if not the actual framework code itself. And it's things like that which give me hope for the general idea of a CSS framework. typography.css starts to get a bit more iffy. As much as I hate the idea of classes like .left and .right for floating images within text, I have to admit I often use them myself. So I can't point too many fingers there. Classes like .added and .removed have obvious uses for AJAX-y goodness. And I can see uses for classes such as .first and .last assuming they are applied dynamically (by jQuery, for example). I can even imagine common uses for classes like .small & .large and .quiet & .loud, but those start to fall outside my personal comfort level when it comes to putting non-semantic, presentational classes into my markup.

The deal killer for me is grid.css, which is really the main selling point of the idea of a CSS framework. Almost everything in that style sheet is purely presentational.

Let's say I'm building a site using Blueprint to do a basic 3 column layout. I start out with 6-12-6 and put the needed classes into the markup to accomplish that. I build a couple hundred pages and put it into production. A few months later I realize that the sort of content I've got going down the right-hand side (AdSense, social bookmarking feeds, etc) is not coequal with the primary navigation in the left column and I want to shift things a bit to, say, 7-13-4. Now I have to alter class names on a few hundred pages' worth of markup rather than making these presentational edits to the CSS where they really belong.
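From memory, the markup for that layout ends up looking something like this (exact class names may be a bit off, but this is the flavor):

  <!-- Blueprint's 24-column grid: the 6-12-6 split is hard-coded
       into the markup of every single page. -->
  <div class="container">
    <div class="span-6"> ...primary navigation... </div>
    <div class="span-12"> ...main content... </div>
    <div class="span-6 last"> ...AdSense, feeds, etc... </div>
  </div>

Moving to 7-13-4 means touching those class names on every page.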

“But you could easily do that with a find/replace. Quit yer bitchin'.”

Ok, that's true. I see your find/replace and raise you a redesign requiring a horizontal primary navigation menu. If you're keeping your markup all Zen-like, you edit one file and the changes magically appear site-wide. You might still conceivably be able to pull off such a change with a few find/replace operations using Blueprint (or any other grid-based CSS framework using presentational class names; I'm using Blueprint as an example, but just about everything I've seen suffers the same flaws). But it would take some pretty complex work. And even running find/replace across hundreds of pages of markup is more work than changing a couple of rules in a centrally located CSS file.
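For contrast, here's the Zen-like version I'm arguing for, with hypothetical IDs and widths:

  <div id="nav"> ...primary navigation... </div>
  <div id="content"> ...main content... </div>
  <div id="extras"> ...AdSense, feeds, etc... </div>

  /* layout.css: the 6-12-6 decision lives here, and only here */
  #nav     { float: left;  width: 230px; }
  #content { float: left;  width: 470px; }
  #extras  { float: right; width: 150px; }

Changing the column proportions, or turning #nav into a horizontal menu bar, is an edit to those few rules, not to the markup of a few hundred pages.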

But this is a lot like how I felt about js frameworks vs. rolling my own unobtrusive DOM scripting before I found jQuery. I knew there was potential there, but I just wasn't seeing enough benefit offered in exchange for giving up so much control of my code. It helped that jQuery applies CSS-style selector logic to scripting and thus meshed well with the way my brain already thinks about these things. I imagine if I had both the time and talent required to build my own js framework, it would end up working a lot like jQuery. I have yet to experience that with a CSS framework.
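To show what I mean by that selector logic, here's a trivial before-and-after (the markup is hypothetical):

  // Plain unobtrusive DOM scripting:
  var links = document.getElementById('nav').getElementsByTagName('a');
  for (var i = 0; i < links.length; i++) {
    links[i].className += ' external';
  }

  // The same thing in jQuery, driven by a CSS-style selector:
  $('#nav a').addClass('external');

If you can write a CSS rule, you already know how to target elements in jQuery. That's the sort of epiphany I keep hoping a CSS framework will give me.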

Exploring the Potential

I haven't taken an in-depth look at everything yet, but there are two CSS frameworks that managed to not send me running away screaming after mere moments of peeking at the source code of the available demos.

  1. Boilerplate
  2. Content with Style

The language used to introduce the concept on the home page for Boilerplate is enough to get me findin' religion.

As one of the original authors of Blueprint CSS I've decided to re-factor my ideas into a stripped down framework which provides the bare essentials to begin any project. This framework will be lite and strive not to suggest un-semantic naming conventions. You're the designer and your craft is important.

If you prefer:

 { float: left; width: 240px; margin-right: 110px; }

over

class="column span-2 append-1"

then you're in the right place my friend.

Yes! But wait, it's only at version number 0.3 and it looks like that version is nearly a year old. Is my best lead effectively abandonware? That makes me a sad panda. Still, it may give me a good place from which to start future projects. Let's download this bad boy and take a look under the hood.

Not significantly different from Blueprint. I see an almost identical typography.css file, down to the somewhat uncomfortable .small & .large. There's also .quiet, but no .loud. I'm also seeing a pretty basic reset.css (I personally prefer a slightly modified version of Eric Meyer's Reset Reloaded), a small collection of basic IE hacks in ie.css, and a few form styles that aren't really enough to save me significant time vs. writing my own from scratch or cannibalizing my own back catalog.

While Boilerplate avoids going deeply enough into the realm of presentational class names to catch my ire, I'm not seeing much in the way of benefit here either. At least with Blueprint I could wireframe something up in a day to show to a client, even if it's not the sort of thing I would want to put into production. I see nothing mind blowing here. Maybe if it ever hits version 1.0 I'll take another look. :(

Moving on to Content with Style. Yes, this is better. But it seems more like a philosophy than a true framework. Maybe I'll feel differently after checking out the source code. This also hasn't been updated in just over a year (and then only once, and that wasn't really a change to the code at all, just a license addition). That also contributes to an overall feeling that I'm better off applying the underlying ideas to my own design work rather than using these exact files. But that's pretty much how I feel about Blueprint. I can figure out which classes would apply to a given element in the design I aim to achieve, and rather than apply those class names to the element I could simply apply the styles for that class to whatever semantic class or ID I use for that element. The philosophy is sound. It's just the execution that leaves a bad taste in my mouth.

Looking at the code feels a lot like looking at my own code. But rather than feeling like the epiphany behind jQuery, this just feels like a slightly more systematic method of reusing the code snippets I've been using for years. It's far from bad. It's quite good in fact. But it's nothing I couldn't do myself. And by doing it myself it will mesh better with the way I approach things and thus be more usable and useful.

Why I Care

My biggest concern is that those currently learning CSS will fall back on these frameworks to save time for things like class projects or one-off work for portfolio fodder. In those situations, giving up the separation of content and style to gain a bit of agility is probably justified. But I fear that in the process the vast majority of the next generation of web developers will miss out on the philosophical core of web standards. I think it could set back the industry, right at the time when I was finally starting to feel that education (both formal and informal) was starting to get things right in this field.

Or maybe I'm Chicken Little and the sky isn't really falling.

Wednesday, December 10, 2008

My personal knowledge management problem

Full Disclosure:

This post and probably a couple of future posts will serve to fulfill a requirement in my graduate course on knowledge management. But I'm trying hard to approach this in such a way that such requirements are totally transparent aside from this note. Maybe I'll pull it off and this will hold some interest for folks other than my prof. Or maybe I'll totally drop the ball, fail to engage anyone with this content, and totally screw up the assignment to boot. If so, maybe I'll at least fail spectacularly enough to get some good schadenfreude going.


I tried going through the exercises Kirby put together. The basic goal is to figure out which tasks I perform as a knowledge worker bring the most value to my organization, then figure out how much of my time I spend on those tasks vs. less valuable tasks, then try to maximize the time I can devote to the valuable stuff and minimize the time wasted (although that's a slightly harsher term than it needs to be in this context) on less valuable tasks.

I'll be honest, I don't think those exercises work for me right now. There are a couple of reasons for this.

  1. I'm 18 months into this job, and the task that has dominated my time thus far, redesigning the website (we launched the beta, by the way; I don't think I took the time to announce that officially here, although I did on the Vol State blog), is not typical of the work someone in this position would otherwise be doing. Once the redesign launches, the way I work will shift, rather radically. It's hard if not impossible for me to look at the last 18 months and make conjectures about the next 18 months.
  2. The biggest drain on my productivity falls outside my realm of influence; i.e., it's a trend I am powerless to address. I won't go into detail here, but Kirby, if you want specifics, just email me and I'll fill you in.

So I've been looking at the way Kirby breaks down his model of personal knowledge management, and one area where I see a lot of room for improvement in the way I currently handle things is information organization and retrieval. The sad part is I've been sitting on the tools to address this issue for years. I just need to be mindful of how I use them.

I signed up for a Ma.gnolia account back when they were still in beta. I used it for a while, then I found Stumble Upon (hereafter: SU). My thoughts at the time were that SU did all the social bookmarking stuff I had been using Ma.gnolia for with the added element of discovery of new content at the push of a button. That's true in theory. 2 years later, it's obvious that it falls apart in practice.

SU does the whole discovery thing very well. I don't think I ever would have found jQuery without SU. I had been underwhelmed by the JavaScript libraries I had seen during the first few months of that whole buzz and had pretty much written off the whole idea. I was just gonna stick to writing my own custom unobtrusive JavaScript using the Document Object Model. Now I literally use jQuery every day. My job would not be the same without it.

But I discovered jQuery at a time when I actually had the time to take on the learning curve, as gentle as it may be. The official documentation is complete enough that managing access to the information I needed to direct my own learning wasn't an issue either. The only exception I could find to that would be the plug-ins, but truth be told, if a plug-in doesn't make intuitive sense and isn't well documented, I don't use it.

Compare that to some things I hope to learn more about in the near future, such as Drupal and Cake PHP or Perl. Or even compare it to some of the stuff I'm already using but need to reference source material for rather than working from the top of my head, like regular expressions and PEAR or Active Directory. Now we're talking about steeper learning curves just as what little time I have for learning new skills is shaved away as I try to push the redesign through the beta testing phase and into launch. I keep stumbling onto sources for these topics, but lacking the time to fully digest them, I thumb them up and move on.

Ok, that last statement raises the question: if I don't have time to digest this stuff, how do I have time to keep stumbling onto new content? First of all, SU is addictive. On top of that, it's so easy to just click the stumble button (with or without specifying a topic to stumble through, such as web design) that I can click through a fresh page or two while I'm checking in the files I just completed working on in Dreamweaver (hereafter: DW). Or while I wait for DW to generate the broken link reports I've been running lately. Actually, now that I bring that up, I really hope DW performs better on the redesigned site. The current site is such a mess of spaghetti code that DW is prone to take its sweet time or even crash when I ask it to perform a site-wide action. The redesign is much leaner. Based on the work done so far, the code we shave off should be equivalent to about 68 copies of the complete works of Shakespeare. No, really. Project Gutenberg has the complete works of Shakespeare as a plain text file. I've done the math. :)
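For the curious, here's one way those numbers can line up (my arithmetic, assuming Project Gutenberg's roughly 5.5 MB plain-text file and the ~10,600 HTML/ASP pages counted on the current site):

  10,626 pages × ~35 KB shaved per page ≈ 372,000 KB ≈ 372 MB
  372 MB ÷ ~5.5 MB per copy ≈ 68 copies of the complete works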

This is where the problem comes in. I find these great resources, or at least potentially great, but going back to find them later gets to be a real pain. Sometimes I don't take the time to write my own tags for a page. I just thumb it up and switch back to DW or click the stumble button again. But SU being socially driven, the tags default to the category chosen by the person submitting the site. I currently have 143 stumbles tagged with “graphic-design”. I'm not a graphic designer. I don't really even consider myself a web designer. If you want to split hairs, I consider myself more of a web developer. When I tag an article relating to design, I use “web-design”. I've got 418 of those. But it's possible I didn't personally tag all of them. If the person submitting them tagged it as “web-design” and I just thumbed it up and moved on without bothering to apply my own tags to it, then that's how it would default. It's obvious that graphic design is a pretty popular tag in the wild and the zeitgeist is polluting my tag cloud.

An Example

Over a year ago (November 1st of 2007 according to my SU history), I stumbled upon Scott Jehl's StyleMap script. At the time I thought, “This is how we need to do the org charts.” Previously we had done the org charts in Microsoft Visio, and then those files were exported to HTML. But that produces a tangled mess of frames and images. It's hard to navigate, hard to maintain, and doesn't even work on my Mac (thanks, Microsoft). I thumbed it up and moved on.

In October of this year, I finally turned my attention to the org charts for the redesign. I remembered stumbling upon this script a long time ago that would be perfect. But I couldn't find it in my SU history. I tried Google searching every combination I could think of. I literally wasted an entire day trying to find this script.

The problem was the default tag the page was assigned had nothing to do with how I conceptualized the content of the page. I don't even remember what it was now and I have since gone back and edited the entry with my own tags. Google wasn't working because I had forgotten that it was written as a script to do site maps rather than org charts. To add insult to injury, when I finally dug up the article and tried to put the script into use, our org chart proved to be way too complex. But I could have discovered that in an hour had I not wasted an entire day (and part of the following morning) digging up the script.

The Problem

Partly due to flaws in the way I use it, and partly due to flaws in the way it's designed, SU is failing me as a means of efficient information organization and retrieval. In defense of the development team behind SU, it is designed more as a discovery engine than as an organization tool. And I can't sing enough praises as to how well it performs its core function.

The Solution?

So I turn my attention to my neglected Ma.gnolia account. If I start using both these tools to perform the tasks for which they were designed, and approach my use of these tools in a mindful way, I think I can milk a lot more productivity out of my days. I'll map out that plan in a future entry. Stay tuned.

Friday, December 05, 2008

When it rains it pours

I've been getting caught up on my blog reading, which always gets me thinking. Once my comments on the entries of others grow longer than the entries to which I am replying, I figure I'm better off putting my thoughts here.

I have both a 2 year and a 4 year degree in web design. As in, the word “web” is actually printed somewhere on my physical degrees. One says “Web Development Technology” and the other says “Web Design”. I got a lot out of my education because the timing and my mindset were right for it. I had a few years of field work under my belt, so even when the course work did a crappy job of tying the concepts to real world examples I was able to fill in those gaps.

The downside is at the undergraduate level classes have to assume no prior knowledge on the part of the student, at least nothing above and beyond what is taught in high school. In other words, the first few classes assume you can type and that's about it. The problem there is some of the most promising students will be turned off early on in such a program, say to hell with it, and go get a job with the basic skills they already have. The other edge of that particular sword is you quickly go from the early hand holding courses to being expected to understand stuff like the OSI Model or object oriented programming in detail. If I were playing a video game with a learning curve that steep, controllers would be bouncing off walls, likely in multiple pieces. But my prior experiences made things more manageable and I was mentally prepared for the challenges that came up.

A lot of people would not be. That doesn't make them bad people or somehow less intelligent than me. If anything it indicates that the traditional academic model is poorly suited to cover highly technical disciplines like web design.

I say the timing was right because the 2 year program had just entered a stage of growth and was updating its curriculum to be more relevant. The first semester I was in that program, the textbooks and course materials were very 1996. Browser sniffing, tables for layout, optimizing animated gifs; scary stuff. By the time I graduated they were offering classes on CSS and XML. Still far from cutting edge, but they managed to advance their curriculum about 7-8 years in the 3 semesters I was there. The problem is the nature of most colleges and universities makes it very hard to make such changes more often than once a decade or so.

The 4 year program was brand new. I'm the 6th person to graduate with that degree. Since it was so new, the curriculum had not yet had time to get stale. The director of the program was also both knowledgeable and passionate, and he personally taught most of the core classes in the program.

In a single lecture we might start out talking about the Iliad and how it's an example of an epic, then move into how the structure of an epic is guided by oral traditions. And just when you're starting to wonder why the hell we're talking about this stuff in a class on web design, we shift to talking about how communication on the web shares a lot of characteristics with oral communication (you and I are having a type of conversation right now). From there we talk about how the structure of an epic poem can be seen as an early prototype of hypertext, with the various ways the narrative loops back on itself and includes passing references to other epics from the same canon. And I use phrases like "we talk" for a reason; the courses were very much structured around discussion and teamwork. I learned more from my fellow students than from the professor, and that was by design. My current graduate program hasn't managed to pull together ideas as seemingly disparate in 2 years as Bob routinely did over the course of an hour.

But literally the day I graduated the director of the program packed his bags and drove several hours north to become the new Director of Graduate Studies at Empire State College in Saratoga Springs, NY. Without his leadership, I don't know how well the content of the courses will be kept up to date or how well the structure of the courses will translate to new professors covering that material. I'm very lucky to have gone through the program when I did.

Antique Browser

What reminded me to come back and complete that last post is a response I wrote on yet another design blog. The thoughts that led to that reply were obviously inspired by the ideas that have been kicking around in my head for the past month while my last post was locked in draft stasis. I'll go ahead and share those thoughts here as well, slightly more detailed than in my reply to Raymond's entry on hating IE6 (I took the time to do the math).

I was thinking just last night that an appropriate metaphor to explain to non-techies why IE6 sucks would be to compare it to an antique car.

The Model T was not the first car, and Mosaic was not the first browser, but both brought those particular products into the mainstream and birthed industries to support them. So we'll use those as our anchor points for comparison.

The first Model T rolled off the assembly line in 1908. So the automobile as we know it is 100 years old this year. Mosaic was released on April 22, 1993. Today being December 5th, 2008, that makes the modern web browser 15.6226243 years old. IE6 came out August 27, 2001, so it is 7.274739 years old. That means IE6 is 46.5654096% as old as the modern web.

If you were driving a car manufactured in 1962, would you expect it to meet modern safety standards? Would you expect it to pass modern emissions testing? You couldn't use a modern gas pump without an artificial additive to replace the lead that was commonly added to fuels 46 years ago. There's a reason people who own antique cars usually drive them only to auto shows or for the occasional pleasure cruise and don't use them for everyday driving. If you drove such a car the typical 1,000 miles or so a month, you would expect to pay much more in maintenance costs than someone driving a newer model.

The primary appeal of an antique car is the cool factor. They offer styles not available in the modern market and you just look cooler behind the wheel of a cherry antique than just about any modern automobile. No one is impressed by your antique browser. So why are you still using it?