Wednesday, December 17, 2008

Image Evolution

Ten days ago... No, wait. That's something else.

Eight days ago, I stumbled onto Genetic Programming: Evolution of Mona Lisa. I thought it was pretty awesome. Today, I stumbled upon this simulation of the same idea you can run in your browser. It doesn't work in IE, but I really hope no one reading this is using IE (any version) as their default browser anyway. Right now, mine is at 1,077/17,000. That means out of 17,000 “mutations,” 1,077 have been an improvement over the previous best fit. I just hit 90% fit, which looks like 005874.jpg in the example shots from the original post. That leads me to believe the offline version is a bit more efficient than the online version, since it's taken about 3 times the number of mutations to get there. But that's to be expected.

I think the ability to watch it develop over time in your own browser hammers home the idea a bit better than the static example provided in the first post from Roger Alsing. But I think there's still room for improvement. For one thing, Roger's program makes it look like it would take almost 1,000,000 generations to get a decent replication of the low-res detail of the Mona Lisa he used as his example. That's a necessary abstraction to get this kind of simulation to run on a computer. In real evolution, each generation produces several nodes, each with their own mutations. The node tree would fork out to be very large, very quickly, due to exponential growth.

But applying survival of the fittest to the tree structure would get us to the best fit much more quickly. The version that runs in a browser addresses this somewhat with the x/y display. I'm currently at 91.10% with 1238/23000. On average so far, it's taken about 18.5 generations to find a better fit. If we could fork that into a node tree, even a simple node tree of 2 child nodes per parent node, we'd hit the first improvement in about 5 generations. Actually, we'd almost hit the first 2 improvements in 5 generations. After 5 more generations, we'd be hitting quantum leaps where over 2,000 mutations are tried. And that number doubles every generation.
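The branching arithmetic can be sketched out as a toy model. This assumes exactly two children per parent, one mutation tried per node, and the ~18.5 mutations-per-improvement average observed above (so the exact generation counts are illustrative, not definitive):

```javascript
// Toy model of the binary node tree: generation g tries 2^g mutations,
// so the cumulative total is 2^1 + 2^2 + ... + 2^g = 2^(g+1) - 2.
function mutationsInGeneration(gen) {
  return Math.pow(2, gen);
}

function cumulativeMutations(gen) {
  return Math.pow(2, gen + 1) - 2;
}

// First generation whose cumulative mutation count reaches a target --
// e.g. the ~18.5 mutations the linear version averages per improvement.
function generationsForMutations(target) {
  var gen = 0;
  while (cumulativeMutations(gen) < target) gen++;
  return gen;
}
```

Within four or five generations the cumulative count passes 18.5, and by generation 11 a single generation tries 2,048 mutations, which lines up with the “over 2,000” figure above.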

Ok, I'm proving to not be a strong enough math geek to really get these thoughts out of my head in a way that makes sense. But the way these programs run is a bit like selection sort, whereas the node tree approach is more like heap sort. In another browser tab, I just hit ~94% fit after ~50,000 mutations (but only 1,900 improvements). Selection sort has an efficiency of O(n²). So if it's taken ~50,000 steps to reach ~94%, that means n is about 224. Heap sort runs in O(n log n), so to reach ~94% using that method would only take about 527 steps rather than ~50,000. That's a closer fit to the number of true, natural generations it would take to get the same sort of results through evolution.
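The sorting analogy is loose, but the arithmetic behind those figures is easy to reproduce (note the ~527 estimate works out if you use a base-10 log):

```javascript
// If ~50,000 linear steps represent n^2 work, back out n, then estimate
// the equivalent n log n step count. Base-10 log gives roughly 527.
var steps = 50000;
var n = Math.sqrt(steps);          // ~223.6
var heapish = n * Math.log10(n);   // ~525
```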

A better approach to visualizing this sort of thing would allow for multiple source images and create a node tree instead of using the linear approach. The multiple source images would allow for a simulation of speciation. The node tree would be less of an abstraction from the natural processes (although still astronomically simplified in comparison) and give a more natural indication of true “generations” required to reach a goal. Ideally, the number of descendant nodes would depend on past history of improvements. So “blood lines” that have produced a high number of improvements in the past would be more fruitful. Those with poor histories would get fewer chances, and eventually die out. There would also need to be a threshold after which a given line is only compared to the source image it seems to be naturally drifting towards. This would simulate the migration into more specialized environments over time. It would also cut down on the processing power required to make it all run, but such a system would still experience exponential growth and would quickly overrun any processor or pile of RAM currently available. As abstract as such a program would still be, it would require hardware we won't see until quantum computing becomes a reality.

Tuesday, December 16, 2008

The plan

I probably won't have time to put this plan into action until I'm off for the holidays. Or maybe even until I'm back from the holiday break. I'm sure it will undergo some quick evolution once it's made contact with the enemy, so to speak.


  • php
  • css
  • html
  • xml
  • js
  • jquery

Technically jQuery is not a language. But these days I find myself writing more jQuery than straight JavaScript. So in effect the “js” tag will simply mean “JavaScript that isn't jQuery”.
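A trivial side-by-side makes the distinction concrete. This is just a sketch; the plain version takes the document as a parameter only so it's easy to exercise outside a browser:

```javascript
// "js" tag territory: plain JavaScript, no library. Hide every element
// with the class "note".
function hideNotesPlain(doc) {
  var notes = doc.getElementsByClassName('note');
  for (var i = 0; i < notes.length; i++) {
    notes[i].style.display = 'none';
  }
}

// "jquery" tag territory: the same thing as a one-liner.
// $('.note').hide();
```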

  • dreamweaver
  • photoshop
  • browsers
  • firefox
  • safari
  • opera
  • chrome
  • ie
  • ie6
  • ie7
  • ie8

The first two are the biggies. Usually when I see something pertaining to browsers, it's a comparison among several. Sometimes it's more specific, a Firefox plug in for example, or a bug in IE. That's the main reason for breaking IE into different versions. Each version has its own set of bugs. :(

Content type
  • tutorial
  • plug in
  • code
  • framework
  • tips
  • gallery
  • interview
  • list

I've put those in order of descending anticipated usefulness. A “tutorial” is an article or blog post that deals with a specific goal in depth. Photoshop tutorials are the most obvious example, but there are tutorials for PHP, jQuery, CSS, DreamWeaver, even specific plug ins like Firebug (which may end up getting its own tag under Tools/Software depending on how much content I find).

Software can have a “plug in” but so can languages and frameworks. “Code” would apply to any specific technique within a language that is meant to be copied and pasted rather than published as a plug in. It's also distinguished from the more general philosophical stuff that would normally be tagged as “tips”.

The “framework” tag is something that will probably fall out of favor as I research more frameworks. If I end up adopting Cake PHP as my PHP framework of choice, then I'll probably start using a “cake” or “cake php” tag the same way I plan to use “jquery”. The framework itself will be promoted to a tag used the same way as a language tag.

“Tips” aren't plug ins and they aren't code, they are less concrete than either of those. Both the articles I linked to yesterday about writing style guides and creating maintainable CSS are good examples of tips.

A “gallery” is usually but not always visual. There are many galleries of CSS designs out there meant to inspire other designers. But there are also galleries of plug ins and code snippets. The “interview” tag is self explanatory.

Usually, the stuff worth tagging will be a few of the individual resources being linked to on a “list”. I'm thinking of all those “50 CSS Tricks You Can't Live Without!” and “Avogadro's Number Photoshop Tutorials for People Who Spend More Time Reading About Photoshop than Working in Photoshop” that have gotten so popular lately.

Content source
  • blog
  • article
  • video
  • screen cast
  • pod cast
  • wiki

I think most of what I find would technically be classified as “blog” posts. If I notice myself tagging several posts from the same blog, I should probably add it to my RSS reader.

Speaking of RSS feeds, that can cause some problems with the “article” tag. The articles I read online are often from sources that provide RSS feeds (A List Apart, Smashing Magazine, SitePoint, ThinkVitamin, etc.). But I'd also use that tag for anything from a peer reviewed journal, for example, the 80+ articles I've dug up on web usability studies as part of my lit review for my final project. RSS feeds are much less common in that world.

Is that difference enough to justify coming up with a new way to categorize content such as Taming Lists vs. Testing web sites: five users is nowhere near enough? My heart tells me no. Jared Spool blogs. As important as peer review is, I think that sort of review process happens more quickly and more transparently online. So blogs, the good blogs at least (those that actually get read, possibly unlike this one), are not free from such a review process. At this stage, I think the academic model serves more as a means of exclusion than as any real control on quality of content. Portals like ACM may be on their way out in favor of Technorati.

Anyway, I was talking about these tags, but I think the remaining ones are self descriptive enough to not explore in detail. :)

That should provide enough of an organizational skeleton for me to get started. I'm sure it will expand and evolve with use. The most important thing, I think, is to keep it detailed enough to describe resources in a useful way while staying small enough to be maintainable.

And someday, I'll need to work in the need to organize and tag stuff that isn't exactly work related. Luckily, the nature of tags will keep most of that content separate naturally. There may be the occasional overlap with a tag like “funny” and resources like You Suck at Photoshop.

Monday, December 15, 2008

Future KM entries

Lately I've been on a tagging/organizing kick with my knowledge management posts. But there are other areas I need to explore too. As my post from earlier today pointed out, the lack of organizational structure behind the existing site (or at least the lack of documentation of any such structure; the difference between a structure existing that I fail to understand and one that simply doesn't exist is effectively nothing) makes for a lot of unfun time. Once I eventually vacate this seat for whatever reason, I hope to leave my replacement in better shape than I currently find myself.

That means I need to work on documentation ranging from style guides to well documented, maintainable code to official policies and guidelines that better fit into the bureaucratic mindset of the campus than all that geeky stuff. (Both the resources I linked to there are written by women. I wonder if that is at all relevant or just a simple coincidence?)

In a time of economic crunch and budget cuts, many people would probably shy away from purposefully making themselves easier to replace. But if knowledge management is really about boosting the productivity of knowledge workers, isn't it possible that turning our backs on knowledge management now could very well extend the economic troubles? KM is about a lot more than just easing the transition for my eventual replacement. One thing I learned early on in my career about commenting my code: the person I'm communicating with in my comments may very well be my future self. Six months after I've written something, there's a good chance I won't be able to figure out what the hell was going on without at least a little guidance. It may be more important for someone who has never seen the code before, but that doesn't eliminate the very real (and almost immediate) benefits such documentation provides for me.

Bad tagging habits?

In my last KM post, I hinted that SU's swelling user base may be a contributing factor in the rampant miscategorization of content. Of course, I'm really doing a lot of projecting. The tags that work best for me might not work at all for the majority of users. The act of tagging takes its meaning from the idiolect of the person writing the tags. It doesn't get much more personal than that.

But different people have different goals in mind with their tagging. I'm trying to come up with a manageable set of tags that can efficiently describe the majority of the work related online resources I make use of both now and in the future (and hopefully scale well for future growth). Essentially, I'm trying to do more with less when it comes to my tagging structure. Obviously, that's not everyone's goal. Even among my personal heroes, such as Jeffrey Zeldman, the tagging structures used by other people can be called “improper” for my purposes. Zeldman has less than 2000 bookmarks with over 5000 tags in his Ma.gnolia account. In fact, he's currently averaging 2.58 tags per entry. And that assumes that each tag is unique. Obviously that's not the case. He's currently got 62 items tagged with “iphone” and 259 items tagged with “webdesign”. I also see some redundancy at work. He's got items tagged with “palin”, “sarah palin”, and “sarahpalin”. This entry is tagged with “september11th”, “911”, “9-11”, & “9/11”. Since that's all on one entry I assume Zeldman's doing it on purpose and not just forgetting what his primary tag for that concept is from entry to entry. That's precisely the sort of redundancy I hope to avoid. But obviously it's working for him.

So in my previous entries I've said some things that could be taken as insulting my fellow users of social media. I should be more forgiving. If everyone used social media for the same ends, there wouldn't be much of a point, right? I guess just like any other social construct (democracy, economics, etc) simply by participating in it we take on the responsibility to be mindful of our own needs and tolerant of the needs of others.

I'm going to go find a drumming circle to join. More later.

Cleaning up the existing site

At first I thought we'd be launching the redesign before our SACS review. The powers that be have other plans. At first I was happy to have a few extra months to work and test and debug prior to launch. But then it hit me. I was gonna need to clean up the site content for the review. That means I've got to deal with the pretty much total lack of any real information architecture on the existing site.

As a first pass, I ran a site wide link report. My first step was to get rid of the orphans. I quickly discovered 2 problems with this idea.

  1. The current site uses many JavaScript driven pop up windows using code generated by GoLive. DreamWeaver apparently doesn't know how to check these links. Therefore all such content shows up as orphaned.
  2. Ditto for the Flash stuff. This is more surprising since Flash is also a Macromedia product from back in the day. This means the thousands of photos we have in the various Flash driven galleries all show up as orphans.

The initial report said that out of 24,000+ files, 13,000+ were orphans. More than half the files on the server showed up as not being linked to at all. Of those 24,000-something files, 10,626 were HTML/ASP files. The rest were images, PDFs, and stuff like that. We're now down to a total of 19,957 files, 9,413 of which are HTML/ASP. But 6,138 still show up as orphaned. I bet a few of those really are orphans. Probably no more than 200. And most of those would be images. I'm primarily worried about indexable content that could turn up in a Google search but present horribly outdated information. The trouble there is not all of those files are orphaned. We're still linking to many of them. I guess the next step will be to search for obviously outdated files. Stuff with years in the file names, for example. Then I'll probably need to run another orphan check for freshly orphaned files once that content is cleaned up.

The good news is I've reduced the size of my local directory by 45%, from about 3.4 gigs to 1.9 gigs. The majority of that was the files we're still hosting from last year's CIT conference. But I never need to update those, so there's no need to store them locally. Some of those PowerPoint files got crazy big.

Of course, currently the beta site takes up a total of 339 megs. But it's not quite complete. Still, I'll be surprised if it grows to anywhere near 1.9 gigs before launch. Due to simple changes like getting rid of tables for layout and abandoning the <font> tag in favor of CSS we've shaved about 35k per page. We've also eliminated many pages. The beta site currently contains just 985 PHP files. That's about 10% of the files the current site contains, but we've migrated way more than 10% of the content. One of the big changes in that regard is that we now link to the online catalog for curriculum and course descriptions. There goes 2 pages per degree program plus at least a page per course offered. I think the current site has a lot of redundancy in course description pages among the various program directories. Most of the remaining content will be database driven.

Sunday, December 14, 2008

The Social Side of Social Bookmarking

There's a decent amount of discovery power available through Ma.gnolia as well. I still think Stumble Upon does it better. But it would be silly not to explore the 2nd best tool available for the job. Ironically, by doing so, I was quickly reminded of the problems with default tagging in SU. I pulled up a couple of recent bookmarks from Jeffrey Zeldman.

The first is a blog post from Simon Clayson on feeding IE6 a basic style sheet, using the sort of techniques that were once common for targeting Netscape Navigator 4: serve NN4 a set of specific, dumbed-down styles while protecting its users from that browser's botched implementation of the CSS that was safe to show to less craptastic browsers. Now NN4 is little but a ghost to haunt the nightmares of us old school CSS scribes, and IE6 is the crappiest browser still in common usage. I'll probably spend the rest of December debugging the redesign in IE6. Had I found this idea a year ago, I probably would have served IE6 a very simple style sheet and skipped the debugging. In all honesty, even at this stage it may be less work to implement these ideas rather than try to “fix” IE6.

So anyway, I thought this was a potentially useful technique, so I thumbed it up. This didn't pull up the form for submitting new content to SU, so I knew someone else had already submitted this particular link. This gave me a great opportunity to see what the default tag would be. jackosborne says this page is primarily about “graphic-design”. It deals exclusively with serving specific CSS code targeted at a specific web browser. I can think of at least half a dozen tags more useful for this content than “graphic-design”. But the current SU system gives too much power to the person submitting the content. Jack's actually got more stumbles tagged “web-design” (54) than “graphic-design” (45), but apparently that's due to other people's default tags on the pages he is thumbing up. Looking at his discoveries, he's also submitted this article on Five CSS Design Browser Differences I Can Live With by Andy Clarke and Using jQuery for Background Image Animations as “graphic-design”. Maybe that tagging scheme serves Jack well. But it makes SU virtually worthless for me when it comes to organizing and retrieving the resources I discover through it.

The other page I discovered via Zeldman is Western Civ's guide to CSS browser support. Again, this page deals exclusively with CSS and web browsers, so for my purposes it would be pretty easy to tag. It was submitted by SU user kancerman, 3 years ago. If I'm reading this right, I'm only the 10th person to thumb this up in those 3 years. That could be because it was submitted into the category “internet-tools”. For me, that category is better suited for things like online mortgage calculators or WriteBoard. But due to the way kancerman submitted this page, “internet-tools” is the default tag. Now I can look at his entry for this page and see that his 2nd tag is in fact “CSS”, but since that's the 2nd tag on his entry, it has no bearing on how the page is tagged by default when I thumb it up.

Maybe this problem in the design of SU is worse than I thought. Not only does the default tagging scheme make it harder for me to go back and look up stuff I have previously thumbed up without bothering to write a review and/or manually tag it myself, but it also seems to have a negative impact on the effectiveness of SU as a discovery engine. How many times have I found a page via means other than SU, thumbed it up, didn't see the new content submission form pop up, assumed whoever beat me to the punch on submitting the content at least submitted it properly, and went on my way? How often does the average SU user do that? One thing I've noticed since I started paying attention to the default tagging scheme in SU is how often I see content that is submitted into the wrong category. If I found this content via SU, then I can use the “report last stumble” feature. But that only works if I stumble into something in one of my defined interests that really should be tagged as another of my defined interests. If someone submits a CSS gallery as “photography”, for example. But if I get to that page without being referred there by SU, there's no way for me to bring the miscategorization to the attention of whoever addresses such things. That is most likely to happen if someone submits content that should fall within one of my defined interests as pertaining to a topic of interest that isn't on my list.

Oh look, this jQuery plug-in has been submitted under “alternative-medicine”.

There's no way I can do anything about that. All I can do is tag it properly within my own account. But since very few web designers are going to be stumbling through the alternative medicine category (then again, maybe I assume too much), and very few people looking for alternative medicine information will give a rat's ass about a jQuery plug-in, very few people who care about that content will ever stumble into it. I can't even resubmit it. Once a page is submitted, all I can do is tag and review it myself. In effect, such content is quarantined, cut off from its true target audience. I've got to think there are ways for SU to address this. If Mac OS X can have a pretty effective summarize tool built in, can't a similar algorithm be run against the content of new submissions to SU in an attempt to verify the categorization of that content? Couldn't meta tags, key words, or the sort of tricks search engines use to categorize content be applied? I know these things aren't cheap, but they are possible, and SU has a larger user base than delicious (which may actually be a big part of the problem).
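A naive version of that verification idea is easy to sketch. This is nothing like SU's internals; the keyword lists and category names are hypothetical, and real categorization would need far more signal than keyword counting:

```javascript
// Hypothetical keyword lists per category.
var CATEGORY_KEYWORDS = {
  'web-development': ['jquery', 'javascript', 'css', 'plug-in', 'selector'],
  'alternative-medicine': ['herbal', 'acupuncture', 'homeopathy', 'remedy']
};

// Score each category by how many of its keywords appear in the text,
// and return the best-scoring category (or null if nothing matches).
function suggestCategory(text) {
  var lower = text.toLowerCase();
  var best = null, bestScore = 0;
  for (var cat in CATEGORY_KEYWORDS) {
    var score = CATEGORY_KEYWORDS[cat].filter(function (kw) {
      return lower.indexOf(kw) !== -1;
    }).length;
    if (score > bestScore) { bestScore = score; best = cat; }
  }
  return best;
}

// Flag submissions whose text disagrees with the submitter's category.
function looksMiscategorized(text, submittedCategory) {
  var suggested = suggestCategory(text);
  return suggested !== null && suggested !== submittedCategory;
}
```

Even something this crude would flag a jQuery plug-in filed under “alternative-medicine” for human review.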

Friday, December 12, 2008

Getting organized with tags and Ma.gnolia

I've been mulling over how to approach planning out my tags for using Ma.gnolia (M) to store and organize the work related resources I find via Stumble Upon (SU). At first I thought about a matrix of some sort allowing me to drill down within a topic. But how would I share that here? An HTML table won't work without getting really nasty with the colspan attribute. Mind mapping software could work, but the image it would produce would be far too huge to post here. I even tried to think of it in terms of XML and custom Doctypes where I could use “web design” like a root element and have a bunch of descendant tags from there. That led me to two realizations:

  1. Only someone at least as geeky as me would have any chance of understanding such a system
  2. I was ultimately still thinking in hierarchies

The philosophy behind tagging is to address areas where hierarchies break down. For example, as I explored such a matrix, I found myself coming up with compound tags like “javascript-framework” and “php-framework”. Part of the reason for this line of thinking is the structure of SU, where such compound tags would be useful. But M allows searching and sorting based on a combination of tags. Resources on Cake PHP, Zend, Ruby on Rails, and jQuery could all take the tag “framework”. I could also tag such resources with their associated language: “php”, “php”, “ruby”, and “javascript” respectively.
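That combination search boils down to tag intersection. A minimal sketch (the bookmark data is made up for illustration):

```javascript
// Each bookmark carries a flat set of tags; combining tags at query
// time replaces compound tags like "php-framework".
var bookmarks = [
  { title: 'CakePHP',       tags: ['php', 'framework'] },
  { title: 'Ruby on Rails', tags: ['ruby', 'framework'] },
  { title: 'jQuery',        tags: ['js', 'framework'] },
  { title: 'A List Apart',  tags: ['css', 'article'] }
];

// Return the bookmarks that carry every tag in the query.
function withTags(items, tags) {
  return items.filter(function (item) {
    return tags.every(function (t) { return item.tags.indexOf(t) !== -1; });
  });
}
```

Querying `['php', 'framework']` pulls back only the PHP frameworks, while `['framework']` alone pulls back all of them, which is exactly what the compound tags were trying (and failing) to do.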

Simply typing this out here has led to another discovery. Typing out “javascript” sucks. I don't plan on tagging stuff with “cascading style sheets” either, so I'll steal an idea I've been using in my directory structures for years and shorten “javascript” to “js”. That will be clear to me and should be clear to any other web professional who happens to browse through my links in Ma.gnolia. In theory, that should also apply to anyone taking over this position in the future.

I also realized that trying to use a root tag such as “web design” is short sighted. Not everything I do is related to design. Some of the server administration or SEO stuff I do would stretch the common definition of design.

Leaving behind the idea of a hierarchy should also allow me more freedom for future growth. If I'm simply tagging resources with an associated language (“css”, “js”, “php”) it becomes much easier to just add a new tag for any new languages I start to use or learn (“perl”, “ruby”, “lolcode”). This industry evolves so quickly, that sort of scalability will likely pay off in ways I currently can't even predict. Five years from now my job may require as much understanding of psychology as it currently does programming and design. I'm pretty sure we're one high profile lawsuit away from moving a solid understanding of the legal implications of accessibility from the “recommended skills” to the “required skills” section of job descriptions such as mine.

I'm going to give some thought to classification of tags I will need. Languages are an obvious example of what I'm talking about. Maybe I should add a class of tags for software: Dreamweaver, Photoshop, etc. I try to keep what I do tool-neutral, but I'd be lying if I said I never use Photoshop tutorials from the internet. If the Adobe Creative Suite were not already bought and paid for, I could get by using free text editors and open source programs like the GIMP. But the reality of my job is that I spend a lot of time working with Adobe software, so it's probably a good idea to reflect that in my tag structure. I'll also put some thought towards the need for consistency. Luckily, Ma.gnolia offers an auto-complete function based on previously used tags. SU offers no such feature. I just realized Blogger has the same sort of auto-complete function in the tag field for my blog entries. That should at least keep me from forking my tags due to a common misspelling or something silly like that. I'll continue to think things over and share my thoughts over the weekend.

Thursday, December 11, 2008

Does anyone actually read this?

I've been emailing some of my designer/developer heroes asking for opinions on CSS frameworks. Not sure if those efforts will bear any fruit, but in the past even the rock stars of the industry have been pretty approachable.

I thought I might as well try to get the conversation going here as well. (Assuming anyone is actually out there with whom to converse.)

My thoughts on what I've seen so far have not been good. Even Google, who usually awakens my inner fan boy, seems to miss the boat on this one. (Just got confirmation from John Resig [I told you even the rock stars are approachable] that Blueprint is hosted by Google Code but isn't itself a Google product. Apparently I'm not alone in my confusion. Still, it's the de facto king of CSS frameworks so I'll continue to pick on it anyway.) Blueprint strikes me as a tangled mess of presentational class names. Actually, forms.css isn't so bad. It's got .title, .text, .error, .notice, and .success. Those classes make semantic sense. I can see myself actually using the core ideas behind that style sheet if not the actual framework code itself. And it's things like that which give me hope for the general idea of a CSS framework. typography.css starts to get a bit more iffy. As much as I hate the idea of classes like .left and .right for floating images within text, I have to admit I often use them myself. So I can't point too many fingers there. Classes like .added and .removed have obvious uses for AJAX-y goodness. And I can see uses for classes such as .first and .last assuming they are applied dynamically (by jQuery, for example). I can even imagine common uses for classes like .small & .large and .quiet & .loud, but those start to fall outside my personal comfort level when it comes to putting non-semantic, presentational classes into my markup.
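Applying `.first` and `.last` dynamically is a one-liner in jQuery (something like `$('li:first-child').addClass('first')`), and the underlying idea is simple enough to sketch in plain JS. Here list items are modeled as bare objects with a `className`, purely so the sketch runs outside a browser:

```javascript
// Tag the first and last items in a list so CSS can target them
// without hand-editing the markup on every page.
function markFirstAndLast(items) {
  if (items.length === 0) return items;
  items[0].className += ' first';
  items[items.length - 1].className += ' last';
  return items;
}
```

Because the classes are added at runtime rather than baked into the markup, they stay out of the semantic-markup argument entirely.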

The deal killer for me is grid.css, which is really the main selling point of the idea of a CSS framework. Almost everything in that style sheet is purely presentational.

Let's say I'm building a site using Blueprint to do a basic 3 column layout. I start out with 6-12-6 and put the needed classes into the markup to accomplish that. I build a couple hundred pages and put it into production. A few months later I realize that the sort of content I've got going down the right hand side (AdSense, social bookmarking feeds, etc) is not coequal with the primary navigation in the left column and I want to shift things a bit to, say, 7-13-4. Now I have to alter class names on a few hundred pages worth of markup rather than making these presentational edits to the CSS where they really belong.

“But you could easily do that with a find/replace. Quit yer bitchin'.”

Ok, that's true. I see your find/replace and raise you a redesign requiring a horizontal primary navigation menu. If you're keeping your markup all Zen-like, you edit one file and the changes magically appear site wide. You might still conceivably be able to pull off such a change with a few find/replace operations using Blueprint (or any other grid based CSS framework using presentational class names, I'm using Blueprint as an example but just about everything I've seen suffers the same flaws). But it would take some pretty complex work. And even running find/replace across hundreds of pages of markup is more work than changing a couple of rules in a centrally located CSS file.

But this is a lot like how I felt about js frameworks vs. rolling my own unobtrusive DOM scripting before I found jQuery. I knew there was potential there, but I just wasn't seeing enough of a trade off in benefit offered in exchange for giving up so much control of my code. It helped that jQuery applies CSS-style selector logic to scripting and thus meshes well with the way my brain already thinks about these things. I imagine if I had both the time and talent required to build my own js framework, it would end up working a lot like jQuery. I have yet to experience that with a CSS framework.

Exploring the Potential

I haven't taken an in depth look at everything yet, but there are two CSS frameworks that managed to not send me running away screaming after mere moments of peeking at the source code of the available demos.

  1. Boilerplate
  2. Content with Style

The language used to introduce the concept on the home page for Boilerplate is enough to get me findin' religion.

As one of the original authors of Blueprint CSS I've decided to re-factor my ideas into a stripped down framework which provides the bare essentials to begin any project. This framework will be lite and strive not to suggest un-semantic naming conventions. You're the designer and your craft is important.

If you prefer:

 { float: left; width: 240px; margin-right: 110px; }

over

class="column span-2 append-1"

then you're in the right place my friend.

Yes! But wait, it's only at version number 0.3 and it looks like that version is nearly a year old. Is my best lead effectively abandonware? That makes me a sad panda. Still, it may give me a good place from which to start future projects. Let's download this bad boy and take a look under the hood.

Not significantly different from Blueprint. I see an almost identical typography.css file, down to the somewhat uncomfortable .small & .large. There's also .quiet, but no .loud. I'm also seeing a pretty basic reset.css (I personally prefer a slightly modified version of Eric Meyer's Reset Reloaded), a small collection of basic IE hacks in ie.css, and a few form styles that aren't really enough to save me significant time vs. writing my own from scratch or cannibalizing my own back catalog.

While Boilerplate avoids going deeply enough into the realm of presentational class names to catch my ire, I'm not seeing much in the way of benefit here either. At least with Blueprint I could wireframe something up in a day to show to a client, even if it's not the sort of thing I would want to put into production. I see nothing mind blowing here. Maybe if it ever hits version 1.0 I'll take another look. :(

Moving on to Content with Style. Yes, this is better. But it seems more like a philosophy than a true framework. Maybe I'll feel differently after checking out the source code. This also hasn't been updated in just over a year (and then only once, and that wasn't really a change to the code at all, just a license addition). That also contributes to an overall feeling that I'm better off applying the underlying ideas to my own design work rather than using these exact files. But that's pretty much how I feel about Blueprint. I can figure out which classes would apply to a given element in the design I aim to achieve, and rather than apply those class names to the element, I can simply apply the styles for those classes to whatever semantic class or ID I use for that element. The philosophy is sound. It's just the execution that leaves a bad taste in my mouth.
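To make that reuse idea concrete: instead of putting a framework's presentational class names in the markup, you copy the rules behind those classes onto your own semantic hook. A minimal sketch (the selector name and measurements here are invented for illustration, not taken from any particular framework):

```css
/* Framework way: presentational names live in the markup.
   <div class="column span-4 append-1">...</div> */

/* Semantic way: the same rules, applied to a name that describes
   the content rather than the layout. */
.news-sidebar {
  float: left;
  width: 160px;
  margin-right: 10px;
}
```

The markup then only needs `<div class="news-sidebar">`, and a later redesign can change the layout without touching the HTML.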

Looking at the code feels a lot like looking at my own code. But rather than feeling like the epiphany behind jQuery, this just feels like a slightly more systematic method of reusing the code snippets I've been using for years. It's far from bad. It's quite good in fact. But it's nothing I couldn't do myself. And by doing it myself it will mesh better with the way I approach things and thus be more usable and useful.

Why I Care

My biggest concern is that those currently learning CSS will fall back on these frameworks to save time for things like class projects or one-off work for portfolio fodder. In those situations, giving up the separation of content and style to gain a bit of agility is probably justified. But I fear that in the process the vast majority of the next generation of web developers will miss out on the philosophical core of web standards. I think it could set the industry back, right at the time when I was finally starting to feel that education (both formal and informal) was starting to get things right in this field.

Or maybe I'm Chicken Little and the sky isn't really falling.

Wednesday, December 10, 2008

My personal knowledge management problem

Full Disclosure:

This post and probably a couple of future posts will serve to fulfill a requirement in my graduate course on knowledge management. But I'm trying hard to approach this in such a way that such requirements are totally transparent aside from this note. Maybe I'll pull it off and this will bear some interest for folks other than my prof. Or maybe I'll totally drop the ball and not engage anyone with this content and totally screw up the assignment to boot. If so, maybe I'll at least fail spectacularly enough to get some good schadenfreude going.

I tried going through the exercises Kirby put together. The basic goal is to figure out which tasks I perform as a knowledge worker bring the most value to my organization, then figure out how much of my time I spend on those tasks vs. less valuable tasks, then try to maximize the time I can devote to the valuable stuff and minimize the time wasted (although that's a slightly harsher term than it needs to be in this context) on less valuable tasks.

I'll be honest, I don't think those exercises work for me right now. There are a couple of reasons for this.

  1. I'm 18 months into this job, and the task that has dominated my time thus far, redesigning the website (we launched the beta, by the way; I don't think I took the time to announce that officially here, although I did on the Vol State blog), is not typical of the work someone in this position would be doing otherwise. Once the redesign launches, the way I work will shift, rather radically. It's hard, if not impossible, for me to look at the last 18 months and make conjectures about the next 18 months.
  2. The biggest drain on my productivity falls outside my realm of influence; i.e., it's a trend I am powerless to address. I won't go into detail here, but Kirby, if you want specifics just email me and I'll fill you in.

So I've been looking at the way Kirby breaks down his model of personal knowledge management and one area I see a lot of room for improvement in the way I currently handle things is with information organization and retrieval. The sad part is I've been sitting on the tools to address this issue for years. I just need to be mindful of how I use them.

I signed up for a Ma.gnolia account back when they were still in beta. I used it for a while, then I found StumbleUpon (hereafter: SU). My thought at the time was that SU did all the social bookmarking stuff I had been using Ma.gnolia for, with the added element of discovering new content at the push of a button. That's true in theory. Two years later, it's obvious that it falls apart in practice.

SU does the whole discovery thing very well. I don't think I ever would have found jQuery without SU. I had been underwhelmed by the JavaScript libraries I had seen during the first few months of that whole buzz and had pretty much written off the whole idea. I was just gonna stick to writing my own custom unobtrusive JavaScript using the Document Object Model. Now I literally use jQuery every day. My job would not be the same without it.

But I discovered jQuery at a time when I actually had the time to take on the learning curve, as gentle as it may be. The official documentation is complete enough that managing access to the information I needed to direct my own learning wasn't an issue either. The only exception I could find would be the plug-ins, but truth be told, if a plug-in doesn't make intuitive sense and isn't well documented, I don't use it.

Compare that to some things I hope to learn more about in the near future, such as Drupal and CakePHP or Perl. Or even compare it to some of the stuff I'm already using but need to reference source material for rather than working off the top of my head, like regular expressions and PEAR or Active Directory. Now we're talking about steeper learning curves just as what little time I have for learning new skills is shaved away as I try to push the redesign through the beta testing phase and into launch. I keep stumbling onto sources for these topics, but lacking the time to fully digest them, I thumb them up and move on.

Ok, that last statement raises the question: if I don't have time to digest this stuff, how do I have time to keep stumbling onto new content? First of all, SU is addictive. On top of that, it's so easy to just click the stumble button (with or without specifying a topic to stumble through, such as web design) that I can click through a fresh page or two while I'm checking in the files I just finished working on in Dreamweaver (hereafter: DW). Or while I wait for DW to generate the broken link reports I've been running lately. Actually, now that I bring that up, I really hope DW performs better on the redesigned site. The current site is such a mess of spaghetti code that DW is prone to take its sweet time or even crash when I ask it to perform a site-wide action. The redesign is much leaner. Based on the work done so far, the code we shave off should be equivalent to about 68 copies of the complete works of Shakespeare. No, really. Project Gutenberg has the complete works of Shakespeare as a plain text file. I've done the math. :)

This is where the problem comes in. I find these great resources, or at least potentially great ones, but going back to find them later gets to be a real pain. Sometimes I don't take the time to write my own tags for a page. I just thumb it up and switch back to DW or click the stumble button again. But since SU is socially driven, the tags default to the category chosen by the person who submitted the site. I currently have 143 stumbles tagged with “graphic-design”. I'm not a graphic designer. I don't really even consider myself a web designer. If you want to split hairs, I consider myself more of a web developer. When I tag an article relating to design, I use “web-design”. I've got 418 of those. But it's possible I didn't personally tag all of them. If the person submitting one tagged it as “web-design” and I just thumbed it up and moved on without bothering to apply my own tags, then that's how it would default. It's obvious that graphic design is a pretty popular tag in the wild, and the zeitgeist is polluting my tag cloud.

An Example

Over a year ago (November 1st of 2007, according to my SU history), I stumbled upon Scott Jehl's StyleMap script. At the time I thought, “This is how we need to do the org charts.” Previously we had done the org charts in Microsoft Visio, and those files were exported to HTML. But that produces a tangled mess of frames and images. It's hard to navigate, hard to maintain, and doesn't even work on my Mac (thanks, Microsoft). I thumbed it up and moved on.

In October of this year, I finally turned my attention to the org charts for the redesign. I remembered stumbling upon this script a long time ago that would be perfect. But I couldn't find it in my SU history. I tried Google searching every combination I could think of. I literally wasted an entire day trying to find this script.

The problem was the default tag the page was assigned had nothing to do with how I conceptualized the content of the page. I don't even remember what it was now and I have since gone back and edited the entry with my own tags. Google wasn't working because I had forgotten that it was written as a script to do site maps rather than org charts. To add insult to injury, when I finally dug up the article and tried to put the script into use, our org chart proved to be way too complex. But I could have discovered that in an hour had I not wasted an entire day (and part of the following morning) digging up the script.

The Problem

Partly due to flaws in the way I use it, and partly due to flaws in the way it's designed, SU is failing me as a means of efficient information organization and retrieval. In defense of the development team behind SU, it is designed more as a discovery engine than as an organization tool. And I can't sing enough praises as to how well it performs its core function.

The Solution?

So I turn my attention to my neglected Ma.gnolia account. If I start using both of these tools to perform the tasks for which they were designed, and approach my use of them in a mindful way, I think I can milk a lot more productivity out of my days. I'll map out that plan in a future entry. Stay tuned.

Friday, December 05, 2008

When it rains it pours

I've been getting caught up on my blog reading, which always gets me thinking. Once my comments on the entries of others grow longer than the entries to which I'm replying, I figure I'm better off putting them here.

I have both a 2 year and a 4 year degree in web design. As in, the word “web” is actually printed somewhere on my physical degrees. One says “Web Development Technology” and the other says “Web Design”. I got a lot out of my education because the timing and my mindset were right for it. I had a few years of field work under my belt, so even when the course work did a crappy job of tying the concepts to real world examples I was able to fill in those gaps.

The downside is that at the undergraduate level, classes have to assume no prior knowledge on the part of the student, at least nothing above and beyond what is taught in high school. In other words, the first few classes assume you can type, and that's about it. The problem there is that some of the most promising students will be turned off early in such a program, say to hell with it, and go get a job with the basic skills they already have. The other edge of that particular sword is that you quickly go from the early hand-holding courses to being expected to understand stuff like the OSI model or object-oriented programming in detail. If I were playing a video game with a learning curve that steep, controllers would be bouncing off walls, likely in multiple pieces. But my prior experiences made things more manageable, and I was mentally prepared for the challenges that came up.

A lot of people would not be. That doesn't make them bad people or somehow less intelligent than me. If anything it indicates that the traditional academic model is poorly suited to cover highly technical disciplines like web design.

I say the timing was right because the 2 year program had just entered a stage of growth and was updating its curriculum to be more relevant. The first semester I was in that program, the textbooks and course materials were very 1996. Browser sniffing, tables for layout, optimizing animated GIFs; scary stuff. By the time I graduated they were offering classes on CSS and XML. Still far from cutting edge, but they managed to advance their curriculum about 7-8 years in the 3 semesters I was there. The problem is that the nature of most colleges and universities makes it very hard to make such changes more often than once a decade or so.

The 4 year program was brand new. I'm the 6th person to graduate with that degree. Since it was so new, the curriculum had not yet had time to get stale. The director of the program was also both knowledgeable and passionate, and he personally taught most of the core classes in the program.

In a single lecture we might start out talking about the Iliad and how it's an example of an epic, then move into how the structure of an epic is guided by oral traditions. And just when you're starting to wonder why the hell we're talking about this stuff in a class on web design, we shift to talking about how communication on the web shares a lot of characteristics with oral communication (you and I are having a type of conversation right now). From there we talk about how the structure of an epic poem can be seen as an early prototype of hypertext, with the various ways the narrative loops back on itself and includes passing references to other epics from the same canon. And I use phrases like "we talk" for a reason; the courses were very much structured around discussion and teamwork. I learned more from my fellow students than from the professor, and that was by design. My current graduate program hasn't managed to pull together ideas as seemingly disparate in 2 years as Bob routinely did over the course of an hour.

But literally the day I graduated the director of the program packed his bags and drove several hours north to become the new Director of Graduate Studies at Empire State College in Saratoga Springs, NY. Without his leadership, I don't know how well the content of the courses will be kept up to date or how well the structure of the courses will translate to new professors covering that material. I'm very lucky to have gone through the program when I did.

Antique Browser

What reminded me to come back and complete that last post is a response I wrote on yet another design blog. The thoughts that led to that reply were obviously inspired by the ideas that have been kicking around in my head for the past month while my last post was locked in draft stasis. I'll go ahead and share those thoughts here as well, slightly more detailed than in my reply to Raymond's entry on hating IE6 (I took the time to do the math).

I was thinking just last night that an appropriate metaphor to explain to non-techies why IE6 sucks would be to compare it to an antique car.

The Model T was not the first car, and Mosaic was not the first browser, but both brought those particular products into the mainstream and birthed industries to support them. So we'll use those as our anchor points for comparison.

The first Model T rolled off the assembly line in 1908. So the automobile as we know it is 100 years old this year. Mosaic was released on April 22, 1993. Today being December 5th, 2008, that makes the modern web browser 15.6226243 years old. IE6 came out August 27, 2001, so it is 7.274739 years old. That means IE6 is 46.5654096% as old as the modern web.
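For the curious, that math is nothing fancier than date subtraction. A quick sketch in JavaScript (the dates come straight from the paragraph above; the average-year constant is the usual 365.25-day approximation):

```javascript
// Milliseconds in an average year (365.25 days absorbs leap years well enough).
var MS_PER_YEAR = 365.25 * 24 * 60 * 60 * 1000;

function yearsBetween(from, to) {
  return (to - from) / MS_PER_YEAR;
}

var today = new Date(2008, 11, 5); // December 5, 2008 (months are 0-indexed)

// Mosaic released April 22, 1993; IE6 released August 27, 2001.
var webAge = yearsBetween(new Date(1993, 3, 22), today);
var ie6Age = yearsBetween(new Date(2001, 7, 27), today);

console.log(webAge.toFixed(2)); // ~15.62
console.log(ie6Age.toFixed(2)); // ~7.27
console.log((ie6Age / webAge * 100).toFixed(1) + "%"); // ~46.6%
```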

If you were driving a car manufactured in 1963, would you expect it to meet modern safety standards? Would you expect it to pass modern emissions testing? You couldn’t use a modern gas pump without an artificial additive to replace the lead that was commonly added to fuels 46 years ago. There’s a reason most people who own antique cars drive them only to auto shows or for the occasional pleasure cruise and don’t use them for everyday driving. If you drove such a car the typical 1,000 miles or so a month, you would expect to pay much more in maintenance costs than someone driving a newer model.

The primary appeal of an antique car is the cool factor. They offer styles not available in the modern market and you just look cooler behind the wheel of a cherry antique than just about any modern automobile. No one is impressed by your antique browser. So why are you still using it?

Thursday, November 13, 2008

Charging more to support craptastic and/or old browsers

Good God. I literally started writing this entry a month ago, got busy, and left it suspended in draft mode for ages. Let me brush off the dust and get this bad boy ready for publication...

This started as a reply to a post from Philip Beel, which was in turn explaining his thoughts in response to item #10 on Ten Web Development Tips I Wish I'd Known Two Years Ago over at hackification. By the time my comment hit its 6th paragraph, I figured I'd be better off just putting my thoughts into my own blog post. Ain't the web grand?

Philip says in part:

Consider that you were sold a car, but told you could only drive it on motorways, as the wheels may fall off if you drove it on any other roads.

But I think that's the wrong way to look at the situation. My thinking is a bit closer to the hackification article, which states in part:

Explain to the client that since older browsers work in a different way to modern ones (which they do), that’s extra work and hence extra cost. (I’d suggest approximately 10% on top). Explain that IE6 has approximately 25-30% market share, and let them make the cost/benefit call. Money has an amazing focusing effect: it forces people to really think about what they want and need. (emphasis added)

As designers, we're concerned with stuff like user experience, so we tend to think of things in terms of the end user/consumer. But our clients are often business people, so I think it's more effective to talk to them in the language of business people.

I think the “buying a car that only works on motorways” example wouldn't work well on most clients. The situation is more akin to opening a new convenience store 7 years after the last gasoline powered car has rolled off the assembly line. Would you want to take on the cost of installing and maintaining a gas pump for a market segment that is no longer in the majority and will be forever dwindling? Even installing just a single pump alongside the hydrogen system (or quick charge stations or whatever the new technology happens to be) would require redoing a lot of the same type of work. The systems aren't interchangeable or compatible. There would be redundant systems; more things to build and eventually need repair.

I think most clients would understand that analogy. It probably would not be hard to come up with one more targeted towards their own business model.

But another problem is that consumers are better informed as to the cars they drive than the browsers they use. The owner and employees of the hypothetical fueling station above would probably never have to deal with an irate customer who doesn't understand why the hydrogen hose or quick charge plug won't fit into his gas tank. Sites like Browse Happy can come in handy for educating consumers, but the kind of people who are the most problematic (don't even realize browsers and other software are entities separate from “the computer”, never update whatever shipped with their operating system, etc) are the least likely to go there or truly understand the content once they get there. So the ethical considerations are still not dodged.

But I think the ethical issues are drastically different from the issues involved in getting clients to understand the cost associated with “supporting” old browsers. And a lot depends on how you define “support” as well.

If you want the 0.1% (or hopefully even lower than that) of folks still surfing on Netscape 4 to be able to use some Ajax based enhancement, that's gonna take a heck of a lot more work. What's the return on investment of that work? Expecting the 21.53% (and dropping) of IE6 users to see rounded corners exactly like what someone on Safari 3 sees is like expecting a 1993 Geo Metro to handle and perform just like a Tesla Roadster.
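The rounded-corners point is worth making concrete. In 2008 terms that means the vendor-prefixed corner properties (the selector and values below are made up for illustration): capable browsers draw the rounding, and IE6 simply ignores declarations it doesn't understand and renders an ordinary square box. Nothing breaks; the experience just degrades gracefully.

```css
/* Browsers that understand the prefixed properties round the corners;
   IE6 skips the unknown declarations and shows square corners. */
.callout {
  border: 1px solid #999;
  -moz-border-radius: 8px;    /* Gecko (Firefox) */
  -webkit-border-radius: 8px; /* WebKit (Safari 3, Chrome) */
}
```

That, to me, is exactly the kind of "acceptable difference" a client can sign off on without paying for pixel parity.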

In my current work with jQuery, I'm noticing drastic performance differences even between IE6 and IE7. The features work in IE6, but they're not as smooth or as quick. With the move toward just-in-time script execution in Google Chrome and future browsers, such performance differences will become even more noticeable. So what's the acceptable level of difference in appearance and performance from browser to browser? That will change from project to project, but the price tag for the work we do needs to also change to reflect the shift in work load (be it up or down).

Tuesday, September 23, 2008

Let's hear it for the girls

In the past 10 issues of A List Apart, 17 authors have been featured (not counting 2 articles penned by “ALA Staff”). Of these, 8 have been women. I'm not sure if that's due to a conscious effort on the part of ALA to pick up the slack on the (perceived) gender bias in the industry or not. Either way, I think it's pretty cool. I know some of these ladies better than others, but they seem just as knowledgeable and respectable as their male counterparts, and I'm glad to see them getting some exposure.

Not like that. Get your mind out of the gutter. And hand mine up to me while you're down there.

In mildly related news, Happy Cog, the design firm founded in part by ALA founder Jeffrey Zeldman, recently launched a redesign for Books-A-Million. I was a lowly cashier / customer service agent at a Books-A-Million store for about a year following my marriage and move to Knoxville (now one of my least favorite places on Earth). This was 2001-2002, and even by the standards of the day their site had some issues, particularly in usability. At the time, it seemed the corporate culture was pretty clueless about technology. Everything from the way inventory was managed to the point of sale system to the tech books we carried on the shelves seemed to showcase this. Maybe that culture has shifted in the past 7 years. Then again, somewhere between delivery (at which point design studios lose control of their creations) and launch on the BAM servers, over 800 markup errors have cropped up in the home page alone. So maybe they're still just throwing money at their problems without taking the time to understand them. However, Happy Cog's case study of the work makes it sound like BAM had a lot more of the legwork done than most clients would on a project like this. Obviously a few people in the organization “get it”. Here's hoping they can lead the company in the proper direction.

Friday, September 12, 2008

The joys of debugging

Looking at other people's code can be enlightening. I've been looking for an excuse to try out Benjamin Keen's smartlist plugin for jQuery for a while now. (The jQuery website was recently redesigned and is even more awesome now.)

One of the most important questions a potential student may hope to answer on our website is “Can I get the degree I want?” Actually, it's probably a bit more abstract than that. I know when I evaluate schools, from my own time at a community college to my undergrad days to my current graduate program, I care less about the degree than I do about the topics covered and the skills developed. I care about the topics because if I'm going to devote a couple years of my life to a program, I want to make sure it will hold my interest. The skills are important because ultimately that's what gives me an edge in my chosen job market. I doubt anyone surfs over to our website specifically looking for a degree in biotechnology, but s/he may have an interest in chemistry and biology already and may be considering a career path where such a degree offers a competitive edge.

In its simplest form, this question could be written as “What do you have to offer me?” (The only question more important than this one is probably “How much will it cost me?”) Each student evaluates that question through criteria specific to his or her needs, wants, and experiences. The form of this evaluation follows traditional models of interpersonal communication (that is, a spoken conversation) more so than print materials. We really need a new model for a new communications medium, but I'll leave that for this generation's Peirce or McLuhan or Leary. (No, really. Leary wrote and lectured on cyberspace and virtual reality after the whole LSD thing blew over.) In the meantime, the idea of a conversation fits better than the idea of a book or newspaper article.

If you've taken a look at the smartlist demo, you may see where I'm going with this.

The ability to tag (or flag, as Ben calls it) the items in a smartlist can foster the sort of back and forth communication that the web thrives on. This is something that will need a lot of testing, of course, because the initial set of tags may not fit the words in the minds of our students. There's also a careful line to walk between providing enough tags to make the ability to sort through the data useful vs. introducing a new flavor of information overload. It's very experimental, but I think it can be done right and I think it can be a great help to our students, both current and potential.

So yesterday I actually started building this new application for the redesigned site (still only available via IP from on campus, I'll link it here once we publicly release the beta). The first thing I notice is the demo uses a table. I furrow my brow, thinking a table isn't necessarily the best way to approach this as far as semantics go. I look a bit closer and realize the plugin is written in such a way to allow me to use any HTML tags I want. I just need to apply the right classes to those elements so the script knows what everything means. Ideally, I'd like to use a definition list. But that would require me to wrap each <dt> / <dd> set in a <div>, since the script expects the list of tags (or flags) to be a child element of (that is, contained completely inside) the element defined as an “item”. I don't think a <dl> likes being filled with block-level elements like <div>s between itself and its children. I could do a definition list per item, but that seems like overkill. Eventually, I settled on an unordered list with a few paragraph tags inside the list items and appropriate classes scattered throughout.

I build enough of the list that I feel it's testable. In hindsight, I should have built the smallest possible test case, something like 3 items and 4 tags. Everything worked fine as long as you clicked links. Pagination worked. Filtering by clicking on the tags worked. Resetting the list by choosing the default option on the drop-down list also worked. But selecting a tag from the drop-down resulted in all items being hidden. Strangely, the pagination still worked. In other words, if the list is showing 10 items per page and you select a tag with 25 items, you get the links for 3 pages just like you should. But all 3 pages are blank. I run into this particular error around lunch time yesterday. I pore over the code for about 3 hours looking for differences between my local, broken code and the functioning code published with the demo. It appears to be exactly the same, and it should be; I downloaded it from the source, right?

Eventually, I check the jQuery code on the demo. That's version 1.2.3. I'm running 1.2.6 (actually just one revision removed, since 1.2.4 and 1.2.5 were skipped for various reasons). I ran into a similar problem with a plug-in when 1.2.3 was new. Back then I found a page in the documentation that explained exactly which functions were changed and how. That made it very easy to discover a couple of functions were being called that were no longer available. I replaced those 2 functions with the new single function that supersedes them in the new version and was back up and running in less than an hour. This time around, the documentation on the changes (short version, full version) between versions was less helpful. :(

But at least now I know it's a problem due to changes in the most recent version of jQuery. Now I feel justified in contacting the author directly. I use the form on Ben's site to contact him, explaining what I've found so far so at least he knows I tried to fix the problem myself, and ask if he has put any work into updating the plugin that just hasn't been published to the site yet. I go back to troubleshooting for about half an hour before I bother checking for a reply. Turns out that was a mistake, because he replied almost immediately. He said I was the 3rd person to write him about the same issue, and it seems to be a change to the inArray() function, but he hasn't tracked down the exact nature yet.

I start to reply, saying that doesn't seem to make sense, because I had just read over that bit of code in the half hour since I first wrote him, and the inArray() function is called the same way no matter which event triggers the action. Since it works fine when clicking a link but fails when selecting from the drop-down menu, the problem has to be related to the triggering event. But the only place I can find where that makes a difference is the way a specific variable is set. I checked the state of that variable after it's set using either method, and it contains the proper value. These are the thoughts going through my head as I begin to type my reply, but I don't quite make it that far. If you've got any experience debugging JavaScript or any other loosely typed language, you may have figured this out already too.

The problem was the variable was a number when set by the click event, and a string when set from the drop down menu. The default value worked because it's a string rather than an index number.

You know what? I just realized that the fix I made last night introduces a new bug. I was just casting the value via the Number() function, but that screws up the default string value. Crap. So the full fix is an if statement checking for the default value and casting if it's not found. I'm off to make that change and then hopefully I'll be done.
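Stripped of the plugin specifics, the whole saga boils down to a few lines. The names below are illustrative, not Ben's actual code; the point is that jQuery's inArray() (like indexOf) compares with strict equality, so the string "2" coming out of a drop-down never matches the number 2 set by a click handler, and the guarded cast fixes that without clobbering the string default:

```javascript
// A flag list mixing the string default with numeric indexes,
// mimicking the scenario described above (names are made up).
var flags = ["default", 0, 1, 2];

function normalizeFlag(value) {
  // The full fix: leave the string default alone,
  // cast everything else to a number before comparing.
  if (value !== "default") {
    value = Number(value);
  }
  return value;
}

// Strict comparison means a string from a <select> never matches a number:
console.log(flags.indexOf("2"));                      // -1 (the bug)
console.log(flags.indexOf(normalizeFlag("2")));       //  3 (the fix)
console.log(flags.indexOf(normalizeFlag("default"))); //  0 (default intact)
```

Casting unconditionally, like I did last night, turns "default" into NaN, which is why the if statement has to be there.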

Look for updated code to hit Ben's site in the not too distant future. :)

Thursday, July 17, 2008

One Week Only!

Do you like...

  • Neil Patrick Harris?
  • Nathan Fillion?
  • Joss Whedon?
  • comic books?
  • musicals?

Personally, I hate musicals, but I still love Dr. Horrible's Sing Along Blog. Maybe you will too.

But you'd better hurry, it's only available online (for free) through Sunday.

Monday, June 23, 2008

Google is so awesome

Seriously. I keep waiting for the other shoe to drop. How can a company be this cool, constantly, at no immediate cost to me? Have you seen the Google AJAX Libraries API? Nothing I can say here can do it justice. Just follow the link. I'm gonna try using this to load jQuery in the redesign.
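For anyone curious what using it looks like, the basic pattern in Google's docs is a loader script plus a google.load() call; the version string below is just an example, not necessarily what I'll ship:

```html
<!-- Google's loader script -->
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
  // Ask Google's servers for jQuery; the version string picks the release.
  google.load("jquery", "1.2.6");
  google.setOnLoadCallback(function () {
    // Safe to start using $() from here on.
  });
</script>
```

The appeal is that visitors may already have the file cached from some other site using the same URL, which is bandwidth I don't have to serve.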

Thursday, June 05, 2008


Ok, I make an honest effort to keep this blog as positive as possible since it's my professional / educational blog. I have other haunts I frequent in order to discuss things like sex, politics, and religion. Here, I try to stick to the sort of thing that I don't mind someone reading before they call me in for a job interview (or choose not to call me). But sometimes things happen that are related to the topics I try to cover here that I feel justified in sharing. Today, such a thing happened. I got to see some of my previous work butchered.

This has actually happened before, on the same project. Of course, I should know once the product is delivered to the client all bets are off. I should know better than to even go look. It can't possibly end positively. But like rubberneckers at a train wreck, I just keep going back to look again.

Let me give you a little background. When I went back to TTU for my degree in web design (oh crap, that page has been ruined since I left too), I had a job at The Technology Institute. I don't think they do it anymore, but back then they'd actually take on programming projects either for the TTU campus or for our sister schools or for the Tennessee Board of Regents. TBR governs both the 4 year state schools like TTU and the community colleges, like Vol State, where I currently work. They also just launched a redesign, and it's decent. A vast improvement over their old site. I don't care for nested drop down menus though...usability and accessibility nightmare.

When I first started the job, we had one major project that had been in process for nearly a year. It was for TBR. The idea was sound. They wanted a site where prospective students could research what degrees are available at TBR schools and what jobs are associated with those degrees. So if you know which school you want to go to, you can browse their degrees and plan for your future career based on that. Or, if you have a career in mind but don't know which schools you should apply to, you can navigate the system based on jobs and work in the other direction. Or, if you just know what you want to major in, you can find schools and / or future careers. It's a pretty simple relational database issue and it makes intuitive sense to the target audience.

When I came on to the project:

  • The database was built but sparsely populated.
  • Accounts were functional, but most were inactive.
  • Account management and data entry forms were built and functioning.
  • The front end interface was built.
  • The lead PHP developer had just dropped out of school to take a job in Nashville.

We had a pretty good LAMP project going. It was built by students, and it showed. But we fixed our mistakes. We got to learn about the importance of normalizing a database the hard way. That probably took the most time. :)

The role I took was largely adding performance enhancements. I replaced the tables with a pure CSS layout. I wrote my first unobtrusive javascript, replacing literally several hundred lines of Dreamweaver generated scripts with about a dozen lines of code. I assisted with some MySQL and PHP clean up. I added a couple of functions that required creating new admin forms. I performed final testing (unfortunately, not with “real” users) and delivered the product to the client. By the end I was pretty comfortable with the project aside from the icon based navigation.

There are 16 job categories, each with a few sub categories. I think there were something like 83 in total. Rather than expressing these as text, each category and subcategory had an icon associated with it. I knew that introduced a high probability of misinterpretation of the navigation, adding to the learning curve. How do you communicate an idea like “Construction Plumbing” in an 80 x 80 pixel icon? But I also knew how much work went into those icons and didn't want to deny the graphic artist her moment in the sun.

About a year after we delivered, I surfed over to take a look. I had just taken my current job and was reviewing the examples of my work cited on my resume. The front end was completely different. All my CSS and javascript work was gone. The graphic designer's icons were gone. But functionally it was all still there. So I could at least tell myself I still contributed to a site whose purpose I believe in. Even if it was all back end stuff that no employer could look at in any meaningful way, some of my work was still there.

Today I'm looking for broken links in our “helpful links” page maintained by our career placement office. One of the links goes to The Tennessee Career Information Delivery System. The project I worked on lives on a different sub-domain of that same server farm, so I replaced “tcids” with “pathways” to check it out again.

Apparently the table that stores the 16 job categories and the table that stores the sub-categories still exist. The rest is gone. The Admin link doesn't even go anywhere. Once you get to the level of sub-categories, a whopping 1 click into the system, rather than being able to drill down any further and get info on degrees and / or the schools offering those degrees in your browser, you are prompted to download a .xls file. The meat of the database has been replaced with flat Microsoft Excel files. This pains me greatly.

The only surviving evidence of my involvement in this application is the HTML comment found on line #23, in reference to the IE whitespace bug.

<!-- The following just holds the entire "banner." It's all on one line because IE is stupid. -->

I pretty obviously wrote that. I'm not sure if the placement on line #23 was intentional.

Monday, May 26, 2008

Looking for Applications of Social Capital

I could talk about how I hope to launch the redesign at work and then start making my rounds on the conference circuit. But that seems a bit obvious, and also doesn't do much with the technology side of the equation of the course material.

So instead I've been thinking about what I can do at work to help incorporate these ideas into our own website, either as part of the redesign or as an area of further growth after the redesign is over. I'm not really sure there's much point in trying to build a custom application. There's no sense in trying to out-Facebook Facebook. But maybe we could just get some of our offices such as Career Placement, Cooperative Education, and the Small Business Development Center active on a site like LinkedIn and offer training to their “customers” (that seems to be a dirty word in higher ed, but I think denying that side of our relationship with the student body leads us to mentally separate the connection between us meeting their needs and us getting a paycheck).

I've worked this job long enough to know where the technophiles are, and in this situation they're not where I need them to be. This line of reasoning may get derailed before it even has a chance to leave the station. I guess I could try to subvert the system and take things straight to the students. But how? I mean, if there's any sort of training I want to offer to employees I've got professional development day at least once a semester. There's nothing like that for the students. There's no way to work this into the syllabus of a pre-existing class, and without a PhD I won't be able to create a new class of my own. What we really need is to incorporate some basic digital humanities ideas into courses like English 101 and 102. That's something Dr. Clougherty tried to pitch back at TTU but it wasn't very well received. He was extremely lucky to get the web design program past all the bureaucracy, but ultimately I think it was the constant fight to get even that much innovation introduced on campus that led to his seeking greener pastures. Vol State might not be quite so resistant to change as TTU simply by virtue of being a 2 year school, but exactly how much is it worth to purposefully swim against the current in an educational institution? Ultimately, I love my job and I'm quite happy here. But couldn't picking the wrong battles do a lot of damage to my quality of life and job satisfaction without adding any real value to the students?

So I think the first step is to look up data on the rate at which our graduates find jobs. I may be dreaming up solutions for problems that don't even exist. I'll try to get my hands on some figures for both graduate employment and retention rates. Maybe I can find a couple of academic departments who would be willing to pilot a program using social media to keep their students connected on and off campus and on into their careers. I think the secret to success in organizations traditionally resistant to change is to not just tell them a better way to do things, but to actually show them a better way. The downside is you often end up asking for forgiveness after the fact, rather than for permission beforehand. Then again, is that really such a bad thing?

Tuesday, May 20, 2008

Social Capital and the Cohort Model in the MACT Program

I've been thinking about social capital and how it applies to our cohort. More than a few people have observed that for many of us out of towners, the end of this year's Spring Institute could very well be the last time we see each other face to face. There have even been half-joking suggestions of doing our own Spring Institute in Las Vegas in 2009, although probably not the full 3 weeks. :)

We talk about these things and we worry about these things because the cohort model allows us to make meaningful connections with each other. I think that's an important element of education in general. My undergrad program didn't exactly have a cohort model, but at the time I graduated there were only 60 or so of us in the major so I had shared classes with the few people who graduated before me and just about everyone with 2 or fewer years left to go. Many of us stay connected via Facebook or what not, even pointing out job listings to each other. We're in the same industry so that sort of thing is easy.

But, that's where the social capital of the MACT program starts to break down. We're not all in the same industry or discipline. I assume, and this may be naive of me, that in most master's programs, with or without the cohort model, students can expect a good chance of crossing paths after graduation simply because everyone will run in the same professional or academic circles. Students getting a master's in Digital Humanities will probably frequent the same conferences for years to come. I don't see that happening with MACT students (or at least not in my cohort).

We've got people working in higher ed, civil service, banking, project management, IT, mass media, public relations, web development, law, graphic design...

The nature of our research projects seems to reveal the same trends. I won't know for sure until I see all the posters on Friday. In fact, that may not be enough either since we're able to change what we're doing after the poster presentation session. But based on what I've seen so far, we get a few similar groupings. I can think of at least 3 people who are looking at millennials in the work place. But they've got different approaches and varying research methods. There's a couple people using content analysis, but they're looking at radically different research questions in largely unrelated contexts. Even those of us focusing on web technologies are employing different research and philosophical lenses.

I think educationally this is one of the strengths of the MACT program. Discussions tend to be rich with idea generation as we bring our various backgrounds and understanding to the table and bounce ideas around. But this same multi-disciplinary approach robs us of a certain level of long term social capital. I wonder if the stronger ties facilitated by the cohort model will stand the test of time without the occasional reinforcement of weak ties found in chance encounters “in the field”.

Monday, May 12, 2008

Das Sozial Kapital

First of all, on the off chance that anyone reading this isn't currently in the MACT program at the University of Alberta, you may need to spend a little time on this site or maybe the Wikipedia entry on social capital to follow this. I'm using this blog to meet course requirements, because I am both innovative and lame, the two great tastes that taste great together.

I'm still confused as to exactly what social capital is and what it is not, but looking at the literature it seems I'm not alone. Just about everything we've read in class, and some of the stuff I've read outside of class on this topic, include some sort of definition for the term. Often they go back and cite Bourdieu or Coleman or Putnam.

I understand the arguments against Putnam's definition as inviting circular reasoning. His proof-is-in-the-pudding approach seems to equate social capital with success, then cites that success as evidence of social capital. Being a southerner, I've seen such ideas used to justify “New South” racism. This is particularly true when a few examples of “successful” minorities can be cited as “proof” that racial inequality no longer exists, therefore anyone “playing the race card” is just making up excuses. Their lack of success is evidence of some sort of character flaw because framing it in those terms means it's not racist, even if those terms are being applied to the majority of the members of that race. Not to imply that Putnam himself is racist, just that I've seen similar lines of thought abused for such purposes.

Portes seems to be the only one to call shenanigans on Coleman for his definition. Maybe I'm the dense one here, but this is so close to meaningless as to be functionally worthless to me:

[Social capital is] “a variety of entities with two elements in common:
  1. They all consist of some aspect of social structures, and
  2. They facilitate certain action of actors – whether persons or corporate actors – within the structure.

Aren't we dealing with “social structures” any time we're dealing with 2 or more people? And since we're all people, doesn't that mean we're dealing with social structures any time we're dealing with even 1 other person? Some would even make the argument that social structures exist among various elements of ourselves, either because our sense of identity itself is a social construct and/or because the social elements of language permeate our inner dialog and idiolect. And what the hell are “some aspects”? Can we be more vague? And they “facilitate certain action of actors”? Really? Reminds me of “certain substances”.

That leaves us with Bourdieu's definition, which I'm still not completely clear on. I know he differentiates between the resources and the access granted to them, which helps steer clear of some of Putnam's circular reasoning. For example, say I'm a white male from a prominent family in a small town and I know that if I attempt to start my own business and fail, my family won't let me starve. This knowledge encourages me to take more risks than I otherwise would and luckily for me those risks pan out, making me quite successful. I never had to tap into the social capital afforded me by my family's status, but the mere knowledge that I could contributed to my success in significant ways. I think that's an instance of social capital that Bourdieu's definition covers but Putnam's does not. Coleman's definition seems to cover just about anything I can imagine.

But I could be mistaken. The biggest part of Bourdieu's definition I'm left unclear about is whether you have to access some more traditional form of capital via social ties for it to count. Obviously, if the same guy in the hypothetical situation above got an interest free start up loan from a family member, that would be social capital, even under Putnam's definition. Also, when I was looking for my first job after I got my undergrad degree, I turned down some higher paying positions because I knew I would be less happy in those environments. Is such at-work happiness social capital even if it doesn't necessarily lead to increased productivity? It's a boost to my quality of life, but is that alone enough? Things that are hard to put into monetary terms are just as hard, in my mind, to judge as social capital (or not).

Thursday, May 01, 2008

Got 16 Minutes?

Watch this:

Don't have the patience? Read the gist of it here.

Are we as a society slowly awakening from a TV induced stupor?

Tuesday, April 29, 2008

Why jQuery Rocks

As a lazy developer, I love jQuery. I'll try to enumerate my top reasons:

  1. It's among the smallest of the full featured javascript libraries. So I don't have to worry about adding a lot of overhead to my pages to use it. For example, minified and gzipped, jQuery is about 15k. Dojo is about 24k, which, I admit, is still pretty impressive.
  2. jQuery does tons of cool stuff out of the box and doesn't require me to understand complex extension packages (MooTools and the Yahoo! User Interface Library are the most extreme examples I can think of).
  3. I can call functions as soon as the DOM is ready. (What's the DOM?)
  4. Simple, CSS style syntax for selecting elements combined with chainable functions allow me to write in 1 to 5 lines what used to take 12 to 50.
  5. jQuery's also got some great plug ins.
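To illustrate reason #4, here's a rough before-and-after. The selector, class name, and styling are invented for the example, and this assumes it's running in a browser with jQuery loaded:

```javascript
// The old way: grab every link, loop over them, check the class,
// and set each property by hand.
var links = document.getElementsByTagName('a');
for (var i = 0; i < links.length; i++) {
  if (links[i].className === 'external') {
    links[i].title = 'This link leaves our site';
    links[i].style.color = '#c00';
  }
}

// The jQuery way: a CSS-style selector plus chained functions,
// one line instead of seven.
$('a.external').attr('title', 'This link leaves our site').css('color', '#c00');
```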

Of course, not every plug in available is amazingly awesome or even a good fit for every project. In fact, some of the best, in my opinion, have a very narrow, targeted application. Today I was asked to create an image gallery from a collection of 50 shots taken at the graduate awards ceremony Friday night. In the past, we've used Flash to make our image galleries. I suck at Flash, so I'm sure it would take me all week to recreate something like that, even if I had a basic template to go by. I have no idea how long it took Ken (former webmaster, current director of public relations and therefore my boss) to generate something like that. But I figured I could whip up something similar in jQuery in an afternoon.

There are lots of image gallery plug ins for jQuery. I wanted one that looked impressive while being easy to navigate and not so flashy as to distract from the images themselves. I also wanted something that required nothing but a list of images identified by an ID or class. The idea there is, if we someday move to a CMS, the end users could build a gallery by simply adding the class (or ID) to a list of images. That's the sort of thing we could easily cover in training. It helps that as of right now my favorite CMS is Drupal, which comes pre-packaged with jQuery.

I looked at slideViewer, but I like thumbnails. I looked at jqGalViewII but for this particular application I don't want to spend time generating my own thumbnails. Then I looked at Galleria and this seemed to be the best fit for my immediate needs.

It took about 15 minutes of code work to repurpose the key elements from the demo page for our needs. That includes time spent minifying the javascript and uploading the external files to the server. I changed a few color values in the CSS, altered the fonts slightly, linked in our images, and had a fully functional image gallery in less time than it took to crop the images. The only snag I ran into was the preloading function wasn't working properly in IE (curse you IE!). But updating the core jQuery library to the latest version fixed this bug pretty quickly. I should have done that earlier, but I've been focusing on the development server, not the “live” server.
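The basic setup is pretty close to this. This is a sketch from memory rather than the actual code, and Galleria's real options and class names may differ from what's shown:

```javascript
// Sketch of the class-triggered gallery idea: once the DOM is ready,
// any unordered list of images marked with the (invented) "gallery"
// class becomes a Galleria gallery. Check the plugin's demo page for
// the real initialization options.
$(document).ready(function() {
  $('ul.gallery').galleria();
});
```

The matching markup would be nothing more than a `<ul class="gallery">` full of `<li><img ...></li>` items, which is exactly the kind of thing an end user could manage from a CMS.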

I hope to incorporate some standard implementations of things like this into the redesign. I'm keeping the possibility of a future move to a CMS in mind, but there are plenty of other benefits to having a few well documented solutions like this. It saves me time. That's a good start. I also occasionally take a vacation or get sick. If something comes up and I'm unavailable, a quick HowTo on implementing a cookie cutter image gallery like this could be a real life saver. I need to experiment with managing a mix of image sizes and maybe port the auto-thumbnail feature from Galleria into jqGalViewII and then offer up both options in a tidy package. Since it's all open source, I can easily do that.

Here's hopin' I don't automate myself out of a job some day. :)

Friday, April 25, 2008

Random stuff collected on the net

First, a couple of new heroes.

This guy is so awesome. He's the antithesis of what the media tells us a “rock star” should be. But he doesn't care. His joy and enthusiasm about, not just his music, but the whole concept of music, is downright infectious.

Here in the south, religion tends to earn itself a reputation for discouraging critical thinking. But I know it doesn't have to be that way. I've met a few true religious scholars for whom I have great respect, even when we (often) disagree. I love seeing stuff like this, not because it's putting a Faux News reporter in his place, but because this guy's firm in his convictions, can back them up quite reasonably (with or without scripture), does not push his beliefs, but does call shenanigans on the reporter for trying to use the power of his position to distort reality. “Just because you say he is a racist doesn't make it so.” I'm paraphrasing there, but that's the gist of it.

Speaking of the social construction of race, I found a very cool quote from Tay Zonday (the Chocolate Rain guy):

I live in Minneapolis. I'm 25 years old. I'm not sure what you mean by "background." Is that a code word for "race?" The straight-faced answer is that I'm Martian. They don't have a box for me on the census form. I'm the write-in candidate that the government leaves no space for when you have to choose your race.

Seriously, is race something you choose? The whole point is that I don't choose it. It is somebody else's shortcut to my soul. So journalists ask "what's your background?" like I'm supposed to retell someone else's story about me as though it's a fact of who I am and where I come from. As long as I talk about myself in fiction that someone else wrote, I might as well write my own fiction: I'm from Mars. Most believe the story that I'm a black mulatto.

That's from this interview by way of Tay's Wikipedia entry. From that, I also learned that he's a graduate of The Evergreen State College. Evergreen is one of my favorite schools ever. Their curriculum overview page explains why I feel that way better than I can myself.

I discovered Evergreen in the book “Colleges That Change Lives”. Out of the 40 or so schools in that book, Evergreen and The New College of Florida were my favorites. At various points in my academic path, I've considered attending both. If I have any academic passions left after I'm done with MACT and move on to doctorate level work (my only current lead is the PhD in Digital Media from Georgia Tech), I think I'd be pretty happy teaching at one of those schools since I never got a chance to attend.

Friday, April 18, 2008

Curse my Lack of Focus

Quite a while back, I took a couple of days to play with the Google Maps API. I knew things were getting dangerous when I discovered the ability to add custom overlays to the map and immediately started imagining all the cool stuff that could be done with that. With all the redesign work that still needs to be done, I just can't spare the time to essentially play with a new toy, no matter how work related that toy may be. I experimented just enough to produce a sort of proof of concept and then moved on to other things.

But now I'm kicking myself 'cos at least 3 other schools have explored just how much awesome can be distilled from these tools:

Oregon State is using the jQuery Google Maps plug-in, or at the very least they're using jQuery with Google Maps, with or without the plug-in. I'm interested in that 'cos I'm already using jQuery. The coolest thing I've found so far digging around in the source code is from Boston U. It doesn't seem to have anything to do with the map function specifically, but there's a function that sets all external links to open in a new window/tab and it's declared like this:

function l337() {
  if (document.getElementsByTagName) {
    var elements = document.getElementsByTagName('a');
    for (var i = 0; i < elements.length; i++) {
      if (elements[i].getAttribute('rel') == 'external') {
        elements[i].target = '_blank';
      }
    }
  }
}

How cool is a function called l337()? I don't even care what it does at that point.

And just so you know, FF3 has no problems rendering form elements within the baseline grid, so I can not worry about it and eventually that particular problem will take care of itself. Pixel perfection through apathy. Gotta love it.

Yesterday I got <sup> and <sub> tags working without screwing up the baseline grid too. It even seems to work cross browser. That's got just about everything I can think of except for images, and I know how that should work in pseudo code. JavaScript has a modulo operator (%), right? Right.
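For the record, the pseudo code idea boils down to something like this. The 18px line height is an assumed example value, not a number from the actual stylesheet:

```javascript
// Pad each image's vertical space up to the next multiple of the
// baseline grid, so an arbitrary image height doesn't knock the text
// below it off the grid. LINE_HEIGHT is an assumed example value.
var LINE_HEIGHT = 18;

function paddedHeight(imageHeight) {
  var remainder = imageHeight % LINE_HEIGHT;
  if (remainder === 0) {
    return imageHeight; // already sits on the grid
  }
  return imageHeight + (LINE_HEIGHT - remainder);
}
```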