Wednesday, December 16, 2009

Badges, Banners, and Calls to Action

The Problem

The folks in Admissions are getting phone calls from people who can’t find a link to the online application.

Probable Causes

  1. Users can’t find content “below the fold”
  2. General placement of call to action does not align with relative importance
  3. Banner blindness


When I have a problem, I like to kill it with fire… I mean, attack it with data; Google Analytics to the rescue! As of today, we’ve got just over 6 weeks’ worth of data since launch. Here’s what the numbers tell us about this page:

How They Got There                                   How Many Times   % of Total Page Traffic
Future Students Landing Page text link in content             3,314                     65.2%
Home Page badge                                                 518                     10.2%
Admissions Home Page                                            299                      5.9%
Future Students Landing Page badge                              199                      3.9%
External (usually Google or another search)                     158                      3.1%
A to Z Index                                                    120                      2.4%
Total                                                         5,081                    100.0%

That’s data pulled together from several different reports as interpreted by me. There could be some rounding on Google’s part and some fudging on mine. But the general trend we see is reliable.

What Does That Mean?

By far, the most effective way we’re sending people to this content is the link about 1/3 of the way down the Future Students landing page that clearly states “The first step is to fill out an application.” In fact, 9.66% of all the people who see this page follow that link. It’s the 3rd most popular link on that page after Programs of Study and the Current Students landing page.

Taken together, the 2 badges are the 2nd biggest traffic producer, but we have a big drop off between 65% of traffic at #1 and 14% combined for #2 and #4.

After that we get a noticeable long tail effect, but that's normal for this type of data.

Applying the Data to Possible Causes

Content Below the Fold

The problem does not seem to have anything to do with scrolling. A wealth of previous research shows that content “below the fold” doesn’t really suffer for it, but it’s nice to see confirmation in our own data. The most effective link to this page is only visible after a bit of scrolling, even on my rather large monitor. Comparing the people who find the page to those who don’t may sound like apples and oranges, but we know about that second group because they are calling us. Both the main telephone number and the direct number to the Admissions Office require a bit of scrolling to find. These people are obviously finding that information, so scrolling isn’t presenting a significant hurdle to finding information among this population.

Sub-Optimal Placement on the Page

We could still have a misalignment of placement vs. purpose even if scrolling doesn’t enter into the equation. The purpose of the badges is to feature timely content. The application process should probably be permanently featured, but a permanent badge runs counter to the core purpose of being timely and changing often. The link to the schedule for next semester makes an excellent badge because that content is in high demand right now. In a couple of months we should be able to safely replace it with something more timely, perhaps a badge for the Academic Calendar so that people can quickly and easily find the dates for Spring Break.

So what are some other ways we can permanently incorporate “Apply Now” into the overall site template?

The primary navigation is all user role based, so adding it there wouldn't make sense. Putting it below the primary navigation would make it look like secondary navigation. At one point the site template had a “default” secondary navigation for those pages that didn’t have such a menu. It muddied the waters as to the purpose of that area of the page and it was an early cut in the design process. The area at the top, with links for the People Finder and A – Z Index, could work, but it’s already full. We’d need to delete a link in order to make room for it and I’m not comfortable dropping anything currently there.

We could add it to the Help Center under the Registration heading. Right now everything in that category applies to students who are already admitted, but I doubt most incoming students have a clear understanding of that distinction. Ask the average senior at Gallatin High School what the difference is between applying for admissions to Vol State and registering for classes at Vol State and they’ll probably look at you like you’ve got 3 heads. I would have at 17.

We could also list it in the Help Center under the Students heading. Neither option puts the link on the screen by default, but either makes it accessible from (almost) any page. The footer is also an option, although that would take a bit more work on my part. We’ve got space to spare down there, but I want each area to have a clearly defined purpose.

Banner Blindness

But I think a bigger issue has been brought to light here. I’ve just gone through the navigation summary for all 7 pages that feature badges. None of the badges appear in the list of top 10 links for those pages. That may not be a bad thing if it means people are finding what they are looking for in the actual content. But it could also indicate a bad case of banner blindness.

Or Is It?

But approaching it from the other side, 92% of the traffic coming into the Schedule of Classes page is getting there through one of the various badges. That 92% translates into about 1,225 total page views, which is so tiny in comparison to the traffic coming through the landing pages that it may not make a blip on the radar. Learning Help Centers gets about 75% of its traffic from badges. SEEK gets about 94% of its traffic from badges. But again, page views measured in the hundreds are so small in comparison to the total traffic pumping through the various landing pages that it’s easy for those numbers to fall through the cracks.

Overall, badges seem to channel a significant percentage of traffic for the sorts of things people would not otherwise be aware of, such as SEEK and the Learning Help Centers. They also seem to work well for new site content, such as the dedicated page for class schedule information; both SEEK and the Learning Help Centers are fairly recent additions to the site as well. Badges seem to perform less well for older content with high awareness and lots of paths of entry. And in the case of the “Apply Now!” badge, 14% of total incoming traffic may not sound like a significant boost, but it still translates into roughly 700 visits. That’s nothing to sneeze at.


Lots of people are eventually getting to the application page, but that doesn’t change the fact that some are not, and this is driving a noticeable number of phone calls to the Admissions Office. Some of the people who are eventually getting there may not be getting there easily. Maybe 10% of them were 30 seconds away from calling us too. And for every person who calls, maybe 3 or 4 other people just give up without even calling us. We really have no way of knowing. So I’m not pulling out these numbers as a means to dismiss the problem as it was reported to me. I aim to use the data available to understand the scope and context of the problem in order to find effective solutions.

We can increase our link coverage by adding it to a few of the persistent template elements:

  • Help Center —> Registration
  • Quick Links —> Students
  • Footer, not sure exactly where yet

We can tag and measure the performance of these links over time to gauge their effectiveness. But ultimately more data is needed. And it’s the sort of data Google Analytics can’t really give us.
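As a rough sketch of what that tagging could look like: Google Analytics distinguishes campaign traffic via utm_* query parameters, so each placement of the same "Apply Now" link can show up separately in reports. The URL and the source/campaign names here are hypothetical, just to show the mechanics.

```python
from urllib.parse import urlencode

def tagged_link(base_url, source, medium, campaign):
    """Append Google Analytics campaign parameters so each
    placement of the same link can be tracked separately."""
    params = urlencode({
        "utm_source": source,      # which template element the link lives in
        "utm_medium": medium,      # what kind of element it is
        "utm_campaign": campaign,  # the initiative it belongs to
    })
    sep = "&" if "?" in base_url else "?"
    return base_url + sep + params

# Hypothetical application URL; one tagged link per placement:
apply_url = "http://www.example.edu/apply"
print(tagged_link(apply_url, "help-center", "link", "apply-now"))
print(tagged_link(apply_url, "footer", "link", "apply-now"))
```

Each placement then gets its own row in the campaign reports, so we can see which template element actually pulls its weight.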

In the near future, I plan to do some usability testing with potential students. Locating the online application will be one of the primary tasks for that research. The results may help us arrive at a long term solution.

Thursday, December 03, 2009

First month with the new site

Yesterday was December 2nd. We launched the new site on November 2nd. So as of today, I have one solid month of data on the new site available in Google Analytics. Let’s take a look at how we’re doing compared to November 3rd through December 3rd of last year. The dates don’t match up exactly; I shifted them so that both ranges start on a Monday and end on a Tuesday.

  • Visits up 27.53%
  • Unique Visitors up 89.16%
  • Page views up 38.51%
  • Average page views up 8.61%
  • Average time on site up 22.44%
  • Bounce rate down 46.09%
  • Percentage of new visits up 90.69%

These are all positive changes. Bounce rate is a bad thing, so seeing that number go down is good. We’re reaching more people, who are looking at more pages and spending longer stretches of time before leaving. But I’m not ready to say all this is due to the redesign. After all, we’ve seen a significant enrollment increase this semester, so all these numbers should be improved over a year ago.

So let’s also compare the first month with the new site to a similar date range the month previous; 11/02/2009 through 12/02/2009 compared to 9/28/2009 through 10/28/2009, again starting on a Monday and ending on a Tuesday.

  • Visits up 0.22%
  • Unique Visitors up 15.27%
  • Page views up 30.01%
  • Average page views up 29.72%
  • Average time on site up 41.74%
  • Bounce rate down 51.39%
  • Percentage of new visits up 35.89%
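For what it’s worth, these deltas are just the standard percent-change formula, assuming Google Analytics computes them the usual way. A quick sketch with hypothetical before/after values:

```python
def percent_change(old, new):
    """Percent change from an earlier value to a later one."""
    return (new - old) / old * 100.0

# Hypothetical values, just to show the mechanics:
print(percent_change(100, 130))   # a 30% increase
print(percent_change(10.0, 5.0))  # a drop reads as a negative number
```

The same formula works for the "down" metrics too; a bounce rate falling from 10% to 5% comes out as -50%.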

This comparison is less straightforward. The new site has Thanksgiving break in its data set where the old site has no breaks, which would seem to put the new site at a disadvantage. But the old site’s figures come from before registration opened up for the Spring, so in other ways it’s at a disadvantage. In other words, don’t read too much into this comparison.

It does help make it clear that the sorts of metrics tied to raw traffic have little to do with the redesign. The percentage change in visits is virtually zero. But metrics that measure engagement, such as time on site and bounce rate, actually show more improvement against a month ago than they do against a year ago. This probably helps show the natural boost we get thanks to registration opening up this time of year. When we’re talking about aggregate data it’s important to keep in mind all the variables that have nothing to do with the design of the site.

I’m more comfortable crediting the redesign for large shifts in metrics within specific sections of site content that were significantly overhauled. For example, the list of our programs of study saw an increase in visits of 502.65% and an increase in unique visitors of 290.45% compared to figures for October of this year. Compared to a year ago, the difference is a 548.75% increase in visits and a 333.29% increase in unique visitors. One of my primary goals with the redesign was to increase the visibility of this content because I think it’s an important part of the “shopping” process. I think we can safely call that a success.

Sunday, November 01, 2009

Don't mind me

I just need some links from a non Vol State domain to test the custom 404 page.

Wednesday, July 08, 2009

The Culture of Technological Abundance

In one of my computer science courses the instructor asked if computing resources have reached the point where efficiency can safely be ignored. I responded with a resounding “No!” but some of the younger pups in the class disagreed. I'm beginning to think I was showing my age, at least within certain contexts.

Something in my brain makes me love the idea of optimization. I actually police myself away from it, but I still find myself spending hours upon hours thinking about “better ways” to do things. I was recently debating with myself about storing telephone numbers in a database as integers vs. strings. I found the integer approach most appealing. It just felt right. But I quickly figured out that would require formatting the data to be human readable every time I needed to display it as well as a bit of trickery on the front end to get the form input (which would be in a human readable format) into a basic int structure. That consumes a lot of my time, and possibly a lot of time for the users who eventually put the collected data to use. In exchange, I save a little bit of hard drive space.

The new server has 500 gigs of RAIDed space. A formatted telephone number string, such as “(888) 555 - 1234”, takes up 16 characters. That's a 17 byte VARCHAR. Even less if we default to something shorter for unrequired fields left blank by the user, but for the sake of argument let’s go with VARCHAR. Storing “8885551234” as a BIGINT requires 8 bytes, saving us 9 bytes. That’s 9 out of over 500 billion available. We’ll end up with a few hundred form fields that will see a few hundred hits per year. For the sake of argument let’s say 400 squared, or 160,000. If my attempts at optimization save an average of 9 bytes per field per record, we'll run out of space after about 350,000 years. I'm not sure what clock cycles are involved in fetching or even comparing a 16-character string vs. a 10-digit number, but the difference is probably even more negligible than the storage space. Obviously, server resources are abundant when compared to my time and the time of my users.
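Sketching that back-of-the-envelope math out, using the same figures from this post:

```python
# Bytes saved by storing phone numbers as BIGINT instead of a
# formatted VARCHAR, vs. the drive space available.

DISK_BYTES = 500 * 10**9           # 500 GB of RAIDed space
VARCHAR_BYTES = 16 + 1             # 16-char string plus 1-byte length prefix
BIGINT_BYTES = 8
saved_per_field = VARCHAR_BYTES - BIGINT_BYTES   # 9 bytes

records_per_year = 400 * 400       # ~400 fields x ~400 hits per year
saved_per_year = saved_per_field * records_per_year

years_to_fill = DISK_BYTES / saved_per_year
print(round(years_to_fill))        # roughly 350,000 years
```

Which is a long way of saying the savings will never matter.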

I’m about halfway done building the forms on my to-do list and I just convinced myself to change the way I handle things. Oy vey.

If data is collected for the purpose of being later presented to humans, I will store it as a string, optimization be damned. I’ll use numeric types for data that is collected to be crunched, which in all honesty is rare right now. In situations where it could conceivably be used for both, such as dates, I’m probably better off storing both versions and fetching (or sorting by) whichever is most appropriate, rather than running a timestamp through the date() function as needed or converting user input into the MySQL DATE format (which is both human readable and easily sortable).
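On the “easily sortable” point, a quick illustration: ISO-style date strings like MySQL's DATE format sort the same way lexicographically as they do chronologically, so a single stored value is both human readable and sortable without any conversion.

```python
# YYYY-MM-DD strings (MySQL's DATE format) sort chronologically
# under plain string comparison, because the most significant
# component comes first and every field is zero-padded.

dates = ["2009-12-16", "2009-02-03", "2008-11-30", "2009-11-02"]
print(sorted(dates))
# ['2008-11-30', '2009-02-03', '2009-11-02', '2009-12-16']
```

That property is exactly why DATE is one of the few cases where I don't feel torn between the human-readable and the machine-friendly representation.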

2 years into this redesign project and I’m still not done. In hindsight, the biggest setback has been my own perfectionism. I have a hand-crafted attitude towards my work. I take great pride in it. But at what cost? It feels great when I see something like Smashing Magazine’s list of current best practices in form validation and I realize I’m already doing the majority of those things simply because they “feel right”. But I spent all day yesterday working on a single form, stayed 45 minutes late, and still didn’t get it done. That felt anything but great. What’s the trade-off? Where do I draw the line?

I’m even doing it now. I’m encoding my apostrophes and quote marks. Can anyone out there notice the difference between “this” and "this"? It's 12 extra keystrokes for me to use the "proper" encoded characters. For all I know, Blogger auto-converts them for me anyway. (*EDIT* No, it doesn’t.) I've just developed the habit over the years of hand coding HTML. 12 keystrokes per quote pair times 5 quote pairs per page times 3,000 pages, at 400 characters per minute, is 7.5 hours. That's a full workday over the past 2 years. Is that too high a price to pay for typographic correctness?
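For the record, the math checks out:

```python
# The typographic-correctness tax, using the figures from this post:
extra_keystrokes = 12 * 5 * 3000   # per quote pair x pairs/page x pages
minutes = extra_keystrokes / 400   # at 400 characters per minute
hours = minutes / 60
print(hours)  # 7.5
```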

What's the cost of XHTML validation? Of ADA compliance? That last one could end up saving us a mint if lawsuits start getting tossed around. I know where my personal comfort level lies on most of these issues and I'm willing to re-evaluate in light of new information and fresh perspectives. As the only web guy around here I guess I get to make those judgement calls for the institution. But in a freelance situation my time is the client's money, which is definitely a scarce resource. A 0.2% markup cost for things like typographic correctness may not sit well with some clients, but there's plenty of designers out there who also don't care. Maybe they can service those clients.

Tuesday, June 30, 2009

New York Post op ed says degrees aren't worth it


Not the most well-reasoned or well-argued article I’ve ever read (I know it’s just the Post, but the blogs I read are typically better written than this), and I don’t agree with his prescribed fix, but he hits on 2 big issues I think are vital to the future of higher ed.

  1. We're too slow.
  2. We have no accountability.

We’re too slow.

Someone has to decide to go to college, then apply for admissions, then apply for financial aid (which may spoil the deal if not enough is available), then wait around for registration to open, then pray the classes they want/need get enough people to “make” but not enough people to fill up, then wait around for classes to start. It’s a lot of hoops for the students to jump through that have very little to do with the core value we offer them. Some of it is necessary, even beneficial, but the benefit isn't immediately apparent to the “end user”. I think in the next 10 years we'll be able to greatly streamline this process and make it much more student centered. If we don't, someone else will, and we may find ourselves obsolete.

Hough balances this with the abundance of cheap or even free quality educational resources available thanks to things like OpenCourseWare and iTunes U. “Today's student who decides to learn at 1 a.m. should be doing it by 1:30. A process that makes him wait 18 months is not an education system. It's a barrier to education.” I don’t think it’s as bad as an 18-month wait, especially not here (we're a 2-year school where students can literally walk in the day before classes start and leave with a schedule for the coming semester, which I think is awesome), but I agree with the spirit of what he’s saying.

We have no accountability.

His focus is more on the Ivy League schools, which seem to play a game of recruiting the brightest students, who leave with a degree, still bright, but not much more so. Hough compares Harvard to a hospital that turns away the least healthy 92% of its patients and then takes credit for the health and longevity of the patients it does treat. A degree is supposed to certify that a graduate has met a certain standard of learning and competency, but everyone just has to take our word for it (“our” meaning the higher ed industry as a whole). Grade inflation and downright degree milling are eroding employers' faith in our authority to make such claims about our graduates.

He calls for government regulation enforced by standardized testing. I think that's horribly short sighted. But maybe we need something like the Bureau of Labor Statistics but for the long-term impact of education. If longitudinal data were collected and made publicly available in a standardized format, we’d have no choice but to get some accountability. Even with newspapers failing, there will always be someone out there willing to take us to task over statistics.

The hypothetical example Hough uses sets the groundwork for the type of data that could be collected: median income 1 year after graduation, 5 years after, 10, 25, etc.; median education-related debt at graduation, with interest rates. We'd have to correct for outside influences such as socioeconomic status. Maybe comparing graduate incomes to the incomes of their own parents? Someone better at statistics than me could tackle that problem. Schools shouldn't be punished simply for servicing a poor area (although we all know they already are).

Other fuzzy areas would be attempts to collect data on productivity and employer satisfaction. Standardized testing could play a role in measuring things like value added. If there were a way to compare GRE scores to ACT/SAT scores, or maybe a new test taken on the way in and on the way out, we could establish how much was actually learned. I think a large percentage of what is learned in a college or university experience would be hard to test for, so the results would have to be tempered with other data.

There's also the type of research that goes into works such as Colleges That Change Lives. That has very little to do with economic issues such as debt and income and much more to do with civic engagement and overall satisfaction with one's own life. Hough would probably disagree with me there, since colleges that change lives tend to spend a lot of time on “frivolous” but engaging topics like gender studies.

I also agree with Hough that we have an over-reliance on the degree model, but Hough seems ready to abandon it completely. I think it works quite well in some areas, but not as well in others. I think we're ready to see the rise of several different models, each suiting certain educational goals better than others. There will of course be a transitional period where square pegs are pounded into new round holes, but eventually each model will focus on its own strengths and attract students with the proper kinds of attitudes and learning styles. I also think this will help us move away from viewing graduation as a culminating event and toward models for lifelong learning.

That may be horribly idealistic of me, but I also realize this won't happen overnight. My kids will probably be the square pegs being pounded into round holes. And by the time my grandkids are ready for college, there will be a fresh crop of issues to address. I expect evolution and improvement, not utopia.

Thursday, June 04, 2009

.Edu websites and brand perception

I’ve been a member of a site called EduStyle since right after I took this job. It’s a gallery site for higher ed redesigns. I’ve been consistently blown away by the innovations, both visually and technologically, coming out of small, private colleges and universities compared to the standard 4-year institutions.

In regards to technology, Douglas Adams said:

  1. Everything that's already in the world when you're born is just normal;
  2. anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;
  3. anything that gets invented after you're thirty is against the natural order of things and the beginning of the end of civilization as we know it until it's been around for about ten years when it gradually turns out to be alright really.

The modern web was “born” in 1993. We can go with the release of Mosaic and say April 22, in which case the web is now old enough to drive. Or we can move a little later in the year with the Eternal September in which case the web is still stuck with a learner’s permit. Either way, it’s old enough to be shopping around online and planning for its academic future. Our personified web will be a traditional starting freshman come Fall 2011.

A .edu website communicates a lot about the brand of the institution. If we're horribly behind the times in technology that our primary recruiting demographic considers part of the natural order of things, we will suffer for it. I’ve observed a gap developing over the past couple of years, and I admit that our site is on the wrong end of that gap. But I'm trying to catch up.

Compare Yale and Columbia to Denver Seminary and Biola Undergraduate Admissions. I’m picking on the Ivy League and perhaps a bit unfairly. Cornell’s design is good enough that it’s inspired many other redesigns, including our own. But it’s possible that the University of Southern California inspired Cornell.

The reason I single out the Ivy League is that today’s potential students have an innate, subconscious ability to judge us based on our web presence even if they lack the prior knowledge to understand things like published research and accreditation. Does anyone go to the University of Phoenix because of the great research they produce? I highly doubt it. But their website feels more up to date than MIT’s. We can delude ourselves and pretend that somehow our students are too smart to fall for such marketing gimmicks. But it ain’t just gimmicks to them.

I think it’s easy for us to forget what it’s like to be a teenager shopping for a school. We’ve turned academics into a career, so we forget how alien a world it can be to an outsider. And let’s be honest, the vast majority of our potential students are outsiders.

I’ve been at the top of my class throughout my entire academic career. I was recommended for West Point and was encouraged to apply to lots of prestigious schools. I didn’t pursue those options because I wanted to stay in Tennessee and I didn’t think I could afford Vandy. I don’t regret those decisions, but I admit I may have chosen differently had I been equipped with a better understanding of financial aid. I didn’t know who SACS was until my alma mater went through their own accreditation review while I was a student. I still don’t know the exact difference between a Bachelor of Arts and a Bachelor of Science. If I ever need to list my degrees, I’ve gotta go look that up. I didn’t know my degree from Pellissippi was an Associate of Applied Science until I literally had the piece of paper in my hand. All I knew was that it was a degree that could advance my chosen career path. I had no reason to care about the academic nomenclature. I doubt today’s students are much different.

Managing and protecting our online brand perception is a big part of my job. As much as I loathe jargon, I can't really think of a better way to say it. I’m lucky to be working on a campus that understands that concept (if not in all the gory technical detail) in spite of the rarity of an administrator who was under 30 in 1993. :)

Wednesday, April 22, 2009

The Proper Care and Feeding of

Most literature on user centered design (UCD) focuses on redesigning existing products or creating wholly new products. No literature I’ve found focuses on applying these ideas to established, mature products in the maintenance stage of their lifecycles. However, with an eye towards abstraction, I think we should be able to take the high-level concepts and apply them to any stage of a product’s lifecycle. My goal with this document is to establish some guidelines for making iterative improvements to the site using my current understanding of UCD practices, without having to shift into a full redesign.

There is a split among experts in the field of usability. Some strive for true science. Tests use many participants, drawn from the target audience, with tight statistical fit. Conditions are controlled and kept close to real-world environments. Participants use high fidelity prototypes that are functionally complete, or very nearly so. The data such tests produce is significant and reliable. But these tests can also cost tens or even hundreds of thousands of dollars. Attempts have been made to codify formal systems for web site development, but these systems do little to address the practical concerns of time and money. (De Troyer & Leune, 1998)

In 1989, Jakob Nielsen introduced the concept of discount usability in a paper titled “Usability engineering at a discount”. The gist of the argument, to borrow a phrase from Steve Krug, is “Testing with 1 user is 100% better than testing none.” Many people, Nielsen included, have tried to show statistically where the sweet spot for return on investment lies in conducting usability studies. Many other people, such as Jared Spool, have argued against such reasoning. Each side tends to rely on math supplemented with anecdotal evidence. This is all well and good as a distraction for academics. Personally, I’m much more interested in producing a better product for our users than in the math and science underlying the methodology.

I’m following Krug’s ideas most closely. He goes even beyond Nielsen’s discount usability into what he calls “lost our lease, going out of business sale usability testing”. Such testing can have value without the need for strict scientific validity. He’s even gone so far as to call himself a usability testing evangelist, actively encouraging people to integrate it into their workflow. In his book, Don’t Make Me Think, he addresses the top 5 excuses for not performing usability testing.

  1. No time – Keep testing as small a deal as possible and you can fit it into one morning a month. The data produced will actually save time by pointing out obvious flaws in a way that sidesteps most possible internal politics and by catching these problems as early as possible, when they are most easily fixed.
  2. No money – Testing can be performed for as little as $50-$100 per session. While that’s not free, it’s far from the $50,000+ price tag of big ticket professional testing. In terms of ROI, it’s one of the wisest investments you can make.
  3. No expertise – Experts will be better at it, but virtually any human being of reasonable intelligence and patience can conduct basic testing that will produce valuable results.
  4. No usability lab – You don’t need one. Since Krug first wrote his book, things have actually gotten even easier with software such as Silverback.
  5. Can’t interpret results – You don’t have to use SPSS to mine for valuable data. The biggest problems will be the most obvious. Even if you only identify and fix the most obvious 3 problems, that’s 3 fewer problems in the final product.

Obviously, if my goal were to get published, even as a case study, I would worry a bit more about scientific validity than Krug stresses. Although even among the academic literature some examples of successful case studies can be found where no attempt was made at recruiting beyond very broad user categories. (Bordac & Rainwater, 2008) But my goal in this document is to create a framework for usability testing that is simple and affordable enough to be actionable at Vol State. Ideally, it should be straightforward enough to persist beyond my tenure in this position. For those goals, Krug seems a perfect fit.

Involving Users Through the Development Process

Step 1: Identify a Problem or Unmet Need

Occasionally, users may volunteer information related to areas where improvements are possible or identify desired features not currently available. Anything we can do to encourage and facilitate this can only help.

Direct observation may also provide vital insight. Students may be observed unobtrusively in computer labs or in the library. Employees could be observed, with their permission, within their everyday work environment. Surveys and focus groups may also assist in identifying unmet needs at a high level of abstraction.

Problems exist within the context of users and goals. A proper definition of a problem should identify these contexts. Some possible examples:

  • Students wish they could perform a task online that currently requires a visit to an office on campus
  • Parents want easier access to a certain type of data
  • Faculty need a way to share educational resources
  • Sports fans need a better way of viewing or sorting our events listings

Understanding these contexts is vital to the success of a possible solution. If parents express a desire for easier access to student records, that may very well violate FERPA, in which case that problem is not actionable. Faculty looking for ways to share educational resources may be best served by training on Delicious. Even though we won’t stress a strict statistical fit with our test participants, understanding user groups in broad categories such as this allows for quick and easy targeting which should add value to the data collected via future testing.

Step 2: Establish Benchmark (Where Applicable)

Persistent, longitudinal data collected via survey could also apply at this step. Google Analytics is another powerful tool we have at our disposal. The type of benchmarking would depend greatly on the type of problem. For example, problems that require adding new pages to the site would have no prior data in Google Analytics against which to benchmark. But if those pages address an unmet need revealed via survey, and that need stops showing up in future surveys, the comparison could serve as a benchmark. (Comeaux, 2008) While I hope to incorporate this step as often as possible, I don’t want a lack of benchmarking options to serve as justification for abandoning a project.

Step 3: Wireframe Possible Solutions

Brainstorm for possible solutions to the problem. Produce low-fi wireframes such as semi-functional HTML prototypes or paper prototypes. Research seems to indicate that testing multiple prototypes produces more useful data (Tohidi et al, 2006). Test 3 or 4 users loosely recruited from the broad user category identified in step 1. Aim for 3 testable prototypes, but again don’t let this guideline derail an otherwise healthy project.

Step 4: Develop Solution Based on Wireframe Testing

Perhaps a clear winner will emerge from step 3, or perhaps the best possible solution will incorporate elements from multiple prototypes. Entering into the development phase with even a small dose of user involvement cuts down on costs and ensures that we “get the right design.” (Tohidi et al, 2006)

Step 5: Think-Aloud Usability Testing with Prototype

Construct a task list informed by the context and goal(s) identified in step 1. Recruit loosely from the broad user group as in step 3. Spend a morning walking 3 or 4 users through the task list. Record sessions using Silverback. Take quick notes between sessions, recording the biggest issues while they are fresh in memory. Produce a single-page list of issues, not a full report. Writing formal reports consumes a lot of time better invested in fixing problems or further testing. Review session recordings with stakeholders as appropriate.

Step 6: Tweak Based on Testing

Fix the biggest 3 issues revealed in step 5. Avoid the temptation to go for the low hanging fruit until the truly catastrophic issues have been resolved. If something can be fixed with 15 minutes of work, by all means do so. But don’t count that as one of the big 3 unless it is also a major problem. After the big 3 and the easy fixes are addressed, act on any remaining issues as time permits.

Step 7: Launch & Evaluate

Put the solution into production. If benchmarks were established, collect data for a length of time allowing for a reasonable comparison up to a maximum of 1 year. If the data does not indicate improvement (and the criteria for judging this will be highly subjective and depend a great deal on context) consider returning to step 1 with an eye towards testing any assumptions in the original problem description.

Don’t Sweat the Details

Use the tools at our disposal to the best of our ability. Maybe a focus group would be the perfect tool for a given job, but we can’t pull it off because of finances or because it’s summer and campus is like a ghost town. Move on to a less than perfect solution, or plan to readdress that particular issue when the timing is right. If we postpone one project, we should find something to keep us busy in the meantime. With a site this large and complex, we should always be able to find something to improve.

The Tools of the Trade

In this section I will list some of the tools available to us. While I’ve made an honest effort to exhaust my knowledge on the subject, I make no claims to knowing every possible method. We should actively look for new tools and add them to this list as we become aware of them.

Card Sort Studies

These studies are often used for organization and naming conventions. There are 2 varieties. (a) Closed card sort studies involve asking participants to sort pre-determined items into pre-determined categories. (b) Open card sort studies involve asking participants to sort pre-determined items into groups which can then be labeled any way the participant chooses. Such studies are cheap and easy to conduct and provide valuable information within the limited context of language choices and organization schemes. Such problems are commonly reported in the literature involving case studies of university libraries. (Bordac & Rainwater, 2008; Proffitt, 2007; Shafikova, 2005)

Surveys

In the past we’ve had great luck with surveys. Care should be taken to word questions in a way that avoids biasing the answers. Surveys, particularly those delivered online, allow for a great number of participants from a wide range of our target audience. Smaller, more targeted surveys can also be conducted in person or on paper depending on the need. We also have access to data collected through marketing surveys conducted in 2003, 2008, and hopefully 2009.

Observation

Don’t underestimate the power of simply watching people using a system under normal conditions. Observation alone will often only produce hints or assumptions about possible problem areas. But that can be a great starting point from which to branch out into interviews or surveys. In the case of formal observations, such as with employees, a quick follow up interview can probe for more details immediately. (Pekkola et al, 2006)

Focus Groups

While there are dangers associated with relying too heavily on preference and opinion based data such as that produced by focus groups, they can provide data useful to our purposes. For example, it could be a great way to test assumptions arrived at through observing students using the website at the library. We’ve observed students doing X, which we assume is a way of working around problem Y. We could conduct a focus group to discover if Y really is a problem at all, or if students are engaging in behavior X for reasons we didn’t think of. Focus groups are most effective when drawn from within a single user category. For example, we probably would not want to mix students and faculty in the same focus group, nor faculty and administrators for that matter. (Kensing & Blomberg, 1998)

Interviews

Interviews could be paired with observations, helping to clarify and test assumptions much like a focus group. Or they could be used to help us construct personas. They would be most useful early on to help explain the context within which a given problem may be occurring.

Prototypes

Paper mock ups for a possible user interface can be produced quickly and cheaply. In a team environment, building such prototypes could even become a collaborative, iterative process. Some teams have reported success even with complex interactive paper prototypes. (Tohidi et al, 2006; see also i ♥ wireframes) Testing with such low-fi prototypes can lead to a greater willingness for participants to offer criticism since it’s obviously a work in progress.

HTML prototypes can range from semi-functional to fully functional. I’m personally much better at HTML wireframing than illustrating, so in the absence of outside assistance I will probably rely on HTML prototypes for both low-fi and high-fi testing.

Personas

Using demographic data, marketing research, and perhaps focus groups/interviews to construct an idealized but fictional user can help make user needs more personal and real (ironically enough) to a development team. Knowing that we have students with family and work obligations is important. But building a persona around that particular cluster of user attributes can make it easier to relate to and also give the development team a shorthand for addressing those issues.

For example, we could create a persona named Barbara who is a single mother of 2, working full time graveyard shift as a manager at a 24 hour restaurant, attending Vol State for 3-6 hours per semester working on a degree in Hospitality Management. She tries as often as possible to take classes on Tuesday/Thursday to minimize the burden of day time child care. She’s not technologically proficient, but she’s willing to learn. If given the choice between driving 45 minutes to campus or performing a task online, she prefers the online option in spite of her technological limitations.

That’s a very basic persona, but it still gives us enough detail to say “But what about Barbara?” rather than “But what about students who may have family or work obligations that prevent them from easily making an on campus visit on Mondays and Wednesdays?” Obviously personas could have applications outside of the realm of website development. Barbara could also provide insight into event planning, for example.

Think-Aloud Usability Study

This is the crown jewel in my usability toolbox. In a nutshell, it’s as simple as sitting a real human being in front of your product and watching them try to make sense of and use it. Ideally, you should have some specific tasks to ask them to perform and keep them talking about what they are looking at, what they think they should do next, what they expect to happen when they do it, their reaction to what actually happens when they do it, etc.

Full descriptions of the methodology can be found in lots of different places. I personally recommend Krug’s book. He’s even made the final 3 chapters of the first edition available on his website as a free download. Those chapters contain the bulk of the info on this sort of testing. There are other books, academic articles, and even blogs available, so pick your poison. Jakob Nielsen and Jared Spool have both written prolifically across many distribution channels.

Heuristic Study/Expert Review

My list would not be complete were I to leave this off, but personally I find this the most suspect. The general idea is that you have an expert go over the interface and critique it based on a list of guidelines. This idea also originated with Nielsen, and his list of guidelines can be found on his website. (Nielsen, 2005)

The highest value should be placed on methods that involve observations of real people interacting with some form of the interface in question. Preference and opinion based methods, such as surveys, focus groups, and interviews, can be quite effective for collecting marketing type data. But usability data relies too heavily on context for these methods to work alone. (Krug, 2008) However, any methods requiring direct human contact in the context of a college campus present a particular problem.

Sweat Some Details, but Document Them

One detail we have to sweat is the Institutional Review Board. All those highly valued methods that involve human interaction will require IRB approval. Even with approval, there’s no guarantee that a particular methodology will work. For example, I recently put together a proposal for a card sort study that failed to successfully recruit any participants. The good news is I’ve learned one method that does not work on this campus, so we can avoid any recruitment schemes that rely solely on face to face recruitment with no offers of compensation in the future. Eventually we will discover methods that both appease the IRB and meet with success out in the field. When that happens, we should save the IRB proposal, the information letter, the release forms, and all other deliverables. The next time the need for such a test comes up, we can build on the previous success. Many researchers, Krug among them, provide examples of their own materials that we can also integrate into our workflow.


Recommendations

  • Develop an easy to use feedback mechanism, such as a simple “Report a problem with this page” form or an Amazon-style “Was this page helpful to you?” tool. Possibly both.
  • Develop a standardized survey to be delivered on an annual basis. Collecting standardized, longitudinal data will allow for some forms of benchmarking. Each iteration of the survey can include a handful of questions of a more timely nature in addition to the standard set. For example, a question about a specific change made in the past year.
  • Continue to develop our use of Google Analytics for statistical and possible benchmarking data. However, do so with the understanding that Jared Spool has called analytics “voodoo data” because they lack context. (Spool, 2009)
  • Integrate user testing to provide the context currently missing from our toolset as laid out in the 7 step model above.


References

  • Bordac, S., & Rainwater, J. (2008). User-Centered Design in Practice: The Brown University Experience. Journal of Web Librarianship.
  • Comeaux, D. J. (2008). Usability Studies and User-Centered Design in Digital Libraries. Journal of Web Librarianship.
  • De Troyer, O., & Leune, C. (1998). WSDM: A user centered design method for Web sites. Computer Networks and ISDN Systems.
  • Kensing, F., & Blomberg, J. (1998). Participatory Design: Issues and Concerns. Computer Supported Cooperative Work.
  • Krug, S. (2005). Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition. New Riders.
  • Krug, S. (2008). Steve Krug on the least you can do about usability. Retrieved April 22, 2009.
  • Nielsen, J. (2005). Heuristics for User Interface Design. Retrieved April 22, 2009, from the usable information technology Web site.
  • Pekkola, S., Kaarilahti, N., & Pohjola, P. (2006). Towards formalised end-user participation in information systems development process: Bridging the gap between participatory design and ISD methodologies. Proceedings of the Ninth Conference on Participatory Design, 1, 21-30.
  • Proffitt, M. (2007). How and Why of User Studies: RLG's RedLightGreen as a Case Study. Journal of Archival Organization, 4.
  • Spool, J. (2009). Journey to the Center of Design. Retrieved April 22, 2009, from YouTube.
  • Tohidi, M., Buxton, W., Baecker, R., & Sellen, A. (2006). Getting the right design and the design right. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.

Monday, April 13, 2009

Version Control for the Web

Does #lazyweb work on blogs?

So I've had an idea. And it may be what finally gets me to implement a version control system around here.

A little over a year ago, I did a survey with our website visitors (by “our website” I mean the college site I maintain). I expected the biggest problem to be reported would be trouble locating information. I have full access to the server and have worked directly with the file structure for nearly 2 years now, and I still have trouble finding stuff. I was surprised to see that problems with findability were only the 2nd most commonly reported. #1 went to outdated info.

I assume the 2 problems feed off each other. You spend 15 minutes trying to find something, and then once you finally locate it, it's obviously out of date. I know I'd be upset if placed in the same situation. And unfortunately that's a situation we place our website visitors in every day.

I can publish content updates once they cross my desk, but I can't summon them from thin air. But I also realize we've got a lot of busy people on this campus who may not have the time to proofread their web content and alert me when updates are needed. I could use that as an excuse, pass the buck, and move on with my life. But I don't like that.

One thing I've done in the redesign is to link to content from a single authoritative source as often as possible. For example, the current site often lists course and curriculum info. (I'm using our Health Information Technology program as an example here, but most of our programs follow this model.) Now that's important info for both current and potential students, so I understand why we'd want to publish it. But that's redundant info that can quickly fall out of date. The catalog is the definitive source for course descriptions and curricula info. I can also count on that getting updated every year. Thankfully, Acalog, our 3rd party online catalog service, makes it possible to link to individual program descriptions. So the redesigned program pages send people there for the course and curriculum info.

That's one less point of possible failure. I know those pages will be kept up to date. I no longer have to worry about them, and neither do the deans and program directors.

Along the same lines, I'm thinking about a way to implement a reporting system that could alert me to each page that has gone X days without an update. I figure a good version control system would support this, but at this stage that's a big assumption on my part.

But if that's possible, I could create a system where the people responsible for providing content would be notified when a page could potentially be out of date. If nothing has changed in the past X days to render the content out of date, that's fine, but I'd like to be able to document that. I don't want to be seen as The Web Nazi, but if the first gentle reminder fails to get any feedback, I could follow up in a month or so with traffic figures from Google Analytics: “X people have viewed potentially out of date information on this page since [date of initial notice].”
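As a rough illustration of the kind of staleness report I have in mind, here's a sketch in plain JavaScript. The page records, field names, and 90-day threshold are all hypothetical and not tied to any particular version control system:

```javascript
// Hypothetical sketch of a staleness report. Each page record carries
// a lastModified timestamp (milliseconds since the epoch); any page
// older than the threshold gets flagged for a reminder.
function stalePages(pages, maxAgeDays, now) {
  var cutoff = now - maxAgeDays * 24 * 60 * 60 * 1000;
  return pages
    .filter(function (p) { return p.lastModified < cutoff; })
    .map(function (p) { return p.path; });
}
```

A real implementation would pull the timestamps from the version control system's log rather than carrying them around by hand.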

So my goals for this week are to generate a deliverable for my user centered design course (I tried a research project, but that failed to materialize once I hit the recruitment stage; still learned a lot though) and to research SVN, Git, CVS, etc.

Thursday, April 02, 2009

An open letter to the University of Alberta

Good morning everyone. I've got a bit of an odd situation, so let me start off with some background info.

I'm an international grad student enrolled in the MACT program. Since that program is offered primarily online, I'm still living at "home" in Gallatin, Tennessee. I picked this program because it seemed more reputable than competing programs I found in the US, both those available within commuting distance and those offered online. The online nature of the program seemed to soften most of the hurdles traditionally associated with seeking a degree across international borders.

For the first year of the program, all those assumptions proved true and I was very happy with my overall experience with both the MACT program and the University of Alberta as a whole.

Then you stopped accepting credit cards.

Now the entire landscape has changed. Had my first few semesters gone like this, I would not have stuck with the program. By the time things took a turn for the worse, I was heavily invested in the program and hated to turn back. I can't say for certain that this move is costing you business, but I can state with authority that it has greatly tarnished my personal experience with and perception of the U of A brand.

As of right now, I still have a balance of $20.66 CAD. But on the day I made my international wire transfer payment, I printed the exchange rates published on the U of A website along with the info sheet for international wire transfers. I took this to my bank and paid a $45 USD fee for the wire transfer. Apparently in the 4 days or so it took the transfer to clear, the exchange rate was not kind to me, so my payment fell short by about 2.8%.

But how am I supposed to deal with this? I took the most up to date info available to me at the time, provided by you. I have no more control over the markets than you do. Last semester, the same thing happened, only I ended up overpaying by about $25 CAD. That semester, I sent money orders via international post. That cost about $18 USD, required me to fill out customs forms, would not allow me to get any sort of delivery confirmation, and took several weeks for delivery. I had assumed it was the length of time that resulted in the imbalance, so this term I tried the faster yet more expensive method of international wire transfer. Although I must admit the relative speed is also much easier on my mind. Having no way to track or confirm delivery of something as important as payment for my education provided a very stressful couple of weeks last semester. That stress stems from a process over which you have no control, but the fact of the matter is I experienced that stress as a direct result of a change to your systems. So that experience tarnishes your brand perception anyway. I can avoid that stress in the future by disassociating myself from your brand.

With credit card payments, Visa or Mastercard took care of all the exchange rate madness instantly, so such discrepancies did not happen. Now, international wire transfer is the fastest method available to me, and that takes close to a full business week. It will be virtually impossible for me to make a payment that is correct to the penny. If I'm short, I can either spend $18 (USD) to send you $20 (CAD), and wait for several weeks. Or I can spend $45 (USD) to send you $20 (CAD) and wait about a week. Either way we can easily end up in a Zeno's Arrow situation where I'm faced with spending $18 or $45 (USD) to send you $0.60 (CAD) a month later. If I'm over, I'm sure your institution is placed in a similar situation. Only I lack the means or authority to state and enforce exchange rates past the initial transaction. And to tell the truth, I probably wouldn't worry about it even if I could. How much hassle justifies that last 3% lost to the market?

So to avoid all this, I've asked a classmate, an Edmonton native, to settle up my account as a personal favor to me. Given my options, being indebted to a classmate is the least ugly and gets around the volatility of the market.

Chances are, I'm an edge case. I doubt you have many, if any, other international students who are not actually living in Canada. Such students would have access to Canadian bank accounts and thus would be able to pay with "electronic checks" using native Canadian currency. But I would not be surprised to learn that the parents of international students have found themselves in situations similar to my own. If you've noticed a dip in the recruitment or retention rate of international students, the added red tape introduced by removing the option to pay with a credit card would be the first place to look.

This process costs me about an extra 3% per semester anyway. There's no quantitative measure for the added stress, and I have no means to track the time lost compared to the simplicity of the "old" credit card based payment system. If your institution offered an option to pay with a credit card given a ~3% markup to cover the added costs to you, I'd accept that offer in a heartbeat. And your brand perception could shrug off the layers of stress and aggravation I've come to associate with it over the past year.

Thank you for your time.
Derek Pennycuff

Thursday, February 12, 2009

Javascript performance testing

I've been horribly disappointed with IE's performance with some of my javascript, to the point that I've pretty much decided to simply hide the troublesome functions from IE users. That's nearly 85% of our audience who won't get to experience the work I've put into some of this stuff. I thought I needed to get some actual figures on performance, so I took the worst offender and turned it into a test case. Then I thought while I'm doing that I might as well compare jQuery 1.2.6 to jQuery 1.3.1. So I made a 2nd test case.

First of all, you need to understand that I'm terribly abusing the plugin in question. I'm using treeview to do the online version of our organizational chart. There's a demo showing how it works with large lists. That demo has 290 total list items, 53 of which are expandable. Our org chart has 403 list items, 81 of which are expandable. It's large enough that I had to edit the default images to extend all the way down the fully expanded list.

I added a crude counter to the plugin. Here's the code. The lines involving Date() and getTime() and alert() functions are my additions (114 to 127 or so). It's far from perfect, but it should be equally imperfect for all browsers and therefore free of bias. What this does is whenever the expand all or collapse all function is triggered, it grabs the time early on in that process, then grabs the time again near the end of that process, computes the difference (in milliseconds) and alerts the value. In each browser I expanded the list, recorded the value, collapsed the list, recorded that value, and repeated the process until I had 15 measurements for both the expand and the collapse feature.
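For anyone curious, the core of that crude counter amounts to something like the following. This is a simplified sketch, not the actual edited plugin code, and the function name is my own:

```javascript
// Simplified sketch of the timing instrumentation: grab the time when
// the expand-all/collapse-all handler starts, grab it again at the
// end, and report the difference in milliseconds.
function timeInMs(fn) {
  var start = new Date().getTime();
  fn(); // the treeview expand or collapse routine goes here
  var elapsed = new Date().getTime() - start;
  return elapsed; // the plugin version alert()s this value instead
}
```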

The Data

We'll do the Mac browsers first. I'm running on a 20 inch iMac with a 2.0 GHz Intel Core Duo processor and 4 gigs of RAM. I ran these tests under "normal" conditions, so I had other applications going and, in the case of Firefox, several other tabs open.

All figures are in milliseconds.

Firefox 3.0.6

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         838       1178       506       1006
2         840       1184       514       1024
3         847       1183       511       999
4         826       1188       493       998
5         827       1188       482       1011
6         821       1184       513       998
7         839       1187       480       1010
8         835       1205       510       1003
9         827       1186       493       1009
10        827       1190       484       1008
11        825       1189       508       1005
12        839       1182       484       1002
13        824       1185       492       1000
14        816       1182       512       997
15        845       1190       491       994
Min       816       1178       480       994
Max       847       1205       514       1024
Range     31        27         34        30
Median    827       1186       493       1003
Average   831.73    1186.73    498.2     1004.26
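The summary rows are straightforward to derive from each 15-sample column. Here's a sketch of the calculation in plain JavaScript (the function name is my own):

```javascript
// Derive min, max, range, median, and average from a column of timing
// samples. The median lookup assumes an odd-length sample (15 runs).
function summarize(samples) {
  var sorted = samples.slice().sort(function (a, b) { return a - b; });
  var sum = sorted.reduce(function (acc, n) { return acc + n; }, 0);
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    range: sorted[sorted.length - 1] - sorted[0],
    median: sorted[Math.floor(sorted.length / 2)],
    average: sum / sorted.length
  };
}
```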

The most surprising thing here is that jQuery 1.2.6 is actually a bit faster than 1.3.1. Of course, 1.3.2 is supposed to be released soon, and as we'll see, the 1.2.6 advantage doesn't hold in every browser. Collapsing takes longer than expanding; I'll leave explaining that to someone who knows more about jQuery DOM traversal.

Safari 3.2.1

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         262       161        311       147
2         256       159        311       146
3         267       164        326       149
4         269       163        316       151
5         264       167        314       148
6         269       164        311       148
7         266       169        311       146
8         267       165        319       149
9         270       163        312       149
10        270       164        314       149
11        267       165        310       149
12        269       164        316       151
13        292       164        318       148
14        267       166        318       152
15        267       166        317       150
Min       256       159        310       146
Max       292       169        326       152
Range     36        10         16        6
Median    267       164        314       149
Average   268.13    164.27     314.9     148.8

At first I thought the performance boost for 1.3.1 here was pretty small, but I think the fact that all the numbers involved are tiny threw me off. If you add up the median figures, you'll find a difference of about 7.5%. That's nothing compared to the relative advantage Safari already has over most of the other browsers, but I think it's still a decent speed boost for the new jQuery release.
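To spell out that arithmetic (numbers taken from the median row above):

```javascript
// Summed medians for each jQuery version in Safari on the Mac,
// and the relative difference between them.
var v131 = 267 + 164; // 1.3.1: expand + collapse medians = 431 ms
var v126 = 314 + 149; // 1.2.6: expand + collapse medians = 463 ms
var speedup = (v126 - v131) / v131; // roughly 0.074, i.e. about 7.5%
```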

Safari's js engine is awesome. :)

Can anyone explain why in Safari collapse is quicker than expand, but in Firefox it's the other way around?

Opera 9.63

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         692       698        875       691
2         690       709        944       711
3         737       721        904       692
4         667       709        875       692
5         711       715        872       684
6         703       730        869       692
7         715       697        864       741
8         703       695        929       688
9         704       692        905       652
10        692       696        910       687
11        719       705        891       674
12        680       704        885       707
13        731       697        926       701
14        695       713        857       692
15        682       696        918       690
Min       667       692        857       652
Max       737       730        944       741
Range     70        38         87        89
Median    703       698        891       692
Average   701.4     705.13     894.93    692.93

Opera seems slow compared to Safari, but it's on par with Firefox, and more than tolerable. We also see a decent speed boost with 1.3.1 here.

Windows Browsers

I'm running Windows XP via virtualization with 412 megs of RAM. These tests were run with no other applications open, so more "ideal" conditions than the Mac browsers got. Then again, the Macs get a ton more RAM and aren't running in a virtual machine. I tested both Firefox 2 and Firefox 3 because a decent portion of our Firefox-using audience hasn't upgraded yet.


Firefox 2

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         1018      1169       629       740
2         1069      1109       629       719
3         1019      1129       609       739
4         1019      1129       649       729
5         1019      1139       609       749
6         1019      1159       639       749
7         989       1129       619       749
8         1049      1189       629       730
9         1019      1149       619       739
10        989       1129       649       739
11        1029      1128       630       759
12        1009      1129       629       750
13        1029      1129       640       759
14        999       1189       629       719
15        969       1119       639       769

Here we're starting to get slower than I'd like. A full second for anything to happen is a bit much. Then again, 403 list items in 81 lists is a lot to munch on. Once again we lose a little speed with the new jQuery release. Hmm.

Firefox 3.0.6

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         431       657        450       618
2         449       688        467       650
3         461       648        424       634
4         434       623        475       638
5         464       657        443       667
6         472       708        456       637
7         558       632        446       632
8         493       650        472       632
9         481       695        460       646
10        436       649        477       643
11        457       634        449       622
12        442       649        464       614
13        472       648        483       628
14        462       659        468       651
15        458       625        440       640

Here we go! Come on Firefox users, update already!

Safari 3.2.1

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         253       141        303       130
2         326       171        300       133
3         277       148        324       129
4         257       160        327       134
5         268       147        313       135
6         288       143        312       134
7         278       147        283       129
8         269       150        297       130
9         244       151        355       132
10        271       171        332       130
11        264       163        367       133
12        281       158        332       135
13        284       154        360       148
14        281       142        317       133
15        293       147        307       129

Safari continues to be awesome, even on Windows. :)

Opera 9.63

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         470       399        949       370
2         480       410        989       380
3         469       429        949       390
4         480       409        929       409
5         480       419        939       400
6         499       399        939       390
7         490       419        939       370
8         580       409        929       390
9         469       400        939       380
10        519       400        939       400
11        460       400        959       399
12        470       430        949       420
13        479       410        958       380
14        563       399        1009      400
15        480       399        989       420

With Opera on Windows, we see some pretty solid speed boosts with the 1.3.1 release of jQuery.

Chrome

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         229       88         512       111
2         203       77         598       112
3         191       119        546       133
4         195       88         607       115
5         158       79         607       109
6         208       106        640       105
7         201       84         526       103
8         188       86         674       103
9         204       95         545       105
10        244       158        631       106
11        360       116        549       133
12        327       145        635       112
13        307       135        581       109
14        305       61         616       110
15        178       81         531       114

Chrome also sees some good boosts from 1.3.1 and manages to out-awesome even Safari. When you look at the range as a percentage, though, it's all over the place. Sometimes the alert box would have a check box in it saying something to the effect of "prevent this site from producing alert boxes," and I think those alerts tended to take longer to generate. Maybe Chrome's doing other stuff behind the scenes. I'd need to do more specific research to figure this one out.

IE 7.0.5730.13

jQuery:   1.3.1                1.2.6
Trial     Expand    Collapse   Expand    Collapse
1         2990      1718       2538      1248
2         2897      1738       2587      1219
3         2867      1832       2557      1249
4         2917      1718       2557      1248
5         3016      1668       2577      1279
6         2877      1708       2568      1249
7         2877      1678       2567      1208
8         2847      1678       2557      1228
9         2907      1698       2548      1258
10        2897      1709       2707      1229
11        2877      1728       2548      1229
12        2916      1738       2708      1239
13        2897      1689       2558      1389
14        2888      1688       2568      1229
15        2848      1708       2568      1239

Ugh! I'm in browser hell! What's up with this? 2.5 to 3 seconds to expand the list? Come on! And notice that 1.3.1 is actually slower than 1.2.6 here.


IE 6

I don't know the exact version number here because of the way the Multiple IE installer works.

jQuery:       1.3.1                1.2.6
Trial     Expand  Collapse     Expand  Collapse
  1       10779     4935        11078     3899
  2       12577     5165        12617     4005
  3       13060     5774        13167     4356
  4       14085     5964        14785     4715
  5       15034     6314        15684     5105
  6       16123     7032        16876     5534
  7       17272     7152        19643     6453
  8       19549     8021        21740     7366
  9       19859     8231        20642     6842
 10       21121     8611        21947     7453
 11       23080     9082        22726     7801
 12       23249     9402        23925     8241
 13       24961     9892        36235     9700
 14       24770     9959        28810     9364
 15       25953    10349        29422     9849

No. I was wrong before. This is browser hell. Notice how each trial takes about a full second longer than the one before it? I think that means we've got a memory leak. I closed and relaunched the browser between test cases to avoid skewing the results between jQuery versions.
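The "one second per trial" eyeball estimate holds up if you run the numbers on the 1.3.1 expand column above:

```javascript
// Average per-trial growth across the fifteen 1.3.1 expand trials in
// this browser: (last - first) spread over 14 intervals.
var expand131 = [10779, 12577, 13060, 14085, 15034, 16123, 17272,
                 19549, 19859, 21121, 23080, 23249, 24961, 24770, 25953];
var growthPerTrial = (expand131[14] - expand131[0]) / 14;
// growthPerTrial works out to roughly 1084ms added per trial
```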

I'll complete the number crunching and do a bit more in depth analysis tomorrow. It's 5 o'clock and I'm heading home for the day.

Thursday, February 05, 2009

Content! Third-hand content, but still, content!

One of the people I encountered in all the discussion on higher ed in the web design industry is Kurt Schmidt. (Isn't that one of the greatest domain names ever?)

Since then, I've been following his blog semi-daily. Earlier today he posted Make Your Site Faster - Or Else!, which is his take on info from Geeking with Greg. I started to reply to Kurt, but about the time I started my 3rd paragraph I figured I should just turn it into a blog post here. See what I mean about blogs and peer review?

The importance of speed and performance on the user experience is something that I'm currently struggling with. In the redesign I'm working on, I've made extensive use of jQuery. Everything runs super quick on all the Mac browsers. Chrome and Safari on Windows also blaze. Firefox on Windows is noticeably slower than the rest, but still tolerable. Who does that leave to drool on his shirt and crap his pants at the browser testing party?

Both IE6 and IE7 are intolerably slow. I've considered simply filtering out IE6 from all the js goodness. Since I've designed for progressive enhancement, everything will still function, just not the same way. But that still leaves IE7 users with a clunky and ungraceful experience. As much as I hate IE, I won't kid myself about its market share.

The best solution from the users' point of view would be to test each jQuery implementation on the site separately and selectively enable/disable for IE based on performance in context. Some of the stuff I'm doing on the redesign works fine in IE. I know because I've been doing it on the current site for more than a year.
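A sketch of what that selective enabling might look like, using the `$.browser` flags jQuery shipped with at the time (the `shouldEnhance` helper and its threshold are my own invention, purely illustrative):

```javascript
// Hypothetical gate for expensive enhancements. With progressive
// enhancement in place, returning false just means that browser gets
// the plain, fully functional markup.
function shouldEnhance(browser) {
  // Skip IE below version 7 entirely.
  if (browser.msie && parseFloat(browser.version) < 7) {
    return false;
  }
  return true;
}

// In page code, per feature:
//   if (shouldEnhance($.browser)) { setUpExpandCollapse(); }
```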

I also need to dig deeper into my jQuery-fu and make sure my calls are as efficient as possible. But some of it is straight-up plug-in functionality, and I'm not too keen on rewriting other people's plug-ins to increase efficiency. The next version of Firefox will probably incorporate ideas currently only available in Chrome's V8 js engine, making its tolerable performance that much better. But that will probably just widen the gap between IE users and everyone else even more!
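One of the cheapest wins in that direction is not repeating DOM queries. The idea, stripped of the DOM so it can stand alone (a memoized lookup; `queryDom` below is a stand-in for a real `$()` call):

```javascript
// Cache expensive lookups by key: the underlying query function runs
// once per selector, and repeat requests hit the cache.
function makeCachedQuery(queryDom) {
  var cache = {};
  return function (selector) {
    if (!cache.hasOwnProperty(selector)) {
      cache[selector] = queryDom(selector);
    }
    return cache[selector];
  };
}

// With jQuery, the same habit looks like storing the wrapped set:
//   var $items = $('#nav li');   // one DOM query...
//   $items.addClass('collapsed');
//   $items.hide();               // ...reused here
```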

The current version of jQuery (which I haven't updated to yet, oops) also has a revamped selector engine that should boost performance across the board. But unless those changes impact performance in IE a hell of a lot more than everyone else, the gap will still be huge. It's the gap in the user experience that worries me.

What little testing I've done so far would seem to indicate that IE6 is bad at, say, 2000 milliseconds. IE7 is barely any better and a lot less consistent, with speeds ranging from 1500-2300ms. Firefox is around 700ms. Opera is 300-400ms. Safari is 200-300ms. Chrome is about 150ms (10 times better than IE7's best run!). With those numbers, even if Sizzle improves performance by 20% across the board, Chrome gains 30ms and IE7 gains at least 300ms. Maybe the gap will get smaller. But still, 1.2 seconds waiting for all the js to trigger is unacceptable from a user-centered outlook. And why should I as a designer, and the 20% or so of our users who don't run IE, be punished simply because IE can't get its crap together? These sorts of situations are the only part of my job I hate. I'll stop there before this turns into a rant.
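That back-of-the-envelope math is just proportional scaling, which is why an across-the-board percentage can't close an absolute gap:

```javascript
// A uniform 20% speedup shrinks every browser's time by the same
// fraction, so the absolute savings scale with how slow you already are.
function after20Percent(ms) {
  return ms * 0.8;
}

// Chrome: 150ms -> 120ms (saves 30ms)
// IE7:   1500ms -> 1200ms (saves 300ms, but still 1.2 seconds)
```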

Well, now that I know the new version of jQuery is out, it's pointless to flap my jaw any more about this stuff until I've updated and retested. I've got work to do.

I've been neglecting all my imaginary readers

Things have been crazy. Professionally, academically, personally; you name it. Chaos defines me. Even more than usual. I've had plenty of blog worthy thoughts in the past month, but this is the first chance I've had to sit down and actually blog, you know, as a verb.

There was an explosion of discussion over at A List Apart on higher ed and web design. And by explosion, I mean I posted a lot and the discussion was meaningful to me. I'm sure there are articles with a hell of a lot more discussion.

The articles in question are Elevate Web Design at the University Level by Leslie Jensen-Inman and Brighter Horizons for Web Education by Aarron Walter. Both are worth checking out if you haven't already.

I've tried, and failed, to articulate what my heart tells me about this issue. I've posted in the discussions for both those articles. I've posted to the University Web Developers group on Ning. I've posted on various blogs where the discussion has spilled over. I've carried on conversations via email with a few folks. The closest I've come to getting it right is probably this comment in response to Leslie's article.

I'm trying to make the case in favor of a new type of “digital apprenticeship” in the industry. I think a formal education can work in our field. No. I know it can. My bachelor's was a great model and actually follows a lot of the suggestions that people have come up with in these ongoing discussions. But the shelf life of that program as I experienced it was limited to about 5 years. A change in leadership is all it took to bring it back to business as usual (in other words, the kind of troubled program Leslie was targeting with her research). I don't think our industry can afford to wait a generation for the top level of campus politics to embrace the educational philosophies required to produce marketable graduates. For the next 20 years we'll see a lot of hit-or-miss programs, and some campuses will be more progressive in that respect than others. There are probably half a dozen or so solid programs in existence right now that have the proper institutional support to remain viable and stable. But I don't think students should be required to seek out places like MIT to find a decent web design program. And I'm sure less prestigious or well-known campuses are capable of fielding the sort of program I'm talking about, but we're still faced with the problem of how students find out about them. I mean, the only reason I found the program at TTU is that I started out there as an engineering student.

So while I appreciate the efforts that are being made to entice higher ed to get with the program, I don't think that's a sphere where we have any real influence. A lack of solid educational material hasn't been a problem in this industry for at least a decade. The fact that most academics will pay no mind to such material until it's formalized in a book or an accepted peer reviewed journal is a failing on their part (within the context of our industry), not on our part. In my mind, the spirit of peer review is alive and well on prominent blogs. And the transparent nature of the comment and debate system removes a lot of the politics that can infect trade journals. Movements like open course ware seem to indicate that some academics understand this and are forging new educational models. But until that kind of thinking can emerge from the underground, it only serves as a further means of exclusion.

We can't make higher ed listen to us. And we can't force HR departments to stop requiring a degree for a given position. But I don't think we have to. Rather than trying to maneuver around the momentum and bureaucracy of the old schools of thought, we may be able to build a better model ourselves. After a couple of hiring cycles of employees with graphic design or computer science degrees who can't cover the full range of skills a web position requires, HR might learn to value portfolios and experience a bit more, at least for these positions. And if universities start noticing problems with retention and the ability of their graduates to compete in the market, they'll come to us for a solution.

Right now I feel like we're trying to cook a more enticing meal for a man who doesn't even realize he's hungry. But he's not just hungry, he's starving. Things like the Opera Web Standards Curriculum are great. But in the mind of most academics, it's foistware. We're pitching a solution to a problem that isn't even on the radar yet.

But, I have a bit more hope now than a few weeks ago. Leslie's article on ALA was largely preaching to the choir. But she's an academic herself (and a fellow Tennessean). The research that led her to produce the ALA article is also the sort of research that academics may be willing to pay attention to. In fact, she caught the attention of the Chronicle of Higher Education. Maybe academia is more willing to listen than I've assumed. Still, I'll believe it when I see it.

Tuesday, January 06, 2009

The fruits of my labor

It's taken a year and a half, but I have reached a milestone here at work.

After 18 months of listening to iTunes and rating every song, I now have a library of nearly 3,500 songs that are all 4 or 5 stars in my not so humble opinion. I've culled the rest of the herd (from my work library anyway, I keep the 3 stars at home).

Of course, I complete this little project just as Genius becomes available, somewhat rendering my library of nothing but awesome songs less relevant.

In the past I've tried Pandora and similar streaming services, and I like them, a lot. They're a great way to discover new music. But they tend to get a bit repetitive after a solid week or so of listening. That's not a viable long-term solution for me. Also, I worry about bandwidth issues here at work. If I can create a “My Top Rated” playlist that's 11 days long, I probably should. If we get several people streaming Pandora on the campus network, it would probably start to present some congestion issues. So I'll do my part to conserve a limited resource. :)