Moving the University website to HTML9 Responsive Boilerstrap JS


When we redesigned the University website in 2008 we adopted the Blueprint CSS framework which greatly cut down our CSS development time. It offered us a solid, cross-browser compatible foundation on which to build our site.

With the advent of HTML5 and responsive web design we started using the Bootstrap framework which offered not only a mobile-friendly grid but also at least 12 JavaScript plugins to provide design patterns such as modal windows, carousels, tabs and accordions.

As you are probably aware, the web is now developing at an astonishing rate. Browsers are now updated on a six-weekly cycle, HTML5 is nearing completion, and CSS4 is already being discussed. So we are looking to the future for a new framework that will support our needs, and the requirements of our increasingly mobile users, for the next few years.

In consultation with Prof. Ailol from the University’s School of Computer Science we will be planning a migration to the HTML9 Responsive Boilerstrap JS starting today.

The framework describes itself as

a flexible, dependency-free, lightweight, device-agnostic, modular, baked-in, component framework MVC library shoelacestrap to help you kickstart your responsive CSS-based app architecture backbone […]

In other words: it meets our needs perfectly. Or as the developer of the infamous “All your base are belong to us” might say: All proof I.

A few of the many reasons we’ve selected this framework:

  1. It is entirely suited to today’s web.
  2. Unlike many other frameworks this uses a poor fill rather than a polyfill.
  3. It supports the new JavaScript flair loop.
  4. It is compatible with the forthcoming Commodore 64, Spectrum 48 and BBC B platforms, as well as popular browsers such as IE 6, Netscape Navigator 4 and Mosaic.
  5. No polar bears were harmed in its creation; in other words it is in keeping with our IT strategy for green computing.
  6. It is 100% compatible with JaxelScrobd 8.1.π.

From a usability point of view the only difference you may experience is a mild sense of foolishness.


Coloured lists in Trello

Background colours on selected lists in Trello

As you probably know, we’re big fans of Trello here in the web team.

We have a number of project boards, overseen by one master, everything-we’re-doing™ board called @Web team. It currently has 24 columns (or lists) which makes quickly navigating to the right list a little tricky at times.

So today I quickly wrote a hacky little Tampermonkey (Chrome) / Greasemonkey (Firefox) script to add background colours to certain lists.

For colours, I simply selected a theme at random from Adobe Kuler.

It could be optimised, but it does what I need it to without too much of a performance hit, and already I’ve found it really helpful for immediately identifying the “in progress”, “this week” and “done” columns, as well as marking our project backlog columns.

// ==UserScript==
// @name        Web team - Trello highlight lists
// @description Highlight certain columns in Trello.
// @include     https://trello.com/b/YOUR-BOARD-URL
// @require     https://code.jquery.com/jquery-1.11.0.min.js
// @version     2.1
// @copyright   2014, Gareth J M Saunders
// ==/UserScript==

$(document).ready(function() {

    var black = '#393939',
        brown = '#a39386',
        green = '#a8c0aa',
        red   = '#a7585b',
        white = '#fff';

    // Trello draws the board with Ajax after the DOM is ready, so a hover
    // event on the body is (ab)used to reapply the colours once the lists exist.
    $('body').hover(function() {
        $("h2:contains('PROJECTS'), h2:contains('TEAM ADMIN')").css('color', black).parents('.list').css('background', brown);
        $("h2:contains('IN PROGRESS')").css('color', black).parents('.list').css('background', green);
        $("h2:contains('THIS')").css('color', white).parents('.list').css('background', red);
        $("h2:contains('DONE')").css('color', white).parents('.list').css('background', black);
    });
});

Please feel free to use it, adapt it, improve it, or comment on this post.

Obviously, change the :contains() keywords to find your own list headings, and the @include URL to that of your board.

Update

  • I noticed that I was trying to pull in the jQuery file under http rather than https, which was causing problems.
  • It turns out that the DOM loads long before the content, as there is some Ajax jiggery-pokery going on. So I’m using a hover event on the body to force the colours to change, which is a horrible hack but works, and does also pick up new lists created with these keywords.
  • 2014-12-08 I’ve discovered that rather than // @include https://trello.com/b/YOUR-BOARD-URL, I can use // @match //trello.com/* which will then work across all my Trello boards and follows the protocol of the board being viewed (which seems to always be https:// for me).

To do

  1. Use arrays and/or variables to store common colours.
  2. Improve the event handling to determine the best way to load the script.
  3. Reduce duplication in identifying the columns (h2:contains gets repeated a lot) and adding the CSS rules; see the sketch below.
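
By way of illustration, items 1 and 3 might end up looking something like this. It is only a sketch: untested against Trello’s live markup, and the rules array and applyRules() function are names I have just invented.

var colours = { black: '#393939', brown: '#a39386', green: '#a8c0aa', red: '#a7585b', white: '#fff' };

// One entry per highlight rule: which heading keywords to match,
// and which text/background colours to apply to the matching lists.
var rules = [
    { match: ['PROJECTS', 'TEAM ADMIN'], text: colours.black, background: colours.brown },
    { match: ['IN PROGRESS'],            text: colours.black, background: colours.green },
    { match: ['THIS'],                   text: colours.white, background: colours.red },
    { match: ['DONE'],                   text: colours.white, background: colours.black }
];

function applyRules() {
    $.each(rules, function (i, rule) {
        // Build "h2:contains('A'), h2:contains('B')" from the keywords.
        var selector = $.map(rule.match, function (keyword) {
            return "h2:contains('" + keyword + "')";
        }).join(', ');
        $(selector).css('color', rule.text).parents('.list').css('background', rule.background);
    });
}

// The same hover hack as before, now calling a single helper.
$('body').hover(applyRules);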

A scalable and modular approach to writing CSS

Scalable and Modular Architecture for CSS

Something that I’ve been keen to introduce to the web team for quite a while is a coding standard: a style guide, a consistent way for the whole team to write and format their code (whether HTML, CSS, JavaScript or PHP) and name their assets (documents, images, videos, etc.), so we spend less time trying to figure out what the last person did and more time being productive.

I actually started work on one after a meeting we had with a few developers and project managers back in October 2010, but I kept getting interrupted; I kept being taken off ‘just writing documentation’ and put onto other projects because they were seen as more important.

The importance of code conventions

Interestingly, the latest JavaScript Jabber podcast (#075) features a long conversation with Nicholas Zakas, author of Maintainable JavaScript, who talks about the importance of writing code that others can immediately understand and work with, without needing to reformat or rewrite it.

The idea behind maintainable JavaScript, or really maintainable anything, is that somebody other than you has a fighting chance at maintaining it later on. […]

There are a few things that go along with that: […] When you look at the code it’s laid out nicely; you’re playing by the rules that have been set down at your company or on your team.

One of the biggest things that annoyed me in my career is that you go some place and you’d have five people on the team and they are all writing code slightly different. How many times have you ever opened up a file and before you did anything you’ve reset all of the formatting?

The discussion then moves on to the importance of code formatting issues. One of the co-hosts confesses that he has often become more incensed by co-workers using inconsistent formatting than by anything else because it adds needless extra mental effort.

Zakas agrees and refers to Daniel Kahneman’s book Thinking, Fast and Slow, explaining that our brains get used to particular patterns, and when you detect an anomaly in a pattern you’re used to, you switch into a different mode in your brain and that upsets you.

Zakas explains that when he worked as a consultant he used to go into companies to help them sort out their coding conventions:

I’m a very big believer in the broken windows theory where you need to do the small things right if you have any chance to get the big things right.

Jonathan Snook: SMACSS

One approach to organising CSS code that I’ve been investigating and trying out is SMACSS (scalable and modular architecture for CSS) which has been developed by Canadian developer Jonathan Snook.

In his book he talks about structuring your CSS not only in terms of how to organise the code but also the naming convention that he uses for classes and IDs.

What I’ve found immediately helpful, however, is the way that he categorises CSS rules into five groups:

  1. Base
    These are the defaults, usually single element selectors such as a, input or li.
  2. Layout
    These rules divide the page into sections.
  3. Modules
    These rules introduce reusable blocks, such as sidebar sections, product lists, carousels, tabs.
  4. State
    This was a new way of thinking for me, which I find particularly useful: how do things look in a particular state, e.g. when something is hidden or expanded, active or inactive. I especially like his class naming conventions for state which makes them readable, for example: .is-pressed, .is-disabled.
  5. Themes
    Themes are something that we use already on the University website: we often have a common core of page styles and elements but which are themed or skinned differently, for example compare Current Staff with Current Students.

I’ve used this approach in a few stylesheets now and the clarity it brings to organising my code has been very welcome. I’ve found myself asking “what is this rule for?”: this is layout, so it goes in the layout section; that is a module, so it goes in the modules section. It has also allowed me to offer a generic theme within the module itself, and override it with a particular theme if required. Very useful.
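
To illustrate, here is my own contrived example (not one from the book) of a skeleton stylesheet organised along SMACSS lines:

/* 1. Base: bare element defaults */
body { font-family: sans-serif; }
a { color: #069; }

/* 2. Layout: rules that divide the page into sections */
#header, #footer { clear: both; }
#sidebar { float: left; width: 25%; }

/* 3. Module: reusable blocks, such as a product list */
.product-list { margin: 0; padding: 0; }
.product-list li { list-style: none; }

/* 4. State: how things look when hidden, expanded, pressed */
.is-hidden { display: none; }
.is-pressed { background: #ddd; }

/* 5. Theme: a generic skin that a particular site section can override */
.product-list { border: 1px solid #ccc; }
.theme-students .product-list { border-color: #7ab800; }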

My next task is to explore Snook’s naming rules (he uses prefixes such as l- for layouts and is- for states) and compare them against the BEM (block-element-modifier) approach advocated by Yandex.
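
As a contrived example of my own, the same widget named both ways looks like this (the double-dash modifier is one common BEM convention):

/* SMACSS-style prefixes (Snook) */
.l-header { }      /* layout */
.btn { }           /* module */
.btn-icon { }      /* part of the module */
.is-disabled { }   /* state */

/* BEM (Yandex): block__element--modifier */
.btn { }
.btn__icon { }
.btn--disabled { }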

No doubt I will report back.

Check accessibility with HTML_CodeSniffer from Squiz

University homepage showing results of HTML Code Sniffer.

Last month we had a visit from web content management system company Squiz. The primary reason for their visit was to demo their CMS Matrix but one of the really cool and practical things they left us with was simply making us aware of HTML_CodeSniffer.

Written entirely in JavaScript, HTML_CodeSniffer checks the source code of a web page and shows you where your code doesn’t meet a particular accessibility standard: it supports WCAG 2.0 (levels A, AA and AAA) and the US Section 508 standard.

(These pages give a very clear summary of the different standards.)

Install and use

It’s rather simple to install: visit the HTML_CodeSniffer website, then drag the bookmarklet to your bookmarks bar. (A bookmarklet is essentially a bookmark that contains JavaScript code rather than a web address.)
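
By way of illustration only (this is nothing like CodeSniffer’s actual code, which is far more involved), a bookmarklet is simply a javascript: URL saved as a bookmark:

javascript:(function () { alert('This page has ' + document.images.length + ' images.'); })();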

To use HTML_CodeSniffer visit the web page you want to test (it can be a local page as well as a standard, hosted page, which is useful) and click on your new bookmarklet.

Results

By default it appears to evaluate your site against the WCAG 2.0 AA standard:

CodeSniffer results

but you can easily change that using the drop-down in the top right corner:

CodeSniffer accessibility standard settings

Particularly useful are the reports that HTML_CodeSniffer offers for any errors, warnings and notices that it discovers. Select which areas you want to see results for, using the blue toggle switches, then click View Report.

You are then presented with a paginated list of results:

CodeSniffer results

Clicking on a result gives you more specific details about the issue, including a code snippet, as well as—most helpfully—an animated, bouncing marker showing you where on the page the error or warning applies:

Detailed results from CodeSniffer

The yellow marker beneath the search box (on the left) shows what the warning refers to.

I can certainly see HTML_CodeSniffer being a particularly useful tool while developing sites.

Find it on GitHub: HTML_CodeSniffer.

Job vacancy: web developer

Spanner lying on a laptop keyboard

The University is looking for an experienced web developer to join the web team.

  • Grade: 5
  • Salary: £24,766 to £29,541 per year
  • Fixed term: 3 years
  • Start: as soon as possible
  • Closing date for applications: Friday 19 July 2013

The main focus of the job will be helping to design and develop small-scale web applications, and adding functionality to existing pages and websites, so a solid and demonstrable knowledge of HTML, CSS, JavaScript and PHP is required.

Our enterprise web content management system is the commercial, Java-based TerminalFour Site Manager (although we’ve never had to dabble with any Java), running on an Oracle database; and we’re working a lot with WordPress (and MySQL) too these days (both stand-alone installations and WordPress multisite). Prior knowledge of either is not required as training will be offered.

We’re currently using an adapted version of the Blueprint CSS framework and a patchwork of jQuery plugins, but we have plans to move to the Bootstrap framework, the LESS CSS pre-processor and a host of other Node.js-enabled time-saving goodies. We currently don’t use any particular PHP framework, although PHPMaker has been used to help generate a few applications; that’s not to say that we’re not open to adopting a PHP framework. We often use Agile methodologies in our project work and use Trello to keep ourselves organised.

It’s not all about hardcore coding, however. An important element of the job will be to deal with website users and content creators in a support and perhaps even training role. The web team offers first, second and third line support, and you will be expected to get involved there too.

The web team is currently made up of four members (web manager, web architect, web editor and web apprentice) with a fifth member on secondment to Corporate Communications for 18 months. Speaking as someone who is obviously somewhat biased, the web team is a good, fun and supportive place to work. Do you fancy joining us?

More details can be found on the University’s job vacancy website – the job reference code is SB1005.

New Museums and Collections website

I say “new”, but the Museums and Collections website actually launched a good few months ago now. But I have been so busy over the summer that I haven’t got round to writing about it, until now.

I recall that the very first meeting I had when I began working here two years ago was about the Museum Collections Unit’s web presence. The ultimate result was this new website, which we worked on during the spring and summer.

Screenshot of the MUSA homepage

The visual design was created by Steve Evans, the Web Manager. It was then passed on to me to build within our content management system, TerminalFour Site Manager.

It is a fairly complex website, and definitely the most challenging project I have worked on. I really enjoyed stretching my limits and working with T4 Site Manager in new ways.

Creating the homepage

My first task was to create the ‘four panels’ page, as I called it. This is the overarching Museums and Collections homepage, which links through to the individual websites for each of the four museums. Each museum is represented by a square, which expands to reveal more information when the user hovers over it.

Screenshot of the Museums and Collections homepage

This uses JavaScript, which has not been a particular strength of mine in the past. But I am beginning to wonder if that has changed, because I surprised myself by achieving this result quite quickly.
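
I won’t reproduce the production code here, but the underlying hover-to-expand effect is the kind of thing jQuery makes short work of. Roughly like this, with the class names invented for illustration:

$('.museum-panel').hover(
    function () {
        // Expand the panel to reveal the extra information.
        $(this).find('.panel-details').stop(true, true).slideDown(200);
    },
    function () {
        // Collapse it again when the mouse leaves.
        $(this).find('.panel-details').stop(true, true).slideUp(200);
    }
);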

Personally speaking, this is not the sort of design I would normally opt for. But in the end I think it has turned out quite well and feedback from others has been positive.

Building the website in T4 Site Manager

The five websites themselves all share the same basic building blocks, but are subtly given unique identities. For this, I had to be quite creative in the way I built the website in T4 Site Manager, in order to avoid unnecessary duplication. I did not want to create several styles (which can be a pain to maintain in the long run) and templates that all looked almost but not quite the same.

This meant creating lots of navigation objects and ‘related’ sections instead. This is one of the trickiest parts to get right: it is a jigsaw puzzle with lots of potential solutions, each with its own little pros and cons. So it takes some careful thought. But it’s easily worth it for the long-run benefits it brings in terms of ease of maintenance.

A bit of creativity in using existing navigation objects was also required to give each museum’s website its unique identity. This allowed me to use the same style (page layout) for each website, while still being able to assign different CSS stylesheets to each website’s homepage, and separate stylesheets again for the lower level pages. This is what enables each website to have its own colour scheme, yet still all use the same style.

Screenshots of three Museums and Collections websites

Other interesting bits

While working on this website, I also used the Google Maps API v3 for the first time for the maps on the visitor information pages. As far as I know, it is the only part of the University website that uses this newest version of the Google Maps API.

I was surprised to find it pleasingly easy to work with, and I think it provides a smoother user experience than version 2. The newer version is designed to work better on mobile devices too.
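
For anyone curious, getting a basic v3 map onto a page takes only a few lines. This is a generic sketch rather than our actual code; the coordinates are roughly St Andrews, not the precise museum marker:

<div id="map" style="width: 500px; height: 300px;"></div>
<script src="http://maps.googleapis.com/maps/api/js?sensor=false"></script>
<script>
    // Unlike v2, v3 does not require an API key, just a sensor parameter.
    var stAndrews = new google.maps.LatLng(56.34, -2.79);
    var map = new google.maps.Map(document.getElementById('map'), {
        zoom: 15,
        center: stAndrews,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    });
    new google.maps.Marker({ position: stAndrews, map: map, title: 'Museum' });
</script>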

Another interesting part of the website is the virtual tour of MUSA’s Learning Loft. This was another first for me, working with Flash as well as JavaScript (normally we only use Flash for videos). But again this turned out to be reasonably straightforward in the end, and looks really good on the webpage.

Visual design

Initially the Museums and Collections website was going to be a more conventional affair. But Steve was inspired to create something more striking after seeing some of MUSA’s physical promotional material. Steve’s decision led to him creating a brilliant design. The Museums and Collections webpages are now, in my view, some of the very best looking pages on the University website.

On reflection, the decision to go with an image-heavy design makes perfect sense for a Museums and Collections unit that can draw on 600 years of history for its visuals. There are some fantastic images of some of the objects in the collections, which helps make these webpages particularly appealing to look at.

Summary

The new Museums and Collections website was a brilliant project to work on. It threw all sorts of challenges at me, but this was a great opportunity to learn. It took a while to get it right, but I think the result is a really eye-catching website.

My favourite Web developer add-ons for Firefox

Mozilla released Firefox 4.0 on Tuesday (it has already been downloaded 16,041,437 times; that’s about 92 downloads a second!) and there is a lot to commend it for: a clean look that’s not too far away from either Google Chrome or Internet Explorer 9, and it’s much faster too.

While I use Google Chrome for most of my day-to-day browsing I still use Firefox for Web development, largely thanks to the number of mature add-ons available for it. These are my favourites:

1. Firebug


Firebug is the number one reason that I use Firefox. Sure, Chrome and Internet Explorer have their own Web developer tools but none of them come close to Firebug for its awesomeness.

That said, I recently tried out Opera Dragonfly and I was really impressed.

2. Web Developer


A close second is Chris Pederick’s Web Developer toolbar, which adds all sorts of useful tools to Firefox: disable CSS, outline headings and tables on the page, show HTML classes and IDs, show image sizes as overlays on the images. Brilliant!

3. ColorZilla


The most useful feature of ColorZilla for me is the eyedropper tool that allows me to sample a colour on a Web page and find out the RGB or HEX value for it.

4. HTML Validator


HTML Validator does exactly what its name suggests: it shows HTML validation information in the Firefox add-on bar (what used to be the status bar) at the foot of the browser viewport.

It’s very useful for at-a-glance error checking; obviously, recognising that HTML validation is an ideal and a guide rather than a hard-and-fast rule.

5. Wappalyzer


Wappalyzer is a new add-on for me that adds to Firefox the functionality that I’ve been enjoying with the Chrome Sniffer extension in Google Chrome.

It shows you in the AwesomeBar what technologies are being used, e.g. JavaScript framework, server type, content management system, web statistics, etc.

6. RSS Icon in Awesombar


For some unfathomable reason Mozilla has removed the RSS icon that appears in the AwesomeBar when you visit a page that has an RSS autodiscovery tag, such as the University homepage.

That’s where RSS Icon in Awesombar (sic) comes in. It… well, puts an RSS icon in the AwesomeBar.

7. Tab Mix Plus


There are some options within Firefox that I still cannot believe are missing. There is still no way to, by default, open your homepage when you open a new tab.

Tab Mix Plus allows you to set this option—and a whole lot more, like being able to duplicate existing tabs, or protect or lock tabs so that you don’t accidentally close them.

Over to you…

What are your favourite Firefox add-ons, for Web development or otherwise?

Should we still be supporting Internet Explorer 6?

Keep calm and debug IE6

Every couple of months the same topic of conversation comes up in the Web team office: should we still be supporting Internet Explorer 6? The answer so far has always been a resigned yes, but that may not be the case for too long.

A little history: IE6 was released on 27 August 2001, three days after Windows XP was released to manufacturing. Since then IE7 was released in October 2006, IE8 in March 2009 and the IE9 public beta in September 2010. So surely it’s now time to withdraw support for a browser that is over nine years old.

Bring down IE6

In 2009 .net magazine started a campaign called “Bring down IE6”.

Bring down IE6

Their mission:

The premise is simple: Internet Explorer 6 is antiquated, doesn’t support key web standards, and should be phased out. This isn’t about being anti-Microsoft, it’s about making sure that we encourage people to move to modern browsers such as IE8, Firefox, Chrome, Safari and Opera.

Case-by-case

In an article entitled “Calling time on IE6” Craig Grannell “asks designers and developers if it’s finally time to take IE6 behind the shed and shoot it”!  He leaves the conclusion of the article to Web standards hero Jeffrey Zeldman:

How much longer we prop up this ageing browser must be decided on a case-by-case basis. Not every site can afford to dump it today, but the writing’s on the wall.

I think that’s a really important point because, until recently, the primary browser on the University’s default PC setup, installed on every Windows PC in the PC classrooms, was Internet Explorer 6. If we wanted our websites to be viewable and usable across the University then we had to support it; we had no option.

Supporting IE6 is a drag. As all web developers will know, you spend a couple of hours building something that works perfectly in Chrome, Firefox, Opera and Safari, and then you spend twice as long again debugging it in IE6, IE7 and IE8, each of which appears to have introduced new bugs to the game. Keep calm and debug IE!

Analytics

Since the University’s default PC setup (‘standard build’) has now moved to Windows XP (and will hopefully soon move again to Windows 7) the default browser is now IE8, and so the requirement to support IE6 has been reduced.

This is backed up by the statistics from our Google Analytics account, which tracks which pages are being viewed most often and by which browsers.

Unsurprisingly Internet Explorer, being the default browser on our standard build PC, is the most popular browser to use to visit the University website; Apple Safari (the default browser on Apple Macs) is second.  42.5% of all visitors in the last month have used one version or another of Internet Explorer.  The breakdown of which version is interesting:

  1. IE8: 79.8% (382,394 visits)
  2. IE7: 15.4% (73,944 visits)
  3. IE6: 4.4% (21,186 visits)
  4. IE9 beta: 0.29% (1,395 visits)

That means that only 1.8% of all visitors to the University website last month used IE6 (4.4% of the 42.5% of visits made with Internet Explorer). But 21,186 visits is still quite a lot.

Frameworks

Adopting the Blueprint CSS framework a few years back made a considerable difference to our development time. Blueprint comes with a built-in IE hacks/workarounds stylesheet that addresses a good number of common IE5, IE6 and IE7 issues, and that has literally saved us hours and hours of hair-pulling.

Similarly we’re using the jQuery JavaScript framework which still supports IE6 and so makes cross-browser coding much simpler.

My view is that, with such good support for IE6 built into these frameworks, there’s really no excuse at the moment to drop providing a certain degree of support for it. The bugs are well known and the hacks are well documented, so finding workarounds for those not already contained in the framework files really doesn’t take long these days.
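
The classic, well-documented mechanism is the conditional comment: every other browser treats the block below as an ordinary HTML comment, while old versions of IE read it and load an extra stylesheet of fixes. This is essentially how Blueprint’s own ie.css is wired in (the paths here are illustrative):

<link rel="stylesheet" href="/css/screen.css" media="screen" />

<!--[if lt IE 8]>
    <link rel="stylesheet" href="/css/ie.css" media="screen" />
<![endif]-->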

Yahoo! graded browser support

However, it doesn’t mean that pages need to look pixel-for-pixel identical in every browser, something that is made explicit in the Yahoo! Graded Browser Support chart:

Support does not mean that everybody gets the same thing. Expecting two users using different browser software to have an identical experience fails to embrace or acknowledge the heterogeneous essence of the Web. In fact, requiring the same experience for all users creates an artificial barrier to participation. Availability and accessibility of content should be our key priority.

Over the last two to three years I’ve used the Yahoo! GBS chart to inform the Web team about how much support we should afford the various browsers. IE6 is still granted A-grade support, but it appears from a blog post, “Graded Browser Support Update: Q4 2010”, on the Yahoo! User Interface Blog that this is all about to change.

Listed among the various changes, which includes dropping A-grade support for Firefox 3.0 and initiating support for WebKit browsers on iOS and Android OS, is this:

Forecast discontinuation of A-grade coverage for Internet Explorer 6 in Q1 2011; we expect to move IE6 to the C-grade browser list as of the next update.

C-grade browsers, according to the GBS page are “identified, incapable, antiquated and rare.”

I would say that the bell is tolling for IE6 but it would appear from some corners of the Web that it has already rung out.  Google has already held a Funeral for IE6 after it withdrew support for the aged browser.  Microsoft sent flowers!

Conclusion

According to Google, IE6 is already dead and buried, while Yahoo! are expected to downgrade support for it in early 2011. Microsoft themselves, on the other hand, have committed to supporting IE6 until Windows XP SP3 support ends in 2014; but that just means patching security issues rather than adding new features. IE6 will never, on its own, support HTML5 or CSS3, for example.

So, should we still be supporting Internet Explorer 6? I expect that we’ll follow Yahoo!’s lead next year and move to providing only a base level of support for it.  When we move to using HTML5 and CSS3 then I expect we’ll have to drop support for IE6 completely.

We’ll make sure that content is readable but not worry too much about the presentation (CSS) and behaviour (JavaScript) layers; we’re kind of doing that already in places, to be honest. But as we’re using frameworks for CSS and JavaScript which still support IE6, the elderly blue ‘e’ may be inadvertently supported for a little while to come.

Then all we need to do is try to kill off IE7.  Who’s with me?