Fighting Yesterday’s Battles

Duty Calls

I recently got involved in an argument on the Internet with a small group of people who think that the web design standards of the 1990s are still valid today. In support of this, one of them pointed me at anybrowser.com – a site which, not uncoincidentally, dates from the late 90s.

My antagonists’ argument was, basically, that if it works it’s good enough, and that the use of tables for layout, font tags and the like is perfectly acceptable provided you follow the guidelines on that site (and others) not to target your site at a particular subset of browser users. As a corollary, they also argued that if a website doesn’t work perfectly with IE6, it’s the fault of web designers who aren’t doing their job properly. One even went so far as to accuse me of “Nazi tendencies” for insisting on web design standards rather than pragmatically using what works.

I don’t think I need to tell anyone who works in the web design industry that that’s wrong. But it’s worth exploring a little bit why it’s wrong.

Firstly, of course, the Any Browser campaigns didn’t always agree with each other. Anybrowser.com is happy with tables for layout. Anybrowser.org cautions against them. Anybrowser.com doesn’t use CSS. Anybrowser.org recommends it, albeit with caveats. So even historically, there’s no consistency.

However, there are two main themes running through all the Any Browser campaigns of the 90s:

1. Websites should not be targeted at users of any specific browser.

2. All websites must be backwards compatible with all older browsers, and it’s unrealistic to expect users to upgrade.

As far as the first point is concerned, it’s worth noting that that argument has been won. Back in the late 90s, shortly after the introduction of Internet Explorer, there was a real danger that the web would become siloed into sites compatible with Netscape (remember that?) and sites compatible with IE, each using different proprietary extensions. The Any Browser campaigns argued fiercely against that, and rightly so.

These days, though, that simply isn’t an issue. I use all five of the major browsers – Chrome, Firefox, IE, Safari and Opera – and I can’t remember the last time I encountered a website which worked better in one of them than in the others.

What matters these days isn’t browsers, it’s devices. “Any Browser” is no longer an issue. “Any Device” is very much so. Sites which work well on a desktop can fail miserably on a tablet or smartphone. And, of course, vice versa. The challenge for web designers these days isn’t to make sure that their sites work on two or more browsers, it’s to make them work on a multiplicity of desk-based and mobile devices.

I have to admit that not all of my own websites meet that requirement. I’ve been working towards it, of course, and one by one I’ve been updating them to use responsive frameworks that will work on phones and desktops alike. But I’m not completely there yet. In my day job, cross-device compatibility is utterly crucial, and on my desk I have not only the PC that I do my work on but also a tablet and smartphone on which I need to test any visual change before signing it off to go live.

Making that work, though, means sticking to standards. And it means sticking to standards designed for modern technology, not the technology of 20 years ago. For example, using tables for layout breaks horribly on smartphones – it makes sites unreadable without horizontal scrolling, which is one of the bigger no-nos. These days, we have to be standards compliant, and we have to be both cross-browser and cross-device compatible. In short, we have to write websites, not just for any browser, but for any device.
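To illustrate, here’s a minimal sketch (the class names are my own invention, not any particular framework’s) of the kind of thing a responsive layout does: two columns that sit side by side on a wide screen and stack on a narrow one, where a layout table would simply force the reader to scroll sideways.

    <div class="row">
      <div class="col">Main content</div>
      <div class="col">Sidebar</div>
    </div>

    <style>
      .row { display: flex; }            /* columns side by side on wide screens */
      .col { flex: 1; padding: 1em; }
      @media (max-width: 600px) {        /* on narrow screens... */
        .row { display: block; }         /* ...stack the columns instead */
      }
    </style>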

Which leads on to the second major theme of the Any Browser campaigns: It’s unacceptable to expect users to upgrade their browsers.

Unlike the first theme, that of cross-browser compatibility, this one has been proven wrong. And there are two main reasons why.

The first is that, like a lot of statements made about the Internet at the time, it was based on an entirely false assumption about the rate of technological change. When installing an updated browser meant, at the very least, a lengthy download over a dial-up connection, an often complex user-initiated installation routine and, in the case of Netscape, even paying for the upgrade, there were many good reasons not to do it unless you really wanted to. And, equally, website operators needed to be aware that users had many good reasons not to upgrade. Backwards compatibility was, therefore, an essential part of the Any Browser principle.

These days, none of those apply. All modern operating systems have an auto-update function for software, and all modern browsers either take advantage of it or use their own auto-update system. In some cases (Chrome being one), the auto-update takes place in the background and the user won’t necessarily even notice that it has happened. And people don’t connect to the Internet over dial-up any more. Even a slow ADSL connection is easily fast enough to download a browser update with no problems. With the exception of some PCs on locked-down corporate networks (which are an entirely different scenario, and one that isn’t really relevant here), the argument that upgrading a browser is difficult, complex or inconvenient simply no longer holds.

The second reason why it’s wrong to insist on backwards compatibility is, though, even more powerful. And it was completely unforeseen by the Any Browser campaigners back in the 90s. That reason is security.

The growth of the Internet has facilitated a lot of things, many of them entirely beneficial. But it has also facilitated a lot of bad things. When I started out in the IT industry, back in the 80s, hacking into a computer generally meant having physical access to it, or at least being part of the local network it was connected to. The Internet made it possible to hack into a computer anywhere in the world without going anywhere near it. And malicious software authors rapidly created programs to do just that. Viruses, trojans and other forms of malware are an everyday part of the 21st century Internet. And so is the need to defend against them.

But defending against the efforts of malware creators means keeping up to date with the necessary defensive measures. Which in turn means keeping Internet-connected software up to date and patched against any newly discovered vulnerability.

This might not be so bad if the only thing at risk was the user’s own PC. I used to have a friend who always left his car doors unlocked, because, as he put it, “there’s nothing in the car worth stealing, and I’d rather they didn’t smash the windows to discover that”. A lot of people have the same approach to their computers.

The problem with that approach is that it isn’t just the user’s own PC which is at risk. Once malware gets onto a PC, it can become part of a botnet which in turn is used to attack other computers, maybe sending spam, or acting as part of a DDoS attack, or simply spreading the infection. Most spam comes from botnets these days.

If you have unpatched older software connected to the Internet, therefore, you are not merely a danger to yourself, but to other Internet users as well. If you are using a centrally managed computer at work then it isn’t your problem, it’s your IT department’s, and they can answer to their own management if they allow company PCs to become infected. But if you are a home user, or a small business running your own equipment, then not only is it basic common sense to keep up to date with software upgrades but there’s a very strong argument that it is a moral imperative. People who refuse to upgrade are contributing to the problems of spam and malware experienced by everyone else.

As far as website operators are concerned, that means they also have a moral obligation to encourage their users to keep their software up to date. And if that means deliberately refusing to cater for the small minority using browsers several generations behind, then, overall, that is a positive move.

The best way to encourage users to keep up to date is to stick with modern web standards. That doesn’t necessarily mean using all the bleeding-edge features of HTML5 and CSS3, but it does mean writing websites that comply with HTML5 standards rather than using outdated feature sets of older implementations. And, as a bonus, using HTML5 will also make it much easier to create websites that are truly cross-browser and cross-device compatible, which is the ultimate aim of the Any Browser campaigns.
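To be clear about what that means in practice, here’s a minimal sketch of a standards-compliant, device-friendly starting point (the content is placeholder only):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <!-- the viewport declaration lets mobile browsers scale the page sensibly -->
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <title>Any browser, any device</title>
      </head>
      <body>
        <header><h1>Page heading</h1></header>
        <main><p>Content laid out with CSS, not tables.</p></main>
        <footer><p>Footer</p></footer>
      </body>
    </html>

Nothing bleeding-edge there, but it validates as HTML5 and gives every browser and device the same clean starting point.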

The Any Browser campaigns of the late 20th century had two fronts. History shows that they’ve won one, and lost the other. There is no point now in revisiting either of them. What matters now is ensuring that the web remains an open, interoperable platform accessible to any user and any website developer on an equal basis. That outcome is still far from assured. Let’s not waste time fighting yesterday’s battles when we still have today’s to win.

Meltwater and the copyright right – a brief update

Back in 2011, I blogged on the disturbing case of Meltwater and the Newspaper Licensing Agency. If you’ve got time, go and read the original article before coming back to this one. But, if you want the TL;DR version, here it is: the appeal court decided that following a link to material on the web could be an infringement of copyright if you didn’t have permission, because doing so would inevitably create a local copy of that material in your browser, and, unless authorised, that’s an infringement. As I said at the time,

Copyright law does have explicit exceptions for temporary or transient copies which exist merely to facilitate the transmission or lawful use of a work. The basis behind the Meltwater judgment is that such a permission only applies to lawful use, so if a particular use is not lawful then even a temporary copy is a breach of copyright.

I ended that post by hoping that Meltwater would appeal and that common sense would prevail. But I wasn’t holding my breath.

Now, nearly three years later, we have an update. And, fortunately, common sense has prevailed. Meltwater did appeal to the Supreme Court, and they won. And the Supreme Court itself then referred the question to the European Court of Justice to get an EU-wide ruling. The ECJ, in turn, upheld the Supreme Court’s verdict. So the original appeal court’s ruling that, as I put it at the time, “a use can be unlawful just because the publisher says so” has been overturned. On the contrary, as the Supreme Court put it, “a use of the material is lawful, whether or not the copyright owner has authorised it, if it is consistent with EU legislation governing the reproduction right”.

The judgment is quite complex, and goes into a lot of detail, but the gist of it is that the exemption to copyright for transient copies is a lawful use in and of itself, and does not rely on any other right in order to be lawful. There may be other rights being infringed, of course, and the judgment refers to a case where they were. But, crucially, it makes the important declaration that even if other rights are being infringed, the exemption for transient copies is absolute and cannot be nullified by the publisher’s lack of consent. Which means that if the transient copies were the only possible infringement, then no infringement at all has taken place.

This has a lot of ramifications beyond the case in question. The Guardian has headlined the story, “Internet users cannot be sued for browsing the web”, which is certainly true, but there are other aspects as well. One of them is that this also settles once and for all the question of whether simply linking to publicly available material can be an infringement of copyright.

This has been addressed in the past, in other cases, but there hasn’t, until now, been a definitive answer from a senior court. But this decision makes it clear that a link alone cannot be an infringement of copyright, because the link itself is not a copy of anything and the transient copies made by someone following the link are not an infringement either. (There are other, more tenuous rights, such as a “making available right”, which can, theoretically, be infringed by links in some circumstances. But if the material linked to is already public then that cannot be the case).

It also means that someone viewing or listening to a live stream online is not infringing copyright, even if the source of the stream is. Because the person viewing the stream is only making transient copies, no infringement is taking place. It would be infringement, of course, if they made a permanent download, or if they went on to communicate the material to the public. But both of those are entirely different scenarios. Private viewing of an illicit stream is not infringement, even when broadcasting the stream is.

So, overall, this is a sensible decision. And it’s nice to know that the courts don’t always follow a copyright maximalist agenda.

Best practice for online web forms

As part of an online discussion about best practice for online web forms which collect personal information, I contributed an extract of a document that I wrote a while ago for internal purposes. Someone asked me if it was online anywhere, which it wasn’t. So it is now!

This is just a brief summary: there are situations it doesn’t cover, and there may well be circumstances in which some of my recommendations are the wrong choice. It’s also aimed at an English-speaking audience writing websites for primarily English-speaking (although worldwide) users; other languages and cultures have different conventions, and the comment about the PAF is specific to the UK.


Best practice for online forms is to make only the absolute minimum required information compulsory, and give as much flexibility as is reasonably possible for everything.

Some specific points:

“Title” (Mr, Mrs, etc), if present, MUST either a) be a single free-form text field, or b) have a free-form “other” option in addition to a preset list of the most common, and MUST NOT be a required field. If you do have a preset list of common options, the absolute minimum set is “Mr”, “Dr”, “Mrs”, “Miss” and “Ms”.

“Name” MUST be a single free-form field. DO NOT split names into first and last, or Christian name and surname. (And do not assume that the first word in the name is the name that people wish to be addressed as when emailing them). If you do not have a separate title field, be aware that some people will include their title as part of their name.

“Telephone number”, if present, MUST NOT be an all-numeric field (consider people with extension numbers), and MUST NOT be a required field unless the purpose of the form is for the person completing it to explicitly request a telephone call or SMS message.

“Address”, if present, MAY include a separate field for postcode/zip code, but SHOULD NOT include a drop-down for county/state/country/whatever unless all possible legal options are included. A field for postcode/zip code MUST NOT be required unless necessary for delivery/billing purposes.

“Age” or “Date of Birth”, if present, MUST NOT be a required field. If necessary to validate age for legal purposes, a single checkbox for “I confirm that I am over 18” (or wording as appropriate) may be a required field.

“Sex” or “Gender”, if present, MUST NOT be a required field.
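Putting those points together, a minimal sketch of a form that follows them (the field names and endpoint are illustrative, not prescriptive) might look like this:

    <form method="post" action="/signup">
      <label>Title (optional)
        <select name="title">
          <option value=""></option>
          <option>Mr</option> <option>Dr</option> <option>Mrs</option>
          <option>Miss</option> <option>Ms</option>
        </select>
      </label>
      <input type="text" name="title_other" placeholder="Other title">

      <label>Name
        <input type="text" name="name" required>   <!-- one free-form field -->
      </label>

      <label>Telephone (optional)
        <input type="text" name="phone">           <!-- text, so "01234 567890 ext 12" works -->
      </label>

      <label>
        <!-- required only because this hypothetical form needs legal age validation -->
        <input type="checkbox" name="over18" required>
        I confirm that I am over 18
      </label>

      <button type="submit">Submit</button>
    </form>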

Data which has a canonical format (eg, postcodes, telephone numbers, credit card numbers) should be accepted in any format (eg, with or without spaces, with or without brackets) and post-processed into the canonical format. DO NOT reject form submissions for not using the correct format.
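For example, here’s a minimal sketch, in JavaScript, of post-processing a UK postcode: accept it in any reasonable format, and normalise it to the canonical form rather than rejecting the submission.

    // "sw1a1aa", " SW1A 1AA " and "sw1a 1aa" all come out as "SW1A 1AA"
    function canonicalPostcode(input) {
      var s = input.toUpperCase().replace(/[^A-Z0-9]/g, "");  // strip spaces, dashes, etc.
      if (s.length < 5 || s.length > 7) return null;          // not plausibly a postcode
      return s.slice(0, -3) + " " + s.slice(-3);              // inward code is always the last three characters
    }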

If validating postal addresses against the PAF (the Royal Mail’s Postcode Address File), always allow for manual address entry as an alternative to selecting from the PAF options for the postcode.

Google Maps, where orange is the new blue (and also the new green, and red)

Google Maps is going through a bit of a makeover at the moment. There will, sooner or later, be an entirely new version of the web-based maps (which you can see in preview if you switch to the beta option), but in the meantime some of the changes that are part of the new version have also been rolled out to the existing system.

One of the things that has been changed is the colour scheme. Previously, Google used standard local mapping conventions for road colours. So, for example, in the UK motorways were blue and trunk roads were green. In France, toll autoroutes were green and non-toll autoroutes were red. That fits with signage, in both countries.

The new colour scheme, though, does away with all that and renders all roads, everywhere, in various shades of orange and grey.

I think that’s a really bad move. So do lots of people. But it’s probably best illustrated with an example. Here’s a screenshot of my local area using the new version:

(Clicking on the map will open it in a lightbox. If you don’t have a large monitor, then right-clicking and choosing “open link in new tab” will probably be better as it will allow you to see it actual size. The same goes for all the maps on this page).

The map shows Evesham at the bottom right, Worcester at the top left and Pershore in the middle. Up the left hand side runs the M5.

The major routes are reasonably easy to see, although there isn’t much of a visible difference between the motorway and other trunk roads. But can you see where the non-trunk A roads are on that map? What about the B roads? Can you tell the difference between them and unclassified roads?

The answer to that, as I’m sure you’ve realised, is that you can’t tell. Here’s the same area in the older version:

It’s immediately obvious at a glance how much clearer that is. Most importantly, Pershore is no longer isolated in a sea of back roads – you can see both the A4104 running north-south through the town, as well as the B roads linking it directly with Evesham and Worcester. Evesham, too, now has the key central spine road showing in a different colour, and, to the west of the M5, you can see the A38 which forms an important local connector in the area.

OK, so you may argue – that’s just the overview, you can see more detail by zooming in closer. Which is true. But the colours still don’t work. Here’s a rather bizarre splash of colour in Droitwich Spa, for example, where the main road is white but the slip roads at a junction are orange:

And here it is in the older, clearer version:

So why the change?

It seems to me that Google has forgotten one of the key principles of cartography: a map is intended as a representation of reality, not a work of art. To be sure, roads aren’t really painted blue, or green (or orange), so the actual colour you use for them is something of an arbitrary choice. But the way that roads are classified and used is not arbitrary, and there is a long-standing convention in map-making that the colours and iconography relate to those used in non-mapping documentation.

Going back to the first map, at the top, if you wanted to get from Wyre Piddle to Upton upon Severn, which way would you go? The map gives no obvious clues – you might assume that the only alternative to negotiating a maze of twisty country lanes is to go via Worcester. In the second map, it’s obvious: follow the A4104 through Pershore and Defford.

But, of course, people don’t use online maps in that way any more. Instead, if you wanted to get from Wyre Piddle to Upton upon Severn, you’d use the “show directions” facility of the map. And, yes, it will correctly take you through Pershore. (Here’s a link showing just that, for comparison purposes).

And I think that is the key point here. Google no longer expects users to use its maps as maps. Instead, it expects the maps to be merely a means of conveying other data, such as computer-generated routes, and advertising, and links to other Google products. The idea that someone would look at a map, and, just by looking at it, be able to tell how to get from one place to another seems incredibly old-fashioned. And so there’s no longer any need for the visual clues necessary to make map-reading easy and intuitive.

I think, though, that that’s still a mistaken assumption. Yes, one of the primary uses of Google Maps (and Apple Maps, and Bing Maps) is for computer-generated route-finding. But it isn’t the only one.

It’s telling, too, that many of the positive comments you can find online about the new Google Maps (and yes, there are plenty) are about how slick it looks and how “cool” the colours are. One review points out that “The redesign brings Maps into sync with the look and feel of the modern Google design aesthetic”, which is certainly true. Others, like this one, talk about how easy it is to use the new maps to search for pizza. As a local search tool, it is pretty good.

I suppose we shouldn’t be surprised that Google wants the new Google Maps to be more about Google than Maps. But building in the new features doesn’t have to mean ditching the best of the old. And I find myself using Google Maps a lot less these days, so all those new features are wasted on me.

So what are the alternatives? Here are some screenshots of the competition, starting with the most obvious, Bing:

I quite like Bing Maps. They get the colours right, and the web interface has the option of using OS maps at closer zoom levels, which is a very, very good option indeed. But, at the wider level, the colours still seem a bit too muted and there isn’t as much detail as there could be.

The other web-based map that most people will probably be familiar with is OpenStreetMap. Here’s the same area, again:

One of the nice things about OSM is that it gives you the option of different tile sets. Here it is with Mapquest Open tiles:

The Mapquest colour scheme is a lot like Bing, except clearer. Purely as a general purpose mapping application, I find OpenStreetMap to be by far the best, with the Mapquest tiles being better at overview levels and the standard OSM tiles being better when zoomed in.

One that has to be mentioned, of course, is the granddaddy of them all as far as UK mapping is concerned: OS maps. Unlike the others, OS maps don’t have a website of their own; instead, they are incorporated into other mapping sites. And they come into their own at closer zoom levels: there isn’t really anything to be gained from them at levels wider than the classic 1:50,000 series. But here are Evesham and Pershore on the OS map:

At that level of zoom, OS maps are genuinely unbeatable. The colours and iconography have been honed over decades of careful refinement, and, without the distraction of route-finding and advertising to contend with, the cartographers at OS have been able to fully concentrate on the maps themselves. It’s the inclusion of OS maps in Bing which gives Bing the edge over Google for close-up mapping, and their ability to combine OS maps with route-finding is unmatched as well.

One other that’s worth mentioning, though, is a bit of a blast from the past. Veterans of European travel in the 20th century will be familiar with Michelin Maps, but not a lot of people know that they’re online as well. Michelin is the direct opposite to OS in that it’s the wider zoom levels where they excel, so here’s a screenshot of most of Worcestershire:

Once upon a time, before Google got into the mapping act, ViaMichelin was my favourite online mapping application. Unfortunately, their technology hasn’t really moved on much since those early days – just about the only enhancement is that their maps are now “slippy” – so they leave quite a lot to be desired now. But Michelin maps, like OS maps, are maps first and foremost rather than a vehicle for search and route-finding (although ViaMichelin does do routes), so the quality of the cartography is second to none and vastly superior to Google. I only wish they did a useful API so that I could include them on my own websites!

Grant Shapps, (snake) oiling the wheels of IP reform

I blogged a couple of days ago about the fact that Grant Shapps, the new Conservative party chairman, turns out to have founded a company dedicated to selling SEO snake oil. The point of that article wasn’t particularly to criticise him, it was more to do with the fact that one of his company’s websites had inadvertently revealed just how useless the kind of stuff it sells is.

I commented at the time that “it’s probably rather embarrassing for Mr Shapps to be linked with this kind of stuff”, but, in retrospect, I doubt he’s embarrassed at all. You have to have a pretty thick skin to deal in black hat SEO techniques, and an equally thick skin to be successful in politics, so it’s more likely to be just water off a duck’s back. And, although I don’t use the kind of techniques he sells, I don’t think there’s anything which is particularly morally wrong with making money out of them. If someone is determined to use black hat SEO, then why not sell them the tools?

What’s more interesting, though, is the nature of the tools that HowToCorp (the company founded by Mr Shapps) sells. One, in particular (and the one that I blogged about the other day), is what’s called a “content spinner”. In the parlance of the black hat wearing SEO consultant, that means a program which takes content, such as an online article, and then “spins” it in order to produce another article which can then be indexed by Google as if it were original. For example, look at this page, and then this one – the second is the same article rewritten by machine (on new technology).
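To make the mechanism concrete, here’s a toy sketch of the idea (real spinners use large synonym databases and rather more sophisticated rewriting, but the principle is the same):

    // Toy illustration only: mechanically swap words for crude synonyms so the
    // result looks "original" to a search engine
    var synonyms = { "big": "large", "quick": "rapid", "house": "dwelling" };
    function spin(text) {
      return text.replace(/\b\w+\b/g, function (word) {
        return synonyms[word.toLowerCase()] || word;
      });
    }
    spin("a quick tour of the big house");  // "a rapid tour of the large dwelling"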

Obviously, to do that you have to have content to begin with, and there are two main sources. Firstly, you can write it yourself. Or (and far more commonly) you can simply copy it from somewhere else. In the example above, the original comes from one of many “free ezine articles” websites that are themselves a common SEO tool – people write articles, then submit them for syndication in the hope that each publication will generate backlinks to their own website.

If the second website I linked to there was simply republishing the article as written, then it would be entirely legitimate – the articles are originally published with republication in mind, and that’s allowed – and even encouraged – by the terms and conditions of their source. But “spinning” them isn’t permitted. In fact, it’s explicitly prohibited.

What that means is that the second site I linked to is infringing copyright in the original article. And the second site is one of HowToCorp’s own network of spun content websites.

Plenty of people are up in arms about that. I’m not one of them. I don’t like the plagiarism inherent in using spun third party content, and if pressed I’d probably call it morally wrong. But I’m a lot more relaxed about the copyright infringement aspect.

The thing which alerted me to this story was, as it happens, a tweet from Loz Kaye, the leader of the Pirate Party UK:

@grantshapps in IP infringement row: http://bit.ly/OIGpwk Can’t promise campaign to save minister or his wife.

Now, I’m no copyright abolitionist, but I think that Mr Kaye was possibly missing a trick here. Content spinning may be the sordid underbelly of copyright challenge, but a challenge it nonetheless is. It’s impossible, morally or rationally, to defend content spinning without also defending other forms of challenge to existing copyright laws such as filesharing. If Mr Shapps is being anything like consistent, then he has to be as much in favour of the latter as he is of his own actions.

Obviously, moral consistency is by no means guaranteed in politicians. But there’s another intriguing link here. Grant Shapps happens to be the cousin of former Clash vocalist and guitarist Mick Jones. And Jones is currently a member of Carbon/Silicon, a band of which Wikipedia has this to say:

The formation of the band was catalyzed by the internet and file sharing. The first song written by Jones and James was entitled “MPFree,” in which they expressed their willingness to embrace the technology of the internet and file sharing, in the interest of spreading music, rather than profit.

I have no idea how close Jones and Shapps are. Maybe they go out every week for a beer, maybe they only see each other at family occasions, or maybe they never speak. And it could easily be just a coincidence that one cousin believes in freedom to share music, while the other only believes in his own freedom to share text. But maybe, just maybe, Grant Shapps really does come from a position of genuinely wanting to see reform of the UK’s onerous and anti-innovation intellectual property laws. If so, then his reputation as one of the Conservative Party’s rising stars could be even more significant.

So what are the thoughts of Mr Shapps when it comes to the likes of the Pirate Bay, I wonder? Anyway, here’s Carbon/Silicon performing ‘MPFree’:

A bit of a curious case

A minor ripple of news yesterday concerned the fact that Curebit, a “social referral platform”, had been caught ripping off the design and layout of Highrise, a 37 Signals product. 37 Signals, of course, are probably more famous for their “Signal vs Noise” blog and book “Getting Real” than they are for their own products, although from my fairly limited experience of them (I’ve used Basecamp a bit) they do appear to know what they’re talking about. The blog’s rather pretentious style has spawned a number of parodies, including 38th Signal and the now unmaintained 47 Seagulls, which are worth reading as well!

Anyway, tech news website VentureBeat reports that 37 Signals founder David Heinemeier Hansson called Curebit “Fucking Scumbags”, and the whole thing seems to have led to a colourful exchange on Twitter.

If that was all there was to it, it wouldn’t really be worth discussing. But some commentators, including blogger and author Paul Carr, chose to interpret Hansson’s reaction (and the fact that most people in the tech community took Hansson’s side) as evidence of double standards. As Carr put it on Twitter:

If I understand the tech community correctly, stealing a movie, song or book is cool but stealing code should be punishable by death.

Several people, including me, challenged him on that assertion, without getting any real response, and Carr went on to write a fuller article on Pandodaily titled “Angry Nerds: Copyright Theft Is Bad, When It Happens To People We Like” in which he essentially repeats the same claims. After another Twitter exchange between me and Guardian Technology editor Charles Arthur in which we disagreed on the level of Carr’s understanding, I decided it was time to stop trying to make a complex point in 140 characters and blog about it instead.

So, that’s the background. I still don’t know whether Paul Carr is really misunderstanding the difference between piracy and plagiarism, or whether he’s just disregarding it in order to make a point, but it seems to me to be worth exploring further.

For the purposes of this article, I’m using “piracy” as shorthand for large scale unauthorised non-commercial copying, of the sort facilitated by the likes of The Pirate Bay and the now-defunct Megaupload. I’m aware that some in the pro-sharing camp dislike the term, and I agree that it’s misleading if used in the wrong context (in particular, it’s very misleading when used to conflate filesharing and counterfeiting, both of which are commonly described as piracy), but I really can’t be bothered to type “large scale unauthorised non-commercial copying” every time.

So, is piracy the same as plagiarism? Well, yes and no. It is in some ways, but not in others. Let’s look at the similarities and differences.

Piracy and plagiarism are the same in that…

  • Both are an infringement of a legally granted right. The law gives content creators the right to control copying of their work, and also to assert ownership over it.
  • Both are relatively recent additions to the legal canon. Creative works, in the form of books, music and art, have been with us for many thousands of years. But it wasn’t until the 18th century that copying, in any form, became subject to legal control. According to Wikipedia,

    The modern concept of plagiarism as immoral and originality as an ideal emerged in Europe only in the 18th century, particularly with the Romantic movement, while in the previous centuries authors and artists were encouraged to “copy the masters as closely as possible” and avoid “unnecessary invention.”

  • Both are widely considered morally wrong. There is, of course, a significant body of opinion to the contrary, particularly when it comes to piracy, but it’s still fair to say that most people support the basic principles which underlie objections to piracy and plagiarism.
  • In most cases, neither causes any real harm to the rightsholder. I say “most cases” because there are, of course, cases where both can be very harmful – plagiarism probably more so, since it not only has the potential to affect revenue but also reputation – but, on the whole, neither plagiarism nor piracy causes any significant harm to the victim. There is plenty of research which supports the assertion that the putative loss to rightsholders from unauthorised sharing is minimal, since in almost all cases those taking unauthorised copies would not have paid for it anyway.

But, on the other hand, plagiarism is unlike piracy in that…

  • Plagiarism involves a measure of deception which is absent in piracy. If I let one of my friends copy a U2 CD from my rack, I’m not pretending that I wrote the songs and played the instruments. But plagiarism is making a false claim to have originated something which was actually originated by someone else.
  • Both have at least a theoretical potential to damage the revenue of the victim. But only plagiarism can damage the victim’s reputation. Nobody is going to be fooled into thinking that when YouTube user MissRipOff99 uploads a Lady Gaga video, it’s actually Miss Ripoff performing the song. But if a plagiarist’s copy gets wider circulation than the original then the original creator can face an uphill battle proving ownership. (That can happen even when the “plagiarism” is entirely unintentional: A lot of people still wrongly think that the words to Baz Luhrmann’s 1999 hit “Everybody’s Free (To Wear Sunscreen)” were written by Kurt Vonnegut).
  • Piracy is perpetrated by consumers, but plagiarism by competitors. Consumers pirate material because they can’t make their own. Plagiarists plagiarise because they won’t make their own. Piracy is often driven by the lack of available content in a format useful (or a price affordable) to the consumer. Plagiarism, on the other hand, is driven mainly by a desire of a potential competitor to avoid having to put in the work necessary to create original content, or because of an inability to do so.

Looking at the 37 Signals/Curebit spat, it seems clear to me that most of the tech community’s objections to Curebit’s conduct are based on reasons which are found in the second set of bullet points: The differences between piracy and plagiarism. So there’s no real basis for a charge of hypocrisy or double standards against those criticising Curebit, because they’re criticising something for reasons which are entirely different to the complaints of the media industries about piracy.

In particular, Curebit has broken an unwritten principle of webmaster geekdom: Learn from your competitors, don’t replicate them. There’s nothing immoral or illegal about using the “view source” button to see how someone else has done something, whether it’s a clever CSS/HTML trick or a neat bit of Javascript, provided that you’re doing so in order to expand your own knowledge of how HTML/CSS/Javascript/whatever works, rather than blindly copying what you’ve seen. It only becomes immoral when you take someone else’s work and pass it off as your own without either acknowledgement of the source or putting in your own contribution in order to create something else that’s new. As David Heinemeier Hansson puts it himself:

Nobody’s against inspiration or learning. Look at design, view source, forge the influences and come up with your own original work.

As it happens, Curebit haven’t caused any real harm to 37 Signals, so in that particular sense it is equivalent to piracy. But, on the other hand, I haven’t seen Hansson, or anyone else from 37 Signals or the wider tech community, calling for new legislation so that they can take down sites like Curebit without the hassle of going to court. Hansson is clearly angry, but – unlike a lot of the media industry – he isn’t stupid and he does understand the Internet. Again, here’s what he has to say on the subject:

BTW, stealing isn’t the apt metaphor here. Plagiarism is. Taking other people’s work and passing it on as your own.

So, Paul Carr clearly doesn’t “understand the tech community correctly”, given that nobody is making any accusations of “stealing” anything here – not code, not movies, not music, not books. And anyone else who, like Carr, thinks that this episode has any bearing on the filesharing debate is merely demonstrating just as much ignorance.

Bad municipal web design

One of the things about coming to parenthood relatively late is that I’m also late to experience something that all my contemporaries are already only too aware of: schools are absolutely terrible when it comes to handling payments made by parents for the various non-free things that their children participate in, such as school trips, school dinners, after-school clubs, etc. Most schools, it seems, still have no alternative to cash or cheque handed over to the teacher by the pupil. In an age of credit cards, debit cards, PayPal, online banking, telephone banking, Google Checkout, etc a reliance on cash and cheques is not so much a throwback to the 20th century as positively Dickensian. And yet that’s still the case in the majority of our educational system.

I was, therefore, pleasantly surprised to discover that my daughter’s school does actually have an online payment facility, provided by Worcestershire County Council. At least, I was pleasantly surprised, until I tried to use it.

The URL to the payment facility, along with my daughter’s PIN (that’s “Pupil Identification Number”, in this context) and a list of instructions on how to use the system was provided on a couple of photocopied sheets of paper. Disregarding the fact that a well-written online payment system shouldn’t really need printed instructions, the basic idea of a PIN plus payment details seems reasonable. So, on to the site itself. According to the letter, I can find it at http://www.worcestershire.gov.uk/payments4schools – so off we go.

First impressions are that it’s a bit basic. Apart from the favicon, there’s no Worcestershire branding at all – just the name of, presumably, the operator, Civica, and a rather strange phrase in the logo: “Authority Icon”. OK, it’s a functional site and there’s no real need to jazz it up too much, but a little more attention to visual design wouldn’t go amiss.

So, to make a payment. The instruction sheet tells me to select the child’s school from the drop-down menu, which also happens to be pretty obvious. So I do. The next step, according to the instructions, is to select the item I want to pay for from the next drop-down. But hey – it doesn’t work. There are no options in the second drop-down at all. None. Nada. Zilch. It doesn’t matter what school I select, nothing appears there.

OK, let’s skip that. I enter my daughter’s PIN and name, and then the amount I want to pay. In this case, I want to pay for her next three weeks of dinner money, at £2 per day. So that’s £30 in total. I enter “30” into the box.

Next, there’s a checkbox for Gift Aid. I have no idea why that’s there. I know what Gift Aid is for, and how it works, but I don’t see how it relates to a dinner money payment. Ah, the instruction sheet tells me that I should tick this if the school has asked me to. Well, fair enough, but it wouldn’t hurt to put that on the website as well.

Then there’s a drop-down menu for my address. Except that I don’t have any options there, either. So I enter the details manually.

Finally, there are three buttons at the bottom: “Add to List”, “Cancel” and “Back to Top”. It doesn’t say so, but I assume that the first is what I need to press. Oh yes, it mentions that on the printed instructions as well. So why not make it more intuitive to begin with?

I click it. And get two error messages.

The first error tells me that I haven’t selected anything from the second drop-down, so I haven’t said what the payment is for. Well, no, but that’s because I couldn’t. But, mysteriously, I do now have a set of options. OK, I’ll select it now. But… I can’t. There isn’t an option for dinner money. Why not? The letter from the school implies, but doesn’t explicitly state, that I can pay dinner money that way.

The second error tells me that “30” isn’t a valid amount. Apparently, I have to enter it as “30.00”. Although that’s moot, now, as I can’t make the payment I want to make. It would have been nice to know that before I started.

Ah, a bit of experimentation shows that I can get the list of payment options either by clicking on the “select” button next to the school list, or by clicking on what looks like a menu choice on the left, the words “School Account”. That’s not exactly intuitive, and it shouldn’t be necessary – simply selecting the school should fire the drop-down’s onchange event and populate the next list. This is just lazy, or incompetent, programming.
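For what it’s worth, here’s a minimal sketch of the sort of thing the page should be doing (the element IDs and URL are my guesses, not Civica’s):

    document.getElementById("school").addEventListener("change", function () {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/payment-items?school=" + encodeURIComponent(this.value));
      xhr.onload = function () {
        var items = JSON.parse(xhr.responseText);      // e.g. ["Pantomime visit"]
        var list = document.getElementById("payment-item");
        list.innerHTML = "";                           // clear any stale options
        items.forEach(function (item) {
          var option = document.createElement("option");
          option.textContent = item;
          list.appendChild(option);
        });
      };
      xhr.send();
    });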

Anyway, I do have another payment to make – a pantomime visit – so I select that. Fill in the correct amount, add to list, and there it is at the bottom of the page. So, what next?

The “Pay” button is the obvious choice here, and, fortunately, it works. The rest of the payment process isn’t too bad, although it’s still poorly laid out visually. And when I get to the “3D Secure” page, it times out. I’m no great lover of 3DS anyway, but it can work OK when cleverly integrated. Here, it isn’t. However, I manage to persuade it to load, and complete the payment. I wonder how many people would have given up at this point, though.

OK, so it does work, eventually. And it’s easier to use once I’d worked out how to use it. But, as an example of web design, both visual and functional, it’s very poor. As a web author, I’d be embarrassed to inflict that on paying customers. But, presumably, Worcestershire County Council paid someone to create that system. I’d be interested to know how much they spent. Because if it was any more than a fiver, they’ve been ripped off.

Update:

Having done a little more investigation into this software, and swapped notes with other people who have had similar bad experiences with it, I’ve also discovered that it has a horrendous security weakness. If you know how to do it (and no, I’m not going to give instructions here), it’s possible to obtain the names and addresses of other people using the same system to make their payments. Fortunately, it doesn’t leak credit card numbers, but even a name and address is bad enough. Consider how valuable that information might be to potential (or actual) stalkers, or aggressive ex-partners, etc.

For that reason, I refuse to use the software. And I have told my daughter’s school why I refuse to use it. I’ve gone back to the old-fashioned method of sending a cheque or cash. I would strongly recommend that everyone else avoids using it as well, if at all possible.

Web forms are not a replacement for email

You run an online business. You need a way for people to contact you. But if you publish your email address on your website, then spammers get hold of it. The solution? Don’t publish your email address. Instead, put a contact form on the website. Good idea, no?

No. It’s not a good idea. While contact forms have their uses, and they’re certainly a very valuable addition to publishing your email address, they are not a substitute for it. Here’s why.

If you are running an e-commerce site based in the UK, it’s illegal.

Yes, really. The Electronic Commerce (EC Directive) Regulations 2002 state that:

A person providing an information society service shall make available to the recipient of the service and any relevant enforcement authority, in a form and manner which is easily, directly and permanently accessible, the following information—
[…]
the details of the service provider, including his electronic mail address, which make it possible to contact him rapidly and communicate with him in a direct and effective manner

That’s pretty clear. Law firm Pinsent Masons, who specialise in online law, spell it out a bit more clearly in their guidance:

The email address of the service provider must be given. It is not sufficient to include a ‘contact us’ form without also providing an email address.

Unless you’re happy about breaking the law, therefore, you have no choice but to publish your email address on your website if you are engaged in any kind of e-commerce.

It’s not user-friendly.

OK, so maybe you don’t care about the law. Or maybe you’re not running an e-commerce site, and hence the law doesn’t apply to you. But it’s still a bad idea to force your users to contact you via a form.

Using a form leaves no permanent record with the sender. When I send an email, I’ve got a copy of it in my sent mail folder. If I use a form, I don’t have that unless I manually copy and paste the content I’m submitting into another document and store it.

It breaks continuity of replies. When I get a reply to an email, if I need to follow it up I can simply reply to the reply. If the only way of making subsequent contact is the form again, then there is no continuity between the different messages.

I can’t be certain that the form worked. With email, I usually know if something hasn’t been delivered. With a form, if I get no reply then I have no way to tell if that’s because the form failed or because I’m simply being ignored. And, if I do want to submit the form again, just in case, I can’t simply resend it, like I can an email; instead, I have to retype it (unless I copied and pasted it into a saved document the first time, of course).

Forms often require data which simply isn’t relevant. I don’t necessarily want to have to give you my phone number and postcode, for example, just to be able to make a simple query.

Forms are often less accessible to the visually impaired. Many people with visual impairments use specially adapted email software. Forcing them to abandon that and use your website form instead is almost certainly a breach of anti-discrimination legislation. But, you may say, surely if someone can use a website then they can use a form? Well, no. Maybe they can’t use the website. They could ask someone to tell them the email address so that they can use it in their own software. Or they may use dictation software which won’t work with your form.

In conclusion

Don’t replace your email address with a web form. Even if you aren’t legally obliged to publish your email address, there are many good reasons why you should. It isn’t necessary to publish it in a format where it can be obtained by address harvesters, provided it’s simple enough for people who understand what it is (see the “about” page of this site for an example). But, if you run any kind of service where people will want to contact you, then letting them do it by email is a must.
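By way of illustration, here’s a sketch of one common approach (the address below is a placeholder, and this site’s “about” page may do it differently): publish the address in a form humans can read, then enhance it into a clickable link with a little JavaScript, which simple harvesters typically don’t execute.

    <p id="contact">Email me at: mark [at] example [dot] com</p>
    <script>
      // assemble the real address at runtime for human visitors
      var addr = "mark" + "@" + "example.com";
      document.getElementById("contact").innerHTML =
        'Email me at: <a href="mailto:' + addr + '">' + addr + '</a>';
    </script>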

Google Webmaster Tools Oops

Just a heads-up for anyone who uses Google Webmaster Tools. This morning at work, I had an email from the marketing department wondering why a bunch of Merchant Center feeds had failed. It turned out that, for some reason, Google had managed to lose the verification for all the domains to which this applied.

Curious, I logged in to my own Webmaster Tools account (my personal one, which covers this site as well as the others I run), and, lo and behold, the same had happened. It’s not such an issue on my personal domains as it is at work, since I’m not selling anything, but it’s still irritating.

So if anyone else is using Webmaster Tools, I’d suggest checking your site verification. That’s particularly important if you’re using Webmaster Tools for SEO purposes via the Sitemaps facility, or if you’re using Merchant Center or any other Google feed which relies on verification.

I’d be interested to know, too, how many other people have seen the same issue. When my colleague phoned Google about the company sites, they hmm’d a bit and didn’t exactly admit to there being a problem, but it seems too much of a coincidence that it happened on my own account at the same time. Has anyone else seen the same thing?

In web design, usability trumps everything

In the course of a discussion about fixed v fluid width layouts for web pages, I dared to make the comment that fixed width design can actually be a usability bonus in some cases – something which is close to heresy among web design purists. As I pointed out, most web users are non-technical, and most non-technical users don’t like having to resize their browser windows in order to avoid text columns being too wide to read. At which point, someone else, who seemed to have misunderstood the point I was making, chipped in with:

People heavily into the graphic arts like large fixed widths.

Which, of course, is one of the reasons why people who are heavily into the graphic arts almost always make very poor web designers 🙂

I’m not arguing for preferring fixed widths over fluid widths, at all. I am arguing for the inclusion of fixed widths in the web designer’s toolbox, to be used where appropriate, but that’s a somewhat different argument. On the whole, I prefer fluid designs and would use one in preference to a fixed width design unless the fixed width gave me some clear benefit in usability in that particular context. But it’s that word “usability” in there which is the key. My real argument is simply this:

In web design, usability trumps everything.

And that’s it. Usability is more important than visual appeal. Usability is more important than doctrinaire adherence to fluid or fixed widths. Usability is more important than winning awards. Usability is more important than validation. Usability is more important than SEO. Usability is more important than cool new Web 2.0 features. Usability is more important than whether your site is hosted on Linux or Windows. Usability is more important than, well, you name it, really.

That’s not to say that none of these matter at all. On the contrary, they matter a lot. But the reason they matter is because they contribute to usability, if done right. Good visual design makes a site more usable. Valid, semantic HTML and CSS makes a site more usable. Good, human-centred SEO makes a site more usable. Ajax and Web 2.0 features can make a site more usable. So long as what you’re doing contributes to usability, then it’s worth doing. But the moment you take your eyes off usability and start doing things for their own sake, then you’re missing it completely.
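And, coming back to the fixed-versus-fluid question that started all this, here’s a minimal CSS sketch of the compromise I usually reach for when a width constraint genuinely helps usability (the class name is my own): a fluid column with a cap, so small windows get the full width while very wide ones don’t produce unreadably long lines of text.

    .content {
      width: 90%;          /* fluid: tracks the window size */
      max-width: 60em;     /* capped: stops text columns becoming too wide to read */
      margin: 0 auto;      /* centre the column once the cap kicks in */
    }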