Necessary hashtags and the art of detecting media bias

Twitter (or, at least, my particular Twitter bubble) has been busy these last 24 hours pouring scorn on the Home Secretary’s apparent admission on the Andrew Marr Show on BBC1 (and later, in conversation with Sophy Ridge on Sky News) that she would consider legislating to force communication suppliers, such as WhatsApp, to break their encryption systems so as to permit governments to access messages.

I’m not going to rehash all the reasons why breaking or weakening encryption is wrong. Plenty of people more knowledgeable about it than me have already done that. I’m more interested in how she ended up making such a statement in the first place.

First, some background. The idea of forcing communication suppliers to add “backdoors” into their systems has been floating around for a long time, particularly in policing circles, as it would clearly be beneficial in some cases to be able to get at the content of every electronic message. So this is a proposal that tends to bubble up every time there is a major terrorist or criminal incident.

Such proposals have never actually come to anything, though, partly because they don’t stand up to technical scrutiny but also because they would be firmly resisted by many large and influential corporations – like banks and other financial institutions – as well as other government agencies which themselves rely on encrypted communications.

So, how did it crop up again this time, and why was the Home Secretary so willing to countenance it?

It’s important here to see the whole thing in context. If you haven’t already watched the full interview with Andrew Marr, then do so now on iPlayer before it expires. Because it’s clear that the first person to say something stupid in that exchange wasn’t Amber Rudd, but Marr. He introduced the topic of end to end encryption, made a complete hash of explaining it, and then invited Rudd to agree with him that it is “completely unacceptable” that the government can’t access terrorists’ messages on it.

This is intellectually unsustainable, but political dynamite. Rudd could not, realistically, disagree with him – imagine the tabloid headlines if she had dared to suggest that it is acceptable for criminals to be able to communicate in secret – but neither could she agree without falling straight into the trap that Marr had laid for her.

It was clear from that exchange that Rudd is not only uninformed about how encryption works, but was uncomfortable discussing it. It’s easy to mock her misguided use of terminology, but when she tried to divert the conversation onto safer ground, Marr dragged it back. It was, essentially, two people talking about something neither of them really understands, but which both agree is a bad thing.

Having fallen headlong into Marr’s elephant trap, though, Rudd couldn’t easily crawl out of it. This was more of an issue later on Sky News, on Ridge on Sunday. Unlike Marr, Sophy Ridge had done her homework, and was able to point out the glaring inconsistency between Rudd’s assertion that she fully backed strong encryption and her threat to legislate against it. But it was too late for Rudd to row back on the statements she had made to Marr, so instead she had to resort to the usual political trick of speaking firmly, keeping a straight face and refusing to acknowledge the contradiction in the hope that viewers would hear what they wanted to hear.

The real question this raises is: why was Rudd so poorly briefed in the first place? Given that it had already been publicised that Adrian Elms had used WhatsApp shortly before murdering four people, why was it not anticipated that the question of accessing it would crop up? Why couldn’t Rudd have defused Marr’s line of questioning by pointing out to him that he didn’t understand how it worked?

I can only speculate here, but it seems to me that this is an issue with the Home Office which goes back a long way – it was clearly visible during Theresa May’s time as Home Secretary, and even before that under the last Labour administration. The hiving off of Home Office functions into the newly created Ministry of Justice was one attempt to deal with a department that former Home Secretary John Reid once described as “not fit for purpose”. But this has seemingly resulted in not one, but two dysfunctional ministries.

The particular problem with the Home Office has been a long-standing disregard of personal liberty, combined with an ill-concealed contempt for the tech sector. Apart from legislation drafted by the Home Office which combines illiberality with technical infeasibility, this has repeatedly manifested itself in a lack of desire to engage with reasonable and informed criticism. Ministers are left unbriefed, and in danger of looking stupid (as Rudd, May and their Labour predecessors all did, regularly, when talking about Internet-related issues), because there is a perception that the general public, and the tabloid media, don’t care about the details. Only nerds care, and nerds don’t matter.

I don’t believe that the government will legislate to force companies to break encryption. There would be too much opposition, both internally and from industry, for that to happen. But we will carry on seeing these kites flown every time there is a terrorist incident, until this anti-tech and anti-freedom factor within the Home Office is rooted out. Ministers could make a start by insisting on being properly briefed in future, and hiring a few SpAds who understand the issue and can offer unbiased advice.


While I’m on the subject (and apologies if this is turning into too much of a long read), consider for a moment why WhatsApp is in the news. As I said at the top of this article, it is known that Adrian Elms used WhatsApp only a few minutes before embarking on his murderous spree. But how do we know that?

Given that WhatsApp is end to end encrypted, and only the sender and recipients of a message can read it – or even know that it has been sent – the only way to know this is to have access to either the sender’s or the recipient’s phone.

In this case, media reports say that the police know Elms used WhatsApp because they found a message from him on a phone seized from a known acquaintance in Birmingham. But they don’t know who else he may have communicated with, because his phone is locked and they are unable to access it.

If they can get into it, though, then they will have a history of the WhatsApp messages that Elms sent and received. They were not secret to him, and neither are they to anyone who successfully accesses his phone. End to end encryption protects messages from being viewed in transit by third parties; it doesn’t protect them from being viewed on either of the devices they were sent from or to.
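To make that concrete, here’s a minimal sketch in TypeScript using the Web Crypto API. It is emphatically not WhatsApp’s actual protocol (which is based on the Signal protocol, with keys negotiated between devices); I’m just assuming a single shared symmetric key in order to show that the plaintext necessarily exists at both endpoints, and only the ciphertext crosses the network.

```typescript
// Illustrative only: a shared AES-GCM key stands in for whatever key
// material the real protocol negotiates between the two devices.
async function demo(): Promise<void> {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true,
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));

  // Sender's device: the message is readable here, before encryption.
  const plaintext = new TextEncoder().encode("See you at the usual place");
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);

  // In transit (and on the provider's servers) only the ciphertext is visible.

  // Recipient's device: the message is readable again, after decryption.
  const received = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  console.log(new TextDecoder().decode(received));
}

demo().catch(console.error);
```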

In fact, if you read the media reports carefully, the idea that the police are being stymied by lack of access to WhatsApp isn’t coming from the police. They may be happy with that particular misconception being spread around, because it may help minimise the prospect of accomplices deliberately deleting messages that may be relevant (although, in practice, it does now seem that Elms really was a “lone wolf” and had no accomplices). But the idea that WhatsApp is deliberately hindering the investigation is being fed by the media, supported by off-the-record comments from Home Office insiders (again, not explicitly, but with hints dropped in headlines that aren’t borne out in the text of the article).

The police’s problem is simply that they can’t unlock Elms’s phone. Or, at least, they aren’t admitting to being able to, not yet. And if they do get into it, there are probably far more interesting things they can discover from it than who he was messaging.

There’s a subtext here that’s worth exploring. Google, Facebook, Twitter and other tech companies are in the firing line at the moment because of their seeming reluctance to remove extremist material. Some of these criticisms are justified, others less so – the tech companies do actually have a good record of addressing explicitly illegal material, as indeed Amber Rudd tried to point out before being interrupted by Andrew Marr; the real issue comes with the stuff that isn’t necessarily illegal but may be offensive or inappropriate. The fact that adverts from well known brands have been appearing on YouTube videos posted by Daesh and their sympathisers has been in the news a lot recently, particularly in the context that these adverts earn money for the videos’ creators.

This is a valid concern, and Google et al could certainly do more to ensure that advertisers have more control over the material that their adverts appear alongside. There are also perfectly legitimate concerns about where the line is drawn regarding offensive, rather than specifically illegal, content.

However, there’s an undercurrent to this which needs to be borne in mind. Google and Facebook, in particular, are very much in the business of attracting advertising expenditure away from the traditional media. The newspapers which complain about Google showing ads on jihadist videos are not neutral; they have skin in the game.

The traditional media also resent the way that search engines and social media have become the gatekeepers to their own content. There is a strong perception in the media that the tech industry is leeching away their traditional source of revenue, and offering nothing in return.

To some extent, that perception is true, although it’s also arguable that it’s not a problem – changes in technology and society’s behaviour always benefit some and not others. Airlines put the ocean liners out of business, steamships spelled the end for the tea clippers and the printing press rendered scribes redundant. The traditional print media can’t really complain if they are now on the downward slope of a hill they were once ascending.

What this means, though, is that there has, for some weeks, been a media campaign in progress against Google, Twitter and Facebook – a campaign driven as much by self-interest as any real public concern. This wasn’t helped by a particularly inept response by Facebook to an investigation by the BBC into sexualised images of children. The Westminster attack has simply played into this narrative, by allowing the media to say “I told you so”. It also gives impetus to their anti-Google and anti-Facebook campaign (and remember that WhatsApp is owned by Facebook).

The traditional print media and broadcast media would love nothing more than to see the tech giants taken down a peg or two. And their reporting reflects that. It is not unbiased. Andrew Marr’s carefully laid trap for the Home Secretary has to be seen in that context, too.


Edited to reflect media reports that the police know about Elms’s WhatsApp use from one of his acquaintances, rather than his own phone.

Throw your emails in the air like you just don’t care

I received my first batch of template emails from 38 Degrees recently.

As a general rule, most of their mail-bombing campaigns are aimed at MPs, rather than councillors, so they tend not to come our way. But the recent announcement by the Prime Minister that the UK will take several thousand Syrian refugees and then distribute them across local councils has made it a local issue, and many people are, understandably, keen to let their own councillors know their opinions.

That much I don’t have any problem with. It’s a genuine local issue, and I welcome emails from local residents on any matter which directly affects them, or may potentially do so. What I am less happy with, though, is the use of identikit mass emails as a tool.

There are all sorts of reasons why using a pre-filled email template is a bad idea. I won’t go into all of them in detail, as there’s already an extremely good explanation of why you shouldn’t use them on the WriteToThem.com website. However, having not received any personally until a few days ago, I couldn’t speak from personal experience. So, having now got a bunch in my inbox, these are my thoughts.

The couch activist’s digital cop-out

Like most councillors, I get a fair number of emails from constituents. They cover a wide range of issues, although the two most common are planning applications and dog poo. They also vary a lot in content; some are lengthy screeds and come complete with attachments of supporting documentation, others are brief “I’ve got a problem with X, can you help?” type messages.

One thing all these emails have in common, though, is that they are in the sender’s own words. And another is that they are almost always the start of a conversation rather than a one-off message.

The 38 Degrees emails, though, have neither of these characteristics. All of them have had campaign-ese wording which clearly isn’t that of the sender, and mostly doesn’t even relate to my role as a councillor or to the specific wards that I represent and the councils that I sit on – it’s just generic boilerplate. And, although I’ve replied to all of them, none of the senders has made any further contact.

When someone takes the time to look up my email address, and then compose an email about whatever is bothering them, I know it’s something they care about. They have put effort into it, or at least are planning to – even if all they want me to do is phone them, they’re willing to spend the time talking to me. I also get a feel for the personality of the sender and how strongly they feel about something.

With the 38 Degrees emails, I get none of that. Sending a campaign email via 38 Degrees is trivially easy, and requires no knowledge about either the subject matter, the person the email is being sent to or the public authority that they represent. All you need to do is go to a website, enter your name and postcode and click a button. It’s a throwaway, inconsequential act that treats serious subjects, such as the Syria crisis, in the same way as expressing an opinion on which boy-band is better or which chocolate tastes nicer.

Participating in a 38 Degrees mail-bomb doesn’t show that you care about the subject. On the contrary, it shows that you really don’t care at all. Because if you really did care, you would take the time and make the effort to get in touch directly, in your own words.

Veruca Salt’s campaign class

As it happens, I do actually support the government’s plans to resettle Syrian refugees in the UK, and I’m confident that my local authority will be capable of doing whatever is necessary locally. To that extent, I agree with the emails I’ve received. But the details are all still to be finalised, as things stand, and in any case I’m just one of many councillors and have no ability to set council policy or dictate what will be done. So an email demanding that I “ensure” that the council takes in refugees and gives “immediate” sanctuary to them is heading towards moon on a stick territory.

Asking for something that I, personally, cannot deliver is pointless. By all means, ask for my support in your campaign to get the council to do something. You may or may not get my support, but you’re perfectly entitled to ask for it. Or ask what my position is on a policy, and I’ll usually be happy to tell you. But simply demanding that I do something is just plain daft. Even if I agree with what you want to happen, I can’t do that.

This is where it gets even more silly. I’d be a lot more tolerant of seemingly unreasonable emails if they were directly from a constituent and written in their own words. I don’t expect everyone in the ward to know how local government works and what my responsibilities are. But, of course, these emails aren’t written by the people whose name appears on them. They’re written by some faceless and unaccountable activist at 38 Degrees.

Given that their style makes them instantly recognisable as being from 38 Degrees (even though the email itself does its best to hide the true origin), they’re almost certainly being written either by the same person, or a small group of people. In which case, these people ought to know how government, including local government, works. They are, after all, trying to persuade us to do something. You’d have thought they’d at least make the effort to find out what form of persuasion is most effective.

So, what we have is a mass-mail campaign being run by people who don’t really care whether their emails are persuasive – they just want to make sure we have the inconvenience of having to read them – and sent by people who can’t be bothered to do anything more constructive. If anyone really thinks this is a helpful way to engage in the democratic process, then they are sadly mistaken.

Killing the golden goose

I installed Adblock on my computer the other week. I’m well aware that this opens me up to charges of hypocrisy, given that online advertising is an essential part of how I earn a living – both in my day job, where we use advertising to get customers, and in my spare time business where I make money from adverts on my own websites. If everybody installed Adblock and used it in full, both of those would suffer considerably.

For that reason, I’ve made a point of setting Adblock to be disabled by default, and only enabling it for specific websites. The websites that I enable it for (and it is still only a small proportion of those I visit regularly) are those which have egregiously intrusive advertising.

This isn’t actually an ad. It’s just a screenshot of one.

In particular, I enable Adblock on sites which have adverts that autoplay audio, that overlay the content of the page in any way, or that cause the content of the page to move, reflow or reformat after it has initially loaded.

I’m less bothered about autoplaying video, so long as it’s contained within a predefined border of an advert that doesn’t otherwise intrude into the rest of the page. But autoplay audio is a complete no-no. Do that, and it’s an immediate block. It’s also an immediate block if an advert causes the content of a page to shift or reflow after I’ve started reading it. Which is why I no longer see adverts on The Guardian’s website, for example.
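As an aside, there is now a way to measure this kind of post-load reshuffling programmatically: modern browsers expose it through the Layout Instability API. Here’s a rough TypeScript sketch (the LayoutShift entry type isn’t in TypeScript’s standard DOM typings, hence the small local interface; this is a diagnostic illustration, not part of any adblocker):

```typescript
// Sketch: accumulate layout-shift scores, ignoring shifts caused by the
// user's own input. A large running total usually means late-loading
// adverts are pushing the content around after the page has rendered.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let totalShift = 0;

const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) {
      totalShift += entry.value;
      console.log(`Layout shifted: ${entry.value.toFixed(3)} (running total ${totalShift.toFixed(3)})`);
    }
  }
});

observer.observe({ type: "layout-shift", buffered: true });
```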

One annoying form of advert in particular seems now to have become endemic on newspaper websites. It’s the one which doesn’t seem to be there to begin with, but then suddenly appears in the middle of the content as you scroll down. I’m sure you’ve all seen them. I’m equally sure that you all, like me, hate them. As it happens, they were mentioned in the Feedback section of The Times today:

“I really don’t like the pop-up video adverts in the middle of articles on your website,” wrote AC Ruston. “There’s even one in the guidance on how to complain, for heaven’s sake. They’re an annoying distraction, and in some articles they are totally inappropriate.”

This complaint was followed by several others, and ended with an admission by the newspaper that using them was wrong:

We got the message and, as it happens, we agree. Advertisements shouldn’t interfere with the enjoyment of reading a newspaper or a website. We’ve asked the advertising department to remove this campaign.

Now, it’s easy to argue that The Times doesn’t need to use them, because, unlike most newspaper websites, it is subscription only. So advertising income is an add-on, and a fairly trivial one at that (since the paywall keeps most people out to begin with, and hence makes adverts far less valuable as the number of viewers is tiny by comparison with other national newspaper websites). Other newspapers, it can be argued, really need the advertising revenue.

The challenge of paying for free content has always been there. Research shows that up to 80% of users are unwilling even to consider paying for content. But even the 20% who are theoretically willing to pay rarely do, in practice. Traffic to The Times’ website now is less than a tenth of what it was before the paywall. News UK, owner of The Times, has backtracked on a similar “solid paywall” on the Sun’s website following an equally drastic drop in traffic. For news websites in particular, social media is an important driver of traffic – but there’s no point sharing a link to a story that most of your friends or followers can’t read.

The failure of pure subscription based models means that advertising is always going to be a key revenue source for the vast majority of websites. So maximising that revenue is important, and if another form of advert comes along which pays more – which these in-content video ads certainly do – then using them is very attractive.

If my experience, though, and that of the contributors to the Feedback column of the Times, is anything to go by, these adverts are almost universally disliked. And it isn’t a big step from disliking the adverts to finding a way of blocking them.

Unsurprisingly, use of adblockers has grown significantly over the past few years. Reliable statistics are hard to come by, but some estimates suggest that usage increased by nearly 70% between 2013 and 2014, with anything from 5% to 50% of ads blocked, depending on the website.

That’s a problem. It’s a very real problem. And it isn’t just a problem for websites which have excessively intrusive ads. Because when people install adblocking systems, they are quite likely to just accept the default of blocking all adverts, everywhere. Which means that sites which don’t have intrusive adverts – which don’t autoplay audio, or relocate or obscure content – get their adverts blocked as well.

If the trend towards more and more intrusive adverts continues, therefore, the websites which use them will end up killing the golden goose that they rely on. And, in turn, they will take down the advert-supported economy of millions upon millions of small-scale websites which don’t have the resources to find alternative income. If that happens, it won’t just be those of us who rely on the advertising economy who will suffer. It will be every web user.

American Idiots

I was an early adopter of Gmail, so I have a fairly short and simple Gmail address. Unfortunately, that means it’s only a typo away from plenty of other short Gmail addresses, so I get a fair amount of email intended for someone else because of misspellings.

I also get a lot of email intended for someone else because they wrongly believe that they own my email address. I have no idea why they think that, but it seems to happen a lot.

Anyway, one of the emails I got recently intended for someone else was this one:

[Screenshot: the American Airlines e-ticket]

That, as you can see, is an American Airlines e-ticket. Obviously, the fact that I got it means that the intended traveller didn’t, which means that unless he sorts it out he won’t be taking his flight.

He must have realised that himself, because about a week later I got the same e-ticket again. I presume he had phoned customer services and complained about not getting it, so they had helpfully re-sent it. Except, of course, they re-sent it to the same, wrong address, so he still didn’t get it.

Anyway, this time I decided that I’d try and be helpful, so I replied to the email and told them:

I have no idea why this has been sent to my email address, as I am not Gary Marks and I did not buy this ticket. Please can you amend your records.

That prompted an auto-response, which is what I’d expect. Unfortunately, what I didn’t expect was that the person dealing with the issue would entirely fail to understand what the problem was. Here’s the eventual response:

[Screenshot: the reply from American Airlines Customer Relations]

This, in full, is what it said:

Dear Mr. Goodge:

Thank you for contacting Customer Relations.

Based on the information you have requested, I have determined that our reservation personnel can better address your concerns. They can be reached via 1-800-433-7300 and are available to assist you 24 hours a day, 7 days a week. If you are calling from outside the United States or Canada, please click on the URL below to determine the reservation center or General Sales Agent nearest you. Please have your flight details readily available to provide the representative.

www.aa.com/i18n/utility/internationalReservationsPhoneContact.jsp

Should you require similar assistance with reservations in the future, we recommend you call the above number for a more expeditious response. We have an around-the-clock dedicated staff of professionals eager to resolve concerns for customers holding open reservations. If you still have questions or concerns after your trip is completed, we’d be happy to hear from you in Customer Relations.

Mr. Goodge, thank you for bringing this to our attention.

Sincerely,

Lourdes Foyt

Customer Relations

American Airlines

This is the wrong response on so many levels that I don’t really know where to start. So I might as well end with this.

Fighting Yesterday’s Battles

[Image: Duty Calls]

I recently got involved in an argument on the Internet with a small group of people who think that the web design standards of the 1990s are still valid today. In support of this, one of them pointed me at anybrowser.com – a site which, not uncoincidentally, dates from the late 90s.

My antagonists’ argument was, basically, that if it works it’s good enough, and the use of tables for layout, font tags, etc is perfectly acceptable provided you follow the guidelines on that site (and others) not to target your site at a particular subset of browser users. As a corollary of that, they also argued that if a website doesn’t work perfectly with IE6, it’s the fault of the web designers who aren’t doing their job properly. One even went so far as to accuse me of “Nazi tendencies” for insisting on web design standards rather than pragmatically using what works.

I don’t think I need to tell anyone who works in the web design industry that that’s wrong. But it’s worth exploring a little bit why it’s wrong.

Firstly, of course, the Any Browser campaigns didn’t always agree with each other. Anybrowser.com is happy with tables for layout. Anybrowser.org cautions against them. Anybrowser.com doesn’t use CSS. Anybrowser.org recommends it, albeit with caveats. So even historically, there’s no consistency.

However, there are two main themes running through all the Any Browser campaigns of the 90s:

1. Websites should not be targeted at users of any specific browser.

2. All websites must be backwards compatible with all older browsers, and it’s unrealistic to expect users to upgrade.

As far as the first point is concerned, it’s worth noting that that argument has been won. Back in the late 90s, shortly after the introduction of Internet Explorer, there was a real danger that the web would become siloed into sites compatible with Netscape (remember that?) and sites compatible with IE, each using different proprietary extensions. The Any Browser campaigns argued fiercely against that, and rightly so.

These days, though, that simply isn’t an issue. I use all five of the major browsers – Chrome, Firefox, IE, Safari and Opera – and I can’t remember the time I last encountered a website which worked better in one of them than the others.

What matters these days isn’t browsers, it’s devices. “Any Browser” is no longer an issue. “Any Device” is very much so. Sites which work well on a desktop can fail miserably on a tablet or smartphone. And, of course, vice versa. The challenge for web designers these days isn’t to make sure that their sites work on two or more browsers, it’s to make them work on a multiplicity of desk based and mobile devices.

I have to admit that not all of my own websites meet that requirement. I’ve been working towards it, of course, and one by one I’ve been updating them to use responsive frameworks that will work on phones and desktops alike. But I’m not completely there yet. In my day job, cross-device compatibility is utterly crucial, and on my desk I have not only the PC that I do my work on but also a tablet and smartphone that I need to test any visual change on before signing it off to go live.

Making that work, though, means sticking to standards. And it means sticking to standards designed for modern technology, not the technology of 20 years ago. For example, using tables for layout breaks horribly on smartphones – it makes sites unreadable without horizontal scrolling, which is one of the bigger no-nos. These days, we have to be standards compliant, and we have to be both cross-browser and cross-device compatible. In short, we have to write websites, not just for any browser, but for any device.
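As a quick illustration of the symptom, a few lines of script are enough to spot when a page has ended up wider than the screen it’s being viewed on – which is exactly what fixed-width table layouts tend to do on a phone. This is just a diagnostic sketch, not a substitute for proper cross-device testing:

```typescript
// Sketch: warn when the rendered page is wider than the viewport, the
// usual symptom of table-based or fixed-width layouts on small screens.
function hasHorizontalOverflow(): boolean {
  const doc = document.documentElement;
  return doc.scrollWidth > doc.clientWidth;
}

window.addEventListener("load", () => {
  if (hasHorizontalOverflow()) {
    console.warn("Content is wider than the viewport - expect horizontal scrolling on small screens.");
  }
});
```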

Which leads on to the second major theme of the Any Browser campaigns: It’s unacceptable to expect users to upgrade their browsers.

Unlike the first theme, of cross-browser compatibility, this theme has been proven wrong. And there are two main reasons why it is wrong.

The first is that, like a lot of statements made about the Internet at the time, it was based on an entirely false assumption about the rate of technological change. When installing an updated browser meant, at the very least, a lengthy download over a dial-up connection, an often complex user-initiated installation routine and (in the case of Netscape), even paying for the upgrade, there were very many good reasons not to do it unless you really wanted to. And, equally, website operators needed to be aware that users had many good reasons not to upgrade. Backwards compatibility was, therefore, an essential part of the Any Browser principle.

These days, none of those apply. All modern operating systems have an auto-update function for software, and all modern browsers either take advantage of it or use their own auto-update system. In some cases (Chrome being one), the auto-update takes place in the background and the user won’t necessarily even notice that it has happened. And people don’t connect to the Internet over dial-up any more. Even a slow ADSL connection is easily fast enough to download a browser update with no problems. With the exception of some PCs on locked-down corporate networks (which are an entirely different scenario, and one that isn’t really relevant here), the argument that upgrading a browser is difficult, complex or inconvenient simply no longer applies.

The second reason why it’s wrong to insist on backwards compatibility is, though, even more powerful. And it was completely unforeseen by the Any Browser campaigners back in the 90s. That reason is security.

The growth of the Internet has facilitated a lot of things, many of them entirely beneficial. But it has also facilitated a lot of bad things. When I started out in the IT industry, back in the 80s, hacking into a computer generally meant having physical access to it, or at least being part of the local network it was connected to. The Internet made it possible to hack into a computer anywhere in the world without going anywhere near it. And malicious software authors rapidly created programs to do just that. Viruses, trojans and other forms of malware are an everyday part of the 21st century Internet. And so is the need to defend against them.

But defending against the efforts of malware creators means keeping up to date with the necessary defensive measures. Which in turn means keeping Internet-connected software up to date and patched against any newly discovered vulnerability.

This might not be so bad if the only thing at risk was the user’s own PC. I used to have a friend who always left his car doors unlocked, because, as he put it, “there’s nothing in the car worth stealing, and I’d rather they didn’t smash the windows to discover that”. A lot of people have the same approach to their computers.

The problem with that approach is that it isn’t just the user’s own PC which is at risk. Once malware gets onto a PC, it can become part of a botnet which in turn is used to attack other computers, maybe sending spam, or acting as part of a DDoS attack, or simply spreading the infection. Most spam comes from botnets these days.

If you have unpatched older software connected to the Internet, therefore, you are not merely a danger to yourself, but to other Internet users as well. If you are using a centrally managed computer at work then it isn’t your problem, it’s your IT department’s, and they can answer to their own management if they allow company PCs to become infected. But if you are a home user, or a small business running your own equipment, then not only is it basic common sense to keep up to date with software upgrades but there’s a very strong argument that it is a moral imperative. People who refuse to upgrade are contributing to the problems of spam and malware experienced by everyone else.

As far as website operators are concerned, that means they also have a moral obligation to encourage their users to keep their software up to date. And if that means deliberately refusing to cater for the small minority using browsers several generations behind, then, overall, that is a positive move.

The best way to encourage users to keep up to date is to stick with modern web standards. That doesn’t necessarily mean using all the bleeding-edge features of HTML5 and CSS3, but it does mean writing websites that comply with HTML5 standards rather than using outdated feature sets of older implementations. And, as a bonus, using HTML5 will also make it much easier to create websites that are truly cross-browser and cross-device compatible, which is the ultimate aim of the Any Browser campaigns.
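Checking compliance is easy enough to automate, too. Here’s a rough sketch that posts a page’s markup to the W3C’s Nu HTML Checker – I’m assuming its documented out=json endpoint and a runtime with a global fetch (a modern browser, or Node 18 or later):

```typescript
// Sketch: send markup to the W3C Nu HTML Checker and print any messages.
async function validateHtml(html: string): Promise<void> {
  const response = await fetch("https://validator.w3.org/nu/?out=json", {
    method: "POST",
    headers: { "Content-Type": "text/html; charset=utf-8" },
    body: html,
  });
  const result = await response.json();
  for (const message of result.messages) {
    // Each message has a type ("error", "info", ...) and a description.
    console.log(`${message.type}: ${message.message}`);
  }
}

validateHtml(
  '<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>Test</title></head><body><p>Hello</p></body></html>',
).catch(console.error);
```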

The Any Browser campaigns of the late 20th century had two fronts. History shows that they’ve won one, and lost the other. There is no point now in revisiting either of them. What matters now is ensuring that the web remains an open, interoperable platform accessible to any user and any website developer on an equal basis. That outcome is still far from assured. Let’s not waste time fighting yesterday’s battles when we still have today’s to win.

Meltwater and the copyright right – a brief update

Back in 2011, I blogged on the disturbing case of Meltwater and the Newspaper Licensing Agency. If you’ve got time, go and read the original article before coming back to this one. But, if you want the TL;DR version, here it is: the appeal court decided that following a link to material on the web could be an infringement of copyright if you didn’t have permission, because doing so would inevitably create a local copy of that material in your browser, and, unless authorised, that’s an infringement. As I said at the time,

Copyright law does have explicit exceptions for temporary or transient copies which exist merely to facilitate the transmission or lawful use of a work. The basis behind the Meltwater judgment is that such a permission only applies to lawful use, so if a particular use is not lawful then even a temporary copy is a breach of copyright.

I ended that post by hoping that Meltwater would appeal and that common sense would prevail. But I wasn’t holding my breath.

Now, nearly three years later, we have an update. And, fortunately, common sense has prevailed. Meltwater did appeal to the Supreme Court, and they won. And the Supreme Court itself then referred the question to the European Court of Justice to get an EU-wide ruling. The ECJ, in turn, upheld the Supreme Court’s verdict. So the original appeal court’s ruling that, as I put it at the time, “a use can be unlawful just because the publisher says so” has been overturned. On the contrary, as the Supreme Court put it, “a use of the material is lawful, whether or not the copyright owner has authorised it, if it is consistent with EU legislation governing the reproduction right”.

The judgment is quite complex, and goes into a lot of detail, but the gist of it is that the exemption to copyright for transient copies is a lawful use in and of itself, and does not rely on any other right in order to be lawful. There may be other rights being infringed, of course, and the judgment refers to a case where they were. But, crucially, it makes the important declaration that even if other rights are being infringed, the exemption for transient copies is absolute and cannot be nullified by the publisher’s lack of consent. Which means that if the transient copies were the only possible infringement, then no infringement at all has taken place.

This has a lot of ramifications beyond the case in question. The Guardian has headlined the story, “Internet users cannot be sued for browsing the web”, which is certainly true, but there are other aspects as well. One of them is that this also settles once and for all the question of whether simply linking to publicly available material can be an infringement of copyright.

This has been addressed in the past, in other cases, but there hasn’t until now, been a definitive answer from a senior court. But this decision makes it clear that a link alone cannot be an infringement of copyright, because the link itself is not a copy of anything and the transient copies made by someone following the link are not an infringement either. (There are other, more tenuous rights, such as a “making available right”, which can, theoretically, be infringed by links in some circumstances. But if the material linked to is already public then that cannot be the case).

It also means that someone viewing or listening to a live stream online is not infringing copyright, even if the source of the stream is doing so. Because the person viewing the stream is only making transient copies, no infringement is taking place. It would be, of course, if they were making a permanent download, as well as if they were also communicating that material to the public. But both of those are an entirely different scenario. Private viewing of an illicit stream is not infringement, even when broadcasting the stream is.

So, overall, this is a sensible decision. And it’s nice to know that the courts don’t always follow a copyright maximalist agenda.

How Facebook is killing language

Lots of things are accused of killing language. Texting, for example. Or, to give its more common name, txtng. It’s quite easy to find articles in the popular media complaining that schoolchildren are using abbreviations such as ‘ur’, ‘gr8’ and ‘b4’ in their essays.

Twitter gets a bashing, too. Its 140 character limit means that it’s all too common to find yourself in the position of composing a witty and intelligent tweet, only to find yourself with -1 characters left and having to choose which spelling or grammar solecism to commit.

But no. Text and Twitter are not the worst offenders against language. Txtspk arises from the sheer awkwardness of using a phone keyboard as much as anything else. In many ways, Twitter’s limit forces you to think carefully about what you are writing. Neither of those are bad, even if they can sometimes accidentally give rise to bad habits in other contexts. The worst offender is different. The worst offender is Facebook.

That may seem a strange assertion. After all, Facebook imposes no overly-restrictive limit on message length. You don’t find yourself having to cut out words or abbreviate others. And, if you’re using it on a real computer, it doesn’t have the awkward keyboard problem of SMS. So what’s the problem?

The problem, quite simply, was Facebook’s decision to remove the “post” button and make the Enter key post instead. That may seem innocuous, but what it also did was remove the ability to insert newlines and paragraph breaks the way you normally do – by pressing the Enter key.

Facebook does still allow you to insert newlines by pressing Shift-Enter. But that’s non-intuitive and it isn’t well documented, and I’d hazard a guess that most people aren’t aware of it. It’s certainly true that most people don’t use it.
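The behaviour itself is trivial to implement, which makes the design choice all the more deliberate. Here’s a sketch of the general pattern – not Facebook’s actual code, and the element selector and submit function are invented for the example – where plain Enter posts and Shift-Enter falls through to the browser’s default of inserting a line break:

```typescript
// Sketch: plain Enter submits the comment; Shift+Enter is left alone so the
// browser inserts a newline as it normally would.
const box = document.querySelector<HTMLTextAreaElement>("#comment-box"); // hypothetical element

box?.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "Enter" && !event.shiftKey) {
    event.preventDefault();    // stop the newline being inserted
    submitComment(box.value);  // hypothetical posting function
    box.value = "";
  }
});

function submitComment(text: string): void {
  console.log("posted:", text);
}
```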

I’m not really sure why Facebook did this. Comments from elsewhere on the web suggest that Facebook were trying to encourage short posts, in order to make the news feed more Twitter-like. If so, it hasn’t really worked.

To be sure, a lot of posts are simple one-liners or single sentences. But they always have been. There doesn’t seem to have been any noticeable reduction in average post length since the change.

What has happened is that most people no longer craft lengthier comments, possibly going back over them and checking for typos and maybe reformatting them, before posting them. Instead, Facebook’s newsfeed and comments under a post often read more like a stream of consciousness. Instead of well-formatted text with paragraphs where appropriate, people just keep on typing until they’ve finished and then just hit Enter.

I don’t know about you, but I find this really irritating. It makes it a lot harder to read longer posts and comments. Facebook’s rather small and closely spaced regular font size (a fixed 12 pixels) doesn’t help here, either – both Twitter and Google Plus have larger, easier to read text.

The reason why this is particularly bad, though, is that unlike Twitter and SMS, Facebook’s lack of a short text limit means that habits learned on Facebook do transfer to other situations far more easily. People who write 140 character comments on Twitter don’t restrict themselves to 140 characters elsewhere. But people who write long screeds of unformatted text on Facebook do write long screeds of unformatted text elsewhere.

I’ve been on the Internet a long time – nearly twenty years, now – and I’ve been involved in a lot of online discussion forums, including mailing lists, Usenet newsgroups, web forums and now social media, in that time. I’m not a net-Luddite; I don’t think that everything was necessarily better back in the early days and I’m very much a fan of social media in general. But, over the past few years, I have noticed a distinct decline in the quality of writing on many of the online discussion forums I inhabit. And, in most of those cases, the decline is specifically into the type of unformatted, un-crafted text encouraged by Facebook.

So, what can be done about this? I don’t really know. I do make a point of using paragraphs in any longer content that I post on Facebook, in the (possibly vain) hope that it might encourage others to do the same. But what I’d really like is for Facebook to reverse this particular change. Maybe I should start a Facebook page about it.

If it ain’t broke…

Remember the Apple Maps fiasco? Google clearly didn’t.

I’ve previously blogged about how Google has managed to get the colour scheme horribly wrong in the latest redesign, but the latest change plumbs yet new depths of inanity.

You may have seen media reports of how Google managed to rename Basingstoke, but when my Maps were suddenly “upgraded” to the new version I noticed an equally glaring error right here in Evesham. Or, as Google now calls Evesham, “Raphaels”. Here’s a before and after screenshot:

Old Google Maps

New Google Maps

Actually, Evesham itself hasn’t been misnamed (unlike Basingstoke, which really was). What’s happened here is that a local business, Raphael’s Restaurant at Hampton Ferry, has, for some inexplicable reason, been given more prominence than the name of the town. If you zoom further in, or back out, then “Evesham” reappears on the map.

But why do this? I initially thought it might be a bodged attempt at personalisation, as I happen to know the owners of Raphael’s and eat there often. It’s not beyond the bounds of plausibility that, somehow, I’ve created enough of a digital footprint via social media that Google knows that, and is therefore highlighting it to me. But then again, neighbouring Pershore also shows up as “Holy Redeemer RC Primary School”, and I have no connection with that institution at all. In fact, until earlier today, I didn’t even know it existed.

So, what is the connection? My next thought was that it’s because Raphael’s Restaurant has a Google+ page, and a couple of generally positive reviews (currently rated 4/5, which is pretty good, really). But no, the Holy Redeemer RC Primary School doesn’t appear to have a Google+ page of its own yet. (It has an auto-generated Google one, but not a “real” one, if you know what I mean).

So I’m still none the wiser. And, while I’m not going to say anything negative about Raphael’s Restaurant (you should try the Sunday carvery, it’s excellent), I can imagine that other business owners in Evesham are somewhat less than chuffed about this. Why should Raphael’s be the first food outlet to appear on the map as you zoom in to Evesham? And why should the Evesham Pizza and Kebab House, in Port Street, be the second? (Other than the fact that I am a regular customer of theirs as well!). Why do St Richards First School and St Mary’s Catholic Primary School show up on the map of Evesham before the considerably larger Prince Henry’s High School? Why does the Vale of Evesham Christian Centre show up before Evesham Methodist Church? Why is Bonk the first shop to show up in the High Street (which is wrong now, anyway, as Bonk is moving to Port Street), and Phones 4U the first to be visible on Bridge Street?

I could go on. The entire selection of businesses on the new Google maps seems utterly random, and bears very little relationship to what people are likely to be looking for. If you want a primary school, a riverside cafe and a skate shop then it’s not a bad selection. But, realistically, how many people are going to care about these things?

I said in my previous post that Google seems to have stopped considering Google Maps to be first and foremost a map, and instead sees it primarily as a kind of geo-located business directory. That in itself is a bad move, of course. But it’s compounded by the fact that Google Maps is an absolutely atrocious business directory. It’s missing 90% of the businesses and organisations that people actually use, and of those it does include, it ranks them in an entirely arbitrary order of priority.

Anyway, enough ranting. There are three important things you need to know:

1. To opt out of the new Google Maps, click on the question mark icon at the bottom of the screen, and select “Return to classic Google Maps”.

2. Raphael’s Restaurant is definitely worth a visit if you’ve never been there before, particularly the Sunday carvery.

3. Buy your skate stuff from Bonk. Kim does a lot for the town, and needs all the business she can get.

Oh-oh-oh-oh-oh-oh-oh-oh-oh-O2

It’s Christmas day and I’m blogging about porn and censorship on the Internet. How sad is that? Anyway, there have been a lot of comments on social media about how various websites have been blocked by O2, including those by prominent campaigners in favour of filters.

Now, I’m not in favour of compulsory filtering either, for all sorts of reasons, as I’ve made abundantly clear in the past. But O2 is not the villain here, and the supposed over-blocking is nothing of the sort.

All the blocks that I’ve seen reported are blocked under O2’s “Parental Controls” setting. That is a whitelist-only setting, with all but a handful of specially selected sites blocked by default. Customers who use it have to explicitly add all other sites that they want to be able to access. The fact that a site is blocked by this setting does not in any way imply that it has been judged unsuitable for children, and in particular it does not imply at all that it contains porn or other unsavoury material. All it means is that it hasn’t been added to the whitelist.

As far as O2’s system is concerned, the setting which matters is “Default Safety”. That’s what you get if you enable filters and allow O2 to make the choice for you of what’s accessible. And the sites which are blocked by that are mostly the ones you’d expect: porn, gambling, alcohol, etc. I’m sure there are some sites which have been wrongly classified in that setting, but so far nobody has reported any.

O2 are also doing one important thing absolutely correctly, and I applaud them for it. Their unfiltered option is labelled “Open Access”. It’s not “Adult”, or “Explicit Material”, or anything which gives the impression that the only reason you’d choose it is because you want to look at dodgy stuff on the Internet. Instead, it’s labelled precisely as it is: “open”. Which is the normal state of the Internet, and what a large number of customers will prefer even if they have no desire to look at porn.

So, by all means, campaign against compulsory filtering. But don’t blame O2 for doing their best to meet customer demand at both ends of the scale, by offering a whitelist setting for those who want it, a basic filtered option for others and a properly labelled unfiltered option for everyone else.

Anyway, I’m off to watch Doctor Who. Happy Christmas, everyone!

Google Maps, where orange is the new blue (and also the new green, and red)

Google Maps is going through a bit of a makeover at the moment. There will, sooner or later, be an entirely new version of the web-based maps (which you can see in preview if you switch to the beta option), but in the meantime some of the changes that are part of the new version have also been rolled out to the existing system.

One of the things that has been changed is the colour scheme. Previously, Google used standard local mapping conventions for road colours. So, for example, in the UK motorways were blue and trunk roads were green. In France, toll autoroutes were green and non-toll autoroutes were red. That fits with signage, in both countries.

The new colour scheme, though, does away with all that and renders all roads, everywhere, in various shades of orange and grey.

I think that’s a really bad move. So do lots of people. But it’s probably best illustrated with an example. Here’s a screenshot of my local area using the new version:

(Clicking on the map will open it in a lightbox. If you don’t have a large monitor, then right-clicking and choosing “open link in new tab” will probably be better as it will allow you to see it actual size. The same goes for all the maps on this page).

The map shows Evesham at the bottom right, Worcester at the top left and Pershore in the middle. Up the left hand side runs the M5.

The major routes are reasonably easy to see, although there isn’t much of a visible difference between the motorway and other trunk roads. But can you see where the non-trunk A roads are on that map? What about the B roads? Can you tell the difference between them and unclassified roads?

The answer to that, as I’m sure you’ve realised, is that you can’t tell. Here’s the same area in the older version:

It’s immediately obvious at a glance how much clearer that is. Most importantly, Pershore is no longer isolated in a sea of back roads – you can see both the A4104 running north-south through the town, as well as the B roads linking it directly with Evesham and Worcester. Evesham, too, now has the key central spine road showing in a different colour, and, to the west of the M5, you can see the A38 which forms an important local connector in the area.

OK, so you may argue – that’s just the overview, you can see more detail by zooming in closer. Which is true. But the colours still don’t work. Here’s a rather bizarre splash of colour in Droitwich Spa, for example, where the main road is white but the slip roads at a junction are orange:

And here it is in the older, clearer version:

So why the change?

It seems to me that Google has forgotten one of the key principles of cartography: a map is intended as a representation of reality, not a work of art. To be sure, roads aren’t really painted blue, or green (or orange), so the actual colour you use for them is something of an arbitrary choice. But the way that roads are classified and used is not arbitrary, and there is a long-standing convention in map-making that the colours and iconography relate to those used in non-mapping documentation.

Going back to the first map, at the top, if you wanted to get from Wyre Piddle to Upton upon Severn, which way would you go? The map gives no obvious clues – you might assume that the only alternative to negotiating a maze of twisty country lanes is to go via Worcester. In the second map, it’s obvious: follow the A4104 through Pershore and Defford.

But, of course, people don’t use online maps in that way any more. Instead, if you wanted to get from Wyre Piddle to Upton upon Severn, you’d use the “show directions” facility of the map. And, yes, it will correctly take you through Pershore. (Here’s a link showing just that, for comparison purposes).

And I think that is the key point here. Google no longer expects users to use its maps as maps. Instead, it expects the maps to be merely a means of conveying other data, such as computer-generated routes, and advertising, and links to other Google products. The idea that someone would look at a map, and, just by looking at it, be able to tell how to get from one place to another seems incredibly old-fashioned. And so there’s no longer any need for the visual clues necessary to make map-reading easy and intuitive.

I think, though, that that’s still a mistaken assumption. Yes, one of the primary uses of Google Maps (and Apple Maps, and Bing Maps) is for computer-generated route-finding. But it isn’t the only one.

It’s telling, too, that many of the positive comments you can find about the new Google Maps (and yes, there are plenty) online are all about how slick it looks and how “cool” the colours are. One review points out that “The redesign brings Maps into sync with the look and feel of the modern Google design aesthetic”, which is certainly true. Others, like this one, talk about how easy it is to use the new maps to search for pizza. As a local search tool, it is pretty good.

I suppose we shouldn’t be surprised that Google wants the new Google Maps to be more about Google than Maps. But building in the new features doesn’t have to mean ditching the best of the old. And I find myself using Google Maps a lot less these days, so all those new features are wasted on me.

So what are the alternatives? Here are some screenshots of the competition, starting with the most obvious, Bing:

I quite like Bing Maps. They get the colours right, and the web interface has the option of using OS maps at closer zoom levels, which is a very, very good option indeed. But, at the wider level, the colours still seem a bit too muted and there isn’t as much detail as there could be.

The other web-based map that most people will probably be familiar with is OpenStreetMap. Here’s the same area, again:

One of the nice things about OSM is that it gives you the option of different tile sets. Here it is with Mapquest Open tiles:

The Mapquest colour scheme is a lot like Bing, except clearer. Purely as a general purpose mapping application, I find OpenStreetMap to be by far the best, with the Mapquest tiles being better at overview levels and the standard OSM tiles being better when zoomed in.

One that has to be mentioned, of course, is the granddaddy of them all as far as UK mapping is concerned: OS Maps. Unlike the others, OS maps don’t have a website of their own; instead, they are incorporated into other mapping sites. And they come into their own at closer zoom levels: there isn’t really anything to be gained from them at wider levels than the classic 1:50,000 series. But here are Evesham and Pershore on the OS map:

At that level of zoom, OS maps are genuinely unbeatable. The colours and iconography have been honed over decades of careful refinement, and, without the distraction of route-finding and advertising to contend with, the cartographers at OS have been able to fully concentrate on the maps themselves. It’s the inclusion of OS maps in Bing which gives Bing the edge over Google for close-up mapping, and their ability to combine OS maps with route-finding is unmatched as well.

One other that’s worth mentioning, though, is a bit of a blast from the past. Veterans of European travel in the 20th century will be familiar with Michelin Maps, but not a lot of people know that they’re online as well. Michelin is the direct opposite to OS in that it’s the wider zoom levels where they excel, so here’s a screenshot of most of Worcestershire:

Once upon a time, before Google got into the mapping act, ViaMichelin was my favourite online mapping application. Unfortunately, their technology hasn’t really moved on much since those early days – just about the only enhancement is that their maps are now “slippy” – so they leave quite a lot to be desired now. But Michelin maps, like OS maps, are maps first and foremost rather than being a vehicle for search and route-finding (although ViaMichelin does do routes), so the quality of the cartography is second to none and vastly superior to Google. I only wish they did a useful API so that I could include them on my own websites!