Effective copyright in a networked world

Attended a very interesting and provocative lecture last night by William Patry, Senior Copyright Counsel at Google, Inc. and author of the 6,000-page epic Patry on Copyright, reportedly the world’s largest copyright reference work.

Patry was giving the SCL Annual Lecture in London. This year’s lecture was dedicated to the late Prof Sir Hugh Laddie, who died last year, and the topic, “Crafting an Effective Copyright Law”, was taken from a statement by Sir Hugh that “we do not need a strong copyright law, but rather an effective copyright law”.

The full lecture should be available on the SCL website before too long (and I’ll add a link when it is), and I highly recommend listening to it. Patry’s thesis was that copyright should not be treated as private property, but as a government programme to promote creativity and innovation by legislative means. Therefore copyright laws – and in particular any proposal to extend their scope – should be assessed on the basis of empirical data rather than appeals to what is “right and just” or the “moral case at the heart of copyright” (to quote Andy Burnham’s vacuous justification for extending the term of copyright for sound recordings).

I think Patry’s rhetorical characterisation of copyright as a government programme is a fruitful one, which can encourage a more level-headed assessment of copyright (and indeed other “intellectual property”) laws. Copyright is a good thing, but its effectiveness needs to be scrutinised continually, and it should be remembered that its purpose is to benefit society as a whole by encouraging the creation of more copyright works, not simply to enrich copyright owners (not that those two ambitions are mutually exclusive, of course).

Equally, I think Patry overstated the case against treating copyright as a property right. He is correct that referring to copyright as “property” imports a wide range of legal, moral and emotional connotations which can be problematic (“An Englishman’s copyright proprietorship is his castle”, perhaps?). However, there is also a strong pragmatic case for treating copyright as property, in that it allows copyright to be assigned, licensed, bequeathed, valued and so on like other forms of property.

The big lesson, though, is that attempts to use copyright to protect business models that have been superseded by new technologies – not least the growth of high-speed internet connections and the “pervasive network” – are ultimately harmful to society and indeed to copyright owners themselves. Technology is going to drive new ways both of doing business and of accessing copyright material, and copyright owners cannot expect the law to protect them from the hard work of adapting to this. That’s just a bailout by another name.

SaaS and cloud computing: a low priority for 2009?

I was interested to read Silicon.com’s “CIO Agenda 2009”, summarising the priorities for 2009 IT spending as identified by twenty-five IT directors and CIOs.

Security came top of the list (I wonder if it ever doesn’t?), followed by virtualisation and IT governance & measurement. Software as a service (SaaS), and grid and utility computing, came a long way down the list. So does this mean that claims about the growth of SaaS and other hosted services are hype?

It clearly shows that some of the more enthusiastic claims made for cloud computing are overblown. As another article pointed out recently, there are good reasons to doubt that enterprise software is going to disappear into the cloud en masse. Locally-installed software – whether on an internal server or individual PCs – continues to have many benefits over systems accessed over the internet, ranging from speed to resilience.

In any event, much of the move to SaaS and hosted services is supplier-driven rather than customer-driven. I’m not surprised that CIOs are making SaaS a low priority, because in most cases the driver for their decisions is not going to be, “How can we get this functionality off our local systems and into the cloud?”

However, when they come to address the higher-priority issues, in many cases they will find that the solutions that vendors are offering them are cloud-based hosted systems, rather than traditional, locally-installed software. That may lead to decisions as to whether cloud-based systems are appropriate for a particular business function, but the end result is that the overall use of hosted services increases, even if that is not a priority in itself for CIOs.

Indeed, the fact that decisions about IT investment are driven (quite rightly) by other priorities is one reason why the crucial differences between locally-installed and remotely-hosted systems can be overlooked. But that’s a matter for future posts.

An Open Platform – but read the small print

The Guardian’s launch of its Open Platform API is a fascinating example of how pervasive networking is encouraging radical experiments in new business models. The Guardian is loosening (some) control over its content in order to extend its reach as a global brand. In the wider picture, this is an attempt to break free from a business model that is seen by many as in terminal decline (printed newspapers) and find a new identity on the web.

Open Platform has two main elements: the Content API, which allows content from the Guardian to be integrated into other websites and online services, and the Data Store, which is a collection of data sets “curated” by Guardian journalists.

The Guardian’s terms and conditions for use of its Content API and Data Store are slightly at odds with the publicity about freedom and openness. Use of Guardian content is subject to detailed requirements to which website owners will need to give careful consideration before including content on their site. Terms of particular interest include:

  • The current free access only applies during the beta trial period. After that (or during the beta trial, if the need arises), the Guardian may introduce “alternative partnership models” – in other words, paid-for access for particular uses.
  • Websites using Guardian content must comply with the Guardian’s requirements. In particular, the site must not contain any illegal or discriminatory content, promote violence or illegal activity, or “be capable, in our sole discretion, of interpretation as racist, sexist or homophobic or promoting such views”.
  • Only 5,000 API requests can be made per day, so high-traffic websites may find they need to increase their allowance of requests – which in turn may be an area of potential revenue for the Guardian (though this is not stated explicitly at present).
  • Content must be refreshed or deleted every 24 hours; it cannot be retained in a static form beyond that period.
  • The content must not be edited, translated or otherwise adapted.
  • A provision of critical importance: those using Guardian content under this scheme must carry Guardian advertising on their sites.
  • The Guardian can terminate access at any time without giving a reason.
  • The Guardian can use your name, logo and website address for promotional purposes.
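To see what complying with two of these terms might mean in practice for a site operator, here is a minimal sketch of a client-side cache that enforces the 5,000-requests-per-day quota and the 24-hour retention limit. The figures come from the Guardian’s stated terms, but the class and method names are entirely hypothetical – this is not the Guardian’s own client library, just an illustration of the housekeeping the terms would require.

```python
import time

DAILY_REQUEST_LIMIT = 5000          # Guardian's stated cap on API requests per day
MAX_RETENTION_SECONDS = 24 * 3600   # content may not be kept statically beyond 24 hours


class ContentCache:
    """Hypothetical client-side cache honouring the Open Platform terms."""

    def __init__(self, clock=time.time):
        self._clock = clock          # injectable clock, to make the logic testable
        self._items = {}             # article_id -> (content, fetched_at)
        self._requests_today = 0
        self._day_started = clock()

    def _reset_quota_if_new_day(self):
        # Start a fresh daily allowance once 24 hours have elapsed.
        if self._clock() - self._day_started >= 24 * 3600:
            self._requests_today = 0
            self._day_started = self._clock()

    def store(self, article_id, content):
        """Record newly fetched content, counting it against the daily quota."""
        self._reset_quota_if_new_day()
        if self._requests_today >= DAILY_REQUEST_LIMIT:
            raise RuntimeError("daily API request quota exhausted")
        self._requests_today += 1
        self._items[article_id] = (content, self._clock())

    def get(self, article_id):
        """Return cached content, or None if it is stale (older than 24 hours)."""
        entry = self._items.get(article_id)
        if entry is None:
            return None
        content, fetched_at = entry
        if self._clock() - fetched_at > MAX_RETENTION_SECONDS:
            del self._items[article_id]   # stale: must be refreshed or deleted
            return None
        return content
```

The point of the sketch is simply that these are not passive licence terms: anyone integrating the content at scale would need machinery of roughly this shape to stay compliant.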

So this is a long way from a Creative Commons-style free-for-all. In some respects it represents a “toe in the water”: allowing relatively small-scale access to unamended content. Clearly the Guardian’s concern will be to ensure that this move helps it generate advertising revenue, both directly from advertisements placed on participants’ websites, and by extending its reach and thus enabling it to raise its online advertising rates over time.

It will be interesting to see whether other newspapers follow suit, and if so whether they will take a more or less “conservative” approach than the Guardian in terms of retaining control over their content. At the very least, this provides a welcome contrast to the prevailing funereal tone of media coverage on the future of the newspaper industry.

Introduction

This blog is intended to take a look at the legal issues surrounding one of the key trends in how information technology is used in business today: the move “into the cloud”, away from locally-installed software, data and systems towards hosted services accessed over the network.

I am a technology lawyer working in the south east of England, and over the past few years – at an accelerating rate in the past couple of years – I have noticed an increasing move towards the use of hosted solutions. However, in many ways this shift is not fully reflected in how the legal and contractual issues are addressed: “Software as a Service” (SaaS) contracts are often expressed in terms of “granting a licence” rather than “providing a service”, which can cause problems for both the supplier and the buyer.

Then there are the broader ways in which the move to “pervasive networking” threatens existing business models and provides opportunities for those models (and new models) to develop. At the time of writing, two key technology stories are the dispute between YouTube and the PRS over licensing terms, which has led to YouTube blocking access to music videos for UK users, and the Guardian’s launch of an open API allowing reuse of its content (albeit with strings attached, notably as regards inclusion of advertisements).

The first can be seen as a collision between an old copyright licensing model and new technological means of delivering copyright material, the second as a radical attempt to find a new business model for newspapers in a networked world. On the one hand, restricting access in order to maintain control; on the other, loosening control in order to extend reach. It remains to be seen which of these approaches will prevail.

As regards this blog itself, at the time of writing this first post it is somewhat “rough round the edges”. This is a matter of deliberate choice: one feature of the networked world is that it enables projects to get off the ground quickly, rather than getting bogged down in preparatory matters. What I hope this blog will provide is useful and interesting content on issues relating to “law in the cloud”.