The latest Quilliam Club podcast is the first in a new series on ‘identity and technology’. The discussion took as its starting point Norberto Andrade’s “Oblivion: The Right to Be Different… from Oneself // Reproposing the Right to be Forgotten” (2012) 13 Revista de Internet, Derecho y Política 122-137. I am one of the voices heard on the recording, which can be found here.
I’m pleased to say that a paper of mine, first presented in the Soillse seminar series in Edinburgh last year, has now been published in the Journal of Media Law. An open access version (post-peer review) is available for download here, and if you or your institution subscribe to the journal, the final version is found here.
The abstract is below. In essence, what I was trying to do here was (a) identify some of the issues concerning language rights and language policy in respect of the media, in the context of changes in how media technologies are used, and (b) propose some approaches and tools that can inform a more thorough response to those issues. Much of the evidence in the paper is taken from periodic reports under the European Charter for Regional or Minority Languages, although in the later sections I set out some (tentative) theoretical points on the convergence of a number of ways of thinking – especially what I am thinking of as a digital linguistic landscape (in line with my interest in physical and virtual spaces). I will have more to say on this in later papers.
Oh, and the title is the often-quoted remark attributed to a President of Kyrgyzstan (Askar Akayev), quoting his son on why he wanted to learn English. Al Gore told the story in a speech about the Internet in 1994, although I encountered it in Goldsmith & Wu’s Who Controls The Internet? a decade later, and it is frequently cited in work on multilingualism and technology.
Legal measures in support of minority language media often take for granted particular models of broadcasting, but are these models valid? How flexible are key instruments such as the European Charter for Regional or Minority Languages? After assessing the applicability of existing law on minority languages to various media platforms and services, it is argued that combining approaches from cyberlaw with sociolinguistic themes of the linguistic landscape and functional completeness can provide a more elaborate account of minority language rights and policy in the context of technological development.
I’ve written a short piece for The Conversation on the Harmful Digital Communications Act 2015, legislation recently adopted in New Zealand.
I’ve had half an eye on this proposal for a while. I originally came across it through following the progress of the New Zealand Law Commission’s project on new media regulation (of which this was an offshoot), and before that its huge privacy project, which also affected the new law and the NZLC’s draft Bill.
My goal in the piece isn’t to justify or criticise the HDCA (there’s more than enough out there on the latter in particular, e.g. on sites like BoingBoing). Instead, I’m particularly interested in the tools that NZ chose – and how they are trying to tackle the issues through a number of different remedies (and indeed doctrines). The NZLC’s original idea for a specialist Tribunal didn’t survive, but that of an agency to do mediation etc, along with the potential for court orders, did. Also notable is the setting out of principles for digital communication. Not saying that I’d endorse them as a comprehensive statement, but they are intriguing. Above all, the presentation of a package gives a great opportunity to look at how it all works out in practice – and, as I say late in the piece, something other places might think about (e.g. the initial moves in the Seanad in Ireland, and what will come from the Law Reform Commission in due course).
I also enjoyed reading through the legislative debates on the HDCA (even the one that took place …). Obviously, I didn’t get to put much of that in the final short article, but even the split within the Green Party (the third party in the Parliament), and the way in which legislators of all parties referred to particular problems and scandals, made for worthwhile reading. Also interesting, if not entirely novel, was the set of submissions from various parts of the tech industry. On that score, the intermediary provisions are also worth watching. The result is something not a million miles from the Defamation Act 2013 provisions for England and Wales – a bespoke notice and takedown regime with an emphasis on passing on complaints to authors (and, consequentially, favouring non-anonymous postings). In contrast with the DA, though, much of the detail is in the legislation rather than in statutory instruments!
Dr. Emily Laidlaw (University of Calgary) and I have a joint project, as part of CREATe, on copyright, human rights and the public interest. We’ve just published (as CREATe Working Paper 2015/04) a very extensive literature review on copyright and freedom of expression, put together by our fabulous research associate Dr. Yin Harn Lee (University of Sheffield). (All of us at some point worked for the University of East Anglia, where the project was put together).
The full review can be found here, including in the opening pages a summary, and a bibliography at the end. And here’s the preface, to give you an idea of what it’s all about. We’d love to hear your thoughts, including on where we should go from here.
The relationship between copyright law and freedom of expression has always been controversial, but this tension has deepened in recent years with the emergence of the digital environment and expansion of copyright law. As part of CREATe’s theme on human rights and the public interest, our project explores the relationship between freedom of expression and copyright, including how it has changed over time and/or depending on the business model, and whether freedom of expression needs to be reconceived in relation to copyright.
We are pleased to publish this literature review on copyright and freedom of expression. The review has been expertly researched and written by Dr. Yin Harn Lee, who was employed by the University of East Anglia while completing her doctoral studies at the University of Cambridge. Her report is the result of an extensive period of research, and of regular conversations with and reviews by us. She has compiled a remarkable range of materials from around the world (both from courts and scholars), and sets out clear examples of what happens when these areas of the law meet. This review traces the nature of the debates about the interaction between copyright and free speech, treatment by the courts (focusing mainly on the UK, in its wider European context, and the USA), specific scenarios where the issues are particularly acute, and current proposals for reform.
It is our hope that this literature review provides insight to the reader on what is an incredibly uncertain area of the law. We invite you to read this literature review and provide us with your comments to help inform the second stage of this project.
From our end, the literature review has certainly been revealing about the extent of the lack of coherence in law (both statutory and case law) concerning the nature and extent of a person’s right to use a third party’s copyrighted work under the umbrella of fundamental rights. It is questionable at this stage whether there is any such right in substance, although the framework is there in law. When courts have engaged with freedom of expression it is often not in the most direct fashion – especially when disputes arise within the terms of copyright law, as they are likely to be litigated on that basis by experts in that field. The human rights implications typically emerge at a late stage or in subsequent academic writing.
Greg was a great scholar of law and technology, and his 2010 book Virtual Justice: The New Laws of Online Worlds made a major impact. His opening chapter, tracing the role of law from Cardiff Castle to Cinderella’s Castle (Walt Disney World) to Dagger Isle Castle (Britannia, in Ultima Online), is possibly one of the best introductions ever written to an academic book. Since it was published, I have asked students to read it on at least one course every year; generously, the book was made available for free download, here. His other writing, and his enthusiastic blogging at Terra Nova, reached a wide audience.
I had the pleasure of meeting Greg in person twice. The circumstances speak to his particular approach to his work. We had corresponded intermittently (not least after this tongue-in-cheek blog post of 2008), and I had hoped to submit an abstract for an event he was organising. Due to an injury I didn’t get myself together in time. So, Greg invited me to join a panel he was putting together (including his sometime co-author, Dan Hunter) for the conference (“The Game Behind The Video Game”, in New Brunswick, NJ in April 2011), and encouraged me to come – even though I wasn’t sure what I would be able to add. During the event itself, which reflected his wide interests and connections in the world of gaming, he was welcoming, funny and interesting – and we even had a chance to talk about shared interests in lesser-used languages and technology (as many obituaries point out, Greg wrote a Turkmen-English dictionary in the 1990s!).
I last encountered Greg, in virtual form, in an exchange of emails in 2013, where (again typically) he wrote a very thoughtful peer review for a journal issue I was editing, offered good wishes on a recent job change, and hoped to stop by on a visit to Scotland.
Greg made a really substantial contribution to how we think about law, technology and culture. So many of us in this field were lucky to read him, to know him (not very well in my case) and to benefit from his support and advice. He will be missed.
Dr. Tom Phillips worked on a CREATe project with me, as a research associate (Dr. Keith M. Johnston was the co-investigator). Our project on Games and Transmedia dealt with a wide range of issues pertaining to law, business and these emerging creative industries – including art/business tensions, formal and informal regulation, and how risk and disputes are handled. One point that we kept coming back to, from a range of starting points, was the tricky and often emotive subject of ‘cloning’ in the games industry. I had a few paragraphs on this in my article last year, but the real outcomes of these discussions can be found in Tom’s article, published as open access today (free for anyone to download) in the journal Cultural Trends.
In “Don’t clone my indie game, bro”: Informal cultures of videogame regulation in the independent sector (click to read/download), Tom reports on the history of cloning as an issue, informed by events and conversations in the games world, and academic and legal developments. The article also gives a great insight into discussions we had with a fascinating group of developers and others in December 2013, as part of the project. Tom has made use of many of the key points from those discussions, to try and provide a greater understanding of how the rights and wrongs of cloning are discussed within the industry (or industries). He concludes by wondering whether we have reached a position where further legal interest is inevitable.
Do read the article – and I address this in particular to legal readers of the blog, because Tom’s take on how law affects the development of and conversations within a fast-moving industry is worthy of your consideration.
In the contemporary games sector, independent developers feel there is an inadequate level of protection for their intellectual property, particularly with regards to game clones. There is also a sense that neither players nor policy-makers completely understand the specificities of how IP may be creatively, if not legally, infringed. As a result, there has increasingly been a shift towards the construction of a culture of self-regulation for indie developers, attempting to publicly shame cloners via social media, directly impacting infringers’ reputation and sales and bypassing formal regulation. This article uses interviews and workshop discussions with developers to examine the manner in which this informal culture of regulation has been perpetuated in relation to current videogame copyright legislation, and suggests how the interrelation between producers and policy-makers may help to inform the direction of future policy decisions. Examining the way appropriate practice is informally managed in independent gaming, the article considers the soundness of policy in the contemporary videogames industry.
(Edit: updated with better formatting)
I’m just returning from a fascinating two-day conference on ‘designing smart cities’ at the University of Strathclyde, chaired by Prof. Lilian Edwards (who is responsible for the title of this post) and supported by CREATe, Horizon, and Glasgow City Council.
I particularly enjoyed this event. I have an on-off academic interest in the interactions between law and the city (which brings in geography and architecture, and is seen most obviously in my ‘virtual walls’ article), and further personal interests in transportation and in modernist architecture. And, of course, in both domains, “technology”. Glasgow received Government funding after a competition (see Future City Glasgow), and so was an ideal location.
There are various plans for audio, articles and the like; these are just a few quick first impressions. No offence to those omitted – my note-taking varied across the two days, especially in and around my own contributions. (I was there to speak on the sharing economy, which is work at an early stage, and is leading me into interesting places – I had a lively lunchtime conversation about English vs London vs Scottish taxi and private hire licensing, on which I could bore for, well, Scotland/London/England…).
Richard Bellingham directs Strathclyde’s Institute for Future Cities, and is involved with the new MSc Global Sustainable Cities. He introduced the theme, highlighting that a majority of the world’s population will live in cities, which, to be successful, will need to be equitable, distinctive, and delightful. There are drivers for change, which include resources, the ongoing recession, and changes in business processes. He gave a range of examples of smart city projects, including analysis with multiple datasets.
Rob Kitchin (NUI Maynooth) gave a wide-ranging talk, including a peek into the Dublin Dashboard, but the highlight was his treatment of seven critiques of smart cities: that they are ahistorical, aspatial and homogenizing; the politics of urban data; technocratic governance and solutionism; neoliberal political economy and the corporatisation of governance; buggy, brittle, hackable systems (combining two open complex systems – cities and digital systems); profound social, political and ethical effects; and the reinforcement of power geometries and inequalities. We need to think critically, but there is promise, and smart cities are already coming into being.
My former Edinburgh colleague Judith Rauhofer reminded us that there’s always a good reason why the use of a new service makes sense, even when privacy lawyers potentially play the role of party poopers – one can be tempted to jump into the smart city, or the Internet of things, out of convenience, lack of alternatives (e.g. if smart TVs become the norm and non-smart TVs fade from the market), economic interest, and the public interest (altruism?). Yet we see the continued gathering of information as technology becomes ‘invisible’: location (e.g. eCall, to be fitted in all new cars, sends out a beacon to emergency services – which sounds great, but…), behaviour, and in particular physiological data – e.g. FitBit, dietary apps, even the smart vibrator.
David Murakami Wood, once of Newcastle but now at Queen’s University in Canada, gave a keynote address and also participated in a panel. Unfortunately I missed the start of the keynote, but was able to catch much of it, including his distinction between three uses or approaches (rational spatial planning in the European style, technology as a driver in US approaches, and discourses of modernisation and nationalism, e.g. in India). He wryly noted how smart city debates have become a vehicle for another round of ideal cities, although this time the corporate involvement is particularly significant. Amusingly, the ISO is already on the case with an attempt to standardise what a smart city is, with 46 core and 56 supporting indicators. (More on David’s contributions in the note below).
Other issues discussed included CCTV, the position of Singapore, transition towns, and a barnstorming and much-anticipated presentation on driverless cars by engineer Prof. John Miles.
We had a neat wrap-up session (with eloquent people, and me), and I made two general points as part of this final panel.
The first is how some of the debates and experiences from the early period of the commercial Internet (1995-2000) still have value. The conference included critiques of terra nullius portrayals (Rob Kitchin, Ayona Datta), a thorough investigation of the role of intermediaries and brokers (Alison Powell), a call for open platforms and to be wary of company towns and a drive towards ambient government (David Murakami Wood), a need to interrogate algorithms and data (Rob Kitchin), and, bubbling away, the question of how to handle privacy and consent (Judith Rauhofer and Derek McAuley). All of these things, to some extent, were up for debate as lawmakers ‘met’ the Internet, some for the first time.
The second was the degree to which questions of subsidiarity shone through, especially in the sessions on energy. For instance, Francesco Sindico wondered what role cities should be playing in global debates (and negotiations) on climate change, characterised as they have been so far by traditional negotiations between sovereign states, while others on his panel considered questions ranging from the innovation within post-stock transfer social housing to Singapore’s international strategy to the regional impact and consequences of the feed-in tariff in England.
(Apologies again. I’ll update this post when the proper stuff comes out…)