Papers from BILETA 2010: Scharf, Marsden, Jones and more

The first batch of papers from last spring's British & Irish Law, Education & Technology Association (BILETA) conference in Vienna has just appeared in the European Journal of Law and Technology. The EJLT, for those who haven't come across it yet, is the successor to the long-running Warwick-based Journal of Information, Law & Technology (JILT). The current issue, Volume 1 No. 2, contains revised versions of papers first circulated at the conference.

Of particular interest to me is the paper by Nick Scharf, a doctoral student at the UEA Law School, supervised by Prof. Chris Wadlow with some assistance from me. His paper on Digital Rights Management and Fair Use considers recent developments in DRM against the legal and technical background that has brought us to the current position, and he argues that modern DRM is network- rather than just content-based.

I also enjoyed the papers by Marsden (which I saw at the conference) and Jones (which I didn't). Jones (Intellectual Property Reform for the Internet Generation) looks at current debates in copyright reform, paying particular attention to the actions of, and future for, the record labels and the recording industry. Marsden (European Law and Regulation of Mobile Net Neutrality) adds to his work on net neutrality more generally by considering the position of mobile (or in US terms 'wireless') ISPs in the great neutrality debate. Despite the title and the obvious appeal of the neutrality question, it's also a very interesting take on the general business model and regulatory climate for mobile networks per se.

Self-promotion alert: a paper based on my own BILETA presentation (on computer games) appears shortly in the Entertainment Law Review – I’ve approved the proofs and it’s due out in volume 21(8), between now and the end of the year.

Calling PhD students: IT, IP, Cyberlaw

The Information Law & Policy Research Group at Oxford Brookes University and the British & Irish Law, Education & Technology Association (BILETA) are organising a one-day event for PhD students in the areas of IT, IP and cyberspace law. If you are in the early stages of your doctoral studies, this will be a very useful event for you to attend. It takes place in Oxford on 11th September 2009.

Registration is necessary (although the event is free of charge): contact Dr. Martina Gillen to register or to request further information. The day will consist of informal talks from both OBU and BILETA speakers, as well as presentations by participants (please let Martina know whether you wish to present).

If any readers of this blog do attend, do consider sharing your experiences of the day and of your fellow researchers; if you don't have a blog of your own, I'd be very happy to host your thoughts as a guest post here.

April is the conferencest month

April is a mini conference season for law in the UK, with a number of big ones taking place – I'm due to attend two and am aware of another. Probably the biggest (but not in my diary, alas) is the Socio-Legal Studies Association, meeting in Leicester in the first week of the month. Up in Edinburgh, as previously blogged, SCRIPT-ed's Governance of New Technologies conference is almost here. And finally, down in Winchester, BILETA meets for its annual conference on 22nd/23rd April, with a draft programme showing a wide range of topics due to be covered. Coverage of last year's BILETA conference is available through my (too lengthy) posts tagged bileta2008. I hope to do the same this year.

The big event in Ireland this month was the third annual Legal Education Symposium, held this year at UCD. Eoin O’Dell’s even more detailed reports are available at his blog, cearta.ie.

BILETA 2008 : In Theory

I attended a session on 'theory' as I was one of the speakers! I don't perceive my work as particularly theoretical, but it was a very interesting session and I'm glad that I ended up there. My paper, Expression 2.0: from known unknowns to unknown knowns, will be posted on the BILETA website and I'll link to it then; here are my slides (PDF). I spoke about the control of expression by social networking and web 2.0 hosts and also by internet service providers, particularly through terms of use. In the second part of the paper, I looked at different approaches to freedom of expression, freedom of communication, and the relationship between human rights and private parties.

Next up was Hayley Hooper, who talked about new technologies and the enduring role of constitutional rights. Recent trends promote 'liberal legalism' and a particular, marketised approach to supranational constitutional development.

1. The model of legal constitutionalism; not a purely structural theory: as outlined by Adam Tomkins – separate from politics, in the courtroom, control of government, etc. While these features sound reasonable, Hooper argued that they are normatively undesirable. There is a need to look at supranational entities as they have more constitutional power and control over citizens. We haven't moved beyond inherent bias in constitutions, despite what some may say. The bias that Hayley suggests in the EU context is market-based, e.g. the four (economic) freedoms. The judiciary are suited to this model; dealing with socio-economic rights is difficult.

2. The UK and the development of judicial review. Until Malone v Metropolitan Police Commissioner (telephone tapping), one couldn't make a human rights claim as such (no legal or equitable claim lay): everything is permitted unless it is forbidden. But the judiciary became much more activist in 'discovering' constitutional rights in the common law … but the record isn't consistent, and the incorporation of the ECHR hasn't reinforced this trend. ProLife Alliance is an example: due deference applied. So despite the upsurge, the judiciary have a particular mindset; they won't go into the more controversial areas.

3. Juristocracy. Recent developments in the UK show a trend towards this. Judicial, economic and political elites have converging/confluent interests, on the back of neo-liberal technological progress. What are the consequences? The US situation is interesting, especially in the political debate over Roe v Wade. The Government's ideas reinforce the juristocracy trend; the 1997 election generated it and it has been ongoing since then.

The discussion touched on Dworkin, Gearty, alternative approaches to judging and decision-making, and the role of 'governance'. And I enjoyed it a lot.

Finally, Martina Gillen (Oxford Brookes) spoke about developing a 'Sociology of Law 2.0'. Our identity has always been 'defined' in a way by technology – even from the definition of homo sapiens. In Durkheim's original classification, the focus was on divisions of labour etc.; but buried in it is a differentiation based on the use of technology: the elephant in the living room. It's shaping what's happening, but theoretically it's poorly treated. Our tendency to want 'legal certainty' has given us a mechanical/scientific mindset. We (we as lawyers) view technology as something we should be 'attracted' to, and we want the analogy to apply to law.

We focus on 'nodes', but what are we missing? There's also the proliferation of economic control and interests. "Just because it's new, does that make it significant?" And Martina also mentioned the 'cult of the shiny', which is something I could rant about but won't…

What would the new features be? Public sphere; multiple jurisdictions; intersections.

We then saw a very useful diagram about security models (I haven't got the technological ability to reproduce it, but I'll link to it when published), arguing that we should focus on what the user is actually doing! It seems trite, but it's a key feature that has been ignored. We need to move beyond categories and consider normative change. Finally, Martina outlined a research method proposal, based on proxies, to study *what* people are doing on the Internet and *why*. The conclusion, then, is that we need to use the technology to get data.
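Purely as an illustration of what 'using the technology to get data' might look like in practice (the talk proposed a method, not an implementation, and none of the following comes from it): a minimal sketch that tallies the domains appearing in a proxy access log. The Squid-style log format, the field positions and the filename are all my own assumptions.

```python
# Hypothetical sketch only: Martina's proposal was methodological; this just
# illustrates one possible starting point for proxy-based data gathering.
# Assumes a Squid-style access log, where the requested URL is the 7th field.
from collections import Counter
from urllib.parse import urlparse

def tally_domains(log_path):
    """Count how often each domain appears in a proxy access log."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue  # skip blank or malformed lines
            domain = urlparse(fields[6]).hostname
            if domain:  # CONNECT lines ("host:443") parse oddly and are skipped
                counts[domain] += 1
    return counts

if __name__ == "__main__":
    for domain, hits in tally_domains("access.log").most_common(10):
        print(f"{hits:6d}  {domain}")
```

Of course, knowing *what* is visited is the easy half; the *why* needs rather more than logs.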

BILETA 2008 : Trojans and Thieves

Wiebke Abel (Edinburgh) and Matthias Damm (an attorney in Karlsruhe and an LLM graduate of Strathclyde) both addressed the topic of trojans and spy software and their use by law enforcement agencies in particular.

Wiebke started things off with an overview of how 'the world has changed' and what this means for crime. Are traditional investigation methods and laws sufficient to deal with new challenges? Can a 'new generation of investigators' (and investigative tools) help? She picked a particular example, the 'German Federal Trojan' (aka Bundestrojaner!). Trojans are familiar (as used by hackers, spammers and others) – but are they only for criminal use? The plan here is for covert search and surveillance of private computers by police or secret services. This can be implemented through spyware, through existing 'backdoors' and even through download contamination. There was – naturally – outrage in Germany about this – but was this a once-off? No: the US 'Magic Lantern' and the Austrian 'online search' are other examples. These technologies are special because of the way they combine factors such as mobility, ubiquity, invisibility and digital evidence collection; but they are unpredictable and can even raise international issues (trojans operating outside national borders), and the use of gathered data is wholly unclear at this stage (would it stand up in court? should it?). And how do you prevent antivirus software from identifying the supposedly hidden trojan? Wiebke mentioned R v Aaron Caffrey (the existence of a trojan used as a defence in a criminal trial about material on C's machine). A possible solution is seeing source code as the 'DNA of software': hardwire the law into software. But the overwhelming need is for an approach where regulation through law and regulation through code work together.

Matthias then started his presentation, 'I know what you saved last summer'. He also took guidance from history, mentioning fingerprints, DNA and CCTV as examples of new investigative 'technologies'. Today's investigators look more like computer operators than Sherlock Holmes. CIPAV (Computer and Internet Protocol Address Verifier) is in use in the US, although it's not supposed to be dealing with content. The FBI haven't been very helpful in explaining how it works. As for the Bundestrojaner, the Federal Constitutional Court dealt with this (on 27th February 2008) and gave the go-ahead to such software in its ruling, subject to strict conditions (such as a court order and respect for private data). This was the same case in which the Court formulated a new constitutional right, the guarantee of the confidentiality and integrity of IT systems. More than 60% of the German population apparently support the system, although one wonders whether they are aware of the Orwellian nature of such software.

After a discussion on the trojan issues, Angus Marshall (Teesside) then reported on the EPSRC-funded 'Cyberprofiling' project. The project looked at offender and geographic profiling, in particular in the context of intelligence and intelligence-sharing. How can existing information (server logs etc.) be used in a useful way? Overcoming various problems, they developed a 'data collection appliance'. But one of the most interesting legal issues that arose was whether an IP address is a 'personal identifier' (relevant for sensitive data / data protection / sharing / etc.). The Information Commissioner has given various answers; European practice varies. The research group didn't feel that IP addresses were personal, though they did accept the advice and used anonymisation. This itself required some new work. So how does this type of 'dataveillance' compare with other things like (on the one hand) CCTV, DNA and wiretapping and (on the other) credit cards, mobile phone tracking, loyalty cards etc.? The first category is 'biometric keyed' and the second is 'token mapped'. Angus gave an overview of the regulation and effectiveness of each. He concluded that a telephone number is not a personal identifier; neither, the group argued, is an IP address (though combined with other factors it 'may be personal data').
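The paper didn't describe how the anonymisation was actually done, so the following is only a sketch of two common techniques for anonymising IP addresses in shared logs – keyed hashing (a stable, non-reversible pseudonym) and prefix truncation – and not the project's method; the key handling and function names are my own assumptions.

```python
# Hypothetical sketch: two common ways to anonymise IPv4 addresses in logs.
# Neither is necessarily what the Cyberprofiling project used (not described).
import hmac
import hashlib
import ipaddress

SECRET_KEY = b"replace-with-a-random-key"  # assumed; real key management omitted

def pseudonymise(ip: str) -> str:
    """Map an address to a stable, non-reversible token via HMAC-SHA256,
    so the same address can still be correlated across log entries."""
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()[:16]

def truncate(ip: str) -> str:
    """Zero the host bits, keeping only the /24 network (coarser and lossy)."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False).network_address)

print(pseudonymise("192.0.2.45"))  # same input always yields the same token
print(truncate("192.0.2.45"))      # -> 192.0.2.0
```

Whether either step is enough to take an address outside 'personal data' is, of course, exactly the contested question above. Again, the discussion was extremely vibrant, and now it's off to lunch.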

BILETA 2008 : Net Neutrality

Chris Marsden (Essex) is an expert on net neutrality in Europe (and other things); see his SCRIPT-ed article on the topic here. And that's also his topic for this morning's keynote. Oh, and it rained a lot last night, but it's quite mild this morning.

This presentation: it's about convergence, it's different in Europe, it's "politics not economics" and it's not going away.

Convergence – but this isn't new; the arguments were seen in the 1950s (spectrum use), the 1970s/80s (cable), the 1990s (satellite – in particular Sky and football), the 2000s (mobile) – and now the Internet.

In the US – monopoly power (see the Madison River / Vonage case); it's a result of the Telecommunications Act 1996 and the Trinko and Brand X decisions (which mean that all networks are, for FCC purposes, 'information services' and therefore not common carriers). Should 'common carriage' be reintroduced? He mentioned the papers by Lemley & Lessig, Tim Wu's arguments, the opposition (from techies, economists and lawyers), and the fun times at the FCC hearing at Harvard this year.

Europe is different, though, because of local loop unbundling and the control of significant market power, and there is in fact a trend towards *more* regulation (e.g. roaming, reforms to the electronic communications directives). Also, the 'content' is different (in the US, it's often "a commercial dispute hidden as a freedom (or fr'dom) argument"), whereas Europe has EPG regulation, 'must carry', etc. We even have the BBC iPlayer – the 'death star' for ADSL networks. What if it's not VoIP that's being blocked, but EastEnders? UK consumers are paying for broadband, the licence fee, a Sky subscription…

Japan, now, is an interesting example – net neutrality is in place, and there's a privileged role for consumer protection in the legal framework; there are incentives to roll out high-speed networks (e.g. the incumbent NTT can do so without regulation for a 'holiday' period).

The lobbies are the networks (trying to protect investment, not to mention the need to ensure quality of service) vs the content providers (who don't want to be charged). But the networks *are* actually blocking things like BitTorrent (under the headings of traffic management, antivirus, etc.) while advertising unlimited access. And the content providers (like the BBC telling users to lobby their ISPs to switch on simulcasting!) are having a free ride, especially for video and P2P.

There's also the interaction between filtering and net neutrality, which has lots of unforeseen possible consequences. And there are issues with competition law – and what of BT, which has a dominant position?

Chris also spoke about Phorm, a very interesting yet terrifying ‘adware’ system at the ISP level (“Google on steroids, or Big Brother incarnate”) (couple of links here) – is it even legal? He wondered, though, if Phorm is the response to net neutrality, i.e. if the telco can’t make money through NN, can they make it through something like Phorm?

We also heard a little about 'end to end' and other such pronouncements; how much innovation happens "at the edge" in reality? And there's a related question of the basis on which filtering can actually be allowed…

The conclusion looked at DRM, privacy, blocking, hate speech and even the AVMS Directive. The legal provisions, aside from that directive, include the electronic communications directive, the IS Security Framework Agreement, the E-Commerce Directive and more – which taken together mean greater intervention by ISPs in what goes through their networks. The regulators are passing the buck – we are going in circles. "They're all a bunch of tubes".

BILETA 2008 : Open Access

The last (official) business of the day was a series of parallel presentations/experiments/workshops; I passed by the fun-looking projections and computers and went to an interesting panel under the banner of the recently relaunched SCRIPT-ed (the open access periodical managed by the University of Edinburgh). Journal editors Wiebke Abel and Shawn Harmon put together a session on open access and legal scholarship.

Speaker Andres Guadamuz (Edinburgh), co-director of SCRIPT, previewed the session on his blog (here), and session chair Shawn Harmon, after introducing the panel, discussed SCRIPT-ed and its approach to peer review and rapid turnarounds (always welcome). He pointed in particular to the interdisciplinary nature of work in the technology-law-society area. Finally, he highlighted the call for papers and registration for the March 2009 conference.

Andres then spoke about the importance of open access to legal information. Licences such as the GNU FDL (used by Wikipedia) or those developed by Creative Commons are important; it’s not just about making something available online without charge, although a lot of publishers/republishers have yet to grasp these subtleties and are still quite risk-averse. The Berlin Declaration, on the other hand, is more about access, but requires peer review. Policymakers and research councils, then, may have different definitions again; the latter are interested in the role of public money and the making available of the resulting work to the public, and policies on public sector information (including caselaw) add further complexity. In response to a question, he also discussed pre-prints (SSRN etc).

John MacColl (St Andrews University Library) spoke about open access repositories and his experience in developing them at the University of Edinburgh. Librarians come at these issues with lots of reasons in mind: in particular, budgets are stretched (research libraries can spend over three-quarters of their materials budget on periodicals, including licence fees). Interestingly, the debate has been a gentler one in Australia, because without a strong publishing industry, that major source of opposition wasn't present as it is in the UK. He explained how ERA, the Edinburgh Research Archive, works and how academics deal with it, referring to the two databases for checking on publisher and research council policies (SHERPA's RoMEO and JULIET). Institutional, national or funding-council 'mandates' are extremely important; the new research assessment methods in the UK, which will include metrics, will also be relevant. (For the record, the institutional repository at my own institution was very much influenced by Edinburgh's; our library is still working hard on getting staff and research students to submit, so if you're one of my TCD audience, do think about doing so.)

Diane Rowland (Aberystwyth) opened with a story about a colleague who didn’t want to publish in an online-only journal (“I like paper” was his reason). The serious issues are quality, prestige and impact – and the perception of these issues. A stronger commitment (from the discipline, as well as from individual schools or departments) is necessary – for example, in the RAE just gone by, articles in web-based journals were not in the same category as ‘journal articles’ more generally. Like John MacColl, she was interested in the development of new metrics and what this would mean for the journals as well as for individual behaviour.

Prof. Abdul Paliwala (Warwick) is the director of the Journal of Information, Law and Technology (JILT). He reflected on the development of the journal and in particular looked forward to a forthcoming issue on legal education that would make some use of multimedia, reinforcing the decision not to have a paper version. He suggested a meeting of all those involved in the free and open access movements.

Finally, Burkhard Schafer (Edinburgh) spoke about the purpose and status of peer review more generally. He went from low copy number DNA to the credibility and accuracy of citations, and wondered about the development of standards ('seals of approval', perhaps?).

And now my battery is dying, so that’s it for ‘live’ blogging for today, but there’ll be more tomorrow…