This event at the Royal Society made me really feel at home!
The first session covered Machine Learning and The Law.
It started with an introduction by Professor Sofia Olhede, one of the organisers, followed by a presentation on Transparency and Accountability by Christina Blacklaws from the Law Society, who also sits on the Family Justice Council.
Her presentation was followed by Marion Oswald, who presented how Durham Constabulary used forecasting to predict criminality, illustrating Algorithmic Risk Assessment Policing Models. How many people ‘outsource thinking’, and why? She pointed to these interesting quotes from W. Twining, Preparing Lawyers for the 21st Century (1992) 3 Legal Education Review 1, 14:
Most lawyers are innumerate and most law students are terrified of figures.
It means that any number we claim for a ‘probability’ is constructed by us based on what we know.
Risk in this sense is a measure of what we don’t and can’t know as much as a measure of what we can.
After coffee, Carl Wiper from the Information Commissioner’s Office addressed Algorithms, ethics and data protection: a regulator’s view.
- Trust is supposedly at the core; hence there are tools to build trust:
  - Data Protection Impact Assessments
  - Innovation with Privacy
- Beyond the law, he addressed:
  - Ethical approaches
  - Data trusts
  - Education and training.
My sense is that, while the ICO has noble intentions, the gap between criminals who get away with murder and all those who try to comply is far too wide to be bridged.
Next was Professor Mireille Hildebrandt on Algorithmic Regulation and the Rule of Law – from ‘text driven law’ to ‘data driven law’.
- Regulation by algorithm?
  - computational systems ‘infuse’ governmental legislation, administration and adjudication
  - What does this mean for the Rule of Law?
  - if these systems are not testable, can they be contestable?
  - do new types of interpretability require a new hermeneutics?
- Two types of algorithmic ‘law’:
  - If This, Then That: deterministic, predictable decision types;
    - no discretion whatsoever;
    - decisions can be explained but not necessarily justified.
  - Artificial Legal Intelligence (data-driven)
- Law and regulation
  - regulation = subset of law
  - modern positive law is TEXT-DRIVEN.
- From text-driven to data-driven law
  - text-driven law determines what counts as lawful or unlawful
  - data-driven law simulates and predicts the qualification of lawfulness
    - its performance metric is predictive accuracy
    - it generates different interpretability problems.
- Agonistic machine learning
  - not antagonistic but agonistic
  - turning enemies into adversaries
- Explanatory Machine Learning is different from Confirmatory Machine Learning! [Duncan Watts – Open Science Forum]
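The two types of algorithmic ‘law’ can be sketched in a few lines of Python. This is purely my own illustration with made-up rules and data, not anything presented at the event: a deterministic if-this-then-that rule whose decision is trivially explainable, versus a toy data-driven predictor (a 1-nearest-neighbour stand-in for any statistical model) whose quality is a performance metric and whose interpretability is a separate problem.

```python
# 1. 'If This, Then That' (code-driven): deterministic, predictable,
#    no discretion. The decision can always be explained (the rule fired),
#    but the rule itself is not thereby justified.
def code_driven_decision(speed_kmh: float, limit_kmh: float) -> str:
    if speed_kmh > limit_kmh:
        return "unlawful"  # explanation is trivial: the rule fired
    return "lawful"

# 2. Data-driven 'law': a model trained on past qualifications predicts
#    how a new case would be qualified. Here, 1-nearest-neighbour on a
#    single feature: predict the label of the most similar past case.
def data_driven_prediction(case, past_cases):
    nearest = min(past_cases, key=lambda c: abs(c[0] - case))
    return nearest[1]

past = [(30.0, "lawful"), (48.0, "lawful"), (52.0, "unlawful"), (80.0, "unlawful")]

print(code_driven_decision(55.0, 50.0))    # unlawful, by rule
print(data_driven_prediction(55.0, past))  # unlawful, by similarity to past cases
```

The first function's output follows from its rule with no discretion; the second's output depends entirely on the past cases it was given, which is exactly why its interpretability problems are of a different kind.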
After lunch, Session 2 focussed on Algorithms: from Regulation to Privacy and Trust.
Cat Drew spoke about From Ethics to Experience Design to humanise AI.
- here’s the 17-page report on the Government’s Data Science Ethical Framework
- and here’s her Data Value Chain:
  - Data // Collate
  - Information // Aggregate
Her big issues included Data Monopolies, Privacy, Democracy and Public Participation.
Truly inspiring and thought-provoking to unify across the digital divide!
- We need to ensure trustworthiness of algorithms!
Rebecca Endean OBE spoke about her field of innovation for Big Data!
- How do we curate and hold data cost effectively?
- Not just for the statisticians and the data scientists, but also the arts and humanities!
- How was data generated?
- How do we collect data?
- What was the original purpose?
- Is it misleading, and does it create a lack of confidence?
- What happens when we connect it with other data?
- How is it transmitted?
- Rubbish In => Rubbish Out!
- How do we ensure Public Trust?
- Government as well as the Research System!
- Better outcomes for driving economic growth!
- Privacy, Accountability and Openness!
The discussion was most interesting again:
- ‘Data obesity’ is leading to ‘pattern obesity’!
- Will adding data to data add value?
- There is not enough thought given to collecting data: curation is important!
- There is a ‘data mythology’!
- It is incumbent on us to define ‘data methodologies’!
When I asked her about how to best get our new visualisation styles into research and innovation, she invited me to expand and explain!
The last and most entertaining speaker of the day was Chris Reed, Professor of Electronic Commerce Law at the Centre for Commercial Law Studies.
- Transparency and Trust: legal liability for algorithmic decisions.
- Liability and knowledge
  - Law regulates humans, not machines.
  - Can liability law cope with AI?
    - Yes, but the law needs to know how an AI made the decision that caused the liability claim!
- Regulation must wait until we understand the social problems properly.
  - Early regulation might be needed where AI threatens fundamental rights
    - because no-one should be able to buy off someone’s fundamental rights!
The poster session ended a day full of fabulous conversations and thinking, especially about visualising multi-dimensional data!