AI data-monopoly risks to be probed by UK parliamentarians
The UK’s upper house of parliament is asking for contributions to an inquiry into the socioeconomic and ethical impacts of artificial intelligence technology.
Among the questions the House of Lords committee will consider as part of the inquiry are:
- Is the current level of excitement surrounding artificial intelligence warranted?
- How can the general public best be prepared for more widespread use of artificial intelligence?
- Who in society is gaining the most from the development and use of artificial intelligence? Who is gaining the least?
- Should the public’s understanding of, and engagement with, artificial intelligence be improved?
- What are the key industry sectors that stand to benefit from the development and use of artificial intelligence?
- How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economics associated with them, be addressed?
- What are the ethical implications of the development and use of artificial intelligence?
- In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?
- What role should the government take in the development and use of artificial intelligence in the UK?
- Should artificial intelligence be regulated?
The committee says it is looking for “pragmatic solutions to the issues presented, and questions raised by the development and use of artificial intelligence in the present and the future”.
Commenting in a statement, Lord Clement-Jones, chairman of the Select Committee on Artificial Intelligence, said: “This inquiry comes at a time when artificial intelligence is increasingly seizing the attention of industry, policymakers and the general public. The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.
“We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible. There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations.”
“If you are interested in artificial intelligence and any of its aspects, we want to hear from you. If you are interested in public policy, we want to hear from you. If you are interested in any of the issues raised by our call for evidence, we want to hear from you,” he added.
The committee’s call for evidence can be found here. Written submissions can be made via a webform on the committee’s webpage.
The deadline for submissions to the inquiry is September 6, 2017.
Algorithmic irresponsibility
Concern over the societal impacts of AI has been rising up the political agenda in recent times. Last fall another committee of UK MPs warned that the government needs to take proactive steps to minimise bias being accidentally built into AI systems, and to ensure transparency so that autonomous decisions can be audited and systems vetted to confirm the technology is operating as intended, without producing unwanted or unpredictable behaviours.
Another issue that we’ve flagged here on TechCrunch is the risk of valuable publicly funded datasets effectively being asset-stripped by tech giants hungry for data to feed and foster commercial AI models.
Since 2015, for example, Google-owned DeepMind has been forging a series of data-sharing partnerships with National Health Service Trusts in the UK, which have provided it with access to millions of citizens’ medical records. Some of these partnerships explicitly involve AI; in other cases it has started by building clinical task management apps, though applying AI to the same health datasets is a stated, near-term ambition.
It also recently emerged that DeepMind is not charging NHS Trusts for the app development and research work it’s doing with them; rather, its ‘price’ appears to be access to what are clearly highly sensitive (and publicly funded) datasets.
This is concerning because only a handful of companies have pockets deep enough to effectively ‘buy’ access to highly sensitive, publicly funded datasets, by offering, say, five years of ‘free’ work in exchange, and then use that data to develop a new generation of AI-powered products. A small startup cannot hope to compete on the same terms as the Alphabet-Google behemoth.
The risk of data-based monopolies and ‘winner-takes-all’ economics arising from big tech’s big data push to gain an AI advantage should ring loud and clear. So should the pressing need for public debate on how best to regulate this emerging sector, so that the future wealth and any benefits derived from the power of AI technologies can be widely distributed, rather than simply locking in platform power.
In another twist pertaining to DeepMind Health’s activity in the UK, the country’s data protection watchdog, the Information Commissioner’s Office, ruled earlier this month that the company’s first data-sharing arrangement with an NHS Trust broke UK privacy law. Patients’ consent had been neither sought nor obtained for the sharing of some 1.6 million medical records for the purpose of co-developing a clinical task management app that alerts clinicians to the risk of a patient developing a kidney condition.
The Royal Free NHS Trust now has three months to change how it works with DeepMind to bring the arrangement into compliance with UK data protection law.
In that instance the app in question does not involve DeepMind applying any AI. However, in January 2016 the company and the same Trust agreed on wider ambitions to apply AI to medical datasets within five years. So the NHS app development freebies DeepMind Health is engaged in now are clearly paving the way for a broader AI push down the line.
Commenting on the Lords inquiry, Sam Smith, coordinator of the health data privacy group medConfidential, an early critic of how DeepMind was being handed NHS patient data, told us: “This inquiry is important, especially given the unlawful behaviour we’ve seen from DeepMind’s misuse of NHS data. AI is slightly different, but the rules still apply, and this expert scrutiny in the public domain will move the debate forward.”