Imagine you’re shopping in your local supermarket. You pull out an app on your phone and scan a product. Instantly, it flashes with a personalised risk rating of how likely it is to set off your allergies.
In the next aisle, smart packaging on a ready meal updates you in real time with the carbon footprint of your food’s journey. But once you take it home, the label changes to display a live warning: allergens were detected unexpectedly in the production factory, and your food may have to be recalled.
How much extra energy would be used to power such a system? Who makes sure the app is taking care of your personal medical information? And what if a false alert meant you were told to throw away food that was perfectly edible, or resulted in a small business being blacklisted by supermarkets?
These are just some of the questions that will require our attention if we manage to connect up the vast amounts of data flowing through the food system by storing and sharing it in systems called data trusts.
As part of a working group on ethics in food data trusts, my colleagues and I have been developing new methods to help food technology organisations ensure systems like these meet ethical standards.
Food production is the largest sub-sector in UK manufacturing, and currently the subject of much scrutiny. In the face of climate change, there is increasing pressure to make supply chains more efficient and cost-effective, as well as to promote sustainability and reduce waste: objectives that new technologies are helping to achieve.
These technologies will allow us to collect data on our food like never before: from sensors which track every moment of a crop’s life cycle; to instant temperature readings on chilled or frozen meat deliveries; to detailed records of what moves on and off supermarket shelves and when.
Sharing this data between different parts of the food supply chain, which may involve many organisations, could bring enormous sustainability benefits, especially if we can then use advanced technologies such as AI prediction and recommendation algorithms to pinpoint customer needs and reduce waste even further.
Many organisations are wary about this data sharing, especially if it might reveal private business information. This is where data trusts come in. The idea is that their trustworthy infrastructure, bolstered by strict policy and legal agreements, will allow companies to be sure that private data will not be seen by, for example, their competitors. But data trusts are also likely to present unforeseen problems of their own.
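To make the idea a little more concrete, here is a minimal, hypothetical sketch in Python of the kind of rule a data trust might enforce: suppliers can always see their own raw readings, but other members only ever receive aggregated figures. Everything here, from the class name to the example suppliers, is an illustrative assumption rather than a description of any real data trust or existing software.

```python
from statistics import mean

# Hypothetical sketch: a data trust holding chilled-delivery temperature
# readings from several suppliers. Raw records are returned only to the
# company that contributed them; other members see aggregates only.
# All names are illustrative, not a real system or API.

class FoodDataTrust:
    def __init__(self):
        self._records = []     # list of (supplier, temperature_c)
        self._members = set()  # companies bound by the trust's agreements

    def join(self, supplier):
        self._members.add(supplier)

    def contribute(self, supplier, temperature_c):
        if supplier not in self._members:
            raise PermissionError("only members may contribute data")
        self._records.append((supplier, temperature_c))

    def my_readings(self, supplier):
        # A supplier can always see its own raw data.
        return [t for s, t in self._records if s == supplier]

    def average_temperature(self, requester):
        # Members see only an aggregate, never a competitor's raw readings.
        if requester not in self._members:
            raise PermissionError("only members may query the trust")
        return mean(t for _, t in self._records)

trust = FoodDataTrust()
for name in ("FarmA", "FarmB", "Supermarket"):
    trust.join(name)
trust.contribute("FarmA", 3.5)
trust.contribute("FarmB", 4.1)
print(trust.my_readings("FarmA"))               # [3.5]
print(trust.average_temperature("Supermarket")) # 3.8
```

In a real data trust the "who may see what" rules would be set out in legal agreements and enforced across organisations, not in a few lines of code, but the principle is the same: access to shared data is mediated rather than open.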
Our research
Our working group assessing data trusts, funded by the Internet of Food Things project and the AI3SD project, is made up of experts from many different specialities including computer science, law and design. Before we could start thinking about data ethics, we had to make sure we all understood each other’s language.
Our first task was to create a glossary of terms to tease out the ways in which we use language differently. For example, when we talk about AI, we may be imagining the super-smart intelligence of science fiction, new forms of machine learning that do tasks we cannot, or simply the algorithms that already support many of our day-to-day activities.
Building a common understanding helped us move on to analysing the ethics of data trusts, which do not yet exist. By the time they do, it may be too late to correct any major problems they cause.
This is an example of the “Collingridge dilemma”. This philosophical quandary suggests that in the early stages of a new technology, not enough is known about its potentially harmful consequences. By the time these consequences become clear, it may be too expensive – or too late – to control the technology.
One way to get around this is to map out these harmful consequences by imagining, in great detail, what a future which includes these technologies might look like. This is what we do in the research method called “design fiction”, where we create real objects representing a fictional future.
Some of the things we’ve created include minutes from a board meeting that never took place, a clip from a documentary reporting on something that hasn’t happened, a website showcasing a non-existent allergy alert app, and smart packaging that simulates the shopping experience depicted earlier.
In creating these, we tried to consider all the different people who might be positively or negatively affected in the future world to which our objects belonged, including not only decision-makers at large supermarket chains but also smaller suppliers, consumers and workers.
Next steps
The next step was to use these objects to delve even deeper into the challenges they might present. We used a set of cards called the Moral-IT deck, which was developed to evaluate the ethics of technology.
These cards helped us lead a discussion with experts about the possible harms created by the use of new food systems. For instance, we considered whether smaller food businesses might be unfairly excluded from supermarkets because they could not afford the many sensors needed to hook their products up to AI-enabled prediction systems.
Although we focused on the food sector, these methods can be carried over to analyse other areas where new technology is being introduced, such as adding smart bins (that sense when rubbish collection is needed) to public spaces. Our next step is to use what we’ve learnt to create a formal framework guiding people towards informed decisions around how to ethically introduce new technology.
We hope that, through using new, creative methods to analyse and imagine future technologies, we can help people create technological systems that are fairer and more ethical.
This article was originally published under the title Smart labels and allergy sensors – how to make sure the future of food is ethical.