Regulating scientific and technological uncertainty: The precautionary principle in the context of human genomics and AI
DOI: https://doi.org/10.17159/sajs.2023/15037
Keywords: precautionary principle, risk-based approach, human genomics, AI, regulation
Abstract
Considered in isolation, the ethical and societal challenges posed by genomics and artificial intelligence (AI) are profound and include issues relating to autonomy, privacy, equality, bias, discrimination, and the abuse of power, amongst others. When these two technologies are combined, the ethical, legal and societal issues increase substantially, become much more complex, and can be scaled enormously, which amplifies their impact. Adding to these complexities, both genomics and AI-enabled technologies are rife with scientific and technological uncertainties, which not only makes the regulation of these technologies challenging in itself, but also creates legal uncertainties. In science, the precautionary principle has been used globally to govern uncertainty, with the specific aim of preventing irreversible harm to human beings. The regulation of uncertainties in AI-enabled technologies, by contrast, is based on risk, as set out in the AI Regulation recently proposed by the European Commission. However, when genomics and artificial intelligence are combined, not only do the uncertainties compound, but the current regulation of such uncertainties towards their safe use for humans appears contradictory, given the different approaches followed by science and technology in this regard. In this article, I explore the regulation of both scientific and technological uncertainties and argue that applying the precautionary principle in the context of human genomics and AI seems to be the most effective way to regulate the uncertainties brought about by the combination of these two technologies.
Significance:
The significance of this article lies in the criteria framework proposed for determining the applicability of the precautionary principle, and in the lessons learnt from the European Union's attempt to regulate artificial intelligence.
License
All articles are published under a Creative Commons Attribution 4.0 International Licence.
Copyright is retained by the authors. Readers are welcome to reproduce, share and adapt the content without permission, provided the source is attributed.
Disclaimer: The publisher and editors accept no responsibility for statements made by the authors.
Funding data
- Fonds National de la Recherche Luxembourg (Grant number IS/14717072)
- Horizon 2020 (Grant number ID 956562)