The Speakers

Diana Saplacan

Department of Informatics, University of Oslo, Norway

Making Care Robots Understandable: An Introduction to Universal Design Principles as Design and Ethical Guidelines of Social Assistive Robots

This talk aims to introduce the audience to Universal Design (UD) principles as design and ethical guidelines for Social Assistive Robots (SARs). First, the talk addresses how and why care robots, e.g., robots used within home care and/or healthcare services, should be designed to be more understandable for a diverse group of users (the elderly, people with low digital literacy, people with an immigrant background, medical staff, etc.). Second, various arguments are presented, including human rights and the right to health(care), along with the European Artificial Intelligence Act (AIA). Finally, the talk covers findings from our empirical work conducted with legal experts, Human-Robot Interaction (HRI) experts, and user group representatives, supporting these arguments.

Biography: Diana Saplacan is a researcher at the University of Oslo, Department of Informatics, Robotics and Intelligent Systems Research Group. She currently works on the Vulnerability in Robot Society (VIROS) research project. She has also recently been listed as one of the 30 women in Norway changing the field of Artificial Intelligence, a nomination made by the Norwegian Artificial Intelligence Consortium (NORA). She received her Ph.D. degree (2020) from the University of Oslo, Norway, and her M.Sc. degree (2013) from Kristianstad University, Sweden. Her Ph.D. is interdisciplinary within Design of Information Systems, at the intersection of the Human-Computer Interaction (HCI), Human-Robot Interaction (HRI), and Computer-Supported Cooperative Work (CSCW) fields, with Universal Design (UD) knitting these fields together. She previously worked as a Lecturer in Computer Science at Kristianstad University, Sweden (2013-2016/2020). Her current interests include Human-Robot Interaction and human-robot cooperation, ethics viewed through Universal Design principles, inclusion, and accessibility.

Joshua C. Gellers

Department of Political Science and Public Administration, University of North Florida

Designing Robots for More-Than-Human Justice

How can robots be designed to advance the prospects for obtaining justice? While a considerable body of literature focuses on the ethics and rights implications of artificial intelligence (AI), surprisingly little energy has been dedicated to understanding the conditions under which emerging technologies can contribute to the pursuit of justice. In addition, much of the relevant scholarly discourse has examined key moral and ethical issues from an almost exclusively anthropocentric perspective. Meanwhile, the onset of the Anthropocene has animated concerns about the implications of human-centered thinking, although this conversation has scarcely influenced the tenor of debates in AI ethics. This talk seeks to overcome these gaps and missed opportunities for dialogue by exploring theories of justice that include the more-than-human world, be it natural or technological. The goal is to prescribe ways in which design can promote justice for all the Earth’s inhabitants and contribute to a more ethical future.

Biography: Joshua C. Gellers, PhD, is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, a Research Fellow of the Earth System Governance Project, and a Core Team Member of the Global Network for Human Rights and the Environment. A former Fulbright Scholar to Sri Lanka, his research focuses on environmental politics, human rights, and technology. Josh's work has appeared in numerous peer-reviewed journals and been cited in several UN reports. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017) and Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020).

Takashi Izumo

College of Law, Nihon University, Tokyo

“How Should We Communicate in the Human-Robot Co-existence Society?” – A Talk About Ethically Appropriate Communication of Robots

This talk aims to present an appropriate communication model in HRI governance through an analysis of human ethical communication. Communication between users and robots must be not only efficient but also ethical. What does it mean for communication to be ethical? In terms of legality, it means that the robot’s speech does not insult or harm humans, and that humans do not damage their robots. Furthermore, legality also requires that robots not divulge to others the information they obtain from their users. However, these legal obligations do not cover the entirety of ethically appropriate communication. This talk will consider this issue from the broader perspective of the explainability of robots.

Biography: Takashi Izumo, Dr. jur., is an Associate Professor in the College of Law at Nihon University. His research focuses on legal history and legal theory, especially on natural law in the early modern period. He worked as a guest researcher at the Max Planck Institute for Legal History and Legal Theory in Frankfurt am Main. He is the author of Die Gesetzgebungslehre im Bereich des Privatrechts bei Christian Thomasius (Peter Lang 2016).

Gordana Dodig-Crnkovic

Chalmers University of Technology, Sweden

Robot-Human Interactions in the Case of Robotic Autonomous Cars

The development of intelligent autonomous robot technology presupposes its anticipated beneficial effect on individuals and societies. In the case of such disruptive emergent technology, not only questions of how to build, but also why to build and with what consequences, are important. The ethics of intelligent autonomous robotic cars is a good example of research with actionable practical value, where a variety of stakeholders, including the legal system and other societal and governmental actors, as well as companies and businesses, collaborate to bring about a shared view of the ethical and societal aspects of the technology. It could be used as a starting platform for approaches to the development of intelligent autonomous robots in general, considering human-machine interfaces in the different phases of the life cycle of a technology: development, implementation, testing, use, and disposal. Drawing from our work on the ethics of autonomous intelligent robocars, and the existing literature on the ethics of robotics, our contribution consists of a set of values and ethical principles, with identified challenges and proposed approaches for meeting them. This may help stakeholders in the field of intelligent autonomous robotics to connect ethical principles with their applications. Our recommendations of ethical requirements for autonomous cars can be applied to other types of intelligent autonomous robots, with the caveat that social robots require more research regarding interactions with users. We emphasize that existing ethical frameworks need to be applied in a context-sensitive way, through assessments by interdisciplinary, multi-competent teams using multi-criteria analysis. Furthermore, we argue for the need for continuous development of ethical principles, guidelines, and regulations, informed by the progress of technologies and involving relevant stakeholders.

Biography: Gordana Dodig-Crnković is Professor of Interaction Design at Chalmers University of Technology and Professor of Computer Science at Mälardalen University, Sweden. She holds PhD degrees in Physics and Computer Science. Her research focuses on the relationships between computation, information, and cognition, including ethical and value aspects. She is a member of the editorial boards of the Springer SAPERE series, the World Scientific Series in Information Studies, and a number of journals. She is a member of the AI Ethics Committee at Chalmers University of Technology and the Karel Capek Center for Values in Science and Technology. More information can be found at http://gordana.se

Mizuki Takeda

Department of Mechanical Engineering, Toyohashi University of Technology, Japan

Care Robot Design With Consideration of Accountability

The rapid aging of society has led to an increase in accidents among the elderly and a shortage of caregivers. Although AI and robots are expected to play an active role in addressing this problem, many people are reluctant to introduce care robots due to a sense of uneasiness about them. Because of the black-box characteristics of such systems, humans do not know what the robot does or why it does it; users may therefore feel unsafe or be unable to identify the cause of an accident, which is thought to underlie this uneasiness about introducing such systems. To address this, I have proposed a design methodology for care robots that is accountable to various stakeholders. The proposed method is composed of two phases: the description of the entire system and the presentation of information to the stakeholders. First, the entire robot system is made transparent by using the modeling language SysML. The appropriate interface and information presentation methods are then determined based on the relationship between the stakeholders, their situation, and the information to be presented. The talk will explain this design method using an actual care robot as an example.

Biography: Mizuki Takeda is an Assistant Professor in the Department of Mechanical Engineering, Toyohashi University of Technology, Aichi, Japan. He received his B.E., M.E., and Ph.D. degrees from Tohoku University, Sendai, Japan in 2015, 2017, and 2020, respectively. His research interests include assistive robots, human-robot cooperation systems, and robot ethics. He is a member of IEEE and JSME.

Alison Xu

Waseda Institute for Advanced Study, Waseda University, Japan

Driverless Cars and the Distribution of Liabilities Among Stakeholders: How to Slice the Cake?

The emergence of autonomous vehicles has posed serious challenges for allocating liabilities in the case of a road accident. The situation becomes even more complicated in the context of driverless taxi services provided by a sharing platform. Manufacturers, insurance companies, drivers/non-drivers, and service platforms have all become stakeholders whose respective obligations should be determined with certainty. This presentation will first introduce the various regulatory approaches towards autonomous vehicle technology adopted by different jurisdictions, including China, the US, the EU, and Japan, with a view to identifying the gaps in current regulation. It then discusses a newly proposed model of sharing responsibilities among relevant stakeholders and assesses the application of the model in the case of an autonomous taxi service run on a sharing platform. Finally, it makes some recommendations on how to further improve the model.

Biography: Dr Alison Xu is an Assistant Professor at the Waseda Institute for Advanced Study, Waseda University. Her research interests lie broadly in the areas of public and private international law, technology and law, and the interdisciplinary study of law. Prior to joining Waseda, she attended law schools in both China and the UK, holding a Ph.D. from the University of Leeds (2019), an LL.M. from China University of Political Science and Law, and an LL.B. from Huazhong University of Science and Technology. Her papers have appeared in highly regarded law journals, including the International & Comparative Law Quarterly and the Asian Journal of International Law. Her academic achievements have been acknowledged by distinguished institutes in this area, including a scholarship awarded by the Hague Academy of International Law, an Academic Achievement Prize from the China Society of Private International Law, and a writing contest prize from Stanford University.


Naomi Lintvedt

Norwegian Research Center for Computers and Law, Faculty of Law, University of Oslo, Norway

What We (Should) Talk About When We Talk About Consent in HRI: A Taxonomy of Consent in Robotics

In robotics, much attention is given to how to obtain informed consent from users in human-robot interaction. But references to consent or informed consent need to take into consideration how these terms are defined and practiced in different areas of law and across jurisdictions. We talk about consent because it is linked to the issue of privacy in human-robot interaction. However, due to the physical presence of robots, it is not sufficient to consider informational privacy alone; we must also take into account bodily, spatial, communicational, proprietary, intellectual, decisional, associational, and behavioural privacy. Thus, we need to establish a common taxonomy of privacy and consent to ensure a meaningful conversation about consent in HRI. In this talk, we will look at variations of consent, and ask whether consent can be truly specific, informed, and voluntary when humans interact with advanced robots with complex data processing capabilities.

Biography: Mona Naomi Lintvedt is a doctoral research fellow at the Norwegian Research Center for Computers and Law (NRCCL), University of Oslo. Her research investigates ethical and legal concerns that arise from how the design and functionality of robots influence human-robot interaction. The research is carried out under the aegis of the research project ‘Vulnerability in the Robot Society’ (VIROS), funded by the Norwegian Research Council. Naomi holds a law degree from the University of Oslo and has extensive experience with issues that arise at the intersection of technology and law.

Robert van den Hoven van Genderen

Faculty of Law, VU University of Amsterdam

Empathy as an Indispensable Element in Engaging with AI in Legal Relations

Combining artificial intelligence (AI) with empathy and human ethics for better engagement is clearly one of the main targets in the development of several AI applications, such as AI-generated innovations and creative works. Even though law and regulation should in principle drive towards reaching such a goal, there is a general conviction that law has to be objective in the sense that legal rules have to be cleansed of emotions and empathetic influences. Indeed, one could question whether it is even possible to embed empathy in the development of AI technology, but not in the regulatory framework that drives AI innovations and developments. Notably, morals, as well as empathy, are a more or less hidden element in rulings by judges and will therefore influence the interpretation and creation of normative law. But is this level of empathy in law sufficient to really drive empathetically steered AI systems and entities in near-future applications performing tasks and creating works with legal effects? This presentation addresses this topical question through an analysis that stems from legal theory and extends to the concrete application and interpretation of aspects of intellectual property rights and data protection legislation, as key legal areas that govern these kinds of AI activities.

Biography: Professor Dr. Robert van den Hoven van Genderen has a vast history in business and science. Currently, he is director of the Center for Law and Internet of the Law Faculty of the VU University of Amsterdam and managing partner at Switchlegal Lawyers in Amsterdam. In the recent past, he was an executive legal officer for the European project Hemolia on anti-money laundering and the financing of terrorism, and an advisor to the Council of Europe and NATO on privacy. Further, he has been director of regulatory affairs at BT Netherlands and Telfort, and secretary of Information Policy for the Netherlands Employers' Organisation. He has published several articles and books on telecommunication law, IT law, privacy, and robot law, and has lectured on these subjects at various universities in the Netherlands and abroad.


Maksymilian Kuźmicz

The Swedish Law and Informatics Research Institute, Stockholm University, Sweden

Robots and Europe’s Digital Decade – A Talk About the EU Policy Towards Robots

This talk aims to introduce the audience to the EU policy towards robots. First, the main principles of the EU policy, expressed mostly in the 2030 Policy Programme “Path to the Digital Decade”, will be identified. Second, it will be noted that instead of product-specific regulations, the EU has adopted a horizontal approach, which means that particular regulations aim to solve specific problems rather than to regulate groups of products. As a result, multiple issues connected with robots are regulated by various legal acts. In the third part, the main legal acts and their significance for robot law will be presented, including the General Data Protection Regulation, the Artificial Intelligence Act, and the European Declaration on Digital Rights and Principles for the Digital Decade. This may stimulate further discussion on whether that regulatory policy is correct, and how it might be amended.

Biography: Maksymilian obtained a BA in Law and Philosophy (summa cum laude) from the College of Interdisciplinary Individual Studies in Humanities and Social Sciences at KU Lublin. During both his BA and MA, he spent a total of three semesters at KU Leuven as an exchange student. He completed his education with an MA in Law from KU Lublin (summa cum laude). Since September 2020, Maksymilian has been working as a member of the RESHUFFLE project at the Institute for European Law at KU Leuven (under the direction of Prof. dr. Elise Muir, a position supported by a Starting Grant from the European Research Council). In May 2021 he joined the visuAAL Innovative Training Network (Marie Skłodowska-Curie Action) as a PhD student at Stockholm University.


Gabriele Trovato

Shibaura Institute of Technology

“A Device in Your Home? Lessons Learnt Through Interaction With Older Adults in Japan and Germany”

As the problem of population ageing becomes increasingly prominent, the burden on families, caregivers, and medical workers caring for older adults will grow heavier. Social exclusion and cognitive dysfunction make things worse, especially in times of a pandemic. One of the most effective approaches to these problems can be technology, whose application is often limited by the acceptance of the end user. Within the HORIZON 2020 e-ViTA project, we introduced two new devices, DarumaTO-3 and CelesTE. For both of them, special measures were needed in order to employ them in a safe and ethical way. In this presentation we will see the lessons learnt.

Biography: Gabriele Trovato is an Associate Professor at Shibaura Institute of Technology, as well as a visiting researcher at Waseda University, Tokyo, Japan, and the Principal Investigator at Waseda of the EU-Japan Horizon 2020 project e-ViTA. He received his M.S. degree in Computer Engineering from the University of Pisa, Italy, and his Ph.D. degree in Biorobotics from Waseda University. Within the framework of relations between the two countries, Gabriele Trovato has been on the organising committee of the Italy-Japan Workshops since 2011, and has been appointed "Ambassador of Livorno in the world" by the Municipality of Livorno, Italy. He has been a Visiting Researcher at the Karlsruhe Institute of Technology (Germany), Carnegie Mellon University (USA), the University of Sao Paulo (Brazil), PUCP (Peru), and Imperial College London (UK), among others. Gabriele Trovato has worked in the video game industry, where he was involved in the development of the world-famous game series "Sid Meier's Civilization" and created popular innovative mods for the game. His main research interests are interdisciplinary and include Human-Robot Interaction, with a focus on culture- and religion-related aspects, artificial emotions in humanoids, robot aesthetics, and procedural content generation. Gabriele Trovato's latest creations, such as the SanTO robots, combine engineering, AI, art, and humanities, and have raised interest in the worldwide press, including the Wall Street Journal and the BBC.