Opinion Article
Some Remarks on the Relevance of Ethics for Machines and Algorithms
Celso Vargas-Elizondo, The Costa Rica Institute of Technology
Received Date: July 15, 2025; Published Date: July 21, 2025
Introduction
According to The Future of Jobs Report 2025, complemented this year by the ILO's World Employment and Social Outlook (2024, updated in September), the share of work delivered by technology (machines and algorithms) shows that the interplay between humans and technology is shifting rapidly. In 2025, humans alone deliver 47% of work, a share expected to fall to 33% within five years. The combination of humans and technology accounts for 30% in 2025 and is expected to reach 33% in five years, while technology alone now delivers 22% and is projected to reach 34%. This trend will undoubtedly consolidate over the next decade, with AI driving much of the increase; "machines" here refers to such AI systems. Much of this trend can be captured in terms of the following three governance principles (HLEG-AI 2019) [1]: Human-on-the-Loop (HOTL), Human-in-the-Loop (HITL) and Human-in-Command (HIC). From a social and human perspective, these three principles correspond to human oversight requirements for technology. They are hierarchically ordered by the degree of human control (surveillance) over the technological process, from design through implementation, deployment and decommissioning: less surveillance is needed for HOTL, more for HITL, and much more for HIC.
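For readability, the cited shares can be tabulated in a few lines of Python (a minimal sketch; the figures are those quoted above, and labelling the five-year projection "2030" is my assumption):

```python
# Shares of work delivery by mode, quoted from the Future of Jobs Report 2025;
# the second figure assumes "within five years" means 2030.
shares = {
    "humans alone":        (47, 33),
    "humans + technology": (30, 33),
    "technology alone":    (22, 34),
}

for mode, (now, later) in shares.items():
    print(f"{mode:>20}: {now}% in 2025 -> {later}% projected for 2030")
```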
The first principle, HOTL, "refers to the capability for human intervention during the design cycle of the system and monitoring the system's operation" (HLEG-AI, 2019:18), and much of the work delivered by technology alone falls under this principle. In this case, the behaviour of the machine or algorithm is predictable and transparent, so surveillance is needed only at certain control points, to ensure that deviations remain within the range of error permitted by the standards. The second principle, HITL, "refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable" (HLEG-AI, 2019:18); that is, the system and the human play complementary roles in achieving specific tasks. For example, experts achieve greater precision using an algorithm than without it. These processes receive different names depending on the field: cobotics in human-robot collaboration, human-machine combination, or human-system collaborative work.
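The contrast between these two oversight modes can be sketched as follows; the tolerance value, function names, and review step are hypothetical illustrations, not part of the HLEG-AI guidance:

```python
# Hypothetical sketch of HOTL vs. HITL oversight; the names, tolerance,
# and review function are illustrative assumptions, not a real API.

TOLERANCE = 0.05  # assumed maximum deviation permitted by the standard

def hotl_checkpoint(measured_deviation: float) -> bool:
    """HOTL: the human checks only at designated control points,
    confirming that deviation stays within the permitted range of error."""
    return measured_deviation <= TOLERANCE

def hitl_decide(system_proposal: str, human_review) -> str:
    """HITL: the human intervenes in every decision cycle; the system's
    proposal becomes a decision only after complementary human input."""
    return human_review(system_proposal)

# Usage: under HITL, the expert refines the algorithm's proposal.
decision = hitl_decide("diagnosis: benign",
                       lambda proposal: proposal + " (confirmed by expert)")
print(hotl_checkpoint(0.02), decision)
```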
The third principle, HIC, is the broadest in terms of human control of the system and "refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by a system. Moreover, it must be ensured that public enforcers have the ability to exercise oversight in line with their mandate" (HLEG-AI, 2019:18). These three principles provide important guidance for analysing different automation systems and offer useful insights into the set of controls that should be applied in each case. On the other hand, the classification of a system into the second or third category is provisional: as we gain more knowledge or the system improves, fewer controls may turn out to be needed.
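As an operational illustration of the HIC capabilities quoted above (the decision not to use the system at all, and the ability to override its outputs), here is a hypothetical sketch; the class and method names are assumptions for exposition only:

```python
# Hypothetical sketch of Human-in-Command oversight; all names are
# illustrative assumptions, not part of the HLEG-AI guidance.

from typing import Optional

class HumanInCommand:
    def __init__(self, use_system: bool):
        # HIC includes the decision not to use the AI system in a situation.
        self.use_system = use_system

    def decide(self, ai_output: str, override: Optional[str] = None) -> str:
        if not self.use_system:
            return "decision made without the AI system"
        # The human retains the ability to override a decision made by the system.
        return override if override is not None else ai_output

commander = HumanInCommand(use_system=True)
print(commander.decide("approve loan", override="defer to human review"))
```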
What is the ethics behind these three principles? In the ethical framework proposed by the HLEG-AI (2019), deontology is the underlying theory. From the perspective of automation, however, I consider some of the theories called "consequentialism" to be relevant. As Walter Sinnott-Armstrong (2023) [3] indicates: "Any consequentialist theory must accept the claim that I labeled 'consequentialism', namely, that certain normative properties depend only on consequences". According to consequentialism, an act is good or bad depending on the consequences of that act in domains considered valuable. The general principle is that the ratio between positive (P) and negative (N) consequences be as close to 1 as reasonably achievable.
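One interpretive reading of this principle, stated as a worked formula (my assumption: taking the principle to ask that positive consequences account for as large a share of the total as reasonably achievable, with P and N as aggregate measures of positive and negative consequences):

$$\frac{P}{P+N} \;\longrightarrow\; 1 \quad \text{as reasonably achievable,}$$

so that an act is ranked higher the more its positive consequences dominate its total consequences.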
In autonomous systems, rule-consequentialism is especially relevant. According to this theory, "rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong" (Hooker 2023) [2]. Such systems should therefore behave according to a set of ethical rules whose consequences are good, predictable and transparent (it is very hard to make the behaviour of neural-network-based systems transparent). However, this set of rules should be complemented with the deontological approach in those cases where harm is a consequence of the behaviour of the autonomous system (Sinnott-Armstrong 2023) [3]. Who is responsible is a complex issue, as is how such harm should be repaired. As automation's impact on jobs grows, new ethical analyses will be needed to prevent disruptions in social and economic matters such as social security and economic and fiscal stability.
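To make the rule-consequentialist idea concrete, here is a minimal sketch in which candidate rules are ranked by the goodness of the consequences of general compliance with them; the candidate rules, scores, and threshold are hypothetical illustrations, not a proposal for real systems:

```python
# Hypothetical sketch of rule-consequentialist rule selection; the candidate
# rules, scores, and threshold are illustrative assumptions.

def expected_goodness(rule: str) -> float:
    """Placeholder scoring of the aggregate consequences of general
    compliance with a rule (higher is better)."""
    scores = {
        "yield to humans at crossings": 0.9,
        "maximize throughput regardless of risk": 0.2,
    }
    return scores.get(rule, 0.0)

def select_rules(candidates: list[str], threshold: float = 0.5) -> list[str]:
    # Rules are selected solely by the goodness of their consequences;
    # acts are then judged right or wrong by conformity to the selected rules.
    return [rule for rule in candidates if expected_goodness(rule) >= threshold]

permitted = select_rules([
    "yield to humans at crossings",
    "maximize throughput regardless of risk",
])
print(permitted)  # the autonomous system acts only on the surviving rules
```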
References
- HLEG-AI (2019) Ethics Guidelines for Trustworthy AI. European Commission High-Level Expert Group on Artificial Intelligence.
- Hooker, Brad (2023) Rule Consequentialism. In: Zalta, Edward N, Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy.
- Sinnott-Armstrong, Walter (2023) Consequentialism. In: Zalta, Edward N, Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy.
Keywords: World Employment and Social Outlook, Human-on-the-Loop, Human-in-Command, performance of technology, consequences
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.