Thesis Chapters by Michael Camilleri

The term ‘algocracy’ was coined by A. Aneesh to refer to a new form of governance that rivals bureaucracy. The suffix ‘-cracy’ derives from the Greek word for ‘rule’: algocracy means rule by algorithms, just as bureaucracy means rule by non-elected officials. The term ‘bureaucracy’ carries negative connotations. When faced with an unjust decision by a governmental institution that seems frustratingly impossible to rectify, we speak of bureaucracy being ‘faceless’, and feel like Josef K. in Franz Kafka’s ‘The Trial’, charged with an unspecified offence and tried through an incomprehensible court process. If decisions that affect the lives of ordinary individuals are produced by algorithms coded in a computer, there is a threat that algorithmic decision-making could be even more Kafkaesque than decisions made by faceless bureaucrats.
In R v. Secretary of State for the Home Department ex parte Anufrijeva, Lord Steyn referred to Kafka to underline the importance of due process in a constitutional state: “The antithesis of such a state was described by Kafka: a state where the rights of individuals are overridden by hole in the corner decisions or knocks on the door in the early hours.” Thus, where a system of law imposes a requirement of due process on administrative decision-making, algorithmic decision-making is subject to that requirement. In one of the earliest judgements on due process, Justice Fortescue observed that “even God himself did not pass sentence upon Adam before he was called upon to make his defence.” The implication is that even an all-knowing being, possessing all the information necessary to reach a just decision, would grant a human being a fair hearing. Can the same be said of algorithmic decision-making?
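To make the due-process concern concrete, the following is a minimal, hypothetical sketch of the kind of opaque automated decision the chapter worries about. The function, its weights and its threshold are invented purely for illustration and are not drawn from any real system; the point is that the affected person receives only a verdict, with no reasons given and no opportunity to be heard.

```python
# Hypothetical sketch: an opaque scoring rule standing in for an
# automated administrative decision. Weights and threshold are
# arbitrary assumptions made for illustration only.

def assess_benefit_claim(income: float, flags: int) -> str:
    """Return only a verdict; the claimant never sees the rule or the score."""
    score = income * 0.1 + flags * 25  # invented weighting
    return "DENIED" if score > 50 else "APPROVED"

# The claimant learns the outcome, but not why, and is never heard.
print(assess_benefit_claim(income=300.0, flags=1))  # -> DENIED
```

Unlike Justice Fortescue’s God, the function reaches its conclusion without ever calling on the claimant to make a defence.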
Papers by Michael Camilleri

An autonomous vehicle is heading towards another vehicle with five people inside and cannot brake in time. It has two options: do nothing and crash into the vehicle, or divert its trajectory and crash into a wall, almost certainly killing its own sole occupant. This is the Artificial Intelligence version of the ‘Trolley Problem’ - at its core a philosophical and ethical problem tackled from a number of perspectives, not least a utilitarian approach.
This moral question, which previously ‘merely’ troubled the mind, now confronts us in an actual, physical and even legal way - in the form of ‘embodied AI’. From a legal point of view, the question is: who would be responsible in such a scenario - the AI, the computer programmers, the manufacturers, the vendors or the vehicle’s occupant? The reality is that courts faced with an accident involving an AI-powered machine would not resort to philosophical arguments, but would be confined to enforcing the letter of the law - primarily laws equipped to assign responsibility to a human for malicious or negligent acts or omissions. Yet keeping the status quo would be dangerous, since judges would remain confined to black-letter law even where it produces unjust results. Hence the responsibility gap.
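As an illustration of the utilitarian approach mentioned above, the sketch below encodes the scenario as a naive choice that minimises expected fatalities. The class, the action names, the casualty figures, and the very assumption that such estimates are available to the vehicle’s planner are all hypothetical, introduced for this example rather than taken from any actual autonomous-vehicle system.

```python
# Illustrative, hypothetical sketch of a naive utilitarian decision
# rule for the AV trolley scenario. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_fatalities: int  # assumed to be estimable by the planner

def utilitarian_choice(actions: list[Action]) -> Action:
    """Pick the action with the fewest expected fatalities."""
    return min(actions, key=lambda a: a.expected_fatalities)

# The two options from the scenario: stay on course (five people in the
# other vehicle at risk) or swerve into the wall (one occupant at risk).
options = [
    Action("stay_on_course", expected_fatalities=5),
    Action("swerve_into_wall", expected_fatalities=1),
]

print(utilitarian_choice(options).name)  # -> swerve_into_wall
```

On a strictly utilitarian rule the vehicle swerves; the legal question the paper raises is who answers for that choice.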