Exam code: 1CP2
What is artificial intelligence?
- Artificial intelligence (AI) is a machine that can display intelligent behaviour similar to that of a human
- AI is a system that can:
  - Learn – acquire new information
  - Decide – analyse and make choices
  - Act autonomously – take actions without human input
-
What is machine learning?
- Machine learning is one method that can be used to achieve artificial intelligence (AI)
- By giving a machine data so that it can 'learn over time', machine learning trains a machine or software to perform a task and improve its accuracy and efficiency
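As an illustration (this goes beyond what the exam requires), here is a tiny Python sketch of the idea: the program is never told the rule directly, it works the rule out from labelled example data. The data and numbers are made up for demonstration.

```python
# Labelled training data (made up): (hours of revision, passed exam?)
training_data = [(1, False), (2, False), (3, False),
                 (5, True), (6, True), (8, True)]

def learn_threshold(data):
    """'Learn' a pass/fail boundary from the examples."""
    fails = [hours for hours, passed in data if not passed]
    passes = [hours for hours, passed in data if passed]
    # Place the boundary midway between the two groups
    return (max(fails) + min(passes)) / 2

def predict(hours, threshold):
    """Use the learned rule to make a prediction."""
    return hours >= threshold

threshold = learn_threshold(training_data)
print(threshold)               # 4.0
print(predict(7, threshold))   # True
```

Giving the program more (or better) examples would move the learned boundary, which is what "improving with data" means in practice.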
What is robotics?
- Robotics is the principle of a robot carrying out a task by following a precise set of programmed instructions
- Robots can be categorised into two groups:
| Dumb robots | Smart robots |
|---|---|
| Repeat the same programmed instructions over and over again (no AI) | Carry out more complex tasks and can adapt and learn (AI) |
| E.g. car assembly line | E.g. assisting surgeons in delicate procedures |
- The development of artificial intelligence, including the increased use of machine learning and robotics, raises ethical and legal issues such as:
  - Accountability
  - Safety
  - Algorithmic bias
  - Legal liability
-
Accountability & Safety
Why is accountability & safety an issue?
- Accountability can be an ethical issue when the use of AI leads to a negative outcome
- Safety can be an ethical issue when you try to ensure safety in an algorithm that is designed to make its own choices, learn and adapt
- The choices made by AI will have consequences – who is held accountable when things go wrong?
Driverless car accident
| Scenario | Ethical issues |
|---|---|
| As a passenger in a driverless car, the car suddenly swerves to miss a child in the road and kills a pedestrian walking on the pavement | |
Algorithmic Bias & Legal Liability
Why is algorithmic bias an issue?
- Algorithmic bias can be an ethical issue when AI has to make a decision that favours one group over another
- If the data used in the design of AI is based on real-world biases, then the AI will reinforce those biases
- If the programmer of the AI has personal biases, they could make design decisions that reinforce those biases
Loan approvals
| Scenario | Ethical issues |
|---|---|
| A bank introduces the use of AI to streamline loan approvals. Historical loan data is used, and a client is denied a loan based on historical approval rates for certain races or postcodes | |
Why is legal liability an issue?
- Legal liability is an issue in all aspects of AI, but particularly when the use of AI leads to the loss of human life or criminal activity
- In the eyes of the law, who is responsible?
  - The programmer?
  - The manufacturer?
  - The consumer?
Smart toy
| Scenario | Legal issues |
|---|---|
| A person buys a smart toy designed to interact with a child and personalise the play experience, learning their preferences etc. A hacker gains access to the smart toy, stealing personal data | |
Worked Example
A hospital uses an algorithm to help decide how many nurses are needed on each day
Discuss how algorithmic bias can affect the decision the hospital makes [6]
Your answer should consider:
- the causes of algorithmic bias
- the impact of algorithmic bias on individuals and communities
- the methods available to reduce the risk of algorithmic bias
Answer
Causes of algorithmic bias
Algorithms being trained using historical data – if past scheduling practices were unfair, the algorithm would continue the bias
Algorithm design focussed on efficiency over fairness – filling shifts without considering experience
Lack of transparency – hard to check and fix any potential bias
Impacts of algorithmic bias on individuals and communities
Nurse safety – unfair scheduling could cause nurse burnout, leading to medical errors
Unequal scheduling – bias could lead to groups of nurses being assigned more shifts than others or regularly assigned undesirable hours
Patient care – short staffing compromising patient care
Methods to reduce algorithmic bias
Human oversight – algorithmic recommendations should be reviewed and adjusted by human schedulers first
Transparency – nurses and all employees should understand how the algorithm is making decisions so that concerns can be raised if needed
Auditing – regular audits to identify and address any emerging bias in the algorithm's output
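As an illustration of what one auditing check might look like (this goes beyond what the exam requires), a Python sketch that compares outcome rates between two groups in an algorithm's output. The decision records and the 0.2 margin are made-up examples; a real audit would run many such checks.

```python
# Hypothetical record of decisions made by an algorithm
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(decisions, group):
    """Proportion of one group's decisions that were approvals."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "A")  # 2 out of 3 approved
rate_b = approval_rate(decisions, "B")  # 1 out of 3 approved
# Flag a possible bias if the rates differ by more than a chosen margin
print(abs(rate_a - rate_b) > 0.2)  # True – worth investigating
```

A flagged difference does not prove bias by itself, but it tells the human overseers where to look, which is how auditing and human oversight work together.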