
Exam code: 1CP2

What is artificial intelligence?

  • Artificial intelligence (AI) is the ability of a machine to display intelligent behaviours similar to those of a human

  • AI is a system that can:

    • Learn – acquire new information

    • Decide – analyse and make choices

    • Act autonomously – take actions without human input

What is machine learning?

  • Machine learning is one method of achieving artificial intelligence (AI)

  • Giving a machine data so that it can ‘learn’ over time helps train the machine or software to perform a task and improve its accuracy and efficiency, as sketched in the example below
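
The idea of learning from data can be shown with a short program. Below is a minimal, invented sketch in Python (the language used for Edexcel 1CP2 programming): rather than being given a rule, the program works out a dividing line from labelled examples, and retraining with more data refines that line.

```python
# A minimal, made-up illustration of machine learning: the program is not
# told the rule for telling cats from dogs; it derives a threshold from
# example data, and the threshold improves as more examples are added.

def train(examples):
    # examples is a list of (weight_kg, label) pairs
    cats = [w for w, label in examples if label == "cat"]
    dogs = [w for w, label in examples if label == "dog"]
    # "Learn" a threshold halfway between the two group averages
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def predict(threshold, weight_kg):
    return "cat" if weight_kg < threshold else "dog"

data = [(3.1, "cat"), (3.4, "cat"), (7.9, "dog"), (8.2, "dog")]
threshold = train(data)
print(predict(threshold, 4.0))   # cat
print(predict(threshold, 7.0))   # dog

# Gathering more data and retraining moves the threshold -
# this is the sense in which accuracy improves over time
data.append((5.5, "dog"))
threshold = train(data)
```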

What is robotics?

  • Robotics is the principle of a robot carrying out a task by following a precise set of programmed instructions

  • Robots can be categorised into two groups:

Dumb robots

  • Repeat the same programmed instructions over and over again (no AI)

  • E.g. a car assembly line

Smart robots

  • Carry out more complex tasks and can adapt and learn (AI)

  • E.g. assisting surgeons in delicate procedures
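
In code terms, the difference between the two groups can be sketched roughly as follows; the robots, tasks and sensor readings here are invented purely for illustration.

```python
# Hypothetical sketch: a dumb robot repeats fixed instructions,
# while a smart robot changes its behaviour based on feedback.

def dumb_robot(cycles):
    # Same programmed instructions every cycle (no AI)
    for _ in range(cycles):
        print("pick up part -> weld -> release")

def smart_robot(readings):
    # Adapts its actions to sensor feedback (AI)
    pressure = 5
    for reading in readings:
        if reading == "tissue too soft":
            pressure -= 1   # learns to be gentler
        elif reading == "tissue firm":
            pressure += 1
        print(f"cutting with pressure {pressure}")

dumb_robot(3)
smart_robot(["tissue firm", "tissue too soft", "tissue too soft"])
```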

  • The development of artificial intelligence, including the increased use of machine learning and robotics, raises ethical and legal issues such as:

    • Accountability

    • Safety

    • Algorithmic bias

    • Legal liability

Accountability & Safety

Why is accountability & safety an issue?

  • Accountability can be an ethical issue when the use of AI leads to a negative outcome

  • Safety can be an ethical issue when you try to ensure safety in an algorithm that is designed to make its own choices, learn and adapt

  • The choices made by AI will have consequences, so who is held accountable when things go wrong?

Driverless car accident

Scenario

You are a passenger in a driverless car. The car suddenly swerves to miss a child in the road and kills a pedestrian walking on the pavement.

Ethical issues

  • Who takes accountability for a driverless car?

  • Should AI be programmed to prioritise the passengers’ safety or the safety of pedestrians?

  • What rules should be programmed to ensure safety?

  • What happens when danger is unavoidable? (see the sketch below)
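
One way to see why these questions are hard: any rule a programmer writes has already answered them before the accident happens. The sketch below is entirely hypothetical and does not reflect how any real driverless car is programmed.

```python
# Hypothetical decision rule for an unavoidable-danger situation.
# Whatever ordering the programmer chooses, someone bears the harm -
# the ethical choice is baked into the code before the accident happens.

def choose_action(child_in_road, pedestrian_on_pavement):
    if child_in_road and pedestrian_on_pavement:
        # Both options cause harm; this line prioritises the child
        return "swerve"
    if child_in_road:
        return "swerve"
    return "continue"

print(choose_action(True, True))   # swerve - but who is accountable?
```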

Algorithmic Bias

Why is algorithmic bias an issue?

  • Algorithmic bias can be an ethical issue when AI has to make a decision that favours one group over another

  • If data used in the design of AI is based on real-world biases then the AI will reinforce those biases

  • If the programmer of the AI has personal biases they could make design decisions that reinforce their personal biases

Loan approvals

Scenario

A bank introduces AI to streamline loan approvals. The AI is trained on historical loan data, and a client is denied a loan because approval rates were historically low for certain races or postcodes.

Ethical issues

  • How is it fair to deny a person based on historical data?

  • Is the AI reinforcing biases?

  • Is the AI programmer aware of biases in the historical data?

  • Who is responsible for biased outcomes? (see the sketch below)
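
A made-up example shows how an AI trained only on past decisions simply reproduces them. The postcodes and approval rates below are invented.

```python
# Invented historical approval rates, reflecting past (possibly biased)
# human decisions rather than any individual applicant's merit

historical_approval_rate = {"AB1": 0.9, "CD2": 0.3}

def ai_decision(postcode):
    # The "AI" simply repeats the historical pattern, so past
    # unfairness towards CD2 applicants becomes future unfairness
    return "approve" if historical_approval_rate[postcode] > 0.5 else "deny"

print(ai_decision("AB1"))   # approve
print(ai_decision("CD2"))   # deny - regardless of the applicant's own finances
```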

Legal Liability

Why is legal liability an issue?

  • Legal liability is an issue in all aspects of AI, but particularly when the use of AI leads to the loss of human life or criminal activity

  • In the eyes of the law, who is responsible?

    • The programmer?

    • The manufacturer?

    • The consumer?

Smart toy

Scenario

A person buys a smart toy designed to interact with a child and personalise the play experience by learning the child’s preferences.

A hacker gains access to the smart toy and steals personal data.

Legal issues

  • Can the programmer who wrote the code be sued?

  • Can the manufacturer of the toy be sued?

  • Have privacy laws been violated by the manufacturer?

  • Does the smart toy need to be recalled?

Worked Example

A hospital uses an algorithm to help decide how many nurses are needed on each day

Discuss how algorithmic bias can affect the decision the hospital makes [6]

Your answer should consider:

  • the cause of algorithmic bias

  • the impact of algorithmic bias on individuals and communities

  • the methods available to reduce the risk of algorithmic bias

Answer

Causes of algorithmic bias

Algorithms trained using historical data – if past scheduling practices were unfair, the algorithm would continue the bias

Algorithm design focussed on efficiency over fairness – filling shifts without considering experience

Lack of transparency – hard to check and fix any potential bias

Impacts of algorithmic bias on individuals and communities

Nurse safety – unfair scheduling could lead to nurse burnout, which in turn can cause medical errors

Unequal scheduling – bias could lead to groups of nurses being assigned more shifts than others or regularly assigned undesirable hours

Patient care – short staffing could compromise patient care

Methods to reduce algorithmic bias

Human oversight – algorithmic recommendations should be reviewed and adjusted by human schedulers first

Transparency – nurses and all employees should understand how the algorithm is making decisions so that concerns can be raised if needed

Auditing – regular audits to identify and address any emerging bias in the algorithm’s output (sketched below)
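
As a rough sketch of what such an audit might check, the following compares the shifts assigned to each group of nurses; the shift counts and the 25% tolerance are assumptions for illustration.

```python
# Hypothetical audit: flag any group whose shift count is far from average

shifts = {"group A": 42, "group B": 18}   # invented algorithm output

average = sum(shifts.values()) / len(shifts)
for group, count in shifts.items():
    if abs(count - average) > average * 0.25:   # 25% tolerance (assumption)
        print(f"Possible bias: {group} has {count} shifts (average {average})")
```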
