
Certified AI Governance Professional (AIGP)



Length
2 days

Price
SEK 22,890

The Artificial Intelligence Governance Professional (AIGP) training teaches professionals how to develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies around the world. The course and certification provide an overview of AI technology, a survey of current law, and strategies for risk management, among many other relevant topics.

Why should I take the Certified Artificial Intelligence Governance Professional training?

Businesses and institutions need professionals who can evaluate AI, curate standards that apply to their enterprises, and implement strategies for complying with applicable laws and regulations.

The AIGP training teaches how to develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies. The curriculum provides an overview of AI technology, a survey of current law, strategies for risk management, security and safety considerations, privacy protection, and other topics.

This training teaches critical artificial intelligence governance concepts that are also integral to the AIGP certification exam. While not purely a “test prep” course, this training is appropriate for professionals who plan to certify, as well as for those who want to deepen their AI governance knowledge. Both the training and the exam are based on the same body of knowledge.

What's included:

  • Official learning materials
  • Exam Voucher (when available from the IAPP in Q1 2024)
  • First-year IAPP membership
  • Practice Exam (when available from the IAPP in Q1 2024)

There are no prerequisites for this course.

Who Should Train?

Any professional tasked with developing AI governance and risk management in their operations, and anyone pursuing the IAPP Artificial Intelligence Governance Professional (AIGP) certification.

Module 1: Foundations of artificial intelligence

Defines AI and machine learning, presents an overview of the different types of AI systems and their use cases, and positions AI models in the broader socio-cultural context.

Module 2: AI impacts on people and responsible AI principles

Outlines the core risks and harms posed by AI systems, the characteristics of trustworthy AI systems, and the principles essential to responsible and ethical AI.

Module 3: AI development life cycle

Describes the AI development life cycle and the broad context in which AI risks are managed.

Module 4: Implementing responsible AI governance and risk management

Explains how major AI stakeholders collaborate in a layered approach to manage AI risks while acknowledging AI systems’ potential societal benefits.

Module 5: Implementing AI projects and systems

Outlines mapping, planning and scoping AI projects, testing and validating AI systems during development, and managing and monitoring AI systems after deployment.

Module 6: Current laws that apply to AI systems

Surveys the existing laws that govern the use of AI, outlines key GDPR intersections, and provides awareness of liability reform.

Module 7: Existing and emerging AI laws and standards

Describes global AI-specific laws and the major frameworks and standards that exemplify how AI systems can be responsibly governed.

Module 8: Ongoing AI issues and concerns

Presents current discussions and ideas about AI governance, including awareness of legal issues, user concerns, and AI auditing and accountability issues.