The adoption of advanced technologies such as Machine Learning (ML) has been rapid and widespread. More recently, ChatGPT, which relies on a branch of ML known as large language models, has proved to be both highly useful and potentially dangerous. Responsible Artificial Intelligence (RAI) is necessary to mitigate the technology's risks, including issues of bias, fairness, safety, privacy, and transparency. Yet it is by no means standard practice, and adoption of RAI across organizations worldwide has so far been relatively limited.
As an industry leader in solutions for professionals, Wolters Kluwer has been at the forefront of embedding advanced technologies in its products, and the Wolters Kluwer Internal Audit team has played a key role in helping to develop a governance framework for RAI. Hear first-hand from Deep Nanda, the AI Lead in the Wolters Kluwer Internal Audit team, on the work done and lessons learnt in this critical new area of ESG (Environmental, Social, and Governance).
Learning Objectives:
- What Responsible AI is
- Why organizations need Responsible AI programs
- What role auditors can play in implementing Responsible AI
To be awarded the full credit hours, you must attend the entire session and answer at least three of the polling questions. CPE certificates will be distributed the week after the webinar. Participants will earn 1.0 CPE credit.