At the Trustworthy AI Standardization Workshop, stakeholders from science, business and politics will come together to discuss Trustworthy AI and how organizations can prepare for the upcoming AI regulations. Ever new AI-based business models and the expected entry into force of the EU AI Act significantly increase the importance of proven reliability and comprehensive quality standards for intelligent applications and systems.

AI evaluation and testing play a central role in this:

  • What requirements will AI systems have to meet?
  • What are the advantages of tested AI applications?
  • What offerings for AI testing are already available on the market?
  • What is the role of standardization at this point? What is needed in terms of standardization activities?

The experts from BSI (German Federal Office for Information Security), Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, PwC PricewaterhouseCoopers GmbH WPG, TÜV SÜD, Singapore's IMDA and the German standardization organization DIN will share insights into their work in the field of AI testing, as well as current AI trends and challenges.

Participants from different backgrounds will present how they are preparing for regulatory requirements and the implementation of the AI Act. Interdisciplinary panel discussions and networking opportunities round off the program.

The in-person event will be held all day as part of the "Singapore Forum on Building Open Trust Systems 2023" on 27th October 2023 from 9 am to 5 pm at Pan Pacific Orchard Hotel in Singapore.
We look forward to keeping you informed about new impulses, key findings and discussion points in the follow-up to the event.

If you have any questions, please do not hesitate to contact us!

Contact person:

Till Lehmann (DIN)


Preliminary agenda:

Agenda

Time | Content | Speaker

9:50 – 10:00 | Opening and Introduction | Till Lehmann, Team coordinator, DIN

Part 1: Introduction

10:00 – 10:30 | Conformity Assessment meets AI: Challenges and Concepts | Hendrik Reese, Partner, PwC

10:30 – 11:00 | Implications of ML-Technologies to Trustworthiness Evaluation: Lessons Learned | Dr. Maximilian Poretschkin, Team leader "AI assurance and certification", Fraunhofer IAIS; Consortium leader at ZERTIFIZIERTE KI

Part 2: Horizontal vs. Sectoral Standards (Profile Concept)

11:00 – 11:30 | EU AI Act Standardization Request | Daniel Loevenich, Unit Principle, Strategy and Evidence in Artificial Intelligence, BSI; Till Lehmann, Team coordinator, DIN

11:30 – 12:00 | Horizontal vs. Sectoral Standards (Profile Concept) | Hendrik Reese, Partner, PwC

Lunch Break (12:00 – 13:30)

Part 3: Bridging the Gap: AI Trustworthiness Evaluation and Certification

13:30 – 14:00 | TAISEC & TAISEM Criteria Approach | Dr. Maximilian Poretschkin, Team leader "AI assurance and certification", Fraunhofer IAIS; Consortium leader at ZERTIFIZIERTE KI; Daniel Loevenich, Unit Principle, Strategy and Evidence in Artificial Intelligence, BSI

14:00 – 14:30 | The Role of Tools and Frameworks | Dr. Martin Saerbeck, CTO Digital Service, TÜV SÜD Group

Part 4: Landscape and perspective of Trustworthy AI standards from Asia

14:30 – 15:00 | The development of AI testing frameworks, standards, and best practices | Chung Sang Hao, Deputy Director of the AI Governance team, AI Verify Foundation of IMDA (Infocomm Media Development Authority)

15:00 – 15:30 | Trustworthy AI Testing, Certificate and Standards | SAC representatives: Wanzhong Ma, Jiaqi Liu (Huawei)

15:30 – 16:10 | Panel Discussion | TBC

16:10 – 16:20 | Closing | Till Lehmann, Team coordinator, DIN

16:20 – 17:00 | Networking |