The demand for transparent, explainable decision making that is accurate, consistent, and effective has never been greater. Legislation like the GDPR is a direct result of growing concerns about privacy, safety, and transparency in general. While AI/ML solutions are great at making sense of high volumes of data, their reasoning process is usually quite opaque, sometimes leaving us baffled as to why a particular recommendation was made.
You will learn about the latest research in the field of eXplainable AI (XAI), an approach that combines AI/ML with traditional business rules to better understand the factors that contribute to an automated decision. Presenters will introduce you to the latest standards for representing decision logic and will demonstrate an XAI solution built from open source components, showing how we can finally answer questions about why an automated decision was made.