IBM calls for precision regulation on AI & for companies to check for bias & assess AI's potential for harm
As outlined in our Principles for Trust and Transparency, IBM has long argued that AI systems need to be transparent and explainable... [W]e supported the OECD AI Principles, and in particular the need to “commit to transparency and responsible disclosure” in the use of AI systems... [I]t’s past time to move from principles to policy. Requiring disclosure... should be the default expectation for many companies creating, distributing, or commercializing AI systems... [W]e are calling for precision regulation of AI... [W]e propose a precision regulation framework that incorporates five policy imperatives for companies, based on whether they are a provider or owner (or both) of an AI system: (1) Designate a lead AI official... (2) Different rules for different risks... (3) Don’t hide your AI... (4) Explain your AI... (5) Test your AI for bias.
... [G]overnments should:
- Designate, or recognize, existing effective co-regulatory mechanisms... to convene stakeholders and identify, accelerate, and promote efforts to create definitions, benchmarks, frameworks, and standards for AI systems.
- Support the financing and creation of AI testbeds, with a diverse array of multi-disciplinary stakeholders working together in controlled environments.
- Incentivize providers and owners to voluntarily embrace globally recognized standards, certification, and validation regimes.