The professional body for tech workers has recommended licensing the computer scientists who build AI products.
Rashik Parmar, chief executive of BCS, The Chartered Institute for IT, made the recommendation after the Competition and Markets Authority (CMA) launched a review of the AI market.
The regulator’s review follows concerns that large technology companies such as Microsoft are coming to dominate the fast-moving market.
Mr. Parmar said: “I would not want a surgeon to operate on me who didn’t have the right code of ethics, who wasn’t competent and ethical… Yet we let IT experts design and deploy complicated systems without that professionalism.
“We need certified professionalism.”
He proposed registering computer scientists working on AI systems used in “critical infrastructure” or that “could potentially be harmful to human life”.
A 2020 OECD report found that “occupational entry regulations” reduced company productivity by 1.5 percent.
The report argued for “lightened” licensing and certification requirements and a shift toward “ensuring certain quality standards for goods and services” rather than “setting standards for the professionals providing them”.
The CMA has said its AI market analysis is a “mapping” exercise rather than a precursor to regulation.
As ChatGPT and its derivatives become commonplace, public trust in AI systems to make decisions on people’s behalf is growing.
John Hill, founder of the process simulation company Silico, said AI could be valuable for modeling diverse business scenarios and their outcomes, provided users trust the software.
Mr. Hill said the issue goes beyond trusting the technology: “It’s a shift in using it for different aspects of the decision-making process and gaining a view of what your decisions will actually look like in the future,” something people “cannot achieve” on their own.