AI products that pose risks need to be assessed before they are introduced
The European Union passed the first artificial intelligence law in the world
The European Union has passed the Artificial Intelligence (AI) Act, the first AI legislation in the world, providing a model for regulation and oversight. It could also become a global consensus standard for AI regulation. India is making efforts in the same direction and is seeking to bring out a declaration document on the subject at the ongoing Global Partnership on Artificial Intelligence (GPAI) Summit.
Artificial intelligence has made production cheaper in many fields, accelerating drug discovery and spurring new initiatives in materials science research. From scientific research to autonomous transportation, healthcare and disease detection, and from smart power grids to financial systems and telecommunication networks, AI systems can prove very useful in both private and public services.
But artificial intelligence can also enable criminal activity, and it concentrates more power in the hands of the state. Instant facial recognition, pervasive surveillance, and discriminatory social scoring are its hallmarks. Threats can also arise from military applications that make automated warfare possible without any human intervention. This is quite apart from the science-fiction possibility of a self-aware artificial intelligence that understands its own nature and learns out of curiosity.
Such concerns should be addressed holistically, and there should be consensus on such regulation across major economies. The European Union's rules seek to establish a technology-neutral, uniform definition of artificial intelligence that will apply to future systems as well. This matters because the technology is evolving very fast. The conceptual framework classifies AI systems by risk: the higher the risk, the stricter the monitoring and the greater the obligations on providers and users. Limited-risk systems must comply with transparency requirements under the AI Act that allow users to make informed decisions.
Under the European Union law, AI systems that affect safety or fundamental rights are considered high risk and fall into two categories. The first covers AI used in products such as toys, aviation, cars, medical devices and elevators; the second covers AI used in specified fields that must be registered in a European Union database. These include biometric identification, critical infrastructure, education and vocational training, and essential private and public services powered by AI. AI products that pose such risks should be properly assessed before they are introduced.
Certain systems that pose unacceptable risks are prohibited outright under the law. These include applications that manipulate people's cognitive behaviour or exploit vulnerable groups. Real-time remote biometric identification systems, such as facial recognition, can be used only with court approval, and only to identify suspects in serious crimes. The framework will need adjustment in certain areas, and while it does not cover military research and development, it provides a baseline on which the entire world can agree. India should consider adopting some version of it and introducing a domestic licensing regime on similar lines.