On 1 August 2024, the AI Act, the first comprehensive regulation on the responsible and safe use of artificial intelligence, will come into force.
The arrival of what is considered the ‘cornerstone’ of the regulatory framework on Artificial Intelligence will place the emphasis on ethics. One of the main goals of the AI Act is, in fact, the protection of users and their fundamental rights: users will have to be informed when they are interacting with an artificial intelligence system, and artificially generated or manipulated images, audio, or video content (so-called ‘deepfakes’) will have to be clearly labelled as such. Stricter controls on the processing and management of personal data will also be introduced, reducing the risk of abuse and privacy breaches.
The most significant challenges will be faced by SMEs. The cost of fulfilling a long list of requirements to ensure the safety of their systems could limit these companies’ ability to compete with rivals that have greater financial and logistical resources. Another challenge will be finding the balance between regulation and the promotion of innovation: overly rigid rules could curb the potential of these technologies and, with them, our ability to progress.
The regulation also defines four levels of risk into which AI applications are to be categorised, ranging from ‘Unacceptable’, which covers, for example, remote biometric identification systems, down to ‘Minimal’, which covers most of the AI systems we interact with today.
In between sit ‘High’ risk, for which companies must carry out a prior conformity assessment to ensure the safety of the system, and ‘Limited’ risk, which requires the fulfilment of transparency obligations.
Photo Credits: Tara Winstead on Pexels