As of February 2, 2025, the first requirements of the EU's AI Act are legally binding. Companies operating in the region that fail to comply risk fines of up to 7% of their global annual revenue.
Certain AI use cases are now prohibited, including using AI to manipulate behavior and cause harm, for example to teenagers. However, Kirsten Rulf, co-author of the EU AI Act and a partner at BCG, said these prohibitions apply to "very few" companies.
Other examples of now-prohibited AI practices include:
- AI-based social scoring that causes unfair or disproportionate harm.
- Risk assessments that predict criminal behavior based solely on profiling.
- Unauthorized real-time remote biometric identification by law enforcement in public spaces.
"For example, banks and other financial institutions that use AI must carefully ensure that their creditworthiness assessments do not fall into the category of social scoring," Rulf said. Read the complete list of prohibited practices in the EU AI Act.
In addition, the Act now requires that staff at companies that either supply or use AI systems have "a sufficient level of AI literacy." This can be achieved either through internal training or by hiring staff with the relevant skill set.
"Business leaders must ensure that their workforce is AI-literate at a functional level and equipped with basic AI education to promote an AI-driven culture," Rulf said in a statement.
SEE: TechRepublic Premium's AI Quick Glossary
The next milestone for the AI Act comes at the end of April, when the European Commission is expected to publish the final Code of Practice for general-purpose AI models, according to Rulf. The code comes into force in August, along with the powers of Member States' supervisory authorities to enforce the law.
"Between now and then, companies must demand sufficient information from AI model providers to deploy AI responsibly, and work in collaboration with providers, policymakers, and regulators to ensure pragmatic implementation," Rulf advised.
The AI Act does not stifle innovation but allows it to scale, according to its co-author
While many have criticized the AI Act, as well as the EU's strict approach to regulating technology companies in general, Rulf said during a BCG press roundtable that this first phase of the legislation marks "the start of a new era in AI scaling."
"[The Act] puts the guardrail frameworks and risk management in place that are needed to scale up," she said. "It's not stifling innovation … it enables the scaling of the AI innovations that we all want to see."
She added that AI inherently carries risks, and scaling it without managing them can erode the efficiency benefits and damage a business's reputation. "The AI Act gives you a really good blueprint for how to tackle these risks, how to tackle these quality problems before they occur," she said.
According to BCG, 57% of European companies cite uncertainty about AI rules as an obstacle. Rulf acknowledged that the current definition of AI covered by the AI Act "cannot be operationalized easily" because it is so broad, having been written that way to align with international guidelines.
"The difference in how you interpret that AI definition for a bank is the difference between 100 models falling under this regulation and 1,000-plus models falling under this regulation," she said. "Of course, it makes a huge difference for capacity, costs, bureaucracy, and oversight, but can even policymakers keep track of all this?"
Rulf emphasized that it is important for companies to engage with the EU AI Office while the standards for the parts of the AI Act that have not yet been phased in are still being drafted. That way, policymakers can make them as practical as possible.
SEE: What is the EU's AI Office? New body formed to oversee the rollout of general-purpose AI models
"As a regulator and policymaker, you otherwise don't hear these voices," she said. "You can't deregulate if you don't know where the big problems and obstacles are … I can only encourage everyone to be as blunt as possible and as industry-specific as possible."
Despite the criticism, Rulf said the AI Act has "developed into a global standard" and has been copied both in Asia and in certain US states. This means many companies may not find compliance too taxing if they have already adopted a responsible AI program to meet other regulations.
SEE: EU AI Act: Australian IT pros need to prepare for AI regulation
More than 100 organizations, including Amazon, Google, Microsoft, and OpenAI, have already signed the EU AI Pact, volunteering to start implementing the Act's requirements ahead of the legal deadlines.