Artificial Intelligence Act: All You Need to Know About the European Council’s First Worldwide Rules on AI

The Artificial Intelligence Act is groundbreaking legislation approved by the Council of the European Union on May 21, 2024. It sets global standards for AI regulation, aiming to balance innovation and safety through a ‘risk-based’ approach that ensures trustworthy AI systems while protecting citizens’ rights and stimulating investment.

Artificial Intelligence (AI) is a powerful tool in today’s tech world, offering remarkable innovations but also presenting notable risks. As AI adoption grows, it is important to manage these risks effectively to maximize the benefits while minimizing the dangers. On Tuesday, May 21, 2024, the Council of the European Union made a major move by approving a new law to set global standards for AI regulation. This law uses a ‘risk-based’ approach, meaning stricter rules apply to AI systems that could cause greater harm.

What is the Aim of the AI Act?

The AI Act aims to promote the development and use of safe and trustworthy AI systems across the EU’s single market. It covers both the private and public sectors, supporting the growth of AI technology and enhancing European innovation. The law also seeks to protect the fundamental rights of EU citizens while encouraging investment and innovation in AI.

Key Goals of the AI Act:

  • Encourage Safe AI Development: Ensure AI systems are safe and trustworthy.
  • Protect Citizens’ Rights: Safeguard the fundamental rights of EU citizens.
  • Stimulate Innovation: Promote investment and innovation in AI.
  • Support Regulatory Learning: Create a framework that supports learning and adaptation of regulations based on evidence.
  • AI Regulatory Sandboxes: Provide controlled environments for testing and validating new AI systems.

How Will the AI Act Differentiate the Risks of AI Systems?

The AI Act categorizes AI systems by their risk levels.

Risk Categories:

  • Limited Risk: Minimal transparency requirements.
  • High Risk: Strict requirements and obligations must be met for market access.
  • Banned AI Systems: AI systems that pose unacceptable risks, such as those used for cognitive behavioral manipulation, social scoring, predictive policing based on profiling, and categorizing people by biometric data related to race, religion, or sexual orientation.
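
For readers who prefer to see the structure as code, here is a minimal, purely illustrative sketch of the risk tiers described above. The class, the list of practices, and the classify function are hypothetical simplifications for illustration only, not anything defined by the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the AI Act's risk categories."""
    LIMITED = "limited"        # minimal transparency requirements
    HIGH = "high"              # strict obligations before market access
    UNACCEPTABLE = "banned"    # prohibited practices


# Hypothetical mapping of the banned practices listed above.
BANNED_PRACTICES = {
    "cognitive behavioral manipulation",
    "social scoring",
    "predictive policing based on profiling",
    "biometric categorization by race, religion, or sexual orientation",
}


def classify(practice: str) -> RiskTier:
    """Toy classifier: flag a banned practice, otherwise default to limited risk."""
    if practice in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    return RiskTier.LIMITED


print(classify("social scoring"))  # RiskTier.UNACCEPTABLE
```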

Who Will the AI Act Apply To?

The AI Act primarily targets the 27 EU member states but will have global implications. Companies outside the EU using EU customer data in their AI systems must comply. Other countries may adopt similar regulations based on this Act.

How Will the AI Act Enforce the Rules?

Several bodies will ensure proper enforcement of the AI Act:

Enforcement Bodies:

  • AI Office within the European Commission: Enforces common rules across the EU.
  • Scientific Panel: Supports enforcement activities.
  • AI Board: Representatives from member states advising on the consistent application of the AI Act.
  • Advisory Forum: Stakeholders providing technical expertise to the AI Board and the Commission.

How Will Rulebreakers Be Penalized?

Penalties for violating the AI Act include fines based on a percentage of the company’s global annual turnover from the previous financial year or a predetermined amount, whichever is higher. SMEs and start-ups will face proportional administrative fines.
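
As a purely illustrative sketch of the ‘whichever is higher’ rule described above: the turnover share and fixed amount below are placeholder values, not the actual figures set out in the Act, which vary by type of infringement.

```python
def administrative_fine(global_turnover_eur: float,
                        turnover_share: float,
                        fixed_amount_eur: float) -> float:
    """Return the higher of a share of global annual turnover and a fixed amount."""
    return max(global_turnover_eur * turnover_share, fixed_amount_eur)


# Hypothetical example: 3% of a 1 billion EUR turnover vs a 15 million EUR fixed amount.
print(administrative_fine(1_000_000_000, 0.03, 15_000_000))  # 30000000.0
```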

When Will the AI Act Be Implemented?

Once signed by the presidents of the European Parliament and the Council, the AI Act will be published in the EU’s Official Journal and will take effect twenty days after publication. The regulation will be applied two years after it comes into force, with some exceptions for specific provisions.
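
As a small illustrative calculation of that timeline (the publication date below is a placeholder, not the actual date of publication in the Official Journal):

```python
from datetime import date, timedelta

publication = date(2024, 7, 1)                       # placeholder publication date
entry_into_force = publication + timedelta(days=20)  # twenty days after publication
general_application = entry_into_force.replace(year=entry_into_force.year + 2)  # two years later

print(entry_into_force)     # 2024-07-21
print(general_application)  # 2026-07-21
```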

Information in Table format

| Topic | Details |
| --- | --- |
| Approval Date | May 21 |
| Governing Body | Council of the European Union |
| Approach | Risk-based approach |

| Aim of the AI Act | Key Goals |
| --- | --- |
| Promote safe and trustworthy AI systems | Encourage safe AI development (ensure AI systems are safe and trustworthy); protect citizens’ rights (safeguard fundamental rights of EU citizens); stimulate innovation (promote investment and innovation in AI); support regulatory learning (create a framework for learning and adapting regulations); AI regulatory sandboxes (provide controlled environments for testing AI systems) |

| Risk Differentiation | Categories and Requirements |
| --- | --- |
| Limited Risk | Minimal transparency requirements |
| High Risk | Strict requirements and obligations for market access |
| Banned AI Systems | Cognitive behavioral manipulation, social scoring, predictive policing based on profiling, and categorizing by biometric data related to race, religion, or sexual orientation |

| Application | Details |
| --- | --- |
| Applicable To | 27 EU member states; companies outside the EU using EU customer data in AI systems |
| Global Implications | Other countries may adopt similar regulations |

| Enforcement Bodies | Roles |
| --- | --- |
| AI Office within the European Commission | Enforce common rules across the EU |
| Scientific Panel | Support enforcement activities |
| AI Board | Representatives from member states advising on consistent application |
| Advisory Forum | Stakeholders providing technical expertise |

| Penalties for Rulebreakers | Details |
| --- | --- |
| Fines | Based on a percentage of global annual turnover or a predetermined amount, whichever is higher; proportional administrative fines for SMEs and start-ups |

| Implementation Timeline | Details |
| --- | --- |
| Post-Signature Process | Published in the EU’s Official Journal |
| Effective Date | Twenty days after publication |
| Regulation Application | Two years after coming into force, with some exceptions for specific provisions |
