AI’s rapid adoption in business has the potential to unlock major productivity gains and offer valuable support for workers. According to Eurostat, 42% of large companies (250+ employees) already use AI, while 13% of smaller firms (10+ employees) have begun to adopt it. However, as AI technologies evolve and spread, so do the risks: data protection concerns arising from vast training datasets, criminal misuse in cyberattacks, algorithmic bias, privacy breaches, and ethical issues such as manipulation and threats to human rights.

In response, the EU has introduced the world’s first comprehensive AI legislation – the European Artificial Intelligence Act (AI Act) – to ensure the responsible development and use of AI while safeguarding citizens’ health, safety, and fundamental rights.

So, what is the EU AI Act, and why does it matter for businesses and society alike? Let’s break it down.

What Is the EU AI Act?  

The EU AI Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026. It regulates AI systems that affect people in the EU, even if they are developed outside the region. It aims to mitigate risks such as privacy breaches, bias, manipulation, and physical harm by enforcing transparency, ethical safeguards, and human oversight:

  • Transparency: AI systems must clearly explain how they function and what risks they pose. If an AI system interacts directly with people (e.g. chatbots), it must be clear that it is an AI system.
  • Ethical safeguards: The Act classifies AI by risk level, bans the most harmful systems, and imposes strict obligations on high-risk systems.
  • Human oversight: Human oversight is particularly critical for high-risk AI systems. It ensures that humans can monitor, understand, and control how these systems operate. High-risk AI must be designed to allow human interaction, enabling people to verify that the system is functioning safely and to intervene if something goes wrong.

The aim is to prevent harm to individuals’ health, safety, or fundamental rights. Developers, providers, and operators are responsible for ensuring data quality, conducting risk assessments, and maintaining transparency in how AI systems function. High-risk systems must also be registered in a public database, and serious incidents must be reported. Compliance is enforced through conformity assessments, certifications, and significant penalties for violations.

The potential fines for non-compliance can be severe – up to €35 million or up to 7% of a business’s total worldwide annual turnover for the preceding financial year, whichever is higher.
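The “whichever is higher” rule can be expressed as a simple maximum. A minimal sketch of the upper bound of the top penalty tier (illustrative only – how a fine is actually set within this cap is up to the regulators):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the AI Act's top penalty tier:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# EUR 1 billion turnover: 7% (EUR 70 million) exceeds the flat amount
print(max_fine_eur(1_000_000_000))  # 70000000.0

# EUR 100 million turnover: 7% is only EUR 7 million, so EUR 35 million applies
print(max_fine_eur(100_000_000))    # 35000000
```

For smaller firms, the fixed €35 million amount is the binding figure; above €500 million in turnover, the 7% share takes over.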

The four risk categories

The AI Act classifies AI systems by four levels of risk:

  • Unacceptable risk: Systems that use social scoring, real-time remote biometric identification, or manipulative techniques (e.g., those targeting children or other vulnerable groups) are banned outright.
  • High risk: Strictly regulated systems in areas such as transport, energy, education, employment, law enforcement and justice. Developers must ensure transparency, accountability, human oversight, and data quality.
  • Limited risk: These systems face lighter obligations, primarily transparency: users must be made aware that they are interacting with AI or viewing AI-generated content, as with chatbots or deepfakes.
  • Minimal risk: Most AI systems fall under this category and are largely unregulated. Examples include AI used in video games, spam filters, or customer service automation.
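For teams taking inventory of their AI portfolio, the four tiers can be captured in a simple lookup. A purely illustrative sketch – the tier names follow the Act, but the example use-case mapping is an assumption for demonstration, not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: oversight, transparency, registration"
    LIMITED = "transparency duties toward users"
    MINIMAL = "largely unregulated"

# Illustrative mapping only -- real classification requires legal review.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

An inventory like this makes it easy to see at a glance which systems trigger which obligations.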

Who will be affected by the AI Act?  

The AI Act affects EU citizens and all organizations operating in the EU or providing services to it. Five groups are affected:

  • AI Developers & Providers: Must comply with regulations if their systems pose a high risk or require transparency.
  • High-Risk AI Users: When AI is deployed in sensitive areas such as hiring, banking/credit, healthcare/hospitals, or government/public services, strict oversight and risk management requirements must be met.
  • Distributors & Importers: Must ensure that the systems comply with EU requirements before sale.
  • Non-EU Companies: Must comply with the regulations if their AI solutions impact EU users.
  • Consumers: Most people use AI in some form as consumers; while not directly regulated, the Act aims to protect their rights and safety.

Opportunities and Challenges

While it introduces new obligations that must be met, the EU’s AI Act also offers a number of advantages for businesses:

  • By establishing clear standards, it strengthens consumer trust and enhances credibility with customers, investors, and partners.
  • It also helps mitigate legal and reputational risks that could arise if users were to suffer harm as a result of using AI.
  • It provides companies with clear guidelines for developing and operating AI systems within the EU.
  • The EU AI Act motivates companies to establish clearer rules for dealing with AI and to better control their processes. This improves decision-making within the company and the reliability of AI systems. It also makes it easier for companies to adapt to other existing or future AI regulations.

However, while the AI Act reinforces safeguards and governance, it may also pose some challenges for businesses, especially smaller firms and start-ups:

  • High-risk systems face added costs from compliance tasks like risk assessments and documentation, potentially delaying development and market entry.
  • Meanwhile, companies in regions with looser regulations may innovate faster and undercut EU competitors.
  • Different enforcement measures across EU member states could also lead to uncertainty.

Getting Started with AI Act Compliance: 4 Key Steps

The AI Act is complex, but companies can initially focus on four key areas.

  1. Classify your AI systems according to the Act: identify high-risk systems and set up ongoing monitoring.
  2. Ensure transparency by maintaining strong oversight of how data is sourced, managed, and used to train AI, and clearly explain to users what the system can and can't do.
  3. Train your team in AI governance and put a dedicated compliance team in place to manage adherence to the rules.
  4. Consider adopting technologies that make compliance easier, such as tools for anonymizing data, detecting bias, monitoring performance, and encrypting sensitive information.
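As an example of step 4, direct identifiers can be pseudonymized before data is used for AI training or monitoring. A minimal sketch using keyed hashing, assuming the secret key is managed outside the code (note: under the GDPR, keyed hashing is pseudonymization, not full anonymization, since the key holder can re-link records):

```python
import hashlib
import hmac

# Assumption: in practice this key would come from a secrets manager, not source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The mapping is stable, so records can still be joined and monitored,
    but it cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com", "ticket": "Printer jam"}

# Hash the identifying fields, keep the operational data intact.
safe_record = {k: pseudonymize(v) if k in {"name", "email"} else v
               for k, v in record.items()}
```

The same idea extends to the other tooling mentioned above: bias detection and performance monitoring can then run on the pseudonymized records without exposing personal data.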

Konica Minolta’s approach to Ethical and Compliant AI

Konica Minolta is committed to ethical, transparent AI that complies with the EU AI Act while fostering innovation. We carry out risk assessments that are reviewed by our AI Ethics Committee and support responsible development and operation through training for developers and key stakeholders, supported by dedicated teams like our AI Task Force, Data Protection division, and internal compliance units. We use technologies such as anonymization and encryption to comply with the requirements of the EU AI Act.

Beyond compliance, our goal is to lead in responsible AI. By embedding these principles into every stage of development, we not only ensure regulatory alignment but deliver added value to customers through safe, trustworthy, and cutting-edge AI solutions.

The following Konica Minolta AI-powered solutions already comply with the provisions of the AI Act:

  • Workplace Pure: A cloud-based platform that combines numerous document management services. These include an AI-based translation service that can be used either via a browser or via the control panel of a bizhub multifunction printer (MFP). This allows documents to be translated quickly and easily from almost any source language into almost any target language – anytime, anywhere. It is hosted in EU-based Open Telekom Cloud data centers to ensure full GDPR compliance and uses advanced HTTPS encryption to protect information in transit.
  • VSS (Video Solution Services): Konica Minolta's Video Solution Services (VSS) integrate advanced imaging technologies with AI-powered analytics to improve safety, efficiency, and operational control across various industries. Examples of applications include monitoring production lines, ensuring building security, and improving healthcare environments. Features such as facial recognition or behavioral analytics are processed securely and in line with data protection regulations like the GDPR and the AI Act.
  • FORXAI Mirror: The FORXAI Mirror uses AI-powered technology to increase workplace safety. It secures access to hazardous production areas and reduces the risk of contamination by ensuring that employees only enter the production area wearing the personal protective equipment (PPE) required by the company. It detects deviations in PPE in real time. The solution can be integrated into existing access control systems while strictly adhering to EU AI Act principles – prioritizing privacy, transparency, risk assessment, and explainability through encryption and rigorous AI model testing.

AI – A blessing or a curse? From Promise to Practice: How AI Can Truly Deliver

You can learn more about AI’s real-world impact and opportunities in this eBook.
