Innovation, Agility and Resilience - DNA of the fittest!

Strategy, AI, Digital transformation, Operational Resilience, Cyber Security, Process automation, Risk management and Compliance are your focus domains for value creation. AI-supported disruption and geopolitical uncertainty are the new reality. Agile organizations see all of this as opportunity. People, Processes and Planet are changing at a faster pace than at any time before. Sustainability, Artificial Intelligence and new business models are shaping the future. Without efficient utilization of "Digital", most businesses are at risk. A quick fix, a systematic transformation or an independent sparring partner to your CX team - your call. We provide tailored Advisory Services for your Sustainable Growth.

Artificial Intelligence

End User Computing (EUC) evolution in the AI era – Risk and Controls

End-user computing (EUC) refers to systems in which non-programmers can create working applications.[1] EUC is a group of approaches to computing that aim to better integrate end users into the computing environment. These approaches attempt to realize the potential for high-end computing to perform problem-solving in a trustworthy manner.[2][3] (Source: Wikipedia).

EUC has been an important topic since around 2010. It later also appeared as a subject of regulatory focus (e.g. EBA/GL/2017/05 – Guidelines on ICT Risk Assessment under SREP; EBA/GL/2019/04 – Guidelines on ICT and security risk management). EUC has remained a buzzword until today; however, the topic has matured and is no longer separately emphasised or regulated.

It is expected that “A financial institution’s processes for acquisition and development of ICT systems should also apply to ICT systems developed or managed by the business function’s end users outside the ICT organisation (e.g. end user computing applications) using a risk-based approach. The financial institution should maintain a register of these applications that support critical business functions or processes.” Source: EBA/GL/2019/04
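To make the quoted expectation concrete, the register required by EBA/GL/2019/04 can start as a simple structured inventory. The sketch below is a minimal, illustrative data model; the field names are assumptions, not prescribed by the guideline.

```python
# Minimal, illustrative sketch of an EUC application register entry.
# Field names are assumptions, not prescribed by EBA/GL/2019/04.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EUCApplication:
    name: str                        # e.g. "Liquidity forecast workbook"
    owner: str                       # accountable business-function owner
    business_function: str           # function or process it supports
    supports_critical_process: bool  # in scope of the EBA register requirement
    data_sources: list = field(default_factory=list)
    writes_to_corporate_db: bool = False
    uses_ai_features: bool = False   # e.g. embedded copilots or AI-generated logic
    last_reviewed: date = None

# Example register with one hypothetical entry
register = [
    EUCApplication(
        name="Liquidity forecast workbook",
        owner="Treasury",
        business_function="Liquidity risk reporting",
        supports_critical_process=True,
        data_sources=["core banking extract"],
        writes_to_corporate_db=False,
        uses_ai_features=True,
        last_reviewed=date(2025, 1, 15),
    )
]
```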

EUC is disappearing as a buzz word.

What is the status of EUC and AI consumption by end users in organisations today, and how will this domain evolve in the future?

End user applications and ICT devices in 2025 offer many functionalities that allow users to process large amounts of data locally. Users can create complex rules and automations, embed AI capabilities and even write data back to corporate databases if allowed. Many decisions, reports and vital data sets are analysed, processed and distributed by end users – “developed or managed by the business function’s end users outside the ICT organisation” – across a lifecycle (plan, develop, test, use, update, retire) that the corporate ICT risk management framework does not “see” or have visibility into.

By adding new AI capabilities to existing tools and introducing new AI-supported architectures and concepts, managing risk and retaining control must not only be adjusted on a regular basis but embedded in every change and use case.

Some of the risks related to the “consumption” of ICT capabilities by the business function’s end users outside the ICT organisation (formerly EUC) are listed below; a sketch of how these checks could be run against a register entry follows the list:

  • Wrong version – Was the version of the EUC tool approved?
  • Unauthorised change of parameters and logic – Was the change authorised and tested?
  • Lack of ownership and visibility – Was ownership assigned and linked to business functions, processes and other information assets to allow visibility, transparency and risk management (GRC)?
  • Unauthorised data manipulation (extraction, storage)
  • Unauthorised data change (writing back to corporate databases)
  • Unauthorised access (to the EUC tool and its data)
  • Unreliable availability of the EUC tool or its product (not included in redundancy or resilience programs)
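As an illustration only, the questions above can be turned into a simple checklist that is run against each register entry. The record fields and checks below are assumptions for the sketch, not a prescribed control set.

```python
# Illustrative checklist derived from the risk questions above.
# The record fields used here are assumptions for the sketch.
def euc_risk_findings(entry: dict) -> list:
    """Return open risk findings for one EUC register entry."""
    findings = []
    if not entry.get("version_approved"):
        findings.append("Wrong version: the EUC tool version was not approved.")
    if not entry.get("changes_authorised_and_tested"):
        findings.append("Unauthorised change: parameter/logic changes were not authorised and tested.")
    if not entry.get("owner"):
        findings.append("Lack of ownership: no owner linked to business functions and processes.")
    if entry.get("writes_to_corporate_db") and not entry.get("write_access_approved"):
        findings.append("Unauthorised data change: writes back to a corporate database without approval.")
    if not entry.get("access_controls_reviewed"):
        findings.append("Access control: access to the EUC tool and its data has not been reviewed.")
    if entry.get("supports_critical_process") and not entry.get("in_resilience_program"):
        findings.append("Availability: critical EUC tool is not covered by redundancy or resilience programs.")
    return findings

# Example run against a hypothetical register entry
entry = {
    "owner": "Treasury",
    "version_approved": True,
    "changes_authorised_and_tested": False,
    "writes_to_corporate_db": True,
    "write_access_approved": False,
    "access_controls_reviewed": True,
    "supports_critical_process": True,
    "in_resilience_program": False,
}
for finding in euc_risk_findings(entry):
    print(finding)
```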

(Image source: AI generated – Copilot)

What are the key controls to address the consumption of EUC and AI capabilities?

While there are many suitable controls for specific risks and use cases, I have outlined a few steps that will help raise the maturity of any organization dealing with the risks of end user computing and related technologies:

  • Understand where, how and what is done = an inventory of tools, use cases and supported functions
  • Evaluate the existing risks
  • Educate users on expected and forbidden practices
  • Help users remediate existing high-risk end user computing practices
  • Implement tools to automatically discover high-risk practices (see the sketch after this list)
  • Enhance data governance, risk and compliance programs
  • Enhance the capabilities of ROCs (Risk Operations Centres) and SOCs (Security Operations Centres)
  • Coach leaders and employees on modern ICT capabilities and risks from the end user perspective
  • Build awareness and a culture that supports the responsible and secure use of ICT capabilities by end users.
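As an illustration of the auto-discovery step, the sketch below walks a shared drive and flags files that typically indicate high-risk end user computing, such as macro-enabled workbooks or local databases. The paths, extensions and size threshold are assumptions, not a definitive implementation; dedicated EUC discovery tools go much further.

```python
# Minimal sketch: scan a shared drive for files that often indicate
# high-risk end user computing (macro-enabled workbooks, local databases).
# Paths, extensions and thresholds are illustrative assumptions.
import os
from datetime import datetime, timezone

HIGH_RISK_EXTENSIONS = {".xlsm", ".xlsb", ".accdb", ".mdb"}  # macros / local databases
REVIEW_EXTENSIONS = {".xlsx", ".csv"}                        # review if unusually large
SIZE_THRESHOLD_BYTES = 10 * 1024 * 1024                      # 10 MB

def scan_share(root: str) -> list:
    """Return candidate EUC artefacts found under *root* for inventory follow-up."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower()
            try:
                info = os.stat(path)
            except OSError:
                continue  # skip unreadable files
            if ext in HIGH_RISK_EXTENSIONS:
                reason = "macro-enabled or local database file"
            elif ext in REVIEW_EXTENSIONS and info.st_size > SIZE_THRESHOLD_BYTES:
                reason = "large data file outside managed systems"
            else:
                continue
            findings.append({
                "path": path,
                "reason": reason,
                "size_bytes": info.st_size,
                "last_modified": datetime.fromtimestamp(info.st_mtime, tz=timezone.utc).isoformat(),
            })
    return findings

if __name__ == "__main__":
    for item in scan_share(r"\\fileserver\finance"):  # hypothetical file share
        print(item["reason"], "->", item["path"])
```

The output of such a scan feeds directly into the inventory and risk evaluation steps above.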

AI Act implications

The Artificial Intelligence Act, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, is the first EU-level framework to lay down harmonised rules on the use of AI systems, prohibitions on certain AI practices, specific requirements for high-risk AI systems and obligations for operators of such systems.

Regulation (EU) 2024/1689 establishes a comprehensive legal framework for the development, use and regulation of artificial intelligence (AI) systems in the EU, thereby introducing specific requirements for industries and the public sector, including the Slovenian market. The regulation aims to ensure that AI systems comply with EU fundamental rights and principles such as health protection, security, democracy and the rule of law, and to manage the risks posed by AI. In the following, I summarize the key impacts, milestones and effects of the regulation on industries and the public sector in Slovenia.

Impact on individual industries

  1. Technology sector (specifically AI developers):
    • The regulation sets strict requirements for high-risk AI systems that are often used in the technology sector, such as facial recognition, analytics and decision automation systems. The developers of these systems will have to ensure compliance with requirements covering transparency, traceability, data security and respect for fundamental rights.
    • Each high-risk AI system will have to pass a conformity assessment before being made available on the EU market, which will affect the time and cost of developing and implementing new technologies.
  2. Financial sector:
    • The use of AI for financial analysis, credit risk assessment and advisory services will have to follow strict regulatory guidelines on accountability and transparency.
    • Automated lending decision and risk analysis systems will also need to be transparent and non-discriminatory, which means additional costs to maintain compliance.
  3. Manufacturing sector (industrial automation):
    • Companies using AI to automate and optimize production will need to ensure that their systems are designed according to safety standards.
    • A risk assessment for AI will be required where automation involves dangerous tasks or critical infrastructure. Compliance costs may increase, especially for smaller companies.
  4. Healthcare:
    • The healthcare sector, which relies on AI for diagnostic tools, health risk prediction and treatment recommendations, will need to ensure that these systems are tested and verified for accuracy, reliability and compliance with personal data protection regulations.
    • Additional certification and traceability steps will be prescribed for the use of AI in healthcare, which may increase the development and implementation time of new solutions.
  5. Transport and logistics (including autonomous vehicles):
    • The regulation introduces requirements for the use of AI in transport, especially for autonomous vehicles and drones, where safety, reliability and accountability are key elements.
    • Manufacturers will have to provide certification and demonstrate safety mechanisms before putting vehicles and systems into circulation.

Impact on the public sector

The public sector will have to use AI responsibly, especially for tasks that affect the fundamental rights of citizens. AI systems used by public authorities will thus be subject to stricter verification and certification for compliance.

  1. Use of AI for public services:
    • Public organizations will need certified AI systems for tasks such as security surveillance technology, social services, and automated decision-making in the allocation of social assistance or other benefits.
    • To prevent biased or unfair decision-making, authorities will have to ensure that AI is designed in a transparent and non-discriminatory manner, which may prolong the implementation of such systems in the public sector.
  2. Data collection and transparency:
    • Public institutions will have to strictly comply with personal data protection requirements when using AI, thus ensuring the trust of citizens.
    • The emphasis is on accountability for all algorithms that affect access to public services or other rights, which means stricter control procedures and security standards for all AI infrastructure.

Key dates and milestones

  • 2024 – Publication and adoption of the regulation: the regulation was adopted on 13 June 2024 and entered into force following its publication in the Official Journal of the EU.
  • 2025 – First provisions apply: companies and public institutions have a transition period to adapt existing AI systems; the general provisions and the prohibitions on certain AI practices (Chapters I and II) apply from 2 February 2025, with further obligations following from 2 August 2025.
  • 2026 – General application and conformity assessment of high-risk AI systems: the regulation applies in general from 2 August 2026, when conformity assessment of high-risk AI systems becomes mandatory.
  • 2027–2030 – Full implementation and compliance review: the obligations linked to Article 6(1) apply from 2 August 2027, the longest transition periods for existing systems run until the end of 2030, and by then all industries are expected to be compliant, with regular compliance reviews planned.

Specific dates:

Entry into force and application: This Regulation shall enter into force on the twentieth day following its publication in the Official Journal of the European Union. It applies from 2 August 2026.

However:

(a) Chapters I and II shall apply from 2 February 2025;

(b) Chapter III, Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, except for Article 101;

(c) Article 6(1) and the corresponding obligations from this Regulation shall apply from 2 August 2027.

This Regulation is fully binding and directly applicable in all Member States. Done at Brussels, 13 June 2024
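For compliance planning, the staggered application dates quoted above can be kept as simple data. The sketch below is a simplified illustration based only on the dates listed in this section; the provision labels are paraphrased, not official wording.

```python
# Simplified sketch of the staggered application dates of Regulation (EU) 2024/1689,
# based on the dates quoted above; provision labels are paraphrased, not official.
from datetime import date

APPLICATION_DATES = [
    (date(2025, 2, 2), "Chapters I and II (general provisions and prohibited AI practices)"),
    (date(2025, 8, 2), "Chapter III Section 4, Chapters V, VII and XII and Article 78 (except Article 101)"),
    (date(2026, 8, 2), "General application of the Regulation"),
    (date(2027, 8, 2), "Article 6(1) and the corresponding obligations"),
]

def provisions_applying(on: date) -> list:
    """Return the provisions that already apply on the given date."""
    return [label for start, label in APPLICATION_DATES if on >= start]

if __name__ == "__main__":
    today = date(2026, 1, 1)  # example reference date
    for label in provisions_applying(today):
        print(f"Applies as of {today}: {label}")
```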

Regulation (EU) 2024/1689 will require compliance with complex safety and ethical standards, which brings significant financial and organizational challenges for industries and the public sector.