Prelude
After watching the course videos and reading other students' blogs, my blog focuses on experiences of digitalisation in healthcare. It discusses both benefits and risks. The risks are not limited to GDPR compliance or exposure to AI; they extend further and deeper into societal inequities and psychological well-being. Please read my comments on healthcare consumers' mental well-being and adaptation to illness. The field continues to mature.
Introduction
Digital society is profoundly integrated into healthcare. Examples are numerous: electronic health records (see Digi Citizen for risks in healthcare), online consultations with a health provider (telemedicine), connected medical instruments that store and process medical data online, consumer healthcare wearables, machine learning that speeds up the interpretation of scans and pathology reports, and many more.
Benefits of digitalization in healthcare
This transformation creates a connected ecosystem in which data flows seamlessly between consumers (patients), healthcare providers, and cloud-based systems. It empowers individuals with greater access, facilitates mobility by allowing continuous care, and improves health outcomes and the efficiency of care at a time when a global shortage of physicians and allied health professionals is further strained by the ageing of the population.
Risks of healthcare digitalization: beyond privacy
However, alongside these advantages, the transformation carries a significant risk of increasing inequity. Instead of bringing access to vulnerable people, the “digital divide” can prevent full access to healthcare for older adults, low-income groups lacking technological resources, and immigrants with limited language fluency or orientation to the national system. Without awareness, national intervention programs, and hybrid care models, the digital transformation might worsen pre-existing gaps.
GDPR: the important protection it offers is challenging to implement
Compounding these concerns, critical risks related to data protection and cybercrime need to be addressed in healthcare. The General Data Protection Regulation (GDPR) https://gdpr.eu/what-is-gdpr/ in the EU aims to protect citizens and requires explicit consent, data minimization, impact assessments, and rules for data controllers. However, implementing and supervising these safeguards becomes exponentially more difficult as ever more digital tools are used to collect our demographic and health data.
Recent reports describe severe breaches of data safety and GDPR adherence in Finland (https://www.hannessnellman.com/news-and-views/blog/from-gdpr-compliance-failure-to-criminal-offence-where-the-helsinki-court-of-appeal-drew-the-line/, https://www.dataguidance.com/news/finland-ombudsman-fines-aktia-eu865000-security-flaws) and globally.
AI conversations: some unique risks relating to healthcare
When facing a major medical decision, people very often search for information online and, lately, through various AI platforms (ChatGPT, Gemini, Grok, and more). I will demonstrate this through a situation familiar to me professionally. An individual is informed that surgery is recommended to relieve pressure from a benign brain tumor, a meningioma, which is affecting vision. This individual, now relying primarily on an AI system instead of the expertise of treating clinicians to understand and cope with their situation, is at real risk to their mental well-being.
While an AI system can feel comforting because it’s always available, calm, and able to explain things in simple language, it can create a false sense of certainty at a moment when the person is frightened or overwhelmed. If they begin to trust an AI more than their surgeons, they may experience increased anxiety, confusion, or even mistrust toward the medical team. That tension can heighten stress at a time when clarity and support are crucial.
There is also the risk of emotional isolation. If someone turns to an AI instead of engaging with real clinicians, family, or friends, they may feel more alone with their fears. And because the individual can only provide the AI with information they themselves understood, and the AI cannot interpret scans or grasp the nuances of their case, the AI's responses might unintentionally reinforce misunderstandings. That can leave the person feeling torn between conflicting sources of information, which can worsen distress rather than relieve it.
So if you are ever supporting a patient, help them understand the limitations and emotional dangers of relying on AI support. Encouraging non-judgemental, open conversations with their medical team and trusted people in their life can help them feel grounded and less alone in the decision-making process.
Self-Evaluation
While I encounter the risks and problems of digitalization and AI in medicine almost daily, this course has opened my eyes to the vast reach of digitalization across many aspects of life. The danger of deepening social divides is very clear to me when I help patients who are blind, recovering from brain surgery, elderly, or technophobic. This course highlights how extensively digitalization has changed our lives in fields such as banking, online shopping, other kinds of commerce from holiday booking to air travel, and governmental agencies (transportation, passports, social security, and more). The course allowed an open yet asynchronous discussion of these issues through the blogs of other students. I had never designed my own WordPress blog site, and I am very grateful to have been taught this skill! Unfortunately, I remain quite challenged in terms of design, and I applaud my colleagues for their beautiful sites (such as those of Nina Farlin, Gilbert Gueye and Yasir Al-Taie!).
My thoughts on other blogs appear here:
https://blogi.savonia.fi/yasiraltaie2/digi-citizen
