AI, Data, and Trust: Navigating the Future of Health as the New Currency

Author: J. Dennaoui

Who Owns Your Health Data? Building Trust in a World of AI and Genetic Data

In a world increasingly driven by artificial intelligence (AI) and big data, the future of health data management and genetic data handling has become a crucial concern. AI can now decode entire genomes, predict diseases before they appear, and design personalized treatments. However, with our sensitive health data and genetic information becoming the new currency, the pressing question remains: whom can we trust with this data?

Recent events, like the resignation of all of 23andMe's independent directors [1] and the company's controversial move to go private, remind us how fragile that trust can be. With vast amounts of sensitive health information at stake, these moments force us to ask critical questions: Who owns our data? How is it being used? And most importantly, how do we protect ourselves in a world where health is not only personal but a commodity?

This article explores the ethical dilemmas, regulatory gaps, and necessary protections to ensure our data, and by extension our health, remains safe in an AI-driven future.

The Fragility of Trust in Genetic Data: The Case of 23andMe

Companies across sectors, from genetic testing providers to wellness platforms, collect enormous amounts of personal health information. One prominent example is 23andMe, a pioneer in direct-to-consumer genetic testing. While initially hailed as a breakthrough in giving individuals access to their genetic information, 23andMe has recently raised concerns. The CEO's proposal to take the company private and the resignation of its independent directors have fuelled scepticism about what will happen to the vast genetic data it holds. Will data privacy protections remain intact, or will the company shift towards more opaque practices?

This scenario highlights the fragility of consumer trust when dealing with sensitive data. Healthcare data alone is projected to grow to 10,800 exabytes by 2025 [2]. Companies often make bold promises, from longevity predictions based on genetic data to AI-driven health insights, but as the 23andMe case shows, such claims frequently over-promise and under-deliver. Companies must align their messaging with realistic outcomes to avoid undermining public trust.

Beyond these high-profile cases, data misuse, unauthorized sharing, or the sale of personal information remains a real risk. According to a 2023 Pew Research study, 79% of Americans feel they have little to no control over the data companies collect about them [3]. Moreover, 83% of people believe that protecting customer data is critical to building trust in a company [4]. Transparency in data usage, user consent, and clear ethical boundaries are essential to retaining trust.

Beyond individual cases like 23andMe, the broader implications of AI and data-driven healthcare need to be addressed, particularly as technological advancements present both opportunities and risks.

The Benefits and Pitfalls of AI in Healthcare Data Usage

The potential benefits of AI and data in healthcare are undeniable. AI algorithms can detect disease patterns, help personalize treatment, and even save lives by predicting outcomes before symptoms appear. For example, a study in Nature Medicine showed that AI can outperform human doctors in detecting breast cancer, achieving 94.5% accuracy compared to 88% for human radiologists [5]. But with these advancements come risks. The pitfalls of data use range from privacy violations and discrimination to the commodification of personal health data without adequate safeguards.
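
To make "detecting disease patterns" concrete, here is a minimal sketch, assuming Python with scikit-learn installed. It is not the model from the cited study, and its accuracy is not comparable to the figures above; it simply trains a basic classifier on scikit-learn's bundled breast-cancer dataset and reports held-out accuracy.

```python
# Minimal sketch of AI-style disease-pattern detection (illustrative only,
# not the model from the cited Nature Medicine study).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 569 tumour samples, 30 numeric features, benign/malignant labels
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Scale the features, then fit a logistic-regression classifier
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```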

Additionally, the over-hyped promises around AI-driven solutions, such as claims of unprecedented lifespan extensions, blur the lines between responsible health insights and marketing gimmicks. The gap between what’s promised and what’s scientifically validated undermines public trust and highlights the need for clear ethical standards.

Building Trust in Healthcare Data: The Ethical Responsibility of Data Use

Ethics must be at the forefront of any health-related data use. Companies handling sensitive health information have a responsibility to act with integrity, ensuring they don’t exploit data for profit without considering the potential harm to individuals. In the race to monetize health data, ethical considerations are often overlooked—especially when companies over-promise outcomes like extended life or miracle cures.

We also need to ensure that the vast amounts of data being gathered are not used against humans, whether directly or indirectly—such as through the development of superintelligence that could make decisions harmful to individuals or society at large. Building trust means being transparent not just in how data is used, but in the realistic benefits of these technologies. Overselling the capabilities of AI and data without acknowledging their limitations risks damaging the credibility of the entire field.

Gaps in GDPR and the Regulation of Health and Genetic Data

The General Data Protection Regulation (GDPR), implemented by the European Union in 2018, is one of the most comprehensive data privacy laws in the world. It sets strict rules for how personal data is collected, stored, and used, requiring organizations to obtain clear consent and protect individuals’ data. However, as technology evolves, several gaps in GDPR’s protections, especially concerning emerging technologies, have surfaced, leaving room for improvement.

While GDPR provides a solid foundation for data protection, it falls short in key areas like anonymized data and emerging technologies. Strengthening these regulations is critical as AI, IoT, and blockchain increasingly redefine data usage. Here are key areas that require urgent attention:

Key Areas for Improvement in GDPR’s Healthcare Data Protections

To better protect individuals from the misuse of their genetic and personal data, GDPR must close several gaps:

  1. Non-personal data concerns: GDPR does not adequately cover anonymized or non-personal data, which can often be re-identified by linking it with other datasets using AI or machine learning (see the linkage sketch after this list).
  2. Slow adaptation to emerging technologies: GDPR struggles to address modern technologies like AI, blockchain, or the Internet of Things (IoT), leaving privacy challenges unregulated.
  3. Consent fatigue: The explosion of consent banners and pop-ups under GDPR has led to “consent fatigue,” where users accept terms without fully understanding their implications.
  4. International data transfers: GDPR's rules for cross-border data transfers remain unsettled; the invalidation of the EU-U.S. Privacy Shield created uncertainty for businesses and consumers alike.
  5. Limited deterrence for big tech: Despite large fines, the penalties are often not significant enough to deter large tech companies from misusing data, as fines can become just another cost of doing business.
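
To make the re-identification risk in point 1 concrete, here is a minimal sketch of a linkage attack, assuming Python; all records, names, and field choices are invented for illustration. Joining an "anonymized" health table to a public directory on a handful of quasi-identifiers can recover identities, echoing Latanya Sweeney's well-known demonstration that ZIP code, birth date, and sex suffice to identify most individuals.

```python
# Minimal sketch of a linkage attack on "anonymized" health records.
# All data below is invented for illustration.
anonymized_health = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "BRCA1 carrier"},
    {"zip": "10001", "birth_year": 1990, "sex": "M", "diagnosis": "type 2 diabetes"},
]
public_directory = [  # e.g. a voter roll or scraped social-media profiles
    {"name": "Alice Smith", "zip": "02139", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Jones",   "zip": "10001", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(record, directory):
    """Return directory entries matching the record on every quasi-identifier."""
    return [p for p in directory
            if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]

for rec in anonymized_health:
    for person in link(rec, public_directory):
        # The "anonymous" diagnosis is now tied back to a named individual.
        print(f"{person['name']} -> {rec['diagnosis']}")
```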

These issues suggest that even countries with GDPR protection face challenges in fully safeguarding human data—leaving other countries without similar regulations in an even more precarious position.

Strengthening GDPR: New Regulations for Health and Genetic Data in the AI Era

  1. Specific regulation for emerging technologies: Clear guidelines are needed on how AI, IoT, and blockchain must handle personal and genetic data. This would include transparency in AI decision-making processes and encryption of sensitive data to prevent misuse or re-identification of health data.
  2. Mandatory cybersecurity and encryption standards: Genetic data should be explicitly subject to mandatory encryption and additional cybersecurity standards. Regular audits and specific industry standards (e.g., encryption algorithms) could be enforced to ensure compliance and reduce the risk of breaches.
  3. Stricter and more user-friendly consent requirements: Implement streamlined, purpose-specific consent processes for genetic data, allowing individuals to opt in or out of each specific use, including third-party sharing (a minimal sketch of this idea follows this list). This approach reduces consent fatigue and ensures individuals fully understand how their data will be used.
  4. Stronger penalties for large companies: Increased fines or more stringent penalties for repeat offenders, particularly tech giants that are serial violators, will act as a deterrent. Penalties should be scalable, with potential criminal liability for executives responsible for major violations.
  5. Ban on genetic data use for insurance and employment: Enact a global ban on the use of genetic data for determining insurance eligibility, premiums, or employment decisions to prevent discrimination based on genetic information.
  6. Creation of a global genetic data oversight body: Establish an independent regulatory authority dedicated to genetic health data, responsible for auditing data usage, investigating breaches, and enforcing penalties for misuse of genetic data. This body would ensure compliance across jurisdictions.
  7. Right to data deletion: Extend the right to data deletion to genetic data, giving individuals the ability to permanently delete their genetic information from databases and backups upon request, limiting the potential for long-term misuse.
  8. Global data-sharing standards: Establish international genetic data-sharing agreements that set uniform standards for the handling and protection of genetic data across borders, preventing exploitation in countries with weaker privacy protections.
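
To ground points 2 and 3 above, here is a minimal sketch, assuming Python with the third-party cryptography package; the ConsentLedger class, function names, and purpose labels are hypothetical illustrations, not a compliance implementation. A genetic record is stored encrypted with AES-256-GCM and decrypted only when the individual has opted in to that specific use.

```python
# Minimal sketch: genetic data encrypted at rest, released only against
# purpose-specific, opt-in consent. Hypothetical names throughout.
import os
from dataclasses import dataclass, field
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

@dataclass
class ConsentLedger:
    """Tracks opt-in consent per (user, purpose) pair."""
    grants: set = field(default_factory=set)

    def grant(self, user_id: str, purpose: str) -> None:
        self.grants.add((user_id, purpose))

    def revoke(self, user_id: str, purpose: str) -> None:
        self.grants.discard((user_id, purpose))

    def allows(self, user_id: str, purpose: str) -> bool:
        return (user_id, purpose) in self.grants

def encrypt_record(key: bytes, plaintext: bytes, user_id: str):
    """Encrypt a genetic record with AES-256-GCM, bound to the user ID."""
    nonce = os.urandom(12)  # must be unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, user_id.encode())
    return nonce, ciphertext

def read_record(key, nonce, ciphertext, user_id, purpose, ledger):
    """Decrypt only if the user has opted in to this specific purpose."""
    if not ledger.allows(user_id, purpose):
        raise PermissionError(f"No consent from {user_id} for '{purpose}'")
    return AESGCM(key).decrypt(nonce, ciphertext, user_id.encode())

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    ledger = ConsentLedger()
    ledger.grant("user-42", "ancestry-report")  # opted in to one use only
    nonce, ct = encrypt_record(key, b"rs53576: AA", "user-42")
    print(read_record(key, nonce, ct, "user-42", "ancestry-report", ledger))
    # Requesting purpose "third-party-research" would raise PermissionError.
```

A design note: because only ciphertext needs to be retained, destroying the key renders every copy of the data, including backups, unrecoverable. This technique, often called crypto-shredding, is one practical route to the deletion right in point 7.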

Global Policies for Health Data Protection: The Need for Universal Laws

As we move into this data-driven future, global policies and universal legal protections must be established to safeguard individuals from exploitation. A legal framework should govern the collection, storage, and use of health data, ensuring it cannot be misused or sold without explicit user consent. Regulatory bodies must hold companies accountable, regardless of where they operate, to ensure consumer protection worldwide. GDPR in Europe and HIPAA in the U.S. are steps in the right direction, but comparable protections must apply universally.

Health is the New Currency: Protecting Personal and Genetic Data in the Digital Era

In this evolving landscape, one thing is clear: health data is becoming the new currency. The digital health market is expected to grow to over $660 billion by 2025 [6]. Individuals are realizing the value of their personal health information, not just in receiving personalized care, but in its broader use in determining their access to services, insurance rates, and even financial incentives. Protecting this valuable asset is paramount.

Conclusion: A Future Rooted in Trust, Ethics, and Accountability

AI and data are transforming healthcare, but the real question isn’t just about technological progress—it’s about trust. As health becomes the new currency, we must build a future where data is used ethically and responsibly, ensuring transparency and accountability are always at the forefront. Protecting individuals and their health data must be our top priority.

As we enter a future shaped by AI and data, regulators and companies alike must prioritize ethical practices. Only by fostering transparency and accountability can we ensure that health data, the new currency, is truly safeguarded for all.
