
Artificial Intelligence and the Near Future of Human Life: Health and Beyond


Soft circuits bloom in gentle hue,  
Where hope meets logic, bold, yet true,  
The heart of progress beats in you.

Abstract


Artificial Intelligence, AI, is rapidly emerging as a transformative force across multiple sectors of human life. In healthcare, AI systems are revolutionising diagnostics, treatment personalisation, and public health surveillance. Beyond medicine, AI is reshaping education, employment, governance, and social equity. This article critically examines the near-future implications of AI, drawing on recent academic literature to explore both its promises and perils. Through a multidisciplinary lens, it is argued that while AI offers unprecedented opportunities to enhance human wellbeing, it also demands robust ethical oversight and inclusive governance to mitigate risks and ensure equitable outcomes.


1. Introduction


The evolution of AI from symbolic logic systems to deep learning architectures has catalysed a paradigm shift in how machines interact with human environments. AI technologies now permeate everyday life, influencing decisions in healthcare, finance, education, and governance. As AI systems become more autonomous and capable of learning from vast datasets, their potential to augment, or even replace, human decision-making grows. This rapid integration raises critical questions about the ethical, social, and existential dimensions of AI. Understanding AI’s trajectory is essential not only for technologists but also for policymakers, ethicists, and public health professionals who must navigate its complex implications. The urgency is emphasised by the pace of innovation and the scale of deployment, which often exceeds regulatory frameworks and public understanding.


AI is increasingly embedded in daily life, moving swiftly from laboratory research into practical applications. For instance, the US Food and Drug Administration, FDA, approved 223 AI-enabled medical devices in 2023, a substantial increase from just six in 2015. Similarly, self-driving car programmes such as Tesla, Waymo, and Baidu Apollo Go show that autonomous driving is no longer theoretical, with Waymo providing over 150,000 driverless rides every week. This widespread adoption is driven by significant financial investment. In 2024, US private AI investment reached $109.1 billion, far exceeding that of China and the UK, and global funding for generative AI soared to $33.9 billion, an 18.7% increase from 2023.


The accelerated business usage of AI is also notable, with 78% of organisations reporting AI use in 2024, up from 55% in the previous year. The adoption of generative AI in business functions more than doubled, from 33% in 2023 to 71% in 2024. This rapid integration is not merely about efficiency; it is also delivering tangible benefits. Research confirms that AI boosts productivity and, in many cases, helps to narrow skill gaps across the workforce. The widespread and growing adoption of AI across various sectors highlights its profound and versatile impact on human life, necessitating a comprehensive examination of both its opportunities and the challenges it presents.


2. AI in Healthcare


2.1 Diagnostics and Imaging


AI has demonstrated remarkable capabilities in medical diagnostics, particularly in image-based analysis. Deep learning models, such as convolutional neural networks, have achieved expert-level performance in detecting conditions like diabetic retinopathy and classifying skin lesions [Gulshan et al., 2016, Esteva et al., 2017]. These systems reduce diagnostic errors and improve early detection, especially in resource-constrained settings. Their scalability and speed offer significant advantages over traditional diagnostic methods, and AI-driven imaging tools are increasingly integrated into clinical workflows, enabling real-time decision support and enhancing the accuracy of radiological assessments.
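
To make the mechanism concrete, the sketch below shows the general shape of a convolutional image classifier of the kind cited above: stacked convolution and pooling layers that learn visual features from a retinal photograph, followed by a small classification head. The architecture, class count, and input size are illustrative assumptions in PyTorch, not the published Gulshan et al. or Esteva et al. models.

```python
# Minimal sketch of a convolutional classifier for retinal images.
# Architecture and class count are illustrative assumptions only.
import torch
import torch.nn as nn

class RetinaClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):  # e.g. five retinopathy grades (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # collapse spatial dimensions
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)       # (batch, 64) feature vector
        return self.classifier(h)

model = RetinaClassifier()
logits = model(torch.randn(1, 3, 224, 224))   # one 224x224 RGB fundus image
probabilities = logits.softmax(dim=-1)        # per-grade probabilities
```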


Latest developments from 2023 to 2025 highlight the evolving landscape of AI in diagnostics. A systematic review and meta-analysis of generative AI models for diagnostic tasks, published up to June 2024, revealed an overall diagnostic accuracy of 52.1%. While this indicates promising capabilities, the analysis found no significant performance difference between generative AI models and non-expert physicians. However, generative AI models overall performed significantly worse than expert physicians, with a 15.8% lower accuracy. This suggests that while AI can enhance the capabilities of less experienced clinicians or provide preliminary diagnoses, human expert oversight remains crucial for complex cases. The performance varied across specialties, with superior results observed in Dermatology, which aligns with AI’s strengths in visual pattern recognition.  


Beyond general diagnostics, AI is being applied to highly specific and critical areas. Researchers are using AI to predict tumour stemness, a key indicator of cancer aggressiveness and recurrence risk, by analysing genetic and molecular tumour data. Portuguese start-up MedTiles is transforming medical diagnostics through an advanced AI platform that analyses medical scans to identify conditions faster, focusing on dermatology, radiology, and pathology, with plans for expansion across European hospitals. Similarly, AI solutions are showing potential in improving early detection and outcomes for cardiac events by detecting subtle patterns from ECG and imaging data, which could reduce fatal heart attack rates through faster intervention.  


A notable development is Mediwhale’s AI-powered platform, Dr Noon, which analyses retinal images to detect heart, kidney, and eye diseases, potentially replacing invasive diagnostics such as blood tests and CT scans. This non-invasive approach provides full-body health insights from simple eye scans and has been deployed in hospitals across Dubai, Italy, and Malaysia, securing regulatory approvals in eight regions, including the EU, Britain, and Australia. The ability to predict conditions like stroke and heart disease years before symptoms manifest represents a significant shift towards preventative healthcare, enabling physicians to make more informed decisions about early interventions.  


Within the scope of advanced diagnostic tools, Microsoft has introduced the MAI-DxO LLM diagnostic tool, achieving 80% diagnostic accuracy, four times the 20% average of generalist physicians. When configured for maximum accuracy, MAI-DxO achieves 85.5% accuracy, and it also reduces diagnostic costs significantly compared to both physicians and off-the-shelf LLMs. This system, which simulates a panel of physicians, proposes differential diagnoses, and strategically selects high-value tests, demonstrates how AI systems, when guided to think iteratively and act judiciously, can advance both diagnostic precision and cost-effectiveness in clinical care. Diagnostics.ai has also introduced a fully transparent machine learning platform for real-time PCR diagnostics, boasting over 99.9% interpretation accuracy and providing clinicians with clarity and traceability in decision-making, unlike traditional 'black-box' models. This transparency is crucial for building trust and accountability in AI-assisted healthcare.
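
The orchestration pattern described for MAI-DxO, iteratively refining a differential diagnosis while weighing the cost of each additional test, can be sketched as a simple control loop. The code below is a conceptual illustration only; `query_llm_panel`, the state fields, and the thresholds are hypothetical placeholders rather than Microsoft's implementation.

```python
# Conceptual sketch of an iterative, cost-aware diagnostic loop.
# All names and thresholds are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class DiagnosticState:
    findings: list[str]                                            # evidence gathered so far
    differential: dict[str, float] = field(default_factory=dict)  # diagnosis -> probability
    spend: float = 0.0                                             # cumulative cost of ordered tests

def query_llm_panel(state: DiagnosticState) -> tuple[dict[str, float], str, float]:
    """Hypothetical call to a simulated panel of LLM 'physicians'.
    Returns an updated differential, the next most informative test, and its cost."""
    raise NotImplementedError  # stands in for a real model call

def run_diagnostic_loop(initial_findings: list[str],
                        budget: float = 1000.0,
                        confidence_threshold: float = 0.9) -> str:
    state = DiagnosticState(findings=list(initial_findings))
    while state.spend < budget:
        differential, next_test, cost = query_llm_panel(state)
        state.differential = differential
        best_dx, best_p = max(differential.items(), key=lambda kv: kv[1])
        if best_p >= confidence_threshold:
            return best_dx                          # confident enough: stop ordering tests
        state.findings.append(f"result of {next_test}")  # a real system would fetch the result
        state.spend += cost                         # judicious test selection keeps this low
    return max(state.differential, key=state.differential.get)
```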


The trends in AI in healthcare publications in 2024 further illustrate this shift. The total number of publications continued to increase, with 28,180 articles identified, of which 1,693 were classified as 'mature'. For the first time, Large Language Models, LLMs, emerged as the most prominent AI model type in healthcare research, with 479 publications, surpassing traditional deep learning models. While image data remains the dominant data type used in mature publications, the use of text data has substantially increased, a rise directly attributed to the increased research involving LLMs. This indicates a broadening of AI's utility beyond traditional image-based diagnostics into areas that require language comprehension and generation, such as healthcare education and administrative tasks. The continued leadership of imaging in mature articles, alongside the rapid growth in LLM research, points to a maturing field that is both deepening its traditional strengths and expanding into new, text-heavy applications.  


2.2 Personalised Medicine


The integration of AI with genomic and clinical data enables precision medicine tailored to individual patients. Topol (2019) emphasises that AI can synthesise complex datasets to recommend personalised treatment plans, thereby improving therapeutic efficacy and minimising adverse effects. This shift from generalised protocols to individualised care marks a fundamental transformation in clinical practice, as AI algorithms can identify subtle patterns in patient data that may elude human clinicians, leading to more targeted interventions and better health outcomes.


Emerging innovations from 2023 to 2025 highlight AI's expanding influence in personalised medicine, ushering in a new era where treatments are tailored, predictive, and deeply responsive to individual needs. AI is increasingly used for customising treatments based on patient decision profiles, supporting cognitive research, and enhancing mental health diagnostics with explainable AI, which allows for greater understanding of how AI arrives at its recommendations. AI-powered digital therapeutics are also transforming neurocare, particularly for Parkinson's disease. For example, an AI imaging approach has shown promise in identifying Parkinson's disease earlier than current methods, distinguishing patients with Parkinson's from those with other closely related diseases with 96% sensitivity and from multiple system atrophy, MSA, or progressive supranuclear palsy, PSP, with 98% sensitivity. This approach also predicted post-mortem neuropathology in approximately 94% of autopsy cases, significantly outperforming clinical diagnosis confirmed in only 81.6% of cases. This capability could substantially shorten the time to a conclusive diagnosis, improving patient counselling and access to appropriate care, especially given the limited access to specialists.  


Another significant development is the validation of an AI model, AlloView, for predicting kidney transplant rejection, KTR, risk. This model demonstrated significantly higher scores in acute cellular rejection, ACR, and acute antibody-mediated rejection, AMR, groups compared to the no rejection group, highlighting its utility in discriminating individual rejection risk and potentially guiding biopsy decisions. Such predictive models, which can process and analyse large datasets from patients, including clinical, molecular, and pathological information, offer a more detailed understanding of complex biological processes like graft rejection. Furthermore, Tempus has unveiled Olivia, an AI Assistant specifically designed for Precision Oncology Workflows, indicating the specialisation of AI tools within personalised medicine.  
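
As an illustration of how such individual rejection-risk scores are typically produced, the sketch below trains a standard gradient-boosting classifier on tabular patient features and outputs a per-patient probability. The synthetic data, feature meanings, and model choice are assumptions for demonstration; this is not the AlloView model.

```python
# Illustrative risk-scoring sketch on synthetic tabular data (not AlloView).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Stand-in data: rows are transplant recipients; columns play the role of features
# such as donor-specific antibody level, creatinine trend, or HLA mismatch.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk_scores = model.predict_proba(X_test)[:, 1]           # individual rejection-risk score
print("AUC:", roc_auc_score(y_test, risk_scores))
# A score above a clinically validated threshold could flag a patient
# for closer monitoring or biopsy, as described in the text.
```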


Despite these encouraging findings, the integration of AI into personalised laboratory medicine faces several challenges that need to be addressed for widespread clinical adoption. Methodological heterogeneity and publication bias remain significant concerns in studies validating AI diagnostic accuracy. The quality of input data, including high-resolution and well-annotated datasets, is a fundamental determinant of AI model performance, and inconsistencies in data resolution or labelling can degrade accuracy.  


Future directions for AI in personalised medicine emphasise the need for standardised evaluation frameworks, transparency, and the development of Explainable AI, XAI, systems. XAI is particularly crucial for enhancing clinician trust and supporting shared decision-making, as it allows healthcare professionals to understand and, if necessary, challenge AI recommendations. Promoting open science practices, such as publicly sharing datasets, code, and model outputs, can accelerate innovation and collaboration within the field. It is also imperative to identify and mitigate biases embedded in training data and algorithms to ensure equitable healthcare delivery across diverse populations. Establishing clear clinical validation protocols and benchmarking standards will be essential to support the safe and effective deployment of AI technologies in laboratory medicine. Challenges related to integrating AI into existing clinical workflows, ensuring external validation, achieving regulatory compliance, and addressing resource constraints in healthcare settings must also be overcome. This includes providing specialised training for healthcare professionals to effectively adopt and integrate these technologies into clinical practice. The trajectory of AI in personalised medicine is towards highly specific and proactive interventions, but its responsible and equitable implementation depends on rigorous validation, transparent development, and continuous adaptation to clinical needs and ethical considerations.  
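
A minimal example of the explainability practices recommended here is shown below: permutation importance applied to a fitted clinical risk classifier, so that clinicians can see which inputs drive its predictions. The data, feature names, and model are invented for illustration and stand in for whichever XAI method a real deployment might use.

```python
# Minimal, self-contained explainability sketch using permutation importance.
# Feature names and data are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["age", "blood_pressure", "hba1c", "creatinine", "bmi"]  # assumed
X = rng.normal(size=(400, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 3] > 0).astype(int)   # outcome driven mostly by hba1c and creatinine

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{name:16s} {score:+.3f}")
# Surfacing which inputs drive a prediction lets clinicians sanity-check,
# and if necessary challenge, the model's recommendation.
```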


2.3 Mental Health and Public Health Surveillance


AI applications in mental health include chatbots and sentiment analysis tools that provide scalable support for psychological wellbeing [Castillo, 2024]. These tools offer anonymity, accessibility, and affordability, making mental health care more inclusive. The latest developments from 2023 to 2025 demonstrate AI's growing capabilities in this domain. AI systems are now analysing data such as speech patterns or online activity to identify signs of depression or anxiety with up to 90% accuracy, as shown in a 2023 Nature Medicine study.  


Specific AI tools are making a tangible impact. Limbic Access, a UK-based AI chatbot, screens for disorders like depression and anxiety with 93% accuracy, significantly reducing clinician time per referral. Kintsugi, an American tool, detects vocal biomarkers in speech to identify depression and anxiety, helping to address diagnostic gaps in primary care. Woebot, a chatbot based on Cognitive Behavioural Therapy, CBT, has shown significant symptom reduction in trials through text analysis. For predictive analysis, Vanderbilt University’s suicide prediction model uses hospital data to predict suicide risk with 80% accuracy. Ellipsis Health utilises vocal biomarkers in speech to flag mental health risks with 90% accuracy by assessing tone and word choice.
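
The screening tools above generally rest on classifiers over linguistic or acoustic features. The sketch below shows that idea at its simplest: a TF-IDF plus logistic-regression pipeline that scores short texts for possible depression risk. The toy corpus and labels are invented; real systems are trained and validated on large clinical datasets, and their scores prompt human follow-up rather than constituting a diagnosis.

```python
# Toy text-based screening sketch; corpus and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't sleep and nothing feels worth doing any more",
    "Feeling flat and exhausted every single day",
    "Had a great weekend hiking with friends",
    "Looking forward to starting the new project tomorrow",
]
labels = [1, 1, 0, 0]   # 1 = elevated risk (toy labels)

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
screener.fit(texts, labels)

new_message = "lately I just feel empty and tired all the time"
risk = screener.predict_proba([new_message])[0, 1]
print(f"screening score: {risk:.2f}")   # a high score triggers human follow-up, not a diagnosis
```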


Beyond diagnostic and predictive tools, several AI-driven mental health platforms and wearables have received FDA clearances or approvals. The Happy Ring by Feel Therapeutics, cleared in 2024, is a clinical-grade smart ring that monitors various health metrics and integrates personalised machine learning and generative AI to provide actionable health insights. Rejoyn, approved in 2024, is a prescription-only digital therapeutic smartphone app for treating major depressive disorder, MDD, in adults, delivering CBT through interactive tasks. EndeavorRx, approved in 2020, is the first FDA-approved video game designed to treat Attention Deficit Hyperactivity Disorder, ADHD, in children. NightWare, cleared in 2020, uses an Apple Watch to monitor and intervene in PTSD-related nightmares, and Prism for PTSD, cleared in 2024, is the first self-neuromodulation device for PTSD as an adjunct to standard care.  


A comprehensive scoping review, synthesising findings from 36 empirical studies published through January 2024, found that AI technologies in mental health were predominantly used for support, monitoring, and self-management purposes, rather than as standalone treatments. Reported benefits included reduced wait times, increased engagement, improved symptom tracking, enhanced diagnostic accuracy, personalised treatment, and greater efficiency in clinical workflows. This suggests that AI is largely perceived as a supporter of human clinicians, augmenting their capabilities rather than replacing them, which is crucial for maintaining the human element in mental healthcare.  


In public health, AI models have been used to predict disease outbreaks and monitor epidemiological trends, as demonstrated during the COVID-19 pandemic [Morgenstern et al., 2021]. These tools enhance the responsiveness of health systems and support data-driven interventions, facilitating real-time analysis of social media and mobility data for early detection of public health threats. A systematic review on AI in Early Warning Systems, EWS, for infectious diseases highlights the prevalent use of machine learning, deep learning, and natural language processing, which often integrate diverse data sources such as epidemiological, web, climate, and wastewater data. The major benefits identified were earlier outbreak detection and improved prediction accuracy.  


A significant breakthrough in this area is a new AI tool, PandemicLLM, which for the first time uses large language modelling to predict infectious disease spread. This tool, developed by researchers at Johns Hopkins and Duke universities with federal support, outperforms existing state-of-the-art forecasting methods, particularly when outbreaks are in flux. Unlike traditional models that treat prediction merely as a mathematical problem, PandemicLLM reasons over it, considering inputs such as recent infection spikes, new variants, mask mandates, and genomic surveillance data. This ability to integrate new types of real-time information and adapt to changing conditions fills a critical gap identified during the COVID-19 pandemic, where traditional models struggled when new variants emerged or policies changed. The model can accurately predict disease patterns and hospitalisation trends one to three weeks out, and with the necessary data, it can be adapted for any infectious disease. The substantial increase in LLM and text data use in healthcare research in 2024 further highlights the potential for AI applications in public health, moving beyond traditional data types to employ complex textual information for enhanced surveillance and response. The breakthroughs in both mental health and public health surveillance demonstrate AI's capacity to provide scalable, accessible, and personalised care, while also enhancing global preparedness for health crises.
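
To illustrate how heterogeneous surveillance signals can be combined for a language-model forecaster of the kind described, the sketch below serialises case counts, variant, policy, and wastewater data into a single textual prompt. This is a conceptual sketch, not the PandemicLLM implementation; `forecast_with_llm` and the snapshot fields are hypothetical placeholders.

```python
# Conceptual sketch: serialising surveillance data into a forecasting prompt.
# `forecast_with_llm` is a hypothetical placeholder, not a real API.
from dataclasses import dataclass

@dataclass
class SurveillanceSnapshot:
    region: str
    weekly_cases: list[int]          # oldest to newest
    dominant_variant: str
    policy_changes: list[str]        # e.g. mask mandates introduced or lifted
    wastewater_trend: str            # "rising", "flat", "falling"

def build_prompt(s: SurveillanceSnapshot, horizon_weeks: int = 3) -> str:
    return (
        f"Region: {s.region}\n"
        f"Weekly case counts (oldest to newest): {s.weekly_cases}\n"
        f"Dominant variant: {s.dominant_variant}\n"
        f"Recent policy changes: {'; '.join(s.policy_changes) or 'none'}\n"
        f"Wastewater signal: {s.wastewater_trend}\n"
        f"Task: forecast the hospitalisation trend for the next {horizon_weeks} weeks "
        f"and explain the main drivers."
    )

def forecast_with_llm(prompt: str) -> str:
    raise NotImplementedError   # placeholder for an actual language-model call

snapshot = SurveillanceSnapshot(
    region="Example County",
    weekly_cases=[120, 180, 260, 410],
    dominant_variant="hypothetical variant X",
    policy_changes=["indoor mask guidance reinstated"],
    wastewater_trend="rising",
)
print(build_prompt(snapshot))
```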


2.4 Risks and Ethical Concerns in Healthcare


Despite its benefits, AI in healthcare raises significant ethical concerns. Issues of data privacy, algorithmic bias, and the dehumanisation of care are increasingly prominent. Federspiel et al. (2023) warn that AI may exacerbate health disparities if not carefully regulated. Moreover, the potential for AI to manipulate health-related decisions reinforces the need for transparent and accountable systems. The lack of explainability in many AI models poses challenges for clinical trust and legal accountability, necessitating the development of interpretable algorithms and robust validation protocols.


A deeper examination of ethical considerations from 2023 to 2025 reveals several key areas of concern. Algorithmic bias is a pervasive issue, as AI systems often reflect and perpetuate existing health disparities due to biased training data. This can manifest in models requiring patients of colour to present with more severe symptoms than white patients for equivalent diagnoses or treatments, as seen in cardiac surgery or kidney transplantation. Examples include Optum's healthcare risk prediction algorithm systematically disadvantaging Black patients because it was trained on healthcare spending rather than healthcare needs, and IBM Watson for Oncology providing unsafe recommendations due to biased training data. Facial recognition software has also shown less accuracy in identifying Black and Asian subjects, raising concerns about biased patient identification. This perpetuation of historical injustices through algorithmic decision-making, such as racial profiling in predictive policing or unequal access to credit, draws attention to the critical social dimension, where AI, if unchecked, can amplify existing inequalities.  


Data privacy and security are paramount, as AI systems require vast amounts of sensitive patient data, including medical histories and genetic information. Ensuring compliance with stringent data protection laws like GDPR and HIPAA is crucial, alongside addressing concerns about the re-identification of anonymised data. The digital divide also presents a significant challenge, as medically vulnerable patients, communities, and local health institutions often lack basic access to high-speed broadband, data, resources, and education, risking being left behind in the AI revolution. This lack of access can exacerbate existing health disparities, creating a two-tiered healthcare system where advanced AI-driven treatments are concentrated in well-funded urban centres.  


Concerns also extend to the potential for AI to dehumanise care and reduce human interaction. Over-reliance on AI may diminish the crucial teacher-student or clinician-patient relationships, impacting social-emotional aspects of learning and care. Patients may still prefer human empathy over AI interactions, particularly in sensitive mental health contexts. Furthermore, the lack of clarity regarding accountability and liability for errors in AI-driven decisions remains a significant legal challenge, as it can be unclear whether developers, healthcare providers, or institutions are responsible when harm occurs. The 'black box' nature of many complex AI models, which hinders understanding of their decision-making processes, further complicates clinical trust and the ability to challenge recommendations. This opacity can lead to over-confidence in AI's capabilities, potentially masking underlying flaws and risks. Failures of AI technologies embedded in health products can also significantly impact patient confidence, undermining the very trust essential for adoption. The increasing autonomy of AI systems also introduces complexities in obtaining truly informed consent and raises significant ethical and legal concerns, particularly in sensitive areas like end-of-life care.


To mitigate these profound ethical and legal challenges, a multi-faceted approach is essential. Strategies include ensuring inclusive and diverse datasets for training models, which is critical for improving accuracy and fairness across all patient populations. Collaborative design and deployment of AI, involving partnerships with intended communities and developers who understand the subtleties of impacted groups, are vital. Prioritising accessibility by investing in high-speed broadband, energy, and data infrastructure for underserved communities is also crucial. Accelerating AI literacy and awareness by integrating AI education into healthcare training and public health messaging can empower both professionals and the public.  


A strong emphasis on explainability and transparency is necessary, requiring developers to share AI benefits, technical constraints, and explicit or implicit deficits in the training data. This can be supported by promoting AI governance scorecards, conducting listening sessions, and empowering community engagement. Robust ethical and legal frameworks are needed to guide AI adoption, addressing informed consent, data privacy, algorithmic transparency, patient autonomy, and ensuring human oversight remains a central principle of patient care. Regular algorithm audits and fairness-aware design, incorporating fairness explicitly into algorithm design, are critical to identify and address potential biases. Continuous monitoring and feedback loops are also essential for ongoing assessment of patient outcomes across demographic groups, allowing for the identification and adjustment of emerging biases. Finally, public engagement is critical for building trust through educational initiatives, open dialogue, and community involvement in decision-making, ensuring that public concerns about AI ethics, privacy, and accountability are addressed. The careful calibration of risks and mitigation strategies emphasises that developing and deploying AI in healthcare responsibly is not just a technical challenge; it is a societal mandate requiring ongoing vigilance and adaptability.
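
As a concrete example of the algorithm audits recommended above, the sketch below compares false-negative rates of a deliberately biased synthetic model across two demographic groups. The data and group labels are placeholders; the point is the audit pattern, not any particular system.

```python
# Minimal fairness-audit sketch on synthetic predictions.
import numpy as np

rng = np.random.default_rng(2)
groups = rng.choice(["group_a", "group_b"], size=1000)
y_true = rng.integers(0, 2, size=1000)

# Pretend model: deliberately worse at catching positives in group_b,
# so that the audit has something to surface.
y_pred = y_true.copy()
miss = (groups == "group_b") & (y_true == 1) & (rng.random(1000) < 0.3)
y_pred[miss] = 0

for g in ["group_a", "group_b"]:
    mask = (groups == g) & (y_true == 1)
    fnr = np.mean(y_pred[mask] == 0)          # false-negative rate within the group
    print(f"{g}: false-negative rate = {fnr:.2f}")
# A large gap between groups is a signal to revisit training data,
# features, or decision thresholds before deployment.
```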


3. AI’s Broader Impact on Human Life


3.1 Education


AI is transforming education through intelligent tutoring systems that adapt to individual learning styles. These systems enhance engagement and retention, particularly for students with diverse needs. AI also supports inclusive education by providing real-time translation and accessibility features, thereby democratising learning. Virtual classrooms powered by AI can personalise content delivery, assess student performance, and offer feedback tailored to cognitive and emotional profiles.


Recent research indicates a significant shift in attitudes towards AI in education. A 2024 study found increasingly positive attitudes among students, teachers, and parents towards AI tools like ChatGPT, a notable change from the uncertainty prevalent in early 2023. Nearly 50% of teachers now report using ChatGPT at least weekly in their teaching practices, citing "learning faster and more" as the top advantage, alongside increased student engagement, easier teaching, and a boost in creativity. While student use of generative AI tools, with 27% reporting regular use in 2023, still far exceeds that of instructors, at 9%, the potential for AI to inspire creativity, offer multiple perspectives, summarise existing materials, and generate or reinforce lesson plans is becoming increasingly recognised. Furthermore, AI can systematise administrative tasks such as grading, scheduling, and communication with parents, freeing teachers to focus on their core pedagogical responsibilities and build more meaningful relationships with students.


However, the rapid adoption of AI in education is not without its challenges and concerns. A significant gap exists between AI adoption and the development of supporting policies and training. Over 50% of teachers report that their schools do not have a formal policy regarding AI use in schoolwork, and many desire training but have not received it, with 56% expressing this need. This lack of clear guidelines and professional development leaves many educators navigating new technologies without adequate support.  


Privacy and security concerns are also prominent, with worries about how personal data is collected, used, stored, and protected from leaks. The potential for bias in AI algorithms is another critical issue. Studies have shown significant bias in generative pre-trained transformers, GPT, against non-native English speakers, with over half of their writing samples misclassified as AI-generated, while accuracy for native English speakers was nearly perfect. This occurs because AI detectors are often programmed to recognise language that is more literary and complex as more 'human', potentially leading to unjust accusations of plagiarism for non-native speakers.  


Other concerns include the potential for reduced human interaction, as over-reliance on AI might diminish teacher-student relationships and impact the social-emotional aspects of learning. High implementation costs also pose a barrier, with simple generative AI systems costing around £25 per month, but larger adaptive learning systems potentially running into tens of thousands of pounds. Issues of academic misconduct, particularly plagiarism, and the inherent unpredictability and potential for inaccurate information from AI tools, further complicate their integration. The transformative potential of AI in education is clear, offering personalised learning experiences and administrative efficiencies. However, realising these benefits equitably and responsibly requires overcoming significant hurdles related to policy, training, bias mitigation, data privacy, and ensuring that AI complements, rather than diminishes, essential human interaction in the learning process.  


3.2 Employment and Economic Shifts


The automation of routine tasks by AI threatens traditional employment structures, but it also creates new opportunities in fields such as AI governance, ethics, and engineering. Trammell and Korinek (2023) argue that AI could redefine economic growth models, necessitating policy innovation to manage labour displacement and income inequality. The rise of gig-based AI labour markets and algorithmic management systems introduces new dynamics in worker autonomy and job security, underscoring the need for governments to anticipate these shifts and invest in reskilling programmes, social safety nets, and inclusive innovation strategies.


Recent research from 2023 to 2025 provides a nuanced picture of AI's employment and economic impact. PwC's research indicates that productivity growth has nearly quadrupled in industries most exposed to AI, rising from 7% to 27% between 2018 and 2024. Workers with AI skills are commanding a substantial 56% wage premium, a figure that doubled from the previous year. Contrary to some expectations of widespread job destruction, PwC's data shows job numbers rising in virtually every type of AI-exposed occupation, even those highly automatable. This suggests that AI is currently more of an augmentative force than a destructive one in terms of overall job numbers.  


However, other reports highlight significant shifts and concerns. McKinsey Global Institute estimates that 40% of all working hours will be supported or augmented by language-based AI by 2025, and up to 30% of current hours worked could be automated by 2030, requiring 12 million occupational transitions in the United States. Deloitte's 2024 research reveals that over 60% of workers use AI at work, while nearly half worry about job displacement. Similarly, Accenture found that 95% of workers see value in working with generative AI, though approximately 60% are concerned about job loss. The World Economic Forum's Future of Jobs Report 2025 predicts that 41% of employers worldwide intend to reduce their workforce due to AI, but technology is also projected to create 11 million jobs and displace 9 million globally, with 85 million roles potentially displaced but 97 million new roles emerging by 2030. The International Monetary Fund, IMF, indicates that nearly 40% of jobs worldwide will be affected by AI, with advanced economies seeing 60% of jobs influenced, suggesting a dual impact where approximately half face negative consequences while others may experience enhanced productivity. Stanford's AI Index 2025 Report reinforces that AI boosts productivity and, in most cases, helps narrow skill gaps across the workforce, with additional research suggesting AI is directed at high-skilled tasks and may reduce wage inequality.  


The adoption of AI chatbots has become widespread, with surveys from late 2023 and 2024 showing most employers encouraging their use, many deploying in-house models, and training initiatives becoming common. Firm-led investments are boosting adoption, narrowing demographic gaps in take-up, enhancing workplace utility, and creating new job tasks. However, modest productivity gains, averaging 3% time savings, combined with weak wage pass-through, help explain these limited labour market effects observed so far, challenging narratives of imminent, radical labour market transformation due to generative AI.  


The overall pace of AI adoption is accelerating rapidly, jumping from 5.4% of firms using AI in 2018 to 38.3% in 2024, with a further 21 percentage point increase in just the past year, reaching 59.1% in May 2025. Generative AI drove much of this growth, increasing its share from 20% in April 2024 to 36% in May 2025. While productivity gains are cited as the top benefit, worker replacement is rare. Dallas Fed research suggests a limited negative impact on employment, with only 16% of firms reporting that generative AI changed the type of workers needed, shifting towards more highly skilled labour and fewer mid- and low-skilled workers, rather than reducing headcount. This indicates that AI is more likely to reshape job roles and skill requirements than to cause mass unemployment, particularly in the near term. The complex interplay of productivity gains, skill shifts, and varying adoption rates suggests that the economic impact of AI will be multifaceted, necessitating proactive policy responses to manage workforce transitions and ensure equitable opportunities.  


3.3 Social Equity and Bias


AI systems often reflect the biases embedded in their training data, posing a significant risk of discriminatory outcomes in healthcare and public services [Faerron Guzmán, 2024]. Addressing these biases requires inclusive datasets, participatory design, and rigorous ethical oversight to ensure that AI serves all communities equitably. The perpetuation of historical injustices through algorithmic decision-making, such as racial profiling in predictive policing or unequal access to credit, underscores the critical need for fairness audits and algorithmic transparency.


Recent research from 2023 to 2025 provides alarming evidence of these biases, particularly in generative AI. A UNESCO study on Large Language Models, LLMs, including GPT-3.5, GPT-2, and Llama 2, revealed regressive gender stereotypes and homophobic, as well as racial, bias. The study found richer narratives in stories about men, who were assigned more diverse, high-status jobs like engineer, teacher, and doctor, while women were frequently relegated to traditionally undervalued or socially stigmatised roles such as "domestic servant", "cook", and "prostitute". Stories generated by Llama 2 about boys and men were dominated by words like "treasure", "woods", "sea", and "adventurous", whereas stories about women frequently used words such as "garden", "love", "felt," "gentle", "hair", and "husband". Women were described as working in domestic roles four times more often than men by one model, and were frequently associated with words like "home", "family", and "children", while male names were linked to "business", "executive", "salary", and "career".  


The study also highlighted negative content about gay people, with 70% of Llama 2-generated content and 60% of GPT-2 content prompted by 'a gay person is...' being negative, including phrases like 'The gay person was regarded as the lowest in the social hierarchy'. High levels of cultural bias were observed when LLMs generated texts about different ethnicities; for example, Zulu men were more likely to be assigned occupations like "gardener" and "security guard", and 20% of texts on Zulu women assigned them roles as "domestic servants", "cooks", and "housekeepers", contrasting with the varied occupations assigned to British men. This unequivocal evidence of bias in LLMs is particularly concerning because these new AI applications have the power to subtly shape the perceptions of millions of people, meaning even small gender biases can significantly amplify inequalities in the real world.
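
Audits like the UNESCO study typically quantify such skews by counting occupational and descriptive terms across large samples of model completions. The sketch below shows that counting step in miniature; the example completions and word list are invented placeholders, not the study's actual data.

```python
# Miniature occupational-association audit over (invented) model completions.
from collections import Counter

completions = {
    "man":   ["he worked as an engineer and later became a doctor",
              "he was a teacher who loved adventure"],
    "woman": ["she worked as a cook and looked after the home",
              "she was a domestic servant for many years"],
}
occupations = ["engineer", "doctor", "teacher", "cook", "domestic servant", "nurse"]

for subject, texts in completions.items():
    counts = Counter()
    for text in texts:
        for job in occupations:
            if job in text:
                counts[job] += 1
    print(subject, dict(counts))
# Systematic skews in these counts across thousands of real completions are the
# kind of evidence reported in the UNESCO study cited above.
```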


AI systems trained on biased data may unintentionally reinforce systemic discrimination and social inequality. There is currently limited empirical data on how AI and automation affect different socio-economic groups in nuanced ways, with studies often focusing on technological performance rather than social outcomes. A lack of interdisciplinary research integrating perspectives from social sciences, education, and public policy hinders a comprehensive assessment of AI's societal impact. Policy discussions around AI tend to prioritise innovation and economic growth over equity and inclusion, and despite some frameworks highlighting fairness and accountability, the lack of enforceable guidelines and inclusive participation means equity concerns are often overlooked. This indicates a wide gap between ethical ideals and implementation practices. Furthermore, there is minimal research focused on educational interventions that prepare citizens, especially underserved populations, to critically engage with AI technologies, which is crucial for building an equitable AI-driven society.  


A survey highlighted job displacement, at 68%, and bias in AI systems, at 55%, as the most prominent concerns among participants. Notably, only 25% of respondents reported meaningful inclusion of equity-focused policies in AI deployment, suggesting a substantial gap in governance. Participants from low-income communities particularly emphasised the lack of access to AI education and tools, limiting their ability to adapt to technological shifts. This disparity in perception and experience across social strata underscores that while some benefit from AI's efficiency gains, others face marginalisation and reduced economic stability. The implications are clear: the pervasive issue of bias in AI systems, particularly generative AI, poses a significant threat to social equity. Addressing these biases requires not only technical solutions like inclusive datasets and fairness audits, but also a fundamental shift towards participatory design, robust governance with enforceable guidelines, and widespread AI literacy, especially for vulnerable populations, to ensure AI serves as a tool for justice rather than further marginalisation.  


3.4 Governance and Global Policy


The global nature of AI development calls for coordinated governance frameworks. Grace et al. (2024) advocate for a Global AI Treaty to regulate the deployment of AI technologies and prevent misuse. Without such frameworks, AI could destabilise democratic institutions and amplify authoritarian control. International cooperation is essential to establish norms around data sovereignty, algorithmic accountability, and ethical AI deployment, with multi-stakeholder engagement, including civil society, academia, and industry, being critical to crafting inclusive and enforceable policies.


Recent developments from 2023 to 2025 illustrate a rapidly evolving landscape in AI governance. In the United States, while Tortoise Media’s June 2023 Global AI Index ranked the US first in AI implementation, innovation, and investment, it placed the country eighth in government strategy, highlighting a lag in policy compared to technological advancement. However, efforts are underway to address this. The White House’s Office of Management and Budget released a policy in March 2024 on Advancing Governance, Innovation, and Risk Management for Agency Use of AI, directing federal agencies to manage risks, particularly those affecting public rights and safety. Similarly, the US Department of the Treasury released a report in March 2024 on Managing AI-Specific Risks in the Financial Services Sector.  


A more comprehensive approach was outlined in the White House’s "Winning the AI Race: America's AI Action Plan" in July 2025. This plan aims to accelerate domestic AI development, modernise critical infrastructure, foster innovation, drive economic growth, and counter geopolitical threats, particularly from China. Structured around three core pillars, "Accelerating Innovation", "Building AI Infrastructure", and "Leading Globally", it includes initiatives to promote open-source AI, streamline permitting for data centres, modernise the legal system for synthetic media, and strengthen export controls and biosecurity measures. The plan emphasises developing AI systems that are transparent, reliable, and aligned with national priorities, supporting the creation of evaluation tools, testing infrastructure, interpretability research, and standards. It also encourages collaboration among government, industry, and academia, promoting shared infrastructure, pilot programmes, and regulatory sandboxes, while including initiatives for education, training, and workforce transitions. Measures to mitigate national security risks, strengthen export controls on critical AI-enabling technologies, and promote US leadership in international AI standards are also outlined.  


Globally, the Oxford Insights Government AI Readiness Index 2024, which assesses 188 countries, indicates a resurgence in national AI strategies, with 12 new strategies published or announced in 2024, triple the number seen in 2023. Notably, more than half of these strategies come from lower-middle-income and low-income countries, demonstrating growing momentum among economies that have historically lagged in AI governance. Examples include Ethiopia, which became the second low-income country to release a strategy after Rwanda in 2023, and lower-middle-income economies such as Ghana, Nigeria, Sri Lanka, Uzbekistan, and Zambia, which formalised their AI visions. This development highlights the increasing recognition of AI as a driver of national development and suggests that international cooperation and knowledge-sharing have played a role in supporting this momentum. Middle-income economies are actively closing the AI readiness gap by focusing on fundamental aspects such as developing national AI strategies, adopting AI ethics principles, and strengthening data governance.  


The intensification of global cooperation on AI governance in 2024, with organisations including the OECD, EU, UN, and African Union releasing frameworks focused on transparency and trustworthiness, further underscores this trend. Organisations themselves are also adapting, redesigning workflows, elevating governance, and mitigating more risks related to generative AI. While 27% of organisations report reviewing all generative AI content, a similar share reviews 20% or less, indicating varied approaches to oversight. Nevertheless, many organisations are ramping up efforts to mitigate generative AI-related risks, including inaccuracy, cybersecurity, and intellectual property infringement. The evolving landscape of AI governance reflects a clear global recognition of the need for coordinated frameworks. While leading nations are prioritising innovation and national security, there is a growing global movement towards formalising AI strategies and addressing ethical principles. This indicates a maturing approach to responsible AI deployment, but the disparities in AI readiness and varied oversight approaches highlight the ongoing challenge of achieving harmonised, inclusive, and enforceable global policies that can keep pace with technological advancement and ensure equitable outcomes worldwide.  


4. Future Directions and Recommendations


To harness AI’s potential responsibly, interdisciplinary collaboration is essential. Policymakers, technologists, ethicists, and public health experts must co-create governance models that prioritise transparency, accountability, and human well-being. Investment in explainable AI, equitable access, and ethical education will be critical to ensuring that AI enhances, rather than undermines, human life. Moreover, global cooperation is needed to address the transnational risks posed by AI and to promote inclusive innovation. Research should focus on developing AI systems that are not only technically robust but also socially aligned, culturally sensitive, and environmentally sustainable.


Several key future directions emerge from the current trajectory of AI development and its societal impact. Firstly, regulatory frameworks must exhibit adaptive regulation, remaining agile and responsive to the rapid evolution of AI. This will involve periodic reviews, the establishment of collaborative regulatory bodies, and flexibility in AI validation and certification processes to ensure that policies can keep pace with technological advancements.  


Secondly, international cooperation is critical for establishing unified regulatory frameworks, facilitating secure cross-border data sharing, and ensuring equitable access to AI technologies globally. Given the borderless nature of AI development and deployment, fragmented national regulations can hinder progress and exacerbate disparities. Harmonised global standards are essential for consistent safety, efficacy, and ethical oversight.  


Thirdly, building and maintaining public trust and engagement is paramount. This can be achieved through comprehensive educational initiatives, fostering open dialogue, and actively involving communities in decision-making processes related to AI. Addressing public concerns about AI ethics, privacy, its decision-making power, and accountability for errors is crucial for widespread acceptance and responsible adoption.  


A continued focus on human-centred AI is also vital, ensuring that AI systems augment, rather than replace, human judgment and empathy. This is particularly important in sensitive areas such as mental health and end-of-life care, where the human element of compassion and nuanced understanding is irreplaceable. The goal should be to empower human professionals with AI tools, not to cede autonomous decision-making in critical human domains.  


Addressing the persistent digital divide requires continued investment in essential infrastructure, including high-speed broadband and energy, especially for underserved communities. Alongside this, robust AI literacy programmes are needed to equip all populations with the understanding and skills necessary to navigate an AI-driven world, ensuring that the benefits of AI are broadly accessible and do not create new forms of inequality.  


Furthermore, the development of standardised evaluation and benchmarking protocols is essential for ensuring the safety, efficacy, and fairness of AI models across diverse populations and clinical settings. This will provide a consistent basis for assessing AI performance and identifying potential biases. Promoting open science practices, such as publicly sharing datasets, code, and model outputs, can accelerate innovation and collaboration within the AI research community, provided that ethical data governance frameworks are rigorously applied.  


Finally, greater interdisciplinary research, integrating perspectives from social sciences, ethics, and public policy, is necessary to comprehensively assess AI's societal impact and inform robust policy development. This holistic approach will ensure that technological advancements are aligned with broader societal values and goals. Coupled with this, continued investment in workforce adaptation, including reskilling and upskilling programmes, is crucial to prepare the labour force for evolving job roles and to mitigate potential inequalities arising from AI-driven economic shifts. By focusing on these interconnected future directions, society can proactively shape AI's development to amplify human dignity, equity, and resilience.  


5. Conclusion


Artificial Intelligence stands at the threshold of redefining human life. Its applications in healthcare promise more accurate diagnostics, personalised treatments, and scalable mental health support, fundamentally transforming how medical care is delivered. In education, employment, and governance, AI offers powerful tools for efficiency, personalisation, and strategic foresight, with the potential to enhance learning experiences, reshape labour markets, and inform policy-making.


Yet, these profound benefits are shadowed by significant ethical dilemmas, systemic biases, and the potential for existential risks. The pervasive issue of algorithmic bias, often embedded in training data, threatens to perpetuate and even amplify existing societal inequalities, particularly impacting vulnerable communities. Concerns over data privacy, the potential dehumanisation of care, and the complexities of accountability in AI-driven decisions underscore the critical need for robust oversight. The digital divide further risks leaving medically underserved populations behind, exacerbating health and social disparities.


The future of AI is not merely a technological question; it is fundamentally a human one. To ensure that AI serves as a force for good, society must embed ethical principles, inclusive governance, and interdisciplinary collaboration at the heart of its development and deployment. This requires a proactive approach to adaptive regulation, fostering international cooperation for harmonised standards, and building public trust through transparent engagement and education. Continuous investment in explainable AI, diverse datasets, and workforce adaptation programmes is essential to mitigate risks and ensure equitable access to AI's benefits. Only by prioritising human dignity, equity, and resilience in the design and implementation of AI can a future be shaped where this transformative technology truly amplifies human potential and well-being for all.


6. References


Ahmed, H., Ahmed, H., & Hugo, J. W. L. (2019). Artificial intelligence for global health. Science, 366(6468), 955–956.


Balaji, N., Bharadwaj, A., Apotheker, K., & Moore, M. (2024). Consumers Know More About AI Than Business Leaders Think. Boston Consulting Group.


Bennett Institute for Public Policy. (2024). Generative AI in Low-Resourced Contexts: Considerations for Innovators and Policymakers. University of Cambridge.


Castillo, F. A. (2024). Generative AI in public health: pathways to well-being and positive health outcome. Journal of Public Health, 46(4), e739–e740.


Esteva, A., Kuprel, B., Novoa, R. A., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.


Faerron Guzmán, C. A. (2024). Global health in the age of AI: Safeguarding humanity through collaboration and action. PLOS Global Public Health, 4(1), e0002778.


Federspiel, F., Mitchell, R., Asokan, A., et al. (2023). Threats by artificial intelligence to human health and human existence. BMJ Global Health, 8(5), e010435.


Grace, K., Stewart, H., Sandkühler, J. F., et al. (2024). Thousands of AI Authors on the Future of AI. arXiv preprint, arXiv:2401.02843.


Gulshan, V., Peng, L., Coram, M., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22), 2402–2410.


Kermany, D. S., Goldbaum, M., Cai, W., et al. (2018). Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell, 172(5), 1122–1131.


Omohundro, S. (2008). The Basic AI Drives. Self-Aware Systems.


Park, J., Wei, J., Wang, X., et al. (2023). Emergent Abilities of Large Language Models. Stanford University.


Rawas, S. (2024). AI: the future of humanity. Springer.


Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.


Trammell, P., & Korinek, A. (2023). AI and the Future of Economic Growth. National Bureau of Economic Research.


Villalobos, J. (2023). Forecasting AI Progress. AI Impacts.


Wang, F., & Preininger, A. (2019). AI in Health: State of the Art, Challenges, and Future Directions. Yearbook of Medical Informatics, 28(1), 16–26.


Xie, Y., Zhai, Y., & Lu, G. (2024). Evolution of artificial intelligence in healthcare: a 30-year bibliometric study. Frontiers in Medicine, 11, 1505692.


World Health Organization. (2024). Meet S.A.R.A.H.: A Smart AI Resource Assistant for Health. WHO Campaigns.

 
 
 
