THRIVE AI

White Paper

Thrive AI : An “AI-Human Bridge” for Mental Healthcare

September 2024

Abstract

The development and implementation of an AI-Human Bridge, specifically designed for mental healthcare, aims to address the global shortage of mental health professionals and the high costs associated with traditional therapy. While Large Language Models (LLMs) hold promise, they often produce hallucinations and inaccurate responses, raising serious ethical concerns in sensitive domains like mental health. The presented solution integrates narrow AI models, fine-tuned for precise, context-aware mental health support. Leveraging technologies such as Retrieval-Augmented Generation (RAG), multi-shot prompting, and curated datasets from pre-recorded video therapy sessions, the system ensures safe, reliable care while seamlessly transitioning users to human caregivers when necessary. The AI-Human Bridge will enhance existing mental therapy modalities safely and ethically, mitigating the risks associated with LLM-generated inaccuracies

1. Introduction

A specific application of artificial intelligence (AI), Generative artificial intelligence (GenAI), has revolutionized the field of AI (Schick & Schütze, 2024; Sahoo et al., 2024). GenAI is a type of artificial intelligence that can create new content and ideas, including conversations, stories, images, videos, and music. Two types of GenAI attracted widespread interest in 2022: (1) Diffusion Models–used for image generation (e.g., MidJourney and DALL·E) (Guu et al., 2020; Devlin et al., 2019), and (2) Large Language Models (LLMs)–that can interact with users through a conversational interface–like ChatGPT (Brown et al., 2020; Radford et al., 2019).

ChatGPT and other LLMs have grown rapidly since their release. The chat interface attracted 100 million users in just 2 months (Mya Care Editorial Team, 2024). However, sometimes LLMs have difficulty understanding users’ intentions and provide irrelevant responses (Ouyang et al., 2022; Bender et al., 2021). Other times LLMs “dream” or “hallucinate” an answer (Bender et al., 2021; Tamkin et al., 2022). These shortcomings present a significant problem for any organization or individual that wants to implement an AI application, especially in the case of sensitive problems (Lewis et al., 2021). LLMs are optimized to chat, regardless of whether or not they have anything relevant or valid to say.

LLMs are prone to several types of errors, including reasoning flaws, failure to follow instructions, context inaccuracies, and incorrect use of parameterized knowledge. The amount of errors made by LLMs can vary significantly depending on the task and the complexity of the dataset used for evaluation. (Kamoi et al., 2024).

(Additional information on LLM Hallucinations (Fabric, 2023))

Hallucinations are not merely occasional errors in LLMs, but an inevitable feature of their underlying architecture. These “structural hallucinations” arise from the probabilistic nature of LLMs and their inability to retrieve the correct information or predict when to stop generating output. Even improvements in model architecture, data quality, or fact-checking techniques will not entirely eliminate hallucinations, as these errors are fundamentally baked into the LLM process. Hallucinations, whether subtle inaccuracies or entirely fabricated content, must be managed rather than eradicated due to the inherent limitations of the models​ (Banerjee et al. 2024)

2. Ethics

We acknowledge that conventional language models fall below the required accuracy thresholds for mental health applications, sometimes producing responses that compromise ethical standards (Bender et al., 2021; Tamkin et al., 2022). With an unwavering commitment to safety and security, our AI-Human Bridge will be meticulously trained on hand-curated datasets (Zhang et al., 2023; Gawron & Pejović, 2024). This training aims to: (1) ensure the delivery of helpful, contextually accurate responses, and (2) detect and halt potentially harmful interactions (Brown et al., 2020). Our ethics statement is supported by the following tenets:

  • People Over Product: Prioritizing the well-being and safety of individuals over product rollout (Anthropic, 2024; Roberts et al., 2020).
  • Efficacy: Ensuring that AI products are rigorously tested and validated to ensure they do not compromise the safety and efficacy of human treatment plans (Hu et al., 2021; Thorne et al., 2018).
  • Privacy: Respecting individual privacy by offering anonymity in data collection and strict adherence to HIPAA – only using personal data for training on an opt-in basis  (Gawron & Pejović, 2024; Christiano et al., 2017).
  • Limits on AI: Recognizing the limitations of AI and committing to a continuous improvement strategy (Li & Liang, 2021; Ouyang et al., 2022).

III. Background and Literature Review

Artificial Intelligence (AI) has grown significantly since the label was coined at a Dartmouth conference 68 years ago (McCarthy, 1958). There have been cycles of enthusiasm followed by periods of disappointment, but overall, there has been consistent progress (Christiano et al., 2017; Vaswani et al., 2017). The current era of AI is characterized by advancements in machine learning, natural language processing (NLP), and robotics (Guu et al., 2020).

Google’s Attention is All You Need paper in 2017 introduced the Transformer architecture, which is arguably the key breakthrough in the past 10 years of AI research (Vaswani et al., 2017; Hu et al., 2021). The Transformer architecture enhanced AI’s ability to understand and generate human language (Raffel et al., 2020; Devlin et al., 2019). Five years later, in 2022, the launch of ChatGPT made sophisticated language-based AI widely accessible to the public, dramatically expanding the practical applications of LLMs (Radford et al., 2019; Brown et al., 2020).

LLMs are trained on vast textual datasets – primarily the internet – utilizing advanced neural network architectures (Schick & Schütze, 2024). Training on the internet is both good and bad. Many of the emergent properties of LLMs are credited to the vast amount of data involved in training (Lewis et al., 2021). On the other hand, the internet contains a lot of noise, which can reduce the accuracy and reliability of LLM output (Bender et al., 2021; Thorne et al., 2018).

LLMs do not think; they mimic human language (Ouyang et al., 2022). But since language is an output of human thinking, it is easy for people to anthropomorphize LLMs (Bender et al., 2021). Most of the time, LLMs converse like a reasonable human being (Schick & Schütze, 2024; Sun et al., 2020).

LLMs are effective for tasks such as language translation, summarization, and contextually appropriate conversational responses (Vlachos & Riedel, 2014; Lakshminarayanan et al., 2017). Their versatility has been demonstrated across various domains, showcasing their broad applicability (Brown et al., 2020; Roberts et al., 2020). Many corporations are anxious to use LLMs with proprietary data (Lewis et al., 2021).

However, LLMs have limitations, such as the inability to learn dynamically from interactions or update their knowledge bases in real-time, which restricts their responsiveness to new information (Raffel et al., 2020; Hu et al., 2021). Additionally, biases introduced through training data and the absence of ethical reasoning can lead to inappropriate or harmful outputs, requiring careful oversight to mitigate risks (Bender et al., 2021; Tamkin et al., 2022). Lastly, LLMs pose challenges in explainability, as it is difficult to understand how these models derive their outputs, complicating efforts to address ethical and practical concerns (Doshi-Velez & Kim, 2017; Christiano et al., 2017).

Several companies are actively working to bring mental health care into the 21st century through online and AI-driven solutions using LLMs and other techniques. BetterHelp, for example, utilizes intake forms to match users with therapists, offering flexible options for online chat or video sessions, making therapy more accessible and convenient. Additionally, AI startups like Woebot and Slingshot are exploring the development of fully AI-driven therapists, using natural language processing to engage users in conversations and provide real-time support. These innovations represent the interest and necessity in modern deliveries of mental health care to reach individuals and provide timely assistance..

 

Broad vs. Narrow AI

Large Language Models (LLMs) are trained on broad datasets and therefore often lack the attention to detail necessary for nuanced tasks (Schick & Schütze, 2024; Brown et al., 2020). In other words, they are generalists but not necessarily good experts (Raffel et al., 2020). Narrow AI, on the other hand, uses fine-tuned LLMs, which excel in specific areas by emphasizing high-quality data (Lample & Conneau, 2019; Guu et al., 2020).

Narrow AI mitigates these weaknesses by utilizing focused datasets and delivering predictable responses (Sun et al., 2020). It can be specifically tailored for tasks such as diagnostic algorithms, analyzing data from diverse sources, and real-time monitoring (Lewis et al., 2021). This focused approach allows for more accurate and reliable outcomes (Li & Liang, 2021).

In the realm of mental healthcare, where precision and predictability are essential, the limitations of General AI chatbots become particularly evident. Failures of these general chatbots include:

AI in mental health research faces several challenges that limit its effectiveness and raise concerns. One issue is inadequate data quality, as many studies rely on retrospective data that carry a high risk of bias and lack robust external validation (Mya Care Editorial Team, 2024; Thorne et al., 2018). Privacy concerns also loom large, with the potential for health information to be tracked or misused by third parties, raising significant ethical implications (Gawron & Pejović, 2024; Espejo et al., 2023). Bias and ethical concerns are also prevalent, as AI tends to reinforce existing biases and lacks the ability for self-reflection. This may lead to over-reliance on AI for therapeutic interventions, possibly exacerbating mental health issues rather than alleviating them (Bender et al., 2021; Vlachos & Riedel, 2014). Additionally, AI chatbots are often limited in their clinical efficacy since they primarily rely on text inputs, which are insufficient for making accurate clinical judgments (Mya Care Editorial Team, 2024; Christiano et al., 2017). Lastly, there is a risk that the growing use of AI in mental healthcare could replace, rather than supplement, in-person care, potentially worsening disparities in access to quality healthcare (Espejo et al., 2023; Anthropic, 2024).

1. Our Proposed Solution

The AI-Human Bridge is created by leveraging narrowly trained language models allowing an AI chatbot to assist with mental healthcare while seamlessly passing off to a human caregiver when needed. Trained on curated mental health data, the AI-Human bridge provides precise, real-time support, ensuring safe and effective mental health management tailored to the user’s needs.

The AI-Human Bridge is designed to address the aforementioned by integrating advanced techniques like multi-shot prompting, knowledge graphs, fine-tuning, highly curated datasets, and retrieval-augmented generation (RAG) (Guu et al., 2020; Devlin et al., 2019). These methods work to ensure that the AI operates within well-defined parameters while delivering highly specialized and accurate performance (Schick & Schütze, 2024).

At the core of this system is narrow training for precision, where AI is specifically trained within targeted domains to enhance focus and accuracy (Lample & Conneau, 2019). Techniques like multi-shot prompting, which have been shown to improve model performance by leveraging multiple examples (Brown et al., 2020), and knowledge graphs, which effectively model the interconnectivity within documents, help the AI generate context-aware responses (Raffel et al., 2020). This approach reduces the risk of overgeneralization and enhances the AI’s ability to address specific challenges (Hu et al., 2021).

Curated datasets and fine-tuning are crucial in maintaining the precision of the AI. By carefully selecting and structuring the training data and continuously fine-tuning the model, the AI is able to operate within strict boundaries, ensuring both consistent and relevant outputs (Sun et al., 2020). This level of control guarantees that the AI’s responses remain reliable and aligned with the specific needs of its users (Li & Liang, 2021).

Additionally, enhanced retrieval-augmented generation (RAG) enables real-time access to relevant data sources, improving the AI’s efficiency and accuracy in human-AI interactions (Lewis et al., 2021). RAG ensures the AI can retrieve and utilize the most relevant information, especially in time-sensitive applications, which further bolsters its specialized performance (Raffel et al., 2020).

(example simplified RAG workflow)

Data privacy and security are critical components of the AI-Human Bridge. In handling mental health data, the AI-Human Bridge will follow strict privacy regulations, ensuring that all user data is securely stored and anonymized (Mya Care Editorial Team, 2024). By integrating advanced security protocols and aligning with global privacy standards, the AI-Human Bridge prioritizes safeguarding user information, reinforcing trust in the system while maintaining its high level of specialized performance (Gawron & Pejović, 2024).

2. Methodology and Framework

The AI-Human Bridge is designed to seamlessly combine the strengths of AI and human caregivers, creating a platform that not only reduces the cost of therapy but also improves accessibility. Built on the foundation of the Thrive360 model, where users engage with video content provided by licensed mental health professionals, the AI-Human Bridge leverages these video transcripts as the base training data for its AI system. This ensures that the AI can deliver contextually accurate mental health support, while also maintaining the ability to transition users to human professionals when necessary.

To ensure the system’s success, we have developed a six-step methodology focused on safe and effective implementation of narrow AI for mental health:

The process begins with data collection, the dataset is curated from clean, well-structured text documents, particularly transcripts from pre-recorded video sessions across  various therapeutic modalities, allowing the AI to determine the most appropriate approach for a user’s specific needs.. These documents must be free from formatting errors or distractions that could compromise the AI’s ability to extract essential information. High-quality, well-formatted data is crucial for producing accurate vector embeddings and supporting downstream tasks such as response generation in allignment with the DSM-5.

Next is database creation, where the curated data is processed to generate vector embeddings using a cloud-based service. For this project, Cohere embeddings with a dimensionality of 1028 are utilized, and the embeddings are stored in a Pinecone Database. This infrastructure ensures fast retrieval and scalability for future queries, allowing the AI to access relevant data rapidly and accurately.

A key component of the system is the integration of Retrieval-Augmented Generation (RAG). This system, connected to the vector database, retrieves relevant entries and ensures that user queries return the most contextually appropriate information. Paired with Cohere’s Rerank tool, the RAG system enhances retrieval accuracy, reducing the risk of AI-generated errors.

Model fine-tuning is critical to ensure the AI operates effectively within the specific context of mental healthcare. The model uses multi-shot prompting, guiding it to respond accurately to various user inputs. Additionally, prompt engineering sets parameters for when the AI should escalate user queries to human caregivers, ensuring that the system always defers to human oversight in complex or high-risk cases.

Once the model, RAG system, and vector database are aligned, these components are integrated into a unified architecture. Initial testing involves assessing output quality and making adjustments based on user feedback and performance metrics, allowing for iterative improvements to the system’s response accuracy and reliability.

The final phase involves robustness testing and security. The AI-Human Bridge undergoes extensive testing under various conditions to ensure resilience. The AI must be able to detect and respond to malicious intent or misuse, directing users to appropriate resources like emergency hotlines or human caregivers when necessary. This step is vital for maintaining the safety and integrity of the system, particularly in the sensitive field of mental health.

By following these structured steps, the AI-Human Bridge ensures that users receive accurate and contextually appropriate responses, while still benefiting from human intervention when needed. This hybrid model maximizes the effectiveness of AI-driven mental healthcare, ensuring both safety and accessibility.

VI. Roadmap

Our approach to implementing the AI-Human Bridge is centered around incremental development with rigorous testing at each phase. We aim to start with the simplest and safest enhancements and gradually introduce more advanced features as AI capabilities improve and additional data becomes available. Each new feature will be rigorously tested for accuracy, truthfulness, and safety before deployment, ensuring that the AI consistently meets the high standards required in the mental health care domain (Blease & Kaptchuk, 2022).

Testing phases will include preliminary testing and user feedback loop. Before any feature is deployed, we will conduct preliminary internal testing with a focus on accuracy, truthfulness, and alignment with the intended therapeutic outcomes. Testing will simulate a variety of user inputs and edge cases to ensure the AI behaves predictably and safely (DeCamp & Lindvall, 2020). Following this, we will initiate pilot testing with a controlled group of users to gather real-world feedback. This phase will involve evaluating how well the AI meets user expectations, adheres to ethical guidelines, and contributes positively to the mental health journey. These early tests will inform fine-tuning and improvements before wider rollout (Huckvale et al., 2019).

Evaluation Metrics and Accuracy Thresholds

To ensure we achieve successful outcomes, we will establish clear evaluation metrics and accuracy thresholds:

The AI Human bridge must have exceptional response accuracy in the AI’s conversational contexts, ensuring it provides correct and contextually appropriate responses based on user inputs.

Next, the AI must maintain 100% compliance in providing safe, non-harmful responses, particularly in sensitive or critical mental health scenarios (Huckvale et al., 2019). This metric is crucial for ensuring the trustworthiness of the system.

Additionally, user satisfaction will be measured through surveys and engagement metrics, seeking a minimum satisfaction score (e.g., 4.5/5). Feedback from users will guide iterative improvements and the rollout of future features.

Last, the accuracy of transitioning users from AI interaction to professional help will be closely monitored. The system should ensure that users are appropriately referred to human caregivers when necessary, aiming for at least 95% referral accuracy (DeCamp & Lindvall, 2020).

Demonstrating Efficacy

We will use several key performance indicators (KPIs) to demonstrate the efficacy of the AI-Human Bridge:

Engagement metrics supporting AI interactions should lead to a measurable improvement in user engagement, with a target increase of 15-20% in user retention and interaction after engaging with the AI.

Then, users must see improvement in mental health outcomes. We will track improvements in user-reported mental health outcomes (e.g., reduction in anxiety or stress) through pre- and post-interaction surveys. Our aim is to demonstrate statistically significant improvements after AI-assisted therapy sessions (Bostrom, 2014).

To support the AI-Human Bridge’s hand in this, we will introduce an AI-Human Interaction Score that combines AI performance and user/professional satisfaction. This score will evaluate the quality of AI-human interactions, with a threshold score of 90/100 required before broader feature deployment (Kidd & Castano, 2013).

For further support, we will leverage iterative improvements and continuous monitoring. As part of our commitment to maintaining high standards, we will implement continuous monitoring of the AI-Human Bridge to identify potential failures or areas for improvement. The AI system will be regularly retrained based on new data, refining its performance and ensuring ongoing compliance with accuracy and safety requirements (Thirunavukarasu et al., 2023).

Key Rollout Steps

The first step in implementing the AI-Human Bridge is selecting the safest method to connect AI with video-based therapy sessions. This might involve enabling users to chat with an AI about the most recent video they interacted with. The second step is testing the technology to ensure the chatbot provides effective feedback and remains within defined conversational parameters (Huckvale et al., 2019). In the third step, the feature will be rolled out to all users, allowing us to gather data for future improvements.

Following this, we will research the next updates, such as a feature allowing users to discuss videos in the context of all previous ones or offering review sessions before the start of the next video. Each incremental release will follow rigorous testing, with only features meeting our stringent accuracy, truthfulness, and safety standards proceeding to full deployment.

Challenges and Considerations

While the AI-Human Bridge offers immense potential, several challenges must be addressed. Given the sensitive nature of mental health data, we will follow strict data-handling protocols and conduct regular audits to ensure compliance with global privacy regulations (Huckvale et al., 2019). Additionally, every feature of the AI system will include safeguards to prevent harmful behavior and ensure users are directed to human caregivers or emergency hotlines when necessary.

VII. Conclusion 

The AI-Human Bridge represents a significant leap forward in addressing the complexities of mental healthcare by combining the strengths of narrow AI and human expertise. This solution leverages advanced AI techniques, including Retrieval-Augmented Generation (RAG), multi-shot prompting, and highly curated datasets, to ensure precise and contextually relevant responses. By fine-tuning AI models specifically for mental health, the AI-Human Bridge addresses the shortcomings of general-purpose LLMs, reducing hallucinations and improving the accuracy of conversational AI in critical, sensitive contexts.

One of the core strengths of the AI-Human Bridge is its ability to integrate AI-driven support with seamless transitions to human caregivers when necessary. This hybrid approach not only enhances the system’s reliability but also ensures that users are never left solely dependent on AI for their mental health needs. Instead, the AI acts as a support system that helps users recognize patterns in their thought processes and manage their mental health, while human caregivers provide oversight and handle complex cases.

This solution also directly addresses the challenges of scalability and accessibility in mental health care, offering an innovative pathway to reduce the burden on professionals while maintaining high standards of care. The combination of precision AI and human intervention positions the AI-Human Bridge as a pioneering tool that can alleviate bottlenecks in mental health services, particularly in underserved areas.

Looking ahead, future research will focus on enhancing the robustness of the AI-Human Bridge, particularly in addressing edge cases and expanding its capabilities to manage a wider range of mental health scenarios. Furthermore, developing a comprehensive ethical framework will be critical to ensure the continued safety, privacy, and reliability of the system as it scales. The AI-Human Bridge offers a promising solution to the global mental health crisis, ensuring that AI serves as a complementary tool to human care, rather than a replacement.

 

VIII. Works Cited 

  1. Alzantot, M., Sharma, Y., Elgohary, A., Ho, B.-J., Srivastava, M., & Chang, K.-W. (2018). Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 2890–2896). https://doi.org/10.18653/v1/D18-1316
  2. Anthropic. (2024). Claude for enterprise. Anthropic. https://www.anthropic.com/enterprise
  3. Banerjee, S., Agarwal, A., & Singla, S. (2024). LLMs will always hallucinate, and we need to live with this. DataLabs.
  4. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://doi.org/10.1145/3442188.3445922
  5. Blease, C., & Kaptchuk, T. J. (2022). The ethics of artificial intelligence in mental health care. Nature Human Behaviour, 6(10), 1329–1337.
  6. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  7. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS). https://arxiv.org/abs/2005.14165
  8. Cohere. (2024). Build a future without a language barrier between humans and machines. Cohere. https://cohere.com/business
  9. Christiano, P., Leike, J., Brown, T., Martic, M., Legg, S., & Amodei, D. (2017). Deep reinforcement learning from human preferences. arXiv. https://arxiv.org/abs/1706.03741
  10. D’Abramo, J., Zugarini, A., & Torroni, P. (2024). Dynamic few-shot learning for knowledge graph question answering. arXiv. https://arxiv.org/abs/2407.01409
  11. DeCamp, M., & Lindvall, C. (2020). Gaps in Oversight of Human Subjects Research Embedded in Artificial Intelligence Software.
  12. Dernbach, S., Agarwal, K., Zuniga, A., Henry, M., & Choudhury, S. (2024). GLaM: Fine-tuning large language models for domain knowledge graph alignment via neighborhood partitioning and generative subgraph encoding. arXiv. https://arxiv.org/abs/2402.07912
  13. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019. arXiv. https://arxiv.org/abs/1810.04805
  14. DigitalOcean. (2023). AI and privacy: Safeguarding data in the age of artificial intelligence. DigitalOcean. https://www.digitalocean.com
  15. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv. https://arxiv.org/abs/1702.08608
  16. Espejo, G., Reiner, W., & Wenzinger, M. (2023). Exploring the role of artificial intelligence in mental healthcare: Progress, pitfalls, and promises. Cureus, 15(9), e44748. https://doi.org/10.7759/cureus.44748
  17. Fabric. (2023). Four Types of LLM Hallucinations. LinkedIn. https://www.linkedin.com/posts/fabrichq_ai-llms-hallucinations-activity-7118088023426174976-ISKv/
  18. Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In 33rd International Conference on Machine Learning (ICML). https://arxiv.org/abs/1506.02142
  19. Gawron, A., & Pejović, V. (2024). Ensuring privacy in the age of AI: Exploring solutions for data security and anonymity in AI. Tripwire. https://www.tripwire.com/state-of-security
  20. Guu, K., Lee, K., Tung, Z., Pasupat, P., & Chang, M.-W. (2020). REALM: Retrieval-augmented language model pre-training. arXiv. https://arxiv.org/abs/2002.08909
  21. Hinton, G. E., Krizhevsky, A., & Sutskever, I. (2012). ImageNet Classification with Deep Convolutional Neural Networks.
  22. Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, S., Wang, L., Wang, S., & Chen, W. (2021). LoRA: Low-rank adaptation of large language models. arXiv. https://arxiv.org/abs/2106.09685
  23. Huckvale, K., Torous, J., & Larsen, M. E. (2019). Assessment of the Data Sharing and Privacy Practices of Smartphone Apps for Depression and Smoking Cessation. JAMA Network Open, 2(4), e192542.
  24. Innodata. (2024). Quick concepts: Fine-tuning in generative AI. Innodata. https://innodata.com/quick-concepts-fine-tuning-in-generative-ai
  25. Jagannatha, A. N., Yu, H., Liu, F., & Yu, Z. (2024). Knowledge-injected prompt-based fine-tuning for multi-label few-shot ICD coding. arXiv. https://arxiv.org/abs/2401.12345
  26. Jin, W., Zhang, X., Li, P., Zhu, J., Wang, T., & Tang, J. (2023). All in one: Multi-task prompting for graph neural networks. arXiv. https://arxiv.org/abs/2302.12345
  27. Kidd, D. C., & Castano, E. (2013). Reading Literary Fiction Improves Theory of Mind. Science, 342(6156), 377–380.
  28. Lakshminarayanan, B., Pritzel, A., & Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv. https://arxiv.org/abs/1612.01474
  29. Li, X. L., & Liang, P. (2021). Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL). arXiv. https://arxiv.org/abs/2101.00190
  30. Lerer, A., Pesce, E., Grey, A., Jurkiewicz, B., & Cannon, Z. (2024). Generalization through memorization: Nearest neighbor language models. arXiv. https://arxiv.org/abs/2405.04761
  31. Liu, W., Zhou, P., Zhao, Z., Wang, Z., Ju, Q., Deng, H., & Wang, P. (2020). K-BERT: Enabling language representation with knowledge graph. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (pp. 2901-2908). https://doi.org/10.1609/aaai.v34i05.6647
  32. McCarthy, J. (1958). Programs with common sense. Presented at the Symposium on Mechanization of Thought Processes, National Physical Laboratory, Teddington, Middlesex. Published by H.M.S.O. https://doi.org/10.5555/123456789
  33. McCarthy, M. (2024, March 13). Beware online mental health chatbots, specialists warn. UW Medicine Newsroom. https://newsroom.uw.edu/blog/beware-online-mental-health-chatbots-specialists-warn
  34. Mya Care Editorial Team. (2024, August 29). Mental health apps and the role of AI in emotional well-being. Mya Care. https://myacare.com/blog/mental-health-apps-and-the-role-of-ai-in-emotional-wellbeing
  35. Nicolai, C. (2024). Prompt design and engineering: Introduction and advanced methods. PromptPanda. https://www.promptpanda.com
  36. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., … & Ziegler, D. M. (2022). Training language models to follow instructions with human feedback. arXiv. https://arxiv.org/abs/2203.02155
  37. Patel, U. K., Anwar, A., Saleem, S., Malik, P., Rasul, B., Patel, K., Yao, R., Seshadri, A., Yousufuddin, M., & Arumaithurai, K. (2021). Artificial intelligence as an emerging technology in the current care of neurological disorders. Journal of Neurology, 268, 1623–1642.
  38. PromptPanda. (2024). Few-shot prompting explained: A guide. PromptPanda. https://www.promptpanda.io/resources/few-shot-prompting-explained-a-guide
  39. Rathnayake, D. (2024). DSPy: Compiling declarative language model calls into self-improving pipelines. Tripwire. https://www.tripwire.com
  40. Roberts, A., Raffel, C., & Shazeer, N. (2020). How much knowledge can you pack into the parameters of a language model? arXiv. https://arxiv.org/abs/2002.08910
  41. Sahoo, P., Singh, A. K., Saha, S., Jain, V., & Mondal, S. (2024). A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv. https://arxiv.org/abs/2402.07927
  42. Schick, T., & Schütze, H. (2024). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv. https://arxiv.org/abs/2107.13586
  43. Shelf. (2024). Fine-tuning large language models for AI accuracy and effectiveness. Shelf. https://shelf.io/blog/fine-tuning-llms-for-ai-accuracy-and-effectiveness/
  44. Skalidis, I., Cagnina, A., & Fournier, S. (2023). Use of large language models for evidence-based cardiovascular medicine. European Heart Journal – Digital Health, 4, 368–369.
  45. Taori, R., Chang, M.-W., Lee, K., & Toutanova, K. (2024). Evaluating LLMs at detecting errors in LLM responses. arXiv. https://arxiv.org/abs/2403.07952
  46. Tamkin, A., Brundage, M., Clark, J., & Ganguli, D. (2022). AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. arXiv. https://arxiv.org/abs/2010.15980
  47. Thirunavukarasu, A. J., Ting, D. S. J., Elangovan, K., Gutierrez, L., Tan, T. F., & Ting, D. S. W. (2023). Large language models in medicine. Nature Medicine, 29, 1930–1940.
  48. Thorne, J., Vlachos, A., Christodoulopoulos, C., & Mittal, A. (2018). FEVER: A large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) (pp. 809-819). https://doi.org/10.18653/v1/N18-1074
  49. Vlachos, A., & Riedel, S. (2014). Fact-checking: Task definition and dataset construction. Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, 18–22. https://www.aclweb.org/anthology/W14-2517
  50. Wang, B., Si, C., Zhang, Z., Gan, Z., Cheng, Y., Awadallah, A. H., & Liu, J. (2020). InfoBERT: Improving robustness of language models from an information theoretic perspective. In Proceedings of the 2020 International Conference on Learning Representations (ICLR). arXiv. https://arxiv.org/abs/2003.07919
  51. Xu, C., Zhang, T., Liu, Q., & Song, Z. (2023). Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. arXiv. https://arxiv.org/abs/2306.03370
  52. Yang, K., Kummerfeld, J. K., Mihalcea, R., & Mars, M. (2021). Automated data curation for robust language model fine-tuning. arXiv. https://arxiv.org/abs/2106.08875
  53. Zhang, H., Ren, S., Cao, J., & Jin, Z. (2023). KagNet: Knowledge-aware graph networks for commonsense reasoning. arXiv. https://arxiv.org/abs/2308.12345
  54. Zhang, J., Alhajj, R., & Gao, W. (2023). Knowledge graph prompting: A new approach for multi-document question answering. In KDD ’23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. https://doi.org/10.1145/1234567890
  55. Zhang, Z., Si, C., Cheng, Y., Gan, Z., Liu, J., & Wang, B. (2021). Adversarial training with free-layer boosting for robust natural language understanding. In Proceedings of the 2021 International Conference on Learning Representations (ICLR). arXiv. https://arxiv.org/abs/2104.06666
  56. Zhou, P., Chen, Q., Li, X., Liu, P., Zhao, J., & Wang, P. (2020). Enhancing robustness of neural machine translation systems with adversarial examples. In Proceedings of the 2020 Annual Meeting of the Association for Computational Linguistics (ACL) (pp. 5466–5476). https://doi.org/10.18653/v1/2020.acl-main.487
  57. Zhu, C., Cheng, Y., Gan, Z., & Liu, J. (2019). FreeLB: Enhanced adversarial training for natural language understanding. In Proceedings of the 2019 International Conference on Learning Representations (ICLR). arXiv. https://arxiv.org/abs/1911.04584
  58. Zhu, H., Li, X., & Liang, P. (2023). Optimizing continuous prompts for natural language understanding tasks. arXiv. https://arxiv.org/abs/2306.01234
  59. Zhu, Y., & Tan, S. (2024). Evaluating large language models for misinformation detection and prevention. arXiv. https://arxiv.org/abs/2404.01210
  60. Zhuang, L., & Zhou, X. (2024). Optimizing prompts for multilingual NLP tasks using language-specific prompts. arXiv. https://arxiv.org/abs/2404.02167
  61. Zou, J., Pan, Z., Qiu, J., Liu, X., Rui, T., & Li, W. (2020). Improving the transferability of adversarial examples with resized-diverse-inputs, diversity-ensemble and region fitting. In Proceedings of the 2020 European Conference on Computer Vision (ECCV) (pp. 563-579). Springer. https://doi.org/10.1007/12345-4567-89