
Artificial Intelligence & Social Work

Description, positives, concerns, recommendations, place of social work, suggestions for practice

Three sections follow:

  1. Background Material that provides the context for the topic

  2. A suggested Practice Approach

  3. A list of Supporting Material / References

Feedback welcome!

At the time of writing this post (Australia, September 2023), artificial intelligence was very much in the news. Some people support its use, pointing out its many benefits. Others raise concerns, seeking to limit, control or slow down its development. Social work is already caught up in this debate and will continue to contribute.

This post draws on some of the recent material on the web and from journals to examine some of the current thinking on the impact of AI on social work. Most writers seem to conclude that AI can be beneficial to social workers but emphasise that social workers must carefully evaluate its use and, when incorporating it into practice, ensure they do not abandon the human contact dimension of social work. AI should be another tool social workers use to support people. Social worker knowledge, values and practical expertise should continue to take centre-place.

Before posting this material, I sought feedback from the Facebook site, ‘Social Work Toolkit: Connect and Share’. As a result I discovered the ‘socialworkmagic’ site, which is worth a look if you are interested in exploring AI further.

Background Material

What is Artificial Intelligence? How Does It Work?

Generative artificial intelligence (AI) and large language models (LLMs) such as ChatGPT can generate coherent, relevant, and high-quality text based on simple, plain-language prompts provided by the user. Generative AI models analyze the nuances of human language to produce natural, human-like responses. LLMs are a specific type of generative AI trained on massive text data sets, including books, articles, and web pages. These models use advanced deep learning techniques to analyze and understand the patterns and structures of language, enabling them to generate human-like text that can be used for a wide range of applications (Perron, 2023a; Victor et al., 2023).

Naughton (2023) provides some background that helps in understanding LLMs such as ChatGPT. This LLM has been trained on hundreds of terabytes of text, most of it probably scraped from the web, so you could say that it has “read” (or at any rate ingested) almost everything that has ever been published online. As a result, ChatGPT is pretty adept at mimicking human language, a facility that has encouraged many of its users to anthropomorphise it, i.e. to view the system as more human-like than machine-like. However, Naughton maintains ChatGPT is simply a tool that augments human capabilities. Naughton provides an example to illustrate: “So if you give the model a prompt such as ‘The first person to walk on the moon was ... ‘ and it responds with ‘Neil Armstrong’, that’s not because the model knows anything about the moon or the Apollo mission but because we are actually asking it the following question: ‘Given the statistical distribution of words in the vast public corpus of [English] text, what words are most likely to follow the sequence ‘The first person to walk on the moon was’? A good reply to this question is Neil Armstrong.”
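Naughton’s point can be illustrated with a toy sketch. The code below is a hypothetical, drastically simplified illustration (a word-pair counter, not how ChatGPT actually works): a “model” that has only counted which word follows which in its training text will “complete” a prompt by picking the statistically most likely next word.

```python
from collections import Counter, defaultdict

# A drastically simplified illustration: count which word follows each
# word in a tiny "training corpus", then predict the most frequent one.
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "neil armstrong was the first person to walk on the moon ."
).split()

# following[w] counts every word that appeared immediately after w.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word that most often followed `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("walk"))   # prints "on"
print(predict_next("neil"))   # prints "armstrong"
```

The sketch “knows” nothing about the moon; it simply reproduces the most frequent continuation in its data, which is the statistical behaviour Naughton describes, scaled down from billions of parameters to a dozen lines.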

Positives of AI

LLMs can be used for a variety of tasks such as generating original text, translating languages for international scholars, and even facilitating the generation of novel ideas and hypotheses. Software that is helpful in this area includes Elicit (extracting key points from large collections of articles), Scite (discovery and evaluation of articles) and Semantic Scholar (connections and links between articles) (Victor et al., 2023). Particular areas mentioned in the literature include the following.

  • In qualitative research, LLMs can significantly reduce the time and effort required for researchers to analyze text data manually, enabling them to focus on interpreting and contextualizing the data (Spooner, 2023; Perron, 2023a; Victor et al., 2023).

  • In quantitative research, LLMs can help analysts select suitable statistical procedures for data analysis. LLMs can assist in identifying patterns and trends, with writing and debugging code, with data mining, with identifying and categorising different data types, and with sentiment analysis (determining emotions or attitudes in a text) (Shoaib, 2023; Perron 2023a; Victor et al., 2023).

  • AI-powered tools can be used to help facilitate better decisions through providing real-time information about a client's history, needs, and outcomes (ChatGPT in Spooner, 2023). Large language models can also help explore alternative strategies and interventions by suggesting evidence-based practices and potential solutions (Perron, 2023a).

  • AI can be used to automate routine tasks, such as scheduling appointments, filling out paperwork, writing case notes, treatment plans, and progress reports, updating training manuals, synthesising and updating policy documents, and tracking case progress (ChatGPT in Spooner, 2023; Dey, 2023; Perron, 2023a).

  • By analysing data on past interventions and outcomes, AI can help predict the likelihood of success for a particular intervention with a given client, helping develop more effective intervention plans and reducing the risk of negative outcomes. It can also help allocate resources more efficiently (Shoaib, 2023).

  • AI-powered chatbots (programs that simulate human responses in a conversational manner by using artificial intelligence and natural language processing) can be used in social work to provide counselling or emotional support to people who might not have access to traditional therapy or may be hesitant to seek it out. They can also help connect individuals with the appropriate resources and services. Chatbots can be especially useful in addressing mental health issues, where individuals may be hesitant to seek help from a human therapist due to social stigma or other barriers (ChatGPT in Spooner, 2023; Dey, 2023; Perron, 2023a; Shoaib, 2023).

  • Generative AI can be a valuable tool at an international level by providing real-time translations and generating culturally-sensitive messages, helping one stay informed about global trends and emerging international issues, and fostering collaboration and knowledge exchange between individuals in different countries (Dey, 2023; Perron, 2023a).

  • AI can help social workers synthesise and evaluate policy documents more efficiently, extracting critical information and summarizing main points. It can identify gaps and opportunities in existing policies (Dey, 2023).

Concerns About AI

Several potential problems with AI and LLMs could significantly limit their utility and acceptability in some areas, including social work. Some of these issues are foreseeable, whereas others may arise unexpectedly.

  • There is a potential for LLMs to replace human expertise and judgment in academic research. LLMs are powerful tools that can quickly generate large amounts of text, but they lack the same level of critical thinking, reflection, and analysis as humans. Research output could drop in quality (Devlieghere et al., 2022; Shoaib, 2023; Victor et al., 2023).

  • Most LLMs are cloud-based services and sensitive data could be subject to a privacy breach. This is a problem in itself but is compounded by the lack of clarity around ownership of data entered into an LLM, e.g. OpenAI (developer of ChatGPT) notes it owns all data and content input into ChatGPT (Dey, 2023; Victor et al., 2023).

  • LLMs can be biased depending on the data used to train them, especially in text generation, sentiment analysis, classification, and thematic analysis. LLMs are often complex and opaque, making it difficult for users to understand how they work and what biases may be present in the model. This can make it challenging to ensure that the results generated by LLMs are accurate (Dey, 2023; Innovative AI, 2023; Victor et al., 2023).

  • LLMs are typically trained on large sets of text data, which may include sources from a wide range of authors and contexts. These sources may not have undergone the same rigorous peer-review process as traditional research publications, and LLMs might not be built on the most current available data (Victor et al., 2023).

  • In many professions, including social work, AI cannot replace a human connection and the empathy this brings to the situation. AI can enhance, but not replace, human connections (Devlieghere et al., 2022; Dey, 2023; Innovative AI, 2023; Shoaib, 2023).

  • Many social work organisations might not have the necessary resources or expertise to adopt and maintain AI technologies (Dey, 2023).

Responsible and Ethical Use – Recommendations

The concerns outlined above have caused some authors to make recommendations around using AI and LLMs.

  • Users should understand the limitations and potential biases of AI tools and make informed decisions about which tools are best suited for a particular situation (Perron, 2023b; Victor et al., 2023).

  • Encouraging better representation and participation of underrepresented and marginalized communities in the AI development process allows professionals to support more inclusive practices and serve the unique requirements of a broad range of individuals (Perron, 2023a).

  • Users are responsible for protecting the privacy and security of any data they collect and analyse using generative AI tools (Victor et al., 2023; Perron, 2023b).

  • Those using AI tools should explain to clients how the tool contributes to care, discussing benefits and limitations, and addressing concerns (Perron, 2023a).

  • As AI technologies evolve rapidly, users should engage in ongoing learning and development to stay current with the latest advances in AI tools, techniques, and ethical considerations (Victor et al., 2023).

After conducting a literature review, Reamer (2023) concluded that an in-depth analysis of the key ethical issues related to the use of AI was lacking. Reamer lists the following as core ethical challenges for social workers when using AI.

  • Informed consent and client autonomy: When using AI, practitioners should inform clients of relevant benefits and risks and respect clients’ judgment about whether to accept or decline the use of AI.

  • Privacy and confidentiality: Social workers have a duty to ensure that the AI software they are using is properly encrypted and protected from data breaches to the greatest extent possible.

  • Transparency: Social workers who use AI should inform clients of any unauthorized disclosure of clients’ protected health information, for example, as a result of computer hacking or failed online or digital security.

  • Client misdiagnosis: Social workers should supplement any AI-generated assessment with their own independent assessments and judgements. Misdiagnosis may lead to inappropriate or unwarranted interventions which, in turn, may cause significant harm to clients.

  • Client abandonment: Social workers who rely on AI to connect with clients must take steps to respond to their messages and postings in a timely fashion when warranted, e.g. if a client communicates suicidal ideation via AI. Social workers should take reasonable steps to avoid abandoning clients who are still in need of services.

  • Client surveillance: One of the inherent risks of AI is the possibility that third parties will use available data inappropriately and without authorization for surveillance purposes, e.g. prosecutors may subpoena electronically stored information generated by AI to prosecute pregnant people who seek abortion services.

  • Plagiarism, dishonesty, fraud, and misrepresentation: One advantage of AI is that it enables social workers to generate useful job-related information quickly, e.g. for grant applications and fundraising appeals. Social workers who take advantage of this powerful AI tool must be sure to cite their sources and comply with “fair use” doctrine to avoid allegations of plagiarism, dishonesty, fraud, and misrepresentation.

  • Algorithmic bias and unfairness: AI’s dependence on machine learning, which draws from large volumes of available data that may not be entirely representative of social workers’ clients, comes with a risk that the algorithms used will incorporate significant bias related to race, ethnicity, gender, sexual orientation, gender expression, and other vulnerable or protected categories.

  • Use of evidence-based AI tools: Because of the speed of development, social workers who incorporate AI into their practice have an ethical duty to keep pace with research-informed developments and adjust their protocols accordingly. A key resource is the Association for the Advancement of Artificial Intelligence.

The Role of AI in Social Work

Before examining the role of AI in social work it is of some interest to see what an AI tool, viz. ChatGPT, says in response to the question: “What role will AI play in the future of social work?” Here is a description in its own words (Spooner, 2023):

AI is expected to play an increasing role in the field of social work in the future, as it has the potential to improve many aspects of the profession. Some possible ways in which AI could be used in social work include:

Predictive modelling: AI algorithms can be used to analyze large amounts of data to identify patterns and make predictions about which individuals or families are most at risk of experiencing social problems, such as homelessness or child abuse. This can help social workers prioritize their caseloads and target interventions more effectively.

Decision support: AI-powered tools can be used to help social workers make better decisions by providing them with real-time information about a client's history, needs, and outcomes. For example, an AI-powered tool might analyze a client's medical records and social service history to recommend appropriate interventions.

Administrative tasks: AI can be used to automate routine tasks, such as scheduling appointments, filling out paperwork, and tracking case progress. This can free up social workers to focus on more complex and important tasks, such as building relationships with clients and collaborating with other professionals.

Chatbot: AI-powered chatbot can be used in social work to provide counselling or emotional support to people who might not have access to traditional therapy or may be hesitant to seek it out.

Virtual Assistance: AI-powered virtual agents may also be used to provide general information, referrals and help clients navigate the social service system.

However, it's important to note that, AI should be used as a tool to support human decision making and not as a replacement for human expertise and empathy.

Shoaib (2023) agrees with much of the above, maintaining AI has the potential to make a significant positive impact on social work by improving the efficiency and effectiveness of interventions and helping social workers make more informed decisions. Molala and Mbaya (2023) go further in suggesting the future of social work is intertwined with AI, and it is in the best interest of the profession to embrace AI as it has the potential to improve the efficacy of social work services. Shoaib (2023) also urges social workers, when using AI, to be aware of the potential risks and challenges outlined in the relevant sections above so they can use AI in a fair, unbiased and human-centred way to improve the wellbeing of individuals, families and communities.

Reamer (2023) acknowledges AI has come of age and has the potential to transform social work in clinical, administrative, advocacy and policy areas.  Reamer suggests that AI has been especially prominent in clinical social work to conduct risk assessments, assist people in crisis, strengthen prevention efforts, identify systemic biases in the delivery of social and behavioral health services, provide social work education, and predict social worker burnout and service outcomes.  Examples provided by Reamer include:

  • The ‘Crisis Contact Simulator’, which simulates digital conversations with LGBTQ youths, enabling counsellors to experience realistic practice conversations before taking live ones.

  • ‘Woebot’, a chatbot that simulates therapeutic conversation, remembers the context of past sessions and delivers advice around mood and other struggles.

  • ‘Wysa’, an AI service that responds to the emotions individuals express using evidence-based CBT, DBT, meditation, breathing, yoga and motivational interviewing.

  • ‘heyy’, an app designed to communicate with people who feel chronically lonely.

  • ‘ChatGPT’, offering people ways to address their distress through, for example, increasing relaxation, focusing on sleep, reducing caffeine and alcohol consumption, challenging negative thoughts, reducing high-risk behaviours and seeking the support of friends and family.

  • ‘PTSD Coach’, an app designed to help veterans and service members manage symptoms of PTSD.

  • The ‘AIMS’ app (Anger and Irritability Management Skills), designed to help veterans and military service members manage feelings of anger and irritability.

  • The ‘Mindfulness Coach’ app, providing a variety of guided mindfulness exercises which can help users reduce stress, manage anxiety, and improve overall well-being.

  • The ‘Annie’ app for veterans, an SMS text messaging tool that promotes self-care.

However, while acknowledging the role of AI in current and future social work, writers also suggest that AI cannot replace social workers. AI’s role is to handle mundane tasks to free up social workers’ time to focus on tasks requiring human intervention. Social workers have unique skills that AI lacks (Perron, 2023a). Lehtiniemi (2023) came to a similar conclusion after a study of social workers in Finland who trialled AI as a means of predicting whether a child would need future emergency placement or being taken into custody. The twelve social workers involved found the tool did not consider the person-in-environment approach central to social work. As well, the tool ignored the possibility that people can and do depart from historical trajectories. Unlike the AI tools, social workers do not view a client’s future as determined by the past; rather they support departures from it. In fact, AI cannot meet the demand for knowledge about a unique person in a specific context.

It is important that social workers have a solid grasp of AI technology, which includes understanding how AI models function, their strengths and weaknesses and the potential biases they may introduce. A key component of understanding AI is knowing the underlying training procedures and data, enabling social workers to identify potential inaccuracies, biases, and outdated information. Disappointment with AI models often arises from using them for tasks they were not trained to perform, such as researching highly specialized topics (James et al., 2023; Perron, 2023b).

If AI is to be used effectively, social workers must become proficient at crafting practical questions or input statements to guide the AI model in generating useful, relevant, and accurate responses. Content expertise plays a significant role in generating relevant ‘prompts’ for an AI tool. Well-crafted prompts can significantly improve the quality of the AI-generated content, making it more focused, accurate, and applicable to the task at hand. Conversely, poorly designed prompts may lead to irrelevant, ambiguous, or misleading responses from the AI model. This again highlights the importance of social workers’ knowledge and expertise: not only do they provide the human connection, they can also use AI tools efficiently to drive more effective, efficient, and ethical outcomes for clients and communities (Perron, 2023b).
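The difference between a vague and a well-crafted prompt can be sketched in a few lines. The structure below (role, context, task, output format) is a hypothetical illustration, not a prescribed template, and no AI service is actually called; the point is only the text a social worker would send to the tool.

```python
# A hypothetical illustration of prompt crafting. The role/context/task/
# format structure is an assumption for illustration, not a standard.
# No AI service is called here; only the prompt text is assembled.

vague_prompt = "Write some case notes."

def build_prompt(role, context, task, output_format):
    # Assemble a structured prompt from its four parts.
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

crafted_prompt = build_prompt(
    role="a social worker writing professional case notes",
    context="a routine home visit to discuss a client's housing application",
    task="draft concise, objective case notes of the visit",
    output_format="dated entries in plain language, no identifying details",
)

print(crafted_prompt)
```

The vague prompt leaves the model to guess the audience, tone, and content; the crafted prompt draws on the worker’s content expertise to constrain all three, which is exactly where professional knowledge shapes the quality of AI output.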

As AI continues to enter the everyday lives of individuals, social workers have a responsibility to continuously examine and question the effects of AI technology on vulnerable populations, ensuring that the tools used do not contribute to harm or injustice. Cultivating a deep understanding of ethical challenges enables social work professionals to advocate for developing and implementing policies and guidelines that support ethical AI use in their practice settings. This commitment to ethical AI integration is essential for upholding the core values of social work and promoting the well-being of individuals and communities (James et al., 2023; Perron, 2023b).

Continuous professional development (CPD) is essential if social workers are to utilise AI tools fully and appropriately. CPD will assist social workers to keep abreast of novel and emerging technologies in the dynamic digital society. Moreover, CPD can enhance the quality of intervention and support ethical conduct around, for example, maintaining confidentiality, gaining informed consent, setting professional boundaries, maintaining professional competence, and record keeping (Molala & Mbaya, 2023).

Practice Approach

Themes around using AI in social work practice that emerge from the above include the following.

  • AI may assist the social worker in improving the wellbeing of individuals, families, and communities, but it lacks the level of critical thinking, reflection, and analysis of a social worker. While AI can be a useful tool for social workers and its capabilities will continue to develop, it cannot replace the human connection and empathy that are essential in the field of social work. It is important for social workers to maintain a human-centred approach, using AI to enhance their work rather than replace it.

  • AI has the potential to significantly reduce time and effort in some areas: mundane tasks in qualitative and quantitative research, routine tasks such as scheduling appointments, filling out paperwork, writing case notes, treatment plans, and progress reports, updating training manuals, synthesising and updating policy documents.

    • In some of these areas social workers may have to refine AI output to ensure it is personalised for the individual/situation (e.g. case notes and treatment plans).

  • Ultimately, AI may be able to provide information about a person’s history, needs and outcomes; it may be able to provide alternative strategies for consideration by the social worker.

  • Social workers using AI will need education in how to ‘prompt’ the AI tool appropriately. Well-crafted prompts can improve the quality of AI-generated output; poorly designed prompts may result in the opposite.

  • Social workers have a responsibility to continuously examine and question the effects of AI technology on vulnerable populations, ensuring that the tools used do not contribute to harm or injustice.

  • AI tools are developing rapidly, requiring social workers to stay up to date with the latest advances through continuous professional development.

When using AI social workers will need to be mindful of the following.

  • How does this AI tool come up with suggestions? What data was used to train the AI tool? Has data from under-represented and marginalized communities been included in the AI tool? Has the data undergone the same rigorous peer-review process as traditional research? Is the data accurate?

  • Will people’s privacy be respected?

  • Is this AI tool the most appropriate to use in this situation?

References / Supplementary Material


Devlieghere, J., Gillingham, P., & Roose, R. (2022). Dataism versus relationshipism: A social work perspective. Nordic Social Work Research.

Dey, N. C. (2023, August). Unleashing the power of artificial intelligence in social work: A new frontier of innovation. Social Science Research Network (SSRN).

Hodgson, D., Watts, L., & Gair, S. (2023). Artificial Intelligence and implications for the Australian Social Work journal. Australian Social Work, 76(4), 425-427.

Innovative AI (2023, March). AI and social work.

James, P., Lal, J., Liao, A., Magee, L., & Soldatic, K. (2023). Algorithmic decision-making in social work practice and pedagogy: Confronting the competency/critique dilemma. Social Work Education.

Lehtiniemi, T. (2023). Contextual social valences for artificial intelligence: Anticipation that matters in social work. Information, Communication and Society.

Molala, T. S., & Mbaya, T. W. (2023). Social work and artificial intelligence: Towards the electronic social work field of specialisation. International Journal of Social Science Research and Review, 6(4), 613-621.

Naughton, J. (2023, January 8). The ChatGPT bot is causing panic now – but it’ll soon be as mundane as Excel. The Guardian.

O’Connor, S., Yan, Y., Thilo, F., Felzmann, H., Dowding, D., & Lee, J. (2022). Artificial intelligence in nursing and midwifery: A systematic review. Journal of Clinical Nursing, 00, 1-18. DOI: 10.1111/jocn.16478

Perron, B. (2023a, March 21). Generative AI for Social Work Students: Part I. Medium.

Perron, B. (2023b, April 27). Generative AI for Social Work Students: Part II. Medium.

Reamer, F. (2023). Artificial Intelligence in Social Work: Emerging Ethical Issues. International Journal of Social Work Values and Ethics, 20(2), 52-71.    

Shoaib, M. (2023, May 7). Social work and AI: The role of technology in addressing social challenges. Canasu Dream Foundation.

Spooner, K. (2023, January 12). Artificial Intelligence & ChatGPT. Australian Association of Social Workers: Technology and social work hub.

Thomas-Oxtoby, S. (2023, June 16). How the field of social work is adapting to modern technologies like virtual reality, A.I. Fortune.

Victor, B. G., Sokol, R. L., Goldkind, L., & Perron, B. E. (2023). Recommendations for social work researchers and journal editors on the use of generative AI and large language models. Journal of the Society for Social Work and Research, 14(3).

