On January 8, 2025, Training Magazine Network hosted a webinar by Dr. Lewis Johnson entitled “Using AI to Train and Assess Soft Skills: Examples from Healthcare.” Here is a link to a recording of the webinar, and here is a link to the presentation slides. Over 1200 people registered and nearly 400 people attended live.
Dr. Johnson described how to use generative AI to create interactive role-play scenarios and then evaluate trainees’ application of soft skills in those scenarios. It is possible to create proof-of-concept training scenarios very quickly using off-the-shelf tools such as ChatGPT. Dr. Johnson then described the additional development steps a designer should follow to turn that proof of concept into a reliable, effective training tool. Experts at Alelo can assist training developers through this process. The webinar included a live demonstration of a role-play simulation to train community health workers to communicate effectively with caregivers.
The webinar audience was very engaged and asked many good questions. Unfortunately, there was not enough time to answer all of them live. This blog post answers some of the questions we did not get to; in some cases the questions have been edited for clarity.
Q: Is Alelo Enskill just for healthcare?
A: Alelo Enskill can be used to train people for a variety of occupations, in a variety of industries. Example industries range from accounting to hospitality to software. Example roles include sales, customer service, and management. The webinar focused on healthcare because soft skills are especially important in healthcare and generative AI can be applied in multiple ways in this industry. If you are interested in training for other industries, contact us at inquiries@alelo.com.
Q: Do you have tips for getting GenAI approved for use in regulated industries like healthcare?
A: If you are interested in this question, I encourage you to look at our recent white paper entitled “Building Trust into AI Health Navigators.” The principles described there apply broadly to obtaining regulatory approval:
- Define clearly the role that the AI application will perform, and put in place guardrails and other restrictions to ensure that the AI stays within that role. It is easier to obtain regulatory approval for AI that performs a circumscribed role. For example, if the AI is intended to simulate a patient in a training scenario, the regulatory requirements are typically less stringent than for AI that simulates a doctor and makes clinical decisions.
- Talk with the teams responsible for regulatory review and understand what documentation and other materials they will need to complete their review. Provide them with documentation that is similar to what they are already familiar with.
- When building your application, use retrieval-augmented generation (RAG) so that the AI draws on documents that have already received regulatory approval; a minimal sketch of this pattern follows this list.
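To make the RAG point concrete, here is a minimal sketch of the pattern in Python. It assumes the OpenAI Python client; the document snippets, the keyword-overlap retrieval, and the model name are illustrative placeholders, and a production system would use a proper vector index over the approved source documents.

```python
# Minimal RAG sketch: answer only from pre-approved reference documents.
# Assumes the OpenAI Python client; document texts and names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In practice these would be chunks of regulator-approved documents
# (package inserts, approved FAQ pages, etc.), not hard-coded strings.
APPROVED_DOCS = {
    "package_insert_dosage": "Administer 10 mg once daily with food. ...",
    "package_insert_storage": "Store between 20 and 25 degrees Celsius. ...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Very simple keyword-overlap retrieval; a real system would use
    embeddings or a vector database instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        APPROVED_DOCS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only using the approved reference material below. "
                    "If the answer is not in the material, say you do not know.\n\n"
                    + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("What is the recommended dosage?"))
```

Because the model is instructed to answer only from retrieved, already-approved material, the regulatory review can focus on the source documents and the guardrails rather than on the model's open-ended behavior.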
Q: In your avatar’s conversational rules, you state that if answers are not available then it should be creative, but realistic. How does it decide what to say? Is there concern that it will make something up?
A: A certain degree of creativity is necessary and is one of the strengths of generative AI. Without it, avatar responses would be limited and training would be repetitive. To ensure that creativity does not yield inappropriate responses, it is important to provide specifics about the context of the scenario and give concrete examples of desired avatar responses. This will help ensure that the model’s responses meet expectations. Certain types of creativity are unacceptable and should be explicitly ruled out. For example, avatars playing the role of medical professionals should rely on authoritative medical sources and never make up medical information.
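As an illustration of the kind of guardrails described above, here is a sketch of a scenario prompt in Python. The scenario, wording, and example exchange are invented for illustration and are not Alelo's actual configuration; the key ingredients are concrete scenario context, example responses, and an explicit rule against inventing medical information.

```python
# Illustrative scenario prompt with guardrails. This string would be passed
# to the language model as the system message for every turn of the role-play.
SYSTEM_PROMPT = """\
You are playing Maria, the caregiver of an elderly patient with diabetes,
in a role-play conversation with a community health worker (the trainee).

Context: Maria is worried about managing her father's medication schedule
and has limited time because she works two jobs.

Stay in character. Keep replies short and conversational. You may improvise
realistic everyday details (schedules, feelings, family circumstances).

Never invent medical facts, drug names, dosages, or clinical advice.
If the trainee asks a medical question, respond as a layperson would,
for example: "I'm not sure, that's what I wanted to ask you about."

Example exchange:
Trainee: "How are you feeling about your father's care plan?"
Maria: "Honestly, a little overwhelmed. There are so many pills to keep track of."
"""
```

The scenario context bounds what "realistic" means, the example exchange anchors tone and length, and the explicit prohibition rules out the one kind of creativity that would be unacceptable in this setting.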
Q: Have you used this type of model with technical skills? For instance, building an appropriate plan of care for a patient?
A: This type of model can be used for technical skills. For example, we have created a model to answer questions from health professionals about how to administer drugs. In such applications it is important to follow the guidelines outlined above: clearly define the role of the model and give it authoritative technical resources to draw on. Our drug navigators rely only on information sources that have undergone regulatory approval, such as package inserts, FAQ (frequently asked question) pages, and refereed publications.
Q: Is it true that you have to pay per response or input from the user?
A: If you are relying on commercial large language models such as OpenAI’s GPT-4o or Google Gemini, or on commercial training providers such as Alelo, then yes, you will have to pay for usage. Prices depend on the particular service; typically, only individual trial subscriptions are free.
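For a rough sense of what usage-based pricing means in practice, here is a back-of-the-envelope estimate in Python. The per-token prices and token counts below are placeholders for illustration only, not actual rates from any provider; check your provider's current price list.

```python
# Back-of-the-envelope usage-cost estimate. Prices are placeholders, in USD.
PRICE_PER_1K_INPUT_TOKENS = 0.005   # placeholder rate
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # placeholder rate

def conversation_cost(turns: int, input_tokens_per_turn: int,
                      output_tokens_per_turn: int) -> float:
    """Estimate the model-usage cost of one role-play conversation."""
    input_cost = turns * input_tokens_per_turn / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = turns * output_tokens_per_turn / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return input_cost + output_cost

# Example: a 20-turn practice conversation with roughly 500 input tokens
# (prompt plus history) and 150 output tokens per turn.
print(f"${conversation_cost(20, 500, 150):.2f}")
```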
Q: If you’re anonymizing transcripts, how do trainers know which learners need more support?
A: The trainer dashboard provides access to learner transcripts; however, learner personal information is not stored in the transcript data set. Instead, anonymized IDs are used. This helps protect the data from unauthorized use by third parties.
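One common way to implement this kind of anonymization, shown here as an illustrative sketch rather than Alelo's actual implementation, is to store only a salted, non-reversible learner ID in the transcript data set, while the mapping back to the learner lives in a separately access-controlled store used by the trainer dashboard.

```python
# Sketch of pseudonymous learner IDs. Names, the salt handling, and the
# in-memory mapping are illustrative only.
import hashlib
import hmac

SECRET_SALT = b"keep-this-in-a-secrets-manager"  # placeholder

def anonymized_id(learner_email: str) -> str:
    """Derive a stable, non-reversible pseudonymous ID from a learner identifier."""
    return hmac.new(SECRET_SALT, learner_email.lower().encode(),
                    hashlib.sha256).hexdigest()[:12]

# The transcript data set stores only the anonymized ID and the results...
transcript_record = {"learner_id": anonymized_id("pat@example.com"), "score": 72}

# ...while a separate, access-controlled mapping lets authorized trainers
# resolve the ID back to a learner when follow-up support is needed.
mapping = {anonymized_id("pat@example.com"): "pat@example.com"}
print(transcript_record, mapping[transcript_record["learner_id"]])
```

Because the same learner always maps to the same ID, trainers can still track progress over time and identify who needs support, without personal information ever appearing in the transcript data itself.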
Q: Is there an Alelo sandbox link that you can share that we can test in and bring back to our teams?
A: Contact us and we can provide you with a trial account that your team can test out.