How to start building a simple symptom checker with the Infermedica API

We often hear questions like “Which API endpoints are obligatory?” or “How do you use endpoints to create a symptom checker?”, as well as more technical ones like “What is the relationship between an API endpoint and its parameters?”

We’ve taken a moment to collect the most common questions and answer them in this article, which shows how to build a simple symptom checker using the Infermedica API.

In short, an API endpoint is a URL that exposes a specific function of the API, and parameters are options added to a request to shape exactly what information you receive in the response. For example, a call to /search with the parameter phrase=headache returns symptoms matching the word “headache”.

This workflow example will be based on the one used in the Infermedica Triage module. Configuration examples will complement each step of the workflow and include links to the corresponding API documentation when possible.

Use case overview: a symptom-checking tool

Symptom checkers are commonly built to guide new or existing users to the right medical services. The interview, built on Infermedica’s platform, collects the initial evidence, analyzes it, and asks any additional questions needed to calculate the most probable conditions.

If we take a look at the interview process in-depth, we can split it into the following steps:

  1. Welcome screen (core)
  2. Terms and conditions (core)
  3. 1st/3rd person point of view (complementary)
  4. Pediatrics (extension)
  5. Age and gender (core)
  6. Common risk factors (complementary)
  7. Initial symptoms (core)
  8. Regional risk factors (complementary)
  9. Related symptoms (complementary)
  10. Red flags (complementary)
  11. Dynamic interview (core)
  12. Rationale (complementary)
  13. Explanations (complementary)
  14. Results screen (core)
  15. Specialist recommender (extension)
  16. Channel recommender (extension)
  17. Patient education (extension)

In this article, we will focus on the core steps.

Before proceeding further, think about the goals of your symptom checker and how you could measure them. Wondering whether to add any analytical tools to your app? Check our example KPIs here ->

Welcome screen

This is the initial step for every user. Here we look to inform the user about the tool, its goal, and the following steps. It’s a good place to mention Infermedica, as required by our Terms of Service. This step does not require API calls and should be developed separately by the implementation team.



Check what vocabulary to use when communicating about symptom checkers -> 

Terms and conditions

The next step involves taking care of all the legal issues. Here we present the user with the Privacy Policy and Terms of Service and remind them that the interview cannot replace an actual medical consultation or diagnosis. This step does not require API calls and is developed by the implementation team.



Age and gender

Our Inference Engine needs to know these two pieces of demographic information to properly assign the probabilities of the conditions at the end of the interview.

Age and gender information is required by the /diagnosis endpoint -> for it to function correctly.
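As a minimal sketch, the demographic skeleton of a /diagnosis request body could be built as below. The field names follow Infermedica API v3 conventions; the helper function name and validation are illustrative, and credentials go in App-Id / App-Key HTTP headers (not shown here).

```python
def base_diagnosis_request(age_value: int, sex: str) -> dict:
    """Build the base /diagnosis request body with the required demographics."""
    if sex not in ("male", "female"):
        raise ValueError("sex must be 'male' or 'female'")
    if not 0 < age_value <= 130:
        raise ValueError("age must be a plausible number of years")
    return {
        "sex": sex,                   # sex assigned at birth
        "age": {"value": age_value},  # age in years
        "evidence": [],               # filled in as the interview progresses
    }
```

Every subsequent /diagnosis call reuses this skeleton, appending evidence items as the user's answers come in.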


Why do we ask about which gender was assigned at birth? Check here ->

Initial symptoms

This is where we learn about the user's chief complaints, i.e., the first symptoms they can identify. A few options are available in the API when collecting symptoms and providing them to the /diagnosis endpoint.

The first option is the /search endpoint, which allows the user to find individual observations or symptoms that match a given phrase. The full list of available symptoms can also be retrieved via the /symptoms and /concepts endpoints.

The second is the /parse endpoint, which allows the user to enter free text and have their symptoms identified from it. This is mainly used in conversational solutions such as chatbots.

The third is a body map, which is not part of any endpoint but can easily be developed.
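The first two options can be sketched as follows: /search as a GET request with query parameters, and /parse as a POST body. The base URL and field names assume API v3 conventions, and the `source: "initial"` flag marking chief complaints is an assumption based on the v3 evidence format.

```python
from urllib.parse import urlencode

BASE_URL = "https://api.infermedica.com/v3"  # assumed v3 base URL

def search_url(phrase: str, age_value: int, sex: str) -> str:
    """Build a GET /search URL; age and sex are passed as query parameters."""
    query = urlencode({"phrase": phrase, "age.value": age_value, "sex": sex})
    return f"{BASE_URL}/search?{query}"

def parse_request(text: str, age_value: int, sex: str) -> dict:
    """Build the POST /parse body used to extract symptom mentions from free text."""
    return {"text": text, "age": {"value": age_value}, "sex": sex}

def to_evidence(search_hit: dict) -> dict:
    """Turn one /search result ({'id': ..., 'label': ...}) into an evidence item."""
    return {
        "id": search_hit["id"],
        "choice_id": "present",
        "source": "initial",  # marks a chief complaint (assumed v3 flag)
    }
```

Whichever collection method you choose (search, free text, or a body map), the outcome is the same: a list of evidence items that seed the /diagnosis request.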


Dynamic interview

This step is a key part of every interview. After the initial information about a user has been collected, we move forward to the dynamic interview, where we ask a series of questions about the user's health and narrow down the list of probable conditions.

The interview uses the /diagnosis endpoint as its core point of contact and continues until the engine is confident about the top 8 conditions. The end goal is to produce the most accurate list of conditions. Various interview modes are also available in case the interview goal is different, e.g., calculating only the triage recommendation.

The should_stop attribute is used to determine how long an interview should last. Once the system has enough information to make a decision, the should_stop attribute will be set in the response, indicating that the interview is finished. It's important to note that for the should_stop attribute to function properly, initial evidence needs to be provided at the beginning of the interview. Find out more about the should_stop attribute here ->
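The interview loop described above can be sketched as below. The two helpers are assumptions standing in for your app's plumbing: `call_diagnosis(payload)` would POST the payload to /diagnosis and return the parsed JSON, and `answer_question(question)` would present the question to the user and return their answers as evidence items.

```python
def run_interview(payload: dict, call_diagnosis, answer_question) -> dict:
    """Repeat /diagnosis calls until the engine sets should_stop."""
    while True:
        response = call_diagnosis(payload)
        if response.get("should_stop"):
            # The final condition ranking is in response["conditions"].
            return response
        # Otherwise, show the next question and append the answers as evidence.
        question = response["question"]
        payload["evidence"].extend(answer_question(question))
```

Because each call sends the full accumulated evidence, the engine can re-rank conditions on every round until it has enough information to stop.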



Results screen

The final core step of our process is the presentation of the results. In the example here, we show two types of information: 

  • a list of the most probable conditions, which is generated by the /diagnosis endpoint,
  • the recommended triage level, which is generated by our /triage endpoint.
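Assembling these two pieces of information for display could look like the sketch below. The field names (`conditions`, `common_name`, `probability`, `triage_level`) follow the v3 response format; the helper name and output shape are illustrative.

```python
def format_results(diagnosis_response: dict, triage_response: dict) -> dict:
    """Extract the two things the results screen shows: conditions and triage."""
    conditions = [
        {
            "name": c.get("common_name", c.get("name")),
            "probability": c["probability"],
        }
        for c in diagnosis_response.get("conditions", [])
    ]
    return {
        "conditions": conditions,
        "triage_level": triage_response.get("triage_level"),  # e.g. "consultation"
    }
```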


This is the end of the core flow for the symptom checker, but it is not the end of the user journey. We strongly encourage you to look for ways to connect the Results screen with other available healthcare services ->

Check out our article on additional endpoints that improve the user experience and interview accuracy here →