R.A.P.I.D. (Remote Automated Patient Image Detection)

Tags: c, python, raspberry-pi, sqlite, flask, esp32, html/css, opencv
Published: February 21, 2023
Author: Samar Qureshi
My classmates Joe, Gary, and Swarnava and I attended MakeUofT 2023, Canada’s largest makeathon, where we built a system of devices aimed at reducing pressure ulcers (bed sores) in hospital patients.
If the images and GIFs aren’t loading, be sure to refresh the page!

Inspiration

In the US alone, over 3 million patients suffer from decubitus ulcers, or bed sores, caused by sustained pressure on one part of the body from lying in the same position for a prolonged period; in hospitals in particular, up to 38% of patients are affected [1]. Treating pressure ulcers is estimated to cost 2.5 times as much as preventing them [2]. Our team set out to tackle this problem with R.A.P.I.D., which helps nurses prioritize checking on bedridden patients more frequently.

What it does

To avoid bed sores, a patient shouldn’t remain stationary for more than two hours [3]. Our sensor module detects whether the patient has made significant movement; if the patient remains still for two hours, a nearby nurse is paged every 10 minutes until they come and rotate the patient.
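In code terms, the alerting rule reduces to a small check like the sketch below (names and structure are illustrative, not our exact implementation):

```python
import time

STILL_LIMIT_S = 2 * 60 * 60   # alert once the patient has been still for 2 h
ALERT_INTERVAL_S = 10 * 60    # then re-page the nurse every 10 min

def should_page_nurse(last_movement_ts, last_alert_ts, now=None):
    """Return True if the nurse should be paged right now."""
    now = time.time() if now is None else now
    if now - last_movement_ts < STILL_LIMIT_S:
        return False                          # patient moved recently
    if last_alert_ts is None:
        return True                           # first page after 2 h of stillness
    return now - last_alert_ts >= ALERT_INTERVAL_S   # 10-min repeat cadence
```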

How we built it

Our design comprises five modules. The first ESP32 (patient sensor) is equipped with a camera and sends its video stream over Wi-Fi to our central controller, a Flask server running on a Raspberry Pi.
The Flask server along with live video stream after OpenCV processing on the touchscreen display.
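On the Pi side, grabbing that stream is straightforward, since OpenCV can read MJPEG-over-HTTP directly. A minimal sketch, assuming a typical ESP32-CAM firmware serving MJPEG at a hypothetical local address:

```python
import cv2

STREAM_URL = "http://192.168.0.42:81/stream"   # hypothetical ESP32-CAM address

cap = cv2.VideoCapture(STREAM_URL)   # OpenCV handles MJPEG-over-HTTP natively
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        continue                     # dropped frame; keep polling
    # ... hand `frame` to the motion-detection pipeline described below ...
```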
On a separate ESP8266 (the second patient sensor), a gyro sensor detects changes in the patient’s angular velocity; 2 seconds of continuous positive input from the gyro counts as “patient movement”.
The ESP8266 with the gyro sensor to detect changes in the patient’s angular velocity.
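The firmware itself ran on the ESP8266, but the 2-second debouncing rule translates to roughly the following (the threshold value is a made-up placeholder, not our tuned one):

```python
import time

GYRO_THRESHOLD = 0.5   # deg/s; hypothetical noise floor for the gyro
MOVE_WINDOW_S = 2.0    # continuous positive input needed to count as movement

class GyroDebouncer:
    """Report 'patient movement' only after 2 s of continuous
    above-threshold angular velocity."""

    def __init__(self):
        self.window_start = None

    def update(self, angular_velocity, now=None):
        now = time.monotonic() if now is None else now
        if abs(angular_velocity) <= GYRO_THRESHOLD:
            self.window_start = None          # still reading: reset the window
            return False
        if self.window_start is None:
            self.window_start = now           # movement just began
        return now - self.window_start >= MOVE_WINDOW_S
```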
The Pi then uses OpenCV to compute the pixel-level difference between consecutive frames of the video feed, and checks for 5 seconds of continuous change, which also counts as “patient movement”.
OpenCV determining if there was movement detected or not.
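The per-frame check is classic frame differencing; here is a sketch of the kind of OpenCV pipeline involved (the blur kernel, threshold, and minimum contour area are illustrative values):

```python
import cv2

MIN_CONTOUR_AREA = 500   # ignore pixel changes smaller than this

def frame_changed(prev_gray, frame):
    """Return (moved, gray): whether `frame` differs significantly
    from the previous grayscale frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)          # suppress sensor noise
    if prev_gray is None:
        return False, gray                              # first frame: no baseline
    delta = cv2.absdiff(prev_gray, gray)                # per-pixel difference
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)     # fill small gaps
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    moved = any(cv2.contourArea(c) > MIN_CONTOUR_AREA for c in contours)
    return moved, gray
```

The 5-second continuity requirement is then layered on top of this per-frame result, analogous to the gyro debouncing above.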
By fusing the gyro readings with the camera feed from the first ESP32, the Pi detects movement with more depth and precision than the camera alone.
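Conceptually the fusion is just a combination of the two boolean signals; the sketch below uses an AND, since requiring agreement suppresses false positives (e.g., a nurse walking through the frame), while an OR would trade precision for sensitivity:

```python
def patient_moved(camera_moved: bool, gyro_moved: bool) -> bool:
    # Require both sensors to agree before counting it as patient movement.
    return camera_moved and gyro_moved
```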
Getting alerted by the pager to move the patient due to 2hrs+ of inactivity.
To pull the movement data from the Pi, the M5Stack (another ESP32) polls the Pi every 5 minutes and alerts the nurse if the patient needs to be moved. The data is also visualized on the display as separate patient rooms, served by a simple Flask page with minimal HTML/CSS.
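On the Pi, that boils down to a small Flask route the M5Stack can poll; the route name and in-memory state below are illustrative (our real build used SQLite for this state):

```python
import time
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory state, one entry per patient room.
patients = {"room-101": {"last_movement_ts": time.time()}}

@app.route("/status/<room>")
def status(room):
    """Endpoint the M5Stack polls every 5 minutes."""
    idle_s = time.time() - patients[room]["last_movement_ts"]
    return jsonify({"room": room,
                    "needs_rotation": idle_s >= 2 * 60 * 60})
```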
Our systems diagram of how each node sends information around.
A visual of each of the components.
We also had access to a Qualcomm Hardware Development Kit 8450 with a built-in neural engine, on which we wanted to run TensorFlow Lite to classify different types of patient movement (e.g., leg movement, twisting, rolling), but we were unable to get it to communicate with the Raspberry Pi due to our lack of Android development experience.
TFLite on the Qualcomm HDK.

References
