Advice
Evidence review
Clinical and technical evidence
Regulatory bodies
A search of the Medicines and Healthcare products Regulatory Agency (MHRA) website revealed no manufacturer Field Safety Notices or Medical Device Alerts for this device. No reports of adverse events were identified from a search of the US Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database.
Clinical evidence
Two full‑text studies that evaluated the validity of the Mersey Burns app were identified (Barnes et al. 2015; Morris et al. 2014). Both studies involved patient simulations. A review of smartphone applications for burns, including Mersey Burns, was also identified (Wurzer et al. 2015).
Barnes et al. (2015) assessed the accuracy and speed of the Mersey Burns app in estimating total body surface area (TBSA) burned compared with the Lund and Browder paper chart, a standard tool for estimating TBSA. They also studied the accuracy and speed of both methods in calculating a fluid resuscitation protocol. The paper describes 2 studies: a pilot study involving 20 clinicians (study 1) was used to inform the design of the main study involving 42 medical students (study 2). In the pilot study, clinicians (10 specialist trainees and consultants from plastic surgery and 10 from emergency departments) were shown a photograph of a child with a burn injury and asked to calculate the TBSA, and then to devise a fluid resuscitation and background (maintenance) fluid protocol. A calculator and a Lund and Browder paper chart for estimating TBSA were provided; the method of calculating fluid requirements from TBSA was not stated. Eight of the 20 clinicians (40%) were uncertain how to calculate background fluid requirements in children using the comparator method and did not attempt to do so. Four of the 10 plastic surgery staff (grade ST2 to consultant) assessed the same burn with the Mersey Burns app. There were no significant differences in the TBSA, total fluid requirements, fluid rate or maintenance fluid requirements calculated using the Mersey Burns app and the comparator method. However, between‑subject variance was significantly greater with the paper chart and comparator fluid calculation method than with the Mersey Burns app for both total fluids (p<0.05) and maintenance fluids (p<0.0001).
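The comparator method for background (maintenance) fluids was not stated in the study. One commonly taught approach for children is the Holliday‑Segar formula; the sketch below is purely illustrative, and it is an assumption (not stated in the paper) that participants were expected to use this or a similar rule:

```python
def holliday_segar_daily_ml(weight_kg: float) -> float:
    """Daily maintenance fluid volume (ml) by the Holliday-Segar formula:
    100 ml/kg for the first 10 kg of body weight, 50 ml/kg for the next
    10 kg, and 20 ml/kg for each kg above 20 kg."""
    if weight_kg <= 10:
        return 100 * weight_kg
    if weight_kg <= 20:
        return 1000 + 50 * (weight_kg - 10)
    return 1500 + 20 * (weight_kg - 20)

# Example: a 25 kg child needs 1000 + 500 + 100 = 1600 ml/day,
# roughly 67 ml/hour.
```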
In the main study reported in Barnes et al. (2015), 42 senior undergraduate medical students were given a 1‑hour lecture on burns management and fluid resuscitation, including demonstrations of the Lund and Browder chart and the Mersey Burns app. The students were then presented with a prosthetic simulation of a mixed burn injury and asked to calculate the TBSA and a fluid resuscitation protocol using both the Mersey Burns app and the Lund and Browder chart. Again, the comparator method of fluid calculation used with the Lund and Browder chart was not stated. Variations in TBSA calculations, time taken and the accuracy of fluid calculations were compared between the 2 methods. Fluid volumes calculated by each student were manually checked by 2 of the paper's authors and confirmed as either correct or incorrect. Measures including preference, speed and ease of use were also assessed using a questionnaire. No significant difference was observed in the TBSA value between the Mersey Burns app and the paper chart, although the mean time to calculate the result was shorter with the app (11.7±2.8 minutes for paper [range 6 to 17 minutes] compared with 4.6±1.2 minutes for the app [range 3 to 7 minutes]; mean difference 7.1 minutes, 95% confidence interval [CI] 6.09 to 8.18). The fluid calculation was correct in 100% of cases using the Mersey Burns app for both the first 8 hours and the following 16 hours of fluid resuscitation, compared with 62% of cases (26/42; difference 0.33, 95% CI 0.17 to 0.49) for the first 8 hours and 64% of cases (27/42; difference 0.33, 95% CI 0.18 to 0.48) for the following 16 hours using the paper chart. The total fluid volume for the full 24 hours was accurately calculated with the Mersey Burns app in 100% of cases, and with the paper chart in 81% of cases (difference 0.17, 95% CI 0.05 to 0.28).
Students favoured the Mersey Burns app over the paper chart in the following categories: preference in emergency setting, confidence in output, accuracy, speed, ease of calculation, ease of overall use (p<0.0001) and shading (p=0.0007).
Morris et al. (2014) evaluated the accuracy and speed of the Mersey Burns app compared with the uBurn app (a similar app that has not been CE‑marked) and a general‑purpose electronic calculator for calculating fluid requirements using the Parkland formula. Thirty‑four participants of various clinical grades and specialties were provided with randomly generated simulated clinical data and asked to calculate fluid requirements using the electronic calculator, the Mersey Burns app and the uBurn app. All participants were from a regional burns unit and had previous experience of calculations using the Parkland formula. The clinicians scored the methods according to ease of use and order of preference, and were also invited to make written comments. There was no significant difference in the incidence or magnitude of errors between the calculator method and either of the apps. Both apps were significantly faster to use than the calculator with the Parkland formula (mean response time 86.7 seconds for the calculator method; 69.0 seconds for Mersey Burns, p=0.017; 71.7 seconds for uBurn, p=0.013), but were not significantly different from each other. All methods showed a learning effect (p<0.001). There were no significant differences in ease of use or preference ranking between the methods.
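The Parkland formula used in this study is a simple arithmetic rule: 4 ml of crystalloid per kg of body weight per %TBSA burned over the first 24 hours, with half given in the first 8 hours from the time of injury and half over the following 16 hours. A minimal sketch of that rule (an illustration only, not the published implementation of either app):

```python
def parkland_total_ml(weight_kg: float, tbsa_percent: float) -> float:
    """Parkland formula: 4 ml x body weight (kg) x %TBSA burned
    gives the crystalloid volume for the first 24 hours."""
    return 4 * weight_kg * tbsa_percent

def parkland_schedule(weight_kg: float, tbsa_percent: float) -> dict:
    """Split the 24-hour volume: half in the first 8 hours
    (from time of injury), half over the following 16 hours."""
    total = parkland_total_ml(weight_kg, tbsa_percent)
    return {"total_24h_ml": total,
            "first_8h_ml": total / 2,
            "next_16h_ml": total / 2}

# Example: a 70 kg adult with a 30% TBSA burn needs 8400 ml over
# 24 hours: 4200 ml in the first 8 hours, 4200 ml in the next 16.
```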
The data tables for Barnes et al. (2015) and Morris et al. (2014) are presented in the appendix.
A review of smartphone applications used to aid burns management (Wurzer et al. 2015) reported mainly on the functions and costs of 32 individual apps, 13 of which, including Mersey Burns, were calculator apps for estimating TBSA or total fluid requirement (TFR). All the apps were tested using simulated data for 1 male patient with an 18% TBSA burn. The TFR for the first 24 hours was manually calculated using the Parkland formula as 5400 ml. The TFR returned by the Mersey Burns app matched this manually calculated value.
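As a back‑check of the review's worked example, the 5400 ml figure for an 18% TBSA burn implies a patient weight of 75 kg under the Parkland formula; the weight is inferred here, not stated in the text above:

```python
# Back-check of the Wurzer et al. (2015) worked example: a TFR of
# 5400 ml for an 18% TBSA burn implies a body weight of 75 kg under
# the Parkland formula (4 ml x kg x %TBSA). The weight is inferred.
tbsa = 18        # % total body surface area burned
tfr_ml = 5400    # total fluid requirement quoted in the review

implied_weight_kg = tfr_ml / (4 * tbsa)
print(implied_weight_kg)  # 75.0
assert 4 * implied_weight_kg * tbsa == tfr_ml
```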
Costs and resource consequences
The Mersey Burns app is free to download and has no direct cost implications – particularly the HTML5‑compatible version, which can be used on computers already available in clinical settings. There would be a theoretical cost if mobile devices had to be provided, but it is likely that many clinicians already own smartphones or tablets. Using the app would not require any changes to current service provision.
Morris et al. (2014) found that the Mersey Burns app allowed faster calculation of fluid requirements than a general‑purpose electronic calculator. However, they indicated that this difference is unlikely to be clinically significant in practice and therefore would not affect resource use.
No published evidence on the resource consequences of using Mersey Burns was identified.
Strengths and limitations of the evidence
The evidence for the clinical effectiveness of the Mersey Burns app is very limited in both quantity and quality, with only 2 papers identified. Both were simulation studies and did not involve real patients. In Barnes et al. (2015), participants were either shown a photograph of a child with a burn injury or asked to assess a realistic prosthetic burn simulation. Participants in Morris et al. (2014) were given randomly generated simulated data to assess.
Barnes et al. (2015) compared the Mersey Burns app with the current standard paper‑based method (the Lund and Browder chart) for estimating TBSA, while Morris et al. (2014) compared it with a standard electronic calculator for fluid calculation. However, although Morris et al. (2014) used the Parkland formula, Barnes et al. (2015) did not explicitly state the method of fluid calculation used in the comparison. In both studies, the order in which the calculation methods were used was randomised to reduce bias. Morris et al. (2014) also blinded participants and investigators to the response times and correct answers in the scenarios. Barnes et al. (2015) state that 2 of the authors manually checked the fluid requirements calculated by the medical students and confirmed them as either correct or incorrect; this may have introduced analytical bias. No information is given about how the fluid requirements calculated in the clinician study were assessed. In the Morris et al. (2014) study, error magnitude was calculated using bespoke software. The studies involved participants with different levels of experience in burns care. The clinicians testing the app in the Morris et al. (2014) study were staff from a regional burns unit, all of whom had previous experience of performing calculations using the Parkland formula. In the Barnes et al. (2015) study, the app was tested mainly by undergraduate medical students with no previous experience of burns management, and only 20% (4/20) of the experienced clinician group (plastic surgeons) used the app. The participants in each of the studies may not reflect actual users.
In Barnes et al. (2015), students completed an anonymous questionnaire assessing the usability of each fluid calculation method using a Likert scale. A Likert scale is an ordinal scale, typically with 5 points, on which agreement or disagreement can be measured. Barnes et al. (2015) did not report whether the Likert statements had been validated. Likert scales may also produce unreliable results because the 'middle' statement can be an easy option for an unsure respondent, respondents may avoid selecting the extreme options, or respondents may select the option they consider the 'desirable' response. The significance levels for each category were not presented clearly: it was not clear whether all were statistically significant or only ease of overall use and ease of shading.
Three of the authors of the Barnes et al. (2015) study were involved in designing the Mersey Burns app.