The British Journal of Medical Practitioners has adopted a "Continuous Publication" model from the beginning of 2010, publishing articles online as soon as they are peer-reviewed and copy-edited. This provides faster publication for authors and quicker access for readers.
The BJMP website will now be updated regularly with the latest articles and we will continue to collate published articles into archival "issues" (about 4 issues per year).
It is widely acknowledged that medicine can be a high-stress profession. The reasons behind this observation have been the focus of research in recent years, as concerns over the welfare of doctors have grown, given their relevance both to the burnout of individuals and to the safeguarding of healthcare systems. Despite this attention, a recent survey of hospital doctors showed that 80% experienced workplace stress, and the junior doctors surveyed suffered significantly higher burnout rates than their consultants.1 Separate research has specifically found that junior doctors have a poor work-life balance, a composite measure of individual factors affecting wellbeing.2 There also appear to be differences in the wellbeing of doctors in different specialities: a study in 2016 showed higher levels of leisure time enjoyed by general practitioners compared with doctors working in other specialities.3 Another survey showed that psychiatrists experienced lower levels of burnout than surgeons did.4 Furthermore, different burnout rates have been observed between consultants and junior doctors working in psychiatry.5
We sought to build on existing research by studying the work-life balance of junior doctors and how various factors might affect it. We also decided to explore what factors might contribute to the differences in wellbeing between medical specialities and professional grades.
Method
Junior doctors working across an English county in general practice, medical and surgical specialities (the "non-psychiatric setting"), and in psychiatric specialities (the "psychiatric setting"), were recruited into a cross-sectional study between September and December 2019. To enable appropriate comparison between groups, junior doctors had to be working at a level between Foundation Year 2 (FY2) and consultant in their relevant speciality. This was necessary because the on-call responsibility of Foundation Year 1 (FY1) doctors in this locality varies significantly from that of more senior doctors.
All doctors were required to complete the SWING (Survey Work-home Interaction-NijmeGen) questionnaire,6 a validated instrument measuring four aspects of work-home interaction. This questionnaire is split into negative (questions 1-12) and positive (questions 13-22) subscales, where lower and higher scores are better respectively. For each question, four responses ranging from "never" to "always" could be returned. Demographic information was also collected to assess the similarity of the participant groups and to identify any effect of these variables: age, gender, and whether the respondent had children under the age of 18. No identifying information was requested, to preserve staff anonymity, and no incentive was offered for participating.
Ethical approval for the study was granted by the local Medical Education Departments. Data from completed questionnaires were recorded in an Excel spreadsheet, which was used for collation and analysis. The significance of between-group differences was calculated using the chi-squared test, with the threshold for statistical significance set at p<0.05. To allow comparison between the answers given for each questionnaire item, 1, 2 and 3 points were allocated to each "sometimes", "often" and "always" response respectively. The sum of these points for each question gave the "overall question score", with lower and higher scores reflecting better work-life balance on the negative and positive subscales respectively. Overall question scores were also calculated as percentages of the maximum possible score for each question or subscale (i.e. the score if every respondent had answered "always").
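The scoring described above amounts to a simple weighted tally. As a minimal sketch (assuming "never" responses score 0 points, which the percentage-of-maximum calculation implies; the response list is hypothetical):

```python
# Mapping of SWING responses to points, as described in the Method
# (assumption: "never" scores 0, since only the other three responses are allocated points).
POINTS = {"never": 0, "sometimes": 1, "often": 2, "always": 3}

def overall_question_score(responses):
    """Sum of points across all respondents for one questionnaire item."""
    return sum(POINTS[r] for r in responses)

def percent_of_max(responses):
    """Score as a percentage of the maximum possible (every respondent answering 'always')."""
    return 100 * overall_question_score(responses) / (3 * len(responses))

# Hypothetical responses to one item from five respondents:
answers = ["never", "sometimes", "often", "always", "often"]
print(overall_question_score(answers))   # 0 + 1 + 2 + 3 + 2 = 8
print(round(percent_of_max(answers), 1)) # 100 * 8 / 15 = 53.3
```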
Results
Questionnaires were returned by 99 junior doctors (54 working in the non-psychiatric setting, and 45 working in the psychiatric setting). Demographic details are shown in Table 1. Not all respondents returned demographic details. There were no significant differences in the ages and genders of respondents between the two settings, but there were significantly more doctors with children <18 years in the psychiatric setting.
Table 1
Table 2
Questionnaire responses are shown in Table 2, along with calculated overall question scores and overall subscale scores for each subscale in both settings. Differences in overall question scores between settings are shown in Figure 1 and Figure 2.
Figure 1
Figure 2
Overall question scores across the negative subscale were generally high, indicating a high incidence of negative work-home interaction among all respondents. Scores for questions 1-8, which ask about the negative impact of work on home life, showed little or no difference between the two settings. Questions 9-12, which ask about the negative impact of home life on work, recorded much lower scores in both settings, but there was separation between the settings, with scores in the psychiatric setting being higher than those in the non-psychiatric setting.
In the positive subscale, questions 13-17 ask about the positive impact of work on home life, and questions 18-22 about the positive impact of home life on work. Overall, there was a much clearer separation in scores between the two settings than in the negative subscale. Aside from question 13, scores in the psychiatric setting were consistently higher than those in the non-psychiatric setting.
Main findings of this study can therefore be summarised as:
High negative impact of work on home life in both settings
Lower levels of negative impact of home life on work, but higher in the psychiatric setting
Higher positive impact of home life on work, and work on home life, in the psychiatric setting than in the non-psychiatric setting
Discussion
There has been great interest in the wellbeing of junior doctors in recent years, resulting in a number of changes to working patterns, such as the move away from the old "firm" structure of medical training and the introduction of the European Working Time Directive.7 However, the perceived wellbeing of junior doctors in the UK still seems poor, and has resulted in a so-called "Drexit" of junior doctors to countries such as Australia that are perceived to offer a better quality of life, or away from medicine altogether.7 One survey shockingly revealed that almost half of UK junior doctors have considered leaving the National Health Service, citing concerns over wellbeing.7 It is therefore unsurprising that in 2018 only 38% of FY2 doctors continued into speciality training.8
Various aspects of junior doctor wellbeing and contributory factors have been researched. For example, a large survey of Australian junior doctors published in 2020 showed that those working only a few more hours than the average were more than twice as likely to report common mental disorders.9 Many interacting themes have been qualitatively identified, such as those found in a recent Australian qualitative survey.10 These ranged from institutional issues such as discouragement to claim overtime, to cultural issues such as not wanting to ask for assistance, to personal issues such as time for personal care. Another study found multiple factors to be correlated with higher rates of burnout in hospital doctors, including male sex, younger age, and lower years of practice.1
It seems that wellbeing in junior doctors is a highly complex, multifactorial issue with many interacting contributory factors. In addition to considering the individual factors at work, it is also necessary to consider how these factors interact on a larger scale. One way which researchers have done this, and which we have replicated, is to consider the concept of “work-life balance”, which explores the interaction between work and home life, and vice-versa. Existing research in junior doctors has found work-life balance to be particularly poor in those with children and in women, who frequently cited that this had resulted in a change in career direction.2
Unsurprisingly, we have found high levels of work negatively impacting on home life in both psychiatric and non-psychiatric settings. Since work-life balance involves many interacting components, we speculated that it may differ between junior doctors working in different medical specialities. Indeed, we detected such differences, with the reported negative impact of home life on work being higher among those trainees in the psychiatric setting than those in the non-psychiatric setting. In a cross-sectional study like ours, it is not possible to comment on causality but we noted that there were significantly more trainees in the psychiatric setting who had children. This correlates with previous findings,2 and raises the possibility of a causative relationship between having children under 18 and negative impact on work. A study of stress in psychiatrists which gathered responses from 449 participants found that sickness of children and arranging childcare were among the top five stressors identified.11
Trainees in the psychiatric setting have consistently reported higher levels of positive impact of work on home life and vice-versa. One possible explanation is that the nature of psychiatry is inherently different to other areas of medicine, with a focus on promoting the quality of patient interaction, and training time dedicated to exploring this in detail. Supervision of patient contact is also conducted more thoroughly than in other specialities, which may lead to a greater sense of being supported in clinical decision making when trainees work in psychiatry.
Strengths and limitations
Regarding strengths of this study, we used an innovative method in seeking to compare trainees across two different settings. The questionnaire used was validated and holistic in examining bidirectional interaction between work and home life. Groups were well-matched in terms of the selection of trainees with broadly similar working rotas, and in their age and sex, which have been shown to be important variables which can affect work-life balance. We also used an innovative method in analysing the questionnaire responses which enabled us to compare directly between the two settings.
There are several limitations to this methodology, which identify interesting and important areas for future research. For example, we did not investigate differences in work-life balance between staff working in inpatient and community settings. Additionally, it was not possible to draw conclusions about causality with this cross-sectional methodology; a longitudinal method with a more detailed exploration of demographic factors may provide interesting insights in the future. Due to local differences in the way psychiatric and general healthcare services are organised in our area, it was not practical to measure participant engagement with the study; attempting to do so would have made the study impracticable. There were, however, 99 responses included in this study, with similar representation in both healthcare settings, which represents a good sample relative to the local population of doctors in the settings studied.
There will inherently be local differences in working patterns, and therefore the results of this study are not directly generalizable to a national or international population. The non-psychiatric setting is broad in its scope and includes trainees undertaking varied forms of medical and surgical training, and therefore there are likely to be more subtle variations which were missed in this approach.
Conclusion
This study adds to the literature on work-life balance in junior doctors, which is an important area of research in order to promote the wellbeing of the current and future medical workforce. It also explores how factors affecting wellbeing might interact on a higher level than when studied in isolation, and how these interactions may differ depending on the medical speciality in which the respective doctors work.
Because of the local variations in working patterns, we would suggest replication of this research in other areas of the UK and abroad. An interesting area for future research may be the exploration of differences in work-life balance between narrower groups of trainees, which may aid the development of policies to support doctors in maintaining a healthy work-life balance across different specialities. In particular, we feel that trainees with young children would benefit from further research, as we found a possible association between having children under 18 and a negative impact of home life on work.
Sceptical attitudes towards the effectiveness and/or safety of Covid-19 vaccines are currently a major risk to global health. However, not every person declining Covid-19 vaccination is an irrational conspiracy theorist (1). Patients with specific conditions may have justified concerns: in the absence of safety data for their particular health problems, they may find it difficult to appraise the risks associated with vaccination in their condition.
Patients suffering from long-term complications of Covid-19 have coined the term "long covid" to describe their debilitating illness (2). Many clinicians feel that the complexity of long covid may reflect different pathological processes (3), with respiratory symptoms being primarily secondary to tissue damage, whilst fatigue and its associated post-exertional symptoms, such as physical pain or brain fog, result from a dysregulated immune response (4).
Two mRNA vaccines, developed by Pfizer-BioNTech and Moderna, have demonstrated impressive levels of immunity against the SARS-CoV-2 virus in randomised controlled trials (5,6). This relatively new technology has several advantages that made these among the earliest vaccines to be developed, tested, scaled up and subsequently approved for use around the world. The potency of the immune response is another significant advantage of mRNA vaccines, as suggested by previous in vitro and animal experiments (7).
This potency is naturally a positive characteristic especially when mRNA vaccine technology is used against an easily transmissible and potentially lethal disease. However, for patients suffering from long covid, such a strong immune response could be a cause for concern.
As vaccination programmes against the SARS-CoV-2 virus are rolled out around the world, long covid patients face a difficult decision, as no data are available about the impact of mRNA vaccines on their condition. In the UK, long covid is not considered a contraindication to vaccination (8); however, in the absence of any safety data for this group of patients, it is very difficult to provide an informed opinion about the risk.
Methods
In the summer of 2020, Wrightington, Wigan and Leigh NHS Trust Hospitals established a dedicated service for staff suffering from long covid. As Health Care Workers (HCW) in the UK were prioritised for vaccination, the Pfizer-BioNTech vaccine was offered to all hospital employees, with the first dose provided between the end of December 2020 and the end of January 2021.
A survey questionnaire was sent to all long covid staff members two weeks following the conclusion of the first-dose roll-out. The e-mail addresses were obtained from the long covid clinic database. This short questionnaire evaluated the rate of acceptance of the vaccine, reasons for declining, immediate side effects and any persistent change in long covid symptoms following vaccination. The survey was approved by the information governance department.
Results
The questionnaire was sent to 117 HCW. Of the 83 responses received, 77 subjects had been offered the vaccine (age range: 18-65, with only 7 male respondents).
Ten HCW (13%) declined the vaccine, with 5 of them citing concerns about worsening symptoms as the main reason. Of the 67 HCW who received the vaccine, 48 (72%) had immediate but self-limiting side effects.
Fatigue, shortness of breath and anxiety were the most common long covid symptoms in our cohort (75%, 53% and 18% respectively). Several weeks following vaccination, 45 subjects (67%) reported no change in symptoms. Fourteen (21%) reported improvement of one or more of their symptoms (8 experienced improving respiratory symptoms, 4 improving fatigue, 5 improving anxiety and 2 improvement in other symptoms). Eight subjects (12%) reported worsening symptoms, including fatigue (3 subjects), respiratory symptoms (1 subject) and anxiety (2 subjects). Two subjects experienced worsening of other symptoms.
Discussion
When offered vaccination, our long covid patients showed a higher rate of acceptance (86%) than the general population (9). However, five patients declined the vaccine because of concerns about worsening symptoms.
Despite the small number of subjects, the limitations of the survey methodology and the relatively short period following vaccination, our report is the first to comment on the response of a cohort of long covid patients to mRNA vaccination. Most of our HCW did not report any change in their symptoms; encouragingly, 21% experienced subjective improvement, with 10% of all participants reporting improvement in respiratory symptoms. The 8 subjects reporting worsening of symptoms experienced more diverse problems, with worsening fatigue the most common.
Our results were consistent with unpublished data reporting the feedback of 473 long covid social media users (10): 32% of this self-selecting population reported improvement of symptoms, whilst 17% reported worsening of symptoms.
We would like to suggest two potential explanations for our findings. First, comprehensive investigations of the respiratory system can be normal in some long covid patients complaining of shortness of breath (11), and dysfunctional breathing might contribute to the severity of that symptom (12). The confidence patients gain from taking the vaccine may reduce their anxiety and, subsequently, their perception of respiratory effort.
Another potential explanation is the complex way mRNA vaccines manipulate the immune system, potentially improving or worsening the already dysregulated immunity of long covid patients (4). It is encouraging that long covid patients were about twice as likely to experience improvement of symptoms as worsening. We hope that our findings may be an early source of reassurance that mRNA Covid-19 vaccines are not commonly associated with adverse effects in long covid patients.
We feel that longitudinal studies appraising long covid symptoms and immunological markers before and after mRNA vaccination may have the potential not only to improve understanding of the main long covid pathologies, but also to unlock the secrets of Chronic Fatigue Syndrome / Myalgic Encephalomyelitis (ME/CFS), a common condition possibly sharing many characteristics of long covid.
A 40-year-old non-alcoholic and non-diabetic agricultural laborer presented with skin lesions around his neck, forearms and feet (sun-exposed areas), along with glossitis. Pellagra was suspected because of Casal's necklace (i.e., erythematous, hyperpigmented, scaly lesions around his neck; arrow in Figure 1). However, he did not have diarrhea or neurological manifestations. Pellagra is due to niacin (vitamin B3) deficiency. Typical cases of pellagra are associated with three Ds: dermatitis, diarrhea and dementia (and, if not treated, the fourth D, death).1,2 Few patients have all three Ds. The skin is most commonly involved, as dermatitis (pelle: skin; agra: rough). The patient belonged to a poor socioeconomic background.2 His vital parameters and basic investigations were all within normal limits, and HIV-ELISA was negative.
The diagnosis of a pellagra-like dermatitis was entertained.3 He was treated with multivitamin capsules which included Niacinamide.2
The skin lesions had disappeared dramatically at the time of follow-up after one month (figure 2).
There is considerable evidence for the benefit of simulation among foundation year doctors.1 Simulation training delivered during the 2 years has tended to focus on the management of the acutely unwell patient, procedures and practical aspects of delivering medical care, such as DNAR discussions, breaking bad news and capacity assessments.2-5 However, to date, there has been less focus on the benefits of developing more complex communication skills that may assist foundation year doctors in dealing with patients with mental health diagnoses or needs. These skills may include performing risk assessments, managing the agitated patient and forming initial management plans for patients in medical settings with mental health problems. This is important, as people with mental health needs have a higher burden of physical morbidity and are hence likely to be encountered in acute care settings.6
Since Health Education England’s Broadening the Foundation Programme report in 2014, there has been a surge in the number of foundation trainees working in psychiatry.7 The development of complex communication skills was an expected natural outcome of these rotations.8 However, this has not always happened – foundation trainees on a psychiatry rotation have stated that they are often recognised only for their medical skills, and that assessment and management was predominantly senior-led.9
Taking this into account, we set out to develop a simulation-based complex communication skills programme available for all F1s and F2s based in the North Central and East London Foundation School. Our focus was on the development of the transferable skills in communication and management that would be useful for dealing with patients with mental health diagnoses in a medical setting.
METHOD
Following a pilot study in 2018, funding was secured for 2019 from Health Education England to run half-day simulation sessions for foundation trainees in complex communication skills and the management of common mental health presentations to primary and secondary care settings.
Half-day sessions took place in hospitals in North and East London. A total of 121 foundation year doctors took part in the sessions; a breakdown can be seen in Table 1. All sessions took place between May 2019 and March 2020.
Table 1: Participants by Site and Year

| Year | Region       | Site          | Cohort    | Number of trainees |
|------|--------------|---------------|-----------|--------------------|
| 2019 | North London | Whittington   | FY1 & FY2 | 9                  |
| 2019 | North London | Royal Free    | FY1 & FY2 | 11                 |
| 2019 | North London | Barnet        | FY1 & FY2 | 8                  |
| 2019 | East London  | Homerton      | FY2       | 16                 |
| 2019 | East London  | Homerton      | FY1       | 14                 |
| 2019 | East London  | Royal London  | FY1 & FY2 | 3                  |
| 2020 | North London | Whittington   | FY1 & FY2 | 19                 |
| 2020 | East London  | Homerton      | FY1 & FY2 | 33                 |
| 2020 | East London  | Whipp's Cross | FY1 & FY2 | 8                  |
Facilitators
Each simulation group had one facilitator who offered feedback to participants. Facilitators were consultants, higher trainees and core trainees from the North and East London deaneries.
Session organisers
A session organiser was present at every session. They delivered the introductory briefing for participating doctors, provided a briefing for the actors, time-kept and held a feedback session at the end.
Venues
Four half-day sessions were run in North London, and five half-day sessions were run in East London. Three sessions were cancelled due to too few doctors registering to participate, and a further session was cancelled due to COVID-19.
Scenarios
Participants were presented with six scenarios in each session (Box 1), covering presentations in a range of settings: acute general hospitals, accident and emergency, general outpatient clinics and general practice. The sessions required skills in history taking and management when interviewing patients with complex communication needs.
Box 1 Scenarios
1. Attempting to de-escalate an elated patient with manic symptoms and explain the need for a physical medical examination
2. Conducting a risk assessment and liaising with the psychiatric team regarding a patient who has attempted suicide and taken a paracetamol overdose
3. Assessing a patient with drug-seeking behaviour requesting a benzodiazepine prescription
4. Conducting a capacity assessment in a depressed patient who is refusing carers following a recent myocardial infarction
5. Managing an agitated patient with antisocial personality disorder who is experiencing chest pain
6. Assessment of a patient with a likely eating disorder and formulating a preliminary management plan
Timing
Each session lasted 3 hours. Scenarios were 20 minutes each, with 10 minutes for participants to complete the set task, and 10 minutes for feedback from the facilitator, actor, and other participating doctors.
Data collection
Quantitative data
Foundation doctors were asked to complete pre- and post-session anonymous feedback forms to ascertain their level of confidence in four domains (see Box 2). Participants were asked to rate their confidence on a Likert scale from 1 (strongly disagree) to 5 (strongly agree) for each of these statements.
Box 2 Quantitative data statements
“I feel confident in assessing patients with mental health diagnoses”
“I feel confident in making initial management plans for patients with mental health diagnoses”
“I feel confident in performing initial risk assessments in a medical setting”
“I feel confident in dealing with agitated patients in a medical setting”
Post-session feedback forms also included three questions, asking if anything could have been done differently about the day, if anything was done well, and a white space for any other comments.
Qualitative data
Qualitative data was recorded in the form of the written feedback documented post session and cross-checked by three members of the organising team.
Moderations to 2020 model
Minor changes to the format of the programme were made in August 2019, following presentation of interim findings to Health Education England. These were based on feedback generated from doctors and facilitators and are shown in Table 2. The logistics of the set-up on the day, scenarios, methods of feedback collection and analysis of data remained the same as in 2019.
Table 2: Moderations to 2020 Model

| Feedback from 2019 Sessions | Updates made to 2020 Sessions |
|---|---|
| Title for the sessions 'Psychiatry Communication Skills' may have discouraged foundation trainees who were not interested in a career in psychiatry | Title changed to 'Complex Communication Skills' |
| The sign-up process for foundation trainees required simplification | Foundation trainees were able to book onto the session via a centralised system, which also enabled their attendance to be tracked |
| Difficulties with room availability | Medical education managers contacted early in the academic year, with centralising to larger, well-equipped sites improving room availability |
| Some trainees were less incentivised to attend with sessions held late in the academic year | Sessions held earlier in the academic year |
| Low trainee/facilitator numbers, limiting the ability to run scenarios simultaneously | Sessions centralised with the aim to run 2 sessions in North London & 2 sessions in East London |
| Clarity of brief needed on capacity assessment scenario | Slight amendments to scenario made with input from old age psychiatry consultant, including more details on occupational therapy assessment in the doctors' and actors' briefs |
RESULTS
Quantitative data
Results showed a consistent increase in confidence across all domains following participation in the simulation session. Increases ranged from 0.83 (“I feel confident in performing initial risk assessments in a medical setting”) to 1.27 points (“I feel confident in dealing with agitated patients in a medical setting”).
Figure 1: Trainee confidence pre- and post-session by domain
There were consistent increases in overall confidence ratings at every site, ranging from 1.03 to 1.25. Similar increases in overall confidence were observed in North London (1.04) and East London (1.06).
Figure 2: Trainee confidence pre- and post-session by region
There was a 94% (n=114) completion rate of pre-session feedback forms, and a 91% completion rate (n=110) of post-session feedback forms.
Qualitative data
Thematic analysis of the free text in the post-session questionnaires generated the themes below. No changes were made to the themes following cross-checking for validity.
Quality of the stations
Trainees consistently reported positive experiences regarding the quality of the scenarios (48), actors (43), feedback (30) and facilitators (20). In particular, trainees felt there was a good breadth of scenarios, which were realistic and pitched at an appropriate level, and that feedback was constructive and individualised.
“enjoyed how challenging and how true to life the scenarios were”
“right level of difficulty. Took me out of my comfort zone!”
“really good to have an agitated patient as it was a very challenging scenario”
“quite clever to have capacity assessment in somebody with capacity because it’s harder in some ways!”
Five trainees would have liked to have had more scenarios, and three suggested that it would have been useful for the facilitator to have demonstrated a ‘model’ example of a scenario at the end of the session.
Environment/logistics of the circuit
General comments included that the circuits were well organised, and that there was a comfortable atmosphere for giving and receiving feedback. Eight trainees commented that the group size was too big (all were attendees at the Homerton session in 2020, which was the largest session run with 33 trainees in attendance).
Preparation of candidates for the circuit
Ten trainees (seven in 2019; three in 2020) said they would have liked clearer briefings or objectives for the scenarios – two trainees specified that this was in relation to the capacity assessment station.
DISCUSSION
Our results suggest that simulation training involving actors with mental health diagnoses can help foundation year doctors build confidence in their approach to such patients in a medical setting.
The greatest increase occurred in participants' confidence in dealing with an agitated patient. It is likely that participants felt most anxious about this domain before and during the session, and so gained a more immediate sense of progress by practising in a 'safe space' and seeing a visible de-escalation of the patient during the station. Participants also valued receiving supportive feedback from the actor, facilitator and their peers.
Participants also demonstrated large increases in confidence with respect to formulating initial management plans. This was the domain trainees were second least confident in prior to the session. It is likely that some trainees would be anxious about whether they have enough clinical knowledge when formulating an initial management plan for mental health patients. The chance to practice this in a controlled setting, with pertinent feedback, appears to have bolstered confidence.
Results were consistent between sites, suggesting that the content of the course, the experience of being in the roleplay itself, and the chance to receive feedback from experienced clinicians were of the most importance to participants, and local variations in delivery did not impact on participants’ experience to a great extent. The wide participation among foundation trainees in North and East London (121 trainees across two regions of London, over nine simulation sessions) suggests that there is a demand for such sessions and there might be an unmet need across other deaneries.
Qualitative data analysis showed positive feedback relating to the quality of the actors, the facilitators and the scenarios themselves. This likely contributed to the trainees reporting that the simulation was realistic and pitched at the right level, hence they were able to find benefit from them.
Limitations
There was a large difference in the number of participants enrolled in each session (three in the smallest, 33 in the largest). This will have given rise to a difference in experience between these participants, with the smallest group being able to partake in all six scenarios, and the largest group only being able to partake in one. This may have meant that those undertaking all six scenarios may have been exhausted by their experience, whereas those undertaking one may have felt that they did not get enough opportunity to practise. Confidence scores between these two groups were relatively similar, but it is unclear whether there would have been a difference if they were of similar size.
Linking of pre- and post-session feedback questionnaires to the respective trainees would have also enabled testing for statistical significance. A paired t-test could have been used to assess the increase in confidence observed by our simulation sessions in each domain.
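The paired analysis proposed above can be sketched in a few lines of standard-library Python; the linked pre-/post-session scores below are hypothetical, purely to illustrate the calculation:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-statistic and degrees of freedom for linked before/after scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical linked confidence scores (1-5 Likert) for five trainees
pre_session = [2, 3, 3, 4, 2]
post_session = [4, 4, 5, 5, 3]

t, df = paired_t(pre_session, post_session)
print(f"t = {t:.2f} on {df} df")  # compare against the t-distribution for significance
```

In practice a statistical package would also return the p-value; the point here is simply that linking questionnaires makes this test possible.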
This study tracked changes in confidence among foundation year doctors following a simulation session, but it did not assess the impact on their actual practice. This would be important to ascertain, to see if the session has allowed foundation year doctors to build on their experience of assessing and managing mental health patients in a medical setting. As a result, a cohort of participants has been selected for future contact regarding this to determine the potential impact on their clinical work.
The most recent outbreak of severe acute respiratory syndrome (SARS) has been caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) – a new single-strand, positive-sense RNA beta-coronavirus first reported in 2019 in Wuhan, China. The virus has since spread to nearly all countries across the world.1-4
SARS-CoV-2 infection, also known as Coronavirus Disease 2019 (COVID-19), replicates mainly in the upper and lower respiratory tract. The transmission of COVID-19 from symptomatic and asymptomatic patients is usually through respiratory droplets, generated by coughing and sneezing or through contact with contaminated surfaces.4,5 The disease has an incubation period of approximately 5.2 days.6
Most infections are mild and uncomplicated.4 Within a week of disease onset, 5-10% of patients develop pneumonia requiring hospitalisation.4,6 Some of these patients develop further complications, often leading to death.4,6 The overall case fatality rate is 1.4%, with a noticeably higher rate after the sixth decade of life.4
People aged ≥ 60 years – especially those with underlying medical conditions such as cardiovascular disease, hypertension, diabetes mellitus (DM), chronic respiratory disease, cancer, immunodeficiency or obesity – and those of male sex have an increased risk of dying.4,7-12 The risk of a severe adverse outcome also rises with the number of associated co-morbidities.10
The impact of active cancer, endocrine disorders and autoimmune inflammatory rheumatic diseases on COVID-19 outcomes has been investigated widely.13-18 Divergent views have emerged regarding the role of renin-angiotensin-aldosterone system (RAAS) inhibitors, steroids and immunomodulators in COVID-19 mortality.
The objective of our study was to evaluate the risk posed by epidemiological and demographic variables in our local population. We also sought to analyse the impact of co-morbidities on in-hospital mortality in confirmed COVID-19 patients.
METHODS
Study design:
We conducted a retrospective analysis of demographic characteristics (age and sex) and medical co-morbidities – hypertension, chronic heart failure, ischaemic heart disease, DM, thyroid disorders, asthma, chronic obstructive pulmonary disease (COPD), chronic kidney disease (CKD) (eGFR < 60 mL/min/1.73 m2), chronic liver disease, active malignancy, immunosuppression, post-transplant status, chronic inflammatory arthritis and other rheumatic disorders – in all patients with confirmed COVID-19 admitted to two peripheral district general hospitals under a single National Health Service (NHS) trust serving a primarily rural population in western England.
Inclusion and Exclusion Criteria:
To determine COVID-19 status, nose and throat swab specimens were obtained for real-time reverse transcription polymerase chain reaction (rt-PCR) testing in all adult (≥18 years) patients attending one of the two district general hospitals (Royal Shrewsbury Hospital, Shrewsbury; and Princess Royal Hospital, Telford) under Shrewsbury & Telford Hospitals NHS Trust (SaTH) between 1st March and 15th May 2020.
Patients who tested positive (for both the N gene and the ORF1ab gene, or for either gene alone) and who required subsequent in-hospital management were included in the study. Patients who were discharged after initial senior review (usually by a consultant physician), or who were brought in as a cardio-respiratory arrest, were excluded. Re-admissions for COVID-19 beyond 48 hours after hospital discharge were also excluded. Patients diagnosed solely on radiological or clinical findings, without a positive rt-PCR test, were not included in our study.
We analysed the data based on the index-admission (including failed-discharge: re-admission within 48 hours following hospital discharge). No follow-up data was collected post-hospital discharge of these patients.
Data collection & analysis:
A list of all confirmed COVID-19 patients over a 76-day period was identified from the trust microbiology database. A search of the electronic patient records was completed by four members of our team, and supplementary data were gleaned from existing hospital paper records. Patient demographics, presenting symptoms, associated co-morbidities, medications, admission and discharge dates, intensive therapy unit (ITU) admissions, renal profile, referral source and outcomes were recorded on a purpose-designed electronic datasheet.
Study Outcome:
The impact of epidemiological and demographic characteristics, and pre-existing medical conditions on the mortality of confirmed COVID-19 patients requiring in-hospital treatment was analysed.
RESULTS
A total of 303 confirmed COVID-19 (rt-PCR positive) samples were collected over a 76-day period. Five patients had been tested twice, and this was accounted for. Thirty-five patients were excluded from the study: twenty-four of them discharged after initial senior review without requiring in-hospital treatment, seven brought in with cardio-pulmonary resuscitation (CPR) in progress, three had inadequate data, and one was <18 years old. Of the 263 patients admitted, 70 (26.6%) died in hospital (Figure-1).
Figure-1: Flowchart of sampling and analysis
We stratified the mortality rates among the admitted patients by age (Table-1). A chi-square test of independence revealed that the mortality rate was significantly related to an advanced age (χ2 =27.078, p<0.001). The age and sex distributions of admissions and mortality are shown in Figure-2 (a, b, c).
Table-1: Medical admissions and mortality stratified by age
| Age | Admission N (m/f) | Admission (%) | Death N (m/f) | Mortality (%) |
|---|---|---|---|---|
| 18–20 years | 0 | 0% | 0 | 0% |
| 21–30 years | 9 (2/7) | 3.4% | 0 | 0% |
| 31–40 years | 9 (6/3) | 3.4% | 0 | 0% |
| 41–50 years | 26 (17/9) | 9.9% | 3 (2/1) | 11.5% |
| 51–60 years | 36 (21/15) | 13.7% | 4 (3/1) | 11.1% |
| 61–70 years | 43 (26/17) | 16.3% | 11 (7/4) | 25.6% |
| 71–80 years | 56 (35/21) | 21.3% | 15 (11/4) | 26.8% |
| 81 and above | 84 (52/32) | 31.9% | 37 (24/13) | 44.0% |
| Total | 263 (159/104) | 100.0% | 70 (47/23) | 26.6% |

Chi-square (χ2) = 27.078; P-value <0.001.
N: number of patients, m: male, f: female.
Figure-2 (a,b,c): Age, Sex, Admission and Mortality pyramids
We considered two age cohorts (below 60 and ≥60 years of age) and other relevant demographic parameters (sex and residence in own home/care-home) to analyse their impact on mortality rates (Table-2). Of the admitted patients, 159 (60.5%) were male and 104 (39.5%) were female. The mortality rate was strongly associated with age ≥60 years (χ2 =17.120, p<0.001) but independent of sex (χ2 =1.784, p=0.182). It was, however, affected by the care facility (χ2 =18.146, p<0.001), with a higher mortality rate among patients residing in a long-term care-home.
Table-2: Admission and Mortality stratified by demographic variables
| Variables | Admission (N) | Admission (%) | Death (N) | Mortality (%) | Chi-square (χ2) | P-value |
|---|---|---|---|---|---|---|
| Age | | | | | 17.120 | <0.001 |
| <60 years | 77 | 29.3% | 7 | 9.1% | | |
| ≥60 years | 186 | 70.7% | 63 | 33.9% | | |
| Sex | | | | | 1.784 | 0.182 |
| Female | 104 | 39.5% | 23 | 22.1% | | |
| Male | 159 | 60.5% | 47 | 29.6% | | |
| Care facility | | | | | 18.146 | <0.001 |
| Own-home | 211 | 80.2% | 44 | 20.9% | | |
| Care-home | 52 | 19.8% | 26 | 50.0% | | |

N: Number of patients; Care-home: Long-term care in residential or nursing home.
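The chi-square statistics reported above can be reproduced directly from the Table-2 counts (survivors are obtained by subtracting deaths from admissions); a dependency-free Python sketch:

```python
def chi2_independence(table):
    """Pearson chi-square statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows are [survived, died]; counts taken from Table-2
age = [[77 - 7, 7], [186 - 63, 63]]      # <60 vs >=60 years
care = [[211 - 44, 44], [52 - 26, 26]]   # own-home vs care-home

print(round(chi2_independence(age), 3))   # agrees with the reported 17.120
print(round(chi2_independence(care), 3))  # agrees with the reported 18.146
```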
To identify the strength of these associations, we conducted a univariate logistic regression analysis with mortality as the dependent variable and the demographic factors and presence/absence of each co-morbidity as independent variables (Table-3). Age as a continuous predictor had an odds ratio of 1.058 (p<0.001), i.e. the odds of dying increased by 5.8% for every additional year of age. With age as a categorical predictor, the odds of death for patients aged below 60 years were 0.195 times the odds for patients aged 60 years or above.
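The arithmetic behind these odds ratios can be checked in a short sketch (the per-decade compounding line is our illustration, not a figure from the paper; the crude categorical odds ratio uses the Table-2 counts):

```python
# Continuous predictor: OR = 1.058 per year of age
or_per_year = 1.058
print(f"{(or_per_year - 1) * 100:.1f}% increase in odds per year")
print(f"x{or_per_year ** 10:.2f} odds over a decade of ageing")  # compounding, ~1.76

# Categorical predictor: crude OR for age <60 vs >=60, from Table-2 counts
odds_under_60 = 7 / (77 - 7)     # 7 deaths among 77 admissions
odds_60_plus = 63 / (186 - 63)   # 63 deaths among 186 admissions
print(round(odds_under_60 / odds_60_plus, 3))  # 0.195, matching the reported value
```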
Table-3: Univariate logistic regression analysis of the demographic variables and co-morbidities
Based on the Charlson Comorbidity Index (CCI) score, the severity of co-morbidities was categorised into four cohorts: mild/no co-morbidity (CCI:0), moderate (CCI:1-2), severe (CCI:3-4), and very severe (CCI≥5) [Table-4(4a)].
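The banding described above can be expressed as a simple lookup; the function below is our illustrative sketch, not part of the study’s tooling:

```python
def cci_band(score: int) -> str:
    """Map a Charlson Comorbidity Index score to the study's severity cohort."""
    if score == 0:
        return "mild/no co-morbidity"
    if score <= 2:
        return "moderate"
    if score <= 4:
        return "severe"
    return "very severe"

for s in (0, 2, 4, 5):
    print(s, cci_band(s))
```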
Table-4: Impact of CCI score and specific medical-conditions on admission and mortality 4a) Admission and mortality stratified by CCI score based cohorts
| CCI score | Admission (N) | Mortality (N) | Mortality (%) | OR (95% C.I.) | p-value |
|---|---|---|---|---|---|
| Overall | 263 | 70 | 26.6 | | |
| 0 | 31 | 1 | 3.2 | - | - |
| 1-2 | 59 | 8 | 13.8 | 4.706 (0.56 – 39.49) | 0.154 |
| 3-4 | 68 | 23 | 33.8 | 15.33 (1.97 – 119.67) | 0.009 |
| ≥5 | 105 | 38 | 36.2 | 17.015 (2.23 – 129.78) | 0.006 |
4b) Admission and mortality stratified by specific medical-conditions
| Medical conditions | Admissions (N) | Mortality (N) | Mortality (%) | OR (95% C.I.) | p-value |
|---|---|---|---|---|---|
| DM | 54 | 18 | 33.3 | 1.510 (0.791 – 2.883) | 0.212 |
| Thyroid disorders | 16 | 4 | 25.0 | 0.914 (0.285 – 2.934) | 0.880 |
| Overall hypertensives | 75 | 16 | 21.3 | 0.707 (0.374 – 1.338) | 0.287 |
| ACEi/ARB* antihypertensives | 51 | 11 | 21.6 | 0.760 (0.365 – 1.586) | 0.465 |
| Non-ACEi/ARB§ antihypertensives | 24 | 5 | 20.8 | 0.704 (0.253 – 1.964) | 0.503 |
| Long-term oral steroids | 17 | 9 | 52.9 | 4.053 (1.091 – 15.063) | 0.037 |
| Immunomodulators | 9 | 3 | 33.3 | 5.101 (0.659 – 39.460) | 0.119 |

N: Number of patients; DM: Diabetes Mellitus; *RAAS-inhibitors; §Non RAAS-inhibitors.
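The crude odds ratios and Wald confidence intervals in Table-4(4a) can be recovered from the raw counts; a dependency-free Python sketch, with CCI 0 as the reference group (values agree with the table to within rounding):

```python
import math

def odds_ratio_ci(deaths, survivors, ref_deaths, ref_survivors, z=1.96):
    """Crude odds ratio vs. a reference group, with a 95% Wald confidence interval."""
    or_ = (deaths / survivors) / (ref_deaths / ref_survivors)
    se = math.sqrt(1 / deaths + 1 / survivors + 1 / ref_deaths + 1 / ref_survivors)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts from Table-4(4a); reference cohort CCI 0 had 1 death and 30 survivors
for label, deaths, n in (("1-2", 8, 59), ("3-4", 23, 68), (">=5", 38, 105)):
    or_, lo, hi = odds_ratio_ci(deaths, n - deaths, 1, 30)
    print(f"CCI {label}: OR {or_:.3f} (95% CI {lo:.2f}-{hi:.2f})")
```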
The impact of the CCI score-based cohorts on mortality is shown in Figure-3 (a-f). The CCI score was also a significant predictor, with an odds ratio of 1.255 (p<0.001). When the CCI score was used as a categorical predictor alongside the other two parameters (age and place of primary care), it remained significant: the odds of death for patients with CCI scores of 0-4 were 44.8% (p=0.005) of the odds for patients with CCI scores ≥5 (Table-3).
Figure-3(a - f): Pie-chart representing impact of CCI score-based cohorts on mortality
a) Overall admitted patients: discharge and mortality; b) CCI score 0: discharge and mortality; c) CCI score 1-2: discharge and mortality; d) CCI score 3-4: discharge and mortality; e) CCI score ≤4: discharge and mortality; f) CCI score ≥5: discharge and mortality.
Interestingly, the eGFR at presentation turned out to be a significant predictor of mortality (OR=0.961, p<0.001). Of the co-morbidities, pre-existing renal disease was an important predictor of mortality, with OR=1.996 (p=0.027). Long-term oral steroid use was another significant predictor: the odds of death for patients on long-term oral steroids were 341.2% (p=0.016) of the odds for patients not on such medication. Patients with no background medical conditions fared better (OR=0.181, p=0.022), with significantly lower odds of death than patients with at least one known medical condition (Table-3).
We also analysed the mortality of our patients with specific medical condition-based cohorts [Table-4(4b)]. A high mortality of 52.9% [OR (95%CI): 4.053(1.091–15.063), p=0.037] was observed in patients who were on long-term oral steroids. A 33.3% [OR (95%CI):1.510(0.791–2.883), p=0.212] mortality rate was observed among in-patients with known diabetes on pharmacotherapy.
Many of the demographic variables and co-morbidities were inter-related – the odds of death for a patient admitted from their own home were only 26% (OR=0.263, p<0.001) of the odds for those residing in a long-term care-home (Table-3). To offset any confounding effect, we performed a multiple logistic regression analysis with all the important variables taken together (Table-5). After accounting for confounding, only age, care facility, the presence of active malignancy and long-term oral steroid use remained significant predictors of mortality. Interestingly, the presence of active malignancy was associated with a lower risk of death – possibly a bias arising from the relatively small number of such patients in our study. Age was the most significant predictor of mortality, followed by the care facility and the presence of active malignancy.
Table-5: Multiple logistic regression analysis of the demographic variables and co-morbidities
| Variables | Odds Ratio | 95% CI (Lower) | 95% CI (Upper) | P-value |
|---|---|---|---|---|
| Age | 1.049 | 1.013 | 1.086 | 0.007 |
| Sex (Female) | 0.588 | 0.296 | 1.165 | 0.128 |
| Care facility (own-home) | 0.411 | 0.195 | 0.866 | 0.019 |
| CCI score | 1.051 | 0.826 | 1.337 | 0.685 |
| Active malignancy | 0.078 | 0.008 | 0.725 | 0.025 |
| Cardiovascular disease | 0.987 | 0.491 | 1.984 | 0.971 |
| Respiratory disease | 1.162 | 0.517 | 2.612 | 0.716 |
| DM & endocrine disorders | 1.370 | 0.608 | 3.085 | 0.448 |
| Renal disease | 0.901 | 0.419 | 1.937 | 0.789 |
| Rheumatic disorders | 0.927 | 0.128 | 6.719 | 0.941 |
| Liver & hepato-biliary diseases | 0.364 | 0.030 | 4.357 | 0.425 |
| Thyroid disorders | 0.827 | 0.186 | 3.676 | 0.803 |
| Long-term oral steroids | 4.053 | 1.091 | 15.063 | 0.037 |
| Immunomodulators | 5.101 | 0.659 | 39.460 | 0.119 |
| No medical condition | 0.685 | 0.128 | 3.670 | 0.658 |

DM: Diabetes Mellitus
DISCUSSION
COVID-19 has taken 800,000 lives world-wide, as reported by the World Health Organisation (WHO) on August 30, 2020. A recent systematic review and meta-analysis reported that COVID-19 is associated with a severe disease course in about 23% of infected patients and has a mortality of about 6%.19 The mortality rate varies across geographical areas: in-hospital mortality was significantly higher in the United States of America (USA) (22.23%) and Europe (22.9%) than in Asia (12.65%) (p<0.0001), although the USA and Europe did not differ significantly from each other (p=0.49).20 Our study showed a 26.6% in-hospital mortality.
The mean age of the patients in our study was 68.74 years (SD: 16.89); 60.5% were male and 39.5% female, and 70.7% were aged ≥60 years. Univariate analysis showed that the mortality rate was significantly age-dependent (OR=1.058, p<0.001) – mortality was higher (33.9%) in patients aged ≥60 years, rising sharply to 44.0% beyond 80 years of age (χ2 =27.078, p<0.001). Our results were consistent with other studies.21
Among the demographic characteristics, mortality-risk was independent of sex distribution (χ2 =1.784, p=0.182) in our study. This is in contrast to a meta-analysis, which reported the association between male-sex and COVID-19 mortality (OR =1.81; 95%CI:1.25–2.62).22 Multicentric studies in the United Kingdom (UK) would be warranted to see the trend in the local population.
Long-term care-home residents suffered 50.0% mortality (χ2 =18.146, p<0.001). The London School of Economics report on May 14, 2020, estimated that the COVID-19 related deaths of care-home residents contributed to 54% of all excess deaths in England and Wales. Our study findings indicate long-term care-homes as hot-spots requiring shielding and protective measures against COVID-19 – a conclusion corroborating other studies.23
We aimed to define the predictive role of co-morbidities in COVID-19 mortality, an aspect that has been probed earlier as well.7-12 The CCI score remains a reliable method of measuring co-morbidity.24 NICE recommends that, for seriously unwell COVID-19 patients with a CCI score ≥5, critical care advice is sought to support treatment decisions regarding the essential benefit of organ support. We examined the predictive mortality-risk of CCI scores among the admitted patients.
The mortality rate in cohorts with CCI ≤4 and CCI scores ≥5 were 20.3% and 36.2% respectively. The odds of death for CCI ≤4 cohort was less than half (44.8%) compared with CCI scores ≥5 cohort. Based on this finding, we strongly recommend CCI scoring as a clinical risk-stratification tool in COVID-19.
We examined the impact of organ specific co-morbidities on in-hospital mortality in our study as well. Patients with no background medical conditions showed a low mortality rate 6.9% [OR (95%CI): 0.181(0.042–0.782), p=0.022] and had better outcomes with significantly lower odds of death, compared to patients with at least one medical condition on univariate logistic regression analysis (Table-3). The mortality rate was 3.2% in CCI-0 cohort [Table 4(4a)].
The impact of COVID-19 on patients with CKD or glomerulo-nephropathies, on dialysis-dependent patients and on renal transplant recipients remains unclear. Patients with SARS-CoV-2 infection were frequently found to have renal dysfunction, which was associated with greater complications and in-hospital mortality.25 A mortality rate of 3.6% was reported in patients attending an outpatient haemodialysis centre.26 Another study reported a 3.07-fold (95%CI: 1.43–6.61) increase in mortality among renal failure patients.27 We found pre-existing renal disease to be a cause of significant concern, with 37.7% mortality [OR(95%CI): 1.996(1.082–3.681), p=0.027], and the eGFR at presentation was a significant predictor (OR=0.961, p<0.001) (Table-3).
The use of steroids in COVID-19 continues to be explored. The RECOVERY trial in the UK, after evaluation at 28 days, concluded that dexamethasone reduced deaths by one-third in ventilated patients [age-adjusted rate ratio (RR) 0.65; 95% CI: 0.48–0.88; p=0.0003] and by one-fifth in other patients receiving supplemental oxygen with or without non-invasive ventilation (RR 0.80; 95% CI: 0.67–0.96; p=0.0021), although no benefit was observed in mild or moderate cases not requiring oxygen support (17.0% vs. 13.2%; RR 1.22; 95% CI: 0.93–1.61; p=0.14). In contrast, a systematic review concluded that the results from retrospective studies are heterogeneous, making it difficult to assign a definite protective role to corticosteroids in this setting.28 We found long-term oral steroid use to be a significant predictor of mortality – 52.9% [OR(95%CI): 3.412(1.261–9.23), p=0.016] – the odds of death being 341.2% of those for patients without long-term oral steroid use (Table-3). The sample size of this cohort was relatively small, with 9 deaths out of 17 patients. Based on our results, we suggest that further population-based studies are required to determine the impact of long-term oral corticosteroid use in COVID-19.
A major proportion of endocrine disorders are of autoimmune aetiology. The impact of thyroid disorders on COVID-19 is yet to be studied widely.15,16 We found no increased risk of mortality [OR (95%CI): 0.914 (0.285–2.934), p=0.880] in patients with thyroid disorders. However, 33.33% [OR(95%CI): 1.510(0.791–2.883), p=0.212] mortality was seen among the diabetic patients on pharmacotherapy in our study [Table-4(4b)].
Pre-existing hypertension is an accepted risk factor for COVID-19 mortality.26,27 However, the roles of RAAS-inhibitors and of ACE-2 receptor upregulation in COVID-19 mortality call for targeted clinical research and further clarification.29 A meta-analysis of four studies showed that patients treated with RAAS-inhibitors had a lower risk of mortality [RR: 0.65(95%CI: 0.45–0.94), P=0.20].30 We did not observe any significant difference in mortality-risk between the RAAS-inhibitor treatment group [OR(95%CI): 0.760(0.365–1.586), p=0.465] and the non-RAAS-inhibitor treatment group [OR(95%CI): 0.704(0.253–1.964), p=0.503] [Table-4(4b)]. We recommend the continuation of RAAS-inhibitors during COVID-19 unless there are other compelling medical reasons for their discontinuation.
A prospective study in the UK concluded that the mortality from COVID-19 in cancer patients appeared to be driven principally by age, gender, and co-morbidities.13 The study could not identify evidence suggesting cancer patients on cytotoxic chemotherapy, or other anticancer treatment, were at an increased risk of mortality from COVID-19 compared to the general population.13 We also did not detect any increased risk of mortality in patients with active malignancy [OR(95%CI): 0.078(0.008–0.725), p=0.025)] (Table-5).
The impact of various non-specific immunomodulators in COVID-19 outcome remains inconclusive.14 Our study did not reveal any significant predictive mortality-risk with the use of long-term immunomodulators (methotrexate, tacrolimus, sirolimus, mycophenolate, dapsone, sulfasalazine and azathioprine) on multiple logistic regression analysis. We reached the same conclusion with patients suffering from chronic rheumatic disorders on similar analysis (Table-5).
Our study had some unique characteristics. We analysed all the eligible samples over a consecutive 76-day period at the initial peak of the pandemic. The study was conducted across two district general hospitals, allowing an insight into two differently located rural populations. We conducted univariate and multiple logistic regression analysis of the demographic variables and co-morbidities to examine the predictive-risk of contributing factors in COVID-19 mortality. The association between CCI scores and in-hospital mortality was also analysed in detail. We included demographic characteristics such as age, sex and residence in a long-term care-home while factoring in the associations.
Our study was not without limitations, though. We were unable to study the predictive-risk of obesity, socioeconomic status and ethnicity due to inadequate data. The “White British” group consisted of 80.61% of admitted patients, and no ethnicity was documented in 17.11% of our patients (Table-6, Figure-4).
Table-6: Medical admissions and mortality stratified by ethnicity
| Ethnicity | Admission (N) | Admission (%) | Died (N) | Mortality (%) |
|---|---|---|---|---|
| White British | 212 | 80.61 | 63 | 29.71 |
| Asian | 4 | 1.52 | 1 | 25.00 |
| African | 2 | 0.76 | 0 | 0.00 |
| Not documented | 45 | 17.11 | 6 | 13.33 |

N: Number of patients
Figure-4: Bar charts showing Admission and Mortality stratified by Ethnicity
We relied solely on the electronic database and hospital records to conduct the study retrospectively. A few subsets of patients – those on prescribed long-term oral steroids or immunomodulators, and those with thyroid disorders, chronic liver disease or active malignancy – had relatively small sample sizes, with the possible introduction of bias. We did not categorise diabetic patients into insulin-dependent/non-insulin-dependent or well/poorly controlled glycaemic cohorts, nor did we split the respiratory group into well or poorly controlled asthma or COPD subsets. Patients on long-term inhaled steroid treatment were not included in the steroid cohort – a more extensive population-based study may be better suited to such an analysis.
CONCLUSIONS
Patients aged ≥60 years, residents of long-term care-homes, those with pre-existing renal disease or multiple co-morbidities (especially CCI ≥5), and patients on long-term oral steroids need to be considered at high risk of dying from COVID-19, alongside other established risk factors such as hypertension, diabetes and chronic respiratory disease. RAAS-inhibitors need not be discontinued because of COVID-19. Further studies are necessary to establish the links between long-term oral steroid use, chronic rheumatic disease, non-specific immunomodulators and COVID-19 mortality.
The first documented case of COVID-19 in the UK was reported on 29 January 2020 followed by a rapid surge of infections leading to a UK national lockdown announced on 23 March 20201.
The COVID-19 pandemic has since required NHS hospitals to constantly adapt their protocols, workforce and logistics to keep pace with the evolving spread of the virus.
The variable clinical presentation of COVID-19 may result in those requiring admission being redirected under the care of different specialties within the hospital2. Furthermore, the presence of asymptomatic carriers admitted with unrelated pathologies, and cases of nosocomial cross-infection, means that COVID-19 related clinical noting and discharge summary documentation is likely to affect doctors across all hospital departments.
An initial review of 50 consecutive urology discharge summaries in Royal Shrewsbury Hospital in April 2020, revealed that only 27% included the patient’s in-hospital COVID-19 swab result (positive or negative) and only 2% documented any recommended patient self-isolation advice to be adhered to after discharge into the community.
Accurate COVID-19 related documentation is paramount to ensure the patient, their family and their GP / care setting (where applicable) are all aware of their COVID-19 status and any recommended self-isolation, to safeguard infection prevention in the community. Furthermore, there could be potential medicolegal sequelae for the Trust were a patient recently discharged from hospital to spread COVID-19 to their family and/or vulnerable adult cohabitants due to a lack of clear self-isolation guidance.
An urgent collaboration between the urology team and the Trust IT department was undertaken to upgrade the Trust’s existing eScript discharge summary software.
Two new tabs were integrated:
1. COVID-19 test result [Figure 1] and date [Figure 2]: Positive / Negative / Not tested
2. Self-isolation advice [Figure 3]: No / Yes (please specify as free text)
Completion of both tabs was made mandatory before the document could be signed off for printing and successful upload to the electronic records.
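The mandatory-completion rule can be illustrated with a small sketch (the field names here are hypothetical; the actual eScript implementation is a proprietary system and may differ):

```python
class IncompleteSummaryError(ValueError):
    """Raised when mandatory COVID-19 fields are missing at sign-off."""

def can_sign_off(summary: dict) -> bool:
    """Allow sign-off only when both mandatory COVID-19 tabs are completed."""
    if summary.get("covid19_result") not in {"Positive", "Negative", "Not tested"}:
        raise IncompleteSummaryError("COVID-19 test result tab not completed")
    isolation = summary.get("self_isolation_advice")
    if isolation not in {"No", "Yes"}:
        raise IncompleteSummaryError("Self-isolation advice tab not completed")
    if isolation == "Yes" and not summary.get("isolation_details"):
        raise IncompleteSummaryError("Please specify self-isolation advice as free text")
    return True

# A summary missing the new tabs cannot be signed off
try:
    can_sign_off({"patient": "example"})
except IncompleteSummaryError as e:
    print(e)

print(can_sign_off({"covid19_result": "Negative", "self_isolation_advice": "No"}))
```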
Figure 1
Figure 2
Collaboration with the infection prevention team (IPT) was undertaken to create a flow-chart style document accessible by hyperlink [Figure 3] to help discharging clinicians correctly determine and document patient self-isolation instructions following discharge from hospital, depending on individual circumstances. [Appendix 1]
Figure 3
The aim of this quality improvement project was to evaluate the impact of the dynamic upgrade made to the eScript discharge summary software in clinician compliance with COVID-19 related documentation.
Materials and Methods
The upgraded eScript discharge summary software was rolled out across the Shrewsbury and Telford NHS Trust (SATH) in the week beginning 28th September 2020.
All clinicians were informed regarding the upcoming software change by means of a Trust-wide email from the SATH Medical Director, with instructions provided on how to complete the new tabs.
The first 50 consecutive completed discharge summaries of patients admitted electively or as emergency under the urology team starting from 1st October 2020 were retrospectively reviewed by NL, EF, ZK by means of electronic records.
Note was taken of correct documentation of:
· any COVID-19 test outcome (positive or negative result)
· any recommended patient self-isolation advice after discharge from hospital
The findings were compared and contrasted with the results of the initial study in April 2020.
Results
49/50 (98%) patients had a COVID-19 test at some point during their admission; 1 patient was not tested at any time.
3 patients were discharged prior to their COVID-19 result becoming available, 1 patient was discharged without a written discharge summary and 1 patient was incorrectly labelled as having been “not tested.”
The results of 46 patients were therefore available before discharge, and 44 (90% of all those tested) were documented on the discharge summary. All COVID-19 tests were negative. [Table 1]
All patients had either documented self-isolation advice or “none required” specified on their discharge summary following discharge from hospital. [Table 1]
The most common primary reasons for admission were urinary tract infection / sepsis (18%), catheter-related complications (14%) and urinary retention (12%).
Incidental note was made of two patient deaths within 28 days of admission.
Table 1
| | Initial Review | Review after software update |
|---|---|---|
| Number of patients | 50 | 50 |
| Patients tested for COVID-19 | 33 (66%) | 49 (98%) |
| Patients testing positive | 1 (3.3%) | 0 (0%) |
| COVID-19 result on discharge summary | 9 (27%) | 44 (90%) |
| Self-isolation advice on discharge summary | 1 (2.0%) | 50 (100%) |
Discussion
The results revealed that the upgraded eScript software resulted in a notable improvement in COVID-19 related documentation on discharge summaries.
In the initial study 33 / 50 (66%) had a COVID-19 test at any time during their admission – only 27% of these however had the result included on their discharge summary, compared to 90% compliance following the eScript software upgrade.
Following the finding of 3 patients’ (6%) COVID-19 result not becoming available prior to discharge, SATH IT was consulted and an extra option on the eScript COVID-19 result dropdown menu was added to include “awaiting result” to mitigate for this particular circumstance. [Figure 1]
Only 1 patient (2%) in the initial study had any self-isolation advice documented on their discharge summary – this figure rose to 100% following the eScript software upgrade. [Table 1]
The figures have to be interpreted in light of the change in COVID-19 testing availability, which only became widespread in mid-May 2020 and thus after the completion of the initial study3. This is likely to account for the lower proportion of in-patient COVID-19 tests being performed in the initial study (66%) vs. second study (98%).
Arguably, negative COVID-19 results, such as those commonly encountered on the urology ward, are less likely to be documented on a discharge summary than a positive test, particularly in patients admitted with unrelated pathologies (e.g. urinary retention) or in asymptomatic carriers. By nonetheless documenting this pertinent negative, one ensures the patient is aware of their reassuring result, and any community-based clinician, such as a district nurse or GP, can be cognisant of this information if called to assess the patient soon after hospital discharge.
The findings of the study are directly relevant to all doctors working in acute NHS Trusts, as clear and accurate documentation is a key principle of the GMC’s “Good Medical Practice”, by which all registered practising doctors must abide4. A discharge letter is a key component of the documentation of a patient’s journey and must therefore be completed accurately in line with GMC guidance. The updated software system safeguards the accuracy and clarity of the Trust’s discharge summaries in relation to COVID-19 results and self-isolation advice.
Self-isolation is a key principle of outbreak control for any infectious disease, and is a particularly important strategy for managing the vast numbers of cases seen in the COVID-19 pandemic in a libertarian society where strict quarantine is not routinely enforced5. Adherence to self-isolation has been notoriously poor in the UK – it is estimated that only 25% of symptomatic patients with proven COVID-19 complied fully with the government advice not to leave home during their isolation period6. It is therefore of paramount importance that patients discharged from hospital in the COVID-19 pandemic era are given clear instructions on how to self-isolate, and that the recommended duration is documented.
Doctors preparing discharge summaries, and their patients, must be aware that COVID-19 may still be relevant even if the primary reason for admission was unrelated and the test on admission was negative – for example, the patient may have been exposed to another in-patient or staff member later found to be positive for the virus. The discharging clinician should check for any such event and disclose it on the discharge summary where applicable.
From a medicolegal perspective, hospital trusts may find themselves in a vulnerable position if COVID-19 positive or potentially exposed patients are discharged without any documented self-isolation advice. This particularly follows the controversy, highlighted in the earlier months of the pandemic, of thousands of elderly patients being discharged from hospital to care homes in the UK without a COVID-19 test7. Indeed, since then a judge has allowed legal action by a bereaved daughter to be brought against the Department for Health and Social Care, NHS England and Public Health England for failure to adequately protect vulnerable residents in an Oxfordshire care home8. Safeguarding the clear documentation of recommended patient self-isolation instructions on discharge summaries is likely to confer additional protection to a Trust facing any such legal challenge.
Writing a high-quality discharge summary is a difficult skill to teach, and indeed summaries are often completed by the most junior members of the medical team9. The Trust’s IT software can therefore play a vital role in helping doctors ensure that the COVID-19 result and self-isolation instructions are documented for all hospital discharges, by means of mandatory tabs for completion prior to sign off.
To our knowledge, although other Trusts have since similarly amended their discharge summary software in light of the COVID-19 pandemic, this is the only study in the literature which directly attests to the degree of improvement in documentation resulting from such a software change. We urge all Trusts in the UK to consider amending their discharge summary software in line with the changes characterised in this study.
Conclusions
The updated eScript discharge summary software has greatly improved compliance within the Trust with the documentation of COVID-19 test results and self-isolation advice on discharge summaries.
This is a simple and highly effective modification whose benefits can have ramifications across the healthcare system.
By accurately documenting COVID-19 test results and any advised self-isolation for the patient after hospital discharge, one safeguards IPC in the community and protects the Trust from potential medico-legal sequelae.
Appendix 1
Scenario 1: COVID-19 positive patient
Scenario 2: COVID-19 negative patient
Scenario 3: No COVID-19 test performed
The Medical Training Initiative (MTI) is a training programme that assists doctors with proven capability in anaesthesia, intensive care or pain medicine from low- and middle-income countries to undertake further anaesthesia training in the UK, for a maximum of 24 months1.
Why MTI?
It offers MTI doctors an opportunity not only to fine-tune their clinical acumen, but also to assimilate non-clinical skills (medical education, leadership and management, quality improvement projects)2. The exposure most MTIs receive overseas is heterogeneous – in terms of level of supervision/independence, access to modern equipment and medications, lines of management, level of expectations, and communication or interaction with patients. Funding received by training hospitals overseas can be variable, thereby limiting the resources available to provide standardised training. Under the MTI scheme, anaesthetic trainees can also take the FRCA examination.
From home to UK
A general awareness of the scheme helps the department to provide the MTIs with an appropriate support system. Details of the MTI scheme are available on the Royal College of Anaesthetists (RCoA) website.
The process takes about 3-6 months after verification of the educational qualification by the Educational Commission for Foreign Medical Graduates (ECFMG) via the Electronic Portfolio of International Credentials (EPIC). The planning involves resigning from the current job, applying for a Tier 5 visa in time for the GMC identity check (3-month deadline) and collecting the Biometric Residence Permit to be able to start work in the UK.
Medical staffing requires further paperwork, including the Disclosure and Barring Service (DBS) check. Informing the MTIs in advance of the need for police verification from their home country would greatly help to smooth the process. Hospital accommodation should be offered and organised in advance.
Acquaintance with the system
The MTI trainees often join at a time that doesn’t coincide with the UK training programme. Hence, a one-to-one induction customised for overseas doctors will be beneficial. In addition to a named Educational Supervisor (a mandatory requirement stipulated by the RCoA), the MTIs will benefit from having a nominated mentor within the department. The trainees can also familiarise themselves with the new healthcare system via the RCoA-approved training courses – ‘Simulation for MTIs’ and ‘New to NHS’.
Allocating MTIs to theatre lists with only a select number of consultants in the initial stages helps them to settle in a new healthcare environment before they commence on-call (out of hours) duties. The MTIs should be encouraged to attend resuscitation courses like Advanced Life Support (ALS) as most of them follow the Advanced Cardiac Life Support (ACLS). They should be encouraged to document their progress like any other UK trainee via the RCoA Lifelong Learning Platform (LLP).
Anaesthetic training in the UK is very structured. The three stages of training (core, intermediate and higher/advanced) are well defined. The curriculum is well laid out and assists trainees not only to develop clinical skills but also to gain non-technical skills. A six-monthly ARCP (Annual Review of Competence Progression)-style assessment with annual anonymised multi-source feedback helps to create professional development plans, monitor progress and put supportive plans in place (if needed) for a struggling trainee. The curriculum provides an opportunity for all-round development for every overseas trainee.
Gaining experience in non-technical skills (leadership and management, medical education and QI/audit projects) can be lacking in some home countries as the curriculum back home could be heavily biased towards the development of clinical acumen only.
What to expect from an MTI?
The MTIs have at least 3-5 years (sometimes more) of anaesthetic experience, and the NHS benefits from their skills and experience. This experience allows the department to allocate them to out-of-hours (on-call) work sooner than a UK trainee, after an appropriate period of induction. Patient experience also improves with the presence of experienced staff on the shop floor.
Departments gain from increased service provision too. As an example, after obtaining the initial assessment of competency (IAC), the MTI anaesthetists can be allocated solo theatre lists with a named supervising consultant anaesthetist present within the theatre suite. The reliance on locum staff is reduced, thereby avoiding unnecessary cancellations of theatre lists for lack of permanent staff. This also reduces the financial burden on the NHS, as staffing the department with locums adds to costs.
Patient safety is of paramount importance in any healthcare setting. Since the MTIs have a two-year working contract, they become familiar with the department’s policies and guidelines, unlike a locum doctor who does the odd shift in a hospital.
Equally, the MTIs gain new skills: ultrasound-guided regional anaesthesia, use of the fibre-optic scope and different airway devices, ICU training, experience in geriatric and bariatric anaesthesia, and total intravenous anaesthesia (TIVA)/target-controlled infusion (TCI), alongside access to newer medications such as remifentanil and sugammadex, which may not be available in low- to middle-income countries. The NHS also provides excellent opportunities in simulation training and teaching courses.
However, a system of protocols can be unnerving to the MTIs. One may find them taking a step back when it comes to decision making, as they are unsure whether a decision would be approved or criticised. At times, some of the MTIs may come across as unyielding despite adequate teaching. It is essential to remember that the process of unlearning and re-learning takes time, and therefore patience is key. This is where the concept of training learners who already have knowledge and experience comes in handy.
Training learners with knowledge
Medical education comprises three inter-linked domains - knowledge, skills and attitude.3 Though trainees may differ in their motivation for learning, that motivation can manifest only after basic needs are satisfied - the external barriers to motivation, such as life events and transitions, opportunities, and barriers to learning or obtaining information, are addressed, and the trainees feel respected in the educational environment. The MTIs are essentially adult learners with pre-existing knowledge, who bring a great deal of first-hand experience to any workplace. Learning should, therefore, be integrative, which forms the basis of the constructivist theory of learning4. New knowledge and skills should be integrated into the existing bank of knowledge. Adult learners also have pre-set strong tastes and habits, which can be a real asset or a hindrance to effective learning; the educational supervisor should be able to encourage or curb them accordingly5.
Being adults, MTIs enter training situations with a self-image as independent, mature beings as they have already passed the qualifying exams in their home country. They can direct their own learning, including decision making and plans for taking examinations. The supervisors should engage with the trainees in activities that create a sense of self-responsibility to facilitate better learning opportunities.
Many adult learners suffer from a fear of failure and of not living up to expectations6; thus, educational supervisors should be careful to avoid unnecessary criticism. Instead, the focus should be on offering constructive, positive feedback. Any educational plan for them should start with an awareness of their prior acquired knowledge and an assessment of their educational needs, along with room for motivation and reflection. This helps the trainees retain their original “frame of reference” while continuing to challenge and transform practice via reflection-on-action and reflection-in-action. The educational supervisors need to provide a supportive educational environment, a structured guide for reflection, and constructive feedback to develop the trainees’ reflective practice7.
A simple multi-step approach involving active participation from both the trainee and the educational supervisor can be summarised into a model as below (Figure 1) 8. This model begins with the trainee’s prior knowledge.
Figure 1: Multi-step approach to training
The RCoA LLP, workplace-based assessments and multi-source feedback, along with the six-monthly meetings with the educational supervisor, are useful tools to deliver a holistic learning experience. They help to refine existing knowledge, encourage reflection and provide constructive feedback. The supervisor can provide advance structures upon which the MTI can continue to build opportunities and gain confidence to rehearse and apply their new knowledge.
Summary
An organised induction programme, a period of familiarisation and good mentorship with patience helps to remove the barriers to learning for the MTI trainees. A dynamic trainee-supervisor relationship to accommodate the changing educational goals and an appropriate mix of strategies can help the MTI trainees attain medical competence, which is defined as “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values and reflection in daily practice for the benefit of the individual and the community being served3.”
Lung carcinoma, the most common malignancy worldwide, presents as metastatic disease in the majority of cases. The most frequent sites of distant metastases are the liver, adrenal glands, bones, and brain. Skeletal muscle metastasis is an unusual presentation of lung adenocarcinoma.
Case report
A fifty-three-year-old male patient, a labourer by occupation and a beedi smoker for twenty-five years, was admitted to a tertiary care hospital with pain and swelling over the left arm, and cough and expectoration for the past two months, accompanied by significant weight loss. There was no chest pain or haemoptysis. On local examination there was a hard swelling over the extensor compartment of the left upper limb with mild tenderness, no loss of sensation, and mild restriction of the range of movement on flexion at the elbow. Respiratory and other systemic examinations were within normal limits. Plain CT of the thorax showed a well-defined multilobulated lesion in a right perihilar location in the right middle lobe (Figure 1).
Figure 1: CT of Thorax: Well defined multi-lobulated lesion in right perihilar location in right middle lobe measuring 2.9 x 2.4 cm.
USG of the left arm showed an irregular heterogeneous soft tissue lesion within the triceps muscle with a few areas of intra-lesional necrosis. MRI of the left arm showed a lobulated lesion in the posteromedial aspect of the mid and distal arm, involving the triceps muscle and the medial aspect of the brachialis, and encasing the brachial artery, veins and median nerve (Figure 2).
Figure 2: MRI of Left arm: Well defined irregular lobulated enhancing T1 hypo intense, T2 and T2 flair hyperintense lesion in the posteromedial aspect of mid and distal arm.
PET scan showed an enhancing nodular soft tissue lesion in the middle lobe of the right lung, measuring 2.9 x 2.4 cm. Biopsy revealed metastatic adenocarcinoma. Immunophenotyping was performed for further characterisation and was negative for EGFR and ALK. The patient was treated with palliative radiotherapy and pemetrexed-carboplatin followed by pemetrexed maintenance, and recently three cycles of gemcitabine. The patient died of metastasis to the brain within eight weeks of diagnosis.
Discussion
Lung carcinoma is a leading cause of cancer-related mortality. The most common sites of distant metastasis in lung carcinoma are the brain, bones, liver, and adrenal glands1. The tumours that most commonly cause skeletal muscle metastasis originate from the thyroid, oesophagus, stomach, pancreas, colon, rectum, bladder, breast, ovary, and prostate; skeletal muscle metastasis of lung carcinoma was first reported by Fisher ER2. Willis RA reported four skeletal muscle metastases in an autopsy series of 500 lung carcinoma patients3. Skeletal muscle metastasis is a rare occurrence for any tumour, with a reported incidence of less than one percent4-5. The most common sites of muscle metastasis are the thigh muscles, iliopsoas, and paraspinous muscles6.
The mechanism of skeletal muscle metastasis is unclear. Despite its rich blood supply and large mass in the body, skeletal muscle is resistant to haematogenous metastases. Organs that are frequently sites of metastasis, including the liver, lung, and bone, have rich capillary networks and blood supply. As a result of muscle metabolism, substances such as lactic acid and free oxygen radicals, together with the low pH of the environment, constitute an infertile medium for proliferating tumour cells. In addition, mechanical insults due to contractions, high tissue pressure, and widely alternating blood flow also act against the survival of tumour cells7.
There are several hypotheses proposed for skeletal muscle metastasis in lung carcinoma. The most widely accepted is spread via the haematogenous route, in which tumour cells are believed to disseminate through tumour embolism. Some authors have suggested that skeletal muscle metastases might originate from abnormal lymph nodes found in the muscle. In a study by Bocchino M et al., 1754 lung carcinoma patients treated between 2007 and 2012 were analysed and forty-six (2.6%) had skeletal muscle metastasis8. Despite variations between studies in the association between histological subtypes and skeletal muscle metastasis, forty patients in that study (87%) had non-small cell lung carcinoma and six (13%) had small cell lung carcinoma. Among the non-small cell lung carcinoma patients, twenty-four (60%) had adenocarcinoma. The most common initial manifestation of skeletal muscle metastasis is pain, which can be accompanied by swelling of the extremity. The case presented herein also presented with pain and swelling. Diagnostic methods for skeletal muscle metastasis are not specific. Plain radiographs usually show only soft tissue shadows. MRI usually reveals a hypointense signal on T1 and a hyperintense signal on T2, and is preferred to distinguish soft tissue metastasis from an abscess, sarcoma, and other conditions9; similarly, our patient had a hypointense signal on T1 and a hyperintense signal on T2 sequences. The optimum treatment and prognosis of skeletal muscle metastasis from lung cancer are unclear. Depending on the clinical characteristics, treatment options include observation, surgery, chemotherapy and radiotherapy.
Conclusion
Lung carcinoma with skeletal muscle metastasis should be considered as a potential differential diagnosis in patients presenting with an intramuscular mass.
Acknowledgements
The authors acknowledge the immense co-operation of the patient and the help received from the scholars whose articles are cited in the references of this case report. The authors also acknowledge the authors, editors and publishers of all the articles, journals and books from which the literature for this case report has been reviewed and discussed.
Clozapine is an atypical antipsychotic; it is the treatment of choice for treatment-resistant schizophrenia and is more effective than conventional neuroleptic medications. However, Clozapine is associated with potentially life-threatening side effects, some of which appear early in treatment.
Myocarditis is an uncommon but serious early adverse event of Clozapine treatment, with the majority of reported cases occurring in the first 4-8 weeks.1 Clozapine induced myocarditis (CIM) can present with mild symptoms, but can progress rapidly to fulminant illness and thereafter heart failure and death.1 The symptoms and signs typically include dyspnoea, palpitations, chest pain, fatigue, flu-like symptoms, pyrexia and tachycardia.
Case Report
A 21-year-old Caucasian male with a two year diagnosis of schizophrenia and previously inadequate responses to Risperidone and Olanzapine was commenced on Clozapine. The patient had previously tolerated Risperidone and Olanzapine and did not experience adverse events, but there was inadequate therapeutic response to both; hence it was decided to commence Clozapine.
On admission, his physical examination, baseline blood investigations (these did not include cardiac markers such as troponin or C-reactive protein (CRP)) and electrocardiogram (ECG) were normal. His medical history was unremarkable and he did not have a family history of cardiac disease. He smoked 15 cigarettes per day.
A rapid Clozapine titration compared with the standard UK titration[2] was commenced with a target dose of 200 mg/day on day 14. He was not on any other psychotropic medication.
The patient remained asymptomatic in the first 3 days. On day 4, he developed tachycardia (114 BPM). A repeat physical examination and ECG was normal, eventually his heart rate settled to 94 BPM. The tachycardia was deemed to be a benign side effect of Clozapine, and the rate of titration was slowed down as a precaution.
On day 12, the patient reported dizziness when standing and a ‘cold air’ sensation in his chest. Nurses reported that blood pressure was normal with a heart rate of 145 BPM but when reviewed clinically his heart rate was 89 BPM. His titration was continued.
On day 14, the patient complained that his ‘internal organs were hurting’. His Clozapine dose was 125 mg/day at the time. He reported chest tightness with central pain, pain in his legs and abdomen, intermittent breathlessness and palpitations. The duration of his symptoms was 24-36 hours. Examination was normal except for a heart rate of 110 BPM. His ECG showed sinus rhythm with no ST segment or T wave changes. Blood tests showed a markedly elevated troponin I of 1211.5 ng/L (normal range: <34.3 ng/L), CRP of 176 mg/L (normal range: 0-10 mg/L) and eosinophil count of 1.28 × 10⁹/L (normal range: 0.02-0.5 × 10⁹/L).
The patient was afebrile throughout the titration period.
He was admitted to an acute hospital and a provisional diagnosis of Clozapine induced myocarditis was made. The echocardiogram did not reveal structural abnormalities or damage. An EBV (Epstein Barr Virus) serology was negative. Clozapine was withheld and the patient improved along with the blood markers, after 4-5 days he was discharged back to the psychiatric hospital.
Discussion
CIM is an often overlooked adverse event associated with Clozapine titration. Currently there is no mandatory requirement of laboratory monitoring for detecting myocarditis during Clozapine titration, unlike the mandatory requirement for detecting neutropenia, despite the roughly similar estimated incidence of the two adverse events at 3%.[3,4]
This case was unusual because of the very early appearance of symptoms, the patient’s age and the atypical symptom presentation. Although CIM is an early adverse event, onset within 2 weeks of initiation is unusual; the literature suggests that myocarditis typically presents within 4-8 weeks.[1] The patient was also younger than the reported median age of affected patients (30 years).[1] The symptoms appeared at a low dose of 125 mg/day, which the literature suggests is unusual, although CIM at doses of 50 mg/day has been reported.[5]
Tachycardia and fever are common early side-effects of Clozapine. Tachycardia usually settles after 4-6 weeks of treatment[6] and fever typically within 2-3 days.[7] Both symptoms can be suggestive of myocarditis, especially when they co-occur. CIM often presents in a non-fulminant form.[8] As this case demonstrates, many patients may not report symptoms when CIM is mild.
Increasing age, concomitant administration of sodium valproate and increased rate of dose titration are significant risk factors for CIM.[9] In this case, the patient was young and sodium valproate was not co-administered. The titration was originally intended to be rapid but slowed down soon after commencement.
Given the clinical difficulties in detecting mild CIM, we suggest that all patients have baseline troponin, CRP, heart rate, blood pressure, temperature, respiratory rate and ECG recorded. If the medical history reveals heart disease, a baseline echocardiogram can be obtained. If there is a history of congestive cardiac failure, then baseline brain natriuretic peptide (BNP) or N-Terminal pro-B-type natriuretic peptide (NTproBNP) should be measured.[10]
In clinically asymptomatic patients, if there is elevated baseline CRP (>100 mg/L), troponin, BNP or NTproBNP then Clozapine titration should not commence and further advice from cardiology should be sought.
Weekly CRP and troponin should be done in the first month of titration and levels repeated once after stable dose of Clozapine is reached. The dose increase should not be rapid.
Any tachycardia that develops should be assessed with reference to the baseline heart rate measured before commencing Clozapine. A heart rate greater than 120 BPM, or an increase of more than 20 BPM over the baseline rate, should prompt a review of physical health, blood monitoring, ECG, and the rate of Clozapine titration.
An increase in troponin above the upper limit of normal or an increase in CRP should trigger consideration of CIM. Literature suggests that troponin levels greater than twice the upper normal limit are indicative of acute myocarditis.[9] CRP rises on average 3 days before any increase in troponin levels is detected.[9] If the troponin levels are within the normal range and the CRP levels are raised but less than 100 mg/L, Clozapine titration can continue, but the pace must be slowed. Troponin and CRP levels should be measured daily and the patient closely monitored for clinical signs of developing cardiotoxicity.
We do not recommend routine eosinophil monitoring, as in 90% of cases this marker does not exceed normal limits at the onset of CIM and typically peaks 7 days after cessation of Clozapine.[11]
Conclusion
Clozapine induced myocarditis often presents with low level cardiotoxicity. Mild symptoms may be missed; however, progression to fulminant myocarditis can be rapid, with high mortality rates.[1] Myocarditis, including clinically asymptomatic myocarditis remains a risk with Clozapine every time the patient is titrated onto this medication[12]. Close clinical monitoring, high index of suspicion and monitoring of cardiac parameters will help early detection of adverse cardiac events.
World COVID-19 cases exceed 20 million at the time of writing, and the number of deaths surpasses 733,103. Behind these statistics is a great deal of pain and suffering. It is increasingly recognised that COVID-19 is not merely a respiratory disease. The face of COVID-19 is changing from a pulmonary disease to an inflammatory disease that particularly affects the blood vessels, the coronary vessels, the kidneys, the liver, the brain and elsewhere. Its duration is also much longer, and its long-term impact much greater, than initially speculated. Sufferers report a huge spectrum of problems beyond the three NHS-approved symptoms (persistent cough, fever and loss of taste or smell). These include fatigue, breathlessness, muscle aches, joint pain, 'brain fog,' memory loss, lack of concentration, and depression.
More morbidity is recognised in cases of infection among aged populations and patients with suppressed immunity. The high incidence of complications among ethnic minorities apparently points toward environmental rather than genetic factors of immunity. Underactive immune responses at cooler temperatures, diminished synthesis of vitamin D, and the genetic factors linked with these anomalies might explain only part of the higher incidence of COVID-19 among Black, Asian, and Minority Ethnic (BAME) communities.
Research on the first British patients to contract COVID-19 has shown that BAME people are more likely than white people to become critically ill and require critical care. This research, conducted by the Intensive Care National Audit and Research Centre, observed that of nearly 2,000 COVID-19 patients, 35% were non-white, though people of BAME heritage comprise only 13% of the UK’s population.1 The study included data drawn from 286 critical care units across the UK, collected until 3 April 2020. According to another study, from the UCL Institute for Global Health, the Bangladeshi, Pakistani, Indian, Black African and Black Caribbean ethnic groups all had a substantially increased risk of death in comparison with the white British and white Irish groups. Cook et al. pinpointed that of 119 NHS staff who died from COVID-19, 64 were from ethnic minority backgrounds.2 They also noted fewer deaths among critical care staff, highlighting the usefulness of PPE.
The UK BAME population’s mortality rate for the 2009 influenza A (H1N1) epidemic was nearly twice that of the white population.3 The Pakistani and Bangladeshi ethnic groups are now 1.8 times more likely to have COVID-19-related morbidity than white males of a similar age, when other sociodemographic and health characteristics are compared.4 Studies have also specified that Black men are 4.2 times and Black women 4.3 times more likely to die a COVID-19-related death than white people.5 Doherty et al. suggested that socioeconomic disadvantages and other circumstances only partially explain this discrepancy, and that there are gaps that have not yet been expounded.6
There is some confusion regarding the cause of this higher incidence of morbidity and mortality among the BAME community, as media coverage has failed to assess the intricacies of this relationship. Higher morbidity and mortality have been observed among first-generation migrants to the UK, but not necessarily among the second generation, who were born and raised in the UK. Five months is too short a period to develop any form of genetic immunity or susceptibility to a new viral infection, and it takes much longer for any sort of adaptation or mutation to occur. Each person’s genetic code differs only by 1% of 25,000 genes. The gene cluster largely responsible for our health is the human leukocyte antigen (HLA) complex, also known as the major histocompatibility complex (MHC). Suppressed general immunity due to various factors appears to be the main reason for the higher COVID-19 incidence among the BAME population. The following discussion examines the possible factors responsible for this anomaly.
a. Immunity and Temperature
Some data has suggested that immune cells are more active in higher temperatures, as supported by the fact that fevers are a bodily mechanism activated by the immune system to defend against pathogens. It has been established that if the body temperature is increased by 1°C, immunity instantly increases 5–6 times. Likewise, if the body temperature is reduced by 1°C, immunity decreases 5–6 times. This observation has some value in explaining the higher incidence of COVID-19 infection and mortality among BAME groups who were born and raised in warmer areas, then migrated to colder regions.
Temperature-dependent immune responses are linked to genetics. One probable explanation is that BAME individuals’ immune cells are genetically wired to function better in hot weather and are unable to optimally function in cold weather. Such a genetically determined immunological build-up means that their immune cells are slow to react to viral invaders. BAME individuals’ immune cells are well-adapted to warm weather but not so to cold weather, in comparison to white individuals who were born and raised in colder climates and adapted to lower temperatures. BAME individuals’ immune cells may even become underactive in cold weather. Though COVID-19 thrives equally well in hot and cold weather, the BAME population has immunity shortcomings in surviving colder months; this insight might prompt them to take special precautions in future cold seasons.
Low temperatures have been recognized as immunosuppressive. It has been observed that even coldblooded animals migrate to warmer places when they become ill. An increase in body temperature has long been a defence mechanism against infection and inflammation. The generation and differentiation of the lymphocytes CD8+ cytotoxic T-cells are enhanced by hyperthermia. Elevated body temperature changes T-cells’ membranes, which may help mediate micro-environmental temperature’s effects on cell function. Sub-thermoneutral laboratory housing temperature was shown to induce immunosuppression in mice experiments: when the mice were housed at a thermoneutral ambient temperature, striking reductions in tumour formation, growth rate and metastasis were observed.7
Mice experiments with antigens demonstrated that mice with antigen-induced raised temperatures showed a greater number of the CD8 T-cells capable of destroying infected cells.8,9 Parallels were observed in teleost fish.10 Higher temperatures also seem to interfere with microbe replication. This is particularly noticeable when a host has a high fever and their immune response is temporarily enhanced as their temperature rises. The rapid spread of SARS in Hong Kong in 2003 was attributed to the persistently cold weather. BAME communities whose immune mechanisms are genetically evolved for survival in higher temperatures are compromised in Western countries’ lower temperatures. The cold weather puts additional stress on their immune cells, which give in to viral invaders.
This argument is further supported by the spread of influenza. During flu season, immune cells become less efficient, and flu viruses, unaffected by low temperatures, are well placed to defeat them. This hypothesis explains the higher incidence of flu in winter and challenges the misconception that the flu virus itself is killed by hot weather and thrives in cold weather. The 1918 Spanish flu, which broke out in the United States in the winter, seemed to ease off during the summer but returned with a deadlier strain in the autumn, and a third wave followed the next year. The problem, then, seems to lie in humans, as viruses are unaffected by seasonal temperature variations.
b. Diminished Vitamin D Synthesis
Other contributing factors can explain the higher COVID-19 incidence among BAME people. Nearly all immune cells have vitamin D receptors, connecting vitamin D to networks across the immune system. Vitamin D helps regulate both the innate and adaptive immune systems and is critical for balancing immune function. It has been demonstrated to reduce the production of the pro-inflammatory cytokines associated with lung damage in acute viral respiratory infections, such as influenza and COVID-19.11 BAME communities are prone to vitamin D deficiency because higher melanin levels in their skin reduce cutaneous vitamin D synthesis; consequently, longer exposure to sunlight is required to produce the quantity of vitamin D generated in the white population. This is further exacerbated in colder countries like the UK, which see less sunlight and where BAME individuals spend more time indoors with little opportunity to synthesize vitamin D. So there may be a connection between lower vitamin D levels and more frequent COVID-19 cases in BAME communities, though there are no firm data supporting this link.
Virtually all immune cells have vitamin D receptors, indicating vitamin D interacts with the immune system. Vitamin D is required to regulate both the innate and adaptive immune systems and its deficiency is associated with immune dysregulation. Many of the ways this vitamin affects the immune system are directly relevant to the body’s ability to defend against viruses. For example, vitamin D triggers the production of cathelicidin and other defensins, which are natural antivirals capable of preventing viruses from replicating and entering cells. Vitamin D also increases the number of CD8+ T-cells, which play a critical role in clearing acute viral infections in the lungs. Further, vitamin D suppresses pro-inflammatory cytokines and may also alleviate the cytokine storms occurring in the most severe COVID-19 cases. This vitamin plays an essential role as well in glucose homeostasis, insulin sensitivity and the regulation of adipokines, such as leptin and inflammatory cytokines.12
Evidence from randomized controlled trials suggests that regular vitamin D supplements may help protect against acute respiratory infections. Admittedly, direct evidence of vitamin D’s role against COVID-19 is still scant. One study from the United States and another from Asia found a strong correlation between low vitamin D and severe COVID-19 infection. It is well recognized that the elderly and people with pre-existing conditions are more vulnerable to COVID-19. Notably, people with existing medical conditions are also often vitamin D–deficient. Studies assessing ICU patients have reported low vitamin D levels in these patients even before COVID-19. It appears logical to hypothesize a link between the high COVID-19 infection rates in BAME groups in the UK and US and their observed lower vitamin D levels. Moreover, it is not possible to gain a sufficient vitamin D supply through food alone, making exposure to sunlight indispensable.
c. Weakened Immunity
COVID-19’s spread in the UK is disproportionately high compared with its spread in the countries of origin of many BAME communities, so BAME people should be mindful of how their genetics interact with a new environment. These demographics also have higher rates of cardiovascular disease, type-2 diabetes and hypertension, conditions that have been linked to severe COVID-19 symptoms and complications. There may also be other genetic links that warrant further exploration.
Current evidence indicates that chronic stress can increase infection susceptibility by suppressing the T-helper 1 immune response in favour of the T-helper 2 response,40 so stress management, lifestyle changes and career management may in turn reduce infection susceptibility. When people are less mobile, food becomes a distraction and they can overindulge. Obesity is also an adaptation to cold weather, as fat protects against low temperatures. Black and South Asian populations in the UK have 3–5 times the prevalence of type-2 diabetes of the white population and are diagnosed 10–12 years sooner on average.13
Human immunity is largely fixed by age 5, shaped by bacterial flora among other factors. Several trillion bacteria exist within our body, with the gut considered this bacterial colony’s front yard. We have about 25,000 genes, but the up to 3 million genes of our bacterial flora are the real trainers of our immune cells.14 Bacterial worlds came into existence well before humans evolved. When people migrated to Western countries from tropical regions, their bacteria had to adjust to their host’s new lifestyle, with some even replaced; others may not have survived at all, and consequently these individuals’ immunity may have weakened.
The high incidence of diabetes and coronary heart disease in the British Asian population has been well recognized, along with many other risk factors and comorbidities, including obesity, chronic obstructive pulmonary disease, chronic kidney disease, hypertension and age. These may partly account for this population’s increased COVID-19 mortality,15,16 but warrant further exploration. Vitamin D level is also likely to drop with rising BMI and age,17 and obesity is strongly associated with vitamin D deficiency.18-22 Admittedly, there are weaknesses in these data collections and interpretations, which are theoretical speculations yet to be confirmed, modified or falsified.
There are diverse genetically linked immune responses in a given population, and the spread and survival of complications in a pandemic like this are not well-defined. Research has suggested that people living in close communities, like certain regions of Lombardy, Italy, have a poor genetically linked immune response to COVID-19. Like our fingerprints, immunity genetics contribute to our physical identity, but our immunity is not permanently fixed by genetics. There thus remain many unknowns in immunity research.
It is now increasingly recognized that immune ageing and organismal ageing are intimately interrelated: ageing weakens the immune system, and declining immunity further accelerates the ageing process. The immune system protects individuals against viruses and bacteria and also helps identify and remove cancer cells and toxins, and the potential for these elements to cause damage increases with age. Critical cells of the immune system decrease in number and become less functional as people get older. COVID-19 seriously affects ageing people in the BAME community, as in the general population, but they are more disadvantaged in terms of healthcare access.
d. Social Factors
Alongside these factors, many BAME individuals work in fields that carry a high risk of infection, such as healthcare, transport services and retail. In the UK, 40% of doctors, 20% of nurses and a large number of social care and unskilled migrant workers come from BAME backgrounds. Another reason these infections may become more prevalent among ethnic minorities is that BAME community members tend to spend more time indoors clustered together, often in cramped accommodation, which increases the likelihood of person-to-person transmission. A multigenerational family set-up does not help social distancing in a pandemic either, so this lifestyle could contribute to higher infection rates. Some people living in this arrangement may also become stressed and obese through unhealthy nourishment.
Migrant communities tend to visit and keep in contact with their countries of origin, which involves international air travel and thus increased infection risk. Family studies demonstrate that BAME people living in colder countries develop COVID-19 at a faster rate, while most of their family members living in the tropical climates of their countries of birth are spared the infection. Such an observation points towards extrinsic factors like lifestyle and weather conditions rather than intrinsic genetic links – nurture rather than nature. The second-generation immigrant population is less affected, and mixed-race individuals, because of the diversity of their immune cells, appear more resilient to the viral pandemic. There appears to be an epidemiological trend of transmission concentrating within BAME communities living in colder countries; such a situation runs the risk of racial stigmatisation and discrimination and poses risks to social cohesion.
BAME individuals should be cognizant of the additional risks and take preventive and precautionary measures; they should also adapt to balance their immunity. Enough sleep, a healthy diet, moderate exercise, abstaining from excess alcohol and smoking, and de-stressing are the cornerstones of enhancing immunity. Immunity is not the absence of a specific disease or illness; rather, it is a balanced physiological and psychological state, the most sophisticated and elegant system of human physiology. Vaccines are pathogen-specific and do not bestow an overall balanced immunity.
Supporting Immunity
To defeat the tiger, one may need to become stronger than the tiger. To do this with COVID-19, we may need to foster our existing immune system, which can be done in many subtle ways. We all must grapple with the unprecedented threat posed by COVID-19, and frontline health workers must be mindful of their own immune systems when advising their patients to do the same. Unreasonable fear of COVID-19 only weakens the immune system, and fear attracts that which is feared. According to many COVID-19 survivors, remaining positive is a crucial factor in combatting this illness. Knowledge about the enemy and our potential resources lessens fear and helps us plan strategies to defeat the adversary. With a quarter of the world’s population in the grip of COVID-19, this is a highly challenging period in which to learn to survive, strengthen body and mind, and enhance our immune system, even drawing on the wisdom of unconventional medicines and faith traditions. We will have to battle this invisible enemy until an effective vaccine is identified. Anything that fosters self-immunity should be encouraged in this time of global medical emergency.
The immune system has two functions: defending the body against disease and maintaining health. It is depicted as having two components: the innate and adaptive immune responses. The innate system is the more primitive and less specific; it is the body’s first line of defence against foreign substances that may lead to disease. The adaptive system, found only in vertebrates, is a much more specific, delayed response and requires sanction from the innate system to be instigated. Though considered separate, each interacts with the other in critical and complex ways. A basic understanding of both responses helps clarify and further substantiate the significance of immune balance.
There are many myths surrounding immunity enhancement. Enriching the immune system is possible, so that it becomes vigilant and active in the event of invasion by pathogens and may possibly prevent immunity anomalies; it is defending the defenders of the body. The immune system is our protective shield; metaphorically, immune cells are the guardian angels of the body. Balancing immunity can be achieved by focusing on ample sleep, a healthy diet, moderate exercise, weight monitoring, restricting alcohol, not smoking and de-stressing. In a nutshell, it warrants lifestyle changes – one size may not fit all, and immune balancing has to be adjusted on an individual basis.
a. Restful Sleep
One healthy habit vital to preventing sickness is getting a full eight hours of sleep each night, which may help regulate immune function.19 Studies reveal that people deprived of quality sleep are more likely to get sick after being exposed to a virus, and respiratory infection has been linked to poor sleep.20 One study of over 22,000 people, for example, found that those who slept less than six hours per night or had a sleep disorder were more likely to suffer colds and other respiratory infections.21 Lack of sleep can affect the immune system adversely.
During sleep, the immune system releases cytokines, some of which even help promote sleep. Certain cytokines need to increase during an infection or under stress, and sleep deprivation may decrease production of these protective cytokines. In addition, infection-fighting antibodies and cells are reduced during periods of inadequate sleep. Sleep and the circadian system are strong regulators of immunological processes.22 There is bidirectional communication between the CNS and the immune system, mediated by shared signals through neurotransmitters, hormones and cytokines, and by direct innervation of the immune system by the autonomic nervous system. Differentiated immune cells with immediate effector functions, like cytotoxic NK cells and terminally differentiated CTLs, peak during the wake period;23 these cells permit efficient and rapid combat of intrusive antigens and repair of tissue damage. The more slowly evolving adaptive immune response is initiated during nocturnal sleep, and undifferentiated or less differentiated cells like naïve and central memory T-cells peak during the night.
It is during sleep that the immune system heals, repairs and prepares for the challenges of wakeful periods. During the deep stages of NREM sleep the body repairs and recuperates, and this deep sleep also reinforces immunological memories of previous pathogens.23 The endocrine milieu during early sleep critically supports (a) the interaction between APCs and T-cells, as evidenced by enhanced production of IL-12, (b) a shift of the Th1/Th2 cytokine balance towards Th1 cytokines and (c) an increase in Th-cell proliferation, and (d) probably also facilitates the migration of naïve T-cells to lymph nodes.22 A feeling of lethargy when fighting an infection may be a signal from the body, which produces chemicals that act on the brain, to sleep so that the body can recover. A single night of poor sleep can lead to a dramatic reduction in NK cells, the first line of defence against viruses and cancer cells, which negatively impacts other immune cells.
b. Nutrition
The size of the inoculum, the virulence of the exposure, the immune response of the host and the health of the host are the four vulnerability factors in an infection. The first three are beyond the host’s control; the fourth is within it, and depends very much on nutritional status. Food is generally viewed in terms of calories, but nutritionists have begun to appreciate the noncaloric micronutrients in food, including those that are neither vitamins nor minerals but phytochemicals (plant chemicals) that strengthen and support normal immune function. The recent research finding that food not only supplies calories but also adds to disease resistance and longevity has rekindled interest in phytochemicals that support defensive and self-reparative functions. The modern diet consists of processed food mixed with additives, colouring agents and preservatives, leaving no room for unrefined vegetables in the dietary pie. Nutritional excellence can be achieved through green vegetables and fruits.
Antioxidants are vitamins, minerals and phytochemicals that support the clearance of free radicals and control their production in the body. Free radicals are molecules containing an unpaired electron, which makes them highly chemically reactive; these unstable molecules are destructive when they come into contact with structures and other molecules within cells.24 Antioxidants are the natural enemy of free radicals, which create inflammation leading to immune dysfunction and premature ageing. Vitamins C and E, folate, selenium, and alpha- and beta-carotene, as well as various other phytochemicals, have antioxidant properties. They are plentiful in vegetables and fruits, and consuming them enhances immune function. The Namboothiri caste of Kerala are famous for their strict vegetarian dietary habits, disease-free life and longevity. The nutritional status of the host is critical in permitting or protecting against viral and bacterial infections, and nutritional deficiencies in the host allow viruses to mutate into more lethal forms.24 This is evident in the meat-selling food markets where SARS-CoV-2 initially began to breed and mutate.
Pro-inflammatory foods can sabotage the immune system and should therefore be consumed in moderation. Thirty minutes after they are consumed, carbohydrates may begin suppressing the immune system, and this effect may last for up to five hours. Foods with strong diuretic properties also have detrimental effects on the immune system, which functions better when the body is well hydrated; drinking plenty of water facilitates efficient cell operation and allows the body to process food and eliminate waste. Following a diet rich in antioxidants is also essential to supporting the immune system, so eating fruits and vegetables is recommended: they are rich in antioxidants that combat free radicals, chemical by-products known to damage DNA and suppress the immune system.25 Choosing healthy fats – such as the omega-3 fatty acids found in oily fish, flaxseed and krill oil – over the saturated fats found in meat and dairy products is generally recommended by health authorities. These oils may help increase the body’s production of compounds involved in regulating immunity.26
Dietary supplements and medicines may be required for people who suffer from micronutrient deficiencies. Vitamin D is linked with a healthy immune system, and a large body of well-established data highlights its antiviral effects; it not only directly interferes with viral replication but also has immunomodulatory and anti-inflammatory effects.27-29 A research study in the U.S. suggests that having low levels of vitamin D doubles the risk of death due to heart attack compared to having higher levels.30 It is therefore recommended that all UK citizens take a vitamin D supplement between October and March to help maintain healthy levels during less sunny months. Such supplements are available in several forms, including capsules, sublingual sprays, and liquid drops, that are usually oil-based, as the vitamin is fat-soluble.
Nutritional excellence is within one’s own control. An over-boosted immune system, however, can lead to autoimmune reactions, so it is important to balance supplements and not over-boost. Moreover, vitamin D toxicity can cause hypercalcemia, which may lead to excess calcium deposits in the kidneys, lungs or heart. A well-balanced diet is crucial in balancing immunity. An ideal immunity diet maintains caloric balance and consists of healthy fats, phytonutrients, fibre, quality carbohydrates and diverse protein sources. Multiple micronutrients, including lutein, lycopene, folate, bioflavonoids, riboflavin, zinc and selenium, among many others, have immune-modulating functions.31 In general, the Mediterranean diet pattern has been praised as anti-inflammatory and good for fortifying immunity,32 and adherence to it is associated with older age, increased activity and reduced stress.
c. Hygiene
Simply keeping the hands clean is one of the best ways to ward off illness, according to the Centers for Disease Control and Prevention (CDC). Washing the hands for 20 seconds with warm water and soap before preparing food or eating, as well as after coughing, sneezing, using the bathroom or touching public surfaces, can prevent the invasion of several pathogens. The hygiene hypothesis in medicine is quite often misinterpreted and misunderstood: it does not suggest that having more infections during childhood would be an overall benefit.
The hygiene hypothesis promulgates the view that early childhood exposure to particular microorganisms such as the gut flora and helminth parasites shields against allergic diseases by contributing to the maturation of the immune system. Lack of exposure is thought to lead to defects in the establishment of immune tolerance. The time period for exposure to microbes commences in utero and probably terminates at school age.
d. In Praise of Microbes and Nature
The preindustrial lifestyle that provided a daily intake of trillions of friendly microbes has been replaced by a world of sanitisers and wet wipes. Alternative medicine takes into account the friendly bacterial flora inhabiting the human body, and we ought to be mindful of their role in balancing immunity. Humans are governed by about 25,000 genes, but if the genes of the microbes cohabiting with us are taken into account, the total exceeds 3 million; in effect, these microbial genes are the immunity trainers and coaches of human immunological genes. Conventional medicine also recognizes intestinal microbes and their role in health and illness. These GI-tract microflora constitute approximately 95 per cent of the total number of cells in the human body and play a prominent role in the health of our immune system. Indeed, this gut bacterial flora is the meeting point of alternative medicine and modern medicine.33
The intestinal microflora serve several useful functions, which may include supplementing the digestive process, producing vitamins and short-chain fatty acids, protecting against the overgrowth of pathogenic bacteria and yeasts, strengthening immune abilities and generating beneficial nutrients that prevent weight gain.24 Pathogenic bacteria, on the other hand, produce toxic substances, become bacterial invaders, cause digestive disturbances, trigger immune system dysfunction and even stimulate weight gain.
Modern urban life offers little microbial variety and poor contact with helpful environmental microbes.34 Asthma and allergies are found to be much less common among children brought up on farms who drink farm milk.35 People living in urban areas are more susceptible to allergies and inflammatory diseases, while children exposed to outdoor microbes have more robust immunity. Obese people with 30% fewer intestinal microbes tend to gain more weight.36
People should enjoy the smell of green grass and appreciate the healing powers of mother nature. Ecopsychology, the study of the connection between the natural world and the human world, is a new branch of psychology. Studies have revealed that spending time outdoors in nature can actually reduce stress, improve our overall emotions and feelings of happiness and wellbeing, raise energy levels and enhance immunity. It is healthier to exercise outdoors than indoors. A lifestyle admirably adapted to mother nature alone can guarantee robust mental and physical health.
e. Antibiotic Overuse
Research findings suggest that antibiotic abuse can damage the immune system and cause memory problems through a lack of growth of new brain cells. Overusing antibiotics, which happens when they are overprescribed or prescribed inappropriately, has many negative outcomes. First, prescribing antibiotics for a viral rather than bacterial infection brings no relief of symptoms and has no rationale. It also disrupts the normal, healthy flora of the digestive system, which can take nearly two years to recover and can lead to other infections. Antimicrobial resistance is another established complication of overprescribing antibiotics. Antibacterial agents account for 25% of all adverse drug reactions in hospital patients.37
Even a single course of antibiotics has a detrimental effect on the gut flora and can result in harmful alterations in its composition and diversity, disrupting this ecosystem.38 Antibiotic exposure in children has a long-standing impact on health and is linked with increased risk of immune system disorders; antibiotic-induced autoimmunity has been reported. Low levels of antibiotic administration made mice fatter by up to 40%.39
f. Physical Activity
To enjoy a good night’s sleep, one has to be pleasantly tired. Being active reduces stress and makes individuals feel more energetic and alert, thereby helping the body prepare for better sleep. The main principle underlying exercise is keeping the body moving. Stress hormones are released slowly during exercise, which has a favourable effect on the immune system.40 Physical activity can also facilitate the clearing of bacteria from the pulmonary system and can alter levels of white blood cells and antibodies. It is believed that during exercise, leucocytes and antibodies move faster in the circulatory system, allowing them to detect internal threats and diseases sooner; however, there is not yet proof that these changes prevent infections. Bacterial growth may also be blocked by the increased body temperature during and after exercise, which may help the body fight infection in a way similar to a fever.
Keeping muscles active releases high levels of interleukin 7 into the blood, which helps stop the thymus from shrinking; this supports the production of new T-cells and balances our immunity. Maintaining a healthy basal metabolic rate is crucial. Walking is the simplest yet highly effective exercise: regular walks strengthen the immune system, improve mood and energize the body, and walking in green spaces can give a big mood boost. Walking has no set rules and can be carried out in the busiest cities or the sprawling countryside. Too much exercise, however, can become a stressor for the body and turn out to be counterproductive.
g. Immunity and Obesity
Obesity is the result of a disruption of energy balance that leads to weight gain and metabolic disturbances causing tissue stress and dysfunction.41 Metabolic syndrome, a cluster of metabolic disorders rampant in the 21st century, combines conditions such as diabetes, hypertension and obesity. It is also linked to several types of cancer and has strong inflammatory underpinnings connected to dysregulated immunity.
Obesity and immunity are inversely related. It has been observed, for example, that the same amount of vaccine generates different immune responses from obese and lean people. Obesity has been identified as a modifiable risk factor of severe COVID-19, but weight loss also brings other health benefits. Having a healthy body weight is important in maintaining strong immunity because the presence of too many fat cells suppresses immunity. Obesity can depress the immune system by reducing the body’s ability to produce leucocytes, generate antibodies and locate infection sites. Persistently enlarged fat cells place a body in a constant state of inflammation, keeping the immune system permanently on the go. Maintaining the right amount of body fat is crucial to immunity and health.
h. Alcohol Impairs Immune Cells
Much remains unclear about the impact of alcohol consumption on the immune system. Alcohol abuse results in diminished liver and pancreas functioning, which can lead to immune system problems, and chronic alcohol abuse is linked with pneumonia. Alcohol has an immunosuppressant effect, and binge drinking is particularly detrimental. One study reports that after four shots of vodka within a 20-minute period, blood samples revealed initial ramping-up of the immune system followed by sluggish immune responses a few hours later. Acute and chronic alcohol use impedes cellular immune function, placing binge drinkers at greater risk of bacterial and viral infections. A multi-layered interaction between alcohol and immunity exists, and alcohol abuse has negative effects on both innate and adaptive immunity.42
Drinking alcohol immoderately can damage the immune system in two ways. First, it reduces the availability of nutrients, depriving the body of resources that strengthen immunity. Second, it can hinder the ability of white blood cells to destroy microbes. It is well recognised that excessive alcohol consumption suppresses white blood cell replication, inhibits the action of killer white cells on cancer cells and hampers macrophages’ production of tumour necrosis factors. Immune system damage increases in proportion to the quantity of alcohol consumed. While wine promoters assert that one daily glass of red wine may help maintain health, any amount of alcohol large enough to cause intoxication is also large enough to suppress immunity.
A perilous myth circulating among the public holds that consuming high-strength alcohol can kill the COVID-19 virus; it has stemmed from fear and helplessness and is totally unfounded. Consuming any alcohol poses health risks, but consuming high-strength ethyl alcohol (ethanol), particularly if it has been adulterated with methanol, can result in severe health consequences, including death. Alcohol consumption is associated with a range of communicable and noncommunicable diseases and mental health disorders, which can make a person more vulnerable to COVID-19. In particular, alcohol compromises the body’s immune system, as described above, and increases the risk of harmful health effects. Though data on the link between alcohol and COVID-19 remain limited, past evidence shows alcohol consumption can worsen the outcomes of other respiratory illnesses by damaging the lungs and gut and impairing the cells responsible for immune function.
i. Avoiding Substance Use
Marijuana, cocaine, heroin and other opiates are widely used illegal drugs. Drug abuse compromises immunity, so it is imperative to stay clear of illicit drugs during a pandemic. Numerous clinical reports indicate an association between infectious diseases and the use of illegal drugs. These drugs alter not only neurophysiological and pathophysiological responses but also immune responses. It is thus vital to determine the mechanisms through which drugs compromise immune responses, both independently and in concert with immunosuppressive viruses.43
Snorting cocaine harms the mucous membranes of the nasopharynx and pulmonary areas, increasing the chance of upper respiratory infections.44 Marijuana affects several kinds of cells in the body, which can ultimately harm the immune system. Smoking marijuana reduces the body’s ability to resist infections from viruses, bacteria, fungi and protozoa, and this suppression may also reduce the immune system’s ability to destroy cancer cells. Drugs of abuse include heroin, morphine, fentanyl, opium and prescription painkillers. While all narcotics have some effect on the immune system, injecting drugs into the veins increases the risk of viral infections like HIV and hepatitis B or C (through needle sharing) and of bacterial or fungal infections; this is especially dangerous in people whose immune systems are already compromised. Crushing and snorting narcotic drugs can also increase the risk of upper respiratory infections through damage to the mucous membranes of the nasopharyngeal region. Morphine and related opioids have been found to directly impact white blood cells, which can reduce the immune system’s ability to react to diseases.45
j. Pulmonary Health
As with marijuana and crack cocaine, smoking cigarettes can lead to upper respiratory problems and a lowered immune response to infections in the pulmonary system.46 Studies indicate that smoking increases the risk of more severe lung disease in cases of SARS-CoV-2 infection. It has been argued that exposure to cigarette smoke increases the number of infected and apoptotic cells in the airway and that SARS-CoV-2 prevents the usual repair response to airway injury.47 SARS-CoV-2, the causative agent of COVID-19, infects cells by binding to the angiotensin-converting enzyme 2 (ACE2) receptor present on host cells. ACE2 is highly expressed in ciliated cells of the upper airways. Smoking is linked with both a negative progression and adverse outcomes of COVID-19.48
Smokers touch their lips frequently, which may accidentally pass the virus to their mouths, and they tend to have existing respiratory conditions consequent to their smoking habit. These factors make them more vulnerable to viral respiratory infections and more prone to COVID-19-related complications. Indeed, smoking has been linked to a plethora of respiratory diseases and poor disease prognosis.49 Smokers are more vulnerable to infectious diseases because smoking harms the immune system, adversely affecting how it responds to infections.50 During the previous MERS-CoV epidemic, for example, smokers were found to have high mortality rates.51 One retrospective analysis of 78 patients in China found that smoking was correlated with greater COVID-19 severity and poorer prognosis. Though analytical studies conflict,52 smoking continues to be linked with higher risk.53
k. Balancing Bodily Temperature
It has been claimed that a rise in body temperature of 1°C above normal results in an almost instantaneous five- to six-fold increase in immune activity. Conversely, as soon as the body temperature drops, the activity of white blood cells is retarded, resulting in a decrease in immunity. Low temperature is well recognised as immunosuppressive. It is an accepted fact that fever is the body’s defence mechanism that activates the immune system in response to inflammation. The immune system functions optimally at comfortably higher temperatures and becomes underactive in cold environments, which is why seasonal infectious diseases such as influenza are more prevalent at lower temperatures. Warm temperature restricts viral replication through type I IFN-dependent and -independent mechanisms in vitro.54 In addition, both humidity and temperature affect the frequency of influenza virus transmission among guinea pigs.55 One way to warm the body is to be metabolically active while keeping a relatively high room temperature and/or wearing warm clothes. The significance of thermal balancing for maintaining healthy immunity has been discussed in preceding paragraphs.
l. Destressing
Good relationships protect mental health and wellbeing. People who are more socially connected are happier, physically healthier and live longer. One should put more time aside to connect with friends and family, learn to live in the present and switch out of work mode whenever possible. It is important to invest time in relationships, value them and make them a priority, listen to others, and speak openly about feelings. People should be good listeners and attend to the needs of others; happiness is, in part, a reflection of what one does for others. People should make an effort to surround themselves with positive individuals and allow themselves to be listened to and supported. The keys to destressing and happiness are honesty and respect for others.
Severe anxiety suppresses the immune system, and the coronavirus may thus, in effect, feed on fear. Relaxing and focussing on the present, however, can improve mental health and counteract negative feelings. Various forms of meditation and progressive muscle relaxation techniques, for example, can help one unwind from the assault of daily stressors, and such post-work relaxation may enhance the immune system. Incorporating relaxing practices like meditation, yoga or deep breathing into a daily routine has been found to be helpful. Psychological health and immunity are causally related. The current pandemic is forcing us all to adjust to new and strange ways of life, which can adversely affect mental wellbeing.
While short-term exposure to stressors can accelerate immune defence, prolonged stress may wear down the immune system, increasing vulnerability to illness.56 In this way, chronic stress can be a killer. Immuno-psychiatry is a fledgling sub-speciality which deals with the immunological components of psychosis and depression.57 The autoimmune aetiology of schizophrenia is gaining ground,58-60 and the neurotoxic effects of the cytokine storm due to COVID-19 have recently attracted considerable attention.
Extra-physiological Immunity?
The chemical effects of allopathic medicines are a scientific reality, but their therapeutic effects are also partially due to the placebo effect. Placebos are aimed at the symptomatic relief of illnesses. Disease and illness have different connotations: disease is understood scientifically, in terms of pathophysiology, whereas illness is understood phenomenologically, as a lived experience.61 It is increasingly being recognised that what we call the ‘placebo effect’ may involve changes in brain chemistry induced by quantum bioenergy fields, implying that the placebo effect may be a quantum reality created by the mobilisation of quantum bioenergy fields.62 The placebo effect is believed to be brought about when the subjective mind produces medicinal agents and accelerates the healing process. It is estimated that up to 40 per cent of the effect of medicinal drugs may be a placebo effect. The placebo effect often seems to be associated with measurable changes in brain chemistry, with quantifiable changes observed in neurotransmitters, hormones, and immune regulators.63 Placebos also relate to the disposition to heal, no matter what treatment is offered, if those being treated believe the treatment is helpful.64 Expectations appear to have a significant influence on the effects of drugs. The very existence of placebo effects offers indirect evidence for the existence of an extrasomatic energy system, and we need to incorporate their effect into immune balancing. A quantum conceptual model of the placebo is essential to understand certain hidden channels of the medical sciences. The placebo component of immunity is highly significant and needs further evaluation.
Immunity is not a single entity; it is a system, and for a system to function well, it requires balance and harmony. Not everything about the immune system is known to science, and according to integrated medicine, immunity may not be confined to physiology alone but may have non-physiological aspects as well. Numerical age and physiological age are two different things, particularly if an extra-physiological energy system is brought into the equation of immunity. It is true that the existence of extra-physiological systems is not scientifically well established, but they are strong hypothetical possibilities.
Studies of quantum bioenergy fields should be an integral part of the science of human physiology, and homeostasis should be redefined as the state of steady internal physical and chemical conditions maintained by different regulators, including extrasomatic energy fields.65 Humans are multidimensional, psycho-spiritual entities with several layers of energy bodies of increasing subtlety.66 Complementary medicines work on the assumption that humans are associated with a subtle energy system in addition to their material body. Even though such extrasomatic energy systems are not recognised in the modern medical sciences, there are energy fields that cannot be explained by the classic Maxwell-Schrödinger equations.
The material body and energy bodies are in a complementary relationship: if the material body is the container, vital energy is the content.67 Beverly Rubik postulated that biological systems may be regarded as complex, non-linear, dynamic, self-organising systems of energy and field phenomena.68,69 Many researchers have attempted to bring the existence of extrasomatic energy fields into the arena of mainstream science.70-73 If such quantum bioenergy fields really exist, they may play a major role in maintaining homeostasis in human physiology, and it would be of great clinical interest to evaluate their role in immune system functionality, as long as they do not overrun scientifically accepted views. To bring the concept of extra-physiological immunity into immunology, we may also have to accept the possibility of ‘nano immune cells’ and a ‘nano-level immune mechanism’.
Conclusion
The high incidence of complications among ethnic minorities points toward thermal conditioning and the role of immunogenetics. Underactive immune responses at cooler temperatures, diminished synthesis of vitamin D, and the genetic factors linked with these anomalies might explain part of the higher incidence of COVID-19 among BAME people. It is normally the physically and psychologically resilient people of a community who migrate overseas. When migrants develop mental health problems, these tend to manifest within 6 months of migration, whereas physical health problems can arise at any time during their stay abroad, as the weakening of immunity is a slow process. COVID-19 has a direct impact on co-existing disease processes, worsening them because of the added impairment of immunity. There are still gaps in our understanding of the pervasive occurrence of this viral affliction among BAME people. They should be mindful of the vulnerability factors, and special precautionary measures should be adopted to prevent the infection.
COVID-19 appears to be a test of self-immunity. Efficient tests, novel treatments, and vaccines are the three means of combating COVID-19. A safe and effective vaccine with long-term immunological properties would drastically change the pandemic situation for good, but vaccination and novel forms of medication will take some time to become available. In such circumstances, one way of protecting oneself from COVID-19 is by balancing one’s own immune mechanisms. While preventive measures against the contagion are amply promoted, there should be more awareness of how to improve personal immunity. More research is warranted in immunology, including extra-physiological immunity. Strengthening immunity is achievable for everybody if sufficient attention is paid. Thus far, the research findings of the pandemic are inconsistent, and many dimensions of this pandemic warrant further clarification. COVID-19 will have a serious impact on virology, and the neurotoxic effects of the cytokine storm may be a stimulus for the growth of immuno-psychiatry.
Science is well equipped to study the physical and visible, but it has obvious limitations when it comes to the non-physical. The non-physical is undetectable only because it cannot be identified with present-day instrumentation; it may become detectable as technology advances, and its presence should not be stubbornly denied. The well-established placebo effects may point towards the existence of quantum bioenergy fields. The existence of an extrasomatic energy system indirectly supports the concept of extra-physiological immunity. On this view, placebo effects are not psychological artefacts but quantum manifestations. If extra-physiological immunity exists, it may be guarding and supervising the physiological immune system.
Processed sugar has a high glycaemic index (GI) as it is easily digested and absorbed, triggering a prominent insulin response which, if repeated over time, leads to insulin resistance and type 2 diabetes1, 2. The appealing nature of high-calorie sugary foods, combined with their low satiating value, means they also tend to be eaten in excess, which contributes to obesity and metabolic syndrome2, 3. Obesity and diabetes raise the long-term risk of poor gut health and chronic inflammation, increasing the risk of chronic fatigue, low mood and degenerative conditions such as cancer, cardiovascular disease, dementia and stroke2, 3.
Despite these obvious risks, a recent survey of NHS health care professionals reported that over half are overweight and over a quarter are living with obesity4. Both obesity and high sugar content-foods are associated with musculoskeletal disorders, lower mood, unhappiness, fatigue and depression which significantly contribute to sickness absence from work4, 5, 6, 7.
Despite these risks, sugar consumption continues to escalate, especially in low- and middle-income countries: global consumption has grown from 130 million tonnes in 2000 to 180 million tonnes in 20208, and its production contributes to poor health as well as greenhouse gas emissions and deforestation9, 10.
In an attempt to reduce sugar intake, NHS England introduced a voluntary reduction scheme in July 2017, recommending that NHS Trusts and retailers on NHS premises reduce the proportion of monthly sugar-sweetened beverage sales. In March 2018 they reported a reduction in such sales, as a proportion of total drink sales, from 15.6% to 8.7%11. However, to date, there is no information as to whether this has had any impact on sugar consumption, wellbeing or weight reduction. In our cancer unit there is a constant availability of sweet snacks, predominantly gifted by patients, and during busy clinics these often replace balanced meals. Some argue that this display of sugary foods, together with the high proportion of overweight staff, undermines the NHS’ ability to give patients ‘credible and effective’ behavioural lifestyle advice.
The hypothesis for this intervention was that removing sugary foodstuffs from the field of vision at nurses’ stations and replacing them with fruit, nuts and seeds would enable healthy snacking, resulting in weight loss and improved mood.
Methodology
This pilot intervention used quantitative methods to observe the feasibility of delivery and outcome of a real-world intervention. This project was registered with and approved by Bedford Hospital NHS Trust Research and Development Department, but classed as a practical service evaluation, hence no Ethics approval or written consent was required.
Participants: Fifty-eight members of staff at the Primrose Unit, Bedford Hospital were invited to participate in this 3-month nutritional intervention; 44 (75%) volunteered. The cohort consisted of 36 nurses, 2 consultants, 2 secretaries and 4 administration staff. There were 41 females and 3 males, aged 28-72 years (average age 45 years). A further 100 consecutive patients attending for treatments were asked for their views on the intervention.
Measures and outcomes: The primary endpoints were Body Mass Index (BMI) (Kg/m2) and happiness measured with the previously validated Subjective Happiness Score (SHS)12. As a secondary end point, patients attending the Oncology unit during the intervention period were asked anonymously for their opinion and likely influence on their eating habits.
Procedure: At baseline the Primrose Unit research department recorded staff demographics, BMI and SHS questionnaire scores. From the date of entry of the first participant (June 2019) to completion of the last participant (September 2019), all sugary foodstuffs were removed and replaced with bowls of mixed whole and dried fruit, seeds and mixed nuts. Non-participating staff were asked to voluntarily keep sugary items out of general sight. At baseline, 3 months and 5 months, participants were weighed by one of the research team and completed a SHS questionnaire.
In the final month of the intervention, 100 consecutive patients attending for treatments at the unit were asked their opinion of this intervention, specifically if they felt that removing sugary items from public display was a welcome gesture and whether seeing staff making efforts to reduce sugar intake would encourage them to do the same.
Statistical methods and analysis
The completed dataset was compiled in an Excel spreadsheet and then transferred for independent statistical analysis. Pre- and post-intervention weight differences were analysed with the paired t-test, as were differences in happiness scores. Differences in participants’ opinions were analysed with the chi-squared test. There were no missing data, and in view of the relatively small cohort, sub-group analysis was not planned or performed. The study advisory committee predetermined that a change in weight of 1 kg was meaningful13.
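The paper does not state which software performed these tests, and the raw data are not published; purely as an illustrative sketch, the two test statistics named above can be computed with the Python standard library alone (the weights below are made up, not study data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(before, after):
    """t statistic for paired samples: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    # compare the result against the t distribution with n - 1 degrees of freedom
    return mean(diffs) / (stdev(diffs) / sqrt(n))

def chi_squared_statistic(observed, expected):
    """Goodness-of-fit chi-squared statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative paired weights (kg); a positive t indicates weight loss
before = [72.0, 70.5, 75.2, 68.9, 74.1]
after = [71.0, 70.1, 74.0, 68.2, 73.5]
t = paired_t_statistic(before, after)

# A 94-vs-6 opinion split tested against a 50/50 null, as in the patient survey
chi2 = chi_squared_statistic([94, 6], [50, 50])  # = 77.44
```

The chi-squared value is then compared against the chi-squared distribution with one degree of freedom, where 77.44 corresponds to p<0.001.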
Results
Average weight: At baseline the average was 72.12 kg, and 71.23 kg at 3 months, an average loss of 0.89 kg (t-test p=0.02). The average weight at 5 months was 71.09 kg, an average loss of 1.03 kg from baseline (t-test p=0.01). Twenty participants (46%) lost >1 kg in weight (average 3.01 kg), as opposed to 7 participants (16%) who gained >1 kg (average 2.23 kg), t-test p<0.03.
Happiness score: The average happiness score increased from 21.65 to 23.44 (+6.6%, t-test p<0.04). Amongst those who lost >1 kg in weight, the average happiness score increased from 21.54 to 23.75 (+9.3%, t-test p<0.03). In those who gained >1 kg, the average happiness score decreased from 22.28 to 21.43 (-3.8%, t-test p<0.08). There was a 13.1% difference in happiness score between those losing >1 kg and those gaining >1 kg in weight (p<0.001).
Patient opinion: 94 patients (94%) indicated that this initiative gave a good impression; 6 (6%) were not sure or felt it did not give a good impression (chi-squared p<0.001). Ninety-seven (97%) indicated that the initiative would encourage them to reduce sugar in their own diet, versus 3 (3%) who were not sure or felt that it would not change their behaviour (chi-squared p<0.001).
Discussion
This small pilot evaluation has a number of methodological weaknesses, but what it lacked in statistical strength it gained in novelty and potential importance. To our knowledge, it was the first nutritional intervention involving hospital staff within routine working practice. It addressed a health issue which affects hundreds of thousands of health workers every year, and demonstrated that a practical behavioural-change initiative was welcomed by the majority of staff (75%), with no drop-outs or objections from non-participating staff. This implies that a larger national study would be feasible.
These data demonstrated a statistically significant and meaningful weight reduction, similar to that achieved by the best-designed weight-loss programmes14. A fundamental rule of behavioural change is not to dictate to people, but to encourage them to want to make the decision to change for themselves. This simple intervention did not stop staff eating what they wanted, as there was no restriction on their overall food choices. The big difference was that, within their field of vision, there were healthier fruit and nuts instead of the high-calorie, sugar-laden foods that are usually readily available.
This intervention was overwhelmingly supported by patients. Surveys have repeatedly reported that patients look to health workers for guidance, and this study confirmed that this manoeuvre made patients think about their own eating habits. Although a further trial would have to establish whether this initiative objectively reduces processed sugar intake amongst patients, a reduction in intake would confer considerable benefits, as several large cohort studies have linked high sugar intake with a higher risk of cancer, greater complications of treatments and worse outcomes, for several reasons3.
Sugary foods increase the risk of weight gain, which is already more common after cancer; increase levels of oestrogen in post-menopausal women; and increase insulin-like growth factor (IGF) and other hormones such as leptin, all of which, in laboratory experiments, increase proliferation and markers of aggressiveness and spread in cancer cells2, 15, 16, 17. Cohort studies have also reported that those who ate more than 10% of their daily calories as sugar had higher total and LDL cholesterol levels, further adding to the cardiac risks of Herceptin and anthracycline chemotherapy drugs. Independently of obesity, high sugar intake directly increases the risk of type 2 diabetes (T2D) by overloading the insulin pathways1. Individuals with T2D have higher serum insulin levels (hyperinsulinaemia), which triggers proliferation in cancer models18 and is linked to higher oxidative stress and low-grade chronic inflammation, causing genetic and epigenetic damage and ongoing malignant transformation19. These laboratory findings are supported by several cohort studies which have linked diabetes with a higher risk of cancer and a higher risk of relapse post-treatment20.
Patients on chemotherapy should be particularly discouraged from eating sweets and cakes, as they are more prone to dental caries, which contributes to the risk of osteonecrosis following subsequent bisphosphonate therapy. Dental caries may also be a risk factor for bowel cancer itself, as DNA from bacteria commonly found in caries (Fusobacterium) has been detected in the genomes of bowel cancers but not in normal gut21.
Patients receiving the new generation of targeted therapies should be particularly vigilant about their sugar intake. As PD-1 inhibitors recruit the body's immunity to recognise and target cancer cells, the influence of diet and lifestyle is becoming even more important. Studies have demonstrated that better gut health is linked to significantly better response rates. Processed sugar is the preferred fuel for pro-inflammatory Firmicutes bacteria, whilst the healthy Bacteroidetes utilise glycans from the breakdown of polyphenols, which explains the inverse correlation between sugar intake and gut health22. Whole fruit intake, by contrast, is associated with better gut and general health, as it provides polyphenols which feed healthy bacteria3, 23. Despite containing 9-14% fructose, the fibre and pulp make fruit satiating and slow gastric emptying, thus reducing the GI3. Additionally, the polyphenols in fruit, vegetables, nuts, legumes, herbs and spices slow the transport of sugar across the gut wall by inhibiting sodium-dependent glucose transporter 1; they also enhance insulin-dependent glucose uptake and activate 5' adenosine monophosphate-activated protein kinase, which explains why their regular consumption is associated with a lower risk of T2D3, 23, 24. They further reduce gut and systemic inflammation; enhance antioxidant enzyme production, so reducing intracellular oxidative stress; and reduce the risk of cancer and other chronic diseases, including those associated with diabetes3, 25, 26.
The evaluation was not robust enough to measure whether this resulted in less sickness absence, but this endpoint should be included in a larger design. It also did not include data for those staff who did not actively participate, but who benefited from removal of sugary foods from their work areas; the evaluation committee did not receive any complaints or objections to their removal.
Government initiatives such as a sugar tax and public information campaigns may help, but as individuals within the NHS, we have an opportunity to influence our staff, the patients whom we serve and the wider public. The evaluation reported in this paper is a small start, but it demonstrates that a multicentre study would be feasible and, if the results are confirmed, it could initiate a national change in cultural attitudes towards sugar in the NHS.
Across the UK there has been a reduction in the number of children and young people (CYP) presenting acutely to hospital during the COVID-19 pandemic. This was highlighted in a recent survey of consultant paediatricians in the UK and Ireland1. It showed that not only were fewer children being brought to emergency departments, but there were also delays in acute presentation of critical illness (such as sepsis and diabetic ketoacidosis) and reductions in referrals for cancer treatment and child protection assessments1.
The reasons for the reduced attendance are thought to be related to the initial government messaging of Stay Home, Protect the NHS, Save Lives2. However, as it became clear that not only parents, but other potential patients were not presenting even if warranted, the government adjusted the messaging to make it clear that the NHS was still open for urgent care that was not just COVID-19 related.
In CYP the causes of delayed presentation are likely to be manifold: parents following the initial governmental message; families concerned that hospitals were unsafe; and the initial presumption that COVID-19 in CYP would present in the same manner as in adults, potentially leading to primary care and NHS 111 pathways channelling them to domestic isolation. Some delays in hospital presentation may be due to reduced referrals from primary care, which in turn may be influenced by fewer CYP accessing their local General Practice. The ‘Take the Temperature’ survey, which assessed the views of 1535 respondents (predominantly aged 16-25 years), found that “85% knew that they shouldn’t go to a doctor if they got the virus”3. However, it is possible that CYP and parents may not be able to make the often challenging differentiation between symptoms of COVID-19 and those of another illness in need of medical attention.
There has been a significant increase in pressure on many aspects of the health service, including on primary care. Automated telephone messages have been used as a tool by General Practice to direct service users to the correct service or point of care for some time. As such, it is unsurprising that automated messages may be used to try to address some questions about the pandemic prior to speaking to a call handler at a practice. In addition to this, significantly limiting face to face contact with patients during the pandemic in Primary Care has been essential to prevent the potential spread of the virus and closure of services. We aimed to review the initial advice that parents and carers may be receiving from their first point of contact when telephoning their local General Practice and whether this considered CYP specifically.
Methods
All General Practices within four Clinical Commissioning Groups (CCGs), namely NHS Sheffield CCG, NHS Manchester CCG, NHS Leeds CCG and NHS Birmingham and Solihull CCG, were identified using the NHS website. These CCGs were chosen as they cover large cities with diverse populations.
Practices were contacted only within their standard opening hours, by three of the authors, within a four-day period (7th July 2020 to 10th July 2020). All practices were telephoned and assessed against the questions shown in table 1:
Table 1: Questions asked during data collection
Was there an automated message? (Yes/No)
Was COVID-19 mentioned in the automated message? (Yes/No)
Was there advice to stay away from the practice if COVID-19 symptoms were present? (Yes/No)
Was there advice to self-isolate with COVID-19 symptoms? (Yes/No)
Was there any age segmentation or differing advice for children? (Yes/No)
If worsening COVID-19 symptoms, was there advice to go to the NHS website or telephone the NHS 111 service? (Yes/No)
What was the length of the automated message (in seconds)?
Percentages, means, standard deviations, and standard errors of the mean were calculated. Proportions were compared using Fisher’s exact test to assess statistical significance.
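The paper does not state which software performed the Fisher's exact tests; as an illustrative sketch, the two-sided test for a 2x2 table can be reproduced with the Python standard library alone, by enumerating every table with the observed margins and summing the hypergeometric probabilities no greater than that of the observed table:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Enumerates all tables sharing the observed margins and sums the
    hypergeometric probabilities that do not exceed the observed table's.
    """
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def prob(x):
        # probability of x in the top-left cell, with the margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # the tiny relative tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-12))

# e.g. practices with an automated message, Sheffield (75 of 106) vs Leeds (119 of 134)
p = fisher_exact_two_sided(75, 106 - 75, 119, 134 - 119)
```

The counts in the example are the Sheffield-vs-Leeds comparison reported in the results; larger tables would need the same enumeration with log-factorials to avoid overflow, but `math.comb` is exact for the sample sizes here.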
Table 2: Reasons for exclusion from analysis
Private screening clinic: 1
Duplication of practice already listed: 5
Permanently closed: 1
Call failed or no telephone number available: 4
Line busy despite repeated attempts: 1
Total: 12
In total, 549 practices were listed under these four CCGs. 12 practices were excluded (see table 2), leaving 537 practices from which we could obtain results.
Table 3: Analysis of results from 537 GP practices
ALL GPs COMBINED (537 surgeries contacted; 440 with an automated message)
Automated message: 440 (81.9% of surgeries contacted)
Coronavirus mentioned in automated message: 290 (54.0% of surgeries contacted; 65.9% of surgeries with an automated message)
Advice to stay away from practice if coronavirus symptoms: 153 (28.5%; 34.8%)
Advice to self-isolate with coronavirus symptoms: 120 (22.3%; 27.3%)
Age segmentation or differing advice for children: 5 (0.9%; 1.1%)
Advice if worsening COVID-19 symptoms to go to NHS website or phone 111: 169 (31.5%; 38.4%)
Length of automated messages: 23,694 seconds in total (mean 54.1 seconds, standard deviation 26.9)
Table 3 demonstrates that of the 537 practices, 81.9% (n=440) had an automated message. When an automated message was present, the mean length was 54.1 seconds (SD = 26.9). Of all the practices with an automated message, 65.9% (n=290) mentioned ‘coronavirus’ or ‘COVID-19’ in their message, 34.8% (n=153) gave specific advice to stay away from the practice if the caller had symptoms of COVID-19, 27.3% (n=120) gave advice about self-isolating with COVID-19 symptoms, and 38.4% (n=169) re-directed callers to telephone NHS 111 or visit the NHS 111 website for advice on worsening symptoms. Only 1.1% (n=5) of practices mentioned children specifically. Of these, two said that the advice about self-isolating also applied to children, and the other three said the following:
“…anyone with a new continuous cough or fever of 37.8 degrees centigrade or higher must self-isolate for 7 days. This includes children. Travel history is now irrelevant. Anyone with these symptoms who are well are to stay at home and do not need to ring 111 or be tested. Anyone with these symptoms who are unwell should go to NHS 111 online for advice. You must not come to the surgery…”
“…anyone with a new continuous cough and/or a high temperature should stay at home and self-isolate for the next 7 days. This includes children. All other members of your household will need to self-isolate for 14 days even if they remain asymptomatic. Do not attend the university health service, hospital, pharmacy or other NHS service in person. If you have these symptoms, use the NHS 111 online coronavirus service to find out what to do. Do not call NHS 111 unless you cannot get help online…”
“…anyone with a new continuous cough, a fever of 37.8 degrees or higher, or a loss or change to your sense of smell or taste must self-isolate for 7 days. This includes children. Anyone with these symptoms who are well must stay at home and order a COVID-19 test… Anyone with these symptoms who are unwell should go to 111 online for advice. You must not come to the surgery…”
Sheffield CCG had the lowest proportion of automated messages of all the CCGs:
Sheffield CCG (n=75, 70.8%) vs Leeds CCG (n=119, 88.8%) p<0.0005;
Sheffield CCG (n=75, 70.8%) vs Manchester CCG (n=74, 81.3%) p=0.0974;
Sheffield CCG (n=75, 70.8%) vs Birmingham and Solihull CCG (n=172, 83.5%) p=0.012.
Sheffield CCG had the highest proportion of automated messages with advice to stay away from the practice compared with the other CCGs:
Sheffield CCG (n=44, 58.7%) vs Leeds CCG (n=34, 28.6%) p<0.0001;
Sheffield CCG (n=44, 58.7%) vs Manchester CCG (n=26, 35.1%) p=0.0052;
Sheffield CCG (n=44, 58.7%) vs Birmingham and Solihull CCG (n=49, 28.5%) p<0.0001.
Manchester CCG had the lowest proportion of messages with advice to self-isolate compared with the other CCGs: Manchester CCG (n=9, 12.2%) vs Leeds CCG (n=30, 25.2%) p=0.0415;
Manchester CCG (n=9, 12.2%) vs Sheffield CCG (n=26, 34.7%) p=0.0018;
Manchester CCG (n=9, 12.2%) vs Birmingham and Solihull CCG (n=55, 32%) p=0.0009. See Table 4.
Table 4: Breakdown of results for individual CCGs
Sheffield (n=106): automated message 70.8% (n=75); coronavirus mentioned 62.7% (n=47); advice to stay away 58.7% (n=44); advice to self-isolate 34.7% (n=26); age segmentation 4.0% (n=3); advice if worsening to go to NHS website or phone 111 34.7% (n=26); mean message length 52 seconds (95% CI 46-57)
Leeds (n=134): automated message 88.8% (n=119); coronavirus mentioned 62.2% (n=74); advice to stay away 28.6% (n=34); advice to self-isolate 25.2% (n=30); age segmentation 1.7% (n=2); advice if worsening 53.8% (n=64); mean message length 56 seconds (95% CI 51-60)
Manchester (n=91): automated message 81.3% (n=74); coronavirus mentioned 68.9% (n=51); advice to stay away 35.1% (n=26); advice to self-isolate 12.2% (n=9); age segmentation 0% (n=0); advice if worsening 56.8% (n=42); mean message length 58 seconds (95% CI 52-64)
Birmingham and Solihull (n=206): automated message 83.5% (n=172); coronavirus mentioned 68.6% (n=118); advice to stay away 28.5% (n=49); advice to self-isolate 32.0% (n=55); age segmentation 0% (n=0); advice if worsening 21.5% (n=37); mean message length 52 seconds (95% CI 49-56)
(“Automated message” percentages are of all surgeries contacted in each CCG; question percentages are of surgeries with an automated message.)
Automated messages were all in English (although a small number of practices provided a translation in other languages after the message) and were voiced by a mixture of computerised voices and doctors or staff from the practice. Many automated messages offered a range of options to re-direct the caller to a different line (such as to arrange an urgent appointment or to obtain a repeat prescription) but, for the purposes of this study, only the key data points listed in Table 2 were recorded.
There was no statistically significant difference in mean message length between the four CCGs (p>0.05): Sheffield CCG 51.7 seconds (95% confidence interval 46.5 to 56.8); Leeds CCG 55.7 seconds (95% confidence interval 51.2 to 60.1); Manchester CCG 58.0 seconds (95% confidence interval 52.2 to 63.7); Birmingham and Solihull CCG 52.4 seconds (95% confidence interval 48.7 to 56.0).
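The paper does not state which statistical test produced the pairwise p-values reported above; a two-proportion z-test with a pooled standard error is one plausible candidate. The following sketch, under that assumption, reproduces the Sheffield vs Leeds comparison on automated-message prevalence (75/106 vs 119/134):

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-tailed two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p from the normal distribution
    return z, p_value

# Sheffield: 75 of 106 practices had an automated message; Leeds: 119 of 134.
z, p = two_proportion_z_test(75, 106, 119, 134)
```

This yields z of roughly -3.5 and a p-value consistent with the reported p<0.0005; the other pairwise comparisons can be checked the same way.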
Discussion
This study found that very few practices specifically mentioned children in their automated messaging in relation to the current pandemic. Of the practices contacted, 81.9% had automated telephone messaging; of these, 65.9% mentioned COVID-19 in their message, but only 1.1% (n=5) specifically mentioned children.
Overall, 38.4% of practices re-directed callers to either the NHS website or the NHS 111 telephone advice line. The website advice states, "Call 111 if you're worried about a baby or child under 5. If your child seems very unwell, is getting worse or you think there's something seriously wrong, call 999”4. There is also further advice on the website focussed particularly on babies and very young children. This is helpful advice for parents or carers of an unwell child and it is important that it is emphasised. However, it relies upon parents and carers to assess when something may be getting worse or is ‘seriously wrong’. Whilst it would increase the workload for primary care, it would perhaps be more beneficial for CYP, particularly those under 5 years, to be triaged by a call handler at the local practice, with a much lower threshold for a telephone consultation with a clinician at the surgery or advice to attend hospital.
This study provides a timely representation of first point of care health advice being provided in England during the current pandemic. It seeks to look specifically at automated advice given to CYP and whether this may contribute to the observed delays in presentation to secondary care for acutely unwell CYP.
It is difficult to know for certain to what extent the reported delays in presentation of serious illness are directly attributable to this advice.
Practices from only four CCGs were contacted in this study. However, this covered a sizable number of practices, 537 in total, all in large cities and towns in England. Notably, we did not assess any advice that may have been given by those answering the telephone call; once the automated message had completed, there may have been an opportunity to provide targeted advice. Also, for the 18.1% (n=97) of practices with no automated message, we do not know whether further advice was relayed by those answering the call. It may have been at this point that age-specific advice was given.
To our knowledge there have been no other studies looking at the spectrum of automated messages in General Practice during the COVID-19 pandemic.
This study highlights the need for tailored and consistent advice for CYP specifically during the COVID-19 pandemic.
There is significant variation in the advice being given by different General Practices. The Royal College of General Practitioners (RCGP) states that ‘as with all patients, children should be triaged prior to any face to face consultation’ and ‘every effort should be made to avoid face to face assessment’5. It is very important to note that the pandemic has been an extremely challenging time for General Practice, with rapid adaptations to working made in a very short time period. There have been repeated changes in guidance, which highlights the challenge General Practice faces in providing the most up to date information. From 18th February 2020, patients with a travel history or suspected symptoms were advised to call NHS 111 and not to go to their local General Practice, pharmacy or hospital6. On 5th March 2020, General Practitioners (GPs) were advised by NHS England to switch to a telephone-only triage system, to reduce the chance of potentially infected patients attending the practice7. The latest NHS England Standard Operating Procedure for General Practice (at the time of writing; 24 June 2020, Version 3.3)8 offers specific advice for GPs regarding children: “Prolonged illness and/or severe symptoms should not be attributed to COVID-19 and should be evaluated as usual”. The rapidly changing advice, coupled with large amounts of uncertainty and anxiety among staff in Primary Care, may have contributed to the difficulty of providing consistent, standard information for service users, such as through automated messaging. For some practices, telephone triage was a completely novel way of working, and making this large process change over such a limited time frame must have been extremely challenging.
Logistically, altering automated telephone messaging is often not straightforward and, in many cases, requires outsourcing to external companies. An already pressured service must keep up to date with rapidly changing advice whilst arranging for a staff member to formulate a new script and for the recording to be amended. This process would have needed to be repeated multiple times over the preceding months due to regularly changing government messaging.
Although evidence continues to emerge, we know that COVID-19 is less likely to develop into serious illness in healthy children and adolescents compared to adults9.
There have been concerns regarding a serious but rare complication of COVID-19 infection in children, PIMS-TS (paediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2). A recent paper in the Lancet10 reviewing children admitted to PICUs in the UK between 1st April 2020 and 10th May 2020 suggested that the incidence of PIMS-TS requiring intensive care was around 1.5%. However, at the time only hospitalised patients were being tested for COVID-19 in the UK, so this does not take into account the number of children who may have had COVID-19 but were not tested; as a result, it is likely to be an overestimation. Whilst this condition can be serious, the likelihood of a child progressing to PIMS-TS after developing COVID-19 remains low. The greater concern is delayed presentation of other serious illness.
As other publications have suggested, there is a greater risk that children may present late to hospital, or be referred late to secondary care for important investigations, due to the widespread ‘stay away’ advice seen both in the UK11 and in Europe12.
We suggest that adapting the messaging that parents or carers receive when they first contact their GP to include CYP would be possible and may reduce the number of unwell CYP who have delays in receiving medical care. It would also be important to have consistent messaging across different practices, perhaps standardised at a national level. This could greatly assist those working in Primary Care to provide accurate and up to date messaging for their patients. Any adaptations required could be made by individual CCGs to take account of local differences.
Increased wider public health messaging encouraging parents and carers to seek medical advice if they are worried about their child, despite the pandemic, is paramount to getting this vital message to those caring for CYP. It is also important that, where appropriate, this advice is available in languages other than English.
This study does not prove a direct link between the advice provided at the first point of contact in Primary Care and the delays in CYP presenting to hospital with serious illness. We do not know what influence the advice on automated messages has over CYP and their parents in their decision making about accessing care. Future research should seek to answer this question specifically, perhaps involving directly interviewing CYP and their parents or carers.
Depression and osteoporosis are two extremely common comorbidities in geriatric patients. Each has its associated mental and physical impacts on the patient, and economic impacts on the wider healthcare system. Staggeringly, up to 39% of frail patients suffer with depression.1 Selective serotonin reuptake inhibitors (SSRIs) have long been used in the management of depression and anxiety states and are one of the fastest-growing classes of prescribed drugs. Their use is not without the potential for negative effects; their side effect profile includes nausea, anxiety, insomnia, sexual dysfunction and gastro-intestinal upset, with the impact on bone mineral density (BMD) being controversial.
Statistics from the International Osteoporosis Foundation (IOF) reveal that in 2015, 6.8% of men and 21.8% of women over the age of 50 had osteoporosis. The estimated lifetime risk of hip fracture for women over 50 is 17.2%, with fracture-related costs at 5.3 billion pounds in 2017.2 Osteoporosis is a progressive, systemic skeletal disorder characterised by loss of bone tissue and disruption of bone microarchitecture, leading to increased bone fragility and consequently an increased risk of fracture. As well as increasing age and female sex, other well documented risk factors for reduced BMD include early menopause, alcohol use, corticosteroid use, smoking, sedentary lifestyle, low body weight, impaired eyesight, and recurrent falls. What is more, depression itself cannot be overlooked as a risk factor for osteoporosis.
The proposed mechanism by which depression leads to lower BMD is alteration of the hypothalamic-pituitary-adrenal axis, resulting in hypercortisolism; cortisol is a well-known factor in bone loss. Proinflammatory cytokines have been implicated in depressive disorders, and they may directly stimulate osteoclastic activity.3 What must also be considered is the impact that depression has on certain lifestyle choices, such as the potential for increased alcohol and nicotine consumption, inadequate nutrition and low physical activity.
Serotonin receptors, neurotransmitters, and transporters have been found within osteoclasts and osteoblasts.4 95% of serotonin is synthesised in the gut and cannot cross the blood-brain barrier. Gut-derived serotonin reduces osteoblast proliferation, thereby leading to bone loss. Brain-derived serotonin signals to the ventromedial hypothalamic neurones, leading to decreased sympathetic output, and therefore favours bone formation by action on the beta-2 adrenergic receptors on osteoblasts. It appears that with shorter duration of use, decreased bone resorption predominates, whereas with longer-term use, bone loss predominates.4
The impact of SSRIs on bone health has long been the subject of research, with a possible link with both increased risk of fractures and reduced bone mineral density being identified. In response to emerging evidence, the MHRA issued advice to healthcare practitioners, stating that we “should be aware of epidemiological data showing a small increased risk of fractures associated with the use of TCAs and SSRIs, and should take this risk into account in discussions with patients and in prescribing decisions”, but this has not yet filtered down to prescribing guidelines.5 The National Institute for Health and Care Excellence (NICE) guidelines state that on choosing the antidepressant to prescribe, healthcare practitioners must consider that there is currently no evidence to support using specific antidepressants for specific physical health problems.6
We therefore present a case of recurrent depressive disorder in a patient with a background of osteoporosis. We also include a review of the most up-to-date literature, with the aim of increasing awareness of the impact of SSRIs on bone health for fellow prescribers. We aim to highlight the difficulties we face as clinicians whilst there are no formal recommendations regarding the use of SSRIs in high risk populations.
Case Description
This 78-year-old woman was referred to our services in late 2019 with low mood and loss of motivation. She lives alone following the death of her husband 3 years ago and sadly has no family. She has a past medical history of depression, hypertension, acute pericarditis, subclinical hypothyroidism, hiatus hernia, cataracts, previous cholecystectomy, and osteoporosis.
She was diagnosed with osteoporosis in 2000. At that time, she had been seeing an osteopath due to back pain, who advised her to see her GP to investigate for arthritis or osteoporosis. She has a family history of osteoporosis on her mother's side. She was diagnosed by Dual-Energy X-ray Absorptiometry (DEXA) scan, with osteoporosis at the lumbar spine and pelvis, at which time she was started on calcium supplementation.
She was initially started on oral alendronic acid but developed reflux symptoms, so this was discontinued. Over the following years she was tried on various medications for bone protection but developed side effects: briefly, pamidronate infusion caused iritis, and nausea was reported whilst on strontium ranelate. Later she was to be commenced on risedronate sodium; however, she did not start this due to concerns she had after reading the information leaflet. Denosumab was discussed as the next suitable option; however, she was undergoing dental work including tooth extraction, so this has been delayed due to the risk of osteonecrosis of the jaw.
DEXA scanning in March 2019 showed a T score of -0.8 at the neck of femur, -4.5 at the forearm, -1.3 at the total hip, and -4.2 at the spine. This had, unsurprisingly, worsened from her last DEXA in 2016 (-3.6 at the spine). Her risk of major osteoporotic fracture was last calculated at 21.6%, with the risk of hip fracture 11.5%. She has had no falls or fractures to date since her diagnosis.
Other than Adcal-D3 she is now no longer on bone protection. Her current medications also include levothyroxine, ramipril, bisoprolol, cetirizine, fluticasone nasal spray, and Hypromellose eye drops.
She had initially been started on citalopram by her GP, which she discontinued herself after a period of weeks as she felt it had no positive effect. In December 2019 she scored 92/100 on the Addenbrooke’s Cognitive Examination (ACE-III), with no significant deficits in any one category. As well as low mood and loss of motivation, she described frequent tearfulness, anhedonia, lack of energy, difficulty concentrating and poor sleep. There was no clear trigger for her current mental state, and her physical health was otherwise good. She had no suicidal ideation or thoughts of self-harm. There was some evidence of anxiety but no symptoms of psychosis. We could not identify any alcohol or substance use risks. Her mental state examination was unremarkable. She was given the diagnosis of moderate depressive episode, F32.1, and was started on sertraline. However, upon reading the patient information leaflet, she refused to start this medication due to it mentioning a link with bone disorders.
As a result of this discussion, we accessed the medicines.org patient information leaflet, where an increased risk of bone fractures is mentioned under the heading ‘symptoms that can occur when treatment is discontinued’. It also states that following clinical trials in adults, sertraline was found to cause ‘bone disorder’ in up to 1 in 1,000 people.7
Following in-depth discussions, our patient was very hesitant in agreeing to take any medication that may have an impact on her bone density. We were aware of the potential association between SSRIs and BMD but were unable to quantify this risk to our patient.
Discussion
Our case above represents a common situation: a patient who is worried about a side effect for which no formal guidelines are available to aid decision making. The link between depression, SSRIs and BMD is a complex one, with numerous confounders making analysis and application yet more difficult. We looked at the evidence surrounding SSRIs and their impact on bone health in order to suitably advise our patient on the most appropriate treatment options.
Impact on BMD
We found several meta-analyses and systematic reviews concerning BMD. The majority showed no significant association between BMD and SSRI use.
Of note, a 2015 systematic review by Gebara et al suggested that antidepressant use may well be associated with lower BMD. Four of the included studies assessed the relationship with BMD, three of which highlighted an association with lower BMD; this association was reported with SSRIs but not TCAs. However, the authors concluded that there was insufficient evidence that SSRIs adversely affect bone health, and therefore a change in current recommendations for the use of antidepressants in older adults was not justified at the present time. They stated that the evidence did not satisfy the Bradford Hill criteria: it was inconsistent, and whilst there is biological plausibility, there were no experimental studies to support a causal relationship.8
Yet a 2012 literature review indicated effects on both BMD and fracture risk.9 Every included study indicated a risk of reduced BMD, increased fracture risk, or both, even when controlling for potential confounders. On the basis of this evidence, the authors advised caution when considering the use of SSRIs in those with osteoporosis or a history of osteoporotic fractures, despite there being no formal recommendations.
A 5-year longitudinal study involving 1,988 women, 319 of whom were using antidepressants, measured femoral neck BMD; a dose-response increase in bone mineral loss was evident.10 An older cohort study also showed that, even after adjustment for potential confounders, mean total hip BMD decreased 0.47% per year in non-users compared with 0.82% in SSRI users.11 A year later, a community-based study revealed that, after controlling for age, weight, height and smoking history, BMD among SSRI users was 5.6% lower at the femoral neck, 6.2% lower at the trochanter and 4.4% lower at the mid-forearm than in non-users.12
Fracture Risk
The evidence surrounding fracture risk is more consistent: all of the systematic reviews and meta-analyses we found highlighted an increased risk of fracture in SSRI users.
Wu et al concluded that the significantly higher risk of fractures observed in patients who received SSRIs, compared with patients with no exposure, remained statistically significant in studies that controlled for important risk factors and in studies that scored highly in the quality assessment.13
Eom et al extrapolated their data, estimating that the increased risk of fractures translates to about one case of fracture for every 42 patients treated with SSRIs.14 The dose and duration of SSRIs also seems to contribute to fracture risk, with both an early increased risk (under 6 weeks), and a late risk associated with prolonged use.14,15
A notable literature review by Panday et al on medication-induced osteoporosis concluded that treatment decisions concerning SSRIs should be made on an individual basis for patients with osteopenia, osteoporosis, or fracture risks greater than 3% for hip fracture and 20% for major fracture.16 Of particular note from this review, a 10-year cohort study revealed that 14.7% of SSRI users suffered at least one fragility fracture over the study period.17 Whilst those using SSRIs tend to have more fracture risk factors than the general population (they are more likely to be women, have more comorbidities, use other antidepressants or anxiolytics, and have a previous history of falls), the significant association remained even after these variables were controlled for. The risk of first fracture specifically was increased by more than 50% and, as in other studies, a dose-response relationship was evident.17
Conclusion
The impact of SSRIs on bone health is clearly a topic of contention. Whilst the impact on BMD is unclear, the evidence for an increased fracture risk is more consistent. There are plausible biological mechanisms to explain these risks, and the risk of falls itself is also higher when taking SSRIs.
Yet why has this not filtered down into formal recommendations in prescribing guidelines? Questions remain as to whether we should be prescribing SSRIs in individuals with osteoporosis at all. Regardless, the relatively high risk of fracture with SSRI use may have a significant clinical impact. These risks must be balanced against the benefits of treatment for depression, both in terms of mental state and of osteoporosis risk factor modification. It would perhaps be more relevant to consider a patient's falls risk independently of their bone health when deciding whether to prescribe SSRIs. Consideration of concomitant medications, co-morbidities and other confounders is vital.
It is on this basis that we suggest discussing bone health with your patients (particularly those at high risk) before prescribing these medications, and being wary of prescribing SSRIs in those with osteoporosis or, more importantly, those at high risk of falls.
Summary
Impact of SSRIs on bone health is complex with significant confounding factors
Whilst the impact on BMD is contentious, the evidence for an increased fracture risk is more consistent
Risk-benefit decision is needed
Consider the patient's falls risk most importantly before prescribing an SSRI
General Practice is the first point of contact for most patients seeking professional medical advice in the United Kingdom (UK) National Health Service (NHS)1. Primary care makes up around 90% of all NHS activity and, as population growth has outpaced the number of newly qualified General Practitioners (GPs), the burden of tasks from patients has increased substantially. GPs characterise their workload as “unmanageable” or “unsustainable” and 93% have reported that patient care has been affected as a result.1 The share of NHS expenditure allocated to General Practice has fallen by almost 20%, which has halted the expansion of new practices and the recruitment of substantive GPs. The number of GPs grew by only 0.2% between 2009 and 2014, and this has indirectly pressured existing doctors to care for more patients, reducing job morale as well as patients’ satisfaction with services. The main causes of increased workload are increased administrative load, high patient expectations and increased risk of litigation.2
Four years ago, there were four doctors at our practice. Since then, one doctor emigrated, another passed away and a third retired, leaving two doctors at the practice at the current time. The practice currently employs locum GPs to cover the pressure of daily patient appointments as, according to new studies, there are now on average an astonishing 2,100 registered patients per GP.3 The loss of permanent doctors from this practice may be due to the location of the surgery: Barnsley, according to uSwitch in 2015, was ranked 122 out of 138 local areas across the UK based on 26 factors such as household income, life expectancy, hours of sunshine and the cost of essential goods, including food bills, fuel costs and energy bills.4 Adding to the lack of permanent GPs, recruitment into General Practice as a specialty has been scarce. Studies have shown that medical graduates choose careers they consider more stimulating and interesting; one study notes that medical students are attracted to technical or biomedical forms of medical practice, as opposed to a holistic view of medicine such as that of General Practice.5
The non-permanent GPs at the practice prefer flexible working hours, which means the permanent doctors are left with the majority of the work, including all of the on-call tasks. These tasks include dealing with patient requests that come through the receptionist, such as booking appointments, patient referrals, prescribing medication and issuing sick notes. We aim to identify the prevalence of specific tasks and evaluate ways to reduce the number of tasks performed by the doctor. In particular, we analyse how many acutely prescribed medications could instead be placed on a repeat or variable repeat prescription.
METHODS
Data was collected from a single NHS England GP Centre. This centre utilizes the Egton Medical Information Systems (EMIS) web platform for recording consultations, tracking investigation results, prescribing medications, and communicating within the practice.6 Using EMIS, we collected all the tasks of the on-call doctor for a single month. In this month, there were no school or public holidays. These tasks are sent to the on-call doctor from the receptionist, who receives them directly from patients. At this centre, all tasks from 2pm on a particular day form part of the following day's workload. Therefore, the tasks of each day were recorded from 2pm the previous day until 2pm that day.
We separated tasks by allocating them into 1 of 5 categories: medication request; request for appointment, advice, or test results; request for a referral; request for sick note; and other which included all miscellaneous tasks.
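As an illustration only (the study used EMIS records, not a script), the five-way allocation can be sketched as a simple tally over task labels; the task entries below are hypothetical:

```python
from collections import Counter

# Category names follow the five used in this study.
CATEGORIES = (
    "medication request",
    "appointment/advice/results request",
    "referral request",
    "sick note request",
    "other",
)

# Hypothetical task log for illustration.
tasks = [
    "medication request",
    "appointment/advice/results request",
    "medication request",
    "sick note request",
]

counts = Counter(tasks)
proportions = {c: counts.get(c, 0) / len(tasks) for c in CATEGORIES}
```

Tallying labels this way yields both the counts (n) and the percentages reported in the Results.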
RESULTS
Total task distribution
A total of 969 tasks were performed in the month. The proportion of tasks over the 4 weeks was as follows: week 1 had 26.8% (n=260) of the total tasks; week 2 had 25.6% (n=248); week 3 had 25.1% (n=243); and week 4 had 22.5% (n=218).
Figure 1: Total number (n) of tasks per day across each week for the four weeks of the month
Further to this, regarding the proportion of tasks over the days of the week: Monday had 23.1% (n=224) of the total tasks; Tuesday had 19.8% (n=192); Wednesday had 17.0% (n=165); Thursday had 18.0% (n=174); and Friday had 22.1% (n=214). Figures 1 and 2 show the number and percentage of task distribution respectively across the days and weeks for the month.
Figure 2: Percentage (%) of task distribution per day-of-the-week across each week for the four weeks of the month.
Type of task
The tasks for the month were separated unevenly across the five categories: medication tasks were 50.9% (n=493) of the total tasks; requests for appointments, results and advice were 35.9% (n=348); referrals were 2.5% (n=24); sick notes were 4.6% (n=45); and other tasks made up the remaining 6.1% (n=59). Figure 3 shows the distribution of tasks for the month.
Figure 3: Distribution of tasks/requests according to task-type for the month (total tasks n=969).
We recorded the type of tasks completed per week. Figure 4 shows the distribution of tasks according to task-type for weeks 1, 2, 3, and 4, respectively. Of the total 260 tasks recorded for week 1, 55.8% (n=145) were tasks involving medication; followed by 29.2% (n=76) request for appointments, results and advice; referrals made up 4.2% (n=11); sick notes were 7.3% (n=19); and miscellaneous tasks came to 3.5% (n=9).
The second week had a total of 248 tasks. Of these, 51.6% (n=128) were medication tasks; 34.7% (n=86) were requests for appointments, results and advice; referrals made up 2.4% (n=6); sick notes made up 2.8% (n=7); and miscellaneous tasks were 8.5% (n=21).
The third week had a total of 243 tasks. Of these, 44.4% (n=108) were medication tasks; 45.7% (n=111) were requests for appointments, results and advice; referrals made up 1.2% (n=3); sick notes made up 3.3% (n=8); and miscellaneous tasks were 5.3% (n=13).
The fourth week had a total of 218 tasks. Of these, 51.4% (n=112) were medication tasks; 34.4% (n=75) were requests for appointments, results and advice; referrals made up 2.8% (n=6); sick notes made up 5.0% (n=11); and miscellaneous tasks were 6.4% (n=14).
Figure 4: Comparison of distribution of tasks/requests according to task-type for week 1, 2, 3, and 4, respectively.
Medication tasks
Focusing on the medication category, we examined whether medication requests sent to the on-call doctor were for drugs that should have been on a repeat or variable repeat prescription rather than prescribed acutely. Of the 493 medication tasks for the month, 49.1% (n=242) could have been on repeat prescription rather than being acutely prescribed. Further analysis of this data yielded comparable findings per week. In the first week, of 145 medication tasks, 50.3% (n=73) could have been on repeat. In the second week, of 128 medication tasks, 46.1% (n=59) could have been on a repeat or variable repeat prescription. In the third week, of 108 medication tasks, 39.8% (n=43) could have been. In the fourth week, of 112 medication tasks, 59.8% (n=67) could have been. Table 1 shows the total number of medication tasks that could have been on repeat or variable repeat prescription per day. Figure 5 shows the percentage of medication tasks that were on acute prescription but could have been on repeat or variable repeat prescription, across each week.
Table 1: Total medication tasks that could have been on repeat or variable repeat prescription, per day across each week for the four weeks of the month.
Figure 5: Percentage (%) of medication tasks that could have been on repeat or variable repeat prescription, across each week for the four weeks of the month.
DISCUSSION
SUMMARY
The total number of tasks did not differ significantly day-to-day: each weekday (Monday-Friday) held about 15-25% of that week's tasks. Medication requests contributed the majority of the total tasks (50.9%), followed by requests for appointments, results and advice (35.9%). Upon further analysis of the medication category, 10-25 medication tasks per day could have been avoided by having certain drugs on repeat prescription rather than being acutely prescribed. Taking into account that a GP typically spends 2 minutes per task, this could save 20-50 minutes per day, which amounts to 100-250 minutes per week and 400-1000 minutes (roughly 7-17 hours) per month.
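The time-savings arithmetic above can be made explicit; the 2-minutes-per-task figure comes from the text, while the 5-day working week and 20-day working month are assumptions:

```python
# Sketch of the workload-savings arithmetic.
# Assumptions: 10-25 avoidable medication tasks per day, 2 minutes per task,
# a 5-day working week and a 20-day working month.
MINUTES_PER_TASK = 2

def minutes_saved(tasks_per_day, days):
    """Minutes of GP time saved over `days` working days."""
    return tasks_per_day * MINUTES_PER_TASK * days

per_day = (minutes_saved(10, 1), minutes_saved(25, 1))      # 20-50 minutes
per_week = (minutes_saved(10, 5), minutes_saved(25, 5))     # 100-250 minutes
per_month = (minutes_saved(10, 20), minutes_saved(25, 20))  # 400-1000 minutes
```

At 60 minutes per hour, 400-1000 minutes per month corresponds to roughly 7-17 hours of GP time.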
The drug prescriptions that we thought should have been on repeat or variable repeat prescription, rather than on acute prescription, included requests for drugs that patients typically take long-term. These included Proton Pump Inhibitors (PPIs) such as Omeprazole, statins such as Simvastatin, and Angiotensin Converting Enzyme Inhibitors (ACEIs) such as Ramipril, which treat chronic conditions such as gastric reflux, hypercholesterolemia, and hypertension, respectively. Other drugs that we considered would be more feasible on repeat or variable repeat prescription were those for palliative patients in care homes who have a constant need for laxatives such as Senna or Lactulose, or for drugs such as Paracetamol, used for constipation and pain management, respectively.
The medication requests that could not have been put on repeat or variable repeat prescription before the request was sent were for drugs required acutely, such as treatment of short-lived infections, transient pain relief, dose changes, and prescribing of alternative drugs because of a manufacturing problem or unavailability at the pharmacy. We deem it necessary for these tasks to be sent to the GP so that drug doses are changed on clinical judgement, and not merely on a request relayed by the receptionist. This upholds a standard of drug control and patient safety within the practice.
STRENGTHS & LIMITATIONS
This retrospective study provides an in-depth analysis of the nature and number of the on-call doctor’s day-to-day tasks, based on a large number of tasks collected over one month in a single GP surgery. Because non-GPs collected all the data, including data in the medication category, bias in reporting acute medication that could have been prescribed as repeat or variable repeat medication was reduced. Limitations include the relatively small sample, drawn from a single surgery, which cannot be representative of all on-call GPs’ tasks across England. In addition, the study took place in September, and task patterns may be distinctly different in other months.
COMPARISON WITH EXISTING LITERATURE
To date, the existing literature that looks at GP tasks from this perspective is limited. Most studies look at the receptionists’ role in handling patient requests or focus on scrutinizing the technology that GPs rely on to issue repeat or variable repeat prescriptions.
Our study recorded the number of tasks completed in a single month as well as the stratification of tasks within that month. We separated our results week by week to look for differences between weeks. In 2014, a quantitative analysis of incoming calls to three GP surgeries described the basic number of calls and the types of patient enquiry received. The surgeries received a total of 2,780 calls, and the most common request was to make a doctor’s appointment. The study’s main finding was an aspect of ineffective communication in GP receptionists’ encounters with patients: some receptionists failed to meet the patient’s initial request by redirecting the telephone call, or even closed calls prematurely before understanding the problem. This increased ‘patient burden’ and led to lower recorded patient satisfaction scores. Effective receptionists understood and summarised patients’ requests and offered alternative actions to address the enquiry.7
Repeat prescriptions are defined as those printed by a practice computer from its repeat prescribing program.8 In the UK, repeat prescriptions account for up to three quarters of all drugs prescribed, and four fifths of drug costs, in general practice.9,10 Repeat prescribing is largely a technology-supported practice that requires collaboration between clinical and administrative staff to ensure patient safety.11 Two conflicting opinions exist around repeat prescribing: the first is that increased automation helps improve safety; the second is that the process as a whole may be weakened if assumptions built into the technology do not take full account of the nature of healthcare work, including real-life constraints on time, space, and resources.11,12 It is important that the GPs at our practice are aware of the risks involved in moving more drugs onto repeat prescription, and monitor this closely.
IMPLICATIONS FOR RESEARCH AND/OR PRACTICE
The findings of our study demonstrate the increasingly demanding role of the on-call GP outside consultation hours. According to recent surveys, GP job satisfaction is at its lowest since 2001 because of a higher workload, which indirectly lowers the quality of patient care and increases negative patient experiences.13,14 This is of paramount importance, as it can harm both patients and GPs. Given the large number of tasks described in our results, it is important to find ways to avoid unnecessary tasks being telephoned into the GP surgery. The results of our study were presented to all staff in the practice and the underlying message was well received. Medications prescribed by the doctors are double-checked by the Clinical Commissioning Group pharmacist in the practice to ensure that drugs are given to patients safely.
CONCLUSION
As the funding formula has changed in the last decade, government funding of NHS primary care has decreased more than that of secondary care, despite ever-growing pressures on primary care services.13 Some strategies, such as telephone triage, have been introduced at the practice to ease the workload crisis; however, recent evidence suggests this is not effective.15 In 2015, the Primary Care Workforce Commission laid out recommendations to restructure primary care services, as the existing model was in doubt. The underlying message of the report was that continuity of care matters to the majority of GPs: GPs understand patients better when those patients have been under their care for many years.16 It follows that extra tasks can be avoided if GPs know their patients well. From the literature and our own findings, general practice appears to be on an unsustainable path, with workload pressures including growing patient lists, higher public expectations and increasing bureaucracy.17 Our data show that an on-call doctor faces a large number of tasks each month, but also that switching appropriate acute medications to repeat or variable repeat prescriptions could save significant time. From these positive results in the medication task category, we hope to inspire further research into other areas of the GP surgery that could help optimise doctors’ time. Furthermore, we would like to repeat our retrospective study in one year’s time, with the suggestion implemented (appropriate acute medications changed to repeat or variable repeat prescriptions) over a longer period. With limitations corrected for, we want to re-analyse the number and type of tasks completed to determine whether this has truly optimised the time of the overworked on-call doctor.
Insomnia is a disturbance of normal sleep patterns, characterised by difficulty with sleep onset (prolonged sleep-onset latency) and/or with sleep maintenance. Short-term insomnia is defined as symptoms lasting less than four weeks, whilst long-term insomnia is symptoms lasting more than four weeks.1 Hypnotics can provide relief from the symptoms of insomnia; they do not treat any underlying cause.
Several hypnotic agents are licensed for the treatment of insomnia, including the benzodiazepines and the non-benzodiazepine hypnotics (Z-drugs).2
NICE guidance for Insomnia management states “After consideration of the use of non-pharmacological measures, hypnotic drug therapy is considered appropriate for the management of severe insomnia interfering with normal daily life; it is recommended that hypnotics should be prescribed for short periods of time only, in strict accordance with their licensed indications” 1.
NICE guidance also advises to use the lowest effective dose of the hypnotic agent for the shortest time frame possible. The exact duration will depend on the underlying cause, but treatment should not continue for longer than two weeks. We should also inform the patient that further prescriptions for hypnotics will not usually be given, ensure that the reasons for this are understood, and document this information in the patient’s notes.
Side effects are common with hypnotic use; most important are the development of tolerance and rebound insomnia. Others include daytime sedation, poor motor coordination, cognitive impairment, hallucinations, anxiety, delusions and sleep disorders.2
Aims
To reduce the amount of hypnotic medication prescribed to patients on an acute inpatient psychiatric ward in the North West of England. The ward, in a semi-rural psychiatric hospital, is a male ward with 17 inpatient beds. Patients are aged 18 and over, with varying diagnoses including generalised anxiety disorder, bipolar affective disorder, schizophrenia, depression, and mental and behavioural disorders due to psychoactive substance use.
The most important reason for undertaking this project was to improve patient safety by reducing unnecessary prescription, and therefore administration, of hypnotic medications; secondary aims were to reduce NHS expenditure and carbon footprint.
Inclusion criteria
Patients who were inpatients on the selected ward between 09/12/2019-20/01/2020 and 28/01/2020-10/03/2020.
Intervention
We developed a prescription aid flow chart (Appendix 1) for all newly admitted patients to the ward. This will guide doctors when making the decision if a hypnotic prescription is warranted.
All patients on the ward during this intervention period, who are currently on a hypnotic agent and are not newly admitted, will have their hypnotic prescription reviewed using the flow chart (Appendix 1) at their weekly consultant ward round.
We then decided on some interventions to fulfil our aims. The interventions were as follows:
1) Development of an educational presentation about Insomnia and sleep management.
2) Development of an Insomnia management Flow chart (Appendix 1) to be used at admission point.
3) Training sessions for ward staff.
4) Shared teaching programme with patients at their sleep management sessions.
5) Face to face and E-mail correspondence to inform medical trainees about this project.
6) Gather feedback from ward patients and staff before and after this project.
The hypnotic prescription flow chart aid (Appendix 1) has been put up on the ward office notice board, in the clinic room and in the on-call doctors’ room. It was also e-mailed to the regular ward doctors, as well as to all on-call doctors working during the intervention period.
As discussed above, we created an educational PowerPoint presentation entitled “Insomnia and hypnotic agents”. This covered insomnia definitions and types, NICE guidance on insomnia, sleep hygiene advice, and the medications used for insomnia: their modes of action, side effects, cautions and costs. We also included our new hypnotic prescription flow chart aid (Appendix 1).
Between 21/01/2020 and 27/01/2020 we held two of these educational training sessions, to ensure that all staff working on the ward attended at least one. Staff in attendance included the ward managers, nursing staff, health care assistants, the pharmacist, a junior doctor, and the ward consultant. This was important because all of these health care professionals are involved in managing patients on the ward, including those suffering from insomnia. We felt these sessions were vital: we wanted all staff to understand the importance of the project and to be able to raise their own concerns about managing patients with insomnia. This proved very useful, as together we brainstormed realistic changes that could be made on an inpatient psychiatric ward. We acknowledged that sleep on the ward is often disturbed by regular nursing checks and by noise from staff and other patients. We did, however, identify some feasible interventions, which included:
1) The time at which all the automatic ward lights are turned on in the morning could be delayed.
2) Only caffeine-free coffee/tea to be available after a particular time in the evening.
3) Discourage daytime napping.
4) Have regular sleep-hygiene sessions on the ward.
Between 28/01/2020 and 10/03/2020 we implemented these interventions; this is the period over which our next six weeks of data were collected. During this time we held multiple patient group sessions on sleep hygiene, led by the occupational therapist with assistance from other health professionals, including the ward pharmacist and the junior doctor. During these sessions we asked patients for feedback on the current management of insomnia on the ward. Some responses included:
· One patient with Severe Generalised Anxiety Disorder stated that he feels that the sleep hygiene advice is helpful, as he doesn’t like to “jump straight into taking tablets” and likes to “fix the root” of his sleeping problem.
· A second patient with a diagnosis of Mental and Behavioural Disorder due to use of cannabinoids, stated that he needs both sleeping medications and sleep hygiene advice, as sometimes he still cannot get to sleep on the ward by solely using relaxation methods.
· A third patient with Generalised Anxiety Disorder stated that he found the sleep hygiene sessions useful. He is now using relaxation methods and is trying to avoid daytime naps which are both helping with his sleep. However, he still on occasions struggles with sleep. He said it is important to have a tidy, clean and relaxing sleeping environment, which is sometimes difficult to implement on the ward.
Appendix 1
Results:
Data were collected prior to any intervention on the ward, between 09/12/2019 and 20/01/2020.
The table below (Table No.1) shows the type and number of sleeping tablets prescribed on the ward between 09/12/2019 and 20/01/2020. The total number of patients treated from 9 December 2019 to 20 January 2020 was 28, of whom 14 were prescribed hypnotic medication.
Table No.1 - Hypnotic medication prescribed:
Name | Dose | Number of tablets
Zopiclone | 7.5mg | 191
Zopiclone | 3.75mg | 12
Zolpidem | 10mg | 4
Nitrazepam | 5mg | 7
Temazepam | 10mg | 10
The table below (Table No.2) shows the number of hypnotics prescribed and administered after the interventions described above. The total number of patients treated from 28 January 2020 to 10 March 2020 was 25, of whom 11 were prescribed hypnotic medication.
Table No.2 - Hypnotic medication prescribed:
Name | Dose | Number of tablets
Zopiclone | 7.5mg | 96
Zopiclone | 3.75mg | 6
Zolpidem | 10mg | 0
Nitrazepam | 5mg | 0
Temazepam | 10mg | 0
With our ward interventions we markedly reduced the number of hypnotic tablets administered. The total number of tablets administered during this six-week period was 102, across the 11 patients prescribed hypnotics. Prior to our interventions, the total number of tablets administered between 9th December 2019 and 20th January 2020 was 224, across 14 patients. This represents a 54.5% reduction in tablets administered.
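As a sanity check, the totals and the percentage reduction can be recomputed from the tablet counts in Tables 1 and 2:

```python
# Tablet counts per preparation, taken from Tables 1 and 2.
pre_intervention = {"Zopiclone 7.5mg": 191, "Zopiclone 3.75mg": 12,
                    "Zolpidem 10mg": 4, "Nitrazepam 5mg": 7, "Temazepam 10mg": 10}
post_intervention = {"Zopiclone 7.5mg": 96, "Zopiclone 3.75mg": 6,
                     "Zolpidem 10mg": 0, "Nitrazepam 5mg": 0, "Temazepam 10mg": 0}

pre_total = sum(pre_intervention.values())    # 224
post_total = sum(post_intervention.values())  # 102
reduction_pct = round(100 * (pre_total - post_total) / pre_total, 1)  # 54.5
```

This gives (224 − 102) / 224 ≈ 54.5%.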
Discussion
The total reduction in tablet administration was substantial, at 54.5% post-intervention. This demonstrates the positive change in our clinical practice that has resulted from using the flow chart aid (Appendix 1) together with the patient and staff educational and feedback sessions. It should improve patient safety by reducing the risk of side effects. The risk of patients developing tolerance to hypnotic medications has been reduced, as has the number of patients discharged on a regular prescription, which will further reduce the NHS’s long-term expenditure on hypnotic medications. With these changes in our clinical practice, we have reduced the number of hypnotics being unnecessarily prescribed and administered.
Over-prescribing and unnecessary prescribing are issues within the NHS and impact negatively on the environment. The NHS constitution states that the NHS is ‘committed to providing the most effective, fair and sustainable use of finite resources’.3 By reducing the number of inpatients unnecessarily started on hypnotic medications, this project also reduces the negative pharmaceutical impact on the environment. Fewer patients should be discharged with hypnotic medications that they may no longer need once their insomnia improves outside the inpatient setting. Furthermore, a patient unnecessarily started on a hypnotic as an inpatient may continue the prescription regularly and become tolerant, which would inevitably have an undesirable effect on the environment.
The feedback that we received from the educational insomnia teaching sessions also proved to be very useful. As stated above the staff sessions allowed us to brainstorm simple ward-based interventions, as well as discussing possible drawbacks which may result. This allowed us to modify the flow chart so that it worked for all staff effectively.
The feedback given from patients was also very encouraging. As health professionals we sometimes overlook how some patients want and need more involvement in making decisions on aspects of their care. Ensuring patients are informed about medications prior to prescribing, especially about side effects is something that is very important and allows patients to make informed decisions which is a more holistic approach to clinical practice. This is vitally important prior to prescribing any medications but especially medications with more severe side effects which some hypnotics have.
The patient educational sessions were a key part of this project. From the feedback we gathered, the patients involved found the sessions informative, and some, but not all, of the sleep hygiene advice was feasible to implement in their daily routine on the ward. Two of the difficulties patients raised were night-time disruption, from other patients and from regular staff checks in patient rooms, and the unfamiliarity of their surroundings. We appreciate that some of this cannot be changed; however, we continue to strive to implement and improve the interventions that can feasibly be made on an inpatient psychiatric ward. For these reasons, both the staff and the patient educational sessions proved a vital part of this project and should be continued.
Following on from this initial intervention, we feel that we can continue to make further changes and expand the changes we made on this ward, to other similar wards in our hospital and to other inpatient psychiatric wards in the Trust.
Since 2005, there has been an increase of 10% in hospital admissions with acquired brain injury (ABI), with 348,453 United Kingdom (UK) admissions in 2016-17.1 With improvements to both medical and surgical management, a higher proportion of patients survive to hospital discharge, resulting in more people with complex physical and cognitive disabilities reaching the community.2,3
Prolonged disorders of consciousness (PDOC) can occur following ABI. This can vary from coma, to vegetative state (VS), and minimally conscious state (MCS). Following acute stabilisation, the treating team must provide the correct diagnosis, prognosis, and management. Ethical and legal issues, such as best interests decision-making (considering patient wishes, advanced decisions, and best possible quality of life), deciding when appropriate to provide end-of-life care, and understanding the legal framework around these issues can further complicate the process.
Whilst there is currently no national registry of patients with PDOC, information from patients in UK nursing homes gives an estimated 4,000-16,000 patients in VS, and up to three times as many in MCS.4
Early and ongoing assessment of the patient is vital, as is good communication with those close to the patient, and an understanding of the legal requirements of the treating clinician. These are likely to present even more of a challenge to General Practitioners (GP) in the community who are managing these patients as part of their larger responsibilities.
This review article summarises guidance from the Royal College of Physicians (RCP) and British Medical Association (BMA), in conjunction with our own clinical experience, to improve understanding surrounding the assessment, long term management, and the ethical and legal issues in patients with PDOC, aiming to improve the confidence of clinicians managing these patients.5,6
Identifying Patients
Consciousness requires a combination of wakefulness and awareness (self and environment). Patients with significant deficits in either of these can be said to have a disorder of consciousness. Various brain injuries can result in disorders of consciousness (see Table 1).
Table 1: Aetiology of acquired brain injury.5
Cause | Examples
Vascular | Stroke, subarachnoid haemorrhage
Hypoxic | Cardiac arrest, hypovolaemia
Infection / inflammatory | Encephalitis, vasculitis
Trauma | Primary brain trauma, diffuse axonal injury
Metabolic / Endocrine | Hypoglycaemia, drug overdose, alcohol
Degenerative | Primary neurodegenerative conditions such as dementia
This article focuses on acute causes of PDOC rather than those with primary neurodegenerative conditions, as they present separate clinical entities with different issues affecting prognosis and management choices.
Disorders of consciousness lie on a spectrum, from coma, to VS, to MCS:
Coma - unrousable unresponsiveness. Patients cannot be roused, lack a sleep-wake cycle, exhibit no purposeful movement, and do not respond to stimuli.
VS - wakefulness but not awareness. Patients have a sleep-wake cycle and open their eyes spontaneously, but lack awareness of self or their environment. These patients can exhibit spontaneous and reflexive movements, and external stimuli can produce arousal responses.
MCS - wakefulness but reduced or inconsistent awareness. Patients have a sleep-wake cycle and demonstrate reproducible but inconsistent awareness of self, and ability to interact with others and their environment.
Diagnosis
As per RCP guidance 2020, patients with impaired consciousness for over 4 weeks are deemed to have PDOC.5 It is first important to differentiate possible VS / MCS from other conditions:5
Abnormalities on electroencephalography (EEG) can aid diagnosis of coma. These patients tend to progress to VS or death within weeks, so assessments of consciousness are not appropriate during this period.
Patients with locked-in syndrome have wakefulness and awareness, but paralysis of the limbs and majority of facial musculature, preventing communication by these means. EEG in locked-in syndrome is usually normal, and patients may be able to communicate using eye movements.
Patients with brainstem death have loss of all brainstem reflexes and respiratory effort, and organ survival is only temporarily achieved with life support machines.
Once ‘mimic’ conditions are ruled out, making a diagnosis of VS or MCS in patients with a suspected disorder of consciousness follows a 3-step process, with input from clinicians trained in the management of PDOC:
1. Establishing a cause
This can be straightforward in some cases, such as direct trauma to the brain, or acquired brain infection or inflammation causing structural damage. In other cases it can be more difficult, and it may not be possible to reach an exact diagnosis. The treating clinician must establish that the patient’s current condition is due to a brain injury, and take reasonable steps to determine the cause.
2. Reversible causes should be excluded
This includes reviewing medications to stop sedative medications whenever possible, blood tests to look for infection or metabolic / electrolyte abnormalities, up-to-date imaging to rule out new onset hydrocephalus, or performing an EEG to rule out subclinical seizures needing antiepileptic medication. This step also includes establishing that neurological pathways are intact, so that any assessment of consciousness provides an accurate reflection of the patient’s condition. Briefly, this involves examination and investigations to confirm that sensory, visual, auditory, and motor pathways are intact.
3. Structured assessment
There are several tools available which can confirm the diagnosis of VS or MCS. All of these require a trained assessor and an appropriate environment. These tools provide a structured method of assessing the patient to:
Observe spontaneous behaviours.
Observe the patient’s reaction to stimuli from different sensory modalities.
Document the findings of family / friends / members of the healthcare team following their interactions with the patient.
Tools available include the Wessex Head Injury Matrix (WHIM), the JFK Coma Recovery Scale-Revised (CRS-R), and the Sensory Modality Assessment and Rehabilitation Technique (SMART), amongst others. As per RCP guidance, the CRS-R should be the primary assessment tool, and WHIM or SMART can be used to provide additional information. Furthermore, assessments need to be performed on at least 10 occasions, at several different times of the day, and over the course of a 2-3 week period.5,7-9
The 2020 RCP guidance also addresses how to manage patients that do not present through the acute hospital pathway.5 In these ‘late assessment’ cases, formal assessment is still required to establish their level of consciousness and guide management. These patients should be referred to an experienced PDOC assessor to establish the cause of PDOC, rule out reversible causes, and arrange formal evaluation. This should ideally be achieved by outreach assessments, but if this is not possible, structured interviews should be held with family and care staff to complete the CRS-R. If these measures do not provide a definite diagnosis, admission to a PDOC centre can be considered.5,8
Vegetative State & Minimally Conscious State
Patients in VS are unable to interact with their surroundings or those around them (no voluntary behaviours / communication / purposeful movements), and show no evidence of awareness of self. The patient may demonstrate reflexive behaviour (such as increased heart rate or startle response to noise), or spontaneous, purposeless movements (such as eye movements, teeth grinding, or limb movements). These behaviours can be misleading, which is why an objective and structured assessment method is vital.
Patients in MCS have some evidence of awareness of self or their environment, on a reproducible but inconsistent basis. They demonstrate behaviours such as following simple commands, verbalisation, and purposeful movement. MCS is further classified based on the level of responsiveness:
MCS-minus - less complex behaviours such as orientation to noxious stimuli, or purposeful eye movements.
MCS-plus - more complex behaviours such as following instructions or interacting with objects.
Prognosis depends on the cause, the time since brain injury, and the trajectory of improvement (prognosis is better for those who progressed quickly from VS to MCS). Those with traumatic brain injury are more likely to regain awareness and have a longer window for potential recovery. Most VS patients who regain consciousness do so within 12 months in traumatic cases, and within 3 months in non-traumatic cases. Most MCS patients who regain consciousness do so within 2 years of injury, although others can emerge at up to 4 years. Whilst these are the expected outcomes, there are rare case reports of patients emerging later than this.
VS / MCS-minus are classed as ‘continuing’ at >4 weeks post brain injury, and ‘chronic’ at >3 months for non-traumatic cases, or >12 months in traumatic cases. MCS-plus is classed as ‘continuing’ at >4 weeks post brain injury, and ‘chronic’ at >9 months for non-traumatic cases, or >18 months in traumatic cases. Chronic VS / MCS can be classed as ‘permanent’ when there has been no further change in trajectory of serial CRS-R for 6 months. In permanent PDOC it is predicted that consciousness is highly improbable to recover. It is important to remember these time frames, and their implications during discussions with family, when making best interests decisions and planning further assessments of consciousness.5,10,11
With the longer time period for potential emergence, and improved survival rate compared with VS, GPs are more likely to come across patients in MCS in the community. Figure 1 outlines the key time points for assessment of VS and MCS.12
Figure 1: Timeline for assessment of VS & MCS.5
Emergence
A patient is considered to have ‘emerged’ from PDOC if they are able to consistently demonstrate awareness of self and surroundings. The RCP advise that patients who have emerged are able to do at least one of the following:5
Functional interactive communication (accurate yes/no responses to 6/6 basic questions on 2 consecutive evaluations).
Functional use of objects (intelligent use of ≥2 objects on 2 consecutive evaluations).
Consistent discriminatory choice-making (correct identification between 2 pictures, 6/6 times, on 2 consecutive evaluations).
Specialist Involvement
Early specialist input from a neurological rehabilitation team is recommended. The Royal College of Physicians Guideline Development Group advise that those with an ongoing disorder of consciousness at 4 days (Glasgow Coma Scale ≤10/15) should be referred for assessment, and advice regarding neurological disability and prevention of complications.5,13 At 2 weeks the patient should be referred for specialist neurological evaluation to identify the cause of the disorder of consciousness, assess the primary neurological pathways, and advise on further investigations.
Patients with ongoing disorder of consciousness at 4 weeks should have regular input from a specialist neurological rehabilitation team, led by a consultant in Rehabilitation Medicine. Once stable the patient should ideally be transferred to a specialist neurorehabilitation unit for multidisciplinary care, objective assessment of level of consciousness, formal best interests decision-making, and discharge planning.
Following this initial period, the patient should be placed in a unit away from the acute setting, where they can be monitored until it is evident that they are likely to remain in VS / MCS. These ‘slow-stream’ rehabilitation units are designed to deliver care to patients with complex neurological disability and provide appropriate maintenance therapy to manage physical disability. Medical input is usually provided by the GP surgery covering the area, although units should also have access to rehabilitation medicine physicians with experience in managing PDOC.
If it is agreed that a patient has permanent VS / MCS, then longer-term care can be provided in a nursing home or, if appropriate, in the patient’s own home. A skilled assessor should review the patient yearly, with formal assessment of consciousness, until the patient either emerges or dies.
Medicolegal & Ethical Issues
Capacity Assessments
By definition a person in PDOC lacks capacity to make decisions about medical treatment. The Mental Capacity Act 2005 requires this to be formally documented in the medical notes. A Deprivation of Liberty Safeguard should be put in place during hospital admission or nursing / residential home stay, providing that restraint and restrictions are in the patient’s best interests.14
Identifying Advance Decisions
The team providing care need to identify as early as possible whether the patient has a valid and relevant Advance Decision, Health and Welfare Lasting Power of Attorney, or Court-appointed Welfare Deputy. If one of these is in place, the team need to request to see the relevant documentation to understand what exactly it entails.
Best Interests Meetings
All medical treatment provided must be in the patient’s best interests. In the UK, the treating clinician must by law identify those people close to the patient that can provide insight into the patient’s beliefs / previous expressed wishes / likely wishes, and take part in best interests meetings. If there is nobody to fulfil this role then an Independent Mental Capacity Advocate must be appointed. An initial best interests meeting should be held to discuss the diagnosis, likely prognosis, and to plan treatment. Further meetings should be held at planned regular intervals, for major medical decisions, and following repeat assessments to decide future management, discharge planning, and ceilings of care.
Ceiling of Care Discussions
Many relatives may not feel comfortable bringing up these topics themselves, so it is advisable to make the discussion part of a routine review as standard for PDOC patients.
In patients with PDOC, cardiopulmonary resuscitation (CPR) has a very low success rate and is likely to cause further hypoxic brain injury. For the majority of patients in whom emergence is not expected, or where it is felt that the patient would not accept their quality of life, CPR may be considered futile: it would not provide a perceivable benefit to the patient, but would carry significant risks of harm (worsening brain injury, injury related to the CPR itself, and an undignified end of life). Decisions regarding ceiling of care or the appropriateness of resuscitation should either follow the instructions set out in any existing advance decision, or be made together with the treating multidisciplinary team (MDT). It is highly advisable to involve close family and friends in these discussions, but ultimately this is a medical decision.
For similar reasons, it should be considered whether hospital admission for treatment of an acute deterioration is in the patient’s best interests. For example, in a patient with permanent VS, treating an acute chest infection may resolve the infection, but will not improve the patient as a whole in any way that the patient can perceive and appreciate, and so may be considered futile. It may also be appropriate to stop medications not aimed at providing comfort, or to stop performing observations and investigations. As with all major medical decisions, this should be discussed within the MDT and with those close to the patient. Although patients with PDOC have absent or reduced awareness, care should be taken to maximise patient comfort, and, if appropriate, input from the palliative care team should be considered.
Decisions relating to withdrawing clinically assisted nutrition and hydration (CANH) have previously been managed differently from withdrawal of other life-sustaining treatment. Until recently, the decision to withdraw CANH could not be made without referring to the Court of Protection (COP). More recent guidance published by the BMA advises that in PDOC this is not always necessary. The treating team should first establish whether there is a valid and applicable advance decision, or a health and welfare attorney with the relevant power, and then follow a best interests decision-making process. If all parties agree that withdrawal of CANH is in the patient’s best interests, a second opinion should be obtained from an independent physician with expertise in PDOC; if they also agree, then CANH can be withdrawn. If there is any doubt or disagreement about the decision, an application to the COP is required. In the UK, best interests meetings are now framed around whether provision or continuation of CANH is of benefit to the patient, rather than around whether to withdraw it; if CANH is determined not to be of overall benefit, it should not be continued. Prior to the withdrawal of CANH, an appropriate end-of-life care plan should be agreed and ready to put in place.6
Conclusion
Disorders of consciousness can occur following brain injury, and vary from coma to MCS. If the disorder of consciousness continues for 4 weeks, it is described as a PDOC. Diagnosis requires structured assessment by trained clinicians, once the patient is medically optimised and reversible causes are excluded. Ongoing assessment is crucial to monitor recovery, guide prognosis, and establish when the disorder is permanent.
There are many ethical and medicolegal issues involved in managing patients with PDOC, mainly centred on the patient’s loss of mental capacity to make decisions. The cost implications of providing care as outlined in these guidelines can be significant. This article reflects our experience working within the UK National Health Service (NHS), which provides healthcare free at the point of delivery; costs are therefore less relevant to patients, although they do need to be considered when commissioning services. In private healthcare settings, costs may vary widely depending on hospital and wider multidisciplinary team charges. We also appreciate that other countries are likely to have different laws surrounding PDOC, and varying views on the ethical decisions discussed.
Currently, these guidelines are based on expert opinion from the Royal College of Physicians Guideline Development Group. In future, the management of patients with PDOC could be improved by the establishment of a national registry, further studies into PDOC, and better integration with community services. Furthermore, improved education about PDOC and the issues surrounding it, as we have aimed to outline in this article, will help physicians understand their responsibilities and provide the best possible patient care.
The pituitary gland is a small gland located at the base of the brain and connected to the hypothalamus. Dubbed the body’s “master gland”, it produces important hormones that control many bodily functions, including haemodynamics, glucose regulation, the fight-or-flight response and body growth. Any of the pituitary hormones may be affected in pituitary disease, with acute adrenocorticotropic hormone (ACTH) deficiency being the most catastrophic and life-threatening.
Pituitary apoplexy occurs following acute haemorrhage or infarction of the pituitary gland, causing patients to become acutely unwell through both hormonal and local compressive effects. These effects account for the usual presentation of pituitary apoplexy: severe headache, diplopia, visual loss and hypopituitarism.
We report a case of pituitary apoplexy that presented with a 2-week history of loss of peripheral vision and lethargy with stable vital signs.
CASE PRESENTATION
A 49-year-old gentleman complained of loss of peripheral vision in the left eye and lethargy for 2 weeks. The visual loss was sudden, painless and non-progressive, and had caused him considerable difficulty driving, during which he would drift into the wrong lane and be honked at. He had no known medical or surgical history of note. Prior to presentation, he had no history of eye pain, eye redness or trauma to the left eye. There were no headaches, neurological deficits or constitutional symptoms.
Clinically, he had bitemporal hemianopia with no other cranial nerve deficits. His Glasgow Coma Scale was 15/15, vital signs were stable and there was no postural change in blood pressure. Examination of other systems was unremarkable. Blood investigations revealed a decreased morning cortisol of 46 nmol/L and a normal thyroid stimulating hormone (TSH) with borderline low free thyroxine. Serum electrolytes, plasma glucose and all other anterior pituitary hormones were within reference range.
A computed tomography (CT) scan of the brain showed an enlarged pituitary sella containing a large, well-circumscribed, heterogeneously enhancing mass. The mass measured 3.5 cm × 2.7 cm × 3.5 cm (AP × W × CC), contained no calcifications, and was compressing the optic chiasm.
Two days later, pituitary Magnetic Resonance Imaging (MRI) of the brain was performed, which reported a heterogeneous mass occupying the sella with suprasellar extension, measuring 2.7 × 2.8 × 2.9 cm (AP × W × CC) (Figures 1.1 & 1.2). The mass returned mixed solid-cystic intensity with significant post-contrast enhancement, and there was evidence of layering within its cystic component. Inferiorly, the right border ended lower than the left (Figures 2.1 & 2.2).
Following consultations with endocrinologists, neurosurgeons and radiologists, a clinical diagnosis of pituitary apoplexy with hypocortisolism and central hypothyroidism was reached. The patient was started on oral hydrocortisone 20 mg in the morning and 10 mg in the evening, and oral L-thyroxine 100 mcg in the morning, before being referred to the neurosurgeon for trans-sphenoidal surgery. While awaiting surgery, no clinical deterioration was reported. Endoscopic trans-sphenoidal surgery took place successfully a week later and revealed an enlarged haemorrhagic pituitary gland (Figure 3.0). The patient was discharged well a week post-surgery.
His histopathology report later confirmed pituitary adenoma, with monomorphic tumour cells arranged in nests, trabeculae and occasional pseudorosettes. The tumour cells exhibited mild pleomorphism with a moderate amount of cytoplasm, and the stroma was highly vascularised. No necrosis, calcification or mitosis was seen. Immunohistochemistry studies were positive for follicle-stimulating hormone (FSH) and luteinising hormone (LH) and negative for ACTH, growth hormone, prolactin and TSH.
Figure 1.1 and 1.2: MRI brain on coronal view illustrating well-defined and heterogeneous suprasellar mass
Figure 2.1 and 2.2: MRI brain on sagittal view illustrating mixed solid-cystic intensity pituitary mass
Figure 3.0: Intraoperative finding showing haemorrhage of the pituitary gland
DISCUSSION
Pituitary apoplexy is a potentially fatal condition caused by haemorrhage or infarction of the pituitary gland, or both. Most cases occur during the fifth decade of life, predominantly in males.1 In the majority of cases it is associated with a pre-existing non-functioning macroadenoma, which accounts for 14-54% of pituitary adenomas and has a prevalence of 7-41.3 per 100,000 population. The standardised incidence rate is 0.65-2.34 per 100,000.2
The many clinical presentations of pituitary apoplexy result from local compression of adjacent structures or deficiency of pituitary hormones – the former being more common, with affected individuals presenting with headaches, visual disturbances and other symptoms of raised intracranial pressure.3
Subclinical haemorrhages refer to asymptomatic individuals with evidence of pituitary haemorrhage on MRI. In a 2018 retrospective cross-sectional analysis involving 64 patients, 34.38% had subclinical haemorrhage within a non-functional adenoma.4 In another retrospective overview by Turgut et al, 186 cases of apoplectic pituitary adenoma presenting with monocular or binocular blindness were published over the last century.5 In a case report by Sasaki et al, a 65-year-old gentleman was only diagnosed with pituitary apoplexy after weeks of blood investigations for hyponatraemia and repeat imaging; his only presenting complaints were anorexia, low energy and fever for two weeks.6 These reports show that pituitary apoplexy does not always present as an acute emergency, and that the diagnosis, although important to make early, is easily delayed.
At the other end of the spectrum, pituitary apoplexy may present as a life-threatening emergency in which patients are unconscious and haemodynamically unstable due to hypopituitarism. Acute ACTH deficiency causes acute adrenal insufficiency, resulting in hypotension, hypoglycaemia, hyponatraemia and hyperkalaemia. Sometimes non-specific symptoms precede those of hypocortisolism. A reduced conscious level may be due to the tumour’s mass effect transmitting pressure to the brainstem or compressing the hypothalamus.7 Espinosa et al reported a 48-year-old gentleman with pituitary apoplexy who presented with the worst headache of his life, requiring urgent neurosurgical intervention which proved life-saving.8
Complex as it already is, diagnosing pituitary apoplexy may be further complicated when its non-specific symptoms can be explained by other causes, such as drowsiness after general anaesthesia, hyponatraemia in a patient on diuretics, or headache in a post-partum woman who has received spinal anaesthesia.9,10
While most patients consequently suffer from pituitary insufficiency, the extent, type and duration of replacement therapy differ between patients. Cases of spontaneous recovery have been reported with both surgical and conservative management.11,12 However, robust controlled studies comparing the outcomes of surgical and conservative management in pituitary apoplexy have yet to emerge. Nonetheless, studies have shown that visual outcomes improve significantly with surgery.13,14
Given the varied presentations of pituitary apoplexy, this life-threatening endocrine condition should be considered in any patient with abrupt neuro-ophthalmic deficits, regardless of clinical stability. This is imperative, as prompt medical and surgical management may not only be life-saving but may also significantly improve visual and cranial nerve outcomes.15
CONCLUSION
Pituitary apoplexy is an endocrine emergency which requires immediate investigation and treatment. Despite its potentially disastrous pathology, affected patients may present with isolated visual disturbances or with no symptoms at all. It is therefore important to suspect pituitary apoplexy early in stable patients with eye complaints, as early detection and management are life-saving and significantly improve neuro-ophthalmic outcomes.
The Royal College of Psychiatrists and NICE guidelines both stress the importance of carrying out a physical examination on psychiatric in-patients because of their high level of physical health problems. Carrying out and carefully documenting these examinations at the time of admission allows physical health issues to be appropriately taken into account when creating management and medication plans and, in more severe cases, allows diversion for medical treatment if this is required or if a physical problem underlies the presentation.
Monitoring the physical health of patients in psychiatric settings is vital and is recommended by NICE in its guidelines; documenting the physical health assessment in the right place is equally important. According to the Louth/Meath Mental Health Services Admission Policy (2016), all psychiatric patients admitted should have their physical examination completed and recorded on the Physical Examination Proforma.
Psychotropic medications can affect the physical health of psychiatric patients1. Patients with medical co-morbidities are more at risk from psychotropic medications than the general healthy population2. In addition, depression is considered an independent risk factor for cardiac events in patients with coronary artery disease3, and may also increase the risk of cardiovascular disease in people without medical co-morbidities. Hence, psychotropic medications are carefully chosen for individual patients to avoid adverse events1. Depression is not the only risk factor for medical co-morbidities; other psychiatric problems also make patients vulnerable to physical health issues1. Moreover, the prevalence of medical problems is relatively high in psychiatric patients compared to cohorts without mental health disorders4. Nor does the risk of medical co-morbidity always arise from psychotropic medications; the risk of cardiovascular disease is also increased in patients suffering from anxiety who are not necessarily taking medication5.
Psychiatric patients receiving psychotropic medications should have their physical health monitored regularly as recommended by NICE6.
Methods
The audit cycle was completed in St Brigid’s Complex, Ardee, and comprised an initial audit (phase 1), implementation of changes following recommendations, and a re-audit to compare results with the initial audit. All patients in Unit 1, an acute admission ward, were included in the audit and re-audit. Patients admitted to the long-stay ward were excluded, as these patients are already well established on psychotropic medications and their physical health is regularly monitored. Data were collected from the physical health proformas completed on admission and filed in the notes. No patient-identifiable data were collected during the audit cycle.
During phase 1, the notes of all in-patients in Unit 1, St Brigid’s Complex, Ardee were reviewed on a single day. Data were collected from each patient’s physical health proforma and entered into an Excel spreadsheet for analysis. Results were analysed and feedback obtained from non-consultant hospital doctors (NCHDs). The findings were presented during local teaching to both the consultant and NCHD bodies, and means of improving compliance were discussed openly. These discussions led to a redesign of the proforma to make it shorter and simpler to complete. The redesigned proforma was attached to the assessment booklet, whereas the original proforma had been a separate document. A re-audit was carried out on a single day on all in-patients in Unit 1 several months after the first phase of the audit; in-patients who had remained in Unit 1 since the initial phase were excluded from the re-audit.
Results
The initial audit demonstrated only 50% (10/20) compliance with the physical health proforma. Furthermore, in phase 1 the proformas were often only partially completed, with some elements of the physical examination documented on the proforma, others documented elsewhere in the admission notes, and many omitted altogether. Only 15% (3/20) of the proformas contained a complete, documented physical examination.
The section of the proforma most often left incomplete concerned the patient’s current circumstances. Demographic details were recorded for only 50% of patients, and the admitting doctor’s details were recorded on only 35% (7/20) of proformas. The details of the professional carrying out the physical examination were missing from almost all (19/20) proformas.
Table 1: Completion of proforma items in the initial audit (n=20)

| Item | Yes | No | Partial |
|---|---|---|---|
| Patient Demographics | 10 | 10 | 0 |
| Date & Time of Admission | 6 | 11 | 3 |
| Referral Agency | 7 | 13 | 0 |
| Admission Status | 8 | 12 | 0 |
| Drug Allergies | 6 | 14 | 0 |
| GP Details | 7 | 13 | 0 |
| NOK Details | 3 | 17 | 0 |
| Religion | 1 | 19 | 0 |
| Marital Status | 2 | 18 | 0 |
| No of Children | 2 | 18 | 0 |
| Occupation | 2 | 18 | 0 |
| Nationality | 3 | 17 | 0 |
| No of Previous Admissions | 1 | 19 | 0 |
| Medical Card No | 0 | 20 | 0 |
| V.H.I | 0 | 20 | 0 |
| Provisional Diagnosis | 6 | 14 | 0 |
| Admitting Doctor Name | 7 | 13 | 0 |
| Admitting Doctor Signature | 7 | 13 | 0 |
| General Examination | 9 | 11 | 0 |
| CVS | 9 | 11 | 0 |
| R.S | 9 | 11 | 0 |
| C.N.S | 9 | 11 | 0 |
| Alimentary System | 6 | 14 | 0 |
| G.U.S | 3 | 17 | 0 |
| L.M.P | 1 | 19 | 0 |
| Signature | 1 | 19 | 0 |
| Date | 8 | 12 | 0 |
Data analysis of the re-audit showed that 80% (16/20) of the proformas had been completed. Overall, there was a marked improvement in the re-audit results: the details of the doctor performing the physical examination were recorded on 75% of proformas, the general examination section showed 80% compliance, and the cardiovascular and respiratory system sections each showed 75%.
Table 2: Completion of proforma items in the re-audit (n=20)

| Item | Yes | % Yes | No | % No |
|---|---|---|---|---|
| Name | 12 | 60% | 8 | 40% |
| DOB | 10 | 50% | 10 | 50% |
| General Examination | 16 | 80% | 4 | 20% |
| CVS | 15 | 75% | 5 | 25% |
| R.S | 15 | 75% | 5 | 25% |
| C.N.S | 14 | 70% | 6 | 30% |
| Alimentary System | 14 | 70% | 6 | 30% |
| G.U.S | 14 | 70% | 6 | 30% |
| L.M.P | 6 | 30% | 14 | 70% |
| Signature | 15 | 75% | 5 | 25% |
| Date | 15 | 75% | 5 | 25% |
Discussion
A total of 20 patients were included in each phase of the audit. This number may seem small for a research study of a different design; however, sample size is not a limiting factor for an audit of this type, and it reflects the typical number of patients admitted to an acute ward.
During data collection it was apparent that, for some patients, physical examination findings had been recorded in the notes rather than on the proforma, as reflected in the results. Even where a physical examination had been carried out, such cases could not be included in the analysis because of the study design.
The first phase demonstrated poor compliance with the physical health proforma, even though physical examinations were being carried out and their findings recorded elsewhere in the admission notes. It could be argued that, regardless of whether the proforma is filled in, physical examinations are being performed in line with local and NICE guidelines. However, findings documented elsewhere in the admission notes are difficult to locate, which is why completing a proforma on admission is the agreed standard procedure.
Once analysed, the results of the initial audit were presented at the local academic session to all NCHDs and consultant psychiatrists. While all involved agreed on the importance of carrying out a physical examination on all patients at admission, the design and complexity of the initial proforma made it very difficult for NCHDs to complete. In addition, some of the information, such as demographic and personal details, duplicated information recorded elsewhere in the notes. The physical health proforma was therefore redesigned and simplified, with unnecessary and duplicate information removed, and attached to the initial psychiatric assessment booklet. The new proforma was implemented in the service after discussions with fellow NCHDs, consultants and management.
The second phase of the audit cycle was conducted a number of months later, after the redesigned physical health proforma had been in circulation for some time. Data were again collected and analysed as per the study design and methods. The results demonstrated a large improvement in compliance with the physical health proforma after the change in practice. Although compliance improved significantly, some gaps remained short of the desired outcome of 100%. Case notes were studied to understand why some proformas remained incomplete. One recurring reason was that the patient had been transferred from a medical ward of the General Hospital after being medically cleared; the time and mode of admission also contributed to proformas not being completed.
Conclusion
While all involved agreed that carrying out physical examination on all admissions was advisable; the length and complexity of the initial proforma contributed to poor completion rates by NCHDs. A combination of teaching to underline its importance and a redesign focused on usability and speed led to significantly increased completion of the proforma with attendant benefits for patient assessment and treatment.
Health Education England (HEE) runs the Medical Training Initiative (MTI) scheme on behalf of the Department of Health (the Government Sponsor); the scheme operates under the Home Office Tier 5 Government Authorised Exchange visa1. The Academy of Medical Royal Colleges is the national sponsor for visa purposes. The major stakeholders in this scheme are the GMC and GMC-approved sponsors (e.g. the Medical Royal Colleges), Postgraduate Deaneries/Local Education and Training Boards (LETBs), and National Health Service (NHS) Trusts, with support from the Department of Health.
The Royal College of Psychiatrists (RCPsych) Medical Training Initiative (MTI) scheme enables qualified overseas psychiatrists to undertake training posts in the National Health Service (NHS) for a maximum of two years2. The purpose of the scheme is to provide training opportunities in the UK for international psychiatrists, enabling them to develop professionally and return home with broad knowledge and experience. Vacant core training (CT3) posts approved by Deaneries/LETBs are offered to eligible international doctors. Thus, the MTI psychiatry scheme can benefit overseas doctors, the NHS and the countries that trained them.
Although the MTI scheme was first established in 2009, the RCPsych only formally adopted the programme in 2014. Lessons were learned from the experience of the scheme in other specialities, giving the RCPsych an opportunity to develop its own version. The College developed a selection process and matches successful candidates with relevant placements in NHS trusts across the UK, taking into consideration the training needs of the overseas doctor and the vacancies available. The MTI psychiatry scheme is now in its sixth year and has grown steadily, as evidenced by an increase in the annual allocation of training posts to 40 placements, a rise in the number of applicants from different regions of the world, and increased interest from employing NHS trusts. However, there are areas for further development, and a need to ensure that the scheme consistently provides a good training experience to international doctors.
Various studies suggest that overseas doctors face diverse difficulties during their transition into a new country3,4. Lack of information about the NHS; clinical, educational and work-culture challenges; language and communication challenges; and discrimination were all experienced by international doctors when first working in UK hospital settings5. The College has recognised these difficulties and wanted to understand how they affect international doctors and what can be done to help.
Aims
The aim of this survey was to evaluate trainees’ experience of the MTI psychiatry training scheme, to explore difficulties encountered during training, and to identify what can be done to help. Its purpose was to gather feedback on the current implementation of the MTI scheme.
Methods
An anonymous online survey consisting of 28 questions was sent to doctors using SurveyMonkey as part of the RCPsych Annual MTI survey. All doctors enrolled in MTI Scheme were identified through the RCPsych MTI mailing list. The survey was open in November 2018 for one month.
Results
Thirty-one of seventy-six trainees completed the survey, a response rate of 40.78%. Most respondents (n=13) were in the 31-35 years age group. The findings of the survey are summarised in Tables 1-3.
Table 1: Description of MTI doctors (n=31)

| Characteristic | n (%) |
|---|---|
| Gender | |
| Male | 17 (54.83%) |
| Female | 13 (41.93%) |
| Prefer not to say | 1 (3.22%) |
| Age (years) | |
| <30 | 5 (16.12%) |
| 31-35 | 13 (41.93%) |
| 36-40 | 6 (19.35%) |
| 41-45 | 5 (16.12%) |
| >45 | 2 (6.45%) |
| Year of MTI scheme | |
| First | 16 (51.61%) |
| Second | 7 (22.58%) |
| Completed | 8 (25.80%) |
| Country of Primary Medical Qualification | |
| Egypt | 3 (9.67%) |
| India | 8 (25.80%) |
| Lebanon | 2 (6.45%) |
| Nigeria | 12 (38.70%) |
| Sri Lanka | 3 (9.67%) |
| Trinidad & Tobago | 1 (3.22%) |
| Skipped | 1 (3.22%) |
| Previous psychiatric experience (years) | |
| 3-5 | 17 (54.83%) |
| 6-7 | 7 (22.58%) |
| 8-10 | 5 (16.12%) |
| >10 | 2 (6.45%) |
| Worked in other countries besides the country of primary medical qualification prior to working in UK | |
| Yes | 2 (6.45%) |
| No | 29 (93.54%) |
| Reason for choosing MTI scheme (multiple responses allowed) | |
| Recommendation from senior colleagues | 15 (48.38%) |
| College reputation | 16 (51.61%) |
| Training opportunities | 24 (77.41%) |
| Research opportunities | 6 (19.35%) |
| Job prospects | 15 (48.38%) |
| Others | 2 (6.45%) |
Table 2: Induction, Supervision and Mentoring (n=31)

| Item | n (%) |
|---|---|
| Initial induction at workplace prior to starting work | |
| Yes | 28 (90.32%) |
| No | 3 (9.67%) |
| Allocation of educational supervisor | |
| Yes | 29 (93.54%) |
| No | 2 (6.45%) |
| Frequency of educational supervision | |
| Never | 5 (16.12%) |
| 1-2 times/year | 14 (45.16%) |
| 1-2 times/month | 5 (16.12%) |
| Every week | 5 (16.12%) |
| Other | 2 (6.45%) |
| Able to attend course/study days | |
| Yes | 26 (83.87%) |
| Sometimes | 4 (12.90%) |
| None | 1 (3.22%) |
| Frequency of clinical supervision | |
| Weekly | 19 (61.29%) |
| Fortnightly | 7 (22.58%) |
| Monthly | 5 (16.12%) |
| Quality of clinical supervision | |
| Excellent | 7 (22.58%) |
| Good | 16 (51.61%) |
| Fair | 7 (22.58%) |
| Poor | 1 (3.22%) |
| Access to out-of-hours support/advice | |
| Always | 18 (58.06%) |
| Sometimes | 11 (35.48%) |
| Rarely | 2 (6.45%) |
| Forced to cope with clinical problems | |
| Weekly | 2 (6.45%) |
| Monthly | 3 (9.67%) |
| Rarely | 17 (54.83%) |
| Never | 9 (29.03%) |
| How often do you meet your MTI mentor? | |
| I don’t have a mentor | 16 (51.61%) |
| 1-2 times per year | 5 (16.12%) |
| 1-2 times per month | 2 (6.45%) |
| Others | 8 (25.80%) |
Table 3: Work experience in MTI scheme (n=31)

| Item | n (%) |
|---|---|
| Have you experienced any of the following? | |
| Clinical training second to service | 16 (51.61%) |
| Feeling unsafe | 3 (9.67%) |
| Being punished for seeking help | 4 (12.90%) |
| Being bullied | 3 (9.67%) |
| Others | 6 (19.35%) |
| Challenges encountered | |
| Lack of relevant information about National Health Service (NHS) | 14 (45.16%) |
| Lack of knowledge of regulatory framework | 19 (61.29%) |
| Unfamiliarity with multidisciplinary teamwork approach | 11 (35.48%) |
| Communication difficulties | 8 (25.80%) |
| Cultural differences | 15 (48.38%) |
| Varied level of training and support | 11 (35.48%) |
| Others | 7 (22.58%) |
Reasons for choosing MTI Scheme
Three-quarters of respondents cited training opportunities in the UK as a reason for joining the MTI scheme, while about half reported job prospects, recommendations from senior colleagues and the College’s reputation as pull factors.
Clinical and Educational Supervision
Three-fifths of trainees had weekly supervision with their designated clinical supervisor, and three-quarters rated the quality of that supervision as good or excellent. The majority (93.54%) had an allocated educational supervisor, but fewer than half met that supervisor only 1-2 times per year. RCPsych has a mentoring scheme to support MTI doctors, yet half of the trainees (51.61%) did not have a mentor.
Out-of-hours support
Fewer than one-third of trainees reported never being forced to cope with clinical problems beyond their competence, although three-fifths reported that they always had access to out-of-hours support and advice.
Challenges encountered
A lack of knowledge of the regulatory framework was reported by three-fifths of trainees working in UK settings. In addition, about half reported a lack of knowledge of the NHS and difficulties with cultural differences, and one-third had difficulty with multidisciplinary team working and varied levels of support and training. About half (51.61%) felt that their clinical training was secondary to service provision, and a few reported feeling unsafe, being bullied or being punished for seeking help.
Discussion
This is the first evaluation of the training experience of MTI psychiatry doctors. The study showed that most trainees had good experience of psychiatry before coming to the UK. One of the undoubted strengths of the MTI psychiatry scheme is the recruitment of international psychiatrists with skills and experience of working in diverse cultural backgrounds and low-resource settings, a potential benefit the NHS can draw on in delivering healthcare. The majority of respondents in the present survey cited training opportunities as the main reason for choosing the MTI scheme. Child and Adolescent Psychiatry, Old Age Psychiatry, Addiction Psychiatry and Forensic Psychiatry were the subspecialties attracting the highest interest in MTI posts in a 2017 survey6. It is encouraging that most doctors were keen to gain further experience and training in subspecialties not readily available in their home countries. A similar finding has been reported in the Royal College of Anaesthetists’ annual MTI survey, where the majority chose subspecialties poorly developed in their respective countries, e.g. ICU and pain7.
Transition to the UK is not a smooth process for overseas doctors, and they must be supported during this phase5. Lack of knowledge of the NHS and its regulatory framework, together with cultural differences, were the challenges faced by most MTI doctors in this study. The RCPsych International Medical Graduates (IMG) conference acknowledged that IMGs face more problems than their British counterparts in succeeding in the system, and recognised the importance of trainers, the role of employers in developing meaningful induction programmes, and the value of giving IMGs additional support and remediation if required8. This study showed that most trainees had attended a local workplace induction before starting the job. Induction course content must be relevant and reflect the issues concerning overseas doctors9; it is particularly important to remember their specific needs, as they were trained in culturally diverse and low-resource clinical settings. Several studies have shown that a structured induction programme is a useful way to integrate doctors during the transition to the NHS10-12. A few trainees missed the local hospital induction because they arrived in the UK months later than expected and the trust could not arrange the training. With this in mind, RCPsych organises an annual national MTI induction programme for new doctors in the scheme, to complement and compensate for any shortcomings in local hospital induction.
MTI posts should provide the trainee with an opportunity to train in a highly supported environment. Supervisors provide regular support and ongoing feedback during training, and trainees value the support they receive through supervision, senior and peer support, and the opportunity to work in a multidisciplinary team13. It was reassuring to find that three-fifths of trainees had weekly clinical supervision, as recommended by the Royal College of Psychiatrists. The quality of clinical supervision was rated as good by 51.61% of trainees and as excellent by 22.58%. Most had access to out-of-hours support and advice. Supervision is important for continued professional development, as international doctors need more support than UK-trained doctors9. Unfortunately, a few reported serious issues such as being bullied at the workplace and feeling unsafe. A survey of bullying of psychiatric trainees in the workplace reported that it was experienced equally by IMGs and UK graduates, but IMGs were less likely to report the incident to the organisation14. It is important to educate IMGs about the mechanisms for escalating such concerns so that proper action can be taken. It would also be prudent to cover these pertinent issues during the annual MTI induction programme to raise awareness among IMGs.
The MTI doctors had identified areas for additional support from the College, trusts, local deaneries, and senior colleagues in the 2017 annual survey6. The College took the following steps:
1. Annual MTI Induction Program: A full-day induction program is held annually at the Royal College of Psychiatrists for new doctors on the scheme. The program is specifically tailored for doctors working in the UK for the first time. Highlights include an introduction to the NHS, Good Medical Practice, psychiatric training in the UK, ‘Person-Centred Care’, resources and support available for trainees and, most importantly, a communication skills workshop. It also provides an opportunity to meet other MTI fellows, share experiences and set up informal support networks such as a WhatsApp group. Twenty-three doctors attended the MTI induction program in 2019. Not all doctors recruited to the MTI scheme were able to attend the annual induction program because of variable start dates resulting from delays in visa processing. RCPsych could support these IMGs by organising the induction program twice a year.
2. MTI Mentoring Scheme: RCPsych runs a mentoring scheme and has been offering mentorship to MTI doctors for the past three years15. Mentors are usually experienced RCPsych members who have volunteered for the scheme. The RCPsych MTI team matches a mentor with a mentee, and the pair stay together for the duration of the placement. The current study shows that 50% of trainees do not have a mentor. We did not explore the reason for this, but since doctors must actively express an interest in participating in the mentoring scheme, we speculate that this requirement may have limited engagement.
3. Annual MTI Scheme Survey: Feedback is collected from MTI doctors each year as part of ongoing efforts to improve the RCPsych MTI scheme.
4. Sharing of experiences about the scheme between trusts: Trusts have varying levels of experience with the training scheme, and the College has been facilitating the exchange of experience from established host trusts to new ones.
This survey explored the experiences of doctors involved in the MTI scheme, and it would be interesting to know the findings of longer-term studies. Longer-term follow-up is needed to evaluate the positive impact of the scheme after the doctors return home on completion of training. It is hoped that the invaluable insights gained from the survey can be used to strengthen the scheme, as well as provide learning points for other specialties with similar training schemes for international doctors.
Conclusions
This survey provides useful information about training experiences on the MTI psychiatry scheme. The first step in making a difference is getting feedback directly from those involved in the scheme. The RCPsych MTI Scheme is an evolving program, and measures have been put in place to address the needs and concerns that emerged from the survey and to enhance the training experience of MTI doctors.
Parkinson’s disease (PD) is the second most common neurodegenerative disease. It is associated with loss of dopamine, leading to motor disorders1. However, non-motor symptoms such as anxiety, stress, and depression, as well as cognitive impairment, are also common among patients2. It has been hypothesized that non-motor symptoms affect the quality of life of PD patients3. The current therapeutic approach relies on dopamine substitution, which has no curative effect and does not improve non-motor symptoms. Studies have shown that meditation and other relaxation techniques can provide relief from non-motor symptoms. Mindfulness-based stress reduction (MBSR) is a technique used for improving stress-related symptoms in long-term conditions such as stroke, cancer, and PD4-6. It involves focused attention, open monitoring, and non-judgemental self-awareness of body movements in the present moment. Studies have shown that mindfulness improves brain plasticity in areas involved in emotional regulation and processing7, 8. We therefore hypothesized that mindfulness techniques could also have a positive effect on the non-motor symptoms of PD, enhancing patients' quality of life after training sessions. This clinical trial aimed to investigate the impact of mindfulness training on the quality of life of PD patients.
Materials and Methods
Participants and Ethical issues
This randomized clinical trial was conducted at the neurology outpatient clinics of the Imam Reza and Razi University Hospitals. Participants were 40 patients aged 67.95 ± 6.8 years (range 56-80) with a definite diagnosis of PD who had been receiving dopaminergic drugs for at least one year. Twenty-seven of the patients were male and 13 were female. All were married, and 4 reported a family history of PD. Participants were randomly assigned to an experiment group and a control group of 20 patients each. For randomization, a computer-generated list of random numbers was applied to patients at the time of their neurologist visit at the clinic.
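The allocation step can be sketched in a few lines; this is a minimal illustration of simple computer-generated randomization into two equal groups, not the study's own procedure (the function name and seed are hypothetical):

```python
import random

def allocate(patient_ids, seed=1):
    """Shuffle patient IDs and split them into two equal groups."""
    rng = random.Random(seed)  # fixed seed only so this sketch is reproducible
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experiment": shuffled[:half], "control": shuffled[half:]}

# 40 patients split into two groups of 20
groups = allocate(range(1, 41))
```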
The inclusion criteria were: definite diagnosis of idiopathic PD based on UK Brain Bank criteria; mild to moderate disease according to Hoehn and Yahr (HY) staging (1-3); a stable, normal dosage of PD medications within the last six months; normal cognitive function or mild cognitive impairment according to a Mini-Mental State Examination (MMSE) score of 17-30; and enthusiasm and commitment to participate in the mindfulness training sessions and to practice the required exercises at home.
Patients with any of the following were excluded: focal neurologic deficit; abnormal brain imaging findings suggestive of brain lesions; other medical conditions that would affect quality of life; use of antiepileptic drugs; and symptoms of psychosis.
The protocol of the study was reviewed and approved by the local ethics committee of Tabriz University of Medical Sciences (IR.TBZMED.REC.1397.551). All patients provided written informed consent to participate in the study and to the use of their information. The trial was registered on the IRCT.ir website (IRCT20181007041258N1).
Mindfulness Training sessions
The intervention comprised 8 weekly mindfulness-based stress reduction (MBSR) sessions, each lasting 2 hours with a 15-minute break between the first and second hours. The sessions were complemented by a one-day, 7-hour retreat held between the sixth and seventh sessions. Patients were asked to practice the assigned homework for at least 30 minutes after each session. The training protocol followed the steps described by Kabat-Zinn9. The sessions were led by a psychiatrist with over 5 years of experience in MBSR instruction. The instruction was based on three techniques: body scanning, mindfulness meditation, and gentle yoga. The sessions focused on physical and mental awareness of the body, diminishing the physiological effects of pain and stress, reacting less emotionally when facing distress, mental calmness amid life's challenges, non-judgmental awareness, equanimity in stress management, and enjoyment of every moment.
Controls
The patients in the control group received eight 1-hour sessions over the same period as the experiment group. These sessions centred on basic information about PD drawn from brochures published by the American Parkinson Disease Association, covering medications, symptoms of the disease, mood and sleep, and connecting with resources.
Assessments
All participants' general data regarding age, gender, type of medication, and duration of disease were gathered from patients' self-reports and the information documented in their clinical records. Two neurologists assessed the HY stage, disease severity, and any motor disturbance at baseline (at patient recruitment, within one week before the initial session). Quality of life was assessed at baseline (on the day of the first training session, before the class) and after the experiment.
For the evaluation of quality of life, the PDQ-39 questionnaire was used. The PDQ-39 is a 39-item questionnaire based on the patient's report of health status. It evaluates how PD affects eight scales of daily activity: Mobility (MOB), Activities of daily living (ADL), Emotional well-being (EMO), Stigma (STI), Social support (SOC), Cognitions (COG), Communication (COM) and Bodily discomfort (BOD). Participants choose one of five ordered responses based on how often, due to their disease, they have faced the difficulty defined in each item. The score of each scale is calculated as a percentage, and the overall score, the Parkinson's disease summary index (PDSI), is the mean percentage score of the eight scales. The assessments were conducted in person by the principal investigator (N.Gh), who was blinded to the group in which each patient was enrolled.
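The scoring just described can be sketched as follows; this is an illustration assuming the standard published PDQ-39 item counts per domain and five-point (0-4) item responses, not the study's own code:

```python
# Item counts per PDQ-39 domain, per the published questionnaire (assumption:
# the study used the standard instrument). Each item is answered 0 = never
# ... 4 = always.
DOMAIN_ITEMS = {"MOB": 10, "ADL": 6, "EMO": 6, "STI": 4,
                "SOC": 3, "COG": 4, "COM": 3, "BOD": 3}

def domain_score(responses):
    """Domain score as a percentage of the maximum possible score."""
    return 100.0 * sum(responses) / (4 * len(responses))

def pdsi(domain_scores):
    """Parkinson's disease summary index: mean of the eight domain scores."""
    return sum(domain_scores.values()) / len(domain_scores)

# A patient answering "sometimes" (2) to every item scores 50% on each
# domain, and therefore a PDSI of 50.
scores = {d: domain_score([2] * n) for d, n in DOMAIN_ITEMS.items()}
```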
Statistical Analysis
The scores of each scale are described as mean ± SD. Between-group and within-group comparisons were made with independent and paired-sample t-tests, respectively. The chi-square test was used to compare categorical variables. To investigate the change in quality of life, each PDQ-39 scale score and the PDSI were compared before and after the experiment within the control and experiment groups, and the mean scores of each scale were compared between the groups using the independent t-test. All analyses were performed using SPSS software version 19.0 (IBM Corp., Armonk, N.Y., USA). Boxplot figures were drawn using MedCalc software, and figures of the change in questionnaire scores were produced with GraphPad Prism v6.0.7.
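The two comparisons in the analysis plan can be sketched as follows; this is an illustration using made-up numbers and hand-rolled t statistics (the study itself used SPSS):

```python
import math
import statistics as st

def paired_t(before, after):
    """Paired t statistic (within-group, before vs after)."""
    diffs = [b - a for b, a in zip(before, after)]
    se = st.stdev(diffs) / math.sqrt(len(diffs))
    return st.mean(diffs) / se

def independent_t(x, y):
    """Two-sample t statistic with pooled variance (between-group)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * st.variance(x) + (ny - 1) * st.variance(y)) / (nx + ny - 2)
    return (st.mean(x) - st.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Illustrative PDSI-like scores, not study data:
before = [33, 35, 34, 36, 32]      # baseline
after_exp = [30, 33, 32, 33, 31]   # experiment group improves
after_ctrl = [34, 36, 35, 33, 35]  # control group roughly unchanged
```

A large positive `paired_t(before, after_exp)` would indicate a within-group improvement, while `independent_t(after_exp, after_ctrl)` compares the groups post-intervention; in practice the t statistic would be referred to the t distribution for a p-value.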
Results
All 40 patients completed the training sessions over the 8 weeks. The post-intervention assessment was made after the last MBSR session, at the patients' first subsequent neurologic visit to the clinic.
The general characteristics of the patients in each experiment and control group are shown in Table 1. The baseline characteristic data did not differ significantly between the two study groups.
As shown in Table 2, at baseline the PDQ-39 scale scores did not differ significantly between the two study groups, except for the SOC score, which was significantly higher in control subjects than in the experiment group (35.80 ± 9.7 vs 29.11 ± 8.7, p = 0.02).
Quality of life assessment
The statistical analysis revealed lower mean post-intervention scores on all PDQ-39 scales in the experiment group compared to control subjects; however, the difference was non-significant for MOB, ADL, EMO, STI, COG, COM, and BOD, and significant only for SOC (34.13 ± 9.7 vs 26.19 ± 7.7 for the control and experiment groups, respectively; p = 0.007) (Table 2).
On the other hand, the within-group analysis yielded a significant improvement in the mean score of subjects in the experiment group: their mean PDSI score was 31.88 ± 6.5 after the intervention, compared to a baseline score of 33.93 ± 6.2 (p < 0.001). The mean scores of participants in the control group, however, did not differ significantly from baseline.
A comparison of the delta values between the experimental and control groups showed MOB, ADL, and EMO to be significantly different.
Classification of the patients by HY stage revealed a significant improvement in the PDSI score among experiment-group patients at the severe stage (III), although the individual PDQ-39 scale scores (except ADL) did not differ significantly after the mindfulness training. The analysis also showed that patients at the milder stage (I) had a significant improvement after the experiment; however, the same improvement was noted in the control group (Table 3).
Table 1. Patients' demographic data in each study group
Table 2. PDQ-39 scale scores and PDSI before and after the mindfulness sessions in the control and experiment groups of patients
| Item | Study Group | Before Experiment: Mean Score | P* value | After Experiment: Mean Score | P* value | P ϯ value | 95% CI € | Delta (mean difference in score) | P* value (Delta) | 95% CI € (Delta) |
| MOB | Control | 47.87 ± 8.7 | 0.84 | 48.50 ± 8.4 | 0.62 | 0.26 | -1.7 – 0.5 | 0.62 ± 2.4 | 0.02 | 0.24 – 3.28 |
| MOB | Experiment | 48.37 ± 6.8 | | 47.23 ± 7.5 | | 0.04 | 0.04 – 2.2 | -1.14 ± 2.3 | | |
| ADL | Control | 34.72 ± 12.3 | 0.90 | 34.95 ± 13.8 | 0.47 | 0.75 | -1.7 – 1.2 | 0.22 ± 3.2 | 0.004 | 1.11 – 5.60 |
| ADL | Experiment | 35.17 ± 10.8 | | 32.04 ± 11.1 | | 0.002 | 1.3 – 4.9 | -3.13 ± 3.7 | | |
| EMO | Control | 37.41 ± 6.3 | 0.56 | 37.21 ± 7.0 | 0.43 | 0.83 | -1.8 – 2.2 | -0.23 ± 4.3 | 0.01 | 0.78 – 6.65 |
| EMO | Experiment | 38.92 ± 9.4 | | 34.96 ± 10.4 | | 0.001 | 1.7 – 6.1 | -3.95 ± 4.7 | | |
| STI | Control | 25.94 ± 9.9 | 0.26 | 25.29 ± 10.2 | 0.26 | 0.47 | -1.2 – 2.5 | -0.65 ± 4.0 | 0.97 | -2.24 – 2.18 |
| STI | Experiment | 22.81 ± 7.3 | | 22.18 ± 6.8 | | 0.33 | -0.6 – 1.9 | -0.62 ± 2.7 | | |
| SOC | Control | 35.80 ± 9.7 | 0.02 | 34.13 ± 9.7 | 0.007 | 0.10 | -0.3 – 3.7 | -1.67 ± 4.3 | 0.40 | -1.71 – 4.20 |
| SOC | Experiment | 29.11 ± 8.7 | | 26.19 ± 7.7 | | 0.01 | 0.6 – 5.2 | -2.91 ± 4.8 | | |
| COG | Control | 28.43 ± 7.4 | 0.69 | 28.75 ± 7.9 | 0.15 | 0.71 | -2.0 – 1.4 | 0.31 ± 3.7 | 0.09 | -0.47 – 5.47 |
| COG | Experiment | 27.60 ± 5.9 | | 25.41 ± 6.3 | | 0.08 | -0.3 – 4.7 | -2.18 ± 5.3 | | |
| COM | Control | 29.14 ± 9.9 | 0.47 | 29.97 ± 9.5 | 0.88 | 0.32 | -2.5 – 0.9 | 0.83 ± 3.7 | 0.08 | -0.36 – 5.35 |
| COM | Experiment | 31.21 ± 8.0 | | 29.54 ± 8.7 | | 0.16 | -0.7 – 4.0 | -1.66 ± 5.1 | | |
| BOD | Control | 44.71 ± 11.2 | 0.08 | 43.05 ± 11.2 | 0.12 | 0.04 | 0.6 – 3.2 | -1.66 ± 3.4 | 0.47 | -3.11 – 1.46 |
| BOD | Experiment | 38.31 ± 11.9 | | 37.47 ± 10.9 | | 0.32 | -0.9 – 2.5 | -0.84 ± 3.7 | | |
| PDSI | Control | 35.50 ± 7.1 | 0.46 | 35.23 ± 7.5 | 0.14 | 0.29 | -0.2 – 0.8 | -0.27 ± 1.1 | < 0.001 | 0.84 – 2.72 |
| PDSI | Experiment | 33.93 ± 6.2 | | 31.88 ± 6.5 | | <0.001 | 1.2 – 2.8 | -2.05 ± 1.7 | | |
Note: Abbreviations: Confidence Interval (CI), Mobility (MOB), Activities of daily living (ADL), Emotional well-being (EMO), Stigma (STI), Social support (SOC), Cognitions (COG), Communication (COM), Bodily discomfort (BOD), Parkinson's disease summary index (PDSI). ϯ: P value of the differences before and after the experiment in each group; *: P value of the differences between the mean scores of the experiment and control groups; €: 95% CI of the differences between the mean scores of the experiment and control groups.
Table 3. The quality of life in patients with different stages of PD before and after the mindfulness sessions in each experiment and control group
| Stage (HY) | PDQ-39 | Control: Before Experiment | Control: After Experiment | Control: P value | Control: 95% CI of difference | Experiment: Before Experiment | Experiment: After Experiment | Experiment: P value | Experiment: 95% CI of difference |
| I | PDSI | 25.93 ± 2.1 | 24.62 ± 2.0 | 0.03 | 0.31 – 2.30 | 26.60 ± 1.8 | 23.34 ± 0.9 | 0.009 | 1.55 – 4.95 |
| I | MOB | 37.50 ± 0.0 | 37.50 ± 2.5 | 1.00 | -6.21 – 6.21 | 40.62 ± 1.2 | 38.12 ± 1.2 | | |
| I | ADL | 23.61 ± 2.4 | 18.01 ± 2.4 | 0.06 | -0.42 – 11.62 | 23.93 ± 7.1 | 19.72 ± 6.2 | 0.09 | -1.24 – 9.66 |
| I | EMO | 29.13 ± 4.1 | 29.13 ± 4.1 | | | 29.15 ± 7.6 | 22.87 ± 7.9 | 0.10 | -2.27 – 14.84 |
| I | STI | 18.75 ± 10.8 | 14.58 ± 3.6 | 0.42 | -13.76 – 22.09 | 17.18 ± 5.9 | 17.19 ± 5.9 | 0.39 | -0.03 – 0.01 |
| I | SOC | 22.20 ± 4.8 | 22.16 ± 9.6 | 0.99 | -20.70 – 20.77 | 22.80 ± 8.0 | 20.70 ± 8.4 | 0.39 | -4.58 – 8.78 |
| I | COG | 20.83 ± 7.2 | 22.91 ± 9.5 | 0.42 | -11.04 – 6.88 | 27.07 ± 4.1 | 20.31 ± 5.9 | 0.08 | -1.51 – 15.04 |
| I | COM | 24.96 ± 8.3 | 24.96 ± 8.3 | | | 27.07 ± 4.1 | 22.87 ± 7.9 | 0.18 | -3.51 – 11.91 |
| I | BOD | 30.50 ± 12.7 | 27.73 ± 9.6 | 0.42 | -9.13 – 14.67 | 24.97 ± 6.8 | 24.97 ± 6.8 | | |
| II | PDSI | 32.02 ± 4.0 | 31.48 ± 3.8 | 0.21 | -0.38 – 1.45 | 31.36 ± 3.2 | 30.03 ± 4.0 | 0.12 | -0.48 – 3.14 |
| II | MOB | 43.12 ± 4.5 | 44.37 ± 4.1 | 0.31 | -3.98 – 1.48 | 45.62 ± 4.9 | 44.65 ± 4.8 | 0.41 | -1.71 – 3.66 |
| II | ADL | 27.59 ± 7.3 | 28.11 ± 6.9 | 0.35 | -1.75 – 0.71 | 34.34 ± 6.9 | 31.22 ± 8.6 | 0.11 | -0.91 – 7.16 |
| II | EMO | 36.93 ± 6.0 | 36.88 ± 7.5 | 0.98 | -4.91 – 5.01 | 37.46 ± 6.3 | 32.77 ± 6.0 | 0.06 | -0.03 – 9.40 |
| II | STI | 23.43 ± 9.8 | 22.61 ± 10.0 | 0.32 | -1.01 – 2.65 | 21.87 ± 7.4 | 21.09 ± 7.4 | 0.35 | -1.06 – 2.62 |
| II | SOC | 33.30 ± 7.6 | 30.18 ± 6.1 | 0.08 | -0.47 – 6.70 | 27.05 ± 8.6 | 24.96 ± 7.7 | 0.17 | -1.14 – 5.31 |
| II | COG | 25.78 ± 6.1 | 25.78 ± 7.0 | 1.00 | -2.79 – 2.79 | 24.21 ± 5.3 | 25.25 ± 6.2 | 0.35 | -3.49 – 1.41 |
| II | COM | 23.93 ± 8.2 | 24.97 ± 6.3 | 0.34 | -3.49 – 1.41 | 27.06 ± 7.3 | 27.05 ± 8.6 | 0.99 | -5.24 – 5.27 |
| II | BOD | 42.05 ± 8.9 | 38.92 ± 6.5 | 0.08 | -0.48 – 6.73 | 33.30 ± 7.6 | 33.30 ± 6.2 | 1.000 | -3.70 – 3.70 |
| III | PDSI | 41.79 ± 3.6 | 42.09 ± 3.6 | 0.43 | -1.14 – 0.53 | 40.18 ± 3.1 | 37.99 ± 3.3 | 0.001 | 1.19 – 3.18 |
| III | MOB | 55.55 ± 5.6 | 55.83 ± 5.5 | 0.59 | -1.43 – 0.87 | 55.00 ± 2.9 | 54.37 ± 4.1 | 0.35 | -0.85 – 2.10 |
| III | ADL | 44.77 ± 10.0 | 46.67 ± 10.2 | 0.03 | -3.63 – -0.16 | 41.64 ± 11.3 | 39.02 ± 10.1 | 0.04 | 0.02 – 5.20 |
| III | EMO | 40.60 ± 4.5 | 40.18 ± 5.5 | 0.74 | -2.42 – 3.24 | 45.26 ± 8.7 | 43.20 ± 8.0 | 0.10 | -0.55 – 4.67 |
| III | STI | 30.57 ± 8.5 | 31.23 ± 8.2 | 0.61 | -3.59 – 2.26 | 26.56 ± 6.4 | 25.78 ± 5.2 | 0.59 | -2.56 – 4.12 |
| III | SOC | 42.55 ± 6.5 | 41.62 ± 5.9 | 0.34 | -1.21 – 3.08 | 34.32 ± 6.9 | 30.17 ± 6.1 | 0.10 | -1.09 – 9.39 |
| III | COG | 33.33 ± 5.4 | 33.33 ± 6.2 | 0.99 | -3.39 – 3.39 | 31.24 ± 5.7 | 28.12 ± 5.7 | 0.17 | -1.70 – 7.94 |
| III | COM | 35.15 ± 9.0 | 36.07 ± 9.2 | 0.59 | -4.75 – 2.91 | 37.42 ± 6.2 | 35.37 ± 5.8 | 0.17 | -1.12 – 5.22 |
| III | BOD | 51.82 ± 6.9 | 51.82 ± 6.9 | | | 49.98 ± 4.4 | 47.88 ± 5.9 | 0.17 | -1.15 – 5.35 |
Note: Abbreviations: Confidence Interval (CI), Mobility (MOB), Activities of daily living (ADL), Emotional well-being (EMO), Stigma (STI), Social support (SOC), Cognitions (COG), Communication (COM), Bodily discomfort (BOD), Parkinson's disease summary index (PDSI)
Discussion
In this eight-week clinical trial in people with Parkinson's disease, a significant improvement in quality of life was observed in the patients who received mindfulness training compared with the control group. The overall PDSI decreased modestly after the experiment, by 2.05 points in the experiment group versus 0.27 points in the control group. Among the PDQ-39 scales, MOB, ADL, and EMO improved significantly in the experiment group compared to the control group. These results suggest that mindfulness training affects not only the motor symptoms of the disease but also patients' non-motor emotional wellbeing. The greatest effect of mindfulness training was on patients' daily activities, which was evident even in severe cases of the disease.
To date, only a few trials have examined the effect of mindfulness training in PD10-13. These measured the effect of mindfulness on different motor and non-motor symptoms, but the outcomes were inconsistent regarding follow-up duration and which symptoms improved.
Similar to our findings, Geong son et al. found a significant difference in the quality of life and ADL of 33 experiment patients who received mindfulness training compared to 30 control subjects13. Some other studies found mindfulness to be an effective modality for a few subscales of the PDQ-3911, 12.
In a clinical trial by Cash et al., 39 patients were enrolled in 8-week mindfulness sessions, and their EMO and COG improved after the experiment11. In a similar study by Advocat et al., the effect of mindfulness training on quality of life in 35 PD patients was compared with 37 control subjects at seven weeks and six months. In this two-step analysis, ADL was the only factor that improved in the experiment group14.
In contrast, Dissanayaka et al. examined the effect of mindfulness in fourteen PD patients in an 8-week training program and compared the results with baseline at post-intervention assessment and 6-month follow-up15. Their results did not show a significant improvement in any subscale of quality of life at either evaluation. Similarly, non-significant results were reported by Rodgers et al. and Pickut et al.12.
Birtwell et al. also assessed the longer-term efficacy (16 weeks) of mindfulness training on STI and EMO in thirteen individuals with PD and found no significant change in these two subscales of the PDQ-3916.
In the present study, EMO and ADL were the scales most responsive to the short-term effect of mindfulness training. The results of Rodgers et al.'s study were consistent with our primary outcome: their between-group analysis revealed a significant difference in the depression subscale of the DASS-21 after mindfulness intervention in PD patients17. Cash et al. also found that depression improved after mindfulness interventions in PD patients11.
Contrary to our findings, the difference between PD experiment and control subjects was not meaningful in Pickut et al.'s study12. COG was unaffected by mindfulness training in our study, a finding supported by the clinical trial of Cash et al., who found no significant change in PD patients' cognitive function at the immediate post-intervention assessment11.
On the other hand, Dissanayaka et al. found post-interventional improvement in PD patients' cognition by obtaining PD Cognitive Rating Scale (PDCRS), extended for six months 15.
Similarly, Geong son et al. showed a significant difference in the mean score of the Korean Montreal Cognitive Assessment between experiment subjects who received mindfulness training and controls13.
As described above, results are inconsistent regarding the role of mindfulness-based stress reduction sessions in the quality of life of PD patients, and our results agree with some studies and not others. The main factors that might explain these differences are sample size, the inclusion of a control group, subjective mood changes in the patients, disease severity, and how consistently the learned lessons were practiced at home.
Mindfulness-based interventions aim to improve the current wellbeing of the individuals by self-awareness of present emotions and body movements. It might also help individuals to manage daily stress, have a better judgment of their own, and adjust to daily life. There is also evidence suggesting that mindfulness training leads to neuroplasticity in the brain areas which are involved in emotions 18.
Studies have also suggested that early therapeutic interventions are more practical in terms of diminishing the probable future severity of the disease13, 18. In our study, patients in the early stages had an improvement in their overall quality of life, but this was also noted in controls at the same stage. However, a meaningful change in the quality of life of patients at the severe stage of PD was recorded after the training sessions. We suggest long-term follow-up of patients in each group, and at different stages of the disease, to determine whether mindfulness training helps slow the progression of the disease.
This was a pilot study in which MBSR showed a substantial impact in improving the quality of life of PD patients. However, its limitations must be considered. First, the sample size was not large compared to the prevalence of the disease, and recruitment was constrained by other important factors such as disease severity and level of education: patients needed a minimum level of education to attend the sessions and apply them in their daily routine. Second, the psychological nature of the intervention limited the blinding of patients to the intervention.
We did not perform an intention-to-treat analysis or crossover randomization, as all randomly selected patients completed the trial and none dropped out.
Conclusion
In our study, mindfulness training improved the overall quality of life of PD patients. However, long-term follow-up in a larger population is required to evaluate the impact of mindfulness-based stress reduction on each scale.
In December 2019, the city of Wuhan in China's Hubei province was struck by an outbreak of viral pneumonia due to Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19.1 On the 30th of January 2020, the WHO declared a global health emergency due to the rapid spread of COVID-192, and the outbreak has since developed into a pandemic, contributing to over 247,000 global deaths as of 4th May 2020. COVID-19 commonly presents with respiratory symptoms, but gastrointestinal symptoms have also been described.3, 4 Here we describe a rare case of COVID-19 presenting with acute psychosis, with an initially false-negative RT-PCR nasopharyngeal swab upon hospital admission.
Case Review
A 40-year-old, previously fit and healthy male presented to accident and emergency via ambulance with a six-day history of dry cough, breathlessness and nasal congestion. Prior to the respiratory tract symptoms, he had had a progressively worsening fever, anosmia and intermittent diarrhoea for four days. His observations included a temperature of 39⁰C, oxygen saturations of 95% on room air and a respiratory rate of 30. His initial laboratory tests are shown in Table 1 and imaging in Figure 1.
Table 1: Table showing the relevant laboratory results of the patient upon admission
| Investigations | Value | Reference Range |
| White cell count (x10⁹/L) | 12.0 | 3.7 - 11.1 |
| Neutrophil count (x10⁹/L) | 10.3 | 1.7 - 7.5 |
| Lymphocyte count (x10⁹/L) | 1.1 | 0.9 - 3.2 |
| C-reactive protein (mg/L) | 190 | 0 - 6 |
| Cerebrospinal Fluid Protein (g/L) | 2.4 | 0.15 - 0.45 |
| Cerebrospinal Fluid Glucose (mmol/L) | 3.7 | 2.5 - 4.5 |
| Cerebrospinal Fluid White Cells (/µL) | 0 | 0 - 5 |
| Influenza A, B and RSV nasopharyngeal swab | Negative | |
| COVID-19 nasopharyngeal swab | Negative | |
| Pneumococcal urine antigen | Negative | |
| Legionella urine antigen | Negative | |
Figure 1: (Left) An AP X-ray showing bilateral patchy consolidation. (Right) A cross-sectional CT thorax image showing multifocal, peripheral, bilateral, ground-glass opacities with bilateral consolidation.
Over the course of the next two days, he developed acute confusion. A CT scan of his head was performed in the first instance to identify any intracranial cause of confusion, but the scan was unremarkable. His behaviour included severe anxiety, aggression, wandering and agitation. His wife confirmed that he had never behaved like this before and had no history of psychiatric illness. He felt as if he was living in a dream, exhibiting derealisation and depersonalisation. Worryingly, he also experienced suicidal ideation, which he hoped would bring him back to reality; one of his attempts involved trying to jump out of a hospital window. As verbal and non-verbal de-escalation proved ineffective, 5mg of haloperidol was given, but this failed to settle the patient. This was the maximum daily dose of haloperidol in accordance with the British Geriatric Society guidelines for the management of COVID-19-related confusion5. Subsequently, the patient was managed with intubation and ventilation for 24 hours, despite the absence of respiratory failure. After extubation, he recovered to baseline over 2 days, during which a second RT-PCR nasopharyngeal swab returned positive for COVID-19. After recovery, he had insight into the events that took place prior to intubation. Retrospectively, he reported auditory hallucinations of hospital staff talking about him all day and night, and delusions that the hospital staff were against him and that he was in a dream which could only be escaped by suicide.
Discussion
There are two clinically relevant learning points to convey from this case relating to, firstly, the difficulties encountered in diagnosis and, secondly, the management of acute psychosis in COVID-19 with intubation. The diagnosis of COVID-19 was confounded by the first nasopharyngeal RT-PCR swab being negative. Since his symptoms were typical of COVID-19 and with strongly suggestive radiographic findings, it was deemed appropriate to send a repeat COVID-19 nasopharyngeal RT-PCR swab (which indeed came back positive). This patient thus had COVID-19 pneumonia and the official diagnosis was delayed due to a false negative nasopharyngeal RT-PCR swab upon hospital admission.
Various studies have identified a high false-negative rate for the COVID-19 swab.6, 7 Ai et al. described 287 patients (of n = 1014) who had radiographic findings suggestive of COVID-19 with negative nasopharyngeal swabs.8 It is important for clinicians to be aware of the poor sensitivity of the RT-PCR COVID-19 swab so that results can be interpreted appropriately when making clinical decisions. Various studies have estimated the sensitivity of the RT-PCR COVID-19 swab to be approximately 70-75%.9 This is hypothesised to be even lower if clinical staff do not use the correct technique when taking the nasopharyngeal swab. Consequently, there is a growing clinical need for more sensitive laboratory tests for COVID-19, such as antibody tests.10
Chest radiographs may be normal in early or mild disease, but can assist diagnosis. Of patients with COVID-19 requiring hospitalisation, only 69% had an abnormal chest radiograph at the initial time of admission. Findings are most extensive about 10-12 days after symptom onset. The most frequent findings are bi-basal, peripheral, consolidative and ground-glass airspace opacities. In contrast to parenchymal abnormalities, pleural effusion is rare.11, 12 Indeed, this patient’s chest radiograph shown in Figure 1 (left) was performed after 10 days of symptoms, showing features of COVID-19.
The primary findings on CT, reported across multiple studies, include ground-glass opacification, a ‘crazy-paving’ pattern, air-space consolidation, bronchovascular thickening, adjacent pleural thickening and traction bronchiectasis. The ground-glass or consolidative opacities are usually bi-basal, peripheral and ill-defined.13-18 Four stages on CT have been described, as shown in Table 2 below.19, 20 This patient’s CT thorax, shown in Figure 1 (right), was performed after 12 days of symptoms and displays features in keeping with the ‘peak’ stage.
Table 2: Table showing the radiographic staging of COVID-19
| Stage | Timescale | Radiographic Findings |
| Early/initial stage | 0-4 days | Normal CT or GGO only |
| Progressive stage | 5-8 days | Increased GGO and crazy paving appearance |
| Peak stage | 9-13 days | Consolidation |
| Absorption stage | >14 days | With an improvement in the disease course, "fibrous stripes" appear and the abnormalities resolve at one month and beyond |
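The staging above can be expressed as a simple lookup; a minimal illustrative helper (the function name `ct_stage` and the exact boundary handling are assumptions, not from the cited studies):

```python
def ct_stage(days_since_onset):
    """Map days since symptom onset to the CT stage described in Table 2."""
    if days_since_onset <= 4:
        return "Early/initial"
    if days_since_onset <= 8:
        return "Progressive"
    if days_since_onset <= 13:
        return "Peak"
    return "Absorption"

# The patient described above was scanned ~12 days after symptom onset,
# which falls in the "Peak" stage.
```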
It is important to mention that in a retrospective COVID-19 case-control study of 104 patients, 54% of asymptomatic patients had CT radiographic features in keeping with COVID-19.21 CT changes are estimated to be as high as 97-98% sensitive and can thus be useful when there is a strong suspicion of COVID-19 despite negative nasopharyngeal swabs.8, 9, 22 This can prevent clinicians from having a false sense of security when managing potential COVID-19 patients who might otherwise be nursed in open bays, consequently exposing unprotected clinical staff and patients; a common problem that we unfortunately encounter in our clinical practice.
The second interesting learning point in this case is with regards to the clinical reasoning behind why this patient was intubated. Patients with severe COVID-19 symptoms such as hypoxaemia, respiratory distress, shock or an SpO2 of <90% are usually commenced on supplemental oxygen therapy of 5L/min, which should then be titrated to maintain an SpO2 of >94%. Continuous positive airway pressure or non-invasive ventilation can then be trialled, and if ineffective, the patient can be intubated for ventilation.23 This patient’s SpO2 prior to intubation was 94%. Interestingly in this case, the clinical reasoning behind intubation was not respiratory failure, but instead acute psychosis secondary to COVID-19 which had failed to respond to conservative de-escalation measures, as well as haloperidol.
The intubation of this patient aimed to reduce respiratory effort and cross-infection risk, as well as to prevent further suicide attempts. As mentioned in the history above, this patient was non-compliant with isolation regulations as he was severely confused and wandering around clinical areas, thus posing a cross-infection risk to staff and other patients.24 Self-isolation precautions have been heavily implemented in the UK because COVID-19 is a highly transmissible infectious disease.25 The basic reproduction number (R0) of COVID-19 has been estimated at 1.55-5.5,26,27 making it more infectious than seasonal influenza, at 1.28.28 This highlights the importance of strictly following isolation protocols, and thus the rationale behind intubation.
Conclusion
There are two primary learning points to be appreciated from this case report. Firstly, the false-negative rate with RT-PCR COVID-19 nasopharyngeal swabs is high, and this identifies a crucial diagnostic role for CT thorax in ‘swab-negative’, symptomatic patients with suspected COVID-19. Secondly, acute psychosis is an emerging indication for intubation to consider when managing patients with highly transmissible respiratory infections, such as COVID-19. The mechanisms behind COVID-19-induced acute psychosis remain to be elucidated but, in this case, COVID-19-induced encephalitis was amongst the differential diagnoses.
Nocardia sp. are aerobic, Gram-positive microorganisms found mainly in soil and stagnant water. Nocardiosis commonly occurs in immunocompromised patients, is often multi-systemic and is easily mistaken for tuberculosis when involving the lungs.1,2 We report a complex case of disseminated nocardiosis in a patient already suffering from Myasthenia Gravis (MG) and azathioprine-induced myelosuppression, in which the selection of antimicrobials and management planning became complex.
Case Report
A 68-year-old lady, known to have generalised MG for 12 years, presented to our centre following a one-week history of diarrhoea and lethargy. She described loose stools up to 8 times per day, with chills and rigors, but denied rectal bleeding or melaena. The patient was on regular oral prednisolone (50 mg a day). She had ceased azathioprine 4 months prior due to severe anaemia attributed to myelosuppression, following a bone marrow aspiration revealing markedly reduced erythropoiesis, a normal oesophago-gastro-duodenoscopy and colonoscopy, and normal thiopurine methyltransferase levels prior to azathioprine initiation. The patient had had several admissions in those 4 months due to her anaemia, requiring packed cell transfusions, and had sustained an interhemispheric subdural bleed two weeks prior following a mechanical fall. At the time of presentation, haemoglobin and platelet levels had returned to within the normal range. Clinical examination was unremarkable and her vital signs on arrival included an oxygen saturation of 98% on room air, with a normal chest radiograph (Figure 1a). The patient was treated for infective gastroenteritis and started on metronidazole.
Figure 1: Chest radiography on (a) initial presentation and (b) 48-hours into admission, showing newly developed diffuse bilateral consolidation.
Unfortunately, 48 hours into her inpatient stay, the patient developed acute dyspnoea; her oxygen saturation dropped to 77% on room air and examination revealed diffuse, coarse crepitations bilaterally. A repeat chest radiograph confirmed diffuse bilateral consolidation (Figure 1b). She was subsequently treated for a hospital-acquired pneumonia, and her antibiotics were switched to intravenous ceftriaxone. Unfortunately, the patient showed little improvement on the ward, which prompted further investigation. Her blood cultures revealed the presence of Nocardia farcinica, which was resistant to ceftriaxone. In view of her immunosuppressed state, computed tomography (CT) imaging of the brain, thorax, abdomen and pelvis (Figure 2) was performed. Imaging showed extensive bilateral lung consolidation and reticulonodular opacities, predominantly in the upper lobes. There was also evidence of sigmoid colon diverticulitis with a rim-enhancing collection adjacent to the posterior aspect of the proximal-to-mid sigmoid colon (2.9 x 2.6 x 2.6 cm).
Figure 2: Computed tomography (CT) imaging of (a) the brain, revealing inter-hemispheric hyperdensity consistent with a subdural haematoma, (b) the thorax, revealing extensive bilateral lung consolidation and reticulonodular opacities, predominantly in upper lobes, and (c) abdomen, showing sigmoid colon diverticulitis with a rim-enhancing collection adjacent to the posterior aspect of the proximal-to-mid sigmoid colon (2.9 x 2.6 x 2.6 cm).
Following consultation with our Infectious Disease team, the patient was treated for likely disseminated nocardiosis with intravenous trimethoprim/sulfamethoxazole as the recommended first-line therapy. Unfortunately, after 72 hours of trimethoprim/sulfamethoxazole administration, the patient developed severe pancytopenia (haemoglobin 63 g/L, white cell count 2.0 x 10⁹/L and platelets 33 x 10⁹/L) requiring packed red cell and platelet transfusions, as well as immediate cessation of trimethoprim/sulfamethoxazole. Further decisions on antimicrobial therapy proved difficult, as other effective alternatives against nocardiosis, including amikacin and fluoroquinolones, may potentially exacerbate MG symptoms. Furthermore, linezolid, another possible alternative, could potentially exacerbate her thrombocytopenia and worsen the pre-existing subdural haematoma. Thus, a decision was made to commence co-amoxiclav and meropenem concurrently, despite neither being first-line therapy. The patient subsequently had an uneventful CT-guided drainage of the sigmoid abscess, with the assistance of our General Surgical colleagues, and was kept on prolonged intravenous antibiotic therapy for 3 months. Following improvement in her clinical state, she was discharged from hospital on oral co-amoxiclav and linezolid, with regular blood tests as an outpatient to monitor for myelosuppression, which has not occurred to date. Guided by clinical and radiological improvement, we aim for 12 months of therapy.
Discussion
Various cases of Nocardia sp. bacteraemia have been reported in which immune system dysfunction was primarily due to chronic glucocorticoid therapy.1,3,4 A retrospective review of 40 patients from a Chinese tertiary hospital revealed that half of the patients with nocardiosis were on corticosteroids prior to infection onset.5 An even larger case series reported that up to 94% of pulmonary nocardial infections were linked to the use of immunosuppressants.6
Management of nocardiosis can be challenging. Often, imipenem, trimethoprim/sulfamethoxazole, amikacin or a combination of these antibiotics is recommended, with treatment duration guided by clinical and radiological improvement and often extending up to a year.7 Bactericidal agents including carbapenems and amikacin are often recommended alongside trimethoprim/sulfamethoxazole in cases of disseminated nocardiosis to ensure greater treatment success.7,8 The use of amikacin, however, should be approached with caution in cases of MG, and reports have described management using linezolid as an alternative.1,9
In our patient, the risk of worsening pancytopenia, as illustrated following trimethoprim/sulfamethoxazole administration, led to hesitancy in using linezolid. Furthermore, the consequences of thrombocytopenia would have been devastating for the patient given her residual subdural haematoma. Thus, the choice of antibiotics was limited to second-line agents, namely co-amoxiclav and meropenem, on top of surgical drainage of existing abscesses, which has also been shown to be beneficial.1,5 However, it should be noted that the risk of pancytopenia remains multifactorial in our patient, ranging from pre-existing myelosuppression to ongoing sepsis and the initiation of culprit medications.
The mortality of nocardiosis varies depending on the organs involved, with rates under 40% in cases of pulmonary spread and up to 100% when disseminated to the central nervous system (CNS).6 Unfortunately, Nocardia farcinica, the species reported in our case, has been shown to be more resistant to antimicrobials and carries a greater risk of dissemination to the CNS, a point given careful consideration as our centre continues to manage the patient.10
Conclusion
Nocardiosis, although not uncommon in immunocompromised individuals, poses a unique challenge in the MG population due to limitations in suitable antibiotics. Our case highlights the importance of a multi-disciplinary approach, involving specialties including infectious diseases, neurology, general surgery and microbiology, to ensure the successful management of such complex cases.
Convex-probe EBUS-TBNA has been a major development in respiratory medicine. In the last decade we have seen numerous articles supporting the high diagnostic accuracy of EBUS-TBNA in the diagnosis of lung cancer, staging of lung cancer, and diagnosis of extra-thoracic malignancies & benign conditions (e.g., TB & sarcoidosis)1. Patients included in this study reflect the real-life referrals that we see as respiratory physicians in our daily practice, and show that the trend of performing EBUS-TBNA for non-cancer patients is rising. Lung cancer is a common cause of cancer death worldwide2. Various guidelines (including NICE) have found this procedure safe & recommend it for the staging of lung cancer. In the last 10 years, many district general hospitals in the UK have started this service & it is mainly delivered by respiratory physicians.
This has provided a specialist service for patients in their local area, which has reduced travelling and waiting times.
Setting & Methods
In the district general hospital under discussion, the EBUS service was set up in 2018 under the supervision of a tertiary care centre. We carried out 82 procedures during the first year of this service, and all of these cases were reviewed for this article. Data were recorded on an Excel spreadsheet (data included: number of cases, age, gender, lymph node stations sampled, complications, pathology & microbiology results of EBUS-TBNA). A minimum of 4 passes was performed at each lymph node station. Where EBUS was done for diagnostic purposes, the stations to be sampled were at the discretion of the operator. Samples obtained via EBUS-TBNA were flushed into CytoLyt (a methanol-water solution). EBUS-TBNAs were carried out in the absence of rapid on-site evaluation (ROSE). Where a cancer was suspected but EBUS-TBNA showed normal findings, samples were obtained via other modalities (e.g., CT biopsy) & FDG PET was carried out as well (if not done already). In cases of isolated mediastinal & hilar lymphadenopathy (IMHL) where EBUS-TBNA did not reveal any pathology, interval surveillance CTs were carried out for monitoring purposes. Where lymphadenopathy did not resolve, surveillance scans were carried out for a year. The outcomes of these surveillance CTs & PET CTs were also reviewed for this study. A diagnosis of reactive lymphadenopathy was made if the EBUS-TBNA sample did not reveal any pathology, repeat CT did not show any change (or showed reduction/resolution of lymphadenopathy) & the clinician did not consider the patient to have another diagnosis. EBUS-TBNA was labelled as false negative if the pathology result was negative but the node was positive on PET (in suspected cancer patients).
Results
Of the 82 patients who underwent EBUS-TBNA, 55 (about 67%) were male & 27 (about 33%) were female (Figure 1).
The age range of patients at the time of the procedure was 28 to 88 years. The majority of patients were between 52 & 88 years of age (80% of the cases) (Figure 2).
The 82 EBUS-TBNA procedures were carried out for the following main reasons (Figure 3):
A. 42 procedures for cancer reasons (i.e. 51% of the total procedures)
   a. Diagnosis of lung cancer (38 procedures)
   b. Diagnosis of suspected extra-thoracic cancer (3 cases)
   c. Staging of lung cancer (1 case)
B. 40 procedures for IMHL (i.e. 49%)
The final diagnoses in the 38 procedures carried out for “diagnosis of lung cancer” were as follows:
1. 25 patients were diagnosed with lung cancer (12 squamous cell cancers, 7 adenocarcinomas, 4 small cell cancers, 1 undifferentiated lung cancer & 1 neuroendocrine tumour)
2. The final diagnosis in 9 cases was reactive lymphadenopathy (repeat CT showed resolution of lymph nodes in 3 cases, reduction in size in 1 case & stable nodes in 5 cases)
3. Extra-thoracic malignancies were diagnosed in 2 cases (1 metastatic prostate cancer & 1 metastatic disease from a primary parotid gland tumour)
4. We had false-negative results in 2 cases (1 patient was diagnosed with small cell lung cancer on CT biopsy & 1 with adenocarcinoma on ultrasound biopsy)
In 11 cases where the clinician's initial suspicion was a possible lung cancer, the final diagnoses were reactive lymphadenopathy or extra-thoracic malignancy.
Some of these patients also had lung nodules (along with mediastinal & hilar lymphadenopathy); these nodules either resolved or remained stable. In the case of metastatic prostate cancer, a prior MRI had shown prostate-confined disease & the clinician suspected this size-significant lymphadenopathy to be due to a lung primary. In the case of the metastatic parotid tumour, the initial diagnosis of parotid cancer had been made a very long time ago & metastatic disease was not expected.
Final diagnoses in the 3 patients who had EBUS-TBNA for “extra-thoracic malignancies” were as follows:
1. Prostate cancer (pelvic MRI showed locally advanced disease)
2. Colon cancer (known colon cancer)
3. Ovarian cancer (the patient had an ovarian mass & abdominal/pelvic lymphadenopathy)
As most surgical patients go directly to tertiary care centres (from this hospital), we did not have many patients for staging purposes during the 1st year of the service. We had only 1 patient for “staging EBUS-TBNA” during this time. In this case stations 4L, 7 & 12L were sampled. Only station 12L was PET positive & also positive on the EBUS-TBNA sample; stations 4L & 7 were PET and EBUS-TBNA negative. There were no size-significant nodes seen on staging CT in any other area, only the 12L node was PET-avid, & we were not able to identify any size-significant lymphadenopathy at any other station via EBUS either. Sensitivity in this staging EBUS was 100%.
In these 42 diagnostic & staging procedures (carried out for cancers or suspected cancers), the summary of pathological diagnoses from lymph node aspirates is as follows:
1. Squamous cell carcinoma of lung: 13 (approximately 31%)
2. Adenocarcinoma of lung origin: 7 (approximately 17%)
3. Small cell lung cancer: 4 (approximately 9.5%)
4. Neuroendocrine tumour of lung origin: 1 (approximately 2.3%)
5. Undifferentiated lung cancer: 1 (approximately 2.3%)
6. Metastatic prostate cancer: 2 (approximately 4.75%)
7. Metastatic parotid gland cancer: 1 (approximately 2.3%)
8. Metastatic ovarian cancer: 1 (approximately 2.3%)
9. Metastatic colon cancer: 1 (approximately 2.3%)
10. False negative: 2 (approximately 4.75%)
11. Reactive lymphadenopathy: 9 (approximately 21.5%)
Out of the 40 procedures for IMHL, we were not able to get an adequate sample in 1 case and this patient underwent repeat EBUS-TBNA. The repeat sample showed granulomas; findings were consistent with the clinical diagnosis of sarcoidosis. Final diagnoses in these 40 cases are as follows:
1. Metastatic adenocarcinoma of pancreaticobiliary origin = 1 (2.5%)
2. Bronchogenic cyst = 1 (2.5%)
3. Insufficient sample = 1 (2.5%)
4. Tuberculosis = 3 (7.5%)
5. Granulomas = 16 (40%)
6. Reactive lymphadenopathy = 18 (45%)
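As a quick arithmetic check, the IMHL counts above reproduce the quoted percentages when divided by the 40 procedures. A minimal Python sketch (counts taken directly from the list above):

```python
# Final diagnoses in the 40 IMHL procedures (counts from the list above)
counts = {
    "Metastatic adenocarcinoma (pancreaticobiliary)": 1,
    "Bronchogenic cyst": 1,
    "Insufficient sample": 1,
    "Tuberculosis": 3,
    "Granulomas": 16,
    "Reactive lymphadenopathy": 18,
}
total = sum(counts.values())  # should be the 40 IMHL procedures
percentages = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total, percentages["Granulomas"], percentages["Reactive lymphadenopathy"])
# → 40 40.0 45.0
```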
Serious diagnoses were made in 10% of IMHL cases (4 out of 40). One patient had metastatic adenocarcinoma of pancreaticobiliary origin & did not have any abdominal symptoms or any abnormalities in the abdomen on CT. Three patients were diagnosed with & later treated for active tuberculosis. Of these 3 patients, only 1 had features of active disease but was sputum negative; the other 2 patients had only mediastinal lymphadenopathy, with no lung infiltrates & no sputum production.
A total of 122 lymph nodes were sampled. Details are as follows (Figure 4):

Lymph node station | Times sampled | %
Station 7 | 65 | 53.3
4R | 18 | 14.8
11R | 15 | 12.3
11L | 11 | 9
10R | 4 | 3.2
4L | 3 | 2.5
2R | 2 | 1.6
10L | 2 | 1.6
12R | 2 | 1.6
2L | 0 | 0
12L | 0 | 0
The most commonly sampled nodes were station 7 nodes. This is consistent with the international literature published on EBUS-TBNA.
There were no complications from the procedures performed. None of our patients experienced significant airway bleeding (requiring admission or blood transfusion), mediastinal infection, pneumothorax, pneumo-mediastinum, haemo-mediastinum or airway lacerations.
Discussion
EBUS-TBNA is one of the methods used to access the mediastinal & hilar lymph nodes, and is a minimally invasive way to obtain samples from these nodes. Several invasive, minimally invasive & non-invasive techniques are available to diagnose & stage lung cancers; the choice depends upon the extent of the disease. About 50% of lung cancer patients have evidence of metastatic disease at the time of presentation3. Patients with intrathoracic disease undergo several investigations. We now know that EBUS-TBNA should be considered as the initial investigation for patients with early-stage suspected lung cancer4. Published research has shown that EBUS-TBNA has a sensitivity of 90%5. A recent national BTS audit on bronchoscopy & EBUS showed a national diagnostic sensitivity of 90% for staging EBUS-TBNA, and the BTS quality standards statement sets a target of 88% sensitivity for staging EBUS-TBNA6. As far as diagnostic EBUS-TBNA is concerned, we had 2 false-negative results out of 41 (4.8%), giving a sensitivity of 95.2% for diagnostic procedures.
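The sensitivity figure quoted above follows the standard definition: sensitivity = true positives / (true positives + false negatives). A minimal sketch, assuming an illustrative true-positive count of 40 alongside the 2 reported false negatives (the true-positive figure here is an assumption for illustration, not audit data):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Diagnostic sensitivity: the proportion of true disease picked up by the test."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only: 40 assumed true positives, 2 false negatives
print(round(100 * sensitivity(40, 2), 1))  # → 95.2
```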
There is significant evidence available that ROSE does not increase the diagnostic yield of even conventional TBNA7. Trisolini et al demonstrated in a randomised controlled trial that ROSE did not give any significant diagnostic advantage & did not affect the percentage of adequate specimens. Articles have also shown that ROSE does not reduce the EBUS-TBNA procedure time8. The use of immunohistochemistry on EBUS-TBNA reduces the rate of unclassified non-small cell lung cancer when compared with cytological diagnosis alone9. EBUS-TBNA samples are sufficient to allow immunohistochemical and molecular analysis; we were able to obtain ALK, EGFR & PDL1 testing on the EBUS-TBNA samples (where indicated) at our centre. The presence of a cytopathologist or cytotechnologist during the procedure for ROSE purposes can increase the cost significantly, and this increased cost can have a significant impact on starting the service at the level of a district general hospital. Another issue which needs clarification is the number of passes required before declaring the material inadequate when using the ROSE technique. Studies have shown that a significant number of samples deemed inadequate on ROSE were still able to give a diagnosis with the help of immunohistochemical analysis.
Here we have seen that 40 EBUS-TBNA procedures were carried out for IMHL. Unfortunately, in this group one patient was diagnosed with an unexpected malignancy, i.e., metastatic adenocarcinoma of pancreaticobiliary origin. In the remaining cases we had benign diagnoses. In the IMHL group about 45% of cases had a diagnosis of reactive lymphadenopathy; of the total of 82, about 33% of cases were diagnosed with reactive lymphadenopathy. We made the diagnosis of reactive lymphadenopathy in patients where EBUS samples showed normal lymphocytes; these patients had surveillance CTs & clinical follow-up as well, and the clinicians’ impressions & surveillance scans were also reviewed for the purpose of this diagnosis. In this IMHL group, 40% of cases were diagnosed with sarcoidosis; in these cases, in addition to the clinicians’ impressions, we reviewed cytology, microbiology & surveillance CT reports. The processing method for specimens impacts on the yield for granulomas: cell block preparation, as carried out in this hospital, has shown a higher yield for granulomas10.
During the first year of the EBUS service at this centre, there was no suspected or diagnosed lymphoma patient who underwent this procedure. International data suggests, for the diagnosis of lymphoma, EBUS-TBNA aspirates should be sent for cytopathology, immunohistochemistry, flow cytometry, cytogenetics and molecular studies 11,12,13.
Conclusion
EBUS-TBNA is a safe & minimally invasive procedure. It is a first-line investigation for lung cancer staging. EBUS-TBNA has been effective in diagnosing extra-pulmonary malignancies14. In the last decade we have also seen its utility increase significantly in diagnosing benign conditions like sarcoidosis and TB.
We feel operators’ training is also very important in achieving excellent results. Mastering the complexity of this procedure is time-consuming. Standardised training is mandatory to achieve high skill levels15 and we hope there will be a standardised approach to this in future.
Lichtenstein tension-free mesh repair has been the standard practice in open inguinal hernia repair for many years. The procedure involves suture fixation of the mesh via an anterior approach to the inguinal canal. It is hypothesised that this invasive fixation contributes to the development of chronic postoperative inguinal pain (CPIP), a condition which can cause significant morbidity.
A sound repair should restore the groin anatomy whilst minimising recurrence and not adversely affecting the patient’s quality of life. Considering the large number of these operations performed each year, reducing complications such as chronic postoperative pain would have a significant impact on healthcare resources.
The introduction of anatomical self-adhesive meshes such as Parietex ProGrip™ addresses this concern in theory by obviating the need for mesh fixation. This is a macroporous polyester mesh that utilises polylactic acid (PLA) microgrips to aid placement within 60 seconds1 without the need for additional fixation. The manufacturer does suggest, however, that additional fixation is left to the discretion of the operating surgeon.
We conducted a review of the literature to evaluate the reported outcomes of using this mesh in open inguinal hernia repair.
Methods
We conducted a PUBMED/MEDLINE search using the search terms “Self-adhesive mesh”, “Lichtenstein repair”, “Open inguinal hernia repair” and “Self-gripping mesh”. We looked primarily at the outcomes of postoperative pain and recurrence. The search highlighted five well-structured meta-analyses and several RCTs and retrospective reviews.
Results
In a retrospective review of 211 patients who underwent open inguinal hernia repair with self-adhesive mesh, Tarchi P et al reported a recurrence rate of 0.5% at 1 year and 2.4% at 2 years. The incidence of chronic pain was less than 3%. There were no cases of seroma, testicular complications or mesh infection at 1-, 2- and 3-year follow-up. The report highlighted the shorter operative duration, with no effect on recurrence rates, as a point in favour of self-adhesive mesh. The authors acknowledged the limitations of the study design and the need for randomised trials to address the issue.8 A few other small non-randomised trials draw similar conclusions.9
A randomised blinded trial from the Danish Multicentre DANGRIP Study Group allocated 163 and 171 patients to self-adhesive mesh and suture fixation respectively. There were no significant differences between the groups in postoperative complications (33.7 versus 40.4%; P = 0.215), rate of recurrent hernia within 1 year (1.2% in both groups) or quality of life. The 12-month prevalence of moderate or severe symptoms was 17.4 and 20.2% respectively (P = 0.573).
The study concluded that the avoidance of suture fixation using a self-gripping mesh was not accompanied by a reduction in chronic symptoms after inguinal hernia repair. 5
The FinnMesh trial is a randomised multicentre trial from Finland that compared glue fixation, self-gripping mesh and suture fixation of mesh. 625 patients were randomised to cyanoacrylate glue (Histoacryl, n = 216), self-gripping mesh (Parietex ProGrip, n = 202) or conventional non-absorbable sutures (Prolene 2-0, n = 207). There were no significant differences postoperatively in pain response or need for analgesics between the study groups at 1-year follow-up. The mean operative duration was lower in the self-adhesive mesh group.6
The HIPPO trial is a randomised double-blinded trial of 165 patients. The reported hernia recurrence rate after 24 months was 2.4% for the ProGrip mesh and 1.8% for the sutured mesh (P = 0.213).
The incidence of CPIP was 7.3% at 3 months, declining to 4.6% at 24 months, and did not differ between the groups.7 The mean duration of surgery was significantly shorter with the ProGrip mesh (44 vs 53 minutes, P < 0.001).
In a systematic review of 7 studies comparing self-gripping versus sutured mesh for inguinal hernia repair, totalling 1353 patients, Zhang C et al found no difference in recurrence (risk difference -0.02 [95% confidence interval -0.07 to 0.03], P = 0.40) or chronic pain (risk difference -0.00 [95% confidence interval -0.01 to 0.01], P = 0.57).2 This review found no difference in wound infection, haematoma or seroma formation. Self-adhesive mesh was again associated with a shorter mean operative duration. In conclusion, the authors surmised that both mesh types are comparable in outcome but that further long-term analysis might be needed.
Pandanaboyana S published a meta-analysis of 5 RCTs and 1170 patients that also found no significant difference in recurrence or chronic pain. Wound infection was lower in the self-gripping mesh group compared with sutured mesh, but this was not statistically significant (risk ratio (RR) 0.57, 95% confidence interval 0.30-1.06, P = 0.08). The duration of operation was significantly shorter with self-gripping mesh than with sutured mesh, with a mean difference of -5.48 min (95% confidence interval -9.31 to -1.64), Z = 2.80 (P = 0.005).3
In another meta-analysis, Li J et al included 5 RCTs and 2 prospective comparative studies, totalling 1353 patients. Statistically, there was no difference in the incidence of chronic pain [odds ratio = 0.74, 95% confidence interval (CI) (0.51-1.08)]. There was no statistical difference in the incidence of acute postoperative pain [odds ratio = 1.32, 95% CI (0.68-2.55)], haematoma or seroma [odds ratio = 0.89, 95% CI (0.56-1.41)], wound infection [risk difference = -0.01, 95% CI (-0.02 to 0.01)] or recurrence [risk difference = 0.00, 95% CI (-0.01 to 0.01)]. The self-gripping mesh group was associated with a shorter operating time (by 1-9 minutes).10
In Ismail A et al’s meta-analysis of 12 randomised controlled trials and 5 cohort studies, 3722 patients were included in the final analysis. The two groups, using self-gripping mesh or sutured mesh fixation, did not differ significantly in terms of recurrence rate (odds ratio = 0.66, 95% confidence interval 0.18-2.44; P = 0.54) or postoperative chronic groin pain (odds ratio = 0.75, 95% confidence interval 0.54-1.05; P = 0.09). The operative time was less in the self-gripping mesh group (mean difference = -7.85, 95% confidence interval -9.94 to -5.76; P < 0.0001). There were comparable risks between the self-gripping mesh and sutured mesh fixation groups in terms of postoperative infection (odds ratio = 0.81, 95% confidence interval 0.53-1.23; P = 0.32), postoperative haematoma (odds ratio = 0.97, 95% confidence interval 0.7-1.36; P = 0.9) and urinary retention (odds ratio = 0.66, 95% confidence interval 0.18-2.44; P = 0.54).11
A more recent meta-analysis, including 10 RCTs and 2541 patients, also draws similar conclusions, with no significant difference in the incidence of chronic pain (odds ratio = 0.93; 95% confidence interval, 0.74-1.18), recurrence (odds ratio = 1.34; 95% confidence interval, 0.82-2.19) or foreign body sensation (odds ratio = 0.82; 95% confidence interval, 0.65-1.03).4 The mean operating time was significantly shorter (mean difference = -7.58; 95% confidence interval, -9.58 to -5.58) in the self-gripping mesh group, which is consistent with the reported literature.
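The odds ratios and 95% confidence intervals quoted throughout these meta-analyses follow the standard 2x2-table construction, with the interval built on the log-odds scale. A hedged sketch (the counts below are purely hypothetical and are not data from any of the cited trials):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with 95% CI from a 2x2 table:
    a/b = events/non-events in group 1 (e.g. self-gripping mesh),
    c/d = events/non-events in group 2 (e.g. sutured mesh)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical example: 20/200 vs 25/200 patients reporting chronic pain
print(odds_ratio_ci(20, 180, 25, 175))
```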
Discussion
Open inguinal hernia repair is a routinely performed operation and chronic postoperative inguinal pain is a significant cause of morbidity that can impact negatively on patients’ quality of life. Eliminating the need for suture fixation seems theoretically a step in the right direction.
The published literature, however, arrives at consistent conclusions. Whilst the use of self-adhesive mesh results in a shorter operative duration and does not otherwise appear to affect the outcome negatively, there is no evidence that it reduces postoperative chronic pain, and it therefore should not be advocated on this merit. A shortened operative time coupled with a non-inferior outcome does seem like a more reasonable evidence-based argument for its proponents.
The decision of which mesh fixation technique to use can be left to the discretion of the operating surgeon. Further long-term follow-up data are required to arrive at more definitive conclusions, as the mean follow-up duration in the reviewed studies ranged from 4 months to 3 years. The cost implications involved in the choice of mesh should also be taken into account in future studies.
The creative process is an enigma; there are conflicting opinions about creativity and creative people. Research studies on creativity have produced contradictory results. The long-standing belief that creativity results from a strange clairvoyant state is still occasionally associated with psychiatric disorders. 1 Although a decline in creativity with aging indicates that it is biologically based, a relationship between creativity and psychopathology is overstated in both print and media. Reductionism tends to misconstrue creativity as a product of psychopathology. Nonetheless, whilst psychopathology can facilitate creativity, it does not produce creativity. The inspirational characteristics of creativity remain shrouded in mystery.
Methodological issues, including both the definition and the evaluation of creativity, impede research into creativity. These challenges make comparisons between studies problematic, and the studies deliver opposing outcomes. Although there is no confirmed relationship between psychopathology and creative accomplishment, the search for such a relationship furthers our understanding of human potential and the deeper levels of consciousness. Early detection of creative talents in children might enable us to provide them with special guidance, thereby averting potential psychiatric problems.
The superficial reductionism of 20th century biological psychiatry compressed all mental phenomena, including creativity, into compact neurobiological compartments, and the only way to achieve this was to medicalise it. Any assumed correlations between creativity and mental disorders will be clarified only when we gain a greater understanding of the creative process. In cases where creativity and mental illness indeed coexist, a psychiatric understanding of creativity may provide insights into patient functioning and assist in defining both normalcy and psychopathology.
Whilst human beings have existed on this planet for millions of years, the technological advancements of the last few centuries transpired without any perceptible change in the development of the human brain. Indeed, our ancestors were still drawing two-dimensional pictures until just a few centuries ago. No one has hitherto been able to explain this sudden burst of creativity. An expanded model of brain-mind-consciousness that can appreciate the wonder of creativity is needed.
Defining Creativity
Researchers have long been interested in a potential connection between creativity and mental illness. The major challenge is to define creativity and establish measurable indicators. Creativity involves the capacity to take unrelated structures and combine them harmoniously in new ways for new purposes; the creative mind is alert to unexpected connections. An individual with a rich reservoir of knowledge is regarded as intelligent, whereas an individual who uses that knowledge in an original and constructive way is considered creative. 2 The creative process is not fully understood; some even feel that a precise definition is unattainable. 3 Nonetheless, creativity can be described as the process of bringing something new into being, where the outcome is larger than the input received by the creative mind. 4,5 Creative individuals are sensitive to gaps in human knowledge, and these voids act as catalysts in their search for solutions. This is the highest form of human adaptation; to a greater or lesser degree, it may exist in all people.
The creative process can be likened to a four-stage computer process. 6 The first stage is information processing and storage; the second is the incubation or pondering phase, during which ideas germinate at a subconscious level. The third phase involves illumination, or flashes of insight, and the fourth is the period of elaboration, during which the new idea is developed and tested. These stages can also be likened to the biological rhythm of conception, gestation, birth and infancy. The pattern is not strict: as a rule, illumination is gradual, with countless small bursts of insight, as with Charles Darwin’s elaboration of his theory of evolution.
Dream processes shed some light on creativity. As with poetry, dreams are replete with visual and highly idiosyncratic metaphors. Dreams are the art of the unconscious; whilst dreaming, we tap into a creative source. The dreaming psyche has seemingly unlimited creative potential. An anecdote about Kekule, the chemist, recounts that he conceived the benzene ring after a dream in which he saw a serpent biting its tail.
The Creative Personality
Creative people must be assessed on an individual basis. Not all persons of superior intelligence are creative, and not all creative people have superior intelligence. Although creative potential depends on intelligence, actual creative achievement is largely independent of it (just as one does not have to be exceptionally tall to be a successful basketball player). Highly intelligent people are prone to self-criticism, which inhibits the development of creativity; a combination of high intelligence and special aptitudes appears to promote it. Unconventionality, egocentrism, flexibility, tolerance of ambiguity and a preference for complexity are among the attributes of creative individuals. 7 Psychological testing has shown that creative individuals are frequently more emotionally troubled than non-creative individuals; however, they also have more ego strength for dealing with problems. Their personal qualities include imagination, persistence, perseverance, dedication and stamina. Creative children tend to be egotistic and gullible: the egotism provides them with the confidence to believe that they are capable of unique achievements, whilst momentary gullibility enables them to break through scepticism into creativity.
McClelland illuminated a controversial notion when he described the creative individual as one characterised by competition, either with an external standard of excellence or with his or her own internal aspirations. 8 Driving absorption, the ability to ignore failure and adversity, and tremendous curiosity have been noted as a predictive set of personality traits. 9 Although creative individuals can be difficult to live with, whether their creativity flourishes frequently depends on the support that they receive from others. 10 Among the characteristics of creative people, Tarlaci (2014) included openness to experimentation and change, rebelliousness, individuality, sensitivity, playfulness, self-assertiveness, curiosity and simplicity. 11
Alongside a compulsion for order, symbolisation and communication lie at the core of creativity. Intelligence, domain-specific knowledge and expertise, motivation, and adaptive traits such as openness, broad interests and self-confidence are closely associated with creativity (Feist 1999). 12 These characteristics of creative people are clearly independent of psychopathology; if anything, they point towards better mental health. Neuroscience research on creativity has revealed that it is associated with ‘ordinary’ rather than psychopathological brain processes. 13
Psychopathology
Philosophers have debated a conceivable connection between creativity and psychopathology since the time of Plato, who proposed a logical paradox: a poet does not know what he is going to write, and yet he cannot produce a poem without a picture of what he describes. As a Greek philosopher, Plato was a reincarnationist, and he solved his own riddle by attributing hidden creative knowledge to remembrances of a previous life and to ‘Divine madness’. Aristotle noted the predisposition of great artists and poets to melancholia, but he perceived creativity as a rational process. Shakespeare echoed the older perspective through one of his characters, who states that ‘the lunatic, the lover, and the poet are of imagination, all compact’. During the 20th century, systematic investigations were unable either to support or to refute this association. Cesare Lombroso failed to clarify the confusion in his book Genius and Mental Illness; nonetheless, his influence led to speculation that genius is an ‘ancestral gift’ transmitted in families in the same manner as mental disorders.
Recent empirical research has shown that creative individuals have a higher tendency towards psychopathology than those in non-creative professions. This propensity is expressed in personality traits, behaviours and experiences similar to those identified in clinically ill patients (Jamison 1989). The evidence has not clarified whether the psychopathology linked to creativity relates more closely to features of schizophrenia or of affective disorders. Countless novelists and dramatists have family histories of psychiatric disorders, and severe personality deviations have been observed among visual artists and writers, and possibly among thinkers and scholars as well. Jamison noted mood disorders among writers and artists. 14
Bipolar disorder may be more frequent among creative individuals than in the general population. One study reported a higher incidence of depression and bipolar disorder among creative people, especially writers. 15 Another study noted a higher incidence of depression and alcoholism among writers and artists. Following recent epidemiological studies with large samples, Kyaga et al. (2013) argued in favour of an association between professional authorship and psychiatric disorders. 16 They illuminated familial associations between the creative professions and schizophrenia, bipolar disorder, anorexia nervosa and possibly autism. 16 They noted that the association was more evident among self-employed artists and less so in scientific creativity, where the subjects had passed through several professional screening procedures.
In another epidemiological study, Parnas et al. (2019) found that the relatives of academics have a significantly increased risk of suffering from schizophrenia or bipolar disorder. 17 In a further study, they suggested that ‘creativity and an increased risk for mental disorders seem to be linked by a shared vulnerability that is not manifested by clinical mental disorders in the academics.’ 18 The literature has drawn significant connections between bipolar disorder and creative accomplishment, with much of the thinking inspired by biographical accounts of poets and musicians who presented with signs of bipolar disorder. 19 Studies by Burkhardt et al. (2018) suggest that, in persons at risk for bipolar disorder, mood swings are strongly associated with creativity; however, whilst there is evidence of increased creativity, there is no evidence of higher creative achievement. 20
Observations of the bipolar mood domain identify a high prevalence of changes in intuition, empathy, appreciation of danger and predictive capacity. However, these changes do not necessarily include supra-sensory changes in the primary senses of smell, taste, vision, touch or hearing. Parker et al. (2018) suggested that clinicians should be aware of non-psychotic, supra-sensory phenomena in patients with bipolar disorder and that the identification of such features could explain the increased creativity evident in those with a bipolar condition. 21
After examining the life of Charles Dickens, Longworth and Carlson (2018) maintained that there was very little historical evidence that he experienced bipolar disorder. 22 They did, however, suggest that he displayed characteristic bipolar symptoms. They also maintained that his childhood was an outstanding example of personal resilience and that his own story was as fascinating as, if not more intriguing than, any of those he created. They concluded that Dickens’ story confirmed the connection between writers, creativity and mood disorders. Retrospective psychiatric assessment of historical figures, and the slotting of these celebrities into biological compartments, is risky; biographical studies of creative people have been criticised for possible recall, interviewer, selection and cultural sampling biases. 23
The suicide rate is high among artists, and this has been linked to manic depression, although adverse financial circumstances and disappointment at the rejection of their artistic productions may be sufficient to explain the apparently high rate. In contrast, musicians have a low suicide rate, very likely reflecting the healing effect of music. In addition to alcohol, opium has historically been a favourite drug of writers; Charles Dickens is an example, and opium addiction was partially responsible for his death. 24 Ludwig’s study of 1000 outstanding individuals found an upsurge in alcohol abuse among artists, especially writers. 25 Post (1994) found a similar result among prose writers and playwrights. 26 Ernest Hemingway, the Nobel Prize winner for literature, may be a good example of this phenomenon; he committed suicide later in his life. Creative individuals may be notorious for alcohol and drug misuse, but it is not clear whether drug-induced psychopathology promotes their creative expression. Whilst the disinhibiting influence of mild psychopathology and the judicious use of alcohol or drugs could facilitate creativity, this phenomenon has probably contributed to the confusion in which psychopathology is described as the ‘producer’ of creativity.
Absence of Psychopathology
Alongside these studies, other reports glorify the mental health of geniuses and eminent individuals. The Stanford 35-year follow-up study of over 1000 geniuses, the MacKinnon study of creativity in architects and Havelock Ellis’s psycho-biographical study of eminent men all emphasised the absence of psychopathology among these creative individuals. 27
In an investigation of the prevalence of psychopathology in a sample of 291 famous men, Post (1994) noted that they all excelled by virtue of their abilities, originality, drive, perseverance, industry and meticulousness. 26 Even though most had unusual personality characteristics and minor neurotic abnormalities, all of the subjects were emotionally warm, with a gift for friendship and sociability. Post additionally noted that, among creative individuals, scientists show the fewest psychological abnormalities, and that functional psychoses are less frequent than epidemiology would suggest. Depressive conditions, alcoholism and possibly psychosexual problems are more prevalent than expected in some professional categories, particularly among writers. Hare (1987) noted that banning stimulant drugs in sport did not significantly lower achievement, and suggested that the same should be true of creativity; poetic vision has been equated with psychedelic experiences. 28 Creative activity has been observed to be at its highest level in patients who are moderately ill, and at its lowest in groups identified as severely ill. 29
Although there is no significant difference in the incidence of psychotic illness between males and females, there is less recorded creativity among the latter. If the hypothetical connection between creativity and psychopathology were valid, the incidence of creativity should be proportional across the genders. Historically, unfavourable social pressures and opposing cultural factors have been the major explanations for the lower incidence of creativity among women. This disparity points towards the fact that creativity has to be nurtured and is not automatically generated by psychopathology. Despite an equal incidence of mental illness in men and women, there have been few female geniuses in any culture; this challenges the probability of a clear connection between psychopathology and creativity. The same argument may be used against a purely biological view of creativity: both men and women have the same biological make-up, yet fewer geniuses have been identified among the female population.
Psychodynamic Perspectives
Psychoanalysts have postulated dynamic psychopathologies for the creative process. Analysts incline towards seeing artists as neurotics and their productions as sublimations of sexuality and regression in the service of the ego. 30 They consider the motives for creative activity to be impulses that compensate for dissatisfaction and defences against depression. Some perspectives differ from traditional psychoanalytical ideas, emphasising the crucial role of synthetic ego operations and drawing distinctions between psychopathology and creativity. 31 Analysts suggest that novel ideas exist in the subterranean regions of the mind; whilst the conscious mind has no access to these hidden areas in the normal state, it is easier for a disturbed mind to tap information from the unconscious or preconscious. 32 Sims suggests that the psychotic and the creative states are subjectively indistinguishable and that delusions arrive in the minds of the mad in the same manner that ideas drop into the minds of the creative. 33 In contrast, Slater and Meyer report only minor psychiatric disorders among creative people. 34 It would appear that, although psychopathology does not preclude creative activity, it may release it. In general, the creative person enjoys conflict-free intimacy with the preconscious and is a model of psychological health. 35
Orderly Mind
The neurobiological model of schizophrenia suggests that a deficit in the systems involved in information processing could contribute to its symptomatology. 36 It has been hypothesised that such a deficit could favour creative associations between information units. 37 Psychopathology-linked creativity has even been associated with abstract disciplines such as mathematics. If these views were accepted, creativity and schizophrenia would be separated only by a ‘neurological difference’. Andreasen challenged the hypothesis of a connection between creativity and schizophrenia. 38 He argued that the bizarre nature of schizophrenic experiences is far from original, and that the cognitive impairment of such patients inhibits their creativity.
The creative, intelligent person experiences an attention surplus, whereas the schizophrenic patient suffers from an attention deficit. As a case in point, a creative child may figure out in two seconds what the teacher is going to say, after which he may look around, waiting for the teacher to finish and appearing not to pay attention. In contrast, because of a failure in the normal filtering of stimuli, schizophrenic patients tend to make unusual associations that result from over-inclusive thinking, in which countless disconnected elements are included in their reasoning. 39 Although individuals of higher cognitive ability also demonstrate ‘pseudo over-inclusive thinking’, this is due to their capacity to conceive and utilise two or more contradictory concepts simultaneously. 40
Bleuler (1950) described intellectual ambivalence as characteristic of schizophrenia and as superficially similar to the janusian process of oppositional thinking, which involves conceiving of two or more opposites simultaneously. 41 The Kent-Rosanoff word association test has been used to assess this process. 42 In contrast to the creative thinker, who is fully aware of logical contradictions, the schizophrenic patient is unconscious of the contradictory nature of his or her utterances. For example, when Albert Einstein derived his theory of relativity from the insight that a man falling from the roof of a house is both in motion and at rest, he was fully aware of the contradictory nature of his thinking. 43 Another example is Frank Lloyd Wright’s revolutionary design of Fallingwater, in which nature and interior space coexist. The janusian process was initially identified in highly creative writers, visual artists and scientists. The fluency of association observed among creative individuals can be mistaken for over-inclusive thinking. 44 Since their brains process increased sensory input effectively without cognitive overload, creative individuals derive an advantage from their higher levels of associative thinking.
Contrary to popular belief, creative writers resemble, in their cognitive and conceptual style, those suffering from the manic phase of affective disorders rather than schizophrenic patients. However, whereas the over-inclusiveness of mania is based on bizarre associations, that of writers is due to an imaginative recognition of original associations. Whilst writers are capable of controlled flights of fancy, manic imaginations are bizarre and based on personalised reasoning. The racing thoughts of a creative intellect are productive, whereas those of the manic patient are destructive. Albert Einstein claimed that he discarded a new idea every two minutes.
Creative thinking is polythetic and should not be confused with flight of ideas. Schuldberg (1990) investigated the overlap between schizotypal and hypomanic traits and suggested that affective symptoms may be more important than primary-process thinking in determining creativity within the general population. 45 The fluctuating thoughts experienced by individuals of higher cognitive ability can be mistaken for mood swings. Fink et al. (2014) challenged the connection between creativity and psychopathology and proposed that the domains of artistic and scientific creativity should be analysed separately. 46
Although the creative potential of autistic people has been recognised, they differ from over-perceptive children in many respects. One fundamental difference is that the creative potential of the latter is polythetic, whereas that of autistic individuals is generally monothetic. A key diagnostic criterion for autism (restricted and repetitive behaviours and interests), combined with a small number of research studies, suggests that generating original ideas or artefacts may be challenging for autistic individuals. 47 Nonetheless, a minority within this population has exceptional artistic gifts, and a wider group embraces activities typically associated with creative expression, including visual art, music, poetry and theatre.
A three-level multilevel meta-analytic approach investigated the relationship between creativity and schizophrenia. The analyses of Acar et al. (2018), with 200 effect sizes gathered from 42 studies, detected a mean effect size of r = −0.324, 95% CI [−0.42, −0.23]. 48 When the analyses focused on moderators, they found that the relationship between schizophrenia and creativity was moderated by the type and content of the creativity measure, the severity of the schizophrenia and patient status. The negative mean effect size was stronger with semantic-category or verbal-letter fluency tasks than with divergent thinking or associational measures. They submitted that, when these findings are analysed alongside previous meta-analyses on the association between creativity, psychoticism and schizotypy, creativity and psychopathology appear to have an inverted-U relationship: whilst a mild expression of schizophrenia symptoms may support creativity, a full expression of the symptoms impedes it.
Schizophrenia and schizotypy have frequently been associated with above-average creativity; nonetheless, empirical studies of the relationship between schizophrenia spectrum disorders and enhanced creativity have generated inconsistent results. 49 Even though some mental processes may appear similar in creative and psychotic thinking, the current literature challenges this conclusion. 50,51,52 Psychopathology does not play a role in the genesis of higher-order creativity; nonetheless, the psychological defence mechanism of overcompensation goes some distance towards explaining the high achievements of mentally or physically disabled individuals. 53
The Myth of Drug-Induced Creativity
The belief that the brain alone is the source of creativity led to the idea that altering brain chemistry could make people more creative. The truth may be that the gentle psychopathology created in the brain serves as a facilitator of creativity rather than a producer of it. The psychopathology generated by psychedelic drugs might help to open Aldous Huxley’s ‘doors of perception’: Huxley (1954) proposed The Doors of Perception to illustrate the enlightenment induced by mescaline and similar drugs. 54 Interestingly, such a proposal is close to Zizzi and Pregnolato’s depiction of ‘very fast switches from the quantum logic of the unconscious to the classical logic of consciousness’ (Zizzi & Pregnolato, 2012). 55 Those who glorify such drug-induced creativity are unaware that long-term substance misuse can only kill creativity, as the ‘switches’ become permanently damaged and lead to psychopathological states.
Psychedelic experiences occur when one’s sense of self is suspended and the sense of space-time dissolves, and such experiences should not be confused with true mystical experiences; they are pseudo-mystical. True mystical perceptions and cognitions relate to what is essentially ineffable, pertaining to the nature of existence rather than being limited to the familiar objects of everyday experience. The hallucinating drug user or alcoholic functions at a level of impaired consciousness, whilst the mystic operates at a higher level of consciousness; unlike a hallucinating patient, mystics are fully aware of their altered state of consciousness and can switch back to their ordinary mode of perception. Psychedelic experience may create an interest in artistic activity, and the raw materials obtained in such experience may be useful in eventual artistic creation, but the psychedelic experience as such is not a creative experience, because motor functioning is impaired during it and the information flow to the hands and fingers is affected. 56 The natural state of a relaxed, happy and well-adjusted person is more conducive to creativity than the perplexed psychedelic state. There may be ‘psychedelic artists’, but there are no psychedelic scientists, indicating a difference between the creative processes of scientific and artistic generativity.
Drug-induced creativity is a conundrum that needs serious clarification, as many young people are trapped in such faulty perceptions. Cannabis is the most widely used illegal substance globally. Schafer et al. (2011) suggested that cannabis produces psychotomimetic symptoms, which in turn might lead to connecting seemingly unrelated concepts. 57 Such divergent thinking is considered primary to creative thinking. They argue that a drug-induced altered state of mind may indeed lead to breaking free from ordinary thinking and associations, thereby increasing the likelihood of generating novel ideas or associations. However, the harmful effects of cannabis use have been extensively documented. 58,59,60,61,62,63 Cannabis abuse is quite unlikely to generate any sustainable creativity; the ‘creative Big Bang’ would soon end in a big crunch.
If creativity were a neurological phenomenon, creative people should have additional neural pathways, but psychedelic drugs have not been proven to create such new pathways. Speculation about specific brain regions promoting creativity is of great scientific interest. Creativity involves an architect and a set of engineers: according to Amit Goswami, the quantum unconscious domain is the architect and the real source of creativity, whilst the brain does the engineering work. 64 Psychoactive substances do not act directly on quantum consciousness but may help to open the gates to the hidden dimensions of consciousness. When quantum views of creativity are given due significance, the neurologically based views promoting psychedelics crumble. If a lack of creative ability is deemed a ‘brain deficit’, the use of illegal drugs to promote creativity can be compared to using medications to treat ADHD; however, the ‘brain deficit’ argument holds water only if we accept the ‘brain disease model’ of psychiatry. It may even be true that psychedelic drugs have a quick and transient de-stressing effect that could promote a creative mental state, but the production of any direct creativity through the use of such drugs is questionable.
Problematic Childhood
Some children of superior intelligence attempt to mask their creativity by being over-talkative and overactive; such children run the risk of being misdiagnosed as having ADHD. Creative children frequently have a unique sense of humour that their peer group cannot appreciate, and they are often resented by peers for their unusual ideas, for the forcefulness and passion with which they present them, or for pushing their ideas on others. Their divergent thinking is not helpful in school examinations, which require convergent thinking, and this could explain the poor academic achievement of several geniuses. The divergent thought processes of creative children must be differentiated from inattention and underachievement; for example, a highly intelligent child might work out what the teacher is going to say next and become inattentive. Although creativity is associated with divergent thinking, this alone does not correlate well with creative achievement. 65 Creative children overflow with ideas and play with new ideas and concepts. They are not motivated by, or even concerned about, high grades, and they need individualised attention lest they fall by the wayside. There is nothing more frustrating than being creative and intelligent yet underachieving.
Creative children demonstrate certain unusual traits, such as daydreaming, wanting to work alone, sharing bizarre thoughts and voicing conflicting opinions. These qualities do not please traditional teachers and bring such children into conflict with them, and their lack of conformity to classroom structure can even be confused with attention deficit hyperactivity disorder. Highly critical parents kill creativity; unfortunately, countless creative individuals have chaotic childhoods that lead to psychological problems in their adult lives. Variation in IQ can also produce a mismatch with parents and siblings: the mother and father may be of average intelligence whilst the child is above average, and this disparity can lead to behavioural problems.
There is a special group of children around the world who have high intelligence and intuition, healing abilities and a strong spirituality; in some cultures they are grouped as Indigo people. Indigos are described as people with a high level of sensitivity, unique creativity, strong intuition, healing ability and a personal charisma for those around them. 66 According to the proponents of these new ideas, such children are often mislabelled as having behaviour disorders. Little is known from scientific research about the Indigo phenomenon, although indigenous populations are familiar with Indigo-like children. The purpose of studying these individuals as adults is to better understand such children and to advance the behavioural health sciences by increasing awareness of the Indigo phenomenon and learning about their lived experiences. There has been hardly any serious scientific study of the Indigo phenomenon.
One phenomenological study explored the lived experiences of 10 adult Indigos (7 females, 3 males; aged 18 years or over; mean age 52.4 years) on the island of Oahu, Hawai'i. 67 Through in-depth, semi-structured personal interviews, the experiences of these adults were analysed and interpreted to identify the common experiences faced during childhood, what worked for their assimilation into society, and recommendations for parents, educators and health professionals on how to work with Indigos. Bioenergy field photographs of each participant were also taken. Statements related to the phenomenon were placed into themes, coded and categorised as the investigators reached a consensus on common themes. Seven primary themes and nine secondary themes emerged from the findings.
The primary themes were: a grandmother or mother with a similar gift; being guided by a higher power to heal self and others; feeling 'different' or misunderstood; not openly sharing their unique abilities; challenges with partner relationships; a history of abuse, violence or frequent discipline; and the use of intuition at work and/or school. Secondary themes included: the use of Hawaiian and cultural healing methods; the belief that everyone has a degree of intuition, and the use of intuition to decide when to see a doctor; various unique bodily abilities; multiple careers; contact with mental health institutions; and financial struggle. Self-reports on participants' life purpose, their unique abilities and being misunderstood were also collected. It was concluded that Indigos felt mislabelled or misunderstood throughout their lives, despite their belief that their life purpose was to help humankind.
Academic psychologists are sceptical about the Indigo phenomenon and argue that it is a cover for normalising the odd behaviour of children who could otherwise be included under the categories of attention deficit disorder, attention deficit hyperactivity disorder, autistic spectrum disorders and learning disabilities. Health experts are concerned that labelling a disruptive child an "Indigo" may delay proper diagnosis and treatment that could help the child, or delay investigation of the parenting style that may be causing the behaviour.
Inspiration and Perspiration
Creativity is regarded as the product of inspiration, or creative imagination, combined with meticulous, disciplined effort. The Edisonian perception of invention as 1% inspiration and 99% perspiration is explained by the hypothesis of interactive creativity, which assumes that the inspirational aspect has a paranormal component. In his thesis on interactive creativity, Laszlo supported his hypothesis with observations on cultural creativity, including the collective advance of entire populations through the typical creative activity of their members, and with documented incidents in modern science in which different investigators developed scientific insights simultaneously, without any known contact. 68 Early cultures developed similar tools; calculus was discovered simultaneously by Newton and Leibniz, and biological evolution was described independently by Darwin and Wallace. Similarly, Alexander Graham Bell and Elisha Gray both applied to patent the telephone on the same day. The Rubik’s cube was conceived and designed simultaneously by Rubik and a Japanese inventor, and nylon was discovered in both New York and London, hence, it is said, the name ‘NyLon’.
Jung’s research into the phenomenon of creative synchronicity helped him to formulate his concept of the collective unconscious. Psychological disturbances may represent the consequences of creative endeavour, and Jung (1973) considered them the price to be paid for persistent exploration of the unknown.69 Polanyi (1994) suggested that scientific discovery is informed by the imagination and integrated by intuition, and vice versa.70 This statement is close to the Edisonian perception of creativity: if imagination is a property of the brain, intuition occurs in the unconscious realm. Whilst Laszlo’s views are not definitive, they supplement our existing knowledge about creativity. The inspirational aspect may be better explained by an expanded model of brain-mind-consciousness, and Xavier suggests a para-psychodynamic approach.71
Biological Perspectives
Particularly gifted individuals have determined the evolution of civilisation. Karlsson (1984) commented regarding creative individuals: ‘Without their genes, men might still live in caves’.72 Nonetheless, countless gifted individuals have a very ordinary family background, with no ancestral history of creativity; Newton, for example, came from an undistinguished family. Genetics researchers look for the biological roots of creativity, some believing that the mind is reducible to chemistry. Whilst intelligence may be a trait that can be cloned, creativity may not be, and it may prove even more complex than genetic manipulation can capture. Kelly et al. (2007) suggested that creative inspiration is akin to mysticism.35
The dopamine hypothesis of psychosis rests on responses to dopamine-inhibiting drugs and on the psychoses triggered by drugs that increase dopamine activity. However, dopamine over-activity in psychosis should not be confused with dopamine fluctuations in creative individuals. Dopamine diminishes with ageing, which may explain the decline in creative powers after the age of twenty.73
The relationship between age and outstanding achievement has captured the attention of researchers into creativity. Whilst Lehman maintained that creative achievement is a curvilinear, single-peak function of age, other researchers have described two separate age-peaks.74 Outstanding contributions among mathematicians after the age of fifty are exceptions. These age-related observations support a biological basis for creativity.
Future Directions
The study of creativity is an important area of research where consciousness studies and psychopathology meet. Cognitive scientists have long pondered the link between psychopathology and creativity without reaching any firm conclusions, and appear to be barking up the wrong tree. The very process of creativity ought to be explored before progress can be made in this area, and the current reductionist model of mind stands as a hindrance. The following suggestions may be helpful for future researchers.
1. Establish the psychopathology of psychotic disorders
2. Creativity-linked genetic studies are warranted; the biological correlates of creativity need further elucidation.
3. Expand the current model of the brain-mind-consciousness complex so as to explain the inspirational element of creativity.
4. More longitudinal, international and multicultural studies are recommended.
5. Given the affinity of psychosis-proneness for the artistic creativity domain, studies should focus on artistic creativity and scientific creativity separately.
Clinical Implications
The study of creativity has clinical implications: A) Psychiatric understanding of creativity provides a better picture of patient functioning, which could assist in clarifying the definitions of both normalcy and psychopathology. B) Early detection of creative talent in children might help in providing special guidance for such children and thereby prevent potential psychiatric problems. C) When they coexist, differentiating creativity from mental illness is useful: the former might require nurturing while the latter warrants clinical intervention.
Discussion
Whilst it is true that investigators have observed a high incidence of psychiatric symptomatology of an affective nature among creative writers and artists, it is debatable whether this relationship is causal, an effect or merely contributory. The increased psychopathological states observed in artistically creative individuals suggest that art and science reflect two different arenas of creativity. The mechanism of generating novel ideas may be identical in artistic and scientific creativity, but the ideas are evaluated differently by the two groups, resulting in different types of products. Creativity and mental illness can coexist, and the creative impulse can have a therapeutic effect on the psychiatric condition. Creativity can be therapeutic for those already suffering from mental illness; creative art therapies applied in clinical and psychiatric settings report positive health-related outcomes.75 Even in rare cases of psychopathology-induced creativity, the individual requires highly developed intellectual protective factors. It is the disciplined portion of the mind that enables outstanding creative achievements. Creativity of the highest order is a product of laborious intellectual effort. When they coexist, psychopathology is a mediator, not the producer, of creativity, and the creativity may be cathartic.
There is no scientific consensus regarding the association between psychopathology and creative achievement. The literature does not substantiate the high reported incidence of mental illness among creative people. It is possible that predispositions to mental illness and creativity tend to co-occur because they reflect an underlying personality and cognitive style predisposed to both creativity and mood disorder. The high reported incidence of mental illness potentially signifies an ‘occupational hazard’, and creativity stands independent of psychopathology. The normal creative person can swing back to reality from a transient ‘creative psychical shift’, like a diver who searches for diamonds in the deep sea and then returns to the shore. The sensitivity and intensity that facilitate creative expression may additionally make highly creative people more susceptible to depression.
Early investigations of geniuses were retrospective, and formal meta-analyses were not considered justifiable. All forms of creativity were mixed in the studies, without distinguishing the different domains of creativity. Most of the studies were confined to English-speaking people, whilst creativity is a global phenomenon. A multiplicity of literature does not mean that the ideas expressed are established scientific truth. This is true of creativity, which may be as mysterious as creation itself. The prevailing model of the mind may be inadequate for a full appreciation of the creative process. It would be easier for a ‘camel to pass through the eye of a needle’ than to explain creativity from a reductionist perspective. The inspirational component of creativity continues to be an enigma. It is my contention that creativity is essentially an inner, psychic phenomenon. We do not have even an approximate model of the brain-mind-consciousness complex, let alone know the true aetiology of schizophrenia and bipolar disorder. Therefore, it would be prudent to suspend our psychopathology-allied perspectives on creativity until we develop a deeper understanding of consciousness. The association between creativity and psychopathology has soared to the level of cultural myth, evident in the many films in which mentally ill persons are portrayed as extraordinarily creative.76
Tuberculosis (TB) involving the central nervous system (CNS) accounts for 2 to 5% of all TB cases1. It commonly manifests in three ways: meningo-encephalitis, tuberculomas or abscesses2. Treatment should be guided by histological evidence, which unfortunately can be difficult to obtain in those with CNS dissemination. We present a rare case of solitary basal ganglia tuberculous abscess, which presented a diagnostic dilemma and led to complex management planning.
Case Report
A 53-year-old man, with no known medical illness, presented with a 5-day history of fever, vomiting and altered mental state. His vital signs were stable on arrival, aside from a pyrexia of 38°C. However, there was neck stiffness on clinical examination, as well as increased tone in the right upper and lower limbs with an upgoing right-sided plantar response. Power was preserved in all limbs.
An initial contrast-enhanced computed tomography (CT) scan of the brain revealed a hypodense lesion measuring 1.6 cm x 2.1 cm x 1.8 cm in the left basal ganglia region, with rim enhancement, complicated by cerebral oedema causing mass effect and mild hydrocephalus (Figure 1). A lumbar puncture was performed, and cerebrospinal fluid (CSF) analysis suggested the possibility of bacterial infiltration (Table 1). Initial Ziehl-Neelsen (ZN) staining, mycobacterium tuberculosis (MTB) cultures and GeneXpert nucleic acid amplification test (NAAT) from CSF were negative, as was HIV antibody serology. A transthoracic echocardiogram and CT imaging of the thorax and neck both failed to identify a possible source of spread. Furthermore, there was no evidence of focal lung infection or collection, and no lymphadenopathy of note.
The patient was initially treated for 2 weeks with intravenous antibiotics, using intravenous Ceftriaxone 2g BD and Metronidazole 500mg TDS. Intravenous Dexamethasone was also commenced in view of the cerebral oedema. Unfortunately, the patient showed no clinical improvement, prompting repeat CT and Magnetic Resonance Imaging (MRI), which revealed an unchanged left basal ganglia enhancing lesion with worsening obstructive hydrocephalus and perilesional oedema causing midline shift. The lesion was in contact with the lateral wall of the left lateral ventricle, with evidence of acute ventriculitis and involvement of the basal cisterns (Figure 2). A right-sided ventriculo-peritoneal shunt was inserted in view of the worsening hydrocephalus and persistent symptoms of headache and vomiting. Stereotactic drainage and biopsy of the lesion were discussed but could not be performed in view of its deep location. A decision was thus made to initiate anti-tuberculosis therapy (ATT) empirically and to cease antibiotics. Subsequently, the patient’s clinical state improved with ongoing ATT and active inpatient rehabilitation, alongside radiological improvement in the lesion (Figure 3).
Table 1: Cerebrospinal Fluid (CSF) test and other investigations performed during admission.
Test for Cerebrospinal Fluid (CSF) | Results | Other Test | Results
Micro-Total Protein | 3.8 g/dL | Lumbar Puncture Opening Pressure | 19 cm H2O
Glucose | 1.65 mmol/L | Lumbar Puncture Closing Pressure | 16 cm H2O
Culture & Sensitivity | Negative | Random Capillary Glucose | 7.3 mmol/L
India Ink | Negative | HIV Antibody | Negative
Ziehl-Neelsen Stain | Negative | Echocardiogram | No vegetation or mass
MTB Culture | Negative | |
MTB NAAT (GeneXpert) | Negative | |
Cell Count & Cytology | No atypical cells; mixture of polymorphs and lymphocytes seen | |
Figure 1: CT imaging of the brain on (a) axial and (b) sagittal view, illustrating a well-circumscribed, rim-enhanced lesion in the left basal ganglia region, suspicious of an abscess.
Figure 2: T2-sequence of MRI brain on axial view (a) pre- and (b) post-procedure involving ventriculo-peritoneal shunting, illustrating a well-circumscribed hyperintense basal ganglia lesion, with peri-lesional oedema, as well as evidence of hydrocephalus.
Figure 3: Non-contrasted CT brain on axial view, illustrating less apparent left basal ganglia lesion, with hypodensities in areas of previous oedema, as well as an in-situ ventriculo-peritoneal shunt.
Discussion
Basal ganglia abscesses are very rare, with an incidence of 0.9 to 4% of all brain abscesses3. They are often disseminated lesions from sources such as congenital heart disease infections, intrathoracic and abdominal sepsis, dental caries, otitis media or sinusitis3-4. Solitary tuberculosis lesions, which include tuberculous abscesses (TBA), are even less common in the basal ganglia, even in endemic countries, as they normally have a predilection for the cerebellum and brainstem5. TBA has been seen in both immunocompromised and immunocompetent patients, with only slightly differing clinical presentations6-9.
As with other brain abscesses, stereotactic aspiration remains the gold standard diagnostic tool, but there is a risk of rupture into the ventricles or subarachnoid space (leading to ependymitis or meningitis), worsening neurological deficits and, more importantly, the possible need for repeated procedures in as many as 70% of patients9.
Although experts advocate combining surgical and chemotherapeutic therapy in managing TBA, the former was limited by the deep location of the lesion and the latter by the lack of histological evidence. Furthermore, the absence of risk factors, the negative yield from cultures, NAAT and ZN stain, and the inability to biopsy made decision-making complex in our patient. Fortunately, there is evidence to support presumptive ATT for diagnostic and therapeutic purposes, which was adopted in our case5. This, however, requires cerebrospinal fluid examination, lung CT imaging and brain MRI, which can provide circumstantial evidence for diagnosis and monitor treatment progress, all of which were performed in our patient5. In fact, CT imaging of the thorax is considered mandatory in cases suspicious of asymptomatic and subclinical extra-neurological tuberculosis, although the yield remains poor (detecting abnormalities in only 38 to 56%)10.
Conclusion
Although CNS involvement in extra-pulmonary tuberculosis is not uncommon, TBA in the basal ganglia region remains a unique entity that poses a diagnostic challenge, as histological evidence is often difficult to obtain. As adopted in the case highlighted, empirical ATT remains a valid option, especially in areas that lack the resources and skills needed to perform intracranial biopsies, a common situation in endemic regions globally.
Oesophagogastroduodenoscopy (EGD) is a common same-day procedure used for both diagnostic and therapeutic purposes, during which a small flexible fibreoptic camera is introduced through the mouth and advanced through the pharynx into the oesophagus, stomach, and duodenum. EGD procedures are performed under deep sedation, since they can elicit significant pain, discomfort, and anxiety. When pain is not controlled properly, this can lead to increased use of adjunctive medications such as opioids, which can extend the length of stay and increase adverse outcomes.1 In a prospective, randomised, double-blinded study, Bedirli et al. found that opioid medications such as fentanyl are associated with increased adverse outcomes.2
Since procedural sedation-related complications (such as hypoxia, hypotension, desaturation, and emergent airway intervention) remain one of the biggest challenges in EGD procedures, it is important to select the correct medications to reduce these complications.3 Currently, propofol is the most common and popular main procedural sedation agent used for EGD procedures.4,5
Propofol is the preferred main intravenous agent in EGD procedures due to its amnestic, sedative, and hypnotic properties.6 It has also been favoured for its ultra-rapid onset and short duration of effect, typically taking 30-60 seconds for onset of action and lasting 4-8 minutes.7 However, propofol has been associated with significant adverse effects, including dose-related hypotension, bradycardia, laryngospasm, and apnoea.8,9 Propofol has no analgesic properties and is usually combined with opioids for pain control.10
Ketamine is classified as a dissociative anaesthetic and is known to provide analgesia and amnesia. Importantly, it causes less respiratory and cardiovascular depression when used alone in children older than 4 months.11 However, ketamine alone can cause side effects such as laryngospasm, increased secretions, and vomiting.12,13
The mixture of propofol and ketamine in one syringe (coined ketofol) has been shown to be effective for sedation in various procedures, such as spinal anaesthesia,14 along with orthopaedic15 and cardiovascular procedures16 in adults and children. This combination has been favoured for brief but painful emergency room procedures due to the opposing haemodynamic and respiratory effects of the two sedative medications.17 The negative cardiac effects produced by propofol can be attenuated by ketamine, resulting in an increase in mean arterial pressures and cardiac indices.18,19 The complementary effect of the two medications has enabled lower doses of each, thus lowering toxicity and side effects.20,21 Although there are studies comparing ketofol sedations,22-25 no study has been published comparing ketofol to propofol in children during EGD procedures.
The primary goal of this study was to determine if there were differences in the outcomes of adverse cardiac and pulmonary events, vital sign parameters including objective pain, and administration of adjunct pain medication (for which fentanyl was used in this study) during EGD procedures between propofol and ketofol groups. A secondary goal was to determine if there was a difference in site performance metrics for EGD procedures (sedation time, stop of sedation to discharge time, and length of stay) between propofol and ketofol groups.
Methods:
This study protocol was approved by the Naval Medical Center Portsmouth Institutional Review Board in compliance with all applicable federal regulations governing the protection of human subjects. Research data were derived from an approved IRB protocol, number NMCP.2018.0021. Written informed consent was not required by the Naval Medical Center Portsmouth IRB, as the data concerned historical, de-identified patients.
Study Design
This was a single centre, retrospective study of a cohort of children (ranging from 2 to 18 years). Data were collected from March 2011 to September 2013 at Naval Medical Center Portsmouth’s Paediatric Sedation Centre. The EGD and sedation protocol was not changed during this period, and EGD procedures and sedation were performed by the same paediatric gastroenterologists and paediatric intensivists throughout. All patients in this study underwent a same-day EGD procedure. Forty-one patients underwent deep sedation with propofol as the main sedation agent from March 2011 to May 2012. Forty-nine patients underwent deep sedation with a propofol and ketamine combination as the main sedation agent from May 2012 to September 2013. Each main sedation agent was given in a similar fashion: an initial intravenous loading dose based on 1 mg/kg propofol, followed by an intravenous infusion of 200-250 mcg/kg/minute started less than 10 minutes prior to the start of the procedure and stopped at the end of the procedure. The ketofol solution used was a 1:5 ratio of ketamine to propofol (40 mg:200 mg). One mcg/kg of fentanyl was given when the sedationist perceived that the patient was experiencing pain during the EGD. Exclusion criteria included: patients less than 2 years or greater than 18 years of age, those less than 10 kg, and patients receiving adjunct medications other than fentanyl, ondansetron, or lidocaine.
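The dosing arithmetic in the protocol above can be sketched as follows. This is an illustrative calculation only, not clinical guidance; the function name and return structure are ours, and the constants (1 mg/kg load, 1:5 ketamine:propofol mixture, 200-250 mcg/kg/min infusion) are taken directly from the study protocol.

```python
def ketofol_plan(weight_kg):
    """Illustrative dose arithmetic for the protocol described above.

    NOT clinical guidance; names and structure are hypothetical.
    """
    # Loading dose: 1 mg/kg of the propofol component.
    loading_propofol_mg = 1.0 * weight_kg
    # Ketofol is mixed 1:5 ketamine:propofol (40 mg : 200 mg per syringe),
    # so each mg of propofol carries 40/200 = 0.2 mg of ketamine.
    loading_ketamine_mg = loading_propofol_mg * 40 / 200
    # Maintenance infusion: 200-250 mcg/kg/min, converted to mg/min.
    infusion_mg_per_min = (200 * weight_kg / 1000, 250 * weight_kg / 1000)
    return loading_propofol_mg, loading_ketamine_mg, infusion_mg_per_min

# For a 20 kg child: a 20 mg propofol load carrying 4 mg of ketamine,
# then a 4-5 mg/min maintenance infusion.
```

The fixed 1:5 mixing ratio is what ties the two loading doses together: specifying the propofol component fully determines the co-administered ketamine.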
Data Collection
Fourteen patients were excluded due to weight less than or equal to 10 kg. In addition, 12 patients were excluded due to age less than 2 years or greater than 18 years. Past medical history, ASA class, procedure indications, age, weight, sex, pain scores, vital signs (mean arterial blood pressure [MAP], heart rate [HR], and respiratory rate [RR]), unplanned events (hypoxia, defined as SpO2 less than 85% at any time point, and hypotension, defined as a mean arterial blood pressure decrease of greater than 20% from baseline), emergent airway intervention (defined as apnoea needing bag mask ventilation or CPAP use), and unplanned intubation were recorded. Vitals and pain scores were obtained at initial presentation in the sedation suite, at the start of the procedure, every 5 minutes during the procedure until the stop of sedation, and at time of discharge. Midway procedure vitals used for statistical analysis were those recorded at the half-way point of the patient’s EGD procedure. The Children’s Hospital of Eastern Ontario Pain Scale (CHEOPS) was administered every 5 minutes during the EGD procedure by an independent United States Navy Corpsman and used to evaluate the patient’s pain before, during, at stop of sedation, and after the procedure.26
Statistics
All statistics in this study were calculated using the SPSS Statistics programme (IBM Corp. Released 2017. IBM SPSS Statistics for Windows, Version 25.0. Armonk, NY: IBM Corp.). The Mann-Whitney U nonparametric test between independent groups was performed to compare age, weight, total EGD procedure time, total time of sedation during the EGD procedure, time from stop of sedation to discharge, total length of stay, and total fentanyl use (in mcg/kg) between the two main groups in this study (propofol and ketofol). The significance level was set to 0.008 after a Bonferroni correction, in order to address the probability of making one or more false discoveries when performing multiple hypothesis tests. A repeated measures ANOVA was run to determine the effect of treatments over time for HR, MAP, and RR. Mauchly's test of sphericity was performed for both age groups. The Fisher Exact test was performed to compare propofol to ketofol for unplanned hypotensive events and unplanned apnoea requiring bag valve mask or CPAP. Pearson correlation statistics were performed to study possible correlations between propofol or fentanyl and sedation time or length of stay; statistical significance for Pearson correlations was considered at two-tailed p<0.001.
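As a rough sketch of the two core calculations in this section, the following pure-Python example computes a Mann-Whitney U statistic (with tie-averaged ranks) and a Bonferroni-corrected significance threshold. The comparison count of 6 is an assumption for illustration only; dividing 0.05 by it reproduces the study's 0.008 threshold.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples (tie-averaged ranks)."""
    pooled = sorted(x + y)

    def avg_rank(v):
        first = pooled.index(v) + 1                 # 1-based first position of v
        last = len(pooled) - pooled[::-1].index(v)  # 1-based last position of v
        return (first + last) / 2                   # average rank handles ties

    r1 = sum(avg_rank(v) for v in x)                # rank sum of the first sample
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)                    # conventional U = min(U1, U2)

# Bonferroni correction: divide the family-wise alpha by the number of tests.
n_comparisons = 6                                   # assumed count, for illustration
alpha = round(0.05 / n_comparisons, 3)              # -> 0.008, as in the study
```

In practice a statistics package (SPSS here, or `scipy.stats.mannwhitneyu` in Python) would also supply the p-value; the sketch shows only how the statistic and the corrected threshold are formed.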
Results:
Table 1. Demographics divided into 2 age groups. EE = eosinophilic oesophagitis. * indicates statistically significant difference.
Sedation Risk Age Groups | 2-11 years old | | 12-18 years old |
 | Propofol (n=26) | Ketofol (n=27) | Propofol (n=15) | Ketofol (n=22)
General Demographics
Male | 11 (42.3%) | 16 (59.3%) | 6 (40%) | 10 (45.5%)
Female | 15 (57.7%) | 11 (40.7%) | 9 (60%) | 12 (54.5%)
Mean Age (years old) | 6 | 7 | 15 | 16
Mean Weight (kg) | 21.6 | 23.7 | 57.7 | 57.4
ASA I, II | 26 (100%) | 25 (92.6%) | 15 (100%) | 22 (100%)
ASA III | 0 (0%) | 2 (7.4%) | 0 (0%) | 0 (0%)
EGD Indications
EE | 4 (15.4%) | 4 (14.8%) | 2 (13.3%) | 0 (0%)
GERD | 11 (42.3%) | 9 (33.3%) | 2 (13.3%) | 5 (22.7%)
Dysphagia or Feeding Intolerance | 0 (0%) | 3 (11.1%) | 4 (26.6%) | 4 (18.1%)
Foreign Body Ingestion | 1 (3.8%) | 2 (7.4%) | 0 (0%) | 0 (0%)
Failure to Thrive | 3 (11.5%) | 1 (3.7%) | 0 (0%) | 0 (0%)
Abdominal Pain | 7 (26.9%) | 9 (33.3%) | 6 (40%) | 11 (50%)
Recurrent Emesis | 3 (11.5%) | 1 (3.7%) | 1 (6.6%) | 1 (4.5%)
Gastritis | 0 (0%) | 0 (0%) | 1 (6.6%) | 3 (13.6%)
Other | 2 (7.7%) | 0 (0%) | 2 (13.3%) | 4 (18.1%)
Unplanned Events
Hypotension (blood pressure >20% decrease from baseline) | 4 (15.4%) | 1 (3.7%) | 2 (13.3%) | 0 (0%)
Apnoea requiring bag valve mask or CPAP | 14 (53.8%)* | 1 (3.7%)* | 5 (33.3%)* | 0 (0%)*
Intubations | 0 (0%) | 0 (0%) | 1 (6.6%) | 0 (0%)
Figure 1. Propofol amount total per weight (mg/kg). Bars represent standard error of the mean. There was no significant difference between the propofol and ketofol in all age groups.
Figure 2. Fentanyl amount total per weight (mcg/kg). Bars represent standard error of the mean. In all age groups, there was a significant difference between propofol and ketofol. *p<.008.
Figure 3. Vital signs. Mean arterial pressures for ages A) 2-11 years and B) 12-18 years. Main sedative agents’ MAPs were statistically different for ages 2-11 years (p =0.004), but not for ages 12-18 years (p =0.224) for all time periods. Heart rate for ages C) 2-11 years and D) 12-18 years. For heart rate, there was a statistically significant interaction between treatment and time for both age groups using the Greenhouse-Geisser correction, p =0.002, p =0.014. Main sedative agents’ HRs were statistically different for both age groups, p <0.001 and p =0.004.
Figure 4. Duration of time. A) total sedation time, B) stop of sedation to discharge, and C) total length of stay. In all age groups, there was no significant difference between propofol and ketofol. Bars represent standard error of the mean. *p<0.008.
Figure 5. Pearson Correlation. Propofol versus total sedation time for ages A) 2-11 years (p<0.01*), and B) 12-18 years (p<0.01*). Propofol versus length of stay for ages C) 2-11 years (p<0.01*), and D) 12-18 years (p<0.01*). * Correlation is significant at the 0.01 level (2-tailed).
Figure 6. Pearson Correlation. Fentanyl versus total sedation time for ages A) 2-11 years (p=0.506), and B) 12-18 years (p=0.961). Fentanyl versus length of stay for ages C) 2-11 years (p=0.378), and D) 12-18 years (p=0.352). No significant correlation was seen for any age group.
Ninety patients were retrospectively analysed in this study. Baseline demographics were similar between the groups, except for gender proportions (Table 1). Similar amounts of propofol per weight were used in each group (Figure 1). Ketofol significantly reduced fentanyl use in all age groups by ≥0.99 mcg/kg compared to propofol alone (p<0.008 for all age groups, Figure 2). In the 2-11 year old age group, the propofol group required a mean fentanyl dose of 1.96 ± 1.24 mcg/kg and the ketofol group 0.44 ± 0.49 mcg/kg; thus the ketofol group required about 1.52 mcg/kg less fentanyl during EGD procedures. In the 12-18 year old group, the propofol group required a mean fentanyl dose of 1.38 ± 1.05 mcg/kg and the ketofol group 0.39 ± 0.59 mcg/kg; thus the ketofol group required about 1 mcg/kg less fentanyl during EGD procedures.
Vital signs (HR, RR, and MAP) and CHEOPS pain scores were obtained and analysed for baseline vitals, at the procedure midpoint, at stop of sedation, and at time of discharge. CHEOPS pain scores’ mean was 6 throughout all time periods for both groups, thus there was no statistically significant difference.
A repeated measures ANOVA was run to determine the effect of treatments over time for RR. There was sphericity for the interaction term, as assessed by Mauchly's test of sphericity in both age groups (p =0.070, p =0.762). For RR, there was not a statistically significant interaction between treatment and time for both age groups, p =0.163, p =0.804. The treatments were not found to be statistically different for either age group, p =0.736 and p =0.224.
A repeated measures ANOVA was run to determine the effect of treatments over time for HR. As assessed by Mauchly's test of sphericity, no sphericity was found in either age group (p <0.05). For HR, using the Greenhouse-Geisser correction, a statistically significant interaction was found between treatment and time for both age groups, p =0.002, p =0.014. The treatments were found to be statistically different for both age groups, p <0.001 and p =0.004. A repeated measures ANOVA was run to determine the effect of treatments over time for MAP. Sphericity was found in both age groups for the interaction term, as assessed by Mauchly's test of sphericity (p =0.209, p =0.269). For MAP, there was not a statistically significant interaction between treatment and time for either age group, p =0.261, p =0.591. The treatments were statistically different for the 2-11 year old group (p=0.004), but not for the 12-18 year old group (p=0.224) (Figure 3 A-D).
Unplanned events were analysed for clinically significant hypoxia, hypotension, emergent airway intervention, and unplanned intubations. Ketofol compared to propofol had fewer hypotensive events in the 2-11 year old age group (3.7% versus 15.4%, p=0.192) and in the 12-18 year old age group (0% versus 13.3%, p=0.144). Ketofol compared to propofol had statistically significantly fewer apnoea events requiring bag valve mask or CPAP intervention for the 2-11 year old group (3.7% versus 53.8%, p<0.001) and for the 12-18 year old group (0% versus 33.3%, p=0.005). There were no significant hypoxia events. There was one unplanned intubation in the propofol group, in a healthy ASA I twelve-year-old female who had a 20 mL emesis episode after the loading dose of propofol was given on induction (Table 1).
With propofol leading to significantly more fentanyl usage, more hypotension and more emergent airway intervention during EGD procedures, we performed analysis to see if there was an effect on sedation time, time from stop of sedation to discharge, and length of stay (LOS). In all age groups, there was no statistical difference between propofol and ketofol for sedation time (p=0.115 for ages 2-11 years, p=0.124 for ages 12-18 years), time from stop of sedation to discharge (p=0.033 for ages 2-11 years, p=0.511 for ages 12-18 years), and LOS (p=0.026 for ages 2-11 years, p=0.109 for ages 12-18 years) (Figure 4). Based on the Pearson correlation test, propofol was positively correlated with sedation time and LOS (+0.753 for sedation time and +0.611 for length of stay, with <0.001 significance) (Figure 5). There was no significant correlation between fentanyl and sedation time or LOS (Figure 6).
Discussion:
This was a retrospective study of 90 paediatric patients, ages 2-18 years, undergoing EGD procedures with either propofol or ketofol as the main sedative medication. Patients in this study had similar weights, ages, ASA scores, and EGD indications; however, baseline demographics differed with regard to gender proportion (Table 1). Based on the large prospective database on sedation use for procedures outside the operating room obtained by the Paediatric Sedation Research Consortium,27 we separated our patient population into two age risk groups to make our results generalisable. To our knowledge, this study is the first to show that ketofol, when compared to propofol, significantly reduces fentanyl use (≥0.99 mcg/kg, Figure 2) and cardiopulmonary adverse outcomes (Table 1) in paediatric EGD procedures.
EGD procedures are known to be invasive and painful. To decrease the pain experienced by the patient, various adjuncts are often utilised; in our study, we used fentanyl for adjunct pain control. In both main sedation medication groups (propofol and ketofol), there was no statistically significant difference in objective pain based on CHEOPS scores, with a mean score of 6 in all age groups (p>0.05). However, the amount of adjunct needed to maintain adequate pain control differed statistically and clinically between the groups (Figure 2). In all age groups, the propofol group required almost 1 mcg/kg more fentanyl than the ketofol group, potentially exposing patients to a higher incidence of side effects such as respiratory depression and hypoxia.28-30
The mechanism that leads to this significant difference in opioid demand between ketofol and propofol is most likely ketamine’s ability to stimulate opioid sigma receptors,31 leading to opioid-sparing effects. It is unclear whether the propofol-ketamine interaction heightens this opioid-sparing effect, because ketamine alone has not been shown to produce opioid-sparing effects in children.32
In a review of our population’s vital signs, there was a significantly higher MAP in the ketofol group compared to the propofol group for ages 2 to 18 years (Figure 3 A-B). In addition, the ketofol group had a statistically higher HR compared to the propofol group for ages 2 to 18 years (Figure 3 C-D). We believe this ketamine-mediated increase in MAP and HR in the ketofol group is a protective factor against major hypotensive changes.31 It is this effect that minimised unplanned hypotensive events in our ketofol group compared to the propofol group, by 11.7% in ages 2-11 years and 13.3% in ages 12-18 years (Table 1).
One of the more notable differences between the main sedative medication groups was in unplanned apnoea events requiring CPAP or bag-mask ventilation. In the 2 to 11 year age group, the propofol group had 50.1% more apnoea events needing intervention than the ketofol group; in the 12 to 18 year group, 33.3% more. These differences were statistically significant. Whether they are also clinically significant is unclear, since no emergent airway intubation was needed for these events, although there was one emergent airway intubation in the 12-18 year old propofol group. On review of the medical records, these apnoea events requiring respiratory intervention mostly occurred after the initial loading dose of either propofol or ketofol, before or at the start of the infusion. This brings into question whether omitting the loading-dose bolus would decrease adverse effects.
Despite the propofol group requiring more respiratory intervention and more fentanyl, and showing less haemodynamic stability, there were no statistically significant differences in total sedation time and LOS between the two main sedation medication groups (Figure 4). However, correlation analysis showed a positive correlation between increasing propofol dose and both total sedation time and LOS in ages 2 to 18 years (Figure 5); therefore, with longer procedures there could be a statistically and possibly clinically significant increase in total sedation time and LOS in the propofol-only group. We can thus infer that propofol may be less suitable than ketofol as the main sedation medication for longer, similarly painful procedures in children aged 2 to 18 years, such as an EGD procedure followed by a colonoscopy.
A limitation of our study is its retrospective design: we could not blind our sedationists to the main sedation medication chosen, nor control the interventions performed, although strict exclusion criteria allowed us to control for the interventions included in our analysis. We also did not analyse patients under the age of 2 years; on reviewing the data, only seven such patients were identified, too few for adequately powered statistical analysis. Another limitation to consider is variation in procedure protocols over time; however, protocols remained the same across the more than 2 years of data included, and EGD indications were similar between the ketofol and propofol groups (Table 1). Generalisability is a further consideration: this study was conducted at a single centre, but the way we analysed our results should facilitate large-scale prospective studies to investigate our findings further. A final limitation is the difficulty of directly attributing a significant adverse event to a particular fentanyl dose: given the short length of the EGD procedure, the constant sedation medication infusion, and the fentanyl dosing, we can only speculate based on correlation.
Conclusions:
Our study found that ketofol significantly reduced fentanyl use (by ≥0.99 mcg/kg in all age groups) and cardiopulmonary adverse events compared to propofol alone. In addition, increasing amounts of propofol were positively correlated with increasing LOS and total sedation time for a child undergoing an EGD procedure. In conclusion, our study indicates that ketofol can be a safer alternative to propofol alone for deep sedation in EGD procedures for children aged 2 to 18 years.
Acute upper gastrointestinal bleeding, presenting as hematemesis, melena, or both, is an important medical emergency. The aetiological spectrum of upper gastrointestinal bleeding (UGIB) varies from region to region1. Endoscopic therapies for patients with signs of recent haemorrhage in peptic ulcer have changed the outlook of UGIB management, and the addition of proton pump inhibitor therapy in non-variceal UGIB has further reduced hospital stay, recurrent bleeding, and the need for surgery2. Another milestone in the decline of UGIB has been the eradication of H. pylori; globally, the prevalence of H. pylori infection has decreased due to better hygiene, early diagnosis and eradication3. These factors have contributed to the changing trends in UGIB. Around 50% of patients with UGIB are H. pylori-positive, and re-bleeding occurs in 7-16% of cases1. UGIB due to peptic ulcer, once frequent, has now declined all over the globe, as demonstrated by various researchers4,5,6. Unfortunately, despite advances in endoscopic and pharmacological treatment, mortality in UGI bleeding still ranges between 3 and 14%7; patients with UGI bleeding due to duodenal ulcer are known to be particularly prone to death, as demonstrated by Quan et al8. Advanced age and comorbidity on hospital admission increase the risk of re-bleeding and mortality. Re-bleeding and mortality rates are higher among patients with variceal bleeds, and 50-60% of patients with cirrhosis develop variceal bleeding1. This warrants a careful approach to the management of UGI bleeding. To predict re-bleeding in a given case of UGI bleeding, various clinical and endoscopic models have been developed over time. Of these, the Rockall score, which combines clinical findings (age, shock, presence of co-morbid disease) with endoscopic findings, has proved valuable in predicting the duration of hospital admission and the mortality rate9.
The Rockall score is feasible because it depends mainly on simple clinical data, and it becomes more complete after an endoscopic procedure9. The score divides patients into four subgroups according to their clinical data to estimate the risk of death and re-bleeding. A comparison of the Rockall score, the Blatchford score at first assessment, and the Addenbrooke score concluded that the Rockall score predicted death with 98% accuracy and re-bleeding with 86.4% sensitivity10. We therefore calculated the Rockall score in our study cohort and assessed various prognostic factors, including changing trends over the past decades.
Methods
Study design
This retrospective study was conducted from January 2015 to December 2017 at King Abdul Aziz Specialist Hospital, Taif, a tertiary care centre in the western region of Saudi Arabia. The data were collected from case files and electronic medical records. Data on age, comorbid disease, presence of shock, endoscopic intervention, duration of hospital stay, requirement for blood transfusion, and surgery were collected to measure the outcome of UGI bleeding.
Depending upon hemodynamic status, patients with upper GI bleeding were managed either in the intensive care unit (ICU) or the high dependency unit of the hospital. Blood transfusion was given to maintain Hb above 8 g/dl, platelet transfusion if the platelet count was <70,000, and fresh frozen plasma when the INR was deranged in patients with chronic liver disease.
Recurrent bleeding was defined as hematemesis, melena, or both, with either shock (pulse rate >100 beats/min, systolic blood pressure <100 mmHg accompanied by cold sweats, pallor, and oliguria) or a decrease in haemoglobin concentration of 2 g/dL over 24 hours. Re-endoscopy, if needed, was used only to confirm recurrent bleeding.
The timing of UGI endoscopy after admission was recorded for each patient, along with details of stigmata of recent haemorrhage (a spurting vessel, active bleeding in an ulcer, a visible vessel, or a clot over the ulcer that could not be dislodged by gentle washing with water delivered through the endoscope channel). The Rockall score was calculated for all patients.
Patients with variceal bleeding were primarily managed with octreotide infusion, antibiotics, and endoscopic variceal ligation (EVL) or endoscopic sclerotherapy (EST), depending on the situation. All patients were followed for re-bleeding clinically and by haemoglobin levels during their hospital stay. Patients who remained hemodynamically stable for 72 hours were discharged.
After the fifth day, patients positive for H. pylori on CLO test during endoscopy received triple therapy: Amoxicillin 1 g twice daily and Clarithromycin 500 mg twice daily for 2 weeks, with Esomeprazole 20 mg twice daily continued for 4 weeks. Patients who were H. pylori-negative received Esomeprazole 20 mg twice daily for 4 weeks.
Inclusion criteria:
Patients with confirmed upper GI bleeding (variceal and non-variceal) were enrolled in this study.
Variceal bleeding due to portal hypertension included both cirrhotic and non-cirrhotic patients.
Exclusion criteria:
Patients with terminal cancer.
Patients with upper GI bleeding where endoscopy had not been done due to any reason and Rockall score could not be calculated.
Patients with persistent shock necessitating emergency surgery, as a life-saving procedure.
Statistical methodology:
Data were described in terms of frequencies (number of cases) and valid percentages for categorical variables. Mean and standard deviation were used to describe parametric numerical variables, while the median and interquartile range were used for non-parametric variables. Spearman's rho was used to test the correlation between the non-parametric numerical variable (Rockall score) and patients’ age. All statistical calculations were performed using IBM SPSS (Statistical Package for the Social Sciences; IBM Corp, Armonk, NY, USA) release 21 for Microsoft Windows.
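For illustration, Spearman's rho (the rank correlation used above) can be sketched in pure Python: it is simply the Pearson correlation of the ranked data, with tied values assigned their average rank. The ages and scores below are invented for demonstration and are not taken from the study cohort.

```python
def _ranks(xs):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks of x and y."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ages and Rockall scores, for demonstration only.
ages = [35, 42, 55, 61, 68, 74, 82]
scores = [1, 2, 2, 3, 4, 4, 6]
rho = spearman_rho(ages, scores)  # positive: score rises with age
```

In practice this is what SPSS (or `scipy.stats.spearmanr`) computes; the hand-rolled version is shown only to make the rank-then-correlate logic explicit.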
Results
A total of 120 participants (76 males, 63.3%), with a mean ± SD age of 58.4 ± 18.7 years, were included in this study. The Rockall score had a median (IQR) value of 3, indicating a low to moderate risk of bleeding recurrence and death.
The majority of the study cohort [n=88 (73.3%)] were Saudi nationals, and 32 (26.7%) patients were of other nationalities. All patients received initial resuscitation as per the UGIB protocol of the hospital. Of the 120 patients, 30 (25%) underwent endoscopy immediately after admission to the intensive care unit due to hemodynamic instability, 14 (11.7%) underwent endoscopy within 6 hours of hospital admission, and 63% underwent endoscopy within 24 hours of hospital admission. The details are shown in Figure 1.
The Rockall score was calculated for all patients based on their age, presence of shock, comorbidities, diagnosis and major stigmata of recent haemorrhage. The details are shown in table 1.
Table 1: Parameters of the Rockall score in the non-variceal bleeds

Parameter | Category (score) | Frequency | Percentage
Age | <60 = 0 | 52 | 43.3
Age | 60-79 = 1 | 52 | 43.3
Age | >80 = 2 | 16 | 13.3
Age | Total | 120 | 100.0
Shock | No shock = 0 | 80 | 66.7
Shock | Tachycardia (pulse ≥100, systolic BP ≥100) = 1 | 40 | 33.3
Shock | Total | 120 | 100.0
Comorbidity | Any co-morbidity except renal failure, liver failure, and/or disseminated malignancy = 2 | 70 | 58.3
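As a worked illustration, the Rockall scoring rules (of which Table 1 tabulates the clinical components observed in this cohort) can be sketched as follows. The category boundaries and weights follow the original Rockall system as commonly published; the example patient is hypothetical.

```python
def rockall_score(age, pulse, systolic_bp, comorbidity, diagnosis, srh):
    """Full (post-endoscopy) Rockall score, per the original scoring system.

    comorbidity: 'none' = 0; 'major' (e.g. cardiac failure, IHD) = 2;
                 'severe' (renal failure, liver failure, disseminated malignancy) = 3
    diagnosis:   'mallory-weiss' or 'none' = 0; 'other' = 1; 'malignancy' = 2
    srh:         stigmata of recent haemorrhage: 'none' / 'dark spot' = 0;
                 blood, adherent clot, visible or spurting vessel = 2
    """
    score = 0
    # Age: <60 = 0, 60-79 = 1, >=80 = 2
    if age >= 80:
        score += 2
    elif age >= 60:
        score += 1
    # Shock: none = 0; tachycardia (pulse >=100, SBP >=100) = 1; SBP <100 = 2
    if systolic_bp < 100:
        score += 2
    elif pulse >= 100:
        score += 1
    score += {'none': 0, 'major': 2, 'severe': 3}[comorbidity]
    score += {'mallory-weiss': 0, 'none': 0, 'other': 1, 'malignancy': 2}[diagnosis]
    score += {'none': 0, 'dark spot': 0, 'blood': 2, 'clot': 2,
              'visible vessel': 2, 'spurting vessel': 2}[srh]
    return score

# Hypothetical 72-year-old with tachycardia, a major comorbidity,
# a bleeding ulcer ('other') and a visible vessel: 1 + 1 + 2 + 1 + 2 = 7
example = rockall_score(72, 110, 120, 'major', 'other', 'visible vessel')
```

The maximum possible score is 11; scores of 2 or less are conventionally regarded as low risk, which is consistent with this cohort's median of 3 sitting at the low-to-moderate boundary.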
It was observed that 46 (38.3%) patients underwent endoscopic variceal ligation (EVL) and 20 (16.7%) endoscopic sclerotherapy (EST). A heater probe was used in 5.0% and a gold probe in 5.0% of patients with stigmata of recent haemorrhage (SRH). However, 48 (40.0%) patients had no features of SRH and therefore received no endotherapy; instead, they were managed with IV proton pump inhibitors as per protocol, together with supportive treatment.
The data on final EGD diagnosis are shown in table 2.
None of the patients in the study cohort underwent surgery to control UGI bleeding, and no mortality due to UGIB was recorded during this period.
To test the correlation between Rockall score and age, Spearman's rho correlation test was carried out; the data showed a significant (p<0.001), moderate positive relationship (correlation coefficient = 0.553) between age and Rockall score. This suggests a lower tendency for recurrent bleeding and a lower mortality rate among younger patients.
Discussion
The results of our study showed that hematemesis was the most frequent presenting symptom of UGI bleeding. This is similar to the study by Minakari et al11; however, the majority of patients in that study had peptic ulcer as the commonest aetiology, whereas in our study portal hypertension outnumbered peptic ulcer disease (Table 2). UGI endoscopy was carried out immediately after admission in 25% of patients due to ongoing bleeding, and the majority of patients had UGI endoscopic examination within 24 hours (Figure 1).
Figure 1: Timing of Esophagogastroduodenoscopy (EGD) after hospital admission
Assessing endoscopy timing, especially in variceal bleeding, Hsu YC et al12 concluded that delayed endoscopy beyond 15 hours, a high MELD score, failure of the first endoscopy, and hematemesis were independent risk factors for in-hospital mortality in cirrhotic patients with acute variceal haemorrhage. The cirrhotic patients in our study cohort were Child-Turcotte-Pugh class A or B, and none had hepatic encephalopathy on presentation. After endoscopic therapy, they were managed with standard treatment for variceal bleeding.
The prevalence of H. pylori positivity among patients with UGI bleeding in this study was 60%, and all positive patients were given standard eradication therapy. Data from a different Saudi Arabian centre revealed an H. pylori prevalence of around 70%, predominantly affecting females with peptic ulcer disease13, but those authors did not study its prevalence in UGI bleeding.
This study demonstrated re-bleeding in 16 (13.33%) patients, who were re-endoscoped and whose bleeding was controlled by various endotherapies. Re-bleeding was more frequent among older patients with comorbidities, and was also common among patients with a history of NSAID intake and the presence of oesophageal varices, in keeping with the literature14. Comparing the present results with previous data, we observed that whereas the commonest cause of UGI bleeding was previously duodenal ulcer, it is now variceal bleeding due to cirrhosis (HCV). We also compared the results of the current study with previous studies across different parts of the globe14,15,16. Duodenal ulcer was previously the most frequent cause, and various surgical methods such as vagotomy were used to control bleeding in the pre-PPI era; however, with the advent of PPIs and H. pylori eradication, the frequency of UGI bleeding due to peptic ulcer has declined.
We recorded only 120 patients with UGI bleeding over three years at our centre; this may not reflect the true incidence in the region, as the data came from a single centre. Nevertheless, the overall incidence of UGI bleeding has decreased over the past decades all over the globe. Loperfido et al17 compared the incidence of UGIB in 587 patients between 1983-1985 and 2002-2004, observing that UGI bleeding decreased from 112.5 to 89.8 per 100,000/year, with the incidence of peptic ulcer bleeding halving between the two periods; the frequency of ulcer bleeding decreased by 41.6% in people younger than 70 years. There has been an obvious change in the trend of UGI bleeding in Saudi Arabia over the past 23 years, as in other regions of the globe: the number of patients with UGI bleeding has decreased, and the aetiology has shifted from ulcer to variceal bleeding.
In a large study published in 1995, reporting data on 1,246 patients over 14 years, Al Karawi et al18 observed that duodenal ulcer was the most common cause of UGI bleeding, followed by varices. The bleeding rate in their study was 89 cases per annum, while this study recorded only 40 admissions with active UGI bleeding per annum. Furthermore, contrary to their results, variceal bleeding outnumbered duodenal ulcer bleeding, connoting a changing trend in Saudi Arabia. Data from the southern region of Saudi Arabia also showed variceal bleeding to be the commonest cause of UGI bleeding19.
In yet another study, from Riyadh Central Hospital, most patients with UGI bleeding had oesophageal varices20. Non-cirrhotic portal hypertension (NCPH) was documented in 8 patients in the current study, all hailing from Egypt, which is an endemic region for schistosomiasis and NCPH. The predominant cause of portal hypertension in this study was chronic liver disease due to chronic HCV. Studying the pattern of liver disease in Saudi Arabia, Fashir B et al20 demonstrated HCV to be the commonest cause of CLD in this part of the globe. This suggests that meticulous screening and treatment of chronic HCV could go a long way towards reducing UGIB in the region. That said, in view of the global epidemic of obesity, variceal bleeding due to CLD following NASH may rise in coming years and become an important cause of UGIB, underlining the need to curb obesity and halt an increasing trend of variceal bleeding in the future.
Regarding trends in gastric ulceration, UGI bleeding has now shifted from H. pylori infection to the widespread use of medications such as NSAIDs and steroids, all over the world16, especially among older people. We demonstrated drug-induced UGI bleeding in 20 (16.7%) patients in this study (Table 2). Furthermore, the use of warfarin is expected to increase as the population ages and as atrial fibrillation and other cardiovascular ailments increase steadily. In a study by McGowan et al22, warfarin was an independent predictor of major bleeding after percutaneous coronary intervention (PCI) in patients receiving dual antiplatelet therapy.
Another common disease in the elderly population is rheumatoid arthritis (RA). RA is considered a comorbid disease in the Rockall score and increases the risk of mortality and haemorrhagic shock. The wide use of NSAIDs in RA patients raises the incidence of peptic ulcers and their complications, including UGIB23. This risk is significantly elevated when SSRIs are prescribed in combination with NSAIDs to allay anxiety and depression in these chronic disorders; physicians prescribing these medications together should exercise caution and discuss the risk of UGIB with patients24.
UGI bleeding due to malignancy was noted in 4 patients in our study cohort, similar to the data from the southern region of Saudi Arabia19.
In this study, about 12-16% of patients were diagnosed with either gastric ulcer or haemorrhagic gastritis. Data from Arar, a city in northern Saudi Arabia, revealed the prevalence of gastric ulcer to be twice that of duodenal ulcer; the authors observed that NSAID use, H. pylori infection, and stress were among the most relevant causes of peptic ulcer disease, although they did not study its bleeding complications25.
Conclusion
Based on the results discussed, it can be concluded that the frequency of UGI bleeding has declined and that peptic ulcer is no longer the predominant cause of UGI bleeding in Saudi Arabia; instead, variceal bleeding outnumbers other causes of UGIB. This changing trend demands a focus on the management of chronic HBV, HCV and NASH to prevent variceal bleeding. Furthermore, all medications, especially NSAIDs, must be used cautiously, particularly in elderly people. A step further would be to control hypertension and subsequent atrial fibrillation, so that drug-induced UGI bleeding is reduced in future.
The way a doctor dresses is a fundamental part of establishing a therapeutic alliance with patients.1,2 It has been shown that a doctor's dress can influence patient confidence, offer greater reassurance, engender higher levels of trust, improve adherence to prescribed medication regimens, and enhance willingness to complete return visits and to discuss sensitive issues.3,4 The literature in this field is mixed; for example, some studies suggest no correlation with perceived courteousness or professionalism,5,6 but we believe there is enough evidence that the manner in which a doctor dresses forms an important part of non-verbal communication, which matters for their interaction with patients, carers and other staff members.
Various studies have examined patient preferences regarding doctors’ dress. Formal dress or a white coat have been cited as favoured due to their perceived association with empathy, competence and trust.2,4,7,8 This contrasts with other studies, which found semi-formal dress to be preferred.9
In psychiatry, studies of inpatients have indicated a preference for smart attire and white coats as part of their doctors' dress code.10,11 McGuire et al also found that community patients preferred their psychiatrists to be dressed as “smart/formal”.12
In recent years, dress code policy for doctors in the UK has become more informal, and white coats have been abolished for a number of reasons.13 In this study, we sought to determine the attitudes of multiple stakeholders towards doctors’ dress in both general and psychiatric hospital settings.
Methods
We surveyed healthcare staff, patients, and carers in an emergency department at a district general hospital (the “medical setting”) and in a psychiatric hospital (the “psychiatric setting”) in the South East of England. The data were collected on a weekday between 09.00 and 17.00 at both settings, using a questionnaire based on Rehman et al.14 There were no exclusion criteria.
The survey questionnaire sampled demographic details and used nine questions and two sets of images (a male doctor and a female doctor) depicting three styles of dress: white coat, formal (tie and trousers for the male; dark skirt and white shirt for the female), and smart casual (“bare below the elbows”). The questionnaire was piloted among volunteer staff and assessed for user-friendliness and ease of comprehension before use; it was amended in line with the feedback received.
Results
337 individuals responded to the questionnaire, giving a response rate of 94%. Our sample was predominantly white (72%), female (62%) and married (43%). Respondent age, ethnicity and employment status were broadly representative of the local population.
Overall (Table 1), we found that the majority of respondents felt that the way doctors dress was important to them, and that respondents' location significantly affected their preferences (p<0.001). Although there was no overall majority preference for one dress code in either location, preferences within each varied significantly (medical: p<0.01; psychiatric: p<0.001), with the numerical preference being for formal dress in both settings, capturing 35% and 45% of the respondent vote respectively.
Within the three stakeholder-specific breakdowns (Tables 2-4), differences in preference reached significance for medical staff (p<0.001), psychiatric staff (p<0.001), psychiatric patients (p<0.05), and psychiatric carers (p<0.01). Like the overall results, there was no majority preference in any of these groups, but formal dress captured the highest numerical vote in medical staff (41%), psychiatric staff (55%), and in psychiatric patients (41%). Psychiatric carers preferred formal and smart casual dress broadly equally, which captured 36% and 40% of the vote respectively. Carers were the only stakeholder whose preferences were significantly influenced by their location (p< 0.01).
Dress code significantly influenced the attributes associated with the doctor wearing it (p<0.0001), as shown in Table 5. Formal dress captured the greatest proportion of every attribute tested, and considering total responses, formally dressed doctors were almost twice as likely to be associated with these attributes as those dressed in smart casual or a white coat.
52% of respondents were not aware that a doctors’ dress code policy existed. While 53% of respondents felt they should not be consulted when dress code is considered, 41% believed they should. 59% of respondents believed doctors adhered to their site's dress code policy, while 27% did not think so.
Discussion
To our knowledge, this is the first study to compare preferences in doctors’ dress code between a psychiatric hospital and a medical hospital, and no other study has simultaneously explored the attitudes of different key stakeholders in both medical and psychiatric settings regarding this important issue.
In this study, we have successfully captured the attitudes and perceptions of key stakeholders regarding doctors’ dress code. We found that overall, doctors’ dress code was felt to be important, and that in medical and psychiatric locations a formal dress code is preferred. Looking at staff, patients and carers specifically, we found a preference for formal dress among medical staff, psychiatric staff, and in psychiatric patients. Among psychiatric carers, formal dress was preferred equally to smart casual. There were no significant preferences among the other stakeholders surveyed.
This preference for formal dress is easily explained by the results shown in Table 5. Seeing a doctor in formal dress made it almost twice as likely that that doctor would be seen as possessing any of the eight positive attributes included. Clearly, in the eyes of the respondents to our survey, a formally dressed doctor was most likely to provide good care.
Location | Smart casual | White coat | Formal | No preference | Total | Within-group p value | Between-group p value
Medical | 42 | 40 | 59 | 26 | 167 | <0.01 | -
Psychiatric | 57 | 18 | 76 | 19 | 170 | <0.001 | -
Total | 99 | 58 | 135 | 45 | 337 | - | <0.001
Table 1. Dress code preferences among all stakeholders. P values were calculated using the Chi-squared test. NS = not significant (p>0.05).
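The between-group comparison in Table 1 is a Chi-squared test of independence on the 2×4 contingency table of location versus preference. A minimal pure-Python sketch, using Table 1's published counts (the helper function is illustrative, not the study's actual SPSS/statistical pipeline):

```python
def chi_squared(table):
    """Chi-squared statistic and degrees of freedom for a contingency table.

    table: list of rows of observed counts.
    Expected count = (row total * column total) / grand total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Table 1 counts, columns: smart casual, white coat, formal, no preference
observed = [
    [42, 40, 59, 26],  # medical
    [57, 18, 76, 19],  # psychiatric
]
stat, df = chi_squared(observed)
# df = 3; the statistic exceeds the df=3 critical value of 11.34 (p < 0.01),
# consistent with a significant effect of location on preference.
```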
Location | Smart casual | White coat | Formal | No preference | Total | Within-group p value | Between-group p value
Medical | 22 | 10 | 27 | 7 | 66 | <0.001 | -
Psychiatric | 22 | 4 | 35 | 3 | 64 | <0.001 | -
Total | 44 | 14 | 62 | 10 | 130 | - | NS
Table 2. Dress code preferences among staff. P values were calculated using the Chi-squared test. NS = not significant (p>0.05).
Location | Smart casual | White coat | Formal | No preference | Total | Within-group p value | Between-group p value
Medical | 14 | 14 | 15 | 10 | 53 | NS | -
Psychiatric | 16 | 9 | 24 | 10 | 59 | <0.05 | -
Total | 30 | 23 | 39 | 20 | 112 | - | NS
Table 3. Dress code preferences among patients. P values were calculated using the Chi-squared test. NS = not significant (p>0.05).
Location | Smart casual | White coat | Formal | No preference | Total | Within-group p value | Between-group p value
Medical | 6 | 16 | 17 | 9 | 48 | NS | -
Psychiatric | 19 | 5 | 17 | 6 | 47 | <0.01 | -
Total | 25 | 21 | 34 | 15 | 95 | - | <0.01
Table 4. Dress code preferences among carers. P values were calculated using the Chi-squared test. NS = not significant (p>0.05).
Dress code | Trust | Advice | Conf. | Return | Knowl. | Caring | Resp. | Auth. | Total
Smart casual | 77 | 57 | 59 | 74 | 49 | 109 | 51 | 38 | 514
White coat | 74 | 91 | 89 | 77 | 107 | 65 | 87 | 103 | 693
Formal | 142 | 138 | 142 | 134 | 132 | 110 | 143 | 145 | 1086
Table 5. Doctor attributes associated with different dress codes. Respondents were shown images of each dress code and asked “Which doctor would you…”: trust the most (Trust), follow the advice of (Advice), have confidence in their diagnosis and treatment (Conf.), return to for follow-up care (Return), regard as knowledgeable and competent (Knowl.), regard as caring and compassionate (Caring), regard as responsible (Resp.), regard as authoritative and in control (Auth.). P<0.0001 (calculated using the Chi-squared test). Responses were excluded where more than one dress code was selected for an attribute, or where no choice was made.
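The "almost twice as likely" claim made about Table 5 can be checked directly from its row totals:

```python
# Attribute counts per dress code, in Table 5's column order:
# trust, advice, conf., return, knowl., caring, resp., auth.
smart_casual = [77, 57, 59, 74, 49, 109, 51, 38]
white_coat = [74, 91, 89, 77, 107, 65, 87, 103]
formal = [142, 138, 142, 134, 132, 110, 143, 145]

totals = {name: sum(row) for name, row in
          [('smart casual', smart_casual), ('white coat', white_coat),
           ('formal', formal)]}
# totals: smart casual 514, white coat 693, formal 1086

# Formal dress vs. the average of the other two styles:
ratio = totals['formal'] / ((totals['smart casual'] + totals['white coat']) / 2)
# ratio is about 1.8, i.e. formally dressed doctors were associated with
# the positive attributes almost twice as often as either alternative.
```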
Discussion (continued)
Interestingly, we also found that the location of healthcare influenced carers' preferences to such an extent that it offset the non-significant results among staff and patients, carrying this significance through to the overall results. Exploring this in more detail, we see a marked preference among carers for smart casual in the psychiatric setting over the medical setting (40% vs. 13%), for a white coat in the medical setting over the psychiatric setting (33% vs. 11%), and an almost equal preference for formal dress in both. This stark difference in preference between care locations indicates differences in carers' cultural perceptions of doctors that are not shared by staff or patients. Perhaps one explanation is that carers have historically been more involved and influential in the psychiatric setting, being an essential component of care, whereas in the medical setting they have tended to be more passive partners in care. A negative portrayal of mental health care in film and media may have driven preferences away from the white coat in the psychiatric setting, whereas a positive association between the white coat and physical health may have done the opposite in the medical setting.
Conclusion
We have identified a clear preference for a formal dress code for doctors from all stakeholders at medical and psychiatric care locations studied. However, we identified several interesting variations in preferences among individual stakeholders, and found that the location of care significantly impacted the preferences of carers. We believe these findings could be harnessed in the future development of dress code policies for doctors in order to enhance the doctor-patient relationship, and to improve the quality of doctors’ relationships with both carers and with other staff members. Additionally, there may be merit in involving these stakeholders during the policy development process.
Evidence suggests that serotonin has an important role in bladder control through central and peripheral neurological pathways. The three main serotonin receptor sites involved in the micturition pathway are 5-HT1A, 5-HT4, and 5-HT7; 5-HT7 and 5-HT4 are excitatory to acetylcholine release, while 5-HT1A is inhibitory. Increased serotonergic activity leads to parasympathetic inhibition, which results in urinary retention. It is through this mechanism of action, and their effect on pre-synaptic 5-HT1A and peripheral 5-HT3 receptors, that SSRIs have been observed to have an anti-enuretic effect. However, the exclusive role of serotonin in this regard is not fully understood because, along with serotonin, other neurotransmitters, particularly acetylcholine, are also implicated in micturition physiology. Acetylcholine released from nerves innervating the detrusor muscle causes bladder contraction, resulting in voiding; conversely, adrenergic pathways constrict the bladder sphincter and promote continence. It has been suggested that at lower intrasynaptic 5-HT concentrations inhibitory control of micturition prevails, whereas the excitatory effect is more pronounced at higher concentrations of 5-HT. This may suggest a dose-dependent relationship between Sertraline and urinary side effects.1
Case Reports:
Case 1
A 14-year-old girl with a diagnosis of a moderate depressive episode was prescribed Sertraline 150 mg once daily. At follow-up with her community psychiatrist, her mother reported that she had been having regular episodes of bedwetting for almost two weeks. There was no past history of enuresis, no medication changes, and no changes to her diet or routine. She had been drinking fluids during the day, with limited fluid intake after 6 pm. On a visit to Sheffield Children’s Hospital, she had been diagnosed with a urinary tract infection and prescribed a five-day course of antibiotics; she denied abdominal pain, dysuria or fever.
On discussion with the trust pharmacist, it was reported that urinary incontinence is a rare listed side effect of Sertraline, with nocturia occurring in 1 in 100 to 1 in 1000 patients.2 At further medication review appointments, the patient continued to report being incontinent approximately every alternate night and had to use incontinence pads. It was agreed with the patient to reduce the dose of Sertraline to 100 mg once daily, to test whether her urinary incontinence was linked to Sertraline, and to review after two weeks in clinic. The dose reduction did not alter the frequency of bedwetting, which continued on most weeknights and varied from partial to full emptying of the bladder. She was therefore referred to the Paediatric Community Incontinence Clinic for further investigation of the sudden onset of these bedwetting episodes. Concurrently, Sertraline was gradually reduced and stopped, and she was switched to Fluoxetine liquid for treatment of her depressive symptoms, titrated to a dose of 16 mg once daily. At the community continence clinic, urine dipstick was negative, and systemic examination, including a neurological examination, was unremarkable. Her mother reported that since the change from Sertraline to Fluoxetine there had been a remarkable improvement in her urinary symptoms.
Case 2
A 16-year-old boy with a diagnosis of mixed anxiety and depressive disorder was initiated on Sertraline, which was gradually titrated to a maximum dose of 200 mg once daily. He reported improvement in his symptoms of anxiety and depression. However, a few days into taking the higher dose, he experienced hesitancy of micturition and failure to ejaculate. On reduction of Sertraline to 100 mg once daily, he reported complete resolution of the urinary and sexual side effects, while still reporting a reactive and stable mood. Owing to his significant progress, he was eventually discharged from CAMHS back to the care of his GP.
Case 3
A 12-year-old girl with a diagnosis of Generalised Anxiety Disorder and Attachment Disorder reported three incidents of urinary incontinence while taking Sertraline 200 mg once daily. Sertraline was discontinued by the patient against medical advice. No follow-up information was available to confirm resolution of symptoms after discontinuation of Sertraline.
Discussion:
Selective Serotonin Reuptake Inhibitors (SSRIs) are a very commonly used class of psychotropic medication in the CAMHS population, prescribed to treat depression, anxiety, PTSD and OCD.3 The cases discussed above suggest that SSRIs may play a key role in causing symptoms of urinary dysfunction, which may range from nocturnal enuresis to acute urinary retention. This could be explained by serotonin’s pivotal role in micturition through central and peripheral pathways. There is not enough evidence on these links in the child and adolescent population, as most studies have been conducted in adult cohorts.4
Conclusion:
In conclusion, it is important for clinicians to bear in mind the genitourinary side effects of SSRIs, which may be debilitating for patients in the CAMHS population. It is equally important for clinicians to educate young people and their parents about these potential side effects and how they can be managed. It has also been observed that higher doses of Sertraline appear to be linked to the onset of urinary side effects. To establish a causal, dose-related relationship with the onset and severity of genitourinary symptoms, studies with a larger sample size followed up over a longer period would be required.
Schizophrenia (SCZ) is a chronic relapsing and remitting disorder with a lifetime prevalence of 4 per 1000 persons.1 Positive symptoms include delusions and hallucinations. Negative symptoms are characterised by deficits in normal behaviour, which are categorised into five domains: blunted affect, alogia, social withdrawal, anhedonia and avolition. In clinical practice, when monotherapy fails, multiple augmentation strategies – such as another antipsychotic, mood stabilisers, benzodiazepines, lithium, electroconvulsive therapy and repetitive transcranial magnetic stimulation – have been used to improve the clinical state of these patients, but evidence relating to the use of these interventions is lacking.2 None of the regulatory bodies has openly endorsed polytherapy with antipsychotics.
The introduction of chlorpromazine in the 1950s revolutionised psychiatry, and the advent of slow-release, long-acting forms (depot medication) contributed to the closure of asylums and paved the way for community psychiatry. Second-generation antipsychotics ameliorated the situation for a number of psychotic patients, but some remained resistant to all forms of psychopharmacology. Clozapine was formulated in 1958 and marketed commercially in 1972.3 Its arrival facilitated the rescue of some schizophrenia sufferers for a short time, but the drug disappeared from the scene because of initial untoward incidents.4,5 The observation that clozapine has the potential to control the motor symptoms of tardive dyskinesia, and to treat the psychotic symptoms of patients already diagnosed with tardive dyskinesia, led to its reintroduction, but with restrictions.6,7,8 Clozapine is recommended for use only after a trial of two other antipsychotics. Combining depot antipsychotics with oral drugs of a different class has been practised ever since the introduction of depot medications, and this practice has come to have general clinical acceptance.
Treatment resistance
Historically, it was observed that a specific group of chlorpromazine users remained symptomatic; they were considered refractory to phenothiazines. The availability of clozapine led to a better definition of treatment resistance. ‘Response to treatment’ means a reduction in the severity of symptoms, while ‘remission’ implies an absence of symptoms for a considerable period; ‘recovery’ signifies absence of the disease for a long period.9 ‘Treatment-resistant schizophrenia’ (TRS) is the term used for persistence of psychotic symptoms despite a certain number of adequate treatments. Since the introduction of first-generation antipsychotics, clinicians have been cognisant of TRS, and operational definitions have been used, such as those developed by Kane et al.10 Sometimes treatment has been based on algorithms such as the Texas Medication Algorithm Project (TMAP).
According to the most common definition of TRS, patients may meet the criteria if they present with persistent, moderate to severe, positive, disorganisation or negative symptoms, together with poor social and work function, over a prolonged period of time despite at least two adequate trials of neuroleptic drugs.11 A common agreement is that adequate drug treatment requires a duration of 4 to 10 weeks, a dosage equivalent to 1000 mg/day of chlorpromazine, and trials of two to three different classes of antipsychotic drugs.12 Current treatment guidelines recommend two or more treatment trials of atypical antipsychotics at adequate dosages. Adequate response to treatment has been defined as at least a 20% reduction in symptoms as measured by rating scales. Typical antipsychotics can also be used for 4 to 6 weeks to screen for TRS.
Resistance to treatment and poor outcome are different from genuine TRS. Resistance to treatment may be defined as a state in which the patient has access to medication but the effectiveness of the treatment is suboptimal, whereas TRS may be conceptualised as a state in which medication has reached its target receptors but does not seem to be effective. Chronicity has often been confused with treatment resistance. Schizophrenia is a chronic disorder that progresses to various levels of clinical deterioration without sustained remission or full recovery. Poor-outcome SCZ applies to 50% of patients, and TRS comprises a subset of such patients. In these patients, cognitive impairment, negative symptoms and mood symptoms are independent of positive symptoms, resulting in poor-outcome SCZ.
It is generally accepted that 30% of SCZ sufferers have TRS. Many people with SCZ do not achieve a satisfactory response to their initial antipsychotic drug treatment. They may manifest a poor response to therapy because of intolerance to medication, poor adherence and inappropriate dosing, as well as true resistance of their illness to antipsychotic drug therapy. Assessing treatment resistance is a priority in the management of TRS.13 TRS has to be closely evaluated before a comprehensive management plan is developed (Table 1). From a multidimensional point of view, TRS depends on manifold factors, such as longer duration of illness, multiple episodes, gender, early onset, poor pre-morbid personality, family history, substance misuse, presence of soft neurological signs and a long untreated period.14 Genes are thought to be involved in the development of TRS; reliable genetic prediction of which patients will develop TRS would have significant clinical implications. Structural neuroimaging techniques have revealed that TRS patients do not differ markedly from treatment-responsive SCZ patients in terms of brain abnormalities.15
When clozapine fails or is rejected
Clozapine may be the preferred drug for TRS – effectively the gold standard – but its side effects put off many patients, to the extent that some refuse clozapine therapy. It is a unique atypical antipsychotic and there is robust evidence supporting its use in people with TRS. Though clozapine often represents the best hope for recovery, it is associated with severe and enduring adverse reactions that may delay its prescription and increase morbidity and mortality. The major side effects are a) agranulocytosis; b) metabolic side effects; c) myocarditis; d) seizures; e) severe constipation with gastrointestinal complications such as intestinal obstruction, bowel perforation, paralytic ileus and toxic megacolon; and f) sialorrhoea. These side effects hinder the widespread use of clozapine in TRS. It is a life-saving drug, but without extra care it may itself shorten the lifespan. Side effects are more common at higher doses. It has been estimated that between 10% and 60% of patients resistant to or intolerant of other antipsychotic drugs respond to clozapine.
The side effects mentioned above are inevitably an impediment to its common use. When standard doses (300 mg to 500 mg) do not produce the desired effects, or patients develop unwanted effects, combining clozapine with other antipsychotics is a common practice for TRS. Amisulpride and aripiprazole, for example, are atypical antipsychotics ordinarily used in combination with clozapine. The anti-salivatory effect of amisulpride and the alerting effect of aripiprazole are added advantages of such a combination, and these drugs are fairly weight-neutral, in contrast to clozapine. Clozapine, representing the second generation of so-called atypical antipsychotic drugs, has shown positive effects in desperate cases of TRS. Furthermore, two epidemiological studies have shown that clozapine has the lowest mortality rate among antipsychotics.
Nevertheless, even though the literature supports clozapine as the best-known antipsychotic in terms of efficacy and rates of response, a sizeable number of patients remain resistant to clozapine therapy and continue to be symptomatic and dysfunctional. It has been estimated that 40–70% of patients on clozapine may not respond satisfactorily to it.16 Patients who do not respond to clozapine are categorised as super-refractory, although the very concept of a super-refractory state is debatable. They do not differ from refractory cases in terms of demographic factors but have higher scores for positive symptoms. One simple explanation is that the aetiological mechanism of illness in such patients may differ from that of clozapine responders, making them unresponsive to clozapine. There are no operational definitions of super-refractory schizophrenia. According to the schizophrenia algorithm of the International Psychopharmacology Algorithm Project (www.jpap.org), persistence of psychotic symptoms after a trial of adequate doses of clozapine (300–900 mg/day) for at least six months designates a super-refractory case.17
Many predictors of clozapine response have been suggested without any firm grounding. These include severe clinical symptoms, higher levels of functioning before the onset of schizophrenia, low levels of homovanillic acid and 5-hydroxyindoleacetic acid in cerebrospinal fluid, reduced metabolism in the prefrontal cortex, reduced volume of the caudate, and improvement of P50 gating at the 500-ms prepulse interval.18 However, none of these factors is consistent or specific as a predictor of clozapine response; more genetic and brain-imaging studies of such patients are warranted. In non-responders, augmentation strategies are necessary, and several have been used: typical and atypical antipsychotics, mood stabilisers, antidepressants and electroconvulsive therapy. Some studies have favoured ECT, but no definitive conclusion has been drawn. Moreover, half of clozapine patients discontinue the medication of their own accord: in a retrospectively studied sample of patients who discontinued clozapine, the majority terminated treatment as a result of their own decision or because of non-compliance with medical procedures such as blood sampling.19
There are currently no evidence-based pharmacotherapies for TRS patients who do not respond to clozapine 20,21 or those who terminate clozapine therapy because of adverse reactions.22 In a nutshell, clinicians should be prepared to try alternative treatment options for TRS and super-refractory cases. Thus, combination therapy may become a choice either before or after clozapine therapy. Clozapine is not a drug that can normally be imposed on patients; their acceptance of it has to be earned.
Combination therapy
The range of antipsychotic medications available is wide, with variable effectiveness and differing profiles for typical and atypical agents, adding to a confusing array of terminologies and dilemmas over which drug is best for service users.23 Combination therapy involves the addition of a second antipsychotic to the therapy regimen. It is different from adjunctive therapy, in which a second agent is employed to reverse an emergent side effect or to obtain a complementary clinical effect, and from augmentation, which involves the use of a non-antipsychotic alongside the antipsychotic already in use. Combination therapy and augmentation therapy are sometimes used interchangeably; in general, ‘combination’ refers to the use of more than one type of disease-specific treatment for a particular illness.
A change from one antipsychotic to another in the same class seldom produces any additional benefit, whereas switching to an antipsychotic with a different mechanism of action has been shown to produce a more impressive response rate. Combination becomes desirable when the drug already in use produces some favourable effect but is not sufficient to control the symptoms. It is imperative to distinguish between partial response and no response when considering a change in medication. Past antipsychotic drug response, differences in adverse effect profiles, concomitant medical disorders and concurrent drug therapy are factors to be considered when choosing between switching and combination or augmentation approaches. A switch is indicated when there is no response to the drug; combination or augmentation is recommended for partial response. An antipsychotic combination may also become necessary as an option for TRS patients who cannot be treated with clozapine for various reasons; it is common practice in such situations to add a second antipsychotic to the original one.
Clinical teams need not be disheartened or disillusioned when clozapine therapy fails because of non-response or intolerance, or when augmentation and combination therapies do not bring about the desired outcome. Switching back to atypical drugs may turn out to be effective in some cases, and clozapine is not to be considered the last resort. A multicentre open-label 18-week trial evaluated a switch to olanzapine in 48 clozapine-resistant or clozapine-intolerant patients.24 Switching to olanzapine 5–25 mg per day resulted in mean drops in total scores on the Positive and Negative Syndrome Scale (PANSS) and Brief Psychiatric Rating Scale (BPRS) of 17.7 points (14.2%) and 9.8 points (20.2%) respectively.
Cautions
Monotherapy is the most desirable form of treatment for SCZ. There is no good objective evidence to support dual antipsychotic therapy except in combination with clozapine; the evidence base supporting such combinations consists mostly of small open-label studies and case series.25 Combination therapy should be considered only when several attempts at monotherapy, including at least one atypical antipsychotic, have failed. It is assumed that two different treatments together may have a different mechanism of action and therapeutic response from either drug alone. Studies have been conducted to determine whether treatment with antipsychotic combinations is effective and safe for SCZ. The trials conducted so far provide only low or very low-quality evidence, and research yielding high-quality evidence is needed before firm conclusions can be drawn. The results so far show that there may be some clinical benefit, in that more people receiving a combination of antipsychotics showed an improvement in symptoms. For other important outcomes – such as relapse, hospitalisation, adverse events and discontinuing treatment – no clear differences between the two treatment options were observed. Currently, most evidence regarding the use of antipsychotic combinations comes from short-term trials; assessment of long-term efficacy and safety is limited.
There are published case reports of serious side effects, such as a higher prevalence of extrapyramidal symptoms (EPS), metabolic side effects, paralytic ileus, grand mal seizures and prolonged QTc, in association with a combination of antipsychotics.26 Combining three antipsychotics may be extremely dangerous; studies have revealed that such a practice substantially increases mortality,27 although a negative case-control study exists.28 It should be usual practice to document in the clinical records the rationale for combined antipsychotic use in individual cases, along with a clear account of benefits and disadvantages, including side effects.
Newer combinations and augmentation strategies are supported only by case reports and open-trial data. Along with advantages, a number of potential concerns regarding antipsychotic combinations have been identified (Table 2), and specific clinical cautions have to be observed in combination therapy (Table 3). Yet fixed combinations of drugs are common in medicine and were at one time common in psychiatry. An example is small doses of an antipsychotic combined with an antidepressant for treating major depression, a practice that lost popularity because of side effects. SNRI–NaSSA combination therapy (e.g. ‘California Rocket Fuel’) is also commonly used for treatment-resistant depression.
Olanzapine–amisulpride combination
In spite of the objections raised against combination therapy, there are isolated case studies favouring the olanzapine–amisulpride combination. Zink et al. (2004) performed a retrospective study aimed at the systematic evaluation of patients on combined olanzapine and amisulpride. Their open study, designed as a retrospective chart review, concludes that the olanzapine–amisulpride combination for TRS is encouraging but requires further evaluation in prospective, randomised studies.29 They point out that a reduction in the daily dose of both drugs helped to minimise side effects – such as weight gain and EPS – resulting in better compliance, and they did not notice any additional side effects or undesirable drug interactions.
Within the heterogeneous group of atypical antipsychotics, the thienobenzodiazepine derivative olanzapine has a receptor profile quite similar to that of clozapine, with a greater affinity for serotonergic 5-HT2A receptors than for dopaminergic D2 receptors. The positive and negative symptoms of schizophrenic psychoses usually respond well to this drug. In contrast to clozapine, olanzapine does not typically induce agranulocytosis but may, in a significant number of cases, lead to troublesome side effects including significant weight gain, type 2 diabetes, sedation, anticholinergic effects and transient increases in liver enzymes. Assertive weight management from the start of treatment is recommended: weight and waist circumference should be monitored, and blood lipids assessed routinely. A suggested schedule for these investigations would be at 3, 6 and 12 months, and biannually thereafter.30 The pharmacology of antipsychotics is not the only factor that determines their effect on weight. Olanzapine has also been shown to elevate prolactin significantly in some patients.31 As indicated earlier, olanzapine can succeed in some cases even where clozapine fails.24
Amisulpride is an atypical antipsychotic of the benzamide class. It blocks D2 and D3 receptors (presynaptic at low doses, postsynaptic at higher doses). Unlike other atypical or typical antipsychotics, it has low affinity for serotonin, α-adrenergic, histaminergic, muscarinic and sigma receptors, as well as for D1, D4 and D5 receptors. It can lead to dose-related EPS, which are significantly less frequent than with typical antipsychotics such as haloperidol and comparable to risperidone.32 Amisulpride is only sparingly metabolised by liver enzymes and is therefore not known to participate in many drug interactions.33 It may elevate prolactin, which may cause sexual dysfunction, osteoporosis, amenorrhoea, gynaecomastia or galactorrhoea. It is a weight-neutral compound and may ameliorate negative symptoms.34 Olanzapine is not associated with significant QTc prolongation, although amisulpride can cause dose-dependent QTc prolongation.
One advantage of combining these drugs is that each may be given at a lower dose, sparing patients the main unwanted side effects of the individual drugs: the over-sedation and weight gain of olanzapine, and the hyperprolactinaemia of amisulpride with its particularly undesirable sexual side effects. Our limited studies have found that this combination was well tolerated by TRS patients and that its efficacy was similar to that of clozapine, but without any major side effects; patients have been fully compliant. The combination is managed by slowly introducing the drugs one at a time and has been transformative in many cases. More studies of the olanzapine–amisulpride combination are needed in order to report on outcomes such as relapse, remission, social functioning, service utilisation, cost-effectiveness, satisfaction with care, and quality of life.
Table 1. Assessing Treatment Resistance
Re-evaluate current antipsychotic treatment: has an adequate trial been given? Suboptimal dose and non-adherence can lead to pseudo-resistance; poor adherence is consistently associated with adverse effects, poor insight and a poor therapeutic alliance
Consider exceeding BNF limits (recommended only in specialist centres)
Review the differential diagnosis, e.g. schizoaffective disorder or bipolar affective disorder; bipolar disorder can present with first-rank symptoms in the initial stages, and it can take up to 10 years to establish the diagnosis
Assess for psychotic symptoms
Re-evaluate personal history and psychological pressures
Investigate co-morbid psychiatric symptoms, e.g. substance misuse or alcohol dependency, depression, obsessive-compulsive disorder and panic attacks
Investigate organic factors, e.g. temporal lobe epilepsy, endocrinopathies
Check blood drug levels if facilities are available
Consider factors associated with TRS: longer duration of illness; multiple episodes; male gender; onset of illness at an earlier age; poor pre-morbid functioning; long duration of untreated psychosis; family history of schizophrenia; soft neurological signs (lateral and third ventricular enlargement and low catecholamine levels in CSF); suicidal tendencies; aggression
Assess adverse effects of psychiatric and other medications that may mimic worsening of positive and negative symptoms
Complete physical and neurological examination and specialist consultation, as appropriate
Rule out the desire to be ill
Table 2. Advantages and disadvantages
Advantages:
Discontinuation symptoms due to the withdrawal of the first antipsychotic can be avoided
Patients unresponsive to the initial antipsychotic may achieve clinical response when the second agent is introduced
The patient does not have to cope with another waiting period for the substituted drug to produce full results
The benefits of the first drug are preserved in addition to the favourable effects of the added drug
Switching involves tapering off the initial drug, a wash-out period and delay in the onset of the second drug
Switching antipsychotic drugs requires additional supervision and care in the transitional period and can be delayed by discontinuation symptoms; the addition of a second antipsychotic avoids these problems
Disadvantages:
The possibility of unnecessarily high doses
An increased acute and/or chronic side-effect burden
Adverse pharmacodynamic and pharmacokinetic interactions
Difficulties in determining cause and effect of multiple treatments
Potential increased mortality
Higher costs
Poorly documented risks and benefits of this practice
Reduced compliance
Table 3. Physical cautions with combination therapy
History of cardiac disorder (e.g. MI, arrhythmias, abnormal ECG)
Hepatic impairment
Renal impairment
Obesity (high BMI)
Heavy smoking
High alcohol intake
Substance misuse
Hyperlipidaemia
Age above 70
ECG and haematological investigations
Side-effect rating scales
Monitoring of physical effects
Record justification for combination
Summary
Combination therapies are the second choice when monotherapy fails. Clozapine is the first choice in severe cases of TRS, but there are super-refractory cases in which clozapine fails. At least in isolated cases, the combination of olanzapine and amisulpride is worth considering for TRS patients who are reluctant to go on to clozapine therapy, in instances where clozapine has failed, or where patients have dropped out. Combination therapies are normally avoided, but clinicians’ helplessness and patients’ despair justify such measures in hard-to-treat cases of TRS. Only time will tell whether this combination will become an important part of clinical practice or will be ruled out as just another dual antipsychotic therapy.
The aetiology of SCZ remains obscure. The symptoms of different psychotic disorders are not clearly demarcated and there are no physiological parameters on which to make a firm diagnosis. In such a situation, the treatment of TRS has to be tailored on an individual basis; even though it is normally well calculated, it may be somewhat hit and miss. Finding the right combination of antipsychotics or augmenting agents when the clinician is stranded between monotherapy and polypharmacy is a gargantuan task. Clinical judgement, along with patient preference, must take over when treatment algorithms fall short. Given the available data on antipsychotic polytherapy, it is hard to make any firm recommendation regarding the efficacy and safety of its use. Clinicians should be reminded to try monotherapy at adequate dosages before considering combinations.
For the management of TRS, comprehensive treatment strategies that integrate pharmacological, psychological and psychosocial approaches are highly relevant, and for that to happen TRS must be clearly recognised. NICE offers very little guidance on clozapine-resistant cases of SCZ. Combination of antipsychotics is not a panacea or a permanent solution for TRS; more investigation of schizophrenic illness is the only way forward. In comparison with other medical conditions (e.g. HIV), research into it is making little progress. As it stands, deconstructing clozapine’s unique pharmacology may offer ‘light at the end of the tunnel’ for patients who are clozapine-intolerant or non-responders.
Temporomandibular joint disorder (TMD) refers to a broad spectrum of disease states characterised mainly by pain and tenderness in the temporomandibular joint (TMJ) and adjacent soft tissues, TMJ clicking and limitation of jaw movements. TMD symptoms vary in severity and, if left untreated, may lead to debilitating pain and limited function with a significant impact on quality of life. The estimated prevalence of TMD is 2–6%,1 although figures up to 25% have also been reported. The aetiology of TMD is not fully understood; it is multifactorial, including organic disease of the TMJ, trauma, malocclusion and stress. Treatment options include reassurance and education, physical and splint therapy, simple analgesia and other drugs, surgical intervention or combined treatment. Most cases of TMD can be managed non-surgically. Patients with TMD have traditionally been managed initially by a GDP and are often referred to a specialist for further non-surgical or surgical therapies if symptoms are not controlled.
Andersen et al (1999) reported that approximately 3 out of every 100 attendances at GMP services in Wales, UK were due to oral and dental problems.2 The number of people attending their GMP for dental problems has been increasing.3,4 GMPs have expressed concerns about their ability to treat dental diseases,5 as these conditions are beyond the scope of their expertise.
Patients consulting GMPs for TMD has been documented dating back nearly six centuries.6 In line with the rising trend of attending GMPs for oral problems in general, there has been an increasing tendency for patients with TMD symptoms to approach their GMP as the first point of contact, owing to comparatively easier availability and financial feasibility. Prompt referral to a GDP or the relevant speciality is likely to improve management and reduce the adverse impact on quality of life, and could potentially reduce the burden on overstretched NHS hospitals in the UK. There is a paucity of data on the management of TMD among GMPs in the UK and, to the best of our knowledge, there has been no prior survey of their knowledge of and attitude towards the assessment and management of TMD. The objective of this study was to assess the current experience of UK GMPs with the care of TMD patients in primary care.
Method:
Design
A single-centre cross-sectional survey
Study population and survey development
The study population comprised GMPs listed within the Leicester City Clinical Commissioning Group 7 with access to refer to the regional NHS Oral and Maxillofacial Services providers. GMPs were formally invited to complete a specifically prepared postal questionnaire (see Appendix) covering their knowledge and management of TMD. To help ensure the reliability and validity of the survey results, the questionnaire was pretested on GMPs in five urban GP surgeries outside Leicester city. To maximise response rates, a follow-up questionnaire and telephone calls were arranged after four weeks if no reply had been received. Confidentiality was maintained by number-coding the questionnaires. Selection bias was minimised by sending the questionnaire to all GMPs in the Leicester city area.
The questionnaire survey was conducted in February 2018 and comprised 16 questions on TMD and two demographic questions. The questionnaire assessed knowledge of TMD, including clinical features, diagnostic criteria, prevalence and aetiology. Participants were asked about their awareness of current guidelines and treatment options, and about their management practice: whether they would refer to a GDP, an oral and maxillofacial surgeon, or a TMD specialist. They were asked whether they had updated their knowledge about TMD, and were invited to indicate which means of receiving TMD information they would prefer. Demographic data included gender and clinical experience. There were no open-ended questions; for some questions participants were asked to select the most correct statement from several options. Participant GMPs were informed in the invitation letter that participation was voluntary, that all responses were anonymous and that the study would be published in a peer-reviewed journal. Participation in the survey implied consent.
Data analysis
Data were analysed descriptively using IBM SPSS Statistics for Windows version 21 (IBM Corp, Armonk, USA). We aimed to determine whether there was any relationship between GMPs' knowledge of the diagnostic features of TMD and their length of experience in practice. We stratified GMPs into two groups according to seniority: certificate of completion of specialist training (CCST) obtained within the last 5 years, or earlier. The chi-square test was used to compare proportions between the two groups, and a p value < 0.05 was considered statistically significant.
Results:
Of the 259 GMPs who were contacted and invited to participate, 126 returned the questionnaire by post (response rate 48.6%). Two respondents did not complete the questionnaire correctly; the remaining 124 responses were analysed. There was a slight male preponderance (55%). Only 12% of GMPs rated themselves above average (score >4) in terms of general familiarity with TMD. Five percent of responders were aware of published guidelines on TMD management, and none were familiar with the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD). Seventy-four percent of participants, including GMPs with both less and more than 5 years' experience, described the clinical features consistent with a diagnosis of TMD. Four percent selected the correct option when asked about possible causative factors. None knew the actual prevalence of TMD symptoms in the community, and the majority of GMPs underestimated the proportion of the population with TMD. Fourteen percent correctly identified the age group most affected by TMD. Thirty-two percent of participants believed that subjects with TMD symptoms require initial radiographic assessment before any treatment is commenced, while the majority (56%) chose ‘No’ and 12% selected ‘Don’t know’. Ninety-five percent of respondents believed that they saw on average 2 to 4 TMD patients per month. Eighty-nine percent of respondents referred patients to GDPs, whereas the remaining 11% contacted oral and maxillofacial surgery service providers for TMD management (see Figure). Only one participant was aware of specialist clinical services for TMD and, in addition to sending these patients to GDPs, also referred TMD patients directly to specialists. The majority (66%) were not comfortable seeing TMD patients and providing initial management; 34% of GMPs, in addition to referring TMD patients to other services, also provided initial treatment.
All those who offered this initial non-surgical treatment to manage TMD selected combined modalities, i.e. patient education, pharmacological therapy and physical therapy. Only 6% of participants had updated their knowledge through internet resources in order to increase their awareness of TMD management in the community. Almost all (97%) of the GMPs would welcome relevant continued education programmes and receiving leaflets or published literature. A summary of GMPs' responses is given in Table 1. Group analysis of participants (see Table 2) did not show any statistical association between the experience of GMPs and their knowledge of TMD clinical features (chi-square statistic 3.79, p = 0.052).
Figure: GMPs' referrals for TMD patients. GMPs: General Medical Practitioners, TMD: Temporomandibular joint disorders, GDPs: General Dental Practitioners
Table 1: Summary of the main responses from the GMPs survey about TMD knowledge
Familiarity with TMD rated as above average
12%
Awareness of TMD guidelines
5%
Familiarity with RDC Criteria of TMD
0%
Correctly identified the etiological factors of TMD
4%
Correctly identified TMD clinical features
74%
Correctly identified the TMD prevalence in General population
0%
Correctly identified the age group most affected by TMD
14%
Selected ‘No’ regarding the need for a radiograph before TMD management is initiated
56%
Not comfortable seeing and providing initial management of TMD
66%
Selected a combination of pharmacological and physical therapy to treat TMD
34%
Have referred TMD patients to GDPs
89%
Have referred TMD patients to Oral and maxillofacial surgery
11%
Have updated the TMD knowledge through any resource
6%
Keen to receive further information about TMD
97%
GMPs: General Medical practitioners, TMD: Temporomandibular joint disorders
Table 2: Distribution of participant GMPs according to their seniority and familiarity with TMD clinical features
Experience as GMP
Correctly identified TMD features (n)
Incorrectly identified TMD features (n)
Greater than 5 years
50
11
Less than 5 years
42
21
Chi-square statistic 3.7894, p = 0.052. GMPs: General Medical Practitioners, TMD: Temporomandibular joint disorders
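For readers who wish to check the arithmetic, the Table 2 statistic can be reproduced from the raw counts. The sketch below is illustrative only (the original analysis used SPSS); it recomputes the uncorrected chi-square statistic for the 2x2 table:

```python
# Recompute the chi-square statistic for Table 2 from the observed counts.
observed = [[50, 11],   # experience > 5 years: correct / incorrect
            [42, 21]]   # experience < 5 years: correct / incorrect

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n  # count expected under independence
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 4))  # 3.7894
```

With 1 degree of freedom, a chi-square value of 3.79 corresponds to a two-sided p just above the conventional 0.05 threshold, consistent with the reported absence of a significant association.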
Discussion:
Main Findings
Our study is the first to explore in depth the experience of GMPs with TMD management. Findings from the survey indicate that uncertainty exists among GMPs regarding their level of knowledge. Most GMPs had no awareness of TMD management guidelines. The RDC/TMD 8 is a valuable tool for assessing signs and symptoms and classifying patients with TMD; participants were not aware of these criteria. The responses from GMPs indicated that the prevalence of TMD within the general population is not accurately recognised. A third of respondents incorrectly believed that TMD patients require radiographic evaluation before treatment planning. Only one of the GMPs was aware of clinicians with a subspecialty in TMD. All patients with this condition were referred either to dentists or to maxillofacial surgeons, which reflects an awareness of an appropriate chain of referral.9 There was a general consensus in considering the general medical practice environment an unsuitable place to manage dental problems,5 including TMD. A positive finding of our study was that a significant proportion of GMPs in Leicester city are interested in learning about TMD, which indicates a need for formal training courses for GMPs. If appropriately trained, these practitioners would potentially have an enhanced capability not only of managing TMD at an initial level but also of providing knowledge and guidance to other practices and community services.
Comparison with existing literature
The knowledge, attitudes and practices of GDPs regarding TMD management are widely reported,10-12 but there is hardly any study relating to general medical practice. Results of a questionnaire survey on screening for TMD in 38 London teaching general medical practices were similar to our findings.13 Thirty-six of the 38 GMPs who replied to that survey assessed the TMJ as part of the physical examination when symptoms of TMD were present, but TMJ assessment was not included in routine primary health care screening. Similarly to Cope et al 2015,5 another qualitative study of GMPs' experiences of chronic orofacial pain, including TMD, in the North-west of England revealed that primary health care providers consider themselves unable to meet the diagnostic and management challenges of TMD.14 In face-to-face interviews, GMPs explained that despite these limitations they do offer TMD patients pharmacological and other complementary approaches, particularly acupuncture. Similar experiences of GMPs are also reflected in our current findings.
Strengths and limitations
The main strength of this survey is that, to the best of the authors' knowledge, it is the first study to determine the knowledge and experience of GMPs regarding the management of TMD. A comprehensive, pilot-tested questionnaire written in simple language was designed to assess GMPs' knowledge of TMD, which they were expected to have gained from the available literature.
There were two main limitations to our survey. Firstly, the sample size was small, as the study was confined to GMPs practising in Leicester city; hence it may not be representative of all GMPs across the country. Despite this weakness, the results may serve as a scoping study to justify further research such as qualitative surveys. Secondly, there was a relatively low but acceptable response rate (48.6%). Although this raises concerns about validity, studies have demonstrated that there is no direct correlation between response rate and validity,15 and surveys with comparatively low response rates are only marginally less accurate than those with much higher reported response rates.16
Implications for research and practice
In addition to other main areas of practice, the Royal College of General Practitioners (RCGP) curriculum highlights the importance of specialist GMP trainees attaining competency in common oral and maxillofacial conditions.17 Considering the frequent attendance of patients with oral and facial diseases in primary care and the limited undergraduate medical training in this area, valuable suggestions have been made for GMPs to attend specialist oral medicine and oral surgery clinics to enhance exposure to common maxillofacial diseases. Despite these recommendations, surprisingly little active interest has yet been shown by GMP trainees. There is a need to integrate GMP training with some exposure to the specialty of oral and maxillofacial surgery to improve expertise in the management of TMD and other oral diseases, especially in view of the increasing trend for patients to present initially to their GMP for advice about TMD and other chronic orofacial pain conditions.
Evidence-based literature on dealing with TMD at a non-specialist level has been published in the medical literature.18-20 This provides clinicians, including GMPs, with sufficient knowledge to diagnose TMD and refer to the relevant clinician. The British Association of Oral and Maxillofacial Surgeons (BAOMS) TMD commissioning guide 2014 8 suggests that GMPs refer TMD patients to a GDP in the first instance to start initial treatment. Early diagnosis, counselling and management of TMD tend to improve prognosis and reduce the severity of the impact on quality of life.21, 22 It is crucial that GMPs have sufficient knowledge to make an early referral to an appropriate clinician in order to commence conservative measures, including education and advice, use of a bite guard, medications and self-directed physical therapy. The limited access to dental care within the UK, despite a National Health Service (NHS), is a well-recognised challenge. There are multiple barriers to accessing dental care,23 including delays or failure in getting appointments, which result in patients turning to general medical practice for advice.4 GMPs have also expressed concerns regarding accessibility to, and the collegiate relationship with, GDPs in the management of chronic facial pain including TMD.14 Whether the aforementioned limitations are system-related or simply patient factors, they are certainly hindrances to timely assessment and intervention. We suggest that suitably trained GMPs should be able to commence the initial conservative management of TMD patients whilst simultaneously referring them to a GDP or appropriate specialist, so as to optimise management and possibly reduce subsequent referrals in the long term. There is immense potential for primary care to be an integral part of the initial management of TMD. A large-scale nationwide study could help future planning for care within the community.
Conclusion:
Respondent GMPs in the East Midlands, England, demonstrated limited knowledge and confidence related to the diagnosis and management of TMD. Appropriate postgraduate training and ongoing continuing professional development activities would improve knowledge and awareness of TMD management, potentially leading to more effective care within the community.
The continuous growth in patient numbers and needs poses several challenges for medical professionals and support staff within the National Health Service (NHS).1 Health care services are under financial strain in the light of the changing demographic structure of the UK population, which requires improved access to health services. Managing patient satisfaction represents another major challenge. Evidence from a recent national survey in the UK shows that dissatisfaction with the NHS increased by seven percentage points in 2017, reaching 29 percent, its highest level since 2007.2 Staff shortages, long waiting times for surgical operations and access to care, inadequate funding, and slow-paced government reforms are among the reasons for dissatisfaction. For hospitals, long waiting times in the A&E department and delays for patients in need of critical care represent major concerns.3
Unsatisfactory health care service experiences generate negative outcomes for health service providers in terms of managing patients’ experience of care, and meeting performance targets. As patients are ultimately the receivers of health care provision, understanding their experiences of care is pivotal.4 The psychological processes underlying patients’ perceptions and evaluations of service provided by the health care professionals, play a crucial role in patient satisfaction. The cognitive processes of patients and their support network, such as friends and relatives, influence perceptions and attitudes towards health care treatment and service. Research underpinned by knowledge from social psychology can shed light on such cognitive processes and generate insights for effective management of patient satisfaction.
The concept of psychological threat in health care service experiences can be explained through the notion of ‘lock-in situations’5 perceived by patients. For instance, when visiting a hospital or a GP surgery, patients often undertake externally-imposed activities, such as long waits for a doctor’s appointment, self-service check-in, self-care and monitoring, and/or unsatisfactory interactions with support staff, all parts of the service provision. Such situations can be perceived as a threat to self-determination needs, such as the need for autonomy. Patients who regularly use health care services in the UK associate four main types of threat with health care service experiences, in response to which coping strategies are activated. We discuss these below.
Perceived threats associated with health care services
Patients who use health care services in the UK often report situations that they find threatening to, or questioning of, their astuteness and sense of control. Interactions with health care staff can make patients feel unintelligent and/or incompetent, and restricted in personal control. This is typical of encounters where health care support staff are unable to address patients’ queries accurately, and their attempt at resolving the issue is perceived subconsciously as unnecessary and inappropriate by the patients. This seems to be due to a general lack of trust in the competence of health care personnel and, more conspicuously, the perception that they are not willing to act in the interest of patients. Poor health status at the time of accessing health care services might also hinder patients’ willingness to accept advice from health care professionals. Such experiences of threat to self-competence are often associated with negative or even vengeful behaviour towards the health service provider, which is the party perceived as threatening. The psychological mechanism behind such behaviour is that retaliation alleviates the emotional discomfort caused by threat perceptions.6
Threats to personal control are often reported when processes in the health care service provision are perceived as inadequate and lead to, for instance, long waiting times for appointment booking and/or rescheduling. Our qualitative research shows that patients perceive the process of booking a doctor’s appointment as ‘a nightmare’, ‘particularly time-consuming’ and ‘complicated’. They perceive a loss of control when seeking to book or reschedule an appointment. When appointments are not scheduled around their commitments, patients perceive that they are not being heard.
Furthermore, health care service experiences can be perceived as threatening to the individual’s self-esteem, especially in situations where patients feel ignored by health care personnel and their self-esteem and social identity are being undermined. A key reason is the perceived lack of empathy and concern of health care personnel during interactions with patients.
How patients activate coping strategies
The lock-in situations discussed above can affect satisfaction and well-being, despite patients’ general compliance with requests from health care personnel.5 Social psychology research shows that perceived threats, such as those reported in health care service experiences, increase feelings of anxiety, averseness, lack of control, and aggressiveness.6 Crucially, in response to threats and the consequent negative feelings, patients activate coping strategies as a mechanism of self-defence. We postulate that coping strategies, in turn, influence their behaviour, aimed at compensating for the unsatisfactory experience.7 Such behaviour can be negative, and at times even vindictive towards the health care service provider.
Social psychology research distinguishes between an individual’s coping strategies8 aimed at addressing the source of the threat (i.e. problem-focused coping), and those focused on re-establishing positive emotions, for instance through the act of venting the dissatisfaction caused by the threat (i.e. emotion-focused coping). In health care services, patients often seek to react proactively to threats, thereby engaging in problem-solving. This is especially the case when unsatisfactory health care service experiences are aggravated by a serious illness; severity of illness markedly influences patients’ willingness to take action in response to threats. Crucially, the decision to act seems to benefit patients, as they report feeling ‘back in control of the situation’, a form of compensatory behaviour.9 Cerebral activities, such as rational and positive thinking, influence the extent to which patients confront threats. Rational thinking can induce patients to take a step back from the experience, reconsider the factors at play, and plan their next actions.
Crucially, in the process of coping with threats imposed by health service experiences, patients often feel overwhelmed. Negative emotions in such threatening circumstances are heightened, and the support from their network of friends and family appears to be fundamental. Intriguingly, for some patients, social media is increasingly seen as a useful source of emotional support, which appears to be gradually replacing conventional forms of verbal, face-to-face support.
Final remarks
We offer an overview of how insights from social science research can inform the decision-making of health care service providers. This is especially the case in decisions related to staff hiring, training and development, service process improvement, and the design of supporting systems. Lack of empathy and concern from frontline health care staff, and outdated service processes and systems, represent threats to patients. An implication is that innovative training of frontline staff is necessary for the development of soft skills, which are highly valued by patients. Developing caring and supportive relationships between health care personnel and patients is necessary, as these have considerable bearing on the outcome of health care service experiences. Similarly, introducing the practice of simulating patients’ care experience can help to identify threats whilst introducing service improvements and innovations. Health care service providers also need to be aware that patients’ health status at the time of seeking and experiencing health services influences their evaluations of the quality of care and of the service experience. It follows that service provision needs to be adapted to account for patients’ health status and to vary across different patient groups. Insights from social science research can inform practice for enhanced provision of health care services. Further survey-based research focusing on the causal links between psychological threat, coping and patient well-being10 is warranted.
In the absence of systemic inflammation, procalcitonin synthesis is mainly restricted to the neuroendocrine cells of the thyroid.1 It is not released into the blood until it is cleaved into its mature form (calcitonin); therefore, serum procalcitonin levels remain undetectable.2 During systemic inflammation, however, almost all body tissues can produce procalcitonin. The main triggers for its synthesis are bacterial toxins (endotoxins) and cytokines released in response to bacterial infections (TNF-alpha, IL-1-beta and IL-6); see Table 1. Cytokines released due to viral infections (e.g. interferon-gamma) inhibit TNF-alpha production.1, 3 During an inflammatory response, procalcitonin levels start rising within 2-4 hours and peak at 24-48 hours. Peak levels correlate with the severity of the bacterial infection. When inflammation resolves, procalcitonin levels fall quickly, by 50% every 24-36 hours. If the inflammation is ongoing, procalcitonin levels plateau (due to ongoing production).4
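As a rough illustration of these kinetics, the post-peak decline can be modelled as simple exponential decay with a halving time of 24-36 hours; the 30-hour figure below is an assumed midpoint, not a value from the cited studies:

```python
# Project a falling procalcitonin level after inflammation resolves,
# assuming the level halves at a constant interval (24-36 h reported;
# 30 h used here as an assumed midpoint).
def pct_level(peak_ng_ml, hours_since_peak, halving_time_h=30.0):
    """Projected procalcitonin level (ng/ml) after the peak."""
    return peak_ng_ml * 0.5 ** (hours_since_peak / halving_time_h)

# A peak of 2.0 ng/ml needs three halvings (~90 h at a 30-h halving
# time) to come down to the 0.25 ng/ml decision threshold.
print(round(pct_level(2.0, 90), 2))  # 0.25
```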
Table 1: Points to remember: 5-11
1. Most bacterial infections will cause a rise in procalcitonin levels (levels >0.25 ng/ml).
2. The following bacterial infections will not cause a rise in procalcitonin levels:
a. Mycoplasma pneumoniae.
b. Chlamydia pneumoniae.
3. Parapneumonic effusions, empyema and lung abscesses may not cause a rise in procalcitonin levels.
4. Mycobacterium tuberculosis may or may not cause a rise in procalcitonin levels.
5. Viral infections will not cause a rise in procalcitonin levels (levels <0.25 ng/ml).
6. Amongst fungal organisms, candida infections can cause a rise in procalcitonin levels (levels >0.25 ng/ml).
7. Malaria can cause a rise in procalcitonin levels (levels >0.25 ng/ml).
8. Clostridium difficile colonisation will not cause a rise in procalcitonin levels (levels <0.25 ng/ml).
9. Lung cancers (especially neuroendocrine) and medullary thyroid cancers can cause a rise in procalcitonin levels (levels >0.25 ng/ml).
10. Renal insufficiency (which hinders clearance) can cause a rise in baseline procalcitonin levels.
11. Physiological stress can cause a rise in procalcitonin levels (levels >0.25 ng/ml). This includes trauma, surgery, burns, bowel ischaemia, cerebrovascular accident (infarct and haemorrhage), pancreatitis and any kind of shock-like state.
Community Acquired Pneumonia (CAP) and Procalcitonin
Since it can take 24-48 hours for procalcitonin to reach its peak level, in an acute clinical setting (where CAP is the diagnosis, or suspected) the decision to start antibiotics cannot depend on the initial procalcitonin level, given the high morbidity associated with CAP. Nevertheless, serial levels can help guide antibiotic therapy:
a. If procalcitonin levels are persistently <0.25 ng/ml in a CAP patient with suspected viral aetiology (based on history and investigations), antibiotics can be stopped. Keep in mind that procalcitonin levels do not normally rise in mycoplasma and chlamydia pneumonia.
b. Patients with suspected or known CAP should receive empirical antibiotics as per local protocol in the acute setting.
c. Antibiotics can be stopped in patients with suspected or known bacterial CAP who have received antibiotics for at least five days and shown clinical improvement, with procalcitonin levels dropping to <0.25 ng/ml.
d. CAP patients who are not improving clinically, and whose procalcitonin levels are rising or not decreasing, need a review of their antibiotics.
e. The optimal threshold for discontinuing antibiotic therapy has not been established.12
f. Procalcitonin levels have prognostic value; again, there is no optimal threshold, and serial levels have more prognostic value than a single level.
Ventilator Associated Pneumonia (VAP) and Procalcitonin
Patients with VAP are usually very unwell, and antibiotics should be started as soon as VAP is suspected. Procalcitonin can be used to guide stopping antibiotics in VAP patients. In the ProVAP trial, stopping antibiotics when the procalcitonin level dropped below 0.5 ng/ml, or by more than 80% from its peak value, did not result in adverse outcomes.
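The ProVAP stopping rule is easy to state programmatically. The helper below is a hypothetical illustration of the criterion as described above, not code from the trial protocol, and is no substitute for clinical judgement:

```python
# ProVAP-style stopping criterion: antibiotics may be stopped once
# procalcitonin falls below 0.5 ng/ml in absolute terms, or by more
# than 80% from its peak value.
def provap_stop_criterion(current_ng_ml, peak_ng_ml):
    dropped_below_absolute = current_ng_ml < 0.5
    dropped_over_80_percent = current_ng_ml < 0.2 * peak_ng_ml
    return dropped_below_absolute or dropped_over_80_percent

print(provap_stop_criterion(0.9, 6.0))  # True: fell >80% from peak
print(provap_stop_criterion(2.0, 4.0))  # False: neither condition met
```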
Acute Exacerbation of Chronic Obstructive Pulmonary Disease (AECOPD) and Procalcitonin
The use of procalcitonin to guide antibiotic therapy in patients with AECOPD has not yet been established. Some experts use the levels to help decide about stopping antibiotics (in a similar way to CAP, as above). Infections in AECOPD are less invasive and the pathogens differ from those in CAP, so procalcitonin levels may not correlate well with the severity of the episode. In one trial, antibiotic use was found to be of no benefit in patients with AECOPD and levels <0.1 ng/ml.13
Acute Bronchitis and Procalcitonin
Acute bronchitis is mostly caused by viral infections and does not need antibiotics. In patients where the need for antibiotics is unclear, serum procalcitonin levels can help in making this decision.
Summary
Table 2: Procalcitonin levels in lower respiratory tract infections:
Level (ng/ml)
Likelihood of bacterial infection
<0.10
Very unlikely
0.10 – 0.25
Unlikely
0.25 – 0.50
Likely
>0.50
Very likely
(These levels should aid clinical decision-making; decisions should not be based solely on them.)
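The bands in Table 2 translate directly into a simple lookup. This is an illustrative sketch only; as the note above stresses, these levels aid rather than replace clinical judgement:

```python
# Map a serum procalcitonin level (ng/ml) to Table 2's likelihood band
# for bacterial lower respiratory tract infection.
def bacterial_likelihood(pct_ng_ml):
    if pct_ng_ml < 0.10:
        return "very unlikely"
    elif pct_ng_ml < 0.25:
        return "unlikely"
    elif pct_ng_ml < 0.50:
        return "likely"
    else:
        return "very likely"

print(bacterial_likelihood(0.30))  # likely
```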
In an acute clinical setting where pneumonia is suspected or is the cause of sepsis, empirical antibiotics should be started according to local protocol without considering serum procalcitonin levels. If serial serum procalcitonin levels remain below 0.10 ng/ml on day 3, antibiotics can be stopped, aided by clinical judgement. Bear in mind that certain bacterial infections do not cause a rise in serum procalcitonin levels. The levels also have prognostic value in CAP and VAP. Acute bronchitis is usually a viral illness; if symptoms are not improving or bacterial infection is suspected, a raised serum procalcitonin level can support the clinical judgement to start antibiotics. In infective AECOPD, the levels are not very helpful in deciding whether to start antibiotic therapy. In respiratory tract infections where the patient has received an adequate duration of antibiotic therapy and procalcitonin levels fall below 0.10 ng/ml, treatment can be stopped safely (if clinical judgement allows). See Table 2.
Colorectal cancer is the fourth most common cancer in the United Kingdom (UK), and accounts for 10% of all cancer deaths.1 The symptoms of colorectal cancer are often non-specific and in its early stages there may be no symptoms at all. Survival is directly linked to stage of disease at diagnosis – five-year survival falls from 98% for stage I disease down to 40% for stage IV disease.2
Thirty percent of all colorectal cancers are diagnosed via the ‘Two-Week Wait’ (2WW) referral route in the UK. The remainder are diagnosed following emergency presentation (24%), non-2WW GP referral (24%), bowel cancer screening (9%) or by other pathways (13%).3
The 2WW referral pathway for patients with suspected cancer was introduced in 2000 by the NHS Cancer Plan,4 building on the earlier recommendations of the Calman-Hine report into commissioning cancer services.5 These improvements sought to address the United Kingdom’s relatively low cancer survival rates compared with the rest of Europe, and the delays in diagnosis and treatment that some patients were encountering. In order to standardise cancer care nationally, 2WW referral guidelines were introduced by the Department of Health in 2000.6 These guidelines were reviewed and updated in 2005 by the National Institute for Health and Care Excellence (NICE).7
In November 2015, NICE updated all its 2WW referral guidelines, including those for suspected colorectal malignancy.8 The recommendations were developed following a systematic review of the literature which recommended referral for patients with symptoms deemed to have a positive predictive value for colorectal cancer of 3% or more. This was a reduction from the previous guidelines, which used a positive predictive value of greater than 5%.9 The original (2005) and updated (2015) NICE colorectal 2WW referral guidelines are outlined in Table 1.
This study measured the effect of the change in colorectal 2WW referral guidelines on the following outcomes:
Volume of referrals to the colorectal 2WW clinic
Rate of detection of colorectal cancer
Rate of detection of non-colorectal cancer
Adherence to the 2WW referral guidelines
Table 1. Summary of the 2005 and 2015 NICE Two-Week Wait referral guidelines for suspected colorectal cancer 7,8
2005 Criteria
2015 Criteria
Age >40 with rectal bleeding and a change in bowel habit for >6 weeks
Age >40 with unexplained weight loss and abdominal pain
Age >60 with rectal bleeding without a change in bowel habit for >6 weeks
Age >50 with unexplained rectal bleeding
Age >60 with change in bowel habit without rectal bleeding for >6 weeks
Age >60 with change in bowel habit or iron-deficiency anaemia
Right lower abdominal mass consistent with involvement of the large bowel
Positive faecal occult blood test
Palpable rectal mass (intra-luminal)
Palpable rectal or abdominal mass
Unexplained iron-deficiency anaemia in: non-menstruating women with an Hb <10 g/100 mL; men with an Hb <11 g/100 mL
Age <50 with rectal bleeding and one of: abdominal pain; change in bowel habit; weight loss; iron-deficiency anaemia
Methods and materials
We undertook a retrospective analysis of referrals to the colorectal 2WW service at a large inner-city teaching hospital (Bristol Royal Infirmary, UK). All patients referred in the two-month periods before (July to August 2015) and after (July to August 2016) the change in referral guidelines were included in the study, and their clinical notes were reviewed. The specific variables recorded for each referral were: age, gender, presenting symptoms and signs, and subsequent diagnosis. All records were cross-referenced against the regional cancer registry.
Differences between the two groups were assessed for statistical significance using chi-squared and unpaired t-tests. Count data were assessed for significance using the Poisson means test at a 95% confidence interval. Statistical tests were calculated using the MEDCALC statistical software.
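One way to realise a Poisson means test for two counts is to condition on their total: if the underlying rates are equal, the first count follows a Binomial(n1 + n2, 0.5) distribution, giving an exact two-sided p-value. This construction is shown for illustration only and is an assumption; MEDCALC's exact procedure may differ in detail:

```python
# Exact conditional test comparing two Poisson counts (here the 193 vs.
# 268 referrals). Under equal rates, given the total, the first count is
# Binomial(total, 0.5); double the lower tail for a two-sided p-value.
from math import comb

n1, n2 = 193, 268
total = n1 + n2
lower_tail = sum(comb(total, k) for k in range(min(n1, n2) + 1))
p = 2 * lower_tail / 2 ** total

print(p < 0.01)  # True: consistent with the reported p < 0.01
```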
Results
A total of 193 and 268 patients were referred in each of the two study periods. The data collection was complete for all patients. The demographics, referral data, and cancer detection rates are summarised in Table 2.
There was a significant increase in the volume of patients referred via the 2WW pathway following the change in the guidelines (193 vs. 268, p<0.01). There was no significant change in the rate of colorectal cancers detected (8.3% vs. 7.5%, p=0.75).
There was no significant difference in the rate of detection of any cancer (including colorectal cancer) following the 2WW referral (11.4% vs 10.8%, p=0.83). The non-colorectal cancers detected (15 in total) were predominantly metastatic cancers; from lung, ovarian, or prostatic primary malignancies. There was no significant difference in the detection rate of non-colorectal cancers (3.1% vs. 3.4%, p=0.85).
Compliance with the referral guidelines was significantly higher following the update in referral guidelines (72% vs. 89%, p<0.01).
In the second study period (July - August 2016), there was a sub-group of 31 patients whose referrals met the new (2015) referral guidelines but would not have met the previous (2005) referral guidelines. The mean age in this group was 58.5 years, and none of these patients had a cancer detected following the 2WW referral.
Table 2. Summary of the results
Jul-Aug 2015
Jul-Aug 2016
p value
Patients referred
193
268
<0.01 a
Cancers detected (% of total)
22 (11.4%)
29 (10.8%)
0.74 a 0.83 b
Colorectal cancers (% of total)
16 (8.3%)
20 (7.5%)
0.58 a 0.75 b
Non-Colorectal cancers (% of total)
6 (3.1%)
9 (3.4%)
0.61 a 0.85 b
% of referrals compliant with the guidelines (at that time)
72%
89%
<0.01 b
Mean age in years (Median age, range)
68.2 (69, 24-92)
67.9 (69, 22-93)
0.81 c
Sex ratio (M : F)
43 : 56
46 : 53
Frequency of referral signs/symptoms (%)
Change in bowel habit
60
63
0.51 b
Rectal bleeding
33
39
0.18 b
Abdominal pain
37
33
0.37 b
Unexplained weight loss
22
20
0.60 b
Iron deficiency anaemia
27
22
0.21 b
Statistical test used: a Poisson Means Test, b Chi squared test, c Unpaired t-test
Discussion
This study has shown that the volume of patients being referred to the colorectal 2WW service has significantly increased in a large inner city unit following the update to referral guidelines in 2015. A significantly greater proportion of referrals are compliant with the new guidelines compared with the previous guidelines. Despite this, we found no significant change in the rate of colorectal cancer detection. Our colorectal cancer detection rates following 2WW referral are similar to the published data series (6-14%).10,11,12
The factors contributing to the increased referral rate include the removal of time constraints and referral for symptoms not previously included within the guidelines (e.g. abdominal pain, unexplained weight loss). The updated guidelines are consequently less specific and use signs and symptoms with a lower positive predictive value for colorectal cancer than previously.8
In their costing statement for the new guidelines, NICE acknowledge that the updated guidelines are likely to increase referral volumes. The justification given is that “benefits are anticipated from earlier diagnosis of cancer”.9 This study challenges that supposition – no cancers were detected in the latter group of 31 patients whose referrals met the new guidelines, but would not have met the old referral guidelines.
Studies prior to the update in guidelines have also challenged the view that 2WW referrals lead to earlier detection of cancer. When compared with ‘non-2WW’ outpatient referrals, patients referred via a 2WW pathway had no significant difference in the stage of disease at diagnosis,13,14 nor any significant difference in the related outcomes such as 2-year survival,15,16 5-year survival,15,17 or proportion undergoing curative surgery.14,15
Bowel cancer screening remains the only method with a strong evidence base for detecting colorectal cancers at an earlier stage.18 Cancers detected in this manner are disproportionately lower in stage,19 and are associated with a significant reduction in mortality.20 This study did not assess the impact of screening on cancer detection rates via the 2WW referral process, although the logical effect of increased detection of cancers via screening would be a proportional fall in cancers detected by other routes, including the 2WW pathway.
The findings of this study appear to challenge the anticipated benefits of the new 2WW referral guidelines. A group of patients was identified whose referrals met only the 2015 guidelines; these referrals would have been deemed inappropriate under the 2005 guidelines. These patients were generally younger, and none went on to a cancer diagnosis. If other units (or multi-centre studies) corroborate these findings, this should prompt urgent review of the 2WW guidelines with regard to cancer stage at diagnosis and longer-term outcomes.
Conclusion
The updated 2WW referral guidelines for suspected colorectal cancers have increased the volume of patients being seen via the 2WW service without increasing cancer detection rates. This is anticipated to have secondary effects on waiting times for routine and endoscopic services; this has not been evaluated in this study. Further research is needed to contextualise all of these findings with cancer detection rates via screening and other non-2WW routes to diagnosis.
According to DSM-5, delirium is defined as a disturbance in attention (i.e., reduced ability to direct, focus, sustain, and shift attention) and awareness (reduced orientation to the environment). This disturbance develops over a short period of time, represents an acute change from baseline attention and awareness, and tends to fluctuate in severity during the course of a day.
The focus of research has shifted from treatment to prevention of the syndrome, and there is a need to study risk factors for the prevention of delirium1. Data on delirium in the intensive care unit are scarce in the Indian subcontinent2.
A multicentre study indicated that the risk factors significantly contributing to delirium were related to patient characteristics (smoking, daily use of more than 3 units of alcohol, living alone at home), chronic pathology (pre-existing cognitive impairment), acute illness (use of drains, tubes and catheters, use of psychoactive medication, a preceding period of sedation, coma or mechanical ventilation) and the environment (isolation, absence of visits, absence of visible daylight, transfer from another ward, use of physical restraints)1. Psychoactive medications can provoke a delirious state; lorazepam has an independent and dose-related temporal association with delirium3.
Each additional day spent in delirium is associated with 20% increased risk of prolonged hospitalisation and 10% increased risk of death4.
Hence, the present study was done to assess risk factors and precipitating factors of delirium in a medical intensive care unit of a tertiary care hospital.
Materials and methods:
This was an observational study conducted over a period of 1 year in a tertiary care medical college hospital in the southern part of India. Approval for the study was obtained from the institutional ethics committee.
All patients admitted to the medical intensive care unit of our tertiary care hospital were screened for the presence of delirium during the first 72 hours of admission using the Richmond Agitation-Sedation Scale (RASS) and the Confusion Assessment Method for the ICU (CAM-ICU). Patients with delirium were classified as delirious and the remainder as non-delirious. Comatose patients (RASS score -4 or -5) were excluded from the study.
Patients were initially screened with the Richmond Agitation-Sedation Scale (RASS), a 10-point scale with 4 levels of agitation (+1 to +4) and 5 levels of sedation (-1 to -5); level zero indicates a calm and alert patient. Patients with a RASS score of -4 or -5 (deeply sedated or unarousable) were excluded from the study. Patients with a RASS score of +4 to -3 were then screened for the presence of delirium using the Confusion Assessment Method for the ICU (CAM-ICU). The CAM-ICU has 4 criteria:
1) Acute onset and fluctuating course of delirium
2) Inattention
3) Disorganized thinking
4) Altered level of consciousness
The diagnosis of delirium requires the presence of criteria 1 and 2 and of either criterion 3 or 4.
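The screening flow described above (RASS eligibility followed by the four CAM-ICU criteria) reduces to a simple boolean rule. The sketch below is illustrative only, not a clinical tool, and the function and parameter names are our own:

```python
def eligible_for_cam_icu(rass: int) -> bool:
    """Patients at RASS -4 or -5 (deeply sedated/unarousable) are excluded."""
    return -3 <= rass <= 4

def cam_icu_positive(acute_fluctuating: bool, inattention: bool,
                     disorganized_thinking: bool,
                     altered_consciousness: bool) -> bool:
    """Delirium requires criteria 1 and 2, plus either criterion 3 or 4."""
    return (acute_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: acute fluctuating course + inattention + altered consciousness
print(cam_icu_positive(True, True, False, True))   # True (delirious)
print(eligible_for_cam_icu(-4))                    # False (excluded)
```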
Risk factors for developing delirium were assessed in the study population. Risk factors are proven factors, which may be present before the patient's admission to the intensive care unit, that predispose the patient to develop delirium; they were compared between delirious and non-delirious patients. The risk factors assessed were: history of diabetes and hypertension; previous stroke; previous cognitive impairment; previous psychiatric illness; previous trauma; previous episodes of delirium; bowel and bladder disturbances prior to admission (such as constipation and urinary retention respectively); alcohol abuse (consumption of more than 2 units of alcohol); smoking (more than 10 cigarettes per day); consumption of substances other than cigarettes and alcohol (such as cannabis, cocaine etc.); uncorrected visual or hearing disturbances before admission; usage of barbiturates (such as phenobarbital), benzodiazepines (such as alprazolam, chlordiazepoxide, clobazam, clonazepam) and opioids (such as morphine) before admission; and usage of sedatives (such as haloperidol, midazolam, fentanyl) and painkillers (such as morphine, tramadol) at the time of admission. Metabolic risk factors compared between delirious and non-delirious subjects were uraemia, hyponatremia, hyperbilirubinemia, and metabolic and respiratory acidosis.
Precipitating factors were defined as factors that were the likely causes of delirium in delirious patients. The precipitating factors examined were exposure to toxins (alcohol/drugs), deranged metabolic parameters, infections and central nervous system causes.
Statistics were calculated using SPSS version 21. The independent-samples t-test and the Pearson Chi-square test were used to assess differences between delirious and non-delirious subjects. Odds ratios (OR) were calculated for all factors using univariate binary logistic regression.
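The univariate odds ratios were computed in SPSS; as an illustration only, an odds ratio with its 95% Wald confidence interval can be reproduced by hand from a 2x2 table. The sketch below uses the bowel and bladder disturbance counts from Table 1 and recovers the tabulated OR of 2.6 (1.8-3.7):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% Wald CI for a 2x2 table.

    a: exposed with delirium,   b: exposed without delirium,
    c: unexposed with delirium, d: unexposed without delirium.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Bowel & bladder disturbances (Table 1): 56/350 delirious, 69/1107 non-delirious
or_, lo, hi = odds_ratio_ci(56, 69, 350, 1107)
print(f"OR = {or_:.2f} (95% CI {lo:.1f}-{hi:.1f})")  # OR = 2.57 (95% CI 1.8-3.7)
```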
Results:
A total of 1582 patients were enrolled in the study, of whom 406 were diagnosed with delirium; thus 25.7% of patients developed delirium within the first 72 hours of admission. Hypoactive delirium was present in 52% and hyperactive delirium in 48% of patients. Patients who experienced delirium (57.5 ± 17 years) were older than their non-delirious counterparts (53.3 ± 18.1 years) (p <0.0001). Among delirious subjects, the majority were in the age group of 61-70 years (Figure 1).
Figure 1- Age distribution among delirious patients
Females comprised 38.2% of delirious and 39.3% of non-delirious patients; males comprised 61.8% and 60.7% respectively.
Alcohol consumption [OR = 6.54 (95% CI 3.76-11.4), p = 0.0001], previous psychiatric illness [OR = 3.73 (95% CI 1.712-8.159), p = 0.033], previous cognitive impairment [OR = 2.739 (95% CI 1.509-4.972), p = 0.001], sedative usage at the time of admission [OR = 2.488 (95% CI 1.452-4.264), p = 0.001], visual disturbances [OR = 2.227 (95% CI 1.328-3.733), p = 0.002] and bowel and bladder disturbances [OR = 1.677 (95% CI 1.044-2.693), p = 0.032] were significant risk factors contributing to delirium on univariate analysis (Table 1). Metabolic acidosis [OR = 1.996 (95% CI 1.469-2.711), p = 0.0001] and hyperbilirubinemia [OR = 1.448 (95% CI 1.111-1.886), p = 0.006] were significant metabolic parameters contributing to delirium on univariate analysis (Table 2).
Precipitating factors (Table 3) for delirium are those factors that were considered the most likely causes of delirium among the delirious patients. Precipitating factors for delirium were classified into toxins, deranged metabolic parameters, infections and central nervous system causes, of which metabolic parameters were most common. Among metabolic parameters, uraemia (25.1%), hepatic encephalopathy (22.7%) and hyponatremia (19.5%) contributed to the majority of cases with delirium.
Table 1 – Univariate analysis of risk factors of delirium
NO DELIRIUM
DELIRIUM
COUNT
%
COUNT
%
P
UNIVARIATE
Diabetes
No
729
62
226
55.7
.025
1.3(1.1-1.6)
Yes
447
38
180
44.3
Hypertension
No
684
58.2
239
58.9
.8
.97(0.8-1.2)
Yes
492
41.8
167
41.1
History of Stroke
No
1107
94.1
379
93.3
.6
1.14(0.7-1.8)
Yes
69
5.9
27
6.7
Previous memory disturbances
No
1149
97.7
264
89.7
<.0001
4.9(2.9-8)
Yes
27
2.3
42
10.3
Previous psychiatric illness
No
1161
98.7
386
95.1
<.0001
4(2-7.9)
Yes
15
1.3
20
4.9
Trauma
No
1137
96.7
396
97.8
.3
0.6(0.3-1.3)
Yes
39
3.3
9
2.2
Previous episodes of delirium
No
1155
98.2
402
99
.3
0.55(0.2-1.6)
Yes
21
1.8
4
1.0
Bowel & bladder disturbances
No
1107
94.1
350
86.2
<.0001
2.6(1.8-3.7)
Yes
69
5.9
56
13.8
Alcohol
No
1089
92.6
336
82.8
<.0001
2.6(1.8-3.7)
Yes
87
7.4
70
17.2
Smoking
No
981
83.4
354
87.2
.07
0.7(0.5-1.03)
Yes
195
16.6
52
12.8
Other substance abuse (apart from cigarettes and alcohol)
No
1071
91.1
391
96.3
.001
0.4(0.22-0.6)
Yes
105
8.9
15
3.7
Visual disturbances
No
1062
90.3
298
73.4
<.0001
3.4(2.5-4.5)
Yes
114
9.7
108
26.6
Hearing disturbances
No
1104
93.9
338
83.3
<.0001
3.1(2.2-4.4)
Yes
72
6.1
68
16.7
Barbiturates
No
1155
98.2
401
98.8
.5
0.7(0.3-1.8)
Yes
21
1.8
5
1.2
Benzodiazepines
No
1155
98.2
400
98.5
.7
0.8(0.3-2.1)
Yes
21
1.8
6
1.5
Opioids
No
1176
100
405
99.8
.9
4.7 (0-Inf)
Yes
0
.0
1
.2
Sedatives usage in present admission
No
1143
97.2
369
90.9
<.0001
3.5(2.1-5.6)
Yes
33
2.8
37
9.1
Pain killers usage in present admission
No
1080
91.8
400
98.5
<.0001
0.17(0.07-0.39)
Yes
96
8.2
6
1.5
Table 2- Univariate analysis of metabolic parameters
NO DELIRIUM
DELIRIUM
COUNT
%
COUNT
%
P
UNIVARIATE
Uraemia
NO
648
55.1
186
45.8
0.001
1.45(1.2-1.8)
YES
528
44.9
220
54.2
Hyponatremia
NO
645
54.8
202
49.8
0.08
1.2(0.98-1.5)
YES
531
45.2
204
50.2
Hyperbilirubinemia
NO
837
71.2
246
60.7
<0.0001
1.6(1.3-2)
YES
339
28.8
159
39.3
Metabolic acidosis
NO
990
84.2
286
70.4
<0.0001
2.2(1.7-2.9)
YES
186
15.8
120
29.6
Respiratory acidosis
NO
1092
92.9
377
92.9
1
1(0.6-1.5)
YES
84
7.1
29
7.1
Table 3- Precipitating factors of delirium in the present study
PRECIPITATING FACTORS
%
Toxins
Drug or Alcohol overdosage
1.5
Alcohol withdrawal
2.7
Metabolic conditions
Hyponatremia
19.5
Hyperglycaemia
6.2
Hypoglycaemia
2.5
Hypercarbia
5.7
Uraemia
25.1
Hepatic encephalopathy (hyperammonemia)
22.7
Infections
Systemic infective causes
16.5
Meningitis/ Encephalitis
8.9
Central Nervous System causes
Hypoperfusion states
14.5
Hypertensive encephalopathy
5.9
Cerebrovascular accident (CVA)
7.6
Intracranial space occupying lesion (ICSOL)
5.4
Seizures
10.3
Psychiatric illness
4.9
Discussion:
Delirium is classified into hyperactive, hypoactive and mixed types. The hyperactive subtype is present if there is definite evidence in the previous 24 hours of at least two of the following: increased quantity of motor activity, loss of control of activity, restlessness and wandering. The hypoactive subtype is present if there is definite evidence in the previous 24 hours of at least two of the following: decreased amount of activity, decreased speed of actions, reduced awareness of surroundings, decreased amount of speech, decreased speed of speech, listlessness, reduced alertness and withdrawal. The mixed subtype is present if there is evidence of both hyperactive and hypoactive subtypes in the previous 24 hours5. The percentage of patients with hypoactive delirium was high in this study (52%). Hypoactive delirium often carries a relatively poor prognosis, occurs more commonly in elderly patients, and is frequently overlooked or misdiagnosed as depression or a form of dementia.
In the present study, delirium was more prevalent in the elderly population. Most of the elderly patients will have multiple risk factors making them more vulnerable to delirium. Delirium is often the only sign of an underlying serious medical illness in an elderly patient and particular attention should be given to identify and correct the underlying illness.
History of alcohol consumption of more than 2 units per day prior to admission was the major risk factor contributing to delirium in this study (OR = 6.54). This was similar to the studies by Bart et al1 and Ouimet et al6, in which consumption of more than 3 units (OR 3.23) and more than 2 units of alcohol (OR 2.03) respectively was a significant risk factor for delirium. Patients with a previous psychiatric illness were at increased risk of delirium in this study (OR = 3.73); however, other studies examining its contribution to delirium were not available. Previous cognitive impairment was a significant risk factor contributing to delirium (OR = 2.73). The study by Bart et al1 found that previously diagnosed dementia was an important risk factor (OR = 2.41); a positive correlation with dementia was also reported by McNicoll et al7 (RR 1.4) and Pisani et al8 (OR 6.3). Usage of sedatives at the time of admission (OR = 2.48) was a significant risk factor for developing delirium. Bart et al1 found that the use of psychoactive medication may disturb neurotransmission in the brain, provoking a delirious state, and that use of benzodiazepines is a risk factor for delirium (OR = 3.34). Pandharipande et al3 found that lorazepam was an independent risk factor for daily transition to delirium (OR = 1.2), and Pisani et al8 found that use of benzodiazepines was a significant risk factor for developing delirium (OR = 3.4). Uncorrected visual disturbances were a significant risk factor for developing delirium in this study (OR = 2.22); Inouye et al9 found that vision impairment was an independent baseline risk factor for delirium (adjusted relative risk 3.5). Bowel and bladder disturbances were a significant risk factor contributing to delirium in this study (OR = 1.67); Morley10 opined that constipation is a frequent, often overlooked precipitating factor for delirium.
Tony et al11 were of the opinion that a careful history and physical examination, including a rectal examination with consideration of disimpaction, may be helpful in assessing and managing delirious patients. Waardenburg12 concluded that significant urinary retention can precipitate or exacerbate delirium, a disorder referred to as cystocerebral syndrome; Liem and Carter13 suggested that the increased sympathetic tone and catecholamine surge triggered by tension on the bladder wall may contribute to delirium. Metabolic acidosis and hyperbilirubinemia were significant metabolic parameters contributing to delirium in this study. Similar findings were reported by Aldemir et al14.
Among delirious patients, the most common precipitating factors for delirium in this study were uraemia (25.1%), hepatic encephalopathy (22.7%) and hyponatremia (19.5%). Alterations of serum electrolytes and renal function predispose to delirium15. Hyponatremia causes delirium, although the mechanism is not well understood16, 17. A blood urea nitrogen/creatinine ratio greater than 18 is an independent risk factor for delirium in general medical patients9. Hepatic failure leads to hyperammonemia, which causes excessive NMDA (N-methyl-D-aspartate) receptor activation, resulting in dysfunction of the glutamate-nitric oxide-cGMP pathway and impaired cognitive function in hepatic encephalopathy18. Excess activation of NMDA receptors results in neuronal degeneration and death19. In hepatic failure, there may also be a shift in regional cerebral blood flow and cerebral metabolic rates from the cortex to the subcortex, resulting in delirium20.
Patients who develop delirium during their hospital stay have higher 6-month mortality rates, longer hospital stays, increased economic burden and a higher incidence of cognitive impairment at hospital discharge21. A limitation of this study is that long-term follow-up of patients who developed delirium was not performed.
Conclusion:
Delirium is common in intensive care unit patients, with hypoactive delirium the more common subtype. The major risk factor contributing to delirium was alcohol consumption before admission, and the most common precipitating factors were deranged metabolic parameters.
Delirium in ICU patients, especially hypoactive delirium, is easily missed. Hence, all ICUs should implement both the RASS and the CAM-ICU for early detection of delirium. Future research should be directed at the development of scoring systems for the detection of delirium that are easy to use and accurate.
Hyperglycaemia is a condition in which an excessive amount of glucose circulates in the blood plasma. The origin of the term is Greek: hyper-, meaning excessive; -glyc-, meaning sweet; and -aemia, meaning "of the blood". Hyperglycaemia, or high blood glucose, is a serious health problem for those with diabetes. Normal fasting glucose is <100 mg/dl, impaired fasting glucose is 100-125 mg/dl, and diabetes mellitus is defined as a fasting glucose ≥126 mg/dl1. Several values above normal are required before a diagnosis of impaired fasting glucose or diabetes is made. Two types of hyperglycaemia are seen in diabetic patients: fasting hyperglycaemia, defined as a blood sugar greater than 126 mg/dl after fasting for at least 8 hours, and postprandial (after-meal) hyperglycaemia, defined as a blood sugar usually greater than 180 mg/dl. In people without diabetes, postprandial sugars rarely exceed 140 mg/dl, but occasionally, after a large meal, a 1-2 hour post-meal glucose level can reach 180 mg/dl.
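The fasting-glucose thresholds quoted above amount to a simple classification rule. A minimal sketch follows (the function name is ours, and, as the text notes, a single reading is not diagnostic in practice):

```python
def classify_fasting_glucose(mg_dl: float) -> str:
    """Classify a fasting glucose reading (mg/dl) by the quoted thresholds."""
    if mg_dl < 100:
        return "normal"
    elif mg_dl <= 125:
        return "impaired fasting glucose"
    else:  # >= 126 mg/dl
        return "diabetes mellitus range"

for value in (92, 110, 140):
    print(value, "->", classify_fasting_glucose(value))
```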
Consistently elevated post-meal glucose levels can indicate that a person is at high risk of developing type 2 diabetes. Stress hyperglycaemia, also called stress diabetes or diabetes of injury, is a medical term referring to a transient elevation of blood glucose due to the stress of illness. It usually resolves spontaneously but must be distinguished from the various forms of diabetes mellitus. It is often discovered when routine blood chemistry measurements in an ill patient reveal an elevated blood glucose. Blood glucose can be assessed either by a bedside 'fingerstick' glucose meter or by a laboratory plasma glucose measurement. The glucose is typically in the range of 140-300 mg/dl but can occasionally exceed 500 mg/dl, especially if amplified by drugs or intravenous glucose. The blood glucose usually returns to normal within hours unless predisposing drugs and intravenous glucose are continued.
Stress hyperglycaemia is especially common in patients with hypertonic dehydration and those with elevated catecholamine levels. Steroid diabetes is a specific and prolonged form of stress hyperglycaemia. In some people, stress hyperglycaemia may indicate a reduced insulin secretory capacity or reduced insulin sensitivity, and is sometimes the first clue to incipient diabetes. Because of this, it is occasionally appropriate to perform diabetes screening tests after recovery from an illness in which significant stress hyperglycaemia occurred (Table 1).
Acute hyperglycaemia is common in patients with ST-elevation myocardial infarction (STEMI) even in the absence of a history of type 2 diabetes mellitus (DM). Hyperglycaemia is encountered in up to 50% of all STEMI patients, whereas previously diagnosed DM is present in only 20% to 25% of STEMI patients2. The prevalence of type 2 DM or impaired glucose tolerance may be as high as 65% in myocardial infarction patients without prior DM when oral glucose tolerance testing is performed3. Elevated plasma glucose and glycated haemoglobin levels on admission are independent prognosticators of both in-hospital and long-term outcome regardless of diabetic status4, 5. For every 18-mg/dl increase in glucose level, there is a 4% increase in mortality in non-diabetic subjects6. When the admission glucose level exceeds 200 mg/dl, mortality is similar in non-DM and DM subjects with myocardial infarction (MI). Admission glucose has been identified as a major independent predictor of both in-hospital congestive heart failure and mortality in STEMI7.
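The quoted dose-response (a 4% relative increase in mortality per 18 mg/dl rise in admission glucose, i.e. per 1 mmol/L, in non-diabetic subjects) can be illustrated numerically. The 100 mg/dl baseline and the multiplicative compounding across steps are assumptions made purely for this illustration:

```python
def relative_mortality_risk(glucose_mg_dl: float,
                            baseline_mg_dl: float = 100.0) -> float:
    """Relative mortality vs. baseline, assuming a 4% increase per
    18 mg/dl (= 1 mmol/L of glucose) step, compounded multiplicatively."""
    steps = (glucose_mg_dl - baseline_mg_dl) / 18.0
    return 1.04 ** steps

# 190 mg/dl is 5 steps (90 mg/dl = 5 mmol/L) above the assumed baseline
print(f"{relative_mortality_risk(190):.2f}x relative mortality")
```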
Fasting glucose on the day after admission appears to be a better predictor of early mortality than the glucose level on admission8. Patients with both an elevated admission glucose and an elevated fasting glucose the next day have a 3-fold increase in mortality. Similarly, failure of an elevated glucose level to fall within 24 hours of admission is associated with excess mortality in STEMI patients without DM9. The presence and degree of hyperglycaemia may not correlate with infarct size, as is commonly thought6. Counter-regulatory hormones such as catecholamines, growth hormone, glucagon and cortisol are released in proportion to the degree of cardiovascular stress and may cause hyperglycaemia and an elevation of free fatty acids, both of which lead to an increase in hepatic gluconeogenesis and a decrease in insulin-mediated peripheral glucose disposal.
Pathogenesis of hyperglycaemia
When acute coronary artery occlusion leads to symptoms (and this is not always the case), there is stimulation of postganglionic sympathetic nerve endings, with release of norepinephrine, and of the adrenal medulla, with release of epinephrine. Both catecholamines are present in high concentrations in plasma and urine during the first 24-48 hours after the onset of symptoms. Their plasma concentrations reach high levels within the first few hours after the onset of symptoms and appear to be related to the severity of the infarct. Norepinephrine acts through beta-adrenergic receptors to activate the adenyl cyclase system in adipose tissue, causing conversion of adenosine triphosphate (ATP) to cyclic adenosine monophosphate (cyclic AMP); cyclic AMP activates a lipolytic system leading to hydrolysis of stored triglycerides to diglycerides, free fatty acids (FFA) and glycerol. While some re-esterification of FFA occurs, the net effect is release of FFA and glycerol into the circulation. In acute myocardial infarction, plasma FFA concentrations are elevated within 4 hours of the onset of symptoms; the highest values are found on the first day, and normal values are usually reached by the sixth day.
Glycerol levels are also elevated. There is a close relationship between blood catecholamine and FFA values in myocardial infarction. Epinephrine has a weak effect on adipose tissue lipolysis but its main action at this time is to stimulate glycogenolysis in liver and muscle with elevation of blood glucose levels. Epinephrine also suppresses beta cell activity in the pancreas with a decrease in insulin secretion leading to further elevation of blood glucose. Thus, hyperglycaemia occurs after acute myocardial infarction, and more than half of these patients have an abnormal glucose tolerance test during the first 72 hours of the attack. Reduction of insulin secretion has been demonstrated in patients after acute myocardial infarction following an intravenous glucose load and an intravenous Tolbutamide test. The degree of failure in these responses has been positively correlated with the severity of the illness and with the presence of cardiogenic shock.
Cortisol secretion and plasma growth hormone levels are increased during the first 24 hours after the onset of acute myocardial infarction. As the clinical condition improves, the degree of glucose intolerance diminishes and insulin secretion increases. In the second week, plasma insulin levels are above normal, and at this stage the anabolic effect of insulin in enhancing the transport of amino acids into cells and their incorporation into protein is important for repair of the injured myocardium. Cortisol stimulates the breakdown of protein for gluconeogenic purposes, as well as the key gluconeogenic enzymes, but it is doubtful whether these actions operate until the acute period has passed10 (Figure 1).
Figure 1
Cardiovascular effects of hyperglycaemia
Acute hyperglycaemia is associated with numerous adverse effects that contribute to a poor outcome in STEMI. Acute hyperglycaemia rapidly suppresses flow-mediated vasodilatation, likely through increased production of oxygen-derived free radicals11. Hyperglycaemia increases intranuclear nuclear factor-κB binding and activates proinflammatory transcription factors, which increase the expression of matrix metalloproteinases, tissue factor and plasminogen activator inhibitor-1. The degree of oxidative stress correlates most closely with acute, not chronic, glucose fluctuations12.
Increased oxidative stress interferes with nitric oxide mediated vasodilatation and reduces coronary blood flow at the micro vascular level. In STEMI subjects, acute hyperglycaemia is associated with reduced TIMI grade 3 flow before intervention compared with euglycemia and is the most important predictor of the absence of coronary perfusion 13. Similarly, diabetic subjects have reduced myocardial blush grades and diminished ST-segment resolution after successful coronary intervention in STEMI, consistent with diminished micro vascular perfusion 14.
Acute hyperglycaemia is associated with impaired microcirculatory function, manifest as "no reflow" on myocardial contrast echocardiography after percutaneous coronary intervention15. Pre-existing HbA1c levels and diabetes status do not differ between subsets with and without no-reflow, suggesting that acute, not chronic, hyperglycaemia is the dominant factor. Finally, the well-known adverse effects of hyperglycaemia on platelet function, fibrinolysis, coagulation and ischaemic preconditioning likely contribute to the adverse effects of acute hyperglycaemia in STEMI. Hyperglycaemia is a reflection of relative insulinopenia, which is associated with increased lipolysis and free fatty acid generation, as well as diminished myocardial glucose uptake and a decrease in glycolytic substrate for myocardial energy needs in STEMI. Myocardial ischaemia results in an increased rate of glycogenolysis and glucose uptake via translocation of GLUT-4 receptors to the sarcolemma16. Because glucose oxidation requires less oxygen than free fatty acid oxidation per molecule of ATP produced, myocardial energetics are more efficient during the increased dependence on glucose oxidation with ischaemia.
With relative insulinopenia, however, the ischaemic myocardium is forced to use free fatty acids instead of glucose as an energy source because myocardial glucose uptake is acutely impaired. Thus, a metabolic crisis may ensue as the hypoxic myocardium becomes less energy-efficient in the setting of hyperglycaemia and insulin resistance. Acute hyperglycaemia may also precipitate an osmotic diuresis; the resulting volume depletion reduces end-diastolic volume and, via the Frank-Starling mechanism of the failing left ventricle, reduces stroke volume and hence cardiac output17 (Table 2).
Table 2: Acute Cardiovascular Effects of Hyperglycaemia
Impaired microcirculatory function (“no-reflow” phenomenon)
Impaired ischaemic preconditioning
Impaired insulin secretion and insulin based glucose uptake
Conclusion
An essential diagnostic feature of diabetes is increased blood glucose concentration and the principal aim of diabetes treatment is normalisation of blood glucose. Hyperglycaemia can also occur when normal hormonal control of blood glucose concentration is disturbed by the stress associated with acute myocardial infarction.
The blood glucose is raised in the immediate period following acute myocardial infarction irrespective of diabetes status. In this review article the current understanding of the significance of hyperglycaemia occurring as a result of acute myocardial infarction is discussed.
A significant part of the review is devoted to the epidemiological evidence confirming an association between hyperglycaemia and mortality following myocardial infarction.
It remains clear and undisputed that there is an association between hyperglycaemia and increased mortality following acute myocardial infarction. A review of articles ranging from experimental to clinical studies has demonstrated several mechanisms by which hyperglycaemia could adversely affect the outcome of myocardial infarction. The final part of the review considers whether treatment aimed at normalising blood glucose improves the outcome of patients with acute myocardial infarction who present with hyperglycaemia.
Calcinosis cutis involves deposition of calcium salts in the skin and subcutaneous tissue. It is commonly associated with autoimmune connective tissue diseases and can be a source of pain and disability1. It can occur in damaged or devitalised tissues in the presence of abnormal or even normal calcium/phosphorus metabolism. These calcifications can lead to contractures, muscle atrophy, skin ulceration and infections2. There are four types of calcinosis cutis: idiopathic, dystrophic, metastatic and iatrogenic. Determining the type of calcinosis is very important for accurate management3. Calcinosis cutis is seen mainly in middle-aged to elderly populations and has rarely been described in neonates in the medical literature. Here we discuss a neonate who presented to our Emergency department with a leg swelling.
Case Report
A 20-day-old full-term neonate was brought to our Emergency department with right leg swelling of ten days' duration. He was feeding well and was afebrile. On examination there was swelling of the right lower leg, including the right foot, with minimal redness of the overlying skin. X-rays of the right foot and leg showed a sheath of cutaneous calcification over the right foot (Images A and B) and the anterolateral aspect of the right leg (Images C and D).
Image A
Image B
Image C
Image D
There was no evidence of any bony destruction. The white cell count and other inflammatory markers were normal. On reviewing previous records, we found that soon after birth the neonate had been admitted with pneumonia, during which admission a calcium gluconate infusion had extravasated at the dorsum of the right foot; this explains the whitish sheath seen on imaging. Musculoskeletal ultrasound did not reveal any fluid collection or periosteal swelling. The patient was treated conservatively; regular follow-up was unremarkable and showed complete regression of the swelling three months later.
Discussion
Calcinosis cutis is an uncommon disorder caused by abnormal deposits of calcium phosphate in the skin of various parts of the body. It is often noted in the subcutaneous tissues in connective tissue diseases, primarily systemic lupus erythematosus, scleroderma and juvenile dermatomyositis4,7. Four main types of calcinosis cutis are recognised according to aetiology: that associated with localised or widespread tissue changes or damage (dystrophic calcification), that associated with abnormal calcium and phosphorus metabolism (metastatic calcification), that not associated with any tissue damage or demonstrable metabolic disorder (idiopathic calcification), and iatrogenic calcification2-3,6-7.
It is recommended that patients be evaluated for abnormalities of calcium and phosphorus metabolism and assessed for associated systemic conditions, such as collagen vascular diseases, renal insufficiency and vitamin D poisoning. Determining the exact type of calcinosis cutis is very important for selecting the correct management3. Many agents have been used for treatment of calcinosis, but none has been accepted as standard therapy. Case studies have shown that aggressive treatment of the underlying inflammatory condition with intravenous immunoglobulin, anti-TNF agents, thalidomide and haematopoietic stem cell transplantation has also led to improvement of the calcinosis1,3. Moreover, agents such as warfarin, bisphosphonates and diltiazem have been aimed at treating the calcification process itself, with varying success3. Some experts have advocated surgical excision in severe resistant cases4. Calcinosis cutis has rarely been reported in neonates, in whom it almost exclusively occurs from iatrogenic causes8. Calcium gluconate is widely used in the treatment of neonatal hypocalcaemia, a common problem in this age group. When extravasation of calcium gluconate occurs, swelling, erythema and signs of soft tissue necrosis or infection may be seen. Rarely, local calcification appears, termed calcinosis cutis9-10.
Plain radiography is the gold standard for diagnosis, but films are initially negative because the calcium solutions used therapeutically are radiolucent; X-ray findings usually appear within 1-3 weeks9. This is consistent with our case. The pathogenesis of calcinosis cutis caused by extravasation of intravenous calcium is degeneration and soft tissue necrosis11. If extravasation of calcium gluconate is suspected, the intravenous line must be removed immediately. Cold packs should be applied for 15 minutes four times a day to treat oedema at extravasation sites, and limb elevation for 48 hours is suggested12. Supportive care remains the mainstay of treatment; only in cases of skin necrosis and secondary infection should debridement and antibiotics be used8.
Calcinosis cutis in the neonate can easily be misdiagnosed as cellulitis, arthritis, pyogenic abscess, osteomyelitis or thrombophlebitis8. In the present case we also initially suspected an infectious aetiology. Initial X-rays can be misleadingly normal, as the calcification takes about ten days to precipitate. The clinical and radiological findings usually resolve over a span of 2-6 months, which is compatible with our case13.