A family carer or caregiver is someone who gives a substantial amount of unpaid care and support regularly to a relative, partner or friend. Currently, there are over 850,000 people living with dementia in the UK, of which two thirds are looked after in the community by primary carers, and the demands on individuals and families are set to increase1. Without the work of unpaid family carers, the formal care system would be likely to collapse.
Many people in the UK still do not feel comfortable talking about dementia, especially with their own families. A recent survey of more than 2,100 carers, in which 17% of respondents cared for a person with dementia, found that 75% of carers were not prepared for all aspects of caring. Nor were they prepared for the emotional impact, lifestyle or relationship changes of their caring role2. Failure to prepare and support carers in their role not only affects their own personal health and wellbeing, but can also lead to the early and potentially avoidable admission of people with dementia into formal care.
As dementia progresses, family members often provide care under high levels of stress for longer periods of time. The effects of being a family caregiver, though sometimes positive, are generally negative for their psychological and physical health, life expectancy and quality of life3. It is therefore important to educate carers of family members with dementia to improve their knowledge of, and attitude towards, people with dementia. Poor knowledge about dementia has been found to result in the underutilisation of support and treatment services, and in poorer outcomes for people with dementia and their caregivers, such as inadequate care of the disease, misinterpretation of behaviours and increased caregiver stress due to failure to seek appropriate support4.
Currently there is too much reliance on people with dementia and carers seeking out information for themselves. The result is that people do not receive the information they need because they do not know what to ask for. Despite the existence of information for carers, people report that their information needs are not met. Information is provided too late or not at all. A key problem is that people have to ask for information, rather than it being provided proactively.
It has been found that education and training programmes covering such information5, or individual training programmes6, improve attitudes towards caring for people with dementia as well as general knowledge of dementia7. Psychosocial interventions have also been demonstrated to reduce caregiver burden and depression, and to delay care home admission8. A systematic review9 of 44 randomised controlled trials found statistically significant evidence that group-based supportive interventions impact positively on caregivers of people with dementia.
Coon et al (2003)10 found that psychoeducational skills training, in small groups, improved both the affective states and the coping strategies used by caregivers. On the other hand, an information-orientated programme failed to improve caregivers' mood11, and a befriending scheme was not effective in improving carers' wellbeing12. Similarly, a randomised controlled trial did not show preventive effects of family meetings on the mental health of family caregivers13. Livingston et al (2013)6, on the other hand, found encouraging results of a manual-based coping strategy programme in their London study.
A suitable training programme is therefore required to build caregivers' knowledge and skills. We have developed a Dementia First Aid (DFA) course for the family carers of people with early dementia. This is a problem-solving, stress-reducing and crisis-preventing training programme. The DFA course was inspired by the principles of the Mental Health First Aid programme14, developed in Australia in 2001 and introduced to England in 2007 by the National Institute for Mental Health in England.
Dementia First Aid Course
Description of the course
The Dementia First Aid course is delivered over 4 hours in a group setting. Each participant received a course manual prepared by the author AJ. The content covered an overview of dementia, the impact of dementia on the individual, the impact of caring on families, mindfulness-based stress reduction training, and a detailed discussion of the Dementia First Aid Action Plan for crises associated with behavioural and psychological symptoms of dementia (BPSD).
In November-December 2013, a group of 8 health care professionals, working within the specialist mental health services for older people in Hertfordshire, were offered the 12-hour advanced Dementia First Aid course, followed by an additional 12-hour practice training of presenting the course to a group of family carers of people with recently diagnosed dementia.
Evolution of DFA course
The original 12-hour Dementia First Aid course was delivered over three half-days. Although the course was well received by both carers and trainers, the dropout rate was high. This was mainly because carers struggled to make alternative arrangements for looking after the person with dementia while they were away. The course was therefore shortened to 8 hours, and then to 4 hours, based on feedback received from the carers.
The main aim of this pilot evaluation was to investigate the potential benefits of a Dementia First Aid course in terms of the knowledge and attitude of family carers of people with newly diagnosed dementia.
Methods
The participants were the primary family caregivers of people with dementia residing in northwest Hertfordshire. The DFA course was organised once every two months from November 2015 to March 2017.
An invitation letter, along with details of the pilot assessment, was sent to all carers of people whose dementia had recently been diagnosed in the memory clinic, and all participants were given at least 4 weeks' notice prior to the course.
Selection criteria included: being aged 18 or above, being the primary carer of a person with newly diagnosed dementia (i.e. currently providing at least 20 hours of direct care per week) and residing in Hertfordshire.
The training was delivered by a pair of qualified DFA instructors, who were mental health professionals experienced in dementia care in the NHS. The training was conducted using a PowerPoint presentation, group work, and audio-visual clips based on a specially designed DFA manual.
Evaluation questionnaire
The participants were asked to complete a questionnaire on their own at the beginning of the programme. Oral consent was obtained from participants prior to filling out the questionnaire, and the participants were made aware that participation in the pilot assessment was voluntary and would not pose any barrier to joining the programme.
Participants were given the Alzheimer's Disease Knowledge Scale (ADKS)15, a questionnaire comprising 30 questions, before and after the training. They were also asked to complete the Zarit Burden Scale, a 12-item self-report scale16, to measure carer burden.
After 6 months the participants were contacted to complete the ADKS and the Zarit Burden Scale again. The ADKS was therefore completed three times and the Zarit Burden Scale twice during the study.
Statistical analysis
The data collected were analysed in two ways. First, ADKS data collected at pre-test were compared with post-test scores to examine the change in participants' knowledge. The participants' knowledge at the end of 6 months was also compared with the pre- and post-test scores. Similarly, Zarit Burden Scale scores at the time of the initial assessment were compared with scores 6 months post training. To evaluate the effect of the training, answers to the structured questions given at pre- and post-test and scores at 6 months were compared using a correlated group t-test.
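The "correlated group t-test" referred to above is a paired t-test on each participant's pre- and post-course scores. As a minimal sketch only (the scores below are invented for illustration, not the study's data), the calculation works as follows:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Correlated-group (paired) t-test: returns the t statistic and
    degrees of freedom for the within-participant score differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical pre/post ADKS scores for six carers (not study data)
pre = [14, 18, 16, 20, 15, 17]
post = [19, 22, 20, 24, 18, 21]
t, df = paired_t(pre, post)
```

The resulting t statistic is then compared against the t distribution with n−1 degrees of freedom to obtain p values such as those reported below.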
Results
The study sample comprised 65 people who had completed the DFA course. All completed the ADKS pre- and post-training and completed the Zarit Burden Scale; of these, 34 also provided follow-up data approximately 6 months later.
Sample characteristics:
Mean (±SD) age was 66.9 (±13.8) years (range 31-90); 23 attendees were male and 42 were female.
ADK scores
Looking first at all 65 attendees:
ADK scores for whole sample

        Pre-course   Post-course
Mean    16.7         21.2
SD      5.7          4.5
Min     0            10
Max     26           29
ADK scores improved significantly immediately after attending the course (p < 0.0001).
Score improvement was not predicted by gender (p > 0.3), and the correlation between score improvement and age was not significant (R = 0.023). We did not examine age and gender further.
Analysis of the sub-sample of 34 who provided long-term follow-up data:
ADK scores for sub-sample

        Pre-course   Post-course   6+ months
Mean    17.2         22.0          21.0
SD      4.9          4.5           4.8
Min     1            11            7
Max     24           29            29
For the smaller sample, ADK scores improved significantly immediately after attending the course (p < 0.0001), and this was sustained at the longer-term follow up (p < 0.0001). Although the mean ADK score dropped by a point at 6+ months, this was still a significant improvement over the pre-course (baseline) score.
Comparing post-course ADK score with 6+ month follow-up ADK score, no significant difference was observed (t[33] = 1.48, p = 0.15), suggesting that knowledge was not lost to a significant degree.
Zarit Burden Scale Scores
The response rate for the Zarit Burden Scale was poor, as only 19 of the sample completed it at the 6-month follow-up. The score for this cohort increased by 3.58 points, a borderline significant change that is to be expected, as dementia is a progressively declining condition.
Discussion
This is the first report on the level of dementia knowledge among family caregivers in the UK before and immediately after the implementation of a novel post-diagnostic dementia training programme, the Dementia First Aid course, and on whether that knowledge is sustained after 6 months.
The mean pre-course score on the ADKS in the sub-sample that completed the test at 6 months, at 17.2, was significantly lower than the 22.7 reported by Smyth et al (2013)7. It was expected that the level of dementia knowledge would improve after attending the course, and the findings largely fulfilled this expectation. There was a significant difference between the pre- and post-training scores (p < 0.0001). Further, there is evidence that the knowledge was sustained 6 months after the training.
The intervention studied in a recent British trial6 is an individual therapy programme, consisting of psychoeducation about dementia, carer stress, behaviour management, and relaxation techniques. The effect of the programme on carers' depression and abusive behaviour was significant. However, providing individual training for a large number of families may not be feasible in the NHS; the group-based training approach employed in our study may therefore be more sustainable.
The carers' burden of care, as measured by the Zarit Burden Scale at the time of training and 6 months later, showed only a modest increase of 3.58 points. However, it was apparent that training could not affect the relentless progression of dementia, most cases of which were of the Alzheimer's type.
Limitations
Being a pilot evaluation, the sample size of this study was small. The pilot assessment may also be limited by the fact that participants were not randomly selected, and since the evaluation was conducted in only one part of the county, the sample may not reflect the wider community. The knowledge gained during the course was sustained at the end of 6 months; however, training did not reduce carer burden, nor was it clear whether the new knowledge and skills will be effective in preventing crises. Brodaty et al (1989)17 reported reduced psychological morbidity of the carer following a dementia carers' programme, but cautioned against delaying institutionalisation of the patient at the expense of the carer's morbidity.
Finally, the present pilot evaluation was uncontrolled and non-randomised, so we do not know to what extent any impact is due to the dementia first aid training, passage of time or experience of caring. A randomised controlled study with follow-up measurements on caregivers’ knowledge, sense of burden, psychological health and wellbeing, would be the ideal next step.
Key points
Most people with dementia live at home and are cared for by their spouses, children or other family members, but these carers are not usually offered adequate information and training about dementia and the impact of caring at the time of diagnosis.
This paper describes the effectiveness of a short (4-hour) version of a novel training programme, the 'Dementia First Aid' course, for family caregivers of people with early dementia.
The Dementia First Aid course includes an overview of dementia including Alzheimer's disease, the impact of dementia on the person and their family caregivers, the principles and practice of 'mindfulness' to enhance coping ability, and a first aid action plan for common behavioural and psychological symptoms of dementia.
The 'Dementia First Aid' course appears to enhance caregivers' knowledge of dementia.
Conclusions
The significance of these results can be placed in the wider context of proactive dementia training for family caregivers at the time of diagnosis. The results are important in demonstrating that having dementia training is associated with improved knowledge.
This study adds to the existing literature and has implications for both care and policy regarding community care of people with dementia, and emphasises the importance of dementia training as a routine component of post-diagnostic support.
Although knowledge alone does not necessarily translate into change in care, nor does high quality of care depend solely on broad education about dementia7, our results suggest that the Dementia First Aid course is effective in changing the knowledge and attitude of dementia caregivers. Hopefully, this will also enhance their caring ability and skills, which may in turn reduce caregivers' sense of burden and improve their wellbeing. A randomised controlled study with follow-up measurements is required to support these claims.
A 33-year-old ASA 1 primigravida presented to our delivery suite in spontaneous labour at 38 weeks of gestation. Epidural analgesia was commenced to alleviate her labour pains. She subsequently underwent an assisted vaginal delivery of a live male baby (weighing 4660 g) using Kielland's outlet forceps after a 90-minute second stage of labour. Ten hours postpartum, she complained of dyspnoea and severe central substernal chest pain. She was noted to have an unusual swelling of the face and neck, with oxygen saturations of 90% on room air. Auscultation of the chest revealed normal bronchovesicular breath sounds and normal heart sounds with no added sounds. Arterial blood gases showed an O2 tension of 11 kPa, a CO2 tension of 5 kPa and a pH of 7.34. The diagnosis of subcutaneous emphysema, pneumomediastinum and a small left apical pneumothorax (Hamman's syndrome) was confirmed on chest X-ray (CXR 1). We ruled out the differential diagnoses of pulmonary embolism, tension pneumothorax, angina pectoris, pericarditis, dissecting aortic aneurysm, mediastinitis, cardiac tamponade, chest infection and oesophageal tear. She was managed conservatively with close monitoring for complications, supplemental oxygen and simple analgesics. She made a complete, uneventful recovery over the next 24 hours with normalisation of the chest signs (CXR 2).
CXR 1: shows pneumomediastinum, extensive surgical emphysema & a left apical pneumothorax.
CXR 2: shows small pneumomediastinum, the surgical emphysema & pneumothorax resolved.
Discussion:
Hamman's syndrome is named after Louis Hamman (1877-1946), the physician who described it in 1939. The first reference to this condition, however, was in 1618, when Louise Bourgeois, midwife to the Queen of France, wrote, "I saw that she tried to stop crying out and I implored her not to stop for the fear that her neck might swell"3.
Hamman's syndrome usually occurs in the second stage of labour and is associated with prolonged, protracted labour and larger than usual babies4. However, the clinical presentation is often delayed to the postpartum phase, as was clearly seen in our case. The condition can be provoked by any Valsalva manoeuvre, such as vigorous coughing, vomiting or sneezing, forced physical activity, and the strenuous efforts of spontaneous vaginal delivery. Its occurrence is usually related to the expulsive phase of labour, when active 'pushing down' raises the intra-alveolar pressure; this may subsequently increase the intrathoracic pressure to 50 mmHg or higher1. Rupture of marginal alveoli, with air entering along the perivascular sheath into the mediastinum, is the most likely mechanism in our case. The air probably then tracks through the fascial planes into the subcutaneous and retroperitoneal tissues. Other reported mechanisms of Hamman's syndrome include oesophageal rupture during childbirth, pneumomediastinum related to asthmatic bronchospasm5 or chest infection, or dissection of a pneumoperitoneum secondary to epidural catheter placement or caesarean section1.
Palpable crepitus over the face and neck suggests subcutaneous emphysema, and the appearance of this emphysema in labour is the hallmark of pneumomediastinum. Other features of pneumomediastinum include substernal chest pain, dyspnoea, voice change, cough, sore throat and tachycardia1. Hamman's sign, a fine auscultatory crepitation synchronous with the heartbeat and heard along the left sternal border, is sometimes observed in this condition2.
Chest X-ray and CT of the thorax are the diagnostic tests. The majority of patients with Hamman's syndrome have pneumomediastinum and subcutaneous emphysema without any pneumothorax, which requires supportive management with strict monitoring. Our patient demonstrated a small pneumothorax, which was managed conservatively. Surgical intervention in the form of subcutaneous air drainage may occasionally be indicated in severe cases.
Overall, most cases have a benign, self-limiting course once the aggravating factors are no longer present. Published data indicate that subsequent pregnancies pose no additional risk of recurrence5.
Conclusion:
Hamman's syndrome is a potentially dangerous complication of normal childbirth. We therefore propose that every obstetric anaesthetist and obstetrician should be aware of this syndrome.
Bipolar affective disorder (BPAD) is one of the commonest psychiatric disorders, with a lifetime prevalence of about 3% in the general population, and is the sixth leading cause of disability worldwide (1, 2). The disorder is characterised by repeated episodes in which the patient's mood and activity levels are significantly disturbed: on some occasions an elevation of mood and increased energy and activity (mania or hypomania), and on other occasions a lowering of mood and decreased energy and activity (depression) (3). As the illness starts early in life, i.e. during the teens or early adulthood, people with BPAD have symptoms of the illness for the major part of their lives (4, 5).
In India, since professional services, both in public and private sectors are not adequately developed due to shortage of trained human resources and infrastructure, the family support system plays a major role in caring for people with mental illnesses (6). The primary caregiver is identified as an adult relative (a spouse, parent or spouse equivalent) living with a patient, who is involved in the care of the patient on a day-to-day basis, takes the responsibility for bringing the patient to the treatment facility, stays with the patient during the inpatient stay, provides financial support and/or is contacted by the treatment staff in case of emergency (7). Intensive involvement in the care of the patient is often associated with significant caregiver burden.
Caregiver burden can be defined as the presence of problems, difficulties or adverse effects which affect the lives of caregivers of patients with various disorders or illnesses, e.g. members of the household or family (8). Family burden is broadly divided into objective and subjective burden. While the notion of the objective family burden relates to measurable problems (e.g. patients’ troublesome behaviours), the idea of subjective family burden is bound to caregivers’ emotions arising in response to the objective difficulties (9). Multiple studies across the world have shown that bipolar disorder is associated with significant caregiver burden (10-31). In view of the high caregiver burden, it is now suggested that the emphasis in psychiatric rehabilitation needs to shift from a patient-focused approach to a combined patient and caregiver-focused approach. Although there are studies from different parts of the country, there is a lack of data on caregiver burden from Kashmir, which is often faced with turmoil, which can influence caregiver burden. The present study is an effort in this direction to assess caregiver burden and its correlates among primary caregivers of patients with bipolar disorder.
Methodology
The present study was conducted on primary caregivers of patients with BPAD. Primary caregivers were defined as those caregivers who were closely involved in the care of the patient during the acute episodes and during the maintenance period in terms of bringing the patient to the hospital, supervising the medications and liaison with the treating team.
The study sample comprised 100 caregivers of 100 patients diagnosed with BPAD as per the International Classification of Diseases classification of mental and behavioural disorders, 10th revision (ICD-10) (3), attending either the outpatient or inpatient services at the Department of Psychiatry, SKIMS, Bemina, Srinagar. The study was approved by the Ethics Committee of the institute and all the participants were recruited after obtaining written informed consent.
To be included in the study, caregivers had to be aged 18 or above, involved in the care of the patient, living with the patient for at least 1 year, and a family member providing care without wages. Caregivers who had themselves been diagnosed with a psychiatric illness, or who had stayed with the patient for less than 12 months, were excluded.
The caregivers were assessed by following scales:
Family Burden Interview Schedule (FBIS) (32): This is a semi-structured interview schedule with 24 items, each scored on a 3-point scale: 0 indicating no burden, 1 a moderate level of burden and 2 severe burden. The objective burden items of the scale are divided into 6 domains: financial burden, disruption of routine family activities, disruption of family leisure, disruption of family interaction, physical health and mental health. Subjective burden is evaluated by a single item. This scale has been widely used in previous studies from India (26, 33-35).
DUKE-UNC Functional Social Support Questionnaire (FSSQ) (36): This is an 8-item instrument measuring the strength of a person's social support network. Responses to each item were scored as 1 ('much less than I would like'), 2 ('less than I would like'), 3 ('some, but would like more'), 4 ('almost as much as I would like') and 5 ('as much as I would like'). The scores from all eight questions are summed (maximum 40) and then divided by 8 to give an average score; a higher score indicates better perceived social support. Cronbach's alpha for this scale is 0.84.
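The FSSQ scoring rule above (sum the eight item scores, then divide by 8) can be sketched as a small function; this is an illustration of the arithmetic only, not part of the published instrument:

```python
def fssq_score(responses):
    """Mean of the eight FSSQ item scores, each 1-5; a higher
    average indicates better perceived social support."""
    if len(responses) != 8:
        raise ValueError("the FSSQ has exactly 8 items")
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("each item is scored from 1 to 5")
    return sum(responses) / 8

# A caregiver answering 'some, but would like more' (3) on every item
score = fssq_score([3] * 8)  # -> 3.0
```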
Hindi General Health Questionnaire (GHQ-30) (37): The modified version of Goldberg's General Health Questionnaire (GHQ) (38) was used. This is a screening instrument for identifying minor psychiatric disorders in the general population and in community or non-psychiatric clinical settings such as primary care or general medical outpatients. The self-administered questionnaire focuses on two major areas: the inability to carry out normal functions, and the appearance of new and distressing phenomena. For each of the 30 GHQ items, caregivers chose among four options, scored by the bimodal (0-0-1-1) method: 'better than usual' or 'same as usual' = 0, 'less than usual' or 'much less than usual' = 1. The total GHQ-30 score therefore ranges from 0 to 30. A cut-off of 6 was used to categorise those with and without psychiatric morbidity. Cronbach's alpha for the GHQ-30 was 0.93, and the kappa coefficient was 0.64 (p<0.001).
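The bimodal (0-0-1-1) GHQ scoring and the cut-off of 6 can be sketched as follows; item responses are assumed here to be coded 1-4 from 'better than usual' to 'much less than usual' (an illustrative coding, not the questionnaire's official one):

```python
def ghq30_score(responses):
    """Bimodal (0-0-1-1) GHQ scoring: options 1-2 ('better/same as
    usual') score 0, options 3-4 ('less/much less than usual')
    score 1, giving a total between 0 and 30."""
    if len(responses) != 30:
        raise ValueError("the GHQ-30 has 30 items")
    return sum(1 for r in responses if r >= 3)

def psychiatric_morbidity(total, cutoff=6):
    """Caseness at the study's cut-off of a total score of 6 or more."""
    return total >= cutoff
```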
The recorded data was compiled and entered into a spreadsheet (Microsoft Excel) and then exported to data editor of SPSS Version 16.0 (SPSS Inc., Chicago, Illinois, USA). Continuous variables were summarised in the form of means and standard deviations and categorical variables were summarised as percentages. Student’s independent t-test and Chi-square tests were employed for comparing caregiver burden with different variables.
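For the categorical comparisons, the Pearson chi-square statistic for a 2x2 contingency table has a simple closed form; the sketch below (with made-up counts, not the study's data) illustrates the computation that SPSS performs:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction:
    chi2 = n(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (illustration only): caregiver gender vs GHQ caseness
chi2 = chi_square_2x2(8, 44, 15, 33)
```

The statistic is referred to the chi-square distribution with 1 degree of freedom to obtain a p value.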
Results
Table 1: Description of socio-demographic variables of caregivers

Variables                    Caregivers (n=100)   Patients (n=100)
Age (years)
  20-29                      11 (11%)             12 (12%)
  30-39                      24 (24%)             26 (26%)
  40-49                      26 (26%)             31 (31%)
  50-59                      34 (34%)             14 (14%)
  ≥ 60                       5 (5%)               17 (17%)
  Mean ± SD                  43.4 ± 11.25         34.3 ± 12.86
Gender
  Male                       52 (52%)             47 (47%)
  Female                     48 (48%)             53 (53%)
Marital status
  Unmarried                  7 (7%)               37 (37%)
  Married                    93 (93%)             63 (63%)
Educational status
  No formal education        48 (48%)             36 (36%)
  Primary                    5 (5%)               6 (6%)
  Secondary                  27 (27%)             32 (32%)
  Graduate                   20 (20%)             26 (26%)
Occupation
  Unemployed                 3 (3%)               10 (10%)
  Labourer                   27 (27%)             24 (24%)
  Student                    3 (3%)               16 (16%)
  House maker                44 (44%)             34 (34%)
  Employed                   23 (23%)             16 (16%)
Socio-economic status
  Low                        60 (60%)             60 (60%)
  Middle                     40 (40%)             40 (40%)
  High                       0 (0%)               0 (0%)
Relationship of caregiver
  Father                     11 (11%)
  Mother                     22 (22%)
  Spouse                     55 (55%)
Duration of care
  1-5 yrs                    77 (77%)
  6-10 yrs                   16 (16%)
  > 10 yrs                   7 (7%)
  Mean ± SD                  4.8 ± 4.16
Table 2: Clinical profile of patients

Patient variables                  Frequency (n=100)
Duration of illness
  1-5 yrs                          77 (77%)
  6-10 yrs                         16 (16%)
  11-15 yrs                        5 (5%)
  16-20 yrs                        1 (1%)
  > 20 yrs                         1 (1%)
  Mean ± SD                        4.83 ± 4.25
Number of hospitalisations
  Never                            47 (47%)
  Once                             24 (24%)
  Twice                            18 (18%)
  Thrice                           6 (6%)
  Four times                       5 (5%)
  Mean ± SD                        0.98 ± 1.16
Number of episodes of mania
  1-2                              55 (55%)
  3-4                              39 (39%)
  5-6                              6 (6%)
  Mean ± SD                        2.61 ± 1.12
Number of episodes of depression
  < 3                              15 (15%)
  3-5                              64 (64%)
  > 5                              21 (21%)
  Mean ± SD                        4.05 ± 1.87
Number of attempts of homicide
  0                                75 (75%)
  1                                8 (8%)
  2                                4 (4%)
  ≥ 3                              5 (5%)
  Mean ± SD                        0.37 ± 0.93
Number of attempts of suicide
  0                                75 (75%)
  1                                1 (1%)
  2                                6 (6%)
  ≥ 3                              2 (2%)
  Mean ± SD                        0.23 ± 0.74
Compliance with medication
  Yes                              73 (73%)
  No                               27 (27%)
Table 3: Caregiver burden, social support and psychological morbidity among caregivers

Psychosocial parameters                        Mean (SD)      Range
Caregiver burden (FBIS scores)
  Financial burden                             7.01 (2.28)    3-12
  Disruption of family routine activities      5.38 (1.77)    3-9
  Disruption of family leisure                 4.12 (1.26)    2-8
  Disruption of family interactions            4.04 (1.36)    3-9
  Effect on physical health of others          2.28 (0.83)    1-4
  Effect on mental health of others            1.51 (0.82)    0-4
  Total family burden                          24.31 (7.35)   13-44
  Objective burden: score < 12, n = 3; score ≥ 12, n = 97
  Subjective caregiver burden score            1.12 (0.61)    0-2
DUKE-UNC FSSQ                                  3.17 (0.84)    1.75-4.75
GHQ-30                                         13.14 (5.65)   2-25
  GHQ score < 6: 77 (77%); GHQ score ≥ 6: 23 (23%)
Table 4: Association of caregiver burden with socio-demographic variables of caregivers

Caregiver variables          n     Mean    SD      P-value
Age (years)                                        <0.001*
  20-29                      11    20.63   4.860
  30-39                      24    22.67   7.409
  40-49                      26    25.08   6.211
  50-59                      34    26.93   5.839
  ≥ 60                       5     29.25   6.675
Gender                                             0.012*
  Male                       52    23.60   7.384
  Female                     48    27.35   7.309
Marital status                                     0.041*
  Married                    93    26.97   7.409
  Unmarried                  7     21.29   6.211
Educational status                                 0.015*
  No formal education        48    28.78   7.772
  Primary                    5     27.80   7.596
  Secondary                  27    24.69   7.223
  Graduate                   20    22.35   5.092
Occupation                                         <0.001*
  Unemployed                 3     23.15   7.268
  Labourer                   27    25.47   1.399
  Student                    3     23.05   6.891
  House maker                44    28.05   6.891
  Employed                   23    22.07   7.312
Socio-economic status                              0.018*
  Low                        60    26.88   7.958
  Middle                     40    23.38   5.687
  High                       0     0       0
Type of family                                     0.002*
  Nuclear                    82    28.37   5.463
  Joint                      18    23.54   6.354
Relationship to patient                            0.008*
  Parent                     33    24.47   7.972
  Spouse                     55    28.04   7.038
  Offspring                  12    21.57   6.024
Duration of care                                   <0.001*
  1-5 years                  77    22.99   5.644
  6-10 years                 16    33.06   6.027
  > 10 years                 7     35.57   5.996
Table 5: Association of caregiver burden with clinical variables of patients

Disease profile                    n     Mean    SD      P-value
Duration of illness                                      <0.001*
  1-5 yrs                          77    22.98   5.644
  6-10 yrs                         16    33.07   6.027
  ≥ 10 yrs                         7     37.01   2.887
Number of hospitalisations                               0.045*
  Never                            47    22.21   7.896
  Once                             24    25.83   7.438
  Twice                            18    26.54   6.527
  Thrice                           6     28.50   4.506
  Four times                       5     31.00   6.042
Number of episodes of mania                              <0.001*
  1-2                              55    22.27   5.612
  3-4                              39    27.97   6.726
  5-6                              6     38.65   2.066
Number of episodes of depression                         <0.001*
  < 3                              15    21.93   7.611
  3-5                              64    23.91   5.817
  > 5                              21    32.81   6.615
Compliance with medication (>75%)                        0.041*
  Yes                              73    24.51   7.328
  No                               27    27.94   7.377
Table 6: Association of caregiver burden with social support and psychological morbidity among caregivers
The study included nearly equal numbers of male and female patients. About two-thirds of the patients were married (63%). About one-third of the patients had not received any formal education, another third had completed secondary education, and one-fourth had completed graduation (Table 1).
Description of socio-demographic variables of caregivers
The study included nearly equal numbers of male and female caregivers. The majority (55%) of the caregivers were spouses of the patients, and most (93%) were married. Nearly half of the caregivers had not received any formal education (48%), were homemakers (44%), and three-fifths were of low socioeconomic status (60%). The majority of caregivers (77%) had been caring for a duration of one to five years (Table 1).
Clinical profile of patients.
In the present study, the majority of patients (77%) had a duration of illness in the range of 1-5 years; nearly half had never been hospitalised; the majority (55%) had one to two manic episodes; most (64%) had three to five episodes of depression; and the majority (75%) had never attempted suicide or homicide. Most patients (73%) were compliant with medication (Table 2).
Caregiver burden, social support and psychological morbidity among caregivers
As is evident from Table 3, the highest burden was reported in the financial domain, followed by disruption of family routine activities, disruption of family leisure, disruption of family interactions and effect on the physical health of others; the least burden was reported for the effect on the mental health of others. The mean DUKE-UNC FSSQ score was 3.17 (SD=0.84), with a range of 1.75-4.75.
Mean GHQ-30 score was 13.14(SD=5.65) with a range of 2-25. Of the 100 caregivers, about one-fourth (N=23) had a GHQ-30 score of 6 or more, indicative of psychological morbidity.
Association of caregiver burden with demographic and clinical variables
As is evident from Table 4, higher caregiver burden was associated with higher age, female gender, lack of formal education, being a homemaker, lower socioeconomic status, a nuclear family set-up, being spouse of the patient and longer duration of being in the caregiver role.
Clinical Profile of patients with bipolar disorder
In terms of clinical variables, higher objective caregiver burden was associated with duration of illness more than 10 years, higher number of hospitalisations and higher number of manic and depressive episodes. Caregivers of patients consuming >75% of the prescribed medications reported lower caregiver burden (Table 5).
Advancing age of patient and caregiver, increasing duration of care, prolonged illness, greater number of hospitalisations and higher number of episodes of either polarity were significantly associated with higher caregiver burden. In terms of association of social support and caregiver burden, higher social support was associated with significantly lower caregiver burden, whereas higher caregiver burden was associated with higher psychological morbidity (Table 6).
Discussion
Families play an important role in care of patients with chronic mental illnesses. In the process of caring for such patients, relatives face a considerable burden.
Findings of the present study suggest that higher burden was seen among the caregivers who were relatively older, of female gender, uneducated or illiterate, homemakers and from nuclear families. Compared to parents and siblings, spouses reported significantly higher levels of caregiver burden. Furthermore, the caregivers involved in the care of the patient for longer durations reported significantly higher levels of caregiver burden.
In terms of clinical variables of patients, higher caregiver burden was associated with longer duration of illness, higher number of lifetime hospitalisations, higher number of manic and depressive episodes and poor medication compliance. Poor social support was associated with a higher level of caregiver burden. Higher caregiver burden was associated with higher psychological morbidity.
Many previous studies from India have evaluated caregiver burden among caregivers of patients with bipolar disorder (10-32). There is a lack of consensus with respect to caregiver variables and their association with caregiver burden (39). Some studies suggest that there is no significant difference in caregiver burden as reported by caregivers of either gender (6), whereas others suggest that females report higher caregiver burden (13, 40). Our findings support the studies that have reported higher caregiver burden among female caregivers. This finding could have been influenced by the relationship of caregivers with patients: in the present study, spouses formed a large proportion of caregivers, and they reported significantly higher burden than parents and siblings. Cultural issues such as the restriction of females to household activities with fewer opportunities to vent their distress, inability to spend time on leisure activities, financial dependency and lack of independence could also account for the higher perceived burden. Caregivers from nuclear families reported higher caregiver burden than those from joint families. The joint family system is considered to promote interdependence and possibly allows sharing of the caregiving load, which may explain why caregivers from joint families reported lower burden. Similar findings have been reported in earlier studies from India (41).
The association of higher caregiver burden with longer duration of illness is supported by the existing literature (14). This suggests that, with time, frequent relapses of illness lead to caregiver burnout and hence higher caregiver burden. Previous studies have also noted an association of higher caregiver burden with a higher number of hospitalisations (30), and the findings of the present study support this association. A greater number of hospitalisations possibly indicates more severe episodes, with hospitalisation bringing greater expenditure and loss of earnings. This suggests that every effort must be made to detect relapses early and manage them effectively, to minimise the chance of progression to severe episodes and the resultant need for inpatient care. Previous studies have also reported an association between higher caregiver burden and a higher number of episodes, especially manic episodes (14) and more severe manic episodes (42). Manic episodes are very disruptive to daily life, work and family relationships, and therefore place great demands on family members involved in caregiving. These demands can persist even during remission, when residual symptoms are often still present and contribute to caregiver burden. Available data suggest that, in contrast to patients from the West, patients from India have a higher number of manic episodes (43). Taken together, these findings suggest that efforts must be made to prevent frequent relapses in patients with bipolar disorder, especially in the Indian context, to reduce caregiver burden (44).
In the present study, higher burden was also associated with a higher number of depressive episodes and this finding is supported by existing literature (16).
Long-term management of bipolar disorder requires continuation of medications with good compliance. Poor medication compliance has been shown to be associated with many negative patient-related outcomes, such as a higher risk of relapse, suicidality, poor quality of life and higher residual or sub-syndromal symptoms (45, 46). The present study adds to this body of literature by suggesting that poor medication compliance in patients is also associated with higher caregiver burden, a finding supported by the existing literature (11).
Among the demographic variables of caregivers, higher age was associated with higher caregiver burden, a finding also supported by the existing literature (6). This association suggests that, with increasing age, caregivers experience more burnout, lose hope and lose the physical vigour needed to care for a mentally ill relative.
Accordingly, it is important for the mental health professionals to support the ageing caregivers.
To conclude, the present study suggests that BPAD is associated with high caregiver burden. Higher caregiver burden is associated with clinical variables of the patients and demographic variables of the caregivers. Among the patient-related variables, a longer duration of illness, a higher number of lifetime episodes of either polarity and poor medication adherence are associated with higher caregiver burden; hence, all measures must be taken to minimise relapse in patients with BPAD. Among the demographic variables of caregivers, higher caregiver burden is reported by caregivers who are relatively older, of female gender, uneducated or illiterate, homemakers and from nuclear families.
Our findings highlight the need for additional research on interventions to reduce burden among caregivers of patients with bipolar affective disorder. For better outcomes of disease, more attention needs to be given to the primary caregivers in terms of psycho-education and counselling.
Fractures in surgically fused scoliotic spines are very uncommon and only a few cases have been reported in the literature. It is not possible to predict the outcome of traumatic injuries in fused spines. There is no reported prevalence or prognostic data in the published literature and all we have are a few case reports from different parts of the world.
In this case report we describe an unusual case of a spinal fracture in a 60-year-old patient, who had surgical fusion of her scoliotic spine 50 years ago.
Case Report
A 60-year-old lady presented to A&E after a trivial fall on an icy path approximately 10 days before presentation. She had had pain in her back since the fall, gradually worsening despite escalating doses of opiate analgesics. Her past medical history revealed a congenital lympho-haemangioma causing deformity of her back and left foot. At the age of 6 months she underwent extensive surgical excision of the tumour along with amputation of her left foot. She subsequently developed scoliosis at the age of 6 years, which was treated conservatively in a Milwaukee brace between the ages of 7 and 10. At the age of 10 she underwent an extensive thoraco-lumbar postero-lateral inter-transverse fusion using iliac crest bone graft without instrumentation to treat her progressive scoliotic curve, and was supported in a Milwaukee brace for a further 6 months. Following this she had no problems with her back, although she had a considerable residual deformity.
After this recent fall she developed pain over her right-sided thoracic hump. A full neurological examination revealed normal motor and sensory function in both lower limbs. Plain radiographs showed a thoracic scoliosis convex to the left and a broad fusion mass extending approximately from T4 to L1; no fracture was seen. She was discharged from the A&E department with further analgesia. Three days later she returned to A&E with increasing pain and respiratory depression due to excessive opiate use. Investigations also revealed a very high serum lithium level, resulting from her regular lithium medication combined with dehydration and deranged renal function. She was admitted to the high dependency unit for supportive care while her pain and discomfort progressively worsened. Another radiograph of her spine was again inconclusive for bony injury, so a CT scan was performed at this juncture. The CT scan (Fig. 1) showed a minimally displaced fracture line at the junction of T9-T10 extending through the fusion mass. She remained neurologically stable on clinical examination.
The feasibility of surgical fixation of this fracture was discussed with a specialist scoliosis surgeon and a decision was made to pursue conservative treatment, considering her ongoing medical condition. Surgical fixation was deemed to be technically challenging and very risky. She was not found to be suitable for bracing either. She was advised bed rest with symptomatic management of pain, which was followed by protected and supervised mobilisation.
Further CT scans were performed at 6 and 12 weeks. These showed that the fracture had remained stable, but only minimal signs of healing were observed, with persistent gas shadows in the disc space. Throughout this period she remained free of any neurological deficit and her pain was under control. She was allowed to mobilise within the limits of comfort and under supervision. A further CT scan at 7 months showed a stable spine and some callus formation at the fracture site. The latest follow-up scans, performed at 12 months, showed that bony union had taken place (Fig 2 & 3). She was followed up in the outpatient clinic, has resumed her normal activities and no longer requires regular analgesia.
Figure 1- Coronal and Sagittal CT Images of the Fracture from January 2011
Figure 2- Coronal Images compared between July 2011 and February 2012 showing healing
Figure 3- Sagittal Images compared between July 2011 and February 2012 showing healing
Discussion
Fracture through a fused scoliotic spine is an uncommon entity, and healing of such a fracture by conservative measures is rarer still. Most authors point out that “the ankylosed spine breaks like a long bone, transversely, as a result of a bending force” (Bergmann)1. This fracture configuration results in higher rates of non-union and delayed union. In this light, we present here a unique case in which a fractured fusion mass healed without surgical intervention.
There are very few reported incidences of fracture through a spinal fusion mass after scoliosis surgery in the published literature in English. Two patients reported by Moskowitz et al2 had injuries as a result of traffic accidents. The exact mechanisms of the injuries were not described and their management was not discussed. One fractured through the fusion mass 20 years after surgery, the other 14 months after surgery.
King and Bradford3 described a fracture-dislocation of T11-T12 in a patient treated with Harrington rods. They decided to operate because of rod angulation and severe trunk de-compensation.
Tuffley and McPhee4 described a patient treated with posterior spinal fusion without instrumentation. The patient sustained a transverse fracture through the fusion mass without displacement after a fall. Posterior fixation with Harrington instrumentation was carried out.
Bagó et al5 described a 30-year-old woman who had undergone anterior and posterior fusion with Cotrel-Dubousset instrumentation for progressive idiopathic scoliosis. Two years after surgery she was in a car accident; radiographic study and computed tomographic scanning depicted a fracture of T11 and bending of the rods. Observation was instituted and the symptoms resolved.
Chung6 reported a post-polio patient whose spine had been fused from T7 to L4 as a teenager using spinal instrumentation, which was removed after fusion was achieved. She fell down stairs, fracturing the body of L2 without any neurological deficit. She was treated conservatively for 3 months, after which non-union was observed. The fusion mass was fixed with an AO/ASIF broad dynamic compression plate rather than conventional spinal systems such as pedicle screws or Harrington or Luque instrumentation, because of the absence of normal anatomical landmarks.
Unlike our case, in which the injury was very inconspicuous, all the case reports described above involved high-energy trauma. We stress that these injuries are very rare and can be very difficult to diagnose on plain radiographs. Our patient was fortunate not to have damaged her spinal cord, probably because of the low-energy trauma she sustained. Conservative management worked well in alleviating her symptoms and achieving bony union.
The spectrum of psychiatric illness in systemic lupus erythematosus (SLE) includes psychotic, depressive, subtle cognitive and histrionic-type personality disorders. The reported occurrence of psychiatric manifestations in SLE varies widely, from 5 to 83%. It is postulated that the disease acts directly on the central nervous system through autoantibodies, namely antiphospholipid and anti-ribosomal P autoantibodies, or through cytokines such as interleukin 2, interleukin 6 and alpha-interferon1. Side-effects of glucocorticosteroids and hydroxychloroquine during the course of the disease, or an anxious reaction to a chronic and potentially lethal illness, are postulated as further mechanisms of psychiatric manifestation in SLE. SLE patients are prone to develop a myriad of forms of psychological distress in addition to neuropsychiatric manifestations, and these require social and psychological support. While some of these manifestations are treated with corticosteroids and psychotropic drugs1, medications with anticholinergic side-effects, such as phenothiazines, tricyclic antidepressants and hydroxyzine, which enhance oral dryness, should be avoided in SLE.
Clinical scenario:
A 27-year-old male suddenly developed aggressive behaviour for the first time in his life while at his workplace. The patient had no insight into his illness and was brought to the local psychiatric hospital by his colleagues, where he was admitted as a case of acute mania. As neuroimaging, including CT and MRI of the brain, was normal, he was managed with electroconvulsive therapy (ECT) in addition to antipsychotic medication. A few days later the patient was discharged on antipsychotic medicines. However, after six months, while still on antipsychotic medication, he developed a low-grade fever. He was admitted to a local hospital where, in addition to baseline investigations, a lymph node biopsy was done, which revealed follicular hyperplasia without any abnormal cells. The patient's HBV, HCV and HIV tests were negative. He developed anorexia, significant weight loss and progressive difficulty in getting up from a sitting position. He also developed shortness of breath and presented to King Abdul Aziz specialist hospital in Taif, Saudi Arabia, virtually bed-bound, and was admitted to the intensive care unit. Examination revealed pallor, generalised lymphadenopathy, palmar rash, alopecia and mouth ulcers. Echocardiography showed a mild pericardial effusion and mitral regurgitation (MR)++. Further evaluation showed significant proteinuria; serum ANA and dsDNA were positive, and lupus anticoagulant was negative. In view of the above symptoms and signs (mouth ulcers, pericardial effusion, positive ANA, positive dsDNA), the patient was diagnosed as a case of SLE2. He was managed with a pulse dose of methylprednisolone 1 g intravenously (IV) daily for 5 days, followed by oral prednisone 60 mg once daily, which was tapered on follow-up. The patient tolerated the treatment well and improved progressively; he became ambulatory and rejoined his job. The psychiatric medications were stopped.
However, on follow-up the patient continued to have proteinuria of 1.8 g/24 h. He was readmitted, and kidney biopsy revealed class IV lupus nephritis. He was given pulse cyclophosphamide 1 g/m2 intravenously and was later started on mycophenolate 1.5 g once daily. The proteinuria improved, and he has now been followed in our clinic for the last two years. The patient's follow-up investigations are shown in Table 1.
Table 1. Patient's hospital investigations and results

| Test | Pre-treatment (on presentation) | Post-treatment (after 6 weeks) | Normal range |
| Haemoglobin | 6.2 | 12.3 | 12.2-15.3 gm/dl |
| White blood cell | 3.2 | 6.7 | 6-16 × 10⁹/l |
| Platelet | 41,000 | 197 | 150-450 × 10⁹/l |
| ESR | 82 mm first hour | 56 mm | |
| Total bilirubin | 1.2 | 1.0 | 0.8-1 mg/dl |
| Direct bilirubin | 1.0 | 0.8 | 0-0.6 µmol/L |
| AST | 335 | 30 | 5-30 U/L |
| ALT | 257 | 29 | 5-30 U/L |
| ALP | 182 | 100 | 50-100 U/L |
| GTT | 497 | 65 | 7-30 IU/l |
| Albumin | 39 | 39 | 38-54 g/l |
| Total protein | 5.2 | 4.5 | |
| INR | 1.1 | 1.1 | 0.8-1.2 |
| Urea | 62 | 40 | |
| Creatinine | 1.2 | 1.0 | |
| Na/K | 131/3.8 | 142/3.6 | |
| Serum glucose | 100 | 102 | 65-110 mg/dl |
| ANA | Positive | | |
| Anti-dsDNA | Positive | | |
| Lupus anticoagulant | Negative | | |
| 24 hr urinary protein | 2.3 gm/L | 500 mg/L | <150 mg/L |
Discussion:
The correct diagnosis of central or even peripheral nervous system manifestations in patients with SLE can be challenging because of the many SLE-related and non-SLE-related processes that may be present in a patient. The index case proved to have acute mania as the first manifestation of SLE, which remained unrecognised until he developed serositis, another complication of SLE. While this patient came to clinical attention after one year, a case of SLE masquerading as schizophrenia for 14 years was reported by Funaunchi et al3. In another report, of a 14-year-old boy with a two-year history of cognitive dysfunction and behavioural problems, SLE was diagnosed after two years4. It appears that psychiatric symptoms may occur as the first manifestation of juvenile SLE. It is worth mentioning that the psychiatric manifestations can at times be dire and could even result in harm to others. A case of folie à trois, characterised by the transfer of delusional ideas from one person to two other persons, culminating in murder, has been reported in a patient with SLE5. In a significant retrospective dataset from China (a cohort of 518), neuropsychiatric manifestations of SLE were observed in 96 (19%) patients. Seizure disorder was the most prevalent neuropsychiatric (NP) manifestation of SLE, followed by cerebrovascular disease and acute confusional states. Among the 96 patients with NP symptoms, acute psychosis was observed in 10 (11%). The authors were of the opinion that this percentage could have been higher had subtle cognitive dysfunction been included as well, and they further concluded that antiphospholipid antibodies were significantly associated with NP manifestations, especially cerebrovascular disorders6.
Autoantibodies have been found to be biomarkers for future neuropsychiatric events in SLE. A prospective study conducted over ten years among 1047 SLE patients demonstrated that individuals with evidence of lupus anticoagulant (LA) were at increased future risk of intracranial thrombosis, and that those with anti-ribosomal P antibodies were at increased future risk of lupus psychosis7. Lupus anticoagulant in the index case was negative, and anti-ribosomal P antibodies were not available. A study by Sanna et al8 has shown an association between anti-NR2 antibodies and depressed mood, in addition to decreased short-term memory and learning. The authors concluded that antibodies to NMDA receptors might thus represent one of several mechanisms of cerebral dysfunction in patients with SLE.
The CT scan of the brain in the index case was normal; however, massive bilateral calcification of sub-cortical structures has been reported in a patient with SLE with psychotic disorder9. The psychiatric manifestations are related to vasculitis and to non-inflammatory vasculopathy of the small cerebral blood vessels. Further, one study showed that ninety per cent of patients with psychosis, organic brain syndrome or generalised seizures had increased IgG antineuronal activity, compared with only 25 per cent of patients who presented with hemiparesis or with chorea/hemiballismus. The authors concluded that the diffuse central nervous system manifestations of SLE are a direct result of the interaction of antibody with neuronal cell membranes10.
The management of neuropsychiatric manifestations in SLE should include treatment of the disease itself and specific psychotropic treatment. The index case improved rapidly following glucocorticosteroid therapy. Intravenous infusions of immunosuppressive agents, such as cyclophosphamide, have been found to be effective in such conditions1. Psychotropic drugs may be used, but it is prudent to mention that SLE-inducing drugs, such as chlorpromazine, carbamazepine and lithium carbonate, must be avoided. Following treatment with steroids, the index case improved, all his psychiatric medications were finally stopped, and he resumed his job.
To conclude, the index case highlights that even though SLE is more frequent among females of childbearing age, males are by no means immune to it. SLE ought to be ruled out when evaluating patients with multiple unexplained somatic complaints and psychiatric symptoms. The existence of neuropsychiatric manifestations in SLE constitutes an indisputable clinical reality that every practitioner must be able to recognise and treat.
Medical Student Syndrome (MSS) is a unique type of hypochondriasis which specifically causes health anxiety related to the diseases medical students study during their medical training.1 However, this phenomenon does not translate into an increased number of consultations differentiating it from hypochondriasis.2 Nevertheless, the common denominator in both conditions is that the affected person persistently experiences the belief or fear of having severe disease, due to the misinterpretation of physical symptoms.3 The medical examination on multiple occasions does not identify medical conditions that fully account for the physical symptoms or the person’s concerns about the disease, making it a diagnosis of exclusion. Unfortunately, the fears frequently persist among medical students despite medical reassurance, affecting their concentration during their training.4
Earlier studies showed a high prevalence of MSS in various medical schools, but recent studies show a declining trend. While Howes et al5 demonstrated that 70% of medical students have groundless medical fears during their studies, Weck et al6, on the contrary, recorded health anxiety in only 5-30% of study participants. One reason could be that the earlier studies showing a high prevalence of MSS were uncontrolled. Also, age-matched peers were not used as controls in some studies, and no direct interviews were conducted.7,8 Methodological issues in previous data have led to inaccurate interpretations and over-generalisation of findings. For example, the high emotional disturbance reported in medical students resulted from comparisons made with the general population rather than with other students of their age.9-11
We were prompted to conduct this study because the magnitude of MSS varies from region to region. In this study, we compared medical students with their peers studying in different colleges of Taif University to avoid observational bias.
Methods
This study was carried out from September 2017 to June 2018 at the female campus of Taif University, Kingdom of Saudi Arabia (KSA) in medical (pre-clinical and clinical years) and non-medical colleges in accordance with research guidelines of the College of Medicine, Taif University, KSA.
Inclusion criteria
Age and gender-matched students were selected for inclusion in the study. These included:
1. Female medical students from the second to the sixth grades enrolled in the College of Medicine, Taif University, KSA.
2. Female non-medical students from first to fourth grades enrolled in colleges of Arts, Admin and Financial Sciences, Computer and Information Technology, Science and Islamic Law.
Exclusion criteria
Biology students were excluded due to the medical content of their courses. At the time of enrolment, permission for participant recruitment was obtained from the concerned faculty administrators.
The participants were approached in the common/study rooms or lecture halls. The students were informed of the voluntary nature of participation and were randomly selected. They were not required to provide their names when completing the questionnaire and were assured of confidentiality. The Hypochondria/Health Anxiety Questionnaire (HAQ), developed by the Obsessive Compulsive Centre of Los Angeles (http://ocdla.com/hypochondria-test), was used to collect the data. The questionnaire was translated into Arabic and revised to ensure compatibility with the original. The questionnaire was not designed to provide a formal diagnosis, but gives an indication as to whether or not a person is exhibiting significant signs of the condition.
Results of this questionnaire were analysed as follows:
A) 1 to 3 test items checked: there is a low probability that the student has health anxiety, and it is unlikely that her concerns significantly impact her life.
B) 4 to 7 test items checked: there is a medium probability that she has health anxiety, and a moderately high amount of distress related to specific health-related thoughts. She spends more time than most people doing unnecessary behaviours related to these thoughts.
C) More than 7 test items checked: there is a high probability that she has health anxiety. She most likely has a significant amount of distress related to certain health-related obsessions, and likely spends a significant amount of time doing unnecessary compulsive and avoidant behaviours directly related to these obsessions.
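The three-band scoring rule above can be sketched as a small helper function. This is an illustrative sketch only: the band thresholds come from the text, while the function name and the treatment of a zero score (grouped with the low band) are assumptions:

```python
def haq_band(items_checked: int) -> str:
    """Map the number of HAQ test items checked to a probability band.

    Thresholds follow the scoring criteria described in the text:
      1-3 items -> low probability of health anxiety
      4-7 items -> medium probability
      >7 items  -> high probability
    A score of 0 is grouped with the low band (an assumption;
    the text does not describe it).
    """
    if items_checked <= 3:
        return "low"
    elif items_checked <= 7:
        return "medium"
    else:
        return "high"
```

Each completed questionnaire is then reduced to one of the three bands, which is how the low/medium/high probabilities in Figure 3 are derived.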
Statistical methods
Data were described in terms of frequencies (numbers of cases) and valid percentages for categorical variables. The responses of the two groups were compared using Student's t-test, and p-values less than 0.05 were considered statistically significant. All statistical calculations were performed using IBM SPSS (Statistical Package for the Social Sciences; IBM Corp, Armonk, NY, USA) release 21 for Microsoft Windows.
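For readers without SPSS, the group comparison described above (an unpaired two-sample Student's t-test with pooled variance) can be sketched in a few lines of standard-library Python. The function name and example data are illustrative only; in practice the p-value is read from a t-distribution with the returned degrees of freedom:

```python
import math
from statistics import mean, variance

def students_t(group_a, group_b):
    """Unpaired two-sample Student's t-test (pooled variance).

    Returns the t statistic and degrees of freedom; the p-value is
    then obtained from a t-distribution table or a stats library.
    """
    na, nb = len(group_a), len(group_b)
    # Pooled estimate of the common variance of the two groups
    pooled = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t = (mean(group_a) - mean(group_b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Illustrative example with made-up scores (not study data):
t, df = students_t([1, 2, 3], [2, 3, 4])
```

SPSS applies the same computation; the sketch only makes the underlying formula explicit.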
Results
A total of 400 students were included in the study: 200 medical students and 200 students from various non-medical colleges of Taif University (Colleges of Arts, Admin and Financial Sciences, Computer and Information Technology, Science, and Islamic Law).
All participating students were female, with a mean age of 21 years (range 19-22 years) in the medical group and 20.5 years (range 19-23 years) in the non-medical group.
All students in the non-medical colleges completed the HAQ, while five students in the medical college (clinical years) did not; data on 395 participants were therefore analysed.
According to the scaling criteria, the overall prevalence of MSS in the total sample (medical and non-medical female students) was 16.2% (64 out of 395 students). It was, however, higher in the medical students (34 out of 195; 17.4%) than in the non-medical students (30 out of 200; 15%) – see Table 1.
| | Non-medical students (n=200) | Medical students: pre-clinical (n=95) | Medical students: clinical (n=100) | p value |
| Age (years) | 19-23 | 19-20 | 21-22 | |
| Medical student syndrome (MSS) | 30 (15%) | 20 (21.1%) | 14 (14%) | 0.22 |
| One visit to doctor | 33.3% (10/30) | 20% (4/20) | 14.3% (2/14) | 0.0043 |
| More than one visit to doctor | 40% (4/10) | 25% (1/4) | 0% | 0.001 |

Table 1. The frequency of Medical Student Syndrome (MSS) among medical and non-medical students.
Figure 1. The difference of Medical Student Syndrome (MSS) between pre-clinical and clinical years (p=0.028).
Figure 2. Fears related to diseases in the study cohort.
Comparing the responses of the two groups by Student's t-test, there was no statistically significant difference between medical and non-medical colleges (p=0.31). However, among the MSS-diagnosed cases in the medical college, there was a significant difference between pre-clinical and clinical years – 21.1% vs 14% (p=0.028) – see Figure 1.
The percentage of students who visited a doctor during the last year because of fear of a disease or medical condition was higher in the non-medical group than in the medical group, a significant difference (p=0.043).
The medical condition that caused most worry among both medical and non-medical students was diabetes mellitus, followed by cancers, especially breast cancer; the least feared conditions were headache and heart disease – see Figure 2.
The percentage of students who consulted more than one doctor for the same medical concern, because of doubt about the previous doctor's diagnosis and laboratory results, was higher in the non-medical group than in the medical group; the difference was significant (p=0.001).
The students with MSS in the total sample (of 395 students) were categorized according to the degree of probability into low, medium and high as shown in Figure 3.
Figure 3. The probability of Medical Student Syndrome (MSS) among all groups compared to their non-medical peers.
Discussion
The unrealistic fears about illnesses recorded in this study were higher among medical students than among their peers studying various non-medical courses at Taif University; however, the difference was not significant. Subgroup analysis revealed a correspondingly higher prevalence of health anxiety during the pre-clinical years than the clinical years, as shown in Figure 1. Possibly, during the pre-clinical years students have an increased sense of body awareness and stress, as demonstrated by Moss-Morris et al,7 who described this syndrome as a normal perceptual process and differentiated it from common hypochondriasis; other researchers8,12 have affirmed this. Our results parallel the findings of Azuri et al,13 who recorded that first-year students visited a general practitioner (GP) or specialist more often than students in other years. The authors suggested that the pre-clinical students' visits may be due to registering with a new doctor closer to university or to health checks required before the beginning of medical school. In the same study, the dream content of pre-clinical medical students frequently involved a preoccupation with personal illness of the heart, the eyes and the bowels.
Additionally, fear of acquiring a future disease is a core feature of health anxiety, while fear of already having a disease is considered more central to MSS.14 There are a number of instances where this syndrome manifests among students from time to time during their training; students are even known to change their self-diagnosis depending on their clinical rotation. For example, in a psychiatry rotation a student may become convinced of having schizophrenia, then shift his or her diagnosis to Ménière's disease during an ear, nose and throat (ENT) rotation. The symptoms are thought to occur due to intensive exposure to knowledge affecting symptom perception and interpretation;15 the fact remains that the affected student has neither condition. At times, simple knowledge of the location of the appendix transforms the most harmless sensations in that region into symptoms of a serious threat.16 Students who study "frightening diseases" for the first time routinely experience intense delusions of having the disease, reflecting a temporary kind of hypochondriasis.17
In a study by Waterman et al,18 it was observed that medical students conceptualized diagnoses ranging from tuberculosis to cancer while studying these diseases during training, causing emotional distress and conflict; this phenomenon was present in approximately 70-80% of the students in that study. There may be multiple reasons for the precipitation of this condition among medical students. The vastness of medical studies is undeniable, and medical school subjects students to a large amount of psychological pressure from the work required to grasp the subject matter, the stress of examinations, and the competitive environment.19
In this study, we compared medical students with students of the same age, gender and cultural background in order to avoid bias. Our results parallel those of a more recent study that compared three groups: medical students, non-medical students, and peers not undergoing any academic course. The authors observed no significant differences between the groups in total questionnaire scores. However, when the individual components of the questionnaires were considered, medical students were found to be less aware of bodily changes and sensations than the other groups; nevertheless, they did not avoid seeking medical advice for health-related fears.20
Regarding the percentage of students who visited doctors in the past 12 months due to fear of disease, we observed that the non-medical group made significantly more visits than their peers studying in the university's medical college. It is possible that medical students had greater access to informal advice from peers, relatives, and mentors. Of the various diseases, fear of diabetes mellitus was the most common, possibly owing to the high prevalence of the disease in Saudi Arabia.21 Furthermore, it is possible that medical students subconsciously fixate on such metabolic disorders because these are discussed in greater detail during their courses.
MSS may lead to cyberchondria, the phenomenon of people seeking to diagnose themselves via the internet,11 which in turn may lead to hypochondriasis in a given student. It is therefore imperative that students suffering from this disorder are approached empathetically and counselled properly once an organic cause of their illness has been ruled out. A further preventive step would be to discuss MSS thoroughly with medical students during their training.
Limitations of the study
A drawback of this study is that the questionnaire was translated from English into Arabic and, although it underwent revision, no other formal tests, such as linguistic and cultural validation, were performed on the translated version. Furthermore, our focus was only on female students; it is well known that females have a better ability to cope with anxiety and depression than males,22,23 so the prevalence of MSS among male medical students needs to be studied, as it may differ from what we report in this female cohort.
Conclusion
In conclusion, students suffering from MSS often overuse medical resources and outpatient services compared with their peers. Clinicians should therefore be aware of these students, in order to avoid unnecessary procedures and treatments. However, it is vital that a proper evaluation is performed before labelling a given student with MSS.
Stress-Induced Cardiomyopathy (SCM), also known as Takotsubo Cardiomyopathy or Apical Ballooning Syndrome, is an acute, transient and non-ischaemic cause of left ventricular dysfunction, often precipitated by periods of stress.1 Diagnosis typically follows evidence of left ventricular hypokinesia despite a normal coronary angiogram. Its prevalence is often underestimated, with an estimated 7% of suspected myocardial infarctions in fact being SCM.2 We report a unique case of multi-nodal dysfunction following SCM.
Case Report
A 73-year-old lady presented to our emergency department following sudden-onset central, non-radiating chest heaviness 8 hours earlier. She was a chronic smoker with a 20 pack-year history and had hypertension that had been left untreated for over 10 years. An initial electrocardiogram (ECG) revealed sinus bradycardia and T-wave inversions in the inferior, septal and lateral leads (Figure 1). Her Troponin-I level was raised at 6532 pg/ml. She was treated as a non-ST elevation myocardial infarction and admitted to the coronary care unit for closer monitoring. Overnight telemetry disclosed several episodes of bradycardia, and the rhythm strip revealed various transient conduction defects, including sinus node dysfunction (SND) and atrioventricular node (AVN) dissociation, although she remained asymptomatic throughout (Figure 2).
Figure 1: Electrocardiogram revealing sinus bradycardia, with T-wave inversion in the inferior, septal and lateral leads.
Figure 2: Telemetry rhythm strip revealing transient episodes of (a) sinus node dysfunction (SND) and (b) atrioventricular node (AVN) dissociation.
Figure 3: Electrocardiogram revealed ST-segment elevation with associated T-wave inversions in the inferior, septal and lateral leads.
Figure 4: An ‘Apical 4-Chamber’ view on echocardiography, revealing an akinetic apex on (a) diastole and (b) systole.
Unfortunately, following an episode of chest pain the next morning, her troponin level and electrocardiogram were repeated; the former had risen to 12,996 pg/ml, and the repeat ECG revealed ST-segment elevation in the previously affected leads (Figure 3). She was brought to the catheterization laboratory within 1 hour. Her coronary angiogram showed no evidence of coronary obstruction. An echocardiogram was performed, which revealed an akinetic apex (Figure 4).
Upon further history taking, it was revealed that she had been made redundant from her job as a cleaner several hours before her presentation to the emergency department. She denied any other emotional or physical stressors. She was diagnosed with Stress-Induced Cardiomyopathy (SCM). Following an observational period of close to 48 hours, she was allowed home. A 48-hour Holter monitor, fitted approximately 3 weeks after her initial admission, returned unremarkable results. Repeat echocardiography revealed normal wall motion with resolution of the previous abnormality, further supporting a transient SCM.
Discussion
Despite being transient, the condition can give rise to multiple complications, including arrhythmias. The reported prevalence of arrhythmias varies greatly, depending on the population and type of defect (15% atrial fibrillation, 2-5% tachyarrhythmias, 2-5% bradyarrhythmias and 5% AVN dissociation, amongst others).3,4 This variation is largely because the evidence is based on retrospective case reports and series, which probably leads to severe underestimation of the true prevalence. We suspect cases of sinus node dysfunction are far less common, with only a handful of notable case reports, and one retrospective review of 816 patients quoting a rate of 1.3%.5 To our knowledge, there are no reports of concomitant sinus node and atrioventricular node dysfunction.
Proposed mechanisms for SCM-induced nodal dysfunction include reduced coronary flow to conduction tissue due to left ventricular dyskinesia; catecholamine-driven coronary and microvascular vasospasm leading to both reduced blood supply and direct cardiotoxic effects; and continual ischaemia-driven fibrosis of nodal tissue.6 However, there have also been reports of SND-triggered SCM, likely secondary to compensatory adrenergic activation following bradycardic events. In both scenarios, pre-existing subclinical SND may lower the threshold for developing significant, symptomatic bradycardia.7-10 This is important to note, as the majority of patients affected by SCM are post-menopausal women, who are already at risk of age-related SND.
In our patient, the SCM likely induced both the SND and the AVN dissociation, as the subsequent 48-hour Holter monitoring, 3 weeks from presentation, was unremarkable, and the patient denied any previous syncopal or pre-syncopal symptoms. However, subclinical SND could still have existed, as discussed earlier; ideally, an internal loop recorder for prolonged monitoring, catheter-based electrophysiology studies and cardiac magnetic resonance imaging to detect nodal and conduction tissue fibrosis would have assisted in ruling out pre-existing nodal dysfunction. However, for financial and pragmatic reasons (the patient being asymptomatic), she declined further investigations, opting for periodic clinic reviews instead.
Conclusion
Both nodal and conduction tissue blocks are rare but significant complications that can occur following SCM. The occurrence of SND following SCM should prompt clinicians to investigate routinely for pre-existing conduction tissue disease, if not already performed, allowing earlier device implantation if deemed indicated.
Necrotising soft tissue infections (NSTI) are severe and rapidly progressive, requiring rapid recognition and early, often surgical, management. Mono-microbial NSTI (i.e. Type 2 NSTI), which accounts for 20 to 30% of cases, is most often linked to invasive Group A Streptococcus or Staphylococcus aureus infection.1 Rarely, Group B Streptococcus (GBS), also known as Streptococcus agalactiae, is implicated.2 We report a unique case of NSTI of the lower limbs due to GBS, with acute pericardial dissemination causing cardiac tamponade and creating a diagnostic dilemma of co-existing cardiogenic and septic shock.
Case Report
A 51-year-old gentleman of Chinese ethnicity presented with right foot pain and swelling over 2 weeks, associated with chest pain and shortness of breath during that period. He had a 10-year history of poorly controlled diabetes mellitus, with an HbA1c level of 8.8%, as well as hypertension and dyslipidaemia.
He was hypotensive on arrival, with a blood pressure of 91/60 mmHg, and hypoxic, requiring high-flow oxygen at 15 L/min to maintain saturations of 100%. His other vital signs were stable: pulse rate 72 beats per minute, respiratory rate 24 breaths per minute and temperature 37.4 degrees Celsius.
Clinical examination of the right foot revealed gangrene of the lateral two toes extending into the lateral malleolus, with pus discharge, associated warmth, and crepitus up to the hindfoot on palpation. There was also dry gangrene of the fourth toe of the left foot, with a small puncture wound on the dorsum of the foot discharging pus; crepitus was felt up to the midfoot on palpation of this side. Bilateral dorsalis pedis and posterior tibialis pulses were palpable but feeble.
Table 1: Blood Investigations on Admission

| Blood Test | Result | Blood Test | Result |
|---|---|---|---|
| White Cell Count | 26.99 × 10⁹/L | Alkaline Phosphatase | 168 U/L |
| Neutrophils | 90.3% | Alanine Aminotransferase | 37 U/L |
| Lymphocytes | 4.5% | Aspartate Aminotransferase | 40 U/L |
| Platelets | 210 × 10⁹/L | Sodium | 121 mmol/L |
| Haemoglobin | 10.0 g/dL | Potassium | 7.6 mmol/L |
| Lactate Dehydrogenase | 441 U/L | Urea | 40.5 mmol/L |
| International Normalised Ratio | 1.2 | Creatinine | 323 μmol/L |
| Activated Partial Thromboplastin Time | 36.5 s | Creatine Kinase | 43 U/L |
| Prothrombin Time | 14.6 s | Total Bilirubin | 21 μmol/L |
Figure 1a & 1b: Radiography of the (1a) left foot and (1b) right foot respectively, demonstrating gas within soft tissue bilaterally.
Figure 2: Chest radiography demonstrating cardiomegaly and globular-shaped heart, with loss of left-sided cardiophrenic and costophrenic angles.
Figure 3: Electrocardiogram demonstrating widespread saddle-shaped ST-elevation, consistent with pericarditis.
Figure 4: Parasternal Long Axis view of bedside echocardiography showing evidence of pericardial effusion and right atrial and ventricular collapse
Figure 5: Purulent pericardial aspirate via pericardiocentesis
Initial blood investigation results are highlighted in Table 1. HIV antibody, hepatitis B surface antigen and hepatitis C antibody serology were all negative. Lower limb radiography revealed gaseous shadows bilaterally (Figure 1). The clinical and radiological findings were consistent with necrotising soft tissue infection of both feet, and during an orthopaedic consult the patient was advised to undergo extensive wound debridement and possible amputation of the affected sites.
However, on closer review of the chest radiograph, there was evidence of cardiomegaly with a globular-shaped heart (Figure 2). His electrocardiogram on arrival revealed diffuse ST-segment elevation in the majority of leads, with ST-segment depression in lead aVR, consistent with pericarditis (Figure 3).
A bedside echocardiogram was performed, revealing a massive pericardial effusion, measuring up to 2 cm in depth, with evidence of both right atrial and right ventricular collapse (Figure 4).
An urgent pericardiocentesis was performed under echocardiographic guidance, yielding a turbid aspirate (Figure 5). Urgent microscopic analysis revealed 45 cells per mm³, the majority of which were lymphocytes, and Gram staining showed moderate amounts of pus cells with occasional gram-positive cocci. The pericardial fluid was negative for acid-fast bacilli.
A repeat transthoracic echocardiogram was performed after pericardial drain insertion, revealing minimal remnant pericardial fluid with the drain in situ, and no evidence of any mass or vegetation. Unfortunately, transoesophageal echocardiography and Computed Tomography (CT) imaging of the mediastinum (to rule out mediastinitis and pneumonitis) were not performed, as management of the NSTI took precedence.
The patient was started on intravenous antibiotics, piperacillin-tazobactam and clindamycin. Limb-saving wound debridement was delayed because the patient was initially reluctant to undergo invasive management; he later consented, and the procedure was performed on day 3 of admission, with tissue cultures taken peri-operatively. Unfortunately, the patient deteriorated post-operatively due to extensive blood loss and overwhelming septicaemia, and succumbed to his illness 72 hours later. Cultures from blood, pericardial aspirate and tissue were subsequently confirmed positive for GBS.
Discussion
GBS is a common microorganism, often colonising the gastrointestinal and reproductive tracts.3 Rarely, GBS colonises the skin and can cause necrotising fasciitis, i.e. necrotising soft tissue infection (NSTI), with only 22 cases reported to date.2 The majority of these patients were either immunocompromised or had other predisposing factors, including recent thoracic intervention or trauma.4,5
GBS-related infections of cardiac structures are rare overall, with 2 to 3% of cases presenting as native valve endocarditis and far fewer as pericarditis, mycotic aneurysms or intraventricular abscesses.3 Parikh et al reviewed the microorganisms isolated from purulent pericarditis samples and found that only 5% were due to streptococcal organisms other than Streptococcus pneumoniae, and possibly fewer still due to GBS.6 Our literature search revealed only one reported case of GBS-related purulent pericarditis, and to our knowledge that case was not linked in any way to an NSTI.7
Our case was unique in that, at the time of writing, there were no other reports of GBS-related lower limb NSTI in combination with mediastinal involvement. Only a handful of cases of necrotising fasciitis with mediastinal involvement have been reported, the majority supra-diaphragmatic, with only one report involving NSTI of the lower limb, due to Aspergillus infection.8,9
The matching culture results obtained from blood, tissue aspirate and pericardial fluid in our patient suggest dissemination of GBS from the NSTI, possibly via a haematogenous route, although bacterial pericardial dissemination can also occur via direct spread from infected foci in neighbouring intra-thoracic or sub-diaphragmatic structures.4 The possibility of multi-routed spread should remind clinicians that, albeit rare, mediastinal involvement is a possible complication of NSTI.
Conclusion
This case highlights the rare possibility of cardiac involvement in cases of NSTI, and the possibility of cardiac tamponade causing cardiogenic shock masquerading alongside septic shock. It also highlights the importance of combining clinical findings with ancillary testing, including bedside echocardiography, when faced with challenging cases of sepsis to help look for possible foci of infection.
There have been continuing initiatives to transform and improve the National Health Service (NHS) in recent years. Mental health services in England have similarly evolved in their service provision. There has been a shift away from the perceived "medicalisation" of treatment, with traditional long-stay institutions replaced by more targeted and personalised care in the community.1 Furthermore, community services themselves have seen much remodelling over the years, including decommissioning and integration, as well as increased involvement in outreach and early intervention teams.2
From the outside, mental health services are sometimes perceived as relatively well funded but, as with most healthcare sectors, the resources are inadequate to support the growing demand from the population requiring the service. This has been the case for some time, but it has become more evident with the significant reduction in funding observed since 2010/11.1 In addition, constant governmental pressure to meet key performance targets, as well as unachievable public expectations, have further stretched an already resource-depleted mental health service.
The implementation of new national policies3 was intended to shift care from large psychiatric hospitals to smaller specialist community centres, with a promised reduction in the demand placed on inpatient services. In England, inpatient psychiatric beds peaked at 150,000 in 1955; this number had declined rapidly to 22,300 by 2012. Between 2010/11 and 2013/14, a further reduction of 7% of all available beds was seen.4
Despite the promise of changes in mental health service delivery to mitigate the continued reduction in inpatient beds, national demand for those beds has not in fact fallen.1 The recommended occupancy level, for example, is 85%, yet 119 wards surveyed5 were operating at 91%, with some at 138% occupancy. Occupancy levels of over 100% usually occurred when long-stay inpatients were discharged home on short-term leave and their beds were filled during their absence.4 Where the number of inpatient beds fails to meet demand, or the waiting list for a first assessment or review grows, providing high-quality, safe patient care becomes difficult. Examples include inappropriate use of the Mental Health Act to detain patients as a means of securing an inpatient bed,5 incomplete assessments of people detained in places of safety owing to time or space constraints,6 and an increase in violent incidents on overcrowded inpatient wards.7
What is a Crisis Resolution and Home Treatment Team (CRHTT)?
In the late 1980s and 1990s, community mental health teams provided acute crisis support. This posed a number of problems: the teams usually operated during normal working hours (9am-5pm, Monday to Friday), so were not always available to support patients in a crisis, and they did not have the desired impact of reducing the number of acute admissions.8 This gap in service provision inspired experimentation with, and the subsequent development of, intensive home treatment services, some of which showed evidence of reduced hospital admissions and a holistic way of working often preferred by families, who were happy to have their loved ones receive the required support in the home environment.9 Over the last two decades, with the remodelling of services, increased investment (NHS funding rose from £49 billion in 2000 to £122 billion in 2016) and a migration of mental health professionals, CRHTTs were established and are now available in every mental health trust across the United Kingdom (UK).10
A CRHTT is a team of mental health professionals, including psychiatrists, community psychiatric nurses, social workers and support workers, who provide rapid and intensive support at home during a mental health crisis.11 It is a 24-hour service operating seven days a week, acting as the "gatekeeper" for acute services and accepting referrals from various sources, including inpatient, community and liaison services and from outside the Trust, to support patients experiencing crises. These teams risk-assess patients and determine whether they require inpatient or home treatment. In the latter case, CRHTTs provide intensive home treatment, offering up to 2-3 visits a day as well as 24/7 phone support. The teams also facilitate early discharge from hospital in cases where patients are past the initial acute crisis but need further input before discharge to community mental health teams for longer-term support.8
Definitions of diagnosis and second opinion
A second opinion is defined as "advice from a second expert (such as a doctor/psychiatrist) to make sure advice from the first such expert is correct", whilst diagnosis is defined as "the art or act of identifying a disease from its signs and symptoms".12 Due to increased pressure on inpatient facilities and the remodelling of community services, there has been a huge increase in the number of referrals made to CRHTTs: between 2011/12 and 2013/14, referrals increased by 16%.13 The reduction in inpatient beds and high workloads within community services often result in the formulation of arbitrary diagnoses and treatment plans. With increased pressure on other mental health services, the role of CRHTTs has begun to evolve: in addition to the functions discussed above, they appear to be becoming second opinion services by default, enabled by the psychiatrists working in these teams.
We organised a project to establish whether a typical CRHTT is fulfilling the criteria of being a diagnostic or second opinion service provider.
Method
We examined 100 consecutively accepted referrals to a CRHTT from 1st December 2016. The patients were divided into three groups: those being discharged/referred from hospital (HR), those referred from the community (CR), and those who were not open to secondary mental health services at the time of referral (NR). The age range and gender of the groups were noted. Thereafter, the NR group was excluded from analysis for the obvious reason that the CRHTT was not providing a second opinion in their case. The HR and CR groups were further reduced by excluding patients who were not seen by a CRHTT psychiatrist. The remaining patients in both groups were scrutinised regarding a change in medication; this was also recorded for the previous and next care occasions. The likelihood of medication change at the next treatment event was analysed to establish whether it was affected by the previous event. The numbers of patients with CRHTT diagnosis change were also recorded for both groups.
Results
Figure 1: Project Flowchart
Figure 2: Group Demographics

| Group | n | Male | Female | Average Age | Age Range | 1-7 days with CRHTT | >7 days with CRHTT |
|---|---|---|---|---|---|---|---|
| No prior referral open (NR) | 43 | 20 (47%) | 23 (53%) | 36.0 | 19-60 | 5 (12%) | 38 (88%) |
| Community referral (CR) | 36 | 13 (36%) | 23 (64%) | 37.8 | 19-66 | 7 (19%) | 29 (81%) |
| Hospital referral (HR) | 21 | 10 (48%) | 11 (52%) | 39.0 | 19-63 | 6 (29%) | 15 (71%) |
There was little difference in age between the three groups (average ages: CR=37.8, HR=39.0, NR=36.0). The proportion of men was lower in the CR group than in the HR and NR groups (36% as against 48% and 47%). Whether a psychiatrist saw a patient appeared to be related to both the referral source and the length of stay with the CRHTT. Most patients in the hospital-referred (HR) group (n=16, 76%) were not seen by a psychiatrist, while most of those referred from the community (CR) were (n=24, 67%). No community-referred patient was seen by a psychiatrist if they were with the CRHTT for less than a week; these short-stay patients accounted for 7 of the 12 community-referred patients who were not seen. This suggests that a psychiatric assessment should be scheduled sooner after community referral in order to offer patients a more comprehensive service.
Psychiatric assessment led to changed diagnoses for 28% (8/29) of patients. This figure was 40% (2/5) for the HR group and 25% (6/24) for the CR group.
Medications were changed for 69% (20/29) of patients seen by a psychiatrist. In the subgroups, 60% (3/5) of HR psychiatric assessments and 71% (17/24) of CR psychiatric assessments resulted in a change of medication.
The chi-square statistic was used to evaluate whether a recent medication change, during the inpatient stay or at the most recent outpatient appointment, made the CRHTT less likely to adjust medication; this indicated no relationship between the two events. A similar analysis indicated that the likelihood of a medication change at the patient's next community appointment was increased by seeing a CRHTT psychiatrist, but was unrelated to whether that assessment had itself resulted in a change of medication.
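For readers unfamiliar with the test, the comparisons described above amount to Pearson chi-square tests on 2×2 contingency tables. The paper does not publish the underlying tables, so the sketch below uses hypothetical counts purely to illustrate the calculation:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = [a + b, c + d]          # row totals
    cols = [a + c, b + d]          # column totals
    observed = [[a, b], [c, d]]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical example: recent medication change (rows) versus
# whether the CRHTT adjusted medication (columns).
stat = chi2_2x2(8, 6, 12, 3)
# A 2x2 table has 1 degree of freedom; the 5% critical value is 3.841,
# so a statistic below that indicates no significant association.
print(stat, stat < 3.841)
```

This kind of test requires adequate expected cell counts (conventionally at least 5 per cell) to be reliable, which is worth bearing in mind given the small subgroup sizes in this study.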
Discussion
We have demonstrated in this study that a typical CRHTT is providing a diagnostic and second opinion service. Changes in medication were more than twice as frequent as changes in diagnosis – this is perhaps unsurprising as diagnostic changes would be likely to require a different prescription.
Most community referrals were actively evaluated in terms of both diagnosis and treatment. This is a significant change from the original function of the CRHTT, in which a psychiatric assessment was not a standard aspect of care and very few of the original teams included a psychiatrist. It may also reflect the current pressures on community teams, which are frequently short-staffed, leading to more competition for the available clinic appointments. Consequently, patients may not have seen a psychiatrist for some time and their requirements may have changed. It is, however, also known1 that community patients who have not been reviewed recently, or who have a long wait before their first assessment, are more likely to present in crisis.
The diagnostic and second opinion function of the CRHTT is more prevalent when patients have been referred by the community team (67% reviewed, 47% medication changed) rather than on discharge from hospital (24% reviewed, 14% medication changed). This appears to largely reflect the fact that relatively few discharges were seen by the CRHTT psychiatrist because these patients had just received a full consultant-led discharge treatment plan. This may be another example of community service pressures leading to patient crises and thus engagement with alternative services – in this case inpatient care may be offering a second opinion service. The current separation of community and inpatient services will augment this effect as previously the patient would have been more likely to receive continuous care from the same consultant. This is an interesting view of current service configuration. The reduced continuity of care is often seen as a disadvantage but it does present an opportunity for a fresh evaluation of a patient’s diagnosis and medication by a different psychiatrist.
Longer stays with the CRHTT made psychiatric assessment more likely; in particular, discharge within a week made a psychiatric review unlikely. The proportion of community referrals seen by a CRHTT psychiatrist could be increased to 83% if patients were seen within 24 hours. This figure assumes that psychiatrists would then see the same proportion of short-stay patients as of long-stay patients; the remainder would include those patients who refuse to engage with such an appointment.
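The 83% projection can be reproduced from the figures reported in the Results (36 community referrals, of whom 29 stayed longer than a week and 24 of those were seen by a psychiatrist). A quick sketch of the arithmetic:

```python
# Reproducing the 83% projection from the Results figures.
cr_total = 36        # community referrals (CR group)
cr_long = 29         # CR patients with the CRHTT for more than 7 days
cr_long_seen = 24    # long-stay CR patients seen by a psychiatrist

# Assumption from the text: if assessed within 24 hours, short-stay
# patients would be seen at the same rate as long-stay patients.
rate = cr_long_seen / cr_long                    # ~0.828
projected_seen = rate * cr_total                 # ~29.8 of 36 patients
print(round(projected_seen / cr_total * 100))    # prints 83
```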
It is interesting that chi-square statistical analysis suggests that the only influence on prescription change at the next appointment is whether the patient was seen by a CRHTT psychiatrist. It is not related to whether or not the CRHTT psychiatrist changed the medication. It is difficult to see why this should be the case unless the community psychiatrists consider the patients’ needs in more detail or are tempted to regain control after the referral to another psychiatrist.
In conclusion, the addition of psychiatric care to CRHTTs may be a valuable adjunct to the current pressures on community teams. The current trend to separate community, inpatient and CRHTT care is often cited as a disadvantage due to reduced continuity of care for patients. This project has drawn attention to the fact that it also offers opportunities for new teams to re-evaluate both diagnosis and treatment which offers patients the advantage of an internal second-opinion service. This advantage could be offered to more community-referred patients, albeit with more resources, by ensuring that they are assessed by the CRHTT psychiatrist within 24 hours.
Limitations
This is a small study conducted in a single CRHTT. It does, however, offer an indication of the evolving role of the CRHTT and its relationship to other services.
Arrhythmogenic Right Ventricular Dysplasia (ARVD) was first described in a case series of 24 patients in 1982.1,2 Since then, our understanding of its pathophysiology has improved dramatically, with dedicated guidelines and literature published to aid both diagnosis and management. Prompt diagnosis remains a struggle in the majority of developing countries, including Malaysia, where resources and expertise are scarce and obtaining cardiac magnetic resonance imaging or endomyocardial biopsy remains a challenge. Furthermore, diagnosis is difficult in most cases, as the clinical presentation varies and a wide range of clinical mimics exists. We present a unique case of ARVD diagnosed early through the knowledge of a deceased sibling who had undergone endomyocardial tissue characterization, confirming the presence of the disease in a first-degree relative.
Case Report
A 21-year-old gentleman presented to the emergency department following an episode of loss of consciousness lasting approximately 30 minutes, which recovered spontaneously. He denied any similar episodes in the past, but had suffered from reduced exercise tolerance (New York Heart Association (NYHA) Class II) over the preceding year. He had no known medical illness at the time but had smoked 6 cigarettes a day for the past 7 years.
His vital signs were stable on arrival: heart rate 73 beats per minute and regular in rhythm, blood pressure 143/84 mmHg, respiratory rate 19 breaths per minute, temperature 37 degrees Celsius and oxygen saturation 98% on room air. Cardio-respiratory examination revealed normal heart and breath sounds with no murmurs, and no carotid bruits were audible. There was no evidence of any deficit on neurological examination.
Figure 2 – Electrocardiogram revealing T-wave inversions in leads V2 to V4 with ventricular ectopic beats
Chest radiography revealed cardiomegaly (Figure 1). The electrocardiogram (ECG) revealed deep T-wave inversions in leads V2 to V4, with ventricular ectopic beats (Figure 2). Given these ECG findings, a serum Troponin-I test was performed, which was elevated at 480 pg/ml. The patient was treated for acute coronary syndrome complicated by cardiac syncope and referred to the medical team for further inpatient management.
However, on further history, it was revealed that the patient had a sibling who had died from an unknown cause, 5 years prior. His younger brother, 14 years of age at the time, was brought in after collapsing whilst playing basketball in a school compound. Unfortunately, he was pronounced dead on arrival to the clinic. A post-mortem was performed due to the unexpected nature of the event. Fortunately, our patient was brought into the same hospital as his sibling, allowing us to trace previous autopsy reports and images, with consent.
Macroscopic examination of the right ventricular cavity revealed epicardial surfaces showing infiltration of excessive fat tissue with nodular fibrosis. The right ventricular cavity appeared dilated and cut sections showed diffuse transmural fibro-fatty replacement of the right ventricular free wall, extending into the endocardium and involving the right ventricular septum (Figure 3a).
Figure 3a – Macroscopic examination of right ventricular cavity, which was dilated and showing signs of transmural fibrofatty infiltration. Figure 3b – Histological evidence of focal lymphocytic infiltration, myocyte hypertrophy and degenerative cytoplasmic changes.
Histology revealed extensive fatty infiltration with interstitial fibrosis, primarily in the epicardium. There was associated myocyte loss with hypertrophy of cardiac muscle cells remaining (Figure 3b). Both macroscopic and microscopic findings were suggestive of ARVD.
After the autopsy results came to light, clinical management changed: priorities shifted towards obtaining an echocardiogram, cardiac Magnetic Resonance Imaging (MRI) and a Holter recording, rather than diagnostic angiography and coronary evaluation. Echocardiography revealed an ejection fraction of 25 to 30%, with evidence of left ventricular dyssynchrony and a tethered posterior mitral valve leaflet with mild eccentric regurgitation, consistent with dilated cardiomyopathy.
Cardiac MRI revealed both left and right ventricular dilatation, with end-diastolic dimensions of 5.8 cm and 4.4 cm and end-diastolic volumes of 153 ml/m2 and 149 ml/m2 respectively, and evidence of bi-ventricular dyssynchrony. Left and right ventricular ejection fractions measured 31% and 8% respectively. There was also bilateral atrial dilatation. Gadolinium study revealed late enhancement in areas of the right ventricular wall (Figure 4).
Figure 4 – Four-chamber view of cardiac magnetic resonance imaging revealing evidence of right ventricular enhancement following gadolinium study.
A 24-hour Holter recording revealed a significant ventricular ectopic burden, much of it bigeminy and trigeminy. In view of the symptoms and the above investigative findings, the patient consented to insertion of an implantable cardiac defibrillator (ICD) 4 weeks later at our centre, and has since recovered well with regular monitoring.
Discussion
ARVD is rare, with a prevalence ranging from 1 in 2,000 to 1 in 5,000 (allowing for geographical variation), and accounts for 5% of deaths in young adults and 25% of deaths in athletes 3, 4. The typical histopathological feature of ARVD is loss of right ventricular myocardium, which is replaced by fibro-fatty tissue. Less commonly, left ventricular involvement has been reported 5, 6. This pathological process leads to arrhythmias, heart failure and, most importantly, sudden cardiac death (SCD), with a mortality rate ranging from 4% to 20%, peaking in the fourth decade and affecting males and females equally 5.
Diagnosis is difficult in most cases, as the clinical presentation varies and a wide range of clinical mimics exists, including myocarditis, sarcoidosis, Brugada syndrome, idiopathic RV outflow tract VT and congenital heart diseases with right-chamber overload, amongst others 6. Diagnostic criteria were developed in 1994 and modified in 2010 to assist in the diagnosis of ARVD; although the criteria have been shown to be specific, they lack sensitivity 7. Nevertheless, they highlight several key areas, a mix of clinical, radiological, histological and electrophysiological features, that assist in diagnosis 4.
Although no further evaluation or investigation had been performed on our patient's deceased sibling at the time of his presentation, given the circumstances, the autopsy provided supportive histological features. Our patient therefore met the major criterion of having a first-degree relative affected by the disease. More importantly, the suspicious family history prompted further evaluation, allowing the medical team to prioritize investigations, specifically the cardiac MRI and Holter evaluation. This led to early risk stratification and the decision to implant an ICD, as the patient was deemed at high risk of SCD.
Conclusion
This case highlights the importance of good history taking, including a detailed family history of SCD or cardiac disease, especially in young patients presenting with typical cardiac symptoms. Early identification and appreciation of risk affect the outcomes of patients with ARVD. Furthermore, a diagnosis such as ARVD has implications for relatives and future offspring, further highlighting the need for detailed evaluation of patients similar to the one described in this case report.
In contemporary medical practice, caring for complex patients with utmost efficiency, primary care physicians and specialists are expected to work together to organize a seamless transfer from acute to chronic care. The job of the generalist is to sort out and integrate the recommendations of numerous specialists and to apply those strategies in the care of the patient long after the index admission. During such interactions, primary care physicians often realize the impact of differing viewpoints on overall patient care well beyond the anticipated time frame, whether acute or chronic. To that end, and to better inform such recommendations, this paper proposes the top 10 things primary care physicians wish every specialist knew when addressing problems on the busy hospital ward.
1. Organ-systems work together, not independently
As we see in examples such as the cardio-renal, hepato-renal and hepato-pulmonary syndromes, as the patient gets sicker, the interaction of organ systems begins to dominate. Indeed, predicting outcomes in comorbid conditions depends not only on understanding the culprit organ, but on quantifying a complicated interaction of multiple organ systems. For example, the ADHERE registry algorithm shows that the most important predictors of in-hospital death in heart failure patients are not cardiac function per se, but creatinine clearance and BUN[1]. In other words, the common comment from a specialist asked to evaluate their system of expertise, 'such and such organ is fine', may soon become irrelevant and obsolete in the context of multiple complex systems.
Moreover, recent research revealed that genotype, endotype and phenotype are quite different in COPD and asthma[2]. Therefore, even though a disease may manifest in a single system, the pathophysiological process from which it arose may have been triggered in different organs.
2. Mortality is not the only outcome measure
Specialists tend to treat all-cause mortality as the most important outcome measure, or choose strategies based on organ-specific survival, such as MACE (major adverse cardiac events) or creatinine-doubling time[3]. Life is far more than just being alive. Consequently, quality of life (QOL) measures, which capture patient-centred outcomes, provide insight not only into the effectiveness of interventions but also into their meaningfulness to patients, and such measures gauge previously uncaptured positive aspects of interventions[4]. The difficulty of defining well-being remains a challenge for researchers and arises from cultural and societal differences that are context-bound and unique to each community.
3. ADL is one of the most critical prognostic indicators
New biological markers are numerous: new renal injury markers such as NGAL or KIM-1, to name a few. But a quick, old-fashioned bedside assessment can easily reveal impairments in Activities of Daily Living (ADL) at each patient visit, and ADLs measured by functional assessment have been consistently shown to be strong outcome predictors in acute and chronic illnesses, especially in elderly populations[5]. In fact, functional measures have been deemed as important as other objective measures in some prognoses[6]; for instance, in the BODE score for COPD survival prediction, the ADL measure carries the same weight as the PFT (Pulmonary Function Test). In the management of elderly patients, hospitalization[7] and initiation of haemodialysis[8] significantly worsen ADLs. In the development of a 1-year mortality index after hospital admission among elderly patients, ADL was of pivotal importance[9].
Functional impairment is also a strong indicator of readmission: there is a dose-response correlation between the severity of impairment and the risk of readmission[10]. Intensifying in-hospital, post-ICU physical and nutritional therapy has been shown to improve many aspects of recovery[11]. In patients with numerous chronic illnesses, the number of comorbidities strongly correlates with the decline of ADL[12]. Interventions to maintain pre-hospitalization ADL are important in facilitating recovery from hospitalization; in one study, an in-hospital mobility program helped patients maintain pre-hospitalization ADL while the usual-care group experienced significant decline[13].
4. Effectiveness, not efficacy, matters most in the real-world
“Doctor, I cannot afford the medicine prescribed to me when I was discharged!” This is oft-repeated in the offices of generalist physicians. If a patient cannot afford a medication and therefore does not take it, the treatment lacks effectiveness, however efficacious it may be. In the inpatient setting, the efficacy of an intervention largely determines the outcome, since patients reliably receive the prescribed intervention. This is not the case in the outpatient setting, where the effectiveness of an intervention depends on many other elements, such as the accuracy of diagnosis, patient adherence to the proven intervention, prescription drug coverage, access to care and, finally, the efficacy of the intervention[14].
5. Mental wellness is essential to physical wellness
Health is not limited to the physical body; it also involves mental wellness. In fact, mental and physical health are inseparable. Serious illnesses naturally affect mood and cognition; it is therefore important to acknowledge that mental health issues lie squarely within the spectrum of physical disease management. Generalists can help patients with multiple comorbidities manage depressive symptoms through brief psychological interventions; such symptoms related to cognition and mood are expected consequences of any serious illness.
Studies have shown that among elderly patients without dementia at baseline, noncritical hospitalization is associated with the development of cognitive dysfunction[15]. Among elderly patients, the prevalence of cognitive dysfunction is significantly higher in ADHF (acute decompensated heart failure) admissions[16] and in survivors of severe sepsis[17]. Depression and depressed mood are prevalent in patients suffering from serious illnesses[18]. New models integrating psychotherapy for patients with multiple comorbidities are emerging and have proven effective[19].
6. Pay heed to illness trajectory
“My grandma has never been the same after her hip surgery. Please fix her!”
Primary care physicians often note a decline in the general function and cognition of their patients after index admissions to the hospital. As noted earlier, acute hospital admissions have a strong independent effect on the severity of disability amongst elderly persons[20]. The multidimensional frailty score, which incorporates ADL and cognitive function, predicts one-year mortality using a simple scoring system[21]. Poor functional status contributes to frailty and leads to poor surgical outcomes in the elderly[22]. The prevalence of functional impairment steadily increases from 28% two years before death to 56% in the last month of life[23]. Studies demonstrate that gait speed is an important predictor of survival amongst the elderly[24][25], as is grip strength[26][27].
Furthermore, elderly patients sustain significant impairments long after the index hospitalization[28]. Amongst elderly patients discharged from the ICU, more than 50% die within a month[29]. At one-year follow-up, critical ADL capacities, such as taking medications or shopping, were impaired in more than 70% of ICU survivors who had been ventilated for longer than 48 hours[30]. Delirium has a long-lasting effect even after patients are discharged from the hospital: the longer the duration of delirium, the more sustained the cognitive impairment[31].
7. Care for the care-givers
There is increasing evidence that caregivers sustain long-lasting effects from patients' illnesses. Depressive symptoms increase overall for surviving spouses regardless of hospice use[32]. The RECOVER study[33] demonstrated that caregivers suffered high levels of depressive symptoms up to 1 year after a loved one's ICU admission. In the era of chronic illness, it is essential to be mindful of the contributions caregivers make to disease management. Tools are widely available for clinicians to assess caregiver burden[34]. This is important because family-support interventions have been shown to improve the quality of communication and decrease the patient's length of stay in the ICU[35].
8. ‘Exercise and diet’ trumps ‘medicine and surgery’
The COURAGE trial demonstrated that after 7 years there is no difference between medical management and percutaneous coronary intervention (PCI) in managing coronary disease[36]. As time passes after the initial event, the benefits of invasive intervention become less apparent. Similarly, in the long run, intensive statin therapy has not proven clinically superior to moderate-intensity statin therapy[37]. As the saying goes, in the long run, “we are what we eat.” Innumerable studies have shown that diet and physical habits have a lasting effect on health[38]. Bariatric surgery has demonstrated dramatic and long-lasting effects on diabetes control, while the DiRECT study demonstrated that intensive exercise and diet achieved remission in nearly half of the intervention group, compared with only 4% of controls[39]. Despite the substantial increase in chronic illnesses closely tied to lifestyle and eating habits, physicians of all specialties are poorly trained to provide nutritional counselling[40].
9. Whose definition of health matters?
If health, as defined by the WHO, is not simply the absence of illness but “a state of complete physical, mental and social well-being,” then it must incorporate many other elements dictated by societal, cultural, moral and philosophical norms and values. Furthermore, the definition of health and the path to attain it should come from the society and community it reflects, since neither healthcare personnel nor the healthcare industry own health. The definition should therefore emerge from community interventions and multidisciplinary groups of varied stakeholders, rather than from the ivory tower of healthcare researchers. Accordingly, medical decision-making is rapidly moving away from the paternalistic approach towards consensus-based, collegial decisions. Shared decision-making, informed consent, discussion of treatment options and seeking second opinions have become standard practice and reflect the empowerment of patients, and communities, to define their own healthcare. Ultimately, as long as patients are competent, they decide their treatment after consulting physicians, who advocate for the patients' goals of care and advise them accordingly.
10. Empower healthcare recipients
In the long-term management of chronic illness, participation of the patient is essential. And transparent communication is pivotal for better participation and shared decision-making[41]. In the new model of health, healthcare providers must play an active role in advocating for patients and promoting well-being while acknowledging that health is a dynamic concept[42]; these physicians do not simply “coordinate care.” This shift from the physician-centred to the patient-centred approach, in and of itself, will be empowering for patients.
CONCLUSION
Transition of care is one of the most important steps connecting hospital care to primary care. The problems currently labelled as miscommunication may be more than a lack of handoff tools or timely messaging; they may instead stem from differing priorities and varied interpretations of patients' problems by these two groups of providers. Many questions remain unanswered in the future of collaborative healthcare: what kind of doctors are best suited to address the complex interaction of illnesses involving multiple organs? Who can develop a new framework to capture this dynamic and complex interaction of systems, covering many organs in a single patient? Moreover, the next generation of healthcare providers will need to be trained with this fundamental concept of patient management in mind. As the twenty-first century progresses, discoveries in medical science will continue to move the field away from the current organ-based specialization towards pathophysiology-based specialization. This article advances the discussion on the changing role of generalist physicians and the advice of their specialist colleagues, as together they face ever more changes within the practice of medicine.
Psychiatric trainees in Iraq face many challenges that limit their immediate access to improved training opportunities. These include limited access to classroom teaching and regular clinical supervision meetings, and fewer opportunities to attend international conferences and placements. These challenges are most acute in the specialities with the greatest shortage of consultants (for example, forensic and child and adolescent psychiatry).
Furthermore, the fragile security situation in the capital and larger cities, and the consequent post-conflict disruption to educational institutions, make it difficult for those in the UK and elsewhere to visit the country and support educators and training on the ground.
Against this background and as a medical educational team in the UK (Oxford University Medical Education Fellows, OUMEF) with an interest in developing training opportunities for peers and colleagues in Iraq, we set up the Oxford Psychiatry in Iraq (OxPIQ) Project, beginning with a project development team that included Medicine Africa, an experienced online distance learning platform.
So what is the role of TEL in the delivery of online distance learning targeted at medical professionals in these circumstances?
Meeting the Challenge – the role of TEL
Technology-enhanced Learning (TEL), or Web-based Learning (WBL), defined as the use of information and communication technologies in teaching and learning 1, is a relatively new phenomenon. Nevertheless, there is a considerable body of evidence supporting the use of TEL in various clinical and non-clinical settings.
McCutcheon et al. 2 systematically reviewed thirteen studies and found that ten concluded that online learning is as effective as traditional classroom teaching, despite the limitations of some of these studies.
In a large meta-analysis, Means and colleagues 3 concluded that students taught online performed modestly better than students learning similar material face-to-face. Combining face-to-face and online teaching produced a larger benefit than face-to-face methods alone.
TEL can address the limitations of classroom learning that arise from expanding curricula and limited contact time between students and lecturers or trainers. It can contribute to better use of face-to-face classroom contact by facilitating the flipped classroom approach. 4 In this approach (also called inverted instruction or upside-down teaching), students acquire the basic information of the lesson outside the class (usually from online materials) and then develop their knowledge further in class by sharing their learning, interacting with classmates and the teacher, and discussing various aspects of the study topic. These advantages have enabled TEL to revolutionise distance learning at many levels, enabling greater access to education by overcoming geographical and time-zone boundaries.
An allied concept within distance TEL is that of virtual teams 5, in which health professionals come together to teach and learn from each other independent of location. This offers several advantages, including the possibility of addressing speciality-specific training gaps by incorporating the relevant expertise within the team, and the creation of what is termed “connectivism”. This term refers to the use of internet technologies to enhance learning through online peer networks 6 and the development of communities of practice. 7 The latter allow for workplace-based learning, with trainees learning from more experienced practitioners and progressing through greater competency acquisition.
In a similar vein, creating networks of professionals may help to establish longer-lasting relationships of mutual benefit between UK and Iraqi professionals (e.g. through collaboration on training programmes, conferences, etc.). Cross-cultural online learning has also been shown to improve the language skills and cultural awareness of learners and educators. 8 With language translation technology, language difficulties can be overcome, especially if the educator can observe the learners' responses to the translated text and is offered the opportunity to give further explanations and clarifications when necessary. 9 Finally, as well as sharing knowledge and experience within groups, TEL creates opportunities for mentoring and coaching individuals. 10
For our purposes, these findings and opinions support the use of online learning as a suitable distance learning “add-on” to existing training opportunities in Iraq.
TEL and Learning Theories
Learning theorists suggest that experiential and constructivist learning theories are most appropriate to learning in the clinical context. Both are possible with TEL (which is also facilitative of behaviourist and cognitivist approaches).
For example, the virtual classroom environment can enhance participants' learning experience by improving their analytical skills as they think through case formulations and management plans. 11 Participants in online learning can be assessed and receive feedback immediately. Ideas can be shared, and there is no passive acquisition or transfer of knowledge as with traditional lectures. These aspects have implications for the design of the educational sessions and are discussed below in the learning methods section.
Challenges of Online Distance Learning
There are many challenges associated with online distance learning. Firstly, there is the potential lack of the required technologies (internet access, laptops or desktop computers), the expense of subscribing to online learning platforms, the need for technical support, and similar technical and logistical issues. 12 These technical problems may impair access to, and the functioning of, the virtual team. The choice of an experienced online platform must therefore be considered carefully.
Secondly, there may be ethical issues around protecting patients' confidentiality in these sessions, especially as different privacy laws apply in the UK and Iraq. This requires knowledge of the relevant professional requirements by the tutor team, for example.
Furthermore, the student-teacher relationship has traditionally been underpinned by direct face-to-face contact, with both parties present at the same time and place. 11 Learners and educators might therefore be less satisfied with online learning. For these reasons, the concept of blended learning (the careful integration of online learning with face-to-face learning) has been developed to overcome the limitations of standalone online or face-to-face learning, and has been found effective and applicable in various settings. 13
Thirdly, any distance online learning programme must understand and support existing “local” training provision and arrangements, in the classroom and the workplace. This requires liaison and cooperation with the training providers and institutions on the ground.
For clinical training to be relevant, it needs to reflect the learning needs of trainees in the workplace, in keeping with adult learning principles and cognitive apprenticeship models of learning. 14 The latter include the importance of clinical decision-making, underscored by the higher levels of Bloom's (1956) cognitive domain. 15 To this end, appropriate learning and assessment methods are needed to enable effective learning.
In other words, while necessary, TEL may be insufficient to enhance learning outcomes if allied learning methods are not chosen appropriately. Also, in our view, TEL is not a substitute for bedside teaching.
Table 1 summarises this appraisal of online distance learning (using the online platform provided by MedicineAfrica).
Table 1 Strengths and limitations of using MedicineAfrica (web-based virtual classroom environment)
Strengths: better use of the participants' time and resources; overcoming geographical barriers between the two countries; improved critical thinking and communication skills; formation of long-standing professional networks; interactivity.
Limitations: limited or lack of internet access; technical and logistical issues; subscription expenses; the need for an appropriate choice of learning methods; ethical and legal issues (e.g., confidentiality); lack of direct face-to-face contact.
OxPIQ & Project Development Team
OxPIQ is a partnership between Medicine Africa and psychiatrist members of the Oxford University Medical Education Fellows with experience of working in Iraq. The Oxford University Medical Education Fellows (http://OUMEF.org) is a group of trainees from across medical and surgical specialities with an interest in medical education and training.
Medicine Africa (http://medicineafrica.com) is an innovative, clinically targeted online platform developed in collaboration with King's College London's Centre for Global Health, within the King's Somaliland Partnership. Built for low bandwidth, it enables collaboration between medical professionals in the UK and those in remote or fragile states to enhance education in various clinical specialities through online sessions (live courses and mentoring sessions). Please see Appendix 3 for a screenshot of one of the active OxPIQ sessions.
The next step was to invite representation and support from the Iraqi Board of Psychiatry and the Medical Education Unit in Baghdad. These developments led to the formal launch of the OxPIQ Partnership in March 2016. Subsequently, many UK and Iraqi doctors joined the Partnership as tutors and learners.
The Virtual Learning Team: Trainees, Specialty Consultants & Tutors
Iraqi psychiatry trainees were then recruited, and their most pressing learning needs appraised based on their own views and those of the Iraqi Board of Psychiatry supervisors. The learning needs that emerged included the management of older patients with dementia and functional disorders, and the assessment and management of children and adolescents (with autism and ADHD, for example), forensic patients and those with drug and alcohol addiction. The team thus formed comprised up to ten psychiatry trainees from Iraq and five senior psychiatrists/tutors from each of Iraq and the UK. A schedule of fortnightly seminars was agreed and published on the learning platform, with case-based discussions as the main educational activity.
Learning Methods and Processes
As noted earlier, experiential and constructivist learning methods are key to clinical education. Our literature appraisal revealed that they are essential elements of successful TEL in this context too. 16, 17 To these must be added learner engagement. 18
Virtual or online (anonymised) case-based discussions (CBDs) are valid and reliable learning tools. 16 They are interactive and centred on the students and their learning needs, while a facilitator guides the process of learning. Learners are engaged through discussion of actual clinical cases, preparing them for real-life practice. 19 Also, expert facilitation and peer feedback promote the development of trainees' clinical knowledge and skills. 20, 21
Effective small group teaching is characterised by four main strengths: flexibility, interaction, reflexivity and engagement. 22 Flexibility is when the teacher responds dynamically to the needs and learning of the students and helps them to explore wider pedagogic spaces. A higher degree of interactivity is usually seen in small-group teaching than in larger groups. In a small group, teachers are better able to engage in continual self-reflection, listen sensitively to students and observe the dynamics between group members, leading to better reflexivity. Engagement refers to encouraging students to develop their academic identity and engage in lively debate about the various aspects of the topic discussed.
We aimed to replicate these characteristics. For example, small group discussion allowed better interaction with each participant (interactivity), and the chat window enabled the facilitator to self-reflect on the process, monitor engagement and respond reflexively, using questions and answers to stimulate interest and responding flexibly to individual trainees' knowledge gaps. Tutors are encouraged to identify trainees' learning needs and to facilitate interactivity and timely feedback, as these are highly valued by participants and help keep them motivated and engaged. 18
For further reading in this area, we recommend Brindley and colleagues' 23 ten strategies for increasing students' motivation towards, and engagement with, online learning (see Table 2).
Table 2 - Strategies to increase engagement in online teaching (modified from Brindley and colleagues, 2009) 23
1. Transparency of expectations: Make the learning objectives clear and relevant to the participants' learning needs. Teachers must be open to learners' suggestions and willing to discuss the process and purpose of the educational activities.
2. Clear instructions: The educational activity, its timing, duration and technical aspects are described in detail to the participants. They should not be left to 'try things out' and must be guided explicitly.
3. Appropriateness of task for group work: For the online activity to succeed, individual versus group tasks should be differentiated. In our example, this may be done by asking the participants to do a particular task before the session (e.g., read about severe and enduring mental illness), and then to work together on producing a formulation for the case discussed. This will increase their motivation to be involved in various tasks.
4. Meaning-making/relevance: The case-based discussions (and any online activity) should have relevance for the participants and aim to enrich their experience in their clinical work.
5. The motivation for participation embedded in course design: It is essential that participants in the online activity understand that the success of the group and the course depend on the individual effort of each participant.
6. The readiness of learners for group work: This aspect describes the development of a sense of community through a professional relationship which leads to better collaborative work.
7. The timing of group formation: Before participants join the educational activity, it is preferable to hold some discussions on their learning needs, allowing time for rapport to develop and so enabling better group activities.
8. Respect for the autonomy of learners: Joining and leaving the educational activity (and the whole online course) should be voluntary, with no penalties attached to leaving the course. Learners should have the freedom to choose which aspects of the online course are relevant to them.
9. Monitoring and feedback: The tutor should monitor the progress of the participants and give timely, respectful feedback to enhance their engagement and motivation. Please see Appendix 1 (lesson plan) for more details on feedback and evaluation.
10. Sufficient time for the task: Participants should be given time to be actively involved in the session. This is particularly important in a distance learning session, when issues with sound quality or internet connection speed may prevent some participants from engaging.
The focus of the Lesson Plan Design
To these ends, the lesson design focused on using problem-based learning methods (e.g. CBDs) within a small group setting (4–12 members) and a format that promoted learner engagement. A sample lesson plan is provided in Appendix 1.
In practical terms, tutorials were held fortnightly in term-time. All participants received an email notification of the session topic, and the tutor uploaded the session slides to the website beforehand. Participants logged in to the site (http://medicineafrica.com) and interacted with the tutor by voice (requiring only a simple microphone) and by writing in a chat window.
Evaluation and feedback gathering
The evaluation of the effectiveness of these sessions originally relied on trainees' immediate reactions (table 3, level 1 evaluation, Kirkpatrick24), using formal feedback tools provided online by MedicineAfrica. This feedback was shared with tutors and the Project Team. Please see Appendix 2 for the template used to collect feedback after each session.
Subsequently, members of the project team approached trainee representatives, tutors and Iraqi Psychiatry Board leads separately for further feedback and appraisal of learning needs. Furthermore, some months after a tutorial, we asked trainees for evidence of learning across the higher levels of Kirkpatrick's evaluation model.
Regular feedback from the Iraqi and UK participants has been positive, and the sessions have been associated with improved clinical knowledge and skills among the Iraqi Psychiatry Trainees. In addition, requests for certificates of tutorial participation have been agreed and provided by the project team, supporting learners' (and tutors') portfolio development.
Table 3 Kirkpatrick's (1996) Levels of Training Assessment

Level 1: Reaction (the participants' feelings about the training)
How to assess: Feedback during and after the tutorial using the feedback questionnaire

Level 2: Learning (improvement in the participants' knowledge)
How to assess: Post-tutorial questionnaire and interviews

Level 3: Behaviour, also called Transfer (improvement in the participants' performance)
How to assess: Direct or indirect observation and assessment of the skills and competencies of the trainees

Level 4: Results (cost-effectiveness, engagement, sustainability, adherence to evidence-based practices)
How to assess: Regular meetings between the participants, tutors and stakeholders
Further cooperation
A surprising (and very welcome) outcome of the project was the introduction, through the facilitation and support of the Iraqi Board of Psychiatry, of educational workshops in Baghdad. These workshops were held in Medical City, Baghdad, in May 2017 and April 2018 and were facilitated by tutors (YH & H Al-T) from the OxPIQ Partnership. They covered targeted topics such as old age psychiatry, addiction, and organic and forensic psychiatry. Trainees and senior psychiatrists from Iraq attended; their feedback showed how much they valued the interactive nature of the teaching and the use of CBDs as learning methods, resulting in high levels of engagement.
Conclusions
This paper describes the process of designing, delivering, and the early evaluation of an online distance TEL programme for mental health professionals based in the UK and Iraq.
TEL has had an important role in overcoming the geographical barriers and other challenges to developing training opportunities in Iraq and other developing countries. We are of the view that it could be used more often to connect professionals working in similar circumstances and with other disadvantaged groups, including refugees and asylum seekers. It is a flexible way of providing training to professionals working with those groups in relatively remote and resource-deprived environments.
Greenhalgh 25 suggests that three factors are needed for the success of online educational activity: ease of access, perceived usefulness of the activity to the learning requirements of the students, and the interactivity of the session. In our experience, these are important. Also, we believe that additional consideration should be given to (i) working with an experienced online platform provider; (ii) working with local educational institutions, trainers and learners to identify unmet learning needs and support existing learning opportunities/programmes; and (iii) adopting an iterative approach to feedback and evaluation.
Appendix 1: Example of a Lesson Plan
Session title
Case-based discussion on management of severe and enduring mental illness.
Duration of session
60 minutes
Tutor
A UK-based Psychiatrist
Learner group
Psychiatry Board Trainees and Senior Psychiatrists in Iraq and UK
Step 1– Learning outcomes
a) Describe the various stages in the management of the cases discussed during the session.
b) Enhance the participants' learning using case-based discussion with peers and seniors in the UK and Iraq.
c) Improve the presentation and discussion skills of the participants and their communication skills.
d) Explore ethical, cultural, and social issues related to the management of mental disorders and improve cultural competency and awareness.
Step 2 – Learning Plan
1. Introduction to the online tutorial – 10 minutes
a) Highlight the learning objectives of the tutorial
b) Stimulate the participants' thinking by asking about their current knowledge of the subject, whether they have managed similar cases in their clinical work, and what their learning needs are.
c) Outline the tutorial structure and further engage the participants by informing them of other details (e.g., whether they can ask questions during or after the case presentation)
2. The tutorial on a case of severe and enduring mental disorder – 30 minutes
a) Participants are encouraged to interact with the tutor, who should keep the tutorial interactive.
b) The case presented will provide an overview of the patient's journey from the initial presentation, followed by the investigations and then the treatment plans. Discussion of the differential diagnosis is important.
c) The tutor will assess the knowledge of the participants by asking questions on the various aspects of the case presentation (e.g., what is your differential diagnosis for a patient presenting with auditory and visual hallucination? What investigations would you request?).
3. Recap and Q&A time- 20 minutes
a) The tutor gives a summary of the main learning points from the tutorial, linking these to the learning outcomes presented at the beginning.
b) Participants are given enough time to ask questions and to participate actively in the session.
Step 3 – Assessment
Before Lesson
Before the tutorial, the tutor should know the participants' current educational curriculum and their learning outcomes in that subject. The UK and Iraqi Psychiatry curricula are different, and therefore knowing what is relevant is important.
Stating the learning outcomes at the beginning of the tutorial will also help in the baseline assessment of the knowledge and skills of the participants.
Pre-session questionnaires could also be used (for example, asking questions on the prognosis of various mental disorders and comparing the participants' knowledge before and after the session).
After the lesson
· Ongoing assessment during the tutorial using questions on various aspects related to the case presented.
· Questions in the recap section at the end of the tutorial.
· Post-tutorial feedback forms allow the participants to give their views on their learning needs and on whether the tutorial was relevant to their learning outcomes.
It is important to provide personalised feedback to the participants about their performance on these assessment tools as this will help them to identify gaps in their knowledge and improve their learning. 26
Step 4 – Resources required
MedicineAfrica is free to join and is designed to work well even with low bandwidth; it is therefore not adversely affected by the slow internet connections likely in developing countries.
Trainees and Tutors will need a computer (desktop or laptop) with an internet connection. No other resources are needed. Recommended readings could be disseminated by email to the trainees after the session.
Step 5 – Evaluation
Student evaluation
Gathering feedback is an essential step in favourably influencing the learning outcomes and in continuing to improve the structure and content of the tutorials. After the tutorial, the participants are asked to fill in an electronic feedback form (please see Appendix 2).
The form contains questions rated from 1 to 5 (strongly disagree to strongly agree) on various aspects of the tutorial, including structure, organisation, the range of aids used and whether the learning outcomes were met.
Also, direct feedback from the trainees, tutors, facilitators, and the stakeholders responsible for running the online learning platform is gathered to assess the effectiveness of these tutorials.
Teacher evaluation
Professionals invest a significant amount of time and effort in these lessons, and it is imperative to assess how the tutorials could be improved to meet the needs of the trainees and keep both them and the tutors motivated and interested. Tutors meet regularly via Skype to reflect on their teaching sessions and discuss ways of improving the delivery and quality of the tutorials.
Mutual learning is another aspect that needs to be assessed (is the tutor also benefiting from these lessons, for example by improving their cultural competencies or their teaching skills?).
Appendix 2: Feedback form to be completed by the participants after the session
Session title
Case-based discussion on management of severe and enduring mental illness.
Speaker
Date
Content
The session was relevant to my training needs
Strongly disagree 1 2 3 4 5 Strongly agree
Organisation
Sufficient time was allowed for the session
Strongly disagree 1 2 3 4 5 Strongly agree
Presentation
The session was well presented
Strongly disagree 1 2 3 4 5 Strongly agree
The session was delivered at the right pace
Strongly disagree 1 2 3 4 5 Strongly agree
The session was interactive and encouraged discussion/questions
Strongly disagree 1 2 3 4 5 Strongly agree
Structure
The session was well organised and structured
Strongly disagree 1 2 3 4 5 Strongly agree
The aims and objectives of the session were clear
Strongly disagree 1 2 3 4 5 Strongly agree
The aims and objectives of the session were met
Strongly disagree 1 2 3 4 5 Strongly agree
Overall evaluation
Overall, I would rate this session as
Extremely poor 1 2 3 4 5 Extremely good
Appendix 3: MedicineAfrica screenshot during an active session
Epidural anaesthesia is one of the favoured and most effective treatment options for labour pain. It is usually safe, and only a handful of situations constitute absolute contraindications to the technique, such as patient refusal, lack of expertise or equipment, severe coagulopathy and infection at the puncture site (1). However, as with any procedure, epidural anaesthesia is not flawless. Its side effects and complications include hypotension, pruritus, inadequate analgesia, post-puncture headache, nerve damage, infection and epidural haematoma (1,2). Headache occurs in up to one third of patients after lumbar puncture; it is less frequent after epidural anaesthesia, in which fluid is injected rather than removed (3). Accidental dural damage with subsequent headache following epidural anaesthesia is uncommon but is an important cause of morbidity that can severely limit the patient. In the rarest of cases, pneumocephalus can develop after epidural anaesthesia, and this has rarely been reported. We report a patient who developed pneumocephalus after receiving epidural anaesthesia for labour pain.
Case Report:
A 39-year-old woman presented to our Emergency Department with a severe headache unresponsive to analgesics. The headache had developed 10 to 12 hours after an epidural for labour pain, attempted three times, given four days earlier at a nearby medical centre. The severity of the headache did not change between lying and upright positions. She had no vomiting, fever or confusion. Neurological examination and vital signs were unremarkable, and the epidural insertion site showed no swelling or signs of infection. An urgent head CT scan revealed pneumocephalus, denoted by numerous left fronto-parietal extra-axial air locules (Figure 1 and Figure 2). Spinal MRI revealed mild subcutaneous oedema at the needle insertion site without any haemorrhage or collection. The patient was admitted and treated conservatively for six days; follow-up serial head CT scans showed complete resorption of the pneumocephalus, and her symptoms resolved completely. She was discharged, and follow-up was uneventful.
Figure 1: Pneumocephalus seen as locules of air (black) in the left fronto-parietal region, denoted by arrows (axial section)
Figure 2: Multiple pockets of air in the sagittal section, marked by arrows, demonstrating the pneumocephalus.
Discussion:
Pneumocephalus is the presence of air in the intracranial cavity. It can be acute (less than 72 hours) or delayed (more than 72 hours), and the most common site is the frontal region (4). Plain skull x-rays can detect about 2 ml of intracranial air, whereas a CT scan can detect as little as 0.5 ml (5). Pneumocephalus is most commonly a result of traumatic brain injury, surgical intervention on the brain or infection (5). Trauma accounts for up to 75% of cases, and chronic ENT infections, especially otitis media, account for a significant proportion. Surgical procedures of the brain, spine and ENT, such as sinus surgery, nasal polypectomy and nasal septum resection, are further causes; the incidence after supratentorial craniotomy has been reported to be 100% (6,7). However, it is very unusual for pneumocephalus to develop after epidural anaesthesia; when it does, it is possibly due to a ball-valve mechanism, in which air enters through the site of CSF leakage, which allows input but not output. Headache after lumbar puncture and epidural anaesthesia is relatively common, but certain situations may demand a more thoughtful approach (3).
In our patient, we suspect the dura was punctured during epidural anaesthesia, leading to air being trapped and siphoned upwards in an inverted-soda-bottle fashion. This is supported by the meta-analysis by Choi et al., which put the incidence of accidental dural puncture during epidural insertion at 1.5%, with 52% of those patients developing post-puncture headaches (8). In another extensive study performed over ten years, the overall incidences of accidental dural puncture and postdural puncture headache were 0.32% and 0.38%, respectively (9). The authors further stressed that if more than one attempt was required to identify the epidural space, the accidental dural puncture rate increased to 0.91%. We witnessed the same in our patient, in whom three attempts were made to identify the epidural space, increasing the risk of dural injury and subsequent leakage. Pneumocephalus usually resolves without clinical manifestations. Conservative treatment involves placing the patient at rest, avoiding the Valsalva manoeuvre and administering analgesics; with these measures, reabsorption was observed in 85% of cases after 2–3 weeks (5). Use of an oxygen mask, nasal catheter, hyperbaric oxygen sessions and good hydration have also been reported. If conservative measures fail, specific treatment such as an epidural blood patch or even surgical closure of the dural gap is indicated (3,10).
The recent increase in the number of patients presenting with borderline personality disorder (BPD) in general adult psychiatry and primary care is creating pressure within the National Health Service (NHS)1. Currently, BPD is perceived as an 'epidemic' entity, particularly in areas with a high incidence of socioeconomic deprivation, and there is a parallel increase in the human and medical resources needed to manage the disorder efficiently. In fact, the authors have found that BPD tends to be comorbid with factitious disorders and depression ('Tripolar syndrome'), with a tendency to overuse hospital and medical facilities, including Accident and Emergency (A&E) departments, family doctors and General Practitioner (GP) surgeries2.
Consequently, patients with BPD require a constant and unlimited allocation of medical and psychiatric resources, together with targeted care plans. They may be prone to frequent self-referral to A&E, to seeking hospital admission and to augmenting all their psychotropic medications in order to deal with ongoing crises that cannot be resolved at home. Moreover, the skills healthcare personnel need to reduce chronic self-harming and suicidal ideation in this vulnerable population are complex and must be updated on an ongoing basis, partly because of these patients' tendency to raise allegations against their health carers3. Nonetheless, the provision of treatment is often hindered by various healthcare system limitations, such as the lack of beds on medical and psychiatric units, forced reductions in length of hospital stay and insufficient human resources. This scenario has mostly affected female patients with BPD who seek admission to psychiatric hospitals, often for respite from chronic suicidal ideation4. Moments of amplified suicidal ideation become evident when internal voices, perceived as auditory hallucinations commanding them to self-harm or to commit suicide, become more intense5.
As observed by the authors of the current editorial, increased suicidal ideation in persons with BPD also occurs during minor life crises: when experiencing intensified flashbacks of past abuse, during minor losses, after significant conflicts with others and after separation from influential people in their social network. Admissions to psychiatric wards very commonly occur when there is an intensification of internal voices commanding patients to take overdoses of their prescribed medication or to jump in front of a train or car, or off a pier, to commit suicide. The police are often involved to stop these dramatic plans. At the same time, healthcare professionals are discouraged by the complex management of patients with BPD, which, combined with these patients' tendency to challenge or make unwarranted allegations against their health carers, results in feelings of sadness, rejection and alarm. Kanin reported that the reason for producing a false allegation is to create a defence or to obtain compassion6. Nonetheless, it is also likely that some healthcare professionals hold preconceived ideas about people with BPD, which might reduce the depth of their empathy towards these patients and lead to burnout after prolonged treatment of BPD in hospital or in the community. Attempts to treat and to reduce suicidal ideation and self-harm in this group of patients are often thwarted, as they challenge medical decisions and endeavour to sabotage the proposed care plans. The strain on the doctor-patient relationship is driven by the underlying 'Mistrust/Abuse' schema of patients with BPD, who expect from others, and are thus sensitive to, signals of relational wounding, treachery and abuse7.
Consequently, a chronic feeling of inadequacy in patients with BPD translates into enduring dissatisfaction with any therapy and with healthcare professionals. Hence, in the authors' experience, any attempt to establish a long-term therapeutic relationship with these patients may have limited outcomes. Healthcare professionals aiming to create an enduring therapeutic alliance become frustrated because patients with BPD are prone to interpersonal biases and tend to ascribe undesirable experiences to people (hence to healthcare professionals) rather than to circumstances8. As a result, social interactions with primary carers leave people with BPD dissatisfied with any medical or psychiatric plan set up for them. Community teams, general practitioners and hospital staff in turn feel hopeless in the face of recurrent readmissions of people with BPD and the lack of a definitive treatment for this pathology. The stress of ensuring that patients with BPD comply with therapy regularly places doctors and nurses at crisis point.
Once in hospital, patients with BPD can be difficult to discharge, as they are frequently reluctant to return to the community, leading to recurrent readmissions within a short period. The period before discharge from a psychiatric hospital is complicated by mounting anxiety and distress: the authors observed a regular escalation of self-harming behaviours and increased suicidal ideation in these patients just before discharge, possibly indicating underlying anxiety about returning to the home environment. Many patients with BPD state that they would rather stay in hospital than return to a community they consider unsafe or unstructured. Furthermore, as these patients are intensely vulnerable to social rejection, they rarely feel adequate during social interactions and thus develop an enduring sense of solitude9. Any hospital discharge or GP visit is therefore interpreted as disappointing and confirms their sense of rejection. In reaction, the authors observed, patients with BPD demand endless and unconditional attention from their primary carers. Attempts to self-harm or commit suicide intensify over weekends and public holidays, when the sense of solitude increases, especially if there is also a shortage of healthcare professionals available for immediate support.
The authors of the current editorial propose possible strategies of intervention on both the psychopharmacological and the managerial side. The challenge is that patients with BPD often use overdoses of oral medication in suicide attempts10. Hence, the authors recommend the use of long-acting depot antipsychotic injections (e.g., Zuclopenthixol Decanoate) to stabilise mood and reduce impulsivity, the risk of overdose, pseudo-psychotic symptoms and command hallucinations leading to deliberate self-harm. The use of oral lithium to treat mood swings poses an ethical dilemma for doctors, as it can be lethal in overdose. Healthcare management is another avenue of intervention. One difficulty is the tendency of patients with BPD to split their teams, creating niches of protectors and opposers among staff, with possible conflicts within the treating team. Here, inter-professional coordination, integrated care and constant information sharing are required11. Furthermore, several healthcare services treating patients with BPD are trying to find an integrated approach to their hospital and community treatment. The authors speculate that the increased number of admissions of patients with BPD is reducing the overall capacity of physical and mental health wards to treat patients with other pathologies. Moreover, the dramatic presentation of patients with BPD, who tend to overuse healthcare services, poses ethical dilemmas in their management. This scenario has created discrepancies in healthcare policies on the treatment and hospital (re)admission of patients with BPD, which have reached an epidemic magnitude in many healthcare trusts. Hence, a new culture is required for the management and treatment of patients with BPD in the community.
Culture is defined as the character of an institution that affects employee gratification and organisational accomplishments12. What is needed is a frank and constructive dialogue between healthcare managers, leaders and medical staff in the hospital and in the community. Furthermore, clear regional guidelines should exist to improve the efficacy of the care offered to patients with BPD at home, and to reduce the constant risks these patients pose to themselves, their sense of solitude and their tendency to seek hospital admission to solve chronic existential difficulties. A model for integrated care comes from Max Weber, who differentiated between 'formal rationality', the endorsement by healthcare managers of the most efficient ways of achieving organisational goals (e.g., freeing hospital beds through quick discharge of 'bed blockers'), and 'substantive rationality', the expectation by healthcare professionals that values and morals should instead be based on tradition, compassion and dedication13, the latter being pertinent to the care of patients with BPD. The collaboration of all involved parties is also important to reduce the risk of 'silo management', in which confined, regional policies respond only within the limits of their own guidelines and procedures rather than embracing a wider perspective on specific problems14. In such cases, integrated care in the community can halt the self-harming and suicidal attempts of patients with BPD. The proposed organigram comprises inter-professional actions, targeted psychopharmacological policies and psychiatric crisis teams in A&E, which can reduce the need to hospitalise patients with BPD at every ensuing crisis.
Physicians pursue the goal of providing the best patient care during the hospital stay, and of achieving it in a short time, so that the patient recovers from illness and returns to normal life.
The ability to prevent the complications to which patients are exposed has always generated ambiguity in medical practice, since it is assumed that patients' recovery, once treatment is established, should always follow the same course1. In reality, it is awareness and proper care of comorbidities and of the patient's baseline condition that determine the success of treatment, without requiring interventions beyond those proposed at the outset2,3.
This important factor has created the need for practitioners to monitor the clinical evolution of their patients. Laboratory tests are an important basis of medical diagnosis and are frequently used to monitor the clinical progress of the hospitalised patient. A patient's clinical state sometimes changes suddenly or progressively, requiring surveillance of basic variables such as vital signs. Changes in vital signs activate a warning signal for the immediate reassessment of the patient and can reorient medical decisions at any moment during hospitalisation, with the goal of avoiding further deterioration or of adequately treating any new disease state the patient may develop3,4.
The medical community has long recognised the need for a universal code that could serve as an early warning of patient deterioration. In response, researchers and clinicians in different countries around the world have developed scales, scores, algorithms and other tools to identify early those patients at risk of becoming critically ill. These tools are based on easy data collection and simple clinical interpretation, allowing clinical personnel to make an objective and early assessment of the patient's overall clinical state4.
These scales or scores are not ideal; there is no perfect scale, and all have statistical weaknesses in either sensitivity or specificity. Clinical judgement and physician experience, added to the score from any of these scales, should guide the path to follow in the particular scenario when treating the patient's illness5.
Selecting the scale to adopt is one of the controversial topics in which practitioners and institutions can become involved. Occasionally, values from other hospital services, such as the clinical laboratory and clinical imaging, play an important role in diagnosis and are counted in the risk scales, making a good standard of care easier to achieve. Scientific studies assessing the statistical performance of these scales yield controversial results that sometimes contradict and sometimes endorse one another5. The choice of scale should therefore be based first on the target population that the physicians will care for: the appropriate scale or score should reflect the most representative age group of the patients attended, and its data acquisition should be a simple and quick task6.
On this basis, the Royal College of Physicians of the United Kingdom, headed by Bryan Williams and collaborators, together with many other researchers worldwide, analysed a significant number of scales on the premise that a scale should use a 'track and trigger' warning system, of which there are four types: single-parameter systems, multi-parameter systems, total weighted scoring systems and combined systems6.
The researchers concluded that performance was better for scales using the third type of system, since the parameters are not only categorised but the scale developers also propose the management to be carried out, in an easy, orderly and logical scheme, either as a stand-alone framework or in addition to more robust strategies involving hospital-wide management schemes, the so-called Rapid Response Systems (RRS)7.
Following the work of Williams et al., the MEWS was renamed the NEWS scale after its acceptance by the Royal College of Physicians of the United Kingdom, with its variables defined as respiratory rate, oxygen saturation, systolic blood pressure, heart rate, level of consciousness or new confusion, and temperature. This score has been recognised and quickly adopted worldwide. The NEWS is immediately applicable as a highly sensitive parameter for detecting clinical deterioration, despite its known low specificity, inviting the attending physician to approach and reassess the patient. The score prompts changes in medical decisions according to the new conditions found during the patient's assessment7.
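To illustrate how a total weighted scoring system of this kind works in practice, the sketch below assigns points to each vital sign against banded thresholds and sums them. The bands shown follow the published NEWS2 chart (oxygen saturation on the standard Scale 1) to the best of our knowledge, but the function and helper names are our own illustrative choices, not part of any official implementation.

```python
def news_score(resp_rate, spo2, systolic_bp, heart_rate, alert, temp_c):
    """Illustrative NEWS-style aggregate score.

    alert: True if the patient is Alert on the ACVPU scale; False for new
    confusion, Voice, Pain or Unresponsive (all of which score 3).
    """
    def band(value, bands):
        # bands: (upper_limit_inclusive, points) pairs in ascending order;
        # the final band uses infinity to catch all remaining values.
        for upper, points in bands:
            if value <= upper:
                return points

    inf = float("inf")
    score = 0
    score += band(resp_rate, [(8, 3), (11, 1), (20, 0), (24, 2), (inf, 3)])
    score += band(spo2, [(91, 3), (93, 2), (95, 1), (inf, 0)])
    score += band(systolic_bp, [(90, 3), (100, 2), (110, 1), (219, 0), (inf, 3)])
    score += band(heart_rate, [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (inf, 3)])
    score += 0 if alert else 3
    score += band(temp_c, [(35.0, 3), (36.0, 1), (38.0, 0), (39.0, 1), (inf, 2)])
    return score
```

For example, a patient with entirely normal observations scores 0, while deranged values in every parameter drive the total towards the maximum, triggering escalation; this additive structure is what distinguishes aggregate weighted systems from single-parameter triggers.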
Scales of this kind must be endorsed internationally and be easily replicable by all practitioners who wish to adopt them, allowing other physicians to reproduce, when implementing them, clinical outcomes similar to those of previously published studies. In daily practice we encounter different scenarios for using the scales, where the main obstacles to their application are the extra costs of laboratory tests or clinical imaging and the time invested by practitioners and medical personnel 7.
For this reason, clinical assessment scales should be simple and flexible enough to be implemented by any member of the healthcare team, avoiding barriers during data acquisition. From this perspective, scales based on easily collected parameters are the most appropriate, but they are also the ones most vulnerable to bias when parameters are under- or overvalued, and their operability can be affected by the knowledge and skill of the personnel.
The interesting point of this exercise is that the people in most continuous contact with the patient, such as the nursing staff and the treating physicians, are well placed to use these scales in their daily practice, which makes them a valuable resource for performing clinical assessments and achieving the proposed goal.
In this new era, in which returning the patient to daily life in the shortest possible time is the ideal scenario, medical and nursing staff and service providers alike seek to alleviate the patient's health breakdown. From the hospital's point of view, proper care, both in its quality and in the prevention of complications, plays an important role in the applicability of these early detection scales. This is an invitation to succeed from the outset and to keep patients hospitalised for the minimum time required.
A dermatoscope is a hand-held device for examining the appearance of the skin. Dermoscopy has become an increasingly used and valued tool in the assessment of various skin lesions and, more recently, inflammatory rashes. It is quick, cheap and, when used correctly, an essential tool in helping clinicians detect early-stage skin cancer. Various national and international guidelines recommend routine use of dermoscopy in the assessment of pigmented lesions1,2 because it enhances melanoma detection rates3,4 and can help confirm the diagnosis of benign lesions such as haemangiomas and seborrhoeic keratoses. As with any skill, competency takes time to develop and a combination of various learning and assessment methods is best. The dermatology specialist training curriculum in the United Kingdom (UK) states that trainees should be competent in using a dermatoscope and interpreting findings, while recognizing the limitations of this tool5. Assessment of these clinical skills and behavioural competencies using direct observation of procedural skills (DOPS), case-based discussion (CBD), mini clinical examination (mini-CEX), and/or multisource feedback (MSF) is suggested. There is no specific guidance on what resources a trainee should use to achieve these competencies, nor on the minimum expected dermoscopy skillset at completion of specialist training.
The aim of this survey was to explore dermoscopy use amongst dermatology specialist trainee registrars in the UK including frequency of use, how it is being taught and whether trainees feel their dermoscopy training has been adequate.
An online survey was designed and distributed to dermatology trainees in the United Kingdom using an email link and hard copies were distributed at a national dermoscopy course. Respondents who did not identify themselves as dermatology trainees were removed from the analysis. Responses were collected anonymously, then collated and analysed using SurveyMonkey® computer software.
Twenty-five percent (59/238) of dermatology trainees completed the survey. Ninety-two percent (54/59) use dermoscopy more than once daily. Eighty-five percent (50/59) always use dermoscopy when assessing pigmented lesions, while 34% (20/59) always and 59% (35/59) sometimes use it to assess non-pigmented lesions. When asked about specific tools used to learn dermoscopy, 41% (24/59) had been on a previous course, 42% (25/59) reported attendance at a lecture or seminar, 46% (27/59) had used a dermoscopy textbook, 14% (8/59) had attended a conference and 19% (11/59) had used online resources. Seventeen percent (10/59) had never used any of the above learning methods (Figure 1a). Amongst those who had attended a formal dermoscopy course (n=24), 92% (22/24) of courses were ≤1 day in duration. When questioned about informal teaching in clinical practice, 12% (7/59) frequently, 56% (33/59) sometimes, 31% (18/59) rarely and 2% (1/59) never receive teaching from their supervising dermatology consultant (Figure 1b). Fifty-four percent (32/59) feel they have received adequate training in dermoscopy, while the remaining 46% (27/59) feel their dermoscopy training is inadequate for their training stage (Figure 1c). Seventy-three percent (43/59) have access to dermoscopic photography within their local dermatology department.
Fig 1a - Have you undertaken any formal study in dermoscopy? 49% of trainees have attended a lecture, 2% a seminar, 14% a conference, 41% a course, 19% have used an online resource, 46% have used a book, 17% have not used any resource.
Fig 1b- Do you receive dermoscopy training from your supervisor in clinic? 56% of trainees sometimes, 31% rarely, 12% frequently, and 2% have never received training from their seniors in clinic.
Fig 1c- Do you believe that you have received adequate training in the use of a dermatoscope for your training grade?
The results of this survey highlight the need for dermoscopy training to be reviewed within the UK national training curriculum for dermatology. Despite daily use by the vast majority, dermoscopy training is largely self-directed and highly variable amongst individual trainees. Of concern, a significant proportion of those who responded feel their dermoscopy skills are inadequate for their training stage. Of note, the 25% response rate means that the results of this survey may not be representative of dermatology trainees in the United Kingdom as a whole.
To the best of our knowledge, this is the first time that dermoscopy use has been explored through a national survey of dermatology trainees in the UK. A survey on dermoscopy use was carried out by The British Association of Dermatologists (BAD) in 20126, but the majority of responses were from dermatology consultants. It confirmed that 98.5% of respondents regularly used dermoscopy, while 81% had received some form of training. The most frequent source of training was UK-based courses, which 62% of respondents reported attending. Of note, 39% of all respondents lacked confidence when making a diagnosis based on their interpretation of dermoscopy findings. It is not clear how many of those lacking in confidence were consultants, trainees or specialty doctors. Although the situation may have improved since 2012, these results do suggest that dermoscopy training needs have not been met for a proportion of doctors across the dermatology community.
Dermoscopy training is an important issue to address for several reasons. The volume of cutaneous lesions being referred to dermatology is increasing, and skin cancer referrals and treatment now account for 50% of UK dermatologists’ workload7. For every melanoma diagnosed, a dermatologist may expect to see 20–40 benign lesions referred from general practitioners (GPs)7. These facts highlight the importance of maximising diagnostic skills, which frequently include using dermoscopy as part of clinical assessment. Lack of adequate training is a common self-reported reason for dermatologists not using dermoscopy8. Both trainees and their supervising bodies have a responsibility to maximize training opportunities and embed the use of dermoscopy in routine practice.
In conclusion, we feel UK dermatology trainees and indeed any clinician who utilizes this tool, would benefit from a more standardized and integrated approach to dermoscopy teaching to ensure safe practice of this skill and deliver high quality evidence-based patient care.
With the increasing use of ultrasound as a standard examination in the first trimester, more incidental adnexal masses are detected. The reported incidence of adnexal masses in pregnancy varies, depending on the criteria used to define the mass. A literature review by Goh et al. found that an adnexal mass is diagnosed in 1% of all pregnancies 1, while a more recent article has suggested a figure of 5% 2. Simple and functional cysts are very common, and they usually resolve after the first trimester 3. Mature teratomas are by far the most common persistent adnexal masses found in pregnancy 8. It has been estimated that up to 5% of adnexal masses in pregnancy are malignant 4.
Ovarian cysts are typically asymptomatic, but they can cause pain due to pressure on adjacent organs, rupture, bleeding or torsion. Torsion is a serious condition that usually requires emergency surgical intervention. During pregnancy, surgical management of ovarian cyst complications is more difficult and more challenging, mainly because other pregnancy-related differential diagnoses, such as ectopic pregnancy and miscarriage, can cause similar symptoms. If surgical intervention is needed, the second trimester is considered the safest window for surgery: the risk of drug-induced teratogenicity is smaller than in the first trimester, most functional cysts have disappeared by then, and it is technically less difficult than operating during the third trimester 13.
Antenatally, ultrasound is considered the best first-line imaging modality to evaluate adnexal masses 5. Characterizing an ovarian mass as benign, malignant or borderline can be challenging in pregnancy, mainly because high levels of gestational hormones can cause decidualisation of the cystic or solid parts of the ovaries, so that benign masses can mimic malignant ones 12. One of the largest datasets in the literature on ovarian mass characterization was published by the International Ovarian Tumor Analysis (IOTA) group. All IOTA studies excluded pregnant women when developing and validating their rules and models for characterizing ovarian masses 14-17, which limits our knowledge and ability to use these models in pregnant women. Tumour markers may be raised in pregnancy and should therefore not be measured routinely 7. An alternative diagnostic tool is Magnetic Resonance Imaging (MRI), which is considered safe in pregnancy and can be helpful if ultrasound imaging is inconclusive in evaluating whether a mass is benign or malignant 6; 10. The American College of Obstetricians and Gynecologists recommends that pregnant patients be reviewed on a case-by-case basis and states that there are no known biological effects of MRI on fetuses. However, gadolinium, which helps in characterizing ovarian masses, should be avoided when examining a pregnant patient 11.
The aim of this retrospective study was to look into characteristics, size and subsequent management of cases of adnexal masses in early pregnancy.
Methods
This was a retrospective study of data collected between 12/01/2014 and 14/11/2016 in the Early Pregnancy and Gynaecology Unit (EPAGU) of a tertiary referral centre (Guy’s and St Thomas’ NHS Trust, GSTT) in central London. The ultrasound reporting system (Astraia Software GmbH, Version 1.24.10, Munich, Germany, 2016) was searched for data and consecutive cases were included. The study was approved as a service evaluation audit by the Clinical Governance team at Guy’s and St Thomas’ NHS Trust. The study included women who were diagnosed with an adnexal mass during a transvaginal ultrasound scan (TVS) at or before 15 weeks of gestation. Pregnancy was confirmed by a positive pregnancy test and an intrauterine gestation on TVS. Women who had their first gestational TVS after 15 weeks of gestation, pregnancies of unknown location, ectopic or trophoblastic pregnancies and patients who had undergone ovarian stimulation treatment were all excluded.
Repeat ultrasound scan reports were retrieved from the Astraia system. Further procedures, tests and imaging results were retrieved using the Electronic Patient Reporting system at GSTT (EPR application, iSOFT Group plc., USA, 2004), PACS (GE Medical Systems, Wisconsin, USA, 2006) and Badgernet (Clevermed, client version 2.9.1.0, Edinburgh, UK). We used the subjective impression of the examiner as the index test. If surgery was performed, the final outcome used to identify benignity or malignancy was the histological diagnosis of any removed tissue. Cytology was used as the reference test in the two cases in which ovarian cysts were only aspirated. Borderline tumours were classified as malignant for statistical analysis. Tumours were classified using the criteria recommended by the World Health Organisation (WHO) 9; 10. All ultrasound scan images were available and were reviewed by author TEG to confirm the ultrasound findings. For statistical analysis, the SPSS software package was used (version 24 for Windows, Chicago, IL, USA). A two-tailed Student’s t-test was used to compare mean ovarian mass diameters and a p value of less than 0.05 was considered statistically significant.
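The group comparison described here can be reproduced with any statistics package. As a minimal sketch of the idea, the snippet below computes a two-sample (Welch) t statistic from first principles; the diameters shown are hypothetical illustrative values, not the study's measurements, and the paper itself used SPSS.

```python
from math import sqrt
from statistics import mean, variance


def welch_t(a, b):
    """Two-sample Welch t statistic and approximate degrees of freedom.

    Compares the means of two independent samples without assuming
    equal variances (Welch-Satterthwaite approximation for df).
    """
    na, nb = len(a), len(b)
    va, vb = variance(a) / na, variance(b) / nb  # sample variances / n
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df


# Hypothetical cyst diameters (mm): expectantly vs surgically managed
expectant = [35, 40, 42, 38, 45, 41]
surgical = [70, 65, 80, 75]
t, df = welch_t(expectant, surgical)
```

A large negative t here reflects that the expectantly managed cysts are markedly smaller on average, mirroring the direction of the difference reported in the Results.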
Results
In this period, 7150 patients underwent transvaginal scans for early pregnancy. In total, 48 cases of women with adnexal masses in pregnancy and complete data were identified. Seven women were excluded: one woman was postpartum at the time a large endometrioma was found, five pregnancies were due to assisted conception and one woman was found to have a corpus luteum cyst (Figure 1).
Figure 1: Study flow chart.
In total, 41 women with 46 ovarian cysts were included in the study. Two women had bilateral ovarian cysts, one had two ipsilateral cysts and one woman had three ipsilateral cysts. The mean age at the time of detection of the ovarian mass was 30 years (95%CI: 28-32) (Fig.2).
Figure 2: Age distribution in the study group.
The mean gestation at the time of the first ultrasound was 7.4 weeks (95%CI: 6.6-8.3). The mean diameter of the ovarian cysts was 47.7mm (95%CI: 39.9-55.4). In 36 women ultrasound alone was performed to reach a diagnosis, one woman had an additional MRI scan, two women had tumour markers in addition to the TVS and in two women both an MRI scan and tumour markers were performed after the TVS. The ovarian cyst(s) was on the right ovary in 16/41 women and on the left in 22/41, was bilateral in 2/41, and in one case the side of the cyst was not reported. The most common ultrasound subjective impression was mature teratoma (22/46 cysts), followed by simple cysts (12/46), haemorrhagic cysts (6/46), endometriomas (5/46) and one possible mucinous borderline tumour; the latter was later confirmed on histology as a FIGO stage IA intestinal-type mucinous borderline tumour (Fig.3).
Figure 3: Distribution of origin of cysts by histology.
In total 8/41 women (19.5%) underwent surgical intervention: six underwent major surgery under general anaesthesia (GA) and two had a cyst aspiration under local anaesthesia. Seven of these eight masses were classified as benign on ultrasound and were subsequently confirmed to be benign by histology or cytology. In only one case was a complex adnexal mass found on ultrasound examination, at 9 weeks of gestation, with the MRI scan reporting possible malignancy. Tumour markers in this 23-year-old woman were normal and a laparotomy was performed at 17 weeks of gestation to remove the mass. Histology showed the mass to be a mucinous borderline tumour, FIGO stage IA. In another patient, an oophorectomy had to be performed at the time of a Caesarean section at term for fetal distress, as the ovary was found to be necrotic. In this patient an ultrasound at 10 weeks of pregnancy had demonstrated a haemorrhagic cyst of 6cm diameter, which had presumably undergone torsion during the pregnancy without causing any symptoms that would have prompted the patient to refer herself. Histology in this case showed an infarcted cyst with fibrosis and calcification. In four of the major surgery cases under GA, uncomplicated laparoscopies were performed to remove the adnexal mass: in one case a laparoscopic salpingo-oophorectomy was performed as an emergency for a suspected ovarian torsion at 16 weeks, and in three cases a laparoscopic cystectomy was performed electively for ongoing pain. In the first of these, a cyst was diagnosed in early pregnancy; the pregnancy subsequently miscarried and the cyst was removed 4 months after the diagnosis. In the second, a cyst was found in early pregnancy; the woman had a termination at 11 weeks of pregnancy and a cystectomy 5 months later. In the third, a laparoscopic cystectomy was performed 8 weeks after the diagnosis; however, the woman suffered a miscarriage at 12 weeks of gestation. Histology confirmed dermoid cysts in all four of these cases.
The two cyst aspirations performed under local anaesthesia and ultrasound guidance were both for symptoms of torsion, one at nine weeks and one at ten weeks of pregnancy. In both patients the procedure was successful. The remaining 33/41 women (80.5%) had no indication for surgical intervention. There was a significant difference between the mean diameter of ovarian cysts in the expectantly managed group (41.2mm; 95%CI: 34.7-47.7) and in the surgically managed group (74.5mm; 95%CI: 49.2-99.8) (Fig.4).
Figure 4: Mean diameter of the ovarian cysts.
In 33/41 patients no surgical intervention was needed during pregnancy. In 13/33 patients no follow-up of the ovarian cyst was arranged and no further mention of the cyst was found on routine growth and anomaly scans during pregnancy. In 20/33 patients at least one routine follow-up scan was performed 1-2 weeks after the diagnosis, and in 12 of these 20 patients a second follow-up took place at least 1 month after the diagnosis. In one of the 20 patients with recorded follow-up, an MRI scan was arranged 2 months after the initial ultrasound finding of a dermoid cyst.
Discussion
The results of our study confirm findings from previous studies: the vast majority of ovarian masses in pregnancy are benign and invasive cancer in pregnancy is rare; there is a significant relation between the size of an adnexal mass and the probability of surgery; ultrasound examination of adnexal masses is accurate and safe in pregnancy; and managing ovarian cysts in pregnancy can be challenging. Goh et al. have reported similar outcomes, namely that ovarian torsion remains a rare event in pregnancy and that most adnexal masses in pregnancy can be managed conservatively if asymptomatic and if there are no ultrasound findings suspicious for malignancy 8. If surgical intervention is needed for persistent masses with complications such as torsion, Goh et al. found that laparoscopy can be performed safely during the 1st and 2nd trimesters 1. In our cohort, two of the six women who underwent major surgery did so successfully during the 2nd trimester of pregnancy: one had an emergency laparoscopy for a torsion at 16 weeks and the other a laparotomy at 17 weeks for a mucinous borderline tumour.
However, to our knowledge no evidence-based guidelines currently exist on how to manage and follow up ovarian masses during pregnancy. The characteristics and presentation of ovarian mass complications in pregnancy can be mimicked by similar pregnancy-related symptoms, such as those of ectopic pregnancy. In one of our cases a woman with a known ovarian cyst was found to have a necrotic ovary at the time of Caesarean section, despite no signs of torsion at any time during pregnancy, which highlights how challenging the assessment of ovarian masses during pregnancy can be. Additional diagnostic examinations, such as tumour markers in suspicious ovarian masses, have been found difficult to interpret in pregnancy. However, the literature suggests that if a mass is strongly suspicious for malignancy, CA-125 is likely to be severely elevated (1,000-10,000) 7.
The strength of this study is that the data were collected using the expertise and facilities of a tertiary referral centre in London (GSTT). Its limitations include the retrospective data collection, the small number of cases and loss to follow-up. Although our study shows the benign nature of most ovarian masses in pregnancy and the ability of ultrasound to safely characterize them, a prospective study is required to validate our results. As ovarian cancer tumour markers are difficult to interpret in pregnancy 7, other models, such as the IOTA Simple Rules14,16 or the ADNEX model17, may play a role in the further characterisation of ovarian masses. A prospective trial is required to validate these models in pregnancy.
We would like to draw the attention of your readers to the outcome of a survey undertaken in Kettering General Hospital. We wanted to determine what methods clinicians use to confirm central line cannula/needle position before dilatation and what their removal plan would be for an accidental insertion of a central line (>7 Fr) into the carotid artery.
We performed a paper survey of 52 doctors in anaesthesia/intensive care at Kettering General Hospital and achieved a 100% return rate. We asked the doctors to answer questions based on their practice over the previous year. The majority of those surveyed were consultants (47%). The results revealed that doctors mostly utilised ultrasound confirmation of the guidewire before dilatation (89%), but only 19% utilised pressure transduction. A large proportion of the doctors surveyed either did not know how to manage carotid artery cannulation with a >7 Fr central line (35%) or would ‘pull and press’ (40%). Only 5% of the doctors who would ‘pull and press’ would arrange computed tomography (CT) angiogram follow-up.
We highlighted a lack of clarity, which may be widespread. It is advisable to seek a vascular surgeon or interventional radiology input to facilitate line removal due to the excessive complications related to the ‘pull and press’ technique (47% complication rate).1 Complications include pseudoaneurysm formation, airway compromising haematomas, arteriovenous fistula, stroke and death.1 If such lines are removed by the ‘pull and press’ technique it is recommended to arrange CT angiogram even if the patient is asymptomatic due to the possibility of pseudoaneurysm or arteriovenous fistula formation.1
Our respondents correctly utilised ultrasound confirmation of guidewire position before dilatation. However, ultrasound alone has not eliminated accidental arterial dilatation, which still occurs despite ultrasound usage, especially in cases involving inexperienced clinicians and when the guidewire passes through the vein and into the artery.2 The combined use of ultrasound and transduction may further reduce the incidence of carotid cannulation.3 This may prove invaluable in centres without vascular or interventional radiology support.
Our centre has reduced its usage of central venous pressure (CVP) monitoring. This may reflect our lack of transduction prior to dilatation for central line insertion. Hence we devised a novel use of the double male Luer lock connector. This connector allows the female connector end of an infusion line to connect to the female connector of the blood aspirating port of an arterial transducer. This will then allow transduction of a central line cannula, before dilatation, via the arterial transducer by turning the 3-way tap (Figure 1). This removes the need to set up a separate transducer and also prevents the need to disconnect connections in the arterial line to allow CVP confirmation, as this was considered an infection risk.
Figure 1: Double male Luer lock connector attached to the blood aspirating port of an arterial transducer, with a fluid line connecting this to the central venous cannula
Adult Still’s disease (AOSD) is an inflammatory disorder of unknown etiology characterized by quotidian (daily) fevers, arthritis, an evanescent rash and multi-organ involvement [1]. The disease was first described in children by George Still in 1896; subsequently, in 1971, Bywaters described 14 adult patients with a similar presentation [2]. The clinical course of AOSD can be divided into three main patterns: monophasic (or monocyclic), intermittent, and chronic. Patients with monophasic AOSD have a disease course that typically lasts only weeks to months, completely resolving within less than a year in most patients [3]. Systemic features, including fever, rash, serositis, and hepatosplenomegaly, predominate in this group. The patient we diagnosed with AOSD had a monophasic course, went into remission after proper treatment and remains symptom free even after stopping treatment.
CASE REPORT
A 46-year-old Indian male, non-smoker, married, non-diabetic and normotensive, was admitted to the department of internal medicine in our hospital with a history of high-grade fever, polyarthritis and skin rash for the previous 4 weeks. The fever was high grade, with a maximum temperature of 39.2°C. The patient also complained of joint pains involving the knee, ankle, wrist and proximal interphalangeal joints. There was no history of oral ulcers, morning stiffness, ocular symptoms, or contact with infected persons. In the hospital, during the febrile period, he developed a macular rash mainly on the chest and back [Figure 1]. On examination, the patient was sick-looking and febrile (39.2°C). The chest was normal on auscultation and cardiovascular examination was unremarkable. Examination of the abdomen revealed mild splenomegaly. Neurological examination was unremarkable. Investigations revealed hemoglobin 12.7 g/dl and an erythrocyte sedimentation rate (ESR) of 120 mm in the 1st hour. The total leukocyte count was 12.7 × 10⁹/L. Liver function tests showed elevated liver enzymes, with aspartate transaminase 125 U/L, alanine aminotransferase 60 U/L and low albumin at 2.3 g/dl. He was worked up along the lines of pyrexia of unknown origin: blood, urine and sputum cultures showed no growth; the procalcitonin level was less than 0.5 ng/ml; sputum for AFB was negative in three samples; the QuantiFERON gold test for tuberculosis was negative; IgM CMV, EBV, HIV, and hepatitis B and C serology were negative; and malarial parasite, Widal and Brucella serology were negative. CT of the chest and abdomen was normal except for mild splenomegaly. The echocardiogram was normal. ANA and rheumatoid factor were negative. Lactate dehydrogenase (LDH) was 978 U/L. His CRP showed a progressive increase from 82 mg/L to 284 mg/L, which decreased after starting steroids. His ferritin level was 40,000 ng/ml (normal range 21.8-274.6 ng/ml), which was reconfirmed in a second sample, and he had a normal transferrin saturation.
On the basis of his history, clinical examination and laboratory investigations, a diagnosis of AOSD was made. We started him on prednisolone 60 mg daily along with diclofenac potassium 50 mg twice daily, to which he responded and became afebrile. He was discharged on a steroid dose tapered by 5 mg weekly. He is doing well and is completely symptom free.
Figure 1: Skin Rash on the back
DISCUSSION
First described in children by George Still in 1896, “Still’s disease” has become the eponymous term for systemic juvenile idiopathic arthritis [4]. In 1971, the term “adult Still’s disease” was used to describe a series of adult patients who had features similar to the children with systemic juvenile idiopathic arthritis and did not fulfill criteria for classic rheumatoid arthritis.
The etiology of AOSD is unknown; both genetic factors and a variety of infectious triggers have been suggested as important, but there has been no proof of an infectious etiology, and the evidence supporting a role for genetic factors has been mixed. It is uncertain whether all patients with AOSD share the same etiopathogenic factors. Proposed pathogens have included numerous viruses; suspected bacterial pathogens include Yersinia enterocolitica and Mycoplasma pneumoniae [5]. As an example of studies of the immunogenetics of AOSD, in a series of 62 French patients, human leukocyte antigens (HLA)-B17, -B18, -B35 and -DR2 were associated with AOSD. However, other studies have not confirmed these findings [6].
Adult Still’s disease is very uncommon, with an estimated prevalence of 1.5 cases per 100,000-1,000,000 people and an equal distribution between the sexes [6]. There is a bimodal age distribution, with one peak between the ages of 15 and 25 and a second between the ages of 36 and 46. The diagnosis of AOSD is possible only by recognizing the striking constellation of clinical and laboratory abnormalities, and it should be remembered that AOSD is a diagnosis of exclusion. AOSD has been associated with markedly elevated serum ferritin concentrations in as many as 70 percent of patients. Serum ferritin values above 3000 ng/mL in a patient with compatible symptoms should raise suspicion of AOSD in the absence of a bacterial or viral infection. Abnormally high serum ferritin values have been reported in several case reports, and it has been suggested that high ferritin levels may be a diagnostic marker of Still's disease [7]. Our patient showed almost all the features laid down in the Yamaguchi criteria [Table 1] for the diagnosis of AOSD [8], along with markedly high ferritin levels.
Table 1: Diagnostic criteria for AOSD (Yamaguchi)8

Major criteria:
- Fever > 39ºC for > 1 week
- Arthralgia/arthritis for > 2 weeks
- Typical rash
- WBC > 10,000 with > 80% PMNs

Minor criteria:
- Sore throat
- Lymphadenopathy and/or splenomegaly
- Abnormal LFT
- Negative RF and ANA

Exclusions: infections, malignancy, rheumatological diseases. Diagnosis requires five criteria, including at least two major criteria. AOSD: Adult onset Still’s disease. WBC: White blood cell, ANA: Antinuclear antibody, RF: Rheumatoid factor, PMN: Polymorphonuclear
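The decision rule attached to the Yamaguchi criteria is simple enough to state programmatically. The sketch below encodes only the counting rule (at least five criteria in total, at least two of them major, exclusions ruled out); assessing whether each individual criterion is met remains a matter of clinical judgement.

```python
def meets_yamaguchi(n_major, n_minor, exclusions_present=False):
    """Apply the Yamaguchi decision rule for AOSD.

    n_major / n_minor: number of major and minor criteria fulfilled.
    exclusions_present: True if an infection, malignancy or other
    rheumatological disease explains the picture, which rules out AOSD.
    Returns True when the classification threshold is met.
    """
    if exclusions_present:
        return False
    # At least five criteria in total, of which at least two are major
    return n_major >= 2 and (n_major + n_minor) >= 5
```

For example, a patient fulfilling three major and two minor criteria with exclusions ruled out meets the threshold, whereas one major criterion cannot suffice no matter how many minor criteria are present.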
Non-steroidal anti-inflammatory drugs (NSAIDs), such as aspirin, ibuprofen or naproxen, help to reduce inflammation [9]. Patients with high fever spikes or severe joint involvement may require glucocorticoids, such as prednisone (0.5-1 mg/kg/day). Methotrexate has been used successfully in a small series of people to treat adult Still's disease [10]. Some patients are refractory to these conventional therapies. Biologic agents, including the tumor necrosis factor-alpha (TNF) blockers infliximab, adalimumab and etanercept, anti-interleukin-1 and anti-interleukin-6 agents, and most recently antibodies against CD20-expressing B cells, are also effective in some cases. Other drugs, including cyclosporine and anakinra, have also been successful in small groups of people [9]. Interleukin-6 inhibitors such as tocilizumab have shown good results in patients with AOSD resistant to other immunosuppressive agents such as methotrexate, TNF inhibitors and anakinra [11]. Even with treatment, it is difficult to predict the course of adult Still's disease. Some people may experience only a single episode, while others develop occasional flare-ups or a chronic condition; about one third of patients fall into each of these groups.
CONCLUSION
A diagnosis of AOSD should be kept in mind in cases of pyrexia of unknown origin, particularly in a patient who presents with high-grade intermittent fever, polyarthritis and a skin rash of more than two weeks' duration. However, the patient should be extensively evaluated to rule out the other differentials of AOSD, such as acute or chronic infections, autoimmune disorders, vasculitis and malignant disorders. Serum ferritin values can be powerful adjuncts in making the diagnosis of AOSD [12], where they are usually higher than in other inflammatory diseases. Indeed, extreme elevation of serum ferritin up to 75,500 ng/mL has been reported in AOSD [12]. Several investigators agree that ferritin levels above 1,000 ng/mL are suggestive of AOSD, while levels greater than 4,000 ng/mL are very specific for this diagnosis when accompanied by a compatible clinical picture.
Several studies have found that refugees develop post-traumatic stress disorder (PTSD) after enduring war trauma1 or circumstances related to migration, such as moving to a new country, unemployment and poor housing2. PTSD is described as distress and disability due to a traumatic event that occurred in the past3. In 2013, the American Psychiatric Association revised the PTSD diagnostic criteria in the fifth edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM-5), and PTSD was included in a new category, Trauma- and Stressor-Related Disorders4. All of the conditions included in this category require exposure to a traumatic or stressful event as a diagnostic criterion4. The person with PTSD often avoids trauma-related thoughts and emotions, and discussion of the traumatic event4. PTSD patients are invariably anxious about re-experiencing the same trauma, which is usually re-lived through disturbing, repeated recollections, flashbacks and nightmares4. Symptoms of PTSD generally begin within the first 3 months after the provocative traumatic event, but may not begin until several years later4. A large proportion of children aged 16 or younger (10-40%) who have experienced a traumatic event develop PTSD later on5. Moreover, many families with children growing up in war zones who then move to safer places experience trauma, stress and reduced functioning6. These families differ in their survival mechanisms, coping strategies and levels of adaptation7.
The latest war in Syria has led to the migration of large parts of the Syrian population to neighboring countries such as Lebanon, Jordan and Turkey8. The United Nations High Commissioner for Refugees (UNHCR) estimates that approximately 1.5 million refugees are located in Lebanon9. These refugees have been exposed to several types of traumatic events that may increase the incidence of mental health problems10.
We hypothesize that the proportion of positive PTSD screens would be high among Syrian refugees with the presence of some specific related risk factors. Thus, the objective of our study was to examine PTSD symptoms and to determine the associated risk factors in a sample of Syrian refugees living in North Lebanon.
METHODS
1. Study design and population
This was a cross-sectional study that aimed to assess the proportion of Syrian refugees in North Lebanon who were at high risk of developing PTSD, and to examine the association of high PTSD risk with other factors. The survey was carried out during February and March 2016. A convenience sample of Syrian refugees of both genders, aged between 14 and 45 years and living in North Lebanon, was selected out of a population of 262,15111.
The estimated minimum sample size, calculated using the Raosoft sample size calculator with a margin of error of 5% and a confidence level of 95%, was 384 refugees. A total of 450 Syrian refugees residing in individual tented settlements (ITSs), collective shelters (CSs) or Primary Health Care Centers (PHCs) located in North Lebanon were selected according to the set inclusion and exclusion criteria.
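The figure of 384 follows from the standard minimum-sample-size formula for estimating a proportion, with a finite-population correction; calculators such as Raosoft implement this formula. A minimal sketch (assuming maximum variability, p = 0.5, and z = 1.96 for the 95% confidence level):

```python
import math

def min_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion, with a
    finite-population correction. z = 1.96 corresponds to a 95%
    confidence level; p = 0.5 assumes maximum variability."""
    x = z ** 2 * p * (1 - p)
    n = population * x / ((population - 1) * margin ** 2 + x)
    return math.ceil(n)

print(min_sample_size(262_151))  # → 384, the study's minimum sample size
```

Without the finite-population correction the same inputs give 385; the correction shaves the requirement slightly for a finite population of 262,151.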
The inclusion criteria were: Syrian refugees aged 14-45 years who were physically and mentally independent. Hence, all subjects who were younger than 14 or older than 45, unable to speak, deaf, physically or mentally dependent, or who had undergone recent moderate or major surgery (less than one week earlier) were excluded from the study.
2. Ethical considerations
The study protocol received approval from the Notre Dame University (NDU) Institutional Review Board (IRB). The approval comprised details about the procedure of the study and the rights of the participants. Informed consent was obtained from each participant. The questionnaires were answered anonymously, ensuring confidentiality of collected data.
3. The Interview questionnaire
The interview questionnaire was divided into six sections consisting of a total of 46 questions, which were dichotomous, close-ended or open-ended. A cover page described the purpose of the study, ensured anonymity and confidentiality, and solicited the consent of participants. The questionnaire collected data on the demographic and socio-economic characteristics of the participants. Information about health status and stressful life events (SLE) was also obtained. The PC-PTSD (Primary Care Post-Traumatic Stress Disorder) tool was used to screen for PTSD.
For the purposes of the study, subjects were classified as having or not having a positive PC-PTSD screen. The results were used to calculate the proportion of Syrian refugees at high risk of developing PTSD.
PC-PTSD questionnaire: The PC-PTSD was initially developed in a Veterans Affairs primary care setting and is currently used to screen for PTSD based on the diagnostic criteria of the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV)12. The screen consists of 4 questions related to a traumatic life event: In the past month, have you (1) had nightmares about it or thought about it when you did not want to?; (2) tried hard not to think about it or gone out of your way to avoid situations that reminded you of it?; (3) been constantly on guard, watchful, or easily startled?; (4) felt numb or detached from others, activities, or your surroundings? The answers to these questions were dichotomous (Yes/No), and the total screen was considered "positive" when a participant answered "yes" to three out of four questions. The PC-PTSD has shown high sensitivity (86%) and moderate specificity (57%) when a cutoff score of 2 is used13.
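The scoring rule is simple to state precisely; as a sketch, the classification used in this study (positive at three or more "yes" answers) could be expressed as:

```python
def pc_ptsd_positive(answers, cutoff=3):
    """Classify a 4-item PC-PTSD screen. `answers` holds four booleans
    (True = "yes"). This study counted a screen as positive at >= 3
    "yes" answers; the cited validation data use a cutoff of 2."""
    if len(answers) != 4:
        raise ValueError("PC-PTSD has exactly four items")
    return sum(bool(a) for a in answers) >= cutoff

print(pc_ptsd_positive([True, True, True, False]))   # → True
print(pc_ptsd_positive([True, True, False, False]))  # → False
```

Lowering `cutoff` to 2 reproduces the more sensitive threshold from the validation study, at the price of more false positives.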
To produce an Arabic version of the PC-PTSD questionnaire, it was translated into Arabic and then back-translated into English. The Arabic questionnaire was pilot-tested on 10 Syrian refugees to check the clarity of the questions and the reliability of the answers.
Anthropometric measurements: The main anthropometric measurements were weight and height. Participants were lightly dressed and barefoot; standing height was measured to the nearest 0.1 cm using a stadiometer, and body weight to the nearest 100 g using an electronic scale. Body Mass Index (BMI) is a measure of weight adjusted for height (kg/m2), calculated by dividing weight (in kilograms) by the square of height (in metres). For the purposes of the study, BMI was recoded into four categories: underweight, normal, overweight and obese.
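As a sketch, the BMI computation and recoding described above might look as follows; the cutoffs of 18.5, 25 and 30 kg/m2 are the usual WHO values and are an assumption here, since the study does not state them explicitly:

```python
def bmi_category(weight_kg, height_m):
    """BMI = weight (kg) / height (m) squared, recoded into the study's
    four categories. The 18.5 / 25 / 30 cutoffs are the standard WHO
    values (assumed, not stated in the study)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

print(bmi_category(70, 1.75))  # → "normal" (BMI ≈ 22.9)
```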
4. Data entry and statistical analysis
The Statistical Package for Social Science (SPSS) for Windows (version 22) was used for data entry and analysis.
First, bivariate analyses were performed using Fisher's exact test and the chi-squared test for categorical variables, and Student's t-test for continuous variables. The dependent variable was high risk of PTSD, measured with the PC-PTSD tool as a dichotomous variable: PC-PTSD (-) and PC-PTSD (+). All variables that might be risk factors for, or might lead to, PTSD were set as independent variables. The two main independent variables were age and gender. Other variables included: marital status, place of residence, number of people and families living in the same household (crowding index), income, education status, profession, work status, lifestyle habits, medical or psychological problems, medication taken and SLE. Frequencies and percentages were calculated for qualitative variables, and means and standard deviations for quantitative variables (BMI, crowding index). A p-value of 0.05 or less was considered statistically significant.
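For illustration, the gender comparison reported in Table 3 can be reproduced with a Pearson chi-squared test on the 2×2 table of counts. A self-contained sketch (no continuity correction, which is what matches the reported p-value):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (df = 1, no continuity correction) for
    the 2x2 table [[a, b], [c, d]]; returns (chi2, p)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi2 with 1 df
    return chi2, p

# Gender vs PC-PTSD counts from Table 3: males 23 (+) / 46 (-),
# females 190 (+) / 191 (-)
chi2, p = chi2_2x2(23, 46, 190, 191)
print(round(p, 3))  # → 0.011, matching the p-value reported in Table 3
```

In practice a statistics package would be used (the study used SPSS); the by-hand version is shown only to make the computation behind the table concrete.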
RESULTS
Table 1: Socio-demographic characteristics of the 450 Syrian refugees
Values are frequency n (%) or mean ± standard deviation.
Gender
· Male: 69 (15.3)
· Female: 381 (84.7)
Age (years): 27.9 ± 8.1
Crowding index (co-residents/room): 4.0 ± 2.4
Crowding index
· ≤ 2.5: 135 (30.0)
· 2.51-3.5: 108 (24.0)
· > 3.5: 207 (46.0)
Current place of residence
· Tented settlements: 62 (13.8)
· Collective shelters: 92 (20.4)
· Building: 296 (65.8)
Educational level
· Don’t know how to read and write: 33 (7.3)
· Know how to read and write/Elementary: 216 (48.0)
· Complementary/Secondary/Technical: 178 (39.6)
· College degree: 23 (5.1)
Marital status
· Single: 54 (12.0)
· Married: 378 (84.0)
· Divorced: 5 (1.1)
· Widowed: 13 (2.9)
Current employment status
· No: 379 (84.2)
· Full-time job: 40 (8.9)
· Part-time job: 31 (6.9)
Presence of income
· No: 379 (84.2)
· Yes: 71 (15.8)
Perceived income (n=71)
· Satisfactory: 25 (35.2)
· Non-satisfactory: 46 (64.8)
Table 2: Health characteristics and migration factors of the 450 Syrian refugees
Values are frequency n (%).
BMI category (kg/m2)
· <18.5: 11 (2.4)
· 18.5-24.9: 176 (39.1)
· ≥ 25: 263 (58.5)
Tobacco consumption
· Yes: 97 (21.6)
· No: 353 (78.4)
Presence of medical conditions
· No: 337 (74.9)
· Yes: 113 (25.1)
Migration status
· Before 2011: 15 (3.3)
· 2011-2013: 339 (75.3)
· After 2013: 96 (21.4)
Seeking professional help for psychological disorders
· No: 439 (97.6)
· Yes: 11 (2.4)
Number of stressful life events
· None: 22 (4.9)
· 1-2: 181 (40.2)
· 3-4: 235 (52.2)
· 5-6: 12 (2.7)
PC-PTSD
· Negative: 237 (52.7)
· Positive: 213 (47.3)
Table 3: Socio-demographic characteristics associated with positive screen for PTSD among the 450 Syrian refugees (bivariate analyses)
Values are positive PC-PTSD vs negative PC-PTSD, n (%) or mean ± SD.
Gender (p=0.011*)
· Male: 23 (33.3) vs 46 (66.7)
· Female: 190 (49.9) vs 191 (50.1)
Age (years): 28.9 ± 7.6 vs 26.9 ± 8.5 (p=0.009*)
Crowding index (co-residents/room): 4.2 ± 2.7 vs 3.8 ± 2.2 (p=0.069)
Crowding index (p=0.294)
· ≤ 2.5: 58 (43.0) vs 77 (57.0)
· 2.51-3.5: 49 (45.4) vs 59 (54.6)
· > 3.5: 106 (51.2) vs 101 (48.8)
Current place of residence (p=0.137)
· Tented settlements: 27 (43.5) vs 35 (56.5)
· Collective shelters: 52 (56.5) vs 40 (43.5)
· Building: 134 (45.3) vs 162 (54.7)
Educational level (p=0.479)
· Don’t know how to read and write: 16 (48.5) vs 17 (51.5)
· Know how to read and write/Elementary: 95 (44.0) vs 121 (56.0)
· Complementary/Secondary/Technical: 92 (51.7) vs 86 (48.3)
· University level: 10 (43.5) vs 13 (56.5)
Marital status (p<0.001*)
· Single: 9 (16.7) vs 45 (83.3)
· Married: 191 (50.5) vs 187 (49.5)
· Divorced: 4 (80.0) vs 1 (20.0)
· Widowed: 9 (69.2) vs 4 (30.8)
Current employment status (p=0.205)
· No: 184 (48.5) vs 195 (51.5)
· Full-time job: 14 (35.0) vs 26 (65.0)
· Part-time job: 15 (48.4) vs 16 (51.6)
Presence of income (p=0.233)
· No: 184 (48.5) vs 195 (51.5)
· Yes: 29 (40.8) vs 42 (59.2)
Perceived income (n=71) (p=0.264)
· Satisfactory: 8 (32.0) vs 17 (68.0)
· Non-satisfactory: 21 (45.7) vs 25 (54.3)
*Significant with p-value < 0.05
Table 4: Health characteristics and migration factors associated with positive screen for PTSD among the 450 Syrian refugees (bivariate analyses)
Values are positive PC-PTSD vs negative PC-PTSD, n (%).
BMI category (kg/m2) (p=0.183)
· <18.5: 7 (63.6) vs 4 (36.4)
· 18.5-24.9: 75 (42.6) vs 101 (57.4)
· ≥ 25: 131 (49.8) vs 132 (50.2)
Tobacco consumption (p=0.369)
· Yes: 42 (43.3) vs 55 (56.7)
· No: 171 (48.4) vs 182 (51.6)
Presence of medical conditions (p<0.001*)
· No: 143 (42.4) vs 194 (57.6)
· Yes: 70 (61.9) vs 43 (38.1)
Migration status (p=0.094)
· Before 2011: 5 (33.3) vs 10 (66.7)
· 2011-2013: 154 (45.4) vs 185 (54.6)
· After 2013: 54 (56.2) vs 42 (43.8)
Seeking professional help for psychological disorders (p=0.003*)
· No: 203 (46.2) vs 236 (53.8)
· Yes: 10 (90.9) vs 1 (9.1)
Number of stressful life events (p<0.001*)
· None: 0 (0.0) vs 22 (100.0)
· 1-2: 66 (36.5) vs 115 (63.5)
· 3-4: 138 (58.7) vs 97 (41.3)
· 5-6: 9 (75.0) vs 3 (25.0)
*Significant with p-value < 0.05
The socio-demographic, health and migration characteristics of our sample of Syrian refugees are described in Tables 1 and 2. Out of the 450 participants, 47.3% had a positive PC-PTSD. To study the association between the socio-demographic characteristics of the Syrian refugees and PTSD screening, bivariate associations were explored, as shown in Table 3. The results indicate a significant difference between gender groups: almost half of the women (49.9%) had a positive screen for PTSD compared to 33.3% of the men (p=0.011). Mean age was significantly higher in refugees with positive PC-PTSD (28.9 ± 7.6 years) than in those with negative PC-PTSD (26.9 ± 8.5 years) (p=0.009). PTSD screening was also significantly associated with marital status: positive PC-PTSD was most frequent among divorced participants (80%), compared to 69.2% of widowed, 50.5% of married, and 16.7% of single subjects (p<0.001). In contrast, crowding index, current place of residence, educational level, employment status, and income were not significantly associated with positive PC-PTSD (p>0.05).
The associations of health characteristics and migration factors with PTSD screening among the Syrian refugees are displayed in Table 4. A significant association was observed between the presence of a medical condition and a positive screen for PTSD: 61.9% of subjects suffering from a medical condition had a positive PC-PTSD, compared to 42.4% of participants without medical conditions (p<0.001). However, BMI and tobacco consumption were not significantly associated with PTSD screening (p>0.05). PTSD screening was significantly associated with seeking professional help for psychological disorders: 90.9% of refugees who sought such help had a positive PC-PTSD, versus 46.2% of those who did not (p=0.003). Positive PC-PTSD was also significantly associated with an increasing number of SLE: none of the participants without any stressful event had a positive PC-PTSD, compared to 36.5% of participants with 1-2 SLE, 58.7% of participants with 3-4 SLE and 75% of participants with 5-6 SLE (p<0.001). On the other hand, no significant association was observed between PC-PTSD and migration status (p>0.05).
DISCUSSION AND CONCLUSION
PTSD is the most frequently occurring mental disorder among refugees14, with reported prevalence rates ranging between 15% and 80%. A study of Cambodian refugees living in a camp on the Thailand-Cambodia border indicated that 15% had PTSD15. A cohort study assessed the prevalence of PTSD among Iranian, Afghani and Somali refugees who had moved to the Netherlands, at a 7-year interval [(T1=2003) - (T2=2010)]. Results showed a high prevalence at both T1 (16.3%) and T2 (15.2%); the reasons this high prevalence remained unchanged may be the late onset of PTSD symptoms and the low use of mental health care centers16. De Jong and colleagues reported that 50% of the refugees in Rwandan and Burundese camps had serious mental health problems, mainly PTSD17, while Teodorescu and colleagues, studying the prevalence of PTSD among refugees in Norway, found that 80% had PTSD18. In our study, the proportion of Syrian refugees with a positive screen for PTSD was 47.3%. In 2006, a mental health assessment demonstrated that Lebanese citizens exposed to war were more likely to develop psychiatric problems such as PTSD19. Subsequently, a cross-sectional study was conducted in South Lebanon on 681 citizens in 2007, one year after the 2006 war in Lebanon, to examine the prevalence of PTSD 12 months after the war's cessation; the prevalence of PTSD was 17.8%19. A recent cross-sectional study aimed to determine the prevalence of PTSD and explore its relationship with various variables in 352 Syrian refugees settled in camps in Turkey. An experienced psychiatrist evaluated the participants, and 33.5% of them had PTSD, mainly female refugees, people who had experienced 2 or more SLE, and those with a family history of psychiatric disorder20.
PTSD has been associated with a wide range of traumatic events: emotional or physical abuse21, sexual abuse22, parental break-up23, death of a loved one24, domestic violence25, kidnapping26, military services27, war trauma28, natural disasters29 and medical conditions including cancer30, heart attack31, stroke32, intensive-care unit hospitalization33, and miscarriage34.
Our findings should be interpreted taking into account several limitations. The first limitation is the use of a screening tool, instead of the more accurate diagnosis of a clinician, to detect PTSD. Given that a standardized screening tool for PTSD was used, our rates are likely an overestimate of the true prevalence. Secondly, this study was conducted with a limited sample of Syrian refugees and therefore should not be generalized to refugees of other eras or from other countries. The third limitation is the lack of information on the presence of other Axis I psychiatric comorbidities, such as anxiety or mood disorders, that could facilitate the development of PTSD or influence its manifestations35-36.
Refugees are an important group to examine, given their high prevalence of mental health disorders. Although refugees are evaluated for health problems, there are currently no standardized screening and clinical practice guidelines for assessing PTSD in all refugees; we may therefore be missing opportunities to detect and treat these harmful and potentially fatal conditions. Our findings suggest the need to consider a standardized screening tool for PTSD in this population. In addition, a far greater percentage of patients may have “PTSD symptoms” that are abnormal but do not meet the full DSM-5 criteria for a PTSD diagnosis, yet still cause functional impairment and may later develop into diagnosable PTSD. Given the overall high prevalence, one possible model for evaluation would be a stepped screening approach: positive screens for PTSD would trigger a standardized clinical diagnosis of PTSD with more comprehensive assessment and early intervention. Considering the high cost of treating individuals with PTSD, screening and intervention strategies should be addressed. Greater awareness among providers, and increased targeted assessment and treatment efforts, may increase early detection of the full range of PTSD presentations, preventing more serious future health problems and functional impairment among refugees.
The identification of dark patches on the skin may be the first indication of type 2 diabetes mellitus (DM type 2). DM type 2 is a complex, heterogeneous group of metabolic conditions characterised by elevated levels of serum glucose, caused by impairment in both insulin action and insulin secretion. The darkening of the skin is usually evident on the hands and feet, in folds of skin, along the neck, and in the patient’s groin and armpits.1 The affected skin differs from that which surrounds it, feeling velvety and thicker. Small, soft, skin-coloured growths known as skin tags may hang from it, and the affected area may be pruritic. This condition is a nonspecific dermatological disorder termed acanthosis nigricans (AN), which often occurs in patients with high insulin levels. Hud et al. (1992) found that 74% of the obese population exhibits AN.2 The association of AN, skin tags, and diabetes mellitus due to insulin resistance – along with obesity in adolescents and young adults – is a well-defined syndrome.3,4,5
High insulin levels in the blood may increase the body’s production of skin cells, many of which have increased pigmentation, giving the skin a darkened appearance: dark patches appear on the skin. These are often the outcome of insulin receptors in the skin being triggered, causing overgrowth of normal tissue that is dark in colour and/or irregular in shape. The condition may be an indication that the blood sugar is persistently high. The term ‘acanthosis nigricans’ was originally proposed by Unna et al. in 1891, but the first descriptions of it were made a year earlier by two researchers working independently of each other: Pollitzer and Janovsky.6 Kahn and colleagues tried to clarify the link between AN and insulin resistance in 1976.7 Eventually, its presence became established as an indicator of insulin resistance or diabetes mellitus in obese patients,8 and in 2000 the American Diabetes Association formally accepted AN as such.9 It should be borne in mind that AN must not be considered a characteristic feature of DM type 2; it is not a condition that is developed by all those who suffer from the disease.
Figures 1, 2, 3 - Acanthosis Nigricans
Pathogenesis
Although the pancreas produces insulin in DM type 2, the body cannot make use of it efficiently. The outcome is a build-up of glucose in the bloodstream, which may lead to high levels of both blood glucose and insulin. At low concentrations, insulin regulates the metabolism of carbohydrates, lipids and protein, and may promote growth by binding to ‘classic’ insulin receptors. At high concentrations, insulin may stimulate keratinocyte and fibroblast proliferation through high-affinity binding to insulin-like growth factor 1 (IGF-1) receptors.10 In obese patients, elevated IGF-1 levels may contribute to this keratinocyte and fibroblast proliferation,11 which leads to AN.
To put this simply, AN is the outcome of a toxic effect of hyperinsulinemia. Excess insulin causes normal skin cells to reproduce at a rapid rate, and insulin has been demonstrated to cross the dermo-epidermal junction and reach keratinocytes. In those who have dark skin, these new cells contain increased melanin, and the higher level of melanin results in a patch of skin that is noticeably darker than the skin surrounding it. The presence of AN is therefore a strong indicator of increased insulin production and, in turn, a predictor of future DM type 2.
When the occurrence of AN is recognised, a prediabetic person has the opportunity to become more alert to their symptoms and to take precautions in the form of dietary restrictions and weight loss. This is because overweight people tend to develop resistance to insulin over time. If too much insulin is the cause of AN, it is relatively easy for the patient to counter it by changing to a healthier diet, taking exercise, and controlling their blood sugar. Obesity-associated AN may be a marker for higher insulin needs in obese women with gestational diabetes,12 and AN has been shown to be a dependable early indicator of metabolic syndrome in paediatric patients.13
Autoimmunity?
Unknown autoantibodies other than those against insulin receptors have been implicated in AN, which may explain the effectiveness of cyclosporine treatment. Kondo and colleagues identified a very rare occurrence – without type B insulin resistance – of generalised AN with Sjögren's syndrome and systemic lupus erythematosus-like features.14 Theirs was the first report of generalised AN involving an area from the mucosa of the larynx to the esophagogastric junction, accompanied by an autoimmune disorder (AD) responding to systemic immunosuppressive therapy. The AN skin lesions and mucosal papillomatosis were treated with oral cyclosporine A, and improvement was accompanied by lower autoantibody titres. That was an outcome of the development of antibodies to insulin receptors in ADs such as systemic lupus erythematosus.15
Raymond et al. have reported on the association of AN with disordered immunoreactivity.16 The onset of AN may precede a variety of classic ADs, and different categories of ADs may be present at the same time. If AN is an AD, DM type 2 may also represent a slow and subtle autoimmune process. AN and DM type 2 then become two different expressions of the same disease process, the former apparently benign and the latter ultimately potentially fatal. Autoimmunity is a well-known pathogenic component in DM type 2, and the assumption that its pathogenesis encompasses autoimmune aspects is increasingly recognised. That is based on the presence of circulating autoantibodies against β cells and of self-reactive T cells, and also on the glucose-lowering efficacy in DM type 2 of some immunomodulatory therapies.17 The autoimmune hypothesis of AN has the potential to modify the direction of DM type 2 research.
The symptoms of ADs are variable, in contrast to the mechanisms of antigen recognition and effector function, which are similar to those of the response to pathogens.18 The symptoms depend essentially on the triggering autoantigen and the target tissue. In certain conditions autoantibodies function as receptor antagonists, and in other situations as receptor agonists. Autoantibodies of both types can be made against the insulin receptor. When they serve as antagonists, as in DM type 2, the patient's cells are unable to take up glucose and the consequence is hyperglycaemia, whereas in patients with agonistic antibodies, cells deplete blood glucose, resulting in hypoglycaemia.18 One wonders whether AN may be an early by-product of such an autoimmune process.
Vitiligo, which is the result of depigmentation of the skin, is in effect the opposite disorder to AN, and is recognised as an AD.19 Thyroid disorders, particularly Hashimoto's thyroiditis and Graves’ disease, and other endocrinopathies, such as Addison's disease, diabetes mellitus, alopecia areata, pernicious anaemia, inflammatory bowel disease, psoriasis, and autoimmune polyglandular syndrome, are all associated with vitiligo.20 Kakourou et al. report that Hashimoto's thyroiditis is 2.5 times more frequent among children and adolescents with vitiligo than in a healthy age- and sex-matched population, and that it usually follows the onset of vitiligo.21
As in the case of other ADs, vitiligo susceptibility may involve both target organ-specific genes and immune response genes.22 The autoimmune theory proposes alterations in humoral and cellular immunity in the destruction of the melanocytes of vitiligo.23 Vitiligo lesions have an infiltrate of inflammatory cells, particularly cytotoxic and helper T cells and macrophages; histological evidence further backs up an autoimmune aetiology.24 Like AN, vitiligo is thus gene-linked; in both conditions, immune derangements may provide the matrix while genes are the craftsmen.
Vitiligo occurs more commonly in DM type 1, but a few recent studies have revealed an increased incidence in DM type 2.25 These may be isolated case studies, but they offer new insight into the pathogenesis of DM type 2. There is a logical thread running between the autoimmune assertion of AN and its depigmentation counterpart, vitiligo, which is recognised as an AD. If AN is proven to be an AD, the AD hypothesis of DM type 2 becomes more compelling. The autoimmune process of AN warrants further consideration, and further study is needed to confirm or falsify the hypothesis of an autoimmune spectrum disorder encompassing AN, vitiligo and DM type 2.
Even though the common assumption that the bacterial flora occupying the human body outnumber the body's own cells has been proven wrong, the revised ratio of 1:1 is still astounding.26 The exact role of the resident microbial colony in the human body is unclear. Fewer ADs are observed among the hunter tribes of Tanzania, whose faecal matter contains more varieties of microbes than that of people in developed countries.27,28 This is an observation that needs further verification. It is possible that the resident microbial army serves as a moderator, maintaining harmony among human body cells and preventing them from attacking each other. Now that anti-autoimmune activity in molecules produced by parasites has been confirmed in the haematology laboratory, these findings may have clinical significance. The aetiology of ADs is multifactorial: genetic, environmental, hormonal, immunological and psychological stress factors are all considered important in their development. I contend that the clue to the mechanism of development of certain ADs, and to ways of counteracting them, may be embedded in the bacterial colony and its interaction with human cells.
International studies
A pilot study by Bhagyanathan and colleagues demonstrated that children with AN have a high incidence of insulin resistance.29 They posit that the detection of insulin resistance in children may present an opportunity to prevent the onset of microvascular changes before the development of DM type 2; once DM type 2 is diagnosed on the basis of hyperglycaemia, it may be too late for that. Insulin resistance is one of the mechanisms involved in the pathogenesis of DM type 2; therefore, early recognition of insulin resistance is paramount in preventing or delaying the onset of diabetes. In their study, 62% of children with AN had high insulin resistance, and in children with AN and a high BMI the incidence of insulin resistance was about 80%. This is evidence that easily detectable signs are of value in the screening of children who are at high risk of developing DM type 2. Bhagyanathan et al. conclude that AN has potential as a screening method because those who have high insulin resistance as well as AN are at high risk of future DM type 2.
An earlier US study, by Brickman and colleagues, had yielded somewhat similar results.30 It involved 618 youths aged 7 to 17 years, from different ethnic groups, at nine paediatric practices. A survey was made of their demographics and their family history with regard to DM type 2, and their weight and height were also measured. AN was scored and digital photographs of their necks were taken. AN was identified in 19%, 23%, and 4% of the African American, Hispanic, and Caucasian youths respectively, and in 62% of those studied who had a BMI greater than the 98th percentile. Using multiple logistic regression, the researchers found that BMI level, the presence of maternal gestational diabetes, female gender, and not being Caucasian were all independently associated with AN. AN was common among the overweight young people and was associated with risk factors for abnormal glucose homeostasis. Brickman et al. concluded that identification of AN offers an opportunity to advise families about the causes and consequences of the condition.30 That has the potential to motivate those with responsibility for the young people to encourage and effect healthy lifestyle changes that decrease the risk of the development of DM type 2 and cardiovascular disease.
In their research in India, Vijayan et al. determined that BMI, waist circumference and AN are three physical markers for the recognition of insulin resistance in children.31 They conducted a cross-sectional school-based study in a semi-rural environment in the state of Kerala, which has become known as the diabetic capital of the country. Their study encompassed 283 children between the ages of 10 and 17. The prevalence of insulin resistance, estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), was 35%. Of the children studied, 30% had a waist circumference above the 75th percentile and 18.7% had a BMI above the 85th percentile; AN was diagnosed in 39.6% of the population studied. A significantly high prevalence of insulin resistance was observed among the children with a waist circumference exceeding the 75th percentile, a BMI above the 85th percentile, or a diagnosis of AN. The most sensitive physical marker of insulin resistance was AN (90%) and the most specific was BMI (91%). Combining these parameters may increase sensitivity to 94% and the negative predictive value to 96%. Vijayan et al. conclude that these easily recognisable physical markers are an efficient warning of insulin resistance among children.
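HOMA-IR, the index used by Vijayan et al., is computed from fasting glucose and fasting insulin with the standard formula. A sketch with illustrative values (not taken from the study):

```python
def homa_ir(glucose_mmol_per_l, insulin_uU_per_ml):
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5.
    This is the standard homeostasis-model formula; the threshold used
    to call a child insulin-resistant varies between studies."""
    return glucose_mmol_per_l * insulin_uU_per_ml / 22.5

# Illustrative (hypothetical) fasting values: 5.0 mmol/L glucose,
# 10.0 uU/mL insulin
print(round(homa_ir(5.0, 10.0), 2))  # → 2.22
```

When glucose is reported in mg/dL rather than mmol/L, the equivalent form divides glucose × insulin by 405 instead of 22.5.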
Acanthosis nigricans in different conditions
AN is not a disease but a symptom of disease. A high prevalence has been observed recently, and a number of varieties are recognised: benign, obesity-associated, syndromic, malignant, acral, unilateral, medication-induced, and mixed AN.32,33 It has been established that AN may occur in a number of conditions, and a brief discussion of these is appropriate for this paper. The different types of AN are listed in Table 1. AN often appears gradually in the prediabetic state, but abruptly in malignancy.
AN may be triggered by a variety of medications, such as birth control pills, human growth hormone, thyroid medications, and even some bodybuilding supplements, all of which may cause changes in insulin levels. Medications used to ease the side effects of chemotherapy have also been linked to AN. In most cases, the condition clears up when the medications are discontinued. In rare cases, AN may be caused by gastric cancer (especially gastric adenocarcinoma) or an adrenal gland disorder such as Addison’s disease. Hypothyroidism, Cushing’s disease, and polycystic ovarian disease are also common causes of AN.34,35
When AN is present without any identifiable cause in middle-aged and older patients with extensive skin findings, internal malignancy needs to be ruled out. AN has been reported in association with many kinds of cancer, by far the most common being an adenocarcinoma of gastrointestinal origin. In these patients it is a rapidly growing dermatological pigmentation disorder. The skin changes are typically more extensive and severe than those seen in benign AN. Findings may include thickening, unusual roughness and dryness, and/or potentially severe itching (pruritus) and irritation of the affected skin regions. Pigmentary changes may be more pronounced than those observed in benign AN and are not restricted to areas of hyperkeratosis. Malignant AN frequently involves the mucous membranes and is associated with distinctive abnormalities of the oral (mouth) region. For example, reports indicate that the lips and the back and sides of the tongue may have an unusually ‘shaggy’ appearance, sometimes with elevated, wart-like, non-pigmented tissue growths (papillomatous elevations). Malignant AN is also commonly characterised by wart-like thickening around the eyes, unusual ridging or brittleness of the nails, thickening of the skin on the palms of the hands, hair loss, and sometimes other symptoms. Investigators have reported that malignant AN may develop as much as five years before the onset of other symptoms, although the time span before malignancy is typically shorter.
Table 1. Different types of acanthosis nigricans
1. Obesity-associated acanthosis nigricans. Once labelled pseudo-acanthosis nigricans, this is the most common type. Lesions may appear at any age but are most common in adulthood. The dermatosis is weight dependent, and lesions may completely regress with weight reduction. Insulin resistance is often present in these patients. It is slow growing.
2. Acral acanthosis nigricans. Acral acanthosis nigricans (acral acanthotic anomaly) occurs in patients who are otherwise in good health. It is most common in dark-skinned individuals, especially those of African-American or sub-Saharan-African descent. The hyperkeratotic velvety lesions are most prominent over the dorsal aspects of the hands and feet, with knuckle hyperpigmentation often most prominent.
3. Unilateral acanthosis nigricans. Sometimes referred to as nevoid acanthosis nigricans, this type is believed to be inherited as an autosomal dominant trait. Lesions are unilateral in distribution and may become evident during infancy, childhood, or adulthood. They tend to enlarge gradually before stabilising or regressing.
4. Generalised acanthosis nigricans. This type is rare and has been reported in paediatric patients without underlying systemic disease or malignancy.
5. Syndromic acanthosis nigricans. This is the name given to acanthosis nigricans that is associated with a syndrome; the type A and type B syndromes are special examples.
6. Hereditary acanthosis nigricans. Familial acanthosis nigricans is a rare genodermatosis that seems to be transmitted in an autosomal dominant fashion with variable phenotypic penetrance. The lesions typically begin during early childhood but may manifest at any age.
7. Drug-induced acanthosis nigricans. Although uncommon, this type may be induced by several medications, including nicotinic acid, insulin, pituitary extract, systemic corticosteroids, and diethylstilbestrol. Rarely, triazinate, oral contraceptives, fusidic acid, and methyltestosterone have also been associated with it.
8. Malignant acanthosis nigricans. Associated with internal malignancy, this is the most concerning variant because the underlying neoplasm is often an aggressive cancer.
9. Mixed-type acanthosis nigricans. This refers to situations in which a patient with one of the above types develops new lesions of a different aetiology.
Genetic links
It is worth noting that certain types of AN may be genetically linked.36 The interaction of genes and the environment is not clearly understood, and the different variables of DM type 2 are not established. It is a heterogeneous disorder, and there is a general consensus that diabetic comorbidities may be the outcome of genetic and environmental susceptibilities.37,38,39,40,41 Such factors may act independently or in combination to bring about hyperglycaemic conditions. It would be interesting to explore the possibility of a link between the diabetic genes and the AN gene. DM type 2 may be potentiated by poor quality of insulin or decreased production of insulin, and the distinction between those manifestations is not well recognised. The controversy concerning the relative roles of insulin deficiency and insulin resistance in DM type 2 remains unresolved.42 Despite the early demonstration that obese people have elevated plasma insulin concentrations, many studies over the years have failed to control satisfactorily for the influence of obesity.43 Another difficulty with the interpretation of plasma insulin concentrations is that sustained hyperglycaemia may have detrimental effects on insulin secretion.44 Diabetes mellitus affects every cell of the body, and therefore it affects the beta cells of the pancreas in turn. The spiralling effect of hyperglycaemia adds to the malfunctioning of beta cells, resulting in impaired quantity and quality of insulin. Only a subset of diabetic patients shows AN; other groups of obese diabetic patients do not develop it. AN is linked with higher insulin production and obesity, whereas it may be absent in diabetes with a reduced quantity of insulin. The presence of AN may therefore serve as one of the biological markers to determine subtypes of DM type 2.
The incidence of AN varies between races, which suggests a genetic contribution – indeed, it has been regarded by some as strongly influenced by genetic factors and is thought to be autosomal in nature. AN is common among African-Americans, Hispanics, and American Indians, but rare among white people.45,46 A study from the USA reports the prevalence of AN as 3% among Caucasians, 19% among Hispanics, and 28% among American Indians.47 More recently, studies from Sri Lanka and south India report the prevalence of AN as high as 17.4% and 16.1% respectively in the general adult population.48,49
Type 2 diabetes mellitus and schizophrenia
DM type 2 is relatively common among people who have mental health issues. Increased risk for cardiovascular disease and other serious illnesses related to insulin resistance – for example, certain epithelial cell carcinomas, AN, and polycystic ovary syndrome – are long-term concerns associated with the cluster of metabolic abnormalities stemming from insulin resistance. These are often referred to as the metabolic syndrome.50 Impaired action of insulin in patients with schizophrenia was reported over fifty-five years ago and later confirmed in Australia.51 The prevalence of DM type 2 in patients with schizophrenia was found to be higher than it was in the general population, even before antipsychotic medication was in widespread use.
The mechanisms underlying the relationship between schizophrenia and diabetes remain unexplained. The present author has argued in favour of the autoimmune hypothesis of a subset of schizophrenia.52,53 The proposal is that if AN is an AD, it may coexist with DM type 2, or DM type 2 itself may even be an extension of the same autoimmune process. In other words, there may be a continuum of pathological process between AN and DM type 2. It follows that schizophrenia sufferers may have a predisposition to develop DM type 2; schizophrenia may even be considered a clinical surrogate of DM type 2.
When AN occurs in a patient with schizophrenia, they may develop a delusional misinterpretation of the condition, believing, for example, that it is the result of skin cancer or even the manifestation of an external agency. Such situations may result in severe anxiety. Schizophrenia is frequently associated with poor lifestyle choices on the part of the patient, such as a diet high in fat, reduced levels of physical activity, and high rates of smoking, all of which may contribute to the development of a metabolic syndrome and insulin resistance.54,55 It is worth investigating the early warning signs of DM type 2, including AN, before commencing a patient on antipsychotic drugs that can lead to a metabolic syndrome.
It is now well recognised that patients treated with clozapine or olanzapine are more often classified as having DM type 2 or impaired glucose tolerance in comparison with patients treated with other second-generation antipsychotics. Clozapine increases the risk of diabetes if there is a history of pre-existing diabetes or a family history of diabetes. According to a US study, the risk is higher if the patient is African-American or of Hispanic origin. Such patients may need close blood sugar monitoring during the initiation of clozapine treatment. I contend that if a patient already has AN, weight-increasing antipsychotics should be avoided. Even though aripiprazole is the most metabolic-sparing agent among the second-generation antipsychotics, Manu et al. report a case of AN in a patient treated with it. That patient did have a family history of DM type 2, which adds to the interest of the case.56
Diagnosis and treatment
There is no specific treatment for AN. Treatment is directed towards the specific symptoms that are apparent in each individual. It should be borne in mind that such treatment may require the coordinated efforts of a team of medical professionals. Correcting the underlying disease improves the skin symptoms. Steps that may be taken, depending on what the disease is, include correcting hyperinsulinemia through diet and medication, encouraging the loss of weight in those with obesity-associated AN, removing or treating a tumour, and discontinuing a medication that causes AN. The control of obesity contributes significantly to reversing the whole process, essentially by reducing both insulin resistance and compensatory hyperinsulinemia. However, the pigmentary changes may persist. In drug-induced AN, offending medicines should be stopped. In hereditary AN, lesions tend to enlarge gradually before stabilising and/or regressing on their own.
For those with AN, the recommended treatment may include the use of certain synthetic, vitamin A-like compounds (retinoids). For individuals with malignant AN, disease management requires treatment by oncologists. Reports indicate that AN has improved with therapy used to treat underlying malignancies and has reappeared with tumour recurrences. Other treatment for this disorder is symptomatic and supportive. The treatments considered are used primarily to improve appearance, and include topical retinoids, dermabrasion, and laser therapy. The final outcome of AN varies depending on the cause of the condition. Benign conditions, either on their own or through lifestyle changes and/or treatment, have good outcomes. The prognosis for patients with malignant AN is often poor as the associated cancer is often advanced.
AN may be diagnosed on the basis of thorough clinical evaluation, identification of characteristic physical findings, a complete patient history including medication history, a thorough family history, and various specialised tests.57 The age at detection will vary, depending upon the form of AN present and on other factors. For example, benign forms of AN often become evident during childhood or puberty. It is less common for benign AN to be apparent at birth or to develop in adulthood. The latter cases most typically involve AN in association with obesity.
In individuals with skin changes that suggest AN, diagnostic assessment may include various laboratory tests. Examples are the glucose tolerance test and the glycated haemoglobin (HbA1c) test. Additional laboratory studies or other specialised tests may also be utilised in diagnosis in order to help detect or rule out certain other underlying disorders – including a number of endocrine and autoimmune conditions – that may be associated with AN. In addition, in some instances, particularly where the patient presents with signs suggestive of malignant AN, testing may include biopsy and microscopic evaluation of small samples of affected skin tissue.
The onset of malignant AN usually occurs after the patient reaches 40 years of age. Various factors may be indicative of malignant AN in association with an underlying cancer. These include symptom onset in adulthood that is not associated with the use of particular medications, obesity, a positive family history, and certain underlying disorders known to be associated with AN. It is rare for malignant AN to develop during childhood. In such instances, warning signs may include skin changes that progress rapidly and also involvement of the mucous membranes.58
AN may be metaphorically linked to the dark pigmentation that appears on the skin of the ripe Sharon fruit. Sharon fruit is the trade name for a variety of persimmon that is grown in Israel; the dark patches on the ripe and sugary fruits are the result of condensed tannins. Insulin-resistant AN may be referred to as the Sharon fruit sign in order to emphasise the diagnostic value of the condition. It has been suggested that the official terminology for AN is an inappropriate label for a significant warning sign of an increasingly common disease for which early diagnosis is imperative. Because the complex name may have a negative impact on its identification by both clinicians and patients, a less formal term is in use among some of those who are concerned with patient care. It must be borne in mind that AN, otherwise known as the Sharon fruit sign, manifests only in those with the insulin-resistant condition and should not be considered a characteristic feature of DM type 2. Identification of the Sharon fruit sign may be helpful in the early diagnosis of DM type 2.
Discussion
Diabetes puts an enormous burden on patients, their families, and the health-care system. Detection of the disease at an early stage using physical markers, and instituting preventive measures, would greatly reduce the economic strain on society. According to the latest global data from the World Health Organization (2016), an estimated 422 million adults are living with DM type 2, and diabetes prevalence is increasing rapidly.59 In 2013 the International Diabetes Federation estimated that 381 million people were living with diabetes.60 That number is anticipated to almost double by 2030.61 About 3.8 million people in the UK have DM type 2, and the charity Diabetes UK predicts that this figure may rise to 6.2 million by 2035/36.
Most often a diagnosis of DM type 2 is made only when such symptoms as loss of weight, polydipsia, and polyuria have become manifest. By that time the damage to the body may already have occurred. Complications arising from diabetes span the whole of medical science, so early detection is crucial. Intervention at the prediabetic stage helps to arrest the progress of this condition. AN may herald DM type 2, endocrinopathies, and malignancies. This cutaneous disorder is easily detectable and highly useful in the early detection of the disorders associated with it. Early screening for AN in preadolescent and adolescent people would provide a relatively simple, inexpensive, and non-invasive tool for identifying those young people who have hyperinsulinemia and could benefit from early intervention, helping to prevent the development of DM type 2. Young people tend to be reluctant to undergo traditional screening measures and definitive diagnostic tests, which they find invasive and unpleasant.
A sedentary lifestyle and unhealthy dietary habits – as well as the side effects of antipsychotics – make chronically ill psychotic patients more vulnerable to DM type 2 than the general population. Long-standing detained patients in particular are restricted in their mobility and may become more prone to obesity and insulin resistance. It is not clear whether the pathogenesis of psychosis itself has a diabetogenic effect. Given the high incidence of DM type 2 among mental health service users, psychiatrists need to become more alert in the diagnosis, management, and prevention of its complications.
Within the United Kingdom, all doctors are expected to teach.1 Teaching is assessed throughout their professional careers, during annual appraisals for doctors in training and during consultant revalidation. But how are those just embarking on their medical careers expected to develop the necessary teaching skills? As three educators at various stages in our clinical careers, we developed and delivered a small course with the aim of addressing this issue.
The General Medical Council in the United Kingdom suggests that a basic comprehension of teaching should be gained during the undergraduate and postgraduate training of doctors.2 Dandavino et al. further suggest that early development of these teaching skills may have additional benefits for the clinician, such as improving communication and assisting undergraduates to develop their own ability to learn.3 Our local training region, Yorkshire and the Humber Deanery (HEYH), has a mandatory postgraduate training day in teaching skills, which focuses on generic and clinical teaching skills and is delivered towards the end of the first foundation year by doctors with various roles in medical education. Whilst useful in its content, for many it comes too late: doctors have often already been involved in teaching medical students on placement by this time.
AMC recalls an episode from her first postgraduate (foundation) year. A peer was thrilled to have ‘shaken off’ a final-year medical student who was supposed to accompany them on a shift as a learning experience, stating that they were now able to ‘do some work’. She could not understand the desperation to escape one-to-one teaching. On reflection, it was probable that her colleague found it overwhelming to incorporate the additional responsibility of teaching alongside an already stressful clinical workload. Many share these feelings, with new doctors finding time pressures and competing clinical demands a challenge to implementing clinical teaching.4
We thought that giving our graduates simple tools to understand and overcome these challenges might empower them as teachers. It might also improve their confidence in other areas, such as their own learning and presentation skills.5,6 This paper proposes a solution: a short course, delivered immediately following graduation, that empowers new doctors as teachers by providing basic training in clinical teaching. These doctors are then able to use this training as soon as they begin their foundation posts, which are ultimately the beginning of their teaching careers.
Methods
Two versions of a half-day course, titled ‘Teach the Medic’, were developed in HEYH and ran in successive years. The original course was designed by a surgical trainee (AMC) and a general practitioner running the undergraduate education curriculum (MS). Initial topics were chosen based on the experiences of the authors and colleagues. The optional course (see figure 1) was offered to the cohort of Leeds medical students in the transition period between finishing their final examinations and commencing their first post as a doctor.
Figure 1: A representation of the initial course structure. Stations were developed as interactive lectures and delivered to participants by doctors of various training levels.
The initial course received encouraging verbal and written feedback from the participants, collected on the day of the course. Further feedback was collected a few months into foundation training, allowing enough time for delegates to put this knowledge to use. This feedback, whilst encouraging, indicated that delegates were keen for additional workshop-style sessions. Subsequently, a modified half-day course ran the following year, with the recruitment of additional postgraduate teachers (including LES). A further 17 newly qualified doctors from various medical schools completed the course prior to commencing their HEYH foundation posts. This modified course (see figure 2) included scenario-based sessions around potentially difficult situations for the clinical teacher, and also explored alternative styles of teaching that could be adopted successfully in the workplace.
Figure 2: A representation of the modified course structure. Building on feedback from the initial course, the three co-authors incorporated new small-group scenario-based discussions alongside the interactive lectures.
Results
Initial feedback from evaluation of the day was positive for both courses. For the second course, all participants rated every session as very good (71%) or good (29%) overall, and 12/17 (71%) thought that the course should be made compulsory for medical students. A follow-up survey distributed six months after the course generated 14 responses. All respondents felt that the course should be run again, and all stated they would still recommend it to colleagues. All either strongly agreed (n=2, 14%) or agreed (n=12, 86%) that they felt more confident in teaching compared with their peers. Regarding individual sessions, 10 participants (71%) had directly incorporated learning from the ‘Teaching Theory’ session into their teaching practice, 12 (86%) from the ‘Teaching for your Learning and Portfolio’ session, 11 (79%) from the ‘Teaching your Peers’ session, and 10 (71%) from the ‘Scenarios’ workshop.
Discussion
We feel the course content in ‘Teach the Medic’ complements other courses available later in one’s career, such as the Royal College of Surgeons’ ‘Train the Trainer’ course. We propose that this course could be run by a junior doctor with a strong interest in clinical teaching, with the involvement of a senior colleague with extensive medical education experience. We felt the course was especially beneficial because participants continued to find it useful long after its delivery.
Expanding this project to include a whole year group as a compulsory course is ambitious. It would require further development and more resources, but initial feedback suggested participants would find it extremely useful. Bing-You et al.6 agree, having found that undergraduate students would be willing to undertake formal instruction in clinical teaching prior to graduation.
As our short course gains momentum within HEYH, this prospect becomes more achievable. When considering a more widely delivered course, one must remember that attendance at ‘Teach the Medic’ was optional, suggesting that those who attended had already identified an interest in teaching. This has the potential to bias our data to some degree. However, we still believe that making the session compulsory would allow skill development and empowerment for those who may not consider themselves aspiring medical educators, but who are still in positions to deliver teaching.
Conclusion
Our evolving teaching skills course suggests that close work with both local medical schools and deaneries is important if this course is to be incorporated into the training of newly qualified doctors. It could be included as a compulsory part of the final-year medical school curriculum, offered as an SSC, or integrated into the new-starter induction programme delivered by individual hospitals.
Medical scientists who espouse a strict biological model of the mind tend to care less about the prolongation of life than do those who have faith in a higher authority.1 The prevailing reductionist model of mind has recently been challenged effectively.2,3,4,5 That has led to a position in which there is some justification for claiming that there is scientific evidence to enable a suspension of disbelief in life after death.6 The medical profession should respect the theology veiled in thanatology and should be careful not to become instrumental in creating a culture of death; alleviating suffering does not mean eliminating the patient.
In the absence of spiritual conviction, human suffering lacks deep meaning and death is regarded as the ultimate tranquilliser; prolonging life at any cost may be perceived as a worthless endeavour. Against that view it may be argued that without suffering evolution would not take place and human consciousness would fail to expand. Without stress and struggle the spirit buds to which we may be likened would not mature and grow leaves and fruit, and our characters would not develop; we would lead the lives of lotus-eating sybarites.7
Evidence for discarnate survival
According to those who are sceptical about after-death survival, there is only as much evidence to justify belief in life after death as there is for the historical existence of dinosaurs. Some scientific researchers however argue that there are compelling reasons to support those who are proponents of belief in life after death. Dr Vernon Neppe, a neuropsychiatrist turned parapsychologist, has declared that the combined body of evidence for discarnate survival is overwhelming – so great that it may be regarded as scientifically cogent.8 This emerging scientific view, coupled with the wisdom of the faith traditions, challenges the rationality of supporting assisted suicide. The following are examples of evidence for discarnate existence that are commonly cited:
clinical death experiences
pre-death visions
shared death experiences
collective apparitions
some forms of mediumistic incidents, particularly ones that involve cross-correspondence, drop-in communications and physical phenomena
children’s memories of previous lives
electronic voice phenomena
instrumental trans-communications
transplant cases
scientifically studied Marian apparitions
The list is becoming longer as survival research progresses. Encouraged by the success of afterlife experiments with mediums,9 the multi-specialist professor Gary Schwartz of Arizona University claims to have invented a device to communicate with discarnate spirits: the holy grail of survival research, which could possibly offer foolproof scientific evidence of afterlife existence,10 though he also takes account of the potential negative consequences. He claims to have worked with black boxes in his laboratory, using a software programme that has generated proof of a spirit world by measuring light.11 It appears that he has developed a technique whereby faint light can be detected in a totally dark box. Measurements are taken at the beginning of an experimental session; a specific “hypothesized spirit collaborator” is then asked to show a “spirit light” in the box and a second reading is taken. The finding is that an instruction for specific spirits to enter a light-sensing system was associated with a reliable increase in the apparent measurement of photons. Such a curious result means that these communicating spirits are able to hear, respond, and produce light in an otherwise dark enclosure.12,13 The conclusion is that survival research opens up new vistas which seem much more important than cosmology or quantum electrodynamics.
Scientifically examined Marian apparitions are a recent addition to the evidence for discarnate existence.14 Mainstream scientists seem never to have attempted to develop the conceptual tools and vocabulary needed to investigate the possibility of post-mortem existence, and it may be that science will not accept the possibility of discarnate survival without a new theory of physical reality. In the early part of the twentieth century the prevailing view of scientists was that there was no possibility whatsoever of proving the existence of life after death. Attitudes have evolved over the years since then, and in the world we now inhabit some researchers assert that there is scientific evidence for the existence of life after death. Some of the evidence relating to discarnate existence may not, however, satisfy the criteria of the physical sciences, since it rests on speculative science and courtroom logic.
Paradigmatic shift
Demonstrating post-mortem existence as an irrefutable phenomenon is a route to establishing empirically that humans have a higher consciousness. Unfortunately, in survival research there are many phenomena that have multiple possible explanations, and these arguments add to the complexity of this immensely significant area of scientific enquiry. All the types of evidence postulated as supporting discarnate survival are simultaneously a form of evidence of a non-biological component that operates in association with the brain, and the existence of such a component indirectly supports the possibility of survival after physical extinction. A huge paradigmatic shift towards non-reductionism is now taking place in the cognitive sciences: consciousness is no longer considered an epiphenomenon of brain activity, but the designer and prime mover of the material body. Nowadays, some mainstream scientists are themselves paradoxically trying to debunk mainstream science.
Suicide victims
Through suicide, a person simply changes the location of their suffering. While bound to the physical plane by space and time, we are in an advantageous position for inducing personality changes swiftly, whereas in the timeless state of discarnate existence changes are sluggish and personality development is much slower. Contemporary data from survival research may be congruent with the wisdom of the faith traditions.15 To use a simple analogy, carrying out assisted suicide is like destroying the shell of a pupa and forcefully freeing it in a premature state; such a pupa will not be able to fly about like a butterfly. It is arguable that a person subjected to violent death – as in the case of suicide – may not be able to enjoy the beauty of God’s grand other-worldly dimensions until they have become spiritually compatible with them; they have to navigate the physical plane like wingless birds.10 To look at it another way, if unripe fruit drops from a tree, it will be sour. Suicide breaks a solemn law because it deprives the conscious self of the natural growth that life in a physical body can best provide.7 The Chinese saying “One day of earthly existence is not equivalent to a thousand days of ghostly existence” is a statement of the sanctity of terrestrial life.
Lord Alton has campaigned against the Assisted Suicide Bill of 2014 since its inception. Referring to his dying father’s account of how he had seen his own brother, a member of the Royal Air Force who had died in the Second World War, Lord Alton argued that a forced death, as opposed to a natural one, may deprive a person of their “healing moment”.16 A graceful and natural death may be supposed to be accompanied by benign caretaker spirits of exuberant love who assist those who are dying by making them comfortable for the great transition.17,18 A person who terminates their own life prematurely may not be so fortunate as to receive such benevolent assistance from the spiritual realm. Most hospice workers are very familiar with departing and death-bed visions such as that described by Lord Alton. Furthermore, it has been suggested that beings from the imperceptible spiritual sphere who assist in the delivery from the terrestrial plane have a role in such matters as the timing of death, and it is arguable that their part in what happens should not be impeded by intervention.
It appears that the human brain is designed to harbour some doubt about discarnate survival, and foolproof evidence of post-mortem existence may have its downside: somebody who is fed up with life might use it to justify ending their earthly life voluntarily.19 An ultra-optimistic view of discarnate life is spiritually counterproductive, and such over-optimism could be used as a justification by patients and carers in decisions about assisted suicide. In a weak moment of extreme psychological or physical suffering, such a belief can also become the final rationalisation for ending one’s own life voluntarily. In my own clinical practice, I have come across suicidal patients telling me, “It will always be better on the other side.” A belief in discarnate existence based on parapsychological proof alone did not deter one such patient from making a serious suicide attempt.
End-of-life concerns
The evening of life has traditionally been considered a great opportunity for spiritual, emotional and psychological growth and a celebration of one’s life journey. It is also a time to harvest the wisdom of yesteryears and share it with the succeeding generation. Spiritually enlightened people consider this to be the time to conquer the fear of death – not the fear of the physical pain of dying, but the fear of truthful self-judgement after death. Recent observations in thanatology favour a belief in post-mortem self-assessment and appraisal; for some this would be voluntary or assisted, whereas for others it could be forced upon them. The final phase of life is the time to settle errors committed against fellow beings that have not been remedied earlier in life. Fortunately, modern medicine has prolonged this period, granting most people the opportunity to experience conscious ageing. Sadly, traditional attitudes towards the evening of life have changed in today’s youth-obsessed culture. For some, medical procedures have extended life and made dying a lingering process rather than a sudden event, bringing problems of their own. For several reasons, terminally ill people in crisis may wish to die rather than be kept alive longer (Table 1).
From an evolutionary point of view, there can only be a survival instinct – no Freudian death instinct. Avoiding death rather than seeking it is a natural human urge, and the fear of death may affect every individual action; the very concept of euthanasia runs against the human make-up and is entirely artificial. Assisted dying and assisted suicide are the same thing: a member of the medical profession provides a lethal drug to a person so that they can take their own life. Euthanasia is different; it happens when, for example, someone injects a lethal substance into a patient. Involuntary euthanasia refers to a situation in which the patient has the capacity to give consent but has not done so; in non-voluntary euthanasia, the person is unable to give consent, for example because of dementia or being in a coma. Mercy killing is claimed to be a compassionate act to end the life of a patient.
Table 1: End of Life Concerns
Losing autonomy
Less able to engage in activities making life enjoyable
Loss of dignity
Burden on family, friends/caregivers
Losing control of bodily functions
Inadequate pain control or concern about it
Financial implications of treatment
Moral and ethical issues
It has been observed that the risk of suicide is higher among people with a family history of suicide; family culture and genetics may account for the increased incidence in such situations. Assisted suicide would leave a trail in the culture of more families, and more succeeding generations would perhaps be at increased risk of considering suicide as a serious option at a time of crisis. Kevin Yuill (2013) comments that changes in the law on assisted suicide would have an additional impact on those left behind, because of their effect on the moral connections, assumptions and accepted responses to situations on which we base our relationships with fellow human beings and establish ourselves in the world.20 He argues that the legalisation of assisted suicide would undermine freedom instead of promoting freedom of choice, and that the proposed safeguards and regulations would breach the privacy of the death-bed.
Assisting someone to kill themselves is assisting them in murder. According to all the major faith traditions, life is a gift from God and ending it is like throwing a precious object back to the giver. All spiritual traditions teach and believe that bringing the human heart to a standstill is God's business (Table 2). There are patients who assert that even if all their limbs were amputated, they would still want to hold on to the treasured gift of life. It is very difficult to define unbearable suffering; extreme suffering is a subjective matter that cannot be separated from an individual’s outlook on life. A fundamental question is who should pronounce the verdict on when suffering is intolerable – the patient or medical personnel?
Laws are not precision-guided arrows and they may become perverted. In a world full of violence and crime, assisted suicide is unsafe and could be exploited. There would be many unintended consequences. For reasons of public safety alone, some people oppose assisted dying. Financial abuse by relatives of the elderly seems to be becoming more common; those with a vested interest could be tempted to put an inheritance before life.
The regulation of assisted dying has been modified in recent times in some countries, an example being the Netherlands in 2014. There it is now lawful to kill a patient without their consent, and euthanasia and assisted suicide may be offered to people with mental health problems (consensus with the family is required in all these situations). In both Belgium and the Netherlands the euthanasia of children is legal with family consent (in Belgium there is no age limit; in the Netherlands the child must be 12 or above and must give consent). In Belgium, blind adults who were developing further problems were granted euthanasia at their own instigation a few years ago. There is public concern about collaboration between euthanasia teams and transplant surgeons in Belgium.
Table 2: Medical dilemmas
Assisted suicide promotes a human right to commit suicide and sends the wrong message to suicidal patients in psychiatry.
It undermines the Universal Declaration of Human Rights and strikes at the foundations of all spiritual values.
It is hard to define unbearable suffering.
Assisted suicide has many unintended consequences.
Death with dignity could deteriorate into death with indignity.
It might permit the unlawful killing of innocent people in certain circumstances.
It is founded on an unethical principle – survival of the fittest.
In 2013, 1.7% of all deaths in Belgium were hastened without the explicit request of the patient.21 Professor Cohen-Almagor, the author of a 2015 report on euthanasia in Belgium, stated that the decision as to which life is no longer worth living lies not in the hands of the patient but in those of the medical personnel.22 More than 500 people in the Netherlands are subjected to euthanasia without their consent.23 Data from Oregon, where assisted suicide was legalised in 1997, show that the top five reasons people choose assisted suicide do not include suffering from a terminal illness; 49% stated that feeling like a burden and a fear of loss of control were among their main reasons for choosing assisted suicide.24 In Washington State in 2013, 61% of those who died by assisted dying said that being a burden was a key factor in their choice.25
Medical Dilemmas
The majority of British medical practitioners are against assisted suicide.26 A 2013 survey showed that 77% would oppose a change in the current law to allow assisted dying, 18% favoured the RCGP moving to a neutral position, and only 5% favoured a change in the current law. Respondents opined that a change in the law would make patients afraid of their doctor, would alter the doctor–patient relationship, and would put vulnerable patients most at risk from assisted dying. The Marris Bill implies that some people should be given help to die, meaning that some lives are worth less than others; vulnerable people would feel pressured to choose death and could be killed without their explicit consent. GPs feel that it is their privilege to protect the disadvantaged and vulnerable people of society.
Assisted dying would lead to less focus on investment in palliative care. The RCGP also cautioned in the survey that a change in the law could go the way of abortion legislation, which started as something for extreme circumstances and is now effectively available on demand. Respondents were also anxious that legalisation of assisted dying would make it impossible to tell the real reason why patients decided to die, because illness can cause people to become depressed and frightened. As the debate on assisted suicide has intensified, suicidal patients in clinical practice have already started enquiring about the prospect of assisted death.
Thanatology
Medical science has not advanced far enough in matters of death to offer the detail needed for an informed choice by those who want to die voluntarily, and thanatology is only a fledgling science. Thanatology is the scientific study of death: it investigates the mechanisms and forensic aspects of death, such as the bodily changes that accompany death and the post-mortem period, as well as wider psychological, parapsychological and social aspects of death. Thanatologists are not particularly interested in the meaning of life and related philosophical issues, but this is an area where science and philosophy cannot be kept separate. In recent years, Elisabeth Kübler-Ross’s studies of parting visions and Raymond Moody’s studies of near-death experiences27-32 have given a spiritual dimension to thanatology. Theology and thanatology are two major bodies of human wisdom that cannot but overlap. Assisted dying would probably also become an issue for the forensic sciences.
It is the job of the doctor to keep the patient alive, whereas it is the job of the psychotherapist to have a sense of the bigger picture.33 People wanting to hasten death should also have the choice of receiving pastoral and psychotherapeutic assistance to distract them from their preoccupation with death and to allow nature to take its course. New-generation psychotherapists will have to be well versed in all aspects of the death-related sciences. Thanatology has a rightful place in medical studies, but I contend that medical professionals need not be unduly concerned with the different forms of afterlife existence – the borderland between religion and thanatology. Medical professionals are expected to be above religion and politics. Thanatologists now fear that if assisted suicide is legalised, they might be pressured to slip from the original goal of acquiring more knowledge of human dying in order to serve the dying, into the pursuit of death.
Concluding Remarks
Assisted suicide or euthanasia is incongruous with the theological view that it is the weakest and most vulnerable who can teach us the value of life; the concepts of euthanasia and assisted suicide carry an indirect message of discarding them. The right to die would soon deteriorate into a duty to die, making room for the fittest. Instead of looking for reasons to live, people would be looking for reasons to die. What is needed is a better understanding of the death process and advances in the palliative care of the terminally ill, rather than doing away with the dying. Until we know more about the death process, the assisted dying debate should be kept on hold. More research in palliative care, and allowing people to die naturally with dignity, should be the concern of the medical profession.
Evolution may be taking place in both biological and spiritual streams, and the two are interconnected: biological suffering may be aiding spiritual evolution.34 From a philosophical perspective, the rationale of terminal suffering is to help the individual disengage from the “pleasant illusions” of earthly life. The assisted suicide debate raises the question of whether human beings are mere electrical animals, quantum beings or fundamentally spiritual personalities – humans may be all three. The sanctity of human suffering needs to be brought into the equation of the assisted suicide discussion. Assisted suicide would only add to the growing violence of the present world, which could do with a reintroduction of the principles of non-violence.
The USA may have the legal infrastructure to mitigate the unwanted and unintended errors of assisted suicide, but in many third-world countries, where no such legal infrastructure exists, the procedure could easily be abused. Oriental religions as well as the Abrahamic faith traditions are opposed to ending life voluntarily. In general, all faith traditions believe that a life nearing its biological end need not be preserved at all costs and that one does not have to go to extraordinary lengths to preserve a terminally ill person’s life. This means, for instance, that while a terminally ill person should not be denied basic care, he or she may refuse treatment that might prove futile or unduly burdensome – passive voluntary euthanasia.
A scientific belief in after-death existence is not without its pitfalls unless it is accompanied by the spiritual corollary of the sanctity of earthly life. Science alone cannot highlight the sanctity of life; Divine standards are helpful in comprehending its sacredness. In fact, science has taken us to a crossroads with Professor Schwartz’s new instrumental communication, and it is time to mark the boundary between healthy and unhealthy survival research.
This case highlights a rare and interesting medical condition brought about by liquorice ingestion. While there have been many previous reports of liquorice toxicity secondary to eating confectionery, I have found only one case report of liquorice toxicity secondary to liquorice tea ingestion, and the patient there had only mild hypokalaemia and, unlike Patient X, did not require hospital admission.1
Case presentation
Patient X is a usually fit and well 49-year-old woman who was admitted following a collapse. Prior to this collapse she had experienced a 3-week history of gradual onset, slowly worsening dull headache and a feeling of tingling in her hands which increased over this timeframe. On the day of admission, she experienced nausea. Her past medical history included migraines, for which she self-medicated with paracetamol, ibuprofen and codeine as necessary. She was also using Cerazette. She was in full time work, was a lifelong non-smoker and used alcohol very rarely. She had no family history of note.
On examination, Patient X was a slim individual who was hypertensive with a blood pressure of 170/100 mmHg. Her other observations were normal. Her general and neurological examinations were unremarkable.
While the medical team initially felt that Conn’s syndrome was the likely cause of the abnormalities, this was reconsidered on a ward round when Patient X, having researched her abnormalities on the internet, told the team that she had been consuming six to eight liquorice tea infusions per day for the past two months, and around three a day for a significant period (roughly 18 months) before that. The diagnosis was made on the basis of her history.
Investigations
A venous gas demonstrated a metabolic alkalosis with a pH of 7.57.
Blood tests revealed hypokalaemia (K+ = 2.2 mmol/L) and hypophosphataemia (PO4 = 0.35mmol/L). No other abnormalities were detected.
More specialist assays were undertaken, and demonstrated that her plasma renin was 2.3 ng/mL/hour (normal range 0.2 – 3.3 ng/mL/hour). Her morning supine plasma aldosterone level was 29 pg/mL (normal range 30 – 160 pg/mL). Her morning plasma cortisol level was 612 nmol/L (normal range 138-635 nmol/L).
In addition, the patient also underwent a CT head and a CT renal angiogram, both of which were normal.
Differential diagnosis
Apparent mineralocorticoid excess
Exogenous mineralocorticoid excess
Liddle’s syndrome
Congenital adrenal hyperplasia
Cushing syndrome
Liquorice
Treatment
Intravenous potassium and phosphate replacement, cessation of liquorice intake and amlodipine 5mg OD.
Outcome and follow-up
The patient was discharged on amlodipine with a blood pressure of 132/66 mmHg, a potassium level of 2.9 mmol/L and a phosphate level of 1.16 mmol/L. She was given oral potassium supplementation to be taken three times per day. After two weeks, the amlodipine was stopped as she became mildly hypotensive; her blood pressure following cessation was 120/60 mmHg. Three months later, her plasma potassium level was 4.6 mmol/L without supplementation. Patient X required no follow-up, although she reports an ongoing feeling of tingling in her hands.
Discussion
Liquorice is an extract of the roots of the Glycyrrhiza glabra plant. It has long been used as a herbal remedy and a laxative, and is commonly used as a flavouring agent in chewing gum, sweets and other food products.
The active ingredient in liquorice is glycyrrhetinic acid, which inhibits the enzyme 11-β-hydroxysteroid dehydrogenase. This enzyme converts cortisol into inactive cortisone within the distal tubule of the kidney, so in liquorice toxicity there is a build-up of cortisol in distal tubular cells.2 Because of the structural similarities between cortisol and aldosterone, this results in increased mineralocorticoid-like activity, with increased Na+ and water retention in conjunction with increased H+ and K+ excretion.3 The picture is one of apparent hyperaldosteronism, but with low or low-normal plasma aldosterone and renin levels. Serum glycyrrhetinic acid levels can be measured with enzyme-linked immunosorbent assay (ELISA) and high-performance liquid chromatography (HPLC); urinary glycyrrhetinic acid levels can be measured with gas chromatography-mass spectrometry (GC-MS).4
Pseudoaldosteronism secondary to liquorice consumption is a relatively rare occurrence. Case reports demonstrate a range of clinical manifestations from an asymptomatic patient fortuitously diagnosed to those with more severe presentations such as rhabdomyolysis, hypertensive encephalopathy, asthenia, paralysis, heart failure, and cardiac arrhythmias such as polymorphic ventricular tachycardia and ventricular fibrillation secondary to hypokalaemia. For these reasons, it has been suggested that the public should be made aware of the potential dangers associated with liquorice consumption. 5-13
The combination of alkalosis, hypokalaemia and hypertension suggests increased mineralocorticoid activity leading to increased renal tubular Na+ reabsorption along with increased K+ and H+ excretion. Both primary and secondary hyperaldosteronism cause these abnormalities: the former through an inappropriate release of aldosterone from the adrenal cortex, often the result of an adrenal adenoma; the latter through an appropriate response (renin release) to decreased renal perfusion pressure or decreased sodium concentration in the ultrafiltrate.
Other genetic syndromes, such as Bartter’s or Gitelman’s, cause hypokalaemia with alkalosis but without hypertension.14
Features of low potassium include generalised weakness and lethargy, ascending paralysis, and rhabdomyolysis.15-17 Decreased intake is rarely a cause of low potassium as the western diet usually contains significantly more potassium than is needed and because the renal tubular reabsorption mechanism can be extremely effective in limiting potassium excretion.17
The maximum recommended dose of liquorice is 100 mg/day, although cases of liquorice toxicity have been reported in association with doses as low as 80 mg/day. Each liquorice tea bag contains approximately 500 mg of glycyrrhetinic acid, of which approximately 20 mg is ingested per infusion.
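Using these figures, the patient's reported consumption can be checked against the recommended maximum. The sketch below (illustrative only, and relying on the approximate 20 mg-per-infusion estimate quoted above) shows that six to eight infusions per day corresponds to roughly 120–160 mg/day, well above the 100 mg/day limit, while the earlier habit of three infusions per day remained below it:

```python
# Back-of-the-envelope estimate of daily glycyrrhetinic acid intake from
# liquorice tea, using the approximate per-infusion figure quoted above.
MG_PER_INFUSION = 20       # approx. glycyrrhetinic acid ingested per infusion
MAX_RECOMMENDED_MG = 100   # maximum recommended daily dose

def daily_intake_mg(infusions_per_day: int) -> int:
    """Estimated daily glycyrrhetinic acid intake in milligrams."""
    return infusions_per_day * MG_PER_INFUSION

# Patient X reported six to eight infusions per day in the two months
# before admission, and around three per day for roughly 18 months prior.
for n in (3, 6, 8):
    intake = daily_intake_mg(n)
    status = "above" if intake > MAX_RECOMMENDED_MG else "below"
    print(f"{n} infusions/day -> ~{intake} mg/day "
          f"({status} the {MAX_RECOMMENDED_MG} mg/day maximum)")
```

This simple calculation makes the diagnosis plausible on history alone: the recent intake exceeded the recommended maximum for a sustained period.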
This is a relatively rare occurrence, and it has been suggested that certain groups are more susceptible to toxicity than others – for example, those with 11-β-hydroxysteroid dehydrogenase deficiency.18 Those with essential hypertension are also thought to be at greater risk.19
Learning points/take home messages
Consumption of liquorice can cause pseudoaldosteronism.
The clinical picture is similar to that of primary aldosteronism, but is characterised by low levels of both aldosterone and renin.
While liquorice toxicity can be asymptomatic, clinical manifestations are wide ranging and include cardiac arrhythmias, rhabdomyolysis, weakness and paralysis.
Pseudohyperaldosteronism caused by liquorice consumption is reversible and generally resolves upon cessation of liquorice consumption. Prior to resolution, potassium supplements are usually necessary.
The Royal College of Psychiatrists (RCPsych) launched its five-year Recruitment Strategy in 2011 aiming to achieve a 50% improvement in recruitment to core psychiatry training and a 95% fill rate of posts by the end of the five-year campaign (1). The primary focus of this campaign was on recruiting UK medical graduates.
Two of the Strategy’s main aims were to highlight good practice in undergraduate teaching and to improve the teaching skills of psychiatrists to inspire and influence medical students during their psychiatry curriculum.
The Strategy stressed the importance of good clinical placements in psychiatry and recommended that medical students should ideally be placed only with ‘the best teachers and welcoming teams’ avoiding colleagues who are disillusioned with psychiatry or not happy with their jobs.
It is therefore essential that psychiatrists and other clinicians play an important role in improving medical student placements in their workplace, in order to give students a positive experience of this speciality and, hopefully, to promote it as a future career option.
Background
Fourth-year medical students from the University of East Anglia (UEA) spend two months rotating through various mental health services as part of their clinical placement in the Mind Module (also known as Clinical Psychiatry or Module 11).
As part of this rotation, students are placed in Old Age Psychiatry for six days over a two-week period. They shadow clinicians in two community teams, two inpatient wards and the Electro-Convulsive Therapy (ECT) clinic. All of these teams are based at the Julian Hospital in Norwich.
The students are encouraged to talk to patients and carers and perform basic clinical tasks such as mental state examination and risk assessment. Table 1 summarises the learning outcomes for students during their placement.
Table 1- The learning outcomes for students during their Old Age Psychiatry placement
Gain clinical experience of diagnosis and management of mental health problems (including dementia) in older people.
Improve communication skills with regard to interactions with older people with mental disorders and their carers.
Enhance the student’s understanding of the nature of the multi-disciplinary team (MDT) model in mental health for older people, particularly the social aspect of care and end of life care.
After each rotation, each student is asked to complete a feedback form regarding their placement. This feedback helps the module leads and clinicians to improve the students’ learning experience.
Before the implementation of our placement improvement project, the students did not feel that they were meeting their learning outcomes. Table 2 summarises the major areas that needed improvement.
Table 2- Areas needing improvement before the implementation of the new placement structure
Poor planning and organisation of the clinical placement.
Inadequate or no information sent before starting the placement.
Lack of a dedicated coordinator to design the placement timetable and allocate students to specific clinicians
Lack of multidisciplinary teaching and hence poor understanding of the various roles of professionals (e.g., memory assessors, community nurses, support workers, etc.).
Students felt that clinicians were too pressured to supervise them. Some students reported that they were sometimes sent away because staff were too busy or because there were insufficient volunteers among clinical staff to take a student.
Student dissatisfaction with clinical placements is not unique to psychiatry. Research has shown that educators and learners face significant challenges when teaching and learning take place in any clinical setting. See Table 3 for a summary.
Table 3- Challenges of teaching in clinical settings (modified from Spencer, 2003)
Limited clinician time allocated to teaching activities.
High number of students allocated to few clinicians.
Difficulty in seeing patients (e.g., patients refusing the presence of a student).
Clinical setting is not ‘teacher friendly’ (overcrowded, too small, noisy and/or lacking privacy to interview and examine patients).
Lack of rewards and recognition for the clinical educators.
One of the biggest challenges of teaching in clinical settings is providing a welcoming and supportive learning environment in a busy and time-constrained practice. We found that one of the main reasons clinicians are reluctant to have students shadowing them is the challenge of fulfilling the dual role of caring and teaching simultaneously.
The placement improvement project
The improved structure of the students’ placement in Old Age Psychiatry was based on the tenet that clinical placements should provide varied clinical experiences, including interaction with patients and with professionals of various grades, in addition to face-to-face teaching in small groups (3). The authors took over full responsibility for coordinating the students’ placements and liaising with the various supervising clinical teams. This ensured clear leadership and consistency in organising the placement.
The improved placement structure started in October 2015 with the first cohort of medical students coming to their clinical placement after the summer break. Table 4 gives a summary of the changes implemented.
Table 4- Changes to improve the clinical placement in Old Age Psychiatry
Compiling a ‘welcome pack’ and sending it by email to the students before the clinical placement.
Introduction of a “Meet and Greet” event on the first day of the clinical placement, delivered by several clinicians operating on a rota basis.
Involvement of all professionals in the MDT (including Staff and Associate Specialists, community and memory nurses, junior doctors and clinical psychologists in addition to consultant psychiatrists).
Introduction of a Balint-style psychotherapy group aiming to facilitate discussion in a safe and containing environment of the emotional impact of patients encountered.
Designing a weekly one-hour teaching session supervised by a senior clinician and facilitated by a trainee psychiatrist.
Each clinician received a formal letter of thanks from the Head of Norwich Medical School, the Module Lead and the Secondary Care Lead certifying their contribution to the education of medical students and thanking them for their work.
The information pack sent to the students before the placement contained information about the hospital environment (location, map, parking, travel arrangements, key codes and useful contact numbers) and a detailed timetable (and email address) of the clinician supervising the student each day during the placement. Also, it included useful information about the mental state examination and the Mental Health Act, information that had been requested previously by medical students.
Sending information before a placement has been shown to be beneficial in student electives (4), and this is especially important in psychiatry, which can be experienced as less structured than other medical specialities and where students are required to travel to various hospital and clinic bases. As a result, students felt that they were expected, and had a clearer sense of where they should be and who would be supervising or teaching them. Later student feedback reported that these changes had contributed directly to an improved learning experience.
The timetable design ensured that every student had the chance to experience working in several settings in Old Age Psychiatry, including the community teams, inpatient wards, ECT and the Memory Clinic. It was also noted that a two-week placement in any psychiatric team could not easily give a student a sense of patient ‘recovery’. It was therefore decided that students should see a patient who had been discharged from the ward, for example with the care coordinator.
The rota for the ‘Meet and Greet’ event on the first day of the placement ensured that the workload was spread among the clinicians and helped sustain the necessary levels of enthusiasm and energy. Previously, this task had repeatedly fallen to just a few clinicians.
The participation of all professionals in the clinical team in supervision and teaching helped the students to better understand the different roles of clinicians within the multidisciplinary team and enriched their learning experience. To achieve this, we attempted to allocate sessions with a clinical psychologist, care coordinators, memory assessment nurses and members of the intensive support team. It also had the bonus of ensuring that the workload of teaching was spread more equally among clinicians.
Attendance at ward rounds and community MDT meetings could be a valuable experience but only if the process is explained, and – in the ward round – the student is briefed on the clinical history and background of the patients. For these reasons, supervising clinicians were reminded to give this information to the students attending such meetings.
The weekly teaching sessions provided an opportunity for the students to present case histories of patients they had seen and to discuss their management. Clinicians could also give a formal didactic teaching on a specific topic, for example, mental state assessment or risk assessment in psychiatry.
The letters of thanks to the participating clinicians served as an added benefit (in addition to the satisfaction of teaching others) to sustain their motivation and reward them for their contribution to the teaching of medical students. The psychiatric trainees used the letter to demonstrate their skills in teaching in their portfolio.
Benefits of the new placement structure
Helping students to feel supported before, during and even after their placements was a high priority in this project. Research has shown that learners rank the need for support and guidance in workplace environments highly, and that it is an essential requirement for a successful learning experience (5). This extra support is particularly crucial in psychiatry, which is perceived by many students to be difficult and challenging (6).
The support provided to the students in the improved structure was in the form of having the contact details of the rota coordinators, their supervising clinicians, the administration team (medical secretaries, site manager for parking permits) and some other useful numbers for various locations and clinics.
While improving the organisation of the placement, the changes also aimed to reduce the teaching and supervision commitment for clinicians and to spread it more equally among the members of the team.
Students reported that home visits were the most useful and most interesting part of their placement. This is an invaluable experience, with the student having a significant amount of one-to-one time with a clinician (including during travel from one location to another) and then observing the clinician ‘in action’ with patients at home. This experience highlights the role of ‘professional socialisation’ (7), which educators consider a significant process in the development of a sense of shared professional identity and responsibility in both the clinician and the learner.
Furthermore, using non-medical professionals to supervise and teach students has been valued by students (8). It enriched the clinical placement with inter-professional learning, that is, active learning from and with professionals from other disciplines allied to medicine. This style of education has been shown to improve students’ communication with professionals from different disciplines and to give them a better understanding of the nature of multidisciplinary teamwork and the roles of each member of the team (9).
Balint groups and improving student placements in psychiatry
Balint groups were pioneered by the Hungarian psychoanalyst Michael Balint who introduced this model in the late 1950s after running seminars for general practitioners in the UK with his wife, Enid. (10)
Balint recognised the intense emotions that affect the doctor and the patient and encouraged clinicians to talk about these feelings in groups, which later came to be known as Balint groups.
Research has shown that Balint groups for medical students can increase the students’ empathy towards patients with chronic mental illness and improve their ability to cope with complex clinical situations (11). They also help students to engage in reflection about their professional growth and to develop their identity as future doctors (12). Most importantly, this psychotherapeutic approach allows them to reach a deeper understanding of the emotional impact of their patients (13). It was felt that the students would benefit from this model to help with the various emotions evoked by the patients they would encounter in Old Age Psychiatry, in particular those with dementia.
The student feedback on the Balint group was very positive. One student commented: ‘It is inevitable to have experiences with patients that leave you with a feeling, whether that be positive or negative. To be able to look back at those times, talk them through, be listened to and have others reflect things back that you may not necessarily have realised yourself, is invaluable’.
Patient and carer involvement in clinical education
Clinical education in the workplace has always depended on patients and carers in its design and delivery. Students value seeing patients and learning from their experiences. However, the evidence suggests that patients are not routinely involved in the design of the curriculum or clinical placements despite calls to actively engage them in teaching and training (14).
Students were given the opportunity to learn from patients and carers through regular and supervised contact with them. They also attended a workshop on dementia and viewed a DVD showing the experiences of a woman with dementia and depicting how the world might be seen from her perspective. Feedback from students was very positive for these opportunities.
Medical student placements and Electro-Convulsive Therapy (ECT)
Students are allocated to spend one day in the ECT clinic during their two-week placement in Old Age Psychiatry. Research has shown that many medical students have negative attitudes and unjustified reservations about ECT and its therapeutic applications (15). However, these views can change with education about this therapy during clinical placements and encouragement of the students to talk to patients and read about its indications and effectiveness in people with severe mental illness (16). Seeing the procedure first hand can therefore help students gain the confidence to challenge the stigma attached to ECT and to explain this treatment to their future patients.
Feedback from the students following the implementation of the placement improvement project
The feedback from medical students and clinicians was very positive. The students enjoyed their placements and felt that they gained useful knowledge and skills. Above all, they felt welcomed in the clinical settings and settled very nicely into the teams.
Figures 1 and 2 summarise some comments from the medical students following the placement. This feedback was collected by Norwich Medical School as part of the regular monitoring of clinical placements for medical students.
Figures 1 and 2: Feedback from students after the implementation of the changes to the clinical placement:
‘Best part of placement. Doctors were all happy to have us and teach. It was well organised, I felt that we were welcomed and always expected. It was varied and generally useful to my learning needs’. Student ID 69. End of Module 11 feedback.
‘This was one of the best placements in psychiatry, each doctor was very helpful and especially keen on teaching. It was really good to not only see the patients on the ward but so helpful to go on home visits to see assessments in patients own home. Really enjoyed this placement’. Student ID 95.
Limitations
There were some challenges in the implementation of this improved model. First, it is not always easy to recruit non-medical members of the clinical teams to take students. There are several reasons for this, including a lack of confidence or experience in teaching, a belief that it is “not their role”, or concern about the increasing demands on their time. Others already had students in their own discipline. This was addressed by briefing the professionals about what the students need to achieve by the end of the placement and encouraging them to be involved in the supervision. The introduction of nursing revalidation in April 2016 may encourage more nurses to get involved. (17)
Conclusions and recommendations
This paper describes a clinical placement improvement project for medical students in Old Age Psychiatry. The changes focused on the enhancement of organisation, supervision and teaching.
Our improvement project is ongoing, and there are areas needing further improvement, for example, more active involvement of patients and carers in the teaching and learning of medical students is necessary. It is planned to achieve this by inviting patients and carers to tell their personal stories to the students in a small group.
Organisers of student placements in secondary or primary care need a systematic approach to filling allocation slots to ensure that all students receive a similar and broad exposure to the speciality. It can be dispiriting and stressful to ask for volunteers constantly. Organisers need good relationships with clinical colleagues of all disciplines, and the willingness and assertiveness to approach colleagues in person rather than sending email requests.
Psychiatric educators have a significant role to play in the improvement of clinical placements for students as this will hopefully contribute to improving recruitment to this medical speciality that is undergoing a recruitment crisis. Research has shown that there is a positive correlation between the length and quality of clinical placement and the likelihood of choosing psychiatry as a future career. (18)
Depression is a clinical syndrome. The International Classification of Diseases (ICD) diagnostic classification system describes three core symptoms of depression: low mood, anhedonia and reduced energy levels1. Other symptoms include impaired concentration, loss of confidence, suicidal ideation, disturbances in sleep and changes in appetite. Symptoms must have been present for at least two weeks for a diagnosis of depression to be made. Major depression refers to the presence of all three core symptoms together with, in accordance with ICD criteria, at least five of the other symptoms1. See Table 1 for the severity criteria of a depressive episode according to ICD criteria.
Table 1: Severity criteria of a depressive episode according to ICD-101
Criteria A – General: symptoms present for at least 2 weeks; symptoms not attributable to psychoactive substance use or an organic mental disorder.
Criteria B – Presence of ≥2 of the following: low mood; anhedonia; reduced energy levels/increased fatigability.
Criteria C – ‘Other’ symptoms: loss of confidence and self-esteem; feelings of guilt; suicidal thoughts; impaired concentration/ability to think; changes in psychomotor activity; sleep disturbance; changes in appetite with weight changes.
Criteria for severity of depressive episode:
Mild episode: 2 symptoms of criteria B
Moderate episode: ≥2 symptoms of criteria B + symptoms of criteria C until a minimum of 6 symptoms in total
Major episode: all 3 symptoms of criteria B + symptoms of criteria C until a minimum of 8 symptoms in total
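The counting rules in Table 1 amount to a simple decision procedure. The sketch below is illustrative only (the function and variable names are our own, not part of ICD-10) and assumes criteria A (duration and exclusion criteria) are already satisfied:

```python
def classify_episode(b_count, c_count):
    """Classify a depressive episode from symptom counts, per Table 1.

    b_count: number of core (criteria B) symptoms present, 0-3
    c_count: number of 'other' (criteria C) symptoms present
    Assumes criteria A (>=2 weeks' duration, not attributable to
    substance use or an organic mental disorder) are already met.
    """
    total = b_count + c_count
    if b_count == 3 and total >= 8:
        return "major"      # all 3 core symptoms, >=8 symptoms in total
    if b_count >= 2 and total >= 6:
        return "moderate"   # >=2 core symptoms, >=6 symptoms in total
    if b_count >= 2:
        return "mild"       # 2 core symptoms suffice for a mild episode
    return "sub-threshold"  # episode criteria not met
```

For example, a patient with all three core symptoms and five ‘other’ symptoms meets the criteria for a major episode, whereas two core symptoms alone correspond to a mild episode.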
Depressive symptoms, which can be clinically significant, can be present in the absence of a major depressive episode. Depressive symptoms are those that do not fulfil diagnostic criteria for a diagnosis of depression to be made. Depressive symptoms can be collectively referred to as sub-threshold depression, sub-syndromal depression or minor depression2.
It has been proposed that there are two types of depression: early-onset and late-onset. Late-onset depression refers to a new diagnosis in individuals aged 65 years or older. Over half of all cases of depression in older adults are newly arising (i.e. the individual has never experienced depression before) and are thus of the late-onset type. Late-onset depression is associated more with structural brain changes, vascular risk factors and cognitive deficits. It has been suggested that late-onset depression could be prodromal to dementia3.
The King's Fund has estimated that by 2032 the number of older adults aged 65-84 years will have increased by 39%, whereas the number over the age of 85 years will have increased by 106%4. This population increase will consequently see the incidence and prevalence of depression rise. By 2020 it is estimated that depression will be the second leading cause of disability in the world regardless of age5. Recognising, and so diagnosing, depression in older adults will become more important because of its physical health consequences, its impact upon healthcare utilisation and its greater economic costs, all of which place greater demand on existing healthcare services and provisions.
Presentation of depression in older adults
The presentation of depression in older adults is markedly different from that in younger adults. The most significant and fundamental difference is that depression can be present in the absence of an affective component, i.e. subjective feelings of low mood or sadness are not experienced3,6-9. This absence of an affective component is referred to as ‘depression without sadness’8-9. Instead, it is common for older adults to report a lack of feeling or emotion when depressed8-9.
Anhedonia is also less prevalent in this population. However, reduced energy levels and fatigue are frequently reported8-9.
Compared to younger adults, psychological symptoms of depression occur more frequently and are more prevalent in older adults10. Such psychological symptoms include feelings of guilt, poor motivation, low interest levels, anxiety related symptoms and suicidal ideation. The presence of irritability and agitation are key features as well7. Hallucinations and delusions are also more common in older adults, particularly nihilistic delusions (i.e. a person believing their body is dead or a part of their body is not working properly or rotting).
Cognitive deficits are characteristic of depression in older adults7,11 and are described as ‘substantial and disabling’12. Such deficits mainly concern executive function13-14. Pseudodementia is a phenomenon seen in older adults15. The term refers to cognitive impairment secondary to a psychiatric condition, most commonly depression16. Pseudodementia has become synonymous with depression. Pseudodementia can be mistaken for an organic dementia and so older adults who are depressed can present primarily to mental health services with memory problems. Pseudodementia is classically associated with ‘don’t know’ answers, whereas older adults with a true dementia will often respond with incorrect answers17.
‘Depression-executive dysfunction syndrome’ is a more specific and descriptive term to describe the cognitive deficits found in older adults with depression14. It is associated with psychomotor retardation, which can be a core feature of depression in this population7,14,18. Psychomotor retardation describes a slowing of movement and mental activity19. Like pure cognitive deficits, psychomotor retardation contributes significantly to functional impairment19. Both executive dysfunction and psychomotor retardation have been found to be related to underlying structural changes in the frontal lobes14, 20-21. Psychomotor retardation is further related to white matter changes in the motor system, which leads to impaired motor planning21. There is conflicting evidence of whether the presence of psychomotor retardation is related to depression severity18-19.
Somatisation and hypochondriasis are associated with depression in older adults and with increasing age in general22-23. Somatisation is often overlooked in older adults by healthcare professionals, who actively search to attribute such symptoms to a physical cause. Somatisation is more common in those who have physical comorbidities, and in older adults it is associated with structural brain changes and cognitive deficits24.
Depression in older adults is associated with functional impairment cognitively, physically and socially7,12,25. Such functional impairment is linked to loss of independent function and increased rates of disability26. Withdrawal from normal social and leisure activities can be marked7,25. Social avoidance reduces interaction with others and is often a maintaining factor for depression25.
Self-neglect is a classical feature of depression7, with the presence of depressive symptoms in older adults being predictive of it27. Behavioural disturbances can be a common mode of presentation, especially for older adults living in institutionalised care 6-7. Behavioural disturbances include incontinence, food refusal, screaming, falling and violence towards others7.
Diagnostic difficulties
Depression in older adults has been a condition that is consistently under-recognised. Several issues account for this. Firstly, phenomenological differences are present. Many have argued that phenomenological issues contribute heavily to diagnostic difficulties28; neither the DSM nor the ICD classification system has specific diagnostic criteria for depression in older adults. Potentially invalid diagnostic criteria for depression in older adults could result in fundamental difficulties in understanding, with consequent impact on both clinical practice and research.
Diagnostic difficulties are also encountered because depression in older adults can present with vague symptoms, which do not correspond to the classical triad of low mood, low energy levels and anhedonia, which can all be cardinal symptoms in a younger population. Reports of fatigue, poor sleep and reduced appetite can be attributed to a host of causes other than depression and therefore it is no surprise that a diagnosis of depression is overlooked and goes undetected by healthcare professionals29.
The absence of an affective component (i.e. low mood) can lead to healthcare professionals disregarding the potential for the presence of depression and consequently not exploring for other symptoms.
Furthermore, symptoms of depression, especially somatic ones, are often attributed to physical illnesses. Depressive somatic symptoms often lead to a diagnosis of depression being overlooked; such symptoms ‘mask’ the clinical diagnosis of depression, hence the term ‘masked depression’30. Depressive somatic symptoms – e.g. low energy levels, insomnia, poor appetite and weight loss – are often attributed to physical illness and/or frailty by both the individual and the healthcare professional7-8, 31.
Further complicating diagnostic difficulties and under-recognition is the fact that older adults are less likely to report any symptoms associated with mental health problems and to ask for help in the first place7,10,32; explanations for this include older adults being less emotionally open, having a sense of being a burden or nuisance, and believing symptoms are a normal part of ageing or secondary to physical illness7,10,29,33. Older adults are also reluctant to report mental health problems because of perceived stigma; many hold the view that mental health problems are shameful, represent personal failure and lead to a loss of autonomy7.
There is an overlap between symptoms of depression and symptoms of dementia. It is quite common for older adults with dementia to initially present with depressive symptoms. Depression has a high incidence in those with dementia, especially those with vascular dementia. Depression is particularly difficult to diagnose in dementia due to communication difficulties; diagnosis is often based on observed behaviours8,33.
Depression and comorbidity in older adults
In those with pre-existing physical health problems, depression is associated with deterioration, impaired recovery and overall worse outcomes34. For example, the relative risk of increased morbidity related to coronary heart disease is 3.3 in comparison to individuals without depression35. Mykletun et al. established that a diagnosis of depression in older adults increased mortality by 70%36. Several causative routes account for poor physical illness outcomes. Older adults with depression are less likely to report worsening health. Depressive symptomatology indirectly affects physical illness through reduced motivation (often secondary to feelings of helplessness and hopelessness) and engagement with management. Poor compliance with management advice, notably adherence to medications is observed37. Feelings of hopelessness, helplessness and negativity will contribute to the failure to seek medical attention in the first place or report worsening health when seen by a healthcare professional.
Depression affects biological pathways directly, which impairs physical recovery. Such biological effects include pro-inflammatory factors, metabolic factors, impact upon the hypothalamic-pituitary axis and autonomic nervous system changes38.
Older adults who are depressed are more likely to have existing physical health conditions and more likely to develop physical health conditions15. Depression is particularly associated with specific physical illnesses: cardiovascular disease and diabetes mellitus. A study by Win et al. found that cardiovascular mortality is higher in older adults with depression because of physical inactivity; the study established that physical inactivity accounted for a 25% increased risk of cardiovascular disease39. The relationships between depression and cardiovascular disease, and between depression and diabetes, have been described as “bidirectional”38.
Higher incidences of cardiovascular disease and diabetes mellitus are seen in people with depression regardless of age. A study by Brown et al. found that older adults with depression had a 1.46 relative risk increase for developing coronary heart disease compared to those without depression40. The hypothalamic-pituitary axis dysfunction found in depression leads to increased levels of cortisol, which in turn increases visceral fat. Increased visceral fat is associated with increased insulin resistance, promoting diabetes mellitus, and increased cardiovascular pathology38.
Depression is a risk factor for the subsequent development of dementia; this is especially so if an older adult has no previous history of depression (i.e. depression is late-onset)13.
Healthcare utilisation and economic impacts
Older adults are less likely to report depressive symptoms to healthcare professionals, explaining the under-utilisation of mental health services for depression32,41. Despite under-utilising mental health services, older adults over-utilise other healthcare services26,41. For example, those presenting with non-specific medical complaints or somatisation have been found to make increased use of healthcare services. Non-specific medical complaints and somatisation lead to an unnecessary use of resources, such as unnecessary consultations with healthcare professionals and investigations41. This increase in service utilisation means an increase in the associated economic cost of depression in older adults41-43.
Healthcare costs of older adults with a comorbid physical illness and depression are far greater than those without depression – findings in diabetes mellitus are a good example43. The majority of the increased healthcare costs are associated with the chronic physical disease and not the care and treatment of the depression44. Poor compliance with physical illness management is associated with missed appointments and a greater number of hospital admissions, which both have financial implications.
Aetiology and associations of depression in older adults
Late-onset type depression in older adults has been associated with the term ‘vascular depression’45-47. Studies have found a significantly higher rate and severity of white matter hyperintensities on MRI imaging in older adults with depression compared to those without depression46,48,50. White matter hyperintensities represent damage to nerve cells; such damage results from hypo-perfusion of the cells secondary to small blood vessel damage49. White matter hyperintensities are associated with vascular risk factors (e.g. age, hypertension, hypercholesterolaemia, obesity, diabetes mellitus, smoking) and are linked to cerebrovascular disease, such as stroke and vascular dementia. A relationship has been found between psychosocial stress and the consequent development of vascular risk factors, which further supports the hypothesis of ‘vascular depression’46. Clinically, ‘depression-executive dysfunction syndrome’ and psychomotor retardation are associated with vascular changes48.
In older adults with depression, white matter hyperintensities are associated with structural changes to corticostriatal circuits and subsequent executive functional deficits. Loss of motivation or interest and cognitive impairment in depression are hallmark features of structural brain changes associated with the frontal lobes, which in turn are associated with a vascular pathology20. A study by Hickie et al. established that white matter hyperintensities in older adults with depression are associated with greater neurological impairment and poorer response to antidepressant treatment50. It is not fully understood why vascular depression responds less well to antidepressants; poor response has been linked directly to vascular factors but has also been associated with deficits in executive function46-47.
The relationship between cerebrovascular disease and depression is described as ‘bi-directional’45,51; depression has been found to cause cerebrovascular disease and vice versa51. Baldwin et al. direct the reader to the presence of post-stroke depression and the occurrence of depression in vascular dementia45.
Younger and older adults share a number of fundamental risk factors for depression, such as female gender, personal history and family history7. Older adults have additional risk factors related to ageing, which are not just physiological in nature.
Age related changes:
Age related changes occurring in the endocrine, cardiovascular, neurological, inflammatory and immune systems have been directly linked to depression in older adults3.
The normal ageing process sees changes to sleep architecture and circadian rhythms with resultant changes to sleep patterns52. Thus sleep disturbances are common in older adults and positively correlated to advancing age52; over a quarter of adults over the age of 80 years report insomnia, and research has well-established that this is a risk factor for depression53-54. A meta-analysis by Cole et al. found sleep disturbances to be a significant risk factor for the development of depression in older adults53.
Sensory impairment:
Sensory impairments, whether secondary to ageing or to a disease process, are risk factors53,55. Research has found that hearing and vision impairments are linked to depression56. A sensory impairment can lead to social isolation and withdrawal, which, in turn, are further risk factors for depression.
Physical illness:
Physical illness, regardless of age, is a risk factor for depression. Older adults are more likely to have physical illnesses and so in turn are more at risk of depression. See Table 2. Physical illness is associated with sensory impairments, reduced mobility, impairment in activities of daily living and impaired social function, all of which can lead to depression. Physical illnesses associated with chronicity, pain and disability pose the greatest risk for the subsequent development of depression7,53,55. Physical illness affecting particular systems of the body, such as the cardiovascular, cerebrovascular and neurological, are more likely to cause depression3. Essentially, however, any serious or chronic illness can lead to the development of depression. It should be noted that a large proportion of older adults have physical illness but do not experience depression symptoms, therefore other factors must be at play5,57.
Treatments of physical illness are also directly linked to the aetiology of depression; certain medications are known to cause depression: cardiovascular drugs (e.g. propranolol, thiazide diuretics), anti-Parkinson drugs (e.g. levodopa), anti-inflammatories (e.g. NSAIDs), antibiotics (e.g. penicillin, nitrofurantoin), stimulants (e.g. caffeine, cocaine, amphetamines), antipsychotics (e.g. haloperidol), anxiolytics (e.g. benzodiazepines), hormones (e.g. corticosteroids), and anticonvulsants (e.g. phenytoin, carbamazepine)7,29. Polypharmacy is present in many older adults, further increasing the risk of depression. Pharmacokinetic and pharmacodynamic age-related changes also contribute to an increased risk of medication-induced depression in older adults.
Table 2: Table of physical illnesses associated with depression3,7
Dementia is common in old age, and those with dementia are at higher risk of developing depression than those without58. Some 20-30% of older adults with Alzheimer’s disease have depression59. Depression is, in turn, a risk factor for the subsequent onset of dementia.
Psychosocial:
When compared to younger adults, older adults are at a greater risk of developing depression due to the increased likelihood of experiencing particular psychosocial stressors, in particular adverse life events. Stressors include lack of social support, social isolation, loneliness and financial hardship. Financial hardship and functional impairment often sees older adults downsizing in property. Deteriorating physical health often sees older adults no longer being able to manage living independently at home necessitating a move into institutional living. Bereavement, especially spousal, and the associated role change that follows this are risk factors for depression3.
Sub-threshold depression:
Sub-threshold depression is an established risk factor for major depression.
Prevalence and epidemiology
The prevalence of depression in older adults in England and Wales was found to be 8.7% in 2007; however, if those with dementia are included this figure rises to 9.7%60. A meta-analysis by Luppa et al. established a 7.2% point prevalence of major depression and a 17.1% point prevalence of depressive disorder in older adults61. The projected lifetime risk of an older adult developing major depression by the age of 75 years old is 23%62.
Sub-threshold depression is 2-3 times more prevalent than major depression in older adults26,63. These depressive symptoms are often clinically relevant26,29. Each year, 8-10% of older adults with sub-threshold depressive symptoms go on to develop a major depressive episode63.
Incidence and prevalence are greater in women; 10.4% of women over the age of 65 years have depression compared to 6.5% of men60. Older women are more likely to experience recurrent episodes of depression compared to older men62. The gender gap in incidence and prevalence becomes narrower with increasing age3. It should be acknowledged however that women are more likely to present to healthcare services and seek help in comparison to men64-65.
The prevalence of major depression in older adults varies by setting66. The highest rates are seen in long-term institutional care and inpatient hospital settings67. Table 3 summarises prevalence rates of major depression by setting.
Table 3: Prevalence rate of major depression by setting 7, 67
Setting: Prevalence rate (%)
Community: 5 – 10
Primary care: 10 – 30
Hospital inpatient: 11 – 50
Long-term institutional care: 10 – 43
Prognosis of depression in older adults
Depression in older adults is associated with a slower rate of recovery9, worse clinical outcomes compared to younger adults3 and higher relapse rates68. Worse prognosis in older adults correlates with advancing age, physical comorbidities and functional impairment70. The structural brain changes associated with depression in older adults are linked, as discussed, to poorer treatment response.
Morbidity and mortality associated with depression can be described as primary or secondary; primary morbidity and mortality arises directly from the depressive illness; whereas secondary morbidity and mortality arises from physical health problems, which are secondary to depression.
Outcomes from sub-threshold depression are on par with those of major depression; however sub-threshold depression which develops into major depression is associated with worse outcomes2.
Proportionally more people over the age of 65 years commit suicide compared to younger people71. Depression is the leading cause of suicide in older adults29,71; one study reports that 75% of older adults who killed themselves were depressed72.
The vast majority of older adults who commit suicide have had contact with a health professional within the preceding month9; this figure has been quoted as high as 70%3. This further suggests that depression is under-detected. Unlike younger adults, older adults are less likely to report suicidal ideation and can experience suicidal ideation without feeling low in mood3,7. Older adults make fewer suicide attempts than younger adults because their suicide methods are more lethal13.
Despite advances in the management of ectopic pregnancy, emphasis must be placed on improving the understanding, among women and healthcare professionals alike, of the pathophysiology of haemorrhagic shock.
Educating the public and all healthcare professionals to “Think Ectopic”, as a main differential in any woman of childbearing age with atypical signs and symptoms of general ill health, is paramount.
Précis
The significance of effective communication within multidisciplinary teams, especially in emergency situations, towards optimising patient care and saving lives cannot be overstated.
Case Report
A 27-year-old woman who claimed to be unaware of her current pregnancy collapsed at her home. She was not known to have any co-morbidities. Paramedics were called and found her to be in cardiac arrest with pulseless electrical activity. Cardiopulmonary resuscitation (CPR) was immediately commenced. Spontaneous circulation returned after 13 minutes of CPR at home.
She was then transferred to the emergency department. On arrival her Glasgow Coma Scale (GCS) score was 3. She had a pulse rate of 130 beats per minute; an unrecordable blood pressure; a haemoglobin of 55 g/L; metabolic acidosis with a pH of 6.8; a lactate of >15; and a potassium of 6.6 mmol/L. She was resuscitated and gradually regained consciousness with a GCS of 15.
While her condition was being stabilised, and with the pregnancy still unsuspected, a urine pregnancy test was obtained after a urinary catheter was sited. The positive result prompted notification of the gynaecology team, whose ultrasonography revealed a significant haemoperitoneum. An immediate decision was made to perform a laparotomy in view of the most likely diagnosis of a ruptured ectopic pregnancy.
Laparotomy revealed 3.5 litres of haemoperitoneum secondary to a ruptured right-sided tubal ectopic pregnancy, and a right salpingectomy was performed. The patient was subsequently transferred to the intensive care unit, as her blood results were consistent with multi-organ failure: a platelet count of 46 × 10⁹/L; creatinine of 194 µmol/L; estimated glomerular filtration rate (eGFR) of 27 mL/min/1.73 m²; alanine transaminase (ALT) of 441 IU/L; and alkaline phosphatase (ALP) of 49 IU/L.
She made an uneventful recovery, as demonstrated by the improving laboratory parameters in Figure 1, and was discharged home after 6 days.
Figure 1: Cumulative laboratory results (full blood count, liver function tests, urea and electrolytes, clotting profile). Samples were taken on Day 0 (13:46, 18:30), Day 1 (05:53, 17:27), Day 2 (06:49), Day 3 (07:00), Day 4 (11:14), Day 5 (09:37) and Day 15 (09:50). Values for each parameter are listed in chronological order; not every parameter was measured at every time point.

Hb g/L (115-150): 82, 100, 83, 73, 72, 88, 89, 93
WCC ×10⁹/L (3.5-11.0): 19.8, 23, 18.1, 13.2, 11.8, 11.2, 9.5, 9.1
Plts ×10⁹/L (140-400): 46, 61, 49, 46, 48, 51, 76, 106
ALP IU/L (30-130): 49, 42, 44, 51, 57, 73, 74, 100
ALT IU/L (0-40): 441, 428, 701, 3197, 2621, 1807, 1290, 185
Bilirubin µmol/L (0-21): 9, 13, 8, 18, 18, 16, 11, 4
Na mmol/L (133-146): 139, 142, 143, 141, 143, 142, 140, 141, 139
K+ mmol/L (3.5-5.3): 6.8, 4, 4.3, 4.2, 3.9, 3.9, 3.7, 4.6
Urea mmol/L (2.5-7.8): 9.6, 12.4, 14, 14.2, 11.5, 7.9, 7.4, 8.3
Creatinine µmol/L (48-128): 194, 174, 230, 279, 319, 269, 163, 137, 88
eGFR mL/min/1.73 m²: 27, 30, 22, 18, 15, 19, 33, 40, 66
INR: 1.4, 1.4, 1.5, 1.6, 1.4, 1.1, 1.0
PT secs (9.7-12.3): 14.8, 15.1, 15.9, 16.7, 15.1, 11.5, 10.6
Fibrinogen g/L (1.9-3.1): 1.2, 1.4, 1.2, 1.5, 2.4, 3.9, >4.5
Discussion
The UK Confidential Enquiry into Maternal Deaths has shown a decreasing trend in the case fatality rate of women with ectopic pregnancies, which has been suggested to reflect earlier detection and immediate treatment. However, unforeseen tubal rupture with major haemorrhage continues to be a source of major morbidity and mortality, and ectopic pregnancies account for 3-4% of pregnancy-related deaths.4
The classical triad of symptoms in ectopic pregnancy comprises pain, vaginal bleeding and amenorrhoea.1 Worryingly, as illustrated by our case, these women may rarely present in a state of collapse even before the diagnosis of pregnancy is made.4
Pathophysiology of multi-organ failure following haemorrhagic shock
Our case clearly demonstrates the detrimental multi-systemic effects and subsequent threat to life created by haemorrhage from a ruptured ectopic pregnancy. Acute haemorrhage results in decreased cardiac output and pulse pressure, which are detected by baroreceptors in the aortic arch and atrium. Neural reflexes subsequently increase sympathetic outflow to the heart and other vital organs, causing vasoconstriction and redistribution of blood flow away from non-vital organs. Neuroendocrine responses activated by these reflexes play a major role in homeostasis during haemorrhage: elevated aldosterone and cortisol, secondary to raised adrenocorticotrophic hormone secreted by the pituitary gland, lead to increased reabsorption of water in the kidneys. The reduced perfusion of non-vital organs results in insufficient delivery of the oxygen and nutrients required for cellular function.2
The resultant hypoxia leads to anaerobic metabolism and hence lactate production and metabolic acidosis. Hyperlactataemia is defined as a serum lactate greater than 4 mmol/L;3 a level of 15 mmol/L, as in our case, highlights the severity of the patient's shock.
Anaerobic metabolism restricts endogenous heat production, which in turn exacerbates hypothermia, and this may be compounded by the administration of intravenous fluids and blood products. Hypothermia is one of the reversible causes of pulseless electrical activity, and a core temperature of less than 35°C is itself an independent predictor of mortality after major haemorrhage.
Furthermore, our case revealed a severe acidosis with a pH of 6.8, reflecting widespread cellular anaerobic respiration secondary to hypoxia from inadequate perfusion. The literature shows that a pH of less than 7.2 is associated with decreased contractility, low cardiac output, bradycardia, arrhythmias and decreased blood flow to the liver and kidneys, which can lead to multi-organ failure.6
Many patients with severe haemorrhage develop coagulopathy very quickly, as our case demonstrates. At present there is no universally established definition of coagulopathy, though many experts use a prolonged prothrombin time as an indicator; our patient presented with a prolonged prothrombin time of 14.8 seconds. The pathophysiology is complex and stems from immediate activation of multiple haemostatic pathways, including fibrinolysis and platelet and endothelial dysfunction. Furthermore, the acute-phase response after resuscitation measures can create a prothrombotic state, and disseminated intravascular coagulation can occur in those who are insufficiently resuscitated or not resuscitated in a timely manner.7
Effective multi-disciplinary input
This case clearly highlights that responsibility does not rest solely with the surgeon, who must arrest the bleeding, but also with multidisciplinary specialists including paramedics, emergency clinicians, nursing staff, anaesthetists and haematologists. Their input is a vital component of resuscitation during emergency situations.
Appropriate initial fluid management
Intravenous fluid resuscitation remains challenging, as some evidence suggests that aggressive fluid resuscitation can be detrimental: it can dislodge clots and cause dilutional coagulopathy, increasing the risk of further haemorrhage.5
Clinicians supporting this hypothesis suggest administering fluids cautiously, with the aim of maintaining a subnormal blood pressure (systolic 70-90 mmHg) whilst allowing sufficient oxygen delivery. The very early use of crystalloids and blood products is paramount in treating acute coagulopathy.7
Immediate surgical treatment
Recourse to immediate surgical cessation of bleeding is a vital part of the resuscitation process and must not be delayed.7 The presence of free fluid in the abdomen, together with a positive pregnancy test, immediately identified a ruptured ectopic pregnancy as the most likely diagnosis. The majority of women of reproductive age are free of comorbidities and better able to respond to resuscitative measures, and hence recover more quickly.
Conclusion
Despite advances in the management of ectopic pregnancy, emphasis must be placed on improving both women's and healthcare professionals' understanding of the pathophysiology of haemorrhagic shock. Educating the public and all healthcare professionals to “Think Ectopic” as a main differential in any woman of childbearing age with atypical signs and symptoms of general ill health is paramount.
Furthermore, the significance of effective communication within multidisciplinary teams in optimising patient care and saving lives cannot be overstated.
Telephone triage has been used by many practices in primary care to manage workload and prioritise patients for same-day appointments.1,2 Telephone triage may have benefits in terms of managing workload,3 but is also associated with certain risks,4 which have worried both clinicians and patients.5 Analysis of the use of telephone triage has so far focused on ease of access, demand management, cost-effectiveness, quality of consultations, safety and patient satisfaction. However, there may be other effects in terms of patient outcomes. One of the main focuses in general practice is identifying symptoms and signs of cancer, so that earlier diagnosis can improve outcomes. Our study aims to assess whether telephone triage helps to prioritise early assessment and referral of patients who are subsequently diagnosed with cancer.
Methods
A retrospective analysis was carried out of all patients at our practice who were diagnosed with cancer between April 2013 and December 2014.
Patients have a choice of two ways to book an appointment in our practice:
1. Telephone triage for same-day appointment requests, where a triaging doctor decides on the urgency of the problem and books the appointment, arranges tests or gives advice after speaking to the patient over the phone. This group is referred to as “Group 1” in this study.
2. Booking the next available appointment to see a GP through reception, without any triage. This group is referred to as “Group 2” in this study.
The date of first contact with the GP practice for the symptoms that later led to a diagnosis of cancer was noted for both groups: the telephone triage date for Group 1 and the date the appointment was booked by the patient for Group 2. The date the patient was first seen in secondary care for further assessment and investigations was also noted. The duration between first contact with the practice and the GP appointment, and the duration between first contact and the first hospital appointment, were calculated. This information was gathered from practice computer records.
Patients diagnosed with cancer through screening were excluded, as were slow-growing tumours that do not merit a 2-week-rule referral, such as basal cell carcinoma of the skin. Patients whose appointments were initiated by the GP on reviewing the results of routine tests were not included, and patients diagnosed with cancer in hospital without going through a primary care referral were also excluded.
There are two research questions:
Is there a significant difference in the time required from the first contact with primary care to the GP Clinic appointment between Group 1 and Group 2 patients?
Is there a significant difference in the time required from the first contact with primary care to the date the patient was seen in the secondary care between Group 1 and Group 2 patients?
Descriptive statistics (mean, standard deviation, median, minimum and maximum) were used to present the time from first contact with primary care to the GP clinic appointment, and the time from first contact to the date the patient was seen in hospital, for Group 1 and Group 2 patients. The Wilcoxon rank-sum test was used to answer each research question, with a p-value of less than 0.05 indicating significance at the 0.05 level.
All data analyses were conducted using SAS.
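Although the study's analyses were run in SAS, the same descriptive statistics and rank-sum comparison can be reproduced with open-source tools. As a minimal sketch, the Python code below uses SciPy's `ranksums` (an implementation of the Wilcoxon rank-sum test) on made-up illustrative waiting times, not the study data:

```python
# Sketch of the analysis described above: descriptive statistics plus a
# two-sided Wilcoxon rank-sum test comparing waiting times (days).
# NOTE: the arrays below are hypothetical example values, NOT the study data.
import numpy as np
from scipy.stats import ranksums

group1 = np.array([0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 1, 0, 8])          # hypothetical
group2 = np.array([0, 2, 5, 6, 7, 9, 12, 3, 6, 8, 15, 23, 4,
                   6, 10, 1, 7, 11, 2, 5, 9, 13, 6, 8, 4, 17])      # hypothetical

# Descriptive statistics in the same form as Tables 1 and 2
for name, g in [("Group 1", group1), ("Group 2", group2)]:
    print(f"{name}: n={g.size}, mean={g.mean():.2f}, sd={g.std(ddof=1):.2f}, "
          f"median={np.median(g):.1f}, min={g.min()}, max={g.max()}")

# Two-sided Wilcoxon rank-sum test; p < 0.05 taken as significant
stat, p = ranksums(group1, group2)
print(f"rank-sum statistic = {stat:.3f}, p = {p:.4f}")
```

The rank-sum test is appropriate here because waiting times are skewed with many ties at zero, so a comparison of ranks is more robust than a t-test on means.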
Results
A total of 39 patients were included in the study. Among them, 13 (33%) used telephone triage to make their appointments and 26 (67%) booked their appointments themselves.
Figure 1 shows bar charts of the time from first contact with the GP practice to the GP clinic appointment for Group 1 and Group 2 patients. The appointment took place within 0-3 days for 12 Group 1 patients and within 8-11 days for 1 Group 1 patient; the distribution for Group 2 patients is read in the same manner.
Figure 1: Bar charts of the time (days) required from the first contact with the practice to the GP clinic appointment for Group 1 and Group 2 patients. (Note that the midpoints 2, 6, 10, 14, 18, and 22 represent days within the ranges 0-3, 4-7, 8-11, 12-15, 16-19, and 20-23, respectively.)
Table 1 shows the summary statistics for the time (days) from first contact with the practice to the GP clinic appointment for Group 1 and Group 2 patients. The average time was 0.77 days for Group 1 and 7.88 days for Group 2. The Wilcoxon rank-sum test indicated a statistically significant difference in this time between Group 1 (patients using telephone triage to make their appointments) and Group 2 (patients booking their appointments themselves) (p = 0.0020).
            Number   Mean   SD     Median   Min   Max
Group 1     13       0.77   2.24   0        0     8
Group 2     26       7.88   7.53   6        0     23
Table 1: Summary statistics for the time (days) required from the first contact with the practice to the GP clinic appointment for Group 1 and Group 2 patients. SD = standard deviation.
Figure 2 shows bar charts of the time from first contact with the GP practice to the date patients were seen in secondary care for Group 1 and Group 2. It took 0-5 days for 4 Group 1 patients, 10-19 days for 5, 20-29 days for 1, 30-39 days for 2, and 90-99 days for 1; the distribution for Group 2 patients is read in the same manner.
Figure 2: Bar charts of the time (days) required from the first contact with the practice to the date the patient was seen in hospital for Group 1 and Group 2 patients. (Note that the midpoints 5, 15, 25, 35, 45, 55, 65, 75, 85 and 95 represent days within the ranges 0-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70-79, 80-89 and 90-99, respectively.)
Table 2 shows the summary statistics for the time (days) from first contact with the GP practice to the date patients were seen in hospital for Group 1 and Group 2. The average time was 19.54 days for Group 1 and 35.69 days for Group 2. The Wilcoxon rank-sum test indicated a statistically significant difference in this time between Group 1 (patients using telephone triage to make their appointments) and Group 2 (patients booking their appointments themselves) (p = 0.0474).
            Number   Mean    SD      Median   Min   Max
Group 1     13       19.54   23.41   10.00    3     90
Group 2     26       35.69   26.28   32.50    1     88
Table 2: Summary statistics for the time (days) required from the first contact with the practice to the date the patient was seen in hospital for Group 1 and Group 2 patients. SD = standard deviation.
Type of Cancer                       Number of Patients
Lung                                 5
Breast                               5
Colorectal                           4
Malignant melanoma of skin           3
Squamous cell carcinoma of skin      3
Oesophagus                           2
Stomach                              2
Urinary bladder                      2
Larynx                               2
Pancreas                             1
Endometrium                          1
Cervix                               1
Kidney                               1
Prostate                             1
Testis                               1
Tonsil                               1
Lymphoma                             1
Appendix                             1
Myelodysplastic syndrome             1
Olfactory neuroblastoma              1
Table 3: Number of patients by type of cancer.
Discussion
More than 90% of contacts with healthcare in the UK occur in primary care.6 The estimated number of consultations for a typical practice in England rose from 21,100 in 1995 to 34,200 in 2008, according to an analysis by Hippisley-Cox et al.7 With increasing demands being placed upon general practice, there is a need to explore innovative ways of working that enable the prioritisation of patients with concerning symptoms. Telephone triage has been considered a way to reduce demand for face-to-face GP consultations,3 potentially freeing up time for effective use. The NHS England report ‘Transforming Urgent and Emergency Care Services in England’ suggests GPs should offer more telephone consultations to reduce pressure on accident and emergency departments.8 However, a cluster-randomised controlled trial (the ESTEEM trial) across 42 practices showed that telephone triage increased the number of primary care contacts in the 28 days following patients’ requests for same-day GP consultations.1
With increasing demand for consultations, it is important to have a system to identify and prioritise for early assessment those patients who may have cancer. Our study demonstrates that telephone triage reduces the time from first primary care contact to face-to-face assessment in both primary and secondary care for patients with suspected cancer. Although the patient numbers are small and the sample is from a single practice, the difference seen is statistically significant.
Cancer stage at diagnosis is one of the major reasons for differences in cancer survival between countries.9,10 Delay in cancer diagnosis can be due to multiple factors. Telephone triage gives patients an opportunity to discuss symptoms early with a GP, which can reduce delays in the cancer diagnosis pathway. Certain alarm symptoms have been shown to be associated with the likelihood of a cancer diagnosis,11 and these can be used to prioritise patients in the triage process. Triage may also reduce anxiety amongst patients waiting for an appointment who are concerned about their symptoms.
Telephone triage should be seen not only as a way of managing demand and appointments but also as a system to improve patient outcomes. Further research on a larger scale is clearly needed to determine whether these results are reproducible in other settings, as patients' knowledge and understanding of cancer warning symptoms and their healthcare-seeking behaviour may vary between populations.
A 70-year-old man presented in the winter with a four-week history of redness of the left anterolateral leg. He first noticed a slight “tenderness” in the area when showering; the discomfort lasted only a few days. Over the next week, he noticed redness developing. The area is now painless and is not pruritic, warm, or peeling. He has not applied any topical lotions or creams and has had no exposure to new soaps or detergents. He feels well, without fever or weight loss. He has hypertension and lumbar radiculopathy, with a prior L5 discectomy and resultant leg numbness. He is retired and does not smoke or drink alcohol; his hobby is woodworking in his garage.
Physical examination reveals normal vital signs. On his left anterolateral leg, he has an 8 cm, irregular patch of reticulated erythema with both hyperpigmentation and scaling. The lesion is non-palpable. He has decreased sensation in an L5 distribution on that leg, which was unchanged from prior examinations. These skin findings are shown in Figure 1.
Figure 1
Question: Based on history and physical examination, which of the following is the most likely diagnosis?
Livedo reticularis
Erythema ab igne
Livedo racemosa
First-degree burn
Discussion
The answer is erythema ab igne (EAI; literally “redness from fire”), which results from chronic exposure to moderate-intensity heat. EAI presents as a reticulated erythematous patch over the area of exposed skin. Possible secondary changes include epidermal atrophy and scaling.1,2 With repeated exposure, brown hyperpigmentation may develop.1 Most patients are asymptomatic, although some note a mild burning sensation. A history of repeated heat exposure is key to the diagnosis. While cases were historically noted on skin exposed to fire, such as the arms of bakers and coal shovellers, EAI can result from many modern heat sources, such as laptop computers, car seat heaters, heating pads, and, in this case, the portable space heater under the patient’s woodworking bench.2-4 With removal of the heat source, the hyperpigmentation typically regresses, although this may take years.1,3 The diagnosis is clinical; biopsy is not required but is indicated if malignant transformation is suspected. EAI can increase the risk of squamous cell carcinoma, Merkel cell carcinoma, and cutaneous marginal zone lymphoma.1,5 Treatment is typically not necessary; topical steroids or retinoids and laser therapy have had variable success.1,3,4 If pre-malignant changes are detected, topical 5-fluorouracil is recommended.1,4
See Table 1 for a summary of the key characteristics and distinguishing features of each diagnosis in this selected differential.
Table 1. Selected Differential Diagnosis of Reticulated Skin Lesions in Adults
Condition
Characteristics
Livedo reticularis
Violaceous mottled or reticulated patches; painless; typically temperature sensitive; may be physiologic or secondary to systemic disease; no hyperpigmentation.
Erythema ab igne
Erythematous reticulated patch, with possible secondary changes including epidermal atrophy and scaling; chronic exposure may lead to hyperpigmentation; painless or associated with a mild burning sensation; history of heat exposure.
Livedo racemosa
Violaceous reticulated patch with larger branching pattern than livedo reticularis, often with asymmetric or “broken” net appearance; typically involves the trunk and proximal limbs; generally secondary to chronic disease; frequently painful; no hyperpigmentation.
First-degree burn
Erythematous, dry, painful lesion which includes the entire area of skin that contacted the high-intensity heat source; not reticulated; no hyperpigmentation.
Livedo reticularis is typically more violaceous in appearance, with net-like, mottled discolouration of the skin due to deoxygenation and dilation of the venous plexus. Primary, physiologic livedo reticularis is often brought on by cold and alleviated by warming. It usually involves a larger area, such as the bilateral thighs, rather than a confined area of skin.1,2
Livedo racemosa is a persistent variant of livedo reticularis with a characteristic, large, broken, branching pattern, often on the trunk and proximal limbs. It is generally secondary to a systemic disease, such as antiphospholipid antibody syndrome or Sneddon syndrome.6
First-degree burns are erythematous, dry, and painful. Instead of a reticulated pattern, as shown here, the erythema of first degree burns covers the entire area of skin that contacted the high-intensity heat source.
A 23-year-old man was brought to our Emergency Department after having a seizure. He was alert and his vital signs were stable. He is known to have epilepsy and has been on regular anti-epileptic medication for three years, followed up at a neighbourhood medical centre in his native village. On physical examination, numerous brown papules corresponding to facial angiofibromas were seen over his nose and both cheeks in a butterfly pattern (Figure 1). Ash-leaf hypomelanotic macules were seen over his extremities (Figure 2), and a few hyperpigmented café au lait macules were observed over his trunk (Figure 3). A large fibroma was seen over his scalp (Figure 4). Areas of thick, leathery, orange-peel-textured skin, known as shagreen patches, were observed on his back (Figure 5).
Figure 1: Facial angiofibromas
Figure 2: Ash Leaf spot
Figure 3: Cafe au lait macule
Figure 4: Scalp fibroma
Figure 5: Shagreen patch
A brain CT scan revealed multiple subependymal giant cell astrocytomas. Laboratory investigations were normal.
This patient was clinically diagnosed with tuberous sclerosis complex, presenting with a myriad of skin lesions.1
Tuberous sclerosis complex is an autosomal-dominant, neurocutaneous, multisystem disorder characterized by cellular hyperplasia and tissue dysplasia.2 Seizures are commonly encountered in the emergency room; however, conspicuous lesions such as those described above should alert the physician to refer the patient for a multidisciplinary approach.3
Spasticity was described by Lance1 in 1980 as “a motor disorder characterised by a velocity-dependent increase in tonic stretch reflexes (muscle tone) with exaggerated tendon jerks, resulting from hyperexcitability of the stretch reflex, as one component of the upper motor neuron syndrome”. Spasticity can be a consequence of many neurological conditions, including traumatic brain injury, spinal cord injury, stroke and multiple sclerosis. The annual incidence of lower-limb spasticity following stroke, traumatic brain injury and spinal cord injury is estimated at 30-485, 100-235 and 0.2-8 per 100,000 respectively2. Spasticity is characterised by muscle overactivity and can lead to permanent changes in the muscle fibres, resulting in contractures. Contractures can be very painful and may interfere with seating, posture, mobility and activities of daily living, thus increasing care costs significantly.
Phenol has been used peripherally and intrathecally for the treatment of spasticity for many years. Botulinum toxin became available for the treatment of spasticity in the last decade; its use has increased since then, leading to a decline in the use of phenol. Phenol is still used in patients who are sensitive to botulinum toxins or have developed antibodies to them. Phenol is both neurolytic and anaesthetic in nature3. The anaesthetic effect is seen immediately after injection, when the patient reports an immediate effect; the neurolytic effect takes at least two weeks, so patients should be advised not to expect any significant change in spasticity for two to four weeks. Phenol can also be used in combination with botulinum toxin to treat multifocal spasticity where the required dose of botulinum toxin would exceed the recommended safe dose, allowing several muscle groups to be treated in a single session3.
The lethal dose of phenol has been reported to be greater than eight grams4. Aqueous phenol is preferred for peripheral nerve and motor point blocks and is available in 5, 6 and 7% concentrations. Injecting botulinum toxin is quite different from performing nerve and motor point blocks: phenol nerve and motor point blocks take longer to perform, and for motor point blocks a nerve stimulator with a surface electrode is needed to localise the motor points on the muscles. In the present study, we highlight the importance of managing spasticity in adults with a combination of botulinum toxin and phenol nerve/motor point blocks. A case series of patients who underwent combined phenol and botulinum toxin treatment is presented, describing the diagnoses, the number and location of muscles injected, the types of phenol nerve and motor point blocks, and any complications encountered.
Methods
This retrospective study was conducted at the Rehabilitation Medicine Department of the University Hospital in Cambridge, UK, over the period December 2014 to January 2017. Patients were identified from the spasticity clinic database. All patients were assessed in the spasticity clinic, and a plan to inject botulinum toxin along with a phenol nerve or motor point block was agreed with the patient. Patients who decided to have the procedure were given a clinic appointment for the agreed injections and blocks. Patients on anticoagulants (warfarin, dalteparin or clopidogrel) were advised to stop anticoagulation 3 days before the procedure; the International Normalised Ratio (INR) was checked beforehand, and the usual dose of anticoagulation was restarted after the procedure.
After consent was obtained, patients were placed on a plinth. Only botulinum toxin type A was used in our study. It was diluted with normal saline, and the muscles were injected using either surface anatomy or electrical stimulation for localisation. Each muscle was injected at one or two sites, depending on its size.
Phenol nerve blocks and motor point blocks were performed according to the techniques described by Roy3 and Gaid5. Aqueous phenol 5% (phenol in water) was used for all procedures. The nerves were identified using a nerve stimulator with a surface electrode at a 2 mA current (Figure 1). The skin was infiltrated with 1% lignocaine, and the nerve was approached with a stimulator needle and then ablated with 5% phenol under stimulation guidance, the dose of phenol being titrated while the nerve was stimulated. The motor points were located similarly with the help of a surface electrode and marked before ablation with 1 to 2 mL of 5% phenol. The amounts of botulinum toxin and phenol were recorded. All patients were reviewed at 6 weeks for any complications.
Figure 1: Nerve Stimulator with surface electrode
Results
Between December 2014 and January 2017, we treated 29 patients with spasticity caused by different neurological conditions with a combination of aqueous phenol and botulinum toxin injections. There were 15 males and 14 females, with an age range of 18 to 80 years and a mean age of 49.3 years. The most common diagnosis was multiple sclerosis, followed by stroke (Figure 2). A total of 40 phenol nerve or motor point blocks were performed in the 29 patients. Nineteen patients (65.5%) received phenol blocks once, 9 (31%) twice, and only 1 patient (3.4%) three times. Where phenol blocks were repeated, the mean interval between injections was 14.1 months (range 6-23 months). The procedure was bilateral in 16 patients (55.2%) and unilateral in 13 (44.8%). A local anaesthetic (trial) block was performed in 6 patients (20.6%) who were ambulatory before the phenol block.
Figure 2: Frequency of Diagnosis
The obturator nerve block was the most common phenol procedure (44.8%), followed by the posterior tibial nerve block (37.9%). Two patients (6.9%) had both obturator and posterior tibial nerve blocks, 1 (3.4%) had hamstring motor point blocks, and 1 (3.4%) had gastrocnemius motor point blocks. One patient (3.4%) had bilateral obturator nerve blocks, posterior tibial nerve blocks and rectus femoris motor point blocks (Figure 3).
Figure 3: Frequency of Phenol Nerve/Motor Point Blocks
Botulinum toxin was injected into various muscles in all 29 patients and was repeated every 4 to 6 months in the same muscles. Injections were bilateral in 12 patients (41.4%) and unilateral in 17 (58.6%). The most common muscles injected were the hamstrings (44.8%), followed by the finger flexors (13.8%). The frequency of botulinum toxin injections is shown in Figure 4.
Figure 4: Muscles Injected with Botulinum Toxins
The most common combination in our series was an obturator nerve block with hamstring botulinum toxin injections (34.4%). The combination of a posterior tibial nerve block with hamstring botulinum toxin was used in 3 patients (10.3%), and 2 patients (6.8%) received a posterior tibial nerve block with finger flexor botulinum toxin injections. The combinations of phenol and botulinum toxin injections are shown in Table 1. No complications were noted following either the phenol or the botulinum toxin injections.
Table 1: Combinations of phenol and botulinum toxin used. Rows show the muscles injected with botulinum toxin; columns show the phenol nerve blocks (NB) and motor point blocks (MPB) performed: A = obturator NB; B = posterior tibial NB; C = obturator and posterior tibial NB; D = hamstrings MPB; E = gastrocnemius MPB; F = obturator NB, posterior tibial NB and rectus femoris MPB.

Muscles Injected with Botulinum          A    B    C    D    E    F
Hamstrings                               10   3    0    0    0    0
Finger Flexors                           0    2    1    0    0    0
Finger and Wrist Flexors                 0    2    0    1    0    0
Wrist, Finger Flexors and Hamstrings     1    0    1    0    0    0
Elbow and Finger Flexors                 0    1    0    0    1    0
Elbow, Wrist and Finger Flexors          0    1    0    0    0    0
Elbow and Wrist Flexors                  0    1    0    0    0    0
Wrist Flexors and Knee Extensors         1    0    0    0    0    0
Ankle Plantar Flexors                    1    0    0    0    0    0
Flexor Digitorum                         0    1    0    0    0    0
Discussion
Perineural injection of aqueous phenol (3 to 7%) can reduce spasticity by blocking the nerve signals to the group of muscles supplied by the nerve. Phenol produces an initial local anaesthetic effect, followed by neurolysis caused by protein coagulation and inflammation6. The neurolysis leaves the nerve with about 25% less function than before, but this does not disadvantage people with little or no residual function, as a mild progressive denervation can be beneficial in reducing spasticity6. Khalili et al7 first described the technique of phenol nerve blocks and suggested that re-growth of most axons occurs with preservation of gamma motor neurons. This means that phenol reduces spasticity without significantly reducing muscle strength.
The combination of phenol and botulinum toxin injections has been documented in children with cerebral palsy and central nervous system degenerative diseases8. To date, there are no studies in the literature on the use of combined phenol and botulinum toxins in the treatment of spasticity in adults. Combining phenol with botulinum toxin helps to treat multifocal spasticity by allowing more spastic areas to be treated. The most frequent pattern in the study by Gooch et al8 was obturator nerve block with gastrocnemius botulinum toxin injections, whereas in our study the most common combination was obturator nerve block with hamstring botulinum toxin injections. A possible explanation for this variance is that the majority of our study population suffered from multiple sclerosis, in which hamstring and hip adductor spasticity is a very common pattern.
The mechanism of action of phenol is different from that of botulinum toxins; however, the reduction in spasticity achieved with the two agents is comparable. Manca et al9 compared botulinum toxins and phenol nerve blocks for reducing ankle clonus in spastic paresis and concluded that both patient groups showed significant clonus reduction over time, with a greater effect in the phenol group than in the botulinum toxins group. They also suggested that the two drugs have different mechanisms of action, with phenol reducing the excitability of the alpha motor neuron. A randomised double-blind trial by Kirazli et al10 compared the effects of botulinum toxin Type A and phenol on post-stroke ankle plantar flexor and invertor spasticity. There was a significant change in Ashworth scores at weeks 2 and 4 in the group who received botulinum toxins, but there was no significant difference between the two groups at weeks 8 and 12.10 Similarly, the decrease in clonus duration (detected by electromyography) was significant in both groups; however, the group that received botulinum toxins showed significant change at weeks 2 and 4 compared to the phenol group. The reason for this may be the delayed onset of action of phenol compared to botulinum toxins. Burkel et al6 studied the effects of phenol injected into the peripheral nerves of rats and showed that Wallerian degeneration of the nerves occurs before healing by fibrosis, which starts about 4-6 months after phenol injection. Their study also concluded that following phenol the nerves are left with 25% less function than before, and that this does not disadvantage people with little or no residual function6.
There is always a risk of deterioration in mobility or function due to the weakness caused by a phenol nerve block. It is our usual practice to perform a local anaesthetic block (trial block) before injecting phenol in all ambulatory patients and in patients who use their spasticity functionally to their advantage. In our series, 20.6% of patients underwent a local anaesthetic block before proceeding to the phenol block. No adverse effects were noted following the local anaesthetic block, and all six patients chose to have the phenol block. A recent study by McCrea et al11 looked at the effects of phenol on the position and velocity components of spasticity, in addition to strength, in post-stroke elbow flexor spasticity. The study concluded that phenol paradoxically improved muscle strength in addition to reducing hypertonia11.
In our series, we used phenol mainly for lower limb muscles and botulinum toxins for both lower and upper limb muscles. For the smaller muscles of the upper limb, it is difficult, but not impossible, to find the motor points. The technique for upper limb phenol blocks has been well described in the literature3. However, when combining botulinum toxins with phenol, we prefer to reserve the phenol block for the lower limb muscles. Gooch et al8 likewise injected larger proximal muscles with phenol, and smaller distal and deeper muscles with botulinum toxins. In our series, the maximum dose of botulinum toxin used was 1000 units of Dysport, and the maximum dose of phenol was 20 ml of 5% aqueous phenol.
Conclusion
The combination of botulinum toxin with phenol injections is effective in treating multi-focal spasticity in clinical settings. The advantage of using phenol in combination with botulinum toxins is cost-reduction and the flexibility of managing various muscle groups at the same time. Further studies are needed to evaluate the long-term cost-effectiveness and complications of combining phenol and botulinum toxins, especially after repeated injections.
Isolated splenic tuberculosis is extremely rare, particularly in immunocompetent persons. Splenic tuberculosis can, however, form part of miliary tuberculosis in immunocompromised patients. Tuberculosis of the spleen invariably presents in the form of an abscess. The risk factors for splenic abscess described in the literature are sickle cell disease, haemoglobinopathies, splenic trauma, endocarditis, or tuberculosis elsewhere in an immunocompetent patient. Rare cases of splenic tuberculosis in immunocompetent patients have nevertheless been described in the past. With the re-emergence of tuberculosis due to AIDS and the use of immunosuppressive medications around the globe, it is very important to bear this rare clinical condition in mind while evaluating pyrexia of unknown origin.
Case 1
A 54-year-old male veterinary doctor presented with a history of intermittent fever of 8 weeks' duration. The fever was low grade and was associated with a weight loss of 4 kilograms. There was no evening rise of temperature and no sweating. The patient denied any history of cough, urinary symptoms or diarrhoea. There was no history of travel or contact with sick people. He had been a type II diabetic for the last 12 years, controlled on oral hypoglycaemic agents, with no history of acute or chronic complications. There was no history of tuberculosis in the past or in close contacts. He was non-alcoholic and denied any high-risk behaviour. Clinical examination revealed an averagely built person who was conscious and oriented with stable vital signs. There was no jaundice or lymphadenopathy. Abdominal examination revealed moderate splenomegaly. The liver was not palpable and there was no ascites. The respiratory and cardiovascular systems were normal. During hospitalisation, the recorded temperature ranged from 38°C to 39°C, with no night sweats. The patient's evaluation showed a haemoglobin level of 10.5 g/dl; leukocyte and platelet counts were normal. The ESR was 88 mm in the first hour. Kidney and liver function tests were normal. An abdominal ultrasound showed two heterogeneous space-occupying lesions measuring 3×4 cm, suggestive of splenic abscesses. An echocardiogram was performed to rule out subacute bacterial endocarditis: all heart valves were normal and no features of endocarditis were noted. The patient had a normal ejection fraction and the pericardial cavity was normal too. Blood and urine cultures were sterile. A 24-hour urine collection showed no evidence of albuminuria, and funduscopic examination ruled out retinopathy.
In view of the splenic abscesses, CT-guided fine needle aspiration was performed; acid-fast bacilli were demonstrated by Ziehl–Neelsen stain, and the patient was started on antitubercular treatment. Culture of the aspirate a few weeks later was positive for Mycobacterium tuberculosis. His HIV serology was negative. The patient continued a standard four-drug regimen for two months, followed by a two-drug regimen for another seven months. The patient's fever settled after two weeks of treatment, and he was followed up in our clinic until completion of his treatment.
CASE 2
A 24-year-old female student presented with a history of intermittent fever of 5 weeks' duration. The fever was low grade and not associated with sweating. She also complained of loss of appetite and a weight loss of 3 kilograms over a period of 2 months. She denied any history of cough or urinary symptoms. The patient had no history of contact with sick persons or of travel. She had no comorbid illness. On examination, she was conscious and oriented and had mild pallor; no lymphadenopathy or jaundice was noted. Her respiratory and cardiovascular systems were normal. Abdominal examination showed splenomegaly 5 cm below the costal margin. Laboratory data showed a haemoglobin level of 9.8 g/dl, a WBC count of 4200 and a platelet count of 1.5×10³. The ESR was 90 mm in the first hour. Blood culture, Widal tests and Brucella serology were negative. Liver and kidney function tests were normal. An abdominal ultrasound showed three small space-occupying lesions in the spleen, each measuring 2×2 cm. The portal vein diameter and the spleno-portal axis were normal. A CT scan of the abdomen confirmed splenic abscesses, with no abdominal lymphadenopathy. CT-guided fine needle aspiration was positive for AFB, and cultures a few weeks later confirmed Mycobacterium tuberculosis. Transthoracic echocardiography showed normal values and no features of vegetations. Her HIV serology was negative. The patient was started on a conventional four-drug antitubercular regimen for two months, followed by a two-drug regimen for another seven months. Her fever settled, her appetite improved markedly and her weight increased.
DISCUSSION:
Splenic abscesses presenting as fever of unknown origin are well known. Most cases of splenic TB present with fever, a vague ache in the left hypochondrium, or weight loss. Although splenic tuberculosis is more common in immunosuppressed patients, splenic abscess due to tuberculosis has been described in immunocompetent patients as well1. With the advent of the AIDS epidemic, the prevalence of tuberculosis has increased globally and more cases are now being reported. Another scenario leading to an increased frequency of this previously rare entity is the widespread use of immunosuppressive therapies for chronic disorders such as rheumatoid arthritis, Crohn's disease and psoriasis. The index cases were neither HIV positive nor on any immunosuppressant medication, yet they developed splenic lesions, reflecting some other, hitherto unknown, predisposing factor for such lesions. Splenic abscess does occur in the setting of infective endocarditis, as infective emboli become lodged in the spleen. A splenic abscess following endocarditis in an 80-year-old male presenting with abdominal pain during the course of treatment was reported by Pereira et al2, and in another series3 of 3 patients with bacterial endocarditis, splenic abscess was diagnosed on CT of the abdomen with evidence of endocarditis on echocardiography. In that series, two of the patients underwent splenectomy before valve repair, whereas splenectomy was performed after valve repair in the other patient. The echocardiograms in the index cases, however, were normal, without any evidence of endocarditis. After diagnosis and initiation of antitubercular treatment (ATT), the index cases became afebrile, and splenectomy was thus averted. One of the cases had well-controlled diabetes mellitus, but the other patient was euglycaemic and had no other known risk factor for splenic tuberculosis as described in the literature. What led to splenic tuberculosis in the second case remained unidentified.
There is a well-known linkage between diabetes mellitus and TB, and the WHO has recommended bidirectional screening. Sri Lankan data4 on 112 patients with TB found that 8 patients were already known cases of diabetes mellitus, and further screening in that study unravelled diabetes in another 17 patients. It is thought that metabolic adaptation is critical during the pathogenesis of Mycobacterium tuberculosis5,6.
Timely management of a tubercular abscess is crucial, as without treatment patients can have a complicated clinical course. A splenic abscess can rarely rupture or lead to a fistulous communication with adjacent organs. A gastrosplenic fistula was reported by Lee et al7 in a 61-year-old male presenting with abdominal discomfort and cough; the authors demonstrated a fistulous tract between the spleen and the stomach on endoscopic examination, which healed after completion of anti-tubercular treatment. It is quite possible that a delay in diagnosis is a factor that leads to such complications. The index cases, however, had favourable outcomes without any complications and successfully completed anti-tubercular treatment.
On the other hand, complications are also known to occur during antitubercular treatment, as a reaction to the treatment itself. Spontaneous rupture during treatment leading to splenectomy was reported by Yea et al8. Splenic tubercular abscesses are known to be associated with miliary tuberculosis or with haematological diseases in which leucopenia and thrombocytopenia are profound9. The index cases had normal platelet and leucocyte counts, highlighting that there was neither bone marrow suppression nor hypersplenism. Patients with ITP on treatment are also prone to develop tuberculosis of the spleen and, conversely, patients with splenic TB can per se develop thrombocytopenia.
The management of splenic abscess used to be splenectomy, but with the advent of FNAC splenectomy can be avoided. A word of caution: various other splenic lesions can mimic splenic TB, so it is very important to confirm the disease, especially in endemic areas where TB is prevalent. Kunnathuparambil et al10 described melioidosis in a 47-year-old male who had been treated as a case of splenic tuberculosis on the basis of fever and splenic lesions on imaging. In the past, the diagnosis of splenic tuberculosis was mainly reached after histological examination of surgical specimens, but fine needle aspiration has now become the procedure of choice. As the spleen is a highly vascular organ, bleeding is the most feared complication of any intervention, but fine needle aspiration has been found to be technically safe, and in a retrospective series no significant complication was observed11. With the advent of non-invasive biomarkers, the diagnosis of tuberculosis has advanced further; the QuantiFERON Gold test has emerged as another non-invasive modality, with a sensitivity of around 75% demonstrated in various studies12.
While treating an HIV-positive patient, the clinician requires a high index of suspicion when the patient presents with abdominal pain, as splenic abscess is one of the differentials; an initial ultrasound is recommended to diagnose this condition13. In the modern era, splenectomy may be offered only in resistant cases; otherwise, ATT is considered the therapy of choice.
To conclude, tuberculosis of the spleen must be kept in mind while evaluating fever of unknown origin, not only in patients on immunosuppressant treatment or with HIV but in immunocompetent patients as well. Fine needle aspiration is a safe diagnostic modality, and treatment with antitubercular medication is rarely unsuccessful.
Warfarin is a widely used anticoagulant, primarily for the prevention of thrombosis and thromboembolism. Warfarin is used as a prophylactic agent in conditions such as atrial fibrillation and coronary artery thrombosis.1
Although effective and safe, treatment with Warfarin is associated with risks. Because of its narrow therapeutic index, patients require regular blood monitoring for the international normalised ratio (INR) to determine the safe yet effective dose of Warfarin.
Warfarin interacts with several medications that affect its availability. One very commonly prescribed class of drugs is the selective serotonin reuptake inhibitor (SSRI) antidepressants.
Due to their supposedly favourable side-effect profile, e.g. less cardiotoxicity and safety in overdose, SSRIs have become the first-line antidepressants2, preferred over tricyclic antidepressants (TCAs). However, SSRIs have other serious side-effects, including an increased tendency to bleed, particularly gastrointestinal bleeding. SSRIs may increase the risk of bleeding through their secondary effect on platelet serotonin, which is essential for platelet aggregation.3 This effect is especially significant when they are combined with anticoagulants.1
Several of the SSRIs are inhibitors of the cytochrome P 450 enzyme system, which is responsible for the metabolism of some medications, including Warfarin. Both SSRIs and anticoagulants are frequently prescribed in the elderly population.
Fluvoxamine is an SSRI licensed for use in the treatment of depressive disorder and obsessive-compulsive disorder (OCD)1, and is also used in the treatment of social phobia. While an interaction between Fluvoxamine and Warfarin is to be expected because of Fluvoxamine’s inhibitory action on a number of cytochrome P450 enzymes, there have been few case reports of such an interaction.
A Medline search revealed only two case reports of an interaction between Warfarin and Fluvoxamine. 4,5
We report a case of an elderly man who developed elevated INR when he was started on Fluvoxamine for the treatment of depression, while on Warfarin.
Case Report
A 75-year old male was admitted to the acute psychiatric unit with complaints of anxiety, depressed mood and suicidal ideation. In the previous months, he had developed a pre-occupation that his bowels were not functioning properly and that he would not be able to open his bowels. He was using excessive laxatives secondary to this preoccupation. He also described other depressive symptoms: anhedonia, insomnia with early morning wakening, poor concentration, and low motivation.
The patient was diagnosed with depression two years previously, requiring Electroconvulsive therapy (ECT). He was discharged on Mirtazapine 45mg and Venlafaxine-XL 150mg. Following a deterioration in mental state, the Venlafaxine-XL dose was increased to 225mg three months before this admission, without much improvement. Risperidone and Olanzapine were trialled as an adjunct without beneficial effect and were discontinued. Compliance with medication was reportedly good.
The patient had multiple physical health complaints: previous myocardial infarctions, hypertension and paroxysmal atrial fibrillation. He was prescribed tamsulosin, bisoprolol, perindopril, atorvastatin, and warfarin.
The patient’s preoccupation about bowel movements was, for the most part, deemed to have obsessive quality: he accepted that these worries were repetitive and came to his mind against his wishes. He said that he would rather not have these worries but was unable to distract himself.
The marked subjective anxiety, according to the patient, was entirely linked to the preoccupation. However, when the patient became agitated, it was difficult to persuade him to appreciate the anomalous nature of his thoughts. At such times he insisted that there was something definitely wrong with his bowels and nothing could help him.
Prior to admission, the patient was treated at different times with different antipsychotic medications, which made little impact on the symptoms. Following his admission he was referred to the psychology services, which concluded that he was too unwell to meaningfully participate in psychological therapies. The patient, too, was not keen on this option. He also declined ECT and was deemed to have the capacity to make the decision.
In light of the patient’s cardiac risk history, Venlafaxine-XL was switched to an SSRI known to have an effect on obsessional symptoms. Accordingly, the dose of Mirtazapine was decreased to 30mg, and Venlafaxine-XL was tapered off over ten days.
Fluvoxamine was started at 50mg/day, and the dose was titrated to 150 mg/day over the next week. The dose was further increased to 200 mg/day.
The INR had previously been stable on a warfarin dose of 5mg per day, with values between 2.32 and 2.68. Following fluvoxamine initiation, the INR started increasing, reaching 2.98 after six days, and rose further, to 3.82, with the increase in Fluvoxamine’s dose.
The Warfarin dose was consequently decreased, initially from 5mg to 4mg; however, the INR remained above range (3.75). With a further reduction to 3mg/day, the INR fell but was still above range (3.51). The INR eventually stabilised when the warfarin dose was reduced to 2mg/day. The dose adjustment took place over ten days.
Discussion
Management of depression in the elderly population requires careful consideration of the choice of psychotropic medication, as elderly patients are more likely than younger patients to be on multiple medications for associated physical health problems, which increases the potential for drug interactions. Half-lives of drugs are also extended in the elderly.
The patient was prescribed Warfarin for the management of atrial fibrillation. Warfarin is a racemic mixture of S-warfarin and R-warfarin, of which S-warfarin is more potent than R-warfarin. 6
Warfarin has the potential to cause pharmacokinetic drug interactions (drugs affecting hepatic cytochrome P 450 enzyme system, which metabolises warfarin), which are thought to be more clinically relevant than pharmacodynamic interactions (highly protein bound drugs displacing Warfarin from its binding site) for warfarin. 6
Warfarin is metabolised by a number of P450 isoenzymes such as 2C9/2C19, 1A2, and 3A4. 6 Of these 2C9 is thought to be crucial, as it metabolises the more potent S-isomer. Isoenzyme 1A2 is the major route for the metabolism of the R-isomer, while 3A4 and 2C19 are considered to be minor routes.
The psychotropic medications that are thought to have the potential for significant pharmacokinetic interaction with warfarin include Fluoxetine, Fluvoxamine, Quetiapine, and Valproic Acid. 7 Venlafaxine is also considered a high-risk drug in patients taking warfarin. One study found that fluvoxamine and venlafaxine were associated with a more than double risk of having an INR value of 6 or more. 3
Fluvoxamine, by dint of its inhibitory actions on 2C9/2C19, 3A4 and 1A2 8, inhibits all the isoenzymes that metabolise Warfarin and can be said to have the maximum potential for a pharmacokinetic interaction with Warfarin.
National Collaborating Centre for Mental Health guidelines on depression in adults with chronic physical health problems advise avoiding SSRI in patients taking warfarin or heparin and instead offering an alternative antidepressant such as mirtazapine. 9 Therefore, caution is needed when prescribing these medications to patients who fail to respond to other safer options.
At the time of his admission, the patient had been on a combination of Mirtazapine and Venlafaxine XL, both at high doses, for several months. Neither of these antidepressants has significant action on the P450 isoenzyme system, although both are substrates of some enzymes, such as 2D6, 1A2 and 3A4. 8 The patient’s INR was within normal limits while he was on these two medications (despite the aforementioned potential effect of venlafaxine on INR values).
During the period of admission, a decision was taken to change the antidepressant regime, for clinical reasons. The two antidepressants considered were Fluvoxamine and Sertraline.
Despite its higher potential to cause a pharmacokinetic interaction with Warfarin, Fluvoxamine was chosen ahead of Sertraline (which inhibits 3A4 and 2D6). The more potent action of Fluvoxamine on sigma-1 receptors, which accounts for its significant anxiolytic properties and its therapeutic action in delusional depression 10, was felt to offer potential benefit given the patient’s clinical symptoms.
It was also felt that, given its side-effect of somnolence, Fluvoxamine would be of more benefit than Sertraline for insomnia, which was a frequent complaint of the patient. 11
Fluvoxamine was started at a low dose (50 mg/day) after Venlafaxine XL was completely withdrawn. The dose was titrated rapidly over the next one week, to 150 mg/day. The INR showed an upward trend within five days of commencing Fluvoxamine, and it exceeded three by the time the dose of Fluvoxamine was increased to 150 mg/day. This necessitated a reduction in Warfarin’s dose from 5 mg to 2 mg/day. The INR stabilised to between 2 and 3 when warfarin dose was more than halved.
Fluvoxamine has a half-life of 9-28 hours, and steady-state levels are reached after roughly ten days. 8
The half-life is increased by almost 50% in the elderly. 3 The trajectory of the increase in the INR was consistent with the pharmacokinetics of Fluvoxamine: it is a potent inhibitor of CYP450 1A2, with relatively little affinity for the other isoenzymes. The increase in INR was not dramatic, which differs from previous reports. 4 Throughout the period of Fluvoxamine titration and Warfarin dose adjustment, there was no clinically untoward incident.
In conclusion, this case shows the need for close monitoring when warfarin is combined with another drug which significantly enhances warfarin’s anticoagulant effect. At the same time, it can be done safely, even in patients such as the elderly, who can be at a higher risk of adverse effects of interaction, when appropriate steps to monitor the patient are in place. The case also demonstrates the necessity of using clinical judgment while applying the guidance in individual patients.
Sjögren’s syndrome (SS) is an autoimmune exocrinopathy characterised by lymphoplasmacytic infiltration of the exocrine glands. Xerophthalmia and xerostomia are the most common manifestations of the disease. However, serious organ damage, such as pulmonary and neurological involvement, can occur. The prevalence of neurological manifestations of SS varies between 0% and 70% (average 20%) and is largely dominated by peripheral neuropathies¹. Cranial nerve involvement, especially when isolated, represents a rare facet of this peripheral neuropathy.
Observation
We report the case of a 62-year-old patient with no medical history, referred to the internal medicine department with a 6-year history of dry mouth and xerophthalmia. No other complaints were reported.
Examination of the mouth showed a fissured, smooth and left-deviated tongue without evidence of atrophy or fasciculation (figure 1). The rest of the oropharyngeal examination was unremarkable, with no angina or cervical lymphadenopathy. Neurological examination confirmed a deficit of the right twelfth (XII) cranial nerve and excluded other cranial nerve involvement as well as any sensory or motor deficit. A specialised ophthalmologic examination was performed and showed bilateral superficial punctate keratitis.
The search for antinuclear antibodies by indirect immunofluorescence was positive at the titre of 1/1280 (speckled) corresponding to Anti-SSA and Anti-SSB antibodies. Cryoglobulinemia search was negative.
The rest of the laboratory investigations (blood cell count, liver and renal function tests, thyroid balance and inflammation markers) were normal.
A labial salivary gland biopsy was performed; histological examination showed a lymphoid cell cluster of more than 50 cells per 4 mm², corresponding to a focus score of 1.
Brain MRI was normal, with no damage seen in the brain stem. Electromyography was normal.
The diagnosis of SS was made based on the presence of five out of six criteria of the European American study group. The diagnosis of primary SS was retained owing to the lack of clinical or biological arguments for an associated autoimmune disease. Symptomatic treatment of the sicca syndrome was prescribed, but no specific therapy was initiated for the hypoglossal nerve involvement because of its asymptomatic nature.
Discussion
In the case of our patient, the tongue deviation was discovered on physical examination and was totally asymptomatic. In other cases, twelfth nerve palsy can be responsible for swallowing difficulties and, in advanced stages, for lingual or hemilingual amyotrophy. Its aetiologies are numerous. In a large case series of 100 patients, malignant tumours (about half of cases), neurological causes (16%) and post-traumatic palsy (12% of cases) were the three most common aetiologies². Other conditions can be associated with twelfth nerve palsy, such as infections², vascular injury³ and non-invasive oxygen therapy⁴. Paroxysmal idiopathic hypoglossal nerve palsy has also been described⁵. Our patient had a sicca syndrome related to SS according to 5 criteria of the European American study group: a subjective sensation of dry mouth and dry eyes associated with bilateral punctate keratitis, a focus score ≥ 1 on histological examination of the salivary gland biopsy, and positive anti-SSA and anti-SSB antibodies⁶.
SS is an autoimmune disease that often presents with dry eyes and dry mouth due to lacrimal and salivary gland involvement. It can be primary or associated with other autoimmune diseases such as Hashimoto’s thyroiditis, rheumatoid arthritis or systemic lupus erythematosus. A wide variety of neurological complications are characteristic features of SS and occur most frequently in the primary form. Peripheral neuropathy is the most frequent neurological manifestation. Its most common presentation is a symmetrical sensorimotor or pure sensory neuropathy of the hands and feet. Sensory neuropathy, small fibre neuropathy, multiple mononeuropathy and polyradiculoneuropathy have also been described¹. Cranial nerve involvement is rare. In a review of the literature, Colaci M found 267 patients suffering from SS with different types of cranial neuritis during their clinical history. The discovery of the cranial neuritis was contemporary with the SS diagnosis in 40% of the patients, as in the case of our patient.
Optic neuritis and trigeminal nerve injury were the most frequent involvements, representing 46.4% and 38% of all cranial nerve palsies respectively. Palsies of all cranial nerves except the eleventh have been described⁷. Involvement of the twelfth cranial nerve is very rare, and only two cases have been described⁸′⁹; in both, it was associated with involvement of other cranial nerves (table 1). To the best of our knowledge, this is the first report of an isolated and permanent involvement of the twelfth cranial nerve in a patient with primary SS. Several mechanisms have been proposed to explain cranial nerve involvement in SS. The clinicopathological observations of Mori K⁸ suggest that isolated trigeminal nerve involvement could be explained by immune-mediated neuronal death in the sensory Gasserian ganglion, whereas other cranial nerve involvements, which are frequently associated together, could be explained by a multiple mononeuropathy resulting from vasculitis⁸.
Further clinical observations will be necessary to determine the exact mechanisms of such neurological involvement.
Table 1: Review of the literature regarding SS patients with hypoglossal nerve injury
| Report | Number of patients | Age | Nerves involved | Treatment | Evolution |
|---|---|---|---|---|---|
| Mori/2005 [8] | 1 | No data | V, VII, IX, X, XII | No data | Paroxysmal |
| Ashraf/2009 [9] | 1 | 47 | V, IX, XII | No data | Paroxysmal |
| Our patient | 1 | 62 | XII | None | Permanent |
Figure 1: Smooth and left deviated tongue
Conclusion
When faced with cranial nerve neuritis, clinicians should actively search for a sicca syndrome, which is sometimes not spontaneously reported by patients. Examination of the mouth can be instructive and should not be omitted in the diagnosis and monitoring of Sjögren’s syndrome.
In the UK, all newly graduated doctors spend their first two years of work rotating between different specialities, usually spending four months in each placement, before applying for speciality training. This period is called the Foundation Programme.
In January 2016, the Royal College of Psychiatrists published its first ever strategy on Broadening the Foundation Programme to address the need to improve the psychiatric training experience for foundation doctors. The strategy’s aim is to “ensure the delivery of a high-quality training experience in all psychiatry foundation placements”.1
Over the last few years, the number of Foundation training posts in psychiatry in England and Wales has increased significantly. Health Education England’s aim is that, from August 2017, all Foundation doctors rotate through a community or integrated placement (psychiatry is considered a community placement).2
As such, the College highlights the need to provide a supervised and well-structured psychiatric training experience for Foundation doctors. This aims not only to improve recruitment into psychiatry but also to ensure doctors have a good working knowledge and understanding of psychiatry and psychiatric services, no matter what career they pursue.
Mentoring provides additional support and can therefore help improve the placement experience of Foundation doctors in psychiatry.
We implemented an ambitious mentoring scheme in Norfolk and Suffolk NHS Foundation Trust (the seventh largest mental health trust in the UK). This paper describes its essential components, together with a brief review of the literature on mentoring in clinical settings, focusing on Foundation placements.
Why is mentoring needed for Foundation doctors in psychiatry?
The literature on mentoring for medical professionals suggests that doctors at all stages of their career benefit from experiencing mentoring in some form. However, mentoring is of particular importance to doctors moving to a new job or organisation3, making it highly relevant to Foundation trainees.
For newcomers, most mentoring support will focus on helping them settle into their new role and become familiar with, and develop an understanding of, their employers’ expectations.4
Evidence shows that the quality of care in any organisation can be improved when clinical leaders protect time for activities such as reflection, coaching and mentoring5.
Most Foundation doctors will lack experience in psychiatry and will need a substantial amount of guidance from their supervisors and their teams.6 Research has shown that the transition from student to doctor is a difficult one and can be associated with significant levels of emotional stress.7
Foundation doctors find psychiatric assessments physically and emotionally challenging. They feel they lack the specialist knowledge and skills to deal with complex patients, especially concerning self-harm, personality disorders and eating disorders. Dealing with such complex diagnostic categories requires knowledge, skill and understanding, as well as physical and emotional robustness. Given the relative lack of focus on such topics in most undergraduate medical training, comprehensive support in psychiatric placements is essential.
Psychiatry is very different from other specialities in the way services are configured and delivered: junior doctors may face isolation as psychiatric units are typically spread across a wide geographical area and often lack a centralised meeting place for junior doctors (e.g. a doctors’ mess). In addition, they may find themselves the lone practitioner when on call, which can be daunting for many.
Clinical and educational supervision is provided to Foundation doctors in much the same way as in other rotations. However, the consultants delivering this essential support often focus only on clinical issues related to knowledge and skills. Furthermore, the best guides to new trainees regarding the idiosyncrasies of the speciality and its services are likely to be trainees who have themselves spent time in those services and are better able to detect the specific stresses that new doctors may experience but find difficult to articulate.
Furthermore, mentoring fosters a productive peer-to-peer relationship. The learning needs of the Foundation doctor can be considered alongside their personal and professional interests and lifestyle. Questions can be posed in a non-judgmental forum, without fear of being ridiculed or condemned. The fundamentals of on-call systems, clinical cases and management options can all be considered at a level appropriate to their junior grade. Tips for examination success and information about essential courses and core texts can be shared. Job choices and research opportunities can be discussed. Day to day difficulties and mismatches between expectation and reality can be identified and possibly overcome. Where this is not possible next steps can be identified, and clinical and educational supervisors can be drawn in for higher level support. The benefits of the scheme are broad.
Finally, although mentoring is different from role-modelling (teaching by example and learning by imitation), it has been shown to serve some of the same aims, including enhancing the problem-solving abilities of the mentee, improving professional attitudes, modelling responsibility and integrity, and supporting career development.8
What is mentoring?
Mentoring can mean different things to different people. The various definitions can create confusion between mentoring and other formal structures of support such as supervision, coaching, consultation, befriending or buddy systems, and even counselling. Mentoring is none of these alone, yet combines elements of them all.
The Standing Committee on Postgraduate Medical and Dental Education (UK) defined mentoring as “the process whereby an experienced, highly regarded, empathetic individual (the mentor) guides another individual (the mentee) in the development and re-examination of their ideas, learning, and personal and professional development”.9
The term “mentoring” takes us back to Greek mythology: Mentor was the friend of Odysseus entrusted with the care of Odysseus’s son Telemachus while Odysseus fought in the Trojan War. The name Mentor was later used to describe a trusted person, a supporter or a counsellor.10
Mentoring became popular as a professional development tool in private-sector organisations in the USA during the 1970s and was introduced to healthcare during the 1990s11. Since then, it has been widely used across many organisations.
Aims of mentoring
Mentoring has the advantage of being a flexible support tool, unlike more structured processes (e.g., clinical supervision or coaching) in which goals are set clearly at the start of the relationship between supervisor and supervisee. The aims of mentorship are summarised in Table 1.
Table 1- Aims of mentorship
Help the mentees take the lead in managing their career and its development.
Provide support to discuss personal issues in a confidential and secure environment
Improve both the individual and the team performance
Types of mentoring
Buddeberg-Fischer and Herta11 discussed various types of mentoring, based on the number of mentors and mentees and their professional status or grade:
One to one mentoring (between a mentor and a mentee).
Group mentoring (one mentor and a small group of mentees)
Multiple-mentor experience model (more than one mentor assigned to a group of mentees).
Peer-mentoring (the mentor and mentee are equal in experience and grade): used mainly for personal development and improving interpersonal relationships. Mentor and mentee roles can be reversed. Also called ‘co-mentoring’.
Distance or e-mentoring is becoming more popular, and it has the advantages of being “fast, focused, and typically centred on developmental needs”. 12
Structured vs. flexible mentoring
Evidence suggests that providing mentorship through a rigid, highly structured process can be counterproductive.13 Mentors and mentees usually work in different locations, making it difficult to commit to a fixed set of pre-planned meetings and discussion topics.
A further advantage of flexible mentoring is that it is not a “tick-box” exercise: it encourages informal discussion and exploration of whatever comes to mind during meetings. Nevertheless, some structure to the overall mentoring process is important, as it ensures that mentoring sessions do not drift into an informal befriending or buddy system. Table 2 sets out the main benefits of mentoring.
| Benefits to the organisation | Benefits to the mentee | Benefits to the mentor |
|---|---|---|
| Improved job satisfaction, leading to better performance, recruitment and retention of employees | Clear aims and objectives (development outcomes) from the start of the mentorship, which may include improving knowledge, performance and preparation for exams and interviews | Formal recognition of informal practice within the profession |
| Early recognition and resolution of issues facing employees | Empowerment to explore and pursue career aims | A structured programme with support and supervision for the mentor |
| A valuable source of feedback that the organisation can use to improve working conditions | Incorporation into a wider professional network, preventing isolation | Development of knowledge and skills in management and supervision |
| | Support in using reflective practice and improving self-awareness | The satisfaction of helping others and passing on knowledge |
| | Reduced stress and burnout | |

Table 2- Benefits of mentoring. Developed from Mentoring – Chartered Institute of Personnel and Development (CIPD) Factsheet, revised February 2009 14. Available from: https://www.shef.ac.uk/polopoly_fs/1.110468!/file/cipd_mentoring_factsheet.pdf
Are there any disadvantages of mentoring?
There is extensive literature on the benefits of mentoring, but is there any harm associated with it?
As with any intervention, mentoring carries some potential adverse effects. It can be perceived to “infantilise” junior employees rather than empower them10. This perception is probably more common among employees who see themselves as senior or highly competent and believe they can adapt to change quickly.
Mentoring might also hinder creativity in new employees and inhibit them from thinking “outside the box”, as it may reinforce the message that ‘this is how we do things here’.10
Clashes of personality or other interpersonal factors can lead to a troubled mentor-mentee relationship and cause distress to both parties. Hence, any formal mentoring scheme must have plans in place to ensure an amicable ending to the relationship. The multiple-mentor model mentioned earlier could also prevent such interpersonal problems or help tackle them early on.
Furthermore, some mentees may feel uncomfortable with the influence or authority of the mentor, and this may hinder the progress of the mentoring relationship. 13 This is particularly relevant when the mentor is also involved in the formal assessment of the performance of the mentee (e.g. being the line manager or supervisor) or when a mentee who lacks self-confidence is paired with an overconfident mentor.
Good mentors avoid common pitfalls in the mentoring process, such as a patronising attitude, breaches of confidentiality and offering direct advice; instead, they encourage mentees to reflect and come up with their own answers.15
Finally, mentoring can be perceived as an additional demand on doctors during their training, and some may feel that they are forced to provide it or receive it during placement. However, it must be remembered that mentoring should always be voluntary and flexible to meet the individual’s needs and not an additional ‘box to tick’ or a portfolio enrichment exercise.
The mentoring scheme for Foundation doctors in psychiatry in Norfolk
The scheme started in December 2015 and initially ran as a pilot in Norfolk with the support of all stakeholders. The mentoring scheme coordinator (YH) contacted twelve Foundation doctors by email, welcoming them to the Trust and inviting them to participate. The welcome email contained information about mentoring, including the benefits it may offer.
The voluntary nature of the scheme was highlighted so that doctors did not feel pressured to enrol.
Of the 12 doctors invited, five took up the opportunity, and uptake has remained at a similar level across subsequent cohorts of Foundation doctors. Those deciding not to enrol explained that they were happy with the support provided by their clinical supervisors. However, some doctors asked for a mentor halfway through their placement, when they felt they needed additional support; in these instances, a mentor was allocated as soon as possible.
Mentors were core and higher trainees already involved in supporting junior psychiatric trainees through informal mentoring. Their experience meant there was no need for formal training; however, reading material was circulated to highlight the roles and responsibilities of mentors and what to do if problems arose during mentoring. Monthly mentors’ meetings provided a helpful forum to discuss issues arising in mentoring and to offer peer-to-peer support.
There were also regular meetings and discussions between the mentoring coordinator, the Director of Medical Education and the Medical Director of the Trust to resolve any issues facing the Foundation doctors and to provide feedback to improve the psychiatric placement.
During the first meeting, the mentor and mentee agreed on the aims of mentoring, drawing up a list of objectives that the Foundation doctor wanted to achieve by the end of the placement. Following this initial meeting, face-to-face meetings were planned monthly throughout the placement. The mentor and mentee also agreed on the most convenient means of communication outside scheduled meetings (e.g. text messages or emails).
All mentors kept a record of the mentoring meetings, with the mentoring coordinator informed about these meetings. Issues discussed were confidential and not shared with the coordinator or supervisors unless the mentee gave specific consent.
At the end of the mentoring scheme, the coordinator collected feedback from mentees and mentors using a structured questionnaire designed on the SurveyMonkey® website. The feedback highlighted the positive aspects of mentoring as well as areas for improvement.
End of mentoring survey
Mentors reported that acting as a mentor without being involved in clinical supervision allowed them to offer objective advice and support in a way that would have been harder if they were directly involved in the workplace. One Foundation doctor experienced bullying from another member of the team who was a locum doctor. The mentor supported the Foundation doctor, and the issue was addressed and resolved promptly. There was a significant risk that they would have been left isolated and unsupported during this time if the mentor scheme had not been in place.
The topics discussed were varied, suggesting that mentoring was not limited to a single aspect of the job (see Table 3).
Table 3- Topics discussed in mentoring meetings
General guidance about the job
Assistance with completing competencies on the e-portfolio
Advice regarding personal health, bullying and career choices
Leadership and research opportunities
Mentees reported that they found mentoring useful and supportive of their learning and development. This was especially important in a speciality of which they had little experience as undergraduates. With a mentor in psychiatry, the Foundation doctors reported that they could identify areas for development, including leadership and teaching opportunities.
Overall, mentoring proved a useful tool for improving Foundation doctors’ experience in psychiatry, offering extra support during placement in a challenging medical speciality.
Table 4 summarises the areas of development suggested by the mentors and mentees.
Table 4- Recommendations from the feedback of mentors and mentees
Early allocation of mentors at the start of the placement is vital.
The need to provide e-learning and classroom training on mentoring to develop the skills of mentors
Maintain the independence of the mentor from the clinical team of the mentee
Enhance the flexibility of the scheme to meet the demands of the training and the time constraints of the mentees and mentors
Limitations
Feedback from mentors and mentees showed overall satisfaction with the scheme, but it was not possible to measure satisfaction quantitatively; this was expected from an approach that is deliberately kept outside the realm of performance management.
According to the literature, most mentoring schemes lack a clear structure, as well as a clear process for evaluating their short- and long-term benefits11. In our scheme, we addressed this by continually monitoring the mentoring process and collecting feedback from mentees and mentors. Another limitation is training the mentors themselves and finding the time to do so in a highly pressurised, heavy-workload environment.
There are many questions that the literature on mentoring is yet to answer. For example, what are the long-term benefits of mentoring? Would our Foundation doctors who received mentoring be more successful professionally and personally compared to their peers who decided not to participate? These questions remain unanswered as our pilot was not set up to address this general shortcoming of current knowledge and understanding.
Conclusions and recommendations
Mentoring provides a focused opportunity to target the wider needs of the trainee. Not only could this encourage Foundation doctors to pursue a career in Psychiatry, but it also provides the space for them to learn how to incorporate psychiatric skills into whatever speciality they choose to pursue.
As a new doctor in an unfamiliar environment, being expected, welcomed and gently guided into the job is invaluable. With the hindsight of their own training experiences (good and bad), junior doctors are ideally placed to support more junior colleagues at all levels.
There is a need to develop links with other mentoring schemes to exchange experiences and learn lessons from others. Research has shown the importance of supporting mentors in their roles through regular meetings where mentors learn from each other. 13
In our experience, the mentoring scheme worked both alongside and separately to clinical and educational supervision and the opportunity for reflective practice offered in Balint groups. Mentoring added another level of support for the Foundation doctors, which was deemed beneficial by those participating.
More research is required to determine whether mentoring increases recruitment to psychiatry. Organisations responsible for the training of doctors should support formal mentoring schemes, and supervisors should ensure that mentors and mentees have protected time in their timetables, given the benefits of mentoring to doctors and their employing organisations.
Finally, funding should be made available to train mentors in the workplace and develop their skills in helping mentees. Many private organisations offer mentoring training packages (including classroom and online training) at competitive prices. These courses provide useful resources and may help motivate mentors to continue their participation in mentoring.
Appendix:
How does mentoring work? A simple three stage model:
Figure 1- The three-stage model of mentoring. Developed from Alred, G., Garvey, B. and Smith, R. (1998) Mentoring Pocketbook. Alresford: Management Pocketbooks.
One of the unique characteristics of mentoring is that it is a partnership between two individuals (mentor and mentee) where both contribute to its growth and sustainability. It is based on trust, eagerness to learn and mutual respect. 16
Alred et al (1998)4 described a model of mentorship with three stages: exploration, developing new understanding, and action planning (Figure 1). Both mentor and mentee have certain roles and responsibilities in each stage, and it is only through their collaborative work that the benefits of mentoring can be obtained.
The stage of exploration is characterised by the building of a relationship between the mentor and the mentee. Trust, confidence and rapport start to develop and hopefully grow throughout the mentoring process. Methods such as active listening, asking open questions, and negotiating an agenda are essential to facilitate this growth.
The second stage, in which new understanding develops, is characterised by showing support to the mentee, giving constructive feedback, and challenging negative and unhelpful cognitions.
Key methods employed in this stage include recognition of the strengths and weaknesses of the mentee, giving them information, sharing experience and establishing priorities for the mentee to work on.
In the third and last stage of the mentoring, action planning, the mentee takes the lead in negotiating and agreeing on the action plan, examining their options and developing more independent thinking and decision-making abilities.
A good mentor should help the mentee to gain confidence and knowledge over time. In order to achieve this, the mentor helps the mentee to develop new ways of thinking and improve their problem-solving abilities.
Monitoring the progress and evaluating the outcomes of the mentoring process is essential to ensure that the mentoring relationship is going in the right direction.
Acknowledgement
We would like to thank Dr Stephen Jones (Consultant CAMHS and former Training Programme Director), Dr Trevor Broughton (Consultant Forensic Psychiatrist, Director of Medical Education), Dr Bohdan Solomka (Medical Director) from Norfolk and Suffolk NHS Foundation Trust for their unlimited support for the mentoring scheme.
We also would like to thank Dr Calum Ross (Foundation Training Programme Director-FY1) and Mr Am Rai (Foundation Training Programme Director -FY2), Norfolk and Norwich University Hospital for their support in implementing this scheme. Dr Srinaveen Abkari (Specialist Registrar, Norfolk, and Suffolk NHS Foundation Trust) is one of our mentors who also contributed useful ideas to the development of this paper.
Finally, we would like to thank all our mentors who provided support to the Foundation doctors; without their efforts, this scheme would not have succeeded.
Oestrogen receptors (ERs) are expressed in a large proportion (approximately 70%1) of breast cancers (BCs). Oestrogen stimulates the growth of breast epithelial cells (both normal and cancerous) by binding to these receptors. Aromatase inhibitors (AIs) prevent the conversion of androstenedione to oestrogen by the enzyme aromatase in peripheral tissues, the predominant source of oestrogen in post-menopausal women. Consequently, they are routinely offered to post-menopausal women with ER-positive early invasive breast cancer as adjuvant therapy2. However, decreased residual oestrogen levels are associated with increased bone resorption by osteoclasts. The menopause initiates an accelerated phase of bone loss lasting 4 to 8 years, followed by a slower phase that continues indefinitely3. AI-induced bone loss (AIBL) occurs at a higher rate than natural menopausal bone loss4. Women on AI therapy are therefore at increased risk of fractures5, as demonstrated by the higher fracture rate in the ATAC trial6.
Recent data have supported more prolonged use of AIs (10 years instead of 5) to achieve lower BC recurrence rates7. This may lead to changes in future clinical practice in that ER-positive BC patients may be on an even longer course of AIs. This is likely to translate into a higher fracture risk in patients on long term treatment, and bone health in these patients should remain an important consideration.
Several guidelines have emerged over the years, as summarised by Hadji et al8, to aid the assessment of fracture risk in women receiving BC treatment and the management of AIBL. In the UK, the guidance in use and recommended by the National Institute for Health and Clinical Excellence (NICE) is a UK expert group consensus position statement issued in 2008 (Guidance for the Management of Breast Cancer Treatment-Induced Bone Loss)9. It includes two treatment algorithms for the assessment and management of bone loss in early BC: one for women with adjuvant treatment-induced premature menopause and one for postmenopausal women starting adjuvant AI therapy.
Despite the existence of various guidelines on the management of AIBL in BC patients, few articles have been published on practical adherence to guidance. We carried out an audit of the management of AIBL in BC patients in a large general practice (with roughly 9,000 registered patients) in Bradford (UK). Given the small number of eligible patients in our study, we undertook a review to identify all studies in the English literature assessing practical adherence to guidance on AIBL, to establish whether the gaps identified in our practice reflect a more widespread issue.
Our study
Methods
We carried out a retrospective study in a general practice in April 2015. Using the practice’s electronic clinical system (SystmOne), we searched for all registered patients documented as currently or previously on AIs, at any point, for the treatment of BC, using the search terms “anastrozole”, “Arimidex”, “exemestane”, “Aromasin”, “letrozole” and “Femara”. We excluded male patients (not addressed by current guidelines) and patients who started AI treatment before the UK guidance was issued in 2008. For each patient we gathered data on the indication for treatment, menopausal status, the dates of initiation +/- completion of treatment, details of dual-energy X-ray absorptiometry (DEXA) scans and bone mineral density (BMD), blood biochemistry results, documented risk factors for fractures, and details of bone protection treatment. We audited our practice against the UK guidance.
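The inclusion and exclusion criteria above can be sketched in code. This is a hypothetical illustration only: the actual search was performed within the practice's clinical system, and the record fields below ("medications", "sex", "ai_start_year") are invented for the example.

```python
# Hypothetical sketch of the audit's record-selection logic, assuming
# patient records exported as simple Python dicts. Field names are
# invented for illustration; the real search ran inside the clinical system.

AI_SEARCH_TERMS = {"anastrozole", "arimidex", "exemestane",
                   "aromasin", "letrozole", "femara"}

def eligible(patient):
    """Return True if the record meets the audit's criteria."""
    meds = {m.lower() for m in patient["medications"]}
    if not meds & AI_SEARCH_TERMS:
        return False  # never prescribed an aromatase inhibitor
    if patient["sex"] == "M":
        return False  # male patients excluded (not covered by guidance)
    if patient["ai_start_year"] < 2008:
        return False  # treatment started before the 2008 UK guidance
    return True

records = [
    {"medications": ["Letrozole"], "sex": "F", "ai_start_year": 2010},
    {"medications": ["Tamoxifen"], "sex": "F", "ai_start_year": 2011},
    {"medications": ["Arimidex"], "sex": "F", "ai_start_year": 2006},
]
cohort = [r for r in records if eligible(r)]  # keeps only the first record
```

The filter applies each criterion in turn, mirroring the order in which the audit describes them: drug history first, then the two exclusions.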
Summary of the UK guidance
All post-menopausal patients starting AIs should have a baseline DEXA scan within 6 months of treatment initiation. Patients are stratified as at low, medium or high risk of fracture based on their baseline T-scores. Medium- and high-risk patients should receive vitamin D and calcium supplements, and high-risk patients should also be started on bisphosphonates. A repeat DEXA scan should be performed 2 years later for medium- and high-risk patients to reassess BMD and augment bone protection therapy as appropriate. Patients aged 75 years and above with at least one clinical risk factor for fracture should be started on a bisphosphonate regardless of their baseline BMD.
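As a rough sketch, the decision pathway above can be expressed as a single function. The T-score cut-offs below (-1.0 and -2.0) are placeholder values for illustration only, since the exact thresholds are not reproduced in this summary; this is not a clinical tool, and the guidance itself should be consulted for the actual stratification.

```python
# Illustrative sketch of the 2008 UK guidance pathway for AI-induced
# bone loss. T-score cut-offs are placeholders; consult the guidance
# for the real thresholds.

def bone_protection_plan(t_score, age, clinical_risk_factor):
    """Return recommended actions for a post-menopausal patient
    starting adjuvant aromatase-inhibitor therapy."""
    actions = ["baseline DEXA within 6 months of AI initiation"]

    # Age 75+ with a clinical risk factor: bisphosphonate regardless of BMD.
    if age >= 75 and clinical_risk_factor:
        actions.append("start bisphosphonate (irrespective of baseline BMD)")
        return actions

    if t_score < -2.0:            # high risk (placeholder cut-off)
        actions += ["calcium and vitamin D supplements",
                    "start bisphosphonate",
                    "repeat DEXA in 2 years"]
    elif t_score < -1.0:          # medium risk (placeholder cut-off)
        actions += ["calcium and vitamin D supplements",
                    "repeat DEXA in 2 years"]
    # low risk: no additional bone protection beyond baseline assessment
    return actions
```

Note how the age-based rule short-circuits the T-score stratification, reflecting the guidance's instruction that baseline BMD does not alter management in that group.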
Results
There were 12 female patients who started AIs for BC treatment from 2008 onwards. Treatment was initiated between the years 2008 and 2014 (inclusive). The mean age was 67 years (range 57-81 years) and all 12 were post-menopausal at the time of adjuvant hormonal therapy initiation. Three were initially on tamoxifen and switched to an AI after 2 years of tamoxifen therapy.
Three patients (25%) did not receive an initial DEXA scan and had no subsequent fracture risk management. One of them was 75 years of age at the time of AI initiation and on long-term steroids, and should therefore have been on a bisphosphonate regardless of BMD, but was not.
Of the remaining 9 (75%) who did have a DEXA scan:
- One was at high risk (T-score -2.7) and was appropriately started on a bisphosphonate with calcium and vitamin D supplements.
- Seven were at medium risk of osteoporotic fractures (T-score range -2.0 to -0.1); all were started on calcium and vitamin D supplements.
- Seven were eligible for a repeat DEXA scan at the time of the study, but only four had one; of these, one was found to have incurred significant bone loss and was started on a bisphosphonate.
The mean interval between AI initiation and baseline DEXA was 1.9 months (range 0.2-4.4). The mean interval between the initial and repeat DEXA scans was 4.1 years (range 2.5-5.1).
Figure 1 illustrates the proportion of scans requested by different clinical teams involved in the patients’ care.
Figure 1: Who requests DEXA scans?
Literature review
Methods
We performed a search with the following terms on the Ovid Medline and Embase databases: “bone loss”, “osteoporosis”, “osteopenia”, “aromatase inhibitor”, “breast cancer”, “guidelines” and “guidance”. Of the 137 results returned after deduplication, we selected original and review articles assessing management of AIBL against established guidelines. We retrieved further papers by reviewing the references of these articles.
Results
The original articles generated are shown in Table 1. While conference abstracts have not been included here, they have been reviewed for the purpose of our discussion.
Table 1: Original articles publishing the results of audits of bone health management in BC patients on AIs against established guidelines

| Authors | Place of study | Guidelines used to define audit standards | Sample size | Adjuvant therapy |
|---|---|---|---|---|
| Roberts R et al10 | Australia | ASCO*, ESMO*, Hadji et al8, Belgian Bone Club | 42 | Both AI and tamoxifen |
| Spangler L et al11 | Washington, USA | ASCO* | 342 | AI |
| Bosco D12 | Italy | Results from the ARBI* trial13 | 39 | AI |
| Gibson K et al14 | Colorado, USA | ASCO* | 54 | AI |
| Ligibel et al15 | USA | ASCO*, NCCN*, Hadji et al8 | 9138 | AI |
| Dong et al16 | UK | NICE guidelines based on UK expert group consensus9 | 100 | AI |
| Zekri J et al17 | Saudi Arabia | NICE guidelines based on UK expert group consensus9 | 367 | AI |

*ASCO: American Society of Clinical Oncology; ESMO: European Society for Medical Oncology; NCCN: National Comprehensive Cancer Network; ARBI: Arimidex Bone Mass Index and Oral Bisphosphonates
Discussion
The results of our audit show that we are failing to meet our current national standards pertaining to management of AIBL in BC patients. Our literature review confirms that this is a widespread issue and that results from larger studies are in agreement with ours.
A quarter (25%) of our patients never had a baseline BMD measurement; similar findings have been reported in the literature11,12,14. However, Roberts et al report much higher rates of pre-AI DEXA screening10, attributed to the presence of an institutional treatment algorithm and a survivorship programme.
We had a poor rate of repeat DEXA scans. Gibson et al and Spangler et al also noted that the rate of DEXA scanning was highest around the time of AI initiation and fell thereafter11,14. For the patients in our study who did have a repeat BMD measurement, practice was not in line with recommendations: the interval between the initial and repeat DEXA scans (mean 4.1 years) was much longer than the recommended 2 years. This may be because the breast surgery team sometimes made different recommendations (intervals of 3 to 5 years were recommended in some clinic letters written to the GP by the breast team).
Gibson et al found that only 75% of their patients were on calcium and vitamin D, deviating from the ASCO guidelines against which they audited their results14; the ASCO guidelines recommend that all BC patients receive calcium and vitamin D therapy. In some studies10,12, not all women diagnosed with osteoporosis were started on bisphosphonates. Although women diagnosed with osteoporosis in our cohort were started on bisphosphonates, the suboptimal uptake of DEXA scans means we may have missed the diagnosis in some patients.
From the articles included in our literature review, several reasons have been suggested for deviation from guidelines in the management of AIBL in BC patients. Lack of awareness of guidelines, especially among general practitioners (GPs), has been recognised as a barrier, as has the expectation that other healthcare professionals should be addressing this aspect of care10. In our study, DEXA scans were mostly requested by the specialist breast team initiating AIs, or by the GP at the request of the breast team. Based on our experience, it is not clear with whom the responsibility for bone health management lies: the breast surgery team, the oncologist or the GP. In a survey of 307 UK-based breast surgeons and oncologists, 57% of responders felt that oncologists should be responsible18. In practice, patients may be discharged from specialist clinic follow-up while still on hormonal therapy, with GPs expected to continue their care. When this happened in our cohort, there was no evidence of clear written communication from specialist teams to the GP regarding the outstanding aspects of care that the GP would be expected to follow up.
An analysis of five different guidelines regarding antiresorptive treatment in postmenopausal women with hormone-receptor positive BC showed that little consistency exists among the five guidelines19. The variety of guidelines and recommendations regarding bone loss in BC patients probably leads to inconsistency in practice. In our study, specialist teams have sometimes recommended an interval of 3 to 5 years between BMD tests, deviating from the national recommendation of 2 years. This can translate into confusion when care is taken over by the community team after the patient is discharged from the specialist team.
Recommendations
We therefore suggest that institutional guidelines on bone health management in BC patients on AIs (as well as other hormonal therapies) should be created to improve awareness amongst clinicians, as this has been shown to improve rates of DEXA scanning10. Local guidelines should closely mirror national guidelines to allow delivery of standardised care across the country, but should include clear recommendations as to which local team should be responsible for bone health management, as well as recommendations regarding the creation of a care plan for general practitioners when the patient is discharged from specialist teams.
A UK-based study has shown that a “one stop” nurse-led bone health clinic within the breast care service can be a cost-effective way of ensuring adherence to guidelines20. Patients to be started on an AI are identified by the multidisciplinary team (MDT). They are referred to the clinic which arranges a baseline DEXA and other appropriate investigations. Such a clinic may be a consideration in institutions where resources allow. Studies have also shown that simple interventions such as presentations at MDT meetings and display of posters to increase awareness of guidelines amongst clinicians have led to significant improvement in compliance16,17.
Lack of patient awareness of the negative effects of AIs has also been highlighted in the literature21. Improving patient education can improve patients’ compliance with treatment and decrease the rates of unattended appointments for BMD screening. It can also give patients more control over the management of their bone health, as they may be able to alert their clinicians when they notice a gap (e.g. if they have failed to receive an appointment for a DEXA scan). Ligibel et al have noted that women from areas with lower levels of education are less likely to undergo BMD tests15. Patient education can therefore also help reduce the impact of such health-seeking behaviours on compliance with bone health management.
Current guidelines make no mention of bone health management in male BC patients on hormonal therapy. Although they constitute a small percentage of BC patients, it would be reasonable to include recommendations on the management of their bone loss in updated guidelines so that this aspect of their care is not neglected.
Strengths and limitations
Our audit is limited by its small sample size and its retrospective nature, which meant that we relied on documentation of variable accuracy. We had no information regarding patients who failed to attend appointments despite their clinicians’ invitations for DEXA scans or biochemistry tests, and no information on compliance with medication. However, the results from recent conference abstracts on UK-based studies22,23 generated from our literature review reflect our results, suggesting that this is indeed a national issue. The literature review presented is the most extensive currently available on the subject, gathering up-to-date evidence on worldwide compliance with guidelines on AIBL.
Conclusion
Although the sample size of our study does not allow us to draw conclusions purely based on our data, the literature review that it has prompted has shown that several years after issuance of various guidelines on the management of BC treatment-induced bone loss, in particular AIBL, important gaps still exist in practice. We have presented a summary of up-to-date evidence in the literature to identify potential reasons for this and possible solutions to the current problems, hoping that this will improve current practice.
However, the current guidelines are now several years old. In the last few years, there has been considerable research on the role of bisphosphonates in BC. A consensus paper assessing recent evidence has suggested that bisphosphonates should be considered for the prevention of bone loss in patients with a T score of <-2.0 or with at least two clinical risk factors for fracture24. The paper also suggests considering the use of bisphosphonates as adjuvant BC treatment, based on a large meta-analysis including 18,766 patients which demonstrated significant benefits of bisphosphonates in terms of prevention of bone metastases and BC survival in postmenopausal women25. This may well change routine adjuvant treatment of BC in the next few years and must be taken into consideration if and when new guidelines on the management of AIBL are issued, or when writing local guidelines.
Metastatic bone disease is a relatively common event in the advanced stages of many malignancies.1 Bone-modifying agents decrease the incidence of skeletal-related events (SREs) such as spinal cord compression and bone fracture, as well as the need for skeletal radiotherapy or surgery.2
Bone modifying agents such as intravenous bisphosphonates (IV BPs) (e.g. pamidronate and zoledronic acid) and denosumab are approved for prevention of SREs. IV BPs are primarily used and effective in the treatment and management of cancer related conditions such as multiple myeloma (MM), and breast cancer with skeletal metastases, because they reduce bone pain, hypercalcemia, and the risk of pathologic fractures.3
Denosumab, a receptor activator of nuclear factor kappa-B ligand (RANKL) inhibitor, represents a breakthrough in the treatment of osteoporosis, MM, and bone metastases. The Food and Drug Administration (FDA) approved it in 2010 for the prevention of SREs in patients with bone metastases and in 2011 for the prevention of endocrine-therapy induced bone loss in patients taking aromatase inhibitors for breast cancer and in patients with non-metastatic prostate cancer.
Three international, randomised, double-blind, double-dummy phase III studies have evaluated denosumab versus zoledronic acid for the treatment of SREs in breast and prostate cancers, and in combined solid tumours and MM. Denosumab’s superior efficacy over zoledronic acid was demonstrated in the studies of patients with advanced breast or prostate cancer, as well as in a pre-specified integrated analysis of all patients enrolled across the three studies.4
In the 2014 position paper of the American Association of Oral and Maxillofacial Surgeons (AAOMS), the nomenclature “bisphosphonate-related osteonecrosis of the jaw” was changed to “medication-related osteonecrosis of the jaw” (MRONJ). MRONJ is defined as cases in which all of the following 3 characteristics are present5:
current or previous treatment with antiresorptive or antiangiogenic agents
exposed bone or bone that can be probed through an intraoral or extra-oral fistula in the maxillofacial region that has persisted for longer than 8 weeks
no history of radiation therapy to the jaws or obvious metastatic disease to the jaws
Other terminologies used previously include “denosumab related osteonecrosis of the jaw” (DRONJ), and “antiresorptive agent-induced ONJ” (ARONJ).
The aetiopathogenesis of MRONJ related to denosumab therapy remains enigmatic, and hypotheses have focused on reduced bony turnover, infection, toxicity of the soft tissue, and antiangiogenesis. The epidemiology also remains unclear, and reported incidence varies widely.6 Overall, it is estimated that bone necrosis can develop in about 0.7-1.9% of patients with malignancy who are given high-potency IV BPs (such as zoledronic acid), and in 0.01–0.1% of those with osteoporosis who take low-potency oral BPs (such as alendronate). Data relevant to denosumab given subcutaneously in patients with metastatic cancer and osteoporosis seem to replicate those seen when IV high-potency BPs are administered.7 The risk of osteonecrosis of the jaw (ONJ) is higher in patients exposed to concomitant antiangiogenic medication. An individual’s risk of ONJ is further determined by factors such as the potency of the agent, cumulative dosage or duration of antiresorptive treatment, route of administration, comorbidities and local factors such as periodontal disease.8,9 Oral hygiene plays a significant role, with evidence supporting a strong correlation between bacteria associated with periodontal disease and MRONJ.10
MRONJ typically manifests as painful and often infected areas of necrotic bone, which may subsequently lead to severe chronic pain and facial disfigurement. This adversely affects the ability to eat and speak and lowers quality of life. Adverse events related to RANKL inhibitors are usually considered infrequent. However, in our recent clinical experience at Sheffield Teaching Hospitals NHS Trust, several new cases have presented within a very short period of time. In this paper we present a case series of MRONJ related to denosumab therapy, since adverse events of denosumab in the mandible or maxilla have received relatively little attention.
The aim of this article is to highlight the elevated risk of MRONJ in patients receiving denosumab treatment and educate all health care providers involved in the management of such patients. Furthermore, the mechanisms of denosumab, comparison with bisphosphonates and the reported management strategies are reviewed.
Mechanism of Denosumab
Denosumab is an antiresorptive agent that exists as a human IgG2 monoclonal antibody and inhibits the binding of the receptor activator of nuclear factor kappa-B ligand (RANKL) to RANK (Receptor Activator of Nuclear Factor kappa-B). This binding normally signals the proliferation of osteoclasts, as RANK is expressed on the surface of osteoclasts and their precursors, whereas its ligand, RANKL, is a membrane-bound protein expressed by bone marrow stromal cells, osteoblasts and T-lymphocytes. The activation of RANK is integral to the function of osteoclasts. Osteoprotegerin binds to membrane-bound RANKL on osteoblasts, which in turn decreases osteoclastic activity and thereby reduces bone turnover. Denosumab acts similarly to osteoprotegerin but has a higher affinity for RANKL.11-13
Denosumab follows nonlinear, dose-dependent pharmacokinetics. The bioavailability of one subcutaneous denosumab injection is 61% and serum concentrations are detected within 1 hour. Maximum serum concentrations occur in 5-21 days and cessation of osteoclast activity occurs within six hours of the subcutaneous injection. The normal function is restored approximately six to nine months later, whilst bone turnover returns to normal shortly after this.14 Based upon monoclonal antibody pharmacokinetics, denosumab is most likely cleared by the reticuloendothelial system with minimal renal filtration and excretion thus avoiding nephrotoxicity. Its elimination half-life is 32 days, and it does not incorporate into bone.15
It is currently marketed as Prolia® and Xgeva®, both approved by the FDA. Prolia® is administered subcutaneously every six months and has been shown to reduce the incidence of new vertebral, non-vertebral, and hip fractures in osteoporotic patients.16,17 Xgeva® is also effective in reducing SREs related to metastatic bone disease from solid tumours when administered subcutaneously on a monthly basis.17,18
RANKL Inhibitors and BPs Pharmacokinetics
There are fundamental differences between denosumab and BPs with regard to their mode of action. Denosumab is an antibody and acts extracellularly, whereas BPs act intracellularly. As such, BPs must be present in the circulation and available for reuptake into bone for prolonged periods to function.19 There is no evidence of drug recycling with RANKL inhibitors, and it is therefore suggested that their adverse effects are reversible on discontinuation, leading in fact to a transient rebound phenomenon that can be suppressed by subsequent treatment.14,20 On the other hand, recycling of BPs in the circulation has been proposed as a reason for their long duration of action even after cessation, which can be up to 12 years.
The US FDA-approved manufacturer’s package insert for both zoledronate and pamidronate states that “there are no data available to suggest whether discontinuation of bisphosphonate treatment reduces the risk of ONJ in patients who require dental procedures during therapy and that clinical judgment of the treating physician should guide the management plan of each patient based on individual benefit/ risk assessment”. To date, the package insert for denosumab does not address the issue of treatment continuation in patients who develop MRONJ.
Denosumab is a circulating protein capable of distributing throughout the extravascular space. Unlike BPs, it is expected to reach all sites within bone, including intracortical sites. BPs have a strong affinity for hydroxyapatite and bone mineral, which limits their even distribution throughout the skeleton, particularly to sites deep within the bone.19,21 This may explain the more profound inhibition of bone remodelling seen with denosumab than with BPs.
Case Series
Case 1
A 55-year-old lady was referred to a dedicated Oral Surgery nerve injury clinic for an opinion on, and management of, her left-sided inferior alveolar nerve (IAN) paraesthesia. The patient presented with a history of numbness in the left-sided inferior alveolar nerve distribution following removal of the left mandibular second premolar (LL5) in July 2014. She was asymptomatic until the LL5 was removed and had since suffered constant pain and numbness. A year later, she had removal of the left mandibular first molar (LL6) and gave a history of recurrent infections and excruciating pain in her mandible over the past two months. On presentation she had an obvious submental swelling and left-sided IAN anaesthesia.
Medically, she was diagnosed with breast cancer in 2011, for which she underwent wide local excision followed by chemotherapy. She was then enrolled in a clinical trial, which, following liaison with the Oncology team, we identified to be a denosumab trial. She is currently receiving intravenous denosumab every three months.
Clinical examination revealed a grossly mobile anterior mandible with widespread bony necrosis and associated osteomyelitis. Sensory testing revealed complete anaesthesia in the left sided IAN distribution secondary to MRONJ.
An OPG (Orthopantomogram) and CBCT (Cone-Beam Computerised Tomography) revealed an extensive patchy area of ill-defined bone loss in the anterior mandible extending posteriorly to the premolar/molar areas bilaterally (Fig 1).
Figure 1 A) OPG showing non-healing sockets in the left mandible with extensive bony destruction together with periosteal reaction extending to the right mandible as shown by the arrows.
Interestingly, the bony destruction was evident bilaterally even though the patient had only had teeth extracted in the left mandible (Fig 1). This could represent spontaneous ONJ in the right mandible, or an extensive ONJ arising from simple extractions on the left side.
Figure 2 3D reconstruction of the CBCT image demonstrating extensive bony destruction involving the lower border of the anterior mandible, in keeping with a spreading chronic bony infection and the clinical presentation of submental swelling, as shown by arrows.
Case 2
A 66-year-old female was referred by her general medical practitioner (GMP) with a 3-month history of delayed healing following a tooth extraction in the left posterior mandible. She had moderate to severe discomfort and reported multiple previous infections and purulent discharge from the area, which had been treated with multiple courses of antibiotics. In addition, she reported discomfort from the root-treated right mandibular first and second premolar teeth (LR4 and LR5).
Medically, she was diagnosed with breast cancer over 10 years ago, for which she underwent resection followed by chemotherapy. Three years ago, she was diagnosed with metastatic deposits and has been receiving intravenous denosumab every six weeks since then. Other medications include steroids, chemotherapy agents, antihypertensives and analgesics. She had not received any radiotherapy or BP treatment in the past.
Clinical examination revealed a heavily restored dentition with chronic generalised periodontal disease. There was evidence of widespread bone loss clinically and radiographically. The slowly healing socket in the left mandible was visible but did not have any exposed bone (Fig 3). The lower right first and second premolar teeth (LR4 and LR5) were clinically and radiographically sound.
Figure 3. Non-healing socket in the left posterior mandible with no evidence of exposed bone or suppuration, as shown by the white arrow. Gingival recession (black arrows) is evident at the LL6 and LL5 teeth, in keeping with chronic periodontal disease.
Figure 4 Coronal sections of CBCT A and B showing multiple lytic areas within the inferior cortex of the mandible and incomplete healing of the extraction sockets.
At follow-up appointments, the patient suffered multiple repeated infections in the right and left posterior mandible and, due to deteriorating periodontal disease, the LR4, LR5 and LR6 were extracted by her own general dental practitioner (GDP) because of severe mobility. All three extraction sockets failed to heal (Fig 5), leading to an extensive area of exposed bone in the right mandible, extending from the lower right first premolar (LR4) to the lower left first molar (LL6) region. Conservative management was instituted, including antibiotics, chlorhexidine mouthwash and routine oral hygiene appointments. Selective sharp bone trimming and three sequestrectomies were undertaken. At the same time, liaison with the patient’s oncologist resulted in cessation of the denosumab therapy and complete resolution of her oral symptoms.
Figure 5 Clinical picture of exposed necrotic bone (white arrows) following simple extractions of periodontally involved teeth.
Case 3
A 76-year-old lady was referred to the Oral Surgery department by her GDP with a 3-month history of a non-healing lower left first premolar (LL4) socket. The patient was treated with two courses of antibiotics prior to referral, which provided only temporary relief of her symptoms.
Medically she was diagnosed with breast cancer 10 years ago and recently commenced intravenous denosumab for metastatic disease. She also receives hormone therapy and palliative radiotherapy to the spine.
On clinical examination, there was a partially healed LL4 socket with a rather granulomatous appearance. There was no clinical evidence of suppuration or bony exposure. Radiographs confirmed the absence of bony infill in the socket. Local debridement and biopsy of the granulomatous tissue were performed to exclude any metastatic disease. The biopsy report confirmed the presence of inflammatory tissue.
Figure 6 CBCT scan; A and B sagittal views, C axial view and D 3D reconstruction. Extensive periosteal reaction extending from the midline of the mandible to the left molar region is evident in keeping with chronic osteomyelitis secondary to MRONJ.
Liaison with the microbiologist led to a long-term antibiotic course to arrest the osteomyelitis. Further liaison with the oncology team resulted in denosumab being stopped for 4 months. At subsequent review appointments, the patient’s symptoms had improved; however, there is now an area of exposed bone in the LL4 region, as shown in Fig 7.
Figure 7 Clinical photo illustrating exposed bone (white arrow) in the LL4 region without evidence of local infection.
Case 4
A 65-year-old lady was referred to the Oral Surgery department by her GDP with a history of a sore upper jaw underneath dentures that she was unable to wear.
Medically, she was diagnosed with disseminated breast malignancy including bone metastases 3 years ago, for which she is on exemestane and intravenous denosumab monthly.
Clinical examination revealed multiple draining sinuses in the anterior maxilla. There was a partially healed LL4 socket with a rather granulomatous appearance and tenderness on palpation. There was neither discharge from the area nor any exposed bone. Radiographs confirmed the absence of bony infill in the LL4 socket. Local debridement and biopsy of the granulomatous tissue were performed to exclude any potential malignancy, and it was confirmed as inflammatory tissue.
Figure 8 CBCT scan; A axial view, B and C 3D reconstruction. A 25mm fragment of right anterior maxilla is beginning to sequestrate. This extends from the anterior margin of the right maxillary sinus approximately to the position of the upper left lateral incisor, crossing the midline. The sequestrated fragment involves the lateral margin of the nasal cavity. There is bilateral moderate mucosal thickening in the maxillary sinuses. Extensive periosteal reaction extending from the midline of the mandible to the left molar region is evident in keeping with chronic osteomyelitis secondary to MRONJ.
Table 1 Summary of cases

Case | Indication | Duration (months) | Clinical Findings
Case 1 | Metastatic deposits from primary breast malignancy | 48 | Anaesthesia in the distribution of the left inferior alveolar nerve; osteomyelitis; excruciating pain
Case 2 | Metastatic deposits from primary breast malignancy | 36 | Chronic generalised adult periodontal disease; non-healing extraction sockets; exposed bone persisting for longer than 8 weeks; severe pain
Case 3 | Metastatic deposits from primary breast malignancy and myeloma | 24 | Non-healing extraction socket with granulomatous tissue; severe pain
Case 4 | Disseminated breast malignancy including bone metastases | 30 | Multiple draining sinuses in anterior maxilla; non-healing extraction socket with granulomatous tissue; severe pain
Discussion
ONJ associated with antiresorptive therapy deserves distinction from other causes, diseases and medications associated with the development of osteonecrosis of the jaw. AAOMS recently published stage-specific treatment recommendations for MRONJ.22 The various stages and suggested stage-specific treatment strategies are not evidence-based, and in particular, stage 0 disease is not universally accepted. The AAOMS recommendations echoed those stated in previous years for BRONJ, namely supporting conservative therapy, with aggressive surgery offered only to symptomatic patients. In contrast, the MRONJ guideline report from the German Dental and the German Oral and Maxillofacial Associations refrains from recommending therapy, at least for certain stages of the disease. This might be attributed to the pitfalls of current MRONJ criteria. Furthermore, given the paucity of guidelines specifically related to RANKL inhibitors, no agreement exists on a universally acceptable treatment strategy for such cases.
Management strategies are largely based on expert opinion rather than experimental data. They include preventive, conservative and surgical modalities. Prevention of the condition is the gold standard. It is highly recommended that all patients have a comprehensive dental examination and preventive dentistry (pre-emptive extraction of unsalvageable teeth and optimisation of periodontal health) before commencing antiresorptive therapy.23,24 Meticulous oral hygiene should be maintained during the course of therapy, as periodontal disease and its associated bacteria are claimed to be implicated in this condition and were also observed in our cases.
The success rate of conservative treatment regimens ranges from less than 20%25,26 to above 50%27,28, although some cases become chronic and develop complications.29
Microbial cultures from areas of exposed bone are not always helpful since normal oral microbes are isolated. However, when there is extensive soft tissue involvement, microbial cultures may help to define comorbid oral infections, which may guide the selection of an appropriate antibiotic regimen.30
Regardless of the stage of disease, areas of necrotic bone that are a source of chronic soft tissue irritation and loose bony sequestra should be removed or recontoured so that soft tissue healing can be optimised. This is in line with our clinical experience. The extraction of symptomatic teeth within exposed, necrotic bone should be considered as it appears unlikely that extraction will worsen the established necrotic process. Otherwise, surgical resection of necrotic bone should generally be reserved for refractory or advanced cases.31 Resection may occasionally result in even larger areas of exposed and painful infected bone.32
A recently published MISSION study7 reported that the AAOMS system misclassified or underestimated the severity of the disease in about 1 in 3 patients, particularly those with MRONJ stage 1 and 2. The authors conclude that these findings may explain why the treatment of stage 3 ONJ, namely surgery with a success rate over 85%33,34, has been deemed more predictable and therefore yields more favourable outcomes than the treatment of stages 1 and 2.35
Denosumab is characterised by reversibility of its effect after treatment discontinuation, in contrast with bisphosphonates. This is in line with our findings since cessation of denosumab in two cases helped to improve their symptoms significantly.
MRONJ has been reported to occur after a mean administration period of 39.3 months and 35 infusions in oncology patients.23 It is interesting that all published cases of denosumab-related ONJ occurred early after commencement of therapy, independent of the number of previous administrations.36,37 In our experience, all patients developed MRONJ within the first 3 months of teeth extractions; well ahead of the reported period and number of administrations of denosumab.
Furthermore, all four cases developed extensive lytic lesions following the removal of a single tooth. The common radiographic findings in all cases include:
non-healing extraction socket
areas of focal and diffuse sclerosis
thickened lamina dura
early sequestrum formation
reactive periosteal bone
osteolysis of cortical and spongious bone
Although these findings are common in MRONJ cases, our patients showed extensive bony involvement and rapid progression of ONJ, demonstrating a far more aggressive course of disease than that seen with BPs.
In our experience, not all patients are adequately informed of the risks and adverse events of denosumab therapy. This highlights the importance of patient education and inter-professional communication regarding the prevention and best management of MRONJ cases. In one of our cases, the lack of patient education concerning denosumab side effects and the failure of inter-professional communication had a detrimental effect on the patient’s overall management and, subsequently, on the patient’s oral health.
Table 2 Important Points
All patients should have a dental check-up and receive any necessary dental or surgical treatment before starting any antiresorptive medication, to avoid the possibility of complications associated with these agents
Regular dental check-ups are strongly recommended to prevent complications
Patients should be advised to contact their doctor, dentist or oral surgeon immediately if they notice any of the following symptoms:
Feeling of numbness, heaviness or other unusual sensation in the jaw
Pain in the jaw / toothache
Delayed healing to the gums, especially after dental work
Bad taste / infection
Swelling of the jaw
Loose teeth
Exposed bone
Pus like discharge from the affected area
Conclusion
We present our experience of denosumab-related ONJ from Sheffield Teaching Hospitals NHS Trust. This case series contributes to the sparse clinical literature on this topic. The pathogenesis, treatment and outcome of ONJ are complex and multifactorial. Patients treated with denosumab may be more prone to developing ONJ, even without a precipitating dental event. ONJ may have a more aggressive profile and develop significantly earlier in patients receiving denosumab. Prevention of ONJ remains the most important goal, and this is most directly accomplished by avoiding invasive dental procedures and establishing inter-professional communication.
Medicine is a dynamic science and discipline that arises from the human need to confront suffering and pain and to offer hope of a better life. Since its inception, medicine has followed a developmental path that has brought great advances in science. Part of this momentum is defined by its reason for being, or rather its primary goal: maintaining the health status of different populations1. This simple statement conceals an object of great complexity that has received attention from many physicians and researchers from ancient times to the present; in the tenth century, Ibn Hazm, a father of modern medicine, asserted that absolute truth in this science would be impossible, since its dynamism is always present and today’s truth may prove to be mistaken, or significantly modified, in the future2.
The goal of maintaining adequate health status and preventing disease has kept biomedical research in a relentless race whose pace increases day by day. Today medical science is one of the most important sources of scientific innovation around the world; hundreds of manuscripts on health issues are published every day in multiple languages, in addition to numerous books and other non-official publications2-3. The growth of the medical literature over the last decade suggests that medicine is developing at a breakneck pace, both in the magnitude of the information obtained and in its complexity.
However, the real driver of this phenomenon in the biomedical sciences is likely the new funding available for biomedical research, from both the biomedical industry and government agencies. Each year new sources of money are offered to scientists to encourage innovation and the development of new ideas, and the resources available for this goal continue to increase. The OECD (Organization for Economic Co-operation and Development) suggests that countries spend about 500 billion dollars a year on research in the biomedical sciences, including private laboratories and research institutes4. Medicine has become one of the most promising fields economically, and today it is considered one of the sciences with the greatest future and the widest range of prospects5.
Despite this encouraging situation, the development of modern medicine faces a fundamental problem: the doctors and other scientists in charge of biomedical innovation are not trained in administration. This problem is clearly seen in various situations in current medical and research practice, and is reflected in wasteful resource allocation processes6.
Each organization providing biomedical research resources requires that those resources are managed and used appropriately. Institutions must distribute funding across various interests, not only in the biomedical sciences. They must also verify the novelty and ethical viability of proposals, with the aim of supporting ethically approved studies and avoiding catastrophes in poorly designed trials. Nowadays, as a result of better “quality control processes”, grant submissions involve a great number of administrative steps before any proposal is ready to submit. In this verification process, not only scientific questions have to be addressed; many administrative issues have to be explained in detail, including budget utilization and personnel management1, 4, 7.
Academics and scientists in universities are trained mostly in the technical aspects of their daily work: the physician (M.D.) is trained to focus on the clinical management of patients, while the doctor of science (Ph.D.) is trained to handle samples to obtain the best results in planned experiments. In both cases, scientists are educated largely outside the business and administrative fields, leading to important limitations in the management of resources (personnel and funding)3, 5.
Nowadays, even the simplest application for resources requires the approval of at least five different offices responsible for reviewing ethical, financial, legal, logistical, and scientific issues. Returning to the main problem of this discussion, the fact that medical researchers are not trained as integrated researchers (science and business), we can deduce that this condition may generate a bottleneck, particularly at a time when biomedical research is gaining so much power and interest in industry8.
A potential solution to this dilemma is to offer management and administration training to researchers so that they can efficiently manage the resources they request. Master's degree programs in health sciences management and administration are now available and have gained popularity over the last five years. Researchers are increasingly committed to presenting stronger profiles in their grant applications. Modern scientists must have proven knowledge of costs and productivity that allows them to perform biomedical research of scientific quality combined with sound financial management in terms of production2, 5, 8.
Postgraduate medical training should equip trainees with the skills, knowledge and attributes for independent practice1. They need to be equipped with the skills to become lifelong learners and continually develop their abilities throughout their careers by learning from colleagues, mentors, patients and disease. The challenge for clinical teaching is how to provide an optimal learning environment in which trainees can achieve their competencies for practice within a defined training rotation; both the limit on the number of hours within a working week and the balance between learning and service commitments can negatively impact on the educational experience of trainees2. Moreover, trainees need to balance their own development of skills, knowledge and attributes for independent practice against the requirement to provide high quality and safe healthcare3. The appropriate level of supervision must be provided to trainees performing any patient interaction, gauged by the trainer-trainee relationship, regular assessment and feedback. The clinical workload of a trainee needs to be finely balanced between overstretching them with tasks outside their competencies and leaving them with all the routine and menial tasks4. Thus, whilst trainees should work within their competencies, they must be given the opportunities to expand their repertoire of skills, which may result in errors (and potentially patient harm); supervision should limit these errors, which should be reflected on to provide a learning opportunity within a ‘no-blame’ culture5. As a trainee gains competence in the necessary skills, the amount of supervision required can be stepped down, until distant supervision (i.e. advice via telephone) may be all that is required.
An understanding of how each learning environment within the hospital setting can be maximized may enhance the learning opportunities conferred upon trainees. Both technical skills and the professional attributes of being a clinician can be learnt in clinical and non-clinical environments. These learning environments will be explored in the subsections below.
Bedside Teaching and Ward Rounds
Bedside teaching is a stalwart of medical education, allowing clinical history and examination to be performed under guidance, in an appropriate setting and with relevant clues (observation charts, oxygen, etc.) present. This patient-trainee interaction provides an opportunity to develop professionalism and communication, and can also be the source of training of diagnostic techniques ranging from venesection and cannulation to more invasive techniques (e.g., pleural aspiration, drainage of ascitic fluid)1.
Presentation of patients during ward rounds allows a professional conversation between trainees and trainers, which justifies the trainee's role in management and provides an insight into their understanding and thought processes6. The multidisciplinary nature of rounds creates a community of practice7, allowing social learning to occur and an opportunity to voice differing perspectives on patient care3. To maximize these learning opportunities, learning objectives can be discussed before the round commences and reflection undertaken once it has been completed6. Teaching rounds should be carried out when the ward is quiet, at a suitable pace, with regular questioning and opportunities for trainees to ‘lead’ the process8. Factors that hinder this educational process include time pressures, patients not being available, and the availability of trainees8.
Outpatient Clinics
Outpatient clinics provide a mixture of new and follow-up patients, enabling a trainee to learn the management of patients in an ambulatory setting. Trainees may be in the same room as their supervisor (learning the basics of the consultation), or can practice semi-autonomously as their experience increases (discussing with their supervisor as required); they must select an appropriate investigation and treatment plan, with a time frame for review once the investigation or intervention has been performed3. Outpatient teaching is more highly valued by trainees and students than ward-based tuition9. Factors that hinder this educational opportunity include room availability, time constraints, staffing levels and attitudes to teaching9.
Operating Theatre and Interventional Suites
Invasive procedures should be performed by adequately trained (or supervised) personnel in the relevant area of the hospital (e.g. endoscopy, interventional radiology suites, theatre), with the necessary equipment and monitoring for the technique to be performed. Even before patients enter these environments, trainees have an opportunity to review the patient and their relevant investigations, discuss the procedure with the patient and obtain consent for the intervention1. Trainees can learn a wide spectrum of skills within these environments, both technical (procedural and anaesthetic related) and non-technical, including human factors, anatomy, identification of instruments, aseptic technique, effective hand-washing and donning of surgical gowns10. Teaching invasive procedures represents a dichotomy for clinicians: not only do trainees need to gain exposure and experience in the relevant technique, but patients also need to be protected from undue harm. Prior to undertaking an intervention, trainees should be familiar with the relevant anatomy and physiology of the system they are about to operate upon, should have watched the procedure being performed, and may have learnt its basics in a simulated setting.
Trainees must be able to self-reflect on their own skills and record the number of procedures they have performed (which can act as a proxy for ability) to ensure that the correct level of supervision is provided alongside an intervention of suitable difficulty. Trainers need to be sure that their trainees have the necessary skills and knowledge to perform a technique, with experience often gained in a stepwise manner reflecting both the difficulty of the intervention and the trainee's growing skills, competence and confidence11. This skills acquisition should be accompanied by regular discussion and feedback to maximise learning opportunities; when no supervision is available, trainees should consider video-recording the procedure, as this allows reflection and review at a later date. A video diary can also be used as a portfolio of a trainee's repertoire from beginner to expert during their training rotation. The challenge for trainees is to achieve competence in the relevant invasive technique within their training rotation; the number of interventions required to gain competence will vary between each trainee and technique11.
Handover
Handover allows the care of patients to be transferred from one group of individuals to another on a temporary or permanent basis. It confers an opportunity to present a clinical synopsis of patients with the key information needed to ensure continuity of care and maintain patient safety3. Most handovers are trainee led, which provides an opportunity for peer learning, checking comprehension and sharing interesting cases or tips for practice12. Handover should be considered a high-risk procedure, as communication errors can result in vital information being omitted; as such, the process should be undertaken in a suitable environment away from distractions, in a structured written and oral manner supported by an electronic format12. A further review at the patient's bedside can be performed if required, which can highlight high-risk patients.
Multidisciplinary Team Meetings
Multidisciplinary team (MDT) meetings are small formal meetings focused on all aspects of a patient's care that involve a wide range of medical personnel, nursing staff and allied health care professionals1. Meetings ensure that evidence-based guidelines are followed and help to streamline management, removing unnecessary delays in treatment and improving cost effectiveness. MDTs represent a Community of Practice7, allowing social learning to occur as each individual shares their relevant expertise; MDTs enable best practice to be shared and help break down barriers between different specialties. Trainees can learn from the didactic teaching that occurs within the MDT (in relation to clinical details, investigation and management), but can also contribute to the meetings and practice their presentation skills.
Morbidity and Mortality Meetings
Morbidity and Mortality (M&M) meetings can help ascribe accountability and be used to highlight improvements in patient safety. They provide an opportunity for professional education, especially if the discussion can be held within a no-blame culture, and the meeting can voice discrepancies in how to manage patients, especially in ambiguous situations13. Trainees may be tasked with presenting a case and the potential learning aspects associated with patient care.
Grand Rounds/Formal Teaching
Grand rounds are traditional formal teaching opportunities that typically revolve around a case, whereby salient findings are presented prior to a discussion of management. These meetings give trainees opportunities to present cases and learn management, but their educational benefit may be decreasing as they are replaced by lectures of limited clinical relevance14; “audience apathy, deteriorating decorum and shrinking attendance” have further diminished these learning opportunities14. Targeted teaching and the establishment of learning objectives for trainees can improve the educational content, and the provision of feedback to speakers can also enhance these meetings.
Journal Club
Journal clubs confer an opportunity for current scientific research and developments to be presented, critiqued and discussed by trainees. They provide a chance to appraise the current literature and consider how it can be translated into evidence-based patient care15. Journal clubs tend to be voluntary and to occur outside working hours, resulting in highly motivated groups of participants who are protected from interruptions.
eLearning and mLearning
Electronic learning (eLearning) and multimedia learning (mLearning) enable trainees to work informally, away from desks and computers, and at their own pace through a series of educational modules. Any intervention that engages trainees and promotes learning should be encouraged, and these online learning platforms should be combined with traditional learning resources to ensure that all aspects of the curriculum are covered. mLearning in particular can be ‘dipped into’, allowing learners optimal flexibility in how and when they use it. eLearning can also be referred to at the point of care when a trainee is unsure of how to proceed with patient management1. These increasingly important but under-utilized resources should be supported by the institutions that educate both undergraduate and postgraduate trainees. By developing a virtual learning environment, individually tailored learning programs can be created that allow trainees to develop and control their own online learning16.
Simulation
Simulation is becoming increasingly important for medical training: anything can be simulated, from clinical skills to human factors training, for both individuals and teams, focused on patient care and current medical practice in both the undergraduate and postgraduate setting. The availability of simulators, coupled with competency-based training and a decreased amount of training within the workplace, has led to increased use of this teaching format1. In addition, trainees need to understand how to use certain pieces of equipment prior to employing them on patients, and this familiarity can only be gained in a simulated setting. Simulation can occur either within the workplace (allowing point-of-care simulation to see how teams react in a situation) or on formal taught courses; it can be low-technology and cheap (e.g., tying surgical knots on the back of a chair), high-fidelity and expensive (e.g., a virtual reality training simulator for laparoscopic operations), or use animal or cadaveric tissue. Simulation that increases trainees' familiarity with certain techniques is likely to improve their clinical performance, decreasing potential patient harm and shortening the time taken for trainees to achieve competence1. A simulation should be completed with feedback from the supervisor to ensure that trainees gain the most from the session and to clarify any facts or concerns about the simulation; a video recording of the session can also enable participants to reflect on their performance in a manner that is almost impossible in everyday clinical practice.
Skeletal muscle is rarely affected by tuberculosis (TB) because it is not a preferred site for the survival and multiplication of Mycobacterium tuberculosis.1 Even in patients with widespread disease, TB rarely involves muscle. Petter et al recorded only one case of primary skeletal muscle TB in over 8,000 cases of all types of TB, an incidence of 0.015%.2 Few cases of tubercular myositis have been described in the literature to date, mostly in adults. This, together with the general decline in TB, makes it unlikely that one would immediately consider TB as the cause of a rectus sheath abscess.
There are only limited case reports of isolated tubercular involvement of the anterior abdominal wall, even though TB is rampant in developing countries and, with the rapid spread of acquired immune deficiency syndrome (AIDS), has made inroads into developed nations as well.3 We present a case of primary tuberculous abdominal wall abscess with no evidence of pulmonary, skeletal or gastrointestinal TB in an immunocompetent patient. This case report should serve as a reminder that TB, in all its various manifestations, remains very much among us.
Case report
A 20-year-old female presented to the outpatient department of surgery with a complaint of a progressive swelling in the left lower abdomen for the last three months. There was no history of preceding trauma, fever, cough, malaise or pain, and no history of contact with any case of TB. On examination, there was a swelling in the left iliac fossa measuring 8x8cm, non-tender, with smooth and ill-defined margins and normal overlying skin. The swelling was firm in consistency and moved with respiration. Examination of the cardiovascular and respiratory systems was within normal limits.
Laboratory investigation revealed: haemoglobin 11.5 g/dl; total leukocyte count 8510/cumm with a differential count of 54% neutrophils, 42% lymphocytes and 4% eosinophils; erythrocyte sedimentation rate 70 mm/hr; and enzyme-linked immunosorbent assay (ELISA) for human immunodeficiency virus (HIV) negative. The chest radiograph was unremarkable. Other biochemical blood investigations, including liver and kidney function tests, were within normal limits. Ultrasonography of the abdomen revealed a 6.5x8.5cm left iliac fossa cystic mass with a liquefied necrotic centre in the anterior abdominal wall (Fig. 1). Computerized Tomography (CT) scan of the abdomen showed an abscess in the left antero-lateral portion of the abdominal wall limited to the muscle layer (Fig. 2). Ultrasound-guided fine-needle aspiration and cytological examination revealed caseating granuloma with central necrosis, lymphocytes, and giant cells, consistent with TB (Fig. 3). The patient was diagnosed with a tuberculous abscess of the anterior abdominal wall, and antituberculous treatment was started following internal medicine consultation. She improved rapidly over the next few days, and after four weeks of antituberculous treatment the abscess had regressed considerably. The patient did not require any surgical intervention.
Figure 1. Ultrasonography of the abdomen revealed a left iliac fossa cystic mass.
Figure 2. Focal cystic collection seen in the anterior abdominal wall of the left iliac fossa with mild peripheral enhancement (white arrow)
Figure 3. Photomicrograph revealed caseating granuloma with central necrosis, lymphocytes, and giant cells, consistent with tuberculosis.
Discussion
TB of the anterior abdominal wall is a rare entity, and only isolated cases are reported in the literature. Possible explanations for the rarity of muscle involvement in TB include the high lactic acid content of muscle, its lack of reticulo-endothelial and lymphatic tissue, its abundant blood supply, and the highly differentiated state of muscle tissue.4 None of these seems an adequate explanation, and all theories except the first have been criticized.2
Two forms of skeletal muscle involvement are recognized.5 In the first type, the tuberculous process spreads into the muscle by direct extension from a neighbouring structure, e.g. bone, joint, tendon or lymph node. In the second type, the spread is haematogenous. Our patient is of interest because she appears to have a primary tubercular lesion of the anterior abdominal muscles with no evidence of immune incompetence.
A tuberculous focus in the muscle usually manifests as progressive swelling and pain, and the infection is usually restricted to one muscle.6 There may be a frank tuberculous abscess (as seen in our case) or a nodular sclerosis followed by calcification. Ultrasonography usually shows a cystic mass of mixed echogenicity with irregular walls and a liquefied, necrotic centre. Computed tomography of the abdomen usually shows a well-defined abscess in the abdominal wall.7, 8 Ultrasonography- or CT-guided aspiration followed by cytological examination usually reveals tuberculous granulomas with areas of caseous necrosis.
Management of this entity is mainly with antituberculous drugs. Surgical intervention, in the form of sonography- or CT-guided aspiration or open drainage, is usually reserved for patients in whom medical treatment has failed.3 Our patient responded well to medical treatment.
Although localized swelling in the rectus abdominis muscle is commonly due to necrotizing fasciitis, rectus sheath haematoma or tumours (benign/desmoid/malignant), the rare possibility of TB should also be considered. The prognosis of tuberculous myositis is good with appropriate chemotherapy.
Conclusion
This case alerts clinicians and radiologists to the possibility of TB in the differential diagnosis of a lesion even in an unlikely anatomical area, especially in regions where TB is endemic.
Endocrine disorders are frequently accompanied by psychological disturbances. Conversely, psychiatric disorders demonstrate, to a significant extent, consistent patterns of endocrine dysfunction.[1] Endocrinopathies manifest as a myriad of psychiatric symptoms, as hormones affect the function of a variety of organ systems. The presence of psychiatric symptoms in patients with primary endocrine disorders provides new insight for exploring the link between hormones and affective function.[2] Disturbance of the hypothalamic-pituitary-thyroid axis is of considerable interest in psychiatry and is known to be associated with a number of psychiatric abnormalities.[3] Thus, the main focus of psychoneuroendocrinology is on identifying changes in basal levels of pituitary and end-organ hormones in patients with psychiatric disorders. Psychiatric symptoms may be the first manifestations of endocrine disease, but often are not recognized as such. Patients may experience a worsening of the psychiatric condition and an emergence of physical symptoms as the disorder progresses.[4] Psychiatric manifestations of endocrine dysfunction include mood disturbances, anxiety, cognitive dysfunction, dementia, delirium, and psychosis. When dealing with a treatment-resistant psychiatric disorder, endocrinopathies should also be considered as a possible cause during management. Psychotropic medication may even worsen the psychiatric symptoms, which improve only once the underlying endocrine disturbance is corrected.[5] The lifetime prevalence of depression and anxiety is 11.8% to 36.8% and 5.0% to 41.2%, respectively, in groups with previously known thyroid disorder.[6,7] The occurrence of major depression in diabetes mellitus (DM) is mostly estimated at around 12% (range 8-18%), and 15-35% of individuals with DM report milder forms of depression.[8] Depressive symptoms are seen in almost half of patients with Cushing's syndrome, and these patients experience moderate to severe symptoms.
Some patients with Cushing's syndrome also experience psychotic symptoms.[9] Patients suffering from Addison's disease may be misdiagnosed with major depressive disorder, personality disorder, dementia, or somatoform disorders.[4,10] Women with hyperandrogenic syndromes are at increased risk for mood disorders, and the rate of depression among women with polycystic ovary syndrome (PCOS) has been reported to be as high as 50 percent. Central 5-HT system dysregulation that causes depression might simultaneously affect peripheral insulin sensitivity, or vice versa, possibly via behavioral or neuroendocrinological pathways, or both.[10]
Hollinrake et al. (2007) found the prevalence of depression in women with PCOS to be four times that of women without PCOS. After screening patients with PCOS for depression, they found a total prevalence of depressive disorders, including women diagnosed with depression before the study, of 35% in the PCOS group.[11] No specific psychiatric symptoms have been consistently associated with acromegaly, gigantism, or elevated GH levels, although adjustment disorder may occur as a result of changes in physical appearance and living with a chronic illness.[11] Sheehan's syndrome (SS) refers to the occurrence of varying degrees of hypopituitarism after parturition.[1] It is a rare cause of hypopituitarism in developed countries owing to advances in obstetric care, and its frequency is decreasing worldwide. Reports of psychosis in patients with Sheehan's syndrome are rare.[13] Psychiatric disturbances are commonly observed during the course of endocrine disorders. The underlying cause can be hyper- or hyposecretion of hormones secondary to the pathogenic mechanisms, medical or surgical treatment of the endocrine disease, or genetic aberrations.[14] Psychiatric disorders frequently mimic the symptoms of endocrinological disorders. In view of the sizable number of patients seeking treatment from our department who present with comorbid endocrinological disorders, we planned the present study to investigate psychiatric morbidity, particularly patterns of anxiety and depression, among patients with endocrinological disorders.
Methods
The present study was conducted at SMHS Hospital, Government Medical College, Srinagar. The study sample was drawn from patients attending the endocrinology OPD in the Department of Medicine over a period of one and a half years, from April 2011 to September 2012, enrolling 152 cases of endocrinological disorders. All patients were first examined by a consultant endocrinologist, and patients were then selected using simple random sampling, choosing every alternate patient. General information including age, sex, residence, economic status, past history of thyroid disorders and family history of psychiatric disorders was recorded. After the endocrinology specialist had examined each patient, a psychiatrist administered the Hospital Anxiety and Depression Scale (HADS), which was used to screen for anxiety and depressive disorders and is widely used for this purpose in patients with chronic somatic disease. The HADS contains 14 items and consists of two subscales, anxiety and depression, with seven questions each. Each question is rated on a four-point scale (0 to 3), giving a maximum total score of 21 each for anxiety and depression. A score of 11 or more is considered a case of psychological morbidity, a score of 8-10 is borderline, and 0-7 is normal. The forward-backward procedure was applied to translate the HADS from English to Urdu by a medical person and a professional translator.[15]
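The scoring rule described above (seven items per subscale, each rated 0-3, with cut-offs at 8 and 11) can be sketched as a small helper; this is an illustrative sketch of the stated cut-offs, not part of the study's actual analysis pipeline, and the function names are hypothetical:

```python
def hads_subscale_score(item_scores):
    """Sum one HADS subscale: seven items, each rated 0-3 (maximum 21)."""
    assert len(item_scores) == 7 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def hads_category(score):
    """Classify a subscale score using the cut-offs given in the text."""
    if score >= 11:
        return "case"        # psychological morbidity
    elif score >= 8:
        return "borderline"  # 8-10
    else:
        return "normal"      # 0-7

# Example: a patient scoring 2 on every anxiety item totals 14 -> "case"
total = hads_subscale_score([2, 2, 2, 2, 2, 2, 2])
print(total, hads_category(total))  # prints: 14 case
```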
The participating physicians administered the HADS questionnaire to the selected patients with chronic endocrinological disorders and recorded scores for both anxiety and depression.
The patients were subjected to inclusion and exclusion criteria as given below:
Inclusion criteria
1. All endocrinological disorders.
2. Both sexes will be included.
3. Age > 15 yrs.
4. Those who will give consent.
Exclusion criteria
1. Those who don’t consent.
2. If diagnoses is not clear.
3. Age less than 15 years.
4. Presence of pregnancy or a history of pregnancy in the last six months.
5. Those who are on steroids or drugs known to interfere with thyroid function
General description, demographic data and psychiatric history were recorded using a pretested semi-structured interview.
Statistical methods: Statistical analyses were performed using SPSS, version 16.0 for Windows. A secure computerized database was established and maintained throughout the study, with patient names replaced by unique identifying numbers. Descriptive statistics were used to generate a profile of each illness group based on the presence of depression only, anxiety only, or both anxiety and depression. To determine whether there were any significant differences between the illness groups in the prevalence of depression and anxiety disorders, an unadjusted 3×2×2 chi-square test was conducted. Data were analyzed by the Pearson chi-squared test and the t test. P<0.05 was considered the significance level.
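The Pearson chi-square comparison underlying this analysis can be sketched for a single 2×2 case. This is an illustrative sketch with hypothetical counts, not the study data (the actual analysis was run in SPSS):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (1 degree of freedom), using the standard
    shortcut formula N*(ad - bc)^2 / (row and column marginals)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: rows = two illness groups,
# columns = psychiatric morbidity present / absent
stat = chi2_2x2(30, 10, 15, 25)
# With 1 df, the 5% critical value is 3.84, so stat > 3.84 implies p < 0.05
print(round(stat, 2))  # prints: 11.43
```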
Consent: Informed consent was obtained from each patient; those who were considered incapable of consenting were allowed to participate with consent of their closest family member or custodian. All patients were informed about the nature of the research within the hospital and willingly gave their consent to participate. Information sheets and preliminary interviews made it clear that the choice to consent or otherwise would have no bearing on the treatment offered. The project ensured the anonymity of the subjects by replacing patient names with unique identifying numbers before the statistical procedures began.
Results
A total of 152 patients from the endocrinology department of Government Medical College, Srinagar were taken up for the study. They were evaluated in detail with regard to socio-demographic profile and the presence of psychiatric co-morbidity by HADS, and the results are presented below in tabulated form. Only patients who consented to a complete interview and responded to all HADS questions were considered in the final analyses.
Of the 152 subjects, 71 were males (46.72%) and 81 were females (53.28%) (Table 1). Most cases belonged to the 35-45 year age group (26.3%), followed by the 25-35 year age group (24.3%); 67.7% were married and 18.4% unmarried. More than half (51.97%) of the study subjects were from nuclear families, 82 (53.9%) were illiterate, and the majority, 84 (55.4%), belonged to middle-class families. The socio-demographic profile of the studied patients is shown in Table 2.
Of the 152 patients with endocrine disorders, 56 (37%) had a HADS score of 10 or less, indicating absent or doubtful anxiety or depression, while 96 (63.15%) screened positive on the HADS questionnaire with an anxiety/depression score of 11 or more. The mean HADS scores for patients with anxiety alone, depression alone, and both anxiety and depression were 13.42, 15.7 and 25.62 respectively. On the basis of HADS screening, the 96 (63.15%) patients with psychiatric co-morbidity comprised 27 (28.12%) with anxiety alone, 30 (31.25%) with depression alone, and 39 (40.62%) with both anxiety and depression (Table 3). The breakdown of the different endocrinological disorders is given in Table 4. The greatest psychiatric comorbidity was found in thyroid patients (69.35%), followed by diabetic patients (68.05%) (Table 4).
Table 1: Age and sex distribution

| Age group | Male | Female | Total |
|---|---|---|---|
| < 25 | 14 (20%) | 7 (9%) | 21 (14%) |
| 25 – 35 | 20 (28%) | 17 (21%) | 37 (24%) |
| 35 – 45 | 17 (24%) | 23 (28%) | 40 (26%) |
| 45 – 55 | 11 (16%) | 19 (24%) | 30 (20%) |
| 55 & above | 9 (13%) | 15 (19%) | 24 (16%) |
| Total | 71 (100%) | 81 (100%) | 152 (100%) |
| Mean ± SD | 51.4 ± 13.7 | 56.4 ± 13.1 | 54.1 ± 13.6 |
Table 2: Demographic Characteristics of the Studied Patients

| Characteristic | N | % |
|---|---|---|
| Dwelling: Rural | 98 | 64.47 |
| Dwelling: Urban | 54 | 35.52 |
| Marital status: Unmarried | 28 | 18.4 |
| Marital status: Married | 103 | 67.7 |
| Marital status: Widowed | 21 | 13.8 |
| Occupation: Household | 61 | 40.1 |
| Occupation: Unskilled | 29 | 19 |
| Occupation: Semiskilled | 39 | 25.6 |
| Occupation: Skilled | 23 | 15.1 |
| Occupation: Professional | 8 | 5.26 |
| Family type: Nuclear | 79 | 51.97 |
| Family type: Joint | 28 | 18.4 |
| Family type: Extended | 45 | 29.6 |
| Literacy status: Illiterate | 82 | 53.9 |
| Literacy status: Primary | 22 | 14.4 |
| Literacy status: Secondary | 16 | 10.5 |
| Literacy status: Matric | 13 | 8.55 |
| Literacy status: Graduate | 11 | 7.23 |
| Literacy status: Postgraduate/Professional | 8 | 5.26 |
| Family income (Rs): < 5000 | 45 | 29.6 |
| Family income (Rs): 5000 to 10000 | 85 | 55.92 |
| Family income (Rs): ≥ 10000 | 22 | 14.4 |
| Socioeconomic status (Kuppuswamy Scale): Lower | 32 | 21 |
| Socioeconomic status (Kuppuswamy Scale): Upper lower | 11 | 7.23 |
| Socioeconomic status (Kuppuswamy Scale): Middle | 84 | 55.2 |
| Socioeconomic status (Kuppuswamy Scale): Upper middle | 19 | 12.5 |
| Socioeconomic status (Kuppuswamy Scale): Upper | 6 | 3.94 |
Table 3: Result of HADS Scoring

| Variable | Total (n=96) | Anxiety alone | Depression alone | Anxiety and depression | p value |
|---|---|---|---|---|---|
| Male | 37 (38.54%) | 8 (29.6%) | 18 (60%) | 11 (28.2%) | - |
| Female | 59 (61.4%) | 19 (70.3%) | 12 (40%) | 28 (71.7%) | - |
| Age (years) | 54.1 ± 13.6 | 51.4 ± 13.7 | 56.4 ± 13.1 | 54.1 ± 13.1 | < 0.005 |
| Mean HADS score | - | 13.42 ± 3.4 | 15.73 ± 3.3 | 25.62 ± 4.3 | < 0.005 |
Table 4: Types of endocrinological disorders

| Endocrinological disorder | Number of patients (N=152) | Psychiatric comorbidity | Percentage |
|---|---|---|---|
| Thyroid disorders | 62 (40.7%) | 43 | 69.35 |
| Diabetes mellitus | 47 (30.92%) | 32 | 68.05 |
| PCOD | 28 (18.4%) | 16 | 57.1 |
| Cushing's syndrome | 5 (3.289%) | 2 | 40 |
| Acromegaly | 2 (1.31%) | 0 | 0 |
| Addison's disease | 1 (0.65%) | 0 | 0 |
| Sheehan's syndrome | 3 (1.97%) | 2 | 66.6 |
| Miscellaneous | 4 (2.63%) | 1 | 25 |
Table 5: Psychiatric Co-morbidity across Socio-demography of the Patients

| Characteristic | Present n (%) | Absent n (%) | p value |
|---|---|---|---|
| Dwelling: Rural | 59 (60.02) | 39 (39.7) | <0.005 (Sig) |
| Dwelling: Urban | 37 (68.5) | 17 (31.4) | |
| Marital status: Unmarried | 8 (28.5) | 20 (71.4) | >0.005 (NS) |
| Marital status: Married | 72 (69.9) | 31 (30) | |
| Marital status: Widowed | 16 (76.1) | 5 (23.8) | |
| Occupation: Household | 57 (93.4) | 4 (6.55) | >0.005 (NS) |
| Occupation: Unskilled | 14 (48.2) | 15 (51.7) | |
| Occupation: Semiskilled | 9 (39.1) | 30 (76.9) | |
| Occupation: Skilled | 14 (60.8) | 9 (39.1) | |
| Occupation: Professional | 2 (25) | 6 (75) | |
| Family type: Nuclear | 45 (56.9) | 34 (43.0) | >0.005 (NS) |
| Family type: Joint | 22 (78.5) | 6 (21.4) | |
| Family type: Extended | 29 (64.4) | 23 (51.1) | |
| Literacy status: Illiterate | 70 (85.2) | 12 (14.6) | >0.005 (NS) |
| Literacy status: Literate | 26 (36.1) | 46 (63.8) | |
| Family income (Rs): < 5000 | 17 (37.7) | 28 (62.2) | >0.005 (NS) |
| Family income (Rs): 5000 to 10000 | 65 (76.4) | 20 (23.5) | |
| Family income (Rs): ≥ 10000 | 14 (63.6) | 8 (36.3) | |
| Socioeconomic status: Lower | 18 (50) | 18 (50) | >0.005 (NS) |
| Socioeconomic status: Upper lower | 7 (63.6) | 4 (36.3) | |
| Socioeconomic status: Middle | 59 (70.2) | 25 (29.7) | |
| Socioeconomic status: Upper middle | 10 (52.6) | 9 (47.3) | |
| Socioeconomic status: Upper | 2 (33.3) | 4 (66.6) | |
Discussion
This study is the first to offer data on psychiatric morbidity among endocrine patients in the Kashmiri population. In our study, 63.15% (96) of patients screened positive on the HADS questionnaire, with an anxiety/depression score of 11 or more. These results suggest that patients suffering from endocrinological disorders are likely to have a comorbid psychiatric disorder [5, 16]. Depressive disorders and anxiety disorders are the commonest psychiatric disorders in endocrinological patients [3]. Numerous studies have shown a high correlation between depression and endocrinological disorders, and this study supports those findings, with 43.47% (30) of participants having depressive symptoms on the HADS [3, 16]. 40.62% (39) of respondents had both depressive symptoms and an anxiety disorder, and 28.12% (27) were diagnosed with an anxiety disorder, which is slightly higher than the lifetime prevalence of anxiety disorder in men [16]. Our findings, including a high proportion of respondents with endocrinological disorders (45.7%), females outnumbering males (59 [61.4%] vs. 37 [38.54%]), and a majority of men presenting between the ages of 35 and 45 years, have also been reported in previous studies [4, 8]. The findings of our study suggest that psychiatric disorders are highly prevalent in endocrinological disorders and remain largely unrecognised in the primary care setting. Endocrine disorders of different kinds, irrespective of treatment, have been associated with psychological distress, and attention to the psychological wellbeing of patients with endocrine disorders may provide new insights in clinical endocrinology. Furthermore, psychological disorders comorbid with endocrinological disorders add to disability as well as to the cost borne by the individual and society [17]. Most clinicians do not suspect this important association at the outset, resulting in delayed diagnosis.
Thus, the high prevalence of anxiety and depression in endocrinological disorders in our study supports a case for screening for these disorders in endocrinology clinics. Furthermore, recognition and treatment of these comorbidities could improve patient outcomes. Future studies should focus on replicating or refuting these findings in larger samples, as well as on testing interventions aimed at psychological morbidities in this patient group. Under-recognition of psychiatric morbidity is not an uncommon phenomenon and has been found in similar local studies of psychiatric morbidity in other medical illnesses [8]; more attention should therefore be paid to recognising psychiatric morbidities in this group of patients. The reasons for the increased frequency of psychiatric disorders are multifactorial; chronic illness itself leads to psychological stress. The major limitation of our study was its relatively small sample size. Another limitation is its cross-sectional design, which does not allow us to determine the direction of causality in the relationship between endocrinological disorders and depression/anxiety. More community-based studies are required to assess the magnitude of the problem and to lay down principles to help such patients, and prospective studies with larger samples will be essential to clarify the temporal relationship. As far as we are aware, this is the first study of its kind in Kashmir. Endocrinological disorders account for a large proportion of referrals to psychiatric clinics, and psychiatric comorbidity adds misery to an already devastating metabolic disease; the associated costs to the individual and to society are substantial.
Chest pain accounts for 1% of all GP consultations, but in only 8%-18% of cases is it an indicator of underlying ischemic heart disease.1 Given the potential diagnostic uncertainty associated with chest pain at initial presentation, specialist evaluation of patients in a Rapid Access Chest Pain Clinic (RACPC) is of value and represents an important process in the evaluation of symptoms. These clinics were established with the aim of providing rapid outpatient assessment of patients with suspected cardiac disease in order to permit earlier provision of appropriate treatment and investigations where required.
Stable chest pain typically presents as angina: a triad of dull central chest pain, brought on by exertion and relieved by rest or GTN spray. The aetiology is usually stable atherosclerotic plaque disease, which is associated with low mortality and can be treated with oral anti-anginals, as demonstrated by meta-analyses and the landmark COURAGE study.2, 3
NICE Clinical Guideline 95 (NICE CG95) suggests that choice of initial investigation for stable chest pain should be guided by a patient’s pre-test probability of having CAD. Calculations of the pre-test probability take into consideration a patient’s age, gender, cardiac risk factors and symptoms. Patients are defined as high risk of cardiac disease if they have diabetes, smoke or have hyperlipidaemia (total cholesterol >6.47mmol/litre). Patients with none of the above are considered low risk. Symptoms are defined as “typical angina” if the pain is: 1) constricting discomfort in the front of the chest or in the neck, shoulders, jaw or arms; 2) is precipitated by physical exertion and 3) is relieved by rest or GTN spray within approximately five minutes. Pain is defined as “atypical angina” if only two of the above criteria are met and defined as “non-anginal” if one or none of the above criteria are met.
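The three NICE criteria above amount to a simple counting rule. As a minimal illustrative sketch (the function name and boolean inputs are our own, not taken from the guideline), the classification can be expressed as:

```python
def classify_chest_pain(constricting: bool, exertional: bool,
                        relieved_by_rest_or_gtn: bool) -> str:
    """Label chest pain using the NICE CG95 symptom criteria:
    all three anginal features -> typical angina, two -> atypical
    angina, one or none -> non-anginal."""
    features = sum([constricting, exertional, relieved_by_rest_or_gtn])
    if features == 3:
        return "typical angina"
    if features == 2:
        return "atypical angina"
    return "non-anginal"

# e.g. exertional pain relieved by GTN but not constricting:
print(classify_chest_pain(False, True, True))  # atypical angina
```

The label, together with age, gender and risk factors, then indexes into the pre-test probability table described below.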
NICE pre-test probabilities of CAD (Table 1), are based on a version of Diamond and Forrester’s pre-test probabilities published in 1979, modified using data from Duke’s cohort study, published in 1993.4, 5, 6 Recent studies suggest that these NICE pre-test probabilities may overestimate the prevalence of CAD in a primary care population and may risk over investigating patients.7, 8 In addition to having financial implications, this may cause patients undue anxiety and unnecessarily put them at risk of complications.
Table 1: NICE Clinical Guideline 95 pre-test probabilities table. Each cell represents the percentage risk of each group of patients having CAD, based on their typicality of symptoms, gender, age and cardiac risk factors (lo, low and hi, high)4
ESC guidelines utilise an updated, validated model of the Diamond-Forrester model by Genders et al. to create pre test probabilities of CAD (Table 2), based on patient’s age, gender and typicality of symptoms. 9, 10
Table 2: ESC guidelines clinical pre-test probabilities in patients with stable chest pain symptoms
Each cell represents likelihood of each group of patients having CAD, based on typicality of symptoms, age and gender.9
We hypothesised that strict adherence to NICE guidelines results in over-estimation of the pre-test probability of CAD and therefore over-investigation of patients presenting with stable chest pain. ESC guidelines may offer more accurate pre-test probabilities of CAD and allow a more targeted and cost-effective use of investigations.
Methodology
Clinic records of all patients who attended the RACPC at Tunbridge Wells Hospital between July 2005 and December 2012 were reviewed. This service is run by a cardiology specialist. Patient demographics, cardiac risk factors and information regarding the nature of patient symptoms were collected prospectively and completed at the time of the patient’s RACPC appointment. Results of cardiac investigations were collected from paper and computerised records, and included diagnoses of significant CAD made following invasive coronary angiogram. These results were compared with patients’ pre-test probabilities of CAD calculated using both the NICE and the ESC calculation methods. Outcomes and readmissions were obtained retrospectively from Maidstone and Tunbridge Wells NHS Trust electronic records.
Results
Study population
A total of 1968 records were reviewed. 59% (n = 1162) of patients were male and 41% (n = 806) were female. Their mean age was 60 years. At initial assessment, 69.8% of patients (n=1373) had non-anginal chest pain, 19.5% (n=383) had atypical angina and 10.8% (n=212) had typical angina, based on the NICE guideline definitions of chest pain.
97.2% of patients (n=1912) underwent further investigation; 15% (n=256) of these were subsequently diagnosed as having significant CAD, accounting for their symptoms. The 2.8% (n=56) of patients who did not undergo investigation either chose not to, were unable to, were lost to follow up, or were diagnosed as having a non-cardiac cause of their symptoms at the initial RACPC appointment.
NICE CG95 pre test probabilities compared against cohort data
Table 3: NICE guidelines 95 pre test probabilities compared against cohort data
Each cell represents the proportion (%) of cohort patients from each group who were diagnosed with CAD. We have colour-coded cells to represent the NICE estimated pre-test probability of CAD in each group. Red cells represent 61-90+% probability, pink cells represents 30-60% probability, blue cells represent 10-29% probability and white cells represents <10% probability of CAD according to NICE Guidelines. “ – “ marks a cell where pre-test probabilities of CAD could not be calculated for cohort patients.
Table 4: A comparison of NICE pre-test probabilities and cohort patient data.
The risk of CAD as predicted by NICE guidelines 95 on the left compared with the actual number of cohort patients in each category and the proportion of those patients diagnosed with significant CAD.
The average discrepancy between the pre-test probability and actual incidence of CAD in cohort patients was 28% (range 20% - 88%). In 48% of cells in the NICE CG95 pre-test probability table (Table 1) the pre-test probability of CAD was overestimated by 30% or more (Table 3). A marked discrepancy between pre-test probability and actual incidence of CAD was found between “high risk” and “very low risk” patients. On average, high risk patients had an overestimated pre-test probability of 34.3 – 40.9% per cell compared with low risk patients whose pre-test probability was only overestimated by 6.5% (Table 3).
The cells highlighted in dark red in table 3 represent high risk patients whose pre-test probability was of 61-90+%, according to NICE CG95. In our cohort, only 31.2% (n=214, 95% CI 27.6-34.5) of high risk patients in this category were diagnosed with CAD. On average, actual incidence of CAD compared with pre-test probability was overestimated by 34.4% – 40.9% in each cell.
The pink cells in table 3 represent medium risk patients with a pre-test probability of CAD of 30-60%, according to NICE CG95. In our cohort, only 4.4% (n=24, 95% CI 3.0 – 6.5) of medium risk patients had a positive angiogram (Table 4). The average overestimate of actual incidence against pre-test probability was 35.9%.
The cells highlighted in blue in table 3 represent low risk patients with a pre-test probability of CAD of 10-29%, according to NICE CG95. In our cohort, only 2.5% (n=7, 95% CI 1.2 – 5.0) of low risk patients were diagnosed with CAD (Table 4). On average, the pre-test probability of CAD exceeded the found incidence of CAD by 18.6% (Table 3).
The white cells in table 3 represent very low risk patients with pre-test probability of CAD <10% according to NICE CG95. In our cohort, only 0.28% (n= 1, 95% CI 0.1 – 1.6) of patients were diagnosed with CAD. Average overestimation in this group was 6.5% in each cell.
ESC guidelines pre test probabilities compared against cohort data
Table 5: A comparison of ESC pre-test probabilities with cohort patient data.
Each cell shows the proportion (%) of cohort patients from each group diagnosed with CAD. Each cell is colour coded to correspond with the ESC estimated pre-test probability. Dark red cells represent >85% probability, pale pink cells represent 66-85% probability, pale blue cells represent 15-65% probability and white cells represent <15% probability.
Table 6: A comparison of ESC pre-test probabilities and cohort patient data
The risk of CAD as predicted by ESC guidelines on the left compared with the actual number of cohort patients in each category and the proportion of those patients diagnosed with significant CAD.
The average discrepancy between pre-test probability of CAD, according to the ESC’s risk stratification table, and actual incidence of CAD in cohort patients was 20.7%. In 28% of cells, the pre-test probability of CAD exceeded the found incidence of CAD by 30% or more (Table 5).
The cells highlighted in dark red in table 5 represent very high risk patients with a pre-test probability of CAD greater than 85%, according to ESC guidelines (Table 5). 73.4% (n= 58, 95% CI 63.7 – 82.7) of cohort patients in this high-risk category were diagnosed with CAD (Table 6). On average, incidence of CAD in each cell has been overestimated by 13% in this category.
The cells highlighted in pale pink in table 5 represent high risk patients, with a pre-test probability of CAD of 66-85%, according to ESC guidelines. 58.5% (n=103, 95% CI 51.1 – 65.5) of cohort patients in this high-medium risk category were diagnosed with CAD (Table 6). On average, the pre-test probability of CAD exceeded the found incidence of CAD in each cell by 17.7% (Table 5).
The cells highlighted in pale blue in table 5 represent medium risk patients with a pre-test probability of CAD of 15-65%, according to ESC guidelines. 6.4% (n=93, CI 5.3 – 7.8) of cohort patients in this risk category were diagnosed with CAD (Table 6). On average, the pre-test probability of CAD exceeded the found incidence of CAD by 24.1% in each cell (Table 5).
The cells highlighted in white in table 5 represent patients whose pre-test probability of CAD was less than 15% according to ESC guidelines. Only 0.76% (n=2, 95% CI 0.2 –2.7) of cohort patients in this risk category were diagnosed with CAD (Table 6). On average, pre-test probability of CAD exceeded found incidence of CAD in each cell by 6.2% (Table 5).
Discussion
Only 15% of a total of 1968 patients referred to the RACPC were diagnosed with significant CAD. The majority (70%) of referred patients had “non-anginal” chest pain and low pre-test probabilities of CAD, reflecting the importance General Practitioners ascribe to ruling out ischemic heart disease as the underlying cause for chest pain, even in low risk patients. This may not be surprising given the extensive coverage of heart disease and the sustained campaigns on early warning signs of heart attack in the British media. It is therefore of great public interest for cardiac disease to be identified.
NICE CG95 pre test probabilities compared against cohort data
Comparing cohort data to the pre-test probabilities of CAD outlined in NICE CG95, NICE has overestimated the number of patients likely to have CAD in the majority of groups. Strict adherence to NICE CG95 therefore carries the risk of over-investigating patients. NICE recommends CT calcium scoring as the first line investigation for patients with a low (10-29%) pre-test probability of CAD. 284 patients fall into this category and only 7 were shown to have CAD, meaning that 40.5 patients would need to be investigated in order to identify 1 positive patient (NNT = 40.5).
In patients with a medium (30-60%) pre-test probability of CAD, NICE recommends functional imaging as the first line diagnostic investigation. In our cohort 544 patients would undergo functional imaging, but only 24 of these patients would be diagnosed with CAD, NNT=22.7.
Finally, in patient groups with a high (61-90%) pre-test probability of CAD, NICE recommends invasive coronary angiography as the first line diagnostic investigation. In our cohort of 1968 patients, 691 patients had a high pre-test probability of CAD, and 214 had significant coronary artery disease on angiography, NNT= 3.2.
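The NNT figures quoted for the three NICE risk bands are simply the number of patients investigated divided by the number diagnosed with CAD. A minimal sketch of that arithmetic (the function name is our own; rounding to one decimal place may differ by 0.1 from the quoted figures):

```python
def patients_per_diagnosis(investigated: int, diagnosed: int) -> float:
    """Patients investigated per positive CAD diagnosis (the 'NNT'
    as the term is used in the text)."""
    return round(investigated / diagnosed, 1)

print(patients_per_diagnosis(284, 7))    # low risk, CT calcium scoring (quoted 40.5)
print(patients_per_diagnosis(544, 24))   # medium risk, functional imaging: 22.7
print(patients_per_diagnosis(691, 214))  # high risk, invasive angiography: 3.2
```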
Although invasive coronary angiography is considered the gold standard investigation for diagnosing CAD, and permits simultaneous therapeutic intervention, the procedure is not without risk, particularly in elderly patients and those with renal impairment.11 Furthermore, invasive angiography is expensive and is costed by the East Kent Hospitals University NHS Foundation Trust at £1166.02 per procedure (private correspondence).
NICE CG95 offers no guidance on managing patients who have a <10% pre-test probability of CAD. 347 of our cohort patients fell into this very low risk category and only 1 was diagnosed with CAD. Therefore, NICE CG95, if strictly adhered to, would have missed one diagnosis of CAD in our patient cohort.
ESC pre test probabilities compared against cohort data
ESC guidelines tend to offer more conservative estimates of pre-test probability of CAD compared with NICE guidelines. Using the ESC’s risk stratification table, almost all patients, except those with over 85% pre-test probability and those with less than 15% pre test probability, would be investigated for chest pain. This is due to their claim that non-invasive, image-based diagnostic methods for CAD have typical sensitivities and specificities of around 85%, so that roughly 15% of these investigations could be yielding false results. Hence, due to these inaccuracies, in patients with pre-test probabilities of CAD below 15% or above 85%, ESC state that performing no test at all could provide fewer incorrect diagnoses.9
In our patient cohort, 79 patients had very high (>85%) pre-test probability of CAD, but only 58 patients (73%) were diagnosed with CAD. For this patient risk group, ESC guidelines suggest that further investigation may not be necessary and that a diagnosis of CAD may be assumed. Thus, applying ESC guidelines to our cohort could result in 21 patients being incorrectly diagnosed with stable angina, and more serious causes of chest pain, for example pulmonary emboli or gastric ulceration, may be missed. However, in practice, it is likely that many patients in this very high pre-test probability category would have undergone angiography, because patients who have "severe symptoms" or who are clinically thought to have "high risk coronary anatomy" should be offered an invasive angiography with or without pressure wire studies. The vagueness of the guidelines allows interventionists to interpret this in the clinical context.
In ESC guidelines, invasive coronary angiography is not specifically recommended as a first line investigation for stable angina, regardless of the pre-test probability of CAD. In patients with a high (66-85%) pre-test probability of CAD, ESC guidelines recommend non-invasive functional imaging first line. Of the 176 patients who fell into this category, only 102 (58.0%) patients were ultimately diagnosed with CAD.
In patients with medium (15-65%) pre-test probability of CAD, ESC guidelines advise exercise ECG testing (or non-invasive imaging for ischemia if local expertise is available) as the first line diagnostic investigation. Of the 1451 patients who fell into this category, only 93 were diagnosed with CAD, NNT= 15.6. Fortunately, exercise ECG testing would not expose the patient to potentially harmful radiation or medication, but its poor diagnostic power may result in the need for further investigations despite a negative result.
In patients with low risk of CAD (<15%) ESC guidelines suggest making an assumption that the patient does not have CAD and advocates conducting no further investigations. In our cohort, 263 patients fell into this low risk category, two (0.8%) of which were diagnosed with CAD.
The ESC guidelines appear to have higher specificity than the NICE guidelines; however, two patients would have been missed had ESC guidelines been adhered to, compared with one patient missed if NICE guidance were used. Thus, although still highly sensitive, the ESC guidelines, when applied to our cohort, have slightly lower sensitivity than the NICE guidelines.
Comparison of number of investigations
Following ESC guidance for our cohort of patients would have resulted in fewer diagnostic invasive angiograms being performed than if NICE guidance had been followed. ESC guidance only recommends invasive angiography if first line, non-invasive investigations generate positive results. Overall, however, ESC guidance would result in a greater number of overall investigations being performed.
In total, NICE advises that all 691 of our high risk cohort patients should undergo invasive angiography as a first line investigation. 544 with medium risk should undergo functional testing first and 24 of these patients (assuming an angiogram would follow a positive result) would go on to have invasive angiography. 284 low risk patients should undergo CT calcium scoring first, of which 7 would go on to have functional imaging and angiography if the above logic is followed. This generates a total of 1557 investigations; 722 angiograms, 551 functional imaging investigations and 284 cardiac CT scans.
In comparison, using ESC guidance, 176 of our high risk patients would have functional imaging investigations, 103 patients with positive results would then undergo invasive angiography. 1451 patients would receive exercise ECGs, of which 93 with positive results would undergo functional imaging and invasive angiography. This generates a total of 1916 investigations; 196 angiograms, 269 functional imaging investigations and 1451 exercise ECGs.
If we assume that stress echocardiograms are used as “functional imaging” we can estimate costs for our cohort when applying each set of guidelines. Costs for each investigation are supplied by East Kent Hospitals University NHS Foundation Trust and are as follows: Outpatient elective coronary angiograms are costed at £1,166.02; stress echocardiograms are costed at £132.30; exercise ECGs at £40.26 and CTs of one area at £102.47 (private correspondence). If we were to apply NICE guidelines to our cohort, £841,866.44 would be spent on angiograms, £72,897.30 would be spent on stress echocardiograms and £29,101.48 on CT scans. This is a total of £943,865.22 on investigations.
If we were to apply ESC guidelines to our cohort, £228,539.92 would be spent on coronary angiograms, £35,588.7 would be spent on stress echocardiography and £58,417.26 would be spent on exercise ECGs. A total of £322,545.88 would be spent on investigations. Overall, this is £621,319.34 cheaper than applying NICE guidelines.
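The cost comparison above multiplies the investigation counts from each pathway by the unit costs supplied by East Kent Hospitals University NHS Foundation Trust. A minimal sketch of that calculation (names and structure are our own):

```python
# Unit costs per investigation (East Kent Hospitals University NHS
# Foundation Trust, private correspondence), in GBP.
COSTS = {"angiogram": 1166.02, "stress_echo": 132.30,
         "exercise_ecg": 40.26, "ct": 102.47}

def total_cost(counts: dict) -> float:
    """Total spend for a pathway; counts maps investigation -> number performed."""
    return round(sum(COSTS[k] * n for k, n in counts.items()), 2)

# NICE pathway: 722 angiograms, 551 stress echos, 284 CT scans.
nice = total_cost({"angiogram": 722, "stress_echo": 551, "ct": 284})
# ESC pathway: 196 angiograms, 269 stress echos, 1451 exercise ECGs.
esc = total_cost({"angiogram": 196, "stress_echo": 269, "exercise_ecg": 1451})
print(nice, esc, round(nice - esc, 2))
# 943865.22 322545.88 621319.34
```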
Limitations of study
This study is based on data from a single site and may not be nationally representative. The final diagnosis was made clinically by an experienced interventional cardiologist, which introduces subjectivity and the risk of interpreter bias. Not all patients underwent the gold standard of invasive coronary angiography to demonstrate the presence of CAD. However, all patients were seen and fully assessed by a cardiologist, and 97% underwent investigations where deemed necessary. This study has all the limitations of a registry study. In addition, costs for investigations may vary throughout the country, and indeed the world, with varying expertise available.
Conclusion
In conclusion, strict adherence to NICE CG95 over-estimates the pre-test probability of CAD in our local population group. This is consistent with previous studies conducted in South London where there is a larger Afro-Caribbean population, as well as with studies conducted in the North of England.8,9 Adherence to ESC guidelines in place of NICE guidelines may enable a more targeted and cost-effective use of investigations. Strict application of the ESC guidelines to the study cohort would have resulted in investigations costing an estimated £322,545.88, compared to £943,865.22 if NICE guidelines were applied. However, conducting fewer investigations carries greater risk of misdiagnosis, and using ESC guidelines in isolation introduces the possibility of assuming CAD in patients without conducting investigations to confirm this.
It is advisable that local cardiology departments audit their stable chest pain guidelines to ensure that the interpretation of pre-test probabilities is in keeping with the local population. Unfortunately there is no ideal policy and local protocols should reflect the local population.
In the last few decades, the practice of medicine has seen swift changes, and so has the way its near future is envisioned. Medicine was designed and focused on serving the community and helping people in need. However, it is no secret that there is a huge business around this labour, and that a diverse industry has an economic interest in the field.1,2
Not intending to generalise, many have observed in daily practice a trend comparable to that of modern society: a phenomenon involving both patients and health personnel, in which there is a demand for health services, a growing supply, and considerable revenue. Basic market economics, right?3
Not that simple.
Explaining each disease through a biological substrate, minimising the involvement of other factors, would be the triumph of the basic sciences; decisively targeted biological research would be the key to unlocking this knowledge. What is certain is that this approach has transformed pharmacotherapy, treatment alternatives and prognosis.2, 3, 4
Early physicians had little to no information on what today we call aetiology, pathophysiology and, therefore, treatment. Patients were rarely relieved by human intervention. Trepanations were frequently performed in the Classical and Renaissance periods, and although the procedure has modern indications (decompressive craniotomy), its uses and technique at the time were at best questionable. Belief, and orally transmitted knowledge of a handful of medicinal plants whose effects were known empirically, were the standard of care.5
These times have changed: the pharmaceutical industry is a pillar of the economy in many countries, and the volume of transactions and cash flow it moves is beyond the wildest dreams of the first physicians. Thousands of new pharmaceutical companies are born each year to develop and market new drugs and medical supplies.1, 6
As experts have noted, pharmaceutical and medical supply companies are considered one of the safest businesses today, with everyone being a potential consumer/patient. If the race for continuous development of new drugs keeps its current pace, we will soon have even more drugs and procedures available, and the market may easily be overloaded by an oversupply of organic compounds and procedures offered to patients.2, 4, 6
This thriving pharmaceutical industry is widening its horizons. Personalised medicine, the study of the influence of a patient’s genetic makeup on their disease susceptibility, prognosis, or treatment response (efficacy and safety), is currently in the spotlight. It can be applied in different ways, both preventive and therapeutic.7
In the preventive field, preconception screening studies have been unravelling genetic disorders, as recommended by different guidelines such as those of the American College of Medical Genetics, which are designed for individuals with known genetic conditions or high-risk patients who wish to become pregnant. 8
In the therapeutic field, pharmacogenomics can aid in the identification of Single Nucleotide Polymorphisms (SNPs) that affect the function or expression of proteins involved in the pharmacokinetics or pharmacodynamics of different drugs. In recent years the research community has redoubled its efforts to personalise certain therapies. Hormonal therapy in breast cancer has been receptor-guided from the beginning, especially therapy directed at the ER (Oestrogen Receptor). Clinical results of the trials conducted so far have allowed single-agent regimens with Tamoxifen, or combinations with Arimidex, to be established.9
Another example of the advances in this arena is the new alternatives for prostate cancer. This hormone-dependent tumour shows recurrent alterations in the androgen receptor and its pathway. In some patients the disease progresses to Castration-Resistant Prostate Cancer (CRPC), a lethal clinical state in which the tumour has developed resistance to androgen deprivation therapy; this clinical scenario is commonly established in advanced or metastatic prostate cancer. The genomic landscape of localised prostate cancer has been well defined, describing putative pathogenic germline BRCA2 mutations as well as somatic and germline alterations in DNA repair genes such as BRCA1, CDK12, FANCA, and RAD51B. These research advances can allow clinicians to tailor treatment and thereby achieve better outcomes.10
It is unquestionable that personalising treatment will improve clinical outcomes for patients in the near future and help achieve a more effective use of available health care resources. The next challenge for scientists and researchers is to demonstrate with strong evidence the clinical and cost-effectiveness to support the use of personalised medicine and its implementation in different health care systems around the world. 2, 3, 5
In conclusion, the individual patient variability now studied in drug efficacy and drug safety has become a major objective of current clinical practice. Years of research have converged in advances in pharmacogenetics and human genomics that have dramatically accelerated the discovery of genetic variations that potentially determine variability in drug response, providing better clinical outcomes for patients. In the future, this field is expected to allow us to deliver effective and safe medications to patients with the appropriate genotypes.
BJMP December 2018 Volume 11 Issue 2