The concept of burnout has been used to describe emotional and psychological stress among healthcare workers in response to work-related stressors1. Maslach et al2 have defined burnout as a triad of characteristics: emotional exhaustion, depersonalisation (such as objectifying and treating patients indifferently) and lack of feelings of personal accomplishment. Since high time-pressure, high job stress and excessive workload with poor support are among the significant factors that contribute to burnout, physicians are at a greater risk of suffering from it compared to the general population3.
Burnout affects approximately half of the doctors in the U.S. and in Western Europe working across multiple specialties, including family medicine and internal medicine4,5. Likewise, burnout is universally prevalent among healthcare workers from low- and middle-income countries6.
Psychiatry presents a specific range of stressors not encountered concurrently in other medical specialties, such as treating chronically ill patients, potentially difficult therapeutic relationships, the threat of patient suicide/self-harm and the stigma associated with this field of medicine7. Therefore, it is not surprising to discover that approximately 37% of psychiatric trainees working across 22 countries suffered from severe burnout8.
The COVID-19 pandemic resulted in a national lockdown in the U.K. with travel restrictions and unprecedented pressure on an already stretched healthcare system. Healthcare workers were, therefore, faced with extraordinary difficulties, including increased working hours, heavy workload, staff shortages and lack of resources. A recent systematic review showed that a startling 40% of medical workers experienced acute stress disorder following the COVID-19 pandemic, with burnout prevalent among 29% of them9.
During the pandemic, there has been a substantial increase in the pressure from mental health-related admissions to hospitals10. A number of causative stressors may have placed further strain on mental health workers, including bereavement, unemployment, and isolation, resulting in increased psychological morbidity11. Under such circumstances, ensuring the wellbeing of healthcare workers is of paramount importance to maintain a resilient healthcare system. However, limited research has been carried out so far on the effects of pandemics on psychiatrists and other frontline healthcare workers.
Following two surges of the COVID-19 pandemic, we set out to ascertain the frequency of burnout among doctors working in a large mental health trust in Southeast England, with a secondary aim of exploring possible contributory factors.
METHODOLOGY
We carried out a cross-sectional survey of all doctors working in a county-wide mental health Trust in England. Using NHS Mail, a link to the online survey was sent to all doctors working at different experience levels and across a number of psychiatric specialties.
The survey was based on the Maslach Burnout Inventory12, which is considered the gold standard in assessing burnout among the healthcare workforce. It consists of 22 questions, divided into domains that assess emotional exhaustion (EE), depersonalisation (DP) and personal accomplishment (PA) on a 7-point scale ranging from “never” to “every day”. Scores for these domains range from 0 to 54, 0 to 30, and 0 to 48, respectively. High scores on the EE (≥ 30) and DP (≥ 12) subscales or a low score on the PA subscale (≤ 30) were considered highly suggestive of burnout symptoms.
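To make the scoring rules concrete, the sketch below shows how responses could be aggregated and classified. The item-to-subscale groupings are hypothetical placeholders, since the licensed MBI item key is not reproduced in this paper; only the score ranges and cut-offs come from the text above.

```python
# Illustrative sketch of the MBI scoring rules described above.
# The item groupings are hypothetical placeholders, not the licensed MBI key;
# each response is coded 0 ("never") through 6 ("every day").

EE_ITEMS = range(0, 9)    # emotional exhaustion: 9 items, scores 0-54
DP_ITEMS = range(9, 14)   # depersonalisation: 5 items, scores 0-30
PA_ITEMS = range(14, 22)  # personal accomplishment: 8 items, scores 0-48

def classify_burnout(responses):
    """responses: a list of 22 integers in 0..6, one per questionnaire item."""
    ee = sum(responses[i] for i in EE_ITEMS)
    dp = sum(responses[i] for i in DP_ITEMS)
    pa = sum(responses[i] for i in PA_ITEMS)
    # Cut-offs from the text: EE >= 30, DP >= 12 or PA <= 30 suggest burnout
    return {"EE": ee, "DP": dp, "PA": pa,
            "burnout": ee >= 30 or dp >= 12 or pa <= 30}
```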
The anonymised survey contained questions related to demographics, 22 questions as derived from the Maslach Burnout Inventory, and 14 other questions exploring specific work-related stressors regarding the COVID-19 pandemic. Responses to the questions were analysed and categorised into themes to allow further analysis and discussion.
RESULTS
Our response rate was 42%, as 106 out of 254 doctors completed the questionnaire. Not all participants answered all questions, and response numbers for each question are indicated where applicable in the respective tables. There was an even distribution between trainees and consultants, but less representation from specialty doctors, which was expected given their smaller numbers. Gender was equally split, and age was relatively evenly distributed in our sample.
Figure 1: Participant demographics
Regarding the Maslach Burnout Inventory questions, higher aggregate scores on the emotional exhaustion and depersonalisation subscales indicate a higher chance of burnout. When comparing these two subscales, levels of emotional exhaustion were higher than those of depersonalisation. Conversely, on the personal accomplishment subscale, more frequent occurrences indicate a lower chance of burnout.
Table 1: Maslach Burnout Inventory Results
Question | Never | A few times/year | Once/month | A few times/month | Once/week | A few times/week | Every day | n
I feel emotionally drained by my work | 7.6% (8) | 23.8% (25) | 10.5% (11) | 27.6% (29) | 6.7% (7) | 20.0% (21) | 3.8% (4) | 105
Working with people all day long requires a great deal of effort | 11.3% (12) | 23.6% (25) | 11.3% (12) | 26.4% (28) | 5.7% (6) | 16.0% (17) | 5.7% (6) | 106
I feel like my work is breaking me down | 20.0% (21) | 39.0% (41) | 9.6% (10) | 18.1% (19) | 1.9% (2) | 9.6% (10) | 1.9% (2) | 105
I feel frustrated by my work | 16.2% (17) | 33.3% (35) | 10.5% (11) | 21.9% (23) | 5.7% (6) | 10.5% (11) | 1.9% (2) | 105
I feel I work too hard at my job | 12.4% (13) | 21.0% (22) | 8.6% (9) | 25.7% (27) | 5.7% (6) | 17.1% (18) | 9.5% (10) | 105
It stresses me too much to work in direct contact with people | 46.2% (49) | 28.3% (30) | 9.4% (10) | 7.5% (8) | 0.9% (1) | 4.7% (5) | 2.8% (3) | 106
I feel like I’m at the end of my tether | 42.9% (45) | 33.3% (35) | 4.8% (5) | 6.7% (7) | 2.9% (3) | 7.6% (8) | 1.9% (2) | 105
I feel I deal with my team/colleagues impersonally, as if they are objects | 70.8% (75) | 19.8% (21) | 4.6% (5) | 2.8% (3) | 0.9% (1) | 0.0% (0) | 0.9% (1) | 106
I feel tired when I get up in the morning and have to face another day at work | 15.1% (16) | 36.8% (39) | 13.2% (14) | 11.3% (12) | 1.9% (2) | 17.0% (18) | 4.6% (5) | 106
I have the impression that my team/colleagues make me responsible for some of their problems | 41.0% (43) | 21.9% (23) | 10.5% (11) | 20.0% (21) | 0.0% (0) | 4.8% (5) | 1.9% (2) | 105
I am at the end of my patience at the end of my work day | 31.7% (33) | 36.5% (38) | 5.8% (6) | 11.5% (12) | 3.8% (4) | 9.6% (10) | 0.9% (1) | 104
I really don’t care about what happens to some of my team/colleagues | 85.7% (90) | 6.7% (7) | 1.9% (2) | 1.9% (2) | 1.9% (2) | 0.9% (1) | 0.9% (1) | 105
I have become more insensitive to people in the workplace | 67.0% (71) | 22.4% (24) | 2.8% (3) | 3.8% (4) | 0.9% (1) | 2.8% (3) | 0.0% (0) | 106
I’m afraid that this job is making me uncaring | 62.3% (66) | 25.5% (27) | 2.8% (3) | 1.9% (2) | 3.8% (4) | 1.9% (2) | 1.9% (2) | 106
I accomplish many worthwhile things in this job | 2.9% (3) | 8.6% (9) | 6.7% (7) | 15.2% (16) | 6.7% (7) | 25.7% (27) | 34.3% (36) | 105
I feel full of energy | 4.7% (5) | 6.6% (7) | 8.5% (9) | 20.8% (22) | 8.5% (9) | 33.0% (35) | 17.9% (19) | 106
I am easily able to understand what my team/colleagues feel | 0.9% (1) | 2.8% (3) | 3.8% (4) | 13.2% (14) | 8.5% (9) | 34.0% (36) | 36.8% (39) | 106
I look after my team/colleagues’ problems very effectively | 0.9% (1) | 1.9% (2) | 5.8% (6) | 12.5% (13) | 7.7% (8) | 44.2% (46) | 26.9% (28) | 104
In my work, I handle emotional problems very calmly | 0.9% (1) | 4.8% (5) | 1.9% (2) | 2.9% (3) | 13.3% (14) | 31.4% (33) | 44.8% (47) | 105
Through my work, I feel that I have a positive influence on people | 0.9% (1) | 4.8% (5) | 4.8% (5) | 8.6% (9) | 9.5% (10) | 38.1% (40) | 33.3% (35) | 105
I am easily able to create a relaxed atmosphere with my team/colleagues | 0.9% (1) | 3.8% (4) | 2.8% (3) | 9.4% (10) | 11.3% (12) | 34.0% (36) | 37.7% (40) | 106
I feel refreshed when I have been close to my team/colleagues | 1.9% (2) | 8.5% (9) | 3.8% (4) | 17.0% (18) | 11.3% (12) | 34.9% (37) | 22.6% (24) | 106
In the other quantitative questions, all respondents reported that their screen time had increased during the pandemic. A majority reported an increase of more than 2 hours/week, and 71% registered an increase of more than 4 hours/week. Despite this, there was no corresponding increase in home-working that could account for this difference.
The results of the remaining questions reflected a poorer work experience. The strongest evidence was for a feeling that mask wearing had affected rapport with patients. Other more common experiences included poor outcomes for patients during the pandemic, with decreased staffing levels, increased workload, and delayed treatments.
Table 2: Other Question Responses – Quantitative only
Question | 0-1 hours | 1-2 hours | 2-3 hours | 4-6 hours | 6 hours + | n
During the pandemic, my screen time (e.g. due to meetings and teaching) increased by | 4.0% (4) | 5.0% (5) | 20.0% (20) | 37.3% (37) | 33.3% (33) | 99

Question | Yes | No | n
Were you working from home more often during the pandemic? | 48% (48) | 52% (52) | 100

Question | Strongly disagree | Disagree | Neither agree nor disagree | Agree | Strongly agree | n
I felt that the increase in screen time negatively affected my mood | 10.0% (10) | 23.0% (23) | 35.0% (35) | 24.0% (24) | 8.0% (8) | 100
I felt that the increase in screen time increased my level of exhaustion | 14.3% (14) | 16.3% (16) | 19.4% (19) | 39.8% (39) | 10.2% (10) | 98
I felt that the increase in screen time resulted in depersonalisation of my patients | 11.0% (11) | 25.0% (25) | 33.0% (33) | 25.0% (25) | 6.0% (6) | 100
I felt that the increased screen time hindered the working relationship between colleagues | 10.0% (10) | 25.0% (25) | 20.0% (20) | 33.0% (33) | 12.0% (12) | 100
I felt that the increase in screen time resulted in feelings of burnout | 17.0% (17) | 26.0% (26) | 27.0% (27) | 23.0% (23) | 7.0% (7) | 100
I felt dissatisfied with my online/telephone consultations | 8.2% (8) | 31.6% (31) | 41.8% (41) | 13.3% (13) | 5.1% (5) | 98
I felt that wearing masks affected my rapport with patients | 8.1% (8) | 17.1% (17) | 9.1% (9) | 49.5% (49) | 16.2% (16) | 99
I felt dissatisfied with the patient care provided to patients during the pandemic | 7.1% (7) | 36.4% (36) | 33.3% (33) | 21.2% (21) | 2.0% (2) | 99
I felt that patients did have poorer outcomes during the pandemic | 5.1% (5) | 27.2% (27) | 28.3% (28) | 35.4% (35) | 4.0% (4) | 99
I felt that working from home affected my work-life balance | 10.4% (10) | 24.0% (23) | 42.7% (41) | 17.7% (17) | 5.2% (5) | 96
I felt that working from home resulted in increased work related stressors | 12.5% (12) | 30.2% (29) | 37.5% (36) | 18.8% (18) | 1.0% (1) | 96
I felt that working from home resulted in more difficulties in my job e.g. communicating with my team or patients | 11.5% (11) | 29.2% (28) | 35.4% (34) | 21.9% (21) | 2.0% (2) | 96
DISCUSSION
Our study provides a snapshot of the difficulties encountered by different grades of psychiatrists working in a large English county during the COVID-19 pandemic. We found a burnout rate of 44.2%, which is higher than the 36.7% observed by Jovanović et al8 among those working in other countries before the pandemic. Since a higher prevalence is also documented in other recent studies13, it is reasonable to assume that the higher rate of burnout is due to increased work-related stressors during the COVID-19 pandemic. These stressors could be linked to newly introduced guidelines involving social distancing, as well as high staff sickness and redeployment.
In the personal accomplishment subscale of our study, the highest number of doctors experienced burnout, possibly suggesting a link to the COVID-19 pandemic. Unfortunately, we do not have a pre-pandemic survey for comparison, which could have established causality with greater certainty.
Seventy-one per cent of our cohort reported an increase of more than 4 hours of computer screen time a week, which was not accounted for by an increased amount of working from home. Various factors could explain this finding, including the introduction of remote medical consultations, online multidisciplinary team meetings and teaching/training. Virtual consultations may provide an alternative to face-to-face assessments, but difficulties such as discussing sensitive topics and demonstrating empathy remotely could affect the therapeutic relationship, contribute to medical errors, and cause screen fatigue, resulting in increased levels of burnout14,15.
A compromised professional identity and reduced job satisfaction are considered among the significant predictors of job burnout16,17. It is, therefore, reasonable to question whether the increased screen time and reduced patient contact could have impacted the professional identity of our cohort and their job satisfaction. This could also provide a possible explanation for our cohort scoring highly for low personal accomplishment. However, one study that examined burnout in medical residents who had used telemedicine to replace outpatient clinics found that burnout actually decreased with increased use of virtual consultations18. Therefore, further research is needed on telemedicine practices in different medical subspecialties and their impact on medical professionals’ working lives.
Burnout is associated with an increase in clinical errors and may manifest as irritability, fatigue, and reduced cognitive functioning, ultimately resulting in a reduction in the quality of patient care12,19. Medical errors, in turn, cost the National Health Service (NHS) £3.3 billion in litigation costs and additional bed days, due to both systemic and individual factors20. Overall, 41% of our cohort were dissatisfied with remote consultations and the care provided to their patients during the pandemic. The reported difficulties with providing good patient care primarily consisted of poorer-quality and reduced patient interaction, patients being unable to engage with services, and delayed treatments.
Wearing face masks could affect both verbal and non-verbal communication, which in turn hinders the therapeutic relationship, as previous research has shown that patient engagement, understanding and treatment success are influenced by a clinician’s facial expressions21. The poorer patient outcomes found in our study could partly be due to the difficulties experienced during the pandemic, as approximately 62% of our cohort felt that face masks affected their rapport with patients. Other factors that could have contributed to these poorer outcomes include redeployment of staff due to NHS pressures and reduced services. Further work is, however, needed to ascertain the associated causal pathway.
During the height of the pandemic, carrying out frenetic clinical work with limited resources and little respite, coupled with the loss of loved ones and colleagues, could have undoubtedly impacted the mental health of the medical workforce, including psychiatrists. On the other hand, the pandemic may also have heightened the sense of vocation for some doctors. It is, therefore, difficult to assess the lasting effects of burnout until the pandemic is finally over and we resume normal therapeutic practices, in both clinical and personal settings.
Sceptical attitudes towards the effectiveness and/or safety of Covid-19 vaccines are currently a major risk to global health. However, not every person declining Covid-19 vaccination is an irrational conspiracy theorist (1). Patients suffering from specific conditions may have justified concerns: in the absence of safety data for their particular health problems, they may find it difficult to appraise the risks associated with vaccination in their condition.
Patients suffering from long-term complications of Covid-19 have coined the term long covid to describe their debilitating illness (2). Many clinicians feel that the complexity of long covid may reflect different pathological processes (3), with respiratory symptoms being primarily secondary to tissue damage, whilst fatigue and its associated post-exertional symptoms, such as physical pain or brain fog, result from a dysregulated immune response (4).
Two mRNA vaccines, developed by Pfizer-BioNTech and Moderna, have demonstrated impressive levels of immunity against the SARS-CoV-2 virus in randomised controlled trials (5,6). This relatively new technology has several advantages that made these among the earliest vaccines to be developed, tested, scaled up and subsequently approved for use all over the world. The potency of the immune response is another significant advantage of mRNA vaccines, as suggested by previous in vitro and animal experiments (7).
This potency is naturally a positive characteristic especially when mRNA vaccine technology is used against an easily transmissible and potentially lethal disease. However, for patients suffering from long covid, such a strong immune response could be a cause for concern.
As vaccination programmes against the SARS-CoV-2 virus are rolled out around the world, long covid patients face a difficult decision, as no data are available about the impact of the mRNA vaccines on their condition. In the UK, long covid is not considered a contraindication for vaccination (8); however, in the absence of any safety data for this group of patients, it is very difficult to provide an informed opinion about the risk.
Methods
In the summer of 2020, Wrightington, Wigan and Leigh NHS Trust Hospitals established a dedicated service for staff suffering from long covid. As healthcare workers (HCW) in the UK were prioritised for vaccination, the Pfizer-BioNTech vaccine was offered to all hospital employees, with the first dose provided between the end of December 2020 and the end of January 2021.
A survey questionnaire was sent to all long covid staff members 2 weeks after the conclusion of the first-dose roll-out. The e-mail addresses were obtained from the long covid clinic database. This short questionnaire evaluated the rate of acceptance of the vaccine, reasons for declining, immediate side effects and any persistent change in long covid symptoms following vaccination. The survey was approved by the information governance department.
Results
The questionnaire was sent to 117 HCW. Out of 83 responses, 77 subjects had been offered the vaccine (age range: 18-65, with only 7 male respondents).
Ten HCW (13%) declined the vaccine, with 5 of them citing concerns about worsening symptoms as the main reason. Of the 67 HCW receiving the vaccine, 48 (72%) had immediate but self-limiting side effects.
Fatigue, shortness of breath and anxiety were the most common long covid symptoms our cohort originally had (75%, 53% and 18%, respectively). Several weeks following vaccination, 45 subjects (67%) reported no change in symptoms. Fourteen subjects (21%) reported improvement of one or more of their symptoms (8 experienced improving respiratory symptoms, 4 improving fatigue, 5 improving anxiety and 2 mentioned improving other symptoms). Eight subjects (12%) reported worsening symptoms, including fatigue (3 subjects), respiratory symptoms (1 subject) and anxiety (2 subjects). Two subjects experienced worsening of other symptoms.
Discussion
When offered vaccination, our long covid patients showed a higher rate of uptake (86%) compared to the general population (9). However, five patients declined the vaccine because of their concerns about worsening symptoms.
Despite the small number of subjects, the limitations of the survey methodology and the relatively short period following vaccination, our report is the first to comment on the response of a cohort of long covid patients to mRNA vaccination. Most of our HCW did not report any change in their symptoms; encouragingly, 21% experienced subjective improvement of symptoms, with 10% of all participants reporting improvement in respiratory symptoms. The 8 subjects reporting worsening of symptoms experienced more diverse problems, with worsening fatigue the most common.
Our results were consistent with unpublished data reporting the feedback of 473 long covid social media users (10): 32% of this self-selecting population reported improvement of symptoms whilst 17% reported worsening of symptoms.
We would like to suggest two potential explanations for our findings. Comprehensive investigations of the respiratory system can be normal in some long covid patients complaining of shortness of breath (11), and dysfunctional breathing might contribute to the severity of this symptom (12). The confidence patients gain from taking the vaccine may act positively to reduce their anxiety and, subsequently, their perception of the respiratory effort.
Another potential explanation is the complex way mRNA vaccines manipulate the immune system, potentially improving or worsening the already dysregulated immunity in long covid patients (4). It is encouraging to see that long covid patients were about twice as likely to experience improvement of symptoms as worsening of symptoms. We hope that our findings may be an early source of reassurance that mRNA Covid-19 vaccines are not commonly associated with adverse effects in long covid patients.
We feel that longitudinal studies appraising long covid symptoms and immunological markers before and after mRNA vaccination have the potential not only to improve understanding of the main long covid pathologies but also to unlock the secrets of Chronic Fatigue Syndrome/Myalgic Encephalomyelitis (ME/CFS), a common condition possibly sharing many long covid characteristics.
The most recent outbreak of severe acute respiratory syndrome (SARS) has been caused by SARS coronavirus 2 (SARS-CoV-2), a new single-stranded, positive-sense RNA beta-coronavirus first reported in 2019 in Wuhan, China. The virus has spread to nearly all countries across the world.1-4
SARS-CoV-2 infection, also known as Coronavirus Disease 2019 (COVID-19), replicates mainly in the upper and lower respiratory tract. The transmission of COVID-19 from symptomatic and asymptomatic patients is usually through respiratory droplets, generated by coughing and sneezing or through contact with contaminated surfaces.4,5 The disease has an incubation period of approximately 5.2 days.6
Most infections are mild and uncomplicated.4 About one week after the onset of disease, 5-10% of patients tend to develop pneumonia requiring hospitalisation.4,6 Some of these patients develop further complications, often leading to death.4,6 The overall case fatality rate is 1.4%, with a noticeably higher rate after the sixth decade of life.4
People aged ≥60 years, especially those with underlying medical conditions – such as cardiovascular disease, hypertension, diabetes mellitus (DM), chronic respiratory disease, cancer, immunodeficiency and obesity – and males have an increased risk of dying.4,7-12 The risk of a severe adverse outcome also increases with the number of associated co-morbidities.10
The impact of active cancer, endocrine disorders, autoimmune inflammatory rheumatic diseases etc. on COVID-19 outcomes has been investigated widely.13-18 Divergent views have emerged regarding the role of renin angiotensin aldosterone system (RAAS) inhibitors, steroids, and immunomodulators in COVID-19 mortality.
The objective of our study was to evaluate the risk posed by epidemiological and demographic variables in our local population. We also sought to analyse the impact of co-morbidities on in-hospital mortality in confirmed COVID-19 patients.
METHODS
Study design:
We conducted a retrospective analysis of demographic characteristics (age and sex) and medical co-morbidities – hypertension, chronic heart failure, ischaemic heart disease, DM, thyroid disorders, asthma, chronic obstructive pulmonary disease (COPD), chronic kidney disease (CKD) (eGFR < 60 mL/min/1.73 m²), chronic liver disease, active malignancy, immunosuppression, post-transplant status, chronic inflammatory arthritis and other rheumatic disorders – in all patients with confirmed COVID-19 who were admitted to two peripheral district general hospitals under a single National Health Service (NHS) trust serving a primarily rural population in western England.
Inclusion and Exclusion Criteria:
To determine COVID-19 status, nose and throat swab specimens were obtained for real-time reverse transcription polymerase chain reaction (rt-PCR) testing in all adult (≥18 years) patients attending one of the two district general hospitals (Royal Shrewsbury Hospital, Shrewsbury; and Princess Royal Hospital, Telford) under Shrewsbury & Telford Hospitals NHS Trust (SaTH) in the period from 1st March to 15th May 2020.
Patients who tested positive (for both the N gene and the ORF1ab gene, or for either gene alone) and required subsequent in-hospital management were included in the study. Patients who were discharged after initial senior review (usually by a consultant physician), or who were brought in as a cardiac-respiratory arrest, were excluded. Re-admissions due to COVID-19 beyond 48 hours after hospital discharge were excluded from the study. Patients diagnosed solely on radiological or clinical findings without a positive rt-PCR test were not included.
We analysed the data based on the index-admission (including failed-discharge: re-admission within 48 hours following hospital discharge). No follow-up data was collected post-hospital discharge of these patients.
Data collection & analysis:
A list of all confirmed COVID-19 patients over a 76-day period was identified from the trust microbiology database. A search of the electronic patient records was completed by four members of our team. Supplementary data was gleaned from existing hospital paper records. Patient demographics, presenting symptoms, associated co-morbidities, medications, admission and discharge dates, intensive therapy unit (ITU) admissions, renal profile, referral source and outcomes were recorded in the specifically designed electronic datasheet.
Study Outcome:
The impact of epidemiological and demographic characteristics, and pre-existing medical conditions on the mortality of confirmed COVID-19 patients requiring in-hospital treatment was analysed.
RESULTS
A total of 303 confirmed COVID-19 (rt-PCR positive) samples were collected over a 76-day period. Five patients had been tested twice, and this was accounted for. Thirty-five patients were excluded from the study: twenty-four of them discharged after initial senior review without requiring in-hospital treatment, seven brought in with cardio-pulmonary resuscitation (CPR) in progress, three had inadequate data, and one was <18 years old. Of the 263 patients admitted, 70 (26.6%) died in hospital (Figure-1).
Figure-1: Flowchart of sampling and analysis
We stratified the mortality rates among the admitted patients by age (Table-1). A chi-square test of independence revealed that the mortality rate was significantly related to advanced age (χ² = 27.078, p<0.001). The age and sex distributions of admissions and mortality are shown in Figure-2 (a, b, c).
Table-1: Medical admissions and mortality stratified by age
Age | Admission N(m/f) | Admission (%) | Death N(m/f) | Mortality (%)
18 – 20 Years | 0 | 0% | 0 | 0%
21 – 30 Years | 9(2/7) | 3.4% | 0 | 0%
31 – 40 Years | 9(6/3) | 3.4% | 0 | 0%
41 – 50 Years | 26(17/9) | 9.9% | 3(2/1) | 11.5%
51 – 60 Years | 36(21/15) | 13.7% | 4(3/1) | 11.1%
61 – 70 Years | 43(26/17) | 16.3% | 11(7/4) | 25.6%
71 – 80 Years | 56(35/21) | 21.3% | 15(11/4) | 26.8%
81 and Above | 84(52/32) | 31.9% | 37(24/13) | 44.0%
Total | 263(159/104) | 100.0% | 70(47/23) | 26.6%
Chi-square (χ²) = 27.078, P-value <0.001.
N: number of patients, m: male, f: female.
Figure-2 (a,b,c): Age, Sex, Admission and Mortality pyramids
We considered two age cohorts (below 60 and ≥60 years of age) and other relevant demographic parameters (sex and residence in own-home/care-home) to analyse their impact on mortality rates (Table-2). Of the admitted patients, 159 (60.5%) were male and 104 (39.5%) were female. The mortality rate was strongly associated with age ≥60 years (χ² = 17.120, p<0.001) but independent of sex distribution (χ² = 1.784, p=0.182). It was, however, affected by the care facility (χ² = 18.146, p<0.001), with a higher mortality rate among patients residing in a long-term care-home.
Table-2: Admission and Mortality stratified by demographic variables
Variables | Admission (N) | Admission (%) | Death (N) | Mortality (%) | Chi-square (χ²) | P-value
Age | | | | | 17.120 | <0.001
<60 years | 77 | 29.3% | 7 | 9.1% | |
≥60 years | 186 | 70.7% | 63 | 33.9% | |
Sex | | | | | 1.784 | 0.182
Female | 104 | 39.5% | 23 | 22.1% | |
Male | 159 | 60.5% | 47 | 29.6% | |
Care facility | | | | | 18.146 | <0.001
Own-home | 211 | 80.2% | 44 | 20.9% | |
Care-home | 52 | 19.8% | 26 | 50.0% | |
N: Number of patients; Care-home: Long-term care in residential or nursing home.
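The chi-square tests of independence reported above can be reproduced from the 2×2 counts in Table-2. Below is a minimal sketch (not the authors' original code) using the care-facility counts:

```python
# Sketch: chi-square test of independence for mortality vs care facility,
# built from the Table-2 counts (deaths and survivors in each group).
from scipy.stats import chi2_contingency

#          died  survived
table = [[44, 211 - 44],   # own-home residents
         [26, 52 - 26]]    # care-home residents

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.5f}")  # approx. chi2 = 18.1, p < 0.001
```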
To identify the strength of the associations, we conducted a univariate logistic regression analysis with mortality as the dependent variable and the demographic variables and presence/absence of the co-morbidities as the independent variables (Table-3). We found that age as a continuous predictor had an odds ratio of 1.058 (p<0.001), which translates to a 5.8% increase in the odds of dying for every additional year of age. Using age as a categorical predictor (below 60 vs ≥60 years), the odds of death for patients aged below 60 years were 0.195 times the odds of death for patients aged 60 years or above.
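As a minimal sketch of this analysis (assuming a patient-level DataFrame `df` with a binary `died` outcome and an `age` column, names chosen here purely for illustration), the quoted odds ratio is the exponential of the fitted logistic coefficient:

```python
# Sketch: univariate logistic regression of in-hospital death on a predictor,
# and how the quoted odds ratios are derived. 'df', 'died' and 'age' are
# hypothetical names for the study dataset, not the authors' actual code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_or(df, predictor):
    X = sm.add_constant(df[[predictor]])
    fit = sm.Logit(df["died"], X).fit(disp=0)
    # exp(coefficient) is the odds ratio per one-unit increase in the predictor,
    # e.g. OR = 1.058 for age means 5.8% higher odds of death per extra year.
    return np.exp(fit.params[predictor]), fit.pvalues[predictor]

# usage: or_age, p_age = univariate_or(df, "age")
```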
Table-3: Univariate logistic regression analysis of the demographic variables and co-morbidities
Based on the Charlson Comorbidity Index (CCI) score, the severity of co-morbidities was categorised into four cohorts: mild/no co-morbidity (CCI:0), moderate (CCI:1-2), severe (CCI:3-4), and very severe (CCI≥5) [Table-4(4a)].
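A short sketch of this categorisation, using hypothetical CCI scores to illustrate the binning described above:

```python
# Sketch: binning patients into the four CCI severity cohorts described above.
import pandas as pd

cci = pd.Series([0, 2, 3, 5, 7, 1])  # hypothetical CCI scores, one per patient
cohort = pd.cut(cci, bins=[-1, 0, 2, 4, float("inf")],
                labels=["mild/none (0)", "moderate (1-2)",
                        "severe (3-4)", "very severe (>=5)"])
print(cohort.value_counts())
```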
Table-4: Impact of CCI score and specific medical-conditions on admission and mortality
4a) Admission and mortality stratified by CCI score-based cohorts
CCI score | Admission (N) | Mortality (N) | Mortality (%) | OR (95% C.I.) | p-value
Overall | 263 | 70 | 26.6 | |
0 | 31 | 1 | 3.2 | - | -
1-2 | 59 | 8 | 13.8 | 4.706 (0.56 – 39.49) | 0.154
3-4 | 68 | 23 | 33.8 | 15.33 (1.97 – 119.67) | 0.009
≥5 | 105 | 38 | 36.2 | 17.015 (2.23 – 129.78) | 0.006
4b) Admission and mortality stratified by specific medical-conditions
Medical-conditions | Admissions (N) | Mortality (N) | Mortality (%) | OR (95% C.I.) | p-value
DM | 54 | 18 | 33.3 | 1.510 (0.791 – 2.883) | 0.212
Thyroid disorders | 16 | 4 | 25.0 | 0.914 (0.285 – 2.934) | 0.880
Overall hypertensives | 75 | 16 | 21.3 | 0.707 (0.374 – 1.338) | 0.287
ACEi/ARB* antihypertensives | 51 | 11 | 21.6 | 0.760 (0.365 – 1.586) | 0.465
Non ACEi/ARB§ antihypertensives | 24 | 5 | 20.8 | 0.704 (0.253 – 1.964) | 0.503
Long-term oral steroids | 17 | 9 | 52.9 | 4.053 (1.091 – 15.063) | 0.037
Immunomodulators | 9 | 3 | 33.3 | 5.101 (0.659 – 39.460) | 0.119
N: Number of patients; DM: Diabetes Mellitus; *RAAS-inhibitors; §Non RAAS-inhibitors.
The impact of the CCI score-based cohorts on mortality is shown in Figure-3 (a-f). The CCI value was also a significant predictor, with an odds ratio of 1.255 (p<0.001). When the CCI score was used as a categorical predictor alongside the other two parameters (age and place of primary care), it remained a significant predictor, with the odds of death for patients with CCI scores of 0-4 being 44.8% (p=0.005) of the odds of death for patients with CCI scores ≥5 (Table-3).
Figure-3(a - f): Pie-chart representing impact of CCI score-based cohorts on mortality
a) Overall admitted patients: discharge and mortality; b) CCI score 0: discharge and mortality; c) CCI score 1-2: discharge and mortality; d) CCI score 3-4: discharge and mortality; e) CCI score ≤4: discharge and mortality; f) CCI score ≥5: discharge and mortality.
Interestingly, the eGFR at presentation turned out to be a significant predictor of mortality (OR=0.961, p<0.001). Of the co-morbidities, pre-existing renal disease was found to be an important predictor of mortality, with OR=1.996 (p=0.027). Long-term oral steroid use was another significant predictor, with the odds of death for patients on long-term oral steroids being 341.2% (p=0.016) of the odds for patients not on such medication. Patients with no background medical conditions (OR=0.181, p=0.022) fared better, with significantly lower odds of death compared to patients with at least one known medical condition (Table-3).
We also analysed the mortality of our patients in specific medical condition-based cohorts [Table-4(4b)]. A high mortality of 52.9% [OR (95%CI): 4.053 (1.091–15.063), p=0.037] was observed in patients on long-term oral steroids. A 33.3% [OR (95%CI): 1.510 (0.791–2.883), p=0.212] mortality rate was observed among in-patients with known diabetes on pharmacotherapy.
Many of the demographic variables and co-morbidities were inter-related – the odds of death for a patient coming from their own home were only 26% (OR=0.263, p<0.001) of the odds for those residing in a long-term care-home (Table-3). To offset the possibility of any confounding effect, we performed a multiple logistic regression analysis with all the important variables taken together (Table-5). After accounting for confounding effects, only age, care facility, presence of active malignancy and long-term oral steroids were found to be significant predictors of mortality. Interestingly, the presence of active malignancy was associated with a lower risk of death – this is possibly a bias due to the relatively small number of patients in that subset of our study. Age was the most significant predictor of mortality, followed by the care facility and the presence of active malignancy.
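A sketch of such an adjusted model is shown below, under assumed column names (categorical variables pre-coded as 0/1 flags); it mirrors the kind of output presented in Table-5 rather than reproducing the authors' exact model.

```python
# Sketch: multiple logistic regression with the variables entered together to
# adjust for confounding. Column names are hypothetical; 'df' holds one row
# per admitted patient with a binary 'died' outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratios(df, predictors):
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df["died"], X).fit(disp=0)
    ci = fit.conf_int()  # columns 0 and 1 hold the lower/upper bounds
    # Adjusted odds ratios with 95% confidence intervals, as in Table-5
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_lower": np.exp(ci[0]),
                         "CI_upper": np.exp(ci[1]),
                         "p": fit.pvalues}).drop(index="const")

# usage (hypothetical column names):
# adjusted_odds_ratios(df, ["age", "sex_female", "own_home", "cci_score",
#                           "active_malignancy", "long_term_steroids"])
```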
Table-5: Multiple logistic regression analysis of the demographic variables and co-morbidities
Variables | Odds Ratio | 95% C.I. (Lower – Upper) | P-value
Age | 1.049 | 1.013 – 1.086 | 0.007
Sex (Female) | 0.588 | 0.296 – 1.165 | 0.128
Care facility (own-home) | 0.411 | 0.195 – 0.866 | 0.019
CCI score | 1.051 | 0.826 – 1.337 | 0.685
Active malignancy | 0.078 | 0.008 – 0.725 | 0.025
Cardiovascular disease | 0.987 | 0.491 – 1.984 | 0.971
Respiratory disease | 1.162 | 0.517 – 2.612 | 0.716
DM & endocrine disorders | 1.370 | 0.608 – 3.085 | 0.448
Renal disease | 0.901 | 0.419 – 1.937 | 0.789
Rheumatic disorders | 0.927 | 0.128 – 6.719 | 0.941
Liver & hepato-biliary diseases | 0.364 | 0.030 – 4.357 | 0.425
Thyroid disorders | 0.827 | 0.186 – 3.676 | 0.803
Long-term oral steroids | 4.053 | 1.091 – 15.063 | 0.037
Immunomodulators | 5.101 | 0.659 – 39.460 | 0.119
No medical condition | 0.685 | 0.128 – 3.670 | 0.658
DM: Diabetes Mellitus
DISCUSSION
COVID-19 has taken 800,000 lives world-wide, as reported by the World Health Organisation (WHO) on August 30, 2020. A recent systematic review and meta-analysis reported that COVID-19 is associated with a severe disease course in about 23% of infected patients and a mortality of about 6%.19 The mortality rate varies in different geographical areas. In-hospital mortality was significantly higher in the United States of America (USA) (22.23%) and Europe (22.9%) compared to Asia (12.65%) (p<0.0001).20 However, there was no significant difference between the USA and Europe when compared to each other (p=0.49).20 Our study showed a 26.6% in-hospital mortality.
The mean age of the patients in our study was 68.74 years (SD: 16.89); 60.5% were male and 39.5% female, and 70.7% were aged ≥60 years. Univariate analysis showed that the mortality rate was significantly age-dependent (OR=1.058, p<0.001) – mortality (33.9%) was higher in patients aged ≥60 years, rising sharply to 44.0% in those aged ≥80 years (χ² = 27.078, p<0.001). Our results were consistent with other studies.21
Among the demographic characteristics, mortality-risk was independent of sex distribution (χ2 =1.784, p=0.182) in our study. This is in contrast to a meta-analysis, which reported the association between male-sex and COVID-19 mortality (OR =1.81; 95%CI:1.25–2.62).22 Multicentric studies in the United Kingdom (UK) would be warranted to see the trend in the local population.
Long-term care-home residents suffered 50.0% mortality (χ² = 18.146, p<0.001). The London School of Economics report of May 14, 2020, estimated that COVID-19-related deaths of care-home residents contributed 54% of all excess deaths in England and Wales. Our study findings indicate that long-term care-homes are hot-spots requiring shielding and protective measures against COVID-19 – a conclusion corroborating other studies.23
We aimed to define the predictive role of co-morbidities in COVID-19 mortality, an aspect that has been probed earlier as well.7-12 The CCI score remains a reliable method of measuring co-morbidity.24 NICE recommends that, for seriously unwell COVID-19 patients with a CCI score ≥5 being considered for intensive care admission, critical care advice should be sought to inform treatment decisions regarding the likely benefit of organ support. We examined the predictive mortality-risk of CCI scores among the admitted patients.
The mortality rates in the cohorts with CCI scores ≤4 and ≥5 were 20.3% and 36.2%, respectively. The odds of death for the CCI ≤4 cohort were less than half (44.8%) of those for the CCI ≥5 cohort. Based on this finding, we strongly recommend CCI scoring as a clinical risk-stratification tool in COVID-19.
We also examined the impact of organ-specific co-morbidities on in-hospital mortality. Patients with no background medical conditions showed a low mortality rate of 6.9% [OR (95%CI): 0.181 (0.042–0.782), p=0.022] and had significantly lower odds of death compared to patients with at least one medical condition on univariate logistic regression analysis (Table-3). The mortality rate was 3.2% in the CCI-0 cohort [Table-4(4a)].
The impact of COVID-19 on patients with CKD, glomerulonephropathies, dialysis dependence and post renal-transplant status remains unclear. Patients with SARS-CoV-2 infection were frequently found to have renal dysfunction, which was associated with greater complications and in-hospital mortality.25 A mortality rate of 3.6% was reported in patients attending an outpatient haemodialysis centre.26 Another study concluded there was a 3.07-fold (95%CI: 1.43–6.61) increase in mortality among renal failure patients.27 We found pre-existing renal disease to be a cause of significant concern, with 37.7% mortality [OR (95%CI): 1.996 (1.082–3.681), p=0.027], and the eGFR at presentation was a significant predictor (OR=0.961, p<0.001) (Table-3).
The use of steroids in COVID-19 continues to be explored. The RECOVERY trial in the UK, after evaluation at 28 days, concluded that dexamethasone reduced deaths by one-third in ventilated patients [age-adjusted rate ratio (RR) 0.65; 95%CI: 0.48–0.88; p=0.0003] and by one-fifth in other patients receiving supplemental oxygen with or without non-invasive ventilation (RR 0.80; 95%CI: 0.67–0.96; p=0.0021), although no benefit was observed in mild or moderate cases not requiring oxygen support (17.0% vs. 13.2%; RR 1.22; 95%CI: 0.93–1.61; p=0.14). In contrast, a systematic review concluded that the results from retrospective studies are heterogeneous, and it was difficult to assign a definite protective role to corticosteroids in this setting.28 We found long-term oral steroid use to be a significant predictor of mortality – 52.9% [OR (95%CI): 3.412 (1.261–9.23), p=0.016] – the odds of death being 341.2% of those for patients without any long-term oral steroid use (Table-3). The sample size of this cohort was relatively small, with 9 deaths out of 17 patients. However, based on our results, it may be safe to suggest that further population-based studies are required to determine the impact of long-term oral corticosteroid use in COVID-19.
A major proportion of endocrine disorders are of autoimmune aetiology. The impact of thyroid disorders on COVID-19 is yet to be studied widely.15,16 We found no increased risk of mortality [OR (95%CI): 0.914 (0.285–2.934), p=0.880] in patients with thyroid disorders. However, a 33.3% [OR (95%CI): 1.510 (0.791–2.883), p=0.212] mortality rate was seen among the diabetic patients on pharmacotherapy in our study [Table-4(4b)].
Pre-existing hypertension is an accepted risk factor for COVID-19 mortality.26,27 However, the role of RAAS-inhibitors and the upregulation of ACE-2 receptors in COVID-19 mortality calls for targeted clinical research for further clarification.29 A meta-analysis of four studies showed that patients treated with RAAS-inhibitors had a lower risk of mortality [RR: 0.65 (95%CI: 0.45–0.94), p=0.20].30 We did not observe any significant mortality-risk difference between the RAAS-inhibitor treatment group [OR (95%CI): 0.760 (0.365–1.586), p=0.465] and the non-RAAS-inhibitor treatment group [OR (95%CI): 0.704 (0.253–1.964), p=0.503] [Table-4(4b)]. We recommend the continuation of RAAS-inhibitors during COVID-19 unless there exist other compelling medical reasons for their discontinuation.
A prospective study in the UK concluded that mortality from COVID-19 in cancer patients appeared to be driven principally by age, gender and co-morbidities.13 The study could not identify evidence suggesting that cancer patients on cytotoxic chemotherapy, or other anticancer treatment, were at an increased risk of mortality from COVID-19 compared to the general population.13 We also did not detect any increased risk of mortality in patients with active malignancy [OR (95%CI): 0.078 (0.008–0.725), p=0.025] (Table-5).
The impact of various non-specific immunomodulators in COVID-19 outcome remains inconclusive.14 Our study did not reveal any significant predictive mortality-risk with the use of long-term immunomodulators (methotrexate, tacrolimus, sirolimus, mycophenolate, dapsone, sulfasalazine and azathioprine) on multiple logistic regression analysis. We reached the same conclusion with patients suffering from chronic rheumatic disorders on similar analysis (Table-5).
Our study had some unique characteristics. We analysed all the eligible samples over a consecutive 76-day period at the initial peak of the pandemic. The study was conducted across two district general hospitals, allowing an insight into two differently located rural populations. We conducted univariate and multiple logistic regression analyses of the demographic variables and co-morbidities to examine the predictive risk of contributing factors in COVID-19 mortality. The association between CCI scores and in-hospital mortality was also analysed in detail. We included demographic characteristics such as age, sex and residence in a long-term care-home while factoring in the associations.
Our study was not without limitations, though. We were unable to study the predictive risk of obesity, socioeconomic status and ethnicity due to inadequate data. The “White British” group comprised 80.61% of admitted patients, and no ethnicity was documented for 17.11% of our patients (Table-6, Figure-4).
Table-6: Medical admissions and mortality stratified by ethnicity
Ethnicity | Admission (N) | Admission (%) | Died (N) | Mortality (%)
White British | 212 | 80.61 | 63 | 29.71
Asian | 4 | 1.52 | 1 | 25.0
African | 2 | 0.76 | 0 | 0.00
Not documented | 45 | 17.11 | 6 | 13.33
N: Number of patients
Figure-4: Bar charts showing Admission and Mortality stratified by Ethnicity
We relied solely on electronic databases and hospital records to conduct the study retrospectively. A few subsets of patients – such as those on prescribed long-term oral steroids or immunomodulators, and those with thyroid disorders, chronic liver disease or active malignancy – had relatively small sample sizes, with the possible introduction of bias. We did not categorise diabetic patients into insulin-dependent/non-insulin-dependent or well/poorly controlled glycaemic cohorts. Nor did we split the respiratory group into well or poorly controlled asthma or COPD subsets. Patients on long-term steroid inhalation treatment were not included in the steroid cohort – a more extensive population-based study may be better suited for such an analysis.
CONCLUSIONS
Patients aged ≥60 years, those residing in a long-term care-home, those with pre-existing renal disease or multiple co-morbidities (especially CCI ≥5), and those on long-term oral steroids need to be considered at high risk of dying from COVID-19, alongside other established risk factors such as hypertension, diabetes and chronic respiratory disease. RAAS-inhibitors need not be discontinued because of COVID-19. Further studies are necessary to establish the links between long-term oral steroid use, chronic rheumatic disease, non-specific immunomodulators and COVID-19 mortality.
Processed sugar has a high glycaemic index (GI) as it is easily digested and absorbed, triggering a prominent insulin response which, if repeated over time, leads to insulin resistance and type 2 diabetes1,2. The appealing nature of high-calorie sugary foods combined with their low satiety means they also tend to be eaten in excess, which contributes to obesity and metabolic syndrome2,3. Obesity and diabetes raise the long-term risk of poor gut health and chronic inflammation, increasing the risk of chronic fatigue, low mood and degenerative disease conditions such as cancer, cardiovascular disease, dementia and stroke2,3.
Despite these obvious risks, a recent survey of NHS healthcare professionals reported that over half are overweight and over a quarter are living with obesity4. Both obesity and high-sugar foods are associated with musculoskeletal disorders, lower mood, unhappiness, fatigue and depression, which significantly contribute to sickness absence from work4,5,6,7.
Despite these risks, sugar consumption continues to escalate, especially in low- and middle-income countries. Global consumption has grown from 130 million tonnes in 2000 to 180 million tonnes in 20208, and its production contributes to poor health as well as greenhouse gas emissions and deforestation9,10.
In an attempt to reduce sugar intake, NHS England introduced a voluntary reduction scheme in July 2017, recommending that NHS Trusts and retailers on NHS premises reduce the proportion of monthly sugar-sweetened beverage sales. In March 2018, it reported a reduction in these drinks as a proportion of total drink sales from 15.6% to 8.7%11. However, to date, there is no information as to whether this has had any impact on sugar consumption, wellbeing or weight reduction. In our cancer unit there is a constant availability of sweet snacks, predominantly gifted by patients, and during busy clinics these often replace balanced meals. Some argue that this display of sugary foods, together with the high proportion of overweight staff, undermines the NHS’ ability to give patients ‘credible and effective’ behavioural lifestyle advice.
The hypothesis for this intervention was that removing sugary foodstuffs from the field of vision at nurses’ stations and replacing them with fruit, nuts and seeds would enable healthy snacking, resulting in weight loss and improved mood.
Methodology
This pilot intervention used quantitative methods to assess the feasibility of delivery and the outcome of a real-world intervention. This project was registered with and approved by Bedford Hospital NHS Trust Research and Development Department, but was classed as a practical service evaluation, hence no Ethics approval or written consent was required.
Participants: Fifty-eight members of staff at the Primrose Unit, Bedford Hospital were invited to participate in this 3-month nutritional intervention; 44 (75%) volunteered. The cohort consisted of 36 nurses, 2 consultants, 2 secretaries and 4 administration staff. There were 41 females and 3 males, aged 28-72 years (average age 45 years). A further 100 consecutive patients attending for treatments were asked for their views on the intervention.
Measures and outcomes: The primary endpoints were Body Mass Index (BMI) (kg/m²) and happiness, measured with the previously validated Subjective Happiness Score (SHS)12. As a secondary endpoint, patients attending the Oncology unit during the intervention period were asked anonymously for their opinion of the intervention and its likely influence on their eating habits.
Procedure: At baseline the Primrose Unit research department recorded staff demographics, BMI and SHS questionnaire scores. From the date of entry of the first participant (June 2019) to completion of the last participant (September 2019), all sugary foodstuffs were removed and replaced with bowls of mixed whole and dried fruit, seeds and mixed nuts. Non-participating staff were asked to voluntarily keep sugary items out of general sight. At baseline, 3 months and 5 months, participants were weighed by one of the research team and completed a SHS questionnaire.
In the final month of the intervention, 100 consecutive patients attending for treatments at the unit were asked their opinion of this intervention, specifically if they felt that removing sugary items from public display was a welcome gesture and whether seeing staff making efforts to reduce sugar intake would encourage them to do the same.
Statistical methods and analysis
The completed dataset was compiled in an Excel spreadsheet and then transferred for independent statistical analysis. The pre- and post-intervention weight differences were analysed by the T-test, as were the differences in happiness scores. The differences in participants’ opinions were analysed by the chi-squared test. There were no missing data and, in view of the relatively small numbers in the cohort, sub-group analysis was not planned or performed. The study advisory committee predetermined that a change in weight of 1 kg was meaningful13.
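As an illustration of this analysis plan, the sketch below uses made-up numbers standing in for the spreadsheet columns; the chi-squared comparison is assumed to be against an even split, which the paper does not state explicitly.

```python
# Sketch of the stated analysis: paired T-test on pre/post weights and a
# chi-squared test on the patient-opinion counts. The weight arrays below are
# made-up stand-ins for the study's spreadsheet columns.
import numpy as np
from scipy.stats import ttest_rel, chisquare

weight_baseline = np.array([72.4, 80.1, 65.3, 90.0, 70.2])  # kg per participant
weight_3months  = np.array([71.0, 79.5, 65.0, 88.2, 70.4])

t, p = ttest_rel(weight_baseline, weight_3months)  # paired comparison
print(f"t = {t:.2f}, p = {p:.3f}")

# Patient opinion: 94 "good impression" vs 6 "unsure/no", tested against 50:50
chi2, p_opinion = chisquare([94, 6], f_exp=[50, 50])
print(f"chi2 = {chi2:.1f}, p = {p_opinion:.2e}")
```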
Results
Average weight: At baseline the average weight was 72.12 kg, and 71.23 kg at 3 months; an average loss of 0.89 kg (T-test p=0.02). The average weight at 5 months was 71.09 kg; an average loss of 1.03 kg from baseline (T-test p=0.01). Twenty participants (46%) lost >1 kg in weight (average 3.01 kg), as opposed to 7 participants (16%) who gained >1 kg (average 2.23 kg), T-test p<0.03.
Happiness score: The average happiness score increased from 21.65 to 23.44 (+6.6%, T-test p<0.04). Amongst those who lost >1 kg in weight, the average happiness score increased from 21.54 to 23.75 (+9.3%, T-test p<0.03). In those who gained >1 kg in weight, the average happiness score decreased from 22.28 to 21.43 (-3.8%, T-test p<0.08). There was a 13.1% difference in the happiness score of those losing >1 kg compared to those gaining >1 kg in weight (p<0.001).
Patient opinion: 94 (94%) of patients indicated that this initiative gave a good impression; 6 (6%) were not sure or felt it did not give a good impression (chi-squared p<0.001). Ninety-seven (97%) indicated that the initiative would encourage them to reduce sugar in their own diet, versus 3 (3%) who were not sure or felt that it would not change their behaviour (chi-squared p<0.001).
Discussion
This small pilot evaluation has a number of methodological weaknesses, but what it lacked in statistical strength it gained in novelty and potential importance. It was the first nutritional intervention involving hospital staff within routine working practice. It addresses a health issue which affects hundreds of thousands of health workers every year, and demonstrated that a practical behavioural change initiative was welcomed by the majority of staff (75%), with no drop-outs or objections from non-participating staff. This implies a larger national study would be feasible.
These data demonstrated a statistically significant and meaningful weight reduction, similar to that of the best-designed weight loss programmes14. A fundamental rule of behavioural change is not to dictate to people, but to encourage them to make the decision to change for themselves. This simple intervention did not stop staff eating what they wanted, as there was no restriction on their overall food choices. The big difference was that, within their field of vision, there were healthier fruit and nuts instead of the high-calorie, sugar-laden foods that are usually readily available.
This intervention was overwhelmingly supported by patients. Surveys have repeatedly reported that patients look to health workers for guidance, and this study confirmed that this manoeuvre made patients think about their own eating habits. Although a further trial would have to establish whether this initiative objectively reduces processed sugar intake amongst patients, a reduction in intake would confer considerable benefits, as several large cohort studies have linked high sugar intake with a higher risk of cancer, greater complications of treatment and worse outcomes, for several reasons3.
Sugary foods increase the risk of weight gain, already more common after cancer; increase levels of oestrogen in post-menopausal women; and increase insulin-like growth factor (IGF) and other hormones such as leptin, all of which in laboratory experiments increase proliferation and markers of aggressiveness and spread of cancer cells2,15,16,17. Cohort studies have also reported that those who ate more than 10% of their daily calories as sugar had higher total LDL cholesterol levels, further adding to the cardiac risks of Herceptin and anthracycline chemotherapy drugs. Independent of obesity, high sugar intake directly increases the risk of type 2 diabetes (T2D) by overloading the insulin pathways1. Individuals with T2D have higher serum insulin levels (hyperinsulinaemia), which triggers proliferation in cancer models18 and is linked to higher oxidative stress and low-grade chronic inflammation, causing epigenetic and genetic damage and ongoing malignant transformation19. These laboratory findings are supported by several cohort studies which have linked diabetes with a higher risk of cancer and a higher risk of relapse post-treatment20.
Patients on chemotherapy should be particularly discouraged from eating sweets and cakes, as they are more prone to dental caries, which contributes to the risk of osteonecrosis following subsequent bisphosphonate therapy. Dental caries may also be a risk factor for bowel cancer itself, as DNA from bacteria commonly found in caries (Fusobacterium) has been detected in the genes of bowel cancers but not in normal guts21.
Patients receiving the new generation of targeted therapies should be particularly vigilant about their sugar intake. As PD-1 inhibitors recruit the body's immunity to recognise and target cancer cells, the influence of diet and lifestyle is becoming even more important: studies have demonstrated that better gut health is linked to significantly better response rates. Processed sugar is the preferred fuel of pro-inflammatory firmicutes bacteria, whilst the healthy bacteroidetes utilise glycans from the breakdown of polyphenols, which explains the inverse correlation between sugar intake and gut health22. Whole fruit intake, by contrast, is associated with better gut and general health as it provides polyphenols which feed healthy bacteria3,23. Despite containing 9-14% fructose, the fibre and pulp make fruit satiating and slow gastric emptying, thus reducing the GI3. Additionally, the polyphenols in fruit, vegetables, nuts, legumes, herbs and spices slow the transport of sugar across the gut wall by inhibiting sodium-dependent glucose transporter 1. They enhance insulin-dependent glucose uptake and activate 5' adenosine monophosphate-activated protein kinase, which explains why their regular consumption is associated with a lower risk of T2D3,23,24. They also reduce gut and systemic inflammation; enhance anti-oxidant enzyme production, so reducing intracellular oxidative stress; and reduce the risk of cancer and other chronic diseases, including those associated with diabetes3,25,26.
The evaluation was not robust enough to measure whether this resulted in less sickness absence, but this endpoint should be included in a larger design. It also did not include data for those staff who did not actively participate, but who benefited from removal of sugary foods from their work areas; the evaluation committee did not receive any complaints or objections to their removal.
Government initiatives such as a sugar tax and public information campaigns may help, but as individuals within the NHS, we have an opportunity to influence our staff, the patients whom we serve and the wider public. The evaluation reported in this paper is a small start, but it demonstrates that a multicentre study would be feasible and, if the results are confirmed, it could initiate a national cultural change in attitudes towards sugar in the NHS.
Parkinson’s disease (PD) is the second most common neurodegenerative disease. It is associated with loss of dopamine, leading to motor disorders 1. However, non-motor symptoms such as anxiety, stress and depression, as well as cognitive impairment, are also common among patients 2. It has been hypothesized that non-motor symptoms can affect the quality of life of PD patients 3. The current therapeutic approach relies on dopamine substitution, which has no curative effect and does not improve non-motor symptoms. Studies have shown that meditation and other relaxation techniques can provide relief from non-motor symptoms. Mindfulness-based stress reduction (MBSR) is a technique used for improving stress-related symptoms in long-term conditions such as stroke, cancer, and PD 4-6. It involves focused attention, open monitoring, and non-judgemental self-awareness of body movements in the present moment. Studies have shown that mindfulness improves brain plasticity in areas involved in emotional regulation and processing 7, 8. Thus, we hypothesized that mindfulness techniques could also have a positive effect on the non-motor symptoms of PD patients, which could enhance their quality of life after training sessions. This clinical trial aimed to investigate the impact of mindfulness training on the quality of life of PD patients.
Materials and Methods
Participants and Ethical issues
This randomized clinical trial was conducted at the neurology outpatient clinics of Imam Reza and Razi University Hospitals. Participants were 40 patients aged 67.95 ± 6.8 years (range 56-80) with a definite diagnosis of PD who had been receiving dopaminergic drugs for at least one year. Twenty-seven of the patients were male and 13 were female; all were married, and 4 reported a family history of PD. Participants were randomly allocated to experiment and control groups of 20 patients each. For randomization, a computer-generated list of random numbers was used and applied to patients at the time of their neurologist visit at the clinic.
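For illustration only, the sketch below shows one way such a computer-generated allocation list could be produced in Python; the paper does not specify the exact scheme, so simple shuffled 1:1 allocation is an assumption here.

```python
# Hypothetical sketch of 1:1 allocation for 40 patients (scheme assumed, not stated).
import random

random.seed(2018)                      # arbitrary seed, for reproducibility only
allocation = ["experiment"] * 20 + ["control"] * 20
random.shuffle(allocation)             # random order, applied at successive clinic visits
print(allocation[:5])
```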
The inclusion criteria were: definite diagnosis of idiopathic PD based on UK Brain Bank criteria; mild or moderate disease according to Hoehn and Yahr (HY) staging (1-3); a stable, normal dosage of PD medications within the last six months; normal cognitive function or mild cognitive impairment according to a Mini-Mental State Examination (MMSE) score of 17-30; and enthusiasm and commitment to participate in the mindfulness training sessions and to practice the required exercises at home.
Patients with any of the following were excluded: focal neurologic deficit; abnormal brain imaging findings suggestive of brain lesions; other medical conditions that would affect quality of life; use of antiepileptic drugs; and symptoms of psychosis.
The protocol of the study was reviewed and approved by the local ethics committee of Tabriz University of Medical Sciences (IR.TBZMED.REC.1397.551). All patients provided written informed consent to participate in the study and to the use of their information. This trial was registered on the IRCT.ir website (IRCT20181007041258N1).
Mindfulness Training sessions
The intervention comprised eight weekly mindfulness-based stress reduction (MBSR) sessions, each lasting 2 hours with a 15-minute break between the first and second hours. The sessions were followed by a one-day, 7-hour retreat programme held between the sixth and seventh sessions. Patients were asked to practice the requested homework for at least 30 minutes after each session. The training protocol followed the steps described by Kabat-Zinn 9, and the sessions were delivered by a psychiatrist with over 5 years of experience in MBSR instruction. The instruction was based on the teaching of three techniques: body scanning, mindfulness meditation, and gentle yoga. The sessions focused on physical and mental awareness of the body, diminishing the physiological effects of pain and stress, reacting less emotionally when facing distress, maintaining mental calmness through life's challenges, non-judgmental awareness, equanimity in stress management, and the joy of every moment.
Controls
The patients in the control group received eight 1-hour sessions over the same period as the experiment group. The sessions centered on basic information about PD, based on brochures published by the American Parkinson Disease Association, with the topics: medications, symptoms of the disease, mood and sleep, and connecting with resources.
Assessments
All participants' general data regarding age, gender, type of medication, and duration of disease were gathered from patients' self-reports and the information documented in their clinical records. Two neurologists assessed the HY stage, disease severity, and probable motor disturbance at baseline (during recruitment, within one week before the initial session). Quality of life was assessed at baseline (on the day of the first training session, before the class) and after the experiment.
Quality of life was evaluated with the PDQ-39, a 39-item questionnaire based on patient-reported health status. It evaluates how eight scales of daily activity are affected by PD: Mobility (MOB), Activities of daily living (ADL), Emotional well-being (EMO), Stigma (STI), Social support (SOC), Cognitions (COG), Communication (COM) and Bodily discomfort (BOD). For each item, participants choose one of five ordered responses according to how often, because of their disease, they have faced the difficulty described. Each scale score is expressed as a percentage, and the overall score, the Parkinson's Disease Summary Index (PDSI), is calculated as the mean of the eight scale percentage scores. The assessments were conducted in person by the principal investigator (N.Gh), who was blinded to the group allocation of the patients.
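To make the scoring concrete, here is a minimal Python sketch of the arithmetic described above; the item responses and the six remaining domain percentages are hypothetical, not study data.

```python
# Sketch of PDQ-39 scoring: each domain is the sum of its 0-4 item responses
# expressed as a percentage of the domain maximum; the PDSI is the mean of the
# eight domain percentages. All values below are hypothetical.

def domain_percentage(responses, max_per_item=4):
    return 100 * sum(responses) / (max_per_item * len(responses))

mob = domain_percentage([2, 1, 3, 2, 2, 1, 0, 2, 3, 1])  # Mobility has 10 items
adl = domain_percentage([1, 2, 2, 1, 0, 2])              # ADL has 6 items

domains = [mob, adl, 37.5, 25.0, 30.0, 27.5, 30.0, 42.5]  # other six assumed
pdsi = sum(domains) / len(domains)
print(f"MOB = {mob:.1f}%, ADL = {adl:.1f}%, PDSI = {pdsi:.1f}")
```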
Statistical Analysis
The scores of each item were described as mean ± SD. Between-group and within-group comparisons were made with independent-samples and paired-samples t-tests, respectively, and the chi-square test was used to compare categorical variables. To investigate the change in quality of life, each PDQ-39 item score and the PDSI were compared before and after the experiment within the control and experiment groups, by splitting the data into the two study groups; mean item scores were compared between groups using the independent-samples t-test. All analyses were performed using SPSS version 19.0 (IBM Corp., Armonk, N.Y., USA). Boxplot figures were drawn using MedCalc software, and figures of the change in questionnaire scores were produced with GraphPad Prism v6.0.7.
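For readers wishing to reproduce this style of analysis outside SPSS, the sketch below applies the same two tests in Python/SciPy to simulated (not study) data:

```python
# Illustrative between-group and within-group comparisons with SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exp_before = rng.normal(33.9, 6.2, 20)             # simulated experiment-group PDSI
exp_after = exp_before - rng.normal(2.0, 1.7, 20)  # simulated post-training scores
ctrl_before = rng.normal(35.5, 7.1, 20)            # simulated control-group PDSI

t_b, p_b = stats.ttest_ind(exp_before, ctrl_before)  # between groups (independent)
t_w, p_w = stats.ttest_rel(exp_before, exp_after)    # within group (paired)
print(f"between-group p = {p_b:.3f}; within-group p = {p_w:.4f}")
```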
Results
All 40 patients completed the training sessions over the 8 weeks. The primary assessment was made after the last MBSR session, at the patients' first neurologic clinic visit.
The general characteristics of the patients in each experiment and control group are shown in Table 1. The baseline characteristic data did not differ significantly between the two study groups.
As shown in Table 2, at baseline the PDQ-39 item scores did not differ significantly between the two study groups, except for the SOC score, which was significantly higher in control subjects than in the experiment group (35.80 ± 9.7 vs 29.11 ± 8.7, p = 0.02).
Quality of life assessment
The statistical analysis revealed a lower mean score for all PDQ-39 items in the experiment group compared with control subjects after the intervention; however, the difference was non-significant for MOB, ADL, EMO, STI, COG, COM, and BOD and significant only for SOC (34.13 ± 9.7 vs 26.19 ± 7.7 for the control and experiment groups, respectively; p = 0.007) (Table 2).
The within-group analysis, on the other hand, yielded a significant improvement in the experiment group: their mean PDSI score was 31.88 ± 6.5 after one month, compared with a baseline score of 33.93 ± 6.2 (p < 0.001). The mean scores of participants in the control group did not differ significantly from baseline.
A comparison of the delta values between the experiment and control groups showed significant differences for MOB, ADL, and EMO.
Classifying patients by HY stage revealed a significant improvement in the PDSI in experiment-group patients at the severe stage (III), although the individual PDQ-39 item scores (except ADL) did not differ significantly after the mindfulness training. The analysis also showed that patients at the milder stage (I) had significant improvement after the experiment; however, the same improvement was noted in the control group (Table 3).
Table 1. Patients' demographic data in each study group
Table 2. PDQ-39 item scores and PDSI before and after the mindfulness sessions in the control and experiment groups of patients

| Item | Group | Before (mean ± SD) | P* | After (mean ± SD) | P* | P ϯ | 95% CI € (within group) | Delta (mean ± SD) | P* (delta) | 95% CI € (delta) |
|---|---|---|---|---|---|---|---|---|---|---|
| MOB | Control | 47.87 ± 8.7 | 0.84 | 48.50 ± 8.4 | 0.62 | 0.26 | -1.7 – 0.5 | 0.62 ± 2.4 | 0.02 | 0.24 – 3.28 |
| | Experiment | 48.37 ± 6.8 | | 47.23 ± 7.5 | | 0.04 | 0.04 – 2.2 | -1.14 ± 2.3 | | |
| ADL | Control | 34.72 ± 12.3 | 0.90 | 34.95 ± 13.8 | 0.47 | 0.75 | -1.7 – 1.2 | 0.22 ± 3.2 | 0.004 | 1.11 – 5.60 |
| | Experiment | 35.17 ± 10.8 | | 32.04 ± 11.1 | | 0.002 | 1.3 – 4.9 | -3.13 ± 3.7 | | |
| EMO | Control | 37.41 ± 6.3 | 0.56 | 37.21 ± 7.0 | 0.43 | 0.83 | -1.8 – 2.2 | -0.23 ± 4.3 | 0.01 | 0.78 – 6.65 |
| | Experiment | 38.92 ± 9.4 | | 34.96 ± 10.4 | | 0.001 | 1.7 – 6.1 | -3.95 ± 4.7 | | |
| STI | Control | 25.94 ± 9.9 | 0.26 | 25.29 ± 10.2 | 0.26 | 0.47 | -1.2 – 2.5 | -0.65 ± 4.0 | 0.97 | -2.24 – 2.18 |
| | Experiment | 22.81 ± 7.3 | | 22.18 ± 6.8 | | 0.33 | -0.6 – 1.9 | -0.62 ± 2.7 | | |
| SOC | Control | 35.80 ± 9.7 | 0.02 | 34.13 ± 9.7 | 0.007 | 0.10 | -0.3 – 3.7 | -1.67 ± 4.3 | 0.40 | -1.71 – 4.20 |
| | Experiment | 29.11 ± 8.7 | | 26.19 ± 7.7 | | 0.01 | 0.6 – 5.2 | -2.91 ± 4.8 | | |
| COG | Control | 28.43 ± 7.4 | 0.69 | 28.75 ± 7.9 | 0.15 | 0.71 | -2.0 – 1.4 | 0.31 ± 3.7 | 0.09 | -0.47 – 5.47 |
| | Experiment | 27.60 ± 5.9 | | 25.41 ± 6.3 | | 0.08 | -0.3 – 4.7 | -2.18 ± 5.3 | | |
| COM | Control | 29.14 ± 9.9 | 0.47 | 29.97 ± 9.5 | 0.88 | 0.32 | -2.5 – 0.9 | 0.83 ± 3.7 | 0.08 | -0.36 – 5.35 |
| | Experiment | 31.21 ± 8.0 | | 29.54 ± 8.7 | | 0.16 | -0.7 – 4.0 | -1.66 ± 5.1 | | |
| BOD | Control | 44.71 ± 11.2 | 0.08 | 43.05 ± 11.2 | 0.12 | 0.04 | 0.6 – 3.2 | -1.66 ± 3.4 | 0.47 | -3.11 – 1.46 |
| | Experiment | 38.31 ± 11.9 | | 37.47 ± 10.9 | | 0.32 | -0.9 – 2.5 | -0.84 ± 3.7 | | |
| PDSI | Control | 35.50 ± 7.1 | 0.46 | 35.23 ± 7.5 | 0.14 | 0.29 | -0.2 – 0.8 | -0.27 ± 1.1 | <0.001 | 0.84 – 2.72 |
| | Experiment | 33.93 ± 6.2 | | 31.88 ± 6.5 | | <0.001 | 1.2 – 2.8 | -2.05 ± 1.7 | | |

Note: Abbreviations: Confidence Interval (CI), Mobility (MOB), Activities of daily living (ADL), Emotional well-being (EMO), Stigma (STI), Social support (SOC), Cognitions (COG), Communication (COM), Bodily discomfort (BOD), Parkinson's Disease Summary Index (PDSI). ϯ: P value for the difference before vs after the experiment within each group; *: P value for the difference between the mean scores of the experiment and control groups; €: 95% CI of the corresponding difference.
Table 3. Quality of life in patients at different stages of PD before and after the mindfulness sessions in the experiment and control groups

| Stage (HY) | PDQ-39 | Control: before | Control: after | P | 95% CI of difference | Experiment: before | Experiment: after | P | 95% CI of difference |
|---|---|---|---|---|---|---|---|---|---|
| I | PDSI | 25.93 ± 2.1 | 24.62 ± 2.0 | 0.03 | 0.31 – 2.30 | 26.60 ± 1.8 | 23.34 ± 0.9 | 0.009 | 1.55 – 4.95 |
| | MOB | 37.50 ± 0.0 | 37.50 ± 2.5 | 1.00 | -6.21 – 6.21 | 40.62 ± 1.2 | 38.12 ± 1.2 | | |
| | ADL | 23.61 ± 2.4 | 18.01 ± 2.4 | 0.06 | -0.42 – 11.62 | 23.93 ± 7.1 | 19.72 ± 6.2 | 0.09 | -1.24 – 9.66 |
| | EMO | 29.13 ± 4.1 | 29.13 ± 4.1 | | | 29.15 ± 7.6 | 22.87 ± 7.9 | 0.10 | -2.27 – 14.84 |
| | STI | 18.75 ± 10.8 | 14.58 ± 3.6 | 0.42 | -13.76 – 22.09 | 17.18 ± 5.9 | 17.19 ± 5.9 | 0.39 | -0.03 – 0.01 |
| | SOC | 22.20 ± 4.8 | 22.16 ± 9.6 | 0.99 | -20.70 – 20.77 | 22.80 ± 8.0 | 20.70 ± 8.4 | 0.39 | -4.58 – 8.78 |
| | COG | 20.83 ± 7.2 | 22.91 ± 9.5 | 0.42 | -11.04 – 6.88 | 27.07 ± 4.1 | 20.31 ± 5.9 | 0.08 | -1.51 – 15.04 |
| | COM | 24.96 ± 8.3 | 24.96 ± 8.3 | | | 27.07 ± 4.1 | 22.87 ± 7.9 | 0.18 | -3.51 – 11.91 |
| | BOD | 30.50 ± 12.7 | 27.73 ± 9.6 | 0.42 | -9.13 – 14.67 | 24.97 ± 6.8 | 24.97 ± 6.8 | | |
| II | PDSI | 32.02 ± 4.0 | 31.48 ± 3.8 | 0.21 | -0.38 – 1.45 | 31.36 ± 3.2 | 30.03 ± 4.0 | 0.12 | -0.48 – 3.14 |
| | MOB | 43.12 ± 4.5 | 44.37 ± 4.1 | 0.31 | -3.98 – 1.48 | 45.62 ± 4.9 | 44.65 ± 4.8 | 0.41 | -1.71 – 3.66 |
| | ADL | 27.59 ± 7.3 | 28.11 ± 6.9 | 0.35 | -1.75 – 0.71 | 34.34 ± 6.9 | 31.22 ± 8.6 | 0.11 | -0.91 – 7.16 |
| | EMO | 36.93 ± 6.0 | 36.88 ± 7.5 | 0.98 | -4.91 – 5.01 | 37.46 ± 6.3 | 32.77 ± 6.0 | 0.06 | -0.03 – 9.40 |
| | STI | 23.43 ± 9.8 | 22.61 ± 10.0 | 0.32 | -1.01 – 2.65 | 21.87 ± 7.4 | 21.09 ± 7.4 | 0.35 | -1.06 – 2.62 |
| | SOC | 33.30 ± 7.6 | 30.18 ± 6.1 | 0.08 | -0.47 – 6.70 | 27.05 ± 8.6 | 24.96 ± 7.7 | 0.17 | -1.14 – 5.31 |
| | COG | 25.78 ± 6.1 | 25.78 ± 7.0 | 1.00 | -2.79 – 2.79 | 24.21 ± 5.3 | 25.25 ± 6.2 | 0.35 | -3.49 – 1.41 |
| | COM | 23.93 ± 8.2 | 24.97 ± 6.3 | 0.34 | -3.49 – 1.41 | 27.06 ± 7.3 | 27.05 ± 8.6 | 0.99 | -5.24 – 5.27 |
| | BOD | 42.05 ± 8.9 | 38.92 ± 6.5 | 0.08 | -0.48 – 6.73 | 33.30 ± 7.6 | 33.30 ± 6.2 | 1.00 | -3.70 – 3.70 |
| III | PDSI | 41.79 ± 3.6 | 42.09 ± 3.6 | 0.43 | -1.14 – 0.53 | 40.18 ± 3.1 | 37.99 ± 3.3 | 0.001 | 1.19 – 3.18 |
| | MOB | 55.55 ± 5.6 | 55.83 ± 5.5 | 0.59 | -1.43 – 0.87 | 55.00 ± 2.9 | 54.37 ± 4.1 | 0.35 | -0.85 – 2.10 |
| | ADL | 44.77 ± 10.0 | 46.67 ± 10.2 | 0.03 | -3.63 – -0.16 | 41.64 ± 11.3 | 39.02 ± 10.1 | 0.04 | 0.02 – 5.20 |
| | EMO | 40.60 ± 4.5 | 40.18 ± 5.5 | 0.74 | -2.42 – 3.24 | 45.26 ± 8.7 | 43.20 ± 8.0 | 0.10 | -0.55 – 4.67 |
| | STI | 30.57 ± 8.5 | 31.23 ± 8.2 | 0.61 | -3.59 – 2.26 | 26.56 ± 6.4 | 25.78 ± 5.2 | 0.59 | -2.56 – 4.12 |
| | SOC | 42.55 ± 6.5 | 41.62 ± 5.9 | 0.34 | -1.21 – 3.08 | 34.32 ± 6.9 | 30.17 ± 6.1 | 0.10 | -1.09 – 9.39 |
| | COG | 33.33 ± 5.4 | 33.33 ± 6.2 | 0.99 | -3.39 – 3.39 | 31.24 ± 5.7 | 28.12 ± 5.7 | 0.17 | -1.70 – 7.94 |
| | COM | 35.15 ± 9.0 | 36.07 ± 9.2 | 0.59 | -4.75 – 2.91 | 37.42 ± 6.2 | 35.37 ± 5.8 | 0.17 | -1.12 – 5.22 |
| | BOD | 51.82 ± 6.9 | 51.82 ± 6.9 | | | 49.98 ± 4.4 | 47.88 ± 5.9 | 0.17 | -1.15 – 5.35 |

Note: Abbreviations: Confidence Interval (CI), Mobility (MOB), Activities of daily living (ADL), Emotional well-being (EMO), Stigma (STI), Social support (SOC), Cognitions (COG), Communication (COM), Bodily discomfort (BOD), Parkinson's Disease Summary Index (PDSI)
Discussion
In this eight-week clinical trial of people with Parkinson's disease, a significant improvement in quality of life was observed in patients who received mindfulness training compared with the control group.
Overall, the PDSI decreased modestly, by 2.05 points in the experiment group versus 0.27 points in the control group.
Among the PDQ-39 items, MOB, ADL, and EMO improved significantly in the experiment group compared with the control group. These results suggest that mindfulness training affects not only the motor symptoms of the disease but also the non-motor emotional wellbeing of patients. The largest effect of mindfulness training was on patients' daily activities, and this was also evident in severe cases of the disease.
To date, only a few trials have examined the effect of mindfulness training in PD 10-13. These have measured its effect on various motor and non-motor symptoms, but their outcomes have been inconsistent with respect to follow-up duration and which symptoms improved.
Similar to our findings, Geong son et al found a significant difference in the quality of life and ADL of 33 experiment patients who received mindfulness training compared with 30 control subjects 13. Other studies found mindfulness effective for a few subscales of the PDQ-39 11, 12.
In a clinical trial by Cash et al, 39 patients were enrolled in 8-week mindfulness sessions, and their EMO and COG improved after the experiment 11. In a similar study by Advocat et al, the effect of mindfulness training on quality of life in 35 PD patients was compared with 37 control subjects at seven weeks and six months; in this two-step analysis, ADL was the only factor that improved in the experiment group 14.
In contrast, Dissanayaka et al examined the effect of mindfulness in fourteen PD patients in an 8-week training programme and compared the results with baseline at post-intervention and at 6-month follow-up 15. Their results did not show a significant improvement in any quality-of-life subscale at either the primary or secondary evaluation. Similarly, non-significant results were reported by Rodgers et al and Pickut et al 12.
Birtwell et al assessed the longer-term efficacy (16 weeks) of mindfulness training on STI and EMO in thirteen individuals with PD and found no significant change in these two PDQ-39 subscales 16.
In the present study, EMO and ADL were the most responsive to the short-term effect of mindfulness training. The results of Rodgers et al's study were consistent with our primary outcome: their between-group analysis revealed a significant difference in the depression subscale of the DASS-21 after mindfulness intervention in PD patients 17. Cash et al also found that depression improved after mindfulness interventions in PD patients 11.
Contrary to our findings, the difference between PD experiment and control subjects was not significant in Pickut et al's study 12. COG was unaffected by mindfulness training in our study, a finding supported by the clinical trial of Cash et al, who found no significant change in PD patients' cognitive function at the immediate post-intervention assessment 11.
On the other hand, Dissanayaka et al found post-intervention improvement in PD patients' cognition on the PD Cognitive Rating Scale (PDCRS), sustained over six months 15.
Similarly, Geong son et al showed a significant difference in the mean Korean Montreal Cognitive Assessment score between experiment patients who received mindfulness training and controls 13.
As described above, results regarding the role of mindfulness-based stress reduction sessions in the quality of life of PD patients are discrepant, and ours were consistent with some studies and contrary to others. The main factors that might explain these differences are sample size, the inclusion of a control group, subjective mood changes in the patients, the severity of the disease, and the likelihood of practicing the learned lessons at home.
Mindfulness-based interventions aim to improve individuals' current wellbeing through self-awareness of present emotions and body movements. They might also help individuals to manage daily stress, judge themselves more accurately, and adjust to daily life. There is also evidence suggesting that mindfulness training leads to neuroplasticity in the brain areas involved in emotion 18.
Studies have also suggested that early therapeutic interventions are more practical in terms of diminishing the probable future severity of the disease 13, 18. In our study, patients at the early stage had improvement in their overall quality of life, but this was also noted in controls at the same stage. A meaningful change in quality of life in patients at the severe stage of PD, however, was recorded only after the training sessions. We suggest long-term follow-up of the patients in each group, and at different stages of the disease, to determine whether mindfulness training helps slow the progression of the disease.
This was a pilot study in which MBSR showed a considerable impact on improving the quality of life of PD patients. However, its limitations must be considered. First, the sample size was small relative to the prevalence of the disease and was constrained by factors such as disease severity and level of education: patients needed a minimum level of education to be able to attend the sessions and apply them in their routine life. Second, the psychological nature of the intervention meant that patients could not be blinded to it.
We did not perform an intention-to-treat analysis or crossover randomization, as all the randomly selected patients completed the trial and none dropped out.
Conclusion
In our study, mindfulness training improved the overall quality of life of PD patients. However, long-term follow-up in a larger population is required to evaluate the impact of mindfulness-based stress reduction on each item.
Convex-probe EBUS-TBNA has been a major development in respiratory medicine. Over the last decade, numerous articles have supported its high diagnostic accuracy in the diagnosis of lung cancer, the staging of lung cancer, and the diagnosis of extra-thoracic malignancies and benign conditions (e.g., TB and sarcoidosis)1. The patients included in this study reflect the real-life referrals that we see as respiratory physicians in our daily practice, and they show that the trend of performing EBUS-TBNA for non-cancer indications is rising. Lung cancer is a common cause of cancer death worldwide2. Various guidelines (including NICE) have found the procedure safe and recommend it for the staging of lung cancer. Over the last 10 years, many district general hospitals in the UK have started this service, which is mainly delivered by respiratory physicians.
This has provided a specialist service for patients in their local area, which has reduced travelling and waiting times.
Setting & Methods
In the district general hospital under discussion, the EBUS service was set up in 2018 under the supervision of a tertiary care centre. We carried out 82 procedures during the first year of the service, all of which were reviewed for this article. Data were recorded on an Excel spreadsheet (number of cases, age, gender, lymph node stations sampled, complications, and pathology and microbiology results of EBUS-TBNA). A minimum of 4 passes was made at each lymph node station. Where EBUS was done for diagnostic purposes, the stations to be sampled were at the discretion of the operator. Samples obtained via EBUS-TBNA were flushed into CytoLyt (a methanol-water solution). EBUS-TBNAs were carried out in the absence of rapid on-site evaluation (ROSE). Where a cancer was suspected but EBUS-TBNA showed normal findings, samples were obtained via other modalities (e.g., CT-guided biopsy) and FDG-PET was carried out as well (if not done already). In cases of isolated mediastinal and hilar lymphadenopathy (IMHL) where EBUS-TBNA did not reveal any pathology, interval surveillance CTs were carried out for monitoring; where the lymphadenopathy did not resolve, surveillance scans were continued for a year. The outcomes of these surveillance CTs and PET-CTs were also reviewed for this study. A diagnosis of reactive lymphadenopathy was made if the EBUS-TBNA sample did not reveal any pathology, repeat CT showed no change (or reduction/resolution of the lymphadenopathy), and the clinician did not consider the patient to have another diagnosis. EBUS-TBNA was labelled false negative if the pathology result was negative but the node was PET-positive (in suspected cancer patients).
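The labelling rules in this paragraph amount to a simple decision procedure; the following Python sketch restates them (the function and parameter names are ours, for illustration only):

```python
# Sketch of the case-labelling rules described above (our naming, not the authors').
def label_ebus_result(pathology_positive: bool, cancer_suspected: bool,
                      pet_node_positive: bool, ct_stable_or_resolving: bool,
                      other_diagnosis_suspected: bool) -> str:
    if pathology_positive:
        return "pathological diagnosis"
    if cancer_suspected and pet_node_positive:
        return "false negative"          # negative pathology but PET-positive node
    if ct_stable_or_resolving and not other_diagnosis_suspected:
        return "reactive lymphadenopathy"
    return "continue interval CT surveillance"

print(label_ebus_result(False, False, False, True, False))  # reactive lymphadenopathy
```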
Results
Of the 82 patients who underwent EBUS-TBNA, 55 (about 67%) were male and 27 (about 33%) were female (Figure 1).
The age range of patients at the time of the procedure was 28 to 88 years, with the majority aged 52-88 years (80% of cases) (Figure 2).
The 82 EBUS-TBNA procedures were carried out for the following main reasons (Figure 3):
A. 42 procedures for cancer reasons (i.e. 51% of the total):
  a. diagnosis of lung cancer (38 procedures)
  b. diagnosis of suspected extra-thoracic cancer (3 cases)
  c. staging of lung cancer (1 case)
B. 40 procedures for IMHL (i.e. 49%)
The final diagnoses in the 38 procedures carried out for “diagnosis of lung cancer” were as follows:
1. 25 patients were diagnosed with lung cancer (12 squamous cell cancers, 7 adenocarcinomas, 4 small cell cancers, 1 undifferentiated lung cancer and 1 neuroendocrine tumour)
2. The final diagnosis in 9 cases was reactive lymphadenopathy (repeat CT showed resolution of lymph nodes in 3 cases, reduction in size in 1 case and stable nodes in 5 cases)
3. Extra-thoracic malignancies were diagnosed in 2 cases (1 metastatic prostate cancer; the 2nd was metastatic disease from a primary parotid gland tumour)
4. There were false negative results in 2 cases (1 patient was diagnosed with small cell lung cancer on CT biopsy and the 2nd with adenocarcinoma on ultrasound-guided biopsy)
Thus, in 11 cases where the clinician's initial suspicion was a possible lung cancer, the final diagnoses were reactive lymphadenopathy or extra-thoracic malignancy.
Some of these patients also had lung nodules (along with mediastinal and hilar lymphadenopathy); these nodules either resolved or remained stable. In the case of metastatic prostate cancer, a prior MRI had shown prostate-confined disease and the clinician suspected the size-significant lymphadenopathy to be due to a lung primary. In the case of the metastatic parotid tumour, the initial diagnosis of parotid cancer had been made a very long time before, and metastatic disease was not expected.
The final diagnoses in the 3 patients who had EBUS-TBNA for “extra-thoracic malignancies” were as follows:
1. Prostate cancer (pelvic MRI showed locally advanced disease)
2. Colon cancer (known colon cancer)
3. Ovarian cancer (the patient had an ovarian mass and abdominal/pelvic lymphadenopathy)
As most surgical patients go directly from this hospital to tertiary care centres, we did not have many patients for staging purposes during the first year of the service; there was only 1 “staging EBUS-TBNA” during this time. In this case, stations 4L, 7 and 12L were sampled. Only station 12L was PET-positive, and it was also positive on the EBUS-TBNA sample; stations 4L and 7 were negative on both PET and EBUS-TBNA. No size-significant nodes were seen on the staging CT in any other area, only the 12L node was PET-avid, and we did not identify size-significant lymphadenopathy at any other station via EBUS. Sensitivity in this staging EBUS was 100%.
In these 42 diagnostic and staging procedures (carried out for cancers or suspected cancers), the pathological diagnoses from the lymph node aspirates were as follows:
1. Squamous cell carcinoma of lung: 13 (approximately 31%)
2. Adenocarcinoma of lung origin: 7 (approximately 17%)
3. Small cell lung cancer: 4 (approximately 9.5%)
4. Neuroendocrine tumour of lung origin: 1 (approximately 2.3%)
5. Undifferentiated lung cancer: 1 (approximately 2.3%)
6. Metastatic prostate cancer: 2 (approximately 4.75%)
7. Metastatic parotid gland cancer: 1 (approximately 2.3%)
8. Metastatic ovarian cancer: 1 (approximately 2.3%)
9. Metastatic colon cancer: 1 (approximately 2.3%)
10. False negative: 2 (approximately 4.75%)
11. Reactive lymphadenopathy: 9 (approximately 21.5%)
Of the 40 procedures for IMHL, we were unable to obtain an adequate sample in 1 case; this patient underwent repeat EBUS-TBNA, and the repeat sample showed granulomas, consistent with the clinical diagnosis of sarcoidosis. The final diagnoses in these 40 cases were as follows:
1. Metastatic adenocarcinoma of pancreaticobiliary origin: 1 (2.5%)
2. Bronchogenic cyst: 1 (2.5%)
3. Insufficient sample: 1 (2.5%)
4. Tuberculosis: 3 (7.5%)
5. Granulomas: 16 (40%)
6. Reactive lymphadenopathy: 18 (45%)
Serious diagnoses were made in 10% of IMHL cases (4 out of 40). One patient had metastatic adenocarcinoma of pancreaticobiliary origin without any abdominal symptoms or abnormalities on abdominal CT. Three patients were diagnosed with, and later treated for, active tuberculosis; of these, only 1 had features of active disease but was sputum-negative, and the other 2 had only mediastinal lymphadenopathy, with no lung infiltrates and no sputum production.
A total of 122 lymph nodes were sampled. Details are as follows (figure 4):
| Lymph node station | Times sampled | % |
|---|---|---|
| Station 7 | 65 | 53.3 |
| 4R | 18 | 14.8 |
| 11R | 15 | 12.3 |
| 11L | 11 | 9 |
| 10R | 4 | 3.2 |
| 4L | 3 | 2.5 |
| 2R | 2 | 1.6 |
| 10L | 2 | 1.6 |
| 12R | 2 | 1.6 |
| 2L | 0 | 0 |
| 12L | 0 | 0 |
The most commonly sampled nodes were station 7 nodes, which is consistent with the international literature published on EBUS-TBNA.
There were no complications from the procedures performed. None of our patients experienced significant airway bleeding (requiring admission or blood transfusion), mediastinal infection, pneumothorax, pneumo-mediastinum, haemo-mediastinum or airway lacerations.
Discussion
EBUS-TBNA is one method of accessing the mediastinal and hilar lymph nodes, and a minimally invasive way to obtain samples from them. Several invasive, minimally invasive and non-invasive techniques are available to diagnose and stage lung cancer; the choice depends upon the extent of the disease. About 50% of lung cancer patients have evidence of metastatic disease at the time of presentation 3, and patients with intrathoracic disease undergo several investigations. We now know that EBUS-TBNA should be considered the initial investigation for patients with suspected early-stage lung cancer 4. Research has shown EBUS-TBNA to have a sensitivity of 90% 5. A recent national BTS audit on bronchoscopy and EBUS showed a national diagnostic sensitivity of 90% for staging EBUS-TBNA, and the BTS quality standards statement sets a target of 88% sensitivity for staging EBUS-TBNA 6. As far as diagnostic EBUS-TBNA is concerned, we had 2 false negative results out of 41 procedures (4.8%), giving a sensitivity of 95.2% for diagnostic procedures.
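The sensitivity figure quoted above follows from the standard definition; the sketch below shows the arithmetic, assuming (as the 95.2% figure implies, though it is not stated explicitly) 40 true positives alongside the 2 false negatives.

```python
# Sensitivity of a diagnostic test: true positives / (true positives + false negatives).
true_positives = 40   # assumed from the reported 95.2% figure
false_negatives = 2   # reported in the text

sensitivity = true_positives / (true_positives + false_negatives)
print(f"sensitivity = {sensitivity:.1%}")  # -> 95.2%
```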
There is significant evidence that ROSE does not increase the diagnostic yield even of conventional TBNA 7: Trisolini et al demonstrated in a randomised controlled trial that ROSE gave no significant diagnostic advantage and did not affect the percentage of adequate specimens. Articles have also shown that ROSE does not reduce EBUS-TBNA procedure time 8. The use of immunohistochemistry on EBUS-TBNA samples reduces the rate of unclassified non-small cell lung cancer compared with cytological diagnosis alone 9, and EBUS-TBNA samples are sufficient to allow immunohistochemical and molecular analysis; at our centre we were able to obtain ALK, EGFR and PDL1 testing on EBUS-TBNA samples where indicated. The presence of a cytopathologist or cytotechnologist during the procedure for ROSE can increase costs significantly, which can have a major impact on starting the service at a district general hospital. Another issue needing clarification is the number of passes required before material is declared inadequate when using ROSE: studies have shown that a significant number of samples deemed inadequate on ROSE still yielded a diagnosis with the help of immunohistochemical analysis.
Here, 40 EBUS-TBNA procedures were carried out for IMHL. In this group, one patient was unfortunately diagnosed with an unexpected malignancy, i.e., metastatic adenocarcinoma of pancreaticobiliary origin; the remaining cases had benign diagnoses. About 45% of IMHL cases were diagnosed as reactive lymphadenopathy, and of the total 82 cases, about 33% received this diagnosis. We made the diagnosis of reactive lymphadenopathy where EBUS samples showed normal lymphocytes; these patients also had surveillance CTs and clinical follow-up, and the clinicians' impressions and surveillance scans were reviewed for the purpose of this diagnosis. In the IMHL group, 40% of cases were diagnosed with sarcoidosis; in these cases, in addition to the clinicians' impressions, we reviewed cytology, microbiology and surveillance CT reports. The specimen-processing method affects the yield for granulomas: cell-block preparation, as carried out in this hospital, has shown a higher yield for granulomas 10.
During the first year of the EBUS service at this centre, no patient with suspected or diagnosed lymphoma underwent the procedure. International data suggest that, for the diagnosis of lymphoma, EBUS-TBNA aspirates should be sent for cytopathology, immunohistochemistry, flow cytometry, cytogenetics and molecular studies 11,12,13.
Conclusion
EBUS-TBNA is a safe and minimally invasive procedure and a first-line investigation for lung cancer staging. It has been effective in diagnosing extra-pulmonary malignancies 14, and in the last decade its utility has also increased significantly in diagnosing benign conditions such as sarcoidosis and TB.
We feel that operator training is also very important in achieving excellent results. Mastering this complex procedure is time-consuming, and standardised training is mandatory to achieve high skill levels 15; we hope a standardised approach to this will be adopted in future.
According to DSM-5, delirium is defined as a disturbance in attention (i.e., reduced ability to direct, focus, sustain, and shift attention) and awareness (reduced orientation to the environment). This disturbance develops over a short period of time, represents an acute change from baseline attention and awareness, and tends to fluctuate in severity during the course of a day.
The focus of researchers has shifted from treatment to prevention of the syndrome, and risk factors need to be studied if delirium is to be prevented 1. Data on delirium in the intensive care unit are scarce in the Indian subcontinent 2.
A multicentre study indicated that the risk factors contributing significantly to delirium related to patient characteristics (smoking, daily use of more than 3 units of alcohol, living alone at home), chronic pathology (pre-existing cognitive impairment), acute illness (use of drains, tubes and catheters, use of psychoactive medication, a preceding period of sedation, coma, mechanical ventilation) and the environment (isolation, absence of visits, absence of visible daylight, transfer from another ward, use of physical restraints) 1. Psychoactive medications can provoke a delirious state; lorazepam has an independent and dose-related temporal association with delirium 3.
Each additional day spent in delirium is associated with a 20% increased risk of prolonged hospitalisation and a 10% increased risk of death 4.
Hence, the present study was done to assess risk factors and precipitating factors of delirium in a medical intensive care unit of a tertiary care hospital.
Materials and methods:
This observational study was conducted over a period of 1 year in a tertiary care medical college hospital in southern India. Approval for the study was obtained from the institutional ethics committee.
All patients admitted to the medical intensive care unit of our tertiary care hospital were screened for delirium during the first 72 hours of admission using the Richmond Agitation-Sedation Scale (RASS) and the Confusion Assessment Method for the ICU (CAM-ICU). Patients with delirium were classified as delirious and the remainder as non-delirious. Comatose patients (RASS score -4 or -5) were excluded from the study.
Patients were initially screened with the RASS, a 10-point scale with 4 levels of agitation (+1 to +4) and 5 levels of sedation (-1 to -5); level zero indicates a calm and alert patient. Patients with a RASS score of -4 or -5 (deeply sedated or unarousable) were excluded from the study. Patients with RASS scores of +4 to -3 were then screened for delirium using the CAM-ICU, which has 4 criteria:
1) Acute onset and fluctuating course of delirium
2) Inattention
3) Disorganized thinking
4) Altered level of consciousness
The diagnosis of delirium requires the presence of criteria 1 and 2 together with either criterion 3 or criterion 4; the sketch below encodes this rule.
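As an illustration, here is a minimal Python sketch of the screening logic just described (the function names are ours, not part of the CAM-ICU instrument):

```python
# Sketch of the RASS/CAM-ICU screening logic described above.

def eligible_for_cam_icu(rass: int) -> bool:
    """Deeply sedated or unarousable patients (RASS -4 or -5) are excluded;
    patients scoring +4 to -3 go on to CAM-ICU assessment."""
    return -3 <= rass <= 4

def cam_icu_delirium(acute_fluctuating: bool, inattention: bool,
                     disorganized_thinking: bool, altered_consciousness: bool) -> bool:
    """Delirium requires criteria 1 and 2, plus either criterion 3 or criterion 4."""
    return (acute_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: an arousable patient with acute fluctuating inattention and
# altered consciousness screens positive for delirium.
if eligible_for_cam_icu(rass=-1):
    print(cam_icu_delirium(True, True, False, True))  # -> True
```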
Risk factors for developing delirium were assessed in the study population. Risk factors are established factors, which may be present before the patient's admission to the intensive care unit, that predispose the patient to develop delirium; they were compared between delirious and non-delirious patients. The risk factors assessed were: history of diabetes and hypertension; previous stroke; previous cognitive impairment; previous psychiatric illness; previous trauma; previous episodes of delirium; bowel and bladder disturbances prior to admission (such as constipation and urinary retention, respectively); alcohol abuse (consumption of more than 2 units per day); smoking (more than 10 cigarettes per day); consumption of substances other than cigarettes and alcohol (such as cannabis or cocaine); uncorrected visual or hearing disturbances before admission; use of barbiturates (such as phenobarbital), benzodiazepines (such as alprazolam, chlordiazepoxide, clobazam, clonazepam) and opioids (such as morphine) before admission; and use of sedatives (such as haloperidol, midazolam, fentanyl) and painkillers (such as morphine, tramadol) at the time of admission. The metabolic risk factors compared between delirious and non-delirious subjects were uraemia, hyponatremia, hyperbilirubinemia, and metabolic and respiratory acidosis.
Precipitating factors were defined as factors that were the likely causes of delirium in delirious patients. The precipitating factors examined were exposure to toxins (alcohol/drugs), deranged metabolic parameters, infections, and central nervous system causes.
Statistical analyses were performed using SPSS version 21. The independent-samples t-test and the Pearson chi-square test were used to assess differences between delirious and non-delirious subjects. Odds ratios (OR) were calculated for all factors using univariate binary logistic regression.
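For a single binary exposure, the univariate logistic-regression odds ratio coincides with the cross-product ratio of the 2×2 table, so the reported ORs can be checked by hand. A Python sketch using the alcohol counts from Table 1:

```python
# Univariate odds ratio and Wald 95% CI from a 2x2 table (alcohol, Table 1).
import math

a, b = 70, 336     # delirium: exposed, unexposed
c, d = 87, 1089    # no delirium: exposed, unexposed

or_ = (a * d) / (b * c)                          # cross-product (odds) ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
ci_low = math.exp(math.log(or_) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# -> OR = 2.61 (95% CI 1.86-3.65), matching Table 1's 2.6 (1.8-3.7)
```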
Results:
A total of 1582 patients were enrolled in the study, of whom 406 were diagnosed with delirium; thus 25.7% of patients developed delirium within the first 72 hours of admission. Hypoactive delirium was present in 52% and hyperactive delirium in 48% of these patients. Patients who experienced delirium (57.5 ± 17 years) were older than their non-delirious counterparts (53.3 ± 18.1 years) (p < 0.0001). Among delirious subjects, the majority were in the age group of 61-70 years (Figure 1).
Figure 1- Age distribution among delirious patients
38.2% of delirious patients and 39.3% of non-delirious patients were female; 61.8% and 60.7%, respectively, were male.
Alcohol consumption (OR = 6.54, 95% CI 3.76-11.4, p = 0.0001), previous psychiatric illness (OR = 3.73, 95% CI 1.712-8.159, p = 0.033), previous cognitive impairment (OR = 2.739, 95% CI 1.509-4.972, p = 0.001), sedative use at the time of admission (OR = 2.488, 95% CI 1.452-4.264, p = 0.001), visual disturbances (OR = 2.227, 95% CI 1.328-3.733, p = 0.002) and bowel and bladder disturbances (OR = 1.677, 95% CI 1.044-2.693, p = 0.032) were significant risk factors for delirium on univariate analysis (Table 1). Metabolic acidosis (OR = 1.996, 95% CI 1.469-2.711, p = 0.0001) and hyperbilirubinemia (OR = 1.448, 95% CI 1.111-1.886, p = 0.006) were significant metabolic parameters contributing to delirium on univariate analysis (Table 2).
Precipitating factors (Table 3) for delirium are those factors that were considered the most likely causes of delirium among the delirious patients. Precipitating factors for delirium were classified into toxins, deranged metabolic parameters, infections and central nervous system causes, of which metabolic parameters were most common. Among metabolic parameters, uraemia (25.1%), hepatic encephalopathy (22.7%) and hyponatremia (19.5%) contributed to the majority of cases with delirium.
Table 1 – Univariate analysis of risk factors of delirium

| Risk factor | | No delirium, n (%) | Delirium, n (%) | P | Univariate OR (95% CI) |
|---|---|---|---|---|---|
| Diabetes | No | 729 (62) | 226 (55.7) | .025 | 1.3 (1.1-1.6) |
| | Yes | 447 (38) | 180 (44.3) | | |
| Hypertension | No | 684 (58.2) | 239 (58.9) | .8 | 0.97 (0.8-1.2) |
| | Yes | 492 (41.8) | 167 (41.1) | | |
| History of stroke | No | 1107 (94.1) | 379 (93.3) | .6 | 1.14 (0.7-1.8) |
| | Yes | 69 (5.9) | 27 (6.7) | | |
| Previous memory disturbances | No | 1149 (97.7) | 264 (89.7) | <.0001 | 4.9 (2.9-8) |
| | Yes | 27 (2.3) | 42 (10.3) | | |
| Previous psychiatric illness | No | 1161 (98.7) | 386 (95.1) | <.0001 | 4 (2-7.9) |
| | Yes | 15 (1.3) | 20 (4.9) | | |
| Trauma | No | 1137 (96.7) | 396 (97.8) | .3 | 0.6 (0.3-1.3) |
| | Yes | 39 (3.3) | 9 (2.2) | | |
| Previous episodes of delirium | No | 1155 (98.2) | 402 (99) | .3 | 0.55 (0.2-1.6) |
| | Yes | 21 (1.8) | 4 (1.0) | | |
| Bowel & bladder disturbances | No | 1107 (94.1) | 350 (86.2) | <.0001 | 2.6 (1.8-3.7) |
| | Yes | 69 (5.9) | 56 (13.8) | | |
| Alcohol | No | 1089 (92.6) | 336 (82.8) | <.0001 | 2.6 (1.8-3.7) |
| | Yes | 87 (7.4) | 70 (17.2) | | |
| Smoking | No | 981 (83.4) | 354 (87.2) | .07 | 0.7 (0.5-1.03) |
| | Yes | 195 (16.6) | 52 (12.8) | | |
| Other substance abuse (apart from cigarettes and alcohol) | No | 1071 (91.1) | 391 (96.3) | .001 | 0.4 (0.22-0.6) |
| | Yes | 105 (8.9) | 15 (3.7) | | |
| Visual disturbances | No | 1062 (90.3) | 298 (73.4) | <.0001 | 3.4 (2.5-4.5) |
| | Yes | 114 (9.7) | 108 (26.6) | | |
| Hearing disturbances | No | 1104 (93.9) | 338 (83.3) | <.0001 | 3.1 (2.2-4.4) |
| | Yes | 72 (6.1) | 68 (16.7) | | |
| Barbiturates | No | 1155 (98.2) | 401 (98.8) | .5 | 0.7 (0.3-1.8) |
| | Yes | 21 (1.8) | 5 (1.2) | | |
| Benzodiazepines | No | 1155 (98.2) | 400 (98.5) | .7 | 0.8 (0.3-2.1) |
| | Yes | 21 (1.8) | 6 (1.5) | | |
| Opioids | No | 1176 (100) | 405 (99.8) | .9 | 4.7 (0-Inf) |
| | Yes | 0 (0) | 1 (0.2) | | |
| Sedatives usage in present admission | No | 1143 (97.2) | 369 (90.9) | <.0001 | 3.5 (2.1-5.6) |
| | Yes | 33 (2.8) | 37 (9.1) | | |
| Pain killers usage in present admission | No | 1080 (91.8) | 400 (98.5) | <.0001 | 0.17 (0.07-0.39) |
| | Yes | 96 (8.2) | 6 (1.5) | | |
Table 2 – Univariate analysis of metabolic parameters

| Parameter | | No delirium, n (%) | Delirium, n (%) | P | Univariate OR (95% CI) |
|---|---|---|---|---|---|
| Uraemia | No | 648 (55.1) | 186 (45.8) | 0.001 | 1.45 (1.2-1.8) |
| | Yes | 528 (44.9) | 220 (54.2) | | |
| Hyponatremia | No | 645 (54.8) | 202 (49.8) | 0.08 | 1.2 (0.98-1.5) |
| | Yes | 531 (45.2) | 204 (50.2) | | |
| Hyperbilirubinemia | No | 837 (71.2) | 246 (60.7) | <0.0001 | 1.6 (1.3-2) |
| | Yes | 339 (28.8) | 159 (39.3) | | |
| Metabolic acidosis | No | 990 (84.2) | 286 (70.4) | <0.0001 | 2.2 (1.7-2.9) |
| | Yes | 186 (15.8) | 120 (29.6) | | |
| Respiratory acidosis | No | 1092 (92.9) | 377 (92.9) | 1 | 1 (0.6-1.5) |
| | Yes | 84 (7.1) | 29 (7.1) | | |
Table 3 – Precipitating factors of delirium in the present study

| Category | Precipitating factor | % |
|---|---|---|
| Toxins | Drug or alcohol overdosage | 1.5 |
| | Alcohol withdrawal | 2.7 |
| Metabolic conditions | Hyponatremia | 19.5 |
| | Hyperglycaemia | 6.2 |
| | Hypoglycaemia | 2.5 |
| | Hypercarbia | 5.7 |
| | Uraemia | 25.1 |
| | Hepatic encephalopathy (hyperammonemia) | 22.7 |
| Infections | Systemic infective causes | 16.5 |
| | Meningitis/encephalitis | 8.9 |
| Central nervous system causes | Hypoperfusion states | 14.5 |
| | Hypertensive encephalopathy | 5.9 |
| | Cerebrovascular accident (CVA) | 7.6 |
| | Intracranial space-occupying lesion (ICSOL) | 5.4 |
| | Seizures | 10.3 |
| Psychiatric illness | | 4.9 |
Discussion:
Delirium is classified into hyperactive, hypoactive and mixed types. The hyperactive subtype is present if there is definite evidence in the previous 24 hours of at least two of the following: increased quantity of motor activity, loss of control of activity, restlessness, and wandering. The hypoactive subtype is present if there is definite evidence in the previous 24 hours of at least two of the following: decreased amount of activity, decreased speed of actions, reduced awareness of surroundings, decreased amount of speech, decreased speed of speech, listlessness, reduced alertness, and withdrawal. The mixed subtype is present if there is evidence of both hyperactive and hypoactive subtypes in the previous 24 hours 5. The percentage of patients with hypoactive delirium was high in this study (52%). Hypoactive delirium often carries a relatively poor prognosis, occurs more commonly in elderly patients, and is frequently overlooked or misdiagnosed as depression or a form of dementia.
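A minimal Python sketch of this subtype rule follows; the threshold of two features comes from the description above, and the feature lists are abbreviated for brevity.

```python
# Sketch of the delirium motor-subtype rule described above (feature lists abbreviated).
HYPERACTIVE_FEATURES = {"increased motor activity", "loss of control of activity",
                        "restlessness", "wandering"}
HYPOACTIVE_FEATURES = {"decreased activity", "decreased speed of actions",
                       "reduced awareness", "decreased speech", "listlessness",
                       "reduced alertness", "withdrawal"}

def motor_subtype(features_last_24h: set) -> str:
    hyper = len(features_last_24h & HYPERACTIVE_FEATURES) >= 2
    hypo = len(features_last_24h & HYPOACTIVE_FEATURES) >= 2
    if hyper and hypo:
        return "mixed"
    if hyper:
        return "hyperactive"
    if hypo:
        return "hypoactive"
    return "no motor subtype"

print(motor_subtype({"restlessness", "wandering", "listlessness"}))  # -> hyperactive
```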
In the present study, delirium was more prevalent in the elderly population. Most elderly patients have multiple risk factors that make them more vulnerable to delirium. Delirium is often the only sign of a serious underlying medical illness in an elderly patient, and particular attention should be given to identifying and correcting the underlying illness.
A history of alcohol consumption of more than 2 units per day prior to admission was the major risk factor contributing to delirium in this study (OR = 6.54). This is similar to the studies by Bart et al 1 and Ouimet et al 6, in which consumption of more than 3 units (OR = 3.23) and more than 2 units (OR = 2.03) of alcohol, respectively, was a significant risk factor for delirium. Patients with a previous psychiatric illness were at increased risk of delirium in this study (OR = 3.73); however, other studies examining its contribution to delirium were not available. Previous cognitive impairment was a significant risk factor contributing to delirium (OR = 2.73); Bart et al 1 found that previously diagnosed dementia was an important risk factor (OR = 2.41), and a positive correlation with dementia was reported by McNicoll et al 7 (RR 1.4) and Pisani et al 8 (OR 6.3). Use of sedatives at the time of admission was a significant risk factor for developing delirium (OR = 2.48). Bart et al 1 found that psychoactive medication may disturb neurotransmission in the brain, provoking a delirious state, and that use of benzodiazepines is a risk factor for delirium (OR = 3.34). Pandharipande et al 3 found that lorazepam was an independent risk factor for the daily transition to delirium (OR = 1.2), and Pisani et al 8 found benzodiazepine use to be a significant risk factor for developing delirium (OR = 3.4). Uncorrected visual disturbances were a significant risk factor in this study (OR = 2.22); Inouye et al 9 found vision impairment to be an independent baseline risk factor for delirium (adjusted relative risk 3.5). Bowel and bladder disturbances were also a significant risk factor (OR = 1.67). Morley 10 opined that constipation is a frequent, often overlooked precipitating factor for delirium, and Tony et al 11 suggested that a careful history and physical examination, including a rectal examination with consideration of disimpaction, may help in assessing and managing delirious patients. Waardenburg 12 concluded that significant urinary retention can precipitate or exacerbate delirium, a disorder referred to as the cystocerebral syndrome; Liem and Carter 13 suggested that the increased sympathetic tone and catecholamine surge triggered by tension on the bladder wall may contribute to delirium. Metabolic acidosis and hyperbilirubinemia were the significant metabolic parameters contributing to delirium in this study; similar findings were reported by Aldemir et al 14.
Among delirious patients, the most common precipitating factors for delirium in this study were uraemia (25.1%), hepatic encephalopathy (22.7%) and hyponatremia (19.5%). Alterations in serum electrolytes and renal function predispose to delirium 15. Hyponatremia causes delirium, although the mechanism is not well understood 16, 17. A blood urea nitrogen/creatinine ratio greater than 18 is an independent risk factor for delirium in general medical patients 9. Hepatic failure leads to hyperammonemia, which causes excessive NMDA (N-methyl-D-aspartate) receptor activation, resulting in dysfunction of the glutamate-nitric oxide-cGMP pathway and the impaired cognitive function of hepatic encephalopathy 18. Excess activation of NMDA receptors results in neuronal degeneration and death 19. In hepatic failure, there may also be a shift in regional cerebral blood flow and cerebral metabolic rates from the cortex to the subcortex, resulting in delirium 20.
Patients who develop delirium during their hospital stay have higher 6-month mortality rates, longer hospital stays, a greater economic burden and a higher incidence of cognitive impairment at hospital discharge 21. A limitation of this study is that long-term follow-up of patients who developed delirium was not done.
Conclusion:
Delirium is common in intensive care unit patients, and hypoactive delirium is the more common subtype. The major risk factor contributing to delirium was alcohol consumption before admission, and the most common precipitating factors were deranged metabolic parameters.
Delirium in ICU patients, especially hypoactive delirium, is easily missed. Hence, all ICUs should implement both the RASS and the CAM-ICU for early detection of delirium. Future research should be directed at the development of scoring systems for the detection of delirium that are both easy to use and accurate.
A family carer or caregiver is someone who regularly gives a substantial amount of unpaid care and support to a relative, partner or friend. Currently, there are over 850,000 people living with dementia in the UK, of whom two thirds are looked after in the community by primary carers, and the demands on individuals and families are set to increase1. Without the work of unpaid family carers, the formal care system would be likely to collapse.
Many people in the UK still do not feel comfortable talking about dementia, especially with their own families. A recent survey of more than 2,100 carers, of whom 17% cared for a person with dementia, found that 75% of carers were not prepared for all aspects of caring, nor for the emotional impact, lifestyle or relationship changes of their caring role2. Failure to prepare and support carers in their role not only affects their own personal health and wellbeing, but can also lead to the early and potentially avoidable admission of people with dementia into formal care.
As dementia progresses, family members often provide care under high levels of stress for long periods of time. The effects of being a family caregiver, though sometimes positive, are generally negative on psychological and physical health, life expectancy and quality of life3. It is therefore important to educate carers of family members with dementia to improve their knowledge of, and attitudes towards, people with dementia. Poor knowledge about dementia has been found to result in the underutilisation of support and treatment services, and in poorer outcomes for people with dementia and their caregivers, such as inadequate care of the disease, misinterpretation of behaviours and increased caregiver stress due to failure to seek appropriate support4.
Currently there is too much reliance on people with dementia and their carers seeking out information for themselves. The result is that people do not receive the information they need because they do not know what to ask for. Despite the existence of information for carers, people report that their information needs are not met: information is provided too late or not at all. A key problem is that people have to ask for information rather than it being provided proactively.
It has been found that education and training programmes covering such information5, or individual training programmes6, improve attitudes towards caring for people with dementia as well as general knowledge of dementia7. Psychosocial interventions have also been demonstrated to reduce caregiver burden and depression, and to delay care home admission8. A systematic review9 of 44 randomised controlled trials found statistically significant evidence that group-based supportive interventions have a positive impact on caregivers of people with dementia.
Coon et al (2003)10 found that psychoeducational skill training in small groups improved both the affective states and the coping strategies of caregivers. On the other hand, an information-orientated programme failed to improve caregivers' mood11, and a befriending scheme was not effective in improving carers' wellbeing12. Similarly, a randomised controlled trial did not show preventive effects of family meetings on the mental health of family caregivers13. Livingstone et al (2013)6, on the other hand, found encouraging results for a manual-based coping strategy programme in their London study.
A suitable training programme is therefore required to build caregivers' knowledge and skills. We have developed a Dementia First Aid (DFA) course for the family carers of people with early dementia. It is a problem-solving, stress-reducing and crisis-preventing training programme. The DFA course was inspired by the principles of the Mental Health First Aid programme14, developed in Australia in 2001 and introduced to England in 2007 by the National Institute for Mental Health in England.
Dementia First Aid Course
Description of the course
The Dementia First Aid course is delivered over 4 hours in a group setting. Each participant received a course manual prepared by the author (AJ). The content covered an overview of dementia, the impact of dementia on the individual, the impact of caring on families, mindfulness-based stress reduction training, and a detailed discussion of the Dementia First Aid action plan for crises associated with behavioural and psychological symptoms of dementia (BPSD).
In November-December 2013, a group of 8 healthcare professionals working within the specialist mental health services for older people in Hertfordshire were offered the 12-hour advanced Dementia First Aid course, followed by an additional 12 hours of practice in presenting the course to a group of family carers of people with recently diagnosed dementia.
Evolution of DFA course
The original 12-hour Dementia First Aid course was delivered over three half days. Although the course was well received by both carers and trainers, the dropout rate was high, mainly because carers struggled to make alternative arrangements for looking after the person with dementia while they were away. The course was therefore shortened to 8 hours and then to 4 hours, based on feedback received from the carers.
The main aim of this pilot evaluation was to investigate the potential benefits of the Dementia First Aid course in terms of the knowledge and attitudes of family carers of people with newly diagnosed dementia.
Methods
The participants were the primary family caregivers of people with dementia residing in northwest Hertfordshire. The DFA course was organised once every two months from November 2015 to March 2017.
An invitation letter, with details of the pilot assessment, was sent to all carers of people whose dementia had recently been diagnosed in the memory clinic, and all participants were given at least 4 weeks' notice before the course.
Selection criteria were: aged 18 or above; the primary carer of a person with newly diagnosed dementia (i.e. currently providing at least 20 hours of direct care per week); and residing in Hertfordshire.
The training was delivered by a pair of qualified DFA instructors, who were mental health professionals experienced in dementia care in the NHS. It was conducted using a PowerPoint presentation, group work, and audio-visual clips based on the specially designed DFA manual.
Evaluation questionnaire
The participants were asked to complete a questionnaire on their own at the beginning of the programme. Oral consent was obtained from participants before they filled out the questionnaire, and they were made aware that participation in the pilot assessment was voluntary and would not pose any barrier to joining the programme.
Participants were given the Alzheimer's Disease Knowledge Scale (ADKS)15, a questionnaire comprising 30 questions, before and after the training. They were also asked to complete the Zarit Burden Scale, a 12-item self-report scale16, to measure carer burden.
After 6 months, the participants were contacted to complete the ADKS and Zarit Burden Scale again; the ADKS was therefore completed three times and the Zarit Burden Scale twice during the study.
Statistical analysis
The data collected were analysed in two ways. First, ADKS scores at pre-test were compared with post-test scores to examine change in participants' knowledge; participants' knowledge at the end of 6 months was also compared with pre- and post-test scores. Similarly, Zarit Burden scores at the initial assessment were compared with scores 6 months post-training. To evaluate the effect of the training, answers to the structured questions given at pre- and post-test, and the scores at 6 months, were compared using a correlated-groups (paired) t-test.
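As an illustration of the correlated-groups t-test used here, a short Python/SciPy sketch on simulated (not study) data:

```python
# Paired (correlated-groups) t-test on simulated pre/post ADKS scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(16.7, 5.7, 65).clip(0, 30)           # simulated pre-course scores
post = (pre + rng.normal(4.5, 3.0, 65)).clip(0, 30)   # simulated post-course scores

t, p = stats.ttest_rel(pre, post)   # each participant serves as their own control
print(f"t = {t:.2f}, p = {p:.2g}")
```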
Results
The study sample comprised 65 people who completed the DFA course. All 65 completed the ADKS pre- and post-training and the Zarit Burden Scale, and a further 34 provided follow-up data approximately 6 months later.
Sample characteristics:
Mean (±SD) age was 66.9 (±13.8) years (range 31-90); 23 attendees were male and 42 were female.
ADK scores
Looking first at all 65 attendees:
ADK scores for whole sample

| | Pre-course | Post-course |
|---|---|---|
| Mean | 16.7 | 21.2 |
| SD | 5.7 | 4.5 |
| Min | 0 | 10 |
| Max | 26 | 29 |
ADK scores improved significantly immediately after attending the course (p < 0.0001).
Score improvement was not predicted by gender (p > 0.3), and the correlation between score improvement and age was not significant (R = 0.023). We did not examine age and gender further.
Analysis of the sub-sample of 34 who provided long-term follow-up data:

ADK scores for sub-sample

| | Pre-course | Post-course | 6+ months |
|---|---|---|---|
| Mean | 17.2 | 22.0 | 21.0 |
| SD | 4.9 | 4.5 | 4.8 |
| Min | 1 | 11 | 7 |
| Max | 24 | 29 | 29 |
For the smaller sample, ADK scores improved significantly immediately after attending the course (p < 0.0001), and this was sustained at the longer-term follow up (p < 0.0001). Although the mean ADK score dropped by a point at 6+ months, this was still a significant improvement over the pre-course (baseline) score.
Comparing post-course ADK score with 6+ month follow-up ADK score, no significant difference was observed (t[33] = 1.48, p = 0.15), suggesting that knowledge was not lost to a significant degree.
Zarit Burden Scale Scores
The response rate for the Zarit Burden Scale was poor: only 19 of the sample completed it at the 6-month follow-up. The score for this cohort increased by 3.58 points, which was of borderline significance and is to be expected, as dementia is a progressively declining condition.
Discussion
This is the first report on the level of dementia knowledge among family caregivers in the UK before and immediately after the implementation of a novel post-diagnostic dementia training programme, the Dementia First Aid Course and whether the knowledge sustains after 6 month.
The mean pre-course score on the ADKS in the sub-sample that completed test at 6 months was significantly lower at 17.2 than 22.7 reported by Smyth et al. (2013)7. It was expected that the level of dementia knowledge would improve after attending the course and the findings largely fulfilled this expectation. There was a significant difference between the pre and post training score with p value < 0.0001. Further there is evidence that the knowledge sustained after 6 months of the training.
The intervention studied in a recent British trial6 is an individual therapy programme, consisting of psychoeducation about dementia, carer stress, behaviour management, and relaxation techniques. The effect of that programme on carers’ depression and abusive behaviour was significant. However, providing individual training for a large number of families may not be feasible in the NHS, so the group-based training approach employed in our study may well be more sustainable.
The carers’ burden of care, as measured by the Zarit Burden Scale at the time of training and 6 months later, showed only a modest increase of 3.58 points. However, it was apparent that training could not affect the relentless progression of dementia, most cases of which were of the Alzheimer’s type.
Limitations
Being a pilot evaluation, the sample size of this study was small, and the participants were not randomly selected. Since the evaluation was conducted in only one part of the county, the sample may not reflect the wider community. The knowledge gained during the course was sustained at the end of 6 months; however, training did not reduce carer burden, nor was it clear whether the new knowledge and skills will be effective in preventing crises. Brodaty et al (1989)17 reported reduced psychological morbidity of carers following a dementia carers’ programme but cautioned against delaying institutionalisation of the patient at the expense of the carer’s morbidity.
Finally, the present pilot evaluation was uncontrolled and non-randomised, so we do not know to what extent any impact is due to the dementia first aid training, passage of time or experience of caring. A randomised controlled study with follow-up measurements on caregivers’ knowledge, sense of burden, psychological health and wellbeing, would be the ideal next step.
Key points
Most people with dementia live at home and are cared for by their spouses, children or other family members, but these carers are not usually offered adequate information and training about dementia and the impact of caring at the time of diagnosis.
This paper describes the effectiveness of a short (4-hour) version of a novel training programme, the ‘Dementia First Aid’ course, for family caregivers of people with early dementia.
The Dementia First Aid course includes an overview of dementia including Alzheimer’s disease, the impact of dementia on the person and their family caregivers, principles and practice of ‘mindfulness’ to enhance coping ability, and a first action plan for common behavioural and psychological symptoms of dementia.
The ‘Dementia First Aid’ course appears to enhance caregivers’ knowledge of dementia.
Conclusions
The significance of these results can be placed in the wider context of proactive dementia training for family caregivers at the time of diagnosis. The results are important in demonstrating that having dementia training is associated with improved knowledge.
This study adds to the existing literature and has implications for both care and policy regarding community care of people with dementia, and emphasises the importance of dementia training as a routine component of post-diagnostic support.
Although knowledge alone does not necessarily translate into change in care, nor does high-quality care depend solely on broad education about dementia7, our results suggest that the Dementia First Aid course is effective in changing the knowledge and attitudes of dementia caregivers. Hopefully, this will also enhance their caring ability and skills, which may in turn reduce caregivers’ sense of burden and improve their wellbeing. A randomised controlled study with follow-up measurements is required to support these claims.
Bipolar affective disorder (BPAD) is one of the commonest psychiatric disorders, with a lifetime prevalence of about 3% in the general population, and is the sixth leading cause of disability worldwide (1,2). This disorder is characterised by repeated episodes in which the patient’s mood and activity levels are significantly disturbed. This disturbance consists on some occasions of an elevation of mood and increased energy and activity (mania or hypomania), and on other occasions of a lowering of mood and decreased energy and activity (depression) (3). As the illness starts early in life, i.e., during the teens or early adulthood, persons suffering from BPAD have symptoms of the illness for the major part of their lives (4, 5).
In India, since professional services, both in the public and private sectors, are not adequately developed due to a shortage of trained human resources and infrastructure, the family support system plays a major role in caring for people with mental illnesses (6). The primary caregiver is identified as an adult relative (a spouse, parent or spouse equivalent) living with the patient, who is involved in the care of the patient on a day-to-day basis, takes the responsibility for bringing the patient to the treatment facility, stays with the patient during the inpatient stay, provides financial support and/or is contacted by the treatment staff in case of emergency (7). Intensive involvement in the care of the patient is often associated with significant caregiver burden.
Caregiver burden can be defined as the presence of problems, difficulties or adverse effects which affect the lives of caregivers of patients with various disorders or illnesses, e.g. members of the household or family (8). Family burden is broadly divided into objective and subjective burden. While the notion of the objective family burden relates to measurable problems (e.g. patients’ troublesome behaviours), the idea of subjective family burden is bound to caregivers’ emotions arising in response to the objective difficulties (9). Multiple studies across the world have shown that bipolar disorder is associated with significant caregiver burden (10-31). In view of the high caregiver burden, it is now suggested that the emphasis in psychiatric rehabilitation needs to shift from a patient-focused approach to a combined patient and caregiver-focused approach. Although there are studies from different parts of the country, there is a lack of data on caregiver burden from Kashmir, which is often faced with turmoil, which can influence caregiver burden. The present study is an effort in this direction to assess caregiver burden and its correlates among primary caregivers of patients with bipolar disorder.
Methodology
The present study was conducted on primary caregivers of patients with BPAD. Primary caregivers were defined as those caregivers who were closely involved in the care of the patient during the acute episodes and during the maintenance period in terms of bringing the patient to the hospital, supervising the medications and liaison with the treating team.
The study sample comprised 100 caregivers of 100 patients diagnosed with BPAD as per the International Classification of Diseases classification of mental and behavioural disorders, 10th revision (ICD-10) (3), attending either the outpatient or inpatient services at the Department of Psychiatry, SKIMS, Bemina, Srinagar. The study was approved by the Ethics Committee of the institute, and all the participants were recruited after obtaining written informed consent.
To be included in the study, the caregivers were required to be aged 18 or above, involved in the care of the patient, living with the patient for at least 1 year, and a family member taking care of the patient without any wages. Caregivers who had a diagnosed psychiatric illness or had been staying with the patient for less than 12 months were excluded.
The caregivers were assessed by following scales:
Family Burden Interview Schedule (FBIS) (32): This is a semi-structured interview schedule comprising 24 items, each scored on a 3-point scale, i.e. 0 indicating no burden, 1 a moderate level of burden and 2 severe burden. The items assessing objective burden are divided into 6 domains: financial burden, disruption of routine family activities, disruption of family leisure, disruption of family interaction, physical health and mental health. Subjective burden is evaluated by a single item. This scale has been widely used in previous studies from India (26, 33-35).
DUKE-UNC Functional Social Support Questionnaire (FSSQ) (36): The FSSQ is an 8-item instrument measuring the strength of a person’s social support network (36). Responses to each item were scored as 1 (‘much less than I would like’), 2 (‘less than I would like’), 3 (‘some, but would like more’), 4 (‘almost as much as I would like’) and 5 (‘as much as I would like’). The scores from all eight questions are summed (maximum 40) and then divided by 8 to obtain an average score; a higher score indicates better perceived social support. Cronbach’s alpha for this scale is 0.84.
Hindi General Health Questionnaire (GHQ-30) (37): The modified version of Goldberg’s General Health Questionnaire (GHQ) (38) was used. This is a screening device for identifying minor psychiatric disorders in the general population and within community or non-psychiatric clinical settings such as primary care or general medical outpatient clinics. The self-administered questionnaire focuses on two major areas: the inability to carry out normal functions and the appearance of new and distressing phenomena. For each question of the 30-item GHQ, the caregivers were asked to choose among: better than usual or same as usual = 0; less than usual or much less than usual = 1. The results were evaluated by the binary 0-0-1-1 scoring method. The minimum GHQ-30 total score was 0 and the maximum was 30. A cut-off of 6 was used to categorise those with and without psychiatric morbidity. Cronbach’s alpha for the GHQ-30 was 0.93, and the Kappa coefficient was 0.64 (p<0.001).
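To make the 0-0-1-1 scoring and the cut-off of 6 concrete, a minimal sketch follows; the response coding (1-4, from “better than usual” to “much less than usual”) and the function name are our assumptions for illustration, not the instrument’s official coding sheet.

```python
# Hedged sketch of GHQ-30 binary (0-0-1-1) scoring with a cut-off of 6.
# Response coding 1..4 (1-2 = usual or better, 3-4 = worse than usual) is an
# assumption for illustration, not the instrument's official coding sheet.
def score_ghq30(responses, cutoff=6):
    assert len(responses) == 30
    total = sum(1 for r in responses if r >= 3)   # 0-0-1-1 method
    return total, total >= cutoff                 # (score, probable morbidity)

score, morbidity = score_ghq30([1, 2, 3, 4] * 7 + [1, 2])
print(score, morbidity)   # -> 14 True
```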
The recorded data were compiled and entered into a spreadsheet (Microsoft Excel) and then exported to the data editor of SPSS Version 16.0 (SPSS Inc., Chicago, Illinois, USA). Continuous variables were summarised as means and standard deviations, and categorical variables as percentages. Student’s independent t-test and chi-square tests were employed for comparing caregiver burden across different variables.
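As a sketch of how such a categorical comparison can be run outside SPSS, the following fragment applies a chi-square test to an invented 2×2 table; the counts are illustrative, not the study data.

```python
# Sketch of a chi-square test for a categorical comparison, as described above.
# The 2x2 counts are invented for illustration; they are NOT the study data.
from scipy.stats import chi2_contingency

#        high burden  low burden
table = [[40, 20],    # e.g. female caregivers
         [25, 15]]    # e.g. male caregivers

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```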
Results
Table 1: Description of socio-demographic variables of caregivers and patients

| Variable                   | Caregivers, n=100 (%) | Patients, n=100 (%) |
|----------------------------|-----------------------|---------------------|
| Age (years)                |                       |                     |
|   20-29                    | 11 (11%)              | 12 (12%)            |
|   30-39                    | 24 (24%)              | 26 (26%)            |
|   40-49                    | 26 (26%)              | 31 (31%)            |
|   50-59                    | 34 (34%)              | 14 (14%)            |
|   ≥ 60                     | 5 (5%)                | 17 (17%)            |
|   Mean ± SD                | 43.4 ± 11.25          | 34.3 ± 12.86        |
| Gender                     |                       |                     |
|   Male                     | 52 (52%)              | 47 (47%)            |
|   Female                   | 48 (48%)              | 53 (53%)            |
| Marital status             |                       |                     |
|   Unmarried                | 7 (7%)                | 37 (37%)            |
|   Married                  | 93 (93%)              | 63 (63%)            |
| Educational status         |                       |                     |
|   No formal education      | 48 (48%)              | 36 (36%)            |
|   Primary                  | 5 (5%)                | 6 (6%)              |
|   Secondary                | 27 (27%)              | 32 (32%)            |
|   Graduate                 | 20 (20%)              | 26 (26%)            |
| Occupation                 |                       |                     |
|   Unemployed               | 3 (3%)                | 10 (10%)            |
|   Labourer                 | 27 (27%)              | 24 (24%)            |
|   Student                  | 3 (3%)                | 16 (16%)            |
|   House maker              | 44 (44%)              | 34 (34%)            |
|   Employed                 | 23 (23%)              | 16 (16%)            |
| Socio-economic status      |                       |                     |
|   Low                      | 60 (60%)              | 60 (60%)            |
|   Middle                   | 40 (40%)              | 40 (40%)            |
|   High                     | 0 (0%)                | 0 (0%)              |
| Relationship of caregiver  |                       |                     |
|   Father                   | 11 (11%)              |                     |
|   Mother                   | 22 (22%)              |                     |
|   Spouse                   | 55 (55%)              |                     |
| Duration of care           |                       |                     |
|   1-5 yrs                  | 77 (77%)              |                     |
|   6-10 yrs                 | 16 (16%)              |                     |
|   > 10 yrs                 | 7 (7%)                |                     |
|   Mean ± SD                | 4.8 ± 4.16            |                     |
Table 2: Clinical profile of patients

| Patient variable                 | Frequency, n=100 (%) |
|----------------------------------|----------------------|
| Duration of illness              |                      |
|   1-5 yrs                        | 77 (77%)             |
|   6-10 yrs                       | 16 (16%)             |
|   11-15 yrs                      | 5 (5%)               |
|   16-20 yrs                      | 1 (1%)               |
|   > 20 yrs                       | 1 (1%)               |
|   Mean ± SD                      | 4.83 ± 4.25          |
| Number of hospitalisations       |                      |
|   Never                          | 47 (47%)             |
|   Once                           | 24 (24%)             |
|   Twice                          | 18 (18%)             |
|   Thrice                         | 6 (6%)               |
|   Four times                     | 5 (5%)               |
|   Mean ± SD                      | 0.98 ± 1.16          |
| Number of episodes of mania      |                      |
|   1-2                            | 55 (55%)             |
|   3-4                            | 39 (39%)             |
|   5-6                            | 6 (6%)               |
|   Mean ± SD                      | 2.61 ± 1.12          |
| Number of episodes of depression |                      |
|   < 3                            | 15 (15%)             |
|   3-5                            | 64 (64%)             |
|   > 5                            | 21 (21%)             |
|   Mean ± SD                      | 4.05 ± 1.87          |
| Number of attempts of homicide   |                      |
|   0                              | 75 (75%)             |
|   1                              | 8 (8%)               |
|   2                              | 4 (4%)               |
|   ≥ 3                            | 5 (5%)               |
|   Mean ± SD                      | 0.37 ± 0.93          |
| Number of attempts of suicide    |                      |
|   0                              | 75 (75%)             |
|   1                              | 1 (1%)               |
|   2                              | 6 (6%)               |
|   ≥ 3                            | 2 (2%)               |
|   Mean ± SD                      | 0.23 ± 0.74          |
| Compliance with medication       |                      |
|   Yes                            | 73 (73%)             |
|   No                             | 27 (27%)             |
Table 3: Caregiver burden, social support and psychological morbidity among caregivers

| Psychosocial parameter                     | Mean (SD)           | Range     |
|--------------------------------------------|---------------------|-----------|
| Caregiver burden (FBIS scores)             |                     |           |
|   Financial burden                         | 7.01 (2.28)         | 3-12      |
|   Disruption of family routine activities  | 5.38 (1.77)         | 3-9       |
|   Disruption of family leisure             | 4.12 (1.26)         | 2-8       |
|   Disruption of family interactions        | 4.04 (1.36)         | 3-9       |
|   Effect on physical health of others      | 2.28 (0.83)         | 1-4       |
|   Effect on mental health of others        | 1.51 (0.82)         | 0-4       |
|   Total family burden                      | 24.31 (7.35)        | 13-44     |
| Objective burden: score < 12 / score ≥ 12  | 3 / 97              |           |
| Subjective caregiver burden score          | 1.12 (0.61)         | 0-2       |
| DUKE-UNC FSSQ                              | 3.17 (0.84)         | 1.75-4.75 |
| GHQ-30                                     | 13.14 (5.65)        | 2-25      |
| GHQ score < 6 / GHQ score ≥ 6              | 77 (77%) / 23 (23%) |           |
Table 4: Association of caregiver burden with socio-demographic variables of caregivers

| Caregiver variable      | Category            | N  | Mean  | SD    | P-value |
|-------------------------|---------------------|----|-------|-------|---------|
| Age (years)             | 20-29               | 11 | 20.63 | 4.860 | <0.001* |
|                         | 30-39               | 24 | 22.67 | 7.409 |         |
|                         | 40-49               | 26 | 25.08 | 6.211 |         |
|                         | 50-59               | 34 | 26.93 | 5.839 |         |
|                         | ≥ 60                | 5  | 29.25 | 6.675 |         |
| Gender                  | Male                | 52 | 23.60 | 7.384 | 0.012*  |
|                         | Female              | 48 | 27.35 | 7.309 |         |
| Marital status          | Married             | 93 | 26.97 | 7.409 | 0.041*  |
|                         | Unmarried           | 7  | 21.29 | 6.211 |         |
| Educational status      | No formal education | 48 | 28.78 | 7.772 | 0.015*  |
|                         | Primary             | 5  | 27.80 | 7.596 |         |
|                         | Secondary           | 27 | 24.69 | 7.223 |         |
|                         | Graduate            | 20 | 22.35 | 5.092 |         |
| Occupation              | Unemployed          | 3  | 23.15 | 7.268 | <0.001* |
|                         | Labourer            | 27 | 25.47 | 1.399 |         |
|                         | Student             | 3  | 23.05 | 6.891 |         |
|                         | House maker         | 44 | 28.05 | 6.891 |         |
|                         | Employed            | 23 | 22.07 | 7.312 |         |
| Socio-economic status   | Low                 | 60 | 26.88 | 7.958 | 0.018*  |
|                         | Middle              | 40 | 23.38 | 5.687 |         |
|                         | High                | 0  | 0     | 0     |         |
| Type of family          | Nuclear             | 82 | 28.37 | 5.463 | 0.002*  |
|                         | Joint               | 18 | 23.54 | 6.354 |         |
| Relationship to patient | Parent              | 33 | 24.47 | 7.972 | 0.008*  |
|                         | Spouse              | 55 | 28.04 | 7.038 |         |
|                         | Offspring           | 12 | 21.57 | 6.024 |         |
| Duration of care        | 1-5 years           | 77 | 22.99 | 5.644 | <0.001* |
|                         | 6-10 years          | 16 | 33.06 | 6.027 |         |
|                         | > 10 years          | 7  | 35.57 | 5.996 |         |
Table 5: Association of caregiver burden with clinical variables of patients

| Disease profile                     | Category   | N  | Mean  | SD    | P-value |
|-------------------------------------|------------|----|-------|-------|---------|
| Duration of illness                 | 1-5 yrs    | 77 | 22.98 | 5.644 | <0.001* |
|                                     | 6-10 yrs   | 16 | 33.07 | 6.027 |         |
|                                     | ≥ 10 yrs   | 7  | 37.01 | 2.887 |         |
| Number of hospitalisations          | Never      | 47 | 22.21 | 7.896 | 0.045*  |
|                                     | Once       | 24 | 25.83 | 7.438 |         |
|                                     | Twice      | 18 | 26.54 | 6.527 |         |
|                                     | Thrice     | 6  | 28.50 | 4.506 |         |
|                                     | Four times | 5  | 31.00 | 6.042 |         |
| Number of episodes of mania         | 1-2        | 55 | 22.27 | 5.612 | <0.001* |
|                                     | 3-4        | 39 | 27.97 | 6.726 |         |
|                                     | 5-6        | 6  | 38.65 | 2.066 |         |
| Number of episodes of depression    | < 3        | 15 | 21.93 | 7.611 | <0.001* |
|                                     | 3-5        | 64 | 23.91 | 5.817 |         |
|                                     | > 5        | 21 | 32.81 | 6.615 |         |
| Compliance with medication (>75%)   | Yes        | 73 | 24.51 | 7.328 | 0.041*  |
|                                     | No         | 27 | 27.94 | 7.377 |         |
Table 6: Association of caregiver burden with social support and psychological morbidity among caregivers

Description of socio-demographic variables of patients
The study included nearly equal numbers of male and female patients. About two-thirds of the patients were married (63%). About one-third of the patients had not received any formal education, another third had completed secondary education, and one-fourth had completed graduation (Table 1).
Description of socio-demographic variables of caregivers
The study included nearly equal numbers of male and female caregivers. The majority (55%) of the caregivers were spouses of the patient. The majority of the caregivers were married (93%). Nearly half of the caregivers had not received any formal education (48%), were homemakers (44%) and three-fifths of them were from low socioeconomic status (60%). The majority of caregivers (77%) had been caring for duration of one to five years (Table 1).
Clinical profile of patients.
In the present study, the majority of patients (77%) had a duration of illness in the range of 1-5 years, nearly half had never been hospitalised, the majority (55%) had one to two manic episodes, most (64%) had three to five episodes of depression, and the majority (75%) had never attempted suicide or homicide. The majority of patients (73%) were compliant with medication (Table 2).
Caregiver burden, social support and psychological morbidity among caregivers
As is evident from Table 3, the highest burden was reported in the financial domain, followed by disruption of family routine activities, disruption of family leisure, disruption of family interactions and effect on the physical health of others; the least burden was reported for the effect on the mental health of others. The mean DUKE-UNC FSSQ score was 3.17 (SD=0.84), with a range of 1.75-4.75.
The mean GHQ-30 score was 13.14 (SD=5.65), with a range of 2-25. Of the 100 caregivers, about one-fourth (N=23) had a GHQ-30 score of 6 or more, indicative of psychological morbidity.
Association of caregiver burden with demographic and clinical variables
As is evident from Table 4, higher caregiver burden was associated with higher age, female gender, lack of formal education, being a homemaker, lower socioeconomic status, a nuclear family set-up, being spouse of the patient and longer duration of being in the caregiver role.
Association of caregiver burden with clinical variables of patients
In terms of clinical variables, higher objective caregiver burden was associated with duration of illness more than 10 years, higher number of hospitalisations and higher number of manic and depressive episodes. Caregivers of patients consuming >75% of the prescribed medications reported lower caregiver burden (Table 5).
Advancing age of patient and caregiver, increasing duration of care, prolonged illness, greater number of hospitalisations and higher number of episodes of either polarity were significantly associated with higher caregiver burden. In terms of association of social support and caregiver burden, higher social support was associated with significantly lower caregiver burden, whereas higher caregiver burden was associated with higher psychological morbidity (Table 6).
Discussion
Families play an important role in care of patients with chronic mental illnesses. In the process of caring for such patients, relatives face a considerable burden.
Findings of the present study suggest that higher burden was seen among the caregivers who were relatively older, of female gender, uneducated or illiterate, homemakers and from nuclear families. Compared to parents and siblings, spouses reported significantly higher levels of caregiver burden. Furthermore, the caregivers involved in the care of the patient for longer durations reported significantly higher levels of caregiver burden.
In terms of clinical variables of patients, higher caregiver burden was associated with longer duration of illness, higher number of lifetime hospitalisations, higher number of manic and depressive episodes and poor medication compliance. Poor social support was associated with a higher level of caregiver burden. Higher caregiver burden was associated with higher psychological morbidity.
Many previous studies from India have evaluated caregiver burden among caregivers of patients with bipolar disorder (10-32). There is a lack of consensus with respect to caregiver variables and their association with caregiver burden (39). Some studies suggest that there is no significant difference in the caregiver burden reported by caregivers of either gender (6), whereas others suggest that females report higher caregiver burden (13, 40). Our findings support the studies that have reported higher caregiver burden among female caregivers. This finding could have been influenced by the relationship of caregivers with patients: in the present study, spouses of patients formed a large proportion of caregivers, and they reported significantly higher burden than parents and siblings. Cultural issues, such as the restriction of females to household activities with fewer opportunities to vent their distress, inability to spend time on leisure activities, financial dependency and lack of independence, could also be responsible for the higher perceived burden. It was noticed that caregivers from nuclear families had higher caregiver burden compared to those from joint families. The joint family system is considered to promote interdependence and is possibly associated with sharing of the caregiver burden, which may explain why caregivers from joint families reported lower caregiver burden. Similar findings have been reported in earlier studies from India (41).
The association of higher caregiver burden with longer duration of illness is supported by the existing literature (14). This finding suggests that, with passing time, frequent relapses of the illness possibly lead to caregiver burnout and hence higher caregiver burden. Previous studies have also noted an association of higher caregiver burden with a higher number of hospitalisations (30), and the findings of the present study support this association. Higher caregiver burden with a greater number of hospitalisations possibly indicates more severe episodes, with hospitalisation associated with greater expenditure and loss of earnings. This suggests that all efforts must be made to detect relapses as early as possible and manage them effectively, to minimise the chances of progression to severe episodes and the resultant need for inpatient care. Previous studies have also reported an association between higher caregiver burden and a higher number of episodes, especially manic episodes (14) and more severe manic episodes (42). Manic episodes are very disruptive to daily life, work and family relationships, and therefore place great demands on family members involved in caregiving. These demands can persist even during remission, when residual symptoms are often still present and lead to caregiver burden. Available data suggest that, in contrast to patients from the West, patients from India have a higher number of manic episodes (43). Taken together, this finding has important implications, as it suggests that efforts must be made to prevent frequent relapses in patients with bipolar disorder, especially in the Indian context, to reduce caregiver burden (44).
In the present study, higher burden was also associated with a higher number of depressive episodes and this finding is supported by existing literature (16).
Long-term management of bipolar disorder requires continuation of medications with good compliance. Poor medication compliance has been shown to be associated with many negative patient-related outcomes like higher risk of relapses, suicidality, poor quality of life, higher residual or sub-syndromal symptoms etc (45, 46). The present study adds to this body of literature and suggests that poor medication compliance in patients is also associated with higher caregiver burden and this finding is supported by the existing literature (11).
Among the demographic variables of caregivers, higher age was associated with higher caregiver burden, a finding also supported by existing literature (6). This association possibly suggests that, with increasing age, caregivers experience more burnout, lose hope and lose the physical vigour needed to take care of a mentally ill relative.
Accordingly, it is important for the mental health professionals to support the ageing caregivers.
To conclude, the present study suggests that BPAD is associated with high caregiver burden. Higher caregiver burden is associated with clinical variables of the patients and demographic variables of the caregivers. Among the patient-related variables, longer duration of illness, a higher number of lifetime episodes of either polarity and poor medication adherence are associated with higher caregiver burden; hence, all measures must be taken to minimise relapse in patients with BPAD. Among the demographic variables of caregivers, higher caregiver burden was reported by caregivers who were relatively older, female, uneducated or illiterate, homemakers and from nuclear families.
Our findings highlight the need for additional research on interventions to reduce burden among caregivers of patients with bipolar affective disorder. For better outcomes of disease, more attention needs to be given to the primary caregivers in terms of psycho-education and counselling.
Medical Student Syndrome (MSS) is a unique type of hypochondriasis which specifically causes health anxiety related to the diseases medical students study during their medical training.1 However, this phenomenon does not translate into an increased number of consultations, differentiating it from hypochondriasis.2 Nevertheless, the common denominator in both conditions is that the affected person persistently experiences the belief or fear of having a severe disease, due to the misinterpretation of physical symptoms.3 Medical examination on multiple occasions does not identify medical conditions that fully account for the physical symptoms or the person’s concerns about the disease, making it a diagnosis of exclusion. Unfortunately, the fears frequently persist among medical students despite medical reassurance, affecting their concentration during training.4
Earlier studies showed a high prevalence of MSS in various medical schools, but recent studies show a declining trend. While Howes et al5 demonstrated that 70% of medical students have groundless medical fears during their studies, Weck et al6, on the contrary, recorded health anxiety among only 5-30% of study participants. One reason ascribed to this could be that the earlier studies showing a high prevalence of MSS were uncontrolled. Also, age-matched peers were not used as controls in some studies, and no direct interviews had been conducted.7,8 Methodological issues in previous data have led to inaccurate interpretations and over-generalization of findings. For example, the reported high emotional disturbance in medical students resulted from comparisons made with the general population, rather than with other students of their age.9-11
We were prompted to conduct this study because the magnitude of MSS varies from region to region. In this study, we compared medical students with their peers studying in different colleges of Taif University to avoid observational bias.
Methods
This study was carried out from September 2017 to June 2018 at the female campus of Taif University, Kingdom of Saudi Arabia (KSA) in medical (pre-clinical and clinical years) and non-medical colleges in accordance with research guidelines of the College of Medicine, Taif University, KSA.
Inclusion criteria
Age and gender-matched students were selected for inclusion in the study. These included:
1. Female medical students from the second to the sixth grades enrolled in the College of Medicine, Taif University, KSA.
2. Female non-medical students from first to fourth grades enrolled in colleges of Arts, Admin and Financial Sciences, Computer and Information Technology, Science and Islamic Law.
Exclusion criteria
Biology students were excluded due to the medical content of their courses. At the time of enrolment, permission for participant recruitment was obtained from the concerned faculty administrators.
The participants were approached in the common/study rooms or lecture halls. The students were informed of the voluntary nature of the participation and were randomly selected. They were not required to provide their names during completion of the questionnaire and were assured of confidentiality. The Hypochondria/Health Anxiety Questionnaire (HAQ), developed by the Obsessive Compulsive Centre of Los Angeles (http://ocdla.com/hypochondria-test), was used to collect the data. The questionnaire was translated into Arabic and underwent a revision in order to ensure compatibility with the original one. The questionnaire was not designed to provide a formal diagnosis but provided an indication as to whether or not the persons were exhibiting significant signs of the disease.
Results of this questionnaire were analyzed as under:
A) 1 to 3 test items checked: there is a low probability that the student has health anxiety, and it is unlikely that her concerns significantly impact her life.
B) 4 to 7 test items checked: there is a medium probability that she has health anxiety, and a moderately high amount of distress related to specific health-related thoughts. She spends more time than most people doing unnecessary behaviours related to these thoughts.
C) More than 7 test items checked: there is a high probability that she has health anxiety. She most likely has a significant amount of distress related to certain health-related obsessions, and likely spends a significant amount of time doing unnecessary compulsive and avoidant behaviours directly related to these obsessions.
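The banding above can be expressed compactly; the sketch below is ours, and the function name and labels are illustrative rather than part of the published HAQ.

```python
# Sketch of the HAQ probability banding described above; the function name
# and labels are ours, not part of the published instrument.
def haq_probability(items_checked):
    if items_checked <= 3:
        return "low"      # health anxiety unlikely to impact her life
    if items_checked <= 7:
        return "medium"   # moderately high distress, some unnecessary behaviours
    return "high"         # significant distress, compulsive/avoidant behaviours

for n in (2, 5, 9):
    print(n, haq_probability(n))   # -> low, medium, high
```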
Statistical methods
Data were statistically described in terms of frequencies (number of cases) and valid percentages for categorical variables. The responses of the two groups were compared using Student’s t-test. P values less than 0.05 were considered statistically significant. All statistical calculations were done using IBM SPSS (Statistical Package for the Social Sciences; IBM Corp, Armonk, NY, USA) release 21 for Microsoft Windows.
Results
A total of 400 students were included in the study: 200 medical students and 200 students from various non-medical colleges of Taif University (Colleges of Arts, Admin and Financial Sciences, Computer and Information Technology, Science and Islamic Law).
All participating students were female. The mean age of the medical students was 21 years (range 19-22 years), and the mean age in the non-medical group was 20.5 years (range 19-23 years).
All students in the non-medical colleges completed the HAQ while five students in the medical college (clinical years) did not complete it, so the data on 395 participants were finally analyzed.
According to the scaling criteria, the overall prevalence of MSS in the total sample (medical and non-medical female students) was 16.2% (64 of 395 students). It was higher in the medical students (34 of 195 students; 17.4%) than in the non-medical students (30 of 200; 15%) – see Table 1.
Table 1. The frequency of Medical Student Syndrome (MSS) among medical and non-medical students

|                                | Non-medical students (n=200) | Medical, pre-clinical (n=95) | Medical, clinical (n=100) | p value |
|--------------------------------|------------------------------|------------------------------|---------------------------|---------|
| Age (years)                    | 19-23                        | 19-20                        | 21-22                     |         |
| Medical student syndrome (MSS) | 30 (15%)                     | 20 (21.1%)                   | 14 (14%)                  | 0.22    |
| One visit to doctor            | 33.3% (10/30)                | 20% (4/20)                   | 14.3% (2/14)              | 0.0043  |
| More than one visit to doctor  | 40% (4/10)                   | 25% (1/4)                    | 0%                        | 0.001   |
Figure 1. The difference of Medical Student Syndrome (MSS) between pre-clinical and clinical years (p=0.028).
Figure 2. Fears related to diseases in the study cohort.
When the responses of the two groups were compared by Student’s t-test, there was no statistically significant difference between medical and non-medical colleges (p=0.31). However, among the MSS cases identified in the medical college, there was a significant difference between pre-clinical and clinical years – 21.1% vs 14% (p=0.028) – see Figure 1.
The percentage of students who had visited a doctor during the last year because of fear of a disease or medical condition was higher in the non-medical group than in the medical group, with a significant difference observed (p=0.043).
The medical conditions that caused most worry among medical and non-medical students were diabetes mellitus, followed by cancers, especially breast cancer. The diseases causing least worry were headache and heart disease – see Figure 2.
The percentage of students who consulted more than one doctor for the same medical concern, because of doubt about the previous doctor’s diagnosis and laboratory results, was higher in the non-medical group than in the medical group; the difference was significant (p=0.001).
The students with MSS in the total sample (of 395 students) were categorized according to the degree of probability into low, medium and high as shown in Figure 3.
Figure 3. The probability of Medical Student Syndrome (MSS) among all groups compared to their non-medical peers.
Discussion
The unrealistic fears about illnesses recorded in this study were more frequent among medical students than among their peers studying various non-medical courses at Taif University; however, the difference was not significant. The subgroup analysis revealed a correspondingly higher prevalence of health anxiety during the pre-clinical years than the clinical years, as shown in Figure 1. Possibly, during the pre-clinical years students have an increased sense of body awareness and stress, as demonstrated by Moss-Morris et al.7 The authors of that study described this syndrome as a normal perceptual process and differentiated it from common hypochondriasis, as other researchers8,12 have also affirmed. Our results parallel the finding of Azuri et al13 that first-year students visited a general practitioner (GP) or specialist more often than students in other years. The authors of that study suggested that the pre-clinical students’ visits may be due to registering with a new doctor closer to university or to necessary health checks before the beginning of medical school; they also noted that the dream content of pre-clinical medical students frequently involved a preoccupation with personal illness of the heart, the eyes and the bowels.
Additionally, the fear of acquiring a future disease is a core feature of health anxiety, while fear of already having a disease is considered more central to MSS.14 There are a number of instances where this syndrome manifests among students from time to time during their training. The students are even known to change their diagnosis depending upon their clinical rotation: for example, in a psychiatry rotation the student conceptualizes having schizophrenia and later shifts his or her diagnosis to Meniere’s disease during an ear, nose and throat (ENT) rotation. The symptoms are thought to occur due to intensive exposure to knowledge affecting symptom perception and interpretation.15 The fact remains that the affected student has neither condition. At times, the simple knowledge of the location of the appendix transforms the most harmless sensations in that region into symptoms of a serious threat.16 Students who study “frightening diseases” for the first time routinely experience intense delusions of having the disease, reflecting a temporary kind of hypochondriasis.17
In a study by Waterman et al18, it was observed that 80% of medical students conceptualize diagnoses ranging from tuberculosis to cancer while studying these diseases during training, causing emotional distress and conflict. The authors suggested that this phenomenon was present in approximately 70-80% of students. There may be multiple reasons for the precipitation of this condition among medical students: the vastness of medical studies is undebatable, and medical schools cause students to experience a large amount of psychological pressure due to the work required to grasp the subject matter, the stress of examinations, and the competitive environment.19
In this study, we compared medical students with the students of the same age and gender with the same cultural background in order to avoid any bias. Our results are in parallel with a more recent study, which compared three groups, medical students, non-medical students, and their peers who were not undergoing any academic course. The authors in the study mentioned above observed no significant differences between the groups on total scores in the questionnaires. However, when considering the individual components of the questionnaires, it was found that medical students were less aware of bodily changes and sensations than the other groups; nevertheless, they did not avoid seeking medical advice for any health-related fears.20
Regarding the percentage of students who visited doctors in the past 12 months due to fear of disease, it was observed in this study that the non-medical group had significantly more visits to doctors compared to their peers studying in the medical college of the university. It is entirely possible that the medical students had greater access to informal advice from peers, relatives, and various mentors. Of the various diseases, fear of diabetes mellitus was the most common, possibly due to the high prevalence of the disease in Saudi Arabia.21 Further, it is entirely possible that medical students subconsciously conceive of these metabolic disorders because they are discussed in greater detail during their courses.
MSS may lead to cyberchondria, a phenomenon in which members of the public seek to diagnose themselves via the internet,11 which in turn may lead to hypochondriasis in any given student. Thus, it becomes imperative that students suffering from this disorder are approached with empathy and counselled properly after an organic cause of their illness has been ruled out. A further preventive step would be to discuss MSS thoroughly with medical students during their training.
Limitation of the study
The drawback of this study is that the questionnaire was translated from English into Arabic, and although it underwent a revision, no other formal tests, such as linguistic and cultural validation, were carried out to validate the translated version. Further, our focus was only on female students, and it is well known that females have a better ability to cope with anxiety and depression compared to males;22,23 the figures for MSS among male medical students therefore need to be studied, as they may differ from what we report in this female cohort.
Conclusion
In conclusion, students suffering from MSS often overuse medical resources and outpatient services compared to others. Therefore, clinicians should be aware of these students to avoid unnecessary procedures and treatments. However, it is vital that a proper evaluation is done before labelling a given student with MSS.
Several studies have found that refugees develop post-traumatic stress disorder (PTSD) after having endured war trauma1, or certain circumstances related to migration, such as moving to a new country, being unemployed and poor housing2. PTSD is described as distress and disability due to a traumatic event that occurred in the past3. In 2013, the American Psychiatric Association revised the PTSD diagnostic criteria in the fifth edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM-5), and PTSD was included in a new category, Trauma- and Stressor-Related Disorders4. All of the conditions included in this category require exposure to a traumatic or stressful event as a diagnostic criterion4. The person with PTSD often avoids trauma-related thoughts and emotions, and discussion of the traumatic event4. PTSD patients are invariably anxious about re-experiencing the same trauma, and the trauma is usually re-lived through disturbing, repeated recollections, flashbacks and nightmares4. Symptoms of PTSD generally begin within the first 3 months after the provocative traumatic event, but may not begin until several years later4. A substantial proportion (10-40%) of children aged 16 or younger who have experienced a traumatic event tend to develop PTSD later on5. Moreover, many families with children growing up in war zones and then moving to safer places experience trauma, stress and reduced functioning6. These families differ in the resilience of their survival mechanisms, coping strategies and adaptation levels7.
The latest war in Syria has led to the migration of large parts of the Syrian population to neighboring countries such as Lebanon, Jordan and Turkey8. The United Nations High Commissioner for Refugees (UNHCR) estimates that approximately 1.5 million refugees are located in Lebanon9. These refugees have been exposed to several types of traumatic events that may increase the incidence of mental health problems10.
We hypothesize that the proportion of positive PTSD screens would be high among Syrian refugees with the presence of some specific related risk factors. Thus, the objective of our study was to examine PTSD symptoms and to determine the associated risk factors in a sample of Syrian refugees living in North Lebanon.
METHODS
1. Study design and population
This was a cross-sectional study that aimed to assess the proportion of Syrian refugees in North Lebanon who were at high risk of developing PTSD, and to examine the association of high PTSD risk with other factors. The survey was carried out during February and March 2016. A convenience sample of Syrian refugees of both genders, aged between 14 and 45 years, living in North Lebanon, was selected from a population of 262,151 refugees11.
The estimated minimum sample size, calculated using Raosoft sample size calculator, with a margin of error of 5% and a confidence interval of 95%, was 384 refugees. A total number of 450 Syrian refugees, residing in individual tented settlements (ITSs), collective shelters (CS) or Primary Health Care Centers (PHCs) located in North Lebanon, was selected according to inclusion and exclusion set criteria.
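The reported minimum of 384 can be reproduced with the standard Cochran-type formula and a finite-population correction; the sketch below assumes the conventional p = 0.5 and z = 1.96 (95% confidence), which the paper does not state explicitly.

```python
# Reproducing the reported sample-size figure with the standard Cochran-type
# formula and a finite-population correction. We assume p = 0.5 and z = 1.96
# (95% confidence), the conventional choices; the paper does not state them.
import math

N = 262_151                    # refugee population in North Lebanon
z, p, e = 1.96, 0.5, 0.05      # z-score, assumed proportion, margin of error

n0 = (z ** 2) * p * (1 - p) / e ** 2   # infinite-population estimate (~384.2)
n = n0 / (1 + (n0 - 1) / N)            # finite-population correction
print(math.ceil(n))                    # -> 384, matching the study
```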
The inclusion criteria were: Syrian refugees, aged 14-45 years, physically and mentally independent. Hence, all subjects who were younger than 14 or older than 45, unable to speak, deaf, physically or mentally dependent, or who had undergone recent moderate or major surgery (less than one week earlier), were excluded from the study.
2. Ethical considerations
The study protocol received approval from the Notre Dame University (NDU) Institutional Review Board (IRB). The approval comprised details about the procedure of the study and the rights of the participants. Informed consent was obtained from each participant. The questionnaires were answered anonymously, ensuring confidentiality of collected data.
3. The Interview questionnaire
The interview questionnaire was divided into six sections consisting of a total of 46 questions, a mix of dichotomous, close-ended and open-ended items. A cover page described the purpose of the study, ensured anonymity and confidentiality, and solicited the consent of participants. The questionnaire collected data on the demographic and socio-economic characteristics of the participants; information about health status and stressful life events (SLE) was also obtained. The PC-PTSD (Primary Care Post-Traumatic Stress Disorder) tool was used to screen for PTSD.
For the purposes of the study, subjects were classified as having/not having positive PC-PTSD. The results were used to calculate the proportion of Syrian refugees who are at high risk of developing PTSD.
PC-PTSD questionnaire: The PC-PTSD was initially developed in a Veterans Affairs primary care setting and is currently used to screen for PTSD, based on the diagnostic criteria of the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV)12. The screen consists of 4 questions related to a traumatic life event: In the past month, have you (1) had nightmares about it or thought about it when you did not want to?; (2) tried hard not to think about it or gone out of your way to avoid situations that reminded you of it?; (3) been constantly on guard, watchful, or easily startled?; (4) felt numb or detached from others, activities, or your surroundings? The answers to these questions are dichotomous (Yes/No), and in this study the total screen was considered “positive” when a participant answered “yes” to three out of four questions. The PC-PTSD has shown high sensitivity (86%) and moderate specificity (57%) when a cutoff score of 2 is used13.
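Scoring the screen is mechanical; the sketch below encodes the rule used in this study (positive when at least three of the four items are endorsed), with function and variable names of our own choosing.

```python
# Sketch of PC-PTSD scoring as applied in this study: four yes/no items,
# screen positive when at least three are answered "yes". Function and
# variable names are ours, for illustration only.
def pc_ptsd_positive(answers, threshold=3):
    # answers: 4 booleans for the nightmares/avoidance/hypervigilance/
    # numbness items, in that order
    assert len(answers) == 4
    return sum(answers) >= threshold

print(pc_ptsd_positive([True, True, True, False]))    # True  (positive screen)
print(pc_ptsd_positive([True, False, False, True]))   # False (negative screen)
```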
In order to validate the Arabic version of the PC-PTSD questionnaire, it was translated into Arabic and then back-translated into English. The Arabic version was pilot-tested on 10 Syrian refugees to ensure the validity of the answers and to assess its reliability.
Anthropometric measurements: The main anthropometric measurements were weight and height. Participants were dressed in light clothes and barefoot; standing height was measured to the nearest 0.1 cm using a stadiometer, and body weight to the nearest 100 g using an electronic scale. Body Mass Index (BMI) is a measure of weight adjusted for height (kg/m2), calculated by dividing weight (in kilograms) by the square of height (in metres). For the purposes of the study, BMI was recoded into four categories: underweight, normal, overweight and obese.
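For concreteness, the computation and the four categories can be sketched as follows; the numeric cut-offs are the conventional WHO bands, which the paper itself does not list.

```python
# Sketch of the BMI computation and four-way categorisation described above.
# Cut-offs follow the conventional WHO bands (18.5, 25, 30), which the paper
# does not state explicitly; they are an assumption for illustration.
def bmi_category(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        category = "underweight"
    elif bmi < 25:
        category = "normal"
    elif bmi < 30:
        category = "overweight"
    else:
        category = "obese"
    return round(bmi, 1), category

print(bmi_category(70.0, 1.65))   # -> (25.7, 'overweight')
```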
4. Data entry and statistical analysis
The Statistical Package for Social Science (SPSS) for Windows (version 22) was used for data entry and analysis.
First, bivariate analyses were performed using Fisher’s exact test, chi-squared tests and Student’s t-test. The dependent variable was high risk of PTSD, assessed with the PC-PTSD tool: a dichotomous variable, PC-PTSD (-) versus PC-PTSD (+). All variables that might be risk factors for PTSD were set as the independent variables. The two main independent variables were age and gender; other variables included marital status, place of residence, number of people and families living in the same household (crowding index), income, educational status, profession, work status, lifestyle habits, medical or psychological problems, medication taken and SLE. Frequencies and percentages were calculated for qualitative variables, and means and standard deviations for quantitative variables (BMI, crowding index). A p-value of 0.05 or less was considered statistically significant.
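As an illustration of these bivariate tests, the fragment below runs Fisher’s exact test on the help-seeking counts later reported in Table 4; note that the paper does not state which test produced each individual published p-value.

```python
# Fisher's exact test on a 2x2 table, as used in the bivariate analyses.
# Counts are taken from Table 4 (help-seeking for psychological disorders
# vs PC-PTSD screen); which test produced the published p-value is not stated.
from scipy.stats import fisher_exact

#          PC-PTSD (+)  PC-PTSD (-)
table = [[10, 1],       # sought professional help
         [203, 236]]    # did not seek help

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, p = {p_value:.4f}")
```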
RESULTS
Table 1: Socio-demographic characteristics of the 450 Syrian refugees

| Variable                                 | Frequency (n) or Mean | Percentage (%) or SD |
|------------------------------------------|-----------------------|----------------------|
| Gender                                   |                       |                      |
|   Male                                   | 69                    | 15.3                 |
|   Female                                 | 381                   | 84.7                 |
| Age (years)                              | 27.9                  | 8.1                  |
| Crowding index (co-residents/room)       | 4                     | 2.4                  |
| Crowding index                           |                       |                      |
|   ≤ 2.5                                  | 135                   | 30                   |
|   2.51-3.5                               | 108                   | 24                   |
|   > 3.5                                  | 207                   | 46                   |
| Current place of residence               |                       |                      |
|   Tented settlements                     | 62                    | 13.8                 |
|   Collective shelters                    | 92                    | 20.4                 |
|   Building                               | 296                   | 65.8                 |
| Educational level                        |                       |                      |
|   Don’t know how to read and write       | 33                    | 7.3                  |
|   Know how to read and write/Elementary  | 216                   | 48                   |
|   Complementary/Secondary/Technical      | 178                   | 39.6                 |
|   College degree                         | 23                    | 5.1                  |
| Marital status                           |                       |                      |
|   Single                                 | 54                    | 12                   |
|   Married                                | 378                   | 84                   |
|   Divorced                               | 5                     | 1.1                  |
|   Widowed                                | 13                    | 2.9                  |
| Current employment status                |                       |                      |
|   No                                     | 379                   | 84.2                 |
|   Full-time job                          | 40                    | 8.9                  |
|   Part-time job                          | 31                    | 6.9                  |
| Presence of income                       |                       |                      |
|   No                                     | 379                   | 84.2                 |
|   Yes                                    | 71                    | 15.8                 |
| Perceived income (n=71)                  |                       |                      |
|   Satisfactory                           | 25                    | 35.2                 |
|   Non-satisfactory                       | 46                    | 64.8                 |
Table 2: Health characteristics and migration factors of the 450 Syrian refugees

| Variable                                              | Frequency (n) | Percentage (%) |
|-------------------------------------------------------|---------------|----------------|
| BMI category (kg/m2)                                  |               |                |
|   <18.5                                               | 11            | 2.4            |
|   18.5-24.9                                           | 176           | 39.1           |
|   ≥ 25                                                | 263           | 58.5           |
| Tobacco consumption                                   |               |                |
|   Yes                                                 | 97            | 21.6           |
|   No                                                  | 353           | 78.4           |
| Presence of medical conditions                        |               |                |
|   No                                                  | 337           | 74.9           |
|   Yes                                                 | 113           | 25.1           |
| Migration status                                      |               |                |
|   Before 2011                                         | 15            | 3.3            |
|   2011-2013                                           | 339           | 75.3           |
|   After 2013                                          | 96            | 21.4           |
| Seeking professional help for psychological disorders |               |                |
|   No                                                  | 439           | 97.6           |
|   Yes                                                 | 11            | 2.4            |
| Number of stressful life events                       |               |                |
|   None                                                | 22            | 4.9            |
|   1-2                                                 | 181           | 40.2           |
|   3-4                                                 | 235           | 52.2           |
|   5-6                                                 | 12            | 2.7            |
| PC-PTSD                                               |               |                |
|   Negative                                            | 237           | 52.7           |
|   Positive                                            | 213           | 47.3           |
Table 3: Socio-demographic characteristics associated with positive screen for PTSD among the 450 Syrian refugees (bivariate analyses)

| Variable                                 | Positive PC-PTSD n (%) or mean±SD | Negative PC-PTSD n (%) or mean±SD | p-value |
|------------------------------------------|-----------------------------------|-----------------------------------|---------|
| Gender                                   |                                   |                                   | 0.011*  |
|   Male                                   | 23 (33.3)                         | 46 (66.7)                         |         |
|   Female                                 | 190 (49.9)                        | 191 (50.1)                        |         |
| Age (years)                              | 28.9 ± 7.6                        | 26.9 ± 8.5                        | 0.009*  |
| Crowding index (co-residents/room)       | 4.2 ± 2.7                         | 3.8 ± 2.2                         | 0.069   |
| Crowding index                           |                                   |                                   | 0.294   |
|   ≤ 2.5                                  | 58 (43.0)                         | 77 (57.0)                         |         |
|   2.51-3.5                               | 49 (45.4)                         | 59 (54.6)                         |         |
|   > 3.5                                  | 106 (51.2)                        | 101 (48.8)                        |         |
| Current place of residence               |                                   |                                   | 0.137   |
|   Tented settlements                     | 27 (43.5)                         | 35 (56.5)                         |         |
|   Collective shelters                    | 52 (56.5)                         | 40 (43.5)                         |         |
|   Building                               | 134 (45.3)                        | 162 (54.7)                        |         |
| Educational level                        |                                   |                                   | 0.479   |
|   Don’t know how to read and write       | 16 (48.5)                         | 17 (51.5)                         |         |
|   Know how to read and write/Elementary  | 95 (44.0)                         | 121 (56.0)                        |         |
|   Complementary/Secondary/Technical      | 92 (51.7)                         | 86 (48.3)                         |         |
|   University level                       | 10 (43.5)                         | 13 (56.5)                         |         |
| Marital status                           |                                   |                                   | <0.001* |
|   Single                                 | 9 (16.7)                          | 45 (83.3)                         |         |
|   Married                                | 191 (50.5)                        | 187 (49.5)                        |         |
|   Divorced                               | 4 (80.0)                          | 1 (20.0)                          |         |
|   Widowed                                | 9 (69.2)                          | 4 (30.8)                          |         |
| Current employment status                |                                   |                                   | 0.205   |
|   No                                     | 184 (48.5)                        | 195 (51.5)                        |         |
|   Full-time job                          | 14 (35.0)                         | 26 (65.0)                         |         |
|   Part-time job                          | 15 (48.4)                         | 16 (51.6)                         |         |
| Presence of income                       |                                   |                                   | 0.233   |
|   No                                     | 184 (48.5)                        | 195 (51.5)                        |         |
|   Yes                                    | 29 (40.8)                         | 42 (59.2)                         |         |
| Perceived income (n=71)                  |                                   |                                   | 0.264   |
|   Satisfactory                           | 8 (32.0)                          | 17 (68.0)                         |         |
|   Non-satisfactory                       | 21 (45.7)                         | 25 (54.3)                         |         |

*Significant with p-value < 0.05
Table 4: Health characteristics and migration factors associated with positive screen for PTSD among the 450 Syrian refugees (bivariate analyses)

| Variable                                              | Positive PC-PTSD n (%) | Negative PC-PTSD n (%) | p-value |
|-------------------------------------------------------|------------------------|------------------------|---------|
| BMI category (kg/m2)                                  |                        |                        | 0.183   |
|   <18.5                                               | 7 (63.6)               | 4 (36.4)               |         |
|   18.5-24.9                                           | 75 (42.6)              | 101 (57.4)             |         |
|   ≥ 25                                                | 131 (49.8)             | 132 (50.2)             |         |
| Tobacco consumption                                   |                        |                        | 0.369   |
|   Yes                                                 | 42 (43.3)              | 55 (56.7)              |         |
|   No                                                  | 171 (48.4)             | 182 (51.6)             |         |
| Presence of medical conditions                        |                        |                        | <0.001* |
|   No                                                  | 143 (42.4)             | 194 (57.6)             |         |
|   Yes                                                 | 70 (61.9)              | 43 (38.1)              |         |
| Migration status                                      |                        |                        | 0.094   |
|   Before 2011                                         | 5 (33.3)               | 10 (66.7)              |         |
|   2011-2013                                           | 154 (45.4)             | 185 (54.6)             |         |
|   After 2013                                          | 54 (56.2)              | 42 (43.8)              |         |
| Seeking professional help for psychological disorders |                        |                        | 0.003*  |
|   No                                                  | 203 (46.2)             | 236 (53.8)             |         |
|   Yes                                                 | 10 (90.9)              | 1 (9.1)                |         |
| Number of stressful life events                       |                        |                        | <0.001* |
|   None                                                | 0 (0.0)                | 22 (100.0)             |         |
|   1-2                                                 | 66 (36.5)              | 115 (63.5)             |         |
|   3-4                                                 | 138 (58.7)             | 97 (41.3)              |         |
|   5-6                                                 | 9 (75.0)               | 3 (25.0)               |         |

* Significant with p-value < 0.05
The socio-demographic, health and migration characteristics of our sample of Syrian refugees are described in Tables 1 and 2. Of the 450 participants, 47.3% had a positive PC-PTSD. To study the association between the socio-demographic characteristics of the Syrian refugees and PTSD screening, bivariate associations were explored, as shown in Table 3. The results indicate a significant difference between gender groups: almost half of the women (49.9%) had a positive screen for PTSD, compared to 33.3% of the men (p=0.011). Mean age was significantly higher in refugees with a positive PC-PTSD (28.9 ± 7.6 years) than in those with a negative PC-PTSD (26.9 ± 8.5 years) (p=0.009). PTSD screening was also significantly associated with marital status: a positive PC-PTSD was most frequent among divorced participants (80%), compared to 69.2% of widowed, 50.5% of married and 16.7% of single subjects (p<0.001). Crowding index, current place of residence, educational level, employment status and income were not significantly associated with a positive PC-PTSD (p>0.05).
The associations of health characteristics and migration factors with PTSD screening are displayed in Table 4. A significant association was observed between the presence of a medical condition and a positive screen for PTSD: 61.9% of subjects suffering from a medical condition had a positive PC-PTSD, compared to 42.4% of participants without medical conditions (p<0.001). BMI and tobacco consumption, however, were not significantly associated with PTSD screening (p>0.05). PTSD screening was significantly associated with seeking professional help for psychological disorders: 90.9% of refugees who sought such help had a positive PC-PTSD, versus 46.2% of those who did not (p=0.003). A positive PC-PTSD was also significantly associated with an increasing number of SLE: none of the participants without any stressful event had a positive PC-PTSD, compared to 36.5% of participants with 1-2 SLE, 58.7% with 3-4 SLE and 75% with 5-6 SLE (p<0.001). On the other hand, no significant association was observed between PC-PTSD and migration status (p>0.05).
DISCUSSION AND CONCLUSION
PTSD is the most frequently occurring mental disorder among refugees14, with reported prevalence rates ranging between 15% and 80%. A study of Cambodian refugees living in the Thailand-Cambodia border camp indicated that 15% had PTSD15. A cohort study examined the prevalence of PTSD among Iranian, Afghani and Somalian refugees who had moved to the Netherlands, at a 7-year interval (T1=2003, T2=2010); results displayed a high prevalence at both T1 (16.3%) and T2 (15.2%). The reason for this persistently high prevalence may be the late onset of PTSD symptoms and the low use of mental health care centers16. De Jong and colleagues reported that 50% of the refugees in Rwandan and Burundese camps had serious mental health problems, mainly PTSD17, while Teodorescu and colleagues, studying refugees in Norway, found that 80% had PTSD18. In our study, the proportion of positive screens for PTSD among Syrian refugees was high, at 47.3%. In 2006, a mental health assessment demonstrated that Lebanese citizens exposed to war were more likely to develop psychiatric problems such as PTSD19. Subsequently, a cross-sectional study of 681 citizens was carried out in South Lebanon in 2007 to examine the prevalence of PTSD 12 months after the cessation of the 2006 war in Lebanon; the prevalence of PTSD was 17.8%19. A recent cross-sectional study of 352 Syrian refugees settled in camps in Turkey, in which an experienced psychiatrist evaluated the participants, demonstrated that 33.5% had PTSD, mainly female refugees, people who had experienced 2 or more SLE, and those with a family history of psychiatric disorder20.
PTSD has been associated with a wide range of traumatic events: emotional or physical abuse21, sexual abuse22, parental break-up23, death of a loved one24, domestic violence25, kidnapping26, military services27, war trauma28, natural disasters29 and medical conditions including cancer30, heart attack31, stroke32, intensive-care unit hospitalization33, and miscarriage34.
Our findings should be interpreted taking into account several limitations. The first is the use of a screening tool, instead of the more accurate diagnosis of a clinician, to detect PTSD; given that a standardized screening tool was used, our rates are likely an overestimate of the true prevalence. Secondly, this study was conducted with a limited sample of Syrian refugees and should therefore not be generalized to refugees in other areas or from other countries. The third limitation is the lack of information on the presence of other Axis I psychiatric comorbidities, such as anxiety or mood disorders, that could facilitate the development of PTSD or influence its manifestations35-36.
Refugees are an important group to examine, given the high prevalence of mental health disorders. Although refugees are evaluated for health problems, there are currently no standardized screening and clinical practice guidelines for assessing PTSD in all refugees; we may therefore be missing opportunities to detect and treat these harmful and potentially fatal conditions. Our findings suggest the need to consider a standardized screening tool for PTSD in this population. In addition, a far greater percentage of patients may have “PTSD symptoms” that are abnormal but do not meet the full DSM-5 criteria for a PTSD diagnosis, yet still cause functional impairment and may later develop into diagnosable PTSD. Given the overall high prevalence, one possible model for evaluation would be a stepped screening approach: positive screens for PTSD could trigger a standardized clinical diagnostic work-up with more comprehensive assessment and early intervention. Considering the high cost of treating individuals with PTSD, screening and intervention strategies should be addressed. Greater awareness among providers and increased targeted assessment and treatment efforts may increase early detection of a wide range of PTSD presentations, preventing more serious future health problems and functional impairment among refugees.
Spasticity was first described by Lance1 in 1980, who defined it as “a motor disorder characterised by a velocity-dependent increase in tonic stretch reflexes (muscle tone) with exaggerated tendon jerks, resulting from hyperexcitability of the stretch reflex, as one component of the upper motor neuron syndrome”. Spasticity can be a consequence of many neurological conditions, including traumatic brain injury, spinal cord injury, stroke and multiple sclerosis. The annual incidence of spasticity in the lower limb following stroke, traumatic brain injury and spinal cord injury is estimated to be 30-485, 100-235 and 0.2-8 per 100,000, respectively2. Spasticity is characterised by muscle overactivity and can lead to permanent changes in the muscle fibres, resulting in muscle contractures. Contractures can be very painful and may interfere with seating, posture, mobility and activities of daily living, thus increasing care costs significantly.
Phenol has been used peripherally and intrathecally for the treatment of spasticity for many years. Botulinum toxin became available for the treatment of spasticity in the last decade, and its use has increased since then, leading to a decline in the use of phenol. Phenol is still used in patients who are sensitive to botulinum toxins or have developed antibodies to them. Phenol is both neurolytic and anaesthetic in nature3. The anaesthetic effect can be seen immediately after the injection, where the patient reports an immediate effect; the neurolytic effect takes at least two weeks, and patients should therefore be educated not to expect any significant change in spasticity before two to four weeks. Phenol can also be used in combination with botulinum toxin to treat multifocal spasticity where the dose of botulinum toxin required would exceed the recommended safe maximum, allowing several groups of muscles to be treated in a single session3.
The lethal dose of phenol has been reported to be greater than eight grams4. Phenol in aqueous solution is preferred for peripheral nerve and motor point blocks and is available in 5, 6 and 7% concentrations. Injecting botulinum toxin is quite different from performing nerve and motor point blocks: phenol nerve and motor point blocks take longer to perform, and for motor point blocks a nerve stimulator with a surface electrode is needed to localise the motor points on the muscles. In the present study, we highlight the management of spasticity in adults with a combination of botulinum toxin and phenol nerve/motor point blocks. A case series of patients who underwent combined phenol and botulinum toxin treatment is presented, describing the diagnosis, the number and location of muscles injected, the types of phenol nerve and motor point blocks, and any complications encountered.
Methods
This is a retrospective study conducted at the Rehabilitation Medicine Department of the University Hospital in Cambridge, UK. The study period extended from December 2014 to January 2017. Patients were identified from the spasticity clinic database. All patients were assessed in the spasticity clinic, and a plan to inject botulinum toxin along with a phenol nerve block/motor point block was agreed with the patient. Patients who decided to have the procedure were given a clinic appointment for the agreed injections and blocks. Patients on anticoagulants (warfarin, dalteparin or clopidogrel) were advised to stop anticoagulation 3 days before the procedure; the International Normalised Ratio (INR) was checked before the procedure, and the usual dose of anticoagulation was restarted afterwards.
Patients were consented and placed on a plinth. Only botulinum toxin type A was used in our study. It was diluted with normal saline, and the muscles were injected using either surface anatomy or electrical stimulation. Each muscle was injected at one or two sites, depending on its size.
Phenol nerve blocks and motor point blocks were performed according to the techniques described by Roy3 and Gaid5. Aqueous phenol 5% (phenol in water) was used for all procedures. The nerves were identified using a nerve stimulator with a surface electrode at a 2 mA current (Figure 1). The skin was infiltrated with 1% lignocaine, and the nerve was approached with a stimulator needle and then ablated with 5% phenol under stimulation guidance, the dose of phenol being titrated while the nerve was stimulated. The motor points were located similarly with the help of a surface electrode and marked before ablation with 1 to 2 ml of 5% phenol. The amounts of botulinum toxin and phenol were recorded. All patients were reviewed at 6 weeks for any complications.
Figure 1: Nerve Stimulator with surface electrode
Results
Between December 2014 and January 2017, we treated 29 patients with spasticity caused by different neurological conditions with a combination of aqueous phenol and botulinum toxin injections. There were 15 males and 14 females, with an age range of 18 to 80 years and a mean age of 49.3 years. The most common diagnosis was multiple sclerosis, followed by stroke (Figure 2). A total of 40 phenol nerve or motor point blocks were performed in the 29 patients. Nineteen patients (65.5%) received phenol blocks once, 9 (31%) twice, and 1 patient (3.4%) three times. Where phenol blocks were repeated, the mean interval between injections was 14.1 months (range 6-23 months). The procedure was bilateral in 16 patients (55.2%) and unilateral in 13 (44.8%). A local anaesthetic (trial) block was performed in 6 patients (20.6%) who were ambulatory before the phenol block.
Figure 2: Frequency of Diagnosis
Obturator nerve block was the most common phenol procedure performed (44.8%), followed by posterior tibial nerve block (37.9%). Two patients (6.9%) had both obturator and posterior tibial nerve blocks, 1 (3.4%) had hamstring motor point blocks, and 1 (3.4%) had gastrocnemius motor point blocks. One patient (3.4%) had bilateral obturator nerve blocks, posterior tibial nerve blocks and rectus femoris motor point blocks (Figure 3).
Figure 3: Frequency of Phenol Nerve/Motor Point Blocks
Botulinum toxin was also injected into various muscles in all 29 patients and was repeated every 4 to 6 months in the same muscles. Botulinum toxin was injected bilaterally in 12 patients (41.4%) and unilaterally in 17 (58.6%). The most common muscles injected were the hamstrings (44.8%), followed by the finger flexors (13.8%). The frequency of botulinum toxin injections is shown in Figure 4.
Figure 4: Muscles Injected with Botulinum Toxins
The most common combination in our series was obturator nerve block with hamstring botulinum toxin injections (34.4%). The combination of posterior tibial nerve block with hamstring botulinum toxin was used in 3 patients (10.3%), and 2 patients (6.8%) received posterior tibial nerve block with finger flexor botulinum toxin injections. The combinations of phenol and botulinum toxin injections are shown in Table 1. No complications were noted following either the phenol or the botulinum toxin injections.
Table 1: Combination of Phenol and Botulinum Toxins used
(NB = nerve block; MPB = motor point block)

Muscles Injected with Botulinum | Obturator NB | Posterior Tibial NB | Obturator + Posterior Tibial NB | Hamstrings MPB | Gastrocnemius MPB | Obturator + Posterior Tibial NB + Rectus Femoris MPB
Hamstrings | 10 | 3 | 0 | 0 | 0 | 0
Finger Flexors | 0 | 2 | 1 | 0 | 0 | 0
Finger and Wrist Flexors | 0 | 2 | 0 | 1 | 0 | 0
Wrist, Finger Flexors and Hamstrings | 1 | 0 | 1 | 0 | 0 | 0
Elbow and Finger Flexors | 0 | 1 | 0 | 0 | 1 | 0
Elbow, Wrist and Finger Flexors | 0 | 1 | 0 | 0 | 0 | 0
Elbow and Wrist Flexors | 0 | 1 | 0 | 0 | 0 | 0
Wrist Flexors and Knee Extensors | 1 | 0 | 0 | 0 | 0 | 0
Ankle Plantar Flexors | 1 | 0 | 0 | 0 | 0 | 0
Flexor Digitorum | 0 | 1 | 0 | 0 | 0 | 0
Discussion
Perineural injection of aqueous phenol (3 to 7%) can reduce spasticity by blocking the nerve signals to the group of muscles supplied by the nerve. Phenol produces an initial local anaesthetic effect, which is followed by neurolysis caused by protein coagulation and inflammation6. The neurolysis leaves the nerve with about 25% less function than before, but this does not disadvantage people with little or no residual function, as a mild progressive denervation can be beneficial in reducing spasticity6. Khalili et al7 first described the technique of phenol nerve blocks and suggested that re-growth of most axons occurs with preservation of gamma motor neurons. This means that phenol reduces spasticity without significantly reducing the strength of the muscle.
The combination of phenol and botulinum toxin injections has been documented in children with cerebral palsy and central nervous system degenerative diseases8. To date, there are no studies in the literature on combined phenol and botulinum toxin for the treatment of spasticity in adults. Combining phenol with botulinum toxin helps to treat multifocal spasticity, allowing more spastic areas to be treated. The most frequent pattern in the study by Gooch et al8 was obturator nerve block with gastrocnemius botulinum toxin injections, whereas in our study the most common combination was obturator nerve block with hamstring botulinum toxin injections. A possible explanation for this variance is that the majority of our study population suffered from multiple sclerosis, in which hamstring and hip adductor spasticity is a very common pattern.
The mechanism of action of phenol differs from that of botulinum toxin, yet the reduction in spasticity achieved with the two agents is comparable. Manca et al9 compared botulinum toxin and phenol nerve blocks for reducing ankle clonus in spastic paresis and concluded that both patient groups showed significant clonus reduction over time, with a greater effect in the phenol group than in the botulinum toxin group. They also suggested that the two drugs have different mechanisms of action, with phenol reducing the excitability of the alpha motor neuron. A randomised double-blind trial by Kirazli et al10 compared the effects of botulinum toxin type A and phenol on post-stroke ankle plantar flexor and invertor spasticity. There was a significant change in Ashworth scores at weeks 2 and 4 in the group who received botulinum toxin, but no significant difference between the two groups at weeks 8 and 12.10 Similarly, the decrease in clonus duration (detected by electromyography) was significant in both groups, although the botulinum toxin group showed significantly greater change at weeks 2 and 4 than the phenol group. The reason for this may be the delayed onset of action of phenol compared with botulinum toxin. Burkel et al6 studied the effects of phenol on the peripheral nerves of rats and showed that Wallerian degeneration of the nerves occurs before healing by fibrosis, which starts about 4-6 months after phenol injection. Their study also concluded that following phenol the nerves are left with 25% less function than before, which does not disadvantage people with little or no residual function6.
There is always a risk of deterioration in mobility or function due to weakness caused by a phenol nerve block. It is our usual practice to perform a local anaesthetic (trial) block before injecting phenol in all ambulatory patients and in patients who use their spasticity functionally to their advantage. In our series, 20.6% of patients underwent a local anaesthetic block before proceeding to the phenol block; no adverse effects were noted following the local anaesthetic block, and all six patients chose to have the phenol block. A recent study by McCrea et al11 looked at the effects of phenol on the position and velocity components of spasticity, in addition to strength, in post-stroke elbow flexor spasticity, and concluded that phenol paradoxically improved muscle strength in addition to reducing hypertonia11.
In our series, we used phenol mainly for lower limb muscles and botulinum toxin for both lower and upper limb muscles. For the smaller muscles of the upper limb it is difficult, though not impossible, to find the motor points, and the technique for upper limb phenol blocks has been well described in the literature3. However, when combining botulinum toxin with phenol, we find it useful to reserve the phenol blocks for the lower limb muscles. Gooch et al8 likewise injected larger proximal muscles with phenol, and smaller, distal and deeper muscles with botulinum toxin. In our series, the maximum dose of botulinum toxin used was 1000 units of Dysport, and the maximum dose of phenol was 20 ml of 5% aqueous phenol.
Conclusion
The combination of botulinum toxin with phenol injections is effective in treating multi-focal spasticity in clinical settings. The advantage of using phenol in combination with botulinum toxins is cost-reduction and the flexibility of managing various muscle groups at the same time. Further studies are needed to evaluate the long-term cost-effectiveness and complications of combining phenol and botulinum toxins, especially after repeated injections.
Endocrine disorders are frequently accompanied by psychological disturbances. Conversely, psychiatric disorders demonstrate, to a significant extent, consistent patterns of endocrine dysfunction.[1] Endocrinopathies can manifest as a myriad of psychiatric symptoms, since hormones affect the function of a variety of organ systems. The presence of psychiatric symptoms in patients with primary endocrine disorders provides new insight for exploring the link between hormones and affective function.[2] Disturbance of the hypothalamic-pituitary-thyroid axis is of considerable interest in psychiatry and is known to be associated with a number of psychiatric abnormalities.[3] Thus, a main focus of psychoneuroendocrinology is on identifying changes in basal levels of pituitary and end-organ hormones in patients with psychiatric disorders.
Psychiatric symptoms may be the first manifestations of endocrine disease but often are not recognised as such. The patient may experience a worsening of the psychiatric condition and an emergence of physical symptoms as the disorder progresses.[4] Psychiatric manifestations of endocrine dysfunction include mood disturbances, anxiety, cognitive dysfunction, dementia, delirium and psychosis. When dealing with a treatment-resistant psychiatric disorder, an endocrinopathy should also be considered as a possible cause. Psychotropic medicines may even worsen the psychiatric symptoms, which improve only once the underlying endocrine disturbance is corrected.[5]
The lifetime prevalence of depression and anxiety in groups with previously known thyroid disorder is 11.8% to 36.8% and 5.0% to 41.2% respectively.[6,7] The occurrence of major depression in diabetes mellitus (DM) is mostly estimated at around 12% (range 8-18%), and 15-35% of individuals with DM report milder forms of depression.[8] Depressive symptoms are seen in almost half of patients with Cushing's syndrome, with moderate to severe symptoms; some patients with Cushing's syndrome also experience psychotic symptoms.[9] Patients suffering from Addison's disease may be misdiagnosed with major depressive disorder, personality disorder, dementia or somatoform disorders.[4,10] Women with hyperandrogenic syndromes are at an increased risk of mood disorders, and the rate of depression among women with polycystic ovary syndrome (PCOS) has been reported to be as high as 50 percent. Central 5-HT system dysregulation that causes depression might simultaneously affect peripheral insulin sensitivity, or vice versa, possibly via behavioural or neuroendocrinological pathways, or both.[10]
Hollinrake et al (2007) screened patients with PCOS for depression and found the prevalence to be four times that of women without PCOS; the total prevalence of depressive disorders, including women diagnosed with depression before the study, was 35% in the PCOS group.[11] No specific psychiatric symptoms have been consistently associated with acromegaly or gigantism or with elevated GH levels, although adjustment disorder may occur from changes in physical appearance and from living with a chronic illness.[11] Sheehan's syndrome (SS) refers to the occurrence of a varying degree of hypopituitarism after parturition. It is a rare cause of hypopituitarism in developed countries owing to advances in obstetric care, and its frequency is decreasing worldwide. Reports of psychosis in patients with Sheehan's syndrome are rare.[13] Psychiatric disturbances are commonly observed during the course of endocrine disorders. The underlying cause can be hyper- or hyposecretion of hormones secondary to the pathogenic mechanisms, medical or surgical treatment of the endocrine disease, or genetic aberrations.[14] Psychiatric disorders frequently mimic the symptoms of endocrinological disorders. In view of the sizeable number of patients seeking treatment from our department who present with comorbid endocrinological disorders, we planned the present study to investigate psychiatric morbidity, particularly patterns of anxiety and depression, among patients with endocrinological disorders. With this background, we studied depression and anxiety in different endocrinological disorders.
Methods
The present study was conducted at the SMHS Hospital of Government Medical College, Srinagar, and the study sample was drawn from patients attending the endocrinology OPD in the Department of Medicine at Government Medical College Hospital, Srinagar (SMHS). The study was conducted over a period of one and a half years, from April 2011 to September 2012, enrolling 152 cases of endocrinological disorders. All patients were first examined by a consultant endocrinologist; patients were then selected by sampling every alternate patient. General information including age, sex, residence, economic status, past history of thyroid disorders and family history of psychiatric disorders was recorded. After the endocrinology specialist had examined the patients, a psychiatrist administered the Hospital Anxiety and Depression Scale (HADS), which was used to screen for anxiety and depressive disorders and is designed for patients suffering from chronic somatic disease. The HADS contains 14 items and consists of two subscales, anxiety and depression, with seven questions each. Each question is rated on a four-point scale (0 to 3), giving a maximum total score of 21 each for anxiety and depression. A score of 11 or more is considered a case of psychological morbidity, a score of 8-10 is borderline, and 0-7 is normal. The forward-backward procedure was applied to translate the HADS from English to Urdu by a medical person and a professional translator.[15]
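To make the scoring rule above concrete, the sketch below expresses the HADS cutoffs as a small scoring routine in Python. It is a minimal illustration, not the copyrighted instrument itself: the item texts are omitted, and the alternating assignment of items to the anxiety and depression subscales is our reading of the published questionnaire layout.

```python
# Minimal illustrative HADS scoring sketch (the instrument itself is copyrighted).
# Assumption: the 14 items alternate between the anxiety and depression
# subscales, as in the published questionnaire, and each item is rated 0-3.

def score_hads(items):
    """items: 14 integer responses (0-3) in questionnaire order."""
    if len(items) != 14 or any(not 0 <= x <= 3 for x in items):
        raise ValueError("HADS needs 14 item scores, each in the range 0-3")
    anxiety = sum(items[0::2])      # items 1, 3, 5, ... -> anxiety subscale
    depression = sum(items[1::2])   # items 2, 4, 6, ... -> depression subscale

    def band(score):                # cutoffs used in this study
        return "case" if score >= 11 else "borderline" if score >= 8 else "normal"

    return {"anxiety": (anxiety, band(anxiety)),
            "depression": (depression, band(depression))}

# Example: anxiety 13 ("case"), depression 6 ("normal")
print(score_hads([2, 1, 3, 0, 2, 1, 1, 1, 2, 1, 2, 1, 1, 1]))
```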
The participating physicians administered the HADS questionnaire to the selected patients with chronic endocrinological disorders and recorded scores for both anxiety and depression.
The patients were subjected to inclusion and exclusion criteria as given below:
Inclusion criteria
1. All endocrinological disorders.
2. Both sexes.
3. Age > 15 years.
4. Those who gave consent.
Exclusion criteria
1. Those who did not consent.
2. Unclear diagnosis.
3. Age less than 15 years.
4. Pregnancy or a history of pregnancy in the last six months.
5. Those on steroids or drugs known to interfere with thyroid function.
General description, demographic data and psychiatric history were recorded using a pretested semi-structured interview.
Statistical methods: Statistical analyses were performed using SPSS, version 16.0 for Windows. A secure computerised database was established and maintained throughout the study. Patient names were replaced with unique identifying numbers. Descriptive statistics were used to generate a profile of each illness group based on the presence of depression only, anxiety only, or both anxiety and depression. To determine whether there were any significant differences between illness groups in the prevalence of depression and anxiety disorders, an unadjusted 3×2×2 chi-square test was conducted. Data were analysed by the Pearson chi-squared test and the t test. P<0.05 was considered the significance level.
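As a hedged illustration of the unadjusted chi-square comparison described above, the sketch below reproduces the test in Python with scipy rather than SPSS, using the present/absent comorbidity counts for the three largest illness groups from Table 4. It demonstrates the method only and is not the authors' original analysis.

```python
# Illustrative chi-square test of independence between illness group and
# psychiatric comorbidity, using counts from Table 4 (thyroid 43 of 62,
# diabetes 32 of 47, PCOD 16 of 28). scipy stands in for SPSS here.
from scipy.stats import chi2_contingency

observed = [
    [43, 19],  # thyroid disorders: comorbidity present, absent
    [32, 15],  # diabetes mellitus
    [16, 12],  # PCOD
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p < 0.05 would indicate a significant difference in comorbidity between groups
```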
Consent: Informed consent was obtained from each patient; those who were considered incapable of consenting were allowed to participate with consent of their closest family member or custodian. All patients were informed about the nature of the research within the hospital and willingly gave their consent to participate. Information sheets and preliminary interviews made it clear that the choice to consent or otherwise would have no bearing on the treatment offered. The project ensured the anonymity of the subjects by replacing patient names with unique identifying numbers before the statistical procedures began.
Results
A total of 152 patients from the endocrinology department of Government Medical College, Srinagar hospitals were taken up for the study. They were evaluated in detail with regard to socio-demographic profile and the presence of psychiatric co-morbidity by HADS, and the results are presented below in tabulated form. Only patients who consented to the complete interview and responded to all HADS questions were considered in the final analyses.
Of the total 152 subjects, 71 were males (46.72%) and 81 were females (53.28%) (Table 1). Most cases belonged to the 35-45 year age group (26.3%), followed by the 25-35 year age group (24.3%); 67.7% were married and 18.4% were unmarried. More than half (51.97%) of the study subjects were from nuclear families, 82 (53.9%) were illiterate, and the majority, 84 (55.2%), belonged to middle-class families. The socio-demographic profile of the studied patients is shown in Table 2.
Of the 152 patients with endocrine disorders, 56 (37%) elicited a HADS score of 10 or less, indicating absent or doubtful anxiety or depression, while 96 (63.15%) were positive on the HADS questionnaire with an anxiety/depression score of 11 or more. The mean HADS scores for patients with anxiety alone, depression alone and combined anxiety/depression were 13.42, 15.73 and 25.62 respectively. On the basis of HADS screening, the 96 patients (63.15%) with psychiatric co-morbidity comprised 27 (28.12%) with anxiety alone, 30 (31.25%) with depression alone, and 39 (40.62%) with both anxiety and depression (Table 3). The breakdown of the different endocrinological disorders is given in Table 4. The maximum psychiatric comorbidity was found in thyroid patients (69.35%), followed by diabetic patients (68.05%) (Table 4).
Table 1: Age and sex distribution

Age group | Male n (%) | Female n (%) | Total n (%)
< 25 | 14 (20%) | 7 (9%) | 21 (14%)
25-35 | 20 (28%) | 17 (21%) | 37 (24%)
35-45 | 17 (24%) | 23 (28%) | 40 (26%)
45-55 | 11 (16%) | 19 (24%) | 30 (20%)
55 & above | 9 (13%) | 15 (19%) | 24 (16%)
Total | 71 (100%) | 81 (100%) | 152 (100%)
Mean ± SD (years) | 51.4 ± 13.7 | 56.4 ± 13.1 | 54.1 ± 13.6
Table 2: Demographic Characteristics of the Studied Patients

Characteristic | N | %

Dwelling
Rural | 98 | 64.47
Urban | 54 | 35.52

Marital status
Unmarried | 28 | 18.4
Married | 103 | 67.7
Widowed | 21 | 13.8

Occupation
Household | 61 | 40.1
Unskilled | 29 | 19
Semiskilled | 39 | 25.6
Skilled | 23 | 15.1
Professional | 8 | 5.26

Family type
Nuclear | 79 | 51.97
Joint | 28 | 18.4
Extended | 45 | 29.6

Literacy status
Illiterate | 82 | 53.9
Primary | 22 | 14.4
Secondary | 16 | 10.5
Matric | 13 | 8.55
Graduate | 11 | 7.23
Postgraduate/Professional | 8 | 5.26

Family income (Rs)
< 5000 | 45 | 29.6
5000 to 10000 | 85 | 55.92
≥ 10000 | 22 | 14.4

Socioeconomic status (Kuppuswamy Scale)
Lower | 32 | 21
Upper lower | 11 | 7.23
Middle | 84 | 55.2
Upper middle | 19 | 12.5
Upper | 6 | 3.94
Table 3: Result of HADS Scoring

Variable | Total (n=96) | Anxiety alone (n=27) | Depression alone (n=30) | Both anxiety and depression (n=39) | p value
Male | 37 (38.54%) | 8 (29.6%) | 18 (60%) | 11 (28.2%) | -
Female | 59 (61.4%) | 19 (70.3%) | 12 (40%) | 28 (71.7%) | -
Age (years), mean ± SD | 54.1 ± 13.6 | 51.4 ± 13.7 | 56.4 ± 13.1 | 54.1 ± 13.1 | < 0.005
Mean HADS score | - | 13.42 ± 3.4 | 15.73 ± 3.3 | 25.62 ± 4.3 | < 0.005
Table 4: Types of endocrinological disorders

Endocrinological disorder | Number of patients (N=152) | Psychiatric comorbidity (n) | %
Thyroid disorders | 62 (40.7%) | 43 | 69.35
Diabetes mellitus | 47 (30.92%) | 32 | 68.05
PCOD | 28 (18.4%) | 16 | 57.1
Cushing's syndrome | 5 (3.29%) | 2 | 40
Acromegaly | 2 (1.31%) | 0 | 0
Addison's disease | 1 (0.65%) | 0 | 0
Sheehan's syndrome | 3 (1.97%) | 2 | 66.6
Miscellaneous | 4 (2.63%) | 1 | 25
Table 5: Psychiatric Co-morbidity across Socio-demography of the Patients

Variable | Present n (%) | Absent n (%)

Dwelling (p < 0.005, Sig)
Rural | 59 (60.02) | 39 (39.7)
Urban | 37 (68.5) | 17 (31.4)

Marital status (p > 0.005, NS)
Unmarried | 8 (28.5) | 20 (71.4)
Married | 72 (69.9) | 31 (30)
Widowed | 16 (76.1) | 5 (23.8)

Occupation (p > 0.005, NS)
Household | 57 (93.4) | 4 (6.55)
Unskilled | 14 (48.2) | 15 (51.7)
Semiskilled | 9 (39.1) | 30 (76.9)
Skilled | 14 (60.8) | 9 (39.1)
Professional | 2 (25) | 6 (75)

Family type (p > 0.005, NS)
Nuclear | 45 (56.9) | 34 (43.0)
Joint | 22 (78.5) | 6 (21.4)
Extended | 29 (64.4) | 23 (51.1)

Literacy status (p > 0.005, NS)
Illiterate | 70 (85.2) | 12 (14.6)
Literate | 26 (36.1) | 46 (63.8)

Family income, Rs (p > 0.005, NS)
< 5000 | 17 (37.7) | 28 (62.2)
5000 to 10000 | 65 (76.4) | 20 (23.5)
≥ 10000 | 14 (63.6) | 8 (36.3)

Socioeconomic status (p > 0.005, NS)
Lower | 18 (50) | 18 (50)
Upper lower | 7 (63.6) | 4 (36.3)
Middle | 59 (70.2) | 25 (29.7)
Upper middle | 10 (52.6) | 9 (47.3)
Upper | 2 (33.3) | 4 (66.6)
Discussion
This study is the first to offer data on psychiatric morbidity among endocrine patients in the Kashmiri population. In our study, 96 patients (63.15%) were positive on the HADS questionnaire with an anxiety/depression score of 11 or more. The results suggest that patients suffering from endocrinological disorders are likely to have a co-morbid psychiatric disorder.[5,16] Depressive and anxiety disorders are the commonest psychiatric disorders in endocrinological patients.[3] Numerous studies have shown a high correlation between depression and endocrinological disorders, and this study supports these findings, with 30 participants (31.25%) having depressive symptoms alone on the HADS.[3,16] A further 39 respondents (40.62%) had both depressive symptoms and an anxiety disorder, and 27 participants (28.12%) were diagnosed with an anxiety disorder alone, which is slightly higher than the lifetime prevalence of anxiety disorder in men.[16] Our findings of a high proportion of affected respondents (45.7%), of females outnumbering males (59 [61.4%] vs. 37 [38.54%]), and of most men presenting with endocrinological disorders between the ages of 35 and 45 years have also been reported in previous studies.[4,8]
The findings of our study suggest that psychiatric disorders are highly prevalent in endocrinological disorders and largely unrecognised in the primary care setting. Endocrine disorders of different kinds, irrespective of treatment, have been associated with psychological distress, and attention to the psychological wellbeing of patients with endocrine disorders may provide new insights in clinical endocrinology. Furthermore, psychological disorders comorbid with endocrinological disorders add to the disability as well as the cost to the individual and society.[17] Most clinicians do not suspect this important association at the outset, resulting in delayed diagnosis. Thus, the high prevalence of anxiety and depression in endocrinological disorders in our study supports a case for screening for these disorders in endocrinology clinics, and recognition and treatment of these comorbidities could improve patient outcomes. Under-recognition of psychiatric morbidity is not an uncommon phenomenon and has been found in similar local studies of psychiatric morbidity in other medical illnesses.[8] More attention should therefore be paid to recognising psychiatric morbidity in this group of patients. The reasons for the increased frequency of psychiatric disorders are multi-factorial; chronic illness itself leads to psychological stress.
The major limitation of our study was its relatively small sample size. Another limitation is its cross-sectional design, which does not allow us to determine the direction of causality in the relationship between endocrinological disorders and depression/anxiety. More community-based studies are required to assess the magnitude of the problem and to lay down principles to help such patients, and prospective studies with bigger sample sizes are essential to clarify the temporal relationship. Future studies should also focus on replicating or refuting these findings in larger samples, as well as on testing interventions aimed at targeting psychological morbidity in this patient group. As far as we are aware, this is the first study of its kind in Kashmir.
Endocrinological disorders account for a large proportion of referrals to psychiatric clinics, and psychiatric misery is added to an already devastating metabolic disease. In addition, the costs associated with psychiatric morbidity, both to the individual and to society, are substantial. Thus, the high prevalence of anxiety and depression in endocrinological disorders in our study supports a case for screening for these disorders in endocrinology clinics. Furthermore, recognition and treatment of these comorbidities could improve patient outcomes.
Hepatitis viruses are the most widespread cause of hepatitis, and of some cancers and lymphomas, in humans1. Hepatitis is a serious disease of the liver, described as a lifelong infection with swelling and inflammation (presence of inflammatory cells) in the liver which, if it progresses, may lead to cirrhosis (scarring) of the liver, liver cancer, liver failure and death. Hepatitis B (HBV) and hepatitis C (HCV) are viral types of hepatitis that lead to jaundice (a yellow discolouration of the skin, mucous membranes and conjunctiva of the eye), anorexia (poor appetite), fatigue and diarrhoea; the infection frequently remains undiagnosed and leads to a chronic carrier state, and most infected individuals remain asymptomatic1-3. The hepatitis B virus is a DNA virus belonging to the Hepadnaviridae family, while the hepatitis C virus is a small, single-stranded RNA virus with a diameter of about 50 nm, belonging to the Flaviviridae family. Hepatitis B surface antigen (HBsAg), consisting of the surface coat lipoprotein of the hepatitis B virus, is present in the serum of those infected with hepatitis B. Anti-HCV antibody is a substance that the body makes to combat HCV4.
Hepatitis B virus is transmitted through blood and blood products, semen, vaginal fluids and other body fluids. Hepatitis C virus is a blood-borne, parenterally transmitted infection. Vehicles and routes of parenteral transmission include contaminated blood and blood products, multiple transfusions (thalassaemic and haemophilic patients), needle sharing, contaminated instruments (e.g. in haemodialysis, reuse of contaminated medical devices, tattooing devices, acupuncture needles and razors) and occupational and nosocomial exposure5-8. It stands to reason that there is an occupational risk of transmission of hepatitis viruses in the health care setting, where unknown carriers of hepatitis infection undergo different procedures involving the chance of percutaneous blood contact, including transmission from infected patients to staff, from patient to patient, and from infected providers to patients9. The lack of routine serological screening prior to surgery is one of the factors responsible for increased disease transmission; the major risk factors include re-use of contaminated syringes and surgical instruments and improperly screened blood products2. Without meticulous attention to infection control and to disinfection and sterilisation procedures, the risk of transmission of blood-borne pathogens in the health care setting is magnified.
The aim of our current study was to estimate the incidence of hepatitis B and hepatitis C among patients undergoing eye surgery at the Department of Ophthalmology, Liaquat University of Medical and Health Sciences Jamshoro, at Hyderabad. This is one of the largest tertiary care centres in Sindh and a major referral centre for the whole of the interior of Sindh province.
MATERIAL AND METHODS
Study design and patients
This prospective observational study was carried out at Liaquat University Eye Hospital, Hyderabad, from June 2014 to February 2016. A total of 2200 patients undergoing eye surgery, who were unaware of their hepatitis B and C status, were included in this study. No restriction was placed on age or gender, to ensure maximum participation.
Blood samples
Blood samples from all these patients were collected in the hospital laboratory (Scientific Ophthalmic Diagnosis & Research Lab). Each patient was serologically screened using immunochromatography (the ICT method) for qualitative detection of hepatitis B antigen and hepatitis C virus antibodies, to establish the carrier status of patients before surgery.
The blood was collected by a qualified technician/phlebotomist of our hospital laboratory under the supervision of a consultant pathologist. Samples were allowed to coagulate at room temperature for 30 minutes and then centrifuged at 3000 revolutions per minute (rpm) for 10 minutes. The serum samples were separated and kept frozen at -20°C for chemical and immunoassays. Screening was based on the detection of HBV antigen and of HCV-specific antibodies in the sera using enzyme immunoassays. The test only shows whether a person has ever been infected by HBV or HCV, not whether the virus is still present. According to the manufacturers' literature, the relative sensitivity and specificity of the HCV and HBV testing kits were 96.8% and 99% respectively.
Patients whose screening test results were positive were further tested for confirmation by the ELISA (enzyme-linked immunosorbent assay) method (4th-generation ELISA) and were advised to have further testing by polymerase chain reaction (PCR) for qualitative or quantitative detection of viral DNA/RNA.
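For context, the quoted kit performance can be turned into a worked example of why screen-positive results were confirmed rather than accepted outright. The sketch below is illustrative only: it assumes the manufacturers' figures of 96.8% and 99% are sensitivity and specificity respectively, and uses the anti-HCV frequency observed in this cohort (about 12.8%) as the pre-test probability.

```python
# Hedged worked example: positive predictive value (PPV) of the ICT screen.
# Assumptions: sensitivity 96.8%, specificity 99% (our reading of the
# manufacturers' figures) and a pre-test prevalence of ~12.8% (this cohort).

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(0.968, 0.99, 0.128)
print(f"PPV ~ {ppv:.1%}")  # about 93%, hence confirmation by ELISA and PCR
```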
All data were entered into SPSS version 16, and the prevalence and percentage of all variables were calculated.
RESULTS
A total of 2200 patients were operated on during the study; 1255 (57.04%) were male and 945 (42.95%) were female.
Of these 2200 patients, 338 (15.36%) were serologically positive for hepatitis B virus or hepatitis C virus. Of the 338 positive patients, 56 (2.54% of the total) were positive for hepatitis B surface antigen (HBsAg) and 282 (12.81%) were positive for hepatitis C antibody (HCVAb) (Figures 1 and 2). The majority were female, and 226 (66.86%) were in their 4th or 5th decade of life, in both sexes (Figure 3).
Figure 1- A: Serologically positive for hepatitis C antibodies, B: Serologically negative for hepatitis C antibodies
Figure 2 - A: Serologically negative for hepatitis B antigen, B: Serologically positive for hepatitis B antigen
Figure 3: Incidence of Hepatitis B & C in different age group
DISCUSSION
Hepatitis B virus (HBV) and hepatitis C virus (HCV) are among the principal causes of liver disease, with different frequency rates and various types all over the world. The World Health Organization (WHO) estimates that there are nearly 400 million people with chronic HBV infection and 170 million people with chronic HCV infection worldwide; hepatitis B is estimated to result in 563,000 deaths and hepatitis C in 366,000 deaths annually1, 6, 10-12. The occurrence of hepatitis varies from country to country. Epidemiological estimates by the WHO show a low prevalence of hepatitis C (<1%) in Australia, Canada and Northern Europe, and a prevalence of almost 1% in countries of medium endemicity, such as the USA and most of Europe. The frequency is at its peak (>2%) in many countries of Africa, Latin America, and Central and South-East Asia5. As far as the Pakistani population is concerned, the incidence of hepatitis B and C is escalating: previous studies report that 10% of the Pakistani population is affected by HBV and 5-10% by HCV3. Prevalence also varies among different regions of the same country and continues to rise in certain parts; in rural areas especially, the percentage of infected individuals is significantly higher2.
The total incidence of hepatitis B and hepatitis C in our study was found to be 15.36%. This is comparable to Naeem et al, who found 12.99%2, while our previous study reported an anti-HCV incidence of 29.60%7. W Ul Huda et al reported a 17.33% incidence of HCV infection among their operated patients5, whereas a study conducted by Khurrum et al reported a 6% incidence of anti-HCV antibodies in health care workers in a local hospital13. Ahmed et al found the prevalence of hepatitis B and hepatitis C in preoperative cataract patients to be higher in males (59.18%) than in females (40.82%), with the total prevalence in males much higher than in females14, which is contrary to our result. A study conducted in 2010 in different eye camps in Pakistan showed that 108 of 437 patients were infected with hepatitis B or hepatitis C, with a higher prevalence in females, 60.18% (65/108), than in males, 39.81% (43/108)15. Concerning demographic variables, the risk of HCV seropositivity increased with age, i.e. 7.1% at 20 to 30 years versus 21.4% at 40 to 50 years. In our study, the higher prevalence of hepatitis B and C was in the age range of 30-60 years, which is comparable to the study of Talpur et al, in which 65% of positive patients were above the age of 40 years16.
This study shows that the prevalence of these hepatitis-causing viral pathogens is quite high. Doctors and paramedical staff in surgical and medical practice are at high risk of acquiring blood-borne diseases from the patients on whom they operate.
CONCLUSION
The aim of the present study was to assess the prevalence of HBV and HCV infection among preoperative patients. The incidence of these hepatitis-causing viruses is high in our population. It should therefore be mandatory to screen every patient for hepatitis B and C before any surgical procedure. Surgeons and health care professionals should protect themselves by using protective masks, eye protection glasses and double gloves when handling infected cases. Used infected material, needles and other waste should be destroyed properly using biosafety protocols.
Anterior knee pain, or patellofemoral pain, is a common clinical presentation, especially in females, and a challenging clinical problem. The specific cause can be difficult to diagnose, as the aetiology remains poorly understood and various pathological entities can result in pain in the anterior aspect of the knee.
Multiple surgical options have been used to treat the condition. Lateral retinacular release is one of these options and has been used to treat anterior knee pain with variable results1-5. The aim of this study was to assess isolated patella lateral retinaculum release as a treatment for anterior knee pain.
Materials and Methods
We performed a retrospective review of all patients who underwent isolated arthroscopic lateral patellar retinacular release by a single surgeon between July 2007 and July 2010. Exclusion criteria were significant patellar instability, severe mal-alignment on radiological or clinical assessment, and additional procedures including cartilage debridement, meniscal tear repair/excision or patellar stabilisation.
Data were collected from case notes (demographics, pre-operative and intra-operative findings and any post-operative complications), archived radiographs and postal questionnaires, including pre- and post-procedure Oxford Knee Score (OKS) as well as patient satisfaction. Patient satisfaction questions included a grading of satisfaction from 1 (completely dissatisfied) to 10 (completely satisfied) and whether the patient would choose to have the procedure again if given the choice.
Independent factors assessed were age, sex, tightness of the lateral retinaculum, osteoarthritic x-ray changes in all compartments, intra-operative findings of grade of arthritis and lateral subluxation, and post-operative physiotherapy. The primary outcomes assessed were patient-reported outcome measures, including improvement in post-procedure OKS and patient satisfaction scores. SPSS Version 20 was used for analysis.
Pre-operative and post-operative OKS, total and components, were compared using the Wilcoxon signed-rank test. For total OKS, the Mann-Whitney U test was used for two-group (nominal) factors and the Kruskal-Wallis test for factors with more than two groups. The individual OKS components compared were the ability to kneel and the ability to climb stairs, these being more representative of the patellofemoral joint.
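As a hedged illustration of the paired comparison described above, the sketch below runs a Wilcoxon signed-rank test in Python with scipy rather than SPSS. The pre/post scores are hypothetical stand-ins, not the study data (the study's mean OKS improved from 23.05 to 35.30).

```python
# Illustrative Wilcoxon signed-rank test on paired pre/post OKS scores.
# The score lists are hypothetical; scipy stands in for SPSS Version 20.
from scipy.stats import wilcoxon

pre_oks = [18, 22, 25, 30, 21, 19, 28, 24, 26, 17]    # hypothetical
post_oks = [30, 33, 36, 41, 29, 27, 40, 35, 38, 25]   # hypothetical

stat, p = wilcoxon(pre_oks, post_oks)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
# p < 0.05 indicates a significant paired change, as reported for total OKS
```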
Results
Fifty-nine patients were identified, with a male to female ratio of 1:1.5. The mean age was 58.7 years (range 25 to 77). Forty patients (67%) returned completed forms. Four patients had undergone further surgery (three total knee replacements and one subsequent arthroscopic procedure for meniscal tears) and were excluded from the study. Four patients had bilateral procedures. Therefore, after the exclusions for further surgery and for failure to return completed forms, 36 patients were included, on whom 40 procedures had been performed. Changes of osteoarthritis, graded according to the Kellgren and Lawrence system, were noted on the medial and lateral facets of the patella on pre-operative Merchant views (Table 1), as well as in the tibiofemoral compartment.
Table 1: Pre-operative radiographic grades of patellofemoral change

Grade | Medial Facet n (%) | Lateral Facet n (%)
0 | 2 (5) | 1 (2.5)
1 | 6 (15) | 4 (10)
2 | 15 (37.5) | 13 (32.5)
3 | 16 (40) | 15 (37.5)
4 | 1 (2.5) | 7 (17.5)
Total | 40 (100) | 40 (100)
All patients had undergone a standardised pre-operative physiotherapy regimen with no significant benefit. Two had had intra-articular hyaluronic acid injections with no benefit.
All procedures were performed by a single surgeon (PE), and intra-operative findings of cartilage Outerbridge grade were noted in all compartments. Closed lateral retinacular release was performed with a Smillie knife from just below the lower end to the upper border of the patella.
The mean follow-up duration was 20.43 ± 10.64 months. Patients were divided into three follow-up groups: 6-12 months (6 cases), 12-18 months (18 cases) and >18 months (16 cases). The best results were in the 12-18 month group, but no statistically significant difference was found between the groups. There was no significant difference in age or gender distribution among the follow-up groups, nor in age, gender or follow-up duration between responders and non-responders to the questionnaire. There were no reported post-operative complications.
Twenty-four (60%) underwent post-operative physiotherapy. The mean OKS improved from 23.05 (range 11-40) to 35.30 (range 14-48) [p < 0.0001]. Individual components of the OKS, particularly the ability to climb stairs and the ability to kneel, also showed statistically significant improvements (Figures 1 and 2).
Fig 1 – OKS – ability to climb stairs
Fig 2 – OKS – ability to kneel
Univariate analysis showed that improvements in total OKS and in the OKS kneeling component were significantly associated with a higher grade of radiographic lateral patellofemoral joint wear (p = 0.025 and 0.042 respectively) and with post-operative physiotherapy (p = 0.018 and 0.003), while improvement in the OKS stair-climbing component was significantly associated with higher-grade cartilage wear, noted intra-operatively, of the trochlea (p = 0.042) and patella (p = 0.022).
However, the OKS components lost this significance if there was Outerbridge grade 3 or greater wear in the tibiofemoral articulation.
The procedure had a high mean satisfaction score of 8.2 (range 4 to 10), and 32 of 36 patients would have the procedure again if needed.
Discussion
Anterior knee pain, or patellar pain syndrome, is a very common clinical problem faced by orthopaedic surgeons; however, the aetiology remains poorly understood. Mori et al6 identified evidence of degenerative neuropathy in 29 of 35 histologically examined specimens of resected lateral retinaculum, suggesting that the pain may originate in the lateral retinaculum; lateral retinacular release would denervate this tissue, producing symptomatic relief. Ostermeier et al7 measured patellofemoral contact pressures and kinematics in fresh-frozen cadaver specimens before and after lateral release, and concluded that release could decrease pressure on the lateral patellar facet in flexion but did not stabilise the patella or medialise patellar tracking. This possibly explains our finding of improvement with lateral patellofemoral joint wear.
Arthroscopic lateral release remains a controversial topic because of a lack of well-designed randomised studies. Fulkerson and Shea8 suggested that knees showing lateral patellar tilt without subluxation were more likely to benefit from a lateral release in the absence of grade III or grade IV changes in the articular cartilage. Korkala et al9 showed that lateral release tended to improve symptoms in patients with grade II to grade IV chondromalacia. Our findings concur: the greater the patellofemoral cartilage wear, the more significant the improvement.
Lodhi et al10 performed a prospective study of elderly patients with patellofemoral osteoarthritis and pain that conservative management had failed to improve, and concluded that the procedure improves function and provides significant pain relief, successfully deferring the need for arthroplasty; they therefore recommended the procedure in middle-aged to elderly patients with symptomatic patellofemoral osteoarthritis.
Twaddle and Parkinson11 suggested lateral release to be an effective, reliable and durable procedure in ‘carefully selected patients’ through their retrospective study.
Our study has the limitations of a single-surgeon series and a retrospective review. However, it reflects some of the findings from previous studies, suggesting that the procedure is effective in improving symptoms associated with cartilage changes in the patellofemoral articulation without significant tibiofemoral joint osteoarthritis. Further well-designed randomised controlled trials are needed to give a more definitive answer.
Conclusion
Isolated lateral patellar retinacular release can be effective for anterior knee pain in carefully selected patients (without significant instability or mal-alignment, with high patellofemoral but low tibiofemoral wear) who have failed conservative management. It particularly improves patients' ability to kneel and climb stairs, giving a high satisfaction score. The grade of patellofemoral cartilage wear is the most significant factor in determining this, with post-operative physiotherapy further augmenting the good results.
India ranks second in the world not only in terms of its population but also in disaster proneness.1 Disasters, whether natural or man-made, result in a wide range of mental and physical health consequences.2 The international public agenda has taken notice of the protection and care of children in natural and man-made disasters, in large part because those affected and overlooked often include children and adolescents.3 There is continuing controversy about the impact of disasters on victims, including children,4,5 and some investigations deny that serious psychological effects occur.6,7,8 However, further research has found that the criteria used in these studies were extremely narrow and inadequate, and hence more systematic, clinically relevant investigations are required.9 For children and adolescents, the response to disaster and terrorism involves a complex interplay of pre-existing psychological vulnerabilities, stressors and the nature of support in the aftermath. Previous research has shown that direct exposure to different types of mass traumatic events is associated with an increase in post-traumatic stress symptoms,10,11,12,13 anxiety, and depression,11,14 which are frequently comorbid with post-traumatic stress reactions among youth.15 To the best of our knowledge, studies on the long-term psychological effects of disasters on younger age groups from South Asian countries are only a handful, even though the frequency and extent of natural disasters in this part of the world are considerable. As trauma during childhood and adolescence can etch an indelible signature on the individual's development and may lead to future disorder,16 the need for such studies is underscored.
A snowstorm followed by an avalanche struck the small mountainous village of Waltengu Nard in South Kashmir, India on 19th February 2005, about a month after the devastating Indian Ocean tsunami. Of the total population, 24.77% (n=164) perished.17 As reported, the total population of children and adolescents prior to the disaster was 242, of whom 52 (21.49%) died.17 The present study aims to determine long-term psychiatric morbidity among the surviving children and adolescents of this disaster-affected region five years after the snowstorm, based on the notion that psychiatric disturbances can be present in children and adolescents years after a disaster has occurred.18,19,20 The socio-demographic variables of the patients were also studied. The results may support the need to apply wide-area epidemiological approaches to mental health assessment after any large-scale disaster.
Material and Methods
The study was designed as a survey of children attending school. Children aged 6 to 17 years from the high school near Waltengu Nard were taken up for the study. Only those children who were present in the area during the disaster were included. Those with any psychiatric disorder prior to the disaster, mental retardation, organic brain disorder, serious physical disability prior to the disaster (e.g. blindness, polio, amputated limbs) or severe medical conditions (e.g. congenital or rheumatic heart disease, tuberculosis, malignancy) were excluded. Within the school, an alphabetically ordered list was prepared including all classes with children aged 6 years to 17 years 11 months. Every 3rd student on this list was chosen and subjected to the inclusion and exclusion criteria until a sample size of 100 children was complete. Informed consent was obtained both from the child and from one of his/her caregivers or parents.
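For clarity, the sampling step described above can be summarised in a short sketch. It is illustrative only: `roster` (the alphabetically ordered school list) and `is_eligible` (the inclusion/exclusion check) are hypothetical stand-ins for the procedures used in the study.

```python
# Minimal sketch of the systematic sampling described above: every 3rd child
# on the alphabetical roster is screened against the eligibility criteria
# until 100 children are enrolled. roster/is_eligible are hypothetical.

def systematic_sample(roster, is_eligible, step=3, target=100):
    sample = []
    for child in roster[step - 1::step]:   # every 3rd entry on the list
        if is_eligible(child):
            sample.append(child)
            if len(sample) == target:
                break
    return sample
```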
Selected children were administered the Mini International Neuropsychiatric Interview for Children and Adolescents (MINI-KID), a DSM-IV-based diagnostic interview with high reliability and validity, for evaluation of symptoms and diagnosis.21,22 A semi-structured proforma was prepared for the socio-demographic profile. Kuppuswamy's Socioeconomic Status Scale (2007) was used for determining socio-economic status,23 and the Oslo-3 Social Support Scale (OSS-3) was used to assess social support.24
Interviews were conducted by trained psychiatrists of the Department of Psychiatry, GMC Srinagar, following formal training in administering the MINI-KID. The data were then subjected to appropriate statistical methods; a p-value of less than 0.05 was taken as significant.
Results
Of the 100 children and adolescents studied (41.32% of the affected population of children and adolescents), 41 were noted to have at least one psychiatric diagnosis (hereafter "patients"). The socio-demographic profile of these patients is presented in Table 1, and the age and sex distribution of diagnoses in Tables 2 and 3 respectively.
A total of 54 diagnoses were observed in these 41 patients (Figure 1), with comorbidities present in 12 patients (29.27%): 11 of these 12 had two psychiatric disorders present concurrently, and 1 had three concurrent psychiatric diagnoses. Post-traumatic stress disorder (PTSD) was the commonest comorbid diagnosis, seen in 6 patients (42.86% of total PTSD cases). This was followed by major depressive disorder (MDD), generalised anxiety disorder (GAD), suicidality, social phobia, panic disorder, agoraphobia and separation anxiety disorder (SAD) in 2 each, and attention deficit/hyperactivity disorder (ADHD), conduct disorder (CD), specific phobia (dark), substance abuse and dysthymia in one patient each. Studies have consistently shown the presence of psychiatric comorbidities post-disaster.48,49 Of the total 54 diagnoses, the commonest were anxiety disorders other than PTSD (37.04%, N=20), PTSD (25.93%, N=14) and affective disorders (MDD, dysthymia and mania; 14.81%, N=8).
Figure 1: Diagnostic profile of the patients
Discussion
When children and their families are involved in natural or man-made disasters, they may be exposed to diverse stressors which may impact the mental health of the survivors, including children.25 Studies have suggested that reliance on parental reports of children's distress may not be valid, as parents typically under-report symptoms compared with child and adolescent self-report in mental health surveys.26 Thus, in our study, the psychiatric interview of each child was done individually, without taking leads from their parents. In the early "heroic" and "honeymoon" phases of disaster relief there is much energy, optimism and altruism; as fatigue sets in over time and frustrations and disillusionment accumulate, more stress symptoms may appear.27 Accordingly, the study was carried out five years after the disaster to capture this delayed response in the form of psychiatric morbidity.
The extensive stress placed on our sample population by the snowstorm resulted in a high prevalence of psychiatric disorders, which was apparently not due to any other psychological stress during this period. Despite the fact that the study was done five years after the disaster, it found high psychiatric morbidity. Many young survivors reported restlessness and fear with the return of the season in which the snowstorm occurred. This kept the memories of the disaster and the losses fresh in their minds, not allowing the wounds to heal; some said they could not keep their minds off the snowstorm during the weeks approaching the anniversary, much like the so-called anniversary reactions.28 Even children and adolescents who have rebuilt their homes or found new dwellings to rent frequently feel a sense of loss at the anniversary. Although the area was provided with adequate relief in terms of better infrastructure, education, employment and financial help in the years after the disaster, perhaps four such anniversary reactions, and the fact that they are still living in the same geographical area and climatic conditions, have not allowed them to settle into a routine free of psychological distress. Of the total sample of 100 children, 41% (N=41) received at least one diagnosis. This is similar to a study by Kar and Bastia after a natural disaster (cyclone) in Orissa, which found 37.9% of adolescents with any diagnosis.29 Similarly, Margoob et al found that 34.39% had a psychiatric disorder at the end of one year after the disaster.17 Other studies have yielded results in the range of 12% to 70% for total psychiatric morbidity.26,30-33
PTSD was the commonest individual diagnosis in our study, at 14% (N=14) of the total population. Studies have shown PTSD prevalence after disasters ranging from as high as 72%34 to as low as 8%.35 However, these were done immediately or within a few months after the disaster, and the longitudinal pattern was not studied. A study by Margoob et al reported a prevalence of 18.51% in a sample of survivors one year after the same snowstorm on which the present study is based.17 Similarly, Bockszczanin et al, 2.5 years after a flood in Poland, reported 18% of children to be suffering from PTSD.36 Thus our finding of 14% of patients suffering from PTSD follows the same trend, as we studied them five years after the disaster. The diagnosis of PTSD in our study was more common among the pre-adolescent age group, 22.58% (N=7), and adolescents, 33.33% (N=2). Similar findings were reported by Hoven et al, who found a prevalence of 20.1% in this age group.30 PTSD was also more frequent in females in our study, observed in 16.98% of females (N=9) compared with 10.64% of males. Hoven et al also found a higher prevalence in girls (13.3% vs. 7.4%).30
Anxiety disorders (excluding PTSD) formed the most common collective diagnostic category in our sample, present in 20% (N=20) of the sample population and constituting about 37.04% of total diagnoses. These included GAD 5% (N=5), SAD 4% (N=4), social phobia 3% (N=3), agoraphobia 3% (N=3), panic disorder 2% (N=2), OCD 2% (N=2) and specific phobia 1% (N=1). Similarly, Norris et al found anxiety in various forms in 32% of their sample of disaster victims,25 and similar findings were reported by Reijneveld et al.37 Hoven et al, in an important study after 9/11, found a prevalence of various anxiety disorders of 28.6%,30 to which our study correlates closely.
GAD was the commonest anxiety disorder in this group, with a prevalence of 5% (N=5). This is almost half the prevalence found in earlier post-disaster studies of children and adolescents by Kar and Bastia29 (12%) and Hoven et al30 (10.3%); however, those studies were conducted within a few months of the disaster and hence found a higher prevalence of GAD than ours. GAD was more common in girls than boys (7.55% vs. 2.12%), similar to the study by Hoven et al.30 SAD was also prominent, with 4% (N=4) of the sample receiving the diagnosis. Some studies, such as that by Hoven et al, found it prevalent in 12.3% of their sample 6 months after 9/11,30 whereas other studies have found SAD to be comparatively less frequent post-disaster in children and adolescents.34 Our findings are modest and lie between these two studies; moreover, ours was a long-term study, hence the somewhat lower SAD figure. SAD in our study was more prevalent in girls than boys (5.66% vs. 2.12%) and was seen exclusively below 10 years of age, findings in tune with the study by Hoven et al.30
Panic disorder showed a low prevalence in our study, found in only 2% (N=2) of patients, in both of whom it was comorbid: with MDD in one and with agoraphobia in the other. Studies immediately post-disaster found prevalences of around 10.8% (Math et al)32 and 8.7% (Hoven et al).30 However, an earlier study of survivors of the same area one year after the disaster found a 3.08% prevalence of panic disorder, which is very similar to ours.17 It was more prevalent in females, which correlates well with the study by Hoven et al.30 Agoraphobia was present in 3% (N=3) of patients, comorbid with panic disorder in one and with PTSD in another, and an individual diagnosis in one. Hoven et al found high rates of agoraphobia post-disaster, about 14.8%,30 but again that study was done only 6 months after 9/11, hence the higher morbidity. A female preponderance of the diagnosis was seen (3.77% vs. 2.12%), as in earlier studies.30
Obsessive traits are known to increase in the surviving population subsequent to disaster,38 and 2% of our cases satisfied the criteria for OCD. The commonest obsessions were recurrent, intrusive and distressing themes related to the disaster and ruminations about whether it could have been prevented, followed by worries about harm befalling themselves or family members, or fear of harming others through losing control over aggressive impulses. Other obsessive themes related to scenes of trauma, commonly blood, and obsessions regarding extreme fears of contamination were also present.
The affective disorders have been studied less often than PTSD after disaster. Depression is known to occur with increased frequency subsequent to disaster.25 MDD was present in 4% (N=4) of the total sample population. Studies conducted immediately after disasters have found higher prevalences: Math et al,32 Kar and Bastia29 and Catani et al33 reported 13.5%, 17.6% and 19.6% respectively. A study of adults in the same population as ours found a prevalence of MDD of 29.6% at three months and 14.28% at one year after the disaster.17 This decreasing trend is substantiated by, and in line with, the findings of our study. MDD was more common in females (5.66% vs. 2.12%), which is similar to the study of Hoven et al,30 and our finding of an increased prevalence of MDD in middle adolescents (7.69%) compared to other age groups is also comparable to Hoven et al.30 Dysthymia was observed in 3% (N=3) of our sample. An increased prevalence of dysthymia has not been reported post disaster in earlier studies. Our finding could be part of a broader affective spectrum, with dysthymia resulting from diminished self-esteem and a sense of helplessness subsequent to the disaster. In addition to the duration criterion for depression, these patients were given the diagnosis of dysthymia because their depressed mood was more apparent subjectively than objectively. Finally, these patients could have been on the natural course of dysthymia, which usually begins in childhood. Dysthymia and MDD combined accounted for 7% (N=7) of patients; taken as a collective depression category, the results become somewhat more comparable with the above studies. One patient had mania (past), with a positive family history of bipolar affective disorder. This could be an incidental finding, even though psychological stress is known to precipitate mania.39 Moreover, at 1% the prevalence in our study is even lower than that in the general population, so it could be an artifact.40

Studies have consistently found an increased prevalence of adjustment disorder after disaster.41 In our study the prevalence of adjustment disorder was 3% (N=3; anxiety 2, depression 1). In a study by Math et al 3 months after the tsunami it was 13.5%.32 The lower prevalence in our study is again attributable to its long-term nature. The role of trauma, stress and negative life events as risk factors for suicidal ideation and behavior has long been recognized.42 A longitudinal investigation of trends in suicidality and mental illness in the aftermath of Hurricane Katrina found significant increases in suicidal ideation and plans in the year after the disaster as a result of unresolved hurricane-related stresses.43 Suicidality in our population sample was found in 2% (N=2). These results are broadly in line with those of Kessler et al, whose higher prevalence of 6.4% was recorded immediately after Hurricane Katrina.43
Many symptoms of PTSD overlap with those of ADHD and CD.44 In our study, each of these disorders was present in 2% of the sample, and in one patient they were comorbid with each other (ADHD with CD). In a study by Hoven et al 6 months after 9/11, the prevalence of CD was found to be as high as 12.8%,30 which may reflect the immediate post-disaster nature of that study. In addition, because of the symptom overlap, more weight was given to the PTSD diagnosis in our sample.
Three patients had a diagnosis of substance abuse, tic disorder and PDD respectively, 1% each. Though substance abuse is known to increase among adolescents subsequent to disaster,30 no evidence was found relating tic disorder or PDD to post-disaster psychiatric stress. The low prevalence of substance abuse in our sample reflects the fact that the area is inhabited by a Muslim population, among whom alcohol is not religiously sanctioned, and harder substances are either unavailable or unaffordable. The only substance readily available is marijuana (cannabis); however, most used it only recreationally and did not meet the criteria for abuse. The sole patient with substance abuse was also using cannabis. It is also a well-known phenomenon that drug-dependent subjects do not reveal true information and deny any history of abuse at first contact with the investigating team.45 Tic disorder and PDD are regarded as biological disorders, and their relation to trauma is only incidental.46, 47
Studies have consistently shown the presence of psychiatric comorbidities post disaster.48, 49 The same was observed in our study, where 29.27% of patients had a comorbid psychiatric diagnosis. Similar results were found by Kar and Bastia, who reported comorbidities in 39% of adolescents.29 PTSD is the most common comorbid disorder observed in the post-disaster period,48, 49 and the same was observed in our study, with PTSD comorbid in 14.63% (N=6) of cases. However, when all the anxiety disorders except PTSD were combined, they exceeded the comorbidity of PTSD, being comorbid in 21.95% (N=9) of patients. There is an expanding literature on the comorbidity of anxiety and depression in children and adolescents.50, 51, 52 Similarly, comorbidity of an anxiety disorder (including PTSD) and a depressive disorder (including dysthymia) was seen in 7.32% (N=3) of patients in our study. These results show that psychiatric diagnoses are frequently comorbid after disaster, and that vigilance about comorbidity is needed for holistic psychiatric assessment, treatment and rehabilitation of survivors.
Sociodemographic Profile: In our sample, the prevalence of psychiatric morbidity was highest in pre-adolescents (6-10 years age group), at 61.29% of that subsample. This is consistent with research suggesting that younger children possess fewer strategies for coping with both the immediate disaster impact and its aftermath, and thus may suffer more severe emotional and psychological problems.53 The second most affected group was 11-13 years, with 40% morbidity, consistent with an earlier study that also found significant morbidity in this age group.54 The age characteristics of the total population also closely matched these findings. More females than males exhibited psychiatric morbidity in our study (47.17% vs. 34.04%). Though these findings were in tune with those of Hoven et al, their figures were somewhat lower than ours (34.7% vs. 21.8%).30 Some studies have found that girls express more anxiety-depressive disorders30 and PTSD symptoms,55, 56 while boys seem to exhibit more behavioral problems.57 Similarly, in our study the rates of anxiety disorders, depressive disorders and PTSD were higher in girls, and conduct disorder was found exclusively in boys.
Our study suggested that children up to the 5th standard were more susceptible (51.02%) than those in higher classes. This accords with an earlier study by Kar et al54 and with the findings of Hoven et al, who found maximum morbidity (34.1%) in preschoolers.30 Thus it could be said that higher educational status was protective, in addition to increasing age. Psychiatric morbidity was highest in children from joint family systems (48.15%), followed by children from extended nuclear (37.5%) and nuclear (27.27%) families. This pattern is consistent with an earlier study by Margoob et al58 and could reflect the fact that the joint families in our sample lost more family members in the tragedy. There were no cases from the upper and upper-middle socio-economic classes, and the lower-middle class was significantly under-represented in our sample; this reflected the demographics of the area per se and was not a sampling error. Consequently, higher morbidity was seen in the upper-lower socio-economic class (49.09%), followed by the lower class (31.71%). All the above findings are in accordance with an earlier study by Margoob et al.58
Psychiatric morbidity was not found to be influenced by the source of family income; the same was observed by Kar and Bastia in their study.29 The majority of the patients had poor social support (52.17%, p=0.03), findings substantiated by earlier studies.59 Loss of a parent was strongly associated with lower social support and high psychiatric morbidity, as also reported by earlier studies.31, 60 Our study found higher psychiatric morbidity in first-born children (71.43%). This could be due to the increased burden of family matters on the eldest child after a disaster, especially when the head of the family or the mother has perished in the catastrophe. This accords with earlier studies on birth order and psychiatric morbidity.61 However, in our study only children also showed significant morbidity, in contrast to earlier studies.61 This could be because an only child has markedly less social support in the absence of siblings, with a death in the family due to the disaster considerably compounding the problem. Also, the youngest-born is often more pampered and hence more likely to feel emotionally insecure when attention is shifted away in the aftermath of a disaster.
There was an unavoidable limitation in the study: the disaster-affected population was not compared with a normal or control population. The difficulty was that the area has a racially, geographically and culturally distinct population of Gujjars, all of whom were affected, so no appropriate control group could be found. However, compared with most studies of populations from north India, the prevalence in our study is substantially higher.62
Conclusion
This research portrays and scrutinizes the experience of children and adolescents in the aftermath of a snowstorm disaster, and supports the idea that children remain susceptible to morbid psychological experiences long after the traumatic event has occurred. With that said, we want to stress the decisive role of support agents for children: the adults and peers who help children and youth recuperate over the long term. We also stress the need for outreach psychosocial and clinical services long after the disaster, once the initial, reflexive response of the relief agencies has subsided and no one else is around to help.
The increasing collaboration between diabetologists and general practitioners (GPs) (e.g. the IGEA project) has given the GP a more prominent role in the management of patients with diabetes. Just as measurement of arterial blood pressure has become an important tool in the follow-up of patients with hypertension by the GP, SMBG has become a valuable tool to evaluate glycaemic control. In particular, self-monitoring of both blood pressure and glycaemia is important to assess the efficacy of prescribed therapies, and can help the patient to better understand the importance of controlling blood pressure and blood glucose.
Several instruments for the measurement of blood pressure have been validated by major medical societies involved in hypertension, and much effort has been devoted to compliance and patient comfort. Less attention, however, has been dedicated to glucometers. In particular, little consideration has been given to patient compliance, and SMBG is often perceived as an agonising experience. Moreover, hourly pre-visit glucose curves, even if important, do not have the same value as standard monitoring over the 2 to 3 months between visits. In addition, after an initial period of "enthusiasm", the fear and hassle of pricking oneself and the unpleasant sensation of pain often cause the patient to abandon SMBG.
A literature search on PubMed using the term "self-measurement of blood glucose (SMBG) and pain" retrieved only two publications, demonstrating a general lack of interest from the medical community. Yet SMBG can be of important diagnostic-therapeutic value. Pain related to skin pricks on the fingertip, needed for glucometric determination of blood glucose, can significantly reduce compliance with SMBG, thus depriving the physician of a useful tool for monitoring the efficacy of anti-hyperglycaemic therapy and glycaemic control. Moreover, even though HbA1c provides a good picture of glycaemic control over the past 2-3 months, it has clear limitations: as a mean of pre- and post-prandial blood glucose values, it does not measure glycaemic variability, which is an important cardiovascular risk factor. Thus, more research is needed into puncture sites, alternative to the fingertip, that are associated with less pain and could favour greater use of SMBG.
Another problem of significant importance concerns the reproducibility and accuracy of blood glucose measurements. In the traditional method, blood samples for self-monitoring are taken from the fingertip of any finger using a lancing device with a semi-rigid lancet (Figs. 1 and 2). The large blood vessels in the derma of the fingertip (Fig. 3) are lanced, and a drop of blood is obtained for the glucometer. All lancets are optimised to prick the skin at a depth greater than 0.5 mm, with a variability of ± 0.2 mm (Fig. 4).
Figure 1. The fingertip as a traditional site of puncture using a lancet.
Figure 2. Traditional method for self-monitoring of blood glucose.
Figure 3. Vascularisation of derma.
Figure 4. Traditional lancet.
Unfortunately, by pricking the fingertip at this depth, numerous tactile corpuscles in the dermis are also touched, causing the unpleasant sensation of pain. In a recent study by Koschinsky1 of around 1000 patients with type 1 (T1D) and type 2 diabetes (T2D), about one-half (51%) reported that they normally pricked themselves on the side of the fingertip because it is less painful. However, almost one-third (31%) used the centre of the fingertip, the site associated with the most pain. Other puncture sites on the fingers were used much less frequently (5%), while 12% used other places on the body. It is also interesting to note how many times patients reused the lancet: 10% once, 19% 2-4 times, 22% 5-7 times, 25% 8-10 times and 21% more than 11 times. Pricking oneself several times daily for years is not only troublesome for patients, but also leads to the formation of scars and callouses, and reduces fingertip perception and tactile sensitivity. Notwithstanding, alternative puncture sites such as the arm, forearm and abdomen have not been evaluated in a systematic manner.
The objective of the present study was to compare alternative puncture sites using a new semi-rigid lancet and to determine whether blood glucose values are similar to those obtained using traditional methods. A new puncture site was chosen, namely the area proximal to the nail bed of each finger. The sensation associated with puncture (with or without pain) was used to compare the two groups. Pain was assessed with a visual analogue scale (VAS). Blood glucose was measured in the morning after 12 hours of fasting.
Materials and methods
The present study enrolled 5 general practitioners and 70 patients with diabetes but without diabetes-related complications (micro-albuminuria, retinopathy, arterial disease of the lower limbs). In addition, patients with diabetic neuropathy or neurological/vascular complications that could alter pain perception were excluded. The study population was composed of 20 women and 50 men with a mean age of 47.8 ± 15.3 years and a mean duration of diabetes of 11.4 ± 10.3 years; 34.3% had T1D and 65.7% had T2D. The study was carried out according to the standards of Good Clinical Practice and the Declaration of Helsinki. All patients provided signed informed consent for participation.
Semi-rigid lancets were provided by Terumo Corporation (Tokyo, Japan) and consisted of a 23-gauge needle remodelled to permit less painful puncture than a traditional lancet (Fig. 5). Punctures (nominal penetration from 0.2 to 0.6 mm) were made with a depth variation of ± 0.13 mm. In addition, a novel puncture site was used, namely the area proximal to the nail bed of each finger (Figs. 6-8). In this area of the finger, blood flow is abundant and it is easy to obtain a blood sample. Moreover, the area has fewer tactile and pain receptors than the fingertip, and thus produces less pain when lanced.
Six fingers were used in random fashion to evaluate puncture of the anterior part of the finger, the periungual zone and the lateral area of the fingertip (depth 0.2-0.6 mm), compared to fingertip puncture at a depth of 0.6 mm. The sensation provoked by puncture (with or without pain) was used to compare groups. Pain was evaluated using a VAS ranging from 'no pain' to 'worst pain imaginable'. The VAS is a unidimensional tool that quantifies a subjective sensation, such as pain, felt by the patient, and reflects physical, psychological and spiritual variables without distinguishing the impact of the different components.
Figure 5. New lancet.
Figure 6. Proximal lateral area of the nail bed as a new site of puncture.
Figure 7. Method of lancing.
Figure 8. Site of lancing.
Blood glucose was measured in the morning after 12 hours of fasting. The Fine Touch glucometer used was provided by Terumo Corporation (Tokyo, Japan). Statistical analysis was carried out using Fisher's two-sided exact test. Differences in blood glucose between the two methods were analysed using the Wilcoxon matched-pairs signed-rank test. A P value <0.05 was considered statistically significant.
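For readers who wish to reproduce this kind of analysis, the following is a minimal sketch, in Python with SciPy, of the two tests named above. It is not the authors' analysis script, and all counts and glucose values are illustrative placeholders rather than study data.

```python
from scipy.stats import fisher_exact, wilcoxon

# Fisher's two-sided exact test on a hypothetical 2x2 table:
# rows = puncture depth (0.2 mm vs. 0.4 mm), columns = pain-free / pain.
table = [[63, 7],    # 0.2 mm depth (illustrative counts)
         [47, 23]]   # 0.4 mm depth (illustrative counts)
odds_ratio, p_pain = fisher_exact(table, alternative="two-sided")

# Wilcoxon matched-pairs signed-rank test on paired glucose readings
# (same subject, traditional vs. alternative site); values are made up.
traditional = [118, 142, 131, 155, 127, 139]
alternative = [120, 140, 133, 150, 129, 141]
stat, p_glucose = wilcoxon(traditional, alternative)

print(f"Fisher exact p = {p_pain:.4f}; Wilcoxon p = {p_glucose:.4f}")
# As in the study, p < 0.05 would be considered statistically significant.
```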
Results
Pain was not perceived by 90% and 94.28% of subjects punctured in the lateral area of the fingertip at depths of 0.2 and 0.3 mm, respectively. At a depth of 0.4 mm, 67.14% of subjects did not perceive pain, while at 0.5 mm and 0.6 mm, 47.14% and 17.14% of subjects did not feel pain, respectively. There was no significant difference in pain between punctures at 0.2 and 0.3 mm, while significant differences were seen between 0.2 and 0.4 mm (p <0.05), 0.5 mm (p <0.001) and 0.6 mm (p <0.001). All subjects punctured in the central zone of the fingertip reported a painful sensation.
Using the periungual puncture site, pain was not reported by any subject, although a bothersome sensation was noted by some. The same results were obtained for all fingers used. Blood glucose levels obtained using traditional and alternative puncture sites were highly similar, with no significant difference between groups (134.18 ± 5.15 mg/dl vs. 135.18 ± 5.71 mg/dl; p = 0.5957).
Discussion
The present study evaluated the use of alternative puncture sites that are associated with less pain. These encouraging results undoubtedly warrant further investigation in a larger cohort, but nonetheless suggest that compliance with SMBG can be optimised. The use of the area close to the nail bed allowed high-quality blood samples to be obtained for measurement of blood glucose, with the same accuracy as that seen using the fingertip. The lancet used herein, composed of a hypodermic needle in a rigid casing that prevents accidental needle sticks both before and after use, was also associated with a lower perception of pain. Because the needle point is made using a triple-bevel cut, epidermal penetration is less traumatic and, as a consequence, less painful: a larger transversal section can be used at a shallower puncture depth, with less involvement of the nerves present in the skin. This also favours rapid recovery of tactile function in patients with T2D. In addition, the characteristics of the novel lancing device (Fine Touch, Terumo Corporation, Tokyo, Japan) allow the depth of puncture to be adjusted to the characteristics of each patient (e.g. in children, adolescents and adults).
The depth of penetration of the lancet can be varied from 0.3 to 1.8 mm with a built-in selector; the maximum deviation of the lancing device in terms of depth is approximately 0.1 mm. Because a minimal depth of only 0.3 mm can be selected, the device can be used at alternative sites, allowing a reduction in the frequency of samples taken from the fingertip. In theory, compared to traditional lancets, this would produce less perception of pain at traditional sites as well as at periungual zones, and it was our intention to compare the different types of lancets to reinforce this idea. No puncture-related complications were reported. Another fundamental aspect, not reported in other studies comparing traditional and alternative puncture sites, is that no differences in blood glucose were observed.
In conclusion, it is our belief that a new type of finger lancet that decreases or eliminates pain associated with lancing merits additional consideration. Further studies are warranted on larger patient cohorts to confirm the present results. If validated, this would enable patients with diabetes - especially those who need to take several daily blood glucose samples - to perform SMBG with greater peace of mind and less distress.
Drug abuse is a universal phenomenon, and people have always sought mood- or perception-altering substances. Attitudes towards addiction vary with numerous factors and range from prohibition and condemnation to tolerance and treatment1. The United Nations Narcotics Bureau has described drug abuse as the worst epidemic in global history2. India, like the rest of the world, has a huge drug problem. Located between two prominent drug-producing hubs, the Golden Triangle (Burma, Laos and Thailand) and the Golden Crescent (Iran, Afghanistan and Pakistan), India acts as a natural transit zone and thus faces a major problem of drug trafficking. Similarly, the geographic location of Jammu and Kashmir is such that transit of drugs across the state is easy. In addition, the prevailing turmoil is claimed to have worsened the drug abuse problem, alongside an unusual increase in other psychiatric disorders in Kashmir3.
There are not many studies about drug use from Kashmir, and hardly any about the actual community prevalence. It is difficult to conduct a study in a community affected by drug abuse because of the stigma associated with drug addiction; furthermore, people hesitate to volunteer information owing to laws prohibiting the sale and purchase of such substances and the risk of being criminally charged. In view of these difficulties, the present study was conducted on treatment-seeking patients at the Drug De-addiction Centers. It aimed to highlight the epidemiological profile and pattern of drug use in the Kashmir Valley.
Material and Methods
This cross-sectional study was undertaken at two Drug De-addiction Treatment Centers (Government Psychiatric Disease Hospital and Police Hospital, Srinagar). Government Psychiatric Disease Hospital is the only psychiatric hospital in the Kashmir Valley that also treats patients with substance use disorders. The de-addiction center at the Police Hospital is run by the Police Department in the capital city, Srinagar. Owing to the lack of such services outside the capital, both centres have a huge catchment area comprising all districts of the valley, and thus reflect the community scenario to a considerable extent.
The study was conducted over a period of one year, from July 2010 to June 2011. Patients with substance use disorders were diagnosed as per the Diagnostic and Statistical Manual-IV (DSM-IV, 2004) criteria4. Following informed consent, a total of 125 patients were included in the study; in the case of minors (<18 years of age), consent was obtained from the guardian. Information was collected regarding age, sex, residence, religion, marital status, educational status, history of school dropout, occupation, type of family, reasons for starting the substance of abuse, type of substance abused, and age of initiation. The socio-economic status of the patients was evaluated using the modified Prasad's scale for the year 2010, based on per capita income per month5.
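As an aside for readers implementing such scales, the following is a minimal Python sketch of how per capita monthly income might be mapped to socio-economic class under a Prasad-type scale. The rupee cut-offs are hypothetical placeholders only; the actual 2010 cut-offs used in the study should be taken from reference 5.

```python
# Hypothetical cut-offs for illustration; NOT the actual modified Prasad's
# scale values for 2010 (see reference 5 for those).
def prasad_class(per_capita_income_rs: float) -> str:
    """Map per capita monthly income (Rs) to socio-economic class I-V."""
    cutoffs = [(2000, "I"), (1000, "II"), (600, "III"), (300, "IV")]
    for lower_bound, cls in cutoffs:
        if per_capita_income_rs >= lower_bound:
            return cls
    return "V"

print(prasad_class(2500))  # -> "I"
print(prasad_class(450))   # -> "IV"
```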
Results
A total of 125 patients with substance use disorders were studied, all of them male. The majority (50.4%) were in the age group of 20-29 years, and most (73.6%) were unmarried. Most of the patients were Muslims (96%). The urban to rural ratio was nearly equal. Most of the patients had completed education up to high school level or higher. There was a high rate of school dropout (41.7%), and among the dropouts, substance use was a common reason for leaving school (46%). 71.2% belonged to nuclear families, and most of the patients (53.6%) belonged to socio-economic class I as per Prasad's scale [Table 1]. The majority of patients started taking substances in the age group of 10-19 years [Table 2]. Besides nicotine (89.6%), the most common substances used were cannabis (48.8%), codeine (48%), propoxyphene (37.6%), alcohol (36.8%) and benzodiazepines (36%) [Table 3].
Table 1: Socio-demographic profile

Characteristic                  N     %
Age (years)
  10 to 19                      20    16.0
  20 to 29                      63    50.4
  30 to 39                      27    21.6
  40 to 49                      12    9.6
  ≥ 50                          3     2.4
Gender
  Male                          125   100.0
Religion
  Islam                         120   96.0
  Sikh                          3     2.4
  Hindu                         2     1.6
Residence
  Urban                         56    44.8
  Rural                         69    55.2
Marital status
  Unmarried                     92    73.6
  Currently married             27    21.6
  Separated/divorced            6     4.8
Education
  Illiterate                    5     4.0
  ≤ high school                 71    56.8
  > high school                 49    39.2
Occupation
  Unemployed                    21    16.8
  Student                       25    20.0
  Government job                16    12.8
  Self-employed                 63    50.4
Type of family
  Joint                         36    28.8
  Nuclear                       89    71.2
Socio-economic status
  Class I                       67    53.6
  Class II                      36    28.8
  Class III                     18    14.4
  Class IV                      3     2.4
  Class V                       1     0.8
Table 2: Age at initiation of substance use among patients seeking treatment for substance use disorder

Substance           < 10 years     10 to 19 years   > 19 years
                    N     %        N     %          N     %
Nicotine            11    9.8      86    76.8       15    13.4
Volatile solvents   0     0        10    76.9       3     23.1
Cannabis            0     0        43    70.5       18    29.5
Codeine             0     0        33    55.0       27    45.0
Propoxyphene        0     0        24    51.1       23    48.9
Benzodiazepines     0     0        20    44.4       25    55.6
Alcohol             0     0        19    41.3       27    58.7
Table 3: Type of substance used by the patients seeking treatment for Substance Use disorder*
Table 4: Reasons for starting substance use among patients seeking treatment for substance use disorder*

Reason                                  N     %
Peer pressure                           91    72.8
Relief from psychological stress**      49    39.2
Curiosity/experimenting                 27    21.6
Fun/pleasure seeking                    13    10.4
Prescription medicine abuse***          12    9.6
Others****                              6     4.8

*Multiple responses. **Family tragedy such as death or disease in the family; history of arrests, torture in jail, or death and disability in the family due to the prevailing turmoil; conflicts within the family; loss of job or job dissatisfaction. ***Deliberate use of prescription medications for recreational purposes in order to achieve intoxicating or euphoric psychoactive effects, irrespective of prescription status. ****Family history, routine work or boredom, availability.
Peer pressure was the most common reason (72.8%) for starting substance use [Table 4]. The majority of patients started using substances in the age group of 10 to 19 years: 76.8% of nicotine users, 76.9% of volatile solvent users and 70.5% of cannabis users began in this age range. The age of onset was higher (>19 years) for benzodiazepines and alcohol.
Discussion:
Kashmir Valley has a population of over 6 million, with around 70% of people living in rural areas.6
There are almost no data available on the community prevalence of drug use in the valley. The population is predominantly Muslim, with a strong taboo on the use of alcohol and other drugs. Interestingly, none of the patients in our sample were female, which could be due to the stigma associated with drug use and consequent reluctance to seek treatment. The police de-addiction centre is located within the police lines, with heavy security that requires frisking, which may further deter people, especially women, from seeking help. This does not mean that females do not use drugs, as is evident from clinical practice and previous studies7. The sample mostly comprised a young age group of 20-29 years (50.4%), followed by 30-39 years (21.6%). Similar findings were reported in a previous study by Kadri et al.8 Another study of college-going male students showed a prevalence of 37.5%9, suggesting a young age at initiation and high prevalence among students. The results also show a high school dropout rate due to drug use, which could reflect the problems associated with drug use and its negative impact on overall quality of life and future prospects.
There was a slight rural predominance in the sample. This is consistent with the findings of the Drug Abuse Monitoring System India and other studies10-12, which reveal a nearly equal rural-urban ratio with slight rural predominance. This could be due to the stigma associated with these centres and reluctance among the local population to seek help for fear of being identified and shamed.
73.6% of the patients were unmarried and 4.8% separated or divorced. Similar results have been reported by Hasin DS et al13 and Martins SS et al14. The predominance of unmarried patients in our study could be due to the large proportion of patients younger than the usual age at marriage.
The majority of the patients in our study were using cannabis, medicinal opioids (codeine and propoxyphene), benzodiazepines and alcohol. A major reason for the high rate of opioid and benzodiazepine abuse in the present study is the over-the-counter sale of these drugs without a doctor's prescription. This is a worrying trend, as there is no proper drug control and it is easy to access any medication. Although there are only a few outlets selling alcohol in the whole of Kashmir, alcohol use was surprisingly common. It is speculated that the current political turmoil may be responsible, with people buying alcohol legally or illegally from army depots.
Most of the substance users had started taking drugs at the age of 10 to 19 years, particularly in the case of nicotine, volatile substances and cannabis. Similar results have been found in earlier studies.15 Nicotine was typically the first substance of abuse; tobacco is often considered a gateway to other drugs16.
The overall prevalence of volatile substance abuse in this study was 10.4%, but it was significantly higher in the adolescent age group (53.8%). About three-fourths of the patients had started using volatile solvents in the age group of 10-19 years. Inhalant use has been identified as the most prevalent form of substance abuse among adolescents in several studies17-18. The observation in the present study could be explained by the easy accessibility, low price, fast onset of action and reliable "high" of volatile substances such as glues, paint thinners, nail polish removers, dry cleaning fluids, correction fluids, petrol, adhesives, varnishes, deodorants and hair sprays.
Peer pressure was the most common reason for initiation of drug use, followed by self-medication for psychological stress. Previous studies have shown similar results in relation to peer pressure, and have also implicated the ongoing conflict situation in increased drug use in the valley19-20.
Conclusion:
There is a need for further studies to establish the community prevalence of drug use. Service provision is very limited, restricted to the capital city, with none in rural areas. There is a worrying trend of early age at initiation, with adverse consequences including dropping out of school. The control of prescription drug sales is another major issue that needs to be addressed. It is also worrying that female drug users are unable to seek help owing to the lack of appropriate facilities.
Dermatomyositis (DM) is a rare autoimmune process whose aetiology is not yet fully understood. It is characterised by a combination of striated muscle inflammation and cutaneous changes; the pathogenesis of the cutaneous manifestations of DM is likewise poorly understood. DM occurs in all age groups, and two clinical subgroups are accordingly described: adult and juvenile. The adult form predominates among female patients, with a clinical presentation that includes a heliotrope rash (Fig. 1), Gottron's papules (Fig. 2), nail fold telangiectasia and various other cutaneous manifestations in association with inflammatory myopathy.1 In addition to the previously mentioned symptoms, juvenile patients also commonly suffer from ulcerative skin lesions and recurrent abdominal pain due to vasculitis. An increased occurrence of oncological processes in combination with adult DM has been observed, with a slight predominance in the female gender.2 These patients carry a higher risk for comorbid cancers, the most common of which are malignant processes of the ovary, lung, pancreas, stomach, urinary bladder and haematopoietic system.3 The significance of these observations is that the development of DM should raise suspicion of a possible parallel oncological process.
Figure 1
Figure 2
Materials and Methods
A retrospective consecutive case series was performed on a group of 12 patients hospitalised at the Department of Dermatology, Venereology and Allergology at the Medical University of Gdansk between 1996 and 2013. The diagnostic criteria for DM included: hallmark cutaneous lesions of DM, clinically significant muscle weakness evaluated by electromyography (EMG), and indicative laboratory findings (muscle enzymes, muscle biopsy, autoantibodies). All 12 cases had muscle biopsy, serum studies and EMG performed. The retrospective analysis covered the age and sex of the patients, course of the disease, accompanying diseases, clinical picture and treatment. The patients with malignancies were analysed by the primary organ of origin and the period between the diagnosis of DM and that of malignancy (Table 1).
Table 1. Patient characteristics

Patient 1. Sex: F. Previous medical history: chronic eosinophilic leukaemia. Age of onset of DM: 54. Clinical picture: muscle weakness of shoulder and hip area, facial oedema and erythema, palmar erythema. Diagnostics: CK 2550, ANA Hep-2 1:640, LDH 901, AST 69, ALT 143, X-ray = N, USG = N, EMG = N. Treatment: azathioprine, prednisone. Malignancy: stage IIA ovarian cancer at 55.

Patient 2. Sex: F. Previous medical history: peptic ulcer disease. Age of onset of DM: 66. Clinical picture: facial erythema, Gottron's papules on the hands, muscular weakness creating difficulty in movement, weight loss, decreased appetite. Diagnostics: ANA Hep-2 1:1280, CT = N, EMG = N. Treatment: glucocorticosteroids. Malignancy: small cell carcinoma at 66.

Patient 3. Sex: F. Previous medical history: none. Age of onset of DM: 23. Clinical picture: muscular weakness of shoulder and hip area with difficulty in standing up and walking up stairs, Gottron's papules, heliotrope rash, upper chest erythema. Diagnostics: ANA Hep-2 1:2580, CPK 12022, AST 595, ALT 210, CK-MB 534, Jo-1 = N, Mi = N. Treatment: azathioprine, prednisone, methotrexate. Malignancy: none.

Patient 4. Sex: F. Previous medical history: chronic obstructive pulmonary disease. Age of onset of DM: 42. Clinical picture: muscular weakness of shoulder and hip area, facial oedema and erythema. Diagnostics: -. Treatment: cyclophosphamide, methylprednisolone. Malignancy: stomach tumour at 43.

Patient 5. Sex: F. Previous medical history: none. Age of onset of DM: 22. Clinical picture: muscle weakness, painful extremities, facial oedema and erythema. Diagnostics: ANA Hep-2 = N, CT = N. Treatment: cyclophosphamide, prednisone. Malignancy: none.

Patient 6. Sex: F. Previous medical history: none. Age of onset of DM: 42. Clinical picture: muscle weakness, paraesthesia of hands, facial oedema and erythema. Diagnostics: ANA Hep-2 1:640. Treatment: cyclophosphamide, prednisone. Malignancy: none.

Patient 7. Sex: F. Previous medical history: hypertension, diabetes type II, osteopenia, leiomyoma. Age of onset of DM: 65. Clinical picture: muscle weakness of shoulder and hip area, facial oedema and erythema. Diagnostics: ANA Hep-2 1:1280, LDH 650. Treatment: cyclophosphamide, prednisone. Malignancy: none.

Patient 8. Sex: F. Previous medical history: hyperthyroiditis. Age of onset of DM: 46. Clinical picture: muscle weakness with difficulty in moving, facial oedema and erythema. Diagnostics: ANA Hep-2 1:160. Treatment: cyclosporine A, prednisone. Malignancy: none.

Patient 9. Sex: F. Previous medical history: autoimmune hepatic disease, leiomyoma. Age of onset of DM: 45. Clinical picture: muscular weakness of shoulder and hip area, facial oedema and erythema. Diagnostics: ANA Hep-2 1:2560, CK 3700, Mi-2 = P. Treatment: azathioprine, methylprednisolone. Malignancy: none.

Patient 10. Sex: F. Previous medical history: hypertension, diabetes type 2, hypothyroidism, ovarian cysts. Age of onset of DM: 57. Clinical picture: muscle weakness of shoulder and hip area, facial oedema and erythema, upper chest erythema, Gottron's papules, fatigue, dysphagia. Diagnostics: ANA Hep-2 1:640, CK 747, LDH 363, AST 78, Ro52 = P, Mi-2 = N, Jo-1 = N, PM/Scl = N, CT = two pulmonary lesions that were biopsied and diagnosed as pneumoconiosis. Treatment: prednisone, methotrexate. Malignancy: cervical carcinoma at 51, breast cancer at 57, pulmonary metastasis at 58.

DM = dermatomyositis, F = female, CK = creatine phosphokinase, ANA = antinuclear antibodies, LDH = lactate dehydrogenase, AST = aspartate transaminase, ALT = alanine transaminase, N = negative, P = positive, USG = ultrasonography, EMG = electromyography, CT = computerised tomography
Limitations
The small sample size is a significant limitation of this retrospective analysis. DM is a rare disease with a prevalence of 1:1000. Increasing the sample size by combining cases from multiple institutions, and including a control group, would further strengthen the material presented.
Results
The average age of onset of the disease was 48 years. All 12 subjects were female. Previous medical history included chronic eosinophilic leukaemia, diabetes mellitus type II, hypertension, leiomyomas, hypo- and hyperthyroid disease, chronic obstructive pulmonary disease, peptic ulcer disease, autoimmune hepatitis and osteopenia; the two most common were diabetes mellitus type II and hypertension. The clinical picture was similar across cases in that all patients presented with some form of muscle weakness. In addition, typical features of DM with Gottron's papules, periorbital oedema, facial oedema and erythema were noted in five patients. Antinuclear antibody (ANA) Hep-2 titres of >1:160 were identified in nine patients. Additional laboratory markers such as creatine kinase (CK), lactate dehydrogenase (LDH), aspartate transaminase (AST) and alanine transaminase (ALT) were elevated in five patients. Two patients had muscle biopsies performed; the immunohistopathological picture consisted of immunoglobulin G (IgG), fibrinogen, C1q and C3 deposition around the perimysium, and granular deposits of immunoglobulin M (IgM) at the dermal-epidermal junction. Of the 12 patients, four had neoplasms in addition to the diagnosed DM. The primary cancers originated in the cervix, breast, stomach and ovary. All four of these patients had the diagnosis of DM prior to the diagnosis of a malignancy.
Discussion
The diagnosis of DM is made by combining the clinical picture with the results of various laboratory investigations: skin and muscle biopsies, EMG, serum enzymes and ANAs.
The clinical picture varies. The typical dermatological presentation consists of an erythematous and oedematous periorbital rash, the heliotrope rash (Fig. 1). Symmetrical redness and flaking can be observed on the elbows and the dorsal sides of the phalanges, especially over the distal metacarpal joints, as Gottron's papules (Fig. 2). Erythematous lesions can also be found at other locations such as the face, upper chest and knees.4 The dermatitis heals with atrophy, leaving behind areas that resemble radiation-damaged skin. The striated muscle inflammation most often involves the shoulder and hip area, leading to muscle weakness and atrophy. The intercostal muscles and the diaphragm may be involved, raising concern about respiratory compromise. Dysphagia can be present owing to inflammation of the smooth and skeletal muscles of the oesophagus. These inflammatory processes often lead to muscle calcification.5 Clinically, the sum of these changes is seen most often as weakness, weight loss and subfebrile temperatures. All patients in our study had co-existing muscle and cutaneous symptoms, with variation in severity and localisation. Five patients had the classical picture of shoulder and hip area weakness; the rest had more generalised muscle weakness. Two patients had atypical complaints of hand paraesthesia and extremity pain, respectively.
Subtypes of DM exist for the purposes of epidemiological research and, sometimes, prognosis. They are categorised by the clinical presentation and the presence or absence of specific laboratory findings, as follows: classic DM, amyopathic DM, hypomyopathic DM and clinically amyopathic DM. These subtypes have little impact on routine diagnosis. Common laboratory findings in DM are elevations of the enzymes CK, AST, ALT and LDH, which mainly reflect muscle involvement; amyopathic DM lacks both abnormal muscle enzymes and weakness.6 Enzymatic elevation may sometimes precede the clinical symptoms of muscle involvement; hence, an enzymatic rise in a patient with a history of DM should raise suspicion of recurrence. Positive ANA findings are frequent in DM but not necessary for the diagnosis. More myositis-specific antibodies include anti-Mi-2 and anti-Jo-1. A typical histopathological examination shows myofibre necrosis, perifascicular atrophy and a patchy endomysial infiltrate of lymphocytes; occasionally the capillaries may contain membrane attack complexes.7
Cutaneous changes and muscular complaints can also correspond to: 1. systemic scleroderma, which often has a positive ANA; 2. trichinosis, in which periorbital swelling and myositis occur, but with prominent eosinophilia and a history of consuming undercooked swine or bear meat; 3. psoriasis with joint involvement, which may give a clinically similar picture to DM, although the skin changes in psoriasis have a more flaking pattern. In doubtful cases, skin and muscle biopsy together with electromyography will set the diagnoses apart. A facial rash together with nail fold telangiectasia may also be observed in systemic lupus erythematosus; the two conditions are usually distinguished by the clinical picture, with more organ system involvement in systemic lupus, and by serological studies. A drug-induced picture of DM also exists and is particularly associated with statins and hydroxyurea.8
It is estimated that around 25% of DM cases are associated with a neoplastic process that can occur prior to, during or after the episode of DM. The risk of developing a malignancy is highest in the first year of DM and remains elevated for years after diagnosis.9, 10, 11 This was the case with patients 1, 2 and 4 in our study, in whom the malignant process appeared in the first year following the onset of DM. Risk factors seen in DM patients include male gender, advanced age and symptoms of dysphagia.12 The ages of the four patients with malignancy in our study ranged from 43 to 66 years. Symptoms that clinically raised suspicion of a malignant process included weight loss, lack of appetite and dysphagia. All neoplasms were discovered within one year after the diagnosis of DM was made. One patient had a previous history of cervical cancer six years prior to the onset of DM.
The most common neoplasms seen in patients with DM vary across the world. In Europe the malignancies are located mainly in the ovaries, lungs and stomach; the cancer types associated with DM correlate with the cancers common in the same area. For instance, in Asia, nasopharyngeal carcinoma (a rare malignancy in Europe) is a frequent occurrence in DM.1, 3 The neoplasms seen in our study involved the stomach, breast, ovary and lung. Screening for malignancy in patients with DM is individualised and should be based on risk factors such as previous malignancies, alarming symptoms such as weight loss or dysphagia, or abnormal findings on physical examination. This was the case with patient 10 in our study, who had a previous history of cancer, and patient 2, who had symptoms of weight loss and decreased appetite. Initial screening was negative for patients 1 and 2, in whom the malignancy developed only after the onset of DM. Age-appropriate screening with mammography, faecal occult blood testing and Papanicolaou smear should be considered. Additional investigations with chest films, computerised tomography (CT) of the chest, abdomen or pelvis, colonoscopy, cancer antigens, and gynaecological ultrasonography should be performed when indicated.
The main objective of treatment in DM is to improve muscle strength and obtain remission, or at least clinical stabilisation. No specific treatment protocol exists for DM; treatment is individualised and adapted to the specific condition of the patient. High-dose corticosteroids are the basis of treatment. Although randomised placebo-controlled trials have not demonstrated their efficacy, corticosteroids show clear efficacy in clinical practice and hence are the initial treatment of choice. Doses start at around 1 mg/kg/day, depending on the corticosteroid of preference. This dosing is maintained for approximately two months until clinical regression is achieved, followed by a reduction of approximately 10 mg over the coming three months, aiming at a maintenance dose of approximately 5-10 mg; the exact parameters are patient-specific. In the case of a severe flare of dermatomyositis, intravenous methylprednisolone pulses of 1 g per day for three days can be administered. The systemic effects of long-term corticosteroid therapy have to be kept in mind; hence, yearly dual-energy X-ray absorptiometry bone scans can be performed to monitor for the development of osteopenia.
Further treatment options are considered when the initial disease presentation is severe or involves internal organs, when relapse occurs during steroid dose reduction, or when steroid side-effects develop. Combination therapy has been proposed as a better approach because of lower reported relapse rates and less need for high-dose corticosteroids. Methotrexate is second-line therapy when steroids alone fail; it is used at a maximum dose of 25 mg per week plus folate supplementation, and its limitations are immunosuppression and pulmonary fibrosis. Methotrexate is considered preferable to azathioprine because the latter has a slower onset of efficacy. Azathioprine is administered at doses ranging from 1.5-3 mg/kg/day and has a side-effect profile similar to that of other immunosuppressants. Cyclosporin A is a T-cell cytokine moderator with an efficacy profile similar to methotrexate; its side-effects include renal impairment, gingival hyperplasia and hypertrichosis, and dosing ranges from 2-3 mg/kg/day.
An expensive but effective alternative with a rather low side-effect burden is intravenous immunoglobulin. The dosage has not been officially established in the treatment of DM, but options are 2 g/kg given either as 1 g/kg/day for two days every four weeks, or as 0.4 g/kg/day for five days initially and then for three days monthly for three to six months. Other alternatives include mycophenolate mofetil, cyclophosphamide, chlorambucil, fludarabine, eculizumab and rituximab.9 A further option is treatment targeted toward the malignancy when one is associated with DM; this was observed in our patient 10, in whom full remission of DM was obtained only after lobectomy and chemotherapy for the mammary carcinoma.
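As a quick check of the dose arithmetic above, the following minimal Python sketch (for a hypothetical patient weight, not a clinical dosing tool) confirms that both IVIG regimens deliver the same 2 g/kg total per cycle.

```python
# Illustrative arithmetic only; the 70 kg weight is hypothetical.
weight_kg = 70

# Regimen A: 1 g/kg/day for two days every four weeks.
total_a_g = 1.0 * weight_kg * 2

# Regimen B: 0.4 g/kg/day for five days.
total_b_g = 0.4 * weight_kg * 5

assert total_a_g == total_b_g == 2.0 * weight_kg  # both deliver 2 g/kg per cycle
print(total_a_g, total_b_g)  # 140.0 140.0 grams
```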
Conclusion
DM mainly affects women, and all 12 cases presented in our study were female. One third of our cases had malignancies associated with their course of DM. We conclude that it is reasonable to screen these patients, especially those with already established cancer risk factors. Age-appropriate screening, and investigation beyond it, is indicated by high-risk features or the clinical presentation. A high index of suspicion should be maintained in patients with a previous history of oncological treatment, since DM can be the first clinical sign of cancer recurrence.
Population studies in Norway show that taking part in (creative) or receiving (receptive) cultural activities, i.e. the arts, is associated with, among other things, good health and good satisfaction with life.1 Cultural activities have the potential to affect individuals beneficially, physiologically, biologically and emotionally, and several studies show that they can stimulate emotions and behaviors that make life easier.2-5 Cultural activities can enrich and enhance our memory, stimulate connections among brain networks, accelerate learning and help us differentiate feelings of meaning and context.6,7 Cultural activities have also improved physical health, social function and vitality among health care staff.8
In an analysis of data from a large longitudinal cohort study of a working population (the SLOSH study, the Swedish Longitudinal Occupational Survey of Health), interesting associations were revealed between access to cultural activities in the workplace and health. Participants reporting many cultural activities at work showed a more favorable improvement in emotional exhaustion over a two-year follow-up than those whose workplaces did not offer these amenities.9 Other studies, in which cultural activities were offered to patients on long-term sick leave, confirm that cultural activities have beneficial effects on both self-confidence and pain.10,11
In a new approach, an artistic leadership program called "Shibboleth" affected not only the managers included in the study but also their employees (who did not participate in the artistic program). Managers in this one-year art-based program showed statistically significantly greater improvement in mental health, covert coping and performance-based self-esteem than the comparison group (who participated in an ordinary leadership program). They also experienced less winter/fall deterioration in the serum concentration of DHEA-S (dehydroepiandrosterone sulfate), a regenerative/anabolic hormone.12
Studies on singers, both amateur and professional as well as choir singers, show positive effects on biological markers such as oxytocin and testosterone.13-15 On the basis of results from another Swedish project, "Prescribed Culture", which aimed to evaluate the effects of prescribed cultural experiences in the treatment of patients on long-term sick leave, it has been claimed that cultural experiences have their best effects when used in health promotion and prevention, rather than when the individual is already sick.16 Multimodal stimulation seems to have particularly strong effects: for instance, concomitant visual and auditory stimulation gives rise to stronger activation of "visual" and "auditory" parts of the brain than separate visual and auditory stimulation.17
A mixture of different cultural activities seems to optimize influence on the limbic system, since a broader emotional perception is activated.7 Cultural activities offered to participants who would not have chosen them spontaneously could enhance already existing pathways in the brain, enabling deeper cognitive behavioral change.17-20
Despite this knowledge regarding the potential benefit of cultural activities for both individuals and groups in different contexts, there is still no accessible, practically functioning link between producers of culture and the various groups of practitioners within health care.
Burnout is characterized by emotional exhaustion, detachment from work and decreased effectiveness at work. It can develop in situations of excessive workload and insufficient resources, as well as lack of control and support.21 If the burnout process is a reaction to long-term stress without sufficient recovery, it can lead to the more severe exhaustion syndrome,22 whose symptoms include fatigue, impaired emotional regulation, cognitive problems and sleep disorders. Most of these patients have an increased sensitivity to stress even after recovery.22 In recent years, Swedish rates of sick leave due to minor psychiatric morbidity and burnout symptoms have increased dramatically.23,24 Complaints usually include physical, emotional and cognitive exhaustion, which in most cases appear to be related to chronic stress without restitution.25-28 Today many women in Sweden have stress-related symptoms, and some are diagnosed with exhaustion syndrome. If these women are detected at an early stage, the prognosis is good.22
Alexithymia (difficulty in differentiating one's own and others' feelings) can be a silent but severe problem for persons with this personality trait. Grabe et al29 conducted a study in which the TAS-20 questionnaire was used to assess alexithymia, together with a medical examination. In that study, alexithymia was related to hypertension and arteriosclerotic plaques; alexithymic personality traits may thus increase the risk for cardiovascular disease (CVD).
The rationale behind choosing symptoms of exhaustion, SOC and alexithymia as the main outcome variables was the intention to examine whether cultural activities in this form can change patterns of thought, feeling and behavior in participants with burnout symptoms. If cultural activities prove effective for this participant group, they could have considerable benefits, both financially, in terms of reduced sick leave and health care consumption, and in terms of reduced individual suffering.
Aim
The aim of the study was to assess to what extent symptoms of exhaustion, sense of coherence, alexithymia and self-rated health among women with burnout symptoms can be beneficially influenced by cultural activities organized in health care centers.
Method
Participants
Four health care centers in Stockholm County hosted the cultural activities. Medical doctors and social workers distributed information about the study to women diagnosed with exhaustion disorder or exhaustion symptoms. Women, native and foreign born, with burnout/exhaustion symptoms (fatigue syndrome or stress-related fatigue) who were curious about new clinical approaches were asked by the doctor to participate in the study and were screened against the inclusion and exclusion criteria. Participants (women aged >18) with burnout/exhaustion symptoms such as pronounced fatigue, cognitive problems and sleep disturbances were enrolled. An inclusion criterion was a score above 2 on the KEDS scale. The diagnosis was made by the doctor.
Exclusion criteria were: difficulty in speaking and understanding Swedish; alcohol or drug abuse problems; and/or severe depression or borderline personality disorder. Also excluded were participants with severe somatic diseases (such as serious angina pectoris or a previous stroke). Randomization used a 3:1 allocation to the intervention and control groups.
Randomization was stratified by center and carried out by the statistician. The group allocations were sealed in individual envelopes, which were distributed to the centers with the site staff blinded to their contents; the envelopes were then drawn in consecutive order as subjects were recruited at each of the four health care centers. Thirty-six participants were allocated to the intervention group (nine at each center) and twelve to the control group. The standard care that every participant received included physiotherapy, such as relaxation and light physical training.
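The following is a minimal Python sketch, under our reading of the envelope scheme described above, of 3:1 allocation stratified by center: each center receives a shuffled sequence of twelve assignments (nine intervention, three control) that is drawn consecutively as subjects are recruited. The function and parameter names are hypothetical, not taken from the study.

```python
import random

def make_envelopes(n_intervention=9, n_control=3, seed=None):
    """Build one center's shuffled sealed-envelope sequence (3:1 allocation)."""
    sequence = ["intervention"] * n_intervention + ["control"] * n_control
    random.Random(seed).shuffle(sequence)
    return sequence

# One envelope sequence per health care center (four centers).
centers = {center: make_envelopes(seed=center) for center in range(1, 5)}
print(centers[1])  # allocation order for consecutively recruited subjects
```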
All randomized participants gave their written consent to participation in the study. Data were collected over a period of 6 months. The project included evaluation of six different cultural activities. In the selection of the health care centers, socio-economic diversity and employment status were considered. We used regularly occurring, structured cultural activities in cooperation with culture producers, i.e. actors, musicians and dance teachers. The Regional Research Ethics Committee of Uppsala approved the study (Dnr. 2012/359).
The culture palette: six different cultural packages
The following cultural activities were included in the study; five of them have previously been described in the literature, with good evidence, in other patient groups. One package (the musical show), which had not previously been evaluated in patient groups, was chosen because it combines several modalities of activity at the same time. The active mechanism of all six cultural activities was to stimulate different sensory modalities, such as the visual, motor, verbal, auditory, emotional and sensational, according to Downing's levels of perception.30 All participants were offered six cultural packages:
1. Interactive theater: An experienced actor introduced poetical lyrics and poems and then initiated and participated in discussions with the participants regarding thoughts, emotions, and experiences evoked by the texts.
2. Movie: After showing a movie, a film expert initiated discussions among the participants about experiences and thoughts evoked by the movie.
3. Vocal improvisation and drawing: After participating in a vocal improvisation session with an experienced performance artist and pianist, the participants painted a picture representing emotions, thoughts and images evoked during the improvisation.
4. Exploring Dance: The participants improvised dance movements under the guidance of a dance movement pedagogue/music teacher. The dance movements were staged according to the situation in the room and with focus on bodily awareness. Afterwards the group discussed their experiences during the dance session.
5. Mindfulness and contemplation: The participants contemplated and practiced mindfulness together with an experienced mindfulness instructor. Attention was on breathing and body awareness. Thoughts, feelings, images and sensations were in focus, and experiences were reflected upon in the group after the contemplation.
6. Musical show: After a musical show including music, song and dance focusing on bodily awareness, the participants discussed thoughts regarding the body with the actor.
Every session in each of the six cultural packages lasted 90 minutes.
Evaluations
Three standardized scales were used, together with self-rated health and self-figure drawing.
KEDS - Karolinska Exhaustion Disorder Scale.31 Items cover concentration, memory, physical fatigue, endurance, recovery, sleep, hypersensitivity to sensory input, experience of demands, and irritability and anger. Higher scores indicate worse disease activity.
SOC - Sense of Coherence scale.32 Sense of coherence is a key factor in the ability to experience well-being and health, and has been shown to be crucial in helping individuals mobilize their self-healing systems. Higher scores indicate better performance.
TAS - Toronto Alexithymia Scale.33 An estimate of the ability to recognize and interpret feelings in oneself and others. TAS contains three subscales: difficulty identifying feelings, difficulty describing feelings, and externally oriented thinking. This study used the full-scale score, i.e. the sum of the three subscale scores. Higher scores indicate worse performance.
Self-rated health (SRH) is a single-item measure.
Procedures/implementation
Each of the four health care centers presented each activity on two consecutive occasions; after two weeks of one program, a new program followed on two consecutive occasions, and so on. Each participant was thus offered 12 cultural sessions over a three-month period, i.e. one per week. During the monitoring period between month 3 and month 6, no cultural activity was offered. The control group was monitored in parallel over the entire period, at 0, 3 and 6 months.
The participants evaluated the project individually with questionnaires prior to the sessions, after completion of the intervention at month 3, and at follow-up after a further 3 months, i.e. month 6 (both intervention and control groups). In-depth interviews with participants and with producers of culture, i.e. representatives of the various cultural activities and health care staff, were conducted during the monitoring period (these data are not presented in this article).
Data analysis
The primary outcome/efficacy end point was the mean change from baseline to three and six months in the KEDS summary score. The secondary outcome measures were the mean changes from baseline to three and six months in the SOC summary score, the TAS summary score and self-rated health.
All data were presented using descriptive statistics, i.e. mean and standard deviation for continuous variables and frequency and percentage for categorical variables. All main outcome variables were further analyzed using linear mixed models, with group (intervention and control) and time (baseline, 3 months and 6 months) as fixed factors. Results were presented as marginal means, i.e. the estimated mean values adjusted for the factors included in the analysis model. The treatment effect size was defined as the difference between the intervention and control groups in these estimated, adjusted means for each of the primary and secondary outcome measures. All tests were two-tailed and p < 0.05 was regarded as statistically significant.
IBM SPSS version 22 was used for the statistical calculations. In the presentation of the results, the measured effect size was derived as the absolute difference between the active intervention and control groups for each of the outcome variables/endpoints.34
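For readers who want to reproduce this type of analysis outside SPSS, the sketch below fits a comparable linear mixed model in Python with statsmodels. The file and column names (subject, group, time, keds) are illustrative assumptions, not the study's actual dataset.

```python
# A minimal sketch of the linear mixed-models analysis described above,
# assuming a long-format table with one row per subject per timepoint.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("keds_long.csv")  # hypothetical file: subject, group, time, keds

# Group, time and their interaction as fixed factors; a random intercept
# per subject accounts for the repeated measures.
model = smf.mixedlm("keds ~ C(group) * C(time)", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # the group:time terms test the two-way interaction
```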
Results
Fifty-five participants were screened for this study; seven of them, who met the exclusion criterion of severe depression, were not included. In total, 48 participants were randomized into the study, aged between 41 and 70 years (mean 53.8, SD = 8.15).
For KEDS (exhaustion) there was a statistically significant two-way interaction (P < 0.001), with the mean decreasing from baseline to three and six months in the intervention group, whereas in the control group there was no change. The mean treatment effect, i.e. the mean difference between groups in favor of the intervention group, was 9.9 (SE = 3.0) at 6 months. See Table 1 and Figure 1a.
There was no difference in mean SOC (Sense of Coherence) scores between the groups (see Figure 1b). Further, the results revealed a statistically significantly more pronounced decrease in the intervention group than in the control group for the TAS total score (P = 0.007; mean treatment effect 5.4 (SE = 2.2) at 6 months in favor of the intervention group) and for difficulty describing (P = 0.004; 2.4 (0.9)), with a borderline effect for difficulty identifying (P = 0.051; 2.6 (1.3)) and no effect for externally oriented thinking (P = 0.334; 0.5 (0.8)). See Table 1 and Figure 1c. There was also a statistically significant difference between the groups in self-rated health (P < 0.001), with mean scores increasing over time in the intervention group but decreasing in the control group (see Figure 1d).
Table 1
KEDS (Karolinska Exhaustion Disorder Scale), Sense of Coherence, TAS (Toronto Alexithymia Scale) subscales and total score, and SRH (self-rated health) at baseline, month 3 and month 6.

                               Control group (n=12)        Intervention group (n=36)
                               n     Mean    SD            n     Mean    SD
KEDS
  Baseline                     12    32.7    8.2           35    31.7    8.4
  Month 3                      12    34.9    9.2           34    23.6    8.6
  Month 6                      12    33.9    8.7           33    23.7    10.1
Sense of Coherence
  Baseline                     12    117.2   29.9          33    118.0   28.0
  Month 3                      12    112.8   30.3          33    121.1   30.5
  Month 6                      12    115.1   24.2          34    123.9   28.2
Difficulty Describing
  Baseline                     12    14.8    4.3           34    14.2    4.6
  Month 3                      12    15.3    3.6           31    12.7    4.6
  Month 6                      12    15.2    4.1           34    11.8    3.9
Difficulty Identifying
  Baseline                     12    20.0    6.0           34    20.3    6.3
  Month 3                      12    20.6    6.5           31    19.0    6.9
  Month 6                      12    20.0    6.1           34    17.4    5.0
Externally Oriented Thinking
  Baseline                     12    14.9    4.2           34    13.8    4.4
  Month 3                      12    15.4    4.3           31    13.9    3.9
  Month 6                      12    14.6    4.8           34    13.2    3.5
TAS total
  Baseline                     12    49.7    13.1          34    48.3    13.4
  Month 3                      12    51.3    13.3          31    45.6    13.9
  Month 6                      12    49.8    13.9          34    42.4    10.8
Self-rated Health
  Baseline                     12    5.2     1.5           36    4.8     1.9
  Month 3                      12    4.6     1.9           36    6.0     1.9
  Month 6                      12    3.6     1.6           35    6.4     1.9
Figure 1a
Marginal means and 95 % confidence intervals for the KEDS (exhaustion) scale by group and time. Results were based on the linear mixed models analysis adjusted for baseline.
Figure 1b
Marginal means and 95 % confidence intervals for the sense of coherence (SOC) scale by group and time. Results were based on the linear mixed models analysis adjusted for baseline.
Figure 1c
Marginal means and 95 % confidence intervals for the Toronto Alexithymia Scale (TAS) by group and time. Results were based on the linear mixed models analysis adjusted for baseline.
Figure 1d
Marginal means and 95 % confidence intervals for the self-rated health scale (SRH) by group and time. Results were based on the linear mixed models analysis adjusted for baseline.
Discussion
The results show that exhaustion, as measured by the KEDS (Karolinska Exhaustion Disorder Scale), decreased in the intervention group compared with the control group. For the total score of the TAS (Toronto Alexithymia Scale) there was a statistically significant decrease in the intervention group compared with the controls, i.e. after three months of cultural activities the participants had begun to improve their differentiation of feelings and emotions. The same pattern was seen for self-rated health, which improved in the intervention group. However, there was no significant difference between the groups in the development of sense of coherence.
It seems that the different cultural activities have helped the participants become more aware of their feelings and sensations; to describe and to identify feelings. It is not easy to explain the positive results based on one clear paradigm. It is likely a mixture of psychological, neurological and social factors or changes that interact in a complex manner.
Previous studies have discussed the theory of the emotional brain: cultural modalities can "surprise" the cognitive brain unconsciously. LeDoux20 discusses the upper/slower and the lower/faster pathways in the brain. Emotionally loaded visual and auditory stimuli are transmitted along both types of pathway. Musical impulses, for example, evoke activity in the emotional brain much more rapidly than in the cognitive brain; however, impulses then spread secondarily from the emotional to the cognitive brain. This can trigger the participants' awareness of different emotions and may start a process of differentiation, possibly initiating a change of life course. By using different cultural activities that the participants would not normally try, the differentiation process may be amplified. This suggests that cultural activities can bypass automated thinking and create new "pathways", with changes in behavior and increased well-being.
In other studies we have observed that a mixture of different cultural activities can increase the amount of stimuli affecting a broader network of emotional correlates.14,16,18 An interesting long-term decrease in alexithymia35 was associated with lowered blood pressure and a decrease in sick leave. By allowing the participants to try new cultural stimuli we may have helped them change old habits. A hypothesis is that this may also have contributed to the observed decrease in exhaustion.
Why did we not see any increase in sense of coherence in the intervention group? Patterns of thought and behavior are very difficult to change, although it can also be argued that the participants in the control group found a new sense of coherence simply by being invited to answer questions about themselves and being focused upon. Many of the participants did not go out spontaneously because of their fear of socializing. Some of them described their situation as black or white, not wanting to change routines that made them feel less safe.36
Despite the fact that the health care staff did not participate in the culture palette, they were also affected by the cultural activities.36 This may be a mirroring effect, or emotional contagion, which may also operate between the participants and their staff. The phenomenon of passive cultural activation has previously been presented in the literature,37 and there appear to be possible well-being effects of simply watching dance or visiting a theatre,6,10 which may explain the positive response of the health care staff to the culture palette. The results of this study underscore the importance of regarding the health care system as a whole, in which patients, health care staff and visiting relatives affect each other. Empathic behaviors spread in all directions, and we need to be aware of how we project ourselves when working in a caring context.
Modified “culture palettes” and “train the trainer” programs and workshops are now in use in Sweden, inspiring cultural producers to further develop the health care system. A cultural health box, containing six different books about cultural activities and the research behind them, has been distributed to all health care centers in Sweden.38
Developing and adapting cultural programs to fit other groups of participants could cross-fertilize health care through culture production.
Limitations
This study was limited to women with exhaustion symptoms; further research on the implementation of cultural activities in different groups of participants, including both sexes, is therefore needed before the results can be generalized. Another limitation is that we did not control for outside activities, such as walks in nature; this study offered only indoor cultural activities.
Conclusion
The cultural activities in this study helped exhausted women understand what makes them feel vital, confirmed, curious, healthy and creative. The study also illustrated that there can be synergistic effects when cultural activities are brought into the health care system.36
An increasing number of term deliveries undergo induction of labour (IOL). The figure is as high as 1 in 4 in developed countries, making IOL one of the most common procedures a woman may experience in pregnancy.1 IOL may be achieved with pharmacological, mechanical or surgical methods.1,2 Mechanical methods were the first used to ripen the cervix and induce labour. The National Institute for Health and Clinical Excellence (NICE) does not recommend the routine use of mechanical methods for IOL, as only small, heterogeneous studies were available at the time of its publication more than half a decade ago.2 Since then, however, there has been increasing evidence of the safety and efficacy of mechanical IOL, and subsequent publications, including those from the World Health Organization (WHO) and the Cochrane Database of Systematic Reviews, support the use of balloon catheters for IOL.1,3 It is therefore important to revisit the role of mechanical methods of IOL.
The Cochrane Database of Systematic Reviews concluded that mechanical methods of induction of labour carry a lower risk of uterine hyperstimulation than prostaglandins, with similar caesarean section rates and rates of delivery within 24 hours. Furthermore, mechanical methods reduce the risk of caesarean section when compared with oxytocin induction of labour.3 This review is consistent with an earlier systematic review.4
Both Pfizer's Prostin (PGE) and Cook Medical's Cervical Ripening Balloon (CRB) are licensed for IOL. While the use of Prostin is standard care in Singapore, the CRB has not been used routinely. We therefore conducted a study to evaluate the use of the CRB for IOL in Singapore.
Methods
A prospective randomised controlled study was conducted in a tertiary referral maternity unit in Singapore. Pregnant women aged 21-40 years with a singleton pregnancy, no major fetal anomaly, suitability for vaginal delivery and a planned IOL scheduled at 37+0 to 41+6 weeks' gestation were invited to join the study. Cases were excluded if, at the start of the planned IOL, they were in spontaneous labour, had a cervical dilatation of ≥3 cm, had confirmed rupture of membranes, had an abnormal cardiotocogram (CTG), had a scarred uterus (such as from a previous caesarean section), had malpresentation in labour, or if caesarean delivery was indicated. Women who were unable to give consent, or who had withdrawn their consent to participate in the trial, were also excluded from the study.
All suitable pregnant women receiving team care who required elective IOL were identified in the antenatal clinic, antenatal wards or labour ward by the attending doctor or clinical research coordinator (CRC). Following routine counselling for IOL by the attending doctor, the woman was offered participation in the study, and a member of the research team counselled her and obtained informed consent. Each woman was made to understand that participation in the study was voluntary, did not affect her medical care, and that consent could be withdrawn at any stage of the study. Women who were uncertain about participating were offered the opportunity to join at a follow-up visit or on the day of IOL after further consideration. A patient information leaflet on IOL, as well as information about the study, was made available to participants.
On the day of the IOL, participants were reviewed for the appropriateness of the IOL and of their participation in the study. A presentation scan, a vaginal examination for cervical dilatation and a CTG were performed. Suitable participants were then randomly allocated to PGE or CRB IOL in the labour ward. Randomization was achieved with third-party sealed-envelope allocation. A total of 75 envelopes containing a folded paper with the words "Cervical Ripening Balloon" and another 75 identical envelopes containing a folded paper with the word "Prostin" were prepared and shuffled after sealing. These randomized envelopes were then labelled sequentially with randomization allocation numbers from 1 to 150. Each participant who underwent randomization was allocated to the next numbered envelope, which contained the allocation to either CRB or PGE IOL.
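The envelope scheme described above is straightforward to simulate; the short Python sketch below mirrors the counts and labels in the text, purely as an illustration of the allocation logic rather than the trial's actual procedure.

```python
# Minimal simulation of the sealed-envelope randomization described above:
# 75 "Cervical Ripening Balloon" and 75 "Prostin" slips, shuffled after
# sealing and then numbered sequentially from 1 to 150.
import random

envelopes = ["Cervical Ripening Balloon"] * 75 + ["Prostin"] * 75
random.shuffle(envelopes)

# Label the shuffled envelopes with allocation numbers 1-150.
numbered = {i + 1: arm for i, arm in enumerate(envelopes)}

# Each newly randomized participant opens the next numbered envelope.
for allocation_number in (1, 2, 3):
    print(allocation_number, numbered[allocation_number])
```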
Participants undergoing CRB IOL had the CRB inserted after cleaning of the vulva and vagina with Cetrimide solution. The uterine and vaginal balloons of the CRB were gradually inflated with normal saline, initially to 40 ml and 20 ml respectively, with a further 20 ml added to each an hour later until each balloon contained 80 ml. CTG monitoring was undertaken before and after each inflation for at least 20 minutes. If the participant was not in labour after complete inflation of the balloons, she was transferred to the antenatal wards to rest before removal of the CRB in the labour ward, where possible 12 hours after insertion.
Participants undergoing PGE IOL had a 3 mg Prostin tablet inserted into the posterior fornix after cleaning of the vulva with Cetrimide solution. CTG monitoring was undertaken for at least 40 minutes after PGE insertion. If the participant was not in labour, she was transferred to the antenatal wards. If there was no response to the first PGE, a repeat dose was given after 6 hours in the labour ward when possible.
Participants underwent artificial rupture of membranes (ARM) and/or oxytocin infusion augmentation of labour as necessary. If the participant was not in labour, or ARM was not possible, after removal of the CRB or after 2 cycles of PGE, she was considered to have had a failed IOL and left the study protocol, with subsequent management determined by the attending specialist. This would typically involve insertion of a third or a first PGE in the PGE or CRB arm respectively.
Upon delivery, a member of the research team interviewed the participant and obtained demographic, labour and delivery outcome data from the clinical notes. Pain and maternal satisfaction scores and comments were also recorded by interviewing the participants in the post-natal period; these findings will be discussed separately.
The data were collected on a pro forma and entered into an Excel spreadsheet, then analysed using IBM SPSS Statistics version 19.
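As a rough illustration of the kinds of comparisons reported in Tables 1-3, the sketch below runs a two-sample t-test and a Fisher exact test with SciPy. The continuous values are made-up placeholders; the 2x2 table uses the augmentation counts from Table 2.

```python
# Illustrative versions of the tests reported in Tables 1-3:
# Student t-test for continuous outcomes, Fisher exact for 2x2 counts.
from scipy import stats

# Hypothetical per-group measurements (e.g. IOL-to-4-cm times in hours).
crb_times = [12.1, 15.3, 14.8, 13.9, 16.2]
pge_times = [22.4, 25.1, 19.8, 28.0, 21.6]
t_stat, p_value = stats.ttest_ind(crb_times, pge_times)
print(f"t-test: p = {p_value:.3f}")

# Augmentation use: 24/31 in the CRB arm vs 26/52 in the PGE arm (Table 2).
contingency = [[24, 31 - 24], [26, 52 - 26]]
odds_ratio, p_value = stats.fisher_exact(contingency)
print(f"Fisher exact: p = {p_value:.3f}")
```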
This study was approved by the SingHealth centralised institutional review board with the reference number of 2013/553/D.
Results
A total of 138 women were approached to join the study, of whom 40 (29.0%) declined. There were no significant differences in maternal age (27.8 ± 5.4 vs 28.7 ± 5.2 years; p = 0.373), ethnicity, proportion of primigravidae (62.5% vs 53.1%; p = 0.349), weight (61.2 ± 15.4 vs 64.4 ± 13.8 kg; p = 0.228), BMI (24.8 ± 5.8 vs 25.3 ± 5.0 kg/m2; p = 0.646) or primary indication for IOL between women who declined and those who accepted enrolment, respectively.
The remaining 98 women were enrolled in the study. Eighty-seven women were randomized after excluding 6 women in spontaneous labour, 1 woman with non-cephalic fetal presentation and 1 woman with confirmed rupture of membranes on admission for their IOL, as well as 3 other cases in which the women presented for IOL when the research team was unavailable (Figure 1).
Figure 1. Flow diagram of recruitment, randomisation and completion status
In the CRB arm, one woman withdrew from the study after 8 hours 55 minutes because she found the discomfort unbearable. Another woman was excluded when she was found to have spontaneous version to breech in labour. One woman randomized to PGE did not receive it as she went into spontaneous labour prior to IOL. Another woman in the PGE arm was subsequently found to be only 36+3 weeks' gestation and was therefore excluded from analysis (Figure 1). The remaining 83 cases were analysed; their characteristics are shown in Table 1.
The induction-to-vaginal-delivery time and the vaginal delivery rate were similar in both arms of the study (Table 2). Compared with the PGE arm, participants undergoing CRB IOL were faster in achieving cervical dilatation ≥4 cm (14.4 ± 5.7 vs 23.5 ± 16.6 hr; p = 0.001) and in requesting epidural analgesia (16.4 ± 5.4 vs 23.2 ± 15.8 hr; p = 0.040), and were more likely to require oxytocin infusion for augmentation (77.4% vs 50.0%; p = 0.020). Uterine hyperstimulation, defined as >5 contractions every 10 minutes, was found only in the PGE arm. Cervical dilatation from 0-2 cm to ≥4 cm was achieved without regular contractions in 2 (6.9%) cases in the CRB arm and 1 (2.4%) case in the PGE arm. The mean frequency of uterine contractions at cervical dilatation ≥4 cm was 2.5 ± 1.4 per 10 minutes in the CRB arm compared with 3.8 ± 1.4 per 10 minutes in the PGE arm (p < 0.001). No case of uterine rupture was observed.
There was 1 (3.2%) case of failed CRB IOL, in which both the uterine and vaginal balloons were found in the vagina, suggesting that placement of the uterine balloon was either suboptimal or that the balloon was expelled after placement. The woman went on to have Prostin and delivered vaginally. Of the 9 (17.3%) cases in the PGE group that did not respond after 2 cycles, all went on to have a third Prostin successfully, except for 2 women who required Caesarean section for persistent failed IOL.
The birth outcomes of both arms of the study were also similar, with no case of stillbirth (Table 3). There were 2 cases of neonatal intensive care unit (NICU) admission in the PGE arm for continuous positive airway pressure therapy; both were discharged from the NICU within 24 hours.
Table 1. Characteristics of participants undergoing cervical ripening balloon (CRB) and Prostin (PGE) induction of labour.
                                      CRB (n = 31)      PGE (n = 52)      p
Maternal age, years (83) 1            28.2 ± 5.3        28.7 ± 5.0        0.646
Ethnicity (83) 2                                                          0.222
  · Chinese                           35.5% (11)        42.3% (22)
  · Malay                             54.8% (17)        36.5% (19)
  · Indian                            3.2% (1)          15.4% (8)
  · Others                            6.5% (2)          5.8% (3)
Primigravidae (83) 2                  61.3% (19)        44.2% (23)        0.174
Weight, kg (83) 1                     64.4 ± 15.0       63.9 ± 13.2       0.861
BMI, kg m-2 (83) 1                    25.5 ± 5.0        25.0 ± 5.1        0.706
Pre-delivery Hb, g dl-1 (80) 1        11.6 ± 1.8        12.0 ± 1.3        0.211
GBS positive (79) 2                   22.6% (7)         21.2% (11)        0.204
Gestational age, weeks (83) 1         39.4 ± 1.1        39.2 ± 1.9        0.357
Cervical dilatation, cm (83) 1        1.0 ± 0.7         0.9 ± 0.7         0.954
Primary indication for IOL (83) 2                                         0.108
  · Decreased fetal movement 3        -                 11.5% (6)         0.082
  · Post dates 3                      54.8% (17)        32.7% (17)        0.065
  · Gestational diabetes 3            16.1% (5)         13.5% (7)         0.756
  · Impending macrosomia 3            -                 1.9% (1)          0.526
  · IUGR 3                            3.2% (1)          -                 0.137
  · Low amniotic fluid index 3        19.4% (6)         34.6% (18)        0.089
  · Maternal request 3                3.2% (1)          5.8% (3)          0.489
  · Pre-eclampsia 3                   3.2% (1)          -                 0.373
1 Values are mean ± SD, p calculated with Student t-test; 2 Values are percentage (n), p calculated with Pearson chi-square test; 3 Values are percentage (n), p calculated with Fisher’s exact test.
Table 2. Labour outcomes of participants undergoing cervical ripening balloon (CRB) and Prostin (PGE) induction of labour.
                                          CRB (n = 31)     PGE (n = 52)     p
IOL to ≥4 cm dilatation, hr (78) 1        14.4 ± 5.7       23.5 ± 16.6      0.001
IOL to full dilatation, hr (66) 1         20.8 ± 6.1       24.8 ± 15.7      0.150
IOL to vaginal delivery, hr (63) 1        21.2 ± 6.8       25.6 ± 16.1      0.136
Duration of 2nd stage, hr (63) 1          0.9 ± 2.9        0.8 ± 0.9        0.741
Delivery within 24 hr (83) 2              77.3% (17)       61.0% (25)       0.265
Failed IOL (83) 3                         3.2% (1)         17.3% (9)        0.082
Number of PGE used (83) 2                                                   <0.001
  · 0                                     96.8% (30)       -
  · 1                                     3.2% (1)         53.8% (28)
  · 2                                     -                28.8% (15)
  · 3                                     -                17.3% (9)
Augmentation use (83) 3                   77.4% (24)       50.0% (26)       0.020
Epidural use (83) 3                       58.1% (18)       55.8% (29)       1.000
  · IOL to epidural use, hr (47) 1        16.4 ± 5.4       23.2 ± 15.8      0.040
  · Epidural use to delivery, hr (47) 1   9.2 ± 4.1        7.0 ± 3.8        0.065
Contractions 1
  · At IOL (83)                           0.2 ± 0.6        0.2 ± 0.5        0.579
  · 3 hr after IOL (81)                   2.0 ± 1.9        1.6 ± 1.9        0.451
Contractions >5 every 10 min 3
  · 30 min after IOL (81)                 -                -                -
  · 3 hr after IOL (81)                   -                2.0% (1)         1.000
Vaginal delivery (83) 3                   71.0% (22)       78.8% (41)       0.438
Indication for LSCS (20) 2                                                  0.513
  · Failed IOL                            -                18.2% (2)
  · FTP in 1st stage of labour            55.6% (5)        36.4% (4)
  · FTP in 2nd stage of labour            22.2% (2)        9.1% (1)
  · NRFS                                  11.1% (1)        27.3% (3)
  · FTP and NRFS                          11.1% (1)        9.1% (1)
1 Values are mean ± SD, p calculated with Student t-test; 2 Values are percentage (n), p calculated with Pearson chi-square test; 3 Values are percentage (n), p calculated with Fisher exact test.
Table 3. Birth outcomes of participants undergoing cervical ripening balloon (CRB) and Prostin (PGE) induction of labour.
                              CRB (n = 31)     PGE (n = 52)     p
Male fetus (83) 2             51.6% (16)       42.3% (22)       0.496
Birth weight, g (83) 1        3,166 ± 478      3,094 ± 417      0.472
Apgar at 5 min <7 (83)        -                -                -
Meconium aspiration (83)      -                -                -
Pyrexia in labour (83) 3      6.5% (2)         5.8% (3)         1.000
NICU admission (83) 2         -                3.8% (2)         0.526
ITU admission (83)            -                -                -
1 Values are mean ± SD, p calculated with Student t-test; 2 Values are percentage (n), p calculated with Pearson chi-square test; 3 Values are percentage (n), p calculated with Fisher exact test.
Discussion
To the best of our knowledge, this is the first randomized controlled study to assess the use of the CRB for IOL in Singapore. Our study concurs with the published literature that CRB and PGE have similar rates of vaginal delivery and of delivery within 24 hours. Both methods are effective and safe, with PGE carrying a higher risk of uterine hyperstimulation and of Caesarean section for failed IOL.
Pharmacological induction of labour using PGE is the most established form of IOL. However, it is important to be able to offer alternative methods, particularly to women with hypersensitivity or allergy to PGE. PGE can cause bronchospasm, complicating asthma, a medical condition which affects 4-12% of pregnant women.5,6 Similarly, caution should be exercised in the use of PGE in women with other common medical conditions such as hypertension and epilepsy.
In addition, women may not respond to PGE for IOL, or PGE may result only in uterine tightenings which do not lead to cervical dilatation. In these situations the CRB may be considered as an adjunct for IOL, to avoid Caesarean section for 'failed IOL'.
The risk of uterine hyperstimulation and the need for a repeat dose in 6 to 8 hours for PGE typically require the women to be admitted for IOL. The use of CRB does not require planned intervention until 12 hours later. This potentially allows an outpatient IOL if further studies support its safety in this aspect.
The application of PGE is relatively straightforward and is already performed by both doctors and midwives. The insertion of the CRB may, however, be considered too invasive for midwives, limiting the type and hence the availability of staff able to commence IOL. We have explored the learning curve in the insertion of the CRB and will discuss this separately.
Conclusion
Both CRB and PGE are effective methods for IOL at term. Each method has its own benefits and limitations. The availability of both methods in an obstetric unit allows the clinician to choose the most appropriate form of IOL, provides a complementary method of IOL, and offers women choice in their IOL.
For decades the genus Acinetobacter has undergone several taxonomic modifications. A large number of non-fastidious, aerobic, Gram-negative bacteria (GNB) are included in this genus. In recent years these organisms have been evolving into highly resistant forms, resulting in difficult-to-treat nosocomial infections1 and healthcare-associated infections.2 Acinetobacter is also a major cause of invasive infections in children, resulting in difficult-to-treat urinary tract infections (UTIs), skin infections and septicemia.3 One identified resistance mechanism in carbapenem-resistant Acinetobacter spp. is the production of the metallo-β-lactamase (MBL) enzyme.4 Various published studies have shown that Acinetobacter displays distinct mechanisms of resistance against different antimicrobials: β-lactams are inactivated by enzymatic degradation, quinolones are rendered ineffective by genetic mutations preventing the antibiotic from binding to its target site, and resistance to aminoglycosides arises when strains acquire genes involved in enzymatic modification of the drug.1
Although polymyxin resistance in Acinetobacter spp. had been reported, the specific cause of this resistance was unknown until 2008. In 2013, one study detected for the first time hetero- and adaptive resistance due to mutation in a specific gene.1,21 Hence, the aim of the current study was to evaluate the trend of the sensitivity/resistance pattern of Acinetobacter spp. against broad-spectrum antibiotics.
Materials and Methods
The objective of the study was to evaluate the sensitivity of Acinetobacter spp. to eight broad-spectrum antibiotics. The Kirby-Bauer disc diffusion method was used, following the standard procedures laid down by CLSI 2013.6 A total of 52 isolates were collected from February to March 2014 from patients admitted to tertiary care hospitals in Karachi. The isolates were identified by routine laboratory procedures.
Antimicrobial agents and medium: Standard (Oxoid) discs of Amikacin (30 µg), Cefoperazone (75 µg), Ceftriaxone (30 µg), Ciprofloxacin (5 µg), Colistin (10 µg), Fosfomycin (50 µg), Imipenem (10 µg) and Polymyxin B (300 units), together with Mueller-Hinton agar (Oxoid, UK) and Mueller-Hinton broth (Oxoid, UK), were used.
0.5 McFarland standard: The inoculum was grown at 37°C for 2-6 hours; broth cultures were incubated until a turbidity matching the 0.5 McFarland standard was achieved.
Inoculation of test plates: The plates were inoculated with the Acinetobacter spp. culture using sterile cotton swabs. After each swab was dipped into the inoculum suspension, excess fluid was removed. Once the inoculum had dried, the antibiotic discs were placed onto the agar surface with sterile forceps.15
Incubation of test plates: After application of the antibiotic discs, the plates were incubated for 24 hours and the results were interpreted according to CLSI standards.5,6 Zone diameter interpretive standards for the antibiotics used are shown in Table 2.5
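To make the interpretation step concrete, the sketch below classifies a measured zone diameter as resistant, intermediate or sensitive using a few of the breakpoints from Table 2; the function and the breakpoint dictionary are our own illustrative scaffolding, not part of the CLSI procedure itself.

```python
# Illustrative mapping of zone diameters (mm) to R/I/S categories using
# selected breakpoints from Table 2: (resistant_max, sensitive_min);
# diameters falling between the two are interpreted as intermediate.
BREAKPOINTS = {
    "Amikacin":    (14, 17),
    "Ceftriaxone": (13, 21),
    "Imipenem":    (13, 16),
}

def interpret(antibiotic: str, zone_mm: int) -> str:
    resistant_max, sensitive_min = BREAKPOINTS[antibiotic]
    if zone_mm <= resistant_max:
        return "Resistant"
    if zone_mm >= sensitive_min:
        return "Sensitive"
    return "Intermediate"

print(interpret("Amikacin", 15))  # 15-16 mm falls in the intermediate band
```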
Control strain: Escherichia coli ATCC 25922 was used as a control strain to maintain the accuracy and precision of the procedures.
Results
Of all the samples, 61.5% were obtained from male patients. Infections caused by Acinetobacter spp. were most prevalent, in both genders, in the 51-75 years age group. The most frequent site of isolate collection was tracheal aspirate (55.76%) in both genders, and the second highest percentage of isolates was obtained from sputum (19.23%), as shown in Table 1. Colistin and Polymyxin B were found to be equally effective against Acinetobacter spp., each inhibiting 98% of isolates, while 19.23% of isolates were sensitive to Amikacin. The isolates showed the highest degree of resistance against Imipenem (98%), followed by Cefoperazone (94.23%) and Ceftriaxone (92.3%). Surprisingly, 32.69% of isolates exhibited intermediate sensitivity (IS) to Fosfomycin, as indicated in Table 3 and Figure 1.
Table 1: Age and gender specific distribution of Acinetobacter spp. among patients
Age (years)    Male, n=32 (61.5%)    Female, n=20 (38.46%)
0-25           10                    6
26-50          5                     2
51-75          12                    11
76-100         5                     1
Table 2: Zone diameter interpretive standards (CLSI) for the antibiotics tested against Acinetobacter spp.

Antibiotic       Disc Content    Zone of Inhibition (mm)
                                 Resistant    Intermediate    Sensitive
Amikacin         30 µg           ≤14          15-16           ≥17
Cefoperazone     75 µg           ≤15          16-20           ≥21
Ceftriaxone      30 µg           ≤13          14-20           ≥21
Ciprofloxacin    5 µg            ≤15          16-20           ≥21
Colistin*        10 µg           ≤11          -               ≥17
Fosfomycin*      50 µg           ≤12          13-15           ≥16
Imipenem         10 µg           ≤13          14-15           ≥16
Polymyxin B*     300 units       ≤13          -               ≥19

*Since interpretive standards for Colistin, Fosfomycin and Polymyxin B against Acinetobacter spp. are not established in the CLSI 2013 manual, zone diameter interpretive standards for Enterobacter spp. and E. coli were used.20
Figure 1: Susceptibility pattern of Acinetobacter spp. against broad spectrum antibiotics
Table 3: Total % efficacy of different antibiotics among Acinetobacter spp. isolates (N = 52)

S. No.   Antibiotic      Disc Content   Resistant, n (%)   Intermediate, n (%)   Sensitive, n (%)
1        Amikacin        30 µg          42 (80.76)         0                     10 (19.23)
2        Cefoperazone    75 µg          49 (94.23)         0                     3 (5.76)
3        Ceftriaxone     30 µg          48 (92.3)          0                     4 (7.69)
4        Ciprofloxacin   5 µg           47 (90.38)         1 (1.9)               4 (7.69)
5        Colistin        10 µg          1 (1.9)            0                     51 (98)
6        Fosfomycin      50 µg          34 (65.38)         17 (32.69)            1 (1.9)
7        Imipenem        10 µg          51 (98)            0                     1 (1.9)
8        Polymyxin B     300 units      1 (1.9)            0                     51 (98)
Discussion
Our study shows that the Acinetobacter spp. isolates were highly resistant to Cefoperazone (94.23%). This finding is substantiated by research that found Cefoperazone to be effective only when used in combination.7,8
We also observed that only 19% of isolates were sensitive to Amikacin, which contradicts the findings of Liu et al (2013),3 who observed 100% efficacy. They also found that 82% of isolates were inhibited by Imipenem, that fluoroquinolones were effective against 70% of all isolated organisms, and that Cefoperazone was least effective.
Organisms isolated from sputum showed a high degree of resistance to most of the antibiotics; Zheng W and Yuan S observed similar results.9 Nwadike et al (2014)10 found a high prevalence of resistance among Acinetobacter spp. isolates against Ciprofloxacin (100%) and Amikacin (50%).
Polymyxin B inhibited 98% of isolates, similar to the figures of Haeili et al (2013),1 who observed 95.5% susceptibility to Polymyxin B. Colistin was equally effective; Trottier et al (2007)12 observed 100% susceptibility of A. baumannii to Colistin, and Vakili et al (2014)13 likewise found a low rate (11.6%) of Colistin resistance.
Colistin has thus emerged as a viable choice for the treatment of multidrug-resistant Acinetobacter strains.13,14 In our study, 98% of isolates were resistant to Imipenem; this supports the work of Khajuria et al (2014),16 who also reported reduced efficacy. Our findings contradict the study by Tripathi et al (2014),17 who reported that Imipenem was a highly effective drug in comparison to other broad-spectrum antibiotics. Fosfomycin exhibited unusual results in our study: 32% of Acinetobacter spp. isolates were intermediately sensitive while 65% were resistant. Previous studies, however, suggested that Fosfomycin was a good option for treating infections caused by Acinetobacter spp.18 Zhang et al (2013)19 reported that Fosfomycin used alone was highly ineffective in the treatment of pan-drug-resistant Acinetobacter baumannii (PDR-Ab). Another study revealed that Acinetobacter spp. have developed adaptive resistance against polymyxin.21
Acinetobacter spp. are emerging as resistant bacteria and a common cause of nosocomial and other healthcare-associated infections. Hospital administrations need to take the necessary measures to maintain environmental and personnel hygiene in line with current good practices. Pharmacists should educate patients about the drawbacks of self-medication and of not completing medication courses, both of which are contributing to the development of resistant bacterial pathogens.
Surgery and anaesthesia have traditionally been viewed as expensive, resource-intensive and requiring highly specialized training.1 This misconception has led to surgery and anaesthesia taking a back seat to public health, maternal and child health, and infectious diseases in global health.2 Surgery has also been termed the "neglected stepchild of global health".3 These concepts have changed rapidly since it was found that surgical diseases contribute about 11% of Disability-Adjusted Life Years4 and would therefore benefit from the preventive and public health strategies necessary to achieve the Millennium Development Goals. The realization of the huge public health burden of surgical diseases in low- and middle-income countries (LMICs), and the fact that surgical services and treatment could be made cost-effective, led the World Health Organization (WHO) to launch the Global Initiative for Emergency and Essential Surgical Care (GIEESC) in 2005.5
The GIEESC is a global forum whose goal is to promote collaboration among diverse groups of stakeholders to strengthen the delivery of surgical services at the primary referral level in LMICs.5 Improvements in surgical services at the primary referral level in LMICs will equally require the provision of safe and effective anaesthesia, which in turn needs adequately trained human resources and essential health technologies. Surgical and anaesthesia service capacity has generally been very low in sub-Saharan Africa (SSA), as evidenced by surveys conducted in Ethiopia,6 Gambia,7,8 Ghana,9,10 Liberia,8,11 Malawi,12 Nigeria,13 Sierra Leone,8,14 Rwanda,15,16 Tanzania8,17 and Uganda.18 The survey from Nigeria was conducted among rural private hospitals and was administered to attendees at a conference of the Association of Rural Surgical Practitioners of Nigeria.13 This was done using the Personnel, Infrastructure, Procedures, Equipment and Supplies (PIPES) survey tool developed by the non-governmental organization Surgeons OverSeas (SOS),13 a tool designed to assess surgical capacity through the workforce, infrastructure, skills, equipment and supplies of health facilities in LMICs.13 The other surveys from SSA used a comprehensive survey tool designed by the Harvard Humanitarian Initiative, adapted from the WHO Tool for Situational Analysis to Assess Emergency and Essential Surgical Care as part of an international initiative to assess surgical and anaesthesia capacity in LMICs.
The present survey used a rapid assessment tool, the Lifebox Hospital Initial Needs Assessment questionnaire, together with another structured questionnaire, to assess anaesthesia services in public hospitals in Cross River State (CRS), Nigeria. Lifebox (www.lifebox.org) is a non-profit organization that saves lives by improving the safety and quality of surgical care in low-resource countries.19 Lifebox has trained more than 2000 anaesthesia providers and supplied more than 4200 pulse oximeters to more than 70 low-resource countries, thereby closing the operating room pulse oximetry gap in about 15 countries.19 The organization is supported by the World Federation of Societies of Anaesthesiologists (WFSA), the Association of Anaesthetists of Great Britain and Ireland, the Harvard School of Public Health and the Brigham and Women's Hospital in Boston, United States of America.19 This survey was primarily aimed at the secondary health care facilities owned and managed by the CRS Ministry of Health (MOH). The survey audit identifies the anaesthesia providers in CRS, their level of training and retraining, and the equipment available for providing safe anaesthesia and monitoring patients in the peri-operative period. The data also provide baseline information on, and gaps in, anaesthesia and surgical capacity as a first step for the CRS MOH initiative to improve surgical and anaesthesia services. This information is a stepping-stone for national and international assistance, since CRS is a relatively poor state in the Nigerian Federation.
Country and State overview
Nigeria, located in the West African sub-region, is the most populous African country, with a population of more than 160 million people.20 It is a Federal Republic with 36 states and a Federal Capital Territory, politically sub-divided into six geo-political zones: North-Central, North-Eastern, North-Western, South-Eastern, South-South and South-Western. There are 774 Local Government Areas (LGAs), where more than 60% of the population reside. The health care system is divided into three levels: primary, secondary and tertiary, with public and private health facilities operating at all levels. The primary healthcare facilities (health centres) are managed by the Local Governments, the secondary healthcare facilities (general hospitals) by the State Governments, and the tertiary facilities (University Teaching Hospitals and Federal Medical Centres) by the Federal Government. Health indicators for Nigeria are among the worst in the world, despite Nigeria being the sixth largest exporter of crude oil. The United Nations Human Development Index ranked Nigeria 156 out of 187 countries.21 In particular, Nigeria is one of the five countries contributing more than 50% of the global maternal mortality burden22 and one of the countries with the highest emigration of physicians and nurses to developed countries.23 Physicians and nurses who remain in Nigeria predominantly practice in urban cities, leaving the LGAs, most of them rural, with severe shortages of health manpower.
CRS, with a population of approximately 3.2 million and an area of 20,156 square kilometres, is located in the South-South geo-political zone.24 The state is bounded by the Republic of Cameroon to the east, Benue State to the north, Ebonyi State to the north-west and Akwa-Ibom State to the south.24 It is divided into 18 LGAs, with 18 general hospitals and 613 primary health centres. There is only one tertiary health facility, the University of Calabar Teaching Hospital (UCTH), located in Calabar, the capital city, which provides specialist care to the entire population. As CRS is a tourism state, the importance of safe anaesthesia as a component of safe surgery cannot be overemphasized.
Physician-anaesthetist, nurse-anaesthetist and surgery training programs in Nigeria
Physicians train as specialist anaesthetists or surgeons in a four-year program leading to the Fellowship in Anaesthesia (FMCA) or Surgery (FMCS) of the National Postgraduate Medical College of Nigeria (NPMCN), or the Fellowship in Anaesthesia or Surgery (FWACS) of the West African College of Surgeons (WACS). This follows a six-year university medical education program leading to the Bachelor of Medicine and Bachelor of Surgery degree, a one-year rotating internship and one year of compulsory National Youth Service. Most Fellows, after completion (average time to completion is 7-8 years), work in University Teaching Hospitals and Federal Medical Centres, all located in urban cities. Another training program for doctors, designed for primary and secondary healthcare, is the Diploma in Anaesthesia (D.A.) of the universities or the WACS, a 12-month program. There is no short training program in surgery.
Nurses are trained as nurse-anaesthetists in an 18-month post-basic nursing program. The basic nursing training program comprises three years of training in general nursing after completion of six years of secondary school education. There are now a few university degree programs leading to the Bachelor of Nursing Science (BSN) degree. All these nursing training programs lead to certification by the Nursing and Midwifery Council of Nigeria.
Rural-urban practice
Physician-anaesthetists (Fellows and Diplomates), nurse-anaesthetists and consultant surgeons are all concentrated in urban hospitals, leaving the rural areas and urban slums with a critical shortage of anaesthetic and surgical workforce. The majority of surgical and anaesthetic procedures in rural areas of Nigeria are therefore carried out by government-employed medical officers, with almost all anaesthesia provided by nurse-anaesthetists. In some very remote districts, Community Health Extension Workers (CHEWs) and Community Health Aides with little or no formal training in surgical care are the only health workers available to provide some form of surgical care. The Association of Rural Surgical Practitioners of Nigeria (ARSPON) has been making some effort to address this workforce gap in rural areas by providing short on-the-job training for medical officers to enable them to provide safe and affordable surgery to the rural population.13 The concept of surgical task-shifting to "non-physician clinicians" to address this rural-urban surgical workforce disparity, as is officially done in other LMICs of SSA,25 is not acceptable in Nigeria.
METHODOLOGY
A standardized questionnaire, the Lifebox Hospital Initial Needs Assessment Survey (Appendix 1), and another structured questionnaire (Appendix 2) were distributed to all 18 general hospitals (secondary health facilities) in CRS. All the general hospitals, which are the first referral hospitals in the districts, perform surgery. The site visits were conducted in April/May 2014, with permission from the CRS Honorable Commissioner for Health. The hospital surveys did not involve face-to-face interviews with the medical superintendents, hospital matrons or anaesthesia providers; the questionnaires were completed by the anaesthesia providers and medical superintendents in each of the hospitals visited. Each completed questionnaire was sent to the office of the Honorable Commissioner for Health at the MOH headquarters in Calabar, the capital city. The results are presented in frequency tables and charts.
RESULTS
A total of 16 well-completed questionnaires were received from the 18 general hospitals/secondary healthcare facilities visited (88.9% response rate). On average, between 3 and 53 surgeries are performed monthly in each of the hospitals (Table 1). The common procedures performed include herniorrhaphy, appendicectomy, caesarean section, myomectomy, prostatectomy and exploratory laparotomy (Table 1). There are no practicing physician anaesthesiologists or surgeons employed by the State MOH, except for one visiting consultant anaesthesiologist and five visiting consultant surgeons from the University of Calabar Teaching Hospital (UCTH) at the General Hospital, Calabar, which is located in the capital city of the State (Table 1). There are 13 nurse-anaesthetists distributed unevenly across the 16 hospitals (Table 1). There is no clinical officer cadre in the Nigerian healthcare system. Apart from the nurse-anaesthetists at the General Hospital in Calabar and the Dr Lawrence Henshaw Memorial Hospital, also in Calabar, the nurse-anaesthetists have had no refresher course or in-service training in the past two years. In the 16 general hospitals, the commonest anaesthetic technique used is total intravenous anaesthesia (TIVA) with Ketamine (Table 1).
Table 1: Summary of audit of anaesthesia services in the 16 district hospitals in Cross River State, April/May 2014
GA: General Anaesthesia. ETT: Endotracheal Intubation. C/S: Caesarean Section. CSE: Combined Spinal and Epidural anaesthesia.
Basic anaesthetic equipment such as anaesthetic machines, oxygen cylinders, suction machines and pulse oximeters was lacking in most of the hospitals visited (Box 1).
Box 1. Summary of equipment in the 16 general hospitals, Cross River State, Nigeria: April-May 2014
· 10% of the hospitals had a pulse oximeter
· 20% of the hospitals had oxygen cylinders
· 20% of the hospitals had suction machines
· 30% of the hospitals had anaesthetic machines
· 80% of the hospitals had recovery beds
· 100% of the hospitals perform surgery
The WHO Surgical Safety Checklist questions were administered to the hospital management teams at the district hospitals (Box 2). The responses show that the surgical teams had never used the WHO checklist, had never received training in it, and that the checklist was not available in the operating rooms, although all surgical personnel would like to receive training on the WHO checklist and pulse oximetry.
Box 2. World Health Organization (WHO) Surgical Safety Checklist information
· How often do the surgical teams at your hospital use the WHO Surgical Safety Checklist? NEVER
· Has your hospital received training in the WHO Surgical Safety Checklist? NO
· Is the WHO Surgical Safety Checklist available in your operating rooms? NO
· Would you like to receive training in pulse oximetry and the WHO checklist? YES
DISCUSSION
This survey aimed to provide a quick assessment of anaesthesia and surgical services in public hospitals in CRS, Nigeria. The data show gross shortages of anaesthesia and surgical providers in all 16 general hospitals. There were no consultant anaesthetists, diplomate anaesthetists or consultant surgeons employed by the CRS MOH. There were only 13 nurse-anaesthetists working in the 16 general hospitals, plus one visiting consultant anaesthetist and five visiting consultant surgeons at the General Hospital, Calabar, the capital city. In six of the hospitals, there were no nurse-anaesthetists providing care for the surgical procedures being conducted. As reported in many surveys in SSA,6,18,26 most of the procedures in all the hospitals are performed by generalist medical doctors and general nurses, many without any postgraduate training in surgery or anaesthesia.
The gross inadequacy of the anaesthetic workforce in this survey reflects the situation in many of the 774 LGAs (districts) in the Federal Republic of Nigeria, because many of the LGAs are rural and studies have indicated the general unwillingness of most health workers to take up jobs in rural hospitals. The lack of specialist anaesthetists in peripheral hospitals in most Nigerian districts therefore requires a re-direction of the training programs for doctors in Nigeria, with greater emphasis on the shorter training programs designed for the primary and secondary healthcare levels. In addition, annual refresher courses should be made mandatory for nurse-anaesthetists, especially those practicing in rural areas.
A recent review of the met and unmet need for surgical care in rural SSA, where district and rural hospitals are the main providers of care, shows a huge burden.27 An important finding is the discrepancy between surgical care needs and provision.27 Since the majority of the population in SSA reside in rural areas, surgical services at this level need to be strengthened; this is the first of the four recommendations of the Bellagio Essential Surgery Group.28 Many of the surveys using the WHO Situational Analysis Tool have described the lack of capacity in many district hospitals to meet local surgical and anaesthesia needs.6-18 One study, using pulse oximeter availability as a measure of operating room resources, showed that between 58.4% and 78.4% of operating rooms in West Africa, East Africa and Central SSA do not have pulse oximeters,29 a finding clearly echoed by our own rapid survey. Three factors are responsible for these findings: lack of resources, lack of manpower and the need for training.27 The need for training to improve the quality of the surgical and anaesthesia providers at district hospitals is the third recommendation of the Bellagio Essential Surgery Group.28
Training programs and improvement of facilities at district hospitals have been shown to increase the number of operations performed.27 The presence of a visiting consultant anaesthetist in a district hospital has also been shown to increase the scope of anaesthesia services during the visiting period,30 and such visits leave local staff more knowledgeable in the care of their patients, especially peri-operative care.30 It has been advocated that developing countries in SSA, particularly Nigeria, should concentrate more on shorter training programs in surgery and anaesthesia at their current level of development.31,32 This was demonstrated by Sani et al, whose 12-month training program for general practitioners in district hospitals in Niger significantly reduced the number of referrals to the regional and specialist hospitals.33 In many other SSA countries with gross shortages of medical manpower, surgical task-shifting has been championed, and research has shown that such interventions are cost-effective.25,34 This is, however, not acceptable in Nigeria, which is Africa's most populous country with very poor health indicators.
There are some limitations to this study. Firstly, it was a snapshot of anaesthesia and surgical services which did not highlight in detail the eight key areas of surgical and anaesthesia care, as in other surveys. These key areas include: access and availability of hospital services, human resources, physical infrastructures (including availability of water and electricity), surgical and anaesthetic procedures, surgical and anaesthesia outcome, essential equipment availability, NGO and international organizations providing care, and access to essential pharmaceuticals. Secondly, this assessment did not include the only public tertiary hospital in the State and private hospitals. Lastly, this was an initial assessment in preparation for a more detailed survey based on the WHO guidelines when research funds are received.
CONCLUSION AND RECOMMENDATIONS
There has been a paradigm shift in global public health and the concept of primary healthcare, resulting in increased awareness of the importance and contribution of surgical disease to the overall burden of disease, especially in LMICs. This rapid survey of anaesthesia services in CRS, one of the 36 states of Nigeria, should serve as a window to inform other Nigerian state governments of the need to increase surgical and anaesthesia capacity and funding in their development agendas. It is therefore recommended that visiting consultant services to all the general hospitals, organized in a planned fashion, should be strongly encouraged. All anaesthesia caregivers should attend refresher courses at least once every two years; these courses can be arranged locally, or sponsorship can be provided for attendance at relevant courses run by anaesthesia trainers within and outside the State. Basic anaesthesia equipment and guidelines as recommended by the Nigerian Society of Anaesthetists (Box 3) must be available and followed to enhance patient safety. It is also recommended that the government give incentives to medical and nursing staff working in rural areas in order to reverse the rural-urban drift. The Lifebox global oximetry project aims to make high-quality, low-cost pulse oximeters available in every operating room, and every secondary care facility in the country should take advantage of this laudable program.
Box 3. Nigerian Society of Anaesthetists Standard Guidelines for the Practice of Anaesthesia
Anaesthetic personnel
· Certified physician-anaesthetist
· Trained nurse-anaesthetist under supervision by a physician-anaesthetist
· The maximum ratio of nurses to physicians should be 4:1
· Where there is no physician-anaesthetist, nurses should adhere strictly to the guidelines and conditions of their certification
· The surgeon should provide coverage, especially in the areas of patient resuscitation and fitness for surgery, and take full responsibility for any decisions made against the guidelines.
Anaesthetic equipment for each theatre
Standard continuous-flow (Boyles) anaesthetic machine with:
- Closed breathing system
- Adult semi-closed breathing system
- Paediatric breathing system
Suction machine
- Electric
- Manual
Suction catheters - disposable, various sizes
Laryngoscope set with batteries and:
- 2 standard and 1 long curved blades
- 2 standard and 1 long straight blades
- Neonatal laryngoscope with 2 straight blades
Intubating forceps (Magill)
- Adult
- Paediatric
- Neonatal
Self-inflating resuscitation bag
- Adult
- Paediatric
- Infant
Anaesthetic face masks: sizes 0, 1, 2, 3, 4; paediatric (Rendell-Baker) sizes 00, 0, 1
Naso-gastric tubes
Head harness
Oropharyngeal airways, sizes 00-5
Endotracheal tubes (cuffed and non-cuffed, 2.5-9.0 mm) - red rubber, latex reinforced, Portex plastic
Laryngeal mask airway (sizes 1-5)
Endobronchial tubes
Bougies
Fluid warmer
Warming mattress (for paediatrics)
Pressure infusor
Syringe infusion pump with lines
Maintaining its longstanding presence as one of the oldest forms of art, body tattooing has grown exponentially within mainstream society, as has its social acceptance. Generally worn to display individuality and creativity, these distinctive forms of indelible markings are present in every culture, whether on tribal men or people of status. Procedurally, the approach to inserting the decorative markings in studio tattooing has not changed significantly, as artists still use “an electrically powered, vertically vibrating instrument to inject tattoo pigment 50 to 3,000 times per minute up to or into the dermis at a depth of 1/64th to 1/16th of an inch”.1
While no national registry provides prevalence data, a 2012 Harris Poll cited one in five United States (U.S.) adults as having at least one tattoo (21%), up from 16% and 14% in previous surveys taken in 2003 and 2008, respectively.2 Tattoo numbers were even higher in some groups, including those aged 30-39 years (38%), Hispanics (30%), females (23%), and those living in the Western part of the U.S. (26%). No questions in the 2012 poll queried tattooed body site locations. Other studies cite almost a 25% prevalence of tattoos.1,3-7 The number of tattoo studios also echoes the growing body art phenomenon.
Given this societal surge of tattooing, the medical literature on body art has also increased. Yet most of the information remains focused on small case reports6 about traditional locations (arms, legs, chest, back), decision-making, various risk-taking behaviors,8 and the small number of complications.7 Those with various adverse skin reactions or major complications seem to have had tattoos with colored pigment.6
While body art can be found virtually everywhere on the human anatomy, several articles have surfaced concerning genital body piercing.4-5,9-11 Current studies validate the increasing rate of all types of tattooed4,8 people, from a variety of occupations and social classes, with markings on visible and non-visible locations.7 This article reports on the limited medical literature found about men with genital tattoos (pubic and/or the glans penis). Also presented is a subsample data analysis of 14 men from a primary study examining male genital piercings11-12 who responded affirmatively to one survey question about penile tattoos. This synopsis and subsample data analysis are provided so that clinicians have further, recent evidence about men with genital tattoos for decision-making during patient encounters in health care settings. The terms penile and genital tattoos are used interchangeably in this article.
METHODS
Literature Synopsis
Historically, the cross-cultural literature is rich in visual genital tattoo descriptions. In South America, the Moche on the North Coast of Peru (A.D. 150-800) produced ceramics illustrating vivid sexual imagery and highly decorated male genitals.13 Phallus decorations with dots, concentric lines, and other tattoo markings on the penile skin and mucosa have been reported from the Upper Paleolithic era in Europe, 12,700 to 11,000 years ago.14-15 Likewise, the Samoan Island culture, where the word “tattoo” is believed to have originated from “tatau,” has maintained ritualistic16 traditions for over two thousand years; initiates undergo them at puberty in preparation for future leadership roles. These 10+ days of ceremonies include very painful, repeated tattooing of the scrotum (tafumiti) and the penis (tafito). Other nearby Polynesian tribes have regarded this tattooing as highly erotic,16 whereas the indigenous Maori (New Zealand) believe that the pigment for these tattoos can trap cosmic energy.14 Circumcision and tattooing were thought to produce the same effect of magic protection and healing powers after scar healing.14 In the Japanese culture, an examination of Yakuza (racketeers or gangsters) also describes the genitalia as a site that is tattooed,17 fulfilling their principle that tattoos always be covered.
Searching for information about genital tattoos was more challenging within the medical literature. A comprehensive 40-year search of the national and international electronic medical literature (1973-2013) published in English, and of the associated reference lists, was conducted with MEDLINE, EMBASE, CINAHL, SCOPUS, and OVID. Only 20 articles were located that mentioned genital tattoos. Articles were from international authors (n = 11) and the U.S. (n = 9), and all produced interesting reading. One reference cited women with genital tattoos.7
Genital tattoos in the early literature were labeled as criminal or personality-disorder tattoos;18 one recent article discussed them under the header of genital self-mutilation.16 Others described them as a valuable clue for forensic pathology identification.19-20 World War II articles cited descriptive stories of soldiers with penile tattoos,21-22 with one reporting up to 10 sailors being seen.23 Besides recounting how the fate of Bulgaria was determined by three tattooed men (Churchill with an anchor on his left arm, Roosevelt with a family coat of arms tattoo, and Stalin with a death’s head on his chest),24 Kazandjieva25 provides vivid examples of auto-aggression markings that his countrymen self-inflicted after the Communist takeover. These included glans penis tattoos, which are described as producing great pain.15,25 One political candidate, while campaigning, is reported as suggesting punitive action for those HIV+ by “putting indelible, glow-in-the-dark tattoos on [their] genitals.”26 Traumatic tattoos associated with gunpowder explosions and blast burns are also mentioned on the glans penis.27
Two studies also described inmates with genital tattoos and discussed how these markings demonstrated aggressive behavior within this type of environment. Here, large, colorful designs and wording on glans penis tattoos were described,28-29 which seemed to satisfy the inmates’ display of personal pain endurance. Additionally, Cuban refugees (Marielitos) fleeing to the U.S. were reported as having genital tattoos; they also came from prison subcultures, and their markings had various sexual overtones.29
Four other reports described those with penile tattoos also routinely inserting foreign bodies12,30 or paraffin (producing paraffinoma)12,31,32 into the penis. Pehlianov’s study (also from Bulgaria) included a control group of another 25 men with genital tattoos. Recently, a unique case of non-ischemic priapism lasting 3 months was reported33 following prolonged bleeding from a manual penile tattoo procedure in Iran. The authors suggested the hand-held tattoo needle had penetrated too deeply, producing an arteriovenous fistula and the subsequent persistent half-rigid priapism. The authors also noted that the 21-year-old patient expressed no regret, depression, or other complications related to the genital tattoo.
Original Study
The initial study queried males with genital piercings using available internet survey software,12 as genital piercing was considered a hidden variable. Anonymity and access to people nationally and internationally were major advantages of this nontraditional approach. The university institutional review board deemed the study status Exempt. To obtain quantitative and qualitative data about men with genital piercings, an 83-item web-based survey was used; overall results, and another subsample of these data, are published elsewhere.11-12
Subsample of those with Penile Tattoos
Of the original 445 men with genital piercings who responded to the question regarding having tattoos on their penis, 14 replied affirmatively. This subsample had previously been determined not to be an outlier of the larger group of genitally pierced men.12 While a short general description (age span at the time of tattoo procurement, urethral “play,” design types, motives, and tattooists) of the 14-member genital tattoo subsample was published in 2010,12 further quantitative and qualitative (Figure 1) data are presented here.
Figure 1: Subsample Respondent Qualitative Quotes
* Black tribal flames on the top of the shaft, done at [age] 38
* For erotic reasons, self done with no complications, done at [age] 54
* I got it because I wanted it. After it was finished I realized I needed it, done at [age] 30
* I self tattoo’d my penis on the glans and around the corona ridge in order to make up for its’ lack of size and to enhance its appearance. I used a sailmaker’s needle and Indian ink and there were no complications, done at [age] 43.
* one small cross pigment tattoo!
* I’m a little more than average in size, but I still have issues with my genitals. The way they look and their size. Piercings and tattoos have helped me quite a lot.
* I sketched a rose one day, like[d] the design, decided to get it tattooed on my penis. The stem is green with some yellow highlights, the bud is red, all black outline. The tattoo was applied with a standard machine . . . healing was actually quicker and easier than any of my other tattoos.
* It’s a little heart just next to ‘captain hemingway’ which I hand poked and used india ink for it when I was 17 . . . thought our penis deserved a reminder of our affection . . . no complications experienced but since it was hand done with a [sterile] needle it’s kind of blurry
This subsample had significantly more foreskin genital piercings (χ²(1) = 11.5; P = .001), whereas the most common genital piercing in the larger group without genital tattoos11 was the Prince Albert piercing (inserted through the external urethra). No question inquired which came first, the genital piercing or the genital tattoo.
Data Analysis
For this subsample analysis (and the original study11-12), IBM SPSS 21 was used to obtain frequencies and chi-square analyses. Cross tabulations for the subsample were obtained by comparing those with and without penile tattoos.
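For readers who wish to reproduce this kind of cross-tabulation outside SPSS, the sketch below shows an equivalent chi-square test in Python; the counts are hypothetical illustrations, not the study's raw data.

```python
# Illustrative sketch only: hypothetical counts, not the study's raw data.
import numpy as np
from scipy.stats import chi2_contingency

# Cross-tabulation: rows = penile tattoo (yes/no),
# columns = foreskin piercing (yes/no).
table = np.array([[10, 4],      # with penile tattoos (n = 14)
                  [120, 311]])  # without penile tattoos (hypothetical split of 431)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p:.3f}")
```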
RESULTS
Demographics
Almost all of the subsample respondents with penile tattoos were reportedly Caucasian (92%), and their ages ranged from 18 to 67 years (average 42.3). Of those who replied, six lived in the U.S. and five cited various international locations. Over half had vocational or college education (64%), and significantly more were likely to be single (25%) or divorced (25%) (χ²(5) = 12.6; P = .027). Reported religious faith was weak to non-existent (75%). Respondents self-reported a good state of health (92%) (χ²(3) = 8.7; P = .034), yet 50% cited no annual health check-ups.
Risk Behaviors
Within this subsample, there was no consensus about being a “risk taker”. Recreational drugs were reportedly not used (91%) and over half were non-smokers (55%), but monthly alcohol use with binge drinking (5 or more drinks) was cited (78%). Their “motives for genital tattoos were for esthetics, sexual, and personal pleasure”;12 a variety of penile tattoo designs were described (Figure 1), created either “by studio artists (n = 11) or self-inflicted (n = 3)”.12 All of them described having other body art, such as piercings and other general body tattoos. Some reported an average of 4 piercings (81%) and a significant number of general body tattoos (average 3.5) (χ²(5) = 11.1; P = .049) that still interest them (85%) (χ²(3) = 8.9; P = .031).
Sexual Activity
This subsample’s average age of first intercourse was 17 years; most cited women as their sexual partners (92%), most preferred penile/vaginal intercourse (79%), and only one respondent reported a sexually transmitted infection (gonorrhea). When asked about any forced sexual activity (rape), a significant proportion of this subsample answered affirmatively (23%) (χ²(1) = 7.7; P = .005). Virtually no other sexual, physical, or mental abuse was reported.
Need for Uniqueness
A four-item scale, the Self-Attributed Need for Uniqueness (SANU),34 was included in the survey to determine the respondents’ self-view (Cronbach α = .86). Using a Likert scale, the subsample’s moderate, strong and very strong perspectives were collectively summarized. These respondents with penile tattoos preferred to be different (79%) and distinctive (86%), intended to do things to make themselves different from those around them (72%), and reported a Need for Uniqueness (93%) (Cronbach α = .77). To validate this finding, when SANU responses were totaled,12 the mean was 12.43, documenting a more positive perspective toward intentionally wanting to be different, distinctive, and unique.
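Because several internal-consistency coefficients are reported in this article, the following minimal sketch shows how a Cronbach's alpha of this kind can be computed; the four-item Likert responses below are hypothetical, not study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses to four SANU items:
scores = [[4, 5, 4, 4],
          [3, 3, 2, 3],
          [5, 4, 5, 5],
          [2, 3, 3, 2],
          [4, 4, 3, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```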
DISCUSSION
This article reviewed both the cross-cultural and medical literature about those with genital tattoos, and included a quantitative and qualitative subsample data analysis of a small group of men who specifically reported penile tattoos. This small sample size certainly introduces limitations and reporting/survey bias. Additionally, the generalizability of the findings is limited, as respondents could have self-selected their participation and used their personal judgment to interpret the survey questions in this non-experimental, cross-sectional study using internet survey methodology.12
From this review and to our knowledge, few have studied groups of men with genital tattoos, a difficult group of subjects to find given this hidden variable.12,31 Cultural descriptions documented a long, rich history12,14-17,29,31 of genital markings for esthetics, sexual enhancement, and tribal status, whereas the medical literature reflected limited observational information and few actual case histories or scientific studies. Although no mental health evaluations12 were cited in this medical literature, discussions of psychopathic, deviant behavior were common regarding individuals with genital tattoos.16-18,26,32,35 In contrast, two authors30,33 comment on the “normalcy” of their patients who presented with genital tattoos.
Genital tattoos may be more common than this very small subsample suggests, as great emphasis has long been placed on male penile size in many cultures.31,36 These genital markings and decorative designs seemed motivated by sexual health, self-enhancement9-10 and well-being.31 Thus, when further studies are considered for this population with a hidden variable, these findings should suggest further avenues of investigation.
Current society has seen a strong 25-year renaissance in procuring tattoos, with at least one in five, and perhaps as many as one in four, individuals possessing a tattoo, on virtually every part of the body, usually without major complications. This small subsample of those with genital tattoos shows some similarities to those who wear general body tattoos, such as single status and heterosexual orientation, some college/vocational education, monthly binge drinking,1,3-5,10 and a strong propensity for a Need for Uniqueness.4-5,37 They were major body art wearers and continue to enjoy their body art, as others have also reported.4-5,10-12
Yet other demographic assumptions were challenged by this subsample of men with genital tattoos. These international respondents tended to be older Caucasians and were not ethnically diverse; nor was there a consensus that they were risk takers, as has been repeatedly reported for other body art respondents.1,3-5,11-12
Subsample respondents reported their average first occurrence of sexual intercourse at age 17, similar to national figures.38 Significant experiences of rape were also reported in this subsample, as in women with genital piercings.4-5,9-10 The national rate for forced sexual activity is 10.5%,38 and those with genital tattoos reported over twice that amount (23%). No sexual abuse was reported, in contrast to a recent German study39 examining general body tattooing.
As with any type of invasive procedure, there can be complications with certain types of body art. When these complications occur, body art wearers typically first seek the internet and/or their studio artist for health advice before presenting to clinicians.1,8,10-12 Yet, relative to the amount of general tattooing done, this type of body art has produced few documented complications, though more potential concerns.7-8,11,30,33 More complications were reported when the tattoos contained colored pigments.6
These tattoos are an integral part of the wearers’ cultural and personal expression.12,31,33 In our experience, many male patients with genital tattoos are seen not primarily because of their decorative markings31 but during clinical evaluations for the normal range of urologic concerns involving overall genitourinary and sexual function. Genital tattoos can be an ambivalent finding for many clinicians, but these indelible skin markings are only skin deep40 and can provide valuable cues, such as a history of sexual trauma.39 Currently, more genital tattoos are seen among our incarcerated patients, where the prevalence of general body tattoos among inmates can be as high as 67%.41
Anecdotally, when healthcare staff discover a patient with genital body art, this discovery can be met with judgmental attitudes and behaviors which could impact care. To adequately assess, evaluate and treat the individuals that have chosen to have genital tattooing, clinicians should strive to provide a thoughtful, nonjudgmental patient-centered approach, along with a generous application of health education, for their present, or even future body art.11
Adiponectin, first reported in 1995 by Scherer et al, is a novel and important member of the adipokine family.1 It is a collagen-like protein that is exclusively synthesised in white adipose tissue and is the gene product of adipose most abundant gene transcript 1 (apM1).
Adiponectin has been postulated to play an important role in the modulation of glucose and lipid metabolism in insulin-sensitive tissues in both humans and animals. Various studies have reported a protective effect of plasma adiponectin against type 2 Diabetes Mellitus (T2DM).2-6 Adiponectin is also inversely associated with traditional cardiovascular risk factors, such as total and low-density lipoprotein cholesterol (LDL-C) and triglyceride levels, and is positively related to high-density lipoprotein cholesterol (HDL-C).7 Recent studies suggest that it may have anti-atherogenic and anti-inflammatory properties.8-10 A few researchers who studied the combined effects of these findings reported an inverse correlation between plasma adiponectin and the risk of coronary heart disease.11-15
Recent epidemiological studies have shown that the association of adiponectin with insulin resistance and cardiovascular risk factors varies with ethnicity. Mente et al studied ethnic variations in adiponectin concentrations and insulin resistance and found that South Asians and aboriginal people display a greater increase in insulin resistance with decreasing levels of adiponectin compared to Chinese and Europeans.16 However, a similar study involving Asian Indian teenagers showed that adiponectin did not correlate directly with measures of insulin sensitivity, overweight, and other cardio-metabolic variables.17 Similar studies in adults are not available.
The present study was done to assess the association between plasma adiponectin levels and coronary events in patients with diabetes. The relation between plasma adiponectin levels and various cardiovascular risk factors was also studied in patients with diabetes with and without an acute coronary event.
Subjects and Methods:
This prospective study was conducted at a tertiary care centre in Bangalore, India from January 2008 to December 2009. The study was approved by the institutional ethics committee. Three age- and sex-matched groups of patients were included in the study. The first group comprised 30 consecutive T2DM patients admitted with a diagnosis of myocardial infarction (MI) at the study centre; the second consisted of patients with T2DM without MI; and the third comprised patients without diabetes and without any history of an acute coronary event. MI was diagnosed as per the World Health Organization’s criteria.18 Patients aged less than 18 years were not included in the study. Patients with diabetes with chronic kidney disease or receiving thiazolidinediones were also excluded, as these would alter plasma adiponectin levels.
Fasting Blood Glucose (FBG), Post-Prandial Blood Glucose (PPBG), Glycated Hemoglobin (HbA1C), Fasting Lipid Profile, Baseline Electrocardiogram and Plasma Adiponectin were done for all the study subjects. In addition, Coronary Angiogram was done for patients with diabetes with MI to confirm Coronary Artery Disease (CAD) and treadmill tests were done for patients with diabetes without MI to exclude underlying CAD.
FBG and PPBG, serum total cholesterol and serum triglycerides were estimated using enzymatic kit method (Vital Diagnostics, Mumbai, India); and serum HDL-C (Bayer Diagnostics, Baroda, India) using a semi-auto-analyser.
Plasma adiponectin levels were estimated using the Human Total Adiponectin/Acrp30 Quantikine ELISA Kit (R&D Systems Inc., India). This assay employs the quantitative sandwich enzyme immunoassay technique. A monoclonal antibody specific for the adiponectin globular domain has been pre-coated onto a microplate. Standards and samples are pipetted into the wells, and any adiponectin present is bound by the immobilised antibody. After washing away any unbound substances, an enzyme-linked monoclonal antibody specific for the adiponectin globular domain is added to the wells. Following a wash to remove any unbound antibody-enzyme reagent, a substrate solution is added to the wells and colour develops in proportion to the amount of adiponectin bound in the initial step. The colour development is stopped and the intensity of the colour is measured.
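In practice, quantification from such a sandwich ELISA is done by fitting a standard curve to the calibrator absorbances and reading sample concentrations off the fitted curve. The sketch below illustrates one common approach, a four-parameter logistic (4PL) fit; the standard concentrations and optical densities are hypothetical, not values from the kit insert.

```python
# Minimal sketch of 4PL standard-curve quantification (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: absorbance as a function of concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])       # standards (ng/mL, hypothetical)
od   = np.array([0.11, 0.20, 0.38, 0.70, 1.15, 1.60])  # measured absorbances (hypothetical)

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 2.0, 2.0], maxfev=10000)

def conc_from_od(y, a, b, c, d):
    """Invert the fitted 4PL curve to recover concentration from absorbance."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(conc_from_od(0.55, *params))  # interpolated sample concentration, ng/mL
```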
Statistical Analysis
Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) for Windows 16.0 (SPSS Inc., Chicago, USA). The results for each parameter are presented as numbers and percentages for discrete data and as mean ± standard deviation for continuous data, in tables and figures prepared with the Microsoft Office 2007 software package.
Two-way Analysis of Variance (ANOVA) was performed for plasma adiponectin, with group (patients with diabetes with MI, patients with diabetes without MI and controls) as the grouping factor. Two-tailed P values less than 0.05 were considered significant. Spearman correlation was performed to analyse the associations between plasma adiponectin, BMI, FBG, PPBG, HbA1C, serum triglycerides, HDL-C and LDL-C.
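As an illustration of these analyses outside SPSS, the sketch below runs an ANOVA across the three groups (scipy's f_oneway, a one-way ANOVA, is used here for simplicity) and a Spearman correlation; all vectors are randomly generated stand-ins, not study data.

```python
# Illustrative sketch only: simulated vectors, not the study's data.
import numpy as np
from scipy.stats import spearmanr, f_oneway

rng = np.random.default_rng(0)
adiponectin_mi   = rng.normal(6.1, 1.8, 30)   # diabetes with MI
adiponectin_nomi = rng.normal(9.5, 1.6, 30)   # diabetes without MI
adiponectin_ctrl = rng.normal(17.8, 1.3, 30)  # controls

F, p = f_oneway(adiponectin_mi, adiponectin_nomi, adiponectin_ctrl)
print(f"ANOVA: F = {F:.1f}, P = {p:.3g}")

hba1c = rng.normal(7.0, 1.5, 30)  # hypothetical HbA1c values
rho, p = spearmanr(adiponectin_mi, hba1c)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```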
Results
The following results are expressed as mean ± standard deviation. The mean age of the study subjects in the three groups (patients with diabetes with MI, patients with diabetes without MI, and controls) was 58.00±8.77 years, 57.17±9.34 years and 54.20±7.28 years, respectively. The descriptive statistics of the various parameters under study are given in Table 1.
Patients with diabetes with MI had significantly lower plasma adiponectin levels (6.11±1.82) than patients with diabetes without MI (9.47±1.55), which in turn were lower than in normal subjects (17.82±1.30) (P<.001). Plasma adiponectin was significantly correlated with BMI (r=-.31), FBG (r=-.61), HbA1C (r=-.63) and triglycerides (r=-.54) (all P<.001). We did not find any significant correlation between plasma adiponectin levels and HDL-C (Table 2).
Table 1: Descriptive statistics of various parameters under study (mean ± SD)

Variable | Patients with diabetes with MI | Patients with diabetes without MI | Controls
Plasma Adiponectin | 6.11 ± 1.82 | 9.47 ± 1.55 | 17.82 ± 1.30
Fasting Blood Glucose | 123.50 ± 17.85 | 133.23 ± 16.14 | 88.80 ± 6.27
Post-Prandial Blood Glucose | 190.53 ± 19.27 | 209.33 ± 28.72 | 125.30 ± 6.20
Glycated Haemoglobin | 7.81 ± 0.92 | 8.04 ± 1.24 | 4.06 ± 0.62
Total Cholesterol | 205.77 ± 19.92 | 214.43 ± 21.54 | 138.07 ± 10.38
Serum Triglycerides | 148.80 ± 11.32 | 160.53 ± 14.61 | 127.23 ± 6.11
Serum HDL | 45.13 ± 8.57 | 37.43 ± 9.73 | 44.87 ± 7.78
Serum LDL | 129.30 ± 22.55 | 137.27 ± 18.83 | 120.03 ± 8.27
Body Mass Index | 27.82 ± 2.39 | 27.08 ± 2.20 | 25.40 ± 2.63

Table 1: Patients with diabetes with MI had significantly lower plasma adiponectin than patients with diabetes without MI, which in turn was lower than in normal subjects.
Table 2: Spearman correlation between adiponectin and body mass index, blood lipids, HbA1C, and fasting glucose levels

 | Adiponectin | BMI | FBG | HBA1C | TG | HDL | LDL
Adiponectin | 1.00 | -0.31** | -0.61** | -0.63** | -0.54** | 0.02 | -0.16
BMI | | 1.00 | 0.37** | 0.29** | 0.25* | -0.14 | 0.10
FBG | | | 1.00 | 0.62** | 0.61** | -0.17 | 0.21*
HBA1C | | | | 1.00 | 0.83** | -0.45** | 0.35**
TG | | | | | 1.00 | -0.53** | 0.33**
HDL | | | | | | 1.00 | -0.07
LDL | | | | | | | 1.00

** Correlation is significant at the 0.01 level (2-tailed); * Correlation is significant at the 0.05 level (2-tailed). (BMI - Body Mass Index, FBG - Fasting Blood Glucose, HDL - High Density Lipoprotein, LDL - Low Density Lipoprotein, HBA1C - Glycated Haemoglobin, TG - Serum Triglycerides.) Plasma adiponectin was significantly correlated with BMI, FBG, HbA1C and triglycerides (all P<.001). The correlation between plasma adiponectin levels and HDL-C was not statistically significant.
Discussion
In the present study, we found decreased plasma adiponectin concentrations in the patients with diabetes, which were lower still in patients with an acute coronary event, indicating that adiponectin may be a predictor of macroangiopathy. Hotta et al found similar results in their study and proposed that accumulation of adiponectin in atherosclerotic vascular walls may shorten its half-life in plasma, resulting in the reduction of the plasma concentration of adiponectin in subjects with CAD.19 Ouchi et al studied the molecular basis of the link between adiponectin and vascular disease and found that adiponectin modulates the endothelial inflammatory response and that the measurement of plasma adiponectin levels may be helpful in the assessment of CAD risk.20 Large-scale prospective experimental research is needed to clarify these theories.
The relations between plasma adiponectin and the various known metabolic risk factors were consistent with the world literature, except for HDL-C. Koenig et al reported an additive effect of HDL-C and adiponectin on CAD risk prediction.21 In their joint analyses, the highest risk for T2DM as well as acute coronary events was observed in men with low adiponectin in combination with low HDL-C levels. In the present study, the mean HDL-C levels were lower in patients with diabetes with MI compared to patients with diabetes without MI. However, we did not find any significant correlation between plasma adiponectin levels and HDL-C. Similar findings were obtained by Schulze et al, indicating that although plasma adiponectin has been established to be correlated with insulin resistance, CAD and metabolic disease, the interrelation between these is far more complex.
The molecular mechanisms by which adiponectin exerts its multiple functions and whether its actions are receptor mediated still remain a mystery. Is the primary activity of adiponectin antiatherosclerotic, or is it principally a modulator of lipid metabolism and regulator of insulin sensitivity, or is it all of the above? The answers to these and other intriguing questions will undoubtedly provide additional insight into the metabolic roles of this new adipocyte hormone.
Conclusion
The present study and recent evidence suggest that cross-talk between inflammatory signalling pathways and insulin signalling pathways may result in insulin resistance and endothelial dysfunction, which synergise to predispose to cardiovascular disorders. Large-scale prospective studies are needed to examine whether increasing adiponectin levels and insulin sensitivity can improve primary end points, including the incidence of diabetes and the outcomes of cardiovascular events.
Binge Eating Disorder (BED) was first described by Stunkard in 1959,1 who identified peculiar food-intake features characterized by a loss of control in a subgroup of obese patients. Various efforts have been made ever since to provide a nosological approach to individuals with such a behaviour disorder, which has long been considered a variant of Bulimia Nervosa.
Unlike patients affected by Bulimia Nervosa, patients with BED tend to be overweight and, in most cases, obese. Thus, treatment aims not only at reducing BED and its related psychopathology, but also at addressing the weight gain experienced by these patients to prevent a further worsening of physical health.
Walsh & Devlin2 evaluated the use of medication in the treatment of Bulimia Nervosa and BED, underlining the efficacy of antidepressant medication in the treatment of Bulimia Nervosa. This efficacy led to closer consideration of antidepressant use in BED.
Williamson, Martin & Stewart3 stated that pharmacotherapy was not an effective treatment for Anorexia Nervosa. However, it did prove to be successful in Bulimia Nervosa and BED, although subjects affected with eating disorders apparently respond better to psychotherapy approaches.
Systematic investigations have been conducted on the aetiology of BED. Biological and genetic factors, neurotransmitters and hormones have been involved in the onset of binge eating and play an important role in the regulation of hunger and mood.4, 5 However, a definitive aetiological theory has not been developed and tested.3
BED is characterized by a relevant psychological component that in many cases is under-evaluated. Patients with BED have difficulty in interpreting the visceral sensations of hunger and satiety; they take large amounts of food even during regular meals and, moreover, their food contains more fat than protein.6, 7
In fact, Axis I and II disorders (DSM IV-TR) share common features with binge eating.8 Axis I psychiatric disorders (including depression, anxiety, body dysmorphic disorder, or chemical addiction) characterize many BED patients, and research has evidenced the presence of panic, loss of control, impulsivity, compulsive behavior, obsessive thoughts about food and social phobia.9 Axis II personality disorders (especially borderline personality disorder) are frequently related to patients suffering from eating disorder and comorbidity with Avoidant Personality Disorder and Obsessive-Compulsive Disorder was observed.10
BED is not associated with a restrained eating control, but probably with an increase of uncontrolled eating and emotional eating.11, 12
Pharmacological agents, compared to placebo, have been used in the treatment of BED. Appolinario, Bacaltuchuk, Sichieri et al13 evaluated the efficacy of Sibutramine to reduce the frequency of binge eating, while McElroy, Arnold, Shapira et al14 focused on Topiramate and evidenced a greater reduction in binge eating frequency but with side effects such as paraesthesia.
Other studies showed the efficacy of Selective Serotonin Reuptake Inhibitors (SSRIs) and Serotonin-Noradrenaline Reuptake Inhibitors (SNRIs) in the treatment of obesity associated with BED,15,16,17,18 and SSRIs and Tricyclic Antidepressants (TCAs) reduced the frequency of binge eating compared to placebo.19 Which patients are most likely to benefit from medications, and how best to sequence the various therapeutic interventions available, are questions still open to debate.
Moreover, results in many cases appear to be divergent. Ciao, Latner and Durso20 underlined other factors that may influence treatment efficacy. They observed that many obese individuals who might benefit from weight-loss treatment nevertheless do not plan or desire to seek treatment and perceive multiple barriers to it.
Pharmacotherapy may enhance weight loss,19 although other results suggest that pharmacotherapy may be associated with a reduction in binge frequency in obese patients with BED without necessarily leading to weight reduction.21 These medical treatments seem to be effective in reducing binge frequency over the short term, and subsequent discontinuation of the medication seems to be associated with a relapse of binge eating. Thus, further studies of the role of pharmacotherapy in the treatment of BED need to be carried out.22
According to Williamson, Martin and Stewart3 the additive effects of psychotherapy, e.g. Cognitive Behavioural Therapy (CBT), and pharmacotherapy have to be investigated. At present it seems that adding pharmacotherapy to psychotherapy does not help to reduce binge frequency compared to psychotherapy alone.
The efficacy of CBT has been substantiated by the scientific literature23,24,25,26 and by Vaidya,27 who states that CBT helps patients reduce eating disorder habits by making them aware of the cause of their self-sabotage, while affecting weight indirectly.
Psychotherapy treatment over a one-year period deals with binge symptoms and aims at reducing the possibility of relapse by combining different techniques for the maintenance of long-term results through the use of specific individual intervention protocols. The main target of the intervention is to facilitate the management of episodes of uncontrolled food intake and of impulsivity through the alteration of behaviour and of cognitive and emotional factors related to eating disorders.
Thus, the main objective of this research was to verify the possible differences between those subjects with BED who underwent psychotherapy combined with pharmacotherapy and those who underwent psychotherapy only. In particular, it aimed at verifying possible differences between the various therapeutic strategies on eating behaviour (restrained eating, uncontrolled eating and emotional eating) and the behavioural and psychopathological features (psychopathic deviate, depression and hypomania).
The main hypothesis was that patients who underwent CBT plus pharmacotherapy with bio-equivalent doses of the SSRI Paroxetine or the SNRI Venlafaxine would obtain a considerable benefit from the pharmacotherapy on impulse regulation, eating behavior and personality features compared to those who underwent CBT alone.
The second hypothesis was that Paroxetine and Venlafaxine treatments would be equally effective on impulse regulation, eating behavior and personality characteristics.
Methodology
Participants
A group of 30 subjects with BED was selected. All had applied for support to the Inter-Service-Psychology Clinic for Eating Disorders and were assisted by a Cognitive Behavioural Therapist. They were all of Italian nationality, aged 22 to 52, with a Body Mass Index (BMI) range of 26 to 35. All participants belonged to a middle-class socio-cultural level. They were informed of the objectives of the research and signed a consent form. Subjects diagnosed with binge eating fewer than two years earlier, those aged over 65, and those suffering from other debilitating or chronic diseases were excluded from this research.
Measures and procedure
Assessments were chosen to address the effect of psychotherapy and pharmacotherapy in subjects with BED on their impulse regulation, their eating behavior (restrained eating, uncontrolled eating and emotional eating), and their personality features (Psychopathic Deviate, Depression and Hypomania). The 30 subjects selected were randomly assigned to three different treatments: ten subjects had CBT alone, ten underwent psychotherapy with Paroxetine, and ten underwent psychotherapy with Venlafaxine.
Each participant answered questionnaires during the assessment phase and in the post-training phase (after one year of psychotherapy). More specifically:
The Binge Eating Scale (BES)28 is a 16-item questionnaire which assesses the presence of binge eating behaviour indicative of an eating disorder. The score ranges from 0 to 46 (non-binging ≤ 17; moderate binging = 18-26; severe binging = 27 and higher); in this research it had adequate internal consistency (α = 0.84). A minimal sketch of these severity bands follows.
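```python
def bes_category(score: int) -> str:
    """Classify a Binge Eating Scale total score (0-46) into severity bands."""
    if score <= 17:
        return "non-binging"
    if score <= 26:
        return "moderate binging"
    return "severe binging"

print(bes_category(31))  # "severe binging"
```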
The Eating Disorder Inventory-2 (EDI-2)29 aims at quantifying psychological and behavioural features. It consists of 64 questions grouped into 11 scales. For each item, participants answer using the following frequency adverbs: "always", "usually", "often", "sometimes", "rarely", and "never". Each answer is scored between 0 and 3: the maximum score of 3 corresponds to the greatest intensity of the symptom ("always" or "never", depending on whether the direction of the item is positive or negative), a score of 2 corresponds to the degree of intensity immediately below ("usually" or "rarely"), a score of 1 to an even lower level of intensity ("often" or "sometimes"), while a score of 0 is assigned to the three "asymptomatic" answers. So, items with a positive direction are assigned the following scores: always = 3, usually = 2, often = 1, sometimes = 0, rarely = 0, never = 0; items with a negative direction are scored in the opposite way: never = 3, rarely = 2, sometimes = 1, and often, usually, always = 0. The sub-scale scores are calculated by adding the scores of all items of each specific sub-scale.
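The item-scoring rule just described can be made concrete with a short sketch; the responses and item directions below are hypothetical, not actual EDI-2 items.

```python
# Score tables for the 3-2-1-0-0-0 weighting described above.
POSITIVE = {"always": 3, "usually": 2, "often": 1,
            "sometimes": 0, "rarely": 0, "never": 0}
NEGATIVE = {"never": 3, "rarely": 2, "sometimes": 1,
            "often": 0, "usually": 0, "always": 0}

def score_item(response: str, positive_direction: bool) -> int:
    """Score one item according to its direction."""
    table = POSITIVE if positive_direction else NEGATIVE
    return table[response.lower()]

def subscale_score(responses, directions) -> int:
    """Sum item scores over one sub-scale."""
    return sum(score_item(r, d) for r, d in zip(responses, directions))

# Hypothetical three-item sub-scale:
print(subscale_score(["always", "rarely", "never"], [True, False, False]))  # 3 + 2 + 3 = 8
```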
This research used the Impulse Regulation Scale, which had adequate internal consistency (α = 0.82). This scale measures the ability to regulate impulsive behaviour, especially binge behaviour.
The Three Factor Eating Questionnaire (TFEQ)30 is a self-report questionnaire consisting of 51 items. The questionnaire refers to daily dietary practice and measures three different aspects of eating behaviour: (1) restrained eating (conscious restriction of food intake in order to control body weight or to promote weight loss; cut-off ≤ 11; α = 0.79); (2) uncontrolled eating (tendency to eat more than usual due to a loss of control over intake, accompanied by subjective feelings of hunger; cut-off ≤ 8; α = 0.81); (3) emotional eating (inability to resist emotional cues; cut-off ≤ 7; α = 0.83).
The Minnesota Multiphasic Personality Inventory-2 (MMPI-2),31 consisting of 567 items with dichotomous (true/false) answers, is widely used by mental health professionals to assess and diagnose mental illness. The MMPI is based on ten clinical scales that are used to indicate different psychopathological conditions. In this research the scores of the following three scales were taken into consideration: (1) the Psychopathic Deviate Scale (Pd) (50 items), which measures social deviation, lack of acceptance of authority and amorality, and can be thought of as a measure of disobedience; high scorers tend to be more rebellious, while low scorers are more accepting of authority (adequate internal consistency was obtained in this research, α = 0.83); (2) the Depression Scale (D) (57 items); the highest scores may indicate depression, while moderate scores tend to reveal a general dissatisfaction with one’s own life (sound internal consistency was obtained in this research, α = 0.81); (3) the Hypomania Scale (H) (46 items), which identifies such characteristics of hypomania as elevated mood, accelerated speech, locomotive activity, irritability, flight of ideas, and short periods of depression (in this research the internal consistency was α = 0.79).
Results
The Statistical Package for Social Science (SPSS 10.1) was used to verify the hypotheses. The limited number of subjects required the analysis of data through non-parametric statistics. To verify statistical differences in simple comparisons between independent groups, the Mann-Whitney (U) test32 was applied. To verify statistical differences within phases (pre- vs post-training), Wilcoxon Signed Ranks Tests33 were calculated separately on paired data.
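As a minimal illustration of the two tests named above (here in Python rather than SPSS, with hypothetical BES scores):

```python
# Illustrative sketch only: hypothetical scores, not the study's data.
from scipy.stats import mannwhitneyu, wilcoxon

cbt_pre  = [28, 35, 31, 30, 33, 29, 32, 34, 31, 30]   # CBT group, baseline
cbtp_pre = [26, 35, 30, 31, 32, 28, 33, 34, 29, 31]   # CBT+P group, baseline
cbt_post = [25, 33, 28, 27, 30, 26, 29, 31, 28, 27]   # CBT group, after one year

# Between-group comparison at baseline (independent samples):
U, p = mannwhitneyu(cbt_pre, cbtp_pre, alternative="two-sided")
print(f"Mann-Whitney U = {U}, p = {p:.2f}")

# Within-group pre vs post comparison (paired samples):
W, p = wilcoxon(cbt_pre, cbt_post)
print(f"Wilcoxon statistic = {W}, p = {p:.3f}")
```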
Table 1 summarizes the means and standard deviations of eating behaviour and of impulse regulation obtained by the three groups, CBT alone (CBT), psychotherapy with Paroxetine (CBT+P) and psychotherapy with Venlafaxine (CBT+V), in the pre- and post-treatment phases.
Groups | Phase | BES MIN | BES MAX | BES M | BES SD | IRS MIN | IRS MAX | IRS M | IRS SD
CBT (N=10) | Pre | 28 | 35 | 31.43 | 2.41 | 74 | 94 | 85.93 | 6.84
CBT (N=10) | Post | 25 | 33 | 28.71 | 2.46 | 71 | 91 | 83.07 | 6.67
CBT+P (N=10) | Pre | 26 | 35 | 30.90 | 3.54 | 77 | 94 | 86.80 | 5.29
CBT+P (N=10) | Post | 24 | 31 | 27.90 | 2.85 | 74 | 91 | 82.10 | 5.72
CBT+V (N=10) | Pre | 27 | 35 | 30.80 | 2.78 | 83 | 94 | 87.80 | 3.91
CBT+V (N=10) | Post | 24 | 31 | 27.20 | 2.57 | 79 | 90 | 83.80 | 3.71

Table 1 – Minimum and maximum scores, means and standard deviations of eating behaviour (Binge Eating Scale, BES) and impulse regulation (Impulse Regulation Scale, IRS) obtained by the three groups.
Comparing the total BES scores in the pre-treatment phase, no statistical differences between groups were noticed. Subjects who underwent CBT alone had the same result as those with the addition of Paroxetine (CBT+P) [U = 64; Z = 0.35; p = 0.75] or Venlafaxine (CBT+V) [U = 59; Z = 0.62; p = 0.55]. There were no initial statistical differences between the two groups that received pharmacotherapy [U = 50; Z = 0.1; p = 0.99].
In the post-treatment phase, the presence of binge eating behaviour appeared to be the same in all groups. Subjects belonging to the CBT group obtained the same results as those belonging to the CBT+P [U = 58; Z = 0.68; p = 0.5] and CBT+V groups [U = 47; Z = 1.36; p = 0.19]. No statistical differences between the medication groups were noticed [U = 41; Z = 0.69; p = 0.53].
All groups seemed to benefit equally from treatment. Comparing the scores obtained by participants pre- and post-treatment, statistically significant differences were found in subjects undergoing CBT [Z = 3.38, p < 0.001] and in those with the addition of Paroxetine [Z = 2.848; p < 0.004] and Venlafaxine [Z = 2.859, p < 0.004].
Impulse Regulation Scale scores were also taken into consideration. In the pre-treatment phase, all groups showed the same difficulties in regulating impulsive behaviour. The CBT group showed the same impulse regulation as the CBT+P [U = 68; Z = 0.12; p = 0.93] and CBT+V groups [U = 64; Z = 0.33; p = 0.75]. No initial statistical differences between the pharmacotherapy groups were found [U = 45; Z = 0.39; p = 0.74].
In post-treatment, no statistical differences between groups were observed. The CBT group achieved the same results as the CBT+P [U = 60; Z = 0.56; p = 0.58] and CBT+V group [U = 64; Z = 0.32; p = 0.75]. No statistical differences between the use of Paroxetine and Venlafaxine were found either [U = 39; Z = 0.84; p = 0.44].
All groups seemed to benefit from the treatments. In fact, comparing the scores obtained by participants in the pre- and post-treatments, statistically significant differences were observed in subjects who underwent CBT [Z = 3.38, p < 0.001] as well as in subjects supported by Paroxetine [Z = 2.84; p < 0.005] and Venlafaxine [Z = 2.97, p < 0.003].
Table 2 summarizes the means and standard deviations of the different features of eating behavior (restrained eating, uncontrolled eating and emotional eating) shown by the three groups (CBT, CBT+P, and CBT+V) pre- and post-treatment.
Groups | Scale | Pre MIN | Pre MAX | Pre M | Pre SD | Post MIN | Post MAX | Post M | Post SD
CBT | restrained eating | 5 | 9 | 6.64 | 1.28 | 5 | 8 | 5.86 | .95
CBT | uncontrolled eating | 13 | 16 | 14.71 | 1.07 | 11 | 15 | 12.93 | 1.21
CBT | emotional eating | 8 | 13 | 9.93 | 1.21 | 7 | 11 | 8.57 | 1.22
CBT+P | restrained eating | 5 | 9 | 6.50 | 1.43 | 5 | 7 | 6.00 | .82
CBT+P | uncontrolled eating | 13 | 16 | 14.10 | 1.11 | 10 | 13 | 11.60 | 1.07
CBT+P | emotional eating | 8 | 13 | 10.70 | 1.64 | 8 | 11 | 9.80 | 1.03
CBT+V | restrained eating | 5 | 8 | 6.10 | 1.11 | 4 | 7 | 5.50 | .85
CBT+V | uncontrolled eating | 12 | 16 | 13.90 | 1.37 | 10 | 13 | 11.40 | 1.17
CBT+V | emotional eating | 9 | 13 | 10.60 | 1.43 | 7 | 11 | 9.10 | 1.19

Table 2 – Minimum and maximum scores, means and standard deviations of different aspects of eating behavior (restrained eating, uncontrolled eating and emotional eating) obtained by the three groups.
In eating behaviour as well, all groups appeared equivalent in the pre-treatment phase. The CBT group had the same mean result in restrained eating as the subjects who also underwent pharmacotherapy [CBT+P: U = 65; Z = 0.33; p = 0.74; CBT+V: U = 53; Z = 1.03; p = 0.55]. Pre-treatment, no statistical differences between the pharmacotherapy groups were noticed [U = 42; Z = 0.59; p = 0.58].
Analysis of the groups in the pre-treatment phase found no statistical differences in uncontrolled eating or emotional eating. Pre-treatment, the CBT subjects had the same statistical mean in uncontrolled eating [U = 48; Z = 1.33; p = 0.21] and in emotional eating [U = 48; Z = 1.32; p = 0.21] as those who took Paroxetine. No statistical differences were found when comparing CBT subjects with those taking Venlafaxine [uncontrolled eating: U = 48; Z = 1.33; p = 0.21; emotional eating: U = 48; Z = 1.32; p = 0.21]. Pre-treatment, there were also no statistical differences between the pharmacotherapy groups [uncontrolled eating: U = 46; Z = 1.44; p = 0.17; emotional eating: U = 51; Z = 1.12; p = 0.28].
All groups showed equally reduced difficulty with restrained eating habits. Comparing post-training scores, the CBT participants obtained the same results as those treated with Paroxetine [U = 61; Z = 0.56; p = 0.62] and Venlafaxine [U = 58; Z = 0.75; p = 0.51]. However, CBT alone appeared to be less effective in reducing uncontrolled eating than CBT with the addition of Paroxetine [U = 30; Z = 2.4; p < 0.02] or Venlafaxine [U = 26; Z = 2.6; p < 0.009]. Participants who underwent CBT alone presented fewer difficulties with emotional eating control than those with Paroxetine [U = 31; Z = 2.31; p < 0.02], but achieved the same post-treatment score as those supported by Venlafaxine [U = 52; Z = 1.08; p = 0.31].
Comparing post-treatment outcomes, the effectiveness of Paroxetine and Venlafaxine appeared to be the same on restrained eating [U = 34; Z = 1.2; p = 0.25], uncontrolled eating [U = 45; Z = 0.39; p = 0.69] and emotional eating [U = 33; Z = 1.29; p = 0.2].
Comparing pre- and post-treatment results revealed a significant improvement in all groups. Participants who followed CBT alone showed less difficulty with restrained eating [Z = 2.6; p < 0.009] post-treatment. The same result was observed in those supported by Venlafaxine [Z = 2.12; p < 0.03], while no statistically significant difference was detected post-treatment in the group supported by Paroxetine [Z = 1.89; p = 0.06].
Moreover it was possible to observe a considerable decrease in uncontrolled eating behaviour in all groups [CBT: Z = 3.49; p < 0.0001; CBT+P: Z = 2.84; p < 0.005; CBT+V: Z = 2.88; p < 0.004]. The same results were observed in the way emotional eating was handled. All groups benefited from treatments [CBT: Z = 3.27; p < 0.001; CBT+P: Z = 2.46; p < 0.01; CBT+V: Z = 2.87; p < 0.004].
Table 3 summarizes the means and standard deviations of the Psychopathic Deviate (Pd), Depression (D) and Hypomania (H) scales obtained by the three groups (CBT, CBT+P, and CBT+V) pre- and post-treatment.
Groups | Scale | Pre MIN | Pre MAX | Pre M | Pre SD | Post MIN | Post MAX | Post M | Post SD
CBT | Psychopathic Deviate (Pd) | 68 | 85 | 74.14 | 5.02 | 60 | 80 | 66 | 5.38
CBT | Depression (D) | 63 | 81 | 72.50 | 5.58 | 61 | 78 | 69.86 | 5.39
CBT | Hypomania (H) | 39 | 75 | 62 | 8.15 | 41 | 72 | 59.50 | 7.29
CBT+P | Psychopathic Deviate (Pd) | 70 | 84 | 74.80 | 3.97 | 67 | 80 | 71.40 | 3.72
CBT+P | Depression (D) | 66 | 76 | 70.80 | 3.91 | 64 | 74 | 67.80 | 3.58
CBT+P | Hypomania (H) | 46 | 70 | 60.30 | 7.94 | 40 | 63 | 54.10 | 8.08
CBT+V | Psychopathic Deviate (Pd) | 70 | 85 | 76.20 | 5.41 | 66 | 80 | 72.90 | 5.06
CBT+V | Depression (D) | 65 | 80 | 70.80 | 4.76 | 61 | 75 | 66.60 | 4.69
CBT+V | Hypomania (H) | 42 | 72 | 60.10 | 10.67 | 41 | 69 | 54 | 10.27

Table 3 – Minimum and maximum scores, means and standard deviations of the Psychopathic Deviate, Depression and Hypomania scales obtained by the three groups.
As evidence of the homogeneity of the groups, the comparisons revealed no statistically significant differences in the pre-treatment phase. The CBT group subjects and those who received Paroxetine [Pd: U = 58; Z = 0.67; p = 0.51; D: U = 55; Z = 0.85; p = 0.39; H: U = 63; Z = 0.41; p = 0.71] showed similar scores. Likewise, the CBT group subjects and those treated with Venlafaxine [Pd: U = 53; Z = 0.99; p = 0.34; D: U = 58; Z = 0.71; p = 0.51; H: U = 65; Z = 0.26; p = 0.79] showed similar scores. The two groups receiving pharmacological support showed similar initial scores as well [Pd: U = 44; Z = 0.46; p = 0.68; D: U = 50; Z = 0.1; p = 0.99; H: U = 44; Z = 0.45; p = 0.68].
Comparing the results obtained in the post-treatment phase, participants who received CBT alone showed a greater reduction of Pd than those who had taken Paroxetine [U = 23; Z = 2.76; p < 0.005] or Venlafaxine [U = 23; Z = 2.77; p < 0.005], whereas no differences were found when comparing the post-treatment scores of the two groups with pharmacological treatment [U = 41; Z = 0.65; p = 0.53].
Post-treatment, the CBT group participants showed similar scores to those taking Paroxetine [D: U = 54; Z = 0.91; p = 0.37; H: U = 41; Z = 1.67; p = 0.09] and Venlafaxine [D: U = 44; Z = 1.49; p = 0.14; H: U = 49; Z = 1.2; p = 0.23]. There were no further significant differences in the post-treatment scores of the two pharmacotherapy groups [D: U = 39; Z = 0.84; p = 0.44; H: U = 0.47; Z = 0.23; p = 0.85].
All participants seem to have benefited from the proposed treatment. The CBT group had a significant reduction of Pd [Z = 3.3; p < 0.001], D [Z = 3.37; p < 0.001] and H [Z = 3.19; p < 0.001].
A similar result was found by comparing the pre- and post-treatment scores of the subjects supported by Paroxetine [Pd: Z = 2.7; p < 0.007; D: Z = 2.82; p < 0.005; H: Z = 2.82; p < 0.005].
Even in the group treated with Venlafaxine, a significant reduction of Pd [Z=2.87; p< 0.004], D [Z = 2.84; p < 0.004] and H [Z = 2.81; p < 0.005] was confirmed.
Discussion
The use of pharmacological therapy for overweight patients with BED has been less thoroughly studied. SSRIs (Citalopram, Sertraline, Fluoxetine, and Fluvoxamine) have mainly been used as the active compounds in pharmacological trials of patients with BED, in order to improve mood symptoms and promote weight loss.34 Likewise, in many cases, promising results have been obtained with Venlafaxine in BED.35
Most of the research has focused on specific aspects of binge eating disorder, such as reduction in binge frequency and weight reduction. In general, these results have been associated with higher discontinuation rates.36
In this research, we did not focus only on binge eating behaviour and impulse regulation in patients with BED. The main objective was also to analyze aspects of eating behaviour (restrained eating, uncontrolled eating and emotional eating) and, more specifically, certain psychopathological features (psychopathic deviate, depression and hypomania).
The first hypothesis of this research concerned differences between patients with binge eating disorder who followed CBT with or without pharmacotherapy support. The results indicated that CBT alone and CBT combined with pharmacotherapy were equally effective in the treatment of BED and equally modified patients’ impulse regulation. Paroxetine and Venlafaxine did not enhance the control of binge eating or improve the management of impulse regulation beyond CBT alone.
This research also aimed at evaluating the efficacy of CBT with or without pharmacotherapy on factors related to eating behaviour, such as the tendency to consciously monitor and reduce caloric intake (restriction), the tendency to lose control over food intake (uncontrolled eating) and the conscious perception of the sensation of craving for food (emotional eating). The results suggest that CBT offers the same results as pharmacological treatment regarding the reduction of caloric intake (restriction). It is less effective in reducing the lack of control over food intake (uncontrolled eating), although it helps to reduce the sensation of craving for food (emotional eating) compared to pharmacotherapy.
In this research, the effects of standardized CBT, with or without pharmacotherapy at bio-equivalent doses of Paroxetine or Venlafaxine, were analyzed on psychopathic deviation, depression, and hypomania. The results showed that CBT alone produced a greater reduction of psychopathic deviation than the treatments with pharmacotherapy, whereas pharmacotherapy led to a greater reduction of depression and hypomania than CBT alone.
The second hypothesis was to verify whether the SSRI Paroxetine and the SNRI Venlafaxine were equally effective on impulse regulation, eating behaviour and personality features. The analysis showed that Paroxetine and Venlafaxine were equally effective on binge eating control and impulse regulation, but some differences in reducing dysfunctional eating behavior were found. Venlafaxine, compared to Paroxetine, seems to offer a greater improvement in emotional eating and restrained eating behavior. Indeed, CBT could address the tendency to reduce caloric intake (restriction) and the sensation of craving for food (emotional eating) more effectively than Paroxetine, whereas to reduce the tendency to lose control over food intake (uncontrolled eating) it could be helpful to administer Paroxetine or Venlafaxine.
Limitations
While the clinical groups were equivalent in all the parameters taken into consideration in the pre-treatment phase, the absence of a control group (no treatment) significantly reduced the possibility of verifying the conclusions accurately. For ethical reasons we were not allowed to select a group of patients without any specific treatment. In order to correct this weakness, it would be helpful to extend the sample and analyze the changes over a longer period of time.
These aspects should be analyzed through controlled trials in order to test the efficacy and long-term outcome of psychotherapy, pharmacotherapy, and psychotherapy combined with pharmacotherapy for treating BED.
Conclusion
In conclusion, patients with eating disorders usually suffer from other psychiatric disorders besides their eating disorder. Many results also confirm substantial comorbidity among obesity, BED, mood and anxiety disorders and metabolic syndrome in weight loss seeking populations.37
In such cases, it is important to understand the characteristics of the additional psychiatric disorders and the impact they have throughout treatment.
As underlined by American Dietetic Association (ADA) Reports,38 understanding the complexities of eating disorders, such as influencing factors, comorbid illness, and medical and psychological complications, is critical to the effective treatment of eating disorders.
Eating disorders are complex medical illnesses with psychological, behavioural, and physiological components. Previous researchers underlined the importance of investigating gender differences in binge eating and the associated behavioural correlates,39 and, in order to prevent eating disorders, of directing individual treatment at personality traits when the disorders have already occurred.40 Of course, a multidisciplinary approach involving a collaborative team of psychological, nutritional, and medical specialists, as underlined in this research, must be pursued in order to obtain important, at least short-term, results.41
The results of this research confirm the need to analyze BED from an integrative perspective and to suggest treatments based on an interdisciplinary approach. The psychological (CBT) and pharmacological (Venlafaxine and Paroxetine) therapies were both effective, in different ways, in reducing the negative variables related to the eating disorder. However, any treatment may prove inadequate in the absence of an accurate diagnosis that takes into consideration biological, genetic, psychological and nutritional components.
The assessment phase still plays an important role in determining which treatment is best for each patient. Accuracy is recommended in the medical examination when dealing with medical issues, as well as during the assessment examination and the evaluation of psychological functioning.
Community-acquired urinary tract infection (UTI) due to Escherichia coli is one of the most common forms of bacterial infection, affecting people of all ages. Originally, ESBL (extended spectrum β-lactamase) producing E. coli was isolated from hospital settings, but lately this organism has begun to disseminate in the community.1
In India, the community presence of ESBL-producing organisms has been well documented. However, the various epidemiological factors associated with ESBL-producing strains need to be documented. This will allow clinicians to identify patients with community UTI who have these risk factors, so that appropriate and timely treatment can be given.2 A community UTI, when complicated, may be a potentially life-threatening condition. In addition, deciding the empirical treatment for patients with a UTI requires a thorough knowledge of local epidemiology. Therefore, the primary objective of this study was to determine the epidemiological factors associated with ESBL-positive community-acquired uropathogenic E. coli isolates and to determine their susceptibility to newer oral drugs. Mecillinam is a novel β-lactam antibiotic that is active against many members of the family Enterobacteriaceae. It binds to penicillin-binding protein 2 (PBP 2), an enzyme critical for the establishment and maintenance of bacillary cell shape. It is given as a prodrug that is hydrolyzed into the active agent, and it is well tolerated orally in the treatment of acute cystitis.3
Material and Methods
This prospective study was conducted from January to July 2012 in our tertiary care hospital, which caters to the medical needs of the community in North India.
Study Group:
The study group included patients diagnosed as having a UTI in the outpatient clinic or the emergency room, or within 48 hrs of hospitalization; these patients were labeled as having a community UTI. A diagnosis of symptomatic UTI was made when the patient had at least one of the following signs or symptoms with no other recognized cause: fever ≥ 38.8˚C, urgency, frequency, dysuria or suprapubic tenderness, together with a positive urine culture (i.e. ≥10^5 microorganisms/ml of urine).4 Epidemiological factors for each patient were recorded on individual forms: age, presence of diabetes mellitus, renal calculi, pregnancy, history of urinary instrumentation, recurrent UTI (more than 3 UTI episodes in the preceding year) and antibiotic intake (use of a β-lactam in the preceding 3 months).2
Patients with a history of previous or recent hospitalization were excluded from the study.
Antibiotic susceptibility testing was carried out following Clinical Laboratory Standards Institute (CLSI) guidelines using the Kirby-Bauer disc diffusion method.5 The antibiotics tested included Amoxyclav (30/10µg), Norfloxacin (10µg), Ciprofloxacin (5µg), Tetracycline (30µg), Nitrofurantoin (300µg), Trimethoprim-sulfamethoxazole (23.75/1.25µg), Cephalexin (30µg), Cefaclor (30µg), Cefuroxime (30µg) and Mecillinam (10µg) (Hi-Media, Mumbai, India).
Detection of ESBL
ESBL detection was done for all isolates according to the latest CLSI criteria.5
Screening test - According to the latest CLSI guidelines, a zone diameter for Ceftazidime <22mm or for Cefotaxime <21mm is presumptively taken to indicate ESBL production.
Confirmatory test - As per CLSI guidelines, ESBLs were confirmed by placing a disc of Cefotaxime and Ceftazidime at a distance of 20mm from a disc of Cefotaxime/Clavulanic acid (30/10µg) and Ceftazidime/Clavulanic acid (30/10µg) respectively on a lawn culture of the test strain (0.5 McFarland inoculum size) on Mueller-Hinton agar. After overnight incubation at 37°C, ESBL production was confirmed if there was a ≥5mm increase in zone diameter for either antimicrobial agent tested in combination with Clavulanic acid versus its zone when tested alone.
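These screening and confirmation criteria reduce to simple threshold comparisons on measured zone diameters. A minimal illustrative sketch in Python (function and variable names are ours, not part of the CLSI standard):

def esbl_screen_positive(ceftazidime_zone_mm, cefotaxime_zone_mm):
    # Presumptive ESBL producer if either screening zone is reduced
    # (Ceftazidime < 22 mm or Cefotaxime < 21 mm, per the criteria above).
    return ceftazidime_zone_mm < 22 or cefotaxime_zone_mm < 21

def esbl_confirmed(zone_alone_mm, zone_with_clavulanate_mm):
    # Confirmed if the inhibition zone grows by >= 5 mm when the same agent
    # is combined with clavulanic acid.
    return (zone_with_clavulanate_mm - zone_alone_mm) >= 5

# Example: a Ceftazidime zone of 18 mm screens positive; a 24 mm zone with
# Ceftazidime/Clavulanic acid (a 6 mm increase) confirms ESBL production.
assert esbl_screen_positive(18, 25) and esbl_confirmed(18, 24)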
Control strain - The standard strain Klebsiella pneumoniae ATCC 700603 was used as the ESBL-positive control and Escherichia coli ATCC 25922 as the ESBL-negative control.
Results
Out of a total of 140 strains of E. coli screened for ESBL production, 30 (21.4%) isolates were found to be positive. High-level resistance was seen for many antimicrobial agents: Cephalexin (92.8%), Cefaclor (90%), Amoxy-clavulanate (88.57%), Cefuroxime (75.7%), Sulfamethoxazole-trimethoprim (72.8%), Norfloxacin (75.71%) and Ciprofloxacin (70%). Sensitivity to Nitrofurantoin was found to be 90%. Only 4.5% of uropathogenic E. coli were resistant to Mecillinam.
Epidemiological factors seen in ESBL producers included female sex (n=24, 80%), history of antimicrobial intake (n=17, 57%), elderly age >60 years (n=16, 53%), renal calculi (n=15, 50%), history of recurrent UTI (n=11, 37%), pregnancy (n=11, 37%), diabetes mellitus (n=7, 23%) and history of urogenital instrumentation (n=7, 23%).
Discussion
The epidemiology of ESBL-positive uropathogenic E. coli is becoming more multifaceted, with increasingly indistinct boundaries between the community and the hospital.6 In addition, community UTI caused by an ESBL-producing organism is associated with treatment failure, delayed clinical response, and higher morbidity and mortality. These organisms are also multi-resistant to other antimicrobials such as Aminoglycosides, Quinolones and Co-trimoxazole. Therefore, empirical therapy with Cephalosporins and Fluoroquinolones often fails in patients with community UTI.7
The rate of ESBL producers in our study is lower than that described by other authors. In a similar study, Mahesh E et al. reported a higher rate (56.2%) of ESBL positivity among E. coli causing UTIs in a community setting.8 Additionally, Taneja N et al. described a higher rate (36.5%) of ESBL positivity in uropathogens.9,10
A high rate of resistance was seen to almost all antimicrobial agents, in agreement with other authors such as Mahesh et al. and Mandal J et al.8,11 Mecillinam showed very good results, with only 4.5% resistance. Wootton M et al. reported similarly high activity of Mecillinam against E. coli (93.5%).3 Auer S et al. reported that Mecillinam can be a good oral treatment option in patients with infections due to ESBL-producing organisms.7
A limitation of our study was that, given the limited resources of a developing country, molecular typing and determination of antimicrobial resistance profiles of the isolates were not done. In our study female sex, elderly age, history of antimicrobial intake, renal calculi and history of recurrent UTI were important factors for infection due to ESBL producers. These findings are similar to risk factors reported by other authors.2 In conclusion, this study confirms that ESBL-producing E. coli strains are a notable cause of community-onset infections, especially in predisposed patients. The widespread and rapid dissemination of ESBL-producing E. coli appears to be an emerging issue worldwide. Further clinical studies are needed to guide clinicians in the management of community-onset infections caused by E. coli.
The American Diabetes Association (ADA) and the American College of Endocrinology (ACE) recommend HbA1c levels as diagnostic criteria for diabetes mellitus. Physicians have adopted HbA1c levels as a convenient way to screen for diabetes, as well as to monitor therapy. There is concern that, because HbA1c is formed from the glycation of the terminal valine unit of the β-chain of haemoglobin, it may not be an accurate surrogate for glycaemic control in conditions that affect the concentration, structure and function of haemoglobin. It makes logical sense to infer that HbA1c levels should at least in part reflect the average haemoglobin concentration ([Hb]). Kim et al (2010) stated that iron deficiency is associated with shifts in HbA1c distribution from <5.0 to ≥5.5%,1 and significant increases were observed in patients' absolute HbA1c levels 2 months after treatment of anaemia.2 There is a dearth of literature on HbA1c levels in the anaemic population, and a reference range for this unique population does not currently exist. There are a few documented studies on this matter, the findings of which are at best inconsistent.
It is thought that the various types of haemoglobin found in the myriad of haemoglobinopathies may affect haemoglobin-glucose bonding and/or the lifespan of haemoglobin, and by extrapolation, HbA1c levels. Hence, extending target HbA1c values to certain haemoglobinopathies may be erroneous owing to potential differences in glycation rates, analytical methods (HbF interferes with the immunoassay method) and some physiological challenges (markedly decreased red cell survival).3
There is a significant positive correlation between haemoglobin concentration and HbA1c in the patients with haemolytic anaemia.4,5 Cohen et al (2008) reported that observed variation in red blood cell survival was large enough to cause clinically important differences in HbA1c for a given mean blood glucose,6 and haemolytic disorders may cause falsely reassuring HbA1c values.7 Jandric et al (2012) inferred that in diabetic population with haemolytic anaemia, HbA1c is a very poor marker of both overall glycemia and haemolysis.8 Mongia et al (2008) report that immunoassay methods for measuring HbA1c may exhibit clinically significant differences owing to the presence of HbC and HbS traits.9 However, Bleyer et al report that sickle cell trait does not affect the relationship between HbA1c and serum glucose concentration and it does not appear to account for ethnic difference in this relationship in African Americans and Caucasians.10
Koga & Kasayama (2010) advise that caution should be entertained when diagnosing pre-diabetes and diabetes in people with low or high haemoglobin concentration when the HbA1c level is near 5.7% or 6.5% respectively, citing the implication of changes in erythrocyte turnover. They further assert that the trend for HbA1c to increase with iron deficiency does not appear to necessitate screening for iron deficiency to ascertain the reliability of HbA1c in this population.11
In the light of the uncertainty in the influence of anaemia and haemoglobinopathies on HbA1c, it is imperative that clinicians are aware of the caveats with HbA1c values when they make management decisions in the anaemic population.12 There is currently a call for the use of other surrogates for ascertaining average glycemic control in pregnancy, elderly, non-Hispanic blacks, alcoholism, in diseases associated with postprandial hyperglycemia, genetic states associated with hyperglycation, iron deficiency anaemia, haemolytic anaemias, variant haemoglobin states, chronic liver disease, and end-stage renal disease (ESRD).13,14
Study objectives and hypothesis
The study attempts to discern clinical differences in HbA1c levels between patients with anaemia and a non-anaemic population, and to quantify the size and direction of any such difference if it indeed exists. We hypothesize that, as glucose is covalently bound to haemoglobin in glycosylated haemoglobin, HbA1c levels in a non-diabetic anaemic population are significantly lower than in a non-diabetic, non-anaemic population.2 However, this relationship may not hold true for certain anaemias, haemoglobinopathies and hyperglycation states in some genetic syndromes.
Study design and method
The study is a retrospective chart review of patients with and without anaemia who underwent haemoglobin concentration and HbA1c testing at The Brooklyn Hospital Center (TBHC) from July 2009 to June 2013. Using Cohen's (1987) power table, and assuming a power of 0.8, an alpha level of 0.05 and a small effect size of 0.2 standard deviations (SD), a required sample size of 461 was computed. A convenience sampling method was used to select patients who met the inclusion criteria, absent exclusionary conditions. In using this sampling method, we queried the electronic medical record at TBHC using the below-listed inclusion and exclusion criteria. The query generated a list of “potential subjects”. We then reviewed the electronic chart of each patient on this list to confirm that they indeed met all study criteria (excluding further patients if any exclusion criterion was identified on “second look”). We continued the selection until the computed minimum sample size of 461 was significantly exceeded; in the process we examined every patient on the “potential subject” list generated by the initial query. For the purpose of the study, anaemia is defined as a haemoglobin concentration <11g/dl.
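Such power-table lookups can be reproduced approximately in software. A hedged sketch in Python using statsmodels (illustrative only: this analytic two-sample solver yields a per-group figure, and the exact design assumptions behind the authors' total of 461 may differ):

# Approximate power-based sample size for a two-sample comparison:
# two-sided test, alpha = 0.05, power = 0.80, small effect size d = 0.2 SD.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(math.ceil(n_per_group))  # about 394 per group under these assumptions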
Inclusion criteria:
Study participants must be at least 21 years of age. We adopted this age criterion because at TBHC electronic medical records were only available for the non-pediatric population over the study period. Patients below 21 years were managed in the pediatrics department using paper charts until the recent adoption of the EMR system, and it would have been difficult to conduct the study using paper charts.
Study participants must have at least one documented HbA1c level obtained within a month of a haemoglobin concentration assay. This criterion was adopted to allow for more inclusiveness, since in a retrospective study haemoglobin assays may not be available on the same day as HbA1c assays.
Exclusion criteria:
Confirmed cases of diabetes mellitus (using two or more of the following: presence of symptoms related to diabetes, fasting blood glucose, 2 hours post-prandial glucose, and oral glucose tolerance test).
Documented history of gestational diabetes (GDM)
Documented history of endocrinopathy affecting glycaemic control
Current or prior use of medication with the potential to increase or decrease HbA1c (including, but not limited to, antidiabetics, corticosteroids, statins and antipsychotics)
Pregnancy or pregnancy-related condition within three months of HbA1c assay
Haemoglobin concentration <6 g/dl or >16g/dl.
Blood loss or blood transfusion within two months of HbA1c assay
The study assumed a consistent HbA1c assay method at the study center over the study period. A total of 482 patients (229 anaemic and 253 non-anaemic) were selected. The study reviewed the electronic medical records of selected patients, extracting data on HbA1c, fasting blood glucose (FBG), 2-hour post-prandial serum glucose (2HPPG), 2-hour oral glucose tolerance test (OGTT), haemoglobin concentration and electrophoresis, and anaemia work-up results when available. Subsequent measures of HbA1c two months after correction of anaemia were also documented and compared to pre-treatment levels.
Results and Analysis
The mean age of the anaemic and non-anaemic groups was 64.6 and 51.8 years respectively. Using the Student's t-test and χ2 analysis respectively, the difference in mean age between the two groups was significant (p<0.05) while gender distribution was similar (p>0.05), see Table 1. The mean HbA1c for the anaemic and non-anaemic groups was 5.35% and 5.74% respectively, a 0.4-unit (8%) difference in mean HbA1c. This difference was statistically significant (p=0.02). A significantly higher variance was observed in the anaemia group (0.79 vs. 0.64).
Table 1: Gender and age distribution and statistics
Anaemia group
 21-44 years: 20 (8.7%), gender (M/F) 17/41
 45-64 years: 76 (33.2%), gender (M/F) 43/86
 ≥65 years: 133 (58.1%), gender (M/F) 10/32
 Total: 229 (100.0%), gender (M/F) 70/159, mean age 64.6 years
Non-anaemic group
 21-44 years: 64 (25.3%), gender (M/F) 23/42
 45-64 years: 134 (53.0%), gender (M/F) 58/81
 ≥65 years: 55 (21.7%), gender (M/F) 18/31
 Total: 253 (100%), gender (M/F) 99/154, mean age 51.8 years
p-values: Age=0.023, Gender=0.061
Assuming that 95% of the population is normally distributed, computation of the HbA1c reference range (mean ± 1.96 SD) for the anaemic and non-anaemic groups yielded 3.8-6.9 and 4.5-7.0 respectively. There was a significant positive Spearman correlation between [Hb] and HbA1c (r=0.28, p=0.00). The mean HbA1c levels and proposed reference ranges for the five anaemia subgroups (anaemia of chronic disease [ACD], iron deficiency anaemia [IDA], mixed anaemia, macrocytic anaemia and sickle-cell disease) are shown in Table 2. Using one-way ANOVA, the differences in mean [Hb] and HbA1c across anaemia subtypes were not statistically significant (p=0.08 and p=0.36 respectively), see Table 2; a computational sketch of these calculations follows the table.
Table 2: Anaemia subtypes with HbA1c statistics
Anaemia type (n): mean [Hb] / mean HbA1c / 95% CI (HbA1c) / reference range (HbA1c)
 ACD (92): 9.23 / 5.41 / 5.24-5.59 / 3.5-7.1
 IDA (78): 9.41 / 5.38 / 5.22-5.54 / 3.9-6.8
 Mixed (11): 9.11 / 5.21 / 4.82-5.59 / 3.9-6.5
 Macrocytic (43): 8.83 / 5.14 / 4.92-5.37 / 3.7-6.6
 SCD (5): 9.12 / 5.55 / 4.84-6.26 / 3.8-7.3
 Anaemia, all types (229): 9.21 / 5.35 / 5.24-5.44 / 3.8-6.9
 Non-anaemic (253): 12.87 / 5.735 / 5.66-5.81 / 4.5-7.0
p-values: [Hb] for anaemia subtypes=0.08, HbA1c for anaemia subtypes=0.36, HbA1c anaemia vs. non-anaemic=0.02. ACD: anaemia of chronic disease, IDA: iron deficiency anaemia, SCD: sickle cell disease.
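As noted above, the reference ranges are mean ± 1.96 SD and the [Hb]-HbA1c association is a Spearman rank correlation. A minimal sketch of both computations in Python (the short arrays are hypothetical stand-ins for the per-patient study data; numpy and scipy assumed available):

import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired per-patient values (g/dl and %); the study used n=482.
hb = np.array([9.1, 10.2, 8.7, 11.5, 12.3, 9.8, 13.1, 10.9])
hba1c = np.array([5.1, 5.4, 4.9, 5.6, 5.9, 5.3, 6.0, 5.5])

# Reference range on the assumption that ~95% of a normal population
# lies within mean +/- 1.96 SD.
mean, sd = hba1c.mean(), hba1c.std(ddof=1)
print(f"reference range: {mean - 1.96*sd:.1f}-{mean + 1.96*sd:.1f}")

# Spearman rank correlation between [Hb] and HbA1c (r=0.28 in the study).
rho, p = spearmanr(hb, hba1c)
print(f"rho = {rho:.2f}, p = {p:.3f}")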
The study also examined the anaemia group to document the effect of anaemia correction on HbA1c levels. Only 62 of the 229 anaemic participants had documented [Hb] and HbA1c after interventions to correct anaemia, see Tables 3 and 4.
Table 3: Trend in [Hb] and HbA1c
Measure (N=62): mean / SD / SEM
 [Hb]1: 9.2 / 1.07 / 0.14
 [Hb]2: 10.1 / 1.98 / 0.25 (Δ[Hb]=0.9, p=0.00)
 HbA1c1: 5.37 / 0.69 / 0.88
 HbA1c2: 5.35 / 0.66 / 0.83 (ΔHbA1c=0.02, p=0.78)
[Hb]1 and [Hb]2: haemoglobin concentration pre- and post-treatment for anaemia. HbA1c1 and HbA1c2: HbA1c pre- and post-treatment for anaemia.
Table 4: Trend in [Hb] and HbA1c for anaemia subtypes
Subtype (N): mean [Hb]1 / mean [Hb]2 / ΔHb / p-value; mean HbA1c1 / mean HbA1c2 / ΔA1c / p-value
 ACD (33): 9.1 / 9.7 / 0.6 / 0.0; 5.44 / 5.35 / 0.09 / 0.3
 IDA (21): 9.4 / 10.7 / 1.3 / 0.0; 5.30 / 5.33 / 0.03 / 0.8
 Mixed (1), Macrocytic (6), SCD (1): values not reported
 Total (62): 9.2 / 10.1 / 0.9 / 0.0; 5.37 / 5.35 / 0.02 / 0.8
ΔHb: change in haemoglobin concentration ([Hb]), ΔA1c: change in HbA1c
Using the Student's t-test, a 0.9 g/dl mean improvement in [Hb] in the anaemia group (significant at p=0.00) did not result in a statistically significant change in HbA1c (-0.02 units, p=0.78). Similar results were obtained for anaemia of chronic disease and iron deficiency anaemia (ACD: Δ[Hb]=+0.6 g/dl, ΔHbA1c=0.09, p=0.31; IDA: Δ[Hb]=+1.3 g/dl, ΔHbA1c=0.03, p=0.79).
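The pre/post comparison above is a paired analysis on the same 62 patients. A hedged sketch of the equivalent test in Python (the arrays are hypothetical stand-ins for the study's per-patient values):

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical pre- and post-treatment HbA1c for the same patients.
hba1c_pre  = np.array([5.4, 5.2, 5.6, 5.1, 5.8, 5.3, 5.5])
hba1c_post = np.array([5.3, 5.3, 5.5, 5.0, 5.7, 5.4, 5.5])

t_stat, p_value = ttest_rel(hba1c_post, hba1c_pre)
print(f"mean change = {(hba1c_post - hba1c_pre).mean():+.2f}, p = {p_value:.2f}")
# In the study, a +0.9 g/dl mean rise in [Hb] (p=0.00) was accompanied by a
# non-significant -0.02 unit change in HbA1c (p=0.78).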
Discussion
There was an over-representation of the elderly in the anaemia group (58.1% vs. 21.7%). This is not unexpected, as nutritional anaemia and anaemia of chronic disease increase in prevalence with the co-morbidities that accumulate with age. The linear relationship between [Hb] and HbA1c holds true for both the anaemic and non-anaemic populations. There is a statistically significant difference of 0.4 units (8%) in mean HbA1c between the anaemic and non-anaemic populations. This difference is even more marked when the lower limits of the ranges are compared (3.8 vs. 4.5, a difference of 0.7 units, 18%), although the lower limit is less clinically consequential than the upper limit (which drives the diagnostic criteria for diabetes mellitus). However, the relatively lower limit of normal for HbA1c in anaemic subgroups (especially anaemia of chronic disease) may make low HbA1c values in these patients less indicative of over-enthusiastic glycaemic control, as well as less predictive of the increase in mortality associated with such tight control.
The upper limits of normal for HbA1c in the anaemic and non-anaemic groups, and by extrapolation the proposed diagnostic criteria for diabetes, are however more similar (6.9 vs. 7.0%). This result appears consistent with Koga and Kasayama's (2010) assertion that the trend in HbA1c does not appear to necessitate screening for iron deficiency to ascertain the reliability of HbA1c in this population.11 Our observation is explained by the greater variance within the anaemia group, which may in turn be explained by the convenient homogenization of clinically heterogeneous anaemia entities into a single group. A prospective study that avoids this may report differently.
The significantly higher variance (23%) in the anaemia group is explained by the heterogeneity of the subtypes within it. The myriad pathophysiologies (from variant haemoglobins affecting the structure, function and perhaps glycation rates of haemoglobin, to shortened erythrocyte lifespan due to intravascular and extravascular haemolysis) account for a less precise HbA1c reference range for the anaemia group. Separating the anaemia group into unique anaemia subtypes created less heterogeneity, reduced some within-group variance and yielded a more precise reference range for some anaemia subtypes.
The widened 95% CIs of the mean and reference ranges observed with mixed and sickle cell anaemia (95% CI of the mean = 4.82-5.59 and 4.84-6.26 respectively) may be attributable in part to the small number of participants in these subgroups (11 and 5 respectively; the normal-curve approximation is less robust when n<30). Furthermore, the marked variability in the type, severity and number of chronic morbidities and deficiencies causing mixed anaemia may contribute. The imprecision of HbA1c observed with sickle cell disease may be compounded by its unstable clinical course, marked by periodic crises with fluctuating [Hb] associated with intermittent or chronic haemolysis. These observations make the case for defining HbA1c reference ranges for each anaemia type.
A modest correction of anaemia (Δ[Hb] of +0.9 g/dl, i.e. <1 g/dl) did not appear to cause a significant change in HbA1c levels. It is possible that larger increments in [Hb] may produce a significant change in HbA1c (we predict in the direction of the increment). A similar pattern was observed in the anaemia of chronic disease and iron deficiency anaemia subtypes, where improvements in [Hb] of 0.6 and 1.3 g/dl respectively did not cause a significant change in HbA1c. We propose that with anaemia of chronic disease the change in [Hb] was too modest to cause a significant change in HbA1c; the relatively small number of participants (33) also makes a type II statistical error likely. We further propose that with anaemia of chronic disease, the myriad functional cellular and systemic abnormalities associated with the primary disorder (many potentially affecting cellular homeostasis, especially acid-base balance and covalent binding to the haemoglobin molecule) may blunt the potential for HbA1c to rise with increasing [Hb]. Given the retrospective nature of the study, we could not ascertain the timelines of certain interventions and hence accurately determine the persistence of anaemia correction; theoretically, a recent correction in [Hb] is less likely to affect HbA1c. As alluded to above, Kim et al (2010) evaluated changes in HbA1c two months after correction of anaemia. Similar explanations are offered for the observation with iron deficiency anaemia: there were only 21 participants in that subgroup (i.e. <30, a probable violation of a rule for the use of parametric tests), making the parametric statistical tests less robust. We did not study patterns within the mixed, macrocytic and SCD subtypes, as each had fewer than 7 participants (1, 6 and 1 respectively).
The study examined a large volume of data, eliminating as far as possible potential extraneous factors in the relationship between [Hb] and HbA1c levels. However, the retrospective design made controlling other extraneous variables and certain patient attributes infeasible, and it was difficult to discern critical timelines and hence eliminate the potential impact of certain therapeutic interventions. Our exclusion of younger patients (i.e. 16-20 years) does not necessarily mean the results cannot be extended to this population; in fact, the similar haemoglobin physiology in this group suggests the results may be extended to these younger patients without concern. Due to the retrospective nature of the study, and in our attempt to increase inclusiveness, we allowed haemoglobin concentration and HbA1c assays done within a month of each other; in reality, the majority (57%) had same-day assays and an even greater majority (79%) had assays within the same week. We recommend a larger-scale prospective study with participants representative of all anaemia subtypes and ages, so that the results can be extrapolated to the general population of anaemic patients.
Conclusion
The study emphasizes the need to exercise caution when applying HbA1c reference ranges to anaemic populations, and makes the case for defining HbA1c reference ranges, and thus therapeutic goals, for each anaemia subtype. Redefining such reference ranges may increase the sensitivity of HbA1c in diagnosing diabetes in anaemic populations if the lower mean HbA1c (observed in this study) translates into a significantly lower upper limit of the reference range (not observed in this study). Also, the reduced lower limit of the reference range realised in this population should lead to appropriate clinical tolerance of lower HbA1c levels, avoiding inappropriate intervention for an erroneous perception of over-enthusiastic control of diabetic hyperglycaemia. We recommend that, absent risk factors for and symptoms relatable to diabetes, marginal elevations in HbA1c (i.e. HbA1c >6%) in anaemic patients should warrant confirmation of the diagnosis using fasting blood glucose and 2HPPG or OGTT. The use of other surrogates of glycaemic control, immune to the blur associated with haemoglobin type and concentration, may circumvent the problems associated with HbA1c in this special population; to this end, fructosamine and glycated albumin assays are currently being examined.1,15
Currently, depression is the leading cause of disability in the world and is predicted to become the second largest killer after heart disease by the year 2020.1 Eighty percent of individuals with depression report functional impairment, while 27% report serious difficulties at work and in home life.2 According to a study conducted in 2011, India has the highest rate of depression (36%) among low-income countries, with women affected twice as often as men.3 Cancer in children occurs randomly and spares no ethnic group, socio-economic class or geographical region. An estimated 11,630 new cases were expected among children aged 0-14 years in 2012 in the US, of which 1,310 were expected to prove fatal by the end of 2013.4 Based on the Karachi Cancer Registry, it is estimated that about 7,500 children develop cancer every year in Pakistan.5 Mortality rates for childhood cancers have declined by 68% over the past four decades, from 6.5 per 100,000 in 1969 to 2.1 in 2009.4 However, the diagnosis of cancer in one's child marks the beginning of social and psychological devastation for the whole family, especially the mother. The length and intensity of treatment can be as distressing as the disease itself, negatively affecting parents' functionality and, in turn, the child's ability to handle the treatment.6 As the primary caregiver, the mother's responsibilities increase substantially, starting a vicious cycle of anxiety and socio-economic uncertainty that leads her to depression far more often than the father.7 The available data support the view that mothers of children with cancer are prone to high levels of emotional distress, and that the period following their child's diagnosis and the initiation of treatment may be particularly stressful and disturbing, leading to depression.8 Such mothers have difficulty taking care of themselves, their household and especially their sick children. Many parents continue to suffer from clinical levels of distress even five years after their child has finished treatment.7 Many studies have shown that chronic depression and distress may decrease immune functioning and increase the risk of infectious disease in healthy individuals.9-11 Mothers generally spend the most time with the child and hence are most affected by the child's disease. In this study, we intended to estimate the frequency and severity of depression in mothers of children with cancer.
There is limited evidence from Pakistan regarding depression in mothers of children with cancer. Previous studies had certain limitations, such as small sample sizes, assessment of depression in both parents, and restriction to children with leukaemia only. This study intends to determine the frequency and severity of depression among mothers of children with cancer.
Methods
A cross-sectional survey was conducted in the paediatric oncology clinics at The Aga Khan University Hospital, a teaching hospital in Karachi, over a period of six months (September 2011-March 2012). Mothers of children with cancer were enrolled consecutively according to the inclusion and exclusion criteria. Included were mothers of children under 15 years of age with any type of cancer diagnosed by an oncologist (at least 2 months after diagnosis, to exclude the normal grief period)12, whether bringing their sick child to the teaching hospital for the first time, for follow-up, or for day care oncology procedures. Mothers who had an existing psychiatric illness (and/or were already diagnosed as having depression by a doctor) and/or were taking medications for it, had any recent deaths in the family (within six months of interview) or had other co-morbidities (malignancy, myocardial infarction in the previous year, neuromuscular disease limiting ambulation, or blindness) were excluded.
A pre-coded, validated and structured Urdu13,14 and English15,16 version of the questionnaire was used for data collection. The questionnaire took about 20 minutes to complete and consisted of two sections. Section A included the mother's and child's demographic details and treatment status. Section B consisted of the Hamilton Depression Rating Scale (HAM-D 17), a validated scale (sensitivity 78.1%, specificity 74.6%) for assessing the frequency and severity of depression in both hospitalised patients and the general population.15 Scores of <7 indicate no depression and scores >7 are labelled as depression. Mothers found to be depressed were further classified into mild (scores 8-13), moderate (scores 14-18), severe (scores 19-22) and very severe depression (scores >23).16 Mothers with mild to moderate depression were referred to family physicians; those with severe or very severe depression, or suicidal tendencies, were urgently referred to a psychiatrist.
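The HAM-D 17 cut-offs above amount to a simple classification rule. A minimal illustrative sketch in Python (the function name is ours; boundary scores of exactly 7 and 23 are handled in line with the bands listed above):

def hamd_category(score):
    # Classify a HAM-D 17 total score using the study's cut-offs:
    # <=7 no depression, 8-13 mild, 14-18 moderate, 19-22 severe, else very severe.
    if score <= 7:
        return "no depression"
    if score <= 13:
        return "mild"
    if score <= 18:
        return "moderate"
    if score <= 22:
        return "severe"
    return "very severe"

# Example: a mother scoring 15 falls in the moderate band and, per the study
# protocol, would be referred to a family physician.
assert hamd_category(15) == "moderate"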
Institutional Ethical Committee of the Aga Khan University Hospital approved the study. Confidentiality of participants was maintained and informed written consent was obtained.
The sample size was calculated using WHO software. The prevalence of maternal depression ranges from 56.5% to 61.5%,17,18 as evident from different international studies. With a 95% confidence interval and a bound on error of 10%, the sample size came to 95; after adding 5% for non-responders, the total required sample size was 100 participants. Data were double entered and analyzed in SPSS version 19. The outcome variable was dichotomized as no depression and depression (cut-off score 7). Analysis was performed by calculating frequencies of categorical variables (maternal age, education, current marital status, employment, co-morbidities, diagnosed depression and treatment in the mother, number of children, gender of the sick child, cancer type, time since diagnosis of cancer in the child, treatment given so far, current treatment status of the child, and family income). Mean and standard deviation were reported for the current age of the child.
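This kind of prevalence-based sample size follows the standard formula n = Z²p(1−p)/d². A worked check in Python (illustrative only; the WHO software's rounding conventions may differ slightly):

import math

z, p, d = 1.96, 0.565, 0.10   # 95% confidence, expected prevalence, error bound
n = (z**2) * p * (1 - p) / d**2
print(math.ceil(n))           # ~94.4 -> 95
print(math.ceil(n * 1.05))    # +5% for non-response -> about 100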
Results
One hundred and sixty mothers were approached, of whom 100 consented to participate, yielding a response rate of 62.5% (100/160). Among the mothers, the most common age group was 30-39 years (43%). Fifty-five percent of mothers had a high level of education (completed classes 11-12 or engaged in professional education). Nearly all the mothers were married (98%) and homemakers (95%); only 5% worked outside the home. More than half of the participants (57%) had one to three children while 43% had more than three. Monthly family income for 65% of the participants was more than fifty thousand Pakistani rupees (Table 1).
Table 1: Demographic characteristics of mothers (N=100)
Age of mother: 20-29 years, 39 (39%); 30-39 years, 43 (43%); 40 years and above, 18 (18%)
Education level of mothers*: no education, 13 (13%); primary/secondary/intermediate, 32 (32%); higher, 55 (55%)
Marital status of mothers: currently married, 98 (98%); divorced, 1 (1%); widowed, 1 (1%)
Maternal employment status: housewife, 95 (95%); working, 5 (5%)
Number of children: 1-3, 57 (57%); more than 3, 43 (43%)
Family income (Pakistani rupees/month): <20,000, 4 (4%); 20,000-50,000, 31 (31%); >50,000, 65 (65%)
* Not educated: no primary education; Primary: 1-5 years of schooling; Secondary: 6-10 years of schooling; Intermediate: classes 11 and 12; Higher: completed or engaged in professional education.
The demographic characteristics of the children are detailed in Table 2. Seventy-five percent of the sick children were male and 25% female (n=100). Half the children were diagnosed with cancer between the ages of three and nine. Fifty percent of children (n=50) had received their diagnosis of cancer in the preceding one to five years. More than half of the children (57%) were on treatment during the study. The different types of cancer occurring in the children are shown in Figure 1.
Table 2: Demographic and social characteristics of the sick child (N=100)
Current age of child: mean 6.90 (SD ±3.40) years
Gender of sick child: male, 75 (75%); female, 25 (25%)
Age of child at cancer diagnosis: 10 months-3 years, 40 (40%); 3-9 years, 50 (50%); more than 9 years, 10 (10%)
Time since diagnosis of child's cancer: <1 year, 15 (15%); 1-5 years, 50 (50%); >5 years, 35 (35%)
Current treatment status of child: on treatment, 57 (57%); off treatment, 43 (43%)
Figure 1: Frequency of various types of cancer in children (N=100). *Others: BLL, rhabdomyosarcoma, glioblastoma, nephroblastoma.
Seventy-eight percent of the mothers were depressed. Of these, 69% (n=54) had mild depression, about 25% (n=19) moderate, 5% (n=4) severe and 1% (n=1) very severe depression (Table 3).
Table 3: Frequency and levels of severity of depression in mothers
Frequency of depression (n=100): depression present, 78 (78%); depression absent, 22 (22%)
Severity of depression (n=78): mild, 54 (69%); moderate, 19 (25%); severe, 4 (5%); very severe, 1 (1%)
Discussion
Depressed patients are encountered in nearly all specialty clinics. However, depression in the caregivers accompanying patients is usually overlooked, as doctors are mostly focused on the patient's evaluation, condition and treatment. When the patient is a child and the diagnosis is cancer, this difficult circumstance has a sudden and long-term impact on both the child and the family. Many parents of a child with cancer have very strong feelings of guilt, and parents of cancer survivors may be at risk of impaired physical and mental health. An increasing body of literature supports the conclusion that various levels of parental distress continue long after treatment is completed.19,20
The prevalence of depression among mothers in this study was as high as 78%: mild depression was seen in 69%, moderate in 25%, severe in 5% and very severe in 1%. Such a high prevalence of depression in these mothers has not been reported from Pakistan before. The soaring levels of depression are, however, consistent with a 2009 Turkish study of mothers of children with leukaemia,21 in which 88% (n=65) of mothers were depressed, with mild depression reported in 22.7% (n=18) and major depression in 61.5% (n=40). Similar results were reported from a 2002 Pakistani study of both parents of children with leukaemia, in which 65% (n=60) of mothers were found to be depressed,17 although the severity of depression was not noted in that study. A Sri Lankan study in 2008 showed moderate to severe depression in 22.9% and 21.9% of mothers of children with mental and physical disorders respectively.22 Another study, conducted in Florida in 2008, suggests that increased symptoms of depression in mothers are related to significantly lower ratings of quality of life for their children.18
The existing data support the argument that mothers of children with cancer are prone to high levels of emotional distress. The time following their child's diagnosis and the commencement of treatment may be particularly stressful and traumatic,23 with an incidence of depression as high as 40%.24-26 There could be multifactorial reasons for the alarmingly high rate seen in Pakistani mothers. One could be the political instability Pakistan has faced in recent years, leading to economic volatility; the 2002 Karachi study found maternal depression of 65%,17 which has risen to 78% in this study. Owing to political unrest, frequent strikes and bomb blasts, these mothers may also have difficulty reaching the hospital for scheduled visits, leading to postponement of treatment. Another reason could be economic inflation: the cost of daily living has soared, while the allocated health budget was 0.27% of gross domestic product (GDP) in 2011-12, insufficient to cater to the needs of the population (Economic Survey of Pakistan 2011-2012).
There has also been a recent trend in Pakistan towards nuclear rather than extended families.27 This may leave the mother as the sole person looking after the sick child and her healthy children, as well as managing household chores and doctors' appointments, leading to more frustration. This study also showed that 57% of mothers had one to three children while 43% had more than three; looking after multiple children is demanding and may erode mothers' coping skills, which could be another factor in the high rate of depression.
Another possible reason for this high rate of depression could be that the hospital was visited mostly by educated mothers, who have access to the internet and can look up every detail, good or bad, about their child's disease, potentially starting a vicious circle of worry. Other possible contributors include the gender of the child (in this society the male child is seen as the future support and breadwinner of the family), the child's current treatment status and the time since the child's cancer diagnosis.
Strengths and Limitations
To the best of the authors' knowledge, this study has touched upon a topic that had not yet been attended to in the local context. Moreover, in this study an adjustment phase of two months for acute stress and post-traumatic stress disorder was allowed before diagnosing depression in mothers, in order to reduce bias. The HAM-D also focuses on symptoms in the past week, minimizing recall bias. The findings of this study offer evidence of the need to develop psychological support in Pakistan for families, especially mothers, who are caring for a child with cancer.
This study has several limitations. It was conducted in a tertiary care private hospital, which mostly caters to a specific segment of the population; hence, the results may not be truly representative. All data were self-reported by the participants, so some bias in their responses and recall is anticipated. Lastly, since this was a cross-sectional study, temporality is difficult to establish.
Conclusion
In conclusion, more than three-fourths of our study participants were depressed.
Identifying depressed mothers should allow effective strategies to be developed to enhance their coping skills and to treat them medically when required. In the long term this is expected to improve quality of life for the mothers themselves as well as for their sick and healthy children.
Future Research and Policy Recommendations
Future studies are recommended to confirm our findings. Such studies need to be conducted on a larger scale, at the national level, in various hospitals and settings, with appropriate means of measuring depression in mothers, to counteract the limitations of our study. Factors not explored in this study, such as mothers' personality styles and coping skills, should be examined, as these may be significant contributors to depression. Co-morbidities such as anxiety and post-traumatic stress disorder symptoms related to the child's cancer should also be investigated.
Other associated factors, such as the political and economic situation, which may also be a leading cause of depression in our part of the world, should likewise be assessed. Simultaneously, measures should be taken to root out such factors at the national level.
The results of the current study show the need to incorporate mothers into a treatment process designed around psychological interventions, not only after the diagnosis of cancer in their child but also during the child's treatment. Psychosocial services should be recognised as an important constituent of comprehensive cancer care for the families of children with cancer.
It is highly advocated that healthcare professionals who work with the families of children with cancer evaluate the children and their families concerning the psychological and social aspects of their lives. Arrangements for family counselling should be made for those needing help, and mothers should also be referred to family physicians and to social support where available. The mother's crucial position in the family, and the proximal and distal effects of her adaptation to the crisis of cancer in the family, should drive the design of interventions aimed at decreasing her distress and promoting her adaptive coping, as improving mothers' problem-solving skills has been associated with reductions in depression and anxiety.28 Thus, all hospitals dealing with paediatric cancer cases should have a family counselling and support system.
Organ transplantation is an effective therapy for end-stage organ failure and is widely practiced around the world. According to the World Health Organization (WHO), kidney transplants are carried out in 91 countries. Around 66,000 kidney, 21,000 liver and 6,000 heart transplants were performed globally in 2005.1 In India the rate of organ donation is only 0.16 per million population, compared with America's 26 and Spain's 35.2 The shortage of organs is virtually a universal problem. Though the government has undertaken many efforts to motivate the public towards organ donation, the rate of organ donors has not kept pace with the growing waiting list,3,4,5 and inadequate organ donation in India remains a major limiting factor for transplantation. Several factors could facilitate or hinder organ donation by the general public, and identifying them could help in planning effective strategies to combat the problem. Hence the present study was conducted with the aim of exploring the general public's perceived barriers to, and facilitators of, organ donation.
Materials and methods
The present study was a cross-sectional, exploratory survey conducted among the general public of Puducherry U.T., India. 400 eligible subjects were included who fulfilled the following criteria: a) aged 18 and above, and b) able to understand either the local language Tamil or English. Subjects with intellectual, psychiatric or emotional disturbances that could affect the reliability of their responses were excluded. The population registry in the primary health centres of the selected community area was used as a sampling frame to select subjects randomly. The purpose of the study was explained to every eligible subject, who then signed a written consent form. Formal ethical clearance was obtained from the institute ethics committee before data collection.
Preparation of the questionnaire
An extensive literature review was carried out to understand the possible barriers and facilitators reported in the past. Barriers and facilitating factors reported in the literature were used in constructing the questionnaire, including cultural and religious items specific to Indians. Subjects' intention to donate organs was assessed using a single dichotomous question (yes or no). To assess barriers and facilitators related to organ donation, a questionnaire with a total of 18 items (9 each) was prepared in the form of closed-ended (yes or no) questions. Alongside these, an open-ended question (“any other?”) was included to capture responses beyond the framed items. As knowledge is an important factor that could serve as both a barrier to and a facilitator of organ donation, 8 knowledge items were also included in the questionnaire. Knowledge items were scored by assigning 1 point for each correct response, with a maximum possible score of 8. For ease of understanding, the knowledge component was also interpreted by category: below 50% of the total score, inadequate knowledge; 51-75%, moderately adequate knowledge; above 75%, adequate knowledge. The draft tool was validated for content by 10 experts from the fields of surgery, medicine, nursing, anthropology and psychology. After appropriate modification, the content validity index of the tool was calculated and found to be high (0.98). The reliability of the tool was estimated by a test-retest method among 10 subjects, with an interval of 2 weeks between the first and second administrations of the questionnaire; it was found to be highly reliable, with a reliability coefficient of 0.91. A face-to-face interview method was used to collect data from each subject. Collected data were analyzed using SPSS for Windows version 14 (SPSS Inc., Chicago, IL, USA) with appropriate descriptive and inferential statistics. A probability value of <0.05 was set as the level of significance.
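The knowledge scoring and categorization described above is a simple rule over 8 binary items. A minimal illustrative sketch in Python (names are ours; a score of exactly 50% is treated as inadequate, consistent with the "below 50%" wording):

def knowledge_category(num_correct, max_score=8):
    # Score one point per correct item, then categorize per the study's cut-offs:
    # <=50% inadequate, 51-75% moderately adequate, >75% adequate.
    pct = 100 * num_correct / max_score
    if pct <= 50:
        return "inadequate"
    if pct <= 75:
        return "moderately adequate"
    return "adequate"

# Example: 5 of 8 correct responses (62.5%) -> moderately adequate knowledge.
assert knowledge_category(5) == "moderately adequate"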
Results
Basic Demographic details
Of the 400 subjects enrolled, the majority were male (56%), aged 31-40 years (48%) and followers of Hinduism (68%) at the time of interview. Most of the subjects were literate (70%), with education up to high school, and resided in a rural area (53%).
Knowledge regarding organ donation
The mean knowledge score of the subjects regarding organ donation was 4.74 ± 1.45, with scores ranging from a minimum of 1 to a maximum of 8. Most subjects responded correctly to questions on organ matching (85.3%) and the consent procedure (84.7%); details of the different aspects of knowledge regarding organ donation can be found in Table 1. When asked about their source of information regarding organ donation, 51.3% of the subjects reported that they gained knowledge through television, 23% from health personnel, 12% from friends and 7% through books and the internet (Figure 1). On categorizing knowledge scores, 38.6% of the subjects had inadequate knowledge, 50.6% had moderately adequate knowledge and only 10.6% had adequate knowledge regarding organ donation.
Intention to donate organs: Barriers and Facilitators
Of the 400 subjects interviewed, 69.75% reported that they wished to donate their organs, whereas the remaining 30.25% reported that they would not donate their organs either during their life or after their death. The factors acting as barriers and facilitators were then analyzed using the pretested questionnaire. The most common barriers perceived by the subjects were 'family opposition' (82.8%), 'complicated organ donation procedure' (69%), 'fear that donation affects their future' (58.6%) and 'misuse of organs' (55.2%); further details are given in Table 2. The most important facilitating factors reported by the subjects were the 'thought of saving someone's life' (95.9%), a 'feeling of improved sense of humanity' (95%), 'to save the life of a close relative' and the 'thought that their organs live on after their death' (both 92.6%), and 'being a role model for others' (77.7%); further details are given in Table 3.
When the subjects' intention to donate organs was tested for association with demographic variables such as age, gender, residence, education, religion, marital status, type of family and knowledge, only educational level showed a significant association. Specifically, graduates were more likely than others to report an intention to donate their organs (p<0.001).
Figure 1: Distribution of source of information regarding organ donation among the subjects.
Table 1: Item-wise distribution of different aspects of knowledge regarding organ donation (n=400)
1. Definition of organ donation: 24% correct, 76% incorrect
2. Commonly donated organs: 71.3% correct, 28.7% incorrect
3. Consent procedure for living donors: 76% correct, 24% incorrect
4. Consent procedure after death: 84.7% correct, 15.3% incorrect
5. Consent for mentally retarded persons: 41.3% correct, 58.7% incorrect
6. Consent for unclaimed dead bodies: 35.3% correct, 64.7% incorrect
7. Organ matching procedure: 85.3% correct, 14.7% incorrect
8. Legal considerations for organ donation: 56% correct, 44% incorrect
Table 2: Perceived barriers towards organ donation (n=121)
1. Opposition from the family: 82.8%
2. Fear: 72.4%
3. Procedures are complicated: 69%
4. Affects physical appearance: 65.5%
5. Affects the future: 58.6%
6. Creates psychological problems: 58.6%
7. Organs could be misused: 55.2%
8. Against religious belief: 48.3%
9. Insults human rights and dignity: 48.3%
Table 3: Perceived facilitators towards organ donation (n=279)
1. Saving someone's life: 95.9%
2. Improving the sense of humanity: 95%
3. Saving the life of a close relative: 92.6%
4. Wish for organs to live on after death: 92.6%
5. Becoming a role model: 77.7%
6. Empathy for others: 53.7%
7. Rewarding experience: 51.2%
8. Family pressure: 29.8%
9. Economic benefit: 27.3%
Discussion
The current study was conducted with the aim of exploring the general public's intention towards organ donation and identifying the perceived barriers and facilitators. It revealed that 69.7% of the subjects intend to donate their organs either during their life or after their death, similar to the findings of Chung et al6 and Shahbazian H et al.7 Like previous studies,8 the current study also confirmed a positive association between the public's intention to donate organs and educational status. Though many past studies have reported public attitudes towards organ donation,9,10 the present study was the first of its kind to analyze specifically the barriers and facilitators of organ donation among the general public, which adds to its strength. The most common barrier reported was opposition from the family to donating organs, a finding similar to a previous study.6 Illegal organ trade and misuse of organs is a major reason for the low organ donation rate among the Indian public,11 and this was reflected in the current study, where 55.2% of subjects reported misuse of organs as a barrier to donation. The most important facilitating factors reported were the 'thought of saving someone's life' (95.9%), a 'feeling of improved sense of humanity' (95%) and 'to save the life of a close relative' (92.6%), similar to the findings of Neelam et al in India.12 The majority of the respondents in this study reported a "lack of information" about organ donation and transplantation. These findings are comparable with those of previous studies, which all indicate the importance of public education about organ donation.13,14,15,16 Our study identified television (TV) as the respondents' principal source of information about organ donation; the contribution of other sources was minimal. Studies have generally shown the importance of visual media in increasing public awareness of organ donation.17,18
Conclusion
Better knowledge may ultimately translate into the act of donation. Effective measures should be taken to educate people, providing relevant information with the involvement of the media, doctors and religious scholars.
Population-based studies indicate that diabetes remains a nationwide epidemic that continues to grow tremendously, affecting 25.8 million people or 8.3% of the US population.1 This number is expected to reach 68 million, or 25% of the population, by 2030,2 as the incidence of obesity rises.3
The American Diabetes Association (ADA) recognizes diabetes education (DE) as an essential part of comprehensive care for patients with diabetes mellitus and recommends assessing self-management skills and knowledge at least annually in addition to participation in DE.4 With the objective of improving the quality of life and reducing the disease burden, the ADA and the U.S. Department of Health and Human Services through its Healthy People 2020 program have emphasized three key components for effective disease management planning: regular medical care, self-management education and ongoing diabetes support.5,6
The hallmark of preventing the chronic complications of diabetes lies in optimizing metabolic parameters such as glycaemic control, blood pressure, weight and lipid profile. Pharmacologic intervention can only do so much in achieving treatment goals; it should be complemented with appropriate DE emphasizing dietary control, physical activity and strict medication adherence.7,8 Adequate glycaemic control is clinically important because each 1% reduction in mean HbA1c is associated with a 21% reduction in the risk of diabetes-related death, a 14% reduction in heart attacks and a 37% reduction in microvascular complications.9
Diabetes self-management (DSM) education programs are a valuable strategy for improving health behaviours, which have a significant impact on metabolic parameters.10 This is supported by the chronic care model, which is based on the notion that improving the health of patients with chronic diseases depends on a number of factors, including patients' knowledge about their disease and daily practice of self-management techniques and healthy behaviors.11,12,13
A systematic review by Norris et al. has shown that DSM training confers a positive effect on patients' knowledge about diabetes, blood glucose monitoring, and the importance of dietary practices and glycaemic control.14 In another, retrospective observational study, evidence suggested that participation in multifactorial diabetes health education significantly improved glycaemic and lipid levels in the short term.10
A diabetes education/support group provides comprehensive patient education, fosters a sense of community, and engages patients to become an active part of the team managing their diabetes. The diabetes support group at Queens Hospital Centre serves a diverse population from different socioeconomic backgrounds and is offered to any patient with diabetes. It is facilitated by certified diabetes nurse educators in the hospital and in the clinic. Patients meet for one session a month and receive education in diabetes self-management, medication, diet, lifestyle modification, regular exercise and weight management, with translation into their respective languages if needed.
Few studies have compared the efficacy of DE with the combination of diabetes education and a peer support group (DE+PS) in improving the metabolic parameters of patients with DM. The primary objective of this study was to assess the clinical impact of DE and of combined DE+PS on metabolic parameters in patients with DM: lowering HbA1c, reducing weight or BMI, controlling blood pressure and improving lipid profile.
Methods
The study subjects were identified through a retrospective review of the electronic medical records of adults older than 18 years with diabetes treated at the Diabetes Centre and/or Primary Care Clinic of Queens Hospital Centre, Jamaica, New York, from January 01, 2007 to June 01, 2011. A total of 188 study subjects were selected and assigned to three groups: (1) a control group (n=62), who received primary care only; (2) a DE group (n=63), who received diabetes teaching from a DM nurse educator in addition to primary care; and (3) a DE+PS group (n=63), who received diabetes education and attended at least 2 sessions of the peer support group in addition to primary care. The subjects in the control, education and education plus peer support groups were matched on age, sex, weight and BMI. Owing to data availability, the duration of follow-up varied by group: the control group was followed for 8 months, the DE group for 13 months and the DE+PS group for 19 months. The changes from mean baseline to the third month, sixth month and final follow-up were calculated for the following metabolic parameters: HbA1c, weight, BMI, SBP, TC, HDL-C, LDL-C and TG-C. T-tests were used to compare the mean changes in the metabolic parameters in each group from baseline to the follow-up period. All data management and statistical analyses were conducted with MiniTab version 14. A p-value of less than 0.05 was considered statistically significant.
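The within-group comparison described above asks whether the mean change in a parameter differs from zero over follow-up. A hedged sketch of one such test in Python (the array is a hypothetical stand-in for per-patient change scores; the authors used MiniTab, and the exact form of their t-test is not specified):

import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical per-patient changes in HbA1c (final follow-up minus baseline)
# for one group; a negative mean indicates improvement.
changes = np.array([-0.9, -0.6, -1.1, -0.4, -0.8, -1.0, -0.5, -0.7])

t_stat, p_value = ttest_1samp(changes, popmean=0.0)
print(f"mean change = {changes.mean():+.2f}%, p = {p_value:.3f}")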
Results
Among the 188 study subjects, aged 20 to 88 years with a mean age of 60, the predominant gender was female (n=132, 70%). African Americans made up the majority (n=74, 39%), followed by Asians (n=40, 21%), Caucasians (n=34, 18%), Hispanics (n=22, 12%) and Indians (n=18, 10%). The majority of our patients with DM had concurrent hypertension (91%), hyperlipidemia (90%) and obesity (47%). See Table 1 for baseline demographics.
Table 1. Baseline demographic characteristics of the study population
Baseline characteristics | Control [C] N=62 | Diabetes Education [DE] N=63 | Diabetes Education + Peer Support [DE+PS] N=63
Age range (years) [median] | 32-76 [61] | 20-88 [58] | 26-86 [62]
Sex, male [N (%)] | 22 (35) | 20 (32) | 14 (22)
Race [N (%)]
  African American | 31 (50) | 26 (41) | 17 (27)
  White | 11 (17) | 23 (37) | 0 (0)
  Indian | 18 (29) | 0 (0) | 0 (0)
  Asian | 1 (2) | 10 (16) | 29 (46)
  Hispanic | 1 (2) | 4 (6) | 17 (27)
Comorbidities [N (%)]
  Hypertension# | 54 (87) | 59 (94) | 58 (92)
  Hyperlipidemia¥ | 56 (90) | 61 (97) | 53 (84)
  Obesity* | 29 (47) | 29 (46) | 31 (49)
  Active cigarette smoker | 6 (10) | 5 (8) | 1 (2)
# Hypertension is defined as mean systolic blood pressure > 140 mmHg and/or diastolic > 90 mmHg measured on two separate occasions; these patients had hypertension diagnosed either before or after the diagnosis of DM. ¥ Hyperlipidemia is defined as LDL > 100 mg/dl in patients with diabetes; the diagnosis could precede or follow the diagnosis of DM. * Obesity is defined as a body mass index (BMI) of 30 kg/m2 or greater.
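For illustration only, the footnote criteria translate into simple threshold checks; the helpers below are hypothetical (our own function and parameter names), applying the stated cut-offs:

```python
# Hypothetical helpers applying the comorbidity definitions in the footnote above.
def has_hypertension(mean_sbp_mmhg: float, mean_dbp_mmhg: float) -> bool:
    # Mean SBP > 140 mmHg and/or DBP > 90 mmHg, measured on two separate occasions
    return mean_sbp_mmhg > 140 or mean_dbp_mmhg > 90

def has_hyperlipidemia(ldl_mg_dl: float) -> bool:
    # LDL > 100 mg/dl in a patient with diabetes
    return ldl_mg_dl > 100

def is_obese(weight_kg: float, height_m: float) -> bool:
    # BMI of 30 kg/m2 or greater
    return weight_kg / height_m ** 2 >= 30
```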
The group analysis showed that the DE group had a statistically significant decrease in mean HbA1c (mean change: -0.78%, p=0.013), TC (mean change: -16.89 mg/dL, p=0.01) and LDL-C (mean change: -11.75 mg/dL, p=0.04) from baseline to final follow-up (see Table 2). The DE group had a non-significant mean weight gain of 2.17 pounds and BMI increase of 0.52 kg/m2.
* Final follow-up varied between the three groups: 8 months for the control (C) group, 13 months for the education (DE) group and 19 months for the education plus peer support (DE+PS) group.
Although the DE+PS group showed decreases in mean HbA1c (-0.48%), weight (-0.38 pounds), SBP (-3.24 mmHg), TC (-4.43 mg/dL) and TG-C (-12.89 mg/dL), and an increase in HDL-C (+0.95 mg/dL), these changes were not statistically significant from initial to final follow-up. Improvements in HbA1c and SBP from baseline to final follow-up were greater in the DE+PS group than in the control group. Only the control and DE+PS groups showed a decrease in weight from initial to final follow-up.
Between the two intervention arms, the DE group exhibited greater reductions than the DE+PS group in mean HbA1c (-0.78% vs. -0.48%), SBP (-3.78 vs. -3.24 mmHg), TC (-16.89 vs. -4.43 mg/dL), LDL-C (-11.75 vs. +0.08 mg/dL) and TG-C (-14.75 vs. -12.89 mg/dL).
Discussion
Our results suggest that, among patients with DM, subjects who participated in DE exhibited significant reductions from baseline in HbA1c, TC and LDL-C compared with controls. Furthermore, the significant impact of DE alone on control of HbA1c and LDL-C appeared to persist over time. In addition, patients who received DE+PS demonstrated moderate improvements in HbA1c, SBP, TC, TG-C and HDL-C, although these were not statistically significant at final follow-up. It must be noted that baseline mean HbA1c was higher in both intervention groups (DE and DE+PS) than in the control group; this allows greater scope for reduction and may skew the findings. The DE group had a greater percentage reduction in HbA1c (9%) than the DE+PS group (5%) from baseline to the first follow-up. The average changes in HbA1c and LDL-C recorded in our study are similar to those of a previous study, which showed significantly greater improvement in mean glycaemic and LDL-C levels in patients who participated in DE.10
However, our findings are in stark contrast to a previous study showing that a DE+PS intervention led to substantially greater weight reduction and improvement in HbA1c at two months post-intervention compared with education-only and control groups.15 This difference may be accounted for by sample size and duration of follow-up: the DE+PS group in our study included twice as many patients as the previous study (63 vs. 32 patients) and had a longer follow-up (19 months vs. 4 months).15 These differences can influence the data trend.
In general, all groups had improvements in HbA1c, TC, TG-C and SBP (though not significant). Only the control and DE+PS groups had weight reduction; the DE group gained weight. Although the DE+PS group improved in most metabolic parameters, the changes were not statistically significant at any point during follow-up, in contrast to the DE group. This might be attributed to the retrospective nature of the study, possible non-adherence to medications, differences in follow-up duration between groups, and the limited sample size, which may have obscured a potentially significant effect. The statistically significant differences in baseline HbA1c among the three groups could also explain the differing magnitudes of change from baseline: the DE group had a higher baseline HbA1c than the control group (9.3 vs. 7.5%; p=0.00), allowing for a greater change from baseline. Similarly, baseline HbA1c in the DE+PS group was significantly higher than in the control group (8.3 vs. 7.5%, p=0.018).
A previous randomized controlled trial assessing the effect of peer support in patients with type 2 diabetes over a 2-year follow-up demonstrated no significant differences in HbA1c (-0.08%, 95% CI -0.35% to 0.18%), SBP (-3.9 mmHg, -8.9 to 1.1 mmHg) or TC (-0.03 mmol/l, -0.28 to 0.22 mmol/l).16 It has been suggested that the effect of DSM education on glycaemic control is greatest in the short term and progressively attenuates, implying that learned behaviour change fades with time.17,18 However, the present study showed a persistently significant beneficial effect on HbA1c and LDL-C, from the earliest follow-up to the final month, for patients receiving DE alone.
A previous meta-analysis of randomized trials of DSM education programs by Norris and colleagues (2002) demonstrated a beneficial effect of DE on glycaemic control, with an estimated reduction in HbA1c of 0.76% (95% CI: 0.34 to 1.18) compared with control immediately after the intervention.17 However, the findings of the present study on the effect of peer education contrast directly with the results of a randomized trial using the Project Dulce model of peer-led education, which showed significant improvements from baseline to the tenth month of follow-up in HbA1c (-1.5%, p=0.01), TC (-7.2 mg/dl, p=0.04), HDL-C (+1.6 mg/dl, p=0.01) and LDL-C (-8.1 mg/dl, p=0.02).19 This could be accounted for by the different baseline values of the metabolic parameters in the present study, biasing the magnitude of change.
It has been suggested that the most effective peer support model includes both peer support and a structured educational program, on the premise that people living with chronic illness can share their knowledge and experiences with one another.20 It has also been observed that participants in peer support groups were interested not in the topic of diabetes itself but in the effect and meaning of the disease on their lives.21
There are a number of limitations to consider when interpreting our results. Since the study is a retrospective review of medical records, data collection was limited by the availability of the required clinical data. Some parameters could not be obtained on a uniform time frame, resulting in varying mean follow-up durations for the three study groups (8 months for the control group, 13 months for DE and 19 months for DE+PS), and some variables were missing at earlier follow-ups. Our study also examined the effect of the intervention over a relatively short time; a longer-term study is necessary to determine whether the intervention has a lasting impact on improving metabolic parameters, quality of life, and morbidity and mortality from diabetes. The limited sample size may also affect the generalizability of the data. The differing baseline values of the metabolic parameters could have masked potentially significant improvements in the DE+PS group. Other confounding factors that were not analysed in the present study and could have affected the results include insulin regimens in the different groups, initiation of additional oral hypoglycemic agents, medication adherence and physician adjustment, and whether the patients were seen by endocrinologists.
The present study suggests that participation in DE may help optimize HbA1c, TC and LDL-C. The DE group improved in glycaemic control and other metabolic parameters, and the significant metabolic improvement gained from DE appeared to be sustained over time. Participation in combined DE+PS showed relative improvement that did not reach significance, likely owing to confounding from differing baseline metabolic parameters and follow-up durations. Our findings underscore the importance of DE as part of the treatment plan for patients with DM; the addition of a peer support group may or may not contribute significant further improvement in metabolic parameters.
Anterior cruciate ligament (ACL) reconstructions are increasingly being undertaken throughout the United Kingdom (UK). Advances in general and local anaesthesia, as well as in surgical technique, allow reconstruction as a day case procedure.1,2,3 There are currently no studies documenting post-operative pain after ACL reconstruction, nor comparing day case and inpatient pain scores.
We prospectively document patients’ post-operative pain after ACL reconstruction. We aim to identify and assess factors that affect pain after ACL reconstruction, including additional procedures, type of nerve block, and whether the procedure was performed as a day case or with an inpatient stay.
We propose that patients having ACL reconstruction as a day case have no difference in pain scores compared with those having the procedure with an inpatient stay. We also propose that additional procedures do not increase pain, and we hypothesise that patients having a femoral nerve block have no increase in pain compared with patients having a combined femoral and sciatic nerve block.
Method
All patients having ACL reconstruction between April 2010 and September 2010 were evaluated prospectively. Four-strand arthroscopic hamstring reconstructions were performed by two specialist knee surgeons using a similar technique. Anaesthesia was administered by various anaesthetists as a general anaesthetic with a regional nerve block using bupivacaine (femoral, or femoral plus sciatic), performed in the anaesthetic room under ultrasound guidance. Intra-operatively, patients received standardised anaesthesia: all received one dose of intravenous paracetamol and two intravenous doses of opiates (morphine or fentanyl, as tolerated) at the beginning and end of the procedure.
The inclusion criterion was any ACL reconstruction performed on a patient over 16 years of age. No exclusion criteria were used.
Arthroscopic hamstring reconstruction was undertaken using a four-strand semitendinosus and gracilis graft. During the arthroscopy, any additional procedures necessary were performed (e.g. meniscal repair, meniscectomy). No drains were used. The knee was placed into an immobilisation splint until the nerve blocks had worn off.
Patients were discharged once they were back on the ward and deemed safe for discharge by the physiotherapists and medical and nursing staff. They were discharged with paracetamol, a non-steroidal anti-inflammatory (if tolerated) and a mild opiate (tramadol or codeine phosphate).
After discharge from the ward, patients were brought back to an aftercare clinic with a senior physiotherapist, at any time up to 48 hours post-operatively, to assess whether the nerve block had worn off, to perform a wound check and to reinforce physiotherapy advice.
Patients were given a discharge questionnaire asking them to record their pain scores daily, when the pain was at its worst, using the Numerical Rating Scale (NRS) from 0 to 10. Documentation commenced on the day of the procedure and continued daily for one week. Complications were also documented by the patient. The questionnaires were handed in at the two-week follow-up appointment, at which point patients were also asked whether they would have the surgery performed as a day case again.
Pain scores were first examined with box-and-whisker plots and a Shapiro-Wilk W test, which showed a non-normal (non-parametric) data spread. Scores were therefore analysed using a Mann-Whitney U test, with the significance level set at p=0.05.
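A minimal sketch of this analysis pipeline (ours, not the authors’ software; scores invented) using SciPy:

```python
# Shapiro-Wilk normality check, then Mann-Whitney U comparing day case vs inpatient.
# Illustrative NRS scores only, not study data.
from scipy.stats import shapiro, mannwhitneyu

day_case = [4, 5, 3, 2, 6, 4, 3]
inpatient = [6, 5, 7, 4, 6, 5, 8]

for label, scores in [("day case", day_case), ("inpatient", inpatient)]:
    w, p = shapiro(scores)
    print(f"{label}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")  # low p suggests non-normality

u, p = mannwhitneyu(day_case, inpatient, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.3f}, significant at 0.05: {p < 0.05}")
```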
Results
ACL reconstruction was attempted in 50 patients between April 2010 and September 2010. The average age was 31.0 years (range 16-55), and the cohort comprised 36 males and 14 females. All patients had a general anaesthetic with infiltration of the graft site and medial wound with bupivacaine; 42 had a femoral nerve block and eight had a combined femoral and sciatic nerve block, both with bupivacaine. Of the 50 patients, 13 had additional procedures performed.
Twenty-nine patients were discharged as a day case; 21 required an inpatient stay for the reasons documented in Figure 1 below.
Figure 1. Reasons for inpatient stay after ACL reconstruction
Social reasons for inpatient stay included out-of-area patients and those with no home support on the day of surgery. Patients unable to mobilise safely post-operatively were classified as having failed the physiotherapist discharge assessment. Two patients could not be discharged due to excessive pain, and two had symptoms related to general anaesthesia (e.g. nausea and dizziness) which prevented discharge. Seven patients arrived back on the ward with insufficient time for recovery and physiotherapy assessment, preventing day case discharge; in all seven cases this was because the ACL reconstruction was performed late on the operating list.
On day one post-operatively, the average NRS pain score was 4.1 for the day case group and 5.52 for the inpatient group. Pain scores decreased steadily over the week. Scores on days one to four were statistically significantly lower (p<0.05) in day case patients than in inpatients (Table 1 and Figure 2). Figures 3 and 4 show the box-whisker plots for inpatient versus day case pain scores on days 1 and 2.
Post-operative day | Day case pain score (N1=29) | Inpatient pain score (N2=21) | P value (*=significant)
1 | 4.1 | 5.52 | 0.03*
2 | 3.93 | 5.14 | 0.04*
3 | 3.62 | 4.81 | 0.03*
4 | 3.1 | 4.38 | 0.03*
5 | 3.1 | 4.29 | 0.09
6 | 2.69 | 4.00 | 0.03*
7 | 2.52 | 3.62 | 0.06
Table 1. Average NRS pain scores of patients undergoing ACL reconstruction
Figure 2. Comparison of NRS pain scores of day case and inpatient ACL reconstruction
Figure 3. Box Whisker plots for pain scores on day 1 for inpatients and day case patients
Figure 4. Box Whisker plots for pain scores on day 2 for inpatients and day case patients
Of the 50 patients, 42 had femoral nerve blocks and the remaining eight had a combined femoral and sciatic nerve block. On average, patients receiving only a femoral nerve block had lower pain scores than those receiving the combined block, although, given the disparity in group sizes, the difference was not statistically significant (p=0.09 on day 1 and p=0.5 on day 2) [Figure 5].
Figure 5. Comparison of daily pain scores with femoral and femoral/sciatic nerve blocks
Among the 50 patients, 17 additional procedures were performed in 13 patients: eight partial medial meniscectomies, three partial lateral meniscectomies, two lateral meniscal repairs, two medial meniscal repairs and two medial femoral condyle microfractures. No correlation was identified between additional procedures and increased pain scores (Figure 6).
Figure 6. Comparison of pain scores of patients having additional procedures
When the 29 patients who had day case ACL reconstruction were asked about satisfaction, 100% were happy with the day case procedure. One patient, although otherwise happy, felt they would opt for an inpatient stay in future as they felt “groggy” overnight.
All patients had return of quadriceps function by day 1 post-operatively, and there were no re-admissions due to pain or inability to cope at home. There were no infections in either group.
Discussion
Day case ACL reconstructions are commonly undertaken in the UK. Literature from Sheffield, Glasgow and Romford1,2,3 shows that admission and complication rates are low and that the procedure is safe, effective and well tolerated by patients.
Day case surgery is encouraged by the government-led Department of Health.3,4,5 It reduces the risk of cancellations and infections and can also have economic benefits for the National Health Service (NHS). In the United States (US), Bonsell showed that a single day case ACL reconstruction saves the hospital $2,234 compared with a procedure involving an inpatient stay, and that day case ACL reconstructions are performed significantly quicker than inpatient reconstructions (by approximately 23 minutes), which could save the hospital $85,000 per year.6
Day case patients were found to have statistically significantly lower pain scores than inpatients. Farrar et al have shown that, on the NRS pain scoring system, only a difference of more than two points can be deemed clinically significant.7 By this criterion, our results show no clinically important difference, and certainly no worse pain, when the procedure is performed as a day case. Krywulak et al noted that the average Visual Analogue Score (VAS) for patient satisfaction after day case ACL reconstruction was 85.1, compared with 78.2 for inpatients.8 This accords with our study, in which 100% of patients were happy with the day case procedure.
Patients were encouraged to take analgesia regularly for two weeks post-operatively, but the amount of medication actually taken was not formally documented, which could introduce bias. The significance of this bias is difficult to determine accurately, although, as NRS pain scores were recorded when pain was at its worst, which would most likely occur between analgesic doses, some of the bias should be mitigated.
Little is known about pain after day case ACL reconstruction, or how it compares with pain after the procedure with an inpatient stay. We have been able to compare pain scores of patients undergoing ACL reconstruction as a day case with those of patients undergoing the procedure as inpatients, and found that day case patients had significantly lower pain scores on days 1-4 post-reconstruction.
Day case ACL reconstruction is safe and is not associated with worse pain than an inpatient stay. This is important for pre-operative guidance and, in view of the risks of hospital inpatient stays and the additional costs to the Health Service and Primary Care Trust (PCT), ACL reconstruction as a day case should be highly recommended to patients over an inpatient procedure.9-11 Patients can be advised that pain will not be worse when the procedure is performed as a day case, which should encourage more patients to accept same-day discharge.
Further work is needed to assess the possible difference in pain scores between femoral nerve blocks and combined femoral and sciatic nerve blocks, but our results suggest that a significant difference is unlikely.
Patient satisfaction with the day case ACL procedure was excellent, and day case ACL reconstruction is now routinely performed in this Trust.
Colorectal cancer (CRC) is the third most common cancer in men and women worldwide (1) and a leading cause of cancer-related deaths (2). 5-Fluorouracil (5FU), synthesized in 1957 by Heidelberger (3), is the mainstay of all current standard regimens for CRC (4). Chemotherapy-induced hepatic toxicity in 5FU-based regimens can be an acute or delayed outcome (5, 6), and steatosis is a hallmark of 5FU-induced hepatic toxicity (7). Chemotherapy-induced nephrotoxicity (8) is also an area of concern for oncologists, and the antimetabolite 5FU is often linked with kidney damage (9). The therapeutic outcomes and toxicity of 5FU differ markedly with dose, combination, schedule and route of administration. Leucovorin (LV) incorporated into 5FU-based regimens enhances the cytotoxicity of 5FU. In this study we report abnormalities in hepatic enzymes and renal biomarkers, assessed biochemically in serum after alternate cycles of treatment, in CRC patients receiving 5FU/LV-based chemotherapy.
Methods:
The study was designed in the Department of Pharmacology, University of Karachi, and conducted in a leading cancer hospital in Pakistan. Following institutional authorisation, informed consent was obtained from patients admitted during 2008-2011. The inclusion criteria were as follows:
1. Histologically confirmed advanced colorectal carcinoma
2. Adequate blood count before therapy
3. Age 20-80 years
4. ECOG score < 3
5. Serum bilirubin < 5× normal
6. Serum creatinine < 135 µmol/litre
7. Serum transaminases < 2.5× normal
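Purely as an illustration, the thresholds above can be expressed as a simple screening check; the function below is hypothetical, with our own field names:

```python
# Hypothetical eligibility screen applying the stated inclusion criteria.
def is_eligible(histology_confirmed: bool, adequate_blood_count: bool, age: int,
                ecog: int, bilirubin_x_normal: float, creatinine_umol_l: float,
                transaminases_x_normal: float) -> bool:
    return (histology_confirmed
            and adequate_blood_count
            and 20 <= age <= 80
            and ecog < 3
            and bilirubin_x_normal < 5         # bilirubin < 5x normal
            and creatinine_umol_l < 135        # creatinine < 135 umol/litre
            and transaminases_x_normal < 2.5)  # transaminases < 2.5x normal
```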
Twenty-three patients (median age 59 years) who had undergone surgery were included in the study. All patients had measurable disease on CT scan, ultrasonography or clinical examination. Patient characteristics are shown in Table 1. Seventeen patients were treated with the adjuvant bimonthly 5FU/LV regimen with high-dose folinic acid (de Gramont regimen), and six patients were treated with the adjuvant monthly 5FU/LV regimen with low-dose folinic acid (Mayo Clinic regimen), as follows.
5-Fluorouracil/Leucovorin (de Gramont regimen)
5-Fluorouracil: 400 mg/m2 IV bolus followed by 600 mg/m2 continuous IV infusion over 22 hours on days 1-2. Leucovorin: 600 mg/m2 IV as a 2-hour infusion before 5FU on days 1-2. Cycle repeated after 2 weeks.
5-Fluorouracil/Leucovorin (Mayo Clinic regimen)
5-Fluorouracil: 425 mg/m2 IV on days 1-5. Leucovorin: 20 mg/m2 IV before 5FU on days 1-5. Cycle repeated after 4-5 weeks.
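Both regimens dose by body surface area (BSA). As a worked example (ours; the paper does not state which BSA formula was used, so the Mosteller formula below is an assumption), the absolute doses for an illustrative patient:

```python
# Converting the per-m2 doses above into absolute doses for one illustrative patient.
# BSA via the Mosteller formula; this choice of formula is our assumption.
from math import sqrt

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    return sqrt(height_cm * weight_kg / 3600.0)

bsa = bsa_mosteller(height_cm=170, weight_kg=70)  # about 1.82 m2
print(f"BSA = {bsa:.2f} m2")
print(f"de Gramont 5FU: bolus {400 * bsa:.0f} mg, then {600 * bsa:.0f} mg over 22 h")
print(f"Mayo Clinic 5FU: {425 * bsa:.0f} mg/day, LV {20 * bsa:.0f} mg/day, days 1-5")
```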
Premedication with oral phenothiazines, a 5-HT3 receptor antagonist and 10-20 mg of dexamethasone was given.
Blood samples were collected before initiation of therapy and after each alternate cycle of treatment. Blood was drawn from the antecubital vein under minimal tourniquet pressure, with the patient rested and comfortable, and collected into Vacutainers (BD). The biochemical profiles before and during treatment were compared. SGOT, SGPT, bilirubin and alkaline phosphatase levels were measured after each cycle of treatment, or on the clinical presentation of any hepatic adverse effect noted by the physician or oncologist, and compared with pretreatment values. Serum creatinine and BUN were measured before the start of chemotherapy and after each alternate cycle of treatment, up to six times per patient.
Table 1. Patient characteristics

Parameters | Arm A: de Gramont, No. of patients (%) | Arm B: Mayo Clinic, No. of patients (%)
Demographic characteristics
  Male | 12 (70.58) | 4 (66.6)
  Female | 5 (29.41) | 2 (33.3)
  Total patients | 17 | 6
Age (years)
  Median | 59
  Range | 56-65
ECOG performance status (21)
  0 | 1 (5.88) | 1 (16.6)
  1 | 3 (17.64) | 1 (16.6)
  2 | 13 (76.47) | 4 (66.6)
  3 | 0 (0) | 0 (0)
Primary site
  Colon | 11 (64.7) | 3 (50)
  Rectum | 5 (29.4) | 2 (33.3)
  Multiple | 1 (5.88) | 1 (16.6)
Metastases
  Synchronous | 11 (64.7) | 4 (66.6)
  Metachronous | 6 (35.2) | 2 (33.3)
Metastatic site
  Liver | 8 (47.0) | 1 (16.6)
  Lymph nodes | 4 (23.5) | 2 (33.3)
  Other* | 5 (29.4) | 3 (50)
No. of sites
  1 | 7 (41.1) | 2 (33.3)
  > 2 | 10 (58.8) | 4 (66.6)
CEA
  < 10 ng/ml | 2 (11.7) | 1 (16.6)
  > 10 ng/ml | 8 (47.0) | 1 (16.6)
  Unknown | 7 (41.1) | 4 (66.6)
* = Peritoneal/ovary
Results:
Table 2 shows that SGOT levels rose after each cycle of treatment, and the difference between pretreatment SGOT levels and those after each subsequent cycle was significant in patients treated with 5FU/LV (p<0.05). The difference in SGPT levels from pretreatment values was not significant (p>0.05) except after the tenth cycle. The difference in bilirubin levels from pretreatment values became significant from the sixth cycle of chemotherapy onwards (p<0.05). The differences in alkaline phosphatase levels before and after chemotherapy were not significant (p>0.05), nor were the differences in triglyceride levels.
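The comparisons in Tables 2 and 3 are paired tests of pretreatment values against each treatment cycle. A minimal sketch of one such comparison (ours; invented values):

```python
# Paired comparison of pretreatment vs post-cycle SGOT, as reported per cycle
# in Table 2. Values are invented for illustration.
from scipy.stats import ttest_rel

pretreatment_sgot = [28, 31, 25]   # hypothetical pretreatment SGOT (IU/L)
after_cycle_2 = [40, 44, 38]       # hypothetical SGOT after cycle 2 (IU/L)

t, p = ttest_rel(pretreatment_sgot, after_cycle_2)
print(f"t = {t:.3f}, p = {p:.3f}")  # negative t: levels rose after treatment
```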
Table 2. Comparative changes in hepatic biomarkers in patients treated with 5FU/LV regimen (paired samples test)

Biomarker | Pair | Mean difference | Std. deviation | t | p-value
TGS | Control - Cycle 2 | -1.200 | 1.643 | -1.633 | 0.178
TGS | Control - Cycle 4 | -3.200 | 3.033 | -2.359 | 0.078
TGS | Control - Cycle 6 | -3.400 | 2.966 | -2.563 | 0.062
TGS | Control - Cycle 8 | -10.000 | 10.198 | -2.193 | 0.093
TGS | Control - Cycle 10 | -3.600 | 8.414 | -0.957 | 0.393
TGS | Control - Cycle 12 | -8.800 | 12.872 | -1.529 | 0.201
SGOT/AST | Control - Cycle 2 | -12.667 | 5.033 | -4.359 | 0.049
SGOT/AST | Control - Cycle 4 | -22.000 | 3.464 | -11.000 | 0.008
SGOT/AST | Control - Cycle 6 | -22.667 | 3.055 | -12.851 | 0.006
SGOT/AST | Control - Cycle 8 | -25.333 | 3.055 | -14.363 | 0.005
SGOT/AST | Control - Cycle 10 | -27.000 | 7.810 | -5.988 | 0.027
SGOT/AST | Control - Cycle 12 | -28.667 | 7.024 | -7.069 | 0.019
SGPT/ALT | Control - Cycle 2 | -2.667 | 3.055 | -1.512 | 0.270
SGPT/ALT | Control - Cycle 4 | -3.667 | 2.082 | -3.051 | 0.093
SGPT/ALT | Control - Cycle 6 | -9.333 | 8.505 | -1.901 | 0.198
SGPT/ALT | Control - Cycle 8 | -12.667 | 8.083 | -2.714 | 0.113
SGPT/ALT | Control - Cycle 10 | -17.667 | 5.859 | -5.222 | 0.035
SGPT/ALT | Control - Cycle 12 | -22.667 | 10.214 | -3.844 | 0.062
Bilirubin | Control - Cycle 2 | 0.033 | 0.058 | 1.000 | 0.423
Bilirubin | Control - Cycle 4 | 0.000 | 0.100 | 0.000 | 1.000
Bilirubin | Control - Cycle 6 | -0.267 | 0.058 | -8.000 | 0.015
Bilirubin | Control - Cycle 8 | -0.267 | 0.058 | -8.000 | 0.015
Bilirubin | Control - Cycle 10 | -0.267 | 0.058 | -8.000 | 0.015
Bilirubin | Control - Cycle 12 | -0.367 | 0.115 | -5.500 | 0.032
Alkaline phosphatase | Control - Cycle 2 | -6.667 | 5.774 | -2.000 | 0.184
Alkaline phosphatase | Control - Cycle 4 | -10.000 | 10.000 | -1.732 | 0.225
Alkaline phosphatase | Control - Cycle 6 | -26.667 | 11.547 | -4.000 | 0.057
Alkaline phosphatase | Control - Cycle 8 | -43.333 | 40.415 | -1.857 | 0.204
Alkaline phosphatase | Control - Cycle 10 | -60.000 | 36.056 | -2.882 | 0.102
Alkaline phosphatase | Control - Cycle 12 | -63.333 | 40.415 | -2.714 | 0.113
Table 3 shows that creatinine levels rose following each subsequent cycle of treatment with the 5FU/LV regimens. The differences in serum creatinine from pretreatment levels were significant after the fourth and tenth cycles of treatment (p<0.05). The differences in BUN levels measured before and after chemotherapy with 5FU/LV were not significant following alternate cycles of treatment.
Table 3. Comparative changes in renal biomarkers in patients treated with 5FU/LV regimen (paired samples test)

Biomarker | Pair | Mean difference | Std. deviation | t | p-value
Creatinine | Control - Cycle 2 | -0.120 | 0.130 | -2.058 | 0.109
Creatinine | Control - Cycle 4 | -0.160 | 0.114 | -3.138 | 0.035
Creatinine | Control - Cycle 6 | -0.242 | 0.204 | -2.646 | 0.057
Creatinine | Control - Cycle 8 | -0.264 | 0.225 | -2.627 | 0.058
Creatinine | Control - Cycle 10 | -0.546 | 0.422 | -2.893 | 0.044
Creatinine | Control - Cycle 12 | -0.566 | 0.463 | -2.734 | 0.052
BUN | Control - Cycle 2 | -1.800 | 1.924 | -2.092 | 0.105
BUN | Control - Cycle 4 | -1.800 | 1.924 | -2.092 | 0.105
BUN | Control - Cycle 6 | -2.000 | 2.449 | -1.826 | 0.142
BUN | Control - Cycle 8 | -3.000 | 2.550 | -2.631 | 0.058
BUN | Control - Cycle 10 | -4.400 | 4.037 | -2.437 | 0.071
BUN | Control - Cycle 12 | -6.400 | 8.204 | -1.744 | 0.156
Discussion:
The hepatocellular enzyme findings are indicative of deteriorating liver function. The levels of SGOT and SGPT both differed from control values and point toward 5FU-induced hepatic toxicity. Increases in SGOT and SGPT up to grade 2 (NCI Common Toxicity Criteria) were reported by Hotta and colleagues (10) in a clinicopathological assessment of 36 patients treated with 5FU/LV; they did not report grade 3 or grade 4 elevations of SGOT or SGPT. In our data there is a considerable difference in mean SGOT levels after the second cycle of treatment compared with mean pretreatment levels. Similarly, SGPT levels were perturbed following treatment, and the difference from the pretreatment control value was statistically significant after the tenth cycle. The pooled data of all patients cannot be used for prognostic or diagnostic assessment; however, it shows a pattern of drug-induced alteration in hepatic function, with an early rise in SGOT suggesting mild progressive damage, followed by a prominent rise in SGPT.

SGPT is found mainly in the cytosol of hepatocytes, whereas SGOT is present in both cytosol and mitochondria. Mild to moderate damage to hepatic cells may raise SGOT levels while SGPT levels remain normal; moderate to severe hepatic damage elevates both. SGOT is also found in red blood cells, kidney, brain, skeletal muscle and cardiac tissue, so a prompt rise in SGOT may indicate associated damage to these tissues; SGPT is likewise present in skeletal muscle and cardiac tissue, and its serum level is affected by myocardial and skeletal muscle damage.

Cytotoxic chemotherapy is frequently associated with fatty liver disease, chemical hepatitis and reactivation of hepatitis B (11). Elevation in triglyceride levels is indicative of drug-induced steatosis (fat globule deposition in hepatocytes) leading to postoperative hepatic insufficiency (8). A significant change in bilirubin from the pretreatment level was observed after the sixth cycle of treatment; biliary changes are detectable and persistent, since the drug is excreted in the bile. Sclerosing cholangitis, with elevation of alkaline phosphatase and bilirubin levels, secondary to 5FU plus mitomycin therapy has been reported by Fukuzumi et al (12).

After intravenous administration, 5FU is converted into its active form, 5-fluoro-deoxyuridine-monophosphate, by anabolic reactions in the tissues. The drug undergoes catabolism primarily in the liver, by reduction of the pyrimidine ring through the enzymatic action of dihydrouracil dehydrogenase (13); the compound is then cleaved to urea, ammonia, carbon dioxide and α-fluoro-β-alanine. This hepatic catabolism accounts for 5FU-induced hepatic toxicity. Hepatic and renal toxicity associated with intravenous administration of 5FU has been reported previously (14). The risk of 5FU-induced hepatic damage is increased in older patients (15); older patients in our study with raised post-treatment transaminase levels more frequently presented with pruritus and hand-foot syndrome. The complexity of the situation is that altered hepatic function increases the accumulation of 5FU (since it is catabolised in liver cells), which in turn adds to the hepatic damage.
Creatinine clearance and blood urea nitrogen (BUN) are conventional biomarkers of renal function, allowing convenient and cost-effective assessment (16). A detectable change in creatinine levels ensued after the fourth cycle of treatment. Besides suggesting a decline in renal function, this may also indicate deteriorating hepatic functional status and progressive cachexia (muscle wasting), both of which are readily assessed during treatment. BUN levels are also affected by dexamethasone pretreatment, dehydration and azotemia, besides renal function. Nephrotoxicity with 5FU chemotherapy is usually reported when it is combined with cisplatin, with worsened creatinine levels (17, 18). Tubular damage induced by 5FU plus high-dose leucovorin chemotherapy (similar to the de Gramont regimen in our study) was reported by Kintzel, who also reported a 50% decline in creatinine clearance in three patients (19). Chemotherapy-induced renal damage is detected by abnormal creatinine and BUN levels, but in most cases the renal tubules remain intact and functional, as normal renal blood flow and GFR are reversibly regained (20). Adequate hydration and simultaneous treatment with mesna, which neutralises the toxic metabolites, can effectively reduce chemotherapy-induced renal damage (8).
Conclusion:
SGOT and bilirubin levels rose after each cycle of treatment, and the differences between pretreatment and post-cycle SGOT levels in patients treated with 5FU/LV were highly significant, indicative of mild to moderate progressive hepatic toxicity. A risk of clinical and subclinical renal damage was evidenced by a progressive rise in serum creatinine and BUN levels; renal toxicity marked by creatinine elevation was prominent after the fourth cycle of treatment.
Clozapine has been shown to have superior efficacy compared with other antipsychotics and is the drug of choice for treatment-resistant schizophrenia.1 However, there is evidence that this treatment is under-prescribed.2 Clozapine requires careful monitoring during the initial titration period; in the UK this was originally done in hospital settings, following the manufacturer’s recommendations, because of the risks of hypotension, excessive sedation and fits. Starting clozapine in a hospital setting ceased to be a mandatory regulatory requirement in the UK when the Summary of Product Characteristics was harmonised across Europe, following an opinion and recommendation issued on 12 November 2002 by the Committee for Proprietary Medicinal Products of the European Medicines Agency.3 Despite this happening several years ago, little has been published about the practicality of successfully commencing clozapine in the community, with previous studies ranging from a single case report4 to a few small case series.5-8 Our study aimed to examine this practice in a larger sample, to highlight the advantages and difficulties of initiating clozapine in the community.
Method
The Central Manchester day hospital was established in 1985, with a focus on acute psychiatric treatment as an alternative to in-patient care. From March 1997 the acute day hospital in Central Manchester was extended to 24 hours, seven days a week, adopting the name of the Home Option Service and focusing on flexible, individualised care delivered at the patient’s home or the team base according to patient choice.9 In 2007, as part of the implementation of new teams across the city to comply with NHS policy guidance,10 the Home Option Service developed further to become the crisis resolution and home treatment team (CRHT) for central Manchester, while CRHTs were set up de novo in North and South Manchester, thereby providing acute community psychiatric care to a metropolitan area of about 500,000 people.
This study describes a large case series of patients referred to these teams for clozapine titration in the community during a three-year period. We retrospectively collected data on all referrals to the three crisis teams from April 2007 to April 2010; the teams had assumed responsibility for providing the service of initiating clozapine in the community. The teams followed the Trust protocol for non-inpatient clozapine titration, which includes recommended monitoring parameters, a dosing schedule and algorithms for the management of complications; this protocol is in essence similar to established guidelines.4,5,11
Statistical analysis was done using SPSS version 15 for Windows. Comparisons were made using the Student t-test, non-parametric tests or the chi-square test, according to the type of data.
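As an illustration of one such comparison (ours, not the authors’ SPSS syntax), using the ethnicity counts reported in the Results below (30 of 36 Caucasian in the titration group versus 15 of 30 in the re-titration group):

```python
# Chi-square test with continuity (Yates) correction on a 2x2 table,
# as reported for ethnicity in the titration vs re-titration groups.
from scipy.stats import chi2_contingency

table = [[30, 6],    # titration group: Caucasian, non-Caucasian
         [15, 15]]   # re-titration group: Caucasian, non-Caucasian
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```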
Results
There were 6542 referrals to the crisis teams, of which 66 were related to clozapine initiation: 36 for a first-time titration and 30 for re-titration. The latter group were patients previously taking clozapine who had discontinued it abruptly for longer than 48 hours. The reasons for stopping clozapine in those cases were lack of adherence (n=21), a supply difficulty (n=5) and medical complications (n=4) such as neutropenia, collapse secondary to dehydration, or undergoing surgery. Two of the patients in the re-titration group restarted clozapine in hospital but were discharged early to continue the titration in the community under the care of the crisis team. Six patients in the titration group were initially referred to the crisis team for stabilisation of their mental state following a crisis; during the course of this intervention it was decided to start them on clozapine, as they had shown poor response to other antipsychotic trials.
Fig. 1 - Referrals, number of patients starting clozapine and drop-outs
The characteristics of the sample are presented in Table 1. The majority of patients were single white males, with a diagnosis of schizophrenia and a mean age of 38.8 years (standard deviation = 9.2). The flowchart in Figure 1 outlines the numbers of referrals and titrations, and the reasons for stopping. Clozapine titration commenced in 54 cases (81.8% of referrals); the other 12 patients refused this treatment, of whom 8 were severely mentally unwell and were admitted to hospital compulsorily under the Mental Health Act. Forty-six patients (85.2%) successfully completed the community titration. The attrition rate of 14.8% (8 cases) comprised 7 patients who withdrew consent and one who was unable to tolerate the titration and was admitted to hospital with hypotension and vomiting. The 7 patients withdrew their consent for the following reasons: lack of adherence (n=2), deterioration in mental state (n=1), refusal to continue with the physical monitoring (n=1), lack of motivation (n=1), and reluctance to continue due to side-effects (n=2). The mean final dose of clozapine was 309.1 mg (s.d. = 75.1 mg). The mean duration of titration was 34.6 days (s.d. = 20.3) and the mean length of admission to the crisis team was 45.9 days (s.d. = 39.5). The median waiting time for crisis team intervention after referral was 2 days (range 140 days), and the median waiting time to start clozapine was 7 days (range 217 days) from the point of referral.
Table 1. Sample characteristics
Characteristic | Total
Age in years, mean (s.d.) | 38.8 (9.2)
Gender, n (%)
  Male | 45 (68.2)
  Female | 21 (31.8)
Ethnicity, n (%)
  White | 45 (68.2)
  Black | 14 (21.2)
  Asian | 3 (4.5)
  Other | 4 (6.1)
Marital status, n (%)
  Single | 54 (81.8)
  Married or cohabiting | 5 (7.6)
  Separated or divorced | 6 (9.1)
  Widowed | 1 (1.5)
Diagnosis, n (%)
  Schizophrenia | 54 (81.8)
  Schizoaffective disorder | 8 (12.1)
  Bipolar affective disorder | 1 (1.5)
  Other | 3 (4.5)
Crisis team, n (%)
  North | 21 (31.8)
  Central | 31 (47.0)
  South | 14 (21.2)
Days waiting to crisis team intervention, mean (s.d.) | 9.5 (25.6)
Days waiting to crisis team intervention, median (range) | 2 (140)
Days waiting to start clozapine, mean (s.d.) | 23.1 (40.9)
Days waiting to start clozapine, median (range) | 7 (217)
Days taken to complete the titration, mean (s.d.) | 34.6 (20.3)
Days taken to complete the titration, median (range) | 28 (101)
Days under the care of the crisis team, mean (s.d.) | 45.9 (39.5)
Days under the care of the crisis team, median (range) | 34 (235)
Final dose in mg, mean (s.d.) | 309.1 (75.1)
There were few significant differences between the group of patients starting clozapine for the first time (titration) and those restarting it following a treatment break (re-titration). There was a shorter wait to recommence clozapine from the time of referral for the re-titration group (median=6 days, range=41 days) than for those starting clozapine for the first time (median=13 days, range=217 days); this difference was statistically significant (Mann-Whitney U=201.5, z=-2.529, p=0.01). Patients having a first titration reached a lower final dose (mean=288 mg, s.d.=50 mg) than those having re-titration (mean=340 mg, s.d.=94 mg); the mean difference of 52.7 mg (95% C.I. 8.7 to 96.8) was significant (t=-2.178, d.f.=42, p=0.02). In terms of ethnicity, patients in the initial titration group were more likely to be Caucasian (n=30, 83%), whereas only half of the patients in the re-titration group were Caucasian (n=15, 50%); this difference was statistically significant (chi-square with continuity correction=6.915, df=1, p=0.009).
There were also significant differences in the distribution of titrations and re-titrations across the three crisis teams. The Central team dealt with more re-titrations (n=23) than the North (n=4) and South (n=3) teams; conversely, the Central team had fewer patients referred for initial titration (n=8) than the North (n=17) and South (n=11) teams (chi-square=19.493, df=2, p<0.0001). Another difference between the teams was the duration of clozapine titration, the South team taking a shorter time (mean=24.15 days, s.d.=7.151) than the North (mean=29.5 days, s.d.=16.342) and Central (mean=43.67 days, s.d.=23.797) teams; this difference was statistically significant (Kruskal-Wallis chi-square=8.823, d.f.=2, p=0.0121).
No significant differences were found between teams, or between the titration and re-titration groups, in terms of patients’ diagnosis, gender, marital status, age, rate of accepted referrals, proportion of successfully completed titrations or waiting time to crisis team intervention.
With regard to adverse events, most patients experienced transient tachycardia (n=30, 55.5%). Other side-effects were excessive salivation (n=15), hypotension (n=13), sedation (n=10), hyperthermia (n=8), dizziness (n=6), constipation (n=6), hypertension (n=5), headaches (n=4), nausea (n=2) and heartburn (n=2). Less common adverse events (n=1 each) were syncope, seizures, transient neutropenia, atrial fibrillation, blurred vision, swelling of the arms, acute dystonic reaction, nocturnal incontinence, exacerbation of asthma, diabetes, erectile dysfunction and delayed ejaculation. Only the patient who developed syncope, associated with vomiting and severe hypotension, had to be advised to stop the treatment in the community and was admitted to hospital. For the remaining patients, the reported adverse events did not impede the successful completion of clozapine titration in the community.
In terms of longer-term outcomes, 50 patients (75.8% of the total sample) were still taking clozapine at the time the data were collected, a median of 337 days (range 824 days) after referral to the crisis team. Of the 16 patients no longer on clozapine, most (n=14, 21.2% of the sample) had chosen to discontinue the treatment; one patient had died, of a cause unrelated to clozapine treatment, and one had developed neutropenia and needed to discontinue clozapine for this reason. Of the 46 patients who successfully completed the titration, 40 (86.96%) were still taking clozapine at the time of data collection, a median of 365.5 days (range 824 days) after commencing clozapine in the community.
Discussion
The results of this study confirm that clozapine can be safely and successfully started in the community. Comparing this with published evidence, we found only one case report4 and one small study5,6 previously conducted in the UK. O’Brien et al5,6 initially considered 26 patients, but only 14 started clozapine in the community, as the rest were considered too unwell and were admitted to hospital; one patient refused daily access, so only 13 completed the titration. The side-effects reported in that study were minor, including sedation in 5 cases, dizziness in 4, hypotension in 2, and nausea and vomiting in one. Compared with our results, O’Brien et al described a larger proportion of patients needing hospital admission for clozapine titration.
We found two published studies7,8 of clozapine community titration conducted in the United States. The first included 47 patients who started clozapine in a partial hospitalisation program, where adverse reactions were common. Patients were titrated much more quickly than in our report (to 350 mg over 2 weeks), which might explain the higher incidence of side-effects, including drowsiness (93.6%), hypersalivation (93.6%), constipation (89.4%), weight gain (72.3%) and tachycardia (57.4%). However, no patient discontinued clozapine, and potentially serious complications were much less frequent, with 3 cases (6%) of seizures and 2 of leukopenia. The other US study8 demonstrated some evidence of cost savings associated with decreased hospitalisation in 28 patients who started clozapine on an outpatient basis.
Johnson et al7 suggest that the reluctance to start clozapine outside inpatient settings may be due partly to the potential adverse reactions, but also to clinicians’ fears of making mistakes, avoidance of additional duties, and anticipated difficulties in patients with a history of non-adherence to treatment. The results of our study support a careful approach to starting clozapine at home in this latter group of patients, as they represented the bulk of cases not achieving the intended outcome of a successful community clozapine titration. However, our study confirms that other reasons to deny a patient the opportunity to start clozapine at home, such as potential adverse events, are hardly justified.
The general advantages of community psychiatric care as opposed to inpatient treatment have been described elsewhere 9. These include accessibility, flexibility and user satisfaction. Treating patients in their own homes avoids the stigma of hospital admission, prevents the breakdown of important social networks and avoids disruption to patients' benefits. A recent Cochrane review 12 found that crisis/home care reduces the number of people disengaging early, reduces family burden, and is a more satisfactory form of care for both patients and families. Some patients who might have been reluctant to start clozapine if they had to be admitted to hospital can therefore benefit from starting this treatment at home supported by crisis teams.
Although a detailed cost-benefit evaluation of this service was not undertaken, it is fair to assume that the costs associated with titrating clozapine at home would be significantly lower than those associated with in-patient care, as demonstrated in previous studies. 8,12
In summary, clozapine can be safely started in the community, but has to be carefully monitored. Patients’ adherence to the treatment and to the physical monitoring requirements is the key element to a successful outcome. Crisis teams are in an ideal position to support patients undergoing initiation of clozapine at home, although this specific role was not originally identified in policy guidance.10 The results of this multi-site study are encouraging and can be applicable to other crisis or community teams nationally.
One in four people worldwide, and one in five in Canada, suffer from a mental disorder, and only half of these individuals will seek help for their mental health.1 Doctors have an increasingly demanding job, with rising expectations of excellence in clinical, academic and managerial roles. It is striking that, despite their rigorous training, doctors have higher rates of suicide than the general population.2 Studies have revealed that two-thirds of Canada’s physicians consider their workload too heavy, and more than half say that personal and family life have suffered because of their career choice.3 One third of Canadian physicians disagreed with the statement that their work environment encourages them to be healthy.4
A systematic review of mental health studies of medical students in the US and Canada found consistently higher rates of psychological distress in medical students than in both the general population and age-matched peers.5 Medical students were also less likely than their peers to seek help for psychological distress.6
A survey of psychiatrists and physicians in the UK found that most would be reluctant to disclose personal mental illness to colleagues or professional institutions, and that their choices regarding disclosure and treatment would be influenced by issues of confidentiality, stigma and career implications rather than by quality of care.7,8
To reduce stigma and ease physicians’ access to mental healthcare, it is important to understand and address the above issues. This will help psychiatrists to optimise their mental health, as well as improving recruitment and retention of these professionals.9 The objective of our study was to assess Canadian psychiatrists’ understanding of the incidence of mental illness among psychiatrists, in comparison with both the general population and their medical/surgical colleagues. The study also assessed psychiatrists’ preferences for disclosure and treatment should they develop a mental illness, and their own experience of mental illness.
Method
Ethics approval (study code PSIY-336-11) was granted by Queen’s University in Kingston, Ontario. Funding was obtained from TH’s research initiation grant. A mailing list of all psychiatrists in the province of Ontario was provided by the College of Physicians and Surgeons of Ontario (CPSO) specifically for this research project. The CPSO is a body similar to the General Medical Council (GMC) in the UK; its role is to regulate the practice of medicine in the province of Ontario, and other provinces have their own respective Colleges. In the remainder of the text the term ‘psychiatrist’ refers to consultant psychiatrist.
The list obtained from the CPSO did not include the approximately 10% of psychiatrists who had opted out of having their postal details released for research purposes. In total, 1231 psychiatrists were sent a survey package comprising a covering letter, a 2-page questionnaire and a stamped, addressed return envelope. Consent was implied by taking part in the survey. The 10-item questionnaire was based on a review of the literature, previous research and discussion with colleagues, and comprised three broad sections. The first collected information on respondents’ perceptions of the prevalence of mental illness in psychiatrists, in comparison with the general population and with other medical/surgical specialties. The second asked psychiatrists to identify to whom they would be most likely to disclose a mental illness, and their reasons for non-disclosure. The third asked their preference of treatment in both outpatient and inpatient settings. The only identifiable information requested was the respondent’s amount of experience as a psychiatrist and whether they had experienced mental illness in the past; a free-text box was included at the end for comments, and complete anonymity was maintained. Psychiatrists were divided into three groups: Group 1 (less than 5 years of experience as a psychiatrist), Group 2 (5-10 years of experience) and Group 3 (greater than 10 years of experience).
Analysis
A series of two-sample chi-square tests (χ²) was conducted to examine associations between categorical variables. Where more than 20% of contingency cells had expected counts below 5, or where any cell was 0, Fisher’s exact test was used. Phi (φ) or Cramer’s V (for associations larger than 2x2) was used as the measure of effect size; these provide an association coefficient between 0 and 1. All analyses were done using SPSS 19.
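A minimal sketch of this decision rule and the effect-size calculation (ours, with an invented 2x2 table, not the survey data):

```python
# Chi-square vs Fisher's exact on a 2x2 table, plus Cramer's V (equals phi for 2x2).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[30, 25], [20, 40]])  # invented counts for illustration

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).mean() > 0.20 or (table == 0).any():
    # Fall back to Fisher's exact test when expected counts are too small
    _, p = fisher_exact(table)

n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"p = {p:.3f}, Cramer's V = {cramers_v:.2f}")
```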
Results
Of the 1231 questionnaires sent, 487 were returned, a response rate of 39.6%. Respondents were placed into three groups according to time in attending practice: <5 years (55, 11.3%), 5-10 years (53, 10.9%) and >10 years (369, 75.8%). The frequency of responses to all questions, both overall and as a function of attending group, is shown in Table 1.
Table 1. Responses to all questions and comparisons between attending groups. Discrepancies between the overall column and the sum of the attending group columns are due to missing cases in responses to the attending group question.
Question / response | Overall | <5 years | 5-10 years | >10 years
Incidence of psychiatric illness amongst doctors is higher than general population?
  Yes | 124 (25.5%) | 15 (27.3%) | 19 (35.8%) | 89 (24.1%)
  No | 247 (50.7%) | 30 (54.5%) | 26 (49.1%) | 184 (49.9%)
  Don’t know | 116 (23.8%) | 10 (18.2%) | 8 (15.1%) | 96 (26.0%)
Incidence of psychiatric illness amongst medical/surgical professionals higher than that of psychiatrists?
  Yes | 37 (7.6%) | 1 (1.8%) | 8 (15.1%) | 28 (7.6%)
  No | 285 (58.5%) | 36 (65.5%) | 29 (54.7%) | 214 (58.0%)
  Don’t know | 165 (33.9%) | 18 (32.7%) | 16 (30.2%) | 127 (34.4%)
Have you ever experienced a mental illness which had affected your personal, social or occupational life?
  Yes | 151 (31.0%) | 14 (25.5%) | 18 (34.0%) | 118 (32.0%)
  No | 336 (69.0%) | 41 (74.5%) | 35 (66.0%) | 251 (68.0%)
If you were to develop a psychiatric illness affecting your personal, social or occupational life, to whom would you initially be most likely to disclose this?
  Church/clergy | 3 (0.6%) | 1 (2.0%) | 0 (0.0%) | 2 (0.6%)
  GP/family physician | 153 (31.4%) | 18 (35.3%) | 16 (32.0%) | 116 (32.3%)
  Family/friends | 204 (41.9%) | 22 (43.1%) | 27 (54.0%) | 152 (42.3%)
  Colleagues | 54 (11.1%) | 4 (7.8%) | 1 (2.0%) | 46 (12.8%)
  Mental health professional | 32 (6.6%) | 4 (7.8%) | 5 (10.0%) | 22 (6.1%)
  None | 15 (3.1%) | 1 (2.0%) | 0 (0.0%) | 14 (3.9%)
  Other | 9 (2.0%) | 1 (2.0%) | 1 (2.0%) | 7 (1.9%)
What is the most important factor that would affect your decision not to disclose your mental illness?
  Stigma | 114 (23.4%) | 12 (22.2%) | 14 (26.4%) | 86 (24.2%)
  Career implications | 168 (34.5%) | 21 (38.9%) | 22 (41.5%) | 121 (34.1%)
  Professional standing | 80 (16.4%) | 11 (20.4%) | 6 (11.3%) | 63 (17.7%)
  Other | 109 (22.4%) | 10 (18.5%) | 11 (20.8%) | 85 (23.9%)
If you were to suffer from a mental illness affecting your personal, social or occupational life requiring out-patient treatment, what would be your first treatment preference?
  Informal professional advice | 83 (17.0%) | 6 (10.9%) | 7 (13.5%) | 70 (19.1%)
  Formal professional advice | 365 (74.9%) | 42 (76.4%) | 40 (76.9%) | 275 (75.1%)
  Self-medication | 25 (5.1%) | 7 (12.7%) | 2 (3.8%) | 15 (4.1%)
  No treatment | 9 (1.8%) | 0 (0.0%) | 3 (5.8%) | 6 (1.6%)
If you were to develop a mental illness requiring in-patient treatment, where would be your first preference?
  Local | 109 (22.4%) | 6 (10.9%) | 5 (9.4%) | 96 (26.5%)
  Out of area | 370 (76.0%) | 49 (89.1%) | 48 (90.6%) | 266 (73.5%)
In choosing in-patient preference, which of the following influenced your decision most?
  Quality of care | 130 (26.7%) | 7 (12.7%) | 7 (13.2%) | 112 (30.5%)
  Convenience | 44 (9.0%) | 0 (0.0%) | 5 (9.4%) | 38 (10.4%)
  Confidentiality | 257 (52.8%) | 39 (70.9%) | 34 (64.2%) | 180 (49.0%)
  Stigma | 32 (6.6%) | 6 (10.9%) | 4 (7.5%) | 22 (6.0%)
  Other | 21 (4.3%) | 3 (5.5%) | 3 (5.7%) | 15 (4.1%)
Perception of the incidence of mental illness
Just over half of respondents disagreed that the incidence of mental illness was higher in doctors than the general population (247, 50.7%). Just over a quarter (124, 25.5%) agreed and just under a quarter replied ‘don’t know’ (116, 23.8%). As can be seen in Table 1, the pattern of responding was similar across all attending groups on this question (χ²=5.92; df=4; p=.205; Cramer’s V=.08). Most disagreed that psychiatric illness was greater in medical/surgical professionals than in psychiatrists (285, 58.5%), a small minority agreed (37, 7.6%). Again the attending groups responded similarly on this question (χ²=7.06; df=4; p=.133; Cramer’s V=.09). Nearly a third of respondents (151, 31.0%) claimed to have experienced a mental illness, and once more the attending groups did not differ significantly in their responses to this (χ²=1.12; df=2; p=.57; Cramer’s V =.05).
Disclosure of mental illness
Respondents would be most likely to disclose their mental illness in the first instance to family and friends (204, 41.9%) although many would instead prefer to disclose to their family physician (153, 31.4%). Relatively few would disclose to a colleague (54, 11.1%) in the first instance or to a mental health professional (32, 6.6%), very few would choose no-one (15, 3.1%) and the clergy was the least endorsed option (3, 0.6%). When considering only the three most popular response options (family/friends, family physician, and colleague) the three attending groups responded similarly (χ²=6.63; df=4; p=.157; Cramer’s V=.09). When asked about the most important factor affecting the decision not to disclose, the most common response was career implications (168, 34.5%). However stigma (114, 23.4%) and professional standing (80, 16.4%) were also reasonably common responses.
Again, when only the three most popular disclosure choices were included in the analysis, a marginally significant association emerged between the choice of whom to disclose to and the factor affecting disclosure (χ²=12.52; df=6; p=.051; Cramer’s V=.13) (see Table 2). Those who would disclose to their family physician or to family/friends were more likely to cite stigma as a factor influencing their choice than those who would disclose to colleagues. Those who would disclose to colleagues were more likely to cite professional standing than those who would disclose to their family physician or to family/friends. There was no association between the choice of whom to disclose to and previous experience of mental illness (χ²=1.22; df=2; p=.545; Cramer’s V=.05).
Table 2. Preferences for disclosure and the factors influencing that preference.
Factors influencing disclosure
Stigma
Career implications
Professional standing
Other
Total
Preference for disclosure
Family Physician
33 (22.6%)
57 (39.0%)
29 (19.9%)
27 (18.5%)
146 (100.0%)
Family/friends
56 (28.1%)
71 (35.7%)
24 (12.1%)
48 (24.1%)
199 (100.0%)
Colleagues
7 (13.7%)
17 (33.3%)
14 (27.5%)
13 (25.5%)
51 (100.0%)
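As a check on the analysis above, the chi-square statistic for Table 2 can be recomputed directly from the observed counts. A minimal sketch in Python follows (the use of scipy is our assumption; the original analysis will have used a standard statistics package):

```python
# Recompute the chi-square test for Table 2: preference for disclosure (rows)
# against the factor influencing disclosure (columns).
from scipy.stats import chi2_contingency

observed = [
    [33, 57, 29, 27],  # Family physician: stigma, career, standing, other
    [56, 71, 24, 48],  # Family/friends
    [7,  17, 14, 13],  # Colleagues
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# Output: chi2 = 12.52, df = 6, p = 0.051, matching the values reported above.
```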
Treatment for mental illness
When considering out-patient treatment, the majority of respondents would opt for formal professional advice (365, 74.9%). A smaller proportion would choose informal professional advice (83, 17.0%) and very few would self-medicate (25, 5.1%) or have no treatment (9, 1.8%). With regard to in-patient treatment, the majority would opt for an out of area mental health facility (370, 76.0%). Only just over a quarter of respondents (130, 26.7%) indicated that quality of care would influence their choice of in-patient care, whereas just over half would be most concerned about confidentiality (257, 52.8%). There was a strong association between in-patient preference and the factor influencing that preference (Fisher’s Exact=228.25; p<.001; Cramer’s V=.70). As shown in Table 3, those who would choose an out of area facility were much more likely to cite confidentiality and stigma as factors influencing their choice than those who would choose a local facility. Conversely, those choosing a local facility were more likely to cite quality of care and convenience as influencing factors.
Table 3. In-patient treatment choice and the factors influencing that choice.
Factors influencing in-patient choice
Quality of Care
Convenience
Confidentiality
Stigma
Total
In-patient treatment choice
Local MH Facility
57 (56.4%)
39 (38.6%)
4 (4.0%)
1 (1.0%)
101 (100.0%)
Out of area MH Facility
69 (19.4%)
4 (1.1%)
252 (70.8%)
31 (8.7%)
356 (100.0%)
There was an association between attending group and out-patient preference (Fisher’s Exact=12.00; p=.042; Cramer’s V=.13). As can be seen in Table 1, the >10 years group would be more likely to select informal advice than the <5 years group, but less likely to self-medicate; on self-medication the >10 years group responded similarly to the 5-10 years group. There was also an association between attending group and in-patient preference (χ²=12.66; df=2; p=.002; Cramer’s V=.16). The >10 years group, although still largely in favor of out of area care, would be more likely than the other two groups to opt for local care. There was also a significant association between attending group and the factors influencing in-patient choice (Fisher’s Exact=25.335; p=.001; Cramer’s V=.16). As shown in Table 1, the >10 years group would be more influenced by quality of care and less influenced by confidentiality than the other two groups.
Finally, previous experience of mental illness was not associated with in-patient choice (χ²=0.542; df=1; p=.462; φ=-.04), but it was associated with out-patient choice (χ²=11.51; df=3; p=.009; Cramer’s V=.16). As Table 4 shows, although both groups were more likely to opt for formal over informal advice, this pattern was more pronounced in the group who had previously experienced mental illness than in the group who had not.
Table 4. Previous experience of mental illness and out-patient treatment preference.
Out-patient treatment preference
Informal prof. advice
Formal prof. advice
Self-medication
No treatment
Total
Previous experience of mental illness
No
69 (20.8%)
242 (72.9%)
17 (5.1%)
4 (1.2%)
332 (100.0%)
Yes
14 (9.3%)
123 (82.0%)
8 (5.3%)
5 (3.3%)
150 (100.0%)
Discussion
This is the first study to assess the attitudes of Canadian psychiatrists to becoming mentally ill themselves. As this study was carried out in one province of Canada, the results cannot be generalized across the country. There is considerable scope for further research in this area, especially among psychiatric residents and other healthcare professionals.10
Physician impairment is any physical, mental or behavioral disorder that interferes with the ability to engage safely in professional activities.11 Impairment among medical practitioners, and psychiatrists in particular, is a significant problem characterized by chronicity, under-reporting and, in many cases, poor outcomes.12 However, early detection and intervention, together with treatment programs that are more sensitive to the needs of impaired practitioners, more continuous, better structured, and rehabilitation- and recovery-focused, are more likely to produce a positive outcome.13 It is extremely important to remember and advocate that although a physician may be mentally ill, he or she is not necessarily impaired.
It is concerning that stigma continues to play a role in psychiatrists’ decision-making about obtaining mental healthcare. This is consistent with the findings of a survey in the USA which showed that half of all psychiatrists with a depressive illness would self-medicate rather than risk having mental illness recorded in their medical notes.10 Both entertainment and news media provide a dramatic and distorted image of mental illness that emphasizes dangerousness, criminality and unpredictability.14 Given this stigma, doctors are subsequently uncertain whether to disclose a mental health problem to their licensing boards, for fear of discrimination.15 Studies of US medical licensing bodies have demonstrated a trend towards increasingly stigmatizing approaches,16-19 and the concern is whether there is a similar trend in Canada.9 Most psychiatrists in Canada do not know what to expect from provincial colleges once their mental illness is disclosed and, as a result, tend to expect the worst. More work is needed by psychiatrists to inform the provincial colleges on physician mental health. Only then can the provincial licensing colleges do more to assure physicians that the recovery model of treatment applies to them as it does to other psychiatric patients.
The gap, however, continues to lie between ‘I need help’ and active psychiatric management. Psychiatrists will be well aware of the profound impact that such illnesses can have on a person’s personal and professional competency; recognising it in oneself, however, can at times be met with denial in the first instance. Dr. Mike Shooter (ex-President of the Royal College of Psychiatrists, UK) suffered from depression and highlights the need to speak out and combat stigma. He points out the need to seek treatment early and how not doing so can adversely affect the doctor-patient relationship.20 For some, however, the fear of stigmatization of health professionals by health professionals can lead to very tragic consequences. Dr. Suzanne Killinger-Johnson was a family physician with a psychotherapy practice in Toronto. She suffered from postpartum depression and in November 2000 she jumped in front of a subway train cradling her son. Her son died instantly and Dr. Killinger-Johnson died 9 days later.21
Over the past 15 years a greater understanding has developed of the incidence, stressors and complications of physician mental illness.22 The CPA published its first position paper on the mentally ill physician as early as 1984; the latest version, from 1997, is currently under review.22 The Canadian Medical Association should be congratulated on the most comprehensive strategy document for mentally ill physicians: Physician Health Matters - A mental health strategy for physicians in Canada, published by the CMA in February 2010. In addition to outlining the mental health of medical students, residents and physicians, it addresses the current gaps in services and the strategic direction needed to achieve ‘optimal mental health for all physicians’. This sets out the necessary groundwork for institutions to implement based on current evidence. In Canada, the position of ‘The Bell Mental Health and Anti-Stigma Research Chair’ was inaugurated at Queen’s University in February 2012 and offered to Dr. Heather Stuart, Professor of Community Health and Epidemiology.23 Stigma is a social process characterized by exclusion, rejection, blame or devaluation resulting from an adverse social judgment about a person or group.24 There is a cultural pressure amongst physicians not to be sick so that one can provide care, which unfortunately results in physicians trying to control their own illness and treatment.25 This pressure is exacerbated for mental health issues, and considerable stigma attaches to physicians acknowledging mental health issues or illness, as well as to seeking help.26
Over the past decade the physician health community has been working to destigmatise physician mental health and to provide support services in this regard. All Canadian provinces have Physician Health Programs (PHPs) to help physicians with mental health difficulties. Referrals can come from physicians, families, colleagues, or the individuals themselves.9 Physicians with psychiatric or drug dependence problems are treated outside the PHP, though the PHP (depending on the province) may be involved in monitoring the physician.
One of the most important factors influencing where a doctor is treated is confidentiality.8 At present in Canada many hospitals are switching, or have switched, to electronic patient records. Patient data in an electronic environment can be accessed from multiple portals by different professionals, which poses serious concerns for psychiatrists worried about the confidentiality of their own records. A mechanism by which patients can access a list of the professionals who have viewed their information may alleviate some of this concern.
Conclusions
Education surrounding mental illness in physicians needs to begin in medical school. Medical students require more assurance that seeking help for psychological problems will not be penalized. Junior doctors are receptive to education on physician impairment and substance misuse, and this should be a mandatory component of their training.27 Educating and training medical students and psychiatric residents to assess doctors as patients would make this scenario less taboo than it is currently perceived to be.
The CPSO, in liaison with relevant partners, must develop a clear and concise document outlining the steps the CPSO will take in helping the mentally ill physician. This document should be clearly advertised on the CPSO website for ease of access, and would reduce the catastrophizing interpretations psychiatrists (and physicians) may make of the CPSO’s involvement with the mentally ill physician. By taking a lead, the CPSO would provide a stimulus for other provincial licensing colleges to follow suit.
The bridge from ‘I need help’ to ‘I am getting help’ is paved with multiple barriers. Addressing some of the concerns raised by psychiatrists will help them cross it more easily.
Candida species are a leading cause of nosocomial infections and the most common fungal infection in intensive care units. Candida infection ranges from invasive candidal disease to bloodstream infection (candidaemia). The incidence of Candida infection has risen over the past two decades, particularly with the use of immunosuppressive drugs for cancer and HIV,1,2,3 and most of these infections occur in ICU settings.4 Candida infection is associated with high mortality and morbidity: studies have shown that mortality attributable to candidaemia ranges from 5 to 71%, depending on the study.5,6,7 Candidaemia is also associated with longer hospital stays and higher costs of care.
Early recognition of Candida BSI has been associated with improved outcomes. Candida sepsis should be suspected in a patient who fails to improve and has multiple risk factors for invasive and bloodstream Candida infection. Risk factors identified for candidaemia include previous use of antibiotics, sepsis, immunosuppression, total parenteral nutrition, central venous lines, surgery, malignancy and neutropaenia. Patients admitted to ICU are frequently colonised with Candida species, and the role of colonisation in Candida bloodstream infection and invasive candidal disease has long been debated. A few studies support the use of presumptive antifungal treatment in ICU based on colonisation and the number of sites colonised by Candida, but the NEMIS study has raised doubt about this approach. The Infectious Diseases Society of America (IDSA) 2009 guidelines identify Candida colonisation as one of the risk factors for invasive candidiasis, but warn about the low positive predictive value of the level of Candida colonisation.8 We conducted a retrospective cohort study in our medical ICU to identify risk factors for Candida bloodstream infection, including the role of Candida colonisation.
Hospital and Definitions:
This study was conducted at Interfaith Medical Center, Brooklyn, New York, a 280-bed community hospital with 13 medical ICU beds. A case of nosocomial Candida bloodstream infection was defined as growth of Candida species in a blood culture drawn more than 48 hours after admission. Cultures in our hospital are routinely performed by the Bactec method (aerobic and anaerobic cultures). Cultures are usually kept for 5 days at our facility and, if yeast growth is identified, species identification is performed. In our ICU it is routine practice to obtain endotracheal and urine cultures for all patients who are on mechanical ventilatory support and failing to improve. In patients who are not mechanically ventilated, it is routine practice to send sputum cultures and nasal swabs to identify MRSA colonisation.
Study Design:
This was a retrospective cohort study. We reviewed the charts of all patients admitted to our medical ICU from 2000 to 2010 who stayed in the ICU for more than 7 days, irrespective of their diagnosis. Data were collected for demographics (age and sex) and for risk factors for candidaemia: co-morbidities (HIV, cancer, COPD, diabetes mellitus, end-stage renal failure (ESRF)), presence or absence of sepsis, current or previous use of antibiotics, presence of central venous lines, steroid use during the ICU stay, requirement for vasopressor support and use of total parenteral nutrition (TPN). Culture results for Candida, including species identification, were obtained for blood, urine and endotracheal aspirates.
Statistical Methods:
Patients were divided into two groups based on the presence or absence of Candida BSI. Demographic data and risk factors were analysed using the chi-square test to examine differences between the two groups. Endotracheal aspirates and sputum cultures were combined to create a group with Candida respiratory tract colonisation. Binary logistic regression with the forward likelihood ratio method was used to create models. Different models were generated for risk factors. Interactions between antibiotic use, steroid use, vasopressor support and sepsis were analysed in different models, as were interactions between urine cultures and endotracheal aspirate/sputum cultures. The model with the lowest Akaike information criterion (AIC) was chosen as the final model, and a candidaemia risk score was calculated from it to predict the risk of Candida BSI. Receiver operating characteristic (ROC) analysis was used to select the best cut-off value for the candidaemia risk score. Candida species in urine and endotracheal aspirates were compared with Candida species in blood culture using the kappa test. Data were analysed using SPSS statistical analysis software version 18.
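To make the model-selection step concrete, the sketch below fits two candidate logistic regression models and compares their AIC values. It is an illustration only: the patient-level dataset is not available, so the data are synthetic and the variable names are our own.

```python
# Illustrative AIC comparison of candidate logistic regression models for
# candidaemia. Synthetic data; variable names are assumptions, not the
# study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1483  # study size
data = pd.DataFrame({
    "antibiotics":   rng.integers(0, 2, n),  # previous or current antibiotic use
    "cvp_line":      rng.integers(0, 2, n),
    "tpn":           rng.integers(0, 2, n),
    "et_culture":    rng.integers(0, 2, n),  # endotracheal culture positive
    "urine_culture": rng.integers(0, 2, n),
})
# Outcome simulated loosely from the coefficients reported in Table 5
lin = (-4.5 + 1.184 * data["antibiotics"] + 0.639 * data["cvp_line"]
       + 1.186 * data["tpn"] + 0.760 * data["et_culture"]
       + 1.255 * data["urine_culture"])
data["candidaemia"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)

candidates = {
    "all five factors":    ["antibiotics", "cvp_line", "tpn", "et_culture", "urine_culture"],
    "without antibiotics": ["cvp_line", "tpn", "et_culture", "urine_culture"],
}
for name, cols in candidates.items():
    X = sm.add_constant(data[cols].astype(float))
    fit = sm.Logit(data["candidaemia"], X).fit(disp=0)
    # AIC = -2 * log-likelihood + 2 * number of estimated parameters
    print(f"{name}: -2LL = {-2 * fit.llf:.1f}, AIC = {fit.aic:.1f}")
```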
Study Results:
A total of 1483 patients were included in the study, of whom 56 (3.77%) had a blood culture positive for Candida species. Table 1 shows the demographic characteristics of the study population. There were no significant differences between the two groups for age, sex, diabetes mellitus, COPD, HIV, cancer and ESRF. As shown in the table, 82.1% of patients in the candidaemia group had recently used or were taking antibiotics, compared with 39.6% of patients in the group without candidaemia, a significant difference. Similarly, 71.4% of patients in the candidaemia group had sepsis, compared with 30.6% in the other group (p < 0.001). Vasopressor use (severe septic shock) also differed between the two groups (23.2% vs 10.1%, p = 0.004). Steroid use, central lines and total parenteral nutrition were all more frequent in the candidaemia group, as were positive Candida cultures in urine and endotracheal aspirates.
Table 2 shows that 57.1% of Candida BSI were caused by C. albicans, 30.4% by C. glabrata and 12.5% by C. parapsilosis, a species distribution similar to that found in other studies. Table 3 shows the two models with the lowest AIC values. The only difference between these two models was the antibiotic variable: previous or current use of antibiotics versus current use of antibiotics in sepsis. Table 4 shows that when multifocal site positivity (urine and endotracheal culture combined) was used in the model, the AIC value increased significantly; that is, when multifocal sites were used in place of individual sites, a substantial amount of information was lost, and this model had poorer predictive value than the model using individual sites. The model with the lowest AIC was chosen as the final model. Binary logistic regression with forward conditional analysis showed that only TPN, central venous line, previous or current antibiotic use, endotracheal aspirate culture positive for Candida species and urine culture positive for Candida species were included in a statistically significant model (p < 0.001). Odds ratios with 95% confidence intervals and respective p values for these risk factors are shown in Table 5. Age greater than 65 years, sex, sepsis or septic shock, co-morbidities and steroid use were not significant risk factors for candidaemia.
From this model, the candidaemia risk score is calculated as: candidaemia risk score = 1.184 (previous or current antibiotic use) + 0.639 (central venous line) + 1.186 (total parenteral nutrition) + 0.760 (endotracheal culture positive for Candida) + 1.255 (urine culture positive for Candida), where each term is included only when the corresponding risk factor is present.
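Applied to an individual patient, the score is simply the sum of the coefficients for the risk factors present. A minimal sketch follows (the function and variable names are ours; the cut-off of 2 is the ROC-derived threshold referred to in the Discussion):

```python
# Candidaemia risk score: sum of final-model coefficients (from the text)
# for the risk factors present in a given patient.
WEIGHTS = {
    "antibiotics":   1.184,  # previous or current antibiotic use
    "cvp_line":      0.639,  # central venous line
    "tpn":           1.186,  # total parenteral nutrition
    "et_culture":    0.760,  # endotracheal culture positive for Candida
    "urine_culture": 1.255,  # urine culture positive for Candida
}

def candidaemia_risk_score(present_factors):
    """Sum the coefficients of the risk factors present."""
    return sum(WEIGHTS[f] for f in present_factors)

# Hypothetical patient: on antibiotics, with a central line and candiduria.
score = candidaemia_risk_score(["antibiotics", "cvp_line", "urine_culture"])
print(f"score = {score:.3f}")                        # 3.078
print("higher risk" if score > 2 else "lower risk")  # cut-off of 2 (see Discussion)
```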
Table 6 shows the relationship between the Candida strain identified in endotracheal/sputum culture and that in blood culture; Table 7 shows the same for urine culture. Strains identified in endotracheal aspirate culture showed very high agreement by the kappa test, and urine culture showed moderate agreement. Thus, the Candida strain identified in blood culture was usually the same as that identified in urine or endotracheal culture.
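The agreement statistic used here is the kappa coefficient on paired species identifications. A minimal sketch with hypothetical paired results (scikit-learn is our assumption; any implementation of Cohen's kappa would do):

```python
# Cohen's kappa for agreement between the species found at a colonising
# site and in blood culture. The paired labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

site_species  = ["albicans", "albicans", "glabrata", "glabrata", "albicans"]
blood_species = ["albicans", "albicans", "glabrata", "albicans", "albicans"]

kappa = cohen_kappa_score(site_species, blood_species)
print(f"kappa = {kappa:.2f}")  # 1 = perfect agreement; 0 = chance-level agreement
```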
Table 1: Demographic characteristics of the study population
Characteristic
Candidaemia (total 56) N (% of candidaemia)
No candidaemia (total 1427) N (% of no candidaemia)
P value (chi-square test)
Age >65 years
34 (60.7%)
676 (47.40%)
0.06
Male sex
27 (48.2%)
694 (48.6%)
0.530
Diabetes mellitus
22 (39.3%)
506 (35.5%)
0.325
COPD
1 (1.8%)
75 (5.3%)
0.206
HIV
9 (16.1%)
253 (17.7%)
0.458
Cancer
4 (7.1%)
99 (6.9%)
ESRF
11 (19.6%)
251 (17.6%)
0.401
Previous or current antibiotic use
46 (82.1%)
565 (39.6%)
<0.001
Sepsis
40 (71.4%)
436 (30.6%)
<0.001
Vasopressor support (septic shock)
13 (23.2%)
144 (10.1%)
0.004
Steroid use
27 (48.2%)
431 (30.2%)
0.004
Central line
30 (53.6%)
267 (18.7%)
<0.001
Total parenteral nutrition
7 (12.5%)
29 (2.0%)
<0.001
Candida in endotracheal aspirate/sputum culture
13 (23.2%)
112 (7.8%)
<0.001
Candida in urine culture
34 (60.7%)
262 (18.4%)
<0.001
Table 2: Candida strains responsible for Candida bloodstream infection
Species in the blood culture
Number (%)
Candida albicans
32 (57.1%)
Candida glabrata
17 (30.4%)
Candida parapsilosis
7 (12.5%)
Table 3: The two models with the lowest AIC values
Variables
-2 log likelihood
AIC
Previous or current antibiotic use, CVP line, total parenteral nutrition, endotracheal culture, urine culture
394.822
406.822
CVP line, total parenteral nutrition, endotracheal culture, urine culture, current antibiotic use in sepsis
395.730
407.730
Table 4: Model with two sites positive for Candida
Variables
-2 log likelihood
AIC
Sepsis, CVP line, total parenteral nutrition, endotracheal and urine culture
407.920
417.920
Table 5: Odds ratios with 95% confidence intervals for risk factors for candidaemia
Effect
Coefficient (β)
Odds ratio
95% confidence limits
P value
Lower
Upper
TPN
1.186
3.274
1.263
8.486
0.015
CVP line
0.639
1.895
1.032
3.478
0.039
Antibiotic Use
1.184
3.268
1.532
6.972
0.002
Endotracheal/sputum culture
0.760
2.150
1.078
4.289
0.030
Urine culture
1.255
3.508
1.926
6.388
<0.001
Table 6: Endotracheal aspirate culture in candidaemic patients
Endotracheal/sputum culture
Blood culture
Kappa test for agreement
C. albicans
C. glabrata
C. albicans
9
0
0.83
C. glabrata
0
3
C. tropicalis
0
1
Table 7: Urine cultures in candidaemic patients
Urine culture
Blood culture
Kappa test for agreement
C. albicans
C. glabrata
C. tropicalis
C. albicans
15
5
1
0.47
C. glabrata
1
10
0
C. krusei
1
1
0
Discussion
Candida is the most common nosocomial fungal infection in the ICU. Candidaemia accounts for approximately 5-8% of nosocomial BSI in US hospitals9,10,11 and for approximately 50-75% of cases of invasive fungal infection in the ICU,12,13 with a rate varying from 0.2-1.73 per 1000 patient days.9,14,15 In a study by Theoklis et al., candidaemia was associated with a mean 10.1-day increase in length of stay and a mean $39,331 increase in hospital charges.16 A study of 1,765 patients in Europe found that Candida colonisation was associated with increased hospital length of stay and an increase in cost of care of 8000 EUR.17 ICU patients are at increased risk of infection because of the underlying illness requiring ICU care, immunosuppressant use, invasive or surgical procedures and nosocomial transfer of infections. A number of risk factors have been identified in different studies. In a matched case-control study, previous antibiotic therapy, Candida isolated at other sites, haemodialysis and the presence of a Hickman catheter were associated with increased risk of candidaemia.13 Similarly, age over 65 years, steroid use, leucocytosis and prolonged ICU stay were risk factors for Candida BSI in a series of 130 cases.18 Surgery, steroids, chemotherapy and neutropaenia with malignancy are other identified risk factors.19
Candida BSI has a very high mortality rate; attributable mortality varies from 5-71% in different studies.5,12,16,20 Even with treatment, mortality is high: in a study by Oude Lashof et al., of 180 patients treated for candidaemia, 33% died during treatment and 55% completed treatment without complications.21 Risk factors for increased mortality in patients receiving antifungal treatment are delayed antifungal treatment and inadequate dosing.22 In a multivariate analysis of 157 patients with Candida BSI, APACHE II score, prior antibiotic treatment and delay in antifungal treatment were independent risk factors for mortality, with odds ratios of 1.24, 4.05 and 2.09, respectively.23 Delayed treatment is also associated with increased fluconazole resistance compared with early and preventive treatment.24 Inadequate antifungal dosing and retention of central venous catheters were also associated with increased mortality in a study of 245 Candida BSI, with adjusted odds ratios of 9.22 and 6.21, respectively.25,26
Candida albicans accounts for 38.8-79.4% of cases of Candida BSI; C. glabrata is responsible for 20-25% of cases of candidaemia and C. tropicalis for less than 10% of cases in the US.9,20 ICU patients are frequently colonised with different Candida species. Candida colonisation can be from either endogenous or exogenous sources. Colonisation rates vary with the site: tracheal secretions (36%), throat swabs (27%), urine (25%) and stool (11%).27 Candida colonisation increases with the duration of stay, use of urinary catheters and use of antibiotics.28,29,30
The role of Candida colonisation in Candida BSI is frequently debated. Some studies have suggested that Candida colonisation of one or more anatomical sites is associated with increased risk of candidaemia.31,32,33,34 In two studies, 84-94% of patients developed candidaemia within a mean of 5-8 days after colonisation,35,36 whereas in another study only 25.5% of colonised patients developed candidaemia.37 Similarity between the strain identified in blood culture and that identified at various colonising sites was observed in one study.38 Candida colonisation by exogenously acquired species has also been implicated as a cause of candidaemia.39 In one study, 18-40% of cases of candidaemia were associated with clustering, defined as “isolation of 2 or more strains with genotypes that had more than 90% genetic relatedness in the same hospital within 90 days”.40 Similar correlations for clusters have been noted for C. tropicalis candiduria41 and for C. parapsilosis.42 In a prospective study of 29 surgical ICU patients colonised with Candida, the APACHE II score, length of previous antibiotic therapy and intensity of Candida colonisation were associated with a significant risk of candidaemia. The Candida colonisation index, calculated as the number of non-blood body sites colonised by Candida divided by the total number of distinct sites tested, was associated with 100% positive and negative predictive values for candidaemia.29 Other studies do not support Candida colonisation as a risk factor for candidaemia. In a case-control study of trauma patients, only total parenteral nutrition was associated with an increased risk of candidaemia; Candida colonisation, steroid use, central venous catheters, APACHE II score, mechanical ventilation for more than 3 days, number and duration of antibiotics, haemodialysis, gastrointestinal perforation and number of units of blood transfused in the first 24 hours after surgery were not significant risk factors.43 The NEMIS study found that in a surgical ICU, prior surgery, acute renal failure, total parenteral nutrition and triple-lumen catheters were associated with increased risk of candidaemia, with relative risks of 7.3, 4.2, 3.6 and 5.4, respectively; Candida colonisation in urine, stool or both was not associated with increased risk of candidaemia.15
The effect of Candida colonisation of the respiratory tract on candidaemia, mortality and morbidity is unclear. In a retrospective study of 639 patients, Candida respiratory tract colonisation was associated with increased hospital mortality (relative risk 1.63) and increased length of stay (median increase of 21 days).30 In a study of 803 patients by Azoulay et al., respiratory tract colonisation was associated with prolonged ICU and hospital stays, and colonised patients were at increased risk of ventilator-associated Pseudomonas pneumonia (odds ratio 2.22).44 However, in a postmortem study of 25 non-neutropaenic mechanically ventilated patients, 40% of the patients were colonised with Candida but only 8% had Candida pneumonia.45,46 Jordi et al. found that among 37 patients, definite or possible colonisation was present in 89%, yet only 5% of cases were defined as Candida BSI.47 The effect of candiduria is also ill-defined, though it has been implicated as a risk factor in some studies. In a study by Bross et al., central lines, bladder catheters, two or more antibiotics, azotaemia, transfer from another hospital, diarrhoea and candiduria were significant risk factors for candidaemia; candiduria carried an odds ratio of 27 for the development of candidaemia.48 Similar findings for candiduria were noted by Alvarez-Lerma et al.49
The IDSA recommends starting empirical antifungal treatment in high-risk neutropaenic patients who fail to improve on antibiotics after 4 days. The IDSA makes no recommendation to start empirical antifungal therapy in low-risk neutropaenic or non-neutropaenic patients, because of the low risk of candidaemia.8 However, early detection of Candida BSI is vital because of the increased mortality associated with delayed antifungal treatment and failure to remove central venous lines. Early detection of Candida BSI in a colonised patient can be facilitated by using a score based on risk factors.50,51 Similarly, β-D-glucan assays can be used in patients colonised with Candida to determine Candida BSI and the need for antifungal treatment.52 Combined use of such risk factor scoring systems and β-D-glucan assays will help to detect candidaemia at earlier stages and decrease mortality. Our study suggests that total parenteral nutrition, previous or current antibiotic use, central lines, candiduria and respiratory tract colonisation are risk factors for Candida BSI. In our candidaemia risk score system, a score of more than 2 is associated with a higher risk of Candida BSI. This scoring system, along with β-D-glucan assays, can be used to detect Candida BSI at earlier stages.
Conclusion:
Our study suggests that urine or respiratory tract colonisation is associated with an increased risk of Candida BSI, along with total parenteral nutrition, central venous lines and previous or current antibiotic use. We derived a scoring system which can be used along with a β-D-glucan assay to detect candidaemia earlier.
Routine pulse palpation is the recommended screening method to detect asymptomatic atrial fibrillation (AF) in clinical practice¹. Since this is part of the blood pressure (BP) measurement technique when using the Riva-Rocci (mercury) device or the aneroid device, most patients are evaluated for rhythm irregularity while having their BP checked; if the pulse is not palpated, heart rhythm can still be evaluated through auscultation of the Korotkoff sounds. Under European Community law (Directive 2007/51/EC of 27 September 2007), mercury sphygmomanometers may no longer be sold, so aneroid or automatic devices will replace them within a few years. Recently, new devices with embedded algorithms to detect an irregular heart beat and possible AF have been commercialised. Whether the switch from the Riva-Rocci or aneroid sphygmomanometer to such a device will affect the detection of AF in usual care is unknown. We explored this issue using a retrospective, naturalistic observation of a group of GPs who abandoned the “old” Riva-Rocci or aneroid sphygmomanometer and adopted this new device.
Methods
In September 2011 the members of the Italian College of General Practitioners based in Bologna (a medium-sized city in Northern Italy) decided to standardise their office BP measurements. They received an unconditional grant of 30 automatic upper-arm blood pressure monitors (Microlife AFIB®) to be used in the office by the GPs themselves. This device embeds an algorithm that calculates an irregularity index (the standard deviation of the interval times between heartbeats divided by their mean); if the irregularity index is above a certain threshold value, atrial fibrillation is likely to be present and an atrial fibrillation icon is displayed on the screen. The 30 general practitioners who received the device agreed to a later proposal to examine their databases to evaluate the detection of new AF patients. They all used the same professional software (Millewin®), and data were extracted automatically. All patients with a recorded diagnosis of hypertension were identified; BP recordings and AF diagnoses were then extracted for the periods before (the 365 days preceding the use of the Microlife device) and after (the 4 months from starting its use) the adoption of the automatic devices. The proposal to examine AF detection was made four months after the GPs received the devices, so they were unaware of this study during their usual professional activity. The study was neither planned by nor known to Microlife. Sixteen other GPs, who were using the traditional device, volunteered to provide the same data extraction from their personal databases.
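As described, the device's irregularity index is a coefficient of variation over inter-beat intervals. The sketch below illustrates the calculation; the threshold value is a placeholder assumption, since the device's actual cut-off is proprietary and not given here.

```python
# Irregularity index as described: standard deviation of inter-beat (RR)
# intervals divided by their mean; above a threshold, possible AF is flagged.
import statistics

def irregularity_index(rr_intervals_ms):
    """Coefficient of variation of inter-beat (RR) intervals."""
    return statistics.stdev(rr_intervals_ms) / statistics.fmean(rr_intervals_ms)

AF_THRESHOLD = 0.06  # placeholder; the device's actual cut-off is not published here

regular   = [802, 795, 810, 805, 798, 801]   # ms, sinus-like rhythm
irregular = [640, 910, 720, 1050, 580, 880]  # ms, AF-like rhythm

for rr in (regular, irregular):
    idx = irregularity_index(rr)
    print(f"index = {idx:.3f} -> {'possible AF' if idx > AF_THRESHOLD else 'regular'}")
```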
Results
The 30 participating GPs cared for 48,184 individuals, 12,294 (25.5%) of whom had hypertension (mean age 69.9±13.4 years). The 16 control GPs cared for 23,218 patients, of whom 5,757 (24.8%) had hypertension (mean age 69.7±13.6 years). The four-monthly AF detection rates for the original group and the control group are reported in Table 1. All newly detected cases of AF were subsequently confirmed on ECG. Statistical analysis was performed with the chi-square (χ²) test.
Table 1: Four-monthly AF detection rate in the original GP group and in the control group*
N° GPs and (n° hypertensive patients)
Detected AF % and (n° pts) October 2010- January 2011
Detected AF % and (n° pts) February 2011- May 2011
Detected AF % and (n° pts) June 2011- September 2011
Detected AF % and (n° pts) October 2011-January 2012
30 (12294) - original group
0.37% (46) *
0.3% (39) *
0.37% (45) *
0.63% (77) **
16 (5757) - controls
0.35% (20) ‡
0.45% (26) ‡
0.56% (32) ‡
0.33% (19) ‡‡
*‡ Use of the traditional device; original group vs controls: p = NS (χ² = 3.0421, df = 1). ** Use of the automatic device (in the other quarters the traditional device was used). **‡‡ Original group, automatic device vs traditional device in AF detection: p < 0.005 (χ² = 9.487, df = 1).
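For readers wishing to repeat the comparisons, each cell of Table 1 can be tested against another with a 2×2 chi-square on detected versus non-detected counts. The sketch below shows the form of such a test using the final-quarter counts for the two groups; because the exact pairings and any continuity correction used in the original analysis are not specified, the resulting statistic will not necessarily match the footnoted values.

```python
# 2x2 chi-square on AF detection: original group vs controls in the quarter
# when the original group used the automatic device (October 2011 - January 2012).
from scipy.stats import chi2_contingency

#          detected AF, no detected AF
table = [[77, 12294 - 77],   # original group, n = 12,294 hypertensives
         [19, 5757 - 19]]    # control group,  n = 5,757 hypertensives

# correction=False gives the plain (uncorrected) Pearson chi-square
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```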
Discussion
Atrial fibrillation can be difficult to diagnose as it is often asymptomatic and intermittent (paroxysmal). The irregularity of the heart rhythm can be detected by palpation of the pulse. AF may therefore be detected in patients who present with symptoms such as palpitations, dizziness, blackouts and breathlessness, but may also be an incidental finding in asymptomatic patients during routine examination. The diagnosis must be confirmed with an ECG, which should be performed in all patients, whether symptomatic or not, in whom atrial fibrillation is suspected because of an irregular pulse. Heart rhythm can be evaluated while measuring BP with traditional sphygmomanometers, but this information may be lost with automatic devices; the use of automatic devices with algorithms that can detect possible AF is therefore an appealing choice. The hypothesis that these devices are equal or superior to systematic pulse palpation is currently under investigation by NICE². At present, the consequences of switching from the classical Riva-Rocci devices to these new ones in usual care are not known. Opportunistic AF screening in people aged over 65 yields a detection rate of 1.63%, while usual care has a detection rate of 1.04%, very similar to that observed in our hypertensive population (1.13%)³. Our data show that, at least in the short term, switching from the usual device to an automatic device with an algorithm for irregular-beat detection increases the identification rate of previously unknown AF in the hypertensive population. While awaiting a formal appraisal, GPs who wish to, or must, give up their “old” Riva-Rocci device can adopt this new device while maintaining, or improving on, their “usual care” performance.
The effective relief of pain is of paramount importance to anyone treating patients undergoing surgery. Not only does effective pain relief mean a smoother postoperative course with earlier discharge from hospital, it may also reduce the onset of chronic pain syndromes1. Regional anaesthesia is a safe, inexpensive technique with the advantage of prolonged postoperative pain relief. Research continues into different techniques and drugs that could prolong the duration of regional anaesthesia and postoperative pain relief with minimal side effects1. Magnesium is the fourth most plentiful cation in the body and has antinociceptive effects in animal and human models of pain2,3. Previous studies have demonstrated the efficacy of intrathecally administered magnesium in prolonging intrathecal opioid analgesia without an increase in side effects. These effects have prompted the investigation of epidural magnesium as an adjuvant for postoperative analgesia4.
Midazolam, a water-soluble benzodiazepine, has a proven epidural analgesic effect in patients with postoperative wound pain. Serum concentrations of midazolam after epidural administration are smaller than those producing sedative effects in humans5.
The purpose of this study was to compare the analgesic efficacy of epidural magnesium with that of midazolam when administered with bupivacaine in patients undergoing total knee replacement.
Methods:
After obtaining the approval of the Hospital Research & Ethics Committee and patients’ informed consent, 120 ASA I and II patients of both sexes, aged 50-70 years and undergoing total knee replacement surgery, were enrolled in this randomised, double-blinded, placebo-controlled study. Patients with renal or hepatic impairment, cardiac disease, spinal deformity, neuropathy or coagulopathy, or receiving anticoagulants for any reason, were excluded from the study.
Prior to surgery, the epidural technique as well as the visual analogue scale (VAS; 0: no pain; 10: worst pain) and the patient-controlled epidural analgesia device (PCEA) were explained to the patients.
The protocol was similar for all patients. Patients received no premedication. Heart rate (HR), mean arterial pressure (MAP) and oxygen saturation (SpO2) were measured. Intravenous access was established and an infusion of crystalloid commenced.
Before the induction of anaesthesia, the epidural space was located at the L3-L4 or L4-L5 intervertebral space under local anaesthesia using the loss-of-resistance technique, and correct position was confirmed by injection of 3 ml of lidocaine 2% with epinephrine 1:200,000; an epidural catheter was then inserted into the epidural space. The level to be blocked was up to T10. In a double-blind fashion, using a sealed-envelope technique, patients were randomly allocated to one of three equal groups to receive via the epidural catheter either 50 mg magnesium sulphate (MgSO4) in 10 ml as an initial bolus followed by an infusion of 10 mg/h (diluted in 10 ml saline) during surgery (Mg group), or 10 ml saline followed by an infusion of saline 10 ml/h during surgery (control group), or 0.05 mg/kg midazolam in 10 ml saline followed by an infusion of saline 10 ml/h during surgery (midazolam group). All patients received epidural bupivacaine 0.5% at a dose of 1 ml/segment.
Sensory block was assessed bilaterally using loss of temperature sensation with an ice cube. Motor block was evaluated using a modified Bromage scale6 (0: no motor block, 1: inability to raise extended legs, 2: inability to flex knees, 3: inability to flex ankle joints). During the operation, epidural bupivacaine 0.5% was given, if required, to achieve a block above T10. MAP, HR, SpO2 and respiratory rate (RR) were recorded before and after administration of the epidural medications and every 5 minutes until the end of surgery.
When surgery was complete, all patients received PCEA using a PCEA device (Infusomat® Space, B. Braun, Germany) containing fentanyl 2 µg/ml and bupivacaine 0.08% (0.8 mg/ml). The PCEA was programmed to administer a demand bolus dose of 5 ml with no background infusion and a lockout interval of 20 min. The PCEA bolus volume was titrated according to analgesic effect or the occurrence of side effects. Patients’ first analgesic requirement times were recorded; the time from the completion of surgery until the first use of rescue medication by PCEA was defined as the time to first requirement for postoperative epidural analgesia. A resting pain score of ≤3 was considered satisfactory pain relief. If patients had inadequate analgesia, supplementary rescue analgesia with intramuscular pethidine 50 mg was available. MAP, HR, SpO2, RR and pain assessment using the VAS were recorded at 30 minutes, and then at 1, 2, 4, 8, 12 and 24 h postoperatively. Epidural fentanyl consumption was recorded at the same time points. Patients were discharged to the ward when all haemodynamic variables were stable, motor block had completely resolved, pain relief was satisfactory, and nausea and vomiting were absent. Adverse events related to the epidural drugs (sedation, respiratory depression, nausea, vomiting, prolonged motor block) and the epidural catheter were recorded throughout the 24-h study period. Sedation was assessed with a five-point scale (1: alert/active, 2: upset/wary, 3: relaxed, 4: drowsy, 5: asleep). An anaesthesiologist blinded to the drug given performed all assessments.
The results were analysed using SPSS version 17. The number of subjects enrolled was based on a power calculation for detecting a 20% change in HR and MAP, with an α error of 0.05 and a type II error of 0.20. Numerical data are presented as median and 95% CI. The groups were compared with analysis of variance (ANOVA). The VAS pain scores were analysed by the Mann-Whitney U test. Categorical data were compared using the chi-square test. A P value of less than 0.05 was considered significant.
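For context, the sample-size calculation described (two-sided test, α = 0.05, power = 0.80) can be sketched as follows. The standardised effect size is our assumption, since converting a "20% change in HR and MAP" into Cohen's d requires the underlying standard deviations, which are not reported.

```python
# Sample-size sketch: two-sided t-test, alpha = 0.05, power = 0.80.
# The effect size (Cohen's d = 0.65) is an assumed illustration.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.65,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(f"required n per group ~ {n_per_group:.0f}")  # ~38, consistent with 40 per group
```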
Results:
The three groups were comparable with respect to age, weight, height, sex, ASA status and duration of surgery (Table 1). The groups were also comparable with regard to intraoperative and postoperative MAP, HR (Figures 1 and 2), RR and SpO2 during the observation period, with no case of haemodynamic or respiratory instability. No difference in the quality of sensory and motor block before and during surgery was noted between groups, and none of the patients required supplemental analgesia during surgery.
Control
Mg
Midazolam
Number of patients
40
40
40
Sex (female/male)
17/23
20/20
19/21
Age (yrs)
59.5 ± 6.1
61.1 ± 4.9
61.9 ± 3
ASA (I/II)
12/28
14/26
11/29
Weight (Kg)
69.7 ± 4.2
66.9 ± 6.7
70.1 ± 5.5
Height (cm)
165.9 ± 8.6
170.2 ± 4.5
167.2 ± 6.9
Duration of surgery (min)
144 ± 21
129 ± 30
130 ± 27
(Median and 95% CI, or number). No significant differences among groups.
Table 1: Demographic data and duration of surgery.
Figure 1: Heart rate changes (HR) of study groups. Data are mean±SD.
Figure 2: Mean Arterial pressure changes (MAP) of study groups. Data are mean±SD.
The intraoperative VAS was significantly lower in the magnesium and midazolam groups compared with the control group after 15 and 30 minutes (Figure 3), whereas the postoperative VAS was significantly lower in the magnesium group in the first postoperative hour compared with the other groups (Figure 4).
Figure 3: The intra-operative Visual analogue score of study groups. Data are mean±SD.
Figure 4: The post-operative Visual analogue score of study groups. Data are mean±SD.
The time of the first request for postoperative analgesia was significantly delayed and the number of patients requesting postoperative analgesia was significantly reduced in the magnesium group (Figure 5). Moreover, pethidine rescue analgesia consumption and the total amount of postoperative fentanyl infused were significantly lower in the magnesium group compared with the other groups (Table 2, Figure 5).
Control
Mg
Midazolam
P
Pethidine (mg)
92.38±10.91
52.56±9.67
70±9.23
0.014*
Total fentanyl infusion (µg)/24 h
320.67±112.19
219.9±56.86
256.2±53.49
<0.001*
Data are expressed as median and 95% CI. * Significant difference (P < 0.05).
Table 2: Pethidine rescue analgesia and total fentanyl infusion over 24 hours of study groups
Figure 5: The number of patients and time of requesting analgesia in the first 3 postoperative hours in the study groups. Data are numbers.
No significant differences were recorded regarding the incidence of sedation or any adverse effects between groups (Table 3).
Control
Mg
Midazolam
P
Sedation
0
0
2
0.068
Bradycardia
1
0
0
0.103
Nausea & Vomiting
3
1
2
0.571
Data are expressed as numbers. Significant difference (P < 0.05).
Table 3: Incidence of sedation, bradycardia and nausea & vomiting in the study groups
Discussion:
The efficacy of postoperative pain therapy is a major issue in the functional outcome of surgery7. It is evident that epidural analgesia, regardless of the agent used, provides better postoperative analgesia than parenteral analgesia. The addition of adjuvants to local anaesthetics in epidural analgesia has gained widespread popularity, as it provides significant analgesia while allowing a reduction in the amount of local anaesthetic and opioid administered for postoperative pain, and thus in the incidence of side effects9.
Our study demonstrates a significant intraoperative improvement in VAS in magnesium and midazolam groups, while in the postoperative period magnesium group showed a significant reduction in the number of patients requesting early postoperative analgesia as well as total fentanyl consumption.
The antinociceptive effects of magnesium are primarily based on the regulation of calcium influx into the cell, through calcium antagonism and antagonism of the N-methyl-D-aspartate (NMDA) receptor. Tanmoy and colleagues10 evaluated the effect of adding MgSO4 as an adjuvant to epidural bupivacaine in lower abdominal surgery and reported a reduction in the time of onset and establishment of epidural block. Arcioni and colleagues11 showed that combined intrathecal and epidural MgSO4 supplementation reduces postoperative analgesic requirements. Farouk et al12 found that continuous epidural magnesium started before anaesthesia provided pre-emptive analgesia and an analgesic-sparing effect that improved postoperative analgesia. Similarly, Bilir and colleagues4 showed that the time to first analgesic requirement was slightly longer, with a significant reduction in fentanyl consumption, after starting epidural MgSO4 infusion postoperatively. Asokumar and colleagues13 found that the addition of MgSO4 prolonged the median duration of analgesia after intrathecal drug administration.
On the other hand, Ko and colleagues14 found that perioperative intravenous administration of magnesium sulphate 50 mg/kg did not reduce postoperative analgesic requirements, which could be attributed to the finding that perioperative intravenous MgSO4 did not increase CSF magnesium concentration, owing to its inability to cross the blood-brain barrier.
Nishiyama et al17,18,19 reported that epidural midazolam was useful for postoperative pain relief. It has been suggested that epidurally administered midazolam exerts its analgesic effects through the γ-aminobutyric acid (GABA) receptors in the spinal cord, particularly in lamina II of the dorsal horn,15 as well as through opioid receptors. Nishiyama et al20 showed that intrathecally administered midazolam and bupivacaine had synergistic analgesic effects on acute thermal- or inflammatory-induced pain, with decreased behavioural side effects. Kumar et al21 reported that single-shot caudal co-administration of bupivacaine with midazolam 50 µg/kg was associated with an extended duration of postoperative pain relief in lower abdominal surgery, and Jaiswal et al22 concluded that epidural midazolam can be a useful and safe adjunct to bupivacaine for epidural analgesia during labour.
In the present study, there were no significant haemodynamic changes between groups. This is in agreement with many authors who used epidural MgSO44,12,23 and midazolam24 and did not report any haemodynamic or respiratory instability during the observation period.
This study did not record any neurological or epidural-drug-related complications postoperatively. Our results are in accord with trials that have previously examined the neurological complications of epidural MgSO4.11,12,23 Moreover, Goodman and colleagues25 found that inadvertent administration of larger doses of MgSO4 (8.7 g and 9.6 g) through an epidural catheter did not reveal any neurological side effects.
Regarding epidural midazolam, Nishiyama19 reported that epidural administration of midazolam has a wide safety margin for neurotoxicity of the spinal cord because of the small dose used.
Our results did not reveal any significant difference in sedation scores. This is in agreement with Bilir et al4 and El-Kerdawy23, who did not report any cases of drowsiness or respiratory depression when using epidural magnesium.
De Beer et al26 and Nishiyama et al27 reported that 50 µg/kg appears to be the optimum dose for epidural midazolam; many patients fell into complete sleep, with no response to verbal command, and developed respiratory depression when epidural midazolam 0.075 mg/kg or 0.01 mg/kg was used. Moreover, Nishiyama et al17,28 reported that when 50 µg/kg epidural midazolam was used, the serum midazolam concentration was less than 200 ng/ml, which is considered the lower limit for sedation by intravenous administration.
In conclusion, co-administration of epidural magnesium provides better intraoperative analgesia and an analgesic-sparing effect on PCEA consumption, without increasing the incidence of side effects, compared with bupivacaine alone or with co-administration of epidural midazolam in patients undergoing total knee replacement. The results of the present investigation suggest that magnesium may be a useful adjuvant for epidural analgesia.
Driven by a global rise in opioid dependence, Opioid Substitution Treatment (OST), the prescribing of opioids (usually methadone or buprenorphine) as maintenance treatment, has expanded worldwide over the last two decades3. Participation in OST reduces the risk of death by overdose4, reduces the risk of HIV transmission5 and reduces participants’ involvement in property crime6. For these reasons, maintenance with methadone remains the major public health response to reduce the harms caused by heroin addiction.
In the United Kingdom (UK) in the late 1990s, government funding to expand access to OST was provided, with the explicit objective of reducing crime7. The expansion of treatment was supported with clinical guidelines2, and targets were set to try to ensure good outcomes. Given the research evidence on the importance of retention in producing better outcomes, service providers were set a target of retaining at least 75% of people in treatment for 3 months. A tool to monitor outcomes, the Treatment Outcomes Profile (TOP)8, was developed and service providers nationally were set a target of 80% of people in OST completing TOP at entry and after 6 months9. This 20-item self-report questionnaire records a set of core data for the previous 28 days, including the number of days on which heroin and cocaine have been used.
The amount of methadone prescribed in England and Scotland increased fourfold over the decade 1998 – 20083. However, in 2010, Britain’s newly-elected government signalled a change in the direction of drug policy1. The paradigm on which the new policy is based is “recovery”, a concept embracing self-help, mutual support, and optimism about the possibility of positive change. The policy is in part driven by the perception that treatment services have a defeatist attitude, expecting little positive change – hence the claim that there are too many patients “parked on methadone”. To counteract this perceived pessimism, the “recovery agenda” includes incentives to services to promote abstinence from all drugs including prescribed OST medication. This policy has been criticized as being inconsistent with the available evidence10, but has been defended on the grounds that many patients on methadone were doing poorly, and needed encouragement to make positive changes in their lives.
In 2010, we decided to investigate to what extent people were responding poorly to treatment, and whether this could be improved by implementation of evidence-based treatment.
Methods
This quality improvement project was undertaken in two OST clinics in Merseyside, managing in total over 1000 patients. The services had the same senior leadership and medical staff, but separate teams of nurses and key workers. Supervised administration was provided by local retail pharmacies.
In October 2010 key workers were provided with a list of patients currently under their care and asked to identify those they thought were using heroin regularly. A research assistant then checked the case notes of identified patients, looking at self-reported heroin use as recorded on TOP monitoring forms and at the results of previous urine toxicology tests. Those whose most recent TOP was performed at entry to treatment were excluded (since their self-reported heroin use covered a time when they were not in treatment). Among the remainder, patients reporting use of heroin on at least 8 days in the 4 weeks preceding their last TOP interview were classified as “non-responding” patients. The case notes of all identified “non-responders” were reviewed using an audit tool covering age, sex, postcode, date of entry into treatment, duration of treatment, dose of medications, extent of supervised administration, dates and results of recent urine toxicology, and dates and self-reported drug use from previous TOP questionnaires. These data were collected at baseline and again at re-audit (follow-up) 9 months later.
Postcodes were used to derive Index of Multiple Deprivation (IMD) scores11. The English IMD is a measure of multiple deprivation, with domains including employment deprivation, health deprivation and disability, education, skills and training deprivation, barriers to housing and services, living environment deprivation, and crime.
In one clinic, the “implementation clinic”, key workers were asked, beginning in January 2011, to refer all non-responders for a medical review. Patients were also screened for comorbidity, taking advantage of a separate project running concurrently that was designed to test the psychometric properties of a new questionnaire on mental health and well-being. All service users at the implementation clinic were invited to take part. The study had National Research Ethics approval and approval from the Mersey Care NHS Trust R&D Office. Quality of life was assessed with the EQ-5D12, which comprises 5 domains measuring health-related quality of life: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Depression was screened for with the Beck Depression Inventory13.
UK guidelines recommend for patients doing poorly “…ensuring medication is provided within evidence-based optimal levels, changing to another substitute medication, increasing key working or psychosocial interventions and increasing supervised consumption”3. The recommended dosage for effective treatment is listed as being in the range 60-120 mg/day of methadone. At medical review, the plan was for the doctor to assess the non-responding patients and propose raising the methadone dose progressively until heroin use ceased or a maximum dose of 120 mg/day was reached, and to require supervised consumption of methadone for patients persisting in heroin use.
Establishing the medical reviews in only one of the two clinics was necessary for logistic reasons, but it also provided an opportunity to assess the impact of the reviews by comparing the outcomes of non-responders in the two clinics. If the reviews proved effective, it was proposed to extend this approach to the second, “treatment as usual”, service. Referrals for medical review ceased in June 2011, and over the next three months staff feedback about the process was sought. In October 2011 a repeat audit of the case notes, including TOP results, of all previously identified non-responders at both services was undertaken.
At follow-up, data on the frequency of medical appointments in the preceding 6 months were also collected. Where people had left treatment, the TOP performed on exit from treatment was used. Non-responders who had left treatment were identified and tabulated according to the reason for leaving treatment.
Flowchart 1: The audit and re-audit process
Ethics
The audit was approved by the local NHS Trust R&D Office. Funding to undertake the work was obtained from Mersey Care NHS Trust.
Analysis
Data were entered into SPSS version 18 (for Windows). Summary statistics and standard hypothesis tests compared non-responders in the intervention service to non-responders in treatment as usual, to ensure there were no statistically significant differences between the two groups at baseline. Chi-square and t-tests compared age, sex distribution, IMD scores, methadone dose and months in treatment this episode. Mann-Whitney U tests compared the number of TOP forms completed in each group during the previous 6 and 12 months. Regression analysis explored whether there was a relationship between attendance for supervised administration, self-reported quality of life and depression for non-responders in the implementation group. Differences between baseline and 9-month re-audit in methadone dose and heroin use were tabulated for each group; Mann-Whitney U tests compared differences between the two groups, and differences within each group were compared using the Wilcoxon signed-ranks test.
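The analysis was run in SPSS; purely as an illustration, the same comparisons could be reproduced in Python with scipy, as sketched below. The data frame and column names are assumptions, not the authors' files.

```python
# Illustrative re-creation of the main comparisons using scipy; the CSV
# export and its column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("nonresponders.csv")          # one row per non-responder
imp = df[df["service"] == "implementation"]
tau = df[df["service"] == "TAU"]

# Baseline comparability: t-test for age, chi-square for sex distribution
print(stats.ttest_ind(imp["age"], tau["age"], nan_policy="omit"))
print(stats.chi2_contingency(pd.crosstab(df["service"], df["sex"])))

# Between-group change in methadone dose: Mann-Whitney U on the differences
print(stats.mannwhitneyu(imp["dose_t2"] - imp["dose_t1"],
                         tau["dose_t2"] - tau["dose_t1"]))

# Within-group change in heroin use: Wilcoxon signed-ranks test
print(stats.wilcoxon(imp["heroin_days_t1"], imp["heroin_days_t2"]))
```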
Results
The implementation service managed 534 patients, of whom 130 (24%) were initially identified as non-responders, reporting heroin use on 8 or more days in the previous month at their last TOP interview. At the TAU service there were 485 patients, of whom 112 (23%) were identified as non-responders. Of the 242 non-responders in total, 67 (28%) were new to treatment and were excluded. This is illustrated in Flowchart 2.
Flowchart 2: Sample Algorithm
Approximately 50% of the non-responders in each group reported daily heroin use at baseline. The two groups of non-responders did not differ significantly in terms of age, sex distribution, or Index of Multiple Deprivation scores (a mean of 62 reflecting very severe social exclusion in both groups). Non-responders in the implementation service had been in treatment a median of 18 months, compared to 17 months for those in treatment as usual. Urine testing was performed infrequently in both services, but a result was available from the six months prior to baseline for 133 of the remaining 175 subjects. The urine test results were broadly consistent with the patients' self-reports. Aspects of treatment at the two services differed, as shown in Table 1. At baseline, doses did not differ significantly, but the treatment as usual group was significantly less likely to have their methadone administration supervised, and had less frequent TOP monitoring.
Table 1 Profile of non-responders and their treatment at baseline

| | Implementation | TAU | Total |
|---|---|---|---|
| N | 104 | 71 | 175 |
| Mean age in years (min, max) | 42 (25, 66) | 43 (23, 63) | 42 (23, 66) |
| Male (%) | 65 (63%) | 48 (68%) | 113 (65%) |
| Mean IMD score (SD) | 62 (14.6) | 62 (14.7) | 62 (14.7) |
| Mean methadone dose in mg (SD) | 60 (17.8) | 60 (21.3) | 60 (20.3) |
| Median months in this Rx episode (IQR) | 18 (20) | 17 (10) | 18 (14) |
| Any supervised doses | 56 (54%) | 22 (31%)* | 78 (45%) |
| Last TOP > 6 months ago | 15 (15%) | 29 (42%)** | 44 (25%) |

*Pearson chi-square 9.995, df=2, p=0.007. **Mann-Whitney U=2654, p=0.002.
Despite almost all non-responders being booked in for an appointment and given reminders at the implementation service, only 47 (45%) of the 104 identified attended at least one medical review. Keyworkers commented that the main reason for non-attendance was that clients were quite happy continuing heroin use and did not see stopping as something they wanted to do. When patients were told they would only receive their prescription renewal after attending, some patients chose to go without methadone and make contact a few days later, rather than attend an appointment. Among those who did attend, there was frequently resistance to increasing their methadone dose, and anger at the suggestion that medication administration should be supervised. Word of mouth spread through the service that doctors were proposing dose increases and more supervision. This increased resistance among patients, and appears to have generated some resistance among keyworkers, some of whom saw their role as advocates for the patients.
The attempt to implement change in one clinic appears to have had small effects, increasing average doses there and leading to more patients being seen by a doctor. Between baseline and 9-month re-audit (follow-up), mean methadone doses increased in the implementation group and fell in the TAU group, as shown in Table 2. The difference in dose change between the two groups was statistically significant (Mann-Whitney U=2745, p=0.002), but the mean dose increase in the implementation group (3mg) was small. In the 6 months prior to the collection of follow-up data, medical reviews in both services were infrequent; 36% of patients in the implementation group and 66% of patients in the TAU group had not seen a doctor in their OST service (chi-square=13.38, df=1, p=0.001).
In both groups, the reductions in heroin use over time were statistically significant (Wilcoxon signed-ranks test, p < 0.05), but the change in heroin use over time did not differ significantly between the two services (Mann-Whitney U=2832.5, p=0.7). The changes from baseline audit to 9-month re-audit are shown in Table 2. Among the 47 patients who attended a medical review, the mean prescribed methadone dose rose from 58 to 66mg/day, but the number receiving supervised doses actually fell, from 23 at baseline to 20 at follow-up. Mean days of reported heroin use fell from 20 to 12 (6 patients reported abstinence) – changes almost identical to those observed in the TAU group.
Table 2 Changes in dose and heroin use between baseline (T1) and follow-up/re-audit (T2)

| | Implementation T1 | Implementation T2 | TAU T1 | TAU T2 |
|---|---|---|---|---|
| N | 104 | 103 | 71 | 68 |
| Mean self-reported heroin days/28 (SD) | 19.9 (8.6) | 13.4 (10.8) | 19.6 (8.3) | 11.7 (10.8) |
| Reported daily heroin use | 52 (50%) | 33 (32%) | 25 (42%) | 17 (25%) |
| Heroin abstinence | - | 14 (14%) | - | 15 (22%) |
| Urine test morphine positive (%) | 88% | 76% | 85% | 70% |
| Mean daily methadone dose (mg) | 59.5 | 62.9 | 60.1 | 57 |
| Proportion self-reporting cocaine use | 67% | 54% | 53% | 44% |
| Urine test cocaine positive | 66% | 57% | 58% | 45% |
29 non-responders (28%) from the implementation service and 27 (38%) from the TAU service had left the service between baseline and the 9-month re-audit. Most discharges (31/56) were transfers to another service as part of a local policy to move more people into treatment in primary care. Eight patients from the implementation service dropped out of treatment, and 4 patients from the TAU service did so. Differences in the pattern of leaving the two services did not approach significance.
Table 3 Reason for discharge

| Reason | Implementation | TAU | Total |
|---|---|---|---|
| Transfer of Rx | 13 | 17 | 30 |
| Did not attend (DNA) | 8 (28%) | 4 (15%) | 12 |
| Elective withdrawal | 3 | 3 | 6 |
| Deceased | 2 | 0 | 2 |
| Prison/drug diversion program | 3 | 3 | 6 |
| Total | 29 | 27 | 56 |
44 non-responders who attended a medical review at the implementation service completed questionnaires on health, quality of life, and depression. Ninety-six percent were not in education, employment or training (NEET). On the Beck Depression Inventory, 50% of respondents reported depression in the moderate to severe range. Regression analysis indicated that having to attend for supervised doses was associated with less depression measured on the BDI (r=-0.332, p=0.039), and with better quality of life on the EQ-5D domains of self-care (r=-0.598, p<0.001) and being able to undertake usual activities (r=-0.605, p<0.001).
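For illustration, correlations of this kind could be computed as below; the data frame and variable names are hypothetical stand-ins for the questionnaire data.

```python
# Hypothetical sketch of the reported correlations between supervised
# attendance and questionnaire outcomes; column names are illustrative.
import pandas as pd
from scipy import stats

q = pd.read_csv("review_questionnaires.csv")   # one row per reviewed patient

for outcome in ["bdi_total", "eq5d_self_care", "eq5d_usual_activities"]:
    r, p = stats.pearsonr(q["supervised_attendance"], q[outcome])
    print(f"{outcome}: r={r:.3f}, p={p:.3f}")
```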
Discussion
Many people persisting in heroin use were receiving care that was out of line with guidelines – doses below 60mg, often with no supervised doses, and seldom attending for medical reviews. However, the attempt to systematically implement guidelines was not effective. Most patients did not attend, and many of those who did attend resisted changes. Although patients who attended received slightly higher doses, changes in heroin use in the subset who actually attended for review were no different to the changes observed in the TAU group.
Higher methadone doses, and patients having control over their doses, have been shown in a meta-analysis to be independently predictive of better outcomes14. One possible explanation for the failure to implement guidelines is that it may have been perceived as challenging clients’ control over their treatment. If so, it was a challenge easily defeated. Patients clearly had substantial control over their treatment, choosing whether to attend appointments, whether to accept higher doses, and whether to accept supervised doses. However, this degree of control over their treatment did not appear to be beneficial. “Non-responders” reported depression, disability and a poor quality of life.
Guidelines need to move beyond systematic reviews of effectiveness, to include evidence about implementing evidence in a real-world setting15. Our conclusion is that the attempt to implement guidelines failed because the approach adopted was not congruent with clinic culture, which emphasised "support" rather than "structure". "Structure" refers to both cognitive and behavioural elements of treatment. The cognitive elements are defined and agreed objectives: a sense of the direction and purpose of treatment. In all areas of mental health, clinical interactions are most useful if focused on specific performance goals related to the patient's circumstances16. In the OST services studied, there appeared to be a focus on process and on supporting patients, rather than on achieving outcomes.
Structure also includes behavioural elements - expectations and rules regarding attendance, and daily attendance for supervised administration. Interviews with UK patients in OST have indicated that they understand and value the role of supervision, not only in minimizing diversion and misuse, but in providing an activity for many people without social roles17. Consistent with the benefits of supervision, in the current audit more supervision was associated with less depression and better quality of life.
This audit had several limitations. It did not attempt to measure the proportion of patients responding poorly to long-term methadone treatment, and it is possible that the true proportion may be higher than the 17% identified by key workers. Documentation of treatment outcomes, using TOP reports and UDS results, was unsystematic, limiting the number of patients in whom complete data was available. “Non-responders” self-reported heroin use to keyworkers, who administered the TOP questionnaire, and there may have been under-reporting. However, while this study may not have identified all non-responding patients, this does not invalidate the observation that attempting to implement guidelines was not successful.
Most importantly, the observations from these clinics may not be generalisable to other treatment settings. However, certain key data suggest that the treatment and outcomes observed in this study were not atypical. A report on national TOP monitoring noted patchy availability of follow-up data, and confirmed a high rate of persisting heroin use in treatment, with only 38% of participants reporting abstinence from heroin18. Despite this high rate of heroin use, a recent national survey reported a mean methadone dose of 56mg19. In this regard, the clinics in this report seem representative.
Medical staff appeared to have a peripheral role in the delivery of OST in these clinics. Most non-responders had not had a medical review in 6 months, despite persisting heroin use and self-reported depression. In the 1980s in the US, methadone treatment underwent a process labelled "demedicalisation": marginalisation of the role of medical practitioners, and a loss of the sense that methadone was a medical treatment with clearly defined objectives and guidelines20. This contributed to a situation in which much methadone treatment in the US was out of line with research evidence21. The current audit suggests that a similar process of demedicalisation and deviation from evidence-based treatment has been occurring in some NHS services in the UK.
If these observations are representative of at least some treatment culture in the UK, they lend support to the criticisms made of methadone treatment in the new UK drug strategy1. To the extent that the recovery agenda challenges clinic culture and shifts the focus of treatment onto outcomes, it is a positive development.
However, many well-intentioned policies have unintended consequences, and there are well-founded fears that the new policy promoting abstinence from OST as an objective of recovery will lead to an increase in overdose deaths3. This is because of the elevated risk of overdose after leaving treatment: newly abstinent addicts have reduced opioid tolerance, so a dose of heroin they previously used during periods of addiction becomes potentially fatal once they are abstinent. This risk attaches to all forms of drug-free treatment, as well as to patients who have left methadone. The critical issue is that lapses to heroin use, and relapses to dependent heroin use, are very common among newly abstinent addicts. It is the high probability of relapse to heroin use which is the basis of long-term maintenance treatment – better to keep people safe and functioning normally, albeit while still taking a medication, than to risk relapse and re-addiction, or relapse and fatal overdose. In the UK, implementation of the recovery agenda has included incentives to abstinence, and this is not consistent with the evidence about the risk of relapse. If the recovery agenda can accommodate indefinite maintenance as a valid option for many, perhaps most, heroin users, then the evidence of this study is that, far from being in contradiction, the recovery agenda may facilitate the implementation of evidence-based practice.
Postgraduate medical education in the United Kingdom has seen numerous dramatic changes in the last decade, with the introduction of structured training programmes and changes in the assessment of skills driven by Modernising Medical Careers.1 Overall, these new developments emphasise a competency-based curriculum and assessments. Alongside, and contingent on, these wider changes in medical education, psychiatric trainees have faced major transformations in their membership (MRCPsych) examinations.
The MRCPsych examination was first introduced in 1972, a year after the Royal College of Psychiatrists was founded. There have been various modifications in its structure since its inception, but a radical change occurred in the last decade with the introduction of an OSCE in 2003 and the CASC, a modified OSCE, in June 2008. The CASC is considered a high-stakes examination as it is now the only clinical, and the final, examination towards obtaining membership of the College. The MRCPsych qualification is considered an indicator of professional competence in the clinical practice of psychiatry and has the main aim of setting a standard that determines whether trainees are suitable to progress to higher specialist training.2 In his commentary on Wallace et al3, Professor Oyebode describes the aims, advantages and disadvantages of the various assessment methods used in the MRCPsych examination and concludes that precise assessment of clinical competence is essential.4
Traditionally, assessment of clinical skills has involved a long case examination, ever since this was introduced into the clinical graduating examination by Professor Sir George Paget at Cambridge, UK in 1842. This approach has been followed by most medical institutions worldwide and remained the clinical component of the MRCPsych examination until 2003. There are shortcomings with this assessment method, and the outcome can be influenced by several factors such as the varying difficulty of cases, the co-operation of the real patient and examiner-related factors. The reliability of assessing clinical competency with a single long case is low, and a candidate would need to interview at least ten long cases to attain the reliability required for a high-stakes examination like the MRCPsych.5 A fair, reliable and valid examination is necessary to overcome these difficulties. OSCEs proved to be one answer to these difficulties.
One important aspect of assessing the validity and acceptability of assessment methods is asking examiners and candidates about their experiences of and views on the examination once it has been rolled out. As far as the authors are aware, there has been one previously published survey of CASC candidates' views on this method of examination, and this was based at a revision course. Whelan et al6 showed that approximately 70% of candidates did not agree with the statement "there is no longer a need to use real patients in post-graduate clinical psychiatry exams". In addition, only 50% of candidates preferred the CASC to the previous long case, and the other 50% remained undecided. This raises doubts about the acceptability of the CASC format and merits further exploration.
Method
We conducted a national on-line survey asking both candidates and examiners about their views on the CASC examination.
Questionnaire development
Two questionnaires (one each for examiners and candidates) based on previously available evidence on this exam format6,7,8 were developed following discussions among the authors.
The final version of the questionnaire for both groups had the same seven questions with a five-point Likert scale. It included questions on whether the exam effectively assessed the competencies needed for real-life practice, whether there was over-testing of communication skills, whether feedback was adequate, respondents' views on the validity and reliability of the method, and finally whether the clinical examination should revert to the previous style of long case and viva.
Sampling procedure
The examiners and candidates who had already taken the CASC examination were invited to complete the online survey. Links to the questionnaires were distributed via the Schools of Psychiatry in thirteen deaneries in the United Kingdom (including Wales, Northern Ireland and Scotland). We approached 400 candidates and 100 examiners from different deaneries, ensuring a wide geographical distribution. The sample size was chosen on the basis that around 500 candidates sit the CASC exam each time and there are approximately 431 examiners on the CASC board (personal communication with the College). Participants were assured that their responses were confidential. The survey was open from mid-March to mid-April 2011. Reminders were sent half way through the survey period.
Results
A total of 110 candidates and 22 examiners completed the survey. The response rate was better for candidates (27.5%) than for examiners (22%). Despite the low response rate, the responses showed a good geographical spread, with responses received from most of the deaneries (87%). The London, East and West Midlands deaneries showed the highest response rates (14% each), while Scotland, Severn and North Western deaneries showed the lowest (2% each).
Among the 110 candidates, 52% were male and 48% were female; among the examiners, 73% were male and 27% were female. 55% of the examiners had been involved in the previous Part 2 clinical exam, while only 7% of the candidates had experience of that exam. The results are summarised in Tables 1 and 2.
Table 1. Candidates' views (n=110)

| Survey questions | Strongly agree | Agree | Neutral | Disagree | Strongly disagree |
|---|---|---|---|---|---|
| CASC examines the required competencies to progress to higher training | 10% | 38% | 7% | 26% | 19% |
| CASC examines all skills and competencies compared to previous Part 2 clinical exam | 4% | 11% | 46% | 21% | 18% |
| CASC scenarios reflect the real-life situations faced in clinical practice | 12% | 36% | 13% | 22% | 17% |
| CASC gives more emphasis to testing communication and interviewing skills than overall competencies | 29% | 31% | 14% | 19% | 7% |
| CASC is more valid and reliable as a clinical exam | 9% | 19% | 29% | 20% | 23% |
| Feedback system 'areas of concern' is helpful to unsuccessful candidates | 1% | 11% | 28% | 26% | 34% |
| CASC needs to be replaced by traditional style of exam – a long case and a viva | 14% | 22% | 25% | 24% | 15% |
Table 2. Examiners' views (n=22)

| Survey questions | Strongly agree | Agree | Neutral | Disagree | Strongly disagree |
|---|---|---|---|---|---|
| CASC examines the required competencies to progress to higher training | 14% | 45% | 14% | 18% | 9% |
| CASC examines all skills and competencies compared to previous Part 2 clinical exam | 4% | 14% | 23% | 45% | 14% |
| CASC scenarios reflect the real-life situations faced in clinical practice | 14% | 63% | 5% | 9% | 9% |
| CASC gives more emphasis to testing communication and interviewing skills than overall competencies | 22% | 26% | 17% | 22% | 13% |
| CASC is more valid and reliable as a clinical exam | 9% | 37% | 27% | 9% | 18% |
| Feedback system 'areas of concern' is helpful to unsuccessful candidates | 0% | 36% | 14% | 27% | 23% |
| CASC needs to be replaced by traditional style of exam – a long case and a viva | 18% | 14% | 41% | 9% | 18% |
Clinical competencies and skills
59% of the examiners and 48% of the candidates agreed that the CASC examines the required competencies to progress to higher training. Strikingly, only 18% of the examiners and 15% of the candidates agreed that the CASC assesses all the skills and competencies necessary for higher trainees when compared with the previous Part 2 clinical exam.
Content of the CASC
The majority of the examiners (77%) and nearly half of the candidates (48%) agreed that CASC scenarios reflect real-life situations faced by clinicians in normal practice. However, 60% of the candidates and 48% of the examiners felt that the CASC places excessive emphasis on communication and interview skills.
Feedback - “areas of concerns”
More than half of the candidates (60%) and half of the examiners (50%) felt that the feedback indicating "areas of concern" for failed candidates was not helpful in improving their preparation for the next attempt.
Validity and reliability of the CASC as a clinical exam
Just over a quarter of the candidates (28%) and just under half of the examiners (46%) considered the CASC a valid and reliable method of clinical examination. However, only 36% of the candidates and 32% of the examiners supported replacing the CASC with a traditional clinical exam (a long case and a viva). Broadly comparable numbers (39% of the candidates and 27% of the examiners) disagreed with the statement that the CASC should be replaced by the previous examination style.
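The summary percentages above simply collapse the five-point scale into agreement and disagreement. A minimal sketch of that collapsing, using the examiners' responses to the first question from Table 2:

```python
# Collapsing a five-point Likert item: "agreed" = strongly agree + agree,
# "disagreed" = disagree + strongly disagree. Figures are the examiners'
# responses to "CASC examines the required competencies..." (Table 2).
responses = {"strongly agree": 14, "agree": 45, "neutral": 14,
             "disagree": 18, "strongly disagree": 9}

agreed = responses["strongly agree"] + responses["agree"]           # 59%
disagreed = responses["disagree"] + responses["strongly disagree"]  # 27%
print(f"agreed {agreed}%, neutral {responses['neutral']}%, disagreed {disagreed}%")
```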
Discussion
To our knowledge this is the first study of candidate and examiner views since the introduction of the CASC. Its predecessor, the OSCE, has good reliability and validity in assessing medical students8 and has become a standard assessment method in undergraduate examinations. Whilst OSCEs have been held to be reliable and valid in a number of assessment scenarios,8 there have been doubts about their ability to assess advanced psychiatric skills,9 which was one of the main reasons for retaining the long case in the MRCPsych Part 2 clinical exam.2 Over the years, most of the Royal Colleges introduced OSCEs into their membership examinations and used simulated patients in some scenarios. However, the CASC is the first examination to use only simulated patients, in a combination of paired and unpaired stations. So far there has been no published literature evaluating this method systematically.
In a recent debate paper10 it has been argued that the CASC may have significant problems related to its authenticity, validity and acceptability. The findings of our survey reflect similar doubts about the reliability and validity of the CASC exam amongst both candidates and examiners. The content validity of the CASC has been demonstrated by the College Blueprint11 and the face validity appears to be good. However, as far as we are aware, concurrent and predictive validity testing data have not been published. Although the global marking system appears to have better concurrent validity than other checklists, it gives examiners similar flexibility to the long case in making judgements, which may affect the CASC's transparency and fairness. This may indicate that this new and promising examination method requires further systematic evaluation and modification before its users fully accept it.
According to the results of our study, the content of the CASC exam satisfies its purpose of assessing candidates' competencies to progress to higher professional training. However, many of the respondents felt that it lacked the completeness of the previous traditional clinical examination, which assessed skills in an integrated way. Although there were some differences between the candidates and the examiners in how they perceived the CASC exam, most of the respondents agreed that the CASC places more emphasis on communication and interviewing skills than on overall assessment of the candidate's competency.
Harden et al,12 in their paper on OSCEs, criticised the compartmentalisation of knowledge and the discouragement of broader thinking during clinical examinations. They also suggested using a long case and/or workplace-based assessments rather than relying on OSCEs alone in assessing trainees. Benning & Broadhurst13 expressed similar concerns about the loss of the long case from the MRCPsych examination. Our findings support the argument that the CASC assesses competencies in a piecemeal fashion rather than reflecting the demands on senior doctors in real practice, which often involve deciding what is and is not important depending on context.
The OSLER14 (Objective Structured Long Examination Record) method might overcome these shortcomings and improve the objectivity and transparency of the long case. In this method, two examiners assess the candidate and grade their skills individually on a ten-item objective record. They then decide together the appropriate grade for each item and agree an overall grade. The ten items include four on history, three on examination and another three covering investigations, management and clinical acumen. The OSLER method is also practical, as no extra assessment time is required, and it can be used for both norm-referenced and criterion-referenced exams. The case difficulty can be determined by the examiners, and all candidates are assessed on identical items. Thus this method assesses the candidate's overall clinical competency and eliminates the subjectivity associated with the long case.
Another alternative might be a combination of assessment methods, as suggested by Harden.12 An 8-10 station OSCE could be combined with a long case assessed using the OSLER method. The OSCE stations might include patient management scenarios along with interview and communication skills scenarios. The final score determining the result could also include marks from workplace-based assessments, as they provide a clear indication of the candidate's skills and competence in real-life situations.
It is also evident from our findings that both candidates and examiners are largely unsatisfied with the extent and usefulness of the feedback provided to unsuccessful candidates. The feedback system has been criticised for its inability to clarify the specific areas or skills which unsuccessful candidates need to improve. The recent "MRCPsych Cumulative Results Report"15 states that the pass rate of candidates declines after the first attempt. Perhaps this could be improved if failed candidates received more detailed feedback about their performance.
There are a number of limitations to this study. The response rate was low, but it was broadly in the range of other online surveys16 and there was representation from most of the deaneries in the United Kingdom. There could be a number of reasons for the low response rate. As far as we are aware, a few deaneries were not willing to distribute the questionnaire through their School of Psychiatry, and we had to contact the individual trusts in those areas to distribute the survey. The poor response rate from the examiners could reflect low interest in participating and lack of time. Also, older examiners and those with more experience of the CASC may have had particular views which might have influenced the responses; however, when this was examined further, there were no major differences between respondents who had experience of the previous Part 2 examinations and those who had not. In addition, one of the survey questions consisted of two parts (views on validity and reliability), which could have been difficult to answer accurately.
The findings of this preliminary study raise some doubts about the acceptability of the CASC to both candidates and examiners. There may be subjective bias in the responders' views, perhaps influenced by other ongoing and controversial changes in the NHS, including the role of the GMC and the College in post-graduate medical education. On the other hand, it might be a signal that it is worthwhile reconsidering the implications of the CASC for education and training, and evaluating this assessment method further and systematically.
Foreign body ingestion is a common occurrence, especially in children, alcoholics, the mentally handicapped and edentulous people wearing dentures. However, the majority of individuals pass these objects without any complications.1 Most foreign bodies pass readily into the stomach and travel the remainder of the gastrointestinal tract without difficulty; nevertheless, the experience is traumatic for the patient, the parents, and the physician, who must await the removal or the ultimate passage of the foreign body.2 The alimentary canal is remarkably resistant to perforation: 80% of ingested objects pass through the gastrointestinal tract without complications.3 About 20% of ingested foreign bodies fail to pass through the entire gastrointestinal tract.4 Any foreign body that remains in the tract may cause obstruction, perforation or hemorrhage, and fistula formation. Less than 1% result in perforations, from the mouth to the anus, and these are mostly caused by sharp objects and erosions.5,18 Of these sharp objects, chicken bones and fish bones account for half of the reported perforations. The most common sites of perforation are the ileo-caecal junction and sigmoid colon.3
Materials and Methods
This study, "Gastrointestinal tract perforations due to foreign bodies: a review of 21 cases over a ten-year period", was carried out in the Department of General Surgery at the Sher-i-Kashmir Institute of Medical Sciences Srinagar (SKIMS), a tertiary care hospital in North India, from January 2002 to December 2011. A total of 21 consecutive patients who underwent surgery for an ingested foreign body perforation of the GI tract over this ten-year period were retrospectively reviewed. A computer database and extensive case-note search was performed for patients' personal data, including age, sex, residence and presenting complaints, with special attention to clinical examination findings. The type and nature of the foreign objects, mode of entry into the gastrointestinal tract, preoperative diagnosis, perforation site, and treatment received were recorded. Complications arising from perforation of the GI tract by the ingested foreign body, and complications arising from the specific treatment received, were noted. Important findings on various laboratory tests, including complete blood count, erythrocyte sedimentation rate (pre-op/post-op/follow-up), blood cultures, serum chemistry, and chest and abdominal X-rays, were recorded. Special efforts were made to identify predisposing factors for the ingestion of foreign bodies, including edentulous patients with dentures, psychosis, extremes of age and hurried eating habits. Clinical, laboratory and radiological findings, treatment modalities, operative findings and therapeutic outcomes were summarised. Data were described as means and percentages.
Intravenous antibiotics (ceftriaxone plus metronidazole) were given in the emergency room and changed to specific therapy according to culture sensitivity postoperatively.
Results
The average follow-up duration was 13 months (range 7-19 months). There were 14 male (66.66%) and 7 female (33.33%) patients, ranging in age from 7 to 82 years, with a median age of 65 years at the time of diagnosis. The most frequently ingested objects were dietary foreign bodies (n=17). Four patients had ingested other objects: toothpicks (n=2) and metallic staples (n=2), as shown in Figure 1. Among the dietary foreign bodies, fish bones were found in 7 (33.3%) and chicken bones in 10 (47%), as shown in Figure 2. All the patients described their ingestion as accidental and involuntary. A definitive preoperative history of foreign body ingestion was obtained in 4 (19.04%) patients, and an additional 9 (42.8%) patients admitted ingestion of a foreign body in the postoperative period. For these 13 patients, the average duration between ingestion of the foreign body and presentation was 9.3 days. The remaining 8 (38.09%) patients did not recall any history of foreign body ingestion, dietary or otherwise. In terms of impaction and perforation of the ingested foreign body, the ileum was the commonest site, with 14 (66.66%) patients showing perforation near the distal portions of the ileum, followed by the sigmoid colon in 5 (23.8%). Jejunal perforation was seen in 2 (9.5%) patients.
Fig 1: X ray abdomen AP view showing ingested metallic pin
Fig 2: Intra operative picture showing perforation of small gut due to chicken bone
All our patients presented with an acute abdomen and were admitted first to the emergency department. Since the majority of patients did not give any specific history of foreign body ingestion, they were managed as cases of acute abdomen, with urgency and level of care varying according to the patient's condition. Eight patients presented with free air in the peritoneum and air under the right side of the diaphragm. The most common preoperative diagnoses were acute abdomen of uncertain origin, 12 (57.14%); acute diverticulitis, 5 (23.8%); and acute appendicitis, 4 (19.04%).
Table 1: Demographic profile, site of perforation, etiology, presentation and management

| S No | Age | Sex | Site | Foreign body | Presentation & pre-op diagnosis | Procedure performed |
|---|---|---|---|---|---|---|
| 1 | 78 | Male | 40 cm from ileo-caecal valve | Fish bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 2 | 65 | Female | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 3 | 80 | Male | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 4 | 43 | Male | Jejunum | Toothpick | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 5 | 10 | Male | 10 cm from ileo-caecal valve | Metallic staple | Acute abdomen, appendicitis | Removal of foreign body and repair |
| 6 | 72 | Female | Jejunum | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 7 | 65 | Male | 20 cm from ileo-caecal valve | Fish bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 8 | 59 | Male | Sigmoid colon | Chicken bone | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 9 | 65 | Female | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 10 | 49 | Female | 40 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 11 | 7 | Male | Sigmoid colon | Metallic staple | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 12 | 78 | Female | 15 cm from ileo-caecal valve | Fish bone | Acute abdomen, appendicitis | Resection of the perforated distal ileum and ileum stoma |
| 13 | 72 | Male | 15 cm from ileo-caecal valve | Fish bone | Acute abdomen, appendicitis | Resection of the perforated distal ileum and ileum stoma |
| 14 | 56 | Male | 20 cm from ileo-caecal valve | Toothpick | Acute abdomen, appendicitis | Resection of the perforated distal ileum and ileum stoma |
| 15 | 65 | Male | Sigmoid colon | Fish bone | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 16 | 63 | Male | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 17 | 82 | Female | 30 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Removal of foreign body and repair |
| 18 | 55 | Female | Sigmoid colon | Fish bone | Hematochezia, acute abdomen, diverticulitis | Removal of foreign body and repair |
| 19 | 56 | Male | 20 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
| 20 | 69 | Male | Sigmoid colon | Fish bone | Acute abdomen, diverticulitis | Removal of foreign body and repair |
| 21 | 71 | Male | 40 cm from ileo-caecal valve | Chicken bone | Acute abdomen, peritonitis | Resection of the perforated distal ileum and ileum stoma |
All the patients underwent an emergency celiotomy, and foreign body induced perforation was confirmed in all 21 patients. Patients with suspected appendicitis were explored via a classical gridiron incision and the rest via a midline incision. Varying degrees of abdominal contamination were present in all patients. Of the 21 patients, 11 (52.38%) underwent removal of the foreign body and primary repair of their perforations after minimal debridement. Intestinal resection with stoma formation (resection of the perforated ileum and ileum stoma) was performed in 10 (47.6%) of the 21 patients, as shown in Table 1. Take-down of the stoma was done at a later date. Three (14.28%) patients developed superficial incisional surgical site infections, which responded to local treatment. Two (9.5%) patients died in the postoperative period due to sepsis. One patient (patient no. 3 in Table 1), a diabetic on insulin with chronic obstructive pulmonary disease and hypertension, died on the 3rd postoperative day in the surgical intensive care unit due to severe sepsis. Another patient (patient no. 12 in Table 1), an elderly female with no co-morbid illness, developed severe sepsis due to Pseudomonas aeruginosa and died on the 4th postoperative day; she had been managed at a peripheral primary care centre for the first 3 days for vague abdominal pain with minimal signs. All the other patients had an uneventful recovery and were discharged home between the 6th and 14th postoperative days.
Discussion:
Foreign bodies such as dentures, fish bones, chicken bones, toothpicks and cocktail sticks have been known to cause bowel perforation.6 Perforation commonly occurs at points of acute angulation and narrowing.7,8 The risk of perforation is related to the length and sharpness of the object.9 The length of the foreign body is also a risk factor for obstruction, particularly in children under 2 years of age, who have considerable difficulty passing objects longer than 5 cm through the duodenal loop into the jejunum. In infants, foreign bodies 2 or 3 cm in length may also become impacted in the duodenum.10 The most common sites of perforation are the ileo-caecal junction and sigmoid colon. Other potential sites are the duodeno-jejunal flexure, appendix, colonic flexures, diverticula and the anal sphincter.3 Colonic diverticulitis or previously unsuspected colon carcinoma have been reported as secondary findings in cases of sigmoid perforation caused by chicken bones.11,12 Even colovesical or colorectal fistulas have been reported as being caused by ingested chicken bones.13,14 In our study the ileum was the most common site, with 14 patients showing perforation near the distal portions of the ileum, followed by the sigmoid colon. Jejunal perforation was seen in 2 patients.
The predisposing factors for ingestion and subsequent impaction are dentures causing defective tactile sensation of the palate, sensory defects due to cerebrovascular accident, previous gastric surgery facilitating the passage of foreign bodies, achlorhydria whereby the foreign body passes unaltered from the stomach, previous bowel surgery causing stenosis and adhesions, and diverticula predisposing to impaction.3 Overeating, rapid eating, or a voracious appetite may be contributing factors to ingesting chicken bones. The mean time from ingestion to perforation is 10.4 days.15 In cases where objects fail to pass through the tract within 3 to 4 weeks, reactive fibrinous exudates due to the foreign body may cause adherence to the mucosa, and objects may migrate outside the intestinal lumen to unusual locations such as the hip joint, bladder, liver, and peritoneal cavity.16 The length of time between ingestion and presentation may vary from hours to months and, in unusual cases, to years, as in the case reported by Yamamoto of an 18 cm chopstick removed from the duodenum of a 71-year-old man 60 years after ingestion.17 In our study the average duration between ingestion of the foreign body and presentation was 9.3 days.
In a proportion of cases, the definitive preoperative history of foreign body ingestion is uncertain.18 Small bowel perforations are rarely diagnosed preoperatively because clinical symptoms are usually non-specific and mimic other surgical conditions, such as appendicitis and caecal diverticulitis.19 In our study the most common preoperative diagnoses were acute abdomen of uncertain origin (n=12), acute diverticulitis (n=5) and acute appendicitis (n=4). Patients with foreign body perforations in the stomach, duodenum, and large intestine are significantly more likely to be febrile with chronic symptoms and a normal total white blood cell count compared to those with foreign body perforations in the jejunum and ileum.18 Plain radiographs of the neck and chest in both anteroposterior and lateral views are required in all cases of suspected foreign body ingestion and perforation, in addition to abdominal films. CT scans are more informative, especially if radiographs are inconclusive.20 Computerised tomography (CT) scanning and ultrasonography can recognise radiolucent foreign bodies. An ultrasound scan can directly visualise foreign bodies and abscesses due to perforation; the ability to detect a foreign body depends on its constituent materials, dimensions, shape and position.21 Contrast studies with Gastrografin may be required to exclude or locate the site of impaction of the foreign body, as well as to determine the level of a perforation. Contrast is important in identifying and locating foreign bodies when intrinsically non-radiopaque objects, such as wooden checkers or fish and chicken bones, are ingested.20 The high performance of computed tomography (CT) or multi-detector-row computed tomography (MDCT) of the abdomen in identifying intestinal perforation caused by foreign bodies has been well described by Coulier et al.22 Although imaging findings can be nonspecific in some cases, the identification of a foreign body with an associated mass or extraluminal collection of gas in patients with clinical signs of peritonitis, mechanical bowel obstruction, or pneumoperitoneum strongly suggests the diagnosis.8,20 Finally, endoscopic examination, especially of the upper gastrointestinal tract, can be useful in the diagnosis and management of ingested foreign bodies.
Whenever a diagnosis of peritonitis subsequent to foreign body ingestion is made, an exploratory laparotomy is performed; however, laparoscopically assisted, or completely laparoscopic, approaches have been reported.17,23 Treatment usually involves resection of the bowel, although occasionally repair has been described.8 The most common treatment was simple suture of the defect.24 Once a foreign body passes the oesophagogastric junction into the stomach, it will usually pass through the pylorus25; however, surgical removal is indicated if the foreign body has sharp points or if it remains in one location for more than 4 to 5 days, especially in the presence of symptoms. In such cases the decision should also be based on the nature of the foreign body, for example whether a corrosive or toxic metal has been ingested.26 Occasionally, objects that reach the colon may be expelled after enema administration. However, stool softeners, cathartics and special diets are of no proven benefit in the management of foreign bodies.7
Non-adherence to medication is a significant problem for the client group in psychiatry. Between a third and a half of medicines prescribed for long-term conditions are not used as recommended2,3. In the case of schizophrenia, studies reveal that almost 76% of sufferers become non-compliant with medication within the first 18 months of treatment4.
Non-adherence has consequences for both clients and the healthcare system. If the issues of non-adherence are better identified and actively addressed, there is the potential to improve the mental health of our clients and so reduce the burden of cost on mental health resources. It is estimated that unused or unwanted medications cost the NHS about £300 million every year. This does not include the indirect costs which result from the increased likelihood of hospitalisation and complications associated with non-adherence5.
The WHO has identified non-adherence as "a worldwide problem of striking magnitude". This problem is not limited to psychiatric client groups but is also prevalent in most chronic physical conditions. It has been reported that adherence to medication drops significantly after six months of treatment6.
In broad terms, compliance is defined as the extent to which the patient follows medical advice. Adherence, on the other hand, is defined as the behaviour of clients towards medical advice and their concordance with the treatment plan. Adherence appears to be a more active process, in which patients accept and understand the need for their treatment through their own free will and express that understanding through either a positive or negative attitude towards their medications7.
Unfortunately there is no agreed consensual standard for defining non-adherence. Some trials suggest a rate of >50% compliance as adequate adherence, while other researchers believe it should be at least 95%. The DOH White Paper (2010) recommends that clinicians have a responsibility to identify such issues and to improve collaborative relationships among multidisciplinary teams in order to deliver a better clinical and cost-effective service8.
Methods:
Sampling:
Our cohort comprised a prospective consecutive sample of 179 patients. The study was conducted in North Essex Partnership NHS Trust, which provides general adult services for a catchment area of approximately 147,000 in the Tendring area. All clients were seen at the outpatient clinic at Clacton & District Hospital. Informed consent was taken as per the recommendation of the local clinical governance team. The study was conducted during a 2-month period from October to November 2010. No patient was excluded from the study. The sample consisted of clients aged 16 years and above.
Tools Used:
All clients were asked questions using a standard questionnaire and the MARS (Medication Adherence Rating Scale). The MARS was developed by Thompson et al in 1999 as a quick self-reported measure of adherence, designed mainly for psychiatric clients. It was derived principally from the 30-item Drug Attitude Inventory (DAI) and the 4-item Morisky Medication Adherence Questionnaire (MAQ). The validity and reliability of the MARS were established by Thompson et al, and then by Fialko et al in 2008 in a large study, and have been reported to be adequate9,10.
The patient questionnaire directly asked clients about their current medications and dosage regimens. It also enquired about various factors leading to non-compliance, including whether the medication makes them feel suicidal, causes weight gain, makes them aggressive, causes sleep disturbance or sexual side effects; the form and size of tablets; stigma and family pressure; their personal beliefs about medication; and whether they become non-adherent as a direct consequence of the illness itself.
The Medication Adherence Rating Scale focuses both on adherence and on the patient's attitudes towards medication. It includes questions about how frequently they forget to take their medications and whether they are careless about taking them. It also asks whether, if they stop taking their medication, they feel well or more unwell. Other items cover whether they only take medicines when they are sick, and whether they believe it is unnatural for their thoughts to be controlled by medication. It also asks about the effects of medication, such as whether they are able to think clearly, feel like a zombie, or feel tired all the time, and it checks their belief that remaining compliant with medication will prevent them from getting sick again.
Results:
In total, 179 clients were seen in the outpatient clinic during the two-month period. Of these, just over half (54%, n=97) were female and nearly half (46%, n=82) were male. The age of clients ranged from 18 to 93 years; the mean age of the client group was 55, the mode 41 and the median 69.5.
The diagnostic profile was quite varied. As far as primary diagnosis is concerned, the majority (n=144) of service users had been given a primary diagnosis using ICD-10 criteria. Mood disorders were the most common primary diagnosis, whereas personality disorder and anxiety were the most common secondary diagnoses. Table 1 shows the number and percentage of service users presenting with the most commonly diagnosed conditions:
Table 1: List of primary and secondary diagnoses

| Diagnosis | Primary | Secondary |
|---|---|---|
| Mood disorders | 72 (50%) | 07 (26.92%) |
| Psychotic illness | 25 (17.36%) | 01 (3.85%) |
| Anxiety and PD | 13 (9%) | 13 (50%) |
| Dementia | 24 (16.7%) | 02 (7.69%) |
| Neurological disorder | 07 (4.86%) | 01 (3.85%) |
| Drugs related illness | 02 (1.39%) | 02 (7.69%) |
| Eating disorder | 01 (0.69%) | 00 (0.0%) |
Subjectively, 160 (89%) patients reported that they were compliant with medication, whereas 19 (11%) admitted that they had not been adherent. Of those who said they were non-adherent, 8 were suffering from mood disorders, 2 had schizoaffective disorder, 3 had psychotic illness, 3 had organic brain disorder, 2 had personality disorder, 1 had anxiety and 1 had a neurological illness.
Prescription rates varied between the different types of psychotropic medication. Antipsychotics were the most prescribed medication in our cohort. Table 2 shows the data for each category.
Table 2: Number and percentage of each medication category prescribed

| Medication category | N (prescriptions) | % of total prescriptions |
|---|---|---|
| Antipsychotics | 100 | 44% |
| Antidepressants | 72 | 31% |
| Mood stabilisers | 21 | 09% |
| Anxiolytics | 21 | 09% |
| ACH inhibitors | 12 | 05% |
| Hypnotics | 04 | 02% |
Less than half (39%, n=69) of service users were on only one type of psychotropic medication, whereas the majority (58%, n=104) were on more than one. A very small number of clients (3%, n=6) were not using any medication at all. Further exploration revealed that almost two-thirds of antidepressant prescriptions were for SSRIs (67%, n=55), about a quarter for SNRIs (24%, n=21), a small proportion for NARIs (6%, n=5) and very few for tricyclic antidepressants (3%, n=3). Similarly, among antipsychotics, 75% of patients were on atypical and 25% on typical antipsychotics.
Factors leading to non-adherence:
Below is a graphical representation of what clients perceived as the major factors leading to non-adherence to medication. Weight gain, illness effect, stigma and personal belief appear to be the major factors, as displayed in Chart 1.
Chart 1: Number of responses for each individual factor leading to non-adherence:
Attitude towards Medications:
Overall, service users' attitudes towards medication did not appear to be particularly good. They mainly complained of getting tired and of forgetting to take medication. Chart 2 below is a graphical representation of the overall attitudes they expressed towards psychotropic medication.
Chart 2: Number of responses for each factor indicating attitude towards medication
As far as the overall MARS score is concerned, the majority of patients (63%, n=110) scored >6 and about one third (37%, n=63) scored <6. A score of less than 6 is generally considered to indicate a poor level of adherence, which means that almost one third of our client group does not comply with medication.
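A minimal sketch of the classification rule applied here, assuming MARS totals have already been computed per patient (the threshold of 6 follows the text above; the treatment of a score of exactly 6 is not specified and is a choice in this sketch):

```python
# Classify MARS totals with the >6 adequate / otherwise poor rule used in
# the text; scores of exactly 6 are treated as poor here, an assumption.
def classify_mars(total_score: int, threshold: int = 6) -> str:
    return "adequate" if total_score > threshold else "poor"

scores = [9, 4, 7, 3, 8]                        # hypothetical MARS totals
labels = [classify_mars(s) for s in scores]
poor_rate = labels.count("poor") / len(labels)
print(labels, f"poor adherence: {poor_rate:.0%}")
```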
Discussion:
The aim of our study was to highlight the importance of the factors which often lead to non-adherence to medication and to explore patients' attitudes towards medication. The results indicate that the problem of non-adherence is much wider and deeper in our client group than subjective reports suggest: there is a significant gap between the subjective and objective rates of adherence. However, we should be mindful that adherence appears to be a continuum rather than a fixed entity; some patients can be more adherent than others but still have inadequate adherence, hence the concept of partial adherence. It is evident from the results that patients' attitudes towards psychotropic medication were not encouragingly positive.
Human beings are born potentially non-compliant. It is our tendency to crave and indulge in things which we know might not be good for our health, e.g. eating unhealthy food, alcohol and substance misuse. We comply better with things which give us an immediate reward, such as pain relief or euphoria from illicit drugs, whereas in the absence of such immediate reward our compliance gradually becomes erratic. Compliance and adherence appear to be learnt phenomena which need to be nurtured throughout life.
Manifestations of non-adherence:
The consequences of non-adherence are mainly manifested through clinical and economic indicators. Clinically, non-adherence means an increase in the rate of relapse and re-hospitalisation. According to one study, non-adherent patients have about a 3.7 times higher risk of relapse within 6 months to 2 years compared with adherent patients11. In the US it was estimated that at least 23% of admissions to nursing homes were due to non-adherence, representing a cost of $31.3 billion/380,000 admissions per year12. Similarly, 10% of hospital admissions happened for the same reason, costing the economy $15.2 billion/3.5 million patients13,14. Figures in the UK are not much different: the cost of prescriptions issued in 2007-08 was estimated at £8.1 billion, and it was highlighted that £4.0 billion of that amount was not used properly15. Similarly, in terms of hospitalisation, about 4% of admissions every year happen because of non-adherence. The total cost of hospitalisation in 2007 was estimated to be £16.4 billion, and it was suggested that non-adherence carried a burden of costs in the region of £36-196 million17.
From a clinical perspective, it has been suggested that non-adherence causes about 125,000 deaths every year in the US alone. Meta-analysis has suggested a statistically significant association between non-adherence and depression in certain chronic physical conditions, e.g. diabetes19.
Dimensional Phenomenon?
We need to be aware that adherence is a multidimensional and multifaceted phenomenon, better understood in dimensional rather than categorical terms. It has been widely accepted that if concordance is the process, adherence is the ultimate outcome. This was highlighted in WHO guidelines using the following diagram:
Chart 3: WHO diagram of the five dimension of adherence:
Therefore any strategy developed to address the issue of non-adherence should consider all five of these dimensions; otherwise it will be less likely to have any chance of success.
Measures to improve Compliance:
All the known clinical and economic indicators suggest that the issue of non-adherence needs significant attention, and special measures ought to be taken in order to avoid complications. There are already campaigns running in other countries to improve adherence, and we need to learn from their experience, such as the National Medication Adherence Campaign in the US (March 2011). That campaign is a research-based public education effort targeting patients with chronic conditions, their family caregivers, and health care professionals20.
Levine (1998) demonstrated that the following steps may help in increasing adherence:
To appropriately assess the patient's knowledge and understanding of the disease process and the need for treatment, and to address any dysfunctional beliefs.
To link the taking of medication with other daily routines of life.
To use aids to assist medication adherence, e.g. MEMS, ePills, a calendar or a dosette box.
To simplify the dosage regimen.
To provide a flexible healthcare team that is willing to support the patient.
To address current psychosocial and environmental issues which might hinder adherence21.
It is extremely important for clinicians to take time to discuss in detail with their patients all the possible side effects and indications of the prescribed medications. Clinicians may not be able to predict who will experience side effects, but they can certainly educate patients about their psychopathology and the indication and rationale for the medication, and help them realise how important it is to remain adherent. Health education is considered as effective as any sophisticated adherence therapy and should be used routinely22. Clinicians also have a very important role in simplifying the dosage regimen and in emphasising to patients that “medications don’t work in patients who don’t take them”23.
Various studies have tried to estimate the efficacy of single-factor and multi-factor approaches to improving adherence24. Studies have shown efficacy for education in self-management25,26, pharmacy management programmes27,28, intervention protocols delivered by nursing, pharmacy and other non-medical health professionals29,30, counselling31,32, behavioural interventions33,34 and follow-up35,36. However, multi-factor approaches have been found to be more effective than single-factor approaches38. It has therefore been suggested that we need to address all five dimensions of adherence (Chart 3) with multiple interventions to improve adherence in our patients.
One factor of potential concern leading to non-adherence is current overt or covert misuse of alcohol, illicit substances and over-the-counter medications. Understandably, this can lead to partial or complete non-adherence as well as worsening of existing psychiatric conditions, and it needs to be explored further in future research projects.
There are over 1.6 billion overweight people with a body mass index (BMI) greater than 25 kg/m2, and around 2.8 million deaths are attributed to overweight and obesity worldwide each year(1). Many overweight individuals underestimate their weight and, despite acknowledging their overweightness, many are not motivated to lose weight(2). Accurate measurement is important, as it identifies patients with diagnoses which subsequently impact on their management. Self-reported weight is often used as a means of surveillance but has been shown to bias towards under-reporting of body weight and BMI as well as over-reporting of height(3). Several estimation techniques have been devised to quantify anthropomorphic measurements when actual measurement cannot take place(4),(5),(6); however, these methods are associated with significant errors in hospitalised patients(7). There is no published study that questions the validity of visual estimation of obesity in the daily clinical setting despite its relevance to daily practice. We aimed to investigate the accuracy of visual estimation compared with actual clinical measurements in the diagnosis of overweight and obesity.
Methods:
This is a case-control study. Patients were attending the endocrinology, cardiology and chest pain out-patient clinics in Cork University Hospital, Cork, Ireland. The questionnaire session was carried out at every endocrinology, cardiology and chest pain clinic for 5 consecutive weeks. A total of 100 patients were recruited, allowing for a 10% margin of error at a 95% confidence level in a source population of 150,000. Ten doctors of varying grades were chosen randomly to visually score the subjects. Exclusion criteria were pregnancy and being wheelchair-bound. Consent was obtained from patients prior to completing the questionnaires. Ethical approval was received from the Clinical Research Ethics Committee of the Cork Teaching Hospitals.
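The quoted sample size can be reproduced with the standard Cochran formula plus a finite population correction. The sketch below is illustrative only, assuming maximum variability (p = 0.5), a 10% margin of error and a 95% confidence level; the paper does not state which formula or software was actually used.

```python
import math

def sample_size(population, margin=0.10, z=1.96, p=0.5):
    """Cochran's formula with finite population correction.

    population -- size of the source population (150,000 here)
    margin     -- acceptable margin of error (10%)
    z          -- z-score for the confidence level (1.96 for 95%)
    p          -- assumed proportion; 0.5 maximises the required n
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(sample_size(150_000))  # -> 96, consistent with the ~100 patients recruited
```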
In the waiting room, patients were asked to self-report their weight, height and waist circumference to the best of their knowledge. Demographics and cardiovascular risk factors were obtained from medical charts and are presented in Table 1. The questionnaire included a section that specifically tested patients’ awareness of abdominal obesity: relying on their own knowledge of markers of cardiovascular risk, patients were asked to choose between obesity and abdominal obesity. Clinical measurements were taken in the nurses’ assessment room. Weight was measured using portable SECA scales (Seca 755 Mechanical Column Scale) to the nearest 0.1 kg. All patients were measured on the same weighing scale to minimise instrumental bias. Patients were asked to remove their heavy outer garments and shoes, empty their pockets, and stand in the centre of the platform so that their weight was distributed evenly on both feet.
Height was measured using a height rule attached to a fixed measuring rod (Seca 220 Telescopic Measuring Rod). Patients were asked to remove their shoes and stand with their back to the height rule, with the back of the head, back, buttocks, calves and heels touching the wall, remaining upright with their feet together. The top of the external auditory meatus was levelled with the inferior margin of the bony orbit, and patients were asked to look straight ahead. Height was recorded to the resolution of the height rule (i.e. the nearest millimetre).
Waist circumference was measured using a MyoTape. Patients were asked to remove their outer garments and stand with their feet close together. The tape was placed horizontally around the body at a level midway between the lower rib margin and the iliac crest. Patients were then asked to breathe normally, and the reading was taken at the end of gentle exhalation, which prevents patients from holding their breath. The measuring tape was held firmly in a horizontal position, yet loosely enough to allow placement of one finger between the tape and the subject’s body. A single operator trained to measure waist circumference as per the WHO guidelines took all of these measurements, in order to reduce measurement bias(8).
The doctors were asked to visually estimate the patients’ weight, height, waist circumference and BMI, and their estimates were recorded on a separate sheet. All doctors were blinded to the actual clinical measurements. The questionnaires were collected at the end of the clinic and matched to individual patients. Data entry was performed in Microsoft Excel, and the data were exported for statistical analysis in SPSS version 16.
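The core comparison reported below (Table 2) is a mean deviation of estimated from measured values with a 95% confidence interval. The analysis itself was run in SPSS; the following is a minimal Python sketch of the equivalent calculation, assuming paired estimated and measured values per patient, with hypothetical example data.

```python
import numpy as np
from scipy import stats

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def mean_deviation_ci(estimated, actual, confidence=0.95):
    """Mean deviation (estimated - actual) with a t-based confidence interval."""
    d = np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)
    mean = d.mean()
    sem = stats.sem(d)
    lo, hi = stats.t.interval(confidence, df=len(d) - 1, loc=mean, scale=sem)
    return mean, (lo, hi)

# Hypothetical example with three patients:
est_weight = [80.0, 72.5, 95.0]   # self-reported weight (kg)
act_weight = [83.1, 74.0, 99.2]   # measured weight (kg)
print(mean_deviation_ci(est_weight, act_weight))
```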
Findings
The study enrolled 100 patients; demographic and cardiovascular risk details are shown in Table 1. Among these, 42 were obese, 35 were overweight and 23 had a normal BMI. The sample had a mean BMI of 29.9 kg/m2 (95% CI 28.7-31.1) and a mean waist circumference (WC) of 103.2 cm (95% CI 100.7-107.2); the mean male WC was 105.8 cm and the mean female WC 101.6 cm. The mean measured weight was 84.6 kg (95% CI 81.0-88.2) and the mean measured height was 1.68 m (95% CI 1.66-1.70).
Table 1: Cardiovascular risk factors

| Risk factor                     | Male (n=55)  | Female (n=45) |
|---------------------------------|--------------|---------------|
| Mean age (range)                | 53.6 (19-84) | 56.7 (23-84)  |
| Diabetes                        | 17           | 14            |
| Hypertension                    | 16           | 20            |
| Hypercholesterolaemia           | 24           | 19            |
| Active smoker                   | 10           | 5             |
| Ex-smoker (>10 years)           | 8            | 3             |
| Previous stroke or heart attack | 6            | 6             |
| Previous PCI                    | 6            | 3             |
Patients’ estimates and doctors’ visual estimates of anthropomorphic measurements were compared with the actual measurements; the results are displayed in Table 2.
Table 2. Deviation from actual measurement values in both groups

Patient’s estimation
| Measure     | Mean estimated | Mean deviation (estimated - actual) | 95% CI of mean deviation |
|-------------|----------------|-------------------------------------|--------------------------|
| Weight (kg) | 81.16          | -3.71                               | -5.10 to -2.32           |
| Height (m)  | 1.6782         | 0.0039                              | -0.0112 to 0.0033        |
| Waist (cm)  | 90.85          | -13.09                              | -15.48 to -10.70         |
| BMI (kg/m²) | 28.68          | -1.24                               | -1.87 to -0.61           |

Doctor’s visual estimation
| Measure     | Mean estimated | Mean deviation (estimated - actual) | 95% CI of mean deviation |
|-------------|----------------|-------------------------------------|--------------------------|
| Weight (kg) | 80.85          | -3.78                               | -5.54 to -2.02           |
| Height (m)  | 1.6710         | -0.0113                             | -0.224 to 0.002          |
| Waist (cm)  | 92.10          | -11.84                              | -13.87 to -9.81          |
| BMI (kg/m²) | 29.08          | -8.47                               | -1.54 to -0.15           |
Regarding patients’ own estimation of height, weight and waist circumference: 49% of patients underestimated their weight by more than 1.5 kg, 35% reported accurately to within 1.5 kg, and 16% over-reported their weight. For height, 67% of patients estimated accurately, 18% underestimated and 15% overestimated. When asked to estimate their waist circumference, 68% of patients underestimated by more than 5 cm, 30% overestimated and 2% estimated accurately to within 5 cm (Figure 1). We found that 70% of patients regarded obesity as the greater threat to health compared with abdominal obesity. There was no difference between patients’ self-reported weight and doctors’ weight estimation (p = 0.236).
Figure 1. Graphical representation of patients’ estimated weight, height and waist circumference
We then analysed the doctors’ estimation of height, weight, waist circumference and BMI. For the purpose of interpreting the BMI data, a doctor’s estimate was considered accurate when it placed the patient in the same BMI category as the clinical measurement. Of the patients with a normal BMI, 69.5% were correctly estimated as normal and the remainder (30.5%) were estimated as overweight. Of the obese patients, 81% were estimated as obese by the doctors as a group and the remainder (19%) were estimated as overweight. Of the overweight patients, 63% were correctly estimated as overweight, 9% were estimated as obese, and the remainder (28%) were mistakenly estimated as having a normal BMI. Overall, doctors estimated the BMI category accurately in 72% of patients (Figure 2).
Figure 2. Doctors’ estimation of BMI compared to actual clinical measurement
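The category-level accuracy just described amounts to a small confusion matrix of estimated versus measured BMI categories. A minimal sketch, assuming the conventional WHO cut-offs (normal < 25, overweight 25-29.9, obese ≥ 30 kg/m²) and hypothetical example data; this is illustrative and not the authors’ SPSS analysis:

```python
def bmi_category(bmi):
    """WHO categories as used in the paper: normal (<25), overweight (25-29.9), obese (>=30)."""
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

def category_agreement(estimated, measured):
    """Cross-tabulate estimated vs measured BMI category; return the table
    and the overall proportion of category-accurate estimates."""
    cats = ("normal", "overweight", "obese")
    table = {m: {e: 0 for e in cats} for m in cats}
    for est, meas in zip(estimated, measured):
        table[bmi_category(meas)][bmi_category(est)] += 1
    accurate = sum(table[c][c] for c in cats)
    return table, accurate / len(measured)

# Hypothetical example: four doctor-estimated vs measured BMIs
table, accuracy = category_agreement([24.0, 27.5, 31.2, 26.0], [26.1, 27.9, 32.0, 24.4])
print(accuracy)  # 0.5 in this toy example
```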
Doctors underestimated weight in 53 patients, overestimated it in 26 and were accurate in 21. Estimation of waist circumference to the nearest 5 cm showed marked underestimation in 71% of patients, over-reporting in 3% and accurate estimation in 26%; most underestimates of waist circumference fell in the region of 10 to 15 cm. For obese patients, doctors estimated waist circumference correctly in 58% of individuals.
Discussion:
This is the first study comparing visual estimation of a cardiovascular risk factor with actual clinical measurements. As obesity and abdominal obesity become increasingly common, our perception of the 'normal' body habitus may be distorted(9).
In larger hospital out-patient departments, physicians and nurses are commonly pressed by clinical workload and tend to spend a limited amount of time with each patient in order to achieve a quicker turnaround. Cleator et al examined whether clinically significant obesity was well detected in three different out-patient departments and whether it was managed appropriately once diagnosed(10). In all the departments, covering rheumatology, cardiology and orthopaedics, the actual prevalence of clinical obesity was higher than that diagnosed, and the management of obesity was heterogeneous and minimal in terms of intervention. With ever-increasing numbers of obese patients attending hospitals, it is understandable that healthcare providers such as physicians, nurses, dieticians and physiotherapists resort to relying on visual estimation.
Regarding patients’ own estimation of height, weight and waist circumference, we found that patients were reasonably good at estimating their own height but tended to underestimate their weight. This is probably because these patients had not had a recent weight measurement, so their estimate was based on a historical measurement from months to years back, which in the majority of people is lower than their current weight. This also explains why height estimation was more accurate, as adult height does not undergo significant change and is relatively constant.
When attempting to obtain patients’ own estimation of waist circumference, we found that most patients were not aware of the method used to measure it; some even mistook waist circumference for their trousers’ waist size. Of those who were able to give an estimate, a large proportion underestimated.
The majority of patients believed that general obesity is more predictive of cardiovascular outcome than abdominal obesity. This lack of awareness reflects the limited effort clinicians make to address abdominal obesity as an important cardiovascular risk factor during consultations. The lack of proper awareness campaigns by healthcare providers, along with the evolving markers of cardiovascular risk, may further confuse the general public.
Recently, waist circumference and waist-to-hip ratio, along with many serum biomarkers, have been noted to correlate with adverse outcomes in obese individuals independently of BMI. Waist circumference is a relatively new tool compared with BMI, which would explain the discrepancy between doctors’ estimation of BMI and of waist circumference. Visual estimation is further compromised because many patients are covered in items of clothing during consultations. To obtain a better estimate of waist circumference, the individual has to be observed from many angles, a task that may be impossible in a busy clinic.
Although BMI is a convenient method to quantify obesity, recent studies have shown that waist circumference is a stronger predictor of cardiovascular outcomes(11),(12),(13),(14). The importance of waist circumference in predicting health risk is thought to be due to its relationship with intra-abdominal fat(15),(16),(17),(18),(19),(20). We now know that the presence of intra-abdominal visceral fat is associated with a poorer outcome, in that patients are prone to develop the metabolic syndrome and insulin resistance(21). We have yet to devise a more accurate measurement of visceral fat and are at present limited to using waist circumference.
Although doctors are generally good at estimating BMI, we found that in overweight patients close to 30% were wrongly estimated as having a normal BMI. Next to the obese, this group of patients is most likely to have metabolic abnormalities and increased cardiovascular risk; if BMI is not routinely measured, we may neglect patients who would benefit from intervention. Simple, short counselling during the out-patient visit, with emphasis on weight loss, the need to increase daily activity levels and the morbidity related to being overweight, may be all that is needed to improve population health in general. Further intervention may include referral to hospital or community dieticians and prescribed exercise programmes. These intervention tools already exist in the healthcare system and could be accessed readily.
The nature of our study design exposes it to several potential selection and measurement biases. Future studies should include patients of differing ages and socioeconomic backgrounds, and clinicians of differing appointments from various specialties, to obtain more generalisable results. A measure of diagnostic efficacy should also be employed to further assess the value of clinical measurement and therapeutic intervention.
Conclusion:
Visual scoring of markers of obesity by doctors is flawed and reliable only in obese individuals. True anthropometric measurements would avoid misdiagnosing overweight individuals as normal. We conclude that patients’ own estimation of weight is unreliable and that they are unaware of the impact of high abdominal fat deposition on cardiovascular outcome. The latter should be addressed in consultations by both hospital physicians and general practitioners, and further emphasis, education in schools and awareness campaigns should also advocate this emerging cardiovascular risk factor.
The widespread use of office software in general practice makes the idea of simple, automatic computerised support an attractive one. Different tools for different diseases have been tested with mixed results; in 2009 a Cochrane review1 concluded that “Point of care computer reminders generally achieve small to modest improvements in provider behavior. A minority of interventions showed larger effects, but no specific reminder or contextual features were significantly associated with effect magnitude”. One year later another review2 reached a similar conclusion: “Computer reminders produced much smaller improvements than those generally expected from the implementation of computerised order entry and electronic medical record systems”. Despite this, simple, inexpensive automatic reminders are frequently part of GPs’ software, even if their real usefulness is seldom tested in real life.
Repeated hospitalisation for heart failure is an important problem for every national health system; it is estimated that about half of all re-hospitalisations could be avoided3. Adherence to guidelines can reduce the re-hospitalisation rate4, and pharmacotherapy according to treatment guidelines is associated with lower mortality in the community5. In 2004 a software package commonly used in Italian primary care implemented a simple reminder system to help GPs improve the prescription of drugs recommended for heart failure. We evaluated whether this could lead to a decrease in the re-hospitalisation rate.
METHODS
In 2003, using Millewin®, a software package commonly used by Italian GPs, we showed that appropriate prescription could be increased using simple pop-up reminders6. A year later, using the Italian general practitioners’ database ‘Health Search - CSD Patient Database’ (HSD) (www.healthsearch.it), we observed a lower than expected prevalence of codified diagnoses of heart failure and of prescriptions of both beta-blockers and ACE-inhibitors/ARBs (data on file). Therefore, in 2004 Millewin® embedded a simple reminder system to help heart failure (HF) management. The first reminder aimed to identify patients with HF but without a codified diagnosis: when a loop diuretic and/or digoxin was prescribed without a codified HF diagnosis, a pop-up told the GP that the patient could be affected by HF and invited the physician to verify this hypothesis and, where appropriate, record the diagnosis. The second reminder appeared when a patient with a codified HF diagnosis had no beta-blocker and/or ACE-inhibitor/ARB prescription: a pop-up invited the GP to prescribe the missing drug. This reminder system was activated by default in the 2004 release of the software but required voluntary activation in successive releases. This is a common choice in real life, where clinical-practice choices imposed by the software house are neither welcomed nor accepted by GPs. We had no way of knowing which GPs chose to keep using the reminders.
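For clarity, the two reminder rules just described can be summarised as a small decision function. This is a sketch of the logic only, not Millewin®’s actual implementation, and the patient-record fields are hypothetical.

```python
def heart_failure_reminders(patient):
    """Return the pop-up messages the two rules described above would raise.

    `patient` is a hypothetical record with boolean-style fields; the real
    Millewin(R) logic and data model are not public.
    """
    reminders = []

    # Rule 1: loop diuretic and/or digoxin without a codified HF diagnosis
    if (patient["on_loop_diuretic"] or patient["on_digoxin"]) and not patient["hf_diagnosis"]:
        reminders.append("Possible undiagnosed heart failure: verify and record the diagnosis.")

    # Rule 2: codified HF diagnosis without beta-blocker and/or ACE-inhibitor/ARB
    if patient["hf_diagnosis"]:
        if not patient["on_beta_blocker"]:
            reminders.append("Heart failure recorded: consider prescribing a beta-blocker.")
        if not patient["on_ace_or_arb"]:
            reminders.append("Heart failure recorded: consider prescribing an ACE-inhibitor/ARB.")

    return reminders
```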
We examined the 2004-2009 HF hospitalisations in Puglia, a southern Italian region with a population of over 4,000,000 and a high HF hospitalisation rate compared with the Italian mean7. We compared the hospitalisations of patients cared for by GPs who used Millewin® in 2004 with those of patients cared for by GPs who never used Millewin®. Data were provided by the local health authority and were extracted from its administrative database.
RESULTS
We identified 64,591 patients (mean age 76 years, SD 12; 49.9% men) with one or more HF hospitalisations; 17,810 had ≥ 2 hospitalisations and were analysed for the current study.
Figure 1 - Selection process leading to the identification of the patients with ≥ 2 HF hospitalisations
The selection that led to this group is summarised in Figure 1. There was no statistically significant difference in age or gender between patients cared for by GPs using or not using the Millewin® software. The re-hospitalisation rate according to whether patients’ GPs used Millewin® is summarised in Table 1.
Table 1: Re-hospitalisation rate of patients cared for by Millewin® users and non-users

Patients with ≥ 2 hospitalisations (N, %)
| Time           | No MW users    | MW users      | Total          | P    |
|----------------|----------------|---------------|----------------|------|
| Within 1 year  | 11,260 (23.1%) | 1,136 (22.9%) | 12,396 (23.1%) | N.S. |
| Within 2 years | 13,851 (28.4%) | 1,410 (28.4%) | 15,261 (28.4%) | N.S. |
| Within 3 years | 15,144 (31.0%) | 1,543 (31.1%) | 16,687 (31.0%) | N.S. |
| Within 4 years | 15,803 (32.4%) | 1,612 (32.4%) | 17,415 (32.4%) | N.S. |
| Within 5 years | 16,083 (33.0%) | 1,643 (33.1%) | 17,726 (33.0%) | N.S. |
| Within 6 years | 16,156 (33.1%) | 1,654 (33.3%) | 17,810 (33.1%) | N.S. |

MW = Millewin®; N.S. = not significant
The mean time before the first re-hospitalisation was 108.5 ± 103.3 days for Millewin® non-users and 116.4 ± 107.5 days for users (p < 0.05).
DISCUSSION
Even if reasonable and clinically sound, the availability of computerised reminders aimed at helping GPs identify HF patients and prescribe recommended drugs did not reduce the re-hospitalisation rate. The first possible explanation is that, after the first year, GPs did not re-activate the reminder system. Unfortunately we could not verify this hypothesis, but it is known that the level of use of such a system may be low in usual care8; furthermore, providers may agree with fewer than half of computer-generated care suggestions from evidence-based CHF guidelines, most often because the suggestions are felt to be inapplicable to their patients or unlikely to be tolerated9.

Epidemiological studies have shown that heart failure with a normal ejection fraction is now a more common cause of hospital admission than systolic heart failure in many parts of the world10-11. Despite being common, this type of heart failure is often not recognised, and evidence-based treatment, apart from diuretics for symptoms, is lacking12. It is therefore possible that increasing ACE-I/ARB and beta-blocker use in these patients does not influence prognosis and hospitalisation rate. Unfortunately, administrative databases do not permit distinguishing the characteristics of HF.

We must also consider that the use of appropriate drugs after HF hospitalisation may have increased spontaneously in recent years; a survey in Italian primary care showed that 87% of HF patients used inhibitors of the renin-angiotensin system, and 33% beta-blockers13. A further relevant increase in ACE-I/ARBs is therefore unlikely, while an improvement is clearly needed for beta-blockers.

Could more complex, information-providing reminders be more useful? This is unlikely, since adding symptom information to computer-generated care suggestions for patients with heart failure did not affect physician treatment decisions or improve patient outcomes14. Furthermore, consultation with a cardiologist before starting beta-blocker treatment is judged mandatory by 57% of Italian GPs13, thus reducing the potential direct effect of reminders on prescription. Finally, we must remember that part of the hospitalisation attributed to HF worsening can be due to non-cardiac disease, such as pneumonia or anaemia; none of these causes can be affected by improved prescription of cardiovascular drugs.
Albeit simple and inexpensive, computerised reminders are not a neutral choice in professional software. Too many pop-ups may be disturbing and may lead to systematic skipping of the reminders’ text. This can be a problem, since computerised reminders have proved useful for other important primary-care activities, such as preventive interventions15. In our opinion, at the moment, a computerised reminder system should be proposed only as part of a more complex strategy, such as long-term self or group audit and/or a pay-for-performance initiative.
CONCLUSIONS
The availability of computerised automatic reminders aimed at improving detection of heart-failure patients and prescription of recommended drugs does not decrease repeated hospitalisation; these tools should probably be tested in the context of a more complex strategy, such as a long-term audit.
The prevalence of current alcohol use in India ranges from 7% in the western state of Gujarat (officially under prohibition) to 75% in the north-eastern state of Arunachal Pradesh1. The prevalence of hazardous alcohol use was 14.2% in rural south India2. Alcohol abuse thus causes major public, family and health-related problems, with impairment of social, legal, interpersonal and occupational functioning in individuals addicted to alcohol.
A wide variety of biochemical and haematological parameters are affected by regular excessive alcohol consumption. The blood tests traditionally used most commonly as markers of recent drinking are the liver enzymes gamma glutamyltransferase (GGT), aspartate aminotransferase (AST) and alanine aminotransferase (ALT), and the mean volume of the red blood cells (mean corpuscular volume, MCV). However, these are not sensitive or specific enough for use as single tests3.
Elevated gamma glutamyltransferase levels are an early indicator of liver disease; chronic heavy drinkers, especially those who also take certain other drugs, often have increased GGT levels. However, GGT is not a very sensitive marker, showing up in only 30-50 percent of excessive drinkers in the general population. Nor is it a specific marker of chronic heavy alcohol use, because other digestive diseases, such as pancreatitis and prostate disease, can also raise GGT levels4.
AST and ALT are enzymes that help metabolise amino acids, the building blocks of proteins. They are an even less sensitive measure of alcoholism than GGT; indeed, they are more useful as an indication of liver disease than as a direct link to alcohol consumption. Nevertheless, research finds that when otherwise healthy people drink large amounts of alcohol, AST and ALT levels in the blood increase. Of the two enzymes, ALT is the more specific measure of alcohol-induced liver injury because it is found predominantly in the liver, whereas AST is found in several organs, including the liver, heart, muscle, kidney and brain. Very high levels of these enzymes (e.g., 500 units per litre) may indicate alcoholic liver disease. Clinicians often use a patient’s AST to ALT ratio to confirm an impression of heavy alcohol consumption. However, because these markers are less accurate in patients under age 30 or over age 70, they are less useful than some other, more comprehensive markers5.
An AST/ALT ratio of more than 1.5 strongly suggests, and a ratio greater than 2.0 is almost indicative of, alcohol-induced liver damage6. It has accordingly been suggested that an AST/ALT ratio greater than 2 is highly suggestive of an alcoholic aetiology of liver disease; however, extreme elevations of this ratio, with an AST level greater than five times normal, should suggest a non-alcoholic cause of hepatocellular necrosis7.
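Purely as an illustration, the rule of thumb in references 6-7 can be encoded as a simple decision function; the AST upper limit of normal is an assumed laboratory value, and this sketch is not a diagnostic tool.

```python
def interpret_ast_alt(ast, alt, ast_upper_limit=40):
    """Encode the AST/ALT rule of thumb cited above (refs 6-7).

    ast_upper_limit is an assumed laboratory upper limit of normal (U/L);
    illustrative only, not for clinical use.
    """
    ratio = ast / alt
    if ast > 5 * ast_upper_limit:
        return ratio, "AST >5x ULN: consider a non-alcoholic cause of hepatocellular necrosis"
    if ratio > 2.0:
        return ratio, "ratio >2.0: highly suggestive of alcohol-induced liver damage"
    if ratio > 1.5:
        return ratio, "ratio >1.5: suggests alcohol-induced liver damage"
    return ratio, "ratio not suggestive of an alcoholic aetiology"

print(interpret_ast_alt(90, 40))  # ratio 2.25 -> highly suggestive
```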
Sialic acid, a derivative of acetylneuraminic acid attached to the non-reducing residues of the carbohydrate chains of glycoproteins and glycolipids, has been found to be elevated in alcohol abuse8.
In this study we compared the sensitivity, specificity and diagnostic efficiency of serum sialic acid (SA) with those of the traditional markers AST (aspartate aminotransferase), ALT (alanine aminotransferase) and GGT (gamma glutamyl transferase) as markers of alcohol abuse.
MATERIALS AND METHODS:
This case-control study was conducted on 100 male subjects aged 20-60 years: 50 cases and 50 controls. Cases comprised patients diagnosed with alcohol dependence syndrome (ADS) admitted to the Psychiatry-ADS ward at Mahatma Gandhi Memorial Hospital, Warangal. The study was approved by the institutional ethics committee. The amount, duration and type of alcohol consumed (rum, whisky, brandy, vodka, gin, arrack, etc.) were recorded; subjects who had consumed more than half a bottle of these spirits daily (or intermittently, with abstinence of 2-3 days) for more than 5 years were chosen for the study. Alcohol dependence was assessed using the CAGE questionnaire9.
C: Cut down drinking; A: Annoyed by criticism of drinking; G: Guilty feelings about drinking; E: Eye-opener.
Those who answered yes to two or more questions were taken as cases10, and their blood samples were collected after informed consent. Controls were selected from healthy subjects with no history of alcoholism who attended the MGMH health clinic for a master health check-up.
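The case definition amounts to a simple CAGE score with a cut-off of 2. A minimal sketch, with the four items represented as boolean inputs (an assumption made for illustration):

```python
def cage_positive(cut_down, annoyed, guilty, eye_opener):
    """Score the CAGE questionnaire: each 'yes' scores 1; a total of >= 2
    was taken as indicating alcohol dependence in this study (refs 9-10)."""
    score = sum([cut_down, annoyed, guilty, eye_opener])
    return score, score >= 2

# Example: a subject answering yes to 'cut down' and 'eye-opener'
print(cage_positive(True, False, False, True))  # (2, True)
```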
Exclusion criteria:
Patients with history of Diabetes mellitus, Cardiac disease, Viral/Bacterial Hepatitis, Alcoholic hepatitis, tumors, meningitis and history of current use of hepatotoxic and nephrotoxic drugs were excluded from the study.
4 ml of blood was collected from each subject from the median cubital vein by venepuncture; serum was separated and the different parameters were analysed. Serum sialic acid was estimated by the modified thiobarbituric acid assay of Warren11 (Lorentz and Krass) using a colorimetric method. Aspartate transaminase12,13,14, alanine transaminase13,15,16 and gamma glutamyl transferase17,18 were estimated by IFCC-recommended methods on a Dimension clinical chemistry system (auto-analyser).
Statistical analysis: Student’s t test (two-tailed, independent) was used to assess the significance of differences in study parameters between controls and cases. Receiver operating characteristic (ROC) analysis (SPSS version 17) was used to assess the diagnostic performance of the study parameters.
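As a sketch of what the ROC step computes (the study used SPSS; the Python below, with hypothetical marker values, is only an equivalent illustration), the best cut-off can be taken at the maximum Youden index, with diagnostic efficiency as the overall proportion correctly classified:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_summary(values, is_case):
    """Best cut-off (Youden index), sensitivity, specificity, diagnostic
    efficiency and AUC for a single marker; mirrors the SPSS ROC output."""
    values = np.asarray(values, dtype=float)
    is_case = np.asarray(is_case, dtype=bool)
    fpr, tpr, thresholds = roc_curve(is_case, values)
    best = np.argmax(tpr - fpr)                 # Youden's J = sensitivity + specificity - 1
    sens, spec = tpr[best], 1 - fpr[best]
    predicted = values >= thresholds[best]
    efficiency = np.mean(predicted == is_case)  # proportion correctly classified
    return thresholds[best], sens, spec, efficiency, auc(fpr, tpr)

# Hypothetical illustration with six subjects (three controls, three cases):
print(roc_summary([30, 42, 38, 120, 260, 75], [0, 0, 0, 1, 1, 1]))
```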
RESULTS:
All the study parameters were significantly increased (p < 0.001) in subjects with alcohol abuse compared with controls, as shown in Table 1. The ROC analyses of the different parameters are shown in Fig 1 and Table 2. GGT had the highest diagnostic efficacy as a marker of alcohol abuse, followed by AST and SA.
Figure 1: ROC Curve analysis of different parameters
Table 1: Comparison of study parameters between controls and cases

| Parameter   | Controls      | Cases           | P value |
|-------------|---------------|-----------------|---------|
| AST (U/L)   | 24.83 ± 7.57  | 87.9 ± 53.72    | <0.001  |
| ALT (U/L)   | 47.63 ± 18.77 | 88.83 ± 46.53   | <0.001  |
| AST/ALT     | 0.58 ± 0.23   | 0.982 ± 0.29    | <0.001  |
| GGT (U/L)   | 39.36 ± 20.23 | 264.13 ± 298.74 | <0.001  |
| SA (mmol/L) | 1.81 ± 0.42   | 2.92 ± 0.706    | <0.001  |
Table 2: ROC analysis of different study parameters

| Parameter   | Best cut-off value | Sensitivity | Specificity | Diagnostic efficacy | AUC   |
|-------------|--------------------|-------------|-------------|---------------------|-------|
| AST (U/L)   | 37.50              | 86.66%      | 93.33%      | 90%                 | 0.946 |
| ALT (U/L)   | 71.00              | 63.33%      | 93.33%      | 78.33%              | 0.811 |
| AST/ALT     | 0.732              | 83.33%      | 76.66%      | 80%                 | 0.869 |
| GGT (U/L)   | 55.50              | 96.66%      | 86.66%      | 91.66%              | 0.929 |
| SA (mmol/L) | 2.3                | 80%         | 93.33%      | 86.66%              | 0.939 |
DISCUSSION:
Alcoholism is a serious health issue with major socio-economic consequences. Significant morbidity is related to chronic heavy alcohol use, and alcoholics often seek advice only when a complication of drinking sets in. The diagnosis is often based on patients’ self-reporting of alcohol consumption, which is unreliable, and requires a high degree of clinical suspicion.
Clinical histories and questionnaires are the commonest initial means of detecting alcohol abuse. They are cheap and easily administered, but subjective. If the history remains uncertain and there is suspicion of alcohol abuse, biological markers provide objectivity, and a combination of markers remains essential for detection. The liver is the prime target organ for alcohol-induced disease, and liver enzymes are important indicators of liver dysfunction and possible markers of alcohol dependence; those commonly used are GGT, AST and ALT. Laboratory markers help clinicians raise the issue of excessive drinking as the possible cause of a health problem. Unfortunately, for lack of sensitive and specific methods, the detection of problem drinking in clinical settings has remained difficult. The finding of increased serum SA concentrations in alcoholics has therefore raised the possibility of developing new tools for this purpose.
In the present study, increased concentrations of serum sialic acid and of the traditional biochemical markers GGT, AST and ALT were observed in cases compared with controls. Overall, GGT had good sensitivity and specificity, while the other traditional markers varied considerably in their sensitivities and specificities. The increase in serum sialic acid concentration in alcohol abusers in our study is in accordance with the studies of other investigators8,19,20,21, and the diagnostic accuracy of SA is in accordance with the study by Anttila P et al19. The increases in serum GGT, ALT and AST concentrations in alcohol abusers are in accordance with other investigators’ findings19,22.
CONCLUSION:
In our study, sialic acid proved to be a good test, with a sensitivity of 80%, a specificity of 93.33% and a diagnostic accuracy of 86.66%, showing that SA can be used as a biochemical marker of alcohol abuse where secondary effects of liver disease hamper the use of traditional markers.
The limitations of the study are as follows: it was done in only a small group of people; a larger study of alcohol abusers with and without specific liver disease should be conducted to confirm the role of SA as a new marker of alcohol abuse in settings where the traditional markers are altered by different liver diseases.
Nosocomial pneumonia in patients receiving mechanical ventilation, also called ventilator-associated pneumonia (VAP), is an important nosocomial infection worldwide which leads to increased length of hospital stay, healthcare costs and mortality.(1,2,3,4,5) The incidence of VAP ranges from 9% to 27%, with a crude mortality rate that can exceed 50%.(6,7,8,9) Aspiration of bacteria from the upper digestive tract is an important proposed mechanism in the pathogenesis of VAP.(9,10) The normal flora of the oral cavity may include up to 350 different bacterial species, with tendencies for groups of bacteria to colonise different surfaces in the mouth. For example, Streptococcus mutans, Streptococcus sanguis, Actinomyces viscosus and Bacteroides gingivalis mainly colonise the teeth; Streptococcus salivarius mainly colonises the dorsal aspect of the tongue; and Streptococcus mitis is found on both buccal and tooth surfaces.(11) Because of a number of processes, however, critically ill patients lose a protective substance called fibronectin from the tooth surface. Loss of fibronectin reduces the host defence mechanism mediated by reticuloendothelial cells, which in turn results in an environment conducive to the attachment of microorganisms to buccal and pharyngeal epithelial cells.(12) Addressing the formation and persistence of dental plaque by optimising oral hygiene in critically ill patients is therefore an important strategy for minimising VAP.(13)

Two different interventions aim at decreasing the oral bacterial load: selective decontamination of the digestive tract, involving administration of non-absorbable antibiotics by mouth or through a naso-gastric tube, and oral decontamination, which is limited to topical oral application of antibiotics or antiseptics.(14) Though meta-analyses of antibiotic decontamination of the digestive tract have found positive results(15), the use of this intervention is limited by concern about the emergence of antibiotic-resistant bacteria.(16) One alternative to oral decontamination with antibiotics is to use antiseptics such as chlorhexidine, which acts rapidly at multiple target sites and accordingly may be less prone to induce drug resistance.(17) A meta-analysis of four trials of chlorhexidine failed to show a significant reduction in rates of ventilator-associated pneumonia(18), but subsequent randomised controlled trials suggested benefit from this approach.(19) Current guidelines from the Centers for Disease Control and Prevention recommend topical oral chlorhexidine 0.12% during the perioperative period for adults undergoing cardiac surgery (grade II evidence); the routine use of antiseptic oral decontamination for the prevention of ventilator-associated pneumonia, however, remains unresolved.(8)

Despite the lack of firm evidence favouring this preventive intervention, a recent survey across 59 European intensive care units from five countries showed that 61% of respondents used oral decontamination with chlorhexidine. As the emphasis on evidence-based practice increases, integrating recent evidence by meta-analysis could greatly benefit patient care and ensure safer practices. We therefore carried out this meta-analytic review to ascertain the effect of oral decontamination using chlorhexidine on the incidence of ventilator-associated pneumonia and on mortality in mechanically ventilated adults.(20)
Methods
Articles in English published from 1990 to May 2011 and indexed in the following databases were searched: CINAHL, MEDLINE, Joanna Briggs Institute, Cochrane Library, EMBASE and CENTRAL, plus the Google search engine. We also screened previous meta-analyses and the reference lists of all retrieved articles for additional studies. Further searches were carried out in two trial registers (www.clinicaltrials.gov/ and www.controlled-trials.com/) and in web postings from conference proceedings, abstracts and poster presentations.
Retrieved articles were assessed against the inclusion criteria by three independent reviewers from the field of nursing, each holding a master’s degree. The inclusion criteria for this meta-analysis were: a) a VAP definition meeting both clinical and radiological criteria; b) intubation for more than 48 hours in the ICU.
We excluded studies in which the clinical pulmonary infection score alone was used to diagnose VAP. The articles were then evaluated for randomisation, allocation concealment, blinding techniques, clarity of inclusion and exclusion criteria, outcome definitions, similarity of baseline characteristics, and completeness of follow-up. We considered randomisation to be true if the allocation sequence was generated using computer programs, random number tables, or random drawing from opaque envelopes. Finally, based on the above characteristics, only 9 trials fulfilling the inclusion criteria were included in the pooled analysis; a brief summary of these trials is given in Table 1. The primary outcomes in this meta-analysis were the incidence of VAP and the mortality rate.
Table 1: Brief summary of trials (C = control group; E = experimental group; NA = not available)

| Source | Subjects | Intervention | Compared with | VAP (C) | VAP (E) | Mortality (C) | Mortality (E) |
|---|---|---|---|---|---|---|---|
| DeRiso et al., 1996 | 353 open heart surgery patients | Chlorhexidine 0.12%, 15 ml preoperatively and twice daily postoperatively until discharge from intensive care unit or death | Placebo | 9/180 | 3/173 | 10/180 | 2/173 |
| Fourrier et al., 2000 | 60 medical and surgical patients | Chlorhexidine gel 0.2% dental plaque decontamination 3 times daily, compared with bicarbonate solution rinse 4 times daily followed by oropharyngeal suctioning, until 28 days, discharge from ICU or death | Standard treatment | 15/30 | 5/30 | 7/30 | 3/30 |
| Houston et al., 2002 | 561 cardiac surgery patients | Chlorhexidine 0.12% rinse, compared with Listerine, preoperatively and twice daily for 10 days postoperatively or until extubation, tracheostomy, death, or diagnosis of pneumonia | Standard treatment | 9/291 | 4/270 | NA | NA |
| MacNaughton et al., 2004 | 194 medical and surgical patients | Chlorhexidine 0.2% oral rinse twice daily until extubation or death | Placebo | 21/101 | 21/93 | 29/93 | 29/101 |
| Fourrier et al., 2005 | 228 ICU patients | Chlorhexidine 0.2% gel three times daily during stay in intensive care unit, up to 28 days | Placebo | 12/114 | 13/114 | 24/114 | 31/114 |
| Segers et al., 2005 | 954 cardiac surgery patients | Chlorhexidine 0.12% nasal ointment and 10 ml oropharynx rinse four times daily from allocation and admission to hospital until extubation or removal of nasogastric tube | Placebo | 67/469 | 35/485 | 6/469 | 8/485 |
| Boop et al., 2006 | 5 cardiac surgery patients (pilot study) | Chlorhexidine gluconate 0.12% oral care twice daily until discharge | Standard treatment | 1/3 | 0/2 | NA | NA |
| Koeman et al., 2006 | 385 general ICU patients | Two treatment groups (2% chlorhexidine; chlorhexidine and colistin) versus placebo, four times daily until diagnosis of ventilator-associated pneumonia, death, or extubation | Placebo | 23/130 | 13/127 | 39/130 | 49/127 |
| Tontipong et al., 2008 | 207 general medical ICU or ward patients | 2% chlorhexidine solution times per day until endotracheal tubes were removed | Standard treatment | 12/105 | 5/102 | 37/105 | 36/102 |
Data analysis
Meta-analysis was performed using Review Manager 4.2 (Cochrane Collaboration, Oxford) with a random-effects model. Pooled effect estimates for binary variables were expressed as relative risks with 95% confidence intervals. Differences in intervention effect between treatment and control groups for each hypothesis were tested using a two-sided z test. We calculated the number needed to treat (NNT, with 95% confidence interval) to prevent one episode of ventilator-associated pneumonia during the period of mechanical ventilation. A chi-squared test was used to assess the heterogeneity of the results. Forest plots were drawn using StatsDirect software version 2.72 (StatsDirect Ltd, England, 2008). A two-tailed P value of less than 0.05 was considered significant throughout the study.
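For readers who wish to reproduce the pooling, the following is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of relative risks, using the VAP event counts from Table 1. Review Manager’s exact zero-cell handling may differ from the 0.5 continuity correction assumed here, so the output should only approximately match the figures reported below.

```python
import numpy as np
from scipy.stats import norm, chi2

def pooled_rr_random_effects(e_events, e_total, c_events, c_total):
    """DerSimonian-Laird random-effects pooling of relative risks (log scale)."""
    e1 = np.asarray(e_events, float); n1 = np.asarray(e_total, float)
    e0 = np.asarray(c_events, float); n0 = np.asarray(c_total, float)

    # 0.5 continuity correction for trials with a zero cell (e.g. Boop et al.)
    zero = (e1 == 0) | (e0 == 0)
    e1 = e1 + 0.5 * zero; n1 = n1 + 1.0 * zero
    e0 = e0 + 0.5 * zero; n0 = n0 + 1.0 * zero

    y = np.log((e1 / n1) / (e0 / n0))          # per-trial log relative risk
    v = 1/e1 - 1/n1 + 1/e0 - 1/n0              # approximate variance of log RR

    w = 1 / v                                  # inverse-variance (fixed) weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()         # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))

    w_re = 1 / (v + tau2)                      # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    se = w_re.sum() ** -0.5
    rr = np.exp(y_re)
    ci = (np.exp(y_re - 1.96 * se), np.exp(y_re + 1.96 * se))
    p = 2 * norm.sf(abs(y_re / se))            # two-sided z test
    return rr, ci, p, q, chi2.sf(q, df)

# VAP event counts from Table 1; experimental = chlorhexidine groups
e_events = [3, 5, 4, 21, 13, 35, 0, 13, 5]
e_total  = [173, 30, 270, 93, 114, 485, 2, 127, 102]
c_events = [9, 15, 9, 21, 12, 67, 1, 23, 12]
c_total  = [180, 30, 291, 101, 114, 469, 3, 130, 105]
print(pooled_rr_random_effects(e_events, e_total, c_events, c_total))
```

The NNT reported below can then be approximated as 1 / (CER × (1 − RR)), where CER is the pooled control event rate (169/1423 ≈ 0.119); with RR = 0.60 this gives roughly 1 / (0.119 × 0.40) ≈ 21, matching the quoted figure. This is one common approximation rather than necessarily the exact method used by the authors.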
Results
Effect of Chlorhexidine in reducing the Incidence of VAP
A total of nine trials were included in this meta-analysis(19,21,22,23,24,25,26,27,28). Pooled analysis of the nine trials, with 2819 patients, revealed a significant reduction in the incidence of VAP with chlorhexidine (relative risk 0.60, 0.47 to 0.76; P < 0.01) (Figure 1). In terms of the number needed to treat (NNT), 21 patients would need to receive oral decontamination with chlorhexidine to prevent one episode of ventilator-associated pneumonia (NNT 21, 14 to 38).
Figure 1: Forest plot showing the effect of chlorhexidine oral decontamination on the incidence of ventilator-associated pneumonia. Test for heterogeneity: χ² = 15.5, df = 8, p < 0.01. Test for overall effect: z = 4.33, p < 0.05.
Effect of chlorhexidine on overall mortality rate
For the mortality outcome, only seven of the nine trials were included, since the other two(23,27) did not report mortality. Pooled analysis of the seven trials, with 2253 patients, revealed no significant effect of chlorhexidine oral decontamination on the overall mortality rate (relative risk 1.02, 0.83 to 1.26; P = 0.78) (Figure 2).
Figure 2: Forest plot showing the effect of chlorhexidine oral decontamination on overall mortality. Test for heterogeneity: χ² = 0.05, df = 6, p = 0.81. Test for overall effect: z = 0.27, p = 0.78.
Discussion
The effectiveness of oral decontamination to prevent VAP in patients undergoing mechanical ventilation has remained controversial since its introduction, owing partly to the discordant results of individual trials. In the present meta-analysis, nine trials were included to estimate the pooled effect size; the results revealed a significant reduction in the incidence of VAP among patients treated with oral chlorhexidine, but no effect on the overall mortality rate.

There is a firm body of evidence that oropharyngeal colonisation is pivotal in the pathogenesis of VAP. More than 25 years ago, Johanson et al described associations between increasing severity of illness, higher occurrence of oropharyngeal colonisation, and an increased risk of developing VAP.(29,30) Subsequently, cohort and sequential colonisation analyses identified oropharyngeal colonisation as an important risk factor for VAP.(31,32,33) Our finding confirms this pivotal role, since this meta-analysis indicates that oral decontamination may reduce the incidence of VAP. Chlorhexidine has been shown to have excellent antibacterial effects, with low antibiotic resistance rates seen in nosocomial pathogens despite long-term use(34).

Previous meta-analyses examining prophylaxis using selective decontamination of the digestive tract reported a significant reduction in the incidence of ventilator-associated pneumonia(35,36,37), and the most recent indicated that such an intervention combined with prophylactic intravenous antibiotics reduces overall mortality(38). In comparison, our review suggests that oral antiseptic prophylaxis alone can significantly reduce the incidence of ventilator-associated pneumonia, but not mortality. A similar result was documented by Ee Yuee Chan et al (2007)(14), who performed a meta-analysis of seven trials with a total of 2144 patients and found a significant effect (odds ratio 0.56, 0.39 to 0.81); in line with our findings, they also found that mortality was not influenced by chlorhexidine use. Our meta-analysis differs from that of Pineda et al, who pooled four trials on chlorhexidine and did not find lower rates of ventilator-associated pneumonia (odds ratio 0.42, 0.16-1.06; P=0.07)(18). Our results also extend those of Chlebicki et al, who did not find a statistically significant benefit using the more conservative random-effects model after pooling seven trials on chlorhexidine (relative risk 0.70, 0.47-1.04; P=0.07), although their results were significant with the fixed-effects model(39). Our meta-analysis included a larger data set, with a total of 9 trials including recent ones(28), which further strengthens our analysis.
Limitations
Though our literature search was comprehensive, it is possible that we missed relevant trials: electronic and hand searches do not completely reflect the extent of research outcomes. For example, trials reported at conferences are more likely than trials published in journals to contain negative findings, and more positive than negative results tend to be reported in the literature. This failure to publish studies with negative outcomes is probably due more to authors’ lack of inclination to submit such manuscripts than to the unwillingness of editors to accept them. Furthermore, studies not published in English were not included, e.g. a study by Zamora Zamora F (2011).(40) These limitations carry a risk of a less balanced analysis and may therefore affect the recommendations resulting from the review. In addition, the heterogeneity we found among the trials with respect to populations enrolled, regimens used, outcome definitions and analysis strategies may limit the ability to generalise results to specific populations.
Conclusion
The finding that chlorhexidine oral decontamination can reduce the incidence of ventilator-associated pneumonia could have important implications for lower healthcare costs and a reduced risk of antibiotic resistance compared with the use of antibiotics. These results should be interpreted in light of the moderate heterogeneity of individual trial results and possible publication bias. It may not be prudent to adopt this practice routinely for all critically ill patients until strong data on the long-term risk of selecting antiseptic- and antibiotic-resistant organisms are available. Nevertheless, chlorhexidine oral decontamination seems promising. Further studies are clearly needed to test the effect of chlorhexidine in specific populations with standard protocols (including specific concentrations, frequencies and types of agent) so that findings can be generalised. Studies could also test the effect of different oral antiseptics in reducing VAP, so as to enrich the body of knowledge in this area.
Nerve blocks have a variety of applications in anaesthesia, adding an extra dimension to patients’ pain control and anaesthetic plans. Anaesthetists can perform nerve blocks by a range of methods, including landmark techniques and ultrasound guidance, and both techniques can be combined with a nerve stimulator.
Nerve blocks are associated with complications including nerve damage, bleeding, pneumothorax and failure. Ultrasound, if used correctly, may help limit such complications.1 NICE guidance on the use of ultrasound guidance for procedures has evolved over the years. Ultrasound guidance is now considered an essential requirement for the placement of central venous lines2 and is recommended when performing nerve blocks.3
Method
This survey aimed to assess the methods used by anaesthetists in performing nerve blocks, and audited the use and competencies of clinicians in performing such blocks under ultrasound guidance and with landmark techniques. It also examined whether performing nerve blocks under ultrasound guidance was hindered by the lack of availability of appropriate-resolution ultrasound machines in the workplace.
A paper survey was completed by anaesthetists of all grades at Kettering General Hospital, UK and Birmingham Heartlands Hospital, UK between October and December 2011. The survey consisted of a simple tick-box table and a free-text area in which participants could make further contributions. From this we ascertained the following:
Grade of clinician.
Any courses undertaken in ultrasound guided nerve blocks.
Which nerve blocks the clinicians felt they could perform competently with either method (landmark versus ultrasound guided).
Where the anaesthetist could perform a block with or without ultrasound guidance, which method was used when ultrasound equipment was available.
Whether the ability to perform ultrasound-guided nerve blocks was limited by the availability of an ultrasound machine.
The term “landmark technique” is used whether or not the landmark technique is combined with a nerve stimulator, and the term “ultrasound technique” when ultrasound guidance is used with or without a nerve stimulator.
Results
We surveyed a total of 52 anaesthetists: 26 consultants (50%), 17 ST/staff grades (33%) and 9 CT trainees (17%). Across all grades, only 50% had completed a course in ultrasound-guided nerve blocks, and 42% of clinicians had encountered situations where they could not use ultrasound guidance for a nerve block because no ultrasound machine was available at the time of the procedure.
The competencies of clinicians with the landmark and ultrasound technique varied depending on the type of nerve block and the grade of clinician (figure 1).
Various routinely performed blocks were surveyed, giving a good comparison of the ultrasound and landmark techniques. For the interscalene block, 56% of consultants and middle grades combined were competent with the landmark technique versus 33% with the ultrasound technique. For the lumbar plexus block, none (0%) of the consultants surveyed felt competent with the ultrasound technique, compared with 73% with the landmark technique. The majority of clinicians felt competent performing the TAP block with the ultrasound technique: 65% versus 35% for the landmark technique.
Figure 1. Competencies for different nerve blocks with the landmark (LM) and ultrasound (US) techniques, by grade of anaesthetist (values are the % of each grade feeling competent).

| Nerve block                            | Consultant (n=26) LM | Consultant (n=26) US | ST/Staff grade (n=17) LM | ST/Staff grade (n=17) US | CT1/2 (n=9) LM | CT1/2 (n=9) US |
|----------------------------------------|------|------|------|------|------|------|
| Brachial plexus: interscalene          | 54   | 34   | 58   | 29   | 0    | 0    |
| Brachial plexus: supra/infraclavicular | 31   | 23   | 29   | 18   | 0    | 0    |
| Brachial plexus: axillary              | 31   | 31   | 47   | 18   | 0    | 0    |
| Elbow                                  | 12   | 19   | 29   | 12   | 0    | 0    |
| Lumbar plexus                          | 73   | 0    | 65   | 12   | 11   | 0    |
| Sciatic: anterior                      | 39   | 8    | 64   | 12   | 0    | 0    |
| Sciatic: posterior                     | 42   | 27   | 76   | 18   | 0    | 0    |
| Femoral                                | 100  | 69   | 100  | 76   | 36   | 11   |
| Epidural                               | 100  | 19   | 100  | 18   | 36   | 0    |
| Spinal                                 | 100  | 12   | 100  | 18   | 56   | 0    |
| Abdominal: TAP                         | 38   | 85   | 29   | 65   | 33   | 11   |
| Abdominal: rectus sheath               | 19   | 35   | 18   | 47   | 0    | 11   |
Discussion
The findings of this survey and audit have a range of implications for anaesthetists in the workplace:
1) Junior grades of doctors do not feel competent in performing nerve blocks. This may lead to a reliance on senior doctors during on calls to assist in performing blocks such as femoral and TAP blocks. Specific training geared towards junior doctors to make them proficient in such blocks would enable them to provide an anaesthetic plan with more autonomy.
2) A large percentage of consultant-grade clinicians felt competent in performing nerve blocks with the landmark technique but not in performing the same blocks with ultrasound guidance. This has implications for training, because consultants are the training leads for junior grades of anaesthetists: if consultants do not feel competent in the use of ultrasound guidance for nerve blocks, this could lead to a self-perpetuating cycle.
3) Only 50% of clinicians in this survey had completed a course in ultrasound-guided nerve blocks. This, coupled with the finding that clinicians did not feel comfortable performing nerve blocks with ultrasound, indicates a possible need for locally accessible training to improve everyday practice.
4) It has been shown that ultrasound guidance improves the success rate of interscalene blocks.4 The practice amongst clinicians in this survey reveals that the majority of anaesthetists (middle and consultant grades) are competent with the landmark technique (56%) rather than the ultrasound technique (36%). This also highlights a training deficit which, if addressed, would enable clinicians to offer a more successful method of performing the interscalene block.
5) This survey highlighted the lack of availability of appropriate ultrasound machines in different departments, leading some clinicians to use the landmark technique when ultrasound guidance was their preference. A patient may thus receive a nerve block technique that is riskier and less efficient. This highlights a potential need for investment in, and accessibility of, appropriate-resolution ultrasound machines across the different workplaces of a hospital.
The main limitation of this project was the small number of clinicians in the two hospitals where the survey was performed. However, we feel the results reflect the practice of clinicians across most anaesthetic departments. The recommendations highlight a training need for anaesthetic trainees in the use of ultrasound-guided nerve blocks. This survey could form the basis of a much larger survey of clinicians across the UK, providing a more insightful review of the competencies and preferences of anaesthetists in performing nerve blocks and of the availability of appropriate-resolution ultrasound machines.
The difference in the number of clinicians in each category limited comparisons between groups. A larger cohort of participants would enable comparison of nerve block techniques between different grades of clinicians.
This survey included all clinicians regardless of their sub-specialist interest. This may result in a skewing of results, depending on the area of interest of the clinicians surveyed.
This work only highlights the competencies and preferences of clinicians in performing nerve blocks. No extrapolation can be made to complications that arise from the choice of either technique. Studies have shown an improved success rate when performing nerve blocks with ultrasound.4 However this does not directly apply to a specific clinician who may have substantial experience in their method of choice in performing a nerve block.
Acute non-traumatic knee effusion is a common condition presenting to the orthopaedic department and can be caused by a wide variety of diseases (Table 1). Septic arthritis is the most common and serious aetiology; it can involve any joint, and the knee is the most frequently affected. Accurate and swift diagnosis of septic arthritis in the acute setting is vital to prevent joint destruction, since cartilage loss occurs within hours of onset1,2. Inpatient mortality due to septic arthritis has been reported as between 7-15%, despite improvements in antibiotic therapy3,4. Crystal arthritis (gout/pseudogout) is the second most common differential diagnosis. It is often under-diagnosed, so patients do not receive rheumatology referral for appropriate treatment and follow-up; in addition, some patients are misdiagnosed and treated as septic arthritis with inappropriate antibiotics. Untreated crystal-induced arthropathy has been shown to cause degenerative joint disease and disability, leading to a considerable health economic burden6,7.
When the patient is systemically unwell, it is common practice to start empirical antibiotic treatment after joint aspiration for fear of septic arthritis. This aims to minimise the risk of joint destruction while awaiting gram stain microscopy and microbiological culture results. In a persistently painful swollen knee with negative gram stain and culture, antibiotic therapy can be continued, with or without arthroscopic knee washout, based on clinical suspicion of infection8.
We have therefore undertaken a retrospective study to review our management of patients with non-traumatic hot swollen knees and in particular patients with crystal-induced arthritis.
Materials and methods:
We performed a retrospective review of 180 patients presenting consecutively with acute non-traumatic knee effusion referred to the on-call Orthopaedic team at the study hospital between November 2008 and November 2011. Sixty patients were included in the study (Table 2). There were 43 males and 17 females, with a mean age of 36 years (range, 23-93 years).
Patient demographics, clinical presentation, co-morbidities, current medications and body temperature were recorded. The results of blood inflammatory markers (WBC, CRP), blood cultures, synovial fluid microscopy, culture and polarized microscopy were also collected. Subsequent treatment (e.g. antibiotics, surgical intervention), complications, and mortality rates were reviewed.
Results:
On presentation, a decreased range of movement was evident in all patients. Associated knee pain was reported by 55 patients (92%), and 24 patients (40%) had fever (temperature ≥ 37.5 °C). All joints were aspirated prior to starting antibiotics and samples were sent for gram stain microscopy, culture and antibiotic sensitivity, and polarized light microscopy.
Of the 60-patient cohort, 26 were admitted and started on intravenous antibiotics based on clinical suspicion of infection (Table 3). The median duration of inpatient admission was 4 days (range, 2 to 14 days) and the median duration of antibiotic therapy was 6 days (range, 2 to 25 days). Eighteen patients were treated non-operatively with antibiotics and anti-inflammatory medications; arthroscopic washout was performed in the remaining eight knees. In this group of patients, the leucocyte count in the joint aspirate ranged from 0 to 3 leucocytes/mm3 and the blood leucocyte count from 4 to 20 leucocytes/mm3, while the mean CRP was 37.8 mg/l (range, 1-275 mg/l).
Review of laboratory results revealed that four patients had positive microscopic growth on gram-stained films: two samples grew Staphylococcus aureus and two grew beta-haemolytic streptococci. Eight patients had crystals identified on polarized light microscopy of the joint aspirate: three showed monosodium urate (MSU) crystals while five had calcium pyrophosphate (CPP) crystals. These eight patients received antibiotic therapy for a mean duration of 10 days (range, 1-30 days), two were taken to theatre for arthroscopic lavage, and only two received a rheumatology referral.
Seven patients developed complications during their hospital stay. Four contracted diarrhoea; three of these had negative stool cultures, but one was positive for Clostridium difficile, developed toxic megacolon and died. One patient with known ischaemic heart disease had a myocardial infarction and died. Two further patients acquired urinary tract infections.
Discussion:
Acute monoarthritis of the knee joint can be a manifestation of infection, crystal deposition, osteoarthritis or a variety of systemic diseases; arriving at a correct diagnosis is crucial for appropriate treatment9. Septic arthritis, the most common etiology, develops as a result of haematogenous seeding, direct introduction, or extension from a contiguous focus of infection. Joint infection is a medical emergency that can lead to significant morbidity and mortality, and the mainstay of treatment comprises appropriate antimicrobial therapy and joint drainage10,11. The literature reveals that the knee is the most commonly affected joint (55%), followed by the shoulder (14%), in the septic joint population12,13.
The second most common differential diagnosis is crystal-induced monoarthritis, of which gout and pseudogout are the two most common pathologies14. They are debilitating illnesses in which recurrent episodes of pain and joint inflammation are caused by the formation of crystals within the joint space and the deposition of crystals in soft tissue. Gout is caused by monosodium urate (MSU) crystals, while pseudogout is inflammation caused by calcium pyrophosphate (CPP) crystals, sometimes referred to as calcium pyrophosphate disease (CPPD)15,16. Misdiagnosis of crystal arthritis or delay in its treatment can gradually lead to degenerative joint disease and disability, in addition to renal damage and failure5. The clinical picture of acute crystal-induced arthritis can sometimes be difficult to differentiate from acute septic arthritis: it is manifested by fever, malaise, and raised peripheral WBC, CRP and other acute phase reactants, and the synovial fluid aspirate can be turbid secondary to an increase in polymorphonuclear cells. Diagnosis can therefore be challenging, and crystal identification on polarized microscopy is considered the gold standard17,18,19. Rest, ice and topical analgesia may be helpful, but systemic non-steroidal anti-inflammatory medications are the treatment of choice for acute attacks provided there are no contraindications20.
In this study, all joints were aspirated and samples were sent for microscopy, culture and sensitivity, and polarized microscopy for crystals, in line with the British Society for Rheumatology and British Orthopaedic Association guidelines8. Aspiration not only aids diagnosis but also reduces the pain caused by joint swelling. Twenty-six patients were admitted on clinical and biochemical suspicion of septic arthritis. They presented with an acute phase response manifested by malaise, fever and raised inflammatory markers, and were treated with antibiotic therapy and non-steroidal anti-inflammatory medications while awaiting the results of microbiology and polarized light microscopy. Four of these patients developed complications secondary to antibiotic therapy, including one death due to Clostridium difficile infection and subsequent toxic megacolon.
Infection was confirmed to be the underlying cause in four patients (6%) who showed positive microscopic growth on gram-stained films. They underwent arthroscopic washout and continued antibiotic therapy, guided by the culture and sensitivity results of their knee aspirate, until their symptoms had resolved and blood markers had normalised. Arthroscopic washout was also required for four patients with negative microscopic growth, owing to persistent symptoms despite antibiotic treatment, as recommended by the British Society for Rheumatology and the British Orthopaedic Association8. Of these, two showed calcium pyrophosphate crystals on polarized microscopy and two had no bacterial growth or crystals.
We retrospectively reviewed laboratory results and found that eight patients (13%) were confirmed to have crystal arthritis as crystals (MSU/CPP) were identified in their knee aspirates by means of polarized microscopy. However, only two patients (25%) received this diagnosis whilst in hospital. In both cases, antibiotic therapy was discontinued and they were referred to a rheumatologist for appropriate treatment and follow up. The remaining six patients continued to receive antibiotics and two of them were taken to theatre for arthroscopic lavage on clinical suspicion of infection as symptoms did not improve significantly with medications.
Our study shows that crystal-induced arthritis can easily be overlooked or misdiagnosed as septic arthritis. As a result, patients received unnecessary antibiotic therapy, developed serious complications and underwent avoidable surgical procedures; moreover, most were not referred to a rheumatologist.
Acute knee effusion is a common presentation to the Orthopaedic department and although we seem to be providing a good service for septic arthritis, patients with crystal arthropathy are still slipping through the net. Clinicians should always remember that crystal arthritis is almost as common as septic arthritis and will eventually lead to joint damage if not managed appropriately. It must be excluded as a cause of hot swollen joints by routine analysis of joint aspirate using polarized light microscopy. If crystal arthritis is proved to be the underlying pathology, patients must be treated accordingly and receive a prompt rheumatology referral for further management.
Non-attendance in outpatient clinics accounts for a significant wastage of health service resources. Psychiatric clinics have high non-attendance rates, and failure to attend may be a sign of deteriorating mental health. Those who miss psychiatric follow-up outpatient appointments are more unwell, with poorer social functioning, than those who attend (1). They have a greater chance of dropping out of clinic contact and of subsequent admission (1). Non-attendance and subsequent loss to follow-up indicate a possible risk of harm to the patient or to others (2).
Prompts to encourage attendance at clinics are often used and may take the form of reminder letters (3), telephone prompting (4) and financial incentives (5). Issuing a copy of the referral letter to the appointee may prompt attendance at the initial appointment (6). Contacting patients by reminder letter prior to their appointments has been effective in improving attendance rates in a number of settings, including psychiatric outpatient clinics and community mental health centres (3).
Studies investigating the efficacy of prompting for improving attendance have generated contrasting findings and non-attendance remains common in clinical practice. We, therefore, carried out a naturalistic, prospective controlled study to investigate whether reminder letters would improve the rate of attendance in a community-based mental health outpatient clinic.
Design and Methods
The study was carried out at the Community Mental Health Centres based in Runcorn and Widnes in Cheshire, UK. The community mental health teams (CMHTs) provide specialist mental health services for adults of working age. Both CMHTs are similar in demographics and socio-economic need, and both have relatively high non-attendance rates in their clinics. In the week prior to the appointment, clerical staff from the community mental health team sent a standard letter to some patients reminding them of the date and time of the appointment and the name of the consulting doctor. They recorded whether each patient attended, failed to attend or cancelled the appointment, irrespective of whether a reminder letter had been received.
We compared attendance rates between the experimental group (those who had received a reminder letter) and the control group (those who had not) over a period of 18 months. Throughout the study period, the same medical team held the clinics, and there were no major changes to the outpatient clinic setting, nor any administrative or procedural changes that might have influenced attendance. The Care Programme Approach (CPA) was implemented and in operation at both sites even before the introduction of reminder letters.
Attendance rates for all the clinics held during the study period were obtained from medical records. For all subjects who failed to attend, age and gender were obtained from the patient database. Patients whose appointments were cancelled were also included in the study.
Statistics and Data analysis
The data were analysed using SISA (Simple Interactive Statistical Analysis) (7). Chi-squared tests were used to compare attendance rates between the groups, for new patients and follow-ups, with the P value for statistical significance set at 0.05. Odds ratios were calculated to measure the size of the effect. In addition, we examined whether age and gender influenced the effect of text-based prompting on attendance.
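The SISA output is not reproduced here, but the headline comparison can be re-derived from the attended/failed-to-attend counts reported in the Results below. The following is a minimal illustrative sketch in Python with scipy (an assumption for illustration only; SISA was the tool actually used). Small discrepancies from the reported χ2 may reflect rounding or a continuity correction.

```python
# Recompute the overall attendance comparison (see Table 1) from the 2x2
# counts reported in the Results: attended vs failed to attend, by group.
from math import exp, log, sqrt
from scipy.stats import chi2_contingency

table = [[585, 228],   # experimental (reminder letter): attended, DNA
         [344, 211]]   # control (no reminder letter):   attended, DNA

chi2, p, dof, _ = chi2_contingency(table, correction=False)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)              # cross-product odds ratio
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
ci_low = exp(log(odds_ratio) - 1.96 * se_log_or)
ci_high = exp(log(odds_ratio) + 1.96 * se_log_or)

print(f"chi2 = {chi2:.2f} (df = {dof}), p = {p:.4f}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# Expected output: chi2 ~ 15.1, p ~ 0.0001, OR ~ 1.57 (1.25-1.98)
```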
Results
In the experimental group, a total of 114 clinics were booked, with clinic lists totalling 843 patients. Of these, 88 were new referrals and 755 were follow-up appointments. Sixty-five of the 114 clinics had full attendance. A total of 228 patients failed to attend the clinic; of these, 25 were new referrals and 203 were follow-up patients. Twenty-eight follow-up patients and 2 patients newly referred to the team called to cancel their appointments.
In the control group, a total of 71 clinics were booked, amounting to 623 patients. Of these, 86 were new referrals and 537 were follow-up appointments. Only 25 of the 71 clinics had full attendance. A total of 211 patients failed to attend; of these, 32 were new referrals and 179 were follow-up patients. Fifty-five follow-up patients and 13 patients newly referred to the team called to cancel their appointments.
Of those who failed to attend in the experimental group, 98 (43%) were women; the mean age of non-attendees was 38 years (range, 18-76 years). Of those who failed to attend in the control group, 110 (52%) were women; the mean age of non-attendees was 32 years (range, 19-70 years).
In our study, failure to attend was not distributed evenly but had seasonal peaks at Christmas and during the summer vacation period.
The outcome from prompting in the experimental group is compared with the control group and displayed in Table 1.
Outcome | Control group n | Experimental group n | χ2 | P | OR (95% CI)
No. of clinics with full attendance | 25 | 65 | 8.32 | 0.0039 | 2.44 (1.32-4.50)
Total no. of patients attended | 344 | 585 | 15.05 | 0.0001 | 1.57 (1.25-1.98)
No. of new patients attended | 41 | 61 | 3.74 | 0.053 | 1.90 (0.98-3.67)
No. of follow-up patients attended | 303 | 524 | 11.39 | 0.0007 | 1.52 (1.19-1.94)
No. of cancellations | 68 | 30 | 38.63 | <0.0001 | 3.85 (2.46-6.04)
χ2 = chi-square (df = 1 for each comparison); OR = odds ratio; CI = 95% confidence interval
The attendance rate in the experimental group was 72.0% (585/813; denominators exclude cancelled appointments), compared with 62.0% (344/555) in the control group (OR=1.57; P=0.0001).
The attendance rate for new patients in the experimental group was 70.9% (61/86), compared with 56.2% (41/73) in the control group (OR=1.90; P=0.053).
The attendance rate for follow-up patients was 72.1% (524/727) in the experimental group and 62.9% (303/482) in the control group (OR=1.52; P=0.0007).
In addition, significantly more clinics had full attendance in the experimental group (57.0% vs 35.2%; OR=2.44, P=0.0039).
The observed differences were not influenced by patients' age or gender.
Discussion
The results from this study confirm previous findings that reminder letters within a week before the appointment can improve attendance rates in community mental health clinics. Our results are similar to those of the Cochrane systematic review, which has suggested that a simple prompt in the days just before the appointment could indeed encourage attendance (8).
Although it has been reported elsewhere (8) that text-based prompting increases the rate at which patients keep their initial appointments, our study did not show a similar result for new patients.
It has already been demonstrated that new patients and follow-up patients in psychiatric clinics are distinct groups, with different diagnostic profiles, degrees of mental illness and reasons for non-attendance. Follow-up patients are more severely ill, more socially impaired and more isolated than new patients (1). Forgetting the appointment and being too unwell are the most common reasons given for non-attendance by follow-up patients, while being unhappy with the referral, clinical error and being too unwell are the most common reasons in the new patient group (1). In addition, it has been observed that an increased rate at which patients keep their first appointments is more likely related to factors other than simple prompting (4). This may explain our finding that prompting was more beneficial for follow-up patients than for new referrals to the Community Mental Health Team.
We also identified several patients with severe mental illness who did not attend three successive outpatient appointments. Their care plans were reviewed and arrangements were made for their community psychiatric nurses to follow them up with domiciliary visits at regular intervals. Such measures should reduce duplication of services and shorten waiting times for psychiatric consultation, both of which are well-recognised factors associated with non-attendance (9).
Non-attendance is an index of the severity of mental illness and a predictor of risk (1). In addition to reminder letters, telephone prompts are also known to improve attendance (4). Successful interventions to improve attendance may be labour-intensive, but they can be automated and, ultimately, prove cost-effective (8).
We noticed that there is limited research, and a lack of good-quality randomised controlled trials, on non-attendance and the effectiveness of interventions to improve attendance in mental health settings. More large, well-designed randomised studies are desirable. We also recommend periodic evaluation of outpatient non-attendance in order to identify high-risk individuals and implement suitable measures to keep such severely mentally ill patients engaged with services.
There was no randomisation in this study and we relied on medical records. We did not directly compare the characteristics of non-attendees with those of patients who attended the clinics, nor did we evaluate other clinical and socio-demographic factors (e.g. travelling distance, financial circumstances) that are known to influence attendance rates in mental health settings. Hence, there may be limitations in generalising the results beyond similar populations with similar models of service provision.
Hepatitis B (HB) is a major disease and a serious global public health problem. According to the latest WHO figures, about 2 billion people worldwide have been infected with the hepatitis B virus (HBV). Rates of new infection and acute disease are highest among adults, but chronic infection is more likely to occur in persons infected as infants or young children, leading to cirrhosis and hepatocellular carcinoma in later life. More than 350 million persons are currently reported to have chronic infection globally1,2. These chronically infected people are at high risk of death from cirrhosis and liver cancer; the virus kills about 1 million persons each year. For a newborn infant whose mother is positive for both HB surface antigen (HBsAg) and HB e antigen (HBeAg), the risk of chronic HBV infection is 70%-90% by the age of 6 months in the absence of post-exposure immunoprophylaxis3.
HB vaccination is the only effective measure to prevent HBV infection and its consequences. Since its introduction in 1982, recommendations for HB vaccination have evolved into a comprehensive strategy to eliminate HBV transmission globally4. In the United States during 1990-2004, the overall incidence of reported acute HB declined by 75%, from 8.5 to 2.1 per 100,000 population. The most dramatic decline occurred in children and adolescents: incidence among children aged <12 years and adolescents aged 12-19 years declined by 94%, from 1.1 to 0.36 and from 6.1 to 2.8 per 100,000 population, respectively2,5.
Populations of countries with intermediate and high endemicity are at high risk of acquiring HB infection. Pakistan lies in an intermediate endemic region, with a prevalence of 3-4% in the general population6. WHO has included the HB vaccine in the Expanded Programme on Immunisation (EPI) globally since 1997, and Pakistan added the HB vaccination to the EPI in 2004. Primary vaccination consists of 3 intramuscular doses of the HB vaccine. Studies show seroprotection rates of 95% with the standard immunisation schedule at 0, 1 and 6 months using a single-antigen HB vaccine among infants and children7,8. Almost similar results have been reported with schedules giving HB injections (either single-antigen or combination vaccines) at 6, 10 and 14 weeks along with the other vaccines in the EPI schedule. However, various factors, such as age, gender, and genetic and socio-environmental influences, are likely to affect seroprotection rates9. There is therefore a need to establish the actual seroprotection rates in our population, where different vaccines (EPI-procured and privately procured) are used in different schedules. This study was conducted to determine the real status of seroprotection against HB in our children. The results will help in future policy-making, highlight our shortcomings, allow comparison of our programme with international standards and, moreover, augment future confidence in vaccination programmes.
Materials and Methods
This study was conducted at the vaccination centres and paediatric OPDs (outpatient departments) of CMH and MH, Rawalpindi, Pakistan. Children reporting for measles vaccination at the vaccination centres at 9 months of age were included; their vaccination cards were examined to ensure that they had received 3 doses of HB vaccine according to the EPI schedule, duly endorsed in their cards. These were mainly children of soldiers, but also included some civilians invited for EPI vaccination at the MH vaccination centre. Children of officers were similarly included from the CMH vaccination centre, with their vaccination record verified from their vaccination cards. Some civilians who had received private HB vaccination were included from the paediatric OPDs. Some children older than 9 months and younger than 2 years who presented to the paediatric OPDs at CMH and MH with non-febrile minor illnesses were also included, and their vaccination status was confirmed from their vaccination cards.
Inclusion Criteria
1) Male and female children >9 months and <2 years of age.
2) Children who had received 3 doses of HB vaccine according to the EPI schedule at 6, 10 and 14 weeks.
3) Children who had a complete record of vaccination, duly endorsed in their vaccination cards.
4) Children who did not have a history of any chronic illness.
Exclusion Criteria
1) Children who did not have proper vaccination records endorsed in their vaccination cards.
2) An interval of <1 month between the last dose of HB vaccine and sampling.
3) Children suffering from an acute illness at the time of sampling.
4) Children suffering from chronic illness or on immunosuppressive drugs.
Informed consent for blood sample collection was obtained from the parents or guardians. The study and the informed consent form were approved by the institutional ethical review board. Participants were informed of the results of their anti-HBs antibody screening. After proper antiseptic measures, blood samples (3.5 ml) were obtained by venepuncture using auto-disabled syringes. Collected samples were placed in vacutainers labelled with the identification number and name of the child, and were immediately transported to the Biochemistry Department of Army Medical College. Samples were kept upright for half an hour and then centrifuged for 10 minutes. The supernatant serum was separated and stored at -20 °C in 1.5 ml Eppendorf tubes until testing. Samples were tested by ELISA (DiaSorin S.p.A., Italy) for the detection of anti-HBs antibodies according to the manufacturer's instructions. The diagnostic specificity of this kit is 98.21% (95% confidence interval 97.07-99.00%) and its diagnostic sensitivity is 99.11% (95% confidence interval 98.18-99.64%), as claimed by the manufacturer. Anti-HBs antibody enumeration was performed after all 3 doses of vaccination (at least 1 month after the last dose).
As per WHO standards, an anti-HBs antibody titre of >10 IU/L was taken as protective: samples with titres <10 IU/L were considered non-protected, and those with titres >10 IU/L were taken as seroprotected against HB infection. All relevant information was entered in a predesigned data sheet for use at the time of analysis. Items entered included age, gender, place of vaccination, type of vaccine (privately or government procured), number of doses and entitlement status (dependant of military personnel or civilian). The study was conducted from 1st January 2010 to 31st December 2010.
Statistical Analysis
Data were analysed using SPSS version 15. Descriptive statistics were used to describe the data: mean and standard deviation (SD) for quantitative variables, and frequencies and percentages for qualitative variables. Quantitative variables were compared using the independent-samples t-test and qualitative variables using the chi-square test. A P value <0.05 was considered significant.
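As an illustration of the group comparisons reported below, any of the Table 1 contrasts can be re-derived from its 2x2 counts. The sketch below uses Python with scipy rather than SPSS (an assumption for illustration only), taking the vaccine-type comparison as the example; because the private-source group is small, Fisher's exact test is shown as a check alongside the Pearson chi-square.

```python
# Illustrative recalculation of one Table 1 comparison: vaccine source
# (government vs private) against seroprotection status (anti-HBs <10
# vs >10 IU/L), using the counts reported in the Results.
from scipy.stats import chi2_contingency, fisher_exact

table = [[61, 123],   # government-procured: <10 IU/L, >10 IU/L
         [0,  10]]    # privately procured:  <10 IU/L, >10 IU/L

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"Pearson chi2 = {chi2:.2f}, p = {p:.3f}")   # ~4.83, p ~ 0.028

# The smallest expected count here is ~3.1, below the usual threshold
# of 5, so Fisher's exact test is a reasonable alternative.
_, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.3f}")
```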
Results
One hundred and ninety-four children (mean age 13.7 months), who had received HB vaccination according to the EPI schedule, were tested for anti-HBs titres. Of these, 61 (31.4%) had anti-HBs titres below 10 IU/L (non-protective level) while 133 (68.6%) had titres above 10 IU/L (protective level), as shown in Figure 1. The geometric mean titre (GMT) of anti-HBs among individuals with protective levels (>10 IU/L) was 85.81 IU/L.
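For reference, the GMT is the antilog of the mean of the log-transformed titres. A minimal sketch of the calculation follows, using hypothetical placeholder values rather than the study data:

```python
# Geometric mean titre (GMT): exponentiate the mean of the log titres.
# The titre values below are hypothetical placeholders, not study data.
from math import exp, log

def geometric_mean_titre(titres_iu_per_l):
    log_titres = [log(t) for t in titres_iu_per_l]    # natural logs
    return exp(sum(log_titres) / len(log_titres))     # antilog of the mean

seroprotected = [12.0, 45.0, 85.0, 160.0, 420.0]      # anti-HBs titres, IU/L
print(f"GMT = {geometric_mean_titre(seroprotected):.2f} IU/L")
```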
Figure 1
Figure 2
Figure 2 shows the distribution of titres: anti-HBs titres between 10 and 100 IU/L were found in 75 (50.4%) children, 26 (19.5%) had titres between 100 and 200 IU/L, 20 (14%) had titres between 200 and 500 IU/L, 10 (7%) had titres between 500 and 1000 IU/L, and only 2 (1.5%) children had anti-HBs titres >1000 IU/L.
One hundred and eighty-four children received vaccine supplied by government sources (Quinvaxem, Novartis), of whom 61 (33.2%) had anti-HBs titres <10 IU/L (non-protective) and 123 (66.8%) had titres >10 IU/L (protective level). Only 10 children had received vaccine obtained from a private source (Infanrix Hexa, GSK), and all 10 (100%) had anti-HBs titres >10 IU/L (protective level). The difference between the two groups was significant (P = 0.028).
One hundred and thirty-two children received vaccination at army health facilities (CMH and MH), of whom 36 (27.3%) had anti-HBs titres <10 IU/L and 96 (72.7%) had titres >10 IU/L. Sixty-two children were vaccinated at civilian health facilities (health centres or vaccination teams visiting homes); of these, 25 (40.3%) had titres <10 IU/L and 37 (59.7%) had titres >10 IU/L. This difference was not significant (P = 0.068). Gender analysis revealed that 129 (66.5%) of the children in the study group were male; of them, 34 (26.4%) had anti-HBs titres <10 IU/L and 95 (73.6%) had titres >10 IU/L. Sixty-five (33.5%) were female; of them, 27 (41.5%) had titres <10 IU/L and 38 (58.5%) had titres >10 IU/L. The difference between males and females was significant (P = 0.032).
One hundred and twenty-two (62.9%) children were less than 1 year of age; of them, 37 (30.3%) had anti-HBs titres <10 IU/L and 85 (69.7%) had titres >10 IU/L. Seventy-two (37.1%) children were between 1 and 2 years of age; of them, 24 (33.3%) had titres <10 IU/L and 48 (66.7%) had titres >10 IU/L. The difference between the two age groups was not significant (P = 0.663), as shown in Table 1.
Patient characteristics | Anti-HBs titres <10 IU/L (n = 61) | Anti-HBs titres >10 IU/L (n = 133) | P value
Age group: <1 year (n = 122) | 37 (30.3%) | 85 (69.7%) | 0.663 (NS)
Age group: >1 year (n = 72) | 24 (33.3%) | 48 (66.7%) |
Gender: male (n = 129) | 34 (26.4%) | 95 (73.6%) | 0.032*
Gender: female (n = 65) | 27 (41.5%) | 38 (58.5%) |
Hospital: army (n = 132) | 36 (27.3%) | 96 (72.7%) | 0.068 (NS)
Hospital: civilian (n = 62) | 25 (40.3%) | 37 (59.7%) |
Vaccine type: government (n = 184) | 61 (33.2%) | 123 (66.8%) | 0.028*
Vaccine type: private (n = 10) | 0 (0%) | 10 (100%) |
Table 1 (NS = not significant; * = significant)
Discussion
HB is a global health problem with variable prevalence in different parts of the world1. Studies carried out in different parts of Pakistan and in different population groups have shown diverse figures for the prevalence of HB; however, a figure of 3-4% is accepted by general consensus, making Pakistan an area of intermediate endemicity6. When these figures are extrapolated to our population, it is estimated that Pakistan hosts about seven million HB carriers, roughly 2% of the 350 million carriers worldwide10,11.
Age at the time of infection plays the most important role in determining whether HBV infection becomes acute or chronic. HBV infection acquired in infancy carries a very high risk of chronic HBV-related liver disease in later life12. HB is a preventable disease, and vaccination at birth and during infancy could eradicate the disease globally if the vaccination strategy were effectively implemented13. The HB vaccine can be claimed as the first anti-cancer vaccine, since it prevents hepatocellular carcinoma in later life.
In Pakistan, the HB vaccine was included in the EPI in 2004, given along with the DPT (diphtheria, pertussis, tetanus) vaccine at 6, 10 and 14 weeks of age. The vaccine is provided to health facilities through the government health infrastructure; private HB vaccines, supplied as single-antigen or combination vaccines, are also available on the market. The efficacy of these recombinant vaccines is claimed to be more than 95% among children and 90% among normal healthy adults14. The immunity conferred by HB vaccination is measured directly by the development of anti-HBs antibodies above 10 IU/L, which is considered a protective level15. However, it is estimated that 5-15% of vaccine recipients may not develop this protective level and remain non-responders for the reasons discussed below16. Published studies of antibody development in relation to various factors show highly varied results for immunogenicity and seroprotection. Multiple factors, such as dose, dosing schedule, sex, vaccine storage, site and route of administration, obesity, genetic factors, diabetes mellitus and immunosuppression, affect the antibody response to the HB vaccine17.
Although the HB vaccine was included in Pakistan's EPI in 2004, to our knowledge no published national-level data on seroconversion and seroprotection among recipients of this programme have been available until now. Our study revealed that of 194 children, only 133 (68.6%) had anti-HBs titres in the protective range (>10 IU/L), while 61 (31.4%) did not develop seroprotection. These results are low compared with other international studies. A study from Bangladesh among EPI-vaccinated children showed a seroprotection rate of 92.2%13, while studies from Brazil18 and South Africa19 have reported seroprotection rates of 90.0% and 86.6%, respectively. Studies from Pakistan carried out in adults also show seroprotection rates (anti-HBs titres >10 IU/L) of more than 95% in Karachi University students14 and 86% in healthcare workers of Aga Khan University Hospital20. However, in these studies the dosing schedule was 0, 1 and 6 months and the participants were adults; their results are consistent with international reports.
The gravity of low seroprotection after HB vaccination is compounded when these figures are set against our low overall vaccination coverage rates of 37.6% and 45%, as shown in studies from Peshawar and Karachi, respectively21,22. A significantly high percentage of individuals therefore remain vulnerable to HBV infection even after receiving the HB vaccine through an extensive national EPI programme. As long as a large population remains exposed to the risk of HBV infection, national and global eradication of HBV will remain out of reach. Failure of seroprotection after HB vaccination in the EPI may also create a false sense of protection among vaccine recipients.
Dosing schedule is an important factor in the development of an antibody response and in the titre levels achieved. According to the Advisory Committee on Immunization Practices (ACIP) of the United States, there should be a minimum gap of 8 weeks between the second and third doses, and at least 16 weeks between the first and third doses, of the HB vaccination23. To minimise visits and improve compliance, the dosing schedule has been compressed in the EPI to 6, 10 and 14 weeks24. Although some studies have shown this schedule to be effective, the GMT of anti-HBs antibodies achieved was lower than that achieved with the standard WHO schedule25. This may be one explanation for the lower seroprotection rates in our study. The GMT achieved among the children in our study with protective antibody levels was 85.81 IU/L, which is lower than in most other studies and supports the observation that this schedule produces lower GMTs than the standard WHO schedule. Waning immunity may then result in breakthrough HB infection in vaccinated individuals in later life, although the immune memory hypothesis supports continued protection despite low anti-HBs titres26. Further studies are required to dispel this risk.
Another shortcoming of this schedule is that it omits the dose at birth (the '0 dose'). It has been reported that the 0 dose of the HB vaccine alone is 70%-95% effective as post-exposure prophylaxis in preventing perinatal HBV transmission, even without HB immunoglobulin27. This may also have contributed to the lower seroprotection rates in our study, as we did not perform HBsAg and other relevant tests to rule out existing HBV infection in these children. Moreover, pregnant women are by and large not routinely screened for HBV infection in the public sector in Pakistan, except in a few big cities such as Islamabad, Lahore and Karachi. The HB status of pregnant mothers is therefore unknown, and the risk of transmission to babies remains high. Studies have reported widely varying figures for HB status in pregnant women: a study from Karachi reported that 1.57% of pregnant women were positive for HBsAg, while a study from Rahim Yar Khan reported figures of up to 20%28,29. A study by Waheed et al on mother-to-infant transmission of HBV reported the risk to be up to 90%30. All of these studies underline the importance of the birth dose of the HB vaccination and reinforce the conclusion that control and eradication of HB is not possible with the present EPI schedule. Jain, from India, has reported an alternative schedule of 0, 6 weeks and 9 months to be comparable to the standard WHO schedule of 0, 1 and 6 months in the seroprotection and GMT levels achieved31. This schedule can be synchronised with the EPI schedule, incorporating the birth dose while avoiding extra visits, and a similar schedule could be incorporated into our national EPI.
In our study, seroprotection rates were significantly lower in female children. This finding differs from other studies, which report lower seroprotection rates in males32; although the number of female children was small, we have no plausible explanation for this observation. The site of inoculation of the HB vaccine is also very important for an adequate immune response: vaccines given in the buttocks or intradermally produce lower antibody titres than intramuscular injections into the outer aspect of the thigh in children, owing to poor distribution and absorption of the vaccine within the host. Administration of vaccines into the buttocks, which vaccinators find convenient for intramuscular injection in children, is a common observation. This may be another reason for the low seroprotection rates in our study, as, apart from a small number of private cases, the children were picked at random from those vaccinated at public health facilities.
The effectiveness of the vaccine also depends on the source of procurement and proper maintenance of the cold chain. In this study, 100% seroprotection was observed in children who received HB vaccine procured from a private source. Although the number of private cases was small, the source of the vaccine and the integrity of the cold chain warrant attention. Proper training of EPI teams in temperature maintenance and injection techniques, together with motivation and monitoring, could improve outcomes substantially.
The findings of this study differ from the published literature because this is a cross-sectional observational study reporting the actual seroprotection rates achieved after HB vaccination under the EPI schedule, whereas most other studies report results obtained after controlling for influencing factors such as type of vaccine, dose, schedule, route of administration, training and monitoring of local EPI teams, and the health status of vaccine recipients. This is therefore an attempt to examine a practical scenario and evaluate outcomes that can help frame future guidelines towards the goal of controlling and eradicating HB infection. Further large-scale studies are required to determine the effect of HB vaccination at a national level.
Conclusion
The HB vaccination programme has decreased the global burden of HBV infection, but the decrease has not been uniform across the world's population: figures show a marked decline in the developed world, while statistics from the developing world show little change. Implementation of the programme is not uniformly effective in all countries, so reservoirs of infection and sources of continued HBV transmission persist. HBV infection is moderately endemic in Pakistan, and the HB vaccine has been included in the national EPI since 2004. The present study shows a seroprotection rate of only 68.6% in vaccine recipients, which is low compared with other studies; 31.4% of vaccine recipients remain unprotected even after vaccination. Moreover, the GMT achieved in seroprotected vaccine recipients is also low (85.81 IU/L). There can be multiple reasons for these results, such as the type of vaccine used, maintenance of the cold chain, route and site of administration, training and monitoring of EPI teams, and the dosing schedule; in present practice, the very important birth dose is also missing. These observations warrant a review of the situation and appropriate measures to rectify the above factors, so that the desired seroprotection rates after HB vaccination in the EPI can be achieved among vaccine recipients.